CN117710686A - Scalp hair detection method, system and equipment

Info

Publication number: CN117710686A (published 2024-03-15; legal status: Pending)
Application number: CN202211023975.7A (filed 2022-08-24)
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 蔡权 (Cai Quan), 杨建辉 (Yang Jianhui), 卢伟 (Lu Wei), 严靖宇 (Yan Jingyu)
Assignee (current and original): Zhangzhou Solex Smart Home Co Ltd
Related application: PCT/CN2023/114216, published as WO2024041524A1
Prior art keywords: scalp hair, scalp, hair, network model, layer


Classifications

    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/048 Activation functions
    • G06N3/08 Learning methods
    • G06T7/00 Image analysis
    • G06T7/0012 Biomedical image inspection
    • G06V10/764 Image or video recognition or understanding using classification, e.g. of video objects
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion of extracted features
    • G06V10/82 Image or video recognition or understanding using neural networks
    • G06T2207/20076 Probabilistic image processing
    • G06T2207/20081 Training; learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30004 Biomedical image processing


Abstract

The invention discloses a scalp hair detection method, system and device. The detection method comprises: acquiring different scalp hair images; labeling and classifying the scalp hair images according to scalp hair attributes to form a classification data set based on the scalp hair attributes; inputting the labeled classification data set images into an improved MobileNet deep network model for training to obtain a trained deep network model based on scalp hair attributes; and inputting a scalp hair image to be detected into the trained deep network model to obtain a detection result of the scalp hair attribute, the detection result comprising a category and its corresponding confidence. According to the invention, the scalp hair attributes in a scalp hair image are detected by the improved MobileNet deep network model, which finally outputs the category and confidence corresponding to each scalp hair attribute, improving inference speed and making edge-side deployment more convenient.

Description

Scalp hair detection method, system and equipment
Technical Field
The invention relates to the technical field of scalp and hair detection, and in particular to a scalp hair detection method, system and device.
Background
The scalp is one of the sensitive skin areas of the human body. Owing to living habits, work pressure and other causes, more and more people are troubled by scalp and hair problems such as damaged hair, greasy hair, a thickened scalp stratum corneum, visible scalp capillaries (red blood streaks), and excess subcutaneous grease around hair follicles. Many chain hairdressing institutions and hair-management centers currently photograph the scalp at a single point and assess the subject's scalp and hair state by manual interpretation. This approach is easily influenced by the interpreter's subjectivity, cannot yield objective and accurate results, and leaves the subject unable to know his or her scalp and hair state accurately. How to detect the scalp and hair state objectively and accurately is therefore a problem to be solved.
Patent application No. 202010228550.4 discloses a scalp detection method based on deep learning, comprising: step S1: collecting scalp image data; step S2: labeling and classifying the scalp images according to scalp attributes to form classification data sets of the scalp attributes; step S3: pre-training a SqueezeNet model with the ImageNet image database to obtain a pre-trained SqueezeNet model; step S4: modifying the pre-trained SqueezeNet model to fit a regression task, obtaining an improved SqueezeNet model; step S5: formulating scalp detection accuracy judgment rules and retraining the improved SqueezeNet model with the classification data sets of step S2 to obtain scalp detection models for the various scalp attributes; and step S6: classifying scalp images to be detected according to scalp attributes and inputting them into the corresponding scalp detection model to obtain a prediction result. Compared with the SqueezeNet model, MobileNet has fewer parameters and a higher inference speed, making edge-side deployment more convenient.
Disclosure of Invention
The main object of the invention is to provide a scalp hair detection method, system and device that overcome the deficiencies of the prior art: scalp hair attributes in scalp hair images are detected by an improved MobileNet deep network model, which finally outputs the category and confidence corresponding to each scalp hair attribute, improving inference speed and facilitating edge-side deployment.
The invention adopts the following technical solutions:
in one aspect, a scalp hair detection method comprises:
acquiring different scalp hair images;
labeling and classifying the scalp hair images according to scalp hair attributes to form a classification data set based on the scalp hair attributes;
inputting the labeled classification data set images into an improved MobileNet deep network model for training to obtain a trained deep network model based on scalp hair attributes;
inputting a scalp hair image to be detected into the trained deep network model to obtain a detection result of the scalp hair attribute; the detection result comprises a category and its corresponding confidence.
Preferably, the improved MobileNet deep network model comprises, in order: a first convolution layer, a plurality of block layers, a pooling layer, a second convolution layer and a third convolution layer; each block layer comprises, in order: a fourth convolution layer, a depthwise convolution layer and a fifth convolution layer; and each block layer is followed by a skip connection layer linked to the last block layer.
Preferably, the improved MobileNet deep network model further comprises a plurality of first activation function layers, each convolution layer being followed by one first activation function layer so as to apply a nonlinear operation to the scalp hair image features extracted by that convolution layer.
Preferably, the first activation function layer includes a ReLU layer.
Preferably, the end of the improved MobileNet deep network model comprises a fully connected layer that outputs a 1×1 feature map with 3 channels, and a second activation function layer connected to the fully connected layer activates and outputs the confidence of each category.
Preferably, the second activation function layer comprises a Softmax layer.
Preferably, the loss calculation function of the improved MobileNet deep network model is as follows:

$$H(y, p) = -\frac{1}{N} \sum_{i=1}^{N} \sum_{c=1}^{M} y_{ic} \log(p_{ic})$$

wherein H(y, p) represents the model loss; y represents the true labels of the test-set pictures; p represents the predicted labels output by the model; N represents the number of pictures in the test set; M represents the number of classes; c represents the current output class; y_{ic} represents the true value of the c-th class of the i-th sample; and p_{ic} represents the predicted value output by the model for the c-th class of the i-th sample.
Preferably, the method further comprises an accuracy calculation function, as follows:

$$Precision = \frac{TP}{TP + FP}$$

wherein Precision represents the accuracy of the current round's weights on the test set; TP represents the number of correct judgments; and FP represents the number of incorrect judgments.
Preferably, after the detection result of the scalp hair attribute is obtained, the method further comprises: inputting the category and the confidence into a constructed score mapping function to obtain the score corresponding to the category and the confidence.
Preferably, the score mapping function is specifically as follows:

$$sigmoid(x) = \frac{1}{1 + e^{-x}}$$

wherein x is the confidence output in the detection result; cls is the category output in the detection result; sigmoid(x) represents the mapping intermediate function; f(x, cls) represents the mapping function; and f represents the score corresponding to the confidence.
Preferably, the scalp hair attributes include at least one of hair thickness, degree of hair damage, hair grease, scalp stratum corneum, scalp capillaries (red blood streaks) and subcutaneous hair follicle grease; each scalp hair attribute corresponds to one improved MobileNet deep network model.
In another aspect, a scalp hair detection system comprises:
the image acquisition module is used for acquiring different scalp hair images;
the classification data set labeling module is used for labeling and classifying scalp hair images according to scalp hair attributes to form a classification data set based on the scalp hair attributes;
the deep network model training module is used for inputting the marked classified data set image into the improved MobileNet deep network model for training to obtain a trained deep network model based on scalp hair attributes;
the detection result output module is used for inputting a scalp hair image to be detected into the trained deep network model to obtain a detection result corresponding to the scalp hair attribute; the detection result comprises a category and its corresponding confidence.
In yet another aspect, a scalp hair detection apparatus comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the scalp hair detection method when executing the program.
Compared with the prior art, the invention has the following beneficial effects:
(1) The invention detects the scalp hair attributes in a scalp hair image through an improved MobileNet deep network model and finally outputs the category and confidence corresponding to each scalp hair attribute, which improves inference speed and makes edge-side deployment more convenient;
(2) The improved MobileNet deep network model reduces the number of block layers of the original MobileNet: because scalp hair features are distinct, information extraction does not require many block layers, so several middle block layers are removed, reducing computation and increasing inference speed;
(3) The improved MobileNet deep network model adds skip connection layers to enhance feature fusion; specifically, a skip connection layer is added after each block layer and linked to the last block layer, and adaptive downsampling then enriches both local features (fine features such as scalp capillaries) and global features (large-area features such as grease), facilitating the subsequent extraction or classification of features at different scales;
(4) The improved MobileNet deep network model adds a 1×1 convolution layer at its end, which makes the model focus more on classification information and further accelerates convergence;
(5) The invention inputs the category and confidence corresponding to a scalp hair attribute into a constructed score mapping function and maps them to a score, so that the user can intuitively perceive the scalp and hair state from the score.
The foregoing is only an overview of the technical solutions of the invention. In order to make the technical means of the invention clearer and implementable in accordance with this specification, and to make the above and other objects, features and advantages of the invention more readily apparent, specific embodiments of the invention are set forth below. The above and other objects, advantages and features of the invention will become more apparent to those skilled in the art from the following detailed description of specific embodiments taken in conjunction with the accompanying drawings.
Drawings
FIG. 1 is a flowchart of a scalp and hair detection method according to an embodiment of the present invention;
FIG. 2 is an exemplary diagram of a prior art MobileNet deep network model;
FIG. 3 is an exemplary diagram of an improved MobileNet deep network model in accordance with an embodiment of the present invention;
FIG. 4 is an exemplary diagram of a block layer according to an embodiment of the present invention;
FIG. 5 is a hierarchical table of a prior art MobileNet deep network model;
FIG. 6 is a hierarchical table of an improved MobileNet deep network model in accordance with an embodiment of the present invention;
FIG. 7 is a graph comparing model losses of a prior art MobileNet and an improved MobileNet of an embodiment of the present invention;
FIG. 8 is a graph comparing model accuracy of a prior art MobileNet with a modified MobileNet of an embodiment of the present invention;
FIG. 9 is a detailed flowchart of detecting the scalp capillary (red blood streak) attribute according to an embodiment of the present invention;
FIG. 10 is a block diagram of a scalp and hair detection system according to an embodiment of the present invention;
FIG. 11 is a block diagram of a scalp hair detection apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the invention will be described below clearly and completely with reference to the accompanying drawings. The described embodiments are plainly only some, not all, of the embodiments of the invention; all other embodiments obtained by persons of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of the invention.
In the description of the present invention, it should be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the description of the present invention, it should be noted that the terms "first," "second," and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the invention, it should be noted that, unless explicitly specified and limited otherwise, the terms "mounted", "provided", "engaged/connected", "connected" and the like are to be construed broadly: for example, "connected" may be a fixed, detachable or integral connection; a mechanical or electrical connection; a direct connection or an indirect connection via an intermediary; or a communication between two elements. For a person of ordinary skill in the art, the specific meaning of these terms in this disclosure can be understood according to the specific case.
In the description of the invention, unless explicitly stated and defined otherwise, the step identifiers S101, S102, S103, etc. are used for convenience of description only; they do not denote an execution order, and the actual execution order may be adjusted.
Referring to fig. 1, a scalp hair detection method of the present invention includes:
s101, acquiring different scalp hair images;
s102, labeling and classifying scalp hair images according to scalp hair attributes to form a classification data set based on the scalp hair attributes;
s103, inputting the marked classified data set image into an improved MobileNet depth network model for training to obtain a trained depth network model based on scalp and hair attributes;
s104, inputting the scalp hair image to be detected into the trained deep network model to obtain a detection result of the corresponding scalp hair attribute; the detection result comprises a confidence coefficient corresponding to the category.
Specifically, scalp hair images under different light sources, from different angles, and from subjects of different ages, sexes and the like are acquired and selected, so that when the improved MobileNet deep network model (hereinafter the MobileHairNet deep network model) is subsequently trained, it can detect the category and confidence of scalp hair images under different conditions, enlarging the scope of application.
The scalp hair images are then labeled and classified according to scalp hair attributes. Specifically, doctors may be engaged to do the labeling, and each scalp hair attribute may be divided into several categories, for example three: mild, moderate and severe. Professional labeling helps the subsequent MobileHairNet deep network model continuously update its network structure feature parameters during training and adjust them to an optimal state.
In the invention, the scalp hair attributes include at least one of hair thickness, degree of hair damage, hair grease, scalp stratum corneum, scalp capillaries (red blood streaks) and subcutaneous hair follicle grease; each scalp hair attribute corresponds to one MobileHairNet deep network model. The MobileHairNet deep network models corresponding to the different scalp hair attributes share the same structure, but their network structure feature parameters may differ.
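As a minimal illustration of this one-model-per-attribute arrangement, consider the sketch below; it is a sketch only, in which the attribute names, the build_model factory and the use of a stock MobileNetV2 as a stand-in for the MobileHairNet structure are all assumptions rather than details given by the invention:

```python
# One model instance per scalp hair attribute: identical structure,
# separately trained parameters. All names and paths here are illustrative.
import torch
from torchvision import models

def build_model(num_classes: int = 3) -> torch.nn.Module:
    # Stand-in for the MobileHairNet structure; a stock MobileNetV2 is used
    # here only so that the sketch runs as written.
    return models.mobilenet_v2(num_classes=num_classes)

ATTRIBUTES = ["hair_thickness", "hair_damage", "hair_grease",
              "scalp_cuticle", "scalp_capillaries", "follicle_grease"]

detectors = {attr: build_model() for attr in ATTRIBUTES}
# e.g. detectors["scalp_capillaries"].load_state_dict(torch.load("scalp_capillaries.pth"))
```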
Because the features of the different scalp hair attributes differ, a variable-magnification lens may be used when capturing images of the different attributes. For example, 50×, 100× and 200× optical lenses are used to magnify the hair and scalp for observing their characteristics: the 50× lens makes scalp capillaries and the like easier to observe; the 100× lens makes the scalp stratum corneum, scalp grease, hair follicle hair loss and the like easier to observe; and the 200× lens is used to observe hair thickness, hair damage and the like.
In addition, this embodiment identifies the features of scalp hair images by a tri-spectral imaging technique, so the images may be acquired under different light sources. Features such as scalp capillaries and subcutaneous hair follicle grease are difficult to distinguish with the naked eye under conventional white light. With the assistance of polarized light, the specular reflection of natural light can be eliminated, making the capillaries beneath the skin surface easier to observe. Under a UV light source with a wavelength between 280nm and 400nm, subcutaneous hair follicle grease reflects readily and appears as a bright red glow. Specifically, which lens magnification and which light source are best suited to extracting which scalp hair attribute can be determined experimentally, and images of that attribute are then acquired under the corresponding magnification and light source to serve as training images.
The scalp hair images may be collected on the device that performs the scalp hair detection method, or collected on another device and then sent to that device; this is set as needed and is not limited by this embodiment.
In this embodiment, the MobileHairNet deep network model comprises, in order: a first convolution layer, a plurality of block layers, a pooling layer, a second convolution layer and a third convolution layer. Each block layer comprises, in order: a fourth convolution layer, a depthwise convolution layer and a fifth convolution layer. Each block layer is followed by a skip connection layer linked to the last block layer.
Further, the MobileHairNet deep network model also comprises a plurality of first activation function layers, each convolution layer being followed by one first activation function layer so as to apply a nonlinear operation to the scalp hair image features extracted by that convolution layer.
The first to fifth convolution layers here are conv layers that do not include an activation function, and the depthwise convolution layer is a dwconv layer that does not include an activation function. Each of the first to fifth convolution layers is followed by a first activation function layer, and the dwconv layer is likewise followed by one. In other embodiments, a conv layer together with its first activation function layer may also be referred to collectively as a convolution layer, and the dwconv layer together with its first activation function layer as a depthwise convolution layer; this embodiment is not particularly limited in this respect.
Specifically, the first activation function layer includes a ReLU layer.
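For illustration, one such block layer could be sketched in PyTorch as follows; this is a minimal sketch, and the class name, channel sizes and stride are assumptions rather than values specified by the invention:

```python
# Minimal PyTorch sketch of one block layer: a 1x1 pointwise convolution
# (fourth convolution layer), a depthwise convolution (dwconv), and another
# 1x1 pointwise convolution (fifth convolution layer), each followed by a
# first activation function layer (ReLU).
import torch
import torch.nn as nn

class BlockLayer(nn.Module):
    def __init__(self, in_ch: int, hidden_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, hidden_ch, kernel_size=1, bias=False),    # fourth convolution layer
            nn.ReLU(inplace=True),                                     # first activation function layer
            nn.Conv2d(hidden_ch, hidden_ch, kernel_size=3, stride=stride,
                      padding=1, groups=hidden_ch, bias=False),        # depthwise convolution layer (dwconv)
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden_ch, out_ch, kernel_size=1, bias=False),   # fifth convolution layer
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)
```

The output of each such block is what the skip connection layers described below carry to the last block layer.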
The end of the MobileHairNet deep network model also comprises a fully connected layer that outputs a 1×1 feature map with 3 channels, and a second activation function layer connected to the fully connected layer activates and outputs the confidence of each category.
The second activation function layer includes a Softmax layer.
The ReLU function is used to introduce nonlinear factors so that the MobileHairNet deep network model can fit nonlinear functions; non-salient feature regions are thereby suppressed, and attention is concentrated on the important features. The formula of the ReLU function is as follows:

$$ReLU(x) = \max(0, x)$$

wherein x is the value of each pixel point in the picture (i.e., of each element of the feature map).
The Softmax function is as follows:

$$Softmax(x_i) = \frac{e^{x_i}}{\sum_{j=1}^{n} e^{x_j}}$$

wherein x_i is the channel value output for the i-th category and n is the number of categories, for example three categories corresponding to mild, moderate and severe.
Finally, according to the output confidence of each category, the category with the maximum confidence is taken as the category of the current image. For example, if Softmax outputs [0.5, 0.2, 0.3], the probabilities mapped onto the three categories are 50% for mild, 20% for moderate and 30% for severe, and the highest-probability category is taken as the category of the current image. To verify whether the MobileHairNet deep network model has learned the desired features of interest, verification is performed with a test set. The loss of the model is calculated using the cross-entropy loss (Cross Entropy Loss), as follows:

$$H(y, p) = -\frac{1}{N} \sum_{i=1}^{N} \sum_{c=1}^{M} y_{ic} \log(p_{ic})$$

wherein H(y, p) represents the model loss; y represents the true labels of the test-set pictures; p represents the predicted labels output by the model; N represents the number of pictures in the test set; M represents the number of classes; c represents the current output class; y_{ic} represents the true value of the c-th class of the i-th sample; and p_{ic} represents the predicted value output by the model for the c-th class of the i-th sample.
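The category, confidence and loss described above can be illustrated with the following sketch; the logit and label values are examples only:

```python
# Softmax confidence, predicted category, and cross-entropy loss H(y, p)
# for one sample; the numbers are illustrative.
import torch
import torch.nn.functional as F

logits = torch.tensor([[1.2, 0.3, 0.7]])     # raw 3-channel output of the fully connected layer
probs = F.softmax(logits, dim=1)             # ~[0.50, 0.20, 0.30] -> mild / moderate / severe
confidence, category = probs.max(dim=1)      # the highest-confidence category is taken

target = torch.tensor([0])                   # true label of this test-set picture (mild)
loss = F.cross_entropy(logits, target)       # cross-entropy loss, averaged over N samples
print(category.item(), confidence.item(), loss.item())
```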
When the model has been trained on the training set data for several rounds and the losses on the training set and the test set keep decreasing, the model is converging. The accuracy of the current round's weights on the test set is then calculated with the accuracy function Precision, as follows:

$$Precision = \frac{TP}{TP + FP}$$

wherein Precision represents the accuracy of the current round's weights on the test set; TP represents the number of correct judgments; and FP represents the number of incorrect judgments.
Multiple rounds of training are performed and the optimal model is saved. At the start of training, a preset number of rounds can be set for training the network. A comprehensive evaluation is made from the accuracy Precision and the model loss (Cross Entropy Loss) obtained in each round, and the model with higher test-set accuracy and lower training-set loss is saved, so that the model has both high prediction accuracy and strong feature-learning capability.
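A minimal training-loop sketch under stated assumptions follows: a PyTorch model, train_loader and test_loader are assumed to already exist, and the epoch count, optimizer and exact model-selection rule are illustrative choices, not values specified by the invention.

```python
# Multi-round training that evaluates Precision = TP / (TP + FP) on the test
# set each round and keeps the weights with the best precision (ties broken
# by lower loss). `model`, `train_loader` and `test_loader` are assumed given.
import torch

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
best_precision, best_loss = 0.0, float("inf")

for epoch in range(50):                              # preset number of rounds
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

    model.eval()
    tp, fp, total_loss = 0, 0, 0.0
    with torch.no_grad():
        for images, labels in test_loader:
            logits = model(images)
            total_loss += criterion(logits, labels).item()
            preds = logits.argmax(dim=1)
            tp += (preds == labels).sum().item()     # correct judgments (TP)
            fp += (preds != labels).sum().item()     # incorrect judgments (FP)
    precision = tp / (tp + fp)

    if precision > best_precision or (precision == best_precision and total_loss < best_loss):
        best_precision, best_loss = precision, total_loss
        torch.save(model.state_dict(), "best_mobilehairnet.pth")  # save the optimal model
```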
Specifically, FIG. 2 is an exemplary diagram of a prior art MobileNet deep network model, FIG. 3 is an exemplary diagram of the MobileHairNet deep network model of an embodiment of the invention, and FIG. 4 is an exemplary diagram of a block layer of an embodiment of the invention. FIG. 5 is a hierarchical table of the prior art MobileNet deep network model, and FIG. 6 is a hierarchical table of the MobileHairNet deep network model of an embodiment of the invention. As can be seen from FIGS. 2 to 6, compared with the prior art MobileNet deep network model, the MobileHairNet deep network model of this embodiment has the following features and advantages:
(1) The number of MobileNet block layers is reduced. Because the scalp hair features are distinct, information extraction does not require many block layers; the MobileHairNet deep network model of this embodiment removes the middle block layers whose inputs are 14×14×96 and 7×7×160, reducing computation and accelerating inference.
(2) Skip connection layers are added to enhance feature fusion. A skip connection layer is added after each of several block layers and linked to the last block layer; adaptive downsampling (that is, scaling a feature map down by some ratio, or leaving it unscaled, before superimposing it on the last layer) then enriches both local and global features, facilitating the subsequent extraction or classification of features at different scales. Local features here are fine features such as scalp capillaries, and global features are large-area features such as grease; the effect is similar to learning pictures of the same content at different sizes, so that a similar picture at a different size is quickly recognized the next time it is seen.
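This fusion step can be sketched as follows. It is a sketch under assumptions: channel concatenation is chosen as the superposition because the 171 = 3+32+16+24+32+64 channel count given below suggests concatenating feature maps of those widths, and the function name is illustrative.

```python
# Adaptive downsampling of each skip branch to the spatial size of the last
# block layer, followed by channel-wise fusion of local and global features.
import torch
import torch.nn.functional as F

def fuse_skip_features(block_outputs):
    """block_outputs: feature maps [B, C_i, H_i, W_i] carried by the skip connection layers."""
    target_hw = block_outputs[-1].shape[2:]           # spatial size of the last block layer
    resized = [F.adaptive_avg_pool2d(f, target_hw)    # adaptive downsampling (or no scaling if sizes match)
               for f in block_outputs]
    return torch.cat(resized, dim=1)                  # fused features, channels = sum of C_i
```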
Experiments show that, on the same batch of data sets, the accuracy of the model with skip connection layers is 5.8% higher than that of the model without them. For the same number of training rounds, the average loss is 17% smaller than that of the model without skip connections, convergence is faster, and the optimal model is reached 25 epochs earlier. (Because the model overfits, accuracy does not improve in subsequent iteration epochs; the optimal number of iterations is determined by the minimum of the loss value or the maximum of the accuracy (ACC).) FIG. 7 compares the model loss of the prior art MobileNet and the improved MobileNet of this embodiment, and FIG. 8 compares their model accuracy.
(3) A 1×1 convolution layer is added at the end to accelerate convergence. Because the skip connection layers feed the last block layer, after the max-pooling layer, and combined with the previous optimization (reducing the number of MobileNet block layers), the number of channels after pooling falls from MobileNet's 1280 to 171 (3+32+16+24+32+64), further reducing computation; finally, a 1×1 convolution layer is added so that the model focuses more on classification information, further accelerating convergence.
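Under these assumptions, the classifier head can be sketched as follows; the 171-channel width and the layer order follow the description above, while realizing the fully connected layer as a 1×1 convolution and the width of the added layer are illustrative choices:

```python
# Classifier head after feature fusion: max pooling, the added 1x1
# convolution, and a 3-channel 1x1 output activated by Softmax.
import torch.nn as nn

head = nn.Sequential(
    nn.AdaptiveMaxPool2d(1),             # max-pooling layer -> 171 x 1 x 1 features
    nn.Conv2d(171, 171, kernel_size=1),  # added 1x1 convolution layer
    nn.ReLU(inplace=True),
    nn.Conv2d(171, 3, kernel_size=1),    # fully connected layer as a 1x1 conv, 3 output channels
    nn.Softmax(dim=1),                   # second activation function layer
)
```

Since the pooled feature map is 1×1, implementing the fully connected layer as a linear layer or as a 1×1 convolution is equivalent here.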
Under these three strategies, the MobileHairNet deep network model of the invention is only 2.4M in size with an inference speed of 5ms; by comparison, the SqueezeNet model is 4.8M with an inference speed of 35ms.
In this embodiment, after the detection result of the scalp hair attribute is obtained, the scalp hair detection method further comprises: inputting the category and the confidence into a constructed score mapping function to obtain the score corresponding to the category and the confidence.
The score mapping function is specifically as follows:

$$sigmoid(x) = \frac{1}{1 + e^{-x}}$$

wherein x is the confidence output in the detection result; cls is the category output in the detection result; sigmoid(x) represents the mapping intermediate function; f(x, cls) represents the mapping function; and f represents the score corresponding to the confidence.
In a specific implementation, after the category cls with the highest confidence and its confidence x are obtained, score mapping may be applied to that category and confidence alone, or to every category cls and its corresponding confidence x. The user can then intuitively perceive the state of the scalp and hair from the score: for example, according to the confidence, mild is mapped to between 40 and 60 points, moderate to between 60 and 80 points, and severe to between 80 and 100 points.
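For illustration, a hedged sketch of such a score mapping follows; the exact f(x, cls) of the embodiment is not reproduced, so this version simply uses sigmoid(x) to spread the confidence within the score band assumed for each category:

```python
# Map (confidence x, category cls) to a score: mild -> 40-60 points,
# moderate -> 60-80, severe -> 80-100, with sigmoid(x) as the intermediate
# mapping function. The interpolation rule is an assumption.
import math

SCORE_BANDS = {0: (40, 60), 1: (60, 80), 2: (80, 100)}  # mild, moderate, severe

def score(x: float, cls: int) -> float:
    low, high = SCORE_BANDS[cls]
    s = 1.0 / (1.0 + math.exp(-x))       # sigmoid(x), the mapping intermediate function
    return low + (high - low) * s        # score f corresponding to the confidence

print(score(0.5, 0))  # mild with confidence 0.5 -> about 52.4 points
```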
Referring to fig. 10, in accordance with another aspect of the present invention, a scalp hair detection system comprises:
an image acquisition module 1001 for acquiring different scalp hair images;
the classification data set labeling module 1002 is configured to label and classify scalp hair images according to scalp hair attributes, and form a classification data set based on scalp hair attributes;
the deep network model training module 1003 is configured to input the labeled classification data set images into the improved MobileNet deep network model for training to obtain a trained deep network model based on scalp hair attributes;
the detection result output module 1004 is configured to input a scalp hair image to be detected into the trained deep network model to obtain a detection result corresponding to the scalp hair attribute; the detection result comprises a category and its corresponding confidence.
In yet another aspect, referring to FIG. 11, a scalp hair detection apparatus 110 comprises a memory 1101, a processor 1102, and a computer program 1103 stored in the memory 1101 and executable on the processor 1102; the processor 1102 implements the scalp hair detection method when executing the program.
In an embodiment, the computer program 1103 may be divided into one or more modules/units, which are stored in the memory 1101 and executed by the processor 1102 to complete the present application. The one or more modules/units may be a series of instruction segments of the computer program 1103 capable of performing specific functions, used to describe its execution in the scalp hair detection apparatus 110.
The scalp hair detection apparatus 110 may be a dedicated scalp hair detector, or a computing device such as a mobile phone, a desktop computer, a notebook computer, a palmtop computer or a cloud server. The scalp hair detection apparatus 110 may include, but is not limited to, the processor 1102 and the memory 1101. It will be appreciated by those skilled in the art that FIG. 11 is merely an example of the scalp hair detection apparatus 110 and does not limit it: it may include more or fewer components than shown, combine certain components, or use different components; for example, the scalp hair detection apparatus 110 may also include input and output devices, network access devices, buses and the like.
The processor 1102 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
The memory 1101 may be an internal storage unit of the scalp hair detection apparatus 110, such as its hard disk or internal memory. The memory 1101 may also be an external storage device of the scalp hair detection apparatus 110, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the scalp hair detection apparatus 110. Further, the memory 1101 may include both the internal storage unit and an external storage device of the scalp hair detection apparatus 110. The memory 1101 is used to store the computer program 1103 and the other programs and data required by the terminal device, and may also be used to temporarily store data that has been or is to be output.
The above description covers only preferred embodiments of the invention and does not limit the scope of the invention. Any modification or improvement made by a person skilled in the art within the technical scope of this disclosure is covered by the protection scope of the invention.

Claims (13)

1. A scalp hair detection method comprising:
acquiring different scalp hair images;
labeling and classifying scalp hair images according to scalp hair attributes to form a classified data set based on the scalp hair attributes;
inputting the labeled classification data set images into an improved MobileNet deep network model for training to obtain a trained deep network model based on scalp hair attributes;
inputting a scalp hair image to be detected into the trained deep network model to obtain a detection result of the scalp hair attribute; the detection result comprises a category and its corresponding confidence.
2. The scalp hair detection method according to claim 1, wherein the improved MobileNet deep network model comprises, in order: a first convolution layer, a plurality of block layers, a pooling layer, a second convolution layer and a third convolution layer; each block layer comprises, in order: a fourth convolution layer, a depthwise convolution layer and a fifth convolution layer; and each block layer is followed by a skip connection layer linked to the last block layer.
3. The scalp hair detection method according to claim 2, wherein the improved MobileNet deep network model further comprises a plurality of first activation function layers, each convolution layer being followed by one first activation function layer so as to apply a nonlinear operation to the scalp hair image features extracted by that convolution layer.
4. A scalp hair detection method according to claim 3 wherein the first activation function layer comprises a ReLU layer.
5. The scalp hair detection method according to claim 2, wherein the end of the improved MobileNet deep network model comprises a fully connected layer that outputs a 1×1 feature map with 3 channels, and a second activation function layer connected to the fully connected layer activates and outputs the confidence of each category.
6. The scalp hair detection method according to claim 5 wherein the second activation function layer comprises a Softmax layer.
7. The scalp hair detection method according to claim 1, wherein the loss calculation function of the improved MobileNet deep network model is as follows:

$$H(y, p) = -\frac{1}{N} \sum_{i=1}^{N} \sum_{c=1}^{M} y_{ic} \log(p_{ic})$$

wherein H(y, p) represents the model loss; y represents the true labels of the test-set pictures; p represents the predicted labels output by the model; N represents the number of pictures in the test set; M represents the number of classes; c represents the current output class; y_{ic} represents the true value of the c-th class of the i-th sample; and p_{ic} represents the predicted value output by the model for the c-th class of the i-th sample.
8. The scalp hair detection method according to claim 1, further comprising an accuracy calculation function as follows:

$$Precision = \frac{TP}{TP + FP}$$

wherein Precision represents the accuracy of the current round's weights on the test set; TP represents the number of correct judgments; and FP represents the number of incorrect judgments.
9. The scalp hair detection method according to claim 1, wherein after the detection result of the corresponding scalp hair attribute is obtained, the method further comprises: inputting the category and the confidence into a constructed score mapping function to obtain the score corresponding to the category and the confidence.
10. The scalp hair detection method according to claim 9, wherein the score mapping function is specifically as follows:

$$sigmoid(x) = \frac{1}{1 + e^{-x}}$$

wherein x is the confidence output in the detection result; cls is the category output in the detection result; sigmoid(x) represents the mapping intermediate function; f(x, cls) represents the mapping function; and f represents the score corresponding to the confidence.
11. The scalp hair detection method according to claim 1, wherein the scalp hair attributes comprise at least one of hair thickness, degree of hair damage, hair grease, scalp stratum corneum, scalp capillaries (red blood streaks) and subcutaneous hair follicle grease; each scalp hair attribute corresponds to one improved MobileNet deep network model.
12. A scalp hair detection system comprising:
the image acquisition module is used for acquiring different scalp hair images;
the classification data set labeling module is used for labeling and classifying scalp hair images according to scalp hair attributes to form a classification data set based on the scalp hair attributes;
the deep network model training module is used for inputting the labeled classification data set images into the improved MobileNet deep network model for training to obtain a trained deep network model based on scalp hair attributes;
the detection result output module is used for inputting a scalp hair image to be detected into the trained deep network model to obtain a detection result corresponding to the scalp hair attribute; the detection result comprises a category and its corresponding confidence.
13. A scalp hair detection apparatus comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the scalp hair detection method according to any one of claims 1 to 11 when executing the program.
CN202211023975.7A 2022-08-24 2022-08-24 Scalp hair detection method, system and equipment Pending CN117710686A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211023975.7A CN117710686A (en) 2022-08-24 2022-08-24 Scalp hair detection method, system and equipment
PCT/CN2023/114216 WO2024041524A1 (en) 2022-08-24 2023-08-22 Scalp hair detection method, system and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211023975.7A CN117710686A (en) 2022-08-24 2022-08-24 Scalp hair detection method, system and equipment

Publications (1)

Publication Number Publication Date
CN117710686A 2024-03-15

Family

ID=90012530

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211023975.7A Pending CN117710686A (en) 2022-08-24 2022-08-24 Scalp hair detection method, system and equipment

Country Status (2)

Country Link
CN (1) CN117710686A (en)
WO (1) WO2024041524A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110188598B * 2019-04-13 2022-07-05 大连理工大学 (Dalian University of Technology) Real-time hand posture estimation method based on MobileNet-v2
WO2021086594A1 * 2019-10-28 2021-05-06 Google Llc Synthetic generation of clinical skin images in pathology
CN111428655A * 2020-03-27 2020-07-17 厦门大学 (Xiamen University) Scalp detection method based on deep learning
CN113591512A * 2020-04-30 2021-11-02 青岛海尔智能技术研发有限公司 (Qingdao Haier Intelligent Technology R&D Co Ltd) Method, device and equipment for hair identification
CN114120019B * 2021-11-08 2024-02-20 贵州大学 (Guizhou University) Light target detection method

Also Published As

Publication number Publication date
WO2024041524A1 (en) 2024-02-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination