WO2024041524A1 - Scalp hair detection method, system and device - Google Patents

Scalp hair detection method, system and device

Info

Publication number
WO2024041524A1
WO2024041524A1 (PCT/CN2023/114216)
Authority
WO
WIPO (PCT)
Prior art keywords
scalp
hair
layer
network model
deep network
Prior art date
Application number
PCT/CN2023/114216
Other languages
French (fr)
Chinese (zh)
Inventor
蔡权
杨建辉
卢伟
严靖宇
Original Assignee
漳州松霖智能家居有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 漳州松霖智能家居有限公司 filed Critical 漳州松霖智能家居有限公司
Publication of WO2024041524A1 publication Critical patent/WO2024041524A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Definitions

  • the present invention relates to the technical field of scalp and hair detection, in particular to a method, system and equipment for scalp and hair detection.
  • the scalp is one of the most sensitive areas of human skin. Owing to lifestyle habits and work pressure, more and more people now suffer from scalp and hair problems; many people have damaged hair, greasy hair, a thick scalp stratum corneum, many red blood filaments on the scalp, and excess subcutaneous oil in the hair follicles. Many chain hairdressing institutions and hair management centers on the market today test hair by photographing the scalp at a single point and interpreting the photos manually to assess the condition of the subject's scalp and hair. This approach is often affected by the interpreter's subjectivity and cannot yield objective, accurate results, so subjects cannot correctly understand the condition of their own scalp and hair. How to detect the condition of scalp and hair objectively and accurately is therefore an urgent problem to be solved.
  • the patent with application number 202010228550.4 discloses a scalp detection method based on deep learning, which includes the following steps. Step S1: collect scalp image data. Step S2: label and classify the scalp images according to scalp attributes to form a classification data set for each scalp attribute. Step S3: pre-train a SqueezeNet model on the ImageNet image database to obtain a pre-trained SqueezeNet model. Step S4: modify the pre-trained SqueezeNet model to adapt it to a regression task, obtaining an improved SqueezeNet model. Step S5: formulate scalp detection accuracy judgment rules and retrain the improved SqueezeNet model with the classification data set from step S2 to obtain scalp detection models for the various scalp attributes. Step S6: classify the scalp image to be tested according to its scalp attributes and input it into the corresponding scalp detection model to obtain a prediction result.
  • compared with the SqueezeNet model, MobileNet reduces the number of parameters and improves the computing speed, making it more convenient for device-side deployment.
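The parameter saving behind MobileNet's speed can be illustrated with a small calculation comparing a standard convolution with MobileNet's depthwise-separable convolution (the layer shape below is illustrative, not taken from this patent):

```python
# Parameter counts for one convolution layer: standard vs. depthwise-separable,
# the core idea behind MobileNet's reduced parameter count.

def standard_conv_params(c_in, c_out, k):
    # one k x k filter per (input channel, output channel) pair
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k):
    # depthwise: one k x k filter per input channel
    # pointwise: a 1x1 convolution that mixes channels
    return k * k * c_in + c_in * c_out

c_in, c_out, k = 32, 64, 3  # illustrative layer shape
std = standard_conv_params(c_in, c_out, k)
sep = depthwise_separable_params(c_in, c_out, k)
print(std, sep, round(std / sep, 1))  # → 18432 2336 7.9
```

For a 3x3 layer with these channel counts the separable form needs roughly 8x fewer parameters, which is why MobileNet-style models suit terminal-side deployment.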
  • the main purpose of the present invention is to propose a scalp and hair detection method, system and device that overcome the shortcomings of the existing technology: an improved MobileNet deep network model detects the scalp and hair attributes in scalp and hair images and finally outputs the category and confidence level corresponding to each attribute, which improves the computing speed and makes terminal-side deployment more convenient.
  • a scalp hair detection method includes:
  • scalp and hair images are annotated and classified to form a classification data set based on scalp and hair attributes
  • the improved MobileNet deep network model includes, in order: a first convolution layer, several block layers, a pooling layer, a second convolution layer and a third convolution layer; each block layer includes, in order: a fourth convolution layer, a depthwise convolution layer and a fifth convolution layer.
  • the improved MobileNet deep network model also includes several first activation function layers; each convolutional layer is connected to a first activation function layer, which performs a nonlinear operation on the feature information of the scalp and hair images extracted by that convolutional layer.
  • the first activation function layer includes a ReLU layer.
  • the end of the improved MobileNet deep network model includes a fully connected layer; the fully connected layer outputs three 1*1 channel images, and the second activation function layer connected to the fully connected layer activates them and outputs the confidence of each category.
  • the second activation function layer includes a Softmax layer.
  • the loss calculation function of the improved MobileNet deep network model is as follows:

    H(y, p) = -(1/N) · Σ_{i=1..N} Σ_{c=1..M} y_ic · log(p_ic)

    where H(y, p) represents the model loss; y represents the true values of the image labels in the test set; p represents the predicted values of the labels output after the images are fed into the model; N represents the number of images in the test set; M represents the number of categories; c represents the current output category; y_ic represents the true value of the c-th category of the i-th sample; and p_ic represents the predicted value output by the model for the c-th category of the i-th sample.
  • an accuracy calculation function is also included.
  • the method further includes: inputting the category and confidence into the constructed score mapping function to obtain a score corresponding to the category and confidence.
  • the score mapping function is as follows:
  • x is the confidence of the detection result output
  • cls is the category of the detection result output
  • sigmoid(x) represents the mapping intermediate function
  • f(x, cls) represents the mapping function
  • f represents the score corresponding to the confidence.
  • the scalp and hair attributes include at least one of hair thickness, hair damage degree, hair oil, scalp cuticle, scalp red blood filaments, and hair follicle subcutaneous oil; each scalp and hair attribute corresponds to an improved MobileNet deep network model.
  • a scalp hair detection system includes:
  • Image acquisition module used to acquire different scalp and hair images
  • the classification data set labeling module is used to label and classify scalp and hair images according to scalp and hair attributes, forming a classification data set based on scalp and hair attributes;
  • the deep network model training module is used to input the labeled classification data set images into the improved MobileNet deep network model for training, and obtain the trained deep network model based on scalp and hair attributes;
  • the detection result output module is used to input the scalp and hair image to be tested into the trained deep network model to obtain detection results corresponding to scalp and hair attributes; the detection results include categories and confidence levels corresponding to the categories.
  • in another aspect, a scalp hair detection device includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the scalp hair detection method is implemented.
  • the present invention uses an improved MobileNet deep network model to detect scalp and hair attributes in scalp and hair images, and finally outputs the category and confidence level corresponding to the scalp and hair attributes, which improves the computing speed and makes terminal-side deployment more convenient;
  • the improved MobileNet deep network model of the present invention reduces the number of block layers of the original Mobilenet. Since the scalp and hair features are relatively obvious, there is no need for too many block layers to extract information, so several block layers in the middle are removed. Thereby reducing the amount of calculation and speeding up the operation;
  • the improved MobileNet deep network model of the present invention adds skip connection layers to enhance feature fusion: a skip connection layer is added after each block layer to connect it to the last block layer, and adaptive downsampling then enriches both local features and global features (local features are small features, such as red blood filaments; global features are large-area features, such as oil), which benefits the subsequent extraction or classification of features at different scales;
  • the improved MobileNet deep network model of the present invention adds a 1*1 convolution layer at the end.
  • the addition of a 1*1 convolution layer makes the model pay more attention to classification information and further accelerates the convergence speed;
  • the present invention inputs the categories and confidence levels corresponding to scalp and hair attributes into the constructed score mapping function to map them to scores, so that users can intuitively grasp the state of their own scalp from the scores.
  • Figure 1 is a flow chart of a scalp hair detection method according to an embodiment of the present invention
  • Figure 2 is an example diagram of the MobileNet deep network model in the prior art
  • Figure 3 is an example diagram of an improved MobileNet deep network model according to an embodiment of the present invention.
  • Figure 4 is an example diagram of the block layer according to the embodiment of the present invention.
  • Figure 5 is a hierarchical table of the MobileNet deep network model in the prior art
  • Figure 6 is a hierarchical table of the improved MobileNet deep network model according to the embodiment of the present invention.
  • Figure 7 is a comparison diagram of model loss between MobileNet in the prior art and the improved MobileNet according to the embodiment of the present invention.
  • Figure 8 is a comparison diagram of model accuracy between MobileNet in the prior art and the improved MobileNet according to the embodiment of the present invention.
  • Figure 9 is a detailed flow chart of a method for detecting red blood filament attributes on the scalp according to an embodiment of the present invention.
  • Figure 10 is a structural block diagram of a scalp hair detection system according to an embodiment of the present invention.
  • Figure 11 is a frame diagram of a scalp hair detection device according to an embodiment of the present invention.
  • "connection" can be a fixed connection, a detachable connection, or an integral connection; it can be a mechanical connection or an electrical connection; and it can be a direct connection, an indirect connection through an intermediate medium, or an internal communication between two components.
  • the step identifiers S101, S102, S103, etc. are only used for convenience of expression and do not indicate the execution order; the execution order can be adjusted as appropriate.
  • a scalp hair detection method of the present invention includes:
  • S101 acquire different scalp and hair images;
  • S102 label and classify the scalp and hair images according to scalp and hair attributes to form a classification data set based on scalp and hair attributes;
  • S103 input the labeled classification data set images into the improved MobileNet deep network model for training to obtain a trained deep network model based on scalp and hair attributes;
  • S104 input the scalp and hair image to be tested into the trained deep network model to obtain detection results corresponding to the scalp and hair attributes; the detection results include categories and the confidence levels corresponding to the categories.
  • acquiring different scalp and hair images specifically includes acquiring scalp and hair images under different light sources, from different angles, and from subjects of different ages and genders. Training the improved MobileNet deep network model (hereinafter referred to as the MobileHairNet deep network model) on such varied images allows it to detect the categories and confidence levels of scalp and hair images under different circumstances, expanding its scope of application.
  • the scalp and hair images are labeled and classified based on scalp and hair attributes; specifically, a professional doctor can be asked to perform the labeling. Different scalp and hair attributes can be divided into different categories, or they can all be divided into three categories, such as mild, moderate and severe. Professional annotation helps the subsequent MobileHairNet deep network model continuously update its network structure feature parameters during training and adjust them to the optimal state.
  • the scalp hair attributes include at least one of hair thickness, hair damage degree, hair oil, scalp cuticle, scalp red blood filaments and hair follicle subcutaneous oil; each scalp hair attribute corresponds to a MobileHairNet deep network model.
  • the structure of the MobileHairNet deep network model corresponding to each scalp hair attribute is the same, but the network structure characteristic parameters may be different.
  • variable magnification lens can be used when collecting images of different scalp and hair attributes.
  • 50x, 100x and 200x optical lenses are used to magnify the hair and scalp respectively to observe the characteristics of the scalp and hair.
  • the 50x lens makes it easier to observe redness and other conditions on the scalp; the 100x lens makes it easier to observe the scalp cuticle, scalp oil, and hair loss at the follicles; and the 200x lens is used to observe hair thickness, hair damage, etc.
  • this embodiment uses three-spectrum recognition technology to identify the characteristics of scalp and hair images.
  • scalp and hair images can be collected under different light sources.
  • for red blood streaks in the scalp and subcutaneous oil in the hair follicles, it is difficult to distinguish these characteristics with the naked eye under traditional white light.
  • eliminating the specular reflection of natural light makes it easier to observe the characteristics of red blood filaments beneath the surface of the skin.
  • UV light sources with wavelengths between 280 nm and 400 nm are easily reflected by the subcutaneous oil of hair follicles, producing bright red light.
  • which magnification lens and which light source are most suitable for extracting each scalp and hair attribute can be determined experimentally; images corresponding to each scalp and hair attribute are then acquired under the corresponding magnification lens and light source as images for training.
  • the collection of scalp and hair images can be implemented on the device that performs the scalp and hair detection method, or the images can be collected on other devices and then sent to the device that performs the method; the specific arrangement is set according to needs and is not limited in this embodiment.
  • the MobileHairNet deep network model includes: a first convolution layer, several block layers, a pooling layer, a second convolution layer and a third convolution layer; each block layer includes: a fourth convolution layer, a depthwise convolution layer and a fifth convolution layer; after each block layer there is a skip connection layer connected to the last block layer.
  • the improved MobileNet deep network model also includes several first activation function layers; each convolutional layer is connected to a first activation function layer, which performs a nonlinear operation on the feature information of the scalp and hair images extracted by that convolutional layer.
  • the first convolution layer, the second convolution layer, the third convolution layer, the fourth convolution layer and the fifth convolution layer are conv layers that do not themselves include an activation function, and the depthwise convolution layer is a dwconv layer that does not include an activation function. A first activation function layer is connected behind each of the first through fifth convolution layers, and a first activation function layer is likewise connected behind the dwconv layer.
  • a conv layer together with its first activation function layer may also be collectively referred to as a convolution layer, and the dwconv layer together with its first activation function layer may be collectively referred to as the depthwise convolution layer; this embodiment does not specifically limit this.
  • the first activation function layer includes a ReLU layer.
  • the end of the MobileHairNet deep network model also includes a fully connected layer.
  • the fully connected layer outputs three 1*1 channel images, and the second activation function layer connected to the fully connected layer activates them and outputs the confidence of each category.
  • the second activation function layer includes a Softmax layer.
  • the Softmax function is softmax(x_c) = e^{x_c} / Σ_{j=1..n} e^{x_j}, where x is the value of each pixel in the picture and n is the number of categories, for example three categories: mild, moderate and severe.
  • the category with the maximum confidence is taken as the output category of the current image. For example, if the softmax output is [0.5, 0.2, 0.3], the probability that the image is mild is 50%, moderate is 20%, and severe is 30%; the category with the maximum probability is taken as the current category.
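The softmax activation and maximum-confidence category selection described above can be sketched in plain Python (the logit values are illustrative):

```python
import math

def softmax(logits):
    # subtract the max for numerical stability before exponentiating
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Suppose the fully connected layer emits three values, one per category.
categories = ["mild", "moderate", "severe"]
probs = softmax([1.2, 0.3, 0.7])
best = max(range(len(probs)), key=lambda i: probs[i])
print(categories[best], round(probs[best], 2))  # → mild 0.5
```

The returned list sums to 1, so each entry can be read directly as the confidence of its category.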
  • the test set is used for verification, and Cross Entropy Loss is used to calculate the model loss:

    H(y, p) = -(1/N) · Σ_{i=1..N} Σ_{c=1..M} y_ic · log(p_ic)

    where H(y, p) represents the model loss; y represents the true values of the image labels in the test set; p represents the predicted values of the labels output after the images are fed into the model; N represents the number of images in the test set; M represents the number of categories; c represents the current output category; y_ic represents the true value of the c-th category of the i-th sample; and p_ic represents the predicted value output by the model for the c-th category of the i-th sample.
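A minimal implementation of this cross-entropy loss, consistent with the symbol definitions above (the sample labels and probabilities below are illustrative):

```python
import math

def cross_entropy(y_true, y_pred):
    """H(y, p) = -(1/N) * sum_i sum_c y_ic * log(p_ic).
    y_true: one-hot labels per image; y_pred: per-category probabilities."""
    n = len(y_true)
    total = 0.0
    for y_row, p_row in zip(y_true, y_pred):
        for y_ic, p_ic in zip(y_row, p_row):
            if y_ic:  # only the true category contributes for one-hot labels
                total -= y_ic * math.log(p_ic)
    return total / n

# two test images, three categories (mild / moderate / severe)
y = [[1, 0, 0], [0, 0, 1]]
p = [[0.7, 0.2, 0.1], [0.2, 0.2, 0.6]]
print(round(cross_entropy(y, p), 4))  # → 0.4338
```

The loss shrinks as the predicted probability of each true category approaches 1, which is what drives the network structure feature parameters toward their optimal state during training.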
  • FIG. 2 shows an example diagram of the MobileNet deep network model in the prior art.
  • Figure 3 shows an example diagram of the MobileHairNet deep network model according to the embodiment of the present invention.
  • Figure 4 shows an example diagram of the block layer according to the embodiment of the present invention.
  • FIG. 5 shows a hierarchical table of the MobileNet deep network model in the prior art.
  • FIG. 6 is a hierarchical table of the MobileHairNet deep network model according to an embodiment of the present invention.
  • the MobileHairNet deep network model of this embodiment has the following characteristics and beneficial effects compared with the existing MobileNet deep network model:
  • the MobileHairNet deep network model in this embodiment removes the middle block layers with 14×14×96 and 7×7×160 inputs, thereby reducing the amount of calculation and speeding up the operation.
  • a skip connection layer is added after different block layers and connected to the last block layer; adaptive downsampling (that is, scaling down by a certain ratio, without which the feature maps could not be superimposed onto the last layer) then enriches the local features and global features, which benefits the subsequent extraction or classification of features at different scales (local features are small features, such as red blood filaments on the scalp; global features are large-area features, such as oil; by learning features of different sizes from the pictures, similar pictures of different sizes can be quickly distinguished next time).
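The adaptive downsampling step, which scales an earlier feature map so it can be superimposed onto the last block's output, can be sketched with average pooling over near-uniform windows (the feature-map values and sizes below are illustrative):

```python
def adaptive_avg_pool(feature, out_size):
    # shrink a square 2D feature map to out_size x out_size by averaging
    # window regions, in the spirit of adaptive average pooling
    in_size = len(feature)
    pooled = []
    for i in range(out_size):
        r0, r1 = i * in_size // out_size, (i + 1) * in_size // out_size
        row = []
        for j in range(out_size):
            c0, c1 = j * in_size // out_size, (j + 1) * in_size // out_size
            vals = [feature[r][c] for r in range(r0, r1) for c in range(c0, c1)]
            row.append(sum(vals) / len(vals))
        pooled.append(row)
    return pooled

# a 4x4 map from an earlier block, pooled to 2x2 to match the last block,
# after which it could be added element-wise to the last block's output
fmap = [[1, 1, 2, 2],
        [1, 1, 2, 2],
        [3, 3, 4, 4],
        [3, 3, 4, 4]]
print(adaptive_avg_pool(fmap, 2))  # → [[1.0, 2.0], [3.0, 4.0]]
```

Once both maps share the same spatial size, the skip connection is a simple element-wise addition, fusing the earlier layer's local detail with the last block's global context.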
  • the MobileHairNet deep network model of the present invention is only 2.4 MB in size with an inference speed of 5 ms; compared with the 4.8 MB SqueezeNet model, whose inference speed is 35 ms, it is both smaller and faster.
  • a scalp hair detection method further includes: inputting the category and confidence level into the constructed score mapping function to obtain a score corresponding to the category and confidence level.
  • the score mapping function is as follows:
  • x is the confidence of the detection result output
  • cls is the category of the detection result output
  • sigmoid(x) represents the mapping intermediate function
  • f(x, cls) represents the mapping function
  • f represents the score corresponding to the confidence.
  • the category cls with the highest confidence and that confidence are input to the score mapping, allowing users to intuitively grasp the condition of their hair and scalp through scores; for example, based on the confidence, mild is mapped to 40-60 points, moderate to 60-80 points, and severe to 80-100 points.
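The exact score mapping formula is not reproduced in this text; a hedged sketch consistent with the described bands (mild 40-60, moderate 60-80, severe 80-100, with the position inside the band driven by a sigmoid of the confidence) might look like the following. The band edges dictionary and the use of the raw confidence as the sigmoid input are assumptions for illustration:

```python
import math

def sigmoid(x):
    # the mapping intermediate function sigmoid(x) mentioned in the text
    return 1.0 / (1.0 + math.exp(-x))

# illustrative band edges per category; the patent's actual mapping may differ
BANDS = {"mild": (40, 60), "moderate": (60, 80), "severe": (80, 100)}

def score(confidence, cls):
    lo, hi = BANDS[cls]
    # place the score inside the category's band according to confidence;
    # the sigmoid keeps the offset smooth and bounded in (0, 1)
    return lo + (hi - lo) * sigmoid(confidence)

print(round(score(0.5, "moderate"), 1))  # → 72.4
```

Any monotone map from confidence into the category's band would serve the same purpose: a higher confidence in a worse category yields a higher, more intuitive score.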
  • a scalp hair detection system includes:
  • Image acquisition module 1001 used to acquire different scalp and hair images
  • the classification data set annotation module 1002 is used to annotate and classify scalp and hair images according to scalp and hair attributes to form a classification data set based on scalp and hair attributes;
  • the deep network model training module 1003 is used to input the labeled classification data set images into the improved MobileNet deep network model for training, and obtain the trained deep network model based on scalp and hair attributes;
  • the detection result output module 1004 is used to input the scalp and hair image to be tested into the trained deep network model to obtain detection results corresponding to scalp and hair attributes; the detection results include categories and confidence levels corresponding to the categories.
  • for details of the image acquisition module 1001, the classification data set annotation module 1002, the deep network model training module 1003 and the detection result output module 1004, see the aforementioned scalp hair detection method.
  • a scalp hair detection device 110 includes a memory 1101, a processor 1102, and a computer program 1103 stored in the memory 1101 and executable on the processor 1102; when the processor 1102 executes the program, the scalp hair detection method is implemented.
  • the computer program 1103 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 1101 and executed by the processor 1102 to Complete this application.
  • the one or more modules/units may be a series of computer program 1103 instruction segments capable of completing specific functions. The instruction segments are used to describe the execution process of the computer program 1103 in the scalp hair detection device 110 .
  • the scalp and hair detection device 110 may be a proprietary scalp and hair detection instrument, or may be a computing device such as a mobile phone, desktop computer, notebook, PDA, cloud server, etc.
  • the scalp hair detection device 110 may include, but is not limited to, a processor 1102 and a memory 1101.
  • FIG. 11 is only an example of the scalp and hair detection device 110 and does not constitute a limitation on the scalp and hair detection device 110. It may include more or fewer components than shown in the figure, or combine certain components. Or different components, for example, the scalp hair detection device 110 may also include input and output devices, network access devices, buses, etc.
  • the processor 1102 can be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the memory 1101 may be an internal storage unit of the scalp hair detection device 110, such as a hard disk or memory of the scalp hair detection device 110.
  • the memory 1101 may also be an external storage device of the scalp and hair detection device 110, such as a plug-in hard drive, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the scalp and hair detection device 110.
  • the memory 1101 may also include both an internal storage unit of the scalp hair detection device 110 and an external storage device.
  • the memory 1101 is used for storing the computer program 1103 and other programs and data required by the terminal device.
  • the memory 1101 can also be used to temporarily store data that has been output or is to be output.
  • the present invention provides a scalp hair detection method that uses an improved MobileNet deep network model to detect the scalp and hair attributes in scalp and hair images and finally outputs the category and confidence level corresponding to those attributes; it improves the computing speed, makes terminal-side deployment more convenient, and has good industrial applicability.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed in the present invention are a scalp hair detection method, system and device. The method comprises: acquiring different scalp hair images; according to scalp hair properties, labeling and classifying the scalp hair images to form scalp hair property-based classification data sets; inputting the labeled images of the classification data sets into an improved MobileNet deep network model for training to obtain a trained scalp hair property-based deep network model; and inputting a scalp hair image under detection into the trained deep network model to obtain a detection result corresponding to the scalp hair properties, wherein the detection result comprises a category and a confidence coefficient corresponding to the category. The present invention realizes scalp hair property detection on scalp hair images by means of an improved MobileNet deep network model, and categories and confidence coefficients corresponding to scalp hair properties are finally output; the operational speed is increased, and terminal-side deployment is more convenient.

Description

一种头皮头发检测方法、系统和设备Scalp hair detection method, system and equipment 技术领域Technical field
本发明涉及头皮头发检测技术领域,特别是一种头皮头发检测方法、系统和设备。The present invention relates to the technical field of scalp and hair detection, in particular to a method, system and equipment for scalp and hair detection.
背景技术Background technique
头皮属于人体的敏感皮肤之一,由于生活习惯和工作压力等原因,目前受头皮头发问题困扰的人越来越多,许多人都存在头发受损、头发油腻、头皮角质层厚、头皮红血丝多、毛囊皮下油脂多等问题。现在的市面上有许多连锁的美发机构从业者与毛发管理中心,针对头发做的检测很多都是通过单点拍照的方式对头皮进行拍照,以人工解读的方式得到受测者的头皮头发的状态,这样的方式往往受解读者的主观意识影响,无法得到客观准确的结果,导致受测者无法正确的了解到自己的头皮头发状况。如何客观、准确的检测头皮头发的状态是亟需解决的问题。The scalp is one of the sensitive skins of the human body. Due to living habits and work pressure, more and more people are currently suffering from scalp and hair problems. Many people suffer from damaged hair, greasy hair, thick scalp cuticles, and red scalp. Problems such as excess hair follicles and subcutaneous oil. There are many chain hairdressing institutions and hair management centers on the market today. Many of the hair tests are based on single-point photography of the scalp, and manual interpretation is used to obtain the condition of the subject's scalp and hair. , this method is often affected by the subjective consciousness of the interpreter and cannot obtain objective and accurate results, resulting in the subject being unable to correctly understand the condition of his scalp and hair. How to objectively and accurately detect the condition of scalp and hair is an urgent problem that needs to be solved.
The patent with application No. 202010228550.4 discloses a scalp detection method based on deep learning, comprising the following steps: Step S1: collect scalp image data; Step S2: label and classify the scalp images according to scalp attributes to form a classification data set for each scalp attribute; Step S3: pre-train a SqueezeNet model on the ImageNet image database to obtain a pre-trained SqueezeNet model; Step S4: modify the pre-trained SqueezeNet model to adapt it to a regression task, obtaining an improved SqueezeNet model; Step S5: formulate scalp detection accuracy criteria and retrain the improved SqueezeNet model with the classification data sets from Step S2 to obtain scalp detection models for the various scalp attributes; Step S6: classify the scalp image under test according to scalp attribute and input it into the corresponding scalp detection model to obtain a prediction result. Compared with the SqueezeNet model, MobileNet has fewer parameters and higher computing speed, making terminal-side deployment more convenient.
Summary of the Invention
The main purpose of the present invention is to provide a scalp hair detection method, system and device that overcome the shortcomings of the prior art. An improved MobileNet deep network model detects the scalp hair attributes in scalp hair images and finally outputs the category and confidence corresponding to each attribute, which increases operational speed and makes terminal-side deployment more convenient.
The present invention adopts the following technical solutions:

In one aspect, a scalp hair detection method comprises:

acquiring different scalp hair images;

labeling and classifying the scalp hair images according to scalp hair attributes to form classification data sets based on the scalp hair attributes;

inputting the labeled classification data set images into an improved MobileNet deep network model for training to obtain a trained deep network model based on the scalp hair attributes; and

inputting a scalp hair image under test into the trained deep network model to obtain a detection result corresponding to the scalp hair attributes, the detection result comprising a category and a confidence corresponding to the category.
Preferably, the improved MobileNet deep network model comprises, in order: a first convolution layer, several block layers, a pooling layer, a second convolution layer and a third convolution layer. Each block layer comprises, in order: a fourth convolution layer, a depthwise convolution layer and a fifth convolution layer. After each block layer, a skip connection layer is provided that connects to the last block layer.
Preferably, the improved MobileNet deep network model further comprises several first activation function layers, one connected after each convolution layer, to perform a nonlinear operation on the feature information of the scalp hair image extracted by that convolution layer.

Preferably, the first activation function layer comprises a ReLU layer.

Preferably, the end of the improved MobileNet deep network model comprises a fully connected layer that outputs three 1x1-channel images, and a second activation function layer connected to the fully connected layer activates and outputs the confidence of each category.

Preferably, the second activation function layer comprises a Softmax layer.
Preferably, the loss calculation function of the improved MobileNet deep network model is as follows:

H(y, p) = -(1/N) Σ_{i=1}^{N} Σ_{c=1}^{M} y_{ic} log(p_{ic})

where H(y, p) is the model loss; y is the true value of the image labels in the test set; p is the predicted value of the labels output after the images are fed into the model; N is the number of images in the test set; M is the number of categories; c is the current output category; y_{ic} is the true value of the c-th category of the i-th sample; and p_{ic} is the predicted value output by the model for the c-th category of the i-th sample.
Preferably, an accuracy calculation function is also included, as follows:

Precision = TP / (TP + FP)

where Precision is the accuracy of the current round's weights on the test set; TP is the number of correct judgments; and FP is the number of incorrect judgments.
Preferably, after the detection result corresponding to the scalp hair attributes is obtained, the method further comprises: inputting the category and confidence into a constructed score mapping function to obtain a score corresponding to the category and confidence.
Preferably, the score mapping function is based on the sigmoid function:

sigmoid(x) = 1 / (1 + e^(-x))

where x is the confidence output in the detection result; cls is the category output in the detection result; sigmoid(x) is the intermediate mapping function; f(x, cls) is the mapping function; and f is the score corresponding to the confidence.
Preferably, the scalp hair attributes include at least one of hair thickness, degree of hair damage, hair oil, scalp stratum corneum, scalp red blood streaks and subcutaneous sebum of the hair follicles; each scalp hair attribute corresponds to one improved MobileNet deep network model.
In another aspect, a scalp hair detection system comprises:

an image acquisition module for acquiring different scalp hair images;

a classification data set labeling module for labeling and classifying the scalp hair images according to scalp hair attributes to form classification data sets based on the scalp hair attributes;

a deep network model training module for inputting the labeled classification data set images into an improved MobileNet deep network model for training to obtain a trained deep network model based on the scalp hair attributes; and

a detection result output module for inputting a scalp hair image under test into the trained deep network model to obtain a detection result corresponding to the scalp hair attributes, the detection result comprising a category and a confidence corresponding to the category.
In yet another aspect, a scalp hair detection device comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the scalp hair detection method when executing the program.
Compared with the prior art, the beneficial effects of the present invention are as follows:

(1) The present invention uses an improved MobileNet deep network model to detect the scalp hair attributes in scalp hair images and finally outputs the category and confidence corresponding to each attribute, which increases operational speed and makes terminal-side deployment more convenient;

(2) The improved MobileNet deep network model of the present invention reduces the number of block layers of the original MobileNet. Because scalp and hair features are relatively distinct, an excessive number of block layers is not needed to extract information, so several intermediate block layers are removed, which reduces the amount of computation and speeds up inference;

(3) The improved MobileNet deep network model of the present invention adds skip connection layers to enhance feature fusion. Specifically, a skip connection layer is added after each block layer to connect to the last block layer, and adaptive downsampling then enriches both local and global features (local features include fine features such as red blood streaks; global features include large-area features such as oil), which facilitates the subsequent extraction or classification of features at different scales;

(4) The improved MobileNet deep network model of the present invention adds a 1x1 convolution layer at the end, which makes the model focus more on classification information and further accelerates convergence;

(5) The present invention inputs the category and confidence corresponding to the scalp hair attributes into a constructed score mapping function to map them to a score, so that users can intuitively gauge the condition of their own scalp from the score.
The above description is only an overview of the technical solutions of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented according to the contents of this description, and in order to make the above and other objects, features and advantages of the present invention more apparent, specific embodiments of the present invention are set forth below.

From the following detailed description of specific embodiments of the present invention in conjunction with the accompanying drawings, the above and other objects, advantages and features of the present invention will become more apparent to those skilled in the art.
Brief Description of the Drawings
Figure 1 is a flow chart of a scalp hair detection method according to an embodiment of the present invention;

Figure 2 is an example diagram of a prior-art MobileNet deep network model;

Figure 3 is an example diagram of the improved MobileNet deep network model according to an embodiment of the present invention;

Figure 4 is an example diagram of a block layer according to an embodiment of the present invention;

Figure 5 is a layer table of the prior-art MobileNet deep network model;

Figure 6 is a layer table of the improved MobileNet deep network model according to an embodiment of the present invention;

Figure 7 is a comparison of the model loss of the prior-art MobileNet and the improved MobileNet of an embodiment of the present invention;

Figure 8 is a comparison of the model accuracy of the prior-art MobileNet and the improved MobileNet of an embodiment of the present invention;

Figure 9 is a detailed flow chart of the detection method according to an embodiment of the present invention, taking the scalp red blood streak attribute as an example;

Figure 10 is a structural block diagram of a scalp hair detection system according to an embodiment of the present invention;

Figure 11 is a framework diagram of a scalp hair detection device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
In the description of the present invention, it should be noted that the terms "comprising", "including" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "comprising a..." does not exclude the presence of additional identical elements in the process, method, article or device that includes that element.

In the description of the present invention, it should be noted that the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance.

In the description of the present invention, it should be noted that, unless otherwise expressly specified and limited, the terms "installed", "provided with", "sleeved/connected", "connected" and the like should be understood in a broad sense. For example, "connected" may mean fixedly connected, detachably connected, or integrally connected; mechanically or electrically connected; directly connected, or indirectly connected through an intermediate medium; or the internal communication of two elements. For those of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the specific situation.

In the description of the present invention, it should be noted that, unless otherwise expressly specified and limited, the step labels S101, S102, S103 and so on are used only for convenience of expression and do not indicate an execution order; the execution order may be adjusted accordingly.
As shown in Figure 1, a scalp hair detection method of the present invention comprises:

S101: acquiring different scalp hair images;

S102: labeling and classifying the scalp hair images according to scalp hair attributes to form classification data sets based on the scalp hair attributes;

S103: inputting the labeled classification data set images into an improved MobileNet deep network model for training to obtain a trained deep network model based on the scalp hair attributes;

S104: inputting a scalp hair image under test into the trained deep network model to obtain a detection result corresponding to the scalp hair attributes, the detection result comprising a category and a confidence corresponding to the category.
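The inference step S104 can be sketched minimally in code. This is an illustrative outline only; the trained model is abstracted behind a `classify` callable, and the mild/moderate/severe category names follow the three-way labeling scheme used elsewhere in this description:

```python
CATEGORIES = ["mild", "moderate", "severe"]  # example three-way labeling

def run_detection(image, classify):
    """S104: feed an image under test to a trained classifier and return
    the category with the highest confidence together with that confidence."""
    confidences = classify(image)  # e.g. softmax output over the categories
    best = max(range(len(confidences)), key=lambda i: confidences[i])
    return CATEGORIES[best], confidences[best]

# Stand-in "trained model" that always outputs the probabilities [0.5, 0.2, 0.3]:
category, confidence = run_detection(None, lambda img: [0.5, 0.2, 0.3])
print(category, confidence)  # mild 0.5
```

The same skeleton applies per attribute, since each scalp hair attribute has its own trained model.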
Acquiring different scalp hair images specifically includes acquiring scalp hair images under different light sources, at different angles, and from subjects of different ages and genders. Selecting a variety of scalp hair images for the subsequent training of the improved MobileNet deep network model (hereinafter referred to as the MobileHairNet deep network model) enables the model to detect the category and confidence of scalp hair images under different conditions and broadens its range of applicability.
The scalp hair images are labeled and classified according to the scalp hair attributes. Specifically, a professional physician may be asked to perform the labeling. The various scalp hair attributes may each be divided into different categories, or all may be divided into three categories such as mild, moderate and severe. Professional labeling helps the MobileHairNet deep network model continuously update its network structure parameters during subsequent training and adjust them to an optimal state.
In the present invention, the scalp hair attributes include at least one of hair thickness, degree of hair damage, hair oil, scalp stratum corneum, scalp red blood streaks and subcutaneous sebum of the hair follicles; each scalp hair attribute corresponds to one MobileHairNet deep network model. The MobileHairNet deep network models corresponding to the various scalp hair attributes share the same structure, but their network structure parameters may differ.
It should be noted that, since the characteristics of the various scalp hair attributes differ, lenses of variable magnification may be used when capturing images of different attributes. For example, 50x, 100x and 200x optical lenses may be used to magnify the hair and scalp in order to observe their characteristics: a 50x lens makes it easier to observe conditions such as scalp red blood streaks; a 100x lens makes it easier to observe the scalp stratum corneum, scalp oil, follicular hair loss and the like; and a 200x lens is used to observe hair thickness, hair damage and so on.
In addition, this embodiment uses tri-spectral recognition technology to identify the characteristics of scalp hair images: when capturing scalp hair images, the images may be captured under different light sources. Features such as scalp red blood streaks and subcutaneous sebum of the hair follicles are difficult to distinguish with the naked eye under conventional white light. With the aid of polarized light, the specular reflection of natural light can be eliminated, making it easier to observe red blood streaks beneath the skin surface. A UV light source with a wavelength between 280 nm and 400 nm is readily reflected by the subcutaneous sebum of the hair follicles, producing a bright red glow. Which lens magnification and which light source are best suited to extracting which scalp hair attribute can be determined experimentally; images of that attribute are then captured under the corresponding magnification and light source and used as training images.
It should be noted that the scalp hair images may be captured on the device that executes the scalp hair detection method, or captured on another device and then sent to the device that executes the method, as required; this embodiment imposes no limitation.
In this embodiment, the MobileHairNet deep network model comprises, in order: a first convolution layer, several block layers, a pooling layer, a second convolution layer and a third convolution layer. Each block layer comprises, in order: a fourth convolution layer, a depthwise convolution layer and a fifth convolution layer. After each block layer, a skip connection layer is provided that connects to the last block layer.

Further, the improved MobileNet deep network model also comprises several first activation function layers, one connected after each convolution layer, to perform a nonlinear operation on the feature information of the scalp hair image extracted by that convolution layer.

Here, the first, second, third, fourth and fifth convolution layers are conv layers that do not include an activation function, and the depthwise convolution layer is a dwconv layer that does not include an activation function. A first activation function layer is connected after each of the first to fifth convolution layers, and a first activation function layer is also connected after the dwconv layer. In other embodiments, a conv layer together with its first activation function layer may be referred to collectively as a convolution layer, and a dwconv layer together with its first activation function layer as a depthwise convolution layer; this embodiment imposes no specific limitation.
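The conv, dwconv, conv sequence inside each block follows MobileNet's depthwise-separable pattern, in which a depthwise layer filters each channel independently and 1x1 (pointwise) layers mix channels. The following NumPy sketch (stride 1, no padding, and not the patent's actual implementation) shows the two operations and why they cost fewer weights than a standard convolution:

```python
import numpy as np

def depthwise_conv2d(x, kernels):
    """Depthwise convolution: x is (C, H, W), kernels is (C, k, k),
    one filter per channel, with no mixing across channels."""
    c, h, w = x.shape
    _, k, _ = kernels.shape
    out = np.zeros((c, h - k + 1, w - k + 1))
    for ch in range(c):
        for i in range(h - k + 1):
            for j in range(w - k + 1):
                out[ch, i, j] = np.sum(x[ch, i:i + k, j:j + k] * kernels[ch])
    return out

def pointwise_conv2d(x, weights):
    """1x1 convolution: weights is (C_out, C_in); mixes channels only."""
    return np.tensordot(weights, x, axes=([1], [0]))

# A 3x3 depthwise layer over C channels plus a 1x1 layer into C_out channels
# costs C*9 + C*C_out weights, versus C*C_out*9 for a standard 3x3 convolution.
x = np.ones((8, 6, 6))
y = pointwise_conv2d(depthwise_conv2d(x, np.ones((8, 3, 3))), np.ones((16, 8)))
print(y.shape)  # (16, 4, 4)
```

In a deployed model these loops would of course be replaced by an optimized framework kernel; the sketch only illustrates the block's structure.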
Specifically, the first activation function layer comprises a ReLU layer.

The end of the MobileHairNet deep network model further comprises a fully connected layer that outputs three 1x1-channel images; a second activation function layer connected to the fully connected layer activates and outputs the confidence of each category.

The second activation function layer comprises a Softmax layer.
The ReLU function introduces nonlinearity so that the MobileHairNet deep network model can fit nonlinear functions, suppressing non-salient feature regions and concentrating on important features. The ReLU function is:

ReLU(x) = max(0, x)

where x is the value of each pixel in the image.
The Softmax function is:

Softmax(z_i) = e^(z_i) / Σ_{j=1}^{n} e^(z_j)

where z_i is the channel value output for the i-th category and n is the number of categories, for example three categories corresponding to mild, moderate and severe.
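A small numerical sketch of this normalization (the max-shift is a standard numerical-stability detail, not something stated in the description):

```python
import numpy as np

def softmax(z):
    """Normalize the n channel values z_i into probabilities that sum to 1."""
    e = np.exp(z - np.max(z))  # subtracting the max avoids overflow
    return e / e.sum()

logits = np.array([1.2, 0.3, 0.7])  # hypothetical channel outputs
probs = softmax(logits)
print(probs.round(3))  # roughly [0.497, 0.202, 0.301]
```

The resulting vector sums to 1, so each entry can be read directly as the confidence of one category.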
Finally, from the output confidences of the categories, the category with the maximum confidence is taken as the predicted category of the current image. For example, if the softmax output is [0.5, 0.2, 0.3], the probabilities mapped to the three categories are 50% for mild, 20% for moderate and 30% for severe, and the category with the maximum probability is taken as the current category. To verify whether the MobileHairNet deep network model has learned the intended features, a test set is used for validation. Cross Entropy Loss is used to compute the model loss, as follows:

H(y, p) = -(1/N) Σ_{i=1}^{N} Σ_{c=1}^{M} y_{ic} log(p_{ic})

where H(y, p) is the model loss; y is the true value of the image labels in the test set; p is the predicted value of the labels output after the images are fed into the model; N is the number of images in the test set; M is the number of categories; c is the current output category; y_{ic} is the true value of the c-th category of the i-th sample; and p_{ic} is the predicted value output by the model for the c-th category of the i-th sample.
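The loss above can be checked with a toy computation. A sketch assuming one-hot true labels and row-wise probability predictions:

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """H(y, p) = -(1/N) * sum_i sum_c y_ic * log(p_ic) over N samples
    and M categories; eps guards against log(0)."""
    return float(-np.mean(np.sum(y_true * np.log(y_pred + eps), axis=1)))

# Two test images, three categories (mild / moderate / severe):
y = np.array([[1, 0, 0],
              [0, 0, 1]])
p = np.array([[0.5, 0.2, 0.3],
              [0.1, 0.2, 0.7]])
print(round(cross_entropy(y, p), 3))  # -(log 0.5 + log 0.7) / 2, about 0.525
```

Only the predicted probability of each sample's true category contributes, so the loss falls as those probabilities approach 1.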
When the model has undergone multiple rounds of training on the training set data and both the training-set loss and the test-set loss keep decreasing, the model is converging; the accuracy function Precision is then used to compute the accuracy of the current round's weights on the test set, as follows:

Precision = TP / (TP + FP)

where Precision is the accuracy of the current round's weights on the test set; TP is the number of correct judgments; and FP is the number of incorrect judgments.
Multiple rounds of training are carried out and the optimal model is saved. At the start of training, a preset number of rounds can be set for training the network. A comprehensive evaluation is made from the accuracy Precision and the Cross Entropy Loss obtained in each round, and the model with a higher test-set accuracy and a lower training-set loss is saved, ensuring that the model has high prediction accuracy and strong feature-learning ability.
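The round-by-round selection described above can be outlined as follows. Here `train_one_epoch` and `evaluate` stand in for the real training and test-set evaluation, and combining precision and loss as `precision - loss` is one simple choice of "comprehensive evaluation"; the description does not fix an exact rule:

```python
def select_best_weights(train_one_epoch, evaluate, epochs):
    """Run a preset number of rounds; keep the weights with the best
    combination of high test-set precision and low loss."""
    best_weights, best_score = None, float("-inf")
    for epoch in range(epochs):
        weights = train_one_epoch(epoch)
        loss, tp, fp = evaluate(weights)     # test-set loss, TP, FP
        precision = tp / (tp + fp)           # Precision = TP / (TP + FP)
        score = precision - loss             # assumed combination rule
        if score > best_score:
            best_weights, best_score = weights, score
    return best_weights

# Toy run: "weights" are just the epoch index; evaluation results are canned.
history = [(0.9, 60, 40), (0.4, 80, 20), (0.6, 70, 30)]  # (loss, TP, FP)
best = select_best_weights(lambda e: e, lambda w: history[w], epochs=3)
print(best)  # epoch 1: precision 0.8 with loss 0.4 wins
```

In practice `evaluate` would run the saved weights over the whole test set and `best_weights` would be written to disk as the checkpoint.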
Specifically, Figure 2 shows an example of a prior-art MobileNet deep network model, Figure 3 shows an example of the MobileHairNet deep network model of an embodiment of the present invention, and Figure 4 shows an example of its block layer. Figure 5 shows the layer table of the prior-art MobileNet deep network model, and Figure 6 shows the layer table of the MobileHairNet deep network model of an embodiment of the present invention. As can be seen from Figures 2 to 6, compared with the prior-art MobileNet deep network model, the MobileHairNet deep network model of this embodiment has the following features and beneficial effects:
(1) The number of MobileNet block layers is reduced. Because the features are relatively distinct, not many block layers are needed to extract information, so the MobileHairNet deep network model of this embodiment removes the intermediate block layers with 14x14x96 and 7x7x160 inputs, thereby reducing the amount of computation and speeding up inference.
(2) Skip connection layers are added to enhance feature fusion. A skip connection layer is added after each of the different block layers and connected to the last block layer, followed by adaptive downsampling (that is, scaling down by a fixed ratio, since otherwise the features could not be superimposed onto the last layer). This enriches both local and global features, which facilitates the subsequent extraction or classification of features at different scales (local features are fine features such as scalp red blood streaks; global features are large-area features such as oil, analogous to learning from pictures of different sizes so that similar pictures of different sizes can be quickly recognized later).
Experiments show that, on the same data set, the model with skip connection layers improves accuracy by 5.8% over the model without them. Under the same number of training rounds, the average loss is 17% lower than that of the model without skip connections, convergence is faster, and the final model is reached 25 epochs earlier (in subsequent epochs the accuracy no longer improves because the model overfits; the optimal number of iterations is determined by the minimum of the loss value or the maximum of the accuracy (ACC)). Figure 7 compares the model loss of the prior-art MobileNet and the improved MobileNet of this embodiment, and Figure 8 compares their model accuracy.
(3) A 1x1 convolution layer is added at the end to accelerate convergence. Because the skip connection layers are connected to the last block layer and pass through the max-pooling layer, and combined with the earlier optimization of reducing the number of MobileNet block layers, the number of channels after pooling in the present invention is reduced from the original MobileNet's 1280 to 171 (3+32+16+24+32+64), further compressing the amount of computation. Finally, a 1x1 convolution layer is appended so that the model focuses more on classification information, further accelerating convergence.
Under these three strategies, the MobileHairNet deep network model of the present invention is only 2.4 MB in size with a 5 ms inference speed, smaller and faster than the SqueezeNet model's 4.8 MB and 35 ms.
In this embodiment, after the detection results corresponding to the scalp hair attributes are obtained, the scalp hair detection method further includes: inputting the category and the confidence into a constructed score mapping function to obtain a score corresponding to the category and the confidence.
The score mapping function is specifically as follows:

(formula image not reproduced)

where x is the confidence output by the detection result; cls is the category output by the detection result; sigmoid(x) denotes the intermediate mapping function; f(x, cls) denotes the mapping function; and f denotes the score corresponding to the confidence.
In a specific implementation, after the category cls with the highest confidence and its confidence x are obtained, score mapping may be performed on that highest-confidence category and confidence alone, or on every category cls and its corresponding confidence x. This allows users to gauge the condition of their hair and scalp intuitively from the score, for example by mapping mild cases to 40-60 points, moderate cases to 60-80 points, and severe cases to 80-100 points according to the confidence.
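Since the patent's exact mapping formula appears only as an image, the banded mapping just described (mild to 40-60, moderate to 60-80, severe to 80-100) can be sketched as follows. The use of sigmoid as the intermediate function follows the text, but the band offsets and the assumption that x is a raw score are illustrative guesses, not the patented function:

```python
import math

def sigmoid(x):
    """Intermediate mapping function squashing a raw score into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical lower edges of each 20-point score band.
BAND_LOW = {"mild": 40.0, "moderate": 60.0, "severe": 80.0}

def score(x, cls):
    """Map the category cls and its confidence x into that category's band."""
    return BAND_LOW[cls] + 20.0 * sigmoid(x)

print(score(0.0, "mild"))  # 50.0 (middle of the mild band)
print(score(4.0, "severe"))
```

A user-facing score like this is easier to interpret than a raw softmax confidence, which is presumably why the mapping exists.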
As shown in Figure 10, a scalp hair detection system includes:
an image acquisition module 1001, configured to acquire different scalp hair images;
a classification dataset annotation module 1002, configured to annotate and classify the scalp hair images according to scalp hair attributes to form a classification dataset based on scalp hair attributes;
a deep network model training module 1003, configured to input the annotated classification dataset images into the improved MobileNet deep network model for training to obtain a trained deep network model based on scalp hair attributes;
a detection result output module 1004, configured to input a scalp hair image to be tested into the trained deep network model to obtain detection results corresponding to the scalp hair attributes; the detection results include categories and the confidence levels corresponding to the categories.
For further functional descriptions of the image acquisition module 1001, the classification dataset annotation module 1002, the deep network model training module 1003, and the detection result output module 1004 in this embodiment, refer to the aforementioned scalp hair detection method.
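The inference path through these modules can be wired together roughly as follows (the class and method names are invented for illustration, and a toy callable stands in for the trained deep network model):

```python
class ScalpHairPipeline:
    """Minimal stand-in for the detection result output module: it feeds an
    image to the trained model and returns the top category and its confidence."""

    def __init__(self, model):
        self.model = model  # callable: image -> {category: confidence}

    def detect(self, image):
        preds = self.model(image)
        cls = max(preds, key=preds.get)  # highest-confidence category
        return {"category": cls, "confidence": preds[cls]}

# Toy "trained model" returning fixed per-class confidences.
toy_model = lambda img: {"mild": 0.1, "moderate": 0.7, "severe": 0.2}
pipeline = ScalpHairPipeline(toy_model)
result = pipeline.detect(object())
print(result)  # {'category': 'moderate', 'confidence': 0.7}
```

In a real deployment, the toy callable would be replaced by the trained deep network model produced by the training module.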
As shown in Figure 11, a scalp hair detection device 110 includes a memory 1101, a processor 1102, and a computer program 1103 stored in the memory 1101 and executable on the processor 1102; when the processor 1102 executes the program, the scalp hair detection method described above is implemented.
In one embodiment, the computer program 1103 may be divided into one or more modules/units, which are stored in the memory 1101 and executed by the processor 1102 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, the instruction segments being used to describe the execution process of the computer program 1103 in the scalp hair detection device 110.
The scalp hair detection device 110 may be a dedicated scalp hair detection instrument, or a computing device such as a mobile phone, desktop computer, notebook, palmtop computer, or cloud server. The scalp hair detection device 110 may include, but is not limited to, the processor 1102 and the memory 1101. Those skilled in the art will understand that Figure 11 is merely an example of the scalp hair detection device 110 and does not limit it; the device may include more or fewer components than shown, combine certain components, or use different components. For example, the scalp hair detection device 110 may also include input/output devices, network access devices, buses, and the like.
The processor 1102 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 1101 may be an internal storage unit of the scalp hair detection device 110, such as its hard disk or internal memory. The memory 1101 may also be an external storage device of the scalp hair detection device 110, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the scalp hair detection device 110. Further, the memory 1101 may include both an internal storage unit of the scalp hair detection device 110 and an external storage device. The memory 1101 is used to store the computer program 1103 and other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
The above are merely preferred specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any equivalent substitution or change made by a person skilled in the art within the technical scope disclosed by the present invention, based on its technical solutions and improvement concepts, shall fall within the protection scope of the present invention.
Industrial Applicability
The scalp hair detection method of the present invention uses an improved MobileNet deep network model to detect scalp hair attributes in scalp hair images, and finally outputs the category and confidence corresponding to those attributes. This improves computation speed and makes edge-side deployment more convenient, so the method has good industrial applicability.

Claims (16)

  1. A scalp hair detection method, characterized by comprising:
    acquiring different scalp hair images;
    annotating and classifying the scalp hair images according to scalp hair attributes to form a classification dataset based on scalp hair attributes;
    inputting the annotated classification dataset images into an improved MobileNet deep network model for training to obtain a trained deep network model based on scalp hair attributes;
    inputting a scalp hair image to be tested into the trained deep network model to obtain detection results corresponding to the scalp hair attributes, the detection results comprising categories and confidence levels corresponding to the categories.
  2. The scalp hair detection method according to claim 1, wherein the improved MobileNet deep network model comprises, in sequence: a first convolution layer, several block layers, a pooling layer, a second convolution layer, and a third convolution layer; each block layer comprises, in sequence: a fourth convolution layer, a depthwise convolution layer, and a fifth convolution layer; and each block layer is followed by a skip connection layer connected to the last block layer.
  3. The scalp hair detection method according to claim 2, wherein the improved MobileNet deep network model further comprises several first activation function layers, one first activation function layer being connected after each convolution layer to perform a nonlinear operation on the feature information of the scalp hair image extracted by that convolution layer.
  4. The scalp hair detection method according to claim 3, wherein the first activation function layer comprises a ReLU layer.
  5. The scalp hair detection method according to claim 2, wherein the end of the improved MobileNet deep network model comprises a fully connected layer that outputs three 1*1-channel images, and a second activation function layer connected to the fully connected layer activates and outputs the confidence of each class.
  6. The scalp hair detection method according to claim 5, wherein the second activation function layer comprises a Softmax layer.
  7. The scalp hair detection method according to claim 1, wherein the loss calculation function of the improved MobileNet deep network model is as follows:

    H(y, p) = -(1/N) * Σ_{i=1}^{N} Σ_{c=1}^{M} y_ic * log(p_ic)

    where H(y, p) denotes the model loss; y denotes the true values of the image labels in the test set; p denotes the predicted values of the labels output by the model; N denotes the number of images in the test set; M denotes the number of classes; c denotes the current output class; y_ic denotes the true value of the c-th class for the i-th sample; and p_ic denotes the predicted value output after the c-th class of the i-th sample is fed into the model.
  8. The scalp hair detection method according to claim 1, further comprising an accuracy calculation function as follows:

    Precision = TP / (TP + FP)

    where Precision denotes the accuracy of the current round's weights on this test set; TP denotes the number of correct judgments; and FP denotes the number of incorrect judgments.
  9. The scalp hair detection method according to claim 1, wherein after the detection results corresponding to the scalp hair attributes are obtained, the method further comprises: inputting the category and the confidence into a constructed score mapping function to obtain a score corresponding to the category and the confidence.
  10. The scalp hair detection method according to claim 9, wherein the score mapping function is specifically as follows:

    (formula images not reproduced)

    where x is the confidence output by the detection result; cls is the category output by the detection result; sigmoid(x) denotes the intermediate mapping function; f(x, cls) denotes the mapping function; and f denotes the score corresponding to the confidence.
  11. The scalp hair detection method according to claim 1, wherein the scalp hair attributes include at least one of hair thickness, degree of hair damage, hair oil, scalp stratum corneum, scalp redness, and subcutaneous oil of the hair follicles; each scalp hair attribute corresponds to one improved MobileNet deep network model.
  12. The scalp hair detection method according to claim 11, wherein, when the scalp hair image is acquired, scalp hair images can be collected under different light sources according to the scalp hair attributes, and the features of the scalp hair images are identified through tri-spectrum recognition technology.
  13. A scalp hair detection system, characterized by comprising:
    an image acquisition module, configured to acquire different scalp hair images;
    a classification dataset annotation module, configured to annotate and classify the scalp hair images according to scalp hair attributes to form a classification dataset based on scalp hair attributes;
    a deep network model training module, configured to input the annotated classification dataset images into an improved MobileNet deep network model for training to obtain a trained deep network model based on scalp hair attributes;
    a detection result output module, configured to input a scalp hair image to be tested into the trained deep network model to obtain detection results corresponding to the scalp hair attributes, the detection results comprising categories and confidence levels corresponding to the categories.
  14. The scalp hair detection system according to claim 13, wherein the scalp hair attributes include at least one of hair thickness, degree of hair damage, hair oil, scalp stratum corneum, scalp redness, and subcutaneous oil of the hair follicles; each scalp hair attribute corresponds to one improved MobileNet deep network model.
  15. The scalp hair detection system according to claim 14, wherein, when the image acquisition module acquires the scalp hair image, scalp hair images can be collected under different light sources according to the scalp hair attributes, and the features of the scalp hair images are identified through tri-spectrum recognition technology.
  16. A scalp hair detection device, characterized by comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the program, the scalp hair detection method according to any one of claims 1 to 12 is implemented.
PCT/CN2023/114216 2022-08-24 2023-08-22 Scalp hair detection method, system and device WO2024041524A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211023975.7 2022-08-24
CN202211023975.7A CN117710686A (en) 2022-08-24 2022-08-24 Scalp hair detection method, system and equipment

Publications (1)

Publication Number Publication Date
WO2024041524A1 true WO2024041524A1 (en) 2024-02-29

Family

ID=90012530

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/114216 WO2024041524A1 (en) 2022-08-24 2023-08-22 Scalp hair detection method, system and device

Country Status (2)

Country Link
CN (1) CN117710686A (en)
WO (1) WO2024041524A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110188598A (en) * 2019-04-13 2019-08-30 大连理工大学 A kind of real-time hand Attitude estimation method based on MobileNet-v2
CN111428655A (en) * 2020-03-27 2020-07-17 厦门大学 Scalp detection method based on deep learning
WO2021086594A1 (en) * 2019-10-28 2021-05-06 Google Llc Synthetic generation of clinical skin images in pathology
CN113591512A (en) * 2020-04-30 2021-11-02 青岛海尔智能技术研发有限公司 Method, device and equipment for hair identification
CN114120019A (en) * 2021-11-08 2022-03-01 贵州大学 Lightweight target detection method

Also Published As

Publication number Publication date
CN117710686A (en) 2024-03-15


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23856614

Country of ref document: EP

Kind code of ref document: A1