CN110543906A - Skin type automatic identification method based on data enhancement and Mask R-CNN model - Google Patents

Skin type automatic identification method based on data enhancement and Mask R-CNN model

Info

Publication number
CN110543906A
Authority
CN
China
Prior art keywords
skin
model
data
training
enhancement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910806679.6A
Other languages
Chinese (zh)
Other versions
CN110543906B (en)
Inventor
彭礼烨
梁倍源
黄思钊
毛勇健
徐阳
彭博韬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201910806679.6A (granted as CN110543906B)
Publication of CN110543906A
Application granted
Publication of CN110543906B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention claims a skin type automatic identification method based on data enhancement and a Mask R-CNN model. The scheme consists of five stages: data annotation, data enhancement, model training, parameter adjustment and model selection. It greatly reduces the human resources required in the skin care industry and enables efficient, rapid skin detection. The model is also adaptive and supports incremental learning, so recognition accuracy increases as the training data set expands and usage grows.

Description

Skin type automatic identification method based on data enhancement and Mask R-CNN model
Technical Field
The invention belongs to the field of deep learning image recognition, and in particular relates to an automatic skin type identification method based on data enhancement and a Mask R-CNN model.
Background
Skin type detection is very common in daily life, and a great variety of skin care products are developed for different skin types, so applying the right type of product to the right skin type is particularly important. However, existing skin detection means are relatively scarce: skin is generally assessed in person by doctors or cosmetologists, which consumes considerable human resources. Combining feature extraction algorithms with classifiers is currently a mainstream approach in the field of image recognition.
However, manually extracted features are not suitable for a general skin classification system, for the following main reasons: 1) skin features are numerous, and manual feature extraction is typically applicable only to skin with one or a limited number of features, so large-scale data sets are difficult to handle; 2) skin appearance shows high inter-class similarity and large intra-class variation, which makes skin type identification difficult. Automatic identification and classification methods are therefore important in this field. Yet automatic identification and classification based on skin images is a very challenging task: the limitations of classification and recognition algorithms, together with real-world noise in skin images (lighting, camera shake, picture noise, etc.), keep recognition accuracy low.
With the development of deep learning algorithms in recent years, the convolutional neural network and the Mask R-CNN algorithm built on it have come to prominence in the field of image detection, laying a foundation for overcoming the difficulties of skin identification. Based on the above problems, the invention proposes an automatic skin type identification method based on data enhancement and the Mask R-CNN algorithm.
Disclosure of Invention
The present invention is directed to solving the above problems of the prior art. The proposed automatic skin type identification method based on data enhancement and a Mask R-CNN model can improve identification efficiency and accuracy and provide auxiliary support. The technical scheme of the invention is as follows:
A skin type classification automatic identification method based on data enhancement and a Mask R-CNN model, characterized by comprising the following steps:
Step S1, annotating a database composed of a large number of known skin images of different skin types, labeling the features of the skin images, including position features and type features, and dividing the skin images into a training image set, a test image set and a verification image set;
Step S2, performing offline data enhancement on the training image set of the annotated known skin images, using four data enhancement methods (flipping, rotating, zooming and cropping) to multiply the number of samples by an enhancement factor, where the enhancement factor is the multiple by which the data set grows after offline enhancement;
Step S3, using a transfer learning method, training from a model pre-trained on the Microsoft COCO data set to obtain optimized initial parameters, which improves the training speed, recognition rate and generalization ability of the model; selecting 6000 annotated skin photos as the training image set, 2000 as the test image set and 2000 as the verification image set; checking the accuracy of the model on the verification image set, and adjusting the model parameters according to the training results until the model converges;
Step S4, repeating steps S2 and S3 to train multiple models, comparing their evaluation indexes, and selecting the optimal model with a multi-objective optimization algorithm to complete automatic identification.
Further, the specific implementation of step S1 is as follows: the yolo_mark image detection annotation tool, which runs under the Windows system and depends on the OpenCV library, is used to label the positions and types of the known target images, with five skin types: dry skin, oily skin, combination skin, neutral skin and sensitive skin. Image information is recorded in JSON-format files, and the skin image data set is then divided into a training set, a verification set and a test set in the proportion 60%, 20% and 20%.
Further, the offline data enhancement of step S2 comprises the following steps:
S2.1, defining an enhancement factor of 2, i.e. the data set doubles after offline enhancement, and mirror-flipping the skin pictures;
S2.2, defining an enhancement factor of 4, and rotating the skin pictures by 90 degrees clockwise or anticlockwise;
S2.3, randomly enlarging or shrinking the skin pictures, then cropping them back to the original size.
Further, the specific implementation of step S3 is as follows: a Mask R-CNN model is built with the TensorFlow deep learning framework and trained starting from weights pre-trained on the Microsoft COCO data set:
Step S3.1, taking the skin images obtained after offline data enhancement as the input of the convolutional neural network and performing feature extraction, as follows: S3.1.1, the preprocessed skin images of different sizes are scaled to a fixed size and input into the convolutional neural network; S3.1.2, the network applies multiple convolution and pooling operations to obtain a skin feature map;
Step S3.2, generating recommended candidate regions with an RPN (region proposal network), outputting M candidate regions per picture;
Step S3.3, mapping the candidate regions onto the last convolutional layer of the convolutional neural network;
Step S3.4, generating a fixed-size feature map for each candidate region through a RoI Align layer, so that pixels in the skin image are exactly aligned with pixels in the feature map;
Step S3.5, feeding the output of the previous layer into a fully connected layer, classifying the candidate regions, obtaining the final class probabilities with a softmax function, and judging the skin type from these probabilities;
Step S3.6, after the model has been trained on the training set for a certain number of epochs, pausing training, saving the model training data, and observing how the loss function value changes with the training epoch;
Step S3.7, if the loss function value shows a descending trend, continuing to train the model until convergence; otherwise, if the loss function value fluctuates or rises, adjusting the model parameters and restarting training.
Further, the step S3.1 of extracting the convolutional neural network features includes the following steps:
s3.1.1 the image is first normalized and subtracted by the mean of the pixels in the data set to produce a 224x224 size image, and the input layer is responsible for loading the image from the preprocessed skin picture data set.
s3.1.2 the convolution layer takes the feature map as a unit, the convolution kernel represents the feature, each unit acts on the local area of the upper layer feature map through the convolution kernel, and the local feature of the image is obtained through the weighting of the local area and the ReLU nonlinear processing.
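As an illustration only, a minimal TensorFlow sketch of such a convolution-ReLU-pooling feature extraction block (the filter counts and the 224x224x3 input are assumptions for the example, not values fixed by the patent):

    import tensorflow as tf

    # Minimal convolution + ReLU + pooling stack of the kind described in S3.1.2;
    # layer sizes are illustrative assumptions.
    def feature_extractor():
        inputs = tf.keras.Input(shape=(224, 224, 3))
        x = inputs
        for filters in (64, 128, 256):
            # each convolution kernel acts on a local region of the previous feature map
            x = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
            # pooling reduces the spatial size of the feature map
            x = tf.keras.layers.MaxPooling2D(2)(x)
        return tf.keras.Model(inputs, x, name="skin_feature_map")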
Further, the feature recognition with a Softmax classifier in step S3.5 comprises the following steps:
S3.5.1, assuming that there are N input skin pictures to be identified and k target classes (k = 5), then for a test picture xi the probability that xi belongs to class j is the conditional probability p(yi = j | xi), and the hypothesis function hθ(xi) estimates the probability of each class as:
hθ(xi) = [p(yi = 1 | xi; θ), ..., p(yi = k | xi; θ)]^T = (1 / Σ_{l=1..k} e^(θl^T xi)) · [e^(θ1^T xi), ..., e^(θk^T xi)]^T
where θ1, ..., θk are the parameters of the model, k is the number of classes, xi is test image i, and the factor 1 / Σ_{l=1..k} e^(θl^T xi) normalizes the probability distribution so that the class probabilities sum to one.
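A small numerical sketch of this hypothesis function in NumPy (the feature dimension and parameter values are invented for illustration):

    import numpy as np

    def h_theta(theta, x):
        # theta has shape (k, d); x has shape (d,); returns k class probabilities
        logits = theta @ x                  # theta_l^T xi for each class l
        logits -= logits.max()              # stabilization; does not change the result
        exp = np.exp(logits)
        return exp / exp.sum()              # the normalization term of the distribution

    rng = np.random.default_rng(0)
    theta = rng.normal(size=(5, 16))        # k = 5 skin types, 16-d features (assumed)
    xi = rng.normal(size=16)                # feature vector of one test picture
    probs = h_theta(theta, xi)
    print(probs.round(3), probs.sum())      # five probabilities summing to 1.0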
Further, the loss function of step S3.6 is expressed as:
L = Lcls + Lreg + Lmask
where Lcls is the classification error, Lreg is the detection (bounding-box regression) error, and Lmask is the segmentation error.
Lcls and Lreg are computed from the class and target regression box coordinates predicted for each candidate region (RoI) by the fully connected layers; Lmask represents the error of segmenting each candidate region. The segmentation output has dimension k·m·m (k is the number of classes, m the side length of the mask feature map), i.e. k binary masks are encoded, one for each class; a sigmoid function is applied to each pixel to obtain the binary cross-entropy.
Model training uses a mini-batch method with the following main parameter settings: base learning rate 0.01, momentum factor 0.9, regularization decay coefficient 0.0001.
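For illustration, a hedged TensorFlow sketch of the per-pixel sigmoid mask loss and of an SGD optimizer configured with the stated hyperparameters (tensor shapes and variable names are assumptions):

    import tensorflow as tf

    def mask_loss(mask_logits, gt_masks, gt_classes):
        # Lmask: per-pixel sigmoid binary cross-entropy on the mask channel of each
        # RoI's ground-truth class. Assumed shapes: mask_logits [n, m, m, k],
        # gt_masks [n, m, m] (float 0/1), gt_classes [n] (int32).
        idx = tf.stack([tf.range(tf.shape(gt_classes)[0]), gt_classes], axis=1)
        per_class_first = tf.transpose(mask_logits, [0, 3, 1, 2])   # [n, k, m, m]
        logits = tf.gather_nd(per_class_first, idx)                 # [n, m, m]
        bce = tf.nn.sigmoid_cross_entropy_with_logits(labels=gt_masks, logits=logits)
        return tf.reduce_mean(bce)

    # SGD with the parameters stated above: base learning rate 0.01, momentum 0.9.
    # The 0.0001 regularization coefficient would be attached to layers as an L2
    # kernel regularizer, e.g. tf.keras.regularizers.l2(1e-4).
    optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)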
Further, the specific implementation of step S4 is as follows: the same training set is used to train multiple models, and the model with the best indexes is selected by a multi-objective optimization algorithm (a non-dominated sorting genetic algorithm with an elitist strategy, commonly known as NSGA-II) according to the recall, precision and F value of each model. A simplified sketch of the selection step follows.
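A minimal sketch of the non-dominated sorting at the heart of such a selection (plain Python; the full NSGA-II crowding distance and genetic operators are omitted, and the scores are invented):

    # Each trained model is scored by (recall, precision, F value), all maximized.
    def dominates(a, b):
        # a dominates b if it is at least as good on every index and better on one
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    def pareto_front(scores):
        # indices of non-dominated models: the first front of non-dominated sorting
        return [i for i, s in enumerate(scores)
                if not any(dominates(t, s) for j, t in enumerate(scores) if j != i)]

    scores = [(0.91, 0.88, 0.89), (0.90, 0.92, 0.91),
              (0.85, 0.80, 0.82), (0.91, 0.92, 0.91)]
    print(pareto_front(scores))   # -> [3]: the last model dominates the others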
The invention has the following advantages and beneficial effects:
The invention provides an automatic skin type identification and classification method for skin images: skin images are automatically identified and classified by a trained Mask R-CNN model. To prevent the overfitting that limited image data would otherwise cause in the deep learning model, data preprocessing uses data enhancement and transfer learning. With this method, the efficiency and accuracy of skin type identification can be improved, and a more accurate and effective skin care scheme can be formulated for a user according to the characteristics of each skin type.
(1) Compared with traditional convolutional network algorithms, the adopted Mask R-CNN, an advanced target detection algorithm, reduces the amount of computation, improves efficiency and raises identification accuracy.
(2) The method is adaptive and supports incremental learning.
(3) The method can effectively reduce human misdiagnosis in skin detection, greatly improve the efficiency of skin detection, and save substantial resources in the skin care industry.
(4) The method innovatively combines an image detection algorithm with skin type classification, so skin type detection can be carried out quickly and conveniently, and a user can select targeted skin care products and devise a suitable beauty scheme according to the detection result.
(5) The method innovatively combines a multi-objective optimization algorithm (a non-dominated sorting genetic algorithm with an elitist strategy) with model selection, so that the optimal model can be selected more accurately from a large number of trained models.
Drawings
FIG. 1 is the overall network structure of the Mask R-CNN model according to the preferred embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and in detail below with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
The invention is based on the target detection algorithm Mask R-CNN, which mainly comprises two modules: the first is an RPN network that generates candidate regions; the second performs target detection through RoI Align and outputs a binary mask through a fully convolutional network. The specific steps of the scheme are as follows:
(1) Data annotation: annotating the original skin disease images and outlining the lesion areas;
(2) Data enhancement: performing offline data enhancement on the annotated skin disease images, multiplying the number of samples by an enhancement factor relative to the original data set, where the enhancement factor is the multiple by which the data grows after offline enhancement. The invention adopts four data enhancement methods: flipping, rotating, zooming and cropping.
(3) Model training: initializing the Mask R-CNN model, whose overall network structure is shown in FIG. 1:
i. Convolutional neural network: extracts the feature map of the skin disease lesion; this feature map is shared by the RPN network and the fully connected layers.
ii. RPN network: generates the recommended candidate regions.
iii. RoI Align layer: generates a fixed-size feature map for each candidate region, with the pixels in the skin disease image exactly aligned to the pixels in the feature map.
iv. Fully connected layers: obtain the final output class probabilities with a softmax function.
(4) Parameter adjustment: at the start of training the learning rate is set to 0.1; the high learning rate first finds an approximate global optimum, after which a smaller learning rate refines the solution so that the global optimum is approached.
Parameter adjustment covers several cases (a callback sketch follows the list):
When the loss function value shows a descending trend, continue training the model until convergence.
When the loss function value fluctuates or rises, decrease the learning rate.
When the model does not converge, increase the number of mini-batches and reduce the number of nodes in the fully connected layer.
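In Keras terms, the second case can be automated with a standard callback; a sketch under assumed parameter values (factor, patience and the monitored quantity are illustrative):

    import tensorflow as tf

    # Reduce the learning rate when the loss stops descending (fluctuates or rises).
    reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
        monitor="loss",   # watch the training loss described in step (4)
        factor=0.1,       # multiply the learning rate by 0.1 on a plateau
        patience=3,       # epochs without improvement before reducing
        min_lr=1e-5,
    )
    # model.fit(train_data, epochs=..., callbacks=[reduce_lr])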
(5) Model selection: according to the precision, recall and F value of the models trained in the above steps, the optimal model is selected with the multi-objective optimization algorithm.
The invention provides an automatic skin disease identification method based on data enhancement, transfer learning and a Mask R-CNN model, comprising the following steps:
Step S1, annotating a database composed of a large number of skin disease images, labeling the lesion position and disease type in each image, and then dividing the images into a training image set, a test image set and a verification image set.
Step S2, performing offline data enhancement on the annotated skin disease images, multiplying the number of samples by an enhancement factor relative to the original data set, where the enhancement factor is the multiple by which the data grows after offline enhancement. Four data enhancement methods are adopted: flipping, rotating, zooming and cropping.
Step S3, using a transfer learning method, migrating a model pre-trained on ImageNet to the data-enhanced training image set for training to obtain optimized initial parameters, thereby improving the training speed, recognition rate and generalization ability of the model; checking the accuracy of the model on the verification image set, and adjusting the model parameters according to the training results until the model converges.
Step S4, repeating steps S2 and S3 to train multiple models, comparing their evaluation indexes, and selecting the optimal model with the multi-objective optimization algorithm.
The specific implementation of step S1 is as follows: the lesion position and disease type are labeled on each skin disease image with the yolo_mark image detection annotation tool, which runs under the Windows system and depends on the OpenCV library. Image information is recorded in JSON-format files, including the name of the skin disease image, the image size, the lesion position and the disease type; the skin disease image data set is then divided into a training set, a verification set and a test set in the proportion 60%, 20% and 20% (a sketch of this split appears below).
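A minimal sketch of the 60%/20%/20% split (Python; the annotation file name and its JSON fields are hypothetical placeholders, not names from the patent):

    import json
    import random

    # "annotations.json" with fields such as "image", "size", "lesion_box" and
    # "disease_type" is an assumed layout for the JSON records described above.
    with open("annotations.json") as f:
        records = json.load(f)

    random.seed(42)                                  # reproducible split
    random.shuffle(records)

    n = len(records)
    train = records[: int(0.6 * n)]                  # 60% training set
    val = records[int(0.6 * n): int(0.8 * n)]        # 20% verification set
    test = records[int(0.8 * n):]                    # 20% test set
    print(len(train), len(val), len(test))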
The offline data enhancement of step S2 comprises the following steps (a code sketch follows the list):
S2.1, defining an enhancement factor of 2 (the data set doubles after offline enhancement) and mirror-flipping the pictures of the affected skin.
S2.2, defining an enhancement factor of 4 (the data set quadruples after offline enhancement) and rotating the pictures of the affected skin by 90 degrees clockwise or anticlockwise.
S2.3, randomly enlarging or shrinking the pictures of the affected skin, then cropping them back to the original size.
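A hedged OpenCV sketch of the three operations (the input file and the zoom range are assumptions; only zoom-in is shown so the crop always fits):

    import random
    import cv2

    img = cv2.imread("skin_sample.jpg")    # hypothetical picture of the affected skin

    # S2.1: mirror flip (enhancement factor 2); 1 = flip around the vertical axis
    flipped = cv2.flip(img, 1)

    # S2.2: rotate 90 degrees clockwise or anticlockwise (enhancement factor 4)
    rotated = cv2.rotate(img, random.choice(
        [cv2.ROTATE_90_CLOCKWISE, cv2.ROTATE_90_COUNTERCLOCKWISE]))

    # S2.3: random zoom, then crop back to the original size
    h, w = img.shape[:2]
    scale = random.uniform(1.0, 1.3)       # zoom range is an assumption
    zoomed = cv2.resize(img, (int(w * scale), int(h * scale)))
    top = (zoomed.shape[0] - h) // 2
    left = (zoomed.shape[1] - w) // 2
    cropped = zoomed[top:top + h, left:left + w]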
The specific implementation of step S3 is as follows: a convolutional neural network model is built with the TensorFlow deep learning framework and trained starting from weights pre-trained on the Microsoft COCO data set. The training proceeds as follows:
Step S3.1, taking the preprocessed skin images as the input of the convolutional neural network and extracting features, as follows: S3.1.1, the preprocessed skin images of different sizes are scaled to a fixed size and input into the convolutional neural network; S3.1.2, the network applies multiple convolution and pooling operations to obtain a skin feature map.
Step S3.2, generating recommended candidate regions with the RPN (region proposal network), outputting M candidate regions per picture.
Step S3.3, mapping the candidate regions onto the last convolutional layer of the convolutional neural network.
Step S3.4, generating a fixed-size feature map for each candidate region through the RoI Align layer, with the pixels in the skin disease image exactly aligned to the pixels in the feature map.
Step S3.5, feeding the output of the previous layer into the fully connected layer, obtaining the final class probabilities with a softmax function, and judging the skin type from these probabilities.
Step S3.6, after the model has been trained on the training set for a certain number of epochs, pausing training, saving the model training data, and observing how the loss function value changes with the training epoch. Model training uses a mini-batch method with the following main parameter settings: base learning rate 0.01, momentum factor 0.9, regularization decay coefficient 0.0001 (a checkpointing sketch follows).
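A minimal sketch of pause-friendly training with standard Keras callbacks (file names are illustrative):

    import tensorflow as tf

    callbacks = [
        # save the weights every epoch so training can be paused and resumed
        tf.keras.callbacks.ModelCheckpoint(
            "ckpt_epoch{epoch:02d}.weights.h5", save_weights_only=True),
        # log the loss per epoch to inspect its trend (descending, fluctuating,
        # or rising), as used in steps S3.6 and S3.7
        tf.keras.callbacks.CSVLogger("training_loss.csv"),
    ]
    # model.fit(train_images, train_labels, epochs=..., callbacks=callbacks)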
Step S3.7, if the loss function value shows a descending trend, continue training the model until convergence; otherwise, if the loss function value fluctuates or rises, adjust the model parameters and restart training.
The specific implementation of step S4 is as follows: multiple models are trained with the same training set, and the model with the best indexes is selected by the multi-objective optimization algorithm according to the recall, precision and F value of each model.
The above embodiments are to be construed as merely illustrative and not limiting of the disclosure. After reading this description, a skilled person can make various changes or modifications to the invention, and such equivalent changes and modifications likewise fall within the scope of the invention defined by the claims.

Claims (9)

1. A skin type classification automatic identification method based on data enhancement and a Mask R-CNN model, characterized by comprising the following steps:
Step S1, annotating a database composed of a large number of known skin images of different skin types, labeling the features of the skin images, including position features and type features, and dividing the skin images into a training image set, a test image set and a verification image set;
Step S2, performing offline data enhancement on the training image set of the annotated known skin images, using four data enhancement methods (flipping, rotating, zooming and cropping) to multiply the number of samples by an enhancement factor, where the enhancement factor is the multiple by which the data set grows after offline enhancement;
Step S3, using a transfer learning method, training from a model pre-trained on the Microsoft COCO data set to obtain optimized initial parameters, which improves the training speed, recognition rate and generalization ability of the model; selecting 6000 annotated skin photos as the training image set, 2000 as the test image set and 2000 as the verification image set; checking the accuracy of the model on the verification image set, and adjusting the model parameters according to the training results until the model converges;
Step S4, repeating steps S2 and S3 to train multiple models, comparing their evaluation indexes, and selecting the optimal model with a multi-objective optimization algorithm to complete automatic identification.
2. The method for automatic skin type identification based on data enhancement and a Mask R-CNN model according to claim 1, wherein step S1 is implemented as follows: the yolo_mark image detection annotation tool, which runs under the Windows system and depends on the OpenCV library, is used to label the positions and types of the known target images, with five skin types: dry skin, oily skin, combination skin, neutral skin and sensitive skin; image information is recorded in JSON-format files, and the skin image data set is then divided into a training set, a verification set and a test set in the proportion 60%, 20% and 20%.
3. The method for automatic skin type identification based on data enhancement and a Mask R-CNN model according to claim 2, wherein the offline data enhancement of step S2 comprises the following steps:
S2.1, defining an enhancement factor of 2, i.e. the data set doubles after offline enhancement, and mirror-flipping the skin pictures;
S2.2, defining an enhancement factor of 4, and rotating the skin pictures by 90 degrees clockwise or anticlockwise;
S2.3, randomly enlarging or shrinking the skin pictures, then cropping them back to the original size.
4. The method for automatic skin type identification based on data enhancement and a Mask R-CNN model according to claim 3, wherein step S3 is implemented as follows: a Mask R-CNN model is built with the TensorFlow deep learning framework and trained starting from weights pre-trained on the Microsoft COCO data set:
Step S3.1, taking the skin images obtained after offline data enhancement as the input of the convolutional neural network and performing feature extraction, as follows: S3.1.1, the preprocessed skin images of different sizes are scaled to a fixed size and input into the convolutional neural network; S3.1.2, the network applies multiple convolution and pooling operations to obtain a skin feature map;
Step S3.2, generating recommended candidate regions with an RPN (region proposal network), outputting M candidate regions per picture;
Step S3.3, mapping the candidate regions onto the last convolutional layer of the convolutional neural network;
Step S3.4, generating a fixed-size feature map for each candidate region through a RoI Align layer, so that pixels in the skin image are exactly aligned with pixels in the feature map;
Step S3.5, feeding the output of the previous layer into a fully connected layer, classifying the candidate regions, obtaining the final class probabilities with a softmax function, and judging the skin type from these probabilities;
Step S3.6, after the model has been trained on the training set for a certain number of epochs, pausing training, saving the model training data, and observing how the loss function value changes with the training epoch;
Step S3.7, if the loss function value shows a descending trend, continuing to train the model until convergence; otherwise, if the loss function value fluctuates or rises, adjusting the model parameters and restarting training.
5. The method for automatic identification based on data enhancement and a Mask R-CNN model according to claim 4, wherein the convolutional neural network feature extraction of step S3.1 comprises the following steps:
S3.1.1, the input layer loads images from the preprocessed skin picture data set; each image is first normalized by subtracting the per-pixel mean of the data set and scaled to a 224x224 image;
S3.1.2, the convolutional layers operate on feature maps: each convolution kernel represents a feature and acts on a local region of the previous layer's feature map; local image features are obtained by weighting this local region and applying the ReLU nonlinearity.
6. The method for automatic identification based on data enhancement and a Mask R-CNN model according to claim 4, wherein the feature recognition with a Softmax classifier in step S3.5 comprises the following steps:
S3.5.1, assuming that there are N input skin pictures to be identified and k target classes (k = 5), then for a test picture xi the probability that xi belongs to class j is the conditional probability p(yi = j | xi), and the hypothesis function hθ(xi) estimates the probability of each class as:
hθ(xi) = [p(yi = 1 | xi; θ), ..., p(yi = k | xi; θ)]^T = (1 / Σ_{l=1..k} e^(θl^T xi)) · [e^(θ1^T xi), ..., e^(θk^T xi)]^T
where θ1, ..., θk are the parameters of the model, k is the number of classes, xi is test image i, and the factor 1 / Σ_{l=1..k} e^(θl^T xi) normalizes the probability distribution.
7. The method for automatic identification based on data enhancement and a Mask R-CNN model according to claim 4, wherein the loss function of step S3.6 is expressed as:
L = Lcls + Lreg + Lmask
where Lcls is the classification error, Lreg is the detection error, and Lmask is the segmentation error; Lcls and Lreg are computed from the class and target regression box coordinates predicted for each candidate region (RoI) by the fully connected layers, and Lmask represents the error of segmenting each candidate region. The segmentation output has dimension k·m·m (k is the number of classes, m the side length of the mask feature map), i.e. k binary masks are encoded, one for each class; a sigmoid function is applied to each pixel to obtain the binary cross-entropy.
8. The method for automatic skin type identification based on data enhancement and a Mask R-CNN model according to claim 4, wherein a mini-batch training method is adopted with the following main parameter settings: base learning rate 0.01, momentum factor 0.9, regularization decay coefficient 0.0001.
9. The method for automatic identification based on data enhancement and a Mask R-CNN model according to claim 4, wherein step S4 is implemented as follows: multiple models are trained with the same training set, and the model with the best indexes is selected by a multi-objective optimization algorithm according to the recall, precision and F value of each model.
CN201910806679.6A 2019-08-29 2019-08-29 Automatic skin recognition method based on Mask R-CNN model Active CN110543906B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910806679.6A CN110543906B (en) 2019-08-29 2019-08-29 Automatic skin recognition method based on Mask R-CNN model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910806679.6A CN110543906B (en) 2019-08-29 2019-08-29 Automatic skin recognition method based on Mask R-CNN model

Publications (2)

Publication Number Publication Date
CN110543906A true CN110543906A (en) 2019-12-06
CN110543906B CN110543906B (en) 2023-06-16

Family

ID=68710889

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910806679.6A Active CN110543906B (en) 2019-08-29 2019-08-29 Automatic skin recognition method based on Mask R-CNN model

Country Status (1)

Country Link
CN (1) CN110543906B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111310827A (en) * 2020-02-14 2020-06-19 北京工业大学 Target area detection method based on double-stage convolution model
CN111368453A (en) * 2020-03-17 2020-07-03 创新奇智(合肥)科技有限公司 Fabric cutting optimization method based on deep reinforcement learning
CN112241836A (en) * 2020-10-10 2021-01-19 天津大学 Virtual load dominant parameter identification method based on incremental learning
CN112435237A (en) * 2020-11-24 2021-03-02 山西三友和智慧信息技术股份有限公司 Skin lesion segmentation method based on data enhancement and depth network
WO2022222224A1 (en) * 2021-04-19 2022-10-27 平安科技(深圳)有限公司 Deep learning model-based data augmentation method and apparatus, device, and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180039864A1 (en) * 2015-04-14 2018-02-08 Intel Corporation Fast and accurate skin detection using online discriminative modeling
CN109730769A (en) * 2018-12-10 2019-05-10 华南理工大学 A kind of skin neoplasin based on machine vision is precisely performed the operation intelligent method for tracing and system
CN109785321A (en) * 2019-01-30 2019-05-21 杭州又拍云科技有限公司 Meibomian gland method for extracting region based on deep learning and Gabor filter
CN110148121A (en) * 2019-05-09 2019-08-20 腾讯科技(深圳)有限公司 A kind of skin image processing method, device, electronic equipment and medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180039864A1 (en) * 2015-04-14 2018-02-08 Intel Corporation Fast and accurate skin detection using online discriminative modeling
CN109730769A (en) * 2018-12-10 2019-05-10 华南理工大学 A kind of skin neoplasin based on machine vision is precisely performed the operation intelligent method for tracing and system
CN109785321A (en) * 2019-01-30 2019-05-21 杭州又拍云科技有限公司 Meibomian gland method for extracting region based on deep learning and Gabor filter
CN110148121A (en) * 2019-05-09 2019-08-20 腾讯科技(深圳)有限公司 A kind of skin image processing method, device, electronic equipment and medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Erick Alfaro, "A Brief Analysis of U-Net and Mask R-CNN for Skin Lesion Segmentation", IWOBI 2019 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111310827A (en) * 2020-02-14 2020-06-19 北京工业大学 Target area detection method based on double-stage convolution model
CN111368453A (en) * 2020-03-17 2020-07-03 创新奇智(合肥)科技有限公司 Fabric cutting optimization method based on deep reinforcement learning
CN111368453B (en) * 2020-03-17 2023-07-07 创新奇智(合肥)科技有限公司 Fabric cutting optimization method based on deep reinforcement learning
CN112241836A (en) * 2020-10-10 2021-01-19 天津大学 Virtual load dominant parameter identification method based on incremental learning
CN112241836B (en) * 2020-10-10 2022-05-20 天津大学 Virtual load leading parameter identification method based on incremental learning
CN112435237A (en) * 2020-11-24 2021-03-02 山西三友和智慧信息技术股份有限公司 Skin lesion segmentation method based on data enhancement and depth network
WO2022222224A1 (en) * 2021-04-19 2022-10-27 平安科技(深圳)有限公司 Deep learning model-based data augmentation method and apparatus, device, and medium

Also Published As

Publication number Publication date
CN110543906B (en) 2023-06-16

Similar Documents

Publication Publication Date Title
CN109299274B (en) Natural scene text detection method based on full convolution neural network
CN109145979B (en) Sensitive image identification method and terminal system
CN108830188B (en) Vehicle detection method based on deep learning
CN110543906A (en) Skin type automatic identification method based on data enhancement and Mask R-CNN model
CN111753828B (en) Natural scene horizontal character detection method based on deep convolutional neural network
CN108648191B (en) Pest image recognition method based on Bayesian width residual error neural network
CN105512638B (en) A kind of Face datection and alignment schemes based on fusion feature
CN107808129B (en) Face multi-feature point positioning method based on single convolutional neural network
CN107967456A (en) A kind of multiple neural network cascade identification face method based on face key point
Kadam et al. Detection and localization of multiple image splicing using MobileNet V1
CN110321967B (en) Image classification improvement method based on convolutional neural network
CN107784288B (en) Iterative positioning type face detection method based on deep neural network
CN108009222B (en) Three-dimensional model retrieval method based on better view and deep convolutional neural network
CN111027493A (en) Pedestrian detection method based on deep learning multi-network soft fusion
CN110929713B (en) Steel seal character recognition method based on BP neural network
CN111652273B (en) Deep learning-based RGB-D image classification method
CN113221956B (en) Target identification method and device based on improved multi-scale depth model
CN115861715B (en) Knowledge representation enhancement-based image target relationship recognition algorithm
CN114492634B (en) Fine granularity equipment picture classification and identification method and system
Udawant et al. Cotton leaf disease detection using instance segmentation
WO2020119624A1 (en) Class-sensitive edge detection method based on deep learning
CN109508670B (en) Static gesture recognition method based on infrared camera
CN115049952A (en) Juvenile fish limb identification method based on multi-scale cascade perception deep learning network
Yang et al. An improved algorithm for the detection of fastening targets based on machine vision
CN114758382A (en) Face AU detection model establishing method and application based on adaptive patch learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant