CN110543906B - Automatic skin recognition method based on Mask R-CNN model

Automatic skin recognition method based on Mask R-CNN model

Info

Publication number
CN110543906B
CN110543906B (application CN201910806679.6A)
Authority
CN
China
Prior art keywords
skin
model
training
mask
image
Prior art date
Legal status
Active
Application number
CN201910806679.6A
Other languages
Chinese (zh)
Other versions
CN110543906A (en)
Inventor
彭礼烨
梁倍源
黄思钊
毛勇健
徐阳
彭博韬
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN201910806679.6A
Publication of CN110543906A
Application granted
Publication of CN110543906B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/24: Classification techniques
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an automatic skin identification method based on data augmentation and a Mask R-CNN model, comprising five stages: data annotation, data augmentation, model training, parameter adjustment, and model selection. The method greatly reduces the human resources needed in the skin-care industry and enables efficient, rapid skin detection. The model is also adaptive and capable of incremental learning, achieving higher recognition accuracy as the training data set expands and usage grows.

Description

Automatic skin recognition method based on Mask R-CNN model
Technical Field
The invention belongs to the field of deep-learning image recognition, and in particular relates to an automatic skin recognition method based on a Mask R-CNN model.
Background
Skin detection is common in daily life, and a wide variety of skin-care products are developed for different skin types, so matching the right product to a given skin type is particularly important. However, the prior art offers relatively few means of detecting skin type; detection is generally performed by doctors or beauticians. To address this problem, the invention proposes an image-recognition method for skin detection, which is more objective and convenient than examination by doctors or beauticians and can greatly reduce the labor and material cost of skin care for users. Combining a feature-extraction algorithm with a classifier is currently the mainstream approach in the image-recognition field.
However, manually extracted features are not suitable for a general skin classification system, for two main reasons: 1) skin features are numerous, and manually extracted features typically apply only to skin with one or a limited variety of features, making it difficult to scale to larger data sets; 2) skin appearance shows high inter-class similarity and large intra-class variation, which makes skin types hard to distinguish. Automatic recognition and classification methods are therefore particularly important in this field. Yet automatic recognition and classification from skin images is a challenging task: classification constraints and real-world noise (lighting, camera shake, image noise, etc.) lower recognition accuracy.
With the recent development of deep-learning algorithms, convolutional neural networks and the Mask R-CNN algorithm built on them have taken the image-detection field to a new level and laid a foundation for solving the difficulties of skin recognition. Based on the problems above, the invention provides an automatic skin recognition method based on data augmentation and the Mask R-CNN algorithm.
Disclosure of Invention
The present invention is directed to solving the above problems of the prior art. The proposed automatic skin identification method based on the Mask R-CNN model improves identification efficiency and accuracy and provides auxiliary decision support. The technical scheme of the invention is as follows:
An automatic skin identification method based on the Mask R-CNN model comprises the following steps:
Step S1: annotate a database composed of a large number of known skin images of different skin types, marking the position features and type features in the skin images, and divide the images into a training image set, a test image set, and a validation image set;
Step S2: apply offline data augmentation to the training set of annotated skin images, using four augmentation methods (flipping, rotation, scaling, and cropping), so that the amount of data becomes the enhancement factor times the size of the original data set, where the enhancement factor is the multiple by which the data grows after offline augmentation;
Step S3: using transfer learning, migrate a model pre-trained on ImageNet to the augmented training set for further training to obtain optimized initial parameters, which speeds up training and improves the recognition rate and generalization ability of the model (a minimal code sketch follows this list); select 6000 annotated skin photos as the training set, 2000 as the test set, and 2000 as the validation set; check the model's accuracy on the validation set and adjust its parameters from the training results until the model converges;
Step S4: repeat steps S2 and S3 to train several models, compare their evaluation indexes, and select the optimal model with a multi-objective optimization algorithm to complete automatic target recognition.
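As an illustration of the transfer-learning setup in step S3, the minimal Python/TensorFlow sketch below loads an ImageNet-pretrained backbone and attaches a five-way skin-type head; the ResNet-50 backbone and the layer choices are assumptions for illustration, since the invention does not fix a particular backbone.

    import tensorflow as tf

    NUM_CLASSES = 5  # dry, oily, combination, neutral, sensitive (step S1)

    # ImageNet weights supply the "optimized initial parameters" of step S3.
    backbone = tf.keras.applications.ResNet50(
        weights="imagenet", include_top=False,
        input_shape=(224, 224, 3), pooling="avg")
    backbone.trainable = True  # fine-tune on the augmented skin training set

    model = tf.keras.Sequential([
        backbone,
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
        loss="sparse_categorical_crossentropy", metrics=["accuracy"])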
Further, step S1 is implemented as follows: the positions and types in the known target images are annotated with the yolo_mark image-annotation tool; the skin types are five: dry, oily, combination, neutral, and sensitive.
Further, the offline data augmentation of step S2 comprises the following steps (a code sketch follows the list):
S2.1: define the enhancement factor as 2, i.e. the data doubles after offline augmentation, and mirror-flip each skin picture;
S2.2: define the enhancement factor as 4 and rotate each skin picture 90 degrees clockwise or counter-clockwise;
S2.3: randomly enlarge or shrink each skin picture, then crop it back to the original size.
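The sketch below, in Python/TensorFlow, shows one way to realize the four augmentations of S2.1 to S2.3; the scaling range and the function names beyond the tf.image calls are assumptions for illustration.

    import tensorflow as tf

    def augment_offline(image):
        # Produce the offline variants of one skin picture.
        variants = [image]
        # S2.1: mirror flip (enhancement factor 2: the data set doubles)
        variants.append(tf.image.flip_left_right(image))
        # S2.2: 90-degree rotations, counter-clockwise and clockwise (factor 4)
        variants.append(tf.image.rot90(image, k=1))
        variants.append(tf.image.rot90(image, k=3))
        # S2.3: random enlargement, then crop back to the original size
        h, w = image.shape[0], image.shape[1]
        scale = tf.random.uniform([], minval=1.0, maxval=1.5)  # assumed range
        new_hw = tf.cast(tf.round(scale * tf.cast([h, w], tf.float32)), tf.int32)
        scaled = tf.image.resize(image, new_hw)
        variants.append(tf.image.resize_with_crop_or_pad(scaled, h, w))
        return variants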
Further, step S3 is implemented as follows: build the Mask R-CNN model with the TensorFlow deep-learning framework and retrain it starting from the model pre-trained on ImageNet;
Step S3.1: take the skin images obtained after offline data augmentation as the input of a convolutional neural network and extract features as follows: S3.1.1, scale the preprocessed skin images of various sizes to a fixed size and feed the fixed-size images into the network; S3.1.2, apply repeated convolution and pooling operations in the network to obtain a skin feature map;
Step S3.2: generate recommended candidate regions with the RPN (region proposal network), outputting M candidate regions per picture;
Step S3.3: map each candidate region onto the last convolutional layer of the network;
Step S3.4: generate a fixed-size feature map for each candidate region through the RoI Align layer, so that pixels in the skin image and pixels in the feature map are exactly aligned;
Step S3.5: feed the previous layer's output into a fully connected layer to classify the candidate regions, obtain the final output class probabilities with a softmax function, and judge the skin type from these probabilities;
Step S3.6: after training the model for a certain number of epochs with the training set as above, pause training, save the model training data, and observe how the loss value changes with the training epoch;
Step S3.7: if the loss value trends downward, continue training until convergence; otherwise, if it fluctuates or trends upward, adjust the model parameters and restart training.
Further, the convolutional-network feature extraction of step S3.1 comprises the following steps (sketched in code below):
S3.1.1: normalize each image by subtracting the pixel mean of the data set, obtaining a 224x224 image, and let the input layer load images from the preprocessed skin-picture data set;
S3.1.2: the convolutional layers operate on feature maps, with convolution kernels representing features; each unit acts on a local region of the previous layer's feature map through its kernel, and local image features are obtained by weighting that region and applying the ReLU non-linearity.
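A minimal sketch of S3.1.1 and S3.1.2 follows, assuming the data-set pixel mean is supplied as a per-channel RGB vector; the single convolution-plus-pooling stage stands in for the repeated stages of the full feature extractor.

    import tensorflow as tf

    def normalize(image, dataset_mean):
        # S3.1.1: scale to 224x224 and subtract the training-set pixel mean
        image = tf.image.resize(image, [224, 224])
        return tf.cast(image, tf.float32) - tf.constant(dataset_mean, tf.float32)

    # S3.1.2: one convolution + ReLU + pooling stage; the real network
    # stacks several such stages to produce the skin feature map.
    conv_stage = tf.keras.Sequential([
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(pool_size=2),
    ])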
Further, the feature recognition with the Softmax classifier in step S3.5 comprises the following step:
S3.5.1: let the number of input skin pictures to be identified be N and the number of target categories be k, with k = 5; according to Bayes' theorem, for a test picture $x_i$ the probability that the current picture belongs to class $j$ is $p(y_i = j \mid x_i)$, and the hypothesis function $h_\theta(x_i)$ then estimates the probability of $x_i$ belonging to each category:

$$h_\theta(x_i) = \begin{bmatrix} p(y_i = 1 \mid x_i; \theta) \\ p(y_i = 2 \mid x_i; \theta) \\ \vdots \\ p(y_i = k \mid x_i; \theta) \end{bmatrix} = \frac{1}{\sum_{l=1}^{k} e^{\theta_l^{T} x_i}} \begin{bmatrix} e^{\theta_1^{T} x_i} \\ e^{\theta_2^{T} x_i} \\ \vdots \\ e^{\theta_k^{T} x_i} \end{bmatrix}$$

where k is the number of categories, $\theta_l^{T}$ denotes the model parameters, $x_i$ denotes test image i, and the factor $1 / \sum_{l=1}^{k} e^{\theta_l^{T} x_i}$ normalizes the probability distribution.
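The hypothesis function can be evaluated directly; this NumPy sketch assumes theta is a k x d matrix of per-class parameter vectors and x_i a d-dimensional feature vector (the max-shift is a standard numerical-stability step that cancels in the normalization).

    import numpy as np

    def softmax_probs(theta, x_i):
        # logits[l] = theta_l^T x_i for l = 1..k
        logits = theta @ x_i
        logits = logits - logits.max()   # stability; the ratios are unchanged
        exp = np.exp(logits)
        return exp / exp.sum()           # p(y_i = j | x_i) for j = 1..k

    # e.g. with k = 5 skin types: predicted_type = softmax_probs(theta, x_i).argmax()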
Further, the loss function of step S3.6 is expressed as:

$$L = L_{cls} + L_{reg} + L_{mask}$$

where $L_{cls}$ is the classification error, $L_{reg}$ the detection error, and $L_{mask}$ the segmentation error; $L_{cls}$ and $L_{reg}$ use fully connected layers to predict the class of each candidate region (RoI) and the coordinates of its target regression box, while $L_{mask}$ represents the error of segmenting each candidate region: the segmentation output has dimension k x m x m, where k is the number of classes and m x m the size of the feature map, i.e. k binary masks are encoded, one per class, and a sigmoid function is used to compute a binary cross-entropy for each pixel.
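A simplified sketch of the combined loss under assumed tensor shapes: mask_logits is (N, m, m, k), one m x m mask per class per RoI, and only the mask of the RoI's ground-truth class contributes, as described above; the Huber loss stands in for the smooth-L1 box loss, and the RoI sampling and loss weighting of full Mask R-CNN training are omitted.

    import tensorflow as tf

    def total_loss(cls_logits, cls_labels, box_pred, box_true,
                   mask_logits, mask_true):
        # L_cls: classification error of each candidate region (RoI)
        l_cls = tf.keras.losses.sparse_categorical_crossentropy(
            cls_labels, cls_logits, from_logits=True)
        # L_reg: detection error of the regression-box coordinates
        l_reg = tf.keras.losses.huber(box_true, box_pred)
        # L_mask: per-pixel sigmoid binary cross-entropy on the mask of
        # the true class only
        per_class = tf.transpose(mask_logits, [0, 3, 1, 2])     # (N, k, m, m)
        picked = tf.gather(per_class, cls_labels, axis=1, batch_dims=1)
        l_mask = tf.nn.sigmoid_cross_entropy_with_logits(
            labels=mask_true, logits=picked)
        return (tf.reduce_mean(l_cls) + tf.reduce_mean(l_reg)
                + tf.reduce_mean(l_mask))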
Further, a mini-batch training method is adopted, with the main parameters set as follows: base learning rate 0.01, momentum factor 0.9, and regularization (weight-decay) coefficient 0.0001.
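In TensorFlow terms these hyperparameters map naturally onto an SGD optimizer; reading the decay coefficient as L2 kernel regularization is an assumption, one common equivalent of weight decay.

    import tensorflow as tf

    optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)
    l2_decay = tf.keras.regularizers.l2(0.0001)
    # attached per layer, e.g.:
    head = tf.keras.layers.Dense(5, kernel_regularizer=l2_decay)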
Further, step S4 is implemented as follows: train several models on the same training set and, with a multi-objective optimization algorithm, select the model with the best indexes according to each model's recall, precision, and F-measure.
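A minimal stand-in for this selection step: score every trained model on (precision, recall, F-measure) and keep the non-dominated set, mirroring the non-dominated sorting at the core of the elitist genetic algorithm named in advantage (5) below; the crossover and tournament machinery of a full NSGA-II-style run is omitted.

    def f_measure(precision, recall):
        # balanced F1 score of one trained model
        return 2 * precision * recall / (precision + recall + 1e-12)

    def pareto_front(models):
        # models: list of dicts like {"name": "run-3", "scores": (p, r, f)}
        def dominates(a, b):  # a at least as good everywhere, better somewhere
            return (all(x >= y for x, y in zip(a, b))
                    and any(x > y for x, y in zip(a, b)))
        return [m for m in models
                if not any(dominates(o["scores"], m["scores"])
                           for o in models if o is not m)]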
The advantages and beneficial effects of the invention are as follows:
The invention provides an automatic recognition and classification method for skin images. By training a Mask R-CNN model, skin images are identified and classified automatically. To keep the deep-learning model from overfitting on limited image data, data augmentation and transfer learning are used in preprocessing. With this method, the efficiency and accuracy of skin identification improve, and a more accurate and effective skin-care scheme can be formulated for the user according to the characteristics of each skin type.
(1) The advanced object-detection algorithm Mask R-CNN reduces the amount of computation, improves efficiency, and raises recognition accuracy compared with traditional convolutional-network algorithms.
(2) The method has adaptive and incremental-learning characteristics.
(3) The method can effectively reduce human misdiagnosis in skin detection, greatly improve the efficiency of skin detection, and save substantial resources in the skin-care industry.
(4) The method innovatively combines an image-detection algorithm with skin classification, so skin detection can be performed quickly and conveniently, and users can choose skin-care products and suitable beauty schemes according to the detection results.
(5) The method innovatively combines a multi-objective optimization algorithm (a non-dominated sorting genetic algorithm with an elitist strategy) with model selection, so that an optimal model can be chosen more accurately from a large number of trained models.
Drawings
FIG. 1 is a diagram of the overall network architecture of the Mask R-CNN model in a preferred embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the invention are described clearly and specifically below with reference to the drawings. The described embodiments are only some of the embodiments of the invention.
The technical scheme for solving the technical problems is as follows:
the invention is based on a target detection algorithm Mask R-CNN, and the algorithm mainly comprises two modules: the first is an RPN network for generating candidate regions, the second is an ROI alignment for target detection, and the method outputs a Binary Mask through a full convolution network, and comprises the following specific steps:
(1) And (3) data marking: labeling the original skin disease image, and drawing a focus area.
(2) Data enhancement: and carrying out data off-line enhancement processing on the marked skin disease marked image, and changing the number of data into the number of enhancement factors. Where enhancement factor refers to the multiple of the increase of data after offline enhancement. The invention adopts four data enhancement methods of turning, rotating, zooming and clipping.
(3) Model training: initializing a Mask R-CNN model, wherein the overall network structure is as shown in figure 1:
i. convolutional neural network: for extracting a dermatological lesion feature map that will be shared by the RPN network and the fully connected layer.
The rpn network is used to generate recommended candidate regions.
Generating a feature map of fixed size for each candidate region by the RoI Align layer, the pixels in the dermatological image being perfectly aligned with the pixels in the feature map.
Full tie layer: the finally output class probability is obtained by using the softmax function.
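The figure-1 pipeline can be sketched at the structural level as below; tf.image.crop_and_resize stands in for the RoI Align layer (both sample bilinearly), the RPN and its proposal filtering are omitted, and proposals is assumed given as normalized [y1, x1, y2, x2] boxes. This is a shape-level illustration, not the invention's full network.

    import tensorflow as tf

    # i. shared convolutional backbone producing the feature map
    backbone = tf.keras.applications.ResNet50(weights="imagenet",
                                              include_top=False)

    def roi_head(feature_map, proposals, box_indices, num_classes=5):
        # iii. RoI Align stand-in: a fixed-size (7x7) feature map per region;
        # bilinear sampling keeps image and feature pixels aligned
        rois = tf.image.crop_and_resize(feature_map, proposals,
                                        box_indices, crop_size=[7, 7])
        flat = tf.keras.layers.GlobalAveragePooling2D()(rois)
        # iv. fully connected layer + softmax class probabilities
        return tf.keras.layers.Dense(num_classes, activation="softmax")(flat)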
(4) Parameter adjustment: during training the learning rate is first set to 0.1; the high learning rate finds an approximate global optimum, after which a small learning rate refines toward the local optimum, yielding the global optimum.
Parameter adjustment follows these rules (a small sketch follows the list):
When the loss value trends downward, continue training the model until convergence.
When the loss value fluctuates or trends upward, lower the learning rate.
When the model does not converge, increase the number of mini-batches and reduce the number of nodes per fully connected layer.
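The first two rules can be applied mechanically to the recent loss history, as in the sketch below; the window size and the halving step are illustrative assumptions, and the non-convergence rule (more mini-batches, fewer fully-connected nodes) is left as a manual intervention.

    def adjust_learning_rate(loss_history, lr, window=5):
        # keep the rate while the loss trends downward; otherwise lower it
        recent = loss_history[-window:]
        trending_down = all(a > b for a, b in zip(recent, recent[1:]))
        return lr if trending_down else lr * 0.5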
(5) Model selection: from the models trained in the above steps, select the optimal model with a multi-objective optimization algorithm according to precision, recall, and F-measure.
The invention provides an automatic dermatological-disease identification method based on data augmentation, transfer learning, and the Mask R-CNN model; the specific steps are as follows:
Step S1: annotate a database composed of a large number of skin-disease images, marking lesion positions and disease types in the pictures, and divide the images into a training set, a test set, and a validation set.
Step S2: apply offline data augmentation to the annotated images, multiplying the amount of data by the enhancement factor, i.e. the multiple by which the data grows after offline augmentation. Four augmentation methods are used: flipping, rotation, scaling, and cropping.
Step S3: using transfer learning, migrate a model pre-trained on ImageNet to the augmented training set for training to obtain optimized initial parameters, thereby speeding up training and improving the recognition rate and generalization ability of the model. Check the model's accuracy on the validation set and adjust its parameters from the training results until the model converges.
Step S4: repeat steps S2 and S3 to train several models, compare their evaluation indexes, and select the optimal model with a multi-objective optimization algorithm.
Step S1 is implemented as follows: lesion positions and disease types are annotated on the dermatological images with the yolo_mark image-annotation tool, which runs on Windows and depends on the OpenCV library. Image information, including the image name, image size, lesion position, and disease type, is recorded in a JSON-format file, and the dermatological image data set is then divided into training, validation, and test sets at a ratio of 60%, 20%, and 20%, respectively (see the sketch below).
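A small sketch of the 60/20/20 split over the annotated JSON records; the record layout and the fixed seed are assumptions for reproducibility.

    import random

    def split_dataset(records, seed=0):
        # records: annotated entries (image name, image size, lesion
        # position, disease type) as stored in the JSON file
        shuffled = records[:]
        random.Random(seed).shuffle(shuffled)
        n = len(shuffled)
        n_train, n_val = int(0.6 * n), int(0.2 * n)
        return (shuffled[:n_train],                  # 60% training
                shuffled[n_train:n_train + n_val],   # 20% validation
                shuffled[n_train + n_val:])          # 20% test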
Step S2, the offline data augmentation, comprises the following steps:
S2.1: define the enhancement factor as 2 (the data doubles after offline augmentation) and mirror-flip each skin-lesion picture.
S2.2: define the enhancement factor as 4 (the data quadruples after offline augmentation) and rotate each skin-lesion picture 90 degrees clockwise or counter-clockwise.
S2.3: randomly enlarge or shrink each skin-lesion picture, then crop it back to the original size.
Step S3 is implemented as follows: build the convolutional neural network model with the TensorFlow deep-learning framework and train it starting from a model pre-trained on the Microsoft COCO data set. The training process is as follows:
Step S3.1: take the preprocessed skin images as the input of the convolutional neural network and extract features. The feature-extraction process is: S3.1.1, scale the preprocessed skin images of various sizes to a fixed size and feed them into the network; S3.1.2, apply repeated convolution and pooling operations in the network to obtain a skin feature map.
Step S3.2: generate recommended candidate regions with the RPN (region proposal network), outputting M candidate regions per picture.
Step S3.3: map each candidate region onto the last convolutional layer of the network.
Step S3.4: generate a fixed-size feature map for each candidate region through the RoI Align layer, with pixels in the dermatological image exactly aligned with pixels in the feature map.
Step S3.5: feed the previous layer's output into the fully connected layer, obtain the final output class probabilities with a softmax function, and judge the skin category from these probabilities.
Step S3.6: after training the model for a certain number of epochs with the training set as above, pause training, save the model training data, and observe how the loss value changes with the training epoch. Training uses mini-batches with the main parameters set as follows: base learning rate 0.01, momentum factor 0.9, and regularization (weight-decay) coefficient 0.0001.
Step S3.7: if the loss value trends downward, continue training until convergence; otherwise, if it fluctuates or trends upward, adjust the model parameters and restart training.
The above examples should be understood as illustrative only and not as limiting the scope of the invention. After reading the teachings herein, a person skilled in the art may make various changes and modifications to the invention, and such equivalent changes and modifications fall within the scope defined by the appended claims.

Claims (9)

1. An automatic skin identification method based on the Mask R-CNN model, characterized by comprising the following steps:
Step S1: annotate a database composed of a large number of known skin images of different skin types, marking the position features and type features in the skin images, and divide the images into a training image set, a test image set, and a validation image set;
Step S2: apply offline data augmentation to the training set of annotated skin images, using four augmentation methods (flipping, rotation, scaling, and cropping), so that the amount of data becomes the enhancement factor times the size of the original data set, where the enhancement factor is the multiple by which the data grows after offline augmentation;
Step S3: using transfer learning, migrate a model pre-trained on ImageNet to the augmented training set for training to obtain optimized initial parameters; select 6000 annotated skin photos as the training set, 2000 as the test set, and 2000 as the validation set; verify the model's accuracy on the validation set and adjust its parameters from the training results until the model converges;
Step S4: repeat steps S2 and S3 to train several models, compare their evaluation indexes, and select the optimal model with a multi-objective optimization algorithm to complete automatic target recognition.
2. The automatic skin recognition method based on the Mask R-CNN model according to claim 1, characterized in that step S1 is implemented as follows: positions and types in the known target images are annotated with the yolo_mark image-annotation tool; the skin types are five: dry, oily, combination, neutral, and sensitive.
3. The automatic skin recognition method based on the Mask R-CNN model according to claim 2, characterized in that the offline data augmentation of step S2 comprises the following steps:
S2.1: define the enhancement factor as 2, i.e. the data doubles after offline augmentation, and mirror-flip each skin picture;
S2.2: define the enhancement factor as 4 and rotate each skin picture 90 degrees clockwise or counter-clockwise;
S2.3: randomly enlarge or shrink each skin picture, then crop it back to the original size.
4. The automatic skin recognition method based on the Mask R-CNN model according to claim 3, characterized in that step S3 is implemented as follows: build the Mask R-CNN model with the TensorFlow deep-learning framework and retrain it starting from the model pre-trained on ImageNet:
Step S3.1: take the skin images obtained after offline data augmentation as the input of a convolutional neural network and extract features as follows: S3.1.1, scale the preprocessed skin images of various sizes to a fixed size and feed the fixed-size images into the network; S3.1.2, apply repeated convolution and pooling operations in the network to obtain a skin feature map;
Step S3.2: generate recommended candidate regions with the RPN (region proposal network), outputting M candidate regions per picture;
Step S3.3: map each candidate region onto the last convolutional layer of the network;
Step S3.4: generate a fixed-size feature map for each candidate region through the RoI Align layer, so that pixels in the skin image and pixels in the feature map are exactly aligned;
Step S3.5: feed the previous layer's output into a fully connected layer to classify the candidate regions, obtain the final output class probabilities with a softmax function, and judge the skin type from these probabilities;
Step S3.6: after training the model for a certain number of epochs with the training set as above, pause training, save the model training data, and observe how the loss value changes with the training epoch;
Step S3.7: if the loss value trends downward, continue training until convergence; otherwise, if it fluctuates or trends upward, adjust the model parameters and restart training.
5. The automatic skin recognition method based on the Mask R-CNN model according to claim 4, characterized in that the convolutional-network feature extraction of step S3.1 comprises the following steps:
S3.1.1: normalize each image by subtracting the pixel mean of the data set, obtaining a 224x224 image, and let the input layer load images from the preprocessed skin-picture data set;
S3.1.2: the convolutional layers operate on feature maps, with convolution kernels representing features; each unit acts on a local region of the previous layer's feature map through its kernel, and local image features are obtained by weighting that region and applying the ReLU non-linearity.
6. The automatic skin recognition method based on the Mask R-CNN model according to claim 4, characterized in that the feature recognition with the Softmax classifier in step S3.5 comprises the following step:
S3.5.1: let the number of input skin pictures to be identified be N and the number of target categories be k, with k = 5; according to Bayes' theorem, for a test picture $x_i$ the probability that the current picture belongs to class $j$ is $p(y_i = j \mid x_i)$, and the hypothesis function $h_\theta(x_i)$ then estimates the probability of $x_i$ belonging to each category:

$$h_\theta(x_i) = \begin{bmatrix} p(y_i = 1 \mid x_i; \theta) \\ p(y_i = 2 \mid x_i; \theta) \\ \vdots \\ p(y_i = k \mid x_i; \theta) \end{bmatrix} = \frac{1}{\sum_{l=1}^{k} e^{\theta_l^{T} x_i}} \begin{bmatrix} e^{\theta_1^{T} x_i} \\ e^{\theta_2^{T} x_i} \\ \vdots \\ e^{\theta_k^{T} x_i} \end{bmatrix}$$

where k is the number of categories, $\theta_l^{T}$ denotes the model parameters, $x_i$ denotes test image i, and the factor $1 / \sum_{l=1}^{k} e^{\theta_l^{T} x_i}$ normalizes the probability distribution.
7. The automatic skin recognition method based on the Mask R-CNN model according to claim 4, characterized in that the loss function of step S3.6 is expressed as:

$$L = L_{cls} + L_{reg} + L_{mask}$$

where $L_{cls}$ is the classification error, $L_{reg}$ the detection error, and $L_{mask}$ the segmentation error; $L_{cls}$ and $L_{reg}$ use fully connected layers to predict the class of each candidate region (RoI) and the coordinates of its target regression box, while $L_{mask}$ represents the error of segmenting each candidate region: the segmentation output has dimension k x m x m, where k is the number of classes and m x m the size of the feature map, i.e. k binary masks are encoded, one per class, and a sigmoid function is used to compute a binary cross-entropy for each pixel.
8. The automatic skin recognition method based on the Mask R-CNN model according to claim 4, characterized in that a mini-batch training method is adopted, with the main parameters set as follows: base learning rate 0.01, momentum factor 0.9, and regularization (weight-decay) coefficient 0.0001.
9. The automatic skin recognition method based on the Mask R-CNN model according to claim 4, characterized in that step S4 is implemented as follows: train several models on the same training set and, with a multi-objective optimization algorithm, select the model with the best indexes according to each model's recall, precision, and F-measure.
Application CN201910806679.6A, filed 2019-08-29 (priority 2019-08-29): Automatic skin recognition method based on Mask R-CNN model. Granted as CN110543906B; status: Active.

Priority Applications (1)

Application number: CN201910806679.6A (publication CN110543906B) · Priority date: 2019-08-29 · Filing date: 2019-08-29 · Title: Automatic skin recognition method based on Mask R-CNN model

Applications Claiming Priority (1)

Application number: CN201910806679.6A (publication CN110543906B) · Priority date: 2019-08-29 · Filing date: 2019-08-29 · Title: Automatic skin recognition method based on Mask R-CNN model

Publications (2)

Publication Number Publication Date
CN110543906A (en): 2019-12-06
CN110543906B (en): 2023-06-16

Family

ID=68710889

Family Applications (1)

Application number: CN201910806679.6A (publication CN110543906B, Active) · Title: Automatic skin recognition method based on Mask R-CNN model · Priority date: 2019-08-29 · Filing date: 2019-08-29

Country Status (1)

Country Link
CN (1) CN110543906B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111310827A (en) * 2020-02-14 2020-06-19 北京工业大学 Target area detection method based on double-stage convolution model
CN111368453B (en) * 2020-03-17 2023-07-07 创新奇智(合肥)科技有限公司 Fabric cutting optimization method based on deep reinforcement learning
CN112241836B (en) * 2020-10-10 2022-05-20 天津大学 Virtual load leading parameter identification method based on incremental learning
CN112435237B (en) * 2020-11-24 2024-06-21 山西三友和智慧信息技术股份有限公司 Skin lesion segmentation method based on data enhancement and depth network
CN112686145A (en) * 2020-12-29 2021-04-20 广东各有所爱信息科技有限公司 Facial skin type identification method and intelligent terminal thereof
CN113158652B (en) * 2021-04-19 2024-03-19 平安科技(深圳)有限公司 Data enhancement method, device, equipment and medium based on deep learning model


Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
WO2016165060A1 (en) * 2015-04-14 2016-10-20 Intel Corporation Skin detection based on online discriminative modeling

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN109730769A (en) * 2018-12-10 2019-05-10 华南理工大学 Machine-vision-based intelligent tracking method and system for precise skin tumor surgery
CN109785321A (en) * 2019-01-30 2019-05-21 杭州又拍云科技有限公司 Meibomian gland region extraction method based on deep learning and Gabor filters
CN110148121A (en) * 2019-05-09 2019-08-20 腾讯科技(深圳)有限公司 Skin image processing method, apparatus, electronic device and medium

Non-Patent Citations (1)

Title
Erick Alfaro, "A Brief Analysis of U-Net and Mask R-CNN for Skin Lesion Segmentation", IWOBI 2019, 2019-07-01, full text. *

Also Published As

Publication number Publication date
CN110543906A (en) 2019-12-06

Similar Documents

Publication Publication Date Title
CN110543906B (en) Automatic skin recognition method based on Mask R-CNN model
CN110443143B (en) Multi-branch convolutional neural network fused remote sensing image scene classification method
WO2023077816A1 (en) Boundary-optimized remote sensing image semantic segmentation method and apparatus, and device and medium
CN109919108B (en) Remote sensing image rapid target detection method based on deep hash auxiliary network
WO2018028255A1 (en) Image saliency detection method based on adversarial network
CN107610087B (en) Tongue coating automatic segmentation method based on deep learning
CN111753828B (en) Natural scene horizontal character detection method based on deep convolutional neural network
CN111126482B (en) Remote sensing image automatic classification method based on multi-classifier cascade model
CN107784288B (en) Iterative positioning type face detection method based on deep neural network
Kadam et al. Detection and localization of multiple image splicing using MobileNet V1
CN113408605A (en) Hyperspectral image semi-supervised classification method based on small sample learning
CN106778687A (en) Method for viewing points detecting based on local evaluation and global optimization
CN109284779A (en) Object detecting method based on the full convolutional network of depth
CN113221956B (en) Target identification method and device based on improved multi-scale depth model
CN110008899B (en) Method for extracting and classifying candidate targets of visible light remote sensing image
CN113032613B (en) Three-dimensional model retrieval method based on interactive attention convolution neural network
CN113487610B (en) Herpes image recognition method and device, computer equipment and storage medium
CN112364974B (en) YOLOv3 algorithm based on activation function improvement
Ma et al. Location-aware box reasoning for anchor-based single-shot object detection
CN115861715A (en) Knowledge representation enhancement-based image target relation recognition algorithm
Yang et al. An improved algorithm for the detection of fastening targets based on machine vision
CN117315380B (en) Deep learning-based pneumonia CT image classification method and system
CN109902692A (en) A kind of image classification method based on regional area depth characteristic coding
CN116311387B (en) Cross-modal pedestrian re-identification method based on feature intersection
CN111144469B (en) End-to-end multi-sequence text recognition method based on multi-dimensional associated time sequence classification neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant