CN108256527A - Multi-class semantic segmentation method for skin lesions based on an end-to-end fully convolutional network - Google Patents

Multi-class semantic segmentation method for skin lesions based on an end-to-end fully convolutional network

Info

Publication number
CN108256527A
CN108256527A (application number CN201810064784.2A)
Authority
CN
China
Prior art keywords
fcn
data set
data
network
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201810064784.2A
Other languages
Chinese (zh)
Inventor
夏春秋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Vision Technology Co Ltd
Original Assignee
Shenzhen Vision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Vision Technology Co Ltd filed Critical Shenzhen Vision Technology Co Ltd
Priority to CN201810064784.2A priority Critical patent/CN108256527A/en
Publication of CN108256527A publication Critical patent/CN108256527A/en
Withdrawn legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Abstract

The present invention proposes a multi-class semantic segmentation method for skin lesions based on an end-to-end fully convolutional network. Its main contents include: a data set, a ground-truth annotation format, fully convolutional networks, a two-tier transfer learning method, and performance metrics. The process is as follows: first, the data set is partitioned into classes and used to train a fully convolutional deep learning model; then the ground truth is represented with binary masks; next, convolutional neural networks are trained on large data sets to obtain good results in detecting targets in images; data sets of non-medical origin are then used for two-tier transfer learning so that the weights associated with each convolutional layer converge; finally, the results are assessed with evaluation metrics. The present invention proposes a transfer learning method that uses partial transfer learning and full transfer learning to train fully convolutional networks for multi-class semantic segmentation of skin lesions, which overcomes the problem of insufficient data and also substantially improves the efficiency and accuracy of recognition and classification.

Description

Multi-class semantic segmentation method for skin lesions based on an end-to-end fully convolutional network
Technical field
The present invention relates to the field of semantic segmentation, and more particularly to a multi-class semantic segmentation method for skin lesions based on an end-to-end fully convolutional network.
Background technology
Image semantic segmentation is one of the important basic problems in computer vision. Its goal is to classify each pixel of an image and to divide the image into several visually meaningful or interesting regions, in order to facilitate subsequent image analysis and visual understanding. Applying image semantic segmentation to medical diagnosis, as a form of computer-aided diagnosis, will become a trend in the future development of medicine. For example, skin cancer is among the most common of all cancers, and there are mainly three common types of skin lesions, namely benign nevus, melanoma and seborrheic keratosis. Because of the inter-class similarity among these lesion categories, it is difficult for doctors to distinguish and diagnose them by visual inspection alone. Therefore, using a computer to semantically segment and classify skin lesion images will effectively assist doctors in diagnosis, reduce errors caused by subjective judgement, greatly reduce doctors' workload and improve diagnostic efficiency. In addition to diagnosing skin lesions, image semantic segmentation can also be applied to the classification and analysis of CT and MRI images. However, there is currently no research on multi-class semantic segmentation for detecting all types of skin cancer lesions.
The present invention proposes a multi-class semantic segmentation method for skin lesions based on an end-to-end fully convolutional network. First, the data set is partitioned into classes and used to train a fully convolutional deep learning model; then the ground truth is represented with binary masks; next, convolutional neural networks are trained on large data sets to obtain good results in detecting targets in images; data sets of non-medical origin are then used for two-tier transfer learning so that the weights associated with each convolutional layer converge; finally, the results are assessed with evaluation metrics. The present invention proposes a transfer learning method that uses partial transfer learning and full transfer learning to train fully convolutional networks for multi-class semantic segmentation of skin lesions, which overcomes the problem of insufficient data and also substantially improves the efficiency and accuracy of recognition and classification.
Invention content
In view of the problem that semantic segmentation has not yet been applied to skin lesion segmentation, the purpose of the present invention is to provide a multi-class semantic segmentation method for skin lesions based on an end-to-end fully convolutional network. First, the data set is partitioned into classes and used to train a fully convolutional deep learning model; then the ground truth is represented with binary masks; next, convolutional neural networks are trained on large data sets to obtain good results in detecting targets in images; data sets of non-medical origin are then used for two-tier transfer learning so that the weights associated with each convolutional layer converge; finally, the results are assessed with evaluation metrics.
To solve the above problems, the present invention provides a multi-class semantic segmentation method for skin lesions based on an end-to-end fully convolutional network, whose main contents include:
(1) a data set;
(2) a ground-truth annotation format;
(3) a fully convolutional network;
(4) a two-tier transfer learning method;
(5) performance metrics.
With regard to the data set: only the publicly available ISBI-2017 skin cancer data set is used to train the fully convolutional deep learning models. All images in the data set are represented in the RGB color space. The data set contains dermoscopy images of three important skin lesion categories, namely benign nevus, melanoma and seborrheic keratosis, with high inter-class similarity among these three categories. The data set contains a training set of 2000 dermoscopy images, of which 1372 are classified as benign nevus, 374 as melanoma and 274 as seborrheic keratosis; the validation set contains 150 images and the test set contains 600 images. In this data set, image sizes vary between 540 × 722 and 4499 × 6748. For training the fully convolutional networks (FCNs), all images are resized to 500 × 375 to improve performance and reduce computational cost. In the segmentation task, the entire data set is divided into three classes, i.e. benign nevus, melanoma and seborrheic keratosis.
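As an illustration of the resizing step described above, the following minimal Python sketch (not the patent's own code) resizes dermoscopy images to 500 × 375 before FCN training; the folder names "images/" and "images_500x375/" are hypothetical.

```python
# Illustrative preprocessing sketch (assumption: Pillow is installed and the
# ISBI-2017 dermoscopy images sit in a hypothetical local folder "images/").
from pathlib import Path
from PIL import Image

TARGET_SIZE = (500, 375)  # width x height used for FCN training in the description

def resize_image(image_path: Path, out_dir: Path) -> Path:
    """Resize one dermoscopy image to 500 x 375 and save it."""
    out_dir.mkdir(parents=True, exist_ok=True)
    img = Image.open(image_path).convert("RGB")
    img = img.resize(TARGET_SIZE)
    out_path = out_dir / image_path.name
    img.save(out_path)
    return out_path

if __name__ == "__main__":
    for p in sorted(Path("images").glob("*.jpg")):
        resize_image(p, Path("images_500x375"))
```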
With regard to the ground-truth annotation format: the performance of convolutional neural networks for semantic segmentation has been extensively tested on the PASCAL VOC 2012 data set. In that data set, the input images are defined in the RGB color space, and 8-bit palette images are used to represent the ground truth of the input images. The ISBI dermoscopy data set has input images of the same format, i.e. the RGB color space. As described above, the segmentation there is performed for only one class, so a binary mask is used to represent the ground truth. The PASCAL VOC data set originally has 21 classes, whereas in this task there are three classes representing benign nevus, melanoma and seborrheic keratosis.
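To make the annotation format concrete, a possible conversion from a single-class binary mask to a PASCAL-VOC-style 8-bit palette label image is sketched below; the class indices and file paths are assumptions for illustration, not taken from the patent.

```python
# Sketch: binary lesion mask -> indexed (palette) label image, PASCAL-VOC style.
import numpy as np
from PIL import Image

def binary_mask_to_indexed(mask_path: str, class_index: int, out_path: str) -> None:
    """Map lesion pixels (non-zero) to class_index and background to 0."""
    mask = np.array(Image.open(mask_path).convert("L"))
    label = np.where(mask > 127, class_index, 0).astype(np.uint8)
    indexed = Image.fromarray(label, mode="P")
    # Simple palette: background black, the three lesion classes red/green/blue.
    palette = [0, 0, 0, 255, 0, 0, 0, 255, 0, 0, 0, 255] + [0] * (256 * 3 - 12)
    indexed.putpalette(palette)
    indexed.save(out_path)

# Example (hypothetical file names): a melanoma mask mapped to class index 2.
# binary_mask_to_indexed("masks/ISIC_0000000_segmentation.png", 2, "labels/ISIC_0000000.png")
```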
With regard to the fully convolutional network: convolutional neural networks are computationally intensive and can extract hierarchical features to detect objects in images. State-of-the-art convolutional neural networks (CNNs) are trained on large data sets and classify different categories of objects by assigning a score to each class. FCNs and encoder-decoder CNNs can detect multiple objects and localise them through pixel-wise prediction. FCNs have become the state-of-the-art method for non-medical and medical image segmentation tasks and have been shown to be superior to conventional machine learning and other deep learning methods. Therefore, four different FCNs, namely FCN-AlexNet, FCN-32s, FCN-16s and FCN-8s, are used to perform the skin cancer segmentation task.
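The structure of such pixel-wise prediction networks can be illustrated with a minimal sketch; this toy model is an assumption for explanation only and is far smaller than FCN-AlexNet or FCN-8s.

```python
# Toy FCN-style model: a convolutional backbone produces a coarse per-class score
# map, and a transposed convolution upsamples it back to the input size so that
# every pixel receives a class score (PyTorch; illustrative only).
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, num_classes: int = 4):  # 3 lesion classes + background (assumption)
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        )
        self.score = nn.Conv2d(32, num_classes, kernel_size=1)
        self.upsample = nn.ConvTranspose2d(num_classes, num_classes, kernel_size=4, stride=4)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.upsample(self.score(self.backbone(x)))

print(TinyFCN()(torch.randn(1, 3, 376, 500)).shape)  # torch.Size([1, 4, 376, 500])
```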
Further, with regard to FCN-AlexNet: it is a revised version of the original state-of-the-art AlexNet classification model. It performs pixel-wise prediction through deconvolution layers that upsample the features learned by the earlier convolutional layers. Both the input images and the annotated ground-truth images are 500 × 375. The network parameters are fine-tuned to give the method more time to learn the features of dermoscopy images, using 100 epochs of stochastic gradient descent with a learning rate of 0.0001.
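A possible fine-tuning configuration matching the optimizer settings named above is sketched here; the framework (PyTorch), the momentum value and the loss settings are assumptions, and only SGD with learning rate 0.0001 comes from the description.

```python
# Fine-tuning setup sketch: stochastic gradient descent at learning rate 0.0001,
# run for a larger number of epochs (e.g. 100) on the dermoscopy images.
import torch
import torch.nn as nn

def build_finetune_optimizer(model: nn.Module) -> torch.optim.Optimizer:
    # momentum 0.9 is an assumption; lr = 0.0001 is taken from the description
    return torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)

# Pixel-wise classification loss; the ignore_index for unlabeled pixels is an assumption.
criterion = nn.CrossEntropyLoss(ignore_index=255)
```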
Further, with regard to FCN-32s, FCN-16s and FCN-8s: they are based on another state-of-the-art classification network, and the difference between these models lies in the upsampling layers with different pixel strides. As their names indicate, in FCN-32s the upsampling is performed with a stride of 32 pixels, the stride of FCN-16s is 16 pixels, and the stride of FCN-8s is 8 pixels. With a smaller stride, a model can predict finer-grained details of objects. These models are trained with the same network parameters as FCN-AlexNet.
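The stride naming can be illustrated with the transposed-convolution ("deconvolution") layers that perform the upsampling; the kernel sizes and channel count below are illustrative assumptions.

```python
# Upsampling layers that give FCN-32s / FCN-16s / FCN-8s their names: a transposed
# convolution with stride 32, 16 or 8 maps a coarse score map back towards the
# input resolution; smaller strides preserve finer detail.
import torch
import torch.nn as nn

num_classes = 4  # 3 lesion classes + background (assumption)

upsample_32s = nn.ConvTranspose2d(num_classes, num_classes, kernel_size=64, stride=32, bias=False)
upsample_16s = nn.ConvTranspose2d(num_classes, num_classes, kernel_size=32, stride=16, bias=False)
upsample_8s = nn.ConvTranspose2d(num_classes, num_classes, kernel_size=16, stride=8, bias=False)

coarse = torch.randn(1, num_classes, 12, 16)  # e.g. a 1/32-resolution score map
print(upsample_32s(coarse).shape)             # upsampled towards full resolution
```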
With regard to the two-tier transfer learning method: convolutional neural networks usually require a huge data set to learn features well enough to obtain good results in detecting targets in images. Because dermoscopy images are RGB images, two-tier transfer learning can be performed using huge data sets of non-medical origin such as ImageNet and Pascal-VOC, so that the weights associated with each convolutional layer converge. The main reason for using two-tier transfer learning is that medical imaging data sets are very limited; when convolutional neural networks are trained from scratch on such data sets, the weights associated with each convolutional layer do not converge, so the networks do not produce effective results on limited medical imaging data sets.
Further, with regard to medical imaging data sets: when deep learning is used in medical imaging, attention must be paid to the size of the data set. Therefore, when training convolutional neural networks on these limited medical data sets, it is very important to use transfer learning from models trained on huge non-medical data sets in order to produce better results. Transfer learning transfers the features learned by a previous model on a huge non-medical data set to the medical image data set. There are two types of transfer learning: partial transfer learning, which transfers only the features of a few convolutional layers, and full transfer learning, which transfers the features of all layers of a previous pre-trained model.
Further, with regard to partial transfer learning and full transfer learning: partial transfer learning transfers only the convolutional layers of a model trained on a large classification data set (known as ImageNet) of 10 million images in 1000 categories; full transfer learning transfers the features of a model trained on a semantic segmentation data set, known as Pascal-VOC, which consists of more than 2000 images in 21 categories.
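The distinction between the two transfer modes can be sketched as selective weight copying; the checkpoint file names and the "features." layer prefix (as used by torchvision's AlexNet) are assumptions for illustration, not the patent's implementation.

```python
# Sketch of partial vs. full transfer learning as selective weight copying (PyTorch).
import torch
import torch.nn as nn

def partial_transfer(model: nn.Module, imagenet_ckpt: str = "alexnet_imagenet.pth") -> None:
    """Copy only convolutional-layer weights from an ImageNet-trained classifier."""
    state = torch.load(imagenet_ckpt, map_location="cpu")
    conv_only = {k: v for k, v in state.items() if k.startswith("features.")}
    model.load_state_dict(conv_only, strict=False)  # remaining layers keep their random init

def full_transfer(model: nn.Module, voc_ckpt: str = "fcn_pascal_voc.pth") -> None:
    """Copy all layer weights from a Pascal-VOC-trained segmentation model."""
    state = torch.load(voc_ckpt, map_location="cpu")
    model.load_state_dict(state, strict=False)
```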
With regard to the performance metrics: in medical imaging, sensitivity and specificity are the standard evaluation indices, and for segmentation evaluation the Dice similarity coefficient (Dice) is the parameter generally used by researchers. The Dice similarity coefficient (Dice), sensitivity, specificity and the Matthews correlation coefficient (MCC) are used as the evaluation metrics for segmentation:
Formula (1) defines sensitivity, where TP denotes true positives and FN denotes false negatives; a high sensitivity (close to 1.0) indicates good segmentation performance, meaning that all lesions are successfully segmented. Specificity, on the other hand, represents the proportion of true negatives (TN) among non-lesion pixels; a high specificity indicates the method's ability not to segment non-lesion regions. The Dice similarity coefficient measures the similarity between the prediction and the ground truth, as in formula (3). The MCC ranges from -1 (a completely wrong binary classifier) to 1 (a completely correct binary classifier); it is a suitable measure of a segmentation algorithm's performance as a binary classification (lesion versus non-lesion), as in formula (4).
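Formulas (1) to (4) referenced above are not reproduced in this text; assuming they follow the standard definitions of these metrics in terms of true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN), they read:

```latex
\mathrm{Sensitivity} = \frac{TP}{TP + FN} \quad (1) \qquad
\mathrm{Specificity} = \frac{TN}{TN + FP} \quad (2)

\mathrm{Dice} = \frac{2\,TP}{2\,TP + FP + FN} \quad (3) \qquad
\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}} \quad (4)
```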
Description of the drawings
Fig. 1 is a system framework diagram of the multi-class semantic segmentation method for skin lesions based on an end-to-end fully convolutional network of the present invention.
Fig. 2 shows the ground-truth annotation format of the multi-class semantic segmentation method for skin lesions based on an end-to-end fully convolutional network of the present invention.
Fig. 3 shows the two-tier transfer learning method of the multi-class semantic segmentation method for skin lesions based on an end-to-end fully convolutional network of the present invention.
Specific embodiment
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with each other. The present invention is described in further detail below with reference to the drawings and specific embodiments.
Fig. 1 is a system framework diagram of the multi-class semantic segmentation method for skin lesions based on an end-to-end fully convolutional network of the present invention. It mainly includes a data set, a ground-truth annotation format, fully convolutional networks, a two-tier transfer learning method and performance metrics.
The data set: only the publicly available ISBI-2017 skin cancer data set is used to train the fully convolutional deep learning models. All images in the data set are represented in the RGB color space. The data set contains dermoscopy images of three important skin lesion categories, namely benign nevus, melanoma and seborrheic keratosis, with high inter-class similarity among these three categories. The data set contains a training set of 2000 dermoscopy images, of which 1372 are classified as benign nevus, 374 as melanoma and 274 as seborrheic keratosis; the validation set contains 150 images and the test set contains 600 images. In this data set, image sizes vary between 540 × 722 and 4499 × 6748. For training the fully convolutional networks (FCNs), all images are resized to 500 × 375 to improve performance and reduce computational cost. In the segmentation task, the entire data set is divided into three classes, i.e. benign nevus, melanoma and seborrheic keratosis.
The fully convolutional network: convolutional neural networks are computationally intensive and can extract hierarchical features to detect objects in images. State-of-the-art convolutional neural networks (CNNs) are trained on large data sets and classify different categories of objects by assigning a score to each class. FCNs and encoder-decoder CNNs can detect multiple objects and localise them through pixel-wise prediction. FCNs have become the state-of-the-art method for non-medical and medical image segmentation tasks and have been shown to be superior to conventional machine learning and other deep learning methods. Therefore, four different FCNs, namely FCN-AlexNet, FCN-32s, FCN-16s and FCN-8s, are used to perform the skin cancer segmentation task.
FCN-AlexNet is a revised version of the original state-of-the-art AlexNet classification model. It performs pixel-wise prediction through deconvolution layers that upsample the features learned by the earlier convolutional layers. Both the input images and the annotated ground-truth images are 500 × 375. The network parameters are fine-tuned to give the method more time to learn the features of dermoscopy images, using 100 epochs of stochastic gradient descent with a learning rate of 0.0001.
FCN-32s, FCN-16s and FCN-8s are based on another state-of-the-art classification network, and the difference between these models lies in the upsampling layers with different pixel strides. As their names indicate, in FCN-32s the upsampling is performed with a stride of 32 pixels, the stride of FCN-16s is 16 pixels, and the stride of FCN-8s is 8 pixels. With a smaller stride, a model can predict finer-grained details of objects. These models are trained with the same network parameters as FCN-AlexNet.
The performance metrics: in medical imaging, sensitivity and specificity are the standard evaluation indices, and for segmentation evaluation the Dice similarity coefficient (Dice) is the parameter generally used by researchers. The Dice similarity coefficient (Dice), sensitivity, specificity and the Matthews correlation coefficient (MCC) are used as the evaluation metrics for segmentation:
Formula (1) defines sensitivity, where TP denotes true positives and FN denotes false negatives; a high sensitivity (close to 1.0) indicates good segmentation performance, meaning that all lesions are successfully segmented. Specificity, on the other hand, represents the proportion of true negatives (TN) among non-lesion pixels; a high specificity indicates the method's ability not to segment non-lesion regions. The Dice similarity coefficient measures the similarity between the prediction and the ground truth, as in formula (3). The MCC ranges from -1 (a completely wrong binary classifier) to 1 (a completely correct binary classifier); it is a suitable measure of a segmentation algorithm's performance as a binary classification (lesion versus non-lesion), as in formula (4).
Fig. 2 shows the ground-truth annotation format of the multi-class semantic segmentation method for skin lesions based on an end-to-end fully convolutional network of the present invention. The performance of convolutional neural networks for semantic segmentation has been extensively tested on the PASCAL VOC 2012 data set. In that data set, the input images are defined in the RGB color space, and 8-bit palette images are used to represent the ground truth of the input images; for example, Fig. 2(a) and Fig. 2(b) show a sample input image of a person cycling together with its ground truth. The ISBI dermoscopy data set has input images of the same format, i.e. the RGB color space. As described above, the segmentation challenge there is for only one class, so a binary mask is used to represent the ground truth, as shown in Fig. 2(c) and Fig. 2(d). The PASCAL VOC data set originally has 21 classes, whereas in this task there are three classes representing benign nevus, melanoma and seborrheic keratosis.
Fig. 3 shows the two-tier transfer learning method of the multi-class semantic segmentation method for skin lesions based on an end-to-end fully convolutional network of the present invention. As shown in the figure, the two-tier transfer learning technique is used in the skin cancer segmentation task to make use of pre-trained models for all of the corresponding FCNs. Convolutional neural networks usually require a huge data set to learn features well enough to obtain good results in detecting targets in images. Because dermoscopy images are RGB images, two-tier transfer learning is performed using huge data sets of non-medical origin such as ImageNet and Pascal-VOC, so that the weights associated with each convolutional layer converge. The main reason for using two-tier transfer learning is that medical imaging data sets are very limited; when convolutional neural networks are trained from scratch on such data sets, the weights associated with each convolutional layer do not converge, so the networks do not produce effective results on limited medical imaging data sets.
When deep learning is used in medical imaging, attention must be paid to the size of the data set. Therefore, when training convolutional neural networks on these limited medical data sets, it is very important to use transfer learning from models trained on huge non-medical data sets in order to produce better results. Transfer learning transfers the features learned by a previous model on a huge non-medical data set to the medical image data set. There are two types of transfer learning: partial transfer learning, which transfers only the features of a few convolutional layers, and full transfer learning, which transfers the features of all layers of a previous pre-trained model.
Partial transfer learning transfers only the convolutional layers of a model trained on a large classification data set (known as ImageNet) of 10 million images in 1000 categories; full transfer learning transfers the features of a model trained on a semantic segmentation data set, known as Pascal-VOC, which consists of more than 2000 images in 21 categories.
For those skilled in the art, the present invention is not limited to the details of the above embodiments, and the present invention can be realised in other specific forms without departing from the spirit and scope of the present invention. In addition, those skilled in the art can make various modifications and variations to the present invention without departing from the spirit and scope of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications falling within the scope of the present invention.

Claims (10)

1. A multi-class semantic segmentation method for skin lesions based on an end-to-end fully convolutional network, characterized in that it mainly comprises: a data set (1); a ground-truth annotation format (2); a fully convolutional network (3); a two-tier transfer learning method (4); and performance metrics (5).
2. The data set (1) according to claim 1, characterized in that only the publicly available ISBI-2017 skin cancer data set is used to train the fully convolutional deep learning models; all images in the data set are represented in the RGB color space; the data set contains dermoscopy images of three important skin lesion categories, namely benign nevus, melanoma and seborrheic keratosis, with high inter-class similarity among these three categories; the data set contains a training set of 2000 dermoscopy images, of which 1372 are classified as benign nevus, 374 as melanoma and 274 as seborrheic keratosis; the validation set contains 150 images and the test set contains 600 images; in this data set, image sizes vary between 540 × 722 and 4499 × 6748; for training the fully convolutional networks (FCNs), all images are resized to 500 × 375 to improve performance and reduce computational cost; in the segmentation task, the entire data set is divided into three classes, i.e. benign nevus, melanoma and seborrheic keratosis.
3. The ground-truth annotation format (2) according to claim 1, characterized in that the performance of convolutional neural networks for semantic segmentation has been extensively tested on the PASCAL VOC 2012 data set; in that data set, the input images are defined in the RGB color space, and 8-bit palette images are used to represent the ground truth of the input images; the ISBI dermoscopy data set has input images of the same format, i.e. the RGB color space; as described above, the segmentation there is performed for only one class, so a binary mask is used to represent the ground truth; the PASCAL VOC data set originally has 21 classes, whereas in this task there are three classes representing benign nevus, melanoma and seborrheic keratosis.
4. The fully convolutional network (3) according to claim 1, characterized in that convolutional neural networks are computationally intensive and can extract hierarchical features to detect objects in images; state-of-the-art convolutional neural networks (CNNs) are trained on large data sets and classify different categories of objects by assigning a score to each class; FCNs and encoder-decoder CNNs can detect multiple objects and localise them through pixel-wise prediction; FCNs have become the state-of-the-art method for non-medical and medical image segmentation tasks and have been shown to be superior to conventional machine learning and other deep learning methods; therefore, four different FCNs, namely FCN-AlexNet, FCN-32s, FCN-16s and FCN-8s, are used to perform the skin cancer segmentation task.
5. The FCN-AlexNet according to claim 4, characterized in that FCN-AlexNet is a revised version of the original state-of-the-art AlexNet classification model; it performs pixel-wise prediction through deconvolution layers that upsample the features learned by the earlier convolutional layers; both the input images and the annotated ground-truth images are 500 × 375; the network parameters are fine-tuned to give the method more time to learn the features of dermoscopy images, using 100 epochs of stochastic gradient descent with a learning rate of 0.0001.
6. The FCN-32s, FCN-16s and FCN-8s according to claim 4, characterized in that FCN-32s, FCN-16s and FCN-8s are based on another state-of-the-art classification network, the difference between these models being the upsampling layers with different pixel strides; as their names indicate, in FCN-32s the upsampling is performed with a stride of 32 pixels, the stride of FCN-16s is 16 pixels, and the stride of FCN-8s is 8 pixels; with a smaller stride, a model can predict finer-grained details of objects; these models are trained with the same network parameters as FCN-AlexNet.
7. The two-tier transfer learning method (4) according to claim 1, characterized in that convolutional neural networks usually require a huge data set to learn features well enough to obtain good results in detecting targets in images; because dermoscopy images are RGB images, two-tier transfer learning is performed using huge data sets of non-medical origin such as ImageNet and Pascal-VOC, so that the weights associated with each convolutional layer converge; the main reason for using two-tier transfer learning is that medical imaging data sets are very limited, and when convolutional neural networks are trained from scratch on such data sets, the weights associated with each convolutional layer do not converge, so the networks do not produce effective results on limited medical imaging data sets.
8. The medical imaging data set according to claim 7, characterized in that, when deep learning is used in medical imaging, attention must be paid to the size of the data set; therefore, when training convolutional neural networks on these limited medical data sets, it is very important to use transfer learning from models trained on huge non-medical data sets in order to produce better results; transfer learning transfers the features learned by a previous model on a huge non-medical data set to the medical image data set; there are two types of transfer learning: partial transfer learning, which transfers only the features of a few convolutional layers, and full transfer learning, which transfers the features of all layers of a previous pre-trained model.
9. The partial transfer learning and full transfer learning according to claim 8, characterized in that partial transfer learning transfers only the convolutional layers of a model trained on a large classification data set (known as ImageNet) of 10 million images in 1000 categories; full transfer learning transfers the features of a model trained on a semantic segmentation data set, known as Pascal-VOC, which consists of more than 2000 images in 21 categories.
10. The performance metrics (5) according to claim 1, characterized in that, in medical imaging, sensitivity and specificity are the standard evaluation indices, and for segmentation evaluation the Dice similarity coefficient (Dice) is the parameter generally used by researchers; the Dice similarity coefficient (Dice), sensitivity, specificity and the Matthews correlation coefficient (MCC) are used as the evaluation metrics for segmentation:
Formula (1) defines sensitivity, where TP denotes true positives and FN denotes false negatives; a high sensitivity (close to 1.0) indicates good segmentation performance, meaning that all lesions are successfully segmented; specificity, on the other hand, represents the proportion of true negatives (TN) among non-lesion pixels, and a high specificity indicates the method's ability not to segment non-lesion regions; the Dice similarity coefficient measures the similarity between the prediction and the ground truth, as in formula (3); the MCC ranges from -1 (a completely wrong binary classifier) to 1 (a completely correct binary classifier) and is a suitable measure of a segmentation algorithm's performance as a binary classification (lesion versus non-lesion), as in formula (4).
CN201810064784.2A 2018-01-23 2018-01-23 Multi-class semantic segmentation method for skin lesions based on end-to-end fully convolutional network Withdrawn CN108256527A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810064784.2A CN108256527A (en) 2018-01-23 2018-01-23 Multi-class semantic segmentation method for skin lesions based on end-to-end fully convolutional network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810064784.2A CN108256527A (en) 2018-01-23 2018-01-23 Multi-class semantic segmentation method for skin lesions based on end-to-end fully convolutional network

Publications (1)

Publication Number Publication Date
CN108256527A true CN108256527A (en) 2018-07-06

Family

ID=62742160

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810064784.2A Withdrawn CN108256527A (en) 2018-01-23 2018-01-23 A kind of cutaneous lesions multiclass semantic segmentation method based on end-to-end full convolutional network

Country Status (1)

Country Link
CN (1) CN108256527A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109190752A (en) * 2018-07-27 2019-01-11 国家新闻出版广电总局广播科学研究院 Image semantic segmentation method based on global and local features of deep learning
CN109241872A (en) * 2018-08-20 2019-01-18 电子科技大学 Fast image semantic segmentation method based on multistage network
CN109493359A (en) * 2018-11-21 2019-03-19 中山大学 Skin lesion image segmentation method based on deep network
CN109785311A (en) * 2019-01-14 2019-05-21 深圳和而泰数据资源与云技术有限公司 Disease diagnosis method and related device
CN109978893A (en) * 2019-03-26 2019-07-05 腾讯科技(深圳)有限公司 Training method, apparatus, device and storage medium for image semantic segmentation network
CN111369430A (en) * 2020-03-09 2020-07-03 中山大学 Mobile terminal portrait intelligent background replacement method based on mobile deep learning engine
CN112435237A (en) * 2020-11-24 2021-03-02 山西三友和智慧信息技术股份有限公司 Skin lesion segmentation method based on data enhancement and depth network
CN113744178A (en) * 2020-08-06 2021-12-03 西北师范大学 Skin lesion segmentation method based on convolution attention model

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106447658A (en) * 2016-09-26 2017-02-22 西北工业大学 Salient object detection method based on FCN (fully convolutional network) and CNN (convolutional neural network)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106447658A (en) * 2016-09-26 2017-02-22 西北工业大学 Salient object detection method based on FCN (fully convolutional network) and CNN (convolutional neural network)

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Manu Goyal et al., "Multi-class Semantic Segmentation of Skin Lesions via Fully Convolutional Networks", https://arxiv.org/abs/1711.10449 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109190752B (en) * 2018-07-27 2021-07-23 国家新闻出版广电总局广播科学研究院 Image semantic segmentation method based on global features and local features of deep learning
CN109190752A (en) * 2018-07-27 2019-01-11 国家新闻出版广电总局广播科学研究院 Image semantic segmentation method based on global and local features of deep learning
CN109241872B (en) * 2018-08-20 2022-03-18 电子科技大学 Fast image semantic segmentation method based on multistage network
CN109241872A (en) * 2018-08-20 2019-01-18 电子科技大学 Fast image semantic segmentation method based on multistage network
CN109493359A (en) * 2018-11-21 2019-03-19 中山大学 Skin lesion image segmentation method based on deep network
CN109785311A (en) * 2019-01-14 2019-05-21 深圳和而泰数据资源与云技术有限公司 Disease diagnosis method and related device
CN109785311B (en) * 2019-01-14 2021-06-04 深圳和而泰数据资源与云技术有限公司 Disease diagnosis device, electronic equipment and storage medium
CN109978893A (en) * 2019-03-26 2019-07-05 腾讯科技(深圳)有限公司 Training method, apparatus, device and storage medium for image semantic segmentation network
CN111369430A (en) * 2020-03-09 2020-07-03 中山大学 Mobile terminal portrait intelligent background replacement method based on mobile deep learning engine
CN111369430B (en) * 2020-03-09 2023-04-07 中山大学 Mobile terminal portrait intelligent background replacement method based on mobile deep learning engine
CN113744178A (en) * 2020-08-06 2021-12-03 西北师范大学 Skin lesion segmentation method based on convolution attention model
CN113744178B (en) * 2020-08-06 2023-10-20 西北师范大学 Skin lesion segmentation method based on convolution attention model
CN112435237A (en) * 2020-11-24 2021-03-02 山西三友和智慧信息技术股份有限公司 Skin lesion segmentation method based on data enhancement and depth network

Similar Documents

Publication Publication Date Title
CN108256527A (en) Multi-class semantic segmentation method for skin lesions based on end-to-end fully convolutional network
WO2020215985A1 (en) Medical image segmentation method and device, electronic device and storage medium
Li et al. Graph neural network for interpreting task-fmri biomarkers
CN109165645A (en) Image processing method, apparatus and related device
KR102045223B1 (en) Apparatus, method and computer program for analyzing bone age
Fan et al. Effect of image noise on the classification of skin lesions using deep convolutional neural networks
CN104484886B (en) Segmentation method and device for MR images
Yune et al. Beyond human perception: sexual dimorphism in hand and wrist radiographs is discernible by a deep learning model
CN109741317A (en) Medical image intelligent Evaluation method
Habtemariam et al. Cervix type and cervical cancer classification system using deep learning techniques
Abdullah et al. Multi-sectional views textural based SVM for MS lesion segmentation in multi-channels MRIs
Lang et al. Automatic localization of landmarks in craniomaxillofacial CBCT images using a local attention-based graph convolution network
Hermoza et al. Region proposals for saliency map refinement for weakly-supervised disease localisation and classification
Ozdemir et al. Age Estimation from Left-Hand Radiographs with Deep Learning Methods.
Tao et al. Highly efficient follicular segmentation in thyroid cytopathological whole slide image
Zhang et al. Multi-region saliency-aware learning for cross-domain placenta image segmentation
Kelly et al. Extracting complex lesion phenotypes in Zea mays
CN114332572A (en) Method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on saliency map guided hierarchical dense characteristic fusion network
Yücel et al. Mitotic cell detection in histopathological images of neuroendocrine tumors using improved YOLOv5 by transformer mechanism
Nigudgi et al. Lung cancer CT image classification using hybrid-SVM transfer learning approach
Ghomi et al. Segmentation of COVID-19 pneumonia lesions: A deep learning approach
Snell et al. HEp-2 fluorescence pattern classification
Qi et al. Age estimation from MR images via 3D convolutional neural network and densely connect
Zhang et al. Critical element prediction of tracheal intubation difficulty: Automatic Mallampati classification by jointly using handcrafted and attention-based deep features
Liang et al. Relative saliency model over multiple images with an application to yarn surface evaluation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication
WW01 Invention patent application withdrawn after publication

Application publication date: 20180706