CN109493359A - A kind of skin injury picture segmentation method based on depth network - Google Patents
- Publication number
- CN109493359A (application CN201811393429.6A)
- Authority
- CN
- China
- Prior art keywords
- picture
- segmentation
- method based
- skin
- pixels
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/143—Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
- G06T7/11—Region-based segmentation
- G06T2207/20076—Probabilistic image processing
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/20192—Edge enhancement; Edge preservation
- G06T2207/30088—Skin; Dermal
(all under G06T—Image data processing or generation, in general; G06T7/00—Image analysis; G06T2207/00—Indexing scheme for image analysis or image enhancement)
Abstract
The present invention relates to the field of artificial intelligence and, more specifically, to a skin lesion image segmentation method based on a deep network. The method requires no hand-crafted skin image features for the segmentation task; instead, it uses the training data to learn deep convolutional features suited to segmentation on its own. Preprocessing is very simple, consisting only of normalizing the image pixel values. In addition, whereas TDLS and the method of Jafari rely on filtering-based preprocessing to handle large variations in illumination and contrast, the present invention enriches the training data through data augmentation and lets the model learn the optimal feature representation for segmentation by itself. The invention exceeds existing methods on the true positive rate metric, and its running time on both GPU and CPU is far below that of existing models, so real-time skin image segmentation can be achieved. The invention also uses a fully connected conditional random field as a post-processing step, which effectively exploits low-level texture and color features and sharpens the segmentation of edge regions.
Description
Technical field
The present invention relates to the field of artificial intelligence, and more particularly to a skin lesion image segmentation method based on a deep network.
Background technique
Current skin image segmentation can be divided into two major classes according to the type of skin image used: methods based on dermoscopic images and methods based on images taken with an ordinary camera. For the segmentation of dermoscopic images, many research works already achieve good results, but acquiring dermoscopic images is relatively complex and costly, which has become a bottleneck for the related techniques. Current segmentation techniques therefore tend to work on skin images taken with an ordinary camera. As the camera functions of mobile phones and other mobile devices have matured, high-definition skin images are easy to obtain. Since these ordinary skin images vary greatly with illumination, shooting angle and other factors, they also place higher demands on the segmentation technique.
There are many research results for skin images taken with an ordinary camera. For example, Jeffrey proposed the TDLS segmentation method, which uses the texture saliency of the skin image, in 2012, and Jafari et al. proposed a segmentation model based on convolutional neural networks in 2016. However, the hand-crafted features on which the TDLS method relies are not well matched to the segmentation task, so its segmentation accuracy is low; the method is also inefficient, needing up to one minute to produce a complete segmentation of a skin image, which makes for a poor user experience. On this basis, Jafari proposed a segmentation method based on a deep convolutional network that automatically learns the required segmentation features from training samples, effectively improving segmentation performance. However, because this method extracts a fixed window around every pixel location and feeds it into the network to obtain that pixel's result, the total segmentation time is approximately the number of image pixels multiplied by the network running time. Even when inputs are batched, the running speed improves only slightly. The running time of Jafari's method is substantially improved on a GPU, but its speed on a CPU is still unsatisfactory and does not achieve real-time segmentation. In addition, methods based on deep convolutional networks have an inherent problem: the output segmentation is coarse and cannot fully preserve the edge information of the original image.
Summary of the invention
The present invention requires no hand-crafted skin image features for the segmentation task; instead, it uses the training data to learn deep convolutional features suited to segmentation on its own. The preprocessing of the invention is very simple, consisting only of normalizing the image pixel values. In addition, whereas TDLS and the method of Jafari use filtering-based preprocessing to cope with large variations in illumination and contrast, the present invention enriches the training data through data augmentation and lets the model learn the optimal feature representation for segmentation by itself. The invention exceeds existing methods on the true positive rate metric, and its running time on both GPU and CPU is far below that of existing models, so real-time skin image segmentation can be achieved. The invention also uses a fully connected conditional random field as a post-processing step, which effectively exploits low-level texture and color features and sharpens the segmentation of edge regions.
To achieve the above objective, the adopted technical solution is as follows:
A skin lesion image segmentation method based on a deep network, comprising the following steps (a minimal end-to-end sketch is given after this list):
Step S1: enhance and preprocess the training images;
Step S2: input the preprocessed training images into a convolutional neural network for training, obtain a preliminary segmentation result and a probability output, and adjust the parameters of the convolutional neural network according to the preliminary segmentation result and probability output;
Step S3: enhance and preprocess the test images;
Step S4: input the preprocessed test images into the trained convolutional neural network to obtain a preliminary segmentation result and a probability output;
Step S5: iterate the segmentation result and probability output in a fully connected conditional random field to obtain the final segmentation result.
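The patent does not name a software framework; the following minimal sketch, assuming PyTorch and a toy fully convolutional backbone (both assumptions, not stated in the original), only illustrates how steps S1–S5 fit together. The augmentation, the CRF formulas, and the evaluation metrics are sketched separately later in this description.

```python
# Minimal sketch of steps S1-S5 (assumptions: PyTorch, a toy fully convolutional
# network; the CRF refinement of step S5 is sketched later in this description).
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Stand-in fully convolutional network: RGB image in, per-pixel lesion logit out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(16, 1, 1)   # one channel: lesion vs. background

    def forward(self, x):
        return self.classifier(self.features(x))   # logits, same spatial size as input

def train_step(model, optimizer, images, masks):
    """Step S2: forward pass, two-class cross-entropy loss, parameter update.
    images: N x 3 x H x W floats, masks: N x 1 x H x W binary ground truth."""
    criterion = nn.BCEWithLogitsLoss()              # binary cross entropy on logits
    logits = model(images)                          # preliminary segmentation logits
    loss = criterion(logits, masks.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def predict_probabilities(model, image):
    """Step S4: one forward pass of the trained FCN yields the whole probability map."""
    model.eval()
    return torch.sigmoid(model(image.unsqueeze(0)))[0, 0]   # H x W lesion probabilities
```

A single forward pass produces the probability output for every pixel location, which is what gives the running-time advantage over window-based models; this probability map is then handed to the fully connected CRF of step S5.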
Preferably, step S1 specifically includes the following steps:
Step S101: crop a tight rectangle (box No. 1) in the image that just encloses the lesion region;
Step S102: randomly crop a second rectangle (box No. 2) that contains box No. 1;
Step S103: rescale the randomly cropped image to a fixed image size;
Step S104: after scaling, introduce random noise into the image, including randomly changing the image brightness and contrast;
Step S105: normalize the image pixel values so that the processed image has mean 0 and variance 1.
Preferably, the fixed image size in step S103 is 224 × 224.
Preferably, step S2 specifically includes the following steps:
Step S201: define the energy function of the conditional random field, whose first term is the unary potential ψ_u(y_i) = −log P(y_i). Here y denotes the prediction result of the fully convolutional neural network, the subscript i indexes the pixel location, and P(y_i) is the probability with which the network predicts class y_i at pixel location i;
Step S202: define the second term of the energy function, the pairwise potential, in which μ is the label compatibility function, f_i and f_j are the image features at pixel locations i and j, and κ^(m) is the m-th kernel function with weight ω^(m);
Step S203: use two kernel functions whose feature inputs consist of the pixel positions and the RGB color information, i.e. p_i, p_j, I_i, I_j in the formula, with label compatibility μ(y_i, y_j) = [y_i ≠ y_j].
Preferably, in step S2 the convolutional neural network is trained with a two-class cross-entropy loss function.
Compared with the prior art, the beneficial effects of the present invention are:
1. The present invention proposes an effective data augmentation scheme. Existing data augmentation crops windows at random, so some cropped images cannot be guaranteed to contain the complete lesion. In contrast, the data augmentation scheme of the present invention first computes a tight lesion region and then crops a rectangle that contains the entire lesion region, which effectively guarantees the completeness of the lesion and keeps the training and test data distributions consistent.
2. The present invention learns a fully convolutional neural network on the skin image set, so the results for all pixel locations are obtained with only a single forward pass of the network. Compared with window-based models, the fully convolutional network of the present invention effectively avoids repeated computation of convolutional features, so the running time on both CPU and GPU is greatly reduced and real-time segmentation can be achieved.
3. The segmentation performance of the present invention is good.
4. The present invention uses a fully connected conditional random field as a post-processing step, which can sharpen the segmentation result in edge regions. Neither window-based models nor fully convolutional networks take low-level image characteristics into account, so their segmentation results cannot preserve low-level structure such as texture and color; as a graphical model, the fully connected conditional random field can make full use of this information to sharpen the segmentation at the lesion boundary and remove small mis-segmented regions.
Brief description of the drawings
Fig. 1 is the flow chart of the invention.
Fig. 2 shows the influence of data augmentation on the segmentation results.
Fig. 3 shows the segmentation results of different segmentation methods.
Fig. 4 compares the time efficiency of different segmentation methods.
Detailed description of the embodiments
The attached figures are only for illustrative purposes and should not be understood as limiting the patent.
The present invention is further described below in conjunction with the drawings and embodiments.
Embodiment 1
As shown in Fig. 1, a skin lesion image segmentation method based on a deep network comprises the following steps:
Step S1: enhance and preprocess the training images;
Step S2: input the preprocessed training images into a convolutional neural network for training, obtain a preliminary segmentation result and a probability output, and adjust the parameters of the convolutional neural network according to the preliminary segmentation result and probability output;
Step S3: enhance and preprocess the test images;
Step S4: input the preprocessed test images into the trained convolutional neural network to obtain a preliminary segmentation result and a probability output;
Step S5: iterate the segmentation result and probability output in a fully connected conditional random field to obtain the final segmentation result.
Preferably, step S1 specifically includes the following steps (a sketch of this augmentation pipeline is given after the preferred image size below):
Step S101: crop a tight rectangle (box No. 1) in the image that just encloses the lesion region;
Step S102: randomly crop a second rectangle (box No. 2) that contains box No. 1;
Step S103: rescale the randomly cropped image to a fixed image size;
Step S104: after scaling, introduce random noise into the image, including randomly changing the image brightness and contrast;
Step S105: normalize the image pixel values so that the processed image has mean 0 and variance 1.
Preferably, the fixed image size in step S103 is 224 × 224.
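A minimal sketch of steps S101–S105, assuming NumPy and OpenCV and assuming the lesion region is available as a binary ground-truth mask (the patent does not state how the tight box is obtained); the function name and the jitter ranges are illustrative assumptions, not values taken from the patent.

```python
import numpy as np
import cv2

def augment_and_preprocess(image, lesion_mask, out_size=224, rng=np.random):
    """Steps S101-S105: tight lesion box, random enclosing crop, resize,
    brightness/contrast jitter, per-image normalization.
    Assumes lesion_mask contains at least one lesion pixel."""
    h, w = lesion_mask.shape
    ys, xs = np.nonzero(lesion_mask)
    # S101: tight box (box No. 1) that just encloses the lesion
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    # S102: random box (box No. 2) that contains box No. 1
    top    = rng.randint(0, y0 + 1)
    left   = rng.randint(0, x0 + 1)
    bottom = rng.randint(y1, h)
    right  = rng.randint(x1, w)
    crop = image[top:bottom + 1, left:right + 1]
    # S103: rescale the crop to the fixed size (224 x 224 preferred)
    crop = cv2.resize(crop, (out_size, out_size)).astype(np.float32)
    # S104: random brightness and contrast changes (ranges are assumptions)
    alpha = rng.uniform(0.8, 1.2)    # contrast factor
    beta  = rng.uniform(-20, 20)     # brightness shift
    crop = alpha * crop + beta
    # S105: normalize pixel values to zero mean and unit variance
    crop = (crop - crop.mean()) / (crop.std() + 1e-8)
    return crop
```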
Preferably, step S2 specifically includes the following steps (a formulation consistent with the symbols below is sketched after step S203):
Step S201: define the energy function of the conditional random field, whose first term is the unary potential ψ_u(y_i) = −log P(y_i). Here y denotes the prediction result of the fully convolutional neural network, the subscript i indexes the pixel location, and P(y_i) is the probability with which the network predicts class y_i at pixel location i;
Step S202: define the second term of the energy function, the pairwise potential, in which μ is the label compatibility function, f_i and f_j are the image features at pixel locations i and j, and κ^(m) is the m-th kernel function with weight ω^(m);
Step S203: use two kernel functions whose feature inputs consist of the pixel positions and the RGB color information, i.e. p_i, p_j, I_i, I_j in the formula, with label compatibility μ(y_i, y_j) = [y_i ≠ y_j].
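The formula images are not reproduced in this text. A fully connected CRF energy consistent with the symbols defined in steps S201–S203 (the standard Krähenbühl–Koltun formulation with an appearance kernel and a smoothness kernel; the exact kernels used by the patent are an assumption) would read:

```latex
E(y) = \sum_i \psi_u(y_i) + \sum_{i<j} \psi_p(y_i, y_j), \qquad
\psi_u(y_i) = -\log P(y_i)

\psi_p(y_i, y_j) = \mu(y_i, y_j) \sum_{m=1}^{2} \omega^{(m)} \kappa^{(m)}(f_i, f_j),
\qquad \mu(y_i, y_j) = [y_i \neq y_j]

\kappa^{(1)}(f_i, f_j) = \exp\!\Big(-\frac{\lVert p_i - p_j \rVert^2}{2\theta_\alpha^2}
                                    -\frac{\lVert I_i - I_j \rVert^2}{2\theta_\beta^2}\Big)
\quad \text{(appearance kernel: positions and RGB colors)}

\kappa^{(2)}(f_i, f_j) = \exp\!\Big(-\frac{\lVert p_i - p_j \rVert^2}{2\theta_\gamma^2}\Big)
\quad \text{(smoothness kernel: positions only)}
```

Here θ_α, θ_β and θ_γ are kernel bandwidth hyperparameters; the patent text does not specify their values.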
Preferably, in step S2 the convolutional neural network is trained with a two-class cross-entropy loss function.
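Step S5 then refines the probability output with the fully connected CRF by iterated (mean-field) inference. A post-processing sketch, assuming the third-party pydensecrf package (the patent does not name any implementation) and placeholder kernel parameters:

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def dense_crf_refine(image_rgb, prob_lesion, n_iters=10):
    """Refine an H x W lesion-probability map with a fully connected CRF.
    image_rgb: uint8 H x W x 3 original image; prob_lesion: float H x W in [0, 1]."""
    h, w = prob_lesion.shape
    # Stack background / lesion probabilities -> unary potentials (-log P)
    probs = np.stack([1.0 - prob_lesion, prob_lesion]).astype(np.float32)
    crf = dcrf.DenseCRF2D(w, h, 2)
    crf.setUnaryEnergy(unary_from_softmax(probs))
    # Smoothness kernel: pixel positions only
    crf.addPairwiseGaussian(sxy=3, compat=3)
    # Appearance kernel: pixel positions and RGB colors
    crf.addPairwiseBilateral(sxy=60, srgb=10,
                             rgbim=np.ascontiguousarray(image_rgb), compat=5)
    q = np.array(crf.inference(n_iters)).reshape(2, h, w)
    return q.argmax(axis=0).astype(np.uint8)   # final binary segmentation
```

The Gaussian pairwise term corresponds to the position-only smoothness kernel and the bilateral term to the position-plus-RGB appearance kernel of step S203; the kernel widths, compatibility weights and iteration count shown here are placeholder values, not values from the patent.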
Embodiment 2
In this embodiment, the present invention is compared with the existing TDLS and Jafari methods in terms of segmentation results and model running speed.
For a fair comparison, this embodiment uses the same experimental environment, and the training stage of every model uses the 126 images of the DermQuest database as training data, comprising 66 melanoma images and 60 non-melanoma images. Since the data are limited, a cross-validation scheme is adopted: the training data are randomly divided into 4 equally sized parts, 3 of the parts are used in turn for training the model while the remaining part serves as the evaluation set, and the average of the 4 experimental results is taken at the end. For evaluation metrics, three indices are used: true positive rate, true negative rate and accuracy (a sketch of this protocol follows).
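A sketch of the evaluation protocol described above, assuming NumPy; the function names are illustrative, and the 4-fold split and the three pixel-level metrics follow their usual definitions.

```python
import numpy as np

def pixel_metrics(pred_mask, gt_mask):
    """True positive rate, true negative rate and accuracy for binary masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tpr = tp / max(tp + fn, 1)      # sensitivity on lesion pixels
    tnr = tn / max(tn + fp, 1)      # specificity on background pixels
    acc = (tp + tn) / pred.size
    return tpr, tnr, acc

def four_fold_splits(n_images=126, seed=0):
    """Randomly split image indices into 4 roughly equal folds (3 train + 1 eval)."""
    idx = np.random.default_rng(seed).permutation(n_images)
    folds = np.array_split(idx, 4)
    for k in range(4):
        train = np.concatenate([folds[j] for j in range(4) if j != k])
        yield train, folds[k]
```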
Before the comparison, an experiment is first run to verify the necessity of the data augmentation module of the present invention; the results are shown in Fig. 2. In the data augmentation column, × indicates that no data augmentation was used and √ indicates that the proposed data augmentation was used. It can be seen that data augmentation has an obvious effect on the true positive rate, improving it by more than 12 percentage points.
Fig. 3 gives the segmentation results of the different methods. It can be seen that the segmentation result of the present invention is higher than those of the TDLS and Jafari methods on the true positive rate index.
Fig. 4 gives the running time comparison of the different segmentation methods. To evaluate the running times of the different models accurately, all methods are run on the same machine. Since the Jafari method has better segmentation accuracy than TDLS, only the present method and Jafari are compared here. Each method is run 10 times in the test, and the mean of these 10 running times is taken as the method's running time. To obtain the segmentation result of a single 400×600 image, the Jafari method must run more than 1800 forward batches even with a batch size of 128 before it has a result for every pixel location, whereas the present invention, because it uses a fully convolutional neural network, only needs to run the network once to obtain the result for the whole image. In the end, whether on CPU or GPU, the running time of the present invention is significantly shorter than that of the Jafari method.
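As a rough consistency check on the batch count implied above (the original states only the ">1800" figure, not the arithmetic):

```latex
400 \times 600 = 240{,}000 \ \text{pixel locations}, \qquad 240000 / 128 = 1875 \ \text{forward batches}
```

which matches the "more than 1800" figure for the window-based model, while the fully convolutional network needs a single forward pass.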
Obviously, the above embodiments are merely examples given to clearly illustrate the present invention and are not a limitation on its embodiments. For those of ordinary skill in the art, other variations or changes in different forms can be made on the basis of the above description. It is neither necessary nor possible to exhaust all embodiments here. Any modifications, equivalent replacements and improvements made within the spirit and principle of the invention shall fall within the protection scope of the claims of the present invention.
Claims (5)
1. A skin lesion image segmentation method based on a deep network, characterized by comprising the following steps:
Step S1: enhance and preprocess the training images;
Step S2: input the preprocessed training images into a convolutional neural network for training, obtain a preliminary segmentation result and a probability output, and adjust the parameters of the convolutional neural network according to the preliminary segmentation result and probability output;
Step S3: enhance and preprocess the test images;
Step S4: input the preprocessed test images into the trained convolutional neural network to obtain a preliminary segmentation result and a probability output;
Step S5: iterate the segmentation result and probability output in a fully connected conditional random field to obtain the final segmentation result.
2. The skin lesion image segmentation method based on a deep network according to claim 1, characterized in that step S1 specifically includes the following steps:
Step S101: crop a tight rectangle (box No. 1) in the image that just encloses the lesion region;
Step S102: randomly crop a second rectangle (box No. 2) that contains box No. 1;
Step S103: rescale the randomly cropped image to a fixed image size;
Step S104: after scaling, introduce random noise into the image, including randomly changing the image brightness and contrast;
Step S105: normalize the image pixel values so that the processed image has mean 0 and variance 1.
3. The skin lesion image segmentation method based on a deep network according to claim 2, characterized in that the fixed image size in step S103 is 224 × 224.
4. The skin lesion image segmentation method based on a deep network according to claim 2, characterized in that step S2 specifically includes the following steps:
Step S201: define the energy function of the conditional random field, whose first term is the unary potential ψ_u(y_i) = −log P(y_i), where y denotes the prediction result of the fully convolutional neural network, the subscript i indexes the pixel location, and P(y_i) is the probability with which the network predicts class y_i at pixel location i;
Step S202: define the second term of the energy function, the pairwise potential, in which μ is the label compatibility function, f_i and f_j are the image features at pixel locations i and j, and κ^(m) is the m-th kernel function with weight ω^(m);
Step S203: use two kernel functions whose feature inputs consist of the pixel positions and the RGB color information, i.e. p_i, p_j, I_i, I_j in the formula, with label compatibility μ(y_i, y_j) = [y_i ≠ y_j].
5. The skin lesion image segmentation method based on a deep network according to claim 1, characterized in that in step S2 the convolutional neural network is trained with a two-class cross-entropy loss function.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811393429.6A CN109493359A (en) | 2018-11-21 | 2018-11-21 | A kind of skin injury picture segmentation method based on depth network |
Publications (1)
Publication Number | Publication Date
---|---
CN109493359A (en) | 2019-03-19
Family
ID=65697278
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811393429.6A Pending CN109493359A (en) | 2018-11-21 | 2018-11-21 | A kind of skin injury picture segmentation method based on depth network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109493359A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114757951A (en) * | 2022-06-15 | 2022-07-15 | 深圳瀚维智能医疗科技有限公司 | Sign data fusion method, data fusion equipment and readable storage medium |
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170039704A1 (en) * | 2015-06-17 | 2017-02-09 | Stoecker & Associates, LLC | Detection of Borders of Benign and Malignant Lesions Including Melanoma and Basal Cell Carcinoma Using a Geodesic Active Contour (GAC) Technique |
US20180122072A1 (en) * | 2016-02-19 | 2018-05-03 | International Business Machines Corporation | Structure-preserving composite model for skin lesion segmentation |
US20180061046A1 (en) * | 2016-08-31 | 2018-03-01 | International Business Machines Corporation | Skin lesion segmentation using deep convolution networks guided by local unsupervised learning |
CN107203999A (en) * | 2017-04-28 | 2017-09-26 | 北京航空航天大学 | A kind of skin lens image automatic division method based on full convolutional neural networks |
CN107767380A (en) * | 2017-12-06 | 2018-03-06 | 电子科技大学 | A kind of compound visual field skin lens image dividing method of high-resolution based on global empty convolution |
CN107862695A (en) * | 2017-12-06 | 2018-03-30 | 电子科技大学 | A kind of modified image segmentation training method based on full convolutional neural networks |
CN107958271A (en) * | 2017-12-06 | 2018-04-24 | 电子科技大学 | The cutaneous lesions deep learning identifying system of Analysis On Multi-scale Features based on expansion convolution |
CN108256527A (en) * | 2018-01-23 | 2018-07-06 | 深圳市唯特视科技有限公司 | A kind of cutaneous lesions multiclass semantic segmentation method based on end-to-end full convolutional network |
CN108062756A (en) * | 2018-01-29 | 2018-05-22 | 重庆理工大学 | Image, semantic dividing method based on the full convolutional network of depth and condition random field |
CN108510502A (en) * | 2018-03-08 | 2018-09-07 | 华南理工大学 | Melanoma picture tissue segmentation methods based on deep neural network and system |
CN108830853A (en) * | 2018-07-20 | 2018-11-16 | 东北大学 | A kind of melanoma aided diagnosis method based on artificial intelligence |
Non-Patent Citations (3)
Title |
---|
HE, XINZI ET AL: "Skin Lesion Segmentation via Deep RefineNet", 《 LECTURE NOTES IN COMPUTER SCIENCE》 * |
MENG YANG ET AL: "Fast Skin Lesion Segmentation via Fully Convolutional Network with Residual Architecture and CRF", 《2018 24TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR)》 * |
YUAN YADING ET AL: "Automatic Skin Lesion Segmentation Using Deep Fully Convolutional Networks With Jaccard Distance", 《IEEE TRANSACTIONS ON MEDICAL IMAGING》 * |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190319 |