CN109886986A - A kind of skin lens image dividing method based on multiple-limb convolutional neural networks - Google Patents
- Publication number: CN109886986A (application CN201910062500.0A)
- Authority: CN (China)
- Prior art keywords: skin, layer, image, branch, network
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The present invention discloses a dermoscopic image segmentation method based on a multi-branch convolutional neural network, comprising the following steps: 1) training sample collection; 2) image augmentation; 3) multi-branch convolutional neural network model design; 4) multi-branch convolutional network training; 5) skin lesion probability map generation; 6) segmentation result extraction. Advantages and effects of the present invention: tailored to the characteristics of dermoscopic image data, appropriate image transformations are used to effectively augment the training dataset, ensuring effective network training and strong generalization; the convolutional neural network of the present invention contains multiple branches that fuse rich semantic and detail information, so that, compared with ordinary networks, it recovers skin lesion edges better and obtains more accurate lesion segmentation results; the present invention is a fully automatic segmentation scheme that only requires the dermoscopic image to be segmented as input and automatically produces its segmentation result, with no extra processing required; it is efficient and simple.
Description
Technical field
The invention belongs to the field of computer-aided diagnosis, and in particular relates to a dermoscopic image segmentation method based on a multi-branch convolutional neural network.
Background technique
Cutaneous melanoma may be benign or malignant. Malignant cutaneous melanoma is extremely harmful: if a patient does not receive timely treatment at an early stage, it easily leads to death. For cutaneous melanoma, the most effective treatment is early detection followed by surgical resection of the lesion. Dermoscopy, also known as epiluminescence microscopy, produces dermoscopic images of very high resolution and clarity. Automatic diagnosis of dermoscopic images can avoid the diagnostic errors caused by subjectivity.
In skin lesion diagnosis, the shape and boundary of the lesion region are important diagnostic evidence. Dermoscopic image segmentation exists precisely to obtain an accurate lesion region, and it is a key step in the automatic computer-aided diagnosis pipeline. However, because different lesions vary greatly in shape, color, and other properties, and because interference such as hair and bubbles is often present during dermoscopic image acquisition, dermoscopic image segmentation remains a challenging problem.
Current dermoscopic image segmentation methods mainly comprise methods based on edges, thresholds, or regions, and methods based on supervised learning. Edge-based methods compute the gray-level gradient of the image with classical operators such as Sobel, Laplacian, or Canny, and extract the regions of large gradient change as the lesion boundary; they perform well only on dermoscopic images with sharp lesion boundaries and no other interference. Threshold-based methods exploit the fact that the lesion color usually differs from the background skin color, setting one or more thresholds to separate the regions; the computation is simple and direct, but the thresholds are hard to choose. Region-based methods merge neighboring similar pixels or subregions by region growing until the final lesion region is obtained; they are suitable for dermoscopic images whose lesion interior is homogeneous. Supervised-learning methods design relevant features by hand, or mine latent features from the data automatically, and then train a classifier on these features to decide whether a subregion or pixel belongs to the lesion or to normal skin; they depend heavily on feature design and selection, and adapt poorly to complex dermoscopic images.
Convolutional neural networks are now widely used in image processing and have achieved outstanding performance on many tasks, such as object classification, object detection, and object segmentation. A convolutional neural network can automatically learn high-level features from training data and is therefore highly adaptable. The present invention designs a completely new convolutional neural network model for dermoscopic image segmentation. The network has multiple branches: the low-level branches extract detail information and the high-level branches extract semantic information, and each branch computes its own loss for back-propagation training, ensuring that every branch extracts features effectively. Finally, the feature maps of the multiple branches are fused and up-sampled to obtain an accurate dermoscopic image segmentation result.
Summary of the invention
The purpose of the present invention is to provide a dermoscopic image segmentation method based on a multi-branch convolutional neural network that automatically and accurately extracts the skin lesion region, assisting the subsequent links of a dermoscopic diagnosis system and improving lesion diagnosis accuracy. By learning high-level features automatically from the raw image data, the present invention is robust to interference such as hair and bubbles in the image, and by fusing the features of multiple branches it outputs lesion segmentation results with accurate boundaries.
The present invention is a dermoscopic image segmentation method based on a multi-branch convolutional neural network, whose specific technical solution comprises the following steps:
Step 1: training sample collection
The dermoscopic images used in the present invention comprise 2750 original dermoscopic images from an international public dataset (2000 for training, 750 for validation and testing). The lesion-region ground-truth maps were labeled manually by dermatologists; each ground-truth map is a binary image in which 1 denotes the lesion region and 0 denotes healthy skin. The lesions in the dataset vary greatly in shape, color, texture, and other aspects, and the image resolutions range from 542 × 718 to 2848 × 4288. For convenience of processing, the original images and the lesion ground-truth maps are uniformly scaled to 512 × 512.
Step 2: image augmentation
Generally, the more neurons a convolutional neural network has, the larger its capacity: it can fit more complex mappings and perform well on complex tasks. However, more neurons also mean many more parameters in the network, and during subsequent training an over-fitting problem arises easily if the training data are insufficient. Therefore, in order to train a good network, the present invention augments the training images in several ways.
Considering that images are captured at different angles, each original training image is horizontally flipped, vertically flipped, and rotated by 90°, 180°, and 270°. In addition, since some dermoscopic images were observed to have black borders at the top, bottom, left, and right, each original training image is also translated by 25 pixels in each of the four directions. The lesion ground-truth map undergoes the same transformations as its image. In the end, the 2000 original training images are expanded into 20000 training images.
Step 3: multi-branch convolutional neural network model design
In a deep neural network model, the shallow layers extract detail information such as the edges and texture in the image, so the shallow feature maps contain much of the lesion boundary information in dermoscopic images, while the deep layers integrate the low-level features extracted by the shallow layers into rich high-level semantic features that facilitate classification. The dermoscopic segmentation task needs both: rich semantic information to classify lesion versus background accurately, and detail information to extract the lesion boundary precisely. To this end, the present invention constructs a convolutional neural network with an encoder-decoder architecture, in which the encoder extracts semantic and detail information and the decoder restores the feature-map size to obtain the final segmentation result. In addition, 4 branches are built at different layers of the encoder to extract features, and the features of the different branches are finally fused to obtain an accurate dermoscopic image segmentation result. The specific design is as follows:
1. Encoder structure design: in an ordinary convolutional network, each convolutional layer usually receives only the output of the previous layer as input, so the information before that layer cannot be exploited well. In the convolutional network constructed by the present invention, the output of every convolutional layer within a convolution block is fed as input to all subsequent convolutional layers, so that the learned low-level features are fully reused; this prevents convolutional layers from learning duplicate features, reduces information redundancy, and eases network training. The overall encoder framework is: input image → convolutional layer → pooling layer → 1st convolution block → 1st pooling block → 2nd convolution block → 2nd pooling block → 3rd convolution block. The details are as follows:
1) The image enters the encoder through 1 convolutional layer (kernel size 7 × 7, stride 2, output dimension 24) and 1 max-pooling layer (kernel size 3 × 3, stride 2), and then enters the 1st convolution block. With Input denoting the input image, Conv7×7 the 7 × 7 convolutional layer, MaxPool the max-pooling layer, BN the batch-normalization layer, and ReLU the rectified-linear-unit layer, this stem can be expressed as: Input → Conv7×7 → BN → ReLU → MaxPool. Since the strides of Conv7×7 and MaxPool are both 2, the resolution of the resulting feature map is 1/4 that of the input image; it is then fed into the 1st convolution block.
2) The 1st, 2nd, and 3rd convolution blocks each consist of 6 convolutional layers. With Conv3×3 denoting a convolutional layer (kernel size 3 × 3, stride 1, output dimension 12), the specific structure of each layer in a convolution block is BN → ReLU → Conv3×3. The specific structure of the convolution block designed by the present invention is shown in Fig. 1.
3) The 1st and 2nd pooling blocks perform dimension reduction. Within a convolution block, all feature maps have the same size. To reduce the feature-map size (and thus extract higher-level semantic features) and to reduce the feature-map dimension (and thus the parameter count), the present invention places a pooling block after each of the 1st and 2nd convolution blocks. With X denoting the input feature map (with N channels), Conv3×3 a convolutional layer (kernel size 3 × 3, stride 1, output dimension N/2), and MaxPool a max-pooling layer (kernel size 2 × 2, stride 2), the specific structure of the pooling block designed by the present invention is: X → BN → ReLU → Conv3×3 → MaxPool. After the pooling block, the channel count, width, and height of the feature map output by the 1st or 2nd convolution block are each half of the original.
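The encoder components described in 1)–3) can be sketched in PyTorch roughly as follows. The module names are mine, and the padding values are assumptions chosen to reproduce the stated size ratios (1/4 of the input after the stem, unchanged within a convolution block, halved by a pooling block):

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Six BN -> ReLU -> Conv3x3 layers (12 output channels each); every
    layer's output is concatenated to the inputs of all later layers, so
    the block outputs in_ch + 6*12 channels at unchanged resolution."""
    def __init__(self, in_ch, growth=12, n_layers=6):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth, 3, stride=1, padding=1)))
            ch += growth
        self.out_channels = ch

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)   # reuse all earlier features
        return x

class PoolBlock(nn.Module):
    """BN -> ReLU -> Conv3x3 (halving channels) -> 2x2 max pool, so the
    output has half the channels and half the width/height."""
    def __init__(self, in_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, in_ch // 2, 3, stride=1, padding=1),
            nn.MaxPool2d(2, stride=2))
        self.out_channels = in_ch // 2

    def forward(self, x):
        return self.body(x)

# Stem: Input -> Conv7x7 (stride 2, 24 ch) -> BN -> ReLU -> MaxPool (3x3, stride 2)
stem = nn.Sequential(
    nn.Conv2d(3, 24, 7, stride=2, padding=3),
    nn.BatchNorm2d(24), nn.ReLU(inplace=True),
    nn.MaxPool2d(3, stride=2, padding=1))
```

With a 24-channel stem, growth 12, and 6 layers per block, the channel counts work out to 24 → 96 (block 1) → 48 (pool 1) → 120 (block 2) → 60 (pool 2) → 132 (block 3), consistent with the halving rule stated above.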
2. Multi-branch structure design: in the convolutional neural network designed by the present invention, 4 branches are built at different layers to extract features, and the features of the different branches are finally fused by concatenation, as shown by the 1st, 2nd, 3rd, and 4th branches in Fig. 2. Each branch extracts information through 1 convolutional layer (kernel size 1 × 1, stride 1, output dimension 2); the low-level branches mainly extract detail information and the high-level branches mainly extract semantic information. Because of the dimension reduction in the network, the feature-map sizes output by the branches differ: the width and height of the 2nd branch's feature map are 1/2 those of the 1st branch, and those of the 3rd and 4th branches are 1/4 those of the 1st branch. Bilinear interpolation is therefore used to up-sample the feature maps output by the 2nd, 3rd, and 4th branches to the same size as the 1st branch's feature map before the concatenation. The fused feature map is then fed into the decoder on the trunk.
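A minimal sketch of the branch-fusion idea: each branch applies a 1 × 1 convolution with 2 output channels, the deeper branches are bilinearly up-sampled to the 1st branch's resolution, and the results are concatenated into an 8-channel map. The channel counts in the example are illustrative assumptions; the actual tap points follow Fig. 2, which is not reproduced here:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BranchFusion(nn.Module):
    """1x1 conv (2 channels) per branch, bilinear up-sampling of the
    deeper branches to the 1st branch's resolution, then channel
    concatenation -> an 8-channel fused feature map."""
    def __init__(self, in_channels):               # e.g. (96, 48, 120, 132)
        super().__init__()
        self.reduce = nn.ModuleList(
            nn.Conv2d(c, 2, kernel_size=1, stride=1) for c in in_channels)

    def forward(self, feats):
        target = feats[0].shape[-2:]               # 1st branch sets the size
        outs = []
        for conv, f in zip(self.reduce, feats):
            f = conv(f)
            if f.shape[-2:] != target:             # branches 2-4 are smaller
                f = F.interpolate(f, size=target, mode="bilinear",
                                  align_corners=False)
            outs.append(f)
        return torch.cat(outs, dim=1)              # concat fusion, 4*2 = 8 ch
```
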
3. Decoder structure design: since the spatial resolution of the fused feature map in the encoder is 1/4 that of the original input image, the present invention designs a decoder to restore the feature-map spatial size. The decoder mainly contains 2 convolutional layers and 1 four-fold up-sampling layer. With X denoting the input fused feature map, Conv1×1_1 the 1st convolutional layer (kernel size 1 × 1, stride 1, output dimension 4), Conv1×1_2 the 2nd convolutional layer (kernel size 1 × 1, stride 1, output dimension 2), and Upsample the 4-fold bilinear-interpolation up-sampling layer, the specific structure of the decoder designed by the present invention is: X → BN → ReLU → Conv1×1_1 → ReLU → Conv1×1_2 → Upsample.
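The decoder sequence above (X → BN → ReLU → Conv1×1_1 → ReLU → Conv1×1_2 → Upsample) can be sketched directly; the 8-channel input width and align_corners=False are assumptions not stated in the patent:

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """BN -> ReLU -> Conv1x1 (4 ch) -> ReLU -> Conv1x1 (2 ch) -> 4x bilinear
    up-sampling, restoring the 1/4-resolution fused feature map to the
    input image size with 2 output channels (lesion / background)."""
    def __init__(self, in_ch=8):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, 4, kernel_size=1, stride=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(4, 2, kernel_size=1, stride=1),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False))

    def forward(self, x):
        return self.body(x)
```
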
Step 4: multi-branch convolutional network training
The feature map output by the decoder is passed through a Softmax function to obtain the probability map of the lesion region, and the loss is computed by comparing it with the segmentation ground-truth map via the cross-entropy function. The loss is back-propagated through the network to obtain the gradients of the network parameters, which are then adjusted by gradient descent to reduce the loss value until the network is optimal. The cross-entropy loss function is computed as follows:
Loss = -(1/(W × H)) Σ_{i=1..W} Σ_{j=1..H} [ y_ij · log(p_ij) + (1 - y_ij) · log(1 - p_ij) ]
where W and H are the width and height of the segmentation ground-truth map, y_ij is the true class of pixel (i, j) (1 for lesion, 0 for background), and p_ij is the probability that pixel (i, j) is lesion.
In addition, to ensure that the information extracted by the 4 branches is effective, the same method is applied to the output feature map of each branch to compute the branch losses Loss_1, Loss_2, Loss_3, and Loss_4. Since the branch feature maps do not directly generate the final segmentation result, each branch loss is multiplied by the coefficient 0.5 before being added to the segmentation loss on the trunk, so that it does not influence the segmentation accuracy too strongly. The resulting total loss, by which the network parameters are actually trained through back-propagation, can be expressed as:
Loss_all = Loss + 0.5 × (Loss_1 + Loss_2 + Loss_3 + Loss_4)
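The trunk loss and the 0.5-weighted branch losses can be sketched as follows. Up-sampling each branch map to the truth-map size before comparison is an assumption, since the patent does not state at which resolution the branch losses are evaluated:

```python
import torch
import torch.nn.functional as F

def segmentation_loss(logits, truth):
    """Pixel-wise cross-entropy between the softmax lesion probability
    p_ij and the binary truth y_ij, averaged over all W*H pixels."""
    p = F.softmax(logits, dim=1)[:, 1]          # channel 1 = lesion probability
    p = p.clamp(1e-7, 1 - 1e-7)                 # guard log(0)
    return -(truth * p.log() + (1 - truth) * (1 - p).log()).mean()

def total_loss(trunk_logits, branch_logits, truth):
    """Loss_all = Loss + 0.5*(Loss_1 + Loss_2 + Loss_3 + Loss_4)."""
    loss = segmentation_loss(trunk_logits, truth)
    for b in branch_logits:
        if b.shape[-2:] != truth.shape[-2:]:    # branch maps may be lower-res
            b = F.interpolate(b, size=truth.shape[-2:], mode="bilinear",
                              align_corners=False)
        loss = loss + 0.5 * segmentation_loss(b, truth)
    return loss
```
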
Step 5: skin lesion probability map generation
The network designed by the present invention takes one dermoscopic image as input and outputs its skin lesion probability map. To improve robustness, before being fed into the network the image to be segmented is horizontally flipped, vertically flipped, and rotated by 90°, 180°, and 270°; together with the original image this gives 6 images in total, which are fed into the segmentation network separately to obtain 6 lesion probability maps. Then, according to the transformation applied to the corresponding input image, the inverse transformation is applied to each probability map, and finally the 6 probability maps are averaged to obtain the final lesion probability map.
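The test-time augmentation of step 5 and the thresholding of step 6 can be sketched together as follows. `predict` is a hypothetical callable standing in for the trained network, mapping an H×W(×C) image to an H×W lesion-probability map:

```python
import numpy as np

def tta_probability(predict, image):
    """Average the lesion-probability maps over the original image and
    five transforms (horizontal/vertical flip, 90/180/270-degree
    rotations), inverting each transform on its probability map."""
    transforms = [
        (lambda a: a, lambda a: a),                        # original
        (lambda a: a[:, ::-1], lambda a: a[:, ::-1]),      # horizontal flip
        (lambda a: a[::-1, :], lambda a: a[::-1, :]),      # vertical flip
        (lambda a: np.rot90(a, 1), lambda a: np.rot90(a, -1)),
        (lambda a: np.rot90(a, 2), lambda a: np.rot90(a, -2)),
        (lambda a: np.rot90(a, 3), lambda a: np.rot90(a, -3)),
    ]
    maps = [inv(predict(fwd(image).copy())) for fwd, inv in transforms]
    return np.mean(maps, axis=0)

def segment(predict, image, threshold=0.5):
    """Step 6: pixels with averaged probability above 0.5 are lesion (1),
    the rest background skin (0)."""
    return (tta_probability(predict, image) > threshold).astype(np.uint8)
```
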
Step 6: segmentation result extraction
After the lesion probability map is obtained, the threshold is set to 0.5: pixels with probability greater than 0.5 are regarded as lesion pixels and the rest as background skin, yielding the segmentation result of the image to be segmented.
The advantages and effects of the dermoscopic image segmentation method based on a multi-branch convolutional neural network of the present invention are:
(1) Tailored to the characteristics of dermoscopic image data, the present invention uses appropriate image transformations to effectively augment the training dataset, ensuring effective network training and strong generalization.
(2) The convolutional neural network designed by the present invention contains multiple branches that fuse rich semantic and detail information; compared with ordinary networks, it recovers lesion edges better and obtains more accurate lesion segmentation results.
(3) The present invention is a fully automatic segmentation scheme: only the dermoscopic image to be segmented needs to be input, and the scheme automatically produces its segmentation result, with no extra processing required; it is efficient and simple.
Detailed description of the invention
Fig. 1 is a schematic diagram of the convolution block structure in the network designed by the present invention.
Fig. 2 is a schematic diagram of the network structure designed by the present invention.
Fig. 3 is the implementation flowchart of the present invention.
Specific embodiment
For a better understanding of the technical solution of the present invention, the present invention is described in detail below with reference to the accompanying drawings and a specific embodiment.
The present invention is implemented under the PyTorch deep-learning framework; the network structure and flowchart of the present invention are shown in Fig. 2 and Fig. 3 respectively. The computer configuration is: Intel Core i5-6600K processor, 16 GB memory, NVIDIA GeForce GTX 1080 graphics card, Ubuntu 16.04 operating system. The present invention is a dermoscopic image segmentation method based on a multi-branch convolutional neural network, specifically comprising the following steps:
Step 1: dermoscopic image training sample collection and processing
The dermoscopic image dataset is downloaded from the official website of The International Skin Imaging Collaboration (ISIC) Challenge, comprising 2750 original dermoscopic images (2000 for training, 750 for validation and testing) together with lesion-region ground-truth maps hand-labeled by professional dermatologists.
Step 2: training image processing
The original dermoscopic images and segmentation ground-truth maps are first uniformly scaled to 512 × 512, and then the images are transformed. Considering that images are captured at different angles, each original training image is horizontally flipped, vertically flipped, and rotated by 90°, 180°, and 270°. In addition, since some dermoscopic images were observed to have black borders at the top, bottom, left, and right, each original image is also translated by 25 pixels in each of the four directions. The lesion ground-truth map undergoes the same transformations. In the end, the 2000 original training images are expanded into 20000 training images.
Step 3: multi-branch convolutional network structure design
The multi-branch convolutional network structure designed by the present invention is shown in Fig. 2, in which the BN and ReLU layers are omitted for simplicity. According to this network structure, a Module is written under the PyTorch deep-learning framework, mainly comprising the following three parts:
1. Encoder structure: the overall encoder framework is: input image → convolutional layer → pooling layer → 1st convolution block → 1st pooling block → 2nd convolution block → 2nd pooling block → 3rd convolution block. The details are as follows:
1) The image enters the encoder through 1 convolutional layer (kernel size 7 × 7, stride 2, output dimension 24) and 1 max-pooling layer (kernel size 3 × 3, stride 2), and then enters the 1st convolution block. With Input denoting the input image, Conv7×7 the 7 × 7 convolutional layer, MaxPool the max-pooling layer, BN the batch-normalization layer, and ReLU the rectified-linear-unit layer, this stem can be expressed as: Input → Conv7×7 → BN → ReLU → MaxPool. Since the strides of Conv7×7 and MaxPool are both 2, the resolution of the resulting feature map is 1/4 that of the input image; it is then fed into the 1st convolution block.
2) The 1st, 2nd, and 3rd convolution blocks each consist of 6 convolutional layers. With Conv3×3 denoting a convolutional layer (kernel size 3 × 3, stride 1, output dimension 12), the specific structure of each layer in a convolution block is BN → ReLU → Conv3×3. The specific structure of the convolution block designed by the present invention is shown in Fig. 1.
3) The 1st and 2nd pooling blocks perform dimension reduction. Within a convolution block, all feature maps have the same size. To reduce the feature-map size (and thus extract higher-level semantic features) and to reduce the feature-map dimension (and thus the parameter count), the present invention places a pooling block after each of the 1st and 2nd convolution blocks. With X denoting the input feature map (with N channels), Conv3×3 a convolutional layer (kernel size 3 × 3, stride 1, output dimension N/2), and MaxPool a max-pooling layer (kernel size 2 × 2, stride 2), the specific structure of the pooling block is: X → BN → ReLU → Conv3×3 → MaxPool. After the pooling block, the channel count, width, and height of the feature map output by the 1st or 2nd convolution block are each half of the original.
2. Multi-branch structure: in the convolutional neural network designed by the present invention, 4 branches are built at different layers to extract features, and the features of the different branches are finally fused by concatenation, as shown by the 1st, 2nd, 3rd, and 4th branches in Fig. 2. Each branch extracts information through 1 convolutional layer (kernel size 1 × 1, stride 1, output dimension 2); the low-level branches mainly extract detail information and the high-level branches mainly extract semantic information. Because of the dimension reduction in the network, the feature-map sizes output by the branches differ: the width and height of the 2nd branch's feature map are 1/2 those of the 1st branch, and those of the 3rd and 4th branches are 1/4 those of the 1st branch. Bilinear interpolation is therefore used to up-sample the feature maps output by the 2nd, 3rd, and 4th branches to the same size as the 1st branch's feature map before the concatenation. The fused feature map is then fed into the decoder on the trunk.
3. Decoder structure: since the spatial resolution of the fused feature map in the encoder is 1/4 that of the original input image, the present invention designs a decoder to restore the feature-map spatial size. The decoder mainly contains 2 convolutional layers and 1 four-fold up-sampling layer. With X denoting the input fused feature map, Conv1×1_1 the 1st convolutional layer (kernel size 1 × 1, stride 1, output dimension 4), Conv1×1_2 the 2nd convolutional layer (kernel size 1 × 1, stride 1, output dimension 2), and Upsample the 4-fold bilinear-interpolation up-sampling layer, the specific structure of the decoder designed by the present invention is: X → BN → ReLU → Conv1×1_1 → ReLU → Conv1×1_2 → Upsample.
Step 4: training of the multi-branch convolutional network
The feature map output by the decoder is passed through a Softmax function to obtain the probability map of the lesion region, and the loss is computed by comparing it with the segmentation ground-truth map via the cross-entropy function. The loss is back-propagated through the network to obtain the gradients of the network parameters, which are then adjusted by gradient descent to reduce the loss value until the network is optimal. The cross-entropy loss function is computed as follows:
Loss = -(1/(W × H)) Σ_{i=1..W} Σ_{j=1..H} [ y_ij · log(p_ij) + (1 - y_ij) · log(1 - p_ij) ]
where W and H are the width and height of the segmentation ground-truth map, y_ij is the true class of pixel (i, j) (1 for lesion, 0 for background), and p_ij is the probability that pixel (i, j) is lesion.
In addition, to ensure that the information extracted by the 4 branches is effective, the same method is applied to the output feature map of each branch to compute the branch losses Loss_1, Loss_2, Loss_3, and Loss_4. Since the branch feature maps do not directly generate the final segmentation result, each branch loss is multiplied by the coefficient 0.5 before being added to the segmentation loss on the trunk, so that it does not influence the segmentation accuracy too strongly. The resulting total loss, by which the network parameters are actually trained through back-propagation, can be expressed as:
Loss_all = Loss + 0.5 × (Loss_1 + Loss_2 + Loss_3 + Loss_4)
During training, the network is trained with the Adam stochastic gradient-descent method; the initial learning rate is set to 0.0005, the batch size to 8, and the maximum number of training epochs to 200. When the total loss on the validation set is no longer in a decreasing trend, network training is stopped early to avoid over-fitting.
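The training setup above (Adam, initial learning rate 5e-4, batch size 8, at most 200 epochs, early stopping on the validation total loss) can be sketched as follows; `model`, `loss_fn`, and the data loaders are assumed placeholders, not from the patent:

```python
import torch

def train(model, train_loader, val_loader, loss_fn, epochs=200, patience=1):
    """Adam with initial learning rate 5e-4; the batch size (8 in the
    text) is fixed by the loaders. Training stops early once the
    validation total loss is no longer decreasing."""
    opt = torch.optim.Adam(model.parameters(), lr=5e-4)
    best, stale = float("inf"), 0
    for epoch in range(epochs):
        model.train()
        for images, truths in train_loader:
            opt.zero_grad()
            loss = loss_fn(model(images), truths)
            loss.backward()                    # back-propagate the total loss
            opt.step()
        model.eval()
        with torch.no_grad():
            val = sum(loss_fn(model(x), y).item() for x, y in val_loader)
        if val < best:
            best, stale = val, 0
        else:
            stale += 1
            if stale > patience:               # loss no longer decreasing
                break
    return model
```
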
Step 5: multi-branch convolutional network image testing
The network designed by the present invention takes one dermoscopic image as input and outputs its skin lesion probability map. To improve robustness, before being fed into the network the image to be segmented is horizontally flipped, vertically flipped, and rotated by 90°, 180°, and 270°; together with the original image this gives 6 images in total, which are fed into the segmentation network separately to obtain 6 lesion probability maps. Then, according to the transformation applied to the corresponding input image, the inverse transformation is applied to each probability map, and finally the 6 probability maps are averaged to obtain the final lesion probability map.
After the lesion probability map is obtained, the threshold is set to 0.5: pixels with probability greater than 0.5 are regarded as lesion pixels and the rest as background skin, yielding the segmented image of the test image.
In the whole segmentation-model framework, the transformation of the test image, the inverse transformation and fusion of the corresponding probability maps, and the thresholding are all handled automatically by code. Therefore, in actual testing, one dermoscopic image is input and the model directly outputs the final segmentation result map.
Claims (4)
1. A dermoscopic image segmentation method based on a multi-branch convolutional neural network, characterized in that the method comprises the following steps:
Step 1: training sample collection
The dermoscopic images comprise 2750 original dermoscopic images from an international public dataset, of which 2000 are used for training and 750 for validation and testing; the lesion-region ground-truth maps are hand-labeled by dermatologists, each being a binary image in which 1 denotes the lesion region and 0 denotes healthy skin; for convenience of processing, the original images and lesion ground-truth maps are uniformly scaled to 512 × 512;
Step 2: image augmentation
Each original training image is horizontally flipped, vertically flipped, and rotated by 90°, 180°, and 270°; each original training image is also translated by 25 pixels in each of the four directions; the lesion ground-truth map undergoes the same transformations as its image; in the end, the 2000 original training images are expanded into 20000 training images;
Step 3: multi-branch convolutional neural network model design
A convolutional neural network with an encoder-decoder architecture is constructed, in which the encoder extracts semantic and detail information and the decoder restores the feature-map size to obtain the final segmentation result; in addition, 4 branches are built at different layers of the encoder to extract features, and the features of the different branches are finally fused to obtain an accurate dermoscopic image segmentation result;
Step 4: multi-branch convolutional network training
The feature map output by the decoder is passed through a Softmax function to obtain the probability map of the lesion region, and the loss is computed by comparing it with the segmentation ground-truth map via the cross-entropy function; the loss is back-propagated through the network to obtain the gradients of the network parameters, which are then adjusted by gradient descent to reduce the loss value until the network is optimal; the cross-entropy loss function is computed as follows:
Loss = -(1/(W × H)) Σ_{i=1..W} Σ_{j=1..H} [ y_ij · log(p_ij) + (1 - y_ij) · log(1 - p_ij) ]
where W and H are the width and height of the segmentation ground-truth map, y_ij is the true class of pixel (i, j) (1 for lesion, 0 for background), and p_ij is the probability that pixel (i, j) is lesion;
In addition, to ensure that the information extracted by the 4 branches is effective, the same method is applied to the output feature map of each branch to compute the branch losses Loss_1, Loss_2, Loss_3, and Loss_4; since the branch feature maps do not directly generate the final segmentation result, each branch loss is multiplied by the coefficient 0.5 before being added to the segmentation loss on the trunk, so that it does not influence the segmentation accuracy too strongly; the resulting total loss, by which the network parameters are actually trained through back-propagation, can be expressed as:
Loss_all = Loss + 0.5 × (Loss_1 + Loss_2 + Loss_3 + Loss_4)
Step 5: skin lesion probability map generation
The network designed by the present invention takes one dermoscopic image as input and outputs its skin lesion probability map; to improve robustness, before being fed into the network the image to be segmented is horizontally flipped, vertically flipped, and rotated by 90°, 180°, and 270°; together with the original image this gives 6 images in total, which are fed into the segmentation network separately to obtain 6 lesion probability maps; then, according to the transformation applied to the corresponding input image, the inverse transformation is applied to each probability map, and finally the 6 probability maps are averaged to obtain the final lesion probability map;
Step 6: Obtaining the segmentation result
After the lesion-region distribution probability map is obtained, a threshold of 0.5 is set: pixels with probability values greater than 0.5 are regarded as lesion pixels, and the remaining pixels as background skin, which yields the final segmentation result of the image to be segmented.
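Steps 5 and 6 can be sketched with NumPy as follows; `segment` is a stand-in for the trained network (any hypothetical callable mapping an H × W image to an H × W lesion probability map), and the transform list mirrors the six inputs described above:

```python
import numpy as np

# The six test-time transforms of Step 5 and their inverses,
# each acting on an H x W array: (forward, inverse) pairs.
TRANSFORMS = [
    (lambda im: im,              lambda pm: pm),               # original
    (lambda im: np.fliplr(im),   lambda pm: np.fliplr(pm)),    # horizontal flip
    (lambda im: np.flipud(im),   lambda pm: np.flipud(pm)),    # vertical flip
    (lambda im: np.rot90(im, 1), lambda pm: np.rot90(pm, -1)), # 90 degrees
    (lambda im: np.rot90(im, 2), lambda pm: np.rot90(pm, -2)), # 180 degrees
    (lambda im: np.rot90(im, 3), lambda pm: np.rot90(pm, -3)), # 270 degrees
]

def tta_probability_map(image, segment):
    """Step 5: segment each transformed input, map each probability map
    back through the inverse transform, and average the six results."""
    maps = [inv(segment(fwd(image))) for fwd, inv in TRANSFORMS]
    return np.mean(maps, axis=0)

def segmentation_result(prob_map, threshold=0.5):
    """Step 6: pixels with probability > 0.5 are lesion (1), else background (0)."""
    return (prob_map > threshold).astype(np.uint8)
```

Because each inverse undoes its forward transform, the six averaged maps are pixel-aligned with the original image before thresholding.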
2. The dermatoscope image segmentation method based on a multi-branch convolutional neural network according to claim 1, characterized in that the overall framework of the encoder is: input image → convolutional layer → pooling layer → 1st convolution block → 1st pooling block → 2nd convolution block → 2nd pooling block → 3rd convolution block. The details are as follows:
The image input into the encoder first passes through 1 convolutional layer with kernel size 7 × 7, stride 2 and output dimension 24, then through 1 max-pooling layer with kernel size 3 × 3 and stride 2, and finally enters the 1st convolution block. Denoting the input image by Input, the 7 × 7 convolutional layer by Conv7×7, the max-pooling layer by MaxPool, the batch-normalization layer by BN and the rectified linear unit layer by ReLU, this can be expressed as: Input → Conv7×7 → BN → ReLU → MaxPool. Since both Conv7×7 and MaxPool have stride 2, the resolution of the output feature map is 1/4 that of the input image; it is then fed into the 1st convolution block.
The 1st, 2nd and 3rd convolution blocks each consist of 6 convolutional layers. Denoting by Conv3×3 a convolutional layer with kernel size 3 × 3, stride 1 and output dimension 12, the specific structure of each layer in a convolution block is BN → ReLU → Conv3×3.
The 1st and 2nd pooling blocks perform dimensionality reduction; a pooling block is placed after each of the 1st and 2nd convolution blocks. Denoting the input feature map by X with channel count N, by Conv3×3 a convolutional layer with kernel size 3 × 3, stride 1 and output dimension N/2, and by MaxPool a max-pooling layer with kernel size 2 × 2 and stride 2, the specific pooling block structure is: X → BN → ReLU → Conv3×3 → MaxPool. After passing through a pooling block, the channel count, width and height of the feature maps output by the 1st and 2nd convolution blocks are each halved.
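A PyTorch sketch of this encoder follows. The claim states a per-layer output dimension of 12 and channel counts that halve after each pooling block, which works out only if the convolution blocks use dense (concatenative) connections; that connectivity, the exact padding choices, and a 3-channel input are assumptions here, not stated in the claim:

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """6 layers of BN -> ReLU -> Conv3x3 (12 output channels each), with
    assumed dense connectivity: each layer's output is concatenated on."""
    def __init__(self, in_ch, growth=12, num_layers=6):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth, 3, stride=1, padding=1)))
            ch += growth
        self.out_channels = ch

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)
        return x

class PoolBlock(nn.Module):
    """BN -> ReLU -> Conv3x3 (N -> N/2 channels) -> 2x2 MaxPool, stride 2."""
    def __init__(self, in_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, in_ch // 2, 3, stride=1, padding=1),
            nn.MaxPool2d(2, stride=2))
        self.out_channels = in_ch // 2

    def forward(self, x):
        return self.body(x)

class Encoder(nn.Module):
    """Input -> Conv7x7(stride 2, 24 ch) -> BN -> ReLU -> MaxPool3x3(stride 2)
    -> block1 -> pool1 -> block2 -> pool2 -> block3, as in claim 2."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 24, 7, stride=2, padding=3),
            nn.BatchNorm2d(24), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1))
        self.block1 = ConvBlock(24)                       # 24 -> 96 channels
        self.pool1 = PoolBlock(self.block1.out_channels)  # 96 -> 48
        self.block2 = ConvBlock(self.pool1.out_channels)  # 48 -> 120
        self.pool2 = PoolBlock(self.block2.out_channels)  # 120 -> 60
        self.block3 = ConvBlock(self.pool2.out_channels)  # 60 -> 132

    def forward(self, x):
        x = self.stem(x)    # 1/4 of input resolution
        x = self.pool1(self.block1(x))   # 1/8
        x = self.pool2(self.block2(x))   # 1/16
        return self.block3(x)
```

Under these assumptions a 64 × 64 input yields a 4 × 4 feature map, consistent with the two stride-2 stem layers and two pooling blocks.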
3. The dermatoscope image segmentation method based on a multi-branch convolutional neural network according to claim 1, characterized in that the multi-branch structure is designed as follows: in the convolutional neural network, 4 branches are built at different layers to extract features, and the features from the different branches are finally fused by concatenation (concat). Each branch extracts information through 1 convolutional layer with kernel size 1 × 1, stride 1 and output dimension 2; the low-level branches mainly extract detail information, while the high-level branches mainly extract semantic information. In addition, because of the dimensionality reduction in the network, the output feature maps of the branches differ in size: the width and height of the feature map output by the 2nd branch are 1/2 of those of the 1st branch, and those of the 3rd and 4th branches are 1/4 of the 1st branch. Bilinear interpolation is therefore used here to upsample the output feature maps of the 2nd, 3rd and 4th branches to the same size as the feature map of the 1st branch before concat fusion; the fused feature map is then input into the decoder on the trunk.
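This branch fusion can be sketched in PyTorch as follows. The channel counts of the tap points (24, 48, 60, 132) are illustrative assumptions; only the 1 × 1, stride-1, 2-channel branch convolutions, the bilinear upsampling to the 1st branch's size, and the concat fusion come from the claim:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BranchFusion(nn.Module):
    """Four 1x1, stride-1, 2-channel branch convolutions; branches 2-4 are
    bilinearly upsampled to branch 1's size, then all four are concatenated."""
    def __init__(self, tap_channels=(24, 48, 60, 132)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(ch, 2, kernel_size=1, stride=1) for ch in tap_channels])

    def forward(self, taps):
        # taps: feature maps at full, 1/2, 1/4, 1/4 of branch 1's resolution
        outs = [conv(t) for conv, t in zip(self.branches, taps)]
        size = outs[0].shape[-2:]
        outs = [outs[0]] + [
            F.interpolate(o, size=size, mode="bilinear", align_corners=False)
            for o in outs[1:]]
        return torch.cat(outs, dim=1)  # 4 branches x 2 channels = 8 channels
```

Concatenating four 2-channel maps gives an 8-channel fused map at the 1st branch's resolution, which is what the trunk decoder receives.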
4. The dermatoscope image segmentation method based on a multi-branch convolutional neural network according to claim 1, characterized in that the decoder mainly contains 2 convolutional layers and 1 four-fold upsampling layer. Denoting the input fused feature map by X, by Conv1×1_1 the 1st convolutional layer with kernel size 1 × 1, stride 1 and output dimension 4, by Conv1×1_2 the 2nd convolutional layer with kernel size 1 × 1, stride 1 and output dimension 2, and by Upsample a 4× bilinear-interpolation upsampling layer, the specific structure of the decoder is: X → BN → ReLU → Conv1×1_1 → ReLU → Conv1×1_2 → Upsample.
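The decoder structure just listed can be sketched in PyTorch as follows. The input channel count of 8 is an assumption, derived from concatenating four 2-channel branch outputs as described in claim 3:

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """X -> BN -> ReLU -> Conv1x1 (4 ch) -> ReLU -> Conv1x1 (2 ch)
    -> 4x bilinear upsampling, as specified in claim 4."""
    def __init__(self, in_ch=8):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, 4, kernel_size=1, stride=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(4, 2, kernel_size=1, stride=1),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False))

    def forward(self, x):
        return self.body(x)
```

The 2-channel output corresponds to the background/lesion classes fed to the Softmax in Step 4, and the 4× upsampling restores the 1/4-resolution fused feature map to the input image size.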
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910062500.0A CN109886986B (en) | 2019-01-23 | 2019-01-23 | Dermatoscope image segmentation method based on multi-branch convolutional neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910062500.0A CN109886986B (en) | 2019-01-23 | 2019-01-23 | Dermatoscope image segmentation method based on multi-branch convolutional neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109886986A true CN109886986A (en) | 2019-06-14 |
CN109886986B CN109886986B (en) | 2020-09-08 |
Family
ID=66926502
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910062500.0A Active CN109886986B (en) | 2019-01-23 | 2019-01-23 | Dermatoscope image segmentation method based on multi-branch convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109886986B (en) |
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110263797A (en) * | 2019-06-21 | 2019-09-20 | 北京字节跳动网络技术有限公司 | Skeleton key point estimation method, apparatus, device and readable storage medium |
CN110349161A (en) * | 2019-07-10 | 2019-10-18 | 北京字节跳动网络技术有限公司 | Image segmentation method, apparatus, electronic device and storage medium |
CN110648311A (en) * | 2019-09-03 | 2020-01-03 | 南开大学 | Acne image focus segmentation and counting network model based on multitask learning |
CN110866565A (en) * | 2019-11-26 | 2020-03-06 | 重庆邮电大学 | Multi-branch image classification method based on convolutional neural network |
CN110910388A (en) * | 2019-10-23 | 2020-03-24 | 浙江工业大学 | Cancer cell image segmentation method based on U-Net and density estimation |
CN111105031A (en) * | 2019-11-11 | 2020-05-05 | 北京地平线机器人技术研发有限公司 | Network structure searching method and device, storage medium and electronic equipment |
CN111127487A (en) * | 2019-12-27 | 2020-05-08 | 电子科技大学 | Real-time multi-tissue medical image segmentation method |
CN111126561A (en) * | 2019-11-20 | 2020-05-08 | 江苏艾佳家居用品有限公司 | Image processing method based on multipath parallel convolution neural network |
CN111179193A (en) * | 2019-12-26 | 2020-05-19 | 苏州斯玛维科技有限公司 | Dermatoscope image enhancement and classification method based on DCNNs and GANs |
CN111583256A (en) * | 2020-05-21 | 2020-08-25 | 北京航空航天大学 | Dermatoscope image classification method based on rotating mean value operation |
CN111724399A (en) * | 2020-06-24 | 2020-09-29 | 北京邮电大学 | Image segmentation method and terminal |
RU2733823C1 (en) * | 2019-12-17 | 2020-10-07 | Акционерное общество "Российская корпорация ракетно-космического приборостроения и информационных систем" (АО "Российские космические системы") | System for segmenting images of subsoil resources of open type |
RU2734058C1 (en) * | 2019-12-17 | 2020-10-12 | Акционерное общество "Российская корпорация ракетно-космического приборостроения и информационных систем" (АО "Российские космические системы") | System for segmenting images of buildings and structures |
CN111833273A (en) * | 2020-07-17 | 2020-10-27 | 华东师范大学 | Semantic boundary enhancement method based on long-distance dependence |
CN111951288A (en) * | 2020-07-15 | 2020-11-17 | 南华大学 | Skin cancer lesion segmentation method based on deep learning |
CN112132833A (en) * | 2020-08-25 | 2020-12-25 | 沈阳工业大学 | Skin disease image focus segmentation method based on deep convolutional neural network |
CN112613359A (en) * | 2020-12-09 | 2021-04-06 | 苏州玖合智能科技有限公司 | Method for constructing neural network for detecting abnormal behaviors of people |
CN113159275A (en) * | 2021-03-05 | 2021-07-23 | 深圳市商汤科技有限公司 | Network training method, image processing method, device, equipment and storage medium |
CN113378984A (en) * | 2021-07-05 | 2021-09-10 | 国药(武汉)医学实验室有限公司 | Medical image classification method, system, terminal and storage medium |
CN113506310A (en) * | 2021-07-16 | 2021-10-15 | 首都医科大学附属北京天坛医院 | Medical image processing method and device, electronic equipment and storage medium |
CN113743280A (en) * | 2021-08-30 | 2021-12-03 | 广西师范大学 | Brain neuron electron microscope image volume segmentation method, device and storage medium |
CN113742775A (en) * | 2021-09-08 | 2021-12-03 | 哈尔滨工业大学(深圳) | Image data security detection method, system and storage medium |
CN113744178A (en) * | 2020-08-06 | 2021-12-03 | 西北师范大学 | Skin lesion segmentation method based on convolution attention model |
CN113780241A (en) * | 2021-09-29 | 2021-12-10 | 北京航空航天大学 | Acceleration method and device for detecting salient object |
CN113870194A (en) * | 2021-09-07 | 2021-12-31 | 燕山大学 | Deep layer characteristic and superficial layer LBP characteristic fused breast tumor ultrasonic image processing device |
CN113920124A (en) * | 2021-06-22 | 2022-01-11 | 西安理工大学 | Brain neuron iterative segmentation method based on segmentation and error guidance |
CN114943963A (en) * | 2022-04-29 | 2022-08-26 | 南京信息工程大学 | Remote sensing image cloud and cloud shadow segmentation method based on double-branch fusion network |
CN115147343A (en) * | 2021-03-31 | 2022-10-04 | 西门子医疗有限公司 | Digital pathology using artificial neural networks |
CN115511882A (en) * | 2022-11-09 | 2022-12-23 | 南京信息工程大学 | Melanoma identification method based on lesion weight characteristic map |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107203999A (en) * | 2017-04-28 | 2017-09-26 | 北京航空航天大学 | Automatic dermoscopy image segmentation method based on fully convolutional neural networks |
CN107451996A (en) * | 2017-07-26 | 2017-12-08 | 广州慧扬健康科技有限公司 | Deep learning training system applied to skin cancer recognition |
CN107480261A (en) * | 2017-08-16 | 2017-12-15 | 上海荷福人工智能科技(集团)有限公司 | Fast fine-grained face image retrieval method based on deep learning |
CN107527069A (en) * | 2017-08-22 | 2017-12-29 | 京东方科技集团股份有限公司 | Image processing method and apparatus, electronic device and computer-readable medium |
US20180061046A1 (en) * | 2016-08-31 | 2018-03-01 | International Business Machines Corporation | Skin lesion segmentation using deep convolution networks guided by local unsupervised learning |
CN108090447A (en) * | 2017-12-19 | 2018-05-29 | 青岛理工大学 | Hyperspectral image classification method and device under a dual-branch deep structure |
CN108154145A (en) * | 2018-01-24 | 2018-06-12 | 北京地平线机器人技术研发有限公司 | Method and apparatus for detecting the position of text in natural scene images |
CN108510456A (en) * | 2018-03-27 | 2018-09-07 | 华南理工大学 | Sketch simplification method based on deep convolutional neural networks with perceptual loss |
CN108830150A (en) * | 2018-05-07 | 2018-11-16 | 山东师范大学 | 3D human pose estimation method and device |
CN109241829A (en) * | 2018-07-25 | 2019-01-18 | 中国科学院自动化研究所 | Action recognition method and device based on spatio-temporal attention convolutional neural networks |
Worldwide applications
2019-01-23 CN CN201910062500.0A patent/CN109886986B/en active Active
Non-Patent Citations (3)
Title |
---|
CHAOCUN CHEN et al.: "Vehicle Type Recognition based on Multi-branch and Multi-Layer Features", 2017 IEEE *
YEFEN WU et al.: "Automatic skin lesion segmentation based on supervised learning", 2013 Seventh International Conference on Image and Graphics *
LYU Lu et al.: "A face recognition method based on fused deep convolutional neural networks and metric learning", Modern Electronics Technique *
Also Published As
Publication number | Publication date |
---|---|
CN109886986B (en) | 2020-09-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109886986A (en) | Dermatoscope image segmentation method based on multi-branch convolutional neural network | |
CN107203999B (en) | Dermatoscope image automatic segmentation method based on full convolution neural network | |
CN111145170B (en) | Medical image segmentation method based on deep learning | |
Ni et al. | GC-Net: Global context network for medical image segmentation | |
CN105957063B (en) | CT image liver segmentation method and system based on multi-scale weighted similarity measure | |
Qureshi et al. | Medical image segmentation using deep semantic-based methods: A review of techniques, applications and emerging trends | |
CN108268870A (en) | Multi-scale feature fusion ultrasonoscopy semantic segmentation method based on confrontation study | |
CN108257135A (en) | The assistant diagnosis system of medical image features is understood based on deep learning method | |
CN115018824B (en) | Colonoscope polyp image segmentation method based on CNN and Transformer fusion | |
CN115205300B (en) | Fundus blood vessel image segmentation method and system based on cavity convolution and semantic fusion | |
CN109389585A (en) | Brain tissue extraction method based on fully convolutional neural networks | |
Talavera-Martinez et al. | Hair segmentation and removal in dermoscopic images using deep learning | |
CN106408001A (en) | Rapid area-of-interest detection method based on depth kernelized hashing | |
CN109166104A (en) | A kind of lesion detection method, device and equipment | |
CN109447998A (en) | Based on the automatic division method under PCANet deep learning model | |
Bourbakis | Detecting abnormal patterns in WCE images | |
CN109447963A (en) | A kind of method and device of brain phantom identification | |
CN112465905A (en) | Characteristic brain region positioning method of magnetic resonance imaging data based on deep learning | |
CN112884788B (en) | Cup optic disk segmentation method and imaging method based on rich context network | |
Xu et al. | Dual-channel asymmetric convolutional neural network for an efficient retinal blood vessel segmentation in eye fundus images | |
Shan et al. | SCA-Net: A spatial and channel attention network for medical image segmentation | |
Xiang et al. | A novel weight pruning strategy for light weight neural networks with application to the diagnosis of skin disease | |
Chen et al. | Mu-Net: Multi-Path Upsampling Convolution Network for Medical Image Segmentation. | |
Tang et al. | HTC-Net: A hybrid CNN-transformer framework for medical image segmentation | |
Le et al. | Antialiasing attention spatial convolution model for skin lesion segmentation with applications in the medical IoT |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||