CN115393361A - Method, device, equipment and medium for segmenting skin disease image with low annotation cost - Google Patents

Method, device, equipment and medium for segmenting skin disease image with low annotation cost

Info

Publication number
CN115393361A
CN115393361A
Authority
CN
China
Prior art keywords
skin disease
labeling
pixels
image
disease image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211332281.1A
Other languages
Chinese (zh)
Other versions
CN115393361B (en)
Inventor
梁桥康
秦海
肖海华
邹坤霖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University
Original Assignee
Hunan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University filed Critical Hunan University
Priority to CN202211332281.1A priority Critical patent/CN115393361B/en
Publication of CN115393361A publication Critical patent/CN115393361A/en
Application granted granted Critical
Publication of CN115393361B publication Critical patent/CN115393361B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/70Labelling scene content, e.g. deriving syntactic or semantic representations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30088Skin; Dermal

Abstract

The invention discloses a method, a device, equipment and a medium for segmenting skin disease images with low annotation cost. The method comprises: constructing a skin disease image dataset containing unmarked and marked skin disease images; designing a low-annotation-cost skin disease image segmentation network comprising N prediction models and a multi-model fusion module; training and labeling each prediction model in batches with the dataset, using an active learning method with multiple uncertainty strategies and a semi-supervised learning method based on a shared query value strategy, so that the currently trained prediction models, combined with expert labeling, label each batch of currently unmarked skin disease images; repeating iterative training of each prediction model with the marked images; and segmenting and labeling the skin disease images to be segmented with the trained segmentation network. The invention still obtains a good segmentation effect with only a small number of labeled samples.

Description

Skin disease image segmentation method, device, equipment and medium with low annotation cost
Technical Field
The invention relates to the field of image processing, and in particular to a skin disease image segmentation method, device, equipment and medium with low annotation cost.
Background
Malignant melanoma is one of the fastest-growing cancers in the world, with high morbidity and mortality. If detected early, a cure rate of 95% can be achieved. At present, clinical diagnosis relies mainly on dermoscopic images. In computer-aided medicine, effectively segmenting the lesion in a dermoscopic image can significantly improve the accuracy of skin disease detection and greatly help dermatologists judge whether melanoma has developed.
With the continuous development of artificial intelligence, deep learning is widely applied in computer vision. There are many excellent deep-learning segmentation models for medical image segmentation tasks, such as FCN, UNet and SegNet. However, these models achieve good segmentation only when a large number of labeled training samples is available. In practice, a large number of unlabeled raw images is easy to obtain, yet labeling all of them is impossible given the limited time of physicians. Moreover, the common skin disease segmentation task uses a single segmentation network, which suffers from poor robustness.
Disclosure of Invention
Aiming at the problem that a large number of unlabeled images is easy to obtain while experts cannot spend the time and energy to label every image, the invention provides a method, device, equipment and medium for segmenting skin disease images with low annotation cost, which obtain a good segmentation effect with only a small number of labeled samples.
In order to achieve the technical purpose, the invention adopts the following technical scheme:
a low-annotation-cost skin disease image segmentation method comprises the following steps:
constructing a skin disease image data set, wherein the skin disease image data set comprises unmarked skin disease images and marked skin disease images, and the number of the unmarked skin disease images is greater than that of the marked skin disease images;
designing a skin disease image segmentation network with low annotation cost, wherein the skin disease image segmentation network comprises N prediction models and a multi-model fusion module;
training and labeling each prediction model in batches using the skin disease image dataset: firstly, dividing the unmarked skin disease images into a plurality of batches; then, training the prediction models with the currently labeled skin disease images; then, adopting an active learning method with multiple uncertainty strategies and a semi-supervised learning method based on a shared query value strategy, so that the currently trained prediction models, combined with expert labeling, label the current batch of unmarked skin disease images; then, fusing the output labels of the prediction models with the multi-model fusion module; and repeating until all skin disease images are marked;
repeating iterative training on each prediction model by using the marked skin disease image until each prediction model converges;
and respectively segmenting and labeling the skin disease image to be segmented by using the trained prediction models, and then fusing the output labels of the prediction models by using a multi-model fusion module to finish the segmentation and labeling of the skin disease image to be segmented.
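The batch-wise training-and-labeling loop above can be sketched in Python as follows. This is a minimal illustration only: the helper names (`train`, `query_confidence`) and the toy confidence computation are hypothetical stand-ins for the patent's prediction models and query strategies, and the loop operates per image here for brevity where the patent operates per pixel.

```python
import random

def train(models, labeled):
    """Placeholder: retrain each of the N prediction models on the
    currently labeled images (hypothetical stand-in)."""
    return models

def query_confidence(models, image):
    """Placeholder shared query value Q in [0, 1]: a weighted mix of two
    uncertainty pre-classifications and a random query factor."""
    u1, u2, r = random.random(), random.random(), random.random()
    w1 = w2 = w3 = 1.0 / 3.0
    return w1 * u1 + w2 * u2 + w3 * r

def label_in_batches(unlabeled, labeled, models, n_batches=4, theta=0.6):
    """Batch-wise loop: retrain on labeled data, then pseudo-label
    confident items and send uncertain ones to the expert."""
    batches = [unlabeled[i::n_batches] for i in range(n_batches)]
    for batch in batches:
        models = train(models, labeled)  # retrain on current labels
        for img in batch:
            q = query_confidence(models, img)
            labeled[img] = "pseudo" if q >= theta else "expert"
    return labeled
```

The loop terminates once every batch has been processed, i.e. once all skin disease images carry either a pseudo label or an expert label, matching the stopping condition stated above.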
In a further skin disease image segmentation method, labeling the current batch of unmarked skin disease images with the active learning method of multiple uncertainty strategies and the semi-supervised learning method based on a shared query value strategy, combining the currently trained prediction models with expert labeling, specifically comprises:
adopting two active learning uncertainty strategies $S_1$ and $S_2$ to separately pre-classify each pixel $x$ in the unmarked skin disease image, recording the pre-classifications as $u_1(x)$ and $u_2(x)$;
introducing a random query factor $r(x)$, and weighting the random query factor $r(x)$ and the pre-classifications $u_1(x)$, $u_2(x)$ to obtain the classification confidence $Q(x)$ of pixel $x$;
sharing the classification confidence $Q(x)$, assigning a pseudo label to every pixel whose classification confidence reaches a preset value to complete its labeling, and handing the remaining pixels, as uncertain pixels, to an expert for labeling.
In a further skin disease image segmentation method, the two active learning uncertainty strategies $S_1$ and $S_2$ pre-classify each pixel $x$ as follows:

$$u_1(x) = 1 - \max_{c} P(y = c \mid x; M_i), \qquad u_2(x) = -\sum_{c=1}^{C} P(y = c \mid x; M_i) \log P(y = c \mid x; M_i)$$

where $M_i$ denotes the $i$-th prediction model, $c$ denotes a pixel class, $C$ denotes the number of pixel classes, $y$ denotes the label of pixel $x$, and $P(y = c \mid x; M_i)$ denotes the probability that prediction model $M_i$ outputs label $c$ for pixel $x$.
In a further skin disease image segmentation method, the classification confidence $Q(x)$ is shared, and a pseudo label is assigned to every pixel whose classification confidence reaches the preset value $\theta$:

$$\hat{y}(x) = \arg\max_{c} P(y = c \mid x; M_i), \quad \text{if } Q(x) \ge \theta$$

where $\hat{y}(x)$ is the assigned pseudo label.
In a further skin disease image segmentation method, the multi-model fusion module classifies pixels according to the magnitude of the voting entropy $VE(x)$, computed as:

$$VE(x) = -\sum_{c=1}^{C} \frac{V_c(x)}{N} \log \frac{V_c(x)}{N}$$

where $V_c(x)$ is the number of output labels $\hat{y}_i(x)$ equal to class $c$, and $\hat{y}_i(x)$ denotes the label assigned to pixel $x$ by the $i$-th prediction model or by the expert.
A low-annotation-cost dermatological-image segmentation apparatus, comprising:
a dataset construction module to: constructing a skin disease image data set, wherein the skin disease image data set comprises unmarked skin disease images and marked skin disease images, and the number of the unmarked skin disease images is greater than that of the marked skin disease images;
a split network design module to: designing a skin disease image segmentation network with low annotation cost, wherein the skin disease image segmentation network comprises N prediction models and a multi-model fusion module;
an image annotation module to: training and labeling each prediction model in batches using the skin disease image dataset: firstly, dividing the unmarked skin disease images into a plurality of batches; then, training the prediction models with the currently labeled skin disease images; then, adopting an active learning method with multiple uncertainty strategies and a semi-supervised learning method based on a shared query value strategy, so that the currently trained prediction models, combined with expert labeling, label the current batch of unmarked skin disease images; then, fusing the output labels of the prediction models with the multi-model fusion module; and repeating until all skin disease images are marked;
a model training module to: repeatedly training each prediction model by using the marked skin disease image until each prediction model converges;
an image segmentation module to: and respectively segmenting and labeling the skin disease image to be segmented by using the trained prediction models, and then fusing the output labels of the prediction models by using a multi-model fusion module to finish the segmentation and labeling of the skin disease image to be segmented.
In a further skin disease image segmentation apparatus, a specific process of labeling a skin disease image by the image labeling module includes:
adopting two active learning uncertainty strategies $S_1$ and $S_2$ to separately pre-classify each pixel $x$ in the unmarked skin disease image, recording the pre-classifications as $u_1(x)$ and $u_2(x)$;
introducing a random query factor $r(x)$, and weighting the random query factor $r(x)$ and the pre-classifications $u_1(x)$, $u_2(x)$ to obtain the classification confidence $Q(x)$ of pixel $x$;
sharing the classification confidence $Q(x)$, assigning a pseudo label to every pixel whose classification confidence reaches a preset value to complete its labeling, and handing the remaining pixels, as uncertain pixels, to an expert for labeling;
and continuing to train the prediction models with the currently labeled skin disease images.
In a further skin disease image segmentation apparatus, the two active learning uncertainty strategies $S_1$ and $S_2$ pre-classify each pixel $x$ as follows:

$$u_1(x) = 1 - \max_{c} P(y = c \mid x; M_i), \qquad u_2(x) = -\sum_{c=1}^{C} P(y = c \mid x; M_i) \log P(y = c \mid x; M_i)$$

where $M_i$ denotes the $i$-th prediction model, $c$ denotes a pixel class, $C$ denotes the number of pixel classes, $y$ denotes the label of pixel $x$, and $P(y = c \mid x; M_i)$ denotes the probability that prediction model $M_i$ outputs label $c$ for pixel $x$.
An electronic device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to implement the skin disease image segmentation method described above.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, implements a dermatological image segmentation method as defined in one of the above.
Advantageous effects
The invention effectively reduces the amount of labeling required for dermoscopic images, reduces the cost of building a labeled dataset for the segmentation network, and improves the robustness of the segmentation network. The scheme has high segmentation precision and low cost, and can assist dermatologists in the clinical diagnosis of melanoma. Compared with existing skin disease image segmentation technology, the invention has the following advantages:
(1) A random query method is introduced into the queries of two different active learning uncertainty strategies, and the three are given different weights. This resolves the consistent-bias problem of queried pixels, so that difficult pixels with high uncertainty in the dermoscopic image can be queried more accurately and labeled by experts, effectively overcoming the consistent-bias defect of uncertainty strategies.
(2) The multi-model fusion segmentation method provided by the invention can effectively improve the segmentation performance of a single prediction model and improve the robustness of an integrated segmentation network.
(3) The method is highly practical: it labels only a small number of images in the dataset while maintaining a high segmentation effect, effectively improving model performance.
Drawings
Fig. 1 is a schematic diagram of a skin disease image segmentation method with low annotation cost according to an embodiment of the invention.
Fig. 2 is a schematic diagram of a plurality of active learning uncertainty query methods with random methods introduced in the embodiment of the present invention.
FIG. 3 is a diagram illustrating a multi-model fusion segmentation method according to an embodiment of the present invention.
FIG. 4 is a graph of the average pixel label amount per image according to an embodiment of the present invention.
FIG. 5 is a diagram illustrating the results of the dermoscopic image segmentation test according to the embodiment of the present invention.
Detailed Description
The following describes embodiments of the present invention in detail, which are developed based on the technical solutions of the present invention, and give detailed implementation manners and specific operation procedures to further explain the technical solutions of the present invention.
This embodiment provides a skin disease image segmentation method with low annotation cost; experiments can be carried out in the Python programming language, and engineering applications can be implemented in C/C++. Referring to fig. 1, the method comprises the following steps:
step 1, constructing a skin disease image data set, wherein the skin disease image data set comprises unmarked skin disease images and marked skin disease images, and the number of the unmarked skin disease images is larger than that of the marked skin disease images.
In this embodiment, because the accuracy and the generalization accuracy of the segmentation network are to be verified subsequently, the constructed skin disease image dataset includes a training set, a validation set and a test set: the training set is used to train the prediction models, the validation set to verify the accuracy of the trained models, and the test set to test their generalization accuracy. The training set comprises unmarked and marked skin disease images, with more unmarked images than marked ones.
The embodiment employs the International Skin Imaging Collaboration (ISIC) datasets of digital skin lesion images with expert annotations, collected worldwide for the diagnosis of melanoma and other cancers, using the ISIC 2016 and ISIC 2017 sets. The ISIC 2016 dataset contains 900 training images (727 non-melanoma and 173 melanoma) and 379 test images (304 non-melanoma and 75 melanoma); image sizes vary from 566 × 679 to 2848 × 4228 pixels. The ISIC 2017 dataset contains 2000 training images (1626 non-melanoma and 374 melanoma) and 600 test images (483 non-melanoma and 117 melanoma), with sizes varying from 453 × 679 to 4499 × 6748 pixels.
And 2, designing a skin disease image segmentation network with low annotation cost, wherein the skin disease image segmentation network comprises N prediction models and a multi-model fusion module.
step 3, training and labeling each prediction model in batches by using the skin disease image data set:
step 3.1, dividing the unmarked skin disease images into a plurality of batches;
step 3.2, training a prediction model by using the currently labeled skin disease image;
step 3.3, labeling the current batch of unmarked skin disease images by adopting an active learning method with multiple uncertainty strategies and a semi-supervised learning method based on a shared query value strategy, combining the currently trained prediction models with expert labeling; specifically:
(1) Two active learning uncertainty strategies $S_1$ and $S_2$ are employed to separately pre-classify each pixel $x$ in the unmarked skin disease image, and the pre-classifications are recorded as $u_1(x)$ and $u_2(x)$:

$$u_1(x) = 1 - \max_{c} P(y = c \mid x; M_i), \qquad u_2(x) = -\sum_{c=1}^{C} P(y = c \mid x; M_i) \log P(y = c \mid x; M_i)$$

where $M_i$ denotes the $i$-th prediction model, $c$ denotes a pixel class, $C$ denotes the number of pixel classes, $y$ denotes the label of pixel $x$, and $P(y = c \mid x; M_i)$ denotes the probability that prediction model $M_i$ outputs label $c$ for pixel $x$.
The $u_2(x)$ value measures uncertainty using the predicted probabilities of all classes: the higher $u_2(x)$, the greater the uncertainty.
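Under the assumption that the first strategy is least confidence and the second is predictive entropy (the description above characterizes the second strategy as using the probabilities of all classes, with higher values meaning greater uncertainty), the two per-pixel pre-classification scores can be sketched in NumPy as follows; the function names and the (C, H, W) probability layout are illustrative, not the patent's implementation.

```python
import numpy as np

def least_confidence(probs):
    """u1(x) = 1 - max_c P(y=c|x): a low maximum probability means high
    uncertainty. probs has shape (C, H, W); each pixel's C values sum to 1
    (an assumed layout)."""
    return 1.0 - probs.max(axis=0)

def predictive_entropy(probs, eps=1e-12):
    """u2(x) = -sum_c P log P: uses the probabilities of all classes;
    higher entropy means greater uncertainty. eps guards log(0)."""
    return -(probs * np.log(probs + eps)).sum(axis=0)
```

For a binary lesion/background task (C = 2), a pixel with probabilities (0.5, 0.5) is maximally uncertain under both scores, while a pixel with (1.0, 0.0) scores zero under both.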
(2) A random query factor $r(x)$ is introduced, and the random query factor $r(x)$ and the pre-classifications $u_1(x)$, $u_2(x)$ are weighted to obtain the classification confidence $Q(x)$ of pixel $x$:

$$Q(x) = w_1 u_1(x) + w_2 u_2(x) + w_3 r(x)$$

where $w_1$, $w_2$, $w_3$ are the weights of the three query strategies and $r(x)$ is the random query factor of pixel $x$.
(3) The classification confidence $Q(x)$ is shared, a pseudo label is assigned to every pixel whose classification confidence reaches the preset value $\theta$ to complete its labeling, and the remaining pixels are handed to an expert as uncertain pixels for labeling:

$$\hat{y}(x) = \begin{cases} \arg\max_{c} P(y = c \mid x; M_i), & Q(x) \ge \theta \\ y_e(x), & \text{otherwise} \end{cases}$$

where $y_e(x)$ is the expert-labeled value and $\hat{y}(x)$ is the assigned pseudo label.
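Steps (2) and (3) can be sketched together as below. The weights, the threshold theta, and the array layout are illustrative assumptions; in particular, since the two pre-classification scores are uncertainty scores, they are flipped to confidences (1 − u) before weighting, which is an orientation assumption rather than something the patent states.

```python
import numpy as np

def shared_confidence(u1, u2, rand, w=(0.4, 0.4, 0.2)):
    """Weighted combination of the two pre-classification scores and the
    random query factor. u1/u2 are uncertainty scores, so they are
    flipped to confidences (1 - u) first (an assumption)."""
    return w[0] * (1.0 - u1) + w[1] * (1.0 - u2) + w[2] * rand

def assign_pseudo_labels(probs, q, theta):
    """Pixels whose shared confidence reaches theta receive the argmax
    class as a pseudo label; the rest are marked -1, meaning they go to
    the expert labeling queue."""
    pseudo = probs.argmax(axis=0)
    pseudo[q < theta] = -1  # uncertain pixels -> expert
    return pseudo
```

A confident pixel (e.g. class probabilities 0.9/0.1) keeps its argmax pseudo label, while an ambiguous pixel (0.5/0.5) is flagged for the expert regardless of the random factor.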
Step 3.4, fusing the pixel labels with the multi-model fusion module according to the magnitude of the voting entropy $VE(x)$, computed as:

$$VE(x) = -\sum_{c=1}^{C} \frac{V_c(x)}{N} \log \frac{V_c(x)}{N}$$

where $V_c(x)$ is the number of the $N$ output labels (from the prediction models or the expert) equal to class $c$.
and 3.5, repeating the steps 3.2 to 3.4 until all batches of skin disease image annotations are finished.
The active learning module based on uncertainty strategies in this embodiment adopts two active learning uncertainty strategies and additionally introduces a random query method; the three strategies are weighted into a comprehensive querier, which queries the difficult pixels of the dermoscopic image more accurately for expert labeling and effectively overcomes the consistent-bias defect of uncertainty strategies, as shown in fig. 2.
And 4, repeatedly training each prediction model by using the marked skin disease image until each prediction model is converged.
And 5, segmenting and labeling the skin disease image to be segmented by using the trained prediction models, and fusing output labels of the prediction models by using a multi-model fusion module to complete segmentation and labeling of the skin disease image to be segmented.
In the embodiment, the test set serves as the skin disease images to be segmented; each pixel in the image is classified according to the voting entropy, completing the segmentation of the skin lesion, as shown in fig. 3.
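The voting-based fusion of the N output label maps can be sketched as follows (a minimal NumPy version; taking the majority vote as the fused label is an assumption consistent with, but not stated by, the voting-entropy formula):

```python
import numpy as np

def voting_entropy_fusion(label_maps, n_classes, eps=1e-12):
    """Fuse N per-pixel label maps of shape (N, H, W) by majority vote
    and compute the voting entropy VE(x) = -sum_c (V_c/N) log (V_c/N)."""
    maps = np.asarray(label_maps)
    n = maps.shape[0]
    # votes[c] = number of models (or the expert) assigning class c
    votes = np.stack([(maps == c).sum(axis=0) for c in range(n_classes)])
    frac = votes / n
    entropy = -(frac * np.log(frac + eps)).sum(axis=0)
    fused = votes.argmax(axis=0)  # majority-vote label per pixel
    return fused, entropy
```

A pixel on which all models agree has voting entropy zero; a pixel with split votes has positive entropy and can be treated as less reliable by the fusion module.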
In steps 3 and 4 of this embodiment, the input skin disease images differ in size, so the data cannot be loaded into the network model in batches for training; the memory limit of the hardware GPU must also be considered. The embodiment therefore uniformly scales and crops every input image to a size of 192 × 256.
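The uniform rescaling to 192 × 256 can be done with any image library; the dependency-free nearest-neighbour version below is only a sketch, since the patent states the target size but not the interpolation mode or channel layout (both assumed here).

```python
import numpy as np

def resize_nearest(img, out_h=192, out_w=256):
    """Nearest-neighbour resize of an (H, W) or (H, W, C) array to the
    192x256 training input size (interpolation mode is an assumption)."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h  # source row for each output row
    cols = np.arange(out_w) * w // out_w  # source column for each output column
    return img[rows][:, cols]
```

In practice a bilinear resize from an image library would typically be preferred; the point here is only that every image reaches the network at the same 192 × 256 size.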
The model is trained on an Ubuntu 16.04 LTS system with PyTorch 1.6 and Python 3.6. The hardware platform uses four RTX 2080Ti graphics cards as the main computing platform, with at least 16 GB of CPU memory and a solid-state drive of at least 256 GB. The model is trained for 100 epochs in total, 16 images are loaded per batch, and the initial learning rate is set to 0.01.
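A self-contained PyTorch sketch of the stated training configuration (batch size 16, initial learning rate 0.01; the patent trains 100 epochs, shortened here for brevity). The toy 1×1-convolution "segmenter", the random data, and the choice of SGD with cross-entropy loss are illustrative assumptions — the patent specifies only the hyper-parameters and environment, not the optimizer or loss.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins: a 2-class 1x1-conv "segmenter" and random 192x256 data.
model = nn.Conv2d(3, 2, kernel_size=1)
train_set = TensorDataset(torch.randn(32, 3, 192, 256),
                          torch.randint(0, 2, (32, 192, 256)))

loader = DataLoader(train_set, batch_size=16, shuffle=True)  # batch size 16
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)     # initial lr 0.01
criterion = nn.CrossEntropyLoss()

for epoch in range(2):  # the patent trains 100 epochs; 2 here for brevity
    for images, masks in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), masks)  # per-pixel classification loss
        loss.backward()
        optimizer.step()
```

In the patent's setting each of the N prediction models would be trained this way on the currently labeled images, then retrained as each new batch is labeled.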
Based on the active-learning multi-model fusion segmentation network, the embodiment of the invention accomplishes the skin disease image segmentation task with few labeled samples, effectively reduces the amount of labeling required for dermoscopic images, reduces the cost of building a labeled dataset, and improves the robustness and thus the performance of the segmentation network.
Table 1 shows the structural composition of the multi-model fusion network and of four networks Net1, Net2, Net3 and Net4, which respectively combine from one to five different mainstream dermoscopic image segmentation networks into integrated segmentation models.
TABLE 1 Multi-model fusion method
Tables 2 and 3 compare the invention with the best current dermoscopic image segmentation models. On the ISIC 2016 dataset, the invention improves the DIC and JAI values from 88.64% and 81.37% to 94.45% and 89.07%, respectively; on ISIC 2017, from 79.39% and 72.04% to 87.51% and 80.22%.
Table 2 network architecture performance comparison in ISIC 2016 dataset (%)
Table 3 network architecture ISIC 2017 data set performance comparison (%)
FIG. 4 shows the average pixel label amount per image for the invention. With the uncertainty-strategy active learning method and the high-confidence-strategy semi-supervised learning method, the average pixel annotation per image does not exceed 15%: the most uncertain pixels are queried and annotated by the expert, while most of the remaining pixels are given pseudo labels.
To further verify the effectiveness of the method, it is compared with mainstream international methods. Tables 4 and 5 compare experiments performed on the ISIC 2016 and ISIC 2017 datasets; the proposed method exhibits better performance. Using only 80% of the training set, the invention achieves segmentation performance comparable to other methods that use the full training set.
TABLE 4 comparison of Performance (%)
TABLE 5 comparison of Performance (%)
Fig. 5 shows the image labels obtained with different query strategies. On some images, the lesion contour segmented by the proposed method is more accurate than the original label.
The above embodiments are preferred embodiments of the present application, and those skilled in the art can make various changes or modifications without departing from the general concept of the present application, and such changes or modifications should fall within the scope of the claims of the present application.

Claims (10)

1. A low-annotation-cost skin disease image segmentation method is characterized by comprising the following steps:
constructing a skin disease image data set, wherein the skin disease image data set comprises unmarked skin disease images and marked skin disease images, and the number of the unmarked skin disease images is greater than that of the marked skin disease images;
designing a skin disease image segmentation network with low annotation cost, wherein the skin disease image segmentation network comprises N prediction models and a multi-model fusion module;
training and labeling each prediction model in batches using the skin disease image dataset: firstly, dividing the unmarked skin disease images into a plurality of batches; then, training the prediction models with the currently labeled skin disease images; then, adopting an active learning method with multiple uncertainty strategies and a semi-supervised learning method based on a shared query value strategy, so that the currently trained prediction models, combined with expert labeling, label the current batch of unmarked skin disease images; then, fusing the output labels of the prediction models with the multi-model fusion module; and repeating until all skin disease images are marked;
repeating iterative training on each prediction model by using the marked skin disease image until each prediction model converges;
and (3) respectively segmenting and labeling the skin disease image to be segmented by using the trained prediction models, and then fusing the output labels of the prediction models by using a multi-model fusion module to complete segmentation and labeling of the skin disease image to be segmented.
2. The method for segmenting skin disease images according to claim 1, wherein the active learning method adopting multiple uncertainty strategies and the semi-supervised learning method based on the shared query value strategy are used for labeling any batch of unmarked skin disease images currently by combining a prediction model obtained by current training with expert labeling, and specifically comprises the following steps:
employing two active-learning uncertainty strategies $u_1$ and $u_2$ to separately pre-classify each pixel $x$ in the unlabeled skin disease image, recording the two pre-classifications as $u_1(x)$ and $u_2(x)$;
introducing a random query factor $r$, and weighting the random query factor $r$ together with the pre-classifications $u_1(x)$ and $u_2(x)$ to obtain the classification confidence $S(x)$ of pixel $x$;
sharing the classification confidence $S(x)$, assigning a pseudo label to the pixels whose classification confidence reaches a preset value to complete their labeling, and handing the remaining pixels, as uncertain pixels, to an expert for labeling.
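The confidence-sharing step in claim 2 can be sketched as follows; the weighting coefficients and the confidence-from-uncertainty convention are illustrative assumptions, since the claim states only that the random query factor and the two pre-classifications are weighted.

```python
import numpy as np

def shared_confidence(u1, u2, r, w1=0.45, w2=0.45, wr=0.10):
    """Combine two per-pixel uncertainty scores u1, u2 (higher = less certain)
    and a random query factor r into a shared classification confidence.
    The weights are illustrative assumptions, not values from the patent."""
    uncertainty = w1 * np.asarray(u1) + w2 * np.asarray(u2) + wr * np.asarray(r)
    return 1.0 - uncertainty  # confident pixels score high

def route_pixels(confidence, threshold):
    """Pixels at or above the preset confidence receive pseudo-labels;
    the rest are routed to the expert as uncertain pixels."""
    confidence = np.asarray(confidence)
    pseudo = np.flatnonzero(confidence >= threshold)
    expert = np.flatnonzero(confidence < threshold)
    return pseudo, expert
```

Sharing one confidence score across the pseudo-labeling and expert-query decisions is what lets a single threshold split the batch into the two labeling paths.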
3. The skin disease image segmentation method according to claim 2, wherein the two active-learning uncertainty strategies $u_1$ and $u_2$ pre-classify a pixel $x$ as:

$$u_1(x) = 1 - \max_{c \in \{1,\dots,C\}} p_i(y = c \mid x), \qquad u_2(x) = -\sum_{c=1}^{C} p_i(y = c \mid x)\,\log p_i(y = c \mid x)$$

where $f_i$ denotes the $i$-th prediction model, $c$ denotes a pixel class, $C$ denotes the number of pixel classes, $y$ denotes the label of pixel $x$, and $p_i(y = c \mid x)$ denotes the probability that prediction model $f_i$ outputs label $y = c$ for pixel $x$.
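Read as the standard least-confidence and prediction-entropy measures, the two per-pixel uncertainty strategies of claim 3 can be sketched in Python. The original formulas appear only as images in the source, so this is an interpretation based on the claim's variable glossary, not the patent's verbatim math; `p` holds per-pixel class probabilities of shape `(n_pixels, C)`.

```python
import numpy as np

def least_confidence(p):
    """u1: one minus the top class probability for each pixel."""
    return 1.0 - np.max(p, axis=-1)

def prediction_entropy(p, eps=1e-12):
    """u2: Shannon entropy of each pixel's class distribution;
    eps guards against log(0) for one-hot predictions."""
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p + eps), axis=-1)
```

Both scores are zero for a confident one-hot prediction and maximal for a uniform distribution, which is why either can rank pixels for expert querying.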
4. The skin disease image segmentation method according to claim 3, wherein, with the classification confidence $S(x)$ shared, a pseudo label is assigned to the pixels whose classification confidence reaches the preset value $\tau$ as:

$$\hat{y}(x) = \arg\max_{c \in \{1,\dots,C\}} p_i(y = c \mid x), \qquad S(x) \ge \tau$$

where $\hat{y}(x)$ is the assigned pseudo label.
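A minimal sketch of claim 4's pseudo-labeling, assuming the common argmax rule and marking deferred pixels with `-1`; the threshold name `tau` is illustrative.

```python
import numpy as np

def assign_pseudo_labels(p, confidence, tau):
    """Argmax pseudo-label for pixels whose shared confidence reaches the
    preset value tau; uncertain pixels are marked -1 for expert labeling."""
    labels = np.argmax(p, axis=-1)
    labels[np.asarray(confidence) < tau] = -1  # -1 = defer to the expert
    return labels
```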
5. The skin disease image segmentation method according to claim 3, wherein the multi-model fusion module classifies pixels according to the magnitude of the vote entropy $VE(x)$, calculated as:

$$VE(x) = -\sum_{c=1}^{C} \frac{N_c(x)}{N} \log \frac{N_c(x)}{N}, \qquad N_c(x) = \sum_{i=1}^{N} \mathbf{1}\!\left[y_i(x) = c\right]$$

where $y_i(x)$ denotes the output label of the $i$-th prediction model for pixel $x$, or the expert-annotated label.
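Claim 5's vote entropy for a single pixel, computed from the N output labels $y_i(x)$, can be sketched as follows; a value of zero means all models (or the expert) agree.

```python
import math
from collections import Counter

def vote_entropy(votes):
    """Vote entropy over the labels that the N models (or the expert)
    assign to one pixel; 0 means unanimous agreement, and the value grows
    as the votes spread across more classes."""
    n = len(votes)
    return -sum((k / n) * math.log(k / n) for k in Counter(votes).values())
```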
6. A low labeling cost dermatologic image segmentation apparatus, comprising:
a dataset construction module to: constructing a skin disease image data set, wherein the skin disease image data set comprises unmarked skin disease images and marked skin disease images, and the number of the unmarked skin disease images is greater than that of the marked skin disease images;
a split network design module to: designing a skin disease image segmentation network with low annotation cost, wherein the skin disease image segmentation network comprises N prediction models and a multi-model fusion module;
an image annotation module to: train and label each prediction model in batches using the skin disease image dataset: first, dividing the unlabeled skin disease images into several batches; next, training the prediction models with the currently labeled skin disease images; then, labeling the current batch of unlabeled skin disease images by combining the currently trained prediction models with expert annotation, using an active learning method with multiple uncertainty strategies and a semi-supervised learning method based on a shared query value strategy; then, fusing the output labels of the prediction models with the multi-model fusion module; and repeating until all skin disease images are labeled;
a model training module to: iteratively retrain each prediction model with the labeled skin disease images until each prediction model converges;
an image segmentation module to: segment and label the skin disease image to be segmented with each trained prediction model respectively, and then fuse the output labels of the prediction models with the multi-model fusion module to complete the segmentation and labeling of the skin disease image to be segmented.
7. The skin disease image segmentation device according to claim 6, wherein the specific process by which the image annotation module labels the skin disease images comprises:
employing two active-learning uncertainty strategies $u_1$ and $u_2$ to separately pre-classify each pixel $x$ in the unlabeled skin disease image, recording the two pre-classifications as $u_1(x)$ and $u_2(x)$;
introducing a random query factor $r$, and weighting the random query factor $r$ together with the pre-classifications $u_1(x)$ and $u_2(x)$ to obtain the classification confidence $S(x)$ of pixel $x$;
sharing the classification confidence $S(x)$, assigning a pseudo label to the pixels whose classification confidence reaches a preset value to complete their labeling, and handing the remaining pixels, as uncertain pixels, to an expert for labeling;
and continuing to train the prediction model with the currently labeled skin disease images.
8. The skin disease image segmentation device according to claim 6, wherein the two active-learning uncertainty strategies $u_1$ and $u_2$ pre-classify a pixel $x$ as:

$$u_1(x) = 1 - \max_{c \in \{1,\dots,C\}} p_i(y = c \mid x), \qquad u_2(x) = -\sum_{c=1}^{C} p_i(y = c \mid x)\,\log p_i(y = c \mid x)$$

where $f_i$ denotes the $i$-th prediction model, $c$ denotes a pixel class, $C$ denotes the number of pixel classes, $y$ denotes the label of pixel $x$, and $p_i(y = c \mid x)$ denotes the probability that prediction model $f_i$ outputs label $y = c$ for pixel $x$.
9. An electronic device comprising a memory and a processor, the memory having stored therein a computer program, wherein the computer program, when executed by the processor, causes the processor to carry out the method according to any one of claims 1 to 5.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 5.
CN202211332281.1A 2022-10-28 2022-10-28 Skin disease image segmentation method, device, equipment and medium with low annotation cost Active CN115393361B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211332281.1A CN115393361B (en) 2022-10-28 2022-10-28 Skin disease image segmentation method, device, equipment and medium with low annotation cost


Publications (2)

Publication Number Publication Date
CN115393361A true CN115393361A (en) 2022-11-25
CN115393361B CN115393361B (en) 2023-02-03

Family

ID=84115191

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211332281.1A Active CN115393361B (en) 2022-10-28 2022-10-28 Skin disease image segmentation method, device, equipment and medium with low annotation cost

Country Status (1)

Country Link
CN (1) CN115393361B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112163634A (en) * 2020-10-14 2021-01-01 平安科技(深圳)有限公司 Example segmentation model sample screening method and device, computer equipment and medium
CN112348972A (en) * 2020-09-22 2021-02-09 陕西土豆数据科技有限公司 Fine semantic annotation method based on large-scale scene three-dimensional model
CN113838058A (en) * 2021-10-11 2021-12-24 重庆邮电大学 Automatic medical image labeling method and system based on small sample segmentation
CN114612702A (en) * 2022-01-24 2022-06-10 珠高智能科技(深圳)有限公司 Image data annotation system and method based on deep learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Chen Min: "Artificial Intelligence Communication Theory and Algorithms", 31 January 2020 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116109564A (en) * 2022-12-01 2023-05-12 脉得智能科技(无锡)有限公司 Method, device, equipment and medium for rapidly screening multiple types of skin disease appearance images
CN116763259A (en) * 2023-08-17 2023-09-19 普希斯(广州)科技股份有限公司 Multi-dimensional control method and device for beauty equipment and beauty equipment
CN116763259B (en) * 2023-08-17 2023-12-08 普希斯(广州)科技股份有限公司 Multi-dimensional control method and device for beauty equipment and beauty equipment
CN116935388A (en) * 2023-09-18 2023-10-24 四川大学 Skin acne image auxiliary labeling method and system, and grading method and system
CN116935388B (en) * 2023-09-18 2023-11-21 四川大学 Skin acne image auxiliary labeling method and system, and grading method and system

Also Published As

Publication number Publication date
CN115393361B (en) 2023-02-03

Similar Documents

Publication Publication Date Title
Zhuang et al. An Effective WSSENet-Based Similarity Retrieval Method of Large Lung CT Image Databases.
CN115393361B (en) Skin disease image segmentation method, device, equipment and medium with low annotation cost
US11842487B2 (en) Detection model training method and apparatus, computer device and storage medium
US20210118136A1 (en) Artificial intelligence for personalized oncology
Zhang et al. A survey on deep learning of small sample in biomedical image analysis
Huang et al. Medical image segmentation using deep learning with feature enhancement
Yan et al. Symmetric convolutional neural network for mandible segmentation
Zhao et al. Versatile framework for medical image processing and analysis with application to automatic bone age assessment
WO2022178997A1 (en) Medical image registration method and apparatus, computer device, and storage medium
Wang et al. Medical matting: a new perspective on medical segmentation with uncertainty
Song et al. Dual-branch network via pseudo-label training for thyroid nodule detection in ultrasound image
Wang et al. Superpixel inpainting for self-supervised skin lesion segmentation from dermoscopic images
Jiang et al. Automatic classification of heterogeneous slit-illumination images using an ensemble of cost-sensitive convolutional neural networks
Gao et al. Class consistent and joint group sparse representation model for image classification in internet of medical things
Shen et al. Cross-modal fine-tuning: Align then refine
Li et al. Automatic bone age assessment of adolescents based on weakly-supervised deep convolutional neural networks
Xue et al. Oriented localization of surgical tools by location encoding
Wang et al. Explainable multitask Shapley explanation networks for real-time polyp diagnosis in videos
Cheng et al. Report of clinical bone age assessment using deep learning for an Asian population in Taiwan
Al-Ani et al. A review on detecting brain tumors using deep learning and magnetic resonance images.
Kuminski et al. A hybrid approach to machine learning annotation of large galaxy image databases
Na et al. Automated brain tumor segmentation from multimodal MRI data based on Tamura texture feature and an ensemble SVM classifier
Yan et al. Two and multiple categorization of breast pathological images by transfer learning
Tang et al. Work like a doctor: Unifying scan localizer and dynamic generator for automated computed tomography report generation
Bassi et al. ISNet: Costless and Implicit Image Segmentation for Deep Classifiers, with Application in COVID-19 Detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant