CN114359656A - Melanoma image identification method based on self-supervision contrast learning and storage device - Google Patents

Melanoma image identification method based on self-supervision contrast learning and storage device

Info

Publication number
CN114359656A
CN114359656A
Authority
CN
China
Prior art keywords
melanoma
self
network
learning
method based
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111520678.9A
Other languages
Chinese (zh)
Inventor
闾海荣
刘开创
石顺中
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou Institute Of Data Technology Co ltd
Original Assignee
Fuzhou Institute Of Data Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou Institute Of Data Technology Co ltd filed Critical Fuzhou Institute Of Data Technology Co ltd
Priority to CN202111520678.9A
Publication of CN114359656A
Legal status: Pending

Abstract

The invention relates to the technical field of medical image analysis, and in particular to a melanoma image identification method and storage device based on self-supervision contrast learning. The melanoma image identification method based on self-supervision contrast learning comprises the following steps. Step S101: acquiring labeled medical image data and unlabeled medical image data. Step S102: inputting the unlabeled medical image data into a network structure for self-supervision contrast learning and training the network parameters. Step S103: inputting the labeled medical image data into a melanoma recognition network model and training a plurality of network models. Step S104: calculating, through the plurality of network models, whether each piece of labeled medical image data corresponds to a melanoma image. By this method, a large number of existing raw medical images are fully utilized and combined with transfer learning and ensemble learning, so that the presence or absence of melanoma can be determined rapidly and accurately.

Description

Melanoma image identification method based on self-supervision contrast learning and storage device
Technical Field
The invention relates to the technical field of medical image analysis, and in particular to a melanoma image identification method and storage device based on self-supervision contrast learning.
Background
Intelligent auxiliary identification of medical images is one of the main research topics in the field of medical intelligence. Medical image identification technology combines medical imaging with artificial intelligence, using a computer to identify disease types and thereby providing reference and assistance for physicians' diagnoses.
Melanoma is the most fatal cancer of the skin, and because its appearance is similar to that of other skin disorders, conventional manual identification is prone to misdiagnosis. In recent years, with the advance of artificial intelligence, deep learning has developed rapidly and is widely used in computer vision, natural language processing, bioinformatics, and other fields.
The self-supervised learning algorithm is a novel paradigm distinct from supervised and unsupervised learning: it learns the characteristics of the data directly from the raw data, retains the trained model parameters, and applies them to downstream tasks; in the field of image classification its effect can surpass that of traditional supervised learning algorithms.
During the development of deep learning, large amounts of labeled data have become a prerequisite in every field. In the medical field, however, high-quality labeling of data requires experts in the relevant specialty. Because data labeling is costly and of uneven quality, little data is available for supervised training, so traditional supervised learning trains poorly and discriminates melanoma badly.
Disclosure of Invention
In view of the above problems, the present application provides a melanoma image recognition method based on self-supervision contrast learning, to solve the technical problem that conventional melanoma recognition methods perform poorly when training samples are few. The specific technical scheme is as follows:
a melanoma image identification method based on self-supervision contrast learning comprises the following steps:
step S101: acquiring target data, wherein the target data comprises: medical image data with a label and medical image data without a label;
step S102: inputting the medical image data without the label into a network structure of self-supervision contrast learning, and training network parameters according to a preset parameter updating mode;
step S103: inputting the medical image data with the labels into a melanoma recognition network model, and training a plurality of network models according to a preset learning rate change mode, wherein parameters of the melanoma recognition network model are parameters of the encoder trained in the step S102;
step S104: and calculating whether each piece of medical image data with the label corresponds to a melanoma image or not through the plurality of network models, executing preset operation on each network model, and selecting an optimal integration mode.
Further, the step S102 specifically includes the steps of:
step S201: carrying out data enhancement processing on each picture to obtain two images x1 and x2;
step S202: inputting x1 and x2 into an encoder f, respectively;
step S203: adding a prediction multilayer perceptron h to one branch, and processing the coded vector;
step S204: carrying out similarity matching on the two processed vectors;
step S205: transmitting gradient change to the branch where the prediction multilayer perceptron h is located according to the loss value of the similarity;
step S206: and repeating the steps S201 to S205 until each picture is processed.
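As an illustration only, one pass of steps S201 to S204 can be sketched as below. This is a toy sketch, not the patented implementation: the encoder f and the prediction multi-layer perceptron h are stood in for by fixed random linear maps, data enhancement is reduced to additive noise, and (as in the symmetric loss) h is applied to each branch in turn; all names and sizes are illustrative.

```python
import random

random.seed(0)

def augment(img):
    """Toy stand-in for data enhancement (crops/flips/colour jitter): add small noise."""
    return [v + 0.05 * random.gauss(0, 1) for v in img]

def matvec(W, x):
    """Apply a linear map W (list of rows) to vector x."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def neg_cos(p, z):
    """Negative cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(p, z))
    norm = lambda v: sum(a * a for a in v) ** 0.5
    return -dot / (norm(p) * norm(z))

# Hypothetical stand-ins for encoder f and prediction MLP h: fixed random linear maps.
W_f = [[random.gauss(0, 1) for _ in range(64)] for _ in range(16)]
W_h = [[random.gauss(0, 1) for _ in range(16)] for _ in range(16)]

img = [random.gauss(0, 1) for _ in range(64)]
x1, x2 = augment(img), augment(img)                   # S201: two enhanced views
z1, z2 = matvec(W_f, x1), matvec(W_f, x2)             # S202: encode both views with f
p1, p2 = matvec(W_h, z1), matvec(W_h, z2)             # S203: predictor h on each branch
loss = 0.5 * neg_cos(p1, z2) + 0.5 * neg_cos(p2, z1)  # S204: symmetric similarity
```

In a real implementation the loss would then be backpropagated only through the predictor branch (S205) and the loop repeated over all pictures (S206).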
Further, the step S103 specifically includes the steps of:
step S301: carrying out data preprocessing on the acquired labeled data, and dividing it into a training set and a test set;
step S302: the parameters of the melanoma recognition network model are the parameters of the encoder after the self-supervision training in step S102, and the output multi-layer perceptron is replaced by a fully connected layer;
step S303: setting a binary classification loss function and a periodic cosine learning rate;
step S304: updating the parameters of the network using gradient descent based on the loss;
step S305: saving the parameters of the model at the end of each cycle;
step S306: repeating the steps S301 to S305 until the binary classification loss value meets a preset requirement or the number of training iterations reaches the preset requirement.
Further, the preset learning rate variation mode includes: a learning rate change mode of periodic cosine annealing, wherein the learning rate change mode of the periodic cosine annealing is as follows:

$$\eta(t) = \frac{\eta_{\max}}{2}\left(\cos\left(\frac{\pi \cdot \mathrm{mod}\left(t-1,\ \lceil T/M \rceil\right)}{\lceil T/M \rceil}\right) + 1\right)$$

where t is the current training iteration, T is the total number of iterations, M is the number of cycles, and $\eta_{\max}$ is the initial (maximum) learning rate.
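The periodic cosine-annealing schedule can be written as a short helper. A minimal sketch, assuming each cycle lasts ⌈T/M⌉ iterations and `lr_max` denotes the initial learning rate; the function and parameter names are illustrative, not from the patent:

```python
import math

def cyclic_cosine_lr(t, total_iters, num_cycles, lr_max):
    """Cyclic cosine-annealing learning rate.

    t           -- current iteration, 1-based
    total_iters -- T, total number of training iterations
    num_cycles  -- M, number of annealing cycles
    lr_max      -- initial (maximum) learning rate of each cycle
    """
    cycle_len = math.ceil(total_iters / num_cycles)
    return lr_max / 2 * (math.cos(math.pi * ((t - 1) % cycle_len) / cycle_len) + 1)
```

At the first step of every cycle the rate jumps back to `lr_max`; this "restart" at the lowest point of each decay is where a model snapshot would be saved for the later ensemble.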
Further, the step S204 specifically includes the steps of:
the expression for the symmetric similarity is as follows:

$$\mathcal{L} = \frac{1}{2}D(p_1, z_2) + \frac{1}{2}D(p_2, z_1), \qquad D(p, z) = -\frac{p}{\lVert p\rVert_2}\cdot\frac{z}{\lVert z\rVert_2}$$

where p is the output vector of the prediction multi-layer perceptron h, z is the output vector of the other branch, and the subscripts of p and z correspond to the subscripts of the input images.
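The negative cosine similarity D(p, z) and the symmetric loss used for similarity matching can be sketched as below; a pure-Python sketch with illustrative function names, operating on plain lists of floats:

```python
import math

def neg_cosine(p, z):
    """D(p, z): negative cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(p, z))
    norm_p = math.sqrt(sum(a * a for a in p))
    norm_z = math.sqrt(sum(b * b for b in z))
    return -dot / (norm_p * norm_z)

def symmetric_loss(p1, p2, z1, z2):
    """Symmetric similarity: L = D(p1, z2)/2 + D(p2, z1)/2."""
    return 0.5 * neg_cosine(p1, z2) + 0.5 * neg_cosine(p2, z1)
```

The loss reaches its minimum of -1 when each prediction vector points in the same direction as the other branch's encoded vector.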
Further, the step S204 specifically includes the steps of:
a gradient-stopping operation is added to the loss function calculation, formulated as follows:

$$\mathcal{L} = \frac{1}{2}D(p_1, \mathrm{stopgrad}(z_2)) + \frac{1}{2}D(p_2, \mathrm{stopgrad}(z_1))$$
further, the skeleton network of the encoder f is any one of the following: VGG16, VGG18, ResNet18, DenseNet121, ResNet50, the output layers of the encoder employ a projected multi-layer perceptron.
Further, the "executing a preset operation on each network model" specifically includes the steps of:
and performing weighted integration on the voting results of each network model, and selecting an optimal integration mode.
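The weighted integration of the per-model votes can be sketched as below. A minimal sketch: the weights are illustrative (e.g. later snapshots weighted more heavily), not values specified by the patent, and a vote of 1 stands for "melanoma":

```python
def weighted_vote(votes, weights):
    """Weighted integration of per-model votes (1 = melanoma, 0 = not).

    Returns 1 when the weighted score reaches at least half the total weight.
    """
    score = sum(w * v for w, v in zip(weights, votes))
    return 1 if score >= sum(weights) / 2 else 0
```

With weights chosen to favour later snapshots, a single strong late model can outvote several weak early ones.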
Further, before the step S101, the method specifically includes the steps of:
a skin disease image database is constructed in advance.
In order to solve the above technical problem, a storage device is further provided, and the specific technical scheme is as follows:
a storage device having stored therein a set of instructions for performing: any step in the melanoma image identification method based on the self-supervision contrast learning.
The invention has the beneficial effects that: a melanoma image identification method based on self-supervision contrast learning comprises the following steps. Step S101: acquiring target data, wherein the target data comprises labeled medical image data and unlabeled medical image data. Step S102: inputting the unlabeled medical image data into a network structure for self-supervision contrast learning, and training network parameters according to a preset parameter updating mode. Step S103: inputting the labeled medical image data into a melanoma recognition network model, and training a plurality of network models according to a preset learning rate change mode, wherein the parameters of the melanoma recognition network model are the parameters of the encoder trained in step S102. Step S104: calculating, through the plurality of network models, whether each piece of labeled medical image data corresponds to a melanoma image, performing weighted integration on the voting results of the network models, and selecting an optimal integration mode. By this method, on the basis of fully utilizing a large number of existing raw medical images, transfer learning and ensemble learning are combined to determine rapidly and accurately whether melanoma is present. This improves the automation and intelligence of the model, provides a new line of development for medical image processing, and gives medical experts an intelligent means of assisting diagnostic reference, which is of great practical significance.
Furthermore, the ideas of momentum updating and negative-sample training used in common self-supervision contrast learning methods are abandoned, and a gradient-descent parameter updating mode is designed, so that contrast learning no longer needs large numbers of positive and negative samples. This greatly reduces the training cost while achieving a better classification effect and a faster classification speed, which can aid the diagnosis of melanoma.
Furthermore, the learning rate change mode of periodic cosine annealing lets the learning rate decay and restart periodically: at the lowest point of each decay the learning rate is raised again for a restart, and the model parameters at that moment are saved, so that the different local optima to which the model converges can be used for integration, increasing the diversity and complementarity between models.
Furthermore, replacing the output layer of ResNet with a projection multi-layer perceptron increases reliability in the contrast learning process and improves the effect by 8% on the downstream image classification task.
The above description of the present invention is only an overview of the technical solutions of the present application, and in order to make the technical solutions of the present application more clearly understood by those skilled in the art, the present invention may be further implemented according to the content described in the text and drawings of the present application, and in order to make the above objects, other objects, features, and advantages of the present application more easily understood, the following description is made in conjunction with the detailed description of the present application and the drawings.
Drawings
The drawings are only for purposes of illustrating the principles, implementations, applications, features, and effects of particular embodiments of the present application, as well as others related thereto, and are not to be construed as limiting the application.
In the drawings of the specification:
fig. 1 is a first flowchart of a melanoma image identification method based on self-supervised contrast learning according to an embodiment;
fig. 2 is a second flowchart of a melanoma image identification method based on self-supervised contrast learning according to an embodiment;
fig. 3 is a third flowchart of a melanoma image identification method based on self-supervised contrast learning according to an embodiment;
FIG. 4 is a diagram illustrating a network architecture according to an embodiment;
fig. 5 is a block diagram of a storage device according to an embodiment.
The reference numerals referred to in the above figures are explained below:
500. a storage device.
Detailed Description
In order to explain in detail possible application scenarios, technical principles, practical embodiments, and the like of the present application, the following detailed description is given with reference to the accompanying drawings in conjunction with the listed embodiments. The embodiments described herein are merely for more clearly illustrating the technical solutions of the present application, and therefore, the embodiments are only used as examples, and the scope of the present application is not limited thereby.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase "an embodiment" in various places in the specification do not necessarily all refer to the same embodiment, nor to separate embodiments mutually exclusive of others. In principle, the technical features mentioned in the embodiments of the present application can be combined in any manner to form a corresponding implementable technical solution, as long as there is no technical contradiction or conflict.
Unless defined otherwise, technical terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the use of relational terms herein is intended only to describe particular embodiments and is not intended to limit the present application.
In the description of the present application, the term "and/or" is an expression describing a logical relationship between objects, meaning that three relationships may exist; for example, A and/or B means: A alone, B alone, or both A and B. In addition, the character "/" herein generally indicates that the former and latter associated objects are in a logical "or" relationship.
In this application, terms such as "first" and "second" are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
Without further limitation, in this application the expressions "including", "comprising", "having", and similar terms are intended to cover a non-exclusive inclusion: a process, method, or article that includes a list of elements may include not only those elements but also other elements not expressly listed or inherent to such process, method, or article.
As understood in examination guidelines, the terms "greater than", "less than", "more than", and the like in this application are understood to exclude the stated number; the expressions "above", "below", "within", and the like are understood to include the stated number. In addition, in the description of the embodiments of the present application, "a plurality" means two or more (including two), and similar expressions involving "a plurality", such as "a plurality of groups" and "a plurality of times", are understood likewise unless specifically defined otherwise.
First, some terms that will be referred to in the present embodiment will be explained as follows:
medical image data with a label: from a large number of local medical images of patients' skin diseases, a professional identifies and labels a small portion of the images; these identified and labeled medical images are the labeled medical image data;
unlabeled medical image data: local medical images of patients' skin diseases that have not been labeled by a professional.
The core technical idea of the application is as follows: self-supervision contrast learning is introduced into melanoma identification; through self-supervision contrast learning, useful representations of the original unlabeled data are learned from the images, improving the utilization of medical image data and thereby enhancing the model's accuracy in discriminating melanoma in skin disease images.
The following description is made with reference to fig. 1 to 4:
in the embodiment, a melanoma image recognition method based on self-supervised contrast learning can be applied to a storage device, including but not limited to: personal computers, servers, general purpose computers, special purpose computers, network devices, embedded devices, programmable devices, intelligent mobile terminals, etc. The following is a detailed description:
step S101: acquiring target data, wherein the target data comprises: tagged medical image data and untagged medical image data. In the actual operation process, a large number of local medical images of the skin diseases of the patient can be obtained, and professionals can identify and label a small part of images to obtain medical image data with labels and medical image data without labels.
In this embodiment, considering the large amount of medical image data to be processed in actual operation, the skin disease image database may be constructed in advance. The local medical images of patients' skin diseases cover all skin diseases: most are non-melanoma images and the rest are images of melanoma patients. According to physicians' annotations, a small portion are labeled samples, leaving a large number of unlabeled samples; step S102 is then executed.
Step S102: and inputting the label-free medical image data into a network structure of the self-supervision contrast learning, and training network parameters according to a preset parameter updating mode. The schematic diagram of the network structure is shown in fig. 4: the backbone network of the self-supervision contrast learning selects a deep learning network with low parameter and good training effect, and the framework network of the encoder f is any one of the following networks: VGG16, VGG18, ResNet18, DenseNet121, ResNet50, the output layers of the encoder employ a projected multilayer perceptron.
A simple twin network architecture is constructed using the contrast-based self-supervised learning method: an input image is enhanced into two images x1 and x2, which are input into two identical encoding networks f, where the encoder f adopts a multi-layer perceptron as its output prediction and a deep learning network as its backbone. In addition, the result of one branch is transformed and similarity-matched against the vector of the other branch, and the loss function is the symmetric loss function shown below.
$$\mathcal{L} = \frac{1}{2}D(p_1, z_2) + \frac{1}{2}D(p_2, z_1)$$
Where p is the output vector of the prediction multi-layer perceptron h on the left branch of the algorithm and z is the output vector of the encoder on the right branch; the detailed expression of the symmetric similarity matching is shown below.
$$D(p, z) = -\frac{p}{\lVert p\rVert_2}\cdot\frac{z}{\lVert z\rVert_2}$$
In the present embodiment, the self-supervision contrast learning algorithm sets a gradient-blocking (stop-gradient) operation: through a simple modification of the original symmetric loss, the algorithm adopts the loss function shown below.
$$\mathcal{L} = \frac{1}{2}D(p_1, \mathrm{stopgrad}(z_2)) + \frac{1}{2}D(p_2, \mathrm{stopgrad}(z_1))$$
Thus, in the first term of the loss function the encoder processing x2 receives no gradient update information from z2, but it receives gradient information from p2 through the second term of the loss function; the converse holds for x1.
The ideas of momentum updating and negative-sample training used in common self-supervision contrast learning methods are abandoned, and a gradient-descent parameter updating mode is designed, so that contrast learning no longer needs large numbers of positive and negative samples. This greatly reduces the training cost while achieving a better classification effect and a faster classification speed, which can aid the diagnosis of melanoma.
Step S102 is explained in detail below with reference to fig. 2:
the step S102 further includes:
step S201: carrying out data enhancement processing on each picture to obtain two images x1 and x2;
step S202: inputting x1 and x2 into the encoder f, respectively. The skeleton network of the encoder in this embodiment is designed as ResNet50; unlike the existing ResNet50, its output layer, inspired by SimCLR, adopts a projection multi-layer perceptron. Replacing the output layer of ResNet with the projection multi-layer perceptron increases reliability in the contrast learning process and improves the effect by 8% on the downstream image classification task;
step S203: adding a prediction multi-layer perceptron h to one branch, and processing the encoded vector. As shown in FIG. 4, the prediction multi-layer perceptron h is added to the x1 branch.
Step S204: carrying out similarity matching on the two processed vectors;
step S205: transmitting gradient change to the branch where the prediction multilayer perceptron h is located according to the loss value of the similarity;
step S206: and repeating the steps S201 to S205 until each picture is processed.
Step S103: inputting the medical image data with the labels into a melanoma recognition network model, and training a plurality of network models according to a preset learning rate change mode, wherein parameters of the melanoma recognition network model are parameters of the encoder trained in the step S102.
Step S103 is explained in detail below with reference to fig. 3:
step S301: carrying out data preprocessing on the acquired labeled data, and dividing it into a training set and a test set;
step S302: the parameters of the melanoma recognition network model are the parameters of the encoder after the self-supervision training in step S102, and the output multi-layer perceptron is replaced by a fully connected layer;
step S303: setting a binary classification loss function and a periodic cosine learning rate;
step S304: updating the parameters of the network using gradient descent based on the loss;
step S305: saving the parameters of the model at the end of each cycle;
step S306: repeating the steps S301 to S305 until the binary classification loss value meets a preset requirement or the number of training iterations reaches the preset requirement.
In the actual operation of the above steps S301 to S306, the following may specifically be performed:
retaining the parameters of the encoder (the deep learning model) from the self-supervision contrast learning, attaching a fully connected layer after the encoder, and identifying the category of the skin disease;
training with labeled skin disease image samples and fine-tuning the parameters of the model;
inputting the labeled samples into the melanoma recognition model, and acquiring the melanoma recognition result output by the melanoma recognition model through ensemble learning;
calculating the loss value of the melanoma recognition model according to the predicted melanoma recognition result and the label;
and, when the loss value is within a preset range, taking the trained melanoma recognition model as the final melanoma recognition model.
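As an illustration of the fine-tuning idea above, the sketch below trains only a linear binary (melanoma vs. non-melanoma) head with a binary cross-entropy loss on features assumed to come from a frozen encoder. `train_binary_head`, the toy features, and all hyper-parameters are illustrative assumptions, not values from the patent:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_binary_head(features, labels, lr=0.5, epochs=500):
    """Train a linear classification head on frozen encoder features via SGD.

    features -- list of feature vectors (lists of floats) from the frozen encoder
    labels   -- list of 0/1 targets (1 = melanoma)
    """
    dim = len(features[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y  # gradient of the binary cross-entropy loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Binary decision at the usual 0.5 probability threshold."""
    return 1 if sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) >= 0.5 else 0
```

In the patented method this head corresponds to the fully connected layer attached after the self-supervised encoder; here the encoder itself is elided and only the head is optimized.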
The preset learning rate change mode in this embodiment is the learning rate change mode of periodic cosine annealing, as follows:

$$\eta(t) = \frac{\eta_{\max}}{2}\left(\cos\left(\frac{\pi \cdot \mathrm{mod}\left(t-1,\ \lceil T/M \rceil\right)}{\lceil T/M \rceil}\right) + 1\right)$$

where t is the current training iteration, T is the total number of iterations, M is the number of cycles, and $\eta_{\max}$ is the initial (maximum) learning rate.
This learning rate change mode lets the model's learning rate decay and restart in cosine cycles: at the lowest point of each decay the learning rate is raised again for a restart, and the model parameters at that moment are saved, so that the different local optima to which the model converges can be used for integration, increasing the diversity and complementarity between models.
Step S104: calculating, through the plurality of network models, whether each piece of labeled medical image data corresponds to a melanoma image, executing a preset operation on each network model, and selecting an optimal integration mode. Executing the preset operation on each network model specifically comprises: performing weighted integration on the voting results of each network model and selecting an optimal integration mode. In this embodiment an integration scheme of weighted voting is adopted; from the periodic cosine-annealing learning rate described above it follows that models saved later in training perform better and should therefore receive larger voting weights, while the weaker early models receive smaller weights.
A melanoma image identification method based on self-supervision contrast learning comprises the following steps. Step S101: acquiring target data, wherein the target data comprises labeled medical image data and unlabeled medical image data. Step S102: inputting the unlabeled medical image data into a network structure for self-supervision contrast learning, and training network parameters according to a preset parameter updating mode. Step S103: inputting the labeled medical image data into a melanoma recognition network model, and training a plurality of network models according to a preset learning rate change mode, wherein the parameters of the melanoma recognition network model are the parameters of the encoder trained in step S102. Step S104: calculating, through the plurality of network models, whether each piece of labeled medical image data corresponds to a melanoma image, performing weighted integration on the voting results of the network models, and selecting an optimal integration mode. By this method, on the basis of fully utilizing a large number of existing raw medical images, transfer learning and ensemble learning are combined to determine rapidly and accurately whether melanoma is present. This improves the automation and intelligence of the model, provides a new line of development for medical image processing, and gives medical experts an intelligent means of assisting diagnostic reference, which is of great practical significance.
Referring to fig. 4 to fig. 5, in the present embodiment, an embodiment of a memory device 500 is as follows:
a storage device 500 having stored therein a set of instructions for performing:
step S101: acquiring target data, wherein the target data comprises: tagged medical image data and untagged medical image data. In the actual operation process, a large number of local medical images of the skin diseases of the patient can be obtained, and professionals can identify and label a small part of images to obtain medical image data with labels and medical image data without labels.
In this embodiment, considering the large amount of medical image data to be processed in actual operation, the skin disease image database may be constructed in advance. The local medical images of patients' skin diseases cover all skin diseases: most are non-melanoma images and the rest are images of melanoma patients. According to physicians' annotations, a small portion are labeled samples, leaving a large number of unlabeled samples; step S102 is then executed.
Step S102: and inputting the label-free medical image data into a network structure of the self-supervision contrast learning, and training network parameters according to a preset parameter updating mode. The schematic diagram of the network structure is shown in fig. 4: the backbone network of the self-supervision contrast learning selects a deep learning network with low parameter and good training effect, and the framework network of the encoder f is any one of the following networks: VGG16, VGG18, ResNet18, DenseNet121, ResNet50, the output layers of the encoder employ a projected multilayer perceptron.
A simple twin network architecture is constructed using the contrast-based self-supervised learning method: an input image is enhanced into two images x1 and x2, which are input into two identical encoding networks f, where the encoder f adopts a multi-layer perceptron as its output prediction and a deep learning network as its backbone. In addition, the result of one branch is transformed and similarity-matched against the vector of the other branch, and the loss function is the symmetric loss function shown below.
$$\mathcal{L} = \frac{1}{2}D(p_1, z_2) + \frac{1}{2}D(p_2, z_1)$$
Where p is the output vector of the prediction multi-layer perceptron h on the left branch of the algorithm and z is the output vector of the encoder on the right branch; the detailed expression of the symmetric similarity matching is shown below.
$$D(p, z) = -\frac{p}{\lVert p\rVert_2}\cdot\frac{z}{\lVert z\rVert_2}$$
In the present embodiment, the self-supervision contrast learning algorithm sets a gradient-blocking (stop-gradient) operation: through a simple modification of the original symmetric loss, the algorithm adopts the loss function shown below.
$$\mathcal{L} = \frac{1}{2}D(p_1, \mathrm{stopgrad}(z_2)) + \frac{1}{2}D(p_2, \mathrm{stopgrad}(z_1))$$
With this setting, in the first term of the loss function the encoder processing x2 cannot receive gradient update information from z2, but it can receive gradient information from p2 through the second term of the loss function; the converse holds for x1.
The ideas of momentum updating and negative-sample training used in common self-supervision contrast learning methods are abandoned, and a gradient-descent parameter updating mode is designed, so that contrast learning no longer needs large numbers of positive and negative samples. This greatly reduces the training cost while achieving a better classification effect and a faster classification speed, which can aid the diagnosis of melanoma.
Step S102 is explained in detail below with reference to fig. 2:
the step S102 further includes:
step S201: carrying out data enhancement processing on each picture to obtain two images x1And x2
Step S202: respectively convert x into1And x2Input to the encoder f. In this implementationIn the example, the skeleton network of the encoder is designed as ResNet50, and unlike the prior ResNet50, the output layer is inspired by SIMCLR and adopts a projected multi-layer perceptron. The output layer of ResNet is replaced by the projected multilayer perceptron, so that the reliability in the comparison learning process is increased, and the effect is improved by 8% in the downstream task of image classification.
Step S203: and adding a prediction multi-layer perceptron h to one branch, and processing the coded vector. As shown in FIG. 4 at x1And adding a prediction multi-layer perceptron h to the branch.
Step S204: carrying out similarity matching on the two processed vectors;
step S205: transmitting gradient change to the branch where the prediction multilayer perceptron h is located according to the loss value of the similarity;
step S206: and repeating the steps S201 to S205 until each picture is processed.
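Steps S201 to S205 can be sketched end to end. In the toy code below, random matrices stand in for ResNet50 and the multilayer perceptrons, and additive noise stands in for real data augmentation; all sizes and weights are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random matrices stand in for the real networks; sizes are illustrative only.
W_f = rng.normal(size=(16, 8))   # "encoder f" (backbone + projection MLP, collapsed)
W_h = rng.normal(size=(8, 8))    # "prediction multilayer perceptron h"

def augment(x):
    # S201: data enhancement -- additive noise as a stand-in for real augmentations
    return x + rng.normal(scale=0.1, size=x.shape)

def f(x):
    # S202: the shared encoder applied to both views
    return np.tanh(x @ W_f)

def h(v):
    # S203: the prediction head, applied to one branch at a time
    return v @ W_h

def d(p, z):
    # S204: negative cosine similarity; z would be stop-gradiented in training
    return -np.dot(p / np.linalg.norm(p), z / np.linalg.norm(z))

def simsiam_step(x):
    x1, x2 = augment(x), augment(x)           # S201: two views of one picture
    z1, z2 = f(x1), f(x2)                     # S202: shared encoding
    p1, p2 = h(z1), h(z2)                     # S203: predictions
    return 0.5 * d(p1, z2) + 0.5 * d(p2, z1)  # S204/S205: symmetric matching

loss = simsiam_step(rng.normal(size=16))  # a value in [-1, 1]
```

In the real method, S205 then backpropagates this loss through the branch holding the predictor h only, per the stop-gradient rule described above.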
Step S103: inputting the medical image data with the labels into a melanoma recognition network model, and training a plurality of network models according to a preset learning rate change mode, wherein parameters of the melanoma recognition network model are parameters of the encoder trained in the step S102.
Step S103 is explained in detail below with reference to fig. 3:
step S301: carrying out data preprocessing on the acquired labeling data, and dividing the labeling data into a training set and a test set;
step S302: the parameters of the melanoma recognition network model are the parameters of the encoder after the self-supervised training in step S102, with the output multilayer perceptron replaced by a fully connected layer;
step S303: setting a binary classification loss function and a cyclic cosine learning rate;
step S304: updating parameters of the network using gradient descent based on the loss;
step S305: saving the parameters of the model every other period;
step S306: repeating the steps S301 to S305 until the binary classification loss value reaches a preset requirement or the number of training iterations reaches the preset requirement.
In the actual operation of the above steps S301 to S306, the following steps may be specifically performed:
retaining the parameters of the encoder (the deep learning model) from the self-supervised contrastive learning, appending a fully connected layer after the encoder, and identifying the category of the skin disease;
training with labeled skin disease image samples and fine-tuning the parameters of the model;
inputting the labeled samples into the melanoma recognition model, and obtaining the melanoma recognition result output by the model through ensemble learning;
calculating the loss value of the melanoma recognition model from the predicted melanoma recognition result and the label;
and, when the loss value is within a preset range, taking the trained melanoma recognition model as the final melanoma recognition model.
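The fine-tuning stage described above, a fully connected layer on top of the pretrained encoder trained with a binary classification loss, can be sketched as follows. This is a minimal NumPy illustration; the feature vectors, layer sizes, and labels are invented for the example.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def bce_loss(y_true, y_prob, eps=1e-7):
    """Binary cross-entropy for the melanoma / non-melanoma decision."""
    y_prob = np.clip(y_prob, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))

rng = np.random.default_rng(1)
feats = rng.normal(size=(4, 8))   # pretend encoder outputs for 4 labeled images
w, b = rng.normal(size=8), 0.0    # the new fully connected classification layer
probs = sigmoid(feats @ w + b)    # predicted melanoma probabilities
labels = np.array([0.0, 1.0, 0.0, 1.0])
loss = bce_loss(labels, probs)    # drives the gradient-descent update in S304
```

In the actual method, the gradient of this loss updates both the fully connected layer and (for fine-tuning) the encoder parameters.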
The preset learning rate schedule in this embodiment is cyclic cosine annealing, which is given by:
$$\alpha(t) = \frac{\alpha_0}{2}\left(\cos\!\left(\frac{\pi\cdot\mathrm{mod}\bigl(t-1,\ \lceil T/M\rceil\bigr)}{\lceil T/M\rceil}\right) + 1\right)$$
where α₀ is the initial learning rate, t is the current training iteration, T is the total number of iterations, and M is the number of cycles.
This schedule makes the learning rate decay along a cosine curve and restart periodically: at the lowest point of each decay the learning rate is raised again for a restart, and the model parameters at that moment are saved. In this way the different local optima to which the model converges can be exploited for ensembling, increasing the diversity and complementarity among the models.
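Under the stated assumption that the learning rate restarts to its initial value α₀ at the start of each of the M cycles, the schedule can be implemented as:

```python
import math

def cyclic_cosine_lr(t, alpha0, T, M):
    """Learning rate at iteration t (1-based) for M cosine-annealing cycles
    over T total iterations, restarting to alpha0 at each cycle boundary."""
    cycle_len = math.ceil(T / M)
    return alpha0 / 2 * (math.cos(math.pi * ((t - 1) % cycle_len) / cycle_len) + 1)

# A snapshot of the model is saved at the end of each cycle, just before restart.
T, M, alpha0 = 300, 3, 0.1
lrs = [cyclic_cosine_lr(t, alpha0, T, M) for t in range(1, T + 1)]
```

The rate starts at α₀, falls to near zero by the end of a cycle, and jumps back to α₀ at iterations 1, 101, 201, giving three decay-and-restart cycles and three saved snapshots.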
Step S104: calculating, through the plurality of network models, whether each piece of labeled medical image data corresponds to a melanoma image, performing a preset operation on each network model, and selecting an optimal integration mode. The preset operation specifically comprises: weighting and integrating the voting results of the network models, and selecting the optimal integration mode. In this embodiment a weighted-voting ensemble is adopted; as follows from the cyclic cosine annealing learning rate described above, models saved later generally perform better and should therefore receive larger voting weights, while the weaker early models receive smaller weights.
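A weighted-voting combination of the snapshot predictions might look like the following sketch. The probabilities and weights here are invented for illustration; the patent does not fix a particular weighting formula beyond "later snapshots weigh more".

```python
import numpy as np

def weighted_vote(probs, weights):
    """Combine per-model melanoma probabilities with normalized weights
    (assumption: weights grow for snapshots saved later in training)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return float(np.dot(weights, probs))

probs = [0.2, 0.4, 0.9]    # predictions of 3 snapshot models for one image
weights = [1, 2, 3]        # e.g. proportional to the snapshot index
score = weighted_vote(probs, weights)
melanoma = score >= 0.5    # final binary decision
```

Here the later, more heavily weighted snapshot pulls the combined score above the 0.5 threshold even though the two early snapshots vote against melanoma.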
Through the storage device 500, on the basis of fully utilizing a large number of existing original medical images and combining transfer learning and ensemble learning techniques, whether a melanoma disease is present can be determined rapidly and accurately. This improves the degree of automation and intelligence of the model, provides a new development direction for medical image processing technology, and offers medical experts an intelligent means of auxiliary diagnostic reference, which is of great practical significance.
Finally, it should be noted that although the above embodiments have been described in the text and drawings of the present application, the scope of patent protection of the present application is not limited thereby. All technical solutions generated by replacing or modifying equivalent structures or equivalent flows according to the contents described in the text and drawings of the present application, whether applied directly or indirectly in other related technical fields, fall within the scope of protection of the present application.

Claims (10)

1. A melanoma image identification method based on self-supervision contrast learning is characterized by comprising the following steps:
step S101: acquiring target data, wherein the target data comprises: medical image data with a label and medical image data without a label;
step S102: inputting the medical image data without the label into a network structure of self-supervision contrast learning, and training network parameters according to a preset parameter updating mode;
step S103: inputting the medical image data with the labels into a melanoma recognition network model, and training a plurality of network models according to a preset learning rate change mode, wherein parameters of the melanoma recognition network model are parameters of the encoder trained in the step S102;
step S104: and calculating whether each piece of medical image data with the label corresponds to a melanoma image or not through the plurality of network models, executing preset operation on each network model, and selecting an optimal integration mode.
2. The melanoma image identification method based on self-supervised contrast learning according to claim 1, wherein the step S102 further includes the steps of:
step S201: carrying out data enhancement processing on each picture to obtain two images x1 and x2;
step S202: inputting x1 and x2 respectively into an encoder f;
step S203: adding a prediction multilayer perceptron h to one branch, and processing the coded vector;
step S204: carrying out similarity matching on the two processed vectors;
step S205: transmitting gradient change to the branch where the prediction multilayer perceptron h is located according to the loss value of the similarity;
step S206: and repeating the steps S201 to S205 until each picture is processed.
3. The melanoma image identification method based on self-supervised contrast learning according to claim 1, wherein the step S103 further comprises:
step S301: performing data preprocessing on the acquired labeled data and dividing it into a training set and a test set;
step S302: the parameters of the melanoma recognition network model are the parameters of the encoder after the self-supervised training in step S102, and the output multilayer perceptron is replaced by a fully connected layer;
step S303: setting a binary classification loss function and a cyclic cosine learning rate;
step S304: updating parameters of the network using gradient descent based on the loss;
step S305: saving the parameters of the model every other period;
step S306: repeating the steps S301 to S305 until the binary classification loss value reaches a preset requirement or the number of training iterations reaches the preset requirement.
4. The melanoma image recognition method based on self-supervised contrast learning according to claim 1,
the preset learning rate change mode comprises: a cyclic cosine annealing learning rate schedule, wherein the cyclic cosine annealing schedule is as follows:
$$\alpha(t) = \frac{\alpha_0}{2}\left(\cos\!\left(\frac{\pi\cdot\mathrm{mod}\bigl(t-1,\ \lceil T/M\rceil\bigr)}{\lceil T/M\rceil}\right) + 1\right)$$
where α₀ is the initial learning rate, t is the current training iteration, T is the total number of iterations, and M is the number of cycles.
5. The melanoma image identification method based on self-supervised contrast learning according to claim 2, wherein the step S204 further comprises the steps of:
the expression for the symmetry similarity is as follows:
$$\mathcal{D}(p, z) = -\frac{p}{\lVert p\rVert_2}\cdot\frac{z}{\lVert z\rVert_2}$$
p is the output vector of the prediction multilayer perceptron h, z is the output vector of the other branch, and the subscripts of p and z correspond to the subscripts of the input images.
6. The melanoma image identification method based on self-supervised contrast learning according to claim 5, wherein the step S204 further comprises the steps of:
a stop-gradient operation is added to the loss function calculation, whose formula is as follows:
$$\mathcal{L} = \frac{1}{2}\,\mathcal{D}\bigl(p_1, \mathrm{stopgrad}(z_2)\bigr) + \frac{1}{2}\,\mathcal{D}\bigl(p_2, \mathrm{stopgrad}(z_1)\bigr)$$
7. The melanoma image identification method based on self-supervision contrast learning according to claim 2, characterized in that the backbone network of the encoder f is any one of the following: VGG16, VGG18, ResNet18, DenseNet121, ResNet50, and the output layer of the encoder adopts a projection multilayer perceptron.
8. The melanoma image recognition method based on self-supervision contrast learning according to claim 1, wherein the "performing preset operation on each network model" specifically includes the following steps:
and performing weighted integration on the voting results of each network model, and selecting an optimal integration mode.
9. The melanoma image identification method based on self-supervised contrast learning according to any one of claims 1 to 7, wherein the step S101 further includes the following specific steps:
a skin disease image database is constructed in advance.
10. A storage device having a set of instructions stored therein, the set of instructions being operable to perform: a melanoma image identification method based on self-supervised contrast learning according to any one of claims 1 to 9.
CN202111520678.9A 2021-12-13 2021-12-13 Melanoma image identification method based on self-supervision contrast learning and storage device Pending CN114359656A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111520678.9A CN114359656A (en) 2021-12-13 2021-12-13 Melanoma image identification method based on self-supervision contrast learning and storage device


Publications (1)

Publication Number Publication Date
CN114359656A true CN114359656A (en) 2022-04-15

Family

ID=81099112


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023207104A1 (en) * 2022-04-26 2023-11-02 云南航天工程物探检测股份有限公司 Ground penetrating radar tunnel lining quality inspection method based on self-supervised learning
CN115239708A (en) * 2022-09-21 2022-10-25 广东机电职业技术学院 Plant leaf disease detection model training method and plant leaf disease detection method
CN115239708B (en) * 2022-09-21 2022-12-30 广东机电职业技术学院 Plant leaf disease detection model training method and plant leaf disease detection method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination