WO2022178946A1 - Melanoma image recognition method and apparatus, computer device, and storage medium - Google Patents

Melanoma image recognition method and apparatus, computer device, and storage medium

Info

Publication number
WO2022178946A1
WO2022178946A1 (PCT/CN2021/084535)
Authority
WO
WIPO (PCT)
Prior art keywords: image, melanoma, network, model, training
Application number
PCT/CN2021/084535
Other languages
English (en)
French (fr)
Inventor
刘杰
王健宗
瞿晓阳
Original Assignee
平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2022178946A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroïds
    • G06F18/2414Smoothing the distance, e.g. radial basis function networks [RBFN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Definitions

  • the present application relates to the technical field of artificial intelligence, and in particular, to a neural network-based melanoma image recognition method, a neural network-based melanoma image recognition device, computer equipment, and a computer-readable storage medium.
  • Melanoma usually refers to malignant melanoma, a highly malignant tumor derived from the malignant transformation of melanocytes.
  • the inventors have found that very few deep learning algorithms have been applied to identifying melanoma from its image characteristics. The main reason is that melanoma skin cancer image samples are relatively scarce, while training a deep learning algorithm usually must be carried out on the basis of massive data samples. With insufficient training samples of melanoma skin cancer images, the trained deep learning model identifies melanoma with low accuracy.
  • the main purpose of this application is to provide a neural network-based melanoma image recognition method, a neural network-based melanoma image recognition device, computer equipment, and a computer-readable storage medium, aiming to solve the problem of how to obtain, on the basis of a limited number of training samples, a melanoma image recognition model with high accuracy for melanoma image recognition.
  • the present application provides a neural network-based melanoma image recognition method, comprising the following steps:
  • an image segmentation network and a judgment network of a generative adversarial network model are respectively constructed based on a fully convolutional neural network, and the deconvolution layer in the fully convolutional neural network corresponding to the judgment network is replaced with a fully connected layer;
  • a plurality of melanoma image samples are acquired and input into the image segmentation network for training, so as to generate image prediction results corresponding to the melanoma image samples, wherein the melanoma image samples are annotated with real image results;
  • the image prediction result and the real image result corresponding to the melanoma image are input into the judgment network for adversarial training, so as to optimize the model parameters corresponding to the image segmentation network and the judgment network, and the fully connected layer is optimized using the training result of the judgment network, wherein the fully connected layer is used to identify the probability that the input image of the generative adversarial network model belongs to a melanoma image;
  • in the process of optimizing the model parameters, it is detected whether the similarity between each continuously generated image prediction result and the corresponding real image result is greater than or equal to a preset similarity; if so, it is determined that training of the generative adversarial network model is completed, and the trained model is used as a melanoma image recognition model;
  • when a target image is received, the target image is input into the melanoma image recognition model for analysis, so as to obtain the probability that the target image belongs to a melanoma image.
  • the application also provides a neural network-based melanoma image recognition device
  • the neural network-based melanoma image recognition device includes:
  • the model building module is used to respectively construct an image segmentation network and a judgment network of a generative adversarial network model based on a fully convolutional neural network, and to replace the deconvolution layer in the fully convolutional neural network corresponding to the judgment network with a fully connected layer;
  • the first training module is configured to acquire multiple melanoma image samples and input them into the image segmentation network for training, so as to generate image prediction results corresponding to the melanoma image samples, wherein the melanoma image samples are annotated with real image results;
  • the second training module is configured to input the image prediction result and the real image result corresponding to the melanoma image into the judgment network for adversarial training, so as to optimize the model parameters corresponding to the image segmentation network and the judgment network, and to optimize the fully connected layer using the training result of the judgment network, wherein the fully connected layer is used to identify the probability that the input image of the generative adversarial network model belongs to a melanoma image;
  • the detection module is configured to detect, in the process of optimizing the model parameters, whether the similarity between each continuously generated image prediction result and the corresponding real image result is greater than or equal to a preset similarity;
  • the determination module is used to determine, when this is the case, that training of the generative adversarial network model is completed, and to use the trained generative adversarial network model as a melanoma image recognition model;
  • the analysis module is configured to, when a target image is received, input the target image into the melanoma image recognition model for analysis, so as to obtain the probability that the target image belongs to a melanoma image.
  • the present application also provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements a neural network-based melanoma image recognition method;
  • the steps of the neural network-based melanoma image recognition method include:
  • an image segmentation network and a judgment network of a generative adversarial network model are respectively constructed based on a fully convolutional neural network, and the deconvolution layer in the fully convolutional neural network corresponding to the judgment network is replaced with a fully connected layer;
  • a plurality of melanoma image samples are acquired and input into the image segmentation network for training, so as to generate image prediction results corresponding to the melanoma image samples, wherein the melanoma image samples are annotated with real image results;
  • the image prediction result and the real image result corresponding to the melanoma image are input into the judgment network for adversarial training, so as to optimize the model parameters corresponding to the image segmentation network and the judgment network, and the fully connected layer is optimized using the training result of the judgment network, wherein the fully connected layer is used to identify the probability that the input image of the generative adversarial network model belongs to a melanoma image;
  • in the process of optimizing the model parameters, it is detected whether the similarity between each continuously generated image prediction result and the corresponding real image result is greater than or equal to a preset similarity; if so, it is determined that training of the generative adversarial network model is completed, and the trained model is used as a melanoma image recognition model;
  • when a target image is received, the target image is input into the melanoma image recognition model for analysis, so as to obtain the probability that the target image belongs to a melanoma image.
  • the present application also provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, a neural network-based melanoma image recognition method is implemented;
  • the steps of the neural network-based melanoma image recognition method include:
  • an image segmentation network and a judgment network of a generative adversarial network model are respectively constructed based on a fully convolutional neural network, and the deconvolution layer in the fully convolutional neural network corresponding to the judgment network is replaced with a fully connected layer;
  • a plurality of melanoma image samples are acquired and input into the image segmentation network for training, so as to generate image prediction results corresponding to the melanoma image samples, wherein the melanoma image samples are annotated with real image results;
  • the image prediction result and the real image result corresponding to the melanoma image are input into the judgment network for adversarial training, so as to optimize the model parameters corresponding to the image segmentation network and the judgment network, and the fully connected layer is optimized using the training result of the judgment network, wherein the fully connected layer is used to identify the probability that the input image of the generative adversarial network model belongs to a melanoma image;
  • in the process of optimizing the model parameters, it is detected whether the similarity between each continuously generated image prediction result and the corresponding real image result is greater than or equal to a preset similarity; if so, it is determined that training of the generative adversarial network model is completed, and the trained model is used as a melanoma image recognition model;
  • when a target image is received, the target image is input into the melanoma image recognition model for analysis, so as to obtain the probability that the target image belongs to a melanoma image.
  • the neural network-based melanoma image recognition method, the neural network-based melanoma image recognition device, the computer equipment, and the computer-readable storage medium provided in this application train a melanoma image recognition model through a generative adversarial network model constructed on the basis of a fully convolutional neural network, optimizing the similarity between the image prediction results generated by the model and the real image results. During adversarial training, the model thereby learns rich similarity information for distinguishing true from false data, which reduces the need for explicit pixel-level objective function modeling and reduces the number of samples required to train the model, so that a melanoma image recognition model with high accuracy for melanoma image recognition can be obtained on the basis of a limited number of training samples.
  • FIG. 1 is a schematic diagram of steps of a method for recognizing a melanoma image based on a neural network according to an embodiment of the present application
  • FIG. 2 is a schematic block diagram of a neural network-based melanoma image recognition apparatus according to an embodiment of the present application
  • FIG. 3 is a schematic structural block diagram of a computer device according to an embodiment of the present application.
  • the neural network-based melanoma image recognition method includes:
  • Step S10: respectively constructing an image segmentation network and a judgment network of a generative adversarial network model based on a fully convolutional neural network, and replacing the deconvolution layer in the fully convolutional neural network corresponding to the judgment network with a fully connected layer;
  • Step S20: acquiring a plurality of melanoma image samples, and inputting the melanoma image samples into the image segmentation network for training, so as to generate image prediction results corresponding to the melanoma image samples, wherein the melanoma image samples are annotated with real image results;
  • Step S30: inputting the image prediction result and the real image result corresponding to the melanoma image into the judgment network for adversarial training, so as to optimize the model parameters corresponding to the image segmentation network and the judgment network, and optimizing the fully connected layer using the training result of the judgment network, wherein the fully connected layer is used to identify the probability that the input image of the generative adversarial network model belongs to a melanoma image;
  • Step S40: in the process of optimizing the model parameters, detecting whether the similarity between each continuously generated image prediction result and the corresponding real image result is greater than or equal to a preset similarity;
  • Step S50: if so, determining that training of the generative adversarial network model is completed, and using the trained generative adversarial network model as a melanoma image recognition model;
  • Step S60: when a target image is received, inputting the target image into the melanoma image recognition model for analysis, so as to obtain the probability that the target image belongs to a melanoma image.
  • the embodiment terminal may be a computer device, or may be a melanoma image recognition device based on a neural network.
  • the terminal uses artificial intelligence and image recognition technology to construct a generative adversarial network model (GAN, Generative Adversarial Networks) based on a fully convolutional neural network (FCN, Fully Convolutional Networks for Semantic Segmentation).
  • the constructed generative adversarial network model includes an image segmentation network and a judgment network (also called an adversarial network), both of which are constructed based on the original fully convolutional neural network; in this way, the image segmentation network's segmentation effect can be enhanced.
  • the deconvolution layer in the fully convolutional neural network corresponding to the judgment network is replaced with a fully connected layer, giving the judgment network a classification function, so that after training the judgment network can identify the probability that an input image belongs to a melanoma image.
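To make the replacement concrete, here is a minimal plain-Python sketch (not the patent's implementation; all names and shapes are illustrative): instead of a deconvolution layer that would upsample features back to an H × W map, a fully connected head flattens the C × H × W feature volume and emits a single scalar probability.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fully_connected_head(features, weights, bias=0.0):
    """features: nested C x H x W list; weights: flat list of the same size.
    Flattens the feature volume and emits one scalar probability."""
    flat = [v for channel in features for row in channel for v in row]
    score = sum(w * v for w, v in zip(weights, flat)) + bias
    return sigmoid(score)  # probability that the input is a melanoma image

random.seed(0)
C, H, W = 2, 4, 4  # toy feature volume
feats = [[[random.gauss(0, 1) for _ in range(W)] for _ in range(H)] for _ in range(C)]
weights = [random.gauss(0, 0.1) for _ in range(C * H * W)]
p = fully_connected_head(feats, weights)
print(0.0 < p < 1.0)  # True: the head outputs a valid probability
```

The single scalar output is what gives the network a classification function, in contrast to the per-pixel map a deconvolution layer would produce.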
  • the melanoma image samples can be derived from clinically collected melanoma images (for example, melanoma skin cancer images) stored in the hospital system; these melanoma images are pre-annotated by relevant engineers to generate melanoma image samples, a certain number of which are then input to the terminal.
  • each melanoma image sample is marked with its corresponding real image result, and the real image result includes the probability that the image belongs to a melanoma image, and when the real image result directly indicates that the image belongs to a melanoma image, the corresponding probability should be 100%.
  • the samples may also include a portion of non-melanoma images (that is, images of normal biological organs), whose annotated probability of being a melanoma image is 0.
  • the terminal after acquiring multiple melanoma image samples, the terminal inputs the melanoma image samples one by one into the generative adversarial network model for training.
  • the images in the melanoma image samples are extracted and input into the image segmentation network in turn, and the melanoma images are segmented by the image segmentation network.
  • it should be noted that only the image itself in the melanoma image sample needs to be input into the image segmentation network; there is no need to input the annotated real image result. Alternatively, the image segmentation network can be set to ignore the annotated result in the sample.
  • the image segmentation network converts the input image x_i into feature maps, one per channel of the input layer of its corresponding fully convolutional neural network, obtaining a multi-channel feature map set X ∈ R^(C×H×W), where R is the set of real numbers, C is the number of channels (optionally the red, green, and blue primary color channels), H is the image height, and W is the image width, i.e., H×W is the image size (optionally 400×400). The image segmentation network then extracts image spatial structure information and image semantic information from the multi-channel feature map through multi-layer residual convolution modules and average pooling operations, obtaining the feature maps to be output.
  • the feature map to be output is deconvolved to the original size of the image (ie H ⁇ W), and the melanoma image sample output by the image segmentation network can be obtained.
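The tensor shapes above can be illustrated with a toy plain-Python sketch (values are random; C = 3 and H = W = 400 follow the optional values mentioned, and the pooling shown is a global average per channel, a simplification of the pooling inside the network):

```python
import random

random.seed(1)
C, H, W = 3, 400, 400  # multi-channel feature map X in R^(C x H x W)
X = [[[random.random() for _ in range(W)] for _ in range(H)] for _ in range(C)]

def average_pool(channel):
    # collapse one H x W plane to a single average value
    return sum(sum(row) for row in channel) / (len(channel) * len(channel[0]))

pooled = [average_pool(ch) for ch in X]
print(len(X), len(X[0]), len(X[0][0]))  # 3 400 400
print(len(pooled))                      # 3
```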
  • the spatial information may be, for example, the sizes of and positional relationships between objects in the picture; the semantic information may be the meaning expressed by the image, which can carry several meanings, such as describing an image as a brain CT (Computed Tomography) image with a tumor.
  • the image prediction result output by the image segmentation network will be associated with the currently trained melanoma image sample. It should be understood that the image prediction result includes the probability that the predicted image belongs to the melanoma image (which may be recorded as the first probability).
  • step S30 the generative adversarial network model inputs the original image in the currently trained melanoma image sample, the real image result and the image prediction result corresponding to the melanoma image sample into the judgment network for further adversarial training.
  • adversarial training is a training method based on adversarial learning.
  • the process of adversarial learning can be regarded as driving the model toward a training goal in which the output the model produces for an input is as consistent as possible with the real result.
  • the input of the judgment network covers two cases: one is the original image + image prediction result, and the other is the original image + real image result; the training target for the first case is 0, and for the second case it is 1.
  • both results can be represented in the form of mask labels (or feature maps), where the mask label has height H, width W, and 2 channels; this is because a melanoma skin cancer image is divided into foreground and background, the foreground being the set of pixels in the lesion area, so the number of channels is set to 2. The mask label corresponding to the image prediction result is denoted s(x_i) (the predicted mask), and the mask label corresponding to the real image result is denoted y_i (the real mask).
  • the mask label is associated with a probability that it belongs to a melanoma skin cancer image.
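A toy sketch of such a two-channel mask label (hypothetical values; channel 0 is the background, channel 1 is the lesion foreground, and the two channels are one-hot at every pixel):

```python
H, W = 4, 4
# toy 2x2 lesion region in the centre of a 4x4 image
lesion = [[1 if 1 <= r <= 2 and 1 <= c <= 2 else 0 for c in range(W)]
          for r in range(H)]
mask = [
    [[1 - lesion[r][c] for c in range(W)] for r in range(H)],  # channel 0: background
    [[lesion[r][c] for c in range(W)] for r in range(H)],      # channel 1: foreground
]
# every pixel is assigned to exactly one of the two channels
print(all(mask[0][r][c] + mask[1][r][c] == 1 for r in range(H) for c in range(W)))  # True
```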
  • adversarial training is mainly carried out using a joint optimization formula, the formula is as follows:
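The formula itself did not survive this extraction. A standard joint objective for adversarial segmentation using the same terms described below (J_s, J_d, and a balancing parameter λ) would take the following form; this is a hedged reconstruction consistent with the surrounding description, not necessarily the patent's exact expression:

$$\min_{S}\ \max_{D}\ \sum_{i}\Big[J_s\big(s(x_i),\,y_i\big)\;-\;\lambda\Big(J_d\big(D(x_i,\,y_i),\,1\big)+J_d\big(D(x_i,\,s(x_i)),\,0\big)\Big)\Big]$$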
  • S represents the predicted class probability of the image segmentation network at each pixel point, with the class probabilities normalized to sum to 1 at each pixel;
  • D(x_i, y) is the judgment network's scalar probability estimate that y comes from y_i (the real mask) rather than from s(x_i) (the predicted mask);
  • x_i is the original image;
  • J_s is the multi-class cross-entropy loss averaged over all pixels of the predicted mask;
  • J_d is the binary logistic loss of the judgment network's predictions;
  • λ is a tuning parameter used to balance the pixel loss against the adversarial loss by alternately optimizing the respective loss functions of S and D.
  • the joint optimization formula is to perform joint optimization by minimizing S and maximizing D.
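Hedged plain-Python sketches of the two losses just described (standard textbook definitions, assumed rather than quoted from the patent): J_s averages the multi-class cross-entropy over the pixels of the predicted mask, and J_d is the binary logistic loss on the judgment network's scalar output.

```python
import math

def j_s(pred_mask, true_mask, eps=1e-12):
    """Multi-class cross-entropy averaged over pixels.
    pred_mask, true_mask: lists of pixels; each pixel is a list of
    per-class probabilities (pred) / one-hot labels (true)."""
    total = 0.0
    for p, t in zip(pred_mask, true_mask):
        total += -sum(ti * math.log(pi + eps) for pi, ti in zip(p, t))
    return total / len(pred_mask)

def j_d(d_output, target, eps=1e-12):
    """Binary logistic loss for a scalar probability and a 0/1 target."""
    return -(target * math.log(d_output + eps)
             + (1 - target) * math.log(1 - d_output + eps))

# two pixels, two classes (background / lesion)
pred = [[0.9, 0.1], [0.2, 0.8]]
true = [[1, 0], [0, 1]]
print(round(j_s(pred, true), 4))   # 0.1643: small loss, predictions match labels
print(j_d(0.5, 1) > j_d(0.9, 1))   # True: a more confident correct output costs less
```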
  • in this way, the similarity between the image prediction results and the real image results is improved, making the prediction results increasingly close to the real results.
  • the results of the joint-optimization adversarial training (that is, the training results) are used to update the model parameters of the generative adversarial network model; therefore, the entire training process of the generative adversarial network model is essentially a process of alternately training the image segmentation network and the judgment network.
  • the judgment network's learning can effectively transfer this global information back to the image segmentation network to enhance the segmentation effect.
  • the training results can also be input into the fully connected layer of the judgment network for classification and discrimination, so as to regenerate the probability that the input image of the current generative adversarial network model (that is, the image in the melanoma image sample) belongs to a melanoma image (which may be recorded as the second probability, equivalent to the result of optimizing the first probability).
  • this process is also equivalent to training and optimizing the fully connected layer: through it, the fully connected layer acquires the ability to classify images and identify the probability that the input image of the generative adversarial network model belongs to a melanoma image.
  • some of the melanoma image samples obtained by the terminal may be marked with the real results of the images, and some of them may not be marked with the real results of the images.
  • the samples not annotated with real image results are input into the image segmentation network to obtain image prediction results; then the real image results of the annotated samples, together with the image prediction results of the unannotated samples, are input into the judgment network for adversarial training. Although the training process is longer this way, it saves manual annotation time, and the performance of the finally trained model is better than training with all-annotated samples.
  • in step S40, while the terminal uses the training results of the judgment network to optimize the model parameters corresponding to the judgment network and the image segmentation network, each time the terminal obtains an image prediction result for a melanoma image sample, it can check that prediction result against the real image result annotated on the sample, so as to obtain the similarity between the image prediction result and the real image result.
  • the higher the similarity between the image prediction result and the real image result, the higher the confidence of the probability, produced by adversarial training on the two, that the image belongs to a melanoma image.
  • the terminal detects whether the similarity between the image prediction result corresponding to each melanoma image sample and the real image result is greater than or equal to the preset similarity; wherein, the value range of the preset similarity can be selected as 90%-100%.
  • when the similarity is greater than or equal to the preset similarity, the count is incremented by one; when the terminal detects that the similarity between the image prediction result of the current melanoma image sample and the real image result is less than the preset similarity, the count is cleared, and the generative adversarial network model continues training on new melanoma image samples.
  • the terminal can judge whether the training of the generative adversarial network model is completed by detecting whether the count value of the similarity greater than or equal to the preset similarity is greater than the preset number of times.
  • the actual value of the preset number of times may be set according to actual needs, for example, set to at least three times.
  • in step S50, when the terminal detects that the count value is greater than the preset number of times, it determines that the similarity between each continuously generated image prediction result and the corresponding real image result is greater than or equal to the preset similarity, and further determines that training of the generative adversarial network model is completed.
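The counting rule of steps S40–S50 can be sketched as a small plain-Python helper (the names are hypothetical; the 0.9 threshold and the streak of three mirror the optional values mentioned above):

```python
def training_converged(similarities, threshold=0.9, required_streak=3):
    """Return True once more than `required_streak` consecutive
    prediction/ground-truth similarities reach the threshold."""
    streak = 0
    for s in similarities:
        if s >= threshold:
            streak += 1
            if streak > required_streak:
                return True
        else:
            streak = 0  # count is cleared; training continues on new samples
    return False

print(training_converged([0.95, 0.92, 0.91, 0.93, 0.96]))  # True
print(training_converged([0.95, 0.80, 0.91, 0.93]))        # False
```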
  • at this point, the distribution of the image prediction results matches the distribution of the real image results, that is, the image prediction results are consistent with the real image results, so that the confidence of the probability, finally predicted from the training results, that the input image belongs to a melanoma image reaches its optimum (that is, the prediction result is maximally credible).
  • the trained generative adversarial network model is used as the melanoma image recognition model obtained by training.
  • the melanoma image recognition model can be used to identify whether the input image belongs to the melanoma image, and output the probability that the input image belongs to the melanoma image.
  • the melanoma image recognition model trained based on the generative adversarial network model can also enhance the overall consistency of image segmentation and extract the contours of lesion patches in melanoma images.
  • the terminal is provided with an image acquisition device, or a communication connection is established between the terminal and the image acquisition device.
  • the terminal can use the image acquisition device to collect the target image of the inspected person in real time.
  • the target image is the image to be recognized.
  • the terminal uses the target image as an input image of the melanoma image recognition model, and inputs it into the melanoma image recognition model for analysis.
  • the image segmentation network in the model segments the target image into at least one organ region image (if the target image shows multiple organs, it can be divided into multiple organ region images), and predicts the image prediction result corresponding to each organ region image. It should be understood that the image segmentation network's identification of target regions (i.e., organ regions) can be implemented based on image recognition technology.
  • the judgment network in the model further adjusts the image prediction result output by the image segmentation network, finally obtaining the melanoma image prediction result corresponding to the target image (that is, the first probability output by the image segmentation network is optimized to obtain the second probability), so that the probability that the target image belongs to a melanoma image is obtained for output.
  • the terminal acquires the melanoma image prediction result output by the melanoma image recognition model, and associates the acquired melanoma image prediction result with the target image currently being recognized.
  • the terminal can also mark the melanoma image prediction result corresponding to each organ in the display area of each organ region in the target image, that is, mark the probability that each organ region image belongs to a melanoma image, so as to assist medical staff in quickly identifying melanoma images.
  • the medical staff can quickly identify the melanoma image based on this, without further identification of the normal image, which can save medical resources to a certain extent.
  • in summary, a melanoma image recognition model is trained through a generative adversarial network model constructed on the basis of a fully convolutional neural network, optimizing the similarity between the image prediction results generated by the model and the real results, so that the model can learn rich similarity information to distinguish true from false data, thereby reducing the need for explicit pixel-level objective function modeling and reducing the number of samples required for training, without the massive training samples required by traditional neural network models.
  • the method further includes:
  • Step S70 storing the melanoma image recognition model in a blockchain network.
  • the terminal establishes a communication connection with a blockchain network (Blockchain Network).
  • a blockchain network is a collection of nodes that incorporate new blocks into the blockchain through consensus.
  • Blockchain is a new application mode of computer technology such as distributed data storage, point-to-point transmission, consensus mechanism, and encryption algorithm.
  • Blockchain is essentially a decentralized database: a chain of data blocks linked by cryptographic methods, where each data block contains a batch of network transaction information used to verify the validity of its information (anti-counterfeiting) and to generate the next block.
  • the blockchain can include the underlying platform of the blockchain, the platform product service layer, and the application service layer.
  • the underlying platform of the blockchain can include processing modules such as user management, basic services, smart contracts, and operation monitoring.
  • the user management module is responsible for the identity information management of all blockchain participants, including maintenance of public and private key generation (account management), key management, and maintenance of the corresponding relationship between the user's real identity and blockchain address (authority management), etc.
  • the basic service module is deployed on all blockchain node devices to verify the validity of business requests and, after reaching consensus on valid requests, record them in storage.
  • for a new business request, the basic service first performs interface adaptation for parsing and authentication (interface adaptation), then encrypts the business information through the consensus algorithm (consensus management); after encryption, the information is transmitted completely and consistently to the shared ledger (network communication) and recorded in storage. The smart contract module is responsible for the registration and issuance of contracts, as well as contract triggering and contract execution.
  • developers can define contract logic in a programming language and publish it on the blockchain (contract registration); according to the logic of the contract terms, a key or another event triggers execution to complete the contract logic, and functions for contract upgrade and cancellation are also provided;
  • the operation monitoring module is mainly responsible for deployment during product release, configuration modification, contract settings, and cloud adaptation, as well as visual output of real-time status during product operation, for example alarms, monitoring of network conditions, and monitoring of node device health.
  • the melanoma image recognition model can be uploaded to the blockchain network for storage.
  • after the terminal stores the trained melanoma image recognition model in the blockchain network, when the terminal collects a target image of the examinee with the image acquisition device, it can send the collected target image to the blockchain network.
  • when any blockchain node of the blockchain network receives the target image sent by the terminal, it can analyze the target image with the stored melanoma image recognition model; after the analysis is completed, the image prediction result corresponding to the target image is output.
  • after a blockchain node obtains the classification result output by the model, it can feed the result back to the terminal.
  • the terminal then takes the received image prediction result as the image prediction result corresponding to the target image, thus completing the process of recognizing the target image.
  • the method further includes:
  • Step S80 when detecting that the melanoma image recognition model stored on the blockchain network is updated, obtain model parameters corresponding to the updated melanoma image recognition model from the blockchain network;
  • Step S81 updating the locally stored melanoma image recognition model according to the obtained model parameters.
  • when any hospital system detects that its local melanoma image recognition model has been updated, it can synchronize the updated model to the blockchain network (or upload only the updated part of the model parameters).
  • when the terminal detects that the melanoma image recognition model stored on the blockchain network has been updated, and that the update operation was not triggered by the local terminal, the terminal may obtain the model parameters corresponding to the updated model from the blockchain network.
  • the terminal updates the locally stored melanoma image recognition model based on the acquired model parameters, so as to optimize its performance.
  • by storing the melanoma image recognition model in the blockchain network, not only are the storage security of the model improved and local storage space effectively saved, but additional melanoma image samples uploaded by hospital systems can also be obtained on this basis to update the model, thereby improving the accuracy of the melanoma image recognition model in recognizing melanoma images.
  • when the target image is received, the target image is input into the melanoma image recognition model for analysis, so as to obtain the probability that the target image belongs to a melanoma image.
  • the steps include:
  • Step S90 when the target image is received, detect whether the image quality of the target image satisfies a preset condition
  • Step S91 if yes, input the target image into the melanoma image recognition model for analysis, so as to obtain the probability that the target image belongs to the melanoma image;
  • Step S92 If not, output prompt information, where the prompt information is used to prompt to re-collect the target image.
  • when the terminal receives the target image, before taking it as the input image of the melanoma image recognition model and inputting it into the model for analysis, it can first detect whether the image quality of the target image satisfies a preset condition.
  • the image quality may be image brightness, image clarity, etc.
  • the corresponding preset conditions may be preset brightness, preset clarity, and the like.
  • the specific values of the preset conditions such as the preset brightness and the preset definition can be set according to actual needs, which are not limited in this embodiment.
  • when the terminal detects that the image brightness of the target image is greater than or equal to the preset brightness, and/or that the image sharpness of the target image is greater than or equal to the preset sharpness, it may determine that the image quality of the target image satisfies the preset condition; when the terminal detects that the image brightness is lower than the preset brightness, or that the image sharpness is less than the preset sharpness, it determines that the image quality of the target image does not satisfy the preset condition.
  • the terminal may also detect whether the image quality of the target image satisfies the preset condition by detecting whether the target image contains an image of a human organ; if so, the preset condition is determined to be satisfied, and otherwise it is determined not to be satisfied.
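The quality-gating rule in the bullets above can be sketched as a small Python check. The threshold values, and the reading that both conditions must hold, are illustrative assumptions; the application leaves the concrete preset values open:

```python
def meets_preset_condition(brightness, sharpness,
                           preset_brightness=80, preset_sharpness=0.5):
    """Return True if the captured image satisfies the preset condition
    described above: brightness AND sharpness must each reach their preset
    value (the defaults here are illustrative placeholders, not values
    fixed by the application)."""
    return brightness >= preset_brightness and sharpness >= preset_sharpness
```

A target image failing this check would trigger the prompt to re-collect it.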
  • when the terminal detects that the image quality of the target image satisfies the preset condition, it takes the target image as the input image of the melanoma image recognition model and inputs it into the model for analysis, so that the model generates the probability that the target image belongs to a melanoma image.
  • when the terminal detects that the image quality of the target image does not satisfy the preset condition, it outputs prompt information used to prompt the user to re-collect the target image (i.e., the human organ image) of the examinee, for example by indicating on the display interface of the relevant detection instrument that the quality of the currently collected target image is abnormal and it needs to be re-collected.
  • when the target image is received, the target image is input into the melanoma image recognition model for analysis, so as to obtain the probability that the target image belongs to a melanoma image.
  • the steps also include:
  • Step S100 detecting whether the probability that the target image belongs to a melanoma image is greater than a preset threshold
  • Step S101 If yes, output alarm information corresponding to the target image.
  • when the terminal obtains the melanoma image prediction result corresponding to the target image, it detects on this basis whether the probability that the target image belongs to a melanoma image is greater than a preset threshold.
  • the preset threshold is used to measure the degree to which the target image belongs to a melanoma image, so its specific value range can be set according to actual needs; for example, a value between 70% and 99% can be selected.
  • when the terminal detects that the probability that the target image belongs to a melanoma image is greater than the preset threshold, indicating a high likelihood that the target image is a melanoma image, the terminal generates alarm information according to the target image and its corresponding image prediction result, and outputs the alarm information to an associated device.
  • the associated device may be the user device of the person from whom the target image was collected, or an associated device of the relevant medical staff.
  • when there are multiple organ region images in the target image, the terminal only needs to detect that the probability of at least one organ region image belonging to a melanoma image is greater than the preset threshold to determine that the probability of the target image belonging to a melanoma image is greater than the preset threshold.
  • when the terminal detects that the probability that the target image belongs to a melanoma image is less than or equal to the preset threshold, the target image is marked as a normal image.
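The alarm decision described above, including the at-least-one-organ-region rule, can be sketched as follows (the 0.7 default is an illustrative value from the stated 70%–99% range):

```python
def classify_target_image(region_probs, preset_threshold=0.7):
    """Apply the alarm rule described above: if the melanoma probability of
    at least one organ-region image exceeds the preset threshold, the whole
    target image is treated as a suspected melanoma image ("alarm");
    otherwise it is marked as a normal image."""
    if any(p > preset_threshold for p in region_probs):
        return "alarm"
    return "normal"
```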
  • the step further includes:
  • Step S110 when a confirmation response to the alarm information is received, the target image is used as a melanoma image sample, and the melanoma image recognition model is updated based on the target image.
  • when the terminal detects that the probability that the target image belongs to a melanoma image is greater than the preset threshold, alarm information corresponding to the target image is generated and output to the associated devices of the relevant medical staff.
  • if the medical staff determine that the melanoma image recognition model has identified the target image correctly and confirm that the target image is indeed a melanoma image, they can send a confirmation response to the alarm information to the terminal through their associated device; if they determine that the model has identified the target image incorrectly and confirm that the target image is not a melanoma image, they can send a negative response to the alarm information instead.
  • when the terminal receives a confirmation response to the alarm information, it can turn the target image into a melanoma image sample, promote the image prediction result corresponding to the target image to the real image result, and annotate the newly generated melanoma image sample based on that real image result.
  • when the terminal detects that the melanoma image recognition model is idle, or detects that the number of newly generated melanoma image samples is greater than the preset number, the new melanoma image samples are input into the melanoma image recognition model to update it iteratively.
  • the preset number may be set according to actual needs, which is not limited in this embodiment.
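The retraining trigger described above can be sketched as follows (the preset sample count is an assumed placeholder, since the embodiment deliberately leaves it open):

```python
def should_retrain(model_idle, new_sample_count, preset_count=32):
    """Trigger an iterative model update when the recognition model is idle,
    or when the number of newly confirmed melanoma image samples exceeds
    the preset count (preset_count is an illustrative assumption)."""
    return model_idle or new_sample_count > preset_count
```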
  • an embodiment of the present application further provides a neural network-based melanoma image recognition device 10, including:
  • the model building module 11 is used to construct, based on a fully convolutional neural network, the image segmentation network and the judgment network of the generative adversarial network model, and to replace the deconvolution layer in the fully convolutional neural network corresponding to the judgment network with a fully connected layer;
  • the first training module 12 is configured to acquire a plurality of melanoma image samples, and input the melanoma image samples into the image segmentation network for training, so as to generate image prediction results corresponding to the melanoma image samples, wherein , the melanoma image sample is marked with the real image result;
  • the second training module 13 is configured to input the image prediction result and the real image result corresponding to the melanoma image into the judgment network for adversarial training, so as to optimize the model parameters corresponding to the image segmentation network and the judgment network, and to optimize the fully connected layer with the training result of the judgment network, wherein the fully connected layer is used to identify the probability that an input image of the generative adversarial network model belongs to a melanoma image;
  • a detection module 14 configured to detect whether the similarity between the continuously generated image prediction result and the real image result is greater than or equal to a preset similarity in the process of optimizing the model parameters
  • the determination module 15 is used for determining, if so, that the training of the generative adversarial network model is completed, and for using the trained generative adversarial network model as the melanoma image recognition model;
  • the analysis module 16 is configured to input the target image into the melanoma image recognition model for analysis when receiving the target image, so as to obtain the probability that the target image belongs to the melanoma image.
  • the neural network-based melanoma image recognition apparatus further includes a storage module, and the storage module is configured to store the melanoma image recognition model in a blockchain network.
  • the neural network-based melanoma image recognition device further includes:
  • an acquisition module configured to acquire model parameters corresponding to the updated melanoma image recognition model from the blockchain network when it is detected that the melanoma image recognition model stored on the blockchain network is updated;
  • the updating module is used for updating the locally stored melanoma image recognition model according to the obtained model parameters.
  • the neural network-based melanoma image recognition device further includes:
  • a judgment module configured to detect whether the image quality of the target image meets a preset requirement when the target image is received
  • the analysis module is further configured to input the target image into the melanoma image recognition model for analysis, so as to obtain the probability that the target image belongs to the melanoma image;
  • the prompt module is used for outputting prompt information if not, where the prompt information is used for prompting to re-collect the target image.
  • the judging module is further configured to, according to the image prediction result corresponding to the target image, detect whether the probability that the target image belongs to a melanoma image is greater than a preset threshold;
  • the second determination module is further configured to output alarm information corresponding to the target image if yes.
  • the updating module is further configured to, when a confirmation response to the alarm information is received, use the target image as the melanoma image sample and update the melanoma image recognition model based on the target image.
  • an embodiment of the present application further provides a computer device.
  • the computer device may be a server, and its internal structure may be as shown in FIG. 3 .
  • the computer device includes a processor, memory, a network interface, and a database connected by a system bus. Among them, the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium, an internal memory.
  • the nonvolatile storage medium stores an operating system, a computer program, and a database.
  • the internal memory provides an environment for the execution of the operating system and computer programs in the non-volatile storage medium.
  • the database of the computer device is used to store data related to the neural network-based image recognition method of melanoma.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • the computer program when executed by the processor, implements a neural network-based melanoma image recognition method.
  • FIG. 3 is only a block diagram of a partial structure related to the solution of the present application, and does not constitute a limitation on the computer equipment to which the solution of the present application is applied.
  • the present application also provides a computer-readable storage medium
  • the computer-readable storage medium includes a computer program, and when the computer program is executed by a processor, it implements the steps of the neural network-based melanoma image recognition method described in the above embodiments. It can be understood that the computer-readable storage medium in this embodiment may be non-volatile or volatile.
  • with the neural network-based melanoma image recognition method, the neural network-based melanoma image recognition device, the computer equipment, and the storage medium provided in the embodiments of the present application, the melanoma image recognition model is trained through a generative adversarial network model constructed on a fully convolutional neural network, so as to optimize the similarity between the model's predicted results for melanoma images and the real results; in this way, during the adversarial training process the model can learn rich similarity measures to distinguish real from fake data.
  • Nonvolatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), and direct Rambus dynamic RAM (DRDRAM), etc.


Abstract

This application relates to the field of artificial intelligence and discloses a neural network-based melanoma image recognition method, comprising: constructing, based on a fully convolutional neural network, the image segmentation network and the judgment network of a generative adversarial network model; acquiring a plurality of melanoma image samples and inputting them into the generative adversarial network model for training, the image segmentation network being used to generate image prediction results corresponding to the melanoma image samples, and the judgment network being used to perform adversarial training on the image prediction results against the real image results; and taking the trained generative adversarial network model as the melanoma image recognition model. This application also relates to the field of blockchain technology and further discloses a neural network-based melanoma image recognition device, a computer device, and a computer-readable storage medium. This application obtains a melanoma image recognition model with high accuracy in recognizing melanoma images.

Description

Melanoma image recognition method and device, computer equipment, and storage medium
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on February 25, 2021, with application number 2021102122893 and invention title "Melanoma image recognition method and device, computer equipment, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of artificial intelligence, and in particular to a neural network-based melanoma image recognition method, a neural network-based melanoma image recognition device, computer equipment, and a computer-readable storage medium.
Background
Melanoma usually refers to malignant melanoma, a highly malignant tumor arising from the malignant transformation of melanocytes. At present, the inventors have found that only very few deep learning algorithms are used to detect melanoma by identifying its features. This is mainly because melanoma skin cancer image samples are relatively scarce, while the training of deep learning algorithms usually has to be carried out on massive data samples; if the training samples of melanoma skin cancer images are insufficient, the resulting deep learning model will have low accuracy in recognizing melanoma.
The above content is intended only to assist in understanding the technical solution of this application and does not constitute an admission that it is prior art.
Technical Problem
The main purpose of this application is to provide a neural network-based melanoma image recognition method, a neural network-based melanoma image recognition device, computer equipment, and a computer-readable storage medium, aiming to solve the problem of how to obtain, based on a limited number of training samples, a melanoma image recognition model with high accuracy in recognizing melanoma images.
Technical Solution
To achieve the above purpose, this application provides a neural network-based melanoma image recognition method, comprising the following steps:
constructing, based on a fully convolutional neural network, the image segmentation network and the judgment network of a generative adversarial network model, and replacing the deconvolution layer in the fully convolutional neural network corresponding to the judgment network with a fully connected layer;
acquiring a plurality of melanoma image samples and inputting them into the image segmentation network for training, so as to generate image prediction results corresponding to the melanoma image samples, wherein each melanoma image sample is annotated with a real image result;
inputting the image prediction result and the real image result corresponding to the melanoma image into the judgment network for adversarial training, so as to optimize the model parameters corresponding to the image segmentation network and the judgment network, and optimizing the fully connected layer with the training result of the judgment network, wherein the fully connected layer is used to identify the probability that an input image of the generative adversarial network model belongs to a melanoma image;
in the process of optimizing the model parameters, detecting whether the similarities between consecutively generated image prediction results and the corresponding real image results are all greater than or equal to a preset similarity;
if so, determining that the training of the generative adversarial network model is completed, and taking the trained generative adversarial network model as the melanoma image recognition model;
when a target image is received, inputting the target image into the melanoma image recognition model for analysis, so as to obtain the probability that the target image belongs to a melanoma image.
To achieve the above purpose, this application further provides a neural network-based melanoma image recognition device, comprising:
a model building module, configured to construct, based on a fully convolutional neural network, the image segmentation network and the judgment network of a generative adversarial network model, and to replace the deconvolution layer in the fully convolutional neural network corresponding to the judgment network with a fully connected layer;
a first training module, configured to acquire a plurality of melanoma image samples and input them into the image segmentation network for training, so as to generate image prediction results corresponding to the melanoma image samples, wherein each melanoma image sample is annotated with a real image result;
a second training module, configured to input the image prediction result and the real image result corresponding to the melanoma image into the judgment network for adversarial training, so as to optimize the model parameters corresponding to the image segmentation network and the judgment network, and to optimize the fully connected layer with the training result of the judgment network, wherein the fully connected layer is used to identify the probability that an input image of the generative adversarial network model belongs to a melanoma image;
a detection module, configured to detect, in the process of optimizing the model parameters, whether the similarities between consecutively generated image prediction results and the corresponding real image results are all greater than or equal to a preset similarity;
a determination module, configured to determine, if so, that the training of the generative adversarial network model is completed, and to take the trained generative adversarial network model as the melanoma image recognition model;
an analysis module, configured to input a received target image into the melanoma image recognition model for analysis, so as to obtain the probability that the target image belongs to a melanoma image.
To achieve the above purpose, this application further provides a computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the neural network-based melanoma image recognition method;
wherein the steps of the neural network-based melanoma image recognition method comprise:
constructing, based on a fully convolutional neural network, the image segmentation network and the judgment network of a generative adversarial network model, and replacing the deconvolution layer in the fully convolutional neural network corresponding to the judgment network with a fully connected layer;
acquiring a plurality of melanoma image samples and inputting them into the image segmentation network for training, so as to generate image prediction results corresponding to the melanoma image samples, wherein each melanoma image sample is annotated with a real image result;
inputting the image prediction result and the real image result corresponding to the melanoma image into the judgment network for adversarial training, so as to optimize the model parameters corresponding to the image segmentation network and the judgment network, and optimizing the fully connected layer with the training result of the judgment network, wherein the fully connected layer is used to identify the probability that an input image of the generative adversarial network model belongs to a melanoma image;
in the process of optimizing the model parameters, detecting whether the similarities between consecutively generated image prediction results and the corresponding real image results are all greater than or equal to a preset similarity;
if so, determining that the training of the generative adversarial network model is completed, and taking the trained generative adversarial network model as the melanoma image recognition model;
when a target image is received, inputting the target image into the melanoma image recognition model for analysis, so as to obtain the probability that the target image belongs to a melanoma image.
To achieve the above purpose, this application further provides a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the neural network-based melanoma image recognition method;
wherein the steps of the neural network-based melanoma image recognition method comprise:
constructing, based on a fully convolutional neural network, the image segmentation network and the judgment network of a generative adversarial network model, and replacing the deconvolution layer in the fully convolutional neural network corresponding to the judgment network with a fully connected layer;
acquiring a plurality of melanoma image samples and inputting them into the image segmentation network for training, so as to generate image prediction results corresponding to the melanoma image samples, wherein each melanoma image sample is annotated with a real image result;
inputting the image prediction result and the real image result corresponding to the melanoma image into the judgment network for adversarial training, so as to optimize the model parameters corresponding to the image segmentation network and the judgment network, and optimizing the fully connected layer with the training result of the judgment network, wherein the fully connected layer is used to identify the probability that an input image of the generative adversarial network model belongs to a melanoma image;
in the process of optimizing the model parameters, detecting whether the similarities between consecutively generated image prediction results and the corresponding real image results are all greater than or equal to a preset similarity;
if so, determining that the training of the generative adversarial network model is completed, and taking the trained generative adversarial network model as the melanoma image recognition model;
when a target image is received, inputting the target image into the melanoma image recognition model for analysis, so as to obtain the probability that the target image belongs to a melanoma image.
Beneficial Effects
With the neural network-based melanoma image recognition method, the neural network-based melanoma image recognition device, the computer device, and the computer-readable storage medium provided by this application, the melanoma image recognition model is trained through a generative adversarial network model constructed on a fully convolutional neural network, so as to optimize the similarity between the model's predicted results for melanoma images and the real results; during adversarial training the model can learn rich similarity measures to distinguish real from fake data, which reduces the need to model an explicit pixel-level objective function and, in turn, the number of samples required for training, so that a melanoma image recognition model with high accuracy in recognizing melanoma images can be obtained from a limited number of training samples.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the steps of the neural network-based melanoma image recognition method in an embodiment of this application;
FIG. 2 is a schematic block diagram of the neural network-based melanoma image recognition device in an embodiment of this application;
FIG. 3 is a schematic block diagram of the structure of the computer device in an embodiment of this application.
The realization of the purpose, the functional features, and the advantages of this application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Best Mode for Carrying Out the Invention
To make the purpose, technical solution, and advantages of this application clearer, this application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain this application and are not intended to limit it.
Referring to FIG. 1, in one embodiment, the neural network-based melanoma image recognition method includes:
Step S10: constructing, based on a fully convolutional neural network, the image segmentation network and the judgment network of a generative adversarial network model, and replacing the deconvolution layer in the fully convolutional neural network corresponding to the judgment network with a fully connected layer;
Step S20: acquiring a plurality of melanoma image samples and inputting them into the image segmentation network for training, so as to generate image prediction results corresponding to the melanoma image samples, wherein each melanoma image sample is annotated with a real image result;
Step S30: inputting the image prediction result and the real image result corresponding to the melanoma image into the judgment network for adversarial training, so as to optimize the model parameters corresponding to the image segmentation network and the judgment network, and optimizing the fully connected layer with the training result of the judgment network, wherein the fully connected layer is used to identify the probability that an input image of the generative adversarial network model belongs to a melanoma image;
Step S40: in the process of optimizing the model parameters, detecting whether the similarities between consecutively generated image prediction results and the corresponding real image results are all greater than or equal to a preset similarity;
Step S50: if so, determining that the training of the generative adversarial network model is completed, and taking the trained generative adversarial network model as the melanoma image recognition model;
Step S60: when a target image is received, inputting the target image into the melanoma image recognition model for analysis, so as to obtain the probability that the target image belongs to a melanoma image.
In this embodiment, the terminal of the embodiment may be a computer device or a neural network-based melanoma image recognition device.
As described in step S10: the terminal uses artificial intelligence and image recognition technology to construct a generative adversarial network model (GAN, Generative Adversarial Networks) based on a fully convolutional neural network (FCN, Fully Convolutional Networks for Semantic Segmentation).
The constructed generative adversarial network model includes an image segmentation network and a judgment network (also called the adversarial network), both of which are built on the original fully convolutional neural network. In this way, the segmentation effect of the image segmentation network can be enhanced.
Further, when constructing the judgment network, the terminal also replaces the deconvolution layer in the fully convolutional neural network corresponding to the judgment network with a fully connected layer, so that the judgment network has a classification function; the trained judgment network can then identify the probability that an input image belongs to a melanoma image.
As described in step S20: optionally, the melanoma image samples may come from clinically collected melanoma images (for example, melanoma skin cancer images) stored in hospital systems. Engineers annotate these melanoma images in advance to generate the melanoma image samples, and a certain number of samples are then input into the terminal. Each melanoma image sample is annotated with its corresponding real image result, which includes the probability that the image belongs to a melanoma image; when the real image result directly indicates that the image is a melanoma image, the corresponding probability is 100%.
Of course, considering that the number of available melanoma images may be small, some non-melanoma images (i.e., images of normal biological organs) may also be used to construct melanoma image samples; when annotating their real image results, the probability that the image belongs to a melanoma image is simply marked as 0.
Optionally, after acquiring a plurality of melanoma image samples, the terminal inputs them one by one into the generative adversarial network model for training.
Optionally, during the training of the generative adversarial network model, for each melanoma image sample, the image in the sample is extracted and input into the image segmentation network, which performs image segmentation on the melanoma image. It should be noted that only the image itself needs to be input into the image segmentation network, without the real image result annotated on the sample; alternatively, the image segmentation network may be set to ignore the real image result originally annotated in the sample.
Based on the number of channels of the input layer of its fully convolutional neural network, the image segmentation network converts the input image x_i into a feature map for each channel and collects the feature maps of all channels into a multi-channel feature map X ∈ R^(C×H×W), where R is the set of real numbers, C is the number of channels (optionally the red, green, and blue primary-color channels), H is the image height, and W is the image width, i.e., H×W is the image size (optionally 400×400). The image segmentation network then extracts spatial structure information and semantic information of the image from the multi-channel feature map through multiple layers of residual convolution modules and average pooling operations, obtaining the feature maps to be output, and in the final deconvolution layer of its fully convolutional neural network deconvolves these feature maps back to the original image size (i.e., H×W), yielding the image prediction result produced by analyzing the melanoma image sample.
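As a rough illustration of the shapes involved, the following sketch traces how a 3×400×400 input is downsampled into feature maps and then either deconvolved back to a full-resolution two-class mask (segmentation network) or collapsed by the fully connected layer into a single scalar probability (judgment network). The channel doubling and the four pooling stages are illustrative assumptions, not values fixed by the application:

```python
def downsample(shape, stages):
    """Trace feature-map shapes through the encoder: each residual-conv +
    average-pooling stage halves the spatial size (the channel doubling is
    an arbitrary illustrative choice)."""
    c, h, w = shape
    for _ in range(stages):
        c, h, w = c * 2, h // 2, w // 2
    return (c, h, w)

def segmentation_head(feature_shape, out_hw=(400, 400), classes=2):
    """The segmentation network's final deconvolution layer upsamples the
    features back to the original H x W size, giving a per-pixel class map."""
    return (classes,) + out_hw

def critic_head(feature_shape):
    """In the judgment network that deconvolution is replaced by a fully
    connected layer, collapsing the features to one scalar probability."""
    return (1,)

features = downsample((3, 400, 400), stages=4)
print(features)                     # (48, 25, 25)
print(segmentation_head(features))  # (2, 400, 400)
print(critic_head(features))        # (1,)
```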
It should be noted that spatial information may be, for example, the relative size and positional relationship of objects in the image, while semantic information is the meaning expressed by the image, which may have several levels; for example, an image may be described as a brain CT (Computed Tomography) image that contains a tumor.
The image prediction result output by the image segmentation network is then associated with the melanoma image sample currently being trained. It should be understood that the image prediction result includes the predicted probability that the image belongs to a melanoma image (which may be denoted as the first probability).
As described in step S30: the generative adversarial network model inputs the original image in the currently trained melanoma image sample, together with the real image result and the image prediction result corresponding to that sample, into the judgment network for further adversarial training. Adversarial training is a training method based on adversarial learning; the goal of adversarial learning can be viewed as making the output that the model produces for an input as consistent as possible with the real result.
The input of the judgment network covers two cases: (1) the original image plus the image prediction result, and (2) the original image plus the real image result; the training target is 0 in the first case and 1 in the second.
To make it convenient for the judgment network to read the image prediction result and the real image result, both can be represented as mask labels (also called feature maps) of height H, width W, and 2 channels; the number of channels is 2 because a melanoma skin cancer image is divided into two classes, foreground and background, where the foreground is the set of pixels in the lesion region. The mask label corresponding to the image prediction result is denoted s(x_i) (the predicted mask), and the mask label corresponding to the real image result is denoted y_i (the real mask).
The original image (i.e., the melanoma image) x_i may have height H = 400, width W = 400, and 3 channels (e.g., the red, green, and blue primary-color channels). Each mask label is also associated with the probability that the image belongs to a melanoma skin cancer image.
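The two-channel mask-label representation described above (channel 0 = background, channel 1 = foreground, i.e., the lesion region) can be sketched as:

```python
def to_mask_label(binary_mask):
    """Encode a lesion mask (1 = lesion pixel, 0 = background) as the
    2-channel mask label described above: channel 0 holds the background
    and channel 1 holds the foreground (lesion region). Returns nested
    lists with shape (2, H, W)."""
    fg = [[float(p) for p in row] for row in binary_mask]
    bg = [[1.0 - p for p in row] for row in fg]
    return [bg, fg]
```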
Optionally, in the judgment network, adversarial training mainly uses a joint optimization formula. Reconstructed here from the description of its terms (the equation in the original publication is embedded as an image), it takes the form:

min_S max_D  Σ_{i=1}^{N} [ J_s(s(x_i), y_i) − λ ( J_d(d(x_i, y_i), 1) + J_d(d(x_i, s(x_i)), 0) ) ]

where S represents the class probability predicted by the image segmentation network at each pixel, with the class probabilities normalized to 1 at each pixel; D is d(x_i, y), the scalar probability estimate that y comes from y_i (the real mask) rather than from s(x_i) (the predicted mask); x_i is the original image; J_s is the multi-class cross-entropy loss averaged over all pixels of the predicted mask; J_d is the binary logistic loss produced by the judgment network's prediction; and λ is a tuning parameter that balances the pixel loss against the adversarial loss by alternately optimizing the respective loss functions of S and D.
In this way, the joint optimization formula performs joint optimization by minimizing over S while maximizing over D; through semi-supervised learning, it raises the similarity between the image prediction result and the real image result, so that the image prediction result approaches the real image result (or reaches it).
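A minimal pure-Python sketch of the loss terms named above, under the alternating scheme: the critic minimizes its binary logistic losses, while the segmentation network minimizes the pixel loss minus the λ-weighted adversarial term (so that it tries to fool the critic). The masks are nested lists of shape (classes, H, W), and λ = 0.1 is an illustrative value:

```python
import math

def multiclass_ce(pred_mask, true_mask):
    """J_s: multi-class cross-entropy averaged over all mask pixels."""
    classes, h, w = len(pred_mask), len(pred_mask[0]), len(pred_mask[0][0])
    total = 0.0
    for c in range(classes):
        for i in range(h):
            for j in range(w):
                total -= true_mask[c][i][j] * math.log(pred_mask[c][i][j] + 1e-12)
    return total / (h * w)

def binary_logistic(d, target):
    """J_d: binary logistic loss of the critic's scalar estimate d."""
    return -(target * math.log(d + 1e-12) + (1 - target) * math.log(1 - d + 1e-12))

def critic_loss(d_real, d_fake):
    # The critic is trained toward 1 on (image, real mask) pairs and
    # toward 0 on (image, predicted mask) pairs.
    return binary_logistic(d_real, 1) + binary_logistic(d_fake, 0)

def segmenter_loss(pred_mask, true_mask, d_fake, lam=0.1):
    # Minimizing -lambda * J_d(d_fake, 0) pushes the segmenter to fool the
    # critic, matching the minus sign of the min-max objective above.
    return multiclass_ce(pred_mask, true_mask) - lam * binary_logistic(d_fake, 0)
```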
The result of this jointly optimized adversarial training (i.e., the training result) is used to update the model parameters of the generative adversarial network model, so the whole training process of the generative adversarial network model is essentially a process of alternately training the image segmentation network and the judgment network it contains.
In this way, through this adversarial training process and the learning of the critic network, the global information can be effectively transferred back to the image segmentation network to strengthen the segmentation effect.
Further, after the training result is obtained from the adversarial training based on the judgment network, the training result can also be input into the fully connected layer of the judgment network for classification, so as to regenerate the probability that the current input image of the generative adversarial network model (i.e., the image in the melanoma image sample) belongs to a melanoma image (which may be denoted as the second probability, equivalent to the result of optimizing the first probability). This process is at the same time a process of training and optimizing the fully connected layer: by improving the fully connected layer's ability to recognize image features, the fully connected layer becomes better at classifying images and at identifying the probability that an input image of the generative adversarial network model belongs to a melanoma image.
Of course, the melanoma image samples acquired by the terminal may also be partly annotated with real image results and partly unannotated. During training, the generative adversarial network model inputs the unannotated samples into the image segmentation network to obtain image prediction results, and then inputs the real image results corresponding to the annotated samples, together with the image prediction results corresponding to the unannotated samples, into the judgment network for adversarial training. Although training takes longer this way, compared with training entirely on annotated samples it saves the time of manually preparing samples, and the finally trained model performs better.
As described in step S40: while the terminal uses the training results of the judgment network to optimize the model parameters corresponding to the judgment network and the image segmentation network, each time the terminal obtains an image prediction result from a melanoma image sample, it can check the similarity between that image prediction result and the real image result corresponding to the sample. The higher the similarity between the image prediction result and the real image result, the higher the confidence of the probability obtained when the adversarially trained result is used to predict whether the image belongs to a melanoma image.
Further, the terminal detects whether the similarity between the image prediction result and the real image result of each melanoma image sample is greater than or equal to a preset similarity, whose value may optionally range from 90% to 100%.
When the terminal detects that the similarity for the current melanoma image sample is greater than or equal to the preset similarity, a counter is incremented by one; when the terminal detects that the similarity for the current sample is less than the preset similarity, the counter is reset to zero and the generative adversarial network model is trained again on new melanoma image samples.
The terminal can then judge whether the training of the generative adversarial network model is complete by detecting whether the count of similarities at or above the preset similarity is greater than a preset number of times; the actual value of the preset number of times can be set according to actual needs, for example at least 3.
Thus, the terminal determines that the training of the generative adversarial network model is complete only when, for multiple melanoma image samples consecutively input into the model for training, the similarities between the image prediction results and the real image results are all greater than the preset similarity.
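The consecutive-similarity stopping rule described above can be sketched as follows (the 0.9 similarity and the count of 3 follow the ranges suggested in the text):

```python
def training_converged(similarity_stream, preset_similarity=0.9, preset_count=3):
    """Count consecutive samples whose prediction/ground-truth similarity
    reaches the preset similarity; reset the counter on any miss.
    Training is judged complete once the counter exceeds the preset
    number of consecutive hits."""
    count = 0
    for sim in similarity_stream:
        if sim >= preset_similarity:
            count += 1
        else:
            count = 0
        if count > preset_count:
            return True
    return False
```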
As described in step S50: when the terminal detects that the count is greater than the preset number of times, it determines that the similarities between the consecutively generated image prediction results and the real image results are all greater than or equal to the preset similarity, and further determines that the training of the generative adversarial network model is complete.
In this way, after the generative adversarial network model has been iteratively trained on multiple melanoma images, once the model converges, the distribution of the image prediction results coincides with the distribution of the real image results, i.e., the predictions agree with the real results, so that the confidence of the finally predicted probability that an input image belongs to a melanoma image reaches its optimum (i.e., the prediction is most credible).
Optionally, when the terminal determines that the training of the generative adversarial network model is complete, the trained model is taken as the melanoma image recognition model. The melanoma image recognition model can then be used to identify whether an input image is a melanoma image and to output the probability that the input image belongs to a melanoma image.
Moreover, a melanoma image recognition model trained on a generative adversarial network model can also enhance the overall consistency of image segmentation and extract the contours of lesion patches in melanoma images.
As described in step S60: the terminal is provided with an image acquisition device, or establishes a communication connection with one. After the melanoma image recognition model has been trained, the terminal can use the image acquisition device to collect the target image of the examinee in real time; the target image is the image to be recognized.
Optionally, after receiving the target image, the terminal takes it as the input image of the melanoma image recognition model and inputs it into the model for analysis.
While the melanoma image recognition model analyzes the target image, the image segmentation network in the model segments the target image into at least one organ region image (if multiple organs are shown in the target image, it can be segmented into multiple organ region images) and predicts an image prediction result for each organ region image. It should be understood that the principle by which the image segmentation network identifies target regions (i.e., organ regions) can be implemented with image recognition technology.
Then, the judgment network in the model further adjusts the image prediction results output by the image segmentation network and finally obtains the melanoma image prediction result corresponding to the target image (i.e., it optimizes the first probability output by the segmentation network to obtain the second probability), thereby obtaining and outputting the probability that the target image belongs to a melanoma image.
Further, the terminal acquires the melanoma image prediction result output by the melanoma image recognition model and associates it with the target image currently being recognized.
If the target image is divided into multiple organ regions, the terminal may also annotate, in the display area of each organ region in the target image, the melanoma image prediction result corresponding to that organ, i.e., the probability that each organ region image belongs to a melanoma image, thereby helping medical staff quickly identify melanoma images.
In this way, by outputting the probability that the target image belongs to a melanoma image, medical staff can quickly identify melanoma images on that basis without further examining normal images, which saves medical resources to a certain extent.
In one embodiment, the melanoma image recognition model is trained through a generative adversarial network model constructed on a fully convolutional neural network, so as to optimize the similarity between the model's predicted results for melanoma images and the real results; during adversarial training the model can learn rich similarity measures to distinguish real from fake data, which reduces the need to model an explicit pixel-level objective function and the number of samples required for training. Unlike traditional neural network models, which generally have to be trained on massive data samples, only a small number of melanoma image samples are needed, so that a melanoma image recognition model with high accuracy in recognizing melanoma images is obtained from a limited number of training samples.
In one embodiment, on the basis of the above embodiment, after the step of taking the trained generative adversarial network model as the melanoma image recognition model, the method further includes:
Step S70: storing the melanoma image recognition model in a blockchain network.
In this embodiment, the terminal establishes a communication connection with a blockchain network (Blockchain Network), which is a collection of nodes that incorporate new blocks into the blockchain by consensus.
Blockchain is a novel application of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks linked by cryptographic methods, in which each data block contains a batch of network transaction information used to verify the validity (tamper resistance) of that information and to generate the next block. A blockchain may include an underlying blockchain platform, a platform product service layer, and an application service layer.
The underlying blockchain platform may include processing modules such as user management, basic services, smart contracts, and operation monitoring. The user management module is responsible for the identity information management of all blockchain participants, including maintaining public/private key generation (account management), key management, and the correspondence between users' real identities and blockchain addresses (authority management); where authorized, it can also supervise and audit the transactions of certain real identities and provide risk-control rule configuration (risk-control auditing). The basic service module is deployed on all blockchain node devices to verify the validity of business requests and, after reaching consensus on valid requests, record them in storage; for a new business request, the basic service first performs interface adaptation for parsing and authentication (interface adaptation), then encrypts the business information through the consensus algorithm (consensus management), transmits it completely and consistently to the shared ledger after encryption (network communication), and records it in storage. The smart contract module is responsible for contract registration and issuance as well as contract triggering and execution: developers can define contract logic in a programming language and publish it on the blockchain (contract registration); according to the logic of the contract terms, a key or another event triggers execution to complete the contract logic; functions for contract upgrade and cancellation are also provided. The operation monitoring module is mainly responsible for deployment during product release, configuration modification, contract settings, and cloud adaptation, as well as visual output of real-time status during product operation, for example alarms, monitoring of network conditions, and monitoring of node device health.
Optionally, after obtaining the trained melanoma image recognition model, the terminal can upload the melanoma image recognition model to the blockchain network for storage.
After the terminal stores the trained melanoma image recognition model in the blockchain network, when the terminal collects a target image of the examinee with the image acquisition device, it can send the collected target image to the blockchain network.
Optionally, when any blockchain node of the blockchain network receives the target image sent by the terminal, it can analyze the target image with the stored melanoma image recognition model; after the analysis is completed, the model outputs the image prediction result corresponding to the target image.
After a blockchain node in the blockchain network obtains the classification result output by the melanoma image recognition model, it can feed the classification result back to the terminal; the terminal then takes the received image prediction result as the image prediction result corresponding to the target image, thereby completing the process of recognizing the target image.
In this way, not only are the storage security of the melanoma image recognition model improved and local storage space saved, but each hospital system can also conveniently obtain the melanoma image recognition model from the blockchain and quickly put it into practical use: a hospital system only needs to connect to any blockchain network node to obtain the same melanoma image recognition model, which is convenient and efficient.
In one embodiment, on the basis of the above embodiment, after the step of storing the melanoma image recognition model in the blockchain network, the method further includes:
Step S80: when it is detected that the melanoma image recognition model stored on the blockchain network has been updated, obtaining the model parameters corresponding to the updated melanoma image recognition model from the blockchain network;
Step S81: updating the locally stored melanoma image recognition model according to the obtained model parameters.
In this embodiment, when any hospital system detects that its local melanoma image recognition model has been updated, that hospital system can synchronize the updated model to the blockchain network (or upload only the updated part of the model parameters).
Optionally, when the terminal detects that the melanoma image recognition model stored on the blockchain network has been updated, and detects that the update operation on the blockchain network was not triggered by the local terminal, the terminal can obtain the model parameters corresponding to the updated melanoma image recognition model from the blockchain network.
Further, the terminal updates the locally stored melanoma image recognition model based on the obtained model parameters, so as to optimize the performance of the locally stored model.
In this way, storing the melanoma image recognition model in the blockchain network not only improves the storage security of the model and effectively saves local storage space, but also makes it possible to obtain, on this basis, more melanoma image samples uploaded by hospital systems for model updating, thereby improving the accuracy of the melanoma image recognition model in recognizing melanoma images.
In one embodiment, on the basis of the above embodiment, the step of inputting, when a target image is received, the target image into the melanoma image recognition model for analysis so as to obtain the probability that the target image belongs to a melanoma image includes:
Step S90: when the target image is received, detecting whether the image quality of the target image satisfies a preset condition;
Step S91: if so, inputting the target image into the melanoma image recognition model for analysis, so as to obtain the probability that the target image belongs to a melanoma image;
Step S92: if not, outputting prompt information, the prompt information being used to prompt re-collection of the target image.
In this embodiment, when the terminal receives the target image, before taking it as the input image of the melanoma image recognition model and inputting it into the model for analysis, it can first detect whether the image quality of the target image satisfies the preset condition.
Optionally, the image quality may be image brightness, image sharpness, etc., and the corresponding preset conditions may be a preset brightness, a preset sharpness, and the like; the specific values of preset conditions such as the preset brightness and the preset sharpness can be set according to actual needs and are not limited in this embodiment.
Optionally, the terminal may determine that the image quality of the target image satisfies the preset condition when it detects that the image brightness of the target image is greater than or equal to the preset brightness and/or that the image sharpness of the target image is greater than or equal to the preset sharpness; when the terminal detects that the image brightness of the target image is lower than the preset brightness, or that the image sharpness of the target image is less than the preset sharpness, it determines that the image quality of the target image does not satisfy the preset condition.
Optionally, the terminal may also detect whether the image quality of the target image satisfies the preset condition by detecting whether the target image contains an image of a human organ; if so, the preset condition is determined to be satisfied, and otherwise it is determined not to be satisfied.
Optionally, when the terminal detects that the image quality of the target image satisfies the preset condition, it takes the target image as the input image of the melanoma image recognition model and inputs it into the model for analysis, so that the model generates the probability that the target image belongs to a melanoma image.
Optionally, when the terminal detects that the image quality of the target image does not satisfy the preset condition, it outputs prompt information used to prompt the user to re-collect the target image of the examinee (i.e., the human organ image), for example by indicating on the display interface of the relevant detection instrument that the quality of the currently collected target image is abnormal and it needs to be re-collected.
In this way, not only is the situation avoided in which unqualified image quality affects the detection result of the target image, but when a target image has abnormal quality, timely feedback can be given that its collection quality is unqualified, thereby ensuring the stable quality of the collected target images.
In one embodiment, on the basis of the above embodiment, after the step of inputting, when a target image is received, the target image into the melanoma image recognition model for analysis so as to obtain the probability that the target image belongs to a melanoma image, the method further includes:
Step S100: detecting whether the probability that the target image belongs to a melanoma image is greater than a preset threshold;
Step S101: if so, outputting alarm information corresponding to the target image.
In this embodiment, when the terminal obtains the melanoma image prediction result corresponding to the target image, it detects on this basis whether the probability that the target image belongs to a melanoma image is greater than the preset threshold. The preset threshold is used to measure the degree to which the target image belongs to a melanoma image, so its specific value range can be set according to actual needs, for example a value between 70% and 99%.
Optionally, when the terminal detects that the probability that the target image belongs to a melanoma image is greater than the preset threshold, indicating a high likelihood that the target image is a melanoma image, the terminal generates alarm information corresponding to the target image according to the target image and its image prediction result, and outputs the alarm information to an associated device. The associated device may be the user device of the person from whom the target image was collected, or an associated device of the relevant medical staff.
It should be understood that when there are multiple organ region images in the target image, the terminal only needs to detect that the probability of at least one organ region image belonging to a melanoma image is greater than the preset threshold to determine that the probability of the target image belonging to a melanoma image is greater than the preset threshold.
In this way, the relevant personnel can be promptly reminded to pay attention to possible melanoma lesions in the body of the person from whom the target image was collected.
Optionally, when the terminal detects that the probability that the target image belongs to a melanoma image is less than or equal to the preset threshold, it marks the target image as a normal image.
In this way, medical staff no longer need to spend effort checking whether normal images are melanoma images.
In an embodiment, on the basis of the above embodiments, after the step of outputting the alarm information corresponding to the target image, the method further includes:
Step S110: upon receiving a confirmation response to the alarm information, taking the target image as a melanoma image sample, and updating the melanoma image recognition model based on the target image.
In this embodiment, when the terminal detects that the probability that the target image is a melanoma image is greater than the preset threshold, it generates alarm information corresponding to the target image and outputs the alarm information to the associated device of the relevant medical staff.
If the medical staff determine that the melanoma image recognition model's recognition of the target image is correct and confirm that the target image is indeed a melanoma image, they may send a confirmation response to the alarm information to the terminal via their associated device; if the medical staff determine that the recognition is wrong and confirm that the target image is not a melanoma image, they may send a negative response to the alarm information instead.
Optionally, upon receiving a confirmation response to the alarm information, the terminal may promote the target image to a melanoma image sample, promote the image prediction result corresponding to the target image to the true image result, and annotate the newly generated melanoma image sample based on that true image result.
Further, when the terminal detects that the melanoma image recognition model is idle, or that the number of newly generated melanoma image samples is greater than a preset number, it inputs the new melanoma image samples into the melanoma image recognition model to update the model iteratively. The preset number may be set as needed for the actual situation, which is not limited in this embodiment.
In this way, by using the verified target images as new melanoma image samples and updating the melanoma image recognition model on that basis, the accuracy with which the model recognizes melanoma images can be improved.
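The confirmation-driven retraining trigger above can be sketched as a sample buffer that fires when the model is idle or enough confirmed samples have accumulated. A minimal sketch; `SampleBuffer` and the batch size of 16 are illustrative assumptions, since the patent leaves the preset number open.

```python
class SampleBuffer:
    """Collects alarm-confirmed target images as new labelled samples
    (step S110) and reports when an incremental model update should run."""

    def __init__(self, min_batch=16):
        self.min_batch = min_batch   # the "preset number"; illustrative value
        self.pending = []

    def confirm(self, image_id, predicted_mask):
        # A confirmation response promotes the prediction to a true label.
        self.pending.append((image_id, predicted_mask))

    def should_update(self, model_idle=False):
        # Retrain when the model is idle OR enough new samples piled up,
        # but never with an empty batch.
        return (model_idle or len(self.pending) >= self.min_batch) and bool(self.pending)

    def drain(self):
        # Hand the accumulated samples to the training loop and reset.
        batch, self.pending = self.pending, []
        return batch
```

A negative response simply never calls `confirm`, so mis-flagged images do not pollute the training set.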
Referring to FIG. 2, an embodiment of the present application further provides a neural-network-based melanoma image recognition apparatus 10, including:
a model construction module 11, configured to construct an image segmentation network and a critic network of a generative adversarial network model, each based on a fully convolutional neural network, and replace the deconvolution layer in the fully convolutional neural network corresponding to the critic network with a fully connected layer;
a first training module 12, configured to obtain multiple melanoma image samples and input the melanoma image samples into the image segmentation network for training, so as to generate image prediction results corresponding to the melanoma image samples, wherein the melanoma image samples are annotated with true image results;
a second training module 13, configured to input the image prediction results and the true image results corresponding to the melanoma images into the critic network for adversarial training, so as to optimize the model parameters corresponding to the image segmentation network and the critic network, and to optimize the fully connected layer using the training result of the critic network, wherein the fully connected layer is used to recognize the probability that an input image of the generative adversarial network model is a melanoma image;
a detection module 14, configured to detect, during the optimization of the model parameters, whether the similarities between consecutively generated image prediction results and the true image results are all greater than or equal to a preset similarity;
a determination module 15, configured to, if so, determine that training of the generative adversarial network model is complete, and take the trained generative adversarial network model as a melanoma image recognition model;
an analysis module 16, configured to, upon receiving a target image, input the target image into the melanoma image recognition model for analysis to obtain the probability that the target image is a melanoma image.
In an embodiment, on the basis of the above embodiments, the neural-network-based melanoma image recognition apparatus further includes a storage module configured to store the melanoma image recognition model on a blockchain network.
Further, the neural-network-based melanoma image recognition apparatus further includes:
an acquisition module, configured to, upon detecting that the melanoma image recognition model stored on the blockchain network has been updated, obtain from the blockchain network the model parameters corresponding to the updated melanoma image recognition model;
an update module, configured to update the locally stored melanoma image recognition model according to the obtained model parameters.
Further, the neural-network-based melanoma image recognition apparatus further includes:
a judgment module, configured to, upon receiving a target image, detect whether the image quality of the target image satisfies a preset requirement;
the analysis module is further configured to, if so, input the target image into the melanoma image recognition model for analysis to obtain the probability that the target image is a melanoma image;
a prompt module, configured to, if not, output prompt information, where the prompt information is used to prompt re-acquisition of the target image.
Further, the judgment module is further configured to detect, according to the image prediction result corresponding to the target image, whether the probability that the target image is a melanoma image is greater than a preset threshold;
the determination module is further configured to, if so, output alarm information corresponding to the target image.
Further, the update module is further configured to, upon receiving a confirmation response to the alarm information, take the target image as a melanoma image sample and update the melanoma image recognition model based on the target image.
Referring to FIG. 3, an embodiment of the present application further provides a computer device. The computer device may be a server, and its internal structure may be as shown in FIG. 3. The computer device includes a processor, a memory, a network interface, and a database connected via a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store data related to the neural-network-based melanoma image recognition method. The network interface of the computer device is used to communicate with external terminals over a network connection. When executed by the processor, the computer program implements a neural-network-based melanoma image recognition method.
Those skilled in the art will understand that the structure shown in FIG. 3 is merely a block diagram of part of the structure related to the solution of the present application, and does not constitute a limitation on the computer device to which the solution of the present application is applied.
In addition, the present application further provides a computer-readable storage medium. The computer-readable storage medium includes a computer program which, when executed by a processor, implements the steps of the neural-network-based melanoma image recognition method described in the above embodiments. It should be understood that the computer-readable storage medium in this embodiment may be non-volatile or volatile.
In summary, the neural-network-based melanoma image recognition method, apparatus, computer device, and storage medium provided in the embodiments of the present application train the melanoma image recognition model through a generative adversarial network model built on fully convolutional neural networks, so as to optimize the similarity between the prediction results the model generates for melanoma images and the true results. During adversarial training the model can learn rich similarity measures to distinguish real from fake data, which reduces the need to model an explicit pixel-level objective function and in turn reduces the number of samples required to train the model, so that a melanoma image recognition model with high recognition accuracy can be obtained from a limited number of training samples.
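A common way to instantiate the "joint optimization formula" referred to in this adversarial segmentation setup is a min-max objective over the segmentation network and the critic. This is a sketch under assumed notation (the patent does not state the formula explicitly; the weight $\lambda$ and the loss symbols below are illustrative, following standard adversarial-segmentation formulations):

```latex
% S: image segmentation network, D: critic network,
% (x_n, y_n): melanoma image sample and its annotated true result,
% \ell_seg: per-pixel segmentation loss, \ell_bce: binary cross-entropy,
% \lambda: weight of the adversarial term (all notation assumed).
\min_{\theta_S} \max_{\theta_D} \;
\sum_{n=1}^{N} \Big[
    \ell_{\mathrm{seg}}\big(S(x_n),\, y_n\big)
    - \lambda \, \ell_{\mathrm{bce}}\big(D(x_n, y_n),\, 1\big)
    - \lambda \, \ell_{\mathrm{bce}}\big(D(x_n, S(x_n)),\, 0\big)
\Big]
```

Training alternates updates of $\theta_D$ (which learns to separate true results from predictions) and $\theta_S$ (which learns to fool the critic); the stopping rule above corresponds to the predictions becoming consistently similar enough to the true results.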
Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments may be implemented by instructing the relevant hardware through a computer program. The computer program may be stored in a non-volatile computer-readable storage medium, and when executed may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or other media provided in the present application and used in the embodiments may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, apparatus, article, or method that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, apparatus, article, or method. Without further limitation, an element defined by the phrase "including a/an ..." does not exclude the presence of additional identical elements in the process, apparatus, article, or method that includes that element.
The above are merely preferred embodiments of the present application and do not thereby limit the patent scope of the present application. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present application.

Claims (20)

  1. A neural-network-based melanoma image recognition method, comprising:
    constructing an image segmentation network and a critic network of a generative adversarial network model, each based on a fully convolutional neural network, and replacing the deconvolution layer in the fully convolutional neural network corresponding to the critic network with a fully connected layer;
    obtaining multiple melanoma image samples, and inputting the melanoma image samples into the image segmentation network for training to generate image prediction results corresponding to the melanoma image samples, wherein the melanoma image samples are annotated with true image results;
    inputting the image prediction results and the true image results corresponding to the melanoma images into the critic network for adversarial training to optimize the model parameters corresponding to the image segmentation network and the critic network, and optimizing the fully connected layer using the training result of the critic network, wherein the fully connected layer is used to recognize the probability that an input image of the generative adversarial network model is a melanoma image;
    during the optimization of the model parameters, detecting whether the similarities between consecutively generated image prediction results and the true image results are all greater than or equal to a preset similarity;
    if so, determining that training of the generative adversarial network model is complete, and taking the trained generative adversarial network model as a melanoma image recognition model;
    upon receiving a target image, inputting the target image into the melanoma image recognition model for analysis to obtain the probability that the target image is a melanoma image.
  2. The neural-network-based melanoma image recognition method of claim 1, wherein the step of inputting the image prediction results and the true image results corresponding to the melanoma images into the critic network for adversarial training to optimize the model parameters corresponding to the image segmentation network and the critic network comprises:
    inputting the image prediction results and the true image results corresponding to the melanoma images into the critic network, and performing adversarial training on the image prediction results and the true image results using a joint optimization formula, so as to increase the similarity between the image prediction results and the true image results;
    upon obtaining the training result corresponding to the adversarial training, optimizing the model parameters corresponding to the image segmentation network and the critic network according to the training result.
  3. The neural-network-based melanoma image recognition method of claim 1, wherein after the step of taking the trained generative adversarial network model as the melanoma image recognition model, the method further comprises:
    storing the melanoma image recognition model on a blockchain network.
  4. The neural-network-based melanoma image recognition method of claim 3, wherein after the step of storing the melanoma image recognition model on the blockchain network, the method further comprises:
    upon detecting that the melanoma image recognition model stored on the blockchain network has been updated, obtaining from the blockchain network the model parameters corresponding to the updated melanoma image recognition model;
    updating the locally stored melanoma image recognition model according to the obtained model parameters.
  5. The neural-network-based melanoma image recognition method of claim 1, wherein the step of, upon receiving a target image, inputting the target image into the melanoma image recognition model for analysis to obtain the probability that the target image is a melanoma image comprises:
    upon receiving a target image, detecting whether the image quality of the target image satisfies a preset condition;
    if so, inputting the target image into the melanoma image recognition model for analysis to obtain the probability that the target image is a melanoma image;
    if not, outputting prompt information, wherein the prompt information is used to prompt re-acquisition of the target image.
  6. The neural-network-based melanoma image recognition method of claim 1, wherein after the step of, upon receiving a target image, inputting the target image into the melanoma image recognition model for analysis to obtain the probability that the target image is a melanoma image, the method further comprises:
    detecting whether the probability that the target image is a melanoma image is greater than a preset threshold;
    if so, outputting alarm information corresponding to the target image.
  7. The neural-network-based melanoma image recognition method of claim 6, wherein after the step of outputting the alarm information corresponding to the target image, the method further comprises:
    upon receiving a confirmation response to the alarm information, taking the target image as a melanoma image sample, and updating the melanoma image recognition model based on the target image.
  8. A neural-network-based melanoma image recognition apparatus, comprising:
    a model construction module, configured to construct an image segmentation network and a critic network of a generative adversarial network model, each based on a fully convolutional neural network, and replace the deconvolution layer in the fully convolutional neural network corresponding to the critic network with a fully connected layer;
    a first training module, configured to obtain multiple melanoma image samples and input the melanoma image samples into the image segmentation network for training, so as to generate image prediction results corresponding to the melanoma image samples, wherein the melanoma image samples are annotated with true image results;
    a second training module, configured to input the image prediction results and the true image results corresponding to the melanoma images into the critic network for adversarial training, so as to optimize the model parameters corresponding to the image segmentation network and the critic network, and to optimize the fully connected layer using the training result of the critic network, wherein the fully connected layer is used to recognize the probability that an input image of the generative adversarial network model is a melanoma image;
    a detection module, configured to detect, during the optimization of the model parameters, whether the similarities between consecutively generated image prediction results and the true image results are all greater than or equal to a preset similarity;
    a determination module, configured to, if so, determine that training of the generative adversarial network model is complete, and take the trained generative adversarial network model as a melanoma image recognition model;
    an analysis module, configured to, upon receiving a target image, input the target image into the melanoma image recognition model for analysis to obtain the probability that the target image is a melanoma image.
  9. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements a neural-network-based melanoma image recognition method;
    wherein the steps of the neural-network-based melanoma image recognition method comprise:
    constructing an image segmentation network and a critic network of a generative adversarial network model, each based on a fully convolutional neural network, and replacing the deconvolution layer in the fully convolutional neural network corresponding to the critic network with a fully connected layer;
    obtaining multiple melanoma image samples, and inputting the melanoma image samples into the image segmentation network for training to generate image prediction results corresponding to the melanoma image samples, wherein the melanoma image samples are annotated with true image results;
    inputting the image prediction results and the true image results corresponding to the melanoma images into the critic network for adversarial training to optimize the model parameters corresponding to the image segmentation network and the critic network, and optimizing the fully connected layer using the training result of the critic network, wherein the fully connected layer is used to recognize the probability that an input image of the generative adversarial network model is a melanoma image;
    during the optimization of the model parameters, detecting whether the similarities between consecutively generated image prediction results and the true image results are all greater than or equal to a preset similarity;
    if so, determining that training of the generative adversarial network model is complete, and taking the trained generative adversarial network model as a melanoma image recognition model;
    upon receiving a target image, inputting the target image into the melanoma image recognition model for analysis to obtain the probability that the target image is a melanoma image.
  10. The computer device of claim 9, wherein the step of inputting the image prediction results and the true image results corresponding to the melanoma images into the critic network for adversarial training to optimize the model parameters corresponding to the image segmentation network and the critic network comprises:
    inputting the image prediction results and the true image results corresponding to the melanoma images into the critic network, and performing adversarial training on the image prediction results and the true image results using a joint optimization formula, so as to increase the similarity between the image prediction results and the true image results;
    upon obtaining the training result corresponding to the adversarial training, optimizing the model parameters corresponding to the image segmentation network and the critic network according to the training result.
  11. The computer device of claim 9, wherein after the step of taking the trained generative adversarial network model as the melanoma image recognition model, the method further comprises:
    storing the melanoma image recognition model on a blockchain network.
  12. The computer device of claim 11, wherein after the step of storing the melanoma image recognition model on the blockchain network, the method further comprises:
    upon detecting that the melanoma image recognition model stored on the blockchain network has been updated, obtaining from the blockchain network the model parameters corresponding to the updated melanoma image recognition model;
    updating the locally stored melanoma image recognition model according to the obtained model parameters.
  13. The computer device of claim 9, wherein the step of, upon receiving a target image, inputting the target image into the melanoma image recognition model for analysis to obtain the probability that the target image is a melanoma image comprises:
    upon receiving a target image, detecting whether the image quality of the target image satisfies a preset condition;
    if so, inputting the target image into the melanoma image recognition model for analysis to obtain the probability that the target image is a melanoma image;
    if not, outputting prompt information, wherein the prompt information is used to prompt re-acquisition of the target image.
  14. The computer device of claim 9, wherein after the step of, upon receiving a target image, inputting the target image into the melanoma image recognition model for analysis to obtain the probability that the target image is a melanoma image, the method further comprises:
    detecting whether the probability that the target image is a melanoma image is greater than a preset threshold;
    if so, outputting alarm information corresponding to the target image.
  15. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements a neural-network-based melanoma image recognition method;
    wherein the steps of the neural-network-based melanoma image recognition method comprise:
    constructing an image segmentation network and a critic network of a generative adversarial network model, each based on a fully convolutional neural network, and replacing the deconvolution layer in the fully convolutional neural network corresponding to the critic network with a fully connected layer;
    obtaining multiple melanoma image samples, and inputting the melanoma image samples into the image segmentation network for training to generate image prediction results corresponding to the melanoma image samples, wherein the melanoma image samples are annotated with true image results;
    inputting the image prediction results and the true image results corresponding to the melanoma images into the critic network for adversarial training to optimize the model parameters corresponding to the image segmentation network and the critic network, and optimizing the fully connected layer using the training result of the critic network, wherein the fully connected layer is used to recognize the probability that an input image of the generative adversarial network model is a melanoma image;
    during the optimization of the model parameters, detecting whether the similarities between consecutively generated image prediction results and the true image results are all greater than or equal to a preset similarity;
    if so, determining that training of the generative adversarial network model is complete, and taking the trained generative adversarial network model as a melanoma image recognition model;
    upon receiving a target image, inputting the target image into the melanoma image recognition model for analysis to obtain the probability that the target image is a melanoma image.
  16. The computer-readable storage medium of claim 15, wherein the step of inputting the image prediction results and the true image results corresponding to the melanoma images into the critic network for adversarial training to optimize the model parameters corresponding to the image segmentation network and the critic network comprises:
    inputting the image prediction results and the true image results corresponding to the melanoma images into the critic network, and performing adversarial training on the image prediction results and the true image results using a joint optimization formula, so as to increase the similarity between the image prediction results and the true image results;
    upon obtaining the training result corresponding to the adversarial training, optimizing the model parameters corresponding to the image segmentation network and the critic network according to the training result.
  17. The computer-readable storage medium of claim 15, wherein after the step of taking the trained generative adversarial network model as the melanoma image recognition model, the method further comprises:
    storing the melanoma image recognition model on a blockchain network.
  18. The computer-readable storage medium of claim 17, wherein after the step of storing the melanoma image recognition model on the blockchain network, the method further comprises:
    upon detecting that the melanoma image recognition model stored on the blockchain network has been updated, obtaining from the blockchain network the model parameters corresponding to the updated melanoma image recognition model;
    updating the locally stored melanoma image recognition model according to the obtained model parameters.
  19. The computer-readable storage medium of claim 15, wherein the step of, upon receiving a target image, inputting the target image into the melanoma image recognition model for analysis to obtain the probability that the target image is a melanoma image comprises:
    upon receiving a target image, detecting whether the image quality of the target image satisfies a preset condition;
    if so, inputting the target image into the melanoma image recognition model for analysis to obtain the probability that the target image is a melanoma image;
    if not, outputting prompt information, wherein the prompt information is used to prompt re-acquisition of the target image.
  20. The computer-readable storage medium of claim 15, wherein after the step of, upon receiving a target image, inputting the target image into the melanoma image recognition model for analysis to obtain the probability that the target image is a melanoma image, the method further comprises:
    detecting whether the probability that the target image is a melanoma image is greater than a preset threshold;
    if so, outputting alarm information corresponding to the target image.
PCT/CN2021/084535 2021-02-25 2021-03-31 Melanoma image recognition method and apparatus, computer device, and storage medium WO2022178946A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110212289.3A CN112950569B (zh) 2021-02-25 2021-02-25 Melanoma image recognition method and apparatus, computer device, and storage medium
CN202110212289.3 2021-02-25

Publications (1)

Publication Number Publication Date
WO2022178946A1 true WO2022178946A1 (zh) 2022-09-01

Family

ID=76246208

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/084535 WO2022178946A1 (zh) 2021-02-25 2021-03-31 Melanoma image recognition method and apparatus, computer device, and storage medium

Country Status (2)

Country Link
CN (1) CN112950569B (zh)
WO (1) WO2022178946A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379716B (zh) * 2021-06-24 2023-12-29 厦门美图宜肤科技有限公司 Pigmented spot prediction method, apparatus, device, and storage medium
CN114399710A (zh) * 2022-01-06 2022-04-26 昇辉控股有限公司 Identification mark detection method and system based on image segmentation, and readable storage medium
CN114451870A (zh) * 2022-04-12 2022-05-10 中南大学湘雅医院 Pigmented nevus malignant transformation risk monitoring system
CN116091874B (zh) * 2023-04-10 2023-07-18 成都数之联科技股份有限公司 Image verification method, training method, apparatus, medium, device, and program product

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110197716A (zh) * 2019-05-20 2019-09-03 广东技术师范大学 医学影像的处理方法、装置及计算机可读存储介质
CN111047594A (zh) * 2019-11-06 2020-04-21 安徽医科大学 肿瘤mri弱监督学习分析建模方法及其模型
WO2020120238A1 (en) * 2018-12-12 2020-06-18 Koninklijke Philips N.V. System and method for providing stroke lesion segmentation using conditional generative adversarial networks
CN111797976A (zh) * 2020-06-30 2020-10-20 北京灵汐科技有限公司 神经网络的训练方法、图像识别方法、装置、设备及介质
CN112132197A (zh) * 2020-09-15 2020-12-25 腾讯科技(深圳)有限公司 模型训练、图像处理方法、装置、计算机设备和存储介质

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110503654B (zh) * 2019-08-01 2022-04-26 中国科学院深圳先进技术研究院 一种基于生成对抗网络的医学图像分割方法、系统及电子设备

Also Published As

Publication number Publication date
CN112950569A (zh) 2021-06-11
CN112950569B (zh) 2023-07-25


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21927380

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21927380

Country of ref document: EP

Kind code of ref document: A1