CN114387227A - Nodule type prediction method and device, storage medium and electronic equipment - Google Patents

Nodule type prediction method and device, storage medium and electronic equipment Download PDF

Info

Publication number
CN114387227A
Authority
CN
China
Prior art keywords
nodule
image
predicted
probability
medical image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111593282.7A
Other languages
Chinese (zh)
Inventor
荆怡
蔡巍
张霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Neusoft Intelligent Medical Technology Research Institute Co Ltd
Original Assignee
Shenyang Neusoft Intelligent Medical Technology Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Neusoft Intelligent Medical Technology Research Institute Co Ltd filed Critical Shenyang Neusoft Intelligent Medical Technology Research Institute Co Ltd
Priority to CN202111593282.7A
Publication of CN114387227A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20076Probabilistic image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to a nodule type prediction method and apparatus, a storage medium and an electronic device, which are used to improve the accuracy of nodule type prediction. The method comprises the following steps: acquiring at least two medical images corresponding to a nodule to be predicted; predicting each medical image respectively to obtain the probability of the nodule to be predicted corresponding to each medical image, wherein the probability under one medical image represents the likelihood that the nodule to be predicted is predicted to be a preset type of nodule based on the medical image; determining the target probability that the nodule to be predicted belongs to the preset type of nodule according to the probability of the nodule to be predicted corresponding to each medical image; and determining the nodule type corresponding to the nodule to be predicted according to the target probability.

Description

Nodule type prediction method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, and in particular, to a method and an apparatus for predicting a nodule type, a storage medium, and an electronic device.
Background
With the continuous development and maturation of AI (Artificial Intelligence) technology, AI applications are becoming increasingly widespread. In the related art, a medical image may be processed by AI to predict the type of nodule present in the medical image. However, such prediction in the related art suffers from poor accuracy.
Disclosure of Invention
The present disclosure provides a nodule type prediction method, apparatus, storage medium, and electronic device, so as to improve accuracy of nodule type prediction.
To achieve the above object, in a first aspect, the present disclosure provides a nodule type prediction method, including:
acquiring at least two medical images corresponding to a nodule to be predicted;
predicting each medical image respectively to obtain the probability of the nodule to be predicted corresponding to each medical image, wherein the probability under one medical image represents the possibility that the nodule to be predicted is predicted to be a preset type nodule based on the medical image;
determining the target probability of the nodule to be predicted belonging to the preset type of nodule according to the probability of the nodule to be predicted corresponding to each medical image;
and determining the nodule type corresponding to the nodule to be predicted according to the target probability.
Optionally, the predicting each medical image respectively to obtain the probability that the nodule to be predicted corresponds to each medical image includes:
and performing probability prediction on the corresponding medical images through the probability prediction model corresponding to each medical image to obtain the probability of the nodule to be predicted corresponding to each medical image.
Optionally, the method for training the probabilistic predictive model corresponding to each medical image includes:
acquiring a sample data set corresponding to each medical image, wherein the sample data set under one medical image comprises a plurality of medical images, and each medical image carries a nodule type label;
and training the corresponding probability prediction model to be trained through the sample data set corresponding to each medical image to obtain the probability prediction model corresponding to each medical image.
Optionally, the determining, according to the probability that the nodule to be predicted corresponds to each medical image, a target probability that the nodule to be predicted belongs to the preset type of nodule includes:
and based on the target weight, carrying out weighted summation on the probability of the nodule to be predicted under each medical image to obtain the target probability of the nodule to be predicted belonging to the preset type nodule.
Optionally, the method further comprises:
acquiring values of preset performance evaluation indexes respectively corresponding to the probability prediction models corresponding to each medical image;
and obtaining the target weight based on the ratio of the value of each preset performance evaluation index to the sum of the values of each preset performance evaluation index.
Optionally, the at least two medical images include at least two of an elastic ultrasound image, a conventional ultrasound image, a doppler image, and a fused image, and the fused image is formed by stitching data of at least two of the elastic ultrasound image, the conventional ultrasound image, and the doppler image.
Optionally, the obtaining of the elastic ultrasound image corresponding to the nodule to be predicted and the ultrasound image corresponding to the nodule to be predicted includes:
acquiring an original ultrasonic image obtained by scanning a probe, wherein the original ultrasonic image comprises an original elastic ultrasonic image and an original traditional ultrasonic image;
carrying out color region identification on the original elastic ultrasonic image to obtain an elastic ultrasonic image corresponding to the nodule to be predicted;
and determining the image of the elastic ultrasonic image corresponding to the nodule to be predicted in the original traditional ultrasonic image, which corresponds to the same coordinate region, as the traditional ultrasonic image corresponding to the nodule to be predicted.
Optionally, the performing color region identification on the original elastic ultrasound image to obtain an elastic ultrasound image corresponding to the nodule to be predicted includes:
obtaining the variance of R, G, B values included by each pixel point in the original elastic ultrasonic image;
carrying out contrast stretching treatment on the variance of the R, G, B values included by each pixel point to obtain the stretching variance corresponding to each pixel point;
performing binarization processing on the stretching variances corresponding to the pixel points to obtain a binarization image corresponding to the original elastic ultrasonic image;
performing feature enhancement processing on the binary image to obtain an enhanced binary image;
and determining the circumscribed rectangle of the maximum connected domain in the enhanced binary image as the elastic ultrasonic image corresponding to the nodule to be predicted.
In order to achieve the above object, in a second aspect, the present disclosure provides a nodule type predicting apparatus, the apparatus including:
the medical image acquisition module is used for acquiring at least two medical images corresponding to the nodule to be predicted;
the probability prediction module is used for predicting each medical image respectively to obtain the probability of the nodule to be predicted corresponding to each medical image, and the probability under one medical image represents the possibility that the nodule to be predicted is predicted to be a preset type nodule based on the medical image;
the target probability determination module is used for determining the target probability of the nodule to be predicted belonging to the preset type of nodule according to the probability of the nodule to be predicted corresponding to each medical image;
and the nodule type determining module is used for determining the nodule type corresponding to the nodule to be predicted according to the target probability.
Optionally, the probability prediction module is further configured to perform probability prediction on the corresponding medical image through a probability prediction model corresponding to each medical image, so as to obtain a probability that the nodule to be predicted corresponds to each medical image.
Optionally, the apparatus further includes a training module, configured to obtain a sample data set corresponding to each medical image, where the sample data set in one medical image includes a plurality of the medical images, and each of the medical images carries a nodule type label; and training the corresponding probability prediction model to be trained through the sample data set corresponding to each medical image to obtain the probability prediction model corresponding to each medical image.
Optionally, the target probability determination module is further configured to perform weighted summation on the probability of the nodule to be predicted corresponding to each medical image based on the target weight, so as to obtain a target probability that the nodule to be predicted belongs to the preset type of nodule.
Optionally, the apparatus further includes a target weight determining module, configured to obtain values of preset performance evaluation indexes respectively corresponding to the probability prediction models corresponding to each medical image; and obtaining the target weight based on the ratio of the value of each preset performance evaluation index to the sum of the values of each preset performance evaluation index.
Optionally, the probability prediction model to be trained is a fully convolutional neural network model.
Optionally, the at least two medical images include at least two of an elastic ultrasound image, a conventional ultrasound image, a doppler image, and a fused image, and the fused image is formed by stitching data of at least two of the elastic ultrasound image, the conventional ultrasound image, and the doppler image.
Optionally, the medical image acquisition module comprises:
and the original ultrasonic image acquisition submodule is used for acquiring an original ultrasonic image obtained by scanning the probe, and the original ultrasonic image comprises an original elastic ultrasonic image and an original traditional ultrasonic image.
And the color region identification submodule is used for carrying out color region identification on the original elastic ultrasonic image to obtain an elastic ultrasonic image corresponding to the nodule to be predicted.
And the determining submodule is used for determining the image of the elastic ultrasonic image corresponding to the nodule to be predicted in the original traditional ultrasonic image, which corresponds to the same coordinate region, as the traditional ultrasonic image corresponding to the nodule to be predicted.
Optionally, the color region identification submodule is further configured to obtain a variance of R, G, B values included in each pixel point in the original elastic ultrasound image; carrying out contrast stretching treatment on the variance of the R, G, B values included by each pixel point to obtain the stretching variance corresponding to each pixel point; performing binarization processing on the stretching variances corresponding to the pixel points to obtain a binarization image corresponding to the original elastic ultrasonic image; performing feature enhancement processing on the binary image to obtain an enhanced binary image; and determining the circumscribed rectangle of the maximum connected domain in the enhanced binary image as the elastic ultrasonic image corresponding to the nodule to be predicted.
In a third aspect, the present disclosure provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of any one of the first aspects.
In a fourth aspect, the present disclosure provides an electronic device comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of any one of the first aspect.
According to the technical scheme, after at least two medical images corresponding to the nodule to be predicted are obtained, each medical image is predicted respectively, the probability of the nodule to be predicted corresponding to each medical image is obtained, then the target probability of the nodule to be predicted belonging to the preset type of the nodule is determined according to the probability of the nodule to be predicted corresponding to each medical image, and then the nodule type corresponding to the nodule to be predicted can be determined according to the target probability.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
FIG. 1 is a flow chart illustrating a nodule type prediction method according to an exemplary embodiment of the present disclosure;
FIG. 2 is a flow chart illustrating another nodule type prediction method according to an exemplary embodiment of the present disclosure;
fig. 3 is a flowchart illustrating steps for obtaining an elastic ultrasound image corresponding to a nodule to be predicted and an ultrasound image corresponding to the nodule to be predicted according to an exemplary embodiment of the present disclosure;
FIG. 4 is a flowchart illustrating one implementation of step S212 of a nodule type prediction method according to an exemplary embodiment of the present disclosure;
FIG. 5 is a block diagram illustrating a nodule type prediction apparatus according to an exemplary embodiment of the present disclosure;
fig. 6 is a block diagram illustrating an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
The following detailed description of specific embodiments of the present disclosure is provided in connection with the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
Through research, the inventors found that methods in the related art for predicting the type of a nodule present in a medical image by AI are mainly based on omics features of conventional ultrasound image data, distinguishing the nodule type mainly by extracting low-level visual features (shape, texture, etc.) of the conventional ultrasound image. Because the extracted feature types are limited, the prediction accuracy of the nodule type is poor.
Therefore, the present disclosure provides a nodule type prediction method, apparatus, storage medium and electronic device. At least two medical images corresponding to a nodule to be predicted are first obtained; each medical image is then predicted separately to obtain the probability of the nodule to be predicted corresponding to each medical image; the target probability that the nodule to be predicted belongs to the preset type of nodule is then determined according to the probability of the nodule to be predicted corresponding to each medical image; and the nodule type corresponding to the nodule to be predicted is finally determined according to the target probability. Because the nodule type corresponding to the nodule to be predicted is predicted jointly from the at least two medical images, richer feature information can be acquired from these images, and predicting the nodule type from this richer feature information can improve the accuracy of nodule type prediction.
Fig. 1 is a flowchart illustrating a nodule type prediction method according to an exemplary embodiment of the present disclosure. Referring to fig. 1, the nodule type prediction method includes:
and S110, acquiring at least two medical images corresponding to the nodule to be predicted.
And S120, predicting each medical image respectively to obtain the probability of the nodule to be predicted corresponding to each medical image, wherein the probability under one medical image represents the possibility that the nodule to be predicted is predicted to be a preset type nodule based on the medical image.
And S130, determining the target probability of the nodule to be predicted belonging to the preset type of nodule according to the probability of the nodule to be predicted corresponding to each medical image.
And S140, determining the nodule type corresponding to the nodule to be predicted according to the target probability.
In some embodiments, the at least two medical images include at least two of an elastography image, a conventional ultrasound image, a doppler image, and a fused image, the fused image being stitched based on data of at least two of the elastography image, the conventional ultrasound image, and the doppler image.
Stitching the at least two medical images refers to splicing their data according to the sizes of the medical images, i.e., concatenating the image data (for example, along the channel dimension, as in the examples below).
For example, assuming that the sizes of the elastic ultrasound image and the Doppler image are both 224 × 224 × 3, if the fusion is performed based on the elastic ultrasound image and the Doppler image, the size of the obtained fused image is 224 × 224 × 6.
For another example, assuming that the sizes of the elastic ultrasound image, the conventional ultrasound image and the Doppler image are each 224 × 224 × 3, if the fusion is performed based on the elastic ultrasound image, the conventional ultrasound image and the Doppler image, the size of the obtained fused image is 224 × 224 × 9.
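As a minimal illustration of this channel-wise stitching (a sketch only; the array shapes and variable names are assumptions for illustration, not values fixed by the disclosure):

```python
import numpy as np

# Assume three ROI images of the same spatial size, each with 3 channels.
elastic = np.random.rand(224, 224, 3).astype(np.float32)       # elastic ultrasound ROI
conventional = np.random.rand(224, 224, 3).astype(np.float32)  # conventional ultrasound ROI
doppler = np.random.rand(224, 224, 3).astype(np.float32)       # Doppler ROI

# Fused image from two modalities: 224 x 224 x 6
fused_2 = np.concatenate([elastic, doppler], axis=-1)

# Fused image from three modalities: 224 x 224 x 9
fused_3 = np.concatenate([elastic, conventional, doppler], axis=-1)

print(fused_2.shape, fused_3.shape)  # (224, 224, 6) (224, 224, 9)
```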
In the embodiment of the disclosure, since the fusion image is formed by splicing at least two medical images, the association and the difference of the nodule on each type of image can be embodied to a certain extent, and therefore, the prediction accuracy of the nodule type can be improved by using the fusion image.
In addition, in the embodiments of the present disclosure, the nodule types are not limited; for example, there may be a first type of nodule and a second type of nodule, or there may be other, more numerous types. The preset type of nodule is any one of the nodule types.
In the embodiment of the disclosure, the electronic device may acquire at least two medical images corresponding to a nodule to be predicted. The electronic device may then perform nodule type prediction on each acquired medical image separately to obtain the probability of the nodule to be predicted corresponding to each medical image, that is, the probability that the nodule to be predicted is predicted as a preset type of nodule based on the corresponding medical image. The electronic device may further determine, according to the probability of the nodule to be predicted corresponding to each medical image, the target probability that the nodule to be predicted belongs to the preset type of nodule, and finally determine, according to the target probability, the nodule type corresponding to the nodule to be predicted.
Illustratively, assume that an elastic ultrasound image, a conventional ultrasound image, a Doppler image and a fused image are acquired. The electronic device may predict the elastic ultrasound image, the conventional ultrasound image, the Doppler image and the fused image separately to obtain the probability of the nodule to be predicted corresponding to the elastic ultrasound image (denoted probability 1), the probability corresponding to the conventional ultrasound image (denoted probability 2), the probability corresponding to the Doppler image (denoted probability 3), and the probability corresponding to the fused image (denoted probability 4). Then, according to probability 1, probability 2, probability 3 and probability 4, the electronic device may determine the target probability that the nodule to be predicted belongs to the preset type of nodule (denoted probability 5), and finally determine, according to probability 5, the nodule type corresponding to the nodule to be predicted.
With this method, the nodule type corresponding to the nodule to be predicted is predicted jointly from the at least two medical images corresponding to the nodule to be predicted, so richer feature information can be acquired from the at least two medical images; predicting the nodule type from this richer feature information can improve the accuracy of nodule type prediction.
Fig. 2 is a flowchart illustrating a nodule type prediction method according to an exemplary embodiment of the present disclosure. Referring to fig. 2, the nodule type prediction method includes:
s210, acquiring at least two medical images corresponding to the nodule to be predicted.
In combination with the foregoing, the at least two medical images include at least two of an elastic ultrasound image, a conventional ultrasound image, a doppler image, and a fusion image.
In some embodiments, referring to fig. 3, the obtaining of the elastic ultrasound image corresponding to the nodule to be predicted and the ultrasound image corresponding to the nodule to be predicted includes:
and S211, acquiring an original ultrasonic image obtained by scanning of the probe, wherein the original ultrasonic image comprises an original elastic ultrasonic image and an original traditional ultrasonic image.
S212, carrying out color region identification on the original elastic ultrasonic image to obtain an elastic ultrasonic image corresponding to the nodule to be predicted.
And S213, determining the image of the elastic ultrasonic image corresponding to the nodule to be predicted in the original traditional ultrasonic image, which corresponds to the same coordinate region, as the traditional ultrasonic image corresponding to the nodule to be predicted.
The original ultrasound image is an ultrasound image containing a nodule obtained by scanning with the corresponding ultrasound probe; that is, the original ultrasound image can be obtained by capturing frames from the video data produced by the ultrasound probe scan. For example, when medical staff scan with the ultrasound probe, video data are obtained; after the medical staff find a nodule in the video data, the video frame containing the nodule can be captured to obtain the original ultrasound image. The obtained original ultrasound image may include an original elastic ultrasound image and an original traditional ultrasound image.
The elastic ultrasound image corresponding to the nodule to be predicted is the image of the region where the nodule is located, cropped from the original elastic ultrasound image, and the traditional ultrasound image corresponding to the nodule to be predicted is the image of the region where the nodule is located, cropped from the original traditional ultrasound image. The region where the nodule is located may also be understood as the region of interest.
In the embodiment of the present disclosure, the traditional ultrasound image is a gray-scale image, so directly extracting the traditional ultrasound image corresponding to the nodule to be predicted from the original traditional ultrasound image by feature extraction is inaccurate. In contrast, in the elastic ultrasound image the region where a nodule exists shows a color distribution. Therefore, color region identification may first be performed on the original elastic ultrasound image to obtain the elastic ultrasound image corresponding to the nodule to be predicted. Then, because the position of the nodule corresponds between the original elastic ultrasound image and the original traditional ultrasound image (i.e., the nodule has a consistent distribution in the two images), after the elastic ultrasound image corresponding to the nodule to be predicted is obtained, the image of the same coordinate region can be located in the original traditional ultrasound image, and the image of that coordinate region in the original traditional ultrasound image is determined as the traditional ultrasound image corresponding to the nodule to be predicted.
In this way, the elastic ultrasound image corresponding to the nodule to be predicted is determined by color region identification, which improves the accuracy of determining that image. In addition, the image of the same coordinate region in the original traditional ultrasound image is determined as the traditional ultrasound image corresponding to the nodule to be predicted, which avoids directly extracting it from a gray-scale image by feature extraction and further improves the accuracy and convenience of obtaining the traditional ultrasound image corresponding to the nodule to be predicted.
Considering that, in a color image, the R, G and B components of a gray pixel are almost equal while the R, G and B components of a colored pixel tend to differ greatly, in the embodiment of the present disclosure a method based on analyzing the R, G and B values may be used for color region identification. In addition, to improve the accuracy of color region identification, some image enhancement and optimization processing may be performed. Therefore, in some embodiments, referring to fig. 4, in step S212, performing color region identification on the original elastic ultrasound image to obtain the elastic ultrasound image corresponding to the nodule to be predicted may include:
s2121, obtaining the variance of R, G, B values included by each pixel point in the original elastic ultrasonic image.
And S2122, performing contrast stretching treatment on the variance of the R, G, B value included by each pixel point to obtain a stretching variance corresponding to each pixel point.
In the embodiment of the disclosure, the variance of R, G, B values included in each pixel point is subjected to contrast stretching treatment to obtain the stretching variance corresponding to each pixel point, so that the variance difference can be enlarged, and the difference between the pixel points is enhanced.
In some embodiments, the contrast stretching process is performed on the variance of R, G, B values included in a certain pixel point, and may be a process of transforming the variance of R, G, B values included in the pixel point by using a linear function.
And S2123, performing binarization processing on the stretching variances corresponding to the pixel points to obtain a binarization image corresponding to the original elastic ultrasonic image.
In some embodiments, the stretching variance may be mapped to between 0 and 1, and then a variance threshold may be selected for binarization, for example, the variance threshold may be selected to be 0.4.
And S2124, performing feature enhancement processing on the binary image to obtain an enhanced binary image.
In some embodiments, the feature enhancement processing on the binarized image may employ morphological dilation and erosion operations.
And S2125, determining the circumscribed rectangle of the maximum connected domain in the enhanced binary image as the elastic ultrasound image corresponding to the nodule to be predicted.
With this approach, color region identification is performed based on analysis of the R, G and B values and combined with image enhancement and optimization processing, so that the accuracy of color region identification can be improved while the identification method remains simple.
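A minimal sketch of this color region identification pipeline is given below, assuming OpenCV and NumPy; apart from the 0.4 variance threshold mentioned above, the kernel size, function name and other choices are illustrative assumptions rather than values fixed by the disclosure:

```python
import cv2
import numpy as np

def locate_nodule_roi(original_elastic_bgr):
    """Return the bounding rectangle (x, y, w, h) of the colored (nodule) region
    in an original elastic ultrasound image, following the R/G/B-variance idea."""
    img = original_elastic_bgr.astype(np.float32)

    # Variance of the R, G, B values of each pixel: near 0 for gray pixels,
    # large for colored pixels.
    var = img.var(axis=2)

    # Contrast stretching: a linear mapping to [0, 1] that enlarges the differences.
    stretched = (var - var.min()) / (var.max() - var.min() + 1e-8)

    # Binarization with a variance threshold (0.4, as suggested above).
    binary = (stretched > 0.4).astype(np.uint8)

    # Feature enhancement with morphological dilation and erosion (closing).
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    enhanced = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

    # Keep the largest connected component and take its circumscribed rectangle.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(enhanced, connectivity=8)
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])  # label 0 is background
    x = stats[largest, cv2.CC_STAT_LEFT]
    y = stats[largest, cv2.CC_STAT_TOP]
    w = stats[largest, cv2.CC_STAT_WIDTH]
    h = stats[largest, cv2.CC_STAT_HEIGHT]
    return x, y, w, h

# The same coordinates can then crop both modalities:
# x, y, w, h = locate_nodule_roi(original_elastic)
# elastic_roi = original_elastic[y:y+h, x:x+w]
# conventional_roi = original_conventional[y:y+h, x:x+w]
```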
In addition, the doppler image corresponding to the nodule to be predicted is an image of the region where the nodule is located, which is extracted from the original doppler image.
In some embodiments, the original doppler image may be obtained by capturing from video data obtained by scanning with a doppler probe, and the specific capturing manner refers to the aforementioned method for capturing the original elastic ultrasound image and the original conventional ultrasound image from video data obtained by scanning with an ultrasound probe, and is not described herein again.
After the original doppler image is acquired, the doppler image corresponding to the nodule to be predicted can be roughly positioned in the original doppler image by combining the coordinate conditions of the elastic ultrasound image corresponding to the nodule to be predicted or the conventional ultrasound image corresponding to the nodule to be predicted.
After obtaining the elastic ultrasound image corresponding to the nodule to be predicted, the conventional ultrasound image corresponding to the nodule to be predicted, or the doppler image corresponding to the nodule to be predicted, reference may be made to the foregoing detailed description for the step of determining the fusion image corresponding to the nodule to be predicted, which is not described herein again.
And S220, performing probability prediction on the corresponding medical images through the probability prediction model corresponding to each medical image to obtain the probability of the nodule to be predicted corresponding to each medical image.
In the embodiment of the present disclosure, each medical image may be predicted separately by means of a neural network model.
With reference to the foregoing example, assuming that the electronic device acquires an elastic ultrasound image, a conventional ultrasound image, a doppler image, and a fusion image, probability prediction may be performed on the elastic ultrasound image using a probability prediction model corresponding to the elastic ultrasound image, probability prediction may be performed on the conventional ultrasound image using a probability prediction model corresponding to the conventional ultrasound image, probability prediction may be performed on the doppler image using a probability prediction model corresponding to the doppler image, and probability prediction may be performed on the fusion image using a probability prediction model corresponding to the fusion image.
In some embodiments, the method for training the probabilistic predictive model corresponding to each medical image may include the steps of: acquiring a sample data set corresponding to each medical image, wherein the sample data set under one medical image comprises a plurality of medical images, and each medical image carries a nodule type label; and training the corresponding probability prediction model to be trained through the sample data set corresponding to each medical image to obtain the probability prediction model corresponding to each medical image.
In the disclosed embodiments, supervised learning may be used to train the model. That is, for the probability prediction model corresponding to each medical image, a plurality of corresponding medical images carrying nodule type labels may be used to train the probability prediction model to be trained corresponding to each medical image.
For example, for a probability prediction model corresponding to a conventional ultrasound image, a plurality of conventional ultrasound images carrying nodule type labels may be used to construct a sample data set corresponding to the conventional ultrasound image, and then the to-be-trained probability prediction model corresponding to the conventional ultrasound image may be trained by using the sample data set corresponding to the conventional ultrasound image to obtain the probability prediction model corresponding to the conventional ultrasound image.
For another example, for the probability prediction model corresponding to the fused image, a plurality of fused images carrying the nodule type label may be used to construct a sample data set corresponding to the fused image, and then the to-be-trained probability prediction model corresponding to the fused image may be trained by using the sample data set corresponding to the fused image, so as to obtain the probability prediction model corresponding to the fused image.
In some embodiments, the probability prediction model to be trained may be ResNet50 (Residual Network 50). The data set may be divided in a ratio of 8.5:1.5 into a training set and a test set. In addition, during training, the loss function is cross entropy, the optimizer is Adam, the performance evaluation index is AUC, the initial learning rate is 0.001, the total number of training epochs is set to 2000, early stopping is used to prevent overfitting, the training batch size is 16, and 5-fold cross validation is performed for each model to enhance stability.
In addition, considering that the at least two medical images acquired by the electronic device are independent images whose sizes may not be consistent, in order to remove the restriction on input image size, in some embodiments the probability prediction model to be trained may be a fully convolutional neural network model. That is, in the embodiment of the present disclosure, the ResNet50 network may be modified into a fully convolutional network: the adaptive average pooling may be replaced by ordinary average pooling, and the fully connected layer may be replaced by a convolutional layer.
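The disclosure does not give code for this modification; the following sketch, assuming PyTorch and torchvision, shows one way to realize it (the builder name, the untrained weights, and the adaptation of the first convolution to 6- or 9-channel fused inputs are illustrative assumptions, not features stated in the disclosure):

```python
import torch
import torch.nn as nn
from torchvision import models

def build_fcn_resnet50(in_channels: int, num_classes: int = 2) -> nn.Sequential:
    """ResNet50 backbone turned into a fully convolutional classifier, so that the
    input image size is not restricted to 224 x 224."""
    backbone = models.resnet50(weights=None)

    # Adapt the first convolution to the number of input channels
    # (3 for a single modality; 6 or 9 for a fused image) -- an assumption.
    backbone.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7, stride=2,
                               padding=3, bias=False)

    layers = list(backbone.children())[:-2]      # drop adaptive avgpool and fc
    layers.append(nn.AvgPool2d(kernel_size=7))   # ordinary average pooling
    layers.append(nn.Conv2d(2048, num_classes, kernel_size=1))  # conv replaces fc
    return nn.Sequential(*layers)

model = build_fcn_resnet50(in_channels=3)
logits = model(torch.randn(1, 3, 224, 224))      # (1, num_classes, 1, 1) for 224 input
probs = torch.softmax(logits.flatten(1), dim=1)  # probability per nodule type

# Training setup following the hyperparameters above (cross entropy, Adam, lr = 0.001).
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
```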
The training process of the probability prediction model corresponding to other medical images may refer to the training process of the probability prediction model corresponding to the conventional ultrasound image, and only the sample data sets used are different, which is not described herein again.
And S230, carrying out weighted summation on the probability of the nodule to be predicted corresponding to each medical image based on the target weight to obtain the target probability of the nodule to be predicted belonging to the preset type nodule.
Introducing different medical images provides richer feature information for predicting the nodule type corresponding to the nodule to be predicted, which can improve prediction accuracy. However, different medical images express different features, so their prediction performance for the nodule type also differs. Therefore, when at least two medical images are used to predict the nodule type of the nodule to be predicted, the difference in prediction performance across the various medical images can be taken into account. Accordingly, in the embodiment of the disclosure, the probabilities of the nodule to be predicted corresponding to each medical image are weighted and summed based on the target weight to obtain the target probability that the nodule to be predicted belongs to the preset type of nodule.
In some embodiments, the step of determining the target weights may comprise: acquiring values of preset performance evaluation indexes respectively corresponding to the probability prediction models corresponding to each medical image; and obtaining the target weight based on the ratio of the value of each preset performance evaluation index to the sum of the values of each preset performance evaluation index.
In the embodiment of the disclosure, after the probability prediction model corresponding to each medical image is obtained through training, the value of the preset performance evaluation index corresponding to each probability prediction model can be calculated, and then the target weight is obtained based on the ratio of the value of each preset performance evaluation index to the sum of the values of each preset performance evaluation index.
In some embodiments, the preset performance evaluation index may be AUC (Area Under the ROC Curve).
For example, assume that the probability prediction model corresponding to the elastic ultrasound image is model 1, the probability prediction model corresponding to the conventional ultrasound image is model 2, the probability prediction model corresponding to the Doppler image is model 3, and the probability prediction model corresponding to the fused image is model 4. The AUC values corresponding to model 1, model 2, model 3 and model 4 are calculated and denoted AUC1, AUC2, AUC3 and AUC4, respectively, and their sum is denoted AUC5. The target weights can then be expressed as AUC1/AUC5 : AUC2/AUC5 : AUC3/AUC5 : AUC4/AUC5.
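A minimal sketch of this weighting scheme in plain Python (the AUC and probability values are illustrative, not results from the disclosure):

```python
def target_weights(aucs):
    """Normalize each model's AUC by the sum of all AUCs to obtain the target weights."""
    total = sum(aucs)
    return [a / total for a in aucs]

def target_probability(probs, weights):
    """Weighted sum of the per-image probabilities of being the preset type of nodule."""
    return sum(p * w for p, w in zip(probs, weights))

# Example: AUCs of models 1..4 and the per-image probabilities they output for one nodule.
aucs = [0.82, 0.78, 0.75, 0.88]   # illustrative AUC1..AUC4
probs = [0.64, 0.55, 0.49, 0.71]  # illustrative probability 1..4
weights = target_weights(aucs)
p_target = target_probability(probs, weights)  # probability 5
print(weights, round(p_target, 4))
```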
By adopting the method, the target probability of the nodule to be predicted, which belongs to the preset type of nodule, is obtained by performing weighted summation on the probabilities of the nodule to be predicted corresponding to each medical image, and the performance difference of the nodule type prediction by utilizing various medical images can be considered, so that the accuracy of the nodule type prediction is further improved.
And S240, determining the nodule type corresponding to the nodule to be predicted according to the target probability.
In some embodiments, determining the nodule type corresponding to the nodule to be predicted according to the target probability may be done by comparing the target probability with a preset probability threshold: if the target probability is greater than the preset probability threshold, the nodule to be predicted is determined to be of the preset type of nodule; otherwise, it is determined not to be of the preset type of nodule.
Illustratively, assuming that the nodule type includes a first type nodule and a second type nodule, wherein the preset type nodule represents the first type nodule, and the preset probability threshold is 0.5, when the target probability is greater than 0.5, the nodule type of the nodule to be predicted may be determined as the first type nodule, and when the target probability is less than or equal to 0.5, the nodule type of the nodule to be predicted may be determined as the second type nodule.
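Continuing the sketch above, the final decision step can be expressed as follows (the 0.5 threshold follows the example above; the type labels are illustrative):

```python
def decide_nodule_type(p_target, threshold=0.5):
    """Map the target probability to a nodule type using a preset probability threshold."""
    return "first type nodule" if p_target > threshold else "second type nodule"

print(decide_nodule_type(0.63))  # first type nodule
print(decide_nodule_type(0.41))  # second type nodule
```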
According to tests by the inventors on a test set of 120 samples, compared with a method for predicting the nodule type based on omics features of conventional ultrasound image data, the nodule type prediction method provided by the embodiment of the disclosure achieves a clearly higher AUC and a clearly higher accuracy rate. This shows that the method provided by the embodiment of the disclosure is feasible for nodule type prediction and can improve the accuracy of nodule type prediction.
Based on the same concept, the present disclosure also provides a nodule type prediction apparatus, which may be a part or all of an electronic device by means of software, hardware or a combination of both. Referring to fig. 5, the nodule type predicting apparatus 300 may include: a medical image acquisition module 310, a probability prediction module 320, a target probability determination module 330, and a nodule type determination module 340, wherein:
the medical image obtaining module 310 is configured to obtain at least two medical images corresponding to a nodule to be predicted.
The probability prediction module 320 is configured to predict each medical image respectively to obtain a probability that the nodule to be predicted corresponds to each medical image, where the probability in one medical image represents a possibility that the nodule to be predicted is predicted as a preset type nodule based on the medical image.
And the target probability determination module 330 is configured to determine a target probability that the nodule to be predicted belongs to the preset type of nodule according to the probability that the nodule to be predicted corresponds to each medical image.
And a nodule type determining module 340, configured to determine a nodule type corresponding to the nodule to be predicted according to the target probability.
Optionally, the probability prediction module 320 is further configured to perform probability prediction on the corresponding medical image through a probability prediction model corresponding to each medical image, so as to obtain a probability that the nodule to be predicted corresponds to each medical image.
Optionally, the apparatus 300 further includes a training module, configured to obtain a sample data set corresponding to each medical image, where the sample data set in one medical image includes a plurality of medical images, and each medical image carries a nodule type label; and training the corresponding probability prediction model to be trained through the sample data set corresponding to each medical image to obtain the probability prediction model corresponding to each medical image.
Optionally, the target probability determining module 330 is further configured to perform weighted summation on the probability of the nodule to be predicted corresponding to each medical image based on the target weight, so as to obtain a target probability that the nodule to be predicted belongs to the preset type of nodule.
Optionally, the apparatus 300 further includes a target weight determining module, configured to obtain values of preset performance evaluation indexes respectively corresponding to the probability prediction models corresponding to each medical image; and obtaining the target weight based on the ratio of the value of each preset performance evaluation index to the sum of the values of each preset performance evaluation index.
Optionally, the probability prediction model to be trained is a fully convolutional neural network model.
Optionally, the at least two medical images include at least two of an elastic ultrasound image, a conventional ultrasound image, a doppler image, and a fused image, and the fused image is formed by stitching data of at least two of the elastic ultrasound image, the conventional ultrasound image, and the doppler image.
Optionally, the medical image acquisition module 310 includes:
and the original ultrasonic image acquisition submodule is used for acquiring an original ultrasonic image obtained by scanning the probe, and the original ultrasonic image comprises an original elastic ultrasonic image and an original traditional ultrasonic image.
And the color region identification submodule is used for carrying out color region identification on the original elastic ultrasonic image to obtain an elastic ultrasonic image corresponding to the nodule to be predicted.
And the determining submodule is used for determining the image of the elastic ultrasonic image corresponding to the nodule to be predicted in the original traditional ultrasonic image, which corresponds to the same coordinate region, as the traditional ultrasonic image corresponding to the nodule to be predicted.
Optionally, the color region identification submodule is further configured to obtain a variance of R, G, B values included in each pixel point in the original elastic ultrasound image; carrying out contrast stretching treatment on the variance of the R, G, B values included by each pixel point to obtain the stretching variance corresponding to each pixel point; performing binarization processing on the stretching variances corresponding to the pixel points to obtain a binarization image corresponding to the original elastic ultrasonic image; performing feature enhancement processing on the binary image to obtain an enhanced binary image; and determining the circumscribed rectangle of the maximum connected domain in the enhanced binary image as the elastic ultrasonic image corresponding to the nodule to be predicted.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Based on the same inventive concept, the present disclosure also provides an electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory such that the steps of implementing comprise: acquiring at least two medical images corresponding to a nodule to be predicted; predicting each medical image respectively to obtain the probability of the nodule to be predicted corresponding to each medical image, wherein the probability under one medical image represents the possibility that the nodule to be predicted is predicted to be a preset type nodule based on the medical image; determining the target probability of the nodule to be predicted belonging to the preset type of nodule according to the probability of the nodule to be predicted corresponding to each medical image; determining a nodule type corresponding to the nodule to be predicted according to the target probability;
or, the step of implementing further comprises: performing probability prediction on the corresponding medical images through the probability prediction model corresponding to each medical image to obtain the probability of the nodule to be predicted corresponding to each medical image;
or, the step of implementing further comprises: acquiring a sample data set corresponding to each medical image, wherein the sample data set under one medical image comprises a plurality of medical images, and each medical image carries a nodule type label; training the corresponding probability prediction model to be trained through the sample data set corresponding to each medical image to obtain the probability prediction model corresponding to each medical image;
or, the step of implementing further comprises: based on the target weight, carrying out weighted summation on the probability of the nodule to be predicted under each medical image to obtain the target probability of the nodule to be predicted belonging to the preset type nodule;
or, the step of implementing further comprises: acquiring values of preset performance evaluation indexes respectively corresponding to the probability prediction models corresponding to each medical image; obtaining the target weight based on the ratio of the value of each preset performance evaluation index to the sum of the values of each preset performance evaluation index;
the probability prediction model to be trained is a full convolution neural network model.
The at least two medical images comprise at least two of an elastic ultrasonic image, a traditional ultrasonic image, a Doppler image and a fusion image, and the fusion image is formed by splicing at least two data of the elastic ultrasonic image, the traditional ultrasonic image and the Doppler image.
Or, the step of implementing further comprises: acquiring an original ultrasonic image obtained by scanning a probe, wherein the original ultrasonic image comprises an original elastic ultrasonic image and an original traditional ultrasonic image; carrying out color region identification on the original elastic ultrasonic image to obtain an elastic ultrasonic image corresponding to the nodule to be predicted; determining an image of the original traditional ultrasonic image corresponding to the elastic ultrasonic image corresponding to the nodule to be predicted as a traditional ultrasonic image corresponding to the nodule to be predicted, wherein the image corresponds to the same coordinate region;
or, the step of implementing further comprises: obtaining the variance of R, G, B values included by each pixel point in the original elastic ultrasonic image; carrying out contrast stretching treatment on the variance of the R, G, B values included by each pixel point to obtain the stretching variance corresponding to each pixel point; performing binarization processing on the stretching variances corresponding to the pixel points to obtain a binarization image corresponding to the original elastic ultrasonic image; performing feature enhancement processing on the binary image to obtain an enhanced binary image; and determining the external moment of the maximum connected domain in the enhanced binary image as the elastic ultrasonic image corresponding to the nodule to be predicted.
In a possible approach, a block diagram of the electronic device may be as shown in fig. 6. Referring to fig. 6, the electronic device 400 may include: a processor 401 and a memory 402. The electronic device 400 may also include one or more of a multimedia component 403, an input/output (I/O) interface 404, and a communications component 405.
The processor 401 is configured to control the overall operation of the electronic device 400, so as to complete all or part of the steps in the above-mentioned nodule type prediction method. The memory 402 is used to store various types of data to support operation at the electronic device 400, such as instructions for any application or method operating on the electronic device 400 and application-related data, such as contact data, transmitted and received messages, pictures, audio, video, and so forth. The Memory 402 may be implemented by any type of volatile or non-volatile Memory device or combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic disk or optical disk. The multimedia components 403 may include a screen and an audio component. Wherein the screen may be, for example, a touch screen and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals. The received audio signal may further be stored in the memory 402 or transmitted through the communication component 405. The audio assembly also includes at least one speaker for outputting audio signals. The I/O interface 404 provides an interface between the processor 401 and other interface modules, such as a keyboard, mouse, buttons, etc. These buttons may be virtual buttons or physical buttons. The communication component 405 is used for wired or wireless communication between the electronic device 400 and other devices. Wireless Communication, such as Wi-Fi, bluetooth, Near Field Communication (NFC), 2G, 3G, 4G, NB-IOT, eMTC, or other 5G, etc., or a combination of one or more of them, which is not limited herein. The corresponding communication component 405 may therefore include: Wi-Fi module, Bluetooth module, NFC module, etc.
In an exemplary embodiment, the electronic Device 400 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above-described nodule type prediction method.
In another exemplary embodiment, a computer readable storage medium comprising program instructions which, when executed by a processor, implement the steps of the nodule type prediction method described above is also provided. For example, the computer readable storage medium may be the memory 402 described above including program instructions executable by the processor 401 of the electronic device 400 to perform the nodule type prediction method described above.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-described nodule type prediction method when executed by the programmable apparatus.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings. However, the present disclosure is not limited to the specific details of the above embodiments; various simple modifications may be made to the technical solution of the present disclosure within the scope of its technical concept, and these simple modifications all fall within the protection scope of the present disclosure.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner; to avoid unnecessary repetition, the possible combinations are not described separately in the present disclosure.
In addition, the various embodiments of the present disclosure may also be combined in any manner, and such combinations should likewise be regarded as content disclosed by the present disclosure as long as they do not depart from its spirit.

Claims (11)

1. A method of nodule type prediction, the method comprising:
acquiring at least two medical images corresponding to a nodule to be predicted;
predicting each medical image separately to obtain a probability of the nodule to be predicted corresponding to each medical image, wherein the probability under one medical image represents a likelihood that the nodule to be predicted is predicted, based on that medical image, to be a nodule of a preset type;
determining a target probability that the nodule to be predicted belongs to the preset type of nodule according to the probability of the nodule to be predicted corresponding to each medical image;
and determining the nodule type corresponding to the nodule to be predicted according to the target probability.
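To make the claimed flow concrete, the following is a minimal Python sketch of claim 1 as a whole; the callable per-image models, the fusion weights, and the 0.5 decision threshold are illustrative assumptions rather than features recited in the claim:

import numpy as np

def predict_nodule_type(images, models, weights, threshold=0.5):
    """Fuse per-image probabilities into a single nodule type decision.

    images  : list of medical images of the same nodule (e.g. elastic
              ultrasound, conventional ultrasound), one per modality
    models  : list of callables; models[i](images[i]) returns the probability
              that the nodule is a nodule of the preset type
    weights : per-model fusion weights, assumed to sum to 1
    """
    # Step 1: obtain one probability per medical image
    probs = np.array([model(image) for model, image in zip(models, images)])

    # Step 2: combine the per-image probabilities into a target probability
    target_probability = float(np.dot(weights, probs))

    # Step 3: map the target probability to a nodule type
    nodule_type = "preset type" if target_probability >= threshold else "other"
    return target_probability, nodule_type
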
2. The method according to claim 1, wherein the predicting each medical image separately to obtain the probability of the nodule to be predicted corresponding to each medical image comprises:
performing probability prediction on each medical image through the probability prediction model corresponding to that medical image, to obtain the probability of the nodule to be predicted corresponding to each medical image.
3. The method according to claim 2, wherein the training method of the probability prediction model corresponding to each medical image comprises:
acquiring a sample data set corresponding to each medical image, wherein the sample data set under one medical image comprises a plurality of sample images of that medical image type, and each sample image carries a nodule type label;
training the corresponding probability prediction model to be trained with the sample data set corresponding to each medical image, to obtain the probability prediction model corresponding to each medical image.
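One way to realize claim 3 is to train an independent probabilistic classifier per image type on its labelled sample data set. The sketch below uses a scikit-learn logistic regression on pre-extracted feature vectors purely as a stand-in for whatever per-modality model is actually used; the sample_sets structure is an assumption for illustration:

from sklearn.linear_model import LogisticRegression

def train_per_image_models(sample_sets):
    """Train one probability prediction model per medical image type.

    sample_sets : dict mapping an image type name to (features, labels),
                  where features is an (n_samples, n_features) array derived
                  from that type's sample images and labels are the nodule
                  type labels carried by the samples
    """
    models = {}
    for image_type, (features, labels) in sample_sets.items():
        # Any model that outputs a probability works here; logistic
        # regression is only a placeholder for the model to be trained.
        models[image_type] = LogisticRegression(max_iter=1000).fit(features, labels)
    return models
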
4. The method according to claim 2 or 3, wherein the determining the target probability that the nodule to be predicted belongs to the preset type of nodule according to the probability of the nodule to be predicted corresponding to each medical image comprises:
performing, based on target weights, a weighted summation of the probabilities of the nodule to be predicted under each medical image to obtain the target probability that the nodule to be predicted belongs to the preset type of nodule.
5. The method of claim 4, further comprising:
acquiring a value of a preset performance evaluation index for each of the probability prediction models corresponding to the medical images;
and obtaining the target weights based on the ratio of the value of the preset performance evaluation index of each model to the sum of the values of the preset performance evaluation indexes of all of the models.
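Claims 4 and 5 together describe a weighted fusion whose weights are each model's performance metric divided by the sum of all models' metrics. A minimal sketch follows, assuming for illustration that the preset performance evaluation index is a validation AUC:

import numpy as np

def fusion_weights(metric_values):
    """Each weight is one model's metric value divided by the sum of all values."""
    values = np.asarray(metric_values, dtype=float)
    return values / values.sum()

def fused_probability(probabilities, weights):
    """Weighted summation of the per-image probabilities."""
    return float(np.dot(weights, probabilities))

# Example: three per-image models with validation AUCs of 0.90, 0.80 and 0.70
weights = fusion_weights([0.90, 0.80, 0.70])       # -> [0.375, 0.333..., 0.291...]
target = fused_probability([0.72, 0.55, 0.60], weights)
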
6. The method according to any one of claims 1-3, wherein the at least two medical images comprise at least two of an elastic ultrasound image, a conventional ultrasound image, a Doppler image, and a fused image, the fused image being stitched from data of at least two of the elastic ultrasound image, the conventional ultrasound image, and the Doppler image.
7. The method according to claim 6, wherein acquiring the elastic ultrasound image corresponding to the nodule to be predicted and the conventional ultrasound image corresponding to the nodule to be predicted comprises:
acquiring an original ultrasound image obtained by probe scanning, wherein the original ultrasound image comprises an original elastic ultrasound image and an original conventional ultrasound image;
carrying out color region identification on the original elastic ultrasound image to obtain the elastic ultrasound image corresponding to the nodule to be predicted;
and determining the image, in the original conventional ultrasound image, that occupies the same coordinate region as the elastic ultrasound image corresponding to the nodule to be predicted, as the conventional ultrasound image corresponding to the nodule to be predicted.
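Because the original elastic and conventional ultrasound frames cover the same field of view, the step in claim 7 of taking the same coordinate region can be a plain crop. The sketch below assumes both frames are NumPy arrays that are already co-registered; bbox is the rectangle produced by the color region identification of claim 8:

def crop_matching_regions(original_elastic, original_conventional, bbox):
    """Crop the same coordinate region from both original ultrasound frames.

    bbox : (x, y, w, h) rectangle of the nodule region located in the
           original elastic ultrasound image.
    """
    x, y, w, h = bbox
    elastic_roi = original_elastic[y:y + h, x:x + w]
    conventional_roi = original_conventional[y:y + h, x:x + w]
    return elastic_roi, conventional_roi
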
8. The method according to claim 7, wherein the carrying out color region identification on the original elastic ultrasound image to obtain the elastic ultrasound image corresponding to the nodule to be predicted comprises:
obtaining the variance of the R, G, and B values of each pixel in the original elastic ultrasound image;
carrying out contrast stretching on the variance of the R, G, and B values of each pixel to obtain a stretched variance corresponding to each pixel;
performing binarization on the stretched variances corresponding to the pixels to obtain a binarized image corresponding to the original elastic ultrasound image;
performing feature enhancement on the binarized image to obtain an enhanced binarized image;
and determining the region enclosed by the circumscribed rectangle of the largest connected component in the enhanced binarized image as the elastic ultrasound image corresponding to the nodule to be predicted.
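The color region identification of claim 8 exploits the fact that the gray-scale background has nearly equal R, G and B values (near-zero variance) while the colored elasticity overlay does not. A minimal OpenCV sketch follows; the Otsu threshold and the morphological closing are only illustrative choices for the binarization and feature-enhancement steps:

import cv2
import numpy as np

def locate_elastic_region(original_elastic_bgr):
    """Return the circumscribed rectangle (x, y, w, h) of the colored elasticity overlay."""
    img = original_elastic_bgr.astype(np.float32)

    # Per-pixel variance of the three channel values
    variance = img.var(axis=2)

    # Contrast stretching of the variance map to the full 0-255 range
    stretched = cv2.normalize(variance, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Binarization (Otsu threshold chosen for illustration)
    _, binary = cv2.threshold(stretched, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Feature enhancement: morphological closing fills small gaps in the overlay
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    enhanced = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

    # Circumscribed rectangle of the largest connected component
    contours, _ = cv2.findContours(enhanced, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)
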
9. A nodule type prediction apparatus, comprising:
the medical image acquisition module is used for acquiring at least two medical images corresponding to the nodule to be predicted;
the probability prediction module is used for predicting each medical image separately to obtain the probability of the nodule to be predicted corresponding to each medical image, wherein the probability under one medical image represents a likelihood that the nodule to be predicted is predicted, based on that medical image, to be a nodule of a preset type;
the target probability determination module is used for determining a target probability that the nodule to be predicted belongs to the preset type of nodule according to the probability of the nodule to be predicted corresponding to each medical image;
and the nodule type determination module is used for determining the nodule type corresponding to the nodule to be predicted according to the target probability.
10. A non-transitory computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
11. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 1 to 8.
CN202111593282.7A 2021-12-23 2021-12-23 Nodule type prediction method and device, storage medium and electronic equipment Pending CN114387227A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111593282.7A CN114387227A (en) 2021-12-23 2021-12-23 Nodule type prediction method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111593282.7A CN114387227A (en) 2021-12-23 2021-12-23 Nodule type prediction method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN114387227A true CN114387227A (en) 2022-04-22

Family

ID=81198224

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111593282.7A Pending CN114387227A (en) 2021-12-23 2021-12-23 Nodule type prediction method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN114387227A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107945168A (en) * 2017-11-30 2018-04-20 上海联影医疗科技有限公司 A medical image processing method and medical imaging system
CN108961207A (en) * 2018-05-02 2018-12-07 上海大学 Aided diagnosis method for benign and malignant lymph node lesions based on multi-modal ultrasound images
US20200005460A1 (en) * 2018-06-28 2020-01-02 Shenzhen Imsight Medical Technology Co. Ltd. Method and device for detecting pulmonary nodule in computed tomography image, and computer-readable storage medium
CN109919928A (en) * 2019-03-06 2019-06-21 腾讯科技(深圳)有限公司 Medical image detection method, device and storage medium
CN110363760A (en) * 2019-07-22 2019-10-22 广东工业大学 Computer system for medical image recognition
CN113627449A (en) * 2020-05-07 2021-11-09 阿里巴巴集团控股有限公司 Model training method and device and label determining method and device
CN111553919A (en) * 2020-05-12 2020-08-18 上海深至信息科技有限公司 Thyroid nodule analysis system based on elastic ultrasonic imaging

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
周天绮 et al.: "Application of radiomics in predicting the benign or malignant classification of lung tumors", 《中国医疗器械杂志》 (Chinese Journal of Medical Instrumentation) *
段宗文 et al. (eds.): "Application of ultrasound elastography in the diagnosis of thyroid diseases", 《临床超声医学 上》 (Clinical Ultrasound Medicine, Part I) *
齐慧颖 (ed.): "Medical image processing and analysis methods", 《医学信息资源智能管理》 (Intelligent Management of Medical Information Resources) *

Similar Documents

Publication Publication Date Title
CN110363220B (en) Behavior class detection method and device, electronic equipment and computer readable medium
CN112070781A (en) Processing method and device of craniocerebral tomography image, storage medium and electronic equipment
US20140286527A1 (en) Systems and methods for accelerated face detection
CN109389096B (en) Detection method and device
CN108229675B (en) Neural network training method, object detection method, device and electronic equipment
CN109559303B (en) Method and device for identifying calcification points and computer-readable storage medium
CN112053363B (en) Retina blood vessel segmentation method, retina blood vessel segmentation device and model construction method
CN111931713B (en) Abnormal behavior detection method and device, electronic equipment and storage medium
CN111814725A (en) Early warning method for judging ignition of monitoring video based on CNN + LSTM + MLP combined neural network
CN110825969A (en) Data processing method, device, terminal and storage medium
CN111414910A (en) Small target enhancement detection method and device based on double convolutional neural network
CN115131604A (en) Multi-label image classification method and device, electronic equipment and storage medium
CN111539256B (en) Iris feature extraction method, iris feature extraction device and storage medium
CN114170688B (en) Character interaction relation identification method and device and electronic equipment
CN107948721B (en) Method and device for pushing information
CN116569210A (en) Normalizing OCT image data
CN114387227A (en) Nodule type prediction method and device, storage medium and electronic equipment
CN110059743B (en) Method, apparatus and storage medium for determining a predicted reliability metric
CN112820412B (en) User information processing method and device, storage medium and electronic equipment
CN112633348B (en) Method and device for detecting cerebral arteriovenous malformation and judging dispersion property of cerebral arteriovenous malformation
CN113470026B (en) Polyp recognition method, device, medium, and apparatus
CN114170271A (en) Multi-target tracking method with self-tracking consciousness, equipment and storage medium
CN115130543A (en) Image recognition method and device, storage medium and electronic equipment
CN115131291A (en) Object counting model training method, device, equipment and storage medium
CN113221718A (en) Formula identification method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20220422