WO2019200753A1 - Lesion monitoring method, apparatus, computer device and storage medium - Google Patents

Lesion monitoring method, apparatus, computer device and storage medium Download PDF

Info

Publication number
WO2019200753A1
WO2019200753A1 (PCT/CN2018/095503)
Authority
WO
WIPO (PCT)
Prior art keywords
image
original
layer
data
feature
Prior art date
Application number
PCT/CN2018/095503
Other languages
English (en)
French (fr)
Inventor
王健宗
吴天博
刘新卉
肖京
Original Assignee
平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2019200753A1

Links

Images

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for computer-aided diagnosis, e.g. based on medical expert systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H30/20 - ICT specially adapted for the handling or processing of medical images, e.g. DICOM, HL7 or PACS

Definitions

  • The present application relates to the field of convolutional neural network applications, and in particular to a lesion monitoring method, apparatus, computer device and storage medium.
  • The goal of liver cancer diagnosis is to judge whether the liver shown in cross-sectional images of the human body, obtained from CT tomography, contains lesions.
  • The traditional method relies on the doctor's experience to read multiple CT images and locate lesions, so the doctor's experience is very important. However, because a CT tomogram is a grayscale image displaying multiple organs at once, the CT slices related to the liver are quite numerous and the amount of data is very large; reading the images greatly consumes the doctor's mental effort and time, leaving doctors with no more time to receive more patients, analyze conditions, or design treatment plans.
  • The main purpose of the present application is to provide a lesion monitoring method, aiming to solve the technical problem that liver cancer diagnosis relies mainly on the doctor's medical experience, making diagnosis time-consuming and inefficient.
  • the present application proposes a lesion monitoring method comprising:
  • the segmentation model and the recognition model are respectively trained by the first convolutional neural network and the second convolutional neural network, and the first convolutional neural network and the second convolutional neural network are cascaded.
  • the application also provides a lesion monitoring device, comprising:
  • a first input/output module configured to input sample data of the original CT image into a preset segmentation model for a segmentation operation, and output the segmented liver image data;
  • a second input/output module configured to input the sample data of the original CT image and the segmented liver image data into a preset recognition model, and output a recognition result;
  • the segmentation model and the recognition model are respectively trained by the first convolutional neural network and the second convolutional neural network, and the first convolutional neural network and the second convolutional neural network are cascaded.
  • the application also provides a computer device comprising a memory and a processor, the memory storing computer readable instructions, the processor implementing the steps of the method when the computer readable instructions are executed.
  • the present application also provides a computer non-transitory readable storage medium having stored thereon computer readable instructions that, when executed by a processor, implement the steps of the methods described above.
  • The present application has beneficial technical effects: it uses a neural network to learn the liver features and lesion features in the original CT images, establishes the relationship between CT slices and labels through two cascaded fully convolutional neural networks, and trains the model task-by-task.
  • The upsampling part of the network model of this application includes splicing, the purpose of which is to pull the earliest features forward.
  • A cross-layer connection splices the convolution outputs of earlier layers into later-layer inputs, making up for the lack of data information in a layer that sits deep in the network; moreover, during model training each upsampling step superimposes features of the same dimension from the convolution steps.
  • FIG. 1 is a schematic flow chart of a lesion monitoring method according to an embodiment of the present application.
  • FIG. 2 is a schematic structural view of a lesion monitoring device according to an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a first input/output module according to an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a first input unit according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an output submodule according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a first input unit according to another embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a second input unit according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of an apparatus for optimizing lesion monitoring according to an embodiment of the present application.
  • Figure 9 is a schematic structural view of a lesion monitoring device according to another embodiment of the present application.
  • FIG. 10 is a schematic diagram showing the internal structure of a computer device according to an embodiment of the present application.
  • a lesion monitoring method includes:
  • S1 The sample data of the original CT image is input into a preset segmentation model for segmentation operation, and the segmented liver image data is output.
  • the liver image data after segmentation in this step is the identification data of the liver part in the original CT image, including all edge feature data of the liver part in the original CT image.
  • The segmentation model of the present embodiment determines the bounding box of the liver portion by identifying the edge feature data of the liver portion in the original CT image, thereby realizing effective segmentation of the liver portion in the original CT image.
  • the segmentation model and the recognition model are respectively trained by the first convolutional neural network and the second convolutional neural network, and the first convolutional neural network and the second convolutional neural network are cascaded.
  • CT means Computed Tomography: volumetric CT tomogram data of a certain part of the human body is formed by precisely collimated X-ray beams, γ-rays, ultrasonic waves, etc. A CT tomogram has a certain thickness and is composed of multiple cross-sectional CT slices arranged in sequence.
  • For effective distinction, each CT tomogram has a corresponding label and each CT slice also has a corresponding label, representing the order of the tomogram and of the slice within the whole data set, so that each maps accurately to the corresponding position of the solid organ. The original CT images in this application include the original CT slices.
  • The CT data of the liver is composed of several CT tomograms arranged in sequence, each in turn composed of multiple CT slices arranged in sequence; labels were introduced for each tomogram and each slice so that their ordering can be related and matched to the liver site.
  • The original CT images input to the first convolutional neural network are the labeled CT slices; liver segmentation is achieved by extracting features from the labeled slices and restoring the image size. The labeled output information of the first network is input to the second convolutional neural network, so that the CT slice data of the first network corresponds, via the labels, to the CT slices in the second network, allowing the lesion position in the liver output by the first network to be determined accurately. A label in this embodiment is information including content such as a sort number.
  • The final output of the first convolutional neural network is the input of the second; the two tasks differ, so the recognition targets of each layer of the two convolutional neural networks also differ. The model is trained task-by-task through the two cascaded network structures, so as to find the optimal network parameters as quickly as possible and complete training.
  • The feature analysis and extraction principle is the same during model training and in actual application: in training, the model is trained on many samples to determine the model parameters; in actual application the parameters are already determined, and features are extracted only from the samples to be analyzed.
  • The first convolutional neural network of this embodiment has the same structure as the second; only the input data differs. The first network performs the liver segmentation task by recognizing the liver bounding box, and the second completes the identification of the liver lesion type by recognizing the lesion site, avoiding the misidentification of other organs that easily occurs when a single convolutional neural network performs liver recognition and lesion recognition simultaneously, and preventing other organs from being mixed into the recognition result.
  • step S1 of this embodiment includes:
  • S10 Input sample data of the original CT image into the convolution part of the first convolutional neural network, and extract feature data in the original CT image by using a preset feature extraction manner in the segmentation model.
  • the first convolutional neural network of this embodiment has the same structure as the second convolutional neural network, and both include two parts, a convolution part and an upsampling part.
  • The convolution part identifies the features of the image: for example, an image edge is a first-order feature, the gradation of an edge is a second-order feature, features of locally adjacent edges form third-order texture, and so on; the deeper the network goes, the better it can distinguish objects.
  • S11 Input the feature data into an upsampled portion of the first convolutional neural network to restore a size of the original CT image and output the segmented liver image data.
  • The upsampling portion of this embodiment is used to restore the image to its original size, making the output the same size as the input, so that the segmentation task and the lesion recognition task can be performed accurately. That is, in this embodiment the image size is restored and the segmentation result output through the upsampling part of the first convolutional neural network, and the image size is restored and the lesion recognition result output through the upsampling part of the second convolutional neural network.
  • the convolution part of the first convolutional neural network includes a first convolutional layer, a second convolutional layer, and a maximum pooling layer.
  • Step S10 of the embodiment includes:
  • S100 Iterate a first specified number of times through the first convolutional layer, the second convolutional layer, and the maximum pooling layer in sequence, to output the feature data of the original CT image.
  • In the convolution process of feature extraction, each pass through the first convolutional layer and then the second convolutional layer is recorded as one convolution process Conv: the first pass through the two layers is recorded as Conv1, the second as Conv2, and so on recursively, with the output passing through the maximum pooling layer after each convolution process so as to gradually extract optimized local features.
  • Through the first specified number of iterations of convolution and pooling, the fine features of the original CT image are continuously extracted and unfolded into a deeper and deeper feature space, so as to output accurate feature data of the original CT image.
  • step S100 of this embodiment includes:
  • S1000 Input sample data of the original CT image into the first convolution layer of the convolution portion to train the first-order feature of the local feature of the original CT image.
  • The convolution portion of this embodiment includes convolutional layers and a maximum pooling layer.
  • A convolutional layer obtains local features of the original CT image; for example, the first convolutional layer of this embodiment trains first-order features such as image edges.
  • S1001 Input a first-order feature of the local feature of the original CT image into a second convolution layer of the convolution portion to train a second-order feature of the local feature of the original CT image.
  • For example, the second convolutional layer of this embodiment trains second-order features such as edge changes of the image.
  • S1002 Input a second-order feature of the local feature of the original CT image into a maximum pooling layer of the convolution portion to extract an optimized feature of the local feature of the original CT image.
  • This step uses the maximum pooling layer to reduce parameters while retaining the main features, for example dimensionality reduction; it reduces the amount of computation and prevents over-fitting through non-linearity, improving the generalization ability of the trained model.
  • S1003 Use the optimized features of the local features of the original CT image as the sample data of the original CT image, and iterate through the first convolutional layer, the second convolutional layer, and the maximum pooling layer in sequence until the number of iterations reaches the first specified number, so as to further optimize the weights of the trained model and improve its effectiveness.
  • Further, the convolution part of the first convolutional neural network also includes a discarding (dropout) layer; after step S100, another embodiment of the present application includes:
  • S101 Input feature data of the original CT image into a discarding layer of the convolution portion, and iteratively discard the second specified number of times to output optimized feature data of the original CT image.
  • In addition to the convolutional layers and the maximum pooling layer, the convolution portion of this embodiment includes a discarding (dropout) layer, so as to reduce data redundancy, improve the robustness of the trained model, and output better-optimized feature data.
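  • A hedged PyTorch sketch of one such convolution process (first convolutional layer, second convolutional layer, optional discarding layer, maximum pooling); the 3*3 kernels, ReLU activation, and 0.5 dropout rate follow the architecture table later in this document, while the class name `EncoderBlock` and its interface are our own.

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """One convolution process Conv_i: two 3*3 convolutions with ReLU,
    optionally followed by a discarding (dropout) layer; max pooling then
    halves the spatial size for the next, deeper process."""
    def __init__(self, in_ch: int, out_ch: int, dropout: bool = False):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))
        self.drop = nn.Dropout2d(0.5) if dropout else nn.Identity()
        self.pool = nn.MaxPool2d(2)

    def forward(self, x: torch.Tensor):
        feat = self.drop(self.conv(x))   # kept for later splicing (skip connection)
        return feat, self.pool(feat)     # pooled tensor feeds the next block
```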
  • step S11 of the embodiment includes:
  • S111 Input feature data of the original CT image into an upsampling layer of the upsampling portion to gradually restore the size of the original CT image.
  • The upsampling layer of this embodiment successively restores the deep feature space, unfolded through the convolution and pooling of the convolution part, toward the target label, returning the convolved original CT image to its original size so that the output is the same size as the input, achieving accurate segmentation of the liver in the original CT image.
  • S112 Splice the output data of the upsampling layer with the first-order features of the first convolutional layer or the second-order features of the second convolutional layer through a splicing layer; in each upsampling step, features of the same dimension from the convolution process are superimposed through the splicing layer to prevent gradient vanishing in the deep network and avoid information loss.
  • S113 Input the output data of the splicing layer and the upsampling layer into the third convolution layer, perform full CT image information fusion, and output the segmented liver image data.
  • In this embodiment, each time features are superimposed through a splicing layer, the feature space doubles, and a space-compressing convolution operation is needed to compress it back to the pre-splice space; therefore each splicing layer is immediately followed by a space-compressing third convolutional layer, which fuses the spliced features and compresses the feature space.
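  • A sketch of one upsampling step under the same assumptions: 2*2 upsampling followed by a 2*2 convolution, a splicing (concatenation) layer that doubles the feature space, and the compressing fusion convolutions described above (`padding='same'` requires PyTorch >= 1.9).

```python
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    """One upsampling step: restore spatial size, splice in the encoder
    features of the same dimension, then compress the doubled feature
    space back with the following convolutions."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=2),
            nn.Conv2d(in_ch, out_ch, 2, padding='same'), nn.ReLU(inplace=True))
        self.fuse = nn.Sequential(
            nn.Conv2d(out_ch * 2, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        x = self.up(x)
        x = torch.cat([skip, x], dim=1)  # splicing doubles the feature space
        return self.fuse(x)              # fusion convs compress it back
```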
  • Further, before step S1, the method includes:
  • S3 Add Gaussian noise to the original CT image and rotate it within a specified angle range to generate a rotated image.
  • In this embodiment, each tomogram of each case undergoes data preprocessing in sequence to ensure that the original CT image is enhanced at the original image size, removing unrelated tissue as far as possible and highlighting the liver.
  • Pixels in the range of -100 to 400 gray levels are first filtered according to how the gray range of liver tissue appears on the original CT image, highlighting the contrast of the liver in the image while preserving the original image size. Since the organ contrast displayed by the original CT image within [-100, 400] is not obvious, this embodiment enhances contrast by histogram equalization, stretching the gray levels that contain many pixels and expanding the dynamic range of pixel values.
  • Moreover, this embodiment uses only 20 open-source liver lesion maps as original CT images; since the training samples are very few, Gaussian noise is added to the images and they are randomly rotated within [-30, +30] degrees for data augmentation, improving model robustness and data diversity (a preprocessing sketch follows).
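  • A minimal numpy/scipy sketch of this preprocessing, assuming slices arrive as Hounsfield-unit arrays; the noise standard deviation is our assumption, since the text does not specify it.

```python
import numpy as np
from scipy.ndimage import rotate

def preprocess_slice(hu_slice: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Window to the liver gray range [-100, 400], equalize the histogram,
    then augment with Gaussian noise and a random rotation in [-30, +30]."""
    img = np.clip(hu_slice, -100, 400)                  # filter liver-range pixels
    img = ((img + 100) / 500.0 * 255).astype(np.uint8)  # rescale to 8-bit grays

    # Histogram equalization: stretch gray levels holding many pixels.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255.0
    img = cdf[img]

    img = img + rng.normal(0.0, 5.0, img.shape)         # Gaussian noise (sigma assumed)
    angle = rng.uniform(-30.0, 30.0)                    # random rotation angle
    return rotate(img, angle, reshape=False, order=1, mode='nearest')
```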
  • S4 Calculate the deformation map of the original CT image by applying the specified elastic transformation calculation to the rotated image.
  • The elastic transformation of this embodiment proceeds as follows: (1) for each pixel on each slice of the original CT image, generate two random images A and B with values in [-1, 1]; (2) generate a Gaussian kernel of 105*105 pixels with mean 0 and standard deviation 4, and convolve it with images A and B respectively to obtain convolution results A2 and B2; (3) assign the pixel value of the original CT image at (Xi, Yi) to position (Xi+A2i, Yi+B2i) of the new image, obtaining the deformation map (see the sketch below).
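  • A sketch of that elastic transformation following steps (1)-(3): smoothing uses scipy's `gaussian_filter` with sigma 4 in place of an explicit 105*105 kernel, and the forward pixel assignment is approximated by inverse sampling with `map_coordinates`.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Displace each pixel by a Gaussian-smoothed random field."""
    h, w = image.shape
    a = rng.uniform(-1.0, 1.0, size=(h, w))   # random image A
    b = rng.uniform(-1.0, 1.0, size=(h, w))   # random image B
    a2 = gaussian_filter(a, sigma=4)          # convolution result A2
    b2 = gaussian_filter(b, sigma=4)          # convolution result B2
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    # Pixel (Xi, Yi) moves to (Xi + A2i, Yi + B2i); sampling the source at
    # the displaced coordinates approximates that assignment.
    coords = np.array([ys + b2, xs + a2])
    return map_coordinates(image, coords, order=1, mode='nearest')
```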
  • S5 Organize the original CT image and its corresponding deformation map as the sample data of the original CT image.
  • The discrimination process for 0/1-label CT tomograms is used below to describe the specific flow of liver cancer lesion monitoring on the original CT images of this embodiment:
  • Each input sample is a preprocessed 512*512*n image, where n is the number of CT slices of that sample. The following model training is performed:
  • Convolution, 3*3 kernel, 64 feature maps, ReLU activation, output size 512*512, recorded as conv1 (conv denotes a convolutional layer);
  • Max pooling, 2*2 kernel, output size 256*256;
  • Convolution, 3*3 kernel, 128 feature maps, ReLU activation, output size 256*256, recorded as conv2;
  • Max pooling, 2*2 kernel, output size 128*128;
  • Max pooling, 2*2 kernel, output size 64*64;
  • Max pooling, 2*2 kernel, output size 32*32;
  • Max pooling, 2*2 kernel, output size 16*16;
  • The above is the convolution part; once it is complete, processing enters the upsampling part.
  • Upsampling, 2*2, output size 128*128;
  • Upsampling, 2*2, output size 256*256;
  • Upsampling, 2*2, output size 512*512;
  • The above model structure of the first or second convolutional neural network can be expressed concretely as the architecture table given in the description below.
  • In this embodiment, the position of the original CT image corresponding to the predicted liver bounding box is first extracted as the training picture to be segmented; in the same way, the position corresponding to the lesion is obtained from the predicted bounding box and a 512*512*1 label map is extracted, to train the fully convolutional segmentation network.
  • In the first convolutional neural network, the initial CT slice data is first prepared as data for the first network, and the processed data is input into the first convolutional neural network for model training; training is set to iterate 50 times, each iteration traversing all the input data, with cross entropy (Crossentropy) used as the objective function during training.
  • In the second convolutional neural network, the predicted output of the first convolutional neural network and the enhanced original CT image are used for training: the second network takes as input the first network's input together with the first network's predicted output, with the other parameters unchanged.
  • The data input to the second convolutional neural network differs from that of the first; with different recognition targets, the tasks completed also differ.
  • The first convolutional neural network achieves segmentation of the liver by recognizing the liver bounding box, while the second convolutional neural network achieves lesion recognition by recognizing the lesion location bounding box.
  • This embodiment adopts the Adam strategy for parameter updating (Adam, derived from Adaptive Moment Estimation, is a first-order optimization algorithm that can replace the traditional stochastic gradient descent process and iteratively updates neural network weights based on the training data).
  • The initial learning rate is 1e-5.
  • When the loss on the validation set is less than 1e-7 during training, the learning rate is halved.
  • A decreasing loss indicates that training is still optimizing; once optimization reaches a certain level, the learning rate needs to be reduced to avoid oscillation caused by an overly large learning rate, which would eventually make further optimization impossible.
  • The liver segmentation results during training of the first convolutional neural network may be somewhat inaccurate.
  • To avoid an inaccurate liver boundary bounding box as a result, the enhanced original CT image data is spliced into the input of the second convolutional neural network.
  • This makes the input of the second convolutional neural network contain fairly complete information for lesion segmentation, so no data loss or incompleteness arises that would lower the recognition performance of the whole model.
  • The second convolutional neural network finally outputs a 512*512*1 image, which is the 0/1-label CT tomogram discrimination result.
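  • A hedged PyTorch training sketch of the procedure above: Adam at an initial learning rate of 1e-5, binary cross entropy against the 0/1 label maps, 50 passes over the data, and learning-rate halving; we read the 1e-7 validation-loss criterion as a plateau test, which is one plausible interpretation.

```python
import torch
import torch.nn as nn

def train_model(model, train_loader, val_loader, device, epochs=50):
    opt = torch.optim.Adam(model.parameters(), lr=1e-5)
    loss_fn = nn.BCELoss()                 # cross entropy for 0/1 label maps
    prev_val = float('inf')
    for _ in range(epochs):                # each epoch traverses all input data
        model.train()
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val = sum(loss_fn(model(x.to(device)), y.to(device)).item()
                      for x, y in val_loader) / max(len(val_loader), 1)
        if prev_val - val < 1e-7:          # loss criterion met: halve the rate
            for group in opt.param_groups:
                group['lr'] *= 0.5
        prev_val = val
```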
  • Further, in another embodiment of the present application, after step S2 the method further includes:
  • S20 Obtain lesion information of the lesion position by comparing the output result of the second convolutional neural network with the output result of the first convolutional neural network.
  • In this embodiment, after recognition of the liver lesion type is completed by identifying the lesion site, the lesion information is obtained by comparing the entire liver region with the lesion region.
  • The lesion information of this embodiment includes, but is not limited to, analysis data such as the number, size, distribution ratio, and edge characteristics of the lesion areas, to further improve the accuracy of liver cancer diagnosis.
  • Analysis data such as the number, size, distribution ratio, and edge characteristics of the lesions directly indicate the degree of liver cancer progression. For example, a localized bulge on the surface of liver segment S7/8 containing a huge irregular low-density shadow, about 73×68 mm in size, with unclear boundaries, partly denser than the liver tissue of the same layer, and indistinct edges, suggests a presumptive diagnosis of stage T3aN0M0 IIIA.
  • (By pathological analysis, T1b indicates multiple tumors with diameter >5 cm; N0M0 means no regional lymphatic metastasis and no distant metastasis.) Combined with the presence or absence of lymphatic and distant metastasis, stage T3aN0M0 IIIA can be confirmed.
  • Further, after step S20 of this embodiment, the method includes:
  • S21 Obtain, from external sources, the identity information of the patient corresponding to the lesion information, to form a first database together with the lesion information.
  • the identity information of this step includes age, gender, occupation, and dietary preferences.
  • By building the first database, this embodiment makes it easier to distinguish populations by age, gender, occupation, dietary preference, and so on, to monitor high-incidence groups effectively and improve the effectiveness of cancer prevention.
  • Further, after step S20 in another embodiment of the present application, the method further includes:
  • S22 Find favorable factors for the lesion by acquiring data on how the lesion position changes over time together with the life information of the corresponding patient.
  • The change data in this step includes, but is not limited to, information such as the lesion area becoming larger or smaller and the number of lesion areas decreasing or increasing.
  • The life information of this step includes, but is not limited to, the type, quantity, and frequency of diet; the type, quantity, and frequency of medication; and the quality, duration, and frequency of sleep.
  • The favorable factors of this step are factors that help control the condition, slow its progression, and promote its improvement.
  • The above change data, life information, and favorable factors can be organized into a second database, to facilitate comprehensive and rapid advancement of optimized liver cancer treatment.
  • In this embodiment, a neural network learns the liver features and lesion features in the original CT images; the relationship between CT slices and labels is established through two cascaded fully convolutional neural networks; and the model is trained task-by-task so as to find the optimal network parameters as quickly as possible, complete training, improve doctors' efficiency, and improve the accuracy of disease analysis.
  • The upsampling part of the network model of this embodiment includes splicing, whose purpose is to pull the earliest features forward: a cross-layer connection splices the convolution outputs of earlier layers into later-layer inputs, making up for the lack of data information in a layer that sits deep in the network.
  • Moreover, during model training each upsampling step superimposes features of the same dimension from the convolution steps, preventing gradient vanishing and information loss in the deep convolutional network, so that an accurate model is obtained from relatively little training data; the trained model classifies disease data effectively, forming a classification database and improving the prevention, diagnostic efficiency, and treatment efficiency of liver cancer, which has practical value.
  • a lesion monitoring apparatus includes:
  • the first input/output module 1 is configured to input sample data of the original CT image into a preset segmentation model for segmentation operation, and output the segmented liver image data.
  • the segmented liver image data in this embodiment is the identification data of the liver portion in the original CT image, including all edge feature data of the liver site in the original CT image.
  • The segmentation model of the present embodiment determines the bounding box of the liver portion by identifying the edge feature data of the liver portion in the original CT image, thereby realizing effective segmentation of the liver portion in the original CT image.
  • the second input/output module 2 is configured to input the sample data of the original CT image and the segmented liver image data into a preset recognition model, and output the recognition result;
  • the segmentation model and the recognition model are respectively trained by the first convolutional neural network and the second convolutional neural network, and the first convolutional neural network and the second convolutional neural network are cascaded.
  • In the lesion monitoring apparatus of this embodiment, the relationship between CT slices and labels is established through the cascaded convolutional neural network structure of the first and second convolutional neural networks.
  • The final output of the first convolutional neural network is the input of the second convolutional neural network.
  • The two tasks differ, so the recognition targets of each layer of the two convolutional neural networks also differ.
  • The model is trained task-by-task through the two cascaded network structures, so as to find the optimal network parameters as quickly as possible and complete training.
  • The first convolutional neural network of this embodiment has the same structure as the second; only the input data differs. The first network performs the liver segmentation task by recognizing the liver bounding box.
  • The second convolutional neural network completes the identification of the liver lesion type by recognizing the lesion site, avoiding the misidentification of other organs that easily occurs when a single convolutional neural network performs liver recognition and lesion recognition simultaneously, and preventing other organs from being mixed into the recognition result.
  • the first input/output module 1 includes:
  • the first input unit 10 is configured to input sample data of the original CT image into the convolution portion of the first convolutional neural network, and extract feature data in the original CT image by using a feature extraction manner preset in the segmentation model.
  • the first convolutional neural network of this embodiment has the same structure as the second convolutional neural network, and both include two parts, a convolution part and an upsampling part.
  • The convolution part identifies the features of the image: for example, an image edge is a first-order feature, the gradation of an edge is a second-order feature, features of locally adjacent edges form third-order texture, and so on; the deeper the network goes, the better it can distinguish objects.
  • the second input unit 11 is configured to input the feature data into an upsampled portion of the first convolutional neural network to restore the size of the original CT image and output the segmented liver image data.
  • The upsampling portion of this embodiment is used to restore the image to its original size, making the output the same size as the input, so that the segmentation task and the lesion recognition task can be performed accurately. That is, in this embodiment the image size is restored and the segmentation result output through the upsampling part of the first convolutional neural network, and the image size is restored and the lesion recognition result output through the upsampling part of the second convolutional neural network.
  • the convolution portion of the first convolutional neural network includes a first convolutional layer, a second convolutional layer, and a maximum pooling layer;
  • the first input unit 10 includes:
  • The output sub-module 100 iterates a first specified number of times through the first convolutional layer, the second convolutional layer, and the maximum pooling layer in sequence, to output the feature data of the original CT image.
  • In the convolution process of feature extraction, each pass through the first convolutional layer and then the second convolutional layer is recorded as one convolution process Conv: the first pass is recorded as Conv1, the second as Conv2, and so on recursively, with the output passing through the maximum pooling layer after each convolution process.
  • The fine features of the original CT image are thereby continuously extracted and unfolded into a deeper and deeper feature space, so as to output accurate feature data of the original CT image.
  • the output submodule 100 of this embodiment includes:
  • the first input subunit 1000 is configured to input sample data of the original CT image into the first convolution layer of the convolution portion to train the first-order features of the local features of the original CT image.
  • The convolution portion of this embodiment includes convolutional layers and a maximum pooling layer.
  • A convolutional layer obtains local features of the original CT image; for example, the first convolutional layer of this embodiment trains first-order features such as image edges.
  • the second input subunit 1001 is configured to input a first-order feature of the local feature of the original CT image to a second convolution layer of the convolution portion to train a second-order feature of the local feature of the original CT image.
  • For example, the second convolutional layer of this embodiment trains second-order features such as edge changes of the image.
  • the third input subunit 1002 is configured to input a second-order feature of the local feature of the original CT image into a maximum pooling layer of the convolution portion to extract an optimized feature of the local feature of the original CT image.
  • The maximum pooling layer reduces parameters while retaining the main features, for example dimensionality reduction; it reduces the amount of computation and prevents over-fitting through non-linearity, improving the generalization ability of the trained model.
  • The iteration sub-unit 1003 is configured to use the optimized features of the local features of the original CT image as the sample data of the original CT image, and to iterate through the first convolutional layer, the second convolutional layer, and the maximum pooling layer in sequence until the number of iterations reaches the first specified number.
  • the convolution portion of the first convolutional neural network further includes a discarding layer.
  • the first input unit 10 of another embodiment of the present application further includes:
  • The iteration sub-module 101 is configured to input the feature data of the original CT image into the discarding layer of the convolution portion, and to iteratively discard a second specified number of times so as to output optimized feature data of the original CT image.
  • In addition to the convolutional layers and the maximum pooling layer, the convolution portion of this embodiment includes a discarding (dropout) layer, so as to reduce data redundancy and improve the robustness of the trained model, outputting better-optimized feature data.
  • the second input unit 11 includes:
  • the first input sub-module 111 is configured to input the feature data of the original CT image into the upsampling layer of the upsampling portion to gradually restore the size of the original CT image.
  • The upsampling layer of this embodiment successively restores the deep feature space, unfolded by the convolution and pooling of the convolution part, toward the target label, returning the convolved original CT image to its original size so that the output is the same size as the input, achieving accurate segmentation of the liver in the original CT image.
  • The splicing sub-module 112 is configured to splice the output data of the upsampling layer with the first-order features of the first convolutional layer or the second-order features of the second convolutional layer through the splicing layer.
  • In each upsampling step of this embodiment, features of the same dimension from the convolution process are superimposed through the stitching layer, preventing gradient vanishing in the deep convolutional neural network, avoiding information loss, and improving the precision of the trained model.
  • the second input sub-module 113 is configured to input the output data of the splicing layer and the up-sampling layer into the third convolution layer, perform full CT image information fusion, and output the segmented liver image data.
  • In this embodiment, each time features are superimposed through a splicing layer, the feature space doubles, and a space-compressing convolution operation is needed to compress it back to the pre-splice space; therefore each splicing layer is immediately followed by a space-compressing third convolutional layer, which fuses the spliced features and compresses the feature space.
  • the lesion monitoring apparatus of the embodiment of the present application further includes:
  • The rotation module 3 is configured to add Gaussian noise to the original CT image and rotate it within a specified angle range to generate a rotated image.
  • In this embodiment, each tomogram of each case undergoes data preprocessing in sequence to ensure that the original CT image is enhanced at the original image size, removing unrelated tissue as far as possible and highlighting the liver.
  • Pixels in the range of -100 to 400 gray levels are first filtered according to how the gray range of liver tissue appears on the original CT image, highlighting the contrast of the liver in the image while preserving the original image size. Since the organ contrast displayed by the original CT image within [-100, 400] is not obvious, this embodiment enhances contrast by histogram equalization, stretching the gray levels that contain many pixels and expanding the dynamic range of pixel values.
  • Moreover, this embodiment uses only 20 open-source liver lesion maps as original CT images; since the training samples are very few, Gaussian noise is added to the images and they are randomly rotated within [-30, +30] degrees for data augmentation, improving model robustness and data diversity.
  • The calculation module 4 is configured to obtain the deformation map of the original CT image by applying the specified elastic transformation calculation to the rotated image.
  • In this embodiment, for each pixel on each slice, two random images A and B with values in [-1, 1] are generated; a Gaussian kernel of 105*105 pixels with mean 0 and standard deviation 4 is then generated and convolved with images A and B respectively to obtain convolution results A2 and B2, from which each original pixel (Xi, Yi) is assigned to position (Xi+A2i, Yi+B2i) of the new image.
  • The planning module 5 is configured to organize the original CT image and its corresponding deformation map as the sample data of the original CT image.
  • The discrimination process for 0/1-label CT tomograms is used below to describe the specific flow of liver cancer lesion monitoring on the original CT images of this embodiment:
  • Each sample entered is a preprocessed image of 512*512*n, where n is the number of CT slices for the sample.
  • The model training process is as described in the method section and will not be repeated here.
  • In this embodiment, the position of the original CT image corresponding to the predicted liver bounding box is first extracted as the training picture to be segmented; in the same way, the position corresponding to the lesion is obtained from the predicted bounding box and a 512*512*1 label map is extracted, to train the fully convolutional segmentation network.
  • In the first convolutional neural network, the initial CT slice data is first prepared as data for the first network, and the processed data is input into the first convolutional neural network for model training; training is set to iterate 50 times, each iteration traversing all the input data, with cross entropy used as the objective function during training.
  • In the second convolutional neural network, the predicted output of the first convolutional neural network and the enhanced original CT image are used for training: the second network takes as input the first network's input together with the first network's predicted output, with the other parameters unchanged.
  • The data input to the second convolutional neural network differs from that of the first; with different recognition targets, the tasks completed also differ.
  • The first convolutional neural network achieves segmentation of the liver by recognizing the liver bounding box, while the second convolutional neural network achieves lesion recognition by recognizing the lesion location bounding box.
  • This embodiment adopts the Adam strategy for parameter updating (Adam, derived from Adaptive Moment Estimation, is a first-order optimization algorithm that can replace the traditional stochastic gradient descent process and iteratively updates neural network weights based on the training data).
  • The initial learning rate is 1e-5.
  • When the loss on the validation set is less than 1e-7 during training, the learning rate is halved.
  • A decreasing loss indicates that training is still optimizing; once optimization reaches a certain level, the learning rate needs to be reduced to avoid oscillation caused by an overly large learning rate, which would eventually make further optimization impossible.
  • The liver segmentation results during training of the first convolutional neural network may be somewhat inaccurate.
  • To avoid an inaccurate liver boundary bounding box as a result, the enhanced original CT image data is spliced into the input of the second convolutional neural network.
  • This makes the input of the second convolutional neural network contain fairly complete information for lesion segmentation, so no data loss or incompleteness arises that would lower the recognition performance of the whole model.
  • The second convolutional neural network finally outputs a 512*512*1 image, which is the 0/1-label CT tomogram discrimination result.
  • Further, in an embodiment of the present application, the lesion monitoring apparatus includes:
  • a first obtaining module 20, configured to obtain lesion information of the lesion location by comparing the output result of the second convolutional neural network with the output result of the first convolutional neural network.
  • In this embodiment, after recognition of the liver lesion type is completed by identifying the lesion site, the lesion information is obtained by comparing the entire liver region with the lesion region.
  • The lesion information of this embodiment includes, but is not limited to, analysis data such as the number, size, distribution ratio, and edge characteristics of the lesion areas, to further improve the accuracy of liver cancer diagnosis.
  • Analysis data such as the number, size, distribution ratio, and edge characteristics of the lesions directly indicate the degree of liver cancer progression. For example, a localized bulge on the surface of liver segment S7/8 containing a huge irregular low-density shadow, about 73×68 mm in size, with unclear boundaries, partly denser than the liver tissue of the same layer, and indistinct edges, suggests a presumptive diagnosis of stage T3aN0M0 IIIA (by pathological analysis, T1b indicates multiple tumors with diameter >5 cm; N0M0 means no regional lymphatic metastasis and no distant metastasis); combined with the presence or absence of lymphatic and distant metastasis, stage T3aN0M0 IIIA can be confirmed.
  • the lesion monitoring device of the embodiment includes:
  • a building module 21, configured to acquire, from external sources, the patient identity information corresponding to the lesion information, to form a first database together with the lesion information.
  • The identity information of this embodiment includes age, gender, occupation, dietary preferences, and the like.
  • By building the first database, this embodiment makes it easier to distinguish populations by age, gender, occupation, dietary preference, and so on, to monitor high-incidence groups effectively and improve the effectiveness of cancer prevention.
  • the lesion monitoring device of the embodiment includes:
  • a second obtaining module 22, configured to find favorable factors for the lesion by acquiring data on how the lesion position changes over time together with the life information of the corresponding patient.
  • The change data of this embodiment includes, but is not limited to, information such as the lesion area becoming larger or smaller and the number of lesion areas decreasing or increasing.
  • The life information of this embodiment includes, but is not limited to, the type, quantity, and frequency of diet; the type, quantity, and frequency of medication; and the quality, duration, and frequency of sleep.
  • The favorable factors of this example are factors that help control the condition, slow its progression, and promote its improvement.
  • The above change data, life information, and favorable factors can be organized into a second database, to facilitate comprehensive and rapid advancement of optimized liver cancer treatment.
  • The computer device may be a server, and its internal structure may be as shown in FIG. 10.
  • The computer device includes a processor, a memory, a network interface, and a database connected by a system bus, where the processor of the computer device provides computation and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium, an internal memory.
  • the non-volatile storage medium stores an operating system, computer readable instructions, and a database.
  • The internal memory provides an environment for the operation of the operating system and the computer readable instructions in the non-volatile storage medium.
  • the database of the computer device is used to store data such as lesion monitoring.
  • the network interface of the computer device is used to communicate with an external terminal via a network connection.
  • When executed, the computer readable instructions perform the flow of an embodiment of the methods described above. Those skilled in the art will understand that the structure shown in FIG. 10 is only a block diagram of part of the structure related to the solution of the present application, and does not limit the computer device to which the present application is applied.
  • An embodiment of the present application also provides a computer non-volatile readable storage medium having stored thereon computer readable instructions that, when executed, perform the processes of the embodiments of the methods described above.
  • The above description covers only preferred embodiments of the present application and is not intended to limit its patent scope; equivalent structures or equivalent process transformations made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, are likewise included within the scope of patent protection of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Primary Health Care (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Epidemiology (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A lesion monitoring method, comprising: inputting sample data of an original CT image into a segmentation model for segmentation computation, and outputting segmented liver image data (S1); inputting the sample data of the original CT image and the segmented liver image data into a recognition model and outputting a recognition result (S2), the segmentation model and the recognition model being arranged in cascade. By means of two cascaded fully convolutional neural networks, the method improves the accuracy of disease analysis. A lesion monitoring apparatus, a computer device, and a storage medium are also disclosed.

Description

Lesion monitoring method, apparatus, computer device and storage medium
This application claims priority to Chinese patent application No. 2018103452530, filed with the China Patent Office on April 17, 2018 and entitled "Lesion monitoring method, apparatus, computer device and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of convolutional neural network applications, and in particular to a lesion monitoring method, apparatus, computer device and storage medium.
Background Art
The goal of liver cancer diagnosis is to judge whether the liver region in cross-sectional images of the human body, obtained from CT tomography, contains lesions. The traditional method relies on the doctor's experience to read multiple CT images and locate lesions, so the doctor's experience is very important. However, because a CT tomogram is a grayscale image displaying multiple organs at once, the CT slices related to the liver are quite numerous and the amount of data is very large; reading the images greatly consumes the doctor's mental effort and time, leaving doctors with no more time to receive more patients, analyze conditions, or design treatment plans.
Technical Problem
The main purpose of the present application is to provide a lesion monitoring method, aiming to solve the technical problem that existing liver cancer diagnosis relies mainly on the doctor's medical experience, making diagnosis time-consuming and inefficient.
Technical Solution
The present application proposes a lesion monitoring method, comprising:
inputting sample data of an original CT image into a preset segmentation model for a segmentation operation, and outputting segmented liver image data;
inputting the sample data of the original CT image and the segmented liver image data into a preset recognition model for computation, and outputting a recognition result;
wherein the segmentation model and the recognition model are obtained by training a first convolutional neural network and a second convolutional neural network respectively, the first and second convolutional neural networks being arranged in cascade.
The present application also provides a lesion monitoring apparatus, comprising:
a first input/output module, configured to input sample data of an original CT image into a preset segmentation model for a segmentation operation, and output segmented liver image data;
a second input/output module, configured to input the sample data of the original CT image and the segmented liver image data into a preset recognition model for computation, and output a recognition result;
wherein the segmentation model and the recognition model are obtained by training a first convolutional neural network and a second convolutional neural network respectively, the first and second convolutional neural networks being arranged in cascade.
The present application also provides a computer device, comprising a memory and a processor, the memory storing computer readable instructions, the processor implementing the steps of the above method when executing the computer readable instructions.
The present application also provides a computer non-volatile readable storage medium on which computer readable instructions are stored, the computer readable instructions implementing the steps of the above method when executed by a processor.
Beneficial Effects
The present application has beneficial technical effects: with the help of a neural network learning the liver features and lesion features in original CT images, the relationship between CT slices and labels is established through two cascaded fully convolutional neural networks, and the model is trained task-by-task so as to find the optimal network parameters as quickly as possible and complete training, improving doctors' efficiency and the accuracy of disease analysis. The upsampling part of the network model of this application includes splicing, whose purpose is to pull the earliest features forward: a cross-layer connection splices the convolution outputs of earlier layers into later-layer inputs, making up for the lack of data information in layers deep in the network. Moreover, during model training each upsampling step superimposes features of the same dimension from the convolution steps, preventing gradient vanishing and information loss in the deep convolutional network, so that an accurate model is obtained from relatively little training data. The trained model classifies disease data effectively, forming a classification database and improving the prevention, diagnostic efficiency, and treatment efficiency of liver cancer, which has practical value.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a lesion monitoring method according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a lesion monitoring apparatus according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a first input/output module according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a first input unit according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an output sub-module according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a first input unit according to another embodiment of the present application;
FIG. 7 is a schematic structural diagram of a second input unit according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of an apparatus for optimizing lesion monitoring according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a lesion monitoring apparatus according to another embodiment of the present application;
FIG. 10 is a schematic diagram of the internal structure of a computer device according to an embodiment of the present application.
Best Mode for Carrying Out the Invention
Referring to FIG. 1, a lesion monitoring method according to an embodiment of the present application includes:
S1: Input sample data of an original CT image into a preset segmentation model for a segmentation operation, and output segmented liver image data.
The segmented liver image data in this step is the identification data of the liver portion in the original CT image, including all edge feature data of the liver portion in the original CT image. The segmentation model of this embodiment determines the bounding box of the liver portion by recognizing the edge feature data of the liver portion in the original CT image, achieving effective segmentation of the liver portion in the original CT image.
S2: Input the sample data of the original CT image and the segmented liver image data into a preset recognition model for computation, and output a recognition result;
wherein the segmentation model and the recognition model are obtained by training a first convolutional neural network and a second convolutional neural network respectively, the first and second convolutional neural networks being arranged in cascade.
In the lesion monitoring method of this embodiment, the relationship between CT slices and labels is established through the cascaded structure of the first and second convolutional neural networks. (CT means Computed Tomography: volumetric CT tomogram data of a certain part of the human body is formed using precisely collimated X-ray beams, γ-rays, ultrasonic waves, etc. A CT tomogram has a certain thickness and is composed of multiple cross-sectional CT slices arranged in sequence. For effective distinction, each CT tomogram has a corresponding label and each CT slice also has a corresponding label, indicating the order of the tomogram and of the slice within the whole data set, so that each maps accurately to the corresponding position of the solid organ. The original CT images in this application include the original CT slices.) The CT data of the liver is composed of several CT tomograms arranged in sequence, each in turn composed of multiple CT slices arranged in sequence; to ensure that the ordering of the tomograms and slices corresponds to the real structure of the organ, labels are introduced for each tomogram and each slice so that ordering and correspondence to the liver site can be established. The original CT images input to the first convolutional neural network are the labeled CT slices; liver segmentation is achieved by extracting features from the labeled slices and restoring the image size. The labeled output information of the first network is input to the second convolutional neural network, so that the CT slice data of the first network corresponds, via the labels, to the CT slices in the second network, allowing the lesion position in the liver output by the first network to be determined accurately; a label in this embodiment is information including content such as a sort number. The final output of the first convolutional neural network is the input of the second; the two tasks differ, so the recognition targets of each layer of the two networks also differ. The model is trained task-by-task through the two cascaded network structures, so as to find the optimal network parameters as quickly as possible and complete training. In this embodiment, the feature analysis and extraction principle is the same during model training and in actual application: in training, the model is trained on many samples to determine the model parameters; in actual application, the parameters are already determined and features are extracted only from the samples to be analyzed. The first convolutional neural network of this embodiment has the same structure as the second; only the input data differs. The first network performs the liver segmentation task by recognizing the liver bounding box, and the second completes the identification of the liver lesion type by recognizing the lesion site, avoiding the misidentification of other organs that easily occurs when a single convolutional neural network performs liver recognition and lesion recognition simultaneously, and preventing other organs from being mixed into the recognition result.
Further, step S1 of this embodiment includes:
S10: Input the sample data of the original CT image into the convolution part of the first convolutional neural network, and extract feature data from the original CT image using the feature extraction manner preset in the segmentation model.
The first convolutional neural network of this embodiment has the same structure as the second; both include two parts, a convolution part and an upsampling part. The convolution part identifies the features of the image: for example, an image edge is a first-order feature, the gradation of an edge is a second-order feature, features of locally adjacent edges form third-order texture, and so on; the deeper the network goes, the better it can distinguish objects.
S11: Input the feature data into the upsampling part of the first convolutional neural network, to restore the size of the original CT image and output the segmented liver image data.
The upsampling part of this embodiment restores the image to its original size, making the output the same size as the input, so that the segmentation task and the lesion recognition task can be performed accurately. That is, in this embodiment the image size is restored and the segmentation result output through the upsampling part of the first convolutional neural network, and the image size is restored and the lesion recognition result output through the upsampling part of the second convolutional neural network.
Further, the convolution part of the first convolutional neural network includes a first convolutional layer, a second convolutional layer, and a maximum pooling layer; step S10 of this embodiment includes:
S100: Iterate a first specified number of times through the first convolutional layer, the second convolutional layer, and the maximum pooling layer in sequence, to output the feature data of the original CT image.
In the convolution process of feature extraction in this embodiment, each pass through the first convolutional layer and then the second convolutional layer is recorded as one convolution process Conv: the first pass through the two layers is recorded as Conv1, the second as Conv2, and so on recursively, with the output passing through the maximum pooling layer after each convolution process so as to gradually extract optimized local features. Through the first specified number of iterations of convolution and pooling, this embodiment continuously extracts the fine features of the original CT image and unfolds them into a deeper and deeper feature space, outputting accurate feature data of the original CT image.
Further, step S100 of this embodiment includes:
S1000: Input the sample data of the original CT image into the first convolutional layer of the convolution part, to train first-order features of the local features of the original CT image.
The convolution part of this embodiment includes convolutional layers and a maximum pooling layer; a convolutional layer obtains local features of the original CT image. For example, the first convolutional layer of this embodiment trains first-order features such as image edges.
S1001: Input the first-order features of the local features of the original CT image into the second convolutional layer of the convolution part, to train second-order features of the local features of the original CT image.
For example, the second convolutional layer of this embodiment trains second-order features such as edge changes of the image.
S1002: Input the second-order features of the local features of the original CT image into the maximum pooling layer of the convolution part, to extract optimized features of the local features of the original CT image.
This step uses the maximum pooling layer to reduce parameters while retaining the main features, for example dimensionality reduction; it reduces the amount of computation and prevents over-fitting through non-linearity, improving the generalization ability of the trained model.
S1003: Use the optimized features of the local features of the original CT image as the sample data of the original CT image, and iterate through the first convolutional layer, the second convolutional layer, and the maximum pooling layer in sequence until the number of iterations reaches the first specified number, so as to further optimize the weights of the trained model and improve its effectiveness.
Further, the convolution part of the first convolutional neural network also includes a discarding (dropout) layer; after step S100, another embodiment of the present application includes:
S101: Input the feature data of the original CT image into the discarding layer of the convolution part, and iteratively discard a second specified number of times, to output optimized feature data of the original CT image.
In addition to the convolutional layers and the maximum pooling layer, the convolution part of this embodiment includes a discarding layer, so as to reduce data redundancy, improve the robustness of the trained model, and output better-optimized feature data.
Further, step S11 of this embodiment includes:
S111: Input the feature data of the original CT image into the upsampling layer of the upsampling part, to gradually restore the size of the original CT image.
The upsampling layer of this embodiment successively restores the deep feature space, unfolded through the repeated convolution and pooling of the convolution part, toward the target label, returning the convolved original CT image to its original size so that the output is the same size as the input, achieving accurate segmentation of the liver portion of the original CT image.
S112: Splice the output data of the upsampling layer with the first-order features of the first convolutional layer or the second-order features of the second convolutional layer through a splicing layer.
In each upsampling step of this embodiment, features of the same dimension from the convolution process are superimposed through the splicing layer, preventing gradient vanishing in the deep convolutional neural network, avoiding information loss, and improving the precision of the trained model.
S113: Input the output data of the splicing layer and the upsampling layer into a third convolutional layer, perform full-CT-image information fusion, and output the segmented liver image data.
In this embodiment, each time features are superimposed through a splicing layer, the feature space doubles, and a space-compressing convolution operation is needed to compress it back to the pre-splice space; therefore each splicing layer is immediately followed by a space-compressing third convolutional layer, which fuses the spliced features and compresses the feature space.
Further, before step S1, the method includes:
S3: Add Gaussian noise to the original CT image and rotate it within a specified angle range, to generate a rotated image.
In this embodiment, each tomogram of each case undergoes data preprocessing in sequence, to ensure that the original CT image is enhanced at the original image size, removing unrelated tissue as far as possible and highlighting the liver. First, based on how the gray range of liver tissue appears on the original CT image, pixels in the gray range of -100 to 400 are filtered out to highlight the contrast of the liver in the image while preserving the original image size. Since the organ contrast displayed by the original CT image within [-100, 400] is not obvious, this embodiment enhances contrast by histogram equalization, stretching the gray levels that contain many pixels and expanding the dynamic range of pixel values. Moreover, this embodiment uses only 20 open-source liver lesion maps as original CT images; since the training samples are very few, Gaussian noise is added to the images and they are randomly rotated within [-30, +30] degrees for data augmentation, improving model robustness and data diversity.
S4: Apply the specified elastic transformation calculation to the rotated image, to obtain the deformation map of the original CT image.
The elastic transformation calculation process of this embodiment includes:
(1) For each pixel on each slice of the original CT image, generate two random images A and B with values in the range [-1, 1].
(2) Generate a Gaussian kernel of 105*105 pixels with mean 0 and standard deviation 4, and convolve the Gaussian kernel with images A and B respectively, to obtain convolution results A2 and B2.
(3) Using the convolution results A2 and B2, assign the pixel value of the original CT image at (Xi, Yi) to position (Xi+A2i, Yi+B2i) of the new image, obtaining the deformation map of the original CT image.
S5: Organize the original CT image and its corresponding deformation map as the sample data of the original CT image.
The discrimination process for 0/1-label CT tomograms is used below to describe the specific flow of liver cancer lesion monitoring on the original CT images of this embodiment:
First, enhancement operations are performed on 20 open-source CT tomogram samples (obtained from https://www.ircad.fr/research/3d-ircadb-01/), each sample having multiple CT slices. For example, to discriminate 0/1-label CT tomograms, the training model's input is a 512*512*1 grayscale CT slice and its output is a 512*512*1 label.
Each input sample is a preprocessed 512*512*n image, where n is the number of CT slices of that sample. The following model training is performed:
Convolution, 3*3 kernel, 64 feature maps, ReLU activation, output size 512*512;
Convolution, 3*3 kernel, 64 feature maps, ReLU activation, output size 512*512, recorded as conv1 (conv denotes a convolutional layer);
Max pooling, 2*2 kernel, output size 256*256;
Convolution, 3*3 kernel, 128 feature maps, ReLU activation, output size 256*256;
Convolution, 3*3 kernel, 128 feature maps, ReLU activation, output size 256*256, recorded as conv2;
Max pooling, 2*2 kernel, output size 128*128;
Convolution, 3*3 kernel, 256 feature maps, ReLU activation, output size 128*128;
Convolution, 3*3 kernel, 256 feature maps, ReLU activation, output size 128*128, recorded as conv3;
Max pooling, 2*2 kernel, output size 64*64;
Convolution, 3*3 kernel, 512 feature maps, ReLU activation, output size 64*64;
Convolution, 3*3 kernel, 512 feature maps, ReLU activation, output size 64*64, recorded as conv4;
Discard: randomly set half of conv4's outputs to 0; the output is recorded as drop4 (drop denotes a discarding layer);
Max pooling, 2*2 kernel, output size 32*32;
Convolution, 3*3 kernel, 1024 feature maps, ReLU activation, output size 32*32;
Convolution, 3*3 kernel, 1024 feature maps, ReLU activation, output size 32*32, recorded as conv5;
Discard: randomly set half of conv5's outputs to 0; the output is recorded as drop5;
Max pooling, 2*2 kernel, output size 16*16;
Convolution, 3*3 kernel, 2048 feature maps, ReLU activation, output size 16*16;
Convolution, 3*3 kernel, 2048 feature maps, ReLU activation, output size 16*16, recorded as conv6;
Discard: randomly set half of conv6's outputs to 0; the output is recorded as drop6;
The above is the convolution part; once complete, processing enters the upsampling part.
Upsampling, 2*2, output size 32*32;
Convolution, 2*2 kernel, 1024 feature maps, ReLU activation, output size 32*32, recorded as up7 (up denotes an upsampling layer);
Splice drop5 and up7, outputting 2048 feature maps of size 32*32;
Convolution, 3*3 kernel, 1024 feature maps, ReLU activation, output size 32*32;
Convolution, 3*3 kernel, 1024 feature maps, ReLU activation, output size 32*32;
Upsampling, 2*2, output size 64*64;
Convolution, 2*2 kernel, 512 feature maps, ReLU activation, output size 64*64, recorded as up8;
Splice drop4 and up8, outputting 1024 feature maps of size 64*64;
Convolution, 3*3 kernel, 512 feature maps, ReLU activation, output size 64*64;
Convolution, 3*3 kernel, 512 feature maps, ReLU activation, output size 64*64;
Upsampling, 2*2, output size 128*128;
Convolution, 2*2 kernel, 256 feature maps, ReLU activation, output size 128*128, recorded as up9;
Splice conv3 and up9, outputting 512 feature maps of size 128*128;
Convolution, 3*3 kernel, 256 feature maps, ReLU activation, output size 128*128;
Convolution, 3*3 kernel, 256 feature maps, ReLU activation, output size 128*128;
Upsampling, 2*2, output size 256*256;
Convolution, 2*2 kernel, 128 feature maps, ReLU activation, output size 256*256, recorded as up10;
Splice conv2 and up10, outputting 256 feature maps of size 256*256;
Convolution, 3*3 kernel, 128 feature maps, ReLU activation, output size 256*256;
Convolution, 3*3 kernel, 128 feature maps, ReLU activation, output size 256*256;
Upsampling, 2*2, output size 512*512;
Convolution, 2*2 kernel, 64 feature maps, ReLU activation, output size 512*512, recorded as up11;
Splice conv1 and up11, outputting 128 feature maps of size 512*512;
Convolution, 3*3 kernel, 64 feature maps, ReLU activation, output size 512*512;
Convolution, 3*3 kernel, 64 feature maps, ReLU activation, output size 512*512;
Convolution, 3*3 kernel, 2 feature maps, ReLU activation, output size 512*512;
Convolution, 1*1 kernel, 1 feature map, sigmoid activation, output size 512*512.
The above is the model structure of the first or the second convolutional neural network, which can be represented concretely as the following table:
| Layer label | Layer type | Kernel / parameter | Feature maps | Output size |
|---|---|---|---|---|
|  | Convolution | 3*3 | 64 | 512*512*64 |
| Conv1 | Convolution | 3*3 | 64 | 512*512*64 |
|  | Max pooling | 2*2 | \ | 256*256*64 |
|  | Convolution | 3*3 | 128 | 256*256*128 |
| Conv2 | Convolution | 3*3 | 128 | 256*256*128 |
|  | Max pooling | 2*2 | \ | 128*128*128 |
|  | Convolution | 3*3 | 256 | 128*128*256 |
| Conv3 | Convolution | 3*3 | 256 | 128*128*256 |
|  | Max pooling | 2*2 | \ | 64*64*256 |
|  | Convolution | 3*3 | 512 | 64*64*512 |
|  | Convolution | 3*3 | 512 | 64*64*512 |
| Drop4 | Discard | 0.5 | \ | 64*64*512 |
|  | Max pooling | 2*2 | \ | 32*32*512 |
|  | Convolution | 3*3 | 1024 | 32*32*1024 |
|  | Convolution | 3*3 | 1024 | 32*32*1024 |
| Drop5 | Discard | 0.5 | \ | 32*32*1024 |
|  | Max pooling | 2*2 | \ | 16*16*1024 |
|  | Convolution | 3*3 | 2048 | 16*16*2048 |
|  | Convolution | 3*3 | 2048 | 16*16*2048 |
| Drop6 | Discard | 0.5 | \ | 16*16*2048 |
|  | Upsampling | 2*2 | \ | 32*32*2048 |
| Up7 | Convolution | 2*2 | 1024 | 32*32*1024 |
|  | Splice | Drop5 | 2048 | 32*32*2048 |
|  | Convolution | 3*3 | 1024 | 32*32*1024 |
|  | Convolution | 3*3 | 1024 | 32*32*1024 |
|  | Upsampling | 2*2 | \ | 64*64*1024 |
| Up8 | Convolution | 2*2 | 512 | 64*64*512 |
|  | Splice | Drop4 | 1024 | 64*64*1024 |
|  | Convolution | 3*3 | 512 | 64*64*512 |
|  | Convolution | 3*3 | 512 | 64*64*512 |
|  | Upsampling | 2*2 | \ | 128*128*512 |
| Up9 | Convolution | 2*2 | 256 | 128*128*256 |
|  | Splice | Conv3 | 512 | 128*128*512 |
|  | Convolution | 3*3 | 256 | 128*128*256 |
|  | Convolution | 3*3 | 256 | 128*128*256 |
|  | Upsampling | 2*2 | \ | 256*256*256 |
| Up10 | Convolution | 2*2 | 128 | 256*256*128 |
|  | Splice | Conv2 | 256 | 256*256*256 |
|  | Convolution | 3*3 | 128 | 256*256*128 |
|  | Convolution | 3*3 | 128 | 256*256*128 |
|  | Upsampling | 2*2 | \ | 512*512*128 |
| Up11 | Convolution | 2*2 | 64 | 512*512*64 |
|  | Splice | Conv1 | 128 | 512*512*128 |
|  | Convolution | 3*3 | 64 | 512*512*64 |
|  | Convolution | 3*3 | 64 | 512*512*64 |
|  | Convolution | 3*3 | 2 | 512*512*2 |
|  | Convolution (sigmoid) | 1*1 | 1 | 512*512*1 |
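A PyTorch sketch assembling the table above end-to-end may make the structure easier to follow; it is an approximation (nearest-neighbor 2*2 upsampling, 'same' padding, dropout 0.5 from Drop4 onward), not the patent's reference implementation, and the class name `LiverUNet` is ours. Either network of the cascade can be instantiated this way; the second network would take 2 input channels (the original slice plus the first network's prediction) instead of 1.

```python
import torch
import torch.nn as nn

def double_conv(cin: int, cout: int) -> nn.Sequential:
    """Two 3*3 convolutions with ReLU (one convolution process)."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class LiverUNet(nn.Module):
    """Approximation of the architecture table: six convolution processes
    with max pooling (dropout from Drop4 on), then five upsampling steps
    with splicing, ending in a 2-map conv and a 1*1 sigmoid conv."""
    def __init__(self, in_ch: int = 1):
        super().__init__()
        chs = [64, 128, 256, 512, 1024, 2048]
        self.enc = nn.ModuleList()
        cin = in_ch
        for i, c in enumerate(chs):
            layers = [double_conv(cin, c)]
            if i >= 3:
                layers.append(nn.Dropout2d(0.5))  # Drop4, Drop5, Drop6
            self.enc.append(nn.Sequential(*layers))
            cin = c
        self.pool = nn.MaxPool2d(2)
        self.up, self.dec = nn.ModuleList(), nn.ModuleList()
        for c in [1024, 512, 256, 128, 64]:       # Up7 .. Up11
            self.up.append(nn.Sequential(
                nn.Upsample(scale_factor=2),
                nn.Conv2d(c * 2, c, 2, padding='same'), nn.ReLU(inplace=True)))
            self.dec.append(double_conv(c * 2, c))
        self.head = nn.Sequential(
            nn.Conv2d(64, 2, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(2, 1, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        skips = []
        for i, block in enumerate(self.enc):
            x = block(x)
            if i < len(self.enc) - 1:
                skips.append(x)          # conv1..drop5, used for splicing
                x = self.pool(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = dec(torch.cat([skip, up(x)], dim=1))
        return self.head(x)              # (batch, 1, 512, 512) 0/1 map
```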
In this embodiment, the position of the original CT image corresponding to the predicted liver bounding box is first extracted as the training picture to be segmented; in the same way, the position corresponding to the lesion is obtained from the predicted bounding box and a 512*512*1 label map is extracted, to train the fully convolutional segmentation network. In the first convolutional neural network, the initial CT slice data is first prepared as data for the first network, and the processed data is input into the first convolutional neural network for model training; training is set to iterate fifty times, each iteration traversing all the input data, with cross entropy (Crossentropy) used as the objective function during training. The cross-entropy function of this embodiment is expressed as:
$$C = -\frac{1}{n}\sum_{x}\left[y\ln a + (1-y)\ln(1-a)\right]$$
where y is the expected output, a is the actual output of the neuron, n is the number of samples, a = σ(z), and z = ∑Wj*Xj + b. The second convolutional neural network is trained with the prediction output of the first convolutional neural network together with the augmented original CT image: it takes the input of the first convolutional neural network and the first network's prediction output as its own input, with the other parameters unchanged. The two networks receive different input data, recognize different objects, and therefore perform different tasks: the first convolutional neural network segments the liver by recognizing the liver bounding box, while the second convolutional neural network recognizes lesions by recognizing the lesion-location bounding box.
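As a non-limiting illustration, the cross-entropy above can be computed with NumPy as follows; the function name and the eps guard against log(0) are illustrative choices:

import numpy as np

def binary_cross_entropy(y, a, eps=1e-12):
    # y: expected 0/1 labels; a: actual sigmoid outputs of the neurons
    a = np.clip(a, eps, 1.0 - eps)   # keep log() finite at a = 0 or 1
    return -np.mean(y * np.log(a) + (1.0 - y) * np.log(1.0 - a))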
This embodiment updates the parameters with the Adam strategy (Adam is a first-order optimization algorithm that can replace the conventional stochastic gradient descent procedure; it iteratively updates the neural network weights based on the training data, and its name derives from Adaptive Moment Estimation), with an initial learning rate of 1e-5. During training, when the decrease of the loss on the validation set falls below 1e-7, the learning rate is halved: a decreasing loss indicates that training is still improving, and once optimization reaches a certain point the learning rate must be lowered to avoid the oscillation caused by an overly large learning rate, which would ultimately prevent further optimization. The liver segmentation produced while training the first convolutional neural network may be inaccurate in places; to keep the resulting liver boundary bounding box from being inaccurate, the augmented original CT image data are concatenated into the input of the second convolutional neural network, so that this input carries fairly complete information for lesion segmentation and no data loss or incompleteness degrades the recognition performance of the overall model. The second convolutional neural network finally outputs a 512*512*1 image, which is the 0/1-labeled CT slice discrimination result.
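As a non-limiting illustration, the Adam setup and the loss-plateau halving described above can be sketched with Keras callbacks; build_network is the illustrative helper from the earlier sketch, and x_train/y_train and x_val/y_val stand for hypothetical 512*512*1 slice and label arrays:

import tensorflow as tf

model = build_network()  # illustrative U-Net-style model from the earlier sketch
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="binary_crossentropy")

# Halve the learning rate once the validation loss improves by less than 1e-7
halve_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss", factor=0.5, min_delta=1e-7, patience=1)

model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=50, batch_size=1, callbacks=[halve_lr])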
Further, in another embodiment of the present application, after step S2, the method further includes:
S20: obtaining lesion information of the lesion location by comparing the output result of the second convolutional neural network with the output result of the first convolutional neural network.
After the task of recognizing the liver lesion type has been completed by recognizing the lesion site, this embodiment compares the whole liver region with the lesion region to obtain lesion information. The lesion information of this embodiment includes, but is not limited to, analytical data such as the number, size, distribution proportion and edge characteristics of the lesion regions, which further improve the diagnostic accuracy for liver cancer; such analytical data directly reflect the degree of progression of the cancer. For example, given a local bulge on the surface of liver segment S7/8 containing a huge irregular low-density shadow, about 73×68mm in size, with an unclear boundary, a density partly higher than the surrounding liver in the same slice, and poorly defined edges, a diagnosis of stage T3aN0M0 IIIA can be inferred (from pathological analysis, T3a denotes multiple tumors with diameter >5cm; N0M0 denotes no regional lymph node metastasis and no distant metastasis); combined with the presence or absence of lymph node metastasis and of distant metastasis, stage T3aN0M0 IIIA can then be confirmed or ruled out.
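As a non-limiting illustration, the number, sizes and distribution proportion of lesion regions can be derived from the two output masks with connected-component labeling; the function name and the pixel_area_mm2 parameter are illustrative assumptions:

import numpy as np
from scipy import ndimage

def lesion_metrics(liver_mask, lesion_mask, pixel_area_mm2=1.0):
    # liver_mask, lesion_mask: 0/1 arrays from the first and second networks
    labels, count = ndimage.label(lesion_mask)             # connected lesion regions
    sizes = ndimage.sum(lesion_mask, labels, range(1, count + 1)) * pixel_area_mm2
    ratio = lesion_mask.sum() / max(liver_mask.sum(), 1)   # lesion share of liver area
    return {"count": int(count),
            "sizes_mm2": np.atleast_1d(sizes),
            "liver_ratio": float(ratio)}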
Further, after step S20 of this embodiment, the method includes:
S21: externally obtaining patient identity information corresponding to the lesion information, so as to build a first database together with the lesion information.
The identity information in this step includes age, sex, occupation, dietary preference, and the like. By building the first database, this embodiment makes it easier to differentiate cases by age, sex, occupation, dietary preference and similar attributes, monitor high-incidence groups effectively, and improve the effectiveness of cancer prevention.
Further, after step S20 of another embodiment of the present application, the method further includes:
S22: obtaining data on how the lesion location changes over time together with the corresponding patient's lifestyle information, so as to identify favorable factors for the lesion.
The change data in this step include, but are not limited to, the enlargement or shrinkage of lesion regions and the increase or decrease of their number. The lifestyle information in this step includes, but is not limited to, diet type, quantity and frequency; medication type, quantity and frequency; and sleep quality, duration and frequency. The favorable factors in this step denote factors that help control the disease, slow its progression, or promote recovery. The above change data, lifestyle information and favorable factors may be organized into a second database, so as to advance the optimized treatment of liver cancer comprehensively and rapidly.
By learning the liver features and lesion features of the original CT images with neural networks, this embodiment establishes the relationship between CT slices and labels through two cascaded fully convolutional neural networks and trains the model per task, so that the optimal network parameters are found quickly, model training is completed, physicians' efficiency is improved, and the accuracy of disease analysis is raised. The upsampling part of the network model includes concatenation, whose purpose is to pull the earliest features forward: through cross-layer connections, the convolution outputs of several earlier layers are concatenated into the inputs of later layers, compensating for the information shortage a layer suffers from sitting deep in the network. In addition, each upsampling step of model training superimposes features of the same dimensions from the convolution steps, preventing the gradient vanishing and information loss that can occur in deep convolutional networks, so that an accurate trained model is obtained from relatively little training data. The trained model classifies disease data effectively into a classification database, improving the prevention, diagnostic efficiency and treatment efficiency of liver cancer, and is of practical value.
Referring to FIG. 2, a lesion monitoring apparatus according to an embodiment of the present application includes:
a first input/output module 1, configured to input sample data of an original CT image into a preset segmentation model for a segmentation operation and output segmented liver image data.
In this embodiment, the segmented liver image data are the recognition data of the liver region in the original CT image, including all edge feature data of the liver region in the original CT image. By recognizing the edge feature data of the liver region in the original CT image, the segmentation model of this embodiment determines the bounding box of the liver region, thereby effectively segmenting the liver from the original CT image.
a second input/output module 2, configured to input the sample data of the original CT image and the segmented liver image data into a preset recognition model for an operation and output a recognition result;
wherein the segmentation model and the recognition model are obtained by training a first convolutional neural network and a second convolutional neural network respectively, and the first convolutional neural network and the second convolutional neural network are arranged in cascade.
In the lesion monitoring method of this embodiment, the relationship between CT slices and labels is established through the two cascaded convolutional neural network structures, the first and the second: the final output of the first convolutional neural network is the input of the second. Since the two tasks differ, the recognition target of each layer also differs between the two networks; training the model per task through the two cascaded network structures allows the optimal network parameters to be found quickly and model training to be completed. The first and second convolutional neural networks of this embodiment share the same structure and differ only in input data, so that the first network completes the liver segmentation task by recognizing the liver bounding box while the second network completes the liver lesion type recognition task by recognizing the lesion site. This avoids the misrecognition of other organs that readily occurs when a single convolutional neural network performs liver recognition and liver lesion recognition at the same time, and prevents other organs from contaminating the recognition result.
Referring to FIG. 3, the first input/output module 1 includes:
a first input unit 10, configured to input the sample data of the original CT image into the convolution part of the first convolutional neural network and extract feature data of the original CT image by the feature extraction manner preset in the segmentation model.
The first and second convolutional neural networks of this embodiment share the same structure, each comprising two parts: a convolution part and an upsampling part. The convolution part recognizes image features: for example, image edges are first-order features, edge gradients are second-order features, locally adjacent edge features compose third-order textures, and so on; the deeper the network, the better it discriminates objects.
a second input unit 11, configured to input the feature data into the upsampling part of the first convolutional neural network, so as to restore the size of the original CT image and output the segmented liver image data.
The upsampling part of this embodiment restores the image to its original size, making the output the same size as the input, so that the segmentation task and the lesion recognition task can be completed precisely. That is, in this embodiment the upsampling part of the first convolutional neural network restores the image size and outputs the segmentation result, and the upsampling part of the second convolutional neural network restores the image size and outputs the lesion recognition result.
Referring to FIG. 4, the convolution part of the first convolutional neural network includes a first convolutional layer, a second convolutional layer and a max pooling layer; the first input unit 10 includes:
an output submodule 100, configured to pass the data through the first convolutional layer, the second convolutional layer and the max pooling layer in sequence for a first specified number of iterations, so as to output the feature data of the original CT image.
In the feature-extraction convolution process of this embodiment, each pass through the first convolutional layer and then the second convolutional layer is denoted one convolution process Conv: the first pass through the two layers is denoted Conv1, the second pass Conv2, and so on, and the output of each convolution process is passed through the max pooling layer, so that optimized local features are extracted step by step. Through multiple iterations of convolution and pooling, this embodiment continuously extracts the fine features of the original CT image and unfolds them into an increasingly deep feature space, so as to output accurate feature data of the original CT image.
Referring to FIG. 5, the output submodule 100 of this embodiment includes:
a first input subunit 1000, configured to input the sample data of the original CT image into the first convolutional layer of the convolution part, so as to train first-order features of the local features of the original CT image.
The convolution part of this embodiment includes convolutional layers and a max pooling layer. The convolutional layers capture the local features of the original CT image; for example, the first convolutional layer of this embodiment trains first-order features such as image edges.
a second input subunit 1001, configured to input the first-order features of the local features of the original CT image into the second convolutional layer of the convolution part, so as to train second-order features of the local features of the original CT image.
For example, the second convolutional layer of this embodiment trains second-order features such as edge gradients.
a third input subunit 1002, configured to input the second-order features of the local features of the original CT image into the max pooling layer of the convolution part, so as to extract optimized features of the local features of the original CT image.
Through the max pooling layer, this embodiment reduces parameters while retaining the main features, for example by dimensionality reduction, which cuts the amount of computation, and its nonlinearity prevents overfitting and improves the generalization ability of the trained model.
an iteration subunit 1003, configured to take the optimized features of the local features of the original CT image as the sample data of the original CT image and iterate through the first convolutional layer, the second convolutional layer and the max pooling layer in sequence, until the number of iterations reaches the first specified number.
Referring to FIG. 6, the convolution part of the first convolutional neural network further includes a dropout layer; the first input unit 10 of another embodiment of the present application further includes:
an iteration submodule 101, configured to input the feature data of the original CT image into the dropout layer of the convolution part and iterate dropout a second specified number of times, so as to output optimized feature data of the original CT image.
Besides the convolutional layers and the max pooling layer, the convolution part of this embodiment includes a dropout layer, which reduces data redundancy and improves the robustness of the trained model, so as to output better-optimized feature data.
Referring to FIG. 7, the second input unit 11 includes:
a first input submodule 111, configured to input the feature data of the original CT image into the upsampling layer of the upsampling part, so as to progressively restore the size of the original CT image.
The upsampling layer of this embodiment successively restores the target label from the deep feature space produced by the repeated convolution and pooling of the convolution part, restoring the convolved original CT image to its original size so that the output has the same size as the input, thereby precisely segmenting the liver region of the original CT image.
a concatenation submodule 112, configured to concatenate, through a concatenation layer, the output data of the upsampling layer with the first-order features of the first convolutional layer or the second-order features of the second convolutional layer.
In this embodiment, at each upsampling step the concatenation layer superimposes features of the same dimensions from the convolution process, preventing gradient vanishing in the deep convolutional neural network, avoiding information loss, and improving the accuracy of the trained model.
a second input submodule 113, configured to input the output data of the concatenation layer and the upsampling layer into a third convolutional layer for full CT image information fusion, and output the segmented liver image data.
In this embodiment, each superimposition through the concatenation layer doubles the feature space, which must be compressed back to its pre-concatenation size by a space-compressing convolution operation; therefore, each concatenation layer in this embodiment is immediately followed by a space-compressing third convolutional layer, which fuses the concatenated features and compresses the feature space.
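As a non-limiting illustration, the channel doubling at the concatenation layer and the compression by the following third convolutional layer can be observed in a few lines of Keras; the shapes are chosen to match the Up8 rows of the table above:

import tensorflow as tf
from tensorflow.keras import layers

skip = tf.zeros((1, 64, 64, 512))   # e.g. the drop4 encoder output
up = tf.zeros((1, 64, 64, 512))     # e.g. up8 after its 2*2 convolution
cat = layers.Concatenate()([skip, up])                     # -> (1, 64, 64, 1024)
out = layers.Conv2D(512, 3, padding="same", activation="relu")(cat)
print(cat.shape, out.shape)  # channels double, then are compressed back to 512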
Referring to FIG. 8, the lesion monitoring apparatus of an embodiment of the present application further includes:
a rotation module 3, configured to add Gaussian noise to the original CT image and rotate it within a specified angle range, so as to generate a rotated image.
In this embodiment, every tomographic image of every case undergoes data preprocessing in turn, so that the original CT image is enhanced at the original image size, irrelevant tissue is removed as far as possible, and the liver is highlighted. Based on how liver tissue appears within the gray-level range of the original CT image, this embodiment first filters out the pixels in the -100 to 400 gray-level range, heightening the contrast of the liver in the image while preserving the original image size. Because the organ contrast shown by the original CT image within the [-100, 400] range is still weak, this embodiment enhances contrast by histogram equalization, widening the gray levels that contain many pixels and expanding the dynamic range of the image element values. Moreover, this embodiment uses only the 20 available open-source liver lesion images as original CT images; since the training examples are very few, Gaussian noise is added to the images and they are randomly rotated by an angle in [-30, +30] for data augmentation, improving model robustness and data diversity.
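As a non-limiting illustration, the described windowing, histogram equalization, noise injection and random rotation can be sketched as follows; the noise standard deviation is an illustrative choice that the disclosure does not specify:

import numpy as np
from scipy import ndimage

def augment_slice(hu_slice, rng=np.random.default_rng()):
    # Window to the [-100, 400] gray-level range where the liver stands out
    img = np.clip(hu_slice, -100, 400).astype(np.float32)
    # Histogram equalization over 256 bins to expand the dynamic range
    bins = np.floor((img + 100) / 500 * 255).astype(np.int64)
    cdf = np.bincount(bins.ravel(), minlength=256).cumsum() / bins.size
    img = cdf[bins].astype(np.float32)
    # Gaussian noise (illustrative sigma) and a random rotation in [-30, +30]
    img = img + rng.normal(0.0, 0.01, img.shape)
    return ndimage.rotate(img, rng.uniform(-30, 30), reshape=False, mode="nearest")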
a computation module 4, configured to subject the rotated image to a specified elastic transformation computation, so as to obtain a deformation map of the original CT image.
The elastic transformation computation of this embodiment includes the following steps (a sketch follows the list):
(1) For every pixel of every slice of the original CT image, generate two random-number images A and B with values in the range [-1, 1].
(2) Generate a 105*105-pixel Gaussian kernel with mean 0 and standard deviation 4, and convolve the kernel with images A and B respectively to obtain convolution results A2 and B2.
(3) Using the convolution results A2 and B2, assign the pixel value of the original CT image at (Xi, Yi) to the position (Xi+A2i, Yi+B2i) of the new image, yielding the deformation map of the original CT image.
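As a non-limiting illustration, steps (1) to (3) can be approximated with SciPy; gaussian_filter stands in for the convolution with the 105*105 Gaussian kernel, and map_coordinates realizes the displacement by inverse lookup with interpolation rather than the forward assignment described above:

import numpy as np
from scipy import ndimage

def elastic_deform(image, sigma=4.0, rng=np.random.default_rng()):
    # (1) two random fields in [-1, 1], one per displacement axis
    a = rng.uniform(-1.0, 1.0, image.shape)
    b = rng.uniform(-1.0, 1.0, image.shape)
    # (2) smooth both fields with a zero-mean Gaussian of standard deviation 4
    a2 = ndimage.gaussian_filter(a, sigma)
    b2 = ndimage.gaussian_filter(b, sigma)
    # (3) sample the image at the displaced positions (Xi+A2i, Yi+B2i)
    xs, ys = np.meshgrid(np.arange(image.shape[0]),
                         np.arange(image.shape[1]), indexing="ij")
    return ndimage.map_coordinates(image, [xs + a2, ys + b2],
                                   order=1, mode="nearest")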
an organization module 5, configured to organize the original CT image and its corresponding deformation map as the sample data of the original CT image.
Taking the discrimination of 0/1-labeled CT slices as an example, the specific flow of liver cancer lesion monitoring on original CT images in this embodiment is as follows:
First, augmentation is performed on the 20 open-source CT tomography samples (the 20 open-source CT tomography samples come from https://www.ircad.fr/research/3d-ircadb-01/), each sample containing multiple CT slices. For example, for discriminating CT slices with 0/1 labels, the training model takes a 512*512*1 grayscale CT slice as input and outputs a 512*512*1 label map.
Each input sample is a preprocessed image of size 512*512*n, where n is the number of CT slices in that sample; the model training procedure is described in the method section above and is not repeated here. In this embodiment, the region of the original CT image corresponding to the predicted liver bounding box is first extracted as the training picture to be segmented; in the same way, the region corresponding to the lesion is obtained from the predicted bounding box, and a 512*512*1 label map is extracted to train the fully convolutional segmentation network. For the first convolutional neural network, the initial CT slice data are first prepared as first-network input data, and the processed data are fed into the first convolutional neural network for model training. Training is set to fifty iterations, each of which traverses all input data, and cross-entropy is used as the objective function. The cross-entropy function of this embodiment is expressed as:
C = -(1/n)·∑[y·ln(a) + (1-y)·ln(1-a)]
where y is the expected output, a is the actual output of the neuron, n is the number of samples, a = σ(z), and z = ∑Wj*Xj + b. The second convolutional neural network is trained with the prediction output of the first convolutional neural network together with the augmented original CT image: it takes the input of the first convolutional neural network and the first network's prediction output as its own input, with the other parameters unchanged. The two networks receive different input data, recognize different objects, and therefore perform different tasks: the first convolutional neural network segments the liver by recognizing the liver bounding box, while the second convolutional neural network recognizes lesions by recognizing the lesion-location bounding box.
This embodiment updates the parameters with the Adam strategy (Adam is a first-order optimization algorithm that can replace the conventional stochastic gradient descent procedure; it iteratively updates the neural network weights based on the training data, and its name derives from Adaptive Moment Estimation), with an initial learning rate of 1e-5. During training, when the decrease of the loss on the validation set falls below 1e-7, the learning rate is halved: a decreasing loss indicates that training is still improving, and once optimization reaches a certain point the learning rate must be lowered to avoid the oscillation caused by an overly large learning rate, which would ultimately prevent further optimization. The liver segmentation produced while training the first convolutional neural network may be inaccurate in places; to keep the resulting liver boundary bounding box from being inaccurate, the augmented original CT image data are concatenated into the input of the second convolutional neural network, so that this input carries fairly complete information for lesion segmentation and no data loss or incompleteness degrades the recognition performance of the overall model. The second convolutional neural network finally outputs a 512*512*1 image, which is the 0/1-labeled CT slice discrimination result.
Referring to FIG. 9, a lesion monitoring apparatus according to another embodiment of the present application includes:
a first obtaining module 20, configured to obtain lesion information of the lesion location by comparing the output result of the second convolutional neural network with the output result of the first convolutional neural network.
After the task of recognizing the liver lesion type has been completed by recognizing the lesion site, this embodiment compares the whole liver region with the lesion region to obtain lesion information. The lesion information of this embodiment includes, but is not limited to, analytical data such as the number, size, distribution proportion and edge characteristics of the lesion regions, which further improve the diagnostic accuracy for liver cancer; such analytical data directly reflect the degree of progression of the cancer. For example, given a local bulge on the surface of liver segment S7/8 containing a huge irregular low-density shadow, about 73×68mm in size, with an unclear boundary, a density partly higher than the surrounding liver in the same slice, and poorly defined edges, a diagnosis of stage T3aN0M0 IIIA can be inferred (from pathological analysis, T3a denotes multiple tumors with diameter >5cm; N0M0 denotes no regional lymph node metastasis and no distant metastasis); combined with the presence or absence of lymph node metastasis and of distant metastasis, stage T3aN0M0 IIIA can then be confirmed or ruled out.
Further, the lesion monitoring apparatus of this embodiment includes:
a building module 21, configured to externally obtain patient identity information corresponding to the lesion information, so as to build a first database together with the lesion information.
The identity information of this embodiment includes age, sex, occupation, dietary preference, and the like. By building the first database, this embodiment makes it easier to differentiate cases by age, sex, occupation, dietary preference and similar attributes, monitor high-incidence groups effectively, and improve the effectiveness of cancer prevention.
Further, the lesion monitoring apparatus of this embodiment includes:
a second obtaining module 22, configured to obtain data on how the lesion location changes over time together with the corresponding patient's lifestyle information, so as to identify favorable factors for the lesion.
The change data of this embodiment include, but are not limited to, the enlargement or shrinkage of lesion regions and the increase or decrease of their number. The lifestyle information of this embodiment includes, but is not limited to, diet type, quantity and frequency; medication type, quantity and frequency; and sleep quality, duration and frequency. The favorable factors of this embodiment denote factors that help control the disease, slow its progression, or promote recovery. The above change data, lifestyle information and favorable factors may be organized into a second database, so as to advance the optimized treatment of liver cancer comprehensively and rapidly.
Referring to FIG. 10, an embodiment of the present application further provides a computer device, which may be a server whose internal structure may be as shown in FIG. 10. The computer device includes a processor, a memory, a network interface and a database connected through a system bus. The processor of the computer device provides computing and control capability. The memory of the computer device includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system, computer-readable instructions and a database, and the internal memory provides an environment for running the operating system and the computer-readable instructions stored in the non-volatile storage medium. The database of the computer device stores data such as lesion monitoring data. The network interface of the computer device communicates with external terminals through a network connection. When executed, the computer-readable instructions perform the flows of the method embodiments described above. Those skilled in the art will appreciate that the structure shown in FIG. 10 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution of the present application is applied.
An embodiment of the present application further provides a non-volatile computer-readable storage medium on which computer-readable instructions are stored; when executed, the computer-readable instructions perform the flows of the method embodiments described above. The above are only preferred embodiments of the present application and do not thereby limit its patent scope; any equivalent structural or flow transformation made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present application.

Claims (20)

  1. A lesion monitoring method, characterized by comprising:
    inputting sample data of an original CT image into a preset segmentation model for a segmentation operation, and outputting segmented liver image data;
    inputting the sample data of the original CT image and the segmented liver image data into a preset recognition model for an operation, and outputting a recognition result;
    wherein the segmentation model and the recognition model are obtained by training a first convolutional neural network and a second convolutional neural network respectively, and the first convolutional neural network and the second convolutional neural network are arranged in cascade.
  2. The lesion monitoring method according to claim 1, characterized in that the step of inputting sample data of an original CT image into a preset segmentation model for a segmentation operation and outputting segmented liver image data comprises:
    inputting the sample data of the original CT image into a convolution part of the first convolutional neural network, and extracting feature data of the original CT image by a feature extraction manner preset in the segmentation model;
    inputting the feature data into an upsampling part of the first convolutional neural network, so as to restore the size of the original CT image and output the segmented liver image data.
  3. The lesion monitoring method according to claim 2, characterized in that the convolution part of the first convolutional neural network comprises a first convolutional layer, a second convolutional layer and a max pooling layer;
    the step of inputting the sample data of the original CT image into the convolution part of the first convolutional neural network and extracting feature data of the original CT image by the feature extraction manner preset in the segmentation model comprises:
    passing the data through the first convolutional layer, the second convolutional layer and the max pooling layer in sequence for a first specified number of iterations, so as to output the feature data of the original CT image.
  4. The lesion monitoring method according to claim 3, characterized in that the step of passing the data through the first convolutional layer, the second convolutional layer and the max pooling layer in sequence for a first specified number of iterations to output the feature data of the original CT image comprises:
    inputting the sample data of the original CT image into the first convolutional layer of the convolution part, so as to train first-order features of the local features of the original CT image;
    inputting the first-order features of the local features of the original CT image into the second convolutional layer of the convolution part, so as to train second-order features of the local features of the original CT image;
    inputting the second-order features of the local features of the original CT image into the max pooling layer of the convolution part, so as to extract optimized features of the local features of the original CT image;
    taking the optimized features of the local features of the original CT image as the sample data of the original CT image, and iterating through the first convolutional layer, the second convolutional layer and the max pooling layer in sequence until the number of iterations reaches the first specified number.
  5. The lesion monitoring method according to claim 3 or 4, characterized in that the convolution part of the first convolutional neural network further comprises a dropout layer;
    after the step of passing the data through the first convolutional layer, the second convolutional layer and the max pooling layer in sequence for a first specified number of iterations to output the feature data of the original CT image, the method comprises:
    inputting the feature data of the original CT image into the dropout layer of the convolution part, and iterating dropout a second specified number of times, so as to output optimized feature data of the original CT image.
  6. The lesion monitoring method according to claim 4, characterized in that the step of inputting the feature data into the upsampling part of the first convolutional neural network to restore the size of the original CT image and output the segmented liver image data comprises:
    inputting the feature data of the original CT image into an upsampling layer of the upsampling part, so as to progressively restore the size of the original CT image;
    concatenating, through a concatenation layer, the output data of the upsampling layer with the first-order features of the first convolutional layer or the second-order features of the second convolutional layer;
    inputting the output data of the concatenation layer and the upsampling layer into a third convolutional layer for full CT image information fusion, and outputting the segmented liver image data.
  7. The lesion monitoring method according to claim 1, characterized in that, before the step of inputting sample data of an original CT image into a preset segmentation model for a segmentation operation and outputting segmented liver image data, the method comprises:
    adding Gaussian noise to the original CT image and rotating it within a specified angle range, so as to generate a rotated image;
    subjecting the rotated image to a specified elastic transformation computation, so as to obtain a deformation map of the original CT image;
    organizing the original CT image and the deformation map of the original CT image as the sample data of the original CT image.
  8. A lesion monitoring apparatus, characterized by comprising:
    a first input/output module configured to input sample data of an original CT image into a preset segmentation model for a segmentation operation and output segmented liver image data;
    a second input/output module configured to input the sample data of the original CT image and the segmented liver image data into a preset recognition model for an operation and output a recognition result;
    wherein the segmentation model and the recognition model are obtained by training a first convolutional neural network and a second convolutional neural network respectively, and the first convolutional neural network and the second convolutional neural network are arranged in cascade.
  9. The lesion monitoring apparatus according to claim 8, characterized in that the first input/output module comprises:
    a first input unit configured to input the sample data of the original CT image into a convolution part of the first convolutional neural network and extract feature data of the original CT image by a feature extraction manner preset in the segmentation model;
    a second input unit configured to input the feature data into an upsampling part of the first convolutional neural network, so as to restore the size of the original CT image and output the segmented liver image data.
  10. The lesion monitoring apparatus according to claim 9, characterized in that the convolution part of the first convolutional neural network comprises a first convolutional layer, a second convolutional layer and a max pooling layer; the first input unit comprises:
    an output submodule configured to pass the data through the first convolutional layer, the second convolutional layer and the max pooling layer in sequence for a first specified number of iterations, so as to output the feature data of the original CT image.
  11. The lesion monitoring apparatus according to claim 10, characterized in that the output submodule comprises:
    a first input subunit configured to input the sample data of the original CT image into the first convolutional layer of the convolution part, so as to train first-order features of the local features of the original CT image;
    a second input subunit configured to input the first-order features of the local features of the original CT image into the second convolutional layer of the convolution part, so as to train second-order features of the local features of the original CT image;
    a third input subunit configured to input the second-order features of the local features of the original CT image into the max pooling layer of the convolution part, so as to extract optimized features of the local features of the original CT image;
    an iteration subunit configured to take the optimized features of the local features of the original CT image as the sample data of the original CT image and iterate through the first convolutional layer, the second convolutional layer and the max pooling layer in sequence until the number of iterations reaches the first specified number.
  12. The lesion monitoring apparatus according to claim 10, characterized in that the first input unit comprises:
    an iteration submodule configured to input the feature data of the original CT image into a dropout layer of the convolution part and iterate dropout a second specified number of times, so as to output optimized feature data of the original CT image.
  13. The lesion monitoring apparatus according to claim 11, characterized in that the second input unit comprises:
    a first input submodule configured to input the feature data of the original CT image into an upsampling layer of the upsampling part, so as to progressively restore the size of the original CT image;
    a concatenation submodule configured to concatenate, through a concatenation layer, the output data of the upsampling layer with the first-order features of the first convolutional layer or the second-order features of the second convolutional layer;
    a second input submodule configured to input the output data of the concatenation layer and the upsampling layer into a third convolutional layer for full CT image information fusion and output the segmented liver image data.
  14. The lesion monitoring apparatus according to claim 8, characterized by comprising:
    a rotation module configured to add Gaussian noise to the original CT image and rotate it within a specified angle range, so as to generate a rotated image;
    a computation module configured to subject the rotated image to a specified elastic transformation computation, so as to obtain a deformation map of the original CT image;
    an organization module configured to organize the original CT image and the deformation map of the original CT image as the sample data of the original CT image.
  15. A computer device, comprising a memory and a processor, the memory storing computer-readable instructions, characterized in that the processor, when executing the computer-readable instructions, implements a lesion monitoring method comprising:
    inputting sample data of an original CT image into a preset segmentation model for a segmentation operation, and outputting segmented liver image data;
    inputting the sample data of the original CT image and the segmented liver image data into a preset recognition model for an operation, and outputting a recognition result;
    wherein the segmentation model and the recognition model are obtained by training a first convolutional neural network and a second convolutional neural network respectively, and the first convolutional neural network and the second convolutional neural network are arranged in cascade.
  16. The computer device according to claim 15, characterized in that the step of inputting sample data of an original CT image into a preset segmentation model for a segmentation operation and outputting segmented liver image data comprises:
    inputting the sample data of the original CT image into a convolution part of the first convolutional neural network, and extracting feature data of the original CT image by a feature extraction manner preset in the segmentation model;
    inputting the feature data into an upsampling part of the first convolutional neural network, so as to restore the size of the original CT image and output the segmented liver image data.
  17. The computer device according to claim 16, characterized in that the convolution part of the first convolutional neural network comprises a first convolutional layer, a second convolutional layer and a max pooling layer;
    the step of inputting the sample data of the original CT image into the convolution part of the first convolutional neural network and extracting feature data of the original CT image by the feature extraction manner preset in the segmentation model comprises:
    passing the data through the first convolutional layer, the second convolutional layer and the max pooling layer in sequence for a first specified number of iterations, so as to output the feature data of the original CT image.
  18. A non-volatile computer-readable storage medium having computer-readable instructions stored thereon, characterized in that the computer-readable instructions, when executed by a processor, implement a lesion monitoring method comprising:
    inputting sample data of an original CT image into a preset segmentation model for a segmentation operation, and outputting segmented liver image data;
    inputting the sample data of the original CT image and the segmented liver image data into a preset recognition model for an operation, and outputting a recognition result;
    wherein the segmentation model and the recognition model are obtained by training a first convolutional neural network and a second convolutional neural network respectively, and the first convolutional neural network and the second convolutional neural network are arranged in cascade.
  19. The non-volatile computer-readable storage medium according to claim 18, characterized in that the step of inputting sample data of an original CT image into a preset segmentation model for a segmentation operation and outputting segmented liver image data comprises:
    inputting the sample data of the original CT image into a convolution part of the first convolutional neural network, and extracting feature data of the original CT image by a feature extraction manner preset in the segmentation model;
    inputting the feature data into an upsampling part of the first convolutional neural network, so as to restore the size of the original CT image and output the segmented liver image data.
  20. The non-volatile computer-readable storage medium according to claim 19, characterized in that the convolution part of the first convolutional neural network comprises a first convolutional layer, a second convolutional layer and a max pooling layer;
    the step of inputting the sample data of the original CT image into the convolution part of the first convolutional neural network and extracting feature data of the original CT image by the feature extraction manner preset in the segmentation model comprises:
    passing the data through the first convolutional layer, the second convolutional layer and the max pooling layer in sequence for a first specified number of iterations, so as to output the feature data of the original CT image.
PCT/CN2018/095503 2018-04-17 2018-07-12 Lesion monitoring method and apparatus, computer device and storage medium WO2019200753A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810345253.0A 2018-04-17 2018-04-17 Lesion monitoring method and apparatus, computer device and storage medium
CN201810345253.0 2018-04-17

Publications (1)

Publication Number Publication Date
WO2019200753A1 true WO2019200753A1 (zh) 2019-10-24

Family

ID=64094369

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/095503 WO2019200753A1 (zh) Lesion monitoring method and apparatus, computer device and storage medium

Country Status (2)

Country Link
CN (1) CN108806793A (zh)
WO (1) WO2019200753A1 (zh)


Also Published As

Publication number Publication date
CN108806793A (zh) 2018-11-13

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 18915595
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 18915595
    Country of ref document: EP
    Kind code of ref document: A1