WO2021184799A1 - Medical image processing method, apparatus, device and storage medium - Google Patents

Medical image processing method, apparatus, device and storage medium

Info

Publication number
WO2021184799A1
WO2021184799A1 · PCT/CN2020/129483 · CN2020129483W
Authority
WO
WIPO (PCT)
Prior art keywords
capsule
capsules
output
medical image
input
Prior art date
Application number
PCT/CN2020/129483
Other languages
English (en)
French (fr)
Inventor
吴剑煌
陈铭林
Original Assignee
中国科学院深圳先进技术研究院
Priority date
Filing date
Publication date
Application filed by 中国科学院深圳先进技术研究院
Publication of WO2021184799A1 publication Critical patent/WO2021184799A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/02Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03Computed tomography [CT]
    • A61B6/032Transmission computed tomography [CT]
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/50Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications
    • A61B6/501Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications for diagnosis of the head, e.g. neuroimaging or craniography
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10104Positron emission tomography [PET]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular

Definitions

  • the embodiments of the present invention relate to the field of medical image processing, and in particular to a medical image processing method, device, equipment, and storage medium.
  • Intracranial hemorrhage is a cerebrovascular disease caused by the rupture of cerebral blood vessels. It has a high disability rate and a high mortality rate. According to the location of intracranial hemorrhage, intracranial hemorrhage can be roughly divided into the following five categories: parenchymal hemorrhage, ventricular hemorrhage, epidural hemorrhage, subdural hemorrhage and subarachnoid hemorrhage.
  • in the treatment of intracranial hemorrhage, doctors usually need to determine the location of the hemorrhage in the CT image and estimate the bleeding volume, and formulate a feasible surgical plan on that basis. The bleeding volume plays a very important role in the diagnosis of intracranial hemorrhage: it is an important predictor of 30-day mortality and secondary hematoma expansion. Clinically, however, not every doctor can accurately determine the bleeding volume.
  • the capsule network uses vectors or matrices as its representation unit, rather than a single number as in a convolutional neural network, so it usually has higher prediction accuracy. However, the propagation calculation of the capsule layer consumes a large amount of video memory and computation time; under the limits of existing computing power, it is difficult to design a capsule network as deep and large as a convolutional neural network.
  • in summary, the existing capsule network has the problem that the propagation calculation of the capsule layer consumes a large amount of video memory.
  • the embodiments of the present invention provide a technical solution of a medical image processing method, which solves the problem that the propagation calculation of the capsule layer in the existing capsule network consumes a large amount of video memory.
  • an embodiment of the present invention provides a medical image processing method, including:
  • all medical image sequences containing patient bleeding information are input into at least one trained grouped capsule network model to obtain a predicted sequence diagram, wherein, during inter-layer calculation of the grouped capsule network model, the intermediate voting capsules corresponding to each input capsule determine the output of only some of the output capsules, so as to reduce the number of intermediate voting capsules used as input parameters when calculating the output capsules;
  • the bleeding volume of the patient is determined according to the predicted sequence diagram.
  • an embodiment of the present invention also provides a medical image processing device, including:
  • the predicted sequence diagram determination module is used to input all medical image sequences containing patient bleeding information into at least one trained grouped capsule network model to obtain a predicted sequence diagram, wherein, during inter-layer calculation of the grouped capsule network model, the intermediate voting capsules corresponding to each input capsule determine the output of only some of the output capsules, so as to reduce the number of intermediate voting capsules used as input parameters when calculating the output capsules.
  • an embodiment of the present invention also provides a medical image processing device, which includes:
  • one or more processors;
  • a storage device for storing one or more programs;
  • when the one or more programs are executed by the one or more processors, the one or more processors implement the medical image processing method according to any embodiment.
  • an embodiment of the present invention also provides a storage medium containing computer-executable instructions, which are used to execute the medical image processing method described in any of the embodiments when the computer-executable instructions are executed by a computer processor.
  • the technical solution of the medical image processing method provided by the embodiments of the present invention includes: inputting all medical image sequences containing patient bleeding information into at least one trained grouped capsule network model to obtain a predicted sequence diagram, wherein, during inter-layer calculation of the grouped capsule network model, the intermediate voting capsules corresponding to each input capsule determine the output of only some of the output capsules, so as to reduce the number of intermediate voting capsules used as input parameters when calculating the output capsules; and determining the patient's bleeding volume according to the predicted sequence diagram. Since the intermediate voting capsules corresponding to each input capsule determine only some of the output capsules, each output capsule also corresponds only to the intermediate voting capsules of some of the input capsules. Compared with the prior art, in which each output capsule must be determined from the intermediate voting capsules of every input capsule, this greatly reduces the number of intermediate voting capsules used as input parameters when calculating the output capsules, thereby reducing the amount of calculation when determining the output capsules and increasing the speed of inter-layer calculation, so that the layer depth of the capsule network can be greatly increased under the existing level of computing power, thereby improving the prediction accuracy of the capsule network model.
  • FIG. 1 is a flowchart of a medical image processing method according to Embodiment 1 of the present invention
  • FIG. 2A is a schematic diagram of inter-layer calculation provided by Embodiment 1 of the present invention.
  • FIG. 2B is a schematic diagram of the inter-layer calculation of a prior-art capsule neural network model, provided in Embodiment 1 of the present invention
  • FIG. 3 is a flowchart of the inter-layer calculation method of the grouped capsule network model provided in Embodiment 2 of the present invention.
  • FIG. 4 is a schematic diagram of the calculation speed of capsule layers with different numbers of capsule groups according to the second embodiment of the present invention.
  • FIG. 5 is a graph of the squashing function provided by the second embodiment of the present invention and the existing squashing function
  • FIG. 6 is a structural block diagram of a medical image processing device provided by Embodiment 3 of the present invention.
  • FIG. 7 is a structural block diagram of yet another medical image processing device according to the third embodiment of the present invention.
  • Fig. 8 is a structural block diagram of a medical image processing device provided by the fourth embodiment of the present invention.
  • Fig. 1 is a flowchart of a medical image processing method according to Embodiment 1 of the present invention.
  • the technical solution of this embodiment is suitable for automatically analyzing the patient's medical image sequence to obtain the patient's bleeding volume.
  • the method may be executed by the medical image processing apparatus provided by the embodiment of the present invention, and the apparatus may be implemented in a software and/or hardware manner, and configured to be applied in a processor.
  • the method specifically includes the following steps:
  • the medical image sequence is a sequence of clinical medical images that can display the patient's bleeding information. Commonly used clinical medical images include CT (Computed Tomography) images, PET (Positron Emission Computed Tomography) images and MRI (Magnetic Resonance Imaging) images. In this embodiment, a CT image is taken as an example for description.
  • CT images are often stored in files in the MHD (Meta Header Data) format, which mainly comprises two files with the suffixes .raw and .mhd. The .raw file stores the CT scan voxel data, while the .mhd file stores the header information, which includes the resolution and spacing of the three-dimensional data. One .mhd file represents the CT image data of one patient. A reading sketch is shown below.
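  • As an illustration of how such an .mhd/.raw pair might be read, the sketch below uses SimpleITK; the library choice and the file name are assumptions of this sketch and are not specified by the application.

```python
import SimpleITK as sitk

# Read the .mhd header; SimpleITK resolves the companion .raw file automatically.
image = sitk.ReadImage("patient_001.mhd")   # hypothetical file name

print(image.GetSize())      # (x, y, z) resolution, e.g. (256, 256, 10)
print(image.GetSpacing())   # sampling interval in mm, e.g. (1.0, 1.0, 10.0)

# CT voxel data (HU values) as a NumPy array ordered (z, y, x).
volume = sitk.GetArrayFromImage(image)
```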
  • because different CT images may be acquired with different device parameters, their resolution and sampling interval may differ. When a trained grouped capsule network model is used to process the acquired CT image data, the resolution and sampling interval of the CT image must be the same as those corresponding to the trained grouped capsule network. If they are not the same, it is preferable to first convert the resolution of the CT image using bilinear interpolation, and then convert the sampling interval of the resolution-converted CT image using nearest-neighbor interpolation, so that the resolution and sampling interval of the CT image are the same as those of the corresponding trained grouped capsule network model.
  • in one embodiment, the resolution is 10×256×256 and the sampling interval is 10 mm×1 mm×1 mm.
  • in a CT image, the HU value corresponding to blood is usually between 0 and 90. Therefore, the HU values of the CT image sequence that meets the preset resolution requirement are truncated to the range 0 to 90, that is, HU values greater than 90 are set to 90 and HU values less than 0 are set to 0; the HU values in the range 0 to 90 are then normalized to a preset gray-scale interval, for example between -1 and 1.
  • the inter-layer calculation of the grouped capsule network model includes a voting stage, a clustering stage and a nonlinear stage. In the voting stage, the intermediate voting capsules corresponding to each input capsule determine only some of the output capsules, so as to reduce the number of intermediate voting capsules used as input parameters when calculating the output capsules. As shown in FIG. 2A, each output capsule corresponds to only one capsule group, and only to one intermediate voting capsule of each capsule type in that group. Compared with the prior art, in which an output capsule is determined from as many intermediate voting capsules as there are input capsules (see FIG. 2B), the number of intermediate voting capsules on which each output capsule is based can be greatly reduced, which greatly reduces the amount of computation for generating each output capsule and thus achieves the technical effect of significantly reducing the amount of inter-layer calculation.
  • the number of trained grouped capsule network models is one or more. To improve the accuracy of the predicted sequence diagram, this embodiment uses multiple trained grouped capsule network models to analyze the medical image sequence at the same time, and each trained grouped capsule network model is independent, that is, each one is obtained by training the grouped capsule network on different training samples. Therefore, even if the medical image sequence received by each trained grouped capsule network model is the same, the predicted sequence diagram output by each model is different.
  • each image in a CT image sequence that meets the resolution requirement and sampling interval requirement is sequentially input into three independently trained grouped capsule network models to obtain three independent sets of predicted sequence diagrams.
  • image fusion is performed on the predicted images with the same identifier in each predicted sequence diagram to obtain the predicted sequence diagram used in the bleeding volume calculation. Each predicted image in the predicted sequence diagram is a segmentation probability map, and the image fusion method is preferably, but not limited to, weighted averaging.
  • the grouped capsule network of this embodiment includes an encoding part and a decoding part. In the encoding part, an initial capsule layer is extracted from the input medical image sequence through two ordinary convolutional layers; the initial capsule layer may use 2 types of 8-dimensional capsules. The layer corresponding to the initial capsule layer is then gradually reduced to a preset size through at least four steps, for example from a 256×256 layer to a 128×128 layer, then to a 64×64 layer, and then to a 32×32 layer. These four steps must satisfy three rules: 1) an operation within the same step does not change the number of capsule types or their dimension; 2) the next step doubles the capsule types and dimensions of the previous step, while the spatial resolution is reduced to 1/4 of the original; 3) the number of groups of the grouped capsule layer doubles in the next step.
  • in the decoding part, the operation starts from the last output of the encoding part and decodes the encoding result. In each step of the decoding part, a deconvolution capsule layer increases the spatial resolution of the previous step's output to four times the original, and the resulting output capsules are then concatenated with the output capsules of the corresponding step of the encoding part for the subsequent operations. These operations satisfy two rules: 1) an operation within the same step does not change the type and number of capsules; 2) the number of capsule groups of the capsule layer is halved layer by layer. An illustrative layer progression for the encoding part is sketched below.
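  • To make the encoder rules concrete, the sketch below tabulates one layer progression consistent with them; apart from the 2 types of 8-dimensional initial capsules and the 256×256 starting resolution stated above, the starting group count and the number of steps shown are illustrative assumptions.

```python
# Illustrative encoder progression consistent with the three encoder rules above;
# the starting group count of 1 is an assumption, not a value fixed by the application.
types, dims, groups, res = 2, 8, 1, 256

print(f"{'step':>4} {'resolution':>12} {'types':>6} {'dim':>4} {'groups':>7}")
for step in range(4):
    size = f"{res}x{res}"
    print(f"{step:>4} {size:>12} {types:>6} {dims:>4} {groups:>7}")
    # Next step: capsule types and dimensions double, spatial resolution drops
    # to 1/4 (each side halved), and the number of capsule groups doubles.
    types, dims, groups, res = types * 2, dims * 2, groups * 2, res // 2
```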
  • each predicted image in the predicted sequence diagram is binarized by thresholding. For example, if the probability corresponding to a voxel is greater than 0.5, the voxel is considered to belong to the bleeding region; otherwise, it is considered normal background. After the bleeding region of each predicted image is determined, the number of bleeding-region voxels in each predicted image is counted, the total number N of bleeding-region voxels over all predicted images is determined, and the bleeding volume is then obtained by the formula Volume = 10N mm³.
  • the technical solution of the medical image processing method provided by the embodiments of the present invention includes: inputting all medical image sequences containing patient bleeding information into at least one trained grouped capsule network model to obtain a predicted sequence diagram, wherein, during inter-layer calculation of the grouped capsule network model, the intermediate voting capsules corresponding to each input capsule determine the output of only some of the output capsules, so as to reduce the number of intermediate voting capsules used as input parameters when calculating the output capsules; and determining the patient's bleeding volume according to the predicted sequence diagram. Since the intermediate voting capsules corresponding to each input capsule determine only some of the output capsules, each output capsule also corresponds only to the intermediate voting capsules of some of the input capsules. Compared with the prior art, in which each output capsule must be determined from the intermediate voting capsules of every input capsule, this greatly reduces the number of intermediate voting capsules used as input parameters when calculating the output capsules, thereby reducing the amount of calculation when determining the output capsules and increasing the speed of inter-layer calculation, so that the layer depth of the capsule network can be greatly increased under the existing level of computing power, thereby improving the prediction accuracy of the capsule network model.
  • Fig. 3 is a flowchart of the inter-layer calculation method of the grouped capsule network model provided in Embodiment 2 of the present invention. On the basis of the above embodiment, this embodiment further introduces the inter-layer calculation method of the grouped capsule network model.
  • S201 Divide the received input capsules evenly into an even number of capsule groups according to capsule type.
  • the input capsules are divided equally into an even number of capsule groups according to capsule type, that is, the number of capsule types in each capsule group is the same.
  • the capsule network layer has two capsule groups in total, and each capsule group contains input capsules of two capsule types, and there are two input capsules of each capsule type.
  • S202 Determine the intermediate voting capsule corresponding to the input capsule of each capsule type in each capsule group, and the number of intermediate voting capsules corresponding to each capsule type is the same as the number of input capsules of the capsule type.
  • each intermediate voting capsule is generated from its input capsule through a matrix transformation with a trainable weight matrix that stores the weights corresponding to the input capsule (the formula appears as an equation image in the original filing).
  • S203 Perform clustering processing on the intermediate voting capsules with the same identifier and from the input capsules of different capsule types in the same capsule group by a dynamic routing algorithm, to obtain the main capsule.
  • this embodiment preferably assigns an identifier to each intermediate voting capsule.
  • the input capsule of each capsule type corresponds to two intermediate voting capsules, one is identified as 1, and the other is identified as 2.
  • the clustering processing is a weighted combination of these intermediate voting capsules, with the weighting matrix obtained by the dynamic routing algorithm (the formula appears as an equation image in the original filing).
  • the squashing function is used to perform a nonlinear transformation on each main capsule to generate the output capsule.
  • the squashing function formula of this embodiment appears as an equation image in the original filing.
  • the squashing function of this embodiment has functional characteristics similar to those of the existing squashing function, but a faster forward and backward calculation speed. To determine the difference in calculation speed, this embodiment uses the squashing function of this embodiment on the PyTorch platform to compute a certain 16-dimensional vector 1000 times, records the time used for each calculation, and counts the total time spent over the 1000 calculations; the prior-art squashing function is then applied to the same 16-dimensional vector 1000 times and the total time is counted in the same way. Comparing the total times of the two over 1000 calculations, the total time spent using the squashing function described in this embodiment is 30% less than that using the prior-art squashing function.
  • for a given set of input capsules, such as 8-dimensional vectors of 16 capsule types, with the dynamic routing iteration parameter set to 3, the input capsules are arranged on the PyTorch platform into capsule layers containing 1, 2, 4 and 8 capsule groups; the inter-layer calculation described in the preceding steps is performed on each capsule layer to obtain the output capsules, the calculation is repeated 1000 times, and the inter-layer calculation times, i.e. the output capsule generation times, of capsule layers with different numbers of capsule groups are compared.
  • as shown in FIG. 4, the calculation times of the capsule layers with 2, 4 and 8 capsule groups are reduced by 38%, 45% and 59%, respectively, compared with the ungrouped capsule layer, where the capsule layer with 1 capsule group is the ungrouped capsule layer.
  • the same training samples are used to train grouped capsule networks with 1, 2, 4 and 8 groups to generate the corresponding trained grouped capsule networks; each trained grouped capsule network is then used to analyze the same batch of CT intracranial hemorrhage images, and the evaluation indicators of each model, such as the number of weights (in the weight matrices) and the DSC value, are determined from the analysis results, as shown in Table 1.
  •   #g #weight DSC
    GroupCapsNet-G1 1 4.86M 85.04%
    GroupCapsNet-G2 2 2.77M 87.26%
    GroupCapsNet-G4 4 1.75M 85.72%
    GroupCapsNet-G8 8 1.34M 80.98%
  • it is evident that the network performance is optimal when the number of capsule groups is 2. It should be noted that the grouped capsule network with a group number of 1 is essentially the original capsule network; in Table 1, g denotes the number of groups and weight denotes the number of weights.
  • in addition, experiments show that, in CT intracranial hemorrhage region segmentation, the trained grouped capsule network based on the squashing function described in this embodiment achieves a Dice coefficient of 87.26% and an IOU (overlap rate) of 76.34%, whereas the trained grouped capsule network based on the prior-art squashing function achieves a Dice coefficient of 87.02% and an IOU of 76.15%. Clearly, the squashing function described in this embodiment not only does not reduce the performance of the grouped capsule network, but also improves it to a certain extent. Of course, the two trained grouped capsule networks use the same training samples during training.
  • in the technical solution of this embodiment, since the intermediate voting capsules corresponding to each input capsule determine only some of the output capsules, each output capsule also corresponds only to the intermediate voting capsules of some of the input capsules. Compared with the prior art, in which each output capsule must be determined from the intermediate voting capsules of every input capsule, this greatly reduces the number of intermediate voting capsules used as input parameters when calculating the output capsules, thereby reducing the amount of calculation when determining the output capsules and increasing the speed of inter-layer calculation, so that the layer depth of the capsule network can be greatly increased under the current level of computing power, thereby improving the prediction accuracy of the capsule network model.
  • Fig. 6 is a structural block diagram of a medical image processing device provided in the third embodiment of the present invention.
  • the device is used to execute the medical image processing method provided in any of the foregoing embodiments, and the device can be implemented in software or hardware.
  • the device includes:
  • the predicted sequence diagram determination module 11 is used to input all medical image sequences containing patient bleeding information into at least one trained grouped capsule network model to obtain a predicted sequence diagram, wherein, during inter-layer calculation of the grouped capsule network model, the intermediate voting capsules corresponding to each input capsule determine the output of only some of the output capsules, so as to reduce the number of intermediate voting capsules used as input parameters when calculating the output capsules;
  • the bleeding volume determination module 12 is used to determine the bleeding volume of the patient according to the predicted sequence diagram.
  • optionally, the predicted sequence diagram determination module 11 specifically inputs all medical image sequences containing patient bleeding information into at least two trained grouped capsule network models to obtain the predicted sequence diagram output by each trained grouped capsule network model, and performs image fusion on the corresponding predicted images in each predicted sequence diagram to update the predicted sequence diagram.
  • optionally, the predicted sequence diagram determination module 11 includes an inter-layer calculation unit, which is used to: divide the received input capsules evenly into an even number of capsule groups according to capsule type; determine the intermediate voting capsules corresponding to the input capsules of each capsule type in each capsule group, where the number of intermediate voting capsules corresponding to each capsule type is the same as the number of input capsules of that capsule type; cluster, by a dynamic routing algorithm, the intermediate voting capsules that have the same identifier and come from input capsules of different capsule types in the same capsule group, to obtain the main capsules; and perform a nonlinear transformation on the main capsules to generate the output capsules.
  • as shown in FIG. 7, the device also includes an image acquisition module 10, which is used to truncate the gray values of the medical image sequence that meets the resolution requirement to a preset gray-scale interval, and to perform gray-scale normalization on the truncated medical image sequence to update the medical image sequence.
  • optionally, the bleeding volume determination module 12 is used to determine the bleeding region area of each predicted image in the predicted sequence diagram by threshold binarization, and to determine the bleeding volume according to the bleeding region area of each predicted image.
  • in the technical solution of the medical image processing device provided by the embodiment of the present invention, the predicted sequence diagram determination module inputs all medical image sequences containing patient bleeding information into at least one trained grouped capsule network model to obtain a predicted sequence diagram, wherein, during inter-layer calculation of the grouped capsule network model, the intermediate voting capsules corresponding to each input capsule determine the output of only some of the output capsules, so as to reduce the number of intermediate voting capsules used as input parameters when calculating the output capsules; the bleeding volume determination module then determines the patient's bleeding volume according to the predicted sequence diagram. Since the intermediate voting capsules corresponding to each input capsule determine only some of the output capsules, each output capsule also corresponds only to the intermediate voting capsules of some of the input capsules. Compared with the prior art, in which each output capsule must be determined from the intermediate voting capsules of every input capsule, this greatly reduces the number of intermediate voting capsules used as input parameters when calculating the output capsules, thereby reducing the amount of calculation when determining the output capsules and increasing the speed of inter-layer calculation, so that the layer depth of the capsule network model can be greatly increased under the current level of computing power, thereby improving the prediction accuracy of the capsule network model.
  • the medical image processing apparatus provided by the embodiment of the present invention can execute the medical image processing method provided by any embodiment of the present invention, and has corresponding functional modules and beneficial effects for the execution method.
  • FIG. 8 is a structural block diagram of a medical image processing device provided by Embodiment 4 of the present invention. As shown in FIG. 8, the device includes a processor 201, a memory 202, an input device 203 and an output device 204; the number of processors 201 in the device may be one or more, and one processor 201 is taken as an example in FIG. 8; the processor 201, memory 202, input device 203 and output device 204 in the device may be connected by a bus or in other ways, and connection by a bus is taken as an example in FIG. 8.
  • as a computer-readable storage medium, the memory 202 can be used to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the medical image processing method in the embodiment of the present invention (for example, the predicted sequence diagram determination module 11 and the bleeding volume determination module 12).
  • the processor 201 executes various functional applications and data processing of the device by running the software programs, instructions and modules stored in the memory 202, that is, implements the above-described medical image processing.
  • the memory 202 may mainly include a program storage area and a data storage area.
  • the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to the use of the terminal, and the like.
  • the memory 202 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
  • the memory 202 may further include a memory remotely provided with respect to the processor 201, and these remote memories may be connected to the device through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
  • the input device 203 can be used to receive input digital or character information, and generate key signal input related to user settings and function control of the device.
  • the output device 204 may include a display device such as a display screen, for example, a display screen of a user terminal.
  • the fifth embodiment of the present invention also provides a storage medium containing computer-executable instructions, which are used to execute a medical image processing method when the computer-executable instructions are executed by a computer processor, and the method includes:
  • all medical image sequences containing patient bleeding information are input into at least one trained grouped capsule network model to obtain a predicted sequence diagram, wherein, during inter-layer calculation of the grouped capsule network model, the intermediate voting capsules corresponding to each input capsule determine the output of only some of the output capsules, so as to reduce the number of intermediate voting capsules used as input parameters when calculating the output capsules;
  • the bleeding volume of the patient is determined according to the predicted sequence diagram.
  • of course, in the storage medium containing computer-executable instructions provided by the embodiment of the present invention, the computer-executable instructions are not limited to the method operations described above, and can also execute related operations in the medical image processing method provided by any embodiment of the present invention.
  • the present invention can be implemented by software plus the necessary general-purpose hardware, and of course it can also be implemented by hardware, but in many cases the former is the better implementation. Based on this understanding, the part of the technical solution of the present invention that is essential or that contributes to the prior art can be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium, such as a computer floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk or an optical disk, and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute the medical image processing method described in each embodiment of the present invention.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Theoretical Computer Science (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Biophysics (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Veterinary Medicine (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Public Health (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Dentistry (AREA)
  • Neurology (AREA)
  • Pulmonology (AREA)
  • Neurosurgery (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

A medical image processing method, apparatus, device and storage medium. The method includes: inputting all medical image sequences containing patient bleeding information into at least one trained grouped capsule network model to obtain a predicted sequence diagram, wherein, during inter-layer calculation of the grouped capsule network model, the intermediate voting capsules corresponding to each input capsule determine the output of only some of the output capsules, so as to reduce the number of intermediate voting capsules used as input parameters when calculating the output capsules (S101); and determining the patient's bleeding volume according to the predicted sequence diagram (S102). The method solves the problem that the propagation calculation of the capsule layer in the existing capsule network consumes a large amount of video memory.

Description

Medical image processing method, apparatus, device and storage medium
[Technical Field]
The embodiments of the present invention relate to the field of medical image processing, and in particular to a medical image processing method, apparatus, device and storage medium.
[Background Art]
Intracranial hemorrhage is a cerebrovascular disease caused by the rupture of cerebral blood vessels; it has a high disability rate and a high mortality rate. According to the location of the hemorrhage, intracranial hemorrhage can be roughly divided into the following five categories: parenchymal hemorrhage, intraventricular hemorrhage, epidural hemorrhage, subdural hemorrhage and subarachnoid hemorrhage. In the treatment of intracranial hemorrhage, a doctor usually needs to determine the location of the hemorrhage in the CT image and estimate the bleeding volume, and formulate a feasible surgical plan accordingly. The bleeding volume plays a very important role in the diagnosis of intracranial hemorrhage: it is an important predictor of 30-day mortality and secondary hematoma expansion. Clinically, however, not every doctor can accurately determine the bleeding volume.
To help doctors accurately determine the bleeding volume, methods such as convolutional neural networks and capsule networks have been tried for calculating the bleeding volume. A capsule network uses vectors or matrices as its representation unit, rather than a single number as in a convolutional neural network, so it usually has higher prediction accuracy; however, the propagation calculation of the capsule layer consumes a large amount of video memory and computation time, and under the limits of existing computing power it is difficult to design a capsule network as deep and large as a convolutional neural network.
In summary, the existing capsule network has the problem that the propagation calculation of the capsule layer consumes a large amount of video memory.
[Summary of the Invention]
The embodiments of the present invention provide a technical solution of a medical image processing method, which solves the problem that the propagation calculation of the capsule layer in the existing capsule network consumes a large amount of video memory.
In a first aspect, an embodiment of the present invention provides a medical image processing method, including:
inputting all medical image sequences containing patient bleeding information into at least one trained grouped capsule network model to obtain a predicted sequence diagram, wherein, during inter-layer calculation of the grouped capsule network model, the intermediate voting capsules corresponding to each input capsule determine the output of only some of the output capsules, so as to reduce the number of intermediate voting capsules used as input parameters when calculating the output capsules;
determining the patient's bleeding volume according to the predicted sequence diagram.
In a second aspect, an embodiment of the present invention also provides a medical image processing apparatus, including:
a predicted sequence diagram determination module, used to input all medical image sequences containing patient bleeding information into at least one trained grouped capsule network model to obtain a predicted sequence diagram, wherein, during inter-layer calculation of the grouped capsule network model, the intermediate voting capsules corresponding to each input capsule determine the output of only some of the output capsules, so as to reduce the number of intermediate voting capsules used as input parameters when calculating the output capsules.
In a third aspect, an embodiment of the present invention also provides a medical image processing device, which includes:
one or more processors;
a storage device for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the medical image processing method according to any embodiment.
In a fourth aspect, an embodiment of the present invention also provides a storage medium containing computer-executable instructions, which, when executed by a computer processor, are used to execute the medical image processing method according to any embodiment.
The technical solution of the medical image processing method provided by the embodiments of the present invention includes: inputting all medical image sequences containing patient bleeding information into at least one trained grouped capsule network model to obtain a predicted sequence diagram, wherein, during inter-layer calculation of the grouped capsule network model, the intermediate voting capsules corresponding to each input capsule determine the output of only some of the output capsules, so as to reduce the number of intermediate voting capsules used as input parameters when calculating the output capsules; and determining the patient's bleeding volume according to the predicted sequence diagram. Since the intermediate voting capsules corresponding to each input capsule determine only some of the output capsules, each output capsule also corresponds only to the intermediate voting capsules of some of the input capsules. Compared with the prior art, in which each output capsule must be determined from the intermediate voting capsules of every input capsule, this greatly reduces the number of intermediate voting capsules used as input parameters when calculating the output capsules, thereby reducing the amount of calculation when determining the output capsules and increasing the speed of inter-layer calculation, so that the layer depth of the capsule network can be greatly increased under the existing level of computing power, thereby improving the prediction accuracy of the capsule network model.
[Brief Description of the Drawings]
In order to explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of the medical image processing method provided by Embodiment 1 of the present invention;
FIG. 2A is a schematic diagram of the inter-layer calculation provided by Embodiment 1 of the present invention;
FIG. 2B is a schematic diagram of the inter-layer calculation of a prior-art capsule neural network model, provided by Embodiment 1 of the present invention;
FIG. 3 is a flowchart of the inter-layer calculation method of the grouped capsule network model provided by Embodiment 2 of the present invention;
FIG. 4 is a schematic diagram of the calculation speed of capsule layers with different numbers of capsule groups, provided by Embodiment 2 of the present invention;
FIG. 5 is a graph of the squashing function provided by Embodiment 2 of the present invention and the existing squashing function;
FIG. 6 is a structural block diagram of the medical image processing apparatus provided by Embodiment 3 of the present invention;
FIG. 7 is a structural block diagram of another medical image processing apparatus provided by Embodiment 3 of the present invention;
FIG. 8 is a structural block diagram of the medical image processing device provided by Embodiment 4 of the present invention.
[Detailed Description of the Embodiments]
In order to make the objectives, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention are described clearly and completely below through embodiments with reference to the drawings of the embodiments of the present invention. Obviously, the described embodiments are some, but not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Embodiment 1
FIG. 1 is a flowchart of the medical image processing method provided by Embodiment 1 of the present invention. The technical solution of this embodiment is suitable for automatically analyzing a patient's medical image sequence to obtain the patient's bleeding volume. The method may be executed by the medical image processing apparatus provided by the embodiment of the present invention, which may be implemented in software and/or hardware and configured to be applied in a processor. The method specifically includes the following steps:
S101. Input all medical image sequences containing patient bleeding information into at least one trained grouped capsule network model to obtain a predicted sequence diagram, wherein, during inter-layer calculation of the grouped capsule network model, the intermediate voting capsules corresponding to each input capsule determine the output of only some of the output capsules, so as to reduce the number of intermediate voting capsules used as input parameters when calculating the output capsules.
The medical image sequence is a sequence of clinical medical images that can display the patient's bleeding information. Commonly used clinical medical images include CT (Computed Tomography) images, PET (Positron Emission Computed Tomography) images, MRI (Magnetic Resonance Imaging) images and so on. This embodiment takes CT images as an example. CT images are often stored in files in the MHD (Meta Header Data) format, which mainly comprises two files with the suffixes .raw and .mhd; the .raw file stores the CT scan voxel data, and the .mhd file stores the header information, which includes the resolution, spacing and so on of the three-dimensional data. In addition, one .mhd file represents the CT image data of one patient.
Because different CT images may be acquired with different device parameters, their resolution and sampling interval may differ. When a trained grouped capsule neural network model is used to process the acquired CT image data, the resolution and sampling interval of the CT image must be the same as those corresponding to the trained grouped capsule network. If they are not the same, it is preferable to first convert the resolution of the CT image using bilinear interpolation, and then convert the sampling interval of the resolution-converted CT image using nearest-neighbor interpolation, so that the resolution and sampling interval of the CT image are the same as those of the corresponding trained grouped capsule network model. In one embodiment, the resolution is 10×256×256 and the sampling interval is 10 mm×1 mm×1 mm.
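The following is a minimal resampling sketch of the two-stage conversion just described (bilinear in-plane, nearest-neighbor along the slice axis); the function name, tensor layout and use of PyTorch here are assumptions of the sketch rather than details fixed by the application.

```python
import torch
import torch.nn.functional as F

def resample_ct(volume: torch.Tensor) -> torch.Tensor:
    """Resample a float CT volume of shape (D, H, W) to the target grid assumed
    above: 10 slices of 256x256 voxels at a 10mm x 1mm x 1mm sampling interval."""
    # Resolution conversion in-plane with bilinear interpolation (one slice per batch item).
    x = F.interpolate(volume.unsqueeze(1), size=(256, 256),
                      mode="bilinear", align_corners=False)        # (D, 1, 256, 256)
    # Sampling-interval conversion along the slice axis with nearest-neighbor interpolation.
    x = x.squeeze(1).permute(1, 2, 0).reshape(1, 256 * 256, -1)    # (1, 256*256, D)
    x = F.interpolate(x, size=10, mode="nearest")                  # (1, 256*256, 10)
    return x.reshape(256, 256, 10).permute(2, 0, 1)                # (10, 256, 256)
```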
In a CT image, the HU value corresponding to blood is usually between 0 and 90, so the HU values of the CT image sequence that meets the preset resolution requirement are truncated to the range 0 to 90, that is, HU values greater than 90 are set to 90 and HU values less than 0 are set to 0; the HU values in the range 0 to 90 are then normalized to a preset gray-scale interval, for example between -1 and 1.
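A sketch of this truncation and normalization step is given below; the function name is an assumption.

```python
import numpy as np

def normalize_hu(ct: np.ndarray) -> np.ndarray:
    """Clip HU values to the blood window [0, 90] and rescale to [-1, 1]."""
    ct = np.clip(ct, 0, 90)     # HU > 90 -> 90, HU < 0 -> 0
    return ct / 45.0 - 1.0      # map [0, 90] linearly onto [-1, 1]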
The inter-layer calculation of the grouped capsule network model includes a voting stage, a clustering stage and a nonlinear stage. In the voting stage, the intermediate voting capsules corresponding to each input capsule determine only some of the output capsules, so as to reduce the number of intermediate voting capsules used as input parameters when calculating the output capsules. As shown in FIG. 2A, each output capsule corresponds to only one capsule group, and only to one intermediate voting capsule of each capsule type in that group. Compared with the prior art, in which an output capsule is determined from as many intermediate voting capsules as there are input capsules (see FIG. 2B), the number of intermediate voting capsules on which each output capsule is based can be greatly reduced, which greatly reduces the amount of computation for generating each output capsule and thus achieves the technical effect of significantly reducing the amount of inter-layer calculation.
The number of trained grouped capsule network models is one or more. To improve the accuracy of the predicted sequence diagram, this embodiment uses multiple trained grouped capsule network models to analyze the medical image sequence at the same time, and each trained grouped capsule network model is independent, that is, each one is obtained by training the grouped capsule network on different training samples. Therefore, even if the medical image sequence received by each trained grouped capsule network model is the same, the predicted sequence diagram output by each trained grouped capsule network model is different.
As an example, each image in a CT image sequence that meets the resolution requirement and sampling interval requirement is input in turn into three independently trained grouped capsule network models to obtain three independent sets of predicted sequence diagrams. After the predicted sequence diagrams output by the trained grouped capsule networks are obtained, the predicted images with the same identifier in each predicted sequence diagram are fused to obtain the predicted sequence diagram used in the bleeding volume calculation. Each predicted image in the predicted sequence diagram is a segmentation probability map, and the image fusion method is preferably, but not limited to, weighted averaging.
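A minimal fusion sketch follows; equal weights are an assumption used when no weights are supplied, and the function name is hypothetical.

```python
import numpy as np

def fuse_predictions(prob_maps, weights=None):
    """Weighted average of the segmentation probability maps produced by the
    independently trained grouped capsule network models for the same images."""
    prob_maps = np.stack(prob_maps, axis=0)             # (n_models, D, H, W)
    if weights is None:
        weights = np.full(len(prob_maps), 1.0 / len(prob_maps))
    return np.tensordot(weights, prob_maps, axes=1)     # fused probability maps
```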
The grouped capsule network of this embodiment includes an encoding part and a decoding part. In the encoding part, an initial capsule layer is extracted from the input medical image sequence through two ordinary convolutional layers; the initial capsule layer may use 2 types of 8-dimensional capsules. The layer corresponding to the initial capsule layer is then gradually reduced to a preset size through at least four steps, for example from a 256×256 layer to a 128×128 layer, then to a 64×64 layer, and then to a 32×32 layer. These four steps must satisfy three rules: 1) an operation within the same step does not change the number of capsule types or their dimension; 2) the next step doubles the capsule types and dimensions of the previous step, while the spatial resolution is reduced to 1/4 of the original; 3) the number of groups of the grouped capsule layer doubles in the next step. In the decoding part, the operation starts from the last output of the encoding part and decodes the encoding result. In each step of the decoding part, a deconvolution capsule layer increases the spatial resolution of the previous step's output to four times the original, and the resulting output capsules are then concatenated with the output capsules of the corresponding step of the encoding part for the subsequent operations. These operations satisfy two rules: 1) an operation within the same step does not change the type and number of capsules; 2) the number of capsule groups of the capsule layer is halved layer by layer.
S102. Determine the patient's bleeding volume according to the predicted sequence diagram.
After the predicted sequence diagram used in the bleeding volume calculation is obtained, each predicted image in it is binarized by thresholding. For example, if the probability corresponding to a voxel is greater than 0.5, the voxel is considered to belong to the bleeding region; otherwise, it is considered normal background. After the bleeding region of each predicted image is determined, the number of bleeding-region voxels in each predicted image is counted, the total number N of bleeding-region voxels over all predicted images is determined, and the bleeding volume is then obtained by the following formula.
Volume = 10N mm³
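The thresholding and conversion can be sketched as follows; with the 10 mm × 1 mm × 1 mm sampling interval assumed above, each voxel occupies 10 mm³, which is where the factor 10 comes from. The function name is hypothetical.

```python
import numpy as np

def bleeding_volume_mm3(fused_probs: np.ndarray, threshold: float = 0.5) -> float:
    """Threshold the fused probability maps, count bleeding voxels over all
    predicted images, and convert the count N to a volume of 10 * N mm^3."""
    bleeding_mask = fused_probs > threshold   # voxel belongs to the bleeding region
    n_voxels = int(bleeding_mask.sum())       # total number N of bleeding voxels
    return 10.0 * n_voxels                    # bleeding volume in mm^3
```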
The technical solution of the medical image processing method provided by the embodiments of the present invention includes: inputting all medical image sequences containing patient bleeding information into at least one trained grouped capsule network model to obtain a predicted sequence diagram, wherein, during inter-layer calculation of the grouped capsule network model, the intermediate voting capsules corresponding to each input capsule determine the output of only some of the output capsules, so as to reduce the number of intermediate voting capsules used as input parameters when calculating the output capsules; and determining the patient's bleeding volume according to the predicted sequence diagram. Since the intermediate voting capsules corresponding to each input capsule determine only some of the output capsules, each output capsule also corresponds only to the intermediate voting capsules of some of the input capsules. Compared with the prior art, in which each output capsule must be determined from the intermediate voting capsules of every input capsule, this greatly reduces the number of intermediate voting capsules used as input parameters when calculating the output capsules, thereby reducing the amount of calculation when determining the output capsules and increasing the speed of inter-layer calculation, so that the layer depth of the capsule network can be greatly increased under the existing level of computing power, thereby improving the prediction accuracy of the capsule network model.
Embodiment 2
FIG. 3 is a flowchart of the inter-layer calculation method of the grouped capsule network model provided by Embodiment 2 of the present invention. On the basis of the above embodiment, the embodiment of the present invention further introduces the inter-layer calculation method of the grouped capsule network model.
S201. Divide the received input capsules evenly into an even number of capsule groups according to capsule type.
The input capsules are divided equally into an even number of capsule groups according to capsule type, that is, the number of capsule types in each capsule group is the same. As shown in FIG. 2A, the capsule network layer has two capsule groups in total, each capsule group contains input capsules of two capsule types, and there are two input capsules of each capsule type.
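A grouping sketch under an assumed tensor layout (capsule types on the first axis) is shown below.

```python
import torch

def split_into_groups(input_capsules: torch.Tensor, n_groups: int):
    """Split input capsules evenly into capsule groups by capsule type.
    Assumed layout: (n_types, n_caps_per_type, capsule_dim)."""
    n_types = input_capsules.shape[0]
    # S201 calls for an even number of groups with the same number of types per group.
    assert n_groups % 2 == 0 and n_types % n_groups == 0
    return torch.chunk(input_capsules, n_groups, dim=0)
```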
S202. Determine the intermediate voting capsules corresponding to the input capsules of each capsule type in each capsule group, where the number of intermediate voting capsules corresponding to each capsule type is the same as the number of input capsules of that capsule type.
In the voting stage, let u_t^L denote an input capsule of type t in layer L and û_{t'|t}^L denote an intermediate voting capsule. Each u_t^L generates its intermediate voting capsules through a matrix transformation, as shown in the following formula:
û_{t'|t}^L = W_{t,t'}^L · u_t^L
where W_{t,t'}^L denotes a trainable weight matrix, which stores the weights corresponding to the input capsule.
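The voting stage for one capsule group can be sketched as below; the tensor shapes are assumptions made for the sketch, and the second axis plays the role of the identifier used later in the clustering stage.

```python
import torch

def voting_stage(group_capsules: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    """Map every input capsule of a group to its intermediate voting capsule by a
    trainable matrix transformation, so the number of voting capsules per capsule
    type equals the number of input capsules of that type.
      group_capsules: (n_types, n_caps_per_type, in_dim)
      weight:         (n_types, n_caps_per_type, out_dim, in_dim)
      returns votes:  (n_types, n_caps_per_type, out_dim)
    """
    return torch.einsum("tiod,tid->tio", weight, group_capsules)
```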
S203. Cluster, by a dynamic routing algorithm, the intermediate voting capsules that have the same identifier and come from input capsules of different capsule types in the same capsule group, to obtain the main capsules.
To distinguish the intermediate voting capsules corresponding to each type, this embodiment preferably assigns an identifier to each intermediate voting capsule. As shown in FIG. 2A, the input capsules of each capsule type correspond to two intermediate voting capsules, one identified as 1 and the other as 2; all intermediate voting capsules identified as 1 in the same capsule group are clustered to obtain the main capsule identified as 1, and all intermediate voting capsules identified as 2 in the same capsule group are clustered to obtain the main capsule identified as 2.
The clustering processing formula is as follows:
p_{t'}^L = Σ_t c_{t,t'}^L · û_{t'|t}^L
where c_{t,t'}^L is the weighting matrix obtained by the dynamic routing algorithm, t and t' are both capsule types, and L is the capsule layer.
It can be understood that, because the number of intermediate voting capsules is reduced, the number of intermediate voting capsules that participate in the clustering at the same time is reduced, which makes the feature extraction of the clustering processing more effective.
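A sketch of the clustering stage follows. It uses a standard dynamic-routing update and the standard squashing nonlinearity as a stand-in, since the patent's own routing details and squashing formula are given only as equation images in the original filing.

```python
import torch
import torch.nn.functional as F

def squash(s: torch.Tensor, dim: int = -1, eps: float = 1e-8) -> torch.Tensor:
    """Standard squashing nonlinearity (a stand-in for the patent's faster variant)."""
    norm2 = (s * s).sum(dim=dim, keepdim=True)
    return norm2 / (1.0 + norm2) * s / torch.sqrt(norm2 + eps)

def cluster_votes(votes: torch.Tensor, n_iters: int = 3) -> torch.Tensor:
    """Cluster voting capsules that share the same identifier but come from
    different capsule types of the same group; `votes` has the assumed shape
    (n_types, n_identifiers, out_dim), and the result holds one main capsule
    (after squashing) per identifier."""
    logits = torch.zeros(votes.shape[:2])                      # routing logits
    for _ in range(n_iters):
        c = F.softmax(logits, dim=0)                           # weights over capsule types
        main = (c.unsqueeze(-1) * votes).sum(dim=0)            # weighted combination
        main = squash(main)                                    # nonlinear stage
        logits = logits + (votes * main.unsqueeze(0)).sum(-1)  # agreement update
    return main
```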
S204. Perform a nonlinear transformation on the main capsules to generate the output capsules.
After the main capsules are obtained, a squashing function is used to perform a nonlinear transformation on each main capsule p_{t'}^L to generate the output capsule v_{t'}^L. The squashing function proposed in this embodiment is given as an equation image (PCTCN2020129483-appb-000009) in the original filing, where p_{t'}^L is the main capsule, v_{t'}^L is the output capsule, L is the capsule layer, and t' is the capsule type.
As shown in FIG. 5, the squashing function of this embodiment has functional characteristics similar to those of the existing squashing function, but a faster forward and backward calculation speed. To determine the difference in calculation speed, this embodiment uses the squashing function of this embodiment on the PyTorch platform to compute a certain 16-dimensional vector 1000 times, records the time used for each calculation, and counts the total time spent over the 1000 calculations; the prior-art squashing function is then applied to the same 16-dimensional vector 1000 times and the total time is counted in the same way. Comparing the total times of the two over 1000 calculations, the total time spent using the squashing function described in this embodiment is 30% less than that using the prior-art squashing function.
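A timing sketch of such a comparison is given below; the random 16-dimensional vector, the use of perf_counter, and the baseline squashing formula are assumptions, since the patent's own squashing formula is available only as an image in the original filing.

```python
import time
import torch

def time_squash(fn, n_runs: int = 1000) -> float:
    """Apply a squashing function 1000 times to a 16-dimensional vector and
    return the total time spent, mirroring the comparison described above."""
    v = torch.randn(16)
    total = 0.0
    for _ in range(n_runs):
        start = time.perf_counter()
        fn(v)
        total += time.perf_counter() - start
    return total

# Baseline: the standard squashing function used here as a reference implementation.
baseline = time_squash(lambda s: (s.norm() ** 2 / (1 + s.norm() ** 2)) * s / s.norm())
print(f"total time for 1000 runs: {baseline:.6f} s")
```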
It can be understood that, for a given number of input capsule types, the more capsule groups there are, the fewer capsule types each group contains and the fewer intermediate voting capsules correspond to each group; accordingly, fewer intermediate voting capsules influence the output capsules of that group, and the generation time of each output capsule decreases.
As an example, for a given set of input capsules, such as 8-dimensional vectors of 16 capsule types, with the dynamic routing iteration parameter set to 3, the input capsules are arranged on the PyTorch platform into capsule layers containing 1, 2, 4 and 8 capsule groups; the inter-layer calculation described in the preceding steps is performed on each capsule layer to obtain the output capsules, the calculation is repeated 1000 times, and the inter-layer calculation times, i.e. the output capsule generation times, of capsule layers with different numbers of capsule groups are compared. As shown in FIG. 4, the calculation times of the capsule layers with 2, 4 and 8 capsule groups are reduced by 38%, 45% and 59%, respectively, compared with the ungrouped capsule layer, where the capsule layer with 1 capsule group is the ungrouped capsule layer.
On the other hand, as the number of capsule groups increases, the number of intermediate voting capsules decreases, so that when the output capsules are calculated, the number of intermediate voting capsules used as input parameters and the number of weights in the aforementioned weight matrices gradually decrease; the information carried by each output capsule, such as capsule type information, also decreases, and the overlap between the information carried by the different output capsules becomes smaller and smaller, which inevitably affects the stability and analysis capability of the capsule network.
As an example, the same training samples are used to train grouped capsule networks with 1, 2, 4 and 8 groups to generate the corresponding trained grouped capsule networks; each trained grouped capsule network is then used to analyze the same batch of CT intracranial hemorrhage images, and the evaluation indicators of the models, such as the number of weights (in the weight matrices) and the DSC value, are determined from the analysis results of each trained grouped capsule network, as shown in Table 1.
  #g #weight DSC
GroupCapsNet-G1 1 4.86M 85.04%
GroupCapsNet-G2 2 2.77M 87.26%
GroupCapsNet-G4 4 1.75M 85.72%
GroupCapsNet-G8 8 1.34M 80.98%
It is evident that the network performance is optimal when the number of capsule groups is 2. It should be noted that the grouped capsule network with a group number of 1 is essentially the original capsule network; in Table 1, g denotes the number of groups and weight denotes the number of weights.
In addition, experiments show that, in CT intracranial hemorrhage region segmentation, the trained grouped capsule network based on the squashing function described in this embodiment achieves a Dice coefficient of 87.26% and an IOU (overlap rate) of 76.34%, whereas the trained grouped capsule network based on the prior-art squashing function achieves a Dice coefficient of 87.02% and an IOU of 76.15%. Clearly, the squashing function described in this embodiment not only does not reduce the performance of the grouped capsule network, but also improves it to a certain extent. Of course, the two trained grouped capsule networks use the same training samples during training.
In the technical solution of the medical image processing method provided by this embodiment, since the intermediate voting capsules corresponding to each input capsule determine only some of the output capsules, each output capsule also corresponds only to the intermediate voting capsules of some of the input capsules. Compared with the prior art, in which each output capsule must be determined from the intermediate voting capsules of every input capsule, this greatly reduces the number of intermediate voting capsules used as input parameters when calculating the output capsules, thereby reducing the amount of calculation when determining the output capsules and increasing the speed of inter-layer calculation, so that the layer depth of the capsule network can be greatly increased under the existing level of computing power, thereby improving the prediction accuracy of the capsule network model.
Embodiment 3
FIG. 6 is a structural block diagram of the medical image processing apparatus provided by Embodiment 3 of the present invention. The apparatus is used to execute the medical image processing method provided by any of the above embodiments, and may be implemented in software or hardware. The apparatus includes:
a predicted sequence diagram determination module 11, used to input all medical image sequences containing patient bleeding information into at least one trained grouped capsule network model to obtain a predicted sequence diagram, wherein, during inter-layer calculation of the grouped capsule network model, the intermediate voting capsules corresponding to each input capsule determine the output of only some of the output capsules, so as to reduce the number of intermediate voting capsules used as input parameters when calculating the output capsules;
a bleeding volume determination module 12, used to determine the patient's bleeding volume according to the predicted sequence diagram.
Optionally, the predicted sequence diagram determination module 11 specifically inputs all medical image sequences containing patient bleeding information into at least two trained grouped capsule network models to obtain the predicted sequence diagram output by each trained grouped capsule network model, and performs image fusion on the corresponding predicted images in each predicted sequence diagram to update the predicted sequence diagram.
Optionally, the predicted sequence diagram determination module 11 includes an inter-layer calculation unit, which is used to:
divide the received input capsules evenly into an even number of capsule groups according to capsule type; determine the intermediate voting capsules corresponding to the input capsules of each capsule type in each capsule group, where the number of intermediate voting capsules corresponding to each capsule type is the same as the number of input capsules of that capsule type; cluster, by a dynamic routing algorithm, the intermediate voting capsules that have the same identifier and come from input capsules of different capsule types in the same capsule group, to obtain the main capsules; and perform a nonlinear transformation on the main capsules to generate the output capsules.
As shown in FIG. 7, the apparatus also includes an image acquisition module 10, used to truncate the gray values of the medical image sequence that meets the resolution requirement to a preset gray-scale interval, and to perform gray-scale normalization on the truncated medical image sequence to update the medical image sequence.
Optionally, the bleeding volume determination module 12 is used to determine the bleeding region area of each predicted image in the predicted sequence diagram by threshold binarization, and to determine the bleeding volume according to the bleeding region area of each predicted image.
In the technical solution of the medical image processing apparatus provided by the embodiment of the present invention, the predicted sequence diagram determination module inputs all medical image sequences containing patient bleeding information into at least one trained grouped capsule network model to obtain a predicted sequence diagram, wherein, during inter-layer calculation of the grouped capsule network model, the intermediate voting capsules corresponding to each input capsule determine the output of only some of the output capsules, so as to reduce the number of intermediate voting capsules used as input parameters when calculating the output capsules; the bleeding volume determination module then determines the patient's bleeding volume according to the predicted sequence diagram. Since the intermediate voting capsules corresponding to each input capsule determine only some of the output capsules, each output capsule also corresponds only to the intermediate voting capsules of some of the input capsules. Compared with the prior art, in which each output capsule must be determined from the intermediate voting capsules of every input capsule, this greatly reduces the number of intermediate voting capsules used as input parameters when calculating the output capsules, thereby reducing the amount of calculation when determining the output capsules and increasing the speed of inter-layer calculation, so that the layer depth of the capsule network model can be greatly increased under the current level of computing power, thereby improving the prediction accuracy of the capsule network model.
The medical image processing apparatus provided by the embodiment of the present invention can execute the medical image processing method provided by any embodiment of the present invention, and has the corresponding functional modules and beneficial effects for executing the method.
Embodiment 4
FIG. 8 is a structural block diagram of the medical image processing device provided by Embodiment 4 of the present invention. As shown in FIG. 8, the device includes a processor 201, a memory 202, an input device 203 and an output device 204; the number of processors 201 in the device may be one or more, and one processor 201 is taken as an example in FIG. 8; the processor 201, memory 202, input device 203 and output device 204 in the device may be connected by a bus or in other ways, and connection by a bus is taken as an example in FIG. 8.
As a computer-readable storage medium, the memory 202 can be used to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the medical image processing method in the embodiment of the present invention (for example, the predicted sequence diagram determination module 11 and the bleeding volume determination module 12). The processor 201 executes various functional applications and data processing of the device by running the software programs, instructions and modules stored in the memory 202, that is, implements the above-described medical image processing.
The memory 202 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the terminal, and so on. In addition, the memory 202 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. In some examples, the memory 202 may further include memories remotely located relative to the processor 201, and these remote memories may be connected to the device through a network. Examples of the network include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks and combinations thereof.
The input device 203 can be used to receive input digital or character information and to generate key signal inputs related to the user settings and function control of the device.
The output device 204 may include a display device such as a display screen, for example, the display screen of a user terminal.
Embodiment 5
Embodiment 5 of the present invention also provides a storage medium containing computer-executable instructions, which, when executed by a computer processor, are used to execute a medical image processing method, the method including:
inputting all medical image sequences containing patient bleeding information into at least one trained grouped capsule network model to obtain a predicted sequence diagram, wherein, during inter-layer calculation of the grouped capsule network model, the intermediate voting capsules corresponding to each input capsule determine the output of only some of the output capsules, so as to reduce the number of intermediate voting capsules used as input parameters when calculating the output capsules;
determining the patient's bleeding volume according to the predicted sequence diagram.
Of course, in the storage medium containing computer-executable instructions provided by the embodiment of the present invention, the computer-executable instructions are not limited to the method operations described above, and can also execute related operations in the medical image processing method provided by any embodiment of the present invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the present invention can be implemented by software plus the necessary general-purpose hardware, and of course it can also be implemented by hardware, but in many cases the former is the better implementation. Based on this understanding, the part of the technical solution of the present invention that is essential or that contributes to the prior art can be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium, such as a computer floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk or an optical disk, and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute the medical image processing method described in each embodiment of the present invention.
It is worth noting that, in the above embodiment of the medical image processing apparatus, the units and modules included are only divided according to functional logic, but the division is not limited to the above as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for the convenience of distinguishing them from each other and are not used to limit the protection scope of the present invention.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will understand that the present invention is not limited to the specific embodiments described here, and that various obvious changes, readjustments and substitutions can be made without departing from the protection scope of the present invention. Therefore, although the present invention has been described in some detail through the above embodiments, the present invention is not limited to the above embodiments and may also include more other equivalent embodiments without departing from the concept of the present invention; the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

  1. A medical image processing method, comprising:
    inputting all medical image sequences containing patient bleeding information into at least one trained grouped capsule network model to obtain a predicted sequence diagram, wherein, during inter-layer calculation of the grouped capsule network model, the intermediate voting capsules corresponding to each input capsule determine the output of only some of the output capsules, so as to reduce the number of intermediate voting capsules used as input parameters when calculating the output capsules;
    determining the patient's bleeding volume according to the predicted sequence diagram.
  2. The method according to claim 1, wherein inputting all medical image sequences containing patient bleeding information into at least two trained grouped capsule network models to obtain the predicted sequence diagram comprises:
    inputting all medical image sequences containing patient bleeding information into at least two trained grouped capsule network models to obtain the predicted sequence diagram output by each trained grouped capsule network model;
    performing image fusion on the corresponding predicted images in each predicted sequence diagram to update the predicted sequence diagram.
  3. The method according to claim 1, wherein the inter-layer calculation method comprises:
    dividing the received input capsules evenly into an even number of capsule groups according to capsule type;
    determining the intermediate voting capsules corresponding to the input capsules of each capsule type in each capsule group, wherein the number of intermediate voting capsules corresponding to each capsule type is the same as the number of input capsules of that capsule type;
    clustering, by a dynamic routing algorithm, the intermediate voting capsules that have the same identifier and come from input capsules of different capsule types in the same capsule group, to obtain main capsules;
    performing a nonlinear transformation on the main capsules to generate output capsules.
  4. The method according to claim 3, wherein the nonlinear transformation method comprises:
    performing a nonlinear transformation on the main capsule through the following nonlinear transformation function to update the output capsule:
    [nonlinear transformation function given as equation image PCTCN2020129483-appb-100001 in the original filing]
    where the main capsule and the output capsule appear as equation images (PCTCN2020129483-appb-100002 and PCTCN2020129483-appb-100003), L is the capsule layer, and t' is the capsule type.
  5. The method according to claim 1, wherein the method for determining the medical image sequence comprises:
    truncating the gray values of the medical image sequence that meets the resolution requirement to a preset gray-scale interval;
    performing gray-scale normalization on the truncated medical image sequence to update the medical image sequence.
  6. The method according to claim 1, wherein determining the patient's bleeding volume according to the predicted sequence diagram comprises:
    determining the bleeding region area of each predicted image in the predicted sequence diagram by threshold binarization;
    determining the bleeding volume according to the bleeding region area of each predicted image.
  7. The method according to any one of claims 1 to 6, wherein the at least two trained grouped capsule networks are each trained on different training samples having the same resolution.
  8. A medical image processing apparatus, comprising:
    a predicted sequence diagram determination module, used to input all medical image sequences containing patient bleeding information into at least one trained grouped capsule network model to obtain a predicted sequence diagram, wherein, during inter-layer calculation of the grouped capsule network model, the intermediate voting capsules corresponding to each input capsule determine the output of only some of the output capsules, so as to reduce the number of intermediate voting capsules used as input parameters when calculating the output capsules;
    a bleeding volume determination module, used to determine the patient's bleeding volume according to the predicted sequence diagram.
  9. A medical image processing device, comprising:
    one or more processors;
    a storage device for storing one or more programs;
    wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the medical image processing method according to any one of claims 1 to 7.
  10. A storage medium containing computer-executable instructions, wherein the computer-executable instructions, when executed by a computer processor, are used to execute the medical image processing method according to any one of claims 1 to 7.
PCT/CN2020/129483 2020-03-19 2020-11-17 Medical image processing method, apparatus, device and storage medium WO2021184799A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010180488.6 2020-03-19
CN202010180488.6A CN111292322B (zh) 2020-03-19 2020-03-19 Medical image processing method, apparatus, device and storage medium

Publications (1)

Publication Number Publication Date
WO2021184799A1 true WO2021184799A1 (zh) 2021-09-23

Family

ID=71029605

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/129483 WO2021184799A1 (zh) 2020-03-19 2020-11-17 Medical image processing method, apparatus, device and storage medium

Country Status (2)

Country Link
CN (1) CN111292322B (zh)
WO (1) WO2021184799A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111292322B (zh) * 2020-03-19 2024-03-01 中国科学院深圳先进技术研究院 医学图像处理方法、装置、设备及存储介质
CN112348119B (zh) * 2020-11-30 2023-04-07 华平信息技术股份有限公司 基于胶囊网络的图像分类方法、存储介质及电子设备
CN116051463A (zh) * 2022-11-04 2023-05-02 中国科学院深圳先进技术研究院 医学图像处理方法、装置、计算机设备及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109300107A (zh) * 2018-07-24 2019-02-01 深圳先进技术研究院 Plaque processing method, apparatus and computing device for magnetic resonance vessel wall imaging
CN110503654A (zh) * 2019-08-01 2019-11-26 中国科学院深圳先进技术研究院 Medical image segmentation method and system based on a generative adversarial network, and electronic device
US20190370972A1 (en) * 2018-06-04 2019-12-05 University Of Central Florida Research Foundation, Inc. Capsules for image analysis
CN111292322A (zh) * 2020-03-19 2020-06-16 中国科学院深圳先进技术研究院 Medical image processing method, apparatus, device and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105512723B (zh) * 2016-01-20 2018-02-16 南京艾溪信息科技有限公司 Artificial neural network computing apparatus and method for sparse connections
CN108985316B (zh) * 2018-05-24 2022-03-01 西南大学 Capsule network image classification and recognition method based on an improved reconstruction network
CN108898577B (zh) * 2018-05-24 2022-03-01 西南大学 Apparatus and method for identifying benign and malignant pulmonary nodules based on an improved capsule network
CN110458852B (zh) * 2019-08-13 2022-10-21 四川大学 Lung tissue segmentation method, apparatus, device and storage medium based on a capsule network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190370972A1 (en) * 2018-06-04 2019-12-05 University Of Central Florida Research Foundation, Inc. Capsules for image analysis
CN109300107A (zh) * 2018-07-24 2019-02-01 深圳先进技术研究院 Plaque processing method, apparatus and computing device for magnetic resonance vessel wall imaging
CN110503654A (zh) * 2019-08-01 2019-11-26 中国科学院深圳先进技术研究院 Medical image segmentation method and system based on a generative adversarial network, and electronic device
CN111292322A (zh) * 2020-03-19 2020-06-16 中国科学院深圳先进技术研究院 Medical image processing method, apparatus, device and storage medium

Also Published As

Publication number Publication date
CN111292322A (zh) 2020-06-16
CN111292322B (zh) 2024-03-01

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20925401

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20925401

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10.07.2023)
