WO2020125498A1 - Cardiac magnetic resonance image segmentation method and apparatus, terminal and storage medium - Google Patents


Info

Publication number
WO2020125498A1
WO2020125498A1 · PCT/CN2019/124351
Authority
WO
WIPO (PCT)
Prior art keywords
magnetic resonance
resonance image
cardiac magnetic
training sample
sample set
Prior art date
Application number
PCT/CN2019/124351
Other languages
English (en)
Chinese (zh)
Inventor
冉崇阳
刘平
钱银铃
王琼
Original Assignee
Shenzhen Institutes of Advanced Technology (深圳先进技术研究院)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institutes of Advanced Technology (深圳先进技术研究院)
Publication of WO2020125498A1

Links

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/10 — Segmentation; Edge detection
    • G06T7/11 — Region-based segmentation

Definitions

  • the present application belongs to the technical field of medical image processing, and particularly relates to a cardiac magnetic resonance image segmentation method, device, terminal device, and computer-readable storage medium.
  • Medical images refer to the image data acquired by medical imaging equipment such as CT (Computed Tomography), MRI (Magnetic Resonance Imaging), B-mode ultrasound, or PET (Positron Emission Tomography), and are generally three-dimensional image data composed of two-dimensional slices.
  • Medical image segmentation is a key method for processing medical images. It refers to distinguishing regions with special meanings in a medical image; these regions do not overlap one another, and each region satisfies a consistency criterion specific to that region.
  • the use of neural network algorithms to process medical images has achieved extremely good results, and the processing methods of cardiac magnetic resonance images generally include segmentation algorithms based on two-dimensional neural networks and segmentation algorithms based on three-dimensional neural networks.
  • the segmentation algorithm based on a two-dimensional neural network trains a two-dimensional convolutional neural network on each two-dimensional slice of labeled cardiac magnetic resonance images, so that the trained network can automatically segment other cardiac magnetic resonance images;
  • the segmentation algorithm based on the three-dimensional neural network specifically trains the three-dimensional convolutional neural network directly with the labeled three-dimensional cardiac magnetic resonance image, so that it can automatically segment other three-dimensional cardiac magnetic resonance images.
  • the segmentation of medical images is intended for application in clinical medicine, so segmentation accuracy is paramount: the higher, the better.
  • although existing cardiac magnetic resonance image segmentation algorithms, whether based on two-dimensional or three-dimensional convolutional neural networks, can segment cardiac magnetic resonance images automatically with accuracy that greatly exceeds earlier traditional algorithms, their segmentation accuracy is still too low to meet the requirements of clinical medicine.
  • embodiments of the present application provide a cardiac magnetic resonance image segmentation method, device, terminal device, and computer-readable storage medium to solve the problem of low accuracy of segmentation of existing cardiac magnetic resonance images.
  • a first aspect of the embodiments of the present application provides a cardiac magnetic resonance image segmentation method, including:
  • the three-dimensional fully convolutional neural network model is a U-shaped fully convolutional network, which includes a contraction part and an expansion part corresponding to the contraction part; each convolution in the contraction part and the expansion part is preceded by a dense block, the dense block is a DenseNet network with a preset number of layers, and the connection between the layers of the dense block adopts a composite operation consisting of batch normalization, a ReLU activation function, and convolution;
  • the segmentation result of the cardiac magnetic resonance image to be segmented is obtained.
  • before acquiring the cardiac magnetic resonance image to be segmented, the method further includes:
  • the training the pre-established three-dimensional fully convolutional neural network model according to the pre-processed training sample set includes:
  • the target sub-sample is input into the pre-built three-dimensional fully convolutional neural network model for training.
  • the second data preprocessing operation on the training sample set includes:
  • the method further includes:
  • the trained three-dimensional fully convolutional neural network model is tested to obtain a test result.
  • the obtaining the segmentation result of the cardiac magnetic resonance image to be segmented according to the classification result includes:
  • the classification result with the largest count is used as the segmentation result of the cardiac magnetic resonance image to be segmented.
  • a second aspect of an embodiment of the present application provides a cardiac magnetic resonance image segmentation device, including:
  • the acquisition module is used to acquire the magnetic resonance image of the heart to be segmented
  • a first preprocessing module configured to perform a first data preprocessing operation on the cardiac magnetic resonance image to be segmented
  • a classification module, which is used to input the preprocessed cardiac magnetic resonance image to be segmented into a pre-trained three-dimensional fully convolutional neural network model to obtain a classification result
  • the three-dimensional fully convolutional neural network model is a U-shaped fully convolutional network, which includes a contraction part and an expansion part corresponding to the contraction part; each convolution in the contraction part and the expansion part is preceded by a dense block, the dense block is a DenseNet network with a preset number of layers, and the connection between the layers of the dense block adopts a composite operation consisting of batch normalization, a ReLU activation function, and convolution;
  • the segmentation module is used to obtain the segmentation result of the cardiac magnetic resonance image to be segmented according to the classification result.
  • the method further includes:
  • Training sample set acquisition module used to obtain training sample set
  • a second preprocessing module configured to perform a second data preprocessing operation on the training sample set
  • the training module is configured to train the pre-established three-dimensional fully convolutional neural network model according to the pre-processed training sample set.
  • the training module includes:
  • a first selection unit configured to randomly select a training sample from the training sample set as the target training sample
  • a second selection unit used to randomly select target sub-samples of preset dimensions from the target training samples
  • the input unit is configured to input the target sub-samples into the pre-established three-dimensional fully convolutional neural network model for training.
  • the second pre-processing module includes:
  • a normalization unit configured to perform a normalization operation on the training sample set, so that the average value of the image matrix of the images in the training sample set is 0, and the variance is 1;
  • a data enhancement unit is used to perform data enhancement operations on the training sample set.
  • the method further includes:
  • the test module is used to obtain a test sample set; according to the test sample set, the trained three-dimensional fully convolutional neural network model is tested to obtain a test result.
  • the segmentation module includes:
  • the statistical unit is used to count the number of each classification result
  • the classification result with the largest count is used as the segmentation result of the cardiac magnetic resonance image to be segmented.
  • a third aspect of the embodiments of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of the cardiac magnetic resonance image segmentation method according to any one of the above first aspects are implemented.
  • a fourth aspect of the embodiments of the present application provides a computer-readable storage medium that stores a computer program; when the computer program is executed by a processor, the steps of the cardiac magnetic resonance image segmentation method according to any one of the above first aspects are implemented.
  • the embodiments of the present application segment the cardiac magnetic resonance image to be segmented using a U-shaped fully convolutional network model. The U-shaped fully convolutional network model can propagate global features to high-resolution network layers, and each convolution in the contraction and expansion parts of the network model is preceded by a dense block, that is, a DenseNet network is added. The composite operation between the layers of the DenseNet network strengthens feature propagation and encourages feature reuse, so the network model combines the advantages of the U-Net network and DenseNet and improves segmentation accuracy.
  • FIG. 1 is a schematic block diagram of a flow of a cardiac magnetic resonance image segmentation method provided by an embodiment of the present application
  • FIG. 2 is a schematic diagram of a U-DenseNet network structure provided by an embodiment of this application.
  • FIG. 3 is a schematic block diagram of another process of a cardiac magnetic resonance image segmentation method according to an embodiment of the present application.
  • FIG. 4 is a schematic block diagram of a specific process of step S302 provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of the automatic segmentation result of the U-DenseNet network model on the training set provided by the embodiment of the present application;
  • FIG. 6 is a schematic diagram of an automatic segmentation result of a U-DenseNet network model on a test set provided by an embodiment of this application;
  • FIG. 7 is a schematic structural block diagram of a cardiac magnetic resonance image segmentation device according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a terminal device provided by an embodiment of the present application.
  • FIG. 1 is a schematic block diagram of a flow chart of a cardiac magnetic resonance image segmentation method according to an embodiment of the present application.
  • the method may include the following steps:
  • Step S101 Acquire a cardiac magnetic resonance image to be segmented.
  • Step S102 Perform a first data preprocessing operation on the cardiac magnetic resonance image to be segmented.
  • the foregoing first data preprocessing operation may include, but is not limited to, a normalization operation.
  • the image is normalized so that the mean of its image matrix becomes zero and its variance becomes one.
  • the above first pre-processing operation may also include other conventional pre-processing operations, such as filtering, which is not limited herein.
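The normalization operation described above can be sketched in pure Python. The patent does not give an implementation, so this helper and its name are illustrative only; it shifts and scales a flat list of voxel intensities to zero mean and unit variance:

```python
import math

def normalize(volume):
    """Normalize a flat list of voxel intensities to zero mean, unit variance.

    Illustrative sketch only; a real pipeline would operate on a 3D array.
    """
    n = len(volume)
    mean = sum(volume) / n
    # Population variance of the intensities.
    var = sum((v - mean) ** 2 for v in volume) / n
    # Guard against a perfectly flat image (zero variance).
    std = math.sqrt(var) or 1.0
    return [(v - mean) / std for v in volume]
```

After this step the image matrix has mean 0 and variance 1, matching the preprocessing the patent describes for both the image to be segmented and the training samples.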
  • Step S103 The preprocessed cardiac magnetic resonance image to be segmented is input into a pre-trained three-dimensional fully convolutional neural network model to obtain a classification result; wherein, the three-dimensional fully convolutional neural network model is a U-shaped fully convolutional network, including a contraction part and an expansion part corresponding to the contraction part, and a dense block is connected in front of each convolution in the contraction part and the expansion part. The dense block is a DenseNet network with a preset number of layers, and the connection between the layers of the dense block adopts a composite operation consisting of batch normalization, a ReLU activation function layer, and a convolution layer.
  • the above dense block is a DenseNet network with a preset number of layers, and the preset number can be set according to actual needs.
  • the preset number may be 3, that is, the dense block is a 3-layer DenseNet network.
  • the preset number can also be more or less, but the greater the number of layers does not mean the better the effect.
  • the three-dimensional fully convolutional neural network model is a U-shaped fully convolutional network; that is, the overall network structure is U-shaped, similar to that of the U-Net network.
  • the U-shaped network model includes a contraction part and an expansion part.
  • a dense block (DenseBlock) is added in front of each convolution of the contraction part and the expansion part.
  • the DenseBlock is a 3-layer DenseNet network, and between its layers a composite operation consisting of batch normalization (BN), a ReLU activation function, and convolution (conv) is adopted.
  • the cuboid in the figure represents the features extracted by the network, and the numbers on the cuboid represent the number of features.
  • the overall structure of the network is similar to the shape of the letter "U", including a contracted portion and a corresponding extended portion.
  • the left portion of the U-shaped structure in FIG. 2 is the contracted portion, and the right portion is the extended portion.
  • a DenseBlock is added before each contraction and expansion.
  • the DenseBlock is illustrated in the lower-left corner of the figure; it is specifically a 3-layer DenseNet, and its layers are connected using the composite operation BN+ReLU+conv.
  • in the DenseBlock diagram, the number on the first cuboid from the left is n, the number on the second cuboid is θn, and the number on the third cuboid is n; the numbers on the fourth, fifth, and sixth cuboids are θ(θn+n), θn, and n.
  • Concat connections are used between the second and third cuboids, between the fourth and fifth cuboids, and between the fifth and sixth cuboids; the two cuboids joined by a concat are not themselves layers.
  • two BN+ReLU+conv composite operations are used, giving a total of 3 layers.
  • the letter ⁇ represents the ratio of the number of output features of each layer in the DenseBlock to the number of input features.
  • the ⁇ may be but not limited to 1.0, and in the expansion part it may be but not limited to 0.5.
  • the parameters of each convolution in the network model can be set according to actual needs.
  • the size of the first convolution kernel in the network model is (3, 3, 3) with a stride of (2, 2, 2); the size of the penultimate convolution kernel is (1, 1, 1) with a stride of (1, 1, 1). Except for the first and penultimate convolutions, all other convolution kernels have size (3, 3, 3) and stride (1, 1, 1).
  • Both the pooling operation and the deconvolution operation in this network model are set to have a kernel size of (2, 2, 2) and a stride of (2, 2, 2).
  • Each deconvolution operation is followed by a ReLu operation.
  • It can be seen from FIG. 2 that the model takes a cardiac magnetic resonance image as input and, after passing through the U-DenseNet network model, outputs the corresponding classification result.
  • the U-DenseNet network model absorbs the advantages of both the U-Net network and the DenseNet network. As a whole it is a U-shaped fully convolutional neural network, so it can propagate global features to high-resolution network layers while also making the network easier to train. Adding a DenseBlock before each convolution of the contraction and expansion parts greatly strengthens feature propagation and encourages feature reuse. In addition, the U-DenseNet network model is very stable: even with different initial parameter settings it converges to a good result, and it requires only a very small data set, from which it can still produce excellent predictions.
  • Step S104 Obtain a segmentation result of the cardiac magnetic resonance image to be segmented according to the classification result.
  • the output of the U-DenseNet network model is a prediction result: the output of the network's last convolution undergoes a softmax operation and an argmax operation to become the prediction given by the network, and a voting method must then be applied to the network's outputs to obtain the final segmentation result.
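The softmax-then-argmax step applied to the last convolution's output can be sketched per voxel as follows. This is an illustrative pure-Python helper, not the patent's implementation; in the real model the operation runs over the class channel of every voxel at once:

```python
import math

def softmax_argmax(logits):
    """Softmax over one voxel's class logits, then argmax to a class index.

    Illustrative sketch of the per-voxel classification step.
    """
    m = max(logits)                            # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]          # softmax probabilities
    return probs.index(max(probs))             # argmax: predicted class index
```

Since argmax is unaffected by the monotonic softmax, the probabilities matter only when predictions from overlapping sub-volumes are compared or combined downstream.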
  • the process of obtaining the segmentation result of the cardiac magnetic resonance image to be segmented according to the classification results may specifically include: counting the number of each classification result, and taking the classification result with the largest count as the segmentation result of the cardiac magnetic resonance image to be segmented.
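The voting step above reduces to a majority vote over the per-voxel class predictions collected from the network's outputs. A minimal sketch (the helper name is illustrative; the patent does not specify tie-breaking, and `Counter.most_common` keeps the first-encountered class on ties):

```python
from collections import Counter

def vote(predictions):
    """Return the classification result that occurs most often.

    'predictions' is the list of class labels predicted for one voxel
    across overlapping sub-volume passes; the most frequent label wins.
    """
    return Counter(predictions).most_common(1)[0][0]
```

Applying this voxel by voxel yields a final segmentation with the same dimensions as the original image.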
  • the cardiac magnetic resonance image to be segmented is segmented using a U-shaped fully convolutional network model. The U-shaped fully convolutional network model can propagate global features to high-resolution network layers, and each convolution in the contraction and expansion parts of the network model is preceded by a dense block, that is, a DenseNet network is added. The composite operation the DenseNet network adopts between layers strengthens feature propagation and encourages feature reuse, so the network model combines the advantages of the U-Net network and DenseNet and improves segmentation accuracy.
  • the U-DenseNet network model in the above embodiment needs to be trained and tested before image segmentation can be performed. This embodiment will describe the training process.
  • FIG. 3 is a schematic block diagram of another process of a cardiac magnetic resonance image segmentation method according to an embodiment of the present application.
  • the method may include the following steps:
  • Step S301 Acquire a training sample set, and perform a second data preprocessing operation on the training sample set.
  • the data preprocessing operation may include a normalization operation and a data enhancement operation.
  • the training sample set may be specifically the training sample set in the public data set HVSMR2016.
  • the specific process of performing the second data preprocessing operation on the training sample set may include: performing a normalization operation on the training sample set, so that the average value of the image matrix of the images in the training sample set is 0, and the variance is 1; Perform data enhancement operations on the training sample set.
  • data normalization is performed on the image so that the mean of its image matrix becomes 0 and its variance becomes 1; then, data enhancement is performed on the image.
  • the data enhancement may specifically include rotations of 90°, 180°, and 270°, and a flip along the axial direction.
  • the above preprocessing operations may also include other conventional preprocessing operations, such as filtering.
  • the above data enhancement operations may also specifically include other operations, which are not limited herein.
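The enhancement operations above can be sketched on nested lists. This is an illustrative sketch: the patent does not specify the rotation axis or ordering, so the choices below (in-plane rotations of each slice, flip across the slice axis) are assumptions:

```python
def rotate90(slice2d):
    """Rotate a 2D slice 90° clockwise; applying it two or three times
    gives the 180° and 270° rotations."""
    return [list(row) for row in zip(*slice2d[::-1])]

def flip_axial(volume):
    """Flip a volume (a list of 2D slices) along the axial direction."""
    return volume[::-1]

def augment(volume):
    """Return the identity, the three in-plane rotations, and the axial
    flip of a volume — the enhancement set described in the text."""
    variants = [volume]
    for _ in range(3):
        variants.append([rotate90(s) for s in variants[-1]])
    return variants + [flip_axial(volume)]
```

Each training sample thus contributes five variants to the preprocessed training set, which is one way small data sets such as HVSMR2016 can still train the model well.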
  • Step S302 Train the pre-established three-dimensional fully convolutional neural network model according to the pre-processed training sample set.
  • the specific process of training the pre-established three-dimensional fully convolutional neural network model according to the pre-processed training samples may include:
  • Step S401 randomly select a training sample from the training sample set as the target training sample.
  • Step S402 Randomly select target sub-samples of preset dimensions from the target training samples.
  • Step S403 Input the target sub-samples into a pre-built three-dimensional fully convolutional neural network model for training.
  • a target training sample is randomly selected from the training sample set
  • a target sub-sample with dimensions of 64×64×64 is randomly selected from the target training sample, and the target sub-sample is input into the network model for training. Since the output of the network model's last convolution must undergo softmax and argmax operations to become the prediction given by the network, the dimensions of the prediction result are the same as those of the input sub-image, namely 64×64×64; the voting method therefore also needs to be applied to the network's predictions during testing to obtain a segmentation result with the same dimensions as the test sample.
  • each training sample is different, so a small part of each sample (e.g., of dimension 64×64×64) is randomly selected and fed to the network model for training.
  • multiple 64×64×64 sub-samples can be fed to the network model at a time.
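The random sub-sample selection of steps S401–S403 can be sketched as a random cubic crop. This helper is illustrative (the patent uses 64×64×64 patches; `size` is a parameter here only so the sketch stays small):

```python
import random

def random_patch(volume, size):
    """Randomly crop a size×size×size sub-sample from a 3D volume
    represented as nested lists (z, y, x)."""
    dz, dy, dx = len(volume), len(volume[0]), len(volume[0][0])
    # random.randint is inclusive on both ends, so the patch always fits.
    z = random.randint(0, dz - size)
    y = random.randint(0, dy - size)
    x = random.randint(0, dx - size)
    return [[row[x:x + size] for row in plane[y:y + size]]
            for plane in volume[z:z + size]]
```

At each training step one such patch (or a small batch of them) is drawn from a randomly chosen training sample and fed to the network.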
  • Step S303 Obtain a test sample set, and test the trained three-dimensional fully convolutional neural network model according to the test sample set to obtain a test result.
  • the test sample set may be the test sample set in the data set HVSMR2016; the HVSMR2016 data set includes 10 training samples and 10 test samples.
  • the test sample set also needs to be normalized; the normalized test samples are then input into the pre-trained network model for testing to obtain multiple classification results, and the voting method is applied to these classification results to obtain the final segmentation test result.
  • Step S304 Acquire a cardiac magnetic resonance image to be segmented.
  • Step S305 Perform a first data preprocessing operation on the cardiac magnetic resonance image to be segmented.
  • Step S306 The pre-processed cardiac magnetic resonance image to be segmented is input into a pre-trained three-dimensional fully convolutional neural network model to obtain a classification result.
  • Step S307 Obtain the segmentation result of the cardiac magnetic resonance image to be segmented according to the classification result.
  • step S304 to step S307 are the same as step S101 to step S104 in the foregoing first embodiment, and details are not described herein again.
  • FIG. 5 is a schematic diagram of the automatic segmentation result of the U-DenseNet network model on the training set of the dataset HVSMR2016, and FIG. 6 is a schematic diagram of its automatic segmentation result on the test set of the dataset HVSMR2016. It can be seen from FIG. 5 and FIG. 6 that its segmentation accuracy is higher than that of current two-dimensional and three-dimensional convolutional neural networks.
  • the heart magnetic resonance image is automatically segmented through the U-DenseNet network model, which improves the accuracy of image segmentation.
  • FIG. 7 is a schematic structural block diagram of a cardiac magnetic resonance image segmentation device according to an embodiment of the present application.
  • the device includes:
  • the obtaining module 71 is used to obtain a magnetic resonance image of the heart to be segmented
  • a first preprocessing module 72 configured to perform a first data preprocessing operation on the cardiac magnetic resonance image to be segmented
  • a classification module 73, configured to input the preprocessed cardiac magnetic resonance image to be segmented into a pre-trained three-dimensional fully convolutional neural network model to obtain a classification result
  • the three-dimensional fully convolutional neural network model is a U-shaped fully convolutional network, which includes a contraction part and an expansion part corresponding to the contraction part; each convolution in the contraction part and the expansion part is preceded by a dense block, the dense block is a DenseNet network with a preset number of layers, and the connection between the layers of the dense block adopts a composite operation consisting of batch normalization, a ReLU activation function, and convolution;
  • the segmentation module 74 is used to obtain the segmentation result of the cardiac magnetic resonance image to be segmented according to the classification result.
  • the above device further includes:
  • Training sample set acquisition module used to obtain training sample set
  • a second preprocessing module configured to perform a second data preprocessing operation on the training sample set
  • the training module is configured to train the pre-established three-dimensional fully convolutional neural network model according to the pre-processed training sample set.
  • the training module includes:
  • a first selection unit configured to randomly select a training sample from the training sample set as the target training sample
  • a second selection unit used to randomly select target sub-samples of preset dimensions from the target training samples
  • the input unit is configured to input the target sub-samples into the pre-established three-dimensional fully convolutional neural network model for training.
  • the foregoing second pre-processing module includes:
  • the normalization unit is used to normalize the training sample set, so that the average value of the image matrix of the image in the training sample set is 0, and the variance is 1;
  • the data enhancement unit is used to perform data enhancement operations on the training sample set.
  • the above device further includes:
  • the test module is used to obtain a test sample set; according to the test sample set, the trained three-dimensional fully convolutional neural network model is tested to obtain a test result.
  • the above segmentation module includes:
  • the classification result with the largest count is used as the segmentation result of the cardiac magnetic resonance image to be segmented.
  • cardiac magnetic resonance image segmentation apparatus corresponds one-to-one to the cardiac magnetic resonance image segmentation method in the foregoing embodiment.
  • the cardiac magnetic resonance image to be segmented is segmented using a U-shaped fully convolutional network model. The U-shaped fully convolutional network model can propagate global features to high-resolution network layers, and each convolution in the contraction and expansion parts of the network model is preceded by a dense block, that is, a DenseNet network is added. The composite operation the DenseNet network adopts between layers strengthens feature propagation and encourages feature reuse, so the network model combines the advantages of the U-Net network and DenseNet and improves segmentation accuracy.
  • the terminal device 8 of this embodiment includes: a processor 80, a memory 81, and a computer program 82 stored in the memory 81 and executable on the processor 80.
  • when the processor 80 executes the computer program 82, the steps in the above embodiments of the cardiac magnetic resonance image segmentation method are implemented, for example, steps S101 to S104 shown in FIG. 1.
  • when the processor 80 executes the computer program 82, the functions of each module or unit in the foregoing device embodiments are realized, for example, the functions of modules 71 to 74 shown in FIG. 7.
  • the computer program 82 may be divided into one or more modules or units, and the one or more modules or units are stored in the memory 81 and executed by the processor 80 to complete this application.
  • the one or more modules or units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer program 82 in the terminal device 8.
  • the computer program 82 may be divided into an acquisition module, a first preprocessing module, a classification module, and a segmentation module.
  • the specific functions of each module are as follows:
  • the acquisition module is used to acquire the cardiac magnetic resonance image to be segmented; the first preprocessing module is used to perform a first data preprocessing operation on the cardiac magnetic resonance image to be segmented; the classification module is used to input the preprocessed cardiac magnetic resonance image to be segmented into a pre-trained three-dimensional fully convolutional neural network model to obtain a classification result, wherein the three-dimensional fully convolutional neural network model is a U-shaped fully convolutional network including a contraction part and an expansion part corresponding to the contraction part, a dense block is connected in front of each convolution in the contraction part and the expansion part, the dense block is a DenseNet network with a preset number of layers, and the connection between the layers of the dense block adopts a composite operation consisting of batch normalization, a ReLU activation function, and convolution; the segmentation module is used to obtain the segmentation result of the cardiac magnetic resonance image to be segmented according to the classification result.
  • the terminal device 8 may be a computing device such as a desktop computer, a notebook, a palmtop computer and a cloud server.
  • the terminal device may include, but is not limited to, a processor 80 and a memory 81.
  • FIG. 8 is only an example of the terminal device 8 and does not constitute a limitation on the terminal device 8, which may include more or fewer components than illustrated, combine certain components, or use different components.
  • the terminal device may further include an input and output device, a network access device, a bus, and the like.
  • the so-called processor 80 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory 81 may be an internal storage unit of the terminal device 8, such as a hard disk or a memory of the terminal device 8.
  • the memory 81 may also be an external storage device of the terminal device 8, for example, a plug-in hard disk equipped on the terminal device 8, a smart media card (SMC), a secure digital (SD) card, a flash card, etc.
  • the memory 81 may also include both an internal storage unit of the terminal device 8 and an external storage device.
  • the memory 81 is used to store the computer program and other programs and data required by the terminal device.
  • the memory 81 can also be used to temporarily store data that has been or will be output.
  • the division into each functional unit and module described above is used only as an example for illustration.
  • in practical applications, the above-mentioned functions may be allocated to different functional units or modules as needed; that is, the internal structure of the device is divided into different functional units or modules to complete all or part of the functions described above.
  • the functional units and modules in the embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware, or in the form of software functional units.
  • the specific names of each functional unit and module are only for the purpose of distinguishing each other, and are not used to limit the protection scope of the present application.
  • the disclosed device, terminal device, and method may be implemented in other ways.
  • the device and terminal device embodiments described above are merely illustrative.
  • the division of modules or units is only a logical function division; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or software function unit.
  • if the integrated module or unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • the present application may implement all or part of the processes in the methods of the above embodiments by means of a computer program instructing related hardware.
  • the computer program can be stored in a computer-readable storage medium. When the program is executed by the processor, the steps of the foregoing method embodiments may be implemented.
  • the computer program includes computer program code, and the computer program code may be in a source code form, an object code form, an executable file, or some intermediate form, etc.
  • the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The present invention relates to a cardiac magnetic resonance image segmentation method and apparatus, a terminal, and a computer-readable storage medium. The method comprises the following steps: acquiring a cardiac magnetic resonance image to be segmented (S101); performing a first preprocessing operation on the data of said cardiac magnetic resonance image (S102); inputting the preprocessed cardiac magnetic resonance image to be segmented into a pre-trained three-dimensional fully convolutional neural network model to obtain a classification result (S103); and obtaining a segmentation result of said cardiac magnetic resonance image according to the classification result (S104). The three-dimensional fully convolutional neural network model is a U-shaped fully convolutional network comprising a contraction part and an expansion part corresponding to the contraction part; a dense block is connected before each convolution in the contraction part and the expansion part; the dense block is a DenseNet network with a preset number of layers; and a composite operation consisting of batch normalization, the ReLU activation function, and convolution is used for the connection between the layers of the dense block. The method improves segmentation accuracy.
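As a minimal sketch of step S104, obtaining the segmentation result from the voxel-wise classification result, a common rule (assumed here for illustration; the patent does not specify the exact mapping) is to take the arg-max over the class dimension:

```python
# Hedged sketch: converting per-voxel class scores into a label volume.
# The argmax rule and the 3-class toy setup are illustrative assumptions.
import numpy as np

def classification_to_segmentation(scores: np.ndarray) -> np.ndarray:
    """scores: (num_classes, D, H, W) per-voxel class scores.
    Returns an integer label volume of shape (D, H, W)."""
    return np.argmax(scores, axis=0).astype(np.int32)

# Toy volume with 3 hypothetical classes (e.g. background and two cardiac structures).
rng = np.random.default_rng(0)
scores = rng.random((3, 4, 4, 4))
labels = classification_to_segmentation(scores)
print(labels.shape)  # (4, 4, 4): one class label per voxel
```

Each voxel receives the index of its highest-scoring class, turning the network's classification output into a segmentation mask.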
PCT/CN2019/124351 2018-12-17 2019-12-10 Procédé et appareil de segmentation d'images de résonance magnétique cardiaques, terminal et support de stockage WO2020125498A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811542788.3A CN109785334A (zh) 2018-12-17 2018-12-17 心脏磁共振图像分割方法、装置、终端设备及存储介质
CN201811542788.3 2018-12-17

Publications (1)

Publication Number Publication Date
WO2020125498A1 true WO2020125498A1 (fr) 2020-06-25

Family

ID=66497196

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/124351 WO2020125498A1 (fr) 2018-12-17 2019-12-10 Procédé et appareil de segmentation d'images de résonance magnétique cardiaques, terminal et support de stockage

Country Status (2)

Country Link
CN (1) CN109785334A (fr)
WO (1) WO2020125498A1 (fr)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109785334A (zh) * 2018-12-17 2019-05-21 深圳先进技术研究院 心脏磁共振图像分割方法、装置、终端设备及存储介质
CN110163876B (zh) * 2019-05-24 2021-08-17 山东师范大学 基于多特征融合的左心室分割方法、系统、设备及介质
CN110211166B (zh) * 2019-06-13 2021-10-12 北京理工大学 磁共振图像中视神经分割方法及装置
CN110473243B (zh) * 2019-08-09 2021-11-30 重庆邮电大学 基于深度轮廓感知的牙齿分割方法、装置及计算机设备
CN110738635A (zh) * 2019-09-11 2020-01-31 深圳先进技术研究院 一种特征追踪方法及装置
CN110731777B (zh) * 2019-09-16 2023-07-25 平安科技(深圳)有限公司 基于图像识别的左心室测量方法、装置以及计算机设备
CN110647939B (zh) * 2019-09-24 2022-05-24 广州大学 一种半监督智能分类方法、装置、存储介质及终端设备
CN110866931B (zh) * 2019-11-18 2022-11-01 东声(苏州)智能科技有限公司 图像分割模型训练方法及基于分类的强化图像分割方法
CN110766691A (zh) * 2019-12-06 2020-02-07 北京安德医智科技有限公司 一种心脏磁共振影像分析及心肌病预测的方法、装置
CN112950638B (zh) * 2019-12-10 2023-12-29 深圳华大生命科学研究院 图像分割方法、装置、电子设备及计算机可读存储介质
CN111583207B (zh) * 2020-04-28 2022-04-12 宁波智能装备研究院有限公司 一种斑马鱼幼鱼心脏轮廓确定方法及系统
CN112085162B (zh) * 2020-08-12 2024-02-09 北京师范大学 基于神经网络的磁共振脑组织分割方法、装置、计算设备及存储介质
CN112863650A (zh) * 2021-01-06 2021-05-28 中国人民解放军陆军军医大学第二附属医院 一种基于卷积与长短期记忆神经网络的心肌病识别系统
CN112348818B (zh) * 2021-01-08 2021-08-06 杭州晟视科技有限公司 一种图像分割方法、装置、设备以及存储介质
CN112950652B (zh) * 2021-02-08 2024-01-19 深圳市优必选科技股份有限公司 机器人及其手部图像分割方法和装置
CN113808143B (zh) * 2021-09-06 2024-05-17 沈阳东软智能医疗科技研究院有限公司 图像分割方法、装置、可读存储介质及电子设备
CN113837062A (zh) * 2021-09-22 2021-12-24 内蒙古工业大学 一种分类方法、装置、存储介质及电子设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107424145A (zh) * 2017-06-08 2017-12-01 广州中国科学院软件应用技术研究所 基于三维全卷积神经网络的核磁共振图像的分割方法
CN108109152A (zh) * 2018-01-03 2018-06-01 深圳北航新兴产业技术研究院 医学图像分类和分割方法和装置
US20180240235A1 (en) * 2017-02-23 2018-08-23 Zebra Medical Vision Ltd. Convolutional neural network for segmentation of medical anatomical images
CN109785334A (zh) * 2018-12-17 2019-05-21 深圳先进技术研究院 心脏磁共振图像分割方法、装置、终端设备及存储介质

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015156735A1 (fr) * 2014-04-10 2015-10-15 Singapore Health Services Pte Ltd Procédé et dispositif d'analyse de séquence d'images cardiaques par résonance magnétique (rm)
EP3246873B1 (fr) * 2016-07-15 2018-07-11 Siemens Healthcare GmbH Procédé et unité de traitement de données pour segmenter un objet dans une image médicale
CN107633486B (zh) * 2017-08-14 2021-04-02 成都大学 基于三维全卷积神经网络的结构磁共振图像去噪方法
CN108346145B (zh) * 2018-01-31 2020-08-04 浙江大学 一种病理切片中非常规细胞的识别方法
CN108830854A (zh) * 2018-03-22 2018-11-16 广州多维魔镜高新科技有限公司 一种图像分割方法及存储介质
CN109003299A (zh) * 2018-07-05 2018-12-14 北京推想科技有限公司 一种基于深度学习的计算脑出血量的方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180240235A1 (en) * 2017-02-23 2018-08-23 Zebra Medical Vision Ltd. Convolutional neural network for segmentation of medical anatomical images
CN107424145A (zh) * 2017-06-08 2017-12-01 广州中国科学院软件应用技术研究所 基于三维全卷积神经网络的核磁共振图像的分割方法
CN108109152A (zh) * 2018-01-03 2018-06-01 深圳北航新兴产业技术研究院 医学图像分类和分割方法和装置
CN109785334A (zh) * 2018-12-17 2019-05-21 深圳先进技术研究院 心脏磁共振图像分割方法、装置、终端设备及存储介质

Also Published As

Publication number Publication date
CN109785334A (zh) 2019-05-21

Similar Documents

Publication Publication Date Title
WO2020125498A1 (fr) Procédé et appareil de segmentation d'images de résonance magnétique cardiaques, terminal et support de stockage
JP7297081B2 (ja) 画像分類方法、画像分類装置、医療用電子機器、画像分類機器、及びコンピュータプログラム
US20210049397A1 (en) Semantic segmentation method and apparatus for three-dimensional image, terminal, and storage medium
WO2021169126A1 (fr) Procédé et appareil d'entraînement de modèle de classification de lésion, dispositif informatique et support de stockage
CN111368849B (zh) 图像处理方法、装置、电子设备及存储介质
US20220058821A1 (en) Medical image processing method, apparatus, and device, medium, and endoscope
DE112020004049T5 (de) Krankheitserkennung aus spärlich kommentierten volumetrischen medizinischen bildern unter verwendung eines faltenden langen kurzzeitgedächtnisses
TW202125415A (zh) 三維目標檢測及模型的訓練方法、設備、儲存媒體
US11663819B2 (en) Image processing method, apparatus, and device, and storage medium
WO2019037654A1 (fr) Procédé et appareil de détection d'image 3d, dispositif électronique et support lisible par ordinateur
AU2019430369B2 (en) VRDS 4D medical image-based vein Ai endoscopic analysis method and product
WO2022088572A1 (fr) Procédé de formation de modèle, procédé de traitement et d'alignement d'image, appareil, dispositif et support
WO2023142781A1 (fr) Procédé et appareil de reconstruction en trois dimensions d'image, dispositif électronique et support de stockage
CN111260639A (zh) 多视角信息协作的乳腺良恶性肿瘤分类方法
WO2024066049A1 (fr) Procédé de débruitage d'image pet, dispositif terminal et support de stockage lisible
CN111445550B (zh) Pet图像的迭代重建方法、装置和计算机可读存储介质
Cai et al. Identifying architectural distortion in mammogram images via a se-densenet model and twice transfer learning
WO2021097595A1 (fr) Procédé et appareil pour segmenter une zone de lésion dans une image, et serveur
US11455755B2 (en) Methods and apparatus for neural network based image reconstruction
Jiang et al. Images denoising for COVID-19 chest X-ray based on multi-resolution parallel residual CNN
US11626201B2 (en) Systems and methods to process electronic images for synthetic image generation
WO2021189383A1 (fr) Procédés d'entraînement et de production pour produire un modèle d'image tomodensitométrique à haute énergie, dispositif et support de stockage
CN112288683A (zh) 基于多模态融合的肺结核病判定装置和方法
JP2022526126A (ja) 訓練された深層神経網モデルの再現性能を改善する方法及びそれを用いた装置
US20230215546A1 (en) Systems and methods to process electronic images for synthetic image generation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19900328

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10.11.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19900328

Country of ref document: EP

Kind code of ref document: A1