CN117291878A - Cardiac magnetic resonance image processing method and system - Google Patents
- Publication number
- CN117291878A CN117291878A CN202311176545.3A CN202311176545A CN117291878A CN 117291878 A CN117291878 A CN 117291878A CN 202311176545 A CN202311176545 A CN 202311176545A CN 117291878 A CN117291878 A CN 117291878A
- Authority
- CN
- China
- Prior art keywords
- sequence
- preset
- magnetic resonance
- neural network
- image data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
- G06N3/0442—Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/809—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30048—Heart; Cardiac
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/30—Assessment of water resources
Abstract
The invention discloses a cardiac magnetic resonance image processing method and system. The method comprises: determining a cardiac cine sequence from an acquired cardiac magnetic resonance image; segmenting each sequence in the cardiac cine sequence with a preset segmentation neural network model to obtain a target segmentation region; cropping cube image data of fixed size from the corresponding sequence based on the target segmentation region, and filling the target segmentation region in the cube image with a preset gray value to obtain cube image data to be extracted; extracting feature information from the cube image data to be extracted with a preset extraction neural network, and determining a preliminary classification result from the feature information with the same network; and performing weighted fusion of the preliminary classification results of the sequences to obtain a fusion result, and inputting the fusion result into a preset classifier to obtain a classification result. The method balances the wide applicability of cardiac magnetic resonance imaging with classification accuracy, so that a doctor can determine a patient's condition more quickly.
Description
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a method and a system for processing a cardiac magnetic resonance image.
Background
Cardiac magnetic resonance is an important tool in medicine that helps doctors diagnose disease accurately. However, cardiac magnetic resonance scanning parameters are complex: interference caused by differing scanning parameters must be removed in a data preprocessing stage, and inhomogeneity of the myocardial signal caused by inconsistent parameters and inconsistent technician operation is difficult to avoid. In the prior art, the newer cardiac magnetic resonance sequences depend heavily on contrast agent and on patient cooperation, which greatly limits multiparameter contrast-enhanced cardiac magnetic resonance scanning: it is difficult to offer to all patients, it relies on gadolinium-based contrast agents that some patients cannot tolerate, and the overall scan time is so long that some patients cannot complete it. Most patients can only complete basic sequences, such as a plain (non-contrast) cardiac cine sequence, and prior-art processing of the cardiac cine sequence alone cannot extract accurate information, so classification accuracy is low.
Therefore, how to improve the classification accuracy obtained from the basic sequences of a cardiac magnetic resonance examination, so that a doctor can determine a patient's condition more accurately while keeping cardiac magnetic resonance widely applicable, is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to solve the technical problem that, in the prior art, wide applicability and classification accuracy of cardiac magnetic resonance imaging cannot both be achieved.
To achieve the above object, in one aspect, the present invention provides a cardiac magnetic resonance image processing method, including:
determining a cardiac cine sequence from the acquired cardiac magnetic resonance images, wherein the cardiac cine sequence comprises a first sequence, a second sequence and a third sequence;
segmenting each sequence in the cardiac cine sequence through a preset segmentation neural network model to obtain a target segmentation region;
cropping cube image data of fixed size from the corresponding sequence based on the target segmentation region, and filling the target segmentation region in the cube image with a preset gray value to obtain cube image data to be extracted;
extracting feature information from the cube image data to be extracted through a preset extraction neural network, and determining a preliminary classification result according to the feature information through the preset extraction neural network;
and carrying out weighted fusion on the preliminary classification results corresponding to the sequences to obtain a fusion result, and inputting the fusion result into a preset classifier to obtain a classification result.
Further, the preset segmentation neural network model comprises a contraction path of 4 groups of convolution layers and an expansion path of 4 corresponding groups of up-sampling layers, and each group of convolution layers uses a preset activation function and a max pooling operation.
Further, after the target segmentation region in the cube image is filled with the preset gray value to obtain the cube image data to be extracted, the method further comprises arranging the cube image data to be extracted in the order end-diastole, end-systole, end-diastole.
Further, the feature information specifically includes a cardiac-phase feature, namely motion feature information of systolic and diastolic motion, and a three-dimensional feature.
Further, the preset extraction neural network is specifically a long short-term memory neural network model; the preset extraction neural network is trained in advance with training data, the training data comprise training feature information and the training categories corresponding to it, and the training data comprise a training set and a verification set in a preset proportion.
Further, the weighting parameters in the weighted fusion are parameters trained in advance according to the training data, and the preliminary classification result is specifically the probability of each class corresponding to a single sequence in the cardiac cine sequence.
Further, the fusion result is specifically the probability of each category corresponding to the cardiac cine sequence.
In another aspect, the present invention also provides a cardiac magnetic resonance image processing system, the system including:
the acquisition module is used for determining a cardiac cine sequence from the acquired cardiac magnetic resonance images, wherein the cardiac cine sequence comprises a first sequence, a second sequence and a third sequence;
the segmentation module is used for segmenting each sequence in the cardiac cine sequence through a preset segmentation neural network model to obtain a target segmentation region;
the cropping module is used for cropping cube image data of fixed size from the corresponding sequence based on the target segmentation region, and filling the target segmentation region in the cube image with a preset gray value to obtain cube image data to be extracted;
the extraction module is used for extracting characteristic information from the cube image data to be extracted through a preset extraction neural network, and determining a preliminary classification result according to the characteristic information through the preset extraction neural network;
the classification module is used for carrying out weighted fusion on the primary classification results corresponding to the sequences to obtain fusion results, and inputting the fusion results into a preset classifier to obtain classification results.
Compared with the prior art, the cardiac magnetic resonance image processing method determines a cardiac cine sequence from the acquired cardiac magnetic resonance image, wherein the cardiac cine sequence comprises a first sequence, a second sequence and a third sequence; segments each sequence in the cardiac cine sequence through a preset segmentation neural network model to obtain a target segmentation region; crops cube image data of fixed size from the corresponding sequence based on the target segmentation region, and fills the target segmentation region in the cube image with a preset gray value to obtain cube image data to be extracted; extracts feature information from the cube image data to be extracted through a preset extraction neural network, and determines a preliminary classification result from the feature information through the same network; and performs weighted fusion of the preliminary classification results corresponding to the sequences to obtain a fusion result, which is input into a preset classifier to obtain a classification result. In this way, wide applicability and classification accuracy of cardiac magnetic resonance imaging are both achieved, and a doctor can determine a patient's condition conveniently and rapidly.
Drawings
In order to more clearly illustrate the embodiments of the present description or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments described in the present description, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a cardiac magnetic resonance image processing method according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a cardiac magnetic resonance image processing system according to an embodiment of the present disclosure.
Detailed Description
In order to enable those of ordinary skill in the art to better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Fig. 1 is a schematic flow chart of the cardiac magnetic resonance image processing method according to an embodiment of the present disclosure. Although the present disclosure provides the method steps or apparatus structures shown in the following embodiments or drawings, the method or apparatus may include more or fewer steps or module units obtained through conventional or non-inventive effort. For steps or structures without a necessary logical causal relationship, the execution order of the steps and the module structure of the apparatus are not limited to the execution order or module structure shown in the embodiments or drawings of the present disclosure. When applied in an actual device, server or end product, the described methods or module structures may be executed sequentially or in parallel according to the embodiments or drawings (for example in a parallel-processor or multithreaded environment, or even in a distributed-processing or server-cluster implementation environment).
The cardiac magnetic resonance image processing method provided in the embodiment of the present disclosure may be applied to a terminal device such as a client and a server, as shown in fig. 1, and specifically includes the following steps:
step S101, determining a heart film sequence from acquired heart magnetic resonance images, wherein the heart film sequence comprises a first sequence, a second sequence and a third sequence.
Specifically, the cardiac cine sequence is a basic sequence in cardiac nuclear magnetic imaging, has higher popularity, and can be completed by scanning for most patients. The first sequence is a 2CH sequence, the second sequence is a 4CH sequence, and the third sequence is a SAX sequence.
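The patent does not say how the three views are located in a raw study. As a purely illustrative sketch (assuming DICOM input and that the scanner writes view labels such as "2CH", "4CH" or "SAX" into the SeriesDescription field, which in practice varies between sites), the selection step could look like this:

```python
# Hypothetical sketch: group DICOM slices into the three cine views by series description.
# The "CINE"/"2CH"/"4CH"/"SAX" naming convention is an assumption, not part of the patent.
from pathlib import Path
import pydicom

def collect_cine_views(dicom_dir):
    views = {"2CH": [], "4CH": [], "SAX": []}
    for path in Path(dicom_dir).rglob("*.dcm"):
        ds = pydicom.dcmread(path, stop_before_pixels=True)   # read header only
        desc = getattr(ds, "SeriesDescription", "").upper()
        if "CINE" not in desc:
            continue
        for view in views:
            if view in desc:
                views[view].append(path)
    return views
```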
Step S102, segmenting each sequence in the cardiac cine sequence through a preset segmentation neural network model to obtain a target segmentation region.
Specifically, for the 2CH, 4CH and SAX cardiac cine sequences, corresponding Xception-UNet neural network models are constructed, and myocardial segmentation is performed on each 2D image. The segmentation network uses a U-Net structure comprising a contraction path of 4 groups of convolution layers and a corresponding expansion path of 4 groups of up-sampling layers. The contraction path follows the classical convolutional network structure: each group consists of two repeated 3x3 convolutions, each followed by a ReLU activation function, and a 2x2 max pooling operation with stride 2 for down-sampling. In the expansion path, each step up-samples the feature map, applies a 2x2 convolution, concatenates the result with the correspondingly cropped feature map from the contraction path, and then applies two 3x3 convolutions; ReLU activation is used throughout. The feature extraction part uses the relatively efficient Xception network, and the target segmentation region is specifically the myocardial region.
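The following Keras sketch reproduces the contraction/expansion scheme described above (two 3x3 ReLU convolutions per group, 2x2 max pooling with stride 2, up-sampling plus skip concatenation). The Xception encoder mentioned in the text is omitted for brevity and the filter counts are assumptions, so this illustrates the structure rather than the exact network of the patent:

```python
# Minimal U-Net-style sketch matching the contraction/expansion scheme in the text.
from tensorflow.keras import layers, models

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(80, 80, 1), base_filters=32):
    inputs = layers.Input(input_shape)
    skips, x = [], inputs
    for level in range(4):                           # contraction path: 4 groups of conv layers
        x = conv_block(x, base_filters * 2 ** level)
        skips.append(x)
        x = layers.MaxPooling2D(2, strides=2)(x)     # 2x2 max pooling, stride 2
    x = conv_block(x, base_filters * 16)             # bottleneck
    for level in reversed(range(4)):                 # expansion path: 4 groups of up-sampling layers
        x = layers.UpSampling2D(2)(x)
        x = layers.Conv2D(base_filters * 2 ** level, 2, padding="same", activation="relu")(x)
        x = layers.Concatenate()([x, skips[level]])  # skip connection from contraction path
        x = conv_block(x, base_filters * 2 ** level)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)  # myocardium probability map
    return models.Model(inputs, outputs)
```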
Step S103, cropping cube image data of fixed size from the corresponding sequence based on the target segmentation region, and filling the target segmentation region in the cube image with a preset gray value to obtain cube image data to be extracted.
Specifically, the cube image data may be 80x80x25 pixels in size, containing the myocardium and the surrounding tissue. To exclude morphological interference, the segmented myocardial region is uniformly filled with a preset gray value; and to better capture myocardial changes across the systolic and diastolic phases, the cube image data to be extracted are arranged in the order end-diastole, end-systole, end-diastole.
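A minimal NumPy sketch of this cropping/filling step is given below. It cuts an 80x80 patch around the myocardium centroid in every frame, stacks 25 frames into one cube, overwrites the segmented voxels with a single preset gray value, and rolls the time axis so it starts at end-diastole, so that a full cine cycle runs end-diastole, end-systole, end-diastole. The centroid handling, the fill value of 128 and the ed_index parameter are illustrative assumptions:

```python
import numpy as np

def crop_and_fill(frames, masks, ed_index=0, size=80, n_frames=25, fill_value=128):
    """frames, masks: (T, H, W) arrays; returns a cube of shape (n_frames, size, size)."""
    # roll so the temporal axis starts at end-diastole (assumed known via ed_index)
    frames = np.roll(frames, -ed_index, axis=0)[:n_frames]
    masks = np.roll(masks, -ed_index, axis=0)[:n_frames]
    half = size // 2
    h, w = frames.shape[1:]
    cy, cx = np.argwhere(masks[0]).mean(axis=0).astype(int)   # myocardium centroid at end-diastole
    cy, cx = np.clip(cy, half, h - half), np.clip(cx, half, w - half)
    cube = []
    for img, msk in zip(frames, masks):
        patch = img[cy - half:cy + half, cx - half:cx + half].astype(np.float32)
        region = msk[cy - half:cy + half, cx - half:cx + half] > 0
        patch[region] = fill_value        # overwrite the segmented region with the preset gray value
        cube.append(patch)
    return np.stack(cube)                 # (n_frames, size, size)
```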
Step S104, extracting feature information from the cube image data to be extracted through a preset extraction neural network, and determining a preliminary classification result from the feature information through the same network.
Specifically, the preset extraction neural network model is a 2D-ConvLSTM model based on the long short-term memory (LSTM) neural network. LSTM is a well-known variant of the recurrent neural network (RNN): it retains the characteristics of most recurrent models and can analyse sequence data through its recurrent architecture while maintaining a memory effect. LSTM achieves long-term storage of inputs by adding a memory cell, which acts like an accumulator, together with gated units: the cell carries a self-connected weight at the next time step, so it copies the real value of its own state and accumulates the external signal. This alleviates the vanishing-gradient problem caused by gradients shrinking progressively during backpropagation. Because LSTM mainly models the time dimension while the convolutional part handles the spatial dimension, the 2D-ConvLSTM model can also extract the morphology of the myocardium, that is, its spatial or three-dimensional features.
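As one hedged illustration of such a model (not the patent's exact architecture), a 2D-ConvLSTM classifier over the (25, 80, 80) cube can be sketched with Keras's built-in ConvLSTM2D layer, whose convolutional gates track the systolic/diastolic motion along the time axis while preserving spatial information; the layer widths and the three output classes are assumptions:

```python
# Illustrative 2D-ConvLSTM classifier over a (25, 80, 80, 1) cine cube.
from tensorflow.keras import layers, models

def build_convlstm_classifier(n_frames=25, size=80, n_classes=3):
    inputs = layers.Input((n_frames, size, size, 1))
    x = layers.ConvLSTM2D(32, 3, padding="same", return_sequences=True)(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.ConvLSTM2D(64, 3, padding="same", return_sequences=False)(x)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)   # per-sequence class probabilities
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```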
First, the preset extraction neural network model is trained in advance with training data. The training data comprise training feature information and the training categories corresponding to it, and are split into a training set and a verification set in a preset proportion; the ratio of training set to verification set may be 4:1, and the classification performance of the model is tested on the verification set. A confusion matrix is used to evaluate how accurately the model classifies the different etiologies, and receiver operating characteristic (ROC) curves are used to evaluate the classification performance of the model for each category.
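An illustrative evaluation sketch with scikit-learn is shown below: a stratified 4:1 training/verification split, a confusion matrix over the predicted classes, and a one-vs-rest ROC curve with its AUC for each class. The variable names, the number of epochs and the one-vs-rest treatment are assumptions made for the sake of the example:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, roc_curve, auc

def evaluate(model, cubes, labels, n_classes=3):
    # 4:1 training/verification split, stratified by class
    x_train, x_val, y_train, y_val = train_test_split(cubes, labels, test_size=0.2, stratify=labels)
    model.fit(x_train, np.eye(n_classes)[y_train], epochs=50, batch_size=8, verbose=0)
    probs = model.predict(x_val)                              # (N, n_classes) class probabilities
    print(confusion_matrix(y_val, probs.argmax(axis=1)))      # per-etiology classification accuracy
    for c in range(n_classes):                                # ROC curve per class, one-vs-rest
        fpr, tpr, _ = roc_curve((y_val == c).astype(int), probs[:, c])
        print(f"class {c}: AUC = {auc(fpr, tpr):.3f}")
```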
Step S105, performing weighted fusion on the preliminary classification results corresponding to the sequences to obtain a fusion result, and inputting the fusion result into a preset classifier to obtain a classification result.
Specifically, the fusion result is the probability of each category for the whole cardiac cine sequence. The preset classifier is also trained in the training process; it takes the fusion result as input and outputs the final classification result. The classification result is specifically the health grade of the myocardium and comprises a first grade, a second grade and a third grade, where the first grade indicates better health than the second grade, and the second grade better health than the third grade.
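A hedged sketch of this fusion step follows: the per-class probabilities produced for the 2CH, 4CH and SAX sequences are combined with pre-trained weights, and the fused vector is fed to a final classifier. The patent only states that the weights and the classifier are learned from the training data; the logistic-regression classifier and the example weight values below are stand-in assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fuse_and_classify(probs_2ch, probs_4ch, probs_sax, weights, final_clf):
    """Each probs_* is an (n_classes,) vector; weights has length 3; returns a grade index."""
    fused = weights[0] * probs_2ch + weights[1] * probs_4ch + weights[2] * probs_sax
    fused = fused / fused.sum()                       # fused per-class probability for the whole study
    return int(final_clf.predict(fused.reshape(1, -1))[0])   # 0/1/2 -> first/second/third grade

# usage with illustrative numbers only
weights = np.array([0.3, 0.3, 0.4])                   # in the patent, learned during training
clf = LogisticRegression().fit(np.random.rand(20, 3), np.random.randint(0, 3, 20))  # stand-in classifier
grade = fuse_and_classify(np.array([0.7, 0.2, 0.1]),
                          np.array([0.6, 0.3, 0.1]),
                          np.array([0.5, 0.4, 0.1]), weights, clf)
```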
Based on the above cardiac magnetic resonance image processing method, one or more embodiments of the present specification further provide a platform or system for processing cardiac magnetic resonance images. The platform or system may include devices, software, modules, plug-ins, servers, clients, and the like that use the method described in the embodiments of the present specification in combination with the necessary hardware. Based on the same innovative concept, the system solves the problem in a manner similar to the method, so the implementation of the system may refer to the implementation of the method, and repeated description is omitted. The term "unit" or "module" used below may implement a combination of software and/or hardware with a predetermined function. Although the systems described in the following embodiments are preferably implemented in software, implementations in hardware or in a combination of software and hardware are also possible and contemplated.
Specifically, fig. 2 is a schematic block diagram of an embodiment of a cardiac magnetic resonance image processing system provided in the present specification, and as shown in fig. 2, the cardiac magnetic resonance image processing system provided in the present specification includes:
an acquisition module 201, configured to determine a cardiac cine sequence from acquired cardiac magnetic resonance images, where the cardiac cine sequence includes a first sequence, a second sequence, and a third sequence;
the segmentation module 202 is configured to segment each sequence in the cardiac cine sequence through a preset segmentation neural network model to obtain a target segmentation region;
the cropping module 203 is configured to crop cube image data of a fixed size from the corresponding sequence based on the target segmentation region, and fill the target segmentation region in the cube image with a preset gray value to obtain cube image data to be extracted;
the extracting module 204 is configured to extract feature information from the cube image data to be extracted through a preset extracting neural network, and determine a preliminary classification result according to the feature information through the preset extracting neural network;
the classification module 205 is configured to perform weighted fusion on the preliminary classification results corresponding to each sequence to obtain a fusion result, and input the fusion result into a preset classifier to obtain a classification result.
It should be noted that, the description of the above system according to the corresponding method embodiment may further include other embodiments, and specific implementation manner may refer to the description of the above corresponding method embodiment, which is not described herein in detail.
The embodiment of the application also provides electronic equipment, which comprises:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to perform the method as provided in the above embodiments.
According to the electronic device provided in the embodiments of the present application, the memory stores instructions executable by the processor. When the processor executes the instructions, it can determine a cardiac cine sequence from the acquired cardiac magnetic resonance images, wherein the cardiac cine sequence comprises a first sequence, a second sequence and a third sequence; segment each sequence in the cardiac cine sequence through a preset segmentation neural network model to obtain a target segmentation region; crop cube image data of fixed size from the corresponding sequence based on the target segmentation region, and fill the target segmentation region in the cube image with a preset gray value to obtain cube image data to be extracted; extract feature information from the cube image data to be extracted through a preset extraction neural network, and determine a preliminary classification result from the feature information through the same network; and perform weighted fusion of the preliminary classification results corresponding to the sequences to obtain a fusion result, and input the fusion result into a preset classifier to obtain a classification result. Wide applicability and classification accuracy of cardiac magnetic resonance imaging are thus both achieved, so that a doctor can determine a patient's condition more conveniently and rapidly. The method provided in the embodiments of the present specification may be executed on a mobile terminal, a computer terminal, a server, or a similar computing device.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The method or apparatus according to the foregoing embodiments provided in the present specification may implement service logic through a computer program and be recorded on a storage medium, where the storage medium may be read and executed by a computer, to implement effects of the solutions described in the embodiments of the present specification, for example:
determining a cardiac cine sequence from the acquired cardiac magnetic resonance images, wherein the cardiac cine sequence comprises a first sequence, a second sequence and a third sequence;
segmenting each sequence in the cardiac cine sequence through a preset segmentation neural network model to obtain a target segmentation region;
cropping cube image data of fixed size from the corresponding sequence based on the target segmentation region, and filling the target segmentation region in the cube image with a preset gray value to obtain cube image data to be extracted;
extracting feature information from the cube image data to be extracted through a preset extraction neural network, and determining a preliminary classification result according to the feature information through the preset extraction neural network;
and carrying out weighted fusion on the preliminary classification results corresponding to the sequences to obtain a fusion result, and inputting the fusion result into a preset classifier to obtain a classification result.
The storage medium may include physical means for storing information, typically by digitizing the information and then storing it in an electric, magnetic or optical medium. The storage medium may include: devices that store information using electric energy, such as various memories (e.g. RAM, ROM); devices that store information using magnetic energy, such as hard disks, floppy disks, magnetic tapes, magnetic core memories, bubble memories and USB flash drives; and devices that store information optically, such as CDs or DVDs. Of course, other kinds of readable storage media exist, such as quantum memories and graphene memories.
The above-described apparatus embodiments are merely illustrative, and for example, the division of the units is merely a logical function division, and there may be additional divisions in actual implementation, for example, multiple units or plug-ins may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions executed on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In this specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, the description of the system embodiments is relatively brief because they are substantially similar to the method embodiments; for the relevant parts, reference may be made to the description of the method embodiments. In the description of the present specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example" or "some examples" means that a particular feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present specification. In this specification, schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples, and those skilled in the art may combine different embodiments or examples and their features described in this specification, provided they do not conflict with each other.
Those of ordinary skill in the art will recognize that the embodiments described herein are for the purpose of aiding the reader in understanding the principles of the present invention and should be understood that the scope of the invention is not limited to such specific statements and embodiments. Those of ordinary skill in the art can make various other specific modifications and combinations from the teachings of the present disclosure without departing from the spirit thereof, and such modifications and combinations remain within the scope of the present disclosure.
Claims (8)
1. A method of cardiac magnetic resonance image processing, the method comprising:
determining a cardiac cine sequence from the acquired cardiac magnetic resonance images, wherein the cardiac cine sequence comprises a first sequence, a second sequence and a third sequence;
segmenting each sequence in the cardiac cine sequence through a preset segmentation neural network model to obtain a target segmentation region;
cropping cube image data of fixed size from the corresponding sequence based on the target segmentation region, and filling the target segmentation region in the cube image with a preset gray value to obtain cube image data to be extracted;
extracting feature information from the cube image data to be extracted through a preset extraction neural network, and determining a preliminary classification result according to the feature information through the preset extraction neural network;
and carrying out weighted fusion on the preliminary classification results corresponding to the sequences to obtain a fusion result, and inputting the fusion result into a preset classifier to obtain a classification result.
2. The cardiac magnetic resonance image processing method as set forth in claim 1, wherein the preset segmentation neural network model includes a contraction path with 4 sets of convolution layers and an expansion path with 4 corresponding sets of upsampling layers, and each set of convolution layers uses a preset activation function and a max pooling operation.
3. The method of claim 1, further comprising arranging the cube image data to be extracted in end diastole-end systole-end diastole order after filling a target segmented region in the cube image with a preset gray value to obtain cube image data to be extracted.
4. The cardiac magnetic resonance image processing method as set forth in claim 1, wherein the feature information specifically includes cardiac-phase features, namely motion feature information of systolic and diastolic motion, and three-dimensional features.
5. The cardiac magnetic resonance image processing method as set forth in claim 1, wherein the preset extraction neural network is specifically a long short-term memory neural network model, and the preset extraction neural network is trained in advance by training data, the training data including training feature information and the training categories corresponding thereto, and the training data including a training set and a verification set in a preset proportion.
6. The cardiac magnetic resonance image processing method as set forth in claim 5, wherein the weighting parameters in the weighted fusion are parameters trained in advance according to the training data, and the preliminary classification result is specifically a probability of each class corresponding to a single sequence in the cardiac cine sequence.
7. The cardiac magnetic resonance image processing method as set forth in claim 6, wherein the fusion result is specifically a probability of each category corresponding to a cardiac cine sequence.
8. A cardiac magnetic resonance image processing system, the system comprising:
the acquisition module is used for determining a cardiac cine sequence from the acquired cardiac magnetic resonance images, wherein the cardiac cine sequence comprises a first sequence, a second sequence and a third sequence;
the segmentation module is used for segmenting each sequence in the cardiac cine sequence through a preset segmentation neural network model to obtain a target segmentation region;
the cropping module is used for cropping cube image data of fixed size from the corresponding sequence based on the target segmentation region, and filling the target segmentation region in the cube image with a preset gray value to obtain cube image data to be extracted;
the extraction module is used for extracting characteristic information from the cube image data to be extracted through a preset extraction neural network, and determining a preliminary classification result according to the characteristic information through the preset extraction neural network;
the classification module is used for carrying out weighted fusion on the primary classification results corresponding to the sequences to obtain fusion results, and inputting the fusion results into a preset classifier to obtain classification results.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2023108183739 | 2023-07-05 | ||
CN202310818373 | 2023-07-05 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117291878A true CN117291878A (en) | 2023-12-26 |
Family
ID=89243580
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311176545.3A Pending CN117291878A (en) | 2023-07-05 | 2023-09-13 | Cardiac magnetic resonance image processing method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117291878A (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Liu et al. | Perception consistency ultrasound image super-resolution via self-supervised CycleGAN | |
US11854703B2 (en) | Simulating abnormalities in medical images with generative adversarial networks | |
CN107492099B (en) | Medical image analysis method, medical image analysis system, and storage medium | |
CN110475505B (en) | Automatic segmentation using full convolution network | |
CN111369440B (en) | Model training and image super-resolution processing method, device, terminal and storage medium | |
CN114581662B (en) | Brain tumor image segmentation method, system, device and storage medium | |
CN111368849B (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN111951344B (en) | Magnetic resonance image reconstruction method based on cascade parallel convolution network | |
CN113506310B (en) | Medical image processing method and device, electronic equipment and storage medium | |
WO2022032824A1 (en) | Image segmentation method and apparatus, device, and storage medium | |
CN110570394A (en) | medical image segmentation method, device, equipment and storage medium | |
CN111210444A (en) | Method, apparatus and medium for segmenting multi-modal magnetic resonance image | |
Dong et al. | Identifying carotid plaque composition in MRI with convolutional neural networks | |
CN110751629A (en) | Myocardial image analysis device and equipment | |
Li et al. | S 3 egANet: 3D spinal structures segmentation via adversarial nets | |
Lu et al. | A novel 3D medical image super-resolution method based on densely connected network | |
CN117953341A (en) | Pathological image segmentation network model, method, device and medium | |
Li et al. | Multi-scale residual denoising GAN model for producing super-resolution CTA images | |
Sander et al. | Autoencoding low-resolution MRI for semantically smooth interpolation of anisotropic MRI | |
CN113313728B (en) | Intracranial artery segmentation method and system | |
Li et al. | Rethinking multi-contrast mri super-resolution: Rectangle-window cross-attention transformer and arbitrary-scale upsampling | |
Zhang et al. | 3d cross-scale feature transformer network for brain mr image super-resolution | |
Seo et al. | Cardiac MRI image segmentation for left ventricle and right ventricle using deep learning | |
Zhang et al. | Multi-scale network with the deeper and wider residual block for MRI motion artifact correction | |
CN116580187A (en) | Knee joint image segmentation method and device based on artificial intelligence and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |