CN112215797A - MRI olfactory bulb volume detection method, computer device and computer readable storage medium - Google Patents

MRI olfactory bulb volume detection method, computer device and computer readable storage medium

Info

Publication number
CN112215797A
Authority
CN
China
Prior art keywords
mri
olfactory bulb
decoding
layer
detection method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010957930.1A
Other languages
Chinese (zh)
Inventor
王朝晖
刘高林
刘森
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ouyuan Beijing Technology Co ltd
Original Assignee
Ouyuan Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ouyuan Beijing Technology Co ltd filed Critical Ouyuan Beijing Technology Co ltd
Priority to CN202010957930.1A
Publication of CN112215797A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Geometry (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention provides an MRI olfactory bulb volume detection method based on a deep neural network, a computer device and a computer readable storage medium. The MRI olfactory bulb volume detection method comprises the following steps: step A, acquiring a series of coronal MRI images containing olfactory bulb targets; step B, inputting each coronal MRI image into a trained deep neural network, which labels the olfactory bulb tissue to obtain a mask image; and step C, for the series of mask images with labeled olfactory bulb tissue, calculating the olfactory bulb volume by combining the scaling factor and slice interval thickness of the MRI scan. The invention uses a deep neural network to detect MRI olfactory bulb volume automatically, saving labor and improving efficiency, while being more stable, more general, and more objective and consistent than manual annotation.

Description

MRI olfactory bulb volume detection method, computer device and computer readable storage medium
Technical Field
The invention relates to the technical field of image recognition in the information processing industry, in particular to a Magnetic Resonance Imaging (MRI) olfactory bulb volume detection method based on a deep neural network, a computer device and a computer readable storage medium.
Background
As one of the five basic human senses, olfaction plays an extremely important role in daily life and work. The olfactory bulb is the information relay station of the olfactory system and one of its important components; it lies at the anterior skull base above the cribriform plate, is oval in shape, and connects to the higher olfactory centers and other brain regions through the cord-like olfactory tract.
Numerous earlier studies have demonstrated the plasticity of the olfactory system, and the shape and size of the olfactory bulb, as an important component of that system, change correspondingly under factors such as viral infection, inflammation and trauma. Beyond its close relation to olfactory function, olfactory bulb volume has in recent years been linked by a number of studies to neurodegenerative, affective and psychiatric disorders. Changes in olfactory function have been widely accepted in Europe as an early diagnostic indicator of Alzheimer's disease (AD).
The olfactory bulb volume index is used ever more widely in otorhinolaryngology and neuroscience. With the development of medical imaging, olfactory system imaging has advanced significantly in recent decades. The olfactory bulb is easy to distinguish and measure in olfactory system images, and its volume is widely used as an objective index of olfactory function and of the state of the olfactory system. Olfactory bulb volume measurement is therefore an extremely important task in current and future clinical and scientific work.
In the course of implementing the invention, the applicant found that the traditional MRI olfactory bulb volume detection method relies on manual labeling and calculation, and the person doing the delineation needs a rich medical and imaging background. This raises two problems: first, manual labeling is a fine and complex operation that takes a long time; second, the labeling is somewhat subjective, the learning cost is high, consistency is poor, and the stability of the final labeling result is low.
Disclosure of Invention
Technical problem to be solved
The present invention is intended to solve at least one of the above technical problems at least in part.
(II) technical scheme
In order to achieve the above object, the present invention provides an MRI olfactory bulb volume detection method, which comprises the following steps: step A, acquiring a series of coronal MRI images containing olfactory bulb targets; step B, inputting each coronal MRI image into a trained deep neural network, which labels the olfactory bulb tissue to obtain a mask image; and step C, for the series of mask images with labeled olfactory bulb tissue, calculating the olfactory bulb volume by combining the scaling factor and slice interval thickness of the MRI scan.
In order to achieve the above object, the present invention also provides a computer apparatus. The computer device includes: a memory; and a processor, electrically coupled to the memory, configured to execute the MRI olfactory bulb volume detection method as described above based on instructions stored in the memory.
To achieve the above object, the present invention further provides a computer-readable storage medium. The computer readable storage medium has stored thereon computer instructions which, when executed by a processor, perform the MRI olfactory bulb volume detection method as described above.
(III) advantageous effects
According to the technical scheme, the invention has at least one of the following beneficial effects:
(1) Automatic detection of MRI olfactory bulb volume is realized with a deep neural network, saving labor and improving efficiency while being more stable, more general, and more objective and consistent than manual work.
(2) Based on an end-to-end deep learning framework, the method is usable out of the box with a low barrier to entry, and broadly supports efficient GPU-accelerated computation.
(3) The center-cropping step keeps GPU memory usage low and computation fast; labeling one image takes less than one second on average, greatly reducing the time a patient waits for results.
(4) The deep neural network model adopts an encoding, decoding and cascading scheme: the encoder extracts features of the olfactory bulb tissue at every level, while the decoder and the concatenation operations ensure that features are accurately restored to the original image, realizing pixel-level olfactory bulb detection with high precision and high accuracy.
(5) Each decoding layer comprises a series (concatenation) operation, a double convolution operation and a deconvolution operation. The series operation better restores the contour of the olfactory target in the MRI image at high resolution: by concatenating the corresponding same-level feature maps, the feature maps at each level are helped to be restored step by step to high-resolution images of the original size, so that the learned olfactory bulb contour remains clear at high resolution and matches the physiological morphology of the actual olfactory bulb. Performing the convolution twice improves the nonlinear expressiveness of the merged feature map produced by the series operation and increases the degree of fusion between the two feature maps. Introducing a deconvolution operation into the decoding layer restores smaller feature maps to larger image sizes layer by layer, ensuring that the restored contour of the target features remains clear at high resolution.
Drawings
FIG. 1 is a flow chart of an MRI olfactory bulb volume detection method according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of a convolutional neural network in the MRI olfactory bulb volume detection method shown in fig. 1.
FIG. 3A is a pre-processed coronal MRI image.
Fig. 3B shows the olfactory bulb labeling result labeled by the experts in the verification set.
Fig. 3C is an olfactory bulb labeling result obtained by convolutional neural network processing in the MRI olfactory bulb volume detection method shown in fig. 1.
FIG. 4 is a diagram of a computer device according to another embodiment of the invention.
Fig. 5 is a schematic diagram of a computer-readable storage medium according to another embodiment of the present invention.
Detailed Description
The invention realizes the automatic detection of the volume of the MRI olfactory bulb by utilizing the deep neural network, and can realize pixel-level olfactory bulb detection with high precision and high accuracy by adopting a coding, decoding and cascading mode in the deep neural network model, thereby saving labor and improving efficiency.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings. It should be understood that these embodiments are provided so that this disclosure will satisfy applicable legal requirements, and that this invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein.
In an exemplary embodiment of the invention, an MRI olfactory bulb volume detection method is provided. FIG. 1 is a flow chart of an MRI olfactory bulb volume detection method according to an embodiment of the present invention. As shown in fig. 1, the MRI olfactory bulb volume detection method based on the deep neural network of the present invention includes:
step A, acquiring a series of coronal MRI images including olfactory bulb targets;
it should be noted that, no matter in the detection stage or in the training stage of the deep neural network model, the MRI images of the brain need to be preprocessed, and the description is unified here.
It will be appreciated by those skilled in the art that the olfactory bulb of the patient to be examined is represented by a series of coronal MRI images; each image is a coronal slice of the patient's brain, taken from anterior to posterior, containing the olfactory bulb at a different position.
In this embodiment, for each coronal image, the step a further comprises:
a substep A1, extracting a series of coronal MRI images including olfactory bulb targets from the original DICOM format data of brain MRI images and storing in grayscale format;
in this embodiment, the coronal MRI image is in PNG file format. Of course, in other embodiments of the present invention, other file formats may be used.
A sub-step A2, in which each coronal MRI image is cropped about its center into an L × W coronal MRI image, where L and W are the numbers of pixels in the length and width directions of the cropped coronal MRI image, respectively.
This sub-step aims to increase the area ratio of the olfactory bulb target in the image to be recognized and to improve segmentation precision given the unbalanced target-to-background ratio.
In the course of implementing the invention, the applicant found for the first time that the olfactory bulb generally lies in the middle of the coronal MRI image, which makes cropping the MRI image possible. When cropping, the image to be identified is taken centered on the center of the coronal MRI image, exploiting the facts that the olfactory bulb lies at the center of the coronal MRI image and that its position and shape are relatively fixed.
In this embodiment, L is 480 and W is 480. If L and W of the image to be recognized are set too large, processing complexity increases; if set too small, information about the olfactory bulb target may be lost. Balancing these two considerations, the values may be set so that 300 ≤ L, W ≤ 800.
It can be understood that the center-cropping step keeps the occupied GPU memory low and the computation fast; labeling one image takes less than one second on average, greatly reducing the time a patient waits for results.
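A minimal sketch of the center crop of sub-step A2 follows, with L = W = 480 as in this embodiment; it assumes the source slice is a 2-D grayscale array at least 480 pixels on each side.

```python
# Minimal sketch of sub-step A2: crop an L x W window about the image center,
# where the olfactory bulb is expected to lie (assumes image >= L x W).
import numpy as np

def center_crop(image: np.ndarray, L: int = 480, W: int = 480) -> np.ndarray:
    h, w = image.shape
    top, left = (h - L) // 2, (w - W) // 2
    return image[top:top + L, left:left + W]
```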
Step B, inputting each coronal MRI image into the trained deep neural network, and labeling olfactory bulb tissues in the deep neural network to obtain a mask image;
1. convolutional neural network
In this embodiment, the deep neural network is a convolutional neural network. Fig. 2 is a schematic structural diagram of the convolutional neural network in the MRI olfactory bulb volume detection method shown in fig. 1. As shown in fig. 2, the convolutional neural network includes: a cascaded encoding part and decoding part. The encoding part comprises five cascaded coding layers; the decoding part comprises four cascaded decoding layers, each corresponding to a same-level coding layer. For each coronal MRI image containing olfactory bulb tissue that passes through any one of these encoding or decoding layers, the neural network learns a feature map formed by stacking a number of channels, and the feature map represents the features of the olfactory bulb tissue at the current level.
In other embodiments of the present invention, the numbers of coding and decoding layers may be set as needed. Considering that more layers mean greater computational complexity, the number of layers is taken as an integer greater than 2 and smaller than 8.
1.1 data dimension
Referring to fig. 2, the dimension of processing data by each coding layer or decoding layer is:
batch_size × channel × width × height
where batch_size is the batch size, which can be understood as the number of samples processed in one test (or training) pass; channel is the number of feature maps to be learned by the current encoding or decoding layer; and width and height are the numbers of columns and rows of the convolution operation matrix, respectively.
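For illustration, the same layout in PyTorch's NCHW tensor convention (the patent names no framework, so the choice of PyTorch here is an assumption):

```python
# batch_size x channel x width x height, e.g. 4 grayscale 480 x 480 slices.
import torch

batch = torch.randn(4, 1, 480, 480)
print(batch.shape)  # torch.Size([4, 1, 480, 480])
```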
1.2 coding part
Referring to fig. 2, each coding layer in the first half (the coding part) down-samples its input data to learn the abstract meaning of the olfactory bulb tissue in the MRI coronal plane, from shallow to deep levels. The coding part comprises a cascade of five coding layers: L_encoder-1, L_encoder-2, L_encoder-3, L_encoder-4, L_encoder-5. As can be seen, the five coding layers are numbered sequentially.
In this embodiment, the channel numbers of the five cascaded coding layers are 64, 128, 256, 512 and 1024, respectively, so that the MRI coronal image is progressively down-sampled to extract features of the olfactory bulb and its surrounding tissue at different semantic depths; the more channels and the deeper the layer, the more complex the extracted features.
Regarding the setting of the channel numbers, two points should be noted: first, the encoding and decoding parts are designed as a corresponding whole, and the channel numbers of horizontally corresponding parts are structurally consistent; second, the design principle is that the channel numbers should be sufficient to learn the physiological feature maps of the olfactory bulb target and its surrounding tissue.
Taking the second coding layer L_encoder-2 of the coding part as an example:
(1) Its input I_encoder-2 is the output feature map of the previous coding layer, i.e. the first coding layer L_encoder-1.
The other coding layers are similar. Note that for the first coding layer L_encoder-1 in particular, the input is the original coronal MRI image.
(2) Its output feature map O_encoder-2 serves as the input of the next coding layer, i.e. the third coding layer L_encoder-3.
The other coding layers are similar. Note that for the fifth coding layer L_encoder-5 in particular, the output serves as the input of the fourth decoding layer L_decoder-4.
(3) Its operations comprise:
a convolution operation, in which the input of the current coding layer, I_encoder-2, is convolved with a convolution kernel to obtain a convolution result; and
a pooling operation, in which feature sampling is performed on the convolution result to obtain the output feature map of the current coding layer, O_encoder-2.
In this example, the pooling operation uses 2 × 2 max pooling (Max Pool 2 × 2). Of course, average pooling (Average Pool) may be used instead, and the pooling size may be adjusted as needed.
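The following is a minimal PyTorch sketch of one encoding layer as just described: a convolution on the input followed by 2 × 2 max pooling. The kernel size, padding and activation are assumptions, since the patent does not specify them; note also that, per ① in the size trace later in this section, the first coding layer keeps the 480 × 480 spatial size, so pooling evidently takes effect from the second layer on.

```python
# Minimal sketch of an encoding layer: convolution, then 2x2 max pooling.
# Kernel size 3, padding 1 and ReLU are assumptions, not patent details.
import torch
from torch import nn

class EncoderLayer(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(2)  # Max Pool 2x2 as in this embodiment

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pool(self.conv(x))

# e.g. L_encoder-2: 64 x 480 x 480 in, 128 x 240 x 240 out
out = EncoderLayer(64, 128)(torch.randn(1, 64, 480, 480))
print(out.shape)  # torch.Size([1, 128, 240, 240])
```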
1.3 decoding part
Referring to fig. 2, each decoding layer in the second half (the decoding part) up-samples the feature map and superimposes the olfactory bulb features of the corresponding level in the encoder, helping to recover the semantic segmentation result of a small object, i.e. the olfactory bulb, at high resolution, so that the detection result is more accurate and the olfactory bulb profile more realistic, facilitating accurate measurement of the olfactory bulb volume.
In this embodiment, the decoding part corresponds to the encoding part and comprises four cascaded decoding layers: L_decoder-4, L_decoder-3, L_decoder-2, L_decoder-1. As can be seen, the four decoding layers are numbered in reverse order, and coding and decoding layers with the same number correspond to each other at the same level.
The channel numbers of the four decoding layers are 512, 256, 128 and 64, respectively. In this part, the same-level down-sampled features are spliced in to help restore the contour features of the olfactory bulb in the image; the image is up-sampled and segmented at pixel level, and the last decoding layer finally generates the predicted Mask picture.
Taking the second decoding layer L_decoder-2 of the decoding part as an example:
(1) Its first input I_decoder-2 is the output of the previous decoding layer, i.e. the third decoding layer L_decoder-3; its other input is the output feature map of the same-level coding layer, i.e. the second coding layer L_encoder-2.
The other decoding layers are similar. Note that for the fourth decoding layer, the first input is the output feature map of the fifth coding layer L_encoder-5.
(2) Its output O_decoder-2 serves as the first input of the next decoding layer, i.e. the first decoding layer L_decoder-1.
The other decoding layers are similar. Note that for the first decoding layer, the output is the Mask image, i.e. the image with the olfactory bulb tissue labeled.
(3) Its operations comprise:
First, a series (concatenation) operation: the output of the third decoding layer L_decoder-3 and the output feature map of the same-level coding layer L_encoder-2 are concatenated (shown as "cat" in FIG. 2) to obtain a merged feature map.
The purpose of the series operation is to better restore the contour of the olfactory target in MRI images at high resolution. By concatenating the corresponding same-level feature maps, the feature maps at each level are helped to be restored step by step to high-resolution images of the original size; the learned olfactory bulb contour remains clear at high resolution and matches the physiological morphology of the actual olfactory bulb.
Second, two convolution operations, each convolving the current merged feature map with a convolution kernel to obtain a convolution result; the channel number of the convolution kernels of the two convolution operations is consistent with that of the merged feature map.
In this embodiment, performing the convolution twice improves the nonlinear expressiveness of the merged feature map produced by the series operation and increases the degree of fusion between the two feature maps.
Third, a deconvolution operation: the convolution result after the two convolutions is deconvolved with a convolution kernel, and the deconvolution result serves as the output of the current decoding layer, O_decoder-2.
Introducing the deconvolution operation into the decoding layer restores smaller feature maps to larger image sizes layer by layer, ensuring that the restored contour of the original image's target features remains clear at high resolution.
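Below is a minimal PyTorch sketch of one decoding layer. The text lists the operations as concatenation, double convolution, then deconvolution; for the tensor sizes traced in ① through ⑨ below to line up, this sketch applies the deconvolution (up-sampling) before concatenating with the same-level encoder feature map, which is the usual U-Net arrangement. Treat that ordering and the kernel sizes as assumptions.

```python
# Minimal sketch of a decoding layer: up-sample by transposed convolution,
# concatenate with the same-level encoder feature map ("cat" in FIG. 2),
# then apply a double convolution. Ordering/kernels are assumptions.
import torch
from torch import nn

class DecoderLayer(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.double_conv = nn.Sequential(
            nn.Conv2d(out_ch * 2, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        x = self.up(x)                   # halve channels, double spatial size
        x = torch.cat([x, skip], dim=1)  # series operation with encoder map
        return self.double_conv(x)

# e.g. L_decoder-2: 256 x 120 x 120 plus skip 128 x 240 x 240 -> 128 x 240 x 240
out = DecoderLayer(256, 128)(torch.randn(1, 256, 120, 120),
                             torch.randn(1, 128, 240, 240))
print(out.shape)  # torch.Size([1, 128, 240, 240])
```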
In summary, for one of a series of coronal MRI images, the overall feature size varies as follows:
① L_encoder-1: input a 1 × 480 × 480 grayscale picture matrix (the coronal MRI image stored in grayscale format); after encoding, output a feature map of size 64 × 480 × 480;
② L_encoder-2: input the 64 × 480 × 480 feature map; after encoding, output a feature map of size 128 × 240 × 240;
③ L_encoder-3: input the 128 × 240 × 240 feature map; after encoding, output a feature map of size 256 × 120 × 120;
④ L_encoder-4: input the 256 × 120 × 120 feature map; after encoding, output a feature map of size 512 × 60 × 60;
⑤ L_encoder-5: input the 512 × 60 × 60 feature map; after encoding, output a feature map of size 1024 × 30 × 30;
⑥ L_decoder-4: input the 1024 × 30 × 30 feature map; after decoding, output a feature map of size 512 × 60 × 60;
⑦ L_decoder-3: input the 512 × 60 × 60 feature map; after decoding, output a feature map of size 256 × 120 × 120;
⑧ L_decoder-2: input the 256 × 120 × 120 feature map; after decoding, output a feature map of size 128 × 240 × 240;
⑨ L_decoder-1: input the 128 × 240 × 240 feature map; after decoding, output a 1 × 480 × 480 Mask image, i.e. the image with the olfactory bulb tissue labeled.
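Putting the pieces together, here is a self-contained PyTorch sketch that assembles the five encoding and four decoding layers with the channel counts above and reproduces the feature-map sizes in ① through ⑨. The layer internals (double convolutions, 2 × 2 pooling, 2 × 2 transposed convolutions, a final 1 × 1 convolution with sigmoid for the mask) are assumptions consistent with those sizes, not details confirmed by the patent.

```python
# Self-contained sketch of the encoder-decoder with skip connections.
# Channel counts 64/128/256/512/1024 follow the embodiment; everything
# else (double convs, pooling placement, output head) is an assumption.
import torch
from torch import nn

def double_conv(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class OlfactoryBulbUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.pool = nn.MaxPool2d(2)
        self.enc1 = double_conv(1, 64)      # (1): 1x480x480 -> 64x480x480
        self.enc2 = double_conv(64, 128)    # (2): -> 128x240x240 after pool
        self.enc3 = double_conv(128, 256)   # (3): -> 256x120x120
        self.enc4 = double_conv(256, 512)   # (4): -> 512x60x60
        self.enc5 = double_conv(512, 1024)  # (5): -> 1024x30x30
        self.up4 = nn.ConvTranspose2d(1024, 512, 2, stride=2)
        self.dec4 = double_conv(1024, 512)  # (6): -> 512x60x60
        self.up3 = nn.ConvTranspose2d(512, 256, 2, stride=2)
        self.dec3 = double_conv(512, 256)   # (7): -> 256x120x120
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = double_conv(256, 128)   # (8): -> 128x240x240
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = double_conv(128, 64)
        self.head = nn.Conv2d(64, 1, 1)     # (9): -> 1x480x480 mask

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        e4 = self.enc4(self.pool(e3))
        e5 = self.enc5(self.pool(e4))
        d4 = self.dec4(torch.cat([self.up4(e5), e4], dim=1))
        d3 = self.dec3(torch.cat([self.up3(d4), e3], dim=1))
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1))

mask = OlfactoryBulbUNet()(torch.randn(1, 1, 480, 480))
print(mask.shape)  # torch.Size([1, 1, 480, 480])
```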
2. Training for deep neural network models
2.1 training data
Before training, a training data set and a verification data set required for training need to be obtained.
First, originally acquired patient MRI image data are obtained from a hospital MRI scanner, and for each coronal MRI image an expert manually performs pixel-level labeling of the olfactory bulb region. The original data are then combined with the experts' manual labeling results and organized into an olfactory bulb dataset for deep neural network training or verification.
Then, the olfactory bulb dataset is divided into a training dataset and a validation dataset using cross-validation, which yields the training and validation datasets for training the convolutional neural network. Both datasets comprise MRI images of multiple patients and the corresponding expert manual labeling results.
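A minimal sketch of such a split follows, assuming scikit-learn and hypothetical per-patient identifiers; splitting by patient keeps all slices of one patient in the same fold.

```python
# Minimal sketch of a 5-fold cross-validation split over patients
# (patient IDs and fold count are illustrative assumptions).
from sklearn.model_selection import KFold

patients = [f"patient_{i:03d}" for i in range(50)]
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kf.split(patients)):
    train_set = [patients[i] for i in train_idx]
    val_set = [patients[i] for i in val_idx]
    # ... build the training/validation image-and-label pairs for this fold ...
```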
It should be noted that the MRI images in both the training dataset and the verification dataset undergo the image preprocessing described in step A, which is not repeated here.
2.2 loss function
In this embodiment, a GD loss function is used in training. Its expression is given in the original filing only as a formula image and is not reproduced here.
In it, GD (Generalized Dice coefficient) is a Dice-type similarity coefficient, and k is a weighted-penalty hyperparameter. The value range of the GD loss function is [0.2, 0.6]; the smaller the value, the more accurate the annotation, and the larger the value, the less accurate the annotation.
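Since the exact GD expression appears in the filing only as that formula image, the following is merely a plausible Dice-style loss sketch with a hypothetical weighting hyperparameter k; it is not the patented formula. At k = 1 it reduces to the ordinary soft Dice loss.

```python
# Illustrative Dice-style loss (an assumption, not the patent's GD formula).
# pred and target are (N, 1, H, W) tensors with values in [0, 1].
import torch

def dice_style_loss(pred: torch.Tensor, target: torch.Tensor,
                    k: float = 1.0, eps: float = 1e-6) -> torch.Tensor:
    inter = (pred * target).sum(dim=(1, 2, 3))
    denom = pred.sum(dim=(1, 2, 3)) + k * target.sum(dim=(1, 2, 3))
    dice = (1 + k) * inter / (denom + eps)  # plain soft Dice when k = 1
    return (1 - dice).mean()                # smaller = more accurate labels
```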
2.3 model training
In order to determine an end-to-end convolutional neural network suited to the characteristics of the MRI images, this embodiment further includes, before step B:
step B': and training the convolutional neural network by using a training data set and a verification data set with the aim of simulating the manual labeling result of an expert as much as possible.
The step B' specifically comprises the following steps:
① Train the convolutional neural network with the training dataset, adjusting the learning rate and iterating until the GD loss function no longer drops noticeably, at which point the model weights are saved as the result.
② Test the performance of the convolutional neural network on all datasets with the cross-validation data, and evaluate whether the detection requirements are met. If they are met, training is finished; if not, adjust the training parameters and retrain (a minimal training loop is sketched below).
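A minimal sketch of this train-then-validate cycle, reusing OlfactoryBulbUNet and dice_style_loss from the sketches above; the optimizer, learning rate, epoch count and the dummy loaders standing in for real DataLoaders are illustrative assumptions.

```python
# Minimal training loop sketch for step B' (optimizer, schedule and the
# dummy loaders are assumptions; real folds would come from the CV split).
import torch

train_loader = [(torch.rand(2, 1, 480, 480),
                 torch.randint(0, 2, (2, 1, 480, 480)).float())]
val_loader = [(torch.rand(2, 1, 480, 480),
               torch.randint(0, 2, (2, 1, 480, 480)).float())]

model = OlfactoryBulbUNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
best_val = float("inf")
for epoch in range(10):
    model.train()
    for images, masks in train_loader:
        optimizer.zero_grad()
        loss = dice_style_loss(model(images), masks)
        loss.backward()
        optimizer.step()
    model.eval()
    with torch.no_grad():  # validate on the held-out fold
        val = sum(dice_style_loss(model(x), y).item()
                  for x, y in val_loader) / len(val_loader)
    if val < best_val:     # keep the weights with the lowest validation loss
        best_val = val
        torch.save(model.state_dict(), "olfactory_bulb_unet.pt")
```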
Those skilled in the art will appreciate that numerous parameters of a convolutional neural network, including multiple sets of convolutional kernels, may be determined by training.
FIG. 3A is a pre-processed coronal MRI image. Fig. 3B shows the olfactory bulb labeling result labeled by the experts in the verification set. Fig. 3C is an olfactory bulb labeling result obtained by convolutional neural network processing in the MRI olfactory bulb volume detection method shown in fig. 1. Figs. 3B and 3C both show 480 × 480 pixels; the more similar their morphology, the higher the model accuracy.
3. Detection
Inputting the preprocessed coronal MRI image into the trained convolutional neural network, and labeling olfactory bulb tissues therein to obtain a mask image corresponding to the coronal MRI image.
Step C, for the series of mask images with labeled olfactory bulb tissue, calculating the olfactory bulb volume by combining the scaling factor and slice interval thickness of the MRI scan, using the following formula:
V_ob = Σ (pixels_mask × scale × thickness)
where V_ob is the olfactory bulb volume, pixels_mask is the number of pixels in the olfactory bulb tissue labeling result of a single mask image, scale is the scaling factor of the MRI scan, thickness is the slice interval thickness of the MRI scan, and Σ denotes summation over the series of coronal MRI images.
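A minimal sketch of this computation follows, assuming binary numpy masks and that scale is the in-plane area per pixel (e.g. mm² per pixel) while thickness is the slice interval in mm; the units are assumptions, as the patent does not state them.

```python
# Minimal sketch of step C: summed labeled-pixel count x scale x thickness.
import numpy as np

def olfactory_bulb_volume(masks: list[np.ndarray],
                          scale: float, thickness: float) -> float:
    """masks: binary arrays (1 = olfactory bulb tissue), one per coronal slice."""
    return sum(int(m.sum()) * scale * thickness for m in masks)
```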
This concludes the description of the deep-neural-network-based MRI olfactory bulb volume detection method.
Based on the above, the invention also provides a computer device. FIG. 4 is a diagram of a computer device according to another embodiment of the invention. As shown in fig. 4, in this embodiment, the computer apparatus includes: a memory; and a processor, electrically coupled to the memory, configured to execute the MRI olfactory bulb volume detection method according to the above embodiments based on instructions stored in the memory.
The MRI olfactory bulb volume detection method comprises the following steps: step A, acquiring a series of coronal MRI images containing olfactory bulb targets; step B, inputting each coronal MRI image into a trained deep neural network, which labels the olfactory bulb tissue to obtain a mask image; and step C, for the series of mask images with labeled olfactory bulb tissue, calculating the olfactory bulb volume by combining the scaling factor and slice interval thickness of the MRI scan. For a detailed description of each step, reference may be made to the related description of the above embodiments, which is also incorporated into the present embodiment and is not repeated here.
Based on the above, the invention also provides a computer readable storage medium. Fig. 5 is a schematic diagram of a computer-readable storage medium according to another embodiment of the present invention. As shown in fig. 5, in this embodiment, a computer readable storage medium stores computer instructions, which when executed by a processor, implement the MRI olfactory bulb volume detection method as described in the above embodiments.
As above, the MRI olfactory bulb volume detection method comprises: step A, acquiring a series of coronal MRI images containing olfactory bulb targets; step B, inputting each coronal MRI image into a trained deep neural network, which labels the olfactory bulb tissue to obtain a mask image; and step C, for the series of mask images with labeled olfactory bulb tissue, calculating the olfactory bulb volume by combining the scaling factor and slice interval thickness of the MRI scan. For a detailed description of each step, reference may be made to the related description of the above embodiments, which is also incorporated into the present embodiment and is not repeated here.
So far, a plurality of embodiments of the present invention have been described in detail with reference to the accompanying drawings.
It is noted that some implementation details, where not essential to the invention and well known to those of ordinary skill in the art, are not illustrated in the drawings or described in the text; they may be understood with reference to the relevant prior art.
Furthermore, the above definitions of the various elements and methods are not limited to the specific structures, shapes or manners mentioned in the examples, which may be easily modified or substituted by those skilled in the art, for example:
(1) besides convolutional neural networks, other types of deep learning networks may be employed, such as deep residual networks (ResNet), DenseNet and the like, with the same training and detection processes as for the convolutional neural network;
(2) the specific form of the convolutional neural network can adopt the existing form in the prior art or adjust the existing form as required;
(3) the loss function may take other forms;
(4) the number of layers, the number of channels, the pooling scheme, etc. of the encoding and decoding sections can be adjusted as necessary.
From the above description of the various aspects, those skilled in the art should clearly recognize the deep neural network-based MRI olfactory bulb volume detection method of the present invention.
In summary, the invention utilizes the deep neural network, adopts the mode of encoding, decoding and cascading, the encoder can extract the characteristics of each level of the olfactory bulb organization, and the decoder and the cascading operation ensure the accuracy of restoring the characteristics to the original image, thereby realizing the automatic, high-precision and high-accuracy olfactory bulb detection, and having strong practicability and popularization and application values.
Unless expressly indicated to the contrary, the numerical parameters set forth in the specification and claims of this invention may be approximations that may vary depending upon the teachings of the invention. Specifically, all numbers expressing quantities of ingredients, reaction conditions, and so forth used in the specification and claims are to be understood as being modified in all instances by the term "about". Generally, this expression is meant to encompass variations of ±10% in some embodiments, ±5% in some embodiments, ±1% in some embodiments, and ±0.5% in some embodiments of the specified amount.
Ordinal numbers such as letters, etc., used in the specification and claims to modify a corresponding step are intended to simply allow a step with a certain designation to be clearly distinguished from another step without implying any order or order to such steps.
In addition, unless steps are specifically described or must occur in sequence, the order of the steps is not limited to that listed above and may be changed or rearranged as desired by the desired design. The embodiments described above may be mixed and matched with each other or with other embodiments based on design and reliability considerations, i.e., technical features in different embodiments may be freely combined to form further embodiments.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention, and the foregoing descriptions of specific languages are provided for purposes of disclosure as best modes of practicing the invention.
The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components in the associated apparatus according to embodiments of the invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the method of the invention should not be construed to reflect the intent: that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An MRI olfactory bulb volume detection method, comprising:
step A, acquiring a series of coronal MRI images including olfactory bulb targets;
step B, inputting each coronal MRI image into the trained deep neural network, and labeling olfactory bulb tissues in the deep neural network to obtain a mask image; and
and C, calculating the volume of the olfactory bulb by combining the scaling and the interval thickness of MRI scanning for a series of mask images marked with olfactory bulb tissues.
2. The MRI olfactory bulb volume detection method of claim 1, wherein in step C, the olfactory bulb volume is calculated using the following formula:
V_ob = Σ (pixels_mask × scale × thickness)
wherein V_ob is the olfactory bulb volume, pixels_mask is the number of pixels in the olfactory bulb tissue labeling result of a single mask image, scale is the scaling factor of the MRI scan, thickness is the slice interval thickness of the MRI scan, and Σ denotes summation over the series of coronal MRI images.
3. The MRI olfactory bulb volume detection method according to claim 1, wherein in step B, the deep neural network is a convolutional neural network;
the convolutional neural network includes: a concatenated encoding portion and decoding portion;
the encoding section includes: n+1 cascaded coding layers, numbered sequentially, each of which down-samples its input data, where 2 ≤ n ≤ 8;
the decoding section includes: n cascaded decoding layers, wherein each decoding layer performs a series operation on the output feature map of the previous decoding layer and the output feature map of the same-level coding layer, and up-samples the merged feature map obtained from the series operation.
4. The MRI olfactory bulb volume detection method of claim 3, wherein:
for the coding layer, its operations comprise: a convolution operation and a pooling operation, wherein: the convolution operation convolves the input of the coding layer with a convolution kernel to obtain a convolution result; the pooling operation performs feature sampling on the convolution result to obtain the output feature map of the current coding layer; for the first coding layer, the input is the original coronal MRI image, and for coding layers other than the first, the input is the output feature map of the previous coding layer;
for the decoding layer, its operations comprise: a series operation, two convolution operations and a deconvolution operation, wherein: the series operation concatenates the first input of the decoding layer with the output feature map of the same-level coding layer to obtain a merged feature map; the convolution operation convolves the merged feature map with a convolution kernel to obtain a convolution result, the channel number of the convolution kernels of the two convolution operations being consistent with that of the merged feature map; the deconvolution operation deconvolves the convolution result after the two convolutions with a convolution kernel to obtain a deconvolution result, which serves as the output of the current decoding layer; for the nth decoding layer, the first input is the output feature map of the (n+1)th coding layer, and for decoding layers other than the nth, the first input is the output of the previous decoding layer; for the first decoding layer, the output is the mask image with the olfactory bulb tissue labeled, and for decoding layers other than the first, the output serves as the first input of the next decoding layer.
5. The MRI olfactory bulb volume detection method as claimed in claim 3, wherein in the encoding part the number of feature map channels of the n+1 coding layers increases by multiples, and in the decoding part the number of feature map channels of the n decoding layers decreases by multiples.
6. The MRI olfactory bulb volume detection method as claimed in claim 5, characterized in that n = 4;
the numbers of feature map channels of the five cascaded coding layers in the encoding part are, in sequence: 64, 128, 256, 512, 1024; correspondingly, the numbers of feature map channels of the four cascaded decoding layers in the decoding part are, in sequence: 512, 256, 128, 64.
7. The MRI olfactory bulb volume detection method of claim 1, wherein the step a comprises:
substep A1, extracting a series of coronal MRI images from the original DICOM format data of the brain MRI image and storing in grayscale format;
and a sub-step A2, for each coronal MRI image, cropping it about its center into an L × W coronal MRI image, where L and W are the numbers of pixels in the length and width directions of the cropped coronal MRI image, respectively, and 300 ≤ L, W ≤ 800.
8. The MRI olfactory bulb volume detection method of any one of claims 1 to 7, wherein step B is preceded by the further step of:
b', training and verifying the deep neural network by using training data set data and verification data set data with the aim of simulating the manual labeling result of an expert as much as possible;
wherein the loss function employed in the training is as follows:
(GD loss function, given in the original filing as a formula image)
wherein GD is a Dice coefficient and k is a weighted-penalty hyperparameter;
the training data set and the verification data set are taken from the same olfactory bulb data set, data set division is carried out by using a cross verification method, and the training data set and the verification data set respectively comprise MRI images of a plurality of patients and corresponding expert manual labeling results;
wherein the olfactory bulb dataset is obtained by: obtaining a series of coronal MRI images from the original MRI images; for each coronal MRI image, having an expert manually perform pixel-level labeling of the olfactory bulb region; and then combining the original data with the experts' manual labeling results and organizing them into an olfactory bulb dataset for deep neural network training or verification.
9. A computer device, comprising:
a memory; and
a processor, electrically coupled to the memory, configured to execute the MRI olfactory bulb volume detection method according to any of claims 1 to 7 based on instructions stored in the memory.
10. A computer-readable storage medium, characterized in that computer instructions are stored thereon, which instructions, when executed by a processor, perform the MRI olfactory bulb volume detection method as claimed in any one of claims 1 to 7.
CN202010957930.1A 2020-09-11 2020-09-11 MRI olfactory bulb volume detection method, computer device and computer readable storage medium Pending CN112215797A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010957930.1A CN112215797A (en) 2020-09-11 2020-09-11 MRI olfactory bulb volume detection method, computer device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010957930.1A CN112215797A (en) 2020-09-11 2020-09-11 MRI olfactory bulb volume detection method, computer device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN112215797A true CN112215797A (en) 2021-01-12

Family

ID=74050177

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010957930.1A Pending CN112215797A (en) 2020-09-11 2020-09-11 MRI olfactory bulb volume detection method, computer device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112215797A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106344594A (en) * 2015-07-17 2017-01-25 上海绿谷制药有限公司 Application of sodium alginate oligose and derivative to treatment of inflammations
CN109003299A (en) * 2018-07-05 2018-12-14 北京推想科技有限公司 A method of the calculating cerebral hemorrhage amount based on deep learning
CN109583425A (en) * 2018-12-21 2019-04-05 西安电子科技大学 A kind of integrated recognition methods of the remote sensing images ship based on deep learning
CN109754394A (en) * 2018-12-28 2019-05-14 上海联影智能医疗科技有限公司 3 d medical images processing unit and method
CN109829877A (en) * 2018-09-20 2019-05-31 中南大学 A kind of retinal fundus images cup disc ratio automatic evaluation method
CN110717907A (en) * 2019-10-06 2020-01-21 浙江大学 Intelligent hand tumor detection method based on deep learning
CN110969191A (en) * 2019-11-07 2020-04-07 吉林大学 Glaucoma prevalence probability prediction method based on similarity maintenance metric learning method
US20200108084A1 (en) * 2017-05-24 2020-04-09 Societe Des Produits Nestle S.A. Composition comprising oligofructose (of) for use in the improvement of short term memory and other cognitive benefits
CN111369582A (en) * 2020-03-06 2020-07-03 腾讯科技(深圳)有限公司 Image segmentation method, background replacement method, device, equipment and storage medium


Similar Documents

Publication Publication Date Title
WO2020108562A1 (en) Automatic tumor segmentation method and system in ct image
CN109389584A (en) Multiple dimensioned rhinopharyngeal neoplasm dividing method based on CNN
CN111754520B (en) Deep learning-based cerebral hematoma segmentation method and system
CN115457021A (en) Skin disease image segmentation method and system based on joint attention convolution neural network
CN113450328B (en) Medical image key point detection method and system based on improved neural network
CN105913431A (en) Multi-atlas dividing method for low-resolution medical image
CN111860528B (en) Image segmentation model based on improved U-Net network and training method
CN114119637B (en) Brain white matter high signal segmentation method based on multiscale fusion and split attention
CN112862805B (en) Automatic auditory neuroma image segmentation method and system
CN109215035B (en) Brain MRI hippocampus three-dimensional segmentation method based on deep learning
CN113436173A (en) Abdomen multi-organ segmentation modeling and segmentation method and system based on edge perception
CN113361353A (en) Zebrafish morphological scoring method based on DeepLabV3Plus
CN113456031A (en) Training device and prediction device of brain state prediction model and electronic equipment
CN116051589A (en) Method and device for segmenting lung parenchyma and pulmonary blood vessels in CT image
CN114708212A (en) Heart image segmentation method based on SEA-Unet
CN114529562A (en) Medical image segmentation method based on auxiliary learning task and re-segmentation constraint
Dong et al. Supervised learning-based retinal vascular segmentation by m-unet full convolutional neural network
CN112634285B (en) Method for automatically segmenting abdominal CT visceral fat area
CN115359046B (en) Organ blood vessel segmentation method and device, storage medium and electronic equipment
CN116309615A (en) Multi-mode MRI brain tumor image segmentation method
CN112215797A (en) MRI olfactory bulb volume detection method, computer device and computer readable storage medium
CN116258685A (en) Multi-organ segmentation method and device for simultaneous extraction and fusion of global and local features
CN116486156A (en) Full-view digital slice image classification method integrating multi-scale feature context
Zhong et al. Autopet challenge 2022: Automatic segmentation of whole-body tumor lesion based on deep learning and fdg pet/ct
CN114937044A (en) Lightweight image segmentation method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210112