CN112053342A - Method and device for extracting and identifying pituitary magnetic resonance image based on artificial intelligence - Google Patents


Info

Publication number
CN112053342A
CN112053342A
Authority
CN
China
Prior art keywords
pituitary
magnetic resonance
resonance image
image
model training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010907798.3A
Other languages
Chinese (zh)
Inventor
陈燕铭
朱延华
郭裕兰
郭若汨
石国军
李庆玲
钱孝贤
刘浩
李海成
温会泉
曾龙驿
林硕
谭莺
高荣
聂元鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202010907798.3A
Publication of CN112053342A
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30016 Brain
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30096 Tumor; Lesion

Abstract

The invention discloses an artificial intelligence-based method for extracting and identifying pituitary magnetic resonance images, which comprises the following steps: acquiring at least one pituitary magnetic resonance image of a person to be detected; inputting each pituitary magnetic resonance image into a pre-trained pituitary region positioning model so that the model calculates each image; and outputting the pituitary region corresponding to each pituitary magnetic resonance image. The invention also discloses a corresponding artificial intelligence-based device for extracting and identifying pituitary magnetic resonance images. By adopting the embodiments of the invention, the magnetic resonance image to be examined is calculated and analyzed by the pre-constructed and trained pituitary region positioning model to locate the pituitary region, which effectively improves the efficiency and precision of extracting and identifying pituitary magnetic resonance images.

Description

Method and device for extracting and identifying pituitary magnetic resonance image based on artificial intelligence
Technical Field
The invention relates to the technical field of digital image processing, in particular to a pituitary magnetic resonance image extraction and identification method and device based on artificial intelligence.
Background
The pituitary gland is the master gland that supervises the other glands of the endocrine system and controls hormone levels, and pituitary adenoma is a common neuroendocrine tumor. According to past autopsy and imaging studies, the prevalence of pituitary adenomas is about 10.7-22.5%, of which pituitary microadenomas account for about 99%, and pituitary microadenomas are estimated to affect more than 7 million patients worldwide. Magnetic resonance imaging (MRI) is currently considered the primary method of pituitary imaging; clinicians or radiologists analyze and interpret pituitary MRI to diagnose whether a patient has a pituitary adenoma.
When magnetic resonance images are analyzed clinically, doctors locate the specific pituitary region by manual segmentation and then analyze and interpret pituitary adenomas. However, in the process of implementing the invention, the inventors found that the prior art has at least the following problems: the data volume of a magnetic resonance image sequence is huge, so the manual segmentation method is time-consuming and labor-intensive; moreover, the segmentation result varies from person to person with the doctor's experience and is highly subjective.
Disclosure of Invention
The embodiment of the invention aims to provide an artificial intelligence-based extraction and identification method and device for a pituitary magnetic resonance image.
In order to achieve the above object, an embodiment of the present invention provides an artificial intelligence-based pituitary magnetic resonance image extraction and identification method, including:
acquiring at least one pituitary magnetic resonance image of a person to be detected;
inputting each pituitary magnetic resonance image into a pituitary area positioning model obtained by pre-training so that the pituitary area positioning model can calculate each pituitary magnetic resonance image;
outputting the pituitary area corresponding to each pituitary magnetic resonance image.
As an improvement of the above scheme, the training method of the pituitary region localization model specifically comprises the following steps:
acquiring a plurality of pituitary magnetic resonance images as model training images; wherein each model training image corresponds to a pre-labeled real pituitary region;
initializing parameters of a convolutional neural network, and calculating the model training image by using the convolutional neural network to output a predicted pituitary region corresponding to the model training image;
calculating a loss function from the predicted pituitary region and the true pituitary region;
and updating parameters of the convolutional neural network by adopting a gradient descent optimization algorithm to reduce the loss function until the loss function tends to be minimized, and obtaining a trained pituitary region positioning model.
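The training loop above (initialize parameters, predict, compute a loss against the labels, update by gradient descent until the loss tends to a minimum) can be sketched in miniature. This is a hedged toy illustration only: a linear model stands in for the convolutional neural network, and all names and data are illustrative, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))              # stand-in for training images
true_w = np.array([1.0, -2.0, 0.5, 3.0])  # stand-in "real" labeling
y = X @ true_w

w = np.zeros(4)                           # initialize parameters
lr = 0.05
prev_loss = np.inf
for step in range(1000):
    pred = X @ w                          # "predicted" output
    loss = np.mean((pred - y) ** 2)       # loss vs. the "real" labels
    if prev_loss - loss < 1e-10:          # loss tends to a minimum: stop
        break
    prev_loss = loss
    grad = 2 * X.T @ (pred - y) / len(y)  # gradient of the loss
    w -= lr * grad                        # gradient-descent update
```

The same init / forward / loss / update cycle is what a deep-learning framework performs when training the pituitary region positioning model, only with a CNN in place of the linear map.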
As an improvement of the above scheme, the calculating the model training image by using the convolutional neural network to output a predicted pituitary region corresponding to the model training image specifically includes:
extracting a feature map of each model training image to obtain a plurality of pyramid feature maps;
extracting candidate frames of each pyramid feature map to obtain a candidate frame set;
and eliminating the candidate frame excessively overlapped in the candidate frame set to obtain a target candidate frame serving as a predicted pituitary area corresponding to the model training image.
As an improvement of the above scheme, the extracting a feature map from each model training image to obtain a plurality of pyramid feature maps specifically includes:
inputting the model training images into the convolutional neural network, and extracting a plurality of characteristic graphs with different resolutions corresponding to each model training image;
and fusing a plurality of feature maps with different resolutions corresponding to each model training image through a feature pyramid network to obtain a plurality of pyramid feature maps corresponding to each model training image.
As an improvement of the above scheme, the extracting candidate frames from each pyramid feature map to obtain a candidate frame set specifically includes:
according to each pyramid feature map, obtaining a candidate frame classification map c_i and a candidate frame regression map r_i through two branch networks that do not share weights; wherein the candidate frame classification map contains the probability that a target appears in the anchor frame at each position, and the candidate frame regression map contains the regression parameters of the anchor frame at each position, the regression parameters comprising position offset values x and y, height h, and width w;
traversing the target occurrence probability in the anchor frame at each position of the candidate frame classification map c_i, so as to screen out the anchor frames of c_i in which a target exists;
mapping the anchor frames containing a target to the candidate frame regression map r_i to obtain the regression parameters in r_i and thereby determine the positions of the candidate frames;
and obtaining a candidate frame set corresponding to the model training image according to all candidate frames of each pyramid feature map.
As an improvement of the above solution, the calculating a loss function according to the predicted pituitary region and the true pituitary region specifically includes:
calculating a loss function from the predicted pituitary region and the true pituitary region, the loss function being calculated as follows:

L = Σ_i [ CE(c_i, c_i*) + Σ_j 1_j · L1Smooth(r_ij − r_ij*) ]

wherein c_i* and r_i* respectively represent the real candidate frame classification map and the real candidate frame regression map; j denotes the j-th anchor box, and the indicator 1_j = 1 when there is a target in the j-th anchor box, 1_j = 0 otherwise; Δx_ij = (x − x_a)/w_a, Δy_ij = (y − y_a)/h_a, Δw_ij = log(w/w_a), Δh_ij = log(h/h_a); (x_a, y_a, w_a, h_a) represent the real regression parameters of the anchor box, and (x, y, w, h) represent the predicted regression parameters of the anchor box; CE(·) and L1Smooth(·) represent the cross-entropy function and the L1 smoothing function, respectively.
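The two terms of the loss above (cross-entropy on the classification map, smooth-L1 on the regression parameters of anchors that contain a target) can be sketched as follows. This is a hedged sketch, not the patent's implementation: array shapes, the smooth-L1 cutoff of 1.0, and the function names are assumptions.

```python
import numpy as np

def smooth_l1(d):
    # L1Smooth: quadratic near zero, linear beyond |d| = 1 (assumed cutoff)
    d = np.abs(d)
    return np.where(d < 1.0, 0.5 * d * d, d - 0.5)

def cross_entropy(p, t, eps=1e-12):
    # binary cross-entropy between predicted probability p and label t
    return -(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps))

def detection_loss(cls_pred, cls_true, reg_pred, reg_true):
    # cls_*: (num_anchors,) target-presence probabilities / 0-1 labels
    # reg_*: (num_anchors, 4) regression parameters (dx, dy, dw, dh)
    ce = cross_entropy(cls_pred, cls_true).sum()
    pos = cls_true == 1                       # anchors containing a target
    l1 = smooth_l1(reg_pred[pos] - reg_true[pos]).sum()
    return ce + l1
```

Only positive anchors contribute to the regression term, mirroring the indicator 1_j in the formula.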
As an improvement of the above solution, after the acquiring at least one pituitary magnetic resonance image of the person to be tested, the method further comprises:
preprocessing each pituitary magnetic resonance image of the person to be detected;
wherein the preprocessing comprises: scaling each pituitary magnetic resonance image to a preset resolution by bilinear interpolation, and normalizing the pixel values.
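The preprocessing just described (bilinear interpolation to a preset resolution, then pixel-value normalization) can be sketched as below. The 256-size target and the [-1, 1] range follow the embodiments later in the text; the numpy implementation details are assumptions.

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    # Resample a 2-D image onto an out_h x out_w grid by bilinear interpolation.
    h, w = img.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def preprocess(slice_hw, size=256):
    # Scale an H x W MRI slice to size x size and normalize pixels to [-1, 1].
    resized = bilinear_resize(slice_hw.astype(float), size, size)
    lo, hi = resized.min(), resized.max()
    return 2.0 * (resized - lo) / (hi - lo) - 1.0
```

Min-max scaling is used here as one plausible normalization; the patent does not specify which normalization maps pixel values into [-1, 1].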
The embodiment of the invention also provides an extraction and identification device of pituitary magnetic resonance images based on artificial intelligence, which comprises:
the image acquisition module is used for acquiring at least one pituitary magnetic resonance image of a person to be detected;
the image input module is used for inputting each pituitary magnetic resonance image into a pre-trained pituitary region positioning model so as to enable the pituitary region positioning model to calculate each pituitary magnetic resonance image;
and the result output module is used for outputting the pituitary area corresponding to each pituitary magnetic resonance image.
As an improvement of the above solution, the device for extracting and identifying a pituitary magnetic resonance image based on artificial intelligence further comprises: a model training module; the model training module is specifically configured to:
acquiring a plurality of pituitary magnetic resonance images as model training images; wherein each model training image corresponds to a pre-labeled real pituitary region;
initializing parameters of a convolutional neural network, and calculating the model training image by using the convolutional neural network to output a predicted pituitary region corresponding to the model training image;
calculating a loss function from the predicted pituitary region and the true pituitary region;
and updating parameters of the convolutional neural network by adopting a gradient descent optimization algorithm to reduce the loss function until the loss function tends to be minimized, and obtaining a trained pituitary region positioning model.
The embodiment of the present invention further provides an apparatus for extracting and identifying an artificial intelligence-based pituitary magnetic resonance image, which includes a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, and when the processor executes the computer program, the method for extracting and identifying an artificial intelligence-based pituitary magnetic resonance image as described in any one of the above items is implemented.
Compared with the prior art, the method and the device for extracting and identifying the pituitary magnetic resonance image based on the artificial intelligence, disclosed by the invention, are characterized in that at least one pituitary magnetic resonance image of a person to be detected is obtained; inputting each pituitary magnetic resonance image into a pituitary area positioning model obtained by pre-training so that the pituitary area positioning model can calculate each pituitary magnetic resonance image; outputting the pituitary area corresponding to each pituitary magnetic resonance image. The pituitary magnetic resonance image of the person to be detected is calculated and analyzed by using the pre-constructed and trained pituitary region positioning model so as to identify the pituitary region in the pituitary magnetic resonance image of the person to be detected, so that the problems of low efficiency and low accuracy caused by manual interpretation and marking by a doctor expert in the prior art are solved, and a data basis is further provided for identifying whether the pituitary adenoma exists in the pituitary magnetic resonance image.
Drawings
Fig. 1 is a schematic step diagram of a method for extracting and identifying a pituitary magnetic resonance image based on artificial intelligence according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating the steps of a method for training a pituitary region localization model according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of an artificial intelligence-based device for extracting and identifying a pituitary magnetic resonance image according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an artificial intelligence-based device for extracting and identifying a pituitary magnetic resonance image according to a fourth embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic step diagram of a method for extracting and identifying a pituitary magnetic resonance image based on artificial intelligence according to an embodiment of the present invention. The extraction and identification method of the pituitary magnetic resonance image based on artificial intelligence provided by the embodiment of the invention is implemented by the following steps S11 to S13:
and S11, acquiring at least one pituitary magnetic resonance image of the person to be detected.
And S12, inputting each pituitary magnetic resonance image into a pituitary area positioning model obtained by pre-training, so that the pituitary area positioning model can calculate each pituitary magnetic resonance image.
And S13, outputting the pituitary area corresponding to each pituitary magnetic resonance image.
In the embodiment of the invention, intracranial MRI scan data of the person to be detected is obtained through magnetic resonance examination. To ensure that the pituitary region in the pituitary magnetic resonance image of the person to be detected is effectively identified, at least one MRI image in which the pituitary can be clearly observed is selected from the intracranial MRI scan data and taken as the pituitary magnetic resonance image of the person to be detected.
Specifically, the pituitary magnetic resonance image of the person to be detected is a coronal pituitary magnetic resonance image, acquired as scan slices 1 mm thick. In the embodiment of the invention, for each pituitary magnetic resonance image of the person to be detected, invalid slices that do not contain the brain are first removed by global threshold segmentation, and anterior commissure-posterior commissure (AC-PC) correction is applied to compensate for irregular head posture during acquisition; the skull is then stripped and the cerebellum removed to obtain a complete, isolated brain tissue; finally, all extracted brain tissue images are normalized to a uniform sample space by adjusting the spatial resolution, correcting intensity inhomogeneity with the N3 algorithm, and resampling with trilinear interpolation, so as to eliminate differences between pituitary MR brain images obtained with different imaging devices.
Preferably, the spatial resolution of the pituitary magnetic resonance images is adjusted to 1 × 1 mm, intensity inhomogeneity is corrected using the N3 algorithm, and the images are resampled to 128 × 128 by trilinear interpolation, thereby normalizing all processed pituitary magnetic resonance images to a uniform sample space.
By adopting the technical means of the embodiment of the invention, the validity of the pituitary magnetic resonance image of the person to be detected can be ensured, so that the identification precision of the pituitary adenoma is further improved.
As a preferred embodiment, after step S11, the method further includes step S14:
and S14, preprocessing each pituitary magnetic resonance image of the person to be detected.
The preprocessing comprises: scaling each pituitary magnetic resonance image to a preset resolution by bilinear interpolation, and normalizing the pixel values.
Specifically, each pituitary magnetic resonance image is an MRI slice with the size of H × W, each pituitary magnetic resonance image is amplified or reduced to the uniform size of 256 × 256 by a bilinear interpolation algorithm, and then the pixel value of each pituitary magnetic resonance image is normalized to the range of [ -1, 1], so that the pituitary magnetic resonance image used for being input into the pituitary region positioning model for calculation is obtained.
And further, inputting the preprocessed pituitary magnetic resonance image into a pre-trained pituitary region positioning model for calculation, and finally outputting the pituitary region predicted by the pituitary region positioning model.
Specifically, the preprocessed pituitary magnetic resonance image is input into the pre-trained pituitary region positioning model, which extracts feature maps of the image to obtain a plurality of pyramid feature maps; candidate frames are extracted from each pyramid feature map to obtain a candidate frame set {(x_j, y_j, h_j, w_j, p_j)}, wherein (x_j, y_j) denotes the center position of a candidate frame, h_j and w_j its height and width, and p_j its confidence. Excessively overlapping candidate frames in the set are then excluded to obtain a target candidate frame (x', y', h', w', p'), which gives the pituitary region of the pituitary magnetic resonance image.
The embodiment one of the invention provides an artificial intelligence-based pituitary magnetic resonance image extraction and identification method, which comprises the steps of obtaining at least one pituitary magnetic resonance image of a person to be detected; inputting each pituitary magnetic resonance image into a pituitary area positioning model obtained by pre-training so that the pituitary area positioning model can calculate each pituitary magnetic resonance image; outputting the pituitary area corresponding to each pituitary magnetic resonance image. The pituitary magnetic resonance image of the person to be detected is calculated and analyzed by using the pre-constructed and trained pituitary region positioning model so as to identify the pituitary region in the pituitary magnetic resonance image of the person to be detected, so that the problems of low efficiency and low accuracy caused by manual interpretation and marking by a doctor expert in the prior art are solved, and a data basis is further provided for identifying whether the pituitary adenoma exists in the pituitary magnetic resonance image.
Referring to fig. 2, a schematic step diagram of a method for training a pituitary region localization model according to a second embodiment of the present invention is shown. In an embodiment of the present invention, the method for training the pituitary region localization model is performed through steps S21 to S24:
s21, acquiring a plurality of pituitary magnetic resonance images as model training images; wherein each of the model training images corresponds to a pre-labeled real pituitary region.
And S22, initializing parameters of a convolutional neural network, and calculating the model training image by using the convolutional neural network to output a predicted pituitary region corresponding to the model training image.
S23, calculating a loss function according to the predicted pituitary area and the real pituitary area.
And S24, updating the parameters of the convolutional neural network by adopting a gradient descent optimization algorithm to reduce the loss function until the loss function tends to be minimized, and obtaining a trained pituitary region positioning model.
In the embodiment of the invention, a plurality of pituitary magnetic resonance images are acquired as model training images for being used as training samples of a pituitary region positioning model. Wherein, each pituitary magnetic resonance image corresponds to a real pituitary area which is marked in advance.
Preferably, dividing the acquired pituitary magnetic resonance images into a model training set and a model testing set according to a preset proportion, wherein the model training set is used as a learning and training sample of a pituitary region positioning model; and the model test set is used for testing the actual application environment after the primary training of the pituitary region positioning model is finished.
Preferably, each pituitary magnetic resonance image is an MRI slice with a size of H × W, and a preprocessing operation is further performed when obtaining the plurality of pituitary magnetic resonance images. Specifically, each pituitary magnetic resonance image is amplified or reduced to 256 × 256 uniform size by a bilinear interpolation algorithm, then the pixel value is normalized to the range of [ -1, 1], and finally a model training image for inputting a convolutional neural network for training is obtained.
The parameters of the convolutional neural network are initialized, the model training image is input into the convolutional neural network for calculation, and the predicted pituitary region of the pituitary magnetic resonance image is output.
As a preferred embodiment, step S22 specifically includes:
s221, extracting a feature map of each model training image to obtain a plurality of pyramid feature maps;
preferably, the model training images are input into the convolutional neural network, and a plurality of feature maps with different resolutions corresponding to each model training image are extracted. And then, fusing a plurality of feature maps with different resolutions corresponding to each model training image through a feature pyramid network to obtain a plurality of pyramid feature maps corresponding to each model training image. As an example, 4 feature maps of different resolutions of the model training image are obtained by a convolutional neural network, namely { f }1,f2,f3,f4}; feature map f by feature pyramid network1,f2,f3,f4Performing fusion to obtain 4 pyramid feature maps, i.e.
Figure BDA0002662112970000091
Specifically, the convolutional neural network performs convolutional operation with step length of 2 on an input model training image by using 64 convolutional kernels with size of 7 × 7, the operation result is subjected to relu activation function, and then maximum pooling downsampling is performed by using kernels with size of 3 × 3 to obtain an initial feature map f0The size is 64 × 64 × 64. For the initial feature map f0Processing with several adjacent convolution, activation layers to obtain a feature map f1The size is 256 × 64 × 64. For the feature map f1Processing with several adjacent convolution and activation layers, and finally passing through the pooling layer by 2 timesSampling to obtain a characteristic diagram f2The size is 512 × 32 × 32. In turn, similarly by the characteristic diagram f1Obtaining a feature map f2Can be passed through the feature map f2Obtaining a feature map f3The size of which is 1024 × 16 × 16, and a feature map f is obtained4The size is 2048 × 8 × 8. Finally, feature maps { f } of 4 different resolutions of the image can be obtained1,f2,f3,f4}。
And fusing the four feature maps obtained above through a feature pyramid network. For the feature map f4Obtaining a pyramid feature map with 256 channels by using convolution kernel processing with the size of 1 multiplied by 1 and through relu activation function
Figure BDA0002662112970000092
The size is 256 × 8 × 8. To obtain f3Corresponding pyramid feature map
Figure BDA0002662112970000095
Firstly, the pyramid feature map of the upper layer is aligned
Figure BDA0002662112970000094
Up-sampling by a factor of 2, and then summing with a feature map f of the activation layer by convolution3Point-by-point addition is carried out to finally obtain a pyramid feature map with the size of 256 multiplied by 16
Figure BDA0002662112970000096
Similarly obtained pyramid feature map
Figure BDA0002662112970000097
In a manner of obtaining pyramid feature maps in sequence
Figure BDA0002662112970000093
The size is 256 × 32 × 32; and pyramid feature map f1 pThe size is 256 × 64 × 64. Finally, 4 pyramid feature maps can be obtained
Figure BDA0002662112970000098
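The top-down fusion just described (1 × 1 lateral convolution to 256 channels, 2× upsampling of the coarser pyramid map, point-by-point addition) can be sketched shape-for-shape as below. This is a hedged sketch: random projections stand in for learned convolution weights, nearest-neighbor repetition stands in for the upsampling, and the relu steps are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def lateral_1x1(f, out_ch=256):
    # A 1x1 convolution is a per-pixel linear map over channels;
    # random weights stand in for the learned kernel.
    c = f.shape[0]
    w = rng.normal(size=(out_ch, c)) / np.sqrt(c)
    return np.einsum('oc,chw->ohw', w, f)

def upsample2x(f):
    # 2x nearest-neighbor upsampling along both spatial axes.
    return f.repeat(2, axis=1).repeat(2, axis=2)

# Backbone maps with the channel/size layout quoted in the text (f1..f4).
f = [rng.normal(size=(c, s, s)) for c, s in
     [(256, 64), (512, 32), (1024, 16), (2048, 8)]]

p4 = lateral_1x1(f[3])                   # f4^p: 256 x 8 x 8
p3 = lateral_1x1(f[2]) + upsample2x(p4)  # f3^p: 256 x 16 x 16
p2 = lateral_1x1(f[1]) + upsample2x(p3)  # f2^p: 256 x 32 x 32
p1 = lateral_1x1(f[0]) + upsample2x(p2)  # f1^p: 256 x 64 x 64
```

After each addition all four pyramid maps share 256 channels, which is what lets the same branch heads run on every level.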
It should be noted that, for a feature map of size n × H_i × W_i mentioned in the embodiment of the present invention, n denotes the number of feature channels, i.e., each position on the feature map has n values, and H_i and W_i denote the height and width of the feature map.
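The spatial sizes quoted above (64 → 32 → 16 → 8 from a 256 × 256 input) are consistent with standard convolution/pooling size arithmetic, which the short check below reproduces. The padding values are assumptions chosen to match the stated sizes, as in a typical ResNet-style backbone; the patent does not state them.

```python
def out_size(n, kernel, stride, pad):
    # Standard formula for convolution / pooling output extent.
    return (n + 2 * pad - kernel) // stride + 1

n = 256
n = out_size(n, 7, 2, 3)   # 7x7 conv, stride 2  -> 128
n = out_size(n, 3, 2, 1)   # 3x3 max-pool, stride 2 -> 64 (f0/f1 level)
sizes = [n]
for _ in range(3):         # each later stage halves the resolution
    n = out_size(n, 3, 2, 1)
    sizes.append(n)
# sizes now lists the spatial extents of f1..f4
```

This makes it easy to verify a modified input resolution (e.g. 128 × 128) against the pyramid sizes before any training is run.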
S222, extracting candidate frames of each pyramid feature map to obtain a candidate frame set.
Preferably, step S222 specifically includes:
s2221, according to each pyramid feature map, obtaining a candidate frame classification map c_i and a candidate frame regression map r_i through two branch networks that do not share weights; wherein the candidate frame classification map contains the probability that a target appears in the anchor frame at each position, and the candidate frame regression map contains the regression parameters of the anchor frame at each position, the regression parameters comprising position offset values x and y, height h, and width w;
s2222, traversing the target occurrence probability in the anchor frame at each position of the candidate frame classification map c_i, so as to screen out the anchor frames of c_i in which a target exists;
s2223, mapping the anchor frames containing a target to the candidate frame regression map r_i to obtain the regression parameters in r_i and thereby determine the positions of the candidate frames;
s2224, obtaining a candidate frame set corresponding to the model training image according to all candidate frames of each pyramid feature map.
In particular, for each pyramid profile f obtained as described abovei p(256×Hi×Wi) Firstly, obtaining candidate frame classification chart c through two branch networks (namely two layers of convolution activation layers) which do not share weight respectivelyi(2k×Hi×Wi) And candidate frame regression graph ri(4k×Hi×Wi)。
In the candidate frame classification chart ciIn (2 k), the candidate frame classification diagram ciHas 2k numbers per positionThe value, k, represents the number of anchor boxes in each position, an anchor box being a box that is artificially set with a fixed size and position. Each anchor box includes two values that indicate whether a target is present in the anchor box. By way of example, each anchor frame includes two values, x1 and x2, when x1<x2, the probability is considered to be greater than 0.5, which indicates that the target exists in the anchor box, otherwise, the target does not exist. Regression of the graph r in the candidate boxiIn (4 k), the box regression candidate graph riEach anchor frame comprising 4 values, regressing the graph r for said candidate frameiThe 4 regression parameters of (1) are respectively the offset value, height and width.
Further, after the candidate frame classification map c_i and the candidate frame regression map r_i are obtained, the target occurrence probabilities in the anchor boxes at each position of the classification map c_i are traversed, and the anchor boxes of c_i in which a target exists are screened out. Each such anchor box is then mapped onto the candidate frame regression map r_i to obtain the corresponding regression values, thereby yielding a candidate frame (x_1, y_1, h_1, w_1, p_1), where (x_1, y_1) denotes the center position of the candidate frame, h_1 and w_1 denote its height and width, and p_1 denotes its confidence. Several candidate frames may thus be obtained from the classification map c_i and regression map r_i corresponding to the same pyramid feature map.
The above candidate frame extraction is performed for each pyramid feature map f_i^p to obtain all candidate frames of the model training image, yielding the candidate frame set {(x_j, y_j, h_j, w_j, p_j)}.
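The screening in the steps above (keep an anchor when its "target" score exceeds its "no target" score, then read the regression values for that anchor) can be sketched as follows; the array layout follows the 2k × H × W and 4k × H × W shapes described above, while the array names, toy data, and the softmax-based confidence are illustrative assumptions.

```python
import numpy as np

def extract_candidates(c_map, r_map, k):
    """Traverse the classification map, keep anchors where the target score
    x2 exceeds the no-target score x1, and read the corresponding
    regression values (dx, dy, dh, dw) from the regression map."""
    _, H, W = c_map.shape
    candidates = []
    for j in range(k):
        x1 = c_map[2 * j]        # "no target" score, H x W
        x2 = c_map[2 * j + 1]    # "target" score, H x W
        for v in range(H):
            for u in range(W):
                if x1[v, u] < x2[v, u]:  # target considered present in this anchor
                    dx, dy, dh, dw = r_map[4 * j:4 * j + 4, v, u]
                    # softmax over the two scores gives a confidence p > 0.5
                    p = np.exp(x2[v, u]) / (np.exp(x1[v, u]) + np.exp(x2[v, u]))
                    candidates.append((dx, dy, dh, dw, p))
    return candidates

k, H, W = 2, 4, 4
rng = np.random.default_rng(0)
c_map = rng.standard_normal((2 * k, H, W))   # stand-in classification map
r_map = rng.standard_normal((4 * k, H, W))   # stand-in regression map
cands = extract_candidates(c_map, r_map, k)
assert all(p > 0.5 for *_, p in cands)       # kept anchors all satisfy x1 < x2
```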
S223, eliminating the excessively overlapped candidate frames in the candidate frame set to obtain a target candidate frame serving as the predicted pituitary region corresponding to the model training image.
In the embodiment of the present invention, in order to obtain the candidate frame containing the pituitary, all candidate frames need to be screened. The excessively overlapped candidate frames are eliminated through a non-maximum suppression algorithm to obtain the predicted pituitary region.
Specifically, the candidate frames are sorted in descending order of confidence p_i. The first candidate frame, with the highest confidence, is taken from the sequence, and the candidate frames whose overlap with the first candidate frame exceeds a certain threshold are removed from the sequence to form a new sequence. Then the second candidate frame, with the highest confidence in the new sequence, is taken out, and the candidate frames in the new sequence whose overlap with the second candidate frame exceeds the threshold are eliminated. Continuing in this way, the first candidate frame, the second candidate frame, and so on finally form a screened candidate frame set, from which the candidate frame closest to the image center is selected as the target candidate frame, that is, the predicted pituitary region.
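The non-maximum suppression procedure just described can be sketched as follows; the overlap threshold of 0.5 and the (cx, cy, h, w, p) tuple format are illustrative assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (cx, cy, h, w, ...)."""
    ax1, ay1 = a[0] - a[3] / 2, a[1] - a[2] / 2
    ax2, ay2 = a[0] + a[3] / 2, a[1] + a[2] / 2
    bx1, by1 = b[0] - b[3] / 2, b[1] - b[2] / 2
    bx2, by2 = b[0] + b[3] / 2, b[1] + b[2] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, thresh=0.5):
    """Sort by confidence, keep the best box, discard boxes that overlap it
    beyond the threshold, and repeat on the remaining sequence."""
    remaining = sorted(boxes, key=lambda b: b[4], reverse=True)
    kept = []
    while remaining:
        best = remaining.pop(0)
        kept.append(best)
        remaining = [b for b in remaining if iou(best, b) <= thresh]
    return kept

boxes = [(10, 10, 8, 8, 0.9), (11, 11, 8, 8, 0.8), (40, 40, 8, 8, 0.7)]
kept = nms(boxes)
# the two overlapping boxes collapse to the higher-confidence one
assert len(kept) == 2 and kept[0][4] == 0.9
```

After suppression, selecting the target candidate frame reduces to picking, among the kept boxes, the one whose center (cx, cy) is closest to the image center.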
Further, the convolutional layers in the pituitary region positioning model initially contain a large number of random parameters, and a training process is required to adapt the model to the pituitary positioning task. Specifically, 2 anchor boxes are set for each position (having sizes of 32 × 32 and 64 × 64 and an aspect ratio of 0.5), that is, k = 2. The labeled pituitary region is then encoded into the corresponding real candidate classification map c_i^gt and real candidate regression map r_i^gt, which are obtained in the process of labeling the real pituitary region in advance.
A loss function L is used to measure the difference between the predicted pituitary region output by the pituitary region positioning model and the real pituitary region:
L = Σ_i Σ_j [ CE(c_ij, c_ij^gt) + c_ij^gt · L1Smooth(r_ij − r_ij^gt) ]
wherein c_i^gt and r_i^gt respectively represent the real candidate frame classification map and the real candidate frame regression map; j denotes the j-th anchor box, and when a target exists in the j-th anchor box, c_ij^gt = 1; otherwise, c_ij^gt = 0; Δx_ij = (x − x_a)/w_a, Δy_ij = (y − y_a)/h_a, Δw_ij = log(w/w_a), Δh_ij = log(h/h_a); (x_a, y_a, w_a, h_a) represent the real regression parameters of the anchor box, and (x, y, w, h) represent the predicted regression parameters of the anchor box; the predicted values (Δx_ij, Δy_ij, Δw_ij, Δh_ij) are obtained from the candidate frame regression map r_i; CE(·) and L1Smooth(·) represent the cross-entropy function and the L1 smoothing function, respectively.
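The regression parameterization above can be checked numerically: encoding a box against its anchor with Δx = (x − x_a)/w_a, Δw = log(w/w_a), etc., and then decoding the deltas must recover the original box. The helper names below are illustrative.

```python
import math

def encode(box, anchor):
    """(x, y, w, h) -> (dx, dy, dw, dh) relative to an anchor,
    following dx = (x - xa)/wa, dy = (y - ya)/ha, dw = log(w/wa), dh = log(h/ha)."""
    x, y, w, h = box
    xa, ya, wa, ha = anchor
    return ((x - xa) / wa, (y - ya) / ha,
            math.log(w / wa), math.log(h / ha))

def decode(delta, anchor):
    """Inverse of encode: recover (x, y, w, h) from the deltas."""
    dx, dy, dw, dh = delta
    xa, ya, wa, ha = anchor
    return (dx * wa + xa, dy * ha + ya,
            wa * math.exp(dw), ha * math.exp(dh))

anchor = (16.0, 16.0, 32.0, 32.0)   # one of the k = 2 anchors (32 x 32)
box = (20.0, 14.0, 40.0, 28.0)      # a hypothetical labeled box
delta = encode(box, anchor)          # (0.125, -0.0625, log(1.25), log(0.875))
recovered = decode(delta, anchor)
assert all(abs(a - b) < 1e-9 for a, b in zip(box, recovered))
```

Dividing the offsets by the anchor size and taking logarithms of the scale ratios keeps the regression targets in a comparable numeric range across anchor sizes, which is why this parameterization is standard in anchor-based detectors.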
After the loss function is calculated, the parameters of the convolutional neural network are updated by a gradient descent optimization algorithm, the model training image is processed again with the updated convolutional neural network to obtain a new predicted pituitary region, and a new loss function is calculated from the new predicted pituitary region. Updating the parameters of the convolutional neural network with the gradient descent optimization algorithm continuously reduces the loss function, and thereby the difference between the predicted pituitary region and the real pituitary region. The training of the convolutional neural network is completed by iterating in this way until the value of the loss function tends to be minimized, thereby obtaining the trained pituitary region positioning model.
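The iterate-until-the-loss-tends-to-its-minimum pattern described above can be sketched on a toy least-squares problem; this stands in for, and is much simpler than, the patent's actual convolutional network and detection loss, and all names and data below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((64, 4))          # stand-in for extracted features
true_w = np.array([0.5, -1.0, 2.0, 0.3])  # stand-in for the "real region" mapping
y = X @ true_w                            # stand-in ground-truth targets

w = rng.standard_normal(4)                # random parameter initialization
lr = 0.05
for step in range(2000):
    pred = X @ w                          # forward pass: prediction
    loss = np.mean((pred - y) ** 2)       # loss between prediction and ground truth
    grad = 2 * X.T @ (pred - y) / len(y)  # gradient of the loss w.r.t. parameters
    w -= lr * grad                        # gradient descent update

assert np.mean((X @ w - y) ** 2) < 1e-6   # loss driven toward its minimum
```

Each iteration recomputes the prediction with the updated parameters and takes a step against the gradient, exactly the forward / loss / update cycle the training procedure describes.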
Preferably, after the pituitary region positioning model is trained, pituitary magnetic resonance images in a model test set are acquired for testing to verify the accuracy of the pituitary region positioning model, thereby ensuring its identification accuracy before it is put into practical application.
The second embodiment of the invention provides a training method of a pituitary region positioning model, which comprises the steps of firstly obtaining a plurality of pituitary magnetic resonance images marked with pituitary regions in advance as training image samples of the positioning model. Initializing parameters of the convolutional neural network by random numbers, inputting model training images, calculating and positioning pituitary regions, and outputting predicted pituitary regions. And then calculating a loss function between the predicted pituitary region and the real pituitary region, calculating the gradient of the loss function relative to all parameter weights by using a back propagation algorithm, and updating the values of the parameters of the convolutional neural network by using a gradient descent method so as to minimize the loss function, thereby completing the training of the pituitary region positioning model. According to the embodiment of the invention, a pituitary region positioning model is established by adopting a convolutional neural network, accurate and effective characteristics are automatically acquired from a pituitary magnetic resonance image for learning, the precision and generalization capability of the pituitary region positioning model are improved, and the efficiency and precision of positioning the region of the pituitary in the pituitary magnetic resonance image in practical application are effectively improved.
Fig. 3 is a schematic structural diagram of an artificial intelligence-based device for extracting and identifying a pituitary magnetic resonance image according to a third embodiment of the present invention. The third embodiment of the present invention provides an artificial intelligence-based device 30 for extracting and identifying a pituitary magnetic resonance image, which includes: an image acquisition module 31, an image input module 32, and a result output module 33; wherein:
the image acquisition module 31 is configured to acquire at least one pituitary magnetic resonance image of the person to be measured.
The image input module 32 is configured to input each pituitary magnetic resonance image into a pre-trained pituitary region positioning model, so that the pituitary region positioning model calculates each pituitary magnetic resonance image.
The result output module 33 is configured to output a pituitary region corresponding to each of the pituitary magnetic resonance images.
In a preferred embodiment, the device 30 for extracting and identifying pituitary magnetic resonance images based on artificial intelligence further comprises a model training module 34; the model training module 34 is specifically configured to:
acquiring a plurality of pituitary magnetic resonance images as model training images; wherein each model training image corresponds to a pre-labeled real pituitary region;
initializing parameters of a convolutional neural network, and calculating the model training image by using the convolutional neural network to output a predicted pituitary region corresponding to the model training image;
calculating a loss function from the predicted pituitary region and the true pituitary region;
and updating parameters of the convolutional neural network by adopting a gradient descent optimization algorithm to reduce the loss function until the loss function tends to be minimized, and obtaining a trained pituitary region positioning model.
It should be noted that the device 30 for extracting and identifying a pituitary magnetic resonance image based on artificial intelligence according to the embodiment of the present invention is used for executing all the process steps of the method for extracting and identifying a pituitary magnetic resonance image based on artificial intelligence according to the first embodiment, and the working principles and the beneficial effects of the two are in one-to-one correspondence; moreover, the model training module 34 in the artificial intelligence-based pituitary magnetic resonance image extraction and identification device 30 is used to execute all the process steps of the training method for the pituitary region localization model according to the second embodiment, and the working principles and beneficial effects of the two are in one-to-one correspondence, so that details are not repeated.
Fig. 4 is a schematic structural diagram of an artificial intelligence-based device for extracting and identifying a pituitary magnetic resonance image according to a fourth embodiment of the present invention. The device 40 for extracting and identifying an artificial intelligence-based pituitary magnetic resonance image according to an embodiment of the present invention includes a processor 41, a memory 42, and a computer program stored in the memory and configured to be executed by the processor, wherein the processor executes the computer program to implement the method for extracting and identifying an artificial intelligence-based pituitary magnetic resonance image according to an embodiment.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-only memory (ROM), a Random Access Memory (RAM), or the like.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (10)

1. An extraction and identification method of pituitary magnetic resonance images based on artificial intelligence is characterized by comprising the following steps:
acquiring at least one pituitary magnetic resonance image of a person to be detected;
inputting each pituitary magnetic resonance image into a pituitary area positioning model obtained by pre-training so that the pituitary area positioning model can calculate each pituitary magnetic resonance image;
outputting the pituitary area corresponding to each pituitary magnetic resonance image.
2. The method for extracting and identifying a pituitary magnetic resonance image based on artificial intelligence as claimed in claim 1, wherein the training method of the pituitary region localization model is specifically as follows:
acquiring a plurality of pituitary magnetic resonance images as model training images; wherein each model training image corresponds to a pre-labeled real pituitary region;
initializing parameters of a convolutional neural network, and calculating the model training image by using the convolutional neural network to output a predicted pituitary region corresponding to the model training image;
calculating a loss function from the predicted pituitary region and the true pituitary region;
and updating parameters of the convolutional neural network by adopting a gradient descent optimization algorithm to reduce the loss function until the loss function tends to be minimized, and obtaining a trained pituitary region positioning model.
3. The method for extracting and identifying a pituitary magnetic resonance image based on artificial intelligence as claimed in claim 2, wherein the calculating the model training image by using the convolutional neural network to output the predicted pituitary region corresponding to the model training image specifically comprises:
extracting a feature map of each model training image to obtain a plurality of pyramid feature maps;
extracting candidate frames of each pyramid feature map to obtain a candidate frame set;
and eliminating the candidate frame excessively overlapped in the candidate frame set to obtain a target candidate frame serving as a predicted pituitary area corresponding to the model training image.
4. The method for extracting and identifying a pituitary magnetic resonance image based on artificial intelligence as claimed in claim 3, wherein the extracting the feature map of each model training image to obtain a plurality of pyramid feature maps specifically comprises:
inputting the model training images into the convolutional neural network, and extracting a plurality of characteristic graphs with different resolutions corresponding to each model training image;
and fusing a plurality of feature maps with different resolutions corresponding to each model training image through a feature pyramid network to obtain a plurality of pyramid feature maps corresponding to each model training image.
5. The method for extracting and identifying a pituitary magnetic resonance image based on artificial intelligence as claimed in claim 3, wherein the extracting candidate frames from each of the pyramid feature maps to obtain a candidate frame set specifically comprises:
according to each pyramid feature map, obtaining a candidate frame classification map c_i and a candidate frame regression map r_i through two branch networks that do not share weights; wherein the candidate frame classification map contains the probability that a target appears in the anchor box at each position, and the candidate frame regression map contains the regression parameters of the anchor box at each position, the regression parameters comprising position offset values x and y, height h and width w;
traversing the target occurrence probabilities in the anchor boxes at each position of the candidate frame classification map c_i, and screening out the anchor boxes of the classification map c_i in which a target exists;
mapping the anchor boxes containing a target onto the candidate frame regression map r_i to obtain the corresponding regression values in r_i, thereby determining the positions of the candidate frames;
and obtaining a candidate frame set corresponding to the model training image according to all candidate frames of each pyramid feature map.
6. The method for extracting and identifying a pituitary magnetic resonance image based on artificial intelligence as claimed in claim 5, wherein the calculating the loss function based on the predicted pituitary region and the true pituitary region specifically comprises:
calculating a loss function from the predicted pituitary region and the true pituitary region, the loss function being calculated by the following formula:
L = Σ_i Σ_j [ CE(c_ij, c_ij^gt) + c_ij^gt · L1Smooth(r_ij − r_ij^gt) ]
wherein c_i^gt and r_i^gt respectively represent the real candidate frame classification map and the real candidate frame regression map; j denotes the j-th anchor box, and when a target exists in the j-th anchor box, c_ij^gt = 1; otherwise, c_ij^gt = 0; Δx_ij = (x − x_a)/w_a, Δy_ij = (y − y_a)/h_a, Δw_ij = log(w/w_a), Δh_ij = log(h/h_a); (x_a, y_a, w_a, h_a) represent the real regression parameters of the anchor box, and (x, y, w, h) represent the predicted regression parameters of the anchor box; CE(·) and L1Smooth(·) represent the cross-entropy function and the L1 smoothing function, respectively.
7. The method for extracting and identifying pituitary magnetic resonance image based on artificial intelligence as claimed in claim 1, further comprising, after the acquiring at least one pituitary magnetic resonance image of the person under test:
preprocessing each pituitary magnetic resonance image of the person to be detected;
wherein the pre-processing comprises: and zooming each pituitary magnetic resonance image to a preset resolution by adopting a bilinear interpolation method, and carrying out pixel value normalization processing.
8. An extraction and identification device for pituitary magnetic resonance images based on artificial intelligence is characterized by comprising:
the image acquisition module is used for acquiring at least one pituitary magnetic resonance image of a person to be detected;
the image input module is used for inputting each pituitary magnetic resonance image into a pre-trained pituitary region positioning model so as to enable the pituitary region positioning model to calculate each pituitary magnetic resonance image;
and the result output module is used for outputting the pituitary area corresponding to each pituitary magnetic resonance image.
9. The apparatus for extracting and identifying pituitary magnetic resonance images based on artificial intelligence as claimed in claim 8, further comprising a model training module; the model training module is specifically configured to:
acquiring a plurality of pituitary magnetic resonance images as model training images; wherein each model training image corresponds to a pre-labeled real pituitary region;
initializing parameters of a convolutional neural network, and calculating the model training image by using the convolutional neural network to output a predicted pituitary region corresponding to the model training image;
calculating a loss function from the predicted pituitary region and the true pituitary region;
and updating parameters of the convolutional neural network by adopting a gradient descent optimization algorithm to reduce the loss function until the loss function tends to be minimized, and obtaining a trained pituitary region positioning model.
10. An artificial intelligence-based pituitary magnetic resonance image extraction and identification device, comprising a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, wherein the processor executes the computer program to implement the artificial intelligence-based pituitary magnetic resonance image extraction and identification method according to any one of claims 1 to 7.
CN202010907798.3A 2020-09-02 2020-09-02 Method and device for extracting and identifying pituitary magnetic resonance image based on artificial intelligence Pending CN112053342A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010907798.3A CN112053342A (en) 2020-09-02 2020-09-02 Method and device for extracting and identifying pituitary magnetic resonance image based on artificial intelligence

Publications (1)

Publication Number Publication Date
CN112053342A true CN112053342A (en) 2020-12-08

Family

ID=73607947

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010907798.3A Pending CN112053342A (en) 2020-09-02 2020-09-02 Method and device for extracting and identifying pituitary magnetic resonance image based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN112053342A (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107492097A (en) * 2017-08-07 2017-12-19 北京深睿博联科技有限责任公司 A kind of method and device for identifying MRI image area-of-interest
US20190156476A1 (en) * 2017-11-17 2019-05-23 National Cancer Center Image analysis method, image analysis apparatus, program, learned deep layer learning algorithm manufacturing method and learned deep layer learning algorithm
CN110047068A (en) * 2019-04-19 2019-07-23 山东大学 MRI brain tumor dividing method and system based on pyramid scene analysis network
CN110111313A (en) * 2019-04-22 2019-08-09 腾讯科技(深圳)有限公司 Medical image detection method and relevant device based on deep learning
CN110287927A (en) * 2019-07-01 2019-09-27 西安电子科技大学 Based on the multiple dimensioned remote sensing image object detection method with context study of depth
CN110415234A (en) * 2019-07-29 2019-11-05 北京航空航天大学 Brain tumor dividing method based on multi-parameter magnetic resonance imaging
CN110674866A (en) * 2019-09-23 2020-01-10 兰州理工大学 Method for detecting X-ray breast lesion images by using transfer learning characteristic pyramid network
CN110706209A (en) * 2019-09-17 2020-01-17 东南大学 Method for positioning tumor in brain magnetic resonance image of grid network
CN110739070A (en) * 2019-09-26 2020-01-31 南京工业大学 brain disease diagnosis method based on 3D convolutional neural network
CN110782427A (en) * 2019-08-19 2020-02-11 大连大学 Magnetic resonance brain tumor automatic segmentation method based on separable cavity convolution
CN110945564A (en) * 2019-08-13 2020-03-31 香港应用科技研究院有限公司 Medical image segmentation based on mixed context CNN model
CN111259758A (en) * 2020-01-13 2020-06-09 中国矿业大学 Two-stage remote sensing image target detection method for dense area
CN111310558A (en) * 2019-12-28 2020-06-19 北京工业大学 Pavement disease intelligent extraction method based on deep learning and image processing method
CN111583204A (en) * 2020-04-27 2020-08-25 天津大学 Organ positioning method of two-dimensional sequence magnetic resonance image based on network model

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
MUSTAFA RASHID ISMAEL: "Hybrid Model-Statistical Features and Deep Neural Network for Brain Tumor Classification in MRI Images", Western Michigan University, pages 1-127 *
LI Weishan et al.: "Improved Faster RCNN pedestrian detection algorithm for underground coal mines", Computer Engineering and Applications, vol. 55, no. 4, pages 200-207 *
LI Li: "Object Detection", HTTPS://FANCYERII.GITHUB.IO/BOOKS/OBJECT-DETECTION/, pages 1-26 *
CHEN Zhigang et al.: "Application of object detection algorithms in the imaging diagnosis of breast cancer lesions", Graphics and Images, pages 28-31 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination