CN111583204A - Organ positioning method of two-dimensional sequence magnetic resonance image based on network model - Google Patents


Publication number
CN111583204A
Authority: CN (China)
Prior art keywords: organ, image, positioning, sequence, dimensional
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202010344910.7A (filed by Tianjin University)
Other languages: Chinese (zh)
Other versions: CN111583204B (granted)
Inventors: 路志英, 赵明月, 肖阳
Assignee (current and original): Tianjin University
Legal status: Active


Classifications

    • G06T7/0012 — Biomedical image inspection (image analysis; flaw detection)
    • G06N3/045 — Neural networks; combinations of networks
    • G06N3/084 — Neural network learning methods; backpropagation, e.g. using gradient descent
    • G06T7/73 — Determining position or orientation of objects using feature-based methods
    • G06T2207/10088 — Image acquisition modality: magnetic resonance imaging [MRI]
    • G06T2207/30081 — Subject of image: prostate

Abstract

The invention discloses a network-model-based organ positioning method for two-dimensional sequence magnetic resonance images, which comprises the following steps: data set preparation, preprocessing and expansion; constructing an improved organ preliminary positioning network model based on Faster R-CNN; optimizing and adjusting the preliminary positioning network model with the verification set to obtain an organ positioning network model; performing preliminary organ positioning on the verification set images to obtain, for each two-dimensional slice image, a plurality of target detection frames with corresponding credibility scores, and retaining in each image the target candidate frames whose credibility exceeds a threshold; constructing a spatial curve fitting model based on sequence correlation processing; preprocessing the two-dimensional sequence magnetic resonance image of an organ to be positioned, inputting it into the organ positioning network model, and obtaining from the preliminary positioning result the target candidate frames whose credibility exceeds the threshold in each image; and finally locating the organ based on sequence correlation processing. The method positions organs accurately and generalizes well.

Description

Organ positioning method of two-dimensional sequence magnetic resonance image based on network model
Technical Field
The invention relates to a method for positioning organs in a magnetic resonance image, in particular to a method for positioning organs in a two-dimensional sequence magnetic resonance image based on a network model.
Background
Magnetic Resonance (MR) imaging has become the primary imaging modality for prostate-assisted diagnosis due to its superior spatial resolution and tissue contrast. Compared to transrectal ultrasound imaging (TRUS), it facilitates targeted biopsy and treatment, with important implications for prostate tumor lesion localization, volume assessment, and staging of prostate cancer. At present, however, prostate MR images are examined visually by radiologists slice by slice, which makes the examination a rather time-consuming, cumbersome and subjective task.
Organ localization is important for many medical image processing tasks such as image registration, organ segmentation and lesion detection. An efficient assessment of the initial position of the organ can greatly improve the performance of subsequent treatments. For example, for organ segmentation, the initial positioning of the organ can focus the segmentation task on the region of interest, thereby improving the segmentation speed, reducing the memory storage, and reducing the risk of false positive segmentation.
At present, some semi-automatic or fully automatic methods applied to computer-aided diagnosis have been proposed in the segmentation and detection of various organs/tissues of medical images. However, due to the heterogeneity of image brightness, imaging artifacts, and signal intensity around the rectal coil caused by the differences in the scanners and scanning schemes, and the differences in size and shape of the glands themselves, as well as the inherent differences in low contrast between the glands and surrounding tissue structures, lack of strong boundaries, etc., prostate segmentation and detection still face significant challenges.
In recent years, two-stage region-based target detection algorithms such as Faster R-CNN have been widely used in medical image processing due to their excellent detection accuracy and good detection efficiency. However, since the network was originally designed for target detection in natural images and still has limitations in detecting small targets, it cannot by itself uniquely and accurately locate organs in magnetic resonance images.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides an organ positioning method of a two-dimensional sequence magnetic resonance image improved based on Faster R-CNN, which fully utilizes the difference between the natural image characteristics and the medical image characteristics to optimize the positioning accuracy and the detection success rate of organs.
Therefore, the invention adopts the following technical scheme:
a method for organ positioning of two-dimensional sequence magnetic resonance images based on a network model comprises the following steps:
s1, preparing a data set:
collecting a plurality of two-dimensional sequence magnetic resonance images about an organ, labeling organ regions in the images by a rectangular target box, and then dividing the images into a training set and a verification set;
s2, preprocessing the data set: sequentially carrying out pixel intensity maximum and minimum normalization processing, center cutting processing and image size normalization processing on each two-dimensional sequence MR image;
s3, performing data expansion processing on the training set image obtained in the step S2 and the rectangular target frame mark of the corresponding organ;
s4, constructing an improved organ positioning network model based on the master R-CNN, comprising the following steps:
1) constructing a target detection network architecture based on the improved Faster R-CNN, replacing a VGG16 architecture in the classic Faster R-CNN with ResNet-50 with a spatial attention mechanism, and training the ResNet-50 by using an ImageNet large-scale natural data set to obtain an initial training weight parameter of the network;
2) using the training set image expanded in the step S3 and the corresponding rectangular target frame label as the input of the network in the step 1), and performing iterative adjustment on the whole network architecture parameter by using a multitask loss function formed by classification loss and regression loss to complete network training and generate an organ preliminary positioning network model;
s5, labeling the verification set image preprocessed in the step S2 and the rectangular target box corresponding to the verification set image and inputting the labeled verification set image and the rectangular target box into the organ initial positioning network model generated in the step S4, and optimizing and adjusting the organ positioning model according to an output result to obtain an organ positioning network model;
s6, performing organ preliminary positioning on the verification set image preprocessed in the step S2 by using the network model constructed in the step S5 to obtain a plurality of target detection frames and corresponding credibility scores aiming at each two-dimensional slice image, and reserving target candidate frames with credibility greater than a certain threshold value in each image;
s7, constructing a spatial curve fitting model based on sequence correlation processing:
1) aiming at each two-dimensional slice image with the initial detection result in the S6, the axial slice images belonging to the same sequence are sorted in a front-back order;
2) extracting key points of a target frame in the slice image with a single reliable target candidate frame from each sorted sequence, performing spatial curve fitting in the sequence direction by using the key points, and determining the best curve fitting mode to be least square quartic polynomial fitting;
s8, preprocessing the two-dimensional sequence magnetic resonance image of a certain organ to be positioned according to the method of the step S2, and inputting the preprocessed two-dimensional sequence magnetic resonance image into the organ positioning network model obtained in the step S5 to obtain a primary organ positioning result;
s9, obtaining a plurality of target detection frames and corresponding credibility scores of each two-dimensional slice image according to the organ initial positioning result output in the step S8, and reserving target candidate frames with credibility greater than a certain threshold value in each image;
s10, performing final positioning on the organ based on the sequence correlation processing:
1) sorting each two-dimensional slice image with the preliminary detection result in the S9 in sequence;
2) aiming at the sorted sequence image, extracting key points of a target frame in a slice image with a single reliable target candidate frame, and performing spatial curve fitting in the sequence direction on the key points according to a least square quartic polynomial fitting mode obtained in S7;
3) screening and judging a target frame closest to a fitting position in a two-dimensional image with multiple prediction frames as a final positioning frame of the organ by using the fitted space curve and the minimum space Euclidean distance, and updating space curve fitting parameters after each screening;
4) and fitting the two-dimensional image subjected to the missing detection by using the finally updated spatial curve to complete the final positioning of the organ.
When the pixel intensity is normalized in step S2, the pixel value x_i of pixel i is normalized as: norm_i = (x_i − P_min)/(P_max − P_min) × 255, norm_i ∈ [0, 255], where P_max and P_min are respectively the maximum and minimum pixel values of the slice image in which pixel i is located.
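The normalization step above can be sketched in NumPy (a minimal illustration; the function name is not from the patent):

```python
import numpy as np

def minmax_normalize(slice_img):
    """Scale one 2-D MR slice to [0, 255] using its own min/max intensity."""
    p_min, p_max = slice_img.min(), slice_img.max()
    return (slice_img - p_min) / (p_max - p_min) * 255.0
```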
In step S2, when the center cropping is performed, if the original size is W × H, the cropped size is αW × αH, where α is a scaling factor, preferably α = 2/3.
Preferably, in step S2, the image size is normalized to 600 × 600.
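The center cropping of step S2 can be sketched as follows (an illustrative NumPy version; resizing the crop to 600 × 600 would typically follow via bilinear interpolation, omitted here):

```python
import numpy as np

def center_crop(img, alpha=2/3):
    """Keep the central alpha*H x alpha*W region of a (H, W) image."""
    h, w = img.shape
    ch, cw = int(round(alpha * h)), int(round(alpha * w))
    top, left = (h - ch) // 2, (w - cw) // 2
    return img[top:top + ch, left:left + cw]
```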
In step S3, data expansion processing is performed by horizontal flipping, small-amplitude horizontal translation or vertical translation, small-angle random rotation, and elastic deformation. Preferably, the small angle is randomly rotated by an angle of-5 to 5 °.
In step S6, the credibility threshold is 0.80–0.90.
The invention is inspired by the excellent performance of the Faster R-CNN model in natural image target detection. Building on that model, it further improves and refines it by fully exploiting the prior information on the central position of the organ in the image and the evolution rule between sequence images, thereby accurately positioning the prostate organ in magnetic resonance images and laying a good foundation for subsequent medical image tasks such as organ registration and segmentation. The improved two-dimensional sequence magnetic resonance organ positioning method based on Faster R-CNN fully exploits the differences between natural image characteristics and medical image characteristics to optimize organ positioning accuracy and detection success rate.
Experiments prove that on the premise of not having too much complicated image preprocessing work, the method can realize more accurate organ positioning and has good generalization performance. The method has the recall rate of 96.91% for the area of the organ and the success rate of positioning the organ as high as 99.39%.
Drawings
FIG. 1 is a block flow diagram of the method of the present invention;
FIG. 2 is an example of axial magnetic resonance images of the prostate from different medical centers used in the present invention;
FIG. 3 is a specific structure of a part of network blocks in the present invention: (a) a convolution attention block; (b) an identity block; (c) a spatial attention module;
FIG. 4 is a partial preliminary test result based on improved Faster R-CNN: (a) an example of a slice image with multiple target detection results; (b) slice image examples lacking target detection results;
FIG. 5 is a schematic diagram of spatial curve fitting and updating based on sequence correlation, wherein: (1) projection of key points of the rejected target frame on X-Z and X-Y planes; (2) drawing and adjusting a space fitting curve; (3) projection of the space fitting curve on an X-Z plane; (4) projection of space fitting curve on X-Y plane
FIG. 6 is a partial inspection result plot on a validation set for the method of the present invention: wherein the red mark is the real outline of the prostate organ, and the yellow rectangular box is the organ positioning result of the algorithm;
FIG. 7 is a graph of axial slice test results for a practical case of the method of the invention: (a) the 5th slice; (b) the 11th slice; (c) the 15th slice; (d) the 18th slice.
FIG. 8 is a flow chart of the model training and testing stages of the present invention.
Detailed Description
Since the initial positioning of an organ is very important for medical image processing tasks such as organ segmentation, image registration and lesion detection, an effective estimate of the organ's initial position helps remove most of the interfering information in the image background and thereby improves subsequent processing. On the basis of a two-stage detection algorithm, the invention introduces a spatial attention module that focuses on organ position features and an inter-sequence correlation processing that improves the detection results; the method shows excellent detection performance on a highly heterogeneous test data set and lays a good foundation for subsequent medical image processing.
The method of the present invention will be described in detail with reference to specific examples.
Embodiment a two-dimensional sequence magnetic resonance image organ positioning method based on network model
This example describes the positioning method of the prostate organ in detail by taking the prostate magnetic resonance data set with a small scale and high heterogeneity as an example. As shown in fig. 8, the method comprises the steps of:
s1, data set preparation:
the prostate magnetic resonance image used in this example is from the prostate segmentation challenge suite PROMISE12 dataset published at the MICCAI conference of 2012. The data set contains 80T 2 weighted magnetic resonance image cases from four medical centers, 50 of which have expert manual segmentation masks, each center using different acquisition equipment and acquisition protocols when acquiring images. Sample images from four centers are shown in fig. 2. Wherein 1.5T and 3T represent the magnetic field strength and the presence or absence of ERC represents whether a rectal coil is used for image acquisition.
This example divides the 50 cases of MR images of the prostate with expert manual segmentation masks into a training set and a verification set in a 4:1 ratio in case units.
S2, preprocessing the data set image:
(1) preprocessing images of a training set and a verification set:
and performing the following four steps of basic preprocessing operation on the obtained training set and verification set images: (a) because the interlayer resolution of the original three-dimensional image in the axial direction is higher than the in-layer resolution, the three-dimensional image in the original data is expanded according to the axial sequence to form a two-dimensional sequence MR image. (b) Secondly, since the image intensities from different acquisition centers are greatly different, intensity maximum and minimum normalization processing is performed for each two-dimensional slice image. E.g. pixel value x of pixel iiThe normalized pixel values are: normi=(xi-Pmin)/(Pmax-Pmin)×255,normi∈[0,255]In addition, in order to enhance the detection performance of a small target, the image is subjected to center cropping processing (if the original size is W × H, the cropped size is α W ×α H, α is a scaling factor, and in this embodiment, α is 2/3) (d), and finally, the image sizes are uniformly normalized, and the image sizes are normalized to 600 × 600 in this embodiment.
(2) Generation of training set and validation set target box labels
To train and validate the deep learning network with labeled data, image-level annotations, i.e. rectangular target box annotations, are generated for each training and verification image from the segmentation masks (pixel-level annotations) corresponding to those images. Specifically, after the segmentation mask images undergo the center cropping and size normalization operations of (c)-(d) in (1), the bounding rectangle of the real organ contour is taken as the organ position label, and the training and verification set images together with the corresponding label images serve as input to the subsequent training and verification network.
For data sets without segmentation masks, rectangular target box labeling may be performed on regions of the organ in these images first.
S3, data expansion processing:
because extensive data sets are often needed for deep learning model training to better learn the generalization characteristics of image data, and medical images are often difficult to obtain due to problems of patient privacy and the like, the invention carries out data expansion processing on training set images and corresponding segmentation mask images through four modes of horizontal turning, small-amplitude horizontal or vertical translation, small-angle random rotation (-5 ℃) and elastic deformation. The number of the expanded images is 16 times of the number of the original images.
S4, constructing an improved organ preliminary positioning network model based on Faster R-CNN:
(1) building a network structure:
the invention adopts a network structure similar to that of fast R-CNN, except that a ResNet-50 architecture with a spatial attention mechanism is used for replacing VGG16 in the original fast R-CNN to realize the feature extraction of an input image and the classification and identification of objects in the image, as shown in FIG. 1, wherein the specific construction of a convolution attention module and the embedding way of the spatial attention module are shown in FIG. 3(a), the specific structures of an identification block and the spatial attention module are shown in FIGS. 3(b) and (c), the addition of the spatial attention module is used for better utilizing the central position information of organs in the image and enhancing the sensitivity of the network to position feature learning, and F ∈ R is used for enhancing the sensitivity of the network to position feature learningC×H×WWhen the feature map is input as an intermediate feature map, the calculation formula of the attention feature map after the spatial attention module is added is as follows:
Attention(F)=sigmoid(f7×7[Favg,Fmax])Favg(F)=Global_avgpool(F),
Fmax(F)=Global_maxpool(F)
wherein, Favg,Fmax∈R1×H×W,f7×7The representation performs a convolution operation with the convolution kernel of 7 × 7.
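A minimal NumPy sketch of such a spatial attention module (illustrative only; in the patent this is a convolutional layer embedded in ResNet-50, here reduced to explicit loops with a given 7 × 7 kernel):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(feat, kernel):
    """feat: (C, H, W) feature map; kernel: (2, k, k) conv weights.
    Returns the (H, W) attention map and the reweighted feature map."""
    # Channel-wise average and max pooling -> two (H, W) descriptors
    f_avg = feat.mean(axis=0)
    f_max = feat.max(axis=0)
    stacked = np.stack([f_avg, f_max])              # (2, H, W)
    k = kernel.shape[-1]
    pad = k // 2
    padded = np.pad(stacked, ((0, 0), (pad, pad), (pad, pad)))
    H, W = f_avg.shape
    conv = np.zeros((H, W))
    for i in range(H):                               # naive k x k convolution
        for j in range(W):
            conv[i, j] = np.sum(padded[:, i:i + k, j:j + k] * kernel)
    att = sigmoid(conv)                              # values in (0, 1)
    return att, feat * att                           # broadcast over channels
```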
(2) Network training strategy and loss function settings
1) Training strategy
Due to the limited size of medical data sets, the invention uses the idea of transfer learning: the network is pre-trained on the ImageNet large-scale natural image set, the pre-trained weight parameters serve as the initial weights of the method, and on this basis the network is further trained with the labeled prostate data set. The parameters selected in the training process are β1 = 0.9, β2 = 0.999, ε = 10^−8 and a fixed learning rate of 2 × 10^−5.
2) Loss function setting
The Faster R-CNN network comprises two tasks of regression and classification, wherein a regressor is used for carrying out regression prediction on a candidate box, and a classifier is used for classifying objects in the candidate box. Therefore, the network training process uses a multitask loss function, and the calculation formula is as follows:
L({p_i}, {t_i}) = (1/N_cls) Σ_i L_cls(p_i, p_i*) + λ (1/N_reg) Σ_i p_i* L_reg(t_i, t_i*)
where L_cls and L_reg are the classification loss and the regression loss, respectively, expressed as follows:
i) Classification loss, using a two-class cross-entropy loss function:
L_cls(p_i, p_i*) = −[p_i* log p_i + (1 − p_i*) log(1 − p_i)]
where p_i is the probability that anchor[i] is predicted to be the target, and p_i* is the true label of anchor[i], i.e. p_i* = 1 if anchor[i] is positive and p_i* = 0 otherwise.
ii) Regression loss:
L_reg(t_i, t_i*) = smooth_L1(t_i − t_i*), with smooth_L1(x) = 0.5x² if |x| < 1 and |x| − 0.5 otherwise,
where t_i = {t_x, t_y, t_w, t_h} is a vector representing the parameterized coordinates of the prediction candidate box anchor[i], and t_i* is the coordinate vector of the corresponding real target frame.
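The multitask loss above can be sketched numerically as follows (a NumPy illustration under the usual Faster R-CNN conventions; function names and the λ default are not from the patent):

```python
import numpy as np

def smooth_l1(x):
    """Elementwise smooth-L1: 0.5*x^2 for |x| < 1, |x| - 0.5 otherwise."""
    ax = np.abs(x)
    return np.where(ax < 1, 0.5 * x**2, ax - 0.5)

def multitask_loss(p, p_star, t, t_star, lam=1.0):
    """p: (N,) predicted foreground probabilities; p_star: (N,) 0/1 labels;
    t, t_star: (N, 4) parameterized box coordinates. Returns a scalar loss."""
    eps = 1e-7
    p = np.clip(p, eps, 1 - eps)
    l_cls = -(p_star * np.log(p) + (1 - p_star) * np.log(1 - p)).mean()
    # regression term is counted only for positive anchors (p_star == 1)
    n_reg = max(p_star.sum(), 1)
    l_reg = (p_star[:, None] * smooth_l1(t - t_star)).sum() / n_reg
    return l_cls + lam * l_reg
```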
(3) Network training
1) Feature extraction
The training set images obtained in step S3 are input into the feature extraction network. As the resolution of the convolutional layers decreases, the network learns progressively more global (i.e. generalized) image features, and the addition of the residual structure further improves the feature extraction process. As shown in FIG. 1, the image undergoes four down-sampling operations with stride 2 in stages 1-4, so the resulting convolution feature map is about 1/16 of the original input size in each spatial dimension (for a 600 × 600 input, the output feature map is 38 × 38).
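The 600 → 38 feature-map arithmetic can be checked with a short helper (assuming ceil rounding at each stride-2 stage, which reproduces the 600 → 38 figure quoted above; exact sizes depend on the padding scheme of the implementation):

```python
import math

def feature_map_size(input_size, num_downsamples=4):
    """Spatial size after repeated stride-2 downsampling with ceil rounding:
    600 -> 300 -> 150 -> 75 -> 38."""
    size = input_size
    for _ in range(num_downsamples):
        size = math.ceil(size / 2)
    return size
```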
2) Prediction of target candidate box
(a) Based on the convolution feature map obtained in 1), the RPN first performs a 3 × 3 convolution on the feature map, then forms several anchors of different sizes and aspect ratios centered at each pixel of the feature map and maps them onto the feature map to obtain a set of target candidate box proposals. In this embodiment, considering the morphological size of the organ itself and its proportion in the image, anchors with sizes {64, 128, 256, 512} and aspect ratios {0.5, 1, 2} are selected, forming 4 × 3 = 12 anchors at each pixel point.
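The anchor construction in (a) can be sketched as follows (illustrative NumPy; the aspect-ratio convention r = h/w with the anchor area preserved at scale² is an assumption, not stated in the patent):

```python
import numpy as np

def make_anchors(cx, cy, scales=(64, 128, 256, 512), ratios=(0.5, 1, 2)):
    """All scale/aspect-ratio anchor boxes centered at (cx, cy),
    returned as rows of (x1, y1, x2, y2)."""
    boxes = []
    for s in scales:
        for r in ratios:
            w = s / np.sqrt(r)          # width shrinks as ratio h/w grows
            h = s * np.sqrt(r)          # so that w * h == s**2
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.array(boxes)
```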
(b) Performing ROI pooling operation based on the convolution feature map obtained in 1) and the target candidate frame suggestions with different sizes and sizes obtained in (a) to obtain target candidate frame feature maps with the same size;
(c) performing further convolution operation on the target candidate frame feature map obtained in the step (b) by using a stage 5 shown in fig. 1, and then respectively obtaining a class probability prediction and a position offset prediction of each target candidate frame by using a full-connection layer;
(d) Non-maximum suppression (NMS) with a selected threshold is performed to remove target candidate boxes with high overlap within the same class.
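A plain NumPy sketch of the NMS step in (d) (greedy suppression by IoU; the threshold value here is illustrative):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.7):
    """boxes: (N, 4) as (x1, y1, x2, y2); scores: (N,).
    Returns indices of kept boxes, highest score first."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # intersection of the top box with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]    # drop boxes overlapping too much
    return keep
```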
(e) The classification and position predictions of the target candidate frames screened in (d) are evaluated with the loss function using the label images corresponding to the training images; the parameters of the deep convolutional neural network are iteratively updated by gradient-descent back-propagation, and the network parameters obtained after the maximum set number of iterations are taken as the optimal parameters, completing training and generating the organ preliminary positioning network model.
S5, applying the constructed network model to the primary positioning of the verification set:
and (4) inputting the preprocessed verification set image (not including the target frame label) in the S2 into the constructed network model, and outputting to obtain a plurality of target detection frames and corresponding credibility scores for each two-dimensional slice image in the verification set image. And reserving target candidate frames with the credibility larger than a threshold (the threshold is 0.80 selected in the method) in each image.
S6, performing final positioning on the organ based on the sequence correlation processing:
the phenomenon that a plurality of target detection frames appear in the individual slice image as shown in (a) of fig. 4 and the phenomenon that a target detection frame is absent in (b) occur in the preliminary detection result of step S4 because the feature of the medical image itself is ignored in this way. For organ positioning detection, the position of an organ between two-dimensional sequence MR images of the same individual does not change greatly, and the evolution of the surface contour of the organ has a certain rule, so that the evolution of the surface contour of the organ can be described by using a mathematical model, and the method introduces sequence correlation processing to improve the problems. The method comprises the following steps:
(1) two-dimensional image serialization:
and (4) aiming at each two-dimensional slice image with the preliminary detection result in the S4, the axial slice images belonging to the same sequence are sorted in a front-back order, and preparation is made for further sequence correlation processing.
(2) Final determination of the target frame:
taking the detection of the prostate organ as an example, each image contains the prostate organ and the organ is unique. Therefore, by using the uniqueness of the target and the micro-variability of the organ positions in the sequence images, firstly, aiming at each sequence in the verification set obtained in the step (1), three-dimensional space curve fitting is carried out on nine key points (including the central point, four vertexes and the midpoints of four sides of a rectangular frame) of a target frame in the slice image which belongs to the same sequence and has a single reliable target candidate frame, and the fitting mode adopts least square fourth-order polynomial fitting.
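The least-squares quartic fit along the sequence direction can be sketched with `np.polyfit` (illustrative; one pair of curves is fitted per key point, nine pairs in total):

```python
import numpy as np

def fit_keypoint_curves(slice_idx, ys, zs, degree=4):
    """Least-squares quartic polynomial fit of one key point's in-plane
    (y, z) coordinates along the slice axis x.
    slice_idx: (N,) slice positions; ys, zs: (N,) key-point coordinates.
    Returns two polynomials predicting y(x) and z(x)."""
    fy = np.poly1d(np.polyfit(slice_idx, ys, degree))
    fz = np.poly1d(np.polyfit(slice_idx, zs, degree))
    return fy, fz
```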
The following processes are performed for the two problems in fig. 4(a) and (b), respectively:
I) the phenomenon of two or more target frames for individual two-dimensional images in each sequence (as shown in fig. 4 (a)): and screening and judging a target frame closest to the fitting position in the two-dimensional image with the multiple prediction frames as the position of the final organ by using the fitted space curve and the minimum space Euclidean distance, and updating the fitting parameters of the space curve after each screening. FIG. 5 is a diagram illustrating this process, wherein the mathematical formula for the object box filtering is based on:
b* = argmin_b Σ_{j=1}^{9} √((ŷ_j − y_j^b)² + (ẑ_j − z_j^b)²)
where b* denotes the organ prediction frame finally selected for the slice image, ŷ_j and ẑ_j denote the fitted predicted values in the y and z directions at the j-th key point, and y_j^b and z_j^b denote the y- and z-direction coordinate values at the j-th key point of a target frame predicted by the network.
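The screening rule above can be sketched as follows (illustrative NumPy; `candidates` holds the key-point arrays of the multiple predicted boxes in one slice):

```python
import numpy as np

def select_box(candidates, y_fit, z_fit):
    """candidates: list of (9, 2) arrays, the (y, z) coordinates of the nine
    key points of each predicted box in one slice; y_fit, z_fit: (9,) fitted
    key-point positions for that slice. Returns the index of the box whose
    key points have the smallest summed Euclidean distance to the fit."""
    dists = [np.sqrt((kp[:, 0] - y_fit)**2 + (kp[:, 1] - z_fit)**2).sum()
             for kp in candidates]
    return int(np.argmin(dists))
```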
II) The base and apex of the prostate at the first and last ends of each sequence may appear without target boxes because the target is small (as shown in fig. 4 (b)):
Because of the target detection network's limitations in detecting small targets, the method fits the missed two-dimensional images in each sequence using that sequence's finally updated space curve to obtain the target position of the organ, thereby achieving final localization of the organs in the verification set images.
Validation set localization effect assessment
And (3) evaluating the detection effect of the method from two aspects of qualitative and quantitative by combining the positioning result of the verification set image obtained in the step S6 and the organ true labeling of the verification set image in the step S2:
from a qualitative perspective, fig. 6 (a) - (d) are graphs of the detection results obtained on the validation set. The red line represents the real contour of the organ, and the yellow rectangle represents the detection result of the prostate organ by the method of the invention. From the detection result diagram, the method can realize more accurate and unique organ positioning, and simultaneously well avoids organs such as bladder, rectum and the like which are close to the organs in shape and size.
From a quantitative perspective, repeated tests show that the recall rate of the method over the real organ region reaches 96.91%; taking a region recall of 50% as the criterion of successful localization, the localization success rate of the method on the validation set reaches 99.39%.
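The 50%-recall success criterion can be stated precisely as follows (a sketch; the box layout [x1, y1, x2, y2] is an assumption):

```python
def region_recall(pred, gt):
    """Fraction of the ground-truth box area covered by the prediction;
    localization counts as a success when this recall is >= 0.5.
    Boxes are [x1, y1, x2, y2]."""
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    gt_area = (gt[2] - gt[0]) * (gt[3] - gt[1])
    return inter / gt_area
```

For example, a prediction covering the left half of the ground-truth box has recall 0.5 and would just count as a successful localization.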
In terms of detection efficiency, the method takes only 0.3 s (Intel Core i7, 3 GHz processor) for organ localization in a two-dimensional slice image, which well meets the practical requirements of medical tasks.
Preferably, the preprocessed validation set is input into the network model formed in step S4, and the trained network is further optimized and adjusted to obtain an optimized network model; step S5 is then executed.
Although the above description takes the localization of the prostate organ as an example, it can be understood that the method is also applicable to the detection and localization of other tissues and organs, such as the kidney and pancreas.
Example two
Localizing a prostate organ using the above method comprises the following steps:
(1) image preprocessing:
For the three-dimensional magnetic resonance prostate image of a given case, two-dimensional serialization is performed first. Next, maximum-minimum intensity normalization scales the pixel values of each two-dimensional slice image to the range 0-255. Finally, the image is cropped appropriately according to the position and proportion of the organ in the image, and the image size is normalized to 600 × 600.
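The normalization and cropping steps can be sketched as below (NumPy only; the final resize to 600 × 600 would use an image library such as OpenCV or Pillow and is omitted here; the 2/3 crop factor is taken from the dependent claims):

```python
import numpy as np

def preprocess(slice_2d, alpha=2 / 3):
    """Min-max normalize a slice to [0, 255], then center-crop it to
    alpha*W x alpha*H around the image center."""
    lo, hi = slice_2d.min(), slice_2d.max()
    norm = (slice_2d - lo) / (hi - lo) * 255.0
    h, w = norm.shape
    ch, cw = round(h * alpha), round(w * alpha)
    top, left = (h - ch) // 2, (w - cw) // 2
    return norm[top:top + ch, left:left + cw]

out = preprocess(np.random.rand(300, 300) * 1000.0)
```

Each slice of the serialized volume would be passed through this function before being fed to the detection network.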
(2) Preliminary organ localization based on the Faster R-CNN-improved organ localization network model:
1) feature extraction: inputting the two-dimensional slice image obtained in the step (1) into a convolutional neural network constructed in S4 to obtain a convolutional characteristic map about the image;
2) generating a target candidate box suggestion: inputting the convolution characteristic diagram obtained in the step 1) into an RPN network to generate a target candidate frame suggestion;
3) ROI pooling (i.e., region of interest pooling): taking the convolution feature map output by 1) and the target candidate box suggestion output by 2) as input, mapping the target candidate box suggestion to a corresponding position of the feature map, and performing pooling operation on the mapped feature map to obtain a target area feature map with uniform size;
4) classification prediction: classification is performed according to the target-region feature map obtained in 3), distinguishing whether the object contained in the corresponding target candidate frame is the organ or the background, and a confidence score for the category is obtained;
5) bounding-box regression: bounding-box regression is performed on the target-region feature map obtained in 3) to refine the bounding box, finally obtaining the accurate position of the detection frame;
6) based on the target frames belonging to the organ and their positions obtained in 4) and 5), overlap suppression with a fixed threshold Nt is performed to remove sets of target frames with high mutual overlap, and the target candidate frames whose confidence exceeds a fixed threshold (a value between 0.80 and 0.90 is recommended) are retained in each two-dimensional slice image. The positions of the target detection frames in each two-dimensional slice image and their corresponding confidence scores are thus obtained.
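Step 6) combines greedy overlap suppression with a confidence cut-off; a standard NumPy sketch follows (the threshold values are illustrative):

```python
import numpy as np

def filter_boxes(boxes, scores, iou_thresh=0.5, score_thresh=0.85):
    """Drop boxes below score_thresh, then greedily suppress any box whose
    IoU with a higher-scoring kept box exceeds iou_thresh (the Nt value).
    boxes: (n, 4) array of [x1, y1, x2, y2]."""
    m = scores >= score_thresh
    boxes, scores = boxes[m], scores[m]
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
        iou = inter / (areas[i] + areas[rest] - inter)
        order = rest[iou <= iou_thresh]
    return boxes[keep], scores[keep]

b = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
s = np.array([0.95, 0.90, 0.92])
kept_boxes, kept_scores = filter_boxes(b, s)
```

In the toy data the second box heavily overlaps the first and is suppressed, while the distant third box survives.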
(3) Final localization of organs based on sequence correlation processing:
1) sorting the slice images according to the front and back sequence of the original sequence;
2) First, least-squares quartic-polynomial space-curve fitting is performed on the nine key points (the center point, the four vertices, and the midpoints of the four edges) of the target frame in the sequence images that contain a single reliable target candidate frame. Then, using the fitted space curve and the minimum spatial Euclidean distance, the target frame closest to the fitted position is selected in each two-dimensional image with multiple prediction frames as the final localization frame of the organ, and the space-curve fitting parameters are updated after each selection.
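The nine key points named in 2) can be extracted from an axis-aligned target frame as follows (a sketch; the box layout [x1, y1, x2, y2] is an assumption):

```python
def key_points(box):
    """Nine key points of a rectangular target frame: the center,
    the four vertices, and the midpoints of the four edges."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    return [
        (cx, cy),                                # center
        (x1, y1), (x2, y1), (x2, y2), (x1, y2),  # four vertices
        (cx, y1), (x2, cy), (cx, y2), (x1, cy),  # four edge midpoints
    ]

pts = key_points([0.0, 0.0, 10.0, 20.0])
```

Each of the nine points traces its own trajectory through the slice sequence, giving nine curves per coordinate direction to fit.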
3) For organ detection at the head and tail ends of the sequence, which may produce no target frame because the target is small, the finally updated space curve is evaluated at the two-dimensional images with missed detections to obtain the target localization of the organ, thereby completing organ localization for the whole two-dimensional sequence.
In this embodiment, the prostate localization results in the axial slice images are shown in fig. 7, in which (a)-(d) show the detection results for the 5th, 11th, 15th and 18th slices of the axial sequence, respectively. Transverse comparison of the localization results in the four images shows that the method localizes the prostate very well throughout the sequence slices, in particular at the first and last ends of the sequence, a setting that other algorithms rarely address or match. In addition, viewed longitudinally, the method accurately localizes the prostate while reliably excluding regions of similar shape and size, such as the bladder and rectum, indicating that the method learns the features of the target organ well.
In summary, the organ localization and identification method for two-dimensional sequence magnetic resonance images has the advantage of building a general organ localization method from a small-scale, highly heterogeneous magnetic resonance data set, thereby avoiding the subjectivity of traditional slice-by-slice visual inspection. In addition, the invention achieves preliminary organ detection and localization by improving the target detection algorithm Faster R-CNN used in natural image processing, and then greatly improves the localization of organs at the head and tail ends of the sequence by exploiting the correlation between medical sequence slice images (namely, the inter-slice evolution of the position, shape and size of the whole organ).

Claims (8)

1. A method for organ positioning of two-dimensional sequence magnetic resonance images based on a network model comprises the following steps:
s1, preparing a data set:
collecting a plurality of two-dimensional sequence magnetic resonance images about an organ, labeling organ regions in the images by a rectangular target box, and then dividing the images into a training set and a verification set;
s2, preprocessing the data set: sequentially carrying out pixel intensity maximum and minimum normalization processing, center cutting processing and image size normalization processing on each two-dimensional sequence MR image;
s3, performing data expansion processing on the training set image obtained in the step S2 and the rectangular target frame mark of the corresponding organ;
s4, constructing an organ preliminary positioning network model based on fast R-CNN improvement, comprising the following steps:
1) constructing a target detection network architecture based on the improvement of the Faster R-CNN, replacing a VGG16 architecture in the classic Faster R-CNN with ResNet-50 with a space attention mechanism, and training the ResNet-50 by using an ImageNet large-scale natural data set to obtain an initial training weight parameter of the network;
2) using the training set image expanded in the step S3 and the corresponding rectangular target frame label as the input of the network in the step 1), and performing iterative adjustment on the whole network architecture parameter by using a multitask loss function formed by classification loss and regression loss to complete network training and generate an organ preliminary positioning network model;
s5, labeling the verification set image preprocessed in the step S2 and the rectangular target box corresponding to the verification set image and inputting the labeled verification set image and the rectangular target box into the organ initial positioning network model generated in the step S4, and optimizing and adjusting the organ positioning model according to an output result to obtain an organ positioning network model;
s6, performing organ preliminary positioning on the verification set image preprocessed in the step S2 by using the network model constructed in the step S5 to obtain a plurality of target detection frames and corresponding credibility scores aiming at each two-dimensional slice image, and reserving target candidate frames with credibility greater than a certain threshold value in each image;
s7, constructing a spatial curve fitting model based on sequence correlation processing:
1) aiming at each two-dimensional slice image with the initial detection result in the S6, the axial slice images belonging to the same sequence are sorted in a front-back order;
2) extracting key points of a target frame in the slice image with a single reliable target candidate frame from each sorted sequence, performing spatial curve fitting in the sequence direction by using the key points, and determining the best curve fitting mode to be least square quartic polynomial fitting;
s8, preprocessing the two-dimensional sequence magnetic resonance image of a certain organ to be positioned according to the method of the step S2, and inputting the preprocessed two-dimensional sequence magnetic resonance image into the organ positioning network model obtained in the step S5 to obtain a preliminary positioning result of the organ;
s9, obtaining a plurality of target detection frames and corresponding credibility scores of each two-dimensional slice image according to the organ initial positioning result output in the step S8, and reserving target candidate frames with credibility greater than a certain threshold value in each image;
s10, performing final positioning on the organ based on the sequence correlation processing:
1) sorting each two-dimensional slice image with the preliminary detection result in the S9 in sequence;
2) aiming at the sorted sequence image, extracting key points of a target frame in a slice image with a single reliable target candidate frame, and performing spatial curve fitting in the sequence direction on the key points according to a least square quartic polynomial fitting mode obtained in S7;
3) screening and judging a target frame closest to a fitting position in a two-dimensional image with multiple prediction frames as a final positioning frame of the organ by using the fitted space curve and the minimum space Euclidean distance, and updating space curve fitting parameters after each screening;
4) and fitting the two-dimensional image subjected to the missing detection by using the finally updated spatial curve to complete the final positioning of the organ.
2. The organ localization method according to claim 1, wherein: in step S2, when pixel-intensity normalization is performed, the pixel value x_i of pixel i is normalized as:
norm_i = (x_i - P_min)/(P_max - P_min) × 255, norm_i ∈ [0, 255],
wherein P_max and P_min respectively denote the maximum and minimum pixel values of the slice image in which pixel i is located.
3. The organ localization method according to claim 1, wherein: in step S2, when the center trimming process is performed, if the original size is W × H, the trimmed size is α W × α H, and α is a scaling factor.
4. The organ localization method according to claim 3, wherein: α is 2/3.
5. The organ localization method according to claim 1, wherein: in step S2, the image size is normalized to 600 × 600.
6. The organ localization method according to claim 1, wherein: in step S3, data expansion processing is performed by horizontal flipping, small-amplitude horizontal translation or vertical translation, small-angle random rotation, and elastic deformation.
7. The organ localization method according to claim 6, wherein: the angle of the small-angle random rotation is in the range of -5 to 5 degrees.
8. The organ localization method according to claim 1, wherein: in the step S7, the threshold value is 0.80-0.90.
CN202010344910.7A 2020-04-27 2020-04-27 Organ positioning method of two-dimensional sequence magnetic resonance image based on network model Active CN111583204B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010344910.7A CN111583204B (en) 2020-04-27 2020-04-27 Organ positioning method of two-dimensional sequence magnetic resonance image based on network model


Publications (2)

Publication Number Publication Date
CN111583204A true CN111583204A (en) 2020-08-25
CN111583204B CN111583204B (en) 2022-10-14

Family

ID=72125433

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010344910.7A Active CN111583204B (en) 2020-04-27 2020-04-27 Organ positioning method of two-dimensional sequence magnetic resonance image based on network model

Country Status (1)

Country Link
CN (1) CN111583204B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101301207A (en) * 2008-05-28 2008-11-12 华中科技大学 Vascular angiography three-dimensional rebuilding method under dynamic model direction
CN109727240A (en) * 2018-12-27 2019-05-07 深圳开立生物医疗科技股份有限公司 A kind of three-dimensional ultrasound pattern blocks tissue stripping means and relevant apparatus
CN110009628A (en) * 2019-04-12 2019-07-12 南京大学 A kind of automatic testing method for polymorphic target in continuous two dimensional image
CN110211097A (en) * 2019-05-14 2019-09-06 河海大学 A kind of crack image detecting method based on the migration of Faster R-CNN parameter
CN110503112A (en) * 2019-08-27 2019-11-26 电子科技大学 A kind of small target deteection of Enhanced feature study and recognition methods
CN110610210A (en) * 2019-09-18 2019-12-24 电子科技大学 Multi-target detection method
CN111027547A (en) * 2019-12-06 2020-04-17 南京大学 Automatic detection method for multi-scale polymorphic target in two-dimensional image


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SANGHYUN WOO,JONGCHAN PARK,JOON-YOUNG LEE,IN SO KWEON: "CBAM: Convolutional Block Attention Module", 《ECCV 2018》 *
SHAOQING REN, KAIMING HE, ROSS GIRSHICK, AND JIAN SUN: "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", 《CVPR2016》 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112164028A (en) * 2020-09-02 2021-01-01 陈燕铭 Pituitary adenoma magnetic resonance image positioning diagnosis method and device based on artificial intelligence
CN112053342A (en) * 2020-09-02 2020-12-08 陈燕铭 Method and device for extracting and identifying pituitary magnetic resonance image based on artificial intelligence
CN112598634A (en) * 2020-12-18 2021-04-02 燕山大学 CT image organ positioning method based on 3D CNN and iterative search
CN112598634B (en) * 2020-12-18 2022-11-25 燕山大学 CT image organ positioning method based on 3D CNN and iterative search
WO2022198786A1 (en) * 2021-03-25 2022-09-29 平安科技(深圳)有限公司 Target object detection method and apparatus, and electronic device and storage medium
CN113436139A (en) * 2021-05-10 2021-09-24 上海大学 Small intestine nuclear magnetic resonance image identification and physiological information extraction system and method based on deep learning
CN113256574A (en) * 2021-05-13 2021-08-13 中国科学院长春光学精密机械与物理研究所 Three-dimensional target detection method
CN113674248A (en) * 2021-08-23 2021-11-19 广州市番禺区中心医院(广州市番禺区人民医院、广州市番禺区心血管疾病研究所) Magnetic resonance amide proton transfer imaging magnetic susceptibility detection method and related equipment
CN113538435A (en) * 2021-09-17 2021-10-22 北京航空航天大学 Pancreatic cancer pathological image classification method and system based on deep learning
CN114820584A (en) * 2022-05-27 2022-07-29 北京安德医智科技有限公司 Lung focus positioner
CN114820584B (en) * 2022-05-27 2023-02-21 北京安德医智科技有限公司 Lung focus positioner
CN116777935A (en) * 2023-08-16 2023-09-19 天津市肿瘤医院(天津医科大学肿瘤医院) Deep learning-based method and system for automatically segmenting prostate whole gland
CN116777935B (en) * 2023-08-16 2023-11-10 天津市肿瘤医院(天津医科大学肿瘤医院) Deep learning-based method and system for automatically segmenting prostate whole gland

Also Published As

Publication number Publication date
CN111583204B (en) 2022-10-14

Similar Documents

Publication Publication Date Title
CN111583204B (en) Organ positioning method of two-dimensional sequence magnetic resonance image based on network model
WO2021088747A1 (en) Deep-learning-based method for predicting morphological change of liver tumor after ablation
CN110338844B (en) Three-dimensional imaging data display processing method and three-dimensional ultrasonic imaging method and system
Wang et al. Shape–intensity prior level set combining probabilistic atlas and probability map constrains for automatic liver segmentation from abdominal CT images
CN110889852B (en) Liver segmentation method based on residual error-attention deep neural network
WO2021203795A1 (en) Pancreas ct automatic segmentation method based on saliency dense connection expansion convolutional network
CN110889853A (en) Tumor segmentation method based on residual error-attention deep neural network
CN108364294A (en) Abdominal CT images multiple organ dividing method based on super-pixel
CN112365464B (en) GAN-based medical image lesion area weak supervision positioning method
CN108053417A (en) A kind of lung segmenting device of the 3DU-Net networks based on mixing coarse segmentation feature
CN108062749B (en) Identification method and device for levator ani fissure hole and electronic equipment
CN107274399A (en) A kind of Lung neoplasm dividing method based on Hession matrixes and 3D shape index
CN110363802B (en) Prostate image registration system and method based on automatic segmentation and pelvis alignment
CN111027590B (en) Breast cancer data classification method combining deep network features and machine learning model
CN109509193B (en) Liver CT atlas segmentation method and system based on high-precision registration
RU2654199C1 (en) Segmentation of human tissues in computer image
CN111784653A (en) Multi-scale network MRI pancreas contour positioning method based on shape constraint
CN112598613A (en) Determination method based on depth image segmentation and recognition for intelligent lung cancer diagnosis
CN115546605A (en) Training method and device based on image labeling and segmentation model
CN110570430B (en) Orbital bone tissue segmentation method based on volume registration
CN112651955A (en) Intestinal tract image identification method and terminal device
CN111383759A (en) Automatic pneumonia diagnosis system
CN112258536B (en) Integrated positioning and segmentation method for calluses and cerebellum earthworm parts
CN111724356A (en) Image processing method and system for CT image pneumonia identification
CN116228709A (en) Interactive ultrasonic endoscope image recognition method for pancreas solid space-occupying focus

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant