CN109584244A - Hippocampus segmentation method based on sequence learning - Google Patents
Hippocampus segmentation method based on sequence learning
- Publication number
- CN109584244A CN201811449294.0A CN109584244B
- Authority
- CN
- China
- Prior art keywords
- hippocampus
- group
- number of channels
- segmentation
- image set
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20132—Image cropping
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to the fields of computer vision and deep learning, and in particular to a hippocampus segmentation method based on sequence learning. The steps of the invention are as follows: step 1, preprocess the original image set A; step 2, build the network model, which comprises an encoding part, a bidirectional convolutional long short-term memory network (BDC-LSTM) and a decoding part; step 3, train the model: forward propagation is performed on the anatomical-plane atlases D, E, F to obtain single-iteration results, the loss function is computed, and weight models J, K, L are obtained by back-propagation. Using a deep-learning network, the invention achieves efficient, automatic and accurate segmentation of the hippocampal structure in human-brain MR images; while guaranteeing high segmentation precision, the running speed is also fast. The method is highly extensible: besides hippocampus detection, the network of the invention can be retrained and applied to the detection and segmentation of other organs or tissues.
Description
Technical field
The present invention relates to the fields of computer vision and deep learning, and in particular to a hippocampus segmentation method based on sequence learning.
Background art
The hippocampus is an important component of the central nervous system. Abnormalities in hippocampal volume and function are closely related to many neuropsychiatric diseases, such as temporal lobe epilepsy (TLE), Alzheimer's disease (AD) and schizophrenia. Accurate segmentation of the hippocampus can therefore assist physicians in diagnosing and treating these conditions and has great medical value. Magnetic resonance (MR) images provide rich-contrast, high-resolution three-dimensional brain-tissue information and are important data for studying hippocampal volume and morphology. Studying the volume and shape of the hippocampus in MR images, and achieving accurate segmentation of the three-dimensional hippocampus, has thus become an increasingly important task in medical image research.
Traditional hippocampus segmentation approaches include manual segmentation, semi-automatic segmentation and conventional automatic segmentation. These methods are tedious and time-consuming, and their segmentation accuracy and efficiency are unsatisfactory.
In recent years, deep learning has developed rapidly in artificial intelligence, especially in image processing, with good results in image classification, detection and segmentation; sequence learning within deep learning is also widely applied.
Summary of the invention
The present invention provides a hippocampus segmentation method based on sequence learning, aiming to solve the problems of low segmentation accuracy and long segmentation time when segmenting the hippocampus in MR images.
A hippocampus segmentation method based on sequence learning comprises the following steps:
Step 1: preprocess the original image set A.
The original image set A contains N groups of MR hippocampus image files in NIfTI format.
In the present invention N is 120; the image files comprise 62 groups of images of size 192*192*160, 35 groups of size 256*256*166 and 23 groups of size 256*256*180.
1.1 Image cropping
The position and extent of the hippocampus in the 120 groups of images are analysed statistically, and the image files in the original image set A are cropped to the sizes given in Table A, yielding image set B.
Table A: cropping regions for the three image sizes
Here (x, y, z1) corresponds to the left hippocampus and (x, y, z2) to the right hippocampus.
Further, in the present invention the image files are cropped to a size of 80*80*40; a crop of this size covers the effective region containing the hippocampus, which yields higher segmentation precision and also speeds up training.
1.2 Data normalization
Data normalization is applied to image set B so that the voxel values lie in the range [0, 1], producing the normalized image set C.
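As an illustration, a minimal normalization sketch in Python/NumPy (the framework named later in the embodiments); min-max scaling is an assumed choice, since the description only requires that the voxel values of image set C fall in [0, 1]:

```python
import numpy as np

def normalize_volume(volume):
    """Scale a 3D MR volume so its voxel values lie in [0, 1].

    Min-max scaling is an assumption; the text only states that the
    voxel values of image set C end up in the range [0, 1].
    """
    volume = volume.astype(np.float32)
    v_min, v_max = volume.min(), volume.max()
    return (volume - v_min) / (v_max - v_min + 1e-8)
```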
1.3 Data serialization
Image set C is serialized along the coronal, sagittal and transverse directions, generating three groups of anatomical-plane atlases D, E, F under different views; each anatomical-plane atlas consists of slice sequences.
The slice sequences in anatomical-plane atlas D contain 80 slices each, those in atlas E contain 80 slices, and those in atlas F contain 40 slices.
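For illustration, a sketch of the serialization step for one cropped 80*80*40 volume; the mapping of array axes to the coronal, sagittal and transverse directions is an assumption and depends on the orientation of the NIfTI data:

```python
import numpy as np

def serialize_volume(volume):
    """Split one cropped volume (e.g. 80*80*40) into three slice sequences,
    one per anatomical direction. The axis order (coronal, sagittal,
    transverse) is assumed here."""
    seq_d = np.stack([volume[i, :, :] for i in range(volume.shape[0])])  # 80 slices
    seq_e = np.stack([volume[:, j, :] for j in range(volume.shape[1])])  # 80 slices
    seq_f = np.stack([volume[:, :, k] for k in range(volume.shape[2])])  # 40 slices
    return seq_d, seq_e, seq_f
```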
Step 2: build the hippocampus segmentation network model.
The hippocampus segmentation network model of the invention comprises an encoding part, a bidirectional convolutional long short-term memory network (BDC-LSTM) and a decoding part; the overall structure is shown in Fig. 2.
The anatomical-plane atlases D, E, F are first passed through the encoding part for feature extraction; the extracted features are then fed into the BDC-LSTM for training, which mines the spatial sequence relationships between consecutive slices in each atlas; finally, the decoding part upsamples the BDC-LSTM output, realising end-to-end segmentation. Only one group of anatomical-plane atlases is fed into the network at a time for training.
Encoding part: the encoding part extracts features from the slices in the anatomical-plane atlases D, E, F under the three views. It comprises four groups of convolutional layers and one max-pooling layer; its network structure is shown in Fig. 3. The first group is a 3*3 convolutional layer with 16 channels. To extract more features, the second group uses three different convolutions to capture information at multiple scales: a 1*1 convolution with 16 channels, a 3*3 convolution with 16 channels, and a 5*5 convolution with 16 channels. The third group is a 3*3 convolutional layer with 16 channels, whose role is to concatenate the feature maps produced by the three convolutions of the second group and then perform further feature extraction. The fourth group is a 3*3 convolutional layer with 16 channels. After the four convolution groups, a max-pooling layer is attached to reduce the size of the feature maps.
Further, to aid convergence of the network, Batch Normalization is added after the first, third and fourth convolution groups, with ReLU as the activation function.
After serialization, the anatomical-plane atlases D, E, F are passed through the encoding part to obtain the corresponding groups of feature maps G, H, I.
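A minimal Keras sketch of the encoding part applied to a single slice; the "same" padding, the pooling size of 2 and the absence of activations inside the second group are assumptions not fixed by the description:

```python
from tensorflow.keras import layers

def encoder(x, filters=16):
    """Encoding-part sketch: four convolution groups plus one max-pooling layer.
    Batch Normalization + ReLU follow the first, third and fourth groups."""
    # Group 1: 3x3 convolution with 16 channels
    x = layers.Conv2D(filters, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    # Group 2: three parallel convolutions at different scales (1x1, 3x3, 5x5)
    b1 = layers.Conv2D(filters, 1, padding="same")(x)
    b2 = layers.Conv2D(filters, 3, padding="same")(x)
    b3 = layers.Conv2D(filters, 5, padding="same")(x)
    merged = layers.Concatenate()([b1, b2, b3])
    # Group 3: 3x3 convolution on the concatenated multi-scale feature maps
    x = layers.Conv2D(filters, 3, padding="same")(merged)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    # Group 4: 3x3 convolution
    x = layers.Conv2D(filters, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    # Max pooling to reduce the feature-map size
    return layers.MaxPooling2D(pool_size=2)(x)
```

In the full model, the encoder would be applied to every slice of an atlas (for example by building it as a keras.Model and wrapping it in layers.TimeDistributed) so that the per-slice feature maps G, H, I can be passed to the BDC-LSTM as sequences.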
BDC-LSTM: the BDC-LSTM mines the spatial sequence relationships between consecutive slices from the three groups of encoded feature maps G, H, I.
The long short-term memory network (LSTM), proposed by Hochreiter et al. in 1997, successfully addresses the shortcomings of the original RNN by adding a "processor" that judges whether information is useful; the structure this processor acts on is called the cell state. When the input sequence consists of images, an extension of the LSTM, the convolutional LSTM (CLSTM), is widely used. By combining CLSTM with other convolutional networks, the correlation between input images can be exploited effectively, enabling more accurate segmentation.
The difference from the ordinary LSTM is that CLSTM replaces matrix multiplication with convolution, thereby preserving the spatial information of longer sequences; it is highly effective for image-sequence problems. The CLSTM is defined as follows:
i_t = σ(x_t * W_xi + h_{t-1} * W_hi + b_i)
f_t = σ(x_t * W_xf + h_{t-1} * W_hf + b_f)
o_t = σ(x_t * W_xo + h_{t-1} * W_ho + b_o)
c_t = f_t ∘ c_{t-1} + i_t ∘ tanh(x_t * W_xc + h_{t-1} * W_hc + b_c)
h_t = o_t ∘ tanh(c_t)
where * denotes the convolution operation, ∘ denotes element-wise multiplication, σ is the sigmoid function and tanh is the hyperbolic tangent. The network has three gates in total: the input gate i_t, the forget gate f_t and the output gate o_t. b_i, b_f, b_c, b_o are bias terms, and x_t, c_t, h_t are the input, cell state and hidden state at time t. W_** are the weight matrices controlling the corresponding transformations; for example, W_hf controls how the forget gate obtains values from the hidden state.
To increase the information available to the CLSTM and to fully exploit the close relationship between each slice and its neighbouring slices above and below, the present invention uses a bidirectional convolutional LSTM (BDC-LSTM): two CLSTM layers are used, one running forward along the slice order and one running backward against it (see Fig. 4). This better mines the spatial sequence relationships between slices.
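A sketch of such a bidirectional structure using Keras' Bidirectional wrapper around ConvLSTM2D; the feature-map size, the channel count and the way the forward and backward outputs are merged are illustrative assumptions:

```python
from tensorflow.keras import Input, layers

# Feature-map sequence from the encoding part: (slices, height, width, channels);
# the spatial size 40x40 and the 48 channels are illustrative assumptions.
feature_sequence = Input(shape=(None, 40, 40, 48))

# Two CLSTM passes over the slice sequence: one forward along the slice order
# and one backward against it, as in the BDC-LSTM of Fig. 4.
bdc_lstm_out = layers.Bidirectional(
    layers.ConvLSTM2D(16, kernel_size=3, padding="same", return_sequences=True),
    merge_mode="concat",  # assumed fusion of the two directions
)(feature_sequence)
```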
Decoding part: the decoding part upsamples the output of the BDC-LSTM to recover the resolution of the input image. Its network structure is shown in Fig. 5. The decoding part comprises a 3*3 deconvolution (transposed convolution) layer with 16 channels and a 3*3 convolutional layer with 16 channels, followed by a final 3*3 convolutional layer with a single channel.
Further, to aid convergence, Batch Normalization is added after the deconvolution layer and the 16-channel 3*3 convolutional layer, with ReLU as the activation function.
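A Keras sketch of the decoding part for one slice; the upsampling stride of 2 and the sigmoid on the single-channel output are assumptions:

```python
from tensorflow.keras import layers

def decoder(x, filters=16):
    """Decoding-part sketch: a 3x3 transposed convolution (upsampling),
    a 3x3 convolution, then a single-channel 3x3 convolution producing the
    segmentation map. Batch Normalization + ReLU follow the first two layers."""
    x = layers.Conv2DTranspose(filters, 3, strides=2, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.Conv2D(filters, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    return layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)
```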
The network of the invention has fewer layers than U-Net and 3D U-Net, which reduces the number of parameters and shortens training time, while its segmentation precision is higher than that of U-Net and 3D U-Net.
Applying the bidirectional convolutional LSTM (BDC-LSTM) to the hippocampus segmentation task better mines the spatial information in 3D MR images and improves segmentation precision; the precision is higher than that of CLSTM. Combining a fully convolutional network with the BDC-LSTM improves segmentation precision while requiring only a small number of feature-extraction steps.
Step 3: train the model.
Forward propagation is performed on the anatomical-plane atlases D, E, F to obtain single-iteration results; the loss function is computed and weight models J, K, L are obtained by back-propagation.
The anatomical-plane atlases under the three views are trained separately, yielding three groups of weight models J, K, L for hippocampus segmentation; the weight models J, K, L are averaged to obtain the final trained model M.
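As a sketch, the three weight models J, K, L could be averaged parameter-wise as below; this assumes the three networks share an identical architecture, and whether the averaging is done on the weights or on the resulting predictions is not fixed by the text:

```python
import numpy as np

def average_weight_models(models):
    """Average the trained weight sets J, K, L parameter-wise to form the
    final model M. Assumes all models share the same architecture."""
    weight_sets = [m.get_weights() for m in models]
    mean_weights = [np.mean(ws, axis=0) for ws in zip(*weight_sets)]
    final_model = models[0]          # reuse one network as the container for M
    final_model.set_weights(mean_weights)
    return final_model
```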
Compared with the prior art, the beneficial effects of the present invention are:
(1) Using a deep-learning network, efficient, automatic and accurate segmentation of the hippocampal structure in human-brain MR images is achieved, which can help physicians make early diagnoses of Alzheimer's disease.
(2) Efficient, automatic and accurate segmentation: the input human-brain MR image can be segmented directly; while guaranteeing high segmentation precision, the running speed is also fast.
(3) Strong extensibility: besides hippocampus detection, the network of the invention can easily be retrained and applied to the detection and segmentation of other organs or tissues, such as retinal fundus segmentation or pulmonary nodule detection.
Description of the drawings
Fig. 1 is a schematic block diagram of the workflow of the invention.
Fig. 2 is the overall structure of the deep-learning model provided by the invention.
Fig. 3 is the network structure of the encoding part provided by the invention.
Fig. 4 is the network structure of the BDC-LSTM provided by the invention.
Fig. 5 is the network structure of the decoding part provided by the invention.
Specific embodiments
The invention is further elucidated below with reference to specific embodiments.
To verify the validity of the method, experiments were carried out on the ADNI database. The experimental data of the invention consist of 120 groups of MR images, including both patients and healthy controls. To evaluate model performance, the data were divided into 10 parts and 10-fold cross-validation was used: 9 parts for training and 1 part for testing, until all data had been tested. For model optimization, the Nadam algorithm was used with a learning rate of 0.001, and the weights were initialized with the glorot uniform initializer.
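The optimizer settings above translate directly into Keras; the loss function is not named in the text, so binary cross-entropy is an assumed choice, and `model` stands for the segmentation network built in step 2:

```python
from tensorflow.keras.optimizers import Nadam

model.compile(
    optimizer=Nadam(learning_rate=1e-3),   # learning rate 0.001 as stated
    loss="binary_crossentropy",            # assumed loss; not specified in the text
    metrics=["accuracy"],
)
# glorot uniform initialization is the Keras default for Conv2D/ConvLSTM2D kernels.
```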
The hardware configuration was as follows: processor, Intel Core i7-9700K CPU @ 4.2 GHz; memory (RAM), 32.0 GB; discrete graphics card, NVIDIA GeForce GTX 1070; operating system, Ubuntu 16.04; development tools, Python and the Keras framework.
The trained model M is assessed by model validation. Result accuracy is evaluated with the Dice metric, which is widely used in medical image research; the accuracy of the proposed segmentation algorithm is assessed with the Dice index.
The Dice metric is defined as
Dice = 2 V(M ∩ A) / (V(M) + V(A))
where M denotes the result of expert manual segmentation (the gold standard), A denotes the automatic segmentation result of the algorithm, and V(·) denotes the volume of the corresponding region.
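A direct implementation of the Dice metric defined above, with binary masks for the gold standard M and the automatic result A:

```python
import numpy as np

def dice_metric(manual_mask, auto_mask):
    """Dice = 2 * V(M ∩ A) / (V(M) + V(A)) for two binary segmentation masks."""
    m = manual_mask.astype(bool)
    a = auto_mask.astype(bool)
    intersection = np.logical_and(m, a).sum()
    return 2.0 * intersection / (m.sum() + a.sum())
```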
The specific implementation comprises four parts. The first part compares CLSTM with BDC-LSTM; the second compares single-view with multi-view segmentation; the third compares BDC-LSTM with U-Net and 3D U-Net; the last compares the proposed method with other methods.
1. Comparison of CLSTM and BDC-LSTM
After applying the same preprocessing to the 120 MR images, the segmentation results of CLSTM and BDC-LSTM were compared; the results are shown in Table 1.
Table 1: Comparison of CLSTM and BDC-LSTM
The segmentation precision of BDC-LSTM is clearly higher than that of CLSTM, demonstrating that BDC-LSTM learns the information between slices better than CLSTM.
2. Comparison of single view and multiple views
In common segmentation methods based on two-dimensional convolutional networks, the 3D MR volume is first cut into 2D slices under a single view, which are then fed into the network for training. Considering that slice structures differ between views, the segmentation result under a single view may not be accurate enough. The present invention therefore compares the results under a single view and under multiple views. For the same data set, the proposed hippocampus segmentation model is first used to obtain the results under the three views (sagittal, coronal and transverse); the three segmentation results are then integrated by averaging to obtain the multi-view segmentation result. The multi-view and single-view segmentation results are shown in Table 2.
Table 2: Comparison of single view and view integration
The Dice values in Table 2 show that the multi-view integrated result is better than the single-view results: structure boundaries that are very blurred under one view can be perfectly clear, and thus correctly segmented, under another view. Multi-view integration therefore fully accounts for the smoothness and spatial coherence of the MR image, and the information from the different views complements one another, giving a better segmentation result.
3. Comparison of BDC-LSTM with U-Net and 3D U-Net
U-Net and 3D U-Net are the mainstream approaches in current medical image segmentation: U-Net processes 2D slices, while 3D U-Net segments the 3D image directly. The segmentation model of this work uses the BDC-LSTM network to fully mine the spatial information between slices. After applying the same preprocessing to the 120 data sets, segmentation was performed with U-Net, 3D U-Net and the proposed method; the results are shown in Table 3. Table 3 shows that the precision of BDC-LSTM is higher than that of the other two methods.
Table 3: Comparison of BDC-LSTM, U-Net and 3D U-Net
4. Comparison of BDC-LSTM with other existing methods
The segmentation model of the invention is compared with several recent methods for hippocampus segmentation. Because the experiments were carried out on different cases from the ADNI data set, a fully quantitative comparison with these methods is not possible, but the average Dice values in Table 4 show that the method of the invention outperforms the other methods.
Table 4: Comparison of hippocampus segmentation algorithms on the ADNI database
Applying sequence learning to the hippocampus segmentation task, the three-dimensional hippocampus image is segmented directly; the proposed BDC-LSTM network fully mines the spatial information of the 3D MR image, yielding higher segmentation precision. The experimental results on the ADNI database show that the hippocampus segmentation method based on a sequence-learning network achieves better results than other current methods. In 3D medical image research, the model can perform the segmentation task more easily and more accurately.
Claims (5)
1. A hippocampus segmentation method based on sequence learning, characterized in that the steps are as follows:
Step 1: preprocess the original image set A;
the original image set A contains N groups of MR hippocampus image files in NIfTI format;
1.1 Image cropping
the position and extent of the hippocampus in the N groups of images are analysed statistically, and the image files in the original image set A are cropped to the sizes given in Table A, yielding image set B;
Table A: cropping regions for the three image sizes
1.2 Data normalization
data normalization is applied to image set B so that the voxel values lie in the range [0, 1], producing the normalized image set C;
1.3 Data serialization
image set C is serialized along the coronal, sagittal and transverse directions, generating three groups of anatomical-plane atlases D, E, F under different views, each anatomical-plane atlas consisting of slice sequences;
Step 2: build the hippocampus segmentation network model;
the hippocampus segmentation network model comprises an encoding part, a BDC-LSTM and a decoding part;
the anatomical-plane atlases D, E, F are first passed through the encoding part for feature extraction, the extracted features are then fed into the BDC-LSTM for training, which mines the spatial sequence relationships between consecutive slices in each atlas, and finally the decoding part upsamples the BDC-LSTM output, realising end-to-end segmentation; one group of anatomical-plane atlases is fed into the network at a time for training;
the encoding part extracts features from the slices in the anatomical-plane atlases D, E, F under the three views; the encoding part comprises four groups of convolutional layers and one max-pooling layer; the first group is a 3*3 convolutional layer with 16 channels; the second group uses three different convolutions to extract information at multiple scales: a 1*1 convolution with 16 channels, a 3*3 convolution with 16 channels and a 5*5 convolution with 16 channels; the third group is a 3*3 convolutional layer with 16 channels; the fourth group is a 3*3 convolutional layer with 16 channels; after the four convolution groups, a max-pooling layer is attached;
the BDC-LSTM is a two-layer CLSTM structure, one CLSTM layer running forward along the slice order and one CLSTM layer running backward against it;
the decoding part upsamples the output of the BDC-LSTM to recover the resolution of the input image; the decoding part comprises a 3*3 deconvolution layer with 16 channels and a 3*3 convolutional layer with 16 channels, followed by a final 3*3 convolutional layer with a single channel;
Step 3: train the model;
forward propagation is performed on the anatomical-plane atlases D, E, F to obtain single-iteration results, the loss function is computed and weight models J, K, L are obtained by back-propagation;
the anatomical-plane atlases under the three views are trained separately to obtain three groups of weight models J, K, L for hippocampus segmentation, and the weight models J, K, L are averaged to obtain the final trained model M.
2. The hippocampus segmentation method based on sequence learning according to claim 1, characterized in that, in the encoding part of step 2, Batch Normalization is added after the first, third and fourth convolution groups, and the activation function is ReLU.
3. The hippocampus segmentation method based on sequence learning according to claim 1 or 2, characterized in that, in step 2, Batch Normalization is added after the deconvolution layer and the 16-channel 3*3 convolutional layer of the decoding part, and the activation function is ReLU.
4. The hippocampus segmentation method based on sequence learning according to claim 1 or 2, characterized in that, in step 1.1, the image files in the original image set A are cropped to a size of 80*80*40.
5. The hippocampus segmentation method based on sequence learning according to claim 3, characterized in that, in step 1.1, the image files in the original image set A are cropped to a size of 80*80*40.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811449294.0A CN109584244B (en) | 2018-11-30 | 2018-11-30 | Hippocampus segmentation method based on sequence learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811449294.0A CN109584244B (en) | 2018-11-30 | 2018-11-30 | Hippocampus segmentation method based on sequence learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109584244A true CN109584244A (en) | 2019-04-05 |
CN109584244B CN109584244B (en) | 2023-05-23 |
Family
ID=65923803
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811449294.0A Active CN109584244B (en) | 2018-11-30 | 2018-11-30 | Hippocampus segmentation method based on sequence learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109584244B (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110211140A (en) * | 2019-06-14 | 2019-09-06 | 重庆大学 | Abdominal vascular dividing method based on 3D residual error U-Net and Weighted Loss Function |
CN110287773A (en) * | 2019-05-14 | 2019-09-27 | 杭州电子科技大学 | Transport hub safety check image-recognizing method based on autonomous learning |
CN110414481A (en) * | 2019-08-09 | 2019-11-05 | 华东师范大学 | A kind of identification of 3D medical image and dividing method based on Unet and LSTM |
CN110555847A (en) * | 2019-07-31 | 2019-12-10 | 瀚博半导体(上海)有限公司 | Image processing method and device based on convolutional neural network |
CN110969626A (en) * | 2019-11-27 | 2020-04-07 | 西南交通大学 | Method for extracting hippocampus of human brain nuclear magnetic resonance image based on 3D neural network |
CN111110228A (en) * | 2020-01-17 | 2020-05-08 | 武汉中旗生物医疗电子有限公司 | Electrocardiosignal R wave detection method and device |
CN112508953A (en) * | 2021-02-05 | 2021-03-16 | 四川大学 | Meningioma rapid segmentation qualitative method based on deep neural network |
CN113192150A (en) * | 2020-01-29 | 2021-07-30 | 上海交通大学 | Magnetic resonance interventional image reconstruction method based on cyclic neural network |
CN114549417A (en) * | 2022-01-20 | 2022-05-27 | 高欣 | Abdominal fat quantification method based on deep learning and nuclear magnetic resonance Dixon |
CN116681705A (en) * | 2023-08-04 | 2023-09-01 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Surface morphology measurement method and processing equipment based on longitudinal structure of human brain hippocampus |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106920243A (en) * | 2017-03-09 | 2017-07-04 | 桂林电子科技大学 | The ceramic material part method for sequence image segmentation of improved full convolutional neural networks |
CN107292346A (en) * | 2017-07-05 | 2017-10-24 | 四川大学 | A kind of MR image hippocampus partitioning algorithms learnt based on Local Subspace |
US20170358075A1 (en) * | 2016-06-09 | 2017-12-14 | International Business Machines Corporation | Sequential learning technique for medical image segmentation |
CN108062753A (en) * | 2017-12-29 | 2018-05-22 | 重庆理工大学 | The adaptive brain tumor semantic segmentation method in unsupervised domain based on depth confrontation study |
CN108154194A (en) * | 2018-01-18 | 2018-06-12 | 北京工业大学 | A kind of method with the convolutional network extraction high dimensional feature based on tensor |
US20180231871A1 (en) * | 2016-06-27 | 2018-08-16 | Zhejiang Gongshang University | Depth estimation method for monocular image based on multi-scale CNN and continuous CRF |
CN108427920A (en) * | 2018-02-26 | 2018-08-21 | 杭州电子科技大学 | A kind of land and sea border defense object detection method based on deep learning |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170358075A1 (en) * | 2016-06-09 | 2017-12-14 | International Business Machines Corporation | Sequential learning technique for medical image segmentation |
US20180231871A1 (en) * | 2016-06-27 | 2018-08-16 | Zhejiang Gongshang University | Depth estimation method for monocular image based on multi-scale CNN and continuous CRF |
CN106920243A (en) * | 2017-03-09 | 2017-07-04 | 桂林电子科技大学 | The ceramic material part method for sequence image segmentation of improved full convolutional neural networks |
CN107292346A (en) * | 2017-07-05 | 2017-10-24 | 四川大学 | A kind of MR image hippocampus partitioning algorithms learnt based on Local Subspace |
CN108062753A (en) * | 2017-12-29 | 2018-05-22 | 重庆理工大学 | The adaptive brain tumor semantic segmentation method in unsupervised domain based on depth confrontation study |
CN108154194A (en) * | 2018-01-18 | 2018-06-12 | 北京工业大学 | A kind of method with the convolutional network extraction high dimensional feature based on tensor |
CN108427920A (en) * | 2018-02-26 | 2018-08-21 | 杭州电子科技大学 | A kind of land and sea border defense object detection method based on deep learning |
Non-Patent Citations (4)
Title |
---|
CHEN LIU ET AL.: "Modeling User Session and Intent with an Attention-based", 《SESSION-BASED RECOMMENDER SYSTEMS》 *
JIANXU CHEN ET AL.: "Combining Fully Convolutional and Recurrent", 《30TH CONFERENCE ON NEURAL INFORMATION PROCESSING SYSTEMS (NIPS 2016)》 * |
YANI CHEN ET AL.: "HIPPOCAMPUS SEGMENTATION THROUGH MULTI-VIEW ENSEMBLE CONVNETS", 《IEEE》 * |
杨春兰 et al.: "Research progress on automatic hippocampus segmentation methods for MRI brain images", 《Journal of Beijing University of Technology (北京工业大学学报)》 *
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110287773A (en) * | 2019-05-14 | 2019-09-27 | 杭州电子科技大学 | Transport hub safety check image-recognizing method based on autonomous learning |
CN110211140A (en) * | 2019-06-14 | 2019-09-06 | 重庆大学 | Abdominal vascular dividing method based on 3D residual error U-Net and Weighted Loss Function |
CN110555847A (en) * | 2019-07-31 | 2019-12-10 | 瀚博半导体(上海)有限公司 | Image processing method and device based on convolutional neural network |
CN110555847B (en) * | 2019-07-31 | 2021-04-02 | 瀚博半导体(上海)有限公司 | Image processing method and device based on convolutional neural network |
CN110414481A (en) * | 2019-08-09 | 2019-11-05 | 华东师范大学 | A kind of identification of 3D medical image and dividing method based on Unet and LSTM |
CN110969626A (en) * | 2019-11-27 | 2020-04-07 | 西南交通大学 | Method for extracting hippocampus of human brain nuclear magnetic resonance image based on 3D neural network |
CN110969626B (en) * | 2019-11-27 | 2022-06-07 | 西南交通大学 | Method for extracting hippocampus of human brain nuclear magnetic resonance image based on 3D neural network |
CN111110228A (en) * | 2020-01-17 | 2020-05-08 | 武汉中旗生物医疗电子有限公司 | Electrocardiosignal R wave detection method and device |
CN113192150B (en) * | 2020-01-29 | 2022-03-15 | 上海交通大学 | Magnetic resonance interventional image reconstruction method based on cyclic neural network |
CN113192150A (en) * | 2020-01-29 | 2021-07-30 | 上海交通大学 | Magnetic resonance interventional image reconstruction method based on cyclic neural network |
CN112508953A (en) * | 2021-02-05 | 2021-03-16 | 四川大学 | Meningioma rapid segmentation qualitative method based on deep neural network |
CN114549417A (en) * | 2022-01-20 | 2022-05-27 | 高欣 | Abdominal fat quantification method based on deep learning and nuclear magnetic resonance Dixon |
CN116681705A (en) * | 2023-08-04 | 2023-09-01 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Surface morphology measurement method and processing equipment based on longitudinal structure of human brain hippocampus |
CN116681705B (en) * | 2023-08-04 | 2023-09-29 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Surface morphology measurement method and processing equipment based on longitudinal structure of human brain hippocampus |
Also Published As
Publication number | Publication date |
---|---|
CN109584244B (en) | 2023-05-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109584244A (en) | 2019-04-05 | Hippocampus segmentation method based on sequence learning | |
CN110097550B (en) | Medical image segmentation method and system based on deep learning | |
Ye et al. | Multi-depth fusion network for whole-heart CT image segmentation | |
CN112465827B (en) | Contour perception multi-organ segmentation network construction method based on class-by-class convolution operation | |
CN108629816B (en) | Method for reconstructing thin-layer magnetic resonance image based on deep learning | |
CN110689543A (en) | Improved convolutional neural network brain tumor image segmentation method based on attention mechanism | |
CN110084318A (en) | A kind of image-recognizing method of combination convolutional neural networks and gradient boosted tree | |
CN104573309B (en) | Device and method for computer-aided diagnosis | |
CN109559358B (en) | Image sample up-sampling method based on convolution self-coding | |
CN110047138A (en) | A kind of magnetic resonance thin layer image rebuilding method | |
CN110310287A (en) | It is neural network based to jeopardize the automatic delineation method of organ, equipment and storage medium | |
CN110084823A (en) | Three-dimensional brain tumor image partition method based on cascade anisotropy FCNN | |
CN109410219A (en) | A kind of image partition method, device and computer readable storage medium based on pyramid fusion study | |
CN110033440A (en) | Biological cell method of counting based on convolutional neural networks and Fusion Features | |
CN116012344B (en) | Cardiac magnetic resonance image registration method based on mask self-encoder CNN-transducer | |
CN114266939B (en) | Brain extraction method based on ResTLU-Net model | |
CN109360152A (en) | 3 d medical images super resolution ratio reconstruction method based on dense convolutional neural networks | |
CN111179269B (en) | PET image segmentation method based on multi-view and three-dimensional convolution fusion strategy | |
CN115496771A (en) | Brain tumor segmentation method based on brain three-dimensional MRI image design | |
CN110120048A (en) | In conjunction with the three-dimensional brain tumor image partition method for improving U-Net and CMF | |
CN110782427A (en) | Magnetic resonance brain tumor automatic segmentation method based on separable cavity convolution | |
CN110942464A (en) | PET image segmentation method fusing 2-dimensional and 3-dimensional models | |
CN109215035A (en) | A kind of brain MRI hippocampus three-dimensional dividing method based on deep learning | |
CN111862261B (en) | FLAIR modal magnetic resonance image generation method and system | |
CN114972248A (en) | Attention mechanism-based improved U-net liver tumor segmentation method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20230418 Address after: 214000 Room 709-G02, Building 13, Hongxing Daduhui, Wuxi Economic Development Zone, Wuxi City, Jiangsu Province Applicant after: Wuxi Bencio Intelligent Technology Co.,Ltd. Address before: 241000 A11-2, Phase I, Science and Technology Industrial Park, Yijiang District, Wuhu City, Anhui Province Applicant before: ANHUI HAILING INTELLIGENT TECHNOLOGY Co.,Ltd. |
GR01 | Patent grant | ||
GR01 | Patent grant |