CN110415253A - Point-interactive medical image segmentation method based on a deep neural network - Google Patents

Point-interactive medical image segmentation method based on a deep neural network

Info

Publication number
CN110415253A
CN110415253A (application CN201910374698.6A)
Authority
CN
China
Prior art keywords
image
segmentation
image block
kidney tumor
sequence
Prior art date
Legal status
Pending
Application number
CN201910374698.6A
Other languages
Chinese (zh)
Inventor
孙晋权 (Jinquan Sun)
史颖欢 (Yinghuan Shi)
高阳 (Yang Gao)
Current Assignee
JIANGSU WANWEI AISI NETWORK INTELLIGENT INDUSTRY INNOVATION CENTER Co Ltd
Nanjing University
Original Assignee
JIANGSU WANWEI AISI NETWORK INTELLIGENT INDUSTRY INNOVATION CENTER Co Ltd
Nanjing University
Priority date
Filing date
Publication date
Application filed by JIANGSU WANWEI AISI NETWORK INTELLIGENT INDUSTRY INNOVATION CENTER Co Ltd and Nanjing University
Priority to CN201910374698.6A
Publication of CN110415253A
Legal status: Pending

Classifications

    • G06T — Image data processing or generation, in general (G — Physics; G06 — Computing)
    • G06T 7/0012 — Image analysis; Biomedical image inspection
    • G06T 7/11 — Segmentation; Region-based segmentation
    • G06T 2207/10081 — Image acquisition modality: Computed X-ray tomography [CT]
    • G06T 2207/20021 — Dividing image into blocks, subimages or windows
    • G06T 2207/20081 — Training; Learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/30084 — Kidney; Renal
    • G06T 2207/30096 — Tumor; Lesion

Abstract

The present invention addresses the specific problem of kidney tumor segmentation in medical images and proposes a point-interactive deep learning segmentation algorithm. The algorithm consists of a point-interaction preprocessing module, a bidirectional ConvRNN unit, and a core deep segmentation network. Starting from the tumor center location provided by an expert, the algorithm densely samples 16 image patches of size 32 × 32 along each of 16 evenly spaced directions, moving from the inside outward with a step of 4 pixels, and forms patch sequences; a deep segmentation network with sequence-learning ability then learns the variation trend inside and outside the target, determines the target boundary, and thereby segments the kidney tumor. The method overcomes the low contrast of medical images, the variable position of the target, and the blurred target boundary, and is applicable to both organ segmentation and lesion segmentation tasks. Compared with the prior art, the method has the following features: 1) the interaction is simple and convenient; 2) the concept of Sequential Patch Learning is proposed, capturing long-range semantic relations with patch sequences so that a large receptive field is obtained even with a shallow network; 3) a new ConvRNN unit is proposed to learn the variation trend inside and outside the target, which is more interpretable, matches the actual working habits of physicians, and yields a final model with high accuracy and broad applicability.

Description

Point-interactive medical image segmentation method based on a deep neural network
Technical field
The present invention relates to a point-interactive kidney tumor CT segmentation method based on a deep neural network, and belongs to the field of computer applications.
Background art
The kidney is a vital organ of the human body; once renal function is impaired, various metabolic end products accumulate in the body and threaten life. Among kidney diseases, kidney tumors are the primary threat to renal health. CT examination is currently one of the main examination modes for kidney diseases such as kidney tumors. Based on tumor size, physicians grade the severity of the tumor and formulate corresponding treatment; at the same time, locating the kidney tumor, analyzing its shape and size, and accurately describing and delineating the tumor target volume are important steps in radiotherapy.
Manual delineation of kidney tumors is the principal mode in current kidney tumor treatment. With the continuous development of computer technology and the improvement of hospital information systems, more and more physicians complete tumor delineation in electronic radiotherapy systems, and computer-assisted delineation of kidney tumors on CT has gradually gained favor. The CT medical image segmentation methods widely used at present fall into two classes: traditional methods based on energy functionals and methods based on machine learning. The level set (Level Set) method is the best known energy functional method. First proposed by Osher and Sethian, it became an effective computational tool for handling topological changes of closed moving interfaces evolving over time. After its proposal, the level set method was widely studied and applied in the medical imaging field; Malladi applied it to medical image segmentation and reconstruction. However, such segmentation methods are usually not sufficiently stable, depend heavily on image quality, and are sensitive to gray-level variation. Methods based on machine learning focus more on the statistics of image patches, with a typical pipeline of feature extraction, feature selection, and classifier training; for example, Bansari and Charmi clustered image pixels to segment kidney tumor regions. The difficulty of such methods is that the sampled patches are small, the relations between patches are not considered, and semantic information under a larger field of view is ignored, leading to poor segmentation results.
Summary of the invention
The present invention proposes an algorithm specifically for the kidney tumor segmentation task on medical CT images. Based on a simple point interaction, the algorithm can assist physicians in quickly segmenting kidney tumor targets. In general, CT kidney tumor segmentation has the following difficulties:
1. Low CT contrast: medical image quality is affected by contrast agents, equipment, and other factors, and the contrast of the imaging result is generally lower than that of natural images, which challenges common deep learning segmentation methods;
2. Variable kidney tumor position: the lesion location varies from patient to patient and is not fixed;
3. Blurred tumor boundary: kidney tumors grow within the kidney and closely resemble the organ; combined with the low imaging contrast, this further blurs the tumor boundary.
In actual clinical delineation of kidney tumors, a physician usually determines the approximate position of the tumor from years of experience and, on that basis, examines the gray-level variation trend inside and outside the tumor to determine its boundary. Inspired by these practical problems and by the physician's way of working, the invention proposes a point-interactive segmentation algorithm that provides a simple and fast interactive segmentation method specifically for CT kidney tumor images; the method is fast and stable, can assist physicians in quickly completing the tumor delineation task, and has high practical value. The core of the invention comprises the following parts:
1. Point interaction and preprocessing;
2. ConvRNN unit;
3. Core deep segmentation network.
1. Point interaction and preprocessing module:
Point interaction is an important component of the invention. The procedure is as follows:
a) the system displays a 2D CT kidney image;
b) the physician judges, based on professional knowledge, whether the current image contains a kidney tumor;
c) if the current image contains a kidney tumor, the physician clicks the mouse at the approximate center of the tumor to place a marker.
As shown on the left of Fig. 2, in the image preprocessing module the algorithm samples, starting from the marked tumor center and moving from the inside outward along multiple directions, 16 image patches of size 32 × 32 in each direction. In each direction the patches are sampled densely with a step of 4 pixels, so adjacent patches overlap (for clarity, Fig. 2 only shows a sparse sampling with non-overlapping patches). For the training data set, the annotation image corresponding to each CT image is preprocessed with the same steps. As shown on the right of Fig. 2, after sampling, the 16 patches in each direction form one patch-sequence sample, ordered from inside to outside. In addition, to guarantee that the sampled patches completely cover the kidney tumor region, the invention samples patches along 16 directions with the tumor center as the starting point; for display purposes, Fig. 2 depicts only 8 directions. A minimal sketch of this sampling step is given below.
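As an illustration only, the following is a minimal NumPy sketch of the dense patch sampling described above; the function name sample_patch_sequences, the zero-padding of the image border, and the direction ordering are assumptions made for the sketch rather than details fixed by the text.

```python
import numpy as np

def sample_patch_sequences(image, center, n_directions=16, n_patches=16,
                           step=4, patch_size=32):
    """Sample n_patches overlapping patches of size patch_size x patch_size
    along n_directions evenly spaced directions, starting at center
    (row, col) and moving outward by step pixels per patch."""
    half = patch_size // 2
    # Zero-pad so that patches near the image border can still be cropped.
    pad = half + n_patches * step
    padded = np.pad(image, pad, mode="constant")
    cy, cx = center[0] + pad, center[1] + pad

    sequences = []
    for d in range(n_directions):
        angle = 2.0 * np.pi * d / n_directions
        dy, dx = np.sin(angle), np.cos(angle)
        seq = []
        for t in range(n_patches):
            py = int(round(cy + t * step * dy))
            px = int(round(cx + t * step * dx))
            seq.append(padded[py - half:py + half, px - half:px + half])
        sequences.append(np.stack(seq))            # (n_patches, 32, 32)
    return np.stack(sequences)                     # (n_directions, n_patches, 32, 32)
```

For a training image, the same function would be applied to the annotation map so that each patch sequence has an aligned sequence of label patches.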
2. ConvRNN unit:
Observing the gray-level variation trend inside and outside the tumor is an important step for a physician when delineating a kidney tumor. The invention converts this observation process into a sequence learning (Sequential Learning) problem in computer science: the patches sampled from the inside outward in each direction form a patch sequence, and because adjacent patches in the sequence overlap substantially, the sequence smoothly characterizes the gray-level variation trend of the image from the tumor center to the outside of the tumor.
A ConvRNN unit is embedded into the fully convolutional network so that a conventional deep segmentation network gains the ability to learn over sequences. The ConvRNN unit in the invention is simple and efficient and is designed as follows:
r_t = σ(W_xr * X_t + W_hr * H_{t-1} + b_r)
where σ denotes the sigmoid activation function, * denotes the convolution operation, and ⊙ denotes element-wise multiplication; the patch sequence in one direction is denoted X_1, X_2, ..., X_16, and the hidden states during the computation are denoted H_1, H_2, ..., H_16; X_t and H_t are the t-th patch and the t-th hidden state in the sequence, respectively; W denotes the convolution kernel parameters; b_r and b_h denote bias terms; the spatial size of the convolution kernels in the ConvRNN is 3 × 3.
The hidden state H_t encodes the spatial relation between adjacent patches during the computation. r_t is the reset gate, a three-dimensional tensor in R^(P×M×N), where P is the number of channels, M the number of rows of the feature map, and N the number of columns. r_t measures the correlation between the current input patch and the hidden state of the previous step and controls how much information flows from the hidden state H_{t-1} into the current hidden state H_t. When r_t is close to 0, the information in H_{t-1} is forgotten and the new hidden state is determined mainly by the current input X_t; this typically occurs when two adjacent patches differ greatly in gray level, or when the patch passes from the foreground region into the background region. A sketch of one ConvRNN step is given below.
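For illustration, a minimal PyTorch sketch of one ConvRNN step follows. The text only spells out the reset-gate formula, so the hidden-state update below (a GRU-style candidate gated by r_t, with an assumed W_hh kernel) is an assumption; class and parameter names are illustrative.

```python
import torch
import torch.nn as nn

class ConvRNNCell(nn.Module):
    """One step of the ConvRNN unit. Only the reset-gate formula
    r_t = sigmoid(W_xr * X_t + W_hr * H_{t-1} + b_r) is given in the text;
    the hidden-state update below is a GRU-style assumption."""
    def __init__(self, in_ch, hid_ch, kernel=3):
        super().__init__()
        pad = kernel // 2
        self.conv_xr = nn.Conv2d(in_ch, hid_ch, kernel, padding=pad)               # W_xr, b_r
        self.conv_hr = nn.Conv2d(hid_ch, hid_ch, kernel, padding=pad, bias=False)  # W_hr
        self.conv_xh = nn.Conv2d(in_ch, hid_ch, kernel, padding=pad)               # W_xh, b_h
        self.conv_hh = nn.Conv2d(hid_ch, hid_ch, kernel, padding=pad, bias=False)  # assumed W_hh

    def forward(self, x_t, h_prev):
        r_t = torch.sigmoid(self.conv_xr(x_t) + self.conv_hr(h_prev))   # reset gate
        # Assumed candidate state: the previous hidden state is gated by r_t.
        h_t = torch.tanh(self.conv_xh(x_t) + r_t * self.conv_hh(h_prev))
        return h_t
```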
For a patch sequence that starts inside the tumor and ends outside it, learning only the inside-to-outside variation trend cannot reproduce the inside-versus-outside comparison a physician performs; modeling the patch sequence in both directions and comparing them is therefore also an important step. The invention further extends the unidirectional ConvRNN unit into a bidirectional ConvRNN unit. As shown in Fig. 3, for the same patch sequence the ConvRNN unit performs sequence learning in both the forward and the backward direction: the forward pass captures the variation trend from inside the tumor to the outside, the backward pass captures the trend from outside the tumor to the inside, and a memory fusion module (Memory Fusion) fuses the feature maps obtained in the two directions. The memory fusion module consists of a 1 × 1 convolution; after this module, the feature maps learned in the forward and backward passes are further compared, yielding a finer characterization of the spatial relation between adjacent patches and helping the algorithm perceive subtle changes at the tumor boundary. A sketch of the bidirectional unit follows.
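Below is a minimal sketch of the bidirectional wrapper with the 1 × 1 memory-fusion convolution, reusing the ConvRNNCell sketch above; the zero initialization of the hidden states and the per-step fusion are assumptions made for the sketch.

```python
class BiConvRNN(nn.Module):
    """Run a ConvRNNCell forward (inside -> outside) and backward
    (outside -> inside) over a patch sequence and fuse the two directions
    with a 1x1 convolution (memory fusion)."""
    def __init__(self, in_ch, hid_ch):
        super().__init__()
        self.hid_ch = hid_ch
        self.fwd = ConvRNNCell(in_ch, hid_ch)
        self.bwd = ConvRNNCell(in_ch, hid_ch)
        self.fuse = nn.Conv2d(2 * hid_ch, hid_ch, kernel_size=1)   # memory fusion

    def forward(self, seq):                         # seq: (T, B, C, H, W)
        T, B, _, H, W = seq.shape
        h_f = seq.new_zeros(B, self.hid_ch, H, W)   # assumed zero initial states
        h_b = seq.new_zeros(B, self.hid_ch, H, W)
        outs_f, outs_b = [], []
        for t in range(T):                          # forward pass
            h_f = self.fwd(seq[t], h_f)
            outs_f.append(h_f)
        for t in reversed(range(T)):                # backward pass
            h_b = self.bwd(seq[t], h_b)
            outs_b.append(h_b)
        outs_b = outs_b[::-1]                       # re-align with the forward order
        fused = [self.fuse(torch.cat([f, b], dim=1))
                 for f, b in zip(outs_f, outs_b)]
        return torch.stack(fused)                   # (T, B, hid_ch, H, W)
```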
3. Core deep segmentation network:
U-net is a deep network structure widely used in the field of medical image segmentation. The invention takes U-net as the basic network and modifies it so that it better suits the medical image segmentation task under point interaction, as shown in Fig. 4 (a sketch of this structure follows the list below):
1. the network includes two stages, encoding and decoding;
2. the encoding stage contains three ConvBlocks and two Pooling operations; each ConvBlock consists of two convolution operations with kernel size 3 × 3 and stride 1 × 1; the numbers of convolution kernels in the three ConvBlocks are 32, 64 and 128, respectively; the Pooling window size is 2 × 2 with stride 1 × 1;
3. the decoding stage contains two DeconvBlocks, ConvRNN units embedded at different layers, and a final classification layer; each DeconvBlock consists of one deconvolution operation followed by two convolution operations; the deconvolution is realized by bilinear interpolation; the kernel size of the two convolution operations is 3 × 3 with stride 1 × 1; the last layer is a two-class classification layer realized by a convolution with kernel size 1 × 1 and stride 1 × 1;
4. long-range skip connections between the encoding and decoding stages concatenate shallow feature maps with deep feature maps, to achieve more accurate segmentation results;
5. the ConvRNN units embedded at different layers of the decoding stage capture the spatial similarity between adjacent patches at receptive fields of different sizes: the ConvRNN with the larger receptive field learns coarser spatial relations but provides more contextual information, while the smaller receptive field learns finer correlations.
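For illustration, a compact PyTorch sketch of the layer layout described above (three encoder ConvBlocks of 32/64/128 kernels, two bilinear-upsampling DeconvBlocks of 64/32 kernels, bidirectional ConvRNN units in the decoder, long-range skip connections, and a 1 × 1 two-class head), reusing the BiConvRNN sketch above. The exact placement of the ConvRNN units within the decoder and the use of stride-2 pooling are assumptions; the text literally states a 1 × 1 pooling stride.

```python
def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class SequentialPatchUNet(nn.Module):
    """Layer-layout sketch: encoder ConvBlocks of 32/64/128 kernels, decoder
    with BiConvRNN units, bilinear upsampling DeconvBlocks of 64/32 kernels,
    skip connections and a 1x1 two-class head.
    Input: a patch sequence of shape (T, B, 1, 32, 32)."""
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2, self.enc3 = conv_block(1, 32), conv_block(32, 64), conv_block(64, 128)
        # The text states a 2x2 pooling window; a stride of 2 (standard U-net
        # downsampling) is assumed here, although the text literally says 1x1.
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.rnn3, self.rnn2, self.rnn1 = BiConvRNN(128, 128), BiConvRNN(64, 64), BiConvRNN(32, 32)
        self.dec2, self.dec1 = conv_block(128 + 64, 64), conv_block(64 + 32, 32)
        self.head = nn.Conv2d(32, 2, kernel_size=1)         # two-class output

    def forward(self, seq):
        T, B = seq.shape[:2]
        x = seq.flatten(0, 1)                               # fold time into the batch for the encoder
        f1 = self.enc1(x)                                   # (T*B, 32, 32, 32)
        f2 = self.enc2(self.pool(f1))                       # (T*B, 64, 16, 16)
        f3 = self.enc3(self.pool(f2))                       # (T*B, 128, 8, 8)
        unfold = lambda t: t.view(T, B, *t.shape[1:])       # restore the time axis for the ConvRNN
        d3 = self.rnn3(unfold(f3)).flatten(0, 1)            # sequence learning at the coarsest level
        d2 = self.dec2(torch.cat([self.up(d3), f2], dim=1))
        d2 = self.rnn2(unfold(d2)).flatten(0, 1)
        d1 = self.dec1(torch.cat([self.up(d2), f1], dim=1))
        d1 = self.rnn1(unfold(d1)).flatten(0, 1)
        return unfold(self.head(d1))                        # (T, B, 2, 32, 32)
```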
Brief description of the drawings
Fig. 1 is a structural diagram of the method of the invention.
Fig. 2 illustrates the point interaction and the sampling scheme of the invention.
Fig. 3 is a schematic diagram of the bidirectional ConvRNN operation of the invention.
Fig. 4 shows the network structure used in the invention.
Specific embodiments:
To present the objects, features and advantages of the invention in detail, the invention is described in further detail below with reference to the drawings and a specific implementation case.
As shown in Fig. 1, the invention provides a point-interactive deep learning medical image segmentation method. The model training stage comprises the following specific steps:
1) Resample the kidney tumor CT data so that the voxel spacing of each 3D volume is 0.625 × 0.625 × 1 mm; slice the resampled volume into 2D images along the Z axis.
2) Point interaction: as shown in Fig. 2, for each image a physician judges whether the current image contains a kidney tumor and clicks the approximate center of the tumor to indicate its approximate location.
3) Patch sampling and preprocessing: as shown in Fig. 2, with the given tumor center as the starting point, sample a series of patches along 16 evenly spaced directions with a step of 4 pixels (for display purposes, Fig. 2 shows only 8 directions and a sparse, non-overlapping sampling); all patches in each direction, ordered from inside to outside, form a patch sequence, and each patch sequence is regarded as one input sample of the model below; apply the same sampling steps to the corresponding annotation image to form the annotation patch sequence.
4) Construct, in the deep learning framework PyTorch, the bidirectional ConvRNN unit shown in Fig. 3 for learning the variation trend inside and outside the tumor over the patch sequence.
5) Based on the bidirectional ConvRNN unit and the U-net basic network, further construct the complete kidney tumor segmentation network in the deep learning framework. The network structure is shown in Fig. 4: it includes an encoding stage and a decoding stage, with long-range skip connections between the two stages to fuse information across layers; the encoding stage contains 6 convolutional layers and two pooling layers; the decoding stage contains three bidirectional ConvRNN units, two DeConv (deconvolution) operations, 4 convolutional layers, and a final classification layer formed by a 1 × 1 convolution.
6) Train the kidney tumor segmentation network on the patch sequences sampled from the training data set; in this step the invention defines the classification loss with cross entropy and optimizes the network parameters with stochastic gradient descent and backpropagation.
7) In the test stage, apply the same point interaction and patch sampling steps to the test image, feed the resulting 16 patch sequences into the network to obtain the corresponding segmentation result sequences, and merge and stitch all segmentation result patches to form the final result. A minimal sketch of steps 6) and 7) is given below.
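For illustration, a minimal sketch of the training step (step 6) and the test-time merging (step 7), reusing the sample_patch_sequences and SequentialPatchUNet sketches above; the merging rule (pasting each predicted patch back at its sampling location and taking the maximum over overlaps) and the handling of border patches are assumptions, since the text only states that the segmentation patches are merged and stitched.

```python
import numpy as np
import torch
import torch.nn.functional as F

def train_step(model, optimizer, patch_seq, label_seq):
    """One SGD step with the cross-entropy loss.
    patch_seq: (T, B, 1, 32, 32) float tensor; label_seq: (T, B, 32, 32) long tensor in {0, 1}."""
    logits = model(patch_seq)                               # (T, B, 2, 32, 32)
    loss = F.cross_entropy(logits.flatten(0, 1), label_seq.flatten(0, 1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def predict_full_mask(model, image, center, step=4, patch=32):
    """Run the 16 patch sequences of a test image through the network and
    paste the predicted patches back into a full-size mask."""
    seqs = sample_patch_sequences(image, center, step=step, patch_size=patch)
    x = torch.from_numpy(seqs).float().unsqueeze(2)         # (16 dirs, 16 patches, 1, 32, 32)
    x = x.permute(1, 0, 2, 3, 4)                            # (T, B, 1, 32, 32): directions as batch
    with torch.no_grad():
        pred = model(x).argmax(dim=2)                       # (T, B, 32, 32) hard labels
    mask = np.zeros_like(image, dtype=np.uint8)
    half = patch // 2
    n_dirs, n_patches = seqs.shape[:2]
    for d in range(n_dirs):
        angle = 2.0 * np.pi * d / n_dirs
        dy, dx = np.sin(angle), np.cos(angle)
        for t in range(n_patches):
            py = int(round(center[0] + t * step * dy))
            px = int(round(center[1] + t * step * dx))
            y0, x0 = py - half, px - half
            y1, x1 = y0 + patch, x0 + patch
            if y0 < 0 or x0 < 0 or y1 > mask.shape[0] or x1 > mask.shape[1]:
                continue                                    # skip patches crossing the image border
            p = pred[t, d].numpy().astype(np.uint8)
            mask[y0:y1, x0:x1] = np.maximum(mask[y0:y1, x0:x1], p)
    return mask
```

Folding the 16 directions into the batch dimension lets all patch sequences produced by one click be segmented in a single forward pass.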

Claims (5)

1. A point-interactive deep learning kidney tumor segmentation method, comprising the following steps:
(1) selecting a three-dimensional kidney tumor CT data set and dividing it into a training set and a test set;
(2) resampling the kidney tumor CT data so that the voxel spacing of each 3D volume is 0.625 × 0.625 × 1 mm, and slicing the resampled volume into 2D images along the Z axis;
(3) point interaction: for each image, a physician judges whether the current image contains a kidney tumor; if it does, the physician clicks the approximate center point of the kidney tumor to indicate the approximate location of the tumor;
(4) with the given tumor center as the starting point, sampling a series of patches along 16 evenly spaced directions with a step of 4 pixels; all patches in each direction, ordered from inside to outside, form a patch sequence, and each patch sequence is regarded as one input sample of the model below;
(5) constructing, in the deep learning framework PyTorch, a bidirectional ConvRNN unit for learning the variation trend inside and outside the tumor over the patch sequence;
(6) based on the bidirectional ConvRNN unit, further constructing the complete kidney tumor segmentation deep network in the deep learning framework;
(7) training the kidney tumor segmentation deep network on the patch sequences sampled from the training data set, and saving the model parameters after the model converges;
(8) performing kidney tumor segmentation on the test data set with the trained deep network, and merging the segmentation results into the final segmentation result.
2. The point-interactive deep learning segmentation method for kidney tumor segmentation according to claim 1, characterized in that: in the point interaction of step (3), the physician judges whether each 2D image to be segmented contains a kidney tumor, and then places a marker at the corresponding center point of each kidney tumor to indicate the position of the tumor.
3. The point-interactive deep learning segmentation method for kidney tumor segmentation according to claim 1, characterized in that: in step (4), starting from the tumor center given by the physician, 16 patches are sampled from inside the tumor to outside the tumor along 16 evenly spaced directions with a step of 4 pixels, and the 16 patches in each direction form a patch sequence ordered from inside to outside.
4. The point-interactive deep learning segmentation method for kidney tumor segmentation according to claim 1, characterized in that: the ConvRNN unit described in step (5) is a two-dimensional temporal computation unit, whose computation is as follows:
r_t = σ(W_xr * X_t + W_hr * H_{t-1} + b_r)
In the whole temporal computation, the inputs include X_1, X_2, ..., X_t and H_1, H_2, ..., H_t, where t denotes the t-th time step, X_t denotes the patch input at the t-th time step, H_t denotes the hidden state of the t-th time step, r_t denotes the reset gate computed at the t-th time step, σ denotes a nonlinear activation function, W_xr, W_hr, W_xh denote convolution kernel parameters, and b_r, b_h denote the bias terms used in computing the reset gate and the hidden state, respectively; the convolution kernel size in the ConvRNN is 3 × 3 and the stride is 1 × 1.
5. The point-interactive deep learning segmentation method for kidney tumor segmentation according to claim 1, characterized in that: the complete deep network described in step (6) includes an encoding process and a decoding process; the encoding process contains three ConvBlocks (convolution blocks), each ConvBlock consisting of two convolutional layers with kernel spatial size 3 × 3 and stride 1 × 1; the numbers of convolution kernels of the convolutional layers in the first, second and third ConvBlock are 32, 64 and 128, respectively; between these ConvBlocks are pooling layers (Pooling layers) with window size 2 × 2 and stride 1 × 1; the decoding stage contains three ConvRNN layers, two DeconvBlocks (deconvolution blocks) and a final two-class classification layer; each DeconvBlock consists of one deconvolution layer followed by two convolutional layers, the deconvolution being realized by bilinear interpolation, with convolution kernel size 3 × 3 and stride 1 × 1; the numbers of convolution kernels of the convolutions in the two DeconvBlocks are 64 and 32, respectively; the convolution kernel size of the final two-class classification layer is 1 × 1 with stride 1 × 1; the network receives patch sequences as input and outputs the corresponding segmentation result sequences of the patches.
CN201910374698.6A 2019-05-06 2019-05-06 A kind of point Interactive medical image dividing method based on deep neural network Pending CN110415253A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910374698.6A CN110415253A (en) 2019-05-06 2019-05-06 A kind of point Interactive medical image dividing method based on deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910374698.6A CN110415253A (en) 2019-05-06 2019-05-06 A kind of point Interactive medical image dividing method based on deep neural network

Publications (1)

Publication Number Publication Date
CN110415253A true CN110415253A (en) 2019-11-05

Family

ID=68357759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910374698.6A Pending CN110415253A (en) 2019-05-06 2019-05-06 A kind of point Interactive medical image dividing method based on deep neural network

Country Status (1)

Country Link
CN (1) CN110415253A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104268873A (en) * 2014-09-25 2015-01-07 南京信息工程大学 Breast tumor partition method based on nuclear magnetic resonance images
WO2018236497A1 (en) * 2017-05-25 2018-12-27 Enlitic, Inc. Medical scan assisted review system
CN109242860A (en) * 2018-08-21 2019-01-18 电子科技大学 Based on the brain tumor image partition method that deep learning and weight space are integrated
CN109191452A (en) * 2018-09-12 2019-01-11 南京大学 A kind of abdominal cavity CT image peritonaeum transfer automark method based on Active Learning
CN109598727A (en) * 2018-11-28 2019-04-09 北京工业大学 A kind of CT image pulmonary parenchyma three-dimensional semantic segmentation method based on deep neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JINQUAN SUN ET AL.: "Interactive Medical Image Segmentation via Point-Based Interaction and Sequential Patch Learning", arXiv:1804.10481v2 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10762629B1 (en) 2019-11-14 2020-09-01 SegAI LLC Segmenting medical images
US11423544B1 (en) 2019-11-14 2022-08-23 Seg AI LLC Segmenting medical images
CN111179275A (en) * 2019-12-31 2020-05-19 电子科技大学 Medical ultrasonic image segmentation method
CN111179275B (en) * 2019-12-31 2023-04-25 电子科技大学 Medical ultrasonic image segmentation method
CN111340825A (en) * 2020-03-05 2020-06-26 上海市肺科医院(上海市职业病防治院) Method and system for generating mediastinal lymph node segmentation model
CN111340825B (en) * 2020-03-05 2023-05-09 上海市肺科医院(上海市职业病防治院) Method and system for generating mediastinum lymph node segmentation model
CN111932549A (en) * 2020-06-28 2020-11-13 山东师范大学 SP-FCN-based MRI brain tumor image segmentation system and method

Similar Documents

Publication Publication Date Title
CN111476292B (en) Small sample element learning training method for medical image classification processing artificial intelligence
CN107203999B (en) Dermatoscope image automatic segmentation method based on full convolution neural network
US20220309674A1 (en) Medical image segmentation method based on u-net
CN110415253A (en) A kind of point Interactive medical image dividing method based on deep neural network
CN111047594B (en) Tumor MRI weak supervised learning analysis modeling method and model thereof
CN108921851B (en) Medical CT image segmentation method based on 3D countermeasure network
CN112465827B (en) Contour perception multi-organ segmentation network construction method based on class-by-class convolution operation
CN109754403A (en) Tumour automatic division method and system in a kind of CT image
CN109886986A (en) A kind of skin lens image dividing method based on multiple-limb convolutional neural networks
CN110782427B (en) Magnetic resonance brain tumor automatic segmentation method based on separable cavity convolution
CN106846330A (en) Human liver's feature modeling and vascular pattern space normalizing method
CN112785593A (en) Brain image segmentation method based on deep learning
CN116188479B (en) Hip joint image segmentation method and system based on deep learning
CN114897780A (en) MIP sequence-based mesenteric artery blood vessel reconstruction method
Merkow et al. Structural edge detection for cardiovascular modeling
Du et al. An integrated deep learning framework for joint segmentation of blood pool and myocardium
Chatterjee et al. A survey on techniques used in medical imaging processing
Asma-Ull et al. Data efficient segmentation of various 3d medical images using guided generative adversarial networks
Xiao et al. PET and CT image fusion of lung cancer with siamese pyramid fusion network
CN114387282A (en) Accurate automatic segmentation method and system for medical image organs
CN110706209B (en) Method for positioning tumor in brain magnetic resonance image of grid network
Mikhailov et al. A deep learning-based interactive medical image segmentation framework with sequential memory
Kumar et al. Segmentation of magnetic resonance brain images using 3D convolution neural network
CN114445421B (en) Identification and segmentation method, device and system for nasopharyngeal carcinoma lymph node region
Sahand et al. Detection of Brain Tumor from MRI Images Base on Deep Learning technique Using TL Model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20191105)