CN111986216B - RSG liver CT image interactive segmentation algorithm based on neural network improvement - Google Patents

RSG liver CT image interactive segmentation algorithm based on neural network improvement

Info

Publication number
CN111986216B
CN111986216B
Authority
CN
China
Prior art keywords
image
liver
neural network
pixels
region
Prior art date
Legal status
Active
Application number
CN202010907881.0A
Other languages
Chinese (zh)
Other versions
CN111986216A (en)
Inventor
张丽娟
章润
李东明
李阳
Current Assignee
Wuxi University
Original Assignee
Wuxi University
Priority date
Filing date
Publication date
Application filed by Wuxi University filed Critical Wuxi University
Priority to CN202010907881.0A priority Critical patent/CN111986216B/en
Publication of CN111986216A publication Critical patent/CN111986216A/en
Application granted granted Critical
Publication of CN111986216B publication Critical patent/CN111986216B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30056Liver; Hepatic

Abstract

The invention provides an improved region growing algorithm based on a one-dimensional convolutional neural network for interactive segmentation of liver CT images. The neural network comprehensively considers the gray value, spatial information, and different gradient values of pixels as the growth rule, which improves the stability of the region growing method and enhances the algorithm's ability to segment complex edge structures. The specific steps are as follows: first, image preprocessing, in which slices containing the liver are extracted from the CT image sequence set and the CT images are converted into gray-scale images using a window algorithm; then edge detection, in which gradient values of the pixels under different edge detection operators are calculated as pixel features to form pixel feature vectors; next, the network model is constructed, a training data set is extracted, and the network model is trained; finally, segmentation, in which the trained convolutional neural network model is used as the growth criterion of the region growing algorithm, the liver region is clicked with the mouse to generate an initial segmentation result, and holes are filled with a morphological method to obtain the final result.

Description

RSG liver CT image interactive segmentation algorithm based on neural network improvement
Technical Field
The invention provides an improved region growing algorithm (Region Seeds Growing, RSG) based on a one-dimensional convolutional neural network for interactive segmentation of liver CT images. The neural network comprehensively considers the gray value, spatial information, and different gradient values of pixels as the growth rule, which improves the stability of the region growing method and enhances the algorithm's ability to segment complex edge structures.
Background
CT is a noninvasive means of imaging organs from outside the body; it offers fast imaging, high resolution, and good image quality, and has become an essential tool for clinical diagnosis, while the combination of visualization technology and medical image analysis now plays a dominant role in the diagnosis of liver diseases. By segmenting the liver CT image, extracting the liver tissue, and obtaining the corresponding feature information, a doctor can intuitively understand the internal details of a patient's liver, which plays a key role in diagnosis and in formulating the subsequent treatment plan.
Current segmentation methods can be divided into three categories: manual, semi-automatic, and fully automatic. Manual segmentation is cumbersome, time consuming, and subject to inter-observer and intra-observer variability. Every pixel of an image must be assigned to its class by hand; although this yields very accurate results, the time required limits the translation of some tasks into clinical practice, and manually segmenting a single case can take hours. Fully automatic methods require no human effort, and researchers have developed many of them over the last decades. However, fully automatic segmentation methods rarely achieve results that are accurate and robust enough for clinical use. This is often due to poor image quality (noise, partial volume effects, artifacts, and low contrast), large patient-to-patient variability, inhomogeneous appearance caused by pathology, and differences in protocols among clinicians that lead to different definitions of a given structure's boundary.
To address the limitations of fully automatic segmentation, interactive segmentation is viable in clinical practice because it can provide higher accuracy and robustness in many applications, such as planning radiation therapy of brain tumors. Since providing manual annotations for segmentation is time consuming and laborious, an efficient interactive segmentation method is very important for practical use. A good interactive segmentation method should obtain accurate results with as little user interaction as possible, thereby improving interaction efficiency. Although a large number of interactive segmentation methods exist, most require extensive user interaction and take a long time, or the learning ability of the underlying model is limited. For example, the widely used ITK-SNAP starts from user-supplied seed pixels or blobs and employs an active contour model for segmentation; it requires substantial initial interaction, and once the initial segmentation is obtained it is difficult to refine the underlying model through further user interaction. SlicSeg accepts user-provided scribbles in a single starting slice to train an online random forest for 3D segmentation, but lacks the flexibility for further user editing. Random Walks and Graph Cuts learn from scribbles and allow users to provide additional scribbles for refinement; they use random walks and a Gaussian Mixture Model (GMM) as the underlying models, but require many scribbles to achieve a satisfactory segmentation. The present method uses a convolutional neural network to improve the growth rule of the conventional region growing algorithm and can interactively generate the segmented image through mouse clicks.
Disclosure of Invention
The invention aims to solve the problems of low accuracy and weak stability of the traditional region growing method when segmenting liver CT images. It proposes to interactively segment the liver CT image with a region growing algorithm improved by a one-dimensional convolutional neural network, in which the gray value, spatial information, and different gradient values of pixels are comprehensively considered by the neural network as the growth rule. The method comprises the following steps:
step one: image preprocessing, namely extracting the slices containing the liver from the CT image sequence set and converting the CT images into gray-scale images using the window-level (W/L) algorithm;
step two: detecting the image edges, calculating gradient values of pixels under different edge detection operators as the characteristics of the pixels, and forming pixel characteristic vectors;
step three: constructing a network model, extracting a training data set, and training the network model, wherein the network takes a pair of pixel characteristic vectors as input and takes a correlation coefficient of the two pixels as output;
step four: segmentation, namely taking the trained convolutional neural network model as the growth criterion of the region growing algorithm, clicking the liver region with a mouse to generate an initial segmentation result, and filling holes with a morphological method to obtain the final result.
The specific process of step one is as follows:
(1) Extracting slices:
The dataset comprises the original CT images and segmentation labels in which physicians have assigned each of 13 abdominal organs a unique number, the liver corresponding to the number 6. The selected slices T satisfy: Start + 5 < T < End - 5, where Start is the index of the first slice in the label image sequence set that contains the number 6 and End is the index of the last slice that contains it (a code sketch of this selection and of the image conversion below is given at the end of this step);
(2) Image conversion:
The value g(i) of a pixel after processing with the window-level (W/L) algorithm is obtained by linearly mapping CT values inside the window to the gray-scale range and clipping values outside it, where:
min = WL - 0.5 × WW, max = WL + 0.5 × WW
The CT value of liver tissue is typically between 50 and 250, so WW = 200 and WL = 150 are used.
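As an illustration only, a minimal Python sketch of step one is given below. It assumes the CT volume and its label volume are available as NumPy arrays; the function names, array shapes, and the 8-bit output range are assumptions made for the example, not part of the invention.

import numpy as np

def select_liver_slices(labels, liver_label=6, margin=5):
    # labels: array of shape (num_slices, H, W); the liver is encoded as the number 6.
    liver_slices = np.where((labels == liver_label).any(axis=(1, 2)))[0]
    start, end = liver_slices[0], liver_slices[-1]
    # Keep slices T with Start + 5 < T < End - 5.
    return list(range(start + margin + 1, end - margin))

def window_level(ct_slice, ww=200, wl=150):
    # Standard window-level mapping with WW = 200 and WL = 150 as in the description.
    lo = wl - 0.5 * ww                                     # min = WL - 0.5*WW
    hi = wl + 0.5 * ww                                     # max = WL + 0.5*WW
    g = (ct_slice.astype(np.float32) - lo) / (hi - lo)     # linear mapping inside the window
    return (np.clip(g, 0.0, 1.0) * 255).astype(np.uint8)   # clip values outside the window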
The specific process of step two is as follows:
The image is filtered with the Sobel, Roberts, Canny, Gabor, sobel_h, sobel_v, and robert_neg_diag operators respectively, and the resulting values are taken as characteristic values of the pixel to form the pixel characteristic vector:
f = [α_1, α_2, α_3, ..., α_8]
where α_1 is the gray value of the pixel.
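As an illustration only, a minimal sketch of building the 8-dimensional pixel feature vector is given below; the use of scikit-image and the Gabor frequency are assumptions made for the example, not choices stated in the patent.

import numpy as np
from skimage import filters, feature

def pixel_features(gray):
    # gray: 2-D float gray-scale image. Returns an (H, W, 8) array so that the
    # feature vector f = [a1, ..., a8] of pixel (i, j) is features[i, j, :].
    responses = [
        gray,                                   # a1: gray value of the pixel
        filters.sobel(gray),                    # a2: Sobel
        filters.roberts(gray),                  # a3: Roberts
        feature.canny(gray).astype(float),      # a4: Canny edge map
        filters.gabor(gray, frequency=0.6)[0],  # a5: Gabor (real part; frequency assumed)
        filters.sobel_h(gray),                  # a6: sobel_h
        filters.sobel_v(gray),                  # a7: sobel_v
        filters.roberts_neg_diag(gray),         # a8: robert_neg_diag
    ]
    return np.stack(responses, axis=-1)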
The specific process of step three is as follows:
(1) Extracting data:
A sampling region is defined by taking the liver boundary and extending it outward within a city-block distance of 10 pixels:
disf(p(x_1, y_1), p(x_2, y_2)) = |x_1 - x_2| + |y_1 - y_2| < 10
The region comprises two parts: the interior of the liver and the band within a 10-pixel distance outside the liver. Two pixels are selected arbitrarily from this region and combined to form an input sample X_i = [f_1, f_2] of the neural network, with the corresponding output label y_i indicating whether the two pixels belong to the same region (a sketch of this sampling follows at the end of this step).
(2) Training network model
The last layer of the network model uses a sigmoid activation function so that the output value y'_i is normalized to (0, 1) and can be interpreted as the probability that the two input pixels belong to the same region:
y'_i = 1 / (1 + e^(-Z))
where Z represents the output value before the activation. A binary cross-entropy function (binary cross entropy) is used as the loss function of the network:
Loss = -[y_i × log(y'_i) + (1 - y_i) × log(1 - y'_i)]
The loss is 0 only when y'_i and y_i are equal; otherwise the loss is positive, and the larger the difference between the two probabilities, the larger the loss.
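As an illustration only, a minimal sketch of extracting training pairs for step three is given below: pairs are sampled from the liver interior plus the 10-pixel outer band, and the label is set to 1 when the two pixels lie in the same region, matching the probability interpretation above. The sample count, the random seed, and the use of SciPy are assumptions made for the example. (The network architecture itself is sketched after the description of FIG. 2.)

import numpy as np
from scipy import ndimage

def sample_pairs(liver_mask, features, n_pairs=10000, seed=0):
    # liver_mask: boolean (H, W) liver label; features: (H, W, 8) from the earlier sketch.
    rng = np.random.default_rng(seed)
    # Sampling region: liver interior plus a 10-pixel (city-block) outer band;
    # a 4-connected dilation grows the mask by 1 in city-block distance per iteration.
    band = ndimage.binary_dilation(liver_mask,
                                   structure=ndimage.generate_binary_structure(2, 1),
                                   iterations=10)
    ys, xs = np.nonzero(band)
    X, Y = [], []
    for _ in range(n_pairs):
        i, j = rng.integers(0, len(ys), size=2)
        X.append(np.stack([features[ys[i], xs[i]], features[ys[j], xs[j]]]))  # X_i = [f1, f2]
        same = liver_mask[ys[i], xs[i]] == liver_mask[ys[j], xs[j]]
        Y.append(1.0 if same else 0.0)   # label 1 when both pixels belong to the same region
    return np.asarray(X), np.asarray(Y)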
The specific process of step four is as follows:
(1) The trained convolutional neural network model is used as the growth criterion of the region growing algorithm. When judging whether a pixel f_2 in the four-neighborhood of a seed pixel f_1 should be merged into the growth region represented by the seed pixel, the pair f_1, f_2 is fed to the neural network to obtain the output y'; when y' > 0.9 the pixel is merged, otherwise it is not. This step is repeated until no pixel in the four-neighborhood of any seed pixel satisfies the condition. The initial seed pixel is selected by a mouse click.
(2) Since liver tissue contains blood vessels, tumors, and the like, holes exist in the liver region of the segmentation result. The basic principle of morphological hole filling is:
X_k = (X_{k-1} ⊕ B) ∩ A^c, k = 1, 2, 3, ...
where X_0 is the starting point of the hole filling, B is the structuring element used to fill the hole, and A^c is the complement of A. The formula is computed iteratively until X_k = X_{k-1}; the final filling result is the union of X_k and the boundary set A, i.e. the final segmentation result (a code sketch of this growth and filling step follows below).
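As an illustration only, a minimal sketch of step four is given below: a four-neighborhood growth loop driven by the trained model, followed by hole filling, for which SciPy's binary_fill_holes is used as a compact equivalent of the iterative dilation described above. The queue-based traversal and the per-pixel predict call are assumptions made for the example; batching the predictions would be the practical choice.

import numpy as np
from collections import deque
from scipy import ndimage

def region_grow(model, features, seed, threshold=0.9):
    # seed: (row, col) of the pixel clicked by the user; features: (H, W, 8).
    h, w, _ = features.shape
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        f1 = features[y, x]
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):          # four-neighborhood
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not region[ny, nx]:
                pair = np.stack([f1, features[ny, nx]])[None]       # shape (1, 2, 8)
                y_prob = float(model.predict(pair, verbose=0)[0, 0])
                if y_prob > threshold:                               # merge when y' > 0.9
                    region[ny, nx] = True
                    queue.append((ny, nx))
    return region

def fill_holes(region):
    # Equivalent in effect to iterating X_k = (X_{k-1} dilated by B) intersected with A^c
    # from every hole seed and taking the union with the region.
    return ndimage.binary_fill_holes(region)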
The invention also includes such features:
comparing the growth rule of the traditional region growth algorithm, and only comparing the gray value of the adjacent pixels to form a single dimension; the gray value, the spatial information, different gradient values and other various information of the pixels are comprehensively considered as the growth rules through the neural network, so that the stability of the algorithm is improved, and the processing capacity of the algorithm on the edge complex structure is enhanced. Although only the pixels in the region near the liver are trained, the present invention can also effectively segment the untrained region.
Compared with other interactive methods, the proposed interaction is simple to operate and the edges of the segmentation result are finer. The invention is suited to segmenting medical images with a homogeneous internal structure; its segmentation effect on natural images with complex semantics is less pronounced.
Drawings
FIG. 1 is a flow chart of the method of the present invention
FIG. 2 is a diagram of a one-dimensional convolutional neural network architecture
Detailed Description
It will be appreciated by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted. The invention is further described below with reference to the drawings and implementation steps.
The invention provides an improved region growing algorithm based on a one-dimensional convolutional neural network for interactive segmentation of liver CT images. The neural network comprehensively considers the gray value, spatial information, and different gradient values of pixels as the growth rule, improving the stability of the region growing method and enhancing the algorithm's ability to segment complex edge structures.
FIG. 1 is a flow chart of the method of the present invention. First, image preprocessing: slices containing the liver are extracted from the CT image sequence set and the CT images are converted into gray-scale images using the window algorithm. Then, edge detection: gradient values of the pixels under different edge detection operators are calculated as pixel features to form pixel feature vectors. Next, the network model is constructed, the training data set is extracted, and the network model is trained. Finally, segmentation: the trained convolutional neural network model is used as the growth criterion of the region growing algorithm, the liver region is clicked with the mouse to generate an initial segmentation result, and holes are filled with a morphological method to obtain the final result.
The specific implementation steps are as follows:
step1.1 extracting the slices:
The dataset comprises the original CT images and segmentation labels in which physicians have assigned each of 13 abdominal organs a unique number, the liver corresponding to the number 6. The selected slices T satisfy: Start + 5 < T < End - 5, where Start is the index of the first slice in the label image sequence set that contains the number 6 and End is the index of the last slice that contains it;
step1.2 image conversion:
The value g(i) of a pixel after processing with the window-level (W/L) algorithm is obtained by linearly mapping CT values inside the window to the gray-scale range and clipping values outside it, where:
min = WL - 0.5 × WW, max = WL + 0.5 × WW
The CT value of liver tissue is typically between 50 and 250, so WW = 200 and WL = 150 are used.
Step2: the image is filtered with the Sobel, Roberts, Canny, Gabor, sobel_h, sobel_v, and robert_neg_diag operators respectively, and the resulting values are taken as characteristic values of the pixel to form the pixel characteristic vector:
f = [α_1, α_2, α_3, ..., α_8]
where α_1 is the gray value of the pixel.
step3.1 extracting data:
A sampling region is defined by taking the liver boundary and extending it outward within a city-block distance of 10 pixels:
disf(p(x_1, y_1), p(x_2, y_2)) = |x_1 - x_2| + |y_1 - y_2| < 10
The region comprises two parts: the interior of the liver and the band within a 10-pixel distance outside the liver. Two pixels are selected arbitrarily from this region and combined to form an input sample X_i = [f_1, f_2] of the neural network, with the corresponding output label y_i indicating whether the two pixels belong to the same region.
step3.2 training the network model:
The last layer of the network model uses a sigmoid activation function so that the output value y'_i is normalized to (0, 1) and can be interpreted as the probability that the two input pixels belong to the same region:
y'_i = 1 / (1 + e^(-Z))
where Z represents the output value before the activation. A binary cross-entropy function (binary cross entropy) is used as the loss function of the network:
Loss = -[y_i × log(y'_i) + (1 - y_i) × log(1 - y'_i)]
The loss is 0 only when y'_i and y_i are equal; otherwise the loss is positive, and the larger the difference between the two probabilities, the larger the loss.
step4.1: The trained convolutional neural network model is used as the growth criterion of the region growing algorithm. When judging whether a pixel f_2 in the four-neighborhood of a seed pixel f_1 should be merged into the growth region represented by the seed pixel, the pair f_1, f_2 is fed to the neural network to obtain the output y'; when y' > 0.9 the pixel is merged, otherwise it is not. This step is repeated until no pixel in the four-neighborhood of any seed pixel satisfies the condition. The initial seed pixel is selected by a mouse click.
step4.2: Since liver tissue contains blood vessels, tumors, and the like, holes exist in the liver region of the segmentation result. The basic principle of morphological hole filling is:
X_k = (X_{k-1} ⊕ B) ∩ A^c, k = 1, 2, 3, ...
where X_0 is the starting point of the hole filling, B is the structuring element used to fill the hole, and A^c is the complement of A. The formula is computed iteratively until X_k = X_{k-1}; the final filling result is the union of X_k and the boundary set A, i.e. the final segmentation result.
Fig. 2 is a diagram of the one-dimensional convolutional neural network architecture. The neural network of the invention, shown in Fig. 2, is similar to a standard convolutional neural network: a convolutional layer is applied first, the two-dimensional input is then flattened into one dimension by a flatten layer and passed to fully connected layers, and finally a scalar probability value is output through a sigmoid activation function. The convolutional layer, however, differs in that one-dimensional convolution is used. The stride of the convolution kernel is 1; at each convolution the kernel covers one whole row of the input vector, adjacent rows are processed independently of each other, and no cross combination is performed.
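As an illustration only, a minimal Keras sketch of a network of the kind described for Fig. 2 is given below. A kernel size of 1 makes each convolution cover one whole feature row with the two rows processed independently, as described above; the number of filters, the dense width, and the optimizer are assumptions made for the example rather than values from the patent.

import tensorflow as tf

def build_model(n_features=8):
    # Input: a pair of pixel feature vectors, shape (2, n_features).
    inputs = tf.keras.Input(shape=(2, n_features))
    # One-dimensional convolution, stride 1, the kernel covering one whole row at a time,
    # so the two rows are not cross-combined.
    x = tf.keras.layers.Conv1D(filters=32, kernel_size=1, activation="relu")(inputs)
    x = tf.keras.layers.Flatten()(x)                       # flatten layer: 2-D input to 1-D
    x = tf.keras.layers.Dense(32, activation="relu")(x)    # fully connected layer
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)   # probability of same region
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy")   # binary cross-entropy loss
    return model

# Example training call using the pairs X, Y sampled in the earlier sketch:
# model = build_model()
# model.fit(X, Y, epochs=10, batch_size=64)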

Claims (5)

1. The liver CT image interactive segmentation algorithm based on the neural network improved RSG region growing algorithm is characterized by comprising the following steps:
step1: image preprocessing, namely extracting the slices containing the liver from the CT image sequence set and converting the CT images into gray-scale images using the window-level (W/L) algorithm;
step2: detecting the image edges, calculating gradient values of pixels under different edge detection operators as the characteristics of the pixels, and forming pixel characteristic vectors;
step3: constructing a network model, extracting a training data set, and training the network model, wherein the network takes a pair of pixel characteristic vectors as input and takes a correlation coefficient of the two pixels as output;
step4: segmentation, namely taking the trained convolutional neural network model as the growth criterion of the region growing algorithm; when judging whether a pixel f_2 in the four-neighborhood of a seed pixel f_1 should be merged into the growth region represented by the seed pixel, f_1 and f_2 are input to the neural network to obtain the output result y'; when y' > 0.9 the pixel is merged, otherwise it is not merged; this step is repeated until no pixel in the four-neighborhood of any seed pixel satisfies the condition; the initial seed pixel is selected by a mouse click on the liver region to generate the initial segmentation result, and holes are filled with a morphological method to obtain the final result.
2. The liver CT image interactive segmentation algorithm based on the neural network improved RSG region growing algorithm of claim 1, wherein the specific process in Step1 is as follows:
step1.1 extracting the slices:
the dataset comprises the original CT images and segmentation labels in which physicians have assigned each of 13 abdominal organs a unique number, the liver corresponding to the number 6, and the selected slices T satisfy: Start + 5 < T < End - 5;
where Start is the index of the first slice in the label image sequence set that contains the number 6 and End is the index of the last slice that contains it;
step1.2 image conversion:
the value g(i) of a pixel after processing with the window-level algorithm is obtained by linearly mapping CT values inside the window to the gray-scale range and clipping values outside it, where:
min = WL - 0.5 × WW, max = WL + 0.5 × WW
and the CT value of liver tissue is typically between 50 and 250, so WW = 200 and WL = 150.
3. The liver CT image interactive segmentation algorithm based on the neural network improved RSG region growing algorithm of claim 1, wherein the specific process in Step2 is as follows:
step2.1: the image is filtered with the Sobel, Roberts, Canny, Gabor, sobel_h, sobel_v, and robert_neg_diag operators respectively, and the resulting values are taken as characteristic values of the pixel to form the pixel characteristic vector:
f = [α_1, α_2, α_3, ..., α_8]
where α_1 is the gray value of the pixel.
4. The liver CT image interactive segmentation algorithm based on the neural network improved RSG region growing algorithm of claim 1, wherein the specific process in Step3 is as follows:
step3.1 extracting data:
a sampling region is defined by taking the liver boundary and extending it outward within a city-block distance of 10 pixels:
disf(p(x_1, y_1), p(x_2, y_2)) = |x_1 - x_2| + |y_1 - y_2| < 10
the region comprises two parts: the interior of the liver and the band within a 10-pixel distance outside the liver; two pixels are selected arbitrarily from this region and combined to form an input sample X of the neural network,
X_i = [f_1, f_2]
with the corresponding output label y_i indicating whether the two pixels belong to the same region;
step3.2 training the network model:
the last layer of the network model uses a sigmoid activation function so that the output value y'_i is normalized to (0, 1) and can be interpreted as the probability that the two input pixels belong to the same region:
y'_i = 1 / (1 + e^(-Z))
where Z represents the output value before the activation; a binary cross-entropy function (binary cross entropy) is used as the loss function of the network,
Loss = -[y_i × log(y'_i) + (1 - y_i) × log(1 - y'_i)]
and the loss is 0 only when y'_i and y_i are equal; otherwise the loss is positive, and the larger the difference between the two probabilities, the larger the loss.
5. The liver CT image interactive segmentation algorithm based on the neural network improved RSG region growing algorithm of claim 1, wherein the specific process in Step4 is as follows:
step4.1: since liver tissue contains blood vessels, tumors, and the like, holes exist in the liver region of the segmentation result; the basic principle of morphological hole filling is:
X_k = (X_{k-1} ⊕ B) ∩ A^c, k = 1, 2, 3, ...
where X_0 is the starting point of the hole filling, B is the structuring element used to fill the holes, and A^c is the complement of A; the formula is computed iteratively until X_k = X_{k-1}, and the final filling result is the union of X_k and the boundary set A, i.e. the final segmentation result.
CN202010907881.0A 2020-09-02 2020-09-02 RSG liver CT image interactive segmentation algorithm based on neural network improvement Active CN111986216B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010907881.0A CN111986216B (en) 2020-09-02 2020-09-02 RSG liver CT image interactive segmentation algorithm based on neural network improvement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010907881.0A CN111986216B (en) 2020-09-02 2020-09-02 RSG liver CT image interactive segmentation algorithm based on neural network improvement

Publications (2)

Publication Number Publication Date
CN111986216A CN111986216A (en) 2020-11-24
CN111986216B (en) 2024-01-12

Family

ID=73448212

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010907881.0A Active CN111986216B (en) 2020-09-02 2020-09-02 RSG liver CT image interactive segmentation algorithm based on neural network improvement

Country Status (1)

Country Link
CN (1) CN111986216B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117744232A (en) * 2024-02-19 2024-03-22 中铁十六局集团第一工程有限公司 Concrete pouring method for beam joints of steel reinforced concrete columns


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106056596A (en) * 2015-11-30 2016-10-26 浙江德尚韵兴图像科技有限公司 Fully-automatic three-dimensional liver segmentation method based on local apriori information and convex optimization
CN106127819A (en) * 2016-06-30 2016-11-16 上海联影医疗科技有限公司 Medical image extracts method and the device thereof of vessel centerline
CN107016683A (en) * 2017-04-07 2017-08-04 衢州学院 The level set hippocampus image partition method initialized based on region growing

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Radiomics based detection and characterization of suspicious lesions on full field digital mammograms; Suhas G. Sapate et al.; Computer Methods and Programs in Biomedicine; pp. 1-20 *
Infrared image target segmentation method based on a fully convolutional neural network and a dynamic adaptive region growing method; Ren Zhimiao; 光电技术及应用 (Electro-Optical Technology and Application); pp. 564-569 *

Also Published As

Publication number Publication date
CN111986216A (en) 2020-11-24

Similar Documents

Publication Publication Date Title
EP3639240B1 (en) A system and computer-implemented method for segmenting an image
Xu et al. DW-Net: A cascaded convolutional neural network for apical four-chamber view segmentation in fetal echocardiography
Zhao et al. An overview of interactive medical image segmentation
Kim et al. Machine-learning-based automatic identification of fetal abdominal circumference from ultrasound images
CN112150428A (en) Medical image segmentation method based on deep learning
El-Regaily et al. Lung nodule segmentation and detection in computed tomography
CN113420826B (en) Liver focus image processing system and image processing method
EP2401719B1 (en) Methods for segmenting images and detecting specific structures
Lameski et al. Skin lesion segmentation with deep learning
CN115496771A (en) Brain tumor segmentation method based on brain three-dimensional MRI image design
CN114972362A (en) Medical image automatic segmentation method and system based on RMAU-Net network
Shan et al. SCA-Net: A spatial and channel attention network for medical image segmentation
Bi et al. Hyper-fusion network for semi-automatic segmentation of skin lesions
H Khan et al. Classification of skin lesion with hair and artifacts removal using black-hat morphology and total variation
Kriti et al. A review of Segmentation Algorithms Applied to B-Mode breast ultrasound images: a characterization Approach
CN111986216B (en) RSG liver CT image interactive segmentation algorithm based on neural network improvement
Banerjee et al. A CADe system for gliomas in brain MRI using convolutional neural networks
Chatterjee et al. A survey on techniques used in medical imaging processing
CN116958705A (en) Medical image classifying system based on graph neural network
Lawankar et al. Segmentation of liver using marker watershed transform algorithm for CT scan images
Hoori et al. Automatic Deep Learning Segmentation and Quantification of Epicardial Adipose Tissue in Non-Contrast Cardiac CT scans
Tu et al. Segmentation of lesion in dermoscopy images using dense-residual network with adversarial learning
Xue et al. Region-of-interest aware 3D ResNet for classification of COVID-19 chest computerised tomography scans
CN110706209B (en) Method for positioning tumor in brain magnetic resonance image of grid network
Hatture et al. Clinical diagnostic systems based on machine learning and deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20231207

Address after: No. 333 Wuxi Avenue, Wuxi City, Jiangsu Province, 214000

Applicant after: Wuxi University

Address before: 130012 No. 2055 Yan'an Street, Chaoyang District, Changchun City, Jilin Province

Applicant before: Changchun University of Technology

GR01 Patent grant