CN110211117B - Processing system for identifying linear tubular objects in medical image and optimized segmentation method - Google Patents


Info

Publication number
CN110211117B
CN110211117B
Authority
CN
China
Prior art keywords
image
segmentation
detection
tip
target
Prior art date
Legal status
Active
Application number
CN201910480031.4A
Other languages
Chinese (zh)
Other versions
CN110211117A (en)
Inventor
胡贤良
胡新
彭粲
林光
Current Assignee
Guangdong Centrizen Technology Co ltd
Original Assignee
Guangdong Centrizen Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Centrizen Technology Co ltd
Priority to CN201910480031.4A
Publication of CN110211117A
Application granted
Publication of CN110211117B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration by the use of histogram techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention relates to a processing system for identifying a linear tubular object in a medical image, comprising an image preprocessing module, which acquires an image and processes it into a size suitable for detection and segmentation; a segmentation and detection module, which segments the tube path in the image and detects the position of the tube tip in the image; and a synthesis processing module, which outputs a clearly visible image. The processing system reconstructs the image to a uniform size using contrast-limited adaptive histogram equalization (CLAHE), improving the clarity of the image to be identified; the U-Net convolutional neural network of the segmentation and detection module performs tubular segmentation of the image, and the first loss function, the second loss function and the mean intersection over union (MIoU) shorten the computation time for identifying the linear tubular object.

Description

Processing system for identifying linear tubular objects in medical image and optimized segmentation method
Technical Field
The invention relates to the technical field of medical images, in particular to a processing system for identifying a linear tubular object in a medical image, and further provides a method for optimized segmentation and tip detection.
Background
Medical imaging refers to the techniques and processes used to non-invasively obtain images of the internal tissue of the human body, or of a part of it, for medical treatment or medical research. It covers two relatively independent research directions: medical imaging systems and medical image processing. The former concerns the image formation process, including research on imaging mechanisms, imaging equipment and imaging system analysis; the latter refers to further processing of an already acquired image, with the aim of restoring an insufficiently sharp image, highlighting certain characteristic information in the image, or classifying the image by pattern, among other purposes.
Line detection has applications in medical imaging, hair tracking and similar problems, and detecting the tip of a line is an important requirement in some medical CT image analysis. Traditional methods for identifying linear tubular objects combine classical image processing techniques, including centerline-extension algorithms that track the line from a given initial point, Hough line-transform detection, and adjustment assisted by an Active Shape Model (ASM). These methods involve heavy computation, have low accuracy, and perform poorly on medical scan images with high noise.
Therefore, a processing system for identifying a linear tubular object in a medical image is needed to overcome the defects of the prior art.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides a processing system for identifying a linear tubular object in a medical image.
The technical scheme of the invention is as follows:
A processing system for identifying a linear tubular object in a medical image, comprising:
an image preprocessing module, which acquires an image and processes it into a size suitable for detection and segmentation;
a segmentation and detection module, which segments the tube path in the image and detects the position of the tube tip in the image;
a synthesis processing module, which outputs a clearly visible image.
Preferably, the image preprocessing module is provided with contrast-limited adaptive histogram equalization (CLAHE); the preprocessing module acquires a grayscale image and reconstructs it from its non-standard size into an image of uniform size.
The preprocessing module reconstructs the image to 1024 × 1024 pixels.
Preferably, the segmentation and detection module comprises line segmentation and tip detection;
the line segmentation adopts a U-Net convolutional neural network to segment cells, cell interstitium and thin, line-shaped cell edges in medical images and to construct a depth model;
the tip detection marks the position of the tube tip in the detection image with a bounding box and extracts the target position of the tip; the tip detection extracts medical image features using a deep convolutional neural network (VGG-Net).
Further, the target position from tip detection comprises a candidate region and a bounding-box region. The candidate region for the tip target position is extracted using the Faster R-CNN object detection algorithm; the bounding-box region is obtained using a regression function based on a regression object detection network.
Further, a mean intersection over union (MIoU) is defined for the tip detection; the MIoU is the intersection-over-union between the ground-truth region data of the extracted medical image features and the predicted region data, averaged over the target classes.
Further, the tip detection is also provided with a first loss function L_cusp that guides the tip detection. In the first loss function L_cusp:
L_reg is the regression term for the target position coordinates of the bounding-box region; L_cls is the classification term for the targets in the medical image; p is the predicted probability that the target is a tip; I(x) is the indicator function of the target bounding-box region at the current coordinates; t is the coordinate set of the candidate region; R(x) is the regression function of the bounding-box region; p* is the ground-truth value of the tip probability; and t* is the ground-truth value of the candidate region coordinate set.
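The formula itself is published as an image in the patent (see FIG. 3) and is not reproduced in this text. A plausible reconstruction, assuming the conventional Faster R-CNN-style two-term objective that the listed symbols suggest, would be:

```latex
% Assumed form of the tip-detection guidance loss (not the patent's verbatim
% formula): a classification term on the tip probability plus a regression
% term on the box coordinates, active only where the indicator I(x) marks a
% target box region.
\[
L_{\mathrm{cusp}} \;=\; L_{\mathrm{cls}}\bigl(p,\, p^{*}\bigr)
\;+\; I(x)\, L_{\mathrm{reg}}\bigl(R(t),\, t^{*}\bigr)
\]
```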
Further, in the mean intersection over union MIoU:
N_cls is the total number of target classes; PredictResult_cls is the average of the predicted region data for each target class; GroundTruth_cls is the average of the ground-truth region data for each target class.
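This formula is likewise published only as an image (see FIG. 5). Under the conventional definition of mean intersection over union that the variable names point to, it would read:

```latex
% Assumed conventional definition of MIoU over the N_cls target classes
% (background included); not the patent's verbatim formula, which is
% published only as an image (FIG. 5).
\[
\mathrm{MIoU} \;=\; \frac{1}{N_{\mathrm{cls}}}
\sum_{\mathrm{cls}=1}^{N_{\mathrm{cls}}}
\frac{\bigl|\mathrm{PredictResult}_{\mathrm{cls}} \cap \mathrm{GroundTruth}_{\mathrm{cls}}\bigr|}
     {\bigl|\mathrm{PredictResult}_{\mathrm{cls}} \cup \mathrm{GroundTruth}_{\mathrm{cls}}\bigr|}
\]
```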
Further, a second loss function L_pipe is set in the objective function of the tube segmentation within the line segmentation. The second loss function L_pipe is:
L_pipe = λ · y_gt · (−log p) + (1 − y_gt) · (−log(1 − p))
where λ is a hyper-parameter that regulates the loss ratio and y_gt is 0 or 1.
Based on the above processing system for identifying a linear tubular object in a medical image, the invention also provides a method for optimized segmentation and tip detection, comprising the following steps:
S1: acquire medical images and compute the predicted region data for each target class to obtain the predicted average PredictResult_cls;
S2: obtain each target class of the ground-truth region of the target position from the segmentation and detection module and compute the ground-truth average GroundTruth_cls;
S3: obtain the value of the mean intersection over union from the MIoU formula;
S4: adjust the hyper-parameter λ according to the MIoU value, thereby shortening the time needed to segment the linear object and achieving optimized segmentation and tip detection.
The beneficial effects of the invention are as follows: compared with the prior art, the processing system for identifying a linear tubular object in a medical image reconstructs the image to a uniform size using contrast-limited adaptive histogram equalization (CLAHE), improving the clarity of the image to be identified; the U-Net convolutional neural network of the segmentation and detection module performs tubular segmentation of the image, and the first loss function, the second loss function and the mean intersection over union shorten the computation time for identifying the linear tubular object.
Description of the drawings:
FIG. 1 is a block diagram of a system for identifying a linear tubular object in a medical image according to the present invention.
Fig. 2a is a medical image processed by a prior-art method.
Fig. 2b is the image after processing by the image preprocessing module of the processing system for identifying a linear tubular object in a medical image according to the present invention.
FIG. 3 is the first loss function formula of the processing system for identifying a linear tubular object in a medical image according to the present invention.
FIG. 4 is the second loss function formula of the processing system for identifying a linear tubular object in a medical image according to the present invention.
FIG. 5 is the mean intersection over union (MIoU) formula of the processing system for identifying a linear tubular object in a medical image according to the present invention.
Fig. 6a shows a detection result (left image) of the processing system for identifying a linear tubular object in a medical image according to the present invention.
Fig. 6b shows a detection result (right image) of the processing system for identifying a linear tubular object in a medical image according to the present invention.
FIG. 7 is a flow chart of the method for optimized segmentation and tip detection according to the present invention.
Detailed Description
In order to make the technical scheme and technical effects of the invention more clear, the invention is further described below with reference to specific embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The processing system for identifying the linear object in the medical image can be applied to the field of images. The invention will now be further explained by means of specific embodiments in connection with the accompanying drawings.
Referring to fig. 1, the present invention provides a processing system for identifying a linear tubular object in a medical image, comprising:
an image preprocessing module, which acquires an image and processes it into a size suitable for detection and segmentation;
a segmentation and detection module, which segments the tube path in the image and detects the position of the tube tip in the image;
a synthesis processing module, which outputs a clearly visible image.
Referring to figs. 1, 2a and 2b, the image preprocessing module is provided with contrast-limited adaptive histogram equalization: it acquires a grayscale image from the DICOM file of the PICC data, processes the acquired grayscale image with contrast-limited adaptive histogram equalization (CLAHE), and reconstructs it into an image of uniform size. In this embodiment, because the width of the tube in the PICC data image is only 20 to 30 pixels and the box marking the tip position is only about 20 × 20 pixels, reconstructing the image too small shrinks the required features accordingly and easily causes recognition to fail; reconstructing the image too large demands more graphics processing unit (GPU) memory during recognition than a typical computer provides. The size of the reconstructed image in the image preprocessing module is therefore 1024 × 1024 pixels.
The image preprocessing module uses contrast-limited adaptive histogram equalization (CLAHE) to adjust the contrast of image blocks and to limit the range of brightness enhancement, which effectively avoids amplifying noise and highlights only the foreground, making it well suited to processing medical images.
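As an illustration of this preprocessing step, the sketch below applies CLAHE and resizes to 1024 × 1024 using OpenCV; the function name, the CLAHE parameters and the 8-bit normalization are assumptions for the example, not values taken from the patent.

```python
# Minimal preprocessing sketch: CLAHE followed by resizing to a fixed square
# size. Assumes the grayscale array was already extracted from the DICOM file
# (e.g. via pydicom); names and parameters are illustrative.
import cv2
import numpy as np

def preprocess(gray: np.ndarray, out_size: int = 1024) -> np.ndarray:
    # Normalize to 8-bit in case the DICOM pixel data is 12/16-bit.
    gray = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # CLAHE: clipLimit bounds the local brightness gain (limits noise
    # amplification); tileGridSize sets the size of the local blocks.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    equalized = clahe.apply(gray)
    # Reconstruct to a uniform 1024 x 1024 image as described above.
    return cv2.resize(equalized, (out_size, out_size),
                      interpolation=cv2.INTER_LINEAR)
```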
The segmentation and detection module includes line segmentation and tip detection. The line segmentation adopts a U-Net convolutional neural network to segment cells, cell interstitium and thin, line-shaped cell edges in medical images and to construct a depth model. The line-object segmentation mainly segments the tubular shape in the medical image; the tube segmentation is treated as a two-class classification of pixels, with a weighted cross-entropy function as the objective function. In this embodiment, the U-Net convolutional neural network is a derivative of the fully convolutional network (FCN), and semantic segmentation of the image is achieved through pixel-level classification; because a fully convolutional network has no fully connected layers, it is independent of the input size, and enlarging the input image does not cause the number of parameters to grow exponentially. The U-Net convolutional neural network can achieve accurate segmentation results with relatively little training data. FCN-type networks replace pooling operators with up-sampling operators, increasing the size of the feature map so that it matches the input size; the feature map obtained after each up-sampling step of the U-Net convolutional neural network has the same size as a feature map on the contracting (extraction) path, and the two same-size maps are bridged so that high-level global information is combined with the extracted local information, yielding more accurate segmentation.
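A compact PyTorch sketch of such an encoder-decoder with skip connections is given below; the depth, channel counts and class names are illustrative assumptions rather than the architecture specified in the patent.

```python
# Minimal U-Net-style network: contracting path, bottleneck, expanding path
# with up-sampling and skip connections that concatenate same-size feature
# maps. Sizes and channel counts are illustrative only.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=1, base=16):
        super().__init__()
        self.enc1 = double_conv(in_ch, base)
        self.enc2 = double_conv(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(base * 2, base * 4)
        # Up-sampling operators on the expanding path.
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = double_conv(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = double_conv(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, 1)  # per-pixel logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        # Skip connections: concatenate same-size encoder feature maps so
        # high-level global context is combined with local detail.
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# Example: a 1 x 1 x 1024 x 1024 preprocessed image yields a same-size logit map.
# logits = SmallUNet()(torch.randn(1, 1, 1024, 1024))
```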
The tip detection marks the position of the tube tip in the detection image with a bounding box and extracts the target position of the tip; the target position from tip detection comprises a candidate region and a bounding-box region. When detecting the tip position in a medical image, the tip detection extracts the features of the medical image with a deep convolutional neural network (VGG-Net), and the intersection-over-union between the ground-truth region data of the extracted features and the predicted region data, averaged over the target classes, is the mean intersection over union MIoU. In this embodiment, the candidate region for the tip target position is extracted using the Faster R-CNN object detection algorithm, and the bounding-box region is obtained using a regression function based on a regression object detection network. The candidate-region extraction network and the bounding-box extraction are connected, so the whole process can be trained end to end, which improves the speed of the tip detection.
The tip detection is also provided with a first loss function L_cusp that guides the tip detection. The formula of the first loss function L_cusp is shown in FIG. 3, where:
L_reg is the regression term for the target position coordinates of the bounding-box region; L_cls is the classification term for the targets in the medical image; p is the predicted probability that the target is a tip; I(x) is the indicator function of the target bounding-box region at the current coordinates; t is the coordinate set of the candidate region; R(x) is the regression function of the bounding-box region; p* is the ground-truth value of the tip probability; and t* is the ground-truth value of the candidate region coordinate set.
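A minimal sketch of how such a guidance loss could be computed is shown below, assuming the conventional two-term detection objective (cross-entropy classification plus smooth-L1 box regression gated by the positive indicator). Since the patent's formula is published only as an image, this is an assumed reading, not the patented formula, and the function and argument names are illustrative.

```python
# Assumed two-term tip-detection loss: classification term plus a box
# regression term that only counts where the ground truth marks a tip.
import torch
import torch.nn.functional as F

def tip_loss(p_logits, p_star, t_pred, t_star):
    """p_logits: (N, 2) tip/non-tip scores; p_star: (N,) long tensor of 0/1
    labels; t_pred, t_star: (N, 4) predicted and ground-truth box coordinates."""
    l_cls = F.cross_entropy(p_logits, p_star)
    # I(x): the regression term is active only for positive (tip) samples.
    pos = p_star == 1
    if pos.any():
        l_reg = F.smooth_l1_loss(t_pred[pos], t_star[pos])
    else:
        l_reg = torch.zeros((), device=p_logits.device)
    return l_cls + l_reg
```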
The formula of the mean intersection over union MIoU is shown in FIG. 5, where N_cls is the total number of target classes in the medical image, including the background class; PredictResult_cls is the average of the predicted region data for each target class; and GroundTruth_cls is the average of the ground-truth region data for each target class. In this embodiment, the MIoU is used to measure the accuracy of the line segmentation. For example, because the background is also counted as a class in the metric, when the medical image contains a single class of tubular object whose area is tiny compared with the background, the depth model of the line segmentation tends to predict everything as background, so that the MIoU fluctuates around 0.5.
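For reference, a per-class intersection-over-union averaged over the classes (background included) can be computed as in the sketch below; the function and argument names are illustrative, not taken from the patent.

```python
# Per-class IoU averaged over classes present in the prediction or ground
# truth; background is counted as class 0, matching the discussion above.
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, n_cls: int) -> float:
    """pred, gt: integer label maps of equal shape; n_cls: total number of
    target classes, background included."""
    ious = []
    for c in range(n_cls):
        pred_c, gt_c = pred == c, gt == c
        union = np.logical_or(pred_c, gt_c).sum()
        if union == 0:          # class absent from both maps: skip it
            continue
        inter = np.logical_and(pred_c, gt_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0
```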
The objective function of the tube segmentation within the line segmentation is provided with a second loss function L_pipe. The formula of the second loss function L_pipe is:
L_pipe = λ · y_gt · (−log p) + (1 − y_gt) · (−log(1 − p))
where λ is a hyper-parameter that regulates the loss ratio and y_gt is 0 or 1: y_gt = 0 indicates that the pixel is not the target, and y_gt = 1 indicates that the pixel is the target.
In the second loss function L_pipe, the initial value of the hyper-parameter λ is determined by calculating the average area ratio of the tube annotation in the PICC data to the background in the medical image.
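A sketch of the weighted cross-entropy L_pipe, together with one way to derive an initial λ from the tube-to-background area ratio, is given below; the initialization heuristic is an assumed reading of the description above, and the helper names are illustrative.

```python
# L_pipe as written above, averaged over pixels, plus an assumed heuristic
# for initializing lambda from the background/foreground area ratio of the
# PICC training masks.
import torch

def pipe_loss(p, y_gt, lam):
    """p: predicted foreground probability per pixel (after sigmoid);
    y_gt: 0/1 ground-truth mask (float tensor); lam: foreground weight."""
    eps = 1e-7
    p = p.clamp(eps, 1.0 - eps)
    return (lam * y_gt * (-torch.log(p))
            + (1.0 - y_gt) * (-torch.log(1.0 - p))).mean()

def initial_lambda(masks):
    """Assumed reading of 'average area ratio': background pixels divided by
    tube pixels, averaged over the training masks."""
    fg_ratio = torch.stack([m.float().mean() for m in masks]).mean()
    return float((1.0 - fg_ratio) / fg_ratio)
```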
In the segmentation and detection module, changing the λ hyper-parameter changes the scale of the second loss function value accordingly, so a relatively independent quantity that reflects the model's performance, namely the mean intersection over union MIoU, must be consulted when judging the effect. During training of the U-Net convolutional neural network, the MIoU increases gradually; when the increase slows down, the control that λ exerts over the proportions of the second loss function value has reached its current optimum, and λ is then fine-tuned to further control the proportions of the loss terms so that the predicted region approaches the target position. In this way the processing system for identifying a linear tubular object in a medical image achieves the recognition and segmentation effect in a short time.
Referring to figs. 6a and 6b, the synthesis processing module is mainly provided with a graphics processor for processing the image data.
The processing system for identifying a linear tubular object in a medical image reconstructs the image to a uniform size using contrast-limited adaptive histogram equalization (CLAHE), improving the clarity of the image to be identified; the U-Net convolutional neural network of the segmentation and detection module performs tubular segmentation of the image, and the first loss function, the second loss function and the mean intersection over union shorten the computation time for identifying the linear tubular object.
Referring to fig. 7, the present invention further provides a method for optimized segmentation and tip detection based on the above processing system for identifying a linear tubular object in a medical image, comprising the following steps:
S1: acquire medical images and compute the predicted region data for each target class to obtain the predicted average PredictResult_cls;
S2: obtain each target class of the ground-truth region of the target position from the segmentation and detection module and compute the ground-truth average GroundTruth_cls;
S3: obtain the value of the mean intersection over union from the MIoU formula;
S4: adjust the hyper-parameter λ according to the MIoU value, thereby shortening the time needed to segment the linear object and achieving optimized segmentation and tip detection (an illustrative iteration of these steps is sketched below).
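The loop below sketches how steps S1-S4 could be iterated in practice: train with the current λ, evaluate the MIoU, and fine-tune λ once the MIoU stops rising. The callables, patience threshold and step factor are illustrative assumptions, not part of the claimed method.

```python
# Schematic of the S1-S4 tuning loop. `train_one_epoch` and `evaluate_miou`
# are caller-supplied callables (placeholders for the training step and for
# the MIoU evaluation of steps S1-S3); thresholds and the step factor are
# illustrative assumptions.
def tune_lambda(train_one_epoch, evaluate_miou, lam,
                epochs=50, patience=3, step=0.9):
    best_miou, stall = 0.0, 0
    for _ in range(epochs):
        train_one_epoch(lam)              # optimize with L_pipe weighted by lam
        miou = evaluate_miou()            # S1-S3: MIoU on held-out PICC images
        if miou > best_miou + 1e-3:
            best_miou, stall = miou, 0
        else:
            stall += 1
        if stall >= patience:             # S4: MIoU rise has slowed,
            lam *= step                   # fine-tune lambda and keep training
            stall = 0
    return lam, best_miou
```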
The foregoing is a further detailed description of the invention in connection with the preferred embodiments, and it is not intended that the invention be limited to the specific embodiments described. For those skilled in the art, the architecture of the invention may be varied flexibly without departing from the concept of the invention, and a series of products may be derived from it. Such simple derivations or substitutions should all be construed as falling within the scope of the invention as defined by the appended claims.

Claims (4)

1. A processing system for identifying a linear tubular object in a medical image, comprising:
an image preprocessing module, which acquires an image and processes it into a size suitable for detection and segmentation;
a segmentation and detection module, which segments the tube path in the image and detects the position of the tube tip in the image;
a synthesis processing module, which outputs a clearly visible image;
wherein the segmentation and detection module comprises line segmentation and tip detection;
the line segmentation adopts a U-Net convolutional neural network to segment cells, cell interstitium and thin, line-shaped cell edges in medical images and to construct a depth model;
the tip detection marks the position of the tube tip in the detection image with a bounding box and extracts the target position of the tip; the tip detection extracts medical image features using a deep convolutional neural network (VGG-Net);
the tip detection is provided with a mean intersection over union MIoU, the MIoU being the intersection-over-union between the ground-truth region data of the extracted medical image features and the predicted region data, averaged over the target classes;
the tip detection is also provided with a first loss function L_cusp that guides the tip detection, in which L_reg is the regression term for the target position coordinates of the bounding-box region, L_cls is the classification term for the targets in the medical image, p is the predicted probability that the target is a tip, I(x) is the indicator function of the target bounding-box region at the current coordinates, t is the coordinate set of the candidate region, R(x) is the regression function of the bounding-box region, p* is the ground-truth value of the tip probability, and t* is the ground-truth value of the candidate region coordinate set;
the objective function of the tube segmentation within the line segmentation is provided with a second loss function L_pipe, where L_pipe = λ · y_gt · (−log p) + (1 − y_gt) · (−log(1 − p)), λ is a hyper-parameter that regulates the loss ratio, and y_gt is 0 or 1;
in the mean intersection over union MIoU, N_cls is the total number of target classes, PredictResult_cls is the average of the predicted region data for each target class, and GroundTruth_cls is the average of the ground-truth region data for each target class;
the target position from tip detection comprises a candidate region and a bounding-box region, the candidate region for the tip target position being extracted using the Faster R-CNN object detection algorithm, and the bounding-box region being obtained using a regression function based on a regression object detection network.
2. The processing system for identifying a linear tubular object in a medical image according to claim 1, wherein the image preprocessing module is provided with contrast-limited adaptive histogram equalization (CLAHE); the preprocessing module acquires a grayscale image and reconstructs it from its non-standard size into an image of uniform size.
3. The system of claim 2, wherein the preprocessing module reconstructs the image to a size of 1024 x 1024 pixels.
4. A method of optimized segmentation and tip detection based on the processing system for identifying a linear tubular object in a medical image according to any one of claims 1-3, characterized in that the method comprises the following steps:
S1: acquiring medical images and computing the predicted region data for each target class to obtain the predicted average PredictResult_cls;
S2: obtaining each target class of the ground-truth region of the target position from the segmentation and detection module and computing the ground-truth average GroundTruth_cls;
S3: obtaining the value of the mean intersection over union from the MIoU formula;
S4: adjusting the hyper-parameter λ according to the MIoU value, thereby shortening the time needed to segment the linear object and achieving optimized segmentation and tip detection.
CN201910480031.4A 2019-05-31 2019-05-31 Processing system for identifying linear tubular objects in medical image and optimized segmentation method Active CN110211117B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910480031.4A CN110211117B (en) 2019-05-31 2019-05-31 Processing system for identifying linear tubular objects in medical image and optimized segmentation method


Publications (2)

Publication Number Publication Date
CN110211117A CN110211117A (en) 2019-09-06
CN110211117B true CN110211117B (en) 2023-08-15

Family

ID=67790640

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910480031.4A Active CN110211117B (en) 2019-05-31 2019-05-31 Processing system for identifying linear tubular objects in medical image and optimized segmentation method

Country Status (1)

Country Link
CN (1) CN110211117B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110310292B (en) * 2019-06-28 2021-02-02 浙江工业大学 Wrist reference bone segmentation method
CN110880176B (en) * 2019-11-19 2022-04-26 浙江大学 Semi-supervised industrial image defect segmentation method based on countermeasure generation network
CN112396565A (en) * 2020-11-19 2021-02-23 同济大学 Method and system for enhancing and segmenting blood vessels of images and videos of venipuncture robot

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001078005A2 (en) * 2000-04-11 2001-10-18 Cornell Research Foundation, Inc. System and method for three-dimensional image rendering and analysis
JP2005266452A (en) * 2004-03-19 2005-09-29 ▲ぎょく▼瀚科技股▲ふん▼有限公司 Method and device for correcting luminance of liquid crystal display device
JP5110005B2 (en) * 2009-02-23 2012-12-26 株式会社島津製作所 Correction position information acquisition method, positional deviation correction method, image processing apparatus, and radiation imaging apparatus
WO2015085320A1 (en) * 2013-12-06 2015-06-11 The Johns Hopkins University Methods and systems for analyzing anatomy from multiple granularity levels
US10467741B2 (en) * 2015-02-26 2019-11-05 Washington University CT simulation optimization for radiation therapy contouring tasks
CN105719278B (en) * 2016-01-13 2018-11-16 西北大学 A kind of medical image cutting method based on statistics deformation model
TW201903708A (en) * 2017-06-06 2019-01-16 國立陽明大學 Method and system for analyzing digital subtraction angiography images
CN107945181A (en) * 2017-12-30 2018-04-20 北京羽医甘蓝信息技术有限公司 Treating method and apparatus for breast cancer Lymph Node Metastasis pathological image
CN108921227B (en) * 2018-07-11 2022-04-08 广东技术师范学院 Glaucoma medical image classification method based on capsule theory
CN109191434A (en) * 2018-08-13 2019-01-11 阜阳师范学院 Image detecting system and detection method in a kind of cell differentiation
CN109472781B (en) * 2018-10-29 2022-02-11 电子科技大学 Diabetic retinopathy detection system based on serial structure segmentation
CN109584248B (en) * 2018-11-20 2023-09-08 西安电子科技大学 Infrared target instance segmentation method based on feature fusion and dense connection network

Also Published As

Publication number Publication date
CN110211117A (en) 2019-09-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant