CN109671094A - Fundus image blood vessel segmentation method based on frequency domain classification - Google Patents

Fundus image blood vessel segmentation method based on frequency domain classification

Info

Publication number
CN109671094A
CN109671094A (application CN201811331874.XA)
Authority
CN
China
Prior art keywords
block
fundus image
eye fundus
low
sampling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811331874.XA
Other languages
Chinese (zh)
Other versions
CN109671094B (en)
Inventor
武薇
田纯
范影乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201811331874.XA priority Critical patent/CN109671094B/en
Publication of CN109671094A publication Critical patent/CN109671094A/en
Application granted granted Critical
Publication of CN109671094B publication Critical patent/CN109671094B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/92Dynamic range modification of images or parts thereof based on global image properties
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a fundus image blood vessel segmentation method based on frequency-domain classification. To address the inaccurate vessel segmentation of traditional methods that process the original fundus image over its full frequency range, the method first obtains the low- and high-frequency information of the fundus image through frequency-domain preprocessing, and then constructs dedicated multi-path low-dimensional and high-dimensional feature extraction convolutional networks. The low-dimensional feature extraction network contains two symmetric left and right paths and mainly extracts and accurately localizes the global contour information of the fundus vessels. The high-dimensional feature extraction network also contains two symmetric left and right paths; during up-sampling along the right path, its feature maps are fused with those of the symmetric left path by merging channel numbers, which restores vessel boundary information lost during down-sampling and further sharpens the fine details of the vessel boundary distribution. Finally, the high- and low-dimensional feature maps are fused with convolution kernels to obtain a more accurate fundus image blood vessel segmentation map.

Description

Fundus image blood vessel segmentation method based on frequency-domain classification
Technical field
The present invention relates to machine learning and medical image processing, and in particular to a fundus image blood vessel segmentation method based on frequency-domain classification.
Background art
Clinical research shows that when fundus lesions occur, the morphological structure of the retinal vessels is prone to pathological change, which manifests as changes in vessel length, width and angle, and as vascular proliferation. Segmenting the blood vessels in fundus images therefore helps with disease screening, diagnosis and analysis. At present, however, vessel segmentation is mostly performed manually, which not only requires extensive clinical experience but also consumes a great deal of the physician's time and effort.
The retinal vasculature has a tree-like structure with many branches; at the bifurcations the fine vessels have very low contrast with the background and their contours and boundaries are blurred, which makes segmenting the fine vessels extremely difficult. In recent years scholars have proposed many methods, including processing retinal images with Gaussian matched filtering combined with thresholding, and probability-based tracking methods that combine local gray-scale information with vessel connectivity to detect retinal vessels. A retinal vessel segmentation method combining deep learning with conventional methods has also been proposed, in which the output of a fully convolutional network is fused with a vessel probability map carrying shallow-layer information to obtain the desired retinal vessel segmentation map. However, because the segmentation process does not fully consider the relationship between the spatial domain and the frequency domain of the image, attempting to process the full-frequency-band data simply by deepening the convolutional neural network not only significantly reduces the computing capability of the network but also yields segmentation results that are not fine enough and whose spatial consistency is hard to maintain. The above methods can therefore extract most of the retinal vessels, but for low-contrast fundus images, and especially for fine vessels, they generally fail to achieve satisfactory segmentation.
Summary of the invention
To solve the above problems, the invention proposes a fundus image blood vessel segmentation method based on frequency-domain classification. Considering the significant differences in contrast among retinal vessels, the method abandons the traditional mode of feeding the original fundus image directly into a convolutional neural network. Instead, it first performs frequency-domain classification on the fundus image to extract the global contour information and the local detail information separately, and then constructs dedicated, independent multi-path convolutional neural networks to extract and fuse the retinal vessels and obtain the segmentation map. Specifically, the present invention converts the fundus image from the spatial domain to the frequency domain, extracts the low-frequency and high-frequency information of the image with filters, transforms the low-frequency and high-frequency information back to the spatial domain, inputs them separately into the constructed multi-path convolutional neural networks for feature extraction, and finally fuses the feature maps to obtain the segmentation map. The method comprises the following steps:
Step 1: frequency-domain classification processing of the fundus image
The original fundus image f(x, y) is first transformed into the frequency domain F(u, v) by the Fourier transform; the low-frequency and high-frequency components of the fundus image are then obtained with a Gaussian low-pass filter and a Gaussian high-pass filter respectively; finally, the inverse Fourier transform is applied to the low-frequency and high-frequency information to obtain the frequency-domain classification results of the fundus image, namely f1(x, y) corresponding to the low-frequency information and f2(x, y) corresponding to the high-frequency information;
Step 2: construct the low-dimensional feature extraction convolutional network, in which the low-frequency component is used to describe the global contour characteristics of the fundus image; this facilitates separating the background from the target vessels and improves the accuracy of predicting vessel boundary information. The network is divided into left and right paths; the whole network contains 8 residual blocks, 2 down-samplings, 2 up-samplings and 4 convolutional layers. Each residual block contains two 3 × 3 dilated convolutional layers, and every two residual blocks together with one down-sampling or one up-sampling form 1 block, giving 4 blocks in total. The output of each block is activated by the ReLU function and then normalized. The left path extracts the overall features of the fundus vessel distribution through 2 blocks; the 2 blocks of the right path are used for accurate localization. Each block on the left path is the combination of two residual blocks and one down-sampling, and each block on the right path is the combination of two residual blocks and one up-sampling. The low-frequency image f1(x, y) obtained in Step 1 is input into the constructed low-dimensional feature extraction convolutional network to obtain the feature map FL;
Step 3: construct the high-dimensional feature extraction convolutional network, in which the high-frequency component is used to describe the detail characteristics of the fundus image, for example regions where the brightness changes sharply. This network is likewise divided into left and right paths; the whole network contains 14 residual blocks, 4 down-samplings, 4 up-samplings and 3 convolutional layers. Each residual block contains two 3 × 3 dilated convolutional layers; every two residual blocks together with one down-sampling or up-sampling form a block, giving 6 such blocks, and 1 block at the junction of the left and right paths contains two residual blocks, one down-sampling and one up-sampling. The output of each block is activated by the ReLU function and then normalized. The left path captures the fundus vessel contour information through 3 blocks; the right path passes through 3 blocks and performs a fusion operation with the symmetric left path, in which the feature-map channel numbers are merged to recover the vessel boundary information lost during down-sampling, further sharpening the detail features of the fundus vessel distribution and yielding clearer vessel boundaries. Each block on the left path is the combination of two residual blocks and one down-sampling, and each block on the right path is the combination of two residual blocks and one up-sampling. The high-frequency image f2(x, y) obtained in Step 1 is input into the constructed high-dimensional feature extraction convolutional network to obtain the feature map FH;
Step 4: fuse the two feature maps FL and FH by adding their corresponding channels, obtain the fused high/low-dimensional feature map F′ with a 1 × 1-32 convolution kernel, and then convert F′ into the single-channel feature map F″ with a 1 × 1-1 convolution kernel. After activation by the ReLU function, the output pixel values of the vessel segmentation map corresponding to the original image f(x, y) are obtained and compared with the corresponding known vessel segmentation labels using the squared difference as the loss; the loss values of all training images are accumulated, the result is denoted loss, and the high- and low-dimensional feature extraction convolutional networks are trained with gradient descent. Training ends when the loss value loss satisfies the convergence condition;
Step 5: after the high- and low-dimensional feature extraction convolutional networks are trained, a fundus image with unknown labels is processed through Steps 1 to 4 to obtain its single-channel feature map F″, which, after ReLU activation, is the blood vessel segmentation result of the fundus image.
The invention has the following advantages:
(1) The present invention changes the traditional full-frequency-band processing mode for fundus images and thereby avoids the impact of deepening the convolutional neural network on its computing capability. Following the idea of classified processing, the low-frequency and high-frequency information of the fundus image are obtained separately, and the low-dimensional and high-dimensional feature extraction convolutional networks are then designed in a targeted manner.
(2) Low-dimensional and high-dimensional feature extraction convolutional networks are constructed; through the convolution, up-sampling and down-sampling operations of the multiple paths, the overall and detail features of the fundus image are extracted, and the high- and low-dimensional feature maps are finally fused with convolution kernels to obtain an accurate vessel segmentation map.
Description of the drawings
To make the purpose, technical solution and beneficial effects of the present invention clearer, the following drawings are provided for illustration:
Fig. 1 is a schematic diagram of the fundus image blood vessel segmentation network structure of the present invention;
Fig. 2 is the fundus image blood vessel segmentation flow chart of the present invention.
Specific embodiment
The specific implementation process of the present invention will now be explained with reference to the accompanying drawings. Fig. 1 is a schematic diagram of the fundus image blood vessel segmentation network structure of the present invention, and Fig. 2 is the fundus image blood vessel segmentation flow chart of the present invention.
Step 1: frequency-domain classification processing of the fundus image. The original image is first preprocessed and transformed into the frequency domain by the Fourier transform, as shown in formula (1):
F(u, v) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y)·e^{−j2π(ux/M + vy/N)}   (1)
where M and N are the numbers of rows and columns of the image; f(x, y) is the input image; x and y are the spatial-domain horizontal and vertical coordinates; F(u, v) is the frequency-domain value after the Fourier transform; and u and v are the frequency-domain horizontal and vertical coordinates. f(x, y) is then passed through a Gaussian low-pass filter (GLPF) and a Gaussian high-pass filter (GHPF), whose transfer functions are H1(u, v) and H2(u, v) respectively, as shown in formulas (2) and (3):
H1(u, v) = exp(−D²(u, v) / (2·D0²))   (2)
H2(u, v) = 1 − exp(−D²(u, v) / (2·D0²))   (3)
where D(u, v) is the distance from the origin at the center of the Fourier spectrum and D0 is the cutoff frequency. Multiplying formula (1) by formulas (2) and (3) gives the low-frequency component G1(u, v) and the high-frequency component G2(u, v) of the fundus image, respectively. Applying the inverse Fourier transform to G1(u, v) and G2(u, v) yields the filtered low-frequency image f1(x, y) and high-frequency image f2(x, y), as shown in formula (4):
f1(x, y) = F⁻¹[G1(u, v)],  f2(x, y) = F⁻¹[G2(u, v)]   (4)
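For illustration, the following is a minimal NumPy sketch of the frequency-domain classification of Step 1 (formulas (1)–(4)). The cutoff frequency D0 = 30 is an assumed example value, not one fixed by the patent.

```python
import numpy as np

def frequency_split(f, D0=30.0):
    """Split a grayscale fundus image f (2-D array) into a low-frequency image
    f1 and a high-frequency image f2 via Gaussian low/high-pass filtering."""
    M, N = f.shape
    F = np.fft.fftshift(np.fft.fft2(f))               # centred spectrum F(u, v)

    u = np.arange(M) - M / 2.0
    v = np.arange(N) - N / 2.0
    U, V = np.meshgrid(u, v, indexing='ij')
    D2 = U ** 2 + V ** 2                               # D(u, v)^2: squared distance from centre

    H1 = np.exp(-D2 / (2.0 * D0 ** 2))                 # GLPF transfer function H1(u, v)
    H2 = 1.0 - H1                                      # GHPF transfer function H2(u, v)

    G1, G2 = F * H1, F * H2                            # low- and high-frequency components
    f1 = np.real(np.fft.ifft2(np.fft.ifftshift(G1)))   # low-frequency image f1(x, y)
    f2 = np.real(np.fft.ifft2(np.fft.ifftshift(G2)))   # high-frequency image f2(x, y)
    return f1, f2
```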
Step 2: construct the low-dimensional feature extraction convolutional network. The low-frequency image mainly reflects the global contour information of the fundus image and has the attributes of low-dimensional features; its grasp of the global information can, to a certain extent, improve detection accuracy. The low-frequency image f1(x, y) obtained in Step 1 is therefore input into the constructed low-dimensional feature extraction convolutional network, shown in the solid box of Fig. 1, which consists of 4 convolutional layers and 4 blocks. The left path has 2 blocks, each containing two residual blocks and one down-sampling, and is mainly used to extract the overall features of the fundus vessel distribution; the right path has 2 blocks, each containing two residual blocks and one up-sampling, and is mainly used for accurate localization of the fundus vessels. Down-sampling uses 2 × 2 max pooling and up-sampling uses bilinear interpolation. The specific steps are as follows:
①: when the image f1(x, y), after a 1 × 1-32 convolution kernel, passes through the first block on the left, in which each residual block contains 64 convolution kernels, it is converted into the 64-dimensional feature map F1, as shown in formula (5):
F1 = Res2(pool(conv(f1(x, y))))   (5)
where conv denotes the convolution operation; pool denotes the pooling operation, here 2 × 2 max pooling, likewise hereinafter; and Res2 denotes the operation of 2 residual blocks, likewise hereinafter.
②: F1 is activated by the ReLU function and then normalized, and passes through the 2nd block on the left, which contains 128 convolution kernels, giving the feature map F2, as shown in formula (6):
F2 = Res2(pool(Norm(ReLu(F1))))   (6)
where ReLu denotes the activation function and Norm denotes the normalization processing, likewise hereinafter.
③: the feature map obtained by passing F2 through a 1 × 1 × 256 convolution kernel passes through the 1st block of the right path, giving the feature map F3, as shown in formula (7):
F3 = unsampling(Res2(conv(F2)))   (7)
where unsampling denotes the up-sampling operation, here bilinear interpolation, likewise hereinafter.
④: F3 passes through the 2nd block of the right path, giving the final feature map FL of the low-frequency image, as shown in formula (8):
FL = unsampling(Res2(Norm(ReLu(F3))))   (8)
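For illustration, the following is a minimal PyTorch sketch of the low-dimensional feature extraction convolutional network described by formulas (5)–(8). Several details are assumptions not fixed by the text: the input is taken as single-channel, the dilation rate of the 3 × 3 dilated ("empty") convolutions is 2, Norm is taken as batch normalization, the residual shortcut uses a 1 × 1 projection when channel counts differ, and the right-path blocks are given 256 channels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """One residual block: two 3x3 dilated ('empty') convolutions with an
    identity shortcut (1x1 projection when the channel count changes)."""
    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=dilation, dilation=dilation)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=dilation, dilation=dilation)
        self.proj = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        out = F.relu(self.conv1(x))
        return F.relu(self.conv2(out) + self.proj(x))

def res2(in_ch, out_ch):
    """Res2: two stacked residual blocks, as used in formulas (5)-(8)."""
    return nn.Sequential(ResidualBlock(in_ch, out_ch), ResidualBlock(out_ch, out_ch))

def up(x):
    """unsampling: bilinear up-sampling by a factor of 2."""
    return F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)

class LowDimNet(nn.Module):
    """Low-dimensional feature extraction network: left path (2 blocks with
    2x2 max-pool down-sampling), right path (2 blocks with bilinear
    up-sampling), following formulas (5)-(8)."""
    def __init__(self):
        super().__init__()
        self.conv_in = nn.Conv2d(1, 32, 1)      # 1x1-32 input convolution (single-channel input assumed)
        self.block1 = res2(32, 64)              # left block 1: 64 kernels
        self.norm1 = nn.BatchNorm2d(64)         # Norm (assumed batch normalization)
        self.block2 = res2(64, 128)             # left block 2: 128 kernels
        self.conv_mid = nn.Conv2d(128, 256, 1)  # 1x1x256 convolution
        self.block3 = res2(256, 256)            # right block 1 (width assumed)
        self.norm3 = nn.BatchNorm2d(256)
        self.block4 = res2(256, 256)            # right block 2 (width assumed)

    def forward(self, f1):
        F1 = self.block1(F.max_pool2d(self.conv_in(f1), 2))        # formula (5)
        F2 = self.block2(F.max_pool2d(self.norm1(F.relu(F1)), 2))  # formula (6)
        F3 = up(self.block3(self.conv_mid(F2)))                    # formula (7)
        FL = up(self.block4(self.norm3(F.relu(F3))))               # formula (8)
        return FL
```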
Step 3: construct the high-dimensional feature extraction convolutional network. The high-frequency image mainly reflects the contour details of the image and further reinforces the image content beyond the low-frequency information. To match the characteristics of the high-frequency image, the constructed network pays more attention to detail and mainly captures the detail features of the different layers. The high-frequency image f2(x, y) obtained in Step 1 is therefore input into the constructed high-frequency feature extraction convolutional network, shown in the dashed box of Fig. 1, which contains 3 convolutional layers and 7 blocks: 3 blocks on each of the left and right paths, each containing two residual blocks and one down-sampling or up-sampling, and 1 block at the junction of the left and right paths containing two residual blocks, one down-sampling and one up-sampling. The convolutional layers of this network share weights with the convolutional layers of the low-dimensional feature extraction convolutional network. The specific steps are as follows:
①: after a 1 × 1-32 convolution kernel, the image passes through the 3 blocks of the left path, each block combining two residual blocks and one down-sampling, giving the feature map F3′, as shown in formula (9):
Fi′ = Res2(pool(Norm(ReLu(F′i−1))))   (9)
where i = 1, 2, 3 is the index of the left-path block, and F0′ is given by formula (10):
F0′ = conv(f2(x, y))   (10)
②: F3′ passes through the block at the junction of the left and right paths, which combines one down-sampling, two residual blocks and one up-sampling, giving the feature map F′4, as shown in formula (11):
F′4 = unsampling(Res2(pool(Norm(ReLu(F3′)))))   (11)
③: F4′ passes through the 3 blocks of the right path, each block combining two residual blocks and one up-sampling. During each up-sampling, the channel numbers are merged with the feature map of the symmetric left path, and the merged feature map is then processed further, compensating for the spatial information lost during pooling and strengthening the detail features of the fundus vessels, giving the sharper feature map FH, as shown in formula (12):
where j = 4, 5, 6, F′j denotes the feature map after passing through the j-th block, and copy(F′7−j) + F′j denotes merging the channel numbers of the feature map F′j with the feature map of the symmetric left path.
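For illustration, the following is a minimal PyTorch sketch of one right-path block of the high-dimensional network, focusing on the fusion operation: the incoming feature map is concatenated along the channel dimension with the feature map copied from the symmetric left-path block, passed through two residual blocks, and up-sampled with bilinear interpolation. The ResidualBlock here repeats the helper from the previous sketch, and the channel widths are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Two 3x3 dilated convolutions with an identity shortcut, as in the
    low-dimensional network sketch above."""
    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=dilation, dilation=dilation)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=dilation, dilation=dilation)
        self.proj = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        return F.relu(self.conv2(F.relu(self.conv1(x))) + self.proj(x))

class RightFusionBlock(nn.Module):
    """One right-path block: merge channel numbers with the copied left-path
    feature map, apply two residual blocks, then up-sample bilinearly."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.res2 = nn.Sequential(ResidualBlock(in_ch + skip_ch, out_ch),
                                  ResidualBlock(out_ch, out_ch))

    def forward(self, x, skip):
        # skip is the feature map copied from the symmetric left-path block
        if skip.shape[-2:] != x.shape[-2:]:
            skip = F.interpolate(skip, size=x.shape[-2:],
                                 mode='bilinear', align_corners=False)
        x = torch.cat([x, skip], dim=1)   # copy(F'_{7-j}) merged with F'_j (channel concatenation)
        x = self.res2(x)
        return F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)
```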
Step 4: fuse the feature maps FL and FH obtained in Steps 2 and 3 by adding their corresponding channels, and use a 1 × 1-32 convolution kernel to determine the corresponding feature map F′. A 1 × 1-1 convolution kernel then converts F′ into the single-channel feature map F″, which, after activation by the ReLU function, gives the output pixel values of the vessel segmentation map corresponding to the original image f(x, y). These outputs are compared with the corresponding known vessel segmentation labels using the squared difference as the loss, and the loss values of all training images are accumulated and denoted loss, as shown in formula (13):
loss = Σ_{i=1}^{n} Σ_{j=1}^{M} Σ_{k=1}^{N} (o_i(j, k) − y_i(j, k))²   (13)
where n is the number of training images; M and N are the height and width of the training images; o_i(j, k) is the output pixel value of the vessel segmentation map of the i-th training image at position (j, k); and y_i(j, k) is the corresponding known vessel segmentation label value. Finally, the loss value is back-propagated, and the weights and biases of the high- and low-dimensional feature extraction convolutional networks are updated with gradient descent. Training ends when the loss value falls below a threshold ε, which may be set to 1–3% of the total number of sampled pixels of the training images, yielding the trained network model.
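For illustration, the following is a minimal PyTorch sketch of the Step 4 fusion head and the summed squared-difference loss of formula (13). It assumes FL and FH have the same channel count (so that corresponding channels can be added) and, for simplicity, applies one gradient-descent update per batch instead of accumulating the loss over the whole training set; the optimizer settings are also assumptions.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Step 4 head: add corresponding channels of F_L and F_H, apply a 1x1-32
    convolution (fused map F'), a 1x1-1 convolution (single-channel map F''),
    and a ReLU activation."""
    def __init__(self, ch):
        super().__init__()
        self.conv32 = nn.Conv2d(ch, 32, 1)
        self.conv1 = nn.Conv2d(32, 1, 1)

    def forward(self, FL, FH):
        fused = FL + FH                        # add corresponding channels
        return torch.relu(self.conv1(self.conv32(fused)))

def train_step(head, FL, FH, label, optimizer):
    """One gradient-descent update with the squared-difference loss of
    formula (13), computed here per batch rather than over the whole set."""
    pred = head(FL, FH)
    loss = torch.sum((pred - label) ** 2)      # squared difference vs. known labels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage (assumed channel count and optimizer):
# head = FusionHead(256)
# optimizer = torch.optim.SGD(head.parameters(), lr=1e-3)
```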
Step 5: after the low- and high-dimensional feature extraction convolutional networks are trained, a fundus image with unknown labels is processed through Steps 1 to 4 to obtain its single-channel feature map F″, which, after ReLU activation, is the blood vessel segmentation result of the fundus image.
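For illustration, the following usage sketch ties the previous pieces together for Step 5 inference on an unlabeled fundus image. It reuses the frequency_split helper from the Step 1 sketch; low_net, high_net and head stand for the trained low-dimensional network, high-dimensional network and fusion head and are passed in as arguments, under the same assumptions as above.

```python
import numpy as np
import torch

def segment_fundus(image, low_net, high_net, head, D0=30.0):
    """Segment an unlabeled grayscale fundus image (2-D NumPy array).
    frequency_split is the helper defined in the Step 1 sketch."""
    f1, f2 = frequency_split(image, D0)                # Step 1: frequency-domain split
    as_batch = lambda a: torch.from_numpy(a).float()[None, None]  # -> 1 x 1 x H x W tensor
    with torch.no_grad():
        FL = low_net(as_batch(f1))                     # Step 2: low-dimensional features
        FH = high_net(as_batch(f2))                    # Step 3: high-dimensional features
        seg = head(FL, FH)                             # Step 4: fusion + ReLU
    return seg.squeeze().cpu().numpy()                 # vessel segmentation map
```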

Claims (1)

1. A fundus image blood vessel segmentation method based on frequency-domain classification, characterized in that the method specifically comprises the following steps:
Step 1: frequency-domain classification processing of the fundus image
The original fundus image f(x, y) is first transformed into the frequency domain F(u, v) by the Fourier transform; the low-frequency and high-frequency components of the fundus image are then obtained with a Gaussian low-pass filter and a Gaussian high-pass filter respectively; finally, the inverse Fourier transform is applied to the low-frequency and high-frequency information of the fundus image to obtain the frequency-domain classification results of the fundus image, namely f1(x, y) corresponding to the low-frequency information and f2(x, y) corresponding to the high-frequency information;
Step 2: construct the low-dimensional feature extraction convolutional network, in which the low-frequency component is used to describe the global contour characteristics of the fundus image. The network is divided into left and right paths; the whole network contains 8 residual blocks, 2 down-samplings, 2 up-samplings and 4 convolutional layers. Each residual block contains two 3 × 3 dilated convolutional layers, and every two residual blocks together with one down-sampling or one up-sampling form 1 block, giving 4 blocks in total. The output of each block is activated by the ReLU function and then normalized. The left path extracts the overall features of the fundus vessel distribution through 2 blocks; the 2 blocks of the right path are used for accurate localization. Each block on the left path is the combination of two residual blocks and one down-sampling, and each block on the right path is the combination of two residual blocks and one up-sampling. The low-frequency image f1(x, y) obtained in Step 1 is input into the constructed low-dimensional feature extraction convolutional network to obtain the feature map FL;
Step 3: construct the high-dimensional feature extraction convolutional network, in which the high-frequency component is used to describe the detail characteristics of the fundus image. This network is likewise divided into left and right paths; the whole network contains 14 residual blocks, 4 down-samplings, 4 up-samplings and 3 convolutional layers. Each residual block contains two 3 × 3 dilated convolutional layers; every two residual blocks together with one down-sampling or up-sampling form one block, giving 6 such blocks, and 1 block at the junction of the left and right paths contains two residual blocks, one down-sampling and one up-sampling. The output of each block is activated by the ReLU function and then normalized. The left path captures the fundus vessel contour information through 3 blocks; the right path passes through 3 blocks and performs a fusion operation with the symmetric left path, the fusion operation merging the feature-map channel numbers. Each block on the left path is the combination of two residual blocks and one down-sampling, and each block on the right path is the combination of two residual blocks and one up-sampling. The high-frequency image f2(x, y) obtained in Step 1 is input into the constructed high-dimensional feature extraction convolutional network to obtain the feature map FH;
Step 4: fuse the two feature maps FL and FH by adding their corresponding channels, obtain the fused high/low-dimensional feature map F′ with a 1 × 1-32 convolution kernel, and then convert F′ into the single-channel feature map F″ with a 1 × 1-1 convolution kernel. After activation by the ReLU function, the output pixel values of the vessel segmentation map corresponding to the original image f(x, y) are obtained and compared with the corresponding known vessel segmentation labels using the squared difference as the loss; the loss values of all training images are accumulated, the result is denoted loss, and the high- and low-dimensional feature extraction convolutional networks are trained with gradient descent. Training ends when the loss value loss satisfies the convergence condition;
Step 5: after the high- and low-dimensional feature extraction convolutional networks are trained, a fundus image with unknown labels is processed through Steps 1 to 4 to obtain its single-channel feature map F″, which, after ReLU activation, is the blood vessel segmentation result of the fundus image.
CN201811331874.XA 2018-11-09 2018-11-09 Fundus image blood vessel segmentation method based on frequency domain classification Active CN109671094B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811331874.XA CN109671094B (en) 2018-11-09 2018-11-09 Fundus image blood vessel segmentation method based on frequency domain classification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811331874.XA CN109671094B (en) 2018-11-09 2018-11-09 Fundus image blood vessel segmentation method based on frequency domain classification

Publications (2)

Publication Number Publication Date
CN109671094A true CN109671094A (en) 2019-04-23
CN109671094B CN109671094B (en) 2023-04-18

Family

ID=66142084

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811331874.XA Active CN109671094B (en) 2018-11-09 2018-11-09 Fundus image blood vessel segmentation method based on frequency domain classification

Country Status (1)

Country Link
CN (1) CN109671094B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110490843A (en) * 2019-07-23 2019-11-22 苏州国科视清医疗科技有限公司 A kind of eye fundus image blood vessel segmentation method
CN110619633A (en) * 2019-09-10 2019-12-27 武汉科技大学 Liver image segmentation method based on multi-path filtering strategy
CN110807762A (en) * 2019-09-19 2020-02-18 温州大学 Intelligent retinal blood vessel image segmentation method based on GAN
CN110991611A (en) * 2019-11-29 2020-04-10 北京市眼科研究所 Full convolution neural network based on image segmentation
CN111489328A (en) * 2020-03-06 2020-08-04 浙江工业大学 Fundus image quality evaluation method based on blood vessel segmentation and background separation
CN111681252A (en) * 2020-05-30 2020-09-18 重庆邮电大学 Medical image automatic segmentation method based on multipath attention fusion
CN112365493A (en) * 2020-11-30 2021-02-12 上海鹰瞳医疗科技有限公司 Training data generation method and device for fundus image recognition model
CN112766178A (en) * 2021-01-22 2021-05-07 广州大学 Method, device, equipment and medium for positioning pests based on intelligent pest control system
CN113592790A (en) * 2021-07-16 2021-11-02 大连民族大学 Panoramic image segmentation method based on high-frequency and low-frequency enhancement, computer system and medium
CN113627285A (en) * 2021-07-26 2021-11-09 长沙理工大学 Video forensics method, system, and medium
CN116681980A (en) * 2023-07-31 2023-09-01 北京建筑大学 Deep learning-based large-deletion-rate image restoration method, device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105551010A (en) * 2016-01-20 2016-05-04 中国矿业大学 Multi-focus image fusion method based on NSCT (Non-Subsampled Contourlet Transform) and depth information incentive PCNN (Pulse Coupled Neural Network)
US20160217586A1 (en) * 2015-01-28 2016-07-28 University Of Florida Research Foundation, Inc. Method for the autonomous image segmentation of flow systems
US20170256033A1 (en) * 2016-03-03 2017-09-07 Mitsubishi Electric Research Laboratories, Inc. Image Upsampling using Global and Local Constraints
CN107657612A (en) * 2017-10-16 2018-02-02 西安交通大学 Suitable for full-automatic the retinal vessel analysis method and system of intelligent and portable equipment
CN108520225A (en) * 2018-03-30 2018-09-11 南京信息工程大学 A kind of fingerprint detection sorting technique based on spatial alternation convolutional neural networks

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160217586A1 (en) * 2015-01-28 2016-07-28 University Of Florida Research Foundation, Inc. Method for the autonomous image segmentation of flow systems
CN105551010A (en) * 2016-01-20 2016-05-04 中国矿业大学 Multi-focus image fusion method based on NSCT (Non-Subsampled Contourlet Transform) and depth information incentive PCNN (Pulse Coupled Neural Network)
US20170256033A1 (en) * 2016-03-03 2017-09-07 Mitsubishi Electric Research Laboratories, Inc. Image Upsampling using Global and Local Constraints
CN107657612A (en) * 2017-10-16 2018-02-02 西安交通大学 Suitable for full-automatic the retinal vessel analysis method and system of intelligent and portable equipment
CN108520225A (en) * 2018-03-30 2018-09-11 南京信息工程大学 A kind of fingerprint detection sorting technique based on spatial alternation convolutional neural networks

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KIEN NGUYEN et al.: "Fusing shrinking and expanding active contour models for robust iris segmentation" *
李媛媛; 蔡轶珩; 高旭蓉: "Retinal vessel segmentation algorithm based on fused phase features" (in Chinese) *
王强; 范影乐; 武薇; 朱亚萍: "Hybrid recognition of upright and inverted faces" (in Chinese) *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110490843A (en) * 2019-07-23 2019-11-22 苏州国科视清医疗科技有限公司 A kind of eye fundus image blood vessel segmentation method
CN110619633A (en) * 2019-09-10 2019-12-27 武汉科技大学 Liver image segmentation method based on multi-path filtering strategy
CN110619633B (en) * 2019-09-10 2023-06-23 武汉科技大学 Liver image segmentation method based on multipath filtering strategy
CN110807762A (en) * 2019-09-19 2020-02-18 温州大学 Intelligent retinal blood vessel image segmentation method based on GAN
CN110991611A (en) * 2019-11-29 2020-04-10 北京市眼科研究所 Full convolution neural network based on image segmentation
CN111489328A (en) * 2020-03-06 2020-08-04 浙江工业大学 Fundus image quality evaluation method based on blood vessel segmentation and background separation
CN111489328B (en) * 2020-03-06 2023-06-30 浙江工业大学 Fundus image quality evaluation method based on blood vessel segmentation and background separation
CN111681252B (en) * 2020-05-30 2022-05-03 重庆邮电大学 Medical image automatic segmentation method based on multipath attention fusion
CN111681252A (en) * 2020-05-30 2020-09-18 重庆邮电大学 Medical image automatic segmentation method based on multipath attention fusion
CN112365493A (en) * 2020-11-30 2021-02-12 上海鹰瞳医疗科技有限公司 Training data generation method and device for fundus image recognition model
CN112766178B (en) * 2021-01-22 2023-06-23 广州大学 Disease and pest positioning method, device, equipment and medium based on intelligent deinsectization system
CN112766178A (en) * 2021-01-22 2021-05-07 广州大学 Method, device, equipment and medium for positioning pests based on intelligent pest control system
CN113592790A (en) * 2021-07-16 2021-11-02 大连民族大学 Panoramic image segmentation method based on high-frequency and low-frequency enhancement, computer system and medium
CN113592790B (en) * 2021-07-16 2023-08-29 大连民族大学 Panoramic image segmentation method based on high-low frequency reinforcement, computer system and medium
CN113627285A (en) * 2021-07-26 2021-11-09 长沙理工大学 Video forensics method, system, and medium
CN116681980A (en) * 2023-07-31 2023-09-01 北京建筑大学 Deep learning-based large-deletion-rate image restoration method, device and storage medium
CN116681980B (en) * 2023-07-31 2023-10-20 北京建筑大学 Deep learning-based large-deletion-rate image restoration method, device and storage medium

Also Published As

Publication number Publication date
CN109671094B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN109671094A (en) A kind of eye fundus image blood vessel segmentation method based on frequency domain classification
CN107220980B (en) A kind of MRI image brain tumor automatic division method based on full convolutional network
CN106920227B (en) The Segmentation Method of Retinal Blood Vessels combined based on deep learning with conventional method
CN111145170B (en) Medical image segmentation method based on deep learning
CN108596884B (en) Esophagus cancer segmentation method in chest CT image
CN112508864B (en) Retinal vessel image segmentation method based on improved UNet +
CN110930416B (en) MRI image prostate segmentation method based on U-shaped network
CN111161273B (en) Medical ultrasonic image segmentation method based on deep learning
CN109767440A (en) A kind of imaged image data extending method towards deep learning model training and study
CN110298262A (en) Object identification method and device
CN107492071A (en) Medical image processing method and equipment
CN109685813A (en) A kind of U-shaped Segmentation Method of Retinal Blood Vessels of adaptive scale information
CN109829855A (en) A kind of super resolution ratio reconstruction method based on fusion multi-level features figure
CN110232653A (en) The quick light-duty intensive residual error network of super-resolution rebuilding
CN103699904B (en) The image computer auxiliary judgment method of multisequencing nuclear magnetic resonance image
CN107437092A (en) The sorting algorithm of retina OCT image based on Three dimensional convolution neutral net
CN108764286A (en) The classifying identification method of characteristic point in a kind of blood-vessel image based on transfer learning
CN110097554A (en) The Segmentation Method of Retinal Blood Vessels of convolution is separated based on intensive convolution sum depth
Zhang et al. LU-NET: An improved U-Net for ventricular segmentation
CN110675411A (en) Cervical squamous intraepithelial lesion recognition algorithm based on deep learning
CN107644420A (en) Blood-vessel image dividing method, MRI system based on central line pick-up
CN108764342B (en) Semantic segmentation method for optic discs and optic cups in fundus image
CN112001928B (en) Retina blood vessel segmentation method and system
CN110084823A (en) Three-dimensional brain tumor image partition method based on cascade anisotropy FCNN
CN108416821A (en) A kind of CT Image Super-resolution Reconstruction methods of deep neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant