CN109671094B - Fundus image blood vessel segmentation method based on frequency domain classification - Google Patents

Fundus image blood vessel segmentation method based on frequency domain classification

Info

Publication number: CN109671094B
Application number: CN201811331874.XA
Authority: CN (China)
Prior art keywords: low; fundus image; blood vessel; sampling; frequency
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN109671094A
Inventors: 武薇, 田纯, 范影乐
Assignee (original and current): Hangzhou Dianzi University
Application filed 2018-11-09 by Hangzhou Dianzi University; priority date 2018-11-09
Publication of CN109671094A: 2019-04-23; grant of CN109671094B: 2023-04-18

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T5/92 Dynamic range modification of images or parts thereof based on global image properties
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic
    • G06T2207/30101 Blood vessel; Artery; Vein; Vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a fundus image blood vessel segmentation method based on frequency domain classification. To address the inaccurate vessel segmentation of traditional methods that process the original fundus image over its full frequency band, the method first obtains the low- and high-frequency information of the fundus image by frequency-domain preprocessing, and then constructs targeted multi-path low- and high-dimensional feature-extraction convolutional networks. The low-dimensional feature-extraction network comprises symmetric left and right paths and mainly extracts and accurately localizes the global contour information of the fundus vessels. The high-dimensional feature-extraction network likewise comprises symmetric left and right paths, but during up-sampling on the right path it also merges the channels of the corresponding left-path feature maps to restore lost vessel boundary information, further sharpening the detailed features of the vessel edge distribution of the fundus image. Finally, the high- and low-dimensional feature maps are fused with convolution kernels, yielding a more accurate fundus image blood vessel segmentation map.

Description

Fundus image blood vessel segmentation method based on frequency domain classification
Technical Field
The invention relates to the fields of machine learning and medical image processing, and in particular to a fundus image blood vessel segmentation method based on frequency domain classification.
Background
Clinical studies show that during fundus lesions the morphological structure of the retinal blood vessels is prone to pathological changes, manifested as changes in vessel length, width and angle and as vessel hyperplasia. Blood vessel segmentation of fundus images can assist in screening, diagnosing and analyzing disease; at present, however, segmentation is mostly performed manually, which requires rich clinical experience and consumes a great deal of a physician's time and energy.
The retinal vascular network is tree-shaped with many branches; the contrast between the fine vessels in the branch structure and the background is very low, and their contour boundaries are blurred, so segmenting fine vessels is difficult. In recent years researchers have proposed many methods, including processing retinal images with Gaussian matched filtering combined with thresholding, and probability-based tracking that combines the local gray-level information of the image with vessel connectivity to detect retinal vessels. Retinal vessel segmentation methods combining deep learning with traditional techniques have also been proposed, fusing the vessel probability map at the end of a fully convolutional network with shallow-layer information to obtain the desired retinal vessel segmentation map. These methods can extract most retinal vessels, but for segmentation tasks on fundus images with low contrast, and especially for fine vessels, they generally fail to achieve a satisfactory result.
Disclosure of Invention
To solve the existing problems, the invention provides a fundus image blood vessel segmentation method based on frequency domain classification. Considering the marked contrast differences among retinal vessels, the method departs from the traditional practice of feeding the original fundus image directly into a convolutional neural network: it first performs frequency domain classification of the fundus image to extract the global contour information and the local detail information separately, and then constructs targeted, independent multi-path convolutional neural networks to extract and fuse the retinal vessels and obtain a segmentation map. The invention transforms the fundus image between the spatial and frequency domains, extracts its low- and high-frequency information with filters, inversely transforms each back to the spatial domain, feeds each into the constructed multi-path convolutional neural networks for feature extraction, and finally fuses the feature maps to obtain the segmentation map. The method specifically comprises the following steps:
Step 1: frequency domain classification processing of the fundus image
First, the original fundus image f(x, y) is converted into the frequency domain F(u, v) by Fourier transform; low- and high-frequency components of the fundus image are then obtained with a Gaussian low-pass filter and a Gaussian high-pass filter respectively; finally, the low- and high-frequency information is processed by inverse Fourier transform to obtain the frequency domain classification result of the fundus image: f_1(x, y) corresponding to the low-frequency information and f_2(x, y) corresponding to the high-frequency information;
Step 2: construct a low-dimensional feature-extraction convolutional network, describing the global contour characteristics of the fundus image with the low-frequency component, which helps separate the background from the target vessels and improves the accuracy of predicting vessel boundary information; the network is divided into left and right paths and comprises 8 residual blocks, 2 down-sampling layers, 2 up-sampling layers and 4 convolutional layers in total; each residual block contains two 3×3 dilated (hole) convolution layers, and every two residual blocks plus one down-sampling or one up-sampling form 1 block, for 4 blocks in total; the output of each block is activated by the ReLU function and then normalized; the left path extracts the overall characteristics of the vessel distribution of the fundus image through 2 blocks; the 2 blocks on the right are used for accurate localization; each block on the left path is a combination of two residual blocks and one down-sampling, and each block on the right path is a combination of two residual blocks and one up-sampling; the low-frequency image f_1(x, y) obtained in step 1 is input into the constructed low-dimensional feature-extraction convolutional network to obtain the feature map F_L;
Step 3: construct a high-dimensional feature-extraction convolutional network, describing the detail characteristics of the fundus image, such as regions of sharp brightness change, with the high-frequency component; this network is likewise divided into left and right paths and comprises 14 residual blocks, 4 down-sampling layers, 4 up-sampling layers and 3 convolutional layers in total; each residual block contains two 3×3 dilated convolution layers; every two residual blocks plus one down-sampling or up-sampling form a block, for 6 such blocks, and the 1 block at the junction of the left and right paths contains two residual blocks, one down-sampling and one up-sampling; the output of each block is activated by the ReLU function and then normalized; the left path captures the vessel contour information of the fundus image through 3 blocks; the right path performs a fusion operation with the symmetric left path through 3 blocks, the fusion operation merging the channel numbers of the feature maps to restore lost vessel boundary information, further sharpening the detailed features of the vessel distribution and yielding clearer vessel boundaries; each block on the left path is a combination of two residual blocks and one down-sampling, and each block on the right path is a combination of two residual blocks and one up-sampling; the high-frequency image f_2(x, y) obtained in step 1 is input into the constructed high-dimensional feature-extraction convolutional network to obtain the feature map F_H;
Step 4: merge the two sets of feature maps F_L and F_H by adding their channel numbers; obtain the fused high/low-dimensional feature map F' through a 1×1 convolution with 32 kernels, then convert F' into a single-channel feature map F'' with a 1×1 convolution with 1 kernel; after activation by the ReLU function this gives the output pixel values of the vessel segmentation result corresponding to the original image f(x, y); compute the squared-error loss between the output pixel values and the corresponding known vessel segmentation labels, accumulate the loss over all training images, denote the result loss, and train the high- and low-dimensional feature-extraction convolutional networks with the gradient-descent method; when the loss value meets the convergence condition, training ends;
Step 5: after training of the high- and low-dimensional feature-extraction convolutional networks is finished, a fundus image without a known label is processed according to steps 1-4 to obtain its single-channel feature map F'', which is activated by the ReLU function to obtain the blood vessel segmentation result of the fundus image.
The invention has the following advantages:
(1) The invention changes the traditional full-band processing of the fundus image and avoids the cost in network computing capability of simply deepening a convolutional neural network. Using the idea of frequency-division processing, the low- and high-frequency information of the fundus image is obtained separately, and low- and high-dimensional feature-extraction convolutional networks are then designed for each in a targeted manner.
(2) Low- and high-dimensional feature-extraction convolutional networks are constructed; the overall and detailed features of the fundus image are extracted through multi-path convolution, up-sampling and down-sampling operations, and the high- and low-dimensional feature maps are finally fused by convolution to obtain an accurate blood vessel segmentation result.
Drawings
To make the object, technical scheme and beneficial effects of the invention clearer, the following drawings are provided for explanation:
FIG. 1 is a schematic view of a fundus image blood vessel segmentation network structure according to the present invention;
FIG. 2 is a flowchart of fundus image blood vessel segmentation according to the present invention.
Detailed Description
The implementation of the present invention is described below with reference to the accompanying drawings. FIG. 1 is a schematic structural diagram of the fundus image blood vessel segmentation network of the present invention, and FIG. 2 is a flowchart of fundus image blood vessel segmentation of the present invention.
Step 1: frequency domain classification processing of the fundus image. First, the original image is preprocessed and converted into the frequency domain by Fourier transform, as shown in formula (1):
F(u, v) = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x, y) e^{-j2π(ux/M + vy/N)}    (1)
m, N respectively represent rows and columns of an image size; f (x, y) represents an input image; x and y respectively represent the horizontal and vertical coordinates of the airspace; f (u, v) represents a frequency domain value after fourier transform; u and v represent the horizontal and vertical coordinates of the frequency domain, respectively. Then, f (x, y) is respectively passed through a Gaussian low-pass filter (GLPF) and a Gaussian high-pass filter (GHPF), and the transfer functions of the GLPF and the GHPF are respectively H 1 (u, v) and H 2 (u, v) as shown in formulas (2) and (3):
H_1(u, v) = e^{-D²(u, v)/(2D_0²)}    (2)

H_2(u, v) = 1 - e^{-D²(u, v)/(2D_0²)}    (3)
where D(u, v) denotes the distance from the center origin of the Fourier transform and D_0 is the cut-off frequency. Multiplying expression (1) by expressions (2) and (3) gives the low-frequency component G_1(u, v) and the high-frequency component G_2(u, v) of the fundus image. Applying the inverse Fourier transform to G_1(u, v) and G_2(u, v) gives the filtered low-frequency image f_1(x, y) and high-frequency image f_2(x, y), as shown in formula (4):
f_i(x, y) = (1/MN) Σ_{u=0}^{M-1} Σ_{v=0}^{N-1} G_i(u, v) e^{j2π(ux/M + vy/N)},  i = 1, 2    (4)
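For illustration, the following Python sketch implements this frequency domain classification step with NumPy. The function name and the cut-off value d0 are assumptions, since the patent does not fix a numerical value for D_0:

```python
import numpy as np

def frequency_domain_split(f, d0=30.0):
    """Split a grayscale fundus image into low- and high-frequency images
    with Gaussian low/high-pass filtering, per formulas (1)-(4)."""
    M, N = f.shape
    F = np.fft.fftshift(np.fft.fft2(f))   # formula (1); shift puts u = v = 0 at the centre

    # D(u, v): distance of each frequency sample from the spectrum centre
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2

    H1 = np.exp(-D2 / (2.0 * d0 ** 2))    # GLPF transfer function, formula (2)
    H2 = 1.0 - H1                         # GHPF transfer function, formula (3)

    # Filter in the frequency domain, then invert to the spatial domain, formula (4)
    f1 = np.real(np.fft.ifft2(np.fft.ifftshift(F * H1)))  # low-frequency image
    f2 = np.real(np.fft.ifft2(np.fft.ifftshift(F * H2)))  # high-frequency image
    return f1, f2
```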
Step 2: construct the low-dimensional feature-extraction convolutional network. The low-frequency image mainly reflects the global contour information of the fundus image and has the character of low-dimensional features; its grasp of the overall information can improve detection accuracy to a certain extent. The low-frequency image f_1(x, y) obtained in step 1 is therefore input into the constructed low-dimensional feature-extraction convolutional network, which is composed of 4 convolutional layers and 4 blocks, as shown in the solid box of FIG. 1. The left path comprises 2 blocks, each containing two residual blocks and one down-sampling, and mainly extracts the overall characteristics of the fundus vessel distribution; the right path comprises 2 blocks, each containing two residual blocks and one up-sampling, and mainly localizes the fundus vessels accurately. Down-sampling uses 2×2 max pooling and up-sampling uses bilinear interpolation. The specific steps are as follows:
(1): the image f_1(x, y) is passed through a 1×1 convolution with 32 kernels and then through the 1st left block, whose residual blocks each contain 64 convolution kernels, to be converted into the 64-dimensional feature map F_1, as shown in formula (5):
F_1 = Res2(pool(conv(f_1(x, y))))    (5)
where conv denotes the convolution operation; pool denotes the 2×2 max-pooling operation (the same below); Res2 denotes two consecutive residual-block operations (the same below).
(2): F_1 is activated by the ReLU function and then normalized, and passes through the 2nd left block, which contains 128 convolution kernels, to obtain the feature map F_2, as shown in formula (6):
F_2 = Res2(pool(Norm(ReLU(F_1))))    (6)
where ReLU denotes the activation function and Norm denotes normalization (the same below).
(3): F_2 is passed through a 1×1 convolution with 256 kernels and then through the 1st block of the right path to obtain the feature map F_3, as shown in formula (7):
F_3 = upsampling(Res2(conv(F_2)))    (7)
where upsampling denotes the up-sampling operation, implemented by bilinear interpolation (the same below).
(4): F_3 passes through the 2nd block of the right path to obtain the final feature map F_L of the low-frequency image, as shown in formula (8):
F_L = upsampling(Res2(Norm(ReLU(F_3))))    (8)
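As a concrete reading of formulas (5)-(8), the sketch below expresses the low-dimensional network in PyTorch. It is an illustrative, assumption-laden sketch rather than the patented implementation: the dilation rate of the 3×3 dilated convolutions, the use of BatchNorm for Norm, and the placement of the 1×1 channel-changing convolutions (32/64/128/256 kernels) are all choices the patent leaves open.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    """One residual block: two 3x3 dilated convolutions plus a shortcut.
    The dilation rate is not given in the patent; 2 is an assumption."""
    def __init__(self, ch, dilation=2):
        super().__init__()
        self.c1 = nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation)
        self.c2 = nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation)

    def forward(self, x):
        return x + self.c2(F.relu(self.c1(x)))

def res2(ch):
    """Res2: two consecutive residual blocks."""
    return nn.Sequential(ResBlock(ch), ResBlock(ch))

class LowDimNet(nn.Module):
    """Left path: 2 x (Res2 + 2x2 max-pool); right path: 2 x (Res2 +
    bilinear up-sampling); ReLU + Norm between blocks, per formulas (5)-(8)."""
    def __init__(self):
        super().__init__()
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
        self.conv_in = nn.Conv2d(1, 64, 1)   # 1x1 input conv (64 out is assumed)
        self.res_a = res2(64)                # left block 1 -> F1, formula (5)
        self.norm1 = nn.BatchNorm2d(64)
        self.grow1 = nn.Conv2d(64, 128, 1)   # assumed channel change to 128
        self.res_b = res2(128)               # left block 2 -> F2, formula (6)
        self.grow2 = nn.Conv2d(128, 256, 1)  # the 1x1 conv with 256 kernels of formula (7)
        self.res_c = res2(256)               # right block 1 -> F3, formula (7)
        self.norm2 = nn.BatchNorm2d(256)
        self.res_d = res2(256)               # right block 2 -> FL, formula (8)

    def forward(self, f1):
        x = self.res_a(self.pool(self.conv_in(f1)))                    # formula (5)
        x = self.res_b(self.grow1(self.pool(self.norm1(F.relu(x)))))   # formula (6)
        x = self.up(self.res_c(self.grow2(x)))                         # formula (7)
        return self.up(self.res_d(self.norm2(F.relu(x))))              # formula (8)
```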
Step 3: construct the high-dimensional feature-extraction convolutional network. The high-frequency image mainly reflects the contour details of the image, reinforcing the image content on top of the low-frequency information. Given these characteristics, the constructed network focuses on detail and mainly captures detail features at different levels. The high-frequency image f_2(x, y) obtained in step 1 is therefore input into the constructed high-dimensional feature-extraction convolutional network, which comprises 3 convolutional layers and 7 blocks, as shown in the dashed box of FIG. 1. There are 3 blocks on each of the left and right paths, each containing two residual blocks and one down-sampling or up-sampling; the 1 block at the junction of the left and right paths contains two residual blocks, one down-sampling and one up-sampling. The convolutional layers of this network share weights with the convolutional layers of the low-dimensional feature-extraction network. The specific steps are as follows:
(1): after a 1×1 convolution with 32 kernels, the image passes through the 3 blocks of the left path, each a combination of two residual blocks and one down-sampling, to obtain the feature map F'_3, as shown in formula (9):

F'_i = Res2(pool(Norm(ReLU(F'_{i-1}))))    (9)
where i = 1, 2, 3 indexes the left-path blocks, and F'_0 is given by formula (10):

F'_0 = conv(f_2(x, y))    (10)
(2): F'_3 passes through the block at the junction of the left and right paths, a combination of one down-sampling, two residual blocks and one up-sampling, to obtain the feature map F'_4, as shown in formula (11):

F'_4 = upsampling(Res2(pool(Norm(ReLU(F'_3)))))    (11)
(3): F'_4 passes through the 3 blocks of the right path, each a combination of two residual blocks and one up-sampling; at each up-sampling stage the channel numbers are merged with the feature map of the symmetric left-path block and the merged map is then processed, which compensates for the spatial information lost during pooling and enhances the detail features of the fundus vessels, yielding the clearer feature map F_H, as shown in formula (12):
F'_{j+1} = upsampling(Res2(Norm(ReLU(copy(F'_{7-j}) + F'_j)))),  j = 4, 5, 6    (12)

where j = 4, 5, 6 indexes the right-path blocks; F'_j denotes the feature map output by the j-th block; copy(F'_{7-j}) + F'_j denotes merging the channel numbers of F'_j with the feature map of the symmetric left-path block; the output of the last right-path block is taken as F_H.
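A minimal PyTorch sketch of one such right-path fusion block follows. The 'copy + merge channel numbers' operation is realized here as channel-wise concatenation; the channel widths and the helper names are assumptions, and ResBlock is the same assumed residual block as in the step-2 sketch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    # Same residual block as in the step-2 sketch (dilation of 2 assumed)
    def __init__(self, ch, dilation=2):
        super().__init__()
        self.c1 = nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation)
        self.c2 = nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation)

    def forward(self, x):
        return x + self.c2(F.relu(self.c1(x)))

class FusionUpBlock(nn.Module):
    """One right-path block of formula (12): merge channels with the stored
    symmetric left-path map ('copy'), apply two residual blocks, up-sample."""
    def __init__(self, ch_right, ch_left):
        super().__init__()
        merged = ch_right + ch_left
        self.res2 = nn.Sequential(ResBlock(merged), ResBlock(merged))
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)

    def forward(self, x_right, left_skip):
        # copy(F'_{7-j}) + F'_j : channel-wise concatenation at matching resolution
        x = torch.cat([left_skip, x_right], dim=1)
        return self.up(self.res2(x))

def right_path(x, lefts, blocks):
    """Run the 3 right-path blocks; lefts = [F'_1, F'_2, F'_3] saved while
    descending the left path, consumed in reverse (7 - j) order."""
    for blk, skip in zip(blocks, reversed(lefts)):
        x = blk(x, skip)
    return x   # F_H
```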
Step 4: the feature maps F_L and F_H obtained in steps 2 and 3 are merged by adding their corresponding channels; a 1×1 convolution with 32 kernels gives the fused feature map F', which a 1×1 convolution with 1 kernel converts into the single-channel feature map F''; after activation by the ReLU function this gives the output pixel values of the vessel segmentation result corresponding to the original image f(x, y). The squared error between the output pixel values and the corresponding known vessel segmentation labels is computed, and the loss over all training images is accumulated and denoted loss, as shown in formula (13):

loss = Σ_{i=1}^{n} Σ_{j=1}^{M} Σ_{k=1}^{N} (y_{jk}^{(i)} - ŷ_{jk}^{(i)})²    (13)

where n is the number of training images; M and N are the length and width of the training images; y_{jk}^{(i)} is the output pixel value at position (j, k) of the segmentation result of the i-th training image; and ŷ_{jk}^{(i)} is the corresponding known vessel segmentation label value. Finally, the loss value is back-propagated, and the weights and biases of the high- and low-dimensional feature-extraction convolutional networks are updated with the gradient-descent method; training ends when the loss value falls below a threshold ε, set to 1-3% of the total number of pixels in the training image sample, giving the trained network model.
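A minimal PyTorch sketch of this training step is given below. The plain-SGD optimiser, the learning rate, and the input channel count of the fuse module are assumptions consistent with, but not fixed by, the text:

```python
import torch
import torch.nn as nn

def vessel_loss(pred, label):
    # Accumulated squared error of formula (13): summed, not averaged
    return ((pred - label) ** 2).sum()

# 1x1 conv with 32 kernels producing F', then 1x1 conv with 1 kernel
# producing F''; the 512 input channels (F_L plus F_H) are an assumption
fuse = nn.Sequential(nn.Conv2d(512, 32, 1), nn.Conv2d(32, 1, 1))

def train_step(low_net, high_net, fuse, f1, f2, label, opt):
    F_L = low_net(f1)                        # step-2 network
    F_H = high_net(f2)                       # step-3 network
    merged = torch.cat([F_L, F_H], dim=1)    # merge by adding channel numbers
    out = torch.relu(fuse(merged))           # F'' -> ReLU -> output pixel values
    loss = vessel_loss(out, label)
    opt.zero_grad()
    loss.backward()                          # back-propagate the loss value
    opt.step()                               # gradient-descent update
    return loss.item()

# e.g. opt = torch.optim.SGD(params, lr=0.01); loop over the training set
# until loss < eps, with eps set to 1-3% of the sample's total pixel count.
```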
Step 5: after training of the low- and high-dimensional feature-extraction convolutional networks is finished, a fundus image without a known label is processed according to steps 1-4 to obtain its single-channel feature map F'', which is activated by the ReLU function to obtain the blood vessel segmentation result of the fundus image.
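Putting the pieces together, inference on an unlabeled image might look like the following, using the hypothetical helpers sketched above (frequency_domain_split, LowDimNet/the high-dimensional counterpart, and fuse are assumed names, not the patent's):

```python
import torch

# `image` is a 2-D grayscale fundus image as a NumPy array;
# low_net, high_net and fuse are the trained modules from the earlier sketches.
f1, f2 = frequency_domain_split(image, d0=30.0)   # step 1
t1 = torch.from_numpy(f1).float()[None, None]     # to NCHW tensors
t2 = torch.from_numpy(f2).float()[None, None]
with torch.no_grad():
    merged = torch.cat([low_net(t1), high_net(t2)], dim=1)
    segmentation = torch.relu(fuse(merged))       # steps 2-5
```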

Claims (1)

1. A fundus image blood vessel segmentation method based on frequency domain classification is characterized by comprising the following steps:
step 1: frequency domain classification processing of the fundus image
first, the original fundus image f(x, y) is converted into the frequency domain F(u, v) by Fourier transform; low- and high-frequency components of the fundus image are then obtained with a Gaussian low-pass filter and a Gaussian high-pass filter respectively; finally, the low- and high-frequency information is processed by inverse Fourier transform to obtain the frequency domain classification result of the fundus image: f_1(x, y) corresponding to the low-frequency information and f_2(x, y) corresponding to the high-frequency information;
step 2: constructing a low-dimensional feature-extraction convolutional network, describing the global contour characteristics of the fundus image with the low-frequency component; the network is divided into left and right paths and comprises 8 residual blocks, 2 down-sampling layers, 2 up-sampling layers and 4 convolutional layers in total; each residual block contains two 3×3 dilated convolution layers, and every two residual blocks plus one down-sampling or one up-sampling form 1 block, for 4 blocks in total; the output of each block is activated by the ReLU function and then normalized; the left path extracts the overall characteristics of the vessel distribution of the fundus image through 2 blocks; the 2 blocks on the right are used for accurate localization; each block on the left path is a combination of two residual blocks and one down-sampling, and each block on the right path is a combination of two residual blocks and one up-sampling; the low-frequency image f_1(x, y) obtained in step 1 is input into the constructed low-dimensional feature-extraction convolutional network to obtain the feature map F_L;
step 3: constructing a high-dimensional feature-extraction convolutional network, describing the detail characteristics of the fundus image with the high-frequency component; this network is likewise divided into left and right paths and comprises 14 residual blocks, 4 down-sampling layers, 4 up-sampling layers and 3 convolutional layers in total; each residual block contains two 3×3 dilated convolution layers; every two residual blocks plus one down-sampling or up-sampling form a block, for 6 such blocks, and the 1 block at the junction of the left and right paths contains two residual blocks, one down-sampling and one up-sampling; the output of each block is activated by the ReLU function and then normalized; the left path captures the vessel contour information of the fundus image through 3 blocks; the right path performs a fusion operation with the symmetric left path through 3 blocks, the fusion operation merging the channel numbers of the feature maps; each block on the left path is a combination of two residual blocks and one down-sampling, and each block on the right path is a combination of two residual blocks and one up-sampling; the high-frequency image f_2(x, y) obtained in step 1 is input into the constructed high-dimensional feature-extraction convolutional network to obtain the feature map F_H;
step 4: merging the two sets of feature maps F_L and F_H by adding their channel numbers; obtaining the fused high/low-dimensional feature map F' through a 1×1 convolution with 32 kernels, then converting F' into a single-channel feature map F'' with a 1×1 convolution with 1 kernel; after activation by the ReLU function this gives the output pixel values of the vessel segmentation result corresponding to the original image f(x, y); computing the squared-error loss between the output pixel values and the corresponding known vessel segmentation labels, accumulating the loss over all training images, denoting the result loss, and training the high- and low-dimensional feature-extraction convolutional networks with the gradient-descent method; when the loss value meets the convergence condition, training ends;
step 5: after training of the high- and low-dimensional feature-extraction convolutional networks is finished, a fundus image without a known label is processed according to steps 1-4 to obtain its single-channel feature map F'', which is activated by the ReLU function to obtain the blood vessel segmentation result of the fundus image.

Priority Applications (1)

Application Number: CN201811331874.XA (granted as CN109671094B)
Priority Date: 2018-11-09; Filing Date: 2018-11-09
Title: Fundus image blood vessel segmentation method based on frequency domain classification

Publications (2)

CN109671094A, published 2019-04-23
CN109671094B, granted 2023-04-18

Family

ID=66142084

Family Applications (1)

Application Number: CN201811331874.XA (granted CN109671094B, Active)
Priority Date: 2018-11-09; Filing Date: 2018-11-09
Title: Fundus image blood vessel segmentation method based on frequency domain classification

Country Status (1)

CN: CN109671094B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110490843A (en) * 2019-07-23 2019-11-22 苏州国科视清医疗科技有限公司 A kind of eye fundus image blood vessel segmentation method
CN110619633B (en) * 2019-09-10 2023-06-23 武汉科技大学 Liver image segmentation method based on multipath filtering strategy
CN110807762B (en) * 2019-09-19 2021-07-06 温州大学 Intelligent retinal blood vessel image segmentation method based on GAN
CN110991611A (en) * 2019-11-29 2020-04-10 北京市眼科研究所 Full convolution neural network based on image segmentation
CN111489328B (en) * 2020-03-06 2023-06-30 浙江工业大学 Fundus image quality evaluation method based on blood vessel segmentation and background separation
CN111681252B (en) * 2020-05-30 2022-05-03 重庆邮电大学 Medical image automatic segmentation method based on multipath attention fusion
CN112365493B (en) * 2020-11-30 2022-04-22 北京鹰瞳科技发展股份有限公司 Training data generation method and device for fundus image recognition model
CN112766178B (en) * 2021-01-22 2023-06-23 广州大学 Disease and pest positioning method, device, equipment and medium based on intelligent deinsectization system
CN113592790B (en) * 2021-07-16 2023-08-29 大连民族大学 Panoramic image segmentation method based on high-low frequency reinforcement, computer system and medium
CN113627285A (en) * 2021-07-26 2021-11-09 长沙理工大学 Video forensics method, system, and medium
CN116681980B (en) * 2023-07-31 2023-10-20 北京建筑大学 Deep learning-based large-deletion-rate image restoration method, device and storage medium


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9836849B2 (en) * 2015-01-28 2017-12-05 University Of Florida Research Foundation, Inc. Method for the autonomous image segmentation of flow systems
US9836820B2 (en) * 2016-03-03 2017-12-05 Mitsubishi Electric Research Laboratories, Inc. Image upsampling using global and local constraints

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105551010A (en) * 2016-01-20 2016-05-04 中国矿业大学 Multi-focus image fusion method based on NSCT (Non-Subsampled Contourlet Transform) and depth information incentive PCNN (Pulse Coupled Neural Network)
CN107657612A (en) * 2017-10-16 2018-02-02 西安交通大学 Suitable for full-automatic the retinal vessel analysis method and system of intelligent and portable equipment
CN108520225A (en) * 2018-03-30 2018-09-11 南京信息工程大学 A kind of fingerprint detection sorting technique based on spatial alternation convolutional neural networks

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Kien Nguyen et al. Fusing shrinking and expanding active contour models for robust iris segmentation. 2010 10th International Conference on Information Science, Signal Processing and their Applications, 2010, full text. *
李媛媛, 蔡轶珩, 高旭蓉. Retinal vessel segmentation algorithm based on fused phase features. Journal of Computer Applications, 2018, (07), full text. *
王强, 范影乐, 武薇, 朱亚萍. Hybrid recognition of upright and inverted faces. Journal of Image and Graphics, 2018, (07), full text. *

Also Published As

CN109671094A (en), published 2019-04-23

Similar Documents

Publication Publication Date Title
CN109671094B (en) Fundus image blood vessel segmentation method based on frequency domain classification
CN111145170B (en) Medical image segmentation method based on deep learning
Mahapatra et al. Image super resolution using generative adversarial networks and local saliency maps for retinal image analysis
CN109685768B (en) Pulmonary nodule automatic detection method and system based on pulmonary CT sequence
CN108090906B (en) Cervical image processing method and device based on region nomination
CN109035172B (en) Non-local mean ultrasonic image denoising method based on deep learning
CN109410219A (en) A kind of image partition method, device and computer readable storage medium based on pyramid fusion study
CN109461172A (en) Manually with the united correlation filtering video adaptive tracking method of depth characteristic
CN105160310A (en) 3D (three-dimensional) convolutional neural network based human body behavior recognition method
CN112001928B (en) Retina blood vessel segmentation method and system
CN106056595A (en) Method for automatically identifying whether thyroid nodule is benign or malignant based on deep convolutional neural network
CN112102385B (en) Multi-modal liver magnetic resonance image registration system based on deep learning
CN111161271A (en) Ultrasonic image segmentation method
CN107452022A (en) A kind of video target tracking method
CN112465752A (en) Improved Faster R-CNN-based small target detection method
CN113192076B (en) MRI brain tumor image segmentation method combining classification prediction and multi-scale feature extraction
CN113516126A (en) Adaptive threshold scene text detection method based on attention feature fusion
CN110751636A (en) Fundus image retinal arteriosclerosis detection method based on improved coding and decoding network
CN112712526B (en) Retina blood vessel segmentation method based on asymmetric convolutional neural network double channels
CN109003275A (en) The dividing method of weld defect image
CN112102259A (en) Image segmentation algorithm based on boundary guide depth learning
CN113066025A (en) Image defogging method based on incremental learning and feature and attention transfer
CN114926386A (en) Method and system for automatically segmenting leucocyte nucleoplasm based on deep learning
CN102081740B (en) 3D image classification method based on scale invariant features
CN117635628B (en) Sea-land segmentation method based on context attention and boundary perception guidance

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant