CN112308829A - Self-adaptive network suitable for high-reflection bright spot segmentation in retina optical coherence tomography image - Google Patents

Info

Publication number
CN112308829A
CN112308829A (application CN202011160346.XA)
Authority
CN
China
Prior art keywords
module
feature
adaptive
segmentation
optical coherence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011160346.XA
Other languages
Chinese (zh)
Inventor
陈新建 (Chen Xinjian)
姚辰璞 (Yao Chenpu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Bigvision Medical Technology Co ltd
Original Assignee
Suzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou University filed Critical Suzhou University
Priority to CN202011160346.XA priority Critical patent/CN112308829A/en
Publication of CN112308829A publication Critical patent/CN112308829A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10101 Optical tomography; Optical coherence tomography [OCT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30041 Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The embodiments of the present application disclose an adaptive network suitable for segmenting high-reflection bright spots in retinal optical coherence tomography images, comprising a feature encoding module, an adaptive SA module and a feature decoding module. The feature encoding module comprises a feature extraction unit and a dual-residual DR module consisting of two residual blocks. The adaptive SA module comprises a feature input, deformable convolution layers, matrix multiplication and pixel-level summation. The feature decoding module reconstructs the high-level features generated by the adaptive SA module, progressively concatenates them, via 2 × 2 deconvolution layers, with the local information guided by the dual-residual DR module, and takes the result of a 1 × 1 convolution as its output. The network simplifies the learning process and strengthens gradient propagation while enhancing feature extraction, and adapts to segmentation targets of different sizes.

Description

Self-adaptive network suitable for high-reflection bright spot segmentation in retina optical coherence tomography image
Technical Field
The application relates to the technical field of retinal OCT image segmentation, and in particular to an adaptive network suitable for segmenting high-reflection bright spots in retinal optical coherence tomography images.
Background
Hard exudates are a prominent fundus change in diabetic retinopathy and appear as high-reflection bright spots in optical coherence tomography images. In recent years, many methods have been proposed for detecting hard exudates in retinal color fundus photographs, for example, detection with a support vector machine, automatic detection by k-nearest-neighbor region merging, and threshold-based detection. Similar studies have segmented bright spots in polarization-sensitive optical coherence tomography (PS-OCT), but segmentation methods based on deep learning remain rare. Currently, many deep learning methods based on convolutional neural networks (CNN), such as U-Net, are widely used for medical image segmentation; U-Net in particular greatly improved segmentation performance through its encoder-decoder structure and skip connections. Because high-reflection bright spots in retinal OCT images are small targets with irregular shapes, blurred boundaries and uneven sizes, the segmentation task is highly challenging: the network must extract and exploit global and local features simultaneously, fusing local information dynamically and adapting the receptive field of its feature extraction to the target size. The encoder-decoder structure of the original U-Net cannot effectively extract and exploit global features, cannot adapt to segmentation targets of various shapes and sizes, and performs poorly on small targets.
Disclosure of Invention
The present application aims to solve the above technical problems, and provides an adaptive network suitable for segmentation of high-reflection bright spots in a retina optical coherence tomography image, which can simplify the learning process of the whole network and enhance gradient propagation while enhancing feature extraction, and can adapt to segmentation targets of different sizes.
To achieve the above object, the present application discloses an adaptive network suitable for segmenting high-reflection bright spots in a retinal optical coherence tomography image, comprising a feature encoding module, an adaptive SA module applied to the deep layers of the encoder module, and a plurality of feature decoding modules arranged in the decoder path;
the feature encoding module comprises a feature extraction unit and a dual-residual DR module embedded at the down-sampling position of the feature extraction unit, wherein the dual-residual DR module comprises two residual blocks, each consisting of a 1 × 1 convolutional layer, a 3 × 3 convolutional layer, a 1 × 1 convolutional layer, a batch normalization layer and a ReLU activation function arranged in sequence;
the adaptive SA module comprises a feature input, deformable convolution layers, matrix multiplication and pixel-level summation, wherein the feature input is connected to the feature encoding module, and each deformable convolution layer adds 2D offsets to the regular grid sampling positions so that the sampling grid can deform freely;
the feature decoding module reconstructs the high-level features generated by the adaptive SA module, progressively concatenates them, via 2 × 2 deconvolution, with the local information guided by the dual-residual DR module, and takes the result of a 1 × 1 convolution as its output.
Preferably, the feature extraction unit is a U-Net encoder layer, and an output of a fourth layer of the feature extraction unit is connected to the feature input terminal.
Preferably, the adaptive SA module comprises three parallel deformable convolution layers. The feature input receives the feature map output by the feature encoding module; two of the deformable convolution layers generate from it a spatial attention map describing the spatial relationship between any two pixels of the features; this map is matrix-multiplied with the output of the third deformable convolution layer; the result is summed at the pixel level with the feature map input to the adaptive SA module; and the pixel-level sum is used as the output of the adaptive SA module for pixel-level prediction.
Preferably, the output of the feature decoding module is up-sampled until its size matches that of the feature map input to the adaptive network, and the up-sampled result is used as the output of the adaptive network suitable for segmenting high-reflection bright spots in a retinal optical coherence tomography image.
Preferably, the loss function of the adaptive network suitable for high-reflection bright spot segmentation in the retina optical coherence tomography image is a loss function combining a Dice loss function and a binary cross entropy loss function.
Beneficial effects of the present application:
1. In the adaptive network suitable for segmenting high-reflection bright spots in retinal optical coherence tomography images, the adaptive SA module is applied to the deep layers of the encoder to adapt dynamically to segmentation targets of different sizes and to fuse global information, while the dual-residual DR module is applied at the encoder's down-sampling steps, simplifying the learning process of the whole network and strengthening gradient propagation while enhancing feature extraction. This improvement of the U-shaped segmentation network, built on the dual-residual DR module and the adaptive SA module, is well suited to segmenting high-reflection bright-spot regions in OCT images: it achieves good segmentation performance, improves on all four metrics (Dice coefficient, Jaccard index, sensitivity and accuracy), exhibits adaptive behavior, and offers a solution for segmenting irregularly shaped high-reflection bright spots.
2. By means of three parallel deformable convolution layers that allow the sampling grid to deform freely, the adaptive network achieves global information fusion and multi-scale adaptation to segmented objects of different sizes and shapes.
3. By using a loss function that combines the Dice loss and the binary cross-entropy loss, the design model of the adaptive network can be optimized and the data imbalance problem effectively addressed.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a schematic structural diagram of the present application;
FIG. 2 is a schematic structural diagram of an adaptive SA module according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a deformable convolution layer in an embodiment of the present application;
fig. 4 is a schematic diagram of a segmentation result of a high-reflection bright spot of a retina experimentally obtained in an embodiment of the present application.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Example (b): referring to fig. 1, an adaptive network for segmentation of high-reflection bright spots in a retinal optical coherence tomography image includes a feature encoding module, an adaptive SA module applied to a deep layer of the encoder module, and a plurality of feature decoding modules disposed in a decoder channel.
The feature encoding module comprises a feature extraction unit and a dual-residual DR module embedded at the down-sampling position of the feature extraction unit; the dual-residual DR module comprises two residual blocks, each consisting of a 1 × 1 convolutional layer, a 3 × 3 convolutional layer, a 1 × 1 convolutional layer, a batch normalization layer and a ReLU activation function arranged in sequence. In this embodiment, the feature extraction unit is a U-Net encoder layer, and the output of its fourth layer is connected to the feature input. To obtain representative feature maps, the encoder part of the feature encoding module adopts the same feature extraction structure as the U-Net encoder. To simplify learning for the whole network and strengthen gradient propagation, the dual-residual DR module proposed by the invention is embedded at the down-sampling position of each feature extraction unit. Applying the dual-residual DR module in the shallow encoder of the U-shaped network moderately increases network depth and enhances feature extraction while strengthening gradient propagation.
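The residual block described above can be sketched in NumPy. This is a minimal illustration, not the patented implementation: the weights are random stand-ins, the identity shortcut is assumed to join before the final ReLU (the patent does not state where the skip connection is added), and batch normalization is reduced to per-channel standardization without learned scale and shift.

```python
import numpy as np

def conv1x1(x, w):
    # x: (C_in, H, W); w: (C_out, C_in) -> pure channel mixing
    return np.einsum('oc,chw->ohw', w, x)

def conv3x3(x, w):
    # x: (C_in, H, W); w: (C_out, C_in, 3, 3); zero padding of 1
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    win = np.lib.stride_tricks.sliding_window_view(xp, (3, 3), axis=(1, 2))
    # win: (C_in, H, W, 3, 3)
    return np.einsum('ocij,chwij->ohw', w, win)

def batch_norm(x, eps=1e-5):
    # simplified: per-channel standardization, no learned gamma/beta
    mu = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def residual_block(x, w1, w3, w2):
    # 1x1 conv -> 3x3 conv -> 1x1 conv -> BN, then (assumed) skip add + ReLU
    y = conv1x1(x, w1)
    y = conv3x3(y, w3)
    y = conv1x1(y, w2)
    y = batch_norm(y)
    return np.maximum(y + x, 0.0)

def dual_residual(x, params1, params2):
    # the DR module chains two residual blocks
    return residual_block(residual_block(x, *params1), *params2)
```

With matching input and output channel counts, the spatial size and channel count of the feature map are preserved, which is what lets the DR block sit at a down-sampling position without disturbing the encoder's shapes.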
As shown in fig. 2 and 3, the adaptive SA module includes a feature input terminal, a deformable convolution layer, a matrix multiplication, and a pixel-level summation, the feature input terminal is connected to the feature encoding module, and the deformable convolution layer includes a 2D offset disposed at a sampling position of the regular grid for freely deforming the sampling grid. Through the arrangement of the self-adaptive SA module in the embodiment, global information and self-adaptive multi-scale segmentation targets can be effectively extracted and fused.
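The free deformation of the sampling grid rests on sampling the feature map at fractional positions via bilinear interpolation. A minimal NumPy sketch of that sampling step only (single channel, offsets given rather than learned; the offset-prediction branch and the convolution itself are omitted):

```python
import numpy as np

def bilinear_sample(img, ys, xs):
    # img: (H, W); ys, xs: float sample coordinates of the same shape
    H, W = img.shape
    y0 = np.clip(np.floor(ys).astype(int), 0, H - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, W - 2)
    dy = np.clip(ys - y0, 0.0, 1.0)
    dx = np.clip(xs - x0, 0.0, 1.0)
    tl = img[y0, x0]      # top-left neighbours
    tr = img[y0, x0 + 1]  # top-right
    bl = img[y0 + 1, x0]  # bottom-left
    br = img[y0 + 1, x0 + 1]
    top = tl * (1 - dx) + tr * dx
    bot = bl * (1 - dx) + br * dx
    return top * (1 - dy) + bot * dy

def offset_sample(img, offsets):
    # offsets: (H, W, 2) 2D offsets added to the regular grid positions,
    # playing the role of the learned offsets in a deformable convolution
    H, W = img.shape
    gy, gx = np.mgrid[0:H, 0:W].astype(float)
    return bilinear_sample(img, gy + offsets[..., 0], gx + offsets[..., 1])
```

With all offsets zero the sampling grid is the regular grid and the input is reproduced; non-zero offsets let each position look at a deformed neighbourhood, which is how the receptive field adapts to target shape.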
The adaptive SA module includes three parallel deformable convolution layers so as to adapt to segmentation targets of different sizes. The structure of the deformable convolution is shown in fig. 3: compared with standard convolution, it adds 2D offsets to the regular grid sampling positions so that the sampling grid can deform freely. The output of the fourth layer of the encoder part of the feature encoding module (i.e., the fourth layer of the feature extraction unit) serves as the input of the adaptive SA module. The feature input receives the feature map output by the feature encoding module; two of the deformable convolution layers convolve this feature map to generate a spatial attention map describing the spatial relationship between any two pixels of the features. The spatial attention map is then matrix-multiplied with the output of the third deformable convolution layer, and the result is summed at the pixel level with the feature map input to the adaptive SA module. This pixel-level sum is a better feature representation and serves as the output of the adaptive SA module for pixel-level prediction. Through the adaptive SA module of this embodiment, global information fusion and multi-scale adaptation to segmented objects of different sizes and shapes can be realized.
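The attention computation described above can be sketched as follows. For brevity the three deformable convolution branches are replaced by plain 1 × 1 (matrix) projections, and a softmax normalization of the attention map is assumed, as is standard in non-local spatial attention; neither substitution is stated in the patent.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention(x, wq, wk, wv):
    # x: (C, H, W); wq, wk, wv: (C, C) stand-ins for the three
    # deformable convolution branches of the SA module
    C, H, W = x.shape
    f = x.reshape(C, H * W)            # flatten spatial dimensions, N = H*W
    q = wq @ f                         # branch 1, (C, N)
    k = wk @ f                         # branch 2
    v = wv @ f                         # branch 3
    attn = softmax(q.T @ k, axis=-1)   # (N, N): relation of any two pixels
    out = v @ attn.T                   # aggregate features over all positions
    return (out + f).reshape(C, H, W)  # pixel-level summation with the input
```

The output keeps the input's shape, so the SA module can be dropped into the deepest encoder stage without changing the decoder's expected sizes.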
The feature decoding module reconstructs the high-level features generated by the adaptive SA module and progressively concatenates them with the local information guided by the dual-residual DR module via 2 × 2 deconvolution layers. Up-sampling the fused feature map with 2 × 2 deconvolution reduces the model's parameters and suppresses checkerboard artifacts. The result of a final 1 × 1 convolution is taken as the output of the feature decoding module.
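Because the 2 × 2 kernel equals the stride, each input pixel paints a scaled, non-overlapping 2 × 2 patch in the output, which is why this choice avoids the overlap that causes checkerboard artifacts. A single-channel NumPy sketch (the kernel here is a given array standing in for a learned parameter):

```python
import numpy as np

def deconv2x2(x, kernel):
    # x: (H, W); kernel: (2, 2). Stride-2 transposed convolution with a
    # 2x2 kernel: output patches tile without overlap, so np.kron
    # (each x[i, j] scaled copy of the kernel) computes it exactly.
    return np.kron(x, kernel)
```

The output is (2H, 2W), doubling the spatial resolution at each decoder stage, matching the 2× down-sampling on the encoder side.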
In this embodiment, the output of the feature decoding module is up-sampled until its size matches that of the feature map input to the adaptive network, and the up-sampled result is used as the output of the adaptive network of this embodiment for segmenting high-reflection bright spots in retinal optical coherence tomography images.
As a preferred implementation of this embodiment, the loss function of the adaptive network for segmenting high-reflection bright spots in retinal optical coherence tomography images combines the Dice loss and the binary cross-entropy loss. Imbalanced data distribution is a major challenge in medical image segmentation; with this arrangement, the design model of the invention can be optimized and the data imbalance problem effectively overcome.
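A minimal NumPy sketch of the combined loss. The equal weighting alpha = 0.5 between the two terms and the smoothing constant are assumptions; the patent only states that the Dice loss and the binary cross-entropy loss are combined.

```python
import numpy as np

def dice_loss(pred, target, smooth=1.0):
    # soft Dice loss; smooth keeps the ratio defined on empty masks
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + smooth) / (pred.sum() + target.sum() + smooth)

def bce_loss(pred, target, eps=1e-7):
    # binary cross-entropy on probabilities, clipped for numerical safety
    p = np.clip(pred, eps, 1.0 - eps)
    return -(target * np.log(p) + (1 - target) * np.log(1 - p)).mean()

def combined_loss(pred, target, alpha=0.5):
    # alpha is an assumed weighting; the patent does not specify one
    return alpha * dice_loss(pred, target) + (1 - alpha) * bce_loss(pred, target)
```

The Dice term directly targets overlap and is insensitive to the large negative background, while the BCE term gives smooth per-pixel gradients; combining them is a common remedy for class imbalance.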
Against the background of the prior art, hard exudates are a prominent fundus change of diabetic retinopathy, manifesting mainly as high-reflection bright spots (HRF) in retinal OCT images. Automatic segmentation of HRF in OCT images is very challenging, mainly for two reasons: (i) target boundaries are blurred and OCT images suffer from severe speckle noise; (ii) segmentation targets differ greatly in size and shape across data. The adaptive network of this embodiment must address these problems; the present embodiment therefore verifies it experimentally.
The data for this experiment comprise 112 retinal OCT B-scans of size 512 × 992, acquired from 28 different patients with a Topcon Atlantis DRI-1 OCT scanner; HRF were manually annotated by a senior ophthalmologist. Because of the small amount of data, four-fold cross-validation was used, each fold containing 28 images from 7 different patients. To reduce computation, all OCT images were resized to 256 × 512. To improve the model's generalization and prevent overfitting, the data were augmented online with vertical flipping, horizontal flipping, random rotation, affine transformation and added Gaussian noise. To evaluate the model quantitatively, the Dice coefficient, Jaccard index, sensitivity and accuracy were used as metrics. The results are shown in Table 1.
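The four evaluation metrics can all be computed from the binary confusion counts of a predicted mask against the gold standard. A NumPy sketch; note that "accuracy" is taken here as overall pixel accuracy, which is an assumption since the patent does not define that metric precisely.

```python
import numpy as np

def seg_metrics(pred, gt):
    # pred, gt: binary segmentation masks of equal shape
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # true positives
    fp = np.logical_and(pred, ~gt).sum()   # false positives
    fn = np.logical_and(~pred, gt).sum()   # false negatives
    tn = np.logical_and(~pred, ~gt).sum()  # true negatives
    dice = 2 * tp / (2 * tp + fp + fn)
    jaccard = tp / (tp + fp + fn)
    sensitivity = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)  # assumed: pixel accuracy
    return dice, jaccard, sensitivity, accuracy
```

For HRF, where foreground pixels are a tiny fraction of the image, Dice and Jaccard are the informative numbers; pixel accuracy is dominated by the background and is close to 1 for any reasonable prediction.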
TABLE 1
(Table 1 is rendered as an image in the original publication.)
Segmentation results for some high-reflection bright spots are shown in FIG. 4: the first column is the original OCT image; the second, the gold-standard segmentation; the third, the U-Net result; the fourth, the result of the baseline network (Baseline); and the fifth, the result of the adaptive network of this embodiment. The results in Table 1 and FIG. 4 show that the comparison methods FCN and U-Net perform slightly worse on this segmentation task than Baseline, a network designed on the U-Net structure with the DR module embedded in the encoder, which improves on multiple metrics. Compared with Baseline, the Baseline + SA model further improves the Dice coefficient, Jaccard index and accuracy. For a fair comparison, the number of channels of Baseline was increased until its parameter count matched that of the proposed network, yielding the Baseline_Wide model; its segmentation performance did not improve significantly, indicating that simply enlarging the network scale cannot markedly improve segmentation. The proposed adaptive network (SANet) achieves the best results on all four metrics (Dice coefficient, Jaccard index, sensitivity and accuracy), demonstrating the important roles of the DR module and the SA module in information fusion and feature adaptation.
In summary, the adaptive network of this embodiment for segmenting high-reflection bright spots in retinal optical coherence tomography images has been verified. Built on the proposed dual-residual DR module and adaptive SA module, it overcomes the original U-Net's difficulty in acquiring and fusing global and multi-scale feature information and in adapting to segmented objects of different sizes. The verification results show that its segmentation performance is good and that it is practical for segmenting high-reflection bright spots in retinal OCT images. Moreover, the DR module and the SA module of this embodiment can be applied generally and effectively to improve the performance of other networks that use an encoder-decoder architecture.
The foregoing description is for the purpose of illustration and is not for the purpose of limitation. Many embodiments and many applications other than the examples provided will be apparent to those of skill in the art upon reading the above description. The scope of the present teachings should, therefore, be determined not with reference to the above description, but should instead be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. The disclosures of all articles and references, including patent applications and publications, are hereby incorporated by reference for all purposes. The omission in the foregoing claims of any aspect of subject matter that is disclosed herein is not intended to forego the subject matter and should not be construed as an admission that the applicant does not consider such subject matter to be part of the disclosed subject matter.

Claims (5)

1. An adaptive network suitable for segmenting high-reflection bright spots in a retinal optical coherence tomography image, characterized by comprising a feature encoding module, an adaptive SA module applied to the deep layers of the encoder module, and a plurality of feature decoding modules arranged in the decoder path;
the feature encoding module comprises a feature extraction unit and a dual-residual DR module embedded at a down-sampling position of the feature extraction unit, wherein the dual-residual DR module comprises two residual blocks, each comprising a 1 × 1 convolutional layer, a 3 × 3 convolutional layer, a 1 × 1 convolutional layer, a batch normalization layer and a ReLU activation function arranged in sequence;
the adaptive SA module comprises a feature input, a deformable convolution layer, matrix multiplication and pixel-level summation, wherein the feature input is connected to the feature encoding module, and the deformable convolution layer adds 2D offsets to the regular grid sampling positions so that the sampling grid can deform freely;
the feature decoding module reconstructs the high-level features generated by the adaptive SA module, progressively concatenates them, via 2 × 2 deconvolution, with the local information guided by the dual-residual DR module, and takes the result of a 1 × 1 convolution as its output.
2. The adaptive network for segmentation of high-reflection bright spots in optical coherence tomography images of retinas according to claim 1, wherein the feature extraction unit is a U-Net encoder layer, and an output of a fourth layer of the feature extraction unit is connected to the feature input terminal.
3. The adaptive network for segmenting high-reflection bright spots in a retinal optical coherence tomography image as claimed in claim 1, wherein the adaptive SA module comprises three parallel deformable convolution layers; the feature input receives a feature map output by the feature encoding module; two of the deformable convolution layers generate from the feature map a spatial attention map describing the spatial relationship between any two pixels of the features; the spatial attention map is matrix-multiplied with the output of the third deformable convolution layer; the obtained result is summed at the pixel level with the feature map input to the adaptive SA module; and the result of the pixel-level summation is used as the output of the adaptive SA module for pixel-level prediction.
4. The adaptive network for segmenting high-reflection bright spots in a retinal optical coherence tomography image as claimed in claim 1, wherein the output of the feature decoding module is up-sampled until its size matches that of the feature map input to the adaptive network, and the up-sampled result is used as the output of the adaptive network.
5. The adaptive network for segmentation of high reflection bright spots in an optical coherence tomography image of a retina as claimed in claim 1, wherein the loss function of the adaptive network for segmentation of high reflection bright spots in an optical coherence tomography image of a retina is a combination of a Dice loss function and a binary cross entropy loss function.
CN202011160346.XA 2020-10-27 2020-10-27 Self-adaptive network suitable for high-reflection bright spot segmentation in retina optical coherence tomography image Withdrawn CN112308829A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011160346.XA CN112308829A (en) 2020-10-27 2020-10-27 Self-adaptive network suitable for high-reflection bright spot segmentation in retina optical coherence tomography image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011160346.XA CN112308829A (en) 2020-10-27 2020-10-27 Self-adaptive network suitable for high-reflection bright spot segmentation in retina optical coherence tomography image

Publications (1)

Publication Number Publication Date
CN112308829A 2021-02-02

Family

ID=74330685

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011160346.XA Withdrawn CN112308829A (en) 2020-10-27 2020-10-27 Self-adaptive network suitable for high-reflection bright spot segmentation in retina optical coherence tomography image

Country Status (1)

Country Link
CN (1) CN112308829A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112819798A (en) * 2021-02-05 2021-05-18 苏州大学 Context attention and fusion network suitable for joint segmentation of multiple retinal hydrops
CN112819798B (en) * 2021-02-05 2023-06-13 苏州大学 Contextual awareness and fusion network system for multiple retinal hydrops joint segmentation
CN113205538A (en) * 2021-05-17 2021-08-03 广州大学 Blood vessel image segmentation method and device based on CRDNet
CN113469269A (en) * 2021-07-16 2021-10-01 上海电力大学 Residual convolution self-coding wind-solar-charged scene generation method based on multi-channel fusion
CN115082500A (en) * 2022-05-31 2022-09-20 苏州大学 Corneal nerve fiber segmentation method based on multi-scale and local feature guide network
CN115082500B (en) * 2022-05-31 2023-07-11 苏州大学 Cornea nerve fiber segmentation method based on multiscale and local feature guide network
CN116777794A (en) * 2023-08-17 2023-09-19 简阳市人民医院 Cornea foreign body image processing method and system
CN116777794B (en) * 2023-08-17 2023-11-03 简阳市人民医院 Cornea foreign body image processing method and system

Similar Documents

Publication Publication Date Title
CN112308829A (en) Self-adaptive network suitable for high-reflection bright spot segmentation in retina optical coherence tomography image
Nguyen et al. Super-resolution for biometrics: A comprehensive survey
EP3948764B1 (en) Method and apparatus for training neural network model for enhancing image detail
EP0927405B1 (en) Image processing electronic device for detecting dimensional variations
US10380421B2 (en) Iris recognition via plenoptic imaging
CN110473142B (en) Single image super-resolution reconstruction method based on deep learning
JP7340107B2 (en) Image reconstruction method, apparatus, device, system and computer-readable storage medium
JP6446374B2 (en) Improvements in image processing or improvements related to image processing
CN116433914A (en) Two-dimensional medical image segmentation method and system
Liu et al. Deep image inpainting with enhanced normalization and contextual attention
Dövencioğlu et al. Specular motion and 3D shape estimation
Li et al. A deep‐learning‐based approach for noise reduction in high‐speed optical coherence Doppler tomography
Yao et al. SANet: a self-adaptive network for hyperreflective foci segmentation in retinal OCT images
CN113538254A (en) Image restoration method and device, electronic equipment and computer readable storage medium
CN117455829A (en) Anti-learning-based diabetic retinopathy detection algorithm
CN116503422A (en) Eye cup video disc segmentation method based on attention mechanism and multi-scale feature fusion
Li et al. Underwater image enhancement utilizing adaptive color correction and model conversion for dehazing
CN113222879B (en) Generation countermeasure network for fusion of infrared and visible light images
CN109934902B (en) Gradient domain rendering image reconstruction method using scene feature constraint
van Tonder et al. Bottom–up clues in target finding: Why a Dalmatian may be mistaken for an elephant
CN109584257B (en) Image processing method and related equipment
Zhao et al. RISSNet: Retain low‐light image details and improve the structural similarity net
Zhou et al. A comparative study on wavelets and residuals in deep super resolution
Stojanov et al. The benefits of depth information for head-mounted gaze estimation
CN117876470B (en) Method and system for extracting central line of laser bar of transparent optical lens

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220316

Address after: 215011 Building 2, No. 209, Zhuyuan Road, High-tech Zone, Suzhou, Jiangsu

Applicant after: SUZHOU BIGVISION MEDICAL TECHNOLOGY Co.,Ltd.

Address before: 215000, No. 1 Shizi Street, Suzhou, Jiangsu

Applicant before: SOOCHOW University

WW01 Invention patent application withdrawn after publication

Application publication date: 20210202