CN111325755A - U-shaped network and method for segmenting nerve fibers in cornea image - Google Patents


Info

Publication number
CN111325755A
CN111325755A
Authority
CN
China
Prior art keywords
convolution
layer
image
nerve fibers
layers
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010068764.XA
Other languages
Chinese (zh)
Other versions
CN111325755B (en)
Inventor
陈新建
石霏
周鑫鑫
朱伟芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou University
Original Assignee
Suzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou University filed Critical Suzhou University
Priority to CN202010068764.XA priority Critical patent/CN111325755B/en
Publication of CN111325755A publication Critical patent/CN111325755A/en
Application granted granted Critical
Publication of CN111325755B publication Critical patent/CN111325755B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/02 Affine transformations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30041 Eye; Retina; Ophthalmic
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing


Abstract

The invention discloses a U-shaped network and a method for segmenting nerve fibers in a cornea image. The U-shaped network comprises a 4-layer encoder and a 4-layer decoder connected symmetrically across layers, with a multi-scale separation and fusion module added after each up-sampling operation in the decoder part. The U-shaped network reduces the number of parameters, enlarges the receptive field and improves segmentation performance. The segmentation method adopts the trained U-shaped network, with a loss function that combines the fiber length difference between the prediction map and the gold standard with the Dice loss to jointly constrain the network. The invention can accurately segment fine corneal nerve fibers and improves the segmentation accuracy of corneal nerve fibers.

Description

U-shaped network and method for segmenting nerve fibers in cornea image
Technical Field
The invention relates to a U-shaped network and a method for segmenting nerve fibers in a cornea image, and belongs to the technical field of image processing.
Background
Segmentation and analysis of corneal nerve images are receiving increasing attention. These images, obtained by corneal confocal microscopy, record changes in the corneal nerves. Corneal damage and corneal disease are often associated with alterations in corneal nerve fibers. Segmentation of nerve fibers provides useful information for post-operative regeneration and repair of corneal lesions, the effects of extended contact lens wear, quantitative analysis of varying degrees of diabetic peripheral neuropathy, and the like.
Traditional corneal nerve image segmentation methods cannot segment nerve fibers well. For example, corneal cells present in a nerve fiber image can interfere with segmentation; in addition, for diseased corneas, abnormal regions in the corneal nerve image reduce segmentation accuracy, and segmenting thin nerve fibers remains a great challenge.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art and provides a U-shaped network and a method for segmenting nerve fibers in a corneal image, which are applied to corneal nerve image segmentation and improve corneal nerve fiber segmentation accuracy.
In order to achieve the purpose, the invention provides the following technical scheme:
in a first aspect, the present invention provides a U-type network comprising an encoder and a decoder connected symmetrically across layers;
the encoder comprises four layers, each layer of the first three layers comprises 2 convolutional layers and 1 pooling layer, the last layer comprises 2 convolutional layers, and the number of channels is increased layer by layer;
the decoder also has four layers, the first layer comprises 1 up-sampling, each layer of the last three layers comprises 1 multi-scale separation and fusion module, 2 convolution layers and 1 up-sampling, and the number of channels decreases gradually layer by layer;
the output end of each layer of the coder is connected to the input end of the decoder which is symmetrical to the output end of each layer of the coder; each layer of the last three layers of the decoder is connected with the previous layer through respective multi-scale separation and fusion modules.
With reference to the first aspect, the multi-scale separation and fusion module includes a first convolution, a second convolution, a third convolution, a fourth convolution, a fifth convolution, a separation module and a fusion module, where the first convolution and the second convolution are 1 × 1 convolutions, and the third convolution, the fourth convolution and the fifth convolution are 9 × 9 convolutions;
after the first convolution, the feature map is divided into 4 sub-channels by the separation module; the fourth sub-channel is directly connected with the input end of the fusion module; the third sub-channel is connected, after the fifth convolution, with the input end of the fourth convolution and with the input end of the fusion module; the second sub-channel is connected, after the fourth convolution, with the input end of the third convolution and with the input end of the fusion module; the first sub-channel is connected with the input end of the fusion module through the third convolution; and the fusion module fuses the feature information output by the four sub-channels and outputs the result.
In a second aspect, the present invention provides a method for segmenting nerve fibers in a corneal image, the method comprising the steps of: segmenting a cornea image by adopting the trained U-shaped network of claim 1 or 2 to obtain a cornea nerve fiber image;
wherein the loss function used to train the U-shaped network is:
Loss = α × L_Dice
wherein Loss is the pre-designed loss function, α = 1 + mr, and
mr = |l_x − l_y| / l_x
l_x denotes the total length of corneal nerve fibers in the gold standard image x, and l_y denotes the total length of corneal nerve fibers in the binary segmentation image y, the binary segmentation image y being obtained by applying a thresholding method to the U-shaped network output image;
L_Dice = 1 − (2 Σ_{i=1}^{N} p_i g_i) / (Σ_{i=1}^{N} p_i + Σ_{i=1}^{N} g_i)
N denotes the total number of pixels in each image, g_i denotes the pixel value of the i-th pixel point in the gold standard image, and p_i denotes the pixel value of the i-th pixel point in the binary segmentation image y.
With reference to the second aspect, further, the method further includes:
carrying out data enhancement processing on a cornea image to be segmented;
the data enhancement processing includes horizontal or vertical flipping, affine transformation, and additive gaussian noise processing.
With reference to the second aspect, further, the threshold used by the thresholding method is 0.5.
With reference to the second aspect, further, in the training process of the U-shaped network, a stochastic gradient descent algorithm with an initial learning rate of 0.01 and a momentum of 0.9 is adopted to optimize the U-shaped network; the batch size is 2, the number of iterations is 80, and each time an iteration is completed, the performance of the U-shaped network is tested on the validation set.
Compared with the prior art, the invention has the following beneficial effects:
the 4-layer U-shaped network is adopted, the encoder and the decoder are symmetrically connected in a cross-layer mode, the number of parameters is obviously reduced on the premise that the performance is kept unchanged, and the multi-scale separation and fusion module is added after the up-sampling operation of the decoding part, so that the global information and the local information can be simultaneously extracted, and the characteristic extraction capability is enhanced; the loss function combines the fiber length difference between the prediction graph and the gold standard with the Dice loss to jointly constrain a U-shaped network, so that thinner corneal nerve fibers with lower contrast in a corneal image can be identified, and the output result is more reliable.
Drawings
Fig. 1 is a schematic structural diagram of a U-type network provided according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a multi-scale separation and fusion module provided in accordance with an embodiment of the present invention;
fig. 3 is a graph of experimental comparison results on corneal confocal microscopy images.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
In the description of the present invention, it is to be understood that the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, a feature defined as "first," "second," etc. may explicitly or implicitly include one or more of that feature.
As shown in fig. 1, the U-shaped network provided by the embodiment of the present invention is a symmetric encoder-decoder structure connected across layers. Viewed from the image data input end, the encoder has 4 layers: each of the first 3 layers comprises 2 convolutional layers and 1 pooling layer, the last layer comprises 2 convolutional layers, and the number of channels increases layer by layer. The decoder also has 4 layers: the first layer comprises 1 up-sampling operation, each of the last three layers comprises 1 multi-scale separation and fusion (MSC) module, 2 convolutional layers and 1 up-sampling operation, and the number of channels decreases layer by layer. The output of each encoder layer is connected to the input of the symmetric decoder layer. The encoder generates feature-map detail information at different resolutions; connecting this detail information across layers to the decoder helps the network recover an output map close to the gold standard.
The specific structure of the multi-scale separation and fusion (MSC) module is shown in FIG. 2. After a 1 × 1 convolution, the feature map is evenly divided into 4 sub-channel sets, denoted I1 to I4. I4 is passed to O4 directly without any transformation; I3 is convolved with a 9 × 9 kernel to obtain O3; O3 is added to I2 and the sum is convolved with a 9 × 9 kernel to obtain O2; O2 is added to I1 and the sum is convolved with a 9 × 9 kernel to obtain O1. The multi-scale feature maps O1 to O4 are then fused.
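The separation-transform-fusion dataflow described above can be sketched in a few lines. The following is an illustrative NumPy sketch, not the patented implementation: the learned 9 × 9 convolutions are replaced by a fixed box-filter stand-in, and fusion is assumed to be channel concatenation.

```python
import numpy as np

def smooth(x):
    # Stand-in for a learned 9 x 9 convolution: a fixed 3 x 3 box filter.
    # This is a hypothetical placeholder; the patent uses trainable kernels.
    c, h, w = x.shape
    p = np.pad(x, ((0, 0), (1, 1), (1, 1)), mode="edge")
    return sum(p[:, i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def msc_forward(feat):
    # Multi-scale separation and fusion (MSC) dataflow for a (C, H, W)
    # feature map with C divisible by 4, following the description of FIG. 2:
    # I4 passes through unchanged; I3 is convolved; each earlier sub-channel
    # is added to the previous output before its own convolution.
    c = feat.shape[0] // 4
    i1, i2, i3, i4 = (feat[k * c:(k + 1) * c] for k in range(4))
    o4 = i4                 # no transformation
    o3 = smooth(i3)         # conv(I3)
    o2 = smooth(i2 + o3)    # conv(I2 + O3)
    o1 = smooth(i1 + o2)    # conv(I1 + O2)
    # Fusion assumed here to be channel concatenation; the patent's second
    # 1 x 1 convolution after fusion is omitted.
    return np.concatenate([o1, o2, o3, o4], axis=0)
```

Because O4 bypasses all transformations while O1 passes through three stacked convolutions, the four outputs carry effective receptive fields of increasing size, which is the multi-scale effect the module targets.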
The method for segmenting nerve fibers in the cornea image provided by the embodiment of the invention is further described in detail in three aspects of data acquisition and preprocessing, segmentation method design and U-shaped network training and testing.
1) Data acquisition and preprocessing
The data set included 90 two-dimensional images of corneal nerve fibers obtained from a confocal microscope, with an image size of 384 × 384 pixels corresponding to 400 μm × 400 μm. Of the 90 images, 50 were from 4 normal eyes and 40 were from 4 eyes with corneal disease.
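Claim 4 lists the data enhancement applied to such images: horizontal or vertical flipping, affine transformation, and additive Gaussian noise. A minimal NumPy sketch of the flip-and-noise part follows; the noise standard deviation is an assumed value (the patent does not state one), and the affine transform is omitted since it needs an interpolation routine.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    # Random horizontal / vertical flip plus additive Gaussian noise,
    # for an image with intensities normalized to [0, 1].
    if rng.random() < 0.5:
        img = img[:, ::-1]                            # horizontal flip
    if rng.random() < 0.5:
        img = img[::-1, :]                            # vertical flip
    img = img + rng.normal(0.0, 0.01, img.shape)      # assumed sigma = 0.01
    return np.clip(img, 0.0, 1.0)
```

The same flip decisions would be applied to the gold standard mask in practice (without the noise), so that image and label stay aligned.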
2) The design of the segmentation method comprises the following steps:
the design of the segmentation method is divided into the design of a network structure and the design of a loss function.
As for the design of the network structure, as described above, the MSC (multi-scale separation and fusion) module is added after the up-sampling operations in the decoding part of the 4-layer U-Net network; details are not repeated here.
For the design of the loss function, the length difference of nerve fibers between the segmentation graph and the gold standard is mainly defined as a coefficient of the Dice loss function, and the specific details are as follows:
because the foreground only occupies a small part of the image, the network is more focused on the foreground by using a Dice loss function; however, thicker fibers contribute more when the Dice loss function is used, and the network tends to favor accurate segmentation of thicker fibers, often ignoring finer and shorter nerve fibers. The embodiment of the invention adds the length difference into the loss function, thereby reducing the deviation caused by the width.
A threshold of 0.5 is applied to the output of the U-shaped network to generate a binary segmentation result. Let x be the gold standard image and y the binary segmentation image obtained by thresholding. Thinning operations are applied to x and y, the total number of fiber pixels in each of the resulting skeleton images is counted, and these totals give the total fiber lengths l_x and l_y. The difference between them, mr, is defined as:
mr = |l_x − l_y| / l_x (1)
the improved loss function is:
Loss = α × L_Dice (2)
wherein:
α = 1 + mr (3)
by piThe pixel value g representing the ith pixel point in the two-value segmentation graph yiThe pixel value of the ith pixel point in the golden standard x, the total number of the pixels of each image is N, and then LDiceIs defined as follows:
L_Dice = 1 − (2 Σ_{i=1}^{N} p_i g_i) / (Σ_{i=1}^{N} p_i + Σ_{i=1}^{N} g_i) (4)
the loss function can guide the U-shaped network to pay more attention to the difference of the length of the nerve fiber in the optimization process, further solve the problem of class imbalance and contribute to the detection of the thin fiber.
3) Training and testing of U-type networks:
in the training process, the embodiment of the invention optimizes the network with a stochastic gradient descent algorithm with an initial learning rate of 0.01 and a momentum of 0.9. The batch size is 2 and the number of iterations is 80. Each time an iteration ends, the performance of the U-shaped network is evaluated on the validation set. In the testing process, the normal data set and the lesion data set are tested separately. Four-fold cross validation is adopted in the experiments to test the generalization capability and stability of the model, making the output results more reliable.
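The reported optimizer settings correspond to repeated steps of momentum SGD. A dependency-free sketch of the classical update rule follows; note that framework implementations (e.g. PyTorch's SGD) accumulate the momentum buffer slightly differently, so this is an illustration of the principle, not of any specific library.

```python
def sgd_momentum_step(w, grad, velocity, lr=0.01, momentum=0.9):
    # One update of SGD with momentum, using the reported settings
    # (initial learning rate 0.01, momentum 0.9):
    #   v <- momentum * v - lr * grad
    #   w <- w + v
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity
```

In the full training loop this update would be applied to every network parameter after each batch of 2 images, with the learning rate typically decayed from its initial value over the 80 iterations.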
To further demonstrate the effects of the embodiments of the present invention, the following is further illustrated with reference to the experimental results:
the experiment adopts 5 indexes of accuracy (Acc), Daiss Similarity Coefficient (DSC), area under ROC curve (AUC), sensitivity (Se) and specificity (Sp) to quantitatively evaluate the performance of the segmentation method provided by the embodiment of the invention. The correlation index is defined as follows:
Acc = (TP + TN) / (TP + TN + FP + FN), DSC = 2TP / (2TP + FP + FN)
Se = TP / (TP + FN), Sp = TN / (TN + FP)
wherein TP is the number of pixels with true positive prediction result, FP is the number of pixels with false positive prediction result, FN is the number of pixels with false negative prediction result, and TN is the number of pixels with true negative prediction result.
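Given the TP, FP, FN and TN counts defined above, four of the five indexes have closed forms; AUC requires the continuous prediction scores and a ROC sweep, so it is omitted from this plain-Python sketch.

```python
def segmentation_metrics(pred, gold):
    # pred, gold: iterables of 0/1 pixel labels (flattened binary masks).
    # Assumes at least one positive and one negative gold pixel, so no
    # denominator is zero.
    tp = sum(p == 1 and g == 1 for p, g in zip(pred, gold))
    fp = sum(p == 1 and g == 0 for p, g in zip(pred, gold))
    fn = sum(p == 0 and g == 1 for p, g in zip(pred, gold))
    tn = sum(p == 0 and g == 0 for p, g in zip(pred, gold))
    return {
        "Acc": (tp + tn) / (tp + tn + fp + fn),
        "DSC": 2 * tp / (2 * tp + fp + fn),   # Dice on binary masks
        "Se": tp / (tp + fn),
        "Sp": tn / (tn + fp),
    }
```

Note that DSC computed this way on thresholded masks equals 1 minus the Dice loss evaluated on the same binary inputs.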
As shown in table 1, analysis of the test-set results after training shows that, on both the normal and the pathological data, the method provided by the embodiment of the present invention outperforms the reference network, and the improved U-shaped network combined with the proposed loss function achieves the best performance.
Table 1:
[Table 1: quantitative comparison of segmentation results; reproduced as an image in the original publication.]
as shown in fig. 3, (a) is the original nerve fiber image (the first two are from the normal data set, and the third is from the lesion data set), (b) is the segmentation result of baseline (reference model), and (c) is the segmentation result of the segmentation method provided by the embodiment of the present invention, it can be seen that the segmentation method provided by the embodiment of the present invention can identify thinner corneal nerve fibers with lower contrast than the reference network.
In conclusion, the embodiment of the invention reduces the original 5-layer U-Net network to 4 layers, greatly reducing the number of parameters while keeping performance unchanged; an MSC (multi-scale separation and fusion) module added after the up-sampling operations in the decoding part of the network enlarges the receptive field and improves segmentation performance; and the existing Dice loss function is improved based on the nerve fiber length difference between the segmentation map and the gold standard, so that the model pays more attention to accurate segmentation of fine fibers. The experimental results demonstrate the effectiveness of the method and provide more accurate information for quantitative analysis of corneal nerve fibers.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.

Claims (6)

1. A U-type network comprising an encoder and a decoder connected symmetrically across layers;
the encoder comprises four layers, each layer of the first three layers comprises 2 convolutional layers and 1 pooling layer, the last layer comprises 2 convolutional layers, and the number of channels is increased layer by layer;
the decoder also has four layers, the first layer comprises 1 up-sampling, each layer of the last three layers comprises 1 multi-scale separation and fusion module, 2 convolution layers and 1 up-sampling, and the number of channels decreases gradually layer by layer;
the output end of each layer of the coder is connected to the input end of the decoder which is symmetrical to the output end of each layer of the coder; each layer of the last three layers of the decoder is connected with the previous layer through respective multi-scale separation and fusion modules.
2. The U-network of claim 1, wherein said multi-scale separation and fusion module comprises a first convolution, a second convolution, a third convolution, a fourth convolution, a fifth convolution, a separation module and a fusion module, wherein said first convolution and second convolution are 1 × 1 convolutions, and wherein said third convolution, fourth convolution and fifth convolution are 9 × 9 convolutions;
after the first convolution, the feature map is divided into 4 sub-channels by the separation module; the fourth sub-channel is directly connected with the input end of the fusion module; the third sub-channel is connected, after the fifth convolution, with the input end of the fourth convolution and with the input end of the fusion module; the second sub-channel is connected, after the fourth convolution, with the input end of the third convolution and with the input end of the fusion module; the first sub-channel is connected with the input end of the fusion module through the third convolution; and the fusion module fuses the feature information output by the four sub-channels and outputs the result.
3. A method for segmenting nerve fibers in a corneal image, the method comprising the steps of: segmenting a cornea image by adopting the trained U-shaped network of claim 1 or 2 to obtain a cornea nerve fiber image;
wherein the loss function used to train the U-shaped network is:
Loss = α × L_Dice
wherein Loss is the pre-designed loss function, α = 1 + mr, and
mr = |l_x − l_y| / l_x
l_x denotes the total length of corneal nerve fibers in the gold standard image x, and l_y denotes the total length of corneal nerve fibers in the binary segmentation image y, the binary segmentation image y being obtained by applying a thresholding method to the U-shaped network output image;
L_Dice = 1 − (2 Σ_{i=1}^{N} p_i g_i) / (Σ_{i=1}^{N} p_i + Σ_{i=1}^{N} g_i)
N denotes the total number of pixels in each image, g_i denotes the pixel value of the i-th pixel point in the gold standard image, and p_i denotes the pixel value of the i-th pixel point in the binary segmentation image y.
4. The method of segmenting nerve fibers in a corneal image as in claim 3, further comprising:
carrying out data enhancement processing on a cornea image to be segmented;
the data enhancement processing includes horizontal or vertical flipping, affine transformation, and additive gaussian noise processing.
5. The method for segmenting nerve fibers in a corneal image according to claim 3, wherein a threshold value used in the threshold value method is 0.5.
6. The method for segmenting nerve fibers in a corneal image according to claim 3,
in the training process of the U-shaped network, a stochastic gradient descent algorithm with an initial learning rate of 0.01 and a momentum of 0.9 is adopted to optimize the U-shaped network; the batch size is 2, the number of iterations is 80, and each time an iteration is completed, the performance of the U-shaped network is tested on the validation set.
CN202010068764.XA 2020-01-21 2020-01-21 U-shaped network and method for segmenting nerve fibers in cornea image Active CN111325755B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010068764.XA CN111325755B (en) 2020-01-21 2020-01-21 U-shaped network and method for segmenting nerve fibers in cornea image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010068764.XA CN111325755B (en) 2020-01-21 2020-01-21 U-shaped network and method for segmenting nerve fibers in cornea image

Publications (2)

Publication Number Publication Date
CN111325755A true CN111325755A (en) 2020-06-23
CN111325755B CN111325755B (en) 2024-04-09

Family

ID=71173146

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010068764.XA Active CN111325755B (en) U-shaped network and method for segmenting nerve fibers in cornea image

Country Status (1)

Country Link
CN (1) CN111325755B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115082500A (en) * 2022-05-31 2022-09-20 苏州大学 Corneal nerve fiber segmentation method based on multi-scale and local feature guide network
CN117649417A (en) * 2024-01-30 2024-03-05 苏州慧恩齐家医疗科技有限公司 Cornea nerve fiber segmentation system, method, computer equipment and storage medium

Citations (3)

Publication number Priority date Publication date Assignee Title
CN109635711A (en) * 2018-12-07 2019-04-16 上海衡道医学病理诊断中心有限公司 A kind of pathological image dividing method based on deep learning network
US10482603B1 (en) * 2019-06-25 2019-11-19 Artificial Intelligence, Ltd. Medical image segmentation using an integrated edge guidance module and object segmentation network
CN110517235A (en) * 2019-08-19 2019-11-29 苏州大学 One kind carrying out OCT image choroid automatic division method based on GCS-Net

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN109635711A (en) * 2018-12-07 2019-04-16 上海衡道医学病理诊断中心有限公司 A kind of pathological image dividing method based on deep learning network
US10482603B1 (en) * 2019-06-25 2019-11-19 Artificial Intelligence, Ltd. Medical image segmentation using an integrated edge guidance module and object segmentation network
CN110517235A (en) * 2019-08-19 2019-11-29 苏州大学 One kind carrying out OCT image choroid automatic division method based on GCS-Net

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN115082500A (en) * 2022-05-31 2022-09-20 苏州大学 Corneal nerve fiber segmentation method based on multi-scale and local feature guide network
CN117649417A (en) * 2024-01-30 2024-03-05 苏州慧恩齐家医疗科技有限公司 Cornea nerve fiber segmentation system, method, computer equipment and storage medium
CN117649417B (en) * 2024-01-30 2024-04-26 苏州慧恩齐家医疗科技有限公司 Cornea nerve fiber segmentation system, method, computer equipment and storage medium

Also Published As

Publication number Publication date
CN111325755B (en) 2024-04-09

Similar Documents

Publication Publication Date Title
CN111325751A (en) CT image segmentation system based on attention convolution neural network
CN111738363B (en) Alzheimer disease classification method based on improved 3D CNN network
CN109191472A (en) Based on the thymocyte image partition method for improving U-Net network
CN113223005B (en) Thyroid nodule automatic segmentation and grading intelligent system
CN111951288A (en) Skin cancer lesion segmentation method based on deep learning
CN113192076B (en) MRI brain tumor image segmentation method combining classification prediction and multi-scale feature extraction
CN111640128A (en) Cell image segmentation method based on U-Net network
CN111325755A (en) U-shaped network and method for segmenting nerve fibers in cornea image
CN112132827A (en) Pathological image processing method and device, electronic equipment and readable storage medium
CN115375711A (en) Image segmentation method of global context attention network based on multi-scale fusion
CN114972254A (en) Cervical cell image segmentation method based on convolutional neural network
CN110991374B (en) Fingerprint singular point detection method based on RCNN
CN114972202A (en) Ki67 pathological cell rapid detection and counting method based on lightweight neural network
CN114972753A (en) Lightweight semantic segmentation method and system based on context information aggregation and assisted learning
CN111612803B (en) Vehicle image semantic segmentation method based on image definition
CN109685118A (en) Weak classifier Adaboost vehicle detection method based on convolutional neural network characteristics
CN117523202A (en) Fundus blood vessel image segmentation method based on visual attention fusion network
CN113538363A (en) Lung medical image segmentation method and device based on improved U-Net
CN111768420A (en) Cell image segmentation model
CN116503593A (en) Retina OCT image hydrops segmentation method based on deep learning
TW202009793A (en) Fingerprint recognition method and fingerprint recognition chip for improving fingerprint recognition rate increasing the overlapping area of the fingerprint samples and the global fingerprint template
CN114997210A (en) Machine abnormal sound identification and detection method based on deep learning
CN114332278A (en) OCTA image motion correction method based on deep learning
CN114723937A (en) Method and system for classifying blood vessel surrounding gaps based on nuclear magnetic resonance image
CN116309601B (en) Leather defect real-time detection method based on Lite-EDNet

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant