CN111563910A - Fundus image segmentation method and device - Google Patents

Fundus image segmentation method and device

Info

Publication number
CN111563910A
Authority
CN
China
Prior art keywords
segmentation
fundus image
training
model
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010401358.0A
Other languages
Chinese (zh)
Other versions
CN111563910B (en)
Inventor
黄烨霖
熊健皓
赵昕
和超
张大磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eaglevision Medical Technology Co Ltd
Original Assignee
Shanghai Eaglevision Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eaglevision Medical Technology Co Ltd filed Critical Shanghai Eaglevision Medical Technology Co Ltd
Priority to CN202010401358.0A priority Critical patent/CN111563910B/en
Publication of CN111563910A publication Critical patent/CN111563910A/en
Application granted granted Critical
Publication of CN111563910B publication Critical patent/CN111563910B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention provides a fundus image segmentation method and device, relating to a model training method that comprises: acquiring training data, wherein the training data comprises a fundus image and a plurality of annotation data for the same object of interest; segmenting the fundus image using a trained first segmentation model to obtain a first segmentation result for the object of interest; and training a second segmentation model using the training data and the first segmentation result, so that the second segmentation model outputs a second segmentation result for the object of interest.

Description

Fundus image segmentation method and device
Technical Field
The invention relates to the field of medical image processing, in particular to a fundus image segmentation method and a fundus image segmentation device.
Background
In recent years, machine learning techniques have been widely used in the medical field; in particular, machine learning techniques typified by deep learning have attracted attention in medical imaging. In fundus image analysis, most semantic segmentation tasks obtain good results with end-to-end deep learning methods, and accurate boundary and position detection is important for tracking the development of lesions in fundus images. Pixel-by-pixel segmentation of lesions is therefore a highly valuable task for medical applications.
The prior art works well for objects of interest with well-defined, clear boundaries. For a hard lesion such as myelinated retinal nerve fibers, for example, a neural network trained on fundus images and annotation data of the lesion region can learn to segment that region accurately.
In some cases, however, the object of interest has unclear, fuzzy boundaries. The fundus image shown in fig. 5 contains an epiretinal membrane lesion; such a lesion region has no clear boundary and has low contrast with the background. For lesions like the one in fig. 5, even a professional ophthalmologist may be unable to label the region boundary accurately, which makes training a segmentation model to automatically segment the lesion challenging. Moreover, because pixel-wise segmentation labels are costly to produce, annotators are typically asked only for one-hot coded labels, for example "0" for background and "1" for the object of interest; this common labeling form cannot express the gradual boundary transition seen in fig. 5.
It follows that the performance of segmentation models trained by common machine learning approaches leaves room for improvement on the task of segmenting objects of interest in fundus images.
Disclosure of Invention
In view of the above, the present invention provides a fundus image segmentation model training method, which includes:
acquiring training data, wherein the training data comprises a fundus image and a plurality of annotation data for the same object of interest;
segmenting the fundus image using a trained first segmentation model to obtain a first segmentation result for the object of interest;
and training a second segmentation model using the training data and the first segmentation result, so that the second segmentation model outputs a second segmentation result for the object of interest.
Optionally, the first segmentation model includes a plurality of neural networks, each outputting a segmentation result for the object of interest from the fundus image, and the first segmentation model determines the first segmentation result for the object of interest from the segmentation results of the respective neural networks.
Optionally, the first segmentation result is obtained by a weighted average of the segmentation results of the neural networks.
Optionally, the first segmentation model performs correction processing on the segmentation result of each neural network and determines the first segmentation result from the corrected segmentation results.
Optionally, training the second segmentation model using the training data and the first segmentation result includes:
obtaining composite annotation data from the plurality of annotation data in the training data;
and training the second segmentation model using the fundus image, the first segmentation result, and the composite annotation data.
Optionally, the composite annotation data is obtained by a weighted average of the plurality of annotation data in the training data.
The invention also provides a fundus image segmentation method, comprising:
acquiring a fundus image; and segmenting the object of interest in the fundus image using a second segmentation model trained by the above method to obtain a segmentation result.
Optionally, the object of interest is a soft lesion region with fuzzy boundaries.
Correspondingly, the invention also provides a fundus image segmentation model training device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause it to perform the above fundus image segmentation model training method.
Accordingly, the present invention also provides a fundus image segmentation device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause it to perform the fundus image segmentation method described above.
According to the fundus image segmentation model training method and device provided by the invention, a conventionally trained segmentation model first segments the fundus image to obtain a segmentation result for the object of interest. The target model is then trained on the combination of the fundus image, the plurality of annotation data for the same object of interest, and that segmentation result. During training, the parameters of the target model are optimized according to the difference between its output and the output of the conventionally trained model, and the difference between its output and the annotations. Because the pre-trained model supplements the annotation data, the target model can learn more, which improves its accuracy in identifying the object of interest in fundus images.
The fundus image segmentation method and device provided by the invention are particularly suitable for identifying objects of interest with fuzzy boundaries. The object of interest is segmented with the second segmentation model trained by the above scheme; since the second segmentation model has learned both from the soft labels and from the manual annotation data, its segmentation accuracy is higher and the final segmentation result better matches the actual object of interest.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 shows a fundus image and a plurality of corresponding annotation data;
FIG. 2 is a schematic diagram of a segmentation model training in an embodiment of the present invention;
FIG. 3 is a diagram illustrating a segmentation result output by the first segmentation model according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a preferred segmentation model training in an embodiment of the present invention;
FIG. 5 is a diagram illustrating segmentation of an object of interest by a segmentation model trained in an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The embodiment of the invention provides a fundus image segmentation model training method, which can be executed by an electronic device such as a computer or a server. Training data are used to train a machine learning model that segments a region of interest in a fundus image; the model may be composed of a convolutional neural network.
The training data used in this embodiment comprise fundus images and, for each image, a plurality of annotation data for the same object of interest in it. A fundus image may contain several objects of interest at the same time; for ease of explanation, this embodiment is described with a single object of interest. The object of interest may in particular be a tissue of the human body, such as the optic disc or the macula, or a lesion such as a bleeding area, cotton-wool spots, or an epiretinal membrane.
The annotation data is information generated manually by marking the object of interest in the fundus image; specifically, it may be a one-hot coded label produced with an image annotation tool, and can be regarded as a kind of mask image. Because the plurality of annotation data for the same object of interest in the same fundus image are produced by different people (such as ophthalmologists) according to their subjective judgment, the individual annotations are not exactly identical even though the object of interest is fixed.
For example, as shown in fig. 1, N ophthalmologists label the same fundus image x pixel by pixel to obtain N mask labels y_1, …, y_N, where y^l_{i,j} denotes the value of the pixel at coordinate position (i, j) in the l-th mask. There are thus N annotation data y_1, …, y_N in total, at least some of which differ from one another.
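As an illustration only (random arrays stand in for real annotations), the N one-hot masks for one fundus image can be held in a single array and compared pixel by pixel:

```python
import numpy as np

# N annotators each produce a one-hot (binary) mask for the same fundus image x.
# y[l, i, j] == 1 marks pixel (i, j) as the object of interest in the l-th mask.
N, H, W = 5, 512, 512
rng = np.random.default_rng(seed=0)
y = rng.integers(0, 2, size=(N, H, W), dtype=np.uint8)  # stand-in for real labels

# Annotations typically disagree near fuzzy boundaries; pixel-wise agreement:
all_agree = (y == y[0]).all(axis=0)   # True where every mask matches the first one
print(f"fraction of pixels with full agreement: {all_agree.mean():.3f}")
```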
As shown in FIG. 2, a fundus image x is first segmented using a trained first segmentation model 11 to obtain a segmentation result y_s for the object of interest. The first segmentation model 11 may be trained in advance using an existing machine learning algorithm so that it has a certain image segmentation capability. For example, the neural network may be trained by the model training method of the macular image region segmentation method disclosed in Chinese patent document CN110428421A, or of the image recognition method disclosed in CN109583364A, so that it can recognize the object of interest in a fundus image and output a segmentation result. In particular, the model can be pre-trained with the training data described in the present invention, i.e., the first segmentation model 11 is pre-trained with the annotation masks y_l and the fundus image x.
In the present embodiment, the segmentation result y_s output by the first segmentation model 11 serves as intermediate data rather than a final result. On this basis, the training data, i.e., the fundus image x and the mask labels y_l, together with the segmentation result y_s are used to train the second segmentation model 12 so that it outputs a second segmentation result for the object of interest. Through training on a large amount of data, the parameters of the second segmentation model 12 are optimized according to the difference between the second segmentation result and the first segmentation result y_s, and the difference between the second segmentation result and the annotation data y_l, making its output progressively better and more accurate.
According to the fundus image segmentation model training method provided by the embodiment of the invention, a conventionally trained segmentation model first segments the fundus image to obtain a segmentation result for the object of interest. The target model is then trained on the combination of the fundus image, the plurality of annotation data for the same object of interest, and that segmentation result. During training, the parameters of the target model are optimized according to the difference between its output and the output of the conventionally trained model, and the difference between its output and the annotations. Because the pre-trained model supplements the annotation data, the model can learn more, which improves the accuracy of the target model in identifying the object of interest in fundus images.
To further improve the performance of the second segmentation model 12, as shown in fig. 3, the first segmentation model 11 used in a preferred embodiment comprises a plurality of neural networks 110, whose structures may be identical. Each neural network 110 outputs a segmentation result p_l, l ∈ {1, 2, …, N}, for the object of interest from the fundus image, so the N neural networks 110 collectively output N segmentation results p_1, …, p_N, from which the first segmentation result y_s for the object of interest is finally determined. Because the neural networks do not perform identically and their output segmentation results differ, this embodiment balances them by introducing a plurality of networks, thereby improving the accuracy of the segmentation result.
There are various ways of obtaining a single combined segmentation result from the plurality of segmentation results, for example averaging, y_s = (1/N) Σ_{l=1}^{N} p_l, or a weighted average calculation, etc. The weight of each segmentation result may be configured based on a performance value of the neural network 110 that output it, such as its ROC (Receiver Operating Characteristic) or AUC (Area Under the Curve) value.
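As an illustration rather than the patent's implementation, a Python sketch of this fusion, with AUC-derived weights as one plausible weighting choice:

```python
import numpy as np

def ensemble_average(probs, auc_scores=None):
    """Fuse N per-pixel probability maps into one first segmentation result y_s.

    probs:      array of shape (N, H, W), each value a probability in [0, 1].
    auc_scores: optional length-N sequence; higher-AUC networks get larger weights.
    """
    probs = np.asarray(probs, dtype=np.float64)
    if auc_scores is None:
        weights = np.full(probs.shape[0], 1.0 / probs.shape[0])  # plain mean
    else:
        w = np.asarray(auc_scores, dtype=np.float64)
        weights = w / w.sum()                                    # normalize to sum 1
    return np.tensordot(weights, probs, axes=1)                  # (H, W) weighted average

# Example: three networks, the strongest (AUC 0.93) weighted slightly higher.
p = np.random.rand(3, 64, 64)
y_s = ensemble_average(p, auc_scores=[0.88, 0.91, 0.93])
```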
Further, the segmentation result directly output by each neural network 110 may also be corrected. In this correction, p_l is the segmentation result directly output by the neural network 110, and T is a number representing the confidence calibration, by which the probability values output by each model are normalized. The T values of the neural networks differ and are calculated from each model's output; the corrected segmentation result is denoted p̃_l.
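The exact correction formula is not given here; one common realization of a scalar confidence-calibration value T is temperature scaling of the predicted probabilities. A sketch under that assumption:

```python
import numpy as np

def temperature_calibrate(p, T):
    """Soften a per-pixel probability map p with a scalar temperature T.

    Equivalent to applying a sigmoid to logit(p) / T: T > 1 flattens
    over-confident predictions, T < 1 sharpens them.
    """
    p = np.clip(p, 1e-7, 1.0 - 1e-7)       # avoid log(0) at exact 0 or 1
    logits = np.log(p) - np.log1p(-p)      # logit(p)
    return 1.0 / (1.0 + np.exp(-logits / T))

p_raw = np.random.rand(64, 64)             # raw output of one neural network 110
p_corrected = temperature_calibrate(p_raw, T=2.0)
```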
Based on the preferred first segmentation model shown in FIG. 3, the present invention provides a preferred segmentation model training method; the model structure used in this embodiment is shown in fig. 4. The trained object (the target model) is the second segmentation model, while the N neural networks together constitute the pre-segmentation model, i.e., the first segmentation model.
The training data of this embodiment are still a fundus image x and a plurality of original manual labels y_1, …, y_N for the same object of interest in it. In this embodiment, the N original manual labels are processed to obtain composite annotation data ȳ. There are various ways of obtaining a single composite annotation from the plurality of annotation data, for example averaging, ȳ = (1/N) Σ_{l=1}^{N} y_l, or a weighted average calculation, etc.
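This mirrors the fusion of network outputs above, applied to annotator masks instead; a brief sketch under the same assumptions:

```python
import numpy as np

def composite_annotation(masks, weights=None):
    """Fuse N one-hot masks of shape (N, H, W) into one soft composite label.

    With equal weights this is the per-pixel vote rate; values in [0, 1]
    encode annotator disagreement near fuzzy boundaries.
    """
    masks = np.asarray(masks, dtype=np.float64)
    if weights is None:
        return masks.mean(axis=0)
    w = np.asarray(weights, dtype=np.float64)
    return np.tensordot(w / w.sum(), masks, axes=1)
```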
In the present embodiment, the N neural networks of the first segmentation model respectively segment the fundus image x and output segmentation results for the same region of interest in it, and each result is corrected as above to obtain p_l. The N corrected results are weighted-averaged to obtain the segmentation result y_s (the first segmentation result), called the soft label; the corresponding part of fig. 4 is therefore called the soft label generation module.
The fundus image x, the soft label y_s, and the composite original label ȳ are then input together into the second segmentation model. Its parameters are optimized by computing an uncertainty knowledge distillation loss, which weighs the discrepancy of the model output from the soft label y_s against its discrepancy from the composite label ȳ with a set weight ω, and the model is trained to output the final segmentation result (the second segmentation result); the corresponding part of fig. 4 is therefore called the segmentation model training module.
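The precise form of the uncertainty distillation loss is likewise not given here; one plausible reading of the set weight ω is a convex combination of a loss term against the soft label y_s and a loss term against the composite label ȳ. A PyTorch-style sketch under that assumption, with all names hypothetical:

```python
import torch
import torch.nn.functional as F

def distillation_loss(pred_logits, soft_label, composite_label, omega=0.5):
    """Hypothetical uncertainty distillation loss for the second model.

    pred_logits:     (B, 1, H, W) raw logits from the second segmentation model
    soft_label:      (B, 1, H, W) soft label y_s produced by the first model
    composite_label: (B, 1, H, W) composite annotation fused from y_1 ... y_N
    omega:           set weight balancing the two supervision signals
    """
    loss_soft = F.binary_cross_entropy_with_logits(pred_logits, soft_label)
    loss_anno = F.binary_cross_entropy_with_logits(pred_logits, composite_label)
    return omega * loss_soft + (1.0 - omega) * loss_anno

# One training step would then look like (model, optimizer, data assumed):
# loss = distillation_loss(model(x), y_s, y_bar, omega=0.7)
# loss.backward(); optimizer.step()
```

With ω close to 1 the second model leans mainly on the first model's soft labels; with ω close to 0 it leans mainly on the fused manual annotations.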
By training on a large number of fundus images and labels, the second segmentation model learns to segment the object of interest accurately and can provide auxiliary diagnostic information for doctors. When the trained model is used to segment a fundus image, the first segmentation model is no longer needed; only the second segmentation model is used for recognition.
In one embodiment, as shown in fig. 5, a fundus image to be segmented is acquired that contains an epiretinal membrane lesion region. The second segmentation model 12 trained using the method described above segments the object of interest in the fundus image, producing a segmentation result 51 representing the epiretinal membrane lesion region, which can then be marked in the original fundus image.
The fundus image segmentation method provided by the embodiment of the invention is particularly suitable for identifying objects of interest with fuzzy boundaries. The object of interest is segmented with the second segmentation model trained by the above scheme; since the second segmentation model has learned both from the soft labels and from the manual annotation data, its segmentation accuracy is higher and the final segmentation result better matches the actual object of interest.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be understood that the above examples are only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. And obvious variations or modifications therefrom are within the scope of the invention.

Claims (10)

1. A fundus image segmentation model training method, characterized by comprising:
acquiring training data, wherein the training data comprises a fundus image and a plurality of annotation data for the same object of interest;
segmenting the fundus image using a trained first segmentation model to obtain a first segmentation result for the object of interest;
and training a second segmentation model using the training data and the first segmentation result, so that the second segmentation model outputs a second segmentation result for the object of interest.
2. The method according to claim 1, wherein the first segmentation model comprises a plurality of neural networks, each outputting a segmentation result for the object of interest from the fundus image, and the first segmentation result for the object of interest is determined from the segmentation results of the respective neural networks.
3. The method of claim 2, wherein the first segmentation result is calculated as a weighted average of the segmentation results of the neural networks.
4. The method according to claim 2 or 3, wherein the first segmentation model performs correction processing on the segmentation result of each neural network, and the first segmentation result is determined from the corrected segmentation results.
5. The method of claim 1, wherein training a second segmentation model using the training data and the first segmentation result comprises:
obtaining composite annotation data from the plurality of annotation data in the training data;
and training the second segmentation model using the fundus image, the first segmentation result, and the composite annotation data.
6. The method of claim 5, wherein the composite annotation data is calculated as a weighted average of the plurality of annotation data in the training data.
7. A fundus image segmentation method, comprising:
acquiring a fundus image; and segmenting an object of interest in the fundus image using a second segmentation model trained by the method of any one of claims 1-6 to obtain a segmentation result.
8. The method of any one of claims 1-7, wherein the object of interest is a soft lesion region with fuzzy boundaries.
9. A fundus image segmentation model training device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause it to perform the fundus image segmentation model training method of any one of claims 1-6.
10. A fundus image segmentation device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause it to perform the fundus image segmentation method of claim 7 or 8.
CN202010401358.0A 2020-05-13 2020-05-13 Fundus image segmentation method and device Active CN111563910B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010401358.0A CN111563910B (en) 2020-05-13 2020-05-13 Fundus image segmentation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010401358.0A CN111563910B (en) 2020-05-13 2020-05-13 Fundus image segmentation method and device

Publications (2)

Publication Number Publication Date
CN111563910A (en) 2020-08-21
CN111563910B CN111563910B (en) 2023-06-06

Family

ID=72074654

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010401358.0A Active CN111563910B (en) 2020-05-13 2020-05-13 Fundus image segmentation method and device

Country Status (1)

Country Link
CN (1) CN111563910B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112070163A (en) * 2020-09-09 2020-12-11 北京字节跳动网络技术有限公司 Image segmentation model training and image segmentation method, device and equipment
CN112598686A (en) * 2021-03-03 2021-04-02 腾讯科技(深圳)有限公司 Image segmentation method and device, computer equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110120047A (en) * 2019-04-04 2019-08-13 平安科技(深圳)有限公司 Image Segmentation Model training method, image partition method, device, equipment and medium
CN110428421A (en) * 2019-04-02 2019-11-08 上海鹰瞳医疗科技有限公司 Macula lutea image region segmentation method and apparatus
US20190365314A1 (en) * 2018-06-04 2019-12-05 Nidek Co., Ltd. Ocular fundus image processing device and non-transitory computer-readable medium storing computer-readable instructions

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190365314A1 (en) * 2018-06-04 2019-12-05 Nidek Co., Ltd. Ocular fundus image processing device and non-transitory computer-readable medium storing computer-readable instructions
CN110428421A (en) * 2019-04-02 2019-11-08 上海鹰瞳医疗科技有限公司 Macula lutea image region segmentation method and apparatus
CN110120047A (en) * 2019-04-04 2019-08-13 平安科技(深圳)有限公司 Image Segmentation Model training method, image partition method, device, equipment and medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
崔栋; 刘敏敏; 张光玉: "Application of BP neural network in fundus angiography image segmentation"

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112070163A (en) * 2020-09-09 2020-12-11 北京字节跳动网络技术有限公司 Image segmentation model training and image segmentation method, device and equipment
CN112070163B (en) * 2020-09-09 2023-11-24 抖音视界有限公司 Image segmentation model training and image segmentation method, device and equipment
CN112598686A (en) * 2021-03-03 2021-04-02 腾讯科技(深圳)有限公司 Image segmentation method and device, computer equipment and storage medium
CN112598686B (en) * 2021-03-03 2021-06-04 腾讯科技(深圳)有限公司 Image segmentation method and device, computer equipment and storage medium
WO2022183984A1 (en) * 2021-03-03 2022-09-09 腾讯科技(深圳)有限公司 Image segmentation method and apparatus, computer device and storage medium

Also Published As

Publication number Publication date
CN111563910B (en) 2023-06-06

Similar Documents

Publication Publication Date Title
Asiri et al. Deep learning based computer-aided diagnosis systems for diabetic retinopathy: A survey
CN110909780B (en) Image recognition model training and image recognition method, device and system
CN109961848B (en) Macular image classification method and device
WO2020031243A1 (en) Method for correcting teacher label image, method for preparing learned model, and image analysis device
CN109684981B (en) Identification method and equipment of cyan eye image and screening system
CN109697719B (en) Image quality evaluation method and device and computer readable storage medium
CN113768461B (en) Fundus image analysis method, fundus image analysis system and electronic equipment
Aquino Establishing the macular grading grid by means of fovea centre detection using anatomical-based and visual-based features
CN110263755B (en) Eye ground image recognition model training method, eye ground image recognition method and eye ground image recognition device
CN111553436B (en) Training data generation method, model training method and equipment
CN110555845A (en) Fundus OCT image identification method and equipment
CN111986211A (en) Deep learning-based ophthalmic ultrasonic automatic screening method and system
JP2019192215A (en) 3d quantitative analysis of retinal layers with deep learning
CN111563910B (en) Fundus image segmentation method and device
US20220047159A1 (en) Glaucoma image recognition method and device and diagnosis system
CN113889267A (en) Method for constructing diabetes diagnosis model based on eye image recognition and electronic equipment
CN113012093B (en) Training method and training system for glaucoma image feature extraction
WO2020236729A1 (en) Deep learning-based segmentation of corneal nerve fiber images
CN110276333B (en) Eye ground identity recognition model training method, eye ground identity recognition method and equipment
CN112634221A (en) Image and depth-based cornea level identification and lesion positioning method and system
CN109919098B (en) Target object identification method and device
Giancardo et al. Quality assessment of retinal fundus images using elliptical local vessel density
CN111640097A (en) Skin mirror image identification method and equipment
CN111402246A (en) Eye ground image classification method based on combined network
WO2020016836A1 (en) System and method for managing the quality of an image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant