CN111563910B - Fundus image segmentation method and device - Google Patents

Fundus image segmentation method and device

Info

Publication number
CN111563910B
CN111563910B · Application CN202010401358.0A
Authority
CN
China
Prior art keywords
segmentation
fundus image
model
training
interest
Prior art date
Legal status
Active
Application number
CN202010401358.0A
Other languages
Chinese (zh)
Other versions
CN111563910A (en)
Inventor
黄烨霖
熊健皓
赵昕
和超
张大磊
Current Assignee
Shanghai Eaglevision Medical Technology Co Ltd
Original Assignee
Shanghai Eaglevision Medical Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Eaglevision Medical Technology Co Ltd
Priority to CN202010401358.0A
Publication of CN111563910A
Application granted
Publication of CN111563910B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30041 Eye; Retina; Ophthalmic
    • G06T 2207/30096 Tumor; Lesion
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention provides a fundus image segmentation method and device. The method relates to a model training method comprising: acquiring training data, wherein the training data comprises a fundus image and a plurality of annotation data for the same object of interest; segmenting the fundus image using a trained first segmentation model to obtain a first segmentation result of the object of interest; and training a second segmentation model using the training data and the first segmentation result to output a second segmentation result of the object of interest.

Description

Fundus image segmentation method and device
Technical Field
The invention relates to the field of medical image processing, in particular to a fundus image segmentation method and fundus image segmentation equipment.
Background
In recent years, machine learning techniques have been widely used in the medical field; in particular, machine learning techniques typified by deep learning have attracted attention in medical imaging. In fundus image detection, most semantic segmentation tasks obtain good results with end-to-end deep learning methods, and accurate boundary and position detection is important for tracking the progression of fundus lesions. Pixel-by-pixel segmentation of lesions is therefore a highly valuable task for medical applications.
The prior art is suitable for objects of interest with well-defined, clear boundaries. For hard lesions such as medullated (myelinated) retinal nerve fibers, training a neural network with fundus images and annotation data for the lesion regions therein enables the network to segment such lesion regions relatively accurately.
In some cases, however, the object of interest has an unclear, ambiguous boundary. The fundus image shown in fig. 5 contains a membranous preretinal membrane (epiretinal membrane) lesion; this lesion region has no clear boundary and little contrast with the background. For a lesion like that in fig. 5, even a professional ophthalmologist cannot precisely delineate the boundary of the region, which makes it challenging to train a segmentation model to segment the lesion automatically. Moreover, because the labor cost of segmentation annotation is relatively high, only one-hot labels are generally produced, e.g., the background is represented by 0 and the object of interest by 1; faced with the gradual boundary transition of the object of interest shown in fig. 5, this common annotation form cannot express such information.
It follows that, for the object-of-interest segmentation task on fundus images, the performance of segmentation models trained by common machine learning schemes leaves room for improvement.
Disclosure of Invention
In view of this, the present invention provides a fundus image segmentation model training method, including:
acquiring training data, wherein the training data comprises a fundus image and a plurality of annotation data for the same object of interest;
segmenting the fundus image using a trained first segmentation model to obtain a first segmentation result of the object of interest;
and training a second segmentation model using the training data and the first segmentation result to output a second segmentation result of the object of interest.
Optionally, the first segmentation model includes a plurality of neural networks, each of which outputs a segmentation result of the object of interest from the fundus image, and the first segmentation model determines the first segmentation result of the object of interest from the segmentation results of the respective neural networks.
Optionally, the first segmentation result is obtained by performing weighted average calculation on the segmentation results of each neural network.
Optionally, the first segmentation model performs correction processing on the segmentation result of each neural network, and determines the first segmentation result according to the segmentation result after the correction processing.
Optionally, training a second segmentation model using the training data and the first segmentation result includes:
obtaining comprehensive annotation data according to a plurality of annotation data in the training data;
and training a second segmentation model by utilizing the fundus image, the first segmentation result and the comprehensive annotation data.
Optionally, the comprehensive annotation data is obtained by performing weighted average calculation on a plurality of annotation data in the training data.
The invention also provides a fundus image segmentation method, which comprises the following steps:
acquiring a fundus image; and segmenting the object of interest in the fundus image using the second segmentation model trained by the above method to obtain a segmentation result.
Optionally, the object of interest is a soft lesion region with blurred boundaries.
Correspondingly, the invention also provides a fundus image segmentation model training apparatus, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the fundus image segmentation model training method described above.
Accordingly, the present invention also provides a fundus image segmentation apparatus, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the fundus image segmentation method described above.
According to the fundus image segmentation model training method and device provided by the invention, a conventionally trained segmentation model first segments the fundus image to obtain a segmentation result for the object of interest. The fundus image, the plurality of annotation data for the same object of interest and this segmentation result are then combined to train a target model, and during training the parameters of the target model are optimized according to the difference between the output of the target model and that segmentation result, and the difference between the output and the labels. The pre-trained model thus assists the training on the annotation data, the target model can learn more content, and the recognition accuracy for the object of interest in the fundus image is improved.
The fundus image segmentation method and device provided by the invention are particularly suitable for recognizing objects of interest with blurred boundaries. The object of interest is segmented using the second segmentation model trained by the above scheme; because the second segmentation model learns both from the soft labels and from the manual annotation data, it achieves higher segmentation accuracy, and the finally output segmentation result better matches the actual condition of the object of interest.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 shows a fundus image and a corresponding plurality of annotation data;
FIG. 2 is a schematic diagram of training a segmentation model according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a first segmentation model outputting segmentation results according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a preferred segmentation model training process in an embodiment of the present invention;
FIG. 5 is a schematic diagram of segmenting an object of interest using a segmentation model trained in accordance with an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention are described below clearly and completely with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
In the description of the present invention, it should be noted that the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, the technical features of the different embodiments of the present invention described below may be combined with each other as long as they do not conflict.
The embodiment of the invention provides a fundus image segmentation model training method, which can be executed by electronic equipment such as a computer or a server, and is used for training a machine learning model for segmenting a region of interest in a fundus image by using training data, wherein the model can be composed of a convolutional neural network.
The training data used in the present embodiment includes fundus images and, for each image, a plurality of annotation data for the same object of interest therein. A fundus image may contain a plurality of objects of interest simultaneously; for convenience of explanation, the present embodiment is explained with only one object of interest as an example. The object of interest may specifically be a tissue of the human body, such as the optic disc or the macula, or a lesion, such as a bleeding region, cotton-wool spots or a membranous preretinal membrane.
The annotation data is information generated by manually marking the object of interest in the fundus image; specifically, it may be a one-hot label generated with an image annotation tool, and the annotation data may be regarded as a mask image. The plurality of annotation data for the same object of interest in the same fundus image are different data produced by different annotators (such as ophthalmologists) based on subjective knowledge; therefore, even for a fixed object of interest, the individual annotation data are not exactly identical.
For example, as shown in fig. 1, N ophthalmologists annotate the same fundus image x pixel by pixel, yielding N mask labels $y^1, \dots, y^N$, where $y^l_{ij}$ denotes the value of the pixel at coordinate position (i, j) in the l-th mask. The N annotation data thus obtained are, at least in part, different from one another.
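For illustration only (this sketch is not part of the patented method), the N partially disagreeing one-hot masks can be represented as a stacked binary array; the tiny NumPy example below uses hypothetical 4×4 masks from N = 3 annotators.

```python
# Illustrative only: N = 3 one-hot expert masks for the same 4x4 fundus crop.
# Values: 0 = background, 1 = object of interest; annotators disagree at edges.
import numpy as np

y = np.array([
    [[0, 1, 1, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 0]],   # annotator 1
    [[0, 1, 1, 1],
     [0, 1, 1, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0]],   # annotator 2
    [[0, 1, 1, 0],
     [0, 1, 1, 1],
     [0, 0, 1, 0],
     [0, 0, 0, 0]],   # annotator 3
])                    # shape (N, H, W): y[l, i, j] is pixel (i, j) of mask l

# Pixels where the annotations differ, i.e. the ambiguous boundary region.
disagreement = y.max(axis=0) != y.min(axis=0)
print(disagreement.sum(), "of", disagreement.size, "pixels are disputed")
```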
As shown in fig. 2, the fundus image x is first segmented by the trained first segmentation model 11 to obtain a segmentation result $y_s$ for the object of interest. The first segmentation model 11 may be trained in advance with existing machine learning algorithms so that it has a certain image segmentation capability. For example, the neural network may be trained by the model training method of the macular image region segmentation method disclosed in Chinese patent document CN110428421A or the image recognition method disclosed in CN109583364A, so that it can recognize an object of interest in the fundus image and output a segmentation result. In particular, the model can be pre-trained with the training data according to the invention, i.e., the first segmentation model 11 is pre-trained using the annotation masks $y^l$ and the fundus image x.
In the present embodiment, the segmentation result $y_s$ output by the first segmentation model 11 serves as intermediate data rather than as a final result. On this basis, the training data, namely the fundus image x and the mask labels $y^l$, together with the segmentation result $y_s$, are used to train the second segmentation model 12 to output a second segmentation result for the object of interest. Through training with a large amount of data, the parameters of the second segmentation model 12 are optimized according to the difference between the second segmentation result and the first segmentation result $y_s$, and the difference between the second segmentation result and the annotation data $y^l$, so that the output results become more accurate.
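As a hedged illustration of the data flow of fig. 2 (the function and model names below are hypothetical, not from the patent): the pre-trained first model is frozen and run over every training image to produce the intermediate soft result $y_s$, stored alongside the image and its expert masks for training the second model.

```python
# Hypothetical sketch of the fig. 2 data flow: the frozen, pre-trained first
# model turns every training image into an extra supervision signal y_s that
# is stored next to the image x and its expert masks y_1..y_N.
import torch

@torch.no_grad()
def build_stage2_dataset(first_model, samples):
    """samples: iterable of (x, masks) pairs; x: (1, 3, H, W) image tensor,
    masks: (N, H, W) expert annotations. Returns stage-2 training triples."""
    first_model.eval()
    dataset = []
    for x, masks in samples:
        y_s = torch.sigmoid(first_model(x))    # intermediate soft result
        dataset.append((x, masks, y_s))        # triple used to train model 12
    return dataset
```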
According to the fundus image segmentation model training method provided by the embodiment of the invention, a conventionally trained segmentation model first segments the fundus image to obtain a segmentation result for the object of interest. The fundus image, the plurality of annotation data for the same object of interest and this segmentation result are then combined to train a target model, and during training the parameters of the target model are optimized according to the difference between the output of the target model and that segmentation result, and the difference between the output and the labels. The pre-trained model thus assists the training on the annotation data, the target model can learn more content, and the recognition accuracy for the object of interest in the fundus image is improved.
To further enhance the performance of the second segmentation model 12, the first segmentation model 11 used in a preferred embodiment includes a plurality of neural networks 110, which may be identical in structure, as shown in FIG. 3. Each neural network 110 outputs from the fundus image a segmentation result $p^l$, $l \in \{1, 2, \dots, N\}$, for the object of interest therein; the N neural networks 110 thus jointly output N segmentation results $p^1, \dots, p^N$, from which the first segmentation result $y_s$ of the object of interest is finally determined. Since the performances of the neural networks are not identical, their segmentation results differ; this embodiment balances them by introducing a plurality of neural networks, thereby improving the accuracy of the segmentation result.
There are various ways to obtain a single integrated segmentation result from the multiple segmentation results. For example, a plain average $y_s = \frac{1}{N}\sum_{l=1}^{N} p^l$ or a weighted average may be computed, where the weight of each segmentation result may be configured based on a performance value of the neural network 110 that outputs it, such as ROC (Receiver Operating Characteristic) or AUC (Area Under the Curve) values.
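A minimal sketch of this combination step, assuming each network's probability map and a per-network performance score (e.g., AUC) are available; the AUC-based weighting is one of the options named above, and all names in the sketch are hypothetical:

```python
# Hypothetical sketch: combine N probability maps into the first segmentation
# result y_s, weighting each network by its validation AUC.
import numpy as np

def combine_segmentations(prob_maps, aucs=None):
    """prob_maps: list of N (H, W) arrays in [0, 1]; aucs: optional list of N
    performance scores used as ensemble weights (plain average if omitted)."""
    p = np.stack(prob_maps)                    # (N, H, W)
    if aucs is None:
        return p.mean(axis=0)                  # plain average
    w = np.asarray(aucs, dtype=float)
    w = w / w.sum()                            # normalize weights to sum to 1
    return np.tensordot(w, p, axes=1)          # weighted average y_s

# Example: three networks, the stronger one (AUC 0.92) counts more.
maps = [np.random.rand(8, 8) for _ in range(3)]
y_s = combine_segmentations(maps, aucs=[0.85, 0.92, 0.88])
```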
Further, the segmentation results directly output by the neural networks 110 may be corrected, for example by a temperature scaling of the form

$$\tilde{p}^{l} = \operatorname{softmax}\!\big(z^{l} / T_{l}\big),$$

where $z^l$ is the raw output of the l-th neural network 110 and $T_l$ is a scalar value representing the calibration of the confidence, normalizing the probability values output by each model. The values $T_l$ differ between the neural networks and are computed from the output of each model; $\tilde{p}^l$ is the corrected segmentation result.
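The correction can be illustrated with classic temperature scaling; the sketch below assumes the binary (sigmoid) case and that each $T_l$ has already been estimated from the model's outputs, e.g., on validation data — both assumptions, not details fixed by the patent:

```python
# Hypothetical sketch of per-network temperature calibration: divide the raw
# logits z_l by a scalar temperature T_l before squashing to probabilities.
import numpy as np

def calibrate(logits, T):
    """logits: (H, W) raw network outputs; T: scalar temperature (> 0).
    T > 1 softens over-confident probabilities, T < 1 sharpens them."""
    return 1.0 / (1.0 + np.exp(-logits / T))   # temperature-scaled sigmoid

z = np.random.randn(8, 8) * 4                  # over-confident raw outputs
p_corrected = calibrate(z, T=2.0)              # softer, calibrated map
```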
Based on the preferred first segmentation model shown in FIG. 3, the present invention provides a preferred segmentation model training method. The model structure used in this embodiment is shown in FIG. 4, in which $f_s$ is the trained object (the target model), i.e. the second segmentation model, and $f_1, \dots, f_N$ are the N neural networks constituting the pre-segmentation model, i.e. the first segmentation model.
The training data of this embodiment is still the fundus image x and a plurality of original manual labels $y^1, \dots, y^N$ for the same object of interest therein. In this embodiment, the N original manual labels are processed to obtain a comprehensive annotation $\bar{y}$. There are various ways to obtain a single comprehensive annotation from a plurality of annotation data; for example, a plain average $\bar{y} = \frac{1}{N}\sum_{l=1}^{N} y^l$ or a weighted average may be computed.
In the present embodiment, $f_1, \dots, f_N$ each segment the fundus image x and output segmentation results for the same region of interest, which are corrected respectively to obtain the corrected results $\tilde{p}^l$. The N corrected results are then weighted-averaged to obtain the segmentation result $y_s$ (the first segmentation result), called the soft label; the corresponding part of fig. 4 is therefore called the soft label generation module.
Then the fundus image x, the soft label $y_s$ and the original labels $\bar{y}$ are input together into the segmentation model $f_s$, which is trained to output the final segmentation result (the second segmentation result); the model parameters are optimized by computing an uncertainty distillation loss, and the corresponding part of fig. 4 is called the segmentation model training module. The loss for uncertain knowledge distillation can be written as

$$\mathcal{L} = \omega \cdot \ell\big(f_s(x),\, y_s\big) + (1 - \omega) \cdot \ell\big(f_s(x),\, \bar{y}\big),$$

where $\ell(\cdot,\cdot)$ is a segmentation loss between a prediction and a target, and ω is the set weight.
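A hedged PyTorch sketch of one training step with this combined loss; using binary cross-entropy for $\ell$ and a fixed ω are illustrative assumptions, not details fixed by the patent:

```python
# Hypothetical sketch: one optimization step of the second segmentation model
# f_s with the uncertainty-distillation loss
#   L = omega * l(f_s(x), y_s) + (1 - omega) * l(f_s(x), y_bar)
import torch
import torch.nn.functional as F

def train_step(f_s, optimizer, x, y_s, y_bar, omega=0.5):
    """x: (B, 3, H, W) fundus images; y_s: (B, 1, H, W) soft labels from the
    first model; y_bar: (B, 1, H, W) averaged expert annotations."""
    y_hat = torch.sigmoid(f_s(x))              # second model's probability map
    loss = (omega * F.binary_cross_entropy(y_hat, y_s)
            + (1 - omega) * F.binary_cross_entropy(y_hat, y_bar))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```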
Through training with a large number of fundus images and labels, the second segmentation model can accurately segment the object of interest and provide auxiliary diagnostic information for doctors. When segmenting fundus images with the trained model, the first segmentation model is no longer required; only the second segmentation model is used for recognition.
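A minimal hypothetical inference sketch reflecting this: the ensemble is discarded and only the trained second model segments new images.

```python
# Hypothetical inference sketch: the ensemble (first model) is discarded and
# only the trained second model f_s segments new fundus images.
import torch

@torch.no_grad()
def segment(f_s, x, threshold=0.5):
    """x: (1, 3, H, W) preprocessed fundus image -> binary mask (H, W)."""
    f_s.eval()
    prob = torch.sigmoid(f_s(x))[0, 0]         # probability map in [0, 1]
    return (prob > threshold).to(torch.uint8)  # thresholded segmentation
```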
In one embodiment, as shown in fig. 5, the acquired fundus image to be segmented contains a preretinal membrane lesion region. The object of interest in the fundus image is segmented using the second segmentation model 12 trained as described above, and the resulting segmentation result 51 represents the preretinal membrane lesion region, which can then be marked in the original fundus image.
The fundus image segmentation method provided by the embodiment of the invention is particularly suitable for recognizing objects of interest with blurred boundaries. The object of interest is segmented using the second segmentation model trained by the above scheme; because the second segmentation model has learned both from the soft labels and from the manual annotation data, it achieves higher segmentation accuracy, and the finally output segmentation result better matches the actual condition of the object of interest.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above embodiments are merely examples given for clarity of illustration and are not intended to limit the embodiments. Other variations or modifications may be made by those of ordinary skill in the art on the basis of the above description; it is neither necessary nor possible to enumerate all embodiments here, and obvious variations or modifications derived therefrom remain within the scope of the invention.

Claims (8)

1. A fundus image segmentation model training method, comprising:
acquiring training data, wherein the training data comprises a fundus image and a plurality of annotation data for the same object of interest;
segmenting the fundus image using a trained first segmentation model to obtain a first segmentation result of the object of interest;
training a second segmentation model using the training data and the first segmentation result to output a second segmentation result of the object of interest;
wherein training a second segmentation model using the training data and the first segmentation result comprises:
obtaining comprehensive annotation data by performing a weighted average calculation on the plurality of annotation data in the training data;
and training the second segmentation model using the fundus image, the first segmentation result and the comprehensive annotation data.
2. The method according to claim 1, wherein the first segmentation model includes a plurality of neural networks for outputting segmentation results for the object of interest from the fundus image, respectively, and the first segmentation model determines a first segmentation result for the object of interest from the segmentation results of the respective neural networks.
3. The method of claim 2, wherein the first segmentation result is obtained by a weighted average calculation of the segmentation results of each of the neural networks.
4. A method according to claim 2 or 3, wherein the first segmentation model performs a correction process on the segmentation results of each of the neural networks, and determines the first segmentation result from the segmentation results after each correction process.
5. A fundus image segmentation method, comprising:
acquiring a fundus image; and segmenting the object of interest in the fundus image using the second segmentation model trained by the method of any one of claims 1-4 to obtain a segmentation result.
6. The method of any one of claims 1-4, wherein the object of interest is a soft lesion region with blurred boundaries.
7. A fundus image segmentation model training apparatus, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the fundus image segmentation model training method of any of claims 1-4.
8. A fundus image segmentation apparatus, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the fundus image segmentation method of claim 5 or 6.
CN202010401358.0A 2020-05-13 2020-05-13 Fundus image segmentation method and device Active CN111563910B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010401358.0A CN111563910B (en) 2020-05-13 2020-05-13 Fundus image segmentation method and device

Publications (2)

Publication Number Publication Date
CN111563910A CN111563910A (en) 2020-08-21
CN111563910B (en) 2023-06-06

Family

ID=72074654

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010401358.0A Active CN111563910B (en) 2020-05-13 2020-05-13 Fundus image segmentation method and device

Country Status (1)

Country Link
CN (1) CN111563910B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112070163B (en) * 2020-09-09 2023-11-24 抖音视界有限公司 Image segmentation model training and image segmentation method, device and equipment
CN112598686B (en) * 2021-03-03 2021-06-04 腾讯科技(深圳)有限公司 Image segmentation method and device, computer equipment and storage medium
CN113688675A (en) * 2021-07-19 2021-11-23 北京鹰瞳科技发展股份有限公司 Target detection method and device, electronic equipment and storage medium
CN114529535A (en) * 2022-02-22 2022-05-24 平安科技(深圳)有限公司 Fundus leopard print image segmentation method, computer and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110120047A (en) * 2019-04-04 2019-08-13 平安科技(深圳)有限公司 Image Segmentation Model training method, image partition method, device, equipment and medium
CN110428421A (en) * 2019-04-02 2019-11-08 上海鹰瞳医疗科技有限公司 Macula lutea image region segmentation method and apparatus

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019208851A (en) * 2018-06-04 2019-12-12 株式会社ニデック Fundus image processing device and fundus image processing program

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110428421A (en) * 2019-04-02 2019-11-08 上海鹰瞳医疗科技有限公司 Macula lutea image region segmentation method and apparatus
CN110120047A (en) * 2019-04-04 2019-08-13 平安科技(深圳)有限公司 Image Segmentation Model training method, image partition method, device, equipment and medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Cui Dong; Liu Minmin; Zhang Guangyu. Application of BP neural network in fundus angiography image segmentation. Chinese Journal of Medical Physics, 2011, (01), full text. *

Also Published As

Publication number Publication date
CN111563910A (en) 2020-08-21

Similar Documents

Publication Publication Date Title
CN111563910B (en) Fundus image segmentation method and device
CN110232383B (en) Focus image recognition method and focus image recognition system based on deep learning model
JP7022195B2 (en) Machine learning equipment, methods and programs and recording media
CN110889826B (en) Eye OCT image focus region segmentation method, device and terminal equipment
CN110956635A (en) Lung segment segmentation method, device, equipment and storage medium
CN109961848B (en) Macular image classification method and device
CN110263755B (en) Eye ground image recognition model training method, eye ground image recognition method and eye ground image recognition device
CN112581458B (en) Image processing method and device
CN113689359B (en) Image artifact removal model and training method and system thereof
CN111353996B (en) Vascular calcification detection method and device
CN111553436A (en) Training data generation method, model training method and device
CN112634246B (en) Oral cavity image recognition method and related equipment
CN112396588A (en) Fundus image identification method and system based on countermeasure network and readable medium
CN117058676B (en) Blood vessel segmentation method, device and system based on fundus examination image
CN113066066A (en) Retinal abnormality analysis method and device
CN113808125A (en) Medical image processing method, focus type identification method and related product
CN111640127B (en) Accurate clinical diagnosis navigation method for orthopedics department
CN110276333B (en) Eye ground identity recognition model training method, eye ground identity recognition method and equipment
CN109919098B (en) Target object identification method and device
CN115661142B (en) Tongue diagnosis image processing method, device and medium based on key point detection
CN111640097A (en) Skin mirror image identification method and equipment
CN112634221A (en) Image and depth-based cornea level identification and lesion positioning method and system
CN111402246A (en) Eye ground image classification method based on combined network
CN116030042A (en) Diagnostic device, method, equipment and storage medium for doctor's diagnosis
CN113591601B (en) Method and device for identifying hyphae in cornea confocal image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant