CN111311565A - Eye OCT image-based detection method and device for positioning points of optic cups and optic discs

Eye OCT image-based detection method and device for positioning points of optic cups and optic discs

Info

Publication number
CN111311565A
Authority
CN
China
Prior art keywords
optic
cup
oct image
disc
eye
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010087226.5A
Other languages
Chinese (zh)
Inventor
王立龙
陈锞
范栋轶
王瑞
王关政
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202010087226.5A priority Critical patent/CN111311565A/en
Priority to PCT/CN2020/093585 priority patent/WO2021159643A1/en
Publication of CN111311565A publication Critical patent/CN111311565A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10101Optical tomography; Optical coherence tomography [OCT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic

Abstract

The application is suitable for the technical field of image processing, and provides a method, a device and a terminal device for detecting optic cup and optic disc positioning points based on an eye OCT image, wherein the method comprises the following steps: acquiring an eye OCT image; detecting the eye OCT image by using a preset detection model to obtain coordinates of two positioning points of an optic cup and coordinates of two positioning points of an optic disc in the eye OCT image; the detection model comprises a first network branch and a second network branch, the first network branch is used for extracting a plurality of feature maps with different scales from the eye OCT image, and the second network branch is used for extracting the coordinates of the two optic cup positioning points and the two optic disc positioning points in the eye OCT image according to the plurality of feature maps with different scales. The application realizes accurate and efficient positioning of the optic cup and the optic disc in the eye OCT image.

Description

Eye OCT image-based detection method and device for positioning points of optic cups and optic discs
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a method and a device for detecting optic cup and optic disc positioning points based on an eye OCT image, a terminal device and a computer readable storage medium.
Background
Optical Coherence Tomography (OCT) is one of the most promising tomographic imaging techniques of recent years and has attractive application prospects in biological tissue biopsy and imaging. OCT images are non-invasive and radiation-free, have high resolution and high detection sensitivity, and can be acquired safely and efficiently, so they are playing an increasingly important role in ophthalmic diagnosis.
Optic disc morphology evaluation parameters are very important indicators in ophthalmic diagnosis. These parameters include, but are not limited to, optic disc area, optic cup area, rim area, rim volume, mean cup-to-disc ratio (CDR), and horizontal and vertical CDR.
However, at present, measurement of optic disc morphology evaluation parameters based on OCT images still mostly relies on manual measurement or semi-automatic machine measurement. Therefore, a scheme for detecting optic disc morphology evaluation parameters based on eye OCT images is needed.
Disclosure of Invention
The embodiments of the application provide a method and a device for detecting optic cup and optic disc positioning points based on an eye OCT image, a terminal device and a computer readable storage medium, provide a scheme for detecting the optic cup and optic disc positioning points based on the eye OCT image, and realize accurate and efficient detection of the optic cup and optic disc positioning points.
In a first aspect, an embodiment of the present application provides a method for detecting optic cup and optic disc positioning points based on an eye OCT image, including:
acquiring an eye OCT image;
detecting the eye OCT image by using a preset detection model to obtain coordinates of two positioning points of an optic cup and coordinates of two positioning points of an optic disc in the eye OCT image; the detection model comprises a first network branch and a second network branch, the first network branch is used for extracting a plurality of feature maps with different scales from the eye OCT image, and the second network branch is used for extracting the coordinates of the two optic cup positioning points and the two optic disc positioning points in the eye OCT image according to the plurality of feature maps with different scales.
In a second aspect, an embodiment of the present application provides a device for detecting optic cup and optic disc positioning points based on an eye OCT image, including:
the acquisition module is used for acquiring an eye OCT image;
the detection module is used for detecting the eye OCT image by using a preset detection model to obtain the coordinates of two optic cup positioning points and two optic disc positioning points in the eye OCT image; the detection model comprises a first network branch and a second network branch, the first network branch is used for extracting a plurality of feature maps with different scales from the eye OCT image, and the second network branch is used for extracting the coordinates of the two optic cup positioning points and the two optic disc positioning points in the eye OCT image according to the plurality of feature maps with different scales.
In a third aspect, an embodiment of the present application provides a terminal device, including: a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the method according to the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product, which, when run on a terminal device, causes the terminal device to perform the method according to the first aspect.
In the embodiments of the application, the optic cup and optic disc positioning points of the eye OCT image are detected by a preset detection model. On one hand, the positioning point detection result can be obtained by directly detecting the eye OCT image with the detection model, which greatly improves detection efficiency; on the other hand, the detection model extracts features of the eye OCT image at a plurality of different scales, so that the optic cup and optic disc positioning points are detected more accurately.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without inventive effort.
Fig. 1 is a schematic flowchart of a method for detecting a cup and a disc positioning point based on an eye OCT image according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a detection model adopted in a method for detecting a location point of a cup and a disc based on an eye OCT image according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart illustrating a preprocessing of an original eye OCT image in a method for detecting a cup and a disc positioning point based on an eye OCT image according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating an eye OCT image marking method in a cup and disc positioning point detection method based on an eye OCT image according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a first network branch adopted in a method for detecting a location point of a cup and a disc based on an eye OCT image according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a module1 of a first network branch used in a method for detecting an eye cup and a disk anchor point based on an eye OCT image according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a module2 of a first network branch used in a method for detecting a location point of a cup and a disc based on an eye OCT image according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a module3 of a first network branch used in a method for detecting a location point of a cup and a disc based on an eye OCT image according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a module4 of a first network branch used in a method for detecting a location point of a cup and a disc based on an eye OCT image according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a second network branch used in a method for detecting a location point of a cup and a disc based on an eye OCT image according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a first sub-network of a second network branch used in a method for detecting a location of a cup and a disc based on an eye OCT image according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a second sub-network of a second network branch used in the eye OCT image-based cup and optic disc location point detection method according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an attention module of a second sub-network of a second network branch used in a method for detecting a location of a cup and a disc based on an OCT image of an eye according to an embodiment of the present application;
fig. 14 is a schematic diagram of a cup ellipse and a disc ellipse obtained in a cup and disc positioning point detection method based on an eye OCT image according to an embodiment of the present disclosure;
fig. 15 is a schematic structural diagram of a device for detecting a cup and a disc positioning point based on an eye OCT image according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of a terminal device to which the eye OCT image-based eye cup and optic disc positioning point detection method provided in an embodiment of the present application is applied.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described below in detail and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application shall fall within the protection scope of the present application without any creative effort. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The evaluation parameters of optic disc morphology are very important indexes in ophthalmologic diagnosis, and the positioning detection of the optic cup and the optic disc in the OCT image of the eye is the basis for obtaining the evaluation parameters of optic disc morphology. Therefore, the embodiment of the application provides a method for detecting the positioning point of the optic cup and the optic disc based on the eye OCT image, and the accurate and efficient detection of the positioning point of the optic cup and the optic disc in the eye OCT image is realized.
Fig. 1 shows a flowchart of an implementation of the method for detecting optic cup and optic disc positioning points based on an eye OCT image according to an embodiment of the present application. The method is applied to a terminal device, which may be, for example, an ophthalmic OCT device, a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), an independent server, a distributed server, a server cluster or a cloud server; the embodiments of the application do not limit the specific type of the terminal device. As shown in fig. 1, the method includes steps S110 and S120. The specific implementation principle of each step is as follows.
S110, acquiring an eye OCT image.
The eye OCT image is the object on which optic cup and optic disc positioning point detection is to be performed, and may be a single frame of original eye OCT image.
When the terminal device is an OCT device, the eye OCT image may be an eye OCT image obtained by scanning an eye of a human body to be measured in real time by the OCT device.
When the terminal device is not the OCT device, the eye OCT image may be an eye OCT image acquired by the terminal device from the OCT device in real time, or may be a pre-stored eye OCT image acquired from an internal or external memory of the terminal device.
In a non-limiting example, the OCT device collects an OCT image of an eye of a human body to be measured in real time, sends the OCT image to the terminal device, and the terminal device acquires the OCT image.
In another non-limiting example, the OCT device collects an OCT image of the eye of the human body to be measured and sends the OCT image to the terminal device, and the terminal device stores the OCT image in the database and then obtains the OCT image of the eye of the human body to be measured from the database.
In some embodiments of the present application, the terminal device acquires an eye OCT image and directly performs the subsequent step S120 after the acquisition, that is, detects the optic cup and optic disc positioning points in the eye OCT image.
In some embodiments of the present application, after acquiring the eye OCT image, the terminal device pre-crops the eye OCT image to a preset size, for example 512 × 512, and then performs the subsequent step S120, that is, detects the optic cup and optic disc positioning points in the pre-processed eye OCT image.
In a non-limiting usage scenario of the present application, when a user wants to perform cup and optic disc location point detection on a selected frame of eye OCT image, a location point detection function of a terminal device is enabled by clicking a specific physical key and/or a virtual key of the terminal device, and at this time, the terminal device automatically processes the selected frame of eye OCT image according to the processes from step S110 to step S120 to obtain a location point detection result.
In another non-limiting usage scenario of the present application, when a user wants to perform cup and optic disc location point detection on a certain frame of eye OCT image, the location point detection function of the terminal device may be enabled by clicking a specific physical key and/or a virtual key, and a frame of eye OCT image is selected, and then the terminal device may automatically process the eye OCT image according to the processes from step S110 to step S120, so as to obtain a location point detection result.
It is understood herein that the order of clicking the button and selecting one frame of the eye OCT image may be interchanged, and the embodiments of the present application are applicable to, but not limited to, these two different usage scenarios.
S120, detecting the eye OCT image by using a preset detection model to obtain coordinates of two positioning points of an optic cup and coordinates of two positioning points of an optic disc in the eye OCT image.
Step S120 is a step of performing positioning point detection on the eye OCT image by using a preset detection model, and determining coordinates of two positioning points of the optic cup and coordinates of two positioning points of the optic disc in the eye OCT image.
As shown in fig. 2, the detection model comprises a first network branch and a second network branch. The first network branch is used for extracting a plurality of feature maps with different scales from the eye OCT image, and the second network branch is used for extracting the coordinates of the two optic cup positioning points and the two optic disc positioning points in the eye OCT image according to the feature maps with different scales.
In the embodiment of the application, the detection model may be a deep learning network model, and the deep learning network model may be a deep learning network model based on a machine learning technology in artificial intelligence.
When the eye OCT image is input into the deep learning network model, the deep learning network model outputs the coordinates of the two optic cup positioning points and the two optic disc positioning points in the eye OCT image.
The training process of the detection model comprises: acquiring a sample data set, wherein the sample data set comprises a plurality of sample images, and each sample image is an eye OCT sample image labeled with optic cup and optic disc positioning points; and training the detection model with the sample data set, adjusting the weights of the detection model during training, and stopping training when the output of the weight-adjusted detection model meets a preset condition or when the number of training iterations reaches a preset iteration count.
As a non-limiting example of the present application, a large number of eye OCT images are acquired as sample images to form a sample data set; each sample image is an eye OCT sample image labeled with the optic cup and optic disc positioning points.
In order to obtain good labeling precision and thus train a detection model with better performance, in some embodiments of the present application the sample image is an eye OCT image obtained by preprocessing an original eye OCT image and then labeling the optic cup and optic disc positioning points.
It will be appreciated that preprocessing includes, but is not limited to, operations such as interpolation and truncation. Illustratively, referring to fig. 3, the original OCT image is 1024 pixels (corresponding to an actual 6 mm) × 768 pixels (1 pixel corresponding to an actual 3.01 μm). The OCT image is first interpolated to 1200 × 462, so that 1 pixel represents 5 μm (micrometers), which is very convenient for labeling; then both sides are truncated, 200 pixels on each of the left and right sides, so that the resolution of the preprocessed OCT image is 800 × 462.
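As a non-limiting illustration of the interpolation-and-truncation preprocessing described above, the following Python sketch uses OpenCV; the function name, the use of cv2.resize and the choice of linear interpolation are assumptions made for this example and are not mandated by the embodiment.

import cv2
import numpy as np

def preprocess_oct(image: np.ndarray) -> np.ndarray:
    # Interpolate the 1024 x 768 B-scan to 1200 x 462 so that 1 pixel represents 5 um.
    resized = cv2.resize(image, (1200, 462), interpolation=cv2.INTER_LINEAR)
    # Truncate 200 pixels on each of the left and right sides: 1200 - 2 * 200 = 800 columns.
    return resized[:, 200:1000]

if __name__ == "__main__":
    dummy = np.zeros((768, 1024), dtype=np.uint8)  # stand-in for an original OCT frame
    print(preprocess_oct(dummy).shape)             # (462, 800)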
The preprocessed eye OCT images are then labeled, mainly by doctors according to experience. Based on the clinical definitions of the optic cup and the optic disc, the optic cup and optic disc positioning points in different OCT images are accurately labeled by several doctors, and the labels are finally reviewed by one expert doctor to ensure labeling accuracy and rule consistency. The labeling result is shown schematically in fig. 4. As shown in fig. 4, the labeling result includes four positioning points: two optic disc positioning points and two optic cup positioning points. The coordinates of the two optic disc positioning points are: optic disc positioning point 1 at (x1, y1) and optic disc positioning point 2 at (x2, y2). The coordinates of the two optic cup positioning points are: optic cup positioning point 1 at (x3, y3) and optic cup positioning point 2 at (x4, y4). The annotations comply with clinical norms: each optic disc positioning point is the end point of the retinal pigment epithelium (RPE) layer; the optic cup line is parallel to the optic disc line and intersects the inner limiting membrane (ILM) at the optic cup positioning points (x3, y3) and (x4, y4); according to clinical practice the distance d between the optic cup line and the optic disc line is 110 μm, and since 1 pixel represents 5 μm this distance corresponds to 22 pixels.
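For illustration only, the following sketch shows one way the 110 μm (22 pixel) rule could be applied to derive the two optic cup positioning points from the two optic disc positioning points, assuming the optic disc line is approximately horizontal, image rows increase downward, and the ILM boundary is available as a per-column array; these helpers and simplifications are assumptions for this example and are not part of the labeling procedure itself, which is performed by doctors.

import numpy as np

def cup_points_from_disc(disc_p1, disc_p2, ilm_y, offset_px=22):
    # Cup line: parallel to the disc line, 22 pixels (110 um at 5 um/pixel) closer to the vitreous.
    line_y = (disc_p1[1] + disc_p2[1]) / 2.0 - offset_px
    xs = np.arange(min(disc_p1[0], disc_p2[0]), max(disc_p1[0], disc_p2[0]) + 1)
    crossings = np.nonzero(np.diff(np.sign(ilm_y[xs] - line_y)))[0]  # columns where the ILM crosses the cup line
    return (int(xs[crossings[0]]), line_y), (int(xs[crossings[-1]]), line_y)

if __name__ == "__main__":
    ilm = np.full(800, 100.0)
    ilm[300:500] = 160.0                                  # a crude cup-shaped depression in the ILM
    print(cup_points_from_disc((250, 150), (550, 150), ilm))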
And storing the marked sample image into a preset database as a sample data set.
A sample data set is obtained from the preset database, the sample images are taken as input, the labeling results in the sample images are taken as the target positioning points, and the optic cup and optic disc positioning point detection model is built. During training, the weights of the model are adjusted, and the training process stops when the output of the weight-adjusted model meets a preset accuracy threshold or when the number of iterations reaches a preset iteration threshold.
Optionally, the sample image is divided into a training sample set, a verification sample set and a test sample set, and a deep learning network model is trained by using a back propagation algorithm according to the training sample set, the verification sample set and the test sample set.
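A generic, non-limiting training-loop sketch in Python (PyTorch) corresponding to the weight adjustment and stopping conditions described above is given below; the dataset, loss function, optimizer, batch size and thresholds are illustrative assumptions, not the configuration actually used by the embodiment.

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def train(model, dataset, max_epochs=100, stop_loss=1e-3):
    loader = DataLoader(dataset, batch_size=8, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.MSELoss()                               # regress toward the labeled anchor-point targets
    for epoch in range(max_epochs):                        # stop when the preset iteration count is reached
        for images, targets in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), targets)
            loss.backward()                                # back propagation adjusts the model weights
            optimizer.step()
        with torch.no_grad():                              # stop early once the output meets a preset condition
            val_loss = sum(criterion(model(x), y).item() for x, y in loader) / len(loader)
        if val_loss <= stop_loss:
            break
    return model

if __name__ == "__main__":
    xs = torch.randn(16, 1, 64, 64)                        # toy stand-ins for eye OCT sample images
    ys = torch.randn(16, 8)                                # 4 anchor points = 8 target coordinates per image
    train(nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 8)), TensorDataset(xs, ys), max_epochs=2)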
It should be noted that the process of training the detection model may be implemented locally on the terminal device, or on another device in communication connection with the terminal device; once a successfully trained detection model is deployed on the terminal device side, or the other device pushes the trained detection model to the terminal device and it is successfully deployed, detection of the optic cup and optic disc positioning points in the acquired eye OCT images can be performed on the terminal device. It should also be noted that the eye OCT images acquired during positioning point detection may be used to enlarge the sample data set, so that the detection model can be further optimized on the terminal device or the other device and the further optimized model can be deployed to the terminal device to replace the previous one. In this way the detection model is continuously optimized and its performance is further improved.
In the embodiment of the present application, the preset detection model includes a first network branch and a second network branch.
The first network branch is used for extracting a plurality of feature maps with different scales from the eye OCT image. The first network branch is an improved Xception network for extracting image information of different scales from the input image; its structure is shown in fig. 5. As shown in fig. 5, the first network branch includes cascaded modules module1a, module2a, module3a and module4a. The output of module4a is upsampled 4 times (upsample ×4) and concatenated (concat) with the output of module2a as the input of module2b; the output of module3a is concatenated with the output of module2b as the input of module3b; the output of module3b is concatenated with the output of module4a as the input of module4b; and the output of module4b is upsampled 4 times and concatenated with the output of module2b as the input of module2c. module1a, module2a, module2b and module2c output four feature maps with different scales: the feature map output by module1a has a scale of 256 × 256 and 8 channels; the feature map output by module2a has a scale of 128 × 128 and 48 channels; the feature map output by module2b has a scale of 64 × 64 and 48 channels; and the feature map output by module2c has a scale of 32 × 32 and 48 channels.
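As a non-limiting illustration of the upsample-and-concatenate fusion between modules described above (for example, fusing the module4a output with the module2a output as the input of module2b), the following PyTorch snippet can be used; the channel count assumed for the module4a output is an assumption, since only the module1a/2a/2b/2c output channels are stated above.

import torch
import torch.nn.functional as F

m2a_out = torch.randn(1, 48, 128, 128)    # module2a output: 128 x 128, 48 channels (as stated above)
m4a_out = torch.randn(1, 192, 32, 32)     # module4a output: 32 x 32; 192 channels is an assumption

m4a_up = F.interpolate(m4a_out, scale_factor=4, mode="bilinear", align_corners=False)  # upsample x4
m2b_in = torch.cat([m2a_out, m4a_up], dim=1)   # concat along the channel dimension -> fed into module2b
print(m2b_in.shape)                            # torch.Size([1, 240, 128, 128])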
The structures of the module1 (including the module 1a), the module2 (including the module2a, the module2b and the module2c), the module3 (including the module3a and the module3b), and the module4 (including the module4a and the module4b) are respectively as shown in fig. 6 to 9.
Referring to fig. 6, which is a schematic structural diagram of module1, module1 includes one convolutional layer followed by a BN (batch normalization) layer with an activation function; the activation function after the BN layer is a ReLU function. The convolution kernel of the convolutional layer is 3 × 3, the stride is 2 × 2, and the number of channels is 8.
Fig. 7 is a schematic structural diagram of module2. As shown in fig. 7, module2 includes five parts, and the second to fifth parts have the same network structure. The first part comprises a first, a second and a third convolutional layer which are cascaded, plus a fourth convolutional layer; the first and second convolutional layers are each followed by a BN layer and a ReLU activation function, and the outputs of the third and fourth convolutional layers are added, the sum being input into the second part. The second part comprises a ReLU activation function followed by three cascaded convolutional layers; the input of the second part is added to the output of its three convolutional layers, and the sum is input into the third part. This continues until the fifth part produces the output of module2.
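For illustration only, the first part of module2 described above could be sketched in PyTorch as follows; the kernel sizes, the stride of 2 (inferred from the halving of the feature-map scale between module1a and module2a) and the channel counts are assumptions made for this example.

import torch
from torch import nn

class Module2FirstPart(nn.Module):
    def __init__(self, in_ch=8, out_ch=48):
        super().__init__()
        self.branch = nn.Sequential(                        # first, second and third convolutional layers
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, stride=2, padding=1),
        )
        self.shortcut = nn.Conv2d(in_ch, out_ch, 1, stride=2)  # fourth (parallel) convolutional layer

    def forward(self, x):
        return self.branch(x) + self.shortcut(x)            # element-wise addition, passed to the second part

print(Module2FirstPart()(torch.randn(1, 8, 256, 256)).shape)  # torch.Size([1, 48, 128, 128])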
Fig. 8 is a schematic structural diagram of module3 and fig. 9 is a schematic structural diagram of module4. module2, module3 and module4 are similar in overall structure, differing only in the convolution kernel sizes and the number of repetitions; see figs. 8 and 9, which are not described again here.
In the embodiment of the present application, the first network branch utilizes the structure of the main module of the original Xception, and the modification of the original Xception includes reducing the number of channels, increasing the number of times of module repetition, and increasing the feature concatenation (or aggregation). By reducing the number of channels, the calculation amount is greatly reduced, the system resource occupation is reduced, and the calculation cost is reduced; meanwhile, in order to balance the accuracy lost by reducing the number of channels, the number of module repetition times is increased on one hand, and the characteristic cascade is increased on the other hand.
Specifically, the channel numbers of the original Xception (for example 64, 128, 256 and 728) are reduced to values such as 8, 48, 96, 192 and 256, forming a lightweight Xception network. However, reducing the number of channels makes the feature extraction insufficient, so feature cascading operations are added. The feature cascading is specifically as follows: the feature extraction network with the reduced number of channels is duplicated into three copies, which for convenience of description are called a multi-stage network; each network has a plurality of convolutional layers, and each layer outputs features with a different resolution, called multi-layer features. The multi-stage networks are connected in series: the features extracted by each stage are passed to the next stage as input and, at the same time, are fused with the features of the corresponding layers of the previous stage, so that features are reused. The cascading operation fuses features of different resolutions multiple times and fully extracts effective information.
The main advantages of the cascade mode are:
1) the module1a, the module2a, the module3a, the module4a and the module2b, the module3b and the module4b belong to different levels, and a plurality of networks with different levels can fully extract image information with different scales.
2) The structure reuses features in several places; for example, module2b fuses the features of module2a with the upsampled features of module4a. Fusing features of different resolutions in this way realizes feature multiplexing and effectively utilizes network features of different levels.
The second network branch is used for extracting the coordinates of the two optic cup positioning points and the two optic disc positioning points in the eye OCT image according to the feature maps with different scales.
As an example of the present application, as shown in fig. 10, the second network branch comprises a first sub-network and a second sub-network; the first sub-network is used for coarse detection of the optic cup and optic disc positioning points of the eye OCT image, and the second sub-network is used for fine detection of the optic cup and optic disc positioning points of the eye OCT image.
The first sub-network is a global network (GlobalNet) with added feature cascading; the second sub-network is a refinement network (RefineNet) with an added attention mechanism.
The first sub-network takes the feature maps of different scales output by the first network branch as input and adds feature cascading. Simple key points can be located by extracting image features with the global network.
Fig. 11 is a schematic diagram of the first sub-network. Modules 1a, 2a, 2b and 2c in fig. 11 correspond to the different-scale outputs of the first network branch. The first sub-network comprises 7 convolutional layers. The output of module2c passes through the first convolutional layer (convolution kernel 3 × 3, 256 channels) and 2× upsampling (upsample) and is then concatenated (concat) with the output of module2b as the input of the second convolutional layer (convolution kernel 3 × 3, 128 channels); the output of the second convolutional layer is upsampled 2× and concatenated with the output of module2a as the input of the third convolutional layer (convolution kernel 3 × 3, 64 channels); the output of the third convolutional layer is upsampled 2× and concatenated with the output of module1a as the input of the fourth convolutional layer (convolution kernel 3 × 3, 64 channels). The outputs of the second, third and fourth convolutional layers each pass through a 1 × 1 convolutional layer with 4 channels, producing the three different-scale outputs of the first sub-network: global_out1, global_out2 and global_out3.
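A non-limiting PyTorch sketch of this first sub-network (GlobalNet) wiring is given below; the padding, the bilinear upsampling mode and the mapping of the three 1 × 1 heads onto global_out1 to global_out3 are assumptions made for this example.

import torch
from torch import nn
import torch.nn.functional as F

class GlobalNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(48, 256, 3, padding=1)        # applied to the module2c features
        self.conv2 = nn.Conv2d(256 + 48, 128, 3, padding=1)  # after concat with the module2b features
        self.conv3 = nn.Conv2d(128 + 48, 64, 3, padding=1)   # after concat with the module2a features
        self.conv4 = nn.Conv2d(64 + 8, 64, 3, padding=1)     # after concat with the module1a features
        self.head2 = nn.Conv2d(128, 4, 1)                    # 1 x 1 convolutions with 4 output channels
        self.head3 = nn.Conv2d(64, 4, 1)
        self.head4 = nn.Conv2d(64, 4, 1)

    def forward(self, f1a, f2a, f2b, f2c):
        def up(t):
            return F.interpolate(t, scale_factor=2, mode="bilinear", align_corners=False)
        x1 = self.conv1(f2c)
        x2 = self.conv2(torch.cat([up(x1), f2b], dim=1))
        x3 = self.conv3(torch.cat([up(x2), f2a], dim=1))
        x4 = self.conv4(torch.cat([up(x3), f1a], dim=1))
        return self.head2(x2), self.head3(x3), self.head4(x4)   # global_out1, global_out2, global_out3

net = GlobalNetSketch()
outs = net(torch.randn(1, 8, 256, 256), torch.randn(1, 48, 128, 128),
           torch.randn(1, 48, 64, 64), torch.randn(1, 48, 32, 32))
print([tuple(o.shape) for o in outs])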
The second sub-network takes the different-scale outputs of the first sub-network as input. The features are highly dense, and by adding an attention mechanism the features can be screened according to their importance, which effectively improves the reliability of the final result.
Fig. 12 is a schematic diagram of the second sub-network. As shown in fig. 12, three convolutional layers of the second sub-network are connected to the three outputs of the first sub-network: a first convolutional layer (convolution kernel 1 × 1, 128 channels), a second convolutional layer (convolution kernel 1 × 1, 128 channels) and a third convolutional layer (convolution kernel 1 × 1, 256 channels). The output of the second convolutional layer is connected to a first attention module, and the output of the third convolutional layer is connected to a second attention module. The output of the second attention module passes in turn through a fourth convolutional layer (convolution kernel 1 × 1, 128 channels), a third attention module and 4× upsampling, and is then concatenated (concat) with the output of the first attention module after 2× upsampling and with the output of the first convolutional layer; after concatenation, the result is input into a fifth convolutional layer (convolution kernel 1 × 1, 4 channels) to obtain the output, which is the detection result of the optic cup and optic disc positioning points.
The first to third attention modules share the structure shown schematically in fig. 13. Each attention module comprises a global average pooling layer and 2 fully connected (Dense) layers, with a ReLU activation function between the 2 fully connected layers and a Sigmoid activation function after the 2nd fully connected layer; the output of the Sigmoid function is reshaped and then multiplied by the input data to give the output of the attention module.
The global average pooling layer averages each feature map over its spatial extent and outputs one value per channel, that is, a W × H × D tensor becomes a 1 × 1 × D tensor. This layer compresses the features along the spatial dimensions so that each feature channel has a global receptive field, and the output dimension matches the number of input feature channels.
The fully connected layers, activation functions and reshaping operation generate a weight for each feature channel through learned parameters that explicitly model the correlation between feature channels.
The final multiplication is a recalibration operation: the output weights are regarded as the importance of each feature channel after feature selection, and the original features are recalibrated along the channel dimension by multiplying them, channel by channel, with these weights. The attention module in the embodiments of the present application only needs to learn a set of channel weights and multiply them with the original convolutional features.
The attention module adopts SE-Net (Squeeze-and-Excitation Networks). SE-Net explicitly models the interdependence between feature channels; instead of introducing a new spatial dimension for fusing feature channels, it adopts a feature recalibration strategy. Specifically, the importance of each feature channel is learned automatically, and then useful features are promoted and features that are not useful for the current task are suppressed according to this importance. In this way the features are screened by importance, which effectively improves the final result.
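The SE-style attention module and the way the second sub-network uses it can be sketched, for illustration only, as follows; the reduction ratio inside the attention module, the 4-channel inputs standing for the GlobalNet outputs, and the bilinear upsampling mode are assumptions made for this example.

import torch
from torch import nn
import torch.nn.functional as F

class SEAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)   # first fully connected layer
        self.fc2 = nn.Linear(channels // reduction, channels)   # second fully connected layer

    def forward(self, x):
        n, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))                                   # global average pooling: W x H x D -> 1 x 1 x D
        w = torch.sigmoid(self.fc2(F.relu(self.fc1(s))))         # per-channel importance weights
        return x * w.view(n, c, 1, 1)                            # recalibration: weight each input channel

class RefineNetSketch(nn.Module):
    """Minimal wiring following the textual description of fig. 12 (layer sizes assumed)."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(4, 128, 1)
        self.conv2 = nn.Conv2d(4, 128, 1)
        self.conv3 = nn.Conv2d(4, 256, 1)
        self.conv4 = nn.Conv2d(256, 128, 1)
        self.att1, self.att2, self.att3 = SEAttention(128), SEAttention(256), SEAttention(128)
        self.conv5 = nn.Conv2d(128 * 3, 4, 1)

    def forward(self, g1, g2, g3):                               # the three GlobalNet outputs, large to small
        a = self.conv1(g1)
        b = F.interpolate(self.att1(self.conv2(g2)), scale_factor=2, mode="bilinear", align_corners=False)
        c = self.att3(self.conv4(self.att2(self.conv3(g3))))
        c = F.interpolate(c, scale_factor=4, mode="bilinear", align_corners=False)
        return self.conv5(torch.cat([a, b, c], dim=1))           # final 1 x 1 conv with 4 output channels

net = RefineNetSketch()
out = net(torch.randn(1, 4, 256, 256), torch.randn(1, 4, 128, 128), torch.randn(1, 4, 64, 64))
print(out.shape)  # torch.Size([1, 4, 256, 256])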
It is to be understood that the deep learning network model described herein is merely an exemplary description and is not to be construed as a specific limitation of the invention.
In the embodiments of the application, the optic cup and optic disc positioning points of the eye OCT image are detected by the preset detection model. On one hand, the positioning point detection result of the eye OCT image can be obtained directly through the detection model, which greatly improves detection efficiency; on the other hand, the detection model extracts features of the eye OCT image at a plurality of different scales, so that the optic cup and optic disc positioning points are detected more accurately.
Optionally, on the basis of any of the above embodiments, that is, on the basis of obtaining the optic cup and optic disc positioning point detection result in one eye OCT image, some other embodiments of the present application further include steps S130 to S160 after step S120 of the embodiment shown in fig. 1.
S130, acquiring the coordinates of the two optic cup positioning points and the two optic disc positioning points in eye OCT images at at least three different angles; the at least three different angles include 0 degrees and 90 degrees.
S140, determining the optic cup length and optic disc length in each eye OCT image according to the coordinates of the two positioning points of the optic cup in each eye OCT image and the coordinates of the two positioning points of the optic disc.
S150, forming at least three first line segments according to the optic cup lengths at the at least three different angles, and fitting an optic cup ellipse by making the at least three first line segments concentric and projecting them onto the same plane; and forming at least three second line segments according to the optic disc lengths at the at least three different angles, and fitting an optic disc ellipse by making the at least three second line segments concentric and projecting them onto the same plane.
S160, obtaining morphological parameters such as the optic cup area, the optic disc area, the cup-to-disc area ratio, the vertical cup-to-disc ratio and the horizontal cup-to-disc ratio according to the optic cup lengths at the at least three different angles, the optic disc lengths at the at least three angles, the optic cup ellipse and the optic disc ellipse.
In the embodiments of the present application, eye OCT images are acquired at angles such as 0 degrees, 45 degrees, 90 degrees and 135 degrees, so the calculated optic cup length and optic disc length are lengths at various angles, for example a length at 90 degrees in the vertical direction and a length at 0 degrees in the horizontal direction.
In a non-limiting example with four different angles, the optic cup length and the optic disc length at 0, 45, 90 and 135 degrees are obtained first. From these cup and disc lengths, 8 line segments can be constructed; the segments are made concentric and projected onto the same plane, as shown in fig. 14. Taking the optic cup lengths as an example: projecting the concentric segments onto the same plane yields 8 endpoints, and fitting an ellipse to these 8 points (the small inner ellipse in fig. 14) gives the ellipse parameters, from which the optic cup area, i.e. the area of the smaller ellipse (the cup ellipse), can be determined. The optic disc is handled similarly; fitting the outer large ellipse shown in fig. 14 gives the optic disc area, i.e. the area of the larger ellipse (the disc ellipse). The cup-to-disc area ratio is the ratio of the optic cup area to the optic disc area, that is, the ratio of the smaller ellipse area to the larger ellipse area; the horizontal cup-to-disc ratio is the ratio of the optic cup length at 0 degrees to the optic disc length at 0 degrees; and the vertical cup-to-disc ratio is the ratio of the optic cup length at 90 degrees to the optic disc length at 90 degrees.
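A non-limiting numeric sketch of this procedure is given below: the cup (or disc) lengths measured at several angles are projected as concentric line segments in one plane, an ellipse is fitted to the segment endpoints, and the area and cup-to-disc ratios are derived. The sample lengths are made up for illustration, and cv2.fitEllipse is only one possible fitting routine.

import numpy as np
import cv2

def fit_ellipse_area(lengths_by_angle):
    """lengths_by_angle: {angle_deg: length_px}; returns the fitted ellipse area in px^2."""
    pts = []
    for ang, length in lengths_by_angle.items():
        rad = np.deg2rad(ang)
        dx, dy = np.cos(rad) * length / 2.0, np.sin(rad) * length / 2.0
        pts.extend([(dx, dy), (-dx, -dy)])                 # two endpoints of the concentric segment
    pts = np.array(pts, dtype=np.float32)
    (_, _), (major, minor), _ = cv2.fitEllipse(pts)        # full axis lengths of the fitted ellipse
    return np.pi * (major / 2.0) * (minor / 2.0)

cup = {0: 180.0, 45: 170.0, 90: 150.0, 135: 165.0}         # illustrative cup lengths at four angles (pixels)
disc = {0: 320.0, 45: 310.0, 90: 300.0, 135: 305.0}        # illustrative disc lengths at the same angles
cup_area, disc_area = fit_ellipse_area(cup), fit_ellipse_area(disc)
print("cup/disc area ratio:", cup_area / disc_area)
print("horizontal CDR:", cup[0] / disc[0], "vertical CDR:", cup[90] / disc[90])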
On one hand, the positioning point detection result of the eye OCT image can be obtained directly through the detection model, which greatly improves detection efficiency; on this basis, line segments are projected from the optic cup and optic disc positioning point results at multiple angles and the optic cup and optic disc morphological parameters are obtained by fitting, which is simple, efficient and easy to implement. On the other hand, because the detection model extracts features of the eye OCT image at a plurality of different scales, the optic cup and optic disc positioning points are detected more accurately, and the accuracy of the obtained morphological parameters is improved accordingly. Furthermore, richer optic cup and optic disc morphological parameters of the eye OCT image are obtained based on multiple optic cup and optic disc positioning points at different angles, so that the scheme of the application is applicable to different scenarios and is more adaptable.
An embodiment of the present application further provides a glaucoma grading method. After the 4-dimensional morphological parameters including the optic cup area, the optic disc area, the cup-to-disc area ratio and the vertical cup-to-disc ratio are obtained by using the foregoing embodiments, the method further includes: acquiring 5-dimensional ganglion cell complex (GCC) features and 4-dimensional retinal nerve fiber layer (RNFL) thickness features; combining the 4-dimensional morphological parameters, the 5-dimensional GCC features and the 4-dimensional RNFL thickness features into a 13-dimensional input feature; and inputting the 13-dimensional input feature into a glaucoma grading model trained by a machine learning method to obtain a glaucoma grading result.
The 5-dimensional GCC features relating to the GCC parameters include: upper GCC thickness, lower GCC thickness, average GCC thickness, focal loss volume (FLV) and global loss volume (GLV).
The 4-dimensional RNFL thickness characteristics for RNFL thickness include: upper RNFL thickness, lower RNFL thickness, nasal RNFL thickness and temporal RNFL thickness.
The 5-dimensional GCC feature and the 4-dimensional RNFL thickness feature can be read directly from the OCT image acquisition instrument.
The 4-dimensional morphological parameters, the 5-dimensional GCC features and the 4-dimensional RNFL thickness features are combined into the 13-dimensional input feature, which is input into the glaucoma grading model trained by a machine learning method to obtain the glaucoma grading result.
The glaucoma grading model may be a classification model based on machine learning, for example a decision tree model based on XGBoost.
Illustratively, the grading results of the glaucoma grading model include: no glaucoma, low risk, medium risk and high risk. This example uses four classes, but the grading model may also be a two-class, three-class or more-class classification model.
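For illustration only, the grading step described above could be sketched in Python as follows: the 4 morphological parameters, 5 GCC features and 4 RNFL thickness features are concatenated into a 13-dimensional vector and fed to a decision-tree classifier. The xgboost API usage is standard, but the training data, hyperparameters and label encoding here are made-up placeholders and do not reflect the actual trained model.

import numpy as np
from xgboost import XGBClassifier

GRADES = ["no glaucoma", "low risk", "medium risk", "high risk"]

# Placeholder training set: 200 random 13-dimensional feature vectors with random grade labels.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 13))
y_train = rng.integers(0, len(GRADES), size=200)

model = XGBClassifier(n_estimators=100, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)

# One sample: [cup area, disc area, cup/disc area ratio, vertical CDR] + 5 GCC + 4 RNFL thickness features.
sample = rng.normal(size=(1, 13))
print("predicted grade:", GRADES[int(model.predict(sample)[0])])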
It is understood that, under the teaching of the embodiments of the present application, a person skilled in the art may select a suitable hierarchical model according to an actual implementation situation, and a classification result of the hierarchical model may also be selected and set according to the actual situation, which is not specifically limited in the present application.
The embodiment of the application integrates various parameters and improves the accuracy of classification. In addition, grading is carried out based on the glaucoma grading model, decision can be completed within a few seconds, system resource occupation is reduced, and grading efficiency is greatly improved.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Corresponding to the method for detecting the cup and the disk positioning point based on the eye OCT image described in the above embodiments, fig. 15 shows a block diagram of the device for detecting the cup and the disk positioning point based on the eye OCT image provided in the embodiments of the present application, and for convenience of description, only the relevant parts to the embodiments of the present application are shown.
Referring to fig. 15, the apparatus includes:
an acquisition module 151 for acquiring an eye OCT image;
the detection module 152 is used for detecting the positioning points of the cup and the optic disc of the eye OCT image through a preset detection model, on one hand, the detection result of the positioning points can be obtained by directly detecting the eye OCT image through the detection model, and the detection efficiency is greatly improved; on the other hand, the detection model extracts the characteristics of the eye OCT image in different scales, so that the detection of the positioning points of the optic cup and the optic disc is more accurately realized.
It should be noted that, because the contents of information interaction, execution process, and the like between the modules/units are based on the same concept as that of the method embodiment of the present application, specific functions and technical effects thereof may be referred to specifically in the method embodiment section, and are not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Fig. 16 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 16, the terminal device 16 of this embodiment includes: at least one processor 160 (only one processor is shown in fig. 16), a memory 161, and a computer program 162 stored in the memory 161 and executable on the at least one processor 160. The steps in the various method embodiments described above, such as steps S110 to S120 shown in fig. 1, are implemented when the computer program 162 is executed by the processor 160.
The terminal device may include, but is not limited to, the processor 160 and the memory 161. Those skilled in the art will appreciate that fig. 16 is merely an example of the terminal device 16 and does not constitute a limitation of the terminal device 16, which may include more or fewer components than shown, combine certain components, or use different components; for example, the terminal device may also include an input-output device, a network access device, a bus, etc.
The Processor 160 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 161 may be an internal storage unit of the terminal device 16, such as a hard disk or a memory of the terminal device 16. The memory 161 may also be an external storage device of the terminal device 16, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 16. Further, the memory 161 may also include both an internal storage unit and an external storage device of the terminal device 16. The memory 161 is used for storing the computer programs and other programs and data required by the terminal device 16. The memory 161 may also be used to temporarily store data that has been output or is to be output.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application provide a computer program product, which when running on a mobile terminal, enables the mobile terminal to implement the steps in the above method embodiments when executed.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and can implement the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing apparatus/terminal apparatus, a recording medium, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signal, telecommunication signal, and software distribution medium. Such as a usb-disk, a removable hard disk, a magnetic or optical disk, etc. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed terminal device and method may be implemented in other ways. For example, the terminal device embodiments described above are merely illustrative. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method for detecting positioning points of an optic cup and an optic disc based on an eye OCT image, characterized by comprising the following steps:
acquiring an eye OCT image;
detecting the eye OCT image by using a preset detection model to obtain coordinates of two positioning points of an optic cup and coordinates of two positioning points of an optic disc in the eye OCT image; the detection model comprises a first network branch and a second network branch, the first network branch is used for extracting a plurality of feature maps with different scales from the eye OCT image, and the second network branch is used for extracting the coordinates of the two positioning points of the optic cup and the coordinates of the two positioning points of the optic disc in the eye OCT image according to the plurality of feature maps with different scales.
2. The method of claim 1, wherein the training process of the detection model comprises:
acquiring a sample data set, wherein the sample data set comprises a plurality of sample images, and each sample image is an eye OCT sample image labeled with optic cup and optic disc positioning points;
and training the keypoint detection model by using the sample data set, adjusting the weights of the keypoint detection model during training, and stopping training when the output result of the weight-adjusted keypoint detection model meets a preset condition or when the number of training iterations reaches a preset iteration count.
3. The method of claim 1 or 2, wherein the second network branch comprises a first sub-network and a second sub-network, the first sub-network being used for coarse detection of the optic cup and optic disc positioning points in the eye OCT image, and the second sub-network being used for fine detection of the optic cup and optic disc positioning points in the eye OCT image.
4. The method of claim 3, wherein the first sub-network is a GlobalNet with added feature cascades, and the second sub-network is a RefineNet with an added attention mechanism.
5. The method for detecting the positioning points of the optic cup and the optic disc as claimed in claim 1, further comprising:
acquiring coordinates of the two positioning points of the optic cup and coordinates of the two positioning points of the optic disc in eye OCT images at at least three different angles, the at least three different angles including 0 degrees and 90 degrees;
and calculating the optic cup and optic disc morphological parameters according to the coordinates of the two positioning points of the optic cup and the coordinates of the two positioning points of the optic disc in the at least three eye OCT images.
6. The method for detecting the positioning points of the optic cup and the optic disc as claimed in claim 5, wherein calculating the optic cup and optic disc morphological parameters according to the coordinates of the two positioning points of the optic cup and the coordinates of the two positioning points of the optic disc in the at least three eye OCT images comprises:
forming at least three first line segments according to the optic cup lengths at the at least three different angles, and fitting an optic cup ellipse by centering the at least three first line segments and projecting them onto the same plane; forming at least three second line segments according to the optic disc lengths at the at least three different angles, and fitting an optic disc ellipse by centering the at least three second line segments and projecting them onto the same plane;
and obtaining the optic cup and optic disc morphological parameters according to the optic cup lengths at the at least three different angles, the optic disc lengths at the at least three different angles, the optic cup ellipse and the optic disc ellipse.
7. The method for detecting the positioning points of the optic cup and the optic disc according to claim 5 or 6, wherein the optic cup and optic disc morphological parameters comprise at least one of: optic cup area, optic disc area, cup-to-disc area ratio, vertical cup-to-disc ratio, and horizontal cup-to-disc ratio.
8. An eye OCT image-based detection device for positioning points of an optic cup and an optic disc, characterized by comprising:
the acquisition module is used for acquiring an eye OCT image;
the detection module is used for detecting the eye OCT image by using a preset detection model to obtain coordinates of two positioning points of an optic cup and coordinates of two positioning points of an optic disc in the eye OCT image; the detection model comprises a first network branch and a second network branch, the first network branch is used for extracting a plurality of feature maps with different scales from the eye OCT image, and the second network branch is used for extracting the coordinates of the two positioning points of the optic cup and the coordinates of the two positioning points of the optic disc in the eye OCT image according to the plurality of feature maps with different scales.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
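The two-branch detector of claims 1, 3 and 4 can be pictured with a minimal sketch: a backbone branch produces feature maps at several scales, and a keypoint branch regresses coarse heatmaps (GlobalNet-style) from a cascade of the upsampled feature maps, then refines them through a simple attention gate (RefineNet-style); the argmax of each refined heatmap yields one positioning-point coordinate. The PyTorch sketch below is an illustrative assumption; the layer sizes, the attention gate and the heatmap decoding are not taken from the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_KEYPOINTS = 4  # two optic-cup positioning points + two optic-disc positioning points


class MultiScaleBackbone(nn.Module):
    """First network branch: feature maps of the eye OCT image at three scales."""
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU())

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        return [f1, f2, f3]


class KeypointBranch(nn.Module):
    """Second network branch: coarse heatmaps, then refined heatmaps via an attention gate."""
    def __init__(self, channels=32 + 64 + 128):
        super().__init__()
        self.coarse = nn.Conv2d(channels, NUM_KEYPOINTS, 1)               # GlobalNet-style head
        self.attention = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.refine = nn.Conv2d(channels, NUM_KEYPOINTS, 3, padding=1)    # RefineNet-style head

    def forward(self, feats):
        size = feats[0].shape[-2:]
        # feature cascade: upsample every scale to the finest resolution and concatenate
        cascade = torch.cat(
            [F.interpolate(f, size=size, mode="bilinear", align_corners=False) for f in feats],
            dim=1)
        coarse_heatmaps = self.coarse(cascade)
        refined_heatmaps = self.refine(cascade * self.attention(cascade))
        return coarse_heatmaps, refined_heatmaps


def heatmaps_to_coords(heatmaps):
    """Argmax of each heatmap -> (x, y) coordinates of the positioning points."""
    b, k, h, w = heatmaps.shape
    flat = heatmaps.view(b, k, -1).argmax(dim=-1)
    return torch.stack([flat % w, torch.div(flat, w, rounding_mode="floor")], dim=-1)


if __name__ == "__main__":
    oct_image = torch.randn(1, 1, 256, 256)         # stand-in for one grayscale OCT B-scan
    feats = MultiScaleBackbone()(oct_image)
    _, refined = KeypointBranch()(feats)
    print(heatmaps_to_coords(refined).shape)        # torch.Size([1, 4, 2])
```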
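The training process of claim 2 can be sketched in the same spirit: Gaussian heatmap targets are rendered around the four labeled positioning points of each sample image, the model weights are adjusted by an optimizer, and training stops either when a preset condition on the output is met (approximated here by a loss threshold, which is an assumption) or when a preset iteration count is reached. The data loader, the threshold and the iteration budget below are illustrative; model is assumed to be any module mapping an OCT image batch to (coarse, refined) heatmaps, e.g. a wrapper around the two-branch sketch above.

```python
import torch
import torch.nn.functional as F


def render_target_heatmaps(points, height, width, sigma=2.0):
    """points: (K, 2) pixel coordinates -> (K, H, W) Gaussian target heatmaps."""
    ys = torch.arange(height, dtype=torch.float32).view(1, height, 1)
    xs = torch.arange(width, dtype=torch.float32).view(1, 1, width)
    px = points[:, 0].view(-1, 1, 1).float()
    py = points[:, 1].view(-1, 1, 1).float()
    return torch.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2 * sigma ** 2))


def train(model, loader, max_iters=10000, loss_threshold=1e-4, lr=1e-3):
    """loader yields (image, target) pairs; target is (B, K, H, W) at the model's output size."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    iteration = 0
    while True:
        for image, target in loader:
            coarse, refined = model(image)
            # supervise both the coarse and the refined heatmaps
            loss = F.mse_loss(coarse, target) + F.mse_loss(refined, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            iteration += 1
            # stop on the preset output condition or on the preset iteration count
            if loss.item() < loss_threshold or iteration >= max_iters:
                return model
```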
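Claims 5 and 6 describe a geometric step. One way to read it, sketched below as an assumption rather than the patent's exact procedure: each scan angle yields a cup (or disc) line segment between its two positioning points; the segments are centered and projected into a single en-face plane as chords at their scan angles, and a centered conic a*x^2 + b*x*y + c*y^2 = 1 is fitted to the chord endpoints by least squares, from which the ellipse area follows as 2*pi/sqrt(4ac - b^2).

```python
import numpy as np


def fit_centered_ellipse(lengths, angles_deg):
    """lengths[i] is the segment length measured at angles_deg[i]; returns (a, b, c)."""
    pts = []
    for length, angle in zip(lengths, angles_deg):
        theta = np.deg2rad(angle)
        r = length / 2.0
        # both endpoints of the centered segment, projected into the en-face plane
        pts.append((r * np.cos(theta), r * np.sin(theta)))
        pts.append((-r * np.cos(theta), -r * np.sin(theta)))
    pts = np.asarray(pts)
    x, y = pts[:, 0], pts[:, 1]
    design = np.stack([x * x, x * y, y * y], axis=1)
    coeffs, *_ = np.linalg.lstsq(design, np.ones(len(pts)), rcond=None)
    return coeffs  # (a, b, c) of a*x^2 + b*x*y + c*y^2 = 1


def ellipse_area(coeffs):
    a, b, c = coeffs
    return 2.0 * np.pi / np.sqrt(4.0 * a * c - b * b)


# Illustrative cup/disc lengths at three angles as in claim 5 (0 and 90 degrees plus 45):
cup = fit_centered_ellipse(lengths=[1.1, 1.0, 0.9], angles_deg=[0, 45, 90])
disc = fit_centered_ellipse(lengths=[1.9, 1.8, 1.7], angles_deg=[0, 45, 90])
print(ellipse_area(cup), ellipse_area(disc))   # roughly 0.78 and 2.54
```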
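From the fitted ellipses and the per-angle lengths, the morphological parameters listed in claim 7 can be assembled. The definitions below are assumptions: the horizontal and vertical cup-to-disc ratios are taken as the ratio of cup length to disc length on the 0-degree and 90-degree scans, and the area values are illustrative numbers of the kind the ellipse fit above would produce.

```python
def cup_disc_parameters(cup_lengths, disc_lengths, cup_area, disc_area, angles_deg):
    """Morphological parameters from per-angle lengths and fitted ellipse areas."""
    idx_h = angles_deg.index(0)    # horizontal scan
    idx_v = angles_deg.index(90)   # vertical scan
    return {
        "cup_area": cup_area,
        "disc_area": disc_area,
        "cup_to_disc_area_ratio": cup_area / disc_area,
        "horizontal_cup_to_disc_ratio": cup_lengths[idx_h] / disc_lengths[idx_h],
        "vertical_cup_to_disc_ratio": cup_lengths[idx_v] / disc_lengths[idx_v],
    }


print(cup_disc_parameters(
    cup_lengths=[1.1, 1.0, 0.9], disc_lengths=[1.9, 1.8, 1.7],
    cup_area=0.78, disc_area=2.54, angles_deg=[0, 45, 90]))
```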
CN202010087226.5A 2020-02-11 2020-02-11 Eye OCT image-based detection method and device for positioning points of optic cups and optic discs Pending CN111311565A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010087226.5A CN111311565A (en) 2020-02-11 2020-02-11 Eye OCT image-based detection method and device for positioning points of optic cups and optic discs
PCT/CN2020/093585 WO2021159643A1 (en) 2020-02-11 2020-05-30 Eye oct image-based optic cup and optic disc positioning point detection method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010087226.5A CN111311565A (en) 2020-02-11 2020-02-11 Eye OCT image-based detection method and device for positioning points of optic cups and optic discs

Publications (1)

Publication Number Publication Date
CN111311565A true CN111311565A (en) 2020-06-19

Family

ID=71160064

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010087226.5A Pending CN111311565A (en) 2020-02-11 2020-02-11 Eye OCT image-based detection method and device for positioning points of optic cups and optic discs

Country Status (2)

Country Link
CN (1) CN111311565A (en)
WO (1) WO2021159643A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113158821A (en) * 2021-03-29 2021-07-23 中国科学院深圳先进技术研究院 Multimodal eye detection data processing method and device and terminal equipment

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113870270A (en) * 2021-08-30 2021-12-31 北京工业大学 Eyeground image cup and optic disc segmentation method under unified framework
CN113837104B (en) * 2021-09-26 2024-03-15 大连智慧渔业科技有限公司 Underwater fish target detection method and device based on convolutional neural network and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9101293B2 (en) * 2010-08-05 2015-08-11 Carl Zeiss Meditec, Inc. Automated analysis of the optic nerve head: measurements, methods and representations
CN109829894B (en) * 2019-01-09 2022-04-26 平安科技(深圳)有限公司 Segmentation model training method, OCT image segmentation method, device, equipment and medium
CN110120047B (en) * 2019-04-04 2023-08-08 平安科技(深圳)有限公司 Image segmentation model training method, image segmentation method, device, equipment and medium
CN110327013B (en) * 2019-05-21 2022-02-15 北京至真互联网技术有限公司 Fundus image detection method, device and equipment and storage medium
CN110298850B (en) * 2019-07-02 2022-03-15 北京百度网讯科技有限公司 Segmentation method and device for fundus image
CN110889826B (en) * 2019-10-30 2024-04-19 平安科技(深圳)有限公司 Eye OCT image focus region segmentation method, device and terminal equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113158821A (en) * 2021-03-29 2021-07-23 中国科学院深圳先进技术研究院 Multimodal eye detection data processing method and device and terminal equipment
CN113158821B (en) * 2021-03-29 2024-04-12 中国科学院深圳先进技术研究院 Method and device for processing eye detection data based on multiple modes and terminal equipment

Also Published As

Publication number Publication date
WO2021159643A1 (en) 2021-08-19

Similar Documents

Publication Publication Date Title
CN110427917B (en) Method and device for detecting key points
CN110874864B (en) Method, device, electronic equipment and system for obtaining three-dimensional model of object
WO2020215985A1 (en) Medical image segmentation method and device, electronic device and storage medium
CN111311565A (en) Eye OCT image-based detection method and device for positioning points of optic cups and optic discs
CN110298844B (en) X-ray radiography image blood vessel segmentation and identification method and device
CN110889826B (en) Eye OCT image focus region segmentation method, device and terminal equipment
CN106133750A (en) For determining the 3D rendering analyzer of direction of visual lines
CN108764342B (en) Semantic segmentation method for optic discs and optic cups in fundus image
US20220383661A1 (en) Method and device for retinal image recognition, electronic equipment, and storage medium
CN113269737B (en) Fundus retina artery and vein vessel diameter calculation method and system
CN111667459A (en) Medical sign detection method, system, terminal and storage medium based on 3D variable convolution and time sequence feature fusion
CN102567734A (en) Specific value based retina thin blood vessel segmentation method
CN112330684A (en) Object segmentation method and device, computer equipment and storage medium
CN111178420A (en) Coronary segment labeling method and system on two-dimensional contrast image
CN113763348A (en) Image quality determination method and device, electronic equipment and storage medium
CN110610480B (en) MCASPP neural network eyeground image optic cup optic disc segmentation model based on Attention mechanism
CN113158821B (en) Method and device for processing eye detection data based on multiple modes and terminal equipment
CN111239999A (en) Optical data processing method and device based on microscope and storage medium
CN113557528B (en) Method, device and system for generating point cloud completion network and processing point cloud data
CN109145861A (en) Emotion identification device and method, head-mounted display apparatus, storage medium
CN112037305B (en) Method, device and storage medium for reconstructing tree-like organization in image
Leonardo et al. Impact of generative modeling for fundus image augmentation with improved and degraded quality in the classification of glaucoma
CN113096039A (en) Depth information completion method based on infrared image and depth image
CN112258647A (en) Map reconstruction method and device, computer readable medium and electronic device
CN110363755A (en) Exempt from detection method, device, equipment and the medium of the myocardial infarction area of contrast agent

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination