CN110889826A - Segmentation method and device for eye OCT image focal region and terminal equipment - Google Patents

Segmentation method and device for eye OCT image focal region and terminal equipment

Info

Publication number
CN110889826A
CN110889826A
Authority
CN
China
Prior art keywords
oct image
region
eye
result
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911043286.0A
Other languages
Chinese (zh)
Other versions
CN110889826B (en)
Inventor
周侠
郭晏
王玥
吕彬
吕传峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201911043286.0A priority Critical patent/CN110889826B/en
Publication of CN110889826A publication Critical patent/CN110889826A/en
Priority to PCT/CN2020/111734 priority patent/WO2021082691A1/en
Application granted granted Critical
Publication of CN110889826B publication Critical patent/CN110889826B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10101Optical tomography; Optical coherence tomography [OCT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic

Abstract

The application belongs to the technical field of image processing and provides a segmentation method, a segmentation apparatus and a terminal device for the focal region of an eye OCT image. The segmentation method comprises the following steps: acquiring an eye OCT image to be segmented; detecting the eye OCT image and determining a bounding box of the focal region in the eye OCT image; and performing edge extraction on the focal region within the bounding box to obtain a segmentation result of the focal region. The application thus provides a segmentation scheme for the focal region of an eye OCT image that realizes accurate and efficient segmentation of the focal region.

Description

Segmentation method and device for eye OCT image focal region and terminal equipment
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to a method and an apparatus for segmenting a focal region of an eye OCT image, a terminal device, and a computer-readable storage medium.
Background
Optical coherence tomography (OCT) is a tomographic imaging technology that has developed rapidly in recent years and has great development prospects, with particularly attractive applications in the biopsy and imaging of biological tissue. The technology has already been trialled in clinical diagnosis in ophthalmology, dentistry and dermatology, and represents another major technological breakthrough after computed tomography (CT) and magnetic resonance imaging (MRI).
The segmentation of focal regions in ophthalmic OCT images, such as subretinal fluid, intraretinal fluid, subretinal hyperreflective material and pigment epithelial detachment, is the basis for reliable diagnosis of fundus diseases. Therefore, a segmentation scheme for the focal region of the eye OCT image is needed.
Disclosure of Invention
The embodiments of the present application provide a method and an apparatus for segmenting the focal region of an eye OCT image, a terminal device and a computer-readable storage medium, which together provide a segmentation scheme for the focal region of an eye OCT image and realize accurate and efficient segmentation of the focal region.
In a first aspect, an embodiment of the present application provides a method for segmenting a lesion region in an OCT image of an eye, including:
acquiring an eye OCT image to be segmented;
detecting the eye OCT image, and determining a boundary frame of a focus area in the eye OCT image;
and performing edge extraction on the focus region in the boundary frame to obtain a segmentation result of the focus region.
In a second aspect, an embodiment of the present application provides a segmentation apparatus for a lesion region in an OCT image of an eye, including:
the acquisition module is used for acquiring an eye OCT image to be segmented;
the detection module is used for detecting the eye OCT image and determining a boundary frame of a focus area in the eye OCT image;
and the extraction module is used for carrying out edge extraction on the focus region in the boundary frame to obtain a segmentation result of the focus region.
In a third aspect, an embodiment of the present application provides a terminal device, including: a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the method according to the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product, which, when run on a terminal device, causes the terminal device to perform the method according to the first aspect.
In the embodiments of the present application, the bounding box of the focal region in the eye OCT image is determined first, and edge extraction is then performed on the focal region within the bounding box to obtain the segmentation result of the focal region in the eye OCT image. On the one hand, this coarse-to-fine strategy of locating the focal region before refining its boundary improves the accuracy of the segmentation result; on the other hand, because the edge extraction is performed only on the image region within the bounding box, the segmentation efficiency is improved, the amount of data to be processed is reduced, and the occupation of system resources is reduced.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic flowchart of a segmentation method for a lesion region of an OCT image of an eye according to an embodiment of the present application;
fig. 2 is a schematic diagram illustrating the results of step S110 and step S120 in the method for segmenting a lesion region in an OCT image of an eye according to an embodiment of the present application;
fig. 3 is a schematic flowchart illustrating step S120 of a segmentation method for a lesion region of an OCT image of an eye according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a deep learning neural network model used in a segmentation method for a focal region of an eye OCT image according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a first sub-network used in a segmentation method for a lesion region of an OCT image of an eye according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an attention module used in a segmentation method for a lesion region of an OCT image of an eye according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram illustrating the results of step S120 and step S130 in the method for segmenting the lesion region in the OCT image of the eye according to an embodiment of the present application;
fig. 8 is a schematic flowchart illustrating step S130 in a method for segmenting a lesion region in an OCT image of an eye according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a segmentation apparatus for a lesion region of an OCT image of an eye according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a terminal device to which the segmentation method for a lesion region in an OCT image of an eye according to an embodiment of the present disclosure is applied.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Segmentation of the focal region in ophthalmic OCT images is the basis for reliable diagnosis of fundus diseases. Therefore, the embodiment of the application provides a segmentation scheme for the focal region of the OCT image of the eye, which accurately and reliably segments the focal region in the OCT image of the eye.
Fig. 1 shows a flowchart of an implementation of a method for segmenting the focal region of an eye OCT image according to an embodiment of the present application. The segmentation method is applied to a terminal device. The segmentation method for the focal region of an eye OCT image provided in the embodiments of the present application may be applied to terminal devices such as an ophthalmic OCT device, a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a stand-alone server, a distributed server, a server cluster or a cloud server; the embodiments of the present application do not impose any limit on the specific type of the terminal device. As shown in fig. 1, the segmentation method includes steps S110 to S130. The specific implementation principle of each step is as follows.
And S110, acquiring an eye OCT image to be segmented.
The eye OCT image to be segmented is the image whose focal region needs to be segmented, and it may be a single frame of an original eye OCT image.
When the terminal device is an OCT device, the eye OCT image may be an eye OCT image obtained by scanning an eye of a human body to be measured in real time by the OCT device.
When the terminal device is not the OCT device, the eye OCT image may be an eye OCT image acquired by the terminal device from the OCT device in real time, or may be a pre-stored eye OCT image acquired from an internal or external memory of the terminal device.
In a non-limiting example, the OCT device collects an OCT image of an eye of a human body to be measured in real time and sends the OCT image to the terminal device. And the terminal equipment acquires the OCT image and takes the OCT image as an image to be segmented.
In another non-limiting example, the OCT device collects an OCT image of an eye of a human body to be measured and sends the OCT image to the terminal device, and the terminal device stores the OCT image in a database and then acquires the OCT image of the eye of the human body to be measured from the database as an image to be segmented.
In some embodiments of the present application, the terminal device acquires an eye OCT image to be segmented, and directly performs the subsequent step S120 after acquiring the eye OCT image, that is, detects the eye OCT image.
In other embodiments of the present application, the terminal device preprocesses the acquired eye OCT image. It can be understood that preprocessing includes, but is not limited to, noise reduction and cropping operations. Noise reduction and cropping reduce noise and the amount of data to be processed, improve the accuracy of the segmentation result and save computing power. Illustratively, the noise reduction operation may be a filtering operation, including but not limited to nonlinear filtering, median filtering, bilateral filtering and the like.
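As a rough illustration of such preprocessing, the following sketch applies median and bilateral filtering followed by an optional crop. It assumes OpenCV is available; the function name, kernel sizes and filter parameters are illustrative assumptions rather than values specified by the application.

import cv2

def preprocess_oct(image_path, crop_box=None):
    # Read the single-frame eye OCT image in grayscale.
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Median filtering suppresses speckle-like noise common in OCT images.
    denoised = cv2.medianBlur(img, 5)
    # Bilateral filtering smooths homogeneous areas while preserving retinal layer edges.
    denoised = cv2.bilateralFilter(denoised, d=9, sigmaColor=75, sigmaSpace=75)
    # Optional cropping reduces the amount of data to be processed downstream.
    if crop_box is not None:
        x, y, w, h = crop_box
        denoised = denoised[y:y + h, x:x + w]
    return denoised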
In a non-limiting usage scenario of the present application, when a user wants to segment a lesion area of a selected frame of eye OCT image, the user activates a lesion area segmentation function of a terminal device by clicking a specific physical key and/or a specific virtual key of the terminal device, and at this time, the terminal device automatically processes the selected frame of eye OCT image according to the processes of step S120 and step S130 to obtain a segmentation result.
In another non-limiting usage scenario of the present application, when a user wants to perform lesion area segmentation on a certain frame of eye OCT image, the user may activate a lesion area segmentation function of a terminal device by clicking a specific physical key and/or a virtual key, and select a frame of eye OCT image, and then the terminal device may automatically process the eye OCT image according to the processes of step S120 and step S130 to obtain a segmentation result.
It is understood herein that the order of clicking the button and selecting one frame of the eye OCT image may be interchanged, and the embodiments of the present application are applicable to, but not limited to, these two different usage scenarios.
S120, detecting the eye OCT image and determining a boundary frame of a focus area in the eye OCT image.
In step S120, the eye OCT image is detected, and the bounding box of the focal region in the eye OCT image is determined.
In the embodiments of the present application, the eye OCT image is detected using a deep learning network model to determine the bounding box of the focal region in the eye OCT image; the region enclosed by the bounding box is the detected focal region.
The deep learning network model is used for framing a focus region in the eye OCT image, specifically, the focus region is framed by a bounding box. As shown in fig. 2, the eye OCT image is detected, and the bounding box a of the lesion area in the eye OCT image is determined.
When the eye OCT image to be segmented is input into the deep learning network model, the deep learning network model outputs the eye OCT image marked with the boundary box, and the region framed by the boundary box is a focus region of the eye OCT image.
The training process of the deep learning network model includes: acquiring a large number of eye OCT sample images, where each eye OCT sample image is annotated with its focal regions; dividing the sample images into a training sample set, a validation sample set and a test sample set; and training the deep learning network model with a back-propagation algorithm based on the training sample set, the validation sample set and the test sample set.
In the training process, a large number of eye OCT sample images with annotated focal regions need to be acquired. For example, on the basis of the original sample set, the eye OCT sample images in the sample set may be cropped, rotated or otherwise transformed to generate new sample images and expand the sample set, as illustrated by the sketch below.
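A minimal augmentation sketch is given below; it assumes NumPy and OpenCV, and the crop ratio and rotation angles are illustrative assumptions only.

import cv2
import numpy as np

def augment(img, mask):
    # Generate extra (image, annotation-mask) training pairs by cropping and rotation.
    h, w = img.shape[:2]
    samples = []
    # Random crop covering most of the frame, applied identically to the annotation mask.
    ch, cw = int(h * 0.9), int(w * 0.9)
    y0 = np.random.randint(0, h - ch + 1)
    x0 = np.random.randint(0, w - cw + 1)
    samples.append((img[y0:y0 + ch, x0:x0 + cw], mask[y0:y0 + ch, x0:x0 + cw]))
    # Small rotations; the lesion annotations are rotated together with the image.
    for angle in (-10, 10):
        m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        samples.append((cv2.warpAffine(img, m, (w, h)),
                        cv2.warpAffine(mask, m, (w, h), flags=cv2.INTER_NEAREST)))
    return samples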
It should be noted that the process of training the deep learning network model may be performed locally on the terminal device, or on another device in communication connection with the terminal device. Once the trained deep learning network model is deployed on the terminal device, or after the other device pushes the trained model to the terminal device and it is successfully deployed there, the acquired eye OCT image to be segmented can be segmented on the terminal device. It should also be noted that the eye OCT images obtained during focal region segmentation may be added to the training sample set so that the deep learning network model can be further optimized on the terminal device or on the other device, and the further optimized model can then be deployed to the terminal device to replace the previous model. Optimizing the deep learning network model in this way further improves the adaptability of the scheme of the present application.
In the process of training the neural network model, the loss function used may be one of a 0-1 loss function, an absolute loss function, a logarithmic loss function, an exponential loss function and a hinge loss function or a combination of at least two of them.
The deep learning network model may be a deep learning network model based on machine learning techniques in artificial intelligence, including but not limited to AlexNet, VGGNet, GoogLeNet, ResNet, ResNeXt, R-CNN, YOLO, SqueezeNet, SegNet, GAN and the like.
Optionally, in a non-limiting example of the present application, as shown in fig. 3, step S120 includes step S121 to step S123.
And S121, performing feature extraction on the eye OCT image to obtain a plurality of feature maps with different scales.
And S122, fusing the feature maps with different scales based on an attention mechanism to obtain a fusion result.
And S123, extracting the region of the fusion result, and determining a boundary frame of a focus region in the eye OCT image.
In this example, as shown in fig. 4, the deep learning network model includes two cascaded deep learning network models, a first sub-network and a second sub-network.
The first subnetwork includes a feature extraction network and an attention network. The feature extraction network of the first sub-network is used for extracting a plurality of feature maps with different scales of the eye OCT image; the attention network of the first sub-network is used for fusing the feature maps with different scales based on an attention mechanism to obtain a fusion result. The second sub-network is used for extracting the region of the fusion result and determining a boundary box of a focus region in the eye OCT image.
As shown in fig. 5, the feature extraction network of the first sub-network includes 4 cascaded downsamplings and 4 cascaded upsamplings, where the 4 downsamplings are, in order, a first downsampling, a second downsampling, a third downsampling and a fourth downsampling, and the 4 upsamplings are, in order, a first upsampling, a second upsampling, a third upsampling and a fourth upsampling. The result of the fourth downsampling is used as the input of the first upsampling; the result of the third downsampling and the result of the first upsampling are concatenated and used as the input of the second upsampling; the result of the second downsampling and the result of the second upsampling are concatenated and used as the input of the third upsampling; and the result of the first downsampling and the result of the third upsampling are concatenated and used as the input of the fourth upsampling. The attention network of the first sub-network includes 4 attention modules; the first, second, third and fourth upsampling results obtained by the feature extraction network are each fed into one attention module, and the outputs of all the attention modules are concatenated to obtain the fusion result.
Illustratively, the downsampling may be implemented by a convolutional layer and the upsampling by a deconvolution layer. Alternatively, the downsampling may be implemented by a convolution layer plus a pooling layer and the upsampling by a deconvolution layer plus an anti-pooling layer.
Illustratively, as shown in fig. 6, the attention module includes 1 global pooling layer and 1 convolutional layer with batch normalization (BN) and a softmax function. When the features of the respective layers are scored by the attention module, normalization is performed by softmax so that the scores corresponding to each input feature sum to 1.
The feature extraction network fuses deep and shallow features of the eye OCT image, which greatly improves the accuracy of feature extraction by the model and thereby the accuracy of the subsequent segmentation result. In addition, an attention module is added after each upsampling; the attention module increases the weight of relatively important features and further improves the accuracy of feature extraction.
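The following PyTorch sketch illustrates a first sub-network of this shape: four cascaded downsamplings, four cascaded upsamplings with the skip concatenations described above, and one attention module (global pooling plus a 1x1 convolution with BN and softmax) per upsampling result. The channel counts, kernel sizes and the way the four reweighted feature maps are resized before concatenation are assumptions for illustration, not the patented configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

def down(cin, cout):
    # Downsampling implemented as a stride-2 convolution (halves the resolution).
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

def up(cin, cout):
    # Upsampling implemented as a stride-2 transposed convolution (doubles the resolution).
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 2, stride=2),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class AttentionModule(nn.Module):
    # Global pooling + 1x1 convolution with BN; softmax makes the channel scores sum to 1.
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.BatchNorm2d(channels))

    def forward(self, x):
        s = F.adaptive_avg_pool2d(x, 1)                                # global pooling
        s = torch.softmax(self.score(s).flatten(1), dim=1).view_as(s)  # normalized scores
        return x * s                                                   # reweighted features

class FirstSubNetwork(nn.Module):
    def __init__(self, cin=1):
        super().__init__()
        self.d1, self.d2 = down(cin, 32), down(32, 64)
        self.d3, self.d4 = down(64, 128), down(128, 256)
        self.u1 = up(256, 128)        # input: fourth downsampling result
        self.u2 = up(128 + 128, 64)   # input: concat(third down, first up)
        self.u3 = up(64 + 64, 32)     # input: concat(second down, second up)
        self.u4 = up(32 + 32, 32)     # input: concat(first down, third up)
        self.attn = nn.ModuleList([AttentionModule(c) for c in (128, 64, 32, 32)])
        self.out_channels = 128 + 64 + 32 + 32   # channels of the fusion result

    def forward(self, x):
        d1 = self.d1(x); d2 = self.d2(d1); d3 = self.d3(d2); d4 = self.d4(d3)
        u1 = self.u1(d4)
        u2 = self.u2(torch.cat([d3, u1], dim=1))
        u3 = self.u3(torch.cat([d2, u2], dim=1))
        u4 = self.u4(torch.cat([d1, u3], dim=1))
        # One attention module per upsampling result; outputs are resized and concatenated.
        feats = [m(u) for m, u in zip(self.attn, (u1, u2, u3, u4))]
        feats = [F.interpolate(f, size=u4.shape[-2:], mode="bilinear", align_corners=False)
                 for f in feats]
        return torch.cat(feats, dim=1)   # fusion result passed to the second sub-network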
The second sub-network may be the original Mask R-CNN model or Faster R-CNN model with the feature map extraction module removed; that is, in this example, the feature extraction module (i.e., the CNN backbone) of the original Mask R-CNN or Faster R-CNN model is replaced by the first sub-network. The focal region in the fusion result is then marked by the second sub-network.
As an example of the application, the original Mask R-CNN model with the feature extraction module removed, i.e., the second sub-network, is connected to the fusion result output by the attention network of the first sub-network, so that the output of the attention network serves as the input of the second sub-network and an attention mechanism is thereby added to the Mask R-CNN model.
Because the Mask R-CNN model has relatively stable performance, relatively strong generalization and relatively high accuracy, and an attention mechanism is added in the embodiments of the present application, the Mask R-CNN model with the attention mechanism better represents focal regions of different types and sizes. Using the Mask R-CNN model with the attention mechanism therefore improves the accuracy of identifying and detecting the focal region, and is particularly beneficial for detecting small target focal regions.
After the fusion result enters the fully connected layers of the Mask R-CNN model, classification of the focal region to be segmented and regression-based localization of its bounding box may be performed based on a preset classification loss function and a preset bounding-box loss function. The categories of the focal region may be set to include four classes: intraretinal fluid, subretinal fluid, subretinal hyperreflective material and pigment epithelial detachment.
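Assuming the FirstSubNetwork sketch above, the fusion result could be plugged into a Mask R-CNN style detector, for example via torchvision as sketched below; the anchor sizes, resize settings and class count (four lesion categories plus background) are illustrative assumptions rather than the patented configuration.

import torch
from torchvision.models.detection import MaskRCNN
from torchvision.models.detection.rpn import AnchorGenerator

# The first sub-network replaces the usual CNN backbone; MaskRCNN reads out_channels from it.
backbone = FirstSubNetwork(cin=1)
anchor_generator = AnchorGenerator(sizes=((16, 32, 64, 128),),
                                   aspect_ratios=((0.5, 1.0, 2.0),))
model = MaskRCNN(backbone,
                 num_classes=5,                       # 4 lesion categories + background
                 rpn_anchor_generator=anchor_generator,
                 image_mean=[0.5], image_std=[0.5],   # single-channel OCT input
                 min_size=256, max_size=256)

model.eval()
with torch.no_grad():
    # MaskRCNN accepts a list of CHW tensors; here a dummy single-channel OCT frame.
    predictions = model([torch.rand(1, 256, 256)])
boxes = predictions[0]["boxes"]   # bounding boxes of the detected focal regions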
It is to be understood that the deep learning network model described herein is merely an exemplary description and is not to be construed as a specific limitation of the invention.
S130, performing edge extraction on the focus region in the boundary box to obtain a segmentation result of the focus region.
In the embodiment of the present application, the boundary frame of the lesion area is detected in step S120, and then the detailed segmentation is performed on the basis of the boundary frame, that is, the initial boundary frame determined in step S120 is a coarse positioning area, such as an image area surrounded by the boundary frame a in fig. 2. In step S130, edge extraction is performed on the lesion region in the coarse positioning region to obtain a segmentation result of the lesion region.
As shown in fig. 7, a schematic diagram of edge extraction performed on a lesion area in a boundary box a with respect to the boundary box a of the lesion area in an OCT image of an eye is shown.
As a non-limiting example of the present application, as shown in fig. 8, step S130 includes step S131 to step S134.
S131, acquiring a transverse convolution factor and a longitudinal convolution factor.
In the present application example, the horizontal convolution factor and the vertical convolution factor may be set in the system in advance, or may be adjusted by the user according to the requirement, or the setting value may be set as the default value of the system after the user adjusts the setting value. The present example does not specifically limit these two convolution factors.
For example, the convolution factor may be a Sobel convolution factor, a Prewitt convolution factor, a Roberts convolution factor or the like.
Illustratively, the system presets Sobel convolution factors. The transverse (horizontal) convolution factor of the Sobel operator is:

Sx = [ -1  0  +1
       -2  0  +2
       -1  0  +1 ]

and the longitudinal (vertical) convolution factor of the Sobel operator is:

Sy = [ -1  -2  -1
        0   0   0
       +1  +2  +1 ]
s132, carrying out convolution calculation on the region image surrounded by the bounding box by using the transverse convolution factor to obtain a transverse gradient; and carrying out convolution calculation on the region image surrounded by the bounding box by utilizing the longitudinal convolution factor to obtain a longitudinal gradient.
And performing convolution calculation processing on the transverse convolution factor and the longitudinal convolution factor and the region image enclosed by the bounding box to obtain a transverse gradient and a longitudinal gradient.
Illustratively, if the system presets the Sobel convolution factors, the image of the region enclosed by the bounding box is denoted FA, and Gx and Gy denote the gray-level responses of transverse and longitudinal edge detection respectively, i.e., Gx represents the transverse gradient and Gy represents the longitudinal gradient. The formulas are as follows:

Gx = Sx * FA,    Gy = Sy * FA

where * denotes the two-dimensional convolution of the convolution factor with the region image.
it should be noted that, here, the transverse gradient and the longitudinal gradient are calculated for each pixel point (x, y) in the region image.
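A minimal sketch of this computation with OpenCV/NumPy is shown below; the helper name and the bounding-box format (x, y, w, h) are illustrative assumptions. Note that cv2.filter2D performs correlation rather than true convolution, which at most flips the sign of the gradients and does not affect the absolute-value or squared criteria used in the next step.

import cv2
import numpy as np

def bbox_gradients(image, bbox):
    # Crop the region image FA enclosed by the bounding box (x, y, w, h).
    x, y, w, h = bbox
    fa = image[y:y + h, x:x + w].astype(np.float32)
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], np.float32)   # transverse factor
    sy = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], np.float32)   # longitudinal factor
    gx = cv2.filter2D(fa, cv2.CV_32F, sx)   # transverse gradient at each pixel (x, y)
    gy = cv2.filter2D(fa, cv2.CV_32F, sy)   # longitudinal gradient at each pixel (x, y)
    return gx, gy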
S133, determining the edge of the focus area in the boundary box according to the transverse gradient and the longitudinal gradient.
Wherein the edge of the focal region in the bounding box of the OCT image of the eye is determined by the calculated transverse gradient and the longitudinal gradient.
Optionally, step S133 includes:
determining an edge of the focal region in the bounding box based on a sum of the absolute values of the transverse gradient and the longitudinal gradient; or
Averaging the absolute value of the transverse gradient and the absolute value of the longitudinal gradient, and determining the edge of the focal region in the bounding box based on the average; or
Obtaining a root mean square of the transverse gradient and the longitudinal gradient, and determining an edge of the focal region in the bounding box based on the root mean square; or
Or obtaining the sum of the squares of the transverse gradient and the longitudinal gradient, and determining the edge of the focal region in the bounding box based on the sum of squares.
As an example, by calculating the sum of the absolute value of the lateral gradient and the absolute value of the longitudinal gradient, the edge of the focal region in the bounding box of the ocular OCT image is determined based on the sum. When the arithmetic sum of the absolute values exceeds the first preset threshold SHR1, that is, | Gx | + | Gy | > SHR1, the pixel point (x, y) is an edge point.
As another example, by calculating an average of the absolute values of the transverse gradient and the longitudinal gradient, the edge of the focal region in the bounding box of the ocular OCT image is determined based on the average. When the average value exceeds the second preset threshold SHR2, i.e., (| Gx | + | Gy |)/2> SHR2, the pixel point (x, y) is an edge point.
As another example, by calculating the root mean square of the transverse and longitudinal gradients, the edges of the focal region in the bounding box of the eye OCT image are determined based on the root mean square. When the root mean square exceeds the third preset threshold SHR3, i.e., (Gx^2 + Gy^2)^(1/2) > SHR3, the pixel point (x, y) is an edge point.
As another example, by calculating the sum of squares of the transverse and longitudinal gradients, the edges of the focal region in the bounding box of the eye OCT image are determined based on the sum of squares. When the sum of squares exceeds the fourth preset threshold SHR4, i.e., Gx^2 + Gy^2 > SHR4, the pixel point (x, y) is an edge point.
It should be noted that the first preset threshold is a numerical value set for a sum of absolute values, the second preset threshold is a numerical value set for a mean of absolute values, the third preset threshold is a numerical value set for a root mean square, the fourth preset threshold is a numerical value set for a sum of squares, values of the four preset thresholds are empirical values, and the values can be set in the system in advance, can be adjusted by a user according to needs, and can also be set as default values of the system after being adjusted by the user.
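The four criteria can be summarized in the following sketch; the helper name and the default threshold values are illustrative assumptions and would in practice be tuned empirically as described above.

import numpy as np

def edge_mask(gx, gy, criterion="abs_sum",
              shr1=100.0, shr2=50.0, shr3=100.0, shr4=10000.0):
    # Return a boolean map marking pixels whose gradient response exceeds the threshold.
    if criterion == "abs_sum":      # |Gx| + |Gy| > SHR1
        return np.abs(gx) + np.abs(gy) > shr1
    if criterion == "abs_mean":     # (|Gx| + |Gy|) / 2 > SHR2
        return (np.abs(gx) + np.abs(gy)) / 2 > shr2
    if criterion == "rms":          # (Gx^2 + Gy^2)^(1/2) > SHR3
        return np.sqrt(gx ** 2 + gy ** 2) > shr3
    if criterion == "square_sum":   # Gx^2 + Gy^2 > SHR4
        return gx ** 2 + gy ** 2 > shr4
    raise ValueError("unknown criterion")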
S134, obtaining a segmentation result of the focus area based on the determined edge.
In step S133, the edge points of the focal region are determined, and the pixel-connected regions enclosed by the edge points constitute the segmentation result of the focal region. It should be noted that the segmentation result may include more than one pixel-connected region; the number of connected regions is determined by how many regions the detected edge points enclose. With continued reference to fig. 7, the segmentation result includes a plurality of pixel-connected regions.
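As a rough sketch of this final step (assuming OpenCV 4; the morphological closing used to bridge small gaps in the detected edges is an added assumption, not part of the described method):

import cv2
import numpy as np

def lesion_regions(edges):
    # Label the pixel-connected regions enclosed by the detected edge points.
    e = edges.astype(np.uint8) * 255
    e = cv2.morphologyEx(e, cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))
    # Fill the areas enclosed by the edge contours to obtain region masks.
    contours, _ = cv2.findContours(e, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(e)
    cv2.drawContours(mask, contours, -1, 255, thickness=cv2.FILLED)
    # Each connected component of the filled mask is one segmented lesion region.
    num_labels, labels = cv2.connectedComponents(mask)
    return num_labels - 1, labels   # region count (excluding background) and label map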
In the embodiments of the present application, the bounding box of the focal region in the eye OCT image is determined first, and edge extraction is then performed on the focal region within the bounding box to obtain the segmentation result of the focal region in the eye OCT image. On the one hand, this coarse-to-fine strategy of locating the focal region before refining its boundary improves the accuracy of the segmentation result; on the other hand, because the edge extraction is performed only on the image region within the bounding box, the segmentation efficiency is improved, the amount of data to be processed is reduced, and the occupation of system resources is reduced.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 9 shows a block diagram of a segmentation apparatus for a focal region of an OCT image of an eye according to an embodiment of the present application, which corresponds to the segmentation method for a focal region of an OCT image of an eye according to the above-mentioned embodiment, and only shows the relevant portions of the OCT image of an eye according to the embodiment of the present application for convenience of description.
Referring to fig. 9, the apparatus includes:
the acquisition module 91 is used for acquiring an eye OCT image to be segmented;
a detection module 92, configured to detect the eye OCT image and determine a bounding box of a lesion area in the eye OCT image;
an extracting module 93, configured to perform edge extraction on the focus region in the bounding box to obtain a segmentation result of the focus region.
Optionally, the detection module 92 is specifically configured to:
extracting the features of the eye OCT image to obtain a plurality of feature maps with different scales;
fusing the feature maps with different scales based on an attention mechanism to obtain a fusion result;
and performing region extraction on the fusion result, and determining a boundary frame of a focus region in the eye OCT image.
Optionally, the feature extraction is performed on the eye OCT image to obtain a plurality of feature maps with different scales; fusing the feature maps of a plurality of different scales based on an attention mechanism to obtain a fusion result, wherein the fusion result comprises:
extracting the features of the eye OCT image by using a feature extraction network to obtain a plurality of feature maps with different scales of the eye OCT image;
and inputting each of the feature maps of different scales into an attention module respectively, and concatenating the outputs of all the attention modules to obtain the fusion result.
Optionally, the feature extraction network includes multiple cascaded downsampling and multiple cascaded upsampling, and a result obtained by the multiple upsampling is a plurality of feature maps with different scales.
Optionally, the feature extraction network includes 4 cascaded downsamplings and 4 cascaded upsamplings, where the 4 downsamplings are sequentially a first downsampling, a second downsampling, a third downsampling and a fourth downsampling; the 4 upsamplings are sequentially a first upsampling, a second upsampling, a third upsampling and a fourth upsampling; the result of the fourth downsampling is used as the input of the first upsampling, the result of the third downsampling and the result of the first upsampling are concatenated and used as the input of the second upsampling, the result of the second downsampling and the result of the second upsampling are concatenated and used as the input of the third upsampling, and the result of the first downsampling and the result of the third upsampling are concatenated and used as the input of the fourth upsampling; the result of the first upsampling, the result of the second upsampling, the result of the third upsampling and the result of the fourth upsampling are the feature maps of 4 different scales.
Optionally, the extracting module 93 is specifically configured to:
acquiring a transverse convolution factor and a longitudinal convolution factor;
performing convolution calculation on the region image surrounded by the bounding box by using the transverse convolution factor to obtain a transverse gradient; carrying out convolution calculation on the region image surrounded by the bounding box by utilizing the longitudinal convolution factor to obtain a longitudinal gradient;
determining an edge of the focal region in the bounding box according to the transverse gradient and the longitudinal gradient;
obtaining a segmentation result of the lesion region based on the determined edge.
Optionally, the determining the edge of the focal region in the bounding box according to the lateral gradient and the longitudinal gradient includes:
determining an edge of the focal region in the bounding box based on a sum of the absolute values of the transverse gradient and the longitudinal gradient; or
Averaging the absolute value of the transverse gradient and the absolute value of the longitudinal gradient, and determining the edge of the focal region in the bounding box based on the average; or
Obtaining a root mean square of the transverse gradient and the longitudinal gradient, and determining an edge of the focal region in the bounding box based on the root mean square; or
Or obtaining the sum of the squares of the transverse gradient and the longitudinal gradient, and determining the edge of the focal region in the bounding box based on the sum of squares.
It should be noted that, because the contents of information interaction, execution process, and the like between the modules/units are based on the same concept as that of the method embodiment of the present application, specific functions and technical effects thereof may be referred to specifically in the method embodiment section, and are not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Fig. 10 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 10, the terminal device 10 of this embodiment includes: at least one processor 100 (only one processor is shown in fig. 10), a memory 101, and a computer program 102 stored in the memory 101 and executable on the at least one processor 100, wherein the steps in the above-described method embodiments are implemented when the computer program 102 is executed by the processor 100. Such as step S110 through step S130 shown in fig. 1.
The terminal device may include, but is not limited to, a processor 100 and a memory 101. Those skilled in the art will appreciate that fig. 10 is merely an example of the terminal device 10 and does not constitute a limitation of the terminal device 10; it may include more or fewer components than shown, or combine some components, or use different components. For example, the terminal device may also include an input-output device, a network access device, a bus and the like.
The Processor 100 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The storage 101 may be an internal storage unit of the terminal device 10, such as a hard disk or a memory of the terminal device 10. The memory 101 may also be an external storage device of the terminal device 10, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 10. Further, the memory 101 may also include both an internal storage unit and an external storage device of the terminal device 10. The memory 101 is used for storing the computer program and other programs and data required by the terminal device 10. The memory 101 may also be used to temporarily store data that has been output or is to be output.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application provide a computer program product, which when running on a mobile terminal, enables the mobile terminal to implement the steps in the above method embodiments when executed.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing apparatus/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals or telecommunication signals.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed terminal device and method may be implemented in other ways. For example, the terminal device embodiments described above are merely illustrative. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A segmentation method for a focus region of an eye OCT image is characterized by comprising the following steps:
acquiring an eye OCT image to be segmented;
detecting the eye OCT image, and determining a boundary frame of a focus area in the eye OCT image;
and performing edge extraction on the focus region in the boundary frame to obtain a segmentation result of the focus region.
2. The segmentation method as set forth in claim 1, wherein the detecting the eye OCT image and determining the bounding box of the lesion region in the eye OCT image comprises:
extracting the features of the eye OCT image to obtain a plurality of feature maps with different scales;
fusing the feature maps with different scales based on an attention mechanism to obtain a fusion result;
and performing region extraction on the fusion result, and determining a boundary frame of a focus region in the eye OCT image.
3. The segmentation method according to claim 2, wherein the eye OCT image is subjected to feature extraction to obtain a plurality of feature maps of different scales; fusing the feature maps of a plurality of different scales based on an attention mechanism to obtain a fusion result, wherein the fusion result comprises:
extracting the features of the eye OCT image by using a feature extraction network to obtain a plurality of feature maps with different scales of the eye OCT image;
and inputting each of the feature maps of different scales into an attention module respectively, and concatenating the outputs of all the attention modules to obtain the fusion result.
4. The segmentation method according to claim 3, wherein the feature extraction network comprises a plurality of cascaded downsampling and a plurality of cascaded upsampling, and the result of the plurality of upsampling is a plurality of feature maps with different scales.
5. The segmentation method according to claim 4, wherein the feature extraction network comprises 4 cascaded downsamplings and 4 cascaded upsamplings, the 4 downsamplings being sequentially a first downsampling, a second downsampling, a third downsampling and a fourth downsampling; the 4 upsamplings being sequentially a first upsampling, a second upsampling, a third upsampling and a fourth upsampling; the result of the fourth downsampling is used as the input of the first upsampling, the result of the third downsampling and the result of the first upsampling are concatenated and used as the input of the second upsampling, the result of the second downsampling and the result of the second upsampling are concatenated and used as the input of the third upsampling, and the result of the first downsampling and the result of the third upsampling are concatenated and used as the input of the fourth upsampling; the result of the first upsampling, the result of the second upsampling, the result of the third upsampling and the result of the fourth upsampling are the feature maps of 4 different scales.
6. The segmentation method according to claim 2, wherein the performing edge extraction on the lesion region in the bounding box to obtain the segmentation result of the lesion region comprises:
acquiring a transverse convolution factor and a longitudinal convolution factor;
performing convolution calculation on the region image surrounded by the bounding box by using the transverse convolution factor to obtain a transverse gradient; carrying out convolution calculation on the region image surrounded by the bounding box by utilizing the longitudinal convolution factor to obtain a longitudinal gradient;
determining an edge of the focal region in the bounding box according to the transverse gradient and the longitudinal gradient;
obtaining a segmentation result of the lesion region based on the determined edge.
7. The segmentation method of claim 6, wherein said determining the edge of the focal region in the bounding box based on the transverse gradient and the longitudinal gradient comprises:
determining an edge of the focal region in the bounding box based on a sum of the absolute values of the transverse gradient and the longitudinal gradient; or
Averaging the absolute value of the transverse gradient and the absolute value of the longitudinal gradient, and determining the edge of the focal region in the bounding box based on the average; or
Obtaining a root mean square of the transverse gradient and the longitudinal gradient, and determining an edge of the focal region in the bounding box based on the root mean square; or
Or obtaining the sum of the squares of the transverse gradient and the longitudinal gradient, and determining the edge of the focal region in the bounding box based on the sum of squares.
8. A segmentation device for a lesion region of an eye OCT image is characterized by comprising:
the acquisition module is used for acquiring an eye OCT image to be segmented;
the detection module is used for detecting the eye OCT image and determining a boundary frame of a focus area in the eye OCT image;
and the extraction module is used for carrying out edge extraction on the focus region in the boundary frame to obtain a segmentation result of the focus region.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN201911043286.0A 2019-10-30 2019-10-30 Eye OCT image focus region segmentation method, device and terminal equipment Active CN110889826B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911043286.0A CN110889826B (en) 2019-10-30 2019-10-30 Eye OCT image focus region segmentation method, device and terminal equipment
PCT/CN2020/111734 WO2021082691A1 (en) 2019-10-30 2020-08-27 Segmentation method and apparatus for lesion area of eye oct image, and terminal device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911043286.0A CN110889826B (en) 2019-10-30 2019-10-30 Eye OCT image focus region segmentation method, device and terminal equipment

Publications (2)

Publication Number Publication Date
CN110889826A true CN110889826A (en) 2020-03-17
CN110889826B CN110889826B (en) 2024-04-19

Family

ID=69746571

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911043286.0A Active CN110889826B (en) 2019-10-30 2019-10-30 Eye OCT image focus region segmentation method, device and terminal equipment

Country Status (2)

Country Link
CN (1) CN110889826B (en)
WO (1) WO2021082691A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8873810B2 (en) * 2009-03-02 2014-10-28 Honeywell International Inc. Feature-based method and system for blur estimation in eye images
CN109493954B (en) * 2018-12-20 2021-10-19 广东工业大学 SD-OCT image retinopathy detection system based on category distinguishing and positioning
CN109829894B (en) * 2019-01-09 2022-04-26 平安科技(深圳)有限公司 Segmentation model training method, OCT image segmentation method, device, equipment and medium
CN110458883B (en) * 2019-03-07 2021-07-13 腾讯科技(深圳)有限公司 Medical image processing system, method, device and equipment
CN109919954B (en) * 2019-03-08 2021-06-15 广州视源电子科技股份有限公司 Target object identification method and device
CN110889826B (en) * 2019-10-30 2024-04-19 平安科技(深圳)有限公司 Eye OCT image focus region segmentation method, device and terminal equipment

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017104535A (en) * 2015-12-02 2017-06-15 株式会社ニデック Ophthalmologic information processing device and ophthalmologic information processing program
CN108229531A (en) * 2017-09-29 2018-06-29 北京市商汤科技开发有限公司 Object feature processing method, device, storage medium and electronic equipment
CN108198185A (en) * 2017-11-20 2018-06-22 海纳医信(北京)软件科技有限责任公司 Segmentation method and device for fundus lesion image, storage medium and processor
CN110148111A (en) * 2019-04-01 2019-08-20 江西比格威医疗科技有限公司 Automatic detection method for multiple retinal lesions in retinal OCT images
CN110120047A (en) * 2019-04-04 2019-08-13 平安科技(深圳)有限公司 Image Segmentation Model training method, image partition method, device, equipment and medium
CN110097559A (en) * 2019-04-29 2019-08-06 南京星程智能科技有限公司 Eye fundus image focal area mask method based on deep learning
CN110378877A (en) * 2019-06-24 2019-10-25 南京理工大学 SD-OCT image CNV lesion detection method based on deep convolutional network model

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021082691A1 (en) * 2019-10-30 2021-05-06 平安科技(深圳)有限公司 Segmentation method and apparatus for lesion area of eye oct image, and terminal device
WO2021159643A1 (en) * 2020-02-11 2021-08-19 平安科技(深圳)有限公司 Eye oct image-based optic cup and optic disc positioning point detection method and apparatus
CN112053348A (en) * 2020-09-03 2020-12-08 宁波市眼科医院 Fundus image processing system and method for cataract diagnosis
CN112365973A (en) * 2020-11-02 2021-02-12 太原理工大学 Pulmonary nodule auxiliary diagnosis system based on adversarial network and fast R-CNN
CN112365973B (en) * 2020-11-02 2022-04-19 太原理工大学 Pulmonary nodule auxiliary diagnosis system based on adversarial network and fast R-CNN
CN113140291B (en) * 2020-12-17 2022-05-10 慧影医疗科技(北京)股份有限公司 Image segmentation method and device, model training method and electronic equipment
CN113140291A (en) * 2020-12-17 2021-07-20 慧影医疗科技(北京)有限公司 Image segmentation method and device, model training method and electronic equipment
CN112734787A (en) * 2020-12-31 2021-04-30 山东大学 Ophthalmological SD-OCT high-reflection point segmentation method based on image decomposition and implementation system
CN112734787B (en) * 2020-12-31 2022-07-15 山东大学 Ophthalmological SD-OCT high-reflection point segmentation method based on image decomposition and implementation system
CN113158821A (en) * 2021-03-29 2021-07-23 中国科学院深圳先进技术研究院 Multimodal eye detection data processing method and device and terminal equipment
CN113158821B (en) * 2021-03-29 2024-04-12 中国科学院深圳先进技术研究院 Method and device for processing eye detection data based on multiple modes and terminal equipment
CN112906658A (en) * 2021-03-30 2021-06-04 航天时代飞鸿技术有限公司 Lightweight automatic detection method for ground target investigation by unmanned aerial vehicle
CN113520317A (en) * 2021-07-05 2021-10-22 汤姆飞思(香港)有限公司 OCT-based endometrial detection and analysis method, device, equipment and storage medium
CN113570556A (en) * 2021-07-08 2021-10-29 北京大学第三医院(北京大学第三临床医学院) Method and device for grading ocular staining images
CN115187579A (en) * 2022-08-11 2022-10-14 北京医准智能科技有限公司 Image category judgment method and device and electronic equipment
CN115187579B (en) * 2022-08-11 2023-05-02 北京医准智能科技有限公司 Image category judging method and device and electronic equipment

Also Published As

Publication number Publication date
WO2021082691A1 (en) 2021-05-06
CN110889826B (en) 2024-04-19

Similar Documents

Publication Publication Date Title
CN110889826B (en) Eye OCT image focus region segmentation method, device and terminal equipment
CN110120047B (en) Image segmentation model training method, image segmentation method, device, equipment and medium
CN110399929B (en) Fundus image classification method, fundus image classification apparatus, and computer-readable storage medium
CN110033456B (en) Medical image processing method, device, equipment and system
CN110874594B (en) Human body appearance damage detection method and related equipment based on semantic segmentation network
Sopharak et al. Simple hybrid method for fine microaneurysm detection from non-dilated diabetic retinopathy retinal images
CN112017185B (en) Focus segmentation method, device and storage medium
CN107665491A (en) Pathological image recognition method and system
CN108198185B (en) Segmentation method and device for fundus focus image, storage medium and processor
US11967181B2 (en) Method and device for retinal image recognition, electronic equipment, and storage medium
CN112508965A (en) Automatic contour line drawing system for normal organs in medical image
CN111860169B (en) Skin analysis method, device, storage medium and electronic equipment
Veiga et al. Quality evaluation of digital fundus images through combined measures
CN113643354B (en) Measuring device of vascular caliber based on fundus image with enhanced resolution
CN111178420A (en) Coronary segment labeling method and system on two-dimensional contrast image
Vij et al. A systematic review on diabetic retinopathy detection using deep learning techniques
CN111311565A (en) Eye OCT image-based detection method and device for positioning points of optic cups and optic discs
CN113158821B (en) Method and device for processing eye detection data based on multiple modes and terminal equipment
CN117274278B (en) Retina image focus part segmentation method and system based on simulated receptive field
CN112541900B (en) Detection method and device based on convolutional neural network, computer equipment and storage medium
Zhang et al. Reconnection of interrupted curvilinear structures via cortically inspired completion for ophthalmologic images
Jana et al. A semi-supervised approach for automatic detection and segmentation of optic disc from retinal fundus image
CN110598652B (en) Fundus data prediction method and device
JP5740403B2 (en) System and method for detecting retinal abnormalities
CN115829980A (en) Image recognition method, device, equipment and storage medium for fundus picture

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40023194
Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant