CN116486179A - Fundus hemorrhage focus extraction method and device - Google Patents

Fundus hemorrhage focus extraction method and device

Info

Publication number
CN116486179A
Authority
CN
China
Prior art keywords
image
focus
fundus hemorrhage
fundus
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310575688.5A
Other languages
Chinese (zh)
Inventor
王茜
董洲
凌赛广
柯鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yiwei Science And Technology Beijing Co ltd
Original Assignee
Yiwei Science And Technology Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yiwei Science And Technology Beijing Co ltd filed Critical Yiwei Science And Technology Beijing Co ltd
Priority to CN202310575688.5A priority Critical patent/CN116486179A/en
Publication of CN116486179A publication Critical patent/CN116486179A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30041 Eye; Retina; Ophthalmic
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30101 Blood vessel; Artery; Vein; Vascular
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Quality & Reliability (AREA)
  • Databases & Information Systems (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Multimedia (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The present disclosure provides a method and an apparatus for extracting a fundus hemorrhage focus, including: performing type recognition on a fundus hemorrhage focus in a target eye image through a category recognition model to obtain a type recognition result; when the type recognition result indicates that the fundus hemorrhage focus belongs to a first category, detecting the target eye image through a pre-trained fundus hemorrhage focus detection model to obtain a first initial image of the fundus hemorrhage focus area; segmenting the first initial image with a pre-trained focus segmentation model to obtain a first image of the fundus hemorrhage focus; extracting the fundus hemorrhage focus area from the target eye image based on a preset image processing algorithm to obtain a second image of the fundus hemorrhage focus; and performing intersection processing on the first initial image, the first image and the second image to obtain a fundus hemorrhage focus segmentation result corresponding to the target eye image. The accuracy of the fundus hemorrhage focus segmentation result is thereby improved.

Description

Fundus hemorrhage focus extraction method and device
Technical Field
The disclosure relates to the technical field of computers, in particular to a method and a device for extracting fundus hemorrhage focus.
Background
In the related art, hypertension, diabetes, vascular occlusion, trauma and the like may damage the eye and cause various ocular lesions, such as fundus hemorrhage. Fundus hemorrhage itself takes many forms, for example: hemorrhage due to diabetic retinopathy, hemorrhage due to hypertension, venous occlusion fundus hemorrhage, vitreous hemorrhage, and optic disc edema hemorrhage. In the prior art, however, identifying and classifying fundus hemorrhage focuses usually relies on manual judgment by doctors and other personnel, so the workload on doctors is heavy, classifying and identifying the focus is difficult, and misidentification or misclassification caused by human factors occurs easily.
The information disclosed in the background section of this application is only for enhancement of understanding of the general background of this application and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
Disclosure of Invention
The embodiment of the disclosure provides a method and a device for extracting a fundus hemorrhage focus.
In a first aspect of embodiments of the present disclosure, there is provided a method for extracting a fundus hemorrhage focus, including: performing type recognition on a fundus hemorrhage focus in a target eye image through a type recognition model to obtain a type recognition result; when the type recognition result indicates that the fundus hemorrhage focus belongs to a first category, detecting the target eye image through a pre-trained fundus hemorrhage focus detection model to obtain a first initial image of the fundus hemorrhage focus area; segmenting the first initial image through a pre-trained focus segmentation model to obtain a first image of the fundus hemorrhage focus; extracting the fundus hemorrhage focus area from the target eye image based on a preset image processing algorithm to obtain a second image of the fundus hemorrhage focus; and performing intersection processing on the first image and the second image to obtain a fundus hemorrhage focus segmentation result corresponding to the target eye image.
Optionally, the method further comprises: when the type recognition result indicates that the fundus hemorrhage focus belongs to a second category, detecting the target eye image through the pre-trained fundus hemorrhage focus detection model to obtain a second initial image of the fundus hemorrhage focus area, wherein the second initial image comprises fundus hemorrhage focus category information.
Optionally, before detecting the target eye image, the method further comprises: removing a background area of an eye image to be processed to obtain a first eye image; and performing normalization processing on the first eye image to obtain the target eye image.
Optionally, segmenting the fundus hemorrhage focus detection frame with a pre-trained focus segmentation model to obtain a first image of the fundus hemorrhage focus includes: expanding the region selected by the detection frame in the first initial image to obtain an expanded region; cropping an image block corresponding to the expanded region; and inputting the image block into the pre-trained focus segmentation model to obtain a first image of the fundus hemorrhage focus corresponding to the image block.
Optionally, extracting the fundus hemorrhage focus area from the target eye image based on a preset image processing algorithm to obtain a second image of the fundus hemorrhage focus includes: preprocessing the target eye image, and segmenting from the preprocessed target eye image a target area whose brightness or saturation meets a preset threshold; removing the portion of the target area that falls within the optic disc area to obtain candidate bleeding areas; screening out candidate bleeding areas that meet a preset threshold based on at least one of the area, roundness or edge sharpness of the candidate bleeding areas; and determining the second image of the fundus hemorrhage focus with different processing algorithms according to the relation between the gray value of each candidate bleeding area and the average preset-color-channel gray value of the blood vessel area.
Optionally, determining the second image of the fundus hemorrhage focus with different processing algorithms includes: if the gray value of the candidate bleeding area is smaller than or equal to the average preset-color-channel gray value of the blood vessel area, judging whether the candidate bleeding area is connected with a blood vessel according to a first judgment condition to obtain a first judgment result, and judging, based on the first judgment result, whether the candidate bleeding area can serve as the second image of the fundus hemorrhage focus; and if the gray value of the candidate bleeding area is larger than the average preset-color-channel gray value of the blood vessel area, executing different preset rules, based on the gray value and/or the color space value of the blood vessel area, to determine the second image of the fundus hemorrhage focus.
Optionally, before segmenting the first initial image with the pre-trained focus segmentation model, the method further includes training the focus segmentation model, including: acquiring a first sample set, wherein each sample in the first sample set is a sample image of a fundus hemorrhage focus; determining, by label annotation, a sample label image corresponding to each sample in the first sample set; and inputting the sample image and the corresponding sample label image into a pre-established focus segmentation model to train the pre-established focus segmentation model.
Optionally, the method further comprises: boundary extraction is carried out on the segmentation result of the fundus hemorrhage focus to obtain the boundary of the fundus hemorrhage focus; and overlapping the boundary with the target eye image to obtain an overlapped fundus image of the fundus hemorrhage focus.
A second aspect of embodiments of the present disclosure provides a fundus hemorrhage focus extraction device, including: a type recognition module configured to perform type recognition on the fundus hemorrhage focus in the target eye image to obtain a type recognition result; a detection module configured to, when the type recognition result indicates that the fundus hemorrhage focus belongs to a first category, detect the target eye image through a pre-trained fundus hemorrhage focus detection model to obtain a first initial image of the fundus hemorrhage focus area; a first segmentation module configured to segment the first initial image through a pre-trained focus segmentation model to obtain a first image of the fundus hemorrhage focus; a second segmentation module configured to extract the fundus hemorrhage focus area from the target eye image based on a preset image processing algorithm to obtain a second image of the fundus hemorrhage focus; and a segmentation result determination module configured to perform intersection processing on the first initial image, the first image and the second image to obtain a fundus hemorrhage focus segmentation result corresponding to the target eye image.
A third aspect of the embodiments of the present disclosure provides a fundus hemorrhage focus extraction apparatus, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored in the memory to perform the above method.
In a fourth aspect of the disclosed embodiments, there is provided a computer readable storage medium having stored thereon computer program instructions, wherein the computer program instructions when executed by a processor implement the above-described method.
In embodiments of the method and the device for extracting a fundus hemorrhage focus, type recognition is performed on the fundus hemorrhage focus in a target eye image through a type recognition model to obtain a type recognition result; when the type recognition result indicates that the fundus hemorrhage focus belongs to a first category, the target eye image is detected through a pre-trained fundus hemorrhage focus detection model to obtain a first initial image of the fundus hemorrhage focus area; the first initial image is segmented through a pre-trained focus segmentation model to obtain a first image of the fundus hemorrhage focus; the fundus hemorrhage focus area is extracted from the target eye image based on a preset image processing algorithm to obtain a second image of the fundus hemorrhage focus; and intersection processing is performed on the first image and the second image to obtain a fundus hemorrhage focus segmentation result corresponding to the target eye image. By recognizing the type of the fundus hemorrhage focus in advance and then treating different types of hemorrhage focus differently, the segmentation precision of the hemorrhage focus is improved; and by using the second image obtained with a computer vision method to correct the first image obtained with the fundus hemorrhage focus segmentation model, the accuracy of the fundus hemorrhage focus segmentation result is improved. This alleviates the problems in the related art that identifying and classifying hemorrhage focuses usually depends on professionals such as doctors, which imposes a heavy workload on doctors, makes classifying and identifying the focus difficult, and easily leads to misidentification or misclassification caused by human factors.
Drawings
Fig. 1 exemplarily shows a flow diagram of a fundus hemorrhage focus extraction method according to an embodiment of the present disclosure;
fig. 2 schematically illustrates an eye image of an embodiment of the present disclosure;
fig. 3 schematically illustrates a fundus hemorrhage focus detection frame of an embodiment of the present disclosure;
fig. 4 schematically illustrates a segmentation boundary of an embodiment of the present disclosure;
fig. 5 schematically illustrates a segmentation result of an embodiment of the present disclosure;
fig. 6 exemplarily illustrates a block diagram of a fundus hemorrhage focus extraction device according to an embodiment of the present disclosure;
fig. 7 is a block diagram illustrating a fundus hemorrhage focus extraction apparatus according to an exemplary embodiment.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, not all embodiments. Based on the embodiments in this disclosure, all other embodiments that a person of ordinary skill in the art would obtain without making any inventive effort are within the scope of protection of this disclosure.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein.
It should be understood that, in various embodiments of the present disclosure, the size of the sequence numbers of the processes does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present disclosure.
It should be understood that in this disclosure, "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements that are expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in this disclosure, "plurality" means two or more. "and/or" is merely an association relationship describing an association object, and means that three relationships may exist, for example, and/or B may mean: a exists alone, A and B exist together, and B exists alone. The character "/" generally indicates that the context-dependent object is an "or" relationship. "comprising A, B and C", "comprising A, B, C" means that all three of A, B, C comprise, "comprising A, B or C" means that one of the three comprises A, B, C, and "comprising A, B and/or C" means that any 1 or any 2 or 3 of the three comprises A, B, C.
It should be understood that in this disclosure, "B corresponding to a", "a corresponding to B", or "B corresponding to a" means that B is associated with a from which B may be determined. Determining B from a does not mean determining B from a alone, but may also determine B from a and/or other information. The matching of A and B is that the similarity of A and B is larger than or equal to a preset threshold value.
As used herein, "if" may be interpreted as "when" or "once" or "in response to determining" or "in response to detecting", depending on the context.
The technical solution of the present disclosure is described in detail below with specific embodiments. The following embodiments may be combined with each other, and descriptions of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 1 exemplarily shows a flow diagram of a fundus hemorrhage focus extraction method according to an embodiment of the present disclosure, as shown in fig. 1, the method includes:
step S101, performing type recognition on fundus hemorrhage focus in the process of target eye image through a type recognition model to obtain a type recognition result.
In this embodiment, the fundus hemorrhage focus is first coarsely classified, exemplified here by two major categories: one covers small-area hemorrhage types such as diabetic retinopathy ("sugar net") hemorrhage, hypertensive hemorrhage and hemorrhage of unknown origin; the other covers large-area hemorrhage types such as venous occlusion hemorrhage, vitreous hemorrhage and optic disc edema hemorrhage.
For example, in medical examination of an eye, a target eye image may be acquired, e.g., by a fundus camera, fundus fluorescein angiography, or optical coherence tomography.
When performing type recognition, feature extraction may be performed through a deep learning neural network to obtain feature information of the fundus hemorrhage focus, and the fundus hemorrhage focus is then classified based on the feature information. The feature information may also be obtained in other ways, for example through edge detection, pixel detection and the like. Further, when recognizing the type of the fundus hemorrhage focus based on the feature information, the feature information may be input into a support vector machine model for processing, where the support vector machine model may be a model trained on labeled samples of multiple types of fundus hemorrhage focus; the trained support vector machine model can recognize the feature information of the fundus hemorrhage focus so as to determine the category information of the fundus hemorrhage focus.
In an example, the support vector machine model may determine category information of the fundus hemorrhage focus based on similarity between the feature information and boundaries and appearances of the above-described multiple types of fundus hemorrhage focuses.
In an example, regression analysis, a Bayesian model or the like may also be used to determine the category information; the present disclosure does not limit the specific manner of determining the category information.
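To make the classification step concrete, the following is a minimal sketch, assuming feature vectors have already been extracted (by a CNN backbone or by edge/pixel statistics, as described above) and that an RBF-kernel support vector machine is used; the function names and the binary 0/1 category encoding are illustrative assumptions, not the patent's exact model.

```python
import numpy as np
from sklearn.svm import SVC

def train_type_classifier(features: np.ndarray, labels: np.ndarray) -> SVC:
    """features: (n_samples, n_dims) lesion feature vectors; labels: 0/1 coarse category."""
    clf = SVC(kernel="rbf", probability=True)  # assumed kernel choice
    clf.fit(features, labels)
    return clf

def recognize_type(clf: SVC, feature_vec: np.ndarray) -> int:
    # 0: first category (small-area hemorrhage types, e.g. diabetic retinopathy);
    # 1: second category (large-area hemorrhage types, e.g. vitreous hemorrhage).
    return int(clf.predict(feature_vec.reshape(1, -1))[0])
```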
As an optional implementation manner of this embodiment, before the detecting the target eye image, the method further includes: removing a background area of an eye image to be processed to obtain a first eye image; and carrying out normalization processing on the first eye image to obtain the target eye image.
In this alternative implementation, referring to fig. 2, which schematically illustrates an eye image according to an embodiment of the disclosure, part of a background area may exist in the eye image to be processed due to factors such as the shape of the lens, for example the black background area in fig. 2. This area carries no information about the eye and may interfere with model computation. Moreover, since lenses differ in resolution, the sizes and resolutions of eye images to be processed may also differ. All of these factors may adversely affect the processing of the eye image to be processed as well as the identification and classification of lesions.
To reduce these adverse effects, the background area of the eye image to be processed can be removed, and the first eye image left after removal can be scaled to a preset size to obtain the target eye image. This processing reduces background interference, keeps the eye images processed by the models uniform in size, and improves the robustness of model processing and of focus recognition and classification.
In an example, an ROI (region of interest), i.e. a non-background region, may be determined in the eye image to be processed and cropped out to obtain the first eye image. The first eye image can then be normalized to obtain a target eye image of uniform size. Normalization may include, but is not limited to, translation, rotation and scaling; for example, scaling to 512×512 yields target eye images of uniform size.
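A minimal sketch of this preprocessing, assuming the black border can be separated from the fundus by a simple intensity threshold; the threshold value and function name are assumptions for illustration.

```python
import cv2
import numpy as np

def preprocess_eye_image(img_bgr: np.ndarray, size: int = 512) -> np.ndarray:
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    mask = gray > 10                     # assumed intensity threshold for the non-background ROI
    ys, xs = np.where(mask)
    # Crop the non-background region: this is the "first eye image".
    roi = img_bgr[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # Scale to a uniform preset size: this is the "target eye image".
    return cv2.resize(roi, (size, size))
```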
Step S102, when the type recognition result indicates that the fundus hemorrhage focus belongs to the first category, detecting the target eye image through a pre-trained fundus hemorrhage focus detection model to obtain a first initial image of the fundus hemorrhage focus area.
In this embodiment, if the fundus hemorrhage focus is of a small-area hemorrhage type such as diabetic retinopathy hemorrhage, hypertensive hemorrhage or hemorrhage of unknown origin, i.e. the first category, the target eye image may be detected by the pre-trained fundus hemorrhage focus detection model to obtain a first initial image, where the first initial image may include the fundus hemorrhage focus area selected by a detection frame. Illustratively, the pre-trained fundus hemorrhage focus detection model analyzes the target eye image to detect the position of each fundus hemorrhage focus, and generates, for each fundus hemorrhage focus, a fundus focus detection frame that carries a fundus hemorrhage focus type mark and frames the fundus hemorrhage focus area.
The first initial image of this embodiment is obtained by superimposing the fundus focus detection frames on the target eye image; each fundus focus detection frame carries the type information of the selected fundus hemorrhage focus. The fundus focus detection frame can be the minimum circumscribed rectangle of the fundus hemorrhage focus, or a detection frame of another shape, such as a circle or a triangle. Each detection-frame-selected area in the initial image may also correspond to a type of fundus hemorrhage lesion, such as diabetic retinopathy hemorrhage, hypertensive hemorrhage or unknown hemorrhage, which may be displayed when the mouse pointer is moved onto the detection frame. The fundus hemorrhage focus detection model can be a yolov5 detection network, through which the detection frames of the fundus hemorrhage focus and the focus label type corresponding to each detection frame can be extracted. The fundus hemorrhage focus detection model can also be another network structure, which is not limited herein.
Referring to fig. 3, which schematically illustrates fundus hemorrhage focus detection frames of an embodiment of the present disclosure: each rectangular frame is a fundus hemorrhage focus detection frame framing a fundus hemorrhage focus.
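As one illustration of how such a detector could be run, the sketch below assumes custom yolov5 weights loaded through torch.hub; the weight file name, the loading route and the use of the results.xyxy output are assumptions about a common yolov5 workflow, not details fixed by the patent.

```python
import torch

# Assumed weight file trained on fundus hemorrhage samples.
model = torch.hub.load("ultralytics/yolov5", "custom", path="hemorrhage_yolov5.pt")

def detect_lesions(target_eye_image):
    """target_eye_image: RGB array of the preprocessed 512x512 image."""
    results = model(target_eye_image)
    # Each row of results.xyxy[0]: x1, y1, x2, y2, confidence, class index
    # (the class index carries the lesion type label of the detection frame).
    return results.xyxy[0].cpu().numpy()
```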
As an optional implementation of this embodiment, the method further includes: when the type recognition result indicates that the fundus hemorrhage focus belongs to a second category, detecting the target eye image through the pre-trained fundus hemorrhage focus detection model to obtain a second initial image of the fundus hemorrhage focus area, wherein the second initial image comprises fundus hemorrhage focus category information.
In this optional implementation, if the fundus hemorrhage focus is of a large-area hemorrhage type such as venous occlusion hemorrhage, vitreous hemorrhage or optic disc edema hemorrhage, i.e. the second category, the target eye image is detected through the pre-trained fundus hemorrhage focus detection model to obtain a second initial image, which may include the fundus hemorrhage focus area selected by a detection frame. Focus extraction is not performed on the second initial image: on the one hand, fine extraction has little reference value for large-area hemorrhage focuses; on the other hand, this avoids problems such as wasted computing resources and large focus segmentation errors. The pre-trained fundus hemorrhage focus detection model here is the same as the aforementioned fundus hemorrhage focus detection model and will not be described in detail again.
Step S103, segmenting the first initial image through a pre-trained focus segmentation model to obtain a first image of the fundus hemorrhage focus.
In this embodiment, a focus segmentation model may be built and trained in advance. After the first initial image from step S102 is input, the trained focus segmentation model can segment the fundus hemorrhage focus area and output, for each detection frame, the fundus hemorrhage focus area in that frame, that is, a first image consisting of a first segmentation boundary and the area it surrounds.
Illustratively, the lesion segmentation model may be a deep learning neural network model, e.g., a U-NET type neural network model, and the present disclosure is not limited to the type and structure of the lesion segmentation model.
As an optional implementation of this embodiment, segmenting the fundus hemorrhage focus detection frame with a pre-trained focus segmentation model to obtain a first image of the fundus hemorrhage focus includes: expanding the region selected by the detection frame in the first initial image to obtain an expanded region; cropping an image block corresponding to the expanded region; and inputting the image block into the pre-trained focus segmentation model to obtain a first image of the fundus hemorrhage focus corresponding to the image block.
In this alternative implementation, the detection frame itself contains little information, and its edges may coincide with the edges of the fundus hemorrhage focus, which makes direct segmentation difficult. The detection frame can therefore be expanded, for example on all four sides (up, down, left and right), to obtain an expanded area, thereby enlarging the frame, increasing the amount of information and reducing the chance of coinciding with the focus edge. After the selected area is expanded, the image block of the fundus hemorrhage focus in the expanded area can be cropped out, and each image block is segmented by the focus segmentation model. Cropping the image blocks first and then precisely segmenting each block with the focus segmentation model improves the accuracy of fundus hemorrhage focus segmentation; after segmentation, the segmentation boundary in each image block and the area it surrounds are obtained. The set of segmentation boundaries over all image blocks is the first segmentation boundary.
This processing improves the segmentation accuracy of the focus segmentation model on the fundus hemorrhage focus, yielding a more accurate first segmentation boundary.
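A minimal sketch of the expansion-and-crop step, assuming an axis-aligned rectangular detection frame; the 20% expansion margin is an assumed value.

```python
import numpy as np

def crop_expanded_box(image: np.ndarray, box, margin: float = 0.2) -> np.ndarray:
    """box: (x1, y1, x2, y2) detection frame; margin: assumed expansion ratio."""
    x1, y1, x2, y2 = box
    h, w = image.shape[:2]
    dx, dy = (x2 - x1) * margin, (y2 - y1) * margin
    # Expand up, down, left and right, clipping to the image bounds.
    x1e, y1e = int(max(0, x1 - dx)), int(max(0, y1 - dy))
    x2e, y2e = int(min(w, x2 + dx)), int(min(h, y2 + dy))
    return image[y1e:y2e, x1e:x2e]  # image block fed to the segmentation model
```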
Step S104, extracting the fundus hemorrhage focus area from the target eye image based on a preset image processing algorithm to obtain a second image of the fundus hemorrhage focus.
In this embodiment, a focus segmentation model such as the Unet tends to extract a coarse segmentation boundary. To make the boundary of the extracted fundus hemorrhage focus finer, fine segmentation can be implemented as follows: performing boundary detection on the target eye image with other computer vision methods to obtain a second image, which includes a second segmentation boundary and the area it surrounds; and determining, in step S105, the final segmentation result, that is, the final fundus hemorrhage focus segmentation boundary and the area it surrounds, based on the second image obtained by boundary detection and the first image obtained by the focus segmentation model.
As an optional implementation manner of this embodiment, based on a preset image processing algorithm, the extracting the fundus hemorrhage focus area from the target eye image to obtain a second image of the fundus hemorrhage focus includes: preprocessing the target eye image, and dividing a target area with brightness or saturation meeting a preset threshold value from the preprocessed target eye image; removing the target area in the video disc area to obtain a candidate bleeding area; screening out candidate bleeding areas meeting a preset threshold based on at least one of the area, roundness or edge sharpness of the candidate bleeding areas; and determining a second image of the fundus hemorrhage focus by using different processing algorithms based on different magnitude relations between the gray values of the candidate hemorrhage areas and the gray values of the average preset color channels of the blood vessel areas.
In this optional implementation, the preprocessing may include segmenting, from the target eye image, a target area whose brightness meets a preset threshold, and may be implemented as follows: extracting the image of the ROI area from the target eye image; performing multi-scale enhancement on a preset channel image in the ROI image, preferably a preset color channel image. Illustratively, median filtering at different scales can be performed, the median-filtered images are subtracted from the preset color channel image to obtain enhancement images at several scales, and these are finally averaged to obtain the enhanced image. Dynamic threshold segmentation is then applied to the enhanced image, the dynamic threshold being related to the image blur, to obtain the target area (i.e. the darker areas of the image). The preset color channel is preferably the green channel.
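A hedged sketch of the multi-scale enhancement, assuming OpenCV median filtering on the green channel; the kernel sizes and the min-max rescaling at the end are assumptions. The dynamic threshold segmentation that follows is not shown.

```python
import cv2
import numpy as np

def enhance_green_channel(img_bgr: np.ndarray, scales=(15, 31, 63)) -> np.ndarray:
    """Multi-scale enhancement of the preset (green) color channel; kernel sizes assumed."""
    green = img_bgr[:, :, 1]
    # Subtract each median-filtered image from the channel, then average the responses.
    responses = [green.astype(np.float32) - cv2.medianBlur(green, k).astype(np.float32)
                 for k in scales]
    enhanced = np.mean(responses, axis=0)
    # Rescale to 8-bit for the subsequent dynamic threshold segmentation.
    return cv2.normalize(enhanced, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```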
Further, after the portion of the target area within the optic disc area is removed to obtain candidate bleeding areas, the candidate bleeding areas can be screened based on their attribute information.
Illustratively, screening may be based on the area of a candidate bleeding area, on its circularity (judging whether the circularity meets a preset circularity value), or on its edge sharpness (judging whether the edge sharpness meets a preset value). When screening by area, a candidate area whose area is larger than that of a circle whose radius is the vessel diameter may be kept as a final candidate bleeding area.
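An illustrative screening pass over a binary candidate mask, using area and circularity as described above; the circularity formula 4πA/P² and both thresholds are assumptions.

```python
import cv2
import numpy as np

def screen_candidates(binary_mask: np.ndarray, min_area: float,
                      min_circularity: float = 0.3):
    """binary_mask: candidate bleeding areas after optic-disc removal; thresholds assumed."""
    contours, _ = cv2.findContours(binary_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    kept = []
    for c in contours:
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, True)
        if perimeter == 0:
            continue
        circularity = 4 * np.pi * area / perimeter ** 2  # 1.0 for a perfect circle
        if area >= min_area and circularity >= min_circularity:
            kept.append(c)  # final candidate bleeding area
    return kept
```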
As an alternative implementation of this embodiment, determining the second image of the fundus hemorrhage focus with different processing algorithms includes: if the gray value of the candidate bleeding area is smaller than or equal to the average preset-color-channel gray value of the blood vessel area, judging whether the candidate bleeding area is connected with a blood vessel according to a first judgment condition to obtain a first judgment result, and judging, based on the first judgment result, whether the candidate bleeding area can serve as the second image of the fundus hemorrhage focus; and if the gray value of the candidate bleeding area is larger than the average preset-color-channel gray value of the blood vessel area, executing different preset rules, based on the gray value and/or the color space value of the blood vessel area, to determine the second image of the fundus hemorrhage focus.
In this optional implementation, the gray value of each final candidate bleeding area and the average preset-color-channel gray value of the blood vessel area are calculated, and the second image of the fundus hemorrhage focus is determined with different processing algorithms according to the relation between the two.
If the gray value of the final candidate bleeding area is smaller than or equal to the average preset-color-channel gray value of the blood vessel area, whether the candidate bleeding area is connected with a blood vessel is judged according to a first judgment condition to obtain a first judgment result, and whether the candidate bleeding area can serve as the second image of the fundus hemorrhage focus is judged based on that result. The first judgment condition may be whether the candidate bleeding area is far from or near a blood vessel: if the distance is relatively short (for example, smaller than a preset value), the first judgment result is that the candidate bleeding area is connected with the blood vessel; otherwise it is not connected. If it is not connected, the candidate bleeding area is determined as a fundus hemorrhage focus area, giving the second image of the fundus hemorrhage focus; if it is connected, it is further judged whether it is connected to an end of the blood vessel, and if so, the candidate bleeding area is likewise determined as a fundus hemorrhage focus area, giving the second image of the fundus hemorrhage focus.
If the gray value of the final candidate bleeding area is larger than the average preset-color-channel gray value of the blood vessel area, the second image of the fundus hemorrhage focus is determined by executing different preset rules based on the gray value and/or the color space value (including brightness and saturation) of the blood vessel area. When executing the different preset rules, the fundus hemorrhage focus can be determined from the relation between the gray value and/or color space value of the candidate bleeding area and those of the blood vessels around it, or from the relation between the gray value and/or color space value of the candidate bleeding area and the average preset-color-channel gray value and/or color space value of the blood vessel area. It should be appreciated that, when determining the magnitude relation, the gray value of the final candidate bleeding area is compared with the gray value of the blood vessel area, and/or the color space value of the final candidate bleeding area is compared with the color space value of the blood vessel area.
For example, when determining the fundus hemorrhage focus from the relation between the gray value and/or color space value of the final candidate bleeding area and those of the surrounding blood vessels, the gray value and/or color space value of the blood vessels around the candidate bleeding area can be calculated first. If the gray value and/or color space value of the final candidate bleeding area is smaller than that of its surrounding blood vessels, whether the candidate bleeding area is connected with a blood vessel is judged according to the first judgment condition (the same condition as above), giving a second judgment result. If the second judgment result indicates that the candidate bleeding area is not connected with a blood vessel, it is determined as a fundus hemorrhage focus area, giving the second image of the fundus hemorrhage focus; if it is connected, it is further judged whether it is connected to an end of the blood vessel, and if so, the final candidate bleeding area is determined as a fundus hemorrhage focus area, giving the second image of the fundus hemorrhage focus.
Illustratively, when determining the fundus hemorrhage focus from the relation between the gray value and/or color space value of the final candidate bleeding area and the average preset-color-channel gray value and/or color space value of the blood vessel area, the color space value of the blood vessels around the candidate bleeding area can be calculated. If the gray value and/or color space value of the final candidate bleeding area is smaller than, or not smaller than but close to, that of its surrounding blood vessels, whether the candidate bleeding area is connected with a blood vessel is judged according to the first judgment condition (the same as above, not repeated here), giving a second judgment result. If the second judgment result indicates that the candidate bleeding area is not connected with a blood vessel, the final candidate bleeding area is determined as a fundus hemorrhage focus area, giving the second image of the fundus hemorrhage focus; if it is connected, it is further judged whether it is connected to an end of the blood vessel, and if so, the final candidate bleeding area is determined as a fundus hemorrhage focus area, giving the second image of the fundus hemorrhage focus.
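A condensed sketch of the branching logic just described, assuming a distance threshold stands in for the first judgment condition; the brighter-than-vessel branch (the preset rules comparing against surrounding-vessel gray/color values) is only stubbed out.

```python
def is_bleeding_focus(cand_gray: float, vessel_gray: float,
                      dist_to_vessel: float, touches_vessel_end: bool,
                      dist_thresh: float = 5.0) -> bool:
    """vessel_gray: average preset-color-channel gray value of the blood vessel area."""
    # First judgment condition (assumed): connected if close enough to a vessel.
    connected = dist_to_vessel < dist_thresh
    if cand_gray <= vessel_gray:
        # Darker than (or as dark as) the vessels: keep if isolated from vessels,
        # or if connected only at one end of a vessel.
        return (not connected) or touches_vessel_end
    # Brighter than the vessel average: the preset rules comparing against
    # surrounding-vessel gray/color values would apply here (not shown).
    return False
```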
The second image includes the second segmentation boundary and the area it surrounds. In the above manner, all edges in the target eye image, i.e. the second segmentation boundary, are obtained in addition to the first segmentation boundary produced by the fundus hemorrhage focus segmentation model.
Step S105, performing intersection processing on the first initial image, the first image and the second image, to obtain a fundus hemorrhage focus segmentation result corresponding to the target eye image.
In this embodiment, the first image may contain segmentation errors, so it can be corrected with the second image. The second image obtained by the preset image processing algorithm (comprising the second segmentation boundary and the area it surrounds) is intersected with the first image obtained by the fundus hemorrhage focus segmentation model (comprising the first segmentation boundary and the area it surrounds) to obtain the final segmentation result. Further, because the first image segmented by the focus segmentation model is determined from the region framed by the expanded detection frame, to ensure that the final segmentation result covers only the fundus hemorrhage focus in the detection-frame-selected region of the first initial image, the first initial image and the first image can first be intersected to obtain a segmented image restricted to that region, which is then intersected with the second image to obtain the final, accurate segmentation result. The intersection of the first image and the second image may also be taken first and then intersected with the first initial image; the order of the intersection operations is not limited here, as long as the intersection of the three images is ultimately obtained. Intersecting the three yields the final fundus hemorrhage focus segmentation boundary and the image of the area it surrounds.
Illustratively, the intersection step automatically removes the erroneous portions of the first segmentation boundary, thereby reducing its error and yielding a more accurate segmentation boundary of the fundus hemorrhage focus.
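A minimal sketch of the intersection step: the three binary masks (detection-frame region, model segmentation and image-processing segmentation) are combined with a pixel-wise logical AND, which is order-independent, matching the remark above that the order of the intersection operations is not limited.

```python
import numpy as np

def intersect_masks(initial_mask: np.ndarray, first_mask: np.ndarray,
                    second_mask: np.ndarray) -> np.ndarray:
    """All three masks are binary arrays of the target eye image's size."""
    joint = initial_mask.astype(bool) & first_mask.astype(bool) & second_mask.astype(bool)
    return joint.astype(np.uint8)  # final fundus hemorrhage focus segmentation result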
Further, after the fundus hemorrhage focus segmentation result is obtained, the specific category of the hemorrhage focus can be determined based on positional feature information of the hemorrhage focus. The positional feature information may include the positional relationship between the segmented hemorrhage focus and other preset objects, which may include, but are not limited to, fundus veins, the central artery of the fundus, and the like. For example, if the positional feature information of a segmented hemorrhage focus indicates that it is close to a vein, the focus may further be determined to be a venous occlusion hemorrhage focus; if it indicates closeness to the central artery, the focus may further be determined to be a central artery occlusion hemorrhage focus.
As an optional implementation of this embodiment, before segmenting the initial image with the pre-trained focus segmentation model, the method further includes training the focus segmentation model, including: acquiring a first sample set, wherein each sample in the first sample set is a sample image of a fundus hemorrhage focus; determining, by label annotation, a sample label image corresponding to each sample in the first sample set; and inputting the sample image and the corresponding sample label image into a pre-established focus segmentation model to train the pre-established focus segmentation model.
In this optional implementation, images of fundus hemorrhage focuses can serve as sample images, and the corresponding label images (e.g. binary images comprising a background area and a fundus hemorrhage focus area) are determined by label annotation; the two are input into the focus segmentation model to complete the training of the model.
Further, since there is an error between the segmentation result of the focus segmentation model and the label image, the loss function of the fundus hemorrhage focus segmentation model is determined, during training, from the error between the segmentation result and the label image, for example from the positional error between the segmentation boundaries. The loss can then be back-propagated to adjust the parameters of the fundus focus segmentation model with gradient descent so that the loss decreases. The training step is iterated until a training condition is met, such as convergence of the loss function, reaching a set number of training iterations, or reaching a required accuracy on the validation set; the present disclosure does not limit the training conditions. Once the training condition is met, the trained fundus hemorrhage focus segmentation model is obtained.
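A hedged training-loop sketch, assuming a PyTorch U-Net-style model and binary cross-entropy as the loss between the predicted mask and the label image; the optimizer, learning rate and epoch count are assumptions, and the boundary-position loss variant mentioned above is not shown.

```python
import torch
import torch.nn as nn

def train_segmentation_model(model: nn.Module, loader,
                             epochs: int = 50, lr: float = 1e-4) -> nn.Module:
    """loader yields (sample_image, sample_label) pairs from the first sample set."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # assumed optimizer
    criterion = nn.BCEWithLogitsLoss()  # error between prediction and label image
    for _ in range(epochs):
        for sample_image, sample_label in loader:
            optimizer.zero_grad()
            loss = criterion(model(sample_image), sample_label)
            loss.backward()    # back-propagate the loss
            optimizer.step()   # gradient-descent parameter update
    return model
```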
As an optional implementation manner of this embodiment, the pre-established lesion segmentation model is a u-net network structure.
In this embodiment, recognizing the type of the fundus hemorrhage focus in advance and then treating different types of hemorrhage focus differently improves the segmentation precision of the hemorrhage focus; using the second image obtained by the computer vision method to correct the first image obtained by the fundus hemorrhage focus segmentation model improves the accuracy of the fundus hemorrhage focus segmentation result. Further, in the training of the focus segmentation model, boundary detection assists the annotators, which reduces the annotation workload, improves annotation quality, further improves the precision of the fundus hemorrhage focus segmentation model, and thus further improves the accuracy of the segmentation result.
Referring to fig. 4, which schematically illustrates a segmentation boundary of an embodiment of the present disclosure: as shown in fig. 4, the boundary line surrounding each fundus hemorrhage focus is the segmentation boundary.
According to an embodiment of the present disclosure, the segmentation result of the fundus hemorrhage focus may further be obtained from the segmentation boundary. In an example, the segmentation result may be a binary image corresponding to the target eye image, for example a binary image of the same size as the target eye image in which the pixel value of the region where the fundus hemorrhage focus is located (i.e. the region within the segmentation boundary) is 1 and the pixel value of the other regions is 0. The present disclosure does not limit the form of the segmentation result.
Fig. 5 exemplarily illustrates a segmentation result of an embodiment of the present disclosure. As illustrated in fig. 5, the segmentation result may be a binary map in which the pixel value of the area where the fundus hemorrhage focus is located is 1 and the pixel values of the other areas are 0.
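A sketch of the optional post-processing described earlier (boundary extraction followed by superposition on the target eye image), assuming the binary map of fig. 5 as input; the contour mode, color and line thickness are assumptions.

```python
import cv2
import numpy as np

def overlay_lesion_boundary(target_eye_image: np.ndarray,
                            seg_mask: np.ndarray) -> np.ndarray:
    """seg_mask: binary map with 1 inside the fundus hemorrhage focus."""
    contours, _ = cv2.findContours(seg_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    overlaid = target_eye_image.copy()
    cv2.drawContours(overlaid, contours, -1, color=(0, 255, 0), thickness=1)
    return overlaid  # superposed fundus image showing the lesion boundary
```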
In this embodiment, the fundus hemorrhage focus detection model may also be trained. The training samples may be images containing fundus hemorrhage focuses, and before training, the type of the fundus hemorrhage focus may be marked manually in advance. However, manually marking the boundaries of fundus hemorrhage focuses in eye image samples entails a large workload and a high marking error rate.
During training of the fundus hemorrhage focus detection model, to avoid the heavy workload and high error rate of labeling directly in the eye image samples, the following steps can be taken: performing boundary detection on the eye image sample to obtain a third segmentation boundary of the sample; and receiving a selection over the third segmentation boundary to obtain the annotation information corresponding to it. After boundary detection, a human annotator need only select the boundary of the fundus hemorrhage focus from the known third segmentation boundary, which avoids the workload and error rate of direct labeling and improves annotation accuracy.
Thus, in training the fundus hemorrhage focus detection model, boundary detection assists the annotators, reducing annotation workload and improving annotation quality, which further improves the precision of the fundus hemorrhage focus segmentation model and the accuracy of the segmentation result.
Further, the boundary detection may be performed in a manner similar to the luminance normalization and/or color normalization and threshold segmentation described above, which is not limited by the present disclosure.
Fig. 6 schematically illustrates a block diagram of a fundus hemorrhage focus extraction device according to an embodiment of the present disclosure. The device includes: a type recognition module configured to perform type recognition on the fundus hemorrhage focus in the target eye image to obtain a type recognition result; a detection module configured to, when the type recognition result indicates that the fundus hemorrhage focus belongs to a first category, detect the target eye image through a pre-trained fundus hemorrhage focus detection model to obtain a first initial image of the fundus hemorrhage focus area; a first segmentation module configured to segment the first initial image through a pre-trained focus segmentation model to obtain a first image of the fundus hemorrhage focus; a second segmentation module configured to extract the fundus hemorrhage focus area from the target eye image based on a preset image processing algorithm to obtain a second image of the fundus hemorrhage focus; and a segmentation result determination module configured to perform intersection processing on the first image and the second image to obtain a fundus hemorrhage focus segmentation result corresponding to the target eye image.
As an optional implementation of this embodiment, the device is further configured to: when the type recognition result indicates that the fundus hemorrhage focus belongs to a second category, detect the target eye image through the pre-trained fundus hemorrhage focus detection model to obtain a second initial image of the fundus hemorrhage focus area, the second initial image comprising fundus hemorrhage focus category information.
As an optional implementation of this embodiment, before detecting the target eye image, the device is further configured to: remove a background area of an eye image to be processed to obtain a first eye image; and perform normalization processing on the first eye image to obtain the target eye image.
As an optional implementation of this embodiment, segmenting the fundus hemorrhage focus detection frame with a pre-trained focus segmentation model to obtain a first image of the fundus hemorrhage focus includes: expanding the region selected by the detection frame in the first initial image to obtain an expanded region; cropping an image block corresponding to the expanded region; and inputting the image block into the pre-trained focus segmentation model to obtain a first image of the fundus hemorrhage focus corresponding to the image block.
As an optional implementation of this embodiment, extracting the fundus hemorrhage focus area from the target eye image based on a preset image processing algorithm to obtain a second image of the fundus hemorrhage focus includes: sequentially performing multi-scale filtering, brightness normalization and/or color normalization on the target eye image to obtain an enhanced image; and performing threshold segmentation on the enhanced image to obtain the second segmentation boundary.
As an optional implementation of this embodiment, before segmenting the initial image with the pre-trained focus segmentation model, the device further trains the focus segmentation model by: acquiring a first sample set, wherein each sample in the first sample set is a sample image of a fundus hemorrhage focus; determining, by label annotation, a sample label image corresponding to each sample in the first sample set; and inputting the sample image and the corresponding sample label image into a pre-established focus segmentation model to train the pre-established focus segmentation model.
As an optional implementation of this embodiment, the apparatus further includes: a boundary extraction module configured to perform boundary extraction on the fundus hemorrhage focus segmentation result to obtain the boundary of the fundus hemorrhage focus;
and an image processing module configured to overlay the boundary on the target eye image to obtain an overlaid fundus image of the fundus hemorrhage focus.
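These two modules could be sketched as follows with OpenCV; the green color and 2-pixel line width are illustrative defaults, not taken from the disclosure:

import cv2

def overlay_focus_boundary(fundus_bgr, segmentation_mask):
    # Boundary extraction: find the outer contours of the segmented
    # hemorrhage regions in the binary mask.
    contours, _ = cv2.findContours(segmentation_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Overlay the extracted boundary on a copy of the target eye image.
    overlaid = fundus_bgr.copy()
    cv2.drawContours(overlaid, contours, -1, (0, 255, 0), 2)
    return overlaid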
Fig. 7 is a block diagram illustrating a fundus hemorrhage focus extraction apparatus according to an exemplary embodiment. For example, the device 1600 may be provided as a terminal or a server. The device 1600 includes a processing component 1602 and memory resources, represented by a memory 1603, for storing instructions executable by the processing component 1602, such as application programs. The application programs stored in the memory 1603 may include one or more modules, each corresponding to a set of instructions. Further, the processing component 1602 is configured to execute the instructions to perform the methods described above.
The device 1600 may also include a power component 1606 configured to perform power management of the device 1600, a wired or wireless network interface 1605 configured to connect the device 1600 to a network, and an input/output (I/O) interface 1608. The device 1600 may operate based on an operating system stored in the memory 1603, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
The present invention may be a method, apparatus, system, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for performing various aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber optic cable), or an electrical signal transmitted through a wire.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards them for storage in a computer readable storage medium within the respective computing/processing device.
Computer program instructions for carrying out operations of the present invention may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA), may be personalized with state information of the computer readable program instructions, and this electronic circuitry may execute the computer readable program instructions, thereby implementing aspects of the present invention.
Various aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Note that all features disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is only one example of a generic set of equivalent or similar features. Where the terms "further", "preferably", "still further" or "more preferably" are used, the description that follows builds on the preceding embodiment, and the content after the term, combined with that embodiment, constitutes a complete further embodiment. Several such "further", "preferably", "still further" or "more preferably" refinements of the same embodiment may be combined arbitrarily.
It will be appreciated by persons skilled in the art that the embodiments of the invention described above and shown in the drawings are given by way of example only and are not limiting. The objects of the present invention have been fully and effectively achieved. The functional and structural principles of the present invention have been shown and described in the examples, and embodiments of the invention may be modified or practiced without departing from those principles.
Finally, it should be noted that the above embodiments are only intended to illustrate, not to limit, the technical solutions of the present disclosure. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features can be replaced by equivalents, and that such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present disclosure.

Claims (10)

1. A method for extracting a fundus hemorrhage focus, comprising:
performing type recognition on the fundus hemorrhage focus in the target eye image through a type recognition model to obtain a type recognition result;
under the condition that the type identification result shows that the fundus hemorrhage focus belongs to a first category, detecting the target eye image through a pre-trained fundus hemorrhage focus detection model to obtain a first initial image of a fundus hemorrhage focus area;
segmenting the first initial image through a pre-trained focus segmentation model to obtain a first image of the fundus hemorrhage focus;
extracting a fundus hemorrhage focus area from the target eye image based on a preset image processing algorithm to obtain a second image of the fundus hemorrhage focus;
and performing intersection processing on the first initial image, the first image, and the second image to obtain a fundus hemorrhage focus segmentation result corresponding to the target eye image.
2. The method of fundus hemorrhage focus extraction according to claim 1, wherein said method further comprises:
and under the condition that the type identification result shows that the fundus hemorrhage focus belongs to a second category, detecting the target eye image through a pre-trained fundus hemorrhage focus detection model to obtain a second initial image of a fundus hemorrhage focus area, wherein the second initial image comprises fundus hemorrhage focus category information.
3. The method of claim 1, wherein prior to detecting the target eye image, the method further comprises:
removing a background area of an eye image to be processed to obtain a first eye image;
and carrying out normalization processing on the first eye image to obtain the target eye image.
4. The method according to claim 1, wherein segmenting the first initial image through the pre-trained focus segmentation model to obtain the first image of the fundus hemorrhage focus comprises:
expanding the region selected by the detection frame in the first initial image to obtain an expanded region;
cropping the image block corresponding to the expanded region;
and inputting the image block into the pre-trained focus segmentation model to obtain the first image of the fundus hemorrhage focus corresponding to the image block.
5. The method according to claim 1, wherein extracting the fundus hemorrhage focus area from the target eye image based on a preset image processing algorithm, to obtain a second image of the fundus hemorrhage focus, comprises:
preprocessing the target eye image, and segmenting, from the preprocessed target eye image, a target area whose brightness or saturation meets a preset threshold;
removing the portion of the target area located in the optic disc area to obtain candidate bleeding areas;
selecting the candidate bleeding areas that meet a preset threshold based on at least one of the area, roundness, or edge sharpness of the candidate bleeding areas;
and determining the second image of the fundus hemorrhage focus using different processing algorithms, based on the magnitude relation between the gray value of the candidate bleeding area and the average gray value of a preset color channel of the blood vessel area.
6. The method of claim 5, wherein determining the second image of the fundus hemorrhage focus using different processing algorithms comprises:
if the gray value of the candidate bleeding area is less than or equal to the average gray value of the preset color channel of the blood vessel area, judging whether the candidate bleeding area is connected with a blood vessel according to a first judgment condition to obtain a first judgment result;
judging, based on the first judgment result, whether the candidate bleeding area can serve as the second image of the fundus hemorrhage focus;
and if the gray value of the candidate bleeding area is greater than the average gray value of the preset color channel of the blood vessel area, executing different preset rules based on the gray value and/or the color space value of the blood vessel area to determine the second image of the fundus hemorrhage focus.
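For illustration, the area- and roundness-based screening described in claim 5 could be realized as in the sketch below; the numeric thresholds are placeholders, and the gray-value branching of claim 6 and the edge-sharpness criterion are omitted:

import cv2
import numpy as np

def screen_candidate_areas(mask, min_area=20, max_area=5000, min_roundness=0.2):
    # Retain candidate bleeding areas whose area and roundness
    # (4*pi*area / perimeter^2, equal to 1.0 for a perfect circle)
    # satisfy the preset thresholds; return a mask of the kept regions.
    kept = np.zeros_like(mask)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        area = cv2.contourArea(contour)
        perimeter = cv2.arcLength(contour, True)
        if perimeter == 0:
            continue
        roundness = 4.0 * np.pi * area / (perimeter ** 2)
        if min_area <= area <= max_area and roundness >= min_roundness:
            cv2.drawContours(kept, [contour], -1, 255, -1)
    return kept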
7. The fundus hemorrhage focus extraction method according to claim 1, wherein before the first initial image is segmented by the pre-trained focus segmentation model, the method further comprises training the focus segmentation model, including:
acquiring a first sample set, wherein a sample in the first sample set is a sample image of a fundus hemorrhage focus;
determining a sample label image corresponding to each sample in the first sample set by label annotation;
and inputting the sample image and the sample label image corresponding to the sample into a pre-established focus segmentation model to train the pre-established focus segmentation model.
8. The method of fundus hemorrhage focus extraction according to claim 1, wherein said method further comprises:
performing boundary extraction on the fundus hemorrhage focus segmentation result to obtain the boundary of the fundus hemorrhage focus;
and overlapping the boundary with the target eye image to obtain an overlapped fundus image of the fundus hemorrhage focus.
9. A fundus hemorrhage focus extraction device, comprising:
the type identification module is configured to identify the type of the fundus hemorrhage focus in the target eye image to obtain a type identification result;
the detection module is configured to detect the target eye image through a pre-trained fundus hemorrhage focus detection model under the condition that the type identification result indicates that the fundus hemorrhage focus belongs to a first category, so as to obtain a first initial image of a fundus hemorrhage focus area;
the first segmentation module is configured to segment the first initial image through a pre-trained focus segmentation model to obtain a first image of a fundus hemorrhage focus;
the second segmentation module is configured to extract the fundus hemorrhage focus area from the target eye image based on a preset image processing algorithm to obtain the second image of the fundus hemorrhage focus;
the segmentation result determining module is configured to perform intersection processing on the first initial image, the first image and the second image to obtain a fundus hemorrhage focus segmentation result corresponding to the target eye image.
10. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the method of any of claims 1 to 7.
CN202310575688.5A 2023-05-19 2023-05-19 Fundus hemorrhage focus extraction method and device Pending CN116486179A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310575688.5A CN116486179A (en) 2023-05-19 2023-05-19 Fundus hemorrhage focus extraction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310575688.5A CN116486179A (en) 2023-05-19 2023-05-19 Fundus hemorrhage focus extraction method and device

Publications (1)

Publication Number Publication Date
CN116486179A true CN116486179A (en) 2023-07-25

Family

ID=87221519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310575688.5A Pending CN116486179A (en) 2023-05-19 2023-05-19 Fundus hemorrhage focus extraction method and device

Country Status (1)

Country Link
CN (1) CN116486179A (en)

Similar Documents

Publication Publication Date Title
US10966602B2 (en) Automatically detecting eye type in retinal fundus images
CN110120047B (en) Image segmentation model training method, image segmentation method, device, equipment and medium
Mvoulana et al. Fully automated method for glaucoma screening using robust optic nerve head detection and unsupervised segmentation based cup-to-disc ratio computation in retinal fundus images
Sopharak et al. Simple hybrid method for fine microaneurysm detection from non-dilated diabetic retinopathy retinal images
US11783488B2 (en) Method and device of extracting label in medical image
CN107564048B (en) Feature registration method based on bifurcation point
CN111899244B (en) Image segmentation method, network model training method, device and electronic equipment
CN110880177A (en) Image identification method and device
Wang et al. Segmenting retinal vessels with revised top-bottom-hat transformation and flattening of minimum circumscribed ellipse
Uribe-Valencia et al. Automated Optic Disc region location from fundus images: Using local multi-level thresholding, best channel selection, and an Intensity Profile Model
CN113344894A (en) Method and device for extracting characteristics of eyeground leopard streak spots and determining characteristic index
CN113469963B (en) Pulmonary artery image segmentation method and device
Lermé et al. A fully automatic method for segmenting retinal artery walls in adaptive optics images
KR102318194B1 (en) Device for predicting optic neuropathy and method for providing prediction result to optic neuropathy using fundus image
Jana et al. A semi-supervised approach for automatic detection and segmentation of optic disc from retinal fundus image
Zhou et al. Automatic fovea center localization in retinal images using saliency-guided object discovery and feature extraction
Nayak et al. Retinal blood vessel segmentation for diabetic retinopathy using multilayered thresholding
CN116486179A (en) Fundus hemorrhage focus extraction method and device
Kasurde et al. An automatic detection of proliferative diabetic retinopathy
Zulfikar et al. Android application: skin abnormality analysis based on edge detection technique
Khowaja et al. Supervised method for blood vessel segmentation from coronary angiogram images using 7-D feature vector
Irshad et al. Automatic optic disk segmentation in presence of disk blurring
Das et al. Entropy thresholding based microaneurysm detection in fundus images
CN116758334A (en) Fundus focus identification method and device
CN116797608A (en) Method and device for extracting fundus microaneurysm focus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination