CN110348541B - Method, device and equipment for classifying fundus blood vessel images and storage medium - Google Patents

Method, device and equipment for classifying fundus blood vessel images and storage medium

Info

Publication number
CN110348541B
CN110348541B (granted publication of application CN201910670551.1A)
Authority
CN
China
Prior art keywords
blood vessel
image
probability
pixel point
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910670551.1A
Other languages
Chinese (zh)
Other versions
CN110348541A (en
Inventor
余双
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Healthcare Shenzhen Co Ltd
Original Assignee
Tencent Healthcare Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Healthcare Shenzhen Co Ltd filed Critical Tencent Healthcare Shenzhen Co Ltd
Priority to CN201910670551.1A priority Critical patent/CN110348541B/en
Publication of CN110348541A publication Critical patent/CN110348541A/en
Application granted granted Critical
Publication of CN110348541B publication Critical patent/CN110348541B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20076Probabilistic image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method, an apparatus, a device, and a storage medium for classifying fundus blood vessel images. The method comprises: extracting an original feature map from the fundus blood vessel image; determining, based on the features of each pixel point in the original feature map, the probability value that each pixel point in the fundus blood vessel image belongs to a blood vessel, so as to form a blood vessel segmentation probability map of the fundus blood vessel image; assigning a corresponding weight to each pixel point based on the distribution of the probability values in the blood vessel segmentation probability map; fusing the weight of each pixel point in the fundus blood vessel image with the features of the corresponding pixel point in the original feature map to obtain a fused feature map; and determining, based on the features of each pixel point in the fused feature map, the probability values that each pixel point belongs to an artery and to a vein respectively, so as to form a blood vessel classification probability map of the fundus blood vessel image. The invention realizes blood vessel segmentation and blood vessel classification automatically and with high precision.

Description

Method, device and equipment for classifying fundus blood vessel images and storage medium
Technical Field
The present invention relates to medical image processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for classifying fundus blood vessel images.
Background
Segmenting blood vessels in an image (i.e., distinguishing blood vessels from the background) and classifying them (e.g., as arteries or veins) is of great importance in clinical diagnosis and treatment.
Taking fundus blood vessels as an example, the fundus is the only area of the human body where blood vessels can be directly observed without intervention, and many systemic and cardiovascular/cerebrovascular diseases alter the morphology of fundus vessels, affecting arteries and veins differently. For example, clinical studies confirm that a decreased fundus arteriovenous width ratio is associated with increased stroke risk, and narrowing of the fundus arteries is associated with the development of hypertension and diabetes.
The related art lacks an effective scheme that automates the two tasks of vessel segmentation and vessel classification together.
Disclosure of Invention
Embodiments of the present invention provide a method, an apparatus, a device, and a storage medium for classifying fundus blood vessel images, which realize blood vessel segmentation and blood vessel classification automatically and with high precision.
The technical scheme of the embodiment of the invention is realized as follows:
An embodiment of the invention provides a fundus blood vessel image classification method, comprising:
extracting an original feature map from the fundus blood vessel image;
determining, based on the features of each pixel point in the original feature map, the probability value that each pixel point in the fundus blood vessel image belongs to a blood vessel, so as to form a blood vessel segmentation probability map of the fundus blood vessel image;
assigning a corresponding weight to each pixel point based on the distribution of the probability values corresponding to the pixel points in the blood vessel segmentation probability map;
fusing the weight of each pixel point with the features of the corresponding pixel point in the original feature map to obtain a fused feature map;
determining, based on the features of each pixel point in the fused feature map, the probability value that each pixel point in the fundus blood vessel image belongs to an artery and the probability value that it belongs to a vein;
and forming a blood vessel classification probability map of the fundus blood vessel image based on the determined artery and vein probability values, wherein the blood vessel classification probability map comprises two channels: one channel represents the probability that each pixel point of the fundus blood vessel image is an artery pixel point, and the other represents the probability that it is a vein pixel point.
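The claimed steps can be sketched end-to-end in a few lines of NumPy. The linear 1×1 "heads", the Gaussian-shaped weighting function, and all shapes below are illustrative assumptions, not the patent's actual network:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def classify_fundus_vessels(features, w_seg, w_cls):
    """Minimal sketch of the claimed pipeline (hypothetical linear heads).

    features: (C, H, W) original feature map extracted from the image.
    w_seg:    (C,)      weights of a 1x1 segmentation head.
    w_cls:    (2, C)    weights of a 1x1 artery/vein classification head.
    """
    # 1) vessel-segmentation probability map: P(pixel belongs to a vessel)
    seg_prob = sigmoid(np.tensordot(w_seg, features, axes=(0, 0)))   # (H, W)
    # 2) per-pixel weight from the probability distribution: ambiguous
    #    probabilities (near 0.5 -- capillaries, vessel boundaries) get a
    #    larger weight than confident background/trunk-vessel pixels
    weight = 1.0 + np.exp(-((seg_prob - 0.5) ** 2) / 0.02)           # (H, W)
    # 3) fuse: broadcast the weight over every channel of the original
    #    feature map (point-wise multiplication, as in the claims)
    fused = features * weight[None, :, :]                            # (C, H, W)
    # 4) two-channel classification probability map:
    #    channel 0 = artery probability, channel 1 = vein probability
    cls_prob = sigmoid(np.tensordot(w_cls, fused, axes=(1, 0)))      # (2, H, W)
    return seg_prob, cls_prob

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4, 4))
seg, cls = classify_fundus_vessels(feats, rng.normal(size=8), rng.normal(size=(2, 8)))
```

The segmentation map has one value per pixel, while the classification map carries two channels per pixel, matching the claim's description of the two outputs.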
An embodiment of the present invention provides a fundus blood vessel image classification apparatus, comprising:
a feature extraction module, configured to extract an original feature map from the fundus blood vessel image;
an output module, configured to determine, based on the features of each pixel point in the original feature map, the probability value that each pixel point in the fundus blood vessel image belongs to a blood vessel, so as to form a blood vessel segmentation probability map of the fundus blood vessel image;
an activation module, configured to assign a corresponding weight to each pixel point based on the distribution of the probability values corresponding to the pixel points in the blood vessel segmentation probability map;
the output module is further configured to fuse the weight of each pixel point with the features of the corresponding pixel point in the original feature map to obtain a fused feature map;
the output module is further configured to determine, based on the features of each pixel point in the fused feature map, the probability values that each pixel point in the fundus blood vessel image belongs to an artery and to a vein respectively;
and to form a blood vessel classification probability map of the fundus blood vessel image based on those probability values, wherein the blood vessel classification probability map comprises two channels: one channel represents the probability that each pixel point of the fundus blood vessel image is an artery pixel point, and the other represents the probability that it is a vein pixel point.
In the above aspect, the fundus blood vessel image classification device further includes:
an expansion and compression module, configured to: expand the number of channels of the fundus blood vessel image; and compress the number of channels of the expanded fundus blood vessel image to fit the number of input channels for down-sampling the fundus blood vessel image.
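As a rough illustration, channel expansion and compression can each be viewed as a 1×1 convolution, i.e., a matrix multiply over the channel axis. The channel counts and random weights below are hypothetical, not taken from the patent:

```python
import numpy as np

def expand_compress(image, c_expanded=16, c_target=3, seed=0):
    """Sketch of the expansion/compression module. A 1x1 convolution is a
    matrix multiply over the channel axis; channel counts are assumptions."""
    rng = np.random.default_rng(seed)
    c_in, h, w = image.shape
    w_expand = rng.normal(size=(c_expanded, c_in))        # expands channels
    w_compress = rng.normal(size=(c_target, c_expanded))  # compresses to fit
                                                          # the down-sampling input
    expanded = np.tensordot(w_expand, image, axes=(1, 0))       # (c_expanded, h, w)
    compressed = np.tensordot(w_compress, expanded, axes=(1, 0))  # (c_target, h, w)
    return expanded, compressed

img = np.ones((3, 5, 5))          # a toy 3-channel fundus patch
exp_map, out = expand_compress(img)
```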
In the foregoing solution, the input module is further configured to:
acquiring an original blood vessel image, and extracting at least two fundus blood vessel images from the original blood vessel image in blocks (patches);
in the foregoing solution, the output module is further configured to:
stitching the blood vessel segmentation probability maps of the at least two fundus blood vessel images in their extraction order to obtain a blood vessel segmentation result of the original blood vessel image, and
stitching the classification results of the at least two fundus blood vessel images in their extraction order to obtain a blood vessel classification result of the original blood vessel image.
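The block-wise extraction and stitching described above can be sketched as follows for non-overlapping patches; the patch size and the assumption that the image dimensions divide evenly are illustrative, not fixed by the patent:

```python
import numpy as np

def extract_patches(image, patch):
    """Split an image (H, W) into non-overlapping patches in row-major
    (extraction) order; H and W are assumed divisible by `patch`."""
    h, w = image.shape
    return [image[i:i + patch, j:j + patch]
            for i in range(0, h, patch)
            for j in range(0, w, patch)]

def stitch_patches(patches, h, w, patch):
    """Re-assemble the per-patch results in the same extraction order."""
    out = np.zeros((h, w))
    idx = 0
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            out[i:i + patch, j:j + patch] = patches[idx]
            idx += 1
    return out

img = np.arange(16.0).reshape(4, 4)
patches = extract_patches(img, 2)        # 4 patches of shape (2, 2)
rebuilt = stitch_patches(patches, 4, 4, 2)
```

In practice the per-patch probability maps (not the raw patches) would be stitched, but the order-preserving bookkeeping is the same.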
In the foregoing solution, the output module is further configured to:
performing dimension reduction on the original feature map, and performing batch normalization on the dimension-reduced original feature map to obtain a normalized original feature map;
and mapping, through an activation function, the features of each pixel point in the normalized original feature map to the probability value that the corresponding pixel point in the fundus blood vessel image is a blood vessel pixel point.
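A minimal sketch of this head: a random 1×1 reduction weight for the dimension reduction, per-map standardization standing in for batch normalization, and a sigmoid as the activation function (all three choices are assumptions):

```python
import numpy as np

def segmentation_head(features, w_reduce, eps=1e-5):
    """Dimension reduction (1x1 conv), normalization, then a sigmoid that
    maps each pixel's feature to a vessel probability in (0, 1)."""
    reduced = np.tensordot(w_reduce, features, axes=(1, 0))[0]   # (H, W)
    # per-map standardization, standing in for batch normalization
    normed = (reduced - reduced.mean()) / np.sqrt(reduced.var() + eps)
    return 1.0 / (1.0 + np.exp(-normed))                         # probabilities

rng = np.random.default_rng(1)
f = rng.normal(size=(8, 6, 6))           # toy (C, H, W) feature map
prob = segmentation_head(f, rng.normal(size=(1, 8)))
```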
In the foregoing solution, the activating module is further configured to:
assigning, through an activation function, a first weight to probability values whose distribution corresponds to capillaries and vessel boundaries in the probability map;
assigning, through the activation function, a second weight to probability values whose distribution corresponds to arteries, veins, and the image background, the second weight being less than the first weight;
forming the activation weight map based on the probability values in the probability map and the correspondingly assigned weights.
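One plausible activation of this kind is a Gaussian bump centred at probability 0.5, so that ambiguous pixels (capillaries, vessel boundaries) receive a larger weight than confidently classified artery/vein/background pixels. The exact function and its width are assumptions, not taken from the patent:

```python
import numpy as np

def activation_weight(p, sigma=0.1):
    """Hypothetical activation: largest where the segmentation probability is
    most ambiguous (p near 0.5), smallest where it is confident (p near 0 or
    1). Returns the activation weight map for a probability map p."""
    return 1.0 + np.exp(-((p - 0.5) ** 2) / (2.0 * sigma ** 2))

# confident background (0.02), boundary (0.5), trunk vessel (0.95), capillary (0.55)
prob_map = np.array([[0.02, 0.5],
                     [0.95, 0.55]])
weights = activation_weight(prob_map)
```

With this choice, boundary and capillary pixels get weights near 2, while confident pixels stay near 1, i.e., the "second weight" is smaller than the "first weight" as claimed.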
In the foregoing solution, the output module is further configured to:
performing a point-wise multiplication of the weight of each pixel point with the features of the corresponding pixel point in the original feature map to obtain a point-multiplication result for each pixel point;
and combining the point-multiplication results of all pixel points to form the fused feature map.
In the foregoing solution, the output module is further configured to:
performing dimension reduction on the fused feature map, and performing batch normalization on the dimension-reduced fused feature map to obtain a normalized fused feature map;
and mapping, through an activation function, the features of each pixel point in the normalized fused feature map to the probability value that the corresponding pixel point in the fundus blood vessel image is an artery pixel point and the probability value that it is a vein pixel point.
In the above aspect, the fundus blood vessel image classification device further includes:
a side output layer, configured to obtain a down-sampling loss using the down-sampled features output when the fundus blood vessel image is down-sampled at multiple levels and the reference standard (ground truth) of those features;
a training module, configured to construct a loss function of the neural network model based on the down-sampling loss, the prediction losses of the blood vessel segmentation probability map and the blood vessel classification probability map, and the network weights of the neural network model used to predict the two probability maps; and to update the model parameters of the neural network model based on the loss function until the loss function converges.
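The training objective described here can be sketched as a sum of side-output (deep-supervision) losses, the two prediction losses, and an L2 term on the network weights. Binary cross-entropy and the equal weighting of the terms below are assumptions:

```python
import numpy as np

def bce(p, y, eps=1e-7):
    """Pixel-wise binary cross-entropy between a probability map p and its
    reference standard (ground truth) y."""
    p = np.clip(p, eps, 1 - eps)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

def total_loss(side_probs, side_gts, seg_prob, seg_gt, cls_prob, cls_gt,
               net_weights, lam=1e-4):
    """Sketch of the loss: side-output losses from each down-sampling level,
    plus segmentation and classification prediction losses, plus an L2
    penalty on the network weights. Weighting is a hypothetical choice."""
    side = sum(bce(p, y) for p, y in zip(side_probs, side_gts))
    pred = bce(seg_prob, seg_gt) + bce(cls_prob, cls_gt)
    l2 = lam * sum(float((w ** 2).sum()) for w in net_weights)
    return side + pred + l2

y = np.array([[1.0, 0.0], [0.0, 1.0]])
p_good = np.array([[0.9, 0.1], [0.1, 0.9]])   # close to the ground truth
p_bad = np.array([[0.5, 0.5], [0.5, 0.5]])    # uninformative prediction
loss_good = total_loss([p_good], [y], p_good, y, p_good, y, [np.ones(3)])
loss_bad = total_loss([p_bad], [y], p_bad, y, p_bad, y, [np.ones(3)])
```

Gradient descent on such a combined loss would drive the shared features to serve both the segmentation and the classification outputs at once, which is the point of the multitask design.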
The embodiment of the invention provides a fundus blood vessel image classification device, which comprises:
a memory for storing executable instructions;
and the processor is used for realizing the fundus blood vessel image classification method provided by the embodiment of the invention when the executable instructions stored in the memory are executed.
An embodiment of the present invention provides a storage medium storing executable instructions which, when executed by a processor, cause the processor to implement the fundus blood vessel image classification method provided by the embodiments of the present invention.
The embodiment of the invention has the following beneficial effects:
the results of the vessel segmentation and classification are output through two different probability maps, and the results of the vessel segmentation task are used for assisting classification, so that the classification robustness is ensured; and weights are pertinently distributed to the pixel points according to the probability values of the pixel points, so that the classification processing of the pixel points of the blood vessel types which are difficult to classify can be emphasized, and the classification precision is ensured.
Drawings
Fig. 1 is a schematic view of an application scenario of a fundus blood vessel image classification system 10 provided by an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a fundus blood vessel image classification apparatus 500 provided in an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a fundus blood vessel image classification apparatus 555 according to an embodiment of the present invention;
fig. 4A to 4D are schematic flowcharts of a fundus blood vessel image classification method according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating a neural network model for segmenting and classifying fundus images according to an embodiment of the present invention;
FIG. 6 is a schematic illustration of normalization and enhancement of an original fundus color photograph provided by an embodiment of the present invention;
FIG. 7A is a schematic structural diagram of an input module in a neural network model provided by an embodiment of the present invention;
FIG. 7B is a schematic structural diagram of a feature extraction module in a neural network model according to an embodiment of the present invention;
FIG. 7C is a schematic structural diagram of an output module in a neural network model according to an embodiment of the present invention;
fig. 8A to 8C are schematic structural diagrams of a side output layer according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of an activation module in a neural network model provided by an embodiment of the present invention;
fig. 10 is a schematic diagram of an activation weight map generated by an activation module according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present invention; all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing embodiments of the invention only and is not intended to be limiting of the invention.
Before the embodiments of the present invention are described in further detail, the terms used in the embodiments are explained; the following explanations apply throughout.
1) Vessel segmentation: distinguishing the blood vessels in the blood vessel image to be processed from the background. Vessel segmentation can be performed at the pixel level, i.e., determining for each pixel point in the image whether it belongs to a blood vessel (i.e., whether it is a vessel pixel point).
2) Vessel classification: classifying the blood vessels included in the blood vessel image to be processed, for example into arteries and veins. Vessel classification can be performed at the pixel level, i.e., determining for each pixel point whether it belongs to an artery or a vein.
3) Vessel segmentation probability map: probability values in one-to-one correspondence with the pixel points of the image to be processed, representing the likelihood that each pixel point belongs to a blood vessel (i.e., is a vessel pixel point).
4) Vessel classification probability map: probability values in one-to-one correspondence with the pixel points of the image to be processed, representing the likelihood that each pixel point belongs to a certain type of blood vessel (e.g., artery or vein).
5) Feature map: a representation whose resolution is smaller than or equal to that of the blood vessel image to be processed. A feature map may be extracted directly from the image, or extracted indirectly from another feature map of the image.
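With these definitions in hand, turning the two probability maps into discrete segmentation and classification results might look like the following; the 0.5 threshold and the artery/vein channel order are common conventions assumed here, not specified by the patent:

```python
import numpy as np

def to_masks(seg_prob, cls_prob, thresh=0.5):
    """Decode the probability maps: a binary vessel mask from the
    segmentation map, and an artery/vein label (argmax over the two
    channels) restricted to vessel pixels."""
    vessel = seg_prob >= thresh                       # (H, W) vessel mask
    artery_vs_vein = np.argmax(cls_prob, axis=0)      # 0 = artery, 1 = vein
    labels = np.where(vessel, artery_vs_vein + 1, 0)  # 0 bg, 1 artery, 2 vein
    return vessel, labels

seg = np.array([[0.9, 0.1],
                [0.8, 0.6]])
cls = np.array([[[0.7, 0.2], [0.1, 0.4]],   # channel 0: artery probability
                [[0.2, 0.3], [0.8, 0.9]]])  # channel 1: vein probability
mask, labels = to_masks(seg, cls)
```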
The following analyzes the schemes provided by the related art with respect to vessel segmentation and vessel classification.
In the related art, automated arteriovenous classification is generally divided into two successive stages: retinal vessel segmentation is performed in the first stage, and arteries and veins are classified on the basis of the segmented vessels in the second stage. The problem is that classification performance depends entirely on the segmentation result, so the approach lacks robustness. In graph-theory-based methods, defects of the first-stage segmentation, especially breakpoints and wrongly segmented vessels, are amplified in the second stage, further reducing classification accuracy. In feature-extraction-based methods, arteriovenous classification accuracy depends on the effectiveness of manually designed features, and the successive steps of vessel segmentation, graph-structure reconstruction (for graph-theory-based methods), feature extraction, and arteriovenous classification make such systems noticeably slow and complex to implement.
The related art also includes schemes that use a deep learning framework (e.g., various backbone networks) to classify pixel points into artery, vein, and background. However, the features learned by such frameworks are insufficiently discriminative, so vessel classification remains difficult, which in turn affects the accuracy of classification-based vessel segmentation, especially the segmentation of small vessels.
Therefore, the related art lacks an effective solution that achieves high precision on both tasks, vessel segmentation and arteriovenous classification, while automating them.
To address at least the above technical problems of the related art, embodiments of the present invention provide a fundus blood vessel image classification method, apparatus, device, and storage medium that realize vessel segmentation and vessel classification automatically and with high accuracy. Exemplary applications of the fundus blood vessel image classification device are described below. The device may be a server, for example one deployed in the cloud, providing a remote blood vessel image processing function (including vessel segmentation and vessel classification) based on blood vessel images submitted by users; it may be a disease diagnosis device, for example for one or more diseases (such as diabetes, retinopathy, and cardiovascular/cerebrovascular diseases), performing vessel segmentation and classification on blood vessel images of the eye or other lesion sites to assist diagnosis and treatment; it may even be a handheld terminal or the like.
By running the arteriovenous segmentation and classification scheme provided by the embodiments of the present invention, the fundus blood vessel image classification device can form a system for quantifying fundus artery and vein morphological parameters, or for quantifying arteriovenous characteristics and their changes. Such a system supports related research, such as clinical studies of fundus vessels in systemic diseases and of biomarkers of cardiovascular/cerebrovascular diseases, and can further be used to quantify and predict the progression of systemic diseases and to predict risk factors of cardiovascular/cerebrovascular diseases.
Of course, the fundus blood vessel image classification device can also be applied in a fundus screening system to judge whether the distribution of fundus blood vessels is normal and to assist the prediction and diagnosis of fundus diseases and systemic diseases (such as hypertension and hyperlipidemia).
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of a fundus blood vessel image classification system 10 according to an embodiment of the present invention. A terminal 200 may be located in various institutions with medical attributes (e.g., hospitals, medical research institutes) and may be used to acquire a fundus image (i.e., a blood vessel image to be processed) of a patient, either with the terminal 200 itself acting as an image acquisition device or via a separate image acquisition device 400.
In some embodiments, the terminal 200 locally performs the fundus blood vessel image classification method provided by the embodiments of the present invention to complete vessel segmentation and vessel classification of the fundus image, and outputs the segmentation and classification results graphically for doctors and researchers to study diagnosis, re-diagnosis, and treatment of diseases. For example, the morphology of different types of vessels can be determined from the segmentation and classification results of the fundus image, which can then assist in, or directly support, diagnosing whether the patient is at risk of cardiovascular/cerebrovascular disease or hypertensive retinopathy.
Alternatively, the terminal 200 can send the fundus image to the server 100 through the network 300 and invoke the remote diagnosis service provided by the server 100. The server 100 performs the multitask of vessel segmentation and vessel classification through the fundus blood vessel image classification method provided by the embodiments of the present invention, and returns the results to the terminal 200 for doctors and researchers to diagnose, re-diagnose, and research treatment methods.
The terminal 200 can display various intermediate results and final results of the blood vessel image processing, such as a fundus image, segmentation results and classification results of fundus blood vessels, and the like, in the graphical interface 210.
Continuing with the description of the structure of the fundus blood vessel image classification apparatus provided by the embodiment of the present invention, the fundus blood vessel image classification apparatus may be various terminals, such as a medical diagnosis apparatus, a computer, etc., or may be the server 100 shown in fig. 1.
Referring to fig. 2, fig. 2 is a schematic structural view of a fundus blood vessel image classification apparatus 500 provided by an embodiment of the present invention, the fundus blood vessel image classification apparatus 500 shown in fig. 2 including: at least one processor 510, memory 550, at least one network interface 520, and a user interface 530. The various components in the fundus blood vessel image classification apparatus 500 are coupled together by a bus system 540. It is understood that the bus system 540 is used to enable communications among the components. The bus system 540 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 540 in fig. 2.
The processor 510 may be an integrated circuit chip having signal processing capability, such as a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components; the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 530 includes one or more output devices 531 enabling presentation of media content, including one or more speakers and/or one or more visual display screens. The user interface 530 also includes one or more input devices 532, including user interface components to facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may comprise volatile memory or nonvolatile memory, and may also comprise both volatile and nonvolatile memory. The non-volatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 550 described in connection with embodiments of the invention is intended to comprise any suitable type of memory. Memory 550 optionally includes one or more storage devices physically located remote from processor 510.
In some embodiments, memory 550 can store data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 552 for reaching other computing devices via one or more (wired or wireless) network interfaces 520; exemplary network interfaces 520 include Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), etc.;
a display module 553 for enabling presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530;
an input processing module 554 to detect one or more user inputs or interactions from one of the one or more input devices 532 and to translate the detected inputs or interactions.
In some embodiments, the fundus blood vessel image classification apparatus provided by the embodiments of the present invention may be implemented by a combination of hardware and software. By way of example, the apparatus may be a processor in the form of a hardware decoding processor programmed to perform the fundus blood vessel image classification method provided by the embodiments of the present invention; for example, such a processor may employ one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), or other electronic components.
In other embodiments, the fundus blood vessel image classification apparatus provided by the embodiment of the present invention may be implemented in software. Fig. 2 shows a fundus blood vessel image classification apparatus 555 stored in the memory 550, which may be software in the form of a program, a plug-in, or the like, and includes a neural network model and a training module 5557 for training the neural network model; the neural network model implements the blood vessel segmentation and blood vessel classification functions of the fundus blood vessel image classification method provided by the embodiment of the present invention and includes a series of modules (described below), and the training module 5557 implements the training function for the neural network model.
In connection with the exemplary application and implementation of the terminal provided by the embodiment of the present invention, the fundus blood vessel image classification method provided by the embodiment of the present invention is described below. As can be understood from the foregoing, the method can be applied to various types of blood vessel image processing devices, such as a disease diagnosis device, a computer, a server, and the like.
Referring to fig. 3 and 4A, fig. 3 is a schematic structural diagram of a fundus blood vessel image classification apparatus 555 according to an embodiment of the present invention, which shows a processing flow of a series of modules in a neural network model to implement blood vessel segmentation and blood vessel classification, and fig. 4A is a schematic flow diagram of a fundus blood vessel image classification method according to an embodiment of the present invention, and the steps shown in fig. 4A will be described with reference to fig. 3.
In step 101, an original feature map is extracted from a blood vessel image to be processed.
In some embodiments, the original feature map is extracted using a U-shaped network (U-Net), and the feature extraction process includes two stages: down-sampling and up-sampling. The blood vessel image to be processed is down-sampled at multiple levels, with the resolution gradually decreasing. For example, a 16 × 16 blood vessel image (width × height of the resolution, in pixels) is down-sampled at 3 levels, and the resolutions of the resulting down-sampled feature maps are 16 × 16, 8 × 8, and 4 × 4 respectively, finally yielding a down-sampled feature map with a resolution smaller than that of the blood vessel image to be processed. The down-sampled feature map includes visual features of the blood vessel image to be processed, such as boundaries and colors, as well as abstract features (i.e., features that cannot be described visually).
Then, the down-sampled feature map is up-sampled at multiple levels; the resolutions of the up-sampled feature maps correspond to the down-sampled feature maps of the different levels and increase level by level, e.g. 4 × 4, 8 × 8, and 16 × 16. The up-sampling result of each level is concatenated with the down-sampled feature map of the same resolution and used as the input of the next level of up-sampling; for example, the 8 × 8 up-sampled feature map is concatenated with the 8 × 8 down-sampled feature map and then up-sampled to 16 × 16, producing an original feature map whose resolution matches that of the blood vessel image to be processed.
By fusing the features of different levels in the up-sampling process, the subsequent blood vessel segmentation effect can be more refined.
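The resolution flow and skip concatenation described above can be traced as pure shape bookkeeping. The sketch below is a hypothetical illustration only: it tracks how the width and height halve on the way down and double on the way up, and asserts that each up-sampling level lands on a resolution for which a matching down-sampled feature map exists; the patent's actual layer counts and convolution operations are not reproduced.

```python
def unet_shape_flow(width, height, levels=2):
    """Trace (width, height) through down-sampling, then up-sampling
    with a skip concatenation at each matching resolution."""
    down = [(width, height)]
    for _ in range(levels):
        width, height = width // 2, height // 2
        down.append((width, height))
    # Up-sampling: each result is concatenated with the same-resolution
    # down-sampled feature map before the next up-sampling level.
    up = []
    for target in reversed(down[:-1]):
        width, height = width * 2, height * 2
        assert (width, height) == target  # skip-connection resolutions match
        up.append((width, height))
    return down, up

down, up = unet_shape_flow(16, 16, levels=2)
# down-sampled resolutions: 16x16 -> 8x8 -> 4x4; up-sampled: 8x8 -> 16x16
```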
As an example, referring to fig. 3, down-sampling is performed by an encoder in the feature extraction module 5551 of the neural network model. The encoder may include multiple cascaded encoding layers that down-sample the blood vessel image to be processed in sequence, i.e., the down-sampling result of one encoding layer is input into the next encoding layer for further down-sampling until all encoding layers are traversed, and the final down-sampled feature map is output. Similarly, up-sampling is performed by a decoder in the feature extraction module 5551, which may include multiple cascaded decoding layers that up-sample the down-sampled feature map in sequence, i.e., the up-sampling result of one decoding layer is input into the next decoding layer for further up-sampling until all decoding layers are traversed; the last decoding layer outputs an up-sampled feature map whose resolution matches that of the blood vessel image to be processed, which serves as the original feature map.
Of course, the extraction of the original feature map from the blood vessel image to be processed is not limited to the U-shaped network; various other base networks may be substituted, such as a Fully Convolutional Network (FCN) and other networks.
In step 102, based on the features of each pixel point in the original feature map, a probability value that each pixel point in the blood vessel image to be processed belongs to a blood vessel is determined, forming a blood vessel segmentation probability map of the blood vessel image to be processed.
In some embodiments, the original feature map is subjected to dimension reduction processing, and the original feature map subjected to dimension reduction processing is subjected to batch processing to obtain a normalized original feature map; and mapping the characteristics of each pixel point in the normalized original characteristic diagram to be probability values that each pixel point in the blood vessel image to be processed is a blood vessel pixel point respectively through an activation function.
As an example, referring to fig. 3, the dimension reduction processing is performed by the convolution layer in the blood vessel segmentation branch of the output module, and the dimension-reduced original feature map is batch processed by the Batch Normalization (BN) layer in the blood vessel segmentation branch to obtain a normalized original feature map; the features of each pixel point in the normalized original feature map corresponding to the blood vessel image to be processed are then mapped, through a Rectified Linear Unit (ReLU) activation function in the blood vessel segmentation branch, into probability values that the pixel points are blood vessel pixel points. The probability values of all pixel points in the blood vessel image to be processed constitute the blood vessel segmentation probability map.
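The normalize-then-activate pipeline of the segmentation branch can be sketched in miniature. This is an illustrative stand-in, not the patent's implementation: a simple per-channel batch normalization is followed by a sigmoid as the probability mapping (the patent names ReLU in this branch, but any squashing activation that yields values in [0, 1] makes the per-pixel probability interpretation explicit).

```python
import math

def batch_norm(values, eps=1e-5):
    """Normalize a list of per-pixel feature values to zero mean, unit variance."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return [(v - mean) / math.sqrt(var + eps) for v in values]

def sigmoid(x):
    """Map a normalized feature to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

features = [0.2, 1.5, -0.7, 3.1]   # hypothetical per-pixel features after 1x1 conv
probs = [sigmoid(v) for v in batch_norm(features)]
# Each entry is the probability that the pixel is a blood vessel pixel.
```

Both steps are monotonic, so a pixel with a stronger vessel feature always receives a higher vessel probability.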
In step 103, based on the distribution of the probability values of the corresponding pixels in the blood vessel segmentation probability map, corresponding weights are assigned to the pixels.
In some embodiments, a first weight is assigned, through an activation function, to probability values whose distribution corresponds to capillaries and blood vessel boundaries in the blood vessel segmentation probability map; a second weight, smaller than the first weight, is assigned to probability values whose distribution corresponds to arteries, veins, and the image background; an activation weight map is formed based on the probability values in the blood vessel segmentation probability map and the correspondingly assigned weights.
As an example, referring to fig. 3, the activation module 5553 receives the blood vessel segmentation probability map output by the blood vessel segmentation branch of the output module 5552 and fits the probability values of the pixel points of the blood vessel image to be processed through its activation function. Probability values whose distribution corresponds to capillaries and blood vessel boundaries (i.e., values in the interval around the middle value 0.5) are assigned a higher weight (the first weight) than probability values in the regions of arteries, veins, and the image background (i.e., the intervals close to 0 and 1, which receive the second weight). It should be understood that the first weight and the second weight are assigned only to distinguish probability values of different distributions; each is a class of weights rather than a single specified value. For the pixel points of capillaries and blood vessel boundaries, the corresponding first weights may be the same or different.
By enhancing the weight of the probability value of the pixel points of the capillary vessel and the vessel boundary, the capillary vessel can be easily distinguished from the artery, the vein and the image background in the subsequent vessel classification process, so that the robustness of vessel classification is improved.
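One simple activation that behaves as described, peaking at the ambiguous middle value 0.5 and decaying toward the confident extremes 0 and 1, is a triangular function. The shape and the `low`/`high` bounds below are illustrative assumptions; the patent does not fix a specific formula.

```python
def activation_weight(p, low=0.6, high=1.0):
    """Assign a higher weight to ambiguous probabilities near 0.5
    (capillaries, vessel boundaries) and a lower weight to confident
    probabilities near 0 or 1 (background, artery/vein interiors).
    Triangular shape and bounds are illustrative, not from the patent."""
    return low + (high - low) * (1.0 - abs(2.0 * p - 1.0))

seg_prob = [0.02, 0.48, 0.55, 0.97]        # one row of a segmentation probability map
weights = [activation_weight(p) for p in seg_prob]
```

With these bounds, a background pixel at p = 0.02 gets a weight near 0.6 while a boundary pixel at p = 0.48 gets a weight near 1.0, matching the first-weight/second-weight relationship in the text.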
In step 104, the weight of each pixel point in the blood vessel image to be processed is fused with the feature of the corresponding pixel point in the original feature map to obtain a fused feature map.
In some embodiments, the weight of each pixel point corresponding to the blood vessel image to be processed in the activation weight map and the feature of the corresponding pixel point in the original feature map are subjected to point multiplication operation to obtain a point multiplication operation result of each pixel point; and combining the dot product operation results of all the pixel points to form a fusion characteristic graph.
For example, referring to fig. 3, after the output module 5552 forms an activation weight map by using an activation function to fit the probability value in the blood vessel segmentation probability map based on the blood vessel segmentation probability map output by the blood vessel segmentation branch, the weight of each pixel point in the activation weight map corresponding to the blood vessel image to be processed is subjected to point multiplication with the feature of the corresponding pixel point in the original feature map, that is, for each pixel point in the blood vessel image to be processed, the feature of the pixel point corresponding to the original feature map is subjected to multiplication with the weight corresponding to the activation weight map, and the multiplication results of each pixel point are combined to form a fused feature map.
By fusing the weights and the features of the pixel points, capillaries that would otherwise be only weakly segmented (their probability values in the segmentation probability map lie in the interval around the middle value 0.5) are activated by the activation module 5553 and fused by the output module 5552, and thus obtain a weight larger than their original weight in the segmentation probability map, while background and artery/vein pixel points obtain a weight smaller than that of the capillaries, thereby improving the subsequent classification accuracy.
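The point-multiplication fusion of step 104 is just a per-pixel product of the activation weight map and the original feature map. A minimal sketch on 2 × 2 toy maps (values are illustrative):

```python
def fuse(weight_map, feature_map):
    """Point-wise multiply the activation weight map with the original
    feature map, pixel by pixel, to obtain the fused feature map."""
    return [[w * f for w, f in zip(wrow, frow)]
            for wrow, frow in zip(weight_map, feature_map)]

weight_map = [[1.0, 0.6],    # e.g. a capillary pixel (high weight) and
              [0.9, 0.7]]    # background/artery pixels (lower weights)
feature_map = [[2.0, 2.0],
               [1.0, 4.0]]
fused = fuse(weight_map, feature_map)
```

Pixels with equal features but higher weights (here the first column) dominate the fused map, which is exactly what lets capillaries stand out in the subsequent classification branch.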
In step 105, based on the features of each pixel point in the fused feature map, determining probability values that each pixel point in the to-be-processed blood vessel image belongs to different types of blood vessels respectively, so as to form a blood vessel classification probability map of the to-be-processed blood vessel image.
In some embodiments, the fused feature map is subjected to dimension reduction processing, and the dimension-reduced fused feature map is batch processed to obtain a normalized fused feature map; the features of each pixel point in the normalized fused feature map are then mapped, through an activation function, into probability values that each pixel point in the blood vessel image to be processed belongs to the different types of blood vessels.
For example, referring to fig. 3, in the blood vessel classification branch in the output module 5552, the convolution layer performs dimension reduction processing on each pixel point in the fusion feature map corresponding to the blood vessel image to be processed, the BN layer performs batch processing on the fusion feature map after the dimension reduction processing to obtain a normalized fusion feature map, and the ReLU function maps the features of each pixel point in the normalized fusion feature map corresponding to the blood vessel image to be processed into the probabilities that the pixel points are the pixel points of different types of blood vessels one by one, so as to form a blood vessel classification probability map. Taking artery/vein classification as an example, the blood vessel classification probability map comprises 2 channels, wherein one channel represents the probability that each pixel point of the blood vessel image to be processed is an artery pixel point, and the other channel represents the probability that each pixel point of the blood vessel image to be processed is a vein pixel point.
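For the 2-channel artery/vein case, a common way to turn per-pixel channel scores into the two complementary probabilities is a softmax over the channels. This is an illustrative sketch, not the patent's exact branch (which describes a conv/BN/ReLU pipeline):

```python
import math

def softmax(logits):
    """Convert per-channel scores into probabilities that sum to 1."""
    m = max(logits)                         # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical per-pixel scores for the 2-channel classification map:
# channel 0 = artery, channel 1 = vein.
pixel_logits = [1.2, -0.3]
artery_p, vein_p = softmax(pixel_logits)
```

Each pixel of the classification probability map thus carries one probability per vessel type, and the two channels sum to 1 at every pixel.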
In some embodiments, when the resolution of the original color image acquired from the blood vessel region does not exceed the resolution acceptable to the neural network model, the original color image can be used directly as the blood vessel image to be processed for the blood vessel segmentation and blood vessel classification tasks.
In other embodiments, referring to fig. 4B (based on fig. 4A), which is a schematic flowchart of a fundus blood vessel image classification method provided by an embodiment of the present invention, when the resolution of the original color image acquired from the blood vessel region exceeds the input resolution supported by the neural network model, the original blood vessel image may be acquired in step 106 before step 101, and at least two blood vessel images to be processed are extracted from it in block form, so that the resolution of each extracted block does not exceed the input resolution supported by the neural network model. After the classification probability map and the segmentation probability map of each blood vessel image to be processed are obtained through steps 101 to 105, the blood vessel segmentation probability maps of the at least two blood vessel images to be processed are stitched in step 107 according to the extraction order to obtain the blood vessel segmentation result of the original blood vessel image, and the classification results of the at least two blood vessel images to be processed are stitched in step 108 according to the extraction order to obtain the blood vessel classification result of the original blood vessel image.
For example, referring to fig. 3, an original blood vessel image is acquired through the input module 5554, and at least two blood vessel images to be processed are extracted from the original blood vessel image in a block form; after the output module 5552 executes the blood vessel segmentation and blood vessel classification tasks on the blood vessel images to be processed corresponding to each block, according to the extraction sequence, the blood vessel segmentation probability maps of the blood vessel images to be processed corresponding to each block are spliced to form a blood vessel segmentation probability map of the original blood vessel image, and simultaneously, the blood vessel classification probability maps of the blood vessel images to be processed corresponding to each block are spliced to form a blood vessel classification probability map of the original blood vessel image.
By stitching the probability maps corresponding to the blocks, the method can adapt to blood vessel segmentation and classification tasks for images of any resolution acquired from the blood vessel region, and thus has good compatibility.
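The block extraction and stitching of steps 106-108 can be sketched as a round trip: split an oversized map into non-overlapping blocks in reading order, then reassemble the per-block results in the same extraction order. This toy version assumes the image dimensions are exact multiples of the block size; real tilers typically pad or overlap, which the patent does not detail.

```python
def split_blocks(image, block):
    """Extract non-overlapping block x block sub-images, row-major order.
    Assumes height and width are multiples of the block size."""
    h, w = len(image), len(image[0])
    return [[row[x:x + block] for row in image[y:y + block]]
            for y in range(0, h, block) for x in range(0, w, block)]

def stitch_blocks(blocks, width, block):
    """Reassemble blocks in extraction order into the full-size map."""
    per_row = width // block
    out = []
    for start in range(0, len(blocks), per_row):
        group = blocks[start:start + per_row]
        for r in range(block):
            out.append([v for b in group for v in b[r]])
    return out

image = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 "probability map"
blocks = split_blocks(image, 2)                             # four 2x2 blocks
restored = stitch_blocks(blocks, width=4, block=2)
```

Because the same extraction order is used for splitting and stitching, the round trip reproduces the original map exactly, which is what makes per-block prediction equivalent to full-image prediction at the layout level.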
In some embodiments, the type of vessel image to be processed may be an original color image acquired from a vessel region, such that the neural network model performs vessel segmentation and vessel classification based on a priori knowledge.
In other embodiments, multiple types of blood vessel images to be processed may be used, forming a multi-input blood vessel image to be processed. Referring to fig. 4C (based on fig. 4A), which is a schematic flowchart of a fundus blood vessel image classification method according to an embodiment of the present invention, before step 101, an original color image obtained by image acquisition of the blood vessel region may be acquired in step 109, and brightness normalization processing is performed on each pixel point in the original color image to obtain a brightness-normalized image; in step 110, the contrast between blood vessels and the background in the original color image is enhanced to obtain a blood vessel enhanced image, for example an image enhanced using a Gabor filter or a Linear Detector; in step 111, at least one of the original color image, the brightness-normalized image, and the blood vessel enhanced image is marked as an image to be processed. The blood vessel segmentation and blood vessel classification tasks are then performed through steps 101 to 105.
For example, referring to fig. 3, the blood vessel image to be processed is subjected to brightness normalization and blood vessel enhancement processing respectively through the input module 5554 of the neural network model, so as to form multi-input prior knowledge. By taking the original color image, the brightness normalization preprocessing and the blood vessel enhancement effect image as the prior knowledge of blood vessel classification and blood vessel segmentation, the generalization capability of the neural network model can be enhanced, and the sensitivity of the neural network model to the difference of the test set is reduced.
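A minimal brightness normalization for the input module's preprocessing could look like the following. The patent does not fix the exact method, so this global mean-scaling on intensities in [0, 1] is only an illustrative stand-in (practical pipelines often use per-region or histogram-based normalization instead).

```python
def normalize_brightness(gray, target_mean=0.5):
    """Scale pixel intensities (0..1) so the image mean approaches a target.
    Simple global normalization; illustrative only, not the patent's method."""
    n = len(gray) * len(gray[0])
    mean = sum(sum(row) for row in gray) / n
    scale = target_mean / mean if mean > 0 else 1.0
    return [[min(1.0, p * scale) for p in row] for row in gray]

gray = [[0.1, 0.2],    # a dim 2x2 image, mean 0.25
        [0.3, 0.4]]
norm = normalize_brightness(gray)   # intensities rescaled toward mean 0.5
```

Feeding the original image, this normalized image, and a vessel-enhanced image together gives the network three views of the same region, which is the multi-input prior knowledge described above.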
In some embodiments, referring to fig. 4D, based on fig. 4A, fig. 4D is a schematic flowchart of a fundus blood vessel image classification method provided by an embodiment of the present invention, before extracting an original feature map from a blood vessel image to be processed, the number of channels of the blood vessel image to be processed may also be expanded in step 112; the number of channels of the expanded vessel image to be processed is compressed in step 113 to adapt the number of input channels for down-sampling the vessel image to be processed.
For example, referring to fig. 3, the blood vessel image to be processed is 8 × 64 × 64 (number of channels × width × height) and is expanded by 2 cascaded expansion layers in the expansion compression module 5555, each expansion layer including a convolution layer (convolution kernel 2 × 2), a BN layer, and a ReLU activation function. The expanded blood vessel image to be processed is 32 × 64 × 64, and its channels are then compressed by the compression layer of the expansion compression module 5555, which includes a convolution layer (convolution kernel 7 × 7, stride 2), a BN layer, and a ReLU. The compressed blood vessel image to be processed is 3 × 64 × 64, consistent with the number of input channels of the encoder in the feature extraction module 5551.
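The channel bookkeeping of the expansion compression module can be traced in isolation. This sketch records only the (channels, width, height) shapes from the example above, treating the spatial size as unchanged (as the patent's stated 64 × 64 output implies); the convolutions themselves are not reproduced.

```python
def expand_compress_channels(shape, expanded=32, compressed=3):
    """Trace the channel dimension through the expansion layers (8 -> 32)
    and the compression layer (32 -> 3); spatial size kept as in the
    patent's example. Shape bookkeeping only, layers are not modeled."""
    c, w, h = shape
    after_expand = (expanded, w, h)     # after 2 cascaded expansion layers
    after_compress = (compressed, w, h)  # after the compression layer
    return after_expand, after_compress

exp_shape, comp_shape = expand_compress_channels((8, 64, 64))
# comp_shape matches the encoder's expected 3 input channels
```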
In some embodiments, for training the neural network model, the loss of each level when the blood vessel image to be processed is down-sampled at multiple levels is determined; a loss function of the neural network model is constructed based on the per-level losses, the prediction losses of the blood vessel segmentation probability map and the blood vessel classification probability map, and the network weights of the neural network model used to predict the two probability maps; and the neural network model is updated based on the loss function until the loss function converges.
For example, referring to fig. 3, the down-sampled feature map output by each encoding layer of the encoder in the feature extraction module 5551 is obtained by the side output layer 5556 and compared with the reference standard (GT, Ground-truth) of that encoding layer's down-sampled feature map; the difference is taken as the down-sampling loss.
The training module 5557 further determines the norm of the network weights of the neural network model and the prediction losses of the segmentation probability map and the classification probability map (i.e., the differences between these probability maps and their reference standards for the blood vessel images to be processed in the test set), weights the down-sampling loss, the prediction losses, and the norm of the network weights to obtain the loss function of the neural network model, and runs a back-propagation algorithm to update the neural network model layer by layer until the loss function converges.
Adding side output layers to the encoder part provides deep supervision for training the neural network model. The side output layers learn more semantic information in the shallow layers of the neural network model, helping to achieve better performance: the shallow encoder features, which have higher resolution but less semantic information, are better fused with the deep encoder features, which have more semantic information but lower resolution. This avoids the vanishing-gradient problem when training the neural network model with a back-propagation algorithm, helps the shallow layers extract more semantic features, and accelerates convergence.
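The total training objective described above can be written as a weighted sum of the per-level side-output losses, the prediction loss, and the norm of the network weights. The weighting coefficients below are illustrative assumptions; the patent says the terms are weighted but does not fix the coefficients.

```python
def total_loss(side_losses, prediction_loss, weight_norm,
               side_w=0.5, pred_w=1.0, reg_w=1e-4):
    """Weighted sum of per-level side-output (deep supervision) losses,
    the segmentation/classification prediction loss, and the norm of
    the network weights. Coefficients are illustrative, not from the patent."""
    return (side_w * sum(side_losses)
            + pred_w * prediction_loss
            + reg_w * weight_norm)

# Hypothetical values: one side loss per encoding layer, one joint
# prediction loss, and the L2 norm of all network weights.
loss = total_loss(side_losses=[0.2, 0.1, 0.05],
                  prediction_loss=0.4,
                  weight_norm=10.0)
```

Minimizing this scalar with back-propagation drives all three terms down together: the side losses supervise the shallow layers directly, while the norm term regularizes the weights.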
With reference to the fundus blood vessel image classification method and its exemplary application in the fundus blood vessel image classification device provided by the embodiment of the present invention, a scheme in which the modules in the fundus blood vessel image classification device 555 cooperate to implement blood vessel image processing is described below.
In the blood vessel segmentation task of the neural network model, the feature extraction module 5551 extracts an original feature map from a blood vessel image to be processed; the output module 5552 determines probability values of the blood vessels to which the pixel points belong in the blood vessel image to be processed based on the features of the pixel points in the original feature map to form a blood vessel segmentation probability map of the blood vessel image to be processed.
For example, the output module 5552 performs dimension reduction on the original feature map, and performs batch processing on the original feature map after the dimension reduction processing to obtain a normalized original feature map; and mapping the characteristics of each pixel point in the normalized original characteristic diagram to be probability values that each pixel point in the blood vessel image to be processed is a blood vessel pixel point respectively through an activation function.
In the blood vessel classification task of the neural network model, the activation module 5553 assigns corresponding weights to the respective pixel points based on the distribution of the probability values of the respective pixel points in the blood vessel segmentation probability map, for example: assigning a first weight to a probability value of the probability value distribution corresponding to the probability values of the capillary and blood vessel boundaries in the probability map by means of an activation function; assigning a second weight to the probability value of the probability value distribution in accordance with the probability values of the artery, the vein and the image background in the probability map by an activation function, wherein the second weight is smaller than the first weight; an activation weight map is formed based on the probability values in the probability map and the corresponding assigned weights.
The output module 5552 fuses the weight of each pixel point with the characteristics of the corresponding pixel point in the original characteristic diagram to obtain a fused characteristic diagram; for example: performing point multiplication operation processing on the weight of each pixel point in the blood vessel image to be processed and the characteristics of the corresponding pixel point in the original characteristic diagram to obtain a point multiplication operation result of each pixel point; and combining the dot product operation results of all the pixel points to form a fusion characteristic graph.
The output module 5552 determines probability values of different types of blood vessels in the blood vessel image to be processed based on the features of the pixel points in the fusion feature map to form a blood vessel classification probability map of the blood vessel image to be processed.
For example, the output module 5552 performs dimension reduction on the fused feature map, and performs batch processing on the fused feature map after the dimension reduction to obtain a normalized fused feature map; and correspondingly mapping the characteristics of each pixel point in the normalized fusion characteristic graph into probability values of different types of blood vessels of each pixel point in the blood vessel image to be processed through an activation function.
In some embodiments, the feature extraction module 5551 performs multiple levels of downsampling on the blood vessel image to be processed, so as to obtain a downsampled feature map smaller than the resolution of the blood vessel image to be processed; and performing up-sampling on the down-sampling feature map at multiple levels, and splicing the up-sampling result of each level with the down-sampling feature map with the same resolution to be used as the input of the up-sampling of the next level, so as to obtain the original feature map with the resolution consistent with that of the blood vessel image to be processed.
In some embodiments, the neural network model may also apply a multi-input scheme, and the input module 5554 acquires an original color image obtained by image acquisition of a blood vessel region; carrying out brightness normalization processing on each pixel point in the original color image to obtain a brightness normalized image; enhancing the contrast ratio of blood vessels and the background in the original color image to obtain a blood vessel enhanced image; and marking at least one of the original color image, the brightness normalized image and the blood vessel enhanced image as an image to be processed.
In some embodiments, the expansion compression module 5555 expands the number of channels of the blood vessel image to be processed; and compressing the number of the channels of the expanded blood vessel image to be processed so as to adapt to the number of input channels for down-sampling the blood vessel image to be processed.
In some embodiments, a scheme is provided in which an original color image acquired from the blood vessel region is divided into blocks and the predicted probability maps are then stitched: the input module 5554 acquires the original blood vessel image and extracts at least two blood vessel images to be processed from it in block form; the output module 5552 stitches the blood vessel segmentation probability maps of the at least two blood vessel images to be processed according to the extraction order to obtain the blood vessel segmentation result of the original blood vessel image, and stitches their classification results according to the extraction order to obtain the blood vessel classification result of the original blood vessel image.
In some embodiments, a solution for performing auxiliary supervision on a neural network model is provided, and the side output layer 5556 compares downsampling features output when the blood vessel image to be processed is downsampled in multiple levels with reference standards of the downsampling features, so as to obtain a downsampling loss; the training module 5557 constructs a loss function of the neural network model based on the loss of the downsampling, the predicted loss of the blood vessel segmentation probability map and the blood vessel classification probability map, and the network weight of the neural network model for predicting the blood vessel segmentation probability map and the blood vessel classification probability map; the neural network model is updated based on the loss function to converge the loss function.
Embodiments of the present invention also provide a storage medium having stored therein executable instructions that, when executed by a processor, will cause the processor to perform a fundus blood vessel image classification method provided by embodiments of the present invention, for example, a fundus blood vessel image classification method as shown in fig. 4A to 4D.
In some embodiments, the storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disc, or CD-ROM; or may be various devices including one of, or any combination of, the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
Next, taking a fundus image (i.e., a retina image) as an example, a processing scheme for segmentation and classification based on the neural network model provided by the embodiment of the present invention will be described.
As an exemplary application, the neural network model provided by the embodiment of the present invention may be applied to a quantification system using morphological parameters of the fundus artery and vein, or applied to a system quantifying characteristics and changes of the artery and vein, so as to provide support for relevant studies such as clinical studies on fundus blood vessels and systemic diseases and biomarkers of cardiovascular and cerebrovascular diseases, and further quantify and predict the progress of the systemic diseases and predict risk factors of the cardiovascular and cerebrovascular diseases by using the system.
As another exemplary application, the neural network model provided by the embodiment of the invention can also be applied to a fundus screening system to judge whether the distribution of blood vessels in the fundus is normal and to assist in the prediction and diagnosis of fundus diseases and systemic diseases (such as hypertension and hyperlipidemia). Of course, the neural network model provided by the embodiment of the invention can also be applied to segmentation and classification tasks in fundus images of other modalities or other types of medical images, or to segment fundus structures or lesions in the fundus image other than the arteries/veins.
Referring to fig. 5, fig. 5 is a schematic diagram of segmentation and classification processing of fundus images by the neural network model provided by the embodiment of the invention. In the stage (a), the original fundus color photograph is subjected to brightness normalization to obtain a brightness normalized image. In the stage (b), the original fundus color photograph is subjected to blood vessel enhancement processing to obtain a blood vessel enhanced image. Thus, images from 3 different sources are formed as a priori knowledge, namely: the method comprises the steps of original fundus color photography, a brightness normalization image of the original fundus color photography and a blood vessel enhancement image of the original fundus color photography.
As an example, referring to fig. 6, fig. 6 is a schematic diagram of normalization and enhancement processing performed on an original fundus color photograph, which shows a luminance normalized image (B) obtained by luminance normalization processing performed on an original fundus color photograph (a), an enhanced image (C) obtained by enhancement processing performed on an original fundus color photograph (a) through a multi-resolution Gabor filter, and an enhanced image (D) obtained by enhancement processing performed on an original fundus color photograph (a) through a linear detector.
Of course, it is not excluded to use the image obtained by processing the fundus image by other processing methods as a priori knowledge to be sent to the network, or to use the image of other vessel enhancement methods besides the multi-scale Gabor filter and the linear detector.
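As an illustrative sketch of the brightness normalization step described above, the following numpy-only code normalizes a channel by its local mean brightness. The function names, the box-filter stand-in for a proper Gaussian blur, and the radius are assumptions for illustration, not the method fixed by the embodiment.

```python
import numpy as np

def box_blur(img, radius=8):
    """Estimate the local mean brightness with a separable box filter."""
    out = img.astype(np.float64)
    k = 2 * radius + 1
    for _ in range(2):  # blur rows, transpose, blur again, transpose back
        padded = np.pad(out, ((radius, radius), (0, 0)), mode="edge")
        csum = np.vstack([np.zeros((1, padded.shape[1])),
                          np.cumsum(padded, axis=0)])
        out = ((csum[k:] - csum[:-k]) / k).T
    return out

def normalize_luminosity(channel, radius=8, eps=1e-6):
    """Divide each pixel by its local mean brightness, then rescale to [0, 1]."""
    background = box_blur(channel, radius)
    norm = channel / (background + eps)
    return (norm - norm.min()) / (norm.max() - norm.min() + eps)
```

A vessel-enhanced channel (multi-scale Gabor filtering or a line detector) would be produced analogously and stacked with the normalized channels as prior knowledge.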
In stage (c), image patches are extracted from the images from the three different sources, and in stage (d) the extracted image patches are sent to the neural network model for prediction. In stage (e), for each image patch the neural network model outputs prediction results of 3 channels (i.e., artery, vein, and all-vessel maps), namely probability maps respectively representing the probability that each pixel point in the fundus image belongs to an artery, a vein, and a blood vessel; after all image patches have been traversed, the processing enters stage (f). In stage (f), the prediction results of the same channel of each image patch are spliced according to the extraction order to obtain the final blood vessel segmentation result, artery classification result, and vein classification result, which correspond respectively to the blood vessel segmentation probability map, the artery classification probability map, and the vein classification probability map.
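The patch-extraction and splicing flow of stages (c), (e), and (f) can be sketched as follows; the patch size, stride, and the identity "prediction" used in the usage note are illustrative assumptions, not the embodiment's actual network.

```python
import numpy as np

def extract_patches(image, patch, stride):
    """Stage (c): slide a window over the image, recording top-left corners."""
    h, w = image.shape[:2]
    patches, coords = [], []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(image[y:y + patch, x:x + patch])
            coords.append((y, x))
    return patches, coords

def stitch_probability_map(pred_patches, coords, shape, patch):
    """Stage (f): splice per-patch probability maps back, averaging overlaps."""
    acc = np.zeros(shape, dtype=np.float64)
    cnt = np.zeros(shape, dtype=np.float64)
    for p, (y, x) in zip(pred_patches, coords):
        acc[y:y + patch, x:x + patch] += p
        cnt[y:y + patch, x:x + patch] += 1
    return acc / np.maximum(cnt, 1)
```

Each of the 3 output channels (artery, vein, all vessels) would be stitched independently in this way.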
With continuing reference to fig. 7A, fig. 7B, and fig. 7C, schematic structural diagrams of the neural network model provided by the embodiment of the present invention are shown, where fig. 7A shows an input module of the neural network model, fig. 7B shows a feature extraction module and a side output layer of the neural network model, and fig. 7C shows an output module of the neural network model.
First, the labeling method in fig. 7A, 7B, and 7C will be described as follows.
CONV 1: convolutional layer with a BN layer;
CONV 3: convolutional layer with a BN layer; the activation function is the rectified linear unit (ReLU), and the convolution kernel is 3 × 3;
CONV 7: convolutional layer with stride 2 and a BN layer; the activation function is ReLU, and the convolution kernel is 7 × 7;
MAX: max pooling layer (MaxPooling), pooling kernel 2 × 2;
UP: upsampling layer (UpSampling);
RES1, RES2, RES3: deep residual network (ResNet) blocks;
⊗: dot product operation;
⊕: concatenation (concat) operation;
Ds-1, Ds-2, Ds-3: deep supervision (auxiliary supervision) modules.
Referring to fig. 7A, the input module of the neural network model may be a multi-input module (MIs). Besides the original fundus color photograph, the input module may also derive other types of images from it, such as a brightness-normalized image obtained by performing brightness normalization on the fundus color photograph and a blood vessel enhanced image obtained by performing blood vessel enhancement on it; the blood vessel enhancement methods may include a multiresolution Gabor filter and a linear detector. The original fundus color photograph and the brightness-normalized image each have 3 channels (red, green, and blue), and the multiresolution Gabor filter and the linear detector each output a single-channel enhanced image, for a total of 8 channels from 3 sources.
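A minimal sketch of assembling the 8-channel multi-source input (3 + 3 + 1 + 1 channels); the function name and the channel-last layout are assumptions for illustration.

```python
import numpy as np

def build_multi_input(rgb, rgb_norm, gabor_enh, line_enh):
    """Stack the color photo (3 channels), its brightness-normalized version
    (3 channels), and the two single-channel enhancement images (1 + 1)
    into one 8-channel input."""
    assert rgb.shape[2] == 3 and rgb_norm.shape[2] == 3
    return np.concatenate(
        [rgb, rgb_norm, gabor_enh[..., None], line_enh[..., None]], axis=2)
```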
Under the condition of small data volume, the generalization capability of the neural network model can be enhanced by combining the multiple images as priori knowledge, and the sensitivity of the neural network model to the difference of the test set can be reduced.
Referring to fig. 7B, the neural network model extracts features in the section shown in fig. 7B using a U-type network comprising an encoder (also referred to as the downsampling section) and a decoder (also referred to as the upsampling section). The encoder uses a plurality of cascaded ResNet layers (RES1, RES2, and RES3): a shallow ResNet layer (e.g., RES1) can capture simple features of the image, such as boundaries and colors, while a deep ResNet layer (e.g., RES3) can capture more abstract features of the fundus image.
For the features extracted by the encoder, a plurality of cascaded decoding layers of the decoder sequentially carry out upsampling, each decoding layer carries out cascade splicing on the upsampling features from the previous layer and the features with the same resolution output by the encoding layer, and finally the features are restored to the resolution which is the same as the fundus image input into the neural network model.
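The upsample-then-concatenate decoding step can be sketched shape-wise as follows; the nearest-neighbour upsampling and the channel-first layout are illustrative assumptions (the embodiment does not specify the upsampling operator here).

```python
import numpy as np

def upsample2x(feat):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return feat.repeat(2, axis=1).repeat(2, axis=2)

def decode_step(up_input, skip):
    """One decoding layer: upsample, then concatenate the encoder feature
    of the same resolution along the channel axis."""
    up = upsample2x(up_input)
    assert up.shape[1:] == skip.shape[1:], "skip must match the upsampled resolution"
    return np.concatenate([up, skip], axis=0)
```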
In some embodiments, to accommodate multiple input channels (8 channels total), a convolutional layer is also provided as an extension layer before the ResNet layer, and one max pooling layer is inserted as a compression layer, expanding the input fundus image to a higher dimension by the extension layer, and compressing to a 3-channel image by the compression layer to match the standard input channels of the ResNet layer.
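Channel-wise, the expansion/compression idea can be illustrated with 1 × 1 convolutions (a per-pixel linear map over channels). The 64-channel intermediate width and the use of 1 × 1 convolutions for both steps are assumptions for illustration; the text itself pairs a convolutional expansion layer with a pooling compression layer.

```python
import numpy as np

def conv1x1(feat, weight):
    """A 1x1 convolution: per-pixel linear map (C_in, H, W) -> (C_out, H, W)."""
    return np.einsum('oc,chw->ohw', weight, feat)

rng = np.random.default_rng(0)
x = rng.random((8, 16, 16))          # 8-channel multi-source input
expand_w = rng.random((64, 8))       # expansion: 8 -> 64 channels
compress_w = rng.random((3, 64))     # compression: 64 -> 3 channels
y = conv1x1(conv1x1(x, expand_w), compress_w)  # matches ResNet's 3-channel input
```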
Referring to fig. 7C, the output module of the neural network model uses a multitask output module for efficient whole vessel segmentation and artery/vein classification simultaneously. In order to obtain more accurate artery/vein classification results, the neural network model needs to learn more distinguishing features between arteries and veins. However, the neural network model provided by the related art focuses only on the artery/vein classification, and thus, it may not be possible to segment the fine capillaries.
In some embodiments, in order for the neural network model provided by the embodiments of the present invention to learn more common features between arteries and veins, i.e. vessel features, the network end of the neural network model includes two parallel branches, wherein the vessel segmentation branch focuses on extracting common features between the arteries and veins and generating a probability map of vessel segmentation, while the vessel classification branch focuses on identifying artery/vein features, and the outputs of the two branches are combined and further used to generate the final result of artery/vein classification.
In some embodiments, in order to further utilize the result of the vessel segmentation to assist artery/vein classification, in particular capillary vessels, as shown in fig. 7C, an activation module applying an activation mechanism (ac) may be provided in the neural network model, and in fig. 7C, the activation module is shown embedded in the output module in order to represent the flow of the data stream. Referring to fig. 9, fig. 9 is a schematic structural diagram of an activation module in a neural network model provided in an embodiment of the present invention, which employs a specific algorithm of an activation mechanism as shown in the following formula (1).
(The original formula image is not reproduced; a form of formula (1) consistent with the weight interval described below is:)

w(p) = 1 + σ · (exp(−(p − 0.5)²) − exp(−1/4))  (1)

where p is the value of a pixel point in the blood vessel segmentation probability map, w(p) is the activation weight assigned to that pixel point, and σ is a scaling coefficient.
Of course, the embodiments of the present invention do not exclude the use of activation mechanisms based on other formulas to change the weights of different parts in the segmented object.
The activation mechanism provided by the embodiments of the present invention is based on an observation about the vessel segmentation probability map: capillary and vessel boundary pixels typically have values of about 0.5 in the vessel probability map, while coarse vessel and background pixels have values close to 1 or 0. In order to highlight the importance of the capillary vessels, a Gaussian function is adopted in the activation module to enhance the weight of pixel points with probability values of about 0.5; in addition, a bias term exp(−1/4) is introduced in the activation function to limit the weight to the interval [1, 1 + σ(1 − exp(−1/4))].
Referring to fig. 10, fig. 10 is a schematic diagram of the activation weight map generated by the activation module according to an embodiment of the present invention. Some tiny vein pixels and artery pixels segmented as blood vessels (probability value about 0.5) obtain a larger weight, close to the upper bound 1 + σ(1 − exp(−1/4)) (i.e., 2 − exp(−1/4) when σ = 1), after passing through the activation module, while background pixels and easily segmented artery/vein pixels obtain a weight close to 1.
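Assuming a Gaussian activation of the form w(p) = 1 + σ(exp(−(p − 0.5)²) − exp(−1/4)), which matches the weight interval stated above, the weighting and the point-wise fusion with the classification features can be sketched as:

```python
import numpy as np

def activation_weight(p, sigma=1.0):
    """Assumed Gaussian activation centred at 0.5: pixels with probability
    near 0.5 (capillaries, vessel boundaries) get the largest weight."""
    return 1.0 + sigma * (np.exp(-(p - 0.5) ** 2) - np.exp(-0.25))

def fuse(weight_map, features):
    """Point-wise multiply the (H, W) activation weight map into the
    (C, H, W) feature maps of the artery/vein classification branch."""
    return features * weight_map[None, :, :]
```

With σ = 1, a pixel at probability 0.5 receives weight 2 − exp(−1/4) ≈ 1.22, while pixels at probability 0 or 1 receive weight 1.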
In an output module of the neural network model, an activation weight graph and a feature graph (feature maps) of an artery/vein classification branch are fused in a point multiplication mode, and a probability graph of classification of two channels of an artery and a vein is output through an activation layer.
In the U-type network provided in the related art, shallow and deep features are combined by a connection (concat) operation. Since features from the shallow layers have higher resolution but less semantic information, while features from the deep layers have more semantic information but lower resolution, directly combining them may harm the classification effect because of this mismatch in spatial resolution and semantic content. Therefore, learning more semantic information in the shallow layers can help the U-type network achieve better performance and mitigate the vanishing-gradient problem, which otherwise weakens the back-propagation of the loss to the shallow layers.
Referring to fig. 8A to 8C, fig. 8A to 8C are schematic structural diagrams of the side output layers provided by the embodiment of the present invention. The side output layers, namely Ds-1, Ds-2, and Ds-3 shown in fig. 8A to 8C, may be provided in the encoder portion of the U-type network shown in fig. 7B; they apply deep supervision to the training of the neural network model, so that the shallow layers of the network extract more semantic features and convergence is accelerated.
The following describes a training scheme of a neural network model provided by an embodiment of the present invention. The neural network model is obtained by constructing a loss function and training by using a back propagation algorithm. The Loss function Loss of the neural network model consists of 3 parts, including the cross entropy Loss of the final classification prediction result, the cross entropy Loss of the auxiliary supervision layer and the regularization Loss of the network parameters, as shown in formula (2):
(The original formula image is not reproduced; a form of formula (2) consistent with the three components described above is:)

Loss = CE(output, GT) + Σ_i λ_i · CE(side_i, GT) + β · ‖θ‖  (2)

where λ_i and β are weighting coefficients for the auxiliary supervision losses and the regularization term, respectively.
where side_i represents the output of the i-th auxiliary supervision module, θ represents the parameter weights of the neural network model, and ‖θ‖ represents the norm of the network weights; CE is the cross-entropy loss, which measures the difference between two values; output is the prediction result output by the model, and GT represents the reference standard of the prediction result.
The cross entropy loss CE is shown in equation (3) below:
(The original formula image is not reproduced; a form of formula (3) consistent with the surrounding description is:)

CE(pred, target) = −Σ_c μ_c · [target_c · log(pred_c) + (1 − target_c) · log(1 − pred_c)]  (3)
where pred represents the predicted value of the classification result and target represents the label value of the classification result, and the weight μ_c assigned to each class c is: all blood vessels 3/7, arteries 2/7, veins 2/7. Of course, the embodiments of the present invention do not preclude the use of different values to vary the weights of the different classes.
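A numpy sketch of the described loss composition; the binary-cross-entropy form, the coefficient names lam and beta, and the squared L2 regularizer are assumptions for illustration (the text only states that the loss combines a final cross entropy, auxiliary-supervision cross entropies, and a regularization over the network weights).

```python
import numpy as np

MU = {"vessel": 3 / 7, "artery": 2 / 7, "vein": 2 / 7}  # class weights from the text

def weighted_bce(pred, target, mu, eps=1e-7):
    """Class-weighted binary cross entropy for one output channel."""
    pred = np.clip(pred, eps, 1 - eps)
    return -mu * np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def total_loss(final_ce, side_ces, theta, lam=1.0, beta=1e-4):
    """Final CE + weighted auxiliary CEs + L2 regularization of weights theta."""
    return final_ce + lam * sum(side_ces) + beta * np.sum(theta ** 2)
```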
The following description will be made in conjunction with experimental test data of a neural network model provided by an embodiment of the present invention to illustrate advantages in blood vessel segmentation and blood vessel classification.
Table 1 (the table image is not reproduced)
Table 1 shows the performance of the neural network model under combinations of different modules. It can be seen that when the neural network model adopts the multi-input module, the artery/vein classification result improves by 0.9%; when the neural network model adopts the multi-task output module (MTs, Multiple Tasks), the precision of both blood vessel segmentation and arteriovenous classification improves; and when the neural network model adopts the activation mechanism, the arteriovenous classification result improves by 1.7%.
The area under the ROC curve (AUC, Area Under the Curve of ROC) is computed from the ROC curve, which is built from four indices: the false positive rate, the true positive rate, the true negative rate, and the false negative rate.
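The accuracy, sensitivity, and specificity figures reported in tables 2 and 3 derive from the four confusion-matrix counts; a straightforward sketch:

```python
import numpy as np

def binary_metrics(pred, target):
    """Accuracy, sensitivity (true positive rate) and specificity (true
    negative rate) from binarized prediction and reference maps."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = int(np.sum(pred & target))    # true positives
    tn = int(np.sum(~pred & ~target))  # true negatives
    fp = int(np.sum(pred & ~target))   # false positives
    fn = int(np.sum(~pred & target))   # false negatives
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / max(tp + fn, 1),
        "specificity": tn / max(tn + fp, 1),
    }
```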
Method                        Accuracy   Specificity   Sensitivity   AUC
Liskowski et al.              0.9535     0.9807        0.7811        0.9790
Mo et al.                     0.9521     0.9780        0.7779        0.9782
Wu et al.                     0.9567     0.9819        0.7844        0.9807
Embodiment of the invention   0.9570     0.9811        0.7916        0.9810

Table 2
Method                                               Accuracy   Specificity   Sensitivity
Dashtbozorg et al.                                   0.874      0.90          0.84
Estrada et al.                                       0.935      0.93          0.941
Xu et al.                                            0.923      0.929         0.915
Zhao et al.                                          0.919      0.915         –
AlBadawi et al.                                      0.935      –             –
Embodiment of the invention (Ground-truth Vessels)   0.9246     0.9194        0.9205
Embodiment of the invention (Segmented Vessels)      0.9445     0.9332        0.9537

Table 3
Tables 2 and 3 compare the blood vessel segmentation and classification results of the neural network model provided by the embodiment of the present invention with those of the related art on the AV-DRIVE data set; the blood vessel classification and blood vessel segmentation method provided by the embodiment of the present invention achieves the best blood vessel segmentation and arteriovenous classification results on the AV-DRIVE public data set. The artery and vein classification indices of the methods listed in table 3 are based on artery/vein classification accuracy over segmented vessels (Segmented Vessels), while the indices in table 1 are based on artery/vein classification over reference-standard vessels (Ground-truth Vessels), which is a stricter criterion. When using the same criterion as the literature, the embodiment of the present invention can achieve an artery/vein classification accuracy of 94.45%.
In summary, the embodiment of the present invention provides a multi-task neural network model based on a spatial activation mechanism, which can implement parallel segmentation and classification of an artery, a vein and all blood vessels from end to end, and specifically includes the following beneficial effects:
1) the method combines the prior knowledge of traditional fundus image preprocessing and blood vessel segmentation, is fused into the input end of the neural network model, and improves the stability of the model and the performance on a test set through multi-channel input.
2) The multi-task output module is realized in the neural network model, the results of the blood vessel segmentation and classification are output in parallel, and the blood vessel classification can be assisted by the results of the blood vessel segmentation, so that the problem of low blood vessel segmentation precision by a deep learning method in the related art is solved.
3) A space activation mechanism is realized at the network output end of the neural network model, and the classified characteristic graphs are subjected to space weighting by using the blood vessel segmentation result, so that the weight of the small blood vessels is enhanced, and the precision of the classification of the small blood vessels is improved.
The above description is only an example of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present invention are included in the protection scope of the present invention.

Claims (14)

1. A fundus blood vessel image classification method is characterized by comprising the following steps:
extracting an original characteristic map from the fundus blood vessel image;
determining the probability value of each pixel point in the fundus blood vessel image belonging to a blood vessel based on the characteristics of each pixel in the original characteristic map so as to form a blood vessel segmentation probability map of the fundus blood vessel image;
assigning a first weight to a probability value of a probability value distribution corresponding to a probability value distribution of a capillary vessel and a vessel boundary in the vessel segmentation probability map;
assigning a second weight to a probability value of a probability value distribution in the vessel segmentation probability map that conforms to probability values of an artery, a vein and an image background, wherein the second weight is smaller than the first weight;
forming an activation weight map based on the probability values in the vessel segmentation probability map and the corresponding assigned weights;
fusing the weight of each pixel point in the activation weight graph corresponding to the original feature graph with the feature of the corresponding pixel point in the original feature graph to obtain a fused feature graph;
determining the probability value of each pixel point in the fundus blood vessel image belonging to an artery blood vessel and the probability value of each pixel point belonging to a vein blood vessel based on the characteristics of each pixel point in the fusion characteristic graph;
and forming a blood vessel classification probability map of the fundus blood vessel image based on the determined probability values belonging to the artery blood vessels and the determined probability values belonging to the vein blood vessels, wherein the blood vessel classification probability map comprises two channels, one channel represents the probability that each pixel point of the fundus blood vessel image is an artery pixel point, and the other channel represents the probability that each pixel point of the fundus blood vessel image is a vein pixel point.
2. The method according to claim 1, wherein the extracting of the original feature map from the fundus blood vessel image comprises:
performing down-sampling on the fundus blood vessel image at multiple levels to obtain a down-sampling characteristic map with a resolution smaller than that of the fundus blood vessel image;
and performing up-sampling on the down-sampling feature map in multiple levels, and splicing the up-sampling result of each level with the down-sampling feature map with the same resolution as the up-sampling input of the next level to obtain the original feature map which is consistent with the resolution of the fundus blood vessel image.
3. The method of claim 1, further comprising:
acquiring an original color image obtained by image acquisition of an eyeground blood vessel region;
performing brightness normalization processing on each pixel point in the original color image to obtain a brightness normalized image;
enhancing the contrast ratio of the blood vessels and the background in the original color image to obtain a blood vessel enhanced image;
labeling at least one of the original color image, the brightness normalized image, and the blood vessel enhanced image as the fundus blood vessel image.
4. The method of claim 1, further comprising:
expanding the number of channels of the fundus blood vessel image;
compressing the number of channels of the fundus blood vessel image after expansion to fit the number of input channels for down-sampling the fundus blood vessel image.
5. The method of claim 1, further comprising:
acquiring an original blood vessel image, and extracting at least two fundus blood vessel images from the original blood vessel image in a blocking mode;
splicing the blood vessel segmentation probability maps of the at least two fundus blood vessel images according to the extraction sequence to obtain a blood vessel segmentation result of the original blood vessel image, and
and splicing the classification results of the at least two fundus blood vessel images according to the extraction sequence to obtain a blood vessel classification result of the original blood vessel image.
6. The method according to claim 1, wherein the determining a probability value that each pixel point in the fundus blood vessel image belongs to a blood vessel based on the feature of each pixel in the original feature map comprises:
carrying out dimension reduction processing on the original feature map, and carrying out batch processing on the original feature map subjected to dimension reduction processing to obtain a normalized original feature map;
and mapping the characteristics of each pixel point in the normalized original characteristic diagram to the probability value that each pixel point in the fundus blood vessel image is a blood vessel pixel point through an activation function.
7. The method according to claim 1, wherein the fusing the weight of each pixel with the feature of the corresponding pixel in the original feature map to obtain a fused feature map comprises:
performing point multiplication operation processing on the weight of each pixel point in the original characteristic diagram and the characteristics of the corresponding pixel point in the original characteristic diagram to obtain a point multiplication operation result of each pixel point;
and combining the dot product operation results of all the pixel points to form the fusion characteristic graph.
8. The method according to claim 1, wherein the determining the probability value that each pixel point in the fundus blood vessel image belongs to an artery blood vessel and the probability value that each pixel point belongs to a vein blood vessel based on the characteristics of each pixel point in the fused feature map comprises:
carrying out dimension reduction on the fusion feature map, and carrying out batch processing on the fusion feature map subjected to dimension reduction to obtain a normalized fusion feature map;
and mapping the characteristics of each pixel point in the normalized fusion characteristic graph into the probability value that each pixel point in the fundus blood vessel image is an artery blood vessel and the probability value that each pixel point is a vein blood vessel correspondingly through an activation function.
9. The method according to any one of claims 1 to 8, further comprising:
obtaining down-sampling loss by using down-sampling characteristics output when the fundus blood vessel image is subjected to down-sampling of a plurality of layers and a reference standard of the down-sampling characteristics;
constructing a loss function of a neural network model based on the downsampled loss, the predicted losses of the vessel segmentation probability map and the vessel classification probability map, and network weights of the neural network model for predicting the vessel segmentation probability map and the vessel classification probability map;
updating model parameters of the neural network model based on the loss function such that the loss function converges.
10. A fundus blood vessel image classification apparatus, comprising:
the characteristic extraction module is used for extracting an original characteristic map from the fundus blood vessel image;
the output module is used for determining the probability value of each pixel point in the fundus blood vessel image belonging to a blood vessel based on the characteristics of each pixel in the original characteristic map so as to form a blood vessel segmentation probability map of the fundus blood vessel image;
an activation module, configured to assign a first weight to a probability value of a probability value distribution corresponding to probability values of a capillary vessel and a vessel boundary in the vessel segmentation probability map; assigning a second weight to a probability value of a probability value distribution in the vessel segmentation probability map that conforms to probability values of an artery, a vein and an image background, wherein the second weight is smaller than the first weight; forming an activation weight map based on the probability values in the vessel segmentation probability map and the corresponding assigned weights;
the output module is used for fusing the weight of each pixel point in the original feature map corresponding to the activation weight map with the feature of the corresponding pixel point in the original feature map to obtain a fused feature map;
the output module is used for determining probability values of all pixel points in the fundus blood vessel image belonging to arterial blood vessels and venous blood vessels respectively based on the characteristics of all pixel points in the fusion characteristic diagram;
and forming a blood vessel classification probability map of the fundus blood vessel image based on the probability values belonging to the artery blood vessel and the vein blood vessel, wherein the blood vessel classification probability map comprises two channels, one channel represents the probability that each pixel point of the fundus blood vessel image is an artery pixel point, and the other channel represents the probability that each pixel point of the fundus blood vessel image is a vein pixel point.
11. The apparatus of claim 10,
the feature extraction module is further configured to:
performing down-sampling on the fundus blood vessel image at multiple levels to obtain a down-sampling characteristic map with a resolution smaller than that of the fundus blood vessel image;
and performing up-sampling on the down-sampling feature map in multiple levels, and splicing the up-sampling result of each level and the down-sampling feature map with the same resolution as the up-sampling result of the next level to be used as the input of the up-sampling of the next level, so as to obtain the original feature map which is consistent with the resolution of the fundus blood vessel image.
12. The apparatus of claim 10 or 11, further comprising:
an input module to:
acquiring an original color image obtained by image acquisition of an eyeground blood vessel region;
performing brightness normalization processing on each pixel point in the original color image to obtain a brightness normalized image;
enhancing the contrast ratio of the blood vessels and the background in the original color image to obtain a blood vessel enhanced image;
labeling at least one of the original color image, the brightness normalized image, and the blood vessel enhanced image as the fundus blood vessel image.
13. A fundus blood vessel image classification apparatus, comprising:
a memory for storing executable instructions;
a processor for implementing the fundus blood vessel image classification method of any one of claims 1 to 9 when executing executable instructions stored in the memory.
14. A storage medium storing executable instructions for causing a processor to perform the fundus blood vessel image classification method according to any one of claims 1 to 9 when executed.
CN201910670551.1A 2019-05-10 2019-05-10 Method, device and equipment for classifying fundus blood vessel images and storage medium Active CN110348541B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910670551.1A CN110348541B (en) 2019-05-10 2019-05-10 Method, device and equipment for classifying fundus blood vessel images and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910390826.6A CN110309849B (en) 2019-05-10 2019-05-10 Blood vessel image processing method, device, equipment and storage medium
CN201910670551.1A CN110348541B (en) 2019-05-10 2019-05-10 Method, device and equipment for classifying fundus blood vessel images and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201910390826.6A Division CN110309849B (en) 2019-05-10 2019-05-10 Blood vessel image processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110348541A CN110348541A (en) 2019-10-18
CN110348541B true CN110348541B (en) 2021-12-10

Family

ID=68074700

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201910390826.6A Active CN110309849B (en) 2019-05-10 2019-05-10 Blood vessel image processing method, device, equipment and storage medium
CN201910670551.1A Active CN110348541B (en) 2019-05-10 2019-05-10 Method, device and equipment for classifying fundus blood vessel images and storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201910390826.6A Active CN110309849B (en) 2019-05-10 2019-05-10 Blood vessel image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (2) CN110309849B (en)

CN112233789A (en) * 2020-10-12 2021-01-15 辽宁工程技术大学 Regional feature fusion type hypertensive retinopathy classification method
CN112330684B (en) * 2020-11-23 2022-09-13 腾讯科技(深圳)有限公司 Object segmentation method and device, computer equipment and storage medium
CN112465054B (en) * 2020-12-07 2023-07-11 深圳市检验检疫科学研究院 FCN-based multivariate time series data classification method
CN112949654A (en) * 2021-02-25 2021-06-11 上海商汤智能科技有限公司 Image detection method and related device and equipment
CN112950611A (en) * 2021-03-18 2021-06-11 西安智诊智能科技有限公司 Liver blood vessel segmentation method based on CT image
CN113576399B (en) * 2021-08-02 2024-03-08 北京鹰瞳科技发展股份有限公司 Diabetic retinopathy analysis method, system and electronic equipment
CN114359284B (en) * 2022-03-18 2022-06-21 北京鹰瞳科技发展股份有限公司 Method for analyzing retinal fundus images and related products
CN114693648B (en) * 2022-04-02 2024-07-05 深圳睿心智能医疗科技有限公司 Blood vessel center line extraction method and system
CN117237270B (en) * 2023-02-24 2024-03-19 靖江仁富机械制造有限公司 Forming control method and system for producing wear-resistant and corrosion-resistant pipeline
CN116740203B (en) * 2023-08-15 2023-11-28 山东理工职业学院 Safety storage method for fundus camera data

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101515365A (en) * 2009-03-25 2009-08-26 沈阳东软医疗系统有限公司 Method for automatically separating adherent hyaline-vascular type lung nodule in CT image
CN102800087A (en) * 2012-06-28 2012-11-28 华中科技大学 Automatic segmentation method for the ultrasound carotid artery vessel membrane
CN104899876A (en) * 2015-05-18 2015-09-09 天津工业大学 Fundus image blood vessel segmentation method based on adaptive difference of Gaussians
CN107292887A (en) * 2017-06-20 2017-10-24 电子科技大学 Retinal blood vessel segmentation method based on deep-learning adaptive weighting
CN107590510A (en) * 2017-08-29 2018-01-16 上海联影医疗科技有限公司 Image positioning method, device, computer and storage medium
WO2018159708A1 (en) * 2017-02-28 2018-09-07 富士フイルム株式会社 Blood flow analyzing device and method, and program
CN109345538A (en) * 2018-08-30 2019-02-15 华南理工大学 Retinal blood vessel segmentation method based on convolutional neural networks

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106920227B (en) * 2016-12-27 2019-06-07 北京工业大学 Retinal blood vessel segmentation method combining deep learning with conventional methods
WO2019013779A1 (en) * 2017-07-12 2019-01-17 Mohammed Alauddin Bhuiyan Automated blood vessel feature detection and quantification for retinal image grading and disease screening
CN109598732B (en) * 2018-12-11 2022-06-14 厦门大学 Medical image segmentation method based on three-dimensional space weighting

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
An Ensemble Classification-Based Approach Applied to Retinal Blood Vessel Segmentation; Muhammad Moazam Fraz et al.; IEEE Transactions on Biomedical Engineering; Sep. 2012; Vol. 59, No. 9; pp. 2538-2548 *
Segmenting Retinal Blood Vessels With Deep Neural Networks; Paweł Liskowski et al.; IEEE Transactions on Medical Imaging; Nov. 2016; Vol. 35, No. 11; pp. 2369-2380 *
Multi-scale Retinal Vessel Segmentation Based on Fully Convolutional Neural Networks; Zheng Tingyue et al.; Acta Optica Sinica; Feb. 2019; Vol. 39, No. 2; pp. 1-8 *

Also Published As

Publication number Publication date
CN110309849B (en) 2021-08-06
CN110348541A (en) 2019-10-18
CN110309849A (en) 2019-10-08

Similar Documents

Publication Publication Date Title
CN110348541B (en) Method, device and equipment for classifying fundus blood vessel images and storage medium
CN110120047B (en) Image segmentation model training method, image segmentation method, device, equipment and medium
US10482603B1 (en) Medical image segmentation using an integrated edge guidance module and object segmentation network
US20210248751A1 (en) Brain image segmentation method and apparatus, network device, and storage medium
CN112883962B (en) Fundus image recognition method, fundus image recognition apparatus, fundus image recognition device, fundus image recognition program, and fundus image recognition program
CN109919915A (en) Retina fundus image abnormal region detection method and device based on deep learning
WO2021042690A1 (en) Deep convolution neural network-based breast cancer auxiliary diagnosis method and apparatus
CN110738660B (en) Vertebra CT image segmentation method and device based on improved U-net
CN112926537B (en) Image processing method, device, electronic equipment and storage medium
Hu et al. Automatic artery/vein classification using a vessel-constraint network for multicenter fundus images
US20220130052A1 (en) Device and method for glaucoma auxiliary diagnosis, and storage medium
CN112330624A (en) Medical image processing method and device
CN112541924A (en) Fundus image generation method, device, equipment and storage medium
CN116848588A (en) Automatic labeling of health features in medical images
CN112053363A (en) Retinal vessel segmentation method and device and model construction method
CN111950637A (en) Purple matter detection method, purple matter detection device, skin detector and readable storage medium
CN110827963A (en) Semantic segmentation method for pathological image and electronic equipment
Tian et al. Learning discriminative representations for fine-grained diabetic retinopathy grading
CN113724262A (en) CNV segmentation method in retina OCT image
CN113705595A (en) Method, device and storage medium for predicting degree of abnormal cell metastasis
Mohammedhasan et al. A new deeply convolutional neural network architecture for retinal blood vessel segmentation
Pradhan A Novel Threshold based Method for Vessel Intensity Detection and Extraction from Retinal Images
CN113384261B (en) Centrum compression fracture multi-mode intelligent diagnosis system based on deep learning
CN113379770B (en) Construction method of nasopharyngeal carcinoma MR image segmentation network, image segmentation method and device
CN113576399B (en) Sugar net analysis method, system and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant