CN112435281A - Multispectral fundus image analysis method and system based on adversarial learning - Google Patents


Info

Publication number
CN112435281A
Authority
CN
China
Prior art keywords
fundus image
model
vessel
blood vessel
label
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011006571.8A
Other languages
Chinese (zh)
Other versions
CN112435281B (en)
Inventor
郑元杰
隋晓丹
姜岩芸
贾伟宽
赵艳娜
牛屹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Normal University
Original Assignee
Shandong Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Normal University filed Critical Shandong Normal University
Priority to CN202011006571.8A
Publication of CN112435281A
Application granted
Publication of CN112435281B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 7/10: Segmentation; Edge detection
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30004: Biomedical image processing
    • G06T 2207/30041: Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a multispectral fundus image analysis method and system based on adversarial learning. The method comprises: acquiring multispectral fundus images, including images with vessel labels and images without vessel labels; and inputting the multispectral fundus images into a trained fundus image analysis model to obtain a registration result of the multispectral fundus images. The fundus image analysis model comprises a fundus image registration model and a retinal vessel segmentation model; during training, the two models are first trained independently on the vessel-labeled images, and then perform adversarial learning on the unlabeled images. Because the registration model and the segmentation model can be trained independently from labeled images and can also learn adversarially from unlabeled images, the precision of both image registration and vessel segmentation is improved.

Description

Multispectral fundus image analysis method and system based on adversarial learning
Technical Field
The disclosure relates to a multispectral fundus image analysis method and system based on adversarial learning.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Effective imaging techniques are key to the diagnosis and successful treatment of ophthalmic diseases. In recent years, a variety of fundus imaging techniques have been developed, including Color Fundus Photography (CFP), multispectral imaging (MSI), enhanced depth imaging optical coherence tomography (EDI-OCT), Fundus Autofluorescence (FAF), indocyanine green angiography (ICGA), and fundus fluorescein angiography (FFA). Different ophthalmic imaging technologies show unique advantages in the manifestation of specific ophthalmic diseases.
Multispectral fundus imaging (MSI) techniques take pictures at a range of wavelengths based on Light Emitting Diode (LED) illumination to extract spectral information from the retina. The use of multispectral imaging techniques increases the amount of spectral information extracted from biological tissue using more than three spectral bands, thereby facilitating the diagnosis of many diseases. MSI combines the utility of spectroscopy and imaging to provide spectral and spatial information of retinal landmarks.
The combination of wavelengths can highlight small details that are otherwise invisible, enabling better visualization and differential diagnosis. For example, based on hemoglobin oxygenation levels in the blood vessels and retina, the 580 nm and 590 nm images may be combined to create a retinal oxy-/deoxyhemoglobin contrast map, the 760 nm and 810 nm images may be merged to create a choroidal oxy-/deoxyhemoglobin contrast map, and the 550 nm and 660 nm images together constitute a conventional fundus photographic image, which is essential in ocular disease analysis. Therefore, effectively estimating and eliminating the spatial misalignments between MSI slices from different spectra is the first step of imaging analysis.
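As an illustration of how two spectral slices might be combined, the sketch below computes a per-pixel contrast map in Python. The disclosure does not give the exact clinical formula; the normalised difference used here, the function name, and the epsilon guard are illustrative assumptions.

```python
def contrast_map(img_a, img_b, eps=1e-6):
    """Per-pixel normalised difference between two spectral slices
    (e.g. the 580 nm / 590 nm pair). Values lie roughly in [-1, 1];
    eps avoids division by zero on dark pixels."""
    return [[(a - b) / (a + b + eps) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]
```

A map built this way emphasises pixels where the two wavelengths respond differently, which is the intuition behind the oxy-/deoxyhemoglobin contrast maps described above.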
In recent years many methods have been developed for fundus image registration, yet registration of multispectral fundus images remains an open problem, mainly because of two challenges. The first challenge is the intensity difference between the multispectral images: each image characterizes a specific depth of the retina and choroid, acquired by the RHA device projecting monochromatic LED illumination of different wavelengths onto the fundus. For example, a green-spectrum (550 nm) image shows the pre-retinal space, in which retinal blood vessels are prominent. As the wavelength increases, penetration depth increases, revealing the nerve fiber layer, the ganglion cell layer, and so on, until the choroid is reached. Short-wavelength images show clear vessel information, the choroidal structure gradually appears as the wavelength increases, and the spectral images therefore differ markedly in appearance; fortunately, retinal blood vessels can still be distinguished across them. The second challenge is pixel misalignment caused by eye motion: the complex eye movements during imaging make analysis of the MSI images considerably harder.
Multispectral fundus image registration generally aligns images based on retinal vessel structures. Many ophthalmic diseases manifest in the retinal vessels, so extraction of the retinal vessels is a key technology for fundus image analysis, and segmentation of the retinal vessels is therefore a key step in multispectral fundus image analysis. At present, however, no method jointly analyzes multispectral fundus image alignment and vessel segmentation.
Disclosure of Invention
To solve the above problems, the present disclosure provides a multispectral fundus image analysis method and system based on adversarial learning. The fundus image registration model and the retinal vessel segmentation model can be trained independently from vessel-labeled images and can also perform adversarial learning on unlabeled images, which improves the precision of image registration and vessel segmentation, removes the need to manually annotate the multispectral fundus images, and improves segmentation and registration efficiency.
In order to achieve the purpose, the following technical scheme is adopted in the disclosure:
In one or more embodiments, there is provided a multispectral fundus image analysis method based on adversarial learning, comprising:
acquiring multispectral fundus images, including images with vessel labels and images without vessel labels;
inputting the multispectral fundus images into a trained fundus image analysis model to obtain a registration result of the multispectral fundus images;
wherein the fundus image analysis model comprises a fundus image registration model and a retinal vessel segmentation model; during training, the fundus image registration model and the retinal vessel segmentation model are each trained independently on the vessel-labeled images, and the two models perform adversarial learning on the unlabeled images.
Further, the specific process by which the fundus image registration model and the retinal vessel segmentation model perform adversarial learning on the unlabeled images is as follows:
an unlabeled image is input into the retinal vessel segmentation model to obtain a predicted label for it, and the fundus image registration model is trained with this predicted label;
the fundus image registration model then warps the predicted labels to obtain deformed vessel labels, and the deformed vessel labels are used to retrain the retinal vessel segmentation model.
In one or more embodiments, there is disclosed a multispectral fundus image analysis system based on adversarial learning, comprising:
a data acquisition module, configured to acquire multispectral fundus images;
a data analysis module, configured to register the multispectral fundus images through a fundus image analysis model, wherein the fundus image analysis model comprises a fundus image registration model and a retinal vessel segmentation model; a vessel map corresponding to the multispectral fundus images is obtained through the retinal vessel segmentation model, and the fundus image registration model registers the multispectral fundus images according to the vessel map; during training, the fundus image registration model and the retinal vessel segmentation model are each trained independently on vessel-labeled images, and the two models perform adversarial learning on the unlabeled images.
In one or more embodiments, an electronic device is disclosed, comprising a memory, a processor, and computer instructions stored on the memory and executed on the processor, wherein the computer instructions, when executed by the processor, perform the steps of the adversarial-learning-based multispectral fundus image analysis method.
In one or more embodiments, a computer-readable storage medium is disclosed for storing computer instructions that, when executed by a processor, perform the steps of the adversarial-learning-based multispectral fundus image analysis method.
Compared with the prior art, the beneficial effects of the present disclosure are:
1. The fundus image registration model and the retinal vessel segmentation model can be trained independently from vessel-labeled images and can also perform adversarial learning with unlabeled images: an unlabeled image obtains a predicted label through the retinal vessel segmentation model, and the fundus image registration model is trained adversarially with this predicted label. This alleviates, to a certain extent, the lack of labeled data sets and improves registration precision.
2. During adversarial learning, the unlabeled image obtains a predicted label through the retinal vessel segmentation model, the predicted label adversarially trains the fundus image registration model, and the registration model in turn generates a deformed vessel label, which continues to train the retinal vessel segmentation model. The unlabeled images are thus fully utilized, and the accuracy of retinal vessel segmentation is improved.
3. The fundus image registration model and the retinal vessel segmentation model can perform adversarial learning on the unlabeled images without manually annotating the multispectral fundus images, so registration and retinal vessel segmentation of the multispectral fundus images are realized automatically, and analysis efficiency is improved.
4. The fundus image registration model establishes, through deep learning, a mapping from an image pair to the spatial correspondence between the images, and the retinal vessel segmentation model establishes a mapping from a multispectral fundus image to a retinal vessel map. The trained models predict the spatial correspondence between an image pair, or the retinal vessel map, in a single forward pass, so they run fast.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application.
Fig. 1 is a flow chart of example 1 of the present disclosure;
fig. 2 is a set of multispectral fundus image presentation views;
fig. 3 is a view of a fundus image analysis model configuration according to embodiment 1 of the present disclosure;
FIG. 4 shows quantitative results of multispectral fundus image registration achieved by the method of embodiment 1 of the present disclosure;
FIG. 5 is a visual comparison of MSI registration results achieved by the method of embodiment 1 of the present disclosure and by other learning-based methods;
FIG. 6 illustrates retinal vessel segmentation results achieved by the method of embodiment 1 of the present disclosure;
fig. 7 is a visual comparison of retinal vessel segmentation results achieved by the method of embodiment 1 of the present disclosure and by other learning-based methods;
fig. 8 shows quantitative retinal vessel segmentation results on multispectral fundus images achieved by the method of embodiment 1 of the present disclosure.
Detailed Description
the present disclosure is further described with reference to the following drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit example embodiments according to the present application. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise; and it should be understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
In the present disclosure, terms such as "upper", "lower", "left", "right", "front", "rear", "vertical", "horizontal", "side", "bottom", and the like indicate orientations or positional relationships based on those shown in the drawings, and are only relational terms determined for convenience in describing structural relationships of the parts or elements of the present disclosure, and do not refer to any parts or elements of the present disclosure, and are not to be construed as limiting the present disclosure.
In the present disclosure, terms such as "fixedly connected" and "connected" are to be understood in a broad sense: they may mean a fixed connection, an integral connection, or a detachable connection, and the connection may be direct or indirect through an intermediate element. The specific meanings of the above terms in the present disclosure can be determined case by case by persons skilled in the relevant art, and are not to be construed as limiting the present disclosure.
Example 1
In this embodiment, a multispectral fundus image analysis method based on adversarial learning is disclosed, as shown in figs. 1 to 8, comprising:
acquiring multispectral fundus images, including images with vessel labels and images without vessel labels;
inputting the multispectral fundus images into a trained fundus image analysis model to obtain a registration result of the multispectral fundus images;
wherein the fundus image analysis model comprises a fundus image registration model and a retinal vessel segmentation model; during training, the two models are each trained independently on the vessel-labeled images and perform adversarial learning on the unlabeled images.
A vessel map corresponding to the multispectral fundus image is obtained through the retinal vessel segmentation model, and the fundus image registration model registers the multispectral fundus images according to the vessel map.
Further, the specific process by which the fundus image registration model and the retinal vessel segmentation model perform adversarial learning on the unlabeled images is as follows:
an unlabeled image is input into the retinal vessel segmentation model to obtain a predicted label for it, and the fundus image registration model is trained with this predicted label;
the fundus image registration model then warps the predicted labels to obtain deformed vessel labels, and the deformed vessel labels are used to retrain the retinal vessel segmentation model.
Further, the fundus image registration model is based on a U-Net network; its loss is computed from vessel labels and back-propagated during training, and its loss function is weakly supervised.
The retinal vessel segmentation model is also based on a U-Net network; its loss is computed from vessel labels and back-propagated during training, and its loss function is fully supervised.
Further, the loss function of the fundus image registration model comprises a composite regularization constraint loss and a similarity loss.
Further, the vessel labels used by the fundus image registration model include the real vessel labels of the labeled images and the predicted labels of the unlabeled images.
Further, the vessel labels used by the retinal vessel segmentation model include the real vessel labels of the labeled images and the deformed vessel labels output by the fundus image registration model.
The multispectral fundus image analysis method based on adversarial learning is now described in detail:
acquiring multispectral fundus images, including images with vessel labels and images without vessel labels;
inputting the multispectral fundus images into a trained fundus image analysis model to obtain a registration result of the multispectral fundus images;
wherein the fundus image analysis model comprises a fundus image registration model and a retinal vessel segmentation model; during training, the two models are each trained independently on the vessel-labeled images and perform adversarial learning on the unlabeled images.
The fundus image registration model comprises a training part and an implementation part.
The training portion includes acquiring a multispectral fundus image dataset, as shown in fig. 2. Data acquisition uses a multispectral fundus imaging device to acquire the multi-wavelength images sequentially within a short period of time.
Deformable image registration estimates the spatial correspondence between images so that the deformed floating image closely matches the fixed image, usually by minimizing an energy function expressed as:

M(I_F, I_M(\phi)) + R(\phi)   (1)

where the first term M measures the similarity between the deformed floating image I_M(\phi) and the fixed image I_F, and the second term R constrains the deformation matrix estimated by the model.
According to the above objective, a fundus image registration model is designed; its structure is shown in fig. 3. A pair of multispectral fundus images is input, registered by the multispectral fundus image registration model, and the dense spatial correspondence between the images is output. Through a spatial transformation layer, this spatial correspondence transforms the retinal vessel map of the original floating image into a deformed floating vessel map. The parameters of the registration model are optimized through the similarity constraint between the deformed floating vessel map and the vessel map of the fixed image, together with the constraint on the deformation matrix, so as to obtain an optimal deformation matrix.
Constructing a fundus image registration model, comprising:
and the input module inputs a pair of multispectral fundus images.
It should be noted that the fundus images are registered to images acquired by the same person at different times or acquired by different imaging modes, and the problem of registration among different samples does not exist.
The fundus image registration model is designed based on U-Net and comprises an encoder part and a decoder part. The encoder consists of convolutional, normalization, activation, and pooling layers; the decoder consists of convolutional, normalization, activation, and deconvolution layers. A gap-filling layer is added between the encoder and the decoder to balance shallow and deep features, addressing the tendency of deep features to be lost during fusion.
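The encoder/decoder symmetry of such a U-Net-style network can be traced with a small helper: each encoder stage halves the spatial resolution via pooling and each decoder stage doubles it via deconvolution, which is why skip connections pair stages of equal size. The depth of four stages and the input size are illustrative assumptions; the disclosure does not state them.

```python
def unet_shapes(size, depth=4):
    """Trace spatial sizes through a U-Net-style encoder/decoder.
    Returns (encoder sizes including the input, decoder sizes)."""
    enc = [size]
    for _ in range(depth):  # each pooling layer halves the resolution
        size //= 2
        enc.append(size)
    dec = []
    for _ in range(depth):  # each deconvolution layer doubles it back
        size *= 2
        dec.append(size)
    return enc, dec
```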
The end-to-end registration model is connected to a spatial transformation layer that performs bilinear interpolation over the values of the four neighbouring pixels, expressed as:

I_M(\phi)(u) = \sum_{v \in N(u + \phi(u))} I_M(v) \prod_{d \in D} \left( 1 - \left| u_d + \phi(u)_d - v_d \right| \right)   (2)

where u is the pixel coordinate [x, y], N(u + \phi(u)) denotes the four pixel neighbours of u + \phi(u) in I_M, and D denotes the two directions of the 2D domain.
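The bilinear interpolation performed by the spatial transformation layer can be sketched in plain Python: each of the four neighbouring pixels is weighted by (1 - |dx|)(1 - |dy|), as in the expression above. Function names and the zero-padding behaviour at the border are our assumptions.

```python
import math

def bilinear_sample(img, x, y):
    """Sample image `img` (list of rows) at continuous position (x, y) by
    bilinear interpolation: each of the four neighbouring pixels is weighted
    by (1 - |dx|) * (1 - |dy|); out-of-bounds neighbours contribute zero."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    val = 0.0
    for yi in (y0, y0 + 1):
        for xi in (x0, x0 + 1):
            if 0 <= yi < len(img) and 0 <= xi < len(img[0]):
                w = (1 - abs(x - xi)) * (1 - abs(y - yi))
                val += w * img[yi][xi]
    return val

def warp(img, phi):
    """Warp a floating image with a dense displacement field,
    phi[y][x] = (dx, dy), by sampling at the displaced positions."""
    return [[bilinear_sample(img, x + phi[y][x][0], y + phi[y][x][1])
             for x in range(len(img[0]))] for y in range(len(img))]
```

A zero displacement field reproduces the input image exactly, which is a convenient sanity check for any spatial transformation layer.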
An output module outputs the model's results, comprising the spatial correspondence matrix between the images and the floating image deformed by the spatial transformation layer, or the vessel map corresponding to the deformed floating image.
The loss function includes a similarity constraint that constrains the deformed floating image to be similar to the fixed image, and a constraint on the deformation field:
the purpose of the fundus image registration model is to estimate a dense spatial correspondence so that the deformed floating image is aligned with the fixed image. In other words, for the multispectral fundus image, the purpose of the image registration is to align the retinal vessel images to which the multispectral fundus image corresponds. This is consistent with the alignment used by the ophthalmologist when analyzing the multispectral fundus image. Therefore, the aim of image registration is achieved by minimizing the similarity matrix between the corresponding retinal vessel maps of the images.
In particular, when computing the loss function, the model uses a soft vessel label instead of the raw vessel map, where the soft labels comprise the real vessel labels of the labeled images and the predicted labels produced by the retinal vessel segmentation model for unlabeled images. A 2D Gaussian kernel

G(x, y) = \frac{1}{2\pi\sigma^2} \exp\left( -\frac{x^2 + y^2}{2\sigma^2} \right)   (3)

transforms the vessel map into a soft vessel label.
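The soft-label construction can be illustrated by convolving a binary vessel map with a 2D Gaussian kernel, which gives the similarity loss a smooth gradient near vessels. The kernel radius, sigma, and zero-padding at the border are illustrative assumptions; the disclosure does not state them.

```python
import math

def gaussian_soft_label(vessel_map, sigma=1.0, radius=2):
    """Soften a binary vessel map by convolving it with a truncated,
    normalised 2D Gaussian kernel (pixels outside the image count as zero)."""
    kernel = {(i, j): math.exp(-(i * i + j * j) / (2.0 * sigma * sigma))
              for i in range(-radius, radius + 1)
              for j in range(-radius, radius + 1)}
    norm = sum(kernel.values())
    h, w = len(vessel_map), len(vessel_map[0])
    soft = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            soft[y][x] = sum(
                wk * vessel_map[y + i][x + j]
                for (i, j), wk in kernel.items()
                if 0 <= y + i < h and 0 <= x + j < w) / norm
    return soft
```

A single vessel pixel spreads into a small blob whose mass sums to one, so nearby misalignments are penalised gradually rather than all-or-nothing.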
The similarity loss function is defined as:

L_{sim} = \frac{1}{N} \sum_{u} \left( V_F(u) - V_M(\phi)(u) \right)^2   (4)

where u denotes a pixel coordinate [x, y] in the fixed label V_F, V_M(\phi) is the deformed floating label, and N is the number of pixels.
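One plausible reading of this similarity loss is a mean squared difference over pixels between the fixed soft label and the warped floating soft label; since the original formula image is illegible, the sketch below implements that reading as an assumption.

```python
def similarity_loss(v_fixed, v_warped):
    """Mean squared per-pixel difference between the fixed soft vessel
    label and the deformed floating soft vessel label."""
    n = sum(len(row) for row in v_fixed)  # total number of pixels N
    return sum((f - w) ** 2
               for rf, rw in zip(v_fixed, v_warped)
               for f, w in zip(rf, rw)) / n
```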
To make the deformation matrix estimated by the end-to-end convolutional neural network smooth and reasonable, a composite regularization constraint loss is used, expressed as:

R(\phi) = \alpha \sum_{u} \left\| \Delta \phi(u) \right\|^2 + \beta \sum_{u} \left\| \phi(u) \right\|^2   (5)

The former term, built on the Laplace operator \Delta, ensures the smoothness of the deformation matrix; the latter term balances errors in the model's initial output, meaning that the closest spatial correspondence is selected among positions of equal similarity. We experimentally verified that both regularization terms are essential for regularizing the transform field; \alpha and \beta are their weights, set to \alpha = 1.5 and \beta = 0.01.
Therefore, the overall loss function of the convolutional-neural-network-based end-to-end registration model is expressed as:

L_{registration} = L_{sim} + R(\phi)   (6)
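The composite regularizer above, a Laplacian smoothness term plus an L2 term on the field with the stated weights alpha = 1.5 and beta = 0.01, can be sketched with a discrete five-point Laplacian. The discretisation and border handling are our assumptions.

```python
def registration_regularizer(phi, alpha=1.5, beta=0.01):
    """Composite regularizer on a 2D displacement field phi[y][x] = (dx, dy):
    alpha * sum ||five-point Laplacian of phi||^2  (smoothness)
    + beta * sum ||phi||^2                          (stay near identity).
    Out-of-bounds neighbours are simply skipped."""
    h, w = len(phi), len(phi[0])
    smooth = l2 = 0.0
    for y in range(h):
        for x in range(w):
            for d in range(2):  # dx and dy components
                l2 += phi[y][x][d] ** 2
                lap = -4.0 * phi[y][x][d]
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w:
                        lap += phi[ny][nx][d]
                smooth += lap ** 2
    return alpha * smooth + beta * l2
```

A zero field costs nothing, and a constant nonzero field is penalised only at the border (where the skipped neighbours make the discrete Laplacian nonzero) plus the small L2 term.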
and an implementation part, wherein the trained image end-to-end registration model can be used for predicting the spatial correspondence between image pairs, and further a deformed floating image is obtained through spatial transformation layer transformation.
The specific steps when the multispectral fundus image registration is carried out through the fundus image registration model are as follows:
step 1-1: acquiring a multispectral fundus image.
Specifically, the training data consist of images to be processed acquired by a multispectral fundus imaging device or images pre-stored in a memory. Images from the same sources can also be used at test time. In each case, the data comprise a series of multispectral fundus images.
Step 1-2: the encoder encodes the image features and the decoder obtains the spatial correspondence between the images.
Step 1-3-1: the retinal vessel map corresponding to the floating image is transformed using the spatial transform layer.
Step 1-3-2: the floating image is transformed using a spatial transform layer.
Step 1-4: the loss function calculates the loss and passes the gradient back to the end-to-end registration model, optimizing the parameters in the end-to-end registration model.
Step 1-5: and outputting the spatial corresponding relation matrix between the images predicted by the model and the deformed floating image.
It is to be noted that the model training phase performs: step 1-1, step 1-2, step 1-3-1, step 1-4; the model testing phase executes: step 1-1, step 1-2, step 1-3-2, and step 1-5.
The retina blood vessel segmentation model comprises a training part and an implementation part.
The training portion includes acquiring a multi-spectral fundus image dataset. Data acquisition uses multispectral fundus image acquisition equipment to sequentially realize multi-wavelength image acquisition within a certain time.
The retinal vessel segmentation model, an end-to-end segmentation model based on a convolutional neural network, is constructed as follows:
An input module inputs one image of a group of multispectral fundus images for segmenting retinal blood vessels.
The end-to-end segmentation model module is designed based on U-Net and comprises an encoder part and a decoder part. The encoder consists of convolutional, normalization, activation, and pooling layers; the decoder consists of convolutional, normalization, activation, and deconvolution layers.
An output module outputs the model's result: the retinal vessel map corresponding to the multispectral fundus image.
The loss function uses the generalized Dice loss. In the task of retinal fundus image vessel segmentation, the vessel label occupies only a very small portion of the entire fundus image. To overcome the segmentation difficulty caused by negative samples far outnumbering positive samples, we adopt the generalized Dice loss as the segmentation loss so that pixels which are difficult to learn receive more attention:

L_{GD} = 1 - 2 \, \frac{\sum_{l} w_l \sum_{n} V_{ln} \hat{V}_{ln}}{\sum_{l} w_l \sum_{n} \left( V_{ln} + \hat{V}_{ln} \right)}   (7)

where l indexes the semantic classes; in our experiment there are only the vessel region and the background. The weight w_l balances foreground and background and is set as

w_l = \frac{1}{\left( \sum_{n} V_{ln} \right)^2}

V_{ln} and \hat{V}_{ln} denote the real vessel label and the vessel label predicted by the end-to-end segmentation model, respectively. Because the label weights are adjusted according to the voxel counts, regions that are difficult to identify, such as vessel edges and thin vessels, receive more attention during training.
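The generalized Dice loss with per-class weight w_l = 1 / (sum of class voxels)^2 can be implemented directly for the two-class (vessel/background) case. The epsilon guard against empty classes is an implementation detail of ours.

```python
def generalized_dice_loss(true, pred, eps=1e-6):
    """Generalized Dice loss over background (0) and vessel (1) classes.
    `true` holds hard labels in {0, 1}; `pred` holds vessel probabilities.
    Each class is weighted by 1 / (class voxel count)^2, counteracting the
    heavy background/vessel imbalance."""
    num = den = 0.0
    for cls in (0, 1):
        t = [1.0 if v == cls else 0.0 for row in true for v in row]
        p = [(q if cls == 1 else 1.0 - q) for row in pred for q in row]
        w = 1.0 / (sum(t) ** 2 + eps)
        num += w * sum(a * b for a, b in zip(t, p))
        den += w * sum(a + b for a, b in zip(t, p))
    return 1.0 - 2.0 * num / (den + eps)
```

A perfect prediction drives the loss to (approximately) zero, and an exactly inverted prediction drives it to one, regardless of how rare the vessel class is.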
The specific process of performing retinal vessel segmentation by using the retinal vessel segmentation model comprises the following steps:
step 2-1: acquiring a multispectral fundus image.
Step 2-2: the encoder encodes the image characteristics, and the decoder predicts and obtains a retinal vessel map corresponding to the image.
Step 2-3: the loss function calculates the loss and passes the degree back to the end-to-end retinal vessel segmentation model, optimizing the parameters in the end-to-end segmentation model.
Step 2-4: and outputting a retinal vessel map corresponding to the image predicted by the model.
It is to be noted that the model training phase performs: step 2-1, step 2-2 and step 2-3; the model testing phase executes: step 2-1, step 2-2 and step 2-4.
The structure of the fundus image analysis model is shown in fig. 3; it comprises the fundus image registration model and the retinal vessel segmentation model. During training, the two models are each trained independently on vessel-labeled images and perform adversarial learning on the unlabeled images.
The fundus image registration model uses the soft vessel-map labels corresponding to the multispectral fundus images as weak supervision to train its parameters.
The fundus image registration model can be trained with vessel-labeled images, and it can also perform adversarial learning with images lacking vessel labels; in the adversarial learning phase, the prediction labels that the segmentation model produces for unlabeled images assist in training the registration model.
The retinal vessel segmentation model is a fully supervised end-to-end deep learning segmentation model, so labeled retinal vessel maps are required to compute its loss function. The loss function is the generalized Dice loss, a fully supervised loss: during training, labeled vessel maps are needed to compute the loss, which is back-propagated to adjust the model parameters. The independent training phase uses manually labeled retinal vessel maps; the adversarial learning phase can instead use the vessel labels deformed by the registration model as pseudo ground-truth labels to assist training.
Since the retinal vessel segmentation model can predict the vessel map corresponding to a multispectral fundus image, and the fundus image registration model can predict the spatial correspondence between image pairs, a spatial transformation layer can warp the prediction label that the retinal vessel segmentation model produces for an unlabeled image, yielding a deformed vessel label and, from it, a vessel likelihood region. The obtained likelihood regions can then serve as ground-truth labels, allowing the retinal vessel segmentation model to be trained with unlabeled data. The generalized Dice loss is also chosen as the adversarial loss:
$$\mathcal{L}_{adv} = 1 - 2\,\frac{\sum_{l} w_{l} \sum_{n} \hat{v}_{ln}\,\tilde{v}_{ln}}{\sum_{l} w_{l} \sum_{n} \left(\hat{v}_{ln} + \tilde{v}_{ln}\right)}$$

where $\tilde{v}_{ln}$ is the likelihood region map obtained by warping $\hat{v}_{ln}$ with the displacement field of the registration network. Since the training of the segmentation and registration models depends on vessel labels, the parameters of both models are optimized at each iteration. Registration errors may accumulate during adversarial training; to reduce such errors, the unlabeled multispectral data should be used in the adversarial network judiciously.
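The warping step that turns a predicted vessel map into a deformed pseudo-label can be sketched in numpy with bilinear sampling, which is the usual interior of a differentiable spatial-transformer layer. The function name, the (dy, dx) flow convention, and the toy data are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def warp_label(label, flow):
    """Warp a 2-D soft label map by a dense displacement field.

    label: (H, W) vessel probabilities; flow: (H, W, 2) per-pixel (dy, dx)
    displacements, as produced by a registration network. Output pixel
    (y, x) samples the input at (y + dy, x + dx) with bilinear weights.
    """
    h, w = label.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    sy = np.clip(ys + flow[..., 0], 0, h - 1)
    sx = np.clip(xs + flow[..., 1], 0, w - 1)
    y0 = np.floor(sy).astype(int)
    x0 = np.floor(sx).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = sy - y0
    wx = sx - x0
    return ((1 - wy) * (1 - wx) * label[y0, x0]
            + (1 - wy) * wx * label[y0, x1]
            + wy * (1 - wx) * label[y1, x0]
            + wy * wx * label[y1, x1])

# An identity displacement leaves the predicted vessel map unchanged;
# a non-zero field produces the warped pseudo-label used as supervision.
pred = np.zeros((8, 8))
pred[3, 3] = 1.0
identity = np.zeros((8, 8, 2))
same = warp_label(pred, identity)
shift = np.zeros((8, 8, 2))
shift[..., 1] = 1.0          # every output pixel samples one pixel to its right
moved = warp_label(pred, shift)
```

Because the sampling weights are smooth in the displacement field, the same operation implemented in a deep learning framework lets gradients flow back into the registration network.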
1. Regarding registration, this embodiment is the first to propose adversarial learning between segmentation and registration. The registration model can be trained on the labeled images in the original data set, and it can also be trained adversarially using the vessel labels that the segmentation model predicts for images without retinal vessel labels. This alleviates the shortage of labeled data sets to some extent and improves registration accuracy.
2. Regarding segmentation, this embodiment is the first to propose adversarial learning between segmentation and registration. The segmentation model can use the labeled images in the original data set as training data; in addition, the retinal vessel map that the segmentation model predicts for a short-wavelength image can, after being deformed by the registration model, serve as the vessel-map label of the corresponding long-wavelength image to continue training the segmentation model. This alleviates the shortage of labeled data sets to some extent and improves the accuracy of retinal vessel segmentation.
3. Regarding practicability and extensibility, the established model automatically performs multispectral fundus image registration and retinal vessel segmentation without manual annotation by doctors, realizing automatic image analysis. It has a degree of robustness and, with modest modification, can be applied to target-region segmentation and registration of other medical multispectral images.
4. Regarding computational efficiency and running speed, the invention is based on deep learning models: the registration model learns a mapping from an image pair to the spatial correspondence between the images, and the segmentation model learns a mapping from an image to its retinal vessel map. The trained models predict the spatial correspondence between an image pair, or the retinal vessel map, in a single forward pass, so inference is fast.
Example 2
In this embodiment, a multispectral fundus image analysis system based on adversarial learning is disclosed, comprising:
a data acquisition module configured to acquire multispectral fundus images;
a data analysis module configured to register the multispectral fundus images through a fundus image analysis model. The fundus image analysis model comprises a fundus image registration model and a retinal vessel segmentation model: the retinal vessel segmentation model produces the vessel map corresponding to the multispectral fundus images, and the fundus image registration model registers the multispectral fundus images according to the vessel map. During training, the fundus image registration model and the retinal vessel segmentation model are first trained independently on vessel-labeled images; the two models then perform adversarial learning on images without vessel labels.
Example 3
In this embodiment, an electronic device is disclosed, comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor; when executed by the processor, the computer instructions perform the steps of the adversarial-learning-based multispectral fundus image analysis method disclosed in embodiment 1.
Example 4
In this embodiment, a computer-readable storage medium is disclosed for storing computer instructions which, when executed by a processor, perform the steps of the adversarial-learning-based multispectral fundus image analysis method disclosed in embodiment 1.
The above description is only a preferred embodiment of the present application and is not intended to limit it; those skilled in the art may make various modifications and changes. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present application shall fall within its protection scope.

Claims (10)

1. A multispectral fundus image analysis method based on adversarial learning, characterized by comprising:
acquiring multispectral fundus images, including vessel-labeled images and images without vessel labels;
inputting the multispectral fundus images into a trained fundus image analysis model to obtain a registration result of the multispectral fundus images;
wherein the fundus image analysis model comprises a fundus image registration model and a retinal vessel segmentation model; during training, the fundus image registration model and the retinal vessel segmentation model are first trained independently on the vessel-labeled images, and then perform adversarial learning on the images without vessel labels.
2. The method as claimed in claim 1, wherein the vessel map corresponding to the multispectral fundus image is obtained by a retinal vessel segmentation model, and the fundus image registration model registers the multispectral fundus image according to the vessel map.
3. The multispectral fundus image analysis method based on adversarial learning as claimed in claim 1, characterized in that the specific process by which the fundus image registration model and the retinal vessel segmentation model perform adversarial learning on the images without vessel labels is:
inputting an image without a vessel label into the retinal vessel segmentation model to obtain a prediction label for that image, and training the fundus image registration model with the prediction label;
the fundus image registration model registers the prediction label to obtain a deformed vessel label, and the deformed vessel label is used to retrain the retinal vessel segmentation model.
4. The multispectral fundus image analysis method based on adversarial learning as claimed in claim 3, characterized in that the fundus image registration model is based on a U-Net network and is trained by computing a loss with vessel labels and back-propagating it, the loss function of the fundus image registration model being a weakly supervised loss function;
the retinal vessel segmentation model is based on a U-Net network and is trained by computing a loss with vessel labels and back-propagating it, the loss function of the retinal vessel segmentation model being a fully supervised loss function.
5. The method of claim 4, characterized in that the loss functions of the fundus image registration model include a regularization constraint loss function and a similarity loss function.
6. The method of claim 4 wherein the vessel labels used by the fundus image registration model include true vessel labels for vessel-labeled images and predicted labels for vessel-label-free images.
7. The method as claimed in claim 4, wherein the vessel labels used by the retinal vessel segmentation model include real vessel labels of the vessel labeled image and deformed vessel labels output by the fundus image registration model.
8. A multispectral fundus image analysis system based on adversarial learning, comprising:
a data acquisition module configured to acquire multispectral fundus images;
a data analysis module configured to register the multispectral fundus images through a fundus image analysis model, wherein the fundus image analysis model comprises a fundus image registration model and a retinal vessel segmentation model; the retinal vessel segmentation model produces the vessel map corresponding to the multispectral fundus images, and the fundus image registration model registers the multispectral fundus images according to the vessel map; during training, the fundus image registration model and the retinal vessel segmentation model are first trained independently on vessel-labeled images, and then perform adversarial learning on images without vessel labels.
9. An electronic device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the computer instructions, when executed by the processor, perform the steps of the multispectral fundus image analysis method based on adversarial learning according to any one of claims 1 to 7.
10. A computer-readable storage medium storing computer instructions which, when executed by a processor, perform the steps of the multispectral fundus image analysis method based on adversarial learning according to any one of claims 1 to 7.
CN202011006571.8A 2020-09-23 2020-09-23 Multispectral fundus image analysis method and system based on counterstudy Active CN112435281B (en)


Publications (2)

Publication Number Publication Date
CN112435281A true CN112435281A (en) 2021-03-02
CN112435281B CN112435281B (en) 2022-06-24





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant