CN112819828A - Fundus image processing method and device - Google Patents

Fundus image processing method and device

Info

Publication number
CN112819828A
CN112819828A
Authority
CN
China
Prior art keywords
image
blood vessel
processed
reference image
feature descriptor
Prior art date
Legal status
Pending
Application number
CN202110416454.7A
Other languages
Chinese (zh)
Inventor
张冬冬
Current Assignee
Beijing Zhizhen Internet Technology Co ltd
Original Assignee
Beijing Zhizhen Internet Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zhizhen Internet Technology Co ltd filed Critical Beijing Zhizhen Internet Technology Co ltd
Priority to CN202110416454.7A
Publication of CN112819828A
Priority to PCT/CN2021/112999 (WO2022222328A1)
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration using histogram techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a fundus image processing method and device. The method comprises the following steps: reading an image to be processed and a reference image; extracting a blood vessel image from the image to be processed and from the reference image, and locating the blood vessel intersection points of both images; constructing feature descriptors for the blood vessel intersection points of the image to be processed and for those of the reference image; registering the image to be processed with the reference image based on the two sets of feature descriptors; and fusing the registered image obtained from the registration with the reference image to obtain a fused image. The fundus image processing method provided by the embodiments of the application processes neonatal fundus images more effectively while effectively ensuring the accuracy of the processing result.

Description

Fundus image processing method and device
Technical Field
The present application relates to the field of medical image processing technologies, and in particular, to a method and an apparatus for processing an eye fundus image.
Background
Fundus images are widely used to assist ophthalmic diagnosis and treatment. Processing fundus images supports the diagnosis and detection of many fundus-related diseases, such as diabetic retinopathy, glaucoma and macular degeneration. However, most existing fundus image processing algorithms register adult fundus images captured with desktop cameras; no processing method has been designed around the characteristics of neonatal fundus images. Because neonates cooperate only to a limited extent during acquisition, it is difficult to capture standard fundus images comparable to adult ones, and several images per eye must be collected for a comprehensive diagnosis. Moreover, owing to the different acquisition conditions, most collected neonatal fundus images are of poor quality, so directly transferring adult fundus processing algorithms to neonatal fundus images yields low accuracy.
Disclosure of Invention
In view of this, the present application provides a fundus image processing method, which can process a fundus image of a neonate and effectively improve the accuracy of a fundus image processing result.
According to an aspect of the present application, there is provided a fundus image processing method including:
reading an image to be processed and a reference image;
respectively extracting a blood vessel image of the image to be processed and a blood vessel image of the reference image, and positioning a blood vessel intersection point of the image to be processed and a blood vessel intersection point of the reference image;
constructing a to-be-processed image feature descriptor for the blood vessel intersection of the to-be-processed image, and constructing a reference image feature descriptor for the blood vessel intersection of the reference image;
performing registration processing on the image to be processed and the reference image based on the image feature descriptor to be processed and the reference image feature descriptor;
and carrying out fusion processing on the registration image obtained by the registration processing and the reference image to obtain a corresponding fusion image.
In a possible implementation manner, when the blood vessel image of the image to be processed and the blood vessel image of the reference image are extracted, a pre-trained network model is used for extraction;
the network model comprises a down-sampling module and an up-sampling module which are sequentially cascaded;
the input of the up-sampling module is the output of the down-sampling module.
In a possible implementation manner, before extracting the blood vessel image of the image to be processed and the blood vessel image of the reference image, the method further includes: and performing at least one of size standardization processing and image enhancement processing on the image to be processed and the reference image.
In one possible implementation, the locating the blood vessel intersection of the image to be processed and the blood vessel intersection of the reference image includes:
deleting small-area blood vessels in the blood vessel image of the image to be processed and small-area blood vessels in the blood vessel image of the reference image, a small-area blood vessel being a blood vessel whose area is smaller than a preset area; the preset area may be set in the range [100, 200] pixels;
respectively thinning the blood vessel image of the image to be processed and the blood vessel image of the reference image, and converting the blood vessel image of the image to be processed and the blood vessel image of the reference image into binary images; wherein, the refined blood vessel image only comprises a skeleton of the blood vessel;
performing convolution filtering processing on the refined blood vessel image of the image to be processed and the refined blood vessel image of the reference image by using the constructed convolution kernel;
and retrieving pixels with pixel values larger than or equal to a preset pixel value in the blood vessel image of the image to be processed after convolution filtering as blood vessel intersections of the image to be processed, and retrieving pixels with pixel values larger than or equal to a preset pixel value in the blood vessel image of the reference image after convolution filtering as the blood vessel intersections of the reference image.
In one possible implementation, constructing a reference image feature descriptor for the vessel intersection of the reference image includes:
extracting a green channel image of the reference image;
for each blood vessel intersection point of the reference image, selecting a window area with a preset size on a green channel image of the reference image, and respectively calculating the gradient amplitude and the gradient direction of each pixel in the window area;
generating a gradient histogram according to the gradient amplitude and the gradient direction of each pixel in the window area;
determining the main direction of each blood vessel intersection point based on the gradient histogram corresponding to each blood vessel intersection point;
dividing the neighborhood of each blood vessel intersection point into a first preset number of primary sub-areas, and performing interpolation and rotation processing on each primary sub-area according to the determined main direction of the blood vessel intersection point; wherein the sizes of the primary subregions are the same;
in the rotated primary sub-area, with the blood vessel intersection point as the center, dividing the neighborhood of the blood vessel intersection point into a second preset number of secondary sub-areas again; wherein the sizes of the secondary subregions are the same;
and calculating the gradient histogram of each secondary subregion, and obtaining a corresponding reference image feature descriptor according to the gradient histogram of each secondary subregion.
In one possible implementation manner, determining a main direction of each blood vessel intersection point based on a gradient histogram corresponding to each blood vessel intersection point includes:
performing Gaussian smoothing on each generated gradient histogram, and retrieving an index value index of a bin corresponding to the maximum amplitude value in the gradient histogram after the Gaussian smoothing;
fitting a parabola to the bins index-1, index and index+1 and their corresponding amplitudes;
and the abscissa corresponding to the vertex of the parabola is the main direction of the blood vessel intersection point.
In a possible implementation manner, when a gradient histogram is generated according to the gradient magnitude and the gradient direction of each pixel in the window region, the gradient magnitude at each pixel point is multiplied by a gaussian weight;
wherein the Gaussian weights are generated according to a two-dimensional Gaussian function.
In a possible implementation manner, after generating the corresponding reference image feature descriptor according to the gradient histogram of each of the secondary sub-regions, the method further includes: and carrying out normalization processing on the generated reference image feature descriptors.
In one possible implementation manner, performing registration processing on the image to be processed and the reference image based on the image feature descriptor to be processed and the reference image feature descriptor includes:
calculating Euclidean distances between each to-be-processed image feature descriptor and each reference image feature descriptor, and screening out matching points from the to-be-processed image feature descriptors and the reference image feature descriptors according to the Euclidean distances;
solving an affine transformation matrix according to the screened matching points;
and transforming the image to be processed according to the affine transformation matrix to obtain a final registration image.
According to another aspect of the present application, there is also provided a fundus image processing apparatus, including an image reading module, a blood vessel image extraction module, a blood vessel intersection positioning module, a feature descriptor constructing module, an image registration module, and an image fusion module;
the image reading module is configured to read an image to be processed and a reference image;
the blood vessel image extraction module is configured to extract a blood vessel image of the image to be processed and a blood vessel image of the reference image respectively;
the blood vessel intersection point positioning module is configured to position the blood vessel intersection point of the image to be processed and the blood vessel intersection point of the reference image;
the feature descriptor construction module is configured to construct a feature descriptor of the image to be processed for the blood vessel intersection of the image to be processed, and construct a reference image feature descriptor for the blood vessel intersection of the reference image;
the image registration module is configured to perform registration processing on the image to be processed and the reference image based on the image feature descriptor to be processed and the reference image feature descriptor;
and the image fusion module is configured to perform fusion processing on the registration image obtained by the registration processing and the reference image to obtain a corresponding fusion image.
When fundus images are fused by this method, the blood vessel images of the image to be processed and of the reference image are first extracted, and the blood vessel intersection points of both images are located from the extracted blood vessel images. The feature descriptors of the image to be processed and of the reference image are then constructed at the located blood vessel intersection points. Compared with selecting feature points at random, as in the related art, this effectively improves the accuracy of the extracted feature descriptors. Consequently, when the image to be processed is registered based on the constructed feature descriptors of both images, the resulting registered image is more accurate. Moreover, for neonatal fundus images, the feature descriptors used for registration are likewise derived from the blood vessel intersection points located in those images, so the descriptors are better suited to the neonatal fundus image currently being registered; the fundus image processing method provided by the embodiments of the application can therefore register neonatal fundus images more effectively. Finally, fusing the registered image with the reference image makes the transition across their overlapping region more natural and effectively ensures the accuracy of the resulting fused image.
Other features and aspects of the present application will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the application and, together with the description, serve to explain the principles of the application.
Fig. 1 shows a flowchart of a fundus image processing method according to an embodiment of the present application;
fig. 2 is a schematic diagram showing a manner of fitting a parabola employed when determining a principal direction of a blood vessel intersection in an image in a fundus image processing method according to an embodiment of the present application;
fig. 3a illustrates a to-be-processed image read in the fundus image processing method according to an embodiment of the present application;
fig. 3b shows a reference image read in the fundus image processing method according to an embodiment of the present application;
FIG. 3c shows a registered image obtained by registering the images of FIGS. 3a and 3b using a fundus image processing method according to an embodiment of the present application;
fig. 4 illustrates a fused image obtained by fusing the obtained registration image and the reference image in the fundus image processing method according to the embodiment of the present application;
fig. 5 is a block diagram showing a configuration of a fundus image processing apparatus according to an embodiment of the present application.
Detailed Description
Various exemplary embodiments, features and aspects of the present application will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present application. It will be understood by those skilled in the art that the present application may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present application.
Fig. 1 shows a flowchart of a fundus image processing method according to an embodiment of the present application. As shown in fig. 1, the method includes: step S100, reading the image to be processed and the reference image. As will be understood by those skilled in the art, the image to be processed is the fundus image currently to be processed, and the reference image is a standard fundus image selected for processing the image to be processed. The reference image may be selected by conventional techniques in the art and is not specifically limited here. Step S200, extracting the blood vessel image of the image to be processed and the blood vessel image of the reference image, and locating the blood vessel intersection points of both images. Then, in step S300, feature descriptors are constructed for the blood vessel intersection points of the image to be processed and for those of the reference image. Further, in step S400, the image to be processed is registered with the reference image based on the two sets of feature descriptors. Finally, in step S500, the registered image obtained from the registration is fused with the reference image to obtain the corresponding fused image.
Therefore, in the fundus image processing method of the embodiment of the application, the blood vessel images of the image to be processed and of the reference image are extracted, the blood vessel intersection points of both images are located from those blood vessel images, and the feature descriptors of both images are then constructed at the located intersection points. When the image to be processed is subsequently registered based on these feature descriptors, the resulting registered image is more accurate. For neonatal fundus images in particular, the feature descriptors used for registration are derived from the blood vessel intersection points located in the neonatal images themselves, so they are better suited to the image currently being processed, and the method can handle neonatal fundus images more effectively. Finally, fusing the registered image with the reference image makes the transition across the overlapping region more natural, which further improves the accuracy of the fused fundus image and provides a clearer reference for subsequent examination.
In a possible implementation manner, in step S200, when extracting a blood vessel image of an image to be processed and a blood vessel image of a reference image, in the fundus image processing method according to the embodiment of the present application, a pre-trained network model may be used for extraction.
That is, in the fundus image processing method of the embodiment of the application, the blood vessel images of the image to be processed and of the reference image are extracted with deep learning, so fundus image processing is realized in combination with deep learning. Extracting the blood vessel images in this way effectively ensures the accuracy of the extracted blood vessel images and thus the accuracy of the final processing result.
In a possible implementation manner, when the blood vessel image is extracted in combination with a deep learning manner, the adopted network model may directly use a neural network commonly used in the art for image recognition and classification, or may be implemented by self-designing the neural network.
In the fundus image processing method of the embodiment of the present application, a self-designed neural network may be employed. The network structure of the self-designed neural network comprises a down-sampling module and an up-sampling module which are sequentially cascaded. Wherein, the input of the up-sampling module is the output of the down-sampling module.
Meanwhile, when the blood vessel images are extracted with deep learning, the purpose-designed network model must first be trained to convergence before it is used to extract the blood vessel image of the image to be processed and the blood vessel image of the reference image. Specifically, the constructed network model can be trained in the following manner.
First, a sample database for model training needs to be constructed. Namely, a gold-labeled database of fundus blood vessels is constructed. The method comprises the following specific construction steps:
first, the current preprocessed image is marked as I. Normalizing the size of the pre-processed image, wherein the normalized expression is as follows:
IS = imresize(I,[512,512])........................................(1)
In Equation (1), IS denotes the resized image, and [512, 512] indicates that its height and width are both 512 pixels.
Then, the image IS is enhanced so that its details become richer and the blood vessels are easier to identify. The expression of the enhancement is as follows:
IE =clahe(IS,clipLimit=2.0,tileGridSize=(8,8))....................(2)
In Equation (2), IE denotes the enhanced image, clahe is the publicly available contrast-limited adaptive histogram equalization (CLAHE) algorithm, and clipLimit and tileGridSize are its two hyper-parameters, set to 2.0 and (8, 8), respectively.
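For illustration only, a minimal Python sketch of this preprocessing (Equations (1) and (2)) using OpenCV might look as follows; the function name preprocess_fundus and the per-channel application of CLAHE to a colour image are assumptions, since the original does not state the colour space used:

import cv2

def preprocess_fundus(image, size=(512, 512), clip_limit=2.0, tile_grid_size=(8, 8)):
    # Equation (1): scale the image to 512 x 512
    resized = cv2.resize(image, size, interpolation=cv2.INTER_LINEAR)
    # Equation (2): CLAHE enhancement, applied per channel here (an assumption)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid_size)
    channels = [clahe.apply(ch) for ch in cv2.split(resized)]
    return cv2.merge(channels)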
Each neonatal fundus image to be annotated is then processed with the above steps in turn; after the operation is finished, the images are handed to a specialist physician for annotation, finally producing the gold-standard database of neonatal fundus blood vessels.
After the sample database is built, the data of each sample in the sample database can be input into the built network model for training. Here, it should be noted that since the network model used is designed by itself in one possible implementation of the fundus image processing method of the embodiment of the present application, the construction of the network structure of the network model is also required.
Namely, a deep vessel extraction learning network is designed and trained. The method comprises the following specific steps:
firstly, an encoder part of a learning network is constructed, wherein the encoder is used for continuously downsampling an input image and extracting high-dimensional features while downsampling, and the expression of the method is as follows:
Unite_e = Relu(BN(Conv_2d))(x)...................................(3)
x = Unite_e......................................................(4)
Encoder = repeate(maxpool(Unite_e),n)............................(5)
In Equation (3), Unite_e denotes the result of applying three operations to the input x: two-dimensional convolution (Conv_2d), batch normalization (BN) and the rectified linear unit (ReLU). The initial value of x is one batch of input images, i.e., a tensor of size batch_size × 3 × 512 × 512. Equation (4) means that the value of x is updated to the result Unite_e of the previous step. In Equation (5), Encoder denotes the result obtained after the whole down-sampling, maxpool denotes down-sampling of Unite_e with a step of 2, and repeate denotes repeated execution of maxpool(Unite_e), the number of repetitions being n. In one possible implementation, n may be set to 4.
The number of convolution kernels of Conv _2d in the 4-time down-sampling process is [64, 128, 256 and 512], and the sizes of the convolution kernels are uniformly set to be 3 x 3. The kernel size for all maxpool operations is 2 x 2, step size is 2.
Then, the decoder part of the learning network is constructed. The decoder continuously up-samples the output of the encoder so that it is restored to the same size as the input, and features are further fused during the up-sampling. The expression is as follows:
Pool_d = unmaxpool(encoder)......................................(6)
Unite_d = Relu(BN(Conv_2d))(Pool_d)..............................(7)
encoder = Unite_d................................................(8)
Decoder = repeate(Unite_d,n)....................................(9)
In Equation (6), unmaxpool denotes max-unpooling (up-sampling) of the encoder output. In Equation (7), Unite_d denotes the result of applying two-dimensional convolution (Conv_2d), batch normalization (BN) and the rectified linear unit (ReLU) to Pool_d. Equation (8) means that the value of encoder is updated to the result Unite_d of the previous step. In Equation (9), Decoder denotes the result obtained after the whole up-sampling, and repeate denotes repeated execution on Unite_d, the number of repetitions being n. As above, n may also be set to 4 here.
The number of convolution kernels of Conv _2d in the 4 upsampling processes is [256, 128, 64, 2], and the sizes of the convolution kernels are uniformly set to be 3 x 3. The kernel size for all unmaxpool operations is 2 x 2, step size is 2.
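As a rough illustration, the encoder-decoder described by Equations (3) to (9) could be sketched in PyTorch as follows; this is not the original implementation, and the class name VesselSegNet, the use of padding 1 to preserve spatial size, and the reuse of the max-pooling indices for unpooling are assumptions:

import torch.nn as nn

def conv_bn_relu(in_ch, out_ch):
    # Unite_e / Unite_d: Conv_2d -> BN -> ReLU with 3 x 3 kernels (Equations (3) and (7))
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class VesselSegNet(nn.Module):
    def __init__(self):
        super().__init__()
        enc_channels = [3, 64, 128, 256, 512]    # four down-sampling stages
        dec_channels = [512, 256, 128, 64, 2]    # four up-sampling stages, two output classes
        self.enc_blocks = nn.ModuleList(
            [conv_bn_relu(enc_channels[i], enc_channels[i + 1]) for i in range(4)])
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
        self.unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)
        self.dec_blocks = nn.ModuleList(
            [conv_bn_relu(dec_channels[i], dec_channels[i + 1]) for i in range(4)])

    def forward(self, x):                        # x: batch_size x 3 x 512 x 512
        indices = []
        for block in self.enc_blocks:            # Equations (3)-(5)
            x = block(x)
            x, idx = self.pool(x)
            indices.append(idx)
        for block, idx in zip(self.dec_blocks, reversed(indices)):  # Equations (6)-(9)
            x = self.unpool(x, idx)
            x = block(x)
        return x                                 # per-pixel vessel / background logits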
Then, the loss function of the network is set; a cross-entropy loss is used. Because background pixels greatly outnumber vessel pixels, i.e., negative samples greatly outnumber positive samples, different weights are assigned to the positive and negative samples to prevent the model training from over-fitting or failing to converge. The weights are defined as follows:
f(class) = frequency(class) / (image_count(class) * w * h).......(10)
weight(class) = median of f(class)) / f(class)...................(11)
in equation (10), frequency (class) represents the total number of pixels of the class in the whole training set; image _ count (class) represents the number of pictures that contain the class in the entire training set; w h is the length and width of the image. In the formula (11), mean of f (class)) is the median of f (class) calculated.
Finally, the network parameters are initialized with the Xavier initialization method, the number of epochs is set to 10 and the batch size to 8. The model parameters are optimized with the Adam algorithm at a learning rate of 0.01, and the model is trained until the network converges, which completes the training of the network model.
Further, before the image to be processed and the reference image are input into the trained network model to extract the blood vessel images, they should likewise undergo operations such as size normalization and image enhancement.
It should be noted that, when the size standardization processing and the image enhancement processing are performed on the image to be processed and the reference image, the standardization processing and the enhancement processing performed on the sample data in the process of constructing the sample database may be adopted, and details are not repeated here.
After the image to be processed and the reference image are subjected to corresponding standardization processing and image enhancement processing, the images can be input into a trained network model to extract blood vessel images, and the blood vessel intersection points are positioned after the blood vessel images are extracted.
Specifically, in a possible implementation manner, the to-be-processed image and the reference image are respectively input into a trained network model to extract the blood vessel image, and when the blood vessel intersection is located after the blood vessel image is extracted, the following manner is used to implement the following.
Here, it should be noted that, firstly, since the process of extracting the blood vessel image of the image to be processed and locating the blood vessel intersection based on the extracted blood vessel image is the same as or similar to the process of extracting the blood vessel image of the reference image and locating the blood vessel intersection based on the extracted blood vessel image, the following description will be given only by taking the reference image as an example, and the process of processing the image to be processed is not repeated.
Wherein, after the reference image Im _ fix is subjected to the standardization processing and the image enhancement processing according to the mode, the reference image Im _ fix is marked as Im _ e _ fix; then, the image Im _ e _ fix is input into the trained network model for forward reasoning to obtain a blood vessel image, and the obtained blood vessel image is recorded as Im _ v _ fix.
After the blood vessel image Im _ v _ fix of the reference image is extracted, the blood vessel intersection point can be located based on the extracted blood vessel image Im _ v _ fix.
Specifically, when the blood vessel intersection is located based on the extracted blood vessel image, this can be achieved in the following manner.
Firstly, small-area blood vessels are deleted from the blood vessel image of the image to be processed and from the blood vessel image of the reference image, a small-area blood vessel being one whose area is smaller than a preset area. The preset area may be set in the range [100, 200] pixels; for example, it may take the value of 100 pixels.
Then, respectively thinning the blood vessel image of the image to be processed and the blood vessel image of the reference image, and converting the blood vessel images into binary images; wherein, the refined blood vessel image only comprises the skeleton of the blood vessel.
And then carrying out convolution filtering processing on the refined blood vessel image of the image to be processed and the refined blood vessel image of the reference image by using the constructed convolution kernel.
And finally, retrieving pixels of which the pixel values are greater than or equal to a preset pixel value in the blood vessel image of the image to be processed after convolution filtering as blood vessel intersections of the image to be processed, and retrieving pixels of which the pixel values are greater than or equal to the preset pixel value in the blood vessel image of the reference image after convolution filtering as the blood vessel intersections of the reference image. Here, it should be noted that the size of the preset pixel value may be set to 3 when the search of the blood vessel intersection is performed.
In the above step, when performing convolution filtering processing on the refined blood vessel image of the image to be processed and the refined blood vessel image of the reference image, the convolution kernels used are:
[3 × 3 convolution kernel — not reproduced in the source; its entries are given as Equation (12) below]
that is, in the fundus image processing method according to the embodiment of the present application, in step S200, at the time of performing the positioning of the blood vessel intersection points, it is possible to extract the constructed feature descriptor by generating a corresponding gradient histogram for each blood vessel intersection point and then extracting based on the generated gradient histograms.
The reference image is again taken as an example. That is, after the blood vessel image Im_v_fix of the reference image has been extracted, locating the blood vessel intersection points based on Im_v_fix specifically includes:
first, a blood vessel with a small area in the blood vessel image Im _ v _ fix is deleted in order to remove noise interference, and the image after noise removal is still referred to as Im _ v _ fix.
Then, the image Im _ v _ fix is refined and converted into a binary image, only the skeleton of the blood vessel is reserved, and the refined image is still recorded as Im _ v _ fix.
Further, a 3 x 3 convolution kernel is constructed, filtering is carried out on the image Im _ v _ fix, and the filtered image is still marked as Im _ v _ fix;
kernel = [3 × 3 convolution kernel — not reproduced in the source]..............(12)
Im_v_fix = filter(Im_v_fix, kernel).............................(13)
equation (12) represents the constructed convolution kernel; equation (13) represents filtering Im _ v _ fix, and kernel is the convolution kernel constructed by equation (12).
Then, pixels with pixel values greater than or equal to 3 in Im _ v _ fix are searched, the pixels with pixel values greater than or equal to 3 are blood vessel intersections, and the Set of the searched blood vessel intersections is recorded as Set _ fix.
And executing the same operation on the image Im _ mov to be processed to obtain a blood vessel intersection Set _ mov of the image Im _ mov to be processed.
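For illustration, the intersection-location steps could be sketched in Python as follows; the exact 3 × 3 kernel of Equation (12) is not reproduced in the source, so the neighbour-counting kernel below (centre 0, eight ones) is an assumption, as are the helper names:

import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize, remove_small_objects

def locate_vessel_crossings(vessel_mask, min_area=100, threshold=3):
    # delete small-area vessels, keep only the skeleton, then count neighbours
    mask = remove_small_objects(vessel_mask.astype(bool), min_size=min_area)
    skeleton = skeletonize(mask)
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]], dtype=np.uint8)    # assumed form of the constructed kernel
    response = convolve(skeleton.astype(np.uint8), kernel, mode='constant')
    response[~skeleton] = 0                           # only skeleton pixels can be crossings
    ys, xs = np.nonzero(response >= threshold)        # >= 3 neighbours: a crossing point
    return list(zip(ys, xs))                          # Set_fix / Set_mov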
After the blood vessel image of the image to be processed is extracted and the blood vessel intersection is located, and the blood vessel image of the reference image is extracted and the blood vessel intersection is located, step S300 may be executed to extract and construct corresponding feature descriptors (i.e., the feature descriptors of the image to be processed and the feature descriptors of the reference image) based on the blood vessel intersection of the image to be processed and the blood vessel intersection of the reference image, respectively. In one possible implementation, this may be done in the following manner.
It should be noted that, since the extraction and construction method of the feature descriptor of the image to be processed is the same as that of the feature descriptor of the reference image, only the reference image is taken as an example for explanation.
Specifically, first, a green channel image of the reference image is extracted, which may be labeled Im _ g _ fix.
Then, for each blood vessel intersection point in the set Set_fix of blood vessel intersection points of the reference image, a window area of a preset size is selected on the green channel image of the reference image, and the gradient magnitude and gradient direction of each pixel in the window area are calculated. The size of the selected window area can be set flexibly according to the practical situation, for example according to the number of pixels selected, the resolution of the image and other parameters. In the fundus image processing method of the embodiment of the application, a 16 × 16 window area is used in the following; this area contains 256 pixels in total, and the gradient magnitude and gradient direction of each of the 256 pixels are calculated.
When the gradient amplitude and the gradient direction of each pixel point are calculated, the calculation can be performed according to the following modes:
m(x, y) = sqrt( (L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2 )..........(14)
theta(x, y) = arctan( (L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y)) )........(15)
In Equation (14), m(x, y) denotes the gradient magnitude of the pixel currently being processed, and L denotes Im_g_fix or Im_g_mov: Im_g_fix when the object being processed is an element of Set_fix, and Im_g_mov otherwise. In Equation (15), theta(x, y) denotes the gradient direction of the pixel currently being processed.
A gradient histogram is then generated from the gradient magnitudes and gradient directions of the pixels in the window area; that is, a corresponding gradient histogram is generated for each blood vessel intersection point from the pixels in its window area. The main direction of each blood vessel intersection point is then determined from its gradient histogram. When determining the main direction, a parabola-fitting method may be used: a parabola is fitted to the gradient histogram of each intersection point, and the abscissa of the parabola's vertex is the main direction of that blood vessel intersection point.
Then, dividing the neighborhood of each blood vessel intersection point into a first preset number of primary sub-areas, and performing interpolation and rotation processing on each primary sub-area according to the determined main direction of the blood vessel intersection point; wherein the primary subregions are the same size. Here, when the sub-region is divided into the neighborhood of each blood vessel intersection, a division window (e.g., a rectangular frame) having a certain size may be used. That is, after the center point of the division window is overlapped with the intersection of the blood vessel currently being processed, the division window is divided into a first preset number of sub-regions.
The value of the first preset number (i.e., the number of divided regions when the neighborhood of each blood vessel intersection is primarily divided) can be flexibly set according to the actual situation. Specifically, it may be set as: 4*4. Meanwhile, the size of the primary sub-area may be set to 4 × 4.
Secondly, in the rotated primary sub-area, the neighborhood of the blood vessel intersection point is divided into a second preset number of secondary sub-areas by taking the blood vessel intersection point as a center; wherein the size of each secondary sub-region is the same. Here, it should be noted that the manner of re-dividing the neighborhood of the blood vessel intersection is the same as or similar to the manner of primary division, and details thereof are not repeated here. It should be noted that the second preset number may take the following values: 4 x 4, the size of the secondary sub-region may be set to: 4*4.
And finally, calculating the gradient histogram of each secondary sub-region, and obtaining a corresponding reference image feature descriptor according to the gradient histogram of each secondary sub-region.
More specifically, in the above step, the main direction of each blood vessel intersection point may be determined by means of parabolic fitting. The method specifically comprises the following steps:
the generated histogram of each gradient is gaussian smoothed to reduce the influence of the mutation. After each gradient histogram is subjected to gaussian smoothing, the index value index of the bin corresponding to the maximum amplitude in the gradient histogram after the gaussian smoothing is retrieved. The angle interval corresponding to the bin is [ index × 10, index × 10+10], that is, the main direction range of the currently processed blood vessel intersection.
A parabola is then fitted to the bins index-1, index and index+1 and their corresponding amplitudes; the abscissa of the parabola's vertex is the main direction of the blood vessel intersection point, as shown in fig. 2.
Further, when the gradient histogram is generated from the gradient magnitude and gradient direction of each pixel in the window area, the gradient magnitudes and directions of all pixels in the window area are accumulated, with a preset number of degrees (e.g., 10 degrees) per bin, so that the gradient directions are quantized into a histogram of 36 (= 360/10) bins. The height of a bin is the sum of the gradient magnitudes of all pixels falling into that bin. The preset number of degrees may also be set to other angle values; in one possible implementation it takes the value of 10 degrees.
Meanwhile, the fundus image processing method of the embodiment of the application also takes into account that pixels closer to the center of the window area contribute more, while pixels farther from the center contribute less. Therefore, in the process of generating the gradient histogram, the gradient magnitude of each pixel is balanced by multiplying it by a Gaussian weight; that is, when the gradient histogram is generated from the gradient magnitude and gradient direction of each pixel in the window area, the gradient magnitude at each pixel is multiplied by a Gaussian weight.
Wherein the gaussian weight may be generated by a two-dimensional gaussian function. Specifically, the expression is as follows:
w(x, y) = (1 / (2 * pi * sigma^2)) * exp( -(x^2 + y^2) / (2 * sigma^2) )........(16)
further, in the fundus image processing method according to the embodiment of the present application, after generating the corresponding reference image feature descriptors from the gradient histograms of the secondary subregions, the method may further include: and carrying out normalization processing on the generated reference image feature descriptors to remove the influence of illumination change.
To describe more clearly how the feature descriptors are constructed for the blood vessel intersection points in the fundus image processing method of the embodiment of the application, the process is described below with the reference image as an example.
First, the green channel image of image Im _ e _ fix is extracted and labeled Im _ g _ fix. Then, for each blood vessel intersection in the Set _ fix of the blood vessel intersection of the reference image, a 16 × 16 window region is selected on the Im _ g _ fix image, the region has 256 pixel points, and for each pixel point, the gradient amplitude and the gradient direction of the pixel point are calculated according to a formula (14) and a formula (15).
And then, generating a gradient histogram according to the gradient amplitude and the gradient direction of each pixel point. Specifically, the gradient magnitude and gradient direction of all pixels in the 16 × 16 window region are counted, and 256 gradient directions are quantized into a histogram of 36 (360/10) bins with 1 bin per 10 degrees. Where the height of a bin represents the sum of the gradient magnitudes of all pixels falling into that bin.
Meanwhile, pixels closer to the center of the window area contribute more, while pixels farther from the center contribute less, so the gradient magnitude at each pixel also needs to be balanced by multiplying it by a Gaussian weight. The Gaussian weight is generated by a two-dimensional Gaussian function (i.e., as in Equation (16)).
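A possible Python sketch of this histogram construction (Equations (14) to (16)) is shown below; the border handling, the value of sigma and the function name are assumptions not stated in the original:

import numpy as np

def orientation_histogram(green, cy, cx, win=16, bins=36, sigma=8.0):
    # gradients of the green-channel image inside a win x win window around (cy, cx)
    half = win // 2
    patch = green[cy - half:cy + half, cx - half:cx + half].astype(np.float64)
    dx = np.zeros_like(patch); dy = np.zeros_like(patch)
    dx[:, 1:-1] = patch[:, 2:] - patch[:, :-2]         # L(x+1, y) - L(x-1, y)
    dy[1:-1, :] = patch[2:, :] - patch[:-2, :]         # L(x, y+1) - L(x, y-1)
    magnitude = np.hypot(dx, dy)                       # Equation (14)
    direction = np.degrees(np.arctan2(dy, dx)) % 360   # Equation (15)
    # two-dimensional Gaussian weights centred on the crossing point (Equation (16))
    yy, xx = np.mgrid[-half:half, -half:half]
    weights = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    hist = np.zeros(bins)
    idx = (direction // (360 // bins)).astype(int) % bins
    np.add.at(hist, idx, magnitude * weights)          # bin height = sum of weighted magnitudes
    return hist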
Then, the generated histogram of each gradient is gaussian smoothed to reduce the influence of the mutation. When each gradient histogram is subjected to Gaussian smoothing, the expression is as follows:
bin[i] = 0.25 * bin[i-1] + 0.5*bin[i] + 0.25*bin[i+1]............(17)
in the formula (17), bin [ i ] represents the amplitude corresponding to the current bin, bin [ i-1] represents the amplitude corresponding to the previous bin, and bin [ i +1] represents the amplitude corresponding to the next bin.
Furthermore, the index value index of the bin corresponding to the maximum amplitude in the gradient histogram after the gaussian smoothing is retrieved, and the angle interval corresponding to the bin is [ index × 10, index × 10+10], that is, the main direction range of the currently processed blood vessel intersection.
Then, a parabola is fitted to the bins index-1, index and index+1 and their corresponding amplitudes; the abscissa of the parabola's vertex is the main direction of the blood vessel intersection point.
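For illustration, the smoothing of Equation (17) and the parabola fit could be sketched as follows; the circular wrap-around of the histogram and the mapping from bin coordinates to degrees are assumptions:

import numpy as np

def main_direction(hist, bin_width=10.0):
    # Equation (17): smooth the histogram (treated as circular here)
    smoothed = 0.25 * np.roll(hist, 1) + 0.5 * hist + 0.25 * np.roll(hist, -1)
    index = int(np.argmax(smoothed))
    prev_i, next_i = (index - 1) % len(hist), (index + 1) % len(hist)
    # fit y = a*x^2 + b*x + c through the three bins around the peak
    xs = np.array([index - 1, index, index + 1], dtype=np.float64)
    ys = np.array([smoothed[prev_i], smoothed[index], smoothed[next_i]])
    a, b, _ = np.polyfit(xs, ys, 2)
    vertex = -b / (2.0 * a) if a != 0 else float(index)
    return (vertex + 0.5) * bin_width   # assumed mapping from bin coordinate to degrees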
The neighborhood around the blood vessel intersection point currently being processed is then divided into d × d sub-regions (i.e., primary sub-regions), each of size k × k. Since bilinear interpolation and rotation are needed in the actual calculation, the image area actually involved in the calculation is correspondingly larger [its size is given by an expression not reproduced in the source]. In this application d = 4 and k = 5. The gradient positions and directions of the pixels in the region are rotated by an orientation angle theta, which is the main direction of the blood vessel intersection point determined in the previous step.
Furthermore, in the rotated primary sub-region, the neighborhood of the blood vessel intersection 8 × 8 is divided into 2 × 2 sub-regions (i.e., secondary sub-regions) with the blood vessel intersection as the center. Each secondary sub-region is 4 x 4 in size. The gradient histogram for each secondary sub-region is calculated in the manner described above for the gradient histogram generation, respectively. The bin here takes a value in the range of 45 degrees. Each secondary sub-region may generate 8(360/45) gradient features. The secondary subregions of 4 x 4 can generate 128(4 x 8) dimensional feature descriptors, which are denoted as descriptors.
And finally, normalizing the feature descriptors to remove the influence of illumination change. In one possible implementation, the expression of the normalization process is as follows:
descriptors = descriptors / sqrt( sum_i descriptors[i]^2 )......................(18)
therefore, the process of extracting and constructing the feature descriptors for the blood vessel intersection of the reference image can be completed. Similarly, the same operation is performed on the image to be processed in the manner described above by 4, so that the extraction of the feature descriptors of the image to be processed can be realized.
After the feature descriptors for the blood vessel intersection points of the reference image and of the image to be processed have been constructed in any of the above manners, step S400 may be executed: the image to be processed is registered with the reference image based on the constructed feature descriptors of the two images.
In a possible implementation manner, the Euclidean distance between each feature descriptor of the image to be processed and each feature descriptor of the reference image may be calculated, and matching points are then screened out from the two sets of feature descriptors according to the calculated Euclidean distances. An affine transformation matrix is then solved from the screened matching points. Finally, the image to be processed is transformed with the affine transformation matrix to obtain the final registered image.
That is, when registering the image to be processed, successfully matched intersection points may be screened out as matching points based on the Euclidean distances between the feature descriptors of the image to be processed and those of the reference image; the affine transformation matrix is then computed from the screened matching points, and the image registration is finally performed with the solved affine transformation matrix. Screening the matching points and solving the affine transformation matrix from them further ensures the accuracy of the registration result when the registration is performed according to that matrix.
When the matching points are screened, the calculated Euclidean distance can be compared and judged with a preset threshold value T. It should be noted that the value size of the preset threshold T determines the accuracy of the screening result of the matching point. In a possible implementation manner, the value of the preset threshold T may be: [0.3,0.5]. Preferably, the value of the preset threshold T may be 0.3.
Meanwhile, it should be noted that, when the euclidean distance between the to-be-processed image feature descriptor and the reference image feature descriptor is calculated, the euclidean distance between each to-be-processed image feature descriptor and each reference image feature descriptor is calculated.
In addition, when the affine transformation matrix is solved according to the screened matching points, the affine transformation matrix can be realized by using a least square method. And will not be described in detail herein.
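A minimal Python sketch of this matching and registration step is given below for illustration; cv2.estimateAffine2D performs a robust fit rather than the plain least-squares solve described above, and the point layout and function name are assumptions:

import numpy as np
import cv2

def register(desc_mov, pts_mov, desc_fix, pts_fix, image_mov, T=0.3):
    matches_src, matches_dst = [], []
    for d, p in zip(desc_mov, pts_mov):
        dists = np.linalg.norm(desc_fix - d, axis=1)   # Euclidean distance to every fixed descriptor
        j = int(np.argmin(dists))
        if dists[j] < T:                               # accept as a matching point
            matches_src.append(p)
            matches_dst.append(pts_fix[j])
    src = np.float32(matches_src)[:, ::-1]             # (row, col) -> (x, y), an assumed layout
    dst = np.float32(matches_dst)[:, ::-1]
    M, _ = cv2.estimateAffine2D(src, dst)               # robust affine fit (stands in for least squares)
    h, w = image_mov.shape[:2]
    return cv2.warpAffine(image_mov, M, (w, h))         # registered image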
Fig. 3c shows the registered image obtained after registering the image to be processed shown in fig. 3a against the reference image shown in fig. 3b with the fundus image processing method of the embodiment of the application. It can be clearly seen from the registered image that the method effectively registers the neonatal fundus images while ensuring the accuracy of the registration result.
Further, after the registration of the image to be processed is completed in any of the above manners to obtain the corresponding registered image, the registered image obtained by the registration processing and the reference image may be fused to obtain the corresponding fused image in step S500.
Specifically, in a possible implementation manner, an image averaging fusion method may be used to fuse the images, with the formula: fusion = 0.5 * (ima1 + ima2), where ima1 denotes the reference image, ima2 denotes the registered image, and fusion denotes the fused image.
That is, the registered images may be further fused. The purpose of fusion is to make the two registered images transition more naturally in their overlapping area and to give the physician a wider area to read. Retinal zoning of the neonatal fundus can then be performed on the fused image; this zoning provides a more accurate auxiliary reference for physicians assessing how the condition affects the neonate, and zone I is particularly important. As those skilled in the art will understand, zone I is a circular area centered on the optic disc whose radius is twice the distance from the optic disc to the macular fovea. Before registration or fusion, the coverage of a single fundus image is relatively small and zone I is difficult to show on one fundus image; because the fused image enlarges the visible region of the fundus, zone I can be shown on the fused fundus image, as shown in fig. 4.
It should be noted that, as those skilled in the art can understand, the specific implementation of each step in the fundus image processing method according to the embodiment of the present application is not limited to the implementations described above. In fact, the specific manner of each step can be set flexibly according to personal preference and/or the actual application scenario, as long as the processing of the neonatal fundus image is effectively realized and the accuracy of the processing result is ensured.
Correspondingly, the application also provides a fundus image processing device. Since the working principle of the fundus image processing apparatus provided by the present application is the same as or similar to that of the fundus image processing method of the present application, repeated descriptions are omitted.
Referring to fig. 5, the present application provides a fundus image processing apparatus 100 comprising: an image reading module 110, a blood vessel image extraction module 120, a blood vessel intersection locating module 130, a feature descriptor construction module 140, an image registration module 150 and an image fusion module 160. The image reading module 110 is configured to read the image to be processed and the reference image. The blood vessel image extraction module 120 is configured to extract a blood vessel image of the image to be processed and a blood vessel image of the reference image, respectively. The blood vessel intersection locating module 130 is configured to locate the blood vessel intersections of the image to be processed and the blood vessel intersections of the reference image. The feature descriptor construction module 140 is configured to construct to-be-processed image feature descriptors for the blood vessel intersections of the image to be processed, and to construct reference image feature descriptors for the blood vessel intersections of the reference image. The image registration module 150 is configured to register the image to be processed with the reference image based on the to-be-processed image feature descriptors and the reference image feature descriptors. The image fusion module 160 is configured to fuse the registered image obtained by the registration processing with the reference image to obtain a corresponding fused image.
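Merely as an organisational sketch of how the modules of apparatus 100 fit together (the class, its method names and its callable-based wiring are assumptions for illustration, not the claimed apparatus):

```python
class FundusImageProcessor:
    """Illustrative composition of the modules shown in fig. 5."""

    def __init__(self, reader, vessel_extractor, intersection_locator,
                 descriptor_builder, registrar, fuser):
        self.reader = reader                              # image reading module 110
        self.vessel_extractor = vessel_extractor          # blood vessel image extraction module 120
        self.intersection_locator = intersection_locator  # blood vessel intersection locating module 130
        self.descriptor_builder = descriptor_builder      # feature descriptor construction module 140
        self.registrar = registrar                        # image registration module 150
        self.fuser = fuser                                # image fusion module 160

    def process(self, to_process_path, reference_path):
        # read the image to be processed and the reference image
        img, ref = self.reader(to_process_path), self.reader(reference_path)
        # extract the blood vessel image of each and locate vessel intersections
        pts_img = self.intersection_locator(self.vessel_extractor(img))
        pts_ref = self.intersection_locator(self.vessel_extractor(ref))
        # construct feature descriptors at the intersections
        desc_img = self.descriptor_builder(img, pts_img)
        desc_ref = self.descriptor_builder(ref, pts_ref)
        # register the image to be processed to the reference, then fuse
        registered = self.registrar(img, ref, pts_img, pts_ref, desc_img, desc_ref)
        return self.fuser(ref, registered)
```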
When fundus images are processed by the fundus image processing apparatus 100 of the present application, the blood vessel images are extracted by deep learning, the blood vessel intersections in the blood vessel images are located, the located blood vessel intersections are then used as stable feature points, and corresponding feature descriptors are constructed for these feature points to register the images, so that the processing of neonatal fundus images is effectively realized while the accuracy of the image processing result is improved.
Having described embodiments of the present application, the foregoing description is intended to be exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements over the techniques in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A fundus image processing method, comprising:
reading an image to be processed and a reference image;
respectively extracting a blood vessel image of the image to be processed and a blood vessel image of the reference image, and positioning a blood vessel intersection point of the image to be processed and a blood vessel intersection point of the reference image;
constructing a to-be-processed image feature descriptor for the blood vessel intersection of the to-be-processed image, and constructing a reference image feature descriptor for the blood vessel intersection of the reference image;
performing registration processing on the image to be processed and the reference image based on the image feature descriptor to be processed and the reference image feature descriptor;
and carrying out fusion processing on the registration image obtained by the registration processing and the reference image to obtain a corresponding fusion image.
2. The method according to claim 1, wherein when extracting the blood vessel image of the image to be processed and the blood vessel image of the reference image, a pre-trained network model is used for extraction;
the network model comprises a down-sampling module and an up-sampling module which are sequentially cascaded;
the input of the up-sampling module is the output of the down-sampling module.
3. The method according to claim 1, wherein before extracting the vessel image of the image to be processed and the vessel image of the reference image, further comprising: and performing at least one of size standardization processing and image enhancement processing on the image to be processed and the reference image.
4. The method according to any one of claims 1 to 3, wherein locating the vessel intersection of the image to be processed and the vessel intersection of the reference image comprises:
deleting small-area blood vessels in the blood vessel image of the image to be processed and small-area blood vessels in the blood vessel image of the reference image; wherein a small-area blood vessel is a blood vessel with an area smaller than a preset area, and the preset area may take a value in the range [100, 200] pixels;
respectively thinning the blood vessel image of the image to be processed and the blood vessel image of the reference image, and converting the blood vessel image of the image to be processed and the blood vessel image of the reference image into binary images; wherein, the refined blood vessel image only comprises a skeleton of the blood vessel;
performing convolution filtering processing on the refined blood vessel image of the image to be processed and the refined blood vessel image of the reference image by using the constructed convolution kernel;
and retrieving pixels with pixel values larger than or equal to a preset pixel value in the blood vessel image of the image to be processed after convolution filtering as blood vessel intersections of the image to be processed, and retrieving pixels with pixel values larger than or equal to a preset pixel value in the blood vessel image of the reference image after convolution filtering as the blood vessel intersections of the reference image.
5. The method according to any one of claims 1 to 3, wherein constructing a reference image feature descriptor for a vessel intersection of the reference image comprises:
extracting a green channel image of the reference image;
for each blood vessel intersection point of the reference image, selecting a window area with a preset size on a green channel image of the reference image, and respectively calculating the gradient amplitude and the gradient direction of each pixel in the window area;
generating a gradient histogram according to the gradient amplitude and the gradient direction of each pixel in the window area;
determining the main direction of each blood vessel intersection point based on the gradient histogram corresponding to each blood vessel intersection point;
dividing the neighborhood of each blood vessel intersection point into a first preset number of primary sub-areas, and performing interpolation and rotation processing on each primary sub-area according to the determined main direction of the blood vessel intersection point; wherein the sizes of the primary subregions are the same;
in the rotated primary sub-area, with the blood vessel intersection point as the center, dividing the neighborhood of the blood vessel intersection point into a second preset number of secondary sub-areas again; wherein the sizes of the secondary subregions are the same;
and calculating the gradient histogram of each secondary subregion, and obtaining a corresponding reference image feature descriptor according to the gradient histogram of each secondary subregion.
6. The method of claim 5, wherein determining the principal direction of each of the vessel intersections based on the gradient histogram corresponding to each of the vessel intersections comprises:
performing Gaussian smoothing on each generated gradient histogram, and retrieving an index value index of a bin corresponding to the maximum amplitude value in the gradient histogram after the Gaussian smoothing;
fitting a parabola according to the (index-1)-th, index-th and (index+1)-th bins and their corresponding amplitudes;
and the abscissa corresponding to the vertex of the parabola is the main direction of the blood vessel intersection point.
7. The method of claim 5, wherein, when generating the gradient histogram according to the gradient magnitude and gradient direction of each pixel in the window region, the gradient magnitude at each pixel point is multiplied by a Gaussian weight;
wherein the Gaussian weights are generated according to a two-dimensional Gaussian function.
8. The method of claim 5, wherein after generating the corresponding reference image feature descriptor from the gradient histogram of each of the secondary sub-regions, further comprising: and carrying out normalization processing on the generated reference image feature descriptors.
9. The method according to claim 1, wherein the registration processing of the image to be processed and the reference image based on the image feature descriptor to be processed and the reference image feature descriptor comprises:
calculating Euclidean distances between each to-be-processed image feature descriptor and each reference image feature descriptor, and screening out matching points from the to-be-processed image feature descriptors and the reference image feature descriptors according to the Euclidean distances;
solving an affine transformation matrix according to the screened matching points;
and transforming the image to be processed according to the affine transformation matrix to obtain a final registration image.
10. A fundus image processing device, characterized by comprising an image reading module, a blood vessel image extraction module, a blood vessel intersection positioning module, a feature descriptor construction module, an image registration module and an image fusion module;
the image reading module is configured to read an image to be processed and a reference image;
the blood vessel image extraction module is configured to extract a blood vessel image of the image to be processed and a blood vessel image of the reference image respectively;
the blood vessel intersection point positioning module is configured to position the blood vessel intersection point of the image to be processed and the blood vessel intersection point of the reference image;
the feature descriptor construction module is configured to construct a feature descriptor of the image to be processed for the blood vessel intersection of the image to be processed, and construct a reference image feature descriptor for the blood vessel intersection of the reference image;
the image registration module is configured to perform registration processing on the image to be processed and the reference image based on the image feature descriptor to be processed and the reference image feature descriptor;
and the image fusion module is configured to perform fusion processing on the registration image obtained by the registration processing and the reference image to obtain a corresponding fusion image.
CN202110416454.7A 2021-04-19 2021-04-19 Fundus image processing method and device Pending CN112819828A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110416454.7A CN112819828A (en) 2021-04-19 2021-04-19 Fundus image processing method and device
PCT/CN2021/112999 WO2022222328A1 (en) 2021-04-19 2021-08-17 Eye fundus image processing method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110416454.7A CN112819828A (en) 2021-04-19 2021-04-19 Fundus image processing method and device

Publications (1)

Publication Number Publication Date
CN112819828A true CN112819828A (en) 2021-05-18

Family

ID=75863677

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110416454.7A Pending CN112819828A (en) 2021-04-19 2021-04-19 Fundus image processing method and device

Country Status (2)

Country Link
CN (1) CN112819828A (en)
WO (1) WO2022222328A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114387209A (en) * 2021-12-03 2022-04-22 依未科技(北京)有限公司 Method, apparatus, medium, and device for fundus structural feature determination
WO2022222328A1 (en) * 2021-04-19 2022-10-27 北京至真互联网技术有限公司 Eye fundus image processing method and apparatus

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105809626A (en) * 2016-03-08 2016-07-27 长春理工大学 Self-adaption light compensation video image splicing method
CN106548491A (en) * 2016-09-30 2017-03-29 深圳大学 A kind of method for registering images, its image interfusion method and its device
CN108764286A (en) * 2018-04-24 2018-11-06 电子科技大学 The classifying identification method of characteristic point in a kind of blood-vessel image based on transfer learning
CN111583262A (en) * 2020-04-23 2020-08-25 北京小白世纪网络科技有限公司 Blood vessel segmentation method and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8194936B2 (en) * 2008-04-25 2012-06-05 University Of Iowa Research Foundation Optimal registration of multiple deformed images using a physical model of the imaging distortion
CN112819828A (en) * 2021-04-19 2021-05-18 北京至真互联网技术有限公司 Fundus image processing method and device

Also Published As

Publication number Publication date
WO2022222328A1 (en) 2022-10-27

Similar Documents

Publication Publication Date Title
CN110120047B (en) Image segmentation model training method, image segmentation method, device, equipment and medium
Dharmawan et al. A new hybrid algorithm for retinal vessels segmentation on fundus images
Melinscak et al. Retinal Vessel Segmentation using Deep Neural Networks.
Zhang et al. Pyramid u-net for retinal vessel segmentation
Dash et al. An unsupervised approach for extraction of blood vessels from fundus images
Hassan et al. Joint segmentation and quantification of chorioretinal biomarkers in optical coherence tomography scans: A deep learning approach
Panda et al. New binary Hausdorff symmetry measure based seeded region growing for retinal vessel segmentation
Abramoff et al. The automatic detection of the optic disc location in retinal images using optic disc location regression
CN113728335A (en) Method and system for classification and visualization of 3D images
WO2022222328A1 (en) Eye fundus image processing method and apparatus
CN111862009A (en) Classification method of fundus OCT images and computer-readable storage medium
Alqudah et al. Artificial intelligence hybrid system for enhancing retinal diseases classification using automated deep features extracted from OCT images
Matovinovic et al. Transfer learning with U-Net type model for automatic segmentation of three retinal layers in optical coherence tomography images
Manikandan et al. Glaucoma Disease Detection Using Hybrid Deep Learning Model
Alagirisamy Micro statistical descriptors for glaucoma diagnosis using neural networks
Al Jannat et al. Detection of multiple sclerosis using deep learning
Dharmawan et al. Design of optimal adaptive filters for two-dimensional filamentary structures segmentation
Jadhav et al. Computer-aided diabetic retinopathy diagnostic model using optimal thresholding merged with neural network
Badeka et al. Evaluation of LBP variants in retinal blood vessels segmentation using machine learning
Sundaram et al. An automated eye disease prediction system using bag of visual words and support vector machine
Galveia et al. Computer aided diagnosis in ophthalmology: Deep learning applications
Vij et al. A hybrid evolutionary weighted ensemble of deep transfer learning models for retinal vessel segmentation and diabetic retinopathy detection
Asawa et al. Deep learning approaches for determining optimal cervical cancer treatment
Choudhury et al. Automated Detection of Central Retinal Vein Occlusion Using Convolutional Neural Network
Kumari et al. Identification and classification of cervical cancer using convolutional neural network based on Fisher score

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination