CN111815617A - Hyperspectrum-based eye fundus image detection method - Google Patents

Hyperspectrum-based eye fundus image detection method

Info

Publication number
CN111815617A
Authority
CN
China
Prior art keywords
image
fundus
hyperspectral
diagnosis
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010708640.3A
Other languages
Chinese (zh)
Other versions
CN111815617B (en)
Inventor
Li Wenjun (李文军)
Long Wei (龙伟)
Gao Zetian (高泽天)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN202010708640.3A priority Critical patent/CN111815617B/en
Publication of CN111815617A publication Critical patent/CN111815617A/en
Application granted granted Critical
Publication of CN111815617B publication Critical patent/CN111815617B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention discloses a hyperspectral fundus image detection method and relates to the field of fundus image detection. The method is as follows: the person to be examined looks directly into the optical system of the device while a hyperspectral fundus camera photographs the fundus; the acquired image data and spectral data are processed and analyzed; on this basis, a machine diagnosis is produced with the help of a big data platform and a manual diagnosis is given from the doctor's clinical experience; and both results are displayed. The method is non-invasive: from the fundus images of different wavebands and the machine diagnosis results, doctors can evaluate the state of a patient's eye disease in detail, greatly reducing the risk of missed diagnosis and misdiagnosis and greatly improving the efficiency of clinical diagnostic work.

Description

Hyperspectrum-based eye fundus image detection method
Technical Field
The invention relates to the technical field of fundus image detection, in particular to a hyperspectral fundus image detection method.
Background
The eye is one of the most important human organs: through the eyes, people directly observe complex environments. The fundus is the tissue at the back of the interior of the eye, namely the inner membrane of the eyeball: the retina, the optic papilla, the macula, and the central retinal vessels. These fundus tissues carry a wealth of information; from changes in them, doctors can diagnose certain conditions or predict the onset of certain diseases, such as diabetic retinopathy, age-related macular degeneration, and glaucoma. A growing body of research shows that many eye diseases are closely related to the fundus. In addition, fundus lesions have become the leading cause of blindness in the elderly, so the fundus has become an important object of clinical examination in modern medicine.
At present, fundus examination mainly relies on fundus color photography, OCT, fluorescein angiography, and similar means to observe the fundus; however, limited by their principles, these means suffer from a restricted observation range, requirements on the patient's physical condition, invasiveness to the patient, and insufficient accuracy.
Disclosure of Invention
To overcome these shortcomings, the invention provides a hyperspectral-based fundus image detection method. The method detects a patient's fundus images non-invasively and, following the principle that different wavebands reflect different layers of the fundus, provides processed images for the green, yellow, red, infrared, and amber light wavebands, greatly enlarging the detection range and improving the efficiency of doctors' diagnostic work.
A hyperspectral-based fundus image detection method comprises the following steps in order:
S1, aligning both eyes of the person to be examined with the optical system of the hyperspectral fundus camera for direct viewing, and capturing and recording the fundus information of the person to be examined with the hyperspectral fundus camera;
S2, performing spectrum preprocessing and image preprocessing separately on the fundus data acquired by the hyperspectral fundus camera, wherein the image preprocessing of the following step can be performed only after the spectral data processing is complete;
S3, dividing the recorded spectral information into a green band (492-577 nm), a yellow band (570-585 nm), a red band (625-740 nm), an infrared band (760-860 nm), and amber light (590 nm); except for the amber light, each of the divided bands is processed within its own interval by principal component analysis (PCA) to obtain the principal component data of that range: green principal component G, yellow principal component Y, red principal component R, and infrared principal component IR;
S4, performing fundus image analysis and fundus spectrum analysis on the data obtained in S2 and S3, respectively, and sending the sample data to a big database, which provides reference information for fundus image diagnosis so that the doctor can conveniently determine the condition;
S5, displaying the diagnosis result for the fundus condition, the displayed contents including: the images processed from the principal components of each band together with the fused image, the diagnostic information provided by the big database, and the result of the doctor's manual diagnosis.
Further, the hyperspectral fundus camera of step S1 is formed by coupling a hyperspectral imaging system with a fundus camera; the person to be examined only needs to look directly into the optical system with both eyes, and the hyperspectral fundus camera photographs the fundus of the person to be examined and acquires the information.
Further, the SG (Savitzky-Golay) smoothing and the continuous wavelet transform adopted in the spectrum preprocessing of step S2 serve, respectively, to reduce the influence of random noise, improving the signal-to-noise ratio of the spectrogram, and to subtract the interference of the instrument background from the signal, as sketched below.
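As an illustration of this spectral step, here is a minimal Python sketch. It uses SciPy's Savitzky-Golay filter for the smoothing; since the patent does not give its continuous-wavelet parameters, a simple polynomial baseline fit stands in for the background subtraction, and all function and parameter names are our assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter

def preprocess_spectrum(wavelengths, intensities,
                        window=11, polyorder=3, baseline_deg=2):
    """Hypothetical sketch of the step-S2 spectrum preprocessing:
    SG smoothing against random noise, then subtraction of a slowly
    varying instrument background (here a polynomial fit, standing in
    for the patent's continuous wavelet transform)."""
    # Savitzky-Golay smoothing raises the SNR of the spectrogram
    smoothed = savgol_filter(intensities, window_length=window,
                             polyorder=polyorder)
    # Estimate and subtract the instrument background
    coeffs = np.polyfit(wavelengths, smoothed, deg=baseline_deg)
    background = np.polyval(coeffs, wavelengths)
    return smoothed - background
```

The image preprocessing of step S2 comprises the following steps: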
S201, first correct the fundus image, including gray correction and geometric correction. The acquired color image is converted to grayscale; after the grayscale image of the G channel is selected, a background mask is extracted from the hyperspectral fundus image in order to exclude the influence of the area outside the fundus observation region and to reduce the subsequent computation of fundus image registration and image analysis. The background mask extraction algorithm is as follows:
(1) Let the hyperspectral fundus image have L gray levels, so that the gray range is [0, L-1], and let the overall mean of the image be u:
u = w_B(t)·u_B(t) + w_o(t)·u_o(t)
A gray threshold t divides the pixels of the image into foreground and background: the proportion of foreground pixels in the image is w_o(t) with mean u_o(t), and the proportion of background pixels is w_B(t) with mean u_B(t).
(2) The optimal threshold of the image is taken as the value that maximizes the between-class variance:
$$t^{*} = \arg\max_{0 \le t \le L-1}\, w_B(t)\, w_o(t)\,\big(u_B(t) - u_o(t)\big)^{2}$$
After the optimal threshold t* is obtained, the image f(x, y) is threshold-segmented according to the rule:
$$g(x,y) = \begin{cases} 1, & f(x,y) \ge t^{*} \\ 0, & f(x,y) < t^{*} \end{cases}$$
After the segmentation, in order to eliminate the influence of the hyperspectral fundus camera's image distortion on the analysis, the masked image is geometrically corrected by a polynomial coordinate transformation.
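A minimal sketch of this mask extraction in Python (NumPy only; the function name and the 8-bit assumption are ours) evaluates the between-class variance above at every candidate threshold and then applies the segmentation rule:

```python
import numpy as np

def otsu_mask(gray):
    """Hypothetical sketch of the background-mask step: exhaustive
    Otsu search over all gray levels, then binary segmentation."""
    levels = 256                       # 8-bit image, so L = 256
    hist = np.bincount(gray.ravel(), minlength=levels) / gray.size
    g = np.arange(levels)
    best_t, best_var = 0, -1.0
    for t in range(1, levels):
        w_b, w_o = hist[:t].sum(), hist[t:].sum()    # class proportions
        if w_b == 0 or w_o == 0:
            continue
        u_b = (g[:t] * hist[:t]).sum() / w_b         # class means
        u_o = (g[t:] * hist[t:]).sum() / w_o
        var = w_b * w_o * (u_b - u_o) ** 2           # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return (gray >= best_t).astype(np.uint8)         # 1 = fundus, 0 = background
```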
S202, use a Gabor filter to extract the useful information from the corrected image, use the top-hat transform to extract the vessel trunks and fine blood vessels, and finally combine the two methods to complete a dynamically adjustable enhancement of the fundus image so that its details become clearer. The algorithm is as follows:
(1) Taking the real part of the general expression of the two-dimensional Gabor filter and dropping the leading constant gives
$$G_{\theta}(x,y) = \exp\!\left(-\frac{x_{\theta}^{2} + y_{\theta}^{2}}{2\sigma^{2}}\right)\cos(2\pi f x_{\theta}),$$
where θ is the direction of the filter, taking the values θ_i = iπ/N for i = 0, 1, …, N-1; N is the number of directions, N = 18; f is the center frequency of the filter; x_θ = x·cos θ + y·sin θ and y_θ = -x·sin θ + y·cos θ are the rotated coordinates; σ is the Gaussian envelope space constant, σ = k·s/π with k ∈ [0.5, 1.5]; and s is the size of the filter mask.
(2) Let the current image be I(x, y). For the above filter, select for each direction θ a small scale α and a large scale β; the corresponding masks are G_{α,θ} and G_{β,θ}, and their filtering results are
$$F_{\alpha}(x,y,\theta) = I(x,y) * G_{\alpha,\theta}(x,y), \qquad F_{\beta}(x,y,\theta) = I(x,y) * G_{\beta,\theta}(x,y).$$
The detail image of I(x, y) in the direction θ is then
$$F(x,y,\theta) = F_{\alpha}(x,y,\theta) - F_{\beta}(x,y,\theta).$$
(3) The above filtering is performed in the 18 directions; the value of the final image F_1(x, y) at any pixel (i, j) is
F_1(i, j) = max[F(i, j, 0), F(i, j, π/N), F(i, j, 2π/N), …, F(i, j, (N-1)π/N)]
A top-hat transform is then applied to the image I(x, y) in the 18 directions and the results are summed, giving the image F_2(x, y):
$$F_2(x,y) = \sum_{i=0}^{N-1}\left[I(x,y) - (I \circ B_i)(x,y)\right]$$
where ∘ denotes the morphological opening operation and B_i is a linear structuring element with angle iπ/N and length L. Applying Gaussian smoothing, normalization, and a gray-level transformation to F_2(x, y) yields F′_2(x, y).
The image F_1(x, y) is normalized to the 8-bit grayscale image F′_1(x, y):
$$F'_1(x,y) = 255 \cdot \frac{F_1(x,y) - \min F_1}{\max F_1 - \min F_1}$$
Let A and B be weighting coefficients; the final enhanced image P(x, y) is:
P(x, y) = A·F′_1(x, y) + B·F′_2(x, y)
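As an illustration of S202, the sketch below (Python with OpenCV and NumPy) combines difference-of-Gabor detail images maximized over the 18 directions with summed directional top-hat transforms, and mixes them as P = A·F′_1 + B·F′_2. The two scales, mask sizes, and frequency are assumptions, not values given in the patent.

```python
import cv2
import numpy as np

N = 18                                    # number of filter directions

def gabor_bank(scale, ksize=21, k=1.0, f=0.1):
    """Real-valued Gabor masks in N directions at one scale (parameters assumed)."""
    sigma = k * scale / np.pi             # Gaussian envelope space constant
    return [cv2.getGaborKernel((ksize, ksize), sigma, i * np.pi / N,
                               1.0 / f, 1.0, 0) for i in range(N)]

def line_se(length, theta):
    """Linear structuring element with the given length and angle."""
    se = np.zeros((length, length), np.uint8)
    c = length // 2
    for r in np.linspace(-c, c, 4 * length):
        x = int(np.clip(round(c + r * np.cos(theta)), 0, length - 1))
        y = int(np.clip(round(c + r * np.sin(theta)), 0, length - 1))
        se[y, x] = 1
    return se

def enhance(I, alpha=7, beta=15, se_len=15, A=0.5, B=0.5):
    I = I.astype(np.float32)
    small, large = gabor_bank(alpha), gabor_bank(beta)
    # F1: per direction, small-scale minus large-scale response; then max
    details = [cv2.filter2D(I, -1, gs) - cv2.filter2D(I, -1, gl)
               for gs, gl in zip(small, large)]
    F1 = np.max(details, axis=0)
    # F2: sum of the top-hat transforms over the N directions
    F2 = sum(I - cv2.morphologyEx(I, cv2.MORPH_OPEN,
                                  line_se(se_len, i * np.pi / N))
             for i in range(N))
    F2 = cv2.GaussianBlur(F2, (5, 5), 0)
    # normalize both to the 8-bit range and combine with weights A, B
    F1 = cv2.normalize(F1, None, 0, 255, cv2.NORM_MINMAX)
    F2 = cv2.normalize(F2, None, 0, 255, cv2.NORM_MINMAX)
    return (A * F1 + B * F2).astype(np.uint8)
```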
further, in step S3, the entered spectral information is first divided into: green light wave band (492-577 nm), yellow light wave band (570-585 nm), red light wave band (625-740 nm), infrared light wave band (760-860 nm) and amber light (590 nm). Except for amber light, the other divided wave bands are processed in the range of each wave band interval by adopting a PCA (principal component analysis) method, and first principal component data in each range are obtained according to the maximum contribution rate of the variance: green light principal component G, yellow light principal component Y, red light principal component R, infrared light principal component IR. Then, the operations S201 and S202 are performed on the images of all the wavelength band principal components and the amber light wavelength band.
Further, the fundus image analysis of step S4 performs SIFT image registration, after the processing of S201 and S202, on the principal-component images of each band and the amber-band images obtained from multiple shots at different times, so that the image information of the same band is superimposed onto a single image, reducing the amount of data to be processed; finally all registered images are fused, which facilitates the doctor's clinical analysis and diagnosis. All image data and spectral information are transmitted to the big data platform, which further completes the analysis of the spectral curves and finally combines the image and spectral information to give an eye disease assessment.
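A sketch of the same-band registration using OpenCV's SIFT (the ratio test and RANSAC homography are common practice assumed here; the patent only names SIFT registration):

```python
import cv2
import numpy as np

def register_to_reference(ref, moving):
    """Warp `moving` onto `ref` via SIFT keypoints and a RANSAC homography
    (a hypothetical but standard registration recipe)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(ref, None)
    kp2, des2 = sift.detectAndCompute(moving, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des2, des1, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test
    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return cv2.warpPerspective(moving, H, ref.shape[1::-1])
```

Registered same-band shots can then be superimposed before fusion.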
The image fusion adopts the Laplacian pyramid transformation method:
(1) Let G_l(i, j) be the l-th layer of the Gaussian pyramid, let l be the number of decomposition layers, let h obey a Gaussian density distribution, let ω(m, n) = h(m)·h(n) be a window function with low-pass characteristics, and let REDUCE be the size-reduction operator. The Gaussian pyramid of the image is then
$$G_l(i,j) = \sum_{m=-2}^{2}\sum_{n=-2}^{2} \omega(m,n)\, G_{l-1}(2i+m,\ 2j+n),$$
that is, G_l(i, j) = REDUCE(G_{l-1}).
For the layer differences, the l-th layer image G_l is interpolated and expanded so that it has the same size as the (l-1)-th layer image G_{l-1}; the expanded sequence is
$$G_l^{*}(i,j) = 4\sum_{m=-2}^{2}\sum_{n=-2}^{2} \omega(m,n)\, G_l\!\left(\frac{i+m}{2},\ \frac{j+n}{2}\right),$$
where only the terms with integer coordinates are taken.
(2) Let LP_l be the l-th layer decomposition image and let EXPAND be the interpolation-expansion operator; the Laplacian pyramid decomposition is then
$$LP_l = \begin{cases} G_l - \mathrm{EXPAND}(G_{l+1}), & 0 \le l < N \\ G_N, & l = N \end{cases}$$
Reconstructing layer by layer downward from the top of the Laplacian pyramid finally recovers the original image G_0:
G_0 = LP_0 + EXPAND(LP_1 + EXPAND(LP_2 + … EXPAND(LP_N)))
Fusing the images with the above procedure facilitates further comprehensive analysis.
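A compact Python sketch of such a fusion, with OpenCV's pyrDown/pyrUp as the REDUCE/EXPAND operators and a maximum-magnitude rule for combining the detail layers (the combination rule is our assumption; the patent specifies only the pyramid transform):

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """LP_l = G_l - EXPAND(G_{l+1}); the top level keeps G_N itself."""
    g, pyr = img.astype(np.float32), []
    for _ in range(levels):
        down = cv2.pyrDown(g)                          # REDUCE
        up = cv2.pyrUp(down, dstsize=g.shape[1::-1])   # EXPAND
        pyr.append(g - up)
        g = down
    pyr.append(g)                                      # G_N
    return pyr

def fuse(images, levels=4):
    pyrs = [laplacian_pyramid(im, levels) for im in images]
    fused = []
    for layers in zip(*pyrs):
        stack = np.stack(layers)
        idx = np.abs(stack).argmax(axis=0)             # strongest coefficient wins
        fused.append(np.take_along_axis(stack, idx[None], axis=0)[0])
    out = fused[-1]                                    # reconstruct from the top
    for lp in reversed(fused[:-1]):
        out = lp + cv2.pyrUp(out, dstsize=lp.shape[1::-1])
    return np.clip(out, 0, 255).astype(np.uint8)
```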
Further, in the result display of step S5, the processed images of the green band G, the yellow band Y, the red band R, the infrared band IR, and the amber light, together with the fused image, are all displayed; the diagnostic information from the big database and the manual diagnosis given by the doctor in combination with the images are likewise shown on the terminal device.
The invention has the following beneficial effects:
1. The invention photographs fundus images in situ and non-invasively.
2. The invention provides fundus image information in different wavebands, greatly extending the range of fundus detection, and registers and fuses the fundus images, further improving the detection precision.
3. With the invention, a doctor can make a detailed eye diagnosis in combination with the machine diagnosis result given by the big data platform, greatly improving the doctor's working efficiency.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
As shown in FIG. 1, a hyperspectral-based fundus image detection method comprises the following steps in order:
S1, aligning both eyes of the person to be examined with the optical system of the hyperspectral fundus camera for direct viewing, and capturing and recording the fundus information of the person to be examined with the hyperspectral fundus camera;
S2, performing spectrum preprocessing and image preprocessing separately on the fundus data acquired by the hyperspectral fundus camera, wherein the image preprocessing of the following step can be performed only after the spectral data processing is complete;
S3, dividing the recorded spectral information into a green band (492-577 nm), a yellow band (570-585 nm), a red band (625-740 nm), an infrared band (760-860 nm), and amber light (590 nm); except for the amber light, each of the divided bands is processed within its own interval by principal component analysis (PCA) to obtain the principal component data of that range: green principal component G, yellow principal component Y, red principal component R, and infrared principal component IR;
S4, performing fundus image analysis and fundus spectrum analysis on the data obtained in S2 and S3, respectively, and sending the sample data to a big database, which provides reference information for fundus image diagnosis so that the doctor can conveniently determine the condition;
S5, displaying the diagnosis result for the fundus condition, the displayed contents including: the images processed from the principal components of each band together with the fused image, the diagnostic information provided by the big database, and the result of the doctor's manual diagnosis.
Further, the hyperspectral fundus camera of step S1 is formed by coupling a hyperspectral imaging system with a fundus camera; the person to be examined only needs to look directly into the optical system with both eyes, and the hyperspectral fundus camera photographs the fundus of the person to be examined and acquires the information.
Further, the SG (Savitzky-Golay) smoothing and the continuous wavelet transform adopted in the spectrum preprocessing of step S2 serve, respectively, to reduce the influence of random noise, improving the signal-to-noise ratio of the spectrogram, and to subtract the interference of the instrument background from the signal. The image preprocessing of step S2 comprises the following steps:
S201, first correct the fundus image, including gray correction and geometric correction. The acquired color image is converted to grayscale; after the grayscale image of the G channel is selected, a background mask is extracted from the hyperspectral fundus image in order to exclude the influence of the area outside the fundus observation region and to reduce the subsequent computation of fundus image registration and image analysis. The background mask extraction algorithm is as follows:
(1) Let the hyperspectral fundus image have L gray levels, so that the gray range is [0, L-1], and let the overall mean of the image be u:
u = w_B(t)·u_B(t) + w_o(t)·u_o(t)
A gray threshold t divides the pixels of the image into foreground and background: the proportion of foreground pixels in the image is w_o(t) with mean u_o(t), and the proportion of background pixels is w_B(t) with mean u_B(t).
(2) The optimal threshold of the image is taken as the value that maximizes the between-class variance:
$$t^{*} = \arg\max_{0 \le t \le L-1}\, w_B(t)\, w_o(t)\,\big(u_B(t) - u_o(t)\big)^{2}$$
After the optimal threshold t* is obtained, the image f(x, y) is threshold-segmented according to the rule:
$$g(x,y) = \begin{cases} 1, & f(x,y) \ge t^{*} \\ 0, & f(x,y) < t^{*} \end{cases}$$
After the segmentation, in order to eliminate the influence of the hyperspectral fundus camera's image distortion on the analysis, the masked image is geometrically corrected by a polynomial coordinate transformation.
S202, use a Gabor filter to extract the useful information from the corrected image, use the top-hat transform to extract the vessel trunks and fine blood vessels, and finally combine the two methods to complete a dynamically adjustable enhancement of the fundus image so that its details become clearer. The algorithm is as follows:
(1) Taking the real part of the general expression of the two-dimensional Gabor filter and dropping the leading constant gives
$$G_{\theta}(x,y) = \exp\!\left(-\frac{x_{\theta}^{2} + y_{\theta}^{2}}{2\sigma^{2}}\right)\cos(2\pi f x_{\theta}),$$
where θ is the direction of the filter, taking the values θ_i = iπ/N for i = 0, 1, …, N-1; N is the number of directions, N = 18; f is the center frequency of the filter; x_θ = x·cos θ + y·sin θ and y_θ = -x·sin θ + y·cos θ are the rotated coordinates; σ is the Gaussian envelope space constant, σ = k·s/π with k ∈ [0.5, 1.5]; and s is the size of the filter mask.
(2) Let the current image be I(x, y). For the above filter, select for each direction θ a small scale α and a large scale β; the corresponding masks are G_{α,θ} and G_{β,θ}, and their filtering results are
$$F_{\alpha}(x,y,\theta) = I(x,y) * G_{\alpha,\theta}(x,y), \qquad F_{\beta}(x,y,\theta) = I(x,y) * G_{\beta,\theta}(x,y).$$
The detail image of I(x, y) in the direction θ is then
$$F(x,y,\theta) = F_{\alpha}(x,y,\theta) - F_{\beta}(x,y,\theta).$$
(3) The above filtering is performed in the 18 directions; the value of the final image F_1(x, y) at any pixel (i, j) is
F_1(i, j) = max[F(i, j, 0), F(i, j, π/N), F(i, j, 2π/N), …, F(i, j, (N-1)π/N)]
A top-hat transform is then applied to the image I(x, y) in the 18 directions and the results are summed, giving the image F_2(x, y):
$$F_2(x,y) = \sum_{i=0}^{N-1}\left[I(x,y) - (I \circ B_i)(x,y)\right]$$
where ∘ denotes the morphological opening operation and B_i is a linear structuring element with angle iπ/N and length L. Applying Gaussian smoothing, normalization, and a gray-level transformation to F_2(x, y) yields F′_2(x, y).
The image F_1(x, y) is normalized to the 8-bit grayscale image F′_1(x, y):
$$F'_1(x,y) = 255 \cdot \frac{F_1(x,y) - \min F_1}{\max F_1 - \min F_1}$$
Let A and B be weighting coefficients; the final enhanced image P(x, y) is:
P(x, y) = A·F′_1(x, y) + B·F′_2(x, y)
further, in step S3, the entered spectral information is first divided into: green light wave band (492-577 nm), yellow light wave band (570-585 nm), red light wave band (625-740 nm), infrared light wave band (760-860 nm) and amber light (590 nm). Except for amber light, the other divided wave bands are processed in the range of each wave band interval by adopting a PCA (principal component analysis) method, and first principal component data in each range are obtained according to the maximum contribution rate of the variance: green light principal component G, yellow light principal component Y, red light principal component R, infrared light principal component IR. Then, the operations S201 and S202 are performed on the images of all the wavelength band principal components and the amber light wavelength band.
Further, the fundus image analysis of step S4 performs SIFT image registration, after the processing of S201 and S202, on the principal-component images of each band and the amber-band images obtained from multiple shots at different times, so that the image information of the same band is superimposed onto a single image, reducing the amount of data to be processed; finally all registered images are fused, which facilitates the doctor's clinical analysis and diagnosis. All image data and spectral information are transmitted to the big data platform, which further completes the analysis of the spectral curves and finally combines the image and spectral information to give an eye disease assessment.
The image fusion adopts the Laplacian pyramid transformation method:
(1) Let G_l(i, j) be the l-th layer of the Gaussian pyramid, let l be the number of decomposition layers, let h obey a Gaussian density distribution, let ω(m, n) = h(m)·h(n) be a window function with low-pass characteristics, and let REDUCE be the size-reduction operator. The Gaussian pyramid of the image is then
$$G_l(i,j) = \sum_{m=-2}^{2}\sum_{n=-2}^{2} \omega(m,n)\, G_{l-1}(2i+m,\ 2j+n),$$
that is, G_l(i, j) = REDUCE(G_{l-1}).
For the layer differences, the l-th layer image G_l is interpolated and expanded so that it has the same size as the (l-1)-th layer image G_{l-1}; the expanded sequence is
$$G_l^{*}(i,j) = 4\sum_{m=-2}^{2}\sum_{n=-2}^{2} \omega(m,n)\, G_l\!\left(\frac{i+m}{2},\ \frac{j+n}{2}\right),$$
where only the terms with integer coordinates are taken.
(2) Let LP_l be the l-th layer decomposition image and let EXPAND be the interpolation-expansion operator; the Laplacian pyramid decomposition is then
$$LP_l = \begin{cases} G_l - \mathrm{EXPAND}(G_{l+1}), & 0 \le l < N \\ G_N, & l = N \end{cases}$$
Reconstructing layer by layer downward from the top of the Laplacian pyramid finally recovers the original image G_0:
G_0 = LP_0 + EXPAND(LP_1 + EXPAND(LP_2 + … EXPAND(LP_N)))
Fusing the images with the above procedure facilitates further comprehensive analysis.
Further, in the result display of step S5, the processed images of the green band G, the yellow band Y, the red band R, the infrared band IR, and the amber light, together with the fused image, are all displayed; the diagnostic information from the big database and the manual diagnosis given by the doctor in combination with the images are likewise shown on the terminal device.

Claims (5)

1. A hyperspectral-based fundus image detection method, characterized by comprising the following steps:
S1, aligning both eyes of the person to be examined with the optical system of the hyperspectral fundus camera for direct viewing, and capturing and recording the fundus information of the person to be examined with the hyperspectral fundus camera;
S2, performing spectrum preprocessing and image preprocessing separately on the fundus data acquired by the hyperspectral fundus camera, wherein the image preprocessing of the following step can be performed only after the spectral data processing is complete;
S3, dividing the recorded spectral information into a green band, a yellow band, a red band, an infrared band, and amber light; wherein, except for the amber light, each of the divided bands is processed within its own interval by principal component analysis (PCA) to obtain the principal component data of that range: green principal component G, yellow principal component Y, red principal component R, and infrared principal component IR; the green band covers 492-577 nm, the yellow band 570-585 nm, the red band 625-740 nm, the infrared band 760-860 nm, and the amber light lies at 590 nm;
S4, performing fundus image analysis and fundus spectrum analysis on the data obtained in steps S2 and S3, respectively, and sending the sample data to a big database, which provides reference information for fundus image diagnosis so that the doctor can conveniently determine the condition;
S5, displaying the diagnosis result for the fundus condition, the displayed contents including: the images processed from the principal components of each band together with the fused image, the diagnostic information provided by the big database, and the result of the doctor's manual diagnosis.
2. The hyperspectral-based fundus image detection method according to claim 1, characterized in that: the hyperspectral fundus camera of step S1 is formed by coupling a hyperspectral imaging system with a fundus camera; the person to be examined only needs to look directly into the optical system with both eyes, and the hyperspectral fundus camera photographs the fundus of the person to be examined and acquires the information.
3. The hyperspectral-based fundus image detection method according to claim 1, characterized in that: the SG smoothing and the continuous wavelet transform adopted in the spectrum preprocessing of step S2 serve, respectively, to reduce the influence of random noise, improving the signal-to-noise ratio of the spectrogram, and to subtract the interference of the instrument background from the signal;
the image preprocessing of step S2 includes the steps of:
S201, first correcting the fundus image, including gray correction and geometric correction; converting the acquired color image to grayscale and, after the grayscale image of the G channel is selected, extracting a background mask from the hyperspectral fundus image in order to exclude the influence of the area outside the fundus observation region and to reduce the subsequent computation of fundus image registration and image analysis; the background mask extraction algorithm is as follows:
(1) let the hyperspectral fundus image have L gray levels, so that the gray range is [0, L-1], and let the overall mean of the image be u:
u = w_B(t)·u_B(t) + w_o(t)·u_o(t)
a gray threshold t divides the pixels of the image into foreground and background: the proportion of foreground pixels in the image is w_o(t) with mean u_o(t), and the proportion of background pixels is w_B(t) with mean u_B(t);
(2) the optimal threshold of the image is taken as the value that maximizes the between-class variance:
$$t^{*} = \arg\max_{0 \le t \le L-1}\, w_B(t)\, w_o(t)\,\big(u_B(t) - u_o(t)\big)^{2}$$
after the optimal threshold t* is obtained, the image f(x, y) is threshold-segmented according to the rule:
$$g(x,y) = \begin{cases} 1, & f(x,y) \ge t^{*} \\ 0, & f(x,y) < t^{*} \end{cases}$$
after the segmentation, in order to eliminate the influence of the hyperspectral fundus camera's image distortion on the analysis, the masked image is geometrically corrected by a polynomial coordinate transformation;
S202, using a Gabor filter to extract the useful information from the corrected image, using the top-hat transform to extract the vessel trunks and fine blood vessels, and finally combining the two methods to complete a dynamically adjustable enhancement of the fundus image so that its details become clearer; the algorithm is as follows:
(1) taking the real part of the general expression of the two-dimensional Gabor filter and dropping the leading constant gives:
$$G_{\theta}(x,y) = \exp\!\left(-\frac{x_{\theta}^{2} + y_{\theta}^{2}}{2\sigma^{2}}\right)\cos(2\pi f x_{\theta})$$
where θ is the direction of the filter, taking the values θ_i = iπ/N for i = 0, 1, …, N-1; N is the number of directions, N = 18; f is the center frequency of the filter; x_θ = x·cos θ + y·sin θ and y_θ = -x·sin θ + y·cos θ are the rotated coordinates; σ is the Gaussian envelope space constant, σ = k·s/π, k ∈ [0.5, 1.5]; and s is the size of the filter mask;
(2) let the current image be I(x, y); for the above filter, select for each direction θ a small scale α and a large scale β; the corresponding masks are G_{α,θ} and G_{β,θ}, and their filtering results are:
$$F_{\alpha}(x,y,\theta) = I(x,y) * G_{\alpha,\theta}(x,y), \qquad F_{\beta}(x,y,\theta) = I(x,y) * G_{\beta,\theta}(x,y)$$
the detail image of I(x, y) in the direction θ is then:
$$F(x,y,\theta) = F_{\alpha}(x,y,\theta) - F_{\beta}(x,y,\theta)$$
(3) the above filtering is performed in the 18 directions; the value of the final image F_1(x, y) at any pixel (i, j) is:
F_1(i, j) = max[F(i, j, 0), F(i, j, π/N), F(i, j, 2π/N), …, F(i, j, (N-1)π/N)]
a top-hat transform is then applied to the image I(x, y) in the 18 directions and the results are summed, giving the image F_2(x, y):
$$F_2(x,y) = \sum_{i=0}^{N-1}\left[I(x,y) - (I \circ B_i)(x,y)\right]$$
where ∘ denotes the morphological opening operation and B_i is a linear structuring element with angle iπ/N and length L; applying Gaussian smoothing, normalization, and a gray-level transformation to F_2(x, y) yields F′_2(x, y);
the image F_1(x, y) is normalized to the 8-bit grayscale image F′_1(x, y):
$$F'_1(x,y) = 255 \cdot \frac{F_1(x,y) - \min F_1}{\max F_1 - \min F_1}$$
let A and B be weighting coefficients; the final enhanced image P(x, y) is:
P(x, y) = A·F′_1(x, y) + B·F′_2(x, y).
4. The hyperspectral-based fundus image detection method according to claim 1, characterized in that: the fundus image analysis of step S4 performs SIFT image registration, after the processing of steps S201 and S202, on the principal-component images of each band and the amber-band images obtained from multiple shots at different times, so that image information of the same band is superimposed onto a single image, reducing the amount of data to be processed; finally all registered images are fused, facilitating the doctor's clinical analysis and diagnosis; all image data and spectral information are transmitted to the big data platform, which further completes the analysis of the spectral curves and finally combines the image and spectral information to give an eye disease assessment.
5. The hyperspectral-based fundus image detection method according to claim 1, characterized in that: in the result display of step S5, the processed images of the green band G, the yellow band Y, the red band R, the infrared band IR, and the amber light, together with the fused image, are all displayed, and the diagnostic information of the big database and the manual diagnosis given by the doctor in combination with the images are all shown on the terminal device.
CN202010708640.3A 2020-07-22 2020-07-22 Fundus image detection method based on hyperspectrum Active CN111815617B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010708640.3A CN111815617B (en) 2020-07-22 2020-07-22 Fundus image detection method based on hyperspectrum

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010708640.3A CN111815617B (en) 2020-07-22 2020-07-22 Fundus image detection method based on hyperspectrum

Publications (2)

Publication Number Publication Date
CN111815617A true CN111815617A (en) 2020-10-23
CN111815617B CN111815617B (en) 2023-11-17

Family

ID=72862116

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010708640.3A Active CN111815617B (en) 2020-07-22 2020-07-22 Fundus image detection method based on hyperspectrum

Country Status (1)

Country Link
CN (1) CN111815617B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110242306A1 (en) * 2008-12-19 2011-10-06 The Johns Hopkins University System and method for automated detection of age related macular degeneration and other retinal abnormalities
CN105809188A (en) * 2016-02-26 2016-07-27 山东大学 Fungal keratitis image identification method based on AMBP improved algorithm
CN106056157A (en) * 2016-06-01 2016-10-26 西北大学 Hyperspectral image semi-supervised classification method based on space-spectral information
WO2019100585A1 (en) * 2017-11-25 2019-05-31 深圳市前海安测信息技术有限公司 Fundus camera-based monitoring system and method for prevention and treatment of potential diseases based on traditional chinese medicine
CN108197640A (en) * 2017-12-18 2018-06-22 华南理工大学 High spectrum image fast filtering method based on three-dimensional Gabor filter
CN109544540A (en) * 2018-11-28 2019-03-29 东北大学 A kind of diabetic retina picture quality detection method based on image analysis technology

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HSIN-YU YAO et al.: "Hyperspectral Ophthalmoscope Images for the Diagnosis of Diabetic Retinopathy Stage", Clinical Medicine, pp. 1-16 *
BAI Rui: "Research on Hyperspectral Remote Sensing Image Classification Based on Multi-Feature Fusion", China Master's Theses Full-text Database, Information Science and Technology, no. 2, pp. 1-47 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112560597A (en) * 2020-12-02 2021-03-26 吉林大学 Microscopic hyperspectral COVID-19 detection and identification method
CN112905823A (en) * 2021-02-22 2021-06-04 深圳市国科光谱技术有限公司 Hyperspectral substance detection and identification system and method based on big data platform
CN112905823B (en) * 2021-02-22 2023-10-31 深圳市国科光谱技术有限公司 Hyperspectral substance detection and identification system and method based on big data platform

Also Published As

Publication number Publication date
CN111815617B (en) 2023-11-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant