CN111815617A - Hyperspectrum-based eye fundus image detection method - Google Patents
- Publication number
- CN111815617A (application CN202010708640.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- fundus
- hyperspectral
- diagnosis
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/30—Erosion or dilatation, e.g. thinning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A40/00—Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
- Y02A40/10—Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture
Abstract
The invention discloses a hyperspectral fundus image detection method and relates to the field of fundus image detection. The method is as follows: the person to be detected looks directly into the optical system of the device while a hyperspectral fundus camera captures the fundus image; the obtained image data and spectral data are processed and analyzed; on that basis, machine diagnosis results are given by means of a big data platform and manual diagnosis results by the clinical experience of the doctor, and both are displayed. The invention is a non-invasive detection method: doctors can evaluate the disease state of a patient's eyes in detail from the fundus images of different wave bands together with the machine diagnosis results, greatly reducing the risk of missed diagnosis and misdiagnosis and greatly improving the efficiency of clinical diagnosis.
Description
Technical Field
The invention relates to the technical field of fundus image detection, in particular to a hyperspectral fundus image detection method.
Background
The eye is one of the most important human organs; through it, humans directly observe a complex environment. The fundus is the tissue at the back of the eye, i.e., the inner membrane of the eyeball: the retina, optic disc, macula and central retinal vessels. These fundus tissues carry a wealth of information: from changes in them, doctors can make certain diagnoses or predict the occurrence of certain diseases, such as diabetic retinopathy, age-related macular degeneration and glaucoma. A growing number of studies show that many diseases of the eye are closely related to the fundus. Moreover, fundus lesions have become the leading cause of blindness in the elderly, so the fundus has become an important clinical examination object in modern medicine.
At present, fundus examination mainly relies on fundus color photography, OCT, fluorescein angiography and the like to observe the fundus. Owing to the principles these methods rest on, they suffer from a limited observation range, requirements on the patient's physical condition, invasiveness to the patient, and insufficient accuracy.
Disclosure of Invention
In order to overcome these defects, the invention provides a hyperspectral-based fundus image detection method. The method detects a patient's fundus images noninvasively and, according to the principle that different wave bands reflect different fundus layers, provides processed images of the green, yellow, red, infrared and amber light bands, greatly enlarging the detection range and improving the doctors' diagnostic efficiency.
A hyperspectral-based fundus image detection method comprises the following steps:
s1, aligning the two eyes of the person to be detected with the optical system of the hyperspectral fundus camera for direct viewing, the hyperspectral fundus camera capturing and recording the fundus information of the person to be detected;
s2, performing spectrum preprocessing and image preprocessing respectively on the fundus data acquired by the hyperspectral fundus camera, wherein the image preprocessing of the next step can be performed only after the spectral data processing is completed;
s3, dividing the recorded spectral information into: a green light band (492-577 nm), a yellow light band (570-585 nm), a red light band (625-740 nm), an infrared light band (760-860 nm) and amber light (590 nm); except for the amber light, each of the divided bands is processed within its own band interval by principal component analysis (PCA) to obtain the principal component data of each range: the green light principal component G, yellow light principal component Y, red light principal component R and infrared light principal component IR;
s4, performing fundus image analysis and fundus spectrum analysis respectively on the data obtained in S2 and S3, and sending the sample data to a large database, which provides reference information for fundus image diagnosis so that the doctor can conveniently determine the disease condition;
s5, displaying the diagnosis result of the fundus condition, the contents of which include: the images processed from the principal components of each band together with the fused image, the diagnosis information provided by the big database, and the result of the doctor's manual diagnosis.
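As an illustration of the per-band PCA of step S3, the first principal component of each band interval can be sketched as follows. This is a minimal sketch, assuming the hyperspectral data are stored as a (rows, columns, bands) numpy array with known band-centre wavelengths; the band limits come from the text, while the cube itself is synthetic.

```python
import numpy as np

# Wavelength intervals (nm) from step S3; amber light is the single 590 nm band.
BANDS = {"G": (492, 577), "Y": (570, 585), "R": (625, 740), "IR": (760, 860)}

def first_principal_component(cube, wavelengths, lo, hi):
    """First-PC score image of the hyperspectral cube restricted to [lo, hi] nm.

    cube: (H, W, B) array of intensities; wavelengths: (B,) band centres in nm.
    Pixels are the samples and in-band spectral channels the variables, so the
    returned image is the projection onto the direction of maximum variance.
    """
    sel = (wavelengths >= lo) & (wavelengths <= hi)
    X = cube[:, :, sel].reshape(-1, int(sel.sum())).astype(float)
    X -= X.mean(axis=0)                       # centre each spectral channel
    cov = X.T @ X / (X.shape[0] - 1)          # channel covariance matrix
    vals, vecs = np.linalg.eigh(cov)
    pc1 = vecs[:, np.argmax(vals)]            # eigenvector of largest eigenvalue
    return (X @ pc1).reshape(cube.shape[:2])

# Tiny synthetic cube: 4x4 pixels, band centres every 10 nm from 480 to 870 nm.
wl = np.arange(480, 871, 10)
cube = np.random.default_rng(0).random((4, 4, wl.size))
pcs = {name: first_principal_component(cube, wl, lo, hi)
       for name, (lo, hi) in BANDS.items()}
```

The same routine yields G, Y, R and IR from one capture; the amber 590 nm channel would be taken directly without PCA.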
Further, the hyperspectral fundus camera of step S1 is formed by coupling a hyperspectral imaging system with a fundus camera; the person to be detected only needs to look directly into the optical system with both eyes, and the hyperspectral fundus camera photographs the fundus of the person to be detected and acquires the information.
Further, the SG (Savitzky-Golay) smoothing and the continuous wavelet transform method adopted in the spectrum preprocessing of step S2 are used, respectively, to reduce the influence of random noise and improve the signal-to-noise ratio of the spectrogram, and to subtract the interference of the instrument background from the signal. The image preprocessing of step S2 includes the following steps:
s201, correcting the fundus image first, including gray correction and geometric correction. The obtained color image is grayed and the gray image of the G channel is selected; then, in order to prevent the influence of the non-fundus observation area and to reduce the subsequent computation of fundus image registration and image analysis, a background mask is extracted from the hyperspectral fundus image. The specific background mask extraction algorithm is as follows:
(1) let the number of gray levels of the hyperspectral fundus image be L, so that the gray range is [0, L-1]. A gray threshold t divides the pixels of the image into foreground pixels and background pixels: the proportion of foreground pixels in the image is wo(t) with mean value uo(t), and the proportion of background pixels is wB(t) with mean value uB(t). The total mean value of the image is u:
u=wB(t)uB(t)+wo(t)uo(t)
(2) the optimal threshold t* of the image maximizes the between-class variance:
σ²(t)=wB(t)wo(t)[uB(t)-uo(t)]²
t*=argmax σ²(t), 0≤t≤L-1
after obtaining the optimal threshold, the image f(x, y) is threshold-segmented into the binary mask g(x, y) according to the rule: g(x, y)=1 if f(x, y)>t*, otherwise g(x, y)=0, where 1 marks the fundus observation area and 0 the background.
After the segmentation is finished, in order to eliminate the influence of the image distortion of the hyperspectral fundus camera on analysis processing, a polynomial coordinate transformation method is adopted to carry out geometric correction on the image after the mask processing.
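The background-mask extraction of S201 is the classical Otsu criterion written with the wB/wo notation above. A minimal numpy sketch, assuming an 8-bit gray image; the synthetic bright-disc frame is illustrative only:

```python
import numpy as np

def otsu_threshold(gray, levels=256):
    """Return t* maximizing the between-class variance
    sigma^2(t) = wB(t) * wo(t) * (uB(t) - uo(t))^2."""
    hist = np.bincount(gray.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()                     # gray-level probabilities
    t_axis = np.arange(levels)
    wB = np.cumsum(p)                         # background proportion (levels <= t)
    wo = 1.0 - wB                             # foreground proportion
    cum_mean = np.cumsum(p * t_axis)
    u = cum_mean[-1]                          # total mean: u = wB*uB + wo*uo
    with np.errstate(divide="ignore", invalid="ignore"):
        uB = cum_mean / wB
        uo = (u - cum_mean) / wo
        sigma2 = wB * wo * (uB - uo) ** 2     # NaN where a class is empty
    return int(np.nanargmax(sigma2))

def background_mask(gray):
    """Binary mask: 1 = fundus observation area (above t*), 0 = background."""
    return (gray > otsu_threshold(gray)).astype(np.uint8)

# Synthetic fundus-like frame: a bright disc on a dark surround.
yy, xx = np.mgrid[:64, :64]
gray = np.where((yy - 32) ** 2 + (xx - 32) ** 2 < 24 ** 2, 200, 10).astype(np.uint8)
mask = background_mask(gray)
```

The mask would then gate the polynomial geometric correction and all later registration steps to fundus pixels only.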
S202, extracting useful information from the corrected image with a Gabor filter, extracting the vessel trunks and the fine blood vessels from the image with a top-hat transform, and finally combining the two methods to complete the dynamically adjustable enhancement of the fundus image so that the details of the fundus image become clearer. The specific algorithm is as follows:
(1) taking the real part of the general expression of the two-dimensional Gabor filter and dropping the leading constant gives:
g(x, y, θn)=exp[-(x′²+y′²)/(2σ²)]·cos(2πf·x′)
x′=x·cosθn+y·sinθn, y′=-x·sinθn+y·cosθn
where θn=nπ/N (n=0, 1, …, N-1) is the direction of the filter, N is the number of directions, N=18, f is the center frequency of the filter, and σ is the Gaussian envelope space constant, σ=k×s/π, k∈[0.5, 1.5], s being the size of the filter mask.
(2) let the current image be I(x, y). For each direction of the above filter a small scale α and a large scale β are respectively selected, with corresponding masks gα(x, y, θn) and gβ(x, y, θn); their filtering results are the convolutions
Fα(i, j, θn)=[I*gα](i, j), Fβ(i, j, θn)=[I*gβ](i, j)
from which the combined directional response F(i, j, θn) is formed.
(3) performing the filtering operation in the 18 selected directions, each pixel point (i, j) of the final image F1(x, y) is obtained as:
F1(x,y)=max[F(i,j,0),F(i,j,π/N),F(i,j,2π/N),…F(i,j,(N-1)π/N)]
Then the image I(x, y) is subjected to the top-hat transformation in the 18 directions and the results are summed to obtain the image F2(x, y):
F2(x,y)=Σn=0..N-1 [I-(I∘Bn)](x,y)
where ∘ represents the morphological opening operation and Bn is a linear structuring element with angle nπ/N and length L. The image obtained from F2(x, y) after Gaussian smoothing, normalization and gray-level transformation is F′2(x, y).
The image F1(x, y) is normalized to the 8-bit grayscale image F′1(x, y) by the following formula:
F′1(x,y)=255·[F1(x,y)-min F1]/[max F1-min F1]
Let A, B be the weighting factors; the final enhanced image P(x, y) is:
P(x,y)=A·F′1(x,y)+B·F′2(x,y)
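The enhancement of S202 can be sketched end to end as below. The Gabor kernel follows the stated form with σ = k·s/π and N = 18 directions; the mask size, centre frequency f, structuring-element length and the equal weights A = B = 0.5 are illustrative assumptions, and the morphological operators use wrap-around borders for brevity.

```python
import numpy as np

def gabor_kernel(size, f, theta, k=1.0):
    """Real even Gabor g = exp(-(x'^2 + y'^2)/(2*sigma^2)) * cos(2*pi*f*x'),
    with sigma = k*size/pi as stated in the text (k in [0.5, 1.5])."""
    sigma = k * size / np.pi
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * f * xr)

def convolve(img, ker):
    """'Same'-size 2-D convolution with zero padding (plain loops, small images)."""
    kh, kw = ker.shape
    flipped = ker[::-1, ::-1]
    pad = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + kh, j:j + kw] * flipped)
    return out

def tophat(img, theta, length):
    """White top-hat I - (I o B) with a linear structuring element B at angle
    theta; shifting uses np.roll, so borders wrap around (fine for a sketch)."""
    offs = [(int(round(r * np.sin(theta))), int(round(r * np.cos(theta))))
            for r in range(-(length // 2), length // 2 + 1)]
    erode = np.min([np.roll(img, (-dy, -dx), axis=(0, 1)) for dy, dx in offs], axis=0)
    opening = np.max([np.roll(erode, (dy, dx), axis=(0, 1)) for dy, dx in offs], axis=0)
    return img - opening

def enhance(img, N=18, A=0.5, B=0.5):
    """F1 = max over directions of Gabor responses; F2 = sum of directional
    top-hats; P = A*F1' + B*F2' after min-max normalization to [0, 255]."""
    thetas = [n * np.pi / N for n in range(N)]
    F1 = np.max([convolve(img, gabor_kernel(9, 0.2, t)) for t in thetas], axis=0)
    F2 = np.sum([tophat(img, t, 7) for t in thetas], axis=0)
    def norm(a):
        return 255.0 * (a - a.min()) / (a.max() - a.min() + 1e-9)
    return A * norm(F1) + B * norm(F2)

# Demo on a small synthetic frame with a bright vessel-like diagonal.
img = np.zeros((24, 24))
np.fill_diagonal(img, 200.0)
P = enhance(img)
```

Raising A relative to B weights the Gabor texture response over the top-hat vessel response, which is what makes the enhancement "dynamically adjustable".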
Further, in step S3 the recorded spectral information is first divided into: the green light band (492-577 nm), yellow light band (570-585 nm), red light band (625-740 nm), infrared light band (760-860 nm) and amber light (590 nm). Except for the amber light, each divided band is processed within its own interval by principal component analysis (PCA), and the first principal component of each range is obtained according to the maximum variance contribution rate: the green light principal component G, yellow light principal component Y, red light principal component R and infrared light principal component IR. The operations S201 and S202 are then performed on the images of all band principal components and of the amber light band.
Further, the fundus image analysis of step S4 performs SIFT image registration on the principal-component images of each band and of the amber band obtained from multiple shots at different times (after processing by S201 and S202), so that the image information of the same band is superimposed onto a single image, reducing the amount of data to be processed; finally all registered images are fused, facilitating the doctor's clinical analysis and diagnosis. All image data and spectral information are also transmitted to the big data platform, which further completes the analysis of the spectral curves and finally combines the image and spectral information to give an eye-disease assessment.
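Full SIFT registration needs a feature detector and matcher; as a deliberately simplified stand-in that still illustrates aligning repeated same-band shots, the sketch below estimates a pure translation by FFT phase correlation (this substitutes for, and is much weaker than, the feature-based registration named in the text).

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the integer (dy, dx) translation aligning mov onto ref,
    from the phase of the cross power spectrum."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(mov)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12            # keep phase only
    corr = np.fft.ifft2(cross).real           # delta peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peak positions to signed shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

def register(ref, mov):
    """Apply the estimated translation (circularly) so mov overlays ref."""
    dy, dx = phase_correlation_shift(ref, mov)
    return np.roll(np.roll(mov, dy, axis=0), dx, axis=1)

# Demo: shift a frame to simulate eye movement, then recover the displacement.
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
mov = np.roll(np.roll(ref, -3, axis=0), 5, axis=1)
```

Per-band frames registered this way (or by true SIFT matching) can then be averaged or fed to the pyramid fusion described next.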
The fused image is obtained with the Laplacian pyramid transformation method:
(1) let Gl(i, j) be the l-th layer Gaussian pyramid image, l the number of decomposition layers, h a Gaussian density distribution function, and ω(m, n)=h(m)×h(n) a window function with low-pass characteristics; with REDUCE denoting the size-reduction operator, the Gaussian pyramid of the image is:
Gl(i,j)=REDUCE(Gl-1)=Σm Σn ω(m,n)·Gl-1(2i+m, 2j+n)
To form the layer differences, Gl is interpolated and expanded so that the expanded image has the same size as the l-1 layer image Gl-1; with EXPAND denoting the interpolation-expansion operator, the expansion is:
EXPAND(Gl)(i,j)=4·Σm Σn ω(m,n)·Gl((i+m)/2, (j+n)/2)
where only the terms with integer coordinates (i+m)/2 and (j+n)/2 contribute.
(2) let LPl be the l-th layer decomposition image; the Laplacian pyramid decomposition expression is then:
LPl=Gl-EXPAND(Gl+1), 0≤l<N; LPN=GN
Reconstruction pushes gradually downward from the top of the Laplacian pyramid, finally recovering the original image G0:
G0=LP0+EXPAND(LP1+EXPAND(LP2+…EXPAND(LPN)))
Fusing the images with the above procedure facilitates further comprehensive analysis.
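The pyramid decomposition, reconstruction and a fusion step can be sketched as follows. The 5-tap window realizing ω(m, n) = h(m)h(n) and the per-level maximum-absolute-value fusion rule are assumptions, since the text does not fix either.

```python
import numpy as np

# 1-D 5-tap kernel h; omega(m, n) = h(m)*h(n) is the low-pass window function.
h = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def lowpass(img):
    """Separable 5x5 filtering with omega(m, n), reflective borders."""
    pad = np.pad(img, 2, mode="reflect")
    tmp = sum(h[m] * pad[:, m:m + img.shape[1]] for m in range(5))
    return sum(h[m] * tmp[m:m + img.shape[0], :] for m in range(5))

def REDUCE(img):
    """G_l = REDUCE(G_{l-1}): low-pass filter, then subsample by two."""
    return lowpass(img)[::2, ::2]

def EXPAND(img, shape):
    """Interpolation-expansion: zero-interleave, low-pass, scale by four."""
    up = np.zeros(shape)
    up[::2, ::2] = img
    return 4.0 * lowpass(up)

def laplacian_pyramid(img, levels):
    """LP_l = G_l - EXPAND(G_{l+1}); the top layer LP_levels equals G_levels."""
    gp = [img]
    for _ in range(levels):
        gp.append(REDUCE(gp[-1]))
    lp = [gp[l] - EXPAND(gp[l + 1], gp[l].shape) for l in range(levels)]
    lp.append(gp[-1])
    return lp

def reconstruct(lp):
    """G0 = LP0 + EXPAND(LP1 + EXPAND(... EXPAND(LP_N)))."""
    out = lp[-1]
    for lvl in reversed(lp[:-1]):
        out = lvl + EXPAND(out, lvl.shape)
    return out

def fuse(img_a, img_b, levels=3):
    """Fuse two registered images by keeping, per level, the coefficient of
    larger magnitude (an assumed rule; the patent does not specify one)."""
    la, lb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(p) >= np.abs(q), p, q) for p, q in zip(la, lb)]
    return reconstruct(fused)

# Demo on two 32x32 frames (power-of-two sizes keep the layer shapes aligned).
rng = np.random.default_rng(2)
a, b = rng.random((32, 32)), rng.random((32, 32))
fused_img = fuse(a, b)
```

Because LPl = Gl - EXPAND(G(l+1)) telescopes exactly, reconstruct(laplacian_pyramid(x, n)) returns x up to floating-point error, which is the property the G0 formula above expresses.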
Further, in the result display of step S5, the processed images of the green band G, yellow band Y, red band R, infrared band IR and amber light are all displayed together with the fused image, and the diagnostic information of the large database and the manual diagnosis given by the doctor in combination with the images are likewise displayed on the terminal device.
The invention has the beneficial effects that:
1. the invention can complete in-situ, noninvasive capture of fundus images.
2. The invention can provide information of fundus images of different wave bands, greatly expands the range of fundus detection, and performs registration and fusion on the fundus images, thereby further improving the detection precision of the images.
3. According to the invention, a doctor can make detailed eye diagnosis by combining the machine diagnosis result given by the big data platform, so that the working efficiency of the doctor is greatly improved.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
As shown in fig. 1, the hyperspectral-based fundus image detection method sequentially carries out steps S1 to S5, together with the preprocessing, enhancement, registration and fusion operations, exactly as set forth above.
Claims (5)
1. A hyperspectral-based fundus image detection method, characterized by comprising the following steps:
s1, aligning the two eyes of the person to be detected with the optical system of the hyperspectral fundus camera for direct viewing, the hyperspectral fundus camera capturing and recording the fundus information of the person to be detected;
s2, performing spectrum preprocessing and image preprocessing respectively on the fundus data acquired by the hyperspectral fundus camera, wherein the image preprocessing of the next step can be performed only after the spectral data processing is completed;
s3, dividing the recorded spectrum information into: green light band, yellow light band, red light band, infrared light band and amber light; wherein, except the amber light, the other divided wave bands are processed in the respective wave band interval range by adopting a PCA analysis method, so as to obtain the principal component data in the respective range: green light principal component G, yellow light principal component Y, red light principal component R, infrared light principal component IR; the green light band has the wavelength of 492-577 nm, the yellow light band has the wavelength of 570-585 nm, the red light band has the wavelength of 625-740 nm, the infrared light band has the wavelength of 760-860 nm and the amber light has the wavelength of 590 nm;
s4, performing fundus image analysis and fundus spectrum analysis on the data obtained in the steps S2 and S3 respectively, sending sample data into a large database, and providing reference information for fundus image diagnosis by the database, so that a doctor can conveniently determine a disease state;
s5, displaying the diagnosis result of the condition of the fundus oculi, wherein the contents of the diagnosis result include: images processed by the main components of all wave bands and fused images, diagnosis information provided by a big database and results of manual diagnosis of doctors.
2. A hyperspectral-based fundus image detection method according to claim 1, characterized in that: the hyperspectral fundus camera of the step S1 is formed by coupling a hyperspectral imaging system and a fundus camera, and the person to be detected only needs to look directly at the optical system with both eyes, and the hyperspectral fundus camera takes a picture of the fundus of the person to be detected and acquires information.
3. A hyperspectral-based fundus image detection method according to claim 1, characterized in that: the SG smoothing processing and the continuous wavelet transform method adopted in the spectrum preprocessing in the step S2 are respectively used for reducing the influence of random noise, improving the signal-to-noise ratio of the spectrogram and deducting the interference of the instrument background to the signal;
the image preprocessing of step S2 includes the steps of:
s201, firstly, correcting the fundus image, including gray correction and geometric correction; performing graying processing on the obtained color image, and after a gray image of a G channel is selected, extracting a background mask from the hyperspectral fundus image in order to prevent the influence of a non-fundus image observation area and reduce the subsequent calculation amount of fundus image registration and image analysis; the specific background mask extraction algorithm is as follows:
(1) if the gray level of the hyperspectral fundus image is L, the gray level range is [0, L-1], the total average value of the image is u, and the following steps are performed:
u=wB(t)uB(t)+wo(t)uo(t)
the gray threshold t divides the pixels in the image into foreground pixels and background pixels; the proportion of foreground pixels in the image is wo(t) with mean value uo(t), and the proportion of background pixels is wB(t) with mean value uB(t);
(2) The optimal threshold t* of the image is the one that maximizes the between-class variance:

t* = argmax_{0 ≤ t ≤ L-1} w_B(t)·w_o(t)·[u_B(t) − u_o(t)]²

After the optimal threshold is obtained, threshold segmentation is performed on the image: pixels whose gray value exceeds t* are assigned to the foreground of the mask, and the remaining pixels to the background;
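The threshold search above is the classic between-class-variance (Otsu) criterion. A minimal sketch, assuming the band image is available as an 8-bit NumPy array (the function name `otsu_mask` and the convention that the mask foreground lies above t* are ours):

```python
import numpy as np

def otsu_mask(gray, levels=256):
    """Return the optimal threshold t* and a foreground/background mask."""
    hist = np.bincount(gray.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()                     # probability of each gray level
    t = np.arange(levels)
    w_o = np.cumsum(p)                        # class proportion w_o(t), levels <= t
    w_b = 1.0 - w_o                           # class proportion w_B(t)
    mu = np.cumsum(p * t)                     # cumulative first moment
    u = mu[-1]                                # total mean u = w_B*u_B + w_o*u_o
    with np.errstate(divide="ignore", invalid="ignore"):
        u_o = mu / w_o                        # class mean u_o(t)
        u_b = (u - mu) / w_b                  # class mean u_B(t)
        var_b = w_o * w_b * (u_o - u_b) ** 2  # between-class variance
    t_star = int(np.nanargmax(var_b))         # t maximizing the variance
    return t_star, (gray > t_star).astype(np.uint8)
```

On a fundus band image the dark surround falls below t* and is masked out, so later registration and analysis only touch the circular fundus region.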
After the segmentation is finished, in order to eliminate the influence of the image distortion of the hyperspectral fundus camera on the analysis, geometric correction is performed on the masked image by means of a polynomial coordinate transformation;
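A sketch of such a polynomial coordinate transformation, here a second-order polynomial fitted to control-point pairs by least squares; the function names and the choice of the six monomial terms are ours, as the patent does not specify the polynomial order:

```python
import numpy as np

def poly2_terms(pts):
    """Second-order monomial basis [1, x, y, xy, x^2, y^2] for each point."""
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_poly2(src, dst):
    """Least-squares fit of the transform mapping src control points to dst."""
    coef, *_ = np.linalg.lstsq(poly2_terms(src), dst, rcond=None)
    return coef                                # shape (6, 2): one column per axis

def apply_poly2(coef, pts):
    """Map points through the fitted polynomial transform."""
    return poly2_terms(pts) @ coef
```

At least six well-spread control points are needed to determine the twelve coefficients; for the correction itself, the inverse mapping would be applied to resample the masked image onto the corrected grid.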
S202, extracting the useful information from the corrected image with a Gabor filter, extracting the main trunks and the fine blood vessels from the image with the top-hat transform, and finally combining the two methods to complete a dynamically adjustable enhancement of the fundus image so that its details become clearer; the algorithm is as follows:
(1) Taking the real part of the general expression of the two-dimensional Gabor filter and dropping the leading constant gives:

G(x, y, θ) = exp(−(x′² + y′²)/(2σ²))·cos(2πf·x′), with x′ = x·cosθ + y·sinθ and y′ = −x·sinθ + y·cosθ

where θ = nπ/N is the direction of the filter, n = 0, 1, …, N−1, N is the number of directions, N = 18, f is the center frequency of the filter, σ is the Gaussian envelope space constant, σ = k·s/π, k ∈ [0.5, 1.5], and s is the size of the filter mask;
(2) Let the current image be I(x, y). For the above filter, a small scale α and a large scale β are selected for each direction θ, giving the corresponding masks G_α(x, y, θ) and G_β(x, y, θ); their filtering results are the convolutions F_α(i, j, θ) = (I ∗ G_α)(i, j) and F_β(i, j, θ) = (I ∗ G_β)(i, j), which are combined into the response F(i, j, θ) of direction θ;
(3) The filtering operation is performed in the 18 directions, and every pixel (i, j) of the resulting image F1 is:

F1(i, j) = max[F(i, j, 0), F(i, j, π/N), F(i, j, 2π/N), …, F(i, j, (N−1)π/N)]
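A compact sketch of the directional Gabor enhancement above, combining the mask construction, a naive same-size convolution, and the pixel-wise maximum over the N directions (a single scale is used for brevity, and the function names are ours):

```python
import numpy as np

def gabor_mask(size, f, theta, k=1.0):
    """Real part of the 2-D Gabor filter with the leading constant dropped."""
    sigma = k * size / np.pi                 # sigma = k*s/pi, as in the claim
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * f * xr)

def conv_same(img, kern):
    """'Same'-size 2-D correlation via a sliding window (illustration only)."""
    kh, kw = kern.shape
    pad = np.pad(img, ((kh // 2,) * 2, (kw // 2,) * 2), mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(pad, (kh, kw))
    return np.einsum("ijkl,kl->ij", win, kern)

def gabor_max(img, size=15, f=0.1, n_dirs=18):
    """F1(i, j): maximum of the filter responses over the 18 directions."""
    resp = [conv_same(img, gabor_mask(size, f, n * np.pi / n_dirs))
            for n in range(n_dirs)]
    return np.max(np.stack(resp), axis=0)
```

Taking the maximum over directions keeps the strongest oriented response at each pixel, so vessels are enhanced regardless of their local orientation.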
Then the image I(x, y) is subjected to the top-hat transformation in the same 18 directions and the results are summed, giving the image F2(x, y):

F2(x, y) = Σ_{i=0}^{N−1} [I(x, y) − (I ∘ B_i)(x, y)]

where ∘ denotes the morphological opening operation and B_i is a linear structuring element with angle iπ/N and length L; the image obtained from F2(x, y) after Gaussian smoothing, normalization and gray-level transformation is F′2(x, y);
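A numpy-only sketch of this directional top-hat sum, with the opening implemented as an erosion followed by a dilation over an approximately linear structuring element (the rasterization of B_i and the function names are our own choices):

```python
import numpy as np

def line_se(length, theta):
    """Boolean mask approximating a linear structuring element of angle theta."""
    half = length // 2
    se = np.zeros((length, length), dtype=bool)
    for t in np.linspace(-half, half, 4 * length):
        r = int(round(half - t * np.sin(theta)))
        c = int(round(half + t * np.cos(theta)))
        if 0 <= r < length and 0 <= c < length:
            se[r, c] = True
    return se

def _morph(img, se, reduce_fn):
    """Grayscale erosion (np.min) or dilation (np.max) over the footprint se."""
    k = se.shape[0]
    pad = np.pad(img, k // 2, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(pad, se.shape)
    return reduce_fn(win[..., se], axis=-1)

def tophat_sum(img, length=9, n_dirs=18):
    """F2 = sum over the 18 directions of [I - (I opened by B_i)]."""
    img = np.asarray(img, dtype=float)
    total = np.zeros_like(img)
    for i in range(n_dirs):
        se = line_se(length, i * np.pi / n_dirs)
        opened = _morph(_morph(img, se, np.min), se, np.max)  # opening
        total += img - opened
    return total
```

The opening removes bright structures narrower than B_i in that direction, so the top-hat residue I − (I ∘ B_i) isolates thin bright vessels; summing over directions accumulates vessel evidence at every orientation.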
The image F1(x, y) is normalized into the 8-bit grayscale image F′1(x, y) by the following formula:

F′1(x, y) = 255·[F1(x, y) − min F1] / [max F1 − min F1]

Let A and B be weighting factors; the final enhanced image P(x, y) is then:

P(x, y) = A·F′1(x, y) + B·F′2(x, y).
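The normalization and weighted fusion of the two enhanced images can be sketched as follows (min-max scaling into 8 bits per the formula above; the function names are ours):

```python
import numpy as np

def to_uint8(img):
    """Min-max normalize an image into the 8-bit range [0, 255]."""
    img = np.asarray(img, dtype=float)
    lo, hi = img.min(), img.max()
    if hi == lo:                       # flat image: nothing to stretch
        return np.zeros(img.shape, dtype=np.uint8)
    return np.round((img - lo) / (hi - lo) * 255).astype(np.uint8)

def fuse_enhanced(f1, f2, a=0.5, b=0.5):
    """P = A*F1' + B*F2': weighted sum of the normalized enhanced images."""
    return a * to_uint8(f1).astype(float) + b * to_uint8(f2).astype(float)
```

The weights A and B make the enhancement "dynamically adjustable" as the claim puts it: raising A emphasizes the oriented Gabor responses, raising B emphasizes the thin-vessel top-hat residue.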
4. The hyperspectral-based fundus image detection method according to claim 1, characterized in that: the fundus image analysis of step S4 performs SIFT image registration, after the processing of steps S201 and S202, on the principal-component images of each band and on the amber-band images obtained from captures taken at different times, so that the information of images of the same band is superimposed onto a single image, reducing the amount of data to be processed; finally all the registered images are fused, which facilitates clinical analysis and diagnosis by doctors; all image data and spectral information are transmitted to the big-data platform, which further completes the analysis of the spectral curves and finally combines the image and spectral information to give an eye-disease assessment.
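The claim names SIFT for the registration step, which in practice would use a feature library such as OpenCV. As a self-contained stand-in, the sketch below substitutes phase correlation, which recovers a pure translation between two same-band captures, followed by the averaging fusion; the function names and the translation-only simplification are ours:

```python
import numpy as np

def register_shift(ref, mov):
    """Integer (dy, dx) correction aligning mov to ref by phase correlation.
    A translation-only substitute for the SIFT registration named in the claim."""
    f = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    r = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real   # normalized cross-power peak
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    h, w = ref.shape
    return (dy - h if dy > h // 2 else dy,           # wrap to signed shifts
            dx - w if dx > w // 2 else dx)

def fuse_registered(images):
    """Superimpose the registered same-band images by averaging."""
    return np.mean(np.stack(images), axis=0)
```

Both images are assumed to be same-shape grayscale arrays; applying `np.roll` with the returned shift brings `mov` back into alignment with `ref` before fusion.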
5. The hyperspectral-based fundus image detection method according to claim 1, characterized in that: in the result display of step S5, the processed images of the green band G, the yellow band Y, the red band R, the infrared band IR and the amber band, together with the fused image, the diagnostic information of the big database, and the artificial diagnostic information given by the doctor in combination with the images, are all displayed on the terminal device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010708640.3A CN111815617B (en) | 2020-07-22 | 2020-07-22 | Fundus image detection method based on hyperspectrum |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111815617A true CN111815617A (en) | 2020-10-23 |
CN111815617B CN111815617B (en) | 2023-11-17 |
Family
ID=72862116
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112560597A (en) * | 2020-12-02 | 2021-03-26 | 吉林大学 | Microscopic hyperspectral COVID-19 detection and identification method |
CN112905823A (en) * | 2021-02-22 | 2021-06-04 | 深圳市国科光谱技术有限公司 | Hyperspectral substance detection and identification system and method based on big data platform |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110242306A1 (en) * | 2008-12-19 | 2011-10-06 | The Johns Hopkins University | System and method for automated detection of age related macular degeneration and other retinal abnormalities |
CN105809188A (en) * | 2016-02-26 | 2016-07-27 | 山东大学 | Fungal keratitis image identification method based on AMBP improved algorithm |
CN106056157A (en) * | 2016-06-01 | 2016-10-26 | 西北大学 | Hyperspectral image semi-supervised classification method based on space-spectral information |
CN108197640A (en) * | 2017-12-18 | 2018-06-22 | 华南理工大学 | High spectrum image fast filtering method based on three-dimensional Gabor filter |
CN109544540A (en) * | 2018-11-28 | 2019-03-29 | 东北大学 | A kind of diabetic retina picture quality detection method based on image analysis technology |
WO2019100585A1 (en) * | 2017-11-25 | 2019-05-31 | 深圳市前海安测信息技术有限公司 | Fundus camera-based monitoring system and method for prevention and treatment of potential diseases based on traditional chinese medicine |
Non-Patent Citations (2)
Title |
---|
HSIN-YU YAO et al.: "Hyperspectral Ophthalmoscope Images for the Diagnosis of Diabetic Retinopathy Stage", Clinical Medicine, pages 1-16 *
BAI Rui: "Research on Hyperspectral Remote Sensing Image Classification Based on Multi-Feature Fusion", China Master's Theses Full-text Database, Information Science and Technology, no. 2, pages 1-47 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||