CN110448267B - Multimode fundus dynamic imaging analysis system and method - Google Patents


Info

Publication number
CN110448267B
CN110448267B (application CN201910844043.0A)
Authority
CN
China
Prior art keywords
image
light source
blood vessel
fundus
vessel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910844043.0A
Other languages
Chinese (zh)
Other versions
CN110448267A (en)
Inventor
刘刚军
于泽宽
赵鑫
邹达
冯夕萌
邱斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Beiao New Vision Medical Equipment Co ltd
Original Assignee
Chongqing Beiao New Vision Medical Equipment Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Beiao New Vision Medical Equipment Co ltd filed Critical Chongqing Beiao New Vision Medical Equipment Co ltd
Priority to CN201910844043.0A priority Critical patent/CN110448267B/en
Publication of CN110448267A publication Critical patent/CN110448267A/en
Application granted granted Critical
Publication of CN110448267B publication Critical patent/CN110448267B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/12 Objective types for looking at the eye fundus, e.g. ophthalmoscopes
    • A61B3/1241 Objective types for looking at the eye fundus specially adapted for observation of ocular blood flow, e.g. by fluorescein angiography
    • A61B3/14 Arrangements specially adapted for eye photography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/60 Analysis of geometric attributes
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic
    • G06T2207/30101 Blood vessel; Artery; Vein; Vascular
    • G06T2207/30104 Vascular flow; Blood flow; Perfusion

Abstract

The invention provides a multimode fundus dynamic imaging analysis system and method. The system comprises a first light source emitter, a first lens, a dichroic mirror, a reflective mirror, a hollow reflective mirror, an eyepiece and the fundus. Light emitted by the first light source emitter passes sequentially through the first lens, the dichroic mirror, the reflective mirror, the hollow reflective mirror and the eyepiece before reaching the fundus; the light incident on the reflective mirror forms an angle α with the light incident on the hollow reflective mirror. The system overcomes the limitation of the single wide-spectrum observation of a traditional fundus camera: it highlights the morphological characteristics of different fundus layers and reflection points under different narrow-band spectra, innovatively uses a dynamic optical stimulation screen to apply optical stimulation to a fundus region of interest, and records, measures and analyzes, in video form, the dynamic response of retinal microvascular oxygen content and diameter throughout the whole process.

Description

Multimode fundus dynamic imaging analysis system and method
Technical Field
The invention relates to the technical field of multimode fundus imaging, and in particular to a multimode fundus dynamic imaging analysis system and method.
Background
A traditional fundus camera uses white-light illumination with a very wide spectrum, and its sensor receives information from the entire wide spectrum at once. Information at the specific wavelengths to which particular fundus tissues or lesion sites are sensitive is therefore submerged, cannot be reflected and cannot be observed by researchers or physicians, which greatly limits the resolution of abnormal fundus structure and function and the early detection and diagnosis of fundus lesions. Multispectral technology acquires spectral images of a subject's fundus under illumination by a series of narrow-band wavelengths, revealing the morphological or pathological characteristics of different fundus layers with different reflective emphases. Multispectral fundus imaging can thus help physicians identify, understand, diagnose and manage relevant ophthalmic pathologies and diseases earlier, better and more specifically.
However, existing multispectral fundus imaging systems improve only the composition of the imaging device and do not fully exploit the disease-related physiological changes the system can capture for early screening and diagnosis. For example, patent application No. 2017101099747 proposes a multispectral fundus imaging system with low cost, small size, strong practicability and simple operation. Patent application No. 2016212023969 provides multispectral fundus layering equipment whose height can be adjusted through a lifting device when it does not match the height of the subject, accommodating subjects of different heights and providing greater convenience. Patent application No. 201810363180.8 provides a dynamic-visual-stimulation multispectral fundus imaging system that extends static multispectral fundus imaging into dynamic functional imaging by combining dynamic visual stimulation, aiming to improve the universality and accuracy of fundus disease diagnosis by combining image processing and machine learning. These inventions explore, to a certain extent, the fundus physiological indicators that a multispectral fundus camera can provide.
However, such systems do not monitor eye movement, the pupil, blood vessel diameter or blood flow changes (blood flow, blood flow velocity, etc.), which are important criteria for diagnosing eye diseases and related systemic diseases. For example, patients with chronic kidney disease show high mean retinal arterial oxygen saturation, wide vein diameters and a small artery-to-vein diameter ratio. Diabetes can cause retinal microaneurysms and vascular proliferation and damage the retinal capillaries, markedly thickening their basement membrane; oxygen diffusion from the capillaries to the retinal tissue is then significantly reduced, the retinal tissue becomes hypoxic, and retinal arterial oxygen saturation increases. In hypertensive patients, the retinal vessel diameter correspondingly narrows. These indicators therefore provide more comprehensive parameters on disease-related physiological changes and help in early screening and accurate diagnosis.
Disclosure of Invention
The invention aims to at least solve the technical problems in the prior art, and particularly provides a multimode fundus dynamic imaging analysis system and a multimode fundus dynamic imaging analysis method.
In order to achieve the above object of the present invention, the present invention provides a multimode fundus dynamic imaging analysis system, comprising a first light source emitter, a first lens, a dichroic mirror, a reflective mirror, a hollow reflective mirror, an eyepiece and a fundus; the light source emitted by the first light source emitter sequentially passes through the first lens, the dichroic mirror, the reflector, the hollow reflector and the eyepiece and then reaches the fundus; the light incident to the reflector and the light incident to the hollow reflector form an angle of alpha;
the eyeground reflected light sequentially passes through the ocular lens, the hollow reflector, the relay lens and the second lens and then reaches the image collector;
the light source emitted by the second light source emitter sequentially passes through the third lens, the dichroic mirror, the reflector, the hollow reflector and the eyepiece and then reaches the fundus; the light incident on the dichroic mirror and the light incident on the reflector form an angle β;
the control end of the first light source emitter is connected with the first light source control end of the controller to control the first light source emitter to emit light sources with different wavelengths; the control end of the second light source emitter is connected with the second light source control end of the controller to control the second light source emitter to emit the stimulating light sources of different images; the image data output end of the image collector is connected with the image data input end of the controller, and the image data collected by the image collector is transmitted to the controller for recording.
In a preferred embodiment of the invention, the first light source emitter is a laser light source emitter;
and/or the second light source emitter is an image emitter;
and/or the image collector is one of a camera, a CCD camera and a CMOS camera.
In a preferred embodiment of the present invention, the first light source emitter emits an 840 nm infrared laser source under the control of the controller;
and/or the second light source emitter emits the stimulation image under control of the controller.
The invention also discloses a multimode fundus dynamic imaging analysis method, which comprises the following steps:
s1, acquiring an image to be processed;
s2, processing the image to be processed acquired in the step S1 into a segmented blood vessel image;
s3, searching the minimum pixel value of the blood vessel section on the segmented blood vessel image obtained in S2 as the gray value of the incident light intensity image;
s4, calculating the gray value of the emergent light intensity image on the segmented blood vessel image obtained in the step S2;
s5, calculating the retinal blood oxygen saturation level from the values calculated in steps S3 and S4, and presenting the calculated image.
In a preferred embodiment of the present invention, step S2 is: perform image denoising and/or image adaptive histogram processing on the image to be processed acquired in step S1 to obtain a corrected image, and process the corrected image into a segmented blood vessel image.
In a preferred embodiment of the present invention, the calculation method for processing the image to be processed or the corrected image into the segmented blood vessel image in step S2 is:
matched filter:
K(x, y) = -exp(-x^2/(2σ^2)), for |x| ≤ t1σ and |y| ≤ L/2,
where σ is the filter's scale, t1 is a constant, and L is the neighborhood length along the y-axis for noise smoothing;
and performing image convolution operation on the image to be processed or the corrected image and the matched filter to obtain a blood vessel segmentation image.
In a preferred embodiment of the present invention, in step S3 the minimum pixel value of the blood vessel cross-section is found as
I_in(i) = min over t2 ∈ [0, 1] of I(t2·p_l(i) + (1 - t2)·p_r(i)),
where c(i) denotes a point on the vessel centerline, p_l(i) and p_r(i) denote the corresponding points on the left and right vessel walls, t2 is a constant between 0 and 1, and I(·) denotes the pixel gray value at the given point.
In a preferred embodiment of the present invention, in step S4 the gray value of the emergent light intensity image is calculated as
I_out(i) = ( I(c(i) + D·n(i)) + I(c(i) - D·n(i)) ) / 2,
where I(c(i) ± D·n(i)) are the pixel gray values at the points one vessel width to either side of the centerline point c(i), D is the width of the blood vessel at each point along the centerline, and n(i) is a unit vector perpendicular to the blood vessel.
In a preferred embodiment of the present invention, the retinal blood oxygen saturation level is calculated from the optical density
OD = log10(I_out / I_in),
where I_out is the gray value of the emergent light intensity image and I_in is the gray value of the incident light intensity image.
In a preferred embodiment of the invention, for each branch point, the starting points of its branches are searched in turn. Starting from the different branches, every P points are taken as a small blood vessel segment, where P is a positive integer. The direction of the vessel in that segment is calculated and expressed as the angle θ with the horizontal. To distinguish pixels inside and outside the vessel and locate the vessel wall, a point d(x_loc_i, y_loc_i) on the vessel cross-section is probed using:
x = x_loc_i + L′ × cos(θ - π/2),
y = y_loc_i + L′ × sin(θ - π/2), where L′ is the length of the small vessel segment;
this wall-search formula is applied iteratively, judging whether the point (x, y) lies inside the vessel, to obtain the positions of the vessel pixels of the cross-section in the direction perpendicular to the vessel; the minimum gray value inside the vessel is taken as the transmitted light intensity gray value.
the vessel width D can be obtained by performing geometric distance calculation on the left and right vessel walls:
Figure BDA0002194608500000051
where, (xl, yl), (xr, yr) are coordinates at which the left and right sides of the blood vessel wall have the smallest grayscale values, respectively.
In conclusion, with the above technical scheme, the multimode fundus dynamic imaging analysis system provided by the invention overcomes the limitation of the single wide-spectrum observation of a traditional fundus camera: it highlights the morphological characteristics of different fundus layers and reflection points under different narrow-band spectra, innovatively uses a dynamic optical stimulation screen to apply optical stimulation to a fundus region of interest, and records, measures and analyzes, in video form, the dynamic response of retinal microvascular oxygen content and diameter throughout the whole process. In addition, by combining a laser light source with the laser speckle contrast imaging technique, the system can directly and quantitatively measure the blood flow velocity of the retinal microcirculation, enabling study of the functional and metabolic changes of whole retinal blood perfusion under light stimulation. The system provides a very important tool for research on ophthalmic diseases, systemic diseases and neural activity, and has great clinical and scientific research value.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a laser speckle imaging system of the present invention.
Fig. 2 is a schematic diagram of eye tracking using pupil-cornea tracking method according to the present invention.
Fig. 3 is a pupil-non-pupil template in the present invention.
Fig. 4 is a schematic diagram of a pupil acquisition process consisting of a full frame operation and a pupil candidate operation according to the present invention.
Fig. 5 is an image obtained by performing vessel segmentation by a matched filter method according to the present invention.
Fig. 6 shows the calculation result of the blood vessel width in the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
The invention provides a multimode fundus dynamic imaging analysis system, which comprises a first light source emitter 1, a first lens 2, a dichroic mirror 3, a reflective mirror 6, a hollow reflective mirror 7, an ocular lens 8 and the fundus 9. The light emitted by the first light source emitter 1 sequentially passes through the first lens 2, the dichroic mirror 3, the reflective mirror 6, the hollow reflective mirror 7 and the ocular lens 8 before reaching the fundus 9; the light incident on the reflector 6 and the light incident on the hollow reflector 7 form an angle α; the light incident on the hollow reflector 7 and the light incident on the ocular lens 8 form an angle γ.
The eyeground reflected light sequentially passes through the ocular lens 8, the hollow reflector 7, the relay lens 10 and the second lens 11 and then reaches the image collector 12;
the eyeground 9 is reached after the light source emitted by the second light source emitter 4 sequentially passes through the third lens 5, the dichroic mirror 3, the reflective mirror 6, the hollow reflective mirror 7 and the ocular lens 8; the light rays entering the dichroic mirror 3 and the light rays entering the same reflecting mirror 6 form an angle of beta; in the present embodiment, each of α °, β °, and γ ° is pi/2.
The control end of the first light source emitter 1 is connected with the first light source control end of the controller 13, and the first light source emitter 1 is controlled to emit light sources with different wavelengths; the control end of the second light source emitter 4 is connected with the second light source control end of the controller 13, and the second light source emitter 4 is controlled to emit the stimulation light sources of different images; the image data output end of image collector 12 is connected with the image data input end of controller 13, and the image data collected by image collector 12 is transmitted to the controller for recording.
In a preferred embodiment of the present invention, the first light source emitter 1 is a laser light source emitter;
and/or the second light source emitter 4 is an image emitter;
and/or image collector 12 is one of a camera, a CCD camera, a CMOS camera.
In a preferred embodiment of the present invention, the first light source emitter 1 emits an 840nm infrared laser source under the control of the controller 13;
and/or the second light source emitter 4 emits a stimulation image under the control of the controller 13.
The invention also discloses a multimode fundus dynamic imaging analysis method, which comprises the following steps:
s1, acquiring an image to be processed;
s2, processing the image to be processed acquired in the step S1 into a segmented blood vessel image;
s3, searching the minimum pixel value of the blood vessel section on the segmented blood vessel image obtained in S2 as the gray value of the incident light intensity image;
s4, calculating the gray value of the emergent light intensity image on the segmented blood vessel image obtained in the step S2;
s5, calculating the retinal blood oxygen saturation level from the values calculated in steps S3 and S4, and presenting the calculated image.
In a preferred embodiment of the present invention, step S2 is: perform image denoising and/or image adaptive histogram processing on the image to be processed acquired in step S1 to obtain a corrected image, and process the corrected image into a segmented blood vessel image.
In a preferred embodiment of the present invention, the calculation method for processing the image to be processed or the corrected image into the segmented blood vessel image in step S2 is:
matched filter:
K(x, y) = -exp(-x^2/(2σ^2)), for |x| ≤ t1σ and |y| ≤ L/2,
where σ is the filter's scale, t1 is a constant, and L is the neighborhood length along the y-axis for noise smoothing;
and performing image convolution operation on the image to be processed or the corrected image and the matched filter to obtain a blood vessel segmentation image.
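The matched-filter segmentation step described above can be sketched in code. This is an illustrative implementation, not the patent's own program: the patent shows the kernel only as an image, so the kernel below follows the standard Gaussian matched filter for vessels (a negative Gaussian profile across the vessel, truncated at |x| ≤ t1σ, with neighborhood length L along the vessel), and the function names, default parameters (σ = 2, L = 9, t1 = 3) and the orientation sweep are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve


def matched_filter_kernel(sigma=2.0, L=9, t1=3.0, theta=0.0):
    """Zero-mean Gaussian matched-filter kernel rotated by theta."""
    half_x = int(np.ceil(t1 * sigma))   # truncate at |x| <= t1*sigma
    half_y = L // 2                     # neighborhood length L along y
    y, x = np.mgrid[-half_y:half_y + 1, -half_x:half_x + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate across the vessel
    kernel = -np.exp(-xr ** 2 / (2.0 * sigma ** 2))
    kernel -= kernel.mean()             # zero mean: flat background -> no response
    return kernel


def segment_vessels(img, sigma=2.0, L=9, t1=3.0, n_angles=12, thresh=0.0):
    """Convolve with the kernel at several orientations and threshold
    the maximum response (dark vessels give a positive response)."""
    img = np.asarray(img, dtype=float)
    response = np.full(img.shape, -np.inf)
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        k = matched_filter_kernel(sigma, L, t1, theta)
        response = np.maximum(response, convolve(img, k))
    return response > thresh
```

A dark stripe on a bright background (a crude vessel model) produces a strong positive response along the stripe, while the zero-mean property keeps the response on uniform background near zero.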
In a preferred embodiment of the present invention, in step S3 the minimum pixel value of the blood vessel cross-section is found as
I_in(i) = min over t2 ∈ [0, 1] of I(t2·p_l(i) + (1 - t2)·p_r(i)),
where c(i) denotes a point on the vessel centerline, p_l(i) and p_r(i) denote the corresponding points on the left and right vessel walls, t2 is a constant between 0 and 1, and I(·) denotes the pixel gray value at the given point.
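The cross-section-minimum step can be sketched as follows. Since the patent's formula appears only as an image, this is one plausible reading: sample the gray values along the chord between the left and right wall points (the parameter t2 running over [0, 1]) and take the minimum as I_in. The function name and sampling density are illustrative.

```python
import numpy as np


def incident_intensity(img, p_left, p_right, n_samples=21):
    """Minimum gray value along the chord between the left and right
    vessel-wall points (x, y); taken as I_in for the cross-section."""
    t = np.linspace(0.0, 1.0, n_samples)          # t2 sweeps [0, 1]
    xs = np.rint((1.0 - t) * p_left[0] + t * p_right[0]).astype(int)
    ys = np.rint((1.0 - t) * p_left[1] + t * p_right[1]).astype(int)
    return float(img[ys, xs].min())
```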
In a preferred embodiment of the present invention, in step S4 the gray value of the emergent light intensity image is calculated as
I_out(i) = ( I(c(i) + D·n(i)) + I(c(i) - D·n(i)) ) / 2,
where I(c(i) ± D·n(i)) are the pixel gray values at the points one vessel width to either side of the centerline point c(i), D is the width of the blood vessel at each point along the centerline, and n(i) is a unit vector perpendicular to the blood vessel.
In a preferred embodiment of the present invention, the retinal blood oxygen saturation level is calculated from the optical density
OD = log10(I_out / I_in),
where I_out is the gray value of the emergent light intensity image and I_in is the gray value of the incident light intensity image.
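A minimal numeric sketch of the oximetry step follows. The patent's expression is shown only as an image, so both formulas here are assumed standard forms from two-wavelength retinal oximetry, not the patent's exact equations: a Beer-Lambert-style optical density from the two gray values named in the text, and a linear model of saturation versus the optical-density ratio with hypothetical calibration constants a and b.

```python
import math


def optical_density(i_out, i_in):
    """OD = log10(I_out / I_in), an assumed standard form relating the
    emergent and incident intensity gray values named in the text."""
    return math.log10(i_out / i_in)


def saturation_from_odr(od_sensitive, od_isosbestic, a, b):
    """Two-wavelength oximetry: SO2 modeled as a linear function of the
    optical-density ratio; a and b are calibration constants."""
    return a * (od_sensitive / od_isosbestic) + b
```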
In a preferred embodiment of the invention, for each branch point, the starting points of its branches are searched in turn. Starting from the different branches, every P points are taken as a small blood vessel segment, where P is a positive integer, preferably 4. The direction of the vessel in that segment is calculated and expressed as the angle θ with the horizontal. To distinguish pixels inside and outside the vessel and locate the vessel wall, a point d(x_loc_i, y_loc_i) on the vessel cross-section is probed using:
x = x_loc_i + L′ × cos(θ - π/2),
y = y_loc_i + L′ × sin(θ - π/2), where L′ is the length of the small vessel segment;
this wall-search formula is applied iteratively, judging whether the point (x, y) lies inside the vessel, to obtain the positions of the vessel pixels of the cross-section in the direction perpendicular to the vessel; the minimum gray value inside the vessel is taken as the transmitted light intensity gray value.
the vessel width D can be obtained by performing geometric distance calculation on the left and right vessel walls:
Figure BDA0002194608500000091
where, (xl, yl), (xr, yr) are coordinates at which the left and right sides of the blood vessel wall have the smallest grayscale values, respectively.
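The perpendicular wall probe and the width computation above translate directly into code. This is an illustrative sketch (the function names are not from the patent): `wall_probe` implements the x = x_loc + L′cos(θ - π/2), y = y_loc + L′sin(θ - π/2) step, and `vessel_width` is the Euclidean distance between the left and right wall points.

```python
import math


def wall_probe(x_loc, y_loc, theta, l_prime):
    """Step a distance l_prime from a centerline point, perpendicular to
    the local vessel direction theta (measured from the horizontal)."""
    x = x_loc + l_prime * math.cos(theta - math.pi / 2.0)
    y = y_loc + l_prime * math.sin(theta - math.pi / 2.0)
    return x, y


def vessel_width(p_left, p_right):
    """Geometric distance D between the left and right wall points."""
    (xl, yl), (xr, yr) = p_left, p_right
    return math.hypot(xl - xr, yl - yr)
```

For a vertical vessel (θ = π/2), the probe steps horizontally, as expected for a cross-section perpendicular to the vessel.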
Subfunction 1: 840 nm infrared laser preview and video recording. The stimulator is combined with a laser speckle imaging system, and speckle is recorded and computed simultaneously during stimulation.
The laser speckle imaging system is shown in fig. 1. The first light source emitter 1 emits a laser speckle light source, preferably a laser whose wavelength can be selected according to design requirements; in this scheme an 840 nm infrared laser source is chosen. The light enters the fundus 9 sequentially through the first lens 2, the dichroic mirror 3, the reflective mirror 6, the hollow reflective mirror 7 and the ocular objective lens 8. The light reflected from the retina is collected by the image collector 12 through the ocular objective lens 8, the hollow reflector 7, the relay lens 10 and the second lens 11. The image collector can be a camera, a CCD camera, a CMOS camera, etc. The image collector 12 sends the collected images to the controller 13 (a computer) for algorithmic processing, enabling real-time recording and computation of the speckle. At the same time, the second light source emitter 4 (the stimulator) stimulates the fundus in real time: the stimulation image it generates passes sequentially through the third lens 5, the dichroic mirror 3, the reflective mirror 6, the hollow reflective mirror 7 and the ocular objective lens 8 into the fundus 9. Combining the stimulator with the laser speckle imaging system makes it possible to observe changes in the fundus under stimulation, and subsequent algorithmic processing extracts further useful information.
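The patent names laser speckle contrast imaging but gives no formula, so the following sketch uses the standard spatial speckle contrast K = σ/⟨I⟩ computed over a sliding window; faster flow blurs the speckle and lowers K. The window size and function name are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def speckle_contrast(img, win=7):
    """Spatial laser speckle contrast K = std/mean over a win x win window
    (standard LASCA; lower K indicates faster blood flow)."""
    img = np.asarray(img, dtype=float)
    mean = uniform_filter(img, size=win)
    mean_sq = uniform_filter(img ** 2, size=win)
    var = np.clip(mean_sq - mean ** 2, 0.0, None)   # clip tiny negative fp error
    return np.sqrt(var) / np.maximum(mean, 1e-12)
```

A perfectly uniform image has zero contrast, while a strongly modulated pattern has contrast near one.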
Subfunction 2: eye movements are tracked using near-infrared imaging at 840 nm.
Eye movement studies are widely used in the following fields of research: human factors, behavioral studies, pattern recognition, marketing studies, medical studies, highway engineering studies, driver training and evaluation, instrument panel design evaluation and reading studies, and the like.
There are three basic forms of eye movement: fixation, saccade, and smooth pursuit. When looking at things, the eyes are in fact performing different forms of movement. First, both eyes must be held in a certain orientation so that the image of the object falls on the fovea of both retinas for the clearest vision; this aiming of the eyes at an object is called fixation. To achieve and maintain fixation on an object, the eye must make two additional movements: saccades and pursuit movements. The ultimate purpose of these forms of eye movement is to ensure clear perception of objects.
The major eye-movement recording methods are: (1) electromagnetic induction; (2) mechanical recording; (3) current recording; (4) optical recording. These methods are described below:
Electromagnetic induction method: the eye under test is anesthetized and a contact lens carrying a search coil is attached to it. Phase-sensitive detection of the voltages induced in the coil allows accurate measurement of eye movement in both the horizontal and vertical directions. This method is highly accurate, but contact with the eyeball causes discomfort to the subject.
Mechanical recording method: a small mirror is attached to the eye under test and light is directed at it; the reflected light changes as the eyeball moves, yielding an eye-movement signal. This method has the highest precision and a high bandwidth, but interferes strongly with the subject and is uncomfortable.
Current recording method: eye movement produces bioelectric phenomena. The metabolism of the cornea and the retina differ: the metabolic rate of the cornea is low and that of the retina is high, so a potential difference of 0.4-1.0 mV forms between them, with the cornea positively charged and the retina negatively charged. When the gaze is steady and no eye movement occurs, a stable reference potential can be recorded; when the eye moves horizontally, the potential difference between the skin on the left and right sides of the eye changes, and when it moves vertically, the potential difference between the upper and lower sides changes. Two pairs of silver-chloride skin-surface electrodes placed on the left, right, upper and lower sides of the eyes pick up the weak electrical signals produced as the eyeball changes direction, and after amplification the eye-movement position information is obtained. This method has a high bandwidth and low precision, and interferes considerably with the subject.
Optical recording method:
corneal reflex tracking method: because the cornea is projected from the surface of the eyeball, the reflection angle of the cornea to the light from the fixed light source is changed during the movement of the eyeball, so that a near infrared LED light source and a camera fixed right in front of the head of a subject can be placed in front of the human eye, and the light reflected by the cornea is transmitted to the camera through a light beam separation device in front of the eye and a plurality of reflectors and lenses. The same device was placed in front of the other eye. The position of the corneal reflected light is determined by means of an image on a camera screen fixed in front of the head and a corresponding number of algorithms. The largest errors of this system are mainly the slippage of the head optics and the errors due to the distance between the eye and the camera lens.
Pupil-cornea tracking method: as shown in fig. 2, the system irradiates the eye with infrared light 3; the optical elements of the system are fixed in space at a relatively fixed distance from the subject's eye 1; the reflected image is recorded by the camera 4 through the optical element 2; the data obtained by the camera 4 are processed by a computer or microprocessor to discriminate between the pupil and the corneal reflection (CR); the corneal reflection point is used as the base point for the relative position of the eye camera and the eyeball, and the fixation point in screen space is calculated from the pupil center position coordinates. The method is accurate, has small error and causes no interference to the subject.
Subfunction 3: pupillary observation during stimulation: the pupil size is detected by the aligned iris camera, and the variation trend of the pupil size during the stimulation process is recorded.
The technology for detecting human pupil size has important research and application significance in the medical field. The pupil changes not only with light intensity; certain physiological and psychological processes also affect pupil size. Physiological, pathological and neural-consciousness information can therefore be obtained by detecting the size of the human pupil.
At present, pupil size detection methods include the electrooculographic method, the corneal reflection method, the infrared TV method, the infrared photoelectric reflection method, the pupil-cornea tracking method, mathematical morphology methods and image processing methods; among these, the image processing method has the advantages of high accuracy, small error and no interference to the human eye, and has become the most widely applied pupil size detection method.
The image processing method for pupil size detection is generally divided into the following steps: image preprocessing, candidate pupil map acquisition, binarization, edge detection, pupil boundary storage and pupil fitting to determine the pupil position and size.
For pupils that change under stimulation of fixed wavelength and light intensity, the aligned iris camera captures the relevant images, which are then processed to detect the position and size of the pupil and to record its variation trend during the stimulation process. The concrete implementation is as follows:
the pupil acquisition process consists of two steps, a full frame operation and a pupil candidate operation.
In the full-frame operation, the following image processing procedures are performed:
1/4 x 1/4 downsampling the iris image to reduce the amount of computation;
performing filtering enhancement on the sub-images obtained by down-sampling, where the filter kernel is a pupil/non-pupil template, as shown in fig. 3: the horizontal-line marks represent pixels of the pupil area in the template, the oblique-line marks represent pixels of the non-pupil area, and the cross-line mark represents the central pixel of the template; the enhanced images are stored in a pupil candidate list;
in the pupil candidate operation, the following image processing procedures are performed:
adjusting the center of each image in the pupil candidate list to be matched with the same position of the corresponding high-resolution image to obtain an initial constraint square so as to define the ROI of the high-resolution image;
performing binarization on the ROI by a pixel intensity method and a pseudo-gradient method, and performing edge detection on the binary images, where the pseudo-gradient edge detection builds on the edge image obtained by the pixel intensity method; as shown in fig. 4, the upper example images are the result of the pixel intensity method and the lower example images are the result of the pseudo-gradient method;
combining the two binary edge maps to obtain a pupil edge pixel map, and storing the pupil edge pixel map in a corresponding pupil edge pixel list;
the pupil position and size are determined using a best fit circle in combination with a least squares method.
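The final fitting step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it uses the algebraic (Kasa) least-squares circle fit on a set of pupil-edge pixel coordinates; the function name and the synthetic test points are assumptions.

```python
import numpy as np

def fit_circle_least_squares(xs, ys):
    """Algebraic (Kasa) least-squares circle fit.
    Solves  x^2 + y^2 = 2*a*x + 2*b*y + c  for (a, b, c),
    giving center (a, b) and radius sqrt(c + a^2 + b^2)."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    rhs = xs**2 + ys**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = np.sqrt(c + a**2 + b**2)
    return (a, b), radius

# Synthetic pupil edge: points on a circle of radius 40 centred at (100, 120).
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
xs = 100 + 40 * np.cos(theta)
ys = 120 + 40 * np.sin(theta)
center, r = fit_circle_least_squares(xs, ys)
print(center, r)  # close to (100.0, 120.0) and 40.0
```

In practice the edge pixels would come from the merged binary edge map of the previous step rather than a synthetic circle.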
Subfunction 4: blood vessel extraction is performed from the image or speckle result in the 840 nm infrared preview, and the blood oxygen before and after stimulation and the vessel diameter change during stimulation are analyzed.
Collecting 840 nm images, the blood vessel segmentation structure I_v is obtained by a multi-scale matched filtering method. Blood oxygen calculation before and after different stimulations can then be carried out, and the change of vessel diameter during stimulation observed. Because retinal vascular structures at 840 nm do not have a large number of labelled segmentation samples, this patent performs blood vessel segmentation with a matched filter method. The matched filter method distinguishes vessel pixels from non-vessel pixels well; since vessels run in different directions in space, an image for vessel detection can be obtained by designing multiple directional filters and convolving each with every pixel point of the original image. The concrete implementation is as follows:
First-order Gaussian matched filter:

f(x, y) = -exp(-x²/(2σ²)),  for |x| ≤ t·σ, |y| ≤ L/2,

where σ is the filter scale; the best effect is obtained when σ = 2. t is a constant, usually set to 3, since more than 99% of the area under the Gaussian curve lies within [-3σ, 3σ]. L is the neighborhood length along the y-axis used for noise smoothing. Blood vessels are approximately linear, and their direction stays the same within a certain length range, so small blocks of length L can be filtered simultaneously, which improves efficiency.
Performing image denoising and image adaptive histogram processing on the 840 nm image gives a corrected image I_GC; after the image convolution operation of I_GC with the matched filter bank W, the blood vessel segmentation image I_v is obtained, as shown in fig. 5:

I_v = I_GC * W,

where W integrates Gaussian matched filters of various scales (different values of σ; it can also be understood as filters in 12 directions).
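The construction of such a directional matched-filter bank can be sketched as follows; a minimal illustration assuming a 12-orientation bank of first-order Gaussian kernels with σ = 2, t = 3 and L = 9 as in the text (the helper name, kernel sizing and zero-mean normalization are our own choices, not taken from the patent):

```python
import numpy as np

def gaussian_matched_kernel(sigma=2.0, t=3.0, L=9, angle=0.0):
    """First-order Gaussian matched filter kernel:
    f(x, y) = -exp(-x^2 / (2 sigma^2)) for |x| <= t*sigma, |y| <= L/2,
    with the coordinate frame rotated by `angle` radians, and the
    support made zero-mean so flat background gives zero response."""
    half = max(int(np.ceil(t * sigma)), L // 2)
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates so the kernel can be oriented along any vessel direction.
    xr = xs * np.cos(angle) + ys * np.sin(angle)
    yr = -xs * np.sin(angle) + ys * np.cos(angle)
    kernel = np.where(
        (np.abs(xr) <= t * sigma) & (np.abs(yr) <= L / 2),
        -np.exp(-xr**2 / (2 * sigma**2)),
        0.0,
    )
    support = kernel < 0
    kernel[support] -= kernel[support].mean()  # zero-mean over the support
    return kernel

# A bank of 12 orientations covering 0..180 degrees, as the text describes.
bank = [gaussian_matched_kernel(angle=np.pi * k / 12) for k in range(12)]
print(len(bank), bank[0].shape)
```

A segmentation image would then be formed by convolving the corrected image with each kernel and taking, at every pixel, the maximum response over the 12 orientations.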
After obtaining the blood vessel segmentation image, for a point p_i = (x_i, y_i) on the vessel center line, let p_i^l and p_i^r respectively represent the points on the left and right vessel walls. The minimum pixel value of the vessel cross-section is searched by comparing the gray values of pixel points on the section line perpendicular to the vessel direction, expressed by the following formula:

t2* = arg min{ I(t2·p_i^l + (1 - t2)·p_i^r) },  t2 ∈ [0, 1],

where min{ } is the minimum value in the set, and arg min{f(x)} represents the value of x when f(x) takes the minimum value; I(t2·p_i^l + (1 - t2)·p_i^r) is the pixel gray value at t2·p_i^l + (1 - t2)·p_i^r, with t2 a constant from 0 to 1.
The reflection intensity (also called the emergent intensity) outside the blood vessel at the same point is represented by the gray value of pixel points about one vessel width away from the left and right vessel walls, and can be expressed as:

I_out(i) = ( I(p_i^l - D_i·n_i) + I(p_i^r + D_i·n_i) ) / 2,

where p_i^l and p_i^r are the points on the left and right vessel walls, I(p_i^l - D_i·n_i) and I(p_i^r + D_i·n_i) are respectively the pixel gray values at p_i^l - D_i·n_i and p_i^r + D_i·n_i, D_i is the width of the blood vessel at each point along the center line, and n_i is a unit vector perpendicular to the vessel segment.
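The two intensity measurements just described can be sketched together on a synthetic cross-section; a minimal illustration in which the image, wall points and helper names are all assumptions for demonstration, not data from the patent:

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinear interpolation of img at real-valued (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0] + dx * dy * img[y0 + 1, x0 + 1])

def section_intensities(img, p_left, p_right, normal, n_samples=21):
    """Minimum gray value on the wall-to-wall section line, and the mean
    background gray sampled one vessel width outside each wall."""
    p_left, p_right, normal = map(np.asarray, (p_left, p_right, normal))
    width = np.linalg.norm(p_right - p_left)
    ts = np.linspace(0.0, 1.0, n_samples)
    section = [bilinear(img, *(t * p_left + (1 - t) * p_right)) for t in ts]
    i_min = min(section)                                  # darkest in-vessel pixel
    bg_l = bilinear(img, *(p_left - width * normal))      # background, left side
    bg_r = bilinear(img, *(p_right + width * normal))     # background, right side
    return i_min, 0.5 * (bg_l + bg_r)

# Synthetic image: bright background (200) with a dark horizontal band (60)
# of 5 rows centred on row 20, mimicking a vessel cross-section.
img = np.full((40, 40), 200.0)
img[18:23, :] = 60.0
i_min, i_bg = section_intensities(img, (20.0, 18.0), (20.0, 22.0), (0.0, 1.0))
print(i_min, i_bg)  # about 60.0 inside the vessel, 200.0 in the background
```

In a real pipeline the wall points and normal would come from the centerline tracing and wall search described in the following steps.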
Starting from the branch points recorded in the previous step, for each branch point (a = 1, 2, …, k, with k the total number of vessel branches), the starting point of its branch is searched in turn. Starting from different branches, every four points are taken as a small vessel segment; the direction of the vessel segment is calculated and expressed by the included angle θ with the horizontal direction. To distinguish the pixels inside and outside the vessel, the vessel wall must be determined; for a point d(x_loc_i, y_loc_i) on the vessel section, the following formulas are used:

x = x_loc_i + L′ × cos(θ - π/2),
y = y_loc_i + L′ × sin(θ - π/2),

where L′ is the length of the small vessel segment. The vessel wall formulas are iterated continuously to judge whether the point (x, y) lies inside the vessel, giving the positions of the vessel pixel points of the section in the direction perpendicular to the vessel; the minimum gray value inside the vessel is taken as the gray value of the incident light intensity.
The vessel width D can be obtained by calculating the geometric distance between the left and right vessel walls:

D = √((x_l - x_r)² + (y_l - y_r)²),

where (x_l, y_l) and (x_r, y_r) are the coordinates of the points on the left and right sides of the vessel wall with the smallest gray value; the vessel width results are shown in fig. 6.
The calculation of the retinal blood oxygen saturation depends on the optical density ratio ODR, i.e. the ratio of the optical densities at two different wavelengths. Calculating an optical density requires values for the light intensity inside and outside the vessel: consistent with the preceding steps, the incident light intensity is represented by the minimum gray value on the cross-section of each small vessel segment, and the emergent light intensity by the gray values of pixel points at certain positions on the left and right sides of that segment. The key to blood oxygen calculation is therefore to acquire the pixel values of the fundus-image regions that represent the corresponding light intensities. The background pixel points on the two sides are selected about one vessel width outside the left and right vessel walls, and the gray values of the pixel points inside the vessel and in the background are thus obtained. The optical density function OD at that point is then:

OD_λ = log₁₀( I_out / I_in ),

where I_out is the gray value of the emergent light intensity image, I_in is the gray value of the incident light intensity image, and OD_λ is the optical density at wavelength λ, from which the retinal blood oxygen saturation is computed via the ODR.
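A small worked sketch of this step, under an assumed log10 form of the optical density (the exact formula is rendered as an image in the original) and with made-up gray values for illustration:

```python
import math

def optical_density(i_out, i_in):
    """OD = log10 of the ratio between the emergent (background) gray value
    and the incident (in-vessel minimum) gray value; assumed log10 form."""
    return math.log10(i_out / i_in)

# Hypothetical gray values at an oxygen-insensitive (isosbestic) wavelength
# and an oxygen-sensitive wavelength, for one vessel segment.
od_isosbestic = optical_density(200.0, 60.0)
od_sensitive = optical_density(190.0, 95.0)
odr = od_sensitive / od_isosbestic  # optical density ratio
print(od_isosbestic, od_sensitive, odr)
```

The ODR is commonly taken to vary approximately linearly with oxygen saturation, which is why the ratio of two such densities, rather than a single density, is used.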
Subfunction 5: the blood flow changes (blood flow, blood flow velocity, etc.) are calculated by speckle during the stimulation.
When the scattering particles move, the interference pattern changes with time. A coherent light source illuminates the biological tissue and the reflected speckle image is recorded by a camera; the movement of the scattering particles blurs the speckle image within the finite integration time, and the degree of blurring can be characterized by the speckle contrast. Laser speckle calculation methods are various and can be roughly classified into 3 types: time contrast calculation, spatial contrast calculation, and time-space contrast calculation.
Time contrast calculation method: a CCD or CMOS camera continuously collects multiple frames of raw speckle images (usually 25 or 49 frames); the standard deviation and mean of the speckle intensity at each pixel point over the time sequence are then calculated, and the contrast values of all pixel points are combined into the whole contrast image. This approach has a higher spatial resolution but a lower temporal resolution, and places higher frame-rate requirements on the camera.
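The temporal contrast computation can be sketched in a few lines; a minimal numpy illustration on a synthetic stack of 25 frames (the random data and function name are assumptions):

```python
import numpy as np

def temporal_contrast(frames):
    """Per-pixel speckle contrast K = std / mean over the time axis.
    `frames` has shape (n_frames, height, width)."""
    mean = frames.mean(axis=0)
    std = frames.std(axis=0)
    return std / np.maximum(mean, 1e-12)  # guard against division by zero

# 25 synthetic raw speckle frames, as in the text; fully developed speckle
# intensity is approximately negative-exponentially distributed.
rng = np.random.default_rng(0)
frames = rng.exponential(scale=100.0, size=(25, 64, 64))
K = temporal_contrast(frames)
print(K.shape)  # one contrast value per pixel
```

For fully developed static speckle the contrast is close to 1; motion of scatterers lowers it, which is the quantity tracked during stimulation.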
Spatial contrast calculation method: a spatial window of fixed size (typically 5 x 5 or 7 x 7 pixels) is used to calculate the standard deviation and mean of all pixels within the window, and thus the contrast value of the pixel at the center of the window. The window is moved pixel by pixel along the horizontal and vertical directions of the original speckle image, traversing the whole image to obtain the contrast value of each pixel point and finally the whole contrast image. This method has a higher temporal resolution but a lower spatial resolution.
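The moving-window spatial contrast can likewise be sketched; a minimal illustration with a 5 x 5 window that leaves a 2-pixel border at zero for simplicity (a real implementation would pad or crop; the data are synthetic):

```python
import numpy as np

def spatial_contrast(frame, win=5):
    """Speckle contrast K = std / mean in a win x win window centred on
    each interior pixel of a single raw speckle frame."""
    h, w = frame.shape
    r = win // 2
    K = np.zeros((h, w))
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = frame[y - r:y + r + 1, x - r:x + r + 1]
            m = patch.mean()
            K[y, x] = patch.std() / m if m > 0 else 0.0
    return K

rng = np.random.default_rng(1)
frame = rng.exponential(scale=100.0, size=(32, 32))  # one synthetic raw frame
K = spatial_contrast(frame)
print(K.shape)
```

Because a single frame suffices, the method keeps the camera's full temporal resolution, at the cost of the win x win spatial averaging noted in the text.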
Time-space contrast calculation method: by combining the advantages of the time contrast and spatial contrast calculation methods, this method retains relatively high temporal and spatial resolution simultaneously. The specific operation flow is to first calculate the spatial contrast of each frame, then calculate the time contrast along the time sequence on the basis of the spatial contrast, finally obtaining the processed contrast image.
It is worth noting that the spatial contrast of the speckle can be accurately estimated only if the speckle intensity follows a negative exponential probability distribution, i.e. fully developed speckle is required. The specific sampling criterion is that the minimum speckle size be at least 2 times the CCD pixel size. The minimum speckle size is related to the aperture of the camera lens as follows:
ρs = 2.44 × λ × (f/#) × (1 + M),

where λ is the laser wavelength, f/# is the f-number of the imaging lens, and M is the magnification of the imaging system.
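The sampling criterion can be checked numerically; a sketch with assumed parameters (an 840 nm source, an f/8 lens, unit magnification and a 3.45 µm pixel pitch, all hypothetical values for illustration):

```python
def min_speckle_size_um(wavelength_um, f_number, magnification):
    """Minimum speckle size: rho_s = 2.44 * lambda * (f/#) * (1 + M)."""
    return 2.44 * wavelength_um * f_number * (1 + magnification)

rho_s = min_speckle_size_um(0.840, 8.0, 1.0)  # micrometres
pixel_um = 3.45
sampling_ok = rho_s >= 2 * pixel_um  # criterion from the text: >= 2 pixels
print(rho_s, sampling_ok)
```

If the check fails for a given sensor, stopping the lens down (raising f/#) enlarges the speckle until the criterion is met.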
Laser speckle contrast images effectively reflect changes in blood flow velocity; combined with the optical stimulator, the change of fundus blood flow velocity throughout the stimulation process can be understood more comprehensively, providing more information for the diagnosis of fundus diseases.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (8)

1. The multimode fundus dynamic imaging analysis system is characterized by comprising a first light source emitter (1), a first lens (2), a dichroic mirror (3), a reflective mirror (6), a hollow reflective mirror (7), an ocular lens (8) and a fundus (9); the light source emitted by the first light source emitter (1) sequentially passes through the first lens (2), the dichroic mirror (3), the reflector (6), the hollow reflector (7) and the eyepiece (8) and then reaches the fundus (9); the light ray entering the reflector (6) and the light ray entering the hollow reflector (7) form an angle of alpha;
the fundus reflected light sequentially passes through the ocular lens (8), the hollow reflector (7), the relay lens (10) and the second lens (11) and then reaches the image collector (12);
the system further comprises a second light source emitter (4) and a third lens (5), wherein the light source emitted by the second light source emitter (4) sequentially passes through the third lens (5), the dichroic mirror (3), the reflector (6), the hollow reflector (7) and the eyepiece (8) and then reaches the fundus (9); the light entering the dichroic mirror (3) and the light entering the reflector (6) form an angle of beta;
the control end of the first light source emitter (1) is connected with the first light source control end of the controller (13) to control the first light source emitter (1) to emit light sources with different wavelengths; the control end of the second light source emitter (4) is connected with the second light source control end of the controller (13) to control the second light source emitter (4) to emit the stimulus light sources of different images; the image data output end of the image collector (12) is connected with the image data input end of the controller (13), and the image data collected by the image collector (12) is transmitted to the controller for recording;
the multimode fundus dynamic imaging analysis method of the multimode fundus dynamic imaging analysis system comprises the following steps:
s1, acquiring an image to be processed;
s2, processing the image to be processed acquired in the step S1 into a segmented blood vessel image;
s3, searching the minimum pixel value of the blood vessel cross-section on the segmented blood vessel image obtained in S2 as the gray value of the incident light intensity image; the calculation method for searching the minimum pixel value of the blood vessel cross-section is:

t2* = arg min{ I(t2·p_i^l + (1 - t2)·p_i^r) },  t2 ∈ [0, 1],

wherein p_i = (x_i, y_i) represents a point on the center line of the blood vessel, p_i^l and p_i^r respectively represent points on the left and right vessel walls, t2 is a constant from 0 to 1, and I(t2·p_i^l + (1 - t2)·p_i^r) is the pixel gray value at t2·p_i^l + (1 - t2)·p_i^r;
s4, calculating the gray value of the emergent light intensity image on the segmented blood vessel image obtained in the step S2;
s5, calculating the retinal blood oxygen saturation level from the values calculated in steps S3 and S4, and presenting the calculated image.
2. The multimode fundus dynamic imaging analysis system according to claim 1, wherein the first light source emitter (1) is a laser light source emitter;
and/or the second light source emitter (4) is an image emitter;
and/or the image collector (12) is one of a camera, a CCD camera and a CMOS camera.
3. The multimode fundus dynamic imaging analysis system according to claim 2, wherein the first light source emitter (1) emits a 840nm infrared laser source under the control of the controller (13);
and/or the second light source emitter (4) emits a stimulation image under control of the controller (13).
4. The multimode fundus dynamic imaging analysis system according to claim 1, wherein step S2 is: and (4) performing one or any combination of image denoising and image adaptive histogram processing on the to-be-processed image acquired in the step (S1) to obtain a corrected image, and processing the obtained corrected image into a segmented blood vessel image.
5. The multimode fundus dynamic imaging analysis system according to claim 1 or 4, wherein the calculation method of processing the image to be processed or the corrected image into the segmented blood vessel image in step S2 is:

matching a filter:

f(x, y) = -exp(-x²/(2σ²)),  |x| ≤ t1·σ,  |y| ≤ L/2,

where σ is the filter's scale and L is the neighborhood length along the y-axis for noise smoothing;

and performing image convolution operation on the image to be processed or the corrected image and the matched filter to obtain a blood vessel segmentation image.
6. The multimode fundus dynamic imaging analysis system according to claim 1, wherein in step S4 the gray value of the emergent light intensity image is calculated as:

I_out(i) = ( I(p_i^l - D_i·n_i) + I(p_i^r + D_i·n_i) ) / 2,

wherein p_i^l and p_i^r are the points on the left and right vessel walls, I(p_i^l - D_i·n_i) and I(p_i^r + D_i·n_i) are respectively the pixel gray values at p_i^l - D_i·n_i and p_i^r + D_i·n_i, D_i is the width of the blood vessel at each point along the center line, and n_i is a unit vector perpendicular to the blood vessel.
7. The multimode fundus dynamic imaging analysis system according to claim 1, wherein the retinal blood oxygen saturation is calculated from the optical density:

OD_λ = log₁₀( I_out / I_in ),

wherein I_out is the gray value of the emergent light intensity image and I_in is the gray value of the incident light intensity image.
8. The multimode fundus dynamic imaging analysis system according to claim 6 or 7, wherein, for each branch point, the starting point of its branch is searched in turn; starting from different branches, every P points are taken as a small blood vessel segment, wherein P is a positive integer; the direction of the blood vessel of the small segment is calculated and expressed by the included angle θ with the horizontal direction; pixel points inside and outside the blood vessel are distinguished by judging the blood vessel wall: for a point d(x_loc_i, y_loc_i) on the blood vessel section, the following formulas are used:

x = x_loc_i + L′ × cos(θ - π/2),
y = y_loc_i + L′ × sin(θ - π/2),

wherein L′ is the length of the small vessel segment; the vessel wall formulas are iterated continuously to judge whether the point (x, y) is in the vessel, the positions of the vessel pixel points of the section in the direction perpendicular to the vessel are obtained, and the minimum gray value inside the vessel is taken as the transmitted light intensity gray value;

the vessel width D can be obtained by performing geometric distance calculation on the left and right vessel walls:

D = √((x_l - x_r)² + (y_l - y_r)²),

wherein (x_l, y_l) and (x_r, y_r) are respectively the coordinates of the points on the left and right sides of the blood vessel wall with the smallest gray value.
CN201910844043.0A 2019-09-06 2019-09-06 Multimode fundus dynamic imaging analysis system and method Active CN110448267B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910844043.0A CN110448267B (en) 2019-09-06 2019-09-06 Multimode fundus dynamic imaging analysis system and method


Publications (2)

Publication Number Publication Date
CN110448267A CN110448267A (en) 2019-11-15
CN110448267B true CN110448267B (en) 2021-05-25

Family

ID=68491004


Country Status (1)

Country Link
CN (1) CN110448267B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ES2911698T3 (en) 2014-11-07 2022-05-20 Ohio State Innovation Foundation Procedures and apparatus for making a determination on an eye under ambient lighting conditions
WO2021133400A1 (en) * 2019-12-27 2021-07-01 Ohio State Innovation Foundation Methods and apparatus for making a determination about an eye using color temperature adjusted ambient lighting
US11622682B2 (en) 2019-12-27 2023-04-11 Ohio State Innovation Foundation Methods and apparatus for making a determination about an eye using color temperature adjusted ambient lighting
CN111035361B (en) * 2019-12-28 2022-06-21 重庆贝奥新视野医疗设备有限公司 Fundus camera imaging and illuminating system
CN113359294B (en) * 2020-03-06 2022-11-22 苏州苏大维格科技集团股份有限公司 Micro optical system
US11950848B2 (en) * 2020-08-10 2024-04-09 Welch Allyn, Inc. Fundus imaging for microvascular assessment
CN116327111B (en) * 2023-02-28 2024-01-16 中山大学中山眼科中心 Fundus blood vessel blood oxygen function coefficient measurement system and method based on fundus photo

Citations (28)

Publication number Priority date Publication date Assignee Title
CN1408319A (en) * 2001-09-04 2003-04-09 皇家菲利浦电子有限公司 Method for blood vessel photographic image process by digital reduction method
CN102800087A (en) * 2012-06-28 2012-11-28 华中科技大学 Automatic dividing method of ultrasound carotid artery vascular membrane
CN102999905A (en) * 2012-11-15 2013-03-27 天津工业大学 Automatic eye fundus image vessel detecting method based on PCNN (pulse coupled neural network)
CN103190932A (en) * 2013-04-22 2013-07-10 华北电力大学(保定) Method for estimating stress and strain of coronary artery blood vessel wall
CN103340620A (en) * 2013-05-31 2013-10-09 中国科学院深圳先进技术研究院 Tube wall stress phase angle measuring method and system
CN203576470U (en) * 2013-11-15 2014-05-07 浙江大学 Spectral domain OCT detection system based on segmented spectrum optical path coding
CN103876764A (en) * 2013-11-21 2014-06-25 沈阳东软医疗系统有限公司 Vascular imaging method and device
CN104027068A (en) * 2014-05-28 2014-09-10 北京大学 Real-time multi-mode photoacoustic human eye imaging system and imaging method thereof
CN104732499A (en) * 2015-04-01 2015-06-24 武汉工程大学 Retina image enhancement algorithm based on multiple scales and multiple directions
CN104997519A (en) * 2015-08-13 2015-10-28 中国科学院光电技术研究所 Dual-wavelength retinal vessel blood oxygen measuring system based on fundus camera
JP2016140518A (en) * 2015-01-30 2016-08-08 キヤノン株式会社 Tomographic imaging device, tomographic imaging method, and program
WO2017014137A1 (en) * 2015-07-17 2017-01-26 ソニー株式会社 Eyeball observation device, eyewear terminal, gaze detection method, and program
CN106407917A (en) * 2016-09-05 2017-02-15 山东大学 Dynamic scale distribution-based retinal vessel extraction method and system
WO2017046377A1 (en) * 2015-09-16 2017-03-23 INSERM (Institut National de la Santé et de la Recherche Médicale) Method and computer program product for processing an examination record comprising a plurality of images of at least parts of at least one retina of a patient
CN106651846A (en) * 2016-12-20 2017-05-10 中南大学湘雅医院 Method for segmenting vasa sanguinea retinae image
CN106886991A (en) * 2017-01-20 2017-06-23 北京理工大学 A kind of fuzziness automatic grading method based on colored eyeground figure
CN106943124A (en) * 2012-09-10 2017-07-14 俄勒冈健康科学大学 Local circulation is quantified with optical coherence tomography angiogram
CN107862724A (en) * 2017-12-01 2018-03-30 中国医学科学院生物医学工程研究所 A kind of improved microvascular blood flow imaging method
CN108257126A (en) * 2018-01-25 2018-07-06 苏州大学 The blood vessel detection and method for registering, equipment and application of three-dimensional retina OCT image
CN108309229A (en) * 2018-04-18 2018-07-24 电子科技大学 A kind of hierarchical structure division methods for eye fundus image retinal vessel
CN108520512A (en) * 2018-03-26 2018-09-11 北京医拍智能科技有限公司 A kind of method and device measuring eye parameter
CN108618749A (en) * 2017-03-22 2018-10-09 南通大学 Retinal vessel three-dimensional rebuilding method based on portable digital fundus camera
CN108670192A (en) * 2018-04-21 2018-10-19 重庆贝奥新视野医疗设备有限公司 A kind of multispectral eyeground imaging system and method for dynamic vision stimulation
CN109199322A (en) * 2018-08-31 2019-01-15 福州依影健康科技有限公司 A kind of macula lutea detection method and a kind of storage equipment
CN109547677A (en) * 2018-12-06 2019-03-29 代黎明 Eye fundus image image pickup method and system and equipment
CN208892542U (en) * 2018-04-16 2019-05-24 中国科学院苏州生物医学工程技术研究所 Optical coherence tomography and the confocal synchronous imaging system of spot scan
CN109829942A (en) * 2019-02-21 2019-05-31 韶关学院 A kind of automatic quantization method of eye fundus image retinal blood vessels caliber
CN110189320A (en) * 2019-05-31 2019-08-30 中南大学 Segmentation Method of Retinal Blood Vessels based on middle layer block space structure

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US7712898B2 (en) * 2006-04-03 2010-05-11 University Of Iowa Research Foundation Methods and systems for optic nerve head segmentation
US20180055355A1 (en) * 2015-09-11 2018-03-01 Marinko Venci Sarunic Systems and Methods for Angiography and Motion Corrected Averaging
EP3563347A4 (en) * 2016-12-27 2020-06-24 Gerard Dirk Smits Systems and methods for machine perception


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Zhe Jiang, Zekuan Yu, Qiushi Ren, "A super-resolution method-based pipeline for fundus fluorescein angiography imaging," BioMedical Engineering Online, vol. 17, pp. 1-19, Dec. 19, 2018 *
Xian Yongli, Dai Yun, Gao Chunming, Du Rui, "Dual-wavelength retinal oximetry measurement system," Opto-Electronic Engineering, vol. 43, no. 6, pp. 68-74, Jun. 30, 2016 *
Jiang Yuanyuan, Zhou Chuanqing, Ren Qiushi, "Retinal image segmentation based on optical coherence tomography," Beijing Biomedical Engineering, vol. 30, no. 5, pp. 453-456, Oct. 31, 2011 *

Also Published As

Publication number Publication date
CN110448267A (en) 2019-11-15

Similar Documents

Publication Publication Date Title
CN110448267B (en) Multimode fundus dynamic imaging analysis system and method
EP3785602B1 (en) Multi-spectral fundus imaging system and method using dynamic visual stimulation
US7370967B2 (en) Method and apparatus for optical imaging of retinal function
US8801183B2 (en) Assessment of microvascular circulation
Tavakoli et al. A complementary method for automated detection of microaneurysms in fluorescein angiography fundus images to assess diabetic retinopathy
US20220160228A1 (en) A patient tuned ophthalmic imaging system with single exposure multi-type imaging, improved focusing, and improved angiography image sequence display
CN111933275B (en) Depression evaluation system based on eye movement and facial expression
CN111128382B (en) Artificial intelligence multimode imaging analysis device
US20080021331A1 (en) Characterization of moving objects in a stationary background
US20050131284A1 (en) Characterization of moving objects in a stationary background
JP2021037239A (en) Area classification method
CN110575132A (en) Method for calculating degree of strabismus based on eccentric photography
Giancardo Automated fundus images analysis techniques to screen retinal diseases in diabetic patients
CN116172507A (en) Eye motion capturing and tear film detecting system and equipment
Noronha et al. A review of fundus image analysis for the automated detection of diabetic retinopathy
US8403862B2 (en) Time-based imaging
Santhakumar et al. A fast algorithm for optic disc segmentation in fundus images
US20220151482A1 (en) Biometric ocular measurements using deep learning
Valencia Automatic detection of diabetic related retina disease in fundus color images
Mayer Automated glaucoma detection with optical coherence tomography
Odstrčilík Analysis of retinal image data to support glaucoma diagnosis
Li Computational Methods for Enhancements of Optical Coherence Tomography
CN114596623A (en) Nystagmus type identification method based on optical flow
Semerád Theoretical and Experimental Determination of the Amount of Information in Human Ocular Biometric Characteristics
Jernigan Visual field plotting using eye movement response

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant