WO2020237520A1 - An intelligent fundus laser surgery auxiliary diagnosis system and method - Google Patents

An intelligent fundus laser surgery auxiliary diagnosis system and method thereof

Info

Publication number
WO2020237520A1
WO2020237520A1 (application PCT/CN2019/088979)
Authority
WO
WIPO (PCT)
Prior art keywords
fundus, laser, module, data, imaging
Application number
PCT/CN2019/088979
Other languages
English (en)
French (fr)
Inventor
张杰
张金莲
Original Assignee
南京博视医疗科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 南京博视医疗科技有限公司
Priority to US17/428,188 (published as US20220117780A1)
Publication of WO2020237520A1

Classifications

    • A61F9/00821 - Methods or devices for eye surgery using laser for coagulation
    • A61F9/009 - Auxiliary devices making contact with the eyeball and coupling in laser light, e.g. goniolenses
    • A61B3/102 - Objective instruments for examining the eyes, for optical coherence tomography [OCT]
    • A61B3/1025 - Objective instruments for examining the eyes, for confocal scanning
    • A61B3/12 - Objective instruments for looking at the eye fundus, e.g. ophthalmoscopes
    • A61B3/1225 - Objective instruments for looking at the eye fundus using coherent radiation
    • A61B3/13 - Ophthalmic microscopes
    • A61B3/14 - Arrangements specially adapted for eye photography
    • A61B90/37 - Surgical systems with images on a monitor during operation
    • A61F9/008 - Methods or devices for eye surgery using laser
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris
    • G16H15/00 - ICT specially adapted for medical reports, e.g. generation or transmission thereof
    • G16H20/40 - ICT specially adapted for therapies or health-improving plans relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
    • G16H30/20 - ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS
    • G16H30/40 - ICT specially adapted for processing medical images, e.g. editing
    • G16H40/63 - ICT specially adapted for the operation of medical equipment or devices, for local operation
    • G16H50/20 - ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H50/70 - ICT specially adapted for mining of medical data, e.g. analysing previous cases of other patients
    • A61B2090/373 - Surgical systems with images on a monitor during operation using light, e.g. by using optical scanners
    • A61B2090/3735 - Optical coherence tomography [OCT]
    • A61F2009/00844 - Feedback systems
    • A61F2009/00846 - Eyetracking
    • A61F2009/00851 - Optical coherence topography [OCT]
    • A61F2009/00863 - Laser eye surgery adapted for treatment at a particular location: Retina
    • A61F2009/00878 - Planning

Definitions

  • The invention relates to fundus laser diagnosis and treatment technology, and in particular to an intelligent auxiliary diagnosis system and method for fundus laser surgery.
  • Diabetic retinopathy (DR) is the leading cause of blindness among working-age people.
  • The main causes of visual impairment and blindness in patients with DR are proliferative diabetic retinopathy (PDR) and diabetic macular edema (DME), and laser photocoagulation is the primary treatment for these patients.
  • Current fundus laser treatment for patients with DR, macular degeneration, and other ophthalmic diseases relies mainly on doctors manually operating the laser for fixed-point strikes, or on two-dimensional galvanometers for array-pattern laser strikes.
  • These techniques are often not accurate enough, and the treatment relies on mechanical contact. Procedures tend to be long, and the experience for both clinicians and patients is poor; side effects include aggravated DME, permanent central vision damage, laser scars, decline of peripheral vision, visual field reduction, and scotopic vision deficiency.
  • The main purpose of the present invention is to provide an intelligent fundus laser surgery auxiliary diagnosis system and method, so as to solve the problem that preoperative diagnosis and treatment planning in existing fundus laser surgery depends heavily on the experience and manual operation of clinicians, resulting in misdiagnosis, low efficiency, and increased surgical risk.
  • Auxiliary diagnosis reports, including a preoperative diagnosis plan, intraoperative target determination, and postoperative effect prediction, can be generated automatically. This reduces the misdiagnosis rate, simplifies the clinician's diagnosis and operation workflow, ensures the accuracy of surgical treatment, improves diagnostic efficiency, and greatly reduces the risk of laser surgery.
  • An intelligent fundus laser surgery auxiliary diagnosis system includes a laser image stabilization and treatment device 1, a data control device 2, an image display device 3, and a data processing device 4.
  • The data processing device 4 includes a first database 41, a feature extraction module 42, a data analysis and matching module 45, a case feature template library 44, a second database 43, and a diagnosis report generation module 46.
  • The first database 41 stores the high-definition fundus image data acquired by the laser image stabilization and treatment device 1 at any angle and with various imaging methods.
  • Disease feature data in the fundus image are extracted by the feature extraction module 42; the data analysis and matching module 45 compares them with the disease feature data stored in the known case feature template library 44 and stores the matching result in the second database 43. If the matching degree exceeds a set threshold, the corresponding auxiliary diagnosis conclusion is given, and the auxiliary diagnosis report is then generated by the diagnosis report generation module 46.
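A minimal sketch of this matching flow, assuming feature vectors and a cosine-similarity "matching degree" (the text does not fix a particular similarity measure); the module layout, the 128-dimensional features, and the 0.85 threshold are illustrative only:

```python
import numpy as np

def match_against_templates(features, template_library, threshold=0.85):
    """Compare extracted disease features (module 42) with known case templates
    (library 44) and report whether the best match is conclusive.

    features: 1-D feature vector from a fundus image.
    template_library: dict mapping a diagnosis label to a template vector.
    Returns (best_label, best_score, conclusive).
    """
    best_label, best_score = None, -1.0
    for label, template in template_library.items():
        # Cosine similarity as an illustrative "matching degree"
        score = float(np.dot(features, template) /
                      (np.linalg.norm(features) * np.linalg.norm(template) + 1e-12))
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score, best_score >= threshold

# The matching result would be stored in the second database 43; a report is
# generated (module 46) only when the match exceeds the threshold.
features = np.random.rand(128)                       # placeholder feature vector
library = {"PDR": np.random.rand(128), "DME": np.random.rand(128)}
label, score, conclusive = match_against_templates(features, library)
if conclusive:
    print(f"Auxiliary diagnosis conclusion: {label} (matching degree {score:.2f})")
```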
  • the laser image stabilization and treatment device 1 includes:
  • The imaging diagnosis module is used to obtain reflection signals returned from any angle of the fundus and/or to obtain fundus image data in real time;
  • The laser treatment module is used to track and lock the fundus target in real time and to automatically adjust the output laser dose.
  • The imaging diagnosis module supports one or more of confocal scanning laser imaging (SLO), a line-scan fundus camera (LSO), a fundus camera, or an adaptive-optics fundus imager (AOSLO).
  • the imaging diagnosis module also supports a combination of multiple imaging forms, including one or more of SLO+OCT, fundus camera+OCT, fundus camera+SLO, or AOSLO+SLO.
  • The intelligent fundus laser surgery auxiliary diagnosis system also includes a deep learning module 47, which performs large-scale training on the collected patient fundus image data together with the disease feature data extracted from those images, and automatically performs the data analysis and matching operation to produce a matching result for medical experts' reference.
  • A matching result whose matching degree is below the set threshold is confirmed by medical experts; the case feature data corresponding to the fundus image are then written into a new case feature template and entered into the case feature template library 44, i.e. the case feature template library is updated.
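A short sketch of the library-update rule described above, with hypothetical function and variable names; only the behaviour (inconclusive match, expert confirmation, new template entered into library 44) comes from the text:

```python
def update_template_library(library, features, score, threshold, expert_label=None):
    """Return an updated copy of the case feature template library.

    If the matching degree is below the threshold, the case must be confirmed
    by a medical expert; the confirmed case features are then written in as a
    new template. All names here are illustrative.
    """
    if score >= threshold:
        return library                       # conclusive match, no update needed
    if expert_label is None:
        raise ValueError("Inconclusive match requires expert confirmation")
    updated = dict(library)
    updated[expert_label] = features         # new case feature template for library 44
    return updated
```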
  • The auxiliary diagnosis report includes the preoperative diagnosis plan, the intraoperative target determination plan, and the postoperative treatment effect prediction.
  • An intelligent fundus laser surgery auxiliary diagnosis method includes the following steps:
  • A. Use the laser image stabilization and treatment device 1 to collect high-definition fundus image data at any angle and with various imaging methods, and store them in the first database 41 of the data processing device 4;
  • If the matching degree exceeds the set threshold, a corresponding auxiliary diagnosis conclusion is given, and the auxiliary diagnosis report is then generated by the diagnosis report generation module 46.
  • step D also includes:
  • E. Use the deep learning module 47 to perform large-scale training on the collected patient fundus image data combined with the disease feature data extracted from the fundus images, and automatically perform the data analysis and matching operation to produce a matching result for medical experts' reference.
  • Step E further includes:
  • The intelligent fundus laser surgery auxiliary diagnosis system and method of the present invention not only provide a visualized, intelligent diagnosis and treatment reference plan for fundus laser surgery patients, but also provide real-time fundus image acquisition, real-time disease analysis, planning of the treatment reference area, adaptive adjustment of the laser dose, and automatic laser treatment; laser treatment with manual intervention is also supported.
  • The intelligent fundus laser surgery auxiliary diagnosis system of the present invention integrates a variety of ophthalmic fundus imaging and laser treatment technologies, realizing a one-stop diagnosis-plus-treatment service with intelligent, automated, high-precision treatment, simplified operation, and an improved patient experience.
  • The fundus laser surgery treatment device of the present invention can integrate the treatment laser through a mechanical device and share hardware with the imaging device, which saves cost.
  • The fundus laser surgery treatment device of the present invention also provides a variety of imaging diagnostic functions, including confocal laser imaging (SLO) or line-scan imaging (LSO), cross-sectional tomography (OCT), a fundus camera, and even an ultra-high-definition adaptive-optics fundus imager (AOSLO); it also supports a variety of imaging module combinations, such as SLO+OCT, fundus camera+OCT, fundus camera+SLO, or AOSLO+SLO. It can therefore adapt to different and complex application scenarios and provide real-time fundus imaging and real-time image stabilization.
  • The present invention builds a high-precision fundus navigation and target tracking system on the fundus retinal surface imaging function (such as SLO or a fundus camera), which ensures that clinicians can easily select pathological areas; it also provides intelligent disease diagnosis functions (using artificial intelligence technology) that help doctors plan before surgery, provide reference areas for the operation, and simplify the procedure.
  • The present invention adopts a data control and data processing system that can analyze preoperative imaging, diagnose the condition, and record the image data in the database; during treatment it combines real-time imaging so that the doctor can confirm the accuracy of the treatment area; and after the operation it analyzes postoperative imaging so that clinicians can evaluate the surgery, with the postoperative imaging data entered into the database for easy indexing and further use.
  • The laser output adjustment module and laser control module of the present invention can use fundus image data feedback to perform intelligent, precise laser strikes: low-power light of the same color is used for target recognition, and precise laser treatment is delivered after the treatment area is locked, helping clinicians operate.
  • The laser treatment device can also automatically adjust the spot size, and the operator can choose the spot size as needed. A traditional CW laser can be used as the laser source, or a picosecond or femtosecond laser can be used instead; when a femtosecond laser is used for fundus laser surgery, photomechanical effects can be exploited for precise treatment.
  • Figure 1 is a schematic diagram of a smart fundus laser surgery treatment system according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a hardware implementation of the laser image stabilization and treatment device 1 shown in FIG. 1 of the present invention
  • Figure 3 is a schematic diagram of a typical SLO fast scan and slow scan mechanism
  • FIG. 4 is a schematic diagram of an implementation manner of the spectroscopic device S1 shown in FIG. 2;
  • Figure 5 is a schematic diagram of fundus tracking in the sawtooth wave scanning direction realized by the sawtooth wave superimposed offset
  • FIG. 6 is a schematic diagram of a mechanical device for controlling the mirror M3 according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of a two-dimensional scanning method for controlling the position of OCT in the scanning space of the fundus according to an embodiment of the present invention
  • FIG. 8 is a schematic diagram of a design method of a spectroscopic device S3 corresponding to the auxiliary module light source according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of a mechanical and electronic combined device for notifying the user and the host control system whether the current auxiliary module is imaging mode 2 or laser treatment according to an embodiment of the present invention
  • Fig. 10 is a functional block diagram of an auxiliary diagnosis system for intelligent fundus laser surgery according to an embodiment of the present invention.
  • FIG. 11 is a schematic diagram of a smart laser treatment according to an embodiment of the present invention, which is used to provide a treatment reference plan for the clinic;
  • FIG. 12 is a schematic diagram of another smart laser treatment according to an embodiment of the present invention, which is used to provide a treatment reference plan for the clinic;
  • Fig. 13 is a schematic diagram of yet another smart laser treatment according to an embodiment of the present invention, which is used to provide a treatment reference plan for the clinic.
  • Fig. 1 is a schematic diagram of a smart fundus laser surgery treatment system according to an embodiment of the present invention.
  • The intelligent fundus laser surgery treatment system is also an ophthalmic diagnosis and treatment platform. It mainly includes a laser image stabilization and treatment device 1, a data control device 2, and an image display device 3; preferably, a data processing device 4 may also be included. Among them:
  • the laser image stabilization and treatment device 1 further includes an imaging diagnosis module 1A and a laser treatment module 1B.
  • The laser treatment module 1B can be combined with one of the imaging modules (i.e., the second imaging module 12); preferably, it can also share hardware with the second imaging module 12 to save cost and simplify control.
  • the laser treatment module 1B includes a laser output adjustment module 13 and a second imaging module 12;
  • the imaging diagnosis module 1A includes a first imaging module 11 and a coupling module 14.
  • the first imaging module 11 is set as a master module, and correspondingly, the internal scanning mirrors are master scanners.
  • the second imaging module 12 and the laser output adjustment module 13 (used for laser treatment) are configured as slave modules, and the corresponding internal scanning mirrors are slave scanners.
  • The first imaging module 11 may be a confocal scanning laser imaging device (SLO), a line-scan fundus camera (LSO), a fundus camera, or an ultra-high-definition adaptive-optics fundus imager (AOSLO).
  • the second imaging module 12 may be an optical coherence tomography (OCT) or SLO.
  • the first imaging module 11 and the second imaging module 12 support multiple imaging module combinations, such as SLO+OCT, fundus camera+OCT, fundus camera+SLO, or AOSLO+SLO.
  • the laser output adjustment module 13 has a built-in zoom lens for adjusting the laser output dose, and can also control the size of the fundus laser spot by changing the position of the zoom lens, which is convenient for clinical operation.
  • The data control device 2 further includes a laser control module 21, an imaging control module 22, and an image data acquisition module 23. Among them:
  • The first imaging module 11 and the second imaging module 12 are controlled in real time; the first imaging module 11 (e.g., SLO or LSO) and/or the second imaging module 12 (e.g., OCT) perform scanning and imaging through galvanometer mirrors.
  • The data control device 2 realizes real-time scanning of the fundus by adjusting system parameters such as the clock signal, amplitude, and frequency.
  • The data control device 2 can also simultaneously control the scanning optics in the first imaging module 11 and the second imaging module 12 and change the scanning parameters arbitrarily (including the scan angle), such as image size, image frame rate, image brightness and gray-scale control, image pixel resolution, and image dynamic range.
  • Image acquisition can be performed through the data acquisition port of the image data acquisition module 23, and the fundus images from the first imaging module 11 and the second imaging module 12 can be displayed on the image display device 3 in real time to facilitate real-time observation and diagnosis by clinicians.
  • The clinician can use the data processing device 4 to analyze the acquired images in real time and provide relevant reference treatment plans, for example marking the reference treatment area, giving the reference laser dose for each area, and giving the laser spot size for each area.
  • the laser image stabilization and treatment device 1 of the embodiment of the present invention can realize fundus target tracking and locking functions.
  • The specific process is: the fundus image information acquired by the first imaging module 11 is used to calculate real-time eye movement signals (including the motion components x and y), which are sent to the data control device 2.
  • The data control device 2 then outputs real-time control signals through the imaging control module 22 to change the position of the galvanometer in the second imaging module 12 and lock it onto the target in real time, achieving real-time target tracking and locking.
  • the real-time control signal will be calibrated in advance to ensure that the change of the galvanometer position is consistent with the actual eye offset.
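A minimal sketch of how such an eye-motion signal might be computed, assuming image registration by phase correlation between a reference fundus frame and the live frame; the registration algorithm and the calibration gains are assumptions, not taken from the text:

```python
import numpy as np

def estimate_eye_motion(reference, live):
    """Estimate the (x, y) translation of the live fundus frame relative to a
    reference frame via FFT-based phase correlation (illustrative method)."""
    cross_power = np.fft.fft2(reference) * np.conj(np.fft.fft2(live))
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts past half the image size wrap around to negative offsets
    dy, dx = [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]
    return dx, dy          # motion signals x and y sent to the data control device

# Pre-calibrated gains map pixels of image shift to galvanometer drive units so
# the mirror motion stays consistent with the actual eye offset (values illustrative).
GAIN_X, GAIN_Y = 0.8, 0.8
dx, dy = estimate_eye_motion(np.random.rand(256, 256), np.random.rand(256, 256))
galvo_command = (GAIN_X * dx, GAIN_Y * dy)
```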
  • the laser output adjustment module 13 and the second imaging module 12 of the laser treatment device support sharing a hardware system.
  • the function of fundus imaging and laser treatment can also be realized through the cooperation of the coupler.
  • The data control device 2 can, through the imaging control module 22 and the laser control module 21 respectively, control imaging of the fundus target and adjust the laser output in the laser output adjustment module 13 in real time, including adjusting output power, output switching, and output signal modulation.
  • the laser control module 21 can use two lasers with similar wavelengths, or the same laser can be used as both the treatment laser and the reference light.
  • the laser light source can be a 532nm CW or a femtosecond laser system.
  • The clinician can also observe the patient's fundus image after treatment in real time on the display screen of the image display device 3, evaluate the results of the operation in real time, and upload the fundus image to the patient's file in the database of the data processing device 4 to facilitate later follow-up observation.
  • the human eye fundus is taken as an example.
  • The laser image stabilization and treatment device 1, composed of the first imaging module 11, the second imaging module 12, and the coupling module 14, can also be used for other biological tissues, such as the stomach, the skin, and other parts. The following description still takes the human fundus as an example.
  • FIG. 2 is a schematic diagram of a hardware implementation of the laser image stabilization and treatment device 1 shown in FIG. 1 of the present invention.
  • The laser image stabilization and treatment device can be used as standalone laser fundus navigation and treatment equipment, or it can be combined with other data control devices as a complete laser surgery treatment system for clinical application.
  • the light sources L11, L12,..., L1n are multiple imaging light sources that are controlled (or modulated) by the control (signal) 11, 12,..., 1n, respectively, for the first imaging module 11 to perform imaging.
  • For example, infrared light with a wavelength of 780 nm is used for fundus reflection imaging, light with a wavelength of 532 nm is used for fundus autofluorescence imaging, and light sources of other wavelengths are used for other forms of fundus imaging.
  • The multiple imaging light sources can enter the optical system through the fiber coupling device FC2, and any one of the light sources L11...L1n is controllable (or modulatable), as shown by the control signals of the main module in Figure 2, namely control (signal) 11, ..., control (signal) 1n.
  • The control (or modulation) parameters, including output power and switch state, can optionally be synchronized with the scanning mirror or operated asynchronously. The technology for synchronization with the scanning mirror has been described in detail in a previously filed patent application and will not be repeated here.
  • The imaging light sources L11...L1n pass through the beam splitting device S1, through the scanning mirrors M11 and M12, and then through the beam splitting device S2, and enter the fundus.
  • The signal returned from the fundus, such as the reflected signal of the photoreceptor cells, the fluorescence signal excited by fundus proteins, or other returned signals, is reflected along the same optical path back to the beam splitting device S1, and then passes along another optical path through the movable beam splitting device S3 to a photodetector, such as an avalanche photodiode (APD).
  • the APD is used as a photodetector as an example for description.
  • the photodetector can also be a photomultiplier tube (PMT), CMOS, CCD, or other photodetector devices.
  • The above-mentioned photodetectors (such as APD, PMT, CMOS, or CCD) are equipped with a controllable or programmable gain adjustment mechanism that can be adjusted dynamically by a program control signal from the system host, so as to adapt to different imaging modes, for example via control signal 4 shown in Figure 2.
  • the set of scanning mirrors M11 and M12 shown in FIG. 2 are mainly used for orthogonal scanning of the fundus imaging position.
  • the scanning axes of the scanning mirrors M11 and M12 are usually 90 degrees.
  • the scanning mirror M11 can be a resonant scanner.
  • A typical practical application scenario is to set the scanning mirror M11 to scan in the horizontal direction and M12, a slow linear scanning mirror, to scan in the vertical direction.
  • the orthogonal scanning direction of the scanning mirrors M11 and M12 supports scanning in any direction of 360 degrees in a two-dimensional space.
  • the scanning mirror M11 adopts the CRS8k fast resonance mirror of Cambridge Technology. In other application systems, the CRS12k or other types of fast resonance mirrors can also be adopted.
  • the scanning mirror M12 in the embodiment of the present invention may be implemented by one two-dimensional steering mirror or two one-dimensional tilting scanning mirrors.
  • the scanning mirror M12 adopts a set of two-dimensional scanning mirrors 6220H (or 6210H) of Cambridge Technology.
  • The first axis of the 6220H (the slow scan axis) is orthogonal to the scan direction of the M11 fast scan axis; the second axis of the 6220H does not participate in scanning but is used only for target tracking, and is parallel to the scan axis of M11.
  • the scanning field of the scanning mirror M11 as a fast resonant mirror is controlled by the system host or manually.
  • the scanning motion track of M12 orthogonal to M11 is a triangular wave.
  • the sweep parameters such as the amplitude and frequency of the triangle wave, the climb period and the return period of the triangle wave, and so on are controlled by the system host.
  • the amplitude of the triangle wave determines the size of the field of view in the slow scan direction, and the frequency of the triangle wave determines the frame rate of the image system (refer to Figure 3).
  • Figure 3 is a schematic diagram of a typical SLO fast scan and slow scan mechanism.
  • Each time the fast resonant mirror scans one cycle, the slow mirror advances linearly by one step.
  • When the fast (resonant) scan of the SLO completes one sine (or cosine) period 11, the slow (linear) scan moves one step 12 in the orthogonal direction.
  • The image frame rate (fps), the resonance frequency (f) of the fast scanning mirror, and the number of lines (N) contained in each frame of the image satisfy the relationship fps = f / N.
  • N includes all the scan lines in parts 121 and 122 of FIG. 3, where 121 is the rising (climb) period of the sawtooth wave and 122 is the return period.
  • the SLO image generally does not include the 122 part of Figure 3, because the image in the 122 period and the 121 period have different pixel compression ratios. SLO images are generally only obtained from part 121 of Figure 3.
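A hedged worked example of this relation (actual values vary by system): with the CRS8k's nominal resonance of about 8 kHz and an assumed 512 lines per frame, the frame rate comes out near 15.6 fps, of which only the climb-period lines 121 form the displayed image:

```python
# Worked example with assumed parameters; real systems may differ.
f_resonant_hz = 8000        # nominal fast-mirror resonance frequency (Hz)
lines_per_frame = 512       # N, counting both climb (121) and return (122) lines
fps = f_resonant_hz / lines_per_frame
print(f"frame rate = {fps:.1f} fps")   # about 15.6 fps
```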
  • the function of the spectroscopic device S1 shown in FIG. 2 is to transmit all incident light from the coupling device FC2, but reflect all signals from the fundus to the APD.
  • One implementation is to bore a hollow cylinder along the axis of S1 so that the focused incident light from FC2 passes through, while all of the expanded light returning from the fundus is reflected to the photodetector APD; FIG. 4 shows a schematic diagram of this implementation of the beam splitting device S1 of FIG. 2.
  • the scanning mirror M12 of FIG. 2 has two independent motion axes.
  • the first movement axis is orthogonal to the movement (scanning) axis of M11, and the second movement axis is parallel to the movement (scanning) axis of M11.
  • The movement axis of M12 that is orthogonal to the movement (scanning) axis of M11 can receive two signals from the system host: one is the sawtooth wave shown in Figure 3 (parts 121 and 122), and the other is a translation signal superimposed on the sawtooth wave.
  • The sawtooth wave is used to scan the fundus to obtain a fundus image, and the translation signal is used to optically track eyeball movement in the scanning direction of the sawtooth wave, as shown in Figure 5.
  • Fig. 5 is a schematic diagram of fundus tracking in the sawtooth-wave scanning direction, realized by superimposing an offset on the sawtooth wave.
  • the control host adjusts the offset of the sawtooth wave in real time to track the position of the fundus relative to this reference surface.
  • The system control host mentioned above can be a PC equipped with the corresponding control program module, a device containing a field-programmable gate array (FPGA), a digital signal processor (DSP) device, a device using another type of electronic signal processor, or a combined device including several of these.
  • The control device uses an Intel PC (Intel i7) equipped with an NVIDIA graphics processing unit (GPU), such as a GTX 1050, to calculate the eye movement signals (x, y, θ). A Xilinx FPGA (for cost reasons, the embodiment of the present invention uses a Virtex-5 ML507 or Spartan-6 SP605 board; more powerful but more expensive Virtex-6, Virtex-7, Kintex-7, Artix-7 and other newer FPGA series, or FPGA devices from other manufacturers such as Altera, can also be used) then digitally synthesizes the y component of (x, y, θ) into the signal form of Figure 5 and sends it to a digital-to-analog converter (DAC), such as Texas Instruments' DAC5672, to control the first movement axis of the scanning mirror M12.
  • the signal in Figure 5 can also be realized by analog synthesis.
  • In that case, the sawtooth wave in Figure 5 is generated by a first DAC as the first analog signal; the offset in Figure 5, which is the y component of (x, y, θ), is generated by a second DAC as the second analog signal; the two analog signals are combined by an analog signal mixer and finally sent to the first movement axis of the scanning mirror M12.
  • The x component of the signal (x, y, θ) is generated as an analog signal by another separate DAC and sent to the second movement axis of M12 to track eyeball movement along that axis.
  • the second movement axis of the scanning mirror M12 is parallel to the scanning axis of M11.
  • The translational part (x, y) of the above-mentioned eye movement signal (x, y, θ) is thus handled by the two orthogonal movement axes of M12 to realize closed-loop optical tracking.
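A minimal numerical sketch of this digital synthesis, mirroring Figure 5's sawtooth-plus-offset drive for the first movement axis of M12; the waveform parameters and the constant offset are illustrative:

```python
import numpy as np

def slow_axis_drive(n_samples, climb_fraction=0.9, amplitude=1.0, offset_y=0.0):
    """Build one frame of the slow-axis drive: a sawtooth (climb part 121 plus
    return part 122) with a tracking offset superimposed, as in Figure 5.

    offset_y stands in for the y component of the eye-motion signal; a constant
    per frame is used here purely for illustration.
    """
    n_climb = int(n_samples * climb_fraction)
    climb = np.linspace(-amplitude, amplitude, n_climb)                 # part 121
    flyback = np.linspace(amplitude, -amplitude, n_samples - n_climb)   # part 122
    sawtooth = np.concatenate([climb, flyback])
    return sawtooth + offset_y      # digital synthesis before the DAC (e.g. DAC5672)

drive = slow_axis_drive(512, offset_y=0.05)   # shifted sawtooth sent to M12 axis 1
```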
  • In the embodiment of the invention, the rotational part (θ) in the first imaging module 11 is tracked digitally, but it can also be implemented by optical and/or mechanical closed-loop tracking in the future.
  • The optical and/or mechanical tracking technology for the rotational part (θ) has been described in detail in US Patent No. 9775515.
  • Fundus tracking and eye tracking are the same concept here. In clinical application, most of the physical movement comes from the eyeball, and eyeball movement causes the fundus image obtained by the imaging system to change position randomly over time. The equivalent consequence is that, at different times, the imaging system obtains images from different fundus positions, and the observed result is that the images jitter randomly over time.
  • The tracking technology in the embodiment of the present invention captures the eye movement signals (x, y, θ) in real time from the fundus images of the imaging system and then feeds (x, y) back to M12 in FIG. 2, so that the scanning space of the two scanning mirrors (M11, and the axis of M12 orthogonal to M11) is locked to a pre-defined physical fundus region, realizing accurate fundus tracking and stabilizing the random spatial changes of the fundus images over time.
  • Imaging mode 1 in Figure 2 (corresponding to the main module) constitutes a complete closed-loop control system for high-speed real-time tracking of the fundus position. This technology has been described in detail in US patents US9406133 and US9226656.
  • Imaging mode 2 in FIG. 2, i.e., the slave-module path "L2-M3-M2-S2-fundus" on the left, works in coordination with imaging mode 1 (the main module) shown in FIG. 2.
  • A typical application is optical coherence tomography (OCT) imaging.
  • L31/L32-M2-S2-Fundus corresponds to the fundus laser treatment device described in Figure 1.
  • the functional realization of OCT and fundus laser treatment is described in detail below.
  • the M3 is a movable mirror.
  • the movement method can be mechanical, electronic, or a combination of the two.
  • the movable part of the mirror M3 can also be replaced by a beam splitting device.
  • the state of the mirror M3 is controlled mechanically.
  • Whether M3 is inserted into or withdrawn from the optical system is determined by the state of the coupling device FC1 in Figure 2.
  • Fig. 6 is a schematic diagram of a mechanical device for controlling the mirror M3 according to an embodiment of the present invention.
  • the M3 is pushed out or put into the optical system according to the FC1's insertion and withdrawal mechanism.
  • the switch is connected to the foldable frame through a connecting rod.
  • When the switch is opened, as shown in Figure 6A, the frame is opened and the FC1 interface is also opened, allowing the treatment laser to be connected.
  • When the switch is closed, as shown in Figure 6B, the frame is at 0 degrees and the FC1 interface is closed; at this time, the treatment laser cannot be connected.
  • The foldable frame then returns to its original position (refer to Figure 2), and the imaging laser L2 can be reflected into the system.
  • the function of the mirror M3 is to allow the user to select one of the functions of imaging mode 2 or fundus laser treatment in the slave module.
  • When OCT imaging (i.e., imaging mode 2 described above) is to be realized, M3 is placed in the optical path "L2-M3-M2-S2-fundus" shown in Figure 2, so that the light from source L2 reaches the fundus.
  • M2 is a two-dimensional scanning mirror.
  • M2 can be a fast tilt mirror with two independent orthogonal control axes and a single reflective surface, or orthogonal scanning can be achieved with two one-dimensional tilt mirrors.
  • the latter case is used in the present invention, and the 6210H dual-mirror combination of Cambridge Technology of the United States is used.
  • M2 in FIG. 2 has multiple functions.
  • In the case of imaging mode 2 shown in Figure 2, the system host generates an OCT scan signal to control the scanning mode of M2, thereby controlling the two-dimensional imaging region of L2 on the fundus.
  • The system host program generates a set of orthogonal scan control bases, Sx and Sy, as shown in FIG. 7, and sends them to the control FPGA.
  • Sx and Sy are vectors with direction (sign).
  • FIG. 7 is a schematic diagram of a two-dimensional scanning method for controlling the position of OCT in the scanning space of the fundus according to an embodiment of the present invention.
  • The system host program controls the FPGA to multiply the two scanning bases (as shown in Figure 7) by their respective signed amplitudes (Ax and Ay), realizing OCT scanning in any direction over 360 degrees of the fundus and with a specified two-dimensional field of view. The scan can be expressed by the following relation:
  • OCT scan = Sx·Ax + Sy·Ay;
  • The parameters Ax and Ay are also signed (directional) quantities; Sx·Ax + Sy·Ay can realize OCT scanning in any direction in the 360-degree two-dimensional fundus space, with any field size allowed by the optical system.
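A small numerical sketch of this relation, using identical linear ramps as stand-ins for the basis waveforms Sx and Sy (the real basis shapes and amplitudes are set by the system host program and FPGA):

```python
import numpy as np

def oct_scan_drive(n_samples, ax, ay):
    """Combine the scan bases with signed amplitudes (OCT scan = Sx*Ax + Sy*Ay)
    to steer M2 along an arbitrary direction in the fundus plane.

    The ramps below are illustrative stand-ins for the host-generated bases;
    the two outputs drive the two orthogonal axes of M2.
    """
    ramp = np.linspace(-1.0, 1.0, n_samples)
    sx, sy = ramp, ramp
    return sx * ax, sy * ay

# Example: a B-scan rotated 30 degrees from the x axis with unit amplitude
angle = np.deg2rad(30)
drive_x, drive_y = oct_scan_drive(1024, ax=np.cos(angle), ay=np.sin(angle))
```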
  • L2 is an imaging light source with a wavelength of 880 nm
  • light source L31 has a wavelength of 561 nm
  • light source L32 has a wavelength of 532 nm.
  • The design of the beam splitting device S3 needs to be adapted for different auxiliary (slave) module light sources.
  • One way is to customize a different light splitting device S3 for different slave module light sources and place it at the S3 position in FIG. 2, as shown in FIG. 8.
  • FIG. 8 is a schematic diagram of a design method of a light splitting device S3 corresponding to the auxiliary module light source according to an embodiment of the present invention.
  • the spectroscopic device S3 transmits 90%-95% and reflects 5%-10% of light at 532nm and above 830nm, and transmits 5%-10% and reflects 90%-95% of light in other wavelength bands.
  • the light source L31 in the auxiliary module is the aiming light for laser treatment.
  • the aiming light reaches the fundus, and the light spot reflected from the fundus is received by the APD of the first imaging module 11, and a light spot generated by L31 is superimposed on the SLO image.
  • This spot position indicates the fundus location that the treatment light L32 will reach, since the two have nearly the same spatial position on the fundus.
  • the degree of overlap of the light sources L31 and L32 on the fundus depends on the transverse chromatic aberration (TCA) produced by the two wavelengths of 532nm and 561nm on the fundus.
  • For light at wavelengths of 532 nm and 561 nm, the TCA generated on the fundus will not exceed 10 microns.
  • Therefore, the positional error of the 532 nm treatment light from L32 will not exceed 10 microns.
  • the power of the aiming light of L31 to reach the fundus is generally below 100 microwatts, and the power of the treatment light of L32 to reach the fundus can be several hundred milliwatts or higher.
  • the amplitude of the signal reflected by L31 from the fundus to the APD is close to the amplitude of the image signal of the SLO, but the 532nm high-power therapeutic light still has a considerable signal reflected to the SLO through the spectroscopic device S3.
  • To prevent the 532 nm signal returned from the fundus from reaching the SLO, striking the APD, and overexposing it, a beam splitting device S3 is placed in front of the APD; it reflects all light below 550 nm and transmits all light above 550 nm to protect the APD.
  • The beam splitter S3 in Fig. 2 is movable, and its state is exactly opposite to that of M3: when FC1 is connected to the system, S3 is inserted into the optical system; when FC1 is not connected, S3 is pushed out of the optical system.
  • Moving S3 into and out of the optical system can be done mechanically, electronically, or by a combination of the two; in the embodiment of the present invention, a mechanical method is adopted, as shown in FIG. 6.
  • The auxiliary module integrates two functions, namely imaging with image stabilization, and laser treatment, realized with the second imaging module 12 and the laser output adjustment module 13.
  • the switching between the above two functions is achieved by changing the position of M3.
  • When M3 is placed in the optical system, the second imaging module 12 is activated and the laser treatment device does not work.
  • When M3 is pushed out of the optical system, the laser treatment function is activated and the second imaging module 12 does not work.
  • The positions of M3 and S3 in the optical system are controlled by the knob installed on the coupling device FC1, realizing dynamic switching between imaging mode 2 and clinical laser treatment.
  • Another function of the FC1 knob is to connect and disconnect one or more electronic devices to remind the user and the system host control program which of the two functions should be run.
  • FIG. 9 is a schematic diagram of a mechanical and electronic combined device for notifying the user and the host control system of whether the current auxiliary module is imaging mode 2 or laser treatment according to an embodiment of the present invention.
  • The device controls an LED indicator and, through a conductive metal sheet mounted on the FC1 knob, provides a high/low level signal to the electronic hardware to notify the user and the host control system whether the slave module is currently working in imaging/image-stabilization mode or in laser treatment mode.
  • In the default setting, A and B are disconnected, the LED is off, and point C outputs 0 V, i.e. a low level.
  • Point C is connected to the FPGA, which detects whether the input terminal is at a low level (0 V) or a high level (3.3 V or 2.5 V), so that the software automatically switches between imaging/image-stabilization mode and laser treatment mode.
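A trivial sketch of this level-based mode switch; the pin-reading interface and the mode names are hypothetical, since the text does not specify the FPGA or driver API:

```python
from enum import Enum

class SlaveMode(Enum):
    IMAGING_STABILIZATION = 0   # M3 in the optical path, LED off, point C low
    LASER_TREATMENT = 1         # M3 pushed out, LED on, point C high

def select_mode(point_c_is_high: bool) -> SlaveMode:
    """Map the FC1-knob level sensed at point C to the slave-module mode."""
    return SlaveMode.LASER_TREATMENT if point_c_is_high else SlaveMode.IMAGING_STABILIZATION

# The host control program would poll the digitized level and reconfigure itself.
print(select_mode(point_c_is_high=False))   # SlaveMode.IMAGING_STABILIZATION
```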
  • The entire system can also be operated in imaging mode 1 only, for example performing only SLO/LSO imaging without OCT; this mode of operation can be selected through the system host control program.
  • Controlling M2 in Figure 2 enables a variety of laser strike modes, including: 1) a single-point strike mode; 2) an array strike mode over a regular area; 3) a custom multi-point strike mode over an irregular area.
  • In the single-point strike mode, the user uses the real-time image of imaging mode 1 to determine the laser strike position in the pathological area; after the target is aimed at with the aiming light, the treatment light is activated with preset parameters such as laser dose and exposure time to strike the target.
  • the regular space area array strike mode is a combination of the single-point strike mode and the scanning mode of imaging mode 2, allowing the user to define the laser dose and other parameters for each position, then start the treatment light, and wait for time Hit predetermined targets one by one at intervals.
  • the customized multi-point strike mode in an irregular space area is a completely free strike mode.
  • the user customizes the laser dose, exposure time and other parameters of any strike position in the pathological zone, and then strikes the predetermined targets one by one.
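The three strike modes could be represented as a common list of strike points, as in the sketch below. The data-class names, coordinate units, and grid-spacing parameter are assumptions made only for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class StrikePoint:
    x_deg: float          # fundus position in scan coordinates
    y_deg: float
    dose_mw: float        # preset laser dose at the target
    exposure_ms: float    # preset exposure time

def single_point_plan(x, y, dose_mw, exposure_ms) -> List[StrikePoint]:
    """Mode 1: one user-selected target."""
    return [StrikePoint(x, y, dose_mw, exposure_ms)]

def regular_array_plan(x0, y0, nx, ny, pitch_deg, dose_mw, exposure_ms) -> List[StrikePoint]:
    """Mode 2: a regular grid of targets struck one by one at equal time intervals."""
    return [StrikePoint(x0 + i * pitch_deg, y0 + j * pitch_deg, dose_mw, exposure_ms)
            for j in range(ny) for i in range(nx)]

def free_form_plan(points) -> List[StrikePoint]:
    """Mode 3: arbitrary user-defined positions, each with its own dose/exposure."""
    return [StrikePoint(*p) for p in points]
```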
  • In order to precisely control the laser dose delivered to the strike target, a beam-splitting device sends a portion of the treatment light L32 to an optical power meter.
  • The control program reads the power meter in real time and dynamically adjusts the L32 output so that the laser dose reaching the strike target stays at the preset value.
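A minimal closed-loop sketch of this dose regulation is shown below. The proportional gain, the power-meter and laser interfaces, and the calibration factor relating the sampled beam to the power at the target are all assumptions, not values from the disclosure.

```python
def regulate_treatment_power(laser, power_meter, target_mw,
                             sample_to_target_gain=10.0, kp=0.2, tol_mw=0.5):
    """Adjust the L32 output until the estimated power at the target matches the preset dose.

    power_meter.read_mw() returns the power of the sampled (split-off) beam;
    multiplying by sample_to_target_gain estimates the power actually reaching the fundus.
    """
    while True:
        at_target = power_meter.read_mw() * sample_to_target_gain
        error = target_mw - at_target
        if abs(error) <= tol_mw:
            break
        # simple proportional correction of the laser output setpoint
        laser.set_output_mw(laser.get_output_mw() + kp * error)
```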
  • To precisely control the exposure time at the strike target, an FPGA hardware clock is used to control the on and off states of L32.
  • One control method can be implemented on a real-time operating system such as Linux; another can be implemented by installing real-time control software (Wind River) on a non-real-time operating system such as Microsoft Windows; a third method uses a timer on a completely non-real-time operating system such as Microsoft Windows.
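As a sketch of the last (timer-based) option, the snippet below gates the laser for a fixed exposure time from a non-real-time host. The laser shutter interface and the achievable timing precision are assumptions; in the embodiment the actual gating is performed by the FPGA hardware clock.

```python
import time

def fire_with_exposure(laser, exposure_ms: float):
    """Open the treatment beam for approximately exposure_ms milliseconds.

    On a non-real-time OS the sleep granularity (typically around 1 ms or worse)
    limits the timing accuracy; hardware (FPGA) gating avoids this limit.
    """
    laser.enable_output()          # digital 'on' (e.g. via an acousto-optic modulator)
    try:
        time.sleep(exposure_ms / 1000.0)
    finally:
        laser.disable_output()     # always switch the beam off, even on error
```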
  • After the closed-loop fundus tracking of the main module is started, the host control software displays a stabilized SLO/LSO image in real time.
  • The spatial resolution of the image-stabilization technique is approximately 1/2 of the lateral optical resolution of imaging module 1.
  • The stabilized real-time SLO/LSO image allows the user to conveniently locate the fundus position that the auxiliary module is about to process.
  • The fundus tracking of the main module is a closed-loop control system. After the fundus tracking function is activated, the commands with which the master module controls the tracking mirror M12 are sent, according to a pre-calibrated mapping relationship, to M2 of the slave module. The light coming from L2 or L31/L32 can therefore be locked to the predetermined fundus position with considerable accuracy after reaching the fundus through M2.
  • A core technique here is to use the closed-loop control commands of the main module to drive the open-loop tracking of the auxiliary module.
  • The spatial mapping between M12 and M2, that is, how the control commands (x, y, θ) of M12 are converted into the control commands (x', y', θ') of M2, depends on the design of the optical system and can be written as (x', y', θ') = f(x', y', θ'; x, y, θ) · (x, y, θ).
  • The mapping f(x', y', θ'; x, y, θ) is obtained by calibration of the optical system.
  • This core technique, in which the closed-loop control commands of the master module drive the open-loop tracking of the slave module, is an optical tracking scheme with M12 in closed loop and M2 in open loop.
  • The scanning mirror M2 of the auxiliary module can scan optically in any direction over 360° in two-dimensional space. The auxiliary module M2 therefore performs open-loop optical tracking of all three variables (x', y', θ') in the above formula, even though the main module only has closed-loop optical tracking of the translation (x, y) and digital tracking of the rotation θ.
  • The closed-loop tracking accuracy of the main module, together with the calibration accuracy of the above mapping, determines the open-loop tracking accuracy of the light delivered by the auxiliary module to the fundus, i.e. the target-locking accuracy.
  • In the current state of the art, the closed-loop optical tracking accuracy of the main module is comparable to the optical resolution of the main module's imaging system, about 15 microns, and the open-loop optical tracking accuracy of the auxiliary module can reach 2/3 to 1/2 of that, or 20-30 microns. It should be emphasized that these accuracies will differ in different system configurations.
  • The invention is mainly applied in ophthalmology, targeting cases such as diabetic retinopathy and age-related macular degeneration.
  • The fundus laser treatment technology provided by the present invention supports intelligent, automatic fundus diagnosis-and-treatment solutions and also lays the material foundation for future one-stop diagnosis and treatment services.
  • FIG. 10 is a functional block diagram of a smart fundus laser surgery auxiliary diagnosis system according to an embodiment of the present invention.
  • FIG. 11 is a schematic diagram of a smart laser treatment according to an embodiment of the present invention, which is used to provide a reference treatment plan for the clinic;
  • FIG. 12 is a schematic diagram of another smart laser treatment according to an embodiment of the present invention, which is used to provide a reference treatment plan for the clinic;
  • FIG. 13 is a further schematic diagram of smart laser treatment according to an embodiment of the present invention, which is used to provide a reference treatment plan for the clinic.
  • The intelligent fundus laser surgery auxiliary diagnosis system mainly uses the laser image-stabilization and treatment device 1 to collect high-definition fundus image data (including images and videos) acquired at any angle and with various imaging methods, which are stored in the first database 41. The fundus images are processed and analyzed by the data processing device 4: the feature extraction module 42 extracts disease feature data from the fundus image, the data analysis and matching module 45 performs a comparison operation and matches the data against the disease feature data stored in the known case feature template library 44, and the result of the matching operation is stored in the second database 43. If the degree of matching exceeds a set threshold, the corresponding auxiliary diagnosis conclusion is given, and an auxiliary diagnosis report is then generated by the diagnosis report generation module 46.
  • The main content of the auxiliary diagnosis report includes the preoperative diagnosis plan, the intraoperative target determination plan, and the postoperative treatment effect prediction.
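A schematic sketch of this extract-match-report pipeline is given below. The feature representation, the cosine-similarity matching, the threshold value, and the module interfaces are assumptions used only to make the data flow concrete; the disclosure does not specify them.

```python
import numpy as np

def match_against_templates(features: np.ndarray, template_library: dict) -> tuple:
    """Return the best-matching case template name and its similarity score."""
    best_name, best_score = None, -1.0
    for name, template in template_library.items():
        score = float(np.dot(features, template) /
                      (np.linalg.norm(features) * np.linalg.norm(template) + 1e-12))
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

def auxiliary_diagnosis(image, extract_features, template_library, threshold=0.8):
    """Extract disease features, match them, and produce an auxiliary diagnosis entry."""
    features = extract_features(image)                  # feature extraction module 42
    name, score = match_against_templates(features, template_library)  # matching module 45
    result = {"template": name, "score": score, "features": features}  # stored in database 43
    if score > threshold:
        result["conclusion"] = f"consistent with template '{name}'"
    return result
```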
  • A deep learning module 47 is further included; it performs large-scale training on the collected patient fundus image data combined with the disease feature data extracted from the fundus images, automatically performs the data analysis and matching operations (using a data fuzzy-matching algorithm), and produces matching results that medical experts can use as a reference.
  • Matching results whose degree of matching exceeds the set threshold are matched against the cases in the case feature template library and registered as cases; for matching results below the threshold, after confirmation by medical experts, the case feature data corresponding to the fundus image is written into a new case feature template and entered into the case feature template library 44, i.e. the case feature template library is updated.
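The branching described above might be sketched as follows; the record structures, the expert-confirmation hook, and the new-template naming are illustrative assumptions.

```python
def process_matching_result(result, template_library, case_registry,
                            threshold=0.8, expert_confirms=None, new_template_name=None):
    """Register high-confidence matches; route low-confidence ones to expert review."""
    if result["score"] > threshold:
        # above threshold: register the image against the matched known case
        case_registry.append({"template": result["template"], "score": result["score"]})
    elif expert_confirms is not None and expert_confirms(result) and new_template_name:
        # below threshold: after expert confirmation, add a new case feature template
        template_library[new_template_name] = result["features"]
```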
  • The deep learning module 47 can also be deployed on a cloud server, using patient fundus image data transmitted over the Internet from other intelligent fundus laser surgery auxiliary diagnosis systems as training data.
  • Combined with the latest disease feature data extracted from known fundus images, it performs large-scale training and automatically carries out the data analysis and matching operations (using a parallel, multi-dimensional data fuzzy-matching algorithm), producing matching results for medical experts to reference.
  • An example of multi-wavelength synchronous imaging according to an embodiment of the present invention is shown, which locates the pathological area more accurately before the laser strike is performed.
  • The intelligent fundus laser surgery auxiliary diagnosis system of the embodiment images synchronously at different wavelengths, because different cells and different proteins have different sensitivities to light of different wavelengths; clinically, a single wavelength often cannot accurately locate all of the pathological area.
  • The three pathological areas indicated by circles in Fig. 11a are not obvious in Fig. 11b, while the pathological area in the white region of Fig. 11b is not obvious in Fig. 11a. A significant function of multi-wavelength synchronous imaging is therefore to let clinicians dynamically observe the pathological area during imaging and perform real-time manual or semi-automatic laser strikes on it.
  • One function of multi-wavelength synchronous imaging is to allow clinicians, after fundus imaging is complete, to extract typical multi-wavelength images from the software's image database, such as the left and right images in Fig. 11a and Fig. 11b, and then identify and edit the pathological area more accurately offline and plan the laser strike treatment accordingly.
  • One method is shown in Figure 12: the clinician sets the laser strike dose, exposure time, and other parameters for each area according to the condition of the pathological area. After setting, the image with the pathological areas shown in Figure 12 is imported into the software system and used as the reference image for tracking, realizing fully automatic or semi-automatic laser strike treatment (a configuration sketch follows below).
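The per-area plan described above could be captured in a small configuration structure like the one below; the field names, units, and example values are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class AreaPlan:
    outline: List[Tuple[float, float]]   # polygon outlining one pathological area (image coords)
    dose_mw: float                        # laser strike dose for this area
    exposure_ms: float                    # exposure time for this area

@dataclass
class TreatmentPlan:
    reference_image_path: str             # multi-wavelength image used as the tracking reference
    areas: List[AreaPlan] = field(default_factory=list)

# example: two areas edited offline from a Fig. 12-style reference image (values illustrative)
plan = TreatmentPlan(
    reference_image_path="fundus_reference.png",
    areas=[
        AreaPlan(outline=[(120, 80), (140, 80), (140, 100), (120, 100)], dose_mw=150, exposure_ms=20),
        AreaPlan(outline=[(200, 150), (230, 150), (230, 180), (200, 180)], dose_mw=120, exposure_ms=15),
    ],
)
```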
  • Another function of multi-wavelength synchronous imaging is to allow clinicians, after imaging is complete, to extract typical multi-wavelength images from the software's image database, such as the left and right images in Fig. 11a and Fig. 11b.
  • Another method is shown in Figure 13.
  • The clinician sets up an array laser strike over a whole region according to the condition of the pathological area.
  • The software allows the user to set the laser dose, exposure time, and other parameters; after setting, the image with the pathological area shown in Fig. 13 is imported into the software system and used as the reference image for tracking, realizing fully automatic or semi-automatic array laser strikes (see the sketch below).
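The sketch below fills a user-defined rectangular region with a regular array of strike points sharing one dose and exposure setting; the coordinate units, grid pitch, and rectangular region shape are assumptions made only to illustrate the array mode.

```python
from typing import List, Tuple

def array_strike_plan(region: Tuple[float, float, float, float],
                      pitch: float, dose_mw: float, exposure_ms: float
                      ) -> List[Tuple[float, float, float, float]]:
    """Fill a rectangular region (x_min, y_min, x_max, y_max) with a regular grid of strike points.

    Each returned tuple is (x, y, dose_mw, exposure_ms); the targets are struck one by one,
    with the imported reference image providing the tracking frame for every point.
    """
    x_min, y_min, x_max, y_max = region
    points = []
    y = y_min
    while y <= y_max:
        x = x_min
        while x <= x_max:
            points.append((x, y, dose_mw, exposure_ms))
            x += pitch
        y += pitch
    return points

# example: 0.5-unit pitch over a 5 x 3 region (values illustrative)
targets = array_strike_plan((0.0, 0.0, 5.0, 3.0), pitch=0.5, dose_mw=120, exposure_ms=15)
```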
  • Mature techniques for controlling laser exposure dose and exposure time already exist in industrial lasers.
  • For example, an acousto-optic modulator can simultaneously control the laser output power or exposure dose (analog control) and the laser on/off state (digital control).
  • In the present invention the control signal comes from an FPGA, which can control the laser on/off state with nanosecond precision in the electronic hardware, while the precision of the laser power output is limited by the manufacturer's tolerance (usually in the range of tens of milliseconds to hundreds of nanoseconds).
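A sketch of driving an acousto-optic modulator with a combined analog (power) and digital (gate) signal is given below; the DAC width, scaling, and driver interface are assumptions and do not describe the actual hardware of the embodiment.

```python
class AOMController:
    """Combined analog/digital control of an acousto-optic modulator (illustrative interface)."""

    def __init__(self, dac_write, gate_write, max_power_mw=500.0):
        self._dac_write = dac_write      # function: 16-bit code -> analog RF amplitude
        self._gate_write = gate_write    # function: 0/1 -> digital on/off gate
        self._max_power_mw = max_power_mw

    def set_power(self, power_mw: float):
        """Analog control: scale the requested optical power to a 16-bit DAC code."""
        power_mw = max(0.0, min(power_mw, self._max_power_mw))
        code = int(round(power_mw / self._max_power_mw * 0xFFFF))
        self._dac_write(code)

    def gate(self, on: bool):
        """Digital control: open or close the beam (hardware gating can reach ns precision)."""
        self._gate_write(1 if on else 0)
```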

Abstract

The present invention discloses a smart fundus laser surgery auxiliary diagnosis system and method, comprising a laser image-stabilization and treatment device (1), a data control device (2), an image display device (3), and a data processing device (4). A first database (41) stores fundus image data; a feature extraction module (42) extracts disease feature data from the fundus images, a data analysis and matching module (45) performs a comparison operation and matches the data against the disease feature data stored in a known case feature template library (44), and the result of the matching operation is stored in a second database (43). If the degree of matching exceeds a set threshold, a corresponding auxiliary diagnosis conclusion is given, and an auxiliary diagnosis report is then generated by a diagnosis report generation module (46). The invention can reduce the misdiagnosis rate, further simplify the clinician's diagnosis and operation, improve diagnostic efficiency while guaranteeing the precision of surgical treatment, and lower the risk of laser surgical treatment.

Description

Smart fundus laser surgery auxiliary diagnosis system and method
Technical Field
The present invention relates to fundus laser surgery diagnosis and treatment technology, and in particular to a smart fundus laser surgery auxiliary diagnosis system and method.
Background
Diabetic retinopathy (DR) is the leading cause of blindness among people of working age. The main causes of visual impairment and blindness in DR patients are proliferative diabetic retinopathy (PDR) and diabetic macular edema (DME), and laser photocoagulation is the most important treatment for DR patients.
Current fundus laser treatment techniques for patients with diabetic retinopathy (DR), macular degeneration and other ophthalmic diseases mainly rely on the doctor manually operating the laser to strike fixed points, or on using a two-dimensional galvanometer to deliver laser strikes in an array pattern. These techniques often lack sufficient strike precision, and because the treatment is based on mechanical contact, it generally suffers from long operation times and a poor experience for clinicians and patients (side effects such as aggravating DME and causing permanent central vision damage, or enlargement of laser scars, leading to reduced peripheral vision, a narrowed visual field, and reduced night vision). In addition, existing methods based on manual fundus laser surgery, or on lattice laser strikes with a scanning galvanometer, depend mainly on the clinician's experience and judgment and cannot yet carry out preoperative diagnosis and fundus laser surgery automatically and intelligently. Diagnosis and treatment efficiency is therefore low, there is a certain surgical risk, and the approach is poorly suited to clinicians with limited clinical experience, so its limitations are obvious.
Summary of the Invention
In view of this, the main purpose of the present invention is to provide a smart fundus laser surgery auxiliary diagnosis system and method, so as to solve the problem that preoperative diagnosis and treatment in existing fundus laser surgery depend heavily on the clinician's experience and judgment, making the surgery technically demanding. By using this auxiliary diagnosis system, auxiliary diagnosis reports such as the preoperative diagnosis plan, the determination of intraoperative targets and the prediction of postoperative outcomes can be given automatically, the misdiagnosis rate can be reduced, the clinician's diagnosis and operation can be further simplified, diagnostic efficiency can be improved while the precision of surgical treatment is guaranteed, and the risk of laser surgical treatment can be greatly reduced.
To achieve the above purpose, the technical solution of the present invention is as follows:
A smart fundus laser surgery auxiliary diagnosis system comprises a laser image-stabilization and treatment device 1, a data control device 2 and an image display device 3, and further comprises a data processing device 4:
The data processing device comprises a first database 41, a feature extraction module 42, a data analysis and matching module 45, a case feature template library 44, a second database 43 and a diagnosis report generation module 46. The first database 41 stores high-definition fundus image data acquired at any angle and with various imaging methods by the laser image-stabilization and treatment device 1. The feature extraction module 42 extracts disease feature data from the fundus images, the data analysis and matching module 45 performs a comparison operation and matches the data against the disease feature data stored in the known case feature template library 44, and the result of the matching operation is stored in the second database 43. If the degree of matching exceeds a set threshold, a corresponding auxiliary diagnosis conclusion is given, and an auxiliary diagnosis report is then generated by the diagnosis report generation module 46.
The laser image-stabilization and treatment device 1 comprises:
an imaging diagnosis module for acquiring, in real time, the reflected signal returned from any angle of the fundus and/or the image data of the fundus;
a laser treatment module for tracking and locking fundus targets in real time and automatically adjusting the laser dose output.
The imaging diagnosis module supports one or more of confocal scanning laser ophthalmoscopy (SLO), a line-scanning ophthalmoscope (LSO), a fundus camera, or an adaptive-optics fundus imager (AOSLO).
The imaging diagnosis module also supports combinations of imaging modalities, including one or more of SLO+OCT, fundus camera+OCT, fundus camera+SLO, or AOSLO+SLO.
The smart fundus laser surgery auxiliary diagnosis system further comprises a deep learning module 47 for performing large-scale data training on the collected patient fundus image data combined with the disease feature data extracted from the fundus images, and for automatically performing the data analysis and matching operations to obtain matching results for medical experts' reference.
It further comprises processing the matching results provided for medical experts' reference:
matching results whose degree of matching is greater than the set threshold are matched against the cases in the case feature template library and registered as cases; or
matching results whose degree of matching is less than the set threshold are confirmed by medical experts, and the case feature data corresponding to the fundus image is written into a new case feature template and entered into the case feature template library 44, i.e. the case feature template library is updated.
The content of the auxiliary diagnosis report includes the preoperative diagnosis plan, the plan for determining intraoperative targets, and the prediction of postoperative treatment outcomes.
A smart fundus laser surgery auxiliary diagnosis method comprises the following steps:
A. using the laser image-stabilization and treatment device 1 to collect high-definition fundus image data acquired at any angle and with various imaging methods, and storing it in the first database 41 of the data processing device 4;
B. extracting disease feature data from the fundus images with the feature extraction module 42, and performing a comparison operation with the data analysis and matching module 45 to obtain a comparison result;
C. matching the comparison result against the disease feature data stored in the known case feature template library 44, and storing the result of the matching operation in the second database 43;
D. if the degree of matching exceeds the set threshold, giving the corresponding auxiliary diagnosis conclusion and then generating an auxiliary diagnosis report with the diagnosis report generation module 46.
Step D is further followed by:
E. using the deep learning module 47 to perform large-scale data training on the collected patient fundus image data combined with the disease feature data extracted from the fundus images, and automatically performing the data analysis and matching operations to give matching results for medical experts' reference.
Step E further comprises:
E1. matching results whose degree of matching is greater than the set threshold are matched against the cases in the case feature template library and registered as cases; or
E2. matching results whose degree of matching is less than the set threshold are, after confirmation, used to write the case feature data corresponding to the fundus image into a new case feature template that is entered into the case feature template library 44, i.e. the case feature template library is updated.
The smart fundus laser surgery auxiliary diagnosis system and method of the present invention have the following beneficial effects:
1) The smart fundus laser surgery auxiliary diagnosis system and method of the present invention not only provide a visualized, intelligent reference plan for the diagnosis and treatment of the patient's fundus laser surgery, but can also perform laser treatment automatically by providing real-time human fundus image acquisition, real-time disease analysis, planning of reference treatment regions and adaptive adjustment of the laser dose; laser treatment under manual intervention is also supported.
2) The smart fundus laser surgery auxiliary diagnosis system of the present invention integrates multiple ophthalmic fundus imaging technologies with laser treatment technology, enabling a one-stop diagnosis-plus-treatment service while also achieving intelligent, automated, high-precision treatment, simplifying operation and improving the patient experience.
3) The fundus laser surgery treatment device of the present invention can integrate the treatment laser function through a mechanical device and share hardware with the imaging device, which saves cost.
4) The fundus laser surgery treatment device of the present invention also provides multiple imaging diagnosis functions, including confocal scanning laser (SLO) or line-scanning (LSO) imaging, cross-sectional tomography (OCT), a fundus camera, and even an ultra-high-definition adaptive-optics fundus imager (AOSLO); it also provides multiple imaging module combinations, such as SLO+OCT, fundus camera+OCT, fundus camera+SLO, or AOSLO+SLO. It can therefore adapt to different and complex application scenarios and provide real-time fundus imaging and real-time image stabilization.
5) Based on en-face retinal imaging, such as a high-precision fundus navigation and target tracking system using SLO or a fundus camera, the present invention ensures that clinicians can conveniently select the pathological area; it also provides an intelligent disease diagnosis function (using artificial intelligence technology) to help doctors plan before surgery, provide reference surgical regions and simplify operation.
6) The present invention adopts data control and data processing systems, which can analyze preoperative images, diagnose the condition and file the image data in a database; can combine real-time imaging so that doctors can confirm during treatment that the treatment region is accurate; and can analyze postoperative images so that clinicians can evaluate the operation, while the postoperative image data is entered into the database for convenient indexing and further use.
7) The laser output adjustment module and the laser control module of the present invention can perform intelligent laser strikes using feedback from fundus image data, achieving precise strikes; low-power light of the same color is used for target identification, and after the treatment region is locked, precise laser treatment is achieved, helping clinical staff to operate. The laser treatment device can also automatically adjust the spot size, and the operator can select the spot size as needed; a conventional CW laser can be used as the laser source, or a picosecond or femtosecond laser can be used; when a femtosecond laser is used for fundus laser surgery, the photomechanical effect can be exploited to achieve precise treatment.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a smart fundus laser surgery treatment system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a hardware implementation of the laser image-stabilization and treatment device 1 shown in FIG. 1;
FIG. 3 is a schematic diagram of a typical SLO fast-scan and slow-scan mechanism;
FIG. 4 is a schematic diagram of one implementation of the beam-splitting device S1 shown in FIG. 2;
FIG. 5 is a schematic diagram of fundus tracking in the sawtooth scanning direction achieved by superimposing an offset on the sawtooth wave;
FIG. 6 is a schematic diagram of a mechanical device for controlling the mirror M3 according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a two-dimensional scanning scheme used to control the spatial position of the OCT scan on the fundus according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a design of the beam-splitting device S3 corresponding to the light sources of the auxiliary module according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a combined mechanical and electronic device for notifying the user and the host control system whether the current auxiliary module is in imaging mode 2 or laser treatment, according to an embodiment of the present invention;
FIG. 10 is a functional block diagram of the smart fundus laser surgery auxiliary diagnosis system according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a smart laser treatment according to an embodiment of the present invention, used to provide a reference treatment plan for the clinic;
FIG. 12 is a schematic diagram of another smart laser treatment according to an embodiment of the present invention, used to provide a reference treatment plan for the clinic;
FIG. 13 is a schematic diagram of a further smart laser treatment according to an embodiment of the present invention, used to provide a reference treatment plan for the clinic.
Detailed Description
The present invention is described in further detail below with reference to the accompanying drawings and embodiments.
FIG. 1 is a schematic diagram of a smart fundus laser surgery treatment system according to an embodiment of the present invention.
As shown in FIG. 1, the smart fundus laser surgery treatment system is also an ophthalmic diagnosis and treatment platform. It mainly comprises a laser image-stabilization and treatment device 1, a data control device 2 and an image display device 3; preferably, it may further comprise a data processing device 4, in which:
The laser image-stabilization and treatment device 1 further comprises an imaging diagnosis module 1A and a laser treatment module 1B. In another embodiment, the laser treatment module 1B may be combined with one of the imaging modules (namely the second imaging module 12); preferably, it may also share hardware with the second imaging module 12 to save cost and facilitate control.
The laser treatment module 1B comprises a laser output adjustment module 13 and the second imaging module 12; the imaging diagnosis module 1A comprises a first imaging module 11 and a coupling module 14.
Specifically, in this embodiment the first imaging module 11 is set as the master module, and its internal scanning mirrors are the master scanners. The second imaging module 12 and the laser output adjustment module 13 (used for laser treatment) are set as the slave module, and their internal scanning mirrors are the slave scanners. The first imaging module 11 may be a confocal scanning laser ophthalmoscope (SLO), a line-scanning ophthalmoscope (LSO), a fundus camera, or an ultra-high-definition adaptive-optics fundus imager (AOSLO). The second imaging module 12 may be an optical coherence tomography scanner (OCT) or an SLO. Correspondingly, the first imaging module 11 and the second imaging module 12 support multiple imaging module combinations, for example SLO+OCT, fundus camera+OCT, fundus camera+SLO, or AOSLO+SLO.
The laser output adjustment module 13 has a built-in zoom lens for adjusting the laser output dose; the size of the laser spot on the fundus can also be controlled by changing the position of the zoom lens, which is convenient for clinical operation.
The data control device 2 further comprises a laser control module 21, an imaging control module 22 and an image data acquisition module 23, in which:
The data control device 2, through the imaging control module 22, controls the first imaging module 11 and the second imaging module 12 in real time. Further, the first imaging module 11, such as an SLO or LSO, and/or the second imaging module 12, such as an OCT, perform scanning imaging through galvanometer mirrors.
The data control device 2 realizes real-time scanning of the fundus by adjusting parameters such as the system's clock signal, amplitude and frequency. At the same time, the data control device 2 can also simultaneously control the vibrating optical elements in the first imaging module 11 and the second imaging module 12 and change the scanning parameters arbitrarily (at any angle), such as the imaging size, image frame rate, image brightness and gray-level control, image pixel resolution and image dynamic range. In addition, images can be acquired through the data acquisition port of the image data acquisition module 23, and the fundus images from the first imaging module 11 and the second imaging module 12 are displayed in real time on the image display device 3 so that the clinician can observe and diagnose in real time.
Preferably, the clinician can analyze the obtained images in real time through the data processing device 4 and give a related reference treatment plan, for example marking reference treatment regions, giving the reference laser dose standard corresponding to each region, and giving the laser spot size corresponding to each region.
In addition, the laser image-stabilization and treatment device 1 of the embodiment of the present invention can realize fundus target tracking and locking. The specific process is: the real-time eye motion signals (including motion signals x and y) calculated from the fundus image information acquired by the first imaging module 11 are sent to the data control device 2; the data control device 2 outputs real-time control signals through the imaging control module 22 to change the position of the galvanometer mirror in the second imaging module 12 and lock onto the target in real time, achieving real-time target tracking and locking. The real-time control signals are calibrated in advance to ensure that the change of the galvanometer position is consistent with the actual eye displacement.
In the embodiment of the present invention, the laser output adjustment module 13 of the laser treatment device and the second imaging module 12 can share one hardware system; the integrated operation of fundus imaging and laser treatment can also be realized with the aid of a coupler.
The data control device 2 can, through the imaging control module 22 and the laser control module 21 respectively, control in real time the imaging of the fundus target and adjust the laser output of the laser output adjustment module 13, including adjusting the output power, the output switch and the modulation of the output signal.
The laser control module 21 may use two lasers with close wavelengths, or one laser may serve both as the treatment laser and as the reference light. In this embodiment, the laser source may be a 532nm CW laser or a femtosecond laser system.
After the laser treatment, the clinician can also observe the post-treatment fundus image of the patient in real time on the screen of the image display device 3, evaluate the surgical result in real time, and upload the fundus images to the patient database file in the data processing device 4 for later follow-up observation.
The embodiment of the present invention takes the human fundus as an example. The laser image-stabilization and treatment device 1 composed of the first imaging module 11, the second imaging module 12, the coupling module 14 and so on can also be used for other biological tissues, such as the gastrointestinal tract and the skin. The following description still takes application to the human fundus as an example.
FIG. 2 is a schematic diagram of a hardware implementation of the laser image-stabilization and treatment device 1 shown in FIG. 1.
As shown in FIG. 2, the laser image-stabilization and treatment device can be used as an independent laser fundus navigation and treatment device, or combined with other data control devices as a complete laser surgery treatment system for clinical application.
In FIG. 2, the light sources L11, L12, ..., L1n are imaging light sources for the first imaging module 11, controlled (or modulated) by control (signals) 11, 12, ..., 1n respectively. For example, 780nm infrared light is used for fundus reflectance imaging, 532nm light is used for fundus autofluorescence imaging, and light sources of other bands may be used for other forms of fundus imaging. These imaging light sources can enter the optical system through the fiber coupling device FC2. Every one of the light sources L11...L1n can be controlled (or modulated), as indicated by the control signals shown in the master module of FIG. 2, namely control (signal) 11, ..., control (signal) 1n. The control (or modulation) parameters include output power, switch state and so on, and the control can optionally be synchronous or asynchronous with the scanning mirrors; the techniques for synchronizing with the scanning mirrors have been described in detail in previously filed patent applications and are not repeated here.
The imaging light sources L11...L1n are transmitted through the beam-splitting device S1, pass the scanning mirror M11 and the scanning mirror M12, then pass the beam-splitting device S2 and enter the fundus of the eye.
The signals returning from the fundus, such as reflected signals from photoreceptor cells, fluorescence signals excited from fundus proteins, or other signals returning from the fundus, travel along the same optical path, are reflected to the beam-splitting device S1, and then pass another movable beam-splitting device S3 to reach the photodetector, such as an avalanche photodiode (APD). In the embodiment of the present invention, an APD is taken as an example of a photodetector; the photodetector may also be a photomultiplier tube (PMT), CMOS, CCD, or another photodetecting device.
In the embodiment of the present invention, the above photodetectors (such as APD, PMT, CMOS, CCD) are all provided with a controllable or programmable gain adjustment mechanism, which can be dynamically adjusted by program control signals from the system host in order to adapt to different imaging modes, for example through control signal 4 shown in FIG. 2.
The pair of scanning mirrors M11 and M12 shown in FIG. 2 is mainly used for orthogonal scanning of the fundus imaging position, and the scan axes of M11 and M12 are usually at 90 degrees.
When the first imaging module 11 corresponds to an SLO, the scanning mirror M11 may be a fast resonant scanner. A typical practical configuration is: M11 scans in the horizontal direction and M12 scans in the vertical direction, with M12 being a slow linear scanning mirror. In the general case, the orthogonal scan directions of M11 and M12 support scanning in any direction over 360 degrees in two-dimensional space. In the embodiment of the present invention, M11 is a Cambridge Technology CRS8k fast resonant scanner; in other systems, a CRS12k or other models of fast resonant scanner may also be used.
When the first imaging module 11 corresponds to an SLO, the scanning mirror M12 of this embodiment may be implemented with one two-dimensional steering mirror or with two one-dimensional steering mirrors. In the actual opto-mechanical system of the present invention, M12 uses a Cambridge Technology 6220H (or 6210H) two-dimensional scanner pair. The first axis of the 6220H, the slow scan axis, is orthogonal to the fast scan direction of M11; the second axis of the 6220H does not participate in scanning but is used only for target tracking and is parallel to the scan axis of M11.
In the above SLO case, the scanning field of M11 as a fast resonant scanner is controlled by the system host or manually.
In the above embodiment, the scanning trajectory of M12 orthogonal to M11 is a triangular wave. Scanning parameters such as the amplitude and frequency of the triangular wave and its rise and return phases are controlled by the system host. The amplitude of the triangular wave determines the field of view in the slow-scan direction, and its frequency determines the frame rate of the imaging system (see FIG. 3).
FIG. 3 is a schematic diagram of a typical SLO fast-scan and slow-scan mechanism: for every period of the fast resonant scanner, the slow mirror advances linearly by one step.
As shown in FIG. 3, normally each time the fast (resonant) scan of the SLO completes one sine (or cosine) period 11, the slow (linear) scan moves one step 12 in the orthogonal direction. The image frame rate (fps), the resonant frequency of the fast scanning mirror (f), and the number of lines per image frame (N) (usually the maximum image height, or in special cases the image width) then satisfy:
f = fps · N
In the above formula, N includes all the scan lines 121 and 122 of FIG. 3, where 121 is the rising (climbing) phase of the sawtooth wave and 122 is the return phase.
The SLO image generally does not include the 122 portion of FIG. 3, because the image acquired during 122 has a different pixel compression ratio from that acquired during 121; the SLO image is generally acquired only from the 121 portion of FIG. 3.
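As a quick numerical check of the relation f = fps · N (assuming, purely for illustration, the approximately 8 kHz resonant frequency of a CRS8k scanner and 512 lines per frame, neither of which is specified as such in the embodiment):

```python
f_resonant_hz = 8000      # approximate CRS8k resonant frequency (illustrative)
lines_per_frame = 512     # N, counting both the rising and return phases (assumed)

fps = f_resonant_hz / lines_per_frame
print(fps)                # about 15.6 frames per second
```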
The function of the beam-splitting device S1 shown in FIG. 2 is to transmit all the incident light coming from the coupling device FC2 but reflect all the signal coming from the fundus to the APD. One implementation is to cut a hollow cylinder at the axis of S1 so that the focused incident light from FC2 passes through, while all the expanded light returning from the fundus is reflected to the photodetector APD. FIG. 4 is a schematic diagram of this implementation of the beam-splitting device S1 of FIG. 2.
As mentioned above, the scanning mirror M12 of FIG. 2 has two independent motion axes. The first motion axis is orthogonal to the motion (scan) axis of M11, and the second motion axis is parallel to the motion (scan) axis of M11.
The motion axis of M12 that is orthogonal to the scan axis of M11 can receive two kinds of signals from the system host: one is the sawtooth wave shown in FIG. 3 (such as 121 and 122), and the other is a translation signal superimposed on the sawtooth wave. The sawtooth wave is used to scan the fundus and obtain the fundus image, while the translation signal is used for optical tracking of the eye motion in the sawtooth scanning direction, as shown in FIG. 5.
FIG. 5 is a schematic diagram of fundus tracking in the sawtooth scanning direction achieved by superimposing an offset on the sawtooth wave.
As shown in FIG. 5, when the target (the eye) is at a certain reference moment, i.e. the reference frame of the tracking algorithm, the scan center of the sawtooth wave is at a relative zero position. When the eye starts to move relative to this reference frame, the control host adjusts the offset of the sawtooth wave in real time to track the position of the fundus relative to this reference frame.
The system control host described above may be a PC provided with corresponding control program modules, a device containing a field-programmable gate array (FPGA), a device containing a digital signal processor (DSP), a device using another type of electronic signal processor, or a combination of these.
For example, in the embodiment of the present invention the control device uses an Intel PC (Intel i7) with an nVidia graphics processing unit (GPU), such as a GTX1050, to compute the eye motion signal (x, y, θ). A Xilinx FPGA is then used (for cost reasons, this embodiment uses the Virtex-5 device ML507 or the Spartan-6 SP605; more powerful but more expensive devices of the latest series such as Virtex-6, Virtex-7, Kintex-7 and Artix-7, or FPGAs from other vendors such as Altera, can also be used in the future) to digitally synthesize the y component of (x, y, θ) into the signal form of FIG. 5, which is then sent to a digital-to-analog converter (DAC), such as a Texas Instruments DAC5672, to control the first motion axis of the scanning mirror M12.
The signal of FIG. 5 can also be produced by analog synthesis. In that case, the sawtooth wave of FIG. 5 is generated as a first analog signal by a first DAC, and the offset of FIG. 5, which is the y component of (x, y, θ), is generated as a second analog signal by a second DAC; the two analog signals are combined by an analog signal mixer and finally sent to the first motion axis of the scanning mirror M12.
The x component of the signal (x, y, θ) is converted to an analog signal by another separate DAC and sent to the second motion axis of M12 to track the eye motion along that axis. In this embodiment, the second motion axis of M12 is parallel to the scan axis of M11.
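A sketch of the digital synthesis of the FIG. 5 waveform (a sawtooth scan ramp plus a tracking offset) is given below. The sample count, amplitude scaling, output range, and assumed 14-bit DAC resolution are illustrative assumptions, not parameters taken from the embodiment.

```python
import numpy as np

def sawtooth_with_offset(n_samples: int, amplitude: float, offset: float) -> np.ndarray:
    """One sawtooth period (rising ramp) with a superimposed tracking offset.

    amplitude sets the slow-scan field of view; offset is the y component of the
    eye-motion signal (x, y, theta) fed back by the tracking algorithm.
    """
    ramp = np.linspace(-amplitude / 2, amplitude / 2, n_samples)
    return ramp + offset

def to_dac_codes(waveform: np.ndarray, full_scale: float = 10.0, bits: int = 14) -> np.ndarray:
    """Map a +/- full_scale waveform to unsigned DAC codes (14-bit resolution assumed)."""
    clipped = np.clip(waveform, -full_scale, full_scale)
    return np.round((clipped + full_scale) / (2 * full_scale) * (2**bits - 1)).astype(np.uint16)

# example: one frame's slow-scan ramp with a 0.3-unit tracking offset (values illustrative)
codes = to_dac_codes(sawtooth_with_offset(n_samples=512, amplitude=4.0, offset=0.3))
```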
The translation part (x, y) of the above eye motion signal (x, y, θ) is tracked optically in closed loop by the two orthogonal motion axes of M12. In this embodiment, the rotation part (θ) of the first imaging module 11 is tracked digitally, although in the future it can also be tracked with optical and/or mechanical closed-loop methods; the techniques for optical and/or mechanical tracking of the rotation part (θ) have been described in detail in US patent US9775515.
Two key terms are used interchangeably in the embodiments of the present invention: fundus tracking and eye tracking. In the technology related to the present invention, fundus tracking and eye tracking are the same concept. In clinical application, most of the physical motion comes from the eyeball, and the eye motion causes the fundus image obtained by the imaging system to vary randomly in space over time. The equivalent consequence is that at any moment the imaging system obtains different images from different fundus positions, and the observed result is an image that jitters randomly over time. The tracking technique in the embodiments of the present invention captures the eye motion signal (x, y, θ) in real time from the fundus images within the imaging system and feeds (x, y) back to M12 of FIG. 2, so that at any moment the scanning space of the two scanning mirrors (M11, and M12 in the direction orthogonal to M11) is locked to a predefined physical fundus space, thereby achieving accurate fundus tracking and stabilizing the random spatial variation of the fundus image over time.
The imaging mode of FIG. 2 (corresponding to the master module) constitutes a complete closed-loop control system for high-speed real-time tracking of the fundus position. This part of the technology has been described in detail in the two US patents US9406133 and US9226656.
Imaging mode 2 of FIG. 2, i.e. the left-hand path from L2-M3-M2-S2 to the fundus, corresponds to imaging mode 1 (the master module) shown in FIG. 1. A typical application is optical coherence tomography (OCT) imaging.
In FIG. 2, the path L31/L32-M2-S2-fundus corresponds to the fundus laser treatment device described in FIG. 1. The implementation of the OCT and fundus laser treatment functions is described in detail below.
M3 is a movable mirror. It can be moved mechanically, electronically, or by a combination of the two. The movable part of the mirror M3 can also be replaced by a beam-splitting device.
In the embodiment of the present invention, the state of the mirror M3 is controlled mechanically. Whether M3 is in or out of the optical system is determined by the state of the coupling device FC1 in FIG. 2. When the light sources L31/L32 are connected to the optical system through FC1, M3 is pushed out of the optical system and the light from L31/L32 directly reaches the mirror M2. When FC1 is not connected to the optical system, M3 is placed at the position shown in FIG. 2 to reflect the light coming from L2 to the mirror M2. The principle of the mechanical control of the movable mirror M3 by FC1 is shown in FIG. 6.
FIG. 6 is a schematic diagram of a mechanical device for controlling the mirror M3 according to an embodiment of the present invention.
As shown in FIG. 6, in this mechanical device M3 is pushed out of or placed into the optical system according to the insertion and removal of FC1. A switch is connected through a linkage to a foldable mirror mount: when the switch is at the 90-degree position shown in the figure, the mirror mount is opened and the FC1 interface is also opened, so the treatment laser can be connected, as shown in FIG. 6A. When the switch is closed, as shown in FIG. 6B, at the 0-degree position, the FC1 interface is closed and the treatment laser cannot be connected; at the same time, the foldable mirror mount returns to its original position (see FIG. 2) and can reflect the imaging laser L2 into the system.
The function of the mirror M3 is to allow the user to select one of the two functions of the slave module: imaging mode 2 or fundus laser treatment.
When OCT imaging is performed, i.e. imaging mode 2 described above, M3 is placed in the optical path L2-M3-M2-S2-fundus shown in FIG. 2, so that the light of the source L2 reaches the fundus.
In imaging mode 2 as shown in FIG. 2, the light of L2 passes M3 and reaches M2. M2 is a two-dimensional scanning mirror, which can be a fast steering mirror with a single reflecting surface and two independent orthogonal control axes (such as the Physik Instrumente S334.2SL), or two one-dimensional steering mirrors in orthogonal scanning control. The present invention uses the latter, a Cambridge Technology 6210H dual-mirror combination.
In the embodiment of the present invention, M2 in FIG. 2 has multiple functions. In imaging mode 2 as shown in FIG. 2, the system host generates the OCT scan signals and controls the scan pattern of M2, thereby controlling the two-dimensional imaging space of L2 on the fundus.
In the embodiment of the present invention, the system host program controls the FPGA to generate a pair of orthogonal scan bases S_x and S_y as shown in FIG. 7, where S_x and S_y are vectors with direction.
FIG. 7 is a schematic diagram of a two-dimensional scanning scheme used to control the spatial position of the OCT scan on the fundus according to an embodiment of the present invention.
By controlling, in the FPGA (as shown in FIG. 7), the amplitudes (A_x and A_y) and the signs by which the two scan bases are multiplied, the system host program realizes a two-dimensional OCT scan of a specified field size in any direction over 360 degrees on the fundus, which can be expressed by the relation:
OCT scan = S_x · A_x + S_y · A_y
where the parameters A_x and A_y are also signed (directional) quantities; S_x · A_x + S_y · A_y allows the OCT to scan in any direction of the 360-degree two-dimensional fundus space, with any field size permitted by the optical system.
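A sketch of composing the two orthogonal scan bases into a scan along an arbitrary direction is shown below. The base waveforms, the sample count, and the conversion to mirror drive commands are illustrative assumptions, not the FPGA implementation of the embodiment.

```python
import numpy as np

def oct_scan_waveforms(n_samples: int, angle_deg: float, field_size: float):
    """Compose a linear OCT scan of length field_size along angle_deg from two orthogonal bases.

    S_x and S_y are taken as a common unit ramp (the orthogonal scan bases); A_x and A_y
    are their signed amplitudes, so the commanded scan is S_x*A_x + S_y*A_y.
    """
    s = np.linspace(-0.5, 0.5, n_samples)       # common unit ramp for both bases
    a_x = field_size * np.cos(np.radians(angle_deg))
    a_y = field_size * np.sin(np.radians(angle_deg))
    return s * a_x, s * a_y                      # drive signals for the two axes of M2

# example: a 3-unit scan oriented at 135 degrees (values illustrative)
x_cmd, y_cmd = oct_scan_waveforms(n_samples=1024, angle_deg=135.0, field_size=3.0)
```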
The light of the source L2 passes the mirror M3 and the scanning mirror M2, then passes the beam-splitting device S3 and reaches the fundus. In the embodiment of the present invention, L2 is an imaging light source with a wavelength of 880nm, the wavelength of L31 is 561nm, and the wavelength of L32 is 532nm. Accordingly, the design of the beam-splitting device S3 has to be changed for different slave-module light sources. One approach is to customize a different beam-splitting device S3 for each slave-module light source and place it at the S3 position of FIG. 2, as shown in FIG. 8.
FIG. 8 is a schematic diagram of a design of the beam-splitting device S3 corresponding to the slave-module light sources according to an embodiment of the present invention.
As shown in FIG. 8, the beam-splitting device S3 transmits 90%-95% and reflects 5%-10% of light at 532nm and above 830nm, and transmits 5%-10% and reflects 90%-95% of light in the other bands.
Referring to FIG. 2, the light source L31 in the slave module is the aiming light for laser treatment. The aiming light reaches the fundus, the spot reflected back from the fundus is received by the APD of the first imaging module 11, and a spot produced by L31 is superimposed on the SLO image. The position of this spot indicates that the treatment light L32 will have a nearly identical spatial position on the fundus. The degree of overlap of L31 and L32 on the fundus depends on the transverse chromatic aberration (TCA) produced on the fundus by the two wavelengths 532nm and 561nm.
In the embodiment of the present invention, the TCA produced on the fundus by the 532nm and 561nm wavelengths does not exceed 10 microns. In other words, after the 561nm aiming light of L31 is aligned with the fundus strike position, the 532nm treatment light of L32 will miss that position by no more than 10 microns.
The power of the L31 aiming light reaching the fundus is generally below 100 microwatts, while the power of the L32 treatment light reaching the fundus can be several hundred milliwatts or higher. The amplitude of the L31 signal reflected from the fundus to the APD is close to that of the SLO image signal, but the high-power 532nm treatment light still returns a considerable signal to the SLO through the beam-splitting device S3.
To prevent the 532nm signal returning from the fundus, when the treatment light starts the fundus laser strike, from reaching the SLO, striking the APD and overexposing it, the device of the embodiment of the present invention places a beam-splitting device S3 in front of the APD; this S3 reflects all light below 550nm and transmits all light above 550nm, thereby protecting the APD.
The beam splitter S3 of FIG. 3 is movable, and its state is exactly opposite to that of M3: when the coupler FC1 is connected to the optical system, S3 is also inserted into the optical system; when FC1 is not connected, S3 is pushed out of the optical system. Inserting and removing S3 can be done mechanically, electronically, or by a combination of the two; this embodiment uses the mechanical method, as shown in FIG. 6.
As described above, the slave module integrates two functions: laser imaging and image stabilization, and laser treatment implemented with the second imaging module 12 and the laser output adjustment module 13.
Switching between the two functions is achieved by changing the position of M3. When M3 is placed in the optical system, the second imaging module 12 is activated and the laser treatment device does not work; when M3 is pushed out of the optical system, the laser treatment function is activated and the second imaging module 12 does not work.
The above describes the engineering implementation involving the second imaging module 12. The engineering implementation of the laser treatment function of the embodiment of the present invention is described below.
Referring to FIG. 6, the positions of M3 and S3 in the optical system are controlled by the position of the knob installed on the coupling device FC1, dynamically switching between imaging mode 2 and clinical laser treatment. Another function of the FC1 knob is to connect and disconnect one or more electronic devices to remind the user and the system host control program which of the two functions should be run.
FIG. 9 is a schematic diagram of a combined mechanical and electronic device for notifying the user and the host control system whether the current auxiliary module is in imaging mode 2 or laser treatment, according to an embodiment of the present invention.
As shown in FIG. 9, through a conductive metal sheet mounted on the FC1 knob, the device controls an LED indicator and provides a high/low level signal to the electronic hardware, thereby notifying the user and the host control system whether the slave module is currently working in imaging/image-stabilization mode or in laser treatment mode.
In the default setting, A and B are disconnected, the LED is off, and point C outputs 0V or a low level. In the embodiment of the present invention, point C is connected to the FPGA to detect whether the input is low level (0V) or high level (3.3V or 2.5V), so that the control software automatically switches to imaging/image-stabilization mode or to laser treatment mode.
When the FC1 knob is rotated 90 degrees (or another angle, consistent with FIG. 6), the conductive metal sheet connects A and B, lighting the LED and pulling the potential of point C up to a high level, whereby the control program automatically switches to the laser treatment function.
When set to imaging/image-stabilization mode, the entire system can also perform imaging in imaging mode 1 only, for example only SLO/LSO imaging without OCT; this way of working can be realized through the system host control program.
In the working mode that combines imaging mode 1 with laser treatment, the control of M2 in FIG. 2 combines a variety of laser strike modes, including: 1) single-point strike mode; 2) array strike mode over a regular spatial region; 3) user-defined multi-point strike mode over an irregular spatial region.
In the single-point strike mode, the user uses the real-time image of imaging mode 1 to determine the intended laser strike position in the pathological area; after aiming at the target with the aiming light, the treatment light is started and the target is struck with the preset laser dose, exposure time and other parameters.
The array strike mode over a regular spatial region combines the single-point strike mode with the scanning mode of imaging mode 2: the user defines the laser dose and other parameters for each position, then starts the treatment light, and the predetermined targets are struck one by one at equal time intervals.
The user-defined multi-point strike mode over an irregular spatial region is a completely free strike mode: the user defines the laser dose, exposure time and other parameters for any strike position in the pathological area, and the predetermined targets are then struck one by one.
Preferably, in order to precisely control the laser dose reaching the strike target, the embodiment of the present invention uses a beam-splitting device to send a portion of the treatment light L32 to an optical power meter. The control program reads the value of the power meter in real time and dynamically adjusts the laser dose of the L32 power reaching the strike target to the preset value.
Preferably, in order to precisely control the laser exposure time at the strike target, the embodiment of the present invention uses an FPGA hardware clock to control the on and off states of L32. One control method can be implemented with a real-time operating system such as Linux; another can be implemented by installing real-time control software (Wind River) on a non-real-time operating system such as Microsoft Windows; a further method is control by a timer on a completely non-real-time operating system such as Microsoft Windows.
All of the above functions of the slave module, the imaging and image-stabilization functions as well as the laser treatment function, are supported by the real-time target (fundus) tracking and real-time image-stabilization techniques of the master module.
After the closed-loop fundus tracking of the master module is started, the host control software displays the stabilized SLO/LSO image in real time. In the embodiment of the present invention, the spatial resolution of the image-stabilization technique is approximately 1/2 of the lateral optical resolution of imaging module 1. The stabilized real-time SLO/LSO image makes it convenient for the user to locate the fundus position that the slave module is about to process.
The fundus tracking of the master module is a closed-loop control system. After the fundus tracking function is started, the commands with which the master module controls the tracking mirror M12 are sent, according to a pre-calibrated mapping relationship, to M2 of the slave module, so that the light from L2 or L31/L32, after reaching the fundus through M2, can be locked to the predetermined fundus position with considerable accuracy. A core technique here is to use the closed-loop control commands of the master module to drive the open-loop tracking of the slave module.
The spatial mapping between M12 and M2, that is, how the control commands (x, y, θ) of M12 are converted into the control commands (x', y', θ') of M2, depends on the design of the optical system.
Here, (x, y, θ) and (x', y', θ') satisfy the relation:
(x', y', θ') = f(x', y', θ'; x, y, θ) · (x, y, θ)
where f(x', y', θ'; x, y, θ) can be obtained by calibration of the optical system.
This core technique, namely using the closed-loop control commands of the master module to drive the open-loop tracking of the slave module, is an optical tracking scheme with M12 in closed loop and M2 in open loop.
Referring to FIG. 7 and to the relation "OCT scan = S_x · A_x + S_y · A_y", the scanning mirror M2 of the slave module can scan optically in any direction over 360° in two-dimensional space. The slave-module mirror M2 therefore performs open-loop optical tracking of the three variables (x', y', θ') of the above formula, even though the master module only has closed-loop optical tracking of the translation (x, y) and digital tracking of the rotation θ.
The closed-loop tracking accuracy of the master module, together with the calibration accuracy of the above formula, determines the open-loop tracking accuracy of the light from the slave module reaching the fundus, i.e. the target-locking accuracy. In the current state of the art, the closed-loop optical tracking accuracy of the master module is comparable to the optical resolution of the master module's imaging system, about 15 microns, and the open-loop optical tracking accuracy of the slave module can reach 2/3 to 1/2 of that, or 20-30 microns. It should be emphasized that these accuracies vary among different system configurations.
The present invention is mainly applied in ophthalmology, targeting cases such as diabetic retinopathy and age-related macular degeneration. The fundus laser treatment technology provided by the present invention supports intelligent, automatic fundus diagnosis and treatment solutions and also lays the material foundation for future one-stop diagnosis and treatment services.
FIG. 10 is a functional block diagram of the smart fundus laser surgery auxiliary diagnosis system according to an embodiment of the present invention. FIG. 11 is a schematic diagram of a smart laser treatment according to an embodiment of the present invention, used to provide a reference treatment plan for the clinic; FIG. 12 is a schematic diagram of another smart laser treatment according to an embodiment of the present invention, used to provide a reference treatment plan for the clinic; FIG. 13 is a schematic diagram of a further smart laser treatment according to an embodiment of the present invention, used to provide a reference treatment plan for the clinic.
As shown in FIG. 10, the smart fundus laser surgery auxiliary diagnosis system mainly uses the laser image-stabilization and treatment device 1 to collect high-definition fundus image data (including images and videos) acquired at any angle and with various imaging methods, which are stored in the first database 41. The fundus images are processed and analyzed by the data processing device 4: for example, the feature extraction module 42 extracts disease feature data from the fundus images, the data analysis and matching module 45 performs a comparison operation and matches the data against the disease feature data stored in the known case feature template library 44, and the result of the matching operation is stored in the second database 43. If the degree of matching exceeds the set threshold, a corresponding auxiliary diagnosis conclusion is given, and an auxiliary diagnosis report is then generated by the diagnosis report generation module 46. The main content of the auxiliary diagnosis report includes the preoperative diagnosis plan, the plan for determining intraoperative targets, and the prediction of postoperative treatment outcomes.
Preferably, a deep learning module 47 is also included, which performs large-scale data training on the collected patient fundus image data combined with the disease feature data extracted from the fundus images, and automatically performs the data analysis and matching operations (using a data fuzzy-matching algorithm) to give matching results for medical experts' reference. Finally: 1) matching results whose degree of matching is greater than the set threshold are matched against the cases in the case feature template library and registered as cases; 2) for matching results whose degree of matching is less than the set threshold (which may or may not be newly discovered medical cases), after confirmation by medical experts, the case feature data corresponding to the fundus image is written into a new case feature template and entered into the case feature template library 44, i.e. the case feature template library is updated.
In another embodiment, the deep learning module 47 can also be deployed on a cloud server; patient fundus image data transmitted over the Internet from other smart fundus laser surgery auxiliary diagnosis systems is used as training data and, combined with the latest disease feature data extracted from existing known fundus images, large-scale data training is performed. The data analysis and matching operations are carried out automatically (using a parallel, multi-dimensional data fuzzy-matching algorithm) to give matching results for medical experts' reference.
FIG. 11 shows an example of multi-wavelength synchronous imaging according to an embodiment of the present invention, used to locate the pathological area more accurately before performing the laser strike. Clinically, a single wavelength often cannot accurately locate all of the pathological area. The smart fundus laser surgery auxiliary diagnosis system of the embodiment of the present invention images synchronously at different wavelengths, because different cells and different proteins have different sensitivities to light of different wavelengths. As shown in FIG. 11a, the three pathological areas indicated by circles are not obvious in FIG. 11b, while the pathological area in the white region of FIG. 11b is not obvious in FIG. 11a. A significant function of multi-wavelength synchronous imaging is therefore to allow clinicians to observe the pathological area dynamically during imaging and perform real-time manual or semi-automatic laser strikes on it.
One function of multi-wavelength synchronous imaging is to allow clinicians, after fundus imaging is complete, to extract typical multi-wavelength images from the software's image database, such as the left and right images of FIG. 11a and FIG. 11b, and then identify and edit the pathological area more accurately offline and plan the laser strike treatment accordingly. One method is shown in FIG. 12: the clinician sets the laser strike dose, exposure time and other parameters for each area according to the condition of the pathological area; after setting, the image with the pathological areas shown in FIG. 12 is imported into the software system and used as the reference image for tracking, realizing fully automatic or semi-automatic laser strike treatment.
Another function of multi-wavelength synchronous imaging is to allow clinicians, after imaging is complete, to extract typical multi-wavelength images from the software's image database, such as the left and right images of FIG. 11a and FIG. 11b. Another method is shown in FIG. 13: the clinician sets up an array laser strike over a whole region according to the condition of the pathological area. The software allows the user to set the laser dose, exposure time and other parameters; after setting, the image with the pathological area shown in FIG. 13 is imported into the software system and used as the reference image for tracking, realizing fully automatic or semi-automatic array laser strikes.
It should be pointed out that the above embodiments use only two wavelengths as an example; in practice more wavelengths can be imaged synchronously. Mature techniques for controlling laser exposure dose and exposure time already exist in industrial lasers: for example, an acousto-optic modulator can simultaneously control the laser output power or exposure dose (analog control) and the laser on/off state (digital control). The control signal of the present invention comes from an FPGA, which can control the laser on/off state with nanosecond precision in the electronic hardware, while the precision of the laser power output is limited by the manufacturer's tolerance (usually in the range of tens of milliseconds to hundreds of nanoseconds).
The above are only preferred embodiments of the present invention and are not intended to limit the scope of protection of the present invention.

Claims (10)

  1. A smart fundus laser surgery auxiliary diagnosis system, comprising a laser image-stabilization and treatment device (1), a data control device (2) and an image display device (3), characterized by further comprising a data processing device (4):
    the data processing device comprises a first database (41), a feature extraction module (42), a data analysis and matching module (45), a case feature template library (44), a second database (43) and a diagnosis report generation module (46); the first database (41) is used to store high-definition fundus image data acquired at any angle and with various imaging methods by the laser image-stabilization and treatment device (1); the feature extraction module (42) extracts disease feature data from the fundus images, the data analysis and matching module (45) performs a comparison operation and matches the data against the disease feature data stored in the known case feature template library (44), and the result of the matching operation is stored in the second database (43); if the degree of matching exceeds a set threshold, a corresponding auxiliary diagnosis conclusion is given, and an auxiliary diagnosis report is then generated by the diagnosis report generation module (46).
  2. The smart fundus laser surgery auxiliary diagnosis system according to claim 1, characterized in that the laser image-stabilization and treatment device (1) comprises:
    an imaging diagnosis module for acquiring, in real time, the reflected signal returned from any angle of the fundus and/or the image data of the fundus;
    a laser treatment module for tracking and locking fundus targets in real time and automatically adjusting the laser dose output.
  3. The smart fundus laser surgery auxiliary diagnosis system according to claim 2, characterized in that the imaging diagnosis module supports one or more of confocal scanning laser ophthalmoscopy SLO, a line-scanning ophthalmoscope LSO, a fundus camera, or an adaptive-optics fundus imager AOSLO.
  4. The smart fundus laser surgery auxiliary diagnosis system according to claim 2, characterized in that the imaging diagnosis module also supports combinations of imaging modalities, including one or more of SLO+OCT, fundus camera+OCT, fundus camera+SLO, or AOSLO+SLO.
  5. The smart fundus laser surgery auxiliary diagnosis system according to claim 1, characterized in that the smart fundus laser surgery auxiliary diagnosis system further comprises a deep learning module (47) for performing large-scale data training on the collected patient fundus image data combined with the disease feature data extracted from the fundus images, and for automatically performing the data analysis and matching operations to obtain matching results for medical experts' reference.
  6. The smart fundus laser surgery auxiliary diagnosis system according to claim 5, characterized by further comprising processing the matching results provided for medical experts' reference:
    matching results whose degree of matching is greater than the set threshold are matched against the cases in the case feature template library and registered as cases; or
    matching results whose degree of matching is less than the set threshold are confirmed by medical experts, and the case feature data corresponding to the fundus image is written into a new case feature template and entered into the case feature template library (44), i.e. the case feature template library is updated.
  7. The smart fundus laser surgery auxiliary diagnosis system according to claim 1 or 6, characterized in that the content of the auxiliary diagnosis report includes the preoperative diagnosis plan, the plan for determining intraoperative targets, and the prediction of postoperative treatment outcomes.
  8. A smart fundus laser surgery auxiliary diagnosis method, characterized by comprising the following steps:
    A. using a laser image-stabilization and treatment device (1) to collect high-definition fundus image data acquired at any angle and with various imaging methods, and storing it in a first database (41) of a data processing device (4);
    B. extracting disease feature data from the fundus images with a feature extraction module (42), and performing a comparison operation with a data analysis and matching module (45) to obtain a comparison result;
    C. matching the comparison result against the disease feature data stored in a known case feature template library (44), and storing the result of the matching operation in a second database (43);
    D. if the degree of matching exceeds a set threshold, giving the corresponding auxiliary diagnosis conclusion, and then generating an auxiliary diagnosis report with a diagnosis report generation module (46).
  9. The smart fundus laser surgery auxiliary diagnosis method according to claim 8, characterized in that step D is further followed by:
    E. using a deep learning module (47) to perform large-scale data training on the collected patient fundus image data combined with the disease feature data extracted from the fundus images, and automatically performing the data analysis and matching operations to give matching results for medical experts' reference.
  10. The smart fundus laser surgery auxiliary diagnosis method according to claim 9, characterized in that step E further comprises:
    E1. matching results whose degree of matching is greater than the set threshold are matched against the cases in the case feature template library and registered as cases; or
    E2. matching results whose degree of matching is less than the set threshold are, after confirmation, used to write the case feature data corresponding to the fundus image into a new case feature template that is entered into the case feature template library (44), i.e. the case feature template library is updated.
PCT/CN2019/088979 2019-05-24 2019-05-29 一种智能眼底激光手术辅助诊断系统及其方法 WO2020237520A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/428,188 US20220117780A1 (en) 2019-05-24 2019-05-29 Smart auxiliary diagnosis system and method for fundus oculi laser surgery

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910442076.2 2019-05-24
CN201910442076.2A CN110176297B (zh) 2019-05-24 2019-05-24 一种智能眼底激光手术辅助诊断系统

Publications (1)

Publication Number Publication Date
WO2020237520A1 true WO2020237520A1 (zh) 2020-12-03

Family

ID=67695714

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/088979 WO2020237520A1 (zh) 2019-05-24 2019-05-29 一种智能眼底激光手术辅助诊断系统及其方法

Country Status (3)

Country Link
US (1) US20220117780A1 (zh)
CN (1) CN110176297B (zh)
WO (1) WO2020237520A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114343840A (zh) * 2022-02-23 2022-04-15 桂林市啄木鸟医疗器械有限公司 一种激光治疗仪

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110200584B (zh) * 2019-07-03 2022-04-29 南京博视医疗科技有限公司 一种基于眼底成像技术的目标跟踪控制系统及其方法
CN111428070A (zh) * 2020-03-25 2020-07-17 南方科技大学 眼科案例的检索方法、装置、服务器及存储介质
CN111658309A (zh) * 2020-06-16 2020-09-15 温州医科大学附属眼视光医院 一种集成式眼科手术系统
CN113425251A (zh) * 2021-05-28 2021-09-24 云南中医药大学 一种目诊图像识别系统及方法
CN114642502B (zh) * 2022-02-21 2023-07-14 北京工业大学 斜视手术方案的辅助设计方法及装置
CN116548910B (zh) * 2023-05-19 2023-12-08 北京至真互联网技术有限公司 一种眼科相干断层扫描仪的分辨率自适应调节方法及系统

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102946791A (zh) * 2010-06-17 2013-02-27 佳能株式会社 眼底图像获取设备及其控制方法
CN108231194A (zh) * 2018-04-04 2018-06-29 苏州医云健康管理有限公司 一种疾病诊断系统

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1101249A (zh) * 1994-06-24 1995-04-12 中国科学院上海技术物理研究所 实时眼底图像获取与处理方法及其装置
JP4819851B2 (ja) * 2008-07-31 2011-11-24 キヤノン株式会社 診断支援装置およびその方法、プログラム、記録媒体
CA2785190C (en) * 2008-12-31 2019-04-02 I Optima Ltd. System for laser assisted deep sclerectomy
MX344107B (es) * 2011-10-10 2016-12-05 Wavelight Gmbh Sistema, dispositivos de interfaz, uso de los dispositivos de interfaz y metodo para cirugia ocular.
US9226656B2 (en) * 2013-09-19 2016-01-05 University Of Rochester Real-time optical and digital image stabilization for adaptive optics scanning ophthalmoscopy
US9406133B2 (en) * 2014-01-21 2016-08-02 University Of Rochester System and method for real-time image registration
NZ773826A (en) * 2015-03-16 2022-07-29 Magic Leap Inc Methods and systems for diagnosing and treating health ailments
CN104835150B (zh) * 2015-04-23 2018-06-19 深圳大学 一种基于学习的眼底血管几何关键点图像处理方法及装置
US9775515B2 (en) * 2015-05-28 2017-10-03 University Of Rochester System and method for multi-scale closed-loop eye tracking with real-time image montaging
CN205665697U (zh) * 2016-04-05 2016-10-26 陈进民 基于细胞神经网络或卷积神经网络的医学影像识别诊断系统
EP3448229A4 (en) * 2016-04-28 2019-12-04 Alex Artsyukhovich REMOVABLE MICROSCOPE-MOUNTED MINIATURE KERATOMETER FOR CATARACTURGERY
CN107423571B (zh) * 2017-05-04 2018-07-06 深圳硅基仿生科技有限公司 基于眼底图像的糖尿病视网膜病变识别系统
CN108198632A (zh) * 2018-02-28 2018-06-22 烟台威兹曼智能信息技术有限公司 一种视网膜病变激光治疗的术前规划系统及方法
CN109102494A (zh) * 2018-07-04 2018-12-28 中山大学中山眼科中心 一种后发性白内障图像分析方法及装置

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102946791A (zh) * 2010-06-17 2013-02-27 佳能株式会社 眼底图像获取设备及其控制方法
CN108231194A (zh) * 2018-04-04 2018-06-29 苏州医云健康管理有限公司 一种疾病诊断系统

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114343840A (zh) * 2022-02-23 2022-04-15 桂林市啄木鸟医疗器械有限公司 一种激光治疗仪
CN114343840B (zh) * 2022-02-23 2023-10-27 桂林市啄木鸟医疗器械有限公司 一种激光治疗仪

Also Published As

Publication number Publication date
US20220117780A1 (en) 2022-04-21
CN110176297B (zh) 2020-09-15
CN110176297A (zh) 2019-08-27

Similar Documents

Publication Publication Date Title
WO2020237520A1 (zh) 一种智能眼底激光手术辅助诊断系统及其方法
CN109938919B (zh) 一种智能眼底激光手术治疗装置、系统及其实现方法
CN210009227U (zh) 一种智能眼底激光手术治疗装置及治疗系统
CN103997948B (zh) 栅格图案激光治疗及方法
EP2030151B1 (en) Laser scanning digital camera with simplified optics
RU2675688C2 (ru) Хирургическая система визуализации oct широкого поля обзора без использования микроскопа
RU2593745C2 (ru) Снижение нарушения выравнивания под управлением процессора изображений для офтальмологических систем
JP5918241B2 (ja) 眼科システム及び眼を眼科システムにアライメントする方法
JP7164339B2 (ja) 光凝固装置
CN103705208A (zh) 检眼镜和使用检眼镜的方法
CN105451638A (zh) 用于眼生物统计的集成oct屈光计系统
JP2015211734A (ja) 眼科手術装置および眼科手術用アタッチメント
CN101854845A (zh) 半自动眼科光凝固方法和装置
US20210186753A1 (en) Laser treatment of media opacities
CN111513918A (zh) 一种基于机器视觉的全自动眼底激光治疗系统
WO2019225290A1 (ja) 撮影装置及びその制御方法
CN109008942A (zh) 一种基于裂隙灯平台的全眼光学相干断层成像装置及成像方法
JP2019201951A (ja) 撮影装置及びその制御方法
CN108652581B (zh) 基于线共焦成像的激光刺激系统和方法
JP7091018B2 (ja) 断層画像取得装置及び方法
CN209172278U (zh) 一种基于裂隙灯平台的全眼光学相干断层成像装置
CN210114570U (zh) 眼底激光治疗装置的成像模式与治疗模式自动切换装置
EP3918976A1 (en) Optical system for real-time closed-loop control of fundus camera and implementation method therefor
JP7164338B2 (ja) 光凝固装置、眼底観察装置、プログラム、及び、記録媒体
CN210249799U (zh) 一种眼前节Oct定位装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19931138

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19931138

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 19931138

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24/06/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 19931138

Country of ref document: EP

Kind code of ref document: A1