CN116407080A - Evolution identification and 3D visualization system and method for fundus structure of myopic patient
- Publication number: CN116407080A (application CN202310425376.6A)
- Authority: CN (China)
- Prior art keywords: test, model, coordinate system, physical, key points
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A — Human necessities
- A61 — Medical or veterinary science; hygiene
- A61B — Diagnosis; surgery; identification
- A61B 3/00 — Apparatus for testing the eyes; instruments for examining the eyes
- A61B 3/10 — Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B 3/12 — Objective types for looking at the eye fundus, e.g. ophthalmoscopes
- A61B 3/14 — Arrangements specially adapted for eye photography
Abstract
The invention discloses an evolution recognition and 3D visualization system and method for the fundus structure of myopic patients, and relates to the technical field of ophthalmology. Based on the modules described herein, the invention can accurately, efficiently, and intuitively reveal both typical and subtle changes and evolution patterns in the fundus structure of a myopic patient.
Description
Technical Field
The invention relates to the technical field of ophthalmology, in particular to a system and a method for evolution identification and 3D visualization of fundus structures of myopic patients.
Background
Myopia is highly prevalent worldwide and can cause serious retinal complications leading to irreversible vision impairment. Myopia onset at an early age is a severe problem that is still on an increasing trend, and the situation is not optimistic. As a progressive disease, myopia requires long-term continuous monitoring to avoid serious consequences.
At present, the development of the fundus structure is assessed by comparing corneal images, fundus images, optical coherence tomography scans, and the like. Clinical problems are found only when doctors compare reports from different time points, and once typical changes are found, irreversible damage has already occurred; this practice lacks accuracy and consumes a great deal of time. In particular, features that clearly indicate myopia progression, such as the morphology and position of the optic disc and the shape of the vascular structure in fundus images, are difficult to identify accurately when only slight changes have occurred. There is therefore a need for accurate, efficient, and intuitive methods to discover these typical and subtle changes and evolution patterns, so that early diagnosis and intervention are possible.
Disclosure of Invention
The invention aims to provide an evolution recognition and 3D visualization system and method for the fundus structure of myopic patients, which can accurately, efficiently, and intuitively discover typical and subtle changes and evolution patterns of the fundus structure of a myopic patient.
In order to achieve the above object, the present invention provides the following solutions:
an evolution recognition and 3D visualization system for a myopic patient's fundus structure, the system comprising:
the acquisition module, used for acquiring a fundus image of a myopic patient acquired at a first time and a fundus image of the myopic patient acquired at a second time, taking the fundus image acquired at the first time as a reference image and the fundus image acquired at the second time as a test image; the first time is earlier than the second time;
the keypoint extraction module, used for extracting key points in the reference image and the test image by means of a keypoint extraction algorithm, obtaining the extracted key points in the reference image and the extracted key points in the test image;
the keypoint matching module, used for matching the key points extracted from the reference image with the key points extracted from the test image by means of a keypoint matching algorithm, obtaining a plurality of pairs of matched key points;
the physical eyeball model construction module, used for approximately constructing the eye with an ellipsoid model to obtain a physical eyeball model;
the camera model construction module, used for constructing a reference camera model from the intrinsic matrix and extrinsic matrix of the reference camera, and constructing a test camera model from the intrinsic matrix and extrinsic matrix of the test camera; the reference camera is the camera that acquired the reference image; the test camera is the camera that acquired the test image;
the keypoint first mapping module, used for mapping, for each pair of matched key points, the reference key point onto the reference camera model to obtain the coordinates of the reference key point in the reference camera model coordinate system, and simultaneously mapping the test key point onto the test camera model to obtain the coordinates of the test key point in the test camera model coordinate system; the reference key points are the key points extracted from the reference image; the test key points are the key points extracted from the test image;
the keypoint second mapping module, used for mapping the coordinates of the reference key point in the reference camera model coordinate system onto the physical eyeball model to obtain the coordinates of the reference key point in the physical eyeball model coordinate system, and mapping the coordinates of the test key point in the test camera model coordinate system onto the physical eyeball model to obtain the coordinates of the test key point in the physical eyeball model coordinate system;
the model optimization module, used for optimizing the distance between the coordinates of the reference key point and the coordinates of the test key point in the physical eyeball model coordinate system based on a particle swarm optimization algorithm, and optimizing the parameters of the physical eyeball model and the parameters of the test camera model by minimizing this distance, obtaining an optimized physical eyeball model and an optimized test camera model;
the test keypoint first remapping module, used for mapping the test key points onto the optimized test camera model to obtain the coordinates of the test key points in the optimized test camera model coordinate system;
the test keypoint second remapping module, used for mapping the coordinates of the test key points in the optimized test camera model coordinate system onto the optimized physical eyeball model to obtain the coordinates of the test key points in the optimized physical eyeball model coordinate system;
the test keypoint third remapping module, used for mapping the coordinates of the test key points in the optimized physical eyeball model coordinate system onto the reference camera model to obtain the coordinates of the test key points in the reference camera model coordinate system;
the test keypoint fourth remapping module, used for mapping the coordinates of the test key points in the reference camera model coordinate system onto the reference image to obtain the position comparison result of the test key points and the reference key points;
and the position comparison result 3D visual presentation module, used for presenting in 3D the position comparison results of all the test key points and the reference key points.
Optionally, the keypoint extraction algorithm comprises the Otsu method, the scale-invariant feature transform algorithm, and a vessel bifurcation extraction algorithm.
Optionally, the keypoint matching algorithm comprises unilateral matching and bilateral matching.
Optionally, the physical eyeball model coordinate system is a space coordinate system; the origin of the space coordinate system is the center of the eyeball.
Optionally, the parameters of the physical eyeball model include the lengths of the three orthogonal semi-axes of the physical eyeball model and the angles by which the three orthogonal semi-axes rotate relative to the physical eyeball model coordinate system.
Optionally, the parameters of the test camera model include the angles by which the test camera coordinate system rotates about the x-, y-, and z-axes of the physical eyeball model coordinate system and the translation vector of the test camera coordinate system along the x-, y-, and z-axes of the physical eyeball model coordinate system.
The invention also provides the following scheme:
a method of evolution identification and 3D visualization of the fundus structure of myopic patients, the method comprising:
acquiring a fundus image of a myopic patient acquired at a first time and a fundus image of a myopic patient acquired at a second time, taking the fundus image of the myopic patient acquired at the first time as a reference image, and taking the fundus image of the myopic patient acquired at the second time as a test image; the first time is earlier than the second time;
Extracting key points in the reference image and the test image by adopting a key point extraction algorithm to obtain the extracted key points in the reference image and the extracted key points in the test image;
matching the key points extracted from the reference image with the key points extracted from the test image by adopting a key point matching algorithm to obtain a plurality of pairs of matched key points;
adopting an ellipsoid model to perform approximate construction on eyes to obtain a physical eyeball model;
constructing a reference camera model from the intrinsic matrix and extrinsic matrix of the reference camera, and constructing a test camera model from the intrinsic matrix and extrinsic matrix of the test camera; the reference camera is the camera that acquired the reference image; the test camera is the camera that acquired the test image;
mapping a reference key point to the reference camera model for each pair of matched key points to obtain the coordinates of the reference key point under a reference camera model coordinate system, and simultaneously mapping a test key point to the test camera model to obtain the coordinates of the test key point under a test camera model coordinate system; the reference key points are key points extracted from the reference image; the test key points are key points extracted from the test image;
Mapping the coordinates of the reference key points under a reference camera model coordinate system onto the physical eyeball model to obtain the coordinates of the reference key points under the physical eyeball model coordinate system, and simultaneously mapping the coordinates of the test key points under a test camera model coordinate system onto the physical eyeball model to obtain the coordinates of the test key points under the physical eyeball model coordinate system;
optimizing the distance between the coordinates of the reference key point under the physical eyeball model coordinate system and the coordinates of the test key point under the physical eyeball model coordinate system based on a particle swarm optimization algorithm, and optimizing the parameters of the physical eyeball model and the parameters of the test camera model by minimizing the distance to obtain an optimized physical eyeball model and an optimized test camera model;
mapping the test key points to the optimized test camera model to obtain coordinates of the test key points under an optimized test camera model coordinate system;
mapping the coordinates of the test key points under the optimized test camera model coordinate system to the optimized physical eyeball model to obtain the coordinates of the test key points under the optimized physical eyeball model coordinate system;
Mapping the coordinates of the test key points under the optimized physical eyeball model coordinate system to the reference camera model to obtain the coordinates of the test key points under the reference camera model coordinate system;
mapping the coordinates of the test key points in the reference camera model coordinate system to the reference image to obtain a position comparison result of the test key points and the reference key points;
and performing 3D visual presentation on the position comparison results of all the test key points and the reference key points.
Optionally, the physical eyeball model coordinate system is a space coordinate system; the origin of the space coordinate system is the center of the eyeball.
Optionally, the parameters of the physical eyeball model include the lengths of the three orthogonal semi-axes of the physical eyeball model and the angles by which the three orthogonal semi-axes rotate relative to the physical eyeball model coordinate system.
Optionally, the parameters of the test camera model include the angles by which the test camera coordinate system rotates about the x-, y-, and z-axes of the physical eyeball model coordinate system and the translation vector of the test camera coordinate system along the x-, y-, and z-axes of the physical eyeball model coordinate system.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention discloses an evolution identification and 3D visualization system and method of a fundus structure of a myopic patient, which are based on key points extracted and matched in fundus images of the myopic patient and physical eyeball models, a reference camera model and a test camera model which are constructed, aiming at each pair of matched key points, the key points are mapped onto the physical eyeball models through the reference camera model and the test camera model, the distance between the two key points on the physical eyeball models is optimized based on a particle swarm optimization algorithm, the physical eyeball models and the test camera model are optimized through the minimized distance, the extracted key points in the test image are mapped onto the optimized physical eyeball models through the optimized test camera model, and the comparison of the positions of the test image and the reference image can be completed through the reference camera model, so that typical and fine changes (evolution) of the fundus structure of the myopic patient are identified. Meanwhile, compared with the current planar image display, the invention can visually display the position comparison result in a 3D way, so that the whole three-dimensional condition (3D effect) before and after comparison can be seen, and typical and slight change and evolution rules of the fundus structure of a myopic patient can be intuitively found.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings needed in the embodiments are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention; other drawings can be obtained from these drawings by a person skilled in the art without inventive effort.
FIG. 1 is a block diagram of an embodiment of an evolution recognition and 3D visualization system for a myopic patient's fundus structure of the present invention;
FIG. 2 is a schematic diagram of a geometric approach to approximate solution;
FIG. 3 is a schematic diagram of a u-v coordinate system;
FIG. 4 is a schematic diagram of a process of converting 3D point coordinates in a spatial coordinate system to 2D point coordinates in a pixel coordinate system by a camera matrix;
FIG. 5 is a geometric diagram of a parametric computational solution to a reference camera within a constructed spatial model;
FIG. 6 is a diagram showing a specific embodiment of the present invention;
FIG. 7 is an image of the registration output.
Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
The invention aims to provide an evolution recognition and 3D visualization system and method for a fundus structure of a myopic patient, which can accurately, efficiently and intuitively find typical and slight changes and evolution rules of the fundus structure of the myopic patient.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
FIG. 1 is a block diagram of an embodiment of an evolution recognition and 3D visualization system for a myopic patient's fundus structure. As shown in fig. 1, the present embodiment provides an evolution recognition and 3D visualization system of a fundus structure of a myopic patient, the system including the following modules:
an acquisition module 101, configured to acquire a fundus image of a myopic patient acquired at a first time and a fundus image of a myopic patient acquired at a second time, and take the fundus image of the myopic patient acquired at the first time as a reference image and the fundus image of the myopic patient acquired at the second time as a test image; the first time is earlier than the second time.
The key point extraction module 102 is configured to extract key points in the reference image and the test image by using a key point extraction algorithm, so as to obtain the extracted key points in the reference image and the extracted key points in the test image.
The keypoint extraction algorithm comprises the Otsu method, the scale-invariant feature transform algorithm, and a vessel bifurcation extraction algorithm.
And the key point matching module 103 is used for matching the key points extracted from the reference image with the key points extracted from the test image by adopting a key point matching algorithm to obtain a plurality of pairs of matched key points.
The keypoint matching algorithm comprises unilateral matching and bilateral matching.
The physical eyeball model construction module 104 is configured to perform approximate construction on the eye by using an ellipsoid model to obtain a physical eyeball model.
Wherein the physical eyeball model coordinate system is a space coordinate system; the origin of the spatial coordinate system is the center of the eyeball.
The camera model construction module 105 is configured to construct a reference camera model from the intrinsic matrix and extrinsic matrix of the reference camera, and to construct a test camera model from the intrinsic matrix and extrinsic matrix of the test camera; the reference camera is the camera that acquired the reference image; the test camera is the camera that acquired the test image.
The keypoint first mapping module 106 is configured to map, for each pair of matched key points, the reference key point onto the reference camera model to obtain the coordinates of the reference key point in the reference camera model coordinate system, and to map the test key point onto the test camera model to obtain the coordinates of the test key point in the test camera model coordinate system; the reference key points are the key points extracted from the reference image; the test key points are the key points extracted from the test image.
The keypoint second mapping module 107 is configured to map the coordinates of the reference key point in the reference camera model coordinate system onto the physical eyeball model to obtain the coordinates of the reference key point in the physical eyeball model coordinate system, and to map the coordinates of the test key point in the test camera model coordinate system onto the physical eyeball model to obtain the coordinates of the test key point in the physical eyeball model coordinate system.
The model optimization module 108 is configured to optimize a distance between a coordinate of a reference key point under a physical eyeball model coordinate system and a coordinate of a test key point under the physical eyeball model coordinate system based on a particle swarm optimization algorithm, and optimize parameters of the physical eyeball model and parameters of the test camera model by minimizing the distance, thereby obtaining an optimized physical eyeball model and an optimized test camera model.
The parameters of the physical eyeball model comprise the lengths of three orthogonal half axes of the physical eyeball model and the rotation angle of the three orthogonal half axes of the physical eyeball model relative to a coordinate system of the physical eyeball model.
The parameters of the test camera model comprise the rotation angles of the test camera coordinate system relative to the x-axis, the y-axis and the z-axis of the physical eyeball model coordinate system and the translation vectors of the test camera coordinate system relative to the x-axis, the y-axis and the z-axis of the physical eyeball model coordinate system.
The test keypoint first remapping module 109 is configured to map the test key points onto the optimized test camera model to obtain the coordinates of the test key points in the optimized test camera model coordinate system.
The test keypoint second remapping module 110 is configured to map the coordinates of the test key points in the optimized test camera model coordinate system onto the optimized physical eyeball model to obtain the coordinates of the test key points in the optimized physical eyeball model coordinate system.
The test keypoint third remapping module 111 is configured to map the coordinates of the test key points in the optimized physical eyeball model coordinate system onto the reference camera model to obtain the coordinates of the test key points in the reference camera model coordinate system.
The test keypoint fourth remapping module 112 is configured to map the coordinates of the test key points in the reference camera model coordinate system onto the reference image to obtain the position comparison result of the test key points and the reference key points.
The position comparison result 3D visual presentation module 113 is configured to present in 3D the position comparison results of all the test key points and the reference key points.
The technical scheme of the invention is described below through a specific embodiment:
The evolution identification and 3D visualization system for the fundus structure of myopic patients disclosed by the invention is a system for analyzing (determining) the evolution pattern of the myopic fundus based on spatial modeling and pose estimation. It is used to improve the accuracy of monitoring refractive errors, and belongs to the field of health management and detection systems for refractive errors.
For the analysis of fundus images with myopia and other refractive errors, existing methods mainly perform classification and segmentation tasks. For example, patents CN 113768460A and CN 111242212A disclose analysis methods for fundus images that achieve analysis and quantification, but they can only analyze images at a single time point: they cannot solve the problem of comparing any two images, cannot compare image features at different time points, and cannot observe the evolution process of the fundus. Time-aware analysis of fundus images is mainly realized through image registration, which enables comparison between any two images of the same data group. Patent CN 106651827A discloses a feature-based fundus image registration method, patent CN 112819867A discloses a fundus image registration method based on keypoint matching, and patent CN 112598028A discloses a fundus image registration method. However, the coupling between the curved structure in the fundus image and the rotation of the eyeball often makes the transformation between the fundus images to be registered more complex, which greatly limits registration accuracy; such methods are particularly unsuited to registering fundus images with large changes under a large field of view. When key points are severely missing, or when the local structures of the two images are highly similar, the methods disclosed in these patents are prone to low accuracy and incorrect registration, and their long registration time does not meet the real-time requirements of clinical diagnosis and treatment. In particular, image rotation caused by the camera angle or by changes in the patient's head and eye positions during ophthalmic imaging often introduces noise interference during registration. Patent CN 114565654A discloses a registration method based on a multi-disc mask which overcomes the influence of the image rotation angle and improves the efficiency of fundus image registration, but its robustness is poor: it is sensitive to noise in the image and to imaging quality, and it registers fundus image pairs with smaller overlapping areas poorly. Therefore, accurate and efficient comparative analysis of time-series fundus images has important value for discovering the evolution pattern of the myopic fundus and for detecting and intervening in myopia progression in advance.
Aiming at the above problems and defects in the prior art, the invention provides a system for identifying and analyzing the fundus evolution pattern of myopic patients based on physical modeling and pose estimation, namely the evolution identification and 3D visualization system for the fundus structure of myopic patients. By combining physical modeling with pose estimation, the system overcomes errors caused by artifacts, differences in exposure, deviations of the patient's head and eye positions across acquisitions at different time points, and the inherent curved shape of the retina. It thereby achieves accurate analysis and quantification of the fundus evolution pattern of myopic patients and solves the current difficulty of detecting changes in eye structure caused by myopia progression.
The invention discloses an evolution identification and 3D visualization system of a myopic patient fundus structure, which is realized on the basis of the following technical means:
1. Two retinal images (fundus images of a myopic patient) to be registered (aligned) are input, one as the reference image and the other as the test image.
2. Key points (key feature points) are extracted from, and matched between, the reference image and the test image.
To improve the accuracy of key feature point pairing, the key points of the images to be paired (compared) must be evenly distributed over important fundus areas such as the optic disc, blood vessels, and fovea, and spread over the whole retinal image area as much as possible.
1. Extraction of key points
(1) The Otsu method (OTSU) is used to determine a binary segmentation threshold for the reference image and the test image, generating mask images that cover the retinal areas of the two images; finally, key points at the edges of the retinal areas in the reference image and the test image are filtered out according to the mask images (i.e., the foreground and background of each image are separated: the retinal area versus the background of the whole image).
(2) Key points in the two retinal images to be registered are extracted using the scale-invariant feature transform (SIFT) algorithm and a vessel bifurcation extraction algorithm.
Preferably, combining SIFT with vessel bifurcation extraction improves the uniformity of the keypoint distribution over the whole retinal image area.
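As an illustration of this extraction stage, the following is a minimal sketch in Python using OpenCV, assuming a grayscale fundus image; the vessel bifurcation detector is outside the scope of this sketch and is not shown.

```python
# Minimal sketch of the extraction stage (assumes OpenCV >= 4.4 with SIFT).
import cv2
import numpy as np

def extract_keypoints(gray_fundus: np.ndarray):
    # Otsu's method separates the bright retinal disc from the dark background
    _, mask = cv2.threshold(gray_fundus, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Erode the mask so keypoints on the edge of the retinal area are filtered out
    mask = cv2.erode(mask, np.ones((15, 15), np.uint8))
    sift = cv2.SIFT_create()
    # SIFT keypoints are detected only inside the retinal mask
    keypoints, descriptors = sift.detectAndCompute(gray_fundus, mask)
    return keypoints, descriptors
```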
2. Matching of keypoints
(1) Unilateral matching: for each key point, the nearest neighbor by Euclidean distance between the keypoint description vectors of the image pair is selected, and all matches whose distance ratio between the nearest and the second-nearest neighbor exceeds 0.8 are discarded, so as to remove outliers and reduce mismatches.
(2) Bilateral matching: two keypoint matching passes are performed, one from the reference image A to the image to be registered (test image) B and one from the image to be registered B to the reference image A, and only the matched points common to both passes are retained.
Preferably, bilateral matching is adopted; it improves the accuracy of keypoint matching and provides a foundation for the parameter optimization of the model construction.
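A minimal sketch of this matching stage follows, assuming SIFT descriptors from the extraction step above; the 0.8 ratio threshold is the one stated in the text.

```python
# Minimal sketch of unilateral (ratio-test) and bilateral (mutual) matching.
import cv2

def unilateral_matches(desc_a, desc_b, ratio=0.8):
    bf = cv2.BFMatcher(cv2.NORM_L2)
    matches = {}
    for pair in bf.knnMatch(desc_a, desc_b, k=2):
        # Discard ambiguous matches: nearest / second-nearest distance > ratio
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            matches[pair[0].queryIdx] = pair[0].trainIdx
    return matches

def bilateral_matches(desc_ref, desc_test):
    fwd = unilateral_matches(desc_ref, desc_test)   # A -> B
    bwd = unilateral_matches(desc_test, desc_ref)   # B -> A
    # Keep only the matches found by both passes
    return [(i, j) for i, j in fwd.items() if bwd.get(j) == i]
```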
3. Construction of registration model
1. Construction of physical eyeball model
1.1 An ellipsoid model {A, Q} is adopted to approximate the eye, and the origin of the spatial coordinate system is fixed at the eyeball center $c_s = [0, 0, 0]^T$, giving the eyeball model equation in matrix form:

$$x^T Q^T A Q x = 1$$

where $x$ is a 3D point on the eye model surface $\varepsilon$, $T$ denotes the matrix transpose, the matrix $A$ is composed of the lengths of the 3 orthogonal semi-axes of the eye model, and the rotation matrix $Q$ represents the rotation of the semi-axes relative to the spatial coordinate system, reflecting the pose of the eyeball model. It is computed as $Q = R_a(r_a) \cdot R_b(r_b) \cdot R_c(r_c)$, where $R_a$, $R_b$, $R_c$ are $3 \times 3$ rotation matrices.

The lengths $a$, $b$, $c$ of the three orthogonal semi-axes in $x^T Q^T A Q x = 1$ determine the shape of the eye model, and the three parameters $r_a$, $r_b$, $r_c$ of the rotation matrix $Q$ describe the rotation of the three orthogonal semi-axes relative to the spatial coordinate system and thus determine the pose of the eye model. These 6 parameters are optimized later by the particle swarm optimization algorithm.
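As a concrete illustration, the following sketch builds the matrices A and Q from the six parameters (a, b, c, r_a, r_b, r_c) and checks the surface equation; the x-y-z Euler convention is an assumption, since the text only states that Q is a product of three rotations.

```python
# Sketch of the ellipsoid eyeball model {A, Q}; a point x lies on the
# surface when x^T Q^T A Q x = 1 (A encodes the semi-axis lengths).
import numpy as np
from scipy.spatial.transform import Rotation

def eyeball_model(a, b, c, r_a, r_b, r_c):
    A = np.diag([1.0 / a**2, 1.0 / b**2, 1.0 / c**2])   # semi-axes in mm
    Q = Rotation.from_euler("xyz", [r_a, r_b, r_c]).as_matrix()
    return A, Q

A, Q = eyeball_model(12.0, 12.0, 12.0, 0.0, 0.0, 0.0)   # initial 12 mm sphere
x = np.array([0.0, 0.0, -12.0])                          # posterior pole
assert abs(x @ Q.T @ A @ Q @ x - 1.0) < 1e-9             # lies on the surface
```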
2. Construction of camera model
The camera model, also called the camera matrix P, is defined as a 3×4 matrix that converts three-dimensional coordinates in homogeneous form into two-dimensional pixel coordinates. The camera matrix P can be decomposed into the product of two matrices, an intrinsic matrix K and an extrinsic matrix [R|t]:

$$P = K[R|t] = K[R|-RC]$$

where K is a 3×3 upper triangular matrix describing the internal parameters of the camera; R is a 3×3 rotation matrix describing the rotation of the camera coordinate system relative to the spatial coordinate system; C is a 3×1 vector describing the position of the camera coordinate system origin in the spatial coordinate system of the whole model system; and t is a 3×1 translation vector describing the position of the spatial coordinate system origin of the whole model system in the camera coordinate system.
2.1 Construction of the intrinsic matrix K

The main function of the intrinsic matrix K is to describe the transformation from the 3D camera coordinate system to the 2D homogeneous pixel coordinate system. Its specific form is:

$$K = \begin{bmatrix} f_x & s & x_0 \\ 0 & f_y & y_0 \\ 0 & 0 & 1 \end{bmatrix}$$

where $f_x$ and $f_y$ are the focal lengths in pixels (for a typical camera $f_x$ and $f_y$ have the same value); $s$ is the axis skew, taken here as 0; and $x_0$ and $y_0$ are the principal point offsets. The principal axis of the camera is defined as the line perpendicular to the image plane, and its intersection with the image plane, called the principal point, is generally located at the center of the image. $f_x$ and $f_y$ are solved approximately with a geometric method, as shown in FIG. 2. For the determination of the principal point offset parameters, two coordinate systems are involved: the image coordinate system, established with the principal point as origin and expressed in physical units (an ordinary x-y coordinate system), and the pixel coordinate system, established with the upper-left corner of the image as origin and expressed in pixels, such as the u-v coordinate system of FIG. 3. The principal point offset parameters are then the coordinates $u_0$ and $v_0$, in pixels, of the image coordinate system origin $O_1$ in the u-v pixel coordinate system; typically the principal point lies at the center of the image. FIG. 4 summarizes the process by which the camera matrix converts 3D point coordinates in the spatial coordinate system to 2D point coordinates in the pixel coordinate system.
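A short sketch of assembling K and the full camera matrix P = K[R|t] follows; the numeric values are placeholders, not parameters from the patent.

```python
# Sketch: compose the 3x4 camera matrix P = K [R | t].
import numpy as np

def intrinsic_matrix(fx, fy, x0, y0, s=0.0):
    return np.array([[fx,  s, x0],
                     [0., fy, y0],
                     [0., 0., 1.]])

def camera_matrix(K, R, t):
    return K @ np.hstack([R, t.reshape(3, 1)])   # 3x4 projection matrix

K = intrinsic_matrix(fx=1200.0, fy=1200.0, x0=512.0, y0=512.0)  # placeholders
P = camera_matrix(K, np.eye(3), np.zeros(3))
```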
2.2 Solving the extrinsic matrix [R|t]
The main function of the extrinsic matrix [R|t] is to describe the transformation from the spatial coordinate system to the camera coordinate system. It consists of two parts, the rotation matrix R and the translation vector t, which together form a 3×4 matrix in augmented form ("|"). The extrinsic matrix reflects the position of the camera within the spatial coordinate system.
2.3 The fundus image registration framework constructed by the invention converts the difference in eye position between the images to be registered into a difference in the pose of the cameras that acquired the images, so that the reference image and the test image each correspond to one camera. The camera that acquired the reference image is named the reference camera, and the camera that acquired the test image is named the test camera. The two cameras are modeled as follows:
2.3.1 matrix solution for reference cameras
(1) Intrinsic matrix solution: the intrinsic matrix of the reference camera is constructed in the form

$$K = \begin{bmatrix} \alpha_x & \gamma & u_0 \\ 0 & \alpha_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

where $\alpha_x$ and $\alpha_y$ represent the focal lengths at the pixel level and $\gamma$ is the skew coefficient.

The parameters of the reference camera are calculated in the constructed spatial model; the geometric diagram is shown in FIG. 5. Here $l$ represents the lens-to-cornea distance (in millimeters) and $k$ represents the field of view of the camera (in radians); both parameters depend on the camera specifications. When the focal length of the camera is a fixed value, the imaging image distance is fixed and the corresponding object distance is fixed, so the lens-to-fundus distance is theoretically a fixed value. In the model of FIG. 5, the invention approximates all eyeballs as standard eyeballs of radius 12 mm, and with the lens-to-fundus distance fixed, the lens-to-cornea distance is accordingly fixed. $r$ represents the radius (in pixels) of the retinal area in the image, determined by the input reference image; the initial value of the constructed physical eye model is a sphere of radius $\rho = 12$ mm; and $p$ represents the correspondence between pixels and actual physical length (mm). It is only an intermediate variable, obtained from the geometry of FIG. 5, and requires no separate calibration.

Preferably, $\alpha_x$ and $\alpha_y$ are then deduced as

$$\alpha_x = \alpha_y = f \cdot p$$

where $f$ represents the focal length in millimeters. This completes the solution of all parameters in the intrinsic matrix of the reference camera.
(2) Extrinsic matrix solution: the reference camera acquires the reference image, which in turn serves as the base of the image pair to be registered, so the position of the reference camera in the spatial coordinate system is fixed and known. The rotation matrix of the reference camera is composed as

$$R = R_x(r_\varphi) \cdot R_y(r_\theta) \cdot R_z(r_\omega)$$

where $r_\varphi$, $r_\theta$, $r_\omega$ represent the angles by which the camera rotates about the three coordinate axes of the world coordinate system, and $R_x$, $R_y$, $R_z$ are the corresponding $3 \times 3$ rotation matrices.

The translation vector $t$ ($t_{ref}$) is built from $l$ and $p$, which have the same meanings as above, and from $\delta$, where $\delta$ represents the distance of the camera lens from the origin of the world coordinate system, i.e., the center of the eyeball.

Finally, the rotation matrix and translation vector obtained are combined in augmented form to complete the solution of the reference camera extrinsic matrix.
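The sketch below assembles the reference camera extrinsics from three axis rotations and an axial translation; placing the eyeball center at distance δ on the optical axis (z-axis) is an assumption consistent with the geometry of FIG. 5, not a formula quoted from the source.

```python
# Sketch of the reference camera extrinsics [R | t].
import numpy as np
from scipy.spatial.transform import Rotation

def reference_extrinsics(r_phi, r_theta, r_omega, delta_mm):
    # R = Rx(r_phi) . Ry(r_theta) . Rz(r_omega)
    R = Rotation.from_euler("xyz", [r_phi, r_theta, r_omega]).as_matrix()
    # Eyeball center assumed on the optical axis, at distance delta from the lens
    t = np.array([0.0, 0.0, delta_mm])
    return np.hstack([R, t.reshape(3, 1)])       # augmented 3x4 form
```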
2.3.2 matrix solution for test cameras
The camera matrix of the test camera is likewise solved in two parts: the intrinsic matrix and the extrinsic matrix.
(1) Intrinsic matrix solution: in the invention, the two retinal images to be registered are captured with the same camera, so the intrinsic matrix of the test camera is identical to that of the reference camera.
Preferably, when the two images are captured with different cameras, each parameter of the intrinsic matrix of the test camera is calculated by the same steps as above.
(2) Extrinsic matrix solution: the extrinsic matrix of the test camera is one of the objects to be solved by optimization, and it contains 6 parameters in total: the 3 parameters of the rotation vector (i.e., the angles, in radians, by which the test camera coordinate system rotates about the x-, y-, and z-axes of the world coordinate system) and the 3 parameters of the translation vector.
3. Mapping model from 2D pixel points on the retinal image to 3D spatial points on the eyeball
Given the eye model and the camera model, the 3D retinal spatial point corresponding to an image key point $u$ is obtained by computing the intersection of the ray from the image key point $u$ through the camera center with the posterior hemisphere of the eye model. The ray is given by

$$x = P^+ u + \lambda c, \qquad P^+ = P^T (P P^T)^{-1}$$

where $P^+$ is only an intermediate variable with no other meaning, and $c$ denotes the camera center.

3.1 Substituting the ray equation $x = P^+ u + \lambda c$ into the eye model equation $x^T Q^T A Q x = 1$ yields the variable $\lambda$, which has two values.

3.2 The two values of $\lambda$ from 3.1 are substituted back into $x = P^+ u + \lambda c$ to solve for two 3D point coordinates.

3.3 The elements are normalized so that the fourth element becomes 1; the first three elements are then the 3D point coordinates.

3.4 The coordinate whose z value is negative is selected, i.e., the intersection of the ray with the posterior retina. This completes the mapping of one 2D pixel point on the retinal image to a 3D spatial point on the eyeball retina.
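The following sketch implements this back-projection: the ellipsoid is rewritten as a 4×4 quadric so that substituting the ray x = P⁺u + λc yields a quadratic in λ, whose two roots give the two intersection points. The homogeneous camera center c (satisfying Pc = 0) is assumed given; this is an illustrative reconstruction, not the patent's reference code.

```python
# Sketch: map a 2D keypoint u to its 3D point on the posterior retina.
import numpy as np

def pixel_to_retina(u_px, P, A, Q, c_h):
    E = np.zeros((4, 4))
    E[:3, :3] = Q.T @ A @ Q        # x^T Q^T A Q x = 1  <=>  x_h^T E x_h = 0
    E[3, 3] = -1.0
    P_plus = P.T @ np.linalg.inv(P @ P.T)        # pseudo-inverse, 4x3
    b = P_plus @ np.append(u_px, 1.0)            # one homogeneous point on the ray
    # (b + lam*c)^T E (b + lam*c) = 0 is quadratic in lam
    qa, qb, qc = c_h @ E @ c_h, 2.0 * b @ E @ c_h, b @ E @ b
    pts = [b + lam * c_h for lam in np.real(np.roots([qa, qb, qc]))]
    pts = [p[:3] / p[3] for p in pts]            # normalize 4th element to 1
    return min(pts, key=lambda p: p[2])          # keep the z < 0 intersection
```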
4. Construction and solution of the overall model
Consider the following equivalent geometry: the eye pose when acquiring the reference image $F_0$ is fixed, the pose of the test camera when acquiring the test image $F_t$ is defined relative to the reference camera, and the rotation of the eye (i.e., the difference in eyeball position between the images to be registered) is translated into a change in the position of the test camera relative to the reference camera. The specific model diagram is shown in FIG. 6.
Preferably, in order to avoid the extra conversions required in the 2D-to-3D point mapping and inverse mapping, and the complexity of constructing the optimization algorithm, the invention chooses the spatial coordinate system as the reference, so that the reference camera and the eyeball position remain fixed.
4. Initializing the pose parameters of the test camera based on the RANSAC method
The 6 pose parameters of the test camera are initialized as follows: (1) since the pose of the reference camera is fixed and known, a sphere of radius 12 mm is used as the eye model; (2) the 2D key points on the reference image are mapped onto the 3D retina to obtain the corresponding three-dimensional coordinates; (3) since the key points on the reference image all have matched key points on the test image, the 3D coordinates obtained in step (2) can in theory also be obtained by mapping the key points of the test image, which yields, for the test camera, a set of pairs of 2D image points and corresponding 3D retinal points; (4) finally, the 3D-to-2D point mapping problem is solved with the RANSAC method to obtain the (initial) pose parameters of the test camera.
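A minimal sketch of this initialization with OpenCV's RANSAC PnP solver, assuming the 3D retinal points and matched 2D test-image points from the previous steps:

```python
# Sketch: RANSAC initialization of the 6 test-camera pose parameters.
import cv2
import numpy as np

def init_test_pose(retina_pts_3d, test_pts_2d, K):
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(retina_pts_3d, dtype=np.float64),
        np.asarray(test_pts_2d, dtype=np.float64),
        K, None)                       # None: no lens distortion assumed
    if not ok:
        raise RuntimeError("RANSAC pose initialization failed")
    return rvec.ravel(), tvec.ravel()  # 3 rotation + 3 translation parameters
```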
5. Eyeball model parameter and test camera pose parameter optimization based on particle swarm optimization algorithm
1. Parameters to be optimized:
(1) Eye model correlation—3 semi-axial lengths of the ellipsoid model, 3 parameters of the rotation vector reflecting the eye pose;
(2) Test camera pose correlation—3 parameters of rotation vector reflecting test camera pose and 3 parameters of translation vector reflecting test camera position.
2. Optimization problem: as shown in FIG. 6, let $q_i$ be the 3D coordinate position, on the retina of the eyeball model, of a key point of the reference image $F_0$, and let $p_i$ be the 3D coordinate position, on the retina of the eyeball model, of the matched key point of the test image $F_t$. The 3D distance between the corresponding key points on the sphere is then:

$$d_i = |q_i - p_i|$$

Minimizing the distances $d_i$ through an objective function $o(S_h)$ turns this into an optimization problem, where $S_h$ denotes the set of all parameters to be optimized. To enhance the robustness of the algorithm to mismatches, the sum of the smallest 80% of the distances is used:

$$o(S_h) = \sum_{j} d_{j,h}$$

where $d_{i,h}$ is the Euclidean distance of each pair of key points in 3D space, and $j$ enumerates the smallest 80% of the values $d_{i,h}$.
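A direct sketch of this robust fitness function, assuming the matched keypoints have already been mapped to 3D arrays of shape (n, 3):

```python
# Sketch: sum of the smallest 80% of the 3D keypoint distances.
import numpy as np

def objective(q_ref, p_test, keep=0.8):
    d = np.linalg.norm(q_ref - p_test, axis=1)   # d_i = |q_i - p_i|
    k = max(1, int(keep * len(d)))
    return np.sort(d)[:k].sum()                  # discounts residual mismatches
```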
3. Preferably, the invention configures the optimization as follows:
(1) Setting of search space
The search space is set for 3 half-axis lengths of the eye model: around 12mm, within plus or minus 2 mm.
The search space is set for 3 rotation angles of the eye model rotation vector as: centered at 0rad, within a range of plus or minus 1 rad.
The search space for the 3 rotation angles of the test camera rotation vector is set as: centered on the RANSAC initialization result from step 4 above, within a range of plus or minus 1 rad.
The search space for the 3 elements of the test camera translation vector is set as: centered on the RANSAC initialization result from step 4 above, within a range of plus or minus 2 mm.
(2) Setting of search speed
This search speed can also be understood as the setting of the maximum search step size.
The search speed is set for 3 half-axis lengths of the eye model: within a range of plus or minus 0.1 mm.
The search speed is set for 3 rotation angles of the eye model rotation vector as: plus or minus 0.01 rad.
The search speed is set for 3 rotation angles of the test camera rotation vector as: plus or minus 0.01 rad.
The search speed is set for 3 elements of the test camera translation vector as: within a range of plus or minus 0.1 mm.
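These ranges and step limits can be wired into a particle swarm optimizer as bounds and velocity clamps. The sketch below uses the pyswarms library as one possible backend; the swarm size, iteration count, and PSO coefficients c1, c2, w are illustrative assumptions, not values from the patent, and the per-dimension velocity clamp relies on NumPy broadcasting.

```python
# Sketch: PSO over the 12 parameters (3 semi-axes, 3 eye rotations,
# 3 test-camera rotations, 3 test-camera translations).
import numpy as np
import pyswarms as ps

rot_cam0 = np.zeros(3)                    # stand-ins for the RANSAC result
trans_cam0 = np.array([0.0, 0.0, 24.0])   # from step 4 (illustrative)

lo = np.concatenate([np.full(3, 10.0), np.full(3, -1.0),
                     rot_cam0 - 1.0, trans_cam0 - 2.0])
hi = np.concatenate([np.full(3, 14.0), np.full(3, 1.0),
                     rot_cam0 + 1.0, trans_cam0 + 2.0])
vmax = np.concatenate([np.full(3, 0.1), np.full(3, 0.01),
                       np.full(3, 0.01), np.full(3, 0.1)])

optimizer = ps.single.GlobalBestPSO(
    n_particles=50, dimensions=12,
    options={"c1": 1.5, "c2": 1.5, "w": 0.7},
    bounds=(lo, hi), velocity_clamp=(-vmax, vmax))

# swarm_cost must map an (n_particles, 12) array to n_particles costs,
# e.g. by evaluating objective() for each candidate parameter vector:
# best_cost, best_params = optimizer.optimize(swarm_cost, iters=200)
```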
6. Outputting the registered test image according to the parameter optimization results:
1. all pixel points of the retinal area in the test image are mapped, through the test camera, onto the retina located in the posterior hemisphere of the eyeball model;
2. all 3D image points on the retina are then imaged through the reference camera model to obtain the registered retinal image, as sketched below.
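The re-imaging step amounts to forward projection of the 3D retinal points through the reference camera matrix; a minimal sketch:

```python
# Sketch: project 3D retinal points through the reference camera.
import numpy as np

def project(P_ref, pts_3d):
    X = np.hstack([pts_3d, np.ones((len(pts_3d), 1))])   # homogeneous 3D points
    uv = (P_ref @ X.T).T
    return uv[:, :2] / uv[:, 2:3]                        # registered pixel coords
```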
The spatial model diagram is as shown in FIG. 6: points of the test image are mapped onto the 3D eyeball model through the test camera and then mapped onto the reference image through the reference camera, completing the comparison.
The innovation of the invention (i.e., the technical defect it overcomes) is as follows: image registration based on the invention preserves the physical structure information of the original image to the greatest extent, unaffected by the registration process. Conventional registration based on deep learning inevitably introduces image distortion; to align more feature points, the deep learning model sacrifices the original structural characteristics, so the registered result loses its original structural meaning. Retinal changes during myopia progression are slow, and the pathological changes that finally result accumulate gradually, so it is very important to guarantee that the original retinal structure is not altered by the registration process.
Of course, the invention is not limited to fundus photography only; the applicability of other fundus imaging techniques, such as fundus fluorescence imaging, is also considered.
The invention provides a method for the paired comparison of changes in the retinal images of myopic patients based on a physical model and relative pose parameters. It realizes analysis between time-series fundus images without destroying the information contained in the images that helps analyze and judge myopia progression (such as the optic papilla region), and without destroying the spatial structure of the retinal blood vessels in the images, which favors the discovery of tiny structural changes. It thereby avoids, to the greatest extent, the image distortion currently caused by blindly pursuing pixel-level registration accuracy, so that the results have practical clinical significance and good interpretability. The evolution identification and 3D visualization system for the fundus structure of myopic patients was used to compare two images obtained in actual clinical practice: FIG. 7 shows the result of keypoint extraction and matching, i.e., the image output by registration. The connecting lines in the figure represent the matching results; the denser the lines and the wider their coverage, the better the matching effect.
Compared with the prior art, the invention provides a time-series fundus image analysis method for the rapid diagnosis of refractive errors, i.e., a method that diagnoses refractive-error fundus disease quickly and accurately through the comparative analysis of time-series fundus images, with the following advantages:
1. By simultaneously considering the physical eye model and the relative pose parameters at image acquisition, the invention realizes the paired comparison of fundus images of myopic patients at different time points, and thereby the identification and analysis of the fundus evolution pattern of myopic patients.
2. The invention provides a paired comparison method for myopic fundus changes based on three-dimensional space, which improves the accuracy of fundus image comparison over a larger visual field range and reduces the distortion caused by peripheral curvature. This is important because fundus lesions caused by myopic change appear early in the peripheral part of the retina.
3. The invention introduces a physical eye model and improves the accuracy of the conversion of non-overlapping regions between any two images, i.e., the accuracy of identifying, within non-overlapping regions, the slow changes caused by myopia progression in the optic papilla area, the disc-macula distance, the vascular morphology, and the like.
The invention also provides an evolution identification and 3D visualization method of the fundus structure of the myopic patient, which comprises the following steps:
acquiring a fundus image of a myopic patient acquired at a first time and a fundus image of a myopic patient acquired at a second time, taking the fundus image of the myopic patient acquired at the first time as a reference image, and taking the fundus image of the myopic patient acquired at the second time as a test image; the first time is earlier than the second time.
And extracting key points in the reference image and the test image by adopting a key point extraction algorithm to obtain the extracted key points in the reference image and the extracted key points in the test image.
And matching the key points extracted from the reference image with the key points extracted from the test image by adopting a key point matching algorithm to obtain a plurality of pairs of matched key points.
And adopting an ellipsoid model to perform approximate construction on the eyes to obtain a physical eyeball model.
Constructing a reference camera model from the intrinsic matrix and extrinsic matrix of the reference camera, and constructing a test camera model from the intrinsic matrix and extrinsic matrix of the test camera; the reference camera is the camera that acquired the reference image; the test camera is the camera that acquired the test image.
Mapping the reference key points to the reference camera model for each pair of matched key points to obtain the coordinates of the reference key points under the coordinate system of the reference camera model, and mapping the test key points to the test camera model to obtain the coordinates of the test key points under the coordinate system of the test camera model; the reference key points are key points extracted from the reference image; the test key points are key points extracted from the test image.
And mapping the coordinates of the reference key points under the reference camera model coordinate system onto the physical eyeball model to obtain the coordinates of the reference key points under the physical eyeball model coordinate system, and mapping the coordinates of the test key points under the test camera model coordinate system onto the physical eyeball model to obtain the coordinates of the test key points under the physical eyeball model coordinate system.
Minimizing, with a particle swarm optimization algorithm, the distance between the coordinates of the reference key points and the coordinates of the test key points in the physical eyeball model coordinate system; optimizing the parameters of the physical eyeball model and of the test camera model through this minimization yields an optimized physical eyeball model and an optimized test camera model.
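A compact particle swarm loop over the joint parameter vector (ellipsoid semi-axes and orientation, test camera rotation and translation) might look as follows; the hyperparameters are conventional defaults, not values from the patent:

```python
import numpy as np

def pso_minimize(cost, lo, hi, n_particles=40, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm: cost maps a parameter vector to the summed
    distance between paired reference/test key points on the eyeball model."""
    rng = np.random.default_rng(0)
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))        # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()]                          # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                     # stay inside the bounds
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()]
    return g, pcost.min()
```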
Mapping the test key points to the optimized test camera model to obtain their coordinates in the optimized test camera model coordinate system.
Mapping the coordinates of the test key points in the optimized test camera model coordinate system onto the optimized physical eyeball model to obtain their coordinates in the optimized physical eyeball model coordinate system.
Mapping the coordinates of the test key points in the optimized physical eyeball model coordinate system to the reference camera model to obtain their coordinates in the reference camera model coordinate system.
Mapping the coordinates of the test key points in the reference camera model coordinate system onto the reference image to obtain the position comparison result between the test key points and the reference key points.
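Chaining these four remapping steps, a test key point can be carried from the test image to the reference image roughly as below; this sketch reuses the hypothetical helpers from the earlier snippets (pixel_to_ray, ray_ellipsoid_hit, project) and is not the patent's exact pipeline:

```python
import numpy as np

# Assumes the hypothetical helpers sketched above: pixel_to_ray, ray_ellipsoid_hit, project.

def remap_test_to_reference(uv_test, K_t, R_t, t_t, K_r, R_r, t_r, a, b, c):
    """Carry one test-image key point to the reference image via the
    optimized physical eyeball model."""
    d_cam = pixel_to_ray(K_t, uv_test[None])[0]   # viewing ray, test-camera frame
    o = -R_t.T @ t_t                              # test-camera center, model frame
    d = R_t.T @ d_cam                             # ray direction, model frame
    X = ray_ellipsoid_hit(o, d, a, b, c)          # retinal point on the ellipsoid
    if X is None:
        return None                               # ray misses the model surface
    return project(K_r, R_r, t_r, X[None])[0]     # pixel in the reference image
```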
Performing a 3D visual presentation of the position comparison results for all test key points and reference key points.
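One possible 3D presentation, assuming matplotlib and paired 3D key point coordinates on the optimized eyeball model, is to draw the displacement between the two acquisitions as line segments:

```python
import numpy as np
import matplotlib.pyplot as plt

def show_displacements(ref_pts3d, test_pts3d):
    """Render paired key points (N,3 arrays) on the eyeball surface and the
    displacement between acquisitions as line segments."""
    ax = plt.figure().add_subplot(projection="3d")
    ax.scatter(*ref_pts3d.T, c="tab:blue", label="reference")
    ax.scatter(*test_pts3d.T, c="tab:red", label="test")
    for p, q in zip(ref_pts3d, test_pts3d):
        ax.plot(*np.stack([p, q]).T, c="gray", lw=0.8)  # displacement segment
    ax.legend()
    plt.show()
```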
The key point extraction algorithm includes the Otsu method, the scale-invariant feature transform (SIFT) algorithm, and a vessel bifurcation extraction algorithm. The key point matching algorithm includes unilateral matching and bilateral matching.
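For instance, SIFT extraction with cross-checked brute-force matching (one form of bilateral matching) is available in OpenCV; the snippet below illustrates this combination on 8-bit grayscale fundus images and is not the patent's specific implementation:

```python
import cv2

def match_keypoints(ref_img, test_img):
    """SIFT key points with cross-checked (bilateral) brute-force matching;
    Otsu thresholding or vessel-bifurcation detectors could replace SIFT."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(ref_img, None)
    kp2, des2 = sift.detectAndCompute(test_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)  # keep mutual best matches only
    matches = matcher.match(des1, des2)
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches]
```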
The physical eyeball model coordinate system is a spatial coordinate system whose origin is the center of the eyeball.
The parameters of the physical eyeball model include the lengths of its three orthogonal semi-axes and the rotation angles of those semi-axes relative to the physical eyeball model coordinate system. The parameters of the test camera model include the rotation angles of the test camera coordinate system about the x-, y-, and z-axes of the physical eyeball model coordinate system and the translation vector of the test camera coordinate system along those axes.
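These six test camera parameters can be turned into an extrinsic pose in the usual way, composing per-axis rotations and collecting the translation components; a hedged sketch:

```python
import numpy as np

def extrinsics_from_params(rx, ry, rz, tx, ty, tz):
    """Build the test-camera pose from its six free parameters: rotation
    angles about the model x/y/z axes and a translation vector."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx, np.array([tx, ty, tz])  # rotation matrix, translation vector
```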
In this specification, the embodiments are described progressively; each embodiment focuses on its differences from the others, and identical or similar parts may be cross-referenced between embodiments. Because the method disclosed in the embodiments corresponds to the system disclosed in the embodiments, its description is relatively brief, and the relevant details can be found in the description of the system.
The principles and embodiments of the invention have been described herein with reference to specific examples; these descriptions are intended only to aid understanding of the method of the invention and its core ideas. A person of ordinary skill in the art may modify the specific embodiments and the scope of application in light of these teachings. In view of the foregoing, this description should not be construed as limiting the invention.
Claims (10)
1. An evolution identification and 3D visualization system for the fundus structure of a myopic patient, the system comprising:
the acquisition module is used for acquiring a fundus image of a myopic patient acquired at a first time and a fundus image of a myopic patient acquired at a second time, taking the fundus image of the myopic patient acquired at the first time as a reference image and taking the fundus image of the myopic patient acquired at the second time as a test image; the first time is earlier than the second time;
the key point extraction module is used for extracting key points from the reference image and the test image with a key point extraction algorithm to obtain the extracted key points of the reference image and of the test image;
the key point matching module is used for matching the key points extracted from the reference image with those extracted from the test image with a key point matching algorithm to obtain multiple pairs of matched key points;
the physical eyeball model construction module is used for approximating the eye with an ellipsoid model to obtain a physical eyeball model;
the camera model building module is used for constructing a reference camera model from the intrinsic and extrinsic matrices of the reference camera and a test camera model from the intrinsic and extrinsic matrices of the test camera; the reference camera is the camera that acquired the reference image; the test camera is the camera that acquired the test image;
the first key point mapping module is used for mapping, for each pair of matched key points, the reference key point to the reference camera model to obtain its coordinates in the reference camera model coordinate system, and simultaneously mapping the test key point to the test camera model to obtain its coordinates in the test camera model coordinate system; the reference key point is the key point extracted from the reference image; the test key point is the key point extracted from the test image;
the second key point mapping module is used for mapping the coordinates of the reference key point in the reference camera model coordinate system onto the physical eyeball model to obtain its coordinates in the physical eyeball model coordinate system, and mapping the coordinates of the test key point in the test camera model coordinate system onto the physical eyeball model to obtain its coordinates in the physical eyeball model coordinate system;
the model optimization module is used for minimizing, with a particle swarm optimization algorithm, the distance between the coordinates of the reference key point and the coordinates of the test key point in the physical eyeball model coordinate system, thereby optimizing the parameters of the physical eyeball model and the parameters of the test camera model to obtain an optimized physical eyeball model and an optimized test camera model;
the first test key point remapping module is used for mapping the test key point to the optimized test camera model to obtain its coordinates in the optimized test camera model coordinate system;
the second test key point remapping module is used for mapping the coordinates of the test key point in the optimized test camera model coordinate system onto the optimized physical eyeball model to obtain its coordinates in the optimized physical eyeball model coordinate system;
the third test key point remapping module is used for mapping the coordinates of the test key point in the optimized physical eyeball model coordinate system to the reference camera model to obtain its coordinates in the reference camera model coordinate system;
the fourth test key point mapping module is used for mapping the coordinates of the test key point in the reference camera model coordinate system onto the reference image to obtain the position comparison result between the test key point and the reference key point;
and the position comparison result 3D visual presentation module is used for carrying out 3D visual presentation on the position comparison results of all the test key points and the reference key points.
2. The evolution identification and 3D visualization system for the fundus structure of a myopic patient of claim 1, wherein the key point extraction algorithm includes the Otsu method, a scale-invariant feature transform algorithm, and a vessel bifurcation extraction algorithm.
3. The evolution identification and 3D visualization system for the fundus structure of a myopic patient of claim 1, wherein the key point matching algorithm includes unilateral matching and bilateral matching.
4. The evolution identification and 3D visualization system for the fundus structure of a myopic patient of claim 1, wherein the physical eyeball model coordinate system is a spatial coordinate system; the origin of the spatial coordinate system is the center of the eyeball.
5. The evolution identification and 3D visualization system for the fundus structure of a myopic patient of claim 1, wherein the parameters of the physical eyeball model include the lengths of the three orthogonal semi-axes of the physical eyeball model and the rotation angles of those semi-axes relative to the physical eyeball model coordinate system.
6. The system of claim 1, wherein the parameters of the test camera model comprise the rotation angles of the test camera coordinate system about the x-, y-, and z-axes of the physical eyeball model coordinate system and the translation vector of the test camera coordinate system along those axes.
7. A method for evolution identification and 3D visualization of the fundus structure of a myopic patient, the method comprising:
acquiring a fundus image of a myopic patient acquired at a first time and a fundus image of a myopic patient acquired at a second time, taking the fundus image of the myopic patient acquired at the first time as a reference image, and taking the fundus image of the myopic patient acquired at the second time as a test image; the first time is earlier than the second time;
Extracting key points in the reference image and the test image by adopting a key point extraction algorithm to obtain the extracted key points in the reference image and the extracted key points in the test image;
matching the key points extracted from the reference image with the key points extracted from the test image by adopting a key point matching algorithm to obtain a plurality of pairs of matched key points;
approximating the eye with an ellipsoid model to obtain a physical eyeball model;
constructing a reference camera model from the intrinsic and extrinsic matrices of the reference camera, and a test camera model from the intrinsic and extrinsic matrices of the test camera; the reference camera is the camera that acquired the reference image; the test camera is the camera that acquired the test image;
for each pair of matched key points, mapping the reference key point to the reference camera model to obtain its coordinates in the reference camera model coordinate system, and simultaneously mapping the test key point to the test camera model to obtain its coordinates in the test camera model coordinate system; the reference key point is the key point extracted from the reference image; the test key point is the key point extracted from the test image;
mapping the coordinates of the reference key point in the reference camera model coordinate system onto the physical eyeball model to obtain its coordinates in the physical eyeball model coordinate system, and simultaneously mapping the coordinates of the test key point in the test camera model coordinate system onto the physical eyeball model to obtain its coordinates in the physical eyeball model coordinate system;
minimizing, with a particle swarm optimization algorithm, the distance between the coordinates of the reference key point and the coordinates of the test key point in the physical eyeball model coordinate system, thereby optimizing the parameters of the physical eyeball model and the parameters of the test camera model to obtain an optimized physical eyeball model and an optimized test camera model;
mapping the test key point to the optimized test camera model to obtain its coordinates in the optimized test camera model coordinate system;
mapping the coordinates of the test key point in the optimized test camera model coordinate system onto the optimized physical eyeball model to obtain its coordinates in the optimized physical eyeball model coordinate system;
mapping the coordinates of the test key point in the optimized physical eyeball model coordinate system to the reference camera model to obtain its coordinates in the reference camera model coordinate system;
mapping the coordinates of the test key point in the reference camera model coordinate system onto the reference image to obtain the position comparison result between the test key point and the reference key point;
and performing a 3D visual presentation of the position comparison results for all the test key points and reference key points.
8. The method for evolution identification and 3D visualization of the fundus structure of a myopic patient of claim 7, wherein the physical eyeball model coordinate system is a spatial coordinate system; the origin of the spatial coordinate system is the center of the eyeball.
9. The method of claim 7, wherein the parameters of the physical eyeball model include the lengths of the three orthogonal semi-axes of the physical eyeball model and the rotation angles of those semi-axes relative to the physical eyeball model coordinate system.
10. The method of claim 7, wherein the parameters of the test camera model include the rotation angles of the test camera coordinate system about the x-, y-, and z-axes of the physical eyeball model coordinate system and the translation vector of the test camera coordinate system along those axes.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310425376.6A CN116407080A (en) | 2023-04-20 | 2023-04-20 | Evolution identification and 3D visualization system and method for fundus structure of myopic patient |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116407080A true CN116407080A (en) | 2023-07-11 |
Family
ID=87051139
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310425376.6A Pending CN116407080A (en) | 2023-04-20 | 2023-04-20 | Evolution identification and 3D visualization system and method for fundus structure of myopic patient |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116407080A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116646079A (en) * | 2023-07-26 | 2023-08-25 | 武汉大学人民医院(湖北省人民医院) | Auxiliary diagnosis method and device for ophthalmologic symptoms |
CN116646079B (en) * | 2023-07-26 | 2023-10-10 | 武汉大学人民医院(湖北省人民医院) | Auxiliary diagnosis method and device for ophthalmologic symptoms |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Xiang et al. | Automatic segmentation of retinal layer in OCT images with choroidal neovascularization | |
Li et al. | Automated feature extraction in color retinal images by a model based approach | |
Chanwimaluang et al. | Hybrid retinal image registration | |
Dufour et al. | Graph-based multi-surface segmentation of OCT data using trained hard and soft constraints | |
Abràmoff et al. | Retinal imaging and image analysis | |
AU2021202217B2 (en) | Methods and systems for ocular imaging, diagnosis and prognosis | |
JP2019192215A (en) | 3d quantitative analysis of retinal layers with deep learning | |
CN108618749A (en) | Retinal vessel three-dimensional rebuilding method based on portable digital fundus camera | |
Zhu et al. | Digital image processing for ophthalmology: Detection of the optic nerve head | |
JP2021529622A (en) | Method and computer program for segmentation of light interference tomography images of the retina | |
CN107563996A (en) | A kind of new discus nervi optici dividing method and system | |
CN106446805B (en) | A kind of eyeground shine in optic cup dividing method and system | |
Pan et al. | OCTRexpert: a feature-based 3D registration method for retinal OCT images | |
CN109325955B (en) | Retina layering method based on OCT image | |
CN108665474B (en) | B-COSFIRE-based retinal vessel segmentation method for fundus image | |
CN116407080A (en) | Evolution identification and 3D visualization system and method for fundus structure of myopic patient | |
Guo et al. | Robust fovea localization based on symmetry measure | |
CN109919098B (en) | Target object identification method and device | |
CN115393239A (en) | Multi-mode fundus image registration and fusion method and system | |
Yadav et al. | Optic nerve head three-dimensional shape analysis | |
Padmasini et al. | State-of-the-art of level-set methods in segmentation and registration of spectral domain optical coherence tomographic retinal images | |
Rivas-Villar et al. | Joint keypoint detection and description network for color fundus image registration | |
ES2977594T3 (en) | Method for automatic quantification of the shape of an optic nerve head | |
CN116452571A (en) | Image recognition method based on deep neural network | |
CN115294152A (en) | Automatic layering method and system for retina OCT (optical coherence tomography) image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||