CN117495864A - Imaging direction computing system and diopter estimating system based on image processing - Google Patents
- Publication number: CN117495864A (application CN202410004274.1A)
- Authority: CN (China)
- Prior art keywords: abscissa, image, boundary, right boundary, pupil
- Legal status: Granted (the legal status is an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Classifications
- A61B3/103: Objective instruments for determining refraction, e.g. refractometers, skiascopes
- A61B3/145: Arrangements for eye photography by video means
- G06T3/4053: Scaling of whole images based on super-resolution
- G06T7/0012: Biomedical image inspection
- G06T7/11: Region-based segmentation
- G06T7/13: Edge detection
- G06T7/136: Segmentation involving thresholding
- G06T7/155: Segmentation/edge detection involving morphological operators
- G06T2207/10016: Video; image sequence
- G06T2207/30041: Eye; retina; ophthalmic
Abstract
The invention relates to the field of medical image analysis and discloses an imaging direction calculation system and a diopter estimation system based on image processing. The diopter estimation system acquires a retinoscopy video of the patient's eye region; segments a pupil region image from each frame; applies super-resolution processing to the pupil region image and threshold segmentation to the super-resolution image to extract the reflection region; performs edge detection on the reflection region to obtain the reflection edge contour and its left and right boundary abscissas; transforms the left and right boundary abscissas; repeats this for every frame of the video; linearly fits the transformed left and right boundary abscissa sequences to obtain the moving speeds of the two boundaries; takes the average of those speeds as the reflection speed; and calculates the diopter from the reflection speed. The invention reduces interference from subjective factors and speeds up the refraction process.
Description
Technical Field
The invention relates to the technical field of medical image analysis, in particular to an imaging direction computing system and a diopter estimating system based on image processing.
Background
The statements in this section merely relate to the background of the present disclosure and may not necessarily constitute prior art.
Refractive error is the most common ocular disorder and the leading cause of correctable visual impairment. It can be diagnosed by a variety of methods, including subjective refraction and objective refraction. Retinoscopy is a traditional objective method: the direction and brightness of the light reflex in the patient's pupil are analysed under different trial lenses to determine the refractive error. Its results are objective and reliable, it requires no subjective cooperation from the person being examined, and it is widely applicable. However, retinoscopy generally takes a long time and requires a trained professional, which limits its use in large-scale vision screening.
Disclosure of Invention
To overcome these shortcomings of the prior art, the invention provides an imaging direction calculation system and a diopter estimation system based on image processing. From a retinoscopy video, the reflection movement direction and the refractive error are calculated using image processing techniques, mathematical modelling, and corresponding algorithms. The main advantages are improved accuracy of refractive error calculation, reduced interference from subjective factors, and a faster refraction process.
In one aspect, a diopter estimation system is provided, comprising: a first acquisition module configured to collect a retinoscopy video of the patient's eye region; a reflection speed calculation module configured to: segment a pupil region image from each frame of the retinoscopy video; apply super-resolution processing to the segmented pupil region image to obtain a super-resolution pupil region image; threshold-segment the super-resolution pupil region image to extract the reflection region; apply morphological processing and edge detection to the reflection region to obtain the reflection edge contour and, from it, the left and right boundary abscissas of the contour; transform the left and right boundary abscissas to obtain the transformed left and right boundary abscissas; obtain the transformed left and right boundary abscissas for all frames of the video; linearly fit each transformed boundary abscissa sequence to obtain the moving speeds of the left and right boundaries; and take the average of the two moving speeds as the reflection speed; and a diopter calculation module configured to calculate the diopter from the reflection speed using a diopter fitting formula.
In another aspect, an imaging direction calculation system based on image processing is provided, comprising: a second acquisition module configured to collect a retinoscopy video of the patient's eye region; a reflection movement direction calculation module configured to compute the reflection speed as above (pupil segmentation, super-resolution processing, threshold segmentation, morphological processing and edge detection, boundary abscissa transformation, and linear fitting of the boundary sequences) and to derive the reflection movement direction from the reflection speed; a light band movement direction calculation module configured to calculate the movement direction of the light band; and a shadow direction calculation module configured to determine the shadow movement direction from the reflection movement direction and the light band movement direction.
The technical scheme has the following beneficial effects: the imaging direction and the diopter are calculated from the retinoscopy video by image processing, which improves the accuracy of refractive error calculation, reduces interference from subjective factors, accelerates the refraction process, and facilitates its automation. The method comprises acquiring the retinoscopy video, extracting the reflection boundaries and computing the reflection parameters, extracting the light band boundaries and computing the light band parameters, and calculating the shadow movement direction and the diopter.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention.
FIG. 1 is a functional block diagram of a diopter estimation system according to a first embodiment;
fig. 2 is a functional block diagram of an image processing-based imaging direction calculation system according to the first embodiment.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the invention. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
Example 1
As shown in fig. 1, the diopter estimation system comprises a first acquisition module, a reflection speed calculation module, and a diopter calculation module, configured as described in the summary above; the individual steps are detailed below.
Further, the retinoscopy video of the patient's eye region is acquired with an image acquisition device while the retinoscope light band is swept left and right across the eye region at uniform speed. During acquisition the patient wears a reference spectacle frame fitted with a white backing board; two rectangular openings in the frame expose the eye regions. The white backing board may be mounted on a lensless spectacle frame for comfortable wear, but is not limited to this form: it may also be attached directly to the patient's face, with a rectangular opening through which the eye region can be photographed.
During acquisition, the camera lens is held closely against the viewing hole of the retinoscope and the image is captured through that hole; the image recorded by the camera is therefore the image the optometrist would see during retinoscopy, the camera standing in for the optometrist's eye.
Further, segmenting the pupil region image from each frame of the retinoscopy video comprises: preprocessing each frame of the video; performing reference detection on the preprocessed image to obtain the eye rectangular region; detecting the pupil within that rectangular region to obtain the minimum circumscribed rectangle of the pupil; and cropping that rectangle to obtain the pupil region image.
Further, preprocessing each frame of the retinoscopy video specifically comprises: converting the frame to grayscale, then applying filtering, brightness adjustment, and gamma correction to the grayscale image to obtain the preprocessed image.
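The preprocessing chain can be sketched in plain NumPy; the gamma value, brightness offset, and 3x3 mean filter below are illustrative stand-ins, since the patent does not specify the filter or the correction parameters:

```python
import numpy as np

def preprocess(frame, gamma=0.8, brightness=10.0):
    """Sketch of the per-frame preprocessing: grayscale conversion, a 3x3
    mean filter standing in for the unspecified filtering step, brightness
    adjustment, and gamma correction."""
    # Grayscale via standard luminance weights (frame is H x W x 3, BGR assumed).
    gray = frame.astype(float) @ np.array([0.114, 0.587, 0.299])
    # 3x3 mean filter with edge padding.
    pad = np.pad(gray, 1, mode="edge")
    h, w = gray.shape
    smooth = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    # Brightness shift, then gamma correction on the [0, 1] range.
    adj = np.clip(smooth + brightness, 0.0, 255.0) / 255.0
    return (np.power(adj, gamma) * 255.0).astype(np.uint8)

frame = np.full((4, 4, 3), 128, dtype=np.uint8)   # uniform synthetic frame
out = preprocess(frame)
```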
Further, reference detection is performed on the preprocessed image to obtain the eye rectangular region, and the center point of the eye rectangular region is taken as the reference point F1.
It will be appreciated that the camera moves left and right relative to the patient, so the position of any fixed object changes from frame to frame in the captured images. Reference detection finds a point that is stationary in the world coordinate system although it moves in the image coordinate system; the center point of the eye rectangular region serves as this reference point.
Further, the pupil is detected within the rectangular region to obtain the minimum circumscribed rectangle of the pupil, using either circle detection or a trained convolutional neural network; the endpoint of the minimum circumscribed rectangle closest to the coordinate origin is recorded as F2.
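The recorded corner F2 follows directly from the detected pupil circle; a minimal sketch, assuming a circle (cx, cy, r) already found by, e.g., Hough circle detection:

```python
def pupil_bounding_rect(cx, cy, r):
    """Minimum axis-aligned bounding rectangle of a detected pupil circle.
    Returns (x, y, w, h); (x, y) is the corner nearest the coordinate
    origin, i.e. the endpoint F2 recorded by the system."""
    return (cx - r, cy - r, 2 * r, 2 * r)

x, y, w, h = pupil_bounding_rect(100, 80, 30)   # F2 = (70, 50)
```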
Further, the segmented pupil region image is input into the super-resolution network SRCNN (Super-Resolution Convolutional Neural Network) and upscaled by a factor of four to obtain the super-resolution pupil region image.
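SRCNN itself upscales the input (classically by bicubic interpolation) and then refines it with three convolutional layers; as a lightweight stand-in for the 4x step (not the actual network), nearest-neighbour upscaling illustrates the coordinate scaling involved:

```python
import numpy as np

def upscale4x(img):
    """4x nearest-neighbour upscaling: each pixel becomes a 4x4 block, so
    an abscissa x in the original corresponds to 4*x in the upscaled image."""
    return np.repeat(np.repeat(img, 4, axis=0), 4, axis=1)

img = np.arange(4, dtype=np.uint8).reshape(2, 2)
sr = upscale4x(img)
```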
Further, threshold segmentation of the super-resolution pupil region image to extract the reflection region specifically comprises: setting a threshold, keeping the pixels of the super-resolution pupil region image whose values exceed the threshold, and discarding those below it, yielding the reflection region.
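The threshold step is a single comparison; the threshold value below is an illustrative choice:

```python
import numpy as np

def segment_reflection(sr_pupil, threshold=200):
    """Binary mask of the reflection region: keep pixels above the
    threshold, discard the rest."""
    return sr_pupil > threshold

img = np.array([[ 10, 220, 230],
                [ 15, 210,  40],
                [ 12,  20,  30]], dtype=np.uint8)
mask = segment_reflection(img)   # True at the three bright pixels
```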
Further, morphological processing and edge detection are applied to the reflection region to obtain the reflection edge contour and its left and right boundary abscissas: morphological processing removes discontinuities and protrusions from the reflection region; edge detection then yields the reflection edge contour; the point with the smallest abscissa on the contour gives the left boundary and the point with the largest abscissa gives the right boundary; the left boundary abscissa L1 and the right boundary abscissa R1 of the reflection edge contour are recorded.
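With a clean binary mask, the same L1/R1 can be read directly as the smallest and largest occupied column; this is a simplification of the contour step, not the morphology/edge-detection pipeline itself:

```python
import numpy as np

def reflection_boundaries(mask):
    """Left/right boundary abscissas (L1, R1): the smallest and largest
    column index that contains reflection pixels."""
    cols = np.where(mask.any(axis=0))[0]
    return int(cols.min()), int(cols.max())

mask = np.zeros((5, 8), dtype=bool)
mask[1:4, 2:6] = True            # reflection occupies columns 2..5
L1, R1 = reflection_boundaries(mask)
```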
The reflection is the light reflected from the retina of the human eye under the strip beam of the retinoscope.
Further, the boundary abscissas L1 and R1 are transformed to obtain the transformed boundary abscissas L2 and R2. The transformation maps the abscissas from the super-resolution pupil-image coordinate system into a coordinate system fixed relative to the reference point F1, using the recorded corner coordinate F2 and the fourfold super-resolution factor, thereby compensating for the camera's motion.
Further, the transformed left and right boundary abscissas are obtained for all frames of the retinoscopy video: the transformed abscissas L2 and R2 are recorded frame by frame, yielding the transformed left boundary coordinate sequence S1 and the transformed right boundary coordinate sequence S2.
Further, the transformed left and right boundary abscissa sequences are each linearly fitted to obtain the moving speeds of the two boundaries, and their average is taken as the reflection speed. Specifically: a piecewise linear fit of the transformed left boundary sequence S1 yields two segment slopes; the slope close to zero is discarded, and the remaining slope is taken as the left boundary moving speed V_rl. The right boundary moving speed V_rr is obtained from S2 in the same way. The average of V_rl and V_rr is taken as the reflection speed V_r.
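The piecewise fit and near-zero-slope rejection can be sketched as follows; the midpoint split and the zero tolerance are illustrative choices not fixed by the text:

```python
import numpy as np

def boundary_speed(xs, zero_tol=0.05):
    """Two-segment linear fit of a boundary abscissa sequence; the slope
    near zero (boundary at rest) is discarded, the other is the speed."""
    t = np.arange(len(xs), dtype=float)
    mid = len(xs) // 2
    slopes = [np.polyfit(t[:mid], xs[:mid], 1)[0],
              np.polyfit(t[mid:], xs[mid:], 1)[0]]
    kept = [s for s in slopes if abs(s) > zero_tol]
    return float(kept[0]) if kept else 0.0

# Each boundary rests for 5 frames, then moves right at 2 px/frame.
v_rl = boundary_speed([10, 10, 10, 10, 10, 12, 14, 16, 18, 20])
v_rr = boundary_speed([14, 14, 14, 14, 14, 16, 18, 20, 22, 24])
v_r = (v_rl + v_rr) / 2.0        # reflection speed
```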
Further, calculating the diopter from the reflection speed and the diopter fitting formula comprises: the reflection speed V_r is substituted into the diopter fitting formula to obtain the diopter D; the formula is a fitted function of V_r with three fitted parameters a, b, and c.
Retinoscopy videos of a group of patients with different diopters are collected and their diopters are labeled by an ophthalmologist; the fitting parameters are then obtained by fitting these labels against the reflection speeds computed as in this embodiment.
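The source does not reproduce the explicit form of the fitting formula, only that it has three fitted parameters; purely as an illustration, a quadratic form D = a*V_r^2 + b*V_r + c can be fitted by least squares to hypothetical calibration data:

```python
import numpy as np

# Hypothetical calibration data: reflection speeds paired with
# ophthalmologist-labeled diopters (values are illustrative only).
speeds   = np.array([-3.0, -1.5, 0.5, 1.2, 2.8])
diopters = np.array([-2.1, -0.9, 0.4, 0.9, 1.8])

# Fit the three parameters of the assumed form D = a*V^2 + b*V + c.
a, b, c = np.polyfit(speeds, diopters, 2)

def estimate_diopter(v_r):
    return a * v_r ** 2 + b * v_r + c

d = estimate_diopter(1.2)
```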
Example two
As shown in fig. 2, the imaging direction calculation system based on image processing comprises a second acquisition module, a reflection movement direction calculation module, a light band movement direction calculation module, and a shadow direction calculation module. The second acquisition module collects a retinoscopy video of the patient's eye region; the reflection movement direction calculation module computes the reflection speed exactly as in the first embodiment and derives the reflection movement direction from it; the remaining modules are detailed below.
Further, the reflection movement direction is obtained from the sign of the reflection speed V_r: if V_r is positive, the reflection moves to the right; if V_r is negative, the reflection moves to the left.
Here left and right match the camera's viewpoint when the retinoscopy video is recorded. During retinoscopy the optometrist sits directly opposite the patient, so the optometrist's left-hand side is "left" and right-hand side is "right"; the camera's viewpoint in this system coincides with the optometrist's.
Further, calculating the light band movement direction comprises: acquiring the retinoscopy video and preprocessing each frame; performing reference detection on the preprocessed image to obtain the rectangular region, whose center point is the reference point F1; selecting a region above the rectangular region and applying threshold segmentation, edge detection, and then straight-line detection; screening the detected lines to find the left and right boundaries of the light band, and recording the light band left boundary abscissa L3 and right boundary abscissa R3; transforming L3 and R3 to obtain the transformed light band boundary abscissas L4 and R4; obtaining L4 and R4 for every frame of the video, yielding the light band left boundary coordinate sequence S3 and right boundary coordinate sequence S4; piecewise-linearly fitting S3 to obtain two segment slopes, discarding the slope close to zero, and taking the remaining slope as the left boundary moving speed V_bl; obtaining the right boundary moving speed V_br from S4 in the same way; taking the average of V_bl and V_br as the light band moving speed V_b; and obtaining the light band movement direction from the sign of V_b.
If V_b is positive, the light band moves to the right; if V_b is negative, it moves to the left.
The left-right convention here is the same as that used for the reflection movement direction.
Further, screening the detected lines to find the left and right boundaries of the light band and recording the left boundary abscissa L3 and right boundary abscissa R3 specifically comprises: setting a threshold, selecting near-vertical lines in the image by slope, and discarding lines shorter than the threshold; then, among the screened lines, finding the leftmost and rightmost lines by the abscissa of each segment's center point and taking them as the left and right boundaries of the light band; the line whose center has the smallest abscissa is the left boundary, its center abscissa being L3; the line whose center has the largest abscissa is the right boundary, its center abscissa being R3.
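The line screening can be sketched as below, assuming candidate segments (x1, y1, x2, y2) from, e.g., a probabilistic Hough transform; the length and tilt thresholds are illustrative:

```python
import math

def optical_band_boundaries(segments, min_len=20.0, max_tilt_deg=10.0):
    """Screen detected line segments: keep near-vertical lines of
    sufficient length; the smallest/largest centre abscissas among the
    kept lines give the light band boundaries L3 and R3."""
    centers = []
    for x1, y1, x2, y2 in segments:
        length = math.hypot(x2 - x1, y2 - y1)
        # Angle measured from the vertical axis.
        tilt = math.degrees(math.atan2(abs(x2 - x1), abs(y2 - y1)))
        if length >= min_len and tilt <= max_tilt_deg:
            centers.append((x1 + x2) / 2.0)
    return min(centers), max(centers)

segments = [(50, 0, 52, 40),    # near-vertical, long   -> kept, centre 51
            (120, 0, 118, 40),  # near-vertical, long   -> kept, centre 119
            (80, 10, 90, 12),   # nearly horizontal     -> rejected
            (70, 0, 70, 5)]     # too short             -> rejected
L3, R3 = optical_band_boundaries(segments)
```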
It will be appreciated that, since the patient wears the reference spectacle frame and the facial area around the eyes is covered by its white backing board, part of the strip beam from the retinoscope falls on the white backing board around the eyes and the rest falls on the eye region; the projection of the retinoscope's strip beam on the white backing board is the light band.
Further, the light band boundary abscissas L3 and R3 are coordinate-transformed, expressing them relative to the reference point F1 to compensate for the camera's motion.
Further, the shadow movement direction is determined from the reflection movement direction and the light band movement direction: if the two directions agree, the shadow moves with the light band ("with" motion); if they differ, the shadow moves against it ("against" motion).
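The direction logic of this embodiment reduces to two sign comparisons ("with"/"against" here correspond to the agreeing/opposing movement directions in the text):

```python
def movement_direction(speed):
    """Rightward if the fitted speed is positive, leftward otherwise."""
    return "right" if speed > 0 else "left"

def shadow_movement(v_reflection, v_band):
    """'with' motion when the reflection and the light band move the same
    way, 'against' motion when they move oppositely."""
    same = movement_direction(v_reflection) == movement_direction(v_band)
    return "with" if same else "against"

d1 = shadow_movement(1.5, 2.0)    # both rightward
d2 = shadow_movement(-1.5, 2.0)   # opposite directions
```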
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. A diopter estimation system, comprising:
a first acquisition module configured to: collect a retinoscopy video of the patient's eye region;
a reflection speed calculation module configured to: segment a pupil region image from each frame of the retinoscopy video; apply super-resolution processing to the segmented pupil region image to obtain a super-resolution pupil region image; threshold-segment the super-resolution pupil region image to extract the reflection region; apply morphological processing and edge detection to the reflection region to obtain the reflection edge contour and, from it, the left and right boundary abscissas of the contour; transform the left and right boundary abscissas to obtain the transformed left and right boundary abscissas; obtain the transformed left and right boundary abscissas for all frames of the video; linearly fit each transformed boundary abscissa sequence to obtain the moving speeds of the left and right boundaries; and take the average of the two moving speeds as the reflection speed;
a diopter calculation module configured to: calculate the diopter based on the reflection speed and a diopter fitting formula.
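The linear-fitting step at the end of claim 1 can be sketched in a few lines of numpy. This is a minimal illustration only, not the patent's implementation; the function name, the pixels-per-frame unit, and the use of `np.polyfit` are assumptions.

```python
import numpy as np

def reflection_speed(left_xs, right_xs):
    """Fit a straight line to each transformed boundary-abscissa
    sequence (one value per video frame) and average the two slopes
    (pixels per frame) to get the reflection speed, per claim 1."""
    frames = np.arange(len(left_xs))
    v_left = np.polyfit(frames, np.asarray(left_xs, dtype=float), 1)[0]
    v_right = np.polyfit(frames, np.asarray(right_xs, dtype=float), 1)[0]
    return (v_left + v_right) / 2.0
```

For a reflex that drifts two pixels per frame on both edges, the fitted speed is 2.0.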
2. The diopter estimation system of claim 1, wherein segmenting a pupil region image from each frame of the optometry video comprises: preprocessing each frame of the optometry video; performing reference detection on the preprocessed image to obtain a rectangular eye region; detecting the pupil in the rectangular region to obtain the minimum circumscribed rectangle of the pupil; and cropping the minimum circumscribed rectangle of the pupil to obtain the pupil region image.
3. The diopter estimation system of claim 2, wherein detecting the pupil in the rectangular region to obtain the minimum circumscribed rectangle of the pupil comprises identifying the pupil by circle detection or by a trained convolutional neural network, obtaining the minimum circumscribed rectangle of the pupil, and recording the coordinate F2 of the rectangle endpoint nearest the coordinate origin.
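Once a binary pupil mask is available (from either detection route named in claim 3), the minimum circumscribed rectangle and the endpoint F2 nearest the origin reduce to coordinate extremes. A minimal numpy sketch, with the function name and mask convention as illustrative assumptions:

```python
import numpy as np

def pupil_bounding_rect(mask):
    """Given a binary pupil mask, return the minimum axis-aligned
    circumscribed rectangle (x, y, w, h) and the endpoint F2 nearest
    the coordinate origin, i.e. the rectangle's top-left corner."""
    ys, xs = np.nonzero(mask)
    x0, y0 = int(xs.min()), int(ys.min())
    w = int(xs.max()) - x0 + 1
    h = int(ys.max()) - y0 + 1
    return (x0, y0, w, h), (x0, y0)   # rectangle, F2
```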
4. The diopter estimation system of claim 1, wherein performing super-resolution processing on the segmented pupil region image to obtain a super-resolution pupil region image comprises inputting the segmented pupil region image into a super-resolution network for four-times (4x) super-resolution processing.
5. The diopter estimation system of claim 1, wherein performing morphological processing and edge detection on the reflection region to obtain a reflection edge contour, and obtaining the left and right boundary abscissas of the reflection edge contour, comprises:
performing morphological processing on the reflection region to eliminate discontinuous and protruding parts; performing edge detection to obtain the reflection edge contour; taking the smallest abscissa among the contour points as the left boundary abscissa of the reflection edge contour and the largest abscissa as the right boundary abscissa; and recording the left boundary abscissa L1 and the right boundary abscissa R1 of the reflection edge contour.
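The extreme-abscissa step of claim 5 is a one-liner over the edge mask. A minimal numpy sketch (function name and mask representation are illustrative assumptions):

```python
import numpy as np

def contour_boundary_abscissas(edge_mask):
    """After morphological clean-up and edge detection, the left
    boundary abscissa L1 is the smallest x-coordinate over all edge
    points and the right boundary abscissa R1 is the largest."""
    xs = np.nonzero(edge_mask.any(axis=0))[0]  # columns with edge pixels
    return int(xs[0]), int(xs[-1])             # (L1, R1)
```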
6. The diopter estimation system of claim 1 wherein said transforming the left and right boundary abscissas to obtain transformed left and right boundary abscissas comprises:
performing a transformation on the reflection boundary abscissas L1 and R1 to obtain the transformed reflection boundary abscissas L2 and R2, according to the transformation formulas [rendered as images in the source; not reproduced here].
7. The diopter estimation system of claim 1, wherein calculating the diopter based on the reflection speed and a diopter fitting formula comprises: inputting the reflection speed Vr into the diopter fitting formula and calculating the diopter, where the fitting formula [rendered as an image in the source; not reproduced here] has three fitting parameters.
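The patent's fitting formula survives only as an image; all this text preserves is that it maps the reflection speed Vr to a diopter through three fitted parameters. Purely as an assumed illustration, suppose the formula were quadratic in Vr; the three parameters could then be recovered from calibration pairs by least squares. The quadratic form and the calibration numbers below are inventions for the sketch, not the patent's formula.

```python
import numpy as np

# Hypothetical calibration pairs (reflection speed, measured diopter),
# generated here from D = 2*Vr**2 - 1.5 for the sake of the example.
calib_v = np.array([0.5, 1.0, 1.5, 2.0])
calib_d = np.array([-1.0, 0.5, 3.0, 6.5])

# Fit the three parameters of the assumed quadratic D = a*Vr^2 + b*Vr + c.
a, b, c = np.polyfit(calib_v, calib_d, 2)

def diopter(v_r):
    """Evaluate the assumed quadratic fitting formula."""
    return a * v_r ** 2 + b * v_r + c
```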
8. An imaging direction calculation system based on image processing, comprising:
a second acquisition module configured to: collecting an optometry video of an eye region of a patient;
a reflection movement direction calculation module configured to: segment a pupil region image from each frame of the optometry video; perform super-resolution processing on the segmented pupil region image to obtain a super-resolution pupil region image; perform threshold segmentation on the super-resolution pupil region image to segment out the reflection region; perform morphological processing and edge detection on the reflection region to obtain a reflection edge contour, and from it the left and right boundary abscissas of the contour; transform the left and right boundary abscissas to obtain transformed left and right boundary abscissas; obtain the transformed left and right boundary abscissas for all frames of the optometry video; perform linear fitting on the transformed left and right boundary abscissa sequences respectively to obtain the moving speeds of the left and right boundaries; take the average of the left and right boundary moving speeds as the reflection speed; and calculate the reflection movement direction from the reflection speed;
a light band movement direction calculation module configured to: calculate the moving direction of the light band;
a shadow direction calculation module configured to: calculate the shadow movement direction according to the reflection movement direction and the light band movement direction.
9. The image-processing-based imaging direction calculation system of claim 8, wherein calculating the moving direction of the light band comprises:
acquiring an optometry video, and preprocessing each frame of image of the optometry video;
performing reference detection on the preprocessed image to obtain a rectangular area, and taking the center point of the rectangular area as the reference point F1;
performing threshold segmentation and edge detection on the area above the rectangular area, then performing straight-line detection; screening the detected straight lines to find the left and right boundaries of the light band, and recording the light band left boundary abscissa L3 and right boundary abscissa R3;
performing coordinate transformation on the light band left boundary abscissa L3 and right boundary abscissa R3 to obtain the transformed light band left boundary abscissa L4 and right boundary abscissa R4;
obtaining the transformed abscissas L4 and R4 for each frame of the optometry video, yielding the light band left boundary coordinate sequence S3 and right boundary coordinate sequence S4;
performing piecewise linear fitting on the left boundary coordinate sequence S3 to obtain two line-segment slopes, and discarding the slope close to zero to obtain the left boundary moving speed Vbl; obtaining the right boundary moving speed Vbr in the same way; then taking the average of the left boundary moving speed Vbl and the right boundary moving speed Vbr as the light band moving speed Vb;
and obtaining the light band movement direction from the sign of the light band moving speed Vb.
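The piecewise fit of claim 9 can be sketched as follows. The two-segment split at the sequence midpoint and the near-zero threshold are illustrative assumptions; the claim only states that the fit yields two slopes and the near-zero one is discarded.

```python
import numpy as np

def band_speed(xs):
    """Split the light-band boundary-abscissa sequence into two halves,
    fit a line to each, drop the slope close to zero (the streak is
    stationary in one segment), and return the remaining slope."""
    xs = np.asarray(xs, dtype=float)
    mid = len(xs) // 2
    slopes = []
    for seg in (xs[:mid], xs[mid:]):
        t = np.arange(len(seg))
        slopes.append(np.polyfit(t, seg, 1)[0])
    moving = [s for s in slopes if abs(s) > 1e-3]  # discard near-zero slope
    return moving[0] if moving else 0.0
```

The sign of the returned speed then gives the light band movement direction, as in the claim's final step.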
10. The image-processing-based imaging direction calculation system of claim 8, wherein calculating the shadow movement direction according to the reflection movement direction and the light band movement direction comprises: comparing the calculated reflection movement direction with the light band movement direction; if the two directions are consistent, the shadow movement is forward ("with") movement, and if they are inconsistent, the shadow movement is reverse ("against") movement.
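The comparison in claim 10 reduces to a sign test once both speeds are signed scalars. A minimal sketch (function name and string labels are illustrative):

```python
def shadow_motion(v_reflex, v_band):
    """Same sign -> the reflex moves 'with' the streak (forward
    movement); opposite sign -> 'against' (reverse movement)."""
    return "with" if v_reflex * v_band > 0 else "against"
```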
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410004274.1A CN117495864B (en) | 2024-01-03 | 2024-01-03 | Imaging direction computing system and diopter estimating system based on image processing |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410004274.1A CN117495864B (en) | 2024-01-03 | 2024-01-03 | Imaging direction computing system and diopter estimating system based on image processing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117495864A true CN117495864A (en) | 2024-02-02 |
CN117495864B CN117495864B (en) | 2024-04-09 |
Family
ID=89667633
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410004274.1A Active CN117495864B (en) | 2024-01-03 | 2024-01-03 | Imaging direction computing system and diopter estimating system based on image processing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117495864B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070171366A1 (en) * | 2006-01-20 | 2007-07-26 | Clarity Medical Systems | Sequential wavefront sensor |
CN101964111A (en) * | 2010-09-27 | 2011-02-02 | 山东大学 | Method for improving sight tracking accuracy based on super-resolution |
CN108703738A (en) * | 2017-04-28 | 2018-10-26 | 分界线(天津)网络技术有限公司 | A kind of measuring system and method for hyperopic refractive degree |
CN109684915A (en) * | 2018-11-12 | 2019-04-26 | 温州医科大学 | Pupil tracking image processing method |
CN111067479A (en) * | 2019-12-31 | 2020-04-28 | 西安电子科技大学 | Fundus imaging device and fundus imaging method |
CN115499588A (en) * | 2022-09-15 | 2022-12-20 | 江苏至真健康科技有限公司 | Exposure time control method and system of portable mydriasis-free fundus camera |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070171366A1 (en) * | 2006-01-20 | 2007-07-26 | Clarity Medical Systems | Sequential wavefront sensor |
CN101964111A (en) * | 2010-09-27 | 2011-02-02 | 山东大学 | Method for improving sight tracking accuracy based on super-resolution |
CN108703738A (en) * | 2017-04-28 | 2018-10-26 | 分界线(天津)网络技术有限公司 | A kind of measuring system and method for hyperopic refractive degree |
CN109684915A (en) * | 2018-11-12 | 2019-04-26 | 温州医科大学 | Pupil tracking image processing method |
CN111067479A (en) * | 2019-12-31 | 2020-04-28 | 西安电子科技大学 | Fundus imaging device and fundus imaging method |
CN115499588A (en) * | 2022-09-15 | 2022-12-20 | 江苏至真健康科技有限公司 | Exposure time control method and system of portable mydriasis-free fundus camera |
Non-Patent Citations (3)
Title |
---|
LYNDON JONES: "TFOS Lifestyle: Impact of contact lenses on the ocular surface", The Ocular Surface, 31 July 2023 (2023-07-31) *
WU SHUAIXIAN: "Research on image acquisition equipment and processing algorithms for astigmatism in retinoscopy", China Master's Theses Full-text Database, 15 January 2023 (2023-01-15), pages 32-63 *
MA JIALIN: "A successful case of Internet-of-Vehicles application: free-flow tolling in Wuhan", China Master's Theses Full-text Database, 15 February 2016 (2016-02-15) *
Also Published As
Publication number | Publication date |
---|---|
CN117495864B (en) | 2024-04-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8048065B2 (en) | Method and apparatus for eye position registering and tracking | |
EP3355104B2 (en) | Method and device and computer program for determining a representation of a spectacle glass rim | |
US5617155A (en) | Method for determining measurement parameters for a spectacle wearer | |
US7434931B2 (en) | Custom eyeglass manufacturing method | |
JP6833674B2 (en) | Determination of user data based on the image data of the selected spectacle frame | |
US20090051871A1 (en) | Custom eyeglass manufacturing method | |
US12056274B2 (en) | Eye tracking device and a method thereof | |
JP3453911B2 (en) | Gaze recognition device | |
US20120257162A1 (en) | Measurement method and equipment for the customization and mounting of corrective ophtalmic lenses | |
JP2006095008A (en) | Visual axis detecting method | |
CN117495864B (en) | Imaging direction computing system and diopter estimating system based on image processing | |
JP3711053B2 (en) | Line-of-sight measurement device and method, line-of-sight measurement program, and recording medium recording the program | |
CN114638879A (en) | Medical pupil size measuring system | |
JPH06319701A (en) | Glance recognizing device | |
CN114727755A (en) | Methods for assessing stability of tear film | |
CN114502058A (en) | Device and method for detecting tear film disruption | |
CN117710280B (en) | Pupil automatic positioning method and device | |
Rezazadeh et al. | Semi-automatic measurement of rigid gas-permeable contact lens movement in keratoconus patients using blinking images | |
CN116958885B (en) | Correcting glasses wearing comfort evaluation method and system based on reading vision | |
Stahl et al. | DirectFlow: A Robust Method for Ocular Torsion Measurement | |
JPH07318873A (en) | Prescription method and prescription system for multifocal contact lens | |
IL310806A (en) | Eye tracking device and a method thereof | |
Grisel et al. | An image analysis based full-embedded system for optical metrology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||