CN117495864B - Imaging direction computing system and diopter estimating system based on image processing - Google Patents


Info

Publication number
CN117495864B
CN117495864B (application CN202410004274.1A)
Authority
CN
China
Prior art keywords
abscissa
image
boundary
right boundary
pupil
Prior art date
Legal status
Active
Application number
CN202410004274.1A
Other languages
Chinese (zh)
Other versions
CN117495864A (en)
Inventor
刘治
薛鹏
马佳霖
崔立真
蒋亚丽
Current Assignee
Shandong University
Original Assignee
Shandong University
Priority date
Filing date
Publication date
Application filed by Shandong University
Priority to CN202410004274.1A
Publication of CN117495864A
Application granted
Publication of CN117495864B
Status: Active


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/103 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining refraction, e.g. refractometers, skiascopes
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/14 Arrangements specially adapted for eye photography
    • A61B3/145 Arrangements specially adapted for eye photography by video means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/155 Segmentation; Edge detection involving morphological operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Surgery (AREA)
  • Biomedical Technology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Molecular Biology (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of medical image analysis and discloses an imaging direction calculation system and a diopter estimation system based on image processing. The diopter estimation system acquires an optometry video of the patient's eye region; segments a pupil-region image from each frame; performs super-resolution processing on the pupil-region image and threshold segmentation on the super-resolution image to extract the light reflex region; performs edge detection on the reflex region to obtain the reflex edge contour and, from it, the contour's left and right boundary abscissas; transforms the left and right boundary abscissas; collects the transformed left and right boundary abscissas for all frames of the optometry video; linearly fits the transformed left and right boundary abscissa sequences to obtain the moving speeds of the two boundaries; takes the mean of the moving speeds as the reflex speed; and calculates the diopter from the reflex speed. The invention reduces interference from subjective factors and speeds up the optometry process.

Description

Imaging direction computing system and diopter estimating system based on image processing
Technical Field
The invention relates to the technical field of medical image analysis, in particular to an imaging direction computing system and a diopter estimating system based on image processing.
Background
The statements in this section merely relate to the background of the present disclosure and may not necessarily constitute prior art.
Refractive error is the most common ocular disorder and the leading cause of correctable visual impairment. It can be diagnosed by a variety of methods, including subjective refraction and objective refraction. Retinoscopy is one of the traditional objective refraction methods: the direction and brightness of the light reflex in the patient's fundus are analyzed while different trial lenses are applied, in order to determine the patient's refractive error. Retinoscopy has the advantages of objective and reliable results, no dependence on the subject's cooperation, and broad applicability. However, it usually requires a long time and the intervention of a professional, which limits its use in large-scale vision screening.
Disclosure of Invention
To overcome the deficiencies of the prior art, the invention provides an imaging direction calculation system and a diopter estimation system based on image processing. From a retinoscopy video, the direction of reflex movement and the refractive error are calculated using image processing techniques, mathematical modeling and corresponding algorithms. The main advantages of this technique are improved accuracy of refractive error calculation, reduced interference from subjective factors, and a faster optometry process.
In one aspect, a diopter estimation system is provided, comprising: a first acquisition module configured to collect an optometry video of a patient's eye region; a reflex speed calculation module configured to: segment a pupil-region image from each frame of the optometry video; perform super-resolution processing on the segmented pupil-region image to obtain a super-resolution pupil-region image; apply threshold segmentation to the super-resolution pupil-region image to extract the light reflex region; apply morphological processing and edge detection to the reflex region to obtain the reflex edge contour and, from it, the contour's left and right boundary abscissas; transform the left and right boundary abscissas to obtain the transformed left and right boundary abscissas; obtain the transformed left and right boundary abscissas for all frames of the optometry video; linearly fit the transformed left and right boundary abscissa sequences to obtain the moving speeds of the left and right boundaries; and take the mean of the two moving speeds as the reflex speed; and a diopter calculation module configured to calculate the diopter based on the reflex speed and the diopter fitting formula.
In another aspect, an imaging direction calculation system based on image processing is provided, comprising: a second acquisition module configured to collect an optometry video of a patient's eye region; a reflex movement direction calculation module configured to: segment a pupil-region image from each frame of the optometry video; perform super-resolution processing on the segmented pupil-region image to obtain a super-resolution pupil-region image; apply threshold segmentation to the super-resolution pupil-region image to extract the light reflex region; apply morphological processing and edge detection to the reflex region to obtain the reflex edge contour and, from it, the contour's left and right boundary abscissas; transform the left and right boundary abscissas to obtain the transformed left and right boundary abscissas; obtain the transformed left and right boundary abscissas for all frames of the optometry video; linearly fit the transformed left and right boundary abscissa sequences to obtain the moving speeds of the left and right boundaries; take the mean of the two moving speeds as the reflex speed; and calculate the direction of reflex movement from the reflex speed; a light band movement direction calculation module configured to calculate the direction of light band movement; and a shadow movement direction calculation module configured to calculate the shadow movement direction from the reflex movement direction and the light band movement direction.
The above technical scheme has the following advantages or beneficial effects: the imaging direction and the diopter are calculated from the optometry video by image processing, which improves the accuracy of refractive error calculation, reduces interference from subjective factors, speeds up the optometry process, and facilitates its automation and intelligent operation. The invention belongs to the technical field of medical image analysis and in particular relates to a method for estimating the imaging direction and the refractive error based on image processing. The method comprises acquiring an optometry video, extracting the reflex boundaries and calculating the reflex parameters, extracting the light band boundaries and calculating the light band parameters, and calculating the shadow movement direction and the diopter. The invention achieves accurate identification and calculation of the shadow movement direction and the diopter, reduces interference from subjective factors, and speeds up the optometry process.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention.
FIG. 1 is a functional block diagram of a diopter estimation system according to a first embodiment;
fig. 2 is a functional block diagram of an image processing-based imaging direction calculation system according to the first embodiment.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the invention. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
Example 1
As shown in fig. 1, the diopter estimation system includes: a first acquisition module configured to collect an optometry video of a patient's eye region; a reflex speed calculation module configured to: segment a pupil-region image from each frame of the optometry video; perform super-resolution processing on the segmented pupil-region image to obtain a super-resolution pupil-region image; apply threshold segmentation to the super-resolution pupil-region image to extract the light reflex region; apply morphological processing and edge detection to the reflex region to obtain the reflex edge contour and, from it, the contour's left and right boundary abscissas; transform the left and right boundary abscissas to obtain the transformed left and right boundary abscissas; obtain the transformed left and right boundary abscissas for all frames of the optometry video; linearly fit the transformed left and right boundary abscissa sequences to obtain the moving speeds of the left and right boundaries; and take the mean of the two moving speeds as the reflex speed; and a diopter calculation module configured to calculate the diopter based on the reflex speed and the diopter fitting formula.
Further, the optometry video of the patient's eye region is acquired with an image acquisition device while the light band of a retinoscope scans left and right across the patient's eye region at uniform speed. During image acquisition the patient wears a reference spectacle frame carrying a white background board with two rectangular openings that expose the eye regions. The white background board may be mounted on a lensless spectacle frame, which is convenient for the patient to wear, but the form is not limited to this: the white background board may also be attached directly to the patient's face, with a rectangular opening through which the eye region can be photographed.
During image acquisition, the camera lens is placed tightly against the viewing hole of the retinoscope and images are acquired through the viewing hole; the image captured by the camera is therefore the image an optometrist would see during retinoscopy, the camera acting as the optometrist's eye.
Further, segmenting the pupil-region image from each frame of the optometry video includes: preprocessing each frame of the optometry video; performing fiducial detection on the preprocessed image to obtain the eye rectangular region; detecting the pupil within the rectangular region to obtain the minimum circumscribed rectangle of the pupil; and cropping the minimum circumscribed rectangle of the pupil to obtain the pupil-region image.
Further, preprocessing each frame of the optometry video specifically includes: converting each frame to grayscale, then applying filtering, brightness adjustment and gamma correction to the grayscale image to obtain the preprocessed image.
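The preprocessing chain (grayscale conversion, filtering, brightness adjustment, gamma correction) can be sketched as follows. This is a minimal numpy-only illustration: the 3x3 mean filter, the brightness offset of 10 and the gamma of 0.8 are assumed values, since the patent does not specify them.

```python
import numpy as np

def preprocess_frame(frame_bgr, brightness_offset=10.0, gamma=0.8):
    """Grayscale -> filtering -> brightness adjustment -> gamma correction."""
    # Grayscale via standard luminance weights (BGR channel order assumed).
    gray = (0.114 * frame_bgr[..., 0]
            + 0.587 * frame_bgr[..., 1]
            + 0.299 * frame_bgr[..., 2])
    # 3x3 mean filter as a stand-in for the unspecified filtering step.
    padded = np.pad(gray, 1, mode="edge")
    h, w = gray.shape
    smoothed = sum(padded[i:i + h, j:j + w]
                   for i in range(3) for j in range(3)) / 9.0
    # Brightness adjustment, then gamma correction on the [0, 1] range.
    bright = np.clip(smoothed + brightness_offset, 0.0, 255.0)
    corrected = 255.0 * (bright / 255.0) ** gamma
    return corrected.astype(np.uint8)
```

A production system would typically use OpenCV equivalents (`cvtColor`, `GaussianBlur`, a lookup-table gamma), but the pipeline order is the same.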
Further, fiducial detection is performed on the preprocessed image to obtain the eye rectangular region, and the center point of the eye rectangular region is taken as the datum point F1.
It will be appreciated that the camera moves left and right relative to the patient, so the position of any fixed object changes across the captured images. Fiducial detection finds a reference point that is stationary in the world coordinate system while moving in the image coordinate system; the center point of the eye rectangular region is taken as this reference, i.e. the datum point.
Further, the pupil is detected within the rectangular region to obtain the minimum circumscribed rectangle of the pupil: the pupil is identified by circle detection or by a trained convolutional neural network, the minimum circumscribed rectangle of the pupil is obtained, and the endpoint coordinate F2 of this rectangle closest to the coordinate origin is recorded.
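A minimal stand-in for this step: instead of circle detection or a CNN (which the text names but does not detail), the sketch below finds the pupil as the darkest blob under a fixed intensity threshold, takes its minimum circumscribed rectangle, and records the endpoint F2 nearest the coordinate origin. The threshold of 50 is an assumption.

```python
import numpy as np

def pupil_bounding_box(gray, dark_thresh=50):
    """Find the pupil as the darkest blob, return the corner of its minimum
    circumscribed rectangle nearest the origin (F2) plus the cropped region."""
    ys, xs = np.nonzero(gray < dark_thresh)   # pupil pixels assumed darkest
    x0, y0, x1, y1 = xs.min(), ys.min(), xs.max(), ys.max()
    f2 = (int(x0), int(y0))                   # endpoint closest to origin
    crop = gray[y0:y1 + 1, x0:x1 + 1]         # pupil-region image
    return f2, crop
```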
Further, super-resolution processing is performed on the segmented pupil-region image to obtain a super-resolution pupil-region image: the segmented pupil-region image is input into the super-resolution network SRCNN (Super-Resolution Convolutional Neural Network) and upscaled by a factor of four to obtain the super-resolution pupil-region image.
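The patent performs the four-times super-resolution with SRCNN; as a dependency-free sketch with the same input/output contract ((H, W) -> (4H, 4W)), nearest-neighbour upscaling can stand in for the network inference:

```python
import numpy as np

def upscale4x(img):
    """4x spatial upscaling, (H, W) -> (4H, 4W), by nearest-neighbour
    replication; a dependency-free stand-in for the SRCNN inference."""
    return np.kron(img, np.ones((4, 4), dtype=img.dtype))
```

In the real pipeline this function would be replaced by bicubic pre-upsampling followed by the trained SRCNN model; only the coordinate scale (everything downstream is in 4x coordinates) matters for the boundary math.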
Further, threshold segmentation is applied to the super-resolution pupil-region image to extract the light reflex region, specifically: a threshold is set, points in the super-resolution pupil-region image whose pixel values exceed the threshold are kept, and points below the threshold are discarded, yielding the reflex region.
Further, morphological processing and edge detection are applied to the reflex region to obtain the reflex edge contour and, from it, the contour's left and right boundary abscissas: morphological processing is applied to the reflex region to eliminate discontinuities and protrusions, edge detection is then performed to obtain the reflex edge contour, the point with the smallest abscissa in the contour is taken as the left boundary of the contour, and the point with the largest abscissa is taken as the right boundary; the left boundary abscissa L1 and the right boundary abscissa R1 of the reflex edge contour are recorded.
The light reflex is the light reflected from the human retina under the illumination of the retinoscope's strip beam.
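The thresholding, morphological clean-up and boundary extraction can be sketched together. The threshold value and the 3x3 closing kernel are assumptions (the patent leaves them unspecified), and the closing implemented here with plain numpy stands in for OpenCV's morphology operators:

```python
import numpy as np

def _filter3(mask, op):
    """3x3 sliding-window reduction over a boolean mask:
    op=np.max gives dilation, op=np.min gives erosion."""
    padded = np.pad(mask, 1, mode="constant")
    h, w = mask.shape
    stack = np.stack([padded[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    return op(stack, axis=0)

def reflex_boundaries(sr_pupil, thresh=180):
    """Threshold the super-resolution pupil image to get the reflex mask,
    smooth it with a 3x3 morphological closing (dilation then erosion),
    and return the smallest and largest column index of the mask as the
    left/right boundary abscissas L1 and R1."""
    mask = sr_pupil > thresh                           # reflex = bright pixels
    closed = _filter3(_filter3(mask, np.max), np.min)  # closing
    cols = np.nonzero(closed.any(axis=0))[0]
    return int(cols.min()), int(cols.max())            # L1, R1
```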
Further, transforming the left and right boundary abscissas to obtain the transformed left and right boundary abscissas includes: the reflex boundary abscissas L1 and R1 are transformed to obtain the transformed reflex boundary abscissas L2 and R2. The transformation rescales the super-resolution coordinates back to the original image, offsets them by the pupil-rectangle endpoint F2, and subtracts the datum point F1 so that the camera's own motion is removed: L2 = L1/4 + F2 - F1; R2 = R1/4 + F2 - F1, where F1 and F2 denote the abscissas of the datum point and of the pupil-rectangle endpoint.
Further, the transformed left and right boundary abscissas are obtained for all frames of the optometry video: the transformed reflex boundary abscissas L2 and R2 are recorded frame by frame, giving the transformed reflex left boundary coordinate sequence S1 and the transformed reflex right boundary coordinate sequence S2.
Further, the transformed left and right boundary abscissa sequences are each linearly fitted to obtain the moving speeds of the left and right boundaries, and the mean of the two speeds is taken as the reflex speed; specifically: the transformed reflex left boundary coordinate sequence S1 is fitted piecewise with two line segments to obtain two slopes, the slope close to zero is discarded, and the remaining slope is taken as the moving speed Vrl of the left boundary; the moving speed Vrr of the right boundary is obtained in the same way; the mean of the left boundary moving speed Vrl and the right boundary moving speed Vrr is then taken as the reflex speed Vr.
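The piecewise fit and slope selection can be sketched as follows. Splitting the sequence at its halfway point is an assumed simplification of the patent's two-segment fit, which does not state how the breakpoint is chosen:

```python
import numpy as np

def boundary_speed(coords):
    """Piecewise linear fit of a per-frame boundary-abscissa sequence:
    split the sequence in half, fit a line to each half, discard the
    slope closer to zero, and return the other slope as the boundary's
    moving speed in pixels per frame."""
    t = np.arange(len(coords), dtype=float)
    mid = len(coords) // 2
    s1 = np.polyfit(t[:mid], coords[:mid], 1)[0]
    s2 = np.polyfit(t[mid:], coords[mid:], 1)[0]
    return s1 if abs(s1) > abs(s2) else s2

def reflex_speed(left_seq, right_seq):
    """Mean of the fitted left and right boundary speeds, V_r."""
    return 0.5 * (boundary_speed(left_seq) + boundary_speed(right_seq))
```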
Further, calculating the diopter from the reflex speed and the diopter fitting formula includes: the reflex speed Vr is substituted into the diopter fitting formula to calculate the diopter D; the fitting formula contains three fitting parameters.
To obtain the fitting parameters, optometry videos of a group of patients with different diopters are collected, the diopter data are labeled by an ophthalmologist, and the parameters are fitted against the reflex speeds calculated as described in this embodiment.
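The diopter fitting formula itself is not reproduced in this text, so the concrete three-parameter form below (a quadratic in the reflex speed, D = a*V^2 + b*V + c) is purely hypothetical; the sketch only illustrates how three fitting parameters would be estimated from doctor-labelled data and then applied to a new measurement:

```python
import numpy as np

def fit_diopter_model(speeds, diopters):
    """Fit three parameters from labelled (reflex speed, diopter) pairs.
    The quadratic form is a hypothetical stand-in for the patent's
    unreproduced three-parameter formula."""
    return np.polyfit(speeds, diopters, 2)  # -> [a, b, c]

def estimate_diopter(params, v_r):
    """Evaluate the fitted model at a measured reflex speed."""
    return float(np.polyval(params, v_r))
```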
Example 2
As shown in fig. 2, the imaging direction calculation system based on image processing includes: a second acquisition module configured to collect an optometry video of a patient's eye region; a reflex movement direction calculation module configured to: segment a pupil-region image from each frame of the optometry video; perform super-resolution processing on the segmented pupil-region image to obtain a super-resolution pupil-region image; apply threshold segmentation to the super-resolution pupil-region image to extract the light reflex region; apply morphological processing and edge detection to the reflex region to obtain the reflex edge contour and, from it, the contour's left and right boundary abscissas; transform the left and right boundary abscissas to obtain the transformed left and right boundary abscissas; obtain the transformed left and right boundary abscissas for all frames of the optometry video; linearly fit the transformed left and right boundary abscissa sequences to obtain the moving speeds of the left and right boundaries; take the mean of the two moving speeds as the reflex speed; and calculate the direction of reflex movement from the reflex speed; a light band movement direction calculation module configured to calculate the direction of light band movement; and a shadow movement direction calculation module configured to calculate the shadow movement direction from the reflex movement direction and the light band movement direction.
Further, calculating the direction of reflex movement from the reflex speed includes: the direction is obtained from the sign of the reflex speed Vr; if Vr is positive the reflex moves to the right, and if Vr is negative the reflex moves to the left.
Left and right here coincide with left and right in the camera's view when the optometry video is recorded. During retinoscopy the optometrist sits directly facing the patient, so the optometrist's left-hand side is the left and the right-hand side is the right; the camera's viewpoint in this system coincides with the optometrist's viewpoint.
Further, calculating the direction of light band movement includes: acquiring the optometry video and preprocessing each frame; performing fiducial detection on the preprocessed image to obtain the rectangular region, whose center point is taken as the datum point F1; selecting a region above the rectangular region, applying threshold segmentation and edge detection followed by line detection, screening the detected lines to find the left and right boundaries of the light band, and recording the light band's left boundary abscissa L3 and right boundary abscissa R3; applying a coordinate transformation to L3 and R3 to obtain the transformed left boundary abscissa L4 and right boundary abscissa R4; repeating this for every frame of the optometry video to obtain the light band left boundary coordinate sequence S3 and right boundary coordinate sequence S4; fitting the sequence S3 piecewise with two line segments, discarding the slope close to zero, and taking the remaining slope as the left boundary moving speed Vbl; obtaining the right boundary moving speed Vbr in the same way; taking the mean of Vbl and Vbr as the light band moving speed Vb; and obtaining the direction of light band movement from the sign of Vb.
If Vb is positive the light band moves to the right; if Vb is negative the light band moves to the left.
The left-right directions here are judged from the same viewing angle as the left-right directions of the reflex.
Further, the detected lines are screened to find the left and right boundaries of the light band, and the light band's left boundary abscissa L3 and right boundary abscissa R3 are recorded, specifically: a length threshold is set; lines that are close to vertical in the image are selected according to their slope, and lines shorter than the threshold are removed; among the remaining lines, the leftmost and rightmost lines are identified from the abscissas of their segment midpoints and taken as the left and right boundaries of the light band; the line whose midpoint has the smallest abscissa is taken as the left boundary, its midpoint abscissa being the left boundary abscissa L3, and the line whose midpoint has the largest abscissa is taken as the right boundary, its midpoint abscissa being the right boundary abscissa R3.
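A simplified version of the band-boundary search: instead of Hough line detection, the sketch treats each image column whose count of bright pixels reaches a minimum length as a long, near-vertical line, then keeps the leftmost and rightmost such columns as L3 and R3. The threshold and minimum length are assumptions, and a real implementation would use a proper line detector such as OpenCV's `HoughLinesP`:

```python
import numpy as np

def band_boundaries(strip, thresh=200, min_len=5):
    """Locate the light band's left/right boundary abscissas in the
    region above the eye rectangle."""
    bright = strip > thresh
    runs = bright.sum(axis=0)                # bright pixels per column
    cols = np.nonzero(runs >= min_len)[0]    # long, near-vertical columns
    return int(cols.min()), int(cols.max())  # L3, R3
```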
It will be appreciated that, since the patient wears the reference spectacle frame and the facial area around the eyes is covered by its white background board, part of the retinoscope's strip beam is projected onto the white background board around the eyes while the other part falls on the eye region; the projection of the strip beam that appears on the white background board is the light band.
Further, a coordinate transformation is applied to the light band's left boundary abscissa L3 and right boundary abscissa R3: the abscissas are expressed relative to the datum point F1 so that the camera's own motion is removed, giving L4 = L3 - F1 and R4 = R3 - F1, where F1 denotes the abscissa of the datum point.
further, the calculating the shadow movement direction according to the shadow movement direction and the light band movement direction includes: comparing the calculated movement direction of the reflected light with the movement direction of the optical tape, if the directions are consistent, the shadow movement direction is forward movement, and if the directions are inconsistent, the shadow movement direction is reverse movement.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. A diopter estimation system, comprising:
a first acquisition module configured to: collecting an optometry video of an eye region of a patient;
a reflex speed calculation module configured to: segment a pupil-region image from each frame of the optometry video; perform super-resolution processing on the segmented pupil-region image to obtain a super-resolution pupil-region image; apply threshold segmentation to the super-resolution pupil-region image to extract the light reflex region; apply morphological processing and edge detection to the reflex region to obtain the reflex edge contour and, from it, the contour's left and right boundary abscissas; transform the left and right boundary abscissas to obtain the transformed left and right boundary abscissas; obtain the transformed left and right boundary abscissas for all frames of the optometry video; linearly fit the transformed left and right boundary abscissa sequences to obtain the moving speeds of the left and right boundaries; and take the mean of the two moving speeds as the reflex speed;
a diopter calculation module configured to: calculate the diopter based on the reflected-light speed and a diopter fitting formula;
wherein said segmenting a pupil region image from each frame of the optometry video comprises: preprocessing each frame of the optometry video; performing reference detection on the preprocessed image to obtain a rectangular eye region; detecting the pupil in the rectangular region to obtain a minimum circumscribed rectangular region of the pupil; and segmenting out the minimum circumscribed rectangular region of the pupil to obtain the pupil region image.
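The per-frame boundary sequences and the linear fit used by the reflected-light velocity module can be sketched as follows (pure-Python least squares over frame index; the function names are illustrative, and the patent does not specify a particular fitting routine):

```python
def fit_speed(xs):
    """Least-squares slope of a boundary-abscissa sequence over frame
    index; the slope is the boundary's moving speed in pixels/frame."""
    n = len(xs)
    t_mean = (n - 1) / 2.0                      # mean of 0..n-1
    x_mean = sum(xs) / n
    num = sum((i - t_mean) * (x - x_mean) for i, x in enumerate(xs))
    den = sum((i - t_mean) ** 2 for i in range(n))
    return num / den

def reflex_speed(left_seq, right_seq):
    """Average of the left- and right-boundary speeds, as in claim 1."""
    return (fit_speed(left_seq) + fit_speed(right_seq)) / 2.0
```

For example, a left boundary advancing 2 px/frame and a right boundary advancing 4 px/frame give a reflected-light speed of 3 px/frame.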
2. The diopter estimation system of claim 1, wherein said detecting the pupil in the rectangular region to obtain a minimum circumscribed rectangular region of the pupil comprises: identifying the pupil by circle detection or by a trained convolutional neural network to obtain the minimum circumscribed rectangular region of the pupil, and recording the coordinate F2 of the corner of the minimum circumscribed rectangular region closest to the coordinate origin.
3. The diopter estimation system of claim 1, wherein said performing super-resolution processing on the segmented pupil region image to obtain a super-resolution pupil region image comprises: inputting the segmented pupil region image into a super-resolution network for 4× super-resolution processing to obtain the super-resolution pupil region image.
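The claims do not specify the super-resolution network's architecture. As a stand-in that shows only the intended 4× size change (it performs no learned reconstruction), a plain nearest-neighbour upscale of a 2-D grayscale image could look like:

```python
def upscale4x(img):
    """4x nearest-neighbour upscale of a grayscale image given as a list
    of rows; a placeholder for the patent's unspecified SR network."""
    out = []
    for row in img:
        wide = [px for px in row for _ in range(4)]   # widen each pixel 4x
        out.extend([list(wide) for _ in range(4)])    # repeat each row 4x
    return out
```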
4. The diopter estimation system of claim 1, wherein said performing morphological processing and edge detection on the reflection region to obtain a reflection edge contour, and further obtaining the left and right boundary abscissas of the reflection edge contour, comprises:
performing morphological processing on the reflection region to eliminate discontinuities and protrusions; performing edge detection to obtain the reflection edge contour; taking the point with the smallest abscissa in the contour as the left boundary of the reflection edge contour and the point with the largest abscissa as the right boundary; and recording the left boundary abscissa L1 and the right boundary abscissa R1 of the reflection edge contour.
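Once the edge contour is available, extracting the left and right boundary abscissas reduces to a min/max over the contour's x coordinates. A minimal sketch, with the contour represented as (x, y) tuples:

```python
def contour_lr_abscissas(contour):
    """Return (L1, R1): the smallest and largest abscissa among the
    contour points, i.e. the left/right boundaries of the reflex outline."""
    xs = [p[0] for p in contour]
    return min(xs), max(xs)
```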
5. The diopter estimation system of claim 1, wherein said transforming the left and right boundary abscissas to obtain transformed left and right boundary abscissas comprises:
transforming the reflection boundary abscissas L1 and R1 according to the transformation formula to obtain the transformed reflection boundary abscissas L2 and R2.
6. The diopter estimation system of claim 1, wherein said calculating the diopter based on the reflected-light speed and a diopter fitting formula comprises:
inputting the reflected-light speed V_r into the diopter fitting formula to calculate the diopter, the fitting formula containing three fitting parameters.
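The fitting formula and its three parameters appear only as images in the original filing and are not reproduced here. Purely as an illustration, assuming a quadratic form D = a·V_r² + b·V_r + c (this form and the parameter names a, b, c are assumptions, not the patent's formula):

```python
def diopter_from_speed(v_r, a, b, c):
    """Hypothetical three-parameter fitting formula mapping reflected-light
    speed v_r to diopter; the patent's actual formula is not reproduced."""
    return a * v_r ** 2 + b * v_r + c
```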
7. An imaging direction calculation system based on image processing, comprising:
a second acquisition module configured to: collect an optometry video of an eye region of a patient;
a reflected-light movement direction calculation module configured to: segment a pupil region image from each frame of the optometry video; perform super-resolution processing on the segmented pupil region image to obtain a super-resolution pupil region image; perform threshold segmentation on the super-resolution pupil region image to segment out a reflection region; perform morphological processing and edge detection on the reflection region to obtain a reflection edge contour, and further obtain the left and right boundary abscissas of the reflection edge contour; transform the left and right boundary abscissas to obtain transformed left and right boundary abscissas; obtain the transformed left and right boundary abscissas for all frames of the optometry video; perform linear fitting on the transformed left and right boundary abscissa sequences respectively to obtain the moving speeds of the left and right boundaries; take the average of the left- and right-boundary moving speeds as the reflected-light speed; and calculate the reflected-light movement direction according to the reflected-light speed;
a light-band movement direction calculation module configured to: calculate the light-band movement direction;
a shadow direction calculation module configured to: calculate the shadow movement direction according to the reflected-light movement direction and the light-band movement direction;
wherein said calculating the light-band movement direction comprises:
acquiring the optometry video and preprocessing each frame of the optometry video;
performing reference detection on the preprocessed image to obtain a rectangular region, and taking the center point of the rectangular region as the reference point F1;
performing threshold segmentation and edge detection on the area above the rectangular region, then performing straight-line detection, screening the detected lines to find the left and right boundaries of the light band, and recording the abscissa L3 of the left boundary and the abscissa R3 of the right boundary of the light band;
performing coordinate transformation on the left boundary abscissa L3 and the right boundary abscissa R3 to obtain the transformed light-band left boundary abscissa L4 and right boundary abscissa R4;
obtaining the transformed abscissas L4 and R4 for each frame of the optometry video to form the light-band left boundary coordinate sequence S3 and right boundary coordinate sequence S4;
performing piecewise linear fitting on the left boundary coordinate sequence S3 to obtain the slopes of two line segments, and removing the slope close to zero to obtain the left-boundary moving speed V_bl; similarly obtaining the right-boundary moving speed V_br; and taking the average of the left-boundary moving speed V_bl and the right-boundary moving speed V_br as the light-band moving speed V_b;
and obtaining the light-band movement direction from the sign of the light-band moving speed.
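The two-segment piecewise fit with near-zero-slope rejection can be sketched for one boundary sequence as follows (the threshold eps and the even split into two halves are illustrative choices; the claim applies this to both S3 and S4 and averages the results):

```python
def _slope(xs):
    """Least-squares slope of a sequence over frame index."""
    n = len(xs)
    t_mean = (n - 1) / 2.0
    x_mean = sum(xs) / n
    num = sum((i - t_mean) * (x - x_mean) for i, x in enumerate(xs))
    den = sum((i - t_mean) ** 2 for i in range(n))
    return num / den

def band_speed(seq, eps=1e-3):
    """Two-segment piecewise fit of a light-band boundary sequence: the
    slope close to zero (stationary phase of the sweep) is discarded; the
    remaining slope is the band's speed, and its sign gives its direction."""
    half = len(seq) // 2
    slopes = [_slope(seq[:half]), _slope(seq[half:])]
    moving = [s for s in slopes if abs(s) > eps]
    return moving[0] if moving else 0.0
```

For a band that holds still for four frames and then sweeps left at 2 px/frame, the zero slope of the first segment is discarded and the speed is -2.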
8. The image-processing-based imaging direction calculation system of claim 7, wherein said calculating the shadow movement direction according to the reflected-light movement direction and the light-band movement direction comprises: comparing the calculated reflected-light movement direction with the light-band movement direction; if the two directions are consistent, the shadow movement direction is forward ("with" motion), and if they are inconsistent, the shadow movement direction is reverse ("against" motion).
CN202410004274.1A 2024-01-03 2024-01-03 Imaging direction computing system and diopter estimating system based on image processing Active CN117495864B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410004274.1A CN117495864B (en) 2024-01-03 2024-01-03 Imaging direction computing system and diopter estimating system based on image processing


Publications (2)

Publication Number Publication Date
CN117495864A CN117495864A (en) 2024-02-02
CN117495864B (en) 2024-04-09

Family

ID=89667633


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101964111A (en) * 2010-09-27 2011-02-02 山东大学 Method for improving sight tracking accuracy based on super-resolution
CN108703738A (en) * 2017-04-28 2018-10-26 分界线(天津)网络技术有限公司 A kind of measuring system and method for hyperopic refractive degree
CN109684915A (en) * 2018-11-12 2019-04-26 温州医科大学 Pupil tracking image processing method
CN111067479A (en) * 2019-12-31 2020-04-28 西安电子科技大学 Fundus imaging device and fundus imaging method
CN115499588A (en) * 2022-09-15 2022-12-20 江苏至真健康科技有限公司 Exposure time control method and system of portable mydriasis-free fundus camera

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US7445335B2 (en) * 2006-01-20 2008-11-04 Clarity Medical Systems, Inc. Sequential wavefront sensor


Non-Patent Citations (3)

Title
TFOS Lifestyle: Impact of contact lenses on the ocular surface; Lyndon Jones; The Ocular Surface; 2023-07-31; full text *
A successful case of Internet-of-Vehicles application: Wuhan urban free flow; Ma Jialin; China Master's Theses Full-text Database; 2016-02-15; full text *
Research on image acquisition equipment and processing algorithms for astigmatism in retinoscopy; Wu Shuaixian; China Master's Theses Full-text Database; 2023-01-15; pp. 32-63 of the main text *


Similar Documents

Publication Publication Date Title
US8048065B2 (en) Method and apparatus for eye position registering and tracking
EP3355104B1 (en) Method and device and computer program for determining a representation of a spectacle glass rim
US5617155A (en) Method for determining measurement parameters for a spectacle wearer
US7434931B2 (en) Custom eyeglass manufacturing method
US12056274B2 (en) Eye tracking device and a method thereof
JP3453911B2 (en) Gaze recognition device
US20120257162A1 (en) Measurement method and equipment for the customization and mounting of corrective ophtalmic lenses
WO2016026570A1 (en) Determining user data based on image data of a selected eyeglass frame
CN111084603A (en) Pupil distance measuring method and system based on depth camera
JP2006095008A (en) Visual axis detecting method
CN117495864B (en) Imaging direction computing system and diopter estimating system based on image processing
JP3711053B2 (en) Line-of-sight measurement device and method, line-of-sight measurement program, and recording medium recording the program
CN114638879A (en) Medical pupil size measuring system
JPH06319701A (en) Glance recognizing device
CN114727755A (en) Methods for assessing stability of tear film
CN114502058A (en) Device and method for detecting tear film disruption
CN117710280B (en) Pupil automatic positioning method and device
Rezazadeh et al. Semi-automatic measurement of rigid gas-permeable contact lens movement in keratoconus patients using blinking images
CN116958885B (en) Correcting glasses wearing comfort evaluation method and system based on reading vision
CN113469936B (en) Human eye cataract detection system and three-dimensional model image reconstruction method of crystalline lens
Stahl et al. DirectFlow: A Robust Method for Ocular Torsion Measurement
JPH07318873A (en) Prescription method and prescription system for multifocal contact lens
IL310806A (en) Eye tracking device and a method thereof
CN117281504A (en) Automatic eye feature measurement method based on facial image acquisition
Grisel et al. An image analysis based full-embedded system for optical metrology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant