CN115590462A - Camera-based vision detection method and device - Google Patents

Camera-based vision detection method and device

Info

Publication number: CN115590462A
Authority: CN (China)
Legal status: Pending (an assumption by Google, not a legal conclusion)
Application number: CN202211523520.1A
Other languages: Chinese (zh)
Inventors: 谢伟浩, 郑小宾, 吴梓华, 刘玉萍
Current and original assignee: Guangzhou Shijing Medical Software Co., Ltd. (listed assignee may be inaccurate)
Application filed by Guangzhou Shijing Medical Software Co., Ltd.
Priority to CN202211523520.1A

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/0016: Operational features thereof
    • A61B 3/0025: Operational features thereof characterised by electronic signal processing, e.g. eye models

Abstract

The invention provides a camera-based vision detection method and device. The method comprises: acquiring image information of a subject with a camera, and obtaining the subject's gaze point on a screen through a pre-trained gaze estimation model based on the image information, where the screen displays optotypes for vision testing; and determining the minimum stripe width of the optotype the subject can see on the screen, and calculating the subject's corresponding vision level from that minimum stripe width. Compared with the prior-art preferential looking (PL) method, this greatly reduces the involvement of professionals and the reliance on the experience of medical staff or examiners; it is simple to implement and operate, and it effectively shortens the time the test takes.

Description

Camera-based vision detection method and device
Technical Field
The invention relates to the field of vision screening, and in particular to a camera-based vision detection method and device.
Background
At present, certain patients with visual disorders and key populations, such as infants aged 0 to 6 years, need periodic vision testing, since this age range is a critical stage of visual development. Regular vision screening helps detect potential problems early so that preventive treatment can be given. However, children under about 2 years old generally have limited ability to express themselves and are not suited to conventional methods such as the tumbling-E chart. The prior art therefore mainly uses the preferential looking (PL) method clinically, but PL depends on guidance and observation by professionals, involves a complicated procedure, takes a long time, and has high labor cost.
Disclosure of Invention
The invention provides a camera-based vision detection method and device that enable rapid testing of a subject and effectively shorten the time consumed by the vision testing process.
To solve the above technical problem, an embodiment of the present invention provides a camera-based vision detection method, comprising:
acquiring image information of a subject with a camera, and obtaining the subject's gaze point on a screen through a pre-trained gaze estimation model based on the image information; wherein the screen displays optotypes for vision testing;
and determining the minimum stripe width of the optotype the subject can see on the screen, and calculating the subject's corresponding vision level from the minimum stripe width.
Preferably, the minimum stripe width of the optotype the subject can see on the screen is determined by:
sending a preset control instruction to the screen so that the screen displays an optotype of a preset stripe width within a preset range centered on the gaze point, the optotype consisting of equally spaced black and white stripes;
and judging whether the subject sees the optotype from the subject's gaze point, the gaze trajectory, and the optotype's position, thereby determining the minimum stripe width of the optotype the subject can see.
Preferably, the image information includes facial information of the subject;
and the minimum stripe width of the optotype the subject can see on the screen is determined by:
extracting corneal information of the subject from the facial information by a deep learning technique, the corneal information including the movement of the cornea;
identifying the subject's nystagmus condition from the movement of the cornea;
and determining the minimum stripe width of an optotype visible to the subject based on the optotypes that trigger nystagmus in the subject.
Preferably, calculating the vision level corresponding to the subject from the minimum stripe width comprises:
calculating the subject's vision value according to the following formula:
(formula shown only as an image in the original publication)
where d is the distance between the subject and the optotype during testing, and h is the stripe width of the optotype.
Preferably, the image information includes head information and facial information of the subject;
and before the subject's gaze point on the screen is acquired, the method further comprises: obtaining the distance between the subject and the optotype through pre-trained head pose estimation and monocular ranging models, based on the subject's head and facial information.
Correspondingly, an embodiment of the invention further provides a camera-based vision detection device comprising a gaze acquisition module and a detection module; wherein:
the gaze acquisition module is configured to acquire image information of the subject with a camera and obtain the subject's gaze point on a screen through a pre-trained gaze estimation model based on the image information; wherein the screen displays optotypes for vision testing;
and the detection module is configured to determine the minimum stripe width of the optotype the subject can see on the screen and calculate the subject's corresponding vision level from the minimum stripe width.
Preferably, the detection module determines the minimum stripe width of the optotype the subject can see on the screen specifically as follows:
the detection module comprises a first detection unit configured to send a preset control instruction to the screen so that the screen displays an optotype of a preset stripe width within a preset range centered on the gaze point, the optotype consisting of equally spaced black and white stripes;
and to judge whether the subject sees the optotype from the subject's gaze point, gaze trajectory, and the optotype's position, thereby determining the minimum stripe width of the optotype the subject can see.
Preferably, the image information includes facial information of the subject;
and the detection module determines the minimum stripe width of the optotype the subject can see on the screen specifically as follows:
the detection module comprises a second detection unit configured to extract corneal information of the subject from the facial information by a deep learning technique, the corneal information including the movement of the cornea;
to identify the subject's nystagmus condition from the movement of the cornea;
and to determine the minimum stripe width of an optotype visible to the subject based on the optotypes that trigger nystagmus in the subject.
Preferably, the detection module calculates the subject's corresponding vision level from the minimum stripe width specifically as follows:
the detection module comprises a calculation unit configured to calculate the subject's vision value according to the following formula:
(formula shown only as an image in the original publication)
where d is the distance between the subject and the optotype during testing, and h is the stripe width of the optotype.
Preferably, the image information includes head information and facial information of the subject;
and the vision detection device further comprises a ranging module configured to obtain, before the subject's gaze point on the screen is acquired, the distance between the subject and the optotype through pre-trained head pose estimation and monocular ranging models, based on the subject's head and facial information.
Compared with the prior art, the embodiments of the invention have the following beneficial effects:
The embodiments of the invention provide a camera-based vision detection method and device. The method comprises: acquiring image information of a subject with a camera, and obtaining the subject's gaze point on a screen through a pre-trained gaze estimation model based on the image information, where the screen displays optotypes for vision testing; and determining the minimum stripe width of the optotype the subject can see on the screen, and calculating the subject's corresponding vision level from that minimum stripe width. Compared with the prior-art preferential looking method, this greatly reduces the involvement of professionals and the reliance on the experience of medical staff or examiners; it is simple to implement and operate, and it effectively shortens the time required.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of the camera-based vision detection method.
FIG. 2 is a schematic structural diagram of an embodiment of the camera-based vision detection device.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
Embodiment 1:
Referring to FIG. 1, an embodiment of the present invention provides a camera-based vision detection method comprising steps S1 and S2, wherein:
Step S1: acquire image information of the subject with a camera, and obtain the subject's gaze point on the screen through a pre-trained gaze estimation model based on the image information; wherein the screen displays optotypes for vision testing.
In this embodiment, the optotypes can be set to different spatial frequencies/stripe widths, can move at different speeds and in different directions, and can be formed of stripes of different frequencies. The camera is placed in front of the subject; an ordinary camera, such as a mobile-phone or laptop camera, can be used to capture the subject's image information, and the screen and camera can lie in the same plane. The image information includes the subject's head and facial information. Compared with prior art that captures images with special equipment such as infrared cameras, the setup is simpler, so vision testing can be completed at lower labor and economic cost; the method also suits a wider range of scenarios, such as user self-testing.
Before the subject's gaze point on the screen is acquired, the method further includes: obtaining the distance between the subject and the optotype through pre-trained head pose estimation and monocular ranging models, based on the subject's head and facial information. Preferably, the head pose estimation model can adopt a deep-learning head-pose method such as an FSA network. Specifically, the face position in the image can be detected with the face detection model provided by the dlib library; the face box is then expanded by a certain ratio (for example, width and height each expanded by a factor of 2); the expanded image region is fed into the head pose estimation model to obtain the head's yaw, pitch, and roll angles; further, based on the predicted yaw, pitch, and roll, the user can be guided by voice, images, etc. to adjust the head pose so that the face stays directly in front of, and as parallel as possible to, the camera.
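The detect-then-expand preprocessing above can be sketched as follows. This is an illustrative sketch only: the function and variable names are not from the patent, and the 2x expansion is the example ratio given in the text.

```python
def expand_box(x, y, w, h, img_w, img_h, scale=2.0):
    """Expand a face bounding box about its center by `scale` in each
    dimension, clamped to the image borders, before feeding the crop to
    the head pose estimation model."""
    cx, cy = x + w / 2.0, y + h / 2.0
    new_w, new_h = w * scale, h * scale
    x0 = max(0, int(cx - new_w / 2.0))
    y0 = max(0, int(cy - new_h / 2.0))
    x1 = min(img_w, int(cx + new_w / 2.0))
    y1 = min(img_h, int(cy + new_h / 2.0))
    return x0, y0, x1, y1

# A 100x100 face box at (150, 150) in a 640x480 frame, doubled in each dimension:
print(expand_box(150, 150, 100, 100, 640, 480))
```

In a full pipeline the input box would come from the dlib face detector, and the clamping ensures the expanded crop never leaves the frame.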
On the other hand, monocular ranging is implemented mainly from the camera's imaging principle. With the physical iris diameter (typically 11.7 ± 0.5 mm), its size in pixels, and the camera's focal length known, the distance from the eye to the camera follows from similar triangles. The physical iris diameter is a prior; this embodiment presets it to 11.7 mm. The camera focal length can be read from the relevant API or from the EXIF data of the captured image. The pixel size of the iris diameter is obtained by segmenting the iris with a deep-learning segmentation model such as U-Net, extracting the iris contour, and then measuring the iris diameter in pixels on the image from that contour.
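The similar-triangles relation described above reduces to a one-line computation. The sketch below assumes the focal length is already expressed in pixels and that the iris diameter in pixels comes from the segmentation step; the 11.7 mm prior is the value stated in the text, while the function name is illustrative.

```python
IRIS_DIAMETER_MM = 11.7  # physical iris diameter prior from the text

def eye_camera_distance_mm(focal_length_px, iris_diameter_px,
                           iris_mm=IRIS_DIAMETER_MM):
    """Similar-triangles monocular ranging: an object of physical size S
    at distance d projects to S * f / d pixels, so d = f * S / pixels."""
    return focal_length_px * iris_mm / iris_diameter_px

# With a 1000 px focal length, an iris imaged at 23.4 px is 500 mm from the camera:
print(eye_camera_distance_mm(1000.0, 23.4))
```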
Further, based on the image information, the subject's gaze point on the screen can be obtained through a pre-trained gaze estimation model. The model can adopt an AFF network to learn the mapping from the face to the on-screen fixation point. Specifically, the face and eye positions are detected with the dlib library; the face information, eye information, head pose information from head pose estimation, and the positions of the face and eyes within the frame are fed into the gaze estimation model to obtain the physical coordinates of the fixation point in the camera coordinate system; these are then converted to pixel coordinates in the screen coordinate system using the relation between the camera and screen coordinate systems. In this embodiment, because the camera and screen lie in the same plane, that relation can be obtained by calibration or from the relative position of the camera and the screen.
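Assuming a coplanar camera and screen as described above, the coordinate conversion might look like the following sketch. The camera offset and the pixels-per-millimetre scale are hypothetical calibration parameters, not values from the patent.

```python
def camera_to_screen_px(x_mm, y_mm, cam_offset_mm, px_per_mm):
    """Convert a gaze point given in millimetres in the camera coordinate
    system into screen pixel coordinates, assuming the camera and screen
    are coplanar and the camera sits at `cam_offset_mm` relative to the
    screen's top-left corner. Screen y grows downward while the camera's
    y axis points up, hence the sign flip on y."""
    cam_x_mm, cam_y_mm = cam_offset_mm
    sx = (x_mm + cam_x_mm) * px_per_mm
    sy = (cam_y_mm - y_mm) * px_per_mm
    return sx, sy

# Camera mounted 170 mm right of the screen's top-left corner, screen at 5 px/mm:
print(camera_to_screen_px(-20.0, -30.0, (170.0, 0.0), 5.0))
```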
If the subject is an infant, then after training, the gaze estimation model can be fine-tuned on infant data to improve the accuracy of its gaze estimates for infants.
Further, and not only for infants: if the subject's gaze is judged not to be on the screen (non-infant subjects may also lose attention), the subject's gaze can be drawn back to the screen through attention cues including, but not limited to, audio prompts, images, or videos.
Step S2: determine the minimum stripe width of the optotype the subject can see on the screen, and calculate the subject's corresponding vision level from that minimum stripe width.
This embodiment mainly covers two preferred implementations:
as a preferred embodiment, an OKN-based method may be employed. Traditional vision testing relies primarily on subjective statements of a patient's ability to see objects. While vision measurement can be challenging for patients with significant visual impairment and functional vision loss. Visual evoked potentials, oculomotor nystagmus (OKN) tests, and the like have been attempted for objective assessment of visual function. OKN is a series of involuntary eye movements caused by visual stimuli in the visual field. It involves the macula, the lateral geniculate body, the occipital lobe, the lobules of the cerebellum, the parabridgework and the eye motor neurons. OKN represents a series of reflective smooth tracking movements followed by a fast return movement, all caused by an object moving in the visual field (consisting of slow follow-up phases (in the direction of the stimulating movement) interspersed with corrective eye jump movements in the opposite direction).
However, the OKN test of the prior art is typically performed at a relatively close working distance, and the estimated near vision function cannot be converted into distance vision function. Although distance vision and near vision have some correlation, they are not interchangeable.
In this embodiment, the OKN-based test determines the minimum stripe width of the optotype the subject can see on the screen as follows:
extracting corneal information of the subject from the facial information by a deep learning technique, the corneal information including the movement of the cornea;
identifying the subject's nystagmus condition from the movement of the cornea;
and determining the minimum stripe width of an optotype visible to the subject based on the optotypes that trigger nystagmus in the subject.
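As a rough illustration of how nystagmus might be recognized from a trace of eye positions, the heuristic below looks for the sawtooth signature of OKN: a slow drift in one direction followed by a faster corrective return. It is a simplified stand-in, not the patent's deep-learning pipeline, and its thresholds are arbitrary.

```python
def detect_okn(positions, fast_ratio=3.0, min_cycles=2):
    """Crude OKN heuristic on horizontal eye positions sampled at a fixed
    frame rate: count cycles where a fast movement opposes the preceding
    slow drift and exceeds `fast_ratio` times its mean speed."""
    velocities = [b - a for a, b in zip(positions, positions[1:])]
    cycles = 0
    slow_sum, slow_n = 0.0, 0
    for v in velocities:
        if slow_n and v * slow_sum < 0 and abs(v) > fast_ratio * abs(slow_sum / slow_n):
            cycles += 1          # fast corrective saccade against the drift
            slow_sum, slow_n = 0.0, 0
        else:
            slow_sum += v        # accumulate the slow-phase drift
            slow_n += 1
    return cycles >= min_cycles

# Sawtooth trace: slow rightward drift, fast leftward return, repeated.
trace = [0, 1, 2, 3, 4, -4, -3, -2, -1, 0, 1, 2, 3, 4, -4]
print(detect_okn(trace))
```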
As another preferred implementation besides the OKN test, determining the minimum stripe width of the optotype the subject can see on the screen specifically comprises:
sending a preset control instruction to the screen so that the screen displays an optotype of a preset stripe width within a preset range centered on the gaze point, the optotype consisting of equally spaced black and white stripes;
and judging whether the subject sees the optotype from the subject's gaze point, gaze trajectory, and the optotype's position, thereby determining the minimum stripe width of the optotype the subject can see. This implementation differs in that the judgment is made from the subject's gaze point. As an example of this embodiment, optotypes of different spatial frequencies can be displayed at random positions on a circle of radius r around the subject's gaze point. The optotypes consist of equally spaced black and white stripes at 16 spatial frequencies (in cycles/cm: 0.23, 0.32, 0.43, 0.64, 0.86, 1.3, 1.6, 2.4, 3.2, 4.8, 6.5, 9.8, 13.0, 19.0, 26.0, and 38.2), each spatial frequency corresponding to one stripe width. When the subject's gaze is confirmed to be on the screen at the initial moment, a uniform gray background is displayed; optotypes of different spatial frequencies/stripe widths are then displayed at random around the gaze point, and the gaze estimation model judges whether the subject sees each one. If an optotype has been displayed for a certain time without being fixated, the subject is deemed unable to see it. During this process the spacing of the black and white stripes can be adjusted and the position of the optotype changed at random.
As a further preferred example, the optotype at each stripe width may appear 3 times, and "seen" is decided from the proportion of presentations the subject sees (e.g., the ratio of the number of times the subject sees the same optotype to its total number of appearances), thereby determining the minimum stripe width of the optotype the subject can see.
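The seen-proportion rule above can be sketched as follows. The 16 spatial frequencies are those listed in the text, but the 0.5 acceptance threshold and the half-cycle conversion from spatial frequency to stripe width are assumptions for illustration.

```python
# Spatial frequencies from the text (cycles/cm), coarse to fine;
# each maps to one stripe width.
SPATIAL_FREQS = [0.23, 0.32, 0.43, 0.64, 0.86, 1.3, 1.6, 2.4,
                 3.2, 4.8, 6.5, 9.8, 13.0, 19.0, 26.0, 38.2]

def min_visible_stripe_width_cm(seen_counts, presentations=3, threshold=0.5):
    """Given how many of `presentations` showings of each optotype the
    subject fixated (one count per spatial frequency, coarse to fine),
    return the smallest stripe width judged 'seen'. One stripe is taken
    as half a cycle, so width = 1 / (2 * frequency)."""
    best = None
    for freq, seen in zip(SPATIAL_FREQS, seen_counts):
        if seen / presentations >= threshold:
            best = 1.0 / (2.0 * freq)  # finer optotype seen: update minimum
    return best

# Subject reliably sees the first five optotypes, then responses fall off:
counts = [3, 3, 3, 3, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
print(min_visible_stripe_width_cm(counts))
```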
Calculating the vision level corresponding to the subject from the minimum stripe width optionally comprises:
calculating the subject's vision value according to the following formula:
(formula shown only as an image in the original publication)
where d is the distance between the subject and the optotype during testing, and h is the stripe width of the optotype. Note that when the subject-to-optotype distance and the optotype's other parameters are held constant, each stripe width can be understood as corresponding to one vision value; that is, if those parameters are guaranteed to stay fixed and the stripe widths take preset values, no calculation is needed: the subject's vision level can be read directly from the mapping between stripe widths and vision values, which is convenient.
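Since the patent's formula is reproduced only as an image, the sketch below uses a standard grating-acuity relation consistent with the stated variables (d: viewing distance, h: stripe width): decimal acuity as the reciprocal of the visual angle, in minutes of arc, subtended by one stripe. This reconstruction is an assumption, not the patent's exact formula.

```python
import math

def decimal_acuity(d_cm, h_cm):
    """Decimal visual acuity as 1 / theta, where theta is the visual
    angle (minutes of arc) subtended by one stripe of width h at
    distance d. A standard grating-acuity relation, offered here as a
    plausible reconstruction of the formula shown as an image."""
    theta_arcmin = math.degrees(math.atan2(h_cm, d_cm)) * 60.0
    return 1.0 / theta_arcmin

# A ~0.0145 cm stripe at 50 cm subtends about 1 arcmin, i.e. acuity about 1.0:
print(round(decimal_acuity(50.0, 0.01454), 2))
```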
Correspondingly, referring to FIG. 2, an embodiment of the present invention further provides a camera-based vision detection device comprising a gaze acquisition module 101 and a detection module 102; wherein:
the gaze acquisition module 101 is configured to acquire image information of a subject with a camera and obtain the subject's gaze point on a screen through a pre-trained gaze estimation model based on the image information; wherein the screen displays optotypes for vision testing;
and the detection module 102 is configured to determine the minimum stripe width of the optotype the subject can see on the screen and calculate the subject's corresponding vision level from the minimum stripe width.
As a preferred embodiment, the detection module 102 determines the minimum stripe width of the optotype the subject can see on the screen specifically as follows:
the detection module 102 comprises a first detection unit configured to send a preset control instruction to the screen so that the screen displays an optotype of a preset stripe width within a preset range centered on the gaze point, the optotype consisting of equally spaced black and white stripes;
and to judge whether the subject sees the optotype from the subject's gaze point, gaze trajectory, and the optotype's position, thereby determining the minimum stripe width of the optotype the subject can see.
As a preferred embodiment, the image information includes facial information of the subject;
and the detection module 102 determines the minimum stripe width of the optotype the subject can see on the screen specifically as follows:
the detection module 102 comprises a second detection unit configured to extract corneal information of the subject from the facial information by a deep learning technique, the corneal information including the movement of the cornea;
to identify the subject's nystagmus condition from the movement of the cornea;
and to determine the minimum stripe width of an optotype visible to the subject based on the optotypes that trigger nystagmus in the subject.
As a preferred embodiment, the detection module 102 calculates the subject's corresponding vision level from the minimum stripe width specifically as follows:
the detection module 102 comprises a calculation unit configured to calculate the subject's vision value according to the following formula:
(formula shown only as an image in the original publication)
where d is the distance between the subject and the optotype during testing, and h is the stripe width of the optotype's grating bars, obtained from the minimum stripe width.
As a preferred embodiment, the image information includes head information and facial information of the subject;
and the vision detection device further comprises a ranging module configured to obtain, before the subject's gaze point on the screen is acquired, the distance between the subject and the optotype through pre-trained head pose estimation and monocular ranging models, based on the subject's head and facial information.
Compared with the prior art, the embodiments of the invention have the following beneficial effects:
The embodiments of the invention provide a camera-based vision detection method and device. The method comprises: acquiring image information of a subject with a camera, and obtaining the subject's gaze point on a screen through a pre-trained gaze estimation model based on the image information, where the screen displays optotypes for vision testing; and determining the minimum stripe width of the optotype the subject can see on the screen, and calculating the subject's corresponding vision level from that minimum stripe width. Compared with the prior-art preferential looking method, this greatly reduces the involvement of professionals and the reliance on the experience of medical staff or examiners; it is simple to implement and operate, and it effectively shortens the time required.
The above embodiments further describe the objects, technical solutions, and advantages of the invention in detail. It should be understood that they are merely examples of the invention and do not limit its scope of protection; any modifications, equivalents, improvements, and the like made within the spirit and principle of the invention fall within that scope.

Claims (10)

1. A camera-based vision detection method, characterized by comprising:
acquiring image information of a subject with a camera, and obtaining the subject's gaze point on a screen through a pre-trained gaze estimation model based on the image information; wherein the screen displays optotypes for vision testing;
and determining the minimum stripe width of the optotype the subject can see on the screen, and calculating the subject's corresponding vision level from the minimum stripe width.
2. The camera-based vision detection method of claim 1, wherein determining the minimum stripe width of the optotype the subject can see on the screen specifically comprises:
sending a preset control instruction to the screen so that the screen displays an optotype of a preset stripe width within a preset range centered on the gaze point, the optotype consisting of equally spaced black and white stripes;
and judging whether the subject sees the optotype from the subject's gaze point, gaze trajectory, and the optotype's position, thereby determining the minimum stripe width of the optotype the subject can see.
3. The camera-based vision detection method of claim 1, wherein the image information includes facial information of the subject;
and determining the minimum stripe width of the optotype the subject can see on the screen specifically comprises:
extracting corneal information of the subject from the facial information by a deep learning technique, the corneal information including the movement of the cornea;
identifying the subject's nystagmus condition from the movement of the cornea;
and determining the minimum stripe width of an optotype visible to the subject based on the optotypes that trigger nystagmus in the subject.
4. The camera-based vision detection method of claim 1, wherein calculating the subject's corresponding vision level from the minimum stripe width specifically comprises:
calculating the subject's vision value according to the following formula:
(formula shown only as an image in the original publication)
where d is the distance between the subject and the optotype during testing, and h is the stripe width of the optotype.
5. The camera-based vision detection method of claim 4, wherein the image information includes head information and facial information of the subject;
and before the subject's gaze point on the screen is acquired, the method further comprises: obtaining the distance between the subject and the optotype through pre-trained head pose estimation and monocular ranging models, based on the subject's head and facial information.
6. A camera-based vision detection device, characterized by comprising a gaze acquisition module and a detection module; wherein:
the gaze acquisition module is configured to acquire image information of the subject with a camera and obtain the subject's gaze point on a screen through a pre-trained gaze estimation model based on the image information; wherein the screen displays optotypes for vision testing;
and the detection module is configured to determine the minimum stripe width of the optotype the subject can see on the screen and calculate the subject's corresponding vision level from the minimum stripe width.
7. The camera-based vision detection apparatus of claim 6, wherein the detection module determines the minimum stripe width of the visual target that the subject can see on the screen, specifically:
the detection module comprises a first detection unit configured to send a preset control instruction to the screen so that the screen displays a visual target of preset stripe width within a preset range centred on the sight line drop point; the visual target consists of equally spaced black and white stripes;
and to judge whether the subject sees the visual target according to the subject's sight line drop point, the movement track of the sight line, and the position of the visual target, thereby determining the minimum stripe width of the visual target that the subject can see.
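Claim 7 judges whether the subject sees the target from the gaze drop point, its trajectory, and the target position, but gives no decision rule. One plausible rule is a dwell check: the target counts as "seen" if the gaze stays near it for several consecutive samples. The function name, radius, and dwell length below are all hypothetical, not taken from the patent.

```python
def sees_target(gaze_trace, target_xy, radius_px=80.0, min_run=8):
    """Hypothetical dwell-based check: return True if the gaze point
    stays within `radius_px` of the target centre for at least
    `min_run` consecutive samples of `gaze_trace` ([(x, y), ...])."""
    tx, ty = target_xy
    run = 0
    for gx, gy in gaze_trace:
        if (gx - tx) ** 2 + (gy - ty) ** 2 <= radius_px ** 2:
            run += 1
            if run >= min_run:
                return True
        else:
            run = 0  # a glance away resets the dwell counter
    return False
```

In a staircase-style test, the stripe width would then be reduced after each "seen" result until the rule first fails, giving the minimum visible stripe width.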
8. The camera-based vision detection apparatus of claim 6, wherein the image information comprises face information of the subject;
the detection module determines the minimum stripe width of the visual target that the subject can see on the screen, specifically:
the detection module comprises a second detection unit configured to extract cornea information of the subject from the face information through deep learning; wherein the cornea information comprises the movement of the cornea;
to identify the nystagmus condition of the subject from the movement of the cornea;
and to determine the minimum stripe width of a visual target visible to the subject according to the visual targets that cause nystagmus in the subject.
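Claim 8 relies on optokinetic nystagmus: a moving grating the subject can resolve evokes a slow pursuit drift with fast "reset" saccades in the opposite direction. A rough, purely illustrative heuristic for that sawtooth pattern is sketched below; the thresholds and the median-based slow-phase estimate are assumptions, not the patent's method.

```python
from statistics import median

def has_nystagmus(x_positions, fps=30.0, fast_thresh=100.0, min_beats=3):
    """Rough optokinetic-nystagmus heuristic (illustrative only).

    The slow-phase direction is taken as the sign of the median
    horizontal corneal velocity; fast 'reset' beats are samples moving
    against it faster than `fast_thresh` px/s. Nystagmus is reported
    when at least `min_beats` such beats occur.
    """
    vel = [(b - a) * fps for a, b in zip(x_positions, x_positions[1:])]
    if not vel:
        return False
    slow_sign = 1.0 if median(vel) >= 0 else -1.0
    beats = sum(1 for v in vel if v * slow_sign < 0 and abs(v) > fast_thresh)
    return beats >= min_beats
```

A steady gaze (no sawtooth) yields no fast beats and returns False, so only gratings that actually drive the reflex count toward the minimum visible stripe width.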
9. The camera-based vision detection apparatus of claim 6, wherein the detection module calculates the vision level corresponding to the subject according to the minimum stripe width, specifically:
the detection module comprises a calculation unit configured to calculate a vision value corresponding to the subject according to the following formula:
[formula published as an image in the original document]
wherein d is the distance between the subject and the visual target during detection, and h is the stripe width of the visual target.
10. The camera-based vision detection apparatus of claim 9, wherein the image information comprises head information and face information of the subject;
the vision detection device further comprises a ranging module configured to obtain, before the sight line drop point of the subject on the screen is acquired, the distance between the subject and the visual target through a pre-trained head pose estimation and monocular ranging model based on the head information and the face information of the subject.
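Claim 10 obtains the subject-to-target distance with a learned head-pose and monocular ranging model. The underlying geometry of monocular ranging can be illustrated with the pinhole-camera relation d = f·W / w; the function name and example numbers below are assumptions for illustration, not the patent's trained model.

```python
def pinhole_distance(real_width_m: float, pixel_width: float, focal_px: float) -> float:
    """Pinhole-camera monocular ranging: an object of true width
    `real_width_m` metres spanning `pixel_width` pixels under a focal
    length of `focal_px` pixels lies at distance d = f * W / w."""
    return focal_px * real_width_m / pixel_width

# Example: an average interpupillary distance of ~0.063 m spanning
# 63 px with a 1000 px focal length puts the face about 1 m away.
```

In practice a learned model can refine this estimate by correcting for head pose, since a rotated face forehortens the measured pixel width.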
CN202211523520.1A 2022-12-01 2022-12-01 Vision detection method and device based on camera Pending CN115590462A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211523520.1A CN115590462A (en) 2022-12-01 2022-12-01 Vision detection method and device based on camera


Publications (1)

Publication Number Publication Date
CN115590462A true CN115590462A (en) 2023-01-13

Family

ID=84852616

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211523520.1A Pending CN115590462A (en) 2022-12-01 2022-12-01 Vision detection method and device based on camera

Country Status (1)

Country Link
CN (1) CN115590462A (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106175657A * 2015-12-11 2016-12-07 北京大学第一医院 Automatic vision detection system
CN106037627A (en) * 2016-05-20 2016-10-26 上海青研科技有限公司 Full-automatic visual acuity examination method and device for infants
CN108171152A * 2017-12-26 2018-06-15 深圳大学 Deep learning human eye sight estimation method, device, system and readable storage medium
CN110895369A (en) * 2018-09-13 2020-03-20 奇酷互联网络科技(深圳)有限公司 Intelligent glasses and control method thereof
CN113614674A (en) * 2018-12-19 2021-11-05 视觉系统有限责任公司 Method for generating and displaying virtual objects by means of an optical system
CN112419399A (en) * 2019-08-23 2021-02-26 北京七鑫易维信息技术有限公司 Image ranging method, device, equipment and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116027910A (en) * 2023-03-29 2023-04-28 广州视景医疗软件有限公司 Eye bitmap generation method and system based on VR eye movement tracking technology
CN117058148A (en) * 2023-10-12 2023-11-14 超目科技(北京)有限公司 Imaging quality detection method, device and equipment for nystagmus patient
CN117058148B (en) * 2023-10-12 2024-02-02 超目科技(北京)有限公司 Imaging quality detection method, device and equipment for nystagmus patient

Similar Documents

Publication Publication Date Title
US9439592B2 (en) Eye tracking headset and system for neuropsychological testing including the detection of brain damage
US20200069179A1 (en) Head and eye tracking
US9844317B2 (en) Method and system for automatic eyesight diagnosis
US6381339B1 (en) Image system evaluation method and apparatus using eye motion tracking
CN115590462A (en) Vision detection method and device based on camera
Otero-Millan et al. Knowing what the brain is seeing in three dimensions: A novel, noninvasive, sensitive, accurate, and low-noise technique for measuring ocular torsion
Miao et al. Virtual reality-based measurement of ocular deviation in strabismus
Turuwhenua et al. A method for detecting optokinetic nystagmus based on the optic flow of the limbus
CN113288044B (en) Dynamic vision testing system and method
WO2017123086A1 (en) Method, system and computer readable medium to determine a strabismus angle between the eyes of an individual
Lin An eye behavior measuring device for VR system
Hirota et al. Automatic recording of the target location during smooth pursuit eye movement testing using video-oculography and deep learning-based object detection
CN115886721B (en) Eyeball activity evaluation method, system and storage medium
CN114569056A (en) Eyeball detection and vision simulation device and eyeball detection and vision simulation method
Chaudhary et al. Enhancing the precision of remote eye-tracking using iris velocity estimation
Jaiseeli et al. SLKOF: Subsampled Lucas-Kanade Optical Flow for Opto Kinetic Nystagmus detection
RU2531132C1 (en) Method for determining complex hand-eye reaction rate of person being tested and device for implementing it
CN115546214B Near point of convergence measurement method and device based on neural network
JPH04279143A (en) Eyeball motion inspector
Quang et al. Mobile traumatic brain injury assessment system
US20240081641A1 (en) System for strabismus assessment and a method of strabismus assessment
Morimoto Automatic Measurement of Eye Features Using Image Processing
Jansen et al. A confidence measure for real-time eye movement detection in video-oculography
CN116725536A (en) Eye movement checking equipment
Sarès et al. Analyzing head roll and eye torsion by means of offline image processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination