CN112137576A - Method and system for detecting observation and reading ability based on eye movement data - Google Patents
Method and system for detecting observation and reading ability based on eye movement data
- Publication number
- CN112137576A (application CN202011020334.7A)
- Authority
- CN
- China
- Prior art keywords
- work
- user
- observation
- kth
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/113—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining or recording eye movement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/02—Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
Abstract
The application discloses a method and a system for detecting observation and reading ability based on eye movement data. The method comprises the following steps: sequentially displaying a plurality of works on a display screen for a user to view; collecting, while the user views each work, the eye movement data corresponding to that work; determining the user's observation and reading ability according to the eye movement data; determining the preferred work display mode for the user according to that ability; and displaying the next work to the user in the user's preferred work display mode.
Description
Technical Field
The invention relates to the field of intelligent detection technology, and in particular to a method and system for detecting observation and reading ability based on eye movement data.
Background
Some children with cognitive impairments, such as children with autism, naturally acquire knowledge slowly, and their individual characteristics have a pronounced influence on the cognitive process: some children prefer looking at pictures while others prefer listening to music, and among those who look at pictures, some pay more attention to larger pictures while others pay more attention to smaller ones. If intelligent means could determine whether a child prefers large-size or small-size pictures, it would greatly help educational institutions provide personalized, well-matched teaching suited to that child.
Disclosure of Invention
The application provides a method and system for detecting observation and reading ability based on eye movement data, which intelligently analyze a user's observation and reading ability and then display works to the user in the work display mode the user is interested in.
The embodiment of the invention provides an observation and reading capability detection method based on eye movement data, which comprises the following steps:
sequentially displaying a plurality of works for a user to view on a display screen;
in the process that the user views each work, collecting eye movement data corresponding to each work viewed by the user;
determining the observation reading capability of the user according to the eye movement data;
determining a preferable display mode of the works corresponding to the user according to the observation and reading capacity;
and displaying the next work for the user according to the work preferred display mode corresponding to the user.
In one embodiment, a plurality of randomly distributed feature objects are displayed in each work; each feature object is a pattern with a conspicuous color, and the area of the pattern is greater than 1/200 and less than 1/50 of the area of the work; the colors of the feature objects in each work are different from one another; besides the feature objects, a plurality of non-feature objects are displayed in each work, each non-feature object being a pattern with an inconspicuous color;
the eye movement data include the focusing angle at each binocular focus point, the deviation angle at each binocular focus point, and whether a feature object is present at each binocular focus point;
determining the observation reading ability of the user according to the eye movement data comprises: and determining the observation reading capability of the user according to the eye movement data corresponding to each work viewed by the user.
In one embodiment, determining the user's observation and reading ability according to the eye movement data corresponding to each work viewed by the user includes steps A1-A3:
Step A1: for the kth work, perform the following steps A11-A16:
Step A11: calculate the distance from the ith binocular focus point on the kth work to the midpoint between the two eyes according to the following formula (1):
where H_ki denotes the distance from the ith focus point on the kth work to the midpoint between the two eyes (the ED distance in the figure); Y denotes the binocular distance (the BC distance in the figure); β_ki denotes the focusing angle of the two eyes at the ith focus point on the kth work (∠BEC in the figure); γ_ki denotes the deviation angle of the two eyes at the ith focus point on the kth work (∠CEF in the figure);
Step A12: calculate the binocular visual field angle of the two eyes at the ith focus point on the kth work (∠EDA in the figure) according to the following formula (2);
where η_ki denotes the binocular visual field angle of the two eyes at the ith focus point on the kth work (∠EDA in the figure);
Step A13: determine whether a feature object is present at the ith binocular focus point on the kth work;
Step A14: calculate the influence factor χ_k of the feature objects corresponding to the kth work on the visual field according to the following formula (3):
where χ_k denotes the influence factor of the feature objects corresponding to the kth work on the visual field; N_k denotes the total number of focus points on the kth work whose corresponding binocular visual field angle is equal to or greater than a preset angle; F_k denotes the number of those N_k focus points at which a feature object is present;
Step A15: calculate, according to the following formula (4), the influence factor of the viewing distance corresponding to the kth work on the visual field;
where H_kj denotes the distance from the jth binocular focus point on the kth work to the midpoint between the two eyes, each such H_kj being equal to or greater than a preset distance; m is the total number of focus points whose distance is equal to or greater than the preset distance; n is the total number of all binocular focus points on the kth work;
Step A16: use the viewing-distance influence factor to correct χ_k, obtaining the corrected influence factor χ_k′ of the feature objects corresponding to the kth work on the visual field, according to the following formulas (5) and (6):
Step A2: for the remaining works, determine the corrected influence factor of the feature objects corresponding to each work on the visual field according to steps A11-A16;
Step A3: calculate the average of the corrected influence factors of the feature objects corresponding to the plurality of works on the visual field, and take this average as the user's observation and reading ability index, which characterizes the user's observation and reading ability; the larger the index, the stronger the user's observation and reading ability.
In one embodiment, the determining, according to the viewing and reading ability, a preferred display mode of the work corresponding to the user includes:
determining a preferable display mode of the works corresponding to the user according to the observation and reading capability index of the user, wherein:
when the observation reading capability index of the user is equal to or larger than a preset index threshold value, determining that the preferable display mode of the works corresponding to the user is to display the works in a mode of being equal to or larger than a preset area;
and when the observation reading capability index of the user is smaller than a preset index threshold value, determining that the preferable display mode of the works corresponding to the user is to display the works in a mode of smaller than a preset area.
An observational reading ability detection system based on eye movement data, comprising:
the first display module is used for sequentially displaying a plurality of works for a user to view on the display screen;
the acquisition module is used for acquiring eye movement data corresponding to each work viewed by the user in the process of viewing each work by the user;
the first determining module is used for determining the observation and reading capacity of the user according to the eye movement data;
the second determination module is used for determining the preferable display mode of the works corresponding to the user according to the observation and reading capacity;
and the second display module is used for displaying the next work for the user according to the work optimal display mode corresponding to the user.
In one embodiment, a plurality of randomly distributed feature objects are displayed in each work; each feature object is a pattern with a conspicuous color, and the area of the pattern is greater than 1/200 and less than 1/50 of the area of the work; the colors of the feature objects in each work are different from one another; besides the feature objects, a plurality of non-feature objects are displayed in each work, each non-feature object being a pattern with an inconspicuous color;
the eye movement data include the focusing angle at each binocular focus point, the deviation angle at each binocular focus point, and whether a feature object is present at each binocular focus point;
determining the observation reading ability of the user according to the eye movement data comprises: and determining the observation reading capability of the user according to the eye movement data corresponding to each work viewed by the user.
In one embodiment, determining the user's observation and reading ability according to the eye movement data corresponding to each work viewed by the user includes steps A1-A3:
Step A1: for the kth work, perform the following steps A11-A16:
Step A11: calculate the distance from the ith binocular focus point on the kth work to the midpoint between the two eyes according to the following formula (1):
where H_ki denotes the distance from the ith focus point on the kth work to the midpoint between the two eyes (the ED distance in the figure); Y denotes the binocular distance (the BC distance in the figure); β_ki denotes the focusing angle of the two eyes at the ith focus point on the kth work (∠BEC in the figure); γ_ki denotes the deviation angle of the two eyes at the ith focus point on the kth work (∠CEF in the figure);
Step A12: calculate the binocular visual field angle of the two eyes at the ith focus point on the kth work (∠EDA in the figure) according to the following formula (2);
where η_ki denotes the binocular visual field angle of the two eyes at the ith focus point on the kth work (∠EDA in the figure);
Step A13: determine whether a feature object is present at the ith binocular focus point on the kth work;
Step A14: calculate the influence factor χ_k of the feature objects corresponding to the kth work on the visual field according to the following formula (3):
where χ_k denotes the influence factor of the feature objects corresponding to the kth work on the visual field; N_k denotes the total number of focus points on the kth work whose corresponding binocular visual field angle is equal to or greater than a preset angle; F_k denotes the number of those N_k focus points at which a feature object is present;
Step A15: calculate, according to the following formula (4), the influence factor of the viewing distance corresponding to the kth work on the visual field;
where H_kj denotes the distance from the jth binocular focus point on the kth work to the midpoint between the two eyes, each such H_kj being equal to or greater than a preset distance; m is the total number of focus points whose distance is equal to or greater than the preset distance; n is the total number of all binocular focus points on the kth work;
Step A16: use the viewing-distance influence factor to correct χ_k, obtaining the corrected influence factor χ_k′ of the feature objects corresponding to the kth work on the visual field, according to the following formulas (5) and (6):
Step A2: for the remaining works, determine the corrected influence factor of the feature objects corresponding to each work on the visual field according to steps A11-A16;
Step A3: calculate the average of the corrected influence factors of the feature objects corresponding to the plurality of works on the visual field, and take this average as the user's observation and reading ability index, which characterizes the user's observation and reading ability; the larger the index, the stronger the user's observation and reading ability.
In one embodiment, the second determining module is further configured to:
determining a preferable display mode of the works corresponding to the user according to the observation and reading capability index of the user, wherein:
when the observation reading capability index of the user is equal to or larger than a preset index threshold value, determining that the preferable display mode of the works corresponding to the user is to display the works in a mode of being equal to or larger than a preset area;
and when the observation reading capability index of the user is smaller than a preset index threshold value, determining that the preferable display mode of the works corresponding to the user is to display the works in a mode of smaller than a preset area.
According to the technical scheme provided by the embodiment of the invention, the observation and reading capacity of the user is intelligently analyzed, the display mode of the works which the user is interested in is further analyzed, and the works are displayed for the user according to the display mode of the works which the user is interested in.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart of a method for eye movement data based observational reading capability detection disclosed herein;
fig. 2 is a schematic diagram of various parameters of an observation and reading capability detection method based on eye movement data disclosed in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the invention discloses an observation and reading capability detection method based on eye movement data, which comprises the following steps of S1-S5:
and step S1, sequentially displaying a plurality of works for the user to view on the display screen.
The works can be pictures, for example, pictures composed of a plurality of patterns, and the colors of the patterns can be varied.
Step S2, in the process that the user views each work, eye movement data corresponding to each work viewed by the user are collected.
And step S3, determining the observation and reading ability of the user according to the eye movement data.
And step S4, determining the preferable display mode of the works corresponding to the user according to the observation and reading ability.
And step S5, displaying the next work for the user according to the work preferred display mode corresponding to the user.
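The flow of steps S1-S5 can be sketched as a short pipeline. This is a minimal illustration, not the patent's implementation: the per-work scores stand in for the corrected influence factors of steps A1-A3 (whose formulas are given only as figures in the patent), and all function names are hypothetical.

```python
# Hedged sketch of steps S1-S5. Eye-tracker and screen APIs are omitted;
# per-work scores stand in for the corrected influence factors of steps A1-A3.

def observation_reading_index(per_work_scores):
    """Step S3/A3: the index is the mean of the per-work corrected factors."""
    return sum(per_work_scores) / len(per_work_scores)

def preferred_display_mode(index, threshold):
    """Step S4: at or above the threshold -> large-format works, else small."""
    return "large" if index >= threshold else "small"

def run_session(per_work_scores, threshold=0.5):
    """Steps S1-S5 as a pipeline: view works, score them, pick a display mode
    for the next work (step S5)."""
    index = observation_reading_index(per_work_scores)
    return preferred_display_mode(index, threshold)

print(run_session([0.8, 0.6, 0.7]))  # index 0.7 >= 0.5 -> "large"
print(run_session([0.2, 0.3, 0.1]))  # index 0.2 <  0.5 -> "small"
```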
In one embodiment, a plurality of feature objects are displayed in each work; each feature object is a pattern with a conspicuous color, and the area of the pattern is greater than 1/200 and less than 1/50 of the area of the work; besides the feature objects, a plurality of non-feature objects are displayed in each work, each non-feature object being a pattern with an inconspicuous color; the patterns of the feature objects and the non-feature objects may be the same or different.
While the user views each work, eye movement data corresponding to that work are collected, the eye movement data including the focusing angle at each binocular focus point, the deviation angle at each binocular focus point, and whether a feature object is present at each binocular focus point;
and determining the observation reading capacity of the user according to the eye movement data corresponding to each work viewed by the user.
In one embodiment, determining the user's observation and reading ability according to the eye movement data corresponding to each work viewed by the user includes steps A1-A3:
Step A1: for the kth work, perform the following steps A11-A16:
Step A11: calculate the distance from the ith binocular focus point on the kth work to the midpoint between the two eyes according to the following formula (1):
where H_ki denotes the distance from the ith focus point on the kth work to the midpoint between the two eyes (the ED distance in FIG. 2, where A is the focus point on the kth work when the eyes look straight ahead, E is the ith focus point, AE lies on the work on the display screen, point D is the midpoint between the two eyes, B is the center of one eye, and C is the center of the other eye); Y denotes the interocular distance (the BC distance in FIG. 2); β_ki denotes the focusing angle of the two eyes at the ith focus point on the kth work (∠BEC in FIG. 2); γ_ki denotes the deviation angle of the two eyes at the ith focus point on the kth work (∠CEF in FIG. 2, where EF is perpendicular to the plane of BC);
Step A12: calculate the binocular visual field angle of the eyes at the ith focus point on the kth work (∠EDA in FIG. 2) according to the following formula (2);
where η_ki denotes the binocular visual field angle of the eyes at the ith focus point on the kth work;
Step A13: determine whether a feature object is present at the ith binocular focus point on the kth work;
Step A14: calculate the influence factor χ_k of the feature objects corresponding to the kth work on the visual field according to the following formula (3):
where χ_k denotes the influence factor of the feature objects corresponding to the kth work on the visual field; N_k denotes the total number of focus points on the kth work whose corresponding binocular visual field angle is equal to or greater than a preset angle; F_k denotes the number of those N_k focus points at which a feature object is present;
Step A15: calculate, according to the following formula (4), the influence factor of the viewing distance corresponding to the kth work on the visual field;
where H_kj denotes the distance from the jth binocular focus point on the kth work to the midpoint between the two eyes, each such H_kj being equal to or greater than a preset distance; m is the total number of focus points whose distance is equal to or greater than the preset distance; n is the total number of all binocular focus points on the kth work;
Step A16: use the viewing-distance influence factor to correct χ_k, obtaining the corrected influence factor χ_k′ of the feature objects corresponding to the kth work on the visual field, according to the following formulas (5) and (6):
Step A2: for the remaining works, determine the corrected influence factor of the feature objects corresponding to each work on the visual field according to steps A11-A16;
Step A3: calculate the average of the corrected influence factors of the feature objects corresponding to the plurality of works on the visual field, and take this average as the user's observation and reading ability index, which characterizes the user's observation and reading ability; the larger the index, the stronger the user's observation and reading ability.
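Steps A14-A16 can be sketched for a single work. Note that the patent's formulas (3)-(6) appear only as figures; the concrete forms below (simple counting ratios and a multiplicative correction) are assumptions chosen to match the stated variable definitions, not the patent's actual equations.

```python
# Hedged sketch of steps A14-A16 for one work. Formulas (3)-(6) are not
# reproduced in the text; the ratio and correction forms below are assumptions.

def feature_factor(fixations, angle_threshold):
    """Step A14: N_k = focus points whose binocular field angle eta is at or
    above the preset angle; F_k = those among them on a feature object.
    Formula (3) is assumed here to be the ratio F_k / N_k."""
    wide = [f for f in fixations if f["eta"] >= angle_threshold]
    if not wide:
        return 0.0
    return sum(1 for f in wide if f["on_feature"]) / len(wide)

def distance_factor(distances, distance_threshold):
    """Step A15: m of the n focus points lie at or beyond the preset
    distance; formula (4) is assumed here to be the proportion m / n."""
    m = sum(1 for d in distances if d >= distance_threshold)
    return m / len(distances)

def corrected_factor(chi_k, delta_k):
    """Step A16: formulas (5)-(6) are assumed to apply a multiplicative
    correction, where a larger share of far-away fixations shrinks chi_k."""
    return chi_k * (1 - delta_k)

fixations = [
    {"eta": 10.0, "on_feature": True},   # wide field angle, feature hit
    {"eta": 10.0, "on_feature": False},  # wide field angle, no feature
    {"eta": 2.0,  "on_feature": True},   # below preset angle, excluded
]
chi = feature_factor(fixations, angle_threshold=5.0)  # F_k/N_k = 1/2
delta = distance_factor([1.0, 2.0, 3.0, 4.0], 3.0)    # m/n    = 2/4
print(corrected_factor(chi, delta))                   # 0.5 * 0.5 = 0.25
```

The per-work values produced this way are then averaged over all works (step A3) to obtain the observation and reading ability index.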
In one embodiment, determining a preferred display mode of the work corresponding to the user according to the observation and reading ability comprises:
determining a preferable display mode of the works corresponding to the user according to the observation reading ability index of the user, wherein:
when the observation reading capability index of the user is equal to or larger than a preset index threshold value, determining that the preferable display mode of the works corresponding to the user is to display the works in a mode of being equal to or larger than a preset area; that is, this situation illustrates that users such as autistic children have a stronger and more sensitive ability to view large-format works, and are suitable for providing the users with the large-format works to increase the knowledge acceptance speed of the users.
When the observation reading capability index of the user is smaller than a preset index threshold value, determining that the preferable display mode of the works corresponding to the user is to display the works in a mode of being smaller than a preset area; that is, this situation illustrates that a user, such as an autistic child, is more able to observe a small-sized work and is more sensitive, and is suitable for providing the user with the small-sized work to increase the knowledge acceptance speed of the user. Wherein, the preset area can be set artificially.
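The threshold rule in the two paragraphs above is fully specified by the text and can be written directly. The ±20% sizing margins in this sketch are a hypothetical choice, since the patent requires only "equal to or larger" versus "smaller" than the preset area:

```python
def preferred_work_area(index, index_threshold, preset_area):
    """Index at or above the threshold -> display works at an area equal to
    or larger than the preset area; otherwise at a smaller area. The +/-20%
    margins are hypothetical; the text specifies only the inequalities."""
    if index >= index_threshold:
        return preset_area * 1.2  # large-format display
    return preset_area * 0.8      # small-format display

print(preferred_work_area(0.6, 0.5, 100.0))  # large format: 120.0
print(preferred_work_area(0.3, 0.5, 100.0))  # small format: 80.0
```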
According to the technical scheme provided by the embodiment of the invention, the observation and reading capacity of the user is intelligently analyzed, the display mode of the works which the user is interested in is further analyzed, and the works are displayed for the user according to the display mode of the works which the user is interested in.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (8)
1. An observation and reading ability detection method based on eye movement data is characterized by comprising the following steps:
sequentially displaying a plurality of works for a user to view on a display screen;
in the process that the user views each work, collecting eye movement data corresponding to each work viewed by the user;
determining the observation reading capability of the user according to the eye movement data;
determining a preferable display mode of the works corresponding to the user according to the observation and reading capacity;
and displaying the next work for the user according to the work preferred display mode corresponding to the user.
2. The method of claim 1,
A plurality of randomly distributed feature objects are displayed in each work; each feature object is a pattern with a conspicuous color, and the area of the pattern is greater than 1/200 and less than 1/50 of the area of the work; the colors of the feature objects in each work are different from one another; besides the feature objects, a plurality of non-feature objects are displayed in each work, each non-feature object being a pattern with an inconspicuous color;
the eye movement data include the focusing angle at each binocular focus point, the deviation angle at each binocular focus point, and whether a feature object is present at each binocular focus point;
determining the observation reading ability of the user according to the eye movement data comprises: and determining the observation reading capability of the user according to the eye movement data corresponding to each work viewed by the user.
3. The method of claim 2,
determining the observation and reading ability of the user according to the eye movement data corresponding to each work viewed by the user comprises steps A1-A3:
Step A1: for the kth work, perform the following steps A11-A16:
Step A11: calculate the distance from the ith binocular focus point on the kth work to the midpoint between the two eyes according to the following formula (1):
where H_ki denotes the distance from the ith focus point on the kth work to the midpoint between the two eyes (the ED distance in the figure); Y denotes the binocular distance (the BC distance in the figure); β_ki denotes the focusing angle of the two eyes at the ith focus point on the kth work (∠BEC in the figure); γ_ki denotes the deviation angle of the two eyes at the ith focus point on the kth work (∠CEF in the figure);
Step A12: calculate the binocular visual field angle of the two eyes at the ith focus point on the kth work (∠EDA in the figure) according to the following formula (2);
where η_ki denotes the binocular visual field angle of the two eyes at the ith focus point on the kth work (∠EDA in the figure);
Step A13: determine whether a feature object is present at the ith binocular focus point on the kth work;
Step A14: calculate the influence factor χ_k of the feature objects corresponding to the kth work on the visual field according to the following formula (3):
where χ_k denotes the influence factor of the feature objects corresponding to the kth work on the visual field; N_k denotes the total number of focus points on the kth work whose corresponding binocular visual field angle is equal to or greater than a preset angle; F_k denotes the number of those N_k focus points at which a feature object is present;
Step A15: calculate, according to the following formula (4), the influence factor of the viewing distance corresponding to the kth work on the visual field;
where H_kj denotes the distance from the jth binocular focus point on the kth work to the midpoint between the two eyes, each such H_kj being equal to or greater than a preset distance; m is the total number of focus points whose distance is equal to or greater than the preset distance; n is the total number of all binocular focus points on the kth work;
Step A16: use the viewing-distance influence factor to correct χ_k, obtaining the corrected influence factor χ_k′ of the feature objects corresponding to the kth work on the visual field, according to the following formulas (5) and (6):
Step A2: for the remaining works, determine the corrected influence factor of the feature objects corresponding to each work on the visual field according to steps A11-A16;
Step A3: calculate the average of the corrected influence factors of the feature objects corresponding to the plurality of works on the visual field, and take this average as the user's observation and reading ability index, which characterizes the user's observation and reading ability; the larger the index, the stronger the user's observation and reading ability.
4. The method of claim 3, wherein determining the preferred display mode of the works corresponding to the user according to the observation and reading ability comprises:
determining a preferable display mode of the works corresponding to the user according to the observation and reading capability index of the user, wherein:
when the observation reading capability index of the user is equal to or larger than a preset index threshold value, determining that the preferable display mode of the works corresponding to the user is to display the works in a mode of being equal to or larger than a preset area;
and when the observation reading capability index of the user is smaller than a preset index threshold value, determining that the preferable display mode of the works corresponding to the user is to display the works in a mode of smaller than a preset area.
5. An observational reading ability detection system based on eye movement data, comprising:
the first display module, used for sequentially displaying, on the display screen, a plurality of works for a user to view;
the acquisition module, used for acquiring, while the user views each work, the eye movement data corresponding to that work;
the first determining module, used for determining the observation and reading ability of the user according to the eye movement data;
the second determining module, used for determining the preferred display mode of the works corresponding to the user according to the observation and reading ability;
and the second display module, used for displaying the next work for the user according to the user's preferred display mode of the works.
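The five modules of claim 5 form one pipeline: display each work, collect eye movement data while it is viewed, determine the ability index, pick a display mode, and apply that mode to the next work. The sketch below is an illustrative assumption (the class name, callback interface, and the placeholder ability computation are not from the patent):

```python
class ObservationReadingSystem:
    """Illustrative wiring of the claim-5 modules; not the patented design."""

    def __init__(self, works, collect_eye_data, preset_threshold=0.5):
        self.works = works                        # shown in sequence (first display module)
        self.collect_eye_data = collect_eye_data  # acquisition module callback
        self.preset_threshold = preset_threshold

    def determine_ability(self, records):
        # Placeholder for steps A1-A3: records are assumed here to already be
        # per-work corrected influence factors chi_k'.
        return sum(records) / len(records)

    def run(self):
        # Acquisition module: one record per viewed work.
        records = [self.collect_eye_data(work) for work in self.works]
        index = self.determine_ability(records)        # first determining module
        mode = ("at_or_above_preset_area"              # second determining module
                if index >= self.preset_threshold else "below_preset_area")
        return index, mode  # the second display module would apply `mode` to the next work
```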
6. The system of claim 5,
a plurality of randomly distributed feature objects are displayed in each work, each feature object being a pattern in a conspicuous color whose area is greater than 1/200 and smaller than 1/50 of the area of the work; the colors of the feature objects differ within each work; besides the feature objects, each work also contains a plurality of non-feature objects, each being a pattern in an inconspicuous color;
the eye movement data include the focusing angle of the binocular focusing point, the deviation angle of the binocular focusing point, and whether a feature object is present at the binocular focusing point;
and determining the observation and reading ability of the user according to the eye movement data comprises: determining the observation and reading ability of the user according to the eye movement data corresponding to each work viewed by the user.
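Per claim 6, each eye-movement sample carries three pieces of information. A minimal record type for one binocular focusing-point sample (the field names are assumptions):

```python
from dataclasses import dataclass

@dataclass
class EyeMovementSample:
    """One binocular focusing-point sample, per claim 6's data items."""
    focusing_angle: float    # beta_ki, the angle BEC in the patent's figure
    deviation_angle: float   # gamma_ki, the angle CEF in the figure
    on_feature_object: bool  # whether a feature object lies at the focusing point
```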
7. The system of claim 6,
determining the observation and reading ability of the user according to the eye movement data corresponding to the works viewed by the user comprises steps A1-A3:
step A1, for the k-th work, performing the following steps A11-A16:
step A11, calculating the distance from the i-th focusing point of the two eyes on the k-th work to the midpoint between the two eyes according to the following formula (1):
where H_ki represents the distance from the i-th focusing point of the two eyes on the k-th work to the midpoint between the two eyes (distance ED in the figure); Y represents the interocular distance (distance BC in the figure); β_ki represents the focusing angle of the two eyes at the i-th focusing point on the k-th work (∠BEC in the figure); and γ_ki represents the deviation angle of the two eyes at the i-th focusing point on the k-th work (∠CEF in the figure);
step A12, calculating the binocular visual field angle of the two eyes at the i-th focusing point on the k-th work according to the following formula (2);
where η_ki represents the binocular visual field angle of the two eyes at the i-th focusing point on the k-th work (∠EDA in the figure);
step A13, determining whether a feature object exists at the i-th focusing point of the two eyes on the k-th work;
step A14, calculating the influence factor χ_k of the feature object corresponding to the k-th work on the visual field according to the following formula (3):
where χ_k represents the influence factor of the feature object corresponding to the k-th work on the visual field; N_k represents the total number of focusing points on the k-th work whose corresponding binocular visual field angles are equal to or greater than a preset angle; and F_k represents the number of those N_k focusing points at which a feature object exists;
step A15, calculating the influence factor of the visual distance corresponding to the k-th work on the visual field according to the following formula (4):
where H_kj represents the distance from the j-th focusing point of the two eyes on the k-th work to the midpoint between the two eyes, each H_kj being equal to or greater than a preset distance; m is the total number of focusing points whose distance is equal to or greater than the preset distance; and n is the total number of all focusing points of the two eyes on the k-th work;
step A16, correcting χ_k using the visual-distance influence factor to obtain the corrected influence factor χ_k' of the feature object corresponding to the k-th work on the visual field, according to the following formulas (5) and (6):
step A2, for each of the remaining works, determining the corrected influence factor of the corresponding feature object on the visual field according to steps A11-A16;
step A3, calculating the average of the corrected influence factors of the feature objects corresponding to the plurality of works on the visual field, and taking the average as the observation and reading ability index of the user to represent the user's observation and reading ability; the larger the user's observation and reading ability index, the stronger the user's observation and reading ability.
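Formulas (1)-(6) appear only as figures and are not reproduced in this text, so the exact expressions are unavailable. Reading the variable definitions literally, steps A14-A16 suggest simple ratios; the sketch below uses F_k/N_k for formula (3), m/n for formula (4), and a plain product for the correction in formulas (5)-(6). All three are plausible stand-ins, not the patented formulas:

```python
def feature_object_factor(n_wide_angle_points: int, n_on_feature: int) -> float:
    """chi_k, step A14 (assumed form of formula (3)): among the N_k points
    whose binocular visual field angle meets the preset angle, the share
    landing on a feature object."""
    return n_on_feature / n_wide_angle_points

def visual_distance_factor(n_far_points: int, n_all_points: int) -> float:
    """Step A15 (assumed form of formula (4)): the share m/n of focusing
    points whose distance H_kj meets the preset distance."""
    return n_far_points / n_all_points

def corrected_factor(chi_k: float, distance_factor: float) -> float:
    """chi_k', step A16 (assumed stand-in for formulas (5) and (6)):
    chi_k weighted by the visual-distance factor."""
    return chi_k * distance_factor
```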
8. The system of claim 7, wherein the second determining module is further configured to:
determine the preferred display mode of the works corresponding to the user according to the user's observation and reading ability index, wherein:
when the user's observation and reading ability index is equal to or greater than a preset index threshold, the preferred display mode of the works corresponding to the user is determined to be displaying the works at a size equal to or larger than a preset area;
and when the user's observation and reading ability index is smaller than the preset index threshold, the preferred display mode of the works corresponding to the user is determined to be displaying the works at a size smaller than the preset area.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011020334.7A CN112137576B (en) | 2020-09-24 | 2020-09-24 | Method and system for detecting observation and reading ability based on eye movement data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112137576A true CN112137576A (en) | 2020-12-29 |
CN112137576B CN112137576B (en) | 2021-07-09 |
Family
ID=73896891
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011020334.7A Active CN112137576B (en) | 2020-09-24 | 2020-09-24 | Method and system for detecting observation and reading ability based on eye movement data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112137576B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150213634A1 (en) * | 2013-01-28 | 2015-07-30 | Amit V. KARMARKAR | Method and system of modifying text content presentation settings as determined by user states based on user eye metric data |
CN106293045A (en) * | 2015-06-30 | 2017-01-04 | 北京智谷睿拓技术服务有限公司 | Display control method, display control unit and subscriber equipment |
CN106843500A (en) * | 2017-02-27 | 2017-06-13 | 南通大学 | Human-subject test rehabilitation training system based on the dynamic tracer technique of eye |
CN106897363A (en) * | 2017-01-11 | 2017-06-27 | 同济大学 | The text for moving tracking based on eye recommends method |
CN108471486A (en) * | 2018-03-09 | 2018-08-31 | 浙江工业大学 | A kind of intelligent reading operations method and device suitable for electronic viewing aid |
Also Published As
Publication number | Publication date |
---|---|
CN112137576B (en) | 2021-07-09 |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
20221020 | PP01 | Preservation of patent right | Granted publication date: 20210709 |