CN112137576B - Method and system for detecting observation and reading ability based on eye movement data

Info

Publication number
CN112137576B
Authority
CN (China)
Prior art keywords
work, user, eyes, observation, kth
Prior art date
2020-09-24
Legal status
Active
Application number
CN202011020334.7A
Other languages
Chinese (zh)
Other versions
CN112137576A
Inventor
王鑫
Current Assignee
Shanghai Squirrel Classroom Artificial Intelligence Technology Co Ltd
Original Assignee
Shanghai Squirrel Classroom Artificial Intelligence Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Squirrel Classroom Artificial Intelligence Technology Co Ltd
Priority to CN202011020334.7A
Publication of CN112137576A
Application granted
Publication of CN112137576B

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/113: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions, for determining or recording eye movement
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18: Eye characteristics, e.g. of the iris
    • G06V 40/193: Preprocessing; Feature extraction
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00: Electrically-operated educational appliances
    • G09B 5/02: Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Surgery (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The application discloses a method and system for detecting observation and reading ability based on eye movement data. The method comprises: sequentially displaying a plurality of works on a display screen for a user to view; collecting, while the user views each work, the eye movement data corresponding to that work; determining the observation and reading ability of the user according to the eye movement data; determining the display mode of works preferred by the user according to the observation and reading ability; and displaying the next work to the user in the preferred display mode.

Description

Method and system for detecting observation and reading ability based on eye movement data
Technical Field
The invention relates to the field of intelligent technology, and in particular to a method and system for detecting observation and reading ability based on eye movement data.
Background
Children with cognitive impairment, such as autistic children, inherently take in knowledge slowly, and their individual traits have a pronounced influence on the cognitive process: some prefer looking at pictures, some prefer listening to music, and among those who look at pictures, some pay more attention to larger pictures while others pay more attention to smaller ones. If intelligent means could determine whether a child prefers to look at large or small pictures, it would greatly help educational institutions apply a personalized, well-matched teaching method to that child.
Disclosure of Invention
The application provides a method and system for detecting observation and reading ability based on eye movement data, which intelligently analyze the observation and reading ability of a user and then display works to the user in the display mode the user is interested in.
An embodiment of the invention provides a method for detecting observation and reading ability based on eye movement data, comprising the following steps:
sequentially displaying a plurality of works on a display screen for a user to view;
collecting, while the user views each work, the eye movement data corresponding to that work;
determining the observation and reading ability of the user according to the eye movement data;
determining the display mode of works preferred by the user according to the observation and reading ability;
and displaying the next work to the user in the preferred display mode.
In one embodiment, each work displays a plurality of randomly distributed feature objects, each of which is a pattern in a salient color whose area is greater than 1/200 and less than 1/50 of the area of the work; the colors of the feature objects differ within each picture. Besides the feature objects, each work also displays a plurality of non-feature objects, each of which is a pattern in a non-salient color.
The eye movement data comprise the focusing angle of the binocular focus point, the deviation angle of the binocular focus point, and whether a feature object is present at the binocular focus point.
Determining the observation and reading ability of the user according to the eye movement data includes: determining the observation and reading ability of the user according to the eye movement data corresponding to each work viewed by the user.
In one embodiment, determining the observation and reading ability of the user according to the eye movement data corresponding to each work viewed by the user includes steps A1 to A3:

Step A1: for the kth work, perform the following steps A11 to A16.

Step A11: calculate, according to formula (1), the distance from the ith binocular focus point on the kth work to the midpoint between the eyes, wherein H_ki denotes that distance (the ED distance in the figure); y denotes the interocular distance (the BC distance in the figure); β_ki denotes the focusing angle of the eyes at the ith focus point on the kth work (∠BEC in the figure); and γ_ki denotes the deviation angle of the eyes at the ith focus point on the kth work (∠CEF in the figure). [Formula (1) appears only as an image in the original document.]

Step A12: calculate, according to formula (2), the binocular visual field angle of the eyes at the ith focus point on the kth work (∠EDA in the figure), wherein η_ki denotes that binocular visual field angle. [Formula (2) appears only as an image in the original document.]

Step A13: determine whether a feature object is present at the ith binocular focus point on the kth work.

Step A14: calculate, according to formula (3), the influence factor χ_k of the feature objects on the visual field for the kth work, wherein χ_k denotes that influence factor; N_k denotes the total number of focus points on the kth work whose binocular visual field angle is equal to or greater than a preset angle; and F_k denotes the number of those N_k focus points at which a feature object is present. [Formula (3) appears only as an image in the original document.]

Step A15: calculate, according to formula (4), the influence factor of the viewing distance on the visual field for the kth work (written here as δ_k), wherein H_kj denotes the distance from the jth binocular focus point on the kth work to the midpoint between the eyes, each such H_kj being equal to or greater than a preset distance; m denotes the total number of focus points whose distance is equal to or greater than the preset distance; and n denotes the total number of binocular focus points on the kth work. [Formula (4) appears only as an image in the original document.]

Step A16: use δ_k to correct χ_k, obtaining the corrected influence factor χ_k′ of the feature objects on the visual field for the kth work according to formulas (5) and (6): when δ_k is equal to or greater than a preset factor threshold, χ_k′ is calculated by formula (5); when δ_k is less than the preset factor threshold, χ_k′ is calculated by formula (6). [Formulas (5) and (6) appear only as images in the original document.]

Step A2: for each of the remaining works, determine the corrected influence factor of its feature objects on the visual field according to steps A11 to A16.

Step A3: calculate the average of the corrected influence factors over the plurality of works, and take this average as the observation and reading ability index of the user, representing the user's observation and reading ability; the larger the index, the stronger the user's observation and reading ability.
In one embodiment, determining the display mode of works preferred by the user according to the observation and reading ability includes:
determining the preferred display mode of works for the user according to the user's observation and reading ability index, wherein:
when the observation and reading ability index of the user is equal to or greater than a preset index threshold, the preferred display mode is to display works at a size equal to or greater than a preset area;
and when the observation and reading ability index of the user is less than the preset index threshold, the preferred display mode is to display works at a size smaller than the preset area.
An embodiment of the invention further provides a system for detecting observation and reading ability based on eye movement data, comprising:
a first display module, configured to sequentially display a plurality of works on a display screen for a user to view;
an acquisition module, configured to collect, while the user views each work, the eye movement data corresponding to that work;
a first determination module, configured to determine the observation and reading ability of the user according to the eye movement data;
a second determination module, configured to determine the display mode of works preferred by the user according to the observation and reading ability;
and a second display module, configured to display the next work to the user in the preferred display mode.
In one embodiment, each work displays a plurality of randomly distributed feature objects, each of which is a pattern in a salient color whose area is greater than 1/200 and less than 1/50 of the area of the work; the colors of the feature objects differ within each picture. Besides the feature objects, each work also displays a plurality of non-feature objects, each of which is a pattern in a non-salient color.
The eye movement data comprise the focusing angle of the binocular focus point, the deviation angle of the binocular focus point, and whether a feature object is present at the binocular focus point.
Determining the observation and reading ability of the user according to the eye movement data includes: determining the observation and reading ability of the user according to the eye movement data corresponding to each work viewed by the user.
In one embodiment, determining the observation and reading ability of the user according to the eye movement data corresponding to each work viewed by the user includes steps A1 to A3:

Step A1: for the kth work, perform the following steps A11 to A16.

Step A11: calculate, according to formula (1), the distance from the ith binocular focus point on the kth work to the midpoint between the eyes, wherein H_ki denotes that distance (the ED distance in the figure); y denotes the interocular distance (the BC distance in the figure); β_ki denotes the focusing angle of the eyes at the ith focus point on the kth work (∠BEC in the figure); and γ_ki denotes the deviation angle of the eyes at the ith focus point on the kth work (∠CEF in the figure). [Formula (1) appears only as an image in the original document.]

Step A12: calculate, according to formula (2), the binocular visual field angle of the eyes at the ith focus point on the kth work (∠EDA in the figure), wherein η_ki denotes that binocular visual field angle. [Formula (2) appears only as an image in the original document.]

Step A13: determine whether a feature object is present at the ith binocular focus point on the kth work.

Step A14: calculate, according to formula (3), the influence factor χ_k of the feature objects on the visual field for the kth work, wherein χ_k denotes that influence factor; N_k denotes the total number of focus points on the kth work whose binocular visual field angle is equal to or greater than a preset angle; and F_k denotes the number of those N_k focus points at which a feature object is present. [Formula (3) appears only as an image in the original document.]

Step A15: calculate, according to formula (4), the influence factor of the viewing distance on the visual field for the kth work (written here as δ_k), wherein H_kj denotes the distance from the jth binocular focus point on the kth work to the midpoint between the eyes, each such H_kj being equal to or greater than a preset distance; m denotes the total number of focus points whose distance is equal to or greater than the preset distance; and n denotes the total number of binocular focus points on the kth work. [Formula (4) appears only as an image in the original document.]

Step A16: use δ_k to correct χ_k, obtaining the corrected influence factor χ_k′ of the feature objects on the visual field for the kth work according to formulas (5) and (6): when δ_k is equal to or greater than a preset factor threshold, χ_k′ is calculated by formula (5); when δ_k is less than the preset factor threshold, χ_k′ is calculated by formula (6). [Formulas (5) and (6) appear only as images in the original document.]

Step A2: for each of the remaining works, determine the corrected influence factor of its feature objects on the visual field according to steps A11 to A16.

Step A3: calculate the average of the corrected influence factors over the plurality of works, and take this average as the observation and reading ability index of the user, representing the user's observation and reading ability; the larger the index, the stronger the user's observation and reading ability.
In one embodiment, the second determination module is further configured to:
determine the preferred display mode of works for the user according to the user's observation and reading ability index, wherein:
when the observation and reading ability index of the user is equal to or greater than a preset index threshold, the preferred display mode is to display works at a size equal to or greater than a preset area;
and when the observation and reading ability index of the user is less than the preset index threshold, the preferred display mode is to display works at a size smaller than the preset area.
The technical scheme provided by the embodiments of the invention intelligently analyzes the user's observation and reading ability, further determines the display mode of works the user is interested in, and displays works to the user in that mode.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of the method for detecting observation and reading ability based on eye movement data disclosed in the present application;
FIG. 2 is a schematic diagram of the parameters used in the method for detecting observation and reading ability based on eye movement data disclosed in the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art from these embodiments without creative effort fall within the protection scope of the present application.
An embodiment of the invention discloses a method for detecting observation and reading ability based on eye movement data, comprising the following steps S1 to S5:

Step S1: sequentially display a plurality of works on a display screen for a user to view.

The works may be pictures, for example pictures composed of a plurality of patterns in varied colors.

Step S2: while the user views each work, collect the eye movement data corresponding to that work.

Step S3: determine the observation and reading ability of the user according to the eye movement data.

Step S4: determine the display mode of works preferred by the user according to the observation and reading ability.

Step S5: display the next work to the user in the preferred display mode.
In one embodiment, each work displays a plurality of feature objects, each of which is a pattern in a salient color whose area is greater than 1/200 and less than 1/50 of the area of the work. Besides the feature objects, each work also displays a plurality of non-feature objects, each of which is a pattern in a non-salient color; the patterns of the feature objects and the non-feature objects may be the same or different.

While the user views each work, the eye movement data corresponding to that work are collected. The eye movement data comprise the focusing angle of the binocular focus point, the deviation angle of the binocular focus point, and whether a feature object is present at the binocular focus point.

The observation and reading ability of the user is then determined according to the eye movement data corresponding to each work viewed by the user.
In one embodiment, determining the observation and reading ability of the user according to the eye movement data corresponding to each work viewed by the user includes steps A1 to A3:

Step A1: for the kth work, perform the following steps A11 to A16.

Step A11: calculate, according to formula (1), the distance from the ith binocular focus point on the kth work to the midpoint between the eyes, wherein H_ki denotes that distance (the ED distance in fig. 2, where A is the focus point on the work when the eyes look straight ahead, E is the ith focus point, AE lies on the work on the display screen, D is the midpoint between the eyes, and B and C are the center points of the two eyes); y denotes the interocular distance (the BC distance in fig. 2); β_ki denotes the focusing angle of the eyes at the ith focus point on the kth work (∠BEC in fig. 2); and γ_ki denotes the deviation angle of the eyes at the ith focus point on the kth work (∠CEF in fig. 2, where EF is perpendicular to the plane of BC). [Formula (1) appears only as an image in the original document.]

Step A12: calculate, according to formula (2), the binocular visual field angle of the eyes at the ith focus point on the kth work (∠EDA in fig. 2), wherein η_ki denotes that binocular visual field angle. [Formula (2) appears only as an image in the original document.]
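Formulas (1) and (2) survive only as images in the source, so their exact expressions cannot be quoted here. As a minimal sketch, the Python below reconstructs one plausible geometry from the definitions above (eyes at B and C, midpoint D, focus point E on the screen, straight-ahead point A, and EF normal to the plane of BC); the function name and the particular trigonometric form are assumptions made for illustration, not the patent's verified formulas.

```python
import math

def focus_distance_and_field_angle(y: float, beta: float, gamma: float):
    """Assumed reconstruction of formulas (1) and (2).

    y     -- interocular distance (the BC distance), in meters
    beta  -- focusing (vergence) angle at E, i.e. angle BEC, in radians
    gamma -- deviation angle at E, i.e. angle CEF, in radians, where EF is
             the normal to the eye baseline; assumes beta > 0 and gamma >= 0
             so the geometry below is well defined
    """
    # Place D at the origin, B = (-y/2, 0), C = (+y/2, 0), and the screen
    # parallel to the baseline at depth d. E sees C at angle gamma from the
    # normal and B at angle gamma + beta, which fixes the depth d.
    d = y / (math.tan(gamma + beta) - math.tan(gamma))
    # Lateral offset of the focus point E from the straight-ahead point A.
    x_e = d * math.tan(gamma) + y / 2.0
    # Formula (1) analogue: distance H_ki from E to the eye midpoint D.
    h = math.hypot(x_e, d)
    # Formula (2) analogue: binocular visual field angle eta_ki = angle EDA.
    eta = math.atan2(x_e, d)
    return h, eta

# Example: eyes 6.5 cm apart, vergence 7 degrees, deviation 5 degrees.
h, eta = focus_distance_and_field_angle(0.065, math.radians(7), math.radians(5))
print(f"H = {h:.3f} m, eta = {math.degrees(eta):.1f} deg")  # ~0.525 m, ~8.5 deg
```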
Step A13: determine whether a feature object is present at the ith binocular focus point on the kth work.

Step A14: calculate, according to formula (3), the influence factor χ_k of the feature objects on the visual field for the kth work, wherein χ_k denotes that influence factor; N_k denotes the total number of focus points on the kth work whose binocular visual field angle is equal to or greater than a preset angle; and F_k denotes the number of those N_k focus points at which a feature object is present. [Formula (3) appears only as an image in the original document.]

Step A15: calculate, according to formula (4), the influence factor of the viewing distance on the visual field for the kth work (written here as δ_k), wherein H_kj denotes the distance from the jth binocular focus point on the kth work to the midpoint between the eyes, each such H_kj being equal to or greater than a preset distance; m denotes the total number of focus points whose distance is equal to or greater than the preset distance; and n denotes the total number of binocular focus points on the kth work. [Formula (4) appears only as an image in the original document.]
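Formulas (3) and (4) are likewise preserved only as images. From the definitions above, a natural reading is that χ_k measures how often wide-angle focus points land on a feature object, and that δ_k is built from the focus points whose distance reaches the preset value. The sketch below uses the simple count ratios F_k/N_k and m/n as stand-ins; the real formula (4) may well also weight by the H_kj values themselves, so both ratio forms, and all names here, are assumptions.

```python
def feature_influence_factor(field_angles, on_feature, preset_angle):
    """Assumed stand-in for formula (3): chi_k = F_k / N_k.

    field_angles -- eta_ki for every focus point on the kth work (radians)
    on_feature   -- parallel booleans: is a feature object at that point?
    preset_angle -- threshold on the binocular visual field angle (radians)
    """
    wide = [hit for eta, hit in zip(field_angles, on_feature)
            if eta >= preset_angle]        # the N_k qualifying points
    if not wide:
        return 0.0
    return sum(wide) / len(wide)           # F_k / N_k

def distance_influence_factor(distances, preset_distance):
    """Assumed stand-in for formula (4): delta_k = m / n.

    distances       -- H_ki for every focus point on the kth work (n points)
    preset_distance -- threshold on the eye-to-focus-point distance
    """
    if not distances:
        return 0.0
    m = sum(1 for h in distances if h >= preset_distance)  # the far points
    return m / len(distances)
```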
Step A16: use δ_k to correct χ_k, obtaining the corrected influence factor χ_k′ of the feature objects on the visual field for the kth work according to formulas (5) and (6): when δ_k is equal to or greater than a preset factor threshold, χ_k′ is calculated by formula (5); when δ_k is less than the preset factor threshold, χ_k′ is calculated by formula (6). [Formulas (5) and (6) appear only as images in the original document.]

Step A2: for each of the remaining works, determine the corrected influence factor of its feature objects on the visual field according to steps A11 to A16.

Step A3: calculate the average of the corrected influence factors over the plurality of works, and take this average as the observation and reading ability index of the user, representing the user's observation and reading ability; the larger the index, the stronger the user's observation and reading ability.
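Formulas (5) and (6) for the corrected factor χ_k′ are also lost to image extraction; all the text preserves is that the correction branches on whether δ_k reaches a preset factor threshold. The sketch below therefore uses deliberately simple stand-in branches (boosting χ_k by δ_k in the far-viewing case and leaving it unchanged otherwise) and then carries out steps A2 and A3 as described, averaging the per-work corrected factors into the observation and reading ability index.

```python
def corrected_factor(chi_k, delta_k, factor_threshold):
    # Stand-ins for formulas (5) and (6): only the threshold branch is
    # recoverable from the source; the expressions below are assumed.
    if delta_k >= factor_threshold:
        return chi_k * delta_k   # assumed formula (5)
    return chi_k                 # assumed formula (6)

def reading_ability_index(corrected_factors):
    """Step A3: average the corrected factors chi_k' over all works."""
    values = list(corrected_factors)
    return sum(values) / len(values) if values else 0.0

# Example: three works, each with (chi_k, delta_k), threshold 0.5.
works = [(0.40, 0.70), (0.25, 0.30), (0.55, 0.60)]
index = reading_ability_index(
    corrected_factor(chi, delta, 0.5) for chi, delta in works)
print(f"observation and reading ability index: {index:.3f}")  # 0.287
```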
In one embodiment, determining the display mode of works preferred by the user according to the observation and reading ability includes: determining the preferred display mode according to the user's observation and reading ability index, wherein:

When the observation and reading ability index of the user is equal to or greater than a preset index threshold, the preferred display mode is to display works at a size equal to or greater than a preset area. This indicates that the user, for example an autistic child, observes large-format works more capably and sensitively, so providing large-format works is suitable for increasing the speed at which the user takes in knowledge.

When the observation and reading ability index of the user is less than the preset index threshold, the preferred display mode is to display works at a size smaller than the preset area. This indicates that the user observes small-format works more capably and sensitively, so providing small-format works is suitable for increasing the speed at which the user takes in knowledge. The preset area can be set manually.
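The display-mode rule itself is stated unambiguously: compare the index with a preset index threshold, and show works at a size equal to or greater than the preset area when the index clears the threshold, and at a size smaller than the preset area otherwise. A minimal sketch follows; the concrete threshold and area values are illustrative placeholders, since the patent leaves both to manual configuration.

```python
def preferred_display_area(index, index_threshold=0.3, preset_area_cm2=600.0):
    """Choose the display size of the next work from the ability index.

    The 0.3 threshold and 600 cm^2 preset area are illustrative only;
    the patent says the preset area can be set manually.
    """
    if index >= index_threshold:
        return preset_area_cm2 * 1.5   # large-format display
    return preset_area_cm2 * 0.5       # small-format display

print(preferred_display_area(0.42))  # 900.0 -> large format
print(preferred_display_area(0.12))  # 300.0 -> small format
```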
The technical scheme provided by the embodiments of the invention thus intelligently analyzes the user's observation and reading ability, determines from it the display mode of works the user is interested in, and displays subsequent works to the user in that mode.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (4)

1. A method for detecting observation and reading ability based on eye movement data, characterized by comprising:
sequentially displaying a plurality of works on a display screen for a user to view;
collecting, while the user views each work, the eye movement data corresponding to that work;
determining the observation and reading ability of the user according to the eye movement data;
determining the display mode of works preferred by the user according to the observation and reading ability;
and displaying the next work to the user in the preferred display mode;
wherein each work displays a plurality of randomly distributed feature objects, each of which is a pattern in a salient color whose area is greater than 1/200 and less than 1/50 of the area of the work, the colors of the feature objects differing within each picture, and each work further displays a plurality of non-feature objects, each of which is a pattern in a non-salient color;
wherein the eye movement data comprise the focusing angle of the binocular focus point, the deviation angle of the binocular focus point, and whether a feature object is present at the binocular focus point;
wherein determining the observation and reading ability of the user according to the eye movement data comprises: determining the observation and reading ability of the user according to the eye movement data corresponding to each work viewed by the user;
wherein determining the observation and reading ability of the user according to the eye movement data corresponding to each work viewed by the user comprises steps A1 to A3:

step A1: for the kth work, perform the following steps A11 to A16;

step A11: calculate, according to formula (1), the distance from the ith binocular focus point on the kth work to the midpoint between the eyes, wherein H_ki denotes the distance from the ith binocular focus point on the kth work to the midpoint between the eyes; y denotes the interocular distance; β_ki denotes the focusing angle of the eyes at the ith focus point on the kth work; and γ_ki denotes the deviation angle of the eyes at the ith focus point on the kth work [formula (1) appears only as an image in the original document];

step A12: calculate, according to formula (2), the binocular visual field angle of the eyes at the ith focus point on the kth work, wherein η_ki denotes that binocular visual field angle [formula (2) appears only as an image in the original document];

step A13: determine whether a feature object is present at the ith binocular focus point on the kth work;

step A14: calculate, according to formula (3), the influence factor χ_k of the feature objects on the visual field for the kth work, wherein χ_k denotes that influence factor; N_k denotes the total number of focus points on the kth work whose binocular visual field angle is equal to or greater than a preset angle; and F_k denotes the number of those N_k focus points at which a feature object is present [formula (3) appears only as an image in the original document];

step A15: calculate, according to formula (4), the influence factor of the viewing distance on the visual field for the kth work (written here as δ_k), wherein H_kj denotes the distance from the jth binocular focus point on the kth work to the midpoint between the eyes, each such H_kj being equal to or greater than a preset distance; m denotes the total number of focus points whose distance is equal to or greater than the preset distance; and n denotes the total number of binocular focus points on the kth work [formula (4) appears only as an image in the original document];

step A16: use δ_k to correct χ_k, obtaining the corrected influence factor χ_k′ of the feature objects on the visual field for the kth work according to formulas (5) and (6): when δ_k is equal to or greater than a preset factor threshold, χ_k′ is calculated by formula (5); when δ_k is less than the preset factor threshold, χ_k′ is calculated by formula (6) [formulas (5) and (6) appear only as images in the original document];

step A2: for each of the remaining works, determine the corrected influence factor of its feature objects on the visual field according to steps A11 to A16;

step A3: calculate the average of the corrected influence factors over the plurality of works, and take this average as the observation and reading ability index of the user to represent the user's observation and reading ability; the larger the index, the stronger the user's observation and reading ability.
2. The method of claim 1, wherein determining the display mode of works preferred by the user according to the observation and reading ability comprises:
determining the preferred display mode of works for the user according to the user's observation and reading ability index, wherein:
when the observation and reading ability index of the user is equal to or greater than a preset index threshold, the preferred display mode is to display works at a size equal to or greater than a preset area;
and when the observation and reading ability index of the user is less than the preset index threshold, the preferred display mode is to display works at a size smaller than the preset area.
3. A system for detecting observation and reading ability based on eye movement data, characterized by comprising:
a first display module, configured to sequentially display a plurality of works on a display screen for a user to view;
an acquisition module, configured to collect, while the user views each work, the eye movement data corresponding to that work;
a first determination module, configured to determine the observation and reading ability of the user according to the eye movement data;
a second determination module, configured to determine the display mode of works preferred by the user according to the observation and reading ability;
and a second display module, configured to display the next work to the user in the preferred display mode;
wherein each work displays a plurality of randomly distributed feature objects, each of which is a pattern in a salient color whose area is greater than 1/200 and less than 1/50 of the area of the work, the colors of the feature objects differing within each picture, and each work further displays a plurality of non-feature objects, each of which is a pattern in a non-salient color;
wherein the eye movement data comprise the focusing angle of the binocular focus point, the deviation angle of the binocular focus point, and whether a feature object is present at the binocular focus point;
wherein determining the observation and reading ability of the user according to the eye movement data comprises: determining the observation and reading ability of the user according to the eye movement data corresponding to each work viewed by the user;
wherein determining the observation and reading ability of the user according to the eye movement data corresponding to each work viewed by the user comprises steps A1 to A3:

step A1: for the kth work, perform the following steps A11 to A16;

step A11: calculate, according to formula (1), the distance from the ith binocular focus point on the kth work to the midpoint between the eyes, wherein H_ki denotes the distance from the ith binocular focus point on the kth work to the midpoint between the eyes; y denotes the interocular distance; β_ki denotes the focusing angle of the eyes at the ith focus point on the kth work; and γ_ki denotes the deviation angle of the eyes at the ith focus point on the kth work [formula (1) appears only as an image in the original document];

step A12: calculate, according to formula (2), the binocular visual field angle of the eyes at the ith focus point on the kth work, wherein η_ki denotes that binocular visual field angle [formula (2) appears only as an image in the original document];

step A13: determine whether a feature object is present at the ith binocular focus point on the kth work;

step A14: calculate, according to formula (3), the influence factor χ_k of the feature objects on the visual field for the kth work, wherein χ_k denotes that influence factor; N_k denotes the total number of focus points on the kth work whose binocular visual field angle is equal to or greater than a preset angle; and F_k denotes the number of those N_k focus points at which a feature object is present [formula (3) appears only as an image in the original document];

step A15: calculate, according to formula (4), the influence factor of the viewing distance on the visual field for the kth work (written here as δ_k), wherein H_kj denotes the distance from the jth binocular focus point on the kth work to the midpoint between the eyes, each such H_kj being equal to or greater than a preset distance; m denotes the total number of focus points whose distance is equal to or greater than the preset distance; and n denotes the total number of binocular focus points on the kth work [formula (4) appears only as an image in the original document];

step A16: use δ_k to correct χ_k, obtaining the corrected influence factor χ_k′ of the feature objects on the visual field for the kth work according to formulas (5) and (6): when δ_k is equal to or greater than a preset factor threshold, χ_k′ is calculated by formula (5); when δ_k is less than the preset factor threshold, χ_k′ is calculated by formula (6) [formulas (5) and (6) appear only as images in the original document];

step A2: for each of the remaining works, determine the corrected influence factor of its feature objects on the visual field according to steps A11 to A16;

step A3: calculate the average of the corrected influence factors over the plurality of works, and take this average as the observation and reading ability index of the user to represent the user's observation and reading ability; the larger the index, the stronger the user's observation and reading ability.
4. The system of claim 3, wherein the second determination module is further configured to:
determine the preferred display mode of works for the user according to the user's observation and reading ability index, wherein:
when the observation and reading ability index of the user is equal to or greater than a preset index threshold, the preferred display mode is to display works at a size equal to or greater than a preset area;
and when the observation and reading ability index of the user is less than the preset index threshold, the preferred display mode is to display works at a size smaller than the preset area.
CN202011020334.7A 2020-09-24 2020-09-24 Method and system for detecting observation and reading ability based on eye movement data Active CN112137576B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011020334.7A CN112137576B (en) 2020-09-24 2020-09-24 Method and system for detecting observation and reading ability based on eye movement data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011020334.7A CN112137576B (en) 2020-09-24 2020-09-24 Method and system for detecting observation and reading ability based on eye movement data

Publications (2)

Publication Number Publication Date
CN112137576A CN112137576A (en) 2020-12-29
CN112137576B (en) 2021-07-09

Family

ID=73896891

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011020334.7A Active CN112137576B (en) 2020-09-24 2020-09-24 Method and system for detecting observation and reading ability based on eye movement data

Country Status (1)

Country Link
CN (1) CN112137576B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106293045A * 2015-06-30 2017-01-04 北京智谷睿拓技术服务有限公司 Display control method, display control apparatus and user equipment
CN106843500A * 2017-02-27 2017-06-13 南通大学 Human body rehabilitation training system based on eye movement tracking technology
CN106897363A * 2017-01-11 2017-06-27 同济大学 Text recommendation method based on eye movement tracking
CN108471486A * 2018-03-09 2018-08-31 浙江工业大学 Intelligent reading operation method and device suitable for an electronic viewing aid

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150213634A1 (en) * 2013-01-28 2015-07-30 Amit V. KARMARKAR Method and system of modifying text content presentation settings as determined by user states based on user eye metric data

Also Published As

Publication number Publication date
CN112137576A (en) 2020-12-29

Similar Documents

Publication Publication Date Title
US7444017B2 (en) Detecting irises and pupils in images of humans
US8611615B2 (en) Image processing apparatus and method, and program
Amirshahi et al. Jenaesthetics subjective dataset: analyzing paintings by subjective scores
CN110619301A (en) Emotion automatic identification method based on bimodal signals
US8687894B2 (en) Continuous edge and detail mapping using a weighted monotony measurement
CN101390128A (en) Detecting method and detecting system for positions of face parts
CN105303151A (en) Human face similarity detection method and apparatus
CN104867125A (en) Image obtaining method and image obtaining device
CN113038272B (en) Method, device and equipment for automatically editing baby video and storage medium
CN108389182B (en) Image quality detection method and device based on deep neural network
CN111654694A (en) Quality evaluation method and device of image processing algorithm and electronic equipment
CN102998095B (en) The detection method of a kind of naked-eye stereoscopic display and device
Perrin et al. EyeTrackUAV2: A large-scale binocular eye-tracking dataset for UAV videos
CN112137576B (en) Method and system for detecting observation and reading ability based on eye movement data
Shen et al. Color enhancement algorithm based on Daltonization and image fusion for improving the color visibility to color vision deficiencies and normal trichromats
CN113327020A (en) Teaching quality evaluation system
Hu et al. Jpeg ringing artifact visibility evaluation
KR101331055B1 (en) Visual aid system based on the analysis of visual attention and visual aiding method for using the analysis of visual attention
CN109752852B (en) Display system for head-mounted equipment and design method thereof
CN115756285A (en) Screen display brightness adjusting method and device, storage medium and electronic equipment
WO2022137601A1 (en) Visual distance estimation method, visual distance estimation device, and visual distance estimation program
CN112801997B (en) Image enhancement quality evaluation method, device, electronic equipment and storage medium
CN110175531B (en) Attitude-based examinee position positioning method
CN114817607A (en) Image detection method, device, equipment and storage medium
Syahbana et al. Nystagmus estimation for dizziness diagnosis by pupil detection and tracking using mexican-hat-type ellipse pattern matching

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PP01 Preservation of patent right

Effective date of registration: 20221020
Granted publication date: 20210709