CN110464365B - Attention degree determination method, device, equipment and storage medium

Attention degree determination method, device, equipment and storage medium

Info

Publication number
CN110464365B
CN110464365B (application CN201810444687.6A)
Authority
CN
China
Prior art keywords: determining, point, interest, region, gazing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810444687.6A
Other languages
Chinese (zh)
Other versions
CN110464365A (en)
Inventor
姚洋 (Yao Yang)
晏婷 (Yan Ting)
谢津 (Xie Jin)
王龙飞 (Wang Longfei)
周晖晖 (Zhou Huihui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201810444687.6A
Publication of CN110464365A
Application granted
Publication of CN110464365B
Legal status: Active
Anticipated expiration

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/163 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state by tracking eye movement, gaze, or pupil change

Abstract

The embodiment of the invention discloses an attention degree determination method, apparatus, device and storage medium. The method comprises the following steps: acquiring eye movement data of a tested user while watching a stimulation image, and determining a gazing point position based on the eye movement data; determining the gazing duration for which the tested user gazes at each region of interest according to the acquisition frequency of the gazing point positions and the positional relationship between the gazing point positions and each region of interest contained in the stimulation image; and determining the attention degree of the tested user to each region of interest according to the gazing duration and the area of the corresponding region of interest. This technical scheme improves the accuracy of determining the attention degree of the tested user and ensures the reliability of subsequent brain-related research on the tested user.

Description

Attention degree determination method, device, equipment and storage medium
Technical Field
The embodiment of the invention relates to the field of neurobiology, in particular to a method, a device, equipment and a storage medium for determining attention.
Background
In neurobiology, eye movement is affected by a variety of cognitive factors. Many neuropsychiatric diseases, such as autism, schizophrenia, etc., have general abnormalities in cognitive abilities (attention, memory, decision, etc.), and the eye movement characteristics (e.g., eye movement trajectory, eye movement speed, eye movement image, etc.) of the patients may be significantly different from those of normal individuals. Therefore, it is important to determine the attention of the user to be tested based on the eye movement characteristics to study the brain activity rule of the user to be tested.
The existing eye movement data analysis tool is mainly used in the fields of user interface usability evaluation, advertisement and product design evaluation, driving behavior detection and the like. However, the attention of the tested user determined by the commercial eye movement data analysis tool is not accurate enough to meet the requirement of subsequent research, and the reliability of the brain-related research of the tested user is affected.
Disclosure of Invention
The invention provides a method, a device, equipment and a storage medium for determining attention, which are used for improving the accuracy of the attention and ensuring the reliability of the subsequent brain-related research of a user to be tested.
In a first aspect, an embodiment of the present invention provides a method for determining a degree of attention, where the method includes:
acquiring eye movement data of a tested user when watching a stimulation image, and determining a gazing point position based on the eye movement data; wherein the stimulation image includes at least one region of interest;
determining, according to the acquisition frequency of the gazing point positions and the positional relationship between the gazing point positions and the regions of interest, the gazing duration for which the tested user gazes at each region of interest in the stimulation image;
and determining the attention degree of the tested user to each region of interest according to the gazing duration and the area of the corresponding region of interest.
In a second aspect, an embodiment of the present invention further provides an attention degree determining apparatus, where the apparatus includes:
the device comprises a gazing point position determining module, a gazing point position determining module and a control module, wherein the gazing point position determining module is used for acquiring eye movement data when a tested user watches a stimulation image and determining a gazing point position based on the eye movement data; wherein the stimulation image includes at least one region of interest;
the gazing duration determining module is used for determining, according to the acquisition frequency of the gazing point positions and the positional relationship between the gazing point positions and the regions of interest, the gazing duration for which the tested user gazes at each region of interest in the stimulation image;
and the attention degree determining module is used for determining the attention degree of the tested user to each region of interest according to the gazing duration and the area of the corresponding region of interest.
In a third aspect, an embodiment of the present invention further provides a terminal device, including an input apparatus, where the terminal device further includes:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the attention degree determination method provided in any embodiment of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a method for determining a degree of attention according to any embodiment of the present invention.
According to the embodiment of the invention, eye movement data of the tested user while watching the stimulation image are acquired, and the gazing point position is determined based on the eye movement data; the gazing duration for which the tested user gazes at each region of interest is determined according to the acquisition frequency of the gazing point positions and the positional relationship between the gazing point positions and each region of interest contained in the stimulation image; and the attention degree of the tested user to each region of interest is determined according to the gazing duration and the area of the corresponding region of interest. This technical scheme solves the problem that the attention degree determined by the prior art cannot meet research requirements, improves the accuracy of determining the attention degree of the tested user, and ensures the reliability of subsequent brain-related research on the tested user.
Drawings
Fig. 1 is a schematic flow chart of a method for determining attention according to a first embodiment of the present invention;
fig. 2A is a schematic flow chart of a method for determining attention according to a second embodiment of the present invention;
FIG. 2B is a flowchart illustrating another attention determination method according to a second embodiment of the present invention;
FIG. 2C is a schematic diagram of a region-of-interest expansion method in an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an attention degree determination apparatus according to a third embodiment of the present invention;
fig. 4 is a schematic diagram of a hardware structure of a terminal device in the fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a schematic flow chart of a method for determining attention degree according to a first embodiment of the present invention. The method is applicable to the case of determining the attention degree of a tested user to the regions of interest in a stimulation image while the user views that image; it may be executed by an attention degree determination apparatus, which is implemented in software and/or hardware and configured in a terminal device. The method includes:
s110, eye movement data of the user to be tested when watching the stimulation image is obtained, and the position of the gazing point is determined based on the eye movement data.
The tested user may be a human or a non-human primate used for brain cognition and brain function research. Illustratively, the tested user may be a monkey suffering from autism. The stimulation image may be an image in formats such as JPEG, PNG or BMP, switched at a certain frequency to give the tested user visual stimulation. The stimulation images may be of a plurality of types according to testing requirements. Illustratively, when the tested user is a monkey, the image types may include a monkey face type, a monkey interaction type, a complex scene, a simple background, and the like. A complex scene may be a scene image corresponding to a living environment such as a forest or a mountain; a simple background may be a scene image corresponding to an open environment such as a green field.
The eye movement data may be eye movement images captured or collected by an image acquisition device while the tested user watches the stimulation image, or eyeball rotation data of the tested user obtained with the aid of an eye tracker. The data type of the eye movement data may be monocular data, binocular data, or mixed monocular and binocular data; in actual operation, the data type can be determined from the amount of eye movement data acquired in a single sampling. When the data type is monocular data, the current eye movement data can be processed directly; when the data type is binocular data, the binocular data may be split into monocular data before processing.
Wherein the region of interest is an image region related to the cognitive ability of the tested user determined based on the image types of the different stimulation images. Illustratively, when the image type is a monkey face type, the region of interest includes at least: eye region, nose region, and lip region. Wherein the stimulation image comprises at least one region of interest.
In the step, a display device is arranged at a preset distance in front of a user to be tested for displaying a stimulation image, and then the eye movement data when the user to be tested watches the display screen of the display device is acquired through the arranged eye movement data acquisition device. The terminal equipment acquires the collected eye movement data and determines the gazing point position of the tested user when the tested user receives the stimulation image based on the eye movement data.
The gazing point position may be coordinate point data in a rectangular coordinate system defined on the two-dimensional plane of the display screen. The origin of coordinates can be set by the tester as needed; illustratively, the center point of the display screen may be chosen as the origin. It should be noted that the eye movement data acquisition device may be worn at the head and eye position of the tested user, disposed on the display device, or disposed in any other position convenient for acquiring the eye movement data of the tested user. The preset distance can also be set by the tester as needed; illustratively, it may be 57 centimeters.
S120, determining, according to the acquisition frequency of the gazing point positions and the positional relationship between the gazing point positions and the regions of interest, the gazing duration for which the tested user gazes at each region of interest in the stimulation image.
The acquisition frequency is the frequency at which the image acquisition device captures images of the tested user, or the working frequency of the eye tracker when eyeball rotation data are acquired with an eye tracker. The positional relationship includes an inclusion relationship and a non-inclusion relationship: the inclusion relationship is the relationship type when the gazing point position is located within the region of interest; accordingly, the non-inclusion relationship is the relationship type when the gazing point position is not within the region of interest. The gazing duration may be the accumulated time, counted within a preset time period while the tested user is given visual stimulation, during which the tested user gazes at a certain region of interest of the stimulation image. The preset time period can be set by the tester as needed.
In this step, the number of gazing points whose positional relationship with each region of interest is an inclusion relationship is counted, and the gazing duration for which the tested user gazes at each region of interest of the stimulation image is determined from this number and the acquisition frequency of the gazing point positions.
S130, determining the attention degree of the tested user to each region of interest according to the gazing duration and the area of the corresponding region of interest.
The area of the region of interest may be the number of pixel points of the region of interest in the stimulation image.
In this step, the terminal device may determine the attention degree of the tested user to a region of interest as the ratio of the gazing duration to the area of the corresponding region of interest; alternatively, an initial attention degree may be determined from this ratio and then weighted by the attention weight of the region of interest in the stimulation image to obtain the corresponding attention degree; of course, other mathematical operations may also be used to determine the attention degree of the region of interest.
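As an illustration, the ratio-based variant described above can be sketched in Python as follows (a minimal sketch; the function name and the sample values are assumptions for illustration, not taken from the original):

def attention_degree(gaze_duration_s, roi_area_px, weight=1.0):
    # Attention degree of one region of interest: ratio of gazing
    # duration (seconds) to ROI area (pixels), optionally scaled by the
    # ROI's attention weight. Normalizing by area keeps large and small
    # regions comparable.
    return weight * gaze_duration_s / roi_area_px

# Illustrative values for two regions of a monkey-face image
rois = {"eyes": (1.25, 5200), "lips": (0.40, 3100)}  # (duration s, area px)
scores = {name: attention_degree(t, a) for name, (t, a) in rois.items()}
average_attention = sum(scores.values()) / len(scores)  # per-image average

The last line also anticipates the average attention degree over all regions of interest described further below.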
It should be noted that the determined attention may be output and displayed in the form of a bar graph, a line graph and/or a data list.
According to the embodiment of the invention, eye movement data of the tested user while watching the stimulation image are acquired, and the gazing point position is determined based on the eye movement data; the gazing duration for which the tested user gazes at each region of interest is determined according to the acquisition frequency of the gazing point positions and the positional relationship between the gazing point positions and each region of interest contained in the stimulation image; and the attention degree of the tested user to each region of interest is determined according to the gazing duration and the area of the corresponding region of interest. This technical scheme solves the problem that the attention degree determined by the prior art cannot meet research requirements, improves the accuracy of determining the attention degree of the tested user, and ensures the reliability of subsequent brain-related research on the tested user.
On the basis of the technical solutions of the above embodiments, further, the eye movement data includes an eyeball rotation degree;
accordingly, determining a gaze location based on the eye movement data comprises:
the gazing point position is determined from the eyeball rotation degrees according to a conversion formula (reproduced only as an image in the original publication); wherein (X′, Y′) is the eyeball rotation degree; W is the width of the stimulation image; H is the height of the stimulation image; (X, Y) is the gazing point position; and S is the pixel ratio used during acquisition of the eye movement data.
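Because the conversion formula itself is reproduced only as an image in the original publication, the following Python sketch should be read as an assumption: it places the origin at the image center and treats S as a linear degrees-to-pixels scale, which is consistent with the symbols listed above but is not the patent's verbatim formula:

def gaze_point(x_deg, y_deg, w, h, s):
    # ASSUMED mapping from eyeball rotation degrees (X', Y') to a gazing
    # point (X, Y) on the stimulation image: screen-centered origin plus
    # a linear pixel ratio S (pixels per degree).
    x = w / 2.0 + x_deg * s
    y = h / 2.0 + y_deg * s
    return x, y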
On the basis of the technical solutions of the foregoing embodiments, further, after determining the attention degree of the tested user to each region of interest according to the gazing duration and the area of the corresponding region of interest, the method further includes:
and determining the average attention of the tested user to the stimulation image according to the attention of the tested user to each region of interest.
Example two
Fig. 2A and fig. 2B are schematic flow diagrams of a method for determining a degree of attention according to a second embodiment of the present invention, and this embodiment further adds and refines features based on the technical solutions of the above embodiments.
Further, the step of determining the gazing duration for which the tested user gazes at each region of interest in the stimulation image according to the acquisition frequency of the gazing point positions and the positional relationship between the gazing point positions and the regions of interest is refined into: determining the first boundary lines according to each pair of adjacent position coordinates in the area coordinate set; determining the second boundary line according to the first position coordinate and the last position coordinate in the area coordinate set; emitting a ray in a set direction with the gazing point corresponding to the gazing position information as the starting point, and determining the number of intersection points of the ray with the first boundary lines and the second boundary line; if the number of the intersection points is odd, determining that the gazing point corresponding to the gazing position information is located in the region of interest corresponding to the area coordinate set; and determining the gazing duration based on the acquisition frequency and the number of gazing points located in the region of interest, so as to complete the gazing duration determining step.
Based on the technical solutions of the above embodiments, as shown in fig. 2A, the feature "receiving at least one stimulation image and determining at least one region of interest in the stimulation image" is further refined into S201A-S205A, so that the regions of interest on the stimulation image are determined by hand-drawn delineation combined with coordinate position capture.
The attention degree determination method shown in fig. 2A includes:
S201A, receiving at least one stimulation image, and determining a corresponding region of interest threshold value according to the image type of the stimulation image.
The region-of-interest threshold identifies the number of regions of interest contained in a stimulus image.
In this step, the terminal device stores in advance the correspondence between the stimulation images of different image types and the region-of-interest threshold. Illustratively, when the image type of the stimulation image is a monkey face type, the region-of-interest threshold is 3, corresponding to the eye region, the nose region, and the lip region, respectively. The terminal equipment determines the region-of-interest threshold value corresponding to the received stimulation image according to the image type of the stimulation image.
S202A, receiving a touch screen clicking instruction, and capturing the position coordinates of the current touch point.
When the tester clicks the touch screen of the terminal device, a touch screen click instruction is generated. After receiving the touch screen click instruction, the terminal device captures the position coordinate of the current touch point. The position coordinates of the touch points may be coordinate point data in a rectangular coordinate system; it is only necessary to ensure that the coordinate system and the origin of coordinates are the same as those used for the gazing point position.
S203A, judging whether the number of the received touch screen clicking instructions reaches a preset marking point threshold value; if so, go to S204A; if not, S202A is executed.
The preset marking point threshold is used for representing the number of marking points required to complete the delineation of one region of interest. Illustratively, to delineate the region of interest formed by the eye region in a monkey-face stimulation image, the tester is required to click the display screen 16 times, correspondingly generating 16 touch screen click instructions. It should be noted that the preset marking point threshold required for each region of interest may be the same or different.
When the number of touch screen click instructions received by the terminal device reaches the preset marking point threshold, the tester has finished marking the current region of interest to be delineated and the position coordinates corresponding to all the touch points have been captured; the corresponding position coordinates can then be stored directly in the marking order of the touch points to form the area coordinate set of the current region of interest. When the number of received touch screen click instructions has not reached the preset marking point threshold, the current touch points (that is, marking points) are not yet sufficient to form the region being delineated, so the terminal continues to receive touch screen click instructions and capture the position coordinates of the current touch points.
S204A, the position coordinates corresponding to each touch point are sequentially stored to form an area coordinate set.
S205A, judging whether the number of the formed area coordinate sets meets the area-of-interest threshold value; if so, executing S210; if not, S202A is executed.
When the number of the area coordinate sets determined by the terminal device satisfies the area-of-interest threshold, it indicates that all the areas of interest contained in the current stimulation image have been defined, and subsequent operations may be performed. When the number of the area coordinate sets determined by the terminal device does not meet the area-of-interest threshold, it indicates that there is an area of interest that is not defined in the current stimulation image, and therefore the operations of receiving a touch screen click instruction and capturing the position coordinate of the current touch point need to be continuously performed.
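The S202A-S205A loop can be sketched as follows (a minimal sketch; the names touch_points, marker_threshold and roi_threshold are illustrative assumptions):

def collect_region_sets(touch_points, marker_threshold, roi_threshold):
    # touch_points yields (x, y) coordinates in click order (S202A);
    # marker_threshold is the preset marking point threshold (S203A);
    # roi_threshold is the region-of-interest threshold for this image
    # type (S205A).
    region_sets, current = [], []
    for xy in touch_points:
        current.append(xy)                      # S202A: capture coordinate
        if len(current) == marker_threshold:    # S203A: region complete?
            region_sets.append(current)         # S204A: store in click order
            current = []
            if len(region_sets) == roi_threshold:  # S205A: all ROIs defined
                break
    return region_sets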
S210, acquiring eye movement data when a tested user watches the stimulation image, and determining a gazing point position based on the eye movement data; wherein the stimulation image comprises at least one region of interest.
This step corresponds to S110, and is not described herein again. It should be noted that S210 may be located before S201A or after S205A, and the specific position thereof is not limited at all.
The step of determining, according to the acquisition frequency of the gazing point positions and the positional relationship between the gazing point positions and the regions of interest, the gazing duration for which the tested user gazes at each region of interest in the stimulation image is further refined into S221-S225.
S221, determining the first boundary lines according to each pair of adjacent position coordinates in the area coordinate set.
The terminal device acquires the area coordinate set corresponding to each region of interest in the current stimulation image. Assume an area coordinate set A = {A₁, A₂, …, Aₙ}; the determined first boundary lines comprise A₁A₂, A₂A₃, A₃A₄, …, Aₙ₋₂Aₙ₋₁ and Aₙ₋₁Aₙ, n−1 lines in total, where Aᵢ denotes the position coordinate of touch point Aᵢ (i = 1, 2, …, n).
S222, determining a second boundary line according to the first position coordinate and the last position coordinate in the area coordinate set.
Since the first boundary lines cannot form a closed area by themselves, the first touch point and the last touch point in the area coordinate set are connected to form the second boundary line. Taking the aforementioned area coordinate set A as an example, the second boundary line is determined to be AₙA₁.
S223, emitting a ray in a set direction with the gazing point corresponding to the gazing position information as the starting point, and determining the number of intersection points of the ray with the first boundary lines and the second boundary line.
S224, if the number of the intersection points is odd, determining that the gazing point corresponding to the gazing position information is located in the region of interest corresponding to the area coordinate set.
S225, determining the gazing duration based on the acquisition frequency and the number of the gazing points in the region of interest.
For example, when the eye tracker acquires eye movement data at 1000 Hz, each gazing point may be considered to last 0.001 second, and the gazing duration of each region of interest is determined by counting the number of gazing points located in the current region of interest (that is, gazing points whose positional relationship with the region of interest is an inclusion relationship).
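Under the conventions above, S221-S225 can be sketched as follows (the +x ray direction and the helper names are assumptions for illustration):

def gaze_point_inside(px, py, region):
    # Even-odd ray casting (S221-S224): cast a ray in the +x direction
    # from the gazing point and count crossings with the boundary lines.
    # Adjacent vertices give the first boundary lines; the last-to-first
    # edge gives the second boundary line. An odd count means inside.
    inside = False
    n = len(region)
    for i in range(n):
        x1, y1 = region[i]
        x2, y2 = region[(i + 1) % n]  # i == n-1 wraps: second boundary line
        if (y1 > py) != (y2 > py):    # edge straddles the ray's y level
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:          # crossing lies to the right of the point
                inside = not inside
    return inside

def gaze_duration_seconds(gaze_points, region, sampling_hz=1000):
    # S225: each sample lasts 1/sampling_hz seconds (0.001 s at 1000 Hz),
    # so the duration is the in-region sample count over the rate.
    hits = sum(gaze_point_inside(x, y, region) for x, y in gaze_points)
    return hits / sampling_hz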
S230, determining the attention degree of the tested user to each interested area according to the watching duration and the area of the corresponding interested area.
The step is the same as the step S130, and is not described herein again.
According to the embodiment of the invention, the method for determining the attention degree is enriched by refining the gazing duration determining step, and the method for determining the regions of interest is perfected by adding hand-drawn delineation combined with coordinate position capture. By adopting the technical scheme, the problem that the attention degree determined by the prior art cannot meet research requirements is solved, the accuracy of determining the attention degree of the tested user is improved, and the reliability of subsequent brain-related research on the tested user is ensured.
Based on the technical solution shown in fig. 2A, the attention determining method shown in fig. 2B replaces only the region of interest determining steps refined in S201A-S205A with S201B-S205B, so as to determine a new region of interest by expanding or contracting a known region of interest.
The attention degree determination method shown in fig. 2B includes:
S201B, receiving at least one stimulation image, and acquiring a touch point in a preset area coordinate set corresponding to the stimulation image as a point to be expanded.
The preset area coordinate set may be a position coordinate set of the mark points of the region of interest stored in advance, or may be an area coordinate set determined by the method of S201A to S205A in fig. 2A.
When the terminal device receives the current stimulation image, because the current stimulation image already contains the preset region of interest, the terminal device can expand by taking a preset region coordinate set corresponding to the preset region of interest as a basis to finally determine a new region of interest.
S202B, acquiring position coordinates of two adjacent sides of the current point to be expanded as a front touch point coordinate and a rear touch point coordinate respectively.
Referring to fig. 2C, let the preset area coordinate set be B = {B₁, B₂, …, Bₙ}, where Bᵢ denotes the position coordinate of touch point Bᵢ (i = 1, 2, …, n). If Bₖ is selected as the current point to be expanded, then Bₖ₋₁ is the front touch point coordinate corresponding to front touch point Bₖ₋₁, and Bₖ₊₁ is the rear touch point coordinate corresponding to rear touch point Bₖ₊₁.
S203B, determining that a vector formed by the front touch point coordinate and the position coordinate of the point to be expanded is a first vector, and determining that a vector formed by the position coordinate of the point to be expanded and the rear touch point coordinate is a second vector.
Taking the preset area coordinate set B as an example, the first vector is the vector from Bₖ₋₁ to Bₖ, and the second vector is the vector from Bₖ to Bₖ₊₁.
S204B, determining the coordinates of the target expansion points corresponding to the points to be expanded based on the first vector, the second vector and the preset expansion distance.
And the preset expansion interval is the distance between the preset region of interest and the corresponding new region of interest. It should be noted that the preset expansion interval may be set by the test user as needed, when the preset expansion interval is a positive value, it indicates that the preset region of interest needs to be expanded, and when the preset expansion interval is a negative value, it indicates that the preset region of interest needs to be reduced.
Specifically, the coordinates of the target expansion point Cₖ corresponding to the current point to be expanded Bₖ are determined according to a formula (reproduced only as an image in the original publication), wherein L is the preset expansion interval and the remaining terms are the offset vectors by which the first vector Bₖ₋₁Bₖ and the second vector BₖBₖ₊₁ need to be expanded.
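Since the expansion formula is reproduced only as an image, the following Python sketch shows one common way to realize S202B-S204B and is an assumption rather than the patent's verbatim formula: each point to be expanded Bₖ is offset along the outward normals of its first and second vectors, scaled by the preset expansion interval (vertices are assumed to be ordered counter-clockwise; all names are illustrative):

import math

def expand_region(points, expansion):
    # Offset every vertex of a closed region outward by roughly
    # `expansion` pixels; a negative value shrinks the region instead.
    def outward_normal(ax, ay, bx, by):
        # Unit normal of edge (a -> b), pointing out of a CCW polygon
        dx, dy = bx - ax, by - ay
        length = math.hypot(dx, dy) or 1.0
        return dy / length, -dx / length

    n = len(points)
    expanded = []
    for k in range(n):
        x0, y0 = points[(k - 1) % n]   # front touch point B(k-1)
        x1, y1 = points[k]             # point to be expanded B(k)
        x2, y2 = points[(k + 1) % n]   # rear touch point B(k+1)
        n1x, n1y = outward_normal(x0, y0, x1, y1)  # normal of first vector
        n2x, n2y = outward_normal(x1, y1, x2, y2)  # normal of second vector
        # Averaging the two edge normals reproduces the interval exactly
        # on straight edges and approximates it at corners.
        expanded.append((x1 + expansion * (n1x + n2x) / 2.0,
                         y1 + expansion * (n1y + n2y) / 2.0))
    return expanded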
And S205B, storing the determined coordinates of the target extension points in a storage order of the position coordinates of the touch points in the preset area coordinate set to form a new area coordinate set of the stimulation image.
S210, S221 to S225, and S230 correspond to the foregoing steps one to one, and are not described herein again.
The embodiment of the invention determines a new region of interest by expanding or contracting an existing region of interest, enriching the ways in which regions of interest can be delineated and enabling the reuse of existing data. By adopting the technical scheme, the problem that the attention degree determined by the prior art cannot meet research requirements is solved, the accuracy of determining the attention degree of the tested user is improved, and the reliability of subsequent brain-related research on the tested user is ensured.
EXAMPLE III
Fig. 3 is a schematic structural diagram of an attention degree determining apparatus according to a third embodiment of the present invention, which is applicable to a case of determining an attention degree of a region of interest in an image when a user to be tested views a stimulus image, and includes: a gaze point location determination module 310, a gaze duration determination module 320, and a degree of attention determination module 330. Wherein:
a gazing point position determining module 310, configured to acquire eye movement data of the tested user while watching the stimulation image, and determine the gazing point position based on the eye movement data; wherein the stimulation image includes at least one region of interest;
a gazing duration determining module 320, configured to determine, according to the acquisition frequency of the gazing point positions and the positional relationship between the gazing point positions and the regions of interest, the gazing duration for which the tested user gazes at each region of interest in the stimulation image;
and an attention degree determining module 330, configured to determine the attention degree of the tested user to each region of interest according to the gazing duration and the area of the corresponding region of interest.
In the embodiment of the invention, the eye movement data of the tested user when watching the stimulation image is obtained through the gazing point position determining module 310, and the gazing point position is determined based on the eye movement data; determining the gazing duration of each region of interest watched by the user to be tested according to the acquisition frequency of the gazing point position and the position relation between the gazing point position and each region of interest contained in the stimulation image by using a gazing duration determining module 320; the attention degree determination module 330 determines the attention degree of the tested user to each region of interest according to the gazing duration and the area of the corresponding region of interest. By adopting the technical scheme, the problem that the attention determined by the prior art cannot meet the research requirement is solved, the accuracy of determining the attention of the user to be tested is improved, and the reliability of the subsequent brain-related research of the user to be tested is ensured.
On the basis of the technical solutions of the above embodiments, the apparatus further includes:
and the region-of-interest determining module is used for receiving at least one stimulation image and determining at least one region of interest in the stimulation image.
On the basis of the technical solutions of the foregoing embodiments, further, the region of interest determining module includes:
the threshold value determining unit is used for receiving at least one stimulation image and determining a corresponding region-of-interest threshold value according to the image type of the stimulation image;
the first area coordinate set forming unit is used for receiving touch screen click instructions, capturing position coordinates of current touch points, and sequentially storing the position coordinates corresponding to the touch points to form an area coordinate set when the number of the received touch screen click instructions reaches a preset mark point threshold value;
and the loop judgment unit is used for returning to execute the operation of receiving a touch screen click instruction and capturing the position coordinates of the current touch point until the number of the formed area coordinate sets meets the region-of-interest threshold.
On the basis of the technical solutions of the foregoing embodiments, further, the region of interest determining module includes:
the device comprises a to-be-expanded point acquisition unit, a to-be-expanded point acquisition unit and a to-be-expanded point acquisition unit, wherein the to-be-expanded point acquisition unit is used for receiving at least one stimulation image and acquiring a touch point in a preset area coordinate set corresponding to the stimulation image as a to-be-expanded point;
the adjacent touch point acquisition unit is used for acquiring position coordinates of two adjacent sides of the current point to be expanded as a front touch point coordinate and a rear touch point coordinate respectively;
the vector determining unit is used for determining that a vector formed by the position coordinates of the front touch point and the to-be-expanded point is a first vector and a vector formed by the position coordinates of the to-be-expanded point and the rear touch point is a second vector;
the target expansion point determining unit is used for determining target expansion point coordinates corresponding to the points to be expanded on the basis of the first vector, the second vector and a preset expansion distance;
and the second area coordinate set forming unit is used for storing the determined coordinates of each target expansion point according to the position coordinate storage sequence of each touch point in the preset area coordinate set to form a new area coordinate set of the stimulation image.
On the basis of the technical solutions of the foregoing embodiments, further, the gaze duration determining module 320 includes:
a first boundary line determining unit configured to determine a first boundary line based on each adjacent position coordinate in the area coordinate set, respectively;
the second boundary line determining unit is used for determining a second boundary line according to the first position coordinate and the last position coordinate in the area coordinate set;
an intersection number determining unit, configured to emit a ray in a set direction with a gaze point corresponding to the gaze position information as a start point, and determine the number of intersections of the ray with the first boundary line and the second boundary line;
a position relation determining unit, configured to determine that the gaze point corresponding to the gaze position information is located in the region of interest corresponding to the region coordinate set when the number of the intersection points is an odd number;
and the gazing duration determining unit is used for determining the gazing duration based on the acquisition frequency and the number of the gazing points in the region of interest.
On the basis of the technical solutions of the above embodiments, further, the eye movement data includes an eyeball rotation degree;
accordingly, the gazing point position determining module 310 includes:
a gazing point position determining unit, configured to determine the gazing point position from the eyeball rotation degrees according to a conversion formula (reproduced only as an image in the original publication);
wherein (X′, Y′) is the eyeball rotation degree; W is the width of the stimulation image; H is the height of the stimulation image; (X, Y) is the gazing point position; and S is the pixel ratio used during acquisition of the eye movement data.
On the basis of the technical solutions of the above embodiments, the method further includes:
and the average attention determining module is used for determining the average attention of the tested user to the stimulation image according to the attention of the tested user to each region of interest.
The attention degree determining device can execute the attention degree determining method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of executing the attention degree determining method.
Example four
Fig. 4 is a schematic diagram of a hardware structure of a terminal device according to a fourth embodiment of the present invention. As shown in fig. 4, the terminal device includes an input device 410, a processor 420, and a storage device 430.
The input device 410 is used for acquiring eye movement data when the tested user watches the stimulation image;
one or more processors 420;
a storage device 430 for storing one or more programs.
Further, the terminal device further comprises an output device for displaying the stimulation image.
In fig. 4, one processor 420 is taken as an example. The input device 410 in the terminal device may be connected to the processor 420 and the storage device 430 through a bus or other means, and the processor 420 and the storage device 430 are likewise connected through a bus or other means; a bus connection is taken as the example in fig. 4.
In this embodiment, the processor 420 in the terminal device may determine the gazing point position according to the eye movement data acquired by the input device 410; determine, according to the acquisition frequency of the gazing point positions and the positional relationship between the gazing point positions and the regions of interest, the gazing duration for which the tested user gazes at each region of interest in the stimulation image; and determine the attention degree of the tested user to each region of interest according to the gazing duration and the area of the corresponding region of interest.
The storage device 430 in the terminal device, as a computer-readable storage medium, may be used to store one or more programs, which may be software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the attention degree determination method in the embodiment of the present invention (for example, the gazing point position determining module 310, the gazing duration determining module 320 and the attention degree determining module 330 shown in fig. 3). The processor 420 executes the various functional applications and data processing of the terminal device by running the software programs, instructions and modules stored in the storage device 430, that is, implements the attention degree determination method in the above method embodiments.
The storage device 430 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data or the like (such as eye movement data, gaze position, acquisition frequency, gaze duration, and region of interest area in the above-described embodiments). Further, the storage 430 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, storage 430 may further include memory located remotely from processor 420, which may be connected to the terminal device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Furthermore, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when executed by an attention degree determination apparatus, the program implements the attention degree determination method provided by the embodiments of the present invention, the method including: acquiring eye movement data of a tested user while watching a stimulation image, and determining a gazing point position based on the eye movement data, wherein the stimulation image includes at least one region of interest; determining, according to the acquisition frequency of the gazing point positions and the positional relationship between the gazing point positions and the regions of interest, the gazing duration for which the tested user gazes at each region of interest in the stimulation image; and determining the attention degree of the tested user to each region of interest according to the gazing duration and the area of the corresponding region of interest.
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly, can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
It is to be noted that the foregoing description is only exemplary of the invention and that the principles of the technology may be employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in some detail by the above embodiments, the invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the invention, and the scope of the invention is determined by the scope of the appended claims.

Claims (6)

1. A method for determining attention, comprising:
acquiring eye movement data of a tested user when watching a stimulation image, and determining a gazing point position based on the eye movement data; wherein the stimulation image includes at least one region of interest;
determining, according to the acquisition frequency of the gazing point positions and the positional relationship between the gazing point positions and the regions of interest, the gazing duration for which the tested user gazes at each region of interest in the stimulation image;
determining the attention degree of the tested user to each region of interest according to the gazing duration and the area of the corresponding region of interest;
before the acquiring of the gazing position information when the tested user views the stimulation image within a preset time period, further comprising:
receiving at least one of the stimulation images and determining at least one region of interest in the stimulation image;
the receiving at least one of the stimulation images and determining at least one region of interest in the stimulation image includes:
receiving at least one stimulation image, and determining a corresponding region-of-interest threshold value according to the image type of the stimulation image;
receiving a touch screen click instruction, capturing position coordinates of a current touch point, and sequentially storing the position coordinates corresponding to each touch point to form an area coordinate set when the number of the received touch screen click instructions reaches a preset mark point threshold value;
returning to execute the operation of receiving a touch screen click instruction and capturing the position coordinates of the current touch point until the number of the formed area coordinate sets meets the threshold of the region of interest;
the receiving at least one of the stimulation images and determining at least one region of interest in the stimulation image includes:
receiving at least one stimulation image, and acquiring a touch point in a preset area coordinate set corresponding to the stimulation image as a point to be expanded;
acquiring position coordinates of two adjacent sides of a current point to be expanded as a front touch point coordinate and a rear touch point coordinate respectively;
determining that a vector formed by the position coordinates of the front touch point and the to-be-expanded point is a first vector, and a vector formed by the position coordinates of the to-be-expanded point and the rear touch point is a second vector;
determining target expansion point coordinates corresponding to the points to be expanded based on the first vector, the second vector and a preset expansion distance;
storing the determined coordinates of each target extension point according to the position coordinate storage sequence of each touch point in the preset area coordinate set to form a new area coordinate set of the stimulation image;
the determining, according to the acquisition frequency of the gazing point positions and the positional relationship between the gazing point positions and the regions of interest, the gazing duration for which the tested user gazes at each region of interest in the stimulation image includes:
determining a first boundary line according to each adjacent position coordinate in the area coordinate set;
determining a second boundary line according to the first position coordinate and the last position coordinate in the area coordinate set;
emitting a ray in a set direction with the gazing point corresponding to the gazing position information as the starting point, and determining the number of intersection points of the ray with the first boundary lines and the second boundary line;
if the number of the intersection points is odd, determining that the gazing point corresponding to the gazing position information is located in the region of interest corresponding to the area coordinate set;
and determining the gazing duration based on the acquisition frequency and the number of the gazing points in the interested area.
2. The method of claim 1, wherein the eye movement data comprises a degree of eye rotation;
accordingly, determining a gaze location based on the eye movement data comprises:
determining the gazing point position from the eyeball rotation degrees according to a conversion formula (reproduced only as an image in the original publication);
wherein X′ and Y′ are the eyeball rotation degrees; W is the width of the stimulation image; H is the height of the stimulation image; (X, Y) is the gazing point position; and S is the pixel ratio used during acquisition of the eye movement data.
3. The method of claim 1, further comprising, after determining the attention degree of the tested user to each of the regions of interest according to the gazing duration and the area of the corresponding region of interest:
and determining the average attention of the tested user to the stimulation image according to the attention of the tested user to each region of interest.
4. An attention degree determination device characterized by comprising:
the device comprises a gazing point position determining module, a gazing point position determining module and a control module, wherein the gazing point position determining module is used for acquiring eye movement data when a tested user watches a stimulation image and determining a gazing point position based on the eye movement data; wherein the stimulation image includes at least one region of interest;
the gazing duration determining module is used for determining, according to the acquisition frequency of the gazing point positions and the positional relationship between the gazing point positions and the regions of interest, the gazing duration for which the tested user gazes at each region of interest in the stimulation image;
the attention degree determining module is used for determining the attention degree of the tested user to each region of interest according to the gazing duration and the area of the corresponding region of interest;
a region-of-interest determination module for receiving at least one of the stimulation images and determining at least one region of interest in the stimulation image;
the region of interest determination module includes:
the threshold value determining unit is used for receiving at least one stimulation image and determining a corresponding region-of-interest threshold value according to the image type of the stimulation image;
the first area coordinate set forming unit is used for receiving touch screen click instructions, capturing position coordinates of current touch points, and sequentially storing the position coordinates corresponding to the touch points to form an area coordinate set when the number of the received touch screen click instructions reaches a preset mark point threshold value;
the loop judgment unit is used for returning to execute the operation of receiving a touch screen click instruction and capturing the position coordinates of the current touch point until the number of the formed area coordinate sets meets the region-of-interest threshold;
wherein the region of interest determination module comprises:
the device comprises a to-be-expanded point acquisition unit, a to-be-expanded point acquisition unit and a to-be-expanded point acquisition unit, wherein the to-be-expanded point acquisition unit is used for receiving at least one stimulation image and acquiring a touch point in a preset area coordinate set corresponding to the stimulation image as a to-be-expanded point;
the adjacent touch point acquisition unit is used for acquiring position coordinates of two adjacent sides of the current point to be expanded as a front touch point coordinate and a rear touch point coordinate respectively;
the vector determining unit is used for determining that a vector formed by the position coordinates of the front touch point and the to-be-expanded point is a first vector and a vector formed by the position coordinates of the to-be-expanded point and the rear touch point is a second vector;
the target expansion point determining unit is used for determining target expansion point coordinates corresponding to the points to be expanded on the basis of the first vector, the second vector and a preset expansion distance;
a second area coordinate set forming unit, configured to store the determined coordinates of each target expansion point according to the position coordinate storage sequence of each touch point in the preset area coordinate set to form a new area coordinate set of the stimulation image;
the gaze duration determination module comprises:
a first boundary line determining unit configured to determine a first boundary line based on each adjacent position coordinate in the area coordinate set, respectively;
the second boundary line determining unit is used for determining a second boundary line according to the first position coordinate and the last position coordinate in the area coordinate set;
an intersection number determining unit, configured to emit a ray in a set direction with a gaze point corresponding to the gaze position information as a start point, and determine the number of intersections of the ray with the first boundary line and the second boundary line;
a position relation determining unit, configured to determine that the gaze point corresponding to the gaze position information is located in the region of interest corresponding to the region coordinate set when the number of the intersection points is an odd number;
and the gazing duration determining unit is used for determining the gazing duration based on the acquisition frequency and the number of the gazing points in the region of interest.
5. A terminal device, comprising an input device, and further comprising:
one or more processors;
storage means for storing one or more programs;
the one or more programs are executable by the one or more processors to cause the one or more processors to implement a method of attentiveness determination as recited in any of claims 1-3.
6. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out a method of attention determination as claimed in any one of claims 1-3.
CN201810444687.6A 2018-05-10 2018-05-10 Attention degree determination method, device, equipment and storage medium Active CN110464365B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810444687.6A CN110464365B (en) 2018-05-10 2018-05-10 Attention degree determination method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810444687.6A CN110464365B (en) 2018-05-10 2018-05-10 Attention degree determination method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110464365A (en) 2019-11-19
CN110464365B true CN110464365B (en) 2022-08-12

Family

ID=68504164

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810444687.6A Active CN110464365B (en) 2018-05-10 2018-05-10 Attention degree determination method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110464365B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111695516B (en) * 2020-06-12 2023-11-07 百度在线网络技术(北京)有限公司 Thermodynamic diagram generation method, device and equipment
CN112949409A (en) * 2021-02-02 2021-06-11 首都师范大学 Eye movement data analysis method and device based on interested object and computer equipment
CN114063780A (en) * 2021-11-18 2022-02-18 兰州乐智教育科技有限责任公司 Method and device for determining user concentration degree, VR equipment and storage medium
CN114371781A (en) * 2021-12-31 2022-04-19 金地(集团)股份有限公司 User portrait generation method and system in real estate marketing
US20240036640A1 (en) * 2022-07-29 2024-02-01 Qualcomm Incorporated User attention determination for extended reality
CN115363581B (en) * 2022-08-19 2023-05-05 山东心法科技有限公司 Method, equipment and medium for predicting dysreading for young children
CN116841299B (en) * 2023-08-31 2023-12-22 之江实验室 Autonomous tour control method and device for tour guide robot

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101172034A (en) * 2006-11-03 2008-05-07 上海迪康医学生物技术有限公司 Eyeball moving track detecting method
CN101779960A (en) * 2010-02-24 2010-07-21 沃建中 Test system and method of stimulus information cognition ability value
CN101894380A (en) * 2010-07-14 2010-11-24 北京航空航天大学 Method for tracing target object in panoramic video automatically
CN102149320A (en) * 2008-04-28 2011-08-10 科技研究局(A*Star) Method and system for concentration detection
CN102566858A (en) * 2010-12-09 2012-07-11 联想(北京)有限公司 Touch control method and electronic equipment
CN103530618A (en) * 2013-10-23 2014-01-22 哈尔滨工业大学深圳研究生院 Non-contact sight tracking method based on corneal reflex
WO2016127248A1 (en) * 2015-02-10 2016-08-18 Abbas Mohamad Methods and systems relating to ratings and advertising content delivery
CN107224292A (en) * 2017-05-27 2017-10-03 西南交通大学 A kind of method of testing and system of the attention span of dispatcher
CN107256332A (en) * 2017-05-24 2017-10-17 上海交通大学 The electric experimental evaluation system and method for brain based on eye movement data
CN107390863A (en) * 2017-06-16 2017-11-24 北京七鑫易维信息技术有限公司 Control method and device, electronic equipment, the storage medium of equipment
CN107656613A (en) * 2017-09-08 2018-02-02 国网山东省电力公司电力科学研究院 A kind of man-machine interactive system and its method of work based on the dynamic tracking of eye
CN107929007A (en) * 2017-11-23 2018-04-20 北京萤视科技有限公司 A kind of notice and visual capacity training system and method that tracking and intelligent evaluation technology are moved using eye

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10334074A1 (en) * 2003-07-25 2005-02-24 Siemens Ag Medical 3-D image virtual channel viewing unit processes preoperative tomography data to show virtual channel linked to instrument position
EP2007271A2 (en) * 2006-03-13 2008-12-31 Imotions - Emotion Technology A/S Visual attention and emotional response detection and display system
CN102591575B (en) * 2011-12-26 2014-03-26 广东威创视讯科技股份有限公司 Method for deploying software on wide-screen display and device
EP2688283B1 (en) * 2012-07-20 2020-03-18 BlackBerry Limited Dynamic region of interest adaptation and image capture device providing same
CN103049199A (en) * 2012-12-14 2013-04-17 中兴通讯股份有限公司 Touch screen terminal, control device and working method of touch screen terminal
CN104750325B (en) * 2013-12-30 2018-02-06 西安易朴通讯技术有限公司 A kind of method for improving electric capacity touch screen and clicking on accuracy
JP6644505B2 (en) * 2015-09-24 2020-02-12 日本光電工業株式会社 Airway adapter and respiratory air flow sensor
CN107220230A (en) * 2016-03-22 2017-09-29 阿里巴巴集团控股有限公司 A kind of information collecting method and device, and a kind of intelligent terminal

Also Published As

Publication number Publication date
CN110464365A (en) 2019-11-19

Similar Documents

Publication Publication Date Title
CN110464365B (en) Attention degree determination method, device, equipment and storage medium
Henderson et al. Meaning guides attention in real-world scene images: Evidence from eye movements and meaning maps
Braunagel et al. Driver-activity recognition in the context of conditionally autonomous driving
Jiang et al. Learning to predict sequences of human visual fixations
CN101515199B (en) Character input device based on eye tracking and P300 electrical potential of the brain electricity
Toker et al. Individual user characteristics and information visualization: connecting the dots through eye tracking
Braunagel et al. Online recognition of driver-activity based on visual scanpath classification
Barthelmé et al. Modeling fixation locations using spatial point processes
Venugopal et al. Developing an application using eye tracker
CN106155308B (en) A kind of eye-tracking method and system based on memory and mark
Clarke et al. The saccadic flow baseline: Accounting for image-independent biases in fixation behavior
CN103324287A (en) Computer-assisted sketch drawing method and system based on eye movement and brush stroke data
CN110866450A (en) Parkinson disease monitoring method and device and storage medium
CN113723530A (en) Intelligent psychological assessment system based on video analysis and electronic psychological sand table
CN114022642B (en) Method, device, equipment, system and storage medium for collecting and generating space-time behavior trajectory
CN114092985A (en) Terminal control method, device, terminal and storage medium
CN117275673A (en) Cognitive training equipment and method for improving distribution attention
WO2016167741A1 (en) Method of generating an adaptive partial report and apparatus for implementing the same
US20110007952A1 (en) Enhanced Visualizations for Ultrasound Videos
Valeriani et al. Augmenting group performance in target-face recognition via collaborative brain-computer interfaces for surveillance applications
Gollan et al. SEEV-effort—Is it enough to model human attentional behavior in public display settings
CN114816055A (en) Eyeball motion track capturing and analyzing method, device and medium based on VR equipment
CN113780051A (en) Method and device for evaluating concentration degree of student
Xu et al. What has been missed for predicting human attention in viewing driving clips?
CN113413134A (en) Fatigue identification method, fatigue identification device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant