CN117854714B - Information recommendation method and device based on eye movement tracking - Google Patents

Information recommendation method and device based on eye movement tracking

Info

Publication number
CN117854714B
CN117854714B (application CN202410262494.4A)
Authority
CN
China
Prior art keywords
attention
detected
person
detection
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410262494.4A
Other languages
Chinese (zh)
Other versions
CN117854714A (en
Inventor
Zheng Di (郑迪)
Ma Ning (马宁)
Zhou Honghao (周宏豪)
Dong Bo (董波)
Chen Yihan (陈奕菡)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab filed Critical Zhejiang Lab
Priority to CN202410262494.4A priority Critical patent/CN117854714B/en
Publication of CN117854714A publication Critical patent/CN117854714A/en
Application granted granted Critical
Publication of CN117854714B publication Critical patent/CN117854714B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Alarm Systems (AREA)
  • Image Analysis (AREA)

Abstract

This specification discloses an information recommendation method and device based on eye movement tracking. Eye movement trajectory data of a person to be detected while watching a detection video played on a display are collected by an eye movement detection device and input into a preset attention detection model, so that the model generates an attention feature map of the person to be detected while watching the detection video from the trajectory data; information is then recommended to the person according to the attention feature map. When recommending information by this method, the generated recommendation information is more accurate and better fits the person's actual situation, so more effective help can be provided to persons to be detected who have attention problems.

Description

Information recommendation method and device based on eye movement tracking
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an information recommendation method and apparatus based on eye tracking.
Background
Autism Spectrum Disorder (ASD), also known as autism, is a representative neurodevelopmental disease among the pervasive developmental disorders (PDD). Its causes are mainly related to genetic factors or to environmental factors during pregnancy, so the onset is very early: patients show corresponding symptoms in childhood, such as social avoidance and stereotyped behavior, together with a marked attention deficit. In the current medical field, no highly effective cure or medication exists for children's attention problems, and early detection with early treatment is the more effective and common medical approach at this stage.
In existing medical practice, children who may have attention problems are identified mainly by professional physicians trained in psychology and medicine, who observe and communicate with the children for a fixed period of time. The physicians then diagnose whether a child has attention problems using evaluation means such as clinical scales, and recommend corresponding information and help according to the diagnosis result. In such methods, because the diagnosis relies on the physician's subjective judgment, the practicality and accuracy of the recommended information may be affected.
Therefore, how to make more accurate and effective information recommendation for children with attention problems is a problem to be solved.
Disclosure of Invention
The present disclosure provides an information recommendation method and apparatus based on eye tracking, so as to partially solve the above-mentioned problems in the prior art.
The technical scheme adopted in the specification is as follows:
the specification provides an information recommendation method based on eye movement tracking, which comprises the following steps:
playing a preset detection video through a preset display;
Acquiring an eye image of a person to be detected when watching the detection video through preset eye movement detection equipment, and determining eye movement track data of the person to be detected when watching the detection video according to the eye image;
Inputting the eye movement track data and the detection video into a preset attention detection model to generate an attention characteristic map aiming at the person to be detected when watching the detection video, wherein the attention characteristic map is used for representing attention degrees of the person to be detected aiming at different objects in the detection video in different states and attention degrees of the person to be detected aiming at the background in the detection video;
and recommending information to the person to be detected according to the attention characteristic map.
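The four steps above can be sketched end to end in a minimal, self-contained form. The data shapes, function names, and the toy recommendation rule are all illustrative assumptions, not the patent's actual implementation:

```python
# Hypothetical sketch of the four-step flow: play video, capture gaze,
# build an attention feature map, recommend. All names are assumptions.

def to_trajectory(eye_images):
    """Step 2: reduce per-frame eye images to (t, x, y) gaze samples."""
    return [(t, img["gaze_x"], img["gaze_y"]) for t, img in enumerate(eye_images)]

def attention_feature_map(trajectory, labeled_video):
    """Step 3: accumulate relative dwell time per semantically labeled object."""
    dwell = {label: 0 for label in labeled_video["labels"]}
    for t, x, y in trajectory:
        label = labeled_video["label_at"](t, x, y)  # semantic-segmentation label
        dwell[label] = dwell.get(label, 0) + 1
    total = sum(dwell.values()) or 1
    return {label: n / total for label, n in dwell.items()}

def recommend(feature_map, social_floor=0.2):
    """Step 4: a toy rule flagging low attention to the 'social' dimension."""
    if feature_map.get("social", 0.0) < social_floor:
        return "recommend social-attention training content"
    return "recommend general content"
```

For instance, a trajectory that never lands on a socially labeled region yields a feature map with a zero social score and triggers the social-attention recommendation.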
Optionally, recommending information to the person to be detected according to the attention feature map specifically includes:
determining, according to the eye movement trajectory data, detection result data corresponding to the trajectory data, wherein the detection result data include an eye movement trajectory heat map corresponding to the trajectory data, an eye movement calibration degree value, the viewing-angle movement speed of the person to be detected at different moments while watching the detection video, and pupil size change data of the person to be detected at different moments while watching the detection video;
determining the credibility corresponding to the attention feature map according to the detection result data;
and recommending information to the person to be detected according to the attention feature map and its corresponding credibility.
Optionally, recommending information to the person to be detected according to the attention feature map and its corresponding credibility specifically includes:
when the credibility corresponding to the attention feature map is not lower than a preset credibility threshold, recommending information to the person to be detected according to the attention feature map.
Optionally, the method further comprises:
when the credibility corresponding to the attention feature map is lower than the preset credibility threshold, determining the video type of the detection video corresponding to the attention feature map, and selecting, from a preset detection video library, detection videos of other video types different from that video type as a first retest video;
playing the first retest video to the person to be detected and, through the preset eye movement detection device, determining an attention feature map of the person to be detected while watching the first retest video;
and recommending information to the person to be detected according to the attention feature map of the person to be detected while watching the first retest video.
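A minimal sketch of the credibility gate and first-retest selection described above, assuming the credibility is a numeric score and the video library is tagged by type (the threshold value and field names are hypothetical):

```python
# Gate on feature-map credibility; on failure, pick a retest video whose
# type differs from the one just watched. The threshold is illustrative.
CREDIBILITY_THRESHOLD = 0.8

def next_action(credibility, watched_type, video_library):
    """Return ('recommend', None) or ('retest', <video of another type>)."""
    if credibility >= CREDIBILITY_THRESHOLD:
        return ("recommend", None)
    candidates = [v for v in video_library if v["type"] != watched_type]
    return ("retest", candidates[0] if candidates else None)
```

Selecting a retest video of a *different* type matches the stated goal of ruling out content-specific effects before re-generating the feature map.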
Optionally, recommending information to the person to be detected according to the attention feature map specifically includes:
inputting the attention feature map into a preset attention analysis model, so that the attention analysis model performs attention analysis on the person to be detected according to the attention feature map and generates an attention analysis result for the person to be detected;
and recommending information to the person to be detected according to the attention analysis result.
Optionally, inputting the attention feature map into a preset attention analysis model, so that the attention analysis model performs attention analysis on the person to be detected and generates an attention analysis result for the person, specifically includes:
determining, through the attention analysis model, the average similarity value between the attention feature map of the person to be detected and the historically recorded attention feature maps of a number of designated persons, and the average similarity value between the attention feature map of the person to be detected and the historically recorded attention feature maps of a number of normal persons;
determining, through the attention analysis model, attention information of the person to be detected according to these two average similarity values;
and generating an attention analysis result for the person to be detected according to the attention information of the person to be detected.
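The group-comparison step above can be sketched as follows. The patent does not name a specific similarity measure; cosine similarity over flattened feature maps is assumed here purely for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two flattened attention feature maps."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def avg_similarity(subject_map, group_maps):
    """Average similarity of the subject's map against one recorded group."""
    return sum(cosine(subject_map, m) for m in group_maps) / len(group_maps)

def attention_info(subject_map, designated_maps, normal_maps):
    """Compare the subject against the designated group and the normal group."""
    sim_designated = avg_similarity(subject_map, designated_maps)
    sim_normal = avg_similarity(subject_map, normal_maps)
    if sim_designated > sim_normal:
        return "closer to designated group"
    return "closer to normal group"
```

The downstream analysis result would then be generated from which group the subject's map is closer to, as the claim describes.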
Optionally, recommending information to the person to be detected according to the attention feature map specifically includes:
in response to a record by the staff that the person to be detected did not watch the detection video as required, replaying the detection video to the person to be detected through the eye movement detection device, and collecting, through the eye movement detection device, eye movement trajectory data of the person while watching the replayed detection video;
generating an attention feature map for the person to be detected while watching the replayed detection video according to that eye movement trajectory data;
and recommending information to the person to be detected according to the attention feature map obtained from the replayed detection video.
This specification provides an information recommendation device based on eye movement tracking, comprising:
a playing module, configured to play a preset detection video on a preset display;
an acquisition module, configured to acquire eye images of a person to be detected while watching the detection video through a preset eye movement detection device, and to determine eye movement trajectory data of the person while watching the detection video according to the eye images;
a map generation module, configured to input the eye movement trajectory data and the detection video into a preset attention detection model to generate an attention feature map for the person to be detected while watching the detection video, the attention feature map representing the person's degrees of attention to different objects in different states in the detection video and to the background of the detection video;
and a recommending module, configured to recommend information to the person to be detected according to the attention feature map.
The present specification provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above-described eye-tracking-based information recommendation method.
The present specification provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the above-mentioned eye-tracking-based information recommendation method when executing the program.
As can be seen from the above, in the information recommendation method based on eye movement tracking provided in this specification, eye movement trajectory data of a person to be detected while watching a detection video played on a display are collected by an eye movement detection device and input into a preset attention detection model, so that the model generates an attention feature map of the person while watching the detection video; information is then recommended to the person according to that map.
Thus, with the method provided in this specification, an attention feature map corresponding to a person to be detected can be determined from the eye movement trajectory data recorded while the person watches a detection video, and information can then be recommended accordingly. Compared with the traditional approach that relies on a professional physician's subjective evaluation and recommendation, the recommendation information generated by this method is more accurate and practical and better fits the actual situation of the person to be detected, so more effective help can be provided to persons with attention problems.
Drawings
The accompanying drawings, which are included to provide a further understanding of the specification, illustrate exemplary embodiments of the present specification and, together with the description, serve to explain it; they are not intended to limit the specification unduly. In the drawings:
Fig. 1 is a schematic flow chart of an information recommendation method based on eye tracking provided in the present specification;
FIG. 2 is a schematic illustration of an attention profile provided in the present specification;
FIG. 3 is a schematic diagram of detection result data provided in the present specification;
FIG. 4 is a schematic diagram of an information recommendation system based on eye tracking provided in the present specification;
FIG. 5 is a schematic diagram of an information recommendation device based on eye tracking provided in the present specification;
Fig. 6 is a schematic structural diagram of an electronic device corresponding to fig. 1 provided in the present specification.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present specification more apparent, the technical solutions of the present specification will be clearly and completely described below with reference to specific embodiments of the present specification and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present specification. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are intended to be within the scope of the present disclosure.
The following describes in detail the technical solutions provided by the embodiments of the present specification with reference to the accompanying drawings.
Fig. 1 is a flow chart of an information recommendation method based on eye tracking provided in the present specification, which includes the following steps:
S101: and playing the preset detection video through a preset display.
S102: and acquiring eye images of a person to be detected when watching the detection video through preset eye movement detection equipment, and determining eye movement track data of the person to be detected when watching the detection video according to the eye images.
Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder that generally has an early onset, manifesting in childhood with symptoms such as attention deficit and social avoidance. At present, children who may have attention problems are mostly diagnosed through clinical observation and communication by specialists. Because this relies on the physician's subjective judgment and evaluation, the information recommended to the examined child may suffer from low accuracy. It is therefore important to provide more accurate and effective information recommendation for children with attention problems.
To this end, this specification provides an information recommendation method based on eye movement tracking. The execution subject of the method may be a terminal device such as a desktop computer or a notebook computer, or a server; it may also be a software entity, such as a client installed on a terminal device. For convenience of explanation, the method is described below with the server as the execution subject.
Based on this information recommendation method, the server can determine the attention feature map corresponding to a person to be detected from the eye movement trajectory data recorded while that person watches a detection video, and then recommend information to the person according to the map. The actual application scenario can be chosen according to actual requirements. For example, when all children in a certain area need attention detection, the eye movement collection equipment can be brought to the children's homes for door-to-door detection, depending on each child's situation; alternatively, a server applying the method provided in this specification can be set up in each medical institution, so that children can undergo attention detection and obtain recommended information at the corresponding institution, either regularly or when suspected symptoms appear, accompanied by their guardians.
In this specification, the server can play a preset detection video to the person to be detected on a preset display, collect images of the person's eyes through a preset eye movement detection device while the person watches the video, and derive the eye movement trajectory data from those eye images.
Specifically, the server may randomly select several detection videos that the person to be detected is required to watch from a preset detection video library. All detection videos stored in the library have been semantically segmented, and each object in each video carries a corresponding label after segmentation, so that when the attention feature map is later generated by the attention detection model, the model can effectively identify the person's degree of attention to each object in the video from the eye movement trajectory data. The detection video library may be pre-established in a database of the server, or stored on a computer or notebook connected to the preset display; this specification does not strictly limit how the library is stored and accessed.
The server can play the selected detection videos to the person to be detected in succession on the preset display. For each of the randomly selected detection videos, while the person watches it, the server acquires the matching eye images through the preset eye movement collection device and derives the eye movement trajectory data corresponding to that video. The trajectory data and the corresponding detection video are bound to each other, and the data reflect the binocular fixation positions of the person, and how they change, at each moment while watching that video.
The eye movement collection device mentioned above may be any electronic device capable of collecting eye movement data, such as the desktop eye trackers commonly available on the market; this specification does not limit its specific type, which can be chosen according to actual requirements. Likewise, the specific type of display is not strictly limited: a computer with an electronic display screen, or any electronic device capable of playing video, such as a notebook, may be used and flexibly adjusted according to the actual situation and requirements.
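One possible in-memory representation of the bound (detection video, trajectory) record described above; the field names and units are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class GazeSample:
    t_ms: int    # timestamp within the detection video, in milliseconds
    left: tuple  # (x, y) fixation of the left eye, in screen pixels
    right: tuple # (x, y) fixation of the right eye, in screen pixels

@dataclass
class TrajectoryRecord:
    # binds the trajectory to exactly one detection video, as the text requires
    video_id: str
    samples: list = field(default_factory=list)
```

Binding the samples to a `video_id` keeps each trajectory paired with the semantically segmented video it was recorded against.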
In addition, the detection videos in the library are shot and produced by the developers themselves, each under strict conception and design. The content of each detection video is associated with each dimension of the attention feature map that is later generated from the eye movement trajectory data. To keep the data mutually consistent when the feature maps are generated, the number of objects involved in each detection video, and the video duration of the different objects under the characteristics of each dimension, do not differ greatly between videos; that is, the displayed content may differ from video to video, but the time length of each dimension and the number of objects are nearly the same. Several detection videos are selected for the person to watch in order to prevent differences in how strongly different video content attracts the person's attention, and to record the person's eye movement trajectory data over multiple viewings of different videos, thereby improving the credibility of the attention feature map generated later. This specification places no strict numerical limit on how many detection videos are selected; the number can be adjusted according to actual requirements and the circumstances of each person to be detected.
It should be noted that before formal detection, that is, before the selected detection videos are played to the person on the display, there is an adaptation process and a calibration process. The adaptation process is simple: the server randomly selects one or two detection videos from the library, and with the guidance of the staff and the help of the person's guardian, the person watches them smoothly, after which the adaptation process ends and the calibration process begins. If the person shows abnormalities during adaptation, for example difficulty understanding the instructions for watching the video, inability to concentrate on it, or frequent movements of the head or body, the adaptation process must be repeated, recorded and confirmed by the corresponding staff. If the abnormality is too severe for normal guidance, or the person shows strong resistance, then, with the staff's record and confirmation and in consideration of the person's condition, the whole detection process must be stopped immediately, and whether to re-test is decided after consultation with the person's guardian.
After the adaptation process ends normally, the server carries out the calibration process for the person through the preset display and eye movement collection device. Its main purpose is to ensure the accuracy of the eye movement trajectory data generated from the eye images. With the help and guidance of the staff and the person's guardian, the person's head and eyes are aligned with the center of the display; once it is ensured that the positional relationship between the person and the display will not easily change, the server shows a calibration pattern (such as a car or a small animal) on the display screen. The person then gazes at the pattern while the server moves it to different positions on the screen, and the preset eye movement collection device captures eye images of the person gazing at each position. The server then calibrates the person's binocular gaze range from the collected eye images and the on-screen positions of the calibration pattern corresponding to each image, and generates the eye movement calibration degree value for the person.
The eye movement calibration degree value indicates how well the person's actual gaze position on the display screen matches the gaze position computed by the server from the eye images collected by the eye movement collection device, and can be expressed as a percentage. After determining this value, the server judges whether the calibration step is qualified by comparing it with a preset calibration threshold. For example, suppose the calibration threshold is 80%: if the person's eye movement calibration degree value is 92%, the server deems the calibration qualified and proceeds to the subsequent normal test; if the value is 76%, the server deems the calibration unqualified and prompts the corresponding staff on whether to recalibrate, and the staff decide according to the actual situation. When some special problem that cannot easily be solved occurs (such as a personal physiological issue of the person, special conditions of the test site, or a fault or interference in the eye collection equipment), the person's calibration degree value may never reach the preset threshold; in that case the staff may skip recalibration and directly use the below-threshold value as the person's eye movement calibration degree value for the subsequent normal detection process.
After determining the eye movement calibration degree value for the person, the server can store it and bind it to the eye movement trajectory data of the detection video watched during normal detection; when the detection result data corresponding to the trajectory data are later determined, the stored calibration degree value serves as one item of those data. The number of calibration degree values also corresponds to the number of detection videos watched during normal detection, because after watching one video the person's head or body position may change. When that happens, recalibration is required, and the newly recorded calibration degree value is bound to the next detection video the person watches. If the person's body position does not move much after watching a video, no recalibration is needed, and the calibration degree value from the previous calibration is used directly as the value bound to the next detection video.
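The calibration degree value and its threshold check can be illustrated as below. The percentage formulation, the share of calibration targets whose estimated gaze lands within a pixel tolerance of the true target position, and the tolerance value are assumptions consistent with the example values above (92% passes an 80% threshold, 76% does not):

```python
def calibration_degree(true_points, estimated_points, tolerance_px=50.0):
    """Percentage of calibration targets hit within `tolerance_px` pixels."""
    hits = sum(
        1
        for (tx, ty), (ex, ey) in zip(true_points, estimated_points)
        if ((tx - ex) ** 2 + (ty - ey) ** 2) ** 0.5 <= tolerance_px
    )
    return 100.0 * hits / len(true_points)

def calibration_passes(degree, threshold=80.0):
    """Qualified when the degree value reaches the preset calibration threshold."""
    return degree >= threshold
```

Showing the calibration pattern at more screen positions gives a finer-grained percentage and a more trustworthy pass/fail decision.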
The method in this specification is mainly intended for attention detection in young children, so the main target group is young children; the specific age range may be children aged six and under, adjustable according to the actual situation and requirements, and this specification places no strict limit on the age of the person to be detected.
S103: the eye movement track data and the detection video are input into a preset attention detection model to generate an attention characteristic map aiming at the person to be detected when the detection video is watched, wherein the attention characteristic map is used for representing attention degrees of the person to be detected aiming at different objects in the detection video in different states and attention degrees of the person to be detected aiming at the background in the detection video.
In this specification, the server may input the eye movement trajectory data corresponding to each detection video watched by the person to be detected, together with the corresponding detection video, into a preset attention detection model to generate, through the model, an attention feature map of the person while watching that video.
Specifically, the server may input each set of eye movement trajectory data and each corresponding semantically segmented detection video into the attention detection model, and, from the trajectory data and the positions of the objects in the corresponding video, the model determines attention feature maps representing the person's degrees of attention to different objects in different states and to the background of the detection video. The server can then fuse the several attention feature maps corresponding to the individual trajectory data sets into a single fused attention feature map for the person. This fusion is performed on the premise that the time lengths of the dimensions involved in the detection videos behind each feature map are essentially equal and the numbers of objects differ little; as mentioned above, the video content may differ between detection videos, but the per-dimension durations and object counts are nearly the same, and on this basis the fused attention feature map reflects the person's degree of attention to the different dimensions more accurately. To describe the specific form of the attention feature map, an example map is shown in fig. 2.
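Because the detection videos behind each map cover the same dimensions for roughly equal durations, the per-video maps can be combined; element-wise averaging is assumed here as one simple fusion (the patent does not specify the fusion operation):

```python
def fuse_feature_maps(maps):
    """Element-wise average of several attention feature maps (same keys)."""
    return {k: sum(m[k] for m in maps) / len(maps) for k in maps[0]}
```

Averaging is only valid under the stated premise that each video allocates comparable time to each dimension; otherwise a duration-weighted fusion would be needed.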
Fig. 2 is a schematic diagram of an attention feature map provided in the present specification.
As shown in fig. 2, the attention feature map is a fused attention feature map generated by the server for the person to be detected after the person to be detected has watched a plurality of detection videos in one example. As can be seen from fig. 2, the attention feature map in this example is divided, according to the semantically segmented detection videos, into five object dimensions and three attention dimensions. The five object dimensions are: a social dimension representing real-person objects in the detection video; an item dimension representing inanimate objects in the detection video; a brightness dimension representing the degree of lightness or darkness of each object in the detection video; a motion dimension representing the motion state of each object in the detection video; and a distraction dimension representing the video background in the detection video other than the objects. The three attention dimensions are: an active attention dimension, comprising the social dimension and the item dimension, which represents the degree of attention the person to be detected pays to each object in the detection video; a passive attention dimension, comprising the brightness dimension and the motion dimension, which represents the degree of attention the person to be detected pays to different characteristics of each object in the detection video; and an attention-loss dimension, comprising the distraction dimension, which represents the degree of attention the person to be detected pays to the video background other than the objects.
As can be seen from fig. 2, when the person to be detected to whom the attention feature map belongs watches the detection videos, the person pays a high degree of attention to the items in the detection videos but pays almost no attention to the objects in the social dimension; the person also pays a certain degree of attention to moving objects but little attention to objects with conspicuous brightness, and the attention paid to the video background in the distraction dimension is not obvious. In summary, the person to be detected shows a serious lack of attention in the social dimension, from which it can be inferred that the person to be detected may have defects or obstacles in communicating with people, i.e., a social avoidance phenomenon. This result is only a preliminary inference; an analysis result of real reference value is obtained through the subsequent attention analysis process.
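The preliminary screening inference described above can be sketched as a simple rule over the fused attention shares; the dimension order, the example values, and the 0.1 ratio threshold are illustrative assumptions, not values from this specification:

```python
# Hypothetical sketch of the preliminary inference: if the attention share on
# the social dimension falls far below that of the item dimension, flag a
# possible social avoidance phenomenon for the subsequent attention analysis.
DIMS = ["social", "item", "brightness", "motion", "distraction"]

def preliminary_flags(fused_map, ratio_threshold=0.1):
    shares = dict(zip(DIMS, fused_map))
    flags = []
    # social attention far below item attention -> candidate social avoidance
    if shares["social"] < ratio_threshold * shares["item"]:
        flags.append("possible social avoidance")
    return flags

flags = preliminary_flags([0.02, 0.55, 0.09, 0.22, 0.12])
print(flags)
```

As in the specification, such a flag is only a preliminary inference and would be confirmed or rejected by the attention analysis model.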
It should be noted that the attention feature map in the above example is provided only for convenience of description and understanding. The specific division between the dimensions in the attention feature map is not limited in this specification and may be optimized and adjusted according to actual situations and requirements, provided that each dimension in the generated attention feature map remains consistent with the video content of the detection video corresponding to the eye movement track data. In addition, the specific model type of the attention detection model is not strictly limited in this specification: any mathematical model that, after being trained in advance, is capable of generating an attention feature map from eye movement track data may be adopted, such as a decision model or a classifier model, and the model may be selected and configured according to actual needs and application scenarios.
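Under the stated premise (substantially equal durations per dimension and comparable object counts across the detection videos), the map fusion step can be sketched as a plain element-wise average; representing each per-video map as a five-value vector of attention shares is an illustrative assumption:

```python
import numpy as np

# Minimal sketch: each attention feature map is modeled as a (5,) vector of
# attention shares over (social, item, brightness, motion, distraction);
# real maps may be richer structures.
def fuse_attention_maps(maps):
    """Fuse per-video attention maps by element-wise averaging.

    Assumes every detection video has approximately equal duration per
    dimension and a comparable number of objects, so a simple mean is a
    reasonable fusion rule.
    """
    stacked = np.stack(maps)      # shape: (n_videos, n_dimensions)
    return stacked.mean(axis=0)   # fused map, shape: (n_dimensions,)

# Example: three per-video maps fused into one
maps = [np.array([0.05, 0.55, 0.10, 0.20, 0.10]),
        np.array([0.03, 0.60, 0.07, 0.22, 0.08]),
        np.array([0.04, 0.50, 0.09, 0.25, 0.12])]
fused = fuse_attention_maps(maps)
print(fused.round(3))  # average attention share per dimension
```

Because each input map's shares sum to 1, the fused map's shares also sum to 1, which keeps the dimensions directly comparable.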
S104: recommending information to the person to be detected according to the attention feature map.
In the present specification, the server may recommend information for the person to be detected to whom the attention feature map belongs, according to the attention feature map generated by the preset attention detection model.
Specifically, the server may input the attention feature map corresponding to the person to be detected into a preset attention analysis model, so that the attention analysis model determines, according to the attention feature map, the attention information of the person to be detected when watching the detection videos, and then generates a corresponding attention analysis result according to that attention information. The server may then recommend information to the person to be detected according to the attention analysis result.
The server may input the attention feature map of the person to be detected and the attention feature map of a normal person recorded in history into the preset attention analysis model, so that the attention analysis model extracts corresponding feature data from the attention feature map of the person to be detected and, at the same time, from the attention feature map of the normal person. The attention analysis model may then determine, from these two sets of feature data, a similarity value between the attention feature map of the person to be detected and the attention feature map of the normal person. In the same way, the attention analysis model determines, in turn, the similarity value between the attention feature map of each normal person recorded in history and the attention feature map of the person to be detected, and thereby determines the average similarity value between the attention feature map of the person to be detected and the attention feature maps of the plurality of normal persons recorded in history.
Similarly, the server may input the attention feature map of the person to be detected and the attention feature map of a specified person recorded in history into the preset attention analysis model, so that the attention analysis model extracts corresponding feature data from the attention feature map of the person to be detected and, at the same time, from the attention feature map of the specified person. The attention analysis model may then determine, from these two sets of feature data, the similarity value between the attention feature map of the person to be detected and the attention feature map of the specified person. In the same way, the similarity value between the attention feature map of each specified person recorded in history and the attention feature map of the person to be detected is determined in turn through the attention analysis model, so as to determine the average similarity value between the attention feature map of the person to be detected and the attention feature maps of the plurality of specified persons recorded in history.
Then, the server may cause the attention analysis model to determine the attention information of the person to be detected when watching the detection videos according to the average similarity value between the attention feature map of the person to be detected and the attention feature maps of the plurality of specified persons recorded in history, and the average similarity value between the attention feature map of the person to be detected and the attention feature maps of the plurality of normal persons recorded in history. The attention analysis model then generates a corresponding attention analysis result according to that attention information, and corresponding information recommendation is performed according to the attention analysis result.
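A minimal sketch of the cohort comparison described above; a deployed system would extract feature data with a trained model such as a Siamese network, so plain cosine similarity on the map vectors is only a stand-in for that learned metric, and all example values are assumptions:

```python
import numpy as np

def cosine_sim(a, b):
    # stand-in for the learned similarity metric of the attention analysis model
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def mean_similarity(subject_map, historical_maps):
    # average similarity against every map recorded in history for one cohort
    return sum(cosine_sim(subject_map, m) for m in historical_maps) / len(historical_maps)

def attention_info(subject_map, normal_maps, specified_maps):
    """Compare the subject's map against the normal and specified cohorts."""
    sim_normal = mean_similarity(subject_map, normal_maps)
    sim_specified = mean_similarity(subject_map, specified_maps)
    return {"sim_normal": sim_normal,
            "sim_specified": sim_specified,
            "closer_to": "normal" if sim_normal >= sim_specified else "specified"}

# Illustrative maps: attention shares over (social, item, brightness, motion, distraction)
normal_maps = [[0.25, 0.35, 0.10, 0.20, 0.10], [0.22, 0.38, 0.10, 0.20, 0.10]]
specified_maps = [[0.02, 0.60, 0.08, 0.20, 0.10], [0.03, 0.58, 0.09, 0.20, 0.10]]
info = attention_info([0.03, 0.57, 0.10, 0.20, 0.10], normal_maps, specified_maps)
print(info["closer_to"])
```

The two average similarity values play the role described in the specification: a map that resembles the specified-person cohort more than the normal cohort would drive an attention analysis result indicating a likely attention deficit.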
The specific model type of the attention analysis model is not strictly limited in the present specification: any mathematical model capable of calculating the degree of similarity between multiple pieces of data, such as a Siamese neural network or a metric learning network, may be adopted, and may be flexibly configured and adjusted according to the actual application scenario and requirements. The attention feature map of a specified person mentioned above may be the attention feature map of a person with an attention deficit problem recorded in history, or, in a hospital scenario, the attention feature map of a patient suffering from autism recorded by the hospital; the specific person type of the specified person is not limited in this specification.
The above attention analysis result for the person to be detected may be used to reflect the degree of attention deficit of the person to be detected. In the present specification, the server may recommend information of different contents according to attention analysis results reflecting different degrees of attention deficit. Specifically, the server may determine, from a historical database and according to the degree of attention deficit reflected by the attention analysis result, help information matching that degree of attention deficit, and recommend each such piece of information content, as recommendation information, to the person to be detected and the corresponding guardian.
For example, assuming that the attention analysis result of a certain person to be detected reflects a serious attention deficit, the server may determine, from the database, the help information corresponding to a serious attention deficit and recommend the related information content to the person to be detected and the corresponding guardian. The recommended information may be, for example: advising the guardian to give the person to be detected more attention and careful companionship, and to change the living environment and adjust living habits so that the person to be detected feels comfortable in daily life; or, for example: advising the guardian to pay attention to the psychological growth of the person to be detected, to encourage and support the person at appropriate moments, to build a good home environment, and so on.
In addition, when determining the corresponding help information from the database based on the degree of attention deficit reflected by the attention analysis result, the server may also filter and screen the corresponding information content according to the personal information of the person to be detected recorded in history (such as age, gender, and particular preferences). In this way, when recommending information to the person to be detected and the corresponding guardian, the server can recommend useful information that is closer to the real situation of the person to be detected and thereby provide more accurate and effective help.
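The lookup-and-filter step can be sketched as below; the database entries, field names, and the age-based filtering rule are all illustrative assumptions, not a schema given in this specification:

```python
# Hypothetical help-information database: entries matched first by attention
# deficit degree, then filtered by the subject's recorded personal information.
HELP_DB = [
    {"degree": "severe", "min_age": 0, "max_age": 6,
     "text": "Recommend closer, attentive companionship and adjusting the living environment."},
    {"degree": "severe", "min_age": 7, "max_age": 17,
     "text": "Recommend attention to psychological growth, with timely encouragement and support."},
    {"degree": "mild", "min_age": 0, "max_age": 17,
     "text": "Recommend periodic re-screening and routine observation."},
]

def recommend(degree, age):
    """Return recommendation texts matching the deficit degree and the subject's age."""
    return [e["text"] for e in HELP_DB
            if e["degree"] == degree and e["min_age"] <= age <= e["max_age"]]

print(recommend("severe", 5))
```

Further filters (gender, particular preferences) would follow the same pattern of narrowing the matched entries.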
Note that the actual application scenario in which information of different contents is recommended according to different attention analysis results is not strictly limited in the present specification. For example, in a scenario of large-scale early screening of children for autism, recommendation information may be generated according to the attention analysis result to help judge whether a child has autism; or, in a scenario of brain examination of young children in a hospital, recommendation information may be generated according to the attention analysis result to evaluate and verify a child's intelligence quotient. The specific application scenario may be flexibly adjusted.
In addition, the server may display the attention feature map generated by the attention detection model to a pre-trained staff member, who performs attention analysis on the attention feature map of the person to be detected and records and generates the attention analysis result for the person to be detected.
It should be noted that, before a staff member performs attention analysis based on the attention feature map of a person to be detected, a strict training process is required. The training process specifically includes the staff member viewing the attention feature maps of 10 to 20 normal persons and the attention feature maps of 10 to 20 attention-deficit persons. After viewing them, the staff member takes part in a test: the staff member views the attention feature maps of another 10 persons, not labeled as normal or attention-deficit and excluding the previously viewed maps, and records a judgment result for each of the 10 attention feature maps. Whether the staff member may take part in the formal detection work is then evaluated by comparing the judgment results with the real results corresponding to the maps.
For example, in a specific evaluation process, suppose that of the 10 judgment results the staff member records after viewing the attention feature maps of the 10 unlabeled persons, 9 are the same as the real results corresponding to those attention feature maps; the post-training evaluation result for this staff member is therefore an accuracy of 90%. With the evaluation threshold for qualified training set at no less than 80%, the staff member is regarded as having passed the training and may formally perform attention analysis based on the attention feature maps of persons to be detected. The numbers of attention-deficit and normal persons whose attention feature maps are viewed before the test, the number of unlabeled attention feature maps viewed during the test, and the evaluation threshold for judging whether a staff member's training is qualified are all described above as specific values for ease of understanding; the specific sizes of these values are not strictly limited in this specification and may be flexibly adjusted and set according to actual situations and requirements.
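The qualification check in the example above reduces to an accuracy comparison against the evaluation threshold; the cohort labels below are illustrative:

```python
# Sketch of the trainee qualification check: a trainee judges 10 unlabeled
# attention feature maps and passes if accuracy reaches the evaluation
# threshold (80% in the example above).
def qualifies(judgments, ground_truth, threshold=0.8):
    correct = sum(j == t for j, t in zip(judgments, ground_truth))
    return correct / len(ground_truth) >= threshold

truth = ["normal", "deficit"] * 5       # 10 maps whose true labels are withheld
judgments = truth[:9] + ["normal"]      # 9 of 10 judged correctly -> 90% accuracy
print(qualifies(judgments, truth))
```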
In the above steps, the server may generate, through the attention detection model, the attention feature map corresponding to each detection video according to the eye movement track data, and may also determine corresponding detection result data according to the eye movement track data corresponding to each detection video. The detection result data mainly include: the eye movement track heat map corresponding to the eye movement track data; the eye movement calibration degree value generated in the calibration process and bound to the detection video; the movement speed of the viewing angle of the person to be detected, at different angles, at each moment when watching the detection video; and the pupil size change data of the person to be detected at different moments when watching the detection video. The detection result data are mainly used to determine the credibility of the attention feature map generated by the attention detection model. To facilitate description of the specific form of the detection result data, an example of detection result data is described below with reference to the schematic diagram shown in fig. 3.
Fig. 3 is a schematic diagram of detection result data provided in the present specification.
As shown in fig. 3, the eye movement track heat map is generated by the server according to the eye movement track data of the person to be detected in this example when watching a certain detection video. As can be seen from fig. 3, the eye movement track heat map reflects, mainly through color depth, the attention time the person to be detected paid to each position on the display screen when watching the detection video. After determining the eye movement track heat map, the server may determine whether the heat map records an abnormality by comparing it with the eye movement track heat maps of other persons in the history records (including normal children and specified persons) when watching the same detection video.
The viewing-angle movement speed graph and the pupil size change image are recorded and generated by the server according to the eye images acquired by the eye movement detection device, and both have corresponding preset normal thresholds. After determining the viewing-angle movement speed graph and the pupil size change image, the server judges whether the data are abnormal according to the calculated corresponding mean values and the thresholds. For example, suppose the threshold range for an eye viewing angle of 20 degrees is 2 cm/s to 2.5 cm/s: when the server calculates, from the eye movement track data of the person to be detected, that the mean eye movement speed at a viewing angle of 20 degrees is far greater than 2.5 cm/s or far less than 2 cm/s, the server considers the viewing-angle movement speed data in the detection result data corresponding to that eye movement track data to be abnormal. Or, taking the pupil size change data as an example, suppose the pupil size threshold range is set to 2 mm to 5 mm: when the server determines from the eye movement track data that the pupil size of the person to be detected while watching the corresponding detection video is greater than 5 mm or less than 2 mm, the server considers the pupil size change data in the corresponding detection result data to be abnormal.
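The threshold checks described above can be sketched as below, using the example ranges from this paragraph (2 cm/s to 2.5 cm/s at a 20-degree viewing angle, 2 mm to 5 mm pupil size); in practice the thresholds would be configured per viewing angle and per device:

```python
# Sketch of the anomaly checks on detection result data. The threshold
# values mirror the examples in the text; everything else is illustrative.
GAZE_SPEED_RANGE = (2.0, 2.5)   # cm/s, example range at a 20-degree viewing angle
PUPIL_SIZE_RANGE = (2.0, 5.0)   # mm, example pupil size range

def in_range(value, lo_hi):
    lo, hi = lo_hi
    return lo <= value <= hi

def detection_anomalies(mean_gaze_speed, pupil_sizes):
    """Flag which items of the detection result data fall outside their thresholds."""
    anomalies = []
    if not in_range(mean_gaze_speed, GAZE_SPEED_RANGE):
        anomalies.append("gaze speed abnormal")
    if any(not in_range(p, PUPIL_SIZE_RANGE) for p in pupil_sizes):
        anomalies.append("pupil size abnormal")
    return anomalies

print(detection_anomalies(3.1, [2.8, 3.0, 5.6]))
```

Each flagged anomaly would then lower the credibility assigned to the corresponding attention feature map, as described below.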
The eye movement calibration degree value in the detection result data is determined in the calibration step before the detection process, and each eye movement calibration value corresponds to a detection video. As mentioned in the above steps, the eye movement calibration degree value also has a corresponding preset threshold; when the eye movement calibration degree value in the finally obtained detection result data does not meet the corresponding preset threshold, the server considers the eye movement calibration degree value in the detection result data to be abnormal.
The above detection result data are mainly used to determine the credibility of the attention feature map generated by the attention detection model. Specifically, the server determines the credibility of the corresponding attention feature map according to the abnormal conditions occurring in the individual items of the detection result data. When the server determines that the attention feature map is not fully credible, that is, the credibility corresponding to the attention feature map is lower than a preset credibility threshold, the server prompts the staff member responsible for the detection process and recommends a retest. After receiving an instruction from the staff member confirming the retest, the server determines the video type of the detection video corresponding to the low-credibility attention feature map, reselects a detection video of that video type from the detection video library, plays it to the person to be detected again, records the eye movement track data, and generates a new attention feature map from that data. After confirming the attention feature maps corresponding to the plurality of detection videos watched by the person to be detected, the server fuses the attention feature maps and averages the credibility of each attention feature map to obtain the credibility of the finally fused attention feature map.
It should be noted that the number of detection videos the person to be detected watches to completion in a normal detection session, that is, the amount of eye movement track data acquired by the server, also affects the credibility of the finally fused attention feature map. If the person to be detected stops after watching only one or two detection videos, or even without finishing a single detection video, the server reduces the credibility of the finally fused attention feature map and prompts the corresponding staff member to retest. After receiving an instruction from the staff member selecting a retest, the server again plays the selected plurality of detection videos to the person to be detected in sequence, records the corresponding eye movement track data, and generates the corresponding attention feature maps through the preset attention detection model for the subsequent information recommendation to the person to be detected.
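One possible reading of the credibility fusion and retest rule described above is sketched below; the completion-based discount and the 0.6 threshold are illustrative assumptions, not values from this specification:

```python
# Sketch: the fused map's credibility is the mean of per-map credibilities,
# discounted when the subject finished fewer detection videos than planned.
def fused_credibility(per_map_credibility, videos_watched, videos_planned):
    base = sum(per_map_credibility) / len(per_map_credibility)
    completion = videos_watched / videos_planned   # fewer finished videos -> lower credibility
    return base * completion

def needs_retest(credibility, threshold=0.6):
    # below the preset credibility threshold, prompt the staff member to retest
    return credibility < threshold

cred = fused_credibility([0.9, 0.8, 0.85], videos_watched=3, videos_planned=5)
print(round(cred, 3), needs_retest(cred))
```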
For ease of understanding and description of the overall method, a schematic diagram of an eye tracking based information recommendation system is described below, as shown in fig. 4.
Fig. 4 is a schematic structural diagram of an information recommendation system based on eye tracking provided in the present specification.
As shown in fig. 4, first, a worker may record personal information of a person to be detected in a user information setting module in a server, where the user information setting module is mainly used to record personal information of the person to be detected and create a corresponding detection task. After the personal information of the person to be detected is recorded, the server selects a plurality of detection videos from the detection video library through the analysis and output module and sends the detection videos to the data acquisition module of the external equipment, wherein the data acquisition module mainly comprises the eye movement detection equipment and a computer connected with a display device, and is mainly used for playing the detection videos and acquiring eye movement track data of the person to be detected when watching the detection videos.
After the detection videos are played through the display device in the data acquisition module and the eye movement track data of the person to be detected are acquired through the eye movement detection device, the data acquisition module sends the eye movement track data back to the analysis and output module in the server. The analysis and output module generates the corresponding detection result data according to the eye movement track data and, based on the attention detection model deployed in the module, generates the attention feature map corresponding to the person to be detected according to the eye movement track data. Then, the attention analysis result of the person to be detected is determined by inputting the attention feature map corresponding to the person to be detected into the preset attention analysis model deployed in the analysis and output module, and information recommendation is further performed for the person to be detected according to the attention analysis result. Of course, a correspondingly pre-trained staff member may instead record and generate the attention analysis result of the person to be detected according to the attention feature map of the person to be detected.
After determining the attention analysis result of the person to be detected, the server transmits the corresponding data records (including the eye movement track data of the person to be detected, the attention feature map of the person to be detected, and the detection result data based on the eye movement track data of the person to be detected) to the data management module, binds them with the personal information of the person to be detected recorded by the user information setting module, and records and stores them, so that the guardian of the person to be detected can subsequently obtain the analysis result more quickly.
In addition, corresponding error checking modules are provided in both the server and the external device; the error checking module in the external device is mainly installed, in the form of software, in the computer connected to the display device. The main function of the two error checking modules is to detect errors that may occur in a normal detection process. For example, during the calibration of the person to be detected, the error checking module in the server may issue a calibration error prompt to the staff member when the eye movement calibration degree value of the person to be detected is lower than the preset calibration threshold. For another example, when the data acquisition module acquires the eye movement track data of the person to be detected, the error checking module installed in the computer connected to the display device may judge whether the acquired eye movement track data are normal; if obviously abnormal data appear, the error checking module does not send the data to the server and, at the same time, issues a data acquisition error prompt to the staff member.
When the external device operates in a normal detection session, a corresponding staff recording function is also provided. It is mainly used by the staff to record information about abnormal phenomena exhibited by the person to be detected during the detection process; when an abnormal phenomenon occurs, the recorded abnormal phenomenon information is sent to the server through the data acquisition module together with the eye movement track data of the person to be detected, and is mainly used to evaluate the credibility of the subsequently generated attention feature map. As can be seen from fig. 4 and the foregoing, compared with existing attention detection means, the detection process and means here are more lightweight: the eye movement track data of the person to be detected are acquired through the preset external device and then sent to the cloud server for data processing and result generation, after which information recommendation can be performed for the person to be detected. This enables wide, server-centered popularization and expansion, greatly reduces the consumption of funds and time, and effectively improves efficiency.
From the above, it can be seen that, in the eye-tracking-based information recommendation method provided in the present disclosure, the attention feature map and the detection result data corresponding to the person to be detected are determined according to the eye movement track data of the person to be detected when watching the detection videos, the credibility of the attention feature map is evaluated according to the detection result data, and information recommendation is then performed for the person to be detected according to the attention feature map of confirmed credibility. Compared with the currently common means of subjective evaluation and recommendation by a professional doctor, the recommendation information generated during information recommendation is more accurate and more practical, and its content better fits the real situation of the person to be detected, so that more effective help can be provided for persons to be detected with attention problems.
The above is a method implemented by one or more embodiments of the present specification, and based on the same concept, the present specification further provides a corresponding information recommendation device based on eye tracking, as shown in fig. 5.
Fig. 5 is a schematic diagram of an information recommendation device based on eye tracking provided in the present specification, including:
a playing module 501, configured to play a preset detection video through a preset display;
The acquisition module 502 is configured to acquire, through a preset eye movement detection device, an eye image of a person to be detected when watching the detection video, and determine eye movement track data of the person to be detected when watching the detection video according to the eye image;
A map generating module 503, configured to input the eye movement trajectory data and the detection video into a preset attention detection model, so as to generate an attention feature map for the person to be detected when watching the detection video, where the attention feature map is used to characterize attention degrees of the person to be detected for different objects in the detection video in different states and attention degrees of the person to be detected for a background in the detection video;
and a recommending module 504, configured to recommend information to the person to be detected according to the attention feature map.
Optionally, the recommending module 504 is specifically configured to: determine, according to the eye movement track data, detection result data corresponding to the eye movement track data, where the detection result data include an eye movement track heat map corresponding to the eye movement track data, an eye movement calibration degree value, the movement speed of the viewing angle of the person to be detected at different moments when watching the detection video, and pupil size change data of the person to be detected at different moments when watching the detection video; determine the credibility corresponding to the attention feature map according to the detection result data; and recommend information to the person to be detected according to the attention feature map and the credibility corresponding to the attention feature map.
Optionally, the recommending module 504 is specifically configured to recommend information to the person to be detected according to the attention feature map when the credibility corresponding to the attention feature map is not lower than a preset credibility threshold.
Optionally, the recommending module 504 is specifically configured to: when the credibility corresponding to the attention feature map is lower than the preset credibility threshold, determine the video type to which the detection video corresponding to the attention feature map belongs, and determine, from a preset detection video library, detection videos of other video types different from that video type as a first retest video; play the first retest video to the person to be detected and, through the preset eye movement detection device, determine the attention feature map of the person to be detected when watching the first retest video; and recommend information to the person to be detected according to the attention feature map of the person to be detected when watching the first retest video.
Optionally, the recommending module 504 is specifically configured to input the attention feature map into a preset attention analysis model, so that the attention analysis model performs attention analysis on the person to be detected according to the attention feature map and generates an attention analysis result for the person to be detected; and recommend information to the person to be detected according to the attention analysis result.
Optionally, the detection module 504 is specifically configured to determine, through the attention analysis model, according to the attention feature map, the historically recorded attention feature maps of a plurality of specified persons, and the historically recorded attention feature maps of a plurality of normal persons, an average similarity value between the attention feature map of the person to be detected and the historically recorded attention feature maps of the plurality of specified persons, and an average similarity value between the attention feature map of the person to be detected and the historically recorded attention feature maps of the plurality of normal persons; determine, through the attention analysis model, attention information of the person to be detected according to the two average similarity values; and generate an attention analysis result for the person to be detected according to the attention information of the person to be detected.
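A minimal sketch of the two average-similarity comparisons above, assuming cosine similarity over flattened attention feature maps; the patent does not fix the similarity measure, so both the metric and the names here are assumptions:

```python
import numpy as np

def analyze_attention(feature_map, specified_maps, normal_maps):
    """Compare a subject's attention feature map with historically recorded
    maps of specified persons and of normal persons via mean cosine
    similarity, and report which group the subject is closer to."""
    def cosine(a, b):
        a = np.ravel(np.asarray(a, dtype=float))
        b = np.ravel(np.asarray(b, dtype=float))
        # Small epsilon guards against zero-norm maps.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    sim_specified = float(np.mean([cosine(feature_map, m) for m in specified_maps]))
    sim_normal = float(np.mean([cosine(feature_map, m) for m in normal_maps]))
    closer = "specified" if sim_specified > sim_normal else "normal"
    return {"sim_specified": sim_specified,
            "sim_normal": sim_normal,
            "closer_to": closer}
```

The returned attention information could then feed the generation of the attention analysis result for the person to be detected.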
Optionally, the detection module 504 is specifically configured to, in response to a record made by the staff indicating that the person to be detected did not meet the viewing requirements when watching the detection video, replay the detection video to the person to be detected through the eye movement detection equipment, and collect, through the eye movement detection equipment, eye movement track data of the person to be detected when watching the replayed detection video; generate an attention feature map of the person to be detected when watching the replayed detection video according to the eye movement track data of the person to be detected when watching the replayed detection video; and recommend information to the person to be detected according to the attention feature map of the person to be detected when watching the replayed detection video.
The present specification also provides a computer-readable storage medium storing a computer program, the computer program being operable to perform the eye-tracking-based information recommendation method provided in fig. 1 above.
The present specification also provides a schematic structural diagram, shown in fig. 6, of an electronic device corresponding to fig. 1. At the hardware level, as shown in fig. 6, the electronic device includes a processor, an internal bus, a network interface, a memory, and a non-volatile memory, and may of course also include hardware required by other services. The processor reads the corresponding computer program from the non-volatile memory into the memory and then runs it, so as to implement the eye-tracking-based information recommendation method described above with reference to fig. 1.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (e.g., an improvement to a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement to a method flow). However, as technology has developed, many improvements to method flows can now be regarded as direct improvements to hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be implemented by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (e.g., a field programmable gate array (Field Programmable Gate Array, FPGA)) is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD, without requiring the chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually manufacturing integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code before compilation is likewise written in a specific programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that, merely by slightly logically programming the method flow in one of the above hardware description languages and programming it into an integrated circuit, a hardware circuit implementing the logical method flow can readily be obtained.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, application-specific integrated circuits (ASICs), programmable logic controllers, or embedded microcontrollers. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely in computer-readable program code, it is entirely possible to logically program the method steps such that the controller implements the same functionality in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may thus be regarded as a hardware component, and the means included therein for performing various functions may also be regarded as structures within the hardware component. Or even the means for performing various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present specification.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present description is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments may be referred to mutually, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant details, refer to the corresponding parts of the description of the method embodiments.
The foregoing is merely exemplary of the present disclosure and is not intended to limit the disclosure. Various modifications and alterations to this specification will become apparent to those skilled in the art. Any modifications, equivalent substitutions, improvements, or the like, which are within the spirit and principles of the present description, are intended to be included within the scope of the claims of the present description.

Claims (9)

1. An information recommendation method based on eye movement tracking, comprising:
playing a preset detection video through a preset display;
Acquiring an eye image of a person to be detected when watching the detection video through preset eye movement detection equipment, and determining eye movement track data of the person to be detected when watching the detection video according to the eye image;
inputting the eye movement track data and the detection video into a preset attention detection model to generate an attention feature map of the person to be detected when watching the detection video, wherein the attention feature map is used for representing the attention degrees of the person to be detected for different objects in different states in the detection video and the attention degree of the person to be detected for the background in the detection video;
and recommending information to the person to be detected according to the attention feature map, wherein the attention feature map is input into a preset attention analysis model, so that the attention analysis model performs attention analysis on the person to be detected according to the attention feature map and generates an attention analysis result for the person to be detected, and information is recommended to the person to be detected according to the attention analysis result.
2. The method according to claim 1, wherein recommending information to the person to be detected according to the attention feature map specifically comprises:
determining detection result data corresponding to the eye movement track data according to the eye movement track data, wherein the detection result data includes an eye movement track heat map corresponding to the eye movement track data, an eye movement calibration degree value, the movement speed of the viewing angle of the person to be detected at different moments when watching the detection video, and pupil size change data of the person to be detected at different moments when watching the detection video;
determining the credibility corresponding to the attention feature map according to the detection result data;
and recommending information to the person to be detected according to the attention feature map and the credibility corresponding to the attention feature map.
3. The method according to claim 2, wherein recommending information to the person to be detected according to the attention feature map and the credibility corresponding to the attention feature map specifically comprises:
when the credibility corresponding to the attention feature map is not lower than a preset credibility threshold, recommending information to the person to be detected according to the attention feature map.
4. The method according to claim 3, wherein the method further comprises:
when the credibility corresponding to the attention feature map is lower than the preset credibility threshold, determining the video type to which the detection video corresponding to the attention feature map belongs, and determining, from a preset detection video library, a detection video of another video type different from the video type to which the detection video corresponding to the attention feature map belongs, as a first retest video;
playing the first retest video to the person to be detected through the preset eye movement detection equipment, and determining an attention feature map of the person to be detected when watching the first retest video;
and recommending information to the person to be detected according to the attention feature map of the person to be detected when watching the first retest video.
5. The method according to claim 1, wherein inputting the attention feature map into a preset attention analysis model, so that the attention analysis model performs attention analysis on the person to be detected according to the attention feature map and generates an attention analysis result for the person to be detected, specifically comprises:
determining, through the attention analysis model, according to the attention feature map, the historically recorded attention feature maps of a plurality of specified persons, and the historically recorded attention feature maps of a plurality of normal persons, an average similarity value between the attention feature map of the person to be detected and the historically recorded attention feature maps of the plurality of specified persons, and an average similarity value between the attention feature map of the person to be detected and the historically recorded attention feature maps of the plurality of normal persons;
determining, through the attention analysis model, attention information of the person to be detected according to the average similarity value between the attention feature map of the person to be detected and the historically recorded attention feature maps of the plurality of specified persons and the average similarity value between the attention feature map of the person to be detected and the historically recorded attention feature maps of the plurality of normal persons;
and generating an attention analysis result for the person to be detected according to the attention information of the person to be detected.
6. The method according to claim 1, wherein recommending information to the person to be detected according to the attention feature map specifically comprises:
in response to a record made by the staff indicating that the person to be detected did not meet the viewing requirements when watching the detection video, replaying the detection video to the person to be detected through the eye movement detection equipment, and collecting, through the eye movement detection equipment, eye movement track data of the person to be detected when watching the replayed detection video;
generating an attention feature map of the person to be detected when watching the replayed detection video according to the eye movement track data of the person to be detected when watching the replayed detection video;
and recommending information to the person to be detected according to the attention feature map of the person to be detected when watching the replayed detection video.
7. An eye movement tracking-based information recommendation device, comprising:
the playing module is used for playing a preset detection video through a preset display;
The acquisition module is used for acquiring eye images of a person to be detected when watching the detection video through preset eye movement detection equipment, and determining eye movement track data of the person to be detected when watching the detection video according to the eye images;
the map generation module is used for inputting the eye movement track data and the detection video into a preset attention detection model to generate an attention feature map of the person to be detected when watching the detection video, wherein the attention feature map is used for representing the attention degrees of the person to be detected for different objects in different states in the detection video and the attention degree of the person to be detected for the background in the detection video;
and the recommendation module is used for recommending information to the person to be detected according to the attention feature map, wherein the attention feature map is input into a preset attention analysis model, so that the attention analysis model performs attention analysis on the person to be detected according to the attention feature map and generates an attention analysis result for the person to be detected, and information is recommended to the person to be detected according to the attention analysis result.
8. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method of any one of the preceding claims 1-6.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of the preceding claims 1-6 when executing the program.
CN202410262494.4A 2024-03-07 2024-03-07 Information recommendation method and device based on eye movement tracking Active CN117854714B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410262494.4A CN117854714B (en) 2024-03-07 2024-03-07 Information recommendation method and device based on eye movement tracking


Publications (2)

Publication Number Publication Date
CN117854714A CN117854714A (en) 2024-04-09
CN117854714B true CN117854714B (en) 2024-05-24

Family

ID=90531492

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410262494.4A Active CN117854714B (en) 2024-03-07 2024-03-07 Information recommendation method and device based on eye movement tracking

Country Status (1)

Country Link
CN (1) CN117854714B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107515677A (en) * 2017-08-31 2017-12-26 杭州极智医疗科技有限公司 Notice detection method, device and storage medium
CN107783945A (en) * 2017-11-13 2018-03-09 山东师范大学 A kind of search result web page notice assessment method and device based on the dynamic tracking of eye
WO2020119355A1 (en) * 2018-12-14 2020-06-18 深圳先进技术研究院 Method for evaluating multi-modal emotional understanding capability of patient with autism spectrum disorder
CN112086196A (en) * 2020-09-16 2020-12-15 中国科学院自动化研究所 Method and system for multi-selective attention assessment and training
CN112966186A (en) * 2021-03-30 2021-06-15 北京三快在线科技有限公司 Model training and information recommendation method and device
WO2021169367A1 (en) * 2020-02-27 2021-09-02 深圳大学 Multi-layer attention based recommendation method
CN114209324A (en) * 2022-02-21 2022-03-22 北京科技大学 Psychological assessment data acquisition method based on image visual cognition and VR system
CN117442154A (en) * 2023-11-21 2024-01-26 波克医疗科技(上海)有限公司 Visual detection system based on children's attention
CN117576771A (en) * 2024-01-17 2024-02-20 之江实验室 Visual attention assessment method, device, medium and equipment


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Study on establishing a dynamic eye-movement system prediction model for children with autism spectrum disorder; Sun Binbin; He Huijing; Jiang Wen; Zhang Shi; Lin Yan; Wan Guobin; Chinese Journal of Practical Pediatrics; 2018-04-06 (No. 04); full text *
Study on the visual attention characteristics of children with autism toward cartoon and live-action dynamic social situations; Zhang Kun; Gao Lei; Yuan Yishuang; Peng Shixin; Xu Ruyi; Journal of Guangxi Normal University (Philosophy and Social Sciences Edition); 2020-05-15 (No. 03); full text *

Also Published As

Publication number Publication date
CN117854714A (en) 2024-04-09

Similar Documents

Publication Publication Date Title
Aslin Infant eyes: A window on cognitive development
US8388529B2 (en) Differential diagnosis of neuropsychiatric conditions
Venker et al. An open conversation on using eye-gaze methods in studies of neurodevelopmental disorders
Lai et al. Measuring saccade latency using smartphone cameras
Zou et al. Distinct generation of subjective vividness and confidence during naturalistic memory retrieval in angular gyrus
Shukla et al. SMART-T: A system for novel fully automated anticipatory eye-tracking paradigms
CN104185020A (en) System and method for detecting stereo visual fatigue degree
Scassellati Using social robots to study abnormal social development
CN115329818A (en) Multi-modal fusion attention assessment method, system and storage medium based on VR
CN113658697B (en) Psychological assessment system based on video fixation difference
McCormick et al. Not doomed to repeat: Enhanced medial prefrontal cortex tracking of errors promotes adaptive behavior during adolescence
Fuzi et al. Voluntary and spontaneous smile quantification in facial palsy patients: validation of a novel mobile application
CN117854714B (en) Information recommendation method and device based on eye movement tracking
Sava-Segal et al. Individual differences in neural event segmentation of continuous experiences
US20150160474A1 (en) Corrective lens prescription adaptation system for personalized optometry
CN116392123A (en) Multi-movement symptom screening method and system based on game interaction and eye movement tracking
Riemer et al. Interrelations between the perception of time and space in large-scale environments
Miller Using Eye-Tracking to Understand the Complex Relations Between Attention and Language in Children's Spatial Skill Development
US11580874B1 (en) Methods, systems, and computer readable media for automated attention assessment
US20210353208A1 (en) Systems and methods for automated passive assessment of visuospatial memory and/or salience
US20210259603A1 (en) Method for evaluating a risk of neurodevelopmental disorder with a child
Zhang et al. Opposing timing constraints severely limit the use of pupillometry to investigate visual statistical learning
KR20210028199A (en) How to assess a child's risk of neurodevelopmental disorders
Stone Eye and body tracking in the lab, in the wild, and in the clinic
Shanmugaraja et al. Cognitive Assessment Based on Eye Tracking Using Device‐Embedded Cameras via Tele‐Neuropsychology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant