WO2016115895A1 - Method and system for identifying an online user type based on visual behavior - Google Patents

Method and system for identifying an online user type based on visual behavior

Info

Publication number
WO2016115895A1
Authority
WO
WIPO (PCT)
Prior art keywords: user, eye movement, user type, data, data set
Prior art date
Application number
PCT/CN2015/087701
Other languages
English (en)
Chinese (zh)
Inventor
吕胜富
栗觅
马理旺
钟宁
Original Assignee
北京工业大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京工业大学
Publication of WO2016115895A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines

Definitions

  • The invention relates to the field of automatic user type identification technology, and in particular to a method and system for online user type recognition based on visual behavior.
  • The network has become an indispensable communication tool and information exchange platform for people's life, study, and work.
  • However, the network can only passively accept a user's information requests through computer hardware such as the keyboard, mouse, and touch screen, slowly receiving the user's manual input, while the user can quickly obtain a large amount of information from the computer interface and audio, causing an imbalance in human-computer interaction bandwidth.
  • research on computer network intelligence has attracted widespread attention.
  • Eye tracking technology provides one way to realize network intelligence. Eye tracking technology (referred to as eye movement technology) can record the user's eye movements, enabling users to operate the interface directly through the visual channel and thereby alleviating the bandwidth imbalance problem described above.
  • Existing online user type identification relies mainly on questionnaires, online click-through rates, and similar methods; it is therefore difficult to capture the psychological activity of users while they are online, recognition accuracy is low, and credibility is not high.
  • The object of the present invention is to provide a method and system for identifying online user types based on visual behavior, which can actively record the eye movement data of online users and identify users according to differences in that eye movement data; data extraction is simple and reliable, and recognition accuracy and credibility are high.
  • a method for recognizing an online user type based on visual behavior is provided.
  • Eye movement data of one or more different types of users are collected and processed to obtain a gaze information data set F and a user type set.
  • one or more eye movement feature data are obtained according to the gaze information in the gaze information data set F to form a sample data set;
  • Eye movement feature data selected from the sample data set are input to a support vector machine, and a user type classifier is trained, completing the machine learning process to obtain the classifier;
  • the collected eye movement data of any user on the network is input to the trained user type classifier, and the user type of any user on the network is identified according to the classifier.
  • t_fk is the time of the k-th browsing;
  • n_fk is the number of gaze points browsed during time t_fk;
  • d_lk is the diameter of the left pupil;
  • d_rk is the diameter of the right pupil.
  • Forming the sample data set from the plurality of eye movement feature data includes the following steps:
  • training to obtain the classifier includes the following steps:
  • a basic sampling unit M_i = {fq_fi, S_Di, D_i, c_q};
  • the second step is to extract the eye movement feature data, that is, the training sample characteristic parameters fq_fi, S_Di and D_i, to form a feature parameter vector;
  • the user type identification is implemented by the following steps:
  • the first step is to input the eye movement data of any user on the network into the trained user type classifier
  • the user type of any user on the network is identified according to the classifier.
  • A visual behavior based online user type identification system comprises an acquisition processing unit, an obtaining unit, a training unit, and an identification unit connected in sequence. The acquisition processing unit is used to collect and process the eye movement data of one or more different types of users to obtain a gaze information data set and a user type set; the obtaining unit is configured to obtain one or more eye movement feature data according to the gaze information in the gaze information data set F to form a sample data set; the training unit is configured to input eye movement feature data selected from the sample data set to a support vector machine and train a user type classifier, completing the machine learning process to obtain the classifier; and the identification unit is configured to input the collected eye movement data of any user on the network to the trained user type classifier and identify the user type of that user according to the classifier.
  • the obtaining unit further includes:
  • extracting the eye movement feature data, that is, the training sample feature parameters fq_fi, S_Di and D_i, to form a feature parameter vector;
  • the identifying unit further includes: inputting the collected eye movement data of any user on the network to the trained user type classifier;
  • the user type of any user on the network is identified according to the classifier.
  • The invention discloses a visual behavior based online user type recognition method and system, which mainly uses eye tracking technology to identify the online user type according to the online user's visual mode and a plurality of eye movement features. It is used in an eye movement human-computer interaction environment: three kinds of eye movement feature data are obtained by calculation while the user browses the webpage, and the type of online user is determined according to differences in the eye movement feature data.
  • User recognition based on visual behavior can actively record the eye movement data of online users; data extraction is simple and reliable, with high accuracy and high credibility.
  • FIG. 1 is a flow chart of an embodiment of a visual behavior based online user type identification method according to the present invention
  • FIG. 2 is a schematic diagram of an embodiment of eye movement data
  • FIG. 3 is a schematic structural diagram of an embodiment of a visual behavior based online user type identification system according to the present invention.
  • FIG. 1 is a flowchart of an embodiment of the visual behavior based online user type identification method of the present invention; an embodiment of the method is described below in conjunction with the eye movement data embodiment shown in FIG. 2.
  • the visual behavior based online user type identification method may mainly include the following steps:
  • Visual behavior refers to people's sensitivity to graphic-symbol information and the way of thinking reflected by the visual senses (the behavior of the eyeballs driven by visual perception). Here it refers to the characteristics of different types of online users when browsing the web; for example, the elderly pay more attention to the central area of a webpage, while young people exhibit an irregular free-browsing strategy.
  • Eye movement data here refers to data related to eye movements, including but not limited to data related to eye movements (or eye movement patterns) such as gaze, saccade, and follow-up.
  • a method for collecting eye movement data can be realized by a combination of an optical system, a pupil center coordinate extraction system, a vision and pupil coordinate superposition system, and an image and data recording and analysis system.
  • An eye tracker with a camera or similar device can collect the eye movement data of the online user, and abnormal data can be culled to obtain a correct gaze information data set.
  • the eye tracker can collect and record the eye movement data.
  • Eye movement data and user types are used as learning sets to learn the eye movement patterns of different users. From the eye movement data, the sensitivity of a user browsing the webpage to different graphic-symbol information and/or the behavior reflected by the visual senses can be known.
  • Gaze information here refers to the data in the eye movement data that relates to the object being observed.
  • User type here refers to the type of network access user corresponding to the collected eye movement data.
  • the types that need to be divided can be preset, such as types by age (elderly, young people), types by gender (men, women), and so on.
  • In this embodiment, the online user type is preset to the age type.
  • Eye movement data generated by the visual behavior of 52 users of different types, each performing a browsing task 10 times in the web interface, can be collected and recorded at a sampling frequency of 120 Hz using a sensing device including an eye tracker (e.g., the infrared camera of a Tobii T120 non-invasive eye tracker manufactured in Sweden).
  • The gaze information data set F = {f_1, f_2, f_3, f_4, …, f_m} is obtained.
  • Each f_k is a four-element array containing the four kinds of information (t_fk, n_fk, d_lk, d_rk), which in turn represent the browsing time t_fk of the k-th record, the number of gaze points browsed during time t_fk, the left pupil diameter at that time, and the right pupil diameter at that time.
  • A gaze point may refer to a point on the webpage where the eye remains still while the user browses.
  • For example, the gaze information data f_1 at the first browsing of the first user includes four kinds of information (t_f1, n_f1, d_l1, d_r1), where t_f1 is the time of the first user's first browsing; n_f1 is the number of gaze points viewed in time t_f1; d_l1 is the left pupil diameter (left eye pupil diameter); and d_r1 is the right pupil diameter (right eye pupil diameter).
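  • As an illustrative sketch only (not part of the patent text), the gaze information data set F and the user type set C described above can be represented as follows; all field names and values here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class GazeRecord:
    """One browsing record f_k = (t_fk, n_fk, d_lk, d_rk)."""
    t_f: float  # browsing time for this record (seconds)
    n_f: int    # number of gaze points observed during t_f
    d_l: float  # left pupil diameter (mm)
    d_r: float  # right pupil diameter (mm)

# F: gaze information data set; C: user type labels, e.g. 0 = elderly, 1 = young
F = [GazeRecord(t_f=35.2, n_f=96, d_l=3.1, d_r=3.2),
     GazeRecord(t_f=28.7, n_f=88, d_l=2.9, d_r=3.0)]
C = [0, 1]
```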
  • In step S2, one or more eye movement feature data (at least one) are obtained based on the gaze information in the gaze information data set F to form a sample data set.
  • A specific method is as follows: extract the gaze information contained in the gaze information data set F and, by calculation, obtain eye movement feature data (i.e., feature data characterizing eye movements) such as the saccade distance S_Dk, the gaze frequency fq_fk, and the pupil diameter d_fk for each user's browsing task.
  • The saccade distance refers to the Euclidean distance between two successive gaze points when the position of the gaze point changes as a user performs a browsing task.
  • A method for calculating the saccade distance S_Dk may be as follows: when the first user performs the browsing task for the first time, let the coordinates of the i-th gaze point be (x_i, y_i) and the coordinates of the (i+1)-th gaze point be (x_{i+1}, y_{i+1}); the average of the distances between consecutive gaze points is taken as the feature value of the current saccade distance (S_D1).
  • The calculation formula is: S_D1 = (1/(n-1)) * Σ_{i=1}^{n-1} √((x_{i+1} - x_i)² + (y_{i+1} - y_i)²), where n is the number of gaze points.
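  • A minimal sketch of this calculation, assuming gaze points are given as (x, y) coordinate pairs (the function name is hypothetical):

```python
import math

def saccade_distance(points):
    """Average Euclidean distance between consecutive gaze points:
    S_D = (1/(n-1)) * sum over i of dist(p_i, p_{i+1})."""
    if len(points) < 2:
        return 0.0
    total = sum(math.dist(points[i], points[i + 1])
                for i in range(len(points) - 1))
    return total / (len(points) - 1)

# e.g. four gaze points from one browsing task
print(saccade_distance([(10, 20), (40, 60), (45, 58), (90, 10)]))
```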
  • the gaze frequency refers to the number of gaze points per unit time each time the user performs a browsing task.
  • The gaze frequency data set of all 52 users performing 10 browsing tasks each (i.e., 520 records) is thereby obtained.
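  • Since the gaze frequency is defined as gaze points per unit time, it follows directly from each record's gaze point count and browsing time; a sketch reusing the hypothetical GazeRecord fields assumed earlier:

```python
def gaze_frequency(n_f, t_f):
    """Gaze frequency fq = n_f / t_f (gaze points per unit time)."""
    return n_f / t_f

# One value per browsing record; over 52 users x 10 tasks this yields
# the 520-element gaze frequency set FQ_F.
FQ_F = [gaze_frequency(r.n_f, r.t_f) for r in F]
```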
  • The pupil diameter d_fk may refer to the pupil diameter value at a given gaze point of a user during a given browsing task.
  • The left and right pupil diameter data d_lk and d_rk collected in the data set are extracted, and the pupil diameter can be calculated from them.
  • Each row of the pupil diameter matrix represents the pupil diameter values of the gaze points of one user under one browsing task; with n gaze points in total, each row has n pupil diameter values.
  • The element D_i of the pupil diameter matrix is the average value of the i-th row of the matrix, that is: D_i = (1/n) * Σ_{j=1}^{n} d_ij, where d_ij is the pupil diameter at the j-th gaze point.
  • A_d = {1.2523, 1.3799, …, -1.2757}.
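  • The negative value in A_d suggests the row averages are standardized; a sketch under that assumption (the matrix values here are hypothetical):

```python
import numpy as np

# Pupil diameter matrix: one row per browsing task, one column per gaze
# point (e.g. the mean of the left and right pupil diameters).
pupil = np.array([[3.1, 3.3, 3.0, 3.2],
                  [2.8, 2.9, 3.0, 2.7]])

D = pupil.mean(axis=1)            # D_i: average pupil diameter of row i
A_d = (D - D.mean()) / D.std()    # z-score normalization across tasks
print(A_d)
```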
  • The three kinds of eye movement feature data, namely the gaze frequency fq_fi, the pupil diameter D_i, and the saccade distance S_Di, are selected from each browsing task performed by each of the above users to form the sample data set, as assembled in the sketch below.
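  • A sketch of assembling the basic sampling units M_i = {fq_fi, S_Di, D_i, c_q} into arrays suitable for training; the function name and values are illustrative only:

```python
import numpy as np

def build_samples(fq, sd, d, labels):
    """Stack gaze frequency, saccade distance and pupil diameter into a
    feature matrix X (one row per sampling unit M_i) and label vector y."""
    X = np.column_stack([fq, sd, d])
    y = np.asarray(labels)
    return X, y

X, y = build_samples([12.4, 9.8], [35.1, 28.4], [3.15, 2.85], [0, 1])
```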
  • In step S3, eye movement feature data selected from the sample data set are input to a support vector machine and the user type classifier is trained, thereby completing the machine learning process to obtain the classifier.
  • The eye movement feature data selected from the sample data set of step S2, that is, the sets of values in the gaze frequency array, the pupil diameter array, and the saccade distance array, are input to the support vector machine (SVM) for training, thereby training the user type classifier.
  • The kernel function is chosen to be a Gaussian (radial basis) function, and an existing decomposition-type algorithm can be used to train the classifier for the corresponding user types (e.g., elderly or young people).
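  • A sketch of this training step using a Gaussian (RBF) kernel SVM, continuing the assumed X and y above; the scikit-learn library and the hyperparameter values are assumptions, not specified by the patent:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: (n_samples, 3) feature matrix of gaze frequency, saccade distance,
# pupil diameter; y: user type labels (e.g. 0 = elderly, 1 = young).
clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X, y)
```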
  • In step S4, the collected eye movement data of any user on the network are input to the trained user type classifier, and the user type of that user is identified according to the classifier.
  • The eye movement data here are any collected eye movement data of an online user (e.g., captured or acquired by the eye tracker), and may include all eye movement data already collected (e.g., everything collected in step S1) and/or real-time (current) eye movement data tracked further while the user browses the Internet; that is, any eye movement data of a user browsing online can be obtained and input into the trained user type classifier.
  • One way may be to determine the corresponding online user type through the output decision function, thereby identifying the user type of the online user corresponding to the eye movement data (for example: young person or elderly person, woman or man, luxury goods user or general goods user, etc.).
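  • Continuing the sketch, identification of a new online user then reduces to computing the same three features and applying the trained classifier; the feature values below are hypothetical:

```python
import numpy as np

x_new = np.array([[11.7, 33.0, 3.05]])      # [gaze freq, saccade dist, pupil diam]
user_type = clf.predict(x_new)[0]           # e.g. 0 = elderly, 1 = young
margin = clf.decision_function(x_new)[0]    # output of the decision function
```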
  • A block diagram of an embodiment of a visual behavior based online user type identification system in accordance with the present invention is shown in FIG. 3.
  • the visual behavior based online user type identification system 300 includes an acquisition processing unit 301, an acquisition unit 302, a training unit 303, and an identification unit 304.
  • The acquisition processing unit 301 can use various eye movement data collection devices, such as an eye tracker, to collect the eye movement data of the online user, and can then cull abnormal data to obtain a correct gaze information data set, as in step S1.
  • In the example of distinguishing user types by age, the eye movement data generated when users browse the webpage in the interface are recorded, and the eye movement data and user types are used as learning sets to learn the eye movement patterns of different users.
  • The gaze information data set F = {f_1, f_2, f_3, f_4, …, f_m} contains all the gaze information, and the user type set is C = {c_1, c_2, c_3, …, c_q}.
  • Each f_k is a quaternary array containing four kinds of information (t_fk, n_fk, d_lk, d_rk): t_fk is the time of the browsing; n_fk is the number of gaze points browsed in time t_fk; d_lk is the diameter of the left pupil; d_rk is the diameter of the right pupil.
  • For the specific processing and function of the acquisition processing unit 301, refer to the description of step S1.
  • the obtaining unit 302 is configured to obtain one or more eye movement feature data (or obtain at least one eye movement feature data) according to the gaze information in the gaze information data set F to form a sample data set. For example, in the example of step S2, it is possible to extract and calculate a plurality of eye movement characteristic data based on the gaze information data set from the acquisition processing unit 301 to constitute a sample data set.
  • The eye movement feature data include the saccade distance S_Dk, the gaze frequency fq_fk, the pupil diameter d_fk, and the like.
  • the sampled eye movement data set can be normalized to obtain an optimized new sample data set M".
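  • A sketch of such a normalization step; scikit-learn's StandardScaler is one common choice, and the patent does not name a specific normalization scheme:

```python
from sklearn.preprocessing import StandardScaler

# Normalize the raw sample feature matrix to obtain the optimized set M"
scaler = StandardScaler()
M_norm = scaler.fit_transform(X)   # X: feature matrix from the earlier sketch
```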
  • For the specific processing and function of the obtaining unit 302, refer to the description of step S2.
  • The training unit 303 is configured to input eye movement feature data selected from the sample data set to a support vector machine and train a user type classifier, thereby completing the machine learning process to obtain the classifier.
  • Eye movement feature data from the sample data set of the obtaining unit 302, that is, the sets of values in the gaze frequency array, the pupil diameter array, and the saccade distance array, are selected and input to the support vector machine SVM, and the user type classifier is trained.
  • The SVM training can select the eye movement feature records of elderly and young users from the eye movement feature arrays as training samples, select one of the user types as the recognition target, and extract the characteristic parameters for the i-th eye movement data record.
  • The feature parameter vectors of the training samples and the SVM outputs are used as the training set, with the kernel function being a Gaussian (radial basis) function, and an existing decomposition algorithm is used to train the user type support vector machine and obtain the support vectors of the training set.
  • The specific processing and function of the training unit 303 are described in the description of step S3.
  • the identification unit 304 is configured to input the collected eye movement data of any user on the network to the trained user type classifier, and identify the user type of any user on the network according to the classifier.
  • The eye movement data may be eye movement data (current, past, real-time, etc.) of any online user captured or collected by the eye tracker, including all data already collected (e.g., all eye movement data collected in step S1) and/or real-time (current) eye movement data tracked further while the user browses the Internet. That is, any eye movement data of a user browsing online are obtained and input to the trained user type classifier.
  • One way may be for the classifier to determine the corresponding online user type through the output decision function, thereby identifying the user type of the online user corresponding to the eye movement data (for example: young or elderly, woman or man, luxury goods user or general goods user, etc.).
  • The specific processing and function of the identification unit 304 are described in the description of step S4.
  • In one example, 52 users, including 26 elderly people and 26 young people, were recorded at a sampling frequency of 120 Hz using the Tobii T120 non-invasive eye tracker produced in Sweden, each performing a browsing task 10 times; the resulting eye movement data are used to learn the eye movement patterns of different types of users when browsing the webpage.
  • The collected eye movement data of the 52 users and the corresponding user type data divide all the records into two basic data sets: the gaze information data set containing all the gaze information of the users' eye movement data, and the user type set.
  • FQ_F = {437.9683, 230.3333, …, 584.2778}.
  • The basic sampling unit is M_i = {fq_fi, S_Di, D_i, c_q}, and the resulting sample data set is the collection of all such units, M = {M_1, M_2, …}.
  • The sample data to be identified are input to the classifier obtained by the sample training above and judged by the output decision function, with the gaze frequency, the pupil diameter, and the saccade distance as the selected features.
  • The classification function selects a linear function; the eye movement data of the user to be identified are input into the trained classifier, and the identified user type is output.
  • The saccade distance, the gaze frequency, the pupil diameter, and their feature combination are classified respectively with the linear function, the polynomial function, the RBF kernel function, and the sigmoid function.
  • Table 1 shows the classification results as follows:
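  • The table itself is not reproduced in this text. As an illustration only, a comparison of the four kernels over the feature sets could be run as follows; the use of cross-validation is an assumption, since the patent does not state its evaluation protocol:

```python
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Requires the full sample set (e.g. the 520 records), not the toy X, y above.
for kernel in ("linear", "poly", "rbf", "sigmoid"):
    model = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{kernel:8s} accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```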
  • The present invention is directed to a visual behavior based online user type identification method and system for the eye movement human-computer interaction environment. By obtaining three kinds of eye movement feature data while the user browses the webpage and judging according to differences in those feature data, the online user type is identified from visual behavior; the eye movement data of online users can be actively recorded, data extraction is simple and reliable, and accuracy and credibility are high.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Eye Examination Apparatus (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to a method and system for identifying an online user type based on visual behavior. Eye movement data of one or more different types of users are collected and processed to obtain a gaze information data set and a user type set; one or more eye movement feature data are obtained from the gaze information in the gaze information data set to form a sample data set; eye movement feature data selected from the sample data set are input to a support vector machine, and a user type classifier is obtained by training, completing a machine learning process to obtain the classifier; the collected eye movement data of any online user are input to the trained user type classifier, and the user type of that online user is identified according to the classifier. Three kinds of eye movement feature data generated as each user browses a webpage are acquired and calculated, mainly using eye tracking technology, and online user types are judged according to differences in the eye movement feature data. Through user identification based on visual behavior, the eye movement data of online users can be actively recorded, the data can be extracted simply and reliably, and accuracy and credibility are high.
PCT/CN2015/087701 2015-01-23 2015-08-20 Method and system for identifying an online user type based on visual behavior WO2016115895A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510037404.2A CN104504404B (zh) 2015-01-23 2015-01-23 Method and system for identifying online user type based on visual behavior
CN2015100374042 2015-01-23

Publications (1)

Publication Number Publication Date
WO2016115895A1 true WO2016115895A1 (fr) 2016-07-28

Family

ID=52945800

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/087701 WO2016115895A1 (fr) 2015-08-20 Method and system for identifying an online user type based on visual behavior

Country Status (2)

Country Link
CN (1) CN104504404B (fr)
WO (1) WO2016115895A1 (fr)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107920251A (zh) * 2016-10-06 2018-04-17 英特尔公司 Method and system for adjusting video quality based on the viewer's distance from the display
CN109558005A (zh) * 2018-11-09 2019-04-02 中国人民解放军空军工程大学 Adaptive human-machine interface configuration method
CN109800706A (zh) * 2019-01-17 2019-05-24 齐鲁工业大学 Feature extraction method and system for eye movement video data
EP3671464A1 (fr) * 2018-12-17 2020-06-24 Citrix Systems, Inc. Distraction factor used in A/B testing of a web application
CN111882365A (zh) * 2020-08-06 2020-11-03 中国农业大学 Efficient intelligent commodity recommendation system and method for self-service vending machines
CN111970958A (zh) * 2017-11-30 2020-11-20 思维股份公司 System and method for detecting neurological disorders and measuring general cognitive ability
CN113589742A (zh) * 2021-08-16 2021-11-02 贵州梓恒科技服务有限公司 Numerical control system for a winding machine
CN113689138A (zh) * 2021-09-06 2021-11-23 北京邮电大学 Phishing susceptibility prediction method based on eye tracking and social engineering factors

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104504404B (zh) * 2015-01-23 2018-01-12 北京工业大学 Method and system for identifying online user type based on visual behavior
CN105138961A (zh) * 2015-07-27 2015-12-09 华南师范大学 Method and system for automatically identifying romantic attraction based on eye tracking big data
CN106073805B (zh) * 2016-05-30 2018-10-19 南京大学 Fatigue detection method and device based on eye movement data
CN106933356A (zh) * 2017-02-28 2017-07-07 闽南师范大学 Method for rapidly determining the type of a remote learner based on an eye tracker
CN107049329B (zh) * 2017-03-28 2020-04-28 南京中医药大学 Blink frequency detection device and detection method
CN107562213A (zh) * 2017-10-27 2018-01-09 网易(杭州)网络有限公司 Visual fatigue state detection method and device, and head-mounted display device
CN107783945B (zh) * 2017-11-13 2020-09-29 山东师范大学 Method and device for evaluating attention to search result webpages based on eye tracking
CN109255309B (zh) * 2018-08-28 2021-03-23 中国人民解放军战略支援部队信息工程大学 EEG and eye movement fusion method and device for object detection in remote sensing images
CN109726713B (zh) * 2018-12-03 2021-03-16 东南大学 System and method for detecting user regions of interest based on a consumer-grade gaze tracker
CN109620259B (zh) * 2018-12-04 2020-10-27 北京大学 System for automatic identification of children with autism based on eye movement technology and machine learning
CN109800434B (zh) * 2019-01-25 2023-07-18 陕西师范大学 Abstractive text title generation method based on eye movement attention
CN111144379B (zh) * 2020-01-02 2023-05-23 哈尔滨工业大学 Automatic recognition method for the optomotor response of mice based on image technology
CN111475391B (zh) * 2020-04-03 2024-04-16 中国工商银行股份有限公司 Eye movement data processing method, device and system
CN111966223B (zh) * 2020-08-17 2022-06-28 陈涛 Non-perceptual human-machine recognition method, system, device and storage medium for MR glasses

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101908152A (zh) * 2010-06-11 2010-12-08 电子科技大学 Eye state recognition method based on a user-customized classifier
CN103500011A (zh) * 2013-10-08 2014-01-08 百度在线网络技术(北京)有限公司 Method and device for analyzing eye movement trajectory patterns
CN104504404A (zh) * 2015-01-23 2015-04-08 北京工业大学 Method and system for identifying online user type based on visual behavior

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7146050B2 (en) * 2002-07-19 2006-12-05 Intel Corporation Facial classification of static images using support vector machines
WO2009001558A1 (fr) * 2007-06-27 2008-12-31 Panasonic Corporation Device and method for estimating human condition
CN103324287B (zh) * 2013-06-09 2016-01-20 浙江大学 Method and system for computer-aided sketch drawing based on eye movement and stroke data

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101908152A (zh) * 2010-06-11 2010-12-08 电子科技大学 Eye state recognition method based on a user-customized classifier
CN103500011A (zh) * 2013-10-08 2014-01-08 百度在线网络技术(北京)有限公司 Method and device for analyzing eye movement trajectory patterns
CN104504404A (zh) * 2015-01-23 2015-04-08 北京工业大学 Method and system for identifying online user type based on visual behavior

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107920251A (zh) * 2016-10-06 2018-04-17 英特尔公司 Method and system for adjusting video quality based on the viewer's distance from the display
CN111970958A (zh) * 2017-11-30 2020-11-20 思维股份公司 System and method for detecting neurological disorders and measuring general cognitive ability
CN109558005A (zh) * 2018-11-09 2019-04-02 中国人民解放军空军工程大学 Adaptive human-machine interface configuration method
CN109558005B (zh) * 2018-11-09 2023-05-23 中国人民解放军空军工程大学 Adaptive human-machine interface configuration method
EP3671464A1 (fr) * 2018-12-17 2020-06-24 Citrix Systems, Inc. Distraction factor used in A/B testing of a web application
US11144118B2 (en) 2018-12-17 2021-10-12 Citrix Systems, Inc. Distraction factor used in A/B testing of a web application
CN109800706A (zh) * 2019-01-17 2019-05-24 齐鲁工业大学 Feature extraction method and system for eye movement video data
CN111882365A (zh) * 2020-08-06 2020-11-03 中国农业大学 Efficient intelligent commodity recommendation system and method for self-service vending machines
CN111882365B (zh) * 2020-08-06 2024-01-26 中国农业大学 Efficient intelligent commodity recommendation system and method for self-service vending machines
CN113589742A (zh) * 2021-08-16 2021-11-02 贵州梓恒科技服务有限公司 Numerical control system for a winding machine
CN113589742B (zh) * 2021-08-16 2024-03-29 贵州梓恒科技服务有限公司 Numerical control system for a winding machine
CN113689138A (zh) * 2021-09-06 2021-11-23 北京邮电大学 Phishing susceptibility prediction method based on eye tracking and social engineering factors
CN113689138B (zh) * 2021-09-06 2024-04-26 北京邮电大学 Phishing susceptibility prediction method based on eye tracking and social engineering factors

Also Published As

Publication number Publication date
CN104504404A (zh) 2015-04-08
CN104504404B (zh) 2018-01-12

Similar Documents

Publication Publication Date Title
WO2016115895A1 (fr) Method and system for identifying an online user type based on visual behavior
WO2016112690A1 (fr) Method and device for recognizing an online user's state from eye movement data
US11762474B2 (en) Systems, methods and devices for gesture recognition
WO2016123777A1 (fr) Method and device for presenting and recommending objects based on a biological characteristic
KR102062586B1 (ko) Cosmetics recommendation system and method based on cosmetics-related review data
CN113722474A (zh) Text classification method, apparatus, device, and storage medium
US20210118550A1 (en) System and method for automated diagnosis of skin cancer types from dermoscopic images
CN106537387B (zh) Retrieving/storing images associated with events
Akshay et al. Machine learning algorithm to identify eye movement metrics using raw eye tracking data
CN109086794A (zh) Driving behavior pattern recognition method based on a t-LDA topic model
CA2883697C (fr) Identifying movements using a motion sensing device coupled with an associative memory
WO2015176417A1 (fr) Feature-grouping normalization method for cognitive state recognition
Creagh et al. Interpretable deep learning for the remote characterisation of ambulation in multiple sclerosis using smartphones
Zhang et al. Pose-based tremor classification for Parkinson’s disease diagnosis from video
Alyasseri et al. Eeg-based person identification using multi-verse optimizer as unsupervised clustering techniques
CN114424941A (zh) Fatigue detection model construction method, fatigue detection method, apparatus and device
Singh et al. A robust, real-time camera-based eye gaze tracking system to analyze users’ visual attention using deep learning
Krishnamoorthy et al. StimulEye: An intelligent tool for feature extraction and event detection from raw eye gaze data
Drishya et al. Cyberbully image and text detection using convolutional neural networks
Jiang et al. View-independent representation with frame interpolation method for skeleton-based human action recognition
WO2018122868A1 (fr) Brain cloning system and method thereof
Tyagi et al. Emotionomics: Pioneering Depression Detection Through Facial Expression Analytics
Mirowski et al. Predicting poll trends using twitter and multivariate time-series classification
Prome et al. LieVis: A Visual Interactive Dashboard for Lie Detection Using Machine Learning and Deep Learning Techniques
Riaz et al. Surface EMG Real-Time Chinese Language Recognition Using Artificial Neural Networks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15878564

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15878564

Country of ref document: EP

Kind code of ref document: A1