CN109199411B - Case-conscious person identification method based on model fusion - Google Patents


Info

Publication number: CN109199411B
Authority: CN (China)
Prior art keywords: eye, total, time, fixation, tested person
Legal status: Active (granted)
Application number: CN201811135018.7A
Other languages: Chinese (zh)
Other versions: CN109199411A
Inventors: 唐闺臣, 梁瑞宇, 谢跃, 徐梦圆, 叶超
Current assignee: Nanjing Institute of Technology
Original assignee: Nanjing Institute of Technology
Priority and filing date: 2018-09-28
Application filed by: Nanjing Institute of Technology
Publication of CN109199411A: 2019-01-15
Publication of CN109199411B (grant): 2021-04-09

Classifications

    • A61B 5/165: Evaluating the state of mind, e.g. depression, anxiety (under A61B 5/16: devices for psychotechnics; testing reaction times; evaluating the psychological state)
    • A61B 5/1103: Detecting eye twinkling (under A61B 5/11: measuring movement of the entire body or parts thereof)
    • A61B 5/7267: Classification of physiological signals or data involving training the classification device (under A61B 5/7264: classification, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems)
    • G06F 18/2411: Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06V 40/193: Preprocessing; feature extraction (under G06V 40/18: eye characteristics, e.g. of the iris)

(All of the above fall within A61B 5/00, measuring for diagnostic purposes and identification of persons; G06F 18/00, pattern recognition; and G06V 40/00, recognition of biometric and human-related patterns.)

Abstract

The invention discloses a model-fusion-based method for identifying persons with knowledge of a case, comprising the following steps: extract the 32-dimensional eye-movement features of each subject while viewing a single picture; train support vector machine model A on the 32-dimensional features to assess each subject's statement credibility for the single picture, outputting the probabilities f₁(xᵢ) and f₂(xᵢ); extract the 110-dimensional eye-movement features of each subject while viewing a combined picture; train support vector machine model B on the 110-dimensional features to assess each subject's statement credibility for the combined picture, outputting the probabilities g₁(xᵢ) and g₂(xᵢ); fuse the classifier probabilities of models A and B by the multiplication rule to obtain joint probabilities, and take the class with the maximum joint probability for each subject as the final decision result. The invention can effectively suppress anti-lie-detection countermeasures and improves algorithm efficiency.

Description

Case-conscious person identification method based on model fusion
Technical Field
The invention relates to the technical field of criminal investigation and interrogation analysis, and in particular to a model-fusion-based method for identifying persons who have knowledge of a case (hereinafter, case-informed persons).
Background
In the context of criminal investigation, the key to interrogating a criminal suspect is evaluating the suspect's abnormal emotions, so-called "lie detection". Interrogators judge a suspect's psychological state by observing his expressions, and use interrogation techniques targeted at loopholes in his statements to break through his psychological defenses and compel him to tell the truth. However, ordinary people's ability to detect lies is close to guessing: it usually relies on intuition, so its accuracy is only slightly above chance. In practice it also depends on a small number of highly experienced interrogators, which is obviously time-consuming and inefficient.
Since the psychological changes of a person who is lying cause changes in certain physiological parameters (such as skin conductance, heartbeat, blood pressure, respiration, brain waves, and voice), detecting these changes is an effective auxiliary means of evaluating whether a subject has knowledge of a case. In early studies, identifying case-informed criminal suspects with multi-channel physiological recorders was one of the most common methods. However, the physiological indices used by such instruments are affected by many factors, including the subject's physical condition and mental state, the intensity of the task stimulus, and proficiency with the lie-detection equipment.
In recent years, with the development of brain-cognition neurotechnology, researchers can directly observe the neural activity of the relevant brain areas when lying occurs. Compared with traditional lie-detection techniques that rely on changes in external physiological activity, this approach is more objective and can reveal the internal regularities of lying, and it has become one of the development directions of lie-detection technology. However, the professional equipment it requires is bulky and expensive, which limits its practicality, and corresponding anti-lie-detection countermeasures can affect the test results.
Physiological-signal-based lie detection therefore still has room for improvement in practical applications, mainly for two reasons. 1) The subject's degree of cooperation: most physiological lie-detection methods, when collecting parameters such as ECG, electrodermal activity, blood pressure, and brain waves, require attaching contact electrodes or sensor patches to the subject's body, so the subject must cooperate voluntarily; otherwise the subject can use covert countermeasures (such as wiggling the toes or directed thinking) to interfere with the test results. 2) The covertness of the measurement: emotional stress is of major research significance in lie detection, but a conspicuous testing device itself imposes extra stress on the subject, in which case the influence of emotional fluctuation on the measurement is hard to estimate. Although voice-based lie detection is somewhat covert, speech is easily affected by external factors such as dialects, accents, and speaking habits, the technical difficulty is high, and research on it has only just begun. Effective lie detection should therefore be non-contact and highly covert, and the analyzed signals should be easy to acquire and process.
The conventional case-informed person identification methods described above are therefore inconvenient and flawed and need further improvement. Despite considerable effort in the related art, no applicable method has been developed for a long time, and general identification algorithms cannot solve these problems properly; this is clearly a problem that practitioners urgently want to solve. Unlike the signals targeted by other anti-lie-detection countermeasures, some eye-movement indices are not under voluntary control, and deliberately trying to control certain eye-movement indices itself makes those indices abnormal. Identifying case-informed persons from eye-movement indices is therefore feasible, and how to realize it is the problem that currently needs to be solved.
Disclosure of Invention
The invention aims to overcome the inconveniences and defects of prior-art case-informed person identification methods. The model-fusion-based case-informed person identification method solves the technical problems of prior-art methods, which are constrained by the subject's degree of cooperation, lack covert testing, and are inefficient. It identifies case-informed persons from eye-movement data, which effectively suppresses anti-lie-detection countermeasures, and by fusing models built on 32-dimensional and 110-dimensional eye-movement features it exploits the subject's psychological responses in two different viewing modes and improves algorithm efficiency. The method is ingenious and novel, achieves high recognition accuracy, and has good application prospects.
In order to achieve the purpose, the invention adopts the technical scheme that:
a case-conscious person identification method based on model fusion comprises the following steps,
step (A), extracting the 32-dimensional eye movement characteristics of each tested person when watching a single picture;
step (B), training a support vector machine model A based on 32-dimensional eye movement characteristics to identify the speech confidence of each tested person in a single picture, and outputting the probability f of each tested person in the single picture1(xi) And f2(xi) Wherein x isiRepresenting the ith subject, f1And f2Respectively representing the probability that the ith tested person is an informed person or not when the ith tested person is in a single picture;
step (C), extracting 110-dimensional eye movement characteristics of each tested person when watching the combined picture;
step (D), training a support vector machine model B based on 110-dimensional eye movement characteristics to identify the speech confidence of each tested person in picture combination, and outputting the probability g of each tested person in picture combination1(xi) And g2(xi) Wherein x isiRepresents the ith subject, g1And g2Respectively representing the probability that the ith tested person is an informed person or not when the ith tested person combines the pictures;
step (E), fusing the classifier probabilities of the support vector machine models A and B by applying a multiplication rule to obtain a joint probability f1(xi)g1(xi) And f2(xi)g2(xi) And taking the class corresponding to the maximum probability of each tested person as the final decision result.
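Written out, the step (E) decision is the standard product rule for combining classifiers; in the notation of the steps above (the joint probability is left unnormalized, which does not affect the arg max):

```latex
% Product-rule (multiplication-rule) fusion of the two SVM outputs;
% k = 1: case-informed, k = 2: not case-informed.
P_k(x_i) \propto f_k(x_i)\, g_k(x_i), \qquad
\hat{k}(x_i) = \arg\max_{k \in \{1,2\}} f_k(x_i)\, g_k(x_i)
```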
In step (A), the 32-dimensional eye-movement features comprise 6 blink statistics: blink count, blink frequency, total blink duration, average blink duration, maximum blink duration, and minimum blink duration; 11 fixation statistics: fixation count, fixation frequency, total fixation time, average fixation time, maximum fixation time, minimum fixation time, total fixation deviation, average fixation deviation, maximum fixation deviation, minimum fixation deviation, and scan-path length; and 15 saccade statistics: saccade count, saccade frequency, total saccade length, average saccade length, maximum saccade length, minimum saccade length, total saccade amplitude, average saccade amplitude, maximum saccade amplitude, minimum saccade amplitude, total saccade velocity, average saccade velocity, maximum saccade velocity, minimum saccade velocity, and average saccade latency.
In step (C), the 110-dimensional eye-movement features are obtained by dividing the combined picture into 10 areas of interest (AOIs), with 11 features per AOI: (1) the net dwell time (Net Dwell Time), i.e. the total fixation time within the AOI; (2) the dwell time (Dwell Time), i.e. the sum of fixation and saccade time within the AOI; (3) the glance duration (Glance Duration), i.e. the dwell time plus the time of the saccade entering the AOI; (4) the sum of the glance duration and the time of the saccade leaving the AOI; (5) the first fixation duration; (6) the number of saccades entering the AOI from other areas; (7) the fixation count; (8) the ratio of net dwell time to total time; (9) the ratio of dwell time to total time; (10) the total fixation duration; and (11) the ratio of total fixation duration to total time.
The invention has the following beneficial effects: the model-fusion-based case-informed person identification method solves the technical problems of prior-art identification methods, which are constrained by the subject's degree of cooperation, lack covert testing, and are inefficient. It identifies case-informed persons from eye-movement data, which effectively suppresses anti-lie-detection countermeasures, and by fusing models built on the 32-dimensional and 110-dimensional eye-movement features it exploits the subject's psychological responses in two different viewing modes and improves algorithm efficiency. The method is ingenious and novel, achieves high recognition accuracy, and has good application prospects.
Drawings
Fig. 1 is a flowchart of the model-fusion-based case-informed person identification method of the present invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings.
As shown in FIG. 1, the case-informed person identification method based on model fusion of the present invention comprises the following steps:
step (A), extracting the 32-dimensional eye-movement features of each subject when viewing a single picture, wherein the 32-dimensional features comprise 6 blink statistics: blink count, blink frequency, total blink duration, average blink duration, maximum blink duration, and minimum blink duration; 11 fixation statistics: fixation count, fixation frequency, total fixation time, average fixation time, maximum fixation time, minimum fixation time, total fixation deviation, average fixation deviation, maximum fixation deviation, minimum fixation deviation, and scan-path length; and 15 saccade statistics: saccade count, saccade frequency, total saccade length, average saccade length, maximum saccade length, minimum saccade length, total saccade amplitude, average saccade amplitude, maximum saccade amplitude, minimum saccade amplitude, total saccade velocity, average saccade velocity, maximum saccade velocity, minimum saccade velocity, and average saccade latency;
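To make step (A) concrete, the sketch below assembles the 32-dimensional vector from per-trial event lists. The event schema (plain tuples of durations, deviations, lengths, amplitudes, velocities, and latencies) is an assumption for illustration only; the patent does not specify the eye tracker's export format.

```python
import numpy as np

def stats(durations, total_time):
    """[count, frequency (1/s), total, mean, max, min] of a list of durations."""
    d = np.asarray(durations, dtype=float)
    if d.size == 0:
        return [0.0] * 6
    return [float(d.size), d.size / total_time, d.sum(), d.mean(), d.max(), d.min()]

def single_picture_features(blinks, fixations, saccades, total_time):
    """blinks: blink durations; fixations: (duration, deviation) pairs;
    saccades: (duration, length, amplitude, velocity, latency) tuples."""
    feats = stats(blinks, total_time)                        # 6 blink statistics
    fix_dev = np.asarray([f[1] for f in fixations] or [0.0])
    feats += stats([f[0] for f in fixations], total_time)    # fixation count .. min time
    feats += [fix_dev.sum(), fix_dev.mean(), fix_dev.max(), fix_dev.min()]
    sac = np.asarray(saccades or [(0.0,) * 5], dtype=float)
    feats += [sac[:, 1].sum()]                               # scan-path length
    feats += [float(len(saccades)), len(saccades) / total_time]  # saccade count, frequency
    for col in (1, 2, 3):                                    # length, amplitude, velocity
        feats += [sac[:, col].sum(), sac[:, col].mean(), sac[:, col].max(), sac[:, col].min()]
    feats += [sac[:, 4].mean()]                              # average saccade latency
    return np.asarray(feats)                                 # 6 + 11 + 15 = 32 dimensions
```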
step (B), training a support vector machine (SVM) model A on the 32-dimensional eye-movement features to assess each subject's statement credibility for the single picture, and outputting the probabilities f₁(xᵢ) and f₂(xᵢ), where xᵢ denotes the ith subject and f₁ and f₂ denote the probabilities that, for the single picture, the ith subject is or is not a case-informed person;
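A minimal sketch of step (B), assuming scikit-learn: the patent names neither a library, a kernel, nor hyperparameters, so the RBF kernel and Platt scaling (probability=True) here are illustrative choices, and the data are random placeholders.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_single = rng.normal(size=(40, 32))  # placeholder: 32-dim features of 40 subjects
y = rng.integers(0, 2, size=40)       # placeholder labels: 1 = case-informed

# Model A: standardize the features, then an SVM with probability outputs.
model_a = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model_a.fit(X_single, y)

# Columns follow model_a.classes_ = [0, 1], i.e. [f2 (not informed), f1 (informed)].
probs_a = model_a.predict_proba(X_single)
```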
step (C), extracting the 110-dimensional eye-movement features of each subject when viewing the combined picture, wherein the combined picture is divided into 10 areas of interest (AOIs) with 11 features per AOI: the net dwell time (Net Dwell Time), i.e. the total fixation time within the AOI; the dwell time (Dwell Time), i.e. the sum of fixation and saccade time within the AOI; the glance duration (Glance Duration), i.e. the dwell time plus the time of the saccade entering the AOI; the sum of the glance duration and the time of the saccade leaving the AOI; the first fixation duration; the number of saccades entering the AOI from other areas; the fixation count; the ratio of net dwell time to total time; the ratio of dwell time to total time; the total fixation duration; and the ratio of total fixation duration to total time;
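A sketch of the per-AOI features of step (C). The input schema (fixation and saccade durations already assigned to an AOI, plus the saccades entering and leaving it) is an assumption; eye-tracking analysis software that uses the Net Dwell Time / Dwell Time / Glance Duration terminology the patent borrows (e.g. SMI BeGaze) exports such quantities directly. Under this reading, feature (10) coincides with feature (1), which the patent lists separately.

```python
import numpy as np

def aoi_features(fix, sac_in, sac_enter, sac_leave, total_time):
    """fix: fixation durations inside the AOI, in viewing order;
    sac_in: saccade durations inside the AOI;
    sac_enter / sac_leave: durations of saccades entering / leaving the AOI."""
    net_dwell = float(np.sum(fix))                   # (1) Net Dwell Time
    dwell = net_dwell + float(np.sum(sac_in))        # (2) Dwell Time
    glance = dwell + float(np.sum(sac_enter))        # (3) Glance Duration
    glance_exit = glance + float(np.sum(sac_leave))  # (4) plus leaving-saccade time
    first_fix = fix[0] if len(fix) else 0.0          # (5) first fixation duration
    return [net_dwell, dwell, glance, glance_exit, first_fix,
            len(sac_enter),                          # (6) saccades into the AOI
            len(fix),                                # (7) fixation count
            net_dwell / total_time,                  # (8) net dwell time / total time
            dwell / total_time,                      # (9) dwell time / total time
            net_dwell,                               # (10) total fixation duration
            net_dwell / total_time]                  # (11) total fixation / total time

# Concatenating the 11 features over the 10 AOIs yields the 110-dimensional vector.
```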
step (D), training support vector machine model B on the 110-dimensional eye-movement features to assess each subject's statement credibility for the combined picture, and outputting the probabilities g₁(xᵢ) and g₂(xᵢ), where xᵢ denotes the ith subject and g₁ and g₂ denote the probabilities that, for the combined picture, the ith subject is or is not a case-informed person;
step (E), fusing the classifier probabilities of support vector machine models A and B by the multiplication rule to obtain the joint probabilities f₁(xᵢ)g₁(xᵢ) and f₂(xᵢ)g₂(xᵢ), and taking the class with the maximum joint probability for each subject as the final decision result, which is the case-informed person identification result.
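A sketch of the product-rule fusion of step (E), operating on the predict_proba outputs of models A and B (the column order [not informed, informed] matches the step (B) sketch and is an assumption):

```python
import numpy as np

# Example outputs for three subjects; in practice these come from
# model_a.predict_proba(...) and model_b.predict_proba(...).
probs_a = np.array([[0.3, 0.7], [0.6, 0.4], [0.5, 0.5]])
probs_b = np.array([[0.2, 0.8], [0.7, 0.3], [0.4, 0.6]])

joint = probs_a * probs_b        # f_k(x_i) * g_k(x_i) for each class k
decision = joint.argmax(axis=1)  # 1 = identified as a case-informed person
print(decision)                  # -> [1 0 1]
```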
For the case-informed person identification method based on model fusion, the recognition results are shown in Table 1. The comparison algorithms are a support vector machine (SVM), an artificial neural network (ANN), a decision tree (DT), and a random forest (RF). As Table 1 shows, for the single picture the RF algorithm scores highest and the ANN lowest; for the combined picture the ANN scores highest and the RF lowest. The recognition rates of the SVM and DT algorithms are moderate in both modes, so both viewing modes suit these two models. After the model fusion strategy is applied, the recognition rate of the support vector machine reaches 86.1%, improving on the best single-mode rates of the two models by 9.2% and 17.6% respectively. Fusing the classifier probabilities of support vector machine models A and B by the multiplication rule to obtain the joint probabilities f₁(xᵢ)g₁(xᵢ) and f₂(xᵢ)g₂(xᵢ), and taking the class with the maximum joint probability for each subject as the final decision, therefore greatly improves the accuracy of the decision; moreover, the choice of the 32-dimensional and 110-dimensional eye-movement features is well founded, as they accurately reflect the relevant indices of the eye-movement process.
TABLE 1 comparison of algorithm recognition rates in two modes
(Table 1 appears as an image in the original publication; its figures are not reproduced in the text.)
In conclusion, the model-fusion-based case-informed person identification method of the invention solves the technical problems of prior-art methods, which are constrained by the subject's degree of cooperation, lack covert testing, and are inefficient. It identifies case-informed persons from eye-movement data, effectively suppressing anti-lie-detection countermeasures, and by fusing models built on the 32-dimensional and 110-dimensional eye-movement features it exploits the subject's psychological responses in two different viewing modes and improves algorithm efficiency. The method is ingenious and novel, achieves high recognition accuracy, and has good application prospects.
The foregoing illustrates and describes the principles, general features, and advantages of the present invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above, which merely illustrate its principles; various changes and modifications may be made without departing from the spirit and scope of the invention, and such changes and modifications fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.

Claims (3)

1. A case-informed person identification method based on model fusion, characterized by comprising the following steps:
step A, extracting the 32-dimensional eye-movement features of each subject when viewing a single picture;
step B, training support vector machine model A on the 32-dimensional eye-movement features to assess each subject's statement credibility for the single picture, and outputting the probabilities f₁(xᵢ) and f₂(xᵢ), where xᵢ denotes the ith subject and f₁ and f₂ denote the probabilities that, for the single picture, the ith subject is or is not a case-informed person;
step C, extracting the 110-dimensional eye-movement features of each subject when viewing the combined picture;
step D, training support vector machine model B on the 110-dimensional eye-movement features to assess each subject's statement credibility for the combined picture, and outputting the probabilities g₁(xᵢ) and g₂(xᵢ), where xᵢ denotes the ith subject and g₁ and g₂ denote the probabilities that, for the combined picture, the ith subject is or is not a case-informed person;
step E, fusing the classifier probabilities of support vector machine models A and B by the multiplication rule to obtain the joint probabilities f₁(xᵢ)g₁(xᵢ) and f₂(xᵢ)g₂(xᵢ), and taking the class with the maximum joint probability for each subject as the final decision result.
2. The case-informed person identification method based on model fusion according to claim 1, characterized in that in step A the 32-dimensional eye-movement features comprise 6 blink statistics: blink count, blink frequency, total blink duration, average blink duration, maximum blink duration, and minimum blink duration; 11 fixation statistics: fixation count, fixation frequency, total fixation time, average fixation time, maximum fixation time, minimum fixation time, total fixation deviation, average fixation deviation, maximum fixation deviation, minimum fixation deviation, and scan-path length; and 15 saccade statistics: saccade count, saccade frequency, total saccade length, average saccade length, maximum saccade length, minimum saccade length, total saccade amplitude, average saccade amplitude, maximum saccade amplitude, minimum saccade amplitude, total saccade velocity, average saccade velocity, maximum saccade velocity, minimum saccade velocity, and average saccade latency.
3. The case-informed person identification method based on model fusion according to claim 1, characterized in that in step C the 110-dimensional eye-movement features are obtained by dividing the combined picture into 10 areas of interest (AOIs), with 11 features per AOI: the net dwell time (Net Dwell Time), i.e. the total fixation time within the AOI; the dwell time (Dwell Time), i.e. the sum of fixation and saccade time within the AOI; the glance duration (Glance Duration), i.e. the dwell time plus the time of the saccade entering the AOI; the sum of the glance duration and the time of the saccade leaving the AOI; the first fixation duration; the number of saccades entering the AOI from other areas; the fixation count; the ratio of net dwell time to total time; the ratio of dwell time to total time; the total fixation duration; and the ratio of total fixation duration to total time.
CN201811135018.7A (priority date: 2018-09-28; filing date: 2018-09-28) Case-conscious person identification method based on model fusion. Active. Granted as CN109199411B (en).

Priority Applications (1)

CN201811135018.7A (granted as CN109199411B (en)); priority date: 2018-09-28; filing date: 2018-09-28; title: Case-conscious person identification method based on model fusion

Applications Claiming Priority (1)

CN201811135018.7A (granted as CN109199411B (en)); priority date: 2018-09-28; filing date: 2018-09-28; title: Case-conscious person identification method based on model fusion

Publications (2)

CN109199411A (en): published 2019-01-15
CN109199411B: granted 2021-04-09

Family

ID=64981889

Family Applications (1)

CN201811135018.7A (Active; granted as CN109199411B (en)); priority date: 2018-09-28; filing date: 2018-09-28; title: Case-conscious person identification method based on model fusion

Country Status (1)

CN: CN109199411B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110327061B (en) * 2019-08-12 2022-03-08 北京七鑫易维信息技术有限公司 Character determining device, method and equipment based on eye movement tracking technology
CN110693509B (en) * 2019-10-17 2022-04-05 中国人民公安大学 Case correlation determination method and device, computer equipment and storage medium
CN110956143A (en) * 2019-12-03 2020-04-03 交控科技股份有限公司 Abnormal behavior detection method and device, electronic equipment and storage medium
CN111568367B (en) * 2020-05-14 2023-07-21 中国民航大学 Method for identifying and quantifying eye jump invasion

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102005047211A1 (en) * 2005-10-01 2007-04-05 Carl Zeiss Meditec Ag Mammal or human eye movement detecting system, has detection device generating independent detection signal using radiation from spot pattern, and control device evaluating signal for determining data about movement of eyes
WO2009116043A1 (en) * 2008-03-18 2009-09-24 Atlas Invest Holdings Ltd. Method and system for determining familiarity with stimuli
CN202060785U (en) * 2011-03-31 2011-12-07 上海天岸电子科技有限公司 Human eye pupil lie detector
CN202472688U (en) * 2011-12-03 2012-10-03 辽宁科锐科技有限公司 Inquest-assisting judgment and analysis meter based on eyeball characteristic
CN103211605B (en) * 2013-05-14 2015-02-18 重庆大学 Psychological testing system and method
CN203379122U (en) * 2013-07-26 2014-01-08 蔺彬涛 Wireless electroencephalogram and eye movement polygraph
CN109063551A (en) * 2018-06-20 2018-12-21 新华网股份有限公司 Validity test method of talking and system

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7792335B2 (en) * 2006-02-24 2010-09-07 Fotonation Vision Limited Method and apparatus for selective disqualification of digital images
CN101098241A (en) * 2006-06-26 2008-01-02 腾讯科技(深圳)有限公司 Method and system for implementing virtual image
CN103116763A (en) * 2013-01-30 2013-05-22 宁波大学 Vivo-face detection method based on HSV (hue, saturation, value) color space statistical characteristics
US20160132726A1 (en) * 2014-05-27 2016-05-12 Umoove Services Ltd. System and method for analysis of eye movements using two dimensional images
CN106999111A (en) * 2014-10-01 2017-08-01 纽洛斯公司 System and method for detecting invisible human emotion
CN105147248A (en) * 2015-07-30 2015-12-16 华南理工大学 Physiological information-based depressive disorder evaluation system and evaluation method thereof
WO2018005594A1 (en) * 2016-06-28 2018-01-04 Google Llc Eye gaze tracking using neural networks
CN206285117U (en) * 2016-08-31 2017-06-30 北京新科永创科技有限公司 Intelligence hearing terminal
US20180268733A1 (en) * 2017-03-15 2018-09-20 International Business Machines Corporation System and method to teach and evaluate image grading performance using prior learned expert knowledge base
CN107480716A (en) * 2017-08-15 2017-12-15 安徽大学 A kind of combination EOG and video pan signal recognition method and system
CN108108715A (en) * 2017-12-31 2018-06-01 厦门大学 It is inspired based on biology and depth attribute learns the face aesthetic feeling Forecasting Methodology being combined

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
David A. Leopold et al., "Multistable phenomena: changing views in perception", Trends in Cognitive Sciences, vol. 3, no. 7, 1998-07-31, pp. 254-264 *
Ren Yantao et al., "Construction of a lie-detection model based on eye-movement tracking technology" (基于眼运动追踪技术的测谎模式构建), Journal of China Criminal Police University (中国刑警学院学报), no. 1, 2011-03-31, pp. 26-28 *

Also Published As

CN109199411A (en), published 2019-01-15

Similar Documents

Publication Publication Date Title
CN109199411B (en) Case-conscious person identification method based on model fusion
Wang et al. Channel selection method for EEG emotion recognition using normalized mutual information
Abo-Zahhad et al. A new EEG acquisition protocol for biometric identification using eye blinking signals
Zhao et al. EmotionSense: Emotion recognition based on wearable wristband
de Santos Sierra et al. Stress detection by means of stress physiological template
CN109199412B (en) Abnormal emotion recognition method based on eye movement data analysis
Khan et al. Biometric systems utilising health data from wearable devices: applications and future challenges in computer security
Agrafioti et al. Heart biometrics: Theory, methods and applications
de Santos Sierra et al. A stress-detection system based on physiological signals and fuzzy logic
CN113729707A (en) FECNN-LSTM-based emotion recognition method based on multi-mode fusion of eye movement and PPG
Higashi et al. EEG auditory steady state responses classification for the novel BCI
EP3449409B1 (en) Biometric method and device for identifying a person through an electrocardiogram (ecg) waveform
Belgacem et al. Person identification system based on electrocardiogram signal using LabVIEW
Zhang et al. Biometric verification of subjects using saccade eye movements
Jaafar et al. Acceleration plethysmogram based biometric identification
Gui et al. Multichannel EEG-based biometric using improved RBF neural networks
Hu et al. A real-time electroencephalogram (EEG) based individual identification interface for mobile security in ubiquitous environment
Alshamrani An advanced stress detection approach based on processing data from wearable wrist devices
CN106344008B (en) Waking state detection method and system in sleep state analysis
CN114983434A (en) System and method based on multi-mode brain function signal recognition
Neubig et al. Recognition of imagined speech using electroencephalogram signals
Ma et al. Dynamic threshold distribution domain adaptation network: A cross-subject fatigue recognition method based on EEG signals
CN113951886A (en) Brain magnetic pattern generation system and lie detection decision system
Mantri et al. Real time multimodal depression analysis
CN106333675B (en) The mask method and system of EEG signals data type under waking state

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant