CN109199411A - Case insider identification method based on model fusion - Google Patents

Case insider identification method based on model fusion

Info

Publication number
CN109199411A
CN109199411A
Authority
CN
China
Prior art keywords
saccade
subject
duration
fixation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811135018.7A
Other languages
Chinese (zh)
Other versions
CN109199411B (en
Inventor
唐闺臣
梁瑞宇
谢跃
徐梦圆
叶超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Institute of Technology
Original Assignee
Nanjing Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Institute of Technology filed Critical Nanjing Institute of Technology
Priority to CN201811135018.7A priority Critical patent/CN109199411B/en
Publication of CN109199411A publication Critical patent/CN109199411A/en
Application granted granted Critical
Publication of CN109199411B publication Critical patent/CN109199411B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/16Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
    • A61B5/165Evaluating the state of mind, e.g. depression, anxiety
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1103Detecting eye twinkling
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7264Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/193Preprocessing; Feature extraction

Abstract

The invention discloses a case insider identification method based on model fusion, comprising the following steps: extract the 32-dimensional eye movement features of each subject while viewing single pictures; train SVM model A on the 32-dimensional eye movement features to identify each subject's statement credibility on single pictures, and output each subject's probabilities f1(xi) and f2(xi) on single pictures; extract the 110-dimensional eye movement features of each subject while viewing composite pictures; train SVM model B on the 110-dimensional eye movement features to identify each subject's statement credibility on composite pictures, and output each subject's probabilities g1(xi) and g2(xi) on composite pictures; fuse the classifier probabilities of SVM models A and B with the product rule to obtain joint probabilities, and take the class with the maximum probability for each subject as the final decision. The invention can effectively resist lie detection countermeasures and improves algorithm efficiency.

Description

Case insider identification method based on model fusion
Technical field
The present invention relates to the technical field of criminal investigation and interrogation analysis, and in particular to a case insider identification method based on model fusion.
Background technique
In a criminal investigation context, the key to interrogating a suspect is to evaluate the suspect's abnormal emotions, i.e. so-called "lie detection". The interrogator judges the suspect's psychological state by observing their behavior, seizes on loopholes in their statements, and uses interrogation techniques to break through the suspect's psychological defenses and compel them to tell the truth. However, an ordinary person's lie detection ability is close to guessing and usually relies on intuition, so its accuracy is only slightly above chance; moreover, interrogation usually also depends on a small number of highly experienced interrogation experts, which is clearly time-consuming and inefficient.
Since the psychological changes that occur when a person lies cause changes in certain physiological parameters (such as skin conductance, heart rate, blood pressure, respiration, brain waves, voice, etc.), assessing whether a subject has knowledge of a case by detecting these changes is an effective auxiliary means. In early research, using a polygraph on a suspect was one of the most common methods of case knowledge identification. However, the physiological indicators used by polygraphs are easily affected by various factors, including the subject's bodily functions and psychological state, the stimulus intensity of the task, and the examiner's skill.
In recent years, with the development of cognitive neuroscience technology, researchers can directly observe the neural activity of the relevant internal brain regions when mental activity occurs. Compared with traditional lie detection based on external physiological changes, this is more objective and better reveals the inherent laws of lying, and it has become one development direction of lie detection technology. However, the professional equipment required by such techniques is bulky and expensive, which limits their practicality, and corresponding countermeasures can still influence test results.
It follows that lie detection techniques based on the above physiological signals still have aspects that urgently need improvement in practice, mainly because: 1) the subject's degree of cooperation: most physiological lie detection methods, when acquiring physiological parameters such as ECG, electrodermal activity, blood pressure, and brain waves, require the electrodes or sensor patches of a contact instrument to be attached somewhere on the subject's body, so the subject must cooperate subjectively; otherwise, the subject can use covert countermeasures (such as wiggling a toe or letting the mind wander) to disturb the test results; 2) the concealment of the measurement means: emotional stress has important research significance in lie detection, yet visible test equipment itself inevitably imposes extra pressure on the subject, and the measurement impact of the resulting emotional fluctuations is difficult to estimate. Although voice-based lie detection has a degree of concealment, speech is easily affected by the external environment, such as dialect, accent, and interruptions; its technical difficulty is high and research is still at an early stage. Therefore, effective lie detection should be contactless and highly concealed, and the analyzed signal should be easy to acquire and process.
It can be seen that the existing case insider identification methods described above still have inconveniences and defects and urgently need further improvement. To solve the problems of case insider identification methods, those skilled in the relevant arts have painstakingly sought solutions, but no applicable method has been developed for a long time, and general case insider identification algorithms cannot properly solve the above problems; this is clearly a problem that practitioners in the field urgently need to solve. Compared with other lie detection countermeasures, some eye movement indicators cannot be consciously controlled, and deliberately controlling certain eye movement indicators instead produces abnormal values. Therefore, case insider identification using eye movement indicators is feasible; how to realize it is the problem to be solved.
Summary of the invention
To overcome the inconveniences and defects of prior-art case insider identification methods, the case insider identification method based on model fusion of the invention solves the technical problems that prior-art case insider identification methods are constrained by the subject's degree of cooperation, lack concealment in their test methods, and have low testing efficiency. Performing case insider identification with eye movement data can effectively resist lie detection countermeasures, and the model fusion algorithm over the 32-dimensional and 110-dimensional eye movement features effectively exploits the subject's psychological manifestations under different modes and improves algorithm efficiency. The method is ingenious and novel, has high identification accuracy, and has good application prospects.
To achieve the above object, the technical solution adopted by the invention is as follows:
A case insider identification method based on model fusion, comprising the following steps:
Step (A): extract the 32-dimensional eye movement features of each subject while viewing single pictures;
Step (B): train SVM model A on the 32-dimensional eye movement features to identify each subject's statement credibility on single pictures, and output each subject's probabilities f1(xi) and f2(xi) on single pictures, where xi denotes the i-th subject, and f1 and f2 respectively denote the probabilities that the i-th subject is or is not an insider on single pictures;
Step (C): extract the 110-dimensional eye movement features of each subject while viewing composite pictures;
Step (D): train SVM model B on the 110-dimensional eye movement features to identify each subject's statement credibility on composite pictures, and output each subject's probabilities g1(xi) and g2(xi) on composite pictures, where xi denotes the i-th subject, and g1 and g2 respectively denote the probabilities that the i-th subject is or is not an insider on composite pictures;
Step (E): fuse the classifier probabilities of SVM models A and B with the product rule to obtain the joint probabilities f1(xi)g1(xi) and f2(xi)g2(xi), and take the class with the maximum joint probability for each subject as the final decision.
In the aforementioned case insider identification method based on model fusion, in step (A) the 32-dimensional eye movement features comprise 6 blink statistics: blink count, blink frequency, total blink duration, mean blink duration, maximum blink duration, minimum blink duration; 11 fixation statistics: fixation count, fixation frequency, total fixation duration, mean fixation duration, maximum fixation duration, minimum fixation duration, total fixation deviation, mean fixation deviation, maximum fixation deviation, minimum fixation deviation, scan path length; and 15 saccade statistics: saccade count, saccade frequency, total saccade duration, mean saccade duration, maximum saccade duration, minimum saccade duration, total saccade amplitude, mean saccade amplitude, maximum saccade amplitude, minimum saccade amplitude, total saccade velocity, mean saccade velocity, maximum saccade velocity, minimum saccade velocity, mean saccade latency.
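As an illustration only (not part of the patent text), the 32-dimensional single-picture feature vector enumerated above could be assembled from per-event eye-tracker output roughly as follows; the event dictionaries, field names, and units are assumptions made for the sketch, and each event list is assumed non-empty:

```python
import numpy as np

def stats6(durations, total_time):
    """Count, frequency, total, mean, max, min of a list of event durations."""
    d = np.asarray(durations, dtype=float)
    return [d.size, d.size / total_time, d.sum(), d.mean(), d.max(), d.min()]

def single_picture_features(blinks, fixations, saccades, total_time):
    """Assemble the 32-dim vector: 6 blink + 11 fixation + 15 saccade statistics."""
    feats = []
    feats += stats6([b["dur"] for b in blinks], total_time)     # 6 blink statistics
    feats += stats6([f["dur"] for f in fixations], total_time)  # fixation count/freq/duration stats
    disp = np.array([f["dispersion"] for f in fixations])
    feats += [disp.sum(), disp.mean(), disp.max(), disp.min()]  # fixation deviation statistics
    # Scan path length: summed distance between consecutive fixation centres.
    xy = np.array([f["pos"] for f in fixations], dtype=float)
    feats.append(np.linalg.norm(np.diff(xy, axis=0), axis=1).sum())
    feats += stats6([s["dur"] for s in saccades], total_time)   # saccade count/freq/duration stats
    amp = np.array([s["amp"] for s in saccades])
    vel = np.array([s["vel"] for s in saccades])
    lat = np.array([s["latency"] for s in saccades])
    feats += [amp.sum(), amp.mean(), amp.max(), amp.min()]      # saccade amplitude statistics
    feats += [vel.sum(), vel.mean(), vel.max(), vel.min(), lat.mean()]  # velocity + latency
    return np.array(feats)                                      # shape (32,)
```

A call with one subject's blink, fixation, and saccade event lists and the trial's total viewing time returns the 32-dimensional vector used to train model A.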
In the aforementioned case insider identification method based on model fusion, in step (C) the 110-dimensional eye movement features refer to dividing the composite picture into 10 areas of interest (AOIs), each with 11 features: the total fixation time within the AOI (Net Dwell Time); the sum of fixation and saccade time within the AOI (Dwell Time); the sum of the saccade time entering the AOI and the dwell time (entry Glance Duration); the sum of the saccade time leaving the AOI and the dwell time (exit Glance Duration); first fixation duration; the number of saccades jumping into the AOI from other regions; fixation count; the ratio of Net Dwell Time to total time; the ratio of Dwell Time to total time; total fixation duration; and the ratio of total fixation time to total time.
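Again as an illustration only, the 110-dimensional composite-picture vector could be built by computing the 11 statistics above for each of the 10 AOIs and concatenating them; the per-AOI bookkeeping (field names, units) is an assumption of the sketch:

```python
import numpy as np

def aoi_features(aoi, total_time):
    """The 11 dwell/glance statistics for one AOI (field names are illustrative)."""
    net_dwell = sum(aoi["fix_durs"])                  # Net Dwell Time: fixation time in AOI
    dwell = net_dwell + sum(aoi["sac_durs_inside"])   # Dwell Time: fixation + saccade time in AOI
    glance_in = sum(aoi["sac_durs_entering"]) + dwell   # entry Glance Duration
    glance_out = sum(aoi["sac_durs_leaving"]) + dwell   # exit Glance Duration
    return [
        net_dwell, dwell, glance_in, glance_out,
        aoi["first_fix_dur"],              # first fixation duration
        aoi["entries"],                    # saccades entering the AOI from other regions
        len(aoi["fix_durs"]),              # fixation count
        net_dwell / total_time,            # Net Dwell Time / total time
        dwell / total_time,                # Dwell Time / total time
        aoi["total_fix_dur"],              # total fixation duration
        aoi["total_fix_dur"] / total_time, # total fixation time / total time
    ]

def composite_picture_features(aois, total_time):
    """Concatenate the 11 features over 10 AOIs into the 110-dim vector."""
    assert len(aois) == 10
    return np.concatenate([aoi_features(a, total_time) for a in aois])  # shape (110,)
```

The resulting 110-dimensional vectors are the inputs on which model B is trained.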
The beneficial effects of the invention are: the case insider identification method based on model fusion solves the technical problems that prior-art case insider identification methods are constrained by the subject's degree of cooperation, lack concealment in their test methods, and have low testing efficiency. It performs case insider identification using eye movement data, which can effectively resist lie detection countermeasures, and employs a model fusion algorithm over the 32-dimensional and 110-dimensional eye movement features, effectively exploiting the subject's psychological manifestations under different modes to improve algorithm efficiency. The method is ingenious and novel, has high identification accuracy, and has good application prospects.
Brief description of the drawings
Fig. 1 is a flow chart of the abnormal emotion recognition method based on eye movement data analysis of the present invention.
Specific embodiment
The present invention is further described below in conjunction with the accompanying drawings.
As shown in Fig. 1, the case insider identification method based on model fusion of the invention comprises the following steps:
Step (A): extract the 32-dimensional eye movement features of each subject while viewing single pictures. The 32-dimensional eye movement features comprise 6 blink statistics: blink count, blink frequency, total blink duration, mean blink duration, maximum blink duration, minimum blink duration; 11 fixation statistics: fixation count, fixation frequency, total fixation duration, mean fixation duration, maximum fixation duration, minimum fixation duration, total fixation deviation, mean fixation deviation, maximum fixation deviation, minimum fixation deviation, scan path length; and 15 saccade statistics: saccade count, saccade frequency, total saccade duration, mean saccade duration, maximum saccade duration, minimum saccade duration, total saccade amplitude, mean saccade amplitude, maximum saccade amplitude, minimum saccade amplitude, total saccade velocity, mean saccade velocity, maximum saccade velocity, minimum saccade velocity, mean saccade latency;
Step (B): train support vector machine (SVM) model A on the 32-dimensional eye movement features to identify each subject's statement credibility on single pictures, and output each subject's probabilities f1(xi) and f2(xi) on single pictures, where xi denotes the i-th subject, and f1 and f2 respectively denote the probabilities that the i-th subject is or is not an insider on single pictures;
Step (C): extract the 110-dimensional eye movement features of each subject while viewing composite pictures. The 110-dimensional eye movement features refer to dividing the composite picture into 10 areas of interest (AOIs), each with 11 features: the total fixation time within the AOI (Net Dwell Time); the sum of fixation and saccade time within the AOI (Dwell Time); the sum of the saccade time entering the AOI and the dwell time (entry Glance Duration); the sum of the saccade time leaving the AOI and the dwell time (exit Glance Duration); first fixation duration; the number of saccades jumping into the AOI from other regions; fixation count; the ratio of Net Dwell Time to total time; the ratio of Dwell Time to total time; total fixation duration; and the ratio of total fixation time to total time;
Step (D): train SVM model B on the 110-dimensional eye movement features to identify each subject's statement credibility on composite pictures, and output each subject's probabilities g1(xi) and g2(xi) on composite pictures, where xi denotes the i-th subject, and g1 and g2 respectively denote the probabilities that the i-th subject is or is not an insider on composite pictures;
Step (E): fuse the classifier probabilities of SVM models A and B with the product rule to obtain the joint probabilities f1(xi)g1(xi) and f2(xi)g2(xi), and take the class with the maximum joint probability for each subject as the final decision, which is the case insider identification result.
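Steps (B), (D), and (E) above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes scikit-learn is available, uses its `SVC` with `probability=True` to obtain the per-class probabilities, and substitutes synthetic data for the real 32- and 110-dimensional eye movement features:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 60
y = rng.integers(0, 2, n)                        # 1 = insider, 0 = not an insider
# Synthetic stand-ins for the real feature vectors (class means differ by 1 per dim).
X_single = rng.normal(y[:, None], 1.0, (n, 32))  # 32-dim single-picture features
X_comb = rng.normal(y[:, None], 1.0, (n, 110))   # 110-dim composite-picture features

model_a = SVC(probability=True).fit(X_single, y)  # SVM model A (single pictures)
model_b = SVC(probability=True).fit(X_comb, y)    # SVM model B (composite pictures)

# Product-rule fusion: joint probability f_k(x_i) * g_k(x_i) for each class k.
f = model_a.predict_proba(X_single)               # columns: [not insider, insider]
g = model_b.predict_proba(X_comb)
joint = f * g
decision = joint.argmax(axis=1)                   # final decision per subject
```

Taking the argmax of the elementwise product reproduces the rule of step (E): the class whose joint probability f(xi)g(xi) is largest becomes the final decision for subject xi.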
The recognition performance of the case insider identification method based on model fusion of the invention is shown in Table 1. The compared algorithms include support vector machine (SVM), artificial neural network (ANN), decision tree (DT), and random forest (RF). As can be seen from Table 1, for single pictures the RF algorithm is highest and the ANN algorithm is lowest, while for composite pictures the ANN algorithm is highest and the RF algorithm is lowest; by comparison, the recognition rates of the SVM and DT algorithms are moderate and suitable for both modes. After applying the model fusion strategy, the recognition rate of the support vector machine (SVM) reaches 86.1%, an improvement of 9.2% and 17.6% respectively over the highest recognition rates of the two modes. Therefore, fusing the classifier probabilities of SVM models A and B with the product rule to obtain the joint probabilities f1(xi)g1(xi) and f2(xi)g2(xi), and taking the class with the maximum joint probability for each subject as the final decision, greatly improves decision accuracy. Moreover, the selection of the 32-dimensional and 110-dimensional eye movement features of the invention is highly reasonable and can accurately reflect all indicators of the eye movement process.
Table 1 Comparison of algorithm recognition rates under the two modes
In conclusion case insider's recognition methods of the invention based on Model Fusion, solves case in the prior art The technical problems such as the examined people's degree of cooperation of part insider's recognition methods restricts, test method is not concealed, and testing efficiency is low, use Eye movement data carries out case insider identification, can effectively inhibit anti-means of detecting a lie, and using 32 eye movement characteristics and 110 dimension eye movements Characteristic model blending algorithm, the subject Psychological Manifestations being effectively utilized under different mode improve efficiency of algorithm, and method is ingenious new Grain husk, identification accuracy is high, has a good application prospect.
The basic principles, main features, and advantages of the invention have been shown and described above. Those skilled in the art should understand that the invention is not limited to the above embodiments; the above embodiments and description only illustrate the principles of the invention. Various changes and improvements may be made to the invention without departing from its spirit and scope, and all such changes and improvements fall within the scope of the claimed invention. The claimed scope of the invention is defined by the appended claims and their equivalents.

Claims (3)

1. A case insider identification method based on model fusion, characterized by comprising the following steps:
Step (A): extract the 32-dimensional eye movement features of each subject while viewing single pictures;
Step (B): train SVM model A on the 32-dimensional eye movement features to identify each subject's statement credibility on single pictures, and output each subject's probabilities f1(xi) and f2(xi) on single pictures, where xi denotes the i-th subject, and f1 and f2 respectively denote the probabilities that the i-th subject is or is not an insider on single pictures;
Step (C): extract the 110-dimensional eye movement features of each subject while viewing composite pictures;
Step (D): train SVM model B on the 110-dimensional eye movement features to identify each subject's statement credibility on composite pictures, and output each subject's probabilities g1(xi) and g2(xi) on composite pictures, where xi denotes the i-th subject, and g1 and g2 respectively denote the probabilities that the i-th subject is or is not an insider on composite pictures;
Step (E): fuse the classifier probabilities of SVM models A and B with the product rule to obtain the joint probabilities f1(xi)g1(xi) and f2(xi)g2(xi), and take the class with the maximum joint probability for each subject as the final decision.
2. The case insider identification method based on model fusion according to claim 1, characterized in that in step (A) the 32-dimensional eye movement features comprise 6 blink statistics: blink count, blink frequency, total blink duration, mean blink duration, maximum blink duration, minimum blink duration; 11 fixation statistics: fixation count, fixation frequency, total fixation duration, mean fixation duration, maximum fixation duration, minimum fixation duration, total fixation deviation, mean fixation deviation, maximum fixation deviation, minimum fixation deviation, scan path length; and 15 saccade statistics: saccade count, saccade frequency, total saccade duration, mean saccade duration, maximum saccade duration, minimum saccade duration, total saccade amplitude, mean saccade amplitude, maximum saccade amplitude, minimum saccade amplitude, total saccade velocity, mean saccade velocity, maximum saccade velocity, minimum saccade velocity, mean saccade latency.
3. The case insider identification method based on model fusion according to claim 1, characterized in that in step (C) the 110-dimensional eye movement features refer to dividing the composite picture into 10 areas of interest (AOIs), each with 11 features: the total fixation time within the AOI (Net Dwell Time); the sum of fixation and saccade time within the AOI (Dwell Time); the sum of the saccade time entering the AOI and the dwell time (entry Glance Duration); the sum of the saccade time leaving the AOI and the dwell time (exit Glance Duration); first fixation duration; the number of saccades jumping into the AOI from other regions; fixation count; the ratio of Net Dwell Time to total time; the ratio of Dwell Time to total time; total fixation duration; and the ratio of total fixation time to total time.
CN201811135018.7A 2018-09-28 2018-09-28 Case-conscious person identification method based on model fusion Active CN109199411B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811135018.7A CN109199411B (en) 2018-09-28 2018-09-28 Case-conscious person identification method based on model fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811135018.7A CN109199411B (en) 2018-09-28 2018-09-28 Case-conscious person identification method based on model fusion

Publications (2)

Publication Number Publication Date
CN109199411A true CN109199411A (en) 2019-01-15
CN109199411B CN109199411B (en) 2021-04-09

Family

ID=64981889

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811135018.7A Active CN109199411B (en) 2018-09-28 2018-09-28 Case-conscious person identification method based on model fusion

Country Status (1)

Country Link
CN (1) CN109199411B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110327061A (en) * 2019-08-12 2019-10-15 北京七鑫易维信息技术有限公司 It is a kind of based on the personality determining device of eye movement tracer technique, method and apparatus
CN110693509A (en) * 2019-10-17 2020-01-17 中国人民公安大学 Case correlation determination method and device, computer equipment and storage medium
CN110956143A (en) * 2019-12-03 2020-04-03 交控科技股份有限公司 Abnormal behavior detection method and device, electronic equipment and storage medium
CN111568367A (en) * 2020-05-14 2020-08-25 中国民航大学 Method for identifying and quantifying eye jump invasion

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102005047211A1 (en) * 2005-10-01 2007-04-05 Carl Zeiss Meditec Ag Mammal or human eye movement detecting system, has detection device generating independent detection signal using radiation from spot pattern, and control device evaluating signal for determining data about movement of eyes
CN101098241A (en) * 2006-06-26 2008-01-02 腾讯科技(深圳)有限公司 Method and system for implementing virtual image
WO2009116043A1 (en) * 2008-03-18 2009-09-24 Atlas Invest Holdings Ltd. Method and system for determining familiarity with stimuli
US7792335B2 (en) * 2006-02-24 2010-09-07 Fotonation Vision Limited Method and apparatus for selective disqualification of digital images
CN202060785U (en) * 2011-03-31 2011-12-07 上海天岸电子科技有限公司 Human eye pupil lie detector
CN202472688U (en) * 2011-12-03 2012-10-03 辽宁科锐科技有限公司 Inquest-assisting judgment and analysis meter based on eyeball characteristic
CN103116763A (en) * 2013-01-30 2013-05-22 宁波大学 Vivo-face detection method based on HSV (hue, saturation, value) color space statistical characteristics
CN103211605A (en) * 2013-05-14 2013-07-24 重庆大学 Psychological testing system and method
CN203379122U (en) * 2013-07-26 2014-01-08 蔺彬涛 Wireless electroencephalogram and eye movement polygraph
CN105147248A (en) * 2015-07-30 2015-12-16 华南理工大学 Physiological information-based depressive disorder evaluation system and evaluation method thereof
US20160132726A1 (en) * 2014-05-27 2016-05-12 Umoove Services Ltd. System and method for analysis of eye movements using two dimensional images
CN206285117U (en) * 2016-08-31 2017-06-30 北京新科永创科技有限公司 Intelligence hearing terminal
CN106999111A (en) * 2014-10-01 2017-08-01 纽洛斯公司 System and method for detecting invisible human emotion
CN107480716A (en) * 2017-08-15 2017-12-15 安徽大学 A kind of combination EOG and video pan signal recognition method and system
WO2018005594A1 (en) * 2016-06-28 2018-01-04 Google Llc Eye gaze tracking using neural networks
CN108108715A (en) * 2017-12-31 2018-06-01 厦门大学 It is inspired based on biology and depth attribute learns the face aesthetic feeling Forecasting Methodology being combined
US20180268733A1 (en) * 2017-03-15 2018-09-20 International Business Machines Corporation System and method to teach and evaluate image grading performance using prior learned expert knowledge base
CN109063551A (en) * 2018-06-20 2018-12-21 新华网股份有限公司 Validity test method of talking and system

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102005047211A1 (en) * 2005-10-01 2007-04-05 Carl Zeiss Meditec Ag Mammal or human eye movement detecting system, has detection device generating independent detection signal using radiation from spot pattern, and control device evaluating signal for determining data about movement of eyes
US7792335B2 (en) * 2006-02-24 2010-09-07 Fotonation Vision Limited Method and apparatus for selective disqualification of digital images
CN101098241A (en) * 2006-06-26 2008-01-02 腾讯科技(深圳)有限公司 Method and system for implementing virtual image
WO2009116043A1 (en) * 2008-03-18 2009-09-24 Atlas Invest Holdings Ltd. Method and system for determining familiarity with stimuli
CN202060785U (en) * 2011-03-31 2011-12-07 上海天岸电子科技有限公司 Human eye pupil lie detector
CN202472688U (en) * 2011-12-03 2012-10-03 辽宁科锐科技有限公司 Inquest-assisting judgment and analysis meter based on eyeball characteristic
CN103116763A (en) * 2013-01-30 2013-05-22 宁波大学 Vivo-face detection method based on HSV (hue, saturation, value) color space statistical characteristics
CN103211605A (en) * 2013-05-14 2013-07-24 重庆大学 Psychological testing system and method
CN203379122U (en) * 2013-07-26 2014-01-08 蔺彬涛 Wireless electroencephalogram and eye movement polygraph
US20160132726A1 (en) * 2014-05-27 2016-05-12 Umoove Services Ltd. System and method for analysis of eye movements using two dimensional images
CN106999111A (en) * 2014-10-01 2017-08-01 纽洛斯公司 System and method for detecting invisible human emotion
CN105147248A (en) * 2015-07-30 2015-12-16 华南理工大学 Physiological information-based depressive disorder evaluation system and evaluation method thereof
WO2018005594A1 (en) * 2016-06-28 2018-01-04 Google Llc Eye gaze tracking using neural networks
CN206285117U (en) * 2016-08-31 2017-06-30 北京新科永创科技有限公司 Intelligent interrogation terminal
US20180268733A1 (en) * 2017-03-15 2018-09-20 International Business Machines Corporation System and method to teach and evaluate image grading performance using prior learned expert knowledge base
CN107480716A (en) * 2017-08-15 2017-12-15 安徽大学 Saccade signal recognition method and system combining EOG and video
CN108108715A (en) * 2017-12-31 2018-06-01 厦门大学 Facial aesthetics prediction method combining biological inspiration and deep attribute learning
CN109063551A (en) * 2018-06-20 2018-12-21 新华网股份有限公司 Method and system for testing conversation validity

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
C. Meng and X. Zhao: "Webcam-Based Eye Movement Analysis Using CNN", IEEE Access *
David A. Leopold et al.: "Multistable phenomena: changing views in perception", Trends in Cognitive Sciences *
Ren Yantao et al.: "Construction of a lie detection model based on eye movement tracking technology", Journal of Criminal Investigation Police University of China *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110327061A (en) * 2019-08-12 2019-10-15 北京七鑫易维信息技术有限公司 Personality determination apparatus, method and device based on eye movement tracking technology
CN110693509A (en) * 2019-10-17 2020-01-17 中国人民公安大学 Case correlation determination method and device, computer equipment and storage medium
CN110693509B (en) * 2019-10-17 2022-04-05 中国人民公安大学 Case correlation determination method and device, computer equipment and storage medium
CN110956143A (en) * 2019-12-03 2020-04-03 交控科技股份有限公司 Abnormal behavior detection method and device, electronic equipment and storage medium
CN111568367A (en) * 2020-05-14 2020-08-25 中国民航大学 Method for identifying and quantifying saccadic intrusions
CN111568367B (en) * 2020-05-14 2023-07-21 中国民航大学 Method for identifying and quantifying saccadic intrusions

Also Published As

Publication number Publication date
CN109199411B (en) 2021-04-09

Similar Documents

Publication Publication Date Title
CN109199411A (en) Case insider's recognition methods based on Model Fusion
Zhang et al. Multimodal depression detection: Fusion of electroencephalography and paralinguistic behaviors using a novel strategy for classifier ensemble
Colomer Granero et al. A comparison of physiological signal analysis techniques and classifiers for automatic emotional evaluation of audiovisual contents
CN109199412B (en) Abnormal emotion recognition method based on eye movement data analysis
Zhu et al. Detecting emotional reactions to videos of depression
Sulaiman et al. EEG-based stress features using spectral centroids technique and k-nearest neighbor classifier
CN112259237B (en) Depression evaluation system based on multi-emotion stimulus and multi-stage classification model
Hartmann et al. EpiScan: online seizure detection for epilepsy monitoring units
CN113729707A (en) Emotion recognition method based on FECNN-LSTM multi-modal fusion of eye movement and PPG
Berbano et al. Classification of stress into emotional, mental, physical and no stress using electroencephalogram signal analysis
Das et al. Analyzing gaming effects on cognitive load using artificial intelligent tools
CN113729729B (en) Schizophrenia early detection system based on graph neural network and brain network
Wang et al. A hybrid classification to detect abstinent heroin-addicted individuals using EEG microstates
CN113397482A (en) Human behavior analysis method and system
Cakmak et al. Neuro signal based lie detection
Reches et al. A novel ERP pattern analysis method for revealing invariant reference brain network models
CN115363585A (en) Standardized group depression risk screening system and method based on habituation removal and film watching tasks
Altaf et al. Machine Learning Approach for Stress Detection based on Alpha-Beta and Theta-Beta Ratios of EEG Signals
Ma et al. Dynamic threshold distribution domain adaptation network: A cross-subject fatigue recognition method based on EEG signals
Zhu et al. Visceral versus verbal: Can we see depression?
CN113855022A (en) Emotion evaluation method and device based on eye movement physiological signals
CN114305454A (en) Fatigue state identification method and device based on domain confrontation neural network
CN112687373A (en) System and method for quantifying psychological craving degree
Chu et al. Detecting Lies: Finding the Degree of Falsehood from Observers’ Physiological Responses
CN114366102B (en) Multi-modal nervous emotion recognition method, apparatus, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant