CN104318223A - Face distinguishing feature position determining method and system - Google Patents
- Publication number
- CN104318223A (application CN201410652586.XA)
- Authority
- CN
- China
- Prior art keywords
- face
- distinguishing feature
- user
- image
- eye
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The invention relates to a method and system for determining the positions of distinguishing facial features. The system comprises a target image acquisition module, an eye movement information acquisition module, an eye movement information processing module, and a face recognition training module. The method and system determine distinguishing facial features through an eye tracker and psychological experiments during face recognition, greatly improving the accuracy with which distinguishing facial features are determined in the face recognition process.
Description
Technical field
The present invention relates to a method and application for calibrating the positions of distinguishing facial features, and in particular to an eye-tracker-based method and system for calibrating the positions of distinguishing facial features.
Background art
Eye-tracking technology provides a reliable, effective, and timely method for studying visual processing; it is currently widely used in fields such as advertising and product evaluation, user behavior analysis, ergonomics, and space safety.
In advertising psychology research, an eye tracker records a viewer's eye movements while they look at an advertisement. By analyzing the recorded data, researchers can clearly establish the order in which the viewer fixates on the advertisement, the fixation time and fixation count for particular parts of the image (the results can be broken down by region of interest), the saccade distance, changes in pupil diameter (area), and so on. From these data the viewer's mental activity is analyzed: by studying consumers' mental processes and characteristics, the regions that interest users can be identified and advertisements designed that evoke the consumer's desire to buy. The eye tracker is an important instrument in basic psychological research. It is generally used to record a person's eye movement characteristics while processing visual information and is widely applied in research on attention, visual perception, reading, and related fields.
Chinese patent 201110403953.9 discloses a method for extracting image regions of interest based on eye-tracker experimental data and low-level image features. On the one hand, the method extracts image regions of interest reflecting a person's true semantics from eye-tracker gaze-tracking experiment data, i.e., eye-movement ROIs; on the other hand, it extracts image regions of interest in the general sense through a weighted combination of low-level features, i.e., feature ROIs. It then analyzes the similarity between the feature ROIs and the eye-movement ROIs to find the weight combination that maximizes similarity, i.e., the optimal weights. Regions of interest extracted from other images of the same type using these weights better satisfy the user's semantic requirements.
Unlike the method disclosed in the above patent, the present invention applies eye-tracking technology to calibrating the positions of distinguishing facial features. The invention has significant application value in extracting the regions of face images that interest users. For example, most existing face recognition techniques use methods such as PCA and LDA; these methods have difficulty extracting distinguishing facial features, and the features they extract lack accuracy, which greatly affects face recognition accuracy. The present invention improves the accuracy with which distinguishing facial features are determined during face recognition and thus plays an important role in improving vision-based face recognition technology.
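For context, the kind of holistic representation the paragraph above contrasts against can be sketched as a generic eigenfaces-style PCA projection. The snippet below uses placeholder random data and scikit-learn purely for illustration; it is not part of the claimed method.

```python
import numpy as np
from sklearn.decomposition import PCA

# faces: rows are flattened grayscale face images (placeholder random data here)
faces = np.random.rand(100, 64 * 64)
pca = PCA(n_components=50, whiten=True)
embeddings = pca.fit_transform(faces)   # holistic "eigenface" coefficients
# Each coefficient mixes every pixel of the face, so there is no direct way
# to read off which local feature (eye, nose, mouth) drives a match.
```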
Summary of the invention
The problem to be solved by the present invention is to address the above deficiencies of the prior art by providing a method and system for calibrating the positions of distinguishing facial features.
The technical scheme of the present invention is as follows: a distinguishing facial feature position calibration system comprising the following modules: a target image acquisition module, an eye movement information acquisition module, an eye movement information processing module, and a face recognition training module.
A calibration method for the above distinguishing facial feature position calibration system is characterized in that it proceeds as follows:
1) Run the program, connect the eye tracker, calibrate it, and open the eye movement recording file;
2) Present a suitable target image; for example, the target images adopted in this technical scheme are scanned face photographs of sufficient sample size, evenly distributed across age groups, similar in size, with equal numbers of men and women (factors such as facial expression are balanced and controlled); one target image is selected at random;
3) Acquire the user's eye movement information while the user observes the image: obtain the user's eye movement scan and trace data through the eye tracker and compute the focal positions of the user's gaze on the image;
4) According to the obtained focal positions of the user's gaze on the image, analyze and extract the distinguishing facial feature regions the user attends to;
5) Mask the extracted distinguishing facial feature regions;
6) Present the masked image and repeat steps 3), 4), and 5) several times, successively extracting the distinguishing facial feature regions each user attends to; this concludes one group of experiments;
7) Repeat steps 1) to 6) to run multiple groups of experiments, and compare and analyze the experimental results to determine the feature regions a user's gaze attends to during face recognition, thereby identifying the distinguishing facial features that human gaze attends to during face recognition;
8) Take the distinguishing facial features obtained in step 7) and perform face recognition training: build a suitable classification model on the training set; for example, this technical scheme may select, but is not limited to, a support vector machine (SVM) classification model, construct the optimal separating hyperplane in feature space, obtain the optimal weights, and generate a training model file;
9) Use the model file generated in step 8) to build a face image classification program and establish a face recognition system.
In step 3), the user's eye movement scan and trace data are obtained through the eye tracker; for example, this technical scheme selects pupil diameter, first fixation duration, total fixation time, fixation count, regression time, blink duration, saccade amplitude, and saccade duration.
In step 4), according to the obtained focal positions of the user's gaze on the image, the distinguishing facial feature regions the user attends to are analyzed by computing the activation rate of each focal-position region. The activation rate represents the probability that the user is interested in a distinguishing facial feature region; when this value exceeds a set threshold, the region is activated and extracted. When the activation rate of an activated region falls below the set threshold, the region's activated state is suppressed and a new active region is selected.
When the activation rate of a masked distinguishing facial feature region exceeds the set threshold, its activated state is likewise suppressed.
The advantage of the invention is that distinguishing facial features are determined through an eye tracker and psychological experiments during face recognition, which substantially improves the accuracy with which distinguishing facial features are determined in face recognition.
Brief description of the drawings
Fig. 1 is a schematic diagram of the distinguishing facial feature determination process according to an embodiment of the present invention.
Detailed description of the embodiments
As shown in Fig. 1, a distinguishing facial feature position calibration system comprises the following modules: a target image acquisition module, an eye movement information acquisition module, an eye movement information processing module, and a face recognition training module.
The calibration method for the above distinguishing facial feature position calibration system proceeds as follows:
1) Run the program, connect the eye tracker, calibrate it, and open the eye movement recording file;
2) Present a suitable target image; for example, the target images adopted in this technical scheme are scanned face photographs of sufficient sample size, evenly distributed across age groups, similar in size, with equal numbers of men and women (factors such as facial expression are balanced and controlled); one target image is selected at random;
3) Acquire the user's eye movement information while the user observes the image: obtain the user's eye movement scan and trace data through the eye tracker and compute the focal positions of the user's gaze on the image;
4) According to the obtained focal positions of the user's gaze on the image, analyze and extract the distinguishing facial feature regions the user attends to;
5) Mask the extracted distinguishing facial feature regions;
6) Present the masked image and repeat steps 3), 4), and 5) several times, successively extracting the distinguishing facial feature regions each user attends to; this concludes one group of experiments (a code sketch of one such trial group follows this list);
7) Repeat steps 1) to 6) to run multiple groups of experiments, and compare and analyze the experimental results to determine the feature regions a user's gaze attends to during face recognition, thereby identifying the distinguishing facial features that human gaze attends to during face recognition;
8) Take the distinguishing facial features obtained in step 7) and perform face recognition training: build a suitable classification model on the training set; for example, this technical scheme may select, but is not limited to, a support vector machine (SVM) classification model, construct the optimal separating hyperplane in feature space, obtain the optimal weights, and generate a training model file;
9) Use the model file generated in step 8) to build a face image classification program and establish a face recognition system.
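By way of illustration, a minimal sketch of one trial group (steps 3 to 6) follows. The `record_gaze` and `extract_region` callables are hypothetical stand-ins, not names from this patent: the first would wrap the device-specific presentation-and-recording call, the second the region analysis of step 4.

```python
def run_trial_group(image, record_gaze, extract_region, n_rounds=3):
    """One trial group (steps 3 to 6): record gaze on the image, extract the
    most-attended distinguishing feature region, mask it, and repeat.

    image: 2-D (or H x W x C) NumPy array holding the presented photograph.
    record_gaze(image) -> (N, 3) array of (x, y, t) gaze samples; hypothetical
        stand-in for the device-specific display-and-record call.
    extract_region(samples) -> (x0, y0, x1, y1) pixel box, or None (step 4).
    """
    image = image.copy()                      # keep the original photograph intact
    regions = []
    for _ in range(n_rounds):
        samples = record_gaze(image)          # step 3: present image, record gaze
        region = extract_region(samples)      # step 4: attended feature region
        if region is None:                    # nothing exceeded the threshold
            break
        regions.append(region)
        x0, y0, x1, y1 = region
        image[y0:y1, x0:x1] = 0               # step 5: mask the extracted region
    return regions                            # regions from one group of trials
```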
In step 3), the user's eye movement scan and trace data are obtained through the eye tracker; for example, this technical scheme selects pupil diameter, first fixation duration, total fixation time, fixation count, regression time, blink duration, saccade amplitude, and saccade duration. In step 4), according to the obtained focal positions of the user's gaze on the image, the distinguishing facial feature regions the user attends to are analyzed by computing the activation rate of each focal-position region. The activation rate represents the probability that the user is interested in a distinguishing facial feature region; when this value exceeds a set threshold, the region is activated and extracted. When the activation rate of an activated region falls below the set threshold, the region's activated state is suppressed and a new active region is selected. When the activation rate of a masked distinguishing facial feature region exceeds the set threshold, its activated state is likewise suppressed.
Target image acquisition module: the target images are scanned face photographs of sufficient sample size, evenly distributed across age groups, similar in size, with equal numbers of men and women (factors such as facial expression are balanced and controlled); one target image is selected at random.
Eye movement information acquisition module: obtains the user's eye movement scan and trace data through the eye tracker; for example, this technical scheme selects pupil diameter, first fixation duration, total fixation time, fixation count, regression time, blink duration, saccade amplitude, and saccade duration.
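The patent does not specify how fixations are segmented from raw gaze samples. One common choice, shown here only as a plausible sketch, is dispersion-threshold (I-DT) fixation detection, from which the fixation count, first fixation duration, and total fixation time named above can be derived.

```python
import numpy as np

def detect_fixations(samples, max_dispersion=30.0, min_duration=0.1):
    """Simplified dispersion-threshold (I-DT) fixation detection.

    samples: (N, 3) NumPy array of (x, y, t) in pixels and seconds.
    Returns a list of (cx, cy, onset, offset) fixations.
    """
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        j = i
        while j + 1 < n:                       # grow window while dispersion is small
            win = samples[i:j + 2, :2]
            if np.ptp(win[:, 0]) + np.ptp(win[:, 1]) > max_dispersion:
                break
            j += 1
        onset, offset = samples[i, 2], samples[j, 2]
        if offset - onset >= min_duration:     # long enough to count as a fixation
            cx, cy = samples[i:j + 1, :2].mean(axis=0)
            fixations.append((cx, cy, onset, offset))
            i = j + 1
        else:
            i += 1
    return fixations

def fixation_metrics(fixations):
    """Derive three of the measures listed above from detected fixations."""
    durations = [off - on for _, _, on, off in fixations]
    return {"fixation_count": len(durations),
            "first_fixation_duration": durations[0] if durations else 0.0,
            "total_fixation_time": sum(durations)}
```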
Eye movement information processing module: connected to the eye movement information acquisition module; according to the obtained focal positions of the user's gaze on the image, it analyzes the distinguishing facial feature regions the user attends to by computing the activation rate of each focal-position region. The activation rate represents the probability that the user is interested in a distinguishing facial feature region; when this value exceeds a set threshold, the region is activated and extracted. When the activation rate of an activated region falls below the set threshold, the region's activated state is suppressed and a new active region is selected. Further, when the activation rate of a masked distinguishing facial feature region exceeds the set threshold, its activated state is also suppressed.
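The patent leaves the exact definition of the activation rate open. The sketch below assumes, purely for illustration, that a region's activation rate is its share of total fixation time, and applies the threshold and suppression rules described above, including keeping masked regions suppressed.

```python
def select_active_region(fixations, regions, threshold=0.3, masked=frozenset()):
    """Apply the activation-rate rule under the assumption that a region's
    activation rate is the share of total fixation time it receives.

    fixations: list of (cx, cy, onset, offset) tuples.
    regions:   {name: (x0, y0, x1, y1)} candidate feature regions.
    masked:    names of already-masked regions, which stay suppressed even
               if their rate re-exceeds the threshold.
    Returns (name_of_active_region_or_None, rates).
    """
    total = sum(off - on for _, _, on, off in fixations) or 1.0
    rates = {}
    for name, (x0, y0, x1, y1) in regions.items():
        t = sum(off - on for cx, cy, on, off in fixations
                if x0 <= cx <= x1 and y0 <= cy <= y1)
        rates[name] = t / total                # fixation-time share of the region
    active = None
    for name, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
        if rate > threshold and name not in masked:
            active = name                      # highest-rate unmasked region wins
            break
    return active, rates
```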
Face recognition training module: performs face recognition training on the collected distinguishing facial features; builds a suitable classification model on the training set (for example, this technical scheme may select, but is not limited to, a support vector machine (SVM) classification model), constructs the optimal separating hyperplane in feature space, obtains the optimal weights, and generates a training model file; with the generated model file, a face image classification program is built and a face recognition system established.
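As one concrete reading of the training module, the sketch below trains an SVM classifier on feature vectors extracted from the attended regions and saves the model file. The feature extraction itself, the data, and the file name are assumptions, and scikit-learn merely stands in for "a suitable classification model".

```python
import joblib
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_recognizer(features, labels, model_path="face_svm.joblib"):
    """features: (n_samples, n_dims) vectors from the distinguishing regions;
    labels: the identity of each face. Writes the trained model file."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.2, random_state=0, stratify=labels)
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    model.fit(X_tr, y_tr)                          # fit the separating hyperplane
    print("held-out accuracy:", model.score(X_te, y_te))
    joblib.dump(model, model_path)                 # the "training model file"
    return model
```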
Claims (5)
1. A distinguishing facial feature position calibration system, characterized in that it comprises the following modules: a target image acquisition module, an eye movement information acquisition module, an eye movement information processing module, and a face recognition training module.
2. A calibration method for the distinguishing facial feature position calibration system as claimed in claim 1, characterized in that the calibration proceeds as follows:
1) run the program, connect the eye tracker, calibrate it, and open the eye movement recording file;
2) present a suitable target image; for example, the target images adopted in this technical scheme are scanned face photographs of sufficient sample size, evenly distributed across age groups, similar in size, with equal numbers of men and women (factors such as facial expression are balanced and controlled); one target image is selected at random;
3) acquire the user's eye movement information while the user observes the image: obtain the user's eye movement scan and trace data through the eye tracker and compute the focal positions of the user's gaze on the image;
4) according to the obtained focal positions of the user's gaze on the image, analyze and extract the distinguishing facial feature regions the user attends to;
5) mask the extracted distinguishing facial feature regions;
6) present the masked image and repeat steps 3), 4), and 5) several times, successively extracting the distinguishing facial feature regions each user attends to; this concludes one group of experiments;
7) repeat steps 1) to 6) to run multiple groups of experiments, and compare and analyze the experimental results to determine the feature regions a user's gaze attends to during face recognition, thereby identifying the distinguishing facial features that human gaze attends to during face recognition;
8) take the distinguishing facial features obtained in step 7) and perform face recognition training: build a suitable classification model on the training set; for example, this technical scheme may select, but is not limited to, a support vector machine (SVM) classification model, construct the optimal separating hyperplane in feature space, obtain the optimal weights, and generate a training model file;
9) use the model file generated in step 8) to build a face image classification program and establish a face recognition system.
3. The distinguishing facial feature position calibration method as claimed in claim 2, characterized in that in step 3) the user's eye movement scan and trace data are obtained through the eye tracker; for example, this technical scheme selects pupil diameter, first fixation duration, total fixation time, fixation count, regression time, blink duration, saccade amplitude, and saccade duration.
4. The distinguishing facial feature position calibration method as claimed in claim 2, characterized in that in step 4), according to the obtained focal positions of the user's gaze on the image, the distinguishing facial feature regions the user attends to are analyzed by computing the activation rate of each focal-position region; the activation rate represents the probability that the user is interested in a distinguishing facial feature region; when this value exceeds a set threshold, the region is activated and extracted; when the activation rate of an activated region falls below the set threshold, the region's activated state is suppressed and a new active region is selected.
5. The distinguishing facial feature position calibration method as claimed in claim 4, characterized in that when the activation rate of a masked distinguishing facial feature region exceeds the set threshold, its activated state is suppressed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410652586.XA | 2014-11-18 | 2014-11-18 | Face distinguishing feature position determining method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410652586.XA | 2014-11-18 | 2014-11-18 | Face distinguishing feature position determining method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104318223A (en) | 2015-01-28 |
Family
ID=52373452
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410652586.XA (Pending) | Face distinguishing feature position determining method and system | 2014-11-18 | 2014-11-18 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104318223A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102221881A (en) * | 2011-05-20 | 2011-10-19 | Beihang University | Man-machine interaction method based on analysis of interest regions by bionic agent and vision tracking |
CN102521595A (en) * | 2011-12-07 | 2012-06-27 | Central South University | Method for extracting image region of interest based on eye movement data and bottom-layer features |
US20140006463A1 (en) * | 2012-04-23 | 2014-01-02 | Michal Jacob | System, method, and computer program product for using eye movement tracking for retrieval of observed information and of related specific context |
Non-Patent Citations (2)
Title |
---|
Fang Fang: "Visual Attention Modeling and Its Application in Image Analysis", China Doctoral Dissertations Full-text Database * |
Cao Xiaohua et al.: "Eye Movement Characteristics in Face Pattern Recognition Sampling", Psychological Science * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105069304A (en) * | 2015-08-18 | 2015-11-18 | SYSU-CMU Shunde International Joint Research Institute | Machine learning-based method for evaluating and predicting ASD |
CN105069304B (en) * | 2015-08-18 | 2019-04-05 | SYSU-CMU Shunde International Joint Research Institute | Machine learning-based device for evaluating and predicting ASD |
CN108052973A (en) * | 2017-12-11 | 2018-05-18 | PLA Strategic Support Force Information Engineering University | Map symbol user interest analysis method based on multiple items of eye movement data |
CN108052973B (en) * | 2017-12-11 | 2020-05-05 | PLA Strategic Support Force Information Engineering University | Map symbol user interest analysis method based on multiple items of eye movement data |
CN113111745A (en) * | 2021-03-30 | 2021-07-13 | Sichuan University | OpenPose-based eye movement recognition method for product attention |
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20150128 |