CN106909220A - Gaze interaction method suitable for touch control - Google Patents

Gaze interaction method suitable for touch control

Info

Publication number
CN106909220A
CN106909220A (application CN201710093165.1A)
Authority
CN
China
Prior art keywords
image
layer
neural networks
convolutional neural
screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710093165.1A
Other languages
Chinese (zh)
Inventor
孙建德
吴雪梅
李静
万文博
吴强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Normal University
Original Assignee
Shandong Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Normal University filed Critical Shandong Normal University
Priority to CN201710093165.1A
Publication of CN106909220A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G06T 3/08
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G06V 40/193 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G06V 40/197 Matching; Classification

Abstract

The invention discloses a gaze interaction method suitable for touch control, belonging to the technical field of video and multimedia signal processing. The method feeds eye-region images into a CNN, which performs feature extraction and dimensionality reduction through convolution and down-sampling operations, overcoming the limitation of conventional gaze-estimation methods that build an eyeball model with a high-definition camera and additional infrared equipment. In the image-acquisition phase, images of different individuals are collected at different positions and times, so that the eye-image database covers diverse illumination, skin tones, and eye shapes and the system meets the requirements of varied conditions and different individuals. This greatly improves the accuracy of classification.

Description

Gaze interaction method suitable for touch control
Technical field
The present invention relates to a gaze interaction method suitable for touch control, and belongs to the technical field of video and multimedia signal processing.
Technical background
Human-computer interaction is among the fastest-developing areas of current user-interface research. The prevailing interaction modes today are traditional contact-based ones such as the keyboard, the mouse, and the camera. With the continuing development of artificial-intelligence technology, however, many contactless interaction modes have emerged, such as gesture recognition. Compared with traditional contact-based interaction, contactless interaction is safer and more hygienic, and it is being applied to more and more aspects of daily life.
The gaze interaction method of the present invention is a form of contactless human-computer interaction. Gaze interaction, as the name suggests, realizes human-computer interaction by estimating the direction of the human gaze; its core technology is gaze estimation. Gaze interaction is finding ever wider application in fields such as virtual reality, developmental psychology, medical diagnosis, perception analysis, and commercial advertisement evaluation. It is also a valuable aid for patients who are severely paralyzed but retain good vision: through the movement of their eyes they can express their wishes and needs and control a corresponding system to satisfy them.
Traditional model-based gaze estimation obtains the precise positions of the pupil center and the corneal reflection point with a high-resolution camera and an infrared light source, builds a mathematical model of the eyeball, and derives the position of the fixation point through a mapping relation. This approach places high demands on the hardware, is easily affected by the lighting environment, and requires a calibration procedure, all of which set a high threshold for gaze interaction. Appearance-based gaze estimation, by contrast, uses machine learning: an eye-image database is first established, a neural network performs dimensionality reduction and feature extraction on the eye images, and a correspondence is established between eye images and gaze directions, so that the gaze direction can be estimated from an eye image. Such methods, however, aim at obtaining an accurate gaze direction and therefore usually employ regression during training; this makes the training process complex, makes the network parameters difficult to tune, and yields poor gaze-direction precision. Moreover, the eye-image databases used by such methods are mostly not collected while the subject gazes at a screen, so their performance in screen-based touch applications is unsatisfactory.
Summary of the invention
Touch-based interaction is the most common human-computer interaction mode in today's consumer electronics: a function is activated by triggering a button of a certain area on a screen of limited size. Drawing on this characteristic of touch technology, and addressing the shortcoming that traditional gaze-estimation methods based on eyeball models or on eye images are difficult to apply to touch applications, the present invention provides a gaze interaction method suitable for touch control. The method partitions the watched screen into blocks and uses these blocks to simulate the buttons of a touch interface; it innovatively models the relation between the different watched screen blocks and the corresponding regions of the eye image, and, through the establishment of an eye-image database and the application of a convolutional neural network, estimates from the eye image the screen block being watched, realizing gaze-based touch interaction. Because this process does not require an accurate gaze direction, only the block being watched, a pattern-classification method from machine learning can be used rather than regression. Experiments show that the method achieves high resolution under different illumination conditions and for different individuals.
The method requires only an ordinary web camera and a personal computer. The technical scheme adopted by the present invention is as follows.
A gaze interaction method suitable for touch control, characterized in that: eye images of different individuals watching the same screen-position block under different external conditions are grouped into one class, and pattern classification is realized with a convolutional neural network (CNN), so that the screen block being watched is recognized from the eye image. The method comprises the following steps:
(1) establishment of the eye-image database: data are collected while multiple individuals, under different illumination conditions, at different times and at different positions, watch fixation points appearing at random on the screen; face detection and eye detection are performed on the collected images to obtain eye-region images; the eye-image database is established and the eye images are screened;
(2) the eye images are divided into a training set and a validation set and used as the input of the convolutional neural network; the computer screen is partitioned into blocks according to the requirements of the application; eye images watching the same screen block are regarded as one class; the eye images are labeled accordingly and the convolutional neural network is trained;
(3) during gaze estimation, an eye image to be classified is input into the trained convolutional neural network model, whereby its class is determined and the corresponding watched screen block is obtained, thereby estimating the gaze direction (an illustrative class-to-block mapping is sketched below).
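By way of illustration only (this sketch is not part of the patent text), the class label produced by such a classifier maps back to a screen position. The sketch below assumes the 6-large-block, 9-small-block layout used later in the embodiment, a hypothetical 1920×1080 screen, and block-major class numbering; all of these are assumptions.

    # Hypothetical mapping from a predicted class index (0-53) to the pixel
    # center of the gazed screen block: 6 large blocks (3 columns x 2 rows),
    # each split into 9 small blocks (3 x 3). Layout and screen size assumed.
    def class_to_block_center(class_id, screen_w=1920, screen_h=1080):
        big, small = divmod(class_id, 9)         # large-block id, small-block id
        big_row, big_col = divmod(big, 3)        # 3 large blocks per row
        small_row, small_col = divmod(small, 3)  # 3 small blocks per row
        block_w, block_h = screen_w / 3, screen_h / 2
        cell_w, cell_h = block_w / 3, block_h / 3
        x = big_col * block_w + (small_col + 0.5) * cell_w
        y = big_row * block_h + (small_row + 0.5) * cell_h
        return int(x), int(y)

For example, class_to_block_center(0) returns the center of the top-left small block, which would be treated as the simulated button the user has "touched" with their gaze.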
More specifically, step (1) is implemented as follows:
A. Eye-image collection, which proceeds as follows:
(a) Considering the practical application, the computer screen is first divided into several large blocks; each large block is further divided into several small blocks, and a fixation point is set at the center of each small block;
(b) The subject sits in front of the screen within the range of the camera, and the eyes follow the fixation points while the camera captures face images; to prevent eye fatigue, an interval is inserted between two adjacent fixation points, during which the camera does not capture images;
(c) The collected images are screened and usable sample data are extracted;
B. Face detection and eye detection are performed on the face images, and the eye rectangles are normalized to a uniform size for training the convolutional neural network.
More specifically, step (2) is implemented as follows:
A. For data selection, the usable samples are divided in a certain proportion into a training set and a validation set, used respectively for training the convolutional neural network and for verifying the classification accuracy;
B. The structure of the convolutional neural network is set according to the actual data size as follows:
(a) Convolutional layers of the convolutional neural network: the network extracts deep features of the image through the convolutional layers; a convolution kernel is selected according to the feature-map size and edge padding is applied to the image; the value of the j-th feature map of the i-th convolutional layer at position (x, y) is:

f_{ij}^{xy} = \mathrm{relu}\Big(b_{ij} + \sum_{n}\sum_{p=0}^{p_i-1}\sum_{q=0}^{q_i-1} w_{ijn}^{pq}\, f_{(i-1)n}^{(x+p)(y+q)}\Big)

where relu(·) is the rectified linear unit, defined by g(x) = max(0, x); b_{ij} is the bias of the j-th feature map of the i-th layer; n ranges over the set of feature maps of the previous layer connected to the current feature map; p_i and q_i are respectively the height and width of the convolution kernel of the i-th layer; and w_{ijn}^{pq} is the value at position (p, q) of the kernel connected to the n-th feature map of the previous layer;
(b) Sampling layers of the convolutional neural network: the network performs dimensionality reduction on the image through the sampling layers; the j-th feature map of the i-th sampling layer is expressed as:

f_{ij} = f\big(\beta_{ij}\,\mathrm{down}(f_{(i-1)j}) + b_{ij}\big)

where β_{ij} and b_{ij} are respectively the multiplicative bias and additive bias of the j-th feature map of the i-th layer, and down(·) is the down-sampling function, here max pooling. An LRN layer is connected after the down-sampling layer; it imitates the lateral-inhibition mechanism of the biological nervous system, creating competition among the activities of local neurons so that larger responses become relatively larger still, which improves the generalization ability of the model.
(c) Output layer of the convolutional neural network: the network realizes full connection through inner-product layers and finally outputs the class number.
In the above CNN-based classification mechanism, eye images of known class are fed into the configured CNN and the network loss is computed until it steadily decreases and stabilizes. After training, the validation set is fed into the network to compute the probability that each sample belongs to each class; the class with the highest probability is the final classification.
The CNN-based gaze-estimation technique takes the eye image directly as input and classifies it by learned features, avoiding manual feature extraction and the eyeball-modeling stage; converting the high-dimensional eye image into low-dimensional feature maps improves classification performance and reduces the complexity of the experiment. Training on eye images of different individuals increases the diversity of the training set, so that the classification results generalize across individuals. The high accuracy verified on the multi-class tasks shows that the present invention has practical value and offers a new approach to gaze estimation.
Brief description of the drawings
Fig. 1 is the system hardware diagram;
Fig. 2 is the data-acquisition diagram;
Fig. 3 is the block diagram of the CNN-based gaze-estimation system;
Fig. 4 is a schematic diagram of face-region and eye-region extraction;
Fig. 5 is the structure diagram of the convolutional neural network;
Fig. 6 shows the training-set loss and validation-set accuracy curves for the 6-class problem;
Fig. 7 shows the training-set loss and validation-set accuracy curves for the 54-class problem;
Fig. 8 shows the feature maps of each layer of the network.
Specific embodiments
The invention is further described below with reference to the accompanying drawings.
The present invention provides a new CNN-based gaze-estimation method. According to the proposed method, face data are collected first. The hardware required for the experiment comprises an ordinary personal computer and a web camera; during the experiment the camera is placed at the middle of the top edge of the computer screen. Fig. 1 shows the hardware of the system: a PC and a web camera. The screen is first divided into 6 large blocks; each large block is then divided into 9 small blocks, and a fixation point is set in each small block. As shown in Fig. 2, during the experiment the subject sits 50–60 cm in front of the screen, keeps the head still, and follows the fixation point on the screen with the eyes while the camera captures face images. Fig. 3 gives the flow chart of the whole gaze-estimation system. Following this flow chart, the realization of the CNN-based gaze-estimation method of the invention is described in detail below, taking one individual in the database as an example.
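For concreteness, a minimal sketch of such a capture loop with OpenCV is given below; it is not the patent's actual software, and the webcam index, frame count per point, rest interval, and file names are all assumptions.

    import time
    import cv2

    # Illustrative acquisition loop: for each of the 54 fixation points, rest
    # briefly (no capture, to limit eye fatigue), then grab frames from the
    # webcam mounted above the screen while the subject fixates the point.
    cap = cv2.VideoCapture(0)
    for class_id in range(54):            # one fixation point per class
        time.sleep(1.0)                   # rest interval, no capture
        for k in range(20):               # frames captured per fixation point
            ok, frame = cap.read()
            if ok:
                cv2.imwrite(f"face_{class_id:02d}_{k:02d}.png", frame)
    cap.release()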
The invention comprises the following specific steps:
1. Face-region and eye-region extraction
(1) Fig. 4 shows the face-region extraction process. Fig. 4(a) shows face detection in the original color image. The experiment uses Haar features for face detection. The Haar face classifier is an XML file describing the Haar feature values of the face; the different facial regions are discriminated by their feature values.
(2) Within the extracted face region, Haar eye detection further extracts the left and right eyes. Fig. 4(b) shows eye detection within the detected face region; the detected eye images are shown in Fig. 4(c).
(3) The extracted eye images are normalized in size; as required by the experiment, the eye rectangles are normalized to 40×72 pixels.
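The detection and normalization of steps (1)-(3) can be reproduced, for example, with OpenCV's stock Haar cascades; the sketch below rests on that assumption (the patent does not name a library, and the file names are hypothetical).

    import cv2

    # Haar-feature face detection (step 1), eye detection inside the face
    # region (step 2), and normalization to 40x72 pixels (step 3).
    face_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

    img = cv2.imread("face_00_00.png")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cc.detectMultiScale(gray, 1.3, 5):
        face_roi = gray[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cc.detectMultiScale(face_roi):
            eye = face_roi[ey:ey + eh, ex:ex + ew]
            eye = cv2.resize(eye, (72, 40))   # cv2.resize takes (width, height)
            cv2.imwrite("eye.png", eye)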
2. Training-set and validation-set preparation
The experiment selects the 181,440 left-eye images obtained in step (3) above and divides them in a certain proportion into a training set and a validation set. The images and their corresponding labels are converted into LMDB format for training the convolutional neural network.
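A minimal sketch of this conversion with the lmdb package and Caffe's protobuf bindings is shown below; the sample list and file names are hypothetical, and the images are assumed to be already normalized to 40×72.

    import cv2
    import lmdb
    from caffe.proto import caffe_pb2

    # Hypothetical (file path, class label) pairs for the screened eye images.
    samples = [("eye_000.png", 0), ("eye_001.png", 1)]

    env = lmdb.open("train_lmdb", map_size=1 << 32)
    with env.begin(write=True) as txn:
        for idx, (path, label) in enumerate(samples):
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)   # 40x72 eye image
            datum = caffe_pb2.Datum(channels=1, height=40, width=72,
                                    data=img.tobytes(), label=label)
            txn.put(f"{idx:08d}".encode(), datum.SerializeToString())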
3. Building the CNN
The CNN is trained with the Caffe deep-learning framework. Caffe is a clear and efficient deep-learning framework: a pure C++/CUDA framework that supports command-line, Python, and MATLAB interfaces and switches seamlessly between CPU and GPU. The experiment trains on the GPU, which greatly reduces the training time. Fig. 5 shows the structure of the classification CNN, comprising several convolutional layers, max-pooling layers, fully connected layers, and accompanying activation functions; it also gives an example of the number of feature maps in each layer, which in practice can be determined by experiment. A pycaffe sketch of a network of this shape is given after the layer descriptions below.
(1) Convolutional layers of the CNN: the network extracts deep features of the image through the convolutional layers. A convolution kernel is selected according to the feature-map size and edge padding is applied to the image. The value of the j-th feature map of the i-th convolutional layer at position (x, y) is:

f_{ij}^{xy} = \mathrm{relu}\Big(b_{ij} + \sum_{n}\sum_{p=0}^{p_i-1}\sum_{q=0}^{q_i-1} w_{ijn}^{pq}\, f_{(i-1)n}^{(x+p)(y+q)}\Big)

where relu(·) is the rectified linear unit, g(x) = max(0, x); b_{ij} is the bias of the j-th feature map of the i-th layer; n ranges over the set of feature maps of the previous layer connected to the current feature map; p_i and q_i are respectively the height and width of the convolution kernel of the i-th layer; and w_{ijn}^{pq} is the value at position (p, q) of the kernel connected to the n-th feature map of the previous layer.
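For clarity, the formula can be transcribed directly into NumPy; this reference sketch is illustrative only and is not the patent's code.

    import numpy as np

    # Convolutional-layer forward pass per the formula above. prev holds the
    # previous layer's feature maps, shape (N, H, W); kernels has shape
    # (J, N, p, q); biases has shape (J,). Output: (J, H-p+1, W-q+1).
    def conv_layer(prev, kernels, biases):
        J, N, p, q = kernels.shape
        H, W = prev.shape[1] - p + 1, prev.shape[2] - q + 1
        out = np.empty((J, H, W))
        for j in range(J):
            for x in range(H):
                for y in range(W):
                    s = biases[j] + np.sum(kernels[j] * prev[:, x:x + p, y:y + q])
                    out[j, x, y] = max(0.0, s)   # relu(x) = max(0, x)
        return out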
(2) Sampling layers of the CNN: the network performs dimensionality reduction on the image through the sampling layers; the j-th feature map of the i-th sampling layer is expressed as:

f_{ij} = f\big(\beta_{ij}\,\mathrm{down}(f_{(i-1)j}) + b_{ij}\big)

where β_{ij} and b_{ij} are respectively the multiplicative bias and additive bias of the j-th feature map of the i-th layer, and down(·) is the down-sampling function, here max pooling. An LRN layer is connected after the down-sampling layer. The LRN layer imitates the lateral-inhibition mechanism of the biological nervous system, creating competition among the activities of local neurons so that larger responses become relatively larger still, which improves the generalization ability of the model.
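Likewise, the down(·) operation with max pooling can be sketched in NumPy; for simplicity this takes f as the identity with β_ij = 1 and b_ij = 0, which are assumptions rather than values given in the text.

    import numpy as np

    # 2x2 max pooling: each output value is the maximum of a 2x2 block of the
    # input feature map, halving each spatial dimension.
    def max_pool_2x2(fmap):
        H, W = fmap.shape[0] // 2 * 2, fmap.shape[1] // 2 * 2
        blocks = fmap[:H, :W].reshape(H // 2, 2, W // 2, 2)
        return blocks.max(axis=(1, 3))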
(3) Output layer of the CNN: the feature dimensions of the fully connected layers are set to 256 and 256 respectively, and the final class count is set to realize the 6-class and 54-class classification tasks respectively.
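A network of this general shape could be written with pycaffe's NetSpec as sketched below; the kernel sizes, channel counts, and layer count are assumptions, since the text fixes only the two 256-dimensional fully connected layers and the final class count (6 or 54).

    import caffe
    from caffe import layers as L, params as P

    # Illustrative conv -> relu -> max-pool -> LRN stacks followed by the
    # 256-256-num_classes fully connected head described in the text.
    def gaze_net(lmdb_path, batch_size, num_classes=6):
        n = caffe.NetSpec()
        n.data, n.label = L.Data(batch_size=batch_size, backend=P.Data.LMDB,
                                 source=lmdb_path, ntop=2)
        n.conv1 = L.Convolution(n.data, kernel_size=5, num_output=32, pad=2)
        n.relu1 = L.ReLU(n.conv1, in_place=True)
        n.pool1 = L.Pooling(n.relu1, kernel_size=2, stride=2, pool=P.Pooling.MAX)
        n.norm1 = L.LRN(n.pool1, local_size=5, alpha=1e-4, beta=0.75)
        n.conv2 = L.Convolution(n.norm1, kernel_size=5, num_output=64, pad=2)
        n.relu2 = L.ReLU(n.conv2, in_place=True)
        n.pool2 = L.Pooling(n.relu2, kernel_size=2, stride=2, pool=P.Pooling.MAX)
        n.fc1 = L.InnerProduct(n.pool2, num_output=256)
        n.relu3 = L.ReLU(n.fc1, in_place=True)
        n.fc2 = L.InnerProduct(n.relu3, num_output=256)
        n.relu4 = L.ReLU(n.fc2, in_place=True)
        n.score = L.InnerProduct(n.relu4, num_output=num_classes)
        n.loss = L.SoftmaxWithLoss(n.score, n.label)
        return n.to_proto()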
4. Configure the network model and the other required parameters, and train the 6-class and 54-class classification networks respectively.
(1) The loss and accuracy output during training and validation are analyzed to determine the convergence of the network and the classification accuracy. Figs. 6 and 7 show, for the 6-class and 54-class problems respectively, the training-set and validation-set loss curves and the validation-set accuracy curves. In Figs. 6 and 7 the dotted line is the value of the loss function during training, the triangle-marked line the loss during testing, and the star-marked line the accuracy measured during testing. As the number of iterations increases, the loss decreases until convergence, and the validation-set accuracy rises to its highest value and stabilizes. The optimal network determined by tuning the configuration parameters is the classification network we need.
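A minimal pycaffe training sketch consistent with this step follows; the solver file name is an assumption, and the hyperparameters inside it are not given in the text.

    import caffe

    caffe.set_mode_gpu()                  # GPU training saves time, per the text
    solver = caffe.SGDSolver("solver.prototxt")
    for it in range(10000):
        solver.step(1)                    # one SGD iteration
        if it % 100 == 0:                 # watch the loss decrease to convergence
            print(it, float(solver.net.blobs["loss"].data))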
(2) Each layer of the convolutional neural network is visualized; taking one eye image of the 6-class problem as an example, the feature maps of the CNN are shown in Fig. 8. Fig. 8(a) shows the feature maps output after each convolutional and down-sampling layer together with the corresponding convolution weights; Fig. 8(b) shows the features output after the fully connected layers, their histograms, and the final class. The final probability map shows that the class with the highest probability is the class of the input image.
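The per-layer feature maps of Fig. 8 can be inspected through pycaffe's blobs, for example as sketched here (the deploy and weight file names are hypothetical):

    import caffe

    net = caffe.Net("deploy.prototxt", "gaze.caffemodel", caffe.TEST)
    net.forward()
    for name, blob in net.blobs.items():
        print(name, blob.data.shape)          # layer name and feature-map shape
    conv1_maps = net.blobs["conv1"].data[0]   # feature maps of the first conv layer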
(3) An eye image to be classified need only be input into the trained multi-class network model to determine its class. From the class of the eye image, the approximate gaze direction and the corresponding screen position are determined. Experimental results show a classification accuracy of 93% for the 6-class task and 83% for the 54-class task.
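A matching classification sketch: feed one normalized eye image through the trained network and read off the gazed screen block (file names and the "prob" output name are assumptions):

    import cv2
    import numpy as np
    import caffe

    net = caffe.Net("deploy.prototxt", "gaze.caffemodel", caffe.TEST)
    eye = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
    net.blobs["data"].reshape(1, 1, 40, 72)
    net.blobs["data"].data[0, 0] = eye        # 40x72 normalized eye image
    prob = net.forward()["prob"][0]
    print("gazed screen block:", int(prob.argmax()))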

Claims (3)

1. A gaze interaction method suitable for touch control, characterized in that: eye images of different individuals watching the same screen-position block under different external conditions are grouped into one class, and pattern classification is realized with a convolutional neural network (CNN), so that the screen block being watched is recognized from the eye image; the method comprises the following steps:
(1) establishment of the eye-image database: data are collected while multiple individuals, under different illumination conditions, at different times and at different positions, watch fixation points appearing at random on the screen; face detection and eye detection are performed on the collected images to obtain eye-region images; the eye-image database is established and the eye images are screened;
(2) the eye images are divided into a training set and a validation set and used as the input of the convolutional neural network; the computer screen is partitioned into blocks according to the requirements of the application; eye images watching the same screen block are regarded as one class; the eye images are labeled accordingly and the convolutional neural network is trained;
(3) during gaze estimation, an eye image to be classified is input into the trained convolutional neural network model, whereby its class is determined and the corresponding watched screen block is obtained, thereby estimating the gaze direction.
2. The gaze interaction method suitable for touch control according to claim 1, characterized in that step (1) is implemented as follows:
A. eye-image collection, which proceeds as follows:
(a) considering the practical application, the computer screen is first divided into several large blocks; each large block is further divided into several small blocks, and a fixation point is set at the center of each small block;
(b) the subject sits in front of the screen within the range of the camera, and the eyes follow the fixation points while the camera captures face images; to prevent eye fatigue, an interval is inserted between two adjacent fixation points, during which the camera does not capture images;
(c) the collected images are screened and usable sample data are extracted;
B. face detection and eye detection are performed on the face images, and the eye rectangles are normalized to a uniform size for training the convolutional neural network.
3. The gaze interaction method suitable for touch control according to claim 1, characterized in that step (2) is implemented as follows:
A. for data selection, the usable samples are divided in a certain proportion into a training set and a validation set, used respectively for training the convolutional neural network and for verifying the classification accuracy;
B. the structure of the convolutional neural network is set according to the actual data size as follows:
(a) convolutional layers of the convolutional neural network: the network extracts deep features of the image through the convolutional layers; a convolution kernel is selected according to the feature-map size and edge padding is applied to the image; the value of the j-th feature map of the i-th convolutional layer at position (x, y) is:
f_{ij}^{xy} = \mathrm{relu}\Big(b_{ij} + \sum_{n}\sum_{p=0}^{p_i-1}\sum_{q=0}^{q_i-1} w_{ijn}^{pq}\, f_{(i-1)n}^{(x+p)(y+q)}\Big)
where relu(·) is the rectified linear unit, defined by g(x) = max(0, x); b_{ij} is the bias of the j-th feature map of the i-th layer; n ranges over the set of feature maps of the previous layer connected to the current feature map; p_i and q_i are respectively the height and width of the convolution kernel of the i-th layer; and w_{ijn}^{pq} is the value at position (p, q) of the kernel connected to the n-th feature map of the previous layer;
(b) sampling layers of the convolutional neural network: the network performs dimensionality reduction on the image through the sampling layers; the j-th feature map of the i-th sampling layer is expressed as
f_{ij} = f\big(\beta_{ij}\,\mathrm{down}(f_{(i-1)j}) + b_{ij}\big)
where β_{ij} and b_{ij} are respectively the multiplicative bias and additive bias of the j-th feature map of the i-th layer, and down(·) is the down-sampling function, here max pooling; an LRN layer is connected after the down-sampling layer; the LRN layer imitates the lateral-inhibition mechanism of the biological nervous system, creating competition among the activities of local neurons so that larger responses become relatively larger still, which improves the generalization ability of the model;
(c) output layer of the convolutional neural network: the network realizes full connection through inner-product layers and finally outputs the class number.
CN201710093165.1A 2017-02-21 2017-02-21 Gaze interaction method suitable for touch control Pending CN106909220A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710093165.1A CN106909220A (en) 2017-02-21 2017-02-21 Gaze interaction method suitable for touch control

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710093165.1A CN106909220A (en) 2017-02-21 2017-02-21 Gaze interaction method suitable for touch control

Publications (1)

Publication Number Publication Date
CN106909220A true CN106909220A (en) 2017-06-30

Family

ID=59208488

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710093165.1A Pending CN106909220A (en) Gaze interaction method suitable for touch control

Country Status (1)

Country Link
CN (1) CN106909220A (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005202675A (en) * 2004-01-15 2005-07-28 Canon Inc Image processor, image processing method, program, storage medium, and image processing system
CN101947113A (en) * 2005-12-28 2011-01-19 奥林巴斯医疗株式会社 Image processing method in image processing apparatus and this image processing apparatus

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Anjith George et al.: "Real-time Eye Gaze Direction Classification Using Convolutional Neural Network", International Conference on Signal Processing and Communications (SPCOM), IEEE *
Duan Jian et al.: "Research on deep convolutional neural networks for Caltech-101 image classification" (in Chinese), Computer Applications and Software *
Hu Fangqin: "Screen region-of-interest tracking based on gaze detection" (in Chinese), China Master's Theses Full-text Database, Information Science and Technology *

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11132543B2 (en) 2016-12-28 2021-09-28 Nvidia Corporation Unconstrained appearance-based gaze estimation
CN107392120B (en) * 2017-07-06 2020-04-14 电子科技大学 Attention intelligent supervision method based on sight line estimation
CN107392120A (en) * 2017-07-06 2017-11-24 电子科技大学 A kind of notice intelligence direct method based on sight estimation
CN107545302A (en) * 2017-08-02 2018-01-05 北京航空航天大学 A kind of united direction of visual lines computational methods of human eye right and left eyes image
CN107545302B (en) * 2017-08-02 2020-07-07 北京航空航天大学 Eye direction calculation method for combination of left eye image and right eye image of human eye
CN107563212A (en) * 2017-09-07 2018-01-09 维沃移动通信有限公司 A kind of information processing method, mobile terminal and computer-readable recording medium
CN109976590B (en) * 2017-12-27 2022-04-01 上海品奇数码科技有限公司 Camera-based touch detection method
CN109976590A (en) * 2017-12-27 2019-07-05 上海品奇数码科技有限公司 A kind of touch control detecting method based on camera
CN108427722A (en) * 2018-02-09 2018-08-21 卫盈联信息技术(深圳)有限公司 intelligent interactive method, electronic device and storage medium
CN108462868A (en) * 2018-02-12 2018-08-28 叠境数字科技(上海)有限公司 The prediction technique of user's fixation point in 360 degree of panorama VR videos
CN108446605A (en) * 2018-03-01 2018-08-24 南京邮电大学 Double interbehavior recognition methods under complex background
CN108491823A (en) * 2018-03-30 2018-09-04 百度在线网络技术(北京)有限公司 Method and apparatus for generating eye recognition model
CN108491823B (en) * 2018-03-30 2021-12-24 百度在线网络技术(北京)有限公司 Method and device for generating human eye recognition model
CN108595011A (en) * 2018-05-03 2018-09-28 北京京东金融科技控股有限公司 Information displaying method, device, storage medium and electronic equipment
CN108919982B (en) * 2018-06-14 2020-10-20 北京理工大学 Automatic keyboard-mouse switching method based on face orientation recognition
CN108919982A (en) * 2018-06-14 2018-11-30 北京理工大学 A kind of automatic key mouse switching method based on facial orientation identification
CN111291607B (en) * 2018-12-06 2021-01-22 广州汽车集团股份有限公司 Driver distraction detection method, driver distraction detection device, computer equipment and storage medium
CN111291607A (en) * 2018-12-06 2020-06-16 广州汽车集团股份有限公司 Driver distraction detection method, driver distraction detection device, computer equipment and storage medium
CN110046546B (en) * 2019-03-05 2021-06-15 成都旷视金智科技有限公司 Adaptive sight tracking method, device and system and storage medium
CN110046546A (en) * 2019-03-05 2019-07-23 成都旷视金智科技有限公司 A kind of adaptive line of sight method for tracing, device, system and storage medium
CN110147163A (en) * 2019-05-20 2019-08-20 浙江工业大学 The eye-tracking method and system of the multi-model fusion driving of facing mobile apparatus
CN110147163B (en) * 2019-05-20 2022-06-21 浙江工业大学 Eye movement tracking method and system driven by multi-model fusion for mobile equipment
WO2022160933A1 (en) * 2021-01-26 2022-08-04 Huawei Technologies Co.,Ltd. Systems and methods for gaze prediction on touch-enabled devices using touch interactions
US11474598B2 (en) 2021-01-26 2022-10-18 Huawei Technologies Co., Ltd. Systems and methods for gaze prediction on touch-enabled devices using touch interactions
CN113419623A (en) * 2021-05-27 2021-09-21 中国人民解放军军事科学院国防科技创新研究院 Non-calibration eye movement interaction method and device

Similar Documents

Publication Publication Date Title
CN106909220A (en) Gaze interaction method suitable for touch control
CN104850825B (en) A kind of facial image face value calculating method based on convolutional neural networks
Liao et al. Deep facial spatiotemporal network for engagement prediction in online learning
CN102520796B (en) Sight tracking method based on stepwise regression analysis mapping model
CN104063719B (en) Pedestrian detection method and device based on depth convolutional network
CN104463100B (en) Intelligent wheel chair man-machine interactive system and method based on human facial expression recognition pattern
US9239956B2 (en) Method and apparatus for coding of eye and eye movement data
CN109635727A (en) A kind of facial expression recognizing method and device
Weidenbacher et al. A comprehensive head pose and gaze database
CN109325408A (en) A kind of gesture judging method and storage medium
Zamani et al. Saliency based alphabet and numbers of American sign language recognition using linear feature extraction
CN107392151A (en) Face image various dimensions emotion judgement system and method based on neutral net
Balasuriya et al. Learning platform for visually impaired children through artificial intelligence and computer vision
Vishwakarma et al. Simple and intelligent system to recognize the expression of speech-disabled person
CN109376621A (en) A kind of sample data generation method, device and robot
CN106408579A (en) Video based clenched finger tip tracking method
CN105912126A (en) Method for adaptively adjusting gain, mapped to interface, of gesture movement
CN114821753B (en) Eye movement interaction system based on visual image information
Wu et al. Appearance-based gaze block estimation via CNN classification
CN102184016A (en) Noncontact type mouse control method based on video sequence recognition
Faria et al. Interface framework to drive an intelligent wheelchair using facial expressions
CN114550270A (en) Micro-expression identification method based on double-attention machine system
Agrawal et al. A Tutor for the hearing impaired (developed using Automatic Gesture Recognition)
CN108108648A (en) A kind of new gesture recognition system device and method
CN108108715A (en) It is inspired based on biology and depth attribute learns the face aesthetic feeling Forecasting Methodology being combined

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170630