CN101813976A - Sighting tracking man-computer interaction method and device based on SOC (System On Chip) - Google Patents


Info

Publication number: CN101813976A
Authority: CN (China)
Application number: CN 201010123009
Other languages: Chinese (zh)
Inventors: 秦华标, 陈荣华
Original Assignee: 华南理工大学
Priority / filing date: 2010-03-09
Publication date: 2010-08-25
Prior art keywords: eye, window, soc, computer, human

Abstract

The invention discloses an SOC (System-on-Chip)-based gaze-tracking human-computer interaction method and device. The method comprises the following steps: a digital image captured by a camera is input to an SOC platform; a hardware logic module implements the Adaboost detection algorithm based on haar features and detects the eye region in the digital image; based on the detected eye region, a gaze-direction discrimination algorithm determines the user's gaze, and the gaze direction is converted into mouse control signals transmitted to a computer, realizing human-computer interaction. The device comprises the SOC platform, a camera for capturing eye images, a computer, and four LEDs mounted at the four corners of the computer screen and arranged in a rectangle; the SOC platform comprises an eye-region detection hardware logic module, a processor, and memory. In the invention, the eye-region detection and the gaze-direction discrimination are realized through hardware, finally achieving human-computer interaction; the invention is therefore convenient to use and highly accurate.

Description

SOC-Based Gaze-Tracking Human-Computer Interaction Method and Device

Technical Field

[0001] The present invention relates to SOC (system-on-chip) design technology; the gaze-tracking algorithm belongs to the technical fields of image processing and pattern recognition. Specifically, the invention is an SOC-based gaze-tracking human-computer interaction method and device.

Background Art

[0002] Eye gaze plays an important role in human-computer interaction, offering advantages such as directness, naturalness, and bidirectionality. Gaze-tracking technology is still in its infancy and has not reached the practical stage; successful practical systems are rare, expensive, and demand high-end hardware. Gaze-tracking techniques generally fall into two categories, contact and non-contact. Contact methods achieve high accuracy, but the user must wear special apparatus, which causes considerable discomfort. Non-contact methods usually rely on video image processing, determining the gaze direction by analyzing images of the eye region; they do not disturb the user and are more convenient to use.

[0003] Current research on gaze-tracking human-computer interaction devices mainly targets computer platforms or high-performance embedded processors, with the algorithms running purely in software. Because these algorithms are computationally complex and consume substantial system resources, such systems leave little capacity for the user to perform other demanding operations on the computer. Given the limitations of purely software-based gaze tracking on an interaction device, the parallelism and pipelining of hardware logic can be exploited to implement the computation-heavy part of the gaze-tracking algorithm in hardware, greatly improving execution efficiency. A search of the prior-art literature found no report of an SOC-based gaze-tracking human-computer interaction method or device.

Summary of the Invention

[0004] The present invention overcomes the deficiencies of existing gaze-tracking technology by providing an SOC-based gaze-tracking human-computer interaction method and device. Through a sensible hardware/software partition, the invention is implemented on an SOC platform: the part of highest complexity, the eye-region detection, is implemented in hardware, greatly improving the execution efficiency of the algorithm. The invention is realized by the following technical solutions:

[0005] An SOC-based gaze-tracking human-computer interaction method, comprising the following steps:

[0006] (1) A camera inputs the captured digital image to the SOC platform; a hardware logic module implements the Adaboost detection algorithm based on haar features and detects the eye region in the digital image;

[0007] (2) Based on the detected eye region, a gaze-direction discrimination algorithm determines the user's gaze; the gaze direction is then converted into mouse control signals transmitted to a computer over USB, realizing human-computer interaction.

[0008] In the above human-computer interaction method, the hardware logic module comprises the following modules:

[0009] an integral module, which computes the integral image and the squared integral image of the digital image and stores the results in memory; a sub-window scanning module, which traverses the abscissa and ordinate of the sub-windows of the whole image frame by a set step size, yielding the coordinates and dimensions of each candidate sub-window;

[0010] a sub-window processing module, which decides whether a candidate sub-window is an eye sub-window;

[0011] a sub-window fusion module, which fuses all sub-windows judged to be eyes, i.e., merges windows at similar positions, then re-adjusts the eye window position and determines the eye region.

[0012] In the above human-computer interaction method, the sub-window processing module uses a right-eye classifier trained by Modesto Castrillón and applies the Cascade method to process sub-windows. The specific steps include: first extract the haar feature parameters of the right-eye classifier; instantiate the haar feature parameters, i.e., match them to the size of the scanned sub-window; then read the integral data computed by the integral module according to the positions of the rectangular regions in the instantiated haar feature parameters; finally, apply the Cascade method to determine the eye sub-windows.

[0013] In the above human-computer interaction method, determining eye sub-windows with the Cascade method means combining multiple weighted right-eye weak classifiers into a right-eye strong classifier and connecting 20 stages of right-eye strong classifiers in series to determine the eye sub-windows. The specific steps include: first, from the haar feature parameters of each right-eye weak classifier and the integral data read from the integral module, compute the actual haar feature value of the sub-window; compare it with the threshold of the current right-eye weak classifier to determine that weak classifier's weight; then accumulate the weights of these weak classifiers and compare the sum with the threshold of the right-eye strong classifier. If the sum exceeds the threshold, the sub-window passes the verification of this stage of the strong classifier and proceeds to the discrimination of the next stage; if it is below the threshold, the sub-window is judged a non-eye region. A sub-window that passes the verification of all 20 stages of right-eye strong classifiers is determined to be an eye sub-window.

[0014] In the above human-computer interaction method, the gaze-direction discrimination algorithm computes the gaze direction from the geometric relationship between the pupil center and the four reflection highlights, i.e., the Purkinje spots, formed on the cornea by four infrared LED light sources located at the four corners of the computer screen.

[0015] In the above human-computer interaction method, the specific steps of the gaze-direction discrimination algorithm include: first locate the pupil center by the gray-level projection method; then, within a region extending 30 pixels up, down, left, and right of the pupil center, search for the Purkinje spots; compute the relationship between the pupil center and the four reflection highlights to determine the gaze direction.

[0016] An SOC-based human-computer interaction device implementing the above human-computer interaction method comprises an SOC platform, a camera for capturing eye images, a computer, and four LEDs mounted at the four corners of the computer display and arranged in a rectangle; the SOC platform comprises an eye-region detection hardware logic module, a processor, and memory. The camera inputs the captured digital image into the memory on the SOC platform; the computer is connected to the SOC platform via USB. The eye-region detection hardware logic module performs the detection of the eye region; based on the detected eye region, the processor applies the gaze-direction discrimination algorithm to identify the user's gaze direction and converts the control signal corresponding to that direction into mouse control signals transmitted to the computer over USB.

[0017] In the above device, the computer display is divided into four regions by its two diagonals. According to the region the user's eyes fixate, the processor of the SOC platform classifies the gaze into four directions, up, down, left, and right, simulating the mouse-movement function as control information input by the user; a blink action is used to confirm the gaze control information, simulating a mouse click and entering the user's confirmation.

[0018] In the above device, the blink action is a blink of 1 to 3 seconds used to issue the confirmation command information.

[0019] The present invention applies SOC-based gaze-tracking technology to a human-computer interaction device, filling a gap in this area in China. A camera connected to the SOC platform tracks the user's gaze; since the two diagonals of the screen divide it into four regions, the four gaze directions toward these regions serve as four kinds of control information issued by the user, while a 1-3-second eye closure, the blink action, issues the confirmation command. The platform is then connected to a computer to realize the basic operating functions of a mouse.

[0020] Compared with the prior art, the advantages and positive effects of the present invention are:

[0021] 1. For the computationally complex gaze-tracking algorithm, a sensible hardware/software partition on the SOC platform makes full use of the parallelism and pipelining of hardware logic: the computation-heavy module of the whole system is implemented as a hardware logic module (the eye-region detection IP core), greatly improving the execution efficiency of the algorithm and overcoming the drawbacks of purely software implementations, which consume many system resources and run inefficiently.

[0022] 2. The gaze-tracking technology of the present invention realizes a human-computer interaction device with a small resource footprint and high real-time performance. Its specific functions include:

[0023] basic computer operations can be performed by gaze, such as opening web pages and paging up and down through an e-book;

[0024] it can serve as a human-computer interaction device for virtual reality: in a virtual-reality environment, scene information corresponding to the user's current fixation is presented, so the user sees different scenes when looking in different directions, achieving an immersive effect. Interaction between the person and the computer converges with interaction in the real world, becoming simpler, more natural, and more efficient.

Brief Description of the Drawings

[0025] Figure 1 is a block diagram of the SOC-based gaze-tracking human-computer interaction device in an embodiment of the present invention.

[0026] Figure 2 is a schematic of the arrangement of the display, infrared light sources, and camera in an embodiment of the present invention.

[0027] Figure 3 is a schematic flow diagram of the gaze-tracking method in an embodiment of the present invention.

[0028] Figure 4 is a schematic flow diagram of the eye-region detection IP core in an embodiment of the present invention.

[0029] Figure 5 is a schematic of the sub-window processing pipeline in an embodiment of the present invention.

[0030] Figures 6a to 6c are schematics of the three kinds of haar features in an embodiment of the present invention.

[0031] Figure 7 is a schematic of how a haar feature value is computed in an embodiment of the present invention.

[0032] Figure 8 is a schematic of the intersection of the diagonals of the rectangle formed by the Purkinje spots in an embodiment of the present invention.

Detailed Description

[0033] Specific embodiments of the present invention are further described below with reference to the accompanying drawings.

[0034] As shown in Figures 1 and 2, the SOC-based gaze-tracking human-computer interaction device comprises an SOC platform, a camera for capturing eye images, a computer, and four LEDs (infrared light sources) mounted at the four corners of the computer display and arranged in a rectangle. The SOC platform comprises an eye-region detection hardware logic module (the eye-region detection IP core), a processor, and memory. The camera inputs the captured digital image into the memory on the SOC platform; the computer is connected to the SOC platform via USB. The eye-region detection hardware logic module performs the detection of the eye region; based on the detected eye region, the processor applies the gaze-direction discrimination algorithm to identify the user's gaze direction, then converts the corresponding control signal into mouse control signals transmitted to the computer over USB.

[0035] In this embodiment, the infrared light sources are four LED lamps mounted at the four corners of the display, and the camera is located directly below the center of the screen; the digital images captured by the camera are input to the gaze-tracking module. The infrared light sources form reflection highlights, the Purkinje spots, on the corneal surface, and the Purkinje spots serve as reference points for computing the eye's gaze direction. The placement of the infrared sources and the camera is shown in Figure 1: four infrared LED lamps are mounted at the four corners of the screen, and the camera is placed directly below the screen center. The camera is an ordinary 640x480-pixel camera; to increase its sensitivity to the infrared sources, its lens is replaced with one more sensitive to infrared, and, to avoid interference from external natural light, a filter is added in front of the lens. The LED lamps on the screen correspond to the reflection highlights in the image, and the gaze corresponds to the pupil-center position. The four reflection highlights form a rectangle whose two diagonals divide it into four regions; the region in which the pupil center lies indicates the direction of the eye's gaze.

[0036] In one embodiment of the present invention, as shown in Figure 3, a user image is first captured by the camera; the eye-region detection IP core then checks whether an eye is present in the image to determine whether a user is currently using the system, and subsequent processing takes place only after an eye has been detected. Once an eye is detected, a blink-state discrimination algorithm is applied: if the eyes are judged closed, a mouse-click signal is sent to the computer over the USB cable; if the eyes are judged open, the gaze direction is discriminated and the gaze-direction information is then sent to the computer over the USB cable. A rough sketch of this control flow is given below.
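As a rough illustration of this control flow only (not the patent's implementation, which runs on the SOC hardware), here is a minimal Python sketch; detect_eye, is_blink, gaze_direction, and send_usb are hypothetical placeholders standing in for the IP core, the blink discriminator, the gaze discriminator, and the USB interface described in this document:

```python
import cv2  # assumed OpenCV capture; the patent uses a dedicated SOC camera input


def interaction_loop(detect_eye, is_blink, gaze_direction, send_usb):
    """Main loop mirroring Figure 3: detect eye -> blink? click : gaze direction."""
    cap = cv2.VideoCapture(0)
    prev_frame = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        eye = detect_eye(gray)              # eye-region detection (IP core in hardware)
        if eye is None:
            prev_frame = gray
            continue                        # no user present: skip further processing
        if is_blink(gray, prev_frame, eye):
            send_usb("click")               # closed eyes -> mouse click over USB
        else:
            send_usb(gaze_direction(gray, eye))  # open eyes -> direction as mouse move
        prev_frame = gray
```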

[0037] In this embodiment, the internal structure of the eye-region detection IP core is shown in Figure 4. The Adaboost eye-detection algorithm based on haar features decides whether an eye is present in the image. The specific implementation steps are as follows:

[0038] Step 1: image integration. The image integral module computes the integral image and the squared integral image of the digital image and stores the results in SRAM;

[0039] Step 2: sub-window scanning. The sub-window scanning module traverses the sub-windows of the whole image frame.

[0040] Step 3: sub-window processing. The sub-window processing module decides whether a sub-window is an eye window.

[0041] Step 4: sub-window fusion. The sub-window fusion module integrates all detected sub-windows, removing adjacent eye sub-windows and yielding the eye position.

[0042] The specific implementation of Step 1 is: the image pixel data are stored in memory on the SOC platform (off-chip SRAM), and a register holds the running accumulation of pixel gray values along the current row. To speed up the computation, a RAM is created inside the SOC chip to hold the integral data of the previous row, reducing accesses to the off-chip SRAM. Each time the integral data of a coordinate point have been computed, they are written to the external SRAM, and the corresponding integral data in the on-chip RAM are overwritten for the next computation.
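The hardware keeps only a row accumulator and the previous row of integral data; a minimal software sketch of the same recurrence (Python with NumPy, which stands in for the hardware logic and is not part of the patent) is:

```python
import numpy as np


def integral_images(img):
    """Compute the integral and squared-integral images.

    ii[y, x] holds the sum of img over the rectangle [0, y) x [0, x),
    so ii carries an extra row/column of zeros; this simplifies the
    four-corner rectangle-sum lookups used later by the haar features.
    """
    img = img.astype(np.int64)
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    sq = np.zeros_like(ii)
    for y in range(img.shape[0]):
        row = 0       # running row sum, like the hardware register
        row_sq = 0
        for x in range(img.shape[1]):
            row += img[y, x]
            row_sq += img[y, x] ** 2
            # only the previous row of ii/sq is consulted, as in the on-chip RAM
            ii[y + 1, x + 1] = ii[y, x + 1] + row
            sq[y + 1, x + 1] = sq[y, x + 1] + row_sq
    return ii, sq
```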

[0043] The specific implementation of Step 2 is: a state machine traverses the sub-windows. First, the abscissa and ordinate of the sub-windows of the whole image frame are traversed by the set step size; the window is then multiplied by a scale factor and the horizontal and vertical traversal is repeated, yielding the coordinates and dimensions of each candidate sub-window.
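In software terms, the state machine amounts to a nested traversal like the following generator; the base window size, step, and scale factor are illustrative values, since the patent does not fix them here:

```python
def scan_subwindows(width, height, base=24, step=2, scale=1.25):
    """Enumerate (x, y, w, h) candidate sub-windows over one image frame."""
    w = h = base
    while w <= width and h <= height:
        y = 0
        while y + h <= height:
            x = 0
            while x + w <= width:
                yield x, y, w, h
                x += step       # traverse the abscissa by the set step
            y += step           # traverse the ordinate by the set step
        w = int(w * scale)      # enlarge the window by the scale factor,
        h = int(h * scale)      # then traverse horizontally and vertically again
```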

[0044] The specific implementation of Step 3 is: the right-eye classifier features trained by Modesto Castrillón are used, and the Cascade method implements the sub-window processing. The specific steps include: first extract the haar feature parameters of the right-eye classifier; instantiate the haar feature parameters, i.e., match them to the size of the scanned sub-window; then read the integral data computed by the integral module according to the positions of the rectangular regions in the haar feature parameters; finally, apply the Cascade method to determine the eye sub-windows. To speed up processing, the right-eye classifier data are stored in a ROM created inside the SOC chip: the classifier data are read from the .xml file shipped with OpenCV 1.0 and saved in the .mif file format used to initialize the ROM. As shown in Figure 5, the parallel computing capability of the hardware logic is exploited by designing the evaluation of each classifier as a processing pipeline.

[0045] Determining eye sub-windows with the Cascade method in Step 3 means combining multiple weighted right-eye weak classifiers into a right-eye strong classifier and connecting 20 stages of right-eye strong classifiers in series to complete the eye-region detection. In the present invention the stages of right-eye strong classifiers consist of 10, 10, 16, 20, 16, 20, 24, 30, 34, 38, 38, 42, 44, 48, 48, 56, 52, 58, 68, and 64 right-eye weak classifiers respectively. The specific steps include: first, from the haar feature parameters of each weak classifier and the integral data read from the integral module, compute the actual haar feature value of the sub-window; compare it with the threshold of the current right-eye weak classifier to determine that weak classifier's weight; then accumulate the weights of these weak classifiers and compare the sum with the threshold of the right-eye strong classifier. If the sum exceeds the threshold, the sub-window passes this stage of the strong classifier and proceeds to the discrimination of the next stage; otherwise the sub-window is judged a non-eye region. A sub-window that passes the verification of all 20 stages of right-eye strong classifiers is determined to be an eye sub-window.
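A minimal software rendering of this cascade evaluation, assuming a two-branch weight per weak classifier as described above (the WeakClassifier/Stage containers are illustrative; the real parameters would come from the OpenCV .xml data mentioned in [0044]):

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class WeakClassifier:
    feature: Callable[[int, int, int, int], float]  # haar value for a sub-window
    threshold: float
    weight_below: float   # weight contributed if feature < threshold
    weight_above: float   # weight contributed otherwise


@dataclass
class Stage:
    weak: List[WeakClassifier]
    threshold: float      # strong-classifier threshold


def is_eye_subwindow(x, y, w, h, stages: List[Stage]) -> bool:
    """Run the 20-stage cascade; reject at the first failing stage."""
    for stage in stages:
        total = 0.0
        for wc in stage.weak:
            value = wc.feature(x, y, w, h)  # computed from the integral image
            total += wc.weight_below if value < wc.threshold else wc.weight_above
        if total <= stage.threshold:
            return False          # below the strong threshold: non-eye region
    return True                   # passed all stages: eye sub-window
```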

[0046] The haar features, also called rectangle features, are sensitive to simple graphical structures such as edges and line segments and can describe structures with a particular orientation (horizontal, vertical, center). As shown in Figure 6, these features characterize local haar features of the image: the two rectangle features in Figure 6a characterize upper/lower and left/right boundary features respectively, the rectangle feature in Figure 6b characterizes a thin-line feature, and the rectangle feature in Figure 6c characterizes a diagonal feature. Some characteristics of the eye can be described simply by rectangle features: for example, the eyebrows are darker than the eyelids, presenting an upper/lower boundary feature, and the eye margins are darker than the eyelids, presenting a left/right boundary feature.

[0047] As shown in Figure 6, a haar feature value is computed as the sum of all pixels in the white rectangular region minus the sum of all pixels in the gray rectangular region. The concrete computation is: according to the exact position of the rectangular region in the haar feature parameters, fetch the integral data at the four corner points of the region, from which the sum of all pixels in the required rectangle can be computed. Figure 7 illustrates this: the rectangle in the figure is the image under consideration, and A, B, C, D are rectangular regions within it. The integral-image element values are computed as follows: the integral value at point 1 is the sum of the pixel values of all pixels in rectangle A; the integral value at point 2 is A+C; the integral value at point 3 is A+B; and the integral value at point 4 is A+B+C+D. The sum of all pixel values in D can therefore be computed as 4+1-(2+3). Thus the sum of the values of all pixels in any rectangle of the image can be computed from four such rectangles, i.e., from the integral data at four points. Finally, the pixel sums of the rectangular regions referenced by the haar features of the right-eye classifier are subtracted to yield the haar feature value.
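The four-point lookup of Figure 7 is the standard integral-image identity; a sketch using the one-pixel-padded integral image ii from the earlier example:

```python
def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle [x, x+w) x [y, y+h),
    via the 4+1-(2+3) corner identity of Figure 7."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]


def haar_value(ii, white_rects, gray_rects):
    """haar feature value: white-region pixel sum minus gray-region pixel sum."""
    white = sum(rect_sum(ii, *r) for r in white_rects)
    gray = sum(rect_sum(ii, *r) for r in gray_rects)
    return white - gray
```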

[0048] The specific implementation of Step 4 is: all sub-windows judged to be eyes are fused to locate the eye position. Since Step 3 determines more than one eye sub-window, and the eye sub-windows cross and contain one another, they can be merged according to conditions on their position and size, reducing duplicated windows: windows at similar positions are integrated, and the eye window position is then re-adjusted.
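A greedy merge along these lines might look as follows; the similarity tolerance is an assumed parameter, since the patent does not give explicit merge conditions:

```python
def fuse_windows(windows, pos_tol=0.3):
    """Greedily group (x, y, w, h) windows with similar position and size,
    replacing each group by its average window."""
    groups = []
    for win in windows:
        x, y, w, h = win
        for g in groups:
            gx, gy, gw, gh = [sum(v) / len(g) for v in zip(*g)]  # group average
            if (abs(x - gx) < pos_tol * gw and abs(y - gy) < pos_tol * gh
                    and abs(w - gw) < pos_tol * gw):
                g.append(win)       # similar position and size: merge
                break
        else:
            groups.append([win])    # no similar group: start a new one
    return [tuple(int(sum(v) / len(g)) for v in zip(*g)) for g in groups]
```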

[0049] In this embodiment, the blink state is discriminated by counting the number of black pixels in the binarized eye region of each frame and comparing it with the previous frame; the relationship between the black-pixel counts of successive frames reveals whether the eye has gone from open to closed. The specific procedure is as follows, with a code transcription after the list.

[0050] Let the i-th frame be Fi and the eye region be Di.

[0051] 1. In Fi, count the number Ci of pixels in region Di whose gray value is below 150;

[0052] 2. In Fi+1, count the number Ci' of pixels in region Di whose gray value is below 150;

[0053] 3. If Ci/Ci' > 0.9, a possible eye-closure event is flagged;

[0054] 4. If, after a possible eye-closure event, no eye is detected for several consecutive frames, the eyes are determined to be closed.
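A direct transcription of these four steps (the gray threshold 150 and ratio 0.9 come from the text; the number of confirming frames in step 4 is an assumed value, since the patent only says "several"):

```python
def count_dark(frame, eye_box, thresh=150):
    """Steps 1/2: number of pixels in the eye region darker than the threshold."""
    x, y, w, h = eye_box
    region = frame[y:y + h, x:x + w]
    return int((region < thresh).sum())


class BlinkDetector:
    def __init__(self, confirm_frames=5):   # confirm_frames is an assumption
        self.candidate = False
        self.misses = 0
        self.confirm_frames = confirm_frames

    def update(self, ci, ci_next, eye_detected):
        """Feed Ci (frame i), Ci' (frame i+1); returns True once a blink is confirmed."""
        if ci_next > 0 and ci / ci_next > 0.9:
            self.candidate = True            # step 3: possible eye closure
        if self.candidate and not eye_detected:
            self.misses += 1                 # step 4: eye undetected for a while
            if self.misses >= self.confirm_frames:
                self.candidate, self.misses = False, 0
                return True
        elif eye_detected:
            self.candidate, self.misses = False, 0
        return False
```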

[0055] In this embodiment, the gaze direction is determined by detecting the positions of the pupil center and the Purkinje spots and then computing their positional relationship geometrically, thereby discriminating the gaze direction. The specific steps are as follows:

[0056] Step 1: locate the pupil center, and use the relationship between the pupil-center position and the Purkinje spots, together with the geometric characteristics of the Purkinje spots, to search out the Purkinje spots.

[0057] Step 2: compute the positional relationship between the pupil center and the Purkinje spots geometrically, thereby discriminating the gaze direction;

[0058] The specific implementation of Step 1 is:

[0059] Within the eye region, the Purkinje spots have the following geometric characteristics:

[0060] 1. They lie around the pupil, within 30 pixels of the pupil center;

[0061] 2. Their size ranges from 5 to 20 pixels, with gray values above 100;

[0062] 3. In the eye region, the gray value at each highlight has a local maximum, and under ideal conditions the gray-level discontinuity is greatest at the four Purkinje spots;

[0063] 4. The distances between the four Purkinje spots lie in the range of 8 to 18 pixels, and the spots approximately form a rectangle;

[0064] The Purkinje spots (i.e., the highlights) are therefore found by the following steps:

[0065] 1. Locate the pupil center by horizontal and vertical gray-level projection, and take the region extending 30 pixels up, down, left, and right of that center as the search area;

[0066] 2. Search the area for gray-level extremum points, i.e., find the point set G whose gray values satisfy the following condition:

[0067] g(x0, y0) ≥ max{g(x, y)}, g(x0, y0) > 100

[0068] where g(x0, y0) is the gray value at point (x0, y0), and the maximum is taken over the points (x, y) neighboring (x0, y0).

[0069] 3. Convolve each point of the point set G with its surrounding points using the Laplace operator shown below, obtaining the differential value f at each point g.

[0070] (5x5 Laplace operator matrix; given on page 9 of the original document and not reproduced in this text.)

[0071] Since the Laplace operator is an isotropic differential operator, its effect is to emphasize regions of gray-level discontinuity in the image: the larger the value of the Laplace convolution, the greater the gray-level discontinuity at that point. Using a 5x5 Laplace operator further suppresses noise interference;

[0072] 4. Sort the point set G by differential value f and select the four points P0-P3 with the largest f as candidate points;

[0073] 5. Verify P0-P3: if they form a rectangle, P0-P3 are determined to be the four Purkinje spots; otherwise the current frame image is discarded.
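A sketch of steps 1-5 under stated assumptions: SciPy's convolve stands in for the hardware convolution, the 5x5 Laplacian-style kernel below is a common textbook choice (the patent's exact matrix is not reproduced here), and the local-maximum and rectangle checks are simplified for brevity:

```python
import numpy as np
from scipy.ndimage import convolve

# A common 5x5 Laplacian-style kernel; the patent's exact matrix is not shown here.
LAPLACE_5X5 = np.array([[-1, -1, -1, -1, -1],
                        [-1, -1, -1, -1, -1],
                        [-1, -1, 24, -1, -1],
                        [-1, -1, -1, -1, -1],
                        [-1, -1, -1, -1, -1]], dtype=np.int64)


def find_purkinje(gray, pupil_xy, radius=30, gray_min=100):
    px, py = pupil_xy
    # step 1: search area extending +/-30 pixels around the pupil center
    x0, y0 = max(px - radius, 0), max(py - radius, 0)
    area = gray[y0:py + radius, x0:px + radius].astype(np.int64)
    f = convolve(area, LAPLACE_5X5, mode="constant")          # step 3
    ys, xs = np.where(area > gray_min)                        # step 2 (simplified):
    cand = sorted(zip(ys, xs), key=lambda p: f[p], reverse=True)[:4]  # step 4
    if len(cand) < 4:
        return None                  # step 5 fails; rectangle check omitted here
    return [(x0 + x, y0 + y) for y, x in cand]   # spots in full-image coordinates
```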

[0074] The specific implementation of Step 2 is:

[0075] By physical and geometric reasoning, a one-to-one correspondence between the screen and the captured image can be established: the LEDs at the four corners of the screen correspond to the four highlights in the eye image, and the gaze direction corresponds to the pupil center. From this correspondence, the gaze direction can be determined from the captured eye image. As shown in Figure 8, let P0, P1, P2, P3 be the four detected Purkinje spots and Q the pupil center. Using the section (ratio-point) formula, find the coordinates of the intersection O of the diagonals of P0-P3, and connect OQ, OP0, OP1, OP2, OP3. OP0-OP3 divide the rectangle formed by the points P0-P3 into four regions; computing which region OQ lies in yields the gaze direction. The specific method is as follows:

[0076] 1. Referring to Figure 8, find the intersection O of the diagonals of P0-P3 by computational geometry:

[0077] From the formula for the area of a triangle and the definition of the cross product:

[0078] S_P0P1P2 = (1/2)·|(P1 − P0) × (P2 − P0)|,  S_P0P3P2 = (1/2)·|(P3 − P0) × (P2 − P0)|

[0079] where S_P0P1P2 is the area of triangle P0P1P2 and S_P0P3P2 is the area of triangle P0P3P2.

[0080] From the section formula, the x-coordinate of point O can be found as:

[0081] x_O = (x_P1 + λ·x_P3) / (1 + λ),  where λ = S_P0P1P2 / S_P0P3P2

[0082] The y-coordinate of point O is obtained in the same way.

[0083] 2. Connect OQ, OP0, OP1, OP2, OP3; the region in which OQ (and hence the pupil center) lies is then found from the following relations:

[0084] Region 0: OQ falls between OP0 and OP1; the corresponding gaze direction is "up".

[0085] Region 1: OQ falls between OP1 and OP2; the corresponding gaze direction is "right".

[0086] Region 2: OQ falls between OP2 and OP3; the corresponding gaze direction is "down".

[0087] Region 3: OQ falls between OP3 and OP0; the corresponding gaze direction is "left".
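These relations reduce to sector tests around O. A compact sketch combining the area, section-formula, and region steps (the atan2-based sector test is one way of expressing "OQ falls between OPi and OPi+1", not the patent's wording); it could serve as the gaze_direction placeholder in the earlier control-flow sketch:

```python
import math


def triangle_area(p0, p1, p2):
    """Area via the cross product: S = |(p1 - p0) x (p2 - p0)| / 2."""
    return abs((p1[0] - p0[0]) * (p2[1] - p0[1])
               - (p1[1] - p0[1]) * (p2[0] - p0[0])) / 2.0


def gaze_from_spots(p, q):
    """p = [P0, P1, P2, P3], rectangle corners in order; q = pupil center Q."""
    lam = triangle_area(p[0], p[1], p[2]) / triangle_area(p[0], p[3], p[2])
    ox = (p[1][0] + lam * p[3][0]) / (1 + lam)   # section formula for O (x)
    oy = (p[1][1] + lam * p[3][1]) / (1 + lam)   # same formula for O (y)
    ang_q = math.atan2(q[1] - oy, q[0] - ox)
    angs = [math.atan2(c[1] - oy, c[0] - ox) for c in p]
    names = ["up", "right", "down", "left"]      # region i lies between OPi, OP(i+1)
    for i in range(4):
        a, b = angs[i], angs[(i + 1) % 4]
        span = (b - a) % (2 * math.pi)
        if (ang_q - a) % (2 * math.pi) <= span:
            return names[i]
    return names[0]
```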

[0088] In this embodiment, once the user's gaze direction has been determined, it is converted into USB mouse control signals transmitted to the computer through a USB cable, realizing human-computer interaction.
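For illustration only, a standard 3-byte USB HID boot-protocol mouse report could encode the four directions and the blink click as below; the step size and the /dev/hidg0 gadget path are assumptions, as the patent does not specify the report format:

```python
# Boot-protocol mouse report: [buttons, dx, dy] with signed dx/dy bytes.
STEP = 10  # pixels of pointer movement per update; an assumed value

REPORTS = {
    "up":    bytes([0x00, 0x00, (-STEP) & 0xFF]),
    "down":  bytes([0x00, 0x00, STEP]),
    "left":  bytes([0x00, (-STEP) & 0xFF, 0x00]),
    "right": bytes([0x00, STEP, 0x00]),
    "click": bytes([0x01, 0x00, 0x00]),  # left button pressed
}


def send_usb(command, dev_path="/dev/hidg0"):  # hypothetical HID gadget device
    with open(dev_path, "wb") as dev:
        dev.write(REPORTS[command])
        if command == "click":
            dev.write(bytes([0x00, 0x00, 0x00]))  # release the button
```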

Claims (9)

1. An SOC-based gaze-tracking human-computer interaction method, characterized in that the method comprises the following steps: (1) a camera inputs the captured digital image to the SOC platform, and a hardware logic module implements the Adaboost detection algorithm based on haar features to detect the eye region in the digital image; (2) based on the detected eye region, a gaze-direction discrimination algorithm determines the user's gaze, and the gaze direction is converted into mouse control signals transmitted to a computer over USB, realizing human-computer interaction.
2. The human-computer interaction method of claim 1, characterized in that the hardware logic module comprises the following modules: an integral module, which computes the integral image and the squared integral image of the digital image and stores the results in memory; a sub-window scanning module, which traverses the abscissa and ordinate of the sub-windows of the whole image frame by a set step size, yielding the coordinates and dimensions of each candidate sub-window; a sub-window processing module, which decides whether a candidate sub-window is an eye sub-window; a sub-window fusion module, which fuses all sub-windows judged to be eyes, i.e., merges windows at similar positions, then re-adjusts the eye window position and determines the eye region.
3. The method of claim 2, characterized in that the sub-window processing module uses a right-eye classifier trained by Modesto Castrillón and applies the Cascade method to process sub-windows, the specific steps comprising: first extracting the haar feature parameters of the right-eye classifier; instantiating the haar feature parameters, i.e., matching them to the size of the scanned sub-window; then reading the integral data computed by the integral module according to the positions of the rectangular regions in the instantiated haar feature parameters; and finally applying the Cascade method to determine the eye sub-windows.
4. The method of claim 3, characterized in that determining eye sub-windows with the Cascade method means combining multiple weighted right-eye weak classifiers into a right-eye strong classifier and connecting 20 stages of right-eye strong classifiers in series, the specific steps comprising: first, from the haar feature parameters of each right-eye weak classifier and the integral data read from the integral module, computing the actual haar feature value of the sub-window; comparing it with the threshold of the current right-eye weak classifier to determine that weak classifier's weight; then accumulating the weights of these weak classifiers and comparing the sum with the threshold of the right-eye strong classifier; if the sum exceeds the threshold, the sub-window passes the verification of this stage of the strong classifier and proceeds to the discrimination of the next stage; if it is below the threshold, the sub-window is judged a non-eye region; a sub-window that passes the verification of all 20 stages of right-eye strong classifiers is determined to be an eye sub-window.
5. The human-computer interaction method of claim 1, characterized in that the gaze-direction discrimination algorithm computes the gaze direction from the geometric relationship between the pupil center and the four reflection highlights, i.e., the Purkinje spots, formed on the cornea by four infrared LED light sources located at the four corners of the computer screen.
6. The human-computer interaction method of claim 4, characterized in that the specific steps of the gaze-direction discrimination algorithm comprise: first locating the pupil center by the gray-level projection method; then, within a region extending 30 pixels up, down, left, and right of the pupil center, searching for the Purkinje spots; and computing the relationship between the pupil center and the four reflection highlights to determine the gaze direction.
7. An SOC-based human-computer interaction device implementing the human-computer interaction method of any one of claims 1 to 6, characterized in that it comprises an SOC platform, a camera for capturing eye images, a computer, and four LEDs mounted at the four corners of the computer display and arranged in a rectangle; the SOC platform comprises an eye-region detection hardware logic module, a processor, and memory; the camera inputs the captured digital image into the memory on the SOC platform; the computer is connected to the SOC platform via USB; the eye-region detection hardware logic module performs the detection of the eye region, and based on the detected eye region the processor applies the gaze-direction discrimination algorithm to identify the user's gaze direction and converts the control signal corresponding to that direction into mouse control signals transmitted to the computer over USB.
8. The device of claim 7, characterized in that the computer display is divided into four regions by its two diagonals; according to the region the user's eyes fixate, the processor of the SOC platform classifies the gaze into four directions, up, down, left, and right, simulating the mouse-movement function as control information input by the user, and uses a blink action to confirm the gaze control information, simulating a mouse click and entering the user's confirmation.
9. The device of claim 8, characterized in that the blink action is a blink of 1 to 3 seconds used to issue the confirmation command information.
CN 201010123009 2010-03-09 Sighting tracking man-computer interaction method and device based on SOC (System On Chip) CN101813976A (en)

Priority Application (1)

Application Number: CN 201010123009; Priority / Filing Date: 2010-03-09; Title: Sighting tracking man-computer interaction method and device based on SOC (System On Chip)

Publication (1)

Publication Number: CN101813976A; Publication Date: 2010-08-25

Family ID: 42621247

Country Status (1)

CN: CN101813976A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1731418A (en) * 2005-08-19 2006-02-08 清华大学 Method of robust accurate eye positioning in complicated background image
CN101344919A (en) * 2008-08-05 2009-01-14 华南理工大学 Sight tracing method and disabled assisting system using the same
CN101344816A (en) * 2008-08-15 2009-01-14 华南理工大学 Human-machine interaction method and device based on sight tracing and gesture discriminating

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103631365B (en) * 2012-08-22 2016-12-21 中国移动通信集团公司 A kind of terminal input control method and device
WO2014029245A1 (en) * 2012-08-22 2014-02-27 中国移动通信集团公司 Terminal input control method and apparatus
CN103631365A (en) * 2012-08-22 2014-03-12 中国移动通信集团公司 Terminal input control method and device
CN103777351A (en) * 2012-10-26 2014-05-07 鸿富锦精密工业(深圳)有限公司 Multimedia glasses
WO2014075418A1 (en) * 2012-11-13 2014-05-22 华为技术有限公司 Man-machine interaction method and device
US9740281B2 (en) 2012-11-13 2017-08-22 Huawei Technologies Co., Ltd. Human-machine interaction method and apparatus
CN104968270A (en) * 2012-12-11 2015-10-07 阿米·克林 Systems and methods for detecting blink inhibition as a marker of engagement and perceived stimulus salience
CN103247019B (en) * 2013-04-17 2016-02-24 清华大学 For the reconfigurable device based on AdaBoost algorithm of object detection
CN103247019A (en) * 2013-04-17 2013-08-14 清华大学 Reconfigurable device used for detecting object and based on AdaBoost algorithm
CN103475893A (en) * 2013-09-13 2013-12-25 北京智谷睿拓技术服务有限公司 Device and method for picking object in three-dimensional display
CN103559809A (en) * 2013-11-06 2014-02-05 常州文武信息科技有限公司 Computer-based on-site interaction demonstration system
CN103559809B (en) * 2013-11-06 2017-02-08 常州文武信息科技有限公司 Computer-based on-site interaction demonstration system
US20160026242A1 (en) 2014-07-25 2016-01-28 Aaron Burns Gaze-based object placement within a virtual reality environment
US10311638B2 (en) 2014-07-25 2019-06-04 Microsoft Technology Licensing, Llc Anti-trip when immersed in a virtual reality environment
US10416760B2 (en) 2014-07-25 2019-09-17 Microsoft Technology Licensing, Llc Gaze-based object placement within a virtual reality environment
CN106575153A (en) * 2014-07-25 2017-04-19 微软技术许可有限责任公司 Gaze-based object placement within a virtual reality environment
US10451875B2 (en) 2014-07-25 2019-10-22 Microsoft Technology Licensing, Llc Smart transparency for virtual objects
CN105373766B (en) * 2014-08-14 2019-04-23 由田新技股份有限公司 Pupil positioning method and device
CN105373766A (en) * 2014-08-14 2016-03-02 由田新技股份有限公司 Method and apparatus for positioning pupil
CN104253944A (en) * 2014-09-11 2014-12-31 陈飞 Sight connection-based voice command issuing device and method
WO2016176959A1 (en) * 2015-05-04 2016-11-10 惠州Tcl移动通信有限公司 Multi-screen control method and system for display screen based on eyeball tracing technology
CN106528468A (en) * 2016-10-11 2017-03-22 深圳市紫光同创电子有限公司 USB data monitoring apparatus, method and system
CN106990839A (en) * 2017-03-21 2017-07-28 张文庆 A kind of eyeball identification multimedia player and its implementation


Legal Events

Date Code Title Description
C06 Publication
C12 Rejection of an application for a patent