CN100462047C - Safe driving auxiliary device based on omnidirectional computer vision - Google Patents

Safe driving auxiliary device based on omnidirectional computer vision

Info

Publication number
CN100462047C
CN100462047C CN200710067633A
Authority
CN
China
Prior art keywords
eye
driver
driving
point
state
Prior art date
Application number
CN 200710067633
Other languages
Chinese (zh)
Other versions
CN101032405A (en)
Inventor
汤一平
Original Assignee
汤一平
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 汤一平
Priority to CN200710067633A
Publication of CN101032405A
Application granted
Publication of CN100462047C


Abstract

A safe driving auxiliary device based on omnidirectional computer vision comprises an omnidirectional vision sensor for acquiring omnidirectional video information inside and outside the vehicle, and a safe-driving assistance controller for detecting various forms of fatigued driving and issuing an alarm when fatigued driving occurs. The omnidirectional vision sensor is mounted to the right of the driver's seat inside the vehicle, and its output is connected to the safe-driving assistance controller, which detects the state of the driver's face, eyes and mouth, the state of the steering wheel, the vehicle's travel direction and travel speed, and alerts the driver when signs of driving fatigue are detected. The invention evaluates the characteristic parameters of the driving-fatigue state comprehensively, giving high discrimination accuracy and improved measurement accuracy.

Description

Safe driving auxiliary device based on omnidirectional computer vision

(1)

TECHNICAL FIELD

The present invention relates to the application of omnidirectional vision sensor technology, image recognition and understanding technology, and computer control technology to safe vehicle driving, and in particular to a safe driving auxiliary device based on omnidirectional computer vision.

(2)

BACKGROUND

Figure 1 shows a driving model consisting of the road environment, the driver and the vehicle. Throughout the driving task the driver continuously performs perception, judgment and action, while perception fatigue, judgment fatigue and action fatigue are ever-present disturbing factors that affect the driver's normal operation until driving ends. Fatigue arising from any of these, alone or in combination, carries the risk of driving failure.

Traffic information first passes through the driver's perception stage, in which the driver perceives the various kinds of information in the road environment through the sensory organs (the visual organs, auditory organs, taste organs and so on). Under the influence of perception fatigue, mis-perception, weakened perception or failure to perceive may occur; the driver counteracts mis-perception with his anti-fatigue capability, and if this succeeds the information passes through the perception stage and reaches the judgment stage.

Likewise, because the information-processing stage is affected by judgment fatigue, misjudgment may still occur. The organs of judgment are the central nervous system, including the cerebrum and cerebellum. Constrained by the fatigue factors produced by the act of judging, misjudgment, weakened judgment or failure to judge may occur; the driver counteracts misjudgment with his anti-fatigue capability, and if this succeeds the information passes through the judgment stage and reaches the action stage.

Because behavior in the action stage is affected by action fatigue, the driver may also perform erroneous actions. If the erroneous actions are successfully counteracted, the vehicle can be driven; finally, the vehicle's running condition and the road ahead are fed back to the driver.

If the perception, judgment and action driving fatigues described above are each expressed by their fatigue states, the main manifestations fall into the following ten categories: (1) constant yawning and a wooden facial expression; (2) an increasingly heavy head and unconscious repeated nodding (dozing off), with difficulty keeping the head up; (3) relaxed muscles, drooping eyelids or even closed eyes; (4) blurred vision, red and dry eyes; (5) a narrowed field of view, with information constantly missed or misread; (6) sluggish reactions and slow judgment; (7) inability to concentrate and reduced thinking ability; (8) stiff movements and a slow rhythm; (9) loss of the sense of direction, with the vehicle swaying from side to side on the road; (10) arbitrary changes of speed and an unsteady traveling rate.

Chinese invention patent publication No. CN 1851498A discloses a fatigued-driving detection technique; Chinese invention patent publication No. CN 1830389A discloses a fatigued-driving state monitoring device and method; Chinese utility model patent No. 03218647.9 discloses a vehicle running-state recording and alarm analyzer; and Chinese utility model patent No. ZL 200420072961.5 discloses a remote monitoring device for coach driver fatigue and overloaded transport.

Abroad, many researchers have also studied the detection of the driver's fatigue state and developed products. Nikolaos P. Papanikolopoulos at the University of Minnesota developed a driver eye tracking and locating system that monitors the driver's face with a CCD camera placed inside the vehicle. ASCI (Advanced Safety Concepts Inc.) developed a head-position sensor to measure the position of the driver's head; the device is an array of capacitive sensors mounted above the driver's seat, each sensor outputting the distance of the driver's head from it, from which the head position in X, Y, Z space is computed by trigonometry. The head position can be tracked in real time, and from how it changes over successive time intervals it can be judged whether the driver is dozing. The steering-wheel monitoring device SAM (steering attention monitor) developed by the US company Electronic Safety Products is a sensor device that monitors abnormal steering-wheel movement and is suitable for all kinds of vehicles: while the steering wheel moves normally the sensor does not alarm, but if the steering wheel is not operated for 4 s, SAM sounds an alarm until normal steering-wheel movement resumes. The DAS2000 Road Alert System developed by Ellison Research Labs in the USA is a computer-controlled infrared monitoring device installed on highways that warns the driver when the vehicle deviates from the road centerline. In addition, some researchers mount a camera at the front of the vehicle to measure when and by how much the vehicle leaves the white line, and warn the driver accordingly.

Judged from the Chinese patents cited above, they lack an overall analysis and synthesis of the three levels of fatigued driving, for example analyzing the driving-fatigue states and detection methods at those three levels by building a road-environment/driver/vehicle driving model; their means of detecting and acquiring driving fatigue are rather limited, so judgment can only be made from a single kind of symptom, causing problems such as low judgment accuracy;

their functions are rather limited and their monitoring methods still have limitations; and they lack objective measurement indices for judging the degree of physiological fatigue. The foreign research results and products likewise essentially address driver fatigue at only one level; and because the detection means differ, combining these techniques would require a variety of different sensors.

(3)

SUMMARY

To overcome the shortcomings of existing safe-driving assistance devices, namely single detection means, low discrimination accuracy and low measurement accuracy, the present invention provides a safe driving auxiliary device based on omnidirectional computer vision that uses a single omnidirectional vision sensor to detect simultaneously the state of the driver's face, eyes and mouth, the state of the steering wheel, the vehicle's travel direction and travel speed, and that comprehensively evaluates the characteristic parameters of the driving-fatigue state, with high discrimination accuracy and improved measurement accuracy.

The technical solution adopted by the present invention to solve its technical problem is as follows:

A safe driving auxiliary device based on omnidirectional computer vision comprises an omnidirectional vision sensor for acquiring omnidirectional video information inside and outside the vehicle, and a safe-driving assistance controller for detecting various forms of fatigued driving and issuing an alarm when fatigued driving occurs. The omnidirectional vision sensor is mounted to the right of the driver's seat inside the vehicle, and its output is connected to the safe-driving assistance controller. The omnidirectional vision sensor comprises a convex catadioptric mirror for reflecting objects in the fields inside and outside the vehicle, a black cone for preventing light refraction and light saturation, a transparent cylinder, and a camera for capturing the image formed on the convex mirror; the convex catadioptric mirror is located above the transparent cylinder and faces downward, and the black cone is fixed at the center of the convex catadioptric mirror. The safe-driving assistance controller comprises:

a detection-field segmentation module for splitting the omnidirectional video information acquired from the omnidirectional vision sensor into perspective video images of the vehicle-front view, the driver-seat view and the steering-wheel view;

a face localization module for locating the driver's face; human face skin color obeys a two-dimensional Gaussian distribution in the CrCb chrominance space, and the probability density function of this skin-color distribution model is given by equation (4),

where μ = (156.560, 117.436)^T, the two values of this vector being the means of the color components Cr and Cb, and C is the covariance matrix of Cr and Cb, given by equation (5),

where σ_Cr^2 and σ_Cb^2 are the variances of Cr and Cb, and σ_CrCb and σ_CbCr are their covariances; according to the Gaussian skin-color model, the similarity between the color of every pixel of the face image and skin color is computed, the similarity being taken as

where x = (Cr, Cb)^T is the pixel's vector in the CrCb chrominance space, and C and μ take the same values as in equations (4) and (5) above;

After the similarity values are computed, they are normalized and converted into gray values between 0 and 255, giving a gray-level image of the driver-seat view; the gray-level image is binarized with a set threshold, so that the skin-color region becomes pure white and the rest pure black; the horizontal projection of the image gray-level histogram is used to obtain the top and bottom maxima of the extracted region in the vertical direction, and the vertical projection is used to obtain the left and right maxima of the extracted region in the horizontal direction;

Let the face length be h and its width w. According to the size constraint of a human face, if the face aspect ratio satisfies 0.8 ≤ h/w ≤ 1.5, the region is confirmed as the face localization image;

a lip localization and yawn detection module for locating the driver's lips and detecting yawning; the red pixels of the face localization image are projected horizontally and vertically, and the resulting region is determined to be the mouth region; the longest distance between two adjacent troughs of the horizontal projection is the lip length, and the largest distance between two adjacent troughs of the vertical projection is the lip width. The lip feature points in the closed and open states are then defined in turn: the left and right mouth-corner points, the uppermost and lowermost points of the upper-lip center, and the uppermost and lowermost points of the lower-lip center. The ratio of the distance between the uppermost point of the upper-lip center and the lowermost point of the lower-lip center to the mouth length W_m is defined, according to the mouth model, as the mouth-opening parameter, as shown in equation (32):

A yawn is determined to have occurred when a wide mouth-opening state lasting for a period of time is established, as shown in equation (33):

The duration of one yawn is defined as the time from the start of the yawn to its end, expressed by equation (35):

Ty = t2 - t1    (35)

that is, the time interval during which the mouth-opening degree is continuously greater than or equal to α; whenever a yawn is found, the number or total duration of yawns over a period of time is counted using equation (36):

a driving-fatigue evaluation and alarm module which, given preset thresholds for the number or duration of yawns over a period of time, determines fatigued driving when the measured number or duration exceeds the set threshold and issues an alarm command to the alarm device.
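As a rough illustration of the yawn logic in this module, the following Python sketch computes the mouth-opening parameter of equation (32) as the ratio of lip height to mouth length and counts yawns and their durations in the sense of equations (33) to (36). The threshold ALPHA, the frame count MIN_YAWN_FRAMES and the frame rate FPS are illustrative assumptions, not values taken from the patent.

```python
# Minimal sketch of the yawn-detection logic described above (equations (32)-(36)).
# ALPHA, MIN_YAWN_FRAMES and FPS are assumed illustrative values, not from the patent.

ALPHA = 0.5            # mouth-opening ratio regarded as "wide open" (assumption)
MIN_YAWN_FRAMES = 15   # consecutive wide-open frames counted as one yawn (assumption)
FPS = 25               # video frame rate (assumption)

def mouth_openness(lip_height, mouth_length):
    """Equation (32): opening degree = lip height / mouth length."""
    return lip_height / mouth_length if mouth_length > 0 else 0.0

class YawnCounter:
    """Counts yawns and their durations over a stream of per-frame measurements."""
    def __init__(self):
        self.open_frames = 0      # consecutive frames with openness >= ALPHA
        self.yawns = []           # durations of detected yawns in seconds (equation (35))

    def update(self, lip_height, mouth_length):
        if mouth_openness(lip_height, mouth_length) >= ALPHA:
            self.open_frames += 1
        else:
            if self.open_frames >= MIN_YAWN_FRAMES:        # equation (33): sustained wide opening
                self.yawns.append(self.open_frames / FPS)  # Ty = t2 - t1
            self.open_frames = 0

    def yawns_in_window(self):
        """Equation (36): statistics of yawns over the observation period."""
        return len(self.yawns), sum(self.yawns)
```

The evaluation and alarm module would then compare the returned count or total duration against the preset thresholds, as described above.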

As a preferred solution, the safe-driving assistance controller further comprises:

an eye recognition module for locating the eye feature regions and feature points from the left and right mouth-corner points of the lips and the normalization parameters of the face; let the coordinates of the left mouth corner be (leftx, lefty) and those of the right mouth corner be (rightx, righty).

According to the arrangement of the facial features, the mouth length is given by equation (31):

Wm = sqrt((rightx - leftx) * (rightx - leftx) + (righty - lefty) * (righty - lefty))    (31)

The height HighofEye of one eye is computed by equation (37):

HighofEye = 0.31 * Wm    (37)

The length LengthofEye of one eye is computed by equation (38):

LengthofEye = 0.63 * Wm    (38)

The start coordinates (x2, y2) of the single-eye region are computed by equations (39) and (40):

x2 = rightx - 0.1 * LengthofEye    (39)

y2 = righty - 1.35 * Wm    (40)

The eye boundaries are determined from the projections of the black pixels in the image. The ordinate of the vertical projection of black pixels is, for each column of the region, the sum of all pixels judged to be black in that column, with length N; the abscissa is the column number, with length M. Let the region size be M*N and the pixel value at each point be Ie(x, y); the projection functions of the black pixels in the vertical and horizontal directions are computed by equations (41) and (42):

In the horizontal projection, the eye height HighofEye is estimated from the mouth length; searching upward from the lower part of the eye region, the distance W between every two adjacent troughs is examined, and when the first W close to the eye height HighofEye is encountered, the region between those two troughs is taken as the height region of the eye;

In the vertical projection, the eye length LengthofEye is estimated from the mouth length; the search starts from the right side of the vertical projection, and when the distance L between two adjacent troughs is found to be close to the eye length LengthofEye, the region between those two troughs is taken as the length region of the eye;
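The geometric relations of equations (31) and (37) to (40) can be written out directly; the sketch below uses the proportionality factors quoted above, while the tuple return format is its own illustrative choice.

```python
import math

def eye_region_from_mouth_corners(leftx, lefty, rightx, righty):
    """Estimate the single-eye search region from the two mouth corners,
    following equations (31) and (37)-(40) quoted above."""
    # Equation (31): mouth length from the two mouth-corner points
    wm = math.hypot(rightx - leftx, righty - lefty)
    # Equations (37), (38): eye height and length scaled from the mouth length
    high_of_eye = 0.31 * wm
    length_of_eye = 0.63 * wm
    # Equations (39), (40): start coordinates of the single-eye region
    x2 = rightx - 0.1 * length_of_eye
    y2 = righty - 1.35 * wm
    return (x2, y2), length_of_eye, high_of_eye
```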

a blink detection module; the eye-opening degree is computed using the eye-model definition of equation (43):

In equation (43) the aspect ratio of the eye's bounding rectangle represents the degree to which the eye is open, the numerator being the eye-opening height and the denominator the eye width, the eye width being taken as the distance between the two eye-corner points;

The blink state is defined as the state in which the number of consecutive video frames whose eye-opening degree is smaller than the set threshold accumulates beyond a certain number of frames, expressed by equation (44) as:

(count of consecutive frames with eye-opening degree not exceeding the threshold) ≥ the set frame count    (44)

where, as equation (45) expresses, a blink is considered to have occurred when the number of consecutive frames in which the eye-opening degree is less than or equal to the threshold accumulates beyond the set number of frames;

The blink duration is defined as the time between eye closure and eye opening during a blink, expressed by equation (46):

Tb = t2 - t1    (46)

Tb denotes the length of time during which the eye-opening degree is continuously less than or equal to the threshold;

The blink frequency is defined as the reciprocal of the time interval between the two most recent blinks; equation (47) expresses the blink frequency as:

In the driving-fatigue evaluation and alarm module, a blink-frequency threshold is set; if the measured blink frequency is greater than the blink-frequency threshold, fatigued driving is determined and an alarm command is issued to the alarm device.
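A minimal sketch of the blink logic of equations (43) to (47): the eye-opening ratio is thresholded, consecutive closed frames are accumulated into blinks, and the blink frequency is taken as the reciprocal of the interval between the two most recent blinks. The numeric thresholds below are assumptions for illustration, not values given in the patent.

```python
# Sketch of the blink logic in equations (43)-(47).  LAMBDA and MIN_CLOSED_FRAMES
# are illustrative thresholds; the patent leaves the concrete values open.

LAMBDA = 0.2            # opening ratio below which the eye is treated as closed (assumption)
MIN_CLOSED_FRAMES = 3   # consecutive closed frames counted as one blink (assumption)
FPS = 25                # frame rate (assumption)

def eye_openness(eye_height, eye_width):
    """Equation (43): aspect ratio of the eye's bounding rectangle."""
    return eye_height / eye_width if eye_width > 0 else 0.0

class BlinkMonitor:
    def __init__(self):
        self.closed_frames = 0
        self.last_blink_end = None   # frame index at which the previous blink ended
        self.frequency = 0.0         # equation (47): 1 / interval between the last two blinks

    def update(self, frame_idx, eye_height, eye_width):
        if eye_openness(eye_height, eye_width) <= LAMBDA:
            self.closed_frames += 1
        else:
            if self.closed_frames >= MIN_CLOSED_FRAMES:        # equations (44), (45)
                duration = self.closed_frames / FPS            # equation (46): Tb = t2 - t1
                if self.last_blink_end is not None:
                    interval = (frame_idx - self.last_blink_end) / FPS
                    self.frequency = 1.0 / interval if interval > 0 else 0.0
                self.last_blink_end = frame_idx
            self.closed_frames = 0
        return self.frequency
```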

As another preferred solution, the safe-driving assistance controller further comprises: a facial-trajectory tracking module which uses Kalman filtering to track the driver's facial movement; after the face-region center points (xt, yt), (xt+1, yt+1), (xt+2, yt+2) contained in the series of face state vectors Xt, Xt+1, Xt+2, ... have been obtained according to equation (15), a trajectory line of the face-region center point is drawn in time order:

The statistical mean of the face-region center point over a period after the driver begins driving is used, computed as in equation (48);

The nodding state is defined as the state in which the number of frames in which the face center point of the video image continuously moves forward by more than a certain threshold accumulates beyond a certain number of frames, expressed by equation (49) as:

Equation (49) states that one nod is determined to have occurred when the number of consecutive frames in which the nodding degree is greater than or equal to the threshold exceeds the set number of frames;

In equation (51), (xn, yn) is the position of the center point of the driver's face region at frame n, and (x̄, ȳ) is the statistical mean of the face-region center position over a period after the driver begins driving;

In the driving-fatigue evaluation and alarm module, a threshold for the number of nods and a duration threshold for deviation of the head beyond a set offset from its standard position in normal driving are set; if the measured number of nods is greater than the nod-count threshold, or the measured deviation of the head from its standard normal-driving position exceeds the offset value for longer than the duration threshold, fatigued driving is determined and an alarm command is issued to the alarm device.
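The nod-detection bookkeeping can be sketched as follows; a running mean stands in for the statistical reference position of equation (48), and the Kalman-filtered face-center points are assumed to be supplied by the tracking module. The displacement and frame-count thresholds are illustrative assumptions.

```python
# Sketch of the nod detection built on the tracked face-centre trajectory
# (equations (48), (49), (51)).  Thresholds are assumed values for illustration.

NOD_DISTANCE = 20.0   # pixel displacement treated as "moved forward" (assumption)
MIN_NOD_FRAMES = 5    # consecutive displaced frames counted as one nod (assumption)

class NodDetector:
    def __init__(self):
        self.ref_x = self.ref_y = None   # reference centre, mean over the warm-up period
        self.samples = 0
        self.displaced_frames = 0
        self.nods = 0

    def update(self, cx, cy, warm_up=False):
        if warm_up or self.ref_x is None:
            # Equation (48): statistical mean of the face centre after driving starts
            self.samples += 1
            if self.ref_x is None:
                self.ref_x, self.ref_y = cx, cy
            else:
                self.ref_x += (cx - self.ref_x) / self.samples
                self.ref_y += (cy - self.ref_y) / self.samples
            return self.nods
        # Equation (51): displacement of the current centre from the reference position
        dist = ((cx - self.ref_x) ** 2 + (cy - self.ref_y) ** 2) ** 0.5
        if dist >= NOD_DISTANCE:
            self.displaced_frames += 1
        else:
            if self.displaced_frames >= MIN_NOD_FRAMES:   # equation (49)
                self.nods += 1
            self.displaced_frames = 0
        return self.nods
```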

As yet another preferred solution, the safe-driving assistance controller further comprises:

a vehicle travel-direction detection module which, from the vehicle front-view image, takes the white line on the road or the green belt along the road edge as the reference line and detects from the video image whether the vehicle's heading deviates from that white line or green belt; a line parallel to the reference line is drawn through the center point of the vehicle, and it is checked whether the vehicle's running trajectory deviates to the left or right of this parallel line. If a trajectory point exceeds the set deviation threshold, the current driving is recorded as being in that offset state; if the previous deviation state was a positive deviation and the currently detected state is a negative deviation, or no deviation, then one deviation is considered to have occurred and its time and magnitude are recorded;

In the driving-fatigue evaluation and alarm module, an offset alarm distance is set; if the current offset is greater than the offset alarm distance, fatigued or dangerous driving is determined and an alarm command is issued to the alarm device.
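A minimal sketch of the deviation bookkeeping described in this module, assuming a signed lateral offset of the vehicle trajectory from the reference line is already available per frame; the offset threshold and the sign convention are assumptions of the sketch.

```python
# Sketch of the travel-direction check described above: the signed offset of the
# vehicle trajectory from a line parallel to the white line / green belt is
# monitored, and a positive-to-negative (or positive-to-none) change beyond the
# threshold is logged as one deviation.

OFFSET_THRESHOLD = 0.5   # lateral offset, in metres, regarded as a deviation (assumption)

class DeviationLogger:
    def __init__(self):
        self.prev_state = 0      # +1 positive deviation, -1 negative, 0 none
        self.events = []         # (timestamp, offset) of each recorded deviation

    def update(self, timestamp, offset):
        if offset > OFFSET_THRESHOLD:
            state = 1
        elif offset < -OFFSET_THRESHOLD:
            state = -1
        else:
            state = 0
        # A deviation is recorded when a positive deviation is followed by a
        # negative one or by no deviation, as the module above describes.
        if self.prev_state == 1 and state <= 0:
            self.events.append((timestamp, offset))
        self.prev_state = state
        return len(self.events)
```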

Further, the safe-driving assistance controller further comprises: a steering-wheel operating-state detection module which, from the steering-wheel-view image, checks whether the steering wheel is currently being operated when the road ahead shows a curve, a lane change, a junction, a vehicle ahead cutting in, or other major road situations; in the driving-fatigue evaluation and alarm module, if there is currently no steering-wheel operating action, dangerous driving is determined and an alarm command is issued to the alarm device.

Still further, the detection-field segmentation module further comprises: a perspective unwrapping unit which uses a coordinate point P(i, j) of the perspective projection plane to find the spatial three-dimensional coordinates A(X, Y, Z), thereby obtaining the transformation between the projection plane and the spatial coordinates, expressed by equation (60):

In the above formula, D is the distance from the perspective projection plane to the hyperboloid focus Om, the angle β is the angle of the projection of the incident ray onto the XY plane, the angle γ is the angle between the incident ray and the horizontal plane through the hyperboloid focus, the i axis is the horizontal axis parallel to the XY plane, and the j axis is the vertical axis intersecting the i axis and the Om-G axis at right angles;

Substituting the point P(X, Y, Z) obtained from equation (60) into equations (57) and (58) yields the point P(x, y) on the imaging plane corresponding to the coordinate point P(i, j) of the perspective projection plane:

Still further, the catadioptric mirror is designed as a hyperbolic mirror; the optical system formed by the hyperbolic mirror can be expressed by the following five equations:

where X, Y, Z are spatial coordinates, c denotes the focus of the hyperbolic mirror, 2c is the distance between the two foci, a and b are the lengths of the real and imaginary axes of the hyperbolic mirror, β is the azimuth angle of the incident ray on the XY plane, α is the depression angle of the incident ray on the XZ plane, and f is the distance from the imaging plane to the virtual focus of the hyperbolic mirror.

The technical concept of the present invention is as follows: using video and image-understanding technology, the driver's driving fatigue can be monitored by detecting the state of the driver's head (constant yawning; unconscious repeated nodding, i.e. dozing off; difficulty keeping the head up), the state of the driver's eyes (drooping eyelids or even closed eyes), the state of the steering wheel (sluggish reactions, slow judgment, stiff movements, a slow rhythm), the vehicle's travel direction (loss of the sense of direction, swaying from side to side on the road) and the vehicle's travel speed (arbitrary speed changes, an unsteady traveling rate), and judging these various fatigue and driving states, so that an accurate judgment result can be obtained. To detect simultaneously the state of the driver's head, mouth and eyes, the state of the steering wheel, the vehicle's travel direction and travel speed, the ideal approach is to obtain all of this state information with a single sensor, and the omnidirectional vision sensor makes such a technique possible.

The omnidirectional vision sensor, ODVS (OmniDirectional Vision Sensor), developed in recent years provides a new solution for acquiring panoramic images of a scene in real time. The characteristics of an ODVS are a wide field of view (360 degrees): it can compress the information of a hemispherical field of view into one image, and one image carries a larger amount of information; when acquiring a scene image, the ODVS can be placed more freely inside the vehicle; when monitoring the environment the ODVS does not need to be aimed at the target; the algorithms for detecting and tracking moving targets within the monitored range are simpler; and real-time images of the scene inside and outside the vehicle can be obtained. Such an ODVS camera consists mainly of a CCD camera and a mirror facing the camera. The mirror reflects a full horizontal circle of the surroundings onto the CCD camera, so that 360° of environmental information in the horizontal direction can be acquired in one image. This kind of omnidirectional camera has very prominent advantages; in particular, under the requirement of real-time panoramic processing, it is a fast and reliable way of collecting visual information.

There is therefore a need for a safe driving auxiliary device which uses an omnidirectional vision sensor to detect simultaneously the state of the driver's face, mouth and eyes, the state of the steering wheel, the vehicle's travel direction and travel speed, which alerts the driver when driving fatigue is detected, and which at the same time records the vehicle's running state, providing light or voice alarms for violations such as overloading, speeding and fatigued driving, as well as audio/video recording and playback of abnormal events.

The beneficial effects of the present invention are mainly: 1. through the above all-round fatigued-driving detection, fatigue produced by any factor alone or in combination can be detected effectively, improving the reliability of safe driving; 2. the discrimination accuracy is high and the measurement accuracy is improved; 3. the video information captured by the omnidirectional vision sensor can be used to record the vehicle's running state, providing light or voice alarms for violations such as overloading, speeding and fatigued driving, as well as audio/video recording and playback of abnormal events.

(4)

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 is a schematic diagram of the safe driving auxiliary device of the present invention;

Figure 2 shows the software module division of the safe driving auxiliary device of the present invention;

Figure 3 is a flowchart of face recognition at the driver's seat in the safe driving auxiliary device of the present invention;

Figure 4 is a flowchart of driving-fatigue judgment in the safe driving auxiliary device of the present invention;

Figure 5 is a schematic diagram of detecting the degree of eye opening in the safe driving auxiliary device of the present invention;

Figure 6 is a schematic diagram of detecting the degree of mouth opening in the safe driving auxiliary device of the present invention;

Figure 7 is a schematic diagram of detecting the degree of eye opening in the safe driving auxiliary device of the present invention;

Figure 8 is a flowchart of a localization algorithm;

Figure 9 is a schematic diagram of the catadioptric imaging principle of the omnidirectional vision sensor;

Figure 10 is a structural diagram of the omnidirectional vision sensor;

Figure 11 is a schematic diagram of the perspective projection principle of the omnidirectional vision sensor;

Figure 12 is the overall flowchart of safe-driving detection in the safe driving auxiliary device based on omnidirectional computer vision;

Figure 13 shows one manifestation of driving with a lost sense of direction, swaying from side to side on the road.

(5)

DETAILED DESCRIPTION

The present invention is further described below with reference to the drawings.

Referring to Figures 1 to 13, a safe driving auxiliary device based on omnidirectional computer vision first obtains, through the omnidirectional vision sensor, perspective views of the driver-seat view, the steering-wheel view, and the road environment in which the vehicle is driven. The driver-seat perspective view is used for detecting the driver's face, mouth, eyes and facial trajectory; image understanding and recognition are used to judge the driver's perception state and judgment state and whether they are in a fatigued state. Understanding of the steering-wheel perspective view is used to judge whether the driver's action state shows sluggish reactions, slow judgment, stiff movements, a slow rhythm and the like. The perspective view of the area ahead of the vehicle yields the road environment and the vehicle's running trajectory, from which it is judged whether the vehicle has lost its sense of direction, whether it sways from side to side on the road, and when and by how much it deviates from the white line;

The principle of the omnidirectional vision sensor (ODVS) and of the perspective view is now explained. In the omnidirectional vision sensor, the optical part consists mainly of a downward-facing catadioptric mirror and an upward-facing camera. Specifically, the imaging unit consisting of a condenser lens and a CCD (or CMOS) is fixed at the lower part of a cylinder made of transparent resin or glass, and a downward-facing catadioptric mirror of large curvature is fixed at the upper part of the cylinder. Between the catadioptric mirror and the condenser lens there is a black cone whose diameter gradually decreases; the cone is fixed at the middle of the catadioptric mirror, its purpose being to prevent excess light from entering and causing light saturation inside the cylinder as well as light reflection from the cylinder wall. Figure 9 is a schematic diagram of the optical system of the omnidirectional imaging device of the present invention.

The working principle of the omnidirectional vision sensor is as follows: light directed towards the center of the hyperbolic mirror is refracted towards its virtual focus according to the mirror properties of the hyperboloid. The real image is reflected by the hyperbolic mirror into the condenser lens and imaged, and a point P(x, y) on the imaging plane corresponds to the coordinates A(X, Y, Z) of a point of the object in space.

In Figure 9: 11 is the hyperbolic mirror, 12 the incident ray, 13 the focus Om(0, 0, c) of the hyperbolic mirror, 14 the virtual focus of the hyperbolic mirror, i.e. the camera center Oc(0, 0, -c), 15 the reflected ray, 16 the imaging plane, 17 the spatial coordinates A(X, Y, Z) of the real object, 18 the spatial coordinates of the image incident on the hyperbolic mirror surface, and 19 the point P(x, y) reflected onto the imaging plane.

The optical system formed by the hyperbolic mirror shown in Figure 10 can be expressed by the following five equations:

((X² + Y²) / a²) - (Z² / b²) = -1   (Z > 0)    (52)

c = √(a² + b²)    (53)

where X, Y, Z are spatial coordinates, c denotes the focus of the hyperbolic mirror, 2c is the distance between the two foci, a and b are the lengths of the real and imaginary axes of the hyperbolic mirror, β is the azimuth angle of the incident ray on the XY plane, α is the depression angle of the incident ray on the XZ plane, and f is the distance from the imaging plane to the virtual focus of the hyperbolic mirror.
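Equations (54) to (58), referenced elsewhere in this description, are not reproduced in this text. For a single-viewpoint hyperbolic mirror of the kind described, such relations conventionally take the following form, with Z measured from the mirror focus Om; this is a reconstruction under that assumption, not a quotation of the patent.

```latex
% Conventional single-viewpoint hyperbolic-mirror relations (assumed standard form,
% standing in for equations (54)-(58), whose images are not reproduced in this text).
\begin{align}
\beta  &= \tan^{-1}\frac{Y}{X} \\
\gamma &= \tan^{-1}\frac{Z}{\sqrt{X^{2}+Y^{2}}} \\
x &= \frac{X f\,(b^{2}-c^{2})}{(b^{2}+c^{2})\,Z - 2\,b\,c\sqrt{X^{2}+Y^{2}+Z^{2}}} \\
y &= \frac{Y f\,(b^{2}-c^{2})}{(b^{2}+c^{2})\,Z - 2\,b\,c\sqrt{X^{2}+Y^{2}+Z^{2}}}
\end{align}
```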

The structure of the omnidirectional vision sensor is shown in Figure 9. The omnidirectional vision sensor achieves a 360° visual range in the horizontal direction and a 90° visual range in the vertical direction; it can therefore be installed at the right front of the driver's seat inside the vehicle, or, if required, above and to the right of the driver's seat, the choice being made according to the user's needs and the actual space inside the vehicle. The mounting height and in-vehicle position of the omnidirectional vision sensor are decided from the depression angle of the sensor and the situation of the driver's seat. The three necessary conditions for the mirror design of the omnidirectional vision sensor and its mounting position are: 1) the driver's face can be captured clearly; 2) the driver's operation of the steering wheel can be captured; 3) the environment ahead of the running vehicle can be observed.

To explain the principle of 360° omnidirectional imaging: a point A(x1, y1, z1) in space is reflected by the catadioptric mirror 2 onto the lens 6, corresponding to a projection point P1(x, y); the light passing through the lens 6 becomes parallel light projected onto the CMOS imaging unit 5. The microprocessor 7 reads this annular image through the video interface and uses software to unwrap the annular image into perspective views, obtaining perspective video images divided into the vehicle-front view, the driver-seat view and the steering-wheel view.

To give a better understanding of the perspective view, as shown in Figure 10, a straight line Om-G of length D is drawn from the real focus Om of the hyperboloid to the origin G of the perspective projection coordinates, and the plane perpendicular to this line Om-G is taken as the perspective projection plane. A ray from a point A(X, Y, Z) towards the focus Om has an intersection point P(X, Y, Z) with the perspective projection plane; if this intersection point P(X, Y, Z) is substituted into equations (57) and (58), the point P(x, y) on the imaging plane can easily be found, so each point on the perspective projection plane can be obtained from this relationship.

As shown in Figure 10, the optical axis of the hyperbolic mirror is the Z axis and the camera is placed facing the positive direction of the Z axis; the imaging plane is the camera's input image. The intersection g of the optical axis of the hyperbolic mirror with the imaging plane is taken as the origin of the imaging plane, whose coordinate system is x, y; the x axis and y axis coincide with the long and short sides of the camera's image sensor chip, so the X axis of the Om-XYZ coordinate system is parallel to the xy plane of the imaging-plane coordinate system.

The perspective projection plane is the plane perpendicular to the connecting line OG, with point G as the origin of a two-dimensional plane coordinate system i, j, where the i axis is the horizontal axis parallel to the XY plane and the j axis is the vertical axis intersecting the i axis and the Om-G axis at right angles. The distance from the perspective projection plane to the hyperboloid focus Om is taken as D, and the width of the perspective projection plane is defined as W and its height as H. Since the i axis is parallel to the XY plane and perpendicular to the Z axis, the resulting perspective projection plane is rotated, with point G as the coordinate center, by an angle relative to the XY plane (the horizontal plane); this angle is the angle between the connecting line Om-G and the Z axis.

Here Om-G is taken as the transformation center axis and point G as the transformation center point; the transformation center axis is expressed by β (the azimuth angle of the incident ray on the XY plane), γ (the angle between the incident ray and the horizontal plane through the hyperboloid focus) and the distance D (the distance from the perspective projection plane to the hyperboloid focus Om). The angle β lies in the range 0° to 360° and can be computed from equation (54); it can equally be expressed by equation (59):

β = tan⁻¹(Y / X) = tan⁻¹(y / x)    (59)

Here the angle β is the angle of the projection of the incident ray on the XY plane, measured counterclockwise with the Z axis as the origin (the origin of the polar coordinate system), in the range 0° to 360° (this is the horizontal field of view of the omnidirectional vision). The angle γ is the angle between the incident ray and the horizontal plane through the hyperboloid focus, as given by equation (56); this angle relates the spatial coordinates to the focus position of the hyperboloid: if a horizontal plane is drawn through the hyperboloid focus, γ is the angle between that horizontal plane and the Om-G axis. A point whose spatial Z coordinate lies above the hyperboloid focus is denoted [+] and called the elevation angle, and one below the hyperboloid focus is denoted [-] and called the depression angle. The angle γ ranges between -90° and +90°, and different mirror designs give different γ ranges (this is the vertical field of view of the omnidirectional vision);

The distance D is determined from the straight-line distance between the perspective projection plane and the hyperboloid focus; in general, the longer D is, the smaller the scene appears, and the shorter D is, the larger the scene appears. The width W and height H of the perspective projection plane can be determined as needed; when determining the sizes of W and H, the aspect ratio of the display window must first be decided, and since W and H are expressed in pixels in the computer, the pixel values of W and H must be determined.

Using a coordinate point P(i, j) of the perspective projection plane, the spatial three-dimensional coordinates A(X, Y, Z) are found, giving the transformation between the projection plane and the spatial coordinates, expressed by equation (60):

X = R * cos β - i * sin β

Y = R * sin β + i * cos β

Z = D * sin γ - j * cos γ    (60)

(R = D * cos γ + j * sin γ)

where D is the distance from the perspective projection plane to the hyperboloid focus Om, the angle β is the angle of the projection of the incident ray on the XY plane, the angle γ is the angle between the incident ray and the horizontal plane through the hyperboloid focus, the i axis is the horizontal axis parallel to the XY plane, and the j axis is the vertical axis intersecting the i axis and the Om-G axis at right angles; the directions of the i and j axes are shown in Figure 11. Substituting the point P(X, Y, Z) obtained from equation (60) into equations (57) and (58) yields the point P(x, y) on the imaging plane corresponding to the coordinate point P(i, j) of the perspective projection plane. In this way the omnidirectional perspective view can be obtained from the image information on the imaging plane; that is, a correspondence is established between the coordinates on the imaging plane and the coordinate system of the perspective projection plane. With this correspondence, the image information of a point obtained on the imaging plane can be displayed correctly at the corresponding position on the perspective projection plane.
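The unwrapping step implied by equation (60) can be sketched as follows: each point of the perspective projection plane is mapped into space and then, through the imaging relations (57) and (58), back onto the omnidirectional image. The pixel centering, the scale factor and the space_to_pixel callback are assumptions of this sketch rather than details given in the patent.

```python
import numpy as np

def perspective_point_to_space(i, j, beta, gamma, D):
    """Equation (60): map a point P(i, j) of the perspective projection plane,
    defined by azimuth beta, elevation gamma and distance D from the focus Om,
    to the space coordinates (X, Y, Z)."""
    R = D * np.cos(gamma) + j * np.sin(gamma)
    X = R * np.cos(beta) - i * np.sin(beta)
    Y = R * np.sin(beta) + i * np.cos(beta)
    Z = D * np.sin(gamma) - j * np.cos(gamma)
    return X, Y, Z

def unwrap_perspective(omni_image, space_to_pixel, beta, gamma, D, W, H, scale=1.0):
    """Build a W x H perspective view by looking up, for every (i, j) of the
    projection plane, the corresponding pixel of the omnidirectional image.
    `space_to_pixel` is assumed to implement equations (57) and (58) plus the
    shift to the image origin, returning pixel coordinates on the camera image."""
    view = np.zeros((H, W, 3), dtype=omni_image.dtype)
    for v in range(H):
        for u in range(W):
            i = (u - W / 2.0) * scale          # plane coordinates centred on G
            j = (H / 2.0 - v) * scale
            X, Y, Z = perspective_point_to_space(i, j, beta, gamma, D)
            x, y = space_to_pixel(X, Y, Z)     # point on the imaging plane
            px, py = int(round(x)), int(round(y))
            if 0 <= py < omni_image.shape[0] and 0 <= px < omni_image.shape[1]:
                view[v, u] = omni_image[py, px]
    return view
```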

To obtain the perspective video images divided into the vehicle-front view, the driver-seat view and the steering-wheel view, this patent distinguishes two modes according to need: long-range detection and close-range detection. In long-range detection the distance from the perspective projection plane to the hyperboloid focus Om is placed beyond 10 m, for example for the perspective view of the area ahead of the vehicle; in close-range detection the distance from the perspective projection plane to the hyperboloid focus Om is placed within 1 m, for example for the perspective views of the driver seat and the steering wheel. The unwrapping of the perspective views is carried out in the perspective unwrapping module of Figure 2.

To obtain perspective views with the various viewing angles and requirements described above, a detection-field segmentation module is provided in the program, as shown in Figure 2. In this module the user can define the size and orientation of the perspective views on the image obtained from the omnidirectional vision sensor; after the perspective views are defined, the program generates the corresponding perspective-view windows, in which the detection content is defined. For example, in the driver-seat perspective window the above face detection, mouth detection, eye detection and facial-trajectory detection are defined, for detecting the driver's fatigued driving, mainly from the aspects of the driver's perception state and judgment state; in the steering-wheel perspective window, steering-wheel operation detection is defined, mainly from the aspect of the driver's action state; and in the vehicle-front perspective window, road-environment detection and vehicle-trajectory detection are defined, mainly from the aspects of the road state and the driving state.

To detect yawning, blinking, unconscious repeated nodding, difficulty keeping the head up, and other aspects of the driver's perception state, face detection, mouth detection, eye detection, facial-trajectory detection and image understanding are used; the mouth detection, eye detection and facial-trajectory detection are all built on the basis of face detection.

Therefore the next step is first face detection. Once the driver-seat perspective view has been defined, the position of the driver's face must lie within that perspective view. Face detection is carried out in the face localization module of Figure 2, and its processing flow is given in Figure 3.

In view of the real-time requirements of the device, the present invention exploits the good clustering property of human skin color in color space, selects the YCrCb space as the mapping space for the skin-color distribution statistics so as to limit the skin-color distribution region well, builds a two-dimensional Gaussian skin-color distribution model by statistical identification, and uses a face detection method based on similarity and on the shape characteristics of the face. The maximum between-class variance thresholding method is used for segmentation, and the face is located by projections of the image gray-level histogram.

Human face skin color obeys a two-dimensional Gaussian distribution in the CrCb chrominance space, and the probability density function of this skin-color distribution model can be expressed by equation (4),

where μ = (156.560, 117.436)^T, the two values of this vector being the means of the color components Cr and Cb, and C is the covariance matrix of Cr and Cb, expressed by equation (5),

where σ_Cr^2 and σ_Cb^2 are the variances of Cr and Cb, and σ_CrCb and σ_CbCr are their covariances. According to the Gaussian skin-color model, the similarity between the color of every pixel of the face image and skin color is computed, the similarity being taken as

where x = (Cr, Cb)^T is the pixel's vector in the CrCb chrominance space, and C and μ take the same values as in equations (4) and (5) above.

After the similarity values are computed, they are normalized and converted into gray values between 0 and 255, giving a gray-level image of the detected color image; the gray-level image directly reflects how similar the color of each pixel is to skin color. First the maximum of the similarity values of all pixels is found, and the similarities are normalized with respect to this value, so that the pixel with the largest similarity becomes pure white (gray value 255) and the other pixels are converted into the corresponding gray values according to their respective similarities.

To segment the skin-color region of the image, a suitable threshold must be chosen to binarize the gray-level image. This patent uses the maximum between-class variance (Otsu) thresholding method, in which the histogram is split into two groups at a certain threshold and the threshold is chosen so that the variance between the two groups is maximal. Suppose the gray values of an image take levels 1 to M.

All pixels are then divided by a threshold K into two groups S0 = {1, ..., K} and S1 = {K+1, ..., M}; the probability of S0 can be computed by equation (9),

the probability of S1 can be computed by equation (10), and the mean of S0 can be computed by equation (11),

the mean of S1 can be computed by equation (12), where μ_T = Σ i·p_i is the overall gray-level mean of the image

and μ(K) is the gray-level mean of the pixels whose gray value is below K; the variance between the two groups S0 and S1 is expressed by equation (13),

and the threshold is obtained by finding the K in 1 to M that maximizes σ²(K). The face image is binarized with this threshold, so that the skin-color region becomes pure white and the rest pure black. The horizontal projection of the image gray-level histogram is used to obtain the top and bottom maxima of the extracted region in the vertical direction, and the vertical projection is used to obtain the left and right maxima of the extracted region in the horizontal direction; these values are then used to locate the face. Let the face length be h and its width w. According to the size constraint of a human face, namely the face aspect-ratio condition 0.8 ≤ h/w ≤ 1.5, the region can be confirmed as the localization image of the face. As shown in Figure 3, once the position of the driver's face has been determined, the positions of the mouth and eyes can be determined from the layout of the facial features and their color features, in order to detect whether there are signs of fatigued driving.
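The face localization pipeline described above (skin-color similarity under the Gaussian model, maximum between-class variance thresholding per equations (9) to (13), and the aspect-ratio check) can be sketched as follows. The covariance values are illustrative assumptions, since only the mean vector is legible in this text, and the bounding box is taken from the extreme skin pixels as a simplification of the projection-based boundary search.

```python
import numpy as np
import cv2  # used only for the BGR -> YCrCb conversion

# Mean of (Cr, Cb) quoted above; the covariance values below are assumptions,
# as the covariance matrix is not legible in this text.
MU = np.array([156.560, 117.436])
COV = np.array([[160.0, 12.0],
                [12.0, 299.0]])
COV_INV = np.linalg.inv(COV)

def skin_similarity(bgr_image):
    """Per-pixel skin-colour similarity under the 2-D Gaussian model,
    normalized to gray values 0..255 as described above."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb).astype(np.float64)
    d = ycrcb[:, :, 1:3] - MU                                # (Cr, Cb) minus the mean
    m = np.einsum('...i,ij,...j->...', d, COV_INV, d)        # squared Mahalanobis distance
    sim = np.exp(-0.5 * m)
    return np.uint8(255 * sim / sim.max())

def otsu_threshold(gray):
    """Maximum between-class variance threshold (equations (9)-(13))."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    mu_total = np.dot(np.arange(256), p)
    best_k, best_var = 0, 0.0
    w0 = mu0 = 0.0
    for k in range(256):
        w0 += p[k]
        mu0 += k * p[k]
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue
        var = (mu_total * w0 - mu0) ** 2 / (w0 * w1)          # between-class variance
        if var > best_var:
            best_var, best_k = var, k
    return best_k

def locate_face(bgr_image):
    """Binarize the similarity image and accept the bounding box only if the
    face aspect-ratio constraint 0.8 <= h/w <= 1.5 holds."""
    gray = skin_similarity(bgr_image)
    mask = gray > otsu_threshold(gray)
    rows, cols = np.where(mask)
    if rows.size == 0:
        return None
    top, bottom = rows.min(), rows.max()
    left, right = cols.min(), cols.max()
    h, w = bottom - top + 1, right - left + 1
    if w > 0 and 0.8 <= h / w <= 1.5:
        return left, top, w, h
    return None
```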

Lip localization and yawn detection are carried out in the mouth localization module of Figure 2. Locating and tracking the driver's lips can be used to judge whether the driver is yawning, and it also determines the accuracy of the subsequent eye localization. Because the facial features are laid out regularly, the rough positions of the other feature regions and feature points can be estimated from two basic points and the normalized facial parameters; in other words, once the two mouth corners have been fixed, the in-plane tilt angle of the head and the eye regions can be computed.

Using the fact that the lips are red, the lips are segmented and located as follows: first, the lip region is segmented by projecting the red pixels of the driver's face image horizontally and vertically, and then the lips themselves are extracted by a method that combines edge extraction with red-pixel extraction. The mouth feature extraction flow is shown in Figures 6 and 7.

The lip boundary is determined from the projection of the red pixels in the face image. In the vertical projection of the red pixels, the ordinate is the sum of all pixels judged red in one column of the image (of length N) and the abscissa is the column index (of length M); it reflects the variation of the red pixels in the horizontal direction. Let the image size be M × N with pixel values I(x, y); the projection functions of the red pixels in the vertical and horizontal directions are then given by equations (28) and (29),

Pv(x) = Σ_{y=1..N} R(x, y)   (28)        Ph(y) = Σ_{x=1..M} R(x, y)   (29)

where R(x, y) = 1 if pixel I(x, y) is judged red and 0 otherwise.

Since the lip height HeightofLip is roughly 1/10 of the head height HeightofHead, this relationship yields a fairly suitable statistical variable RedThresh. During the computation of the horizontal and vertical projections, the projection value Pi(x) of each coordinate (each row or column) is examined; if the projection value exceeds 1/6 of the head height (or width), RedThresh is automatically incremented by 1. When RedThresh exceeds 2 × HeightofLip, the threshold Thresh is automatically increased by a fixed amount and the projection is recomputed, until a suitable threshold Thresh has been selected for the projection computation.

The red pixels in the face image are projected horizontally and vertically to confirm this region as the mouth region: the longest distance between two adjacent troughs of the horizontal projection gives the lip length, and the largest distance between two adjacent troughs of the vertical projection gives the lip width, which locates the mouth region.
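A sketch of the red-pixel projection idea under the red test of equation (30) below; the trough handling is simplified to taking the widest contiguous span of the projection, and all names are illustrative assumptions.

    import numpy as np

    def red_mask(bgr, thresh=20):
        """A pixel is 'red' when R exceeds both B and G by more than thresh (cf. equation (30))."""
        b = bgr[:, :, 0].astype(int)
        g = bgr[:, :, 1].astype(int)
        r = bgr[:, :, 2].astype(int)
        return ((r - b) > thresh) & ((r - g) > thresh)

    def widest_run(projection, floor=1):
        """Return (start, end) of the widest contiguous span with projection values >= floor."""
        best, run_start = (0, 0), None
        flags = projection >= floor
        for i, flag in enumerate(flags):
            if flag and run_start is None:
                run_start = i
            if (not flag or i == len(flags) - 1) and run_start is not None:
                end = i if flag else i - 1
                if end - run_start > best[1] - best[0]:
                    best = (run_start, end)
                run_start = None
        return best

    def locate_mouth(face_bgr):
        """Red-pixel projections of the face region give the lip length and width."""
        mask = red_mask(face_bgr)
        x0, x1 = widest_run(mask.sum(axis=0))   # per-column counts: lip length (horizontal extent)
        y0, y1 = widest_run(mask.sum(axis=1))   # per-row counts: lip width (vertical extent)
        return x0, x1, y0, y1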

After the mouth region has been segmented with the lip segmentation algorithm above, the lips are extracted within the mouth image and the mouth corners and the other lip feature points are then located. Exploiting the red color of the lips, the extraction combines edge extraction with red-pixel extraction. The main steps of the algorithm are as follows:

(1) Lip edge extraction: edge extraction and binarization are applied to the lips to extract the clearly visible lip edges. (2) Red-pixel extraction: since the lips are red, extracting the red pixels makes the lip edges obtained from the edge region more complete and less noisy. A pixel is judged red whenever its red component exceeds both its green and blue components; to make the decision more reliable, a threshold Thresh is set, and the pixel is judged red only when the red component exceeds the green and blue components by more than Thresh. The decision rule is given by equation (30),

((R - B) > Thresh) && ((R - G) > Thresh)   (30)

(3) Lip extraction: after the lip edges have been extracted, two statistical variables are maintained, a "red score" and a "total score", where the red score is the number of red pixels inside the lip region and the total score is the number of red pixels plus edge points; both variables are updated as each pixel is processed. The threshold sobelThresh of the edge-extraction operator is determined with a dynamic threshold method: sobelThresh is first given an initial value (empirically 10); if the total score exceeds half the number of pixels in the lip region, sobelThresh is automatically increased by an integer K1; if the red score exceeds a quarter of the number of pixels in the lip region, the program automatically increases Thresh by an integer K2 and recomputes the lip extraction. In this way, once sobelThresh and Thresh have reached appropriate values, the lips are extracted cleanly.
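The dynamic-threshold combination of Sobel edges and red pixels might look roughly like this; the loop bound, the step sizes K1 and K2, and the initial thresholds are illustrative assumptions in the spirit of the text.

    import numpy as np
    import cv2

    def extract_lips(mouth_bgr, sobel_thresh=10, red_thresh=20, k1=5, k2=5, max_iter=20):
        """Combine Sobel edges with red pixels, raising the thresholds while the scores look too large."""
        gray = cv2.cvtColor(mouth_bgr, cv2.COLOR_BGR2GRAY)
        grad = np.abs(cv2.Sobel(gray, cv2.CV_64F, 1, 0)) + np.abs(cv2.Sobel(gray, cv2.CV_64F, 0, 1))
        b = mouth_bgr[:, :, 0].astype(int)
        g = mouth_bgr[:, :, 1].astype(int)
        r = mouth_bgr[:, :, 2].astype(int)
        total_pixels = gray.size
        for _ in range(max_iter):
            edges = grad > sobel_thresh
            red = ((r - b) > red_thresh) & ((r - g) > red_thresh)
            all_score = int(np.count_nonzero(edges | red))   # "total score": edge points plus red pixels
            red_score = int(np.count_nonzero(red))           # "red score": red pixels only
            if all_score > total_pixels / 2:
                sobel_thresh += k1                           # too many responses: raise the edge threshold
            elif red_score > total_pixels / 4:
                red_thresh += k2                             # too many red pixels: raise the red threshold
            else:
                break
        return edges | red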

Lip localization is the basis of eye localization and an important means of judging yawning. If the lips are located incorrectly or with a large error, judging yawns and locating the eye feature points becomes very difficult and the whole localization process may even fail, so the accuracy of mouth localization is critical. The lip feature points are located on the basis of the lip localization; the flow of lip feature-point localization is shown in Figure 8.

The lip feature-point localization algorithm is described further below; once the lip feature points have been obtained accurately, the lip feature vector can be computed. The lip feature points defined in this patent are: the left and right mouth-corner points, the uppermost point of the upper-lip center, the lowermost point of the upper-lip center, the uppermost point of the lower-lip center, and the lowermost point of the lower-lip center. An approximation method is used to locate the lip feature points precisely both when the lips are closed and when they are open. The algorithm is as follows:

(1) Lips closed. (a) Left and right mouth-corner localization: two pointers move leftward along the lip edge lines; when the distance between the positions the two pointers point to becomes very small (or equal), or no further edge points are found when moving left, that position is taken as the x coordinate of the left mouth corner, and the middle of the y coordinates of the two pointer positions is taken as the y coordinate of the left mouth corner. The coordinates of the right mouth corner are obtained in a similar way.

(b) Localization of the uppermost point of the upper-lip center and the lowermost point of the lower-lip center

After the coordinates of the right and left mouth corners have been fixed, two pointers are moved from both sides toward the middle, with these two points as the basic points, to fix the uppermost point of the upper-lip center and the lowermost point of the lower-lip center. Let W_m be the lip length, i.e., the distance between the two mouth corners; the horizontal distance from the left mouth corner to the midpoint feature points on the outer edges of the upper and lower lips is fixed, namely W_m/2, which fixes the x coordinates of the uppermost point of the upper-lip center and the lowermost point of the lower-lip center. Starting from the left mouth corner and searching along the upper and lower lip edges, the y coordinates of these points are fixed whenever the search reaches their x coordinates.

(c) Localization of the lowermost point of the upper-lip center and the uppermost point of the lower-lip center: when the lips are closed, the lowermost point of the upper-lip center, the uppermost point of the lower-lip center, and the left and right mouth-corner points lie on the same line, so the midpoint of the left and right mouth-corner coordinates is taken as the coordinates of the lowermost point of the upper-lip center and the uppermost point of the lower-lip center.

(2) Lips open

(a) Left and right mouth-corner localization

As in the closed-lip case, two pointers are used, one above and one below, starting from the middle of the upper lip and moving leftward along the upper-lip edge lines; when the distance between the positions the two pointers point to becomes very small (or equal), or no further edge points are found when moving left, that position is taken as the x coordinate of the left mouth corner, and the middle of the y coordinates of the two pointer positions is the y coordinate of the left mouth corner. The right mouth-corner feature point is located in the same way.

(b) Localization of the uppermost and lowermost points of the upper-lip center

After the coordinates of the right and left mouth corners have been fixed, the pointers move over the upper lip from both sides toward the middle, with these two points as the basic points, to fix the uppermost and lowermost points of the upper-lip center. As in the closed-lip case, let W_m be the mouth length; the horizontal distance from the mouth corner to the other two feature points on the mouth edge is fixed, i.e., the midpoint feature points on the outer lip edges lie at a horizontal distance of W_m/2 from the left mouth corner, which fixes the x coordinates of the uppermost and lowermost points of the upper-lip center. Starting from the left mouth corner and searching along the upper-lip edge, the y coordinates are fixed whenever the x coordinate of the corresponding feature point is reached.

(c) Localization of the uppermost and lowermost points of the lower-lip center

Starting again from the left mouth corner, the upper and lower pointers move over the lower lip and search along the lower-lip edge; when the x coordinate of the upper-lip center points is reached, the y coordinates of the lower-lip center feature points are fixed.

Let the coordinates of the left mouth corner be (leftx, lefty) and those of the right mouth corner be (rightx, righty). According to the layout of the facial features, the empirical parameters are computed as follows.

The mouth length is given by equation (31),

W_m = sqrt( (rightx - leftx) * (rightx - leftx) + (righty - lefty) * (righty - lefty) )   (31)

With the continuous detection of the closed and open lip states described above, it can now be judged whether the driver keeps yawning; for this the degree of mouth opening, the opening frequency, and the duration have to be detected. The first item is the degree of mouth opening: in this patent it is the ratio of the distance H_m between the uppermost point of the upper-lip center and the lowermost point of the lower-lip center to the mouth length W_m (assuming that the mouth length W_m is the same whether the lips are closed or open, an assumption that has little effect on yawn detection). Based on the mouth model, the mouth-opening parameter Do_m is defined by equation (32),

Do_m = H_m / W_m   (32)

Next comes yawn detection. A yawn differs from the mouth-shape changes of normal speech or normal breathing in being a large mouth-opening state sustained for a certain time; a yawn is considered to occur when equation (33) holds,

Σ_N ( Do_m ≥ α ) > β   (33)

where the sum counts the video frames in which the degree of mouth opening is at least α.

Equation (34) states that a yawn is considered to have occurred when the number of frames in the video in which the degree of mouth opening remains greater than or equal to α accumulates to more than β frames. Based on empirical analysis of several simulated yawn videos, this invention takes α = 0.5 and β = 5 frames (about 0.5 s); for real yawn videos the values have to be adjusted according to the actual sampling rate.

To detect the yawn duration, Figure 7 gives an idealized curve of the degree of mouth opening. One yawn duration is defined as the time from the start of the yawn to the end of the yawn, expressed by equation (35),

T_y = t2 - t1   (35)

that is, the time interval during which the degree of mouth opening remains greater than or equal to α; in practice the interval is computed from the accumulated number of frames. Once a yawn has been found, the number of yawns or their duration over a period of time is counted with equation (36),

Total = Σ T_y   (36)

The more yawns there are, or the longer their duration, the higher the driver's level of fatigue.
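The frame-counting logic of equations (32)-(36) is simple enough to state directly; the sketch below assumes a per-frame series of Do_m values and uses the α = 0.5, β = 5 settings quoted above (function names are illustrative).

    def mouth_opening(h_m, w_m):
        """Equation (32): degree of mouth opening Do_m = H_m / W_m."""
        return h_m / w_m

    def yawn_statistics(do_m_series, alpha=0.5, beta=5):
        """Yawns are runs of at least beta frames with Do_m >= alpha (eqs. (33)-(36))."""
        durations, run = [], 0
        for d in do_m_series:
            if d >= alpha:
                run += 1
            else:
                if run >= beta:
                    durations.append(run)   # one yawn duration T_y, in frames (eq. (35))
                run = 0
        if run >= beta:
            durations.append(run)
        return len(durations), sum(durations)   # yawn count and Total = sum of T_y (eq. (36))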

On the basis of the detected mouth, parameters such as blink duration and blink frequency can be obtained from the facial features and used as eye-based measures of driver fatigue. To detect blink duration and blink frequency, the eyes must first be identified; the eye recognition module in Figure 2 mainly identifies the eyes from the features of the face and mouth. Because the facial features are laid out regularly, the rough positions of the eye feature regions and feature points can be estimated from the two mouth-corner points and the normalized facial parameters; in other words, once the two mouth corners have been fixed, the eye regions can be computed. Since the left and right eyes are symmetric and blink synchronously, this patent detects blinking of the right eye only. Using equation (31), the height of the right eye, HighofEye, is computed with equation (37),

HighofEye = 0.31 * W_m   (37)

and the length of the right eye, LengthofEye, is computed with equation (38),

LengthofEye = 0.63 * W_m   (38)

The starting coordinates (x2, y2) of the right-eye region are computed with equations (39) and (40),

x2 = rightx - 0.1 * LengthofEye   (39)

y2 = righty - 1.35 * W_m   (40)

After the extent of the right eye has been estimated in this way, it must be taken into account that face length and width differ and that the head may be rotated horizontally and in depth, so the region fixed by the empirical parameters may differ considerably from the actual one; for example, the estimated eye region may include the eyebrow region and part of the region covered by hair. Horizontal and vertical projections of the black pixels, together with a dynamic threshold, must therefore be used to determine the exact eye region.
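The empirical layout rules of equations (31) and (37)-(40) translate directly into code; a short sketch with assumed coordinate conventions follows.

    import math

    def right_eye_region(leftx, lefty, rightx, righty):
        """Rough right-eye box from the mouth corners, following equations (31) and (37)-(40)."""
        w_m = math.hypot(rightx - leftx, righty - lefty)   # eq. (31): mouth length
        high_of_eye = 0.31 * w_m                           # eq. (37): eye height
        length_of_eye = 0.63 * w_m                         # eq. (38): eye length
        x2 = rightx - 0.1 * length_of_eye                  # eq. (39): region start, x
        y2 = righty - 1.35 * w_m                           # eq. (40): region start, y (above the mouth)
        return x2, y2, length_of_eye, high_of_eye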

Since the eyelids and eyeballs of East Asian people are naturally dark, the eye boundary can be determined from the projection of the black pixels in the image. In the vertical projection of the black pixels, the ordinate is the sum of all pixels judged black in one column of this region (of length N), and the abscissa is the column index (of length M); it reflects the variation of the black pixels in the horizontal direction. Let the region size be M × N with pixel values Ie(x, y); the projection functions of the black pixels in the vertical and horizontal directions are then computed with equations (41) and (42),

Pv(x) = Σ_{y=1..N} B(x, y)   (41)        Ph(y) = Σ_{x=1..M} B(x, y)   (42)

where B(x, y) = 1 if pixel Ie(x, y) is judged black and 0 otherwise.

The eye region is segmented precisely using the black pixels. To remove the interference of shadows, let the color value of each pixel be ColorValue and set a threshold Thresh; when ColorValue < Thresh holds, the pixel is classified as a black pixel. The value of Thresh is crucial for judging the eye region accurately: in most sample images the eye region is surrounded by shadows of varying depth, so if Thresh were fixed at a single value, the projections of different images would not produce the correct peaks and troughs and the eye region could not be distinguished well; determining a suitable threshold is therefore very important.

In general the eyes are darker than the shadows around them. With the height and width of the eye region fixed above as the reference, let the height of the right-eye region be HighofArea and its width LengthofArea.

The projection value Pv(x) of each column is examined; if the projection exceeds 1/3 of the region height HighofArea, a counter Flag is automatically incremented by 1. When Flag exceeds LengthofArea/4, the threshold Thresh is automatically increased by 5, Flag is reset to 0, and the vertical projection is recomputed, until a suitable threshold Thresh has been selected for the vertical-projection computation. The horizontal projection of the region is computed in the same way.

When the horizontal projection is carried out, several peaks and troughs may appear, because the originally estimated region may also include the eyebrow region and the region covered by hair. Since the right-eye height HighofEye can be computed from the mouth length, the distance W between every two adjacent troughs is examined from the bottom of the right-eye region upward; when the first W close to the right-eye height HighofEye is encountered, the region between those two troughs is taken as the height region of the right eye. For the vertical projection, the right-eye length LengthofEye is estimated from the mouth length and the search starts from the right side of the vertical projection; when the distance L between two adjacent troughs is found to be close to the right-eye length LengthofEye, the region between those two troughs is taken as the length region of the right eye. The position of the right eye is then located from its height region and length region.
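A rough sketch of this dark-pixel refinement; the threshold update of +5 and the Flag test follow the text, while the iteration bound, initial threshold, and simplified trough matching are assumptions of this sketch.

    import numpy as np

    def dark_projections(region_gray, high_of_area, length_of_area, thresh=40, max_iter=30):
        """Dark-pixel projections of the candidate eye region (cf. eqs. (41), (42)).

        The threshold is stepped by 5, as in the text, whenever too many columns exceed
        HighofArea/3; a bounded loop is added so the sketch always terminates.
        """
        for _ in range(max_iter):
            dark = region_gray < thresh
            col_proj = dark.sum(axis=0)
            flag = int(np.count_nonzero(col_proj > high_of_area / 3))
            if flag > length_of_area / 4:
                thresh += 5
            else:
                break
        row_proj = dark.sum(axis=1)
        return col_proj, row_proj

    def matching_trough_gap(projection, expected, tol=0.3, from_right=False):
        """Find two adjacent troughs whose spacing is close to the expected eye size."""
        troughs = [i for i in range(1, len(projection) - 1)
                   if projection[i] <= projection[i - 1] and projection[i] <= projection[i + 1]]
        pairs = list(zip(troughs, troughs[1:]))
        if from_right:
            pairs = pairs[::-1]
        for a, b in pairs:
            if abs((b - a) - expected) <= tol * expected:
                return a, b
        return None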

Many eye-feature-based driver-fatigue detection applications use PERCLOS as the blink-detection parameter. PERCLOS is in fact the height of the iris portion not covered by the eyelid divided by the iris diameter. Since the iris diameter cannot be measured when the eye is closed, whereas the two eye corners are easy to detect and the distance between them is essentially unaffected by whether the eye is open or closed, this patent, unlike the PERCLOS way of computing the degree of eye opening, measures eye opening as the ratio of the eye-opening height to the distance between the two eye-corner points rather than to the iris diameter. Corresponding to the eye model of Figure 5, equation (43) defines how the degree of eye opening is computed,

Do_e = H_e / W_e   (43)

In equation (43) the degree of eye opening is expressed by the aspect ratio of the eye's bounding rectangle, where H_e is the eye-opening height and the eye width W_e is taken as the distance between the two eye-corner points.

In this patent the blink state is defined as the state in which the degree of eye opening in the video image remains below a certain threshold for a number of consecutive frames whose accumulated count exceeds a given number of frames, expressed by equation (44) as

Σ_N ( Do_e ≤ α_e ) > β_e   (44)

Equation (45) states that a blink is considered to have occurred when the number of frames in which the degree of eye opening remains less than or equal to α_e accumulates to more than β_e frames. Based on empirical analysis of several simulated blink videos, this invention takes α_e = 0.2 and β_e = 4 frames; for real blink videos the values have to be adjusted according to the actual sampling rate.

The blink duration is detected next. Following the idealized eye-opening curve of Figure 7, the blink duration is defined as the time interval between eye closure and reopening during a blink, expressed by equation (46),

T_b = t2 - t1   (46)

T_b is the length of time during which the degree of eye opening remains less than or equal to α_e; in video detection applications the interval is computed from the number of frames the blink lasts. The longer the blink duration, the higher the level of fatigue.

The blink frequency is also detected. Following the idealized eye-opening curve of Figure 7, the blink frequency is defined as the reciprocal of the time interval between the two most recent blinks, expressed by equation (47).

A higher blink frequency indicates a higher level of fatigue; in video detection applications the interval can be computed from the number of frames between two blinks. For a driver wearing glasses, and especially one wearing sunglasses, no image information about the eyes can be obtained from the video, so when the eyes cannot be identified the program does not evaluate eye-fatigue information.
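The blink bookkeeping of equations (43)-(47) mirrors the yawn logic; a sketch using the α_e = 0.2, β_e = 4 settings quoted above (names and the frame-based frequency unit are illustrative assumptions).

    def eye_opening(h_e, w_e):
        """Equation (43): degree of eye opening as the bounding-box aspect ratio H_e / W_e."""
        return h_e / w_e

    def blink_statistics(do_e_series, alpha_e=0.2, beta_e=4):
        """Blink durations (eq. (46)) and blink frequency (eq. (47)) from the eye-opening series."""
        closures, run = [], 0
        for i, d in enumerate(do_e_series):
            if d <= alpha_e:
                run += 1
            else:
                if run >= beta_e:
                    closures.append((i - run, i - 1))   # frames of one blink (closed until reopened)
                run = 0
        if run >= beta_e:
            closures.append((len(do_e_series) - run, len(do_e_series) - 1))
        durations = [end - start + 1 for start, end in closures]   # T_b of each blink, in frames
        if len(closures) >= 2:
            frequency = 1.0 / (closures[-1][0] - closures[-2][0])  # reciprocal of the latest interval
        else:
            frequency = 0.0
        return durations, frequency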

Furthermore, once the center point of the face has been detected, the trajectory of the face center can be obtained; from this trajectory it can be judged whether the driver is nodding involuntarily and finding it hard to keep the head up. This judgment is implemented in the facial-trajectory tracking module of Figure 2. In this patent a Kalman filter is used to track the driver's facial movement; after a series of face state vectors has been obtained according to equation (15), giving the face-region center points (x_t, y_t), (x_t+1, y_t+1), (x_t+2, y_t+2), ..., a trajectory line of the face-region center point can be drawn in time order.

First a judgment criterion is needed. This invention uses the statistical mean of the face-region center point over a period of time after the driver starts driving, computed as in equation (48),

(x̄, ȳ) = ( (1/n) Σ x_t , (1/n) Σ y_t )   (48)

In this patent the nodding state is defined as the state in which the face center position in the video image moves forward by more than a certain threshold distance for a number of consecutive frames whose accumulated count exceeds a given number of frames; it is expressed by equation (49), where (x_n, y_n) is the position of the center point of the driver's face region in frame n and (x̄, ȳ) is the statistical mean of the face-region center position over a period of time after the driver starts driving.

Equation (49) states that a nod is considered to have occurred when the degree of nodding remains at or above the threshold for an accumulated number of frames exceeding the set count; empirical analysis of several simulated nodding videos shows that the settings are essentially close to those used for blink detection, and for real nodding videos they have to be adjusted according to the actual sampling rate.

The greater the number of nods, or the longer the head stays more than the deviation value A away from its standard position in normal driving, the higher the driver's level of fatigue.
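The nod check of equations (48)-(49) compares each tracked face-center position against its early-driving baseline; in the sketch below the displacement threshold, baseline window, and frame count are placeholder assumptions.

    import numpy as np

    def nod_count(centers, baseline_frames=100, disp_thresh=15.0, min_frames=4):
        """Count nods: runs of frames whose face center is far from the early-driving baseline."""
        centers = np.asarray(centers, dtype=float)            # (x_t, y_t) from the Kalman tracker
        baseline = centers[:baseline_frames].mean(axis=0)     # eq. (48): mean center after driving starts
        nods, run = 0, 0
        for point in centers[baseline_frames:]:
            if np.linalg.norm(point - baseline) >= disp_thresh:
                run += 1
            else:
                if run >= min_frames:                         # sustained displacement counts as one nod
                    nods += 1
                run = 0
        if run >= min_frames:
            nods += 1
        return nods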

The image understanding and intelligent judgment described above are all carried out on the basis of the video image of the driver's-seat perspective view.

Whether drowsy driving is occurring is also judged from the road environment and the driving state; the vehicle travel-direction detection module in Figure 2 mainly implements the following detection. The first task is to monitor the direction of travel, chiefly by detecting whether the driver has lost the sense of direction and the vehicle is swaying from side to side on the road. In this invention the detection is performed on the environment in the vehicle's forward direction acquired by the omnidirectional vision sensor; the reference for detection is the white line on the road or the green belt at the road edge, and the video image is used to check whether the vehicle's heading deviates from this white line or green belt, as shown in Figure 15. A line parallel to the reference line (the white line or edge line) is drawn through the vehicle's center point, and by checking whether the vehicle's running trajectory deviates to the left or right of this parallel line it can be determined whether the vehicle is swaying on the road. A deviation threshold is set in this invention; when a point of the vehicle's running trajectory exceeds this threshold, the system checks which deviation state the vehicle is currently in and then decides from the previous deviation state whether side-to-side swaying has occurred: if the previous deviation state was a positive deviation and the currently detected state is a negative deviation, or no deviation, one deviation event is considered to have occurred and its time and magnitude are recorded. The program judges from the time and magnitude of each deviation whether drowsy or dangerous driving is present.
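The deviation-event bookkeeping can be sketched as follows, assuming a signed lateral offset from the reference line has already been measured per frame; the threshold value and the sign-change rule are illustrative simplifications of the text.

    def deviation_events(lateral_offsets, dev_thresh=0.5):
        """Record (frame, offset) each time the signed offset exceeds the threshold with a new sign."""
        events, last_sign = [], 0
        for frame, offset in enumerate(lateral_offsets):      # signed distance to the reference line
            if abs(offset) <= dev_thresh:
                continue
            sign = 1 if offset > 0 else -1
            if sign != last_sign:                             # previous deviation had the other sign (or none)
                events.append((frame, offset))                # one deviation: record its time and magnitude
            last_sign = sign
        return events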

The state of the driver's steering-wheel operation must also be detected; detecting the steering-wheel state is mainly a matter of detecting whether the driver shows slow reactions, slow judgment, stiff movements, a slow rhythm, and similar symptoms. In this invention the detection of steering-wheel operation is carried out mainly in the steering-wheel action detection module of Figure 2, and it also uses some results from the vehicle travel-direction detection module. Although this is the detection of an action state, as shown in Figure 1, it is also a subsequent action that reflects an abnormal or failed perception or judgment state. This driving-action detection is closely related to the road conditions and the surrounding vehicles: if a curve, a lane change, or a fork (or a traffic sign) is detected on the road ahead, or a vehicle ahead enters the driving lane, and no steering-wheel operation follows, the action response can be considered abnormal or failed; this situation is extremely dangerous and is the result of the above signs of drowsy driving showing up in the driver's driving actions. Detecting the steering-wheel state is closely tied to monitoring the direction of vehicle travel, since it reflects the driver's action response to the road conditions.

Figure 12 is the overall flow chart of safe-driving detection in the omnidirectional-computer-vision-based safe driving assistance device. After the driver connects the power supply and starts the vehicle, the omnidirectional vision sensor acquires the omnidirectional video image inside and outside the vehicle; according to the three detection regions defined by the user, the software then unfolds it into three perspective views: the driver's-seat view, the steering-wheel view, and the road-environment view. First, the road environment ahead is recognized from the road-environment perspective view and the system checks whether the vehicle has departed from its driving line; if a departure is found, the driver is alerted through the output device and asked to correct the direction of travel. Next, the system checks whether there are signs on the road; if a sign is detected, the driver's attention is drawn to it through the output device. Immediately afterwards the system judges whether the driver has operated the steering wheel; if, after driving has gone on for some time, no steering-wheel operation at all is found, drowsy driving is suspected. The above detection obtains its video images from the steering-wheel perspective view. Then, from the driver's-seat perspective view, the driver's mouth, eyes, and the trajectory of the face center are detected, and the driver is judged for drowsy driving according to human fatigue characteristics. If drowsy driving or a tendency toward it is detected, the output device reminds the driver with safe-driving information; the reminder may use sound, light, or a stimulating odor, and for severe drowsy driving forced braking such as automatic emergency braking may also be applied.

Claims (6)

1. A safe driving assistance device based on omnidirectional computer vision, characterized in that the safe driving assistance device comprises an omnidirectional vision sensor for acquiring omnidirectional video information inside and outside the vehicle and a safe-driving assistance controller for detecting various kinds of drowsy driving and raising an alarm when drowsy driving occurs; the omnidirectional vision sensor is mounted to the right of the driver's seat inside the vehicle; the output of the omnidirectional vision sensor is connected to the safe-driving assistance controller; the omnidirectional vision sensor comprises an outward-convex catadioptric mirror for reflecting objects in the field inside and outside the vehicle, a black cone for preventing light refraction and light saturation, a transparent cylinder, and a camera for capturing the image formed on the convex mirror; the convex catadioptric mirror is located above the transparent cylinder and faces downward, the black cone is fixed at the center of the bottom of the convex catadioptric mirror, and the camera faces the convex catadioptric mirror and points upward; the safe-driving assistance controller comprises:
a detection-field segmentation module for dividing the omnidirectional video information obtained from the omnidirectional vision sensor into perspective video images of the view ahead of the vehicle, the driver's-seat view, and the steering-wheel view;
a face localization module for locating the driver's face, the skin color of a human face obeying a two-dimensional Gaussian distribution in the CrCb chrominance space, the probability density function of the skin-color distribution model being expressed by equation (4),
f(x1, x2) = ( 1 / (2π |C|^(1/2)) ) · exp{ -1/2 (x - μ)^T C^(-1) (x - μ) }   (4)
where μ = (C̄r, C̄b)^T = (156.560, 117.436)^T, the two values in this vector being the means of the color components Cr and Cb, and C is the covariance matrix of Cr and Cb, expressed by equation (5),
C = [ σ²Cr  σCrCb ; σCbCr  σ²Cb ] = [ 160.130  12.143 ; 12.143  299.457 ]   (5)
where σ²Cr and σ²Cb are the variances of Cr and Cb and σCrCb, σCbCr are their covariances; according to the Gaussian skin-color model, the similarity between the color of every pixel of the face image and skin color is computed, the similarity being calculated as
where x = (Cr, Cb)^T is the pixel's vector in the Cr-Cb chrominance space, and C and μ take the same values as in equations (4) and (5) above; after the similarity values have been computed, they are normalized into gray values between 0 and 255, giving a gray-scale map of the driver's-seat view image; the gray-scale map is binarized with a set threshold so that the skin-color region becomes pure white and the rest pure black; a horizontal projection of the image gray histogram is used to obtain the top and bottom extrema of the extracted region in the vertical direction, and a vertical projection is used to obtain its left and right extrema in the horizontal direction; letting the face height be h and its width W, the region is confirmed as the face localization image if it satisfies the face aspect-ratio condition according to the face-size constraint;
a lip localization and yawn detection module for locating the driver's lips and detecting yawning, in which the red pixels of the face localization image are projected horizontally and vertically to determine the mouth region, the longest distance between two adjacent troughs of the horizontal projection being the lip length and the largest distance between two adjacent troughs of the vertical projection being the lip width; the lip feature points in the closed and open states are defined in turn as the left and right mouth-corner points, the uppermost point of the upper-lip center, the lowermost point of the upper-lip center, the uppermost point of the lower-lip center, and the lowermost point of the lower-lip center; the ratio of the distance H_m between the uppermost point of the upper-lip center and the lowermost point of the lower-lip center to the mouth length W_m is defined, according to the mouth model, as the mouth-opening parameter Do_m, as shown in equation (32);
a yawn being judged to occur when a large mouth-opening state sustained for a set period of time holds, as shown in equation (33);
one yawn duration being defined as the time from the start of the yawn to the end of the yawn, expressed by equation (35): T_y = t2 - t1   (35), i.e., the time interval during which the degree of mouth opening remains greater than or equal to α; when a yawn is found, the number or duration of yawns over a period of time is counted with equation (36): Total = Σ T_y   (36);
and a drowsy-driving evaluation and alarm module for judging against a preset threshold on the number or duration of yawns within a period of time: if the measured number or duration exceeds the set threshold, drowsy driving is determined and an alarm instruction is issued to the alarm device.
2. The safe driving assistance device based on omnidirectional computer vision according to claim 1, characterized in that the safe-driving assistance controller further comprises:
an eye recognition module for determining the eye feature regions and feature points from the left and right mouth-corner points of the lips and the normalized facial parameters; letting the coordinates of the left mouth corner be (leftx, lefty) and those of the right mouth corner be (rightx, righty), according to the layout of the facial features the mouth length is expressed by equation (31): W_m = sqrt((rightx - leftx) * (rightx - leftx) + (righty - lefty) * (righty - lefty))   (31); the height HighofEye of one eye is computed with equation (37): HighofEye = 0.31 * W_m   (37); the length LengthofEye of one eye is computed with equation (38): LengthofEye = 0.63 * W_m   (38); the starting coordinates (x2, y2) of the one-eye region are computed with equations (39) and (40):
x2 = rightx - 0.1 * LengthofEye   (39)
y2 = righty - 1.35 * W_m   (40)
the eye boundary is determined from the projection of the black pixels in the image, the ordinate of the vertical projection of black pixels being the sum of all pixels judged black in one column of this region (of length N) and the abscissa being the column index (of length M); letting the region size be M*N and the pixel values be Ie(x, y), the projection functions of the black pixels in the vertical and horizontal directions are computed with equations (41) and (42);
in the horizontal projection, the eye height HighofEye is estimated from the mouth length, and the distance W between every two adjacent troughs is examined from the bottom of the eye region upward; when the first W close to the eye height HighofEye is found, the region between those two troughs is taken as the height region of the eye; in the vertical projection, the eye length LengthofEye is estimated from the mouth length and the search starts from the right side of the vertical projection; when the distance L between two adjacent troughs is found to be close to LengthofEye, the region between those two troughs is taken as the length region of the eye;
and a blink detection module, in which the degree of eye opening is computed using the eye-model definition of equation (43):
Do_e = H_e / W_e   (43)
in equation (43) the degree of eye opening is expressed by the aspect ratio of the eye's bounding rectangle, where H_e is the eye-opening height and the eye width W_e is taken as the distance between the two eye-corner points; the blink state is defined as the state in which the degree of eye opening in the video image remains less than or equal to a set threshold for a number of consecutive frames whose accumulated count exceeds a given number of frames, expressed by equation (44);
equation (45) states that a blink is considered to have occurred when the number of frames in which the degree of eye opening remains less than or equal to α_e accumulates to more than β_e frames; the blink duration is defined as the time interval between eye closure and reopening during a blink, expressed by equation (46): T_b = t2 - t1   (46), T_b being the length of time during which the degree of eye opening remains less than or equal to α_e; the blink frequency is defined as the reciprocal of the time interval between the two most recent blinks, the blink frequency being expressed by equation (47); in the drowsy-driving evaluation and alarm module a blink-frequency threshold is set, and if the measured blink frequency exceeds the blink-frequency threshold, drowsy driving is determined and an alarm instruction is issued to the alarm device.
3. The safe driving assistance device based on omnidirectional computer vision according to claim 1 or 2, characterized in that the safe-driving assistance controller further comprises: a vehicle travel-direction detection module for taking, in the view-ahead image, the white line on the road or the green belt at the road edge as the reference line, detecting from the video image whether the vehicle's heading deviates from the white line or green belt, drawing through the vehicle's center point a line parallel to the reference line, and checking whether the vehicle's running trajectory deviates to the left or right of the parallel line; if a point of the running trajectory exceeds the set deviation threshold, the module checks whether the current driving is in a deviated state; if the previous deviation state was a positive deviation and the currently detected deviation state is a negative deviation, or no deviation, one deviation event is considered to have occurred and its time and magnitude are recorded; in the drowsy-driving evaluation and alarm module, a deviation alarm distance is set, and if the current deviation distance exceeds the deviation alarm distance, drowsy or dangerous driving is determined and an alarm instruction is issued to the alarm device.
4. The safe driving assistance device based on omnidirectional computer vision according to claim 3, characterized in that the safe-driving assistance controller further comprises: a steering-wheel operation state detection module for detecting, from the steering-wheel view image, whether there is currently any steering-wheel operation when a curve, a lane change, a fork, or a vehicle ahead entering the driving lane is detected on the road ahead; in the drowsy-driving evaluation and alarm module, if there is currently no steering-wheel operation, dangerous driving is determined and an alarm instruction is issued to the alarm device.
5. The safe driving assistance device based on omnidirectional computer vision according to claim 1 or 2, characterized in that the detection-field segmentation module further comprises: a perspective-view unfolding unit for finding, from a coordinate point P(i, j) of the perspective projection plane, the corresponding point A(X, Y, Z) in the three-dimensional space coordinates, thereby obtaining the transformation between the projection plane and the space coordinates, the transformation being expressed by equation (60):

in the above equation, D is the distance from the perspective projection plane to the focal point Om of the hyperboloid, the angle β is the angle of the projection of the incident ray on the XY plane, the angle γ is the angle between the incident ray and the horizontal plane through the hyperboloid focus, the i axis is the horizontal axis parallel to the XY plane, the j axis is the vertical axis intersecting the i axis and the Om-G axis at right angles, and G is the origin of the perspective projection coordinates; substituting the point A(X, Y, Z) obtained from equation (60) into equations (57) and (58) yields the point P(x, y) on the imaging plane corresponding to the coordinate point P(i, j) of the perspective projection plane:

in the above equations, c denotes the focus of the hyperbolic mirror, a and b are respectively the lengths of the real axis and the imaginary axis of the hyperbolic mirror, and f denotes the distance from the imaging plane to the virtual focus of the hyperbolic mirror.
6. The safe driving assistance device based on omnidirectional computer vision according to claim 1 or 2, characterized in that the catadioptric mirror is designed as a hyperbolic mirror, the optical system formed by the hyperbolic mirror being expressed by the following five equations;

where X, Y, Z denote the space coordinates, c denotes the focus of the hyperbolic mirror, 2c denotes the distance between the two foci, a and b are respectively the lengths of the real axis and the imaginary axis of the hyperbolic mirror, β denotes the angle of the incident ray in the XY plane, i.e., the azimuth angle, α denotes the angle of the incident ray in the XZ plane, i.e., the depression angle, γ is the angle between the incident ray and the horizontal plane through the hyperboloid focus, and f denotes the distance from the imaging plane to the virtual focus of the hyperbolic mirror.
CN 200710067633 2007-03-21 2007-03-21 Safe driving auxiliary device based on omnidirectional computer vision CN100462047C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200710067633 CN100462047C (en) 2007-03-21 2007-03-21 Safe driving auxiliary device based on omnidirectional computer vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 200710067633 CN100462047C (en) 2007-03-21 2007-03-21 Safe driving auxiliary device based on omnidirectional computer vision

Publications (2)

Publication Number Publication Date
CN101032405A CN101032405A (en) 2007-09-12
CN100462047C true CN100462047C (en) 2009-02-18

Family

ID=38729287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200710067633 CN100462047C (en) 2007-03-21 2007-03-21 Safe driving auxiliary device based on omnidirectional computer vision

Country Status (1)

Country Link
CN (1) CN100462047C (en)

Families Citing this family (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4372804B2 (en) 2007-05-09 2009-11-25 トヨタ自動車株式会社 Image processing apparatus
CN101802885B (en) * 2007-09-17 2016-06-29 沃尔沃技术公司 A method for communicating a deviation of vehicle parameters
US8570176B2 (en) * 2008-05-28 2013-10-29 7352867 Canada Inc. Method and device for the detection of microsleep events
CN101732055B (en) 2009-02-11 2012-04-18 北京智安邦科技有限公司 Method and system for testing fatigue of driver
JP5444898B2 (en) * 2009-07-09 2014-03-19 アイシン精機株式会社 State detecting device, state detecting method and program
CN102696061B (en) * 2009-09-30 2015-01-07 本田技研工业株式会社 Driver state assessment device
JP5593695B2 (en) * 2009-12-28 2014-09-24 ソニー株式会社 Image processing apparatus, image processing method, and program
CN102088471A (en) * 2010-03-16 2011-06-08 上海海事大学 Health and safety monitoring system for personnel on board based on wireless sensor network
CN102097003B (en) * 2010-12-31 2014-03-19 北京星河易达科技有限公司 Intelligent traffic safety system and terminal
CN102184388A (en) * 2011-05-16 2011-09-14 苏州两江科技有限公司 Face and vehicle adaptive rapid detection system and detection method
EP2564777B1 (en) * 2011-09-02 2017-06-07 Volvo Car Corporation Method for classification of eye closures
CN102413282B (en) * 2011-10-26 2015-02-18 惠州Tcl移动通信有限公司 Self-shooting guidance method and equipment
CN102723003B (en) * 2012-05-28 2015-01-07 华为终端有限公司 Reading state reminding method and mobile terminal
CN103198616B (en) * 2013-03-20 2015-10-28 重庆大学 Fatigue driving detection method and system wherein movement of the driver's head and neck Recognition
TWI506564B (en) * 2013-05-29 2015-11-01
CN104464192B (en) * 2013-09-18 2017-02-08 武汉理工大学 Vehicle driver driving state detecting instability warning device and method
KR101386823B1 (en) * 2013-10-29 2014-04-17 김재철 2 level drowsy driving prevention apparatus through motion, face, eye,and mouth recognition
KR20150061943A (en) * 2013-11-28 2015-06-05 현대모비스 주식회사 Device for detecting the status of the driver and method thereof
US9971411B2 (en) 2013-12-10 2018-05-15 Htc Corporation Method, interactive device, and computer readable medium storing corresponding instructions for recognizing user behavior without user touching on input portion of display screen
CN103617421A (en) * 2013-12-17 2014-03-05 上海电机学院 Fatigue detecting method and system based on comprehensive video feature analysis
CN103824421B (en) * 2014-03-26 2016-11-16 重庆长安汽车股份有限公司 Active safety warning system to detect driver fatigue
KR101589427B1 (en) * 2014-04-04 2016-01-27 현대자동차 주식회사 Apparatus and method for controlling vehicle drive based on driver fatigue
CN103902986B (en) * 2014-04-17 2017-04-26 拓维信息系统股份有限公司 A method of positioning reference organ insect antennae function is implemented in mobile games in Facebook
CN104013414B (en) * 2014-04-30 2015-12-30 深圳佑驾创新科技有限公司 Driver fatigue detection system based on mobile smartphone
CN103983239B (en) * 2014-05-21 2016-02-10 南京航空航天大学 Ranging method based on the lane line width
DE102014220759B4 (en) * 2014-10-14 2019-06-19 Audi Ag Monitoring a degree of attention of a driver of a vehicle
CN104408878B (en) * 2014-11-05 2017-01-25 唐郁文 One kind of team driver fatigue warning monitoring system and method
US9586618B2 (en) * 2015-03-16 2017-03-07 Thunder Power Hong Kong Ltd. Vehicle control system for controlling steering of vehicle
US9866163B2 (en) 2015-03-16 2018-01-09 Thunder Power New Energy Vehicle Development Company Limited Method for controlling operating speed and torque of electric motor
CN105589459A (en) * 2015-05-19 2016-05-18 中国人民解放军国防科学技术大学 Unmanned vehicle semi-autonomous remote control method
CN105069431B (en) * 2015-08-07 2018-09-14 成都明图通科技有限公司 Face positioning method and apparatus
CN105303830A (en) * 2015-09-15 2016-02-03 成都通甲优博科技有限责任公司 Driving behavior analysis system and analysis method
CN105469467A (en) * 2015-12-04 2016-04-06 北海创思电子科技产业有限公司 EDR (event data recorder) capable of monitoring fatigue driving
CN106904169A (en) * 2015-12-17 2017-06-30 北京奇虎科技有限公司 Driving safety early warning method and device
CN105718872A (en) * 2016-01-15 2016-06-29 武汉光庭科技有限公司 Auxiliary method and system for rapid positioning of two-side lanes and detection of deflection angle of vehicle
CN205440259U (en) * 2016-03-25 2016-08-10 深圳市兼明科技有限公司 Safe?driving device based on iris discernment
CN105788176B (en) * 2016-05-25 2018-01-26 厦门理工学院 Reminded fatigue driving monitoring method and system
CN106249877A (en) * 2016-07-18 2016-12-21 广东欧珀移动通信有限公司 Control method and control device
CN106236046A (en) * 2016-09-05 2016-12-21 合肥飞鸟信息技术有限公司 Driver fatigue monitoring system
CN106571015A (en) * 2016-09-09 2017-04-19 武汉依迅电子信息技术有限公司 Driving behavior data collection method based on Internet
CN106448265A (en) * 2016-10-27 2017-02-22 广州微牌智能科技有限公司 Collecting method and device of driver's driving behavior data
CN108475121A (en) * 2016-11-26 2018-08-31 华为技术有限公司 Information prompt method and terminal
CN106817364A (en) * 2016-12-29 2017-06-09 北京神州绿盟信息安全科技股份有限公司 Brutal-force crack detection method and apparatus
CN107161152A (en) * 2017-05-24 2017-09-15 成都志博科技有限公司 Driver detecting system for lane skewing monitoring
CN107358151A (en) * 2017-06-02 2017-11-17 广州视源电子科技股份有限公司 Eye movement detection method and device and living body recognition method and system
CN108128241A (en) * 2017-12-25 2018-06-08 芜湖皖江知识产权运营中心有限公司 Fatigue driving recognition and control system applied to intelligent vehicles
CN108099915A (en) * 2017-12-25 2018-06-01 芜湖皖江知识产权运营中心有限公司 Fatigue driving identification control system applicable to intelligent vehicles
CN108162754A (en) * 2017-12-25 2018-06-15 芜湖皖江知识产权运营中心有限公司 Vehicle voice input control system for intelligent travel

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6927694B1 (en) * 2001-08-20 2005-08-09 Research Foundation Of The University Of Central Florida Algorithm for monitoring head/eye motion for driver alertness with one camera
CN1613425A (en) * 2004-09-15 2005-05-11 南京大学 Biometric identification method and system for driver fatigue pre-warning
CN1878297A (en) * 2005-06-07 2006-12-13 浙江工业大学 Omnibearing vision device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Lip segmentation and extraction based on 24-bit color face images. Lu Jixiang et al. Computer Engineering, Vol. 29, No. 2, 2003 *
Research on a machine-vision-based comprehensive driving safety assurance system. Wang Rongben et al. Journal of Shandong Jiaotong University, Vol. 14, No. 2, 2006 *

Also Published As

Publication number Publication date
CN101032405A (en) 2007-09-12

Similar Documents

Publication Publication Date Title
Bobick et al. The recognition of human movement using temporal templates
Horng et al. Driver fatigue detection based on eye tracking and dynamic template matching
US8902070B2 (en) Eye closure detection using structured illumination
US7460940B2 (en) Method and arrangement for interpreting a subject's head and eye activity
JP4966816B2 (en) Gaze direction measuring method and gaze direction measuring device
CN101344919B (en) Sight tracking method and disabled-assisting system using the same
JP4876687B2 (en) Attention measuring apparatus and attention measurement system
US9412011B2 (en) Warning a user about adverse behaviors of others within an environment based on a 3D captured image stream
CN104598882B (en) Method and system for detecting electronic spoofing in biometric verification
NL1016006C2 (en) Method and apparatus for detecting eyes and body of a speaking person.
US6154559A (en) System for classifying an individual's gaze direction
US9405982B2 (en) Driver gaze detection system
US20040104702A1 (en) Robot audiovisual system
CN101466305B (en) Method for determining and analyzing a location of visual interest
JP4723582B2 (en) Traffic sign detection method
EP1748387B1 (en) Devices for classifying the arousal state of the eyes of a driver, corresponding method and computer readable storage medium
Singh et al. Monitoring driver fatigue using facial analysis techniques
CN102254151B (en) Driver fatigue detection method based on face video analysis
US20060187305A1 (en) Digital processing of video images
Vishwakarma et al. Automatic detection of human fall in video
JP4898026B2 (en) Face and line-of-sight recognition device using a stereo camera
US7792328B2 (en) Warning a vehicle operator of unsafe operation behavior based on a 3D captured image stream
WO2009111498A2 (en) Object matching for tracking, indexing, and search
CN101639894B (en) Method and system for online detection of train driver behavior and fatigue state
JP2006527443A (en) Estimation of object orientation using depth detection

Legal Events

Date Code Title Description
C06 Publication
C10 Entry into substantive examination
C14 Grant of patent or utility model
C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 310014 COLLEGE OF INFORMATION ENGINEERING, ZHEJIANG UNIVERSITY OF TECHNOLOGY, AREA 6, ZHAOHUI, XIACHENG DISTRICT, HANGZHOU CITY, ZHEJIANG PROVINCE TO: 310012 6/F, BUILDING 18, NO. 176 (WESTLAKE SOYEA SOFTWARE PARK), TIANMUSHAN ROAD, HANGZHOU CITY, ZHEJIANG PROVINCE

ASS Succession or assignment of patent right

Owner name: HAKIM INFORMATION TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: TANG YIPING

Effective date: 20110420

CP03