CN104200199A - TOF (Time of Flight) camera based bad driving behavior detection method - Google Patents
- Publication number: CN104200199A (application number CN201410428258.1A)
- Authority: CN (China)
- Legal status: Granted
Abstract
The invention discloses a TOF-camera-based method for detecting bad driving behavior, characterized by the following steps: step 1, obtaining a depth image; step 2, detecting and matching the head region; step 3, detecting and tracking the hand region. The invention can accurately and effectively detect a driver's bad driving behavior without interfering with normal driving, improving recognition accuracy and thereby reducing the accident rate.
Description
Technical Field
The invention relates to a motion recognition and safety-assisted driving technology based on a TOF camera, and belongs to the field of intelligent traffic safety based on three-dimensional vision.
Background Art
With the continuous advance of urbanization and motorization in China, road traffic safety has become an increasingly serious problem. Analysis of the causes of road traffic accidents in China in recent years shows that about 90% of traffic accidents are caused by drivers' bad driving behaviors. Research on driving behavior, and especially on how to intervene against unsafe driving behavior, has therefore become the core issue in ensuring road traffic safety.
Current research on driving behavior at home and abroad mainly uses 2D cameras to capture video images of the driver, detects external driver states such as eyelid movement, eye closure, nodding, and mouth movement, and develops alarm and monitoring equipment to rouse the driver. However, 2D cameras are easily affected by illumination, shadows, and skin color, and distance information is lost when a 3D scene is projected onto a 2D plane, so driver behavior recognition with a 2D camera suffers from information loss, inaccurate detection, and a low recognition rate.
Summary of the Invention
To overcome the deficiencies of the prior art, the present invention proposes a TOF-camera-based method for detecting bad driving behavior that can effectively detect a driver's bad driving behavior without interfering with normal driving, improving recognition accuracy and thereby reducing the accident rate.
The present invention adopts the following technical scheme to solve the technical problem:
The TOF-camera-based bad driving behavior detection method of the present invention is characterized by the following steps:
Step 1. Obtain the depth image:
Use the TOF camera to acquire depth images of the driver's driving behavior over the time period T = (t_1, t_2, ..., t_v, ..., t_m); take the driving-behavior depth image at time t_1 as the initial depth image, and the driving-behavior depth images from time t_2 to time t_m as the sequence depth images;
Step 2. Detection and matching of the head region:
2.1. Use the AdaBoost algorithm to detect the face region in the initial depth image, mark the detected face region with a head rectangle, and obtain the center position of the head rectangle;
2.2. Expand the marked head rectangle outward by A% to obtain the expanded head rectangle;
2.3. Form a reference three-dimensional point cloud from the pixels of the face region in the head rectangle, and a to-be-matched three-dimensional point cloud from the pixels of the face region in the expanded head rectangle. Use the ICP algorithm to register the to-be-matched point cloud against the reference point cloud and obtain the final iteration count of the ICP algorithm. Compare the final iteration count with a set threshold: if the final iteration count ≥ the set threshold, judge the behavior as bad driving; otherwise judge it as normal driving;
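The decision in step 2.3 needs only the geometry around the head rectangle and the iteration count returned by ICP. The Python sketch below shows one way these pieces could fit together; the symmetric box growth, the pinhole intrinsics fx, fy, cx, cy, and the helper names are assumptions not fixed by the patent (A = 10 and the threshold 50 come from the embodiment described later).

```python
import numpy as np

def expand_box(box, a=10):
    # Grow a (x, y, w, h) rectangle by a percent while keeping its center;
    # the patent does not fix the exact geometry, so symmetric growth is assumed.
    x, y, w, h = box
    dw, dh = w * a / 100.0, h * a / 100.0
    return (x - dw / 2, y - dh / 2, w + dw, h + dh)

def box_to_point_cloud(depth, box, fx, fy, cx, cy):
    # Back-project the depth pixels inside `box` to a 3-D point cloud with a
    # pinhole model; fx, fy, cx, cy are assumed TOF intrinsics (not given here).
    x, y, w, h = [int(round(v)) for v in box]
    z = depth[y:y + h, x:x + w].astype(np.float64)
    us, vs = np.meshgrid(np.arange(x, x + w), np.arange(y, y + h))
    pts = np.stack([(us - cx) * z / fx, (vs - cy) * z / fy, z], axis=-1)
    return pts[z > 0]  # drop invalid (zero-depth) pixels

def head_pose_verdict(final_icp_iterations, threshold=50):
    # Step 2.3: a high final iteration count means the two clouds matched
    # poorly, i.e. the head left the reference pose, hence bad driving.
    return "bad" if final_icp_iterations >= threshold else "normal"
```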
Step 3. Detection and tracking of the hand region:
3.1. Mark the driver's entire upper body in the initial depth image to obtain an upper-body rectangle;
3.2. Take the region with the smallest depth value inside the upper-body rectangle as the hand region, and mark it with a circumscribed rectangle whose width equals the width of the head rectangle and whose length is twice the length of the head rectangle;
3.3. Use the Kalman filter algorithm to track the hand region inside the circumscribed rectangle through the sequence depth images, obtaining the center position of the circumscribed rectangle in each sequence depth image;
3.4. Obtain the Euclidean distance between the center position of the circumscribed rectangle and the center position of the head rectangle;
3.5. If the Euclidean distance ≥ the set distance threshold, judge the behavior as bad driving; otherwise judge it as normal driving.
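Steps 3.2 and 3.5 reduce to two small geometric rules. A minimal sketch, assuming boxes are (x, y, w, h) tuples and the two centers are expressed in the same metric coordinates (the embodiment's 40 cm threshold implies metric units):

```python
import numpy as np

def hand_bounding_box(hand_center, head_box):
    # Step 3.2 sizing rule: reuse the head rectangle's width and twice its
    # length, centered on the detected hand region.
    hx, hy = hand_center
    _, _, w, h = head_box
    return (hx - w / 2, hy - h, w, 2 * h)

def hand_verdict(hand_box_center, head_box_center, dist_threshold=40.0):
    # Step 3.5: a hand that strays far from the head rectangle's center
    # (>= 40 cm in the embodiment) indicates bad driving behavior.
    d = np.linalg.norm(np.asarray(hand_box_center, float) -
                       np.asarray(head_box_center, float))
    return "bad" if d >= dist_threshold else "normal"
```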
The TOF-camera-based bad driving behavior detection method of the present invention is also characterized in that:
The AdaBoost algorithm in step 2.1 proceeds as follows:
Step a. Define the sequence depth images as the training samples I = {(x_1, y_1), (x_2, y_2), ..., (x_i, y_i), ..., (x_N, y_N)}, 1 ≤ i ≤ N, where x_i denotes the i-th training sample and y_i ∈ {0, 1}, with y_i = 0 denoting a positive sample and y_i = 1 a negative sample;
Step b. Extract Haar features from each training sample in I to obtain the Haar feature set F = {f_1, f_2, ..., f_j, ..., f_M}, where f_j denotes the j-th Haar feature, 1 ≤ j ≤ M;
Step c. Use formula (1) to obtain the classifier h_j(x_i) for the j-th Haar feature f_j of the i-th training sample:
h_j(x_i) = 1, if p_j·f_j(x_i) < p_j·θ;  h_j(x_i) = 0, otherwise   (1)
In formula (1), p_j is the direction parameter of the classifier h_j(x_i), p_j = ±1, and θ is a threshold;
Step d. Repeat step c to obtain the set of M classifiers H = {h_1(x_i), h_2(x_i), ..., h_j(x_i), ..., h_M(x_i)};
Step e. Use formula (2) to obtain the weighted classification error ε_j of the j-th classifier h_j(x_i):
ε_j = Σ_{i=1}^{N} ω_i·|h_j(x_i) − y_i|   (2)
In formula (2), ω_i denotes the normalized weight of the i-th training sample;
Step f. Repeat step e to obtain the set of M weighted classification errors E = {ε_1, ε_2, ..., ε_j, ..., ε_M};
Step g. Select the classifier with the smallest weighted classification error in E, and use the Haar feature corresponding to that classifier to detect the face region in the initial depth image.
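A minimal Python sketch of steps a–g, assuming the Haar feature values have already been evaluated into an (N, M) matrix X (one row per sample) and that candidate thresholds θ are drawn from the observed feature values; the patent fixes neither of these details:

```python
import numpy as np

def select_weak_classifier(X, y, w):
    # X: (N, M) Haar feature values; y in {0, 1} with 0 = positive, as in
    # the patent's convention; w: normalized sample weights (step e).
    N, M = X.shape
    best = (None, 1, 0.0, np.inf)          # (feature j, direction p, theta, error)
    for j in range(M):                     # one weak classifier per Haar feature
        f = X[:, j]
        for theta in np.unique(f):         # candidate thresholds (an assumption)
            for p in (+1, -1):             # direction parameter p_j = +/-1
                h = (p * f < p * theta).astype(int)     # formula (1)
                eps = float(np.sum(w * np.abs(h - y)))  # formula (2)
                if eps < best[3]:
                    best = (j, p, theta, eps)
    return best                            # step g: smallest weighted error wins
```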
The ICP algorithm in step 2.3 proceeds as follows:
Step a. Define the three-dimensional space as R^3;
define the initial coordinates of the pixels in the to-be-matched point cloud as {P_i | P_i ∈ R^3, i = 1, ..., N_P}, where N_P denotes the total number of pixels in the to-be-matched point cloud;
define the initial coordinates of the pixels in the reference point cloud as {Q_i | Q_i ∈ R^3, i = 1, ..., N_Q}, where N_Q denotes the total number of pixels in the reference point cloud;
Step b. Define the total number of iterations as W;
at the k-th iteration, k ∈ {1, ..., W}, for each point of the reference point cloud {Q_i | Q_i ∈ R^3, i = 1, ..., N_Q}, find the nearest point among the coordinates {P_i^k} of the to-be-matched point cloud; these nearest points constitute the reference point cloud coordinates for this iteration, denoted {Q_i^k};
Step c. At the (k+1)-th iteration, use formulas (3) and (4) to obtain the coordinates {P_i^{k+1}} of the pixels in the to-be-matched point cloud and the coordinates {Q_i^{k+1}} of the pixels in the reference point cloud:
P_i^{k+1} = R_O^{k+1}·P_i^k + t^{k+1}   (3)
Q_i^{k+1} = R_O^{k+1}·Q_i^k + t^{k+1}   (4)
In formulas (3) and (4), R_O^{k+1} denotes the three-dimensional rotation matrix and t^{k+1} the translation vector;
Step d. Use formula (5) to obtain the Euclidean distance d_{k+1} between the coordinates {P_i^{k+1}} of the to-be-matched point cloud at the (k+1)-th iteration and the coordinates {Q_i^k} of the reference point cloud at the k-th iteration:
d_{k+1} = (1/N_P)·Σ_{i=1}^{N_P} ||P_i^{k+1} − Q_i^k||   (5)
Step e. Repeat steps b through d; when the Euclidean distance d_{k+1} reaches its minimum, stop iterating and record the final iteration count W'.
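A runnable sketch of the loop in steps a–e, assuming NumPy and SciPy; correspondences are found with a KD-tree rather than the brute-force nearest-point search implied by the text, and an SVD solver stands in for the quaternion construction of formulas (11)–(19), which is sketched separately below:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(P, Q):
    # Least-squares rigid fit mapping P onto Q. An SVD (Kabsch) solution is
    # used here for brevity; the patent builds R from a quaternion instead.
    muP, muQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - muP).T @ (Q - muQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, muQ - R @ muP

def icp_final_iterations(P, Q, max_iter=200, tol=1e-6):
    # P: (N_P, 3) to-be-matched cloud, Q: (N_Q, 3) reference cloud. Returns the
    # final iteration count W', compared against a threshold (50) in step 2.3.
    P_k = P.copy()
    tree = cKDTree(Q)
    prev_d = np.inf
    for k in range(1, max_iter + 1):
        _, idx = tree.query(P_k)                 # step b: nearest neighbours
        Q_k = Q[idx]
        R, t = best_rigid_transform(P_k, Q_k)    # step c: rigid update
        P_k = P_k @ R.T + t
        d = np.mean(np.linalg.norm(P_k - Q_k, axis=1))  # step d: formula (5)
        if prev_d - d < tol:                     # step e: error stops decreasing
            return k
        prev_d = d
    return max_iter
```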
The Kalman filter algorithm in step 3.3 proceeds as follows:
1) Let the hand state at time t_2 be the initial state X_{t_2};
2) Use formula (6) to obtain the predicted hand state X̂_{t_v} in the circumscribed rectangle of the sequence depth image at time t_v:
X̂_{t_v} = A·X_{t_{v-1}}   (6)
In formula (6), A denotes the hand state transition matrix and X_{t_{v-1}} the hand state at time t_{v-1};
3) Use formula (7) to obtain the covariance matrix P_{t_{v-1}} of the hand state X_{t_{v-1}} in the circumscribed rectangle at time t_{v-1}:
P_{t_{v-1}} = E[(X_{t_{v-1}} − X̂_{t_{v-1}})·(X_{t_{v-1}} − X̂_{t_{v-1}})^T]   (7)
In formula (7), X̂_{t_{v-1}} denotes the predicted hand state at time t_{v-1} in the circumscribed rectangle, obtained from formula (6) with the hand state X_{t_{v-2}} at time t_{v-2};
4) Use formula (8) to obtain the covariance matrix P̂_{t_v} of the predicted hand state X̂_{t_v} at time t_v:
P̂_{t_v} = A·P_{t_{v-1}}·A^T + Q_k   (8)
In formula (8), A^T is the transpose of the hand state transition matrix A, and Q_k denotes the dynamic noise covariance matrix, which obeys the standard normal distribution Q_k ~ N(0, 1);
5) Use formula (9) to update the hand state X_{t_v} in the circumscribed rectangle at time t_v:
X_{t_v} = X̂_{t_v} + K_{t_v}·(Z_{t_v} − H·X̂_{t_v})   (9)
In formula (9), Z_{t_v} denotes the observed hand position at time t_v, H the hand state observation matrix, and K_{t_v} the Kalman filter gain coefficient from formula (10);
6) Use formula (10) to calculate the Kalman filter gain coefficient K_{t_v}:
K_{t_v} = P̂_{t_v}·H^T·(H·P̂_{t_v}·H^T + R_k)^{-1}   (10)
In formula (10), H^T is the transpose of the hand state observation matrix H, and R_k denotes the noise covariance matrix, which obeys the standard normal distribution R_k ~ N(0, 1);
7) Repeat steps 1) through 6), continuously updating the hand state X_{t_v} in the circumscribed rectangle at time t_v, thereby tracking the hand region within the circumscribed rectangle and obtaining from it the center position of the circumscribed rectangle in the sequence depth images.
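A self-contained sketch of steps 1)–7) as a constant-velocity tracker in Python; the patent leaves the transition and observation matrices as symbols, so the concrete A and H below (state X = [x, y, vx, vy], position-only measurements) and the noise magnitudes are assumptions:

```python
import numpy as np

class HandKalmanTracker:
    def __init__(self, x0, y0, dt=1.0):
        self.X = np.array([x0, y0, 0.0, 0.0])   # step 1): initial state at t_2
        self.P = np.eye(4)                      # initial state covariance
        self.A = np.array([[1, 0, dt, 0],       # transition matrix A (assumed)
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], float)
        self.H = np.array([[1, 0, 0, 0],        # observation matrix H (assumed)
                           [0, 1, 0, 0]], float)
        self.Qk = np.eye(4) * 1e-2              # dynamic noise covariance
        self.Rk = np.eye(2) * 1.0               # measurement noise covariance

    def step(self, z):
        # z: measured hand-box center (x, y) in the current depth frame.
        X_pred = self.A @ self.X                               # formula (6)
        P_pred = self.A @ self.P @ self.A.T + self.Qk          # formula (8)
        S = self.H @ P_pred @ self.H.T + self.Rk
        K = P_pred @ self.H.T @ np.linalg.inv(S)               # formula (10)
        self.X = X_pred + K @ (np.asarray(z, float) - self.H @ X_pred)  # formula (9)
        self.P = (np.eye(4) - K @ self.H) @ P_pred
        return self.X[:2]   # tracked center of the circumscribed rectangle
```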
The three-dimensional rotation matrix R_O^{k+1} and translation vector t^{k+1} in step c are obtained as follows:
1) Use formulas (11) and (12) to obtain the centroid μ_P of the to-be-matched point cloud coordinates {P_i^k} and the centroid μ_Q of the reference point cloud coordinates {Q_i^k} at the (k+1)-th iteration:
μ_P = (1/N_P)·Σ_{i=1}^{N_P} P_i^k   (11)
μ_Q = (1/N_Q)·Σ_{i=1}^{N_Q} Q_i^k   (12)
2) Use formulas (13) and (14) to obtain the translation P'_i of the to-be-matched point cloud coordinates relative to the centroid μ_P, and the translation Q'_i of the reference point cloud coordinates relative to the centroid μ_Q:
P'_i = P_i^k − μ_P   (13)
Q'_i = Q_i^k − μ_Q   (14)
3) Use formula (15) to obtain the correlation matrix K_{αβ} between the translations P'_i and Q'_i:
K_{αβ} = (1/N_P)·Σ_{i=1}^{N_P} P'_{iα}·Q'_{iβ}   (15)
In formula (15), α, β = 1, 2, 3;
4) Use formula (16) to obtain the four-dimensional symmetric matrix K constructed from the correlation matrix K_{αβ}:
K = [ K_11+K_22+K_33   K_23−K_32        K_31−K_13        K_12−K_21
      K_23−K_32        K_11−K_22−K_33   K_12+K_21        K_31+K_13
      K_31−K_13        K_12+K_21        K_22−K_11−K_33   K_23+K_32
      K_12−K_21        K_31+K_13        K_23+K_32        K_33−K_11−K_22 ]   (16)
5) Obtain the largest eigenvalue of the four-dimensional symmetric matrix K, and from it the unit eigenvector q = [q_0, q_1, q_2, q_3]^T;
6) Use formula (17) to obtain the antisymmetric matrix K(q):
K(q) = [  0    −q_3   q_2
          q_3   0    −q_1
         −q_2   q_1   0  ]   (17)
7) Use formula (18) to obtain the three-dimensional rotation matrix R_O^{k+1}:
R_O^{k+1} = (q_0² − q_v^T·q_v)·I_3 + 2·q_v·q_v^T + 2·q_0·K(q)   (18)
where q_v = [q_1, q_2, q_3]^T and I_3 is the 3×3 identity matrix;
8) Use formula (19) to obtain the translation vector t^{k+1}:
t^{k+1} = μ_Q − R_O^{k+1}·μ_P   (19)
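Steps 1)–8) are the unit-quaternion rigid-registration construction (Horn's method). A sketch in Python, assuming the two clouds have already been put into one-to-one correspondence (row i of P matched with row i of Q):

```python
import numpy as np

def rigid_transform_quaternion(P, Q):
    # P, Q: (N, 3) matched point sets. Returns R, t with Q ~= R @ P + t.
    muP, muQ = P.mean(axis=0), Q.mean(axis=0)          # formulas (11), (12)
    Pp, Qp = P - muP, Q - muQ                          # formulas (13), (14)
    S = Pp.T @ Qp / len(P)                             # correlation matrix (15)
    # four-dimensional symmetric matrix, formula (16)
    tr = np.trace(S)
    delta = np.array([S[1, 2] - S[2, 1],
                      S[2, 0] - S[0, 2],
                      S[0, 1] - S[1, 0]])
    K = np.empty((4, 4))
    K[0, 0] = tr
    K[0, 1:] = K[1:, 0] = delta
    K[1:, 1:] = S + S.T - tr * np.eye(3)
    # unit eigenvector of the largest eigenvalue, step 5)
    w, V = np.linalg.eigh(K)
    q = V[:, np.argmax(w)]
    q0, qv = q[0], q[1:]
    # antisymmetric matrix K(q), formula (17)
    Kq = np.array([[0.0, -qv[2], qv[1]],
                   [qv[2], 0.0, -qv[0]],
                   [-qv[1], qv[0], 0.0]])
    # rotation, formula (18), and translation, formula (19)
    R = (q0**2 - qv @ qv) * np.eye(3) + 2.0 * np.outer(qv, qv) + 2.0 * q0 * Kq
    t = muQ - R @ muP
    return R, t
```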
Compared with the prior art, the beneficial effects of the present invention are:
1. The present invention realizes driver head detection and hand-state tracking with the AdaBoost algorithm and the Kalman filter, and uses the ICP algorithm to register the three-dimensional point clouds of the current depth map and the reference depth map. This overcomes the susceptibility of 2D cameras to illumination, shadows, skin color, and the color of the driver's clothing, thereby improving the detection accuracy of bad driving behavior and reducing the accident rate;
2. The driver recognition method of the present invention uses TOF distance information; because the TOF camera acquires depth images quickly, with a frame rate of up to 40 fps, it offers good real-time performance;
3. The AdaBoost algorithm adopted by the present invention is a high-precision classifier, so detection errors caused by driver head movement do not strongly affect subsequent detection results; and because the algorithm's framework is simple and requires no feature screening, the detection speed problem is solved and a good recognition effect is achieved;
4. The ICP algorithm adopted by the present invention requires no segmentation or feature extraction of the point sets to be processed, and given an accurate initial position it converges well, yielding very precise registration of three-dimensional point clouds;
5. The Kalman filter algorithm adopted by the present invention takes the minimum mean square error as its optimality criterion; it is mathematically simple, is an optimal linear recursive filtering method, can effectively handle target tracking over consecutive images of the irregularly shaped hand, and has a small computational cost.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the detection system of the present invention;
Fig. 2 is a flow chart of the detection method of the present invention;
Reference numerals: 1 head rectangle; 2 expanded head rectangle; 3 upper-body rectangle; 4 circumscribed rectangle; 5 TOF camera.
Detailed Description of the Embodiments
In this embodiment, a TOF-camera-based bad driving behavior detection method acquires the driver's depth image in real time through a TOF camera mounted obliquely above the driver in the vehicle. Using a virtual face rectangle, the ICP (Iterative Closest Point) algorithm registers the current-frame depth image against the reference image, and the iteration count determines whether bad driving behavior has occurred; background subtraction detects the hand, which is then tracked, and the positions of the hand and of the virtual face rectangle determine whether bad driving behavior has occurred. The specific steps are as follows:
Step 1. Obtain the depth image:
Mount the TOF camera obliquely above the driver in the cab, connected to an alarm-capable microprocessor mounted on the vehicle console. Use the TOF camera to acquire depth images of the driver's driving behavior over the time period T = (t_1, t_2, ..., t_v, ..., t_m); take the driving-behavior depth image at time t_1 as the initial depth image, and the driving-behavior depth images from time t_2 to time t_m as the sequence depth images;
Step 2. Detection and matching of the head region:
2.1. Use the AdaBoost algorithm to detect the face region in the initial depth image, mark the detected face region with a head rectangle, and obtain the center position of the head rectangle;
Specifically, the AdaBoost algorithm proceeds as follows:
a) Define the sequence depth images as the training samples I = {(x_1, y_1), (x_2, y_2), ..., (x_i, y_i), ..., (x_N, y_N)}, 1 ≤ i ≤ N, where x_i denotes the i-th training sample and y_i ∈ {0, 1}, with y_i = 0 denoting a positive sample and y_i = 1 a negative sample;
b) Extract Haar features from each training sample in I to obtain the Haar feature set F = {f_1, f_2, ..., f_j, ..., f_M}, where f_j denotes the j-th Haar feature, 1 ≤ j ≤ M;
c) Use formula (1) to obtain the classifier h_j(x_i) for the j-th Haar feature f_j of the i-th training sample:
h_j(x_i) = 1, if p_j·f_j(x_i) < p_j·θ;  h_j(x_i) = 0, otherwise   (1)
In formula (1), p_j is the direction parameter of the classifier h_j(x_i), p_j = ±1, and θ is a threshold, taken as 0.15 in this example;
d) Repeat step c) to obtain the set of M classifiers H = {h_1(x_i), h_2(x_i), ..., h_j(x_i), ..., h_M(x_i)};
e) Use formula (2) to obtain the weighted classification error ε_j of the j-th classifier h_j(x_i):
ε_j = Σ_{i=1}^{N} ω_i·|h_j(x_i) − y_i|   (2)
In formula (2), ω_i denotes the normalized weight of the i-th training sample;
f) Repeat step e) to obtain the set of M weighted classification errors E = {ε_1, ε_2, ..., ε_j, ..., ε_M};
g) Select the classifier with the smallest weighted classification error in E, and use the Haar feature corresponding to that classifier to detect the face region in the initial depth image.
2.2. Expand the marked head rectangle outward by A% to obtain the expanded head rectangle. The expansion is applied because a driver makes small head movements during normal driving, and movements within this range do not constitute bad driving behavior; in this example A is taken as 10;
2.3. Form a reference three-dimensional point cloud (a point cloud being a set of pixel points) from the pixels of the face region in the head rectangle, and a to-be-matched three-dimensional point cloud from the pixels of the face region in the expanded head rectangle. Use the ICP algorithm, i.e. the iterative closest point algorithm, to register the to-be-matched point cloud against the reference point cloud and obtain the final iteration count, then compare it with the set threshold, 50 in this example: if the final iteration count ≥ the threshold, judge the behavior as bad driving; otherwise judge it as normal driving. The iteration count is a valid criterion because ICP is an iterative matching algorithm that iterates continuously during matching: if two driver behavior images match well, few iterations are needed, indicating normal driving; if they differ greatly and match poorly, many iterations are needed, indicating bad driving. The number of matching iterations over the driver's behavior images is therefore used to judge whether bad driving has occurred.
Specifically, the ICP algorithm proceeds as follows:
h) Define the three-dimensional space as R^3;
define the initial coordinates of the pixels in the to-be-matched point cloud as {P_i | P_i ∈ R^3, i = 1, ..., N_P}, where N_P denotes the total number of pixels in the to-be-matched point cloud;
define the initial coordinates of the pixels in the reference point cloud as {Q_i | Q_i ∈ R^3, i = 1, ..., N_Q}, where N_Q denotes the total number of pixels in the reference point cloud;
i) Define the total number of iterations as W;
at the k-th iteration, k ∈ {1, ..., W}, for each point of the reference point cloud {Q_i | Q_i ∈ R^3, i = 1, ..., N_Q}, find the nearest point among the coordinates {P_i^k} of the to-be-matched point cloud; these nearest points constitute the reference point cloud coordinates for this iteration, denoted {Q_i^k};
j) At the (k+1)-th iteration, use formulas (3) and (4) to obtain the coordinates {P_i^{k+1}} of the pixels in the to-be-matched point cloud and the coordinates {Q_i^{k+1}} of the pixels in the reference point cloud:
P_i^{k+1} = R_O^{k+1}·P_i^k + t^{k+1}   (3)
Q_i^{k+1} = R_O^{k+1}·Q_i^k + t^{k+1}   (4)
In formulas (3) and (4), R_O^{k+1} denotes the three-dimensional rotation matrix and t^{k+1} the translation vector;
Specifically, the three-dimensional rotation matrix R_O^{k+1} and translation vector t^{k+1} are obtained as follows:
j1) Use formulas (5) and (6) to obtain the centroid μ_P of the to-be-matched point cloud coordinates {P_i^k} and the centroid μ_Q of the reference point cloud coordinates {Q_i^k} at the (k+1)-th iteration:
μ_P = (1/N_P)·Σ_{i=1}^{N_P} P_i^k   (5)
μ_Q = (1/N_Q)·Σ_{i=1}^{N_Q} Q_i^k   (6)
j2) Use formulas (7) and (8) to obtain the translation P'_i of the to-be-matched point cloud coordinates relative to the centroid μ_P, and the translation Q'_i of the reference point cloud coordinates relative to the centroid μ_Q:
P'_i = P_i^k − μ_P   (7)
Q'_i = Q_i^k − μ_Q   (8)
j3) Use formula (9) to obtain the correlation matrix K_{αβ} between the translations P'_i and Q'_i:
K_{αβ} = (1/N_P)·Σ_{i=1}^{N_P} P'_{iα}·Q'_{iβ}   (9)
In formula (9), α, β = 1, 2, 3;
j4) Use formula (10) to obtain the four-dimensional symmetric matrix K constructed from the correlation matrix K_{αβ}:
K = [ K_11+K_22+K_33   K_23−K_32        K_31−K_13        K_12−K_21
      K_23−K_32        K_11−K_22−K_33   K_12+K_21        K_31+K_13
      K_31−K_13        K_12+K_21        K_22−K_11−K_33   K_23+K_32
      K_12−K_21        K_31+K_13        K_23+K_32        K_33−K_11−K_22 ]   (10)
j5) Obtain the largest eigenvalue of the four-dimensional symmetric matrix K, and from it the unit eigenvector q = [q_0, q_1, q_2, q_3]^T;
j6) Use formula (11) to obtain the antisymmetric matrix K(q):
K(q) = [  0    −q_3   q_2
          q_3   0    −q_1
         −q_2   q_1   0  ]   (11)
j7) Use formula (12) to obtain the three-dimensional rotation matrix R_O^{k+1}:
R_O^{k+1} = (q_0² − q_v^T·q_v)·I_3 + 2·q_v·q_v^T + 2·q_0·K(q)   (12)
where q_v = [q_1, q_2, q_3]^T and I_3 is the 3×3 identity matrix;
j8) Use formula (13) to obtain the translation vector t^{k+1}:
t^{k+1} = μ_Q − R_O^{k+1}·μ_P   (13)
k) Use formula (14) to obtain the Euclidean distance d_{k+1} between the coordinates {P_i^{k+1}} of the to-be-matched point cloud at the (k+1)-th iteration and the coordinates {Q_i^k} of the reference point cloud at the k-th iteration:
d_{k+1} = (1/N_P)·Σ_{i=1}^{N_P} ||P_i^{k+1} − Q_i^k||   (14)
l) Repeat steps i) through k); when the Euclidean distance d_{k+1} reaches its minimum, stop iterating and record the final iteration count W'.
Step 3. Detection and tracking of the hand region:
3.1. Mark the driver's entire upper body in the initial depth image to obtain an upper-body rectangle;
3.2. Take the region with the smallest depth value inside the upper-body rectangle as the hand region: the hand is closest to the TOF camera, so its depth value is smallest. Mark the hand region with a circumscribed rectangle whose width equals the width of the head rectangle and whose length is twice the length of the head rectangle. In general a person's hand and head are about the same size, and the hand has a certain range of motion during normal driving, so the circumscribed rectangle's width is set to the head rectangle's width and its length to twice the head rectangle's length.
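A minimal Python sketch of the minimum-depth rule in steps 3.1–3.2; the tolerance `band` around the minimum depth is an assumption (a single minimum pixel is too noisy to box on its own), as is the (x, y, w, h) box convention:

```python
import numpy as np

def detect_hand_box(depth, upper_body_box, head_box, band=30.0):
    # depth: 2-D depth image; upper_body_box, head_box: (x, y, w, h) rectangles.
    x, y, w, h = upper_body_box
    roi = depth[y:y + h, x:x + w].astype(float)
    roi[roi <= 0] = np.inf                     # mask out invalid pixels
    zmin = roi.min()                           # the hand is nearest the camera
    vs, us = np.nonzero(roi <= zmin + band)    # pixels in the near-depth band
    cx, cy = x + us.mean(), y + vs.mean()      # hand-region center estimate
    _, _, hw, hh = head_box
    # step 3.2 sizing: width = head width, length = twice head length
    return (cx - hw / 2.0, cy - hh, hw, 2 * hh)
```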
3.3. Use the Kalman filter algorithm to track the hand region inside the circumscribed rectangle through the sequence depth images, obtaining the center position of the circumscribed rectangle in each sequence depth image; the center position of the circumscribed rectangle follows from its geometry.
Specifically, the Kalman filter algorithm proceeds as follows:
m) Let the hand state at time t_2 be the initial state X_{t_2};
n) Use formula (15) to obtain the predicted hand state X̂_{t_v} in the circumscribed rectangle of the sequence depth image at time t_v:
X̂_{t_v} = A·X_{t_{v-1}}   (15)
In formula (15), A denotes the hand state transition matrix and X_{t_{v-1}} the hand state at time t_{v-1};
o) Use formula (16) to obtain the covariance matrix P_{t_{v-1}} of the hand state X_{t_{v-1}} in the circumscribed rectangle at time t_{v-1}:
P_{t_{v-1}} = E[(X_{t_{v-1}} − X̂_{t_{v-1}})·(X_{t_{v-1}} − X̂_{t_{v-1}})^T]   (16)
In formula (16), X̂_{t_{v-1}} denotes the predicted hand state at time t_{v-1} in the circumscribed rectangle, obtained from formula (15) with the hand state X_{t_{v-2}} at time t_{v-2};
p) Use formula (17) to obtain the covariance matrix P̂_{t_v} of the predicted hand state X̂_{t_v} at time t_v:
P̂_{t_v} = A·P_{t_{v-1}}·A^T + Q_k   (17)
In formula (17), A^T is the transpose of the hand state transition matrix A, and Q_k denotes the dynamic noise covariance matrix, which obeys the standard normal distribution Q_k ~ N(0, 1);
q) Use formula (18) to update the hand state X_{t_v} in the circumscribed rectangle at time t_v:
X_{t_v} = X̂_{t_v} + K_{t_v}·(Z_{t_v} − H·X̂_{t_v})   (18)
In formula (18), Z_{t_v} denotes the observed hand position at time t_v, H the hand state observation matrix, and K_{t_v} the Kalman filter gain coefficient from formula (19);
r) Use formula (19) to calculate the Kalman filter gain coefficient K_{t_v}:
K_{t_v} = P̂_{t_v}·H^T·(H·P̂_{t_v}·H^T + R_k)^{-1}   (19)
In formula (19), H^T is the transpose of the hand state observation matrix H, and R_k denotes the noise covariance matrix, which obeys the standard normal distribution R_k ~ N(0, 1);
s) Repeat steps m) through r), continuously updating the hand state X_{t_v} in the circumscribed rectangle at time t_v, thereby tracking the hand region within the circumscribed rectangle and obtaining from it the center position of the circumscribed rectangle in the sequence depth images.
3.4. Obtain the Euclidean distance between the center position of the circumscribed rectangle and the center position of the head rectangle;
3.5. If the Euclidean distance ≥ the set distance threshold, 40 cm in this example, judge the behavior as bad driving; otherwise judge it as normal driving.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201410428258.1A (granted as CN104200199B) | 2014-08-27 | 2014-08-27 | Bad driving behavior detection method based on TOF camera
Publications (2)
Publication Number | Publication Date
---|---
CN104200199A | 2014-12-10
CN104200199B | 2017-04-05
Family
ID=52085489
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410428258.1A (granted as CN104200199B; expired due to non-payment of annual fee) | Bad driving behavior detection method based on TOF camera | 2014-08-27 | 2014-08-27
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104200199B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080243558A1 (en) * | 2007-03-27 | 2008-10-02 | Ash Gupte | System and method for monitoring driving behavior with feedback |
CN102982316A (en) * | 2012-11-05 | 2013-03-20 | 安维思电子科技(广州)有限公司 | Driver abnormal driving behavior recognition device and method thereof |
Non-Patent Citations (3)
Title |
---|
SHUZO NORIDOMI et al.: "Driving behavior analysis using vision-based head pose estimation for enhanced communication among traffic participants", 2013 International Conference on Connected Vehicles and Expo *
ZHU Yuhua et al.: "A driver head orientation analysis method based on feature triangles" (in Chinese), 13th Academic Annual Conference of the Chinese Association for Artificial Intelligence *
HUANG Sibo: "Research on computer-vision-based abnormal driving behavior detection methods" (in Chinese), China Master's Theses Full-text Database, Information Science and Technology *
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106291536A (en) * | 2015-06-03 | 2017-01-04 | 霍尼韦尔国际公司 | Including the door of time-of-flight sensor and window contact system and method |
CN109556511A (en) * | 2018-11-14 | 2019-04-02 | 南京农业大学 | A kind of suspension-type high throughput hothouse plants phenotype measuring system based on multi-angle of view RGB-D integration technology |
CN109977786A (en) * | 2019-03-01 | 2019-07-05 | 东南大学 | A kind of driver gestures detection method based on video and area of skin color distance |
CN109977786B (en) * | 2019-03-01 | 2021-02-09 | 东南大学 | Driver posture detection method based on video and skin color area distance |
CN110046560A (en) * | 2019-03-28 | 2019-07-23 | 青岛小鸟看看科技有限公司 | A kind of dangerous driving behavior detection method and camera |
CN110046560B (en) * | 2019-03-28 | 2021-11-23 | 青岛小鸟看看科技有限公司 | Dangerous driving behavior detection method and camera |
CN110599407A (en) * | 2019-06-21 | 2019-12-20 | 杭州一隅千象科技有限公司 | Human body noise reduction method and system based on multiple TOF cameras in downward inclination angle direction |
CN110708518A (en) * | 2019-11-05 | 2020-01-17 | 北京深测科技有限公司 | People flow analysis early warning dispersion method and system |
CN112634270A (en) * | 2021-03-09 | 2021-04-09 | 深圳华龙讯达信息技术股份有限公司 | Imaging detection system and method based on industrial internet |
CN112634270B (en) * | 2021-03-09 | 2021-06-04 | 深圳华龙讯达信息技术股份有限公司 | Imaging detection system and method based on industrial internet |
CN112990153A (en) * | 2021-05-11 | 2021-06-18 | 创新奇智(成都)科技有限公司 | Multi-target behavior identification method and device, storage medium and electronic equipment |
CN117036481A (en) * | 2023-08-22 | 2023-11-10 | 之江实验室 | Behavior detection method and device for unknown object and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN104200199B (en) | 2017-04-05 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | |
PB01 | Publication | |
C10 | Entry into substantive examination | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20170405; Termination date: 20190827 |