CN104200199A - TOF (Time of Flight) camera based bad driving behavior detection method - Google Patents


Info

Publication number
CN104200199A
CN104200199A (application CN201410428258.1A, granted as CN104200199B)
Authority
CN
China
Prior art keywords
rectangle frame
formula
point cloud
pixel
depth image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410428258.1A
Other languages
Chinese (zh)
Other versions
CN104200199B (en)
Inventor
胡良梅
张旭东
高隽
董文菁
杨慧
杨静
段琳琳
徐小红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei University of Technology filed Critical Hefei University of Technology
Priority to CN201410428258.1A priority Critical patent/CN104200199B/en
Publication of CN104200199A publication Critical patent/CN104200199A/en
Application granted granted Critical
Publication of CN104200199B publication Critical patent/CN104200199B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a TOF (Time of Flight) camera based bad driving behavior detection method, comprising: step 1, obtaining a depth image; step 2, detecting and matching the head region; step 3, detecting and tracking the hand region. The method can accurately and effectively detect a driver's bad driving behaviors without interfering with normal driving, improving recognition accuracy and thereby reducing the accident rate.

Description

Bad driving behavior detection method based on a TOF camera
Technical field
The present invention relates to a motion-recognition and driver-assistance safety technology based on a TOF camera, and belongs to the field of intelligent traffic safety using 3D vision.
Background technology
With the continuous improvement of China's urbanization and motorization levels, traffic safety problems have become increasingly serious. Analysis of the causes of road traffic accidents in China in recent years finds that approximately 90% of traffic accidents are caused by drivers' bad driving behaviors. Therefore, research on driving behavior, and especially on how to intervene against unsafe driving behavior, has become a key problem in ensuring traffic safety.
At present, research on driving behavior at home and abroad mainly uses a 2D camera to obtain video images of the driver, detects external states such as eyelid motion, eye closure, nodding, and mouth movement, and develops alarm equipment to wake the driver. However, a 2D camera is easily affected by illumination, shadow, and skin color, and range information is lost when the 3D scene is projected onto the 2D plane; recognizing driving behavior with a 2D camera therefore suffers from information loss, inaccurate detection, and low recognition rates.
Summary of the invention
The present invention overcomes the shortcomings of the prior art by proposing a bad driving behavior detection method based on a TOF camera, which can effectively detect a driver's bad driving behaviors without interfering with normal driving, improving recognition accuracy and thereby reducing the accident rate.
The present invention solves the technical problem by adopting the following technical scheme:
A bad driving behavior detection method based on a TOF camera, characterized in that it is carried out as follows:
Step 1, acquisition depth image:
Utilize the TOF camera to obtain depth images of the driver's driving behavior over the time period T = (t_1, t_2, …, t_v, …, t_m); the depth image at time t_1 is taken as the initial depth image, and the depth images from t_2 to t_m form the sequence of depth images;
The detection of step 2, head zone and coupling:
2.1. Utilize the AdaBoost algorithm to detect the face region in the initial depth image, mark the detected face region with a head rectangle frame, and obtain the center of the head rectangle frame;
2.2. Expand the head rectangle frame outward by A% to obtain a head-expanded rectangle frame;
2.3. Form a reference 3D point cloud from the pixels of the face region in the head rectangle frame, and a 3D point cloud to be matched from the pixels of the face region in the head-expanded rectangle frame. Utilize the ICP algorithm to perform 3D registration between the point cloud to be matched and the reference point cloud, obtaining the final iteration count of the ICP algorithm. If the final iteration count ≥ the set threshold, judge the behavior as bad driving; otherwise, judge it as normal driving;
The detection and tracking of step 3, hand region:
3.1. Mark the driver's entire upper body in the initial depth image to obtain an upper-body rectangle frame;
3.2. Take the region of minimum depth value in the upper-body rectangle frame as the hand region and mark it with a bounding rectangle frame, whose width equals the width of the head rectangle frame and whose length is twice the length of the head rectangle frame;
3.3. Utilize the Kalman filtering algorithm to track the hand region within the bounding rectangle frame through the sequence of depth images, obtaining the center of the bounding rectangle frame in each sequence depth image;
3.4. Obtain the Euclidean distance between the center of the bounding rectangle frame and the center of the head rectangle frame;
3.5. If the Euclidean distance ≥ the set distance threshold, judge the behavior as bad driving; otherwise, judge it as normal driving.
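The decision rule in steps 3.4–3.5 reduces to a threshold on the distance between the two box centers. A minimal sketch (function name and 2D center coordinates are illustrative, not from the patent; the 0.4 default mirrors the 40 cm example threshold given later in the embodiment):

```python
import math

def is_bad_driving(hand_center, head_center, dist_threshold=0.4):
    """Steps 3.4-3.5: flag bad driving when the center of the hand's
    bounding rectangle strays at least dist_threshold from the center
    of the head rectangle (0.4 corresponds to the 40 cm example)."""
    dx = hand_center[0] - head_center[0]
    dy = hand_center[1] - head_center[1]
    return math.hypot(dx, dy) >= dist_threshold
```

For example, `is_bad_driving((0.9, 0.2), (0.4, 0.1))` is `True` (distance ≈ 0.51 ≥ 0.4), while a hand resting near the head is not flagged.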
The feature that the present invention is based on the bad steering behavior detection method of TOF camera is also:
The AdaBoost algorithm in step 2.1 is carried out according to the following steps:
Step a. Define the sequence depth images as training samples I = {(x_1, y_1), (x_2, y_2), …, (x_i, y_i), …, (x_N, y_N)}, 1 ≤ i ≤ N, where x_i denotes the i-th training sample and y_i ∈ {0, 1}; y_i = 0 denotes a positive sample and y_i = 1 a negative sample;
Step b. Extract Harr features from each training sample in I, obtaining the Harr feature set F = {f_1, f_2, …, f_j, …, f_M}, where f_j denotes the j-th Harr feature, 1 ≤ j ≤ M;
Step c. Utilize formula (1) to obtain the classifier h_j(x_i) for the j-th Harr feature f_j of the i-th training sample:
h_j(x_i) = 1 if p_j f_j(x_i) < p_j θ, and h_j(x_i) = 0 otherwise   (1)
In formula (1), p_j is the direction parameter of the classifier h_j(x_i), p_j = ±1, and θ is the threshold;
Step d. Repeat step c to obtain the set of M classifiers H = {h_1(x_i), h_2(x_i), …, h_j(x_i), …, h_M(x_i)};
Step e. Utilize formula (2) to obtain the weighted classification error ε_j of the j-th classifier h_j(x_i) over the training samples:
ε_j = Σ_{i=1}^{N} ω_i |h_j(x_i) − y_i|   (2)
In formula (2), ω_i denotes the normalized weight of the i-th training sample;
Step f. Repeat step e to obtain the set of M weighted classification errors E = {ε_1, ε_2, …, ε_j, …, ε_M};
Step g. Choose the classifier with the minimum weighted classification error in E, and use the Harr feature corresponding to that classifier to detect the face region in the initial depth image.
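Steps c–g amount to an exhaustive search over decision stumps on Haar features. A hedged NumPy sketch of that search (function and variable names are mine; the embodiment later fixes θ = 0.15, whereas sweeping feature values as candidate thresholds and both directions p = ±1 is a common implementation choice, not something the text specifies):

```python
import numpy as np

def best_weak_classifier(features, labels, weights):
    """Exhaustive stump search over Haar features (steps c-g):
    h_j(x) = 1 if p * f_j(x) < p * theta else 0, keeping the
    (feature j, threshold theta, direction p) with the smallest
    weighted error  eps_j = sum_i w_i * |h_j(x_i) - y_i|."""
    n, m = features.shape
    best = (None, None, None, np.inf)  # (j, theta, p, error)
    for j in range(m):
        for theta in np.unique(features[:, j]):
            for p in (1.0, -1.0):
                h = (p * features[:, j] < p * theta).astype(int)
                err = float(np.sum(weights * np.abs(h - labels)))
                if err < best[3]:
                    best = (j, theta, p, err)
    return best
```

Note the patent's label convention (y = 0 positive, y = 1 negative) is kept as-is; the weighted error formula is symmetric in it.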
The ICP algorithm in step 2.3 is carried out as follows:
Step a. Define the three-dimensional space as R³;
Define the initial coordinates of the pixels in the point cloud to be matched as {P_i | P_i ∈ R³, i = 1, …, N_P}, where N_P denotes the total number of pixels in the point cloud to be matched;
Define the initial coordinates of the pixels in the reference point cloud as {Q_i | Q_i ∈ R³, i = 1, …, N_Q}, where N_Q denotes the total number of pixels in the reference point cloud;
Step b. Define the total number of iterations as W;
At the k-th iteration, k ∈ W, for each coordinate point of the reference point cloud {Q_i | Q_i ∈ R³, i = 1, …, N_Q}, find the nearest point among the coordinates of the pixels in the point cloud to be matched; these nearest points form the reference point cloud coordinates {Q_i^k | Q_i^k ∈ R³, i = 1, …, N_Q};
Step c. At the (k+1)-th iteration, utilize formulas (3) and (4) to obtain the coordinates {P_i^{k+1} | P_i^{k+1} ∈ R³, i = 1, …, N_P} of the pixels in the point cloud to be matched and the coordinates {Q_i^{k+1} | Q_i^{k+1} ∈ R³, i = 1, …, N_Q} of the pixels in the reference point cloud:
P_i^{k+1} = R_O^{k+1} P_i^k + t^{k+1}   (3)
Q_i^{k+1} = R_O^{k+1} Q_i^k + t^{k+1}   (4)
In formulas (3) and (4), R_O^{k+1} denotes the 3D rotation matrix and t^{k+1} denotes the translation vector;
Step d. Utilize formula (5) to obtain the Euclidean distance d^{k+1} between the coordinates {P_i^{k+1}} of the point cloud to be matched at the (k+1)-th iteration and the coordinates {Q_i^k} of the reference point cloud at the k-th iteration:
d^{k+1} = (1/N_P) Σ_{i=1}^{N_P} ||P_i^{k+1} − Q_i^k||²   (5)
Step e. Repeat steps b–d; when the minimum of the Euclidean distance d^{k+1} is reached, stop iterating and record the final iteration count W'.
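The iteration loop of steps a–e can be sketched as standard point-to-point ICP. One deliberate substitution: this sketch solves each iteration's rigid transform with the SVD-based Kabsch method rather than the patent's quaternion construction (the two are equivalent for this problem), and the stopping rule (mean squared distance no longer improving) stands in for "the minimum of d^{k+1}"; names and tolerances are illustrative:

```python
import numpy as np

def icp_iterations(src, ref, max_iter=100, tol=1e-6):
    """Point-to-point ICP (steps a-e): match each source point to its
    nearest reference point, solve the rigid transform by Kabsch/SVD
    (substituting for the patent's quaternion step), apply it, and stop
    when the mean squared matching distance stops improving.  Returns
    the final iteration count, the quantity the method thresholds."""
    src = src.copy()
    prev_err = np.inf
    for k in range(1, max_iter + 1):
        # nearest-neighbour correspondences and current error
        d2 = ((src[:, None, :] - ref[None, :, :]) ** 2).sum(-1)
        matched = ref[d2.argmin(axis=1)]
        err = d2.min(axis=1).mean()
        if prev_err - err < tol:
            return k
        prev_err = err
        # best rigid transform taking src onto matched
        mu_p, mu_q = src.mean(0), matched.mean(0)
        H = (src - mu_p).T @ (matched - mu_q)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:  # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_q - R @ mu_p
        src = src @ R.T + t
    return max_iter
```

Two well-aligned clouds terminate after very few iterations, while a cloud far from the reference needs many; this is exactly the signal the method thresholds to decide bad driving.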
The Kalman filtering algorithm in step 3.3 is carried out as follows:
1) Initialize the hand state at time t_2 as X_{t_2};
2) Utilize formula (6) to obtain the predicted hand state X'_{t_v} at time t_v within the bounding rectangle frame of the sequence depth images:
X'_{t_v} = A_{t_v} X_{t_v−1}   (6)
In formula (6), A_{t_v} denotes the hand state-transition matrix,
A_{t_v} = [1 0 t 0; 0 1 0 t; 0 0 1 0; 0 0 0 1],
where t is the interval between two adjacent moments, and X_{t_v−1} denotes the hand state at time t_{v−1} within the bounding rectangle frame;
3) Utilize formula (7) to obtain the covariance matrix P_{t_v−1} of the hand state at time t_{v−1} within the bounding rectangle frame:
P_{t_v−1} = cov(X'_{t_v−1} − X_{t_v−2})   (7)
In formula (7), X'_{t_v−1} denotes the predicted hand state at time t_{v−1} and X_{t_v−2} the hand state at time t_{v−2}, both within the bounding rectangle frame of the sequence depth images;
4) Utilize formula (8) to obtain the covariance matrix P'_{t_v} of the predicted hand state at time t_v:
P'_{t_v} = A_{t_v} P_{t_v−1} A_{t_v}^T + B_{t_v−1}   (8)
In formula (8), A_{t_v}^T is the transpose of the hand state-transition matrix, and B_{t_v−1} denotes the dynamic noise covariance matrix, which obeys the standard normal distribution B ~ N(0, 1);
5) Utilize formula (9) to update the hand state X_{t_v} at time t_v within the bounding rectangle frame:
X_{t_v} = X'_{t_v} + K_{t_v} H_{t_v} X'_{t_v}   (9)
In formula (9), H_{t_v} = [1 0 0 0; 0 1 0 0] is the hand state observation matrix, and K_{t_v} is the Kalman filtering gain coefficient at time t_v;
6) Utilize formula (10) to compute the Kalman filtering gain coefficient K_{t_v}:
K_{t_v} = P'_{t_v} H_{t_v}^T (H_{t_v} P'_{t_v} H_{t_v}^T + R_{t_v})^{−1}   (10)
In formula (10), H_{t_v}^T is the transpose of the hand state observation matrix, and R_{t_v} denotes the observation noise covariance matrix, which obeys the standard normal distribution R ~ N(0, 1);
7) Repeat steps 1)–6) to continuously update the hand state X_{t_v} within the bounding rectangle frame, thereby tracking the hand region and obtaining the center of the bounding rectangle frame in each sequence depth image.
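The constant-velocity tracker of steps 1)–7) can be sketched as follows. The update uses the standard measurement-residual form x = x' + K(z − Hx'), which formula (9) appears to intend (as printed it lacks the measurement term); the noise covariances B and R are fixed small diagonals here rather than N(0, 1) samples, and all names are illustrative:

```python
import numpy as np

def kalman_track(measurements, dt=0.2):
    """Constant-velocity Kalman tracker for the hand-box center,
    state = [x, y, vx, vy].  A and H follow formulas (6) and (9);
    dt = 0.2 s matches the embodiment.  Returns filtered centers."""
    A = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)
    B = np.eye(4) * 1e-3        # dynamic (process) noise covariance
    R = np.eye(2) * 1e-2        # observation noise covariance
    x = np.array([measurements[0][0], measurements[0][1], 0.0, 0.0])
    P = np.eye(4)
    out = []
    for z in measurements:
        x, P = A @ x, A @ P @ A.T + B                   # predict, (6) and (8)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # gain, formula (10)
        x = x + K @ (np.asarray(z, float) - H @ x)      # residual update
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return out
```

Fed the noisy minimum-depth detections from step 3.2, the filter smooths the hand-box center used in the distance test of step 3.5.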
The 3D rotation matrix R_O^{k+1} and translation vector t^{k+1} in step c are obtained as follows:
1) Utilize formulas (11) and (12) to obtain the centroid μ_P of the coordinates of the point cloud to be matched and the centroid μ_Q of the coordinates of the reference point cloud at the (k+1)-th iteration:
μ_P = (1/N_P) Σ_{i=1}^{N_P} P_i^{k+1}   (11)
μ_Q = (1/N_Q) Σ_{i=1}^{N_Q} Q_i^{k+1}   (12)
2) Utilize formulas (13) and (14) to obtain the translations m_i^P and m_i^Q of the coordinates relative to the centroids μ_P and μ_Q respectively:
m_i^P = P_i^{k+1} − μ_P   (13)
m_i^Q = Q_i^{k+1} − μ_Q   (14)
3) Utilize formula (15) to obtain the correlation matrix K_{αβ} between the translations m_i^P and m_i^Q:
K_{αβ} = (1/N_P) Σ_{i=1}^{N_P} m_i^P (m_i^Q)^T   (15)
In formula (15), α, β = 1, 2, 3;
4) Utilize formula (16) to construct the 4×4 symmetric matrix K from the correlation matrix K_{αβ}:
K = [K_{11}+K_{22}+K_{33}, K_{32}−K_{23}, K_{13}−K_{31}, K_{21}−K_{12};
     K_{32}−K_{23}, K_{11}−K_{22}−K_{33}, K_{12}+K_{21}, K_{13}+K_{31};
     K_{13}−K_{31}, K_{12}+K_{21}, −K_{11}+K_{22}−K_{33}, K_{23}+K_{32};
     K_{21}−K_{12}, K_{31}+K_{13}, K_{32}+K_{23}, −K_{11}−K_{22}+K_{33}]   (16)
5) Obtain the maximum eigenvalue of the symmetric matrix K, and from it the unit eigenvector q = [q_0, q_1, q_2, q_3]^T;
6) Utilize formula (17) to obtain the antisymmetric matrix K(q):
K(q) = [0, −q_2, q_1; q_2, 0, −q_0; −q_1, q_0, 0]   (17)
7) Utilize formula (18) to obtain the 3D rotation matrix R_O^{k+1}:
R_O^{k+1} = (q_3² − q^T q) I + 2 q q^T + 2 q_3 K(q)   (18)
In formula (18), q = [q_0, q_1, q_2]^T is the vector part of the quaternion, q_3 its scalar part, and I the 3×3 identity matrix;
8) Utilize formula (19) to obtain the translation vector t^{k+1}:
t^{k+1} = μ_Q − R_O^{k+1} μ_P   (19).
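Steps 1)–8) are Horn's closed-form quaternion alignment. A sketch under one convention change: the quaternion here is scalar-first (s = q[0]) rather than carrying the scalar part in q_3 as the patent does, and function/variable names are mine:

```python
import numpy as np

def rigid_transform_quaternion(P, Q):
    """Horn's quaternion alignment (steps 1-8): build a 4x4 symmetric
    matrix from the cross-covariance of the centred clouds, take the unit
    eigenvector of its largest eigenvalue as the quaternion (scalar-first
    here), convert it to a rotation matrix, and recover the translation
    t = mu_Q - R @ mu_P."""
    mu_p, mu_q = P.mean(0), Q.mean(0)
    cov = (P - mu_p).T @ (Q - mu_q) / len(P)   # correlation matrix, (15)
    A = cov - cov.T
    delta = np.array([A[1, 2], A[2, 0], A[0, 1]])
    K4 = np.empty((4, 4))                      # 4x4 symmetric matrix, (16)
    K4[0, 0] = np.trace(cov)
    K4[0, 1:] = K4[1:, 0] = delta
    K4[1:, 1:] = cov + cov.T - np.trace(cov) * np.eye(3)
    _, evecs = np.linalg.eigh(K4)
    q = evecs[:, -1]                           # eigenvector of largest eigenvalue
    s, v = q[0], q[1:]                         # scalar and vector parts
    skew = np.array([[0, -v[2], v[1]],         # antisymmetric matrix, (17)
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])
    R = (s * s - v @ v) * np.eye(3) + 2 * np.outer(v, v) + 2 * s * skew
    return R, mu_q - R @ mu_p                  # rotation (18), translation (19)
```

For exact correspondences the recovered (R, t) reproduces the true rigid motion; the quaternion's sign ambiguity from `eigh` is harmless because R is quadratic in q.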
Compared with the prior art, the beneficial effects of the present invention are embodied in:
1. The present invention realizes driver head detection and hand-state tracking through the AdaBoost algorithm and Kalman filtering, and completes the 3D point cloud registration between the current depth map and the reference depth map with the ICP algorithm. This solves the problem that a 2D camera is easily affected by illumination, shadow, skin color, and the color of the driver's clothing, thereby improving the detection accuracy of bad driving behaviors and reducing the accident rate;
2. The present invention uses a driver recognition method based on TOF range information; because the TOF camera obtains depth images quickly, with a frame rate of up to 40 fps, it offers good real-time performance;
3. The AdaBoost algorithm of the present invention is a high-precision classifier whose detection errors caused by driver head movement do not greatly affect subsequent detection results; its framework is simple and requires no feature selection, which solves the detection speed problem and yields a good recognition effect;
4. The ICP algorithm of the present invention needs no segmentation or feature extraction of the point sets to be processed; given an accurate initial position it converges well, and it achieves very accurate registration of 3D point clouds;
5. The Kalman filtering algorithm of the present invention takes minimum mean-square error as the optimality criterion, is mathematically simple, and is an optimal linear recursive filtering method; it effectively handles target tracking over irregular hand image sequences with a small amount of computation.
Brief description of the drawings
Fig. 1 is a schematic diagram of the detection system of the present invention;
Fig. 2 is a flowchart of the detection method of the present invention;
Reference numbers in the figures: 1, head rectangle frame; 2, head-expanded rectangle frame; 3, upper-body rectangle frame; 4, bounding rectangle frame; 5, TOF camera.
Embodiment
In the present embodiment, a bad driving behavior detection method based on a TOF camera obtains the driver's depth images in real time via a TOF camera installed obliquely above the driver in the vehicle. Using a virtual face rectangle frame, the ICP (iterative closest point) algorithm performs 3D registration between the current depth image and the reference image, and the iteration count determines whether bad driving behavior has occurred; background subtraction detects and tracks the hand, and the positions of the hand and the virtual face rectangle frame together determine whether bad driving behavior has occurred. Concretely, the method is carried out according to the following steps:
Step 1, acquisition depth image:
Install the TOF camera obliquely above the driver in the cab, and connect it to a microprocessor, mounted on the vehicle console, that can raise alarms. Utilize the TOF camera to obtain depth images of the driver's driving behavior over the time period T = (t_1, t_2, …, t_v, …, t_m); the depth image at time t_1 is taken as the initial depth image, and the depth images from t_2 to t_m form the sequence of depth images;
The detection of step 2, head zone and coupling:
2.1. Utilize the AdaBoost algorithm to detect the face region in the initial depth image, mark the detected face region with a head rectangle frame, and obtain the center of the head rectangle frame;
Concretely, the AdaBoost algorithm is carried out as follows:
a) Define the sequence depth images as training samples I = {(x_1, y_1), (x_2, y_2), …, (x_i, y_i), …, (x_N, y_N)}, 1 ≤ i ≤ N, where x_i denotes the i-th training sample and y_i ∈ {0, 1}; y_i = 0 denotes a positive sample and y_i = 1 a negative sample;
b) Extract Harr features from each training sample in I, obtaining the Harr feature set F = {f_1, f_2, …, f_j, …, f_M}, where f_j denotes the j-th Harr feature, 1 ≤ j ≤ M;
c) Utilize formula (1) to obtain the classifier h_j(x_i) for the j-th Harr feature f_j of the i-th training sample:
h_j(x_i) = 1 if p_j f_j(x_i) < p_j θ, and h_j(x_i) = 0 otherwise   (1)
In formula (1), p_j is the direction parameter of the classifier h_j(x_i), p_j = ±1, and θ is the threshold, taken as 0.15 in this example;
d) Repeat step c) to obtain the set of M classifiers H = {h_1(x_i), h_2(x_i), …, h_j(x_i), …, h_M(x_i)};
e) Utilize formula (2) to obtain the weighted classification error ε_j of the j-th classifier h_j(x_i):
ε_j = Σ_{i=1}^{N} ω_i |h_j(x_i) − y_i|   (2)
In formula (2), ω_i denotes the normalized weight of the i-th training sample;
f) Repeat step e) to obtain the set of M weighted classification errors E = {ε_1, ε_2, …, ε_j, …, ε_M};
g) Choose the classifier with the minimum weighted classification error in E, and use its corresponding Harr feature to detect the face region in the initial depth image.
2.2. Expand the head rectangle frame outward by A% to obtain a head-expanded rectangle frame. The reason for expanding is that a driver makes small head movements during normal driving, and movement within this range does not constitute bad driving behavior; A is taken as 10 in this example.
2.3. Form a reference 3D point cloud (a point cloud being a set of pixels) from the pixels of the face region in the head rectangle frame, and a 3D point cloud to be matched from the pixels of the face region in the head-expanded rectangle frame. Utilize the ICP (iterative closest point) algorithm to perform 3D registration between the point cloud to be matched and the reference point cloud, obtaining the final iteration count. If the final iteration count ≥ the set threshold (50 in this example), judge the behavior as bad driving; otherwise, judge it as normal driving. The basis for judging by iteration count is that ICP is an iterative matching algorithm that must iterate repeatedly during matching: if the two driving behavior images match well, the final iteration count is small and the driving is normal; if the two images differ greatly and match poorly, the final iteration count is large and the driving is bad. Hence the matching iteration count of the driver's behavior images is used to judge whether bad driving has occurred.
Concretely, the ICP algorithm is carried out as follows:
h) Define the three-dimensional space as R³;
Define the initial coordinates of the pixels in the point cloud to be matched as {P_i | P_i ∈ R³, i = 1, …, N_P}, where N_P denotes the total number of pixels in the point cloud to be matched;
Define the initial coordinates of the pixels in the reference point cloud as {Q_i | Q_i ∈ R³, i = 1, …, N_Q}, where N_Q denotes the total number of pixels in the reference point cloud;
i) Define the total number of iterations as W;
At the k-th iteration, k ∈ W, for each coordinate point of the reference point cloud {Q_i | Q_i ∈ R³, i = 1, …, N_Q}, find the nearest point among the coordinates of the pixels in the point cloud to be matched; these nearest points form the reference point cloud coordinates {Q_i^k | Q_i^k ∈ R³, i = 1, …, N_Q};
j) At the (k+1)-th iteration, utilize formulas (3) and (4) to obtain the coordinates {P_i^{k+1} | P_i^{k+1} ∈ R³, i = 1, …, N_P} of the point cloud to be matched and the coordinates {Q_i^{k+1} | Q_i^{k+1} ∈ R³, i = 1, …, N_Q} of the reference point cloud:
P_i^{k+1} = R_O^{k+1} P_i^k + t^{k+1}   (3)
Q_i^{k+1} = R_O^{k+1} Q_i^k + t^{k+1}   (4)
In formulas (3) and (4), R_O^{k+1} denotes the 3D rotation matrix and t^{k+1} denotes the translation vector;
Concretely, the 3D rotation matrix R_O^{k+1} and translation vector t^{k+1} are obtained as follows:
j1) Utilize formulas (5) and (6) to obtain the centroid μ_P of the coordinates of the point cloud to be matched and the centroid μ_Q of the coordinates of the reference point cloud at the (k+1)-th iteration:
μ_P = (1/N_P) Σ_{i=1}^{N_P} P_i^{k+1}   (5)
μ_Q = (1/N_Q) Σ_{i=1}^{N_Q} Q_i^{k+1}   (6)
j2) Utilize formulas (7) and (8) to obtain the translations m_i^P and m_i^Q of the coordinates relative to the centroids μ_P and μ_Q respectively:
m_i^P = P_i^{k+1} − μ_P   (7)
m_i^Q = Q_i^{k+1} − μ_Q   (8)
j3) Utilize formula (9) to obtain the correlation matrix K_{αβ} between the translations:
K_{αβ} = (1/N_P) Σ_{i=1}^{N_P} m_i^P (m_i^Q)^T   (9)
In formula (9), α, β = 1, 2, 3;
j4) Utilize formula (10) to construct the 4×4 symmetric matrix K:
K = [K_{11}+K_{22}+K_{33}, K_{32}−K_{23}, K_{13}−K_{31}, K_{21}−K_{12};
     K_{32}−K_{23}, K_{11}−K_{22}−K_{33}, K_{12}+K_{21}, K_{13}+K_{31};
     K_{13}−K_{31}, K_{12}+K_{21}, −K_{11}+K_{22}−K_{33}, K_{23}+K_{32};
     K_{21}−K_{12}, K_{31}+K_{13}, K_{32}+K_{23}, −K_{11}−K_{22}+K_{33}]   (10)
j5) Obtain the maximum eigenvalue of K, and from it the unit eigenvector q = [q_0, q_1, q_2, q_3]^T;
j6) Utilize formula (11) to obtain the antisymmetric matrix K(q):
K(q) = [0, −q_2, q_1; q_2, 0, −q_0; −q_1, q_0, 0]   (11)
j7) Utilize formula (12) to obtain the 3D rotation matrix R_O^{k+1}:
R_O^{k+1} = (q_3² − q^T q) I + 2 q q^T + 2 q_3 K(q)   (12)
In formula (12), q = [q_0, q_1, q_2]^T is the vector part of the quaternion, q_3 its scalar part, and I the 3×3 identity matrix;
j8) Utilize formula (13) to obtain the translation vector t^{k+1}:
t^{k+1} = μ_Q − R_O^{k+1} μ_P   (13)
k) Utilize formula (14) to obtain the Euclidean distance d^{k+1} between the coordinates {P_i^{k+1}} of the point cloud to be matched at the (k+1)-th iteration and the coordinates {Q_i^k} of the reference point cloud at the k-th iteration:
d^{k+1} = (1/N_P) Σ_{i=1}^{N_P} ||P_i^{k+1} − Q_i^k||²   (14)
l) Repeat steps i)–k); when the minimum of the Euclidean distance d^{k+1} is reached, stop iterating and record the final iteration count W'.
The detection and tracking of step 3, hand region:
3.1. Mark the driver's entire upper body in the initial depth image to obtain an upper-body rectangle frame;
3.2. Take the region of minimum depth value in the upper-body rectangle frame as the hand region (the hand is nearest to the TOF camera, so its depth value is minimum), and mark the hand region with a bounding rectangle frame whose width equals the width of the head rectangle frame and whose length is twice the length of the head rectangle frame. In general a person's hand and head are about the same size, and the hand has a certain range of movement during normal driving, so the bounding rectangle frame is set to one head-width wide and two head-lengths long.
3.3. Utilize the Kalman filtering algorithm to track the hand region within the bounding rectangle frame through the sequence of depth images, and obtain the center of the bounding rectangle frame in each sequence depth image from geometric relationships.
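Step 3.2's hand localization is just an argmin over the depth values inside the upper-body box, followed by sizing the bounding box from the head box. A sketch with illustrative names, using a (top, left, height, width) box convention:

```python
import numpy as np

def hand_box(depth, torso_box, head_w, head_h):
    """Step 3.2: the hand is the nearest object to the TOF camera inside
    the upper-body rectangle, i.e. the minimum-depth pixel; bound it with
    a box one head-width wide and two head-lengths tall.
    Boxes are (top, left, height, width) tuples."""
    top, left, h, w = torso_box
    roi = depth[top:top + h, left:left + w]
    r, c = np.unravel_index(np.argmin(roi), roi.shape)
    cy, cx = top + r, left + c                 # hand position in full image
    box_h, box_w = 2 * head_h, head_w
    return (cy - box_h // 2, cx - box_w // 2, box_h, box_w)
```

In practice the depth map would first be masked to valid pixels (e.g. invalid returns set to a large value) so the argmin cannot land on sensor dropouts.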
Concretely, the Kalman filtering algorithm is carried out as follows:
m) Initialize the hand state at time t_2 as X_{t_2};
n) Utilize formula (15) to obtain the predicted hand state X'_{t_v} at time t_v within the bounding rectangle frame of the sequence depth images:
X'_{t_v} = A_{t_v} X_{t_v−1}   (15)
In formula (15), A_{t_v} denotes the hand state-transition matrix,
A_{t_v} = [1 0 t 0; 0 1 0 t; 0 0 1 0; 0 0 0 1],
where t, the interval between two adjacent moments, is taken as 0.2 s in this example, and X_{t_v−1} denotes the hand state at time t_{v−1} within the bounding rectangle frame;
o) Utilize formula (16) to obtain the covariance matrix P_{t_v−1} of the hand state at time t_{v−1} within the bounding rectangle frame:
P_{t_v−1} = cov(X'_{t_v−1} − X_{t_v−2})   (16)
In formula (16), X'_{t_v−1} denotes the predicted hand state at time t_{v−1} and X_{t_v−2} the hand state at time t_{v−2}, both within the bounding rectangle frame of the sequence depth images;
p) Utilize formula (17) to obtain the covariance matrix P'_{t_v} of the predicted hand state at time t_v:
P'_{t_v} = A_{t_v} P_{t_v−1} A_{t_v}^T + B_{t_v−1}   (17)
In formula (17), A_{t_v}^T is the transpose of the hand state-transition matrix, and B_{t_v−1} denotes the dynamic noise covariance matrix, which obeys the standard normal distribution B ~ N(0, 1);
q) Utilize formula (18) to update the hand state X_{t_v} at time t_v within the bounding rectangle frame:
X_{t_v} = X'_{t_v} + K_{t_v} H_{t_v} X'_{t_v}   (18)
In formula (18), H_{t_v} = [1 0 0 0; 0 1 0 0] is the hand state observation matrix, and K_{t_v} is the Kalman filtering gain coefficient at time t_v;
r) Utilize formula (19) to compute the Kalman filtering gain coefficient K_{t_v}:
K_{t_v} = P'_{t_v} H_{t_v}^T (H_{t_v} P'_{t_v} H_{t_v}^T + R_{t_v})^{−1}   (19)
In formula (19), H_{t_v}^T is the transpose of the hand state observation matrix, and R_{t_v} denotes the observation noise covariance matrix, which obeys the standard normal distribution R ~ N(0, 1);
s) Repeat steps m)–r) to continuously update the hand state X_{t_v} within the bounding rectangle frame, thereby tracking the hand region and obtaining the center of the bounding rectangle frame in each sequence depth image.
3.4. Obtain the Euclidean distance between the center of the bounding rectangle frame and the center of the head rectangle frame;
3.5. If the Euclidean distance ≥ the set distance threshold (40 cm in this example), judge the behavior as bad driving; otherwise, judge it as normal driving.

Claims (5)

1. A bad driving behavior detection method based on a TOF camera, characterized in that it is carried out as follows:
Step 1, acquisition depth image:
Utilize the TOF camera to obtain depth images of the driver's driving behavior over the time period T = (t_1, t_2, …, t_v, …, t_m); the depth image at time t_1 is taken as the initial depth image, and the depth images from t_2 to t_m form the sequence of depth images;
The detection of step 2, head zone and coupling:
2.1. Utilize the AdaBoost algorithm to detect the face region in the initial depth image, mark the detected face region with a head rectangle frame, and obtain the center of the head rectangle frame;
2.2. Expand the head rectangle frame outward by A% to obtain a head-expanded rectangle frame;
2.3. Form a reference 3D point cloud from the pixels of the face region in the head rectangle frame, and a 3D point cloud to be matched from the pixels of the face region in the head-expanded rectangle frame. Utilize the ICP algorithm to perform 3D registration between the point cloud to be matched and the reference point cloud, obtaining the final iteration count of the ICP algorithm. If the final iteration count ≥ the set threshold, judge the behavior as bad driving; otherwise, judge it as normal driving;
The detection and tracking of step 3, hand region:
3.1. Mark the driver's entire upper body in the initial depth image to obtain an upper-body rectangle frame;
3.2. Take the region of minimum depth value in the upper-body rectangle frame as the hand region and mark it with a bounding rectangle frame, whose width equals the width of the head rectangle frame and whose length is twice the length of the head rectangle frame;
3.3. Utilize the Kalman filtering algorithm to track the hand region within the bounding rectangle frame through the sequence of depth images, obtaining the center of the bounding rectangle frame in each sequence depth image;
3.4. Obtain the Euclidean distance between the center of the bounding rectangle frame and the center of the head rectangle frame;
3.5. If the Euclidean distance ≥ the set distance threshold, judge the behavior as bad driving; otherwise, judge it as normal driving.
2. The bad driving behavior detection method based on a TOF camera according to claim 1, characterized in that the AdaBoost algorithm in step 2.1 is carried out according to the following steps:
Step a. Define the sequence of depth images as the training sample set I = {(x_1, y_1), (x_2, y_2), …, (x_i, y_i), …, (x_N, y_N)}, 1 ≤ i ≤ N, where x_i denotes the i-th training sample and y_i ∈ {0, 1}: y_i = 0 denotes a positive sample and y_i = 1 denotes a negative sample;
Step b. Extract the Haar features of each training sample in I, obtaining the Haar feature set F = {f_1, f_2, …, f_j, …, f_M}, where f_j denotes the j-th Haar feature, 1 ≤ j ≤ M;
Step c. Use formula (1) to obtain the classifier h_j(x_i) of the j-th Haar feature f_j of the i-th training sample:

h_j(x_i) = 1 if p_j f_j(x_i) < p_j θ, and h_j(x_i) = 0 otherwise   (1)

In formula (1), p_j is the direction parameter of the classifier h_j(x_i), p_j = ±1, and θ is the threshold;
Step d. Repeat step c to obtain the set of M classifiers F = {h_1(x_i), h_2(x_i), …, h_j(x_i), …, h_M(x_i)};
Step e. Use formula (2) to obtain the weighted classification error ε_j of the j-th classifier h_j(x_i) on the i-th training sample:

ε_j = Σ_{i=1}^{N} ω_i |h_j(x_i) − y_i|   (2)

In formula (2), ω_i denotes the normalized weight of the i-th training sample;
Step f. Repeat step e to obtain the set of M weighted classification errors E = {ε_1, ε_2, …, ε_j, …, ε_M};
Step g. Select the classifier with the minimum weighted classification error in E, and detect the face region in the initial depth image with the Haar feature corresponding to that classifier.
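Steps a through g amount to selecting, from a pool of thresholded Haar-feature stumps, the one of minimal weighted error. A minimal sketch under stated assumptions (a fixed threshold θ and an exhaustive search over both polarities p = ±1; a full AdaBoost trainer would additionally optimize θ per feature and reweight the samples between rounds):

```python
import numpy as np

def best_weak_classifier(features, labels, weights, theta=0.5):
    """features: (N samples, M Haar feature values); labels: 0/1 in the
    claim's convention; weights: normalized sample weights omega_i."""
    best = None
    for j in range(features.shape[1]):
        f = features[:, j]
        for p in (1, -1):                            # direction parameter p_j
            h = (p * f < p * theta).astype(int)      # formula (1), fixed theta
            eps = np.sum(weights * np.abs(h - labels))  # formula (2)
            if best is None or eps < best[0]:
                best = (eps, j, p)
    return best  # (minimum weighted error, feature index, direction)
```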
3. The TOF camera based bad driving behavior detection method according to claim 1, characterized in that the ICP algorithm in step 2.3 is carried out as follows:
Step a. Define the three-dimensional space as R³;
Define the initial coordinates of the pixels in the point cloud to be matched as {P_i | P_i ∈ R³, i = 1, …, N_P}, where N_P denotes the total number of pixels in the point cloud to be matched;
Define the initial coordinates of the pixels in the reference point cloud as {Q_i | Q_i ∈ R³, i = 1, …, N_Q}, where N_Q denotes the total number of pixels in the reference point cloud;
Step b. Define the total number of iterations as W;
At the k-th iteration, k ∈ W, for each coordinate point in the reference point cloud {Q_i | Q_i ∈ R³, i = 1, …, N_Q}, find the nearest point among the coordinates of the pixels in the point cloud to be matched; these nearest points form the coordinates {Q_i^k | Q_i^k ∈ R³, i = 1, …, N_Q} of the pixels in the reference point cloud;
Step c. At the (k+1)-th iteration, use formulas (3) and (4) to obtain the coordinates {P_i^{k+1} | P_i^{k+1} ∈ R³, i = 1, …, N_P} of the pixels in the point cloud to be matched and the coordinates {Q_i^{k+1} | Q_i^{k+1} ∈ R³, i = 1, …, N_Q} of the pixels in the reference point cloud:

P_i^{k+1} = R_O^{k+1} P_i^k + t^{k+1}   (3)

Q_i^{k+1} = R_O^{k+1} Q_i^k + t^{k+1}   (4)

In formulas (3) and (4), R_O^{k+1} denotes the three-dimensional rotation matrix and t^{k+1} denotes the translation vector;
Step d. Use formula (5) to obtain the Euclidean distance d^{k+1} between the coordinates P_i^{k+1} of the pixels in the point cloud to be matched at the (k+1)-th iteration and the coordinates Q_i^k of the pixels in the reference point cloud at the k-th iteration:

d^{k+1} = (1/N_P) Σ_{i=1}^{N_P} ||P_i^{k+1} − Q_i^k||²   (5)

Step e. Repeat steps b to d; when the Euclidean distance d^{k+1} reaches its minimum, stop the iteration and obtain the final iteration count W'.
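The iteration loop of steps a through e can be sketched roughly as below. This is not the claim's exact construction: the nearest-neighbour matching direction is reversed (each point of the cloud to be matched is paired with its nearest reference point, a common ICP convention), and the rigid alignment at each step is computed with the SVD closed form rather than the quaternion construction of claim 5, which yields the same rotation and translation:

```python
import numpy as np

def icp_final_iterations(src, ref, max_iter=50, tol=1e-8):
    """Return the iteration count at which the mean squared point
    distance (formula (5)) stops improving."""
    P = src.astype(float).copy()
    prev = np.inf
    for k in range(1, max_iter + 1):
        # step b: brute-force nearest reference point for every point of P
        dists = np.linalg.norm(P[:, None, :] - ref[None, :, :], axis=2)
        Q = ref[np.argmin(dists, axis=1)]
        # closed-form rigid alignment via SVD (stand-in for claim 5)
        mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
        H = (P - mu_p).T @ (Q - mu_q)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_q - R @ mu_p
        P = P @ R.T + t                   # formulas (3)/(4): apply the step
        d = np.mean(np.sum((P - Q) ** 2, axis=1))   # formula (5)
        if prev - d < tol:                # step e: distance stopped shrinking
            return k
        prev = d
    return max_iter
```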
4. The TOF camera based bad driving behavior detection method according to claim 1, characterized in that the Kalman filtering algorithm in step 3.3 is carried out as follows:
1) Take the hand state X_{t_2} at time t_2 as the initialization;
2) Use formula (6) to obtain the predicted hand state X'_{t_v} at time t_v within the bounding rectangle frame of the sequence of depth images:

X'_{t_v} = A_{t_v} X_{t_{v−1}}   (6)

In formula (6), A_{t_v} denotes the hand state transition matrix,

A_{t_v} =
| 1 0 t 0 |
| 0 1 0 t |
| 0 0 1 0 |
| 0 0 0 1 |

where t is the interval between two adjacent times, and X_{t_{v−1}} denotes the hand state at time t_{v−1} within the bounding rectangle frame of the sequence of depth images;
3) Use formula (7) to obtain the covariance matrix P_{t_{v−1}} of the hand state at time t_{v−1} within the bounding rectangle frame:

P_{t_{v−1}} = cov(X'_{t_{v−1}} − X_{t_{v−2}})   (7)

In formula (7), X'_{t_{v−1}} denotes the predicted hand state at time t_{v−1}, and X_{t_{v−2}} denotes the hand state at time t_{v−2}, both within the bounding rectangle frame of the sequence of depth images;
4) Use formula (8) to obtain the covariance matrix P'_{t_v} of the predicted hand state at time t_v:

P'_{t_v} = A_{t_v} P_{t_{v−1}} A_{t_v}^T + B_{t_{v−1}}   (8)

In formula (8), A_{t_v}^T is the transpose of the hand state transition matrix, and B_{t_{v−1}} denotes the dynamic noise covariance matrix, which obeys the standard normal distribution B_{t_{v−1}} ~ N(0, 1);
5) Use formula (9) to update the hand state X_{t_v} at time t_v within the bounding rectangle frame:

X_{t_v} = X'_{t_v} + K_{t_v} H_{t_v} X'_{t_v}   (9)

In formula (9),

H_{t_v} =
| 1 0 0 0 |
| 0 1 0 0 |

is the hand state observation matrix, and K_{t_v} is the Kalman filter gain coefficient at time t_v;
6) Use formula (10) to compute the Kalman filter gain coefficient K_{t_v}:

K_{t_v} = P'_{t_v} H_{t_v}^T (H_{t_v} P'_{t_v} H_{t_v}^T + R_{t_v})^{−1}   (10)

In formula (10), H_{t_v}^T is the transpose of the hand state observation matrix, and R_{t_v} denotes the observation noise covariance matrix, which obeys the standard normal distribution R_{t_v} ~ N(0, 1);
7) Repeat steps 1) to 6), continually updating the hand state X_{t_v} at time t_v within the bounding rectangle frame, thereby tracking the hand region within the bounding rectangle frame and obtaining from the hand region the center of the bounding rectangle frame in the sequence of depth images.
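A constant-velocity Kalman tracker corresponding to steps 1) through 7) can be sketched as below. The matrices A and H follow the claim; note that the update step here uses the standard residual form x' + K(z − Hx'), whereas formula (9) as printed writes the correction directly in terms of H X'. The identity noise covariances are assumptions standing in for the claim's N(0, 1) noise terms:

```python
import numpy as np

def track_hand(measurements, t=1.0):
    """Track the hand-box centre; state x = [px, py, vx, vy]."""
    A = np.array([[1, 0, t, 0],
                  [0, 1, 0, t],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)        # state transition (claim 4)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)        # observation matrix (claim 4)
    B = np.eye(4)                              # dynamic noise covariance (assumed)
    R = np.eye(2)                              # observation noise covariance (assumed)
    x = np.array([*measurements[0], 0.0, 0.0])
    P = np.eye(4)
    centers = []
    for z in measurements[1:]:
        x = A @ x                              # formula (6): predict state
        P = A @ P @ A.T + B                    # formula (8): predict covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # formula (10): gain
        x = x + K @ (np.asarray(z) - H @ x)    # update (standard residual form)
        P = (np.eye(4) - K @ H) @ P
        centers.append(x[:2].copy())
    return centers
```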
5. The TOF camera based bad driving behavior detection method according to claim 3, characterized in that the three-dimensional rotation matrix R_O^{k+1} and the translation vector t^{k+1} in step c are obtained as follows:
1) Use formulas (11) and (12) to obtain the centroid μ_P of the coordinates of the pixels in the point cloud to be matched and the centroid μ_Q of the coordinates of the pixels in the reference point cloud at the (k+1)-th iteration:

μ_P = (1/N_P) Σ_{i=1}^{N_P} P_i^{k+1}   (11)

μ_Q = (1/N_Q) Σ_{i=1}^{N_Q} Q_i^{k+1}   (12)

2) Use formulas (13) and (14) to obtain, respectively, the translation m_i^P of the coordinates of the pixels in the point cloud to be matched relative to the centroid μ_P, and the translation m_i^Q of the coordinates of the pixels in the reference point cloud relative to the centroid μ_Q:

m_i^P = P_i^{k+1} − μ_P   (13)

m_i^Q = Q_i^{k+1} − μ_Q   (14)

3) Use formula (15) to obtain the correlation matrix K_{αβ} between the translations m_i^P and m_i^Q:

K_{αβ} = (1/N) Σ_{i=1}^{N_P} m_i^P (m_i^Q)^T   (15)

In formula (15), α, β = 1, 2, 3;
4) Use formula (16) to obtain the four-dimensional symmetric matrix K constructed from the correlation matrix K_{αβ}:

K =
| K_11+K_22+K_33   K_32−K_23         K_13−K_31         K_21−K_12        |
| K_32−K_23        K_11−K_22−K_33    K_12+K_21         K_13+K_31        |
| K_13−K_31        K_12+K_21         −K_11+K_22−K_33   K_23+K_32        |
| K_21−K_12        K_31+K_13         K_32+K_23         −K_11−K_22+K_33  |   (16)

5) Use the four-dimensional symmetric matrix K to obtain the maximum eigenvalue, and from the maximum eigenvalue obtain the unit eigenvector q = [q_0, q_1, q_2, q_3]^T;
6) Use formula (17) to obtain the antisymmetric matrix K(q):

K(q) =
|  0    −q_2   q_1 |
|  q_2   0    −q_0 |
| −q_1   q_0   0   |   (17)

7) Use formula (18) to obtain the three-dimensional rotation matrix R_O^{k+1}:

R_O^{k+1} = (q_3² + q^T q + 2 q_3 K(q))   (18)

8) Use formula (19) to obtain the translation vector t^{k+1}:

t^{k+1} = μ_Q − R_O^{k+1} μ_P   (19).
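Steps 1) through 8) are a quaternion-based closed-form absolute orientation (Horn's method). The sketch below makes several stated assumptions that differ from the claim as printed: it uses a scalar-first unit quaternion (q_0 scalar, where the claim appears to place the scalar component in q_3), the standard sign convention for the off-diagonal terms of the 4×4 matrix, and the standard quaternion-to-rotation formula in place of formula (18):

```python
import numpy as np

def rigid_transform_quaternion(P, Q):
    """Estimate R, t aligning point set P onto Q by the quaternion
    eigenvector method of claim 5."""
    mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)    # formulas (11)/(12)
    M = (P - mu_p).T @ (Q - mu_q)                  # formula (15), unnormalised
    # formula (16): 4x4 symmetric matrix built from the correlations
    N = np.array([
        [M[0,0]+M[1,1]+M[2,2], M[1,2]-M[2,1], M[2,0]-M[0,2], M[0,1]-M[1,0]],
        [M[1,2]-M[2,1], M[0,0]-M[1,1]-M[2,2], M[0,1]+M[1,0], M[2,0]+M[0,2]],
        [M[2,0]-M[0,2], M[0,1]+M[1,0], -M[0,0]+M[1,1]-M[2,2], M[1,2]+M[2,1]],
        [M[0,1]-M[1,0], M[2,0]+M[0,2], M[1,2]+M[2,1], -M[0,0]-M[1,1]+M[2,2]],
    ])
    w, V = np.linalg.eigh(N)
    q0, qx, qy, qz = V[:, np.argmax(w)]            # unit quaternion, max eigenvalue
    # standard scalar-first quaternion-to-rotation conversion
    R = np.array([
        [1-2*(qy*qy+qz*qz), 2*(qx*qy-qz*q0), 2*(qx*qz+qy*q0)],
        [2*(qx*qy+qz*q0), 1-2*(qx*qx+qz*qz), 2*(qy*qz-qx*q0)],
        [2*(qx*qz-qy*q0), 2*(qy*qz+qx*q0), 1-2*(qx*qx+qy*qy)],
    ])
    t = mu_q - R @ mu_p                            # formula (19)
    return R, t
```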
CN201410428258.1A 2014-08-27 2014-08-27 TOF camera based bad driving behavior detection method Expired - Fee Related CN104200199B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410428258.1A CN104200199B (en) 2014-08-27 2014-08-27 TOF camera based bad driving behavior detection method

Publications (2)

Publication Number Publication Date
CN104200199A true CN104200199A (en) 2014-12-10
CN104200199B CN104200199B (en) 2017-04-05

Family

ID=52085489

Country Status (1)

Country Link
CN (1) CN104200199B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080243558A1 (en) * 2007-03-27 2008-10-02 Ash Gupte System and method for monitoring driving behavior with feedback
CN102982316A (en) * 2012-11-05 2013-03-20 安维思电子科技(广州)有限公司 Driver abnormal driving behavior recognition device and method thereof


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SHUZO NORIDOMI et al.: "Driving behavior analysis using vision-based head pose estimation for enhanced communication among traffic participants", 2013 INTERNATIONAL CONFERENCE ON CONNECTED VEHICLES AND EXPO *
朱玉华 (Zhu Yuhua) et al.: "A feature-triangle-based method for analyzing driver head orientation", 13th Annual Conference of the Chinese Association for Artificial Intelligence *
黄思博 (Huang Sibo): "Research on computer-vision-based abnormal driving behavior detection methods", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106291536A (en) * 2015-06-03 2017-01-04 Door and window contact systems and methods that include time-of-flight sensors
CN109556511A (en) * 2018-11-14 2019-04-02 Suspended high-throughput greenhouse plant phenotype measuring system based on multi-view RGB-D fusion technology
CN109977786A (en) * 2019-03-01 2019-07-05 Driver posture detection method based on video and skin color area distance
CN109977786B (en) * 2019-03-01 2021-02-09 Driver posture detection method based on video and skin color area distance
CN110046560A (en) * 2019-03-28 2019-07-23 Dangerous driving behavior detection method and camera
CN110046560B (en) * 2019-03-28 2021-11-23 Dangerous driving behavior detection method and camera
CN110599407A (en) * 2019-06-21 2019-12-20 Human body noise reduction method and system based on multiple downward-tilted TOF cameras
CN110708518A (en) * 2019-11-05 2020-01-17 People flow analysis, early warning and dispersion method and system
CN112634270A (en) * 2021-03-09 2021-04-09 Imaging detection system and method based on industrial internet
CN112634270B (en) * 2021-03-09 2021-06-04 Imaging detection system and method based on industrial internet
CN112990153A (en) * 2021-05-11 2021-06-18 Multi-target behavior identification method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170405
Termination date: 20190827