CN104200199B - Bad driving behavior detection method based on TOF camera - Google Patents
- Publication number
- CN104200199B (application CN201410428258.1A / CN201410428258A)
- Authority
- CN
- China
- Prior art keywords
- formula
- rectangle frame
- point cloud
- pixel
- three-dimensional point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a bad driving behavior detection method based on a TOF camera, characterized by being carried out as follows: Step 1, depth image acquisition; Step 2, detection and matching of the head region; Step 3, detection and tracking of the hand region. The present invention can accurately and effectively detect a driver's bad driving behavior without disturbing normal driving, improving recognition accuracy and thereby reducing the incidence of traffic accidents.
Description
Technical field
The present invention relates to motion recognition and driver-assistance safety technology based on a TOF (time-of-flight) camera, and belongs to the field of 3D-vision intelligent traffic safety.
Background technology
With the continuous rise of urbanization and motorization in China, traffic safety has become an increasingly serious problem. Analysis of the causes of road traffic accidents in China in recent years shows that about 90% of vehicle accidents are caused by drivers' bad driving behavior. Research on how to intervene in unsafe driving behavior has therefore become a key problem in ensuring road traffic safety.
Current research on driving behavior, both at home and abroad, mainly uses 2D cameras to capture video of the driver, detecting external states such as eyelid motion, eye closure, nodding, and mouth movements, and developing warning and monitoring devices to keep the driver alert. However, 2D cameras are easily affected by illumination, shadow, and skin color, and projecting a real 3D scene onto a 2D plane loses range information; recognizing driving behavior with a 2D camera therefore suffers from information loss, inaccurate detection, and a low recognition rate.
Summary of the invention
To overcome the shortcomings of the prior art, the present invention proposes a bad driving behavior detection method based on a TOF camera, which can effectively detect a driver's bad driving behavior without disturbing normal driving, improve recognition accuracy, and thereby reduce the incidence of traffic accidents.
The present invention adopts the following technical scheme to solve this technical problem:
The bad driving behavior detection method based on a TOF camera of the present invention is characterized by being carried out as follows:
Step 1, depth image acquisition:
A TOF camera is used to acquire depth images of the driver's driving behavior over a time period T = (t1, t2, ..., tv, ..., tm); the driving behavior depth image at time t1 is selected as the initial depth image, and the driving behavior depth images from time t2 to time tm are the sequence depth images;
Step 2, detection and matching of the head region:
2.1. The face region in the initial depth image is detected using the AdaBoost algorithm, the detected face region is marked with a head rectangle frame, and the center position of the head rectangle frame is obtained;
2.2. The head rectangle frame is expanded outward by A%, yielding an expanded head rectangle frame;
2.3. The pixels of the face region within the head rectangle frame form the reference three-dimensional point cloud, and the pixels of the face region within the expanded head rectangle frame form the three-dimensional point cloud to be matched; the ICP algorithm performs three-dimensional registration between the point cloud to be matched and the reference point cloud, and the final iteration count of the ICP algorithm is obtained and compared with a set threshold; if the final iteration count ≥ the set threshold, bad driving behavior is judged; otherwise the head indicates normal driving behavior, and step 3 is executed;
Step 3, detection and tracking of the hand region:
3.1. The driver's entire upper body in the initial depth image is marked, yielding an upper-body rectangle frame;
3.2. The region with the minimum depth value within the upper-body rectangle frame is taken as the hand region and marked with a bounding rectangle frame; the width of the bounding rectangle frame is the width of the head rectangle frame, and the length of the bounding rectangle frame is twice the length of the head rectangle frame;
3.3. The hand region within the bounding rectangle frame in the sequence depth images is tracked using the Kalman filtering algorithm, and the center position of the bounding rectangle frame in the sequence depth images is obtained;
3.4. The Euclidean distance between the center position of the bounding rectangle frame and the center position of the head rectangle frame is obtained;
3.5. If the Euclidean distance ≥ a set distance threshold, bad driving behavior is judged; otherwise normal driving behavior is judged.
The bad driving behavior detection method based on a TOF camera of the present invention is further characterized in that:
In step 2.1, the AdaBoost algorithm is carried out as follows:
Step a. Define the sequence depth images as training samples I = {(x1, y1), (x2, y2), ..., (xi, yi), ..., (xN, yN)}, 1 ≤ i ≤ N, where xi denotes the i-th training sample, yi ∈ {0, 1}, yi = 0 denotes a positive sample and yi = 1 a negative sample;
Step b. Haar features are extracted from each training sample in I, yielding the Haar feature set F = {f1, f2, ..., fj, ..., fM}, where fj denotes the j-th Haar feature, 1 ≤ j ≤ M;
Step c. The classifier hj(xi) of the j-th Haar feature fj on the i-th training sample is obtained using formula (1):
hj(xi) = 1 if pj·fj(xi) < pj·θ, and hj(xi) = 0 otherwise (1)
In formula (1), pj is the direction parameter of the classifier hj(xi), pj = ±1, and θ is a threshold;
Step d. Step c is repeated, yielding the set of M classifiers H = {h1(xi), h2(xi), ..., hj(xi), ..., hM(xi)};
Step e. The weighted classification error εj of the j-th classifier hj(xi) is obtained using formula (2):
εj = Σi ωi·|hj(xi) − yi| (2)
In formula (2), ωi denotes the weight of the i-th training sample after normalization;
Step f. Step e is repeated, yielding the set of M weighted classification errors E = {ε1, ε2, ..., εj, ..., εM};
Step g. The classifier with the minimum weighted classification error in E is chosen, and the Haar feature corresponding to that classifier is used to detect the face region in the initial depth image.
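Steps a–g amount to training one decision stump per Haar feature and keeping the stump with the smallest weighted error. Below is a minimal Python sketch of that selection; it operates on a precomputed feature matrix rather than real Haar responses on depth images, and the toy data are illustrative assumptions:

```python
import numpy as np

def best_stump(features, labels, weights):
    """Pick the (feature, threshold, polarity) stump with lowest weighted error.

    features: (N, M) array, features[i, j] = value of j-th Haar feature on sample i
    labels:   (N,) array of 0/1 class labels
    weights:  (N,) normalized sample weights (summing to 1)
    """
    n, m = features.shape
    best = (None, None, None, np.inf)   # (j, theta, p, error)
    for j in range(m):
        for theta in np.unique(features[:, j]):
            for p in (+1, -1):
                # h_j(x) = 1 when p * f_j(x) < p * theta, else 0  (formula (1))
                pred = (p * features[:, j] < p * theta).astype(int)
                # weighted classification error (formula (2))
                err = np.sum(weights * np.abs(pred - labels))
                if err < best[3]:
                    best = (j, theta, p, err)
    return best

# toy example: feature 0 separates the two classes at a threshold of 0.8
X = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]])
y = np.array([1, 1, 0, 0])
w = np.full(4, 0.25)
j, theta, p, err = best_stump(X, y, w)
```

A full AdaBoost training loop would re-weight the samples and repeat this selection; the patent only uses the single best feature for detection.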
The ICP algorithm in step 2.3 is carried out as follows:
Step a. Define the three-dimensional space as R³;
Define the initial coordinates of the pixels in the three-dimensional point cloud to be matched as {Pi | Pi ∈ R³, i = 1, ..., NP}, where NP denotes the total number of pixels in the point cloud to be matched;
Define the initial coordinates of the pixels in the reference three-dimensional point cloud as {Qi | Qi ∈ R³, i = 1, ..., NQ}, where NQ denotes the total number of pixels in the reference point cloud;
Step b. Define the total number of iterations as W;
At the k-th iteration, k ∈ W: for each coordinate point of the initial coordinates {Qi | Qi ∈ R³, i = 1, ..., NQ} of the reference point cloud, the nearest point among the coordinates {Pi^k | Pi^k ∈ R³, i = 1, ..., NP} of the point cloud to be matched is found; these nearest points form the coordinates {Qi^k} of the pixels of the reference three-dimensional point cloud;
Step c. At the (k+1)-th iteration, the coordinates {Pi^(k+1) | Pi^(k+1) ∈ R³, i = 1, ..., NP} of the pixels in the point cloud to be matched and the coordinates {Qi^(k+1)} of the pixels in the reference point cloud are obtained using formulas (3) and (4) respectively;
In formulas (3) and (4), RO^(k+1) denotes the three-dimensional rotation matrix and t^(k+1) the translation vector;
Step d. The Euclidean distance d(k+1) between the coordinates {Pi^(k+1)} of the point cloud to be matched at the (k+1)-th iteration and the coordinates {Qi^k} of the reference point cloud at the k-th iteration is obtained using formula (5);
Step e. Steps b–d are repeated; when the minimum of the Euclidean distance d(k+1) is reached, iteration stops and the final iteration count W' is obtained.
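The loop of steps a–e (nearest-neighbor correspondence, rigid-transform estimation, repeat until the point-to-point error stops decreasing, report the iteration count) can be sketched as follows. The SVD-based transform estimate here stands in for the quaternion construction detailed separately, and the convergence tolerance and toy clouds are illustrative assumptions:

```python
import numpy as np

def icp(P, Q, max_iter=50, tol=1e-6):
    """Register point cloud P (to be matched) to reference cloud Q.

    Returns the transformed P and the number of iterations actually used,
    which is the quantity the method compares against a threshold.
    """
    P = P.copy()
    prev_err = np.inf
    for k in range(1, max_iter + 1):
        # correspondence step: for every point of P, find the nearest point of Q
        d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
        matched = Q[np.argmin(d, axis=1)]
        # transform step: best rigid transform P -> matched (SVD in place of quaternions)
        mu_p, mu_q = P.mean(axis=0), matched.mean(axis=0)
        H = (P - mu_p).T @ (matched - mu_q)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # avoid a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_q - R @ mu_p
        P = P @ R.T + t
        # stop once the mean squared point-to-point distance stops decreasing
        err = np.mean(np.sum((P - matched) ** 2, axis=1))
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return P, k

# toy check: a slightly translated copy of Q should register in few iterations
rng = np.random.default_rng(0)
Q = rng.normal(size=(30, 3))
P = Q + np.array([0.1, -0.05, 0.02])
P_reg, iters = icp(P, Q)
```

A well-matched pair converges in very few iterations, while a poorly matched pair uses many, which is exactly the signal the method thresholds.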
The Kalman filtering algorithm in step 3.3 is carried out as follows:
1) Let the hand state at time t2 be the initial hand state, and perform initialization;
2) The predicted hand state at time tv within the bounding rectangle frame of the sequence depth images is obtained using formula (6); formula (6) involves the hand state transition matrix, the interval t between two adjacent moments, and the hand state at time t(v−1) within the bounding rectangle frame of the sequence depth images;
3) The covariance matrix of the hand state at time t(v−1) within the bounding rectangle frame is obtained using formula (7); formula (7) involves the predicted hand state at time t(v−1) and the hand state at time t(v−2) within the bounding rectangle frame of the sequence depth images;
4) The covariance matrix of the predicted hand state at time tv is obtained using formula (8); formula (8) involves the transpose of the hand state transition matrix and the dynamic noise covariance matrix, which obeys a standard normal distribution;
5) The hand state at time tv within the bounding rectangle frame is updated using formula (9); formula (9) involves the hand state observation matrix and the Kalman filter gain coefficient at time tv;
6) The Kalman filter gain coefficient is calculated using formula (10); formula (10) involves the transpose of the hand state observation matrix and the noise covariance matrix, which obeys a standard normal distribution Rk ~ N(0, 1);
7) Steps 1)–6) are repeated, continually updating the hand state at time tv within the bounding rectangle frame, so that tracking of the hand region within the bounding rectangle frame is realized, and the center position of the bounding rectangle frame in the sequence depth images is obtained from the hand region.
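Steps 1)–7) describe a standard predict–update Kalman cycle on the hand state. A minimal sketch with a constant-velocity state (position and velocity of the box center); the concrete noise covariances and matrices are illustrative assumptions, since the patent's formula images are not reproduced here:

```python
import numpy as np

T = 0.2                                   # interval between two adjacent moments (s)
A = np.array([[1, 0, T, 0],               # constant-velocity state transition matrix
              [0, 1, 0, T],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],               # observation matrix: position is observed
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-3                      # dynamic (process) noise covariance
R = np.eye(2) * 1e-2                      # observation noise covariance

def kalman_step(x, P, z):
    """One predict/update cycle; returns the new state estimate and covariance."""
    x_pred = A @ x                         # state prediction (cf. formula (6))
    P_pred = A @ P @ A.T + Q               # covariance prediction (cf. formula (8))
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)  # gain (cf. formula (10))
    x_new = x_pred + K @ (z - H @ x_pred)  # measurement update (cf. formula (9))
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

# track a hand-box center moving at constant velocity (0.5, 0.25) units/s
x = np.zeros(4)
P = np.eye(4)
for step in range(1, 51):
    z = np.array([0.5, 0.25]) * step * T   # noiseless measurements for the demo
    x, P = kalman_step(x, P, z)
```

After the 50 demo steps the state estimate settles on the true position and velocity, which is what lets the filter bridge frames where hand detection is unreliable.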
The three-dimensional rotation matrix RO^(k+1) and translation vector t^(k+1) in step c are obtained as follows:
1) The centroid μP of the coordinates {Pi^(k+1)} of the pixels in the point cloud to be matched at the (k+1)-th iteration and the centroid μQ of the coordinates {Qi^k} of the pixels in the reference point cloud are obtained using formulas (11) and (12);
2) The translations of the coordinates {Pi^(k+1)} relative to the centroid μP and of the coordinates {Qi^k} relative to the centroid μQ are obtained using formulas (13) and (14) respectively;
3) The correlation matrix Kαβ between the two sets of translations is obtained using formula (15); in formula (15), α, β = 1, 2, 3;
4) The four-dimensional symmetric matrix K constructed from the correlation matrix Kαβ is obtained using formula (16);
5) The maximum eigenvalue of the four-dimensional symmetric matrix K is obtained, and the unit eigenvector q = [q0, q1, q2, q3]^T is obtained from the maximum eigenvalue;
6) The antisymmetric matrix K(q) is obtained using formula (17);
7) The three-dimensional rotation matrix RO^(k+1) is obtained using formula (18);
8) The translation vector t^(k+1) is obtained using formula (19).
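Steps 1)–8) follow the closed-form quaternion method: center both clouds, build the 3×3 correlation matrix, assemble a 4×4 symmetric matrix from it, take the unit eigenvector of the largest eigenvalue as a quaternion, and convert it to a rotation matrix. A compact sketch under those definitions; since the entries of the patent's formulas (16)–(18) are images not reproduced here, the standard construction is assumed:

```python
import numpy as np

def quaternion_transform(P, Q):
    """Closed-form rotation R and translation t such that Q ≈ P @ R.T + t."""
    mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)      # centroids (formulas (11)-(12))
    Pc, Qc = P - mu_p, Q - mu_q                      # centered clouds (formulas (13)-(14))
    K = Pc.T @ Qc                                    # 3x3 correlation matrix (formula (15))
    # 4x4 symmetric matrix built from K (formula (16))
    tr = np.trace(K)
    delta = np.array([K[1, 2] - K[2, 1], K[2, 0] - K[0, 2], K[0, 1] - K[1, 0]])
    M = np.zeros((4, 4))
    M[0, 0] = tr
    M[0, 1:] = M[1:, 0] = delta
    M[1:, 1:] = K + K.T - tr * np.eye(3)
    # unit eigenvector of the largest eigenvalue is the quaternion q = [q0,q1,q2,q3]
    w, V = np.linalg.eigh(M)
    q0, q1, q2, q3 = V[:, np.argmax(w)]
    # rotation matrix from the unit quaternion (formula (18))
    R = np.array([
        [q0**2 + q1**2 - q2**2 - q3**2, 2*(q1*q2 - q0*q3), 2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3), q0**2 - q1**2 + q2**2 - q3**2, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2), 2*(q2*q3 + q0*q1), q0**2 - q1**2 - q2**2 + q3**2],
    ])
    t = mu_q - R @ mu_p                              # translation (formula (19))
    return R, t

# recover a known rotation about the z-axis plus a translation
rng = np.random.default_rng(1)
P = rng.normal(size=(20, 3))
c, s = np.cos(0.3), np.sin(0.3)
Rz = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
Q = P @ Rz.T + np.array([1.0, 2.0, 3.0])
R, t = quaternion_transform(P, Q)
```

The quaternion's sign ambiguity is harmless because the rotation matrix is quadratic in its components.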
Compared with the prior art, the beneficial effects of the present invention are:
1. The present invention realizes detection of the driver's head and tracking of the hand state through the AdaBoost algorithm and Kalman filtering, and completes three-dimensional point cloud registration between the current depth image and the reference depth image with the ICP algorithm. This solves the prior-art problem of 2D cameras being easily affected by illumination, shadow, skin color, and the color of the driver's clothing, improving the detection accuracy of bad driving behavior and reducing the traffic accident rate;
2. The present invention recognizes the driver using TOF range information; since a TOF camera acquires depth images quickly, with a frame rate of up to 40 fps, the method has good real-time performance;
3. The AdaBoost algorithm used in the present invention is a high-precision classifier, so detection errors introduced by the driver's head movement do not strongly affect subsequent detection results; and because the algorithm's framework is simple and requires no feature screening, the detection-speed problem is solved and good recognition performance is achieved;
4. The ICP algorithm used in the present invention requires no segmentation or feature extraction of the point sets to be processed; given an accurate initial position it converges well, and it achieves highly accurate registration of three-dimensional point clouds;
5. The Kalman filtering algorithm used in the present invention estimates under the minimum mean-square error optimality criterion; it is mathematically simple, is an optimal linear recursive filtering method, can effectively solve target tracking based on irregular continuous hand images, and has a small computational load.
Description of the drawings
Fig. 1 is a schematic diagram of the detection system of the present invention;
Fig. 2 is a flow chart of the detection method.
Reference numerals: 1 head rectangle frame; 2 expanded head rectangle frame; 3 upper-body rectangle frame; 4 bounding rectangle frame; 5 TOF camera.
Specific embodiment
In this embodiment, a bad driving behavior detection method based on a TOF camera acquires depth images of the driver in real time through a TOF camera installed diagonally above the driver in the vehicle. Using the head rectangle frame, the current depth image and a reference image are registered in three dimensions by the ICP (iterative closest point) algorithm, and whether bad driving behavior has occurred is judged from the iteration count; the hand is detected by background subtraction and tracked, and whether bad driving behavior has occurred is judged from the positions of the hand and the head rectangle frame. Specifically, the method proceeds as follows:
Step 1, depth image acquisition:
The TOF camera is installed diagonally above the driver in the cab and connected to a microprocessor, capable of issuing alarms, mounted on the vehicle console. The TOF camera acquires depth images of the driver's driving behavior over a time period T = (t1, t2, ..., tv, ..., tm); the driving behavior depth image at time t1 is selected as the initial depth image, and the driving behavior depth images from time t2 to time tm are the sequence depth images;
Step 2, detection and matching of the head region:
2.1. The face region in the initial depth image is detected using the AdaBoost algorithm, the detected face region is marked with a head rectangle frame, and the center position of the head rectangle frame is obtained;
Specifically, the AdaBoost algorithm is carried out as follows:
a) Define the sequence depth images as training samples I = {(x1, y1), (x2, y2), ..., (xi, yi), ..., (xN, yN)}, 1 ≤ i ≤ N, where xi denotes the i-th training sample, yi ∈ {0, 1}, yi = 0 denotes a positive sample and yi = 1 a negative sample;
b) Haar features are extracted from each training sample in I, yielding the Haar feature set F = {f1, f2, ..., fj, ..., fM}, where fj denotes the j-th Haar feature, 1 ≤ j ≤ M;
c) The classifier hj(xi) of the j-th Haar feature fj on the i-th training sample is obtained using formula (1);
In formula (1), pj is the direction parameter of the classifier hj(xi), pj = ±1, and θ is a threshold, taken as 0.15 in this example;
d) Step c) is repeated, yielding the set of M classifiers H = {h1(xi), h2(xi), ..., hj(xi), ..., hM(xi)};
e) The weighted classification error εj of the j-th classifier hj(xi) is obtained using formula (2);
In formula (2), ωi denotes the weight of the i-th training sample after normalization;
f) Step e) is repeated, yielding the set of M weighted classification errors E = {ε1, ε2, ..., εj, ..., εM};
g) The classifier with the minimum weighted classification error in E is chosen, and the Haar feature corresponding to that classifier is used to detect the face region in the initial depth image.
2.2. The head rectangle frame is expanded outward by A%. The reason for the expansion is that a driver makes slight head movements during normal driving, which within this range do not constitute bad driving behavior; A is taken as 10 in this example, yielding the expanded head rectangle frame;
2.3. The pixels of the face region within the head rectangle frame form the reference three-dimensional point cloud, where a three-dimensional point cloud is a set of pixels; the pixels of the face region within the expanded head rectangle frame form the three-dimensional point cloud to be matched. The ICP (iterative closest point) algorithm performs three-dimensional registration between the point cloud to be matched and the reference point cloud, and the final iteration count of the ICP algorithm is obtained and compared with a set threshold, which is 50 in this example. If the final iteration count ≥ the set threshold, bad driving behavior is judged; otherwise the head indicates normal driving behavior, and step 3 is executed. The basis for judging by iteration count is that ICP is an iterative matching algorithm that must iterate repeatedly during matching: if the two driving behavior images match well, the final iteration count is small and driving is normal; if the two images differ greatly and match poorly, the final iteration count is large and driving is bad. The iteration count of matching the driver's driving behavior images is therefore chosen to judge whether bad driving has occurred.
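The iteration-count judgment above reduces to a single comparison. A trivial sketch, where `final_iterations` stands for the count W' returned by the ICP routine and 50 is this embodiment's chosen threshold:

```python
ITER_THRESHOLD = 50  # this embodiment's chosen threshold

def head_is_bad(final_iterations, threshold=ITER_THRESHOLD):
    """Many ICP iterations => poor match between current and reference head pose."""
    return final_iterations >= threshold

print(head_is_bad(12))  # well-matched head pose -> False
print(head_is_bad(63))  # poorly matched head pose -> True
```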
Specifically, the ICP algorithm is carried out as follows:
h) Define the three-dimensional space as R³;
Define the initial coordinates of the pixels in the three-dimensional point cloud to be matched as {Pi | Pi ∈ R³, i = 1, ..., NP}, where NP denotes the total number of pixels in the point cloud to be matched;
Define the initial coordinates of the pixels in the reference three-dimensional point cloud as {Qi | Qi ∈ R³, i = 1, ..., NQ}, where NQ denotes the total number of pixels in the reference point cloud;
i) Define the total number of iterations as W;
At the k-th iteration, k ∈ W: for each coordinate point of {Qi | Qi ∈ R³, i = 1, ..., NQ}, the nearest point among the coordinates {Pi^k | Pi^k ∈ R³, i = 1, ..., NP} of the point cloud to be matched is found; these nearest points form the coordinates {Qi^k} of the pixels of the reference three-dimensional point cloud;
j) At the (k+1)-th iteration, the coordinates {Pi^(k+1) | Pi^(k+1) ∈ R³, i = 1, ..., NP} of the pixels in the point cloud to be matched and the coordinates {Qi^(k+1)} of the pixels in the reference point cloud are obtained using formulas (3) and (4) respectively;
In formulas (3) and (4), RO^(k+1) denotes the three-dimensional rotation matrix and t^(k+1) the translation vector;
Specifically, the three-dimensional rotation matrix RO^(k+1) and the translation vector t^(k+1) are obtained as follows:
j1) The centroid μP of the coordinates {Pi^(k+1)} of the pixels in the point cloud to be matched at the (k+1)-th iteration and the centroid μQ of the coordinates {Qi^k} of the pixels in the reference point cloud are obtained using formulas (5) and (6);
j2) The translations of the coordinates {Pi^(k+1)} relative to the centroid μP and of the coordinates {Qi^k} relative to the centroid μQ are obtained using formulas (7) and (8) respectively;
j3) The correlation matrix Kαβ between the two sets of translations is obtained using formula (9); in formula (9), α, β = 1, 2, 3;
j4) The four-dimensional symmetric matrix K constructed from the correlation matrix Kαβ is obtained using formula (10);
j5) The maximum eigenvalue of the four-dimensional symmetric matrix K is obtained, and the unit eigenvector q = [q0, q1, q2, q3]^T is obtained from the maximum eigenvalue;
j6) The antisymmetric matrix K(q) is obtained using formula (11);
j7) The three-dimensional rotation matrix RO^(k+1) is obtained using formula (12);
j8) The translation vector t^(k+1) is obtained using formula (13);
k) The Euclidean distance d(k+1) between the coordinates {Pi^(k+1)} of the point cloud to be matched at the (k+1)-th iteration and the coordinates {Qi^k} of the reference point cloud at the k-th iteration is obtained using formula (14);
l) Steps i)–k) are repeated; when the minimum of the Euclidean distance d(k+1) is reached, iteration stops and the final iteration count W' is obtained.
Step 3, detection and tracking of the hand region:
3.1. The driver's entire upper body in the initial depth image is marked, yielding an upper-body rectangle frame;
3.2. The region with the minimum depth value within the upper-body rectangle frame is taken as the hand region, because the hand is closest to the TOF camera and therefore has the minimum depth value, and the hand region is marked with a bounding rectangle frame. The width of the bounding rectangle frame is the width of the head rectangle frame, and its length is twice the length of the head rectangle frame; in general a person's hand and head are about the same size, and considering that the hand has a certain range of motion during normal driving, the width of the bounding rectangle frame is set to the width of the head rectangle frame and its length to twice the length of the head rectangle frame.
3.3. The hand region within the bounding rectangle frame in the sequence depth images is tracked using the Kalman filtering algorithm, and the center position of the bounding rectangle frame in the sequence depth images is obtained; the center position of the bounding rectangle frame is obtained according to geometric relations.
Specifically, the Kalman filtering algorithm is carried out as follows:
m) Let the hand state at time t2 be the initial hand state, and perform initialization;
n) The predicted hand state at time tv within the bounding rectangle frame of the sequence depth images is obtained using formula (15); formula (15) involves the hand state transition matrix, the interval t between two adjacent moments, taken as 0.2 s in this example, and the hand state at time t(v−1) within the bounding rectangle frame of the sequence depth images;
o) The covariance matrix of the hand state at time t(v−1) within the bounding rectangle frame is obtained using formula (16); formula (16) involves the predicted hand state at time t(v−1) and the hand state at time t(v−2) within the bounding rectangle frame of the sequence depth images;
p) The covariance matrix of the predicted hand state at time tv is obtained using formula (17); formula (17) involves the transpose of the hand state transition matrix and the dynamic noise covariance matrix, which obeys a standard normal distribution;
q) The hand state at time tv within the bounding rectangle frame is updated using formula (18); formula (18) involves the hand state observation matrix and the Kalman filter gain coefficient at time tv;
r) The Kalman filter gain coefficient is calculated using formula (19); formula (19) involves the transpose of the hand state observation matrix and the noise covariance matrix, which obeys a standard normal distribution Rk ~ N(0, 1);
s) Steps m)–q) are repeated, continually updating the hand state at time tv within the bounding rectangle frame, so that tracking of the hand region within the bounding rectangle frame is realized, and the center position of the bounding rectangle frame in the sequence depth images is obtained from the hand region.
3.4. The Euclidean distance between the center position of the bounding rectangle frame and the center position of the head rectangle frame is obtained;
3.5. If the Euclidean distance ≥ the set distance threshold, which is 40 cm in this example, bad driving behavior is judged; otherwise normal driving behavior is judged.
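The decision of steps 3.4–3.5 is a single Euclidean-distance comparison between the two box centers. A trivial sketch, assuming 3-D centers in centimeters and using the embodiment's 40 cm threshold:

```python
import math

DIST_THRESHOLD_CM = 40.0  # this embodiment's chosen distance threshold

def is_bad_driving(hand_center, head_center, threshold=DIST_THRESHOLD_CM):
    """Flag bad driving when the hand box center strays too far from the head box center."""
    dx = hand_center[0] - head_center[0]
    dy = hand_center[1] - head_center[1]
    dz = hand_center[2] - head_center[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz) >= threshold

# hand reaching far below the head region vs. resting near it
print(is_bad_driving((10, 50, 20), (12, 10, 18)))  # distance ≈ 40.1 cm -> True
print(is_bad_driving((0, 0, 0), (10, 10, 10)))     # distance ≈ 17.3 cm -> False
```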
Claims (5)
1. A bad driving behavior detection method based on a TOF camera, characterized by being carried out as follows:
Step 1, depth image acquisition:
A TOF camera is used to acquire depth images of the driver's driving behavior over a time period T = (t1, t2, ..., tv, ..., tm); the driving behavior depth image at time t1 is selected as the initial depth image, and the driving behavior depth images from time t2 to time tm are the sequence depth images;
Step 2, detection and matching of the head region:
2.1. The face region in the initial depth image is detected using the AdaBoost algorithm, the detected face region is marked with a head rectangle frame, and the center position of the head rectangle frame is obtained;
2.2. The head rectangle frame is expanded outward by A%, yielding an expanded head rectangle frame;
2.3. The pixels of the face region within the head rectangle frame form the reference three-dimensional point cloud, and the pixels of the face region within the expanded head rectangle frame form the three-dimensional point cloud to be matched; the ICP algorithm performs three-dimensional registration between the point cloud to be matched and the reference point cloud, and the final iteration count of the ICP algorithm is obtained and compared with a set threshold; if the final iteration count ≥ the set threshold, bad driving behavior is judged; otherwise the head indicates normal driving behavior, and step 3 is executed;
Step 3, detection and tracking of the hand region:
3.1. The driver's entire upper body in the initial depth image is marked, yielding an upper-body rectangle frame;
3.2. The region with the minimum depth value within the upper-body rectangle frame is taken as the hand region and marked with a bounding rectangle frame; the width of the bounding rectangle frame is the width of the head rectangle frame, and the length of the bounding rectangle frame is twice the length of the head rectangle frame;
3.3. The hand region within the bounding rectangle frame in the sequence depth images is tracked using the Kalman filtering algorithm, and the center position of the bounding rectangle frame in the sequence depth images is obtained;
3.4. The Euclidean distance between the center position of the bounding rectangle frame and the center position of the head rectangle frame is obtained;
3.5. If the Euclidean distance ≥ a set distance threshold, bad driving behavior is judged; otherwise normal driving behavior is judged.
2. The bad driving behavior detection method based on a TOF camera according to claim 1, characterized in that in step 2.1 the AdaBoost algorithm is carried out as follows:
Step a. Define the sequence depth images as training samples I = {(x1, y1), (x2, y2), ..., (xi, yi), ..., (xN, yN)}, 1 ≤ i ≤ N, where xi denotes the i-th training sample, yi ∈ {0, 1}, yi = 0 denotes a positive sample and yi = 1 a negative sample;
Step b. Haar features are extracted from each training sample in I, yielding the Haar feature set F = {f1, f2, ..., fj, ..., fM}, where fj denotes the j-th Haar feature, 1 ≤ j ≤ M;
Step c. The classifier hj(xi) of the j-th Haar feature fj on the i-th training sample is obtained using formula (1);
In formula (1), pj is the direction parameter of the classifier hj(xi), pj = ±1, and θ is a threshold;
Step d. Step c is repeated, yielding the set of M classifiers H = {h1(xi), h2(xi), ..., hj(xi), ..., hM(xi)};
Step e. The weighted classification error εj of the j-th classifier hj(xi) is obtained using formula (2);
In formula (2), ωi denotes the weight of the i-th training sample after normalization;
Step f. Step e is repeated, yielding the set of M weighted classification errors E = {ε1, ε2, ..., εj, ..., εM};
Step g. The classifier with the minimum weighted classification error in E is chosen, and the Haar feature corresponding to that classifier is used to detect the face region in the initial depth image.
3. The bad driving behavior detection method based on a TOF camera according to claim 1, characterized in that in step 2.3 the ICP algorithm is carried out as follows:
Step a. Define the three-dimensional space as R³;
Define the initial coordinates of the pixels in the three-dimensional point cloud to be matched as {Pi | Pi ∈ R³, i = 1, ..., NP}, where NP denotes the total number of pixels in the point cloud to be matched;
Define the initial coordinates of the pixels in the reference three-dimensional point cloud as {Qi | Qi ∈ R³, i = 1, ..., NQ}, where NQ denotes the total number of pixels in the reference point cloud;
Step b. Define the total number of iterations as W;
At the k-th iteration, k ∈ W: for each coordinate point of the initial coordinates {Qi | Qi ∈ R³, i = 1, ..., NQ} of the reference point cloud, the nearest point among the coordinates {Pi^k | Pi^k ∈ R³, i = 1, ..., NP} of the point cloud to be matched is found; these nearest points form the coordinates {Qi^k} of the pixels of the reference three-dimensional point cloud;
Step c. At the (k+1)-th iteration, the coordinates {Pi^(k+1) | Pi^(k+1) ∈ R³, i = 1, ..., NP} of the pixels in the point cloud to be matched and the coordinates {Qi^(k+1)} of the pixels in the reference point cloud are obtained using formulas (3) and (4) respectively;
In formulas (3) and (4), RO^(k+1) denotes the three-dimensional rotation matrix and t^(k+1) the translation vector;
Step d. The Euclidean distance d(k+1) between the coordinates {Pi^(k+1)} of the point cloud to be matched at the (k+1)-th iteration and the coordinates {Qi^k} of the reference point cloud at the k-th iteration is obtained using formula (5);
Step e. Steps b–d are repeated; when the minimum of the Euclidean distance d(k+1) is reached, iteration stops and the final iteration count W' is obtained.
4. the bad steering behavioral value method based on TOF camera according to claim 1, is characterized in that:The step
Kalman filtering algorithm in 3.3 is to carry out as follows:
1), make t2The hand state at moment isInitialization
2) t in the boundary rectangle frame of the sequence depth image, is obtained using formula (6)vThe hand predicted state at moment
In formula (6),Hand state transfer matrix is represented,T be between the two neighboring moment between
Every;Represent in the sequence depth image t in external rectangle framev-1The hand state at moment;
3), t is obtained using formula (7)v-1The hand state of the boundary rectangle inframe at momentCovariance matrix
In formula (7),Represent t in the boundary rectangle frame of the sequence depth imagev-1The hand predicted state at moment;
Represent t in the boundary rectangle frame of the sequence depth imagev-2The hand state at moment;
4), obtain, using formula (8), the covariance matrix of the predicted hand state at time t_v:
In formula (8), the transpose of the hand state transition matrix appears, and the dynamic noise covariance matrix obeys the standard normal distribution;
5), update, using formula (9), the hand state within the bounding rectangle frame at time t_v;
In formula (9), the hand state observation matrix and the Kalman filtering gain coefficient at time t_v are used;
6), calculate the Kalman filtering gain coefficient using formula (10):
In formula (10), the transpose of the hand state observation matrix appears, and the measurement noise covariance matrix R_k obeys the standard normal distribution R_k ~ N(0, 1);
7), repeat steps 1)-6) to continually update the hand state within the bounding rectangle frame at time t_v, thereby tracking the hand region within the bounding rectangle frame, and obtain from the hand region the center position of the bounding rectangle frame in the sequence of depth images.
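Steps 1)-7) of claim 4 are the standard Kalman predict/update cycle, but formulas (6)-(10) and the state symbols are not reproduced in this text. The sketch below therefore assumes the usual constant-velocity model for a 2-D hand position; the matrix names (A, H, Q, R) and the noise magnitudes are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def make_cv_kalman(T=1.0):
    """Constant-velocity Kalman model for a 2-D hand centre,
    state x = [u, v, du, dv]^T, T = interval between adjacent moments."""
    A = np.array([[1, 0, T, 0],
                  [0, 1, 0, T],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)   # state transition (cf. formula 6)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)   # observation matrix
    Q = np.eye(4) * 1e-2                        # dynamic noise covariance
    R = np.eye(2) * 1e-1                        # measurement noise covariance
    return A, H, Q, R

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle for measurement z (bounding-box centre)."""
    # Predict state and its covariance (cf. formulas 6 and 8).
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Gain (cf. formula 10), then state update (cf. formula 9).
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

Repeating `kalman_step` per frame, as in step 7), yields a smoothed track of the bounding-rectangle centre even when individual hand detections are noisy.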
5. The bad driving behavior detection method based on a TOF camera according to claim 3, characterized in that the three-dimensional rotation matrix R_O^{k+1} and the translation vector t^{k+1} in step c are obtained as follows:
1), obtain, using formulas (11) and (12), the centroid μ_P of the coordinates {P_i^{k+1}} of the pixels in the three-dimensional point cloud to be matched at the (k+1)-th iteration and the centroid μ_Q of the coordinates of the pixels in the reference three-dimensional point cloud:
2), obtain, using formulas (13) and (14) respectively, the translation of the coordinates {P_i^{k+1}} of the pixels in the three-dimensional point cloud to be matched relative to the centroid μ_P, and the translation of the coordinates of the pixels in the reference three-dimensional point cloud relative to the centroid μ_Q:
3), obtain, using formula (15), the correlation matrix K_αβ between the two translations:
In formula (15), α, β=1,2,3;
4), obtain, using formula (16), the four-dimensional symmetric matrix K constructed from the correlation matrix K_αβ:
5), obtain the maximum eigenvalue of the four-dimensional symmetric matrix K, and obtain the unit eigenvector q = [q_0, q_1, q_2, q_3]^T corresponding to that maximum eigenvalue;
6), obtain the antisymmetric matrix K(q) using formula (17):
7), obtain the three-dimensional rotation matrix R_O^{k+1} using formula (18):
8), obtain the translation vector t^{k+1} using formula (19):
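Steps 1)-8) of claim 5 follow Horn's quaternion method for absolute orientation. Formulas (11)-(19) are not reproduced in this text, so the Python sketch below assumes the standard construction: centroids, centered translations, the 3x3 correlation matrix, the 4x4 symmetric matrix whose top eigenvector is the unit quaternion, and t = μ_Q - R·μ_P. All names are illustrative:

```python
import numpy as np

def rotation_from_correspondences(P, Q):
    """Recover (R, t) such that Q ≈ P @ R.T + t from already-corresponded
    (N, 3) point sets, via the quaternion construction of claim 5."""
    mu_P, mu_Q = P.mean(axis=0), Q.mean(axis=0)   # centroids (11)-(12)
    Pc, Qc = P - mu_P, Q - mu_Q                   # centered translations (13)-(14)
    K3 = Pc.T @ Qc                                # correlation matrix K_ab (15)
    # Four-dimensional symmetric matrix (16) in Horn's standard form.
    A = K3 - K3.T
    delta = np.array([A[1, 2], A[2, 0], A[0, 1]])
    K4 = np.empty((4, 4))
    K4[0, 0] = np.trace(K3)
    K4[0, 1:] = K4[1:, 0] = delta
    K4[1:, 1:] = K3 + K3.T - np.trace(K3) * np.eye(3)
    # Step 5): unit eigenvector of the maximum eigenvalue is the quaternion q.
    w, V = np.linalg.eigh(K4)
    q0, q1, q2, q3 = V[:, np.argmax(w)]
    # Formula (18): rotation matrix from the unit quaternion.
    R = np.array([
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 - q0*q3),             2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),             q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),             2*(q2*q3 + q0*q1),             q0*q0 - q1*q1 - q2*q2 + q3*q3]])
    t = mu_Q - R @ mu_P                           # translation vector (19)
    return R, t
```

Because q and -q describe the same rotation, the sign of the eigenvector returned by `eigh` does not matter; the resulting R is identical either way.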
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410428258.1A CN104200199B (en) | 2014-08-27 | 2014-08-27 | Bad steering behavioral value method based on TOF camera |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104200199A CN104200199A (en) | 2014-12-10 |
CN104200199B true CN104200199B (en) | 2017-04-05 |
Family
ID=52085489
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410428258.1A Expired - Fee Related CN104200199B (en) | 2014-08-27 | 2014-08-27 | Bad steering behavioral value method based on TOF camera |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104200199B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9576449B2 (en) * | 2015-06-03 | 2017-02-21 | Honeywell International Inc. | Door and window contact systems and methods that include time of flight sensors |
CN109556511A (en) * | 2018-11-14 | 2019-04-02 | 南京农业大学 | A kind of suspension-type high throughput hothouse plants phenotype measuring system based on multi-angle of view RGB-D integration technology |
CN109977786B (en) * | 2019-03-01 | 2021-02-09 | 东南大学 | Driver posture detection method based on video and skin color area distance |
CN110046560B (en) * | 2019-03-28 | 2021-11-23 | 青岛小鸟看看科技有限公司 | Dangerous driving behavior detection method and camera |
CN110599407B (en) * | 2019-06-21 | 2022-04-05 | 杭州一隅千象科技有限公司 | Human body noise reduction method and system based on multiple TOF cameras in downward inclination angle direction |
CN110708518B (en) * | 2019-11-05 | 2021-05-25 | 北京深测科技有限公司 | People flow analysis early warning dispersion method and system |
CN112634270B (en) * | 2021-03-09 | 2021-06-04 | 深圳华龙讯达信息技术股份有限公司 | Imaging detection system and method based on industrial internet |
CN112990153A (en) * | 2021-05-11 | 2021-06-18 | 创新奇智(成都)科技有限公司 | Multi-target behavior identification method and device, storage medium and electronic equipment |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102982316A (en) * | 2012-11-05 | 2013-03-20 | 安维思电子科技(广州)有限公司 | Driver abnormal driving behavior recognition device and method thereof |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080243558A1 (en) * | 2007-03-27 | 2008-10-02 | Ash Gupte | System and method for monitoring driving behavior with feedback |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102982316A (en) * | 2012-11-05 | 2013-03-20 | 安维思电子科技(广州)有限公司 | Driver abnormal driving behavior recognition device and method thereof |
Non-Patent Citations (3)
Title |
---|
"Driving behavior analysis using vision-based head pose estimation for enhanced communication among traffic participants";Shuzo Noridomi etc.;《2013 International Conference on Connected Vehicles and Expo》;20140517;第26-31页 * |
"一种基于特征三角形的驾驶员头部朝向分析方法";朱玉华等;《中国人工智能学会第十三届学术年会》;20130621;第378-386页 * |
"基于计算机视觉的异常驾驶行为检测方法研究";黄思博;《中国优秀硕士学位论文全文数据库 信息科技辑》;20111215(第12期);I138-851 * |
Also Published As
Publication number | Publication date |
---|---|
CN104200199A (en) | 2014-12-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104200199B (en) | Bad steering behavioral value method based on TOF camera | |
CN107031623B (en) | A kind of road method for early warning based on vehicle-mounted blind area camera | |
CN103824070B (en) | A kind of rapid pedestrian detection method based on computer vision | |
CN104318258B (en) | Time domain fuzzy and kalman filter-based lane detection method | |
CN106127148A (en) | A kind of escalator passenger's unusual checking algorithm based on machine vision | |
CN104123549B (en) | Eye positioning method for real-time monitoring of fatigue driving | |
CN103426179B (en) | A kind of method for tracking target based on mean shift multiple features fusion and device | |
CN110175576A (en) | A kind of driving vehicle visible detection method of combination laser point cloud data | |
CN102129690B (en) | Tracking method of human body moving object with environmental disturbance resistance | |
CN106256606A (en) | A kind of lane departure warning method based on vehicle-mounted binocular camera | |
CN104715244A (en) | Multi-viewing-angle face detection method based on skin color segmentation and machine learning | |
CN106682586A (en) | Method for real-time lane line detection based on vision under complex lighting conditions | |
CN104408462B (en) | Face feature point method for rapidly positioning | |
CN108108667B (en) | A kind of front vehicles fast ranging method based on narrow baseline binocular vision | |
CN101520892B (en) | Detection method of small objects in visible light image | |
CN106485245A (en) | A kind of round-the-clock object real-time tracking method based on visible ray and infrared image | |
CN102592288B (en) | Method for matching pursuit of pedestrian target under illumination environment change condition | |
CN103310194A (en) | Method for detecting head and shoulders of pedestrian in video based on overhead pixel gradient direction | |
CN106682641A (en) | Pedestrian identification method based on image with FHOG- LBPH feature | |
CN105005999A (en) | Obstacle detection method for blind guiding instrument based on computer stereo vision | |
CN109919074A (en) | A kind of the vehicle cognitive method and device of view-based access control model cognition technology | |
CN105260705A (en) | Detection method suitable for call receiving and making behavior of driver under multiple postures | |
CN106682603A (en) | Real time driver fatigue warning system based on multi-source information fusion | |
CN104951758B (en) | The vehicle-mounted pedestrian detection of view-based access control model and tracking and system under urban environment | |
CN112381870B (en) | Binocular vision-based ship identification and navigational speed measurement system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20170405; Termination date: 20190827