CN106599792B - Method for detecting hand driving violation behavior - Google Patents

Method for detecting hand driving violation behavior

Info

Publication number
CN106599792B
CN106599792B
Authority
CN
China
Prior art keywords
point
hand
value
pixel
area
Prior art date
Legal status
Active
Application number
CN201611037118.7A
Other languages
Chinese (zh)
Other versions
CN106599792A (en)
Inventor
孙伟
施顺顺
张小瑞
刘佳
张小娜
闫朝阳
Current Assignee
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology
Priority to CN201611037118.7A
Publication of CN106599792A
Application granted
Publication of CN106599792B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/12 Computing arrangements based on biological models using genetic models
    • G06N3/126 Evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 Static hand or arm
    • G06V40/11 Hand-related biometrics; Hand pose recognition

Abstract

The invention discloses a method for detecting hand-position driving violations. The method reads an image from a monitoring video; preprocesses the image, including format conversion, building an elliptical skin color model and determining a skin color probability value for each pixel; detects the hand contour in the preprocessed image; frames five regions of interest centered on the steering wheel and assigns coordinates to the framed regions; and judges whether the hand posture constitutes a driving violation by checking whether the hand contour lies in the central steering wheel area. Hand pixels are first identified by establishing the elliptical skin color model and applying logistic regression analysis; connected regions are then analyzed and labeled with a region growing method to extract the hand skin color image and obtain a complete hand image. By dividing the image into five regions of interest, the method can accurately judge whether an operation in the cab is a violation.

Description

Method for detecting hand driving violation behavior
Technical Field
The invention relates to the field of traffic information control and driving behavior monitoring and early warning technology, and in particular to a method for detecting driving violations from hand position.
Background
With the rapid development of the economy, automobiles have become increasingly common in daily life. They bring convenience but also a large number of traffic accidents, causing huge losses to society. Drivers' bad driving habits are the main cause of these accidents, so judging illegal driving behavior has drawn growing attention. Current judgment methods fall into three main categories. The first collects data by sensor information fusion and judges driving behavior through an established driving model. The second detects vehicle behavior: when the vehicle deviates from its route, the system captures this information and issues an alarm signal, thereby judging the driving behavior. The third, based on computer vision, acquires driving behavior images and analyzes them with image processing techniques to obtain a judgment. The first two methods require more equipment, cost more, and their accuracy needs improvement. The third method, based on computer vision technology, therefore offers low cost, high accuracy and high safety. The present invention relies mainly on computer vision technology, is easy to implement in today's era of rapid information development, and can effectively reduce traffic accidents.
Prior art application 201110211193.1 discloses an illegal-driving-behavior detection method based on hand gesture tracking, comprising: step 1, reading an image from a monitoring video; step 2, preprocessing the image, including gray level transformation, image filtering, edge extraction and contour enhancement; step 3, locating the steering wheel; step 4, intercepting a corresponding region of interest centered on the steering wheel; step 5, extracting features from the intercepted region; and step 6, classifying the extracted features to distinguish whether the hand gesture constitutes illegal driving. In that scheme, a direct least-squares ellipse fitting algorithm detects the largest elliptical shape to locate the steering wheel region; a corresponding region of interest is then extracted around the steering wheel; hand skin is detected with a simple Gaussian skin color model; the hand gesture skeleton is extracted with a chamfer distance transform; and the hand motion trajectory is tracked with a Kalman filter. Finally, a neural network and a Bayesian classifier classify the recognized trajectories to distinguish illegal driving. However, that scheme does not subdivide the region of interest around the steering wheel into five regions, so fuzzy judgments and misjudgments sometimes occur when deciding whether a driving operation is illegal.
Disclosure of Invention
Aiming at the above defects, the invention provides a method for detecting hand-position driving violations that can accurately judge whether a driving violation has occurred. The specific scheme is as follows:
A method for detecting hand driving violations comprises: reading an image from a monitoring video; preprocessing the image, including format conversion, building an elliptical skin color model and determining a skin color probability value for each pixel; detecting the hand contour in the preprocessed image; framing five regions of interest centered on the steering wheel and assigning coordinates to the framed regions; and judging whether the hand posture constitutes a driving violation by checking whether the hand contour lies in the central steering wheel area.
Further, the above scheme comprises the following steps:
step one, acquiring images of the cockpit: collecting driving behavior image information of the driver with a camera, where a valid image contains the steering wheel and the driver's hand information;
step two, extracting the hand skin color area: determining the skin color probability value of each pixel in the image by establishing an elliptical skin color model and applying logistic regression analysis;
step three, segmenting the hand skin color area: adaptively determining a segmentation threshold with a genetic algorithm to complete the segmentation of the hand skin color;
step four, eliminating noise in the image: analyzing connected regions with a region growing method, eliminating discrete noise points and noise regions, and determining the largest and second-largest detected connected regions as hand regions;
step five, dividing image areas: comparing the processed hand image with the original photo to frame five regions of interest and establishing a region coordinate system;
and step six, judging the driving behavior: comparing the hand coordinates in the image with the coordinate areas where hands are correctly placed, and judging whether the hand position in the image is a violation position.
Further, the second step includes converting the RGB color space of the image into the YCrCb color space; performing a nonlinear piecewise color transformation on the Y, Cr, Cb values obtained at each pixel in the YCrCb color space to obtain the Cr′ and Cb′ values of each pixel in YCr′Cb′; describing the skin color distribution with an elliptical model by establishing an elliptical skin color model on the YCr′Cb′ color space to make a preliminary judgment of skin color points; and determining the skin color probability value of each pixel with logistic regression analysis.
Further, in the third step, a gray value is assigned according to the skin color probability value of each pixel, i.e., the gray value equals 255 times the probability value; an adaptive segmentation threshold is determined with a genetic algorithm, and the gray value is compared with the threshold to segment the final hand region.
Further, the five regions of interest framed in the images in the fifth step include, extending outward from the steering wheel region at the center, a door handle region, a knee region, a gear lever region and a vehicle-mounted multimedia system region.
Furthermore, in the sixth step, the areas where the two hands are placed and the number of hands in each area are determined by integral projection, and a judgment criterion is set to detect driving violations.
In this method, a camera mounted on the roof above the driver's head collects driving behavior image information of the driver's hands in real time. Five regions of interest are determined according to where drivers habitually place their hands during normal driving. The skin color probability value of each pixel is determined by establishing an elliptical skin color model and applying logistic regression analysis; a genetic algorithm adaptively determines the segmentation threshold to complete the segmentation of hand skin color; connected regions are analyzed to eliminate discrete noise points and noise regions, and the largest and second-largest detected connected regions are taken as the hand regions. A judgment criterion is set from the detected positions of the driver's hand regions combined with where the hands are placed during normal driving, and whether the driver exhibits violation behavior is determined accordingly. The method judges violations accurately and reduces the propagation of erroneous decisions.
Drawings
FIG. 1 is a block diagram of a method for detecting a hand driving violation;
FIG. 2 is a diagram illustrating a crossover step in a genetic algorithm.
Detailed Description
The invention relates to a detection method for judging whether a driver has a driving violation behavior according to a hand position, which comprises the following specific processes:
step 1: driver hand image acquisition
A camera collects driving behavior image information of the driver. The camera is installed on the vehicle roof above the driver's head with its lens facing the steering wheel; the lens position is adjusted so that it is not blocked by the driver's head and clearly captures the driver's hand movement area during driving, including the door handle, the steering wheel, the vehicle-mounted multimedia system, the gear lever and the front end of the seat. The image resolution is set to m × n, and a coordinate system i′oj′ is set according to pixel positions, where the top-left vertex is the coordinate origin o, the upper boundary of the image is the i′ axis, the left boundary is the j′ axis, m is the number of pixels in each row of the image, and n is the number of pixels in each column.
Step 2: hand skin color region extraction
(1) Converting the RGB color space (the color format of the captured image) to the YCrCb color space (improving skin color clustering and segmentation)
According to the conversion formulas from RGB space to YCrCb space, the Y, Cr and Cb values at each pixel (x, y) in the YCrCb color space are obtained, with x taking 0, 1, 2, …, m−1 and y taking 0, 1, 2, …, n−1 in turn. Traversing the whole image gives the R(x, y), G(x, y), B(x, y) values and the Y(x, y), Cr(x, y), Cb(x, y) values at each pixel:
Y(x,y)=0.257R(x,y)+0.504G(x,y)+0.098B(x,y)+16
Cr(x,y)=0.439R(x,y)-0.368G(x,y)-0.071B(x,y)+128
Cb(x,y)=-0.148R(x,y)-0.291G(x,y)+0.439B(x,y)+128
where x and y are the horizontal and vertical coordinate values of the current pixel; R(x, y), G(x, y), B(x, y) are the intensities of the red, green and blue components at pixel (x, y) in RGB color space; Y(x, y) is the luminance at pixel (x, y) in YCrCb color space; and Cr(x, y), Cb(x, y) are the chrominance values at pixel (x, y) in YCrCb color space; x takes 0, 1, 2, …, m−1 and y takes 0, 1, 2, …, n−1 in turn.
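To make the conversion concrete, here is a minimal Python sketch of the formulas above (not part of the patent; NumPy and the function name are illustrative). The coefficients are the BT.601 studio-swing values given in the text.

```python
import numpy as np

def rgb_to_ycrcb(img_rgb: np.ndarray) -> np.ndarray:
    """Apply the Y/Cr/Cb conversion formulas above to an H x W x 3 RGB image."""
    rgb = img_rgb.astype(np.float64)
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    Y  =  0.257 * R + 0.504 * G + 0.098 * B + 16   # luminance
    Cr =  0.439 * R - 0.368 * G - 0.071 * B + 128  # red-difference chroma
    Cb = -0.148 * R - 0.291 * G + 0.439 * B + 128  # blue-difference chroma
    return np.stack([Y, Cr, Cb], axis=-1)
```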
(2) Carrying out a nonlinear piecewise color transformation on the image (obtaining the corrected Cb and Cr values at each pixel's coordinates)
Traversing the whole image with the obtained Y, Cr and Cb values of each pixel in the YCrCb color space, the central axis expressions of the skin color region, C̄b(Y(x,y)) and C̄r(Y(x,y)), and the skin color region width expressions WCb(x,y), WCr(x,y) are calculated as:

C̄b(Y(x,y)) = 108 + 10(kl − Y(x,y))/(kl − Ymin), if Y(x,y) < kl
C̄b(Y(x,y)) = 108 + 10(Y(x,y) − kh)/(Ymax − kh), if Y(x,y) > kh
C̄b(Y(x,y)) = 108, if kl ≤ Y(x,y) ≤ kh

C̄r(Y(x,y)) = 154 − 10(kl − Y(x,y))/(kl − Ymin), if Y(x,y) < kl
C̄r(Y(x,y)) = 154 + 22(Y(x,y) − kh)/(Ymax − kh), if Y(x,y) > kh
C̄r(Y(x,y)) = 154, if kl ≤ Y(x,y) ≤ kh

WCb(x,y) = WLCb + (Y(x,y) − Ymin)(WCb − WLCb)/(kl − Ymin), if Y(x,y) < kl
WCb(x,y) = WHCb + (Ymax − Y(x,y))(WCb − WHCb)/(Ymax − kh), if Y(x,y) > kh
WCb(x,y) = WCb, if kl ≤ Y(x,y) ≤ kh

WCr(x,y) = WLCr + (Y(x,y) − Ymin)(WCr − WLCr)/(kl − Ymin), if Y(x,y) < kl
WCr(x,y) = WHCr + (Ymax − Y(x,y))(WCr − WHCr)/(Ymax − kh), if Y(x,y) > kh
WCr(x,y) = WCr, if kl ≤ Y(x,y) ≤ kh
where kl and kh are the segment thresholds of the nonlinear piecewise color transformation, empirical values obtained from pixel statistics; in the invention kl = 125 and kh = 188. Ymin and Ymax are the minimum and maximum luminance values, also empirical values from pixel statistics; here Ymin = 16 and Ymax = 235. WLCb, WLCr, WHCb, WHCr, WCb and WCr are empirical values obtained from pixel statistics; here WLCb = 23, WLCr = 20, WHCb = 14, WHCr = 10, WCb = 46.97 and WCr = 38.76. With x taking 0, 1, 2, …, m−1 and y taking 0, 1, 2, …, n−1 in turn, the nonlinear transformation is applied with the information obtained above, traversing the whole image to obtain the Cr′ and Cb′ values of each pixel in the new color space YCr′Cb′:
Cb′(x,y) = (Cb(x,y) − C̄b(Y(x,y)))·WCb/WCb(x,y) + C̄b(kh), if Y(x,y) < kl or Y(x,y) > kh
Cb′(x,y) = Cb(x,y), if kl ≤ Y(x,y) ≤ kh

Cr′(x,y) = (Cr(x,y) − C̄r(Y(x,y)))·WCr/WCr(x,y) + C̄r(kh), if Y(x,y) < kl or Y(x,y) > kh
Cr′(x,y) = Cr(x,y), if kl ≤ Y(x,y) ≤ kh
where Cr′(x, y) and Cb′(x, y) are the corrected values of the chrominance components Cr and Cb at pixel (x, y) after chrominance compensation, and Cr(x, y), Cb(x, y) are the chrominance values at pixel (x, y); x takes 0, 1, 2, …, m−1 and y takes 0, 1, 2, …, n−1 in turn.
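A short Python sketch of the luma-dependent chroma compensation reconstructed above, for the Cb channel (Cr is analogous, with center 154 and slopes 10 and 22). The constants are those given in the text; the piecewise form follows the elliptical-skin-model literature from which those constants are drawn, so treat it as an assumption rather than the patent's exact expression.

```python
KL, KH = 125.0, 188.0                 # segment thresholds kl, kh
YMIN, YMAX = 16.0, 235.0              # luminance range
WCB, WLCB, WHCB = 46.97, 23.0, 14.0   # Cb cluster widths

def cb_center(y: float) -> float:
    """Central axis of the skin cluster in Cb as a function of luma Y."""
    if y < KL:
        return 108.0 + 10.0 * (KL - y) / (KL - YMIN)
    if y > KH:
        return 108.0 + 10.0 * (y - KH) / (YMAX - KH)
    return 108.0

def cb_width(y: float) -> float:
    """Width of the skin cluster in Cb as a function of luma Y."""
    if y < KL:
        return WLCB + (y - YMIN) * (WCB - WLCB) / (KL - YMIN)
    if y > KH:
        return WHCB + (YMAX - y) * (WCB - WHCB) / (YMAX - KH)
    return WCB

def cb_prime(y: float, cb: float) -> float:
    """Chroma-compensated Cb': stretch toward the reference width outside [kl, kh]."""
    if KL <= y <= KH:
        return cb
    return (cb - cb_center(y)) * WCB / cb_width(y) + cb_center(KH)
```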
(3) establishing an elliptical skin color model
Describing the skin color distribution with an elliptical model, an elliptical skin color model is established on the YCr′Cb′ color space:
u(x,y)=cosω*(Cb′(x,y)-Cx)+sinω*(Cr′(x,y)-Cy)
v(x,y)=(-sinω)*(Cb′(x,y)-Cx)+cosω*(Cr′(x,y)-Cy)
d(u(x,y), v(x,y)) = u(x,y)²/a² + v(x,y)²/b² − 1
where u(x, y) and v(x, y) are values computed from the Cb′ and Cr′ values at pixel (x, y), with x taking 0, 1, 2, …, m−1 and y taking 0, 1, 2, …, n−1 in turn; a and b are the lengths of the semi-major and semi-minor axes of the ellipse; ω is the inclination of the ellipse in radians; and Cx and Cy are the center of the ellipse on the Cb′Cr′ plane. A first judgment of whether a pixel is a skin color point is then made: if d(u(x, y), v(x, y)) ≤ 0, the pixel lies inside the ellipse or on its boundary and may be a skin color point; if d(u(x, y), v(x, y)) > 0, the pixel lies outside the ellipse and is considered probably not a skin color point.
(4) Determining a skin color probability value of a pixel
To determine skin color pixels more accurately on the basis of this first judgment, the skin color probability value ps(u(x, y), v(x, y)) of each pixel is further determined by logistic regression analysis:
ps(u(x,y), v(x,y)) = 1/(1 + e^(β1·d(u(x,y),v(x,y)) + β2))
where β1 and β2 are parameters of d(u(x, y), v(x, y)) and are empirical values; in the invention β1 = 2.247 and β2 = 1. The value range of ps is [0, 1]. Taking the number of gray levels as 255, the gray value at the corresponding pixel (x, y) is B(x, y) = 255 × ps(u(x, y), v(x, y)), with x taking 0, 1, 2, …, m−1 and y taking 0, 1, 2, …, n−1 in turn.
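The ellipse test and the logistic mapping can be sketched together as follows. β1 = 2.247 and β2 = 1 come from the text; the ellipse parameters (a, b, ω, Cx, Cy) are fitted on skin samples and are not given numerically in the patent, so the values below are placeholders, as is the sign convention inside the exponential (chosen so that points deep inside the ellipse get probabilities near 1).

```python
import math

# Placeholder ellipse parameters -- the patent leaves these to be fitted on data.
A_AX, B_AX = 25.39, 14.03   # semi-major / semi-minor axis lengths a, b
OMEGA = 2.53                # ellipse inclination in radians
CX, CY = 109.38, 152.02     # ellipse center on the Cb'Cr' plane
BETA1, BETA2 = 2.247, 1.0   # logistic parameters from the text

def skin_gray(cb_p: float, cr_p: float) -> int:
    """Gray value B = 255 * ps for one pixel from its Cb', Cr' values."""
    u = math.cos(OMEGA) * (cb_p - CX) + math.sin(OMEGA) * (cr_p - CY)
    v = -math.sin(OMEGA) * (cb_p - CX) + math.cos(OMEGA) * (cr_p - CY)
    d = u * u / A_AX**2 + v * v / B_AX**2 - 1.0     # d <= 0: inside or on the ellipse
    ps = 1.0 / (1.0 + math.exp(BETA1 * d + BETA2))  # assumed orientation of the logistic
    return round(255 * ps)
```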
And step 3: hand skin color region segmentation
An optimal segmentation threshold ts is determined using a genetic algorithm, and whether each pixel is a skin color point is judged again: the optimal segmentation threshold ts is compared with the gray value B(x, y) of pixel (x, y) to determine the skin color area, giving a binary image B1(x, y):
B1(x,y) = 255, if B(x,y) > ts
B1(x,y) = 0, if B(x,y) ≤ ts
with x taking 0, 1, 2, …, m−1 and y taking 0, 1, 2, …, n−1 in turn. Pixels whose gray value exceeds the threshold ts are determined to be the skin color area, completing the segmentation of the hand skin color area. The steps of adaptively selecting the segmentation threshold ts with the genetic algorithm are as follows:
(1) Encoding: the gray values of the image range from 0 to 255, and each gray value corresponds to an 8-bit binary number, so the gray value of each pixel in the image can be represented by an 8-bit binary number.
(2) Generating an initial population: M initial individuals X11, X12, …, X1M are randomly generated and form the initial population X1, X1 = {X11, X12, …, X1M}; the maximum number of generations is set to H, the crossover rate to Pc and the mutation rate to Pm, where H, Pc and Pm are self-set empirical values; in the invention H = 40, Pc = 0.2 and Pm = 0.01.
(3) Determining the fitness function: the fitness function g(t) is determined by the maximum between-class variance method, g(t) = wA(uA − ut)² + wB(uB − ut)² = wA·wB(uA − uB)², where the parameters wA, wB, uA, uB and ut are determined as follows:
assume that there are N pixels in the acquired image, where N is m × N, and N is the pixel with the gray value λλObtaining the probability p of each gray levelλ,pλ=nλand/N, a certain gray value is used as a threshold value t, the image is divided into a human skin color area and a background area, A and B are respectively used, then A is (0, …, t), B is (t +1, …, L-1), L is the gray level number, and the probability P of each gray level is determined according to the occurrence probability P of each gray levelλThe probability value w of the gray levels in the A class and the B class can be obtainedAAnd wB
wA = Σ(λ=0 to t) pλ
wB = Σ(λ=t+1 to L−1) pλ = 1 − wA
as well as the overall average gray value uL and the average gray value ut(t) up to the threshold t:

uL = Σ(λ=0 to L−1) λ·pλ
ut(t) = Σ(λ=0 to t) λ·pλ

Subsequently, the average gray value uA of class A and the average gray value uB of class B are further obtained:

uA = ut(t)/wA
uB = (uL − ut(t))/wB

Finally, the fitness function is determined by the maximum between-class variance method: g(t) = wA(uA − ut)² + wB(uB − ut)² = wA·wB(uA − uB)².
(4) Selection: with the obtained fitness function g(t) = wA(uA − ut)² + wB(uB − ut)² = wA·wB(uA − uB)², the fitness values of the population are calculated and recorded as g1(t), g2(t), …, gM(t); the population is sorted by fitness, and individuals with larger fitness values are copied to replace those with smaller fitness values, generating a new population X1′, X1′ = {X11′, X12′, …, X1M′}.
(5) Crossover: the order of individuals in the population is shuffled and the individuals are randomly sorted; two individuals are randomly selected from the sorted population and, with crossover rate Pc, two-point crossover is performed: two crossover points are selected in the encoded bit strings of the two individuals and the substrings between the two points are exchanged, yielding two new individuals. After the crossover operation is performed on all individuals, a new population X1″ = {X11″, X12″, …, X1M″} is generated; the process of two-point crossover is shown in FIG. 2.
(6) Mutation: with mutation rate Pm = 0.01, any one bit of the 8-bit binary string of any individual in population X1″ is inverted, i.e., the selected bit is changed from 0 to 1 or from 1 to 0, generating a new individual; after all individuals undergo the mutation operation, a new generation population X2 = {X21, X22, …, X2M} is formed.
(7) Termination condition: when the maximum number of generations H = 40 is reached or the maximum fitness value in the population no longer changes appreciably, the operation terminates; otherwise the selection, crossover and mutation operations continue until the termination condition is met. The individual with the highest fitness value is decoded to obtain the optimal threshold ts:
ts = arg max g(t), 0 ≤ t ≤ L−1
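For concreteness, the whole threshold-selection step can be sketched in plain Python. The fitness is the between-class variance g(t); H = 40, Pc = 0.2 and Pm = 0.01 are the values from the text, while the population size M, the elitist selection scheme (fitter half replaces the weaker half) and the helper names are illustrative assumptions.

```python
import random

def otsu_fitness(hist, t):
    """Between-class variance g(t) for threshold t, given a 256-bin histogram."""
    total = sum(hist)
    p = [h / total for h in hist]
    wa = sum(p[: t + 1])
    wb = 1.0 - wa
    if wa == 0 or wb == 0:
        return 0.0
    ut = sum(l * p[l] for l in range(t + 1))   # cumulative mean up to t
    ul = sum(l * p[l] for l in range(256))     # global mean
    ua, ub = ut / wa, (ul - ut) / wb
    return wa * wb * (ua - ub) ** 2

def ga_threshold(hist, M=20, H=40, pc=0.2, pm=0.01, seed=None):
    rng = random.Random(seed)
    pop = [rng.randrange(256) for _ in range(M)]   # 8-bit individuals
    for _ in range(H):
        pop.sort(key=lambda t: otsu_fitness(hist, t), reverse=True)
        pop[M // 2:] = pop[: M // 2]               # selection: fitter half replaces weaker
        rng.shuffle(pop)
        for i in range(0, M - 1, 2):               # two-point crossover on the bit strings
            if rng.random() < pc:
                c1, c2 = sorted(rng.sample(range(8), 2))
                mask = ((1 << (c2 - c1)) - 1) << c1    # bits between the two cut points
                x, y = pop[i], pop[i + 1]
                pop[i] = (x & ~mask) | (y & mask)
                pop[i + 1] = (y & ~mask) | (x & mask)
        for i in range(M):                         # bit-flip mutation
            for bit in range(8):
                if rng.random() < pm:
                    pop[i] ^= 1 << bit
    return max(pop, key=lambda t: otsu_fitness(hist, t))
```

Here ga_threshold would be called on the 256-bin histogram of the gray image B(x, y); pixels with gray value above the returned ts form the binary image B1(x, y).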
And 4, step 4: skin color region connectivity and hand image extraction
(1) Hand skin color point marking
The connected regions are analyzed and marked with a region growing method. First a pixel (g, h) is selected as the initial point, with its mark value f(g,h) and the label llabel set to 0, where g and h are the horizontal and vertical coordinates of the current pixel, 0 ≤ g ≤ m−1 and 0 ≤ h ≤ n−1. The image is scanned from the top-left vertex, traversing the whole image from left to right and top to bottom, judging whether each pixel is a skin color point. If the initial pixel is not a skin color point, it is not marked and the next pixel becomes the current point: when 0 ≤ g ≤ m−2 the current point is (g+1, h), the pixel to the right of (g, h); when g = m−1 the current point is (0, h+1), the first pixel of the next row. If the initial pixel is judged to be a skin color point, its mark value is set to 1, i.e., f(g,h) = 1. The next pixel is then judged as the current point. If it is a skin color point, the four pixels above, to the left, upper-left and upper-right of it are examined. If only one of them is marked, its mark value is assigned to the current point. If several are marked, the mark value is assigned in the order above, left, upper-left, upper-right: first check whether the pixel above the current point is marked; if so, assign its mark value to the current point, i.e., f(g,h) = f(g,h−1), ignore the marking state of the other three pixels, and select the next pixel as the current point to continue analyzing marks. If the pixel above is not marked, check whether the left pixel is marked; if so, assign f(g,h) = f(g−1,h), ignore the remaining two pixels, and select the next pixel as the current point. If not, check whether the upper-left pixel is marked; if so, assign f(g,h) = f(g−1,h−1), ignore the last pixel, and select the next pixel as the current point. If not, check whether the upper-right pixel is marked; if so, assign f(g,h) = f(g+1,h−1) and select the next pixel as the current point. If the current point is still not marked, none of the four pixels above, left, upper-left and upper-right is marked, so the current point is taken as the initial point of a new connected region, given a new mark value, and llabel is increased by 1. The whole image is traversed to finish marking the skin color points; the marking formula is:
f(g,h) = f(g,h−1), if the pixel above (g, h) is marked
f(g,h) = f(g−1,h), if the pixel above is unmarked and the left pixel is marked
f(g,h) = f(g−1,h−1), if the above and left pixels are unmarked and the upper-left pixel is marked
f(g,h) = f(g+1,h−1), if the above, left and upper-left pixels are unmarked and the upper-right pixel is marked
f(g,h) = llabel + 1, if none of the four pixels is marked (start of a new connected region)
where f(g,h) is the mark value at pixel (g, h); f(g,h−1) is the mark value of the pixel above (g, h); f(g−1,h) that of the pixel to its left; f(g−1,h−1) that of its upper-left pixel; and f(g+1,h−1) that of its upper-right pixel.
(2) Hand region determination
After marking is finished, the connected regions are distinguished by their mark values. Let the regions be R1, R2, …, Rn′, where n′ is the number of regions, and count the number of pixels af′ in each region (f′ = 1, 2, …, n′). The regions with the largest and second-largest pixel counts are retained; the other regions are identified as noise regions and their gray values are set to 0. A pixel count threshold Fn is then set. If the pixel counts of both the largest and the second-largest regions are less than Fn, both are considered noise regions and their gray values are set to 0. If the count of the second-largest region is less than Fn but that of the largest region is at least Fn, the largest region is determined to be a skin color region. If the counts of both regions are at least Fn, both the largest and the second-largest regions are skin color regions. The gray value B2(x, y) of the marked binary image is then:
B2(x,y) = 255, if pixel (x,y) belongs to a retained hand region
B2(x,y) = 0, otherwise
The skin color image is then extracted, yielding a complete hand image.
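A compact Python sketch of this step, assuming the binary image from step 3: regions are grown by breadth-first search over an 8-connected neighborhood (the patent's single raster scan with the above/left/upper-left/upper-right checks serves the same purpose), and only the largest and second-largest regions that reach the noise threshold Fn are kept. Function and variable names are illustrative.

```python
from collections import deque
import numpy as np

def keep_two_largest_regions(binary: np.ndarray, fn: int) -> np.ndarray:
    """Label connected white regions by region growing, then keep at most the
    two largest whose pixel count reaches the noise threshold fn (others -> 0)."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    sizes = {}
    next_label = 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] and labels[y, x] == 0:
                next_label += 1
                labels[y, x] = next_label
                q, count = deque([(y, x)]), 0
                while q:                       # grow the region from its seed
                    cy, cx = q.popleft()
                    count += 1
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and binary[ny, nx] and labels[ny, nx] == 0):
                                labels[ny, nx] = next_label
                                q.append((ny, nx))
                sizes[next_label] = count
    keep = [l for l, c in sorted(sizes.items(), key=lambda kv: -kv[1])[:2] if c >= fn]
    return np.where(np.isin(labels, keep), 255, 0).astype(np.uint8)
```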
And 5: hand position area segmentation
Using the acquired image information and the habitual placement of the driver's two hands during normal driving, five regions of interest are framed in the acquired image: the steering wheel region, the door handle region, the knee region, the gear lever region and the vehicle-mounted multimedia system region, defined as region 1 through region 5 respectively. The rectangular area (in pixel coordinates) enclosed by points A1(a1, b1), B1(a2, b1), C1(a1, b2), D1(a2, b2) is the steering wheel region, where A1, B1, C1, D1 are its four vertices with horizontal and vertical coordinates (a1, b1), (a2, b1), (a1, b2) and (a2, b2). The rectangle enclosed by A2(0, b3), B2(a3, b3), C2(0, b4), D2(a3, b4) is the door handle region. The rectangle enclosed by A3(a1, b4), B3(a2, b4), C3(a1, n−1), D3(a2, n−1) is the knee region. The rectangle enclosed by A4(a4, b5), B4(m−1, b5), C4(a4, n−1), D4(m−1, n−1) is the gear lever region. The rectangle enclosed by A5(a4, b1), B5(m−1, b1), C5(a4, b6), D5(m−1, b6) is the vehicle-mounted multimedia system region. The coordinates satisfy a3 < a1 < a2 < a4 < m−1 and b1 < b3 < b6 < b5 < b2 < b4 < n−1 (as shown in FIG. 1).
Step 6: hand recognition and violation driving determination
The areas where the two hands are placed and the number of hands in each area are determined by integral projection, and a judgment criterion is set to detect driving violations. Let f1 = 0, 1, …, m−1 and h1 = 0, 1, …, n−1. The sum of the gray values of all pixels in each column parallel to the j′ axis is calculated; the left and right boundaries of the hands are determined from the results, which are stored in the array F1[f1]:
F1[f1] = Σ(h1=0 to n−1) B2(f1, h1)
When F1[f1] becomes greater than 0 for the first time, the left boundary of the hand region of the more-left hand in the image has been detected; the variable JL1 records the current abscissa f1, i.e., JL1 = f1, and the coordinates of the first pixel in that column with gray value greater than 0 are recorded as H1(f1′, h1′). Continuing to advance f1, when F1[f1] equals 0, the right boundary of the more-left hand has been detected; JR1 records the abscissa of the previous column, JR1 = f1 − 1. Continuing, when F1[f1] is greater than 0 again, the left boundary of the more-right hand has been detected; JL2 = f1, and the first pixel in that column with gray value greater than 0 is recorded as H2(f1″, h1″). Continuing, when F1[f1] equals 0 again, the right boundary of the more-right hand has been detected; JR2 = f1 − 1. If only JL1 and JR1 are recorded, then JL2 = 0 and JR2 = 0. The sum of the gray values of all pixels in each row parallel to the i′ axis is then calculated; the upper and lower boundaries of the hands are determined from the results, which are stored in the array H1[h1]:
H1[h1] = Σ(f1=0 to m−1) B2(f1, h1)
When H1[h1] becomes greater than 0 for the first time, the upper boundary of the hand region of the upper hand in the image has been detected; the variable GU1 records the current ordinate h1, GU1 = h1. Continuing to advance h1, when H1[h1] equals 0, the lower boundary of the upper hand has been detected; GD1 records the ordinate of the previous row, GD1 = h1 − 1. Continuing, when H1[h1] is greater than 0 again, the upper boundary of the lower hand has been detected; GU2 = h1. Continuing, when H1[h1] equals 0 again, the lower boundary of the lower hand has been detected; GD2 = h1 − 1. If only GU1 and GD1 are recorded, then GU2 = 0 and GD2 = 0.
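The integral projections and boundary extraction reduce to column and row sums of the binary image. A minimal sketch, assuming b2 is indexed as b2[y, x] (so sums over axis 0 give F1 and sums over axis 1 give H1); the helper names are illustrative and at most two hands are assumed.

```python
import numpy as np

def hand_boundaries(b2: np.ndarray):
    """Return up to two [left, right] column intervals and [top, bottom] row
    intervals of the binary hand image b2 (values 0/255)."""
    col = b2.sum(axis=0)   # F1[f1]: gray-value sum of each column
    row = b2.sum(axis=1)   # H1[h1]: gray-value sum of each row

    def intervals(proj, max_runs=2):
        runs, start = [], None
        for i, v in enumerate(proj):
            if v > 0 and start is None:
                start = i                     # boundary where the projection rises above 0
            elif v == 0 and start is not None:
                runs.append((start, i - 1))   # boundary where it falls back to 0
                start = None
        if start is not None:
            runs.append((start, len(proj) - 1))
        return runs[:max_runs]

    return intervals(col), intervals(row)
```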
Judging the driving behavior: during actual driving, only two cases count as normal driving behavior: both hands on the steering wheel, or one hand on the steering wheel while the other shifts gears; that is, both hands are in the steering wheel region, or one hand is in the steering wheel region while the other is in the gear lever region. Otherwise, hands placed at other positions among the five regions of interest are considered illegal driving behavior. The specific judgment rules are as follows:
1. When a1 < JL1 < a2 and a1 < JR1 < a2 and JL2 = 0, JR2 = 0, the two hands yield only one left and one right boundary, indicating that when the left and right boundaries were calculated the horizontal stagger of the two hands was not obvious or the two hands overlapped.
1) If b1 < GU1 < b2 and b1 < GD1 < b2 and GU2 = 0 and GD2 = 0, the vertical stagger of the two hands is not obvious either, so only one upper and one lower boundary are obtained; both hands are in the steering wheel region, and normal driving is judged.
2) If b1 < GU1 < b2 and b1 < GD1 < b2 and b1 < GU2 < b2 and b1 < GD2 < b2, the vertical stagger of the two hands is obvious and each hand yields an upper and a lower boundary; both hands are in the steering wheel region, and normal driving is judged.
2. When a1 < JL1 < a2 and a1 < JR1 < a2 and a1 < JL2 < a2 and a1 < JR2 < a2, each hand yields a left and a right boundary, indicating that when the left and right boundaries were calculated the two hands were clearly staggered and did not overlap.
1) If b1 < GU1 < b2 and b1 < GD1 < b2 and GU2 = 0 and GD2 = 0, the vertical stagger of the two hands is not obvious, so only one upper and one lower boundary are obtained; both hands are in the steering wheel region, and normal driving is judged.
2) If b1 < GU1 < b2 and b1 < GD1 < b2 and b1 < GU2 < b2 and b1 < GD2 < b2, the vertical stagger of the two hands is obvious and each hand yields an upper and a lower boundary; both hands are again in the steering wheel region, and normal driving is judged.
3. When a1 < JL1 < a2 and a1 < JR1 < a2 and a4 < JL2 < m−1 and a4 < JR2 < m−1: if H1(f1′, h1′) is in the steering wheel region, i.e., a1 < f1′ < a2 and b1 < h1′ < b2, and H2(f1″, h1″) is in the gear lever region, i.e., a4 < f1″ < m−1 and b5 < h1″ < n−1, and furthermore b1 < GU1 < b2 and b1 < GD1 < b2 and b5 < GU2 < n−1 and b5 < GD2 < n−1, then one hand is on the steering wheel and one hand is in the gear lever region. The number of image frames from the moment the hand first touches the gear lever to the moment it leaves is then counted to obtain the time T′ that the hand stays on the gear lever, and a stay-time threshold Tt is set. If T′ ≤ Tt, the behavior satisfies the condition for normal driving; otherwise the condition for normal driving is not satisfied and the behavior is uniformly determined to be illegal driving.
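The judgment rules then reduce to interval containment checks. Below is a simplified sketch, assuming the intervals come from hand_boundaries() above; the rectangles are passed as pixel bounds, and the gear-lever stay-time test T′ ≤ Tt is only noted in a comment because it requires counting frames across the video.

```python
def inside(run, lo, hi):
    """True if the closed interval `run` lies strictly between lo and hi."""
    return lo < run[0] and run[1] < hi

def judge(col_runs, row_runs, wheel, lever):
    """wheel = (a1, b1, a2, b2) and lever = (a4, b5, m_1, n_1) as pixel bounds."""
    a1, b1, a2, b2 = wheel
    a4, b5, m1, n1 = lever
    # Rules 1 and 2: every detected interval lies inside the steering wheel box.
    if (all(inside(c, a1, a2) for c in col_runs)
            and all(inside(r, b1, b2) for r in row_runs)):
        return "normal"
    # Rule 3: one hand on the wheel, the other on the gear lever.
    if (len(col_runs) == 2 and len(row_runs) == 2
            and inside(col_runs[0], a1, a2) and inside(col_runs[1], a4, m1)
            and inside(row_runs[0], b1, b2) and inside(row_runs[1], b5, n1)):
        return "normal"  # additionally require stay time T' <= Tt on the lever
    return "violation"
```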

Claims (1)

1. A method for detecting hand driving violation, comprising the steps of:
step one, acquiring hand images of a driver,
the method comprises the steps that a camera is used for collecting driving behavior image information of a driver, the camera is installed at the position of a vehicle roof above the head of the driver, the lens of the camera faces a steering wheel, the position of the lens is adjusted to ensure that the lens is not shielded by the head of the driver and can clearly collect images of hand moving areas of the driver during driving, the images comprise a vehicle door handle, the steering wheel, a vehicle-mounted multimedia system, a gear lever and the front end position of a vehicle seat, the resolution ratio of the images is set to be mxn, and a coordinate system i 'oj' is set according to the positions of pixel points, wherein the top left vertex is a coordinate origin o, the upper boundary of the images is an i 'axis, the left boundary of the images is a j' axis, m is the number of pixels in the;
step two, extracting the skin color area of the hand,
(1) RGB color space conversion to YCrCb color space
according to the conversion formulas from RGB space to YCrCb space, the Y, Cr and Cb values at each pixel (x, y) in the YCrCb color space are obtained, with x taking 0, 1, 2, …, m−1 and y taking 0, 1, 2, …, n−1 in turn; the whole image is traversed to obtain the R(x, y), G(x, y), B(x, y) values and the Y(x, y), Cr(x, y), Cb(x, y) values at each pixel,
Y(x,y)=0.257R(x,y)+0.504G(x,y)+0.098B(x,y)+16;
Cr(x,y)=0.439R(x,y)-0.368G(x,y)-0.071B(x,y)+128;
Cb(x,y)=-0.148R(x,y)-0.291G(x,y)+0.439B(x,y)+128;
x and y are respectively the horizontal and vertical coordinate values of the current pixel; R(x, y), G(x, y), B(x, y) represent the intensities of the red, green and blue components at pixel (x, y) in RGB color space; Y(x, y) represents the luminance at pixel (x, y) in YCrCb color space; and Cr(x, y), Cb(x, y) represent the chrominance at pixel (x, y) in YCrCb color space;
(2) the image is subjected to a non-linear piecewise color transform,
traversing the whole image with the obtained Y, Cr and Cb values of each pixel in the YCrCb color space, the central axis expressions of the skin color region, C̄b(Y(x,y)) and C̄r(Y(x,y)), and the skin color region width expressions WCb(x,y), WCr(x,y) are calculated as:

C̄b(Y(x,y)) = 108 + 10(kl − Y(x,y))/(kl − Ymin), if Y(x,y) < kl
C̄b(Y(x,y)) = 108 + 10(Y(x,y) − kh)/(Ymax − kh), if Y(x,y) > kh
C̄b(Y(x,y)) = 108, if kl ≤ Y(x,y) ≤ kh

C̄r(Y(x,y)) = 154 − 10(kl − Y(x,y))/(kl − Ymin), if Y(x,y) < kl
C̄r(Y(x,y)) = 154 + 22(Y(x,y) − kh)/(Ymax − kh), if Y(x,y) > kh
C̄r(Y(x,y)) = 154, if kl ≤ Y(x,y) ≤ kh

WCb(x,y) = WLCb + (Y(x,y) − Ymin)(WCb − WLCb)/(kl − Ymin), if Y(x,y) < kl
WCb(x,y) = WHCb + (Ymax − Y(x,y))(WCb − WHCb)/(Ymax − kh), if Y(x,y) > kh
WCb(x,y) = WCb, if kl ≤ Y(x,y) ≤ kh

WCr(x,y) = WLCr + (Y(x,y) − Ymin)(WCr − WLCr)/(kl − Ymin), if Y(x,y) < kl
WCr(x,y) = WHCr + (Ymax − Y(x,y))(WCr − WHCr)/(Ymax − kh), if Y(x,y) > kh
WCr(x,y) = WCr, if kl ≤ Y(x,y) ≤ kh
wherein kl and kh are the segment thresholds of the nonlinear piecewise color transformation, empirical values obtained from pixel statistics, with kl = 125 and kh = 188; Ymin and Ymax respectively represent the minimum and maximum luminance values, empirical values obtained from pixel statistics, with Ymin = 16 and Ymax = 235,
WLCb, WLCr, WHCb, WHCr, WCb and WCr are empirical values obtained from pixel statistics, with WLCb = 23, WLCr = 20, WHCb = 14, WHCr = 10, WCb = 46.97 and WCr = 38.76; the nonlinear transformation is then performed with the obtained information, traversing the whole image to obtain the Cr′ and Cb′ values of each pixel in the new color space YCr′Cb′, expressed as:
Cb′(x,y) = (Cb(x,y) − C̄b(Y(x,y)))·WCb/WCb(x,y) + C̄b(kh), if Y(x,y) < kl or Y(x,y) > kh
Cb′(x,y) = Cb(x,y), if kl ≤ Y(x,y) ≤ kh

Cr′(x,y) = (Cr(x,y) − C̄r(Y(x,y)))·WCr/WCr(x,y) + C̄r(kh), if Y(x,y) < kl or Y(x,y) > kh
Cr′(x,y) = Cr(x,y), if kl ≤ Y(x,y) ≤ kh
wherein Cr′(x, y), Cb′(x, y) are the corrected values of the chrominance components Cr, Cb at pixel (x, y) after chrominance compensation, and Cr(x, y), Cb(x, y) are the chrominance values at pixel (x, y);
(3) establishing an elliptical skin color model
describing the skin color distribution with an elliptical model, an elliptical skin color model is established on the YCr′Cb′ color space,
u(x,y)=cosω*(Cb′(x,y)-Cx)+sinω*(Cr′(x,y)-Cy)
v(x,y)=(-sinω)*(Cb′(x,y)-Cx)+cosω*(Cr′(x,y)-Cy)
d(u(x,y), v(x,y)) = u(x,y)²/a² + v(x,y)²/b² − 1
wherein u(x, y), v(x, y) are values computed from the Cb′ and Cr′ values at pixel (x, y), a and b are respectively the lengths of the semi-major and semi-minor axes of the ellipse, ω represents the inclination of the ellipse in radians, and Cx and Cy represent the center of the ellipse on the Cb′Cr′ plane; whether the pixel is a skin color point is judged for the first time: if d(u(x, y), v(x, y)) ≤ 0, the pixel is inside the ellipse or on its boundary and may be a skin color point, and if d(u(x, y), v(x, y)) > 0, the pixel is outside the ellipse and is considered probably not a skin color point;
(4) determining the skin color probability value of the pixel point,
in order to more accurately determine skin color pixel points on the basis of primary judgment of the skin color points, the skin color probability value p of each pixel point is further determined by using logistic regression analysiss(u(x,y),v(x,y)),
ps(u(x,y), v(x,y)) = 1/(1 + e^(β1·d(u(x,y),v(x,y)) + β2))
wherein β1 and β2 are parameters of d(u(x, y), v(x, y)) and are empirical values, with β1 = 2.247 and β2 = 1; the value range of ps is [0, 1]; taking the number of gray levels as 255, the gray value at the corresponding pixel (x, y) is B(x, y),
B(x,y)=255×ps(u(x,y),v(x,y));
step three, segmenting the skin color area of the hand
an optimal segmentation threshold ts is determined using a genetic algorithm, whether each pixel is a skin color point is judged again, and the optimal segmentation threshold ts is compared with the gray value B(x, y) of pixel (x, y) to determine the skin color area, giving a binary image B1(x, y):

B1(x,y) = 255, if B(x,y) > ts
B1(x,y) = 0, if B(x,y) ≤ ts

pixels whose gray value exceeds the threshold ts are determined to be the skin color area, completing the segmentation of the hand skin color area; the steps of adaptively selecting the segmentation threshold ts with the genetic algorithm are as follows:
(1) encoding: the gray values of the image range from 0 to 255, and each gray value corresponds to an 8-bit binary number, so the gray value of each pixel in the image can be represented by an 8-bit binary number;
(2) generating an initial population: M initial individuals X11, X12, …, X1M are randomly generated and form the initial population X1, X1 = {X11, X12, …, X1M}; the maximum number of generations is set to H, the crossover rate to Pc and the mutation rate to Pm, wherein H, Pc and Pm are self-set empirical values, with H = 40, Pc = 0.2 and Pm = 0.01;
(3) determining the fitness function: the fitness function g(t) is determined by the maximum between-class variance method, g(t) = wA(uA − ut)² + wB(uB − ut)² = wA·wB(uA − uB)², wherein the parameters wA, wB, uA, uB, ut in the formula are determined as follows:
assume that there are N pixels in the acquired image, where N is m × N, and N is the pixel with the gray value λλRespectively obtain ashProbability p of degree occurrenceλ,pλ=nλand/N, a certain gray value is used as a threshold value t, the image is divided into a human skin color area and a background area, A and B are respectively used, then A is (0, …, t), B is (t +1, …, L-1), L is the gray level number, and the probability P of each gray level is determined according to the occurrence probability P of each gray levelλThe probability value w of the gray levels in the A class and the B class can be obtainedAAnd wB
wA = Σ(λ=0 to t) pλ
wB = Σ(λ=t+1 to L−1) pλ = 1 − wA
as well as the overall average gray value uL and the average gray value ut(t) up to the threshold t:

uL = Σ(λ=0 to L−1) λ·pλ
ut(t) = Σ(λ=0 to t) λ·pλ
subsequently, the average gray value uA of class A and the average gray value uB of class B are further obtained:

uA = ut(t)/wA
uB = (uL − ut(t))/wB
finally, the fitness function is determined by the maximum between-class variance method: g(t) = wA(uA − ut)² + wB(uB − ut)² = wA·wB(uA − uB)²;
(4) selection: with the obtained fitness function g(t) = wA(uA − ut)² + wB(uB − ut)² = wA·wB(uA − uB)², the fitness values of the population are calculated and recorded as g1(t), g2(t), …, gM(t); the population is sorted by fitness, and individuals with larger fitness values are copied to replace those with smaller fitness values, generating a new population X1′, X1′ = {X11′, X12′, …, X1M′};
(5) crossover: the order of individuals in the population is shuffled and the individuals are randomly sorted; two individuals are randomly selected from the sorted population and, with crossover rate Pc, two-point crossover is performed: two crossover points are selected in the encoded bit strings of the two individuals and the substrings between the two points are exchanged, yielding two new individuals; after the crossover operation is performed on all individuals, a new population X1″ = {X11″, X12″, …, X1M″} is generated through the process of two-point crossover;
(6) mutation: with mutation rate Pm = 0.01, any one bit of the 8-bit binary string of any individual in population X1″ is inverted, i.e., the selected bit is changed from 0 to 1 or from 1 to 0, generating a new individual; after all individuals undergo the mutation operation, a new generation population X2 = {X21, X22, …, X2M} is formed;
(7) termination condition: when the maximum number of generations H = 40 is reached or the maximum fitness value in the population no longer changes appreciably, the operation terminates; otherwise the selection, crossover and mutation operations continue until the termination condition is met; the individual with the highest fitness value is decoded to obtain the optimal threshold ts:
ts = arg max g(t), 0 ≤ t ≤ L−1
Step four, skin color area communication and hand image extraction
(1) Hand skin color point marking
the connected regions are analyzed and marked with a region growing method; first a pixel (g, h) is selected as the initial point, with its mark value f(g,h) and the label llabel set to 0, wherein g and h are the horizontal and vertical coordinates of the current pixel, 0 ≤ g ≤ m−1 and 0 ≤ h ≤ n−1; the image is scanned from the top-left vertex, traversing the whole image from left to right and top to bottom, and each pixel is judged as to whether it is a skin color point; if the initial pixel is judged not to be a skin color point, it is not marked and the next pixel is taken as the current point; if the initial pixel is judged to be a skin color point, its mark value is set to 1, i.e., f(g,h) = 1; the next pixel is then judged as the current point; if it is a skin color point, the four pixels above, to the left, upper-left and upper-right of it are examined; if only one of them is marked, its mark value is assigned to the current point; if several are marked, the mark value is assigned in the order above, left, upper-left, upper-right: first check whether the pixel above the current point is marked, and if so assign its mark value to the current point, i.e., f(g,h) = f(g,h−1), ignore the marking state of the other three pixels, and select the next pixel as the current point to continue analyzing marks; if not, check whether the left pixel is marked, and if so assign f(g,h) = f(g−1,h), ignore the remaining two pixels, and select the next pixel as the current point; if not, check whether the upper-left pixel is marked, and if so assign f(g,h) = f(g−1,h−1), ignore the last pixel, and select the next pixel as the current point; if not, check whether the upper-right pixel is marked, and if so assign f(g,h) = f(g+1,h−1) and select the next pixel as the current point; if the current point is still not marked, none of the four pixels above, left, upper-left and upper-right is marked, so the current point is taken as the initial point of a new connected region, given a new mark value, and llabel is increased by 1; the whole image is traversed to finish marking the skin color points, the marking formula being:
f(g,h) = f(g,h−1), if the pixel above (g, h) is marked
f(g,h) = f(g−1,h), if the pixel above is unmarked and the left pixel is marked
f(g,h) = f(g−1,h−1), if the above and left pixels are unmarked and the upper-left pixel is marked
f(g,h) = f(g+1,h−1), if the above, left and upper-left pixels are unmarked and the upper-right pixel is marked
f(g,h) = llabel + 1, if none of the four pixels is marked (start of a new connected region)
wherein f(g,h) represents the mark value at pixel (g, h); f(g,h−1) the mark value of the pixel above (g, h); f(g−1,h) that of the pixel to its left; f(g−1,h−1) that of its upper-left pixel; and f(g+1,h−1) that of its upper-right pixel;
(2) hand region determination
after marking is finished, the connected regions are distinguished by their mark values; let the regions be R1, R2, …, Rn′, where n′ is the number of regions, and count the number of pixels af′ in each region (f′ = 1, 2, …, n′); the regions with the largest and second-largest pixel counts are retained, and the other regions are identified as noise regions with their gray values set to 0; a pixel count threshold Fn is then set; if the pixel counts of both the largest and the second-largest regions are less than Fn, both are considered noise regions and their gray values are set to 0; if the count of the second-largest region is less than Fn but that of the largest region is at least Fn, the largest region is determined to be a skin color region; if the counts of both regions are at least Fn, both the largest and the second-largest regions are skin color regions; the gray value B2(x, y) of the marked binary image is then:
B2(x,y) = 255, if pixel (x,y) belongs to a retained hand region
B2(x,y) = 0, otherwise
Extracting a skin color image to obtain a complete hand image;
step five, dividing hand position areas
using the acquired image information and the habitual placement of the driver's two hands during normal driving, five regions of interest are framed in the acquired image: the steering wheel region, the door handle region, the knee region, the gear lever region and the vehicle-mounted multimedia system region, defined as region 1 through region 5 respectively; the rectangular area (in pixel coordinates) enclosed by points A1(a1, b1), B1(a2, b1), C1(a1, b2), D1(a2, b2) is the steering wheel region, where A1, B1, C1, D1 are its four vertices with horizontal and vertical coordinates (a1, b1), (a2, b1), (a1, b2) and (a2, b2); the rectangle enclosed by A2(0, b3), B2(a3, b3), C2(0, b4), D2(a3, b4) is the door handle region; the rectangle enclosed by A3(a1, b4), B3(a2, b4), C3(a1, n−1), D3(a2, n−1) is the knee region; the rectangle enclosed by A4(a4, b5), B4(m−1, b5), C4(a4, n−1), D4(m−1, n−1) is the gear lever region; the rectangle enclosed by A5(a4, b1), B5(m−1, b1), C5(a4, b6), D5(m−1, b6) is the vehicle-mounted multimedia system region; the coordinates satisfy a3 < a1 < a2 < a4 < m−1 and b1 < b3 < b6 < b5 < b2 < b4 < n−1;
Step six, hand recognition and illegal driving judgment
the areas where the two hands are placed and the number of hands in each area are determined by integral projection, and a judgment criterion is set to detect driving violations; let f1 = 0, 1, …, m−1 and h1 = 0, 1, …, n−1; the sum of the gray values of all pixels in each column parallel to the j′ axis is calculated, the left and right boundaries of the hands are determined from the results, and the results are stored in the array F1[f1]:

F1[f1] = Σ(h1=0 to n−1) B2(f1, h1)

when F1[f1] becomes greater than 0 for the first time, the left boundary of the hand region of the more-left hand in the image has been detected; the variable JL1 records the current abscissa f1, i.e., JL1 = f1, and the coordinates of the first pixel in that column with gray value greater than 0 are recorded as H1(f1′, h1′); continuing to advance f1, when F1[f1] equals 0, the right boundary of the more-left hand has been detected, and the variable JR1 records the abscissa of the previous column, JR1 = f1 − 1; continuing, when F1[f1] is greater than 0 again, the left boundary of the more-right hand has been detected, JL2 = f1, and the first pixel in that column with gray value greater than 0 is recorded as H2(f1″, h1″); continuing, when F1[f1] equals 0 again, the right boundary of the more-right hand has been detected, JR2 = f1 − 1; if only JL1 and JR1 are recorded, then JL2 = 0 and JR2 = 0; the sum of the gray values of all pixels in each row parallel to the i′ axis is then calculated, the upper and lower boundaries of the hands are determined from the results, and the results are stored in the array H1[h1]:
Figure FDA0002238503160000092
when recording to H1[h1]The first time greater than 0, the upper boundary of the hand region of the hand more deviated from the upper hand in the image is detected, and the variable G is usedU1Recording the current ordinate h1And order GU1=h1Continue to supply h1Assigned value when H1[h1]When the value is equal to 0, the lower boundary of the hand region of the hand which is more deviated from the upper hand in the image is detected, and the variable G is usedD1Recording the longitudinal coordinate value h of a line at the current position11, and let GD1=h1-1, continue to give h1Assignment value when recording to H1[h1]Again greater than 0, the upper boundary of the hand region in the image more eccentric to the lower hand is detected, using the variable GU2Recording the current ordinate h1And order GU2=h1Continue to supply h1Assigned value when H1[h1]Again equal to 0, the lower boundary of the hand region of the hand in the image which is more inclined to the lower hand is detected, using the variable GD2Recording the vertical coordinate h of a line on the current position11, and let GD2=h1-1, if only G is recordedU1,GD1A value of (d), then GU2=0,GD2=0;
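A minimal sketch of this projection scan, assuming the extracted hand image is a binarized 2-D array whose background pixels are 0. The helper returns up to two runs of consecutive non-zero projection sums, which correspond to the boundary pairs (JL1, JR1), (JL2, JR2) for columns and (GU1, GD1), (GU2, GD2) for rows:

```python
import numpy as np


def boundary_pairs(profile):
    """Scan a 1-D integral projection profile and return up to two
    (start, end) runs of consecutive non-zero sums. For the column
    profile these are (JL1, JR1), (JL2, JR2); for the row profile,
    (GU1, GD1), (GU2, GD2). A missing second hand yields (0, 0)."""
    runs, start = [], None
    for idx, value in enumerate(profile):
        if value > 0 and start is None:
            start = idx                    # profile first exceeds 0: left/upper boundary
        elif value == 0 and start is not None:
            runs.append((start, idx - 1))  # previous column/row: right/lower boundary
            start = None
            if len(runs) == 2:
                break
    if start is not None and len(runs) < 2:
        runs.append((start, len(profile) - 1))
    while len(runs) < 2:
        runs.append((0, 0))
    return runs


# Toy binarized hand image g of shape (n, m): two non-overlapping blobs.
n, m = 480, 640
g = np.zeros((n, m), dtype=np.uint8)
g[200:260, 180:240] = 255  # hand more to the left
g[210:270, 300:360] = 255  # hand more to the right

F1 = g.sum(axis=0)  # F1[f1]: column sums, f1 = 0 .. m-1
H1 = g.sum(axis=1)  # H1[h1]: row sums,    h1 = 0 .. n-1
(JL1, JR1), (JL2, JR2) = boundary_pairs(F1)
(GU1, GD1), (GU2, GD2) = boundary_pairs(H1)
print(JL1, JR1, JL2, JR2)  # -> 180 239 300 359
print(GU1, GD1, GU2, GD2)  # -> 200 269 0 0 (the two hands overlap vertically)
```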
Judging the driving behavior: in actual driving, behavior is considered normal only when both hands are on the steering wheel, or when one hand is on the steering wheel while the other is shifting gears; that is, both hands lie in the steering wheel region, or one hand lies in the steering wheel region and the other in the gear lever region. If instead the hands are placed at other positions among the five regions of interest, the behavior is considered illegal driving. The specific judgment rules are as follows (a combined decision sketch follows the list):
a. When a1 < JL1 < a2 and a1 < JR1 < a2 and JL2 = 0 and JR2 = 0, the two hands together yielded only one left boundary and one right boundary, meaning that when the left and right boundaries were computed the horizontal offset between the two hands was not obvious, or the two hands overlapped:
1) if b1 < GU1 < b2 and b1 < GD1 < b2 and GU2 = 0 and GD2 = 0, the vertical offset between the two hands is likewise not obvious, so the two hands yield only one upper boundary and one lower boundary; both hands are in the steering wheel region, and the behavior is judged to be normal driving;
2) if b1 < GU1 < b2 and b1 < GD1 < b2 and b1 < GU2 < b2 and b1 < GD2 < b2, the vertical offset between the two hands is obvious, so the left and right hands each yield an upper and a lower boundary; both hands are in the steering wheel region, and the behavior is judged to be normal driving;
b. When a1 < JL1 < a2 and a1 < JR1 < a2 and a1 < JL2 < a2 and a1 < JR2 < a2, the left and right hands each yielded a left and a right boundary, meaning that when the left and right boundaries were computed the two hands were clearly offset horizontally and did not overlap:
1) if b1 < GU1 < b2 and b1 < GD1 < b2 and GU2 = 0 and GD2 = 0, the vertical offset between the two hands is not obvious, so the two hands yield only one upper boundary and one lower boundary; both hands are in the steering wheel region, and the behavior is judged to be normal driving;
2) if b1 < GU1 < b2 and b1 < GD1 < b2 and b1 < GU2 < b2 and b1 < GD2 < b2, the vertical offset between the two hands is obvious, so the left and right hands each yield an upper and a lower boundary; both hands are again in the steering wheel region, and the behavior is judged to be normal driving;
c. When a1 < JL1 < a2 and a1 < JR1 < a2 and a4 < JL2 < m-1 and a4 < JR2 < m-1, check whether H1(f1′, h1′) lies in the steering wheel region, i.e. a1 < f1′ < a2 and b1 < h1′ < b2, and H2(f1″, h1″) lies in the gear lever region, i.e. a4 < f1″ < m-1 and b5 < h1″ < n-1. If in this case b1 < GU1 < b2 and b1 < GD1 < b2 and b5 < GU2 < n-1 and b5 < GD2 < n-1, then one hand is on the steering wheel and the other is in the gear lever region. Count the number of image frames from the moment the hand first touches the gear lever to the moment it leaves it, obtaining the time T′ the hand rests on the gear lever, and set a dwell-time threshold Tt: if T′ ≤ Tt, the behavior is judged to be normal driving. Whenever the conditions for normal driving are not satisfied, the behavior is uniformly judged to be illegal driving.
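The three rules collapse naturally into one decision function. The sketch below assumes the boundary variables and the points H1(f1′, h1′), H2(f1″, h1″) from the projection step, and reuses the placeholder region coordinates from the first sketch; the frame rate fps and threshold T_t are hypothetical values, since the patent leaves Tt unspecified:

```python
# Placeholder coordinates, repeated from the first sketch above.
m, n = 640, 480
a1, a2, a4 = 160, 400, 480
b1, b2, b5 = 60, 300, 200


def within(lo, x, hi):
    """Strict interval test, matching conditions such as a1 < JL1 < a2."""
    return lo < x < hi


def judge_driving(JL1, JR1, JL2, JR2, GU1, GD1, GU2, GD2,
                  H1_pt, H2_pt, dwell_frames=0, fps=25.0, T_t=2.0):
    """Sketch of rules a-c. H1_pt / H2_pt are the points H1(f1', h1') and
    H2(f1'', h1'') recorded at the left boundaries of the two hand regions;
    dwell_frames counts the frames the second hand stays on the gear lever.
    fps and T_t are hypothetical values, not fixed by the patent."""
    # Rules a and b: both hands in the steering wheel region, with the
    # horizontal/vertical offsets either not obvious (one boundary pair,
    # second pair recorded as 0) or obvious (two boundary pairs).
    cols_in_wheel = (
        (within(a1, JL1, a2) and within(a1, JR1, a2) and JL2 == 0 and JR2 == 0)
        or (within(a1, JL1, a2) and within(a1, JR1, a2)
            and within(a1, JL2, a2) and within(a1, JR2, a2)))
    rows_in_wheel = (
        (within(b1, GU1, b2) and within(b1, GD1, b2) and GU2 == 0 and GD2 == 0)
        or (within(b1, GU1, b2) and within(b1, GD1, b2)
            and within(b1, GU2, b2) and within(b1, GD2, b2)))
    if cols_in_wheel and rows_in_wheel:
        return "normal"

    # Rule c: one hand on the steering wheel, the other on the gear lever,
    # tolerated only up to the dwell-time threshold T_t.
    (f1p, h1p), (f1pp, h1pp) = H1_pt, H2_pt
    if (within(a1, JL1, a2) and within(a1, JR1, a2)
            and within(a4, JL2, m - 1) and within(a4, JR2, m - 1)
            and within(a1, f1p, a2) and within(b1, h1p, b2)
            and within(a4, f1pp, m - 1) and within(b5, h1pp, n - 1)
            and within(b1, GU1, b2) and within(b1, GD1, b2)
            and within(b5, GU2, n - 1) and within(b5, GD2, n - 1)):
        T_prime = dwell_frames / fps  # T': dwell time on the gear lever (s)
        if T_prime <= T_t:
            return "normal"
    return "illegal"


# Example: left hand on the wheel, right hand on the gear lever for 30 frames.
print(judge_driving(JL1=200, JR1=260, JL2=500, JR2=560,
                    GU1=100, GD1=200, GU2=250, GD2=320,
                    H1_pt=(200, 150), H2_pt=(500, 260),
                    dwell_frames=30))  # -> "normal" (30 / 25 = 1.2 s <= 2 s)
```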
CN201611037118.7A 2016-11-23 2016-11-23 Method for detecting hand driving violation behavior Active CN106599792B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611037118.7A CN106599792B (en) 2016-11-23 2016-11-23 Method for detecting hand driving violation behavior

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611037118.7A CN106599792B (en) 2016-11-23 2016-11-23 Method for detecting hand driving violation behavior

Publications (2)

Publication Number Publication Date
CN106599792A CN106599792A (en) 2017-04-26
CN106599792B true CN106599792B (en) 2020-02-18

Family

ID=58592796

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611037118.7A Active CN106599792B (en) 2016-11-23 2016-11-23 Method for detecting hand driving violation behavior

Country Status (1)

Country Link
CN (1) CN106599792B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI647625B (en) * 2017-10-23 2019-01-11 緯創資通股份有限公司 Image detection method and image detection device for determining postures of user
CN107862296A (en) * 2017-11-20 2018-03-30 深圳市深视创新科技有限公司 The monitoring method and system of driving behavior, computer-readable recording medium
CN108509902B (en) * 2018-03-30 2020-07-03 湖北文理学院 Method for detecting call behavior of handheld phone in driving process of driver
CN108596064A (en) * 2018-04-13 2018-09-28 长安大学 Driver based on Multi-information acquisition bows operating handset behavioral value method
CN108564034A (en) * 2018-04-13 2018-09-21 湖北文理学院 The detection method of operating handset behavior in a kind of driver drives vehicle
CN109214370B (en) * 2018-10-29 2021-03-19 东南大学 Driver posture detection method based on arm skin color area centroid coordinates
CN109446999B (en) * 2018-10-31 2021-08-31 中电科新型智慧城市研究院有限公司 Rapid sensing system and method for dynamic human body movement based on statistical calculation
CN109584507B (en) * 2018-11-12 2020-11-13 深圳佑驾创新科技有限公司 Driving behavior monitoring method, device, system, vehicle and storage medium
CN110096991A (en) * 2019-04-25 2019-08-06 西安工业大学 A kind of sign Language Recognition Method based on convolutional neural networks
CN112849117B (en) * 2019-11-12 2022-11-15 合肥杰发科技有限公司 Steering wheel adjusting method and related device thereof
CN111008583B (en) * 2019-11-28 2023-01-06 清华大学 Pedestrian and rider posture estimation method assisted by limb characteristics
CN111860210A (en) * 2020-06-29 2020-10-30 杭州鸿泉物联网技术股份有限公司 Method and device for detecting separation of hands from steering wheel, electronic equipment and storage medium
CN112329646A (en) * 2020-11-06 2021-02-05 吉林大学 Hand gesture motion direction identification method based on mass center coordinates of hand
CN113518180B (en) * 2021-05-25 2022-08-05 宁夏宁电电力设计有限公司 Vehicle-mounted camera mounting method for electric power working vehicle
CN117622177A (en) * 2024-01-23 2024-03-01 青岛创新奇智科技集团股份有限公司 Vehicle data processing method and device based on industrial large model

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8471909B2 (en) * 2010-04-19 2013-06-25 Denso Corporation Driving assistance apparatus
CN104276080A (en) * 2014-10-16 2015-01-14 北京航空航天大学 Bus driver hand-off-steering-wheel detection warning system and warning method
CN105404862A (en) * 2015-11-13 2016-03-16 山东大学 Hand tracking based safe driving detection method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research on Machine-Vision-Based Detection of Illegal Driving Behavior; 卓胜华; China Master's Theses Full-Text Database, Information Science and Technology Series; 2013-01-15 (No. 01); pp. 21-37, 45-53 *
Research on Face Detection Methods Based on Skin Color and Haar Variance Features; 李燕; China Master's Theses Full-Text Database, Information Science and Technology Series; 2014-12-15 (No. 12); pp. 15-20 *
Research on Key Technologies for Real-Time Monitoring of Vehicle Driving Behavior; 何龙文; China Master's Theses Full-Text Database, Engineering Science and Technology II Series; 2014-01-15 (No. 01); pp. 21-31 *

Also Published As

Publication number Publication date
CN106599792A (en) 2017-04-26

Similar Documents

Publication Publication Date Title
CN106599792B (en) Method for detecting hand driving violation behavior
CN110197589B (en) Deep learning-based red light violation detection method
CN102708356B (en) Automatic license plate positioning and recognition method based on complex background
CN109255350B (en) New energy license plate detection method based on video monitoring
CN103034836B (en) Road sign detection method and road sign checkout equipment
CN106682586A (en) Method for real-time lane line detection based on vision under complex lighting conditions
CN109086687A (en) The traffic sign recognition method of HOG-MBLBP fusion feature based on PCA dimensionality reduction
CN110969160A (en) License plate image correction and recognition method and system based on deep learning
CN113160575A (en) Traffic violation detection method and system for non-motor vehicles and drivers
CN104978567A (en) Vehicle detection method based on scenario classification
CN111899515B (en) Vehicle detection system based on wisdom road edge calculates gateway
CN108960055A A kind of method for detecting lane lines based on local line's stage mode feature
CN111047874B (en) Intelligent traffic violation management method and related product
CN113128507B (en) License plate recognition method and device, electronic equipment and storage medium
CN111931683B (en) Image recognition method, device and computer readable storage medium
CN103544480A (en) Vehicle color recognition method
CN111325769A (en) Target object detection method and device
CN107590500A (en) A kind of color recognizing for vehicle id method and device based on color projection classification
CN111860509A (en) Coarse-to-fine two-stage non-constrained license plate region accurate extraction method
CN109977941A (en) Licence plate recognition method and device
CN112115800A (en) Vehicle combination recognition system and method based on deep learning target detection
CN116993970A (en) Oil and gas pipeline excavator occupation pressure detection method and system based on yolov5
CN111723805A (en) Signal lamp foreground area identification method and related device
CN113158954A (en) Automatic traffic off-site zebra crossing area detection method based on AI technology
CN111402185A (en) Image detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant