CN110688921A - Method for detecting smoking behavior of a driver based on human body action recognition technology


Info

Publication number
CN110688921A
Authority
CN
China
Prior art keywords
joint point
driver
point
joint
hand
Prior art date
Legal status
Pending
Application number
CN201910875018.9A
Other languages
Chinese (zh)
Inventor
叶智锐 (Ye Zhirui)
李鲁玉 (Li Luyu)
黄卫 (Huang Wei)
王超 (Wang Chao)
许跃如 (Xu Yueru)
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University
Priority to CN201910875018.9A
Publication of CN110688921A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/107 - Static hand or arm

Abstract

The invention discloses a method for detecting the smoking behavior of a driver based on human body action recognition technology, which comprises the following steps. S1: collect a depth image of the driver's upper body. S2: extract the key frame images for driver smoking detection from the video image. S3: extract the angle features formed by the coordinate vectors of the driver's upper-body joint points. S4: establish the driver's upper-body limb angle feature set. S5: construct a support vector machine training model and train it to obtain the trained model. S6: extract the upper-body limb angle feature set from depth images collected in real time and judge whether the driver exhibits smoking behavior; if so, record it, otherwise judge the driver's upper-body depth image at the next moment. The invention extracts the limb angle features of the driver's upper body during the smoking motion and recognizes that motion, so the driver's smoking behavior can be detected comprehensively and in real time.

Description

Method for detecting smoking behavior of driver based on human body action recognition technology
Technical Field
The invention relates to the technical field of human body action recognition, in particular to a method for detecting smoking behavior of a driver based on a human body action recognition technology.
Background
Smoking while driving distracts the driver, impairing limb coordination, working efficiency and driving safety, and can even cause traffic accidents.
There are two conventional methods for detecting driver smoking behavior: the first uses a gas detection sensor to detect the smoke produced during smoking; the second uses an infrared temperature sensor to detect the temperature of the burning cigarette. Both methods have serious limitations: the amount of smoke produced during smoking is small and the burning area is tiny, so sensor sensitivity drops when the cab is well ventilated or other heat sources are present, leading to missed detections and false detections.
To overcome the defects of traditional sensor-based smoking detection, methods that detect driver smoking from video images have gradually been developed as computer vision and image processing technologies mature. These include identifying a bright cigarette-tip region around the calibrated face, detecting the straight edge of a cigarette near the mouth, and so on. However, numerous observations show that, to avoid being caught smoking while driving, drivers usually shield the cigarette with a hand, which reduces the accuracy of such detection results.
Disclosure of Invention
Purpose of the invention: to solve the missed-detection and false-detection problems in existing driver smoking recognition, the invention provides a method for detecting the smoking behavior of a driver based on human body action recognition technology.
Technical scheme: to realize the purpose of the invention, the following technical scheme is adopted:
a method for detecting smoking behavior of a driver based on a human body action recognition technology specifically comprises the following steps:
S1: collect a depth image of the upper body of the driver in the driving area;
S2: extract the key frame images for driver smoking detection from the acquired video according to the driver's upper-body depth image;
S3: extract the angle features formed by the coordinate vectors of the driver's upper-body joint points from the key frame images;
S4: establish the driver's upper-body limb angle feature set from the joint-point angle features;
S5: construct a support vector machine training model from the driver's upper-body limb angle feature set and train it to obtain the trained model;
S6: extract the driver's upper-body limb angle feature set from the depth images collected in real time and input it into the trained support vector machine model; the model judges whether the driver exhibits smoking behavior; if so, the smoking behavior is recorded, otherwise the depth image collected at the next moment is judged.
Further, the driver's upper-body depth image contains 10 joint points of the driver's upper body and the coordinate information of these 10 joint points in three-dimensional space. The 10 joint points are: the head joint point, the shoulder-center joint point, the left shoulder, left elbow, left wrist and left-hand joint points, and the right shoulder, right elbow, right wrist and right-hand joint points. The coordinate of each joint point in three-dimensional space is Pi = (pix, piy, piz), i = 1, 2, 3, …, 10, where pix, piy and piz are the coordinates of the i-th joint point on the x-, y- and z-axes respectively, and i is the serial number of the joint point.
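For illustration only (the patent does not prescribe an implementation), the joint numbering can be captured in a small Python structure; the joint names follow the labels used later in the embodiment, while everything else is an assumption of this sketch:

```python
import numpy as np

# Serial numbers of the 10 upper-body joint points as listed in the patent.
JOINTS = {
    1: "head",
    2: "shoulder_center",
    3: "shoulder_left",
    4: "elbow_left",
    5: "wrist_left",
    6: "hand_left",
    7: "shoulder_right",
    8: "elbow_right",
    9: "wrist_right",
    10: "hand_right",
}

def make_frame(coords):
    """Wrap raw (x, y, z) tuples, keyed by joint serial number, as numpy
    vectors so the vector math in the later sketches works directly."""
    return {i: np.asarray(xyz, dtype=float) for i, xyz in coords.items()}
```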
Further, in step S2, the key frame image for detecting the smoking of the driver is extracted from the acquired video image, specifically as follows:
S2.1: according to the Euclidean distance formula, compute the distance between the driver's head joint point and each of the left-hand and right-hand joint points, specifically:
d1 = √((p1x - p6x)² + (p1y - p6y)² + (p1z - p6z)²)

d2 = √((p1x - p10x)² + (p1y - p10y)² + (p1z - p10z)²)

where: d1 is the distance between the head joint point and the left-hand joint point; d2 is the distance between the head joint point and the right-hand joint point; (p1x, p1y, p1z), (p6x, p6y, p6z) and (p10x, p10y, p10z) are the coordinates of the head, left-hand and right-hand joint points in three-dimensional space respectively;
S2.2: judge whether the distances between the driver's head joint point and the left-hand and right-hand joint points satisfy the following conditions; in the acquired video, every image satisfying them is a key frame image for driver smoking detection:
d1 ≤ A1 or d2 ≤ A2, with the condition holding continuously for T ≥ B

where: d1 and d2 are the distances defined above; A1 is the experimental value of the distance between the driver's left-hand joint point and head joint point; A2 is the experimental value of the distance between the driver's right-hand joint point and head joint point; T is the time for which the distance condition holds continuously; and B is the theoretical duration the condition must last for a smoking action.
Further, in step S3, the angular features formed by the coordinate vectors of the upper body joint point of the driver are extracted from the key frame image, specifically as follows:
S3.1: extract the driver's upper-body depth image from the key frame images and obtain the driver's upper-body joint-point coordinate vectors from it, specifically:
V_ij = (pjx - pix, pjy - piy, pjz - piz)

where V_ij is the joint-point coordinate vector with the i-th joint point as its starting point and the j-th joint point as its end point; pix, piy and piz are the coordinates of the i-th joint point on the x-, y- and z-axes in three-dimensional space; pjx, pjy and pjz are the corresponding coordinates of the j-th joint point; i and j are the serial numbers of the joint points;
S3.2: extract the joint-point angle features from the driver's upper-body joint-point coordinate vectors, specifically:
θk = arccos( (V_a · V_b) / (|V_a| |V_b|) )

where: θk is the included angle formed by the i1-th, j1-th, i2-th and j2-th joint points; V_a is the joint-point coordinate vector with the i1-th joint point as starting point and the j1-th joint point as end point; V_b is the joint-point coordinate vector with the i2-th joint point as starting point and the j2-th joint point as end point; i and j are the serial numbers of the joint points.
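The vector and angle formulas translate directly into code; this sketch assumes the frame structure above, returns radians, and clips the cosine to guard against floating-point round-off:

```python
import numpy as np

def joint_vector(frame, i, j):
    """V_ij: coordinate vector from joint point i (start) to joint point j (end)."""
    return frame[j] - frame[i]

def joint_angle(frame, i1, j1, i2, j2):
    """Included angle theta_k between V_{i1,j1} and V_{i2,j2}, in radians."""
    v1 = joint_vector(frame, i1, j1)
    v2 = joint_vector(frame, i2, j2)
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
```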
Further, the driver's upper-body limb angle feature set comprises a limb angle feature set for judging right-hand smoking, (θ1, θ2, θ3, θ4), and a limb angle feature set for judging left-hand smoking, (θ5, θ6, θ7, θ8);

where: θ1 is the angle formed by the right elbow, right shoulder and right wrist joint points; θ2 is the angle formed by the right elbow, right wrist and right-hand joint points; θ3 is the angle formed by the head, shoulder-center, right elbow and right-hand joint points; θ4 is the angle formed by the head, shoulder-center, right elbow and right shoulder joint points; θ5 is the angle formed by the left shoulder, left elbow and left wrist joint points; θ6 is the angle formed by the left elbow, left wrist and left-hand joint points; θ7 is the angle formed by the head, shoulder-center, left elbow and left-hand joint points; θ8 is the angle formed by the head, shoulder-center, left elbow and left shoulder joint points.
Further, in step S5, the trained support vector machine training model is obtained as follows:
S5.1: collect an image stream of the driver's driving state and divide it into positive and negative samples according to the skeleton images of the driver's actions;
S5.2: repeat steps S3 to S4 on the skeleton images of the positive and negative samples to obtain the driver upper-body limb angle feature sets corresponding to each;
S5.3: divide the driver upper-body limb angle feature sets corresponding to the positive and negative samples into a training set and a test set;
S5.4: construct a support vector machine training model based on a linear kernel function, train it with the training set and evaluate it with the test set; when the recognition rate of the model reaches at least 90%, the model is the trained support vector machine training model, otherwise training continues until the recognition rate reaches at least 90%.
Further, the positive samples consist of the driver skeleton images in the image stream that contain the smoking action, and the negative samples consist of the driver skeleton images that do not.
Beneficial effects: compared with the prior art, the technical scheme of the invention has the following beneficial technical effects:
(1) based on Kinect equipment, the invention extracts the limb angle features of the driver's upper body during the smoking motion and recognizes that motion, so the driver's smoking behavior can be detected comprehensively and in real time; in analysing the smoking motion, the differences between the joint angles of the human body under different actions are used to judge the motion;
(2) the invention overcomes the limitations of traditional smoking detection based on gas sensors and infrared temperature sensors, effectively recognizing driver smoking even when the cab is well ventilated or contains other heat sources; it also solves the problem, in video-image detection, of the driver shielding the cigarette with a hand and refusing to cooperate, thereby improving the accuracy of the detection result.
Drawings
FIG. 1 is a schematic flow chart of a method of driver smoking behavior detection in accordance with the present invention;
fig. 2 is a schematic diagram of the present invention tracking the upper body joint point of the driver using the Kinect device.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the invention clearer, the technical solutions in the embodiments are described completely and clearly below with reference to the drawings. The described embodiments are only some of the embodiments of the invention, not all of them; the following detailed description is therefore not intended to limit the scope of the claimed invention, but merely represents selected embodiments.
Example 1
Referring to fig. 1, the present embodiment provides a method for detecting a smoking behavior of a driver based on a human body motion recognition technology, which specifically includes the following steps:
step S1: referring to fig. 2, an upper body depth image of a driver in a driving area is acquired by using a Kinect device, wherein the upper body depth image includes 10 joint points of the upper body of the driver and coordinate information of the 10 joint points in a three-dimensional space, and a coordinate of each joint point in the three-dimensional space is as follows: pi(pix,piy,piz) i-1, 2,3, …,10, wherein pixIs the coordinate of the ith joint point on the x-axis in three-dimensional space, piyIs the coordinate of the i-th joint point on the y-axis in three-dimensional space, pizI represents the serial number of the joint point for the coordinate of the ith joint point on the z-axis in the three-dimensional space.
Specifically, the first joint point is a head joint point, and the coordinates are: p1-head(p1x,p1y,p1z) The second joint point is a center joint point of the two shoulders, and the coordinates are as follows: p2-shoulder_center(p2x,p2y,p2z) And the third joint point is a left shoulder joint point, and the coordinates are as follows: p3-shoulder_left(p3x,p3y,p3z) And the fourth joint point is a left elbow joint point, and the coordinates are as follows: p4-elbow_left(p4x,p4y,p4z) And the fifth joint point is a left wrist joint point, and the coordinates are as follows: p5-wrist_left(p5x,p5y,p5z) And the sixth joint point is a left-hand joint point, and the coordinates are as follows: p6-hand_left(p6x,p6y,p6z) And the coordinates of the seventh joint point as the right shoulder joint point are as follows: p7-shoulder_right(p7x,p7y,p7z) And the eighth joint point is a right elbow joint point, and the coordinates are as follows: p8-elbow_right(p8x,p8y,p8z) And the ninth joint point is a right wrist joint point, and the coordinates are as follows: p9-wrist_right(p9x,p9y,p9z) And the tenth joint point is a joint point of the right hand part, and the coordinates are as follows: p10-hand_right(p10x,p10y,p10z)。
Step S2: according to the coordinate information of each joint point in three-dimensional space from step S1, extract the key frame images for driver smoking detection from the video of the driver in the driving area acquired by the Kinect device. The specific steps are as follows:
step S2.1: tracking hand joint points of a driver in a driving area by using a Kinect device, and respectively calculating head joint points P of the driver according to a Euclidean distance formula1-headAnd left hand joint point P6-hand_leftA distance d between1Driver's head joint point P1-headAnd right hand joint point P10-hand_rightA distance d between2The method specifically comprises the following steps:
Figure BDA0002204048870000061
wherein: d1Distance between the head joint point and the left hand joint point, d2Head joint point and right hand joint point, p1xFor the coordinates of the head joint point in three-dimensional space on the x-axis, p1yFor the coordinates of the head joint point in three-dimensional space on the y-axis, p1zFor the coordinates of the head joint point in the three-dimensional space on the z-axis, p6xFor the left-hand joint point on the x-axis in three-dimensional space, p6yFor the left-hand joint point in three-dimensional space on the y-axis, p6zFor the left-hand joint point in three-dimensional space on the z-axis, p10xFor the coordinates of the right hand joint point on the x-axis in three-dimensional space, p10yFor the coordinates of the right hand joint point in three dimensions on the y-axis, p10zThe coordinates of the right hand joint point on the z-axis in three-dimensional space.
In particular, the distance d1 between the head joint point and the left-hand joint point serves as the index for judging left-hand smoking, and the distance d2 between the head joint point and the right-hand joint point serves as the index for judging right-hand smoking.
Step S2.2: select experimental values for the distance between the driver's left-hand or right-hand joint point and the head joint point under the smoking action, and judge whether either measured distance is within its experimental value, that is:

d1 ≤ A1 or d2 ≤ A2

where: d1 and d2 are the distances defined above; A1 is the experimental value of the distance between the driver's left-hand joint point and head joint point; A2 is the experimental value of the distance between the driver's right-hand joint point and head joint point.

While a hand joint point stays within the experimental distance of the head joint point, measure the time T for which this condition holds, and judge whether it reaches the theoretical duration of the condition, that is:

T ≥ B

where: T is the time for which the distance condition holds continuously; B is the theoretical duration the condition must last. The duration test prevents a hand that merely passes near the head from being treated as a smoking key frame.
In the present embodiment, the experimental value A1 of the distance between the driver's left-hand joint point and head joint point and the experimental value A2 of the distance between the right-hand joint point and head joint point are both set to 0.1 m, and the theoretical duration B is set to 0.8 s.
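With these values, the key-frame test of step S2.2 can be sketched as a small stateful detector; the class, the fixed frame rate, and the streaming interface are assumptions of this illustration:

```python
import numpy as np

class KeyframeDetector:
    """Flags key frames: a hand within A1/A2 of the head continuously for B seconds."""

    def __init__(self, a1=0.1, a2=0.1, b=0.8, fps=30.0):
        self.a1, self.a2, self.b = a1, a2, b
        self.dt = 1.0 / fps   # assumed fixed interval between depth frames
        self.elapsed = 0.0    # time the distance condition has held so far

    def update(self, frame):
        d1 = np.linalg.norm(frame[1] - frame[6])    # head to left hand
        d2 = np.linalg.norm(frame[1] - frame[10])   # head to right hand
        if d1 <= self.a1 or d2 <= self.a2:
            self.elapsed += self.dt
        else:
            self.elapsed = 0.0                      # condition broken, reset timer
        return self.elapsed >= self.b               # key frame once held for B seconds
```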
Meanwhile, in the video of the driver in the driving area acquired by the Kinect device, every image satisfying both relations is a key frame image for driver smoking detection.
Step S3: from the key frame images acquired in step S2, obtain the coordinate information of the 10 upper-body joint points in three-dimensional space, determine the driver's joint-point coordinate vectors from that information, and then obtain the joint-point angle features from the vectors. The specific steps are as follows:
step S3.1: in the key frame image acquired in step S2, coordinate information of 10 joint points of the upper body of the driver in the three-dimensional space is acquired, and the coordinate information of the 10 joint points in the three-dimensional space is processed to obtain 11 joint point coordinate vectors, where each joint point coordinate vector specifically includes:
Figure BDA0002204048870000071
wherein:
Figure BDA0002204048870000072
is a joint point coordinate vector with the ith joint point as a starting point and the jth joint point as an end point, pixIs the coordinate of the ith joint point on the x-axis in three-dimensional space, piyIs the coordinate of the i-th joint point on the y-axis in three-dimensional space, pizFor the z-axis coordinate of the i-th joint point in three-dimensional space, pjxIs as followsCoordinates of j joint points on the x-axis in three-dimensional space, pjyFor the y-axis coordinate of the j-th joint point in three-dimensional space, pjzThe j is the coordinate of the j-th joint point on the z-axis in the three-dimensional space, and i and j are the serial numbers of the joint points.
In this embodiment, the 11 joint-point coordinate vectors are specifically:

V(8→7): from the right elbow joint point to the right shoulder joint point;
V(8→9): from the right elbow joint point to the right wrist joint point;
V(9→8): from the right wrist joint point to the right elbow joint point;
V(9→10): from the right wrist joint point to the right-hand joint point;
V(2→1): from the shoulder-center joint point to the head joint point;
V(8→10): from the right elbow joint point to the right-hand joint point;
V(4→3): from the left elbow joint point to the left shoulder joint point;
V(4→5): from the left elbow joint point to the left wrist joint point;
V(5→4): from the left wrist joint point to the left elbow joint point;
V(5→6): from the left wrist joint point to the left-hand joint point;
V(4→6): from the left elbow joint point to the left-hand joint point.
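Using the joint numbering from step S1, these 11 vectors correspond to the (start, end) index pairs below; the mapping is derived from the text, while the code itself is illustrative:

```python
# (start, end) joint serial numbers for the 11 coordinate vectors.
VECTOR_PAIRS = [
    (8, 7),   # right elbow  -> right shoulder
    (8, 9),   # right elbow  -> right wrist
    (9, 8),   # right wrist  -> right elbow
    (9, 10),  # right wrist  -> right hand
    (2, 1),   # shoulder center -> head
    (8, 10),  # right elbow  -> right hand
    (4, 3),   # left elbow   -> left shoulder
    (4, 5),   # left elbow   -> left wrist
    (5, 4),   # left wrist   -> left elbow
    (5, 6),   # left wrist   -> left hand
    (4, 6),   # left elbow   -> left hand
]

def all_vectors(frame):
    """Compute all 11 joint-point coordinate vectors V_ij = P_j - P_i."""
    return {(i, j): frame[j] - frame[i] for (i, j) in VECTOR_PAIRS}
```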
Step S3.2: from the 11 joint-point coordinate vectors, extract 8 joint-point angle features, each of the form:

θk = arccos( (V_a · V_b) / (|V_a| |V_b|) )

where: θk is the included angle formed by the i1-th, j1-th, i2-th and j2-th joint points; V_a is the joint-point coordinate vector with the i1-th joint point as starting point and the j1-th joint point as end point; V_b is the joint-point coordinate vector with the i2-th joint point as starting point and the j2-th joint point as end point; i and j are the serial numbers of the joint points.
In this embodiment, the 8 joint-point angle features are specifically:

θ1: the angle between V(8→7) and V(8→9), formed by the right elbow, right shoulder and right wrist joint points;
θ2: the angle between V(9→8) and V(9→10), formed by the right elbow, right wrist and right-hand joint points;
θ3: the angle between V(2→1) and V(8→10), formed by the head, shoulder-center, right elbow and right-hand joint points;
θ4: the angle between V(2→1) and V(8→7), formed by the head, shoulder-center, right elbow and right shoulder joint points;
θ5: the angle between V(4→3) and V(4→5), formed by the left shoulder, left elbow and left wrist joint points;
θ6: the angle between V(5→4) and V(5→6), formed by the left elbow, left wrist and left-hand joint points;
θ7: the angle between V(2→1) and V(4→6), formed by the head, shoulder-center, left elbow and left-hand joint points;
θ8: the angle between V(2→1) and V(4→3), formed by the head, shoulder-center, left elbow and left shoulder joint points.
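Combining these vector pairs with the joint_angle helper sketched earlier gives the full 8-angle extraction; the pairing of vectors to angles follows the definitions above, while the function itself is an illustrative assumption:

```python
# Vector pairs defining theta_1 ... theta_8.
ANGLE_PAIRS = [
    ((8, 7), (8, 9)),    # theta_1: right elbow, between shoulder and wrist
    ((9, 8), (9, 10)),   # theta_2: right wrist, between elbow and hand
    ((2, 1), (8, 10)),   # theta_3: torso axis vs right elbow -> hand
    ((2, 1), (8, 7)),    # theta_4: torso axis vs right elbow -> shoulder
    ((4, 3), (4, 5)),    # theta_5: left elbow, between shoulder and wrist
    ((5, 4), (5, 6)),    # theta_6: left wrist, between elbow and hand
    ((2, 1), (4, 6)),    # theta_7: torso axis vs left elbow -> hand
    ((2, 1), (4, 3)),    # theta_8: torso axis vs left elbow -> shoulder
]

def angle_features(frame):
    """Return the right-hand set (theta_1..theta_4) and left-hand set (theta_5..theta_8)."""
    thetas = [joint_angle(frame, i1, j1, i2, j2)
              for (i1, j1), (i2, j2) in ANGLE_PAIRS]
    return thetas[:4], thetas[4:]
```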
Step S4: distinguishing the two cases of left-hand smoking and right-hand smoking, establish the driver's upper-body limb angle feature set from the 8 joint-point angle features of step S3.2; the established feature set serves as the basis for judging whether the driver smokes while driving. The feature set is therefore divided into a limb angle feature set for judging right-hand smoking and one for judging left-hand smoking.
Specifically, the limb angle feature set for judging right-hand smoking is (θ1, θ2, θ3, θ4) and the limb angle feature set for judging left-hand smoking is (θ5, θ6, θ7, θ8), with θ1 through θ8 as defined in step S3.2 above.
Step S5: construct a support vector machine training model from the limb angle feature sets for judging right-hand and left-hand smoking; the model distinguishes the two situations of the driver smoking and not smoking. Training proceeds as follows:
step S5.1: the method comprises the steps of collecting an image stream of a driving state of a driver by using a Kinect device, dividing the collected image stream into two types, namely a positive sample and a negative sample, wherein the positive sample is composed of a driver skeleton image containing smoking actions in the image stream, the negative sample is composed of a driver skeleton image not containing smoking actions in the image stream, and creating an upper body skeleton image database of the driving state of the driver according to the driver skeleton images in the positive sample and the negative sample.
In this embodiment, the driver skeleton images that do not contain the smoking action include skeleton images of making a phone call, of eating, and of other similar actions.
Step S5.2: extract the driver upper-body limb angle feature set corresponding to each skeleton image from the positive-sample and negative-sample skeleton images; that is, repeat steps S3 to S4 on each skeleton image of the positive and negative samples.
Step S5.3: divide the driver upper-body limb angle feature sets corresponding to the positive-sample and negative-sample skeleton images into a training set and a test set.
Specifically, for each class of samples, 80% of the driver upper-body limb angle feature sets are assigned to the training set and 20% to the test set.
Step S5.4: construct a support vector machine training model based on the linear kernel function and train it with the samples in the training set of step S5.3. After training, test the model's smoking-detection performance with the samples in the test set. When the recognition rate is at least 90%, the model meets the accuracy requirement of smoking behavior recognition and is the trained support vector machine training model; when the recognition rate does not reach 90%, training continues until it does.
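A minimal training sketch with scikit-learn, assuming a feature matrix X of 4-element angle sets and labels y (1 = smoking, 0 = not smoking); the 80/20 split and the 90% acceptance gate follow the embodiment, while the library choice and names are assumptions of this illustration:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def train_smoking_svm(X: np.ndarray, y: np.ndarray, min_accuracy: float = 0.90) -> SVC:
    """Train a linear-kernel SVM on limb angle feature sets; require 90% test accuracy."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0
    )
    model = SVC(kernel="linear")
    model.fit(X_train, y_train)
    accuracy = model.score(X_test, y_test)
    if accuracy < min_accuracy:
        # In the patent's procedure, training would continue (for example with
        # more samples) until the recognition rate reaches at least 90%.
        raise RuntimeError(f"recognition rate {accuracy:.1%} below required 90%")
    return model
```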
Step S6: collect the driver's upper-body depth image in real time, extract the driver's upper-body limb angle feature set from it, and input the feature set into the trained support vector machine training model. The model judges whether the driver exhibits smoking behavior; if so, the smoking behavior is recorded, otherwise the driver's upper-body depth image at the next moment is acquired and the judgement continues.
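Putting the sketches together, a hypothetical real-time monitoring loop for step S6 could look like the following; the frame source, the single shared classifier applied to each hand's feature set in turn, and the logging callback are all assumptions of this illustration:

```python
def monitor(depth_frames, detector, model, log=print):
    """Judge smoking behavior on each key frame of the live skeleton stream."""
    for frame in depth_frames:          # frame: joint serial number -> (x, y, z)
        if not detector.update(frame):
            continue                    # no key frame: judge the next moment's image
        right_set, left_set = angle_features(frame)
        # Apply the trained SVM to the right-hand and left-hand feature sets.
        if model.predict([right_set])[0] == 1 or model.predict([left_set])[0] == 1:
            log("smoking behavior detected and recorded")
```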
The invention and its embodiments have been described above in an illustrative, non-limiting manner; the accompanying drawings show only one of its embodiments, and the actual construction and method are not limited to it. Therefore, structures and embodiments similar to this technical solution that a person skilled in the art, taught by the invention, designs without creative effort and without departing from its spirit all fall within the protection scope of the invention.

Claims (7)

1. A method for detecting smoking behavior of a driver based on a human body action recognition technology is characterized by comprising the following steps:
S1: collect a depth image of the upper body of the driver in the driving area;
S2: extract the key frame images for driver smoking detection from the acquired video according to the driver's upper-body depth image;
S3: extract the angle features formed by the coordinate vectors of the driver's upper-body joint points from the key frame images;
S4: establish the driver's upper-body limb angle feature set from the joint-point angle features;
S5: construct a support vector machine training model from the driver's upper-body limb angle feature set and train it to obtain the trained model;
S6: extract the driver's upper-body limb angle feature set from the depth images collected in real time and input it into the trained support vector machine model; the model judges whether the driver exhibits smoking behavior; if so, the smoking behavior is recorded, otherwise the depth image collected at the next moment is judged.
2. The method for detecting smoking behavior of a driver based on human body motion recognition technology according to claim 1, wherein the driver's upper-body depth image includes 10 joint points of the driver's upper body and the coordinate information of the 10 joint points in three-dimensional space, the 10 joint points being: the head joint point, the shoulder-center joint point, the left shoulder, left elbow, left wrist and left-hand joint points, and the right shoulder, right elbow, right wrist and right-hand joint points; the coordinate of each joint point in three-dimensional space is Pi = (pix, piy, piz), i = 1, 2, 3, …, 10, where pix, piy and piz are the coordinates of the i-th joint point on the x-, y- and z-axes respectively and i is the serial number of the joint point.
3. The method for detecting the smoking behavior of the driver based on the human body motion recognition technology as claimed in claim 1 or 2, wherein in the step S2, the key frame image of the smoking detection of the driver is extracted from the acquired video image, specifically as follows:
S2.1: according to the Euclidean distance formula, compute the distance between the driver's head joint point and each of the left-hand and right-hand joint points, specifically:
d1 = √((p1x - p6x)² + (p1y - p6y)² + (p1z - p6z)²)

d2 = √((p1x - p10x)² + (p1y - p10y)² + (p1z - p10z)²)

where: d1 is the distance between the head joint point and the left-hand joint point; d2 is the distance between the head joint point and the right-hand joint point; (p1x, p1y, p1z), (p6x, p6y, p6z) and (p10x, p10y, p10z) are the coordinates of the head, left-hand and right-hand joint points in three-dimensional space respectively;
S2.2: judge whether the distances between the driver's head joint point and the left-hand and right-hand joint points satisfy the following conditions; in the acquired video, every image satisfying them is a key frame image for driver smoking detection:
d1 ≤ A1 or d2 ≤ A2, with the condition holding continuously for T ≥ B

where: d1 and d2 are the distances defined above; A1 is the experimental value of the distance between the driver's left-hand joint point and head joint point; A2 is the experimental value of the distance between the driver's right-hand joint point and head joint point; T is the time for which the distance condition holds continuously; and B is the theoretical duration the condition must last.
4. The method for detecting smoking behavior of a driver based on human body motion recognition technology as claimed in claim 3, wherein in the step S3, the angular features formed by the coordinate vectors of the upper body joint point of the driver are extracted from the key frame image, specifically as follows:
S3.1: extract the driver's upper-body depth image from the key frame images and obtain the driver's upper-body joint-point coordinate vectors from it, specifically:
V_ij = (pjx - pix, pjy - piy, pjz - piz)

where V_ij is the joint-point coordinate vector with the i-th joint point as its starting point and the j-th joint point as its end point; pix, piy and piz are the coordinates of the i-th joint point on the x-, y- and z-axes in three-dimensional space; pjx, pjy and pjz are the corresponding coordinates of the j-th joint point; i and j are the serial numbers of the joint points;
S3.2: extract the joint-point angle features from the driver's upper-body joint-point coordinate vectors, specifically:
θk = arccos( (V_a · V_b) / (|V_a| |V_b|) )

where: θk is the included angle formed by the i1-th, j1-th, i2-th and j2-th joint points; V_a is the joint-point coordinate vector with the i1-th joint point as starting point and the j1-th joint point as end point; V_b is the joint-point coordinate vector with the i2-th joint point as starting point and the j2-th joint point as end point; i and j are the serial numbers of the joint points.
5. The method for detecting the smoking behavior of a driver based on human body motion recognition technology according to claim 4, wherein the driver's upper-body limb angle feature set comprises a limb angle feature set for judging right-hand smoking, (θ1, θ2, θ3, θ4), and a limb angle feature set for judging left-hand smoking, (θ5, θ6, θ7, θ8);

where: θ1 is the angle formed by the right elbow, right shoulder and right wrist joint points; θ2 is the angle formed by the right elbow, right wrist and right-hand joint points; θ3 is the angle formed by the head, shoulder-center, right elbow and right-hand joint points; θ4 is the angle formed by the head, shoulder-center, right elbow and right shoulder joint points; θ5 is the angle formed by the left shoulder, left elbow and left wrist joint points; θ6 is the angle formed by the left elbow, left wrist and left-hand joint points; θ7 is the angle formed by the head, shoulder-center, left elbow and left-hand joint points; θ8 is the angle formed by the head, shoulder-center, left elbow and left shoulder joint points.
6. The method for detecting smoking behavior of a driver based on human body motion recognition technology of claim 4, wherein in the step S5, the trained support vector machine training model is obtained, specifically as follows:
S5.1: collect an image stream of the driver's driving state and divide it into positive and negative samples according to the skeleton images of the driver's actions;
S5.2: repeat steps S3 to S4 on the skeleton images of the positive and negative samples to obtain the driver upper-body limb angle feature sets corresponding to each;
S5.3: divide the driver upper-body limb angle feature sets corresponding to the positive and negative samples into a training set and a test set;
S5.4: construct a support vector machine training model based on a linear kernel function, train it with the training set and evaluate it with the test set; when the recognition rate of the model reaches at least 90%, the model is the trained support vector machine training model, otherwise training continues until the recognition rate reaches at least 90%.
7. The method as claimed in claim 6, wherein the positive samples consist of the driver skeleton images in the image stream that contain the smoking action, and the negative samples consist of the driver skeleton images in the image stream that do not contain the smoking action.
CN201910875018.9A 2019-09-17 2019-09-17 Method for detecting smoking behavior of driver based on human body action recognition technology Pending CN110688921A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910875018.9A CN110688921A (en) 2019-09-17 2019-09-17 Method for detecting smoking behavior of driver based on human body action recognition technology


Publications (1)

Publication Number Publication Date
CN110688921A 2020-01-14

Family

ID=69109485

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910875018.9A Pending CN110688921A (en) 2019-09-17 2019-09-17 Method for detecting smoking behavior of driver based on human body action recognition technology

Country Status (1)

Country Link
CN (1) CN110688921A (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105469073A (en) * 2015-12-16 2016-04-06 安徽创世科技有限公司 Kinect-based call making and answering monitoring method of driver
CN107180235A (en) * 2017-06-01 2017-09-19 陕西科技大学 Human action recognizer based on Kinect
CN109902562A (en) * 2019-01-16 2019-06-18 重庆邮电大学 A kind of driver's exception attitude monitoring method based on intensified learning

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553190A (en) * 2020-03-30 2020-08-18 浙江工业大学 Image-based driver attention detection method
CN111626101A (en) * 2020-04-13 2020-09-04 惠州市德赛西威汽车电子股份有限公司 Smoking monitoring method and system based on ADAS
CN111611868A (en) * 2020-04-24 2020-09-01 上海大学 System and method for recognizing head action semantics facing to dumb language system
CN111611912A (en) * 2020-05-19 2020-09-01 北京交通大学 Method for detecting pedestrian head lowering abnormal behavior based on human body joint points
CN111611912B (en) * 2020-05-19 2024-03-19 北京交通大学 Detection method for pedestrian head-falling abnormal behavior based on human body joint point
CN111611971B (en) * 2020-06-01 2023-06-30 城云科技(中国)有限公司 Behavior detection method and system based on convolutional neural network
CN111611971A (en) * 2020-06-01 2020-09-01 城云科技(中国)有限公司 Behavior detection method and system based on convolutional neural network
CN111832434A (en) * 2020-06-23 2020-10-27 广州市保伦电子有限公司 Campus smoking behavior recognition method under privacy protection and processing terminal
CN111832526A (en) * 2020-07-23 2020-10-27 浙江蓝卓工业互联网信息技术有限公司 Behavior detection method and device
CN111639632A (en) * 2020-07-31 2020-09-08 南京浦和数据有限公司 Subway driver action sequence identification method based on support vector machine
CN111898571A (en) * 2020-08-05 2020-11-06 北京华捷艾米科技有限公司 Action recognition system and method
CN111931653A (en) * 2020-08-11 2020-11-13 沈阳帝信人工智能产业研究院有限公司 Safety monitoring method and device, electronic equipment and readable storage medium
CN112668387A (en) * 2020-09-24 2021-04-16 上海荷福人工智能科技(集团)有限公司 Illegal smoking recognition method based on AlphaPose
CN112668387B (en) * 2020-09-24 2023-06-27 上海荷福人工智能科技(集团)有限公司 Illegal smoking identification method based on alpha Pose
WO2022142786A1 (en) * 2020-12-30 2022-07-07 中兴通讯股份有限公司 Driving behavior recognition method, and device and storage medium
CN113158914B (en) * 2021-04-25 2022-01-18 胡勇 Intelligent evaluation method for dance action posture, rhythm and expression
CN113158914A (en) * 2021-04-25 2021-07-23 胡勇 Intelligent evaluation method for dance action posture, rhythm and expression
CN113609963A (en) * 2021-08-03 2021-11-05 北京睿芯高通量科技有限公司 Real-time multi-human-body-angle smoking behavior detection method
CN115063740A (en) * 2022-06-10 2022-09-16 嘉洋智慧安全生产科技发展(北京)有限公司 Safety monitoring method, device, equipment and computer readable storage medium
CN116965781A (en) * 2023-04-28 2023-10-31 南京晓庄学院 Method and system for monitoring vital signs and driving behaviors of driver
CN116965781B (en) * 2023-04-28 2024-01-05 南京晓庄学院 Method and system for monitoring vital signs and driving behaviors of driver
CN117409484A (en) * 2023-12-14 2024-01-16 四川汉唐云分布式存储技术有限公司 Cloud-guard-based client offence detection method, device and storage medium

Similar Documents

Publication Publication Date Title
CN110688921A (en) Method for detecting smoking behavior of driver based on human body action recognition technology
CN108734125B (en) Smoking behavior identification method for open space
WO2018120964A1 (en) Posture correction method based on depth information and skeleton information
CN109597485B (en) Gesture interaction system based on double-fingered-area features and working method thereof
CN110711374A (en) Multi-modal dance action evaluation method
CN110889455B (en) Fault detection positioning and safety assessment method for chemical engineering garden inspection robot
CN105740779A (en) Method and device for human face in-vivo detection
WO2021147905A1 (en) Method and apparatus for identifying gaze behavior in three-dimensional space, and storage medium
CN111931804A (en) RGBD camera-based automatic human body motion scoring method
WO2022228252A1 (en) Human behavior detection method and apparatus, electronic device and storage medium
CN112101315B (en) Deep learning-based exercise judgment guidance method and system
CN113856186B (en) Pull-up action judging and counting method, system and device
CN116226691A (en) Intelligent finger ring data processing method for gesture sensing
CN106997505A (en) Analytical equipment and analysis method
CN112185514A (en) Rehabilitation training effect evaluation system based on action recognition
CN111539245A (en) CPR (CPR) technology training evaluation method based on virtual environment
JP5248236B2 (en) Image processing apparatus and image processing method
CN111931869A (en) Method and system for detecting user attention through man-machine natural interaction
CN113609963B (en) Real-time multi-human-body-angle smoking behavior detection method
CN111626135A (en) Three-dimensional gesture recognition system based on depth map
CN114894337A (en) Temperature measurement method and device for outdoor face recognition
CN103150022B (en) Gesture identification method and device
CN114913598A (en) Smoking behavior identification method based on computer vision
CN103198297B (en) Based on the kinematic similarity assessment method of correlativity geometric properties
CN116502923B (en) Simulation method and system of virtual simulation teaching practical training platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200114)