CN111666818A - Driver abnormal posture detection method - Google Patents
- Publication number: CN111666818A (application CN202010384258.1A)
- Authority: CN (China)
- Prior art keywords: driver, posture, abnormal, data, body joint
- Legal status: Granted
Classifications
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/23—Clustering techniques
- G06F18/24—Classification techniques
- G06F18/24765—Rule-based classification
- G06N3/045—Combinations of networks
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Abstract
A driver abnormal posture detection method belongs to the field of automotive advanced driver assistance systems. The method aims to prevent traffic accidents caused by abnormal postures during driving. While the driver's behavior posture is monitored, an abnormal posture judgment criterion first distinguishes whether the driver is in an abnormal driving posture, and the judgment result then determines whether a behavior classifier is started. Abnormal driving postures include making a phone call, drinking, smoking, using a mobile phone, and taking one or both hands off the steering wheel. Compared with globally monitoring driver behavior with a behavior classifier, the method saves limited computing resources while ensuring the same detection accuracy.
Description
Technical Field
The invention belongs to the technical field of automotive advanced driver assistance systems, and in particular relates to a method for detecting abnormal postures of a driver.
Background
With the rapid development of computer vision technology, many driver monitoring systems with excellent performance have emerged in the automotive field. These systems can monitor the driver's various abnormal driving behaviors throughout the journey and issue real-time warnings to prevent traffic accidents. Under real driving conditions, however, a driver maintains the correct driving posture most of the time and is in an abnormal posture only briefly. Running global classification detection continuously over the whole driving interval therefore occupies scarce computing resources and interferes with the normal operation of other on-board systems. Chinese patent application CN109063586A, "Fast R-CNN driver detection method based on candidate-region optimization", introduces a residual structure into the feature-extraction network to improve detection accuracy and real-time performance, uses a candidate-optimization sub-network to filter redundant invalid candidate regions, and finally performs classification and regression to complete driver localization and detection. That method improves detection efficiency while maintaining accuracy, but it is still a global detection method and continuously occupies limited computing resources. Chinese patent application CN109214370A, "Driver posture detection method based on centroid coordinates of arm skin-color regions", extracts the centroid coordinates of the driver's hands or arms as training samples and trains a machine-learning classifier to obtain a driver posture detection model.
Although such a model enables fast posture detection and saves computing resources, it cannot provide high detection accuracy. A driver posture detection system built on either single model as its kernel therefore faces a contradiction between detection accuracy and computing-resource occupation.
Disclosure of Invention
The invention aims to resolve the contradiction in existing driver monitoring systems between high accuracy and high computing-resource occupation, and provides a driver abnormal posture detection method that maintains high detection accuracy while saving computing resources. An abnormal driving posture judgment criterion is added to the original global monitoring approach as the basis for deciding whether to start a behavior classifier, which addresses the excessive computing-resource occupation of global monitoring; a driving behavior classifier trained with a deep neural network is combined with it to ensure high detection accuracy.
To achieve this purpose, the invention adopts the following technical scheme:
A driver abnormal posture detection method for preventing traffic accidents caused by abnormal postures during driving. Compared with globally monitoring driver behavior with a behavior classifier, the method saves limited computing resources while ensuring the same detection accuracy. Abnormal driving postures include making a phone call, drinking, smoking, using a mobile phone, and taking one or both hands off the steering wheel.
The method comprises the following specific steps:
First, color or grayscale images of the driver collected by an on-board camera are used as input, and a posture detection algorithm with high real-time performance, strong robustness, high detection accuracy, and low resource occupation serves as the human body joint point detector to extract the driver's upper-body joint points. The upper-body joint points mainly comprise: left hand measuring point 1, left arm elbow measuring point 2, left shoulder measuring point 3, right hand measuring point 4, right arm elbow measuring point 5, right shoulder measuring point 6, left eye measuring point 7, and right eye measuring point 8.
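As an illustrative sketch (not part of the claimed method), the eight upper-body joint points can be held in a simple index-to-coordinate mapping; the container and function names below are assumptions, and the detector producing the coordinates (e.g. a pose-estimation network) is outside this sketch:

```python
# Hypothetical container for the eight upper-body measuring points listed above.
# Indices follow the patent's numbering; coordinates are pixel positions.
JOINT_NAMES = {
    1: "left hand", 2: "left elbow", 3: "left shoulder",
    4: "right hand", 5: "right elbow", 6: "right shoulder",
    7: "left eye", 8: "right eye",
}

def make_pose(coords):
    """coords: dict {index: (x, y)} from a pose detector; checks all 8 points."""
    missing = set(JOINT_NAMES) - set(coords)
    if missing:
        raise ValueError(f"missing joint points: {sorted(missing)}")
    return {i: (float(x), float(y)) for i, (x, y) in coords.items()}

# Example frame with dummy coordinates.
pose = make_pose({i: (10.0 * i, 20.0 * i) for i in range(1, 9)})
```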
Over the driving interval, the human body joint point detector extracts a large amount of upper-body joint point data from the driver; this data is analyzed to determine the abnormal posture judgment criterion inside the detector, and the criterion is then used to distinguish whether the driver is in an abnormal driving posture.
The abnormal posture judgment criterion is determined with a Gaussian mixture model clustering algorithm after the human body joint point detector obtains the driver's joint point coordinates and the arm bending angles and the hand-to-eye joint point distances are calculated. The procedure is as follows:
1) Calculate the bending angles θ1, θ2 of the driver's two arms, i.e., the angle at each elbow between the elbow-to-hand and elbow-to-shoulder vectors:

θ1 = arccos[ ((X1−X2)(X3−X2) + (Y1−Y2)(Y3−Y2)) / ( √((X1−X2)²+(Y1−Y2)²) · √((X3−X2)²+(Y3−Y2)²) ) ]  (1)

θ2 = arccos[ ((X4−X5)(X6−X5) + (Y4−Y5)(Y6−Y5)) / ( √((X4−X5)²+(Y4−Y5)²) · √((X6−X5)²+(Y6−Y5)²) ) ]  (2)

where θ1, θ2 are the arm bending angles; P1(X1, Y1), P4(X4, Y4) are the hand joint point coordinates; P2(X2, Y2), P5(X5, Y5) are the elbow joint point coordinates; and P3(X3, Y3), P6(X6, Y6) are the shoulder joint point coordinates.
2) With P7(X7, Y7), P8(X8, Y8) the eye joint point coordinates, calculate the distances d1, d2 from the driver's hands to the eyes:

d1 = √((X1−X7)² + (Y1−Y7)²)  (3)

d2 = √((X4−X8)² + (Y4−Y8)²)  (4)

where d1, d2 are the hand-to-eye distances; P1(X1, Y1), P4(X4, Y4) are the hand joint point coordinates; and P7(X7, Y7), P8(X8, Y8) are the eye joint point coordinates.
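As an illustrative sketch of the two feature computations (the angle at the elbow between the elbow-to-hand and elbow-to-shoulder vectors, and the Euclidean hand-to-eye pixel distance); the function names are assumptions, not from the patent:

```python
import math

def arm_bend_angle(hand, elbow, shoulder):
    """Bending angle in degrees at the elbow: the angle between the
    elbow->hand and elbow->shoulder vectors."""
    ux, uy = hand[0] - elbow[0], hand[1] - elbow[1]
    vx, vy = shoulder[0] - elbow[0], shoulder[1] - elbow[1]
    dot = ux * vx + uy * vy
    norm = math.hypot(ux, uy) * math.hypot(vx, vy)
    # Clamp against floating-point drift before arccos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def hand_eye_distance(hand, eye):
    """Euclidean pixel distance between a hand joint and an eye joint."""
    return math.hypot(hand[0] - eye[0], hand[1] - eye[1])

# A fully extended arm (hand, elbow, shoulder collinear) gives 180 degrees.
theta = arm_bend_angle((0.0, 0.0), (1.0, 0.0), (2.0, 0.0))
d = hand_eye_distance((0.0, 0.0), (3.0, 4.0))
```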
3) Determine the parameter distribution range under each posture as follows:
a) The arm bending angle and the hand-to-eye distance under the various postures obey N different Gaussian distributions, so the resulting Gaussian mixture model is a mixture of the Gaussian distributions, with separate parameters, of the N different postures.
b) The observation data y_j, j = 1, 2, …, M, are generated as follows: first the kth Gaussian distribution submodel φ(y | θk) is selected with probability wk, and then the observation y_j is generated according to that submodel's probability distribution φ(y | θk); several of the M observations may come from the same submodel. The observations y_j, j = 1, 2, …, M, are known, but which submodel each observation y_j comes from is unknown; this hidden variable is denoted γjk:

γjk = 1 if observation y_j comes from the kth submodel, and γjk = 0 otherwise  (5)

where j = 1, 2, …, M and k = 1, 2, …, K. Combining the observations y_j with the unobserved data γjk, the complete data can be expressed as:

(y_j, γj1, γj2, …, γjK), j = 1, 2, …, M  (6)
c) To obtain the maximum likelihood estimate of the log-likelihood function L(θ) = log P(y | θ) of the incomplete data, one can equivalently maximize the expectation of the log-likelihood function log P(y, γ | θ) of the complete data. The likelihood function of the complete data is:

P(y, γ | θ) = ∏(k=1..K) wk^mk ∏(j=1..M) [φ(y_j | θk)]^γjk,  with mk = Σ(j=1..M) γjk  (7)

where mk denotes the number of the M observations generated by the kth submodel, μk is the corresponding Gaussian mean, σk² is the corresponding Gaussian variance, and wk is the corresponding Gaussian weight.

The log-likelihood function of the complete data is then:

log P(y, γ | θ) = Σ(k=1..K) { mk log wk + Σ(j=1..M) γjk [ log(1/√(2π)) − log σk − (y_j − μk)²/(2σk²) ] }  (8)
d) Given the observations y and the parameters θ(i) of the ith iteration, take the expectation of the log-likelihood function log P(y, γ | θ) of the complete data, and denote this function Q(θ, θ(i)). The probability used when taking the expectation is the conditional probability distribution P(γ | y, θ(i)) of the hidden random variable γ, so the Q function is:

Q(θ, θ(i)) = E[ log P(y, γ | θ) | y, θ(i) ] = Σ(k=1..K) { Σ(j=1..M) γ̂jk log wk + Σ(j=1..M) γ̂jk [ log(1/√(2π)) − log σk − (y_j − μk)²/(2σk²) ] }  (9)

where γ̂jk = E(γjk | y, θ(i)) is computed from the conditional distribution P(γ | y, θ(i)) as:

γ̂jk = wk φ(y_j | θk(i)) / Σ(l=1..K) wl φ(y_j | θl(i))  (10)

Here γ̂jk is the probability, under the current model parameters θ(i), that the jth observation comes from the kth submodel; it is called the responsivity of submodel k for observation y_j.
f) After obtaining the parameters θ(i) of the ith round, continue to the next iteration θ(i+1), chosen to maximize the function Q(θ, θ(i)):

θ(i+1) = argmax_θ Q(θ, θ(i))  (11)

g) Repeat steps d and f until convergence, then stop the iteration. The stopping condition is, for a small positive number ε:

‖θ(i+1) − θ(i)‖ < ε
h) According to the final clustering result, select the optimal parameter distribution ranges of the normal and abnormal driving postures as the basis for judging whether the driver is in an abnormal posture. When the driver is in an abnormal posture: the arm angle range is (0, 84) degrees, and the hand-to-eye joint point pixel distance range is (0, 457.4). When the driver is in a normal posture: the arm angle range is (84, 180) degrees, and the hand-to-eye joint point pixel distance range is (457.4, +∞).
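The two-component clustering in steps a) through g) can be sketched as a one-dimensional EM loop. This is an illustrative sketch only: the synthetic data, initialization by the data extremes, and the `fit_gmm_1d` name are assumptions; the patent's actual training data are the collected joint-point features:

```python
import numpy as np

def fit_gmm_1d(y, n_iter=200, tol=1e-6):
    """EM for a two-component 1-D Gaussian mixture.
    E-step: responsibilities gamma_hat_jk (eq. 10);
    M-step: re-estimate (w_k, mu_k, sigma_k^2) to maximize Q (eq. 11)."""
    w = np.array([0.5, 0.5])
    mu = np.array([y.min(), y.max()], dtype=float)  # spread-out initialization
    var = np.full(2, np.var(y))
    for _ in range(n_iter):
        # E-step: responsivity of each submodel for each observation.
        pdf = np.exp(-(y[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = w * pdf
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: weighted re-estimation of the parameters.
        nk = resp.sum(axis=0)
        mu_new = (resp * y[:, None]).sum(axis=0) / nk
        var_new = (resp * (y[:, None] - mu_new) ** 2).sum(axis=0) / nk
        w = nk / len(y)
        converged = np.abs(mu_new - mu).max() < tol
        mu, var = mu_new, var_new
        if converged:
            break
    return w, mu, var

# Synthetic arm-angle-like data: a cluster near 40 deg (abnormal-like)
# and a cluster near 150 deg (normal-like).
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(40, 8, 500), rng.normal(150, 10, 500)])
w, mu, var = fit_gmm_1d(y)
```

The boundary between the two fitted components is then the kind of threshold reported in step h).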
Second, whether to start the behavior classifier is determined according to the judgment result of the abnormal posture judgment criterion in the first step.
The behavior classifier is obtained by deep convolutional network training and can further identify the specific abnormal driving posture categories of smoking, drinking, and making a phone call. The convolutional neural network can be an architecture such as ResNet-50, VGG, Inception, or DenseNet. The human body joint point detector and the behavior classifier are never active at the same time: when the abnormal posture judgment criterion in the detector judges that the driver is not in an abnormal posture, the behavior classifier stays in standby and only the detector runs. The detector extracts the driver's upper-body joint points in real time, calculates the arm bending angles and the hand-to-eye pixel distances, and feeds them to the abnormal posture judgment criterion for real-time analysis. When the criterion identifies an abnormal posture, the detector switches to standby, the behavior classifier starts, and the specific abnormal posture category (making a phone call, drinking, or smoking) is identified. After the classifier observes the driver maintaining a normal driving posture for 60 seconds, it returns to standby and the joint point detector is restarted to continue monitoring.
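The hand-over between the joint point detector and the behavior classifier described above can be sketched as a small state machine; the class and method names are illustrative assumptions:

```python
class MonitorStateMachine:
    """Alternates between the lightweight joint-point detector and the
    heavier behavior classifier, so only one of them runs at a time."""
    NORMAL_HOLD_SECONDS = 60.0  # classifier hands back after 60 s of normal posture

    def __init__(self):
        self.active = "detector"       # "detector" or "classifier"
        self.normal_seconds = 0.0

    def on_detector_result(self, abnormal):
        """Called per frame while the detector runs its judgment criterion."""
        if self.active == "detector" and abnormal:
            self.active = "classifier"  # criterion fired: wake the classifier
            self.normal_seconds = 0.0

    def on_classifier_result(self, label, dt):
        """Called per frame while the classifier runs; dt is elapsed seconds."""
        if self.active != "classifier":
            return
        if label == "normal":
            self.normal_seconds += dt
            if self.normal_seconds >= self.NORMAL_HOLD_SECONDS:
                self.active = "detector"  # hand monitoring back to the detector
        else:
            self.normal_seconds = 0.0     # calling/drinking/smoking resets timer

sm = MonitorStateMachine()
sm.on_detector_result(abnormal=True)          # detector flags an abnormal posture
was_classifier = sm.active
for _ in range(60):                           # 60 s of normal posture observed
    sm.on_classifier_result("normal", dt=1.0)
```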
With this technical scheme, the invention has the following advantages over conventional global detection schemes:
(1) The method takes video frames shot in real time by an on-board camera as input, extracts the driver's upper-body joint point coordinates with a human body joint point detector, and then uses the driver abnormal posture judgment criterion to preliminarily identify whether the driver is in an abnormal driving posture. The criterion is simple and efficient, robust, and accurate, and it removes the high resource occupation of running a deep learning model globally for monitoring.
(2) A classifier trained with a deep neural network detects the confirmed abnormal postures, so the system can distinguish the specific abnormal posture categories while retaining the high recognition accuracy of a global monitoring system.
Drawings
Fig. 1 is a schematic view of the human body joint points involved in the present invention.
Fig. 2 is a schematic position diagram of the vehicle-mounted camera in the invention.
Fig. 3 is a flow chart of the method of the present invention.
In the figure: 1 left hand measuring point, 2 left arm elbow measuring points, 3 left shoulder measuring points, 4 right hand measuring points, 5 right arm elbow measuring points, 6 right shoulder measuring points, 7 left eye measuring points and 8 right eye measuring points.
Detailed Description
The present invention will be further described with reference to the following detailed description and the accompanying drawings.
The automotive advanced driver assistance system is one of a vehicle's important on-board systems and plays a critical role in ensuring occupant safety and preventing traffic accidents. Driver driving posture monitoring is an emerging function of such systems and plays an important part in preventing traffic accidents. With the rapid development of computer vision technology in recent years, the recognition accuracy of driver posture monitoring systems has improved markedly, but high accuracy is reached only at the cost of high computing-resource occupation. Current driver monitoring solutions are all global monitoring: a behavior classifier continuously monitors the driving posture throughout the journey. However, most drivers are not in abnormal postures for long during a journey, so global monitoring wastes computing resources and degrades the performance of other on-board systems.
Therefore, to address the high computing-resource occupation of existing global monitoring, the invention balances the system's overall performance and accuracy requirements: the driving posture system designed with this method occupies few computing resources while the driver is not in an abnormal posture, can detect the specific abnormal posture category when one occurs, and retains the high detection accuracy of the original global monitoring method.
The specific implementation steps are as follows:
1) A large number of video samples are collected on a real vehicle, covering both normal and abnormal driving postures. In this example the abnormal driving postures are: making a phone call, smoking, and drinking. The human body joint point detector yields the driver's hand joint points P1(X1, Y1), P4(X4, Y4); elbow joint points P2(X2, Y2), P5(X5, Y5); shoulder joint points P3(X3, Y3), P6(X6, Y6); and eye joint points P7(X7, Y7), P8(X8, Y8).
2) The arm bending angles θ1, θ2 are calculated from the hand, elbow, and shoulder joint points, and the hand-to-eye distances d1, d2 are calculated from the hand and eye joint points. A total of 100,000 groups of data were obtained.
3) The observations from step 2 are recorded as y1 = (a1, a2, …, a200000) and y2 = (b1, b2, …, b200000), and each is modeled with a probability distribution of the form:

P(y | θ) = Σ(k=1..K) wk φ(y | θk)

where wk ≥ 0 and Σk wk = 1 are the weights of the Gaussian distributions, and φ(y | θk) is the probability density of the kth Gaussian submodel. K is set to 2, one class representing the normal driving posture and the other the abnormal driving posture. The parameters are θk = (μk, σk²).
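The mixture density above can be evaluated directly; this is an illustrative sketch, and the function names are assumptions:

```python
import math

def gaussian_pdf(y, mu, var):
    """phi(y | mu, sigma^2): density of a 1-D Gaussian."""
    return math.exp(-(y - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def mixture_pdf(y, weights, mus, variances):
    """P(y | theta) = sum_k w_k * phi(y | mu_k, sigma_k^2); here K = 2."""
    return sum(w * gaussian_pdf(y, m, v)
               for w, m, v in zip(weights, mus, variances))

# Evaluate the mixture at y = 0 with two well-separated unit-variance components.
p = mixture_pdf(0.0, [0.5, 0.5], [0.0, 10.0], [1.0, 1.0])
```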
4) The parameters are initialized randomly and the iteration starts.
5) After the ith iteration, with current model parameters θ(i), the responsivity of each Gaussian submodel for each observation is computed:

γ̂jk = wk φ(y_j | θk(i)) / Σ(l=1..K) wl φ(y_j | θl(i))
6) The parameters of the new iteration are computed:

μk(i+1) = Σ(j=1..M) γ̂jk y_j / Σ(j=1..M) γ̂jk,  σk²(i+1) = Σ(j=1..M) γ̂jk (y_j − μk(i+1))² / Σ(j=1..M) γ̂jk,  wk(i+1) = (1/M) Σ(j=1..M) γ̂jk
7) Steps 5 and 6 are repeated until convergence, then the iteration stops. The stopping condition is, for a small positive number ε: ‖θ(i+1) − θ(i)‖ < ε.
8) The abnormal posture judgment criterion is determined from the iterated result:
In an abnormal posture: the arm angle range is (0, 84) degrees, and the hand-to-eye joint point pixel distance range is (0, 457.4).
In a normal posture: the arm angle range is (84, 180) degrees, and the hand-to-eye joint point pixel distance range is (457.4, +∞).
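The resulting judgment criterion can be sketched as a rule check over the clustered thresholds. The thresholds come from the clustering result above; the function name, and the choice of combining the two conditions with a logical OR, are assumptions for illustration:

```python
ANGLE_THRESHOLD_DEG = 84.0      # arm bending-angle boundary from clustering
DISTANCE_THRESHOLD_PX = 457.4   # hand-to-eye pixel-distance boundary

def is_abnormal_posture(theta1, theta2, d1, d2):
    """True if either arm's features fall inside the abnormal-posture range:
    bending angle in (0, 84) degrees or hand-to-eye distance in (0, 457.4) px.
    How the two feature types are combined is an assumption (OR here)."""
    angle_abnormal = theta1 < ANGLE_THRESHOLD_DEG or theta2 < ANGLE_THRESHOLD_DEG
    dist_abnormal = d1 < DISTANCE_THRESHOLD_PX or d2 < DISTANCE_THRESHOLD_PX
    return angle_abnormal or dist_abnormal
```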
9) Pictures of the driver's abnormal postures are cropped from the video samples to form a data set. The data set used to train the behavior classifier covers four behaviors, normal driving, smoking, drinking, and making a phone call, with 16,000 images in total. The convolutional neural network selected in this example is ResNet-50; other networks such as VGG, Inception, or DenseNet may also be used.
10) The human body joint point detector and the behavior classifier are never active at the same time: when the abnormal posture judgment criterion in the detector judges that the driver is not in an abnormal posture, the behavior classifier stays in standby and only the detector runs. The detector extracts the driver's upper-body joint points in real time, calculates the arm bending angles and the hand-to-eye pixel distances, and feeds them to the abnormal posture judgment criterion for real-time analysis. When the criterion identifies an abnormal posture, the detector switches to standby, the behavior classifier starts, and the specific abnormal posture category (making a phone call, drinking, or smoking) is identified. After the classifier observes the driver maintaining a normal driving posture for 60 seconds, it returns to standby and the joint point detector is restarted to continue monitoring.
The method takes driver images shot in real time by an on-board camera as input data, extracts the driver's upper-body joint point coordinates with the Lightweight OpenPose posture detection algorithm, and establishes an abnormal posture judgment criterion based on the coordinate data; to identify the specific abnormal posture category more accurately, a behavior classifier is added after the abnormal posture discrimination algorithm. Compared with existing driver monitoring systems, the invention greatly reduces the monitoring system's occupation of the on-board processor's computing resources while preserving detection accuracy, leaving more resources for other on-board systems.
The above embodiments only express embodiments of the present invention and should not be understood as limiting the scope of the patent. It should be noted that those skilled in the art can make many variations and modifications without departing from the concept of the present invention, and all of these fall within the protection scope of the present invention.
Claims (1)
1. A driver abnormal posture detection method, characterized in that, when the detection method monitors the driver's behavior posture, an abnormal posture judgment criterion first distinguishes whether the driver is in an abnormal driving posture, and whether a behavior classifier is started is then determined according to the judgment result, the method comprising the following steps:
first, taking color or grayscale images of the driver collected by an on-board camera as input, and extracting the driver's upper-body joint points with a posture detection algorithm serving as a human body joint point detector; the upper-body joint points mainly comprise: a left hand measuring point, a left arm elbow measuring point, a left shoulder measuring point, a right hand measuring point, a right arm elbow measuring point, a right shoulder measuring point, a left eye measuring point, and a right eye measuring point;
over the driving interval, extracting the driver's upper-body joint point data with the human body joint point detector, analyzing it to determine the abnormal posture judgment criterion inside the detector, and distinguishing whether the driver is in an abnormal driving posture according to the criterion;
the abnormal posture judgment criterion is determined by a Gaussian mixture model clustering algorithm after the human body joint point detector obtains the driver's joint point coordinates and the arm bending angles and the hand-to-eye joint point distances are calculated, as follows:
1) calculating the bending angles θ1, θ2 of the driver's two arms, i.e., the angle at each elbow between the elbow-to-hand and elbow-to-shoulder vectors:

θ1 = arccos[ ((X1−X2)(X3−X2) + (Y1−Y2)(Y3−Y2)) / ( √((X1−X2)²+(Y1−Y2)²) · √((X3−X2)²+(Y3−Y2)²) ) ]  (1)

θ2 = arccos[ ((X4−X5)(X6−X5) + (Y4−Y5)(Y6−Y5)) / ( √((X4−X5)²+(Y4−Y5)²) · √((X6−X5)²+(Y6−Y5)²) ) ]  (2)

where θ1, θ2 are the arm bending angles; P1(X1, Y1), P4(X4, Y4) are the hand joint point coordinates; P2(X2, Y2), P5(X5, Y5) are the elbow joint point coordinates; P3(X3, Y3), P6(X6, Y6) are the shoulder joint point coordinates;
2) with P7(X7, Y7), P8(X8, Y8) the eye joint point coordinates, calculating the distances d1, d2 from the driver's hands to the eyes:

d1 = √((X1−X7)² + (Y1−Y7)²)  (3)

d2 = √((X4−X8)² + (Y4−Y8)²)  (4)

where d1, d2 are the hand-to-eye distances; P1(X1, Y1), P4(X4, Y4) are the hand joint point coordinates; P7(X7, Y7), P8(X8, Y8) are the eye joint point coordinates;
3) determining the parameter distribution range under each posture as follows:
a) the arm bending angle and the hand-to-eye distance under the various postures obey N different Gaussian distributions, so the resulting Gaussian mixture model is a mixture of the Gaussian distributions, with separate parameters, of the N different postures;
b) the observation data y_j, j = 1, 2, …, M, are generated as follows: first the kth Gaussian distribution submodel φ(y | θk) is selected with probability wk, and then the observation y_j is generated according to that submodel's probability distribution φ(y | θk); several of the M observations may come from the same submodel; the observations y_j, j = 1, 2, …, M, are known, but which submodel each observation y_j comes from is unknown; this hidden variable is denoted γjk:

γjk = 1 if observation y_j comes from the kth submodel, and γjk = 0 otherwise  (5)

where j = 1, 2, …, M and k = 1, 2, …, K; combining the observations y_j with the unobserved data γjk, the complete data is expressed as:

(y_j, γj1, γj2, …, γjK), j = 1, 2, …, M  (6)
c) to obtain the maximum likelihood estimate of the log-likelihood function L(θ) = log P(y | θ) of the incomplete data, one can equivalently maximize the expectation of the log-likelihood function log P(y, γ | θ) of the complete data; the likelihood function of the complete data is:

P(y, γ | θ) = ∏(k=1..K) wk^mk ∏(j=1..M) [φ(y_j | θk)]^γjk,  with mk = Σ(j=1..M) γjk  (7)

where mk denotes the number of the M observations generated by the kth submodel, μk is the corresponding Gaussian mean, σk² is the corresponding Gaussian variance, and wk is the corresponding Gaussian weight;

the log-likelihood function of the complete data is then:

log P(y, γ | θ) = Σ(k=1..K) { mk log wk + Σ(j=1..M) γjk [ log(1/√(2π)) − log σk − (y_j − μk)²/(2σk²) ] }  (8)
d) given the observations y and the parameters θ(i) of the ith iteration, taking the expectation of the log-likelihood function log P(y, γ | θ) of the complete data and denoting this function Q(θ, θ(i)); the probability used when taking the expectation is the conditional probability distribution P(γ | y, θ(i)) of the hidden random variable γ, so the Q function is:

Q(θ, θ(i)) = E[ log P(y, γ | θ) | y, θ(i) ] = Σ(k=1..K) { Σ(j=1..M) γ̂jk log wk + Σ(j=1..M) γ̂jk [ log(1/√(2π)) − log σk − (y_j − μk)²/(2σk²) ] }  (9)

where γ̂jk = E(γjk | y, θ(i)) is computed from the conditional distribution P(γ | y, θ(i)) as:

γ̂jk = wk φ(y_j | θk(i)) / Σ(l=1..K) wl φ(y_j | θl(i))  (10)

here γ̂jk is the probability, under the current model parameters θ(i), that the jth observation comes from the kth submodel, called the responsivity of submodel k for observation y_j;
f) After obtaining the parameters θ^(i) of the i-th round, the next iterate θ^(i+1) is chosen to maximize the function Q(θ, θ^(i)), i.e. θ^(i+1) = arg max_θ Q(θ, θ^(i)).
g) Steps d and f are repeated until convergence, at which point the iteration stops; the stopping condition is ‖θ^(i+1) − θ^(i)‖ < ε for a given small positive number ε.
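Steps b) through g) describe the standard EM algorithm for a Gaussian mixture model. As a minimal sketch, the iteration can be written for the one-dimensional case as below; the function names, the initialization scheme, and the convergence proxy (shift of the means) are illustrative assumptions, not taken from the patent:

```python
import math
import random

def gaussian_pdf(y, mu, var):
    # Density phi(y | theta_k) of a 1-D Gaussian with mean mu and variance var.
    return math.exp(-(y - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_gmm(y, K=2, n_iter=100, eps=1e-6):
    """Fit a K-component 1-D Gaussian mixture to data y via EM."""
    M = len(y)
    lo, hi = min(y), max(y)
    w = [1.0 / K] * K                                  # mixture weights omega_k
    mu = [lo + (hi - lo) * (k + 0.5) / K for k in range(K)]   # spread initial means
    var = [max(1e-6, (hi - lo) ** 2 / (4 * K * K))] * K
    for _ in range(n_iter):
        # E-step: responsibilities gamma_hat[j][k] = E(gamma_jk | y, theta^(i))
        gamma = []
        for yj in y:
            num = [w[k] * gaussian_pdf(yj, mu[k], var[k]) for k in range(K)]
            s = sum(num)
            gamma.append([n / s for n in num])
        # M-step: maximize Q(theta, theta^(i)) in closed form
        new_mu, new_var, new_w = [], [], []
        for k in range(K):
            mk = sum(g[k] for g in gamma)              # effective count m_k
            mu_k = sum(g[k] * yj for g, yj in zip(gamma, y)) / mk
            var_k = sum(g[k] * (yj - mu_k) ** 2 for g, yj in zip(gamma, y)) / mk
            new_mu.append(mu_k)
            new_var.append(max(var_k, 1e-9))
            new_w.append(mk / M)
        # Stopping rule: use the total shift of the means as a simple
        # proxy for ||theta^(i+1) - theta^(i)|| < eps.
        shift = sum(abs(a - b) for a, b in zip(new_mu, mu))
        mu, var, w = new_mu, new_var, new_w
        if shift < eps:
            break
    return w, mu, var
```

On well-separated data the recovered means approach the centers of the two clusters, which is how the patent obtains the 84° angle and 457.4-pixel distance boundaries from its training samples.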
h) According to the final clustering result, the optimal parameter distribution ranges of the normal and abnormal driving postures are selected as the criterion for judging whether the driver is in an abnormal posture. When the driver is in an abnormal posture, the bending angle of the driver's two arms lies in the range (0°, 84°) and the pixel distance from the hand joint points to the eye joint points lies in (0, 457.4); when the driver is in a normal posture, the arm angle lies in (84°, 180°) and the hand-to-eye pixel distance lies in (457.4, +∞).
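The decision rule in step h) reduces to two interval checks. A minimal sketch follows; the function name is hypothetical, and since the patent does not state whether one feature alone suffices, this sketch assumes both features must fall in the abnormal range:

```python
def is_abnormal_posture(arm_angle_deg, hand_eye_dist_px):
    """Apply the clustered thresholds of step h): abnormal when the arm
    bending angle is in (0, 84) degrees AND the hand-to-eye pixel
    distance is in (0, 457.4); normal otherwise.

    The AND combination is an assumption for illustration.
    """
    abnormal_angle = 0 < arm_angle_deg < 84
    abnormal_dist = 0 < hand_eye_dist_px < 457.4
    return abnormal_angle and abnormal_dist
```

For example, a driver with an arm angle of 45° and a hand-to-eye distance of 300 pixels (hand raised toward the face) would be flagged as abnormal, while 120° and 600 pixels would not.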
Second, whether to start the behavior classifier is determined according to the result of the abnormal posture judgment criterion in the first step.
The behavior classifier is obtained by training a deep convolutional network, and the category of the abnormal driving posture is then identified by the behavior classifier.
The human body joint point detector and the behavior classifier are never active at the same time. When the abnormal posture judgment criterion in the human body joint point detector determines that the driver is not in an abnormal posture, the behavior classifier remains in standby and only the joint point detector works: it extracts the driver's upper-body joint points in real time, calculates the arm bending angle and the hand-to-eye pixel distance, and supplies them to the abnormal posture judgment criterion for real-time analysis. When the criterion identifies that the driver is in an abnormal posture, the joint point detector switches to standby, the behavior classifier starts, and the specific type of the driver's abnormal posture is identified. After the behavior classifier observes that the driver has maintained a normal driving posture for 60 seconds, it returns to standby and the joint point detector is started again to continue monitoring the driver.
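The mutually exclusive switching described above is a small state machine: exactly one module is active, and the classifier hands control back after 60 seconds of sustained normal posture. A sketch under assumed names follows; the detector and classifier are passed in as stand-in callables, since the real joint point detector and deep-CNN classifier are outside this snippet:

```python
class MonitoringPipeline:
    """Alternate between a joint-point detector and a behavior classifier,
    with only one of the two modules active at any moment."""

    NORMAL_HOLD_SECONDS = 60  # sustained-normal time before handing back

    def __init__(self, is_abnormal, classify_behavior):
        self.is_abnormal = is_abnormal              # criterion of step h)
        self.classify_behavior = classify_behavior  # deep-CNN classifier stand-in
        self.active = "detector"                    # detector runs first
        self.normal_since = None                    # start of the normal streak

    def step(self, frame_features, now):
        if self.active == "detector":
            if self.is_abnormal(frame_features):
                # Detector goes to standby; classifier identifies the posture type.
                self.active = "classifier"
                self.normal_since = None
                return self.classify_behavior(frame_features)
            return "normal"
        # Classifier is active: track how long the posture has stayed normal.
        label = self.classify_behavior(frame_features)
        if label == "normal":
            if self.normal_since is None:
                self.normal_since = now
            elif now - self.normal_since >= self.NORMAL_HOLD_SECONDS:
                # 60 s of normal posture: classifier back to standby.
                self.active = "detector"
                self.normal_since = None
        else:
            self.normal_since = None
        return label
```

A usage example: constructing the pipeline with `lambda f: f == "bad"` as the criterion and a classifier that maps `"bad"` frames to a posture label will keep the detector active until the first abnormal frame, then stay in classifier mode until the 60-second normal streak elapses.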
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010384258.1A CN111666818B (en) | 2020-05-09 | 2020-05-09 | Driver abnormal posture detection method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111666818A true CN111666818A (en) | 2020-09-15 |
CN111666818B CN111666818B (en) | 2023-06-16 |
Family
ID=72383245
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010384258.1A Active CN111666818B (en) | 2020-05-09 | 2020-05-09 | Driver abnormal posture detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111666818B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009145951A (en) * | 2007-12-11 | 2009-07-02 | Toyota Central R&D Labs Inc | Driver status estimation device and program |
CN102289660A (en) * | 2011-07-26 | 2011-12-21 | 华南理工大学 | Method for detecting illegal driving behavior based on hand gesture tracking |
CN109902562A (en) * | 2019-01-16 | 2019-06-18 | 重庆邮电大学 | A kind of driver's exception attitude monitoring method based on intensified learning |
CN110751051A (en) * | 2019-09-23 | 2020-02-04 | 江苏大学 | Abnormal driving behavior detection method based on machine vision |
CN110949398A (en) * | 2019-11-28 | 2020-04-03 | 同济大学 | Method for detecting abnormal driving behavior of first-vehicle drivers in vehicle formation driving |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112287795A (en) * | 2020-10-22 | 2021-01-29 | 北京百度网讯科技有限公司 | Abnormal driving posture detection method, device, equipment, vehicle and medium |
CN112287795B (en) * | 2020-10-22 | 2023-09-01 | 北京百度网讯科技有限公司 | Abnormal driving gesture detection method, device, equipment, vehicle and medium |
CN112381066A (en) * | 2020-12-10 | 2021-02-19 | 杭州西奥电梯有限公司 | Abnormal behavior identification method for elevator riding, monitoring system, computer equipment and storage medium |
WO2022142786A1 (en) * | 2020-12-30 | 2022-07-07 | 中兴通讯股份有限公司 | Driving behavior recognition method, and device and storage medium |
CN113673319A (en) * | 2021-07-12 | 2021-11-19 | 浙江大华技术股份有限公司 | Abnormal posture detection method, abnormal posture detection device, electronic device and storage medium |
CN113673319B (en) * | 2021-07-12 | 2024-05-03 | 浙江大华技术股份有限公司 | Abnormal gesture detection method, device, electronic device and storage medium |
CN114898342A (en) * | 2022-07-15 | 2022-08-12 | 深圳市城市交通规划设计研究中心股份有限公司 | Method for detecting call receiving and making of non-motor vehicle driver in driving |
CN114898342B (en) * | 2022-07-15 | 2022-11-25 | 深圳市城市交通规划设计研究中心股份有限公司 | Method for detecting call receiving and making of non-motor vehicle driver in driving |
CN116965781A (en) * | 2023-04-28 | 2023-10-31 | 南京晓庄学院 | Method and system for monitoring vital signs and driving behaviors of driver |
CN116965781B (en) * | 2023-04-28 | 2024-01-05 | 南京晓庄学院 | Method and system for monitoring vital signs and driving behaviors of driver |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111666818B (en) | Driver abnormal posture detection method | |
CN109902562B (en) | Driver abnormal posture monitoring method based on reinforcement learning | |
US11887064B2 (en) | Deep learning-based system and method for automatically determining degree of damage to each area of vehicle | |
CN108537197B (en) | Lane line detection early warning device and method based on deep learning | |
CN107038422B (en) | Fatigue state identification method based on space geometric constraint deep learning | |
CN111192237B (en) | Deep learning-based glue spreading detection system and method | |
CN111611905B (en) | Visible light and infrared fused target identification method | |
US20210303919A1 (en) | Image processing method and apparatus for target recognition | |
CN109875568A (en) | A kind of head pose detection method for fatigue driving detection | |
CN108446645B (en) | Vehicle-mounted face recognition method based on deep learning | |
CN110728241A (en) | Driver fatigue detection method based on deep learning multi-feature fusion | |
CN108596087B (en) | Driving fatigue degree detection regression model based on double-network result | |
CN116664558B (en) | Method, system and computer equipment for detecting surface defects of steel | |
CN105868690A (en) | Method and apparatus for identifying mobile phone use behavior of driver | |
CN110103816B (en) | Driving state detection method | |
CN111353451A (en) | Battery car detection method and device, computer equipment and storage medium | |
WO2020181426A1 (en) | Lane line detection method and device, mobile platform, and storage medium | |
CN110895802A (en) | Image processing method and device | |
CN101320477B (en) | Human body tracing method and equipment thereof | |
CN112052829B (en) | Pilot behavior monitoring method based on deep learning | |
CN112069898A (en) | Method and device for recognizing human face group attribute based on transfer learning | |
CN109117719B (en) | Driving posture recognition method based on local deformable component model fusion characteristics | |
CN115841735A (en) | Safe driving auxiliary system based on dynamic coupling of people, roads and environment | |
CN113989495A (en) | Vision-based pedestrian calling behavior identification method | |
CN113361452A (en) | Driver fatigue driving real-time detection method and system based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||