CN111666818B - Driver abnormal posture detection method - Google Patents


Info

Publication number
CN111666818B
CN111666818B (granted publication; application CN202010384258.1A; published as CN111666818A)
Authority
CN
China
Prior art keywords
driver
abnormal
gesture
joint point
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010384258.1A
Other languages
Chinese (zh)
Other versions
CN111666818A (en)
Inventor
杨姝 (Yang Shu)
亓昌 (Qi Chang)
陈辉 (Chen Hui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology
Priority to CN202010384258.1A
Publication of CN111666818A
Application granted
Publication of CN111666818B
Legal status: Active


Classifications

    • G06V 20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V 20/597: Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G06F 18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/23: Clustering techniques
    • G06F 18/24: Classification techniques
    • G06F 18/24765: Rule-based classification
    • G06N 3/045: Combinations of networks
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Image Analysis (AREA)

Abstract

A driver abnormal posture detection method belongs to the field of automotive advanced driver-assistance systems. The detection method is used to prevent traffic accidents caused by abnormal postures of the driver while driving: when the driver's behavior posture is monitored, an abnormal posture judgment criterion first distinguishes whether the driver is in an abnormal driving posture, and whether the behavior classifier is enabled is then determined according to the judgment result. Abnormal driving postures include making a phone call, drinking, smoking, playing with a mobile phone, and taking one or both hands off the steering wheel. Compared with monitoring driver behavior with a globally running behavior classifier, the method saves limited computing resources while guaranteeing the same detection accuracy.

Description

Driver abnormal posture detection method
Technical Field
The invention belongs to the technical field of automotive advanced driver-assistance systems, and specifically relates to a method for detecting abnormal postures of drivers.
Background
With the rapid development of computer vision technology, many high-performance driver monitoring systems have emerged in the automotive field. A driver monitoring system can monitor various abnormal driving behaviors throughout the journey and prevent traffic accidents by issuing real-time warnings. Under real driving conditions, however, the driver maintains a correct driving posture most of the time and is in an abnormal posture only briefly. Running global classification detection over the entire driving interval therefore occupies a large share of the limited computing resources and affects the normal operation of other on-board systems. Chinese patent application CN109063586A, "A candidate-optimization-based Faster R-CNN driver detection method" (inventors Lu Xiaobo, Liu Mingqi, et al.), introduces a residual structure into the feature extraction network to improve detection accuracy and real-time performance, uses a candidate-optimization sub-network to filter redundant invalid candidate regions, and finally performs classification regression to complete localization and detection of the driver. That method improves detection efficiency while maintaining detection accuracy, but it is still a global detection method and continuously occupies limited computing resources. Chinese patent application CN109214370A, "A driver posture detection method based on the centroid coordinates of arm skin-color regions" (inventors He Jie, Wu Guanhe, et al.), extracts the centroid coordinates of the driver's hands or arms as training samples and trains a classifier with a machine learning algorithm to obtain a posture detection model. Because it detects the driver's posture quickly, it saves computing resources, but it cannot provide high detection accuracy. Driver posture detection systems built on either single model as the kernel therefore face a contradiction between detection accuracy and computing-resource occupation.
Disclosure of Invention
The invention aims to solve the contradiction between high accuracy and high computing-resource occupation in existing driver monitoring systems, and discloses a driver abnormal posture detection method that maintains high detection accuracy while saving computing resources. An abnormal driving posture judgment criterion is added to the original global monitoring method and used as the basis for deciding whether to enable the behavior classifier, which resolves the excessive computing-resource occupation of global monitoring; a driving behavior classifier trained with a deep neural network is combined with it to guarantee high detection accuracy.
In order to achieve the above purpose, the invention adopts the following technical scheme:
a driver abnormal posture detection method is used for preventing traffic accidents caused by abnormal postures in the driving process of a driver. Compared with a method for monitoring the driver behavior by using a behavior classifier globally, the method can save limited computing resources on the premise of ensuring the same detection precision. The abnormal driving gesture comprises the actions of making a call, drinking water, smoking, playing a mobile phone, and leaving the steering wheel by other hands or both hands.
The method comprises the following specific steps:
In the first step, color or grayscale image data of the driver collected by the vehicle-mounted camera is taken as input, and a pose detection algorithm with high real-time performance, strong robustness, high detection accuracy and low resource occupation is adopted as the human joint point detector to extract the driver's upper-body joint points. The upper-body joint points mainly comprise: left-hand measuring point 1, left-elbow measuring point 2, left-shoulder measuring point 3, right-hand measuring point 4, right-elbow measuring point 5, right-shoulder measuring point 6, left-eye measuring point 7, right-eye measuring point 8.
A large amount of driver upper-body joint point data is extracted with the human joint point detector over the driving interval and analyzed to determine the abnormal posture judgment criterion used inside the detector; the criterion then distinguishes whether the driver is in an abnormal driving posture.
The abnormal posture judgment criterion is determined with a Gaussian mixture model clustering algorithm after the coordinates of the driver's body joints have been obtained from the human joint point detector and the driver's arm bending angles and hand-to-eye joint distances have been calculated. The steps are as follows (a runnable sketch is given after step h)):
1) Calculate the bending angles $\theta_1,\theta_2$ of the driver's two arms:

$$\theta_1=\arccos\frac{(X_1-X_2)(X_3-X_2)+(Y_1-Y_2)(Y_3-Y_2)}{\sqrt{(X_1-X_2)^2+(Y_1-Y_2)^2}\sqrt{(X_3-X_2)^2+(Y_3-Y_2)^2}}\qquad(1)$$

$$\theta_2=\arccos\frac{(X_4-X_5)(X_6-X_5)+(Y_4-Y_5)(Y_6-Y_5)}{\sqrt{(X_4-X_5)^2+(Y_4-Y_5)^2}\sqrt{(X_6-X_5)^2+(Y_6-Y_5)^2}}\qquad(2)$$

where $\theta_1,\theta_2$ are the bending angles of the two arms; $P_1(X_1,Y_1)$, $P_4(X_4,Y_4)$ are the hand joint coordinates; $P_2(X_2,Y_2)$, $P_5(X_5,Y_5)$ are the elbow joint coordinates; and $P_3(X_3,Y_3)$, $P_6(X_6,Y_6)$ are the shoulder joint coordinates;
2) $P_7(X_7,Y_7)$, $P_8(X_8,Y_8)$ are the eye joint coordinates. Calculate the distances $d_1,d_2$ from the driver's hands to the eyes:

$$d_1=\sqrt{(X_1-X_7)^2+(Y_1-Y_7)^2}\qquad(3)$$

$$d_2=\sqrt{(X_4-X_8)^2+(Y_4-Y_8)^2}\qquad(4)$$

where $d_1,d_2$ are the hand-to-eye distances; $P_1(X_1,Y_1)$, $P_4(X_4,Y_4)$ are the hand joint coordinates; and $P_7(X_7,Y_7)$, $P_8(X_8,Y_8)$ are the eye joint coordinates.
3) Determine the parameter distribution range under each posture, as follows:
a) Assume that the arm bending angles and hand-to-eye distances under the various postures obey N different Gaussian distributions; the resulting Gaussian mixture model is the mixture of the Gaussian distributions of the design parameters under the N postures.
b) The observation data $y_j$, $j=1,2,\ldots,M$ are generated as follows: first the kth Gaussian sub-model $\phi(y\mid\theta_k)$ is selected with probability $w_k$, then an observation $y_j$ is generated from the probability distribution $\phi(y\mid\theta_k)$; several of the M observations may come from the same sub-model. The observations $y_j$, $j=1,2,\ldots,M$ are known, but the hidden variable indicating which sub-model each observation $y_j$ came from is unknown; it is denoted $\gamma_{jk}$:

$$\gamma_{jk}=\begin{cases}1,&\text{the }j\text{th observation comes from the }k\text{th sub-model}\\0,&\text{otherwise}\end{cases}\qquad(5)$$

where $j=1,2,\ldots,M$; $k=1,2,\ldots,K$. With the observed data $y_j$ and the unobserved data $\gamma_{jk}$, the complete data can be expressed as:

$$(y_j,\gamma_{j1},\gamma_{j2},\ldots,\gamma_{jK}),\quad j=1,2,\ldots,M\qquad(6)$$
c) Obtaining the maximum likelihood estimate of the log-likelihood function $L(\theta)=\log P(y\mid\theta)$ of the incomplete data is equivalent to maximizing the expectation of the log-likelihood function $\log P(y,\gamma\mid\theta)$ of the complete data. The likelihood function of the complete data is:

$$P(y,\gamma\mid\theta)=\prod_{k=1}^{K}w_k^{m_k}\prod_{j=1}^{M}\left[\phi(y_j\mid\theta_k)\right]^{\gamma_{jk}}\qquad(7)$$

where $m_k=\sum_{j=1}^{M}\gamma_{jk}$ is the number of the M observations generated by the kth sub-model, $\mu_k$ is the mean of the corresponding Gaussian distribution, $\sigma_k^2$ its variance, and $w_k$ its weight, with

$$\phi(y_j\mid\theta_k)=\frac{1}{\sqrt{2\pi}\,\sigma_k}\exp\left(-\frac{(y_j-\mu_k)^2}{2\sigma_k^2}\right)\qquad(8)$$

The log-likelihood function of the complete data is:

$$\log P(y,\gamma\mid\theta)=\sum_{k=1}^{K}\left\{m_k\log w_k+\sum_{j=1}^{M}\gamma_{jk}\left[\log\frac{1}{\sqrt{2\pi}}-\log\sigma_k-\frac{(y_j-\mu_k)^2}{2\sigma_k^2}\right]\right\}\qquad(9)$$
d) Given the observations y and the parameters $\theta^{(i)}$ of the ith iteration, take the expectation of the complete-data log-likelihood $\log P(y,\gamma\mid\theta)$ with respect to the conditional distribution $P(\gamma\mid y,\theta^{(i)})$ of the hidden random variable $\gamma$, and denote this function $Q(\theta,\theta^{(i)})$:

$$Q(\theta,\theta^{(i)})=E\left[\log P(y,\gamma\mid\theta)\,\middle|\,y,\theta^{(i)}\right]=\sum_{k=1}^{K}\left\{\sum_{j=1}^{M}E(\gamma_{jk})\log w_k+\sum_{j=1}^{M}E(\gamma_{jk})\left[\log\frac{1}{\sqrt{2\pi}}-\log\sigma_k-\frac{(y_j-\mu_k)^2}{2\sigma_k^2}\right]\right\}\qquad(10)$$

where the expectation $E(\gamma_{jk}\mid y,\theta^{(i)})$, written $\hat{\gamma}_{jk}$, is computed as:

$$\hat{\gamma}_{jk}=E(\gamma_{jk}\mid y,\theta^{(i)})=\frac{w_k\,\phi(y_j\mid\theta_k^{(i)})}{\sum_{l=1}^{K}w_l\,\phi(y_j\mid\theta_l^{(i)})}\qquad(11)$$

$\hat{\gamma}_{jk}$ is the probability, under the current model parameters $\theta^{(i)}$, that the jth observation comes from the kth sub-model; it is called the responsivity of sub-model k to observation $y_j$.
e) Substituting $\hat{\gamma}_{jk}$ and $m_k=\sum_{j=1}^{M}\hat{\gamma}_{jk}$ into the Q function gives:

$$Q(\theta,\theta^{(i)})=\sum_{k=1}^{K}\left\{\sum_{j=1}^{M}\hat{\gamma}_{jk}\log w_k+\sum_{j=1}^{M}\hat{\gamma}_{jk}\left[\log\frac{1}{\sqrt{2\pi}}-\log\sigma_k-\frac{(y_j-\mu_k)^2}{2\sigma_k^2}\right]\right\}\qquad(12)$$
f) Having obtained the parameters $\theta^{(i)}$ of the ith round, take as the next iterate $\theta^{(i+1)}$ the value that maximizes $Q(\theta,\theta^{(i)})$, i.e. $\theta^{(i+1)}=\arg\max_{\theta}Q(\theta,\theta^{(i)})$. Writing the components of $\theta^{(i+1)}$ as $\hat{\mu}_k$, $\hat{\sigma}_k^2$ and $\hat{w}_k$, the maximization yields:

$$\hat{\mu}_k=\frac{\sum_{j=1}^{M}\hat{\gamma}_{jk}\,y_j}{\sum_{j=1}^{M}\hat{\gamma}_{jk}},\qquad\hat{\sigma}_k^2=\frac{\sum_{j=1}^{M}\hat{\gamma}_{jk}\,(y_j-\hat{\mu}_k)^2}{\sum_{j=1}^{M}\hat{\gamma}_{jk}},\qquad\hat{w}_k=\frac{\sum_{j=1}^{M}\hat{\gamma}_{jk}}{M}\qquad(13)$$
g) Repeat steps d) to f) until convergence, then stop the iteration. The stopping condition is that, for a small positive number $\varepsilon$: $\|\theta^{(i+1)}-\theta^{(i)}\|<\varepsilon$.
h) According to the final clustering result, select the optimal parameter distribution ranges of the normal and abnormal driving postures as the basis for judging whether the driver is in an abnormal posture. When the driver is in an abnormal posture, the arm bending angle lies in the range (0°, 84°) and the pixel distance from the hand joint point to the eye joint point lies in (0, 457.4). When the driver is in a normal posture, the arm bending angle lies in (84°, 180°) and the hand-to-eye pixel distance lies in (457.4, +∞).
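To make steps 1), 2) and h) concrete, here is a minimal Python sketch that computes the two arm bending angles and hand-to-eye pixel distances from the eight joint points and applies the clustered thresholds. The thresholds are the values reported in step h); the function and key names are hypothetical, and the choice to flag the posture as abnormal when either criterion fires (logical OR) is our assumption, since the patent does not state how the two criteria combine.

```python
import numpy as np

# Thresholds taken from the clustering result in step h); they are pixel
# quantities and would need re-clustering for a different camera position.
ANGLE_MAX_DEG = 84.0   # abnormal if an arm bend angle falls below 84 degrees
DIST_MAX_PX = 457.4    # abnormal if a hand-to-eye distance falls below 457.4 px

def arm_bend_angle(hand, elbow, shoulder):
    """Angle at the elbow between the elbow->hand and elbow->shoulder vectors, in degrees."""
    v1 = np.asarray(hand, float) - np.asarray(elbow, float)
    v2 = np.asarray(shoulder, float) - np.asarray(elbow, float)
    cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

def hand_eye_distance(hand, eye):
    """Euclidean pixel distance between a hand joint and the same-side eye joint."""
    return float(np.linalg.norm(np.asarray(hand, float) - np.asarray(eye, float)))

def is_abnormal_posture(joints):
    """joints: dict mapping keys "P1".."P8" to (x, y) pixel coordinates (keys are illustrative)."""
    theta1 = arm_bend_angle(joints["P1"], joints["P2"], joints["P3"])  # left arm
    theta2 = arm_bend_angle(joints["P4"], joints["P5"], joints["P6"])  # right arm
    d1 = hand_eye_distance(joints["P1"], joints["P7"])                 # left hand to left eye
    d2 = hand_eye_distance(joints["P4"], joints["P8"])                 # right hand to right eye
    # Assumption: one violated threshold already marks the posture abnormal.
    return (min(theta1, theta2) < ANGLE_MAX_DEG) or (min(d1, d2) < DIST_MAX_PX)
```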
In the second step, whether to enable the behavior classifier is decided according to the judgment result of the abnormal posture criterion from the first step.
The behavior classifier is obtained by training a deep convolutional network and can further identify the specific abnormal driving posture categories of smoking, drinking and making a phone call. The convolutional neural network used may be an architecture such as ResNet-50, VGG, Inception or DenseNet. The human joint point detector and the behavior classifier are never active at the same time. When the abnormal posture judgment criterion inside the joint point detector judges that the driver is not in an abnormal posture, the behavior classifier stays in standby and only the joint point detector works: it extracts the driver's upper-body joint points in real time, calculates the arm bending angles and the hand-to-eye pixel distances, and feeds them to the abnormal posture judgment criterion for real-time analysis. When the criterion identifies that the driver is in an abnormal posture, the joint point detector switches to standby, the behavior classifier is enabled, and the specific abnormal posture type (calling, drinking or smoking) is identified. Once the behavior classifier has observed the driver maintaining a normal driving posture for 60 seconds, it returns to standby and the human joint point detector resumes continuous monitoring.
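The alternation between detector and classifier described above is a small two-state machine. The following sketch renders it in Python under stated assumptions: `pose_detector` and `behavior_classifier` are stand-ins for the patent's pose detector and deep-network classifier, `is_abnormal_posture` is the criterion sketched after step h), and the string labels are illustrative; only the 60-second hand-back time comes from the patent.

```python
import time

NORMAL_HOLD_SECONDS = 60  # classifier hands control back after 60 s of normal posture

class DriverMonitor:
    """Two-stage monitor: the cheap joint-point criterion gates the deep classifier."""

    def __init__(self, pose_detector, behavior_classifier):
        self.pose_detector = pose_detector      # frame -> joint dict (placeholder)
        self.classifier = behavior_classifier   # frame -> behavior label (placeholder)
        self.classifier_active = False
        self.normal_since = None

    def process_frame(self, frame):
        if not self.classifier_active:
            joints = self.pose_detector(frame)   # joint point detector working alone
            if is_abnormal_posture(joints):      # criterion from the first step
                self.classifier_active = True    # detector to standby, classifier starts
                self.normal_since = None
            return "normal"
        label = self.classifier(frame)           # e.g. "call", "drink", "smoke", "normal"
        if label == "normal":
            self.normal_since = self.normal_since or time.monotonic()
            if time.monotonic() - self.normal_since >= NORMAL_HOLD_SECONDS:
                self.classifier_active = False   # classifier to standby, detector resumes
        else:
            self.normal_since = None
        return label
```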
Compared with the existing global detection scheme, the invention has the advantages that:
(1) The method takes video data shot in real time by the vehicle-mounted camera as input, extracts the coordinates of the driver's upper-body joint points with the human joint point detector, and then uses the driver abnormal posture judgment criterion to make a preliminary identification of whether the driver is in an abnormal driving posture. The criterion is simple, efficient, robust and accurate, and it removes the high resource occupation of keeping a deep learning model running globally.
(2) Postures judged abnormal are then examined with a classifier trained on a deep neural network, so the system can distinguish the specific abnormal posture types while retaining the high recognition accuracy of a global monitoring system.
Drawings
Fig. 1 is a schematic view of a human body joint point according to the present invention.
Fig. 2 is a schematic view of the position of the vehicle-mounted camera in the present invention.
Fig. 3 is a flow chart of the method of the present invention.
In the figure: 1 left hand measuring point, 2 left arm elbow measuring point, 3 left shoulder measuring point, 4 right hand measuring point, 5 right arm elbow measuring point, 6 right shoulder measuring point, 7 left eye measuring point, 8 right eye measuring point.
Detailed Description
The present invention will be further described with reference to specific examples and drawings.
The automotive advanced driver-assistance system, as one of the important on-board systems of an automobile, plays a key role in protecting occupants and preventing traffic accidents, and driver posture monitoring is an emerging function within it. With the rapid development of computer vision in recent years, the recognition accuracy of driver posture monitoring systems has improved markedly, but higher accuracy demands more computing resources. Current driver monitoring systems use global monitoring: the driver's posture is continuously monitored with a behavior classifier throughout the drive. Yet most drivers are not in abnormal postures for long during a journey, so the global monitoring method wastes computing resources and degrades the performance of other on-board systems.
Therefore, addressing the high computing-resource occupation of existing global monitoring while weighing the system's overall performance and accuracy requirements, the invention ensures that the driving posture system designed by this method occupies few computing resources when the driver is not in an abnormal posture, detects the specific abnormal posture type when one occurs, and keeps the high detection accuracy of the original global monitoring method.
The specific implementation steps of the invention are as follows:
1) A large number of video samples are collected on a real vehicle, covering the normal driving posture and abnormal driving postures. The abnormal driving postures in this example are: making a phone call, smoking and drinking. The human joint point detector yields the driver's hand joint points $P_1(X_1,Y_1)$, $P_4(X_4,Y_4)$; elbow joint points $P_2(X_2,Y_2)$, $P_5(X_5,Y_5)$; shoulder joint points $P_3(X_3,Y_3)$, $P_6(X_6,Y_6)$; and eye joint points $P_7(X_7,Y_7)$, $P_8(X_8,Y_8)$.
2) Calculate the bending angles $\theta_1,\theta_2$ of the driver's two arms from the hand, elbow and shoulder joint points, and the distances $d_1,d_2$ from the driver's hands to the eyes from the hand and eye joint points. A total of 100,000 groups of data were obtained.
3) Denote the observation data from step 2 as $y_1=(a_1,a_2,\ldots,a_{200000})$ and $y_2=(b_1,b_2,\ldots,b_{200000})$, and model each with a probability distribution of the form:

$$P(y\mid\theta)=\sum_{k=1}^{K}w_k\,\phi(y\mid\theta_k)$$

where $w_k\ge 0$ and $\sum_k w_k=1$ are the weights of the Gaussian components, and $\phi(y\mid\theta_k)$ is the probability density of the kth Gaussian sub-model. K takes the value 2: one class represents the normal driving posture and the other the abnormal driving posture. The parameters are $\theta_k=(\mu_k,\sigma_k^2)$.
4) Parameters are randomly initialized, and iteration is started.
5) In the ith iteration, compute from the current model parameters $\theta^{(i)}$ the responsivity of each Gaussian sub-model to each observation:

$$\hat{\gamma}_{jk}=\frac{w_k\,\phi(y_j\mid\theta_k^{(i)})}{\sum_{l=1}^{K}w_l\,\phi(y_j\mid\theta_l^{(i)})}$$
6) Calculate the parameters of the new iteration:

$$\hat{\mu}_k=\frac{\sum_{j=1}^{M}\hat{\gamma}_{jk}\,y_j}{\sum_{j=1}^{M}\hat{\gamma}_{jk}},\qquad\hat{\sigma}_k^2=\frac{\sum_{j=1}^{M}\hat{\gamma}_{jk}\,(y_j-\hat{\mu}_k)^2}{\sum_{j=1}^{M}\hat{\gamma}_{jk}},\qquad\hat{w}_k=\frac{\sum_{j=1}^{M}\hat{\gamma}_{jk}}{M}$$
7) Repeat steps 5 and 6 until convergence, then stop. The stopping condition is that, for a small positive number $\varepsilon$: $\|\theta^{(i+1)}-\theta^{(i)}\|<\varepsilon$.
8) Determine the abnormal posture judgment criterion from the iteration result:
Abnormal posture: the arm bending angle lies in (0°, 84°) and the hand-to-eye pixel distance lies in (0, 457.4).
Normal posture: the arm bending angle lies in (84°, 180°) and the hand-to-eye pixel distance lies in (457.4, +∞).
9) Frames showing the driver's abnormal postures are cropped from the video samples to build a data set. The data set for training the behavior classifier covers four behaviors (normal driving, smoking, drinking, making a phone call) and contains 16,000 images in total. The convolutional neural network chosen in this example is ResNet-50; other networks such as VGG, Inception or DenseNet may also be used.
10) The human joint point detector and the behavior classifier are never active at the same time. When the abnormal posture judgment criterion inside the joint point detector judges that the driver is not in an abnormal posture, the behavior classifier stays in standby and only the joint point detector works: it extracts the driver's upper-body joint points in real time, calculates the arm bending angles and hand-to-eye pixel distances, and feeds them to the abnormal posture judgment criterion for real-time analysis. When the criterion identifies that the driver is in an abnormal posture, the joint point detector switches to standby, the behavior classifier is enabled, and the specific abnormal posture type (calling, drinking or smoking) is identified. Once the behavior classifier has observed the driver maintaining a normal driving posture for 60 seconds, it returns to standby and the joint point detector resumes continuous monitoring. A compact EM sketch corresponding to steps 3) to 8) follows.
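Steps 3) to 8) above are the standard EM algorithm for a one-dimensional, two-component Gaussian mixture. The sketch below (placed here so as not to break the numbered list) implements those steps directly in NumPy; the function name, initialisation scheme and synthetic example data are our assumptions, not part of the patent.

```python
import numpy as np

def fit_gmm_1d(y, K=2, eps=1e-6, max_iter=500, seed=0):
    """EM for a 1-D Gaussian mixture, following steps 4)-7): the E-step computes
    the responsivities, the M-step the weighted mean/variance/weight updates."""
    y = np.asarray(y, float)
    M = y.size
    rng = np.random.default_rng(seed)
    w = np.full(K, 1.0 / K)                      # mixture weights w_k
    mu = rng.choice(y, K, replace=False)         # random initialisation (step 4)
    var = np.full(K, y.var())
    theta_old = np.concatenate([w, mu, var])
    for _ in range(max_iter):
        # E-step (step 5): responsivity of sub-model k for observation j
        dens = np.exp(-(y[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step (step 6): new parameter estimates
        Nk = resp.sum(axis=0)
        mu = (resp * y[:, None]).sum(axis=0) / Nk
        var = (resp * (y[:, None] - mu) ** 2).sum(axis=0) / Nk
        w = Nk / M
        theta = np.concatenate([w, mu, var])
        if np.linalg.norm(theta - theta_old) < eps:   # stopping rule of step 7
            break
        theta_old = theta
    return w, mu, var

# Synthetic angle data standing in for the 200,000 real observations:
angles = np.concatenate([np.random.normal(45, 15, 1000),    # abnormal-posture cluster
                         np.random.normal(130, 20, 1000)])  # normal-posture cluster
weights, means, variances = fit_gmm_1d(angles)
```

One plausible way to derive the thresholds of step 8) from the fit is to take the point where the two fitted component densities intersect; the patent does not specify this step.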
The invention takes driver images shot in real time by the vehicle-mounted camera as input data, extracts the position coordinates of the driver's upper-body joint points with the Lightweight OpenPose pose detection algorithm, and proposes an abnormal posture judgment criterion based on these coordinate data; to identify the specific abnormal posture types more accurately, a behavior classifier is added after the abnormal posture discrimination algorithm. Compared with existing driver monitoring systems, the invention greatly reduces the monitoring system's occupation of the on-board processor's computing resources while preserving detection accuracy, leaving more resource headroom for other on-board systems.
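For the behavior classifier itself, the patent names ResNet-50 trained on a four-class data set but gives no training details. A minimal fine-tuning sketch under those assumptions (the hyperparameters, data loader and device are ours) could look as follows:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # normal driving, smoking, drinking, making a phone call

# Start from ImageNet weights and replace the 1000-way head with a 4-way head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_epoch(loader, device="cuda"):
    """One pass over a loader yielding (image batch, label batch) pairs."""
    model.to(device).train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images.to(device)), labels.to(device))
        loss.backward()
        optimizer.step()
```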
The examples described above represent only embodiments of the invention and are not to be understood as limiting the scope of the patent; several variations and modifications may be made by those skilled in the art without departing from the concept of the invention, and these fall within the scope of protection of the invention.

Claims (1)

1. A driver abnormal posture detection method, characterized in that, when the driver's behavior posture is monitored, an abnormal posture judgment criterion first distinguishes whether the driver is in an abnormal driving posture, and whether the behavior classifier is enabled is then determined according to the judgment result, comprising the following steps:
in the first step, color or grayscale image data of the driver collected by the vehicle-mounted camera is taken as input, and a pose detection algorithm is adopted as the human joint point detector to extract the driver's upper-body joint points; the upper-body joint points mainly comprise: a left-hand measuring point, a left-elbow measuring point, a left-shoulder measuring point, a right-hand measuring point, a right-elbow measuring point, a right-shoulder measuring point, a left-eye measuring point and a right-eye measuring point;
the driver's upper-body joint point data are extracted with the human joint point detector over the driving interval and analyzed to determine the abnormal posture judgment criterion inside the detector, and whether the driver is in an abnormal driving posture is distinguished according to the criterion;
the abnormal posture judgment criterion is determined with a Gaussian mixture model clustering algorithm after the coordinates of the driver's body joints are obtained from the human joint point detector and the driver's arm bending angles and hand-to-eye joint distances are calculated; the steps are as follows:
1) calculate the bending angles $\theta_1,\theta_2$ of the driver's two arms:

$$\theta_1=\arccos\frac{(X_1-X_2)(X_3-X_2)+(Y_1-Y_2)(Y_3-Y_2)}{\sqrt{(X_1-X_2)^2+(Y_1-Y_2)^2}\sqrt{(X_3-X_2)^2+(Y_3-Y_2)^2}}\qquad(1)$$

$$\theta_2=\arccos\frac{(X_4-X_5)(X_6-X_5)+(Y_4-Y_5)(Y_6-Y_5)}{\sqrt{(X_4-X_5)^2+(Y_4-Y_5)^2}\sqrt{(X_6-X_5)^2+(Y_6-Y_5)^2}}\qquad(2)$$

wherein $\theta_1,\theta_2$ are the bending angles of the two arms; $P_1(X_1,Y_1)$, $P_4(X_4,Y_4)$ are the hand joint coordinates; $P_2(X_2,Y_2)$, $P_5(X_5,Y_5)$ are the elbow joint coordinates; and $P_3(X_3,Y_3)$, $P_6(X_6,Y_6)$ are the shoulder joint coordinates;
2) $P_7(X_7,Y_7)$, $P_8(X_8,Y_8)$ are the eye joint coordinates; calculate the distances $d_1,d_2$ from the driver's hands to the eyes:

$$d_1=\sqrt{(X_1-X_7)^2+(Y_1-Y_7)^2}\qquad(3)$$

$$d_2=\sqrt{(X_4-X_8)^2+(Y_4-Y_8)^2}\qquad(4)$$

wherein $d_1,d_2$ are the hand-to-eye distances; $P_1(X_1,Y_1)$, $P_4(X_4,Y_4)$ are the hand joint coordinates; and $P_7(X_7,Y_7)$, $P_8(X_8,Y_8)$ are the eye joint coordinates;
3) determine the parameter distribution range under each posture, as follows:
a) assume that the arm bending angles and hand-to-eye distances under the various postures obey N different Gaussian distributions; the resulting Gaussian mixture model is the mixture of the Gaussian distributions of the design parameters under the N postures;
b) the observation data $y_j$, $j=1,2,\ldots,M$ are generated as follows: first the kth Gaussian sub-model $\phi(y\mid\theta_k)$ is selected with probability $w_k$, then an observation $y_j$ is generated from the probability distribution $\phi(y\mid\theta_k)$; several of the M observations may come from the same sub-model; the observations $y_j$, $j=1,2,\ldots,M$ are known, but the hidden variable indicating which sub-model each observation $y_j$ came from is unknown; it is denoted $\gamma_{jk}$:

$$\gamma_{jk}=\begin{cases}1,&\text{the }j\text{th observation comes from the }k\text{th sub-model}\\0,&\text{otherwise}\end{cases}\qquad(5)$$

where $j=1,2,\ldots,M$; $k=1,2,\ldots,K$; with the observed data $y_j$ and the unobserved data $\gamma_{jk}$, the complete data is expressed as:

$$(y_j,\gamma_{j1},\gamma_{j2},\ldots,\gamma_{jK}),\quad j=1,2,\ldots,M\qquad(6)$$
c) obtaining the maximum likelihood estimate of the log-likelihood function $L(\theta)=\log P(y\mid\theta)$ of the incomplete data is equivalent to maximizing the expectation of the log-likelihood function $\log P(y,\gamma\mid\theta)$ of the complete data; the likelihood function of the complete data is:

$$P(y,\gamma\mid\theta)=\prod_{k=1}^{K}w_k^{m_k}\prod_{j=1}^{M}\left[\phi(y_j\mid\theta_k)\right]^{\gamma_{jk}}\qquad(7)$$

wherein $m_k=\sum_{j=1}^{M}\gamma_{jk}$ is the number of the M observations generated by the kth sub-model, $\mu_k$ is the mean of the corresponding Gaussian distribution, $\sigma_k^2$ its variance, and $w_k$ its weight, with

$$\phi(y_j\mid\theta_k)=\frac{1}{\sqrt{2\pi}\,\sigma_k}\exp\left(-\frac{(y_j-\mu_k)^2}{2\sigma_k^2}\right)\qquad(8)$$

the log-likelihood function of the complete data is:

$$\log P(y,\gamma\mid\theta)=\sum_{k=1}^{K}\left\{m_k\log w_k+\sum_{j=1}^{M}\gamma_{jk}\left[\log\frac{1}{\sqrt{2\pi}}-\log\sigma_k-\frac{(y_j-\mu_k)^2}{2\sigma_k^2}\right]\right\}\qquad(9)$$
d) given the observations y and the parameters $\theta^{(i)}$ of the ith iteration, take the expectation of the complete-data log-likelihood $\log P(y,\gamma\mid\theta)$ with respect to the conditional distribution $P(\gamma\mid y,\theta^{(i)})$ of the hidden random variable $\gamma$, and denote this function $Q(\theta,\theta^{(i)})$:

$$Q(\theta,\theta^{(i)})=E\left[\log P(y,\gamma\mid\theta)\,\middle|\,y,\theta^{(i)}\right]=\sum_{k=1}^{K}\left\{\sum_{j=1}^{M}E(\gamma_{jk})\log w_k+\sum_{j=1}^{M}E(\gamma_{jk})\left[\log\frac{1}{\sqrt{2\pi}}-\log\sigma_k-\frac{(y_j-\mu_k)^2}{2\sigma_k^2}\right]\right\}\qquad(10)$$

wherein the expectation $E(\gamma_{jk}\mid y,\theta^{(i)})$, written $\hat{\gamma}_{jk}$, is computed as:

$$\hat{\gamma}_{jk}=E(\gamma_{jk}\mid y,\theta^{(i)})=\frac{w_k\,\phi(y_j\mid\theta_k^{(i)})}{\sum_{l=1}^{K}w_l\,\phi(y_j\mid\theta_l^{(i)})}\qquad(11)$$

$\hat{\gamma}_{jk}$ is the probability, under the current model parameters $\theta^{(i)}$, that the jth observation comes from the kth sub-model; it is called the responsivity of sub-model k to observation $y_j$;
e) substituting $\hat{\gamma}_{jk}$ and $m_k=\sum_{j=1}^{M}\hat{\gamma}_{jk}$ into the Q function gives:

$$Q(\theta,\theta^{(i)})=\sum_{k=1}^{K}\left\{\sum_{j=1}^{M}\hat{\gamma}_{jk}\log w_k+\sum_{j=1}^{M}\hat{\gamma}_{jk}\left[\log\frac{1}{\sqrt{2\pi}}-\log\sigma_k-\frac{(y_j-\mu_k)^2}{2\sigma_k^2}\right]\right\}\qquad(12)$$
f) having obtained the parameters $\theta^{(i)}$ of the ith round, take as the next iterate $\theta^{(i+1)}$ the value that maximizes $Q(\theta,\theta^{(i)})$, i.e. $\theta^{(i+1)}=\arg\max_{\theta}Q(\theta,\theta^{(i)})$; writing the components of $\theta^{(i+1)}$ as $\hat{\mu}_k$, $\hat{\sigma}_k^2$ and $\hat{w}_k$, the maximization yields:

$$\hat{\mu}_k=\frac{\sum_{j=1}^{M}\hat{\gamma}_{jk}\,y_j}{\sum_{j=1}^{M}\hat{\gamma}_{jk}},\qquad\hat{\sigma}_k^2=\frac{\sum_{j=1}^{M}\hat{\gamma}_{jk}\,(y_j-\hat{\mu}_k)^2}{\sum_{j=1}^{M}\hat{\gamma}_{jk}},\qquad\hat{w}_k=\frac{\sum_{j=1}^{M}\hat{\gamma}_{jk}}{M}\qquad(13)$$
g) repeat steps d) to f) until convergence, then stop the iteration; the stopping condition is that, for a small positive number $\varepsilon$: $\|\theta^{(i+1)}-\theta^{(i)}\|<\varepsilon$;
h) according to the final clustering result, select the optimal parameter distribution ranges of the normal and abnormal driving postures as the basis for judging whether the driver is in an abnormal posture; when the driver is in an abnormal posture, the arm bending angle lies in [0°, 84°] and the pixel distance from the hand joint point to the eye joint point lies in [0, 457.4]; when the driver is in a normal posture, the arm bending angle lies in (84°, 180°) and the hand-to-eye pixel distance lies in (457.4, +∞);
in the second step, whether to enable the behavior classifier is decided according to the judgment result of the abnormal posture judgment criterion from the first step;
the behavior classifier is obtained by training a deep convolutional network, and the abnormal driving posture categories are further identified by the behavior classifier;
the human joint point detector and the behavior classifier are never enabled at the same time; when the abnormal posture judgment criterion inside the joint point detector judges that the driver is not in an abnormal posture, the behavior classifier is in standby and only the joint point detector works: it extracts the driver's upper-body joint points in real time, calculates the arm bending angles and hand-to-eye pixel distances, and provides them to the abnormal posture judgment criterion for real-time analysis; when the criterion identifies that the driver is in an abnormal posture, the joint point detector switches to standby, the behavior classifier is enabled, and the specific abnormal posture type of the driver is identified; once the behavior classifier recognizes that the driver has maintained a normal driving posture for 60 seconds, it returns to standby and the joint point detector resumes continuous monitoring of the driver.
Application CN202010384258.1A, filed 2020-05-09 (priority date 2020-05-09); granted as CN111666818B (status: Active).

Priority Applications (1)

CN202010384258.1A (priority/filing date 2020-05-09): Driver abnormal posture detection method, granted as CN111666818B


Publications (2)

Publication Number Publication Date
CN111666818A CN111666818A (en) 2020-09-15
CN111666818B (en) 2023-06-16

Family

ID=72383245

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010384258.1A Active CN111666818B (en) 2020-05-09 2020-05-09 Driver abnormal posture detection method

Country Status (1)

Country Link
CN (1) CN111666818B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112287795B (en) * 2020-10-22 2023-09-01 北京百度网讯科技有限公司 Abnormal driving gesture detection method, device, equipment, vehicle and medium
CN112381066B (en) * 2020-12-10 2023-04-18 杭州西奥电梯有限公司 Abnormal behavior identification method for elevator riding, monitoring system, computer equipment and storage medium
CN114764912A (en) * 2020-12-30 2022-07-19 中兴通讯股份有限公司 Driving behavior recognition method, device and storage medium
CN113673319B (en) * 2021-07-12 2024-05-03 浙江大华技术股份有限公司 Abnormal gesture detection method, device, electronic device and storage medium
CN114898342B (en) * 2022-07-15 2022-11-25 深圳市城市交通规划设计研究中心股份有限公司 Method for detecting call receiving and making of non-motor vehicle driver in driving
CN116965781B (en) * 2023-04-28 2024-01-05 南京晓庄学院 Method and system for monitoring vital signs and driving behaviors of driver

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009145951A (en) * 2007-12-11 2009-07-02 Toyota Central R&D Labs Inc Driver status estimation device and program
CN102289660A (en) * 2011-07-26 2011-12-21 华南理工大学 Method for detecting illegal driving behavior based on hand gesture tracking
CN109902562A (en) * 2019-01-16 2019-06-18 重庆邮电大学 A kind of driver's exception attitude monitoring method based on intensified learning
CN110751051A (en) * 2019-09-23 2020-02-04 江苏大学 Abnormal driving behavior detection method based on machine vision
CN110949398A (en) * 2019-11-28 2020-04-03 同济大学 Method for detecting abnormal driving behavior of first-vehicle drivers in vehicle formation driving


Also Published As

Publication number Publication date
CN111666818A (en) 2020-09-15

Similar Documents

Publication Publication Date Title
CN111666818B (en) Driver abnormal posture detection method
CN111611905B (en) Visible light and infrared fused target identification method
CN109117826B (en) Multi-feature fusion vehicle identification method
CN108537197B (en) Lane line detection early warning device and method based on deep learning
JP2022537857A (en) Automatic Determination System and Method for Degree of Damage by Automobile Parts Based on Deep Learning
CN111652087B (en) Car inspection method, device, electronic equipment and storage medium
CN108446645B (en) Vehicle-mounted face recognition method based on deep learning
CN109875568A (en) A kind of head pose detection method for fatigue driving detection
CN111027481B (en) Behavior analysis method and device based on human body key point detection
CN113378676A (en) Method for detecting figure interaction in image based on multi-feature fusion
CN101533466B (en) Image processing method for positioning eyes
CN108960074B (en) Small-size pedestrian target detection method based on deep learning
CN111553310B (en) Security inspection image acquisition method and system based on millimeter wave radar and security inspection equipment
CN113870254B (en) Target object detection method and device, electronic equipment and storage medium
CN110689526A (en) Retinal blood vessel segmentation method and system based on retinal fundus image
CN112183220A (en) Driver fatigue detection method and system and computer storage medium
CN109214289A (en) A kind of Activity recognition method of making a phone call from entirety to local two stages
CN110263836B (en) Bad driving state identification method based on multi-feature convolutional neural network
CN112052829B (en) Pilot behavior monitoring method based on deep learning
CN117408947B (en) Deep learning-based multi-label bridge surface defect detection method and system
CN113537013A (en) Multi-scale self-attention feature fusion pedestrian detection method
CN111222477B (en) Vision-based method and device for detecting departure of hands from steering wheel
CN112069898A (en) Method and device for recognizing human face group attribute based on transfer learning
CN109117719B (en) Driving posture recognition method based on local deformable component model fusion characteristics
CN116152754A (en) Expressway accident judging method based on DCNN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant