CN109584507B - Driving behavior monitoring method, device, system, vehicle and storage medium - Google Patents


Info

Publication number: CN109584507B (application CN201811339289.4A)
Authority: CN (China)
Prior art keywords: driving state, image frame, characteristic, detection, preset
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN109584507A
Inventors: 刘国清, 杨广
Current assignee: Hangzhou Ruijian Zhixing Technology Co ltd
Original assignee: Shenzhen Minieye Innovation Technology Co Ltd
Application filed by Shenzhen Minieye Innovation Technology Co Ltd; published as CN109584507A; granted and published as CN109584507B

Classifications

    • G08B21/06 Alarms for ensuring the safety of persons indicating a condition of sleep, e.g. anti-dozing alarms
    • B60W40/08 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems, related to drivers or passengers
    • B60W40/09 Driving style or behaviour
    • B60W50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2040/0818 Inactivity or incapacity of driver
    • B60W2050/143 Alarm means

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application relates to a driving behavior monitoring method, device, system, vehicle and storage medium. The method comprises the following steps: executing a detection operation on the collected in-vehicle video information, and determining whether the current driving state of the driver is an abnormal driving state, wherein the detection operation includes distracted driving detection and fatigue driving detection; and if the current driving state is an abnormal driving state, executing an early warning operation corresponding to the abnormal driving state. By adopting the method, comprehensive monitoring of the two abnormal driving states, namely the fatigue driving state and the distracted driving state, can be realized; targeted and efficient early warning is achieved by executing the early warning operation corresponding to the abnormal driving state, and the driving risk is reduced.

Description

Driving behavior monitoring method, device, system, vehicle and storage medium
Technical Field
The present application relates to the field of vehicle technologies, and in particular, to a driving behavior monitoring method, device, system, vehicle, and storage medium.
Background
With the development of safe driving technology for vehicles, the driver behavior detection system has become an important component of driver assistance systems, with broad application prospects in preventing traffic accidents; it has therefore attracted attention from more and more research institutions and scholars.
Generally, a driver behavior detection system can find unsafe driving operation in advance by monitoring the driving behavior and the driving state of a driver in real time, improve the safe driving awareness of the driver, and avoid major traffic accidents.
However, the detection range of current driver behavior detection systems is narrow: intensive research has been conducted only on fatigue driving, while few other factors are considered, which undoubtedly increases the driving risk.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide a driving behavior monitoring method, device, system, vehicle and storage medium that can realize comprehensive driving detection including both distracted driving detection and fatigue driving detection.
In a first aspect, a driving behavior monitoring method, the method comprising:
executing detection operation on the collected video information in the vehicle, and determining whether the current driving state of the driver is an abnormal driving state; wherein the detection operation includes a distracted driving detection and a fatigue driving detection;
and if the current driving state is an abnormal driving state, executing early warning operation corresponding to the abnormal driving state.
In one embodiment, the detecting operation further comprises: abnormal variable speed drive detection, the method further comprising:
the abnormal variable speed driving detection is performed on the acquired running state information of the vehicle.
In one embodiment, the performing distracted driving detection on the captured video information in the vehicle comprises:
acquiring each image frame of the video information within a first preset time period, and processing each image frame through a preset elliptic contour detection algorithm to acquire each pixel point of a steering wheel area in each image frame;
calculating the characteristic probability that each pixel point of the steering wheel area in each image frame belongs to the skin color area according to a preset Gaussian skin color model and the pixel value of each pixel point of the steering wheel area;
and identifying whether the current driving state is a distracted driving state or not according to the feature probability corresponding to each image frame and the size of a preset probability threshold.
In one embodiment, identifying whether the current driving state is a distracted driving state according to the feature probability corresponding to each image frame and a preset probability threshold includes:
calculating the difference value of the feature probabilities corresponding to any two continuous image frames in each image frame;
and if the difference value of the characteristic probabilities is larger than a preset probability threshold value, determining that the current driving state is a distracted driving state.
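The frame-to-frame comparison above can be sketched as follows; the function name and default threshold are illustrative assumptions, not values from the patent:

```python
def is_distracted(frame_probs, prob_threshold=0.3):
    """Return True if the skin-color feature probability jumps between
    any two consecutive image frames by more than the preset threshold,
    suggesting the hands left (or returned to) the steering wheel."""
    for prev, curr in zip(frame_probs, frame_probs[1:]):
        if abs(curr - prev) > prob_threshold:
            return True
    return False

# Example: a sudden drop in skin-color probability mid-sequence
# (hands leaving the wheel region) triggers the distraction flag.
probs = [0.82, 0.80, 0.79, 0.21, 0.18]
```

Here `frame_probs` would be the per-frame feature probabilities computed from the Gaussian skin color model over the steering wheel region.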
In one embodiment, the performing distracted driving detection on the captured video information in the vehicle comprises:
acquiring each image frame of the video information within a first preset time period, and processing each image frame through a preset face recognition technology to acquire each pixel point of a mouth area in each image frame;
calculating the characteristic pixel value of each pixel point of the mouth area in each image frame;
and identifying whether the current driving state is a distracted driving state or not according to the characteristic pixel value corresponding to each image frame and a preset characteristic pixel threshold value.
In one embodiment, the performing fatigue driving detection on the collected video information in the vehicle comprises:
acquiring each image frame of the video information in a first preset time period, processing each image frame through a preset face recognition technology, and calculating the closing frequency of the face characteristic points;
and if the closing frequency of the face characteristic points is greater than a preset frequency threshold, determining that the current driving state is a fatigue driving state.
In one embodiment, if the face feature point is an eye, the calculating the closing frequency of the face feature point includes:
counting the blinking times in the first preset time period according to the human eye state in each image frame; the human eye state comprises an open state and a closed state;
and calculating the blinking frequency of the eyes according to the first preset time period and the blinking times.
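A minimal sketch of this blink-frequency calculation, assuming eye states have already been classified per frame (the representation of states as strings is an assumption for illustration):

```python
def blink_frequency(eye_states, period_seconds):
    """Count open-to-closed transitions (blinks) over the first preset
    time period and return blinks per second. eye_states holds one
    entry per image frame, each either 'open' or 'closed'."""
    blinks = sum(
        1 for prev, curr in zip(eye_states, eye_states[1:])
        if prev == "open" and curr == "closed"
    )
    return blinks / period_seconds
```

The resulting frequency would then be compared against the preset frequency threshold to decide whether the current driving state is a fatigue driving state; the same counting scheme applies to mouth open/closed states for yawn detection.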
In one embodiment, if the face feature point is a mouth, the calculating the closing frequency of the face feature point includes:
counting the mouth closing times in the first preset time period according to the mouth states in the image frames; the mouth state comprises an open state and a closed state;
and calculating the closing frequency of the mouth according to the first preset time period and the mouth closing times.
In one embodiment, the performing the abnormal variable speed driving detection on the acquired running state information of the vehicle includes:
acquiring speed information of the vehicle and angle information of a steering wheel in a second preset time period;
acquiring the characteristic speed and the characteristic acceleration of the vehicle within the second preset time period according to the speed information of the vehicle; acquiring the characteristic angular acceleration of the vehicle within the second preset time period according to the angle information of the steering wheel;
and identifying whether the current driving state is an abnormal variable speed driving state or not according to the characteristic speed and a preset characteristic speed threshold value, the characteristic acceleration and a preset characteristic acceleration threshold value and the characteristic angular acceleration and a preset angular acceleration threshold value.
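The threshold comparisons above can be sketched as follows, under the assumption (not stated in the patent) that characteristic acceleration and angular acceleration are obtained by simple finite differences over uniformly sampled readings:

```python
def detect_abnormal_shift(speeds, wheel_angles, dt,
                          speed_threshold, accel_threshold,
                          ang_accel_threshold):
    """Derive characteristic speed, acceleration and steering angular
    acceleration over the second preset time period and compare each
    against its preset threshold; any exceedance flags an abnormal
    variable-speed driving state. dt is the sampling interval."""
    accels = [(b - a) / dt for a, b in zip(speeds, speeds[1:])]
    ang_vels = [(b - a) / dt for a, b in zip(wheel_angles, wheel_angles[1:])]
    ang_accels = [(b - a) / dt for a, b in zip(ang_vels, ang_vels[1:])]
    return (max(speeds) > speed_threshold
            or any(abs(a) > accel_threshold for a in accels)
            or any(abs(a) > ang_accel_threshold for a in ang_accels))
```

For example, a jump in speed well above the acceleration threshold (sudden acceleration or braking) flags the state even when the speed itself stays below the overspeed threshold.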
In a second aspect, a driving behaviour monitoring device, the device comprising:
the first detection module is used for executing detection operation on the collected video information in the vehicle and determining whether the current driving state of the driver is an abnormal driving state; wherein the detection operation includes a distracted driving detection and a fatigue driving detection;
and the early warning module is used for executing early warning operation corresponding to the abnormal driving state if the current driving state is the abnormal driving state.
In a third aspect, a driving behavior monitoring system comprises: the device comprises a video information acquisition device, a speed acquisition device, an angle acquisition device, a memory and a processor, wherein the memory stores a computer program, and the processor realizes the following steps when executing the computer program:
executing detection operation on the collected video information in the vehicle, and determining whether the current driving state of the driver is an abnormal driving state; wherein the detection operation includes a distracted driving detection and a fatigue driving detection;
and if the current driving state is an abnormal driving state, executing early warning operation corresponding to the abnormal driving state.
In a fourth aspect, a vehicle comprises the driving behavior monitoring system described above.
In a fifth aspect, a computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, performs the steps of:
executing detection operation on the collected video information in the vehicle, and determining whether the current driving state of the driver is an abnormal driving state; wherein the detection operation includes a distracted driving detection and a fatigue driving detection;
and if the current driving state is an abnormal driving state, executing early warning operation corresponding to the abnormal driving state.
According to the driving behavior monitoring method, device, system, vehicle and storage medium, whether the current driving state of the driver is an abnormal driving state is determined by executing the detection operation on the collected in-vehicle video information, and when the current driving state is an abnormal driving state, the early warning operation corresponding to that state is executed. This realizes all-around driving detection including distracted driving detection and fatigue driving detection, so that both the fatigue driving state and the distracted driving state can be comprehensively monitored; by executing the early warning operation corresponding to the abnormal driving state, targeted and efficient early warning is realized and the driving risk is reduced.
Drawings
FIG. 1 is a diagram of an exemplary driving behavior monitoring system;
FIG. 2 is a schematic flow chart of a driving behavior monitoring method according to one embodiment;
FIG. 3 is a schematic flow chart of a driving behavior monitoring method according to another embodiment;
FIG. 4 is a schematic flow chart of a distracted driving detection step in one embodiment;
FIG. 5 is a schematic flow chart of another distracted driving detection step in one embodiment;
FIG. 6 is a schematic flow chart of a fatigue driving detection procedure in one embodiment;
FIG. 7 is a flowchart illustrating an abnormal shifting driving detection procedure in accordance with one embodiment;
FIG. 8 is a schematic illustration of a driving behavior monitoring method according to an embodiment;
FIG. 9 is a block diagram showing the construction of a driving behavior monitoring apparatus according to an embodiment;
FIG. 10 is a block diagram showing the construction of a driving behavior monitoring apparatus according to another embodiment;
FIG. 11 is a block diagram showing the structure of a driving behavior monitoring system according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The driving behavior monitoring method can be applied to terminal equipment and is applicable to various vehicles, including but not limited to automobiles and ships. It will be appreciated that when the vehicle is a watercraft, the steering wheel in this embodiment may be a rudder. Taking an automobile as an example, the terminal device may be, but is not limited to, various vehicle-mounted terminals, or a notebook computer, a smart phone, a tablet computer, and a portable wearable device; the vehicle-mounted terminal can be an independent terminal or integrated in a vehicle-mounted multimedia terminal, a driver assistance system and the like. Referring to fig. 1, taking a vehicle 10 as an example, the terminal device may be a vehicle-mounted terminal 12, and the vehicle-mounted terminal 12 may be connected to a video information acquisition device 14 installed in the vehicle to acquire video information in the vehicle; the vehicle-mounted terminal 12 can also be connected with a speed acquisition device 15 and an angle acquisition device 16 which are installed in the vehicle to acquire the running state information of the vehicle; the in-vehicle terminal 12 may determine whether the current driving state of the driver 11 is an abnormal driving state according to the in-vehicle video information and/or the running state information of the vehicle. For example, the vehicle-mounted terminal may perform a detection operation on the collected in-vehicle video information through a preset elliptical contour detection algorithm and a Gaussian skin color model, and determine whether the current driving state is an abnormal driving state by determining whether the hands of the driver 11 have left the steering wheel 13.
In addition, the vehicle-mounted terminal can also be a smart phone which can interact with a video information acquisition device, a speed acquisition device and an angle acquisition device which are arranged in a vehicle; for example, the image acquisition device and the smart phone are both provided with short-distance communication modules such as a bluetooth module, and the smart phone can perform short-distance communication with the image acquisition device; for another example, the speed acquisition device and the angle acquisition device may interact with a driver assistance system, and the smart phone interacts with the driver assistance system through a short-distance communication module or the internet. In summary, the present embodiment is not limited to the above example.
In one embodiment, as shown in fig. 2, by performing a detection operation on the collected video information in the vehicle, it is determined whether the current driving state of the driver is an abnormal driving state, and when the current driving state is the abnormal driving state, an early warning operation corresponding to the abnormal driving state is performed, so as to implement omnidirectional driving detection including distracted driving detection and fatigue driving detection and a corresponding early warning operation. The driving behavior monitoring method is described by taking the vehicle-mounted terminal applied to fig. 1 as an example, and may include the following steps:
s201, detecting operation is carried out on the collected video information in the vehicle, and whether the current driving state of the driver is an abnormal driving state or not is determined.
Wherein the detection operation comprises distracted driving detection and fatigue driving detection; distracted driving may include at least one distraction condition during driving, where the driver's hands are off the steering wheel, the driver makes a call, the driver smokes, etc.; fatigue driving may include situations where the driver frequently yawns, blinks, etc. due to fatigue during driving. Therefore, in the present embodiment, the in-vehicle terminal may acquire video information in the vehicle through an image acquisition device installed in the vehicle, acquire a face image, a hand image, a steering wheel image, or the like of the driver, and perform a detection operation to determine whether the driver is in an abnormal driving state such as a distracted driving state or a fatigue driving state.
It is understood that S201 may include: when the vehicle is in a driving state, performing the detection operation on the collected in-vehicle video information and determining whether the current driving state of the driver is an abnormal driving state. The vehicle being in a driving state may comprise: the speed of the vehicle is greater than a preset parking speed threshold. For example, for a vehicle, the preset parking speed threshold may be 3 km/h. Therefore, when the vehicle is in the driving state, the detection operation as shown in S201 is performed; when the vehicle is in a non-driving state such as a parking state, the detection operation and the subsequent early warning operation are not performed, so that resources corresponding to the detection operation can be saved and false early warnings in the non-driving state can be avoided.
Of course, in this embodiment, a plurality of image capturing devices may be installed in the vehicle, and are configured to capture video information in a plurality of vehicles in different directions, for example, the facial video information of the driver and the steering wheel video information of the vehicle may be respectively captured, and perform fatigue driving detection on the captured facial video information of the driver, and perform distraction driving detection on the captured steering wheel video information of the vehicle, so as to avoid interference of redundant video information and improve accuracy of the detection operation by obtaining more accurate video information.
And S202, if the current driving state is the abnormal driving state, executing early warning operation corresponding to the abnormal driving state.
The early warning operation may include early warning operations in various modes such as a video playing mode, a text display mode, a voice playing mode, and the like. Illustratively, if the current driving state is an abnormal driving state corresponding to the distracted driving state, executing an early warning operation corresponding to the distracted driving state, wherein the early warning operation can be preset voice playing for reminding a driver of attentively driving and not distracting; further, when the distracted driving state is a situation where the driver's hands are disengaged from the steering wheel, the warning operation may be a preset voice play for reminding the driver of the steering wheel being grasped by the driver's hands. Similarly, if the current driving state is an abnormal driving state corresponding to the fatigue driving state, executing an early warning operation corresponding to the fatigue driving state, where the early warning operation may be playing of preset voice content for reminding a driver to stop a vehicle for a rest. In a word, accurate and effective early warning operation can be realized by executing the early warning operation corresponding to the abnormal driving state, and the driving risk is reduced.
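As a minimal sketch, the state-to-warning correspondence described above can be modeled as a lookup table; the state names and message texts below are illustrative assumptions paraphrasing the examples in the text, not fixed by the patent:

```python
# Illustrative mapping from detected abnormal driving state to a voice
# prompt, following the examples above. State keys and wording are
# assumptions for demonstration only.
WARNING_MESSAGES = {
    "distracted": "Please drive attentively and do not get distracted.",
    "hands_off_wheel": "Please keep both hands on the steering wheel.",
    "fatigue": "You appear fatigued; please stop the vehicle and rest.",
}

def early_warning(state):
    """Return the prompt for the detected abnormal state, or None if
    the state has no corresponding warning (i.e., is not abnormal)."""
    return WARNING_MESSAGES.get(state)
```

Keeping the mapping as data rather than branching logic makes it easy to add further abnormal states (e.g., abnormal variable-speed driving) without touching the detection code.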
In the driving behavior monitoring method, the detection operation is executed on the collected video information in the vehicle to determine whether the current driving state of the driver is an abnormal driving state, and when the current driving state is the abnormal driving state, the early warning operation corresponding to the abnormal driving state is executed to realize the omnibearing driving detection including the distraction driving detection and the fatigue driving detection, so that the omnibearing monitoring on the two abnormal driving states of the fatigue driving state and the distraction driving state can be realized, the targeted high-efficiency early warning is realized by executing the early warning operation corresponding to the abnormal driving state, and the driving risk is reduced.
In one embodiment, as shown in fig. 3, whether the current driving state of the driver is an abnormal driving state is determined by performing distraction driving detection and fatigue driving detection on the collected video information in the vehicle and performing abnormal variable speed driving detection on the acquired running state information of the vehicle, and when the current driving state is the abnormal driving state, performing an early warning operation corresponding to the abnormal driving state to achieve omnibearing driving detection including distraction driving detection, fatigue driving detection and abnormal variable speed driving detection and a corresponding early warning operation. The driving behavior monitoring method is described by taking the vehicle-mounted terminal applied to fig. 1 as an example, and may include the following steps:
s301, performing distraction driving detection and fatigue driving detection on the collected video information in the vehicle, performing abnormal variable speed driving detection on the acquired running state information of the vehicle, and determining whether the current driving state of the driver is an abnormal driving state.
The abnormal speed change driving can include abnormal speed change conditions such as overspeed, sharp acceleration, sharp turning, sharp braking and the like of the vehicle caused by operation problems of a driver or vehicle problems during driving. The abnormal speed change condition may be determined by obtaining information of the operating state of the vehicle, for example, when the brake pedal of the vehicle is at the braking height for a time longer than a preset braking time, the vehicle may have an abnormal speed change condition with sudden braking. And when the abnormal speed change driving detection is carried out on the acquired running state information of the vehicle and the abnormal speed change of the vehicle is detected, determining that the current driving state of the driver is an abnormal driving state of the abnormal speed change driving state.
It should be noted that the three detections, namely distracted driving detection, fatigue driving detection and abnormal variable speed driving detection, may affect each other: when a driver is in a fatigue driving state, low driving attention caused by fatigue leads to distraction, and the driver may fail to operate the accelerator and gear lever properly so that the vehicle speed changes suddenly; when a driver is in a distracted driving state, actions such as smoking or making a call occupy the hands, block the line of sight and distract the driver, or the hands leave the operating lever or steering wheel so that the vehicle is no longer under control, easily causing sudden speed changes; when the driver is in the abnormal variable speed driving state, the speed change is generally also caused by fatigue driving or distracted driving. Because the three are mutually linked and influenced, any detected abnormality can indicate that the driver is in an abnormal driving state; therefore, in this embodiment, multiple detection modes are adopted to detect the driving behavior of the driver both directly and indirectly, so that the driving behavior characteristics of the driver can be obtained from all directions. The behavior characteristics are thus richer than any single simple characteristic expression, the comprehensiveness and accuracy of driving monitoring are improved, and the driving risk is reduced.
Optionally, referring to fig. 4, performing distracted driving detection on the collected video information in the vehicle in S301 may include:
s401, obtaining each image frame of video information in a first preset time period, processing each image frame through a preset elliptic contour detection algorithm, and obtaining each pixel point of a steering wheel area in each image frame.
The first preset time period may generally be the period from a time point that is a first preset duration before the current time point up to the current time point, and its length may be set according to actual conditions; for example, it may be the period within the last 10 or 5 seconds. If the frame rate of the video is 12 fps (frames per second) and the length of the first preset time period is 5 seconds, the number of acquired image frames is 60; for each of these 60 image frames, a preset elliptical contour detection algorithm may be used to identify the steering wheel area and thereby obtain each pixel point of that area, specifically the coordinate value of each pixel point. The elliptical contour algorithm may adopt the contour-based ellipse detection in OpenCV (Open Source Computer Vision Library), an ellipse detection algorithm based on the Hough transform, an ellipse detection algorithm based on the randomized Hough transform, and the like.
It can be understood that the outline of the steering wheel is generally circular, but the outline of each steering wheel in the image frame is generally elliptical due to the relation of the shooting angles, so that each external feature point forming the outer layer elliptical outline of the steering wheel and each internal feature point forming the inner layer elliptical outline of the steering wheel in the image can be identified through a preset elliptical outline detection algorithm, a steering wheel region formed by the ellipse corresponding to each external feature point and the ellipse corresponding to each internal feature point in an enclosing manner is further obtained, and each pixel point of the steering wheel region is further obtained.
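A minimal sketch of extracting the annular region between the two fitted ellipses is shown below. It assumes the ellipse parameters have already been estimated (e.g., by a fitter such as OpenCV's `fitEllipse`) and, as a simplification not stated in the patent, that both ellipses share one center with axis-aligned semi-axes:

```python
import numpy as np

def wheel_region_mask(shape, center, outer_axes, inner_axes):
    """Boolean mask of pixels lying inside the outer ellipse but
    outside the inner ellipse: the annular steering wheel region."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = center

    def inside(axes):
        a, b = axes  # semi-axes along x and y
        return ((xs - cx) / a) ** 2 + ((ys - cy) / b) ** 2 <= 1.0

    return inside(outer_axes) & ~inside(inner_axes)
```

The nonzero entries of the mask give the pixel points of the steering wheel region whose pixel values are then fed to the skin color model in S402.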
S402, calculating the characteristic probability that each pixel point of the steering wheel area in each image frame belongs to the skin color area according to a preset Gaussian skin color model and the pixel value of each pixel point of the steering wheel area.
In this embodiment, the vehicle-mounted terminal may obtain the pixel value of each pixel point of the identified steering wheel region; specifically, the pixel value corresponding to each coordinate value may be obtained according to the coordinate values of the pixel points of the steering wheel region. Illustratively, when each image frame is a color image, the pixel value of each pixel point can be represented in the YCbCr color space; ignoring the influence of the luminance component Y, it can be represented as (Cb, Cr)^T, where Cb is the blue chrominance component and Cr is the red chrominance component. Correspondingly, in this embodiment, a large number of skin color pixel points in pre-acquired color images of the driver's hands may be used as training samples to establish a corresponding Gaussian skin color model G = (m, C), based on the fact that the distribution of the chrominance components Cb and Cr of skin color pixel values follows a two-dimensional Gaussian distribution, where:
m = (1/n) Σ_{i=1..n} x_i

C = (1/n) Σ_{i=1..n} (x_i − m)(x_i − m)^T
where x_i = (Cb_i, Cr_i)^T is the pixel value of skin color pixel point i in the training sample, n is the number of skin color pixel points in the training sample, m is the mean vector of the pixel values of the skin color pixel points in the training sample, and C is the covariance matrix of those pixel values. Thus, when the pixel value of a pixel point in the steering wheel region of an image frame is x, the probability that the pixel point belongs to the skin color region is:

p(x) = exp(−0.5 (x − m)^T C^(-1) (x − m))
Illustratively, when each image frame is a grayscale image, a large number of skin color pixel points in pre-acquired grayscale images of the driver's hands can be used as training samples to establish a corresponding Gaussian skin color model, based on the fact that the distribution of the gray values of skin color pixel points follows a one-dimensional Gaussian distribution. Similarly, in this embodiment a color space such as RGB may also be used; however, compared with other color spaces, the YCbCr color space is less susceptible to interference from illumination and other objects when performing skin color recognition and has higher accuracy.
The feature probability that the pixel points of the steering wheel region in an image frame belong to the skin color region may be the average value, maximum value, minimum value, or another statistical value of the probability values with which the individual pixel points of the steering wheel region in that image frame belong to the skin color region; this embodiment does not limit this. In addition, other skin color models, such as an elliptical skin color model, may also be adopted in this embodiment; compared with the Gaussian skin color model, the elliptical skin color model focuses more on detecting skin color by contour, whereas the Gaussian skin color model is more specific in color and better suited to detecting skin color by color.
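The Gaussian skin color model of step S402 can be sketched as follows: estimate m and C from (Cb, Cr) training samples, then score a pixel with p(x) = exp(−0.5 (x − m)^T C^(-1) (x − m)). The tiny training set below is illustrative, not real driver data.

```python
import math

def fit_gaussian_skin_model(samples):
    """Estimate mean vector m and 2x2 covariance matrix C from (Cb, Cr) pairs."""
    n = len(samples)
    m = [sum(s[k] for s in samples) / n for k in (0, 1)]
    C = [[sum((s[i] - m[i]) * (s[j] - m[j]) for s in samples) / n
          for j in (0, 1)] for i in (0, 1)]
    return m, C

def skin_probability(x, m, C):
    """p(x) = exp(-0.5 (x - m)^T C^-1 (x - m)), using the closed-form 2x2 inverse."""
    det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
    inv = [[C[1][1] / det, -C[0][1] / det],
           [-C[1][0] / det, C[0][0] / det]]
    d = [x[0] - m[0], x[1] - m[1]]
    quad = (d[0] * (inv[0][0] * d[0] + inv[0][1] * d[1])
            + d[1] * (inv[1][0] * d[0] + inv[1][1] * d[1]))
    return math.exp(-0.5 * quad)

# Hypothetical (Cb, Cr) values sampled from images of the driver's hands.
train = [(110, 150), (112, 152), (108, 149), (111, 153), (109, 151)]
m, C = fit_gaussian_skin_model(train)
p_skin = skin_probability((110, 151), m, C)  # chrominance near the skin mean
p_far = skin_probability((60, 220), m, C)    # chrominance far from skin tones
```

A pixel at the training mean scores 1.0 and the score decays toward 0 with Mahalanobis distance, which is what makes a per-frame statistic of these scores usable as the feature probability.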
And S403, identifying whether the current driving state is a distracted driving state or not according to the feature probability corresponding to each image frame and the preset probability threshold value.
For example, in this embodiment, the feature probability corresponding to each image frame may be compared with a preset probability threshold, where the preset probability threshold may be the average of the feature probabilities corresponding to image frames captured while the driver manipulates the steering wheel with both hands. If the feature probability corresponding to at least one image frame of the video information is smaller than the preset probability threshold, it means that at the moment corresponding to that image frame the driver's hands may have left the steering wheel, that is, the driving state at that moment is a distracted driving state, and the current driving state is therefore considered a distracted driving state.
Specifically, S403 may include: calculating the difference between the feature probabilities corresponding to any two consecutive image frames; and, if the difference is greater than a preset probability threshold, determining that the current driving state is a distracted driving state. For example, in this embodiment the preset probability threshold may be the standard deviation of the feature probabilities corresponding to image frames captured while the driver manipulates the steering wheel with both hands; the difference between the feature probabilities of any two consecutive image frames of the video information may then be calculated. If the difference corresponding to at least one pair of consecutive image frames is greater than the preset probability threshold, it means that the contact state between the driver's hands and the steering wheel changed greatly within the time spanned by those two frames, implying that the driver's hands may have left the steering wheel, and the current driving state is then considered a distracted driving state.
Similarly, in this embodiment, when the preset probability threshold is the difference between the maximum and minimum feature probabilities corresponding to image frames captured while the driver manipulates the steering wheel with both hands, the difference between the maximum and minimum feature probabilities corresponding to the image frames of the video information may also be calculated and used as a feature difference representing the maximum variation range of the feature probability within the first preset time period. Clearly, if the difference between this feature difference and the preset probability threshold is smaller than a preset difference, it means that the contact state between the driver's hands and the steering wheel changed greatly within the first preset time period, implying that the driver's hands may have left the steering wheel, and the current driving state is considered a distracted driving state.
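A minimal sketch of the consecutive-frame variant of the decision in S403: flag distracted driving when the feature probability jumps between adjacent frames by more than the preset probability threshold. The threshold value and the per-frame probabilities below are hypothetical.

```python
def is_distracted(frame_probs, prob_threshold):
    """True if any two consecutive frames' feature probabilities differ by more
    than the preset probability threshold."""
    return any(abs(b - a) > prob_threshold
               for a, b in zip(frame_probs, frame_probs[1:]))

# Hypothetical per-frame feature probabilities: a steady two-hand grip, then
# a sequence with a sudden drop when a hand leaves the steering wheel.
steady = [0.82, 0.80, 0.81, 0.79, 0.80]
hand_off = [0.82, 0.80, 0.35, 0.33, 0.34]
```

With a hypothetical threshold of 0.2, the steady sequence is not flagged, while the 0.80 → 0.35 drop in the second sequence is.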
In addition, in this embodiment the vehicle-mounted terminal may further acquire the steering information and gear information of the vehicle and correct the result of the distraction detection according to them, so as to avoid detecting a normal driving state as a distracted driving state. For example, when the steering of the vehicle changes, it is generally because the driver is operating the steering wheel, and such a case should not be treated as the distracted driving state in which a hand leaves the steering wheel. Likewise, when the gear of the vehicle changes, one of the driver's hands may have left the steering wheel to operate the gear lever, so the contact state between the driver's hands and the steering wheel changes greatly; this case should not be treated as a distracted driving state either.
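The correction described above amounts to a simple veto: a time window in which the steering angle or the gear changed is not reported as distracted driving, since the driver's hand movement is then expected. The function and flag names below are hypothetical.

```python
def corrected_distraction(distracted, steering_changed, gear_changed):
    """Suppress a distraction verdict during steering or gear changes,
    so normal manoeuvres are not reported as distracted driving."""
    return distracted and not (steering_changed or gear_changed)
```

For example, a hand-off-wheel detection that coincides with a gear change is suppressed, while the same detection with no steering or gear activity is reported.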
Optionally, referring to fig. 5, performing distracted driving detection on the collected video information in the vehicle in S301 may include:
S501, obtaining each image frame of the video information within a first preset time period, processing each image frame through a preset face recognition technology, and obtaining each pixel point of the mouth region in each image frame.
It can be understood that when the driver smokes, the cigarette end is located in the mouth region, and because the flaring and dimming of the burning cigarette end causes the pixel values of the mouth region to change abruptly and frequently, whether the driver is in a distracted driving state caused by smoking can be judged according to this characteristic. Therefore, in this embodiment, the mouth region of the face region in each image frame may be identified by a preset face recognition technology, and each pixel point of the mouth region may then be obtained.
Specifically, the face recognition technology may be based on a multi-task network (multi-task learning network), which can locate the face and the facial feature points simultaneously, is fast, performs well on the same hardware, and can run on mobile devices. Into a multi-task network trained on a large number of training samples containing face and non-face images, an image containing the driver's face can be input, and the face marked by a bounding box and the feature points marked by dots are output. The multi-task face detection network based on deep learning can perform face detection and facial feature point detection simultaneously by cascading three CNNs (convolutional neural networks): P-Net, R-Net, and O-Net. P-Net is a fully convolutional network that generates candidate boxes and bounding box regression vectors, calibrates the candidate boxes by bounding box regression, and merges overlapping candidate boxes by non-maximum suppression (NMS); R-Net refines the candidate boxes: the candidates passing through P-Net are input into R-Net, most false candidates are rejected, and bounding box regression and NMS merging continue to be applied; O-Net outputs the final results (the positions of the face box and the feature points).
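The NMS step that P-Net uses to merge overlapping candidate boxes can be sketched as a greedy procedure: keep the highest-scoring box, drop every remaining box whose intersection-over-union (IoU) with a kept box exceeds a threshold, and repeat. The candidate boxes and the 0.5 IoU threshold below are hypothetical.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, iou_threshold=0.5):
    """Greedy NMS over (x1, y1, x2, y2, score) candidate boxes."""
    kept = []
    for box in sorted(boxes, key=lambda b: b[4], reverse=True):
        if all(iou(box[:4], k[:4]) <= iou_threshold for k in kept):
            kept.append(box)
    return kept

candidates = [(10, 10, 50, 50, 0.9),      # best face candidate
              (12, 12, 52, 52, 0.8),      # heavy overlap with the first
              (100, 100, 140, 140, 0.7)]  # a separate candidate elsewhere
merged = nms(candidates)
```

The second candidate overlaps the first too much and is suppressed, leaving one box per face, which is what lets the cascade pass a small, clean candidate set on to R-Net.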
S502, calculating the characteristic pixel value of each pixel point of the mouth area in each image frame.
In this embodiment, the characteristic pixel value of the pixel points of the mouth region in an image frame may be the average value, maximum value, minimum value, or another statistical value of the pixel values of those pixel points. Optionally, when the pixel values are RGB values, the characteristic pixel value may be the maximum R value among the pixel values of the mouth region in the image frame, since when the cigarette end burns brightly the R value of the pixels corresponding to the cigarette end is the maximum R value in the mouth region.
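A sketch of step S502 under the optional RGB variant described above: the characteristic pixel value of a frame is the maximum R component over the mouth region, which spikes when a brightly burning cigarette end is present. The pixel data below is hypothetical.

```python
def characteristic_pixel_value(mouth_pixels):
    """Maximum R component over the mouth region's (R, G, B) pixel values."""
    return max(r for r, g, b in mouth_pixels)

# Hypothetical mouth-region pixels: ordinary lip/skin tones, then the same
# region with one very red pixel from a burning cigarette end.
mouth_no_smoke = [(140, 90, 80), (150, 95, 85), (145, 92, 82)]
mouth_smoking = [(140, 90, 80), (250, 60, 40), (145, 92, 82)]
```

Comparing this per-frame statistic across frames, as S503 does, exposes the abrupt jumps caused by the cigarette end flaring.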
And S503, identifying whether the current driving state is a distracted driving state or not according to the characteristic pixel value corresponding to each image frame and a preset characteristic pixel threshold value.
For example, in this embodiment the characteristic pixel value corresponding to each image frame may be compared with a preset characteristic pixel threshold, where the preset characteristic pixel threshold may be the average of the characteristic pixel values corresponding to image frames captured while the driver smokes. If, for at least one image frame of the video information, the difference between its characteristic pixel value and the preset characteristic pixel threshold is smaller than a preset difference, it means that the driver may be smoking at the moment corresponding to that image frame, that is, the driving state at that moment is a distracted driving state, and the current driving state is considered a distracted driving state.
Similarly, in this embodiment, when the preset characteristic pixel threshold is the difference between the maximum and minimum characteristic pixel values corresponding to image frames captured while the driver smokes, the difference between the maximum and minimum characteristic pixel values corresponding to the image frames of the video information may also be calculated and used as a feature difference representing the maximum variation range of the characteristic pixel value within the first preset time period. Clearly, if the difference between this feature difference and the preset characteristic pixel threshold is smaller than a preset difference, it means that the driver is smoking within the first preset time period, and the current driving state is considered a distracted driving state.
Alternatively, as shown in fig. 6, performing fatigue driving detection on the collected video information in the vehicle in S301 may include:
S601, acquiring each image frame of the video information within a first preset time period, processing each image frame through a preset face recognition technology, and calculating the closing frequency of the facial feature points.
In this embodiment, the closing frequency of the face feature points can be calculated by performing face recognition and face feature point recognition on each image frame through the face recognition technology based on the multi-task network, and then recognizing and counting the closed state or the open state of the face feature points in each image frame.
Specifically, if the facial feature point is an eye, calculating the closing frequency of the facial feature point includes: counting the number of blinks within the first preset time period according to the eye state in each image frame, where the eye state includes an open state and a closed state; and calculating the blink frequency of the eyes from the first preset time period and the number of blinks. For example, the eye state may be determined by comparing the characteristic pixel value of the pixel points of the eye region in each image frame with a preset skin color pixel value: when the difference between them is small, the pixel points of the eye region are biased toward skin color, and the eye state in that frame is the closed state; conversely, when the difference is large, the pixel points of the eye region deviate from skin color, and the eye state in that frame is the open state. It can be understood that when the eye state in one image frame is the open state and the eye state in the next image frame is the closed state, the eyes have blinked once; the number of blinks within the first preset time period can thus be obtained by counting such transitions, and dividing it by the length of the first preset time period gives the blink frequency.
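The blink counting just described can be sketched as follows: classify each frame's eye state as open or closed, count open-to-closed transitions as blinks, and divide by the window length to get the blink frequency. The per-frame labels and the 5-second window are hypothetical.

```python
def blink_frequency(eye_states, window_seconds):
    """Blinks per second: eye_states is a per-frame list of 'open'/'closed'
    labels over the first preset time period."""
    blinks = sum(1 for prev, cur in zip(eye_states, eye_states[1:])
                 if prev == 'open' and cur == 'closed')
    return blinks / window_seconds

# 10 hypothetical frames over a 5-second window, containing two
# open -> closed transitions (two blinks).
states = ['open', 'open', 'closed', 'open', 'open',
          'open', 'closed', 'closed', 'open', 'open']
freq = blink_frequency(states, 5.0)
```

The same transition count applied to mouth open/closed labels gives the mouth closing frequency used for yawn detection in S602.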
Specifically, if the face feature point is a mouth, calculating a closing frequency of the face feature point, including: counting the mouth closing times within a first preset time period according to the mouth state in each image frame; the mouth state comprises an open state and a closed state; and calculating the closing frequency of the mouth according to the first preset time period and the mouth closing times. It will be appreciated that the manner of calculating the closing frequency of the mouth and the blink frequency is substantially the same, and will not be described in detail herein.
And S602, if the closing frequency of the face characteristic points is greater than a preset frequency threshold, determining that the current driving state is a fatigue driving state.
For example, when the blinking frequency is greater than the preset frequency threshold, it means that the driver blinks too frequently, which may be caused by fatigue, and thus it is determined that the current driving state is a fatigue driving state; when the mouth closing frequency is greater than the preset frequency threshold, it means that the driver closes the mouth too frequently, possibly because of yawning due to fatigue, and thus it is determined that the current driving state is a fatigue driving state. It should be noted that the preset frequency thresholds corresponding to different facial feature points are different, for example, the preset frequency threshold corresponding to the mouth is smaller than the preset frequency threshold corresponding to the eyes.
Alternatively, referring to fig. 7, performing abnormal variable speed driving detection on the acquired running state information of the vehicle in S301 includes:
s701, acquiring the speed information of the vehicle and the angle information of the steering wheel in a second preset time period.
The second preset time period is similar to the first preset time period, but their lengths differ. Generally, the second preset time period may be set shorter, for example 2 seconds, because the instantaneous changes of the vehicle speed and the steering wheel angle within it may be large. For example, the speed information may include the vehicle speed at each time within the second preset time period, and the angle information of the steering wheel may include the steering wheel angle at each time within the second preset time period.
S702, acquiring the characteristic speed and the characteristic acceleration of the vehicle in a second preset time period according to the speed information of the vehicle; and acquiring the characteristic angular acceleration of the vehicle within a second preset time period according to the angle information of the steering wheel.
For example, the characteristic speed may be a maximum value of the speed of the vehicle corresponding to each time within the second preset time period; in addition, the acceleration of the vehicle corresponding to each time may be calculated based on the speed of the vehicle corresponding to each time within the second preset time period, and the characteristic acceleration may be a maximum value of the acceleration of the vehicle corresponding to each time within the second preset time period; the angular acceleration of the steering wheel corresponding to each time may be calculated according to the angle of the steering wheel corresponding to each time in the second preset time period, and the characteristic angular acceleration may be a maximum value of the angular acceleration of the steering wheel corresponding to each time in the second preset time period.
And S703, identifying whether the current driving state is the abnormal variable speed driving state or not according to the characteristic speed and the preset characteristic speed threshold, the characteristic acceleration and the preset characteristic acceleration threshold, and the characteristic angular acceleration and the preset angular acceleration threshold.
If the characteristic speed is greater than the preset speed threshold, the vehicle may be speeding; if the characteristic acceleration is greater than the preset acceleration threshold, the vehicle may be accelerating or braking suddenly; if the characteristic angular acceleration is greater than the preset angular acceleration threshold, the vehicle may be turning sharply. In any of these cases, the vehicle-mounted terminal can determine that the current driving state is the abnormal variable speed driving state.
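Steps S702 and S703 can be sketched together: derive the characteristic speed, acceleration, and steering wheel angular acceleration from values sampled in the second preset time period (here by finite differences), then compare each against its preset threshold. The sample data, sampling interval, and threshold values below are all hypothetical.

```python
def characteristic_features(speeds, angles, dt):
    """Characteristic speed, |acceleration| and |angular acceleration| from
    per-sample speeds and steering wheel angles, sampled every dt seconds."""
    accels = [(b - a) / dt for a, b in zip(speeds, speeds[1:])]
    ang_vels = [(b - a) / dt for a, b in zip(angles, angles[1:])]
    ang_accels = [(b - a) / dt for a, b in zip(ang_vels, ang_vels[1:])]
    return (max(speeds),
            max(abs(a) for a in accels),
            max(abs(a) for a in ang_accels))

def is_abnormal_shift(speeds, angles, dt, v_max, a_max, alpha_max):
    """True if any characteristic value exceeds its preset threshold."""
    v, a, alpha = characteristic_features(speeds, angles, dt)
    return v > v_max or a > a_max or alpha > alpha_max

# Hypothetical speeds (m/s) and steering angles (degrees), every 0.5 s for 2 s.
smooth = is_abnormal_shift([20, 20.5, 21, 21.2, 21.5],
                           [0, 1, 2, 3, 4], 0.5,
                           v_max=33, a_max=4, alpha_max=50)
harsh = is_abnormal_shift([20, 24, 29, 33, 36],
                          [0, 1, 2, 3, 4], 0.5,
                          v_max=33, a_max=4, alpha_max=50)
```

The gentle run stays under all three thresholds, while the second run both exceeds the speed threshold and shows sudden acceleration, so it is flagged as the abnormal variable speed driving state.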
And S302, if the current driving state is the abnormal driving state, executing early warning operation corresponding to the abnormal driving state.
For example, if the current driving state is the abnormal variable speed driving state, an early warning operation corresponding to the abnormal variable speed driving state is performed; the early warning operation may be playing a preset voice prompt reminding the driver to keep the vehicle running smoothly.
Referring to fig. 8, in this embodiment, a video image in a vehicle may be captured by a camera, image preprocessing may be performed by graying, image filtering, histogram equalization, and the like, steering wheel positioning may be performed by an elliptical contour detection algorithm, and face positioning and eye and mouth positioning may be performed by a multi-task network; finally, whether the driver is in a distracted driving state corresponding to the situation that the hands are separated from the steering wheel or not can be judged by comparing whether the probability that each pixel point of the steering wheel area between the continuous frames belongs to the skin color area is obviously changed or not; whether the driver is in a distracted driving state corresponding to smoking or not can be judged by comparing whether the pixel value of each pixel point in the mouth area between the continuous frames is obviously changed or not; and whether the driver is in a fatigue driving state can be judged by comparing whether the blink frequency and/or the mouth closing frequency between the continuous frames are obviously changed; and whether the driver is in the abnormal speed change driving state can be judged by comparing the vehicle speed, the vehicle acceleration, the angular acceleration of the steering wheel and the respective corresponding threshold values.
In the driving behavior monitoring method, distracted driving detection and fatigue driving detection are performed on the collected in-vehicle video information, and abnormal variable speed driving detection is performed on the acquired running state information of the vehicle, to determine whether the current driving state of the driver is an abnormal driving state; when it is, an early warning operation corresponding to that abnormal driving state is executed. This realizes all-round driving detection covering distracted driving detection, fatigue driving detection, and abnormal variable speed driving detection, together with the corresponding early warning operations. Comprehensive monitoring of the three abnormal driving states, namely the fatigue driving state, the distracted driving state, and the abnormal variable speed driving state, can therefore be achieved, and executing the early warning operation corresponding to the detected abnormal driving state provides targeted, efficient early warning that reduces driving risk.
It should be understood that although the various steps in the flow charts of fig. 2-7 are shown in order as indicated by the arrows, the steps are not necessarily performed in order as indicated by the arrows. The steps are not performed in the exact order shown and described, and may be performed in other orders, unless explicitly stated otherwise. Moreover, at least some of the steps in fig. 2-7 may include multiple sub-steps or multiple stages that are not necessarily performed at the same time, but may be performed at different times, and the order of performance of the sub-steps or stages is not necessarily sequential, but may be performed in turn or alternating with other steps or at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 9, there is provided a driving behavior monitoring device 90 comprising: a first detection module 91 and an early warning module 92, wherein:
the first detection module 91 is configured to perform a detection operation on the collected video information in the vehicle, and determine whether the current driving state of the driver is an abnormal driving state; wherein the detection operation comprises distracted driving detection and fatigue driving detection;
and the early warning module 92 is configured to execute an early warning operation corresponding to the abnormal driving state if the current driving state is the abnormal driving state.
Optionally, as shown in fig. 10, on the basis of fig. 9, the detecting operation further includes abnormal variable speed driving detection, and the device further includes: a second detection module 93, configured to perform abnormal variable speed driving detection on the acquired running state information of the vehicle.
Alternatively, as shown in fig. 10, the first detection module 91 may include:
the steering wheel recognition unit 911 is configured to acquire each image frame of the video information within a first preset time period, process each image frame through a preset elliptical contour detection algorithm, and acquire each pixel point of a steering wheel region in each image frame;
a feature probability calculation unit 912, configured to calculate, according to a preset gaussian skin color model and a pixel value of each pixel point of the steering wheel region, a feature probability that each pixel point of the steering wheel region in each image frame belongs to the skin color region;
the first distraction driving unit 913 is configured to identify whether the current driving state is a distraction driving state according to the feature probability corresponding to each image frame and a preset probability threshold.
Optionally, the first distracted driving unit 913 is configured to calculate a difference between feature probabilities corresponding to any two consecutive image frames in each image frame; and if the difference value of the characteristic probabilities is larger than a preset probability threshold value, determining that the current driving state is a distracted driving state.
Alternatively, as shown in fig. 10, the first detection module 91 may include:
the mouth region identification unit 914 is used for acquiring each image frame of the video information in a first preset time period, processing each image frame through a preset face identification technology and acquiring each pixel point of a mouth region in each image frame;
a characteristic pixel value calculating unit 915, configured to calculate a characteristic pixel value of each pixel point in the mouth region in each image frame;
the second distraction driving unit 916 is configured to identify whether the current driving state is a distraction driving state according to the feature pixel value corresponding to each image frame and a preset feature pixel threshold.
Alternatively, as shown in fig. 10, the first detection module 91 may include:
a closing probability calculation unit 917 configured to obtain each image frame of the video information in a first preset time period, process each image frame by using a preset face recognition technology, and calculate a closing frequency of the face feature point;
a fatigue driving unit 918, configured to determine that the current driving state is a fatigue driving state if the closing frequency of the face feature point is greater than a preset frequency threshold.
Optionally, if the face feature point is an eye, the closing probability calculation unit 917 is configured to count the number of blinks within a first preset time period according to the eye state in each image frame; the human eye state comprises an open state and a closed state; and calculating the blinking frequency of the eyes according to the first preset time period and the blinking times.
Optionally, if the facial feature point is a mouth, the closing probability calculation unit 917 is configured to count the number of times of mouth closing within a first preset time period according to a mouth state in each image frame; the mouth state comprises an open state and a closed state; and calculating the closing frequency of the mouth according to the first preset time period and the mouth closing times.
Alternatively, as shown in fig. 10, the second detection module 93 may include:
an operation state acquisition unit 931 configured to acquire speed information of the vehicle and angle information of the steering wheel in a second preset time period;
the running state calculating unit 932 is configured to obtain a characteristic speed and a characteristic acceleration of the vehicle within a second preset time period according to the speed information of the vehicle; acquiring the characteristic angular acceleration of the vehicle within a second preset time period according to the angle information of the steering wheel;
and an abnormal variable speed driving unit 933, configured to identify whether the current driving state is an abnormal variable speed driving state according to the characteristic speed and a preset characteristic speed threshold, the characteristic acceleration and a preset characteristic acceleration threshold, and the characteristic angular acceleration and a preset angular acceleration threshold.
In the driving behavior monitoring device, a detection operation is performed on the collected in-vehicle video information to determine whether the current driving state of the driver is an abnormal driving state, and when it is, an early warning operation corresponding to that abnormal driving state is executed. This realizes all-round driving detection covering distracted driving detection and fatigue driving detection, so comprehensive monitoring of the two abnormal driving states, namely the fatigue driving state and the distracted driving state, can be achieved, and executing the early warning operation corresponding to the detected abnormal driving state provides targeted, efficient early warning that reduces driving risk.
For specific limitations of the driving behavior monitoring device, reference may be made to the above limitations of the driving behavior monitoring method, which are not described herein again. The various modules in the driving behavior monitoring device described above may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, referring to fig. 11, there is provided a driving behavior monitoring system comprising: the video information acquisition device, the speed acquisition device, the angle acquisition device, a memory and a processor, wherein the memory stores a computer program, and the processor realizes the following steps when executing the computer program:
executing detection operation on the collected video information in the vehicle, and determining whether the current driving state of the driver is an abnormal driving state; wherein the detection operation comprises distracted driving detection and fatigue driving detection;
and if the current driving state is the abnormal driving state, executing early warning operation corresponding to the abnormal driving state.
The video information acquisition device may be a vehicle-mounted camera installed at the front right of the vehicle interior and mounted at an angle, aimed mainly at the driver's face; the horizontal and vertical viewing angles of the camera can be fine-tuned according to the installation angle. Optionally, the camera may use a 6-layer all-glass lens, with a horizontal field of view (HFOV) of 60° and a vertical field of view (VFOV) of 45°; the specific installation angle may depend on the vehicle model, so as to capture all behaviors of the driver in the vehicle, and the photosensitive chip may be the CMOS digital image sensor AR0132AT. The video information acquisition device has high integration, low power consumption, and low cost, and its 6-layer all-glass lens adapts well to the changing lighting conditions inside the vehicle.
The speed acquisition device and the angle acquisition device are a vehicle speed sensor and a steering wheel angle sensor, which sense changes in vehicle speed and in steering wheel angle respectively. The vehicle speed sensor may use a Hall sensor to measure the wheel rotation speed, from which the vehicle speed is derived. The processor is a main control chip; the signal outputs of the angle sensor, the vehicle speed sensor, and the video information acquisition device can be connected to the signal input of the main control chip, and the main control chip can be bidirectionally connected to a communication circuit. The main control chip, the angle sensor, the vehicle speed sensor, the video information acquisition device, and the communication circuit are all powered by a power supply circuit. This speed detection arrangement has a simple structure, high reliability, and low sensitivity to environmental factors.
In one embodiment, the detection operation further comprises abnormal variable speed driving detection, and the processor, when executing the computer program, further implements the following step: performing the abnormal variable speed driving detection on the acquired running state information of the vehicle.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring each image frame of the video information within a first preset time period, and processing each image frame through a preset elliptic contour detection algorithm to obtain the pixel points of the steering wheel area in each image frame; calculating, according to a preset Gaussian skin color model and the pixel values of the pixel points of the steering wheel area, the characteristic probability that the pixel points of the steering wheel area in each image frame belong to a skin color area; and identifying whether the current driving state is a distracted driving state according to the characteristic probability corresponding to each image frame and a preset probability threshold.
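As a concrete illustration of the Gaussian skin color step, the sketch below assumes a single Gaussian over (Cr, Cb) chroma values with hypothetical mean and covariance; the patent fixes neither the color space, the model parameters, nor the per-frame statistic, so the "characteristic probability" is taken here as the mean per-pixel likelihood.

```python
import numpy as np

# Hypothetical skin-color parameters in the (Cr, Cb) chroma plane; a real
# system would fit these from labelled skin samples.
SKIN_MEAN = np.array([150.0, 115.0])
SKIN_COV = np.array([[60.0, 10.0],
                     [10.0, 40.0]])
SKIN_COV_INV = np.linalg.inv(SKIN_COV)

def skin_probability(crcb_pixels):
    """Per-pixel likelihood in (0, 1] that a pixel is skin-colored.

    crcb_pixels: (N, 2) array of (Cr, Cb) values for the steering wheel area.
    Uses the unnormalised Gaussian density exp(-0.5 * d^2), where d is the
    Mahalanobis distance to the skin-color mean.
    """
    diff = crcb_pixels - SKIN_MEAN
    # einsum evaluates the quadratic form per pixel in one pass
    d2 = np.einsum('ni,ij,nj->n', diff, SKIN_COV_INV, diff)
    return np.exp(-0.5 * d2)

def frame_characteristic_probability(crcb_pixels):
    """One plausible per-frame statistic of the pixel likelihoods: the mean."""
    return float(skin_probability(crcb_pixels).mean())
```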
In one embodiment, the processor, when executing the computer program, further performs the steps of: calculating the difference between the characteristic probabilities corresponding to any two consecutive image frames; and if the difference is greater than a preset probability threshold, determining that the current driving state is a distracted driving state.
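The consecutive-frame comparison above can be sketched as follows; the threshold value passed in is illustrative, not one given in the patent.

```python
def is_distracted(frame_probs, prob_threshold):
    """Flag a distracted driving state when the characteristic probability
    jumps between any two consecutive frames by more than the threshold
    (e.g. a hand leaving the steering wheel abruptly changes the fraction
    of skin-colored pixels in the wheel region)."""
    return any(abs(later - earlier) > prob_threshold
               for earlier, later in zip(frame_probs, frame_probs[1:]))
```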
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring each image frame of the video information within a first preset time period, and processing each image frame through a preset face recognition technology to obtain the pixel points of the mouth area in each image frame; calculating the characteristic pixel value of the pixel points of the mouth area in each image frame; and identifying whether the current driving state is a distracted driving state according to the characteristic pixel value corresponding to each image frame and a preset characteristic pixel threshold.
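A minimal sketch of the mouth-area check, assuming the "characteristic pixel value" is the mean grey level of the region; the patent leaves the exact statistic open, so this is one plausible reading.

```python
import numpy as np

def mouth_characteristic_pixel(gray_mouth_region):
    """Characteristic pixel value for one frame's mouth area: the mean grey
    level. A cigarette or a phone held to the mouth shifts this statistic
    away from the driver's skin baseline."""
    return float(np.mean(gray_mouth_region))

def is_mouth_distracted(frame_regions, pixel_threshold):
    """Flag distraction if any frame's mouth-area statistic exceeds the
    preset characteristic pixel threshold."""
    return any(mouth_characteristic_pixel(region) > pixel_threshold
               for region in frame_regions)
```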
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring each image frame of the video information in a first preset time period, processing each image frame through a preset face recognition technology, and calculating the closing frequency of the face characteristic points; and if the closing frequency of the face characteristic points is greater than a preset frequency threshold, determining that the current driving state is a fatigue driving state.
In one embodiment, if the face feature point is an eye, the processor when executing the computer program further performs the following steps: counting the blinking times in the first preset time period according to the human eye state in each image frame; the human eye state comprises an open state and a closed state; and calculating the blinking frequency of the eyes according to the first preset time period and the blinking times.
In one embodiment, if the facial feature point is a mouth, the processor when executing the computer program further performs the steps of: counting the mouth closing times in the first preset time period according to the mouth states in the image frames; the mouth state comprises an open state and a closed state; and calculating the closing frequency of the mouth according to the first preset time period and the mouth closing times.
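Both the blink-frequency and mouth-closing embodiments above reduce to counting open-to-closed transitions over the first preset time period; a generic sketch:

```python
def closing_frequency(states, period_seconds):
    """Closings per second of a facial feature over the first preset period.

    states: per-frame 'open'/'closed' labels produced by the face
    recognition step; a closing is an open -> closed transition.
    """
    closings = sum(1 for prev, cur in zip(states, states[1:])
                   if prev == 'open' and cur == 'closed')
    return closings / period_seconds

def is_fatigued(states, period_seconds, freq_threshold):
    """Fatigue driving state if the closing frequency exceeds the preset
    frequency threshold (an illustrative decision rule for eyes or mouth)."""
    return closing_frequency(states, period_seconds) > freq_threshold
```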
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring speed information of the vehicle and angle information of a steering wheel in a second preset time period; acquiring the characteristic speed and the characteristic acceleration of the vehicle within the second preset time period according to the speed information of the vehicle; acquiring the characteristic angular acceleration of the vehicle within the second preset time period according to the angle information of the steering wheel; and identifying whether the current driving state is an abnormal variable speed driving state or not according to the characteristic speed and a preset characteristic speed threshold value, the characteristic acceleration and a preset characteristic acceleration threshold value and the characteristic angular acceleration and a preset angular acceleration threshold value.
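The variable speed check can be sketched as below, assuming the "characteristic" speed, acceleration, and angular acceleration are the maxima over the second preset period, and using illustrative thresholds; the patent specifies neither the statistics nor the threshold values.

```python
def abnormal_speed_change(speeds, steering_angles, dt,
                          v_max=33.3, a_max=3.0, alpha_max=1.5):
    """Sketch of the abnormal variable speed driving check.

    speeds: sampled vehicle speeds (m/s); steering_angles: sampled steering
    wheel angles (rad); dt: sampling interval (s). Accelerations and angular
    accelerations are finite differences of the sampled series; the state is
    abnormal if any characteristic value exceeds its preset threshold.
    """
    accels = [(b - a) / dt for a, b in zip(speeds, speeds[1:])]
    ang_vels = [(b - a) / dt for a, b in
                zip(steering_angles, steering_angles[1:])]
    ang_accels = [(b - a) / dt for a, b in zip(ang_vels, ang_vels[1:])]
    return (max(speeds) > v_max
            or max((abs(a) for a in accels), default=0.0) > a_max
            or max((abs(x) for x in ang_accels), default=0.0) > alpha_max)
```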
In one embodiment, referring to fig. 1, a vehicle is provided that includes the driving behavior monitoring system described above.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
performing a detection operation on video information collected inside the vehicle, and determining whether the driver's current driving state is an abnormal driving state; wherein the detection operation comprises distracted driving detection and fatigue driving detection;
and if the current driving state is an abnormal driving state, performing an early warning operation corresponding to the abnormal driving state.
In one embodiment, the detection operation further comprises abnormal variable speed driving detection, and the computer program, when executed by the processor, further implements the following step: performing the abnormal variable speed driving detection on the acquired running state information of the vehicle.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: acquiring each image frame of the video information within a first preset time period, and processing each image frame through a preset elliptic contour detection algorithm to obtain the pixel points of the steering wheel area in each image frame; calculating, according to a preset Gaussian skin color model and the pixel values of the pixel points of the steering wheel area, the characteristic probability that the pixel points of the steering wheel area in each image frame belong to a skin color area; and identifying whether the current driving state is a distracted driving state according to the characteristic probability corresponding to each image frame and a preset probability threshold.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: calculating the difference between the characteristic probabilities corresponding to any two consecutive image frames; and if the difference is greater than a preset probability threshold, determining that the current driving state is a distracted driving state.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: acquiring each image frame of the video information within a first preset time period, and processing each image frame through a preset face recognition technology to obtain the pixel points of the mouth area in each image frame; calculating the characteristic pixel value of the pixel points of the mouth area in each image frame; and identifying whether the current driving state is a distracted driving state according to the characteristic pixel value corresponding to each image frame and a preset characteristic pixel threshold.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring each image frame of the video information in a first preset time period, processing each image frame through a preset face recognition technology, and calculating the closing frequency of the face characteristic points; and if the closing frequency of the face characteristic points is greater than a preset frequency threshold, determining that the current driving state is a fatigue driving state.
In one embodiment, if the facial feature points are eyes, the computer program when executed by the processor further performs the steps of: counting the blinking times in the first preset time period according to the human eye state in each image frame; the human eye state comprises an open state and a closed state; and calculating the blinking frequency of the eyes according to the first preset time period and the blinking times.
In one embodiment, the computer program when executed by the processor further performs the steps of, if the facial feature point is a mouth: counting the mouth closing times in the first preset time period according to the mouth states in the image frames; the mouth state comprises an open state and a closed state; and calculating the closing frequency of the mouth according to the first preset time period and the mouth closing times.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring speed information of the vehicle and angle information of a steering wheel in a second preset time period; acquiring the characteristic speed and the characteristic acceleration of the vehicle within the second preset time period according to the speed information of the vehicle; acquiring the characteristic angular acceleration of the vehicle within the second preset time period according to the angle information of the steering wheel; and identifying whether the current driving state is an abnormal variable speed driving state or not according to the characteristic speed and a preset characteristic speed threshold value, the characteristic acceleration and a preset characteristic acceleration threshold value and the characteristic angular acceleration and a preset angular acceleration threshold value.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), and direct Rambus dynamic RAM (DRDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, as long as a combination of technical features is not contradictory, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and although their description is specific and detailed, it should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (13)

1. A driving behavior monitoring method, characterized in that the method comprises:
executing a detection operation on video information collected inside the vehicle, and determining whether the driver's current driving state is an abnormal driving state; wherein the detection operation includes distracted driving detection and fatigue driving detection; performing the distracted driving detection on the video information comprises: determining the characteristic probability that each pixel point of a steering wheel area in each image frame of the video information within a first preset time period belongs to a skin color area, and identifying whether the current driving state is a distracted driving state according to the characteristic probability corresponding to each image frame and a preset probability threshold; the characteristic probability is a statistical characteristic of the probability values that the pixel points of the steering wheel area in an image frame belong to the skin color area;
and if the current driving state is an abnormal driving state, executing an early warning operation corresponding to the abnormal driving state.
2. The method of claim 1, wherein the detecting operation further comprises: abnormal variable speed drive detection, the method further comprising:
the abnormal variable speed driving detection is performed on the acquired running state information of the vehicle.
3. The method of claim 1, wherein the performing the distracted driving detection on the video information further comprises:
acquiring each image frame of the video information within a first preset time period, and processing each image frame through a preset elliptic contour detection algorithm to acquire each pixel point of a steering wheel area in each image frame;
and calculating the characteristic probability that each pixel point of the steering wheel area in each image frame belongs to the skin color area according to a preset Gaussian skin color model and the pixel value of each pixel point of the steering wheel area.
4. The method according to claim 1 or 3, wherein the identifying whether the current driving state is a distracted driving state according to the characteristic probability corresponding to each image frame and a preset probability threshold comprises:
calculating the difference between the characteristic probabilities corresponding to any two consecutive image frames;
and if the difference is greater than the preset probability threshold, determining that the current driving state is a distracted driving state.
5. The method of claim 1, wherein the performing the distracted driving detection on the video information further comprises:
acquiring each image frame of the video information within a first preset time period, and processing each image frame through a preset face recognition technology to acquire each pixel point of a mouth area in each image frame;
calculating the characteristic pixel value of each pixel point of the mouth area in each image frame;
and identifying whether the current driving state is a distracted driving state or not according to the characteristic pixel value corresponding to each image frame and a preset characteristic pixel threshold value.
6. The method of claim 1, wherein performing the fatigue driving detection on the video information comprises:
acquiring each image frame of the video information in a first preset time period, processing each image frame through a preset face recognition technology, and calculating the closing frequency of the face characteristic points;
and if the closing frequency of the face characteristic points is greater than a preset frequency threshold, determining that the current driving state is a fatigue driving state.
7. The method of claim 6, wherein calculating the closing frequency of the face feature point if the face feature point is an eye comprises:
counting the blinking times in the first preset time period according to the human eye state in each image frame; the human eye state comprises an open state and a closed state;
and calculating the blinking frequency of the eyes according to the first preset time period and the blinking times.
8. The method of claim 6, wherein calculating the closing frequency of the face feature point if the face feature point is a mouth comprises:
counting the mouth closing times in the first preset time period according to the mouth states in the image frames; the mouth state comprises an open state and a closed state;
and calculating the closing frequency of the mouth according to the first preset time period and the mouth closing times.
9. The method according to claim 2, wherein the performing the abnormal variable speed driving detection on the acquired running state information of the vehicle includes:
acquiring speed information of the vehicle and angle information of a steering wheel in a second preset time period;
acquiring the characteristic speed and the characteristic acceleration of the vehicle within the second preset time period according to the speed information of the vehicle; acquiring the characteristic angular acceleration of the vehicle within the second preset time period according to the angle information of the steering wheel;
and identifying whether the current driving state is an abnormal variable speed driving state or not according to the characteristic speed and a preset characteristic speed threshold value, the characteristic acceleration and a preset characteristic acceleration threshold value and the characteristic angular acceleration and a preset angular acceleration threshold value.
10. A driving behaviour monitoring device, characterised in that the device comprises:
the first detection module is configured to perform a detection operation on video information collected inside the vehicle and determine whether the driver's current driving state is an abnormal driving state; wherein the detection operation includes distracted driving detection and fatigue driving detection; performing the distracted driving detection on the video information comprises: determining the characteristic probability that each pixel point of a steering wheel area in each image frame of the video information within a first preset time period belongs to a skin color area, and identifying whether the current driving state is a distracted driving state according to the characteristic probability corresponding to each image frame and a preset probability threshold; the characteristic probability is a statistical characteristic of the probability values that the pixel points of the steering wheel area in an image frame belong to the skin color area;
and the early warning module is configured to perform an early warning operation corresponding to the abnormal driving state if the current driving state is the abnormal driving state.
11. A driving behavior monitoring system, comprising: a video information acquisition device, a speed acquisition device, an angle acquisition device, a memory storing a computer program, and a processor which implements the steps of the method according to any one of claims 1 to 9 when executing the computer program.
12. A vehicle comprising a driving behaviour monitoring system according to claim 11.
13. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 9.
CN201811339289.4A 2018-11-12 2018-11-12 Driving behavior monitoring method, device, system, vehicle and storage medium Active CN109584507B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811339289.4A CN109584507B (en) 2018-11-12 2018-11-12 Driving behavior monitoring method, device, system, vehicle and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811339289.4A CN109584507B (en) 2018-11-12 2018-11-12 Driving behavior monitoring method, device, system, vehicle and storage medium

Publications (2)

Publication Number Publication Date
CN109584507A CN109584507A (en) 2019-04-05
CN109584507B true CN109584507B (en) 2020-11-13

Family

ID=65922151

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811339289.4A Active CN109584507B (en) 2018-11-12 2018-11-12 Driving behavior monitoring method, device, system, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN109584507B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110751011A (en) * 2019-05-23 2020-02-04 北京嘀嘀无限科技发展有限公司 Driving safety detection method, driving safety detection device and vehicle-mounted terminal
CN110334592A (en) * 2019-05-27 2019-10-15 天津科技大学 A kind of monitoring of driver's abnormal behaviour and safety control system and safety control method
CN112183173B (en) * 2019-07-05 2024-04-09 北京字节跳动网络技术有限公司 Image processing method, device and storage medium
CN110728218A (en) * 2019-09-29 2020-01-24 深圳市大拿科技有限公司 Dangerous driving behavior early warning method and device, electronic equipment and storage medium
CN110807436B (en) * 2019-11-07 2022-10-18 深圳鼎然信息科技有限公司 Dangerous driving behavior recognition and dangerous event prediction method, device and storage medium
CN111265220A (en) * 2020-01-21 2020-06-12 王力安防科技股份有限公司 Myopia early warning method, device and equipment
CN111422203B (en) * 2020-02-28 2022-03-15 南京交通职业技术学院 Driving behavior evaluation method and device
CN113370990A (en) * 2020-03-10 2021-09-10 华为技术有限公司 Driving assistance method and driving assistance device
CN111532281A (en) * 2020-05-08 2020-08-14 奇瑞汽车股份有限公司 Driving behavior monitoring method and device, terminal and storage medium
CN112622921B (en) * 2020-09-29 2022-09-30 广州宸祺出行科技有限公司 Method and device for detecting abnormal driving behavior of driver and electronic equipment
CN112356839A (en) * 2020-11-06 2021-02-12 广州小鹏自动驾驶科技有限公司 Driving state monitoring method and system and automobile
CN112395993A (en) * 2020-11-18 2021-02-23 珠海大横琴科技发展有限公司 Method and device for detecting ship sheltered based on monitoring video data and electronic equipment
CN113283286B (en) * 2021-03-24 2023-11-21 上海高德威智能交通系统有限公司 Driver abnormal behavior detection method and device
CN114030471B (en) * 2022-01-07 2022-04-26 深圳佑驾创新科技有限公司 Vehicle acceleration control method and device based on road traffic characteristics
CN114454891B (en) * 2022-02-28 2023-09-26 奇瑞汽车股份有限公司 Control method and device for automobile and computer storage medium
CN114841679B (en) * 2022-06-29 2022-10-18 陕西省君凯电子科技有限公司 Intelligent management system for vehicle running data
CN115471826B (en) * 2022-08-23 2024-03-26 中国航空油料集团有限公司 Method and device for judging safe driving behavior of aviation fueller and safe operation and maintenance system
CN116311181B (en) * 2023-03-21 2023-09-12 重庆利龙中宝智能技术有限公司 Method and system for rapidly detecting abnormal driving
CN117911992A (en) * 2023-12-12 2024-04-19 广州市启宏普浩企业管理服务有限公司 Method and system for analyzing driving abnormal behavior data of driver

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101032405A (en) * 2007-03-21 2007-09-12 汤一平 Safe driving auxiliary device based on omnidirectional computer vision
CN101638079A (en) * 2008-07-28 2010-02-03 天津宝龙机电有限公司 Tactile-reminding automobile safety early-warning system
CN101872171A (en) * 2009-04-24 2010-10-27 中国农业大学 Driver fatigue state recognition method and system based on information fusion
CN102289660A (en) * 2011-07-26 2011-12-21 华南理工大学 Method for detecting illegal driving behavior based on hand gesture tracking
CN102371902A (en) * 2010-08-10 2012-03-14 日产自动车株式会社 Stability display apparatus
CN105574487A (en) * 2015-11-26 2016-05-11 中国第一汽车股份有限公司 Facial feature based driver attention state detection method
CN205365591U (en) * 2015-12-31 2016-07-06 山东科技大学 Driver's state suggestion device based on vehicle motion gesture
CN106067016A (en) * 2016-07-20 2016-11-02 深圳市飘飘宝贝有限公司 A kind of facial image eyeglass detection method and device
CN106205052A (en) * 2016-07-21 2016-12-07 上海仰笑信息科技有限公司 A kind of driving recording method for early warning
CN106295600A (en) * 2016-08-18 2017-01-04 宁波傲视智绘光电科技有限公司 Driver status real-time detection method and device
DE102016006316A1 (en) * 2016-05-21 2017-02-09 Daimler Ag A method for determining a degree of attention of a driver of a vehicle
CN106599792A (en) * 2016-11-23 2017-04-26 南京信息工程大学 Hand-based driving illegal behavior detection method
KR20180065333A (en) * 2016-12-07 2018-06-18 광주대학교산학협력단 The apparatus to prevent drowsiness while driving
CN108460355A (en) * 2018-03-12 2018-08-28 知行汽车科技(苏州)有限公司 Driver status and behavioral value system and method
CN108710898A (en) * 2018-04-24 2018-10-26 武汉理工大学 Travel safety ensuring system based on Multi-sensor Fusion and method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7542600B2 (en) * 2004-10-21 2009-06-02 Microsoft Corporation Video image quality
CN102270303B (en) * 2011-07-27 2013-06-05 重庆大学 Joint detection method for sensitive image
CN104123008B (en) * 2014-07-30 2017-11-03 哈尔滨工业大学深圳研究生院 A kind of man-machine interaction method and system based on static gesture
CN105760822B (en) * 2016-02-01 2019-01-15 徐黎卿 A kind of vehicle drive control method and system
CN107895145A (en) * 2017-10-31 2018-04-10 南京信息工程大学 Method based on convolutional neural networks combination super-Gaussian denoising estimation finger stress


Also Published As

Publication number Publication date
CN109584507A (en) 2019-04-05

Similar Documents

Publication Publication Date Title
CN109584507B (en) Driving behavior monitoring method, device, system, vehicle and storage medium
CN111274881B (en) Driving safety monitoring method and device, computer equipment and storage medium
US11783601B2 (en) Driver fatigue detection method and system based on combining a pseudo-3D convolutional neural network and an attention mechanism
CN111079476B (en) Driving state analysis method and device, driver monitoring system and vehicle
WO2019232972A1 (en) Driving management method and system, vehicle-mounted intelligent system, electronic device and medium
US20220277558A1 (en) Cascaded Neural Network-Based Attention Detection Method, Computer Device, And Computer-Readable Storage Medium
CN109147368A (en) Intelligent driving control method device and electronic equipment based on lane line
CN110866427A (en) Vehicle behavior detection method and device
Dozza et al. Recognising safety critical events: Can automatic video processing improve naturalistic data analyses?
CN110728218A (en) Dangerous driving behavior early warning method and device, electronic equipment and storage medium
Lashkov et al. Driver dangerous state detection based on OpenCV & dlib libraries using mobile video processing
CN114170585B (en) Dangerous driving behavior recognition method and device, electronic equipment and storage medium
KR20210065177A (en) Image acquisition device occlusion detection method, device, device and storage medium
Lashkov et al. Ontology-based approach and implementation of ADAS system for mobile device use while driving
US11423676B2 (en) Method and apparatus for detecting on-duty state of driver, device, and computer storage medium
CN117333853A (en) Driver fatigue monitoring method and device based on image processing and storage medium
Zhou et al. Development of a camera-based driver state monitoring system for cost-effective embedded solution
CN115147818A (en) Method and device for identifying mobile phone playing behaviors
Yazici et al. System-on-Chip Based Driver Drowsiness Detection and Warning System
CN113283286A (en) Driver abnormal behavior detection method and device
CN114120395B (en) Driving behavior monitoring method and device and computer readable storage medium
CN112258813A (en) Vehicle active safety control method and device
CN111959517A (en) Distance prompting method and device, computer equipment and storage medium
CN113011347B (en) Intelligent driving method and device based on artificial intelligence and related products
Pochal Empowering Safe Driving With Mobile Crowdsourced Drowsiness Detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Floor 25, Block A, Zhongzhou Binhai Commercial Center Phase II, No. 9285, Binhe Boulevard, Shangsha Community, Shatou Street, Futian District, Shenzhen, Guangdong 518000

Patentee after: Shenzhen Youjia Innovation Technology Co.,Ltd.

Address before: 518051 410, Taibang science and technology building, Gaoxin South Sixth Road, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: SHENZHEN MINIEYE INNOVATION TECHNOLOGY Co.,Ltd.

CP03 Change of name, title or address
TR01 Transfer of patent right

Effective date of registration: 20230901

Address after: No. 602-165, Complex Building, No. 1099, Qingxi Second Road, Hezhuang Street, Qiantang District, Hangzhou, Zhejiang, 310000

Patentee after: Hangzhou Ruijian Zhixing Technology Co.,Ltd.

Address before: Floor 25, Block A, Zhongzhou Binhai Commercial Center Phase II, No. 9285, Binhe Boulevard, Shangsha Community, Shatou Street, Futian District, Shenzhen, Guangdong 518000

Patentee before: Shenzhen Youjia Innovation Technology Co.,Ltd.

TR01 Transfer of patent right