CN111222449A - Driver behavior detection method based on fixed camera image - Google Patents

Driver behavior detection method based on fixed camera image

Info

Publication number
CN111222449A
CN111222449A (application CN202010000481.1A)
Authority
CN
China
Prior art keywords
target
driver
image
camera
behavior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010000481.1A
Other languages
Chinese (zh)
Other versions
CN111222449B (en)
Inventor
杨辰曲
周建武
张银河
赵晓臻
童卫青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Zhongan Electron Information Technology Co ltd
East China Normal University
Original Assignee
Shanghai Zhongan Electron Information Technology Co ltd
East China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Zhongan Electron Information Technology Co ltd and East China Normal University
Priority to CN202010000481.1A priority Critical patent/CN111222449B/en
Publication of CN111222449A publication Critical patent/CN111222449A/en
Application granted granted Critical
Publication of CN111222449B publication Critical patent/CN111222449B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items

Abstract

The invention provides a driver behavior detection method based on a fixed camera image. Its core idea is to abstract multiple driver behaviors as interactions between the driver and specific target objects, so that several behaviors are detected in a single pass: an end-to-end target detection algorithm completes the target detection task, and behavior analysis is then performed with a low-cost first-order classification scheme. The method works with different camera image types, gives good results under both natural light and near-infrared light, and places no constraint on the size of the camera image, giving it broad applicability.

Description

Driver behavior detection method based on fixed camera image
Technical Field
The invention belongs to the technical field of computer vision. It mainly concerns image target detection and analysis, and in particular a method for detecting specific bad driving behaviors from the interrelations among targets in an image.
Background
Real-time detection of driver violations from video images has significant practical value. Existing techniques, however, remain immature: each typically detects only a single driving behavior.
In phone-use detection, Wan Dan (Detection of driver phone-call behavior based on machine vision [D]. Beijing Institute of Technology, 2015) proposed a computer-vision method that decomposes the phone-call behavior into a series of atomic actions satisfying certain temporal relations, models the hierarchical structure of the behavior and the temporal relations among the atomic actions with an and-or graph, and detects phone calls in driver video by statistical analysis. Driver phone-call behavior detection based on a semi-supervised support vector machine ([D]. Hunan University, 2018) determines the detection region for the phone-call behavior from the deflection of the head position, which improves the detection results, and obtains a semi-supervised SVM tailored to phone-call detection by improving the local search algorithm of the semi-supervised SVM. A driver phone-call detection algorithm based on LBP and cascaded XGBoost ([J]. Information and Computer (Theoretical Edition), 2019(03):72-76) screens hand-held phone samples collected by a sliding window, improving detection efficiency and localization accuracy by combining LBP features with a cascaded XGBoost classifier.
For detection of one-handed and two-handed driving, Zhang Shenghua (Research on illegal driving behavior detection [D]. South China University, 2012) proposed extracting the driver's hands from an established region of interest and then inferring, from the postures of the hands, whether the driver is driving straight, turning, or performing other illegal actions. Li Shiwu et al. (An early warning system and detection method to prevent a driver from operating the steering wheel with one hand [P]. Jilin: CN105905031A, 2016-08-31) proposed a system that determines whether the driver has been operating the steering wheel one-handed for a long time, and issues a warning, using a vehicle running-state detection device, a steering-wheel angle detection module, a steering-wheel state detection module, and an image detection and processing module.
For driver off-duty detection, Deep learning with spatial-temporal constraints for driver detection from video ([J]. Pattern Recognition Letters, 2019, 119: 222) uses a fine-tuned Faster R-CNN as the initial detector and proposes an optimal threshold adjustment mechanism, from which the D-STC framework is developed to improve detection accuracy for train drivers. Built on the Faster R-CNN model, D-STC improves detection speed through tuning, can perform detection on real-time surveillance video, and judges whether the train driver has left the post.
Computer-vision methods are also widely used to detect driver smoking. Liu Wei et al. (A driver smoking action detection algorithm based on the histogram of oriented gradients [P]. CN108960094A, 2018-12-07) extract the mouth region of the face as the region of interest, take the gradients of the local image within that region as features, and use a support vector machine to learn and classify the driving behavior.
Disclosure of Invention
The purpose of the invention is to detect specific bad driver behaviors from camera images: six illegal driving behaviors are analyzed by detecting the analysis targets involved in driving. The six illegal driving behaviors are: both hands off the steering wheel, one hand off the steering wheel, smoking, using a mobile phone, leaving the post, and occluding the camera. The analysis targets are: the head target, hand target, mobile phone target, hand-held mobile phone target, steering wheel target, cigarette target, and empty driver seat target.
In order to achieve the above object, the present invention provides a method for detecting driver behavior based on a fixed camera image, which analyzes driving behavior according to a correlation between objects in the image, and is characterized by comprising the following steps:
Step 1: construct and train a target detection neural network used to detect the analysis targets of driving behavior. The network is a candidate-region target detection network consisting of two modules: a deep convolutional network that generates regions of interest, and a detector that uses those proposed regions. Concretely, it comprises a fully convolutional network for extracting image features; a region-of-interest proposal network that produces a number of initial regions of interest per image from those features, which are then merged by non-maximum suppression into final regions of interest; and a detector that classifies the final regions of interest to detect the analysis targets: the head target, hand target, mobile phone target, hand-held mobile phone target, steering wheel target, cigarette target, and empty driver seat target.
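The non-maximum suppression step that merges initial regions of interest can be sketched as follows. This is a minimal pure-Python illustration of standard NMS, not the patent's actual implementation; the 0.7 threshold is only an example value.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_threshold=0.7):
    """Greedily keep the highest-scoring boxes, suppressing any remaining
    box whose overlap with a kept box exceeds the threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```

For example, two near-identical proposals around the same hand collapse to the higher-scoring one, while a distant steering-wheel proposal survives.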
Step 2, acquiring a visible light image and a near infrared light image of a camera fixed at the right front upper position of the cab, wherein the camera can completely capture video images of the upper half of the driver, the driver seat and the steering wheel;
Step 3: using the trained target detection neural network from step 1, detect the regions and positions of the driver's head target, hand targets, steering wheel target, cigarette target, mobile phone target, hand-held mobile phone target, and empty driver seat target in the image obtained in the previous step, where a position is the target's location in the pixel-plane coordinate system;
Step 4: based on the target detection results of step 3, detect the driver's behavior with a hierarchical multi-label classification scheme based on prior knowledge, yielding the driver behavior analysis result. This is a first-order multi-label scheme: the classification labels are divided into three levels according to prior knowledge, and each level is classified in turn. "Hierarchical" means that some classes at one level are classified first and the result becomes the basis for classification at the next level. "Prior knowledge" refers to common sense in driver behavior analysis; for example, when both of the driver's hands are off the steering wheel, the one-hand-off behavior cannot also occur. The scheme proceeds as follows:
Step 5: in the detection results of step 3, if neither the steering wheel target nor the driver head target is detected, or neither the steering wheel target nor the empty driver seat target is detected, judge the camera completely occluded and go to step 11; if only the steering wheel target is missing, judge the camera partially occluded and go to step 11; otherwise proceed to step 6;
Step 6: if the empty driver seat target is detected in the results of step 3, judge that the driver is off duty and go to step 11; otherwise proceed to step 7;
Step 7: if neither detected driver hand target is in contact with the steering wheel target, judge that both hands are off the steering wheel and go to step 9; otherwise proceed to step 8;
Step 8: if exactly one detected driver hand target is in contact with the steering wheel target, judge that one hand is off the steering wheel; in either case proceed to step 9;
Step 9: if a mobile phone target is in contact with a driver hand target, or a hand-held mobile phone target is detected, judge that mobile phone use is occurring; in either case proceed to step 10;
Step 10: if a cigarette target exists and is in contact with the driver head target or a driver hand target, judge that smoking is occurring; in either case proceed to step 11;
Step 11: output the detected driver behavior results and the camera operating state according to the analysis conclusions.
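The decision cascade in steps 5-11 can be sketched as a rule-based classifier over detected boxes. This is a simplified, hedged reading: "contact" is approximated as axis-aligned rectangle overlap, the occlusion check is reduced to the steering wheel alone, and all label names are illustrative rather than taken from the patent.

```python
def overlaps(a, b):
    """True if axis-aligned boxes (x1, y1, x2, y2) intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def analyze(dets):
    """dets: label -> list of boxes. Returns the 6-element 0/1 result of
    step 11: [camera occluded, off duty, both hands off wheel,
    one hand off wheel, phone use, smoking]."""
    out = [0] * 6
    wheel = dets.get("steering_wheel", [])
    if not wheel:                          # step 5 (simplified)
        out[0] = 1
        return out
    if dets.get("empty_seat"):             # step 6: driver seat is empty
        out[1] = 1
        return out
    hands = dets.get("hand", [])
    on_wheel = [h for h in hands if any(overlaps(h, w) for w in wheel)]
    if not on_wheel:                       # step 7: both hands off
        out[2] = 1
    elif len(on_wheel) == 1:               # step 8: one hand off
        out[3] = 1
    phones = dets.get("phone", []) + dets.get("held_phone", [])
    if dets.get("held_phone") or any(
            overlaps(p, h) for p in phones for h in hands):
        out[4] = 1                         # step 9: phone use
    targets = dets.get("head", []) + hands
    if any(overlaps(c, t) for c in dets.get("cigarette", []) for t in targets):
        out[5] = 1                         # step 10: smoking
    return out
```

With one hand overlapping the wheel and a cigarette overlapping the head, this yields [0, 0, 0, 1, 0, 1], matching the prior-knowledge rule that one-hand-off and both-hands-off are mutually exclusive.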
Preferably, in step 1, because convolutional neural networks are not robust to image rotation, training samples containing multi-angle targets are manually rotated for data augmentation, and the rotated images are added to the training set as new samples. Specifically, when training the target detection neural network, images labeled with target bounding boxes serve as training samples; the images containing a cigarette target, mobile phone target, or hand-held mobile phone target are manually rotated and cropped back to the original size, and the augmented images are added to the training set as new samples;
during training of the target detection neural network, each sample is scaled so that the longest edge is under 1333 pixels or the shortest edge is over 800 pixels; after alignment, the 3 channel values have 121.5, 117.6, and 112.0 subtracted respectively and are then divided by 256. Training uses a batch size of 8, a region-of-interest minimum confidence of 0.7, a maximum confidence of 0.3, at most 2000 regions of interest per image, 20000 iterations, and an initial learning rate of 0.005 that decays to 1/3 of its value after 8000, 12000, and 16000 iterations.
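The resizing rule and per-channel normalization above can be sketched as follows. This is an illustrative reading of the described preprocessing (short edge brought toward 800 px, capped so the long edge stays under 1333 px), not the patent's actual code; function names are my own.

```python
MEANS = (121.5, 117.6, 112.0)  # per-channel means from the description

def resize_scale(w, h, short_target=800, long_cap=1333):
    """Scale factor that brings the short edge to short_target,
    capped so that the long edge does not exceed long_cap."""
    short_e, long_e = min(w, h), max(w, h)
    s = short_target / short_e
    if long_e * s > long_cap:
        s = long_cap / long_e
    return s

def normalize_pixel(rgb):
    """Subtract the per-channel mean, then divide by 256."""
    return tuple((v - m) / 256.0 for v, m in zip(rgb, MEANS))
```

For a 1920x1080 frame the short-edge scale of 800/1080 would push the long edge past 1333, so the cap 1333/1920 applies instead.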
Preferably, when manually rotating the samples containing cigarette, mobile phone, or hand-held mobile phone targets, the rotation angles are -30°, -20°, -10°, 20°, and 30°.
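When an image is rotated for augmentation, the bounding-box labels must be transformed with it. A minimal sketch of that annotation transform (an assumption on my part; the patent does not describe how labels are handled): the rotated box is re-axis-aligned by taking the bounding box of its rotated corners.

```python
import math

ANGLES = (-30, -20, -10, 20, 30)  # rotation angles in degrees, as listed above

def rotate_box(box, angle_deg, cx, cy):
    """Axis-aligned bounding box of (x1, y1, x2, y2) after rotating
    by angle_deg about the centre (cx, cy)."""
    x1, y1, x2, y2 = box
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    corners = [(x1, y1), (x1, y2), (x2, y1), (x2, y2)]
    rot = [(cx + (x - cx) * cos_a - (y - cy) * sin_a,
            cy + (x - cx) * sin_a + (y - cy) * cos_a) for x, y in corners]
    xs = [p[0] for p in rot]
    ys = [p[1] for p in rot]
    return (min(xs), min(ys), max(xs), max(ys))
```

Note that re-axis-aligning enlarges boxes slightly at oblique angles, which is the usual trade-off in rotation augmentation for detection.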
Preferably, the target detection neural network is Faster R-CNN. Faster R-CNN is a two-stage deep learning model for target detection that replaces the sliding-window mechanism with a region-proposal network, achieving higher detection speed.
Preferably, in step 2, the camera automatically switches among a visible-light shooting mode, a near-infrared shooting mode, and a mixed visible/near-infrared mode according to the lighting conditions in the vehicle cabin.
Preferably, between step 2 and step 3, the method further comprises applying bilateral filtering to the image obtained in step 2.
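Bilateral filtering smooths sensor noise while preserving edges, because each neighbour is weighted by both spatial distance and intensity difference. In practice a library routine such as OpenCV's `cv2.bilateralFilter` would be used; the brute-force grayscale NumPy sketch below is for illustration only, and its parameter values are arbitrary.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Brute-force bilateral filter on a 2-D float array.
    sigma_s: spatial falloff (pixels); sigma_r: intensity falloff."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r**2))
            wgt = spatial * rng            # combined spatial * range weight
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```

A constant region passes through unchanged, and a sharp step edge stays sharp because pixels on the far side of the step receive near-zero range weight.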
The invention abstracts multiple driving behaviors as interactions between the driver and specific target objects, so that several behaviors are detected in a single pass: an end-to-end target detection algorithm completes the detection task, and a low-cost first-order classification scheme performs the behavior analysis. The method works with different camera image types, gives good results under both natural light and near-infrared light, places no limit on the size of the camera image, and therefore has broad applicability.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a neural network architecture of the present invention;
FIG. 3 is an original image acquired in step 1 of the embodiment;
FIG. 4 is a visual display of the results of step 3 in the examples.
Detailed Description
The invention is further illustrated by the following specific embodiment. It should be understood that this embodiment is illustrative only and does not limit the scope of the invention. Those skilled in the art may make various changes or modifications after reading this disclosure, and such equivalents likewise fall within the scope defined by the appended claims.
Except where specifically noted below, the procedures, conditions, and experimental methods for carrying out the invention are common knowledge in the art, and the invention is not particularly limited in these respects. As shown in fig. 1, the driver behavior detection method based on a fixed camera image comprises the following steps:
Step 1: acquire a visible-light image and a near-infrared image from a camera fixed at the upper right front of the cab;
Step 2: apply bilateral filtering to the acquired image;
Step 3: detect the regions and positions of the driver's head target, hand targets, steering wheel target, cigarette target, mobile phone target, hand-held mobile phone target, and empty driver seat target in the image with the target detection neural network;
Step 4: analyze whether the source camera is occluded; if so, go to step 10;
Step 5: analyze whether the driver is off duty; if so, go to step 10;
Step 6: analyze whether both of the driver's hands are off the steering wheel;
Step 7: analyze whether one of the driver's hands is off the steering wheel;
Step 8: analyze whether the driver is using a mobile phone;
Step 9: analyze whether the driver is smoking;
Step 10: output the driving behavior analysis result and the camera operating state.
First, the construction and training of the neural network used in step 3 is described.
A neural network is constructed as shown in fig. 2.
Images labeled with target bounding boxes serve as training samples. The images containing a cigarette target, a mobile phone target, or a combined hand-and-phone target are rotated by -30°, -20°, -10°, 20°, and 30° and cropped back to the original size for data augmentation.
During neural network training, each sample is scaled so that the longest edge is under 1333 pixels or the shortest edge is over 800 pixels; after alignment, the 3 channel values have 121.5, 117.6, and 112.0 subtracted respectively and are divided by 256. The batch size is 8, the region-of-interest minimum confidence is 0.7, the maximum confidence is 0.3, at most 2000 regions of interest are kept per image, training runs for 20000 iterations, and the initial learning rate of 0.005 decays to 1/3 of its value after 8000, 12000, and 16000 iterations.
After the neural network training is completed, the neural network can be used for detecting the driving behavior analysis target.
First, as described in step 1, a visible-light image and a near-infrared image are acquired from the camera fixed at the upper right front of the cab, as shown in fig. 3.
In step 2, the sample image is bilaterally filtered, scaled so that the longest edge is under 1333 pixels or the shortest edge is over 800 pixels, and after alignment the 3 channel values have 121.5, 117.6, and 112.0 subtracted respectively and are divided by 256.
Step 3 is then performed with the network hyper-parameters set as follows: region-of-interest minimum confidence 0.7, maximum confidence 0.3, and at most 1000 regions of interest per image. A visualization of the detection results on fig. 3 is shown in fig. 4.
The next step judges whether the source camera is occluded, after which the positional relations among the detected targets are judged with the hierarchical multi-label classification scheme based on prior knowledge.
Suppose the driver behavior analysis is performed on fig. 4:
and 4, detecting the head and the steering wheel of the driver in the image, wherein the camera is not shielded.
And 5, detecting no empty driver seat in the image, and preventing the driver from falling off duty.
And 6, the target positions of the two hands detected in the image are overlapped with the target position of the steering wheel, and the behavior that the two hands are separated from the steering wheel does not occur.
And 7, the target positions of the two hands detected in the image are overlapped with the target position of the steering wheel, and the behavior that one hand is separated from the steering wheel does not occur.
And 8, the mobile phone target detected in the image is superposed with the hand target position, and a mobile phone using behavior occurs.
Step 9, detecting the cigarette target in the image, and generating smoking behavior.
Step 10: the detection result is output as a fixed-length integer array [0,0,0,0,1,1], whose elements represent, in order: camera occlusion, off-duty, both hands off the steering wheel, one hand off the steering wheel, mobile phone use, and smoking. A 1 means the corresponding violation was found; a 0 means it was not.
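The fixed-length result array can be decoded into human-readable findings; a small sketch (the label strings are illustrative, not taken from the patent):

```python
FLAGS = (
    "camera occluded",
    "driver off duty",
    "both hands off steering wheel",
    "one hand off steering wheel",
    "mobile phone use",
    "smoking",
)

def decode(result):
    """Map the 6-element 0/1 array of step 10 to the violations found."""
    return [name for bit, name in zip(result, FLAGS) if bit == 1]
```

For the example above, decode([0, 0, 0, 0, 1, 1]) yields the phone-use and smoking findings.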
The invention defines a complete processing pipeline for detecting specific bad driver behaviors from camera images, comprising image acquisition, target position detection, behavior analysis from target positions, and result output, as shown in fig. 1. An image is first obtained from the camera data stream; the target detection module then yields the positions of the specific targets in the pixel-plane coordinate system; the hierarchical multi-label classification scheme based on prior knowledge is applied; and finally the driver behavior analysis result is obtained.

Claims (6)

1. A driver behavior detection method based on a fixed camera image, which analyzes driving behavior according to the interrelations among targets in the image, characterized by comprising the following steps:
step 1, constructing and training a target detection neural network used to detect the analysis targets of driving behavior, the network comprising a fully convolutional network for extracting image features, a region-of-interest proposal network that produces a number of initial regions of interest per image from those features, the initial regions of interest being merged by non-maximum suppression into final regions of interest, and a detector that classifies the final regions of interest to detect the analysis targets, the analysis targets comprising a head target, a hand target, a mobile phone target, a hand-held mobile phone target, a steering wheel target, a cigarette target, and an empty driver seat target;
step 2, acquiring a visible-light image and a near-infrared image from a camera fixed at the upper right front of the cab, positioned so that it fully captures the driver's upper body, the driver's seat, and the steering wheel;
step 3, using the trained target detection neural network from step 1, detecting the regions and positions of the driver's head target, hand targets, steering wheel target, cigarette target, mobile phone target, hand-held mobile phone target, and empty driver seat target in the image obtained in the previous step;
step 4, based on the target detection results of step 3, detecting the driver's behavior with a hierarchical multi-label classification scheme based on prior knowledge, comprising the following steps:
step 5, in the detection results of step 3, if neither the steering wheel target nor the driver head target is detected, or neither the steering wheel target nor the empty driver seat target is detected, judging the camera completely occluded and going to step 11; if only the steering wheel target is missing, judging the camera partially occluded and going to step 11; otherwise proceeding to step 6;
step 6, if the empty driver seat target is detected in the results of step 3, judging that the driver is off duty and going to step 11; otherwise proceeding to step 7;
step 7, if neither detected driver hand target is in contact with the steering wheel target, judging that both hands are off the steering wheel and going to step 9; otherwise proceeding to step 8;
step 8, if exactly one detected driver hand target is in contact with the steering wheel target, judging that one hand is off the steering wheel; in either case proceeding to step 9;
step 9, if a mobile phone target is in contact with a driver hand target, or a hand-held mobile phone target is detected, judging that mobile phone use is occurring; in either case proceeding to step 10;
step 10, if a cigarette target exists and is in contact with the driver head target or a driver hand target, judging that smoking is occurring; in either case proceeding to step 11;
step 11, outputting the detected driver behavior results and the camera operating state according to the analysis conclusions.
2. The method for detecting driver behavior based on a fixed camera image according to claim 1, wherein in step 1, when training the target detection neural network, images labeled with target bounding boxes serve as training samples; the images containing a cigarette target, mobile phone target, or hand-held mobile phone target are manually rotated and cropped back to the original size for data augmentation, and the augmented images are added to the training set as new samples for training the target detection neural network;
during training of the target detection neural network, each sample is scaled so that the longest edge is under 1333 pixels or the shortest edge is over 800 pixels; after alignment, the 3 channel values have 121.5, 117.6, and 112.0 subtracted respectively and are then divided by 256. Training uses a batch size of 8, a region-of-interest minimum confidence of 0.7, a maximum confidence of 0.3, at most 2000 regions of interest per image, 20000 iterations, and an initial learning rate of 0.005 that decays to 1/3 of its value after 8000, 12000, and 16000 iterations.
3. The method of claim 2, wherein the rotation angles used when manually rotating the samples containing cigarette, mobile phone, or hand-held mobile phone targets are -30°, -20°, -10°, 20°, and 30°.
4. The fixed-camera-image-based driver behavior detection method as claimed in claim 2, wherein the target detection neural network employs Faster R-CNN.
5. The fixed-camera-image-based driver behavior detection method according to claim 1, wherein in step 2 the camera automatically switches among a visible-light shooting mode, a near-infrared shooting mode, and a mixed visible/near-infrared mode according to the lighting conditions in the vehicle cabin.
6. The fixed-camera-image-based driver behavior detection method as claimed in claim 1, further comprising, after step 2 and before step 3: applying bilateral filtering to the image obtained in step 2.
CN202010000481.1A 2020-01-02 2020-01-02 Driver behavior detection method based on fixed camera image Active CN111222449B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010000481.1A CN111222449B (en) 2020-01-02 2020-01-02 Driver behavior detection method based on fixed camera image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010000481.1A CN111222449B (en) 2020-01-02 2020-01-02 Driver behavior detection method based on fixed camera image

Publications (2)

Publication Number Publication Date
CN111222449A true CN111222449A (en) 2020-06-02
CN111222449B CN111222449B (en) 2023-04-11

Family

ID=70829324

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010000481.1A Active CN111222449B (en) 2020-01-02 2020-01-02 Driver behavior detection method based on fixed camera image

Country Status (1)

Country Link
CN (1) CN111222449B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140139655A1 (en) * 2009-09-20 2014-05-22 Tibet MIMAR Driver distraction and drowsiness warning and sleepiness reduction for accident avoidance
WO2016028228A1 (en) * 2014-08-21 2016-02-25 Avennetz Technologies Pte Ltd System, method and apparatus for determining driving risk
CN107491764A (en) * 2017-08-25 2017-12-19 University of Electronic Science and Technology of China Illegal driving detection method based on a deep convolutional neural network
FR3062977A1 (en) * 2017-02-15 2018-08-17 Valeo Comfort And Driving Assistance VIDEO SEQUENCE COMPRESSION DEVICE AND DRIVER MONITORING DEVICE COMPRISING SUCH A COMPRESSION DEVICE
CN108764034A (en) * 2018-04-18 2018-11-06 Zhejiang Leapmotor Technology Co., Ltd. Distracted-driving behavior early-warning method based on a near-infrared camera in the driver's cabin
CN110222596A (en) * 2019-05-20 2019-09-10 Zhejiang Leapmotor Technology Co., Ltd. Vision-based anti-cheating method for driving behavior analysis

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Liu Weihuang et al.: "Driver Fatigue Detection Algorithm Based on Multi-Facial Feature Fusion", Computer Systems & Applications *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112132015A (en) * 2020-09-22 2020-12-25 平安国际智慧城市科技股份有限公司 Detection method, device, medium and electronic equipment for illegal driving posture
CN112022153A (en) * 2020-09-27 2020-12-04 西安电子科技大学 Electroencephalogram signal detection method based on convolutional neural network
CN112022153B (en) * 2020-09-27 2021-07-06 西安电子科技大学 Electroencephalogram signal detection method based on convolutional neural network
CN112818913A (en) * 2021-02-24 2021-05-18 西南石油大学 Real-time smoking and phone call recognition method
CN113139452A (en) * 2021-04-19 2021-07-20 中国人民解放军91054部队 Method for detecting behavior of using mobile phone based on target detection

Also Published As

Publication number Publication date
CN111222449B (en) 2023-04-11

Similar Documents

Publication Publication Date Title
CN111222449B (en) Driver behavior detection method based on fixed camera image
CN108985186B (en) Improved YOLOv2-based method for detecting pedestrians in unmanned driving
Hoang Ngan Le et al. Robust hand detection and classification in vehicles and in the wild
US8509478B2 (en) Detection of objects in digital images
Zhou et al. Robust vehicle detection in aerial images using bag-of-words and orientation aware scanning
CN110263712B (en) Coarse and fine pedestrian detection method based on region candidates
Abdi et al. Deep learning traffic sign detection, recognition and augmentation
Lin et al. A real-time vehicle counting, speed estimation, and classification system based on virtual detection zone and YOLO
Fan et al. Modeling of temporarily static objects for robust abandoned object detection in urban surveillance
CN111046856B (en) Parallel pose tracking and map creating method based on dynamic and static feature extraction
CN111881750A (en) Crowd abnormity detection method based on generation of confrontation network
CN110222596B (en) Driver behavior analysis anti-cheating method based on vision
CN110490043A (en) Forest fire and smoke detection method based on region division and feature extraction
CN104615986A (en) Method for pedestrian detection in scene-changing video images using multiple detectors
Espinosa et al. Motorcycle detection and classification in urban Scenarios using a model based on Faster R-CNN
CN112070051B (en) Pruning compression-based fatigue driving rapid detection method
CN110717863A (en) Single-image snow removing method based on generation countermeasure network
Su et al. A new local-main-gradient-orientation HOG and contour differences based algorithm for object classification
Wang Vehicle image detection method using deep learning in UAV video
Li et al. Distracted driving detection by combining ViT and CNN
Khan et al. A novel deep learning based anpr pipeline for vehicle access control
CN111626197A (en) Human behavior recognition network model and recognition method
CN114038011A (en) Method for detecting abnormal behaviors of human body in indoor scene
CN112232124A (en) Crowd situation analysis method, video processing device and device with storage function
Jehad et al. Developing and validating a real time video based traffic counting and classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant