CN111724408B - Verification experiment method of abnormal driving behavior algorithm model based on 5G communication - Google Patents

Info

Publication number
CN111724408B
CN111724408B (application CN202010503167.5A)
Authority
CN
China
Prior art keywords
driver
image
algorithm
data
behavior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010503167.5A
Other languages
Chinese (zh)
Other versions
CN111724408A (en)
Inventor
徐国保
麦锐滔
叶昌鑫
姚旭
赵剪
王骥
李依潼
刘雯景
彭银桥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Ocean University
Original Assignee
Guangdong Ocean University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Ocean University filed Critical Guangdong Ocean University
Priority to CN202010503167.5A
Publication of CN111724408A
Application granted
Publication of CN111724408B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Abstract

The invention discloses a verification experiment method for an abnormal-driving-behavior algorithm model based on 5G communication, comprising the following steps: segmenting the driver from the background, identifying and locating key sub-region images of the driver, recognizing the driver's behavior state and its duration, and performing verification analysis and experimental deployment. First, a driver background-image segmentation algorithm based on an improved Mask R-CNN is designed to segment the driver from the background image. Next, an improved YOLOv3 target-detection algorithm is designed to identify the key sub-region images of the driver in the segmented image. A CNN-LSTM fusion classification algorithm for the abnormal-driving-behavior recognition scene is then designed to recognize the driver's action state and its duration from three input images. Finally, a test-cart experiment system is built, comprehensive verification analysis and experimental deployment of the system are carried out, and the verification experiment of the algorithm model is realized.

Description

Verification experiment method of abnormal driving behavior algorithm model based on 5G communication
Technical Field
The invention relates to the technical field of abnormal driving behavior detection, in particular to a verification experiment method of an abnormal driving behavior algorithm model based on 5G communication.
Background
Technology that detects a driver's abnormal behavior and issues a warning can prevent traffic accidents, and therefore has important application value and social significance. At present, research and development in this field, both at home and abroad, is limited by the characteristics of abnormal behaviors: they are varied, including fatigue behaviors such as prolonged eye closure, yawning, eye rubbing, and nodding, as well as improper driving behaviors such as making phone calls, eating snacks, smoking, and looking left and right. Because a driver's behavior is normal most of the time, abnormal behaviors occur at a generally low frequency, which makes data acquisition difficult; a reliable algorithm model is therefore needed to detect the driver's abnormal behavior.
however, most of the existing driver abnormal behavior identification and classification algorithms proposed for driver abnormal behavior detection are not subjected to effective and reliable verification analysis and experimental deployment, so that the accuracy and speed of the algorithm model cannot be guaranteed, and the comprehensive performance and practicability cannot be determined, therefore, the invention provides a verification experimental method based on the abnormal driving behavior algorithm model under 5G communication to solve the problems in the prior art.
Disclosure of Invention
In view of these problems, the invention aims to provide a verification experiment method for an abnormal-driving-behavior algorithm model based on 5G communication. The method first verifies and analyzes the accuracy and speed of the final algorithm model, then reasonably tunes its parameters and prunes the model, next deploys the deep-learning environment and framework on a server and develops a background management system, and finally carries out a vehicle-and-server experimental deployment based on 5G communication. The method is simple, reliable, and highly practical.
To achieve this purpose, the invention is realized by the following technical scheme. The verification experiment method of the abnormal-driving-behavior algorithm model based on 5G communication comprises the following steps:
Step one: driver background segmentation
In the complex in-vehicle driving environment, taking different illumination conditions and different people into account, driving-environment data are first acquired in real time on the basis of data enhancement. The acquired data are then preprocessed and made into a data set so that the detection speed meets the real-time requirement, with detection accuracy specifically improved for this particular driving-environment scene. Finally, a driver background-image segmentation algorithm based on an improved Mask R-CNN is designed and used to segment the driver from the background image;
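The flip/translate/add-noise preprocessing named in step one can be sketched in a few lines of numpy. This is only an illustrative sketch: the function name, shift amount, and noise level are assumptions, not values from the patent.

```python
import numpy as np

def augment_frame(img, rng):
    """Return simple augmented variants of one driver-cabin frame.

    Illustrates the three operations named in step one: horizontal flip,
    translation, and additive noise.
    """
    flipped = img[:, ::-1]                     # horizontal flip
    shifted = np.roll(img, shift=5, axis=1)    # crude 5-pixel horizontal translation
    noisy = np.clip(img + rng.normal(0, 10, img.shape), 0, 255)  # Gaussian noise
    return flipped, shifted, noisy.astype(img.dtype)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float64)
flipped, shifted, noisy = augment_frame(frame, rng)
```

Each variant keeps the original frame shape, so all three can be added to the training data set directly.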
Step two: identifying and locating key sub-region images of the driver
Following step one, the image segmented by Mask R-CNN is first used as the network input. Allowing for the accuracy loss of small-object detection, multi-scale prediction is adopted to improve the detection accuracy of the driver's key sub-region images. The feature-extraction behavior of the network is studied by fusing different scales, different combination modes, and different quantities in combination with the network structure, and the feasibility of the network is verified through its recognition accuracy to obtain the network model best suited to this scene. An improved YOLOv3-based target-detection algorithm is then designed to identify the key sub-region images of the driver in the segmented image;
Step three: recognizing the driver's behavior state and its duration
Following step two, after the key sub-region images of the driver are obtained from the segmented image by the target-detection model, they are first input into a CNN to extract feature maps, which are then flattened into vectors and input into an LSTM to seek the optimal fusion structure of CNN-LSTM spatio-temporal features. A CNN-LSTM fusion classification algorithm for the abnormal-driving-behavior recognition scene is then designed so that it recognizes the driver's action state and its duration from three input images;
Step four: verification analysis
Following step three, a test-cart experiment system is first built, with the on-board equipment communicating over a 5G network, and comprehensive verification analysis and experimental deployment of the system are carried out. The final algorithm model is then verified for accuracy and speed on a test set, and its parameters are reasonably tuned or the model is pruned according to the feedback, so that it achieves real-time detection while meeting the required accuracy; this verifies the performance of the CNN-LSTM fusion classification algorithm in the abnormal-driving-behavior recognition scene;
Step five: experimental deployment
Following step four, the deep-learning environment and framework required by the final model are deployed on a server, and a background management system is developed for data processing and statistics so that the system operates normally. A small camera and supporting equipment are then mounted in the vehicle so that it can collect driver images in real time and exchange data with the cloud. The system automatically monitors and recognizes the driver's behavior state, and issues a timely voice warning and records the event when abnormal driving behavior is detected, completing the experimental deployment of the abnormal-driving-behavior algorithm model.
In a further refinement, in step one the preprocessing comprises flipping, translating, and adding noise to the image data; before the real-time-acquired data are preprocessed, they are enhanced with a generative adversarial network, and the Mask R-CNN network framework is then scientifically pruned using accuracy comparison and distillation.
In a further refinement, in step two, before the improved YOLOv3-based target-detection algorithm is designed, the influence of the driver's body state in different regions on classification is studied and the key sub-regions are identified; a model for identifying and locating the driver's key sub-regions is then established, and finally the improved YOLOv3-based key sub-region locating and recognition algorithm is designed.
In a further refinement, in step three, after the key sub-region images of the driver are obtained from the segmented image by the target-detection model, the relationship between the driver's behavior state and time and the spatio-temporal feature-fusion mechanism are first studied in order to obtain the optimal fusion structure of CNN-LSTM spatio-temporal features.
In a further refinement, in step four the test-cart experiment system receives the video images collected by the camera and forwards the received data to the server; it also detects abnormal driving behaviors and gives the driver voice reminders and records the data.
In a further refinement, in step five the small camera and equipment mounted in the vehicle transmit data over a high-speed, high-capacity, low-latency 5G communication network to support the transmission of high-definition in-vehicle video.
The invention has the following beneficial effects. On the basis of a self-acquired data set, the invention makes full use of open-source online data sets and enlarges them with traditional image-processing operations such as flipping, translation, and noise addition, while also using a generative adversarial network for image enhancement. Based on a mechanism of removing background interference, a model for segmenting the driver from a complex background is established, and the network structure of the high-accuracy open-source Mask R-CNN segmentation algorithm is modified to suit the driver scene and segment the driver image. On this basis, a model for locating the driver's key sub-regions is established, the influence of the key sub-regions on the abnormal-driving result is studied, and an improved YOLOv3 target-detection algorithm is proposed to identify the key sub-region images of the driver in the segmented image. The relationship between the driver's behavior state and time is further studied, the optimal fusion structure of CNN-LSTM spatio-temporal features is sought, and a CNN-LSTM fusion classification algorithm for the abnormal-driving-behavior recognition scene is designed to recognize the driver's state from three input images. Finally, a test-cart experiment system is built, a 5G network is used for communication and data transmission, and comprehensive verification analysis and experimental deployment of the system are carried out, so that the system automatically monitors and recognizes the driver's behavior state and gives timely voice warnings and records when abnormal driving behavior is detected. The method is highly practical, guarantees the accuracy and speed of the algorithm model, and determines its overall performance, which facilitates popularization.
Drawings
FIG. 1 is a flow chart of the steps of the present invention;
FIG. 2 is a technical roadmap for the present invention;
FIG. 3 is a flow chart of an experimental protocol of the present invention.
Detailed Description
To further the understanding of the present invention, a detailed description is given below with reference to the following embodiment, which serves only to explain the invention and is not to be construed as limiting its scope.
According to fig. 1, 2, and 3, this embodiment provides a verification experiment method of an abnormal-driving-behavior algorithm model based on 5G communication, which includes the following steps:
Step one: driver background segmentation
In the complex in-vehicle driving environment, taking different illumination conditions and different people into account, driving-environment data are first acquired in real time on the basis of enhancing the data with a generative adversarial network. The acquired data are then flipped, translated, and injected with noise, and the preprocessed data are made into a data set. The Mask R-CNN network framework is scientifically pruned using methods such as accuracy comparison and distillation so that the detection speed meets the real-time requirement, with detection accuracy specifically improved for this particular driving-environment scene. Finally, a driver background-image segmentation algorithm based on the improved Mask R-CNN is designed and used to segment the driver from the background image;
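The background-removal idea behind step one can be sketched once the segmentation network has produced an instance mask. The sketch below assumes a binary driver mask is already available (in practice it would come from the improved Mask R-CNN); the function name and fill value are illustrative.

```python
import numpy as np

def remove_background(frame, driver_mask, fill=0):
    """Keep only the pixels the segmentation network labelled as driver.

    `driver_mask` stands in for the instance mask a Mask R-CNN-style model
    would output; here it is just a binary array with the frame's height/width.
    """
    keep = driver_mask.astype(bool)
    out = np.full_like(frame, fill)   # background pixels become the fill value
    out[keep] = frame[keep]           # driver pixels pass through unchanged
    return out

frame = np.arange(4 * 4 * 3, dtype=np.uint8).reshape(4, 4, 3)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1                    # pretend the driver occupies the centre
segmented = remove_background(frame, mask)
```

Feeding such background-free frames to the downstream detector is what motivates the segmentation stage.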
Step two: identifying and locating key sub-region images of the driver
Following step one, the images segmented by Mask R-CNN are first used as the network input. Allowing for the accuracy loss of small-object detection, multi-scale prediction is adopted to improve the detection accuracy of the driver's key sub-region images, and the feature-extraction behavior of the network is studied by fusing different scales, combination modes, and quantities in combination with the network structure; the feasibility of the network is verified through its recognition accuracy to obtain the network model best suited to this scene. The influence of the driver's body state in different regions on classification is then studied to identify the key sub-regions, a model for identifying and locating the driver's key sub-regions is established, and finally an improved YOLOv3-based target-detection algorithm is designed to identify the key sub-region images of the driver in the segmented image;
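Once the detector has located the key sub-regions, the corresponding image patches must be cropped out for the classifier. A minimal sketch, assuming the detector emits YOLOv3-style `(x1, y1, x2, y2, score, label)` rows; that row format and the labels are assumptions for illustration, not the patent's actual network output.

```python
import numpy as np

def crop_subregions(frame, detections, min_score=0.5):
    """Crop key driver sub-regions (e.g. face, hands) from one frame."""
    crops = {}
    for x1, y1, x2, y2, score, label in detections:
        if score < min_score:
            continue  # drop low-confidence boxes
        crops[label] = frame[int(y1):int(y2), int(x1):int(x2)]
    return crops

frame = np.zeros((128, 128, 3), dtype=np.uint8)
dets = [(10, 10, 50, 60, 0.9, "face"),   # confident face box: kept
        (70, 80, 120, 120, 0.3, "hand")] # low-confidence box: discarded
crops = crop_subregions(frame, dets)
```

The kept crops are exactly the "key sub-region images" that step three feeds into the CNN.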
Step three: recognizing the driver's behavior state and its duration
Following step two, after the key sub-region images of the driver are obtained from the segmented image by the target-detection model, the relationship between the driver's behavior state and time and the spatio-temporal feature-fusion mechanism are studied: the key sub-region images are first input into a CNN to extract feature maps, which are then flattened into vectors and input into an LSTM to seek the optimal fusion structure of CNN-LSTM spatio-temporal features. A CNN-LSTM fusion classification algorithm for the abnormal-driving-behavior recognition scene is then designed so that it recognizes the driver's action state and its duration from three input images;
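The data flow of the CNN-LSTM fusion (feature map, flattened to a vector, fed frame-by-frame into an LSTM over three images) can be traced with a toy numpy sketch. The "CNN" here is replaced by a simple average-pool and the LSTM cell is hand-rolled with random untrained weights, so this shows only the shapes and wiring, not the patent's trained model; all sizes and names are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pool_features(img, out=8):
    """Stand-in for the CNN stage: average-pool the frame to an out*out map."""
    h, w = img.shape[0] // out, img.shape[1] // out
    fmap = img[: h * out, : w * out].reshape(out, h, out, w).mean(axis=(1, 3))
    return fmap.ravel()  # flatten the feature map into a vector for the LSTM

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step over the flattened feature vector."""
    z = W @ x + U @ h + b
    H = h.shape[0]
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
    g = np.tanh(z[3*H:])
    c = f * c + i * g
    return o * np.tanh(c), c

rng = np.random.default_rng(1)
frames = rng.random((3, 32, 32))          # the three input images of step three
D, H = 64, 16                             # feature dim (8*8) and hidden size
W, U = rng.normal(0, 0.1, (4*H, D)), rng.normal(0, 0.1, (4*H, H))
b = np.zeros(4*H)
h, c = np.zeros(H), np.zeros(H)
for frame in frames:                      # CNN features -> vector -> LSTM
    h, c = lstm_step(pool_features(frame), h, c, W, U, b)
```

The final hidden state `h` is what a classification head would consume to output the behavior class and its duration.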
Step four: verification analysis
Following step three, a test-cart experiment system is first built that receives the video images collected by the camera and forwards the received data to the server; the system detects abnormal driving behaviors and gives the driver voice reminders and records the data, with the on-board equipment communicating over a 5G network. Comprehensive verification analysis and experimental deployment of the system are carried out, the final algorithm model is then verified for accuracy and speed on a test set, and its parameters are reasonably tuned or the model is pruned according to the feedback so that it achieves real-time detection while meeting the required accuracy; this verifies the performance of the CNN-LSTM fusion classification algorithm in the abnormal-driving-behavior recognition scene;
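The pruning mentioned in step four can take many forms; a common, simple variant is magnitude pruning, sketched below. The patent does not specify its pruning criterion, so this is only one plausible stand-in, with illustrative names and a hypothetical 50% sparsity target.

```python
import numpy as np

def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights until `sparsity` is reached."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    thresh = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    return np.where(np.abs(weights) <= thresh, 0.0, weights)

rng = np.random.default_rng(2)
w = rng.normal(size=(8, 8))                     # a toy weight matrix
w_pruned = prune_by_magnitude(w, sparsity=0.5)
sparsity = float((w_pruned == 0).mean())
```

After pruning, the model would be re-evaluated on the test set and the sparsity target adjusted until speed and accuracy both meet the requirements, matching the feedback loop described above.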
Step five: experimental deployment
Following step four, the deep-learning environment and framework required by the final model are deployed on a server, and a background management system is developed for data processing and statistics so that the system operates normally. A small camera and supporting equipment are then mounted in the vehicle, using a high-speed, high-capacity, low-latency 5G communication network to support the transmission of high-definition in-vehicle video, so that the vehicle can collect driver images in real time and exchange data with the cloud. The system automatically monitors and recognizes the driver's behavior state, and issues a timely voice warning and records the event when abnormal driving behavior is detected, completing the experimental deployment of the abnormal-driving-behavior algorithm model.
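Exchanging frames between the vehicle and the cloud server requires some serialization of the image plus metadata. The patent specifies no wire format, so the JSON-plus-base64 envelope and field names below are purely illustrative of the kind of message that would travel over the 5G link.

```python
import base64
import json

def pack_frame(frame_bytes, vehicle_id, timestamp):
    """Vehicle side: serialize one camera frame plus metadata for upload."""
    return json.dumps({
        "vehicle_id": vehicle_id,
        "timestamp": timestamp,
        "frame": base64.b64encode(frame_bytes).decode("ascii"),
    })

def unpack_frame(message):
    """Server side: decode the message back to raw bytes and metadata."""
    data = json.loads(message)
    return base64.b64decode(data["frame"]), data["vehicle_id"], data["timestamp"]

msg = pack_frame(b"\x00\x01\x02jpeg-bytes", "car-01", 1718000000)
frame, vid, ts = unpack_frame(msg)
```

The round trip is lossless, which is what lets the server-side background management system run the recognition model on exactly the frames the camera captured.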
This verification experiment method of the abnormal-driving-behavior algorithm model based on 5G communication makes full use of open-source online data sets on the basis of a self-acquired data set, enlarging them with traditional image-processing operations such as flipping, translation, and noise addition and with image enhancement by a generative adversarial network. Based on a mechanism of removing background interference, a model for segmenting the driver from a complex background is established, and the network structure of the high-accuracy open-source Mask R-CNN segmentation algorithm is modified to suit the driver scene and segment the driver image. On this basis, a model for locating the driver's key sub-regions is established, the influence of the key sub-regions on the abnormal-driving result is studied, and an improved YOLOv3 target-detection algorithm is proposed to identify the key sub-region images of the driver in the segmented image. The relationship between the driver's behavior state and time is further studied, the optimal fusion structure of CNN-LSTM spatio-temporal features is sought, and a CNN-LSTM fusion classification algorithm for the abnormal-driving-behavior recognition scene is designed to recognize the driver's state from three input images. Finally, a test-cart experiment system is built, a 5G network is used for communication and data transmission, and comprehensive verification analysis and experimental deployment of the system are carried out, so that the system automatically monitors and recognizes the driver's behavior state and gives timely voice warnings and records when abnormal driving behavior is detected. The method is highly practical, guarantees the accuracy and speed of the algorithm model, and determines its overall performance, which facilitates popularization.
The foregoing illustrates and describes the principles, main features, and advantages of the present invention. Those skilled in the art will understand that the invention is not limited to the embodiment described above, which merely illustrates the principle of the invention; various changes and modifications may be made without departing from the spirit and scope of the invention, and such changes and modifications fall within the scope of the claimed invention. The scope of the invention is defined by the appended claims and their equivalents.

Claims (4)

1. A verification experiment method of an abnormal-driving-behavior algorithm model based on 5G communication, characterized by comprising the following steps:
Step one: driver background segmentation
In the complex in-vehicle driving environment, taking different illumination conditions and different people into account, driving-environment data are first acquired in real time on the basis of data enhancement. The acquired data are then preprocessed and made into a data set so that the detection speed meets the real-time requirement, with detection accuracy specifically improved for this particular driving-environment scene. Finally, a driver background-image segmentation algorithm based on an improved Mask R-CNN is designed and used to segment the driver from the background image;
Step two: identifying and locating key sub-region images of the driver
Following step one, the image segmented by Mask R-CNN is first used as the network input. Allowing for the accuracy loss of small-object detection, multi-scale prediction is adopted to improve the detection accuracy of the driver's key sub-region images. The feature-extraction behavior of the network is studied by fusing different scales, different combination modes, and different quantities in combination with the network structure, and the feasibility of the network is verified through its recognition accuracy to obtain the network model best suited to this scene. An improved YOLOv3-based target-detection algorithm is then designed to identify the key sub-region images of the driver in the segmented image;
Step three: recognizing the driver's behavior state and its duration
Following step two, after the key sub-region images of the driver are obtained from the segmented image by the target-detection model, they are first input into a CNN to extract features, which are then flattened into vectors and input into an LSTM to seek the optimal fusion structure of CNN-LSTM spatio-temporal features. A CNN-LSTM fusion classification algorithm for the abnormal-driving-behavior recognition scene is then designed so that it recognizes the driver's action state and its duration from three input images;
Step four: verification analysis
Following step three, a test-cart experiment system is first built, with the on-board equipment communicating over a 5G network, and comprehensive verification analysis and experimental deployment of the system are carried out. The final algorithm model is then verified for accuracy and speed on a test set, and its parameters are reasonably tuned or the model is pruned according to the feedback, so that it achieves real-time detection while meeting the required accuracy; this verifies the performance of the CNN-LSTM fusion classification algorithm in the abnormal-driving-behavior recognition scene;
Step five: experimental deployment
Following step four, the deep-learning environment and framework required by the final model are first deployed on a server, and a background management system is developed for data processing and statistics so that the system operates normally. A small camera and supporting equipment are then mounted in the vehicle so that it can collect driver images in real time and exchange data with the cloud. Finally, the system automatically monitors and recognizes the driver's behavior state, and issues a timely voice warning and records the event when abnormal driving behavior is detected, completing the experimental deployment of the abnormal-driving-behavior algorithm model;
in step one, the preprocessing comprises flipping, translating, and adding noise to the image data; before the real-time-acquired data are preprocessed, they are enhanced with a generative adversarial network, and the Mask R-CNN network framework is then scientifically pruned using accuracy comparison and distillation;
in step two, before the improved YOLOv3-based target-detection algorithm is designed, the influence of the driver's body state in different regions on classification is studied and the key sub-regions are identified; a model for identifying and locating the driver's key sub-regions is then established, and finally the improved YOLOv3-based key sub-region locating and recognition algorithm is designed.
2. The verification experiment method of the abnormal-driving-behavior algorithm model based on 5G communication according to claim 1, characterized in that: in step three, after the key sub-region images of the driver are obtained from the segmented image by the target-detection model, the relationship between the driver's behavior state and time and the spatio-temporal feature-fusion mechanism are first studied in order to obtain the optimal fusion structure of CNN-LSTM spatio-temporal features.
3. The verification experiment method of the abnormal-driving-behavior algorithm model based on 5G communication according to claim 1, characterized in that: in step four, the test-cart experiment system receives the video images collected by the camera and forwards the received data to the server; it also detects abnormal driving behaviors and gives the driver voice reminders and records the data.
4. The verification experiment method of the abnormal-driving-behavior algorithm model based on 5G communication according to claim 1, characterized in that: in step five, the small camera and equipment mounted in the vehicle transmit data over a high-speed, high-capacity, low-latency 5G communication network to support the transmission of high-definition in-vehicle video.
CN202010503167.5A 2020-06-05 2020-06-05 Verification experiment method of abnormal driving behavior algorithm model based on 5G communication Active CN111724408B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010503167.5A CN111724408B (en) 2020-06-05 2020-06-05 Verification experiment method of abnormal driving behavior algorithm model based on 5G communication

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010503167.5A CN111724408B (en) 2020-06-05 2020-06-05 Verification experiment method of abnormal driving behavior algorithm model based on 5G communication

Publications (2)

Publication Number Publication Date
CN111724408A CN111724408A (en) 2020-09-29
CN111724408B true CN111724408B (en) 2021-09-03

Family

ID=72565898

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010503167.5A Active CN111724408B (en) 2020-06-05 2020-06-05 Verification experiment method of abnormal driving behavior algorithm model based on 5G communication

Country Status (1)

Country Link
CN (1) CN111724408B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112560649A (en) * 2020-12-09 2021-03-26 广州云从鼎望科技有限公司 Behavior action detection method, system, equipment and medium
CN112477873A (en) * 2020-12-15 2021-03-12 广东海洋大学 Auxiliary driving and vehicle safety management system based on Internet of vehicles
CN113095183A (en) * 2021-03-31 2021-07-09 西北工业大学 Micro-expression detection method based on deep neural network
CN115134537A (en) * 2022-01-18 2022-09-30 长城汽车股份有限公司 Image processing method and device and vehicle

Citations (3)

Publication number Priority date Publication date Assignee Title
CN104318237A (en) * 2014-10-28 2015-01-28 厦门大学 Fatigue driving warning method based on face identification
CN105336105A (en) * 2015-11-30 2016-02-17 宁波力芯科信息科技有限公司 Method, intelligent device and system for preventing fatigue driving
CN105769120A (en) * 2016-01-27 2016-07-20 深圳地平线机器人科技有限公司 Fatigue driving detection method and device

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US10592785B2 (en) * 2017-07-12 2020-03-17 Futurewei Technologies, Inc. Integrated system for detection of driver condition
CN109215292A (en) * 2018-08-10 2019-01-15 珠海研果科技有限公司 Fatigue driving assistance method and system
CN209471477U (en) * 2019-02-25 2019-10-08 湖南远眸科技有限公司 Real-time fatigue driving early-warning and intervention system based on 5G edge cloud network
CN109977930B (en) * 2019-04-29 2021-04-02 中国电子信息产业集团有限公司第六研究所 Fatigue driving detection method and device

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN104318237A (en) * 2014-10-28 2015-01-28 厦门大学 Fatigue driving warning method based on face identification
CN105336105A (en) * 2015-11-30 2016-02-17 宁波力芯科信息科技有限公司 Method, intelligent device and system for preventing fatigue driving
CN105769120A (en) * 2016-01-27 2016-07-20 深圳地平线机器人科技有限公司 Fatigue driving detection method and device

Non-Patent Citations (3)

Title
"Depth Video-based Two-stream Convolutional Neural Networks for Driver Fatigue Detection"; Xiaoxi Ma et al.; 2017 International Conference on Orange Technologies (ICOT); 2017-12-31; full text *
"Research on a Driver Fatigue Detection Method Based on Convolutional Recurrent Neural Networks"; Hu Zhiqiang; China Masters' Theses Full-text Database, Engineering Science and Technology II; 2019-07-15 (No. 7); chapters 3-5 *
"Driver Unsafe Behavior Recognition Based on Convolutional Neural Networks"; Tian Wenhong et al.; Journal of University of Electronic Science and Technology of China; 2019-05-31; vol. 48, no. 3; full text *

Also Published As

Publication number Publication date
CN111724408A (en) 2020-09-29

Similar Documents

Publication Publication Date Title
CN111724408B (en) Verification experiment method of abnormal driving behavior algorithm model based on 5G communication
CN109657552B (en) Vehicle type recognition device and method for realizing cross-scene cold start based on transfer learning
Anand et al. Crack-pot: Autonomous road crack and pothole detection
TWI430212B (en) Abnormal behavior detection system and method using automatic classification of multiple features
CN110795595B (en) Video structured storage method, device, equipment and medium based on edge calculation
KR101653278B1 (en) Face tracking system using colar-based face detection method
CN106384513B (en) A kind of fake-licensed car capture system and method based on intelligent transportation
CN107578091B (en) Pedestrian and vehicle real-time detection method based on lightweight deep network
CN102164270A (en) Intelligent video monitoring method and system capable of exploring abnormal events
CN105654733A (en) Front and back vehicle license plate recognition method and device based on video detection
CN107257161A (en) A kind of transformer station's disconnecting link remote control auxiliary check method and system based on state recognition algorithm
CN107103314A (en) A kind of fake license plate vehicle retrieval system based on machine vision
CN108921866A (en) A kind of image processing method and system
CN111310026A (en) Artificial intelligence-based yellow-related terrorism monitoring method
CN107832721B (en) Method and apparatus for outputting information
CN106096504A (en) A kind of model recognizing method based on unmanned aerial vehicle onboard platform
CN103914682A (en) Vehicle license plate recognition method and system
CN111539317A (en) Vehicle illegal driving detection method and device, computer equipment and storage medium
CN113642474A (en) Hazardous area personnel monitoring method based on YOLOV5
CN110620760A (en) FlexRay bus fusion intrusion detection method and detection device for SVM (support vector machine) and Bayesian network
CN113344967B (en) Dynamic target identification tracking method under complex background
KR101547255B1 (en) Object-based Searching Method for Intelligent Surveillance System
Katsamenis et al. A Few-Shot Attention Recurrent Residual U-Net for Crack Segmentation
CN108932503A (en) The recognition methods of Chinese herbaceous peony obstacle and device, storage medium, terminal under bad weather
CN107124577A (en) A kind of real-time alarm system for guarding against theft based on moving object detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant