CN111724408A - Verification experiment method of abnormal driving behavior algorithm model based on 5G communication - Google Patents
- Publication number
- CN111724408A (Application CN202010503167.5A)
- Authority
- CN
- China
- Prior art keywords
- driver
- image
- data
- algorithm
- behavior
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Multimedia (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a verification experiment method for an abnormal driving behavior algorithm model based on 5G communication, comprising the following steps: segmenting the driver from the background, identifying and locating the driver's key sub-region images, recognizing the driver's behavior state and its duration, and performing verification analysis and experimental deployment. The method first designs a driver background image segmentation algorithm based on an improved Mask-RCNN and segments the driver background image; it then designs an improved YOLOv3-based target detection algorithm to identify the driver's key sub-region images in the segmented image; next, it designs a CNN-LSTM fusion classification algorithm for the abnormal driving behavior recognition scene, which recognizes the driver's action state and its duration from three input images; finally, a trolley experiment system is constructed, comprehensive verification analysis and experimental deployment of the system are carried out, and the verification experiment of the algorithm model is thereby realized.
Description
Technical Field
The invention relates to the technical field of abnormal driving behavior detection, in particular to a verification experiment method of an abnormal driving behavior algorithm model based on 5G communication.
Background
The technology of detecting abnormal driver behavior and issuing a warning can help avoid traffic accidents and therefore has important application value and social significance. At the present stage, research and development in this field, both at home and abroad, are limited by the characteristics of abnormal behaviors. These behaviors are varied, including fatigue behaviors such as prolonged eye closure, yawning, eye rubbing, and nodding, as well as improper driving behaviors such as making phone calls, eating snacks, smoking, and repeatedly looking left and right. Because a driver's behavior is normal most of the time, abnormal behaviors occur at a generally low frequency, which makes data acquisition difficult; a reliable algorithm model is therefore needed to detect abnormal driver behavior.
However, most existing identification and classification algorithms proposed for driver abnormal behavior detection have not undergone effective and reliable verification analysis and experimental deployment, so neither the accuracy and speed of the algorithm model nor its comprehensive performance and practicability can be guaranteed. The invention therefore provides a verification experiment method for an abnormal driving behavior algorithm model under 5G communication to solve these problems in the prior art.
Disclosure of Invention
In view of the above problems, the invention aims to provide a verification experiment method for an abnormal driving behavior algorithm model based on 5G communication. The method first verifies and analyzes the precision and speed of the final algorithm model, then reasonably adjusts parameters and prunes the model, next deploys a deep learning environment and framework on a server and develops a background management system, and finally deploys a vehicle-mounted and server experiment based on 5G communication. The method is simple, reliable, and highly practical.
In order to achieve the purpose of the invention, the invention is realized by the following technical scheme: the verification experiment method of the abnormal driving behavior algorithm model based on 5G communication comprises the following steps:
Step one: driver background segmentation
In the complex in-vehicle driving environment, and taking different illumination conditions and different people into account, driving environment data are first acquired in real time on the basis of data enhancement. The acquired data are then preprocessed and made into a data set, so that detection speed meets the real-time requirement and detection precision is specifically improved for the particular scenes of the driving environment. Finally, a driver background image segmentation algorithm based on an improved Mask-RCNN is designed and used to segment the driver background image;
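The flipping, translation, and noise-addition preprocessing this method relies on can be sketched as below. This is an illustrative sketch in plain Python on a toy grayscale image (a list of pixel rows), not the patent's actual pipeline; the function names and noise parameters are invented for illustration.

```python
import random

def hflip(img):
    """Horizontally flip a 2D grayscale image (list of rows)."""
    return [row[::-1] for row in img]

def translate(img, dx, pad=0):
    """Shift each row right by dx pixels, padding the left edge with `pad`."""
    w = len(img[0])
    return [([pad] * dx + row)[:w] for row in img]

def add_noise(img, sigma=8.0, seed=0):
    """Add clipped Gaussian noise to every pixel (values stay in 0..255)."""
    rng = random.Random(seed)
    return [[min(255, max(0, int(p + rng.gauss(0, sigma)))) for p in row]
            for row in img]

img = [[10, 20, 30],
       [40, 50, 60]]
# Each augmented copy enlarges the data set, as described in step one:
augmented = [hflip(img), translate(img, 1), add_noise(img)]
```

In a real pipeline the same operations would be applied to camera frames (e.g. NumPy arrays) rather than nested lists.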
Step two: identifying and locating the driver's key sub-region images
Following step one, the image segmented by Mask-RCNN is used as the network input. With the precision loss of small-object detection taken into account, multi-scale prediction is adopted to improve the detection precision of the driver's key sub-region images. By fusing features at different scales, in different combinations, and in different quantities, and studying the feature-extraction characteristics of the network in combination with its structure, the feasibility of the network is verified through the recognition precision and the network model best suited to this scene is obtained. An improved YOLOv3-based target detection algorithm is then designed to identify the driver's key sub-region images in the segmented image;
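Feasibility verification through recognition precision can be illustrated with an intersection-over-union (IoU) check between predicted and ground-truth boxes — a common way to score a detector such as the improved YOLOv3 described above. The 0.5 IoU threshold is a conventional choice, not a figure from the patent.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def detection_precision(preds, truths, thr=0.5):
    """Fraction of predicted boxes that match some ground-truth box at IoU >= thr."""
    hits = sum(1 for p in preds if any(iou(p, t) >= thr for t in truths))
    return hits / len(preds) if preds else 0.0

# One correct detection and one spurious one -> precision 0.5:
precision = detection_precision([(0, 0, 2, 2), (5, 5, 6, 6)], [(0, 0, 2, 2)])
```

Comparing this precision across candidate network variants (different scales, fusion modes, and quantities) is one way to pick the model best suited to the scene.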
Step three: recognizing the driver's behavior state and its duration
Following step two, after the driver's key sub-region images detected by the target detection model are obtained from the segmented image, they are first input into the CNN to extract feature maps, which are then flattened into vectors and input into the LSTM to find the optimal fusion structure for CNN-LSTM spatio-temporal features. A CNN-LSTM fusion classification algorithm for the abnormal driving behavior recognition scene is then designed, which recognizes the driver's action state and its duration from three input images;
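The idea of recognizing an action state and its duration from three input frames can be sketched as a late temporal fusion: per-frame class scores (which a CNN would produce) are averaged across the frame window, and the persistence of the most recent state is tracked over time. The class names, the averaging rule, and the frame rate are illustrative assumptions; the patent's actual model is a trained CNN-LSTM, not this hand-written fusion.

```python
CLASSES = ["normal", "yawning", "phone_call"]  # hypothetical label set

def fuse_frames(frame_scores):
    """Average per-frame class scores over a short window (late fusion)."""
    n = len(frame_scores)
    fused = [sum(f[i] for f in frame_scores) / n for i in range(len(CLASSES))]
    best = max(range(len(CLASSES)), key=fused.__getitem__)
    return CLASSES[best], fused[best]

def state_duration(states, fps=10):
    """Seconds for which the most recent state has persisted."""
    last = states[-1]
    run = 0
    for s in reversed(states):
        if s != last:
            break
        run += 1
    return run / fps

# Scores for three consecutive frames, as a CNN classifier might emit them:
window = [[0.2, 0.7, 0.1], [0.1, 0.8, 0.1], [0.3, 0.6, 0.1]]
label, conf = fuse_frames(window)  # fused decision over the 3-frame window
```

An LSTM replaces the fixed averaging here with learned temporal weighting, which is what the sought CNN-LSTM fusion structure provides.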
Step four: verification analysis
Following step three, a trolley experiment system is first constructed, with the vehicle-mounted equipment communicating over a 5G network, and comprehensive verification analysis and experimental deployment of the system are carried out. The final algorithm model is then verified for precision and speed on a test set, and reasonable parameter adjustment or pruning is performed according to the feedback from the results, so that the algorithm model achieves real-time detection while meeting the required precision; in this way, the performance of the CNN-LSTM fusion classification algorithm in the abnormal driving behavior recognition scene is verified;
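The verification loop above — measure accuracy and throughput on a test set, then decide whether tuning or pruning is needed — can be sketched as follows. The model here is a stand-in callable, and the real-time and precision targets (30 FPS, 95%) are assumed example thresholds, not figures from the patent.

```python
import time

def evaluate(model, test_set, fps_target=30.0, acc_target=0.95):
    """Measure accuracy and throughput, and report whether to prune or tune."""
    correct = 0
    start = time.perf_counter()
    for image, label in test_set:
        if model(image) == label:
            correct += 1
    elapsed = time.perf_counter() - start
    acc = correct / len(test_set)
    fps = len(test_set) / elapsed if elapsed > 0 else float("inf")
    return {
        "accuracy": acc,
        "fps": fps,
        "needs_pruning": fps < fps_target,  # too slow: shrink the model
        "needs_tuning": acc < acc_target,   # too inaccurate: adjust parameters
    }

# A trivial stand-in "model" that predicts the image's first element:
dummy_model = lambda img: img[0]
report = evaluate(dummy_model, [((1,), 1), ((0,), 0), ((1,), 0)])
```

The `needs_pruning` / `needs_tuning` flags correspond to the result feedback that drives parameter adjustment or pruning in this step.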
Step five: experimental deployment
Following step four, the deep learning environment and framework required by the final model are deployed on the server, and a background management system is developed for data processing and statistics so that the system operates normally. A small camera and associated equipment are then mounted in the vehicle so that it can collect driver images in real time and exchange data with the cloud. The system automatically monitors and recognizes the driver's behavior state and, when abnormal driving behavior is detected, issues a timely voice warning and records the event, thereby completing the experimental deployment of the abnormal driving behavior algorithm model.
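The server-side part of this deployment — recording abnormal-behavior events received from the vehicle and deciding when to warn — can be sketched as a minimal event recorder for the background management system. The event schema and field names are invented for illustration; the patent does not specify them.

```python
from datetime import datetime, timezone

class AbnormalBehaviorLog:
    """Records abnormal-driving events and decides when to warn the driver."""

    def __init__(self):
        self.events = []

    def report(self, driver_id, behavior, duration_s):
        """Store one detection result sent from the vehicle over 5G."""
        event = {
            "driver": driver_id,
            "behavior": behavior,
            "duration_s": duration_s,
            "time": datetime.now(timezone.utc).isoformat(),
            "warn": behavior != "normal",  # any abnormal state triggers a voice warning
        }
        self.events.append(event)
        return event

    def stats(self):
        """Per-behavior counts for the background management system."""
        counts = {}
        for e in self.events:
            counts[e["behavior"]] = counts.get(e["behavior"], 0) + 1
        return counts

log = AbnormalBehaviorLog()
log.report("driver-01", "yawning", 1.2)
log.report("driver-01", "normal", 0.0)
```

In the deployed system the `report` call would sit behind a network endpoint fed by the in-vehicle device, with the `warn` flag driving the voice reminder.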
A further improvement lies in that: in step one, the preprocessing comprises flipping, translating, and adding noise to the image data; before this preprocessing, the data acquired in real time are enhanced through a generative adversarial network, and the Mask-RCNN network framework is then scientifically pruned using precision comparison and distillation methods.
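Pruning guided by precision comparison can be illustrated with simple magnitude pruning: the smallest-magnitude weights are zeroed, and the change is kept only if accuracy stays within a tolerance. This is a generic sketch of the idea on a flat weight list, not the patent's specific Mask-RCNN pruning or its distillation procedure.

```python
def magnitude_prune(weights, keep_ratio=0.5):
    """Zero out the smallest-magnitude weights, keeping at least `keep_ratio` of them."""
    k = max(1, int(len(weights) * keep_ratio))
    threshold = sorted(abs(w) for w in weights)[-k]  # k-th largest magnitude
    return [w if abs(w) >= threshold else 0.0 for w in weights]

def prune_if_accuracy_holds(weights, accuracy_fn, tolerance=0.01, keep_ratio=0.5):
    """Accept the pruned weights only if accuracy drops by at most `tolerance`."""
    baseline = accuracy_fn(weights)
    pruned = magnitude_prune(weights, keep_ratio)
    return pruned if baseline - accuracy_fn(pruned) <= tolerance else weights

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02]
pruned = magnitude_prune(w, keep_ratio=0.5)  # the three largest magnitudes survive
```

Real frameworks prune whole channels or layers and retrain afterwards; the precision-comparison gate shown in `prune_if_accuracy_holds` is the part that mirrors this step.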
A further improvement lies in that: in step two, before the improved YOLOv3-based target detection algorithm is designed, the influence of the body states of different driver regions on classification is studied and the key sub-regions are found; a model for identifying and locating the driver's key sub-regions is then established, and finally the improved YOLOv3-based key sub-region locating and identification algorithm is designed.
A further improvement lies in that: in step three, after the driver's key sub-region images detected by the target detection model are obtained from the segmented image, the relationship between the driver's behavior state and time, together with the spatio-temporal feature fusion mechanism, is first studied in order to obtain the optimal fusion structure for CNN-LSTM spatio-temporal features.
A further improvement lies in that: in step four, the trolley experiment system receives the video images collected by the camera and transmits the received data to the server; it also detects abnormal driving behaviors and performs voice reminding and data recording for the driver.
A further improvement lies in that: in step five, the small camera and equipment mounted in the vehicle transmit data over a 5G communication network with high speed, large capacity, and low latency, so as to support transmission of high-definition video from inside the vehicle.
The invention has the following beneficial effects. On the basis of a self-acquired data set, the invention makes full use of open-source online data sets and enlarges the data set with traditional image-processing operations such as flipping, translation, and noise addition, while also enlarging it through image enhancement with a generative adversarial network. Based on the mechanism of removing background interference, a model for segmenting the driver from a complex background is established, and the network structure of the high-precision open-source Mask-RCNN segmentation algorithm is modified to suit the driver scene for segmenting the driver image. On this basis, a model for locating the driver's key sub-regions is established, the influence of the key sub-regions on the abnormal-driving result is studied, and an improved YOLOv3 target detection algorithm is proposed to identify the driver's key sub-region images in the segmented image. The relationship between the driver's behavior state and time is further studied, the optimal fusion structure for CNN-LSTM spatio-temporal features is sought, and a CNN-LSTM fusion classification algorithm for the abnormal driving behavior recognition scene is designed, which recognizes the driver's state from three input images. Finally, a trolley experiment system is constructed, a 5G network is used for communication and data transmission, and comprehensive verification analysis and experimental deployment of the system are carried out. The system automatically monitors and recognizes the driver's behavior state and, when abnormal driving behavior is detected, issues a timely voice warning and records the event. The method is highly practical, guarantees the precision and speed of the algorithm model, and determines its comprehensive performance, which facilitates popularization.
Drawings
FIG. 1 is a flow chart of the steps of the present invention;
FIG. 2 is a technical roadmap for the present invention;
FIG. 3 is a flow chart of an experimental protocol of the present invention.
Detailed Description
In order to further understand the present invention, it is described in detail below with reference to the following examples, which serve only to explain the invention and are not to be construed as limiting its scope.
As shown in Figs. 1, 2 and 3, this embodiment provides a verification experiment method of an abnormal driving behavior algorithm model based on 5G communication, which comprises the following steps:
Step one: driver background segmentation
In the complex in-vehicle driving environment, and taking different illumination conditions and different people into account, driving environment data are first acquired in real time on the basis of data enhancement through a generative adversarial network. The acquired data are then flipped, translated, and augmented with noise, and the preprocessed data are made into a data set. The Mask-RCNN network framework is scientifically pruned using methods such as precision comparison and distillation, so that detection speed meets the real-time requirement and detection precision is specifically improved for the particular scenes of the driving environment. Finally, a driver background image segmentation algorithm based on the improved Mask-RCNN is designed and used to segment the driver background image;
Step two: identifying and locating the driver's key sub-region images
Following step one, the images segmented by Mask-RCNN are used as the network input. With the precision loss of small-object detection taken into account, multi-scale prediction is adopted to improve the detection precision of the driver's key sub-region images. By fusing features at different scales, in different combinations, and in different quantities, and studying the feature-extraction characteristics of the network in combination with its structure, the feasibility of the network is verified through the recognition precision and the network model best suited to this scene is obtained. The influence of the body states of different driver regions on classification is then studied and the key sub-regions are found, after which a model for identifying and locating the driver's key sub-regions is established. Finally, an improved YOLOv3-based target detection algorithm is designed to identify the driver's key sub-region images in the segmented image;
Step three: recognizing the driver's behavior state and its duration
Following step two, after the driver's key sub-region images detected by the target detection model are obtained from the segmented image, the relationship between the driver's behavior state and time, together with the spatio-temporal feature fusion mechanism, is studied. The key sub-region images are first input into the CNN to extract feature maps, which are then flattened into vectors and input into the LSTM to find the optimal fusion structure for CNN-LSTM spatio-temporal features. A CNN-LSTM fusion classification algorithm for the abnormal driving behavior recognition scene is then designed, which recognizes the driver's action state and its duration from three input images;
Step four: verification analysis
Following step three, a trolley experiment system is first constructed to receive the video images collected by the camera and transmit the received data to the server; the system also detects abnormal driving behaviors and performs voice reminding and data recording for the driver, with the vehicle-mounted equipment communicating over a 5G network. Comprehensive verification analysis and experimental deployment of the system are carried out. The final algorithm model is then verified for precision and speed on a test set, and reasonable parameter adjustment or pruning is performed according to the feedback from the results, so that the algorithm model achieves real-time detection while meeting the required precision; in this way, the performance of the CNN-LSTM fusion classification algorithm in the abnormal driving behavior recognition scene is verified;
Step five: experimental deployment
Following step four, the deep learning environment and framework required by the final model are deployed on the server, and a background management system is developed for data processing and statistics so that the system operates normally. A small camera and associated equipment are then mounted in the vehicle, using a 5G communication network with high speed, large capacity, and low latency to support transmission of high-definition video from inside the vehicle, so that the vehicle can collect driver images in real time and exchange data with the cloud. The system automatically monitors and recognizes the driver's behavior state and, when abnormal driving behavior is detected, issues a timely voice warning and records the event, thereby completing the experimental deployment of the abnormal driving behavior algorithm model.
This verification experiment method of an abnormal driving behavior algorithm model based on 5G communication makes full use of open-source online data sets on the basis of a self-acquired data set, enlarging it with traditional image-processing operations such as flipping, translation, and noise addition, as well as with image enhancement through a generative adversarial network. Based on the mechanism of removing background interference, a model for segmenting the driver from a complex background is established, and the network structure of the high-precision open-source Mask-RCNN segmentation algorithm is modified to suit the driver scene for segmenting the driver image. On this basis, a model for locating the driver's key sub-regions is established, the influence of the key sub-regions on the abnormal-driving result is studied, and an improved YOLOv3-based target detection algorithm is proposed to identify the driver's key sub-region images in the segmented image. The relationship between the driver's behavior state and time is further studied, the optimal fusion structure for CNN-LSTM spatio-temporal features is sought, and a CNN-LSTM fusion classification algorithm for the abnormal driving behavior recognition scene is designed, which recognizes the driver's state from three input images. Finally, a trolley experiment system is constructed, a 5G network is used for communication and data transmission, and comprehensive verification analysis and experimental deployment of the system are carried out. The system automatically monitors and recognizes the driver's behavior state and issues a timely voice warning and record when abnormal driving behavior is detected. The method is highly practical, guarantees the precision and speed of the algorithm model, and determines its comprehensive performance, which facilitates popularization.
The foregoing illustrates and describes the principles, general features, and advantages of the present invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above; the embodiments and the description in the specification merely illustrate the principle of the invention, and various changes and modifications may be made without departing from the spirit and scope of the invention, all of which fall within the scope of the claimed invention. The scope of the invention is defined by the appended claims and their equivalents.
Claims (6)
1. A verification experiment method of an abnormal driving behavior algorithm model based on 5G communication, characterized by comprising the following steps:
Step one: driver background segmentation
In the complex in-vehicle driving environment, and taking different illumination conditions and different people into account, driving environment data are first acquired in real time on the basis of data enhancement. The acquired data are then preprocessed and made into a data set, so that detection speed meets the real-time requirement and detection precision is specifically improved for the particular scenes of the driving environment. Finally, a driver background image segmentation algorithm based on an improved Mask-RCNN is designed and used to segment the driver background image;
Step two: identifying and locating the driver's key sub-region images
Following step one, the image segmented by Mask-RCNN is used as the network input. With the precision loss of small-object detection taken into account, multi-scale prediction is adopted to improve the detection precision of the driver's key sub-region images. By fusing features at different scales, in different combinations, and in different quantities, and studying the feature-extraction characteristics of the network in combination with its structure, the feasibility of the network is verified through the recognition precision and the network model best suited to this scene is obtained. An improved YOLOv3-based target detection algorithm is then designed to identify the driver's key sub-region images in the segmented image;
Step three: recognizing the driver's behavior state and its duration
Following step two, after the driver's key sub-region images detected by the target detection model are obtained from the segmented image, they are first input into the CNN to extract feature maps, which are then flattened into vectors and input into the LSTM to find the optimal fusion structure for CNN-LSTM spatio-temporal features. A CNN-LSTM fusion classification algorithm for the abnormal driving behavior recognition scene is then designed, which recognizes the driver's action state and its duration from three input images;
Step four: verification analysis
Following step three, a trolley experiment system is first constructed, with the vehicle-mounted equipment communicating over a 5G network, and comprehensive verification analysis and experimental deployment of the system are carried out. The final algorithm model is then verified for precision and speed on a test set, and reasonable parameter adjustment or pruning is performed according to the feedback from the results, so that the algorithm model achieves real-time detection while meeting the required precision; in this way, the performance of the CNN-LSTM fusion classification algorithm in the abnormal driving behavior recognition scene is verified;
Step five: experimental deployment
Following step four, the deep learning environment and framework required by the final model are deployed on the server, and a background management system is developed for data processing and statistics so that the system operates normally. A small camera and associated equipment are then mounted in the vehicle so that it can collect driver images in real time and exchange data with the cloud. The system automatically monitors and recognizes the driver's behavior state and, when abnormal driving behavior is detected, issues a timely voice warning and records the event, thereby completing the experimental deployment of the abnormal driving behavior algorithm model.
2. The verification experiment method of an abnormal driving behavior algorithm model based on 5G communication according to claim 1, characterized in that: in step one, the preprocessing comprises flipping, translating, and adding noise to the image data; before this preprocessing, the data acquired in real time are enhanced through a generative adversarial network, and the Mask-RCNN network framework is then scientifically pruned using precision comparison and distillation methods.
3. The verification experiment method of an abnormal driving behavior algorithm model based on 5G communication according to claim 1, characterized in that: in step two, before the improved YOLOv3-based target detection algorithm is designed, the influence of the body states of different driver regions on classification is studied and the key sub-regions are found; a model for identifying and locating the driver's key sub-regions is then established, and finally the improved YOLOv3-based key sub-region locating and identification algorithm is designed.
4. The verification experiment method of an abnormal driving behavior algorithm model based on 5G communication according to claim 1, characterized in that: in step three, after the driver's key sub-region images detected by the target detection model are obtained from the segmented image, the relationship between the driver's behavior state and time, together with the spatio-temporal feature fusion mechanism, is first studied in order to obtain the optimal fusion structure for CNN-LSTM spatio-temporal features.
5. The verification experiment method of an abnormal driving behavior algorithm model based on 5G communication according to claim 1, characterized in that: in step four, the trolley experiment system receives the video images collected by the camera and transmits the received data to the server; it also detects abnormal driving behaviors and performs voice reminding and data recording for the driver.
6. The verification experiment method of an abnormal driving behavior algorithm model based on 5G communication according to claim 1, characterized in that: in step five, the small camera and equipment mounted in the vehicle transmit data over a 5G communication network with high speed, large capacity, and low latency, so as to support transmission of high-definition video from inside the vehicle.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010503167.5A CN111724408B (en) | 2020-06-05 | 2020-06-05 | Verification experiment method of abnormal driving behavior algorithm model based on 5G communication |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010503167.5A CN111724408B (en) | 2020-06-05 | 2020-06-05 | Verification experiment method of abnormal driving behavior algorithm model based on 5G communication |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111724408A true CN111724408A (en) | 2020-09-29 |
CN111724408B CN111724408B (en) | 2021-09-03 |
Family
ID=72565898
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010503167.5A Active CN111724408B (en) | 2020-06-05 | 2020-06-05 | Verification experiment method of abnormal driving behavior algorithm model based on 5G communication |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111724408B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112477873A (en) * | 2020-12-15 | 2021-03-12 | 广东海洋大学 | Auxiliary driving and vehicle safety management system based on Internet of vehicles |
CN112560649A (en) * | 2020-12-09 | 2021-03-26 | 广州云从鼎望科技有限公司 | Behavior action detection method, system, equipment and medium |
CN112906515A (en) * | 2021-02-03 | 2021-06-04 | 珠海研果科技有限公司 | In-vehicle abnormal behavior identification method and system, electronic device and storage medium |
CN113095183A (en) * | 2021-03-31 | 2021-07-09 | 西北工业大学 | Micro-expression detection method based on deep neural network |
CN113536989A (en) * | 2021-06-29 | 2021-10-22 | 广州博通信息技术有限公司 | Refrigerator frosting monitoring method and system based on camera video frame-by-frame analysis |
WO2023138537A1 (en) * | 2022-01-18 | 2023-07-27 | 长城汽车股份有限公司 | Image processing method and apparatus, terminal device and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104318237A (en) * | 2014-10-28 | 2015-01-28 | 厦门大学 | Fatigue driving warning method based on face identification |
CN105336105A (en) * | 2015-11-30 | 2016-02-17 | 宁波力芯科信息科技有限公司 | Method, intelligent device and system for preventing fatigue driving |
CN105769120A (en) * | 2016-01-27 | 2016-07-20 | 深圳地平线机器人科技有限公司 | Fatigue driving detection method and device |
CN109215292A (en) * | 2018-08-10 | 2019-01-15 | 珠海研果科技有限公司 | A fatigue driving assistance method and system |
US20190019068A1 (en) * | 2017-07-12 | 2019-01-17 | Futurewei Technologies, Inc. | Integrated system for detection of driver condition |
CN109977930A (en) * | 2019-04-29 | 2019-07-05 | 中国电子信息产业集团有限公司第六研究所 | Method for detecting fatigue driving and device |
CN209471477U (en) * | 2019-02-25 | 2019-10-08 | 湖南远眸科技有限公司 | A real-time fatigue driving early-warning and intervention system based on 5G edge-cloud networking |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104318237A (en) * | 2014-10-28 | 2015-01-28 | 厦门大学 | Fatigue driving warning method based on face identification |
CN105336105A (en) * | 2015-11-30 | 2016-02-17 | 宁波力芯科信息科技有限公司 | Method, intelligent device and system for preventing fatigue driving |
CN105769120A (en) * | 2016-01-27 | 2016-07-20 | 深圳地平线机器人科技有限公司 | Fatigue driving detection method and device |
US20190019068A1 (en) * | 2017-07-12 | 2019-01-17 | Futurewei Technologies, Inc. | Integrated system for detection of driver condition |
CN109215292A (en) * | 2018-08-10 | 2019-01-15 | 珠海研果科技有限公司 | A fatigue driving assistance method and system |
CN209471477U (en) * | 2019-02-25 | 2019-10-08 | 湖南远眸科技有限公司 | A real-time fatigue driving early-warning and intervention system based on 5G edge-cloud networking |
CN109977930A (en) * | 2019-04-29 | 2019-07-05 | 中国电子信息产业集团有限公司第六研究所 | Method for detecting fatigue driving and device |
Non-Patent Citations (3)
Title |
---|
XIAOXI MA et al.: "Depth Video-based Two-stream Convolutional Neural Networks for Driver Fatigue Detection", 2017 International Conference on Orange Technologies (ICOT) * |
TIAN Wenhong et al.: "Driver Unsafe Behavior Recognition Based on Convolutional Neural Network", Journal of University of Electronic Science and Technology of China * |
HU Zhiqiang: "Research on Driver Fatigue Detection Method Based on Convolutional Recurrent Neural Network", China Masters' Theses Full-text Database, Engineering Science and Technology II * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112560649A (en) * | 2020-12-09 | 2021-03-26 | 广州云从鼎望科技有限公司 | Behavior action detection method, system, equipment and medium |
CN112477873A (en) * | 2020-12-15 | 2021-03-12 | 广东海洋大学 | Auxiliary driving and vehicle safety management system based on Internet of vehicles |
CN112477873B (en) * | 2020-12-15 | 2024-04-30 | 广东海洋大学 | Auxiliary driving and vehicle safety management system based on Internet of vehicles |
CN112906515A (en) * | 2021-02-03 | 2021-06-04 | 珠海研果科技有限公司 | In-vehicle abnormal behavior identification method and system, electronic device and storage medium |
CN113095183A (en) * | 2021-03-31 | 2021-07-09 | 西北工业大学 | Micro-expression detection method based on deep neural network |
CN113536989A (en) * | 2021-06-29 | 2021-10-22 | 广州博通信息技术有限公司 | Refrigerator frosting monitoring method and system based on camera video frame-by-frame analysis |
WO2023138537A1 (en) * | 2022-01-18 | 2023-07-27 | 长城汽车股份有限公司 | Image processing method and apparatus, terminal device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN111724408B (en) | 2021-09-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111724408B (en) | Verification experiment method of abnormal driving behavior algorithm model based on 5G communication | |
CN109657552B (en) | Vehicle type recognition device and method for realizing cross-scene cold start based on transfer learning | |
Anand et al. | Crack-pot: Autonomous road crack and pothole detection | |
CN110795595B (en) | Video structured storage method, device, equipment and medium based on edge calculation | |
TWI430212B (en) | Abnormal behavior detection system and method using automatic classification of multiple features | |
KR101653278B1 (en) | Face tracking system using colar-based face detection method | |
CN106384513B (en) | A fake-license-plate vehicle capture system and method based on intelligent transportation | |
CN104915655A (en) | Multi-path monitor video management method and device | |
CN107578091B (en) | Pedestrian and vehicle real-time detection method based on lightweight deep network | |
CN111310026A (en) | Artificial intelligence-based yellow-related terrorism monitoring method | |
CN105654733A (en) | Front and back vehicle license plate recognition method and device based on video detection | |
CN110222596B (en) | Driver behavior analysis anti-cheating method based on vision | |
CN110738857A (en) | vehicle violation evidence obtaining method, device and equipment | |
CN107103314A (en) | A fake-license-plate vehicle retrieval system based on machine vision | |
CN113642474A (en) | Hazardous area personnel monitoring method based on YOLOV5 | |
CN104615986A (en) | Method for utilizing multiple detectors to conduct pedestrian detection on video images of scene change | |
CN103914682A (en) | Vehicle license plate recognition method and system | |
CN111209905B (en) | Defect shielding license plate recognition method based on combination of deep learning and OCR technology | |
CN111539317A (en) | Vehicle illegal driving detection method and device, computer equipment and storage medium | |
Katsamenis et al. | A few-shot attention recurrent residual U-Net for crack segmentation | |
CN110738080A (en) | method, device and electronic equipment for identifying modified motor vehicle | |
KR20200123324A (en) | A method for pig segmentation using connected component analysis and yolo algorithm | |
CN113837222A (en) | Cloud-edge cooperative machine learning deployment application method and device for millimeter wave radar intersection traffic monitoring system | |
CN116977995A (en) | Vehicle-mounted front license plate recognition method and system | |
Sreedhar et al. | Autotrack: a framework for query-based vehicle tracking and retrieval from CCTV footages using machine learning at the edge |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||