CN108216252B - Subway driver vehicle-mounted driving behavior analysis method, vehicle-mounted terminal and system - Google Patents


Info

Publication number
CN108216252B
CN108216252B (application CN201711477182.1A)
Authority
CN
China
Prior art keywords
driver
driving
vehicle
driving action
database
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711477182.1A
Other languages
Chinese (zh)
Other versions
CN108216252A (en)
Inventor
田寅
王经纬
龚明
唐海川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CRRC Industry Institute Co Ltd
Original Assignee
CRRC Industry Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CRRC Industry Institute Co Ltd filed Critical CRRC Industry Institute Co Ltd
Priority to CN201711477182.1A priority Critical patent/CN108216252B/en
Publication of CN108216252A publication Critical patent/CN108216252A/en
Application granted granted Critical
Publication of CN108216252B publication Critical patent/CN108216252B/en
Legal status: Active


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08: Estimation or calculation of such driving parameters related to drivers or passengers
    • B60W40/09: Driving style or behaviour
    • B60W2040/0818: Inactivity or incapacity of driver
    • B60W2040/0827: Inactivity or incapacity of driver due to sleepiness
    • B60W2040/0836: Inactivity or incapacity of driver due to alcohol
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597: Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G07: CHECKING-DEVICES
    • G07C: TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00: Registering or indicating the working of vehicles
    • G07C5/08: Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C5/0841: Registering performance data
    • G07C5/085: Registering performance data using electronic data carriers
    • G07C5/0866: Registering performance data using electronic data carriers, the electronic data carrier being a digital video recorder in combination with a video camera

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Mathematical Physics (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a subway driver vehicle-mounted driving behavior analysis method, a vehicle-mounted terminal and a system. The method comprises the following steps: constructing a vehicle-mounted driving behavior database of subway drivers; training, on the basis of the vehicle-mounted driving behavior database and using a deep learning method, a driver driving action recognition model and a driver facial state recognition model; acquiring a real-time working video of a subway driver and extracting consecutive multi-frame images at a preset frame rate; and acquiring the driver's driving action trajectory in the consecutive multi-frame images, recognizing the trajectory with the driver driving action recognition model, and recognizing the driver's facial state in the consecutive multi-frame images with the driver facial state recognition model. The invention can monitor and intelligently evaluate the driver's driving behavior and driving state in real time, helps to discover possible human misoperation as early as possible, and ensures driving safety.

Description

Subway driver vehicle-mounted driving behavior analysis method, vehicle-mounted terminal and system
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a subway driver vehicle-mounted driving behavior analysis method, a vehicle-mounted terminal and a system.
Background
In recent years, the average running speed and service density of urban rail transit trains have risen continuously. Although train operation safety equipment and drivers' standard operation level have improved greatly, events that seriously threaten operation safety, such as inattention or incapacitation caused by accidental injury, may still occur while a driver is on duty.
At present, the existing train operation safety equipment for urban rail transit in China basically lacks a driver vigilance control function: it neither recognizes nor alarms on the driver's personal working state, so its functionality cannot meet actual operation requirements. The vehicle operation monitoring and recording devices used on various subway and light rail lines store the collected operation data in on-board memory while the vehicle runs; after the vehicle stops, a designated person downloads the information and delivers it to the ground management department, where technical staff finally store and analyze the data through a ground information processing system. Such train operation monitoring devices have an obvious defect: they cannot alarm on the driver's operating condition in real time and therefore cannot meet the actual needs of rail transit operation management departments. Ground managers can only wait passively in the office for drivers to submit operation records, and can only record, manage and analyze historical operation data; they lack the ability to monitor and manage, in real time, the dynamic data generated by the driver while the train is running. Meanwhile, ground managers have no visual understanding of the driver's driving state, so supervision of the driver's working condition is clearly weak. As a result, management lacks real-time and intuitive data and materials when accidents occur, which hinders analyzing accident causes and handling various events.
Therefore, it is necessary to provide a method or system capable of recognizing the driver's personal working state and raising alarms on it, so as to meet the actual operation requirements of urban rail transit trains.
Disclosure of Invention
The invention provides a subway driver vehicle-mounted driving behavior analysis method, a vehicle-mounted terminal and a system, to solve the problems that the existing train operation safety equipment of urban rail transit basically lacks a driver vigilance control function, neither recognizes nor alarms on the driver's personal working state, and cannot meet actual operation requirements.
According to one aspect of the invention, a subway driver vehicle-mounted driving behavior analysis method is provided, which comprises the following steps:
s1, constructing a vehicle-mounted driving behavior database of the subway driver, wherein the vehicle-mounted driving behavior database comprises a driving action database and a face state database;
s2, respectively obtaining a driver driving action recognition model and a driver face state recognition model by training based on the vehicle-mounted driving behavior database by adopting a deep learning method;
s3, acquiring a real-time working video of a subway driver and extracting continuous multi-frame images according to a preset frame rate;
s4, acquiring the driving action track of the driver in the continuous multi-frame images, recognizing the driving action track by using the driver driving action recognition model, and judging whether the driving action of the driver is in compliance; and identifying the face state of the driver in the continuous multi-frame images by using the driver face state identification model, and detecting whether the face state of the driver is normal or not.
Wherein the step S1 further includes:
s11, acquiring a standard working video of a subway driver by using an infrared vision sensor, uniformly extracting an image frame from the standard working video after the acquisition process is finished, labeling coordinate points of positions of two hands of the driver and a timestamp in the image frame, storing the labeled image frame, and generating a driving action database;
s12, continuously shooting the working video of the subway driver by using the camera, intercepting the working video into images according to a certain frame rate, screening effective pictures capable of distinguishing the face state of the driver from the intercepted images, storing the effective pictures, and generating a face state database.
Wherein the step S2 further includes:
s21, based on the driving action database, obtaining a driver driving action recognition model through training by adopting a time sequence-based action recognition method;
s22, constructing a deep learning network model for state identification, and training the deep learning network model for state identification by using the facial state database to obtain a driver facial state identification model.
Wherein the step S21 further includes:
s211, aiming at the same action, extracting all image frames describing the action in the driving action database, and repeatedly training the action for multiple times;
s212, recording the two-hand track points at any moment in the action process as average values obtained by multiple times of training, recording the two-hand track points on a picture according to a time sequence, generating a track graph, recording all track points of the action time sequence in the track graph, and setting a threshold value for the position of each track point in the track graph;
and S213, repeating the steps S211 and S212 until the training of all the actions in the driving action database is completed.
The deep learning network model for state recognition constructed in step S22 includes:
a first convolution layer, a first pooling layer, a second convolution layer, a second pooling layer, a third convolution layer, a fourth convolution layer, a third pooling layer, a fifth convolution layer, a sixth convolution layer, a fourth pooling layer, a seventh convolution layer, an eighth convolution layer, a fifth pooling layer, a full connection layer, a Softmax output layer and a self-encoder, which are sequentially connected.
Wherein the step of training the deep learning network model for state recognition by using the facial state database in step S22 to obtain a driver facial state recognition model further comprises:
taking continuous multi-frame pictures from the face state database, inputting the continuous multi-frame pictures into the first convolution layer, and starting convolution calculation;
calculating the output vector of the full-connection layer through an activation function to obtain a predicted value, calculating a loss function value of the predicted value and a real value by using a cross entropy loss function, and minimizing the loss function value;
continuously adjusting the network weight and bias by a random gradient descent method and recalculating a loss function value until the loss function value tends to be stable or reaches a set iteration number to obtain classified picture characteristics;
inputting the classified picture characteristics into the self-encoder for encoding compression, abstracting depth characteristics, reconstructing the input picture characteristics through decoding, and continuously performing iterative training through a back propagation algorithm until the learning error of the self-encoder is smaller than a preset threshold value;
and solidifying the structure and the parameters of the deep learning network model after training to obtain a driver state recognition model.
In step S4, the step of recognizing the driving motion trajectory by using the driver driving motion recognition model and determining whether the driving motion of the driver is compliant further includes:
and inputting the driving action track into a trained driver driving action recognition model, and judging whether the driving action of the driver meets the specification or not by comparing whether the deviation of the driving action track and the track point in the driver driving action recognition model is within a preset threshold range or not.
Wherein, in step S4, the driver face state recognition model is used to recognize the driver face state in the consecutive multi-frame images, and the step of detecting whether the driver face state is normal further comprises:
and inputting the continuous multi-frame images into the driver state recognition model, outputting a classification result of the driver face state, and judging whether the driver face state is normal or not according to the classification result.
According to another aspect of the present invention, there is provided a vehicle-mounted terminal including: a processor module, a user customization board and a special power supply module, wherein,
the processor module consists of a 256-core NVIDIA Pascal GPU and a 6-core 64-bit ARMv8 processor cluster, and is used for performing the method as described above;
the user customization board is used for realizing a bus expansion storage function and a 4G wireless communication function and protecting and leading out an external interface of the processor module;
the special power supply module is used for converting the 110 V direct-current supply into the low-voltage direct current used by the processor module.
According to another aspect of the present invention, there is provided a subway driver on-board driving behavior analysis system, comprising: the vehicle-mounted terminal, the communication layer and the cloud management platform are as described above, wherein,
the communication layer is used for data transmission between the vehicle-mounted terminal and the cloud management platform;
the cloud management platform is used for realizing the functions of on-line self-learning and big data analysis.
The method, the vehicle-mounted terminal and the system for analyzing the vehicle-mounted driving behavior of the subway driver, which are provided by the invention, can monitor and intelligently evaluate the driving behavior and the driving state of the subway driver in real time, help to discover possible manual misoperation as soon as possible and ensure driving safety.
Drawings
Fig. 1 is a schematic flow chart of a method for analyzing driving behavior of a subway driver in a vehicle according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a vehicle-mounted terminal according to another embodiment of the present invention;
fig. 3 is a schematic structural diagram of a subway driver vehicle-mounted driving behavior analysis system according to another embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
The invention takes a subway driver as a main monitoring object, carries out real-time monitoring on the vehicle-mounted driving behavior of the subway driver, and develops the analysis of the vehicle-mounted driving behavior of the subway driver from two aspects, one is the driving action of the driver, and the other is the facial state of the driver.
As shown in fig. 1, a schematic flow chart of a method for analyzing driving behavior of a subway driver in a vehicle according to an embodiment of the present invention includes:
s1, constructing a vehicle-mounted driving behavior database of the subway driver, wherein the vehicle-mounted driving behavior database comprises a driving action database and a face state database;
At present, almost all research on human behavior recognition and face recognition is carried out on internationally recognized public databases, which makes comparison between algorithms convenient. However, most of these databases are recorded in a fixed scene and differ little from one another, and the actions in public human-behavior databases involve all parts of the human body rather than focusing mainly on hand actions, which does not match the situation of the invention. Therefore, the invention constructs its own database: a camera records over a long period to obtain a large amount of driver driving behavior data, thereby expanding the training and testing samples.
Specifically, the step S1 further includes:
s11, acquiring a standard working video of a subway driver by using an infrared vision sensor, uniformly extracting an image frame from the acquired video after the acquisition process is finished, labeling coordinate points of positions of two hands of the driver and a timestamp in the image frame, storing the labeled image frame, and generating a driving action database;
the standard work video of the subway driver is collected so that the driving actions in the driving action database are standard actions and accord with the operation rules of the subway driver, and therefore the driving action database can be used for identifying the driver actions collected in real time. The method comprises the steps of collecting a subway driver standard work video with a certain duration, then carrying out certain processing on the video to obtain an image frame, marking a hand position directly related to driver action in the image frame, stamping a timestamp, and storing the marked image frame to generate a driving action database. The labeling of the obtained image frames is very laborious. Some automatic labeling methods have been proposed, but the reliability of these automatic labeling methods is not ideal, and the gap between the automatic labeling methods and the manual labeling methods is obvious. However, with the increasing data volume of the action set, the manual annotation obviously cannot meet the requirement. Therefore, a manual marking mode can be adopted, then an unsupervised learning algorithm is used after a training machine has a certain identification basis, so that the machine can automatically mark the obtained image frames, and a driving action database is generated.
S12, continuously shooting the working video of the subway driver by using the camera, intercepting the working video into images according to a certain frame rate, screening effective pictures capable of distinguishing the face state of the driver from the intercepted images, storing the effective pictures, and generating a face state database.
A camera continuously shoots the subway driver's working video, and the video is cut into images at a certain frame rate. The captured images are not always valid: if the facial state cannot be clearly distinguished, an image cannot be used for training. Therefore, valid pictures in which the driver's facial state can be distinguished are further screened out of the captured images and stored. The valid pictures cover a normal state and abnormal states (such as drinking, fatigue and emotional instability) and are divided into N classes, from which the facial state database is constructed.
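A minimal sketch of the frame-capture step, assuming OpenCV is used and one frame in every `every_n` is kept; paths and the sampling rate are assumptions, and the screening of valid face pictures is left to a separate (possibly manual) step, so this sketch only saves candidate frames.

```python
# Cut a work video into candidate frames at a fixed rate for the face state
# database; screening for "valid" pictures happens afterwards.
import cv2

def extract_frames(video_path: str, out_dir: str, every_n: int = 10) -> int:
    cap = cv2.VideoCapture(video_path)
    saved, idx = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:                       # end of video
            break
        if idx % every_n == 0:           # keep one frame every `every_n`
            cv2.imwrite(f"{out_dir}/frame_{idx:06d}.png", frame)
            saved += 1
        idx += 1
    cap.release()
    return saved
```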
S2, respectively obtaining a driver driving action recognition model and a driver face state recognition model by training based on the vehicle-mounted driving behavior database by adopting a deep learning method;
the vehicle-mounted driving behavior database is divided into a training set and a testing set, and a driver driving action recognition model for recognizing the driving action of a driver and a driver face state recognition model for recognizing the face state of the driver are obtained through training based on a deep learning method.
Specifically, a time-sequence trajectory marking method is used to obtain the driver's hand motion trajectory and map it to the space-time dimension, which greatly improves the accuracy and precision of judging the driver's vehicle-mounted driving actions. Compared with traditional face recognition algorithms, deep learning can extract information layer by layer, from pixel-level raw data up to abstract key-point information, and the extracted features express more efficiently than manually designed features, so it has outstanding advantages in image recognition. On the basis of deep learning, the invention fuses a 3D convolutional neural network with a self-encoder to construct a deep learning network model for state recognition, and inputs the facial state database into the constructed model for training to obtain the driver facial state recognition model.
The step S2 further includes:
s21, based on the driving action database, obtaining a driver driving action recognition model through training by adopting a time sequence-based action recognition method;
After the driving action database is obtained, and because actions are usually continuous, the driver driving action recognition model is obtained through training with a time-sequence-based action recognition method: action trajectories are recorded in time order; for each action, averaging all track points over multiple training repetitions yields a track map of that action; and a permissible variation range is set for each track map. This forms the trajectory training model, namely the driver driving action recognition model.
The step S21 further includes:
s211, aiming at the same action, extracting all image frames describing the action in the driving action database, and repeatedly training the action for multiple times;
the image frames in the driving action database are marked on the coordinate points of the positions of the two hands of the driver and the time stamps, and are recorded as:
Rt={Rx,Ry,t};Lt={Lx,Ly,t} (1),
wherein R istAs two-hand position coordinate points, LtIs a time stamp.
S212, recording the two-hand track points at any moment in the action process as average values obtained by multiple times of training, recording the two-hand track points on a picture according to a time sequence, generating a track graph, recording all track points of the action time sequence in the track graph, and setting a threshold value for the position of each track point in the track graph;
the coordinates and time stamps of the trace points in the trace plot are expressed as follows:
wherein K is the training times.
And S213, repeating the steps S211 and S212 until the training of all the actions in the driving action database is completed.
And obtaining a driver driving action recognition model after training.
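The averaging of step S212 can be sketched as follows, assuming each training repetition yields a hand trajectory as an array of shape (T, 2) holding (x, y) per time step; the fixed pixel threshold is an assumed placeholder, since the patent does not state the threshold value.

```python
# Step S212 sketch: average the K repetition trajectories into a track map
# and attach a threshold to every track point.
import numpy as np

def build_track_map(trajectories: list, threshold_px: float = 20.0):
    stacked = np.stack(trajectories)          # shape (K, T, 2), K = repetitions
    mean_track = stacked.mean(axis=0)         # averaged track point per time step
    thresholds = np.full(stacked.shape[1], threshold_px)  # one threshold per point
    return mean_track, thresholds
```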
S22, constructing a deep learning network model for state identification, and training the deep learning network model for state identification by using the facial state database to obtain a driver facial state identification model.
After the facial state database is obtained, the images in it can undergo a series of preprocessing steps such as face detection, alignment, cropping and graying, histogram calculation, histogram equalization and median filtering. The preprocessed images are then input into the constructed deep learning network model for training and testing, and all parameters of the trained model are saved, forming the driver facial state recognition model.
Different from a common deep learning network, the invention provides an improved deep learning network model that fuses a 3D convolutional neural network (3D CNN) and a self-encoder (autoencoder, AE).
The deep learning network model for state recognition constructed in step S22 includes:
a first convolution layer, a first pooling layer, a second convolution layer, a second pooling layer, a third convolution layer, a fourth convolution layer, a third pooling layer, a fifth convolution layer, a sixth convolution layer, a fourth pooling layer, a seventh convolution layer, an eighth convolution layer, a fifth pooling layer, a full connection layer, a Softmax output layer and a self-encoder, which are sequentially connected. Namely:
Conv1→Pool1→Conv2→Pool2→Conv3a→Conv3b→Pool3→Conv4a→Conv4b→Pool4→Conv5a→Conv5b→Pool5→fc6→Softmax→AE
the deep learning network model comprises 8 convolutional layers (Conv), 5 pooling layers (Pool), 1 full connection layer (fc), 1 Softmax output layer and 1 self-encoder. The convolution kernel of the convolution layer and the pooling kernel of the pooling layer are three-dimensional structures, and the self-encoder can process pictures in real time, so that continuous image frames can be processed.
The first layer is the convolution layer Conv1, which receives a 128 × 128 × 16 × 1 input, where 128 × 128 is the width and height of the input picture, 16 is the number of consecutive frames, and 1 indicates a single-channel picture. A common convolution layer outputs a set of single feature maps; the invention adopts an improved 3D convolutional neural network that outputs a set of multiple feature maps, called a feature body. Thus Conv1 outputs a feature body of 64 feature maps of size 128 × 128 × 16. Conv2 outputs 128 feature maps, Conv3 outputs 256, Conv4 outputs 512, and Conv5 outputs 512. All convolution kernels are 3 × 3 × 3, the weights are initialized with a normal distribution with mean 0 and variance 1, the moving step is 1, the input boundary is zero-padded, and the activation function is the ReLU function, given by formula (3):
f(x)=max(0,x) (3);
for pooling layers, Pool kernel size of Pool1 was 2 × 1, the rest of the layers were 2 × 2, the Pool kernel weights were initialized with positive distribution with mean 0 and variance 1, the shift step size was 1, and maximum pooling was performed.
For the full connection layer, fc6 receives the 512 feature maps of size 4 × 4 × 1 output by Pool5. The full connection layer has 4096 nodes, its weights are initialized with a normal distribution with mean 0 and variance 1, and the ReLU activation function is used. fc6 outputs 4096 parameters to the self-encoder.
For the Softmax layer, which has N nodes, each node corresponds to one facial state and outputs the probability that the sample belongs to that class. For node n, the Softmax formula is:
Pn = exp(yn) / Σm=1..N exp(ym) (4);
yn = f(Wn, xn) (5);
where Pn is the probability, output by Softmax, that the sample belongs to class n, and yn is the value node n obtains from the previous network layer.
For the self-encoder, the classified picture features are input into a hidden layer of 1024 nodes for encoding compression, depth features are abstracted, and the input features are then reconstructed through decoding.
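The layers described above can be assembled into a working model. Below is a minimal PyTorch sketch, not the patent's exact implementation: it assumes zero-padded 3 × 3 × 3 convolutions and pooling strides equal to the pooling kernels (so that Pool5 indeed yields 512 maps of size 4 × 4 × 1 for a 128 × 128 × 16 single-channel input), applies Softmax through the loss function during training, and models the 1024-node self-encoder as one encode/decode pair on the fc6 features.

```python
import torch
import torch.nn as nn

class DriverFaceStateNet(nn.Module):
    def __init__(self, num_states: int):
        super().__init__()
        blocks, specs = [], [(1, 64, 1), (64, 128, 1), (128, 256, 2),
                             (256, 512, 2), (512, 512, 2)]
        for i, (cin, cout, n_convs) in enumerate(specs):
            for j in range(n_convs):
                blocks += [nn.Conv3d(cin if j == 0 else cout, cout,
                                     kernel_size=3, stride=1, padding=1),
                           nn.ReLU(inplace=True)]
            # Pool1 pools 2x2 spatially only (the 2x2x1 kernel); later pools 2x2x2
            blocks.append(nn.MaxPool3d((1, 2, 2) if i == 0 else (2, 2, 2)))
        self.features = nn.Sequential(*blocks)
        self.fc6 = nn.Linear(512 * 1 * 4 * 4, 4096)    # Pool5 output: 512 x 1 x 4 x 4
        self.classifier = nn.Linear(4096, num_states)  # Softmax applied via the loss
        self.encoder = nn.Linear(4096, 1024)           # self-encoder: compress...
        self.decoder = nn.Linear(1024, 4096)           # ...and reconstruct

    def forward(self, x):                # x: (batch, 1, 16, 128, 128)
        h = torch.relu(self.fc6(self.features(x).flatten(1)))
        logits = self.classifier(h)
        recon = self.decoder(torch.relu(self.encoder(h.detach())))
        return logits, recon, h

net = DriverFaceStateNet(num_states=5)                    # N = 5 states, assumed
logits, recon, h = net(torch.randn(2, 1, 16, 128, 128))   # shape check passes
```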
The step of training the deep learning network model for state recognition by using the facial state database in step S22 to obtain a driver facial state recognition model further includes:
taking continuous multi-frame pictures from the face state database, inputting the continuous multi-frame pictures into the first convolution layer, and starting convolution calculation;
calculating the output vector of the full-connection layer through an activation function to obtain a predicted value, calculating a loss function value of the predicted value and a real value by using a cross entropy loss function, and minimizing the loss function value;
continuously adjusting network weight and bias by a random gradient descent method and recalculating a loss function value until the loss function value tends to be stable or reaches a set iteration number, thereby obtaining classified picture characteristics;
inputting the classified picture characteristics into the self-encoder for encoding compression, abstracting depth characteristics, reconstructing the input picture characteristics through decoding, and continuously performing iterative training through a back propagation algorithm until the learning error of the self-encoder is smaller than a preset threshold value;
and solidifying the structure and the parameters of the deep learning network model after training to obtain a driver state recognition model.
Specifically, consider a sample i of the sample library belonging to class j; the sample is a video containing an action, assumed to have A frames in total. It is first divided into ⌊A/25⌋ segments (rounded down), each containing 25 frames; if the last segment has fewer than 25 frames, it is discarded. The resolution of each frame is adjusted to 128 × 128. Meanwhile, the label of the sample is one-hot encoded. Finally, the sample data is input into the network.
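A short sketch of this preparation step, assuming frames arrive as NumPy arrays and OpenCV resizing; the function and variable names are illustrative, not from the patent.

```python
# Split an A-frame action video into floor(A/25) segments of 25 frames,
# resize each frame to 128x128, and one-hot encode the class label j.
import numpy as np
import cv2

def prepare_sample(frames: list, j: int, num_classes: int):
    usable = (len(frames) // 25) * 25                  # floor(A / 25) segments
    segments = [
        np.stack([cv2.resize(f, (128, 128)) for f in frames[s:s + 25]])
        for s in range(0, usable, 25)                  # short tail is discarded
    ]
    label = np.zeros(num_classes, dtype=np.float32)
    label[j] = 1.0                                     # one-hot encoding of class j
    return segments, label
```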
The training process uses a cross entropy loss function. Taking the numerical stability of the calculation into account, the cross entropy loss function is:
L = -(1/B) · Σi=1..B Σn=1..N Pi,n · log(P̂i,n) (6);
After introducing an L1 regularization penalty for all samples, the loss function becomes:
L = -(1/B) · Σi=1..B Σn=1..N Pi,n · log(P̂i,n) + λ · Σ|W| (7);
A stochastic gradient descent method is used in the training process, where B is the batch size, with 30 samples per batch; the learning rate is set to 0.001 and halved after every 100,000 iterations, and the weights of each layer of the network are updated backwards in each iteration. The final gradient direction from the loss function is the standard Softmax cross-entropy gradient:
∂L/∂yi,j = P̂i,j - Pi,j (8);
where Pi,N is the one-hot label vector of sample i, with dimension N × 1, whose j-th element is 1 and whose other elements are 0, and P̂N is the probability distribution over the N classes output by the network model for sample i. When the loss change stabilizes as training proceeds, or the set number of iterations is reached, training stops and the classified picture features are obtained.
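The update rule just described can be sketched as follows, continuing the PyTorch model sketch above (`net`); the L1 weight `lam` is an assumed free parameter (the patent gives no value), labels are passed as class indices (equivalent to the one-hot targets), and CrossEntropyLoss provides the numerically stable Softmax cross entropy whose gradient is formula (8).

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()      # numerically stable Softmax cross entropy
opt = torch.optim.SGD(net.parameters(), lr=0.001)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=100_000, gamma=0.5)
lam = 1e-5                             # assumed L1 penalty weight

def train_step(clips, labels):         # clips: (30, 1, 16, 128, 128) per batch
    logits, _, _ = net(clips)
    loss = criterion(logits, labels)   # cross-entropy term of formula (7)
    loss = loss + lam * sum(p.abs().sum() for p in net.parameters())  # L1 term
    opt.zero_grad()
    loss.backward()                    # gradient direction per formula (8)
    opt.step()
    sched.step()                       # halve the learning rate every 100,000 steps
    return loss.item()
```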
Then, the classified picture features are input into the self-encoder for encoding compression, depth features are abstracted, and the input picture features are reconstructed by decoding; the loss function is regularized and minimized, and iterative training continues through the back propagation algorithm until the learning error of the self-encoder is smaller than the preset threshold, at which point training stops.
and finally, solidifying the structure and parameters of the deep learning network model after training to obtain a driver state recognition model.
And inputting the real-time acquired working image of the subway driver into the trained driver state recognition model to obtain a driver face state recognition result.
S3, acquiring a real-time working video of a subway driver and extracting continuous multi-frame images according to a preset frame rate;
the method comprises the steps of installing a camera on a driving platform of the subway, shooting the driving process of a driver in a front-facing manner at a certain angle, and extracting continuous multi-frame images from a collected real-time working video of the subway driver according to a certain frame rate for identifying the driving action of the driver and identifying the face state.
S4, acquiring the driving action track of the driver in the continuous multi-frame images, recognizing the driving action track by using the driver driving action recognition model, and judging whether the driving action of the driver is in compliance; and identifying the face state of the driver in the continuous multi-frame images by using the driver face state identification model, and detecting whether the face state of the driver is normal or not.
Specifically, the step of recognizing the driving motion trajectory by using the driver driving motion recognition model and determining whether the driving motion of the driver is compliant in step S4 further includes:
and inputting the driving action track into a trained driver driving action recognition model, and judging whether the driving action of the driver meets the specification or not by comparing whether the deviation of the driving action track and the track point in the driver driving action recognition model is within a preset threshold range or not.
If the deviation is within the preset threshold range, the driving action of the driver is in accordance with the standard, and if the deviation is outside the preset threshold range, the driver is in violation of operation.
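A minimal sketch of this comparison, reusing `build_track_map()` from the earlier sketch; treating the deviation as the per-point Euclidean distance is an assumption consistent with the description.

```python
# Compliance judgment: every real-time track point must lie within its
# threshold of the corresponding model track point.
import numpy as np

def is_compliant(track, mean_track, thresholds) -> bool:
    deviations = np.linalg.norm(track - mean_track, axis=1)  # distance per point
    return bool(np.all(deviations <= thresholds))            # within range: compliant
```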
The step of recognizing the driver ' S face state in the consecutive multi-frame images by using the driver ' S face state recognition model in step S4, and the step of detecting whether the driver ' S face state is normal further comprises:
and inputting the continuous multi-frame images into the driver state recognition model, outputting a classification result of the driver face state, and judging whether the driver face state is normal or not according to the classification result.
If the class with the highest probability in the classification result output by the driver state recognition model is an abnormal state type, the driver is in an abnormal state; if the highest-probability class is the normal state type, the driver's facial state is normal.
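Sketched under the assumption that the normal state is one designated class index and the recognition model outputs Softmax logits:

```python
# Normal/abnormal decision: the class with the highest Softmax probability
# determines the state; class index 0 is assumed to denote "normal".
import torch

def face_state_is_normal(logits: torch.Tensor, normal_class: int = 0) -> bool:
    probs = torch.softmax(logits, dim=-1)        # classification result
    return int(probs.argmax(dim=-1)) == normal_class

# Example: logits for one sample over 5 classes.
print(face_state_is_normal(torch.tensor([2.3, 0.1, -1.0, 0.4, -0.2])))
```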
The method for analyzing the vehicle-mounted driving behavior of the subway driver, provided by the embodiment of the invention, can be used for monitoring and intelligently evaluating the driving behavior and the driving state of the subway driver in real time, is beneficial to discovering possible manual operation errors as soon as possible, and ensures the driving safety.
As shown in fig. 2, the vehicle-mounted terminal according to another embodiment of the present invention includes: a processor module 21, a user customization board 22 and a special power supply module 23, wherein,
the processor module 21 is composed of 256-core NVIDIA Pascal GPUs and a 6-core 64-bit ARMv8 processor cluster, and is configured to perform the methods described in the embodiments above, including, for example: s1, constructing a vehicle-mounted driving behavior database of the subway driver, wherein the vehicle-mounted driving behavior database comprises a driving action database and a face state database; s2, respectively obtaining a driver driving action recognition model and a driver face state recognition model by training based on the vehicle-mounted driving behavior database by adopting a deep learning method; s3, acquiring a real-time working video of a subway driver and extracting continuous multi-frame images according to a preset frame rate; s4, acquiring the driving action track of the driver in the continuous multi-frame images, recognizing the driving action track by using the driver driving action recognition model, and judging whether the driving action of the driver is in compliance; and identifying the face state of the driver in the continuous multi-frame images by using the driver face state identification model, and detecting whether the face state of the driver is normal or not.
The ARMv8 processor cluster comprises a dual-core NVIDIA Denver 2 and a quad-core ARM Cortex-A57.
The user customization board 22 is used for realizing a bus expansion storage function, a 4G wireless communication function and an alarm function, and protecting and leading out an external interface of the processor module;
The special power supply module 23 is used for converting the 110 V direct-current supply into the low-voltage direct current used by the processor module, and includes power filtering and protection functions, guaranteeing long-term stable operation of the core module.
In addition, to suit the driving environment, the vehicle-mounted terminal adopts a ruggedized structural design, with a reserved power interface, a network interface, a USB camera interface, and reserved 4G and WiFi antenna interfaces, each connected through aviation connectors. Considering the demanding in-vehicle environment, the terminal uses conduction cooling for heat dissipation, and mounting holes are designed for convenient installation.
The improvements of the vehicle-mounted terminal provided by this embodiment lie in both the software algorithm and the hardware. The software implements the subway driver vehicle-mounted driving behavior analysis method, including detection of driver driving action compliance and deep-learning-based facial detection (fatigue, drinking, emotional instability). To support the software, the terminal uses hardware equipped with a GPU and is integrated in the cab, forming vehicle-mounted monitoring equipment with early warning and communication capabilities.
The vehicle-mounted terminal provided by the embodiment of the invention can monitor and intelligently evaluate the driving behavior and the driving state of a locomotive driver in real time, is beneficial to discovering possible human misoperation as soon as possible, and ensures the driving safety.
As shown in fig. 3, the subway driver vehicle-mounted driving behavior analysis system according to another embodiment of the present invention includes: the vehicle-mounted terminal described above, a communication layer and a cloud management platform, wherein,
the communication layer is used for data transmission between the vehicle-mounted terminal and the cloud management platform;
the cloud management platform is used for realizing the functions of on-line self-learning and big data analysis.
Specifically, the vehicle-mounted terminal analyzes the vehicle-mounted driving behavior of the driver, which has been described in the above embodiments and is not described herein again. The vehicle-mounted terminal reports the obtained vehicle-mounted driving behavior monitoring result of the driver to the cloud management platform through the communication layer, the cloud management platform records the monitoring result after receiving the monitoring result of the vehicle-mounted terminal, online self-learning and big data analysis functions are achieved, and the cloud management platform and the vehicle-mounted terminal work cooperatively to form a set of subway driver vehicle-mounted driving behavior analysis system based on deep learning.
The system provided by the embodiment of the invention can help drivers concentrate more on driving the subway train and raises an alarm in case of fatigued driving, so that the driver controls the train more safely. Meanwhile, the system provides the ground management department with real-time monitoring of dynamic train operation data, supervises the working state of subway train drivers in real time, records the whole process when an abnormality occurs, grasps the operation state of the whole train under abnormal conditions in real time, and improves the supervision capability over urban rail transit operation safety.
Finally, the above is only a preferred embodiment of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (7)

1. A subway driver vehicle-mounted driving behavior analysis method is characterized by comprising the following steps:
s1, constructing a vehicle-mounted driving behavior database of the subway driver, wherein the vehicle-mounted driving behavior database comprises a driving action database and a face state database;
s2, respectively obtaining a driver driving action recognition model and a driver face state recognition model by training based on the vehicle-mounted driving behavior database by adopting a deep learning method;
s3, acquiring a real-time working video of a subway driver and extracting continuous multi-frame images according to a preset frame rate;
s4, acquiring the driving action track of the driver in the continuous multi-frame images, recognizing the driving action track by using the driver driving action recognition model, and judging whether the driving action of the driver is in compliance; identifying the face state of the driver in the continuous multi-frame images by using the driver face state identification model, and detecting whether the face state of the driver is normal or not;
wherein the step S1 further includes:
s11, acquiring a standard working video of a subway driver by using an infrared vision sensor, uniformly extracting an image frame from the standard working video after the acquisition process is finished, labeling coordinate points of positions of two hands of the driver and a timestamp in the image frame, storing the labeled image frame, and generating a driving action database;
s12, continuously shooting a working video of a subway driver by using a camera, intercepting the working video into images according to a certain frame rate, screening effective pictures capable of distinguishing the face state of the driver from the intercepted images, storing the effective pictures, and generating a face state database;
wherein the step of labeling the coordinate points of the positions of the driver's two hands and the timestamp in the image frame specifically comprises: first labeling manually, and then, after the trained machine has a certain recognition basis, using an unsupervised learning algorithm so that the machine automatically labels the coordinate points of the positions of the driver's two hands and the timestamps in the image frames;
wherein the step S2 further includes:
s21, based on the driving action database, obtaining a driver driving action recognition model through training by adopting a time sequence-based action recognition method;
s22, constructing a deep learning network model for state identification, and training the deep learning network model for state identification by using the facial state database to obtain a driver facial state identification model;
wherein the step S21 further includes:
s211, aiming at the same action, extracting all image frames describing the action in the driving action database, and repeatedly training the action for multiple times;
wherein the image frames in the driving action database are all labeled with the coordinate points of the positions of the driver's two hands and a timestamp, recorded as:
Rt = {Rx, Ry, t}; Lt = {Lx, Ly, t} (1),
where Rt and Lt are the position coordinate points of the two hands (right and left, respectively), each carrying the timestamp t;
s212, recording the two-hand track points at any moment in the action process as average values obtained by multiple times of training, recording the two-hand track points on a picture according to a time sequence, generating a track graph, recording all track points of the action time sequence in the track graph, and setting a threshold value for the position of each track point in the track graph;
the coordinates and the time stamps of all track points in the track map are expressed as follows:
wherein K is the training times;
and S213, repeating the steps S211 and S212 until the training of all the actions in the driving action database is completed.
2. The method according to claim 1, wherein the deep learning network model for state recognition constructed in step S22 comprises:
a first convolution layer, a first pooling layer, a second convolution layer, a second pooling layer, a third convolution layer, a fourth convolution layer, a third pooling layer, a fifth convolution layer, a sixth convolution layer, a fourth pooling layer, a seventh convolution layer, an eighth convolution layer, a fifth pooling layer, a full connection layer, a Softmax output layer and a self-encoder, which are sequentially connected.
3. The method according to claim 2, wherein the step of training the deep learning network model for state recognition with the facial state database in step S22, and the step of obtaining a driver facial state recognition model further comprises:
taking continuous multi-frame pictures from the face state database, inputting the continuous multi-frame pictures into the first convolution layer, and starting convolution calculation;
calculating the output vector of the full-connection layer through an activation function to obtain a predicted value, calculating a loss function value of the predicted value and a real value by using a cross entropy loss function, and minimizing the loss function value;
continuously adjusting the network weight and bias by a random gradient descent method and recalculating a loss function value until the loss function value tends to be stable or reaches a set iteration number to obtain classified picture characteristics;
inputting the classified picture characteristics into the self-encoder for encoding compression, abstracting depth characteristics, reconstructing the input picture characteristics through decoding, and continuously performing iterative training through a back propagation algorithm until the learning error of the self-encoder is smaller than a preset threshold value;
and solidifying the structure and the parameters of the deep learning network model after training to obtain a driver state recognition model.
4. The method according to claim 1, wherein the step of recognizing the driving motion trajectory by using the driver driving motion recognition model in step S4, and the step of determining whether the driver' S driving motion is in compliance further comprises:
and inputting the driving action track into a trained driver driving action recognition model, and judging whether the driving action of the driver meets the specification or not by comparing whether the deviation of the driving action track and the track point in the driver driving action recognition model is within a preset threshold range or not.
5. The method according to claim 1, wherein the driver ' S face state recognition model is used to recognize the driver ' S face state in the consecutive multi-frame images in step S4, and the step of detecting whether the driver ' S face state is normal further comprises:
and inputting the continuous multi-frame images into the driver state recognition model, outputting a classification result of the driver face state, and judging whether the driver face state is normal or not according to the classification result.
6. A vehicle-mounted terminal, characterized by comprising: a processor module, a user customization board and a special power supply module, wherein,
the processor module is composed of a 256-core NVIDIA Pascal GPU and a 6-core 64-bit ARMv8 processor cluster for performing the method of any of claims 1-5;
the user customization board is used for realizing a bus expansion storage function and a 4G wireless communication function and protecting and leading out an external interface of the processor module;
the special power supply module is used for converting the 110 V direct-current supply into the low-voltage direct current used by the processor module.
7. A subway driver vehicle-mounted driving behavior analysis system, characterized by comprising: the vehicle-mounted terminal of claim 6, a communication layer and a cloud management platform, wherein,
the communication layer is responsible for data transmission between the vehicle-mounted terminal and the cloud management platform;
the cloud management platform is used for realizing the functions of on-line self-learning and big data analysis.
CN201711477182.1A 2017-12-29 2017-12-29 Subway driver vehicle-mounted driving behavior analysis method, vehicle-mounted terminal and system Active CN108216252B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711477182.1A CN108216252B (en) 2017-12-29 2017-12-29 Subway driver vehicle-mounted driving behavior analysis method, vehicle-mounted terminal and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711477182.1A CN108216252B (en) 2017-12-29 2017-12-29 Subway driver vehicle-mounted driving behavior analysis method, vehicle-mounted terminal and system

Publications (2)

Publication Number Publication Date
CN108216252A CN108216252A (en) 2018-06-29
CN108216252B true CN108216252B (en) 2019-12-20

Family

ID=62646147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711477182.1A Active CN108216252B (en) 2017-12-29 2017-12-29 Subway driver vehicle-mounted driving behavior analysis method, vehicle-mounted terminal and system

Country Status (1)

Country Link
CN (1) CN108216252B (en)

Families Citing this family (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109086873B (en) * 2018-08-01 2021-05-04 北京旷视科技有限公司 Training method, recognition method and device of recurrent neural network and processing equipment
CN109229108B (en) * 2018-08-07 2020-05-05 武汉理工大学 Driving behavior safety evaluation method based on driving fingerprints
CN110866427A (en) * 2018-08-28 2020-03-06 杭州海康威视数字技术股份有限公司 Vehicle behavior detection method and device
CN108986400A (en) * 2018-09-03 2018-12-11 深圳市尼欧科技有限公司 A kind of third party based on image procossing, which multiplies, drives safety automatic-alarming method
CN111104953A (en) * 2018-10-25 2020-05-05 北京嘀嘀无限科技发展有限公司 Driving behavior feature detection method and device, electronic equipment and computer-readable storage medium
JP7014129B2 (en) * 2018-10-29 2022-02-01 オムロン株式会社 Estimator generator, monitoring device, estimator generator method and estimator generator
CN109376673B (en) * 2018-10-31 2022-02-25 南京工业大学 Method for identifying unsafe behaviors of underground coal mine personnel based on human body posture estimation
CN109545027B (en) * 2018-12-24 2021-06-01 郑州畅想高科股份有限公司 Training platform, crew simulation training method and device
CN111382599A (en) * 2018-12-27 2020-07-07 北京搜狗科技发展有限公司 Image processing method and device and electronic equipment
CN111460855B (en) * 2019-01-20 2023-11-24 青岛海尔智能技术研发有限公司 Method and device for monitoring user behavior of article terminal and computer storage medium
CN111488758A (en) 2019-01-25 2020-08-04 富士通株式会社 Deep learning model for driving behavior recognition, training device and method
CN110020597B (en) * 2019-02-27 2022-03-11 中国医学科学院北京协和医院 Eye video processing method and system for auxiliary diagnosis of dizziness/vertigo
CN110096957B (en) * 2019-03-27 2023-08-08 苏州清研微视电子科技有限公司 Fatigue driving monitoring method and system based on facial recognition and behavior recognition fusion
CN110163084A (en) * 2019-04-08 2019-08-23 睿视智觉(厦门)科技有限公司 Operator action measure of supervision, device and electronic equipment
CN110197134A * 2019-05-13 2019-09-03 睿视智觉(厦门)科技有限公司 Human action detection method and device
CN110260925B * 2019-07-12 2021-06-25 重庆赛迪奇智人工智能科技有限公司 Method and system for detecting the quality of a driver's parking technique, intelligent recommendation method and electronic equipment
CN110598734B * 2019-08-05 2022-04-26 西北工业大学 Driver identity authentication method based on convolutional neural network and support vector domain description
CN112347820A (en) * 2019-08-08 2021-02-09 株洲中车时代电气股份有限公司 Driver driving behavior monitoring method and device
CN110443211A * 2019-08-09 2019-11-12 紫荆智维智能科技研究院(重庆)有限公司 Train driving doze detection system and method based on vehicle-mounted GPU
CN110705605B (en) * 2019-09-11 2022-05-10 北京奇艺世纪科技有限公司 Method, device, system and storage medium for establishing feature database and identifying actions
CN110852190B (en) * 2019-10-23 2022-05-20 华中科技大学 Driving behavior recognition method and system integrating target detection and gesture recognition
CN111016913B (en) * 2019-12-05 2020-12-22 乐清市风杰电子科技有限公司 Driver state control system and method based on image information
CN113033239B (en) * 2019-12-09 2023-07-07 杭州海康威视数字技术股份有限公司 Behavior detection method and device
CN111126206B (en) * 2019-12-12 2023-04-07 创新奇智(成都)科技有限公司 Smelting state detection system and method based on deep learning
CN111432229A (en) * 2020-03-31 2020-07-17 卡斯柯信号有限公司 Method and device for recording, analyzing and live broadcasting driving command
CN111553209B (en) * 2020-04-15 2023-05-12 同济大学 Driver behavior recognition method based on convolutional neural network and time sequence diagram
CN112046489B (en) * 2020-08-31 2021-03-16 吉林大学 Driving style identification algorithm based on factor analysis and machine learning
CN112131972B (en) * 2020-09-07 2022-07-12 重庆邮电大学 Method for recognizing human body behaviors by using WiFi data based on attention mechanism
CN112336349A (en) * 2020-10-12 2021-02-09 易显智能科技有限责任公司 Method and related device for recognizing psychological state of driver
CN112329657B (en) * 2020-11-10 2022-07-01 易显智能科技有限责任公司 Method and related device for sensing upper body movement of driver
CN112609765A (en) * 2020-11-18 2021-04-06 徐州徐工挖掘机械有限公司 Excavator safety control method and system based on facial recognition
CN112487913A (en) * 2020-11-24 2021-03-12 北京市地铁运营有限公司运营四分公司 Labeling method and device based on neural network and electronic equipment
CN113256064A (en) * 2021-04-22 2021-08-13 中国安全生产科学研究院 Device and method for analyzing driving behavior of subway driver
CN113378733A (en) * 2021-06-17 2021-09-10 杭州海亮优教教育科技有限公司 System and device for constructing emotion diary and daily activity recognition
CN115209342B (en) * 2022-06-29 2023-06-06 北京融信数联科技有限公司 Subway driver identification method, system and readable storage medium
CN115761900B (en) * 2022-12-06 2023-07-18 深圳信息职业技术学院 Internet of things cloud platform for practical training base management

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102982316A (en) * 2012-11-05 2013-03-20 安维思电子科技(广州)有限公司 Driver abnormal driving behavior recognition device and method thereof
CN103065121A (en) * 2012-12-13 2013-04-24 李秋华 Engine driver state monitoring method and device based on video face analysis
CN103021221A (en) * 2012-12-28 2013-04-03 成都运达科技股份有限公司 Simulation system and simulation method for virtual driving behavior of drivers of subway trains
CN106446811A (en) * 2016-09-12 2017-02-22 北京智芯原动科技有限公司 Deep-learning-based driver's fatigue detection method and apparatus
CN106651910A (en) * 2016-11-17 2017-05-10 北京蓝天多维科技有限公司 Intelligent image analysis method and alarm system for abnormal driver behavior state

Also Published As

Publication number Publication date
CN108216252A (en) 2018-06-29

Similar Documents

Publication Publication Date Title
CN108216252B (en) Subway driver vehicle-mounted driving behavior analysis method, vehicle-mounted terminal and system
CN108171176B (en) Subway driver emotion identification method and device based on deep learning
CN107808139B (en) Real-time monitoring threat analysis method and system based on deep learning
CN110874362A (en) Data association analysis method and device
WO2019179024A1 (en) Method for intelligent monitoring of airport runway, application server and computer storage medium
CN104504377B (en) Bus passenger crowding degree identification system and method
Wei et al. Unsupervised anomaly detection for traffic surveillance based on background modeling
US11798297B2 (en) Control device, system and method for determining the perceptual load of a visual and dynamic driving scene
CN105303191A (en) Method and apparatus for counting pedestrians in a forward-view monitoring scene
CN109743547A (en) Artificial intelligence security monitoring and management system
CN111738218B (en) Human body abnormal behavior recognition system and method
CN113592905B (en) Vehicle driving track prediction method based on monocular camera
CN109935080A (en) Monitoring system and method for real-time calculation of traffic flow on a traffic route
CN112071084A (en) Method and system for judging illegal parking by utilizing deep learning
CN111723773A (en) Abandoned-object detection method and device, electronic equipment and readable storage medium
Cui et al. Real-time detection method of driver fatigue state based on deep learning of face video
CN109635717A (en) Mine pedestrian detection method based on deep learning
CN115965578A (en) Binocular stereo matching detection method and device based on channel attention mechanism
CN115083229B (en) Intelligent recognition and warning system of flight training equipment based on AI visual recognition
CN115346169B (en) Method and system for detecting sleeping-on-duty behaviors
CN111241918A (en) Vehicle anti-tracking method and system based on face recognition
CN115311591A (en) Early warning method and device for abnormal behaviors and intelligent camera
Peng et al. Helmet wearing recognition of construction workers using convolutional neural network
CN115294519A (en) Abnormal event detection and early warning method based on lightweight network
Jiang et al. Fast Traffic Accident Identification Method Based on SSD Model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant