CN108846332A - Railway driver behavior recognition method based on CLSTA - Google Patents

Railway driver behavior recognition method based on CLSTA

Info

Publication number
CN108846332A
Authority
CN
China
Prior art keywords
network
output
clsta
input
layers
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810540015.5A
Other languages
Chinese (zh)
Other versions
CN108846332B (en)
Inventor
唐鹏
胡超
金炜东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Jiaotong University filed Critical Southwest Jiaotong University
Priority to CN201810540015.5A priority Critical patent/CN108846332B/en
Publication of CN108846332A publication Critical patent/CN108846332A/en
Application granted granted Critical
Publication of CN108846332B publication Critical patent/CN108846332B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a railway driver behavior recognition method based on CLSTA. A CLSTA neural network model is proposed and deployed on an industrial PC, and the in-cab monitoring video is used to recognize and understand the behavior of the train driver, so that the driver's driving behavior and driving state can be monitored and intelligently evaluated in real time. A convolutional neural network CNN and a long short-term memory network LSTM are used to learn spatial and temporal features from video images of the driver's behavior. Considering that the cab environment is uniform and the driver's limb movements change only a small part of the whole scene, an improved spatial-temporal attention method STA is proposed for this situation. The neural network model is obtained by training on a large amount of data and is finally deployed on an industrial PC to analyze common and abnormal behaviors during locomotive driving, such as fatigue driving, playing with a mobile phone, and smoking, ultimately achieving the goal of understanding train driver behavior.

Description

Railway driver behavior recognition method based on CLSTA
Technical field
The present invention relates to the technical field of railway operation safety monitoring, and specifically to a railway driver behavior recognition method based on CLSTA (Convolutional LSTM Networks with Spatial-Temporal Attention, i.e. a convolutional LSTM network with spatial-temporal attention).
Background technique
The construction of China's railways is entering a period of rapid, leap-forward development, which places stricter requirements on locomotive operation safety technology. Ensuring the smooth running of locomotives has become the top priority of railway transportation departments, and improving the level of monitoring and management of locomotive operation safety has become an urgent task for railway locomotive departments.
It is well known that, apart from sudden equipment failures such as broken axles of rolling stock or broken rails along the route, or natural disasters, the greatest threats to train operation safety are whether the train operation signals give correct indications and whether the driver operates the locomotive correctly. Judging from the direct causes of past major accidents such as train collisions, rear-end collisions, and overturning caused by overspeed, incorrect cab signal indications and trains mis-operated by dozing drivers account for the majority. Statistics on failures in China's railway system show that a considerable proportion of the human factors in traffic accidents are caused by mis-operation of the driver and crew. Among these, improper driving behavior, fatigue driving, sleeping, violation of operating rules, and bad driving habits are among the main causes of traffic safety accidents. Because transportation tasks are heavy, the working environment is harsh, and work schedules are irregular, train drivers work year-round under high load, high pressure, and high speed, and other improper operations easily occur during driving.
Traffic safety monitoring on China's railways has made significant progress in recent years, but there is still a large gap compared with developed countries, mainly reflected in the poor accuracy and timeliness of the various monitored information; the working state of the individual driver is neither recognized nor alarmed, and system functions cannot meet requirements. Real-time monitoring and intelligent evaluation of the driving behavior and driving state of train drivers helps to detect possible operating errors early, and is of great practical significance for reducing safety accidents and casualties. Such a system can help drivers focus more on driving the locomotive, evaluate the driver's behavior during driving, and sound an alarm when fatigue driving or abnormal operation occurs, so that the locomotive can be operated more safely. At the same time, the system can provide ground control departments with real-time monitoring of locomotive operation data; when an abnormality occurs, the working state of the train driver can be supervised in real time and fully recorded, so that the operating condition of the entire locomotive under abnormal conditions can be grasped in time, improving the ability to supervise locomotive operation safety.
Summary of the invention
In view of the above problems, the object of the present invention is to provide a railway driver behavior recognition method based on CLSTA that uses in-cab monitoring video to recognize and understand the behavior of the train driver, and to monitor and intelligently evaluate the driver's driving behavior and driving state in real time. The technical solution is as follows:
A railway driver behavior recognition method based on CLSTA, characterized by comprising the following steps:
Step 1: According to the characteristics of the driver's cab environment and the driver's common behaviors, establish an improved spatial-temporal attention network STA and design its topology; the improved spatial-temporal attention network STA comprises a spatial attention sub-network SA and a temporal attention sub-network TA.
Step 2: Merge the spatial attention sub-network SA and the temporal attention sub-network TA into the Main LSTM network to obtain a new CLSTA neural network model, and design its topology; the Main LSTM network is composed of a Main CNN network cascaded with two LSTM layers.
Step 3: Collect video samples of common behaviors of train drivers as a data set, input them into the CLSTA neural network model, and train the model; apply the obtained model in an industrial control computer for monitoring and recognition of train driver behavior.
Further, the spatial attention sub-network SA extracts spatial features through a convolutional neural network CNN based on the AlexNet network; the AlexNet network comprises five convolutional layers and one fully connected layer fc6, six learning layers in total. The spatial attention sub-network SA is a two-stream CNN structure, CNN1 and CNN2 respectively, used to extract the spatial features of the current image stream; CNN1 and CNN2 each have six learning layers. CNN1 processes the picture stream x_t of the current frame, which is input into CNN1; CNN2 processes the picture x_{t-1} of the previous frame, which is input into CNN2. An eltwise layer then performs a subtraction operation, subtracting the output features of CNN2 from the output features of CNN1, and the output of the eltwise layer is connected to a fully connected layer Fc_layer1.
Further, the temporal attention sub-network TA is a two-stream CNN+LSTM structure, CNN1+LSTM1 and CNN2+LSTM2 respectively, used to extract the temporal features of the current image stream. The current image stream x_t is input into CNN1 for spatial feature learning, and the output of CNN1 is input into LSTM1 for temporal learning; the previous frame picture x_{t-1} is input into CNN2 for spatial feature learning, and the output of CNN2 is input into LSTM2 for temporal learning. An eltwise layer then performs a subtraction operation, subtracting the output features of LSTM2 from the output features of LSTM1, and the output of the eltwise layer is then connected to a fully connected layer Fc_layer2.
Further, the specific steps of the step 2 include:
Step 21:By current image stream xtIt is input in Main CNN, extracts the space characteristics of current image stream;
Step 22:The output of spatial attention sub-network SA is merged with the output of Main CNN, the mode of fusion is logical It crosses eltwise layers and does add operation;
Step 23:The characteristic dimension exported after step 22 fusion is input in Main LSTM network and carries out temporal aspect Study, the Main LSTM network are formed by 2 layers of LSTM cascade, and the input of LSTM1 is the output of step 22;Again will The characteristic dimension of LSTM1 output is input in LSTM2;
Step 24:The output of Main LSTM network in the output of time attention sub-network TA and step 23 is carried out Fusion, the mode of fusion is to do addition by eltwise layers;Fc_layer3 is met after fusion again, is finally classified.
Further, the specific steps of the step 3 include:
Step 31:Ambient video is acquired by industrial camera;
Step 32:It is picture frame, FPS 5 that shell script in industrial control computer, which decomposes video,;
Step 33:Being sent into model per continuous 16 frame for decomposition is tested;
Step 34:Output test result, and makes report.
The beneficial effects of the invention are as follows: aiming at the situation described above, the present invention proposes an improved spatial-temporal attention method STA (Spatial-Temporal Attention) to solve this problem. A neural network model is obtained through training on a large amount of data, and the model is finally deployed on an industrial PC to analyze common and abnormal behaviors during locomotive driving, such as "normal driving", "fatigue driving", "playing with a mobile phone", and "smoking", ultimately achieving the goal of understanding train driver behavior.
Detailed description of the invention
Fig. 1 is a flow diagram of the railway driver behavior recognition method based on CLSTA.
Fig. 2 is a schematic diagram of the internal structure of an LSTM network unit.
Fig. 3 is a block diagram of the CLSTA network topology; Fc_layer is a fully connected layer, and Relu is an activation layer (not counted among the main learning layers).
Specific embodiment
The present invention is described in further detail below with reference to the drawings and specific embodiments. Cameras can collect spatially dense data and offer the possibility of remote measurement at the cost of lower accuracy; they are relatively cheap and allow fast monitoring. The basic idea of the invention is to use a camera installed in the train driver's cab to capture video of the locomotive driver's behavior in real time. The collected video is decomposed by a system program into consecutive picture frames, and the consecutive pictures are then input into the trained CLSTA network model for test recognition. The test content mainly consists of analyzing common and abnormal behaviors during locomotive driving, such as the common behaviors "normal driving", "fatigue driving", "playing with a mobile phone", "smoking", and "leaving the post", and producing a report. The CLSTA model has the ability to learn both the spatial and temporal characteristics of consecutive pictures, the temporal characteristics being expressed through the sequence of pictures; in this embodiment the model processes 16 consecutive pictures at a time. Although the environment is in fact static and rigid, in the camera's field of view the region containing the driver's movements appears dynamic once consecutive pictures are processed in temporal order.
The method comprises the following steps in detail:
Step 1: Propose an improved spatial-temporal attention network STA (Spatial-Temporal Attention) and design its topology. The STA network is mainly composed of a spatial attention sub-network SA (Spatial Attention) and a temporal attention sub-network TA (Temporal Attention). Let T denote the total number of time steps processed by the CLSTA network, i.e. the number of consecutive picture frames input into the network; in the experiments of the present invention T = 16, so 16 consecutive pictures are processed at a time. Fig. 3 is a block diagram of the CLSTA network topology; Fc_layer is a fully connected layer and Relu is an activation layer.
Spatial attention sub-network SA: the sub-network SA is the spatial attention branch; the extraction of spatial features is mainly realized by a convolutional neural network CNN. The CNN is based on the AlexNet network; the AlexNet used here contains five convolutional layers (conv1 ... conv5) and one fully connected layer fc6 (the original AlexNet network has 3 fully connected layers), six learning layers in total. The SA network is a two-stream CNN structure, CNN1 and CNN2 respectively, used to extract the spatial features of the current image stream; CNN1 and CNN2 each have six learning layers. CNN1 processes the picture stream x_t of the current frame; the current image stream x_t (16*227*227, where 16 means 16 consecutive pictures are processed each time and 227*227 is the picture size) is input into CNN1, and the output dimension of the fully connected layer of CNN1 is 16*4096. CNN2 processes the picture stream x_{t-1} of the previous frame; x_{t-1} (16*227*227) is input into CNN2, and the output dimension of its fully connected layer is 16*4096. An eltwise layer (a layer that mainly performs element-wise addition, subtraction, and multiplication) then subtracts the output features of the fully connected layer of CNN2 from those of CNN1; the output dimension of the eltwise layer is 16*4096, and its output is connected to a fully connected layer whose output dimension is 16*4096. In this way the SA sub-network suppresses interference from the static background, i.e. it retains the spatial features that differ from the previous frame.
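For illustration only, the following minimal PyTorch sketch shows the structure of the SA sub-network just described: two AlexNet-style streams (five convolutional layers plus fc6), an element-wise subtraction, and one fully connected layer. The class and variable names are hypothetical, and the AlexNet backbone is taken from torchvision; the patent describes a Caffe implementation, so this is a re-expression of the idea rather than the original code.

```python
import torch
import torch.nn as nn
from torchvision import models

class AlexNetFC6(nn.Module):
    """Five conv layers plus one fully connected layer (fc6), as in the SA/TA streams."""
    def __init__(self):
        super().__init__()
        alexnet = models.alexnet(weights=None)   # pretrained weights could be loaded here
        self.features = alexnet.features          # the five convolutional layers
        self.avgpool = alexnet.avgpool
        self.fc6 = alexnet.classifier[1]          # Linear(256*6*6, 4096)

    def forward(self, x):                          # x: (T, 3, 227, 227)
        x = self.avgpool(self.features(x))
        return self.fc6(torch.flatten(x, 1))      # (T, 4096)

class SpatialAttentionSA(nn.Module):
    """SA branch: subtract previous-frame features from current-frame features."""
    def __init__(self, cnn1, cnn2):
        super().__init__()
        self.cnn1, self.cnn2 = cnn1, cnn2
        self.fc_layer1 = nn.Linear(4096, 4096)

    def forward(self, x_t, x_prev):
        diff = self.cnn1(x_t) - self.cnn2(x_prev)  # eltwise subtraction, (T, 4096)
        return self.fc_layer1(diff)
```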
Temporal attention sub-network TA: the TA network is a two-stream CNN+LSTM structure, CNN1+LSTM1 and CNN2+LSTM2 respectively, used to extract the temporal features of the current image stream. The CNN1 and CNN2 here are the same networks as in the sub-network SA. Similarly, the current image stream x_t (16*227*227) is input into CNN1 for spatial feature learning; the output dimension of the fully connected layer of CNN1 is 16*4096, and that output is input into LSTM1 for temporal learning, the output dimension of LSTM1 being 16*256. The previous frame picture x_{t-1} (16*227*227) is input into CNN2 for spatial feature learning; the output dimension of the fully connected layer of CNN2 is 16*4096, and that output is input into LSTM2 for temporal learning, the output dimension of LSTM2 being 16*256. An eltwise layer (which mainly performs element-wise addition, subtraction, and multiplication) then subtracts the output features of LSTM2 from those of LSTM1; the output dimension of the eltwise layer is 16*256, and its output is connected to a fully connected layer whose output dimension is 16*256. In this way the TA sub-network retains the temporal features of the moving parts.
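A corresponding sketch of the TA sub-network, under the same assumptions (PyTorch, hypothetical class names, the AlexNetFC6 stream from the previous sketch), adds one LSTM per stream; the 16 frames of a clip are treated as the LSTM's time dimension.

```python
import torch.nn as nn

class TemporalAttentionTA(nn.Module):
    """TA branch: CNN+LSTM per stream, then eltwise subtraction of the LSTM outputs."""
    def __init__(self, cnn1, cnn2, hidden=256):
        super().__init__()
        self.cnn1, self.cnn2 = cnn1, cnn2          # shared with SA (same networks)
        self.lstm1 = nn.LSTM(4096, hidden, batch_first=True)
        self.lstm2 = nn.LSTM(4096, hidden, batch_first=True)
        self.fc_layer2 = nn.Linear(hidden, hidden)

    def forward(self, x_t, x_prev):                # each: (T, 3, 227, 227), T = 16
        f_t = self.cnn1(x_t).unsqueeze(0)          # (1, T, 4096): one clip as one sequence
        f_p = self.cnn2(x_prev).unsqueeze(0)
        h_t, _ = self.lstm1(f_t)                   # (1, T, 256)
        h_p, _ = self.lstm2(f_p)
        diff = (h_t - h_p).squeeze(0)              # eltwise subtraction, (T, 256)
        return self.fc_layer2(diff)
```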
In the STA network, the SA branch contains 14 main learning layers: the six learning layers of CNN1 (5 convolutional layers + 1 fully connected layer), the six learning layers of CNN2 (5 convolutional layers + 1 fully connected layer), 1 eltwise layer, and 1 fully connected layer. The TA branch contains 4 main learning layers: LSTM1, LSTM2, 1 eltwise layer, and 1 fully connected layer. The STA network therefore contains 18 main learning layers in total.
Step 2: Merge the STA sub-networks into the Main Convolutional-LSTM Network to form the CLSTA network. The Main Convolutional-LSTM Network is composed of a Main CNN network cascaded with 2 LSTM layers. This step mainly comprises the following sub-steps:
Step 21: The current image stream x_t (16*227*227) is input into the Main CNN; the output dimension of its fully connected layer is 16*4096, and this network extracts the spatial features of the current image stream. The Main CNN here, CNN1 in SA, and CNN1 in TA are the same network, so the main layers of the Main CNN are not counted again (they are already counted in SA).
Step 22: The output of SA is fused with the output of the Main CNN; the fusion is an addition performed by an eltwise layer (a layer that mainly performs element-wise addition, subtraction, and multiplication), and the output dimension after fusion is also 16*4096. What SA retains are the spatial features of the current frame that differ from the previous frame; fusing SA with the spatial features output by the Main CNN highlights the spatially different parts. This step therefore contributes only one learning layer, the eltwise layer.
Step 23: The features output after the fusion in step 22 are input into the Main LSTM for temporal feature learning. The Main LSTM here is a cascade of 2 LSTM layers; the input of LSTM1 is the output of step 22, and the output dimension of LSTM1 is 16*256. The features output by LSTM1 are then input into LSTM2, whose output dimension is 16*256. This step therefore contributes 2 learning layers, LSTM1 and LSTM2.
Step 24: The output of TA is fused with the output of the Main LSTM network in step 23; the fusion is an addition performed by an eltwise layer. The fused output dimension is 16*256; a fully connected layer is connected after the fusion, and classification is finally performed. The output dimension of this fully connected layer is 16*6 (16 for the 16 consecutive pictures, 6 for the number of classes: "normal driving", "fatigue driving", "playing with a mobile phone", "smoking", "leaving the post", and "other"). What TA retains are the temporal features of the current frame that differ from the previous frame; fusing TA with the features output by the Main LSTM highlights the temporally different parts. This step therefore contributes 2 learning layers, the eltwise layer and the fully connected layer.
The CLSTA network therefore contains 23 main learning layers in total: the 18 learning layers contained in STA, plus the 5 learning layers added when STA is merged with the Main Convolutional-LSTM Network.
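Putting the pieces together, the following sketch combines the Main CNN, the two-layer Main LSTM, and the SA and TA branches in the fusion order of steps 21 to 24. It is a re-expression of the topology in PyTorch under the same assumptions as the sketches above (it reuses AlexNetFC6, SpatialAttentionSA, and TemporalAttentionTA); CNN1 is shared as the Main CNN, as the description states, and the final fully connected layer outputs the 6 behavior classes.

```python
import torch
import torch.nn as nn

class CLSTA(nn.Module):
    """Main CNN + 2-layer LSTM, fused with the SA and TA branches (steps 21-24)."""
    def __init__(self, num_classes=6, hidden=256):
        super().__init__()
        cnn1, cnn2 = AlexNetFC6(), AlexNetFC6()
        self.main_cnn = cnn1                       # shared with SA/TA per the description
        self.sa = SpatialAttentionSA(cnn1, cnn2)
        self.ta = TemporalAttentionTA(cnn1, cnn2, hidden)
        self.main_lstm1 = nn.LSTM(4096, hidden, batch_first=True)
        self.main_lstm2 = nn.LSTM(hidden, hidden, batch_first=True)
        self.fc_layer3 = nn.Linear(hidden, num_classes)

    def forward(self, x_t, x_prev):                # each: (16, 3, 227, 227)
        feat = self.main_cnn(x_t)                  # step 21: (16, 4096)
        feat = feat + self.sa(x_t, x_prev)         # step 22: eltwise addition with SA
        h, _ = self.main_lstm1(feat.unsqueeze(0))  # step 23: two cascaded LSTMs
        h, _ = self.main_lstm2(h)
        h = h.squeeze(0) + self.ta(x_t, x_prev)    # step 24: eltwise addition with TA
        return self.fc_layer3(h)                   # (16, 6) per-frame class scores
```

A 16-frame clip and the same clip shifted back by one frame would be passed as x_t and x_prev; averaging the 16 per-frame scores gives a single clip-level prediction, matching the result fusion described for Fig. 1.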
Step 3: Capture video with a camera, and use a script to decompose the captured video into consecutive RGB image frames at 5 frames per second.
Step 4: Take a large set of train driver behavior data as sample data and input it into the CLSTA network for model training. The training set contains 12000 pictures and the test set 4000 pictures, each covering the 6 classes "normal driving", "fatigue driving", "playing with a mobile phone", "smoking", "leaving the post", and "other". The weights of the CNN in the CLSTA network are initialized from the CaffeNet network weights, which greatly helps the network converge; the model Model is obtained through training.
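As a hedged illustration of this training step, a minimal loop under the same PyTorch assumptions might look as follows; the data loader, optimizer settings, and clip-level labelling are hypothetical stand-ins (the patent only specifies the 12000/4000 split, the 6 classes, and CaffeNet-initialized CNN weights), so this shows the general procedure rather than the original Caffe configuration.

```python
import torch
import torch.nn as nn

def train_clsta(model, clip_loader, epochs=10, lr=1e-4, device="cuda"):
    """clip_loader is assumed to yield (x_t, x_prev, label) for single 16-frame clips."""
    model.to(device).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x_t, x_prev, label in clip_loader:
            x_t, x_prev, label = x_t.to(device), x_prev.to(device), label.to(device)
            logits = model(x_t, x_prev).mean(dim=0, keepdim=True)  # clip-level score (1, 6)
            loss = criterion(logits, label.view(1))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```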
Step 5: Embed the CLSTA model Model obtained by the training in step 4 into the industrial control computer; through this model, train driver behavior is recognized and understood. In use, this is mainly realized by the following steps:
Step 51: Collect video of the cab environment with an industrial camera.
Step 52: A script in the industrial control computer decomposes the video into picture frames at 5 FPS.
Step 53: Feed every 16 consecutive decomposed frames into the model for testing.
Step 54: Output the test results and generate a report.
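The following sketch illustrates steps 51 to 54: it decomposes a captured video into frames at roughly 5 FPS with OpenCV, groups every 16 consecutive frames into a clip, and runs the model. The video source, preprocessing, and report format are placeholder assumptions not specified by the patent.

```python
import cv2
import torch

def run_monitoring(model, video_path, classes, target_fps=5, clip_len=16):
    model.eval()
    cap = cv2.VideoCapture(video_path)
    src_fps = cap.get(cv2.CAP_PROP_FPS) or target_fps
    step = max(1, round(src_fps / target_fps))      # keep roughly 5 frames per second
    frames, results, idx = [], [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frame = cv2.resize(frame, (227, 227))
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            frames.append(torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0)
        idx += 1
        if len(frames) == clip_len:
            x_t = torch.stack(frames)                # (16, 3, 227, 227)
            x_prev = torch.roll(x_t, 1, dims=0)      # previous-frame stream
            with torch.no_grad():
                pred = model(x_t, x_prev).mean(dim=0).argmax().item()
            results.append(classes[pred])            # e.g. "fatigue driving"
            frames = []
    cap.release()
    return results                                   # summarized into the report
```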
Fig. 1 is a flow diagram of the railway driver behavior recognition method based on CLSTA. Its process includes:
a. The computer obtains images of the cab environment from the CCD camera through an interface driver;
b. The video is decomposed into RGB pictures;
c. The RGB pictures are then fed into the CLSTA network;
d. The final outputs of the CLSTA network are averaged and fused to obtain the final result; the detection results are summarized to form a detection report.
Fig. 2 is a schematic diagram of the LSTM network structure; its main calculation formulas are:
f_t = σ(W_f · [h_{t-1}, x_t] + b_f)
i_t = σ(W_i · [h_{t-1}, x_t] + b_i)
o_t = σ(W_o · [h_{t-1}, x_t] + b_o)
h_t = o_t * tanh(C_t)
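The candidate and cell-state update equations are not reproduced above; assuming the unit follows the conventional LSTM formulation, they would read:

```latex
\tilde{C}_t = \tanh\left( W_C \cdot [h_{t-1}, x_t] + b_C \right), \qquad
C_t = f_t * C_{t-1} + i_t * \tilde{C}_t
```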
Fig. 3 is a schematic diagram of the CLSTA network topology. The left side is the Spatial Attention sub-network, the middle is the main CNN_LSTM network, and the right side is the Temporal Attention sub-network. Data represents the input data; 16 pictures are input each time. The blocks labelled CNN1 in the figure are the same network, and the blocks labelled CNN2 are likewise the same network, both based on AlexNet. Fc_layer is a fully connected layer, Relu is an activation layer (not counted among the main learning layers), 4096 is the dimension of the fully connected layer in AlexNet (i.e. the dimension of the CNN features), and 256 is the output dimension of the LSTM.

Claims (5)

1. A railway driver behavior recognition method based on CLSTA, characterized by comprising the following steps:
Step 1: According to the characteristics of the driver's cab environment and the driver's common behaviors, establish an improved spatial-temporal attention network STA and design its topology; the improved spatial-temporal attention network STA comprises a spatial attention sub-network SA and a temporal attention sub-network TA;
Step 2: Merge the spatial attention sub-network SA and the temporal attention sub-network TA into the Main LSTM network to obtain a new CLSTA neural network model, and design its topology; the Main LSTM network is composed of a Main CNN network cascaded with two LSTM layers;
Step 3: Collect video samples of common behaviors of train drivers as a data set, input them into the CLSTA neural network model, and train the model; apply the obtained model in an industrial control computer for monitoring and recognition of train driver behavior.
2. The railway driver behavior recognition method based on CLSTA according to claim 1, characterized in that the spatial attention sub-network SA extracts spatial features through a convolutional neural network CNN based on the AlexNet network, the AlexNet network comprising five convolutional layers and one fully connected layer fc6, six learning layers in total; the spatial attention sub-network SA is a two-stream CNN structure, CNN1 and CNN2 respectively, used to extract the spatial features of the current image stream, CNN1 and CNN2 each having six learning layers; CNN1 processes the picture stream x_t of the current frame, which is input into CNN1; CNN2 processes the picture x_{t-1} of the previous frame, which is input into CNN2; an eltwise layer then performs a subtraction operation, subtracting the output features of CNN2 from the output features of CNN1, and the output of the eltwise layer is connected to a fully connected layer Fc_layer1.
3. The railway driver behavior recognition method based on CLSTA according to claim 1, characterized in that the temporal attention sub-network TA is a two-stream CNN+LSTM structure, CNN1+LSTM1 and CNN2+LSTM2 respectively, used to extract the temporal features of the current image stream; the current image stream x_t is input into CNN1 for spatial feature learning, and the output of CNN1 is input into LSTM1 for temporal learning; the previous frame picture x_{t-1} is input into CNN2 for spatial feature learning, and the output of CNN2 is input into LSTM2 for temporal learning; an eltwise layer then performs a subtraction operation, subtracting the output features of LSTM2 from the output features of LSTM1, and the output of the eltwise layer is then connected to a fully connected layer Fc_layer2.
4. The railway driver behavior recognition method based on CLSTA according to claim 1, characterized in that the specific steps of step 2 include:
Step 21: Input the current image stream x_t into the Main CNN to extract the spatial features of the current image stream;
Step 22: Fuse the output of the spatial attention sub-network SA with the output of the Main CNN, the fusion being an addition operation performed by an eltwise layer;
Step 23: Input the features obtained after the fusion in step 22 into the Main LSTM network for temporal feature learning, the Main LSTM network being composed of a cascade of 2 LSTM layers and the input of LSTM1 being the output of step 22; then input the features output by LSTM1 into LSTM2;
Step 24: Fuse the output of the temporal attention sub-network TA with the output of the Main LSTM network in step 23, the fusion being an addition performed by an eltwise layer; then connect Fc_layer3 after the fusion and finally perform classification.
5. The railway driver behavior recognition method based on CLSTA according to claim 1, characterized in that the specific steps of step 3 include:
Step 31: Collect video of the cab environment with an industrial camera;
Step 32: A script in the industrial control computer decomposes the video into picture frames at 5 FPS;
Step 33: Feed every 16 consecutive decomposed frames into the model for testing;
Step 34: Output the test results and generate a report.
CN201810540015.5A 2018-05-30 2018-05-30 CLSTA-based railway driver behavior identification method Active CN108846332B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810540015.5A CN108846332B (en) 2018-05-30 2018-05-30 CLSTA-based railway driver behavior identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810540015.5A CN108846332B (en) 2018-05-30 2018-05-30 CLSTA-based railway driver behavior identification method

Publications (2)

Publication Number Publication Date
CN108846332A true CN108846332A (en) 2018-11-20
CN108846332B CN108846332B (en) 2022-04-29

Family

ID=64210902

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810540015.5A Active CN108846332B (en) 2018-05-30 2018-05-30 CLSTA-based railway driver behavior identification method

Country Status (1)

Country Link
CN (1) CN108846332B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109583508A (en) * 2018-12-10 2019-04-05 长安大学 A kind of vehicle abnormality acceleration and deceleration Activity recognition method based on deep learning
CN109784768A (en) * 2019-02-18 2019-05-21 吉林大学 A kind of driving task recognition methods
CN110059587A (en) * 2019-03-29 2019-07-26 西安交通大学 Human bodys' response method based on space-time attention
CN110135249A (en) * 2019-04-04 2019-08-16 华南理工大学 Human bodys' response method based on time attention mechanism and LSTM
CN110151203A (en) * 2019-06-06 2019-08-23 常熟理工学院 Fatigue driving recognition methods based on multistage avalanche type convolution Recursive Networks EEG analysis
CN110544360A (en) * 2019-08-07 2019-12-06 北京全路通信信号研究设计院集团有限公司 train safe driving monitoring system and method
CN111353636A (en) * 2020-02-24 2020-06-30 交通运输部水运科学研究所 Multi-mode data based ship driving behavior prediction method and system
CN111382647A (en) * 2018-12-29 2020-07-07 广州市百果园信息技术有限公司 Picture processing method, device, equipment and storage medium
CN111723694A (en) * 2020-06-05 2020-09-29 广东海洋大学 Abnormal driving behavior identification method based on CNN-LSTM space-time feature fusion
CN112381068A (en) * 2020-12-25 2021-02-19 四川长虹电器股份有限公司 Method and system for detecting 'playing mobile phone' of person
WO2021184619A1 (en) * 2020-03-19 2021-09-23 南京未艾信息科技有限公司 Human body motion attitude identification and evaluation method and system therefor
CN114343661A (en) * 2022-03-07 2022-04-15 西南交通大学 Method, device and equipment for estimating reaction time of high-speed rail driver and readable storage medium
CN116894225A (en) * 2023-09-08 2023-10-17 国汽(北京)智能网联汽车研究院有限公司 Driving behavior abnormality analysis method, device, equipment and medium thereof

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130073114A1 (en) * 2011-09-16 2013-03-21 Drivecam, Inc. Driver identification based on face data
CN106709511A (en) * 2016-12-08 2017-05-24 华中师范大学 Urban rail transit panoramic monitoring video fault detection method based on depth learning
US20170262995A1 (en) * 2016-03-11 2017-09-14 Qualcomm Incorporated Video analysis with convolutional attention recurrent neural networks
CN107330362A (en) * 2017-05-25 2017-11-07 北京大学 A kind of video classification methods based on space-time notice
CN107609460A (en) * 2017-05-24 2018-01-19 南京邮电大学 A kind of Human bodys' response method for merging space-time dual-network stream and attention mechanism
CN107944409A (en) * 2017-11-30 2018-04-20 清华大学 video analysis method and device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130073114A1 (en) * 2011-09-16 2013-03-21 Drivecam, Inc. Driver identification based on face data
US20170262995A1 (en) * 2016-03-11 2017-09-14 Qualcomm Incorporated Video analysis with convolutional attention recurrent neural networks
CN106709511A (en) * 2016-12-08 2017-05-24 华中师范大学 Urban rail transit panoramic monitoring video fault detection method based on depth learning
CN107609460A (en) * 2017-05-24 2018-01-19 南京邮电大学 A kind of Human bodys' response method for merging space-time dual-network stream and attention mechanism
CN107330362A (en) * 2017-05-25 2017-11-07 北京大学 A kind of video classification methods based on space-time notice
CN107944409A (en) * 2017-11-30 2018-04-20 清华大学 video analysis method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Sijie Song et al.: "An end-to-end spatio-temporal attention model for human action recognition from skeleton data", Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 2017 *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109583508A (en) * 2018-12-10 2019-04-05 长安大学 A kind of vehicle abnormality acceleration and deceleration Activity recognition method based on deep learning
CN111382647A (en) * 2018-12-29 2020-07-07 广州市百果园信息技术有限公司 Picture processing method, device, equipment and storage medium
CN111382647B (en) * 2018-12-29 2021-07-30 广州市百果园信息技术有限公司 Picture processing method, device, equipment and storage medium
CN109784768A (en) * 2019-02-18 2019-05-21 吉林大学 A kind of driving task recognition methods
CN109784768B (en) * 2019-02-18 2023-04-18 吉林大学 Driving task recognition method
CN110059587A (en) * 2019-03-29 2019-07-26 西安交通大学 Human bodys' response method based on space-time attention
CN110135249B (en) * 2019-04-04 2021-07-20 华南理工大学 Human behavior identification method based on time attention mechanism and LSTM (least Square TM)
CN110135249A (en) * 2019-04-04 2019-08-16 华南理工大学 Human bodys' response method based on time attention mechanism and LSTM
CN110151203A (en) * 2019-06-06 2019-08-23 常熟理工学院 Fatigue driving recognition methods based on multistage avalanche type convolution Recursive Networks EEG analysis
CN110151203B (en) * 2019-06-06 2021-11-23 常熟理工学院 Fatigue driving identification method based on multistage avalanche convolution recursive network EEG analysis
CN110544360A (en) * 2019-08-07 2019-12-06 北京全路通信信号研究设计院集团有限公司 train safe driving monitoring system and method
CN111353636A (en) * 2020-02-24 2020-06-30 交通运输部水运科学研究所 Multi-mode data based ship driving behavior prediction method and system
WO2021184619A1 (en) * 2020-03-19 2021-09-23 南京未艾信息科技有限公司 Human body motion attitude identification and evaluation method and system therefor
CN111723694A (en) * 2020-06-05 2020-09-29 广东海洋大学 Abnormal driving behavior identification method based on CNN-LSTM space-time feature fusion
CN112381068A (en) * 2020-12-25 2021-02-19 四川长虹电器股份有限公司 Method and system for detecting 'playing mobile phone' of person
CN112381068B (en) * 2020-12-25 2022-05-31 四川长虹电器股份有限公司 Method and system for detecting 'playing mobile phone' of person
CN114343661A (en) * 2022-03-07 2022-04-15 西南交通大学 Method, device and equipment for estimating reaction time of high-speed rail driver and readable storage medium
CN114343661B (en) * 2022-03-07 2022-05-27 西南交通大学 Method, device and equipment for estimating reaction time of driver in high-speed rail and readable storage medium
CN116894225A (en) * 2023-09-08 2023-10-17 国汽(北京)智能网联汽车研究院有限公司 Driving behavior abnormality analysis method, device, equipment and medium thereof
CN116894225B (en) * 2023-09-08 2024-03-01 国汽(北京)智能网联汽车研究院有限公司 Driving behavior abnormality analysis method, device, equipment and medium thereof

Also Published As

Publication number Publication date
CN108846332B (en) 2022-04-29

Similar Documents

Publication Publication Date Title
CN108846332A (en) A kind of railway drivers Activity recognition method based on CLSTA
CN110363131B (en) Abnormal behavior detection method, system and medium based on human skeleton
CN108875708A (en) Behavior analysis method, device, equipment, system and storage medium based on video
CN106845351A (en) It is a kind of for Activity recognition method of the video based on two-way length mnemon in short-term
CN108791299A (en) A kind of driving fatigue detection of view-based access control model and early warning system and method
CN104717468B (en) Cluster scene intelligent monitoring method and system based on the classification of cluster track
CN108334902A (en) A kind of track train equipment room smog fireproof monitoring method based on deep learning
CN106973039A (en) A kind of network security situation awareness model training method and device based on information fusion technology
CN111738044A (en) Campus violence assessment method based on deep learning behavior recognition
CN107122050A (en) Stable state of motion VEP brain-machine interface method based on CSFL GDBN
CN112259218A (en) Training method for auditory stimulation of infantile autism based on VR interaction technology
Zhang et al. Fall detection in videos with trajectory-weighted deep-convolutional rank-pooling descriptor
CN108376198A (en) A kind of crowd simulation method and system based on virtual reality
CN108983966A (en) Reformation of convicts assessment system and method based on virtual reality and eye movement technique
CN112233800A (en) Disease prediction system based on abnormal behaviors of children
CN115546899A (en) Examination room abnormal behavior analysis method, system and terminal based on deep learning
Makantasis et al. Privileged information for modeling affect in the wild
CN114373225A (en) Behavior recognition method and system based on human skeleton
CN111553264B (en) Campus non-safety behavior detection and early warning method suitable for primary and secondary school students
CN107225571A (en) Motion planning and robot control method and apparatus, robot
CN116308255A (en) Immersion type heat supply pipe network inspection and fault detection system and method based on meta universe
CN115346157A (en) Intrusion detection method, system, device and medium
CN115294519A (en) Abnormal event detection and early warning method based on lightweight network
CN114429677A (en) Coal mine scene operation behavior safety identification and assessment method and system
CN111191511A (en) Method and system for identifying dynamic real-time behaviors of prisons

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant