CN113420961A - Railway locomotive driving safety auxiliary system based on intelligent sensing


Info

Publication number
CN113420961A
CN113420961A
Authority
CN
China
Prior art keywords
locomotive
gesture
frequency
subsystem
domain
Prior art date
Legal status
Pending
Application number
CN202110602504.0A
Other languages
Chinese (zh)
Inventor
石绍应
冯勤群
唐少林
吴杰伟
周志伟
王红涛
王亮
周立和
林科宇
杨杰
蒋贤烨
Current Assignee
Hunan Senying Zhizao Technology Co ltd
Original Assignee
Hunan Senying Zhizao Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hunan Senying Zhizao Technology Co ltd filed Critical Hunan Senying Zhizao Technology Co ltd
Priority to CN202110602504.0A
Publication of CN113420961A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0639 Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06398 Performance of employee with respect to a job function

Landscapes

  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Engineering & Computer Science (AREA)
  • Strategic Management (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Educational Administration (AREA)
  • Operations Research (AREA)
  • Marketing (AREA)
  • Game Theory and Decision Science (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a railway locomotive driving safety auxiliary system based on intelligent sensing, comprising: a sensor subsystem for collecting operation data of locomotive crew members, the operation data including the crew member's gesture actions, calling voice, position and movement track in the cab, and breathing frequency and heartbeat frequency; an upper computer subsystem for scoring the compliance of the crew members' operation behaviors through a deep learning algorithm according to the operation data, evaluating the crew members' fatigue and health condition, and giving alarms according to the scoring and evaluation results; a communication transmission subsystem for uploading the operation data and the scoring and evaluation results to the monitoring center subsystem; and a monitoring center subsystem for displaying the online status of all locomotives and the operation status of locomotive crew members within the management scope of the monitoring center.

Description

Railway locomotive driving safety auxiliary system based on intelligent sensing
Technical Field
The invention belongs to the technical field of safe driving of locomotive crew members, and particularly relates to an auxiliary system for safe driving of a railway locomotive crew member.
Background
Whether a locomotive crew member's operations and actions during work are standard, together with the crew member's physiological health, fatigue, and operational compliance while the locomotive is running, bears directly on the locomotive's operating safety and requires careful, reliable monitoring.
At present, existing monitoring schemes at home and abroad use sensors such as video cameras, infrared sensors and laser sensors. Owing to the diversity and complexity of locomotive operating environments, these sensor-based schemes suffer from monitoring dead angles, poor monitoring effectiveness, and inability to detect occluded behaviors. A better monitoring scheme is therefore urgently needed to solve these problems and safeguard locomotive driving safety.
Disclosure of Invention
In view of the above railway locomotive driving safety problems, the present invention aims to provide a locomotive driving safety auxiliary system that comprehensively grasps a crew member's driving skill and physical state by intelligently sensing the compliance of operations such as the crew member's position, movement track, gesture actions and call response in the cab, and by intelligently analyzing and judging fatigue and health based on heartbeat/breathing frequency detection. The system can routinely evaluate and examine the accuracy and compliance of the crew member's standard operations in daily work, comprehensively grasp driving skill and working state, urge crew members to strengthen learning, training and safety work, raise the daily operation level and quality, and comprehensively improve locomotive driving safety.
In order to achieve the purpose, the technical scheme of the invention is as follows:
the utility model provides a railway locomotive driving safety auxiliary system based on intelligent perception, includes:
a sensor subsystem for collecting operation data of locomotive crew members, the operation data including the crew member's gesture actions, calling voice, position and movement track in the cab, and breathing frequency and heartbeat frequency;
an upper computer subsystem for scoring the compliance of the crew members' operation behaviors through a deep learning algorithm according to the operation data, evaluating the crew members' fatigue and health condition, and giving alarms according to the scoring and evaluation results;
a communication transmission subsystem for uploading the operation data and the scoring and evaluation results to the monitoring center subsystem;
and a monitoring center subsystem for displaying the online status of all locomotives and the operation status of locomotive crew members within the management scope of the monitoring center.
The invention can finely recognize the various gesture actions of locomotive crew members during operation. Based on the fine recognition results and intelligent information processing, it can quantitatively score in real time the compliance of operations such as gesture actions, call response, rearward observation and equipment inspection, and can alarm in real time based on set judgment rules. The real-time monitoring, scoring and alarm results, together with part of the core information, can be transmitted to the remote monitoring center system in real time through a wireless network, while the complete operation process data can be dumped to the monitoring center system for in-depth after-the-fact analysis and for retrieving relevant evidence data on violations. Based on the detection results of sensors for gesture action recognition, voice recognition, position monitoring, and heartbeat and breathing monitoring, combined with deep learning technology, the safe operation behavior of locomotive crew members can be comprehensively monitored and evaluated in a normalized way.
Drawings
FIG. 1 is a schematic diagram of the configuration of the safety auxiliary system for locomotive driving according to the present invention.
FIG. 2 is a schematic diagram of the construction of the sensor subsystem of the present invention.
Fig. 3 is a schematic of the signal received when a portion of the transmitted chirped continuous millimeter wave signal is reflected back to the radar receiving antenna after encountering a target at distance R.
FIG. 4 is a flowchart of a laser radar human body gesture recognition method based on image cancellation preprocessing.
FIG. 5 shows a 9-layer neural network architecture of the present invention.
FIG. 6 is a flowchart of a human body gesture recognition method based on the fusion of millimeter waves and a laser radar.
FIG. 7 shows a 7-layer neural network architecture of the present invention.
Fig. 8 is a schematic structural diagram of the upper computer subsystem according to the present invention.
Fig. 9 is a schematic structural diagram of a communication transmission subsystem according to the present invention.
Fig. 10 is a schematic diagram of the composition structure of the monitoring center subsystem according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, a safety auxiliary system for locomotive driving is provided, which comprises a sensor subsystem 1, an upper computer subsystem 2, a communication transmission subsystem 3 and a monitoring center subsystem 4.
The sensor subsystem 1 detects and collects data on the locomotive crew member's gesture actions, voice calls, position and movement track in the cab, and breathing and heartbeat frequency, and transmits the data to the upper computer subsystem 2.
The upper computer subsystem 2, with reference to LKJ and other signals, intelligently processes and analyzes the crew member's gesture actions, voice calls, position and movement track in the cab, breathing and heartbeat frequency, and other data; scores the compliance of the crew member's work behaviors in real time against a corresponding evaluation index system; evaluates the crew member's fatigue and health in real time; gives real-time alarms according to the scoring and evaluation results; and uploads or dumps the corresponding scoring and evaluation results, forensic data and monitoring data to the monitoring center subsystem.
And the communication transmission subsystem 3 is used for uploading or dumping the scoring and evaluation results, the evidence obtaining data and the monitoring data of the upper computer subsystem 2 to the monitoring center subsystem 4.
The monitoring center subsystem 4 can display the on-line condition of all locomotives and the operation condition of locomotive crew members in the management level of the monitoring center in real time according to the uploaded and dumped data, and can perform post analysis and evidence collection.
As shown in fig. 2, the sensor subsystem 1 includes an LKJ docking module 5, a speech recognition module 6, a life intelligent sensing module 7, a position intelligent sensing module 8, and a gesture intelligent sensing module 9.
And the LKJ docking module 5 is used for acquiring an LKJ signal of the locomotive and uploading the LKJ signal to the upper computer subsystem 2.
Specifically, the LKJ docking module 5 uses a TAX card to perform LKJ data acquisition and forward to the upper computer subsystem 2.
The voice recognition module 6 collects and recognizes the voice of 'call (answer)' of the locomotive crew member in the process of operation activity and uploads the voice to the upper computer subsystem.
The life intelligent sensing module 7 monitors human body vital sign information such as heartbeat frequency and respiratory frequency of locomotive crew members in real time and uploads the information to the upper computer subsystem.
The core sensor of the life intelligent sensing module 7 is a millimeter wave radar, which detects the locomotive driver's heartbeat and breathing by detecting the micro-vibrations that heartbeat and breathing produce at the corresponding parts of the body. To make full use of the high millimeter wave frequency and improve the accuracy of measuring these tiny body vibrations caused by respiration and heartbeat, a phase-difference ranging method is adopted to measure the tiny displacements.
As shown in fig. 3, after the chirped continuous millimeter wave signal transmitted by the millimeter wave radar meets a target at distance R, the part of the signal reflected back to the radar receiving antenna is received as:

$$s_R(t)=\alpha\exp\left\{j2\pi\left[f_c\left(t-\frac{2R}{c}\right)+\frac{S}{2}\left(t-\frac{2R}{c}\right)^{2}\right]\right\}$$

Correlating the received signal with the transmitted signal and performing low-pass filtering gives:

$$x(t)=\alpha\exp\left\{j2\pi\left(\frac{2SR}{c}\,t+\frac{2R}{\lambda}\right)\right\}$$

The phase shift of the target echo at distance R is therefore:

$$\phi=\frac{4\pi R}{\lambda}$$

When the target undergoes a small displacement, the phase shifts of the adjacent echoes before and after it differ; the phase shift difference is:

$$\Delta\phi=\frac{4\pi\,\Delta d}{\lambda}$$

In the above, Δd is the small target displacement and λ is the radar wavelength. The small displacement between adjacent echo instants can thus be found as:

$$\Delta d=\frac{\lambda\,\Delta\phi}{4\pi}$$

For example, when a 60 GHz millimeter wave radar is used to detect heartbeat, breathing and the like, a target displacement of 1.25 mm equals 1/4 of the radar wavelength and causes a phase change of 180 degrees. The phase-difference ranging method can therefore detect target micro-motions far smaller than 1 mm.
Specifically, when the body vibrates due to heartbeat or respiration, the range-dimension FFT is used to extract the phase of the signal peak at the range cell m corresponding to the vibration position. At the n-th chirp instant T_c, the vibration-position signal on range cell m is:

$$x(nT_c)=\alpha\exp\left\{j\,\frac{4\pi}{\lambda}\left[R+\Delta d(nT_c)\right]\right\}$$

A spectral analysis of this slow-time sequence then yields the vibration frequencies corresponding to heartbeat and respiration.
The millimeter wave radar can measure the micro displacement of the corresponding body part caused by the heartbeat, the respiration and the like of the locomotive driver based on the phase method, so the accuracy of measuring the heartbeat frequency and the respiratory frequency is high.
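As an illustration of this processing chain, the following minimal sketch (with assumed radar parameters, array shapes and function names; it is not the patented implementation) extracts a vital-sign spectrum from the FMCW slow-time phase:

```python
import numpy as np

# Assumed parameters for a 60 GHz FMCW radar (illustrative only)
WAVELENGTH = 3e8 / 60e9   # lambda, about 5 mm
CHIRP_PERIOD = 40e-6      # T_c, slow-time sampling interval

def vital_sign_spectrum(beat_signal):
    """beat_signal: complex baseband samples, shape (n_chirps,
    n_fast_time). Returns (freqs, spectrum) of the micro-displacement."""
    # Range FFT; pick the range cell m with the strongest return
    profile = np.fft.fft(beat_signal, axis=1)
    m = int(np.abs(profile).sum(axis=0).argmax())
    # Unwrapped slow-time phase at cell m tracks the micro-motion
    phase = np.unwrap(np.angle(profile[:, m]))
    # delta_d = lambda * delta_phi / (4*pi), per the formulas above
    displacement = WAVELENGTH * np.diff(phase) / (4 * np.pi)
    # Respiration (~0.1-0.5 Hz) and heartbeat (~0.8-2 Hz) appear as
    # peaks in the displacement spectrum
    spectrum = np.abs(np.fft.rfft(displacement))
    freqs = np.fft.rfftfreq(displacement.size, d=CHIRP_PERIOD)
    return freqs, spectrum
```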
The position intelligent sensing module 8 monitors the movement characteristic information of the locomotive crew member such as position, body posture, moving track and the like in real time and uploads the movement characteristic information to the upper computer subsystem.
The core sensor of the position intelligent sensing module 8 is also a millimeter wave radar. The received target echo millimeter wave signal is processed by low-noise amplification, down-conversion, intermediate-frequency filtering and A/D sampling, followed by ranging, angle measurement, target detection, Doppler information extraction and phase information extraction; target clustering and target tracking are then performed on the signal to form target position, movement track and 3D point cloud data.
Specifically, target clustering adopts the K-means clustering algorithm, which is relatively simple to implement, occupies few resources, and meets the application requirements. Its core idea is to partition n vectors x_j (j = 1, 2, …, n) into c groups G_i (i = 1, 2, …, c) and find a cluster center for each group such that a cost function of a dissimilarity (or distance) measure is minimized. The algorithm is realized by the following steps (a code sketch follows the steps):

Step 1: initialize the cluster centers c_i, i = 1, …, c; the typical practice is to take c points arbitrarily from all data points.

Step 2: determine the membership matrix U by the formula

$$u_{ij}=\begin{cases}1, & \left\|x_j-c_i\right\|^{2}\le\left\|x_j-c_k\right\|^{2}\ \text{for every}\ k\ne i\\ 0, & \text{otherwise.}\end{cases}$$

Step 3: calculate the cost function according to the formula

$$J=\sum_{i=1}^{c}\sum_{x_j\in G_i}\left\|x_j-c_i\right\|^{2}.$$

If it is below a certain threshold, or its change from the previous cost function value is below a certain threshold, the algorithm stops.

Step 4: correct the cluster centers according to the formula

$$c_i=\frac{1}{\left|G_i\right|}\sum_{x_j\in G_i}x_j,$$

then return to step 2 and loop.
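A minimal K-means sketch of steps 1 to 4 (the random initialization and the (n, d) point layout are assumptions):

```python
import numpy as np

def kmeans(points, c, iters=50, tol=1e-6):
    """Plain K-means over radar detection points of shape (n, d),
    following steps 1-4 above."""
    rng = np.random.default_rng(0)
    centers = points[rng.choice(len(points), size=c, replace=False)].copy()
    prev_cost = np.inf
    for _ in range(iters):
        # Step 2: membership -- assign each point to its nearest center
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        # Step 3: cost function; stop when its change falls below tol
        cost = d2[np.arange(len(points)), labels].sum()
        if abs(prev_cost - cost) < tol:
            break
        prev_cost = cost
        # Step 4: correct each cluster center to its group mean
        for i in range(c):
            if np.any(labels == i):
                centers[i] = points[labels == i].mean(axis=0)
    return centers, labels
```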
Target tracking adopts a clustering-based JMS-GMPHD (jump Markov system Gaussian mixture probability hypothesis density) group-target tracking algorithm. The algorithm is based on random set theory and can handle multi-target detection and tracking simultaneously; with the improvements made in this embodiment it shows clear advantages for group target tracking. The key implementation steps of the algorithm are as follows (a simplified code sketch is given after the update step):

Step 1: prediction

Suppose the posterior intensity $v_{k-1}$ at time k-1 has the form:

$$v_{k-1}(x,r)=\sum_{i=1}^{J_{k-1}(r)}w_{k-1}^{(i)}(r)\,\mathcal{N}\!\left(x;\,m_{k-1}^{(i)}(r),\,P_{k-1}^{(i)}(r)\right)$$

Then the predicted intensity $v_{k|k-1}$ is:

$$v_{k|k-1}(x,r)=\gamma_k(x,r)+v_{f,k|k-1}(x,r)$$

in which

$$\gamma_k(x,r)=\pi_k(r)\sum_{i=1}^{J_{\gamma,k}}w_{\gamma,k}^{(i)}\,\mathcal{N}\!\left(x;\,m_{\gamma,k}^{(i)},\,P_{\gamma,k}^{(i)}\right)$$

is the new-born target intensity. Accordingly:

$$v_{f,k|k-1}(x,r)=p_S\sum_{i=1}^{J_{k-1}(r)}w_{k-1}^{(i)}(r)\,\mathcal{N}\!\left(x;\,m_{k|k-1}^{(i)}(r),\,P_{k|k-1}^{(i)}(r)\right)$$

$$m_{k|k-1}^{(i)}(r)=F(r)\,m_{k-1}^{(i)}(r)$$

$$P_{k|k-1}^{(i)}(r)=F(r)\,P_{k-1}^{(i)}(r)\,F(r)^{\mathrm{T}}+Q(r)$$

where $w_{\gamma,k}^{(i)}$, $m_{\gamma,k}^{(i)}$ and $P_{\gamma,k}^{(i)}$ are respectively the weights, means and covariances of the new-born target intensity, $p_S$ is the survival probability, and $F(r)$ and $Q(r)$ are the state transition matrix and process noise covariance of motion model r.

Step 2: update

Suppose the predicted intensity $v_{k|k-1}$ has the form:

$$v_{k|k-1}(x,r)=\sum_{i=1}^{J_{k|k-1}(r)}w_{k|k-1}^{(i)}(r)\,\mathcal{N}\!\left(x;\,m_{k|k-1}^{(i)}(r),\,P_{k|k-1}^{(i)}(r)\right)$$

Then the posterior intensity $v_k$ is:

$$v_k(x,r)=(1-p_D)\,v_{k|k-1}(x,r)+\sum_{z\in Z_k}\sum_{i=1}^{J_{k|k-1}(r)}w_k^{(i)}(z,r)\,\mathcal{N}\!\left(x;\,m_{k|k}^{(i)}(z,r),\,P_{k|k}^{(i)}(r)\right)$$

in which:

$$w_k^{(i)}(z,r)=\frac{p_D\,w_{k|k-1}^{(i)}(r)\,q_k^{(i)}(z,r)}{\kappa_k(z)+p_D\sum_{r'}\sum_{j}w_{k|k-1}^{(j)}(r')\,q_k^{(j)}(z,r')}$$

$$q_k^{(i)}(z,r)=\mathcal{N}\!\left(z;\,H(r)\,m_{k|k-1}^{(i)}(r),\,H(r)\,P_{k|k-1}^{(i)}(r)\,H(r)^{\mathrm{T}}+R(r)\right)$$

$$m_{k|k}^{(i)}(z,r)=m_{k|k-1}^{(i)}(r)+K_k^{(i)}(r)\left(z-H(r)\,m_{k|k-1}^{(i)}(r)\right)$$

$$P_{k|k}^{(i)}(r)=\left(I-K_k^{(i)}(r)\,H(r)\right)P_{k|k-1}^{(i)}(r)$$

$$K_k^{(i)}(r)=P_{k|k-1}^{(i)}(r)\,H(r)^{\mathrm{T}}\left(H(r)\,P_{k|k-1}^{(i)}(r)\,H(r)^{\mathrm{T}}+R(r)\right)^{-1}$$

The expected number of targets is:

$$\hat{N}_k=\sum_{r}\sum_{i}w_k^{(i)}(r)$$

In the above formulas, $\mathcal{N}(x;m,P)$ denotes that the target motion state random vector x obeys a Gaussian distribution with mean m and covariance matrix P; the J's in the formulas are given model parameters; $\kappa_k(z)$ is the clutter intensity; $p_D$ is the detection probability; $H(r)$ and $R(r)$ are the observation matrix and measurement noise covariance of model r; and $\pi_k(r)$ is the probability distribution at time k of a new-born target with motion model r.
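As a concrete illustration, a minimal single-model GM-PHD predict/update cycle is sketched below; the jump-Markov extension adds the model index r, and all matrices, probabilities and thresholds here are assumed values, not taken from the patent:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Gaussian:
    w: float        # component weight
    m: np.ndarray   # mean (target state)
    P: np.ndarray   # covariance

def phd_predict(comps, births, F, Q, p_s=0.99):
    """Prediction step: surviving components plus new-born components."""
    survived = [Gaussian(p_s * c.w, F @ c.m, F @ c.P @ F.T + Q) for c in comps]
    return survived + list(births)

def phd_update(comps, measurements, H, R, p_d=0.9, clutter=1e-4):
    """Update step: missed-detection terms plus one term per (z, component).
    The sum of all output weights estimates the number of targets."""
    out = [Gaussian((1 - p_d) * c.w, c.m, c.P) for c in comps]
    for z in measurements:
        terms = []
        for c in comps:
            S = H @ c.P @ H.T + R
            K = c.P @ H.T @ np.linalg.inv(S)
            nu = z - H @ c.m                      # innovation
            q = np.exp(-0.5 * nu @ np.linalg.solve(S, nu)) \
                / np.sqrt(np.linalg.det(2 * np.pi * S))
            terms.append(Gaussian(p_d * c.w * q, c.m + K @ nu,
                                  (np.eye(len(c.m)) - K @ H) @ c.P))
        norm = clutter + sum(t.w for t in terms)
        for t in terms:
            t.w /= norm
        out.extend(terms)
    return out
```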
The frequency-modulated continuous millimeter wave radar can thus accurately measure the movement track and position of the operator in the cab, and a high-precision 3D measurement point cloud is formed for the actions at the determined position.
The gesture intelligent sensing module 9 monitors gesture action information of the locomotive crew member in real time and uploads the gesture action information to the upper computer subsystem.
In a first embodiment, the gesture intelligent sensing module 9 is a millimeter wave radar. The method for acquiring gesture actions based on the millimeter wave radar includes: performing feature extraction and classification recognition on the received millimeter wave electromagnetic signals to judge the action sequence type of the current dynamic gesture, where the features include one or more of distance, azimuth angle, pitch angle and Doppler frequency; the feature extraction is realized with a multi-domain feature engineering technique based on the range domain-Doppler domain and the time domain-frequency domain, and/or the classification recognition is realized with a multilayer perceptron.
Wherein the multi-domain feature engineering technology specifically comprises: a time domain-frequency domain combined characteristic consisting of envelope frequency, peak frequency, frequency component duration, and peak frequency dynamic range; the distance domain-Doppler domain combined characteristic is formed by scattering center distance-Doppler tracks, distance-velocity accumulation values, distance-velocity dispersion ranges and multi-channel distance-Doppler inter-frame difference; and classifying and combining the features to form a dynamic feature vector sequence of continuous multiple frames.
Specifically, the dynamic gesture recognition and monitoring of the locomotive driver are realized by extracting detailed characteristic information contained in a gesture action echo of the locomotive driver and classifying and recognizing the characteristic information. In order to fully utilize the advantage of millimeter wave high frequency and improve the accuracy of describing small-scale detail feature information in the gesture echo, the following multi-domain combined features are adopted for dynamic feature extraction.
As shown in fig. 3, after the transmitted chirped continuous millimeter wave signal encounters a target gesture at distance R, the part of the signal reflected back to the radar receiving antenna is received as:

$$s_R(t)=\alpha\exp\left\{j2\pi\left[f_c\left(t-\frac{2R}{c}\right)+\frac{S}{2}\left(t-\frac{2R}{c}\right)^{2}\right]\right\}$$

Correlating the received signal with the transmitted signal and performing low-pass filtering gives the baseband echo signal x(t), whose corresponding discrete digital signal is:

$$x(n)=x(nT_s)=\alpha\exp\left\{j2\pi\left(\frac{2SR}{c}\,nT_s+\frac{2R}{\lambda}\right)\right\}$$

where $T_s$ is the sampling period.
Time-frequency analysis of the signal gives the time-frequency spectrogram of the echo signal:

$$F(n,k)=\sum_{m}x(m)\,h(m-n)\,e^{-j2\pi km/M}$$

where h(m) is a window function that affects the time-domain and frequency-domain resolution of the time-frequency analysis.

The time-domain energy feature based on time-frequency analysis is:

$$E(n)=\sum_{k=0}^{M-1}\left|F(n,k)\right|^{2}$$

where M is the number of Doppler resolution cells.

The main Doppler frequency feature based on time-frequency analysis is:

$$f_D(n)=\underset{k}{\arg\max}\,\left|F(n,k)\right|$$

Arranging the baseband signals of multiple pulse repetition periods by range sampling time (fast time) versus pulse repetition period (slow time) gives the range domain-Doppler domain R-D map RD(r, v, T), where the index variables r, v and T denote range, velocity and frame time respectively.

The range profile feature based on the R-D map is:

$$P_r(r,T)=\sum_{v}RD(r,v,T)$$

Similarly, the micro-motion Doppler feature based on the R-D map is:

$$P_v(v,T)=\sum_{r}RD(r,v,T)$$

The velocity centroid feature based on the R-D map is:

$$v_c(T)=\frac{\sum_{r=r_0}^{r_1}\sum_{v}v\,RD(r,v,T)}{\sum_{r=r_0}^{r_1}\sum_{v}RD(r,v,T)}$$

where r_0 and r_1 are respectively the minimum and maximum ranges of the gesture motion distribution.

The velocity dispersion range feature based on the R-D map is:

$$\Delta v(T)=v_{\max}(T)-v_{\min}(T)$$

Similarly, the range dispersion range feature based on the R-D map is:

$$\Delta r=\max_{T}r^{*}(T)-\min_{T}r^{*}(T)$$

where $r^{*}(T)$ is the range corresponding to the range-Doppler cell with the largest echo energy in the T-th frame data, i.e.

$$r^{*}(T)=\underset{r}{\arg\max}\ \max_{v}RD(r,v,T)$$

The energy accumulation feature based on the R-D map is:

$$E(T)=\sum_{r}\sum_{v}RD(r,v,T)$$

The energy difference feature based on the R-D map is:

$$\Delta E(T)=E(T)-E(T-1)$$

The multichannel accumulation feature based on the R-D map is:

$$A(r,v,T)=\sum_{k=1}^{K}RD_k(r,v,T)$$

where RD_k(r, v, T) is the R-D map of the k-th receive channel and K is the total number of receive channels.

The multichannel difference feature based on the R-D map is:

$$M_{ij}(r,v,T)=RD_i(r,v,T)-RD_j(r,v,T)$$

The time domain-frequency domain features and the range domain-Doppler domain features are combined in time sequence to form a feature queue, which is sent to the multilayer perceptron for dynamic gesture classification and recognition.
In the method for acquiring the gesture actions based on the millimeter wave radar, the millimeter wave radar is adopted to extract and classify the characteristics of the local detail information of the gesture actions.
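A rough sketch of how such R-D maps and a few of the features above could be computed per frame; the FFT-based R-D construction, the array shapes and the chosen feature subset are assumptions, not the patented signal chain:

```python
import numpy as np

def range_doppler_map(frame):
    """frame: complex baseband samples of one frame T, shape
    (n_chirps, n_fast_time). Fast-time FFT gives range, slow-time
    FFT gives Doppler; returns |RD(v, r)| with Doppler centered."""
    return np.abs(np.fft.fftshift(np.fft.fft2(frame), axes=0))

def frame_features(rd, prev_rd):
    """A few of the R-D features listed above for one frame."""
    energy = rd.sum()                        # energy accumulation E(T)
    energy_diff = energy - prev_rd.sum()     # energy difference
    v_axis = np.arange(rd.shape[0]) - rd.shape[0] // 2
    power_per_v = rd.sum(axis=1)             # Doppler signature
    v_centroid = (v_axis * power_per_v).sum() / power_per_v.sum()
    v_star, r_star = np.unravel_index(rd.argmax(), rd.shape)  # r*(T)
    return np.array([energy, energy_diff, v_centroid, r_star, v_star])

def feature_sequence(frames):
    """Stack per-frame features into the dynamic feature vector
    sequence that is fed to the multilayer perceptron."""
    rds = [range_doppler_map(f) for f in frames]
    return np.stack([frame_features(rds[t], rds[max(t - 1, 0)])
                     for t in range(len(rds))])
```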
In a second embodiment, the gesture intelligent sensing module 9 is a laser radar. The method for acquiring gesture actions based on the laser radar includes: collecting a depth image of the operator in the working state with the laser radar, the depth image being a laser image with the working-place background removed; and sending the depth image into a trained neural network for human body gesture recognition, then detecting and outputting the recognition result.
Further, based on the same laser radar, a pure background image of an operator workplace and a real-time image of the operator in a working state are respectively collected, the pixels of the real-time image and the pure background image are the same, and the real-time image and the pure background image are cancelled to obtain the depth image.
And normalizing each frame of real-time image before cancellation and/or normalizing each frame of depth image after cancellation, wherein the real-time image has n frames, and n is a positive integer greater than zero.
The method for sending the depth image into the neural network for recognition comprises the following steps: and taking the real-time image before cancellation and the depth image after cancellation as neural network model input data, and taking a human body gesture recognition result as a model output data.
Specifically, as shown in fig. 4, the method for acquiring gesture actions based on lidar of the present invention includes the following steps:
the method comprises the step S1 of collecting a depth image of an operator in a working state by using a laser radar, wherein the depth image is a laser image for removing a background of a working place.
The method for acquiring the depth image is based on the same laser radar, pure background images of operator workplaces with the same pixels and real-time images of operators in a working state are acquired respectively, and then the real-time images and the pure background images are cancelled to acquire the depth image.
The real-time image has n frames, where n is a positive integer greater than zero. So that all image data are cancelled under the same standard, each frame of real-time image is normalized before cancellation; and since the dynamic range of the sample data becomes small after cancellation, each frame of depth image is normalized again after cancellation to enhance the contrast of the sample image.
For ease of understanding, a specific embodiment of step S1 is exemplified below:

Step S11: install the laser radar in the locomotive cab, power on, and collect M frames of clean background depth image samples in the locomotive, recorded as G_m (m = 1, 2, …, M), each frame of size I × J. Average the M background depth frames according to the following formula to obtain the background sample G_A used for real-time cancellation preprocessing:

$$G_A(i,j)=\frac{1}{M}\sum_{m=1}^{M}G_m(i,j)$$

where G_m(i, j) denotes the pixel value at row i, column j of the m-th frame depth image, i ∈ {1, 2, …, I}, j ∈ {1, 2, …, J}.

Step S12: when the laser radar works normally, collect laser depth image samples in real time at the set frame rate, recorded as G_n (n = 1, 2, …), each frame of size I × J, i.e. the height dimension contains I pixels and the width dimension contains J pixels.

Step S13: so that all image data are cancelled under the same standard, normalize the depth image data G_n collected at the n-th frame. The maximum pixel value in the image is:

$$g_{\max}=\max_{i,j}G_n(i,j)$$

where G_n(i, j) denotes the pixel value at row i, column j of the n-th frame depth image, i ∈ {1, 2, …, I}, j ∈ {1, 2, …, J}. Normalize the depth sample data by this maximum, computing for every pixel, so that the pixel values are distributed between 0 and 255:

$$G_{gn}(i,j)=\frac{G_n(i,j)}{g_{\max}}\times 255$$

Step S14: perform cancellation processing on the collected n-th frame real-time depth image to obtain the background-cancelled depth image G_Cn, computed as:

$$G_{Cn}(i,j)=\left|G_n(i,j)-G_A(i,j)\right|$$

Step S15: the dynamic range of the sample data becomes small after cancellation, so a second normalization is needed to enhance the contrast of the sample image; normalize the cancelled n-th frame depth image data G_Cn. The maximum pixel value in the image is:

$$g_{\max}=\max_{i,j}G_{Cn}(i,j)$$

where G_Cn(i, j) denotes the pixel value at row i, column j of the cancelled n-th frame depth image, i ∈ {1, 2, …, I}, j ∈ {1, 2, …, J}. Normalize by this maximum, computing for all pixel points so that the pixel values are distributed between 0 and 255:

$$G_{gCn}(i,j)=\frac{G_{Cn}(i,j)}{g_{\max}}\times 255$$
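A minimal sketch of steps S11 to S15 (the numpy layout and function names are assumptions):

```python
import numpy as np

def normalize_255(img):
    """Scale a frame so its pixel values lie in [0, 255] (steps S13/S15)."""
    g_max = img.max()
    return img / g_max * 255.0 if g_max > 0 else img

def background_sample(background_frames):
    """Step S11: average the M clean background depth frames into G_A."""
    return np.mean(np.stack(background_frames), axis=0)

def cancel_frame(g_n, g_a):
    """Steps S13-S15: first normalization, background cancellation
    against G_A, second normalization. Returns (G_gn, G_gCn)."""
    g_gn = normalize_255(g_n)           # S13
    g_cn = np.abs(g_n - g_a)            # S14: |G_n - G_A|
    return g_gn, normalize_255(g_cn)    # S15
```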
after the depth image is obtained in step S1, gesture recognition is performed in step S2.
And S2, sending the depth image into a trained neural network for human body gesture recognition, detecting and outputting a recognition result.
The method of sending the depth image into the neural network for recognition is: the real-time image before cancellation and the depth image after cancellation are both used as input data of the neural network model, and the human body gesture recognition result is used as the model output. Network model parameters obtained this way can recognize data samples both before and after cancellation, which avoids the influence of background mutation on network recognition performance.
The training method of the neural network is as follows: set the neural network model as a convolutional neural network; label the real-time depth image set G before cancellation and the depth image sample set G_C after cancellation according to gesture type; and train the neural network model parameters with the image data of sample sets G_C and G as input data and the labels as model output data.
The convolutional neural network is set to a 9-layer structure. Adopting a 9-layer convolutional neural network lets the recognition system achieve both real-time recognition processing and a high recognition accuracy, meeting the real-time application requirements of railway locomotive driving.
The specific implementation of step S2 is as follows:

Step S21: send the normalized depth image samples G_gn(i, j) and G_gCn(i, j) obtained in steps S13 and S15 into the trained convolutional neural network for recognition, and detect and output the recognition result. As shown in fig. 5, the structural design and parameter training of the neural network include the following steps:
s22.1, building a neural network structure, wherein the neural network comprises 9 layers of structures, which are respectively as follows:
layer 1, convolution layer, convolution kernel size is 5 × 5 × 1, the layer contains 6 kinds of convolution kernels in total, and the convolution kernel step size is 1;
the 2 nd layer is a pooling layer, the size of a pooling core is 2 multiplied by 1, the pooling mode is maximum pooling, and the step size of the pooling core is 2;
layer 3, convolution layer, convolution kernel size is 5 × 5 × 6, the layer contains 16 kinds of convolution kernels in total, and the convolution kernel step size is 1;
the layer 4 is a pooling layer, the size of a pooling core is 2 multiplied by 1, the pooling mode is maximum pooling, and the step size of the pooling core is 2;
layer 5, convolution layer, convolution kernel size is 5 × 5 × 16, the layer contains 64 kinds of convolution kernels in total, and the convolution kernel step size is 1;
layer 6, convolution layer, convolution kernel size 5 × 5 × 64, this layer contains 128 kinds of convolution kernels altogether, convolution kernel step size is 1;
the 7 th layer and the full connection layer are used for sequentially splicing all output characteristic values of the 6 th layer into a long column vector to serve as an input node of the 7 th layer, and the full connection layer is formed by 1024 output nodes;
a layer 8, a full connection layer, wherein 1024 nodes output from the layer 7 are used as input nodes, and the input nodes and 84 output nodes form the full connection layer;
and the layer 9 and the output layer assume M gesture types in total, and 84 nodes output by the layer 8 are used as input nodes to form a full connection layer with the M output nodes.
Step S22.2: preprocess the training samples. Collect multi-sample data and the corresponding background data, preprocess the collected background data to obtain the corresponding background samples, and perform cancellation processing on the collected actual sample data to obtain sample data with the background cancelled, giving the cancelled sample set G_C and the uncancelled sample set G. Label the sample sets G_C and G according to gesture type using an image labeling software tool. A sample label is 0-1 vector data; the label data of the k-th picture training sample is recorded as b_k. Assuming M gesture types in total, b_k has size M × 1, with element values:

$$b_k(m)=\begin{cases}1, & \text{the }k\text{-th sample belongs to gesture type }m\\ 0, & \text{otherwise}\end{cases}$$
Step S22.3: train the neural network model parameters with the image data of sample sets G_C and G as model input data and the sample label vectors as model output data. The trained network model parameters can recognize data samples both before and after cancellation, which avoids the influence of background mutation on network recognition performance.
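As a concrete illustration of the 9-layer structure of step S22.1, here is a PyTorch sketch; the ReLU activations, the single-channel depth input and the class name are assumptions the text does not specify:

```python
import torch
import torch.nn as nn

class GestureNet9(nn.Module):
    """Sketch of the 9-layer network of step S22.1."""
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, 5), nn.ReLU(),      # layer 1: 6 kernels, 5x5x1
            nn.MaxPool2d(2, 2),                 # layer 2: max pool, stride 2
            nn.Conv2d(6, 16, 5), nn.ReLU(),     # layer 3: 16 kernels, 5x5x6
            nn.MaxPool2d(2, 2),                 # layer 4: max pool, stride 2
            nn.Conv2d(16, 64, 5), nn.ReLU(),    # layer 5: 64 kernels, 5x5x16
            nn.Conv2d(64, 128, 5), nn.ReLU(),   # layer 6: 128 kernels, 5x5x64
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                       # splice into a long vector
            nn.LazyLinear(1024), nn.ReLU(),     # layer 7: FC, 1024 nodes
            nn.Linear(1024, 84), nn.ReLU(),     # layer 8: FC, 84 nodes
            nn.Linear(84, num_classes),         # layer 9: output, M nodes
        )

    def forward(self, x):        # x: (batch, 1, H, W) depth images
        return self.classifier(self.features(x))
```

For a 64 × 64 input, GestureNet9(num_classes=8)(torch.randn(1, 1, 64, 64)) returns an 8-way logit vector; nn.LazyLinear resolves the flattened size on first use.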
A step S3 is also included to improve the accuracy of the recognition output.

Step S3: integrate the recognition results of the n-th frame depth images G_gn(i, j) and G_gCn(i, j); for the same target, select the recognition result with the highest recognition rate as the output recognition result.
The specific implementation of step S3 is as follows:

Step S31: record the recognition result of the n-th frame picture; if n < N, return to step S2 to recognize the (n+1)-th frame sample; if n = N, execute step S32.

Step S32: integrate the N frame recognition results and output the final recognition result by voting. Assuming M gesture types in total, the number of votes received by each gesture is recorded as T_m (m = 1, 2, …, M); the final result R is the class with the most votes:

$$R=\underset{m}{\arg\max}\ T_m$$
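A one-function sketch of the step S32 voting (names assumed):

```python
import numpy as np

def vote(per_frame_classes, num_classes):
    """Step S32 voting: per_frame_classes holds one predicted class
    index per frame; returns R, the class with the most votes."""
    tallies = np.bincount(np.asarray(per_frame_classes), minlength=num_classes)
    return int(tallies.argmax())
```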
The method for acquiring the gesture actions based on the laser radar has the following advantages:
(1) cancellation preprocessing of the laser radar real-time image against the background image suppresses interference from fixed background targets, so the contrast of the operating-gesture target depth image is markedly improved and the gesture edge features are clearer and more stable; this avoids the influence of the complex in-locomotive environment, or of differences between locomotive types, on the quality of the gesture target laser image and on detection and recognition performance, improving the robustness and adaptability of the gesture recognition algorithm;
(2) adopting a 9-layer convolutional neural network lets the recognition system achieve both real-time recognition processing and a high recognition accuracy, meeting the real-time application requirements of railway locomotive driving.
In a third embodiment, the gesture intelligent sensing module 9 is a combination of a millimeter wave radar and a laser radar, and the method for acquiring the gesture actions based on the fusion of the millimeter wave radar and the laser radar includes: controlling the millimeter wave radar to send a trigger signal to the laser radar when the millimeter wave radar monitors the gesture initial action of the operator; controlling the laser radar to start to collect a depth image of an operator in a working state after receiving a trigger signal; and sending the depth image into a trained neural network for human body gesture recognition, detecting and outputting a recognition result.
Specifically, as shown in fig. 6, the method for recognizing human body gestures based on the fusion of millimeter waves and laser radar of the present invention includes the following steps:
and S1, controlling the millimeter wave radar to send a trigger signal to the laser radar when the millimeter wave radar monitors the gesture starting action of an operator.
The method for monitoring the gesture initial action comprises the following steps: extracting characteristic information corresponding to the dynamic gesture according to the received millimeter wave electric signals, wherein the characteristic information comprises one or more of distance, azimuth angle, pitch angle and Doppler frequency, generating a characteristic vector according to the characteristic information, and identifying and analyzing the characteristic vector so as to judge the category of the dynamic gesture.
In the above, the extracting of the feature information is implemented by using a multi-domain feature engineering technology based on a distance domain-doppler domain and a time domain-frequency domain, where the multi-domain feature engineering technology includes: the method comprises the steps of carrying out classification combination on time domain-frequency domain combined features consisting of envelope frequency, peak frequency, frequency component duration and peak frequency dynamic range and distance domain-Doppler domain combined features consisting of scattering center distance-Doppler track, distance-velocity accumulation value, distance-velocity dispersion range and multi-channel distance-Doppler interframe difference to form continuous multi-frame dynamic feature vector sequences.
The identification is achieved using a multi-tier perceptron.
For ease of understanding, a specific embodiment of step S1 is provided as exemplified below:
s11, installing a millimeter wave radar in a locomotive cab, electrifying to generate a linear frequency modulation continuous wave signal with a certain bandwidth and pulse width:
Figure RE-GDA0003195243510000131
wherein the frequency-modulated starting frequency f of the transmitted signalcIs 24GHz or more, preferably 60GHz, and pulse duration Tc40us, bandwidth B4 GHz, and frequency modulation rate (slope) S100 MHz/us.
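As an aside, a numpy sketch of the baseband chirp of step S11; the sampling rate FS is an assumption, and the carrier f_c is applied in the analog up-conversion of step S12:

```python
import numpy as np

T_C = 40e-6          # pulse duration: 40 us
B = 4e9              # sweep bandwidth: 4 GHz
S = B / T_C          # FM rate (slope): 100 MHz/us
FS = 5e9             # assumed complex sampling rate (>= B)

t = np.arange(0.0, T_C, 1.0 / FS)
chirp = np.exp(1j * np.pi * S * t ** 2)   # linear FM: phase 2*pi*(S/2)*t^2
```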
And S12, converting the linear frequency modulation continuous millimeter wave signal into a millimeter wave transmitting signal through phase shifting and amplification, and transmitting the millimeter wave transmitting signal in a direction aligned with a locomotive driver through an antenna unit.
The phase shift is used for controlling the transmitting angle of the transmitting beam, and the amplification is used for ensuring that the transmitted signal has enough power.
And S13, processing the echo millimeter wave signal through low-noise amplification, down-conversion, intermediate frequency filtering, A/D sampling and the like, and then performing processing such as distance measurement, angle measurement, target detection, Doppler information extraction, phase information extraction and the like.
Specifically, the dynamic gesture recognition and monitoring of the locomotive driver are realized by extracting detailed characteristic information contained in a gesture action echo of the locomotive driver and classifying and recognizing the characteristic information. In order to fully utilize the advantage of millimeter wave high frequency and improve the accuracy of describing small-scale detail feature information in the gesture echo, the following multi-domain combined features are adopted for dynamic feature extraction. As shown in fig. 3, after the transmitted chirp continuous millimeter wave signal encounters a target gesture at a distance R, a part of the signal reflected back to the radar receiving antenna is received as:
$$s_R(t)=\alpha\exp\left\{j2\pi\left[f_c\left(t-\frac{2R}{c}\right)+\frac{S}{2}\left(t-\frac{2R}{c}\right)^{2}\right]\right\}$$

Correlating the received signal with the transmitted signal and performing low-pass filtering gives the baseband echo signal x(t), whose corresponding discrete digital signal is:

$$x(n)=x(nT_s)=\alpha\exp\left\{j2\pi\left(\frac{2SR}{c}\,nT_s+\frac{2R}{\lambda}\right)\right\}$$

where $T_s$ is the sampling period.
Time-frequency analysis of the signal gives the time-frequency spectrogram of the echo signal:

$$F(n,k)=\sum_{m}x(m)\,h(m-n)\,e^{-j2\pi km/M}$$

where h(m) is a window function that affects the time-domain and frequency-domain resolution of the time-frequency analysis.

The time-domain energy feature based on time-frequency analysis is:

$$E(n)=\sum_{k=0}^{M-1}\left|F(n,k)\right|^{2}$$

where M is the number of Doppler resolution cells.

The main Doppler frequency feature based on time-frequency analysis is:

$$f_D(n)=\underset{k}{\arg\max}\,\left|F(n,k)\right|$$

Arranging the baseband signals of multiple pulse repetition periods by range sampling time (fast time) versus pulse repetition period (slow time) gives the range domain-Doppler domain R-D map RD(r, v, T), where the index variables r, v and T denote range, velocity and frame time respectively.

The range profile feature based on the R-D map is:

$$P_r(r,T)=\sum_{v}RD(r,v,T)$$

Similarly, the micro-motion Doppler feature based on the R-D map is:

$$P_v(v,T)=\sum_{r}RD(r,v,T)$$

The velocity centroid feature based on the R-D map is:

$$v_c(T)=\frac{\sum_{r=r_0}^{r_1}\sum_{v}v\,RD(r,v,T)}{\sum_{r=r_0}^{r_1}\sum_{v}RD(r,v,T)}$$

where r_0 and r_1 are respectively the minimum and maximum ranges of the gesture motion distribution.

The velocity dispersion range feature based on the R-D map is:

$$\Delta v(T)=v_{\max}(T)-v_{\min}(T)$$

Similarly, the range dispersion range feature based on the R-D map is:

$$\Delta r=\max_{T}r^{*}(T)-\min_{T}r^{*}(T)$$

where $r^{*}(T)$ is the range corresponding to the range-Doppler cell with the largest echo energy in the T-th frame data, i.e.

$$r^{*}(T)=\underset{r}{\arg\max}\ \max_{v}RD(r,v,T)$$

The energy accumulation feature based on the R-D map is:

$$E(T)=\sum_{r}\sum_{v}RD(r,v,T)$$

The energy difference feature based on the R-D map is:

$$\Delta E(T)=E(T)-E(T-1)$$

The multichannel accumulation feature based on the R-D map is:

$$A(r,v,T)=\sum_{k=1}^{K}RD_k(r,v,T)$$

where RD_k(r, v, T) is the R-D map of the k-th receive channel and K is the total number of receive channels.

The multichannel difference feature based on the R-D map is:

$$M_{ij}(r,v,T)=RD_i(r,v,T)-RD_j(r,v,T)$$

The time domain-frequency domain features and the range domain-Doppler domain features are combined in time sequence to form a feature queue, which is sent to the multilayer perceptron for dynamic gesture classification and recognition; when the recognized gesture type belongs to the set gesture initial action, a delayed trigger signal is given to the laser radar.
And S2, controlling the laser radar to start to collect the depth image of the operator in the working state after receiving the trigger signal.
The depth image has n frames, where n is a positive integer greater than zero. So that all image data are processed under the same standard, normalization preprocessing is performed on each frame of depth image, normalizing the data format and improving recognition precision.
The neural network is a convolutional neural network with a 7-layer structure. Since the features of the preprocessed sample pictures are enhanced, a 7-layer convolutional neural network with a simple model structure and a small computation load can meet the recognition requirement, balances recognition efficiency and recognition accuracy well, and meets the requirements of railway locomotive driving applications. Further, the training method of the neural network is: collect depth images as sample pictures and label the sample pictures according to gesture type; then train the parameters of the neural network model with the sample pictures as model input data and the labels as model output data, the neural network model being a convolutional neural network.
Specifically, the implementation of step S2 is as follows:

Step S21: after the laser radar receives the delayed trigger signal sent by the millimeter wave radar, it starts the acquisition program. Suppose N frames of laser picture samples are collected, recorded as G_n (n = 1, 2, …, N), each frame of size I × J, i.e. the picture height dimension contains I pixels and the width dimension contains J pixels.

Step S22: normalize the n-th frame picture sample. Determine the maximum pixel value in the picture sample:

$$g_{\max}=\max_{i,j}G_n(i,j)$$

where G_n(i, j) denotes the pixel value at row i, column j of the n-th frame sample picture, i ∈ {1, 2, …, I}, j ∈ {1, 2, …, J}. Normalize the picture sample data by this maximum, computing so that the pixel values are distributed between 0 and 255:

$$G_{gn}(i,j)=\frac{G_n(i,j)}{g_{\max}}\times 255$$

Step S23: send the normalized samples obtained in step S22 into the trained convolutional neural network for recognition and output the recognition result. The structural design and parameter training of the neural network include the following steps:

Step S23.1: as shown in fig. 7, build the neural network structure. The neural network comprises 7 layers:

Layer 1, convolution layer: kernel size 5 × 5 × 1, 6 kinds of convolution kernels, kernel stride 1;

Layer 2, pooling layer: pooling kernel size 2 × 2 × 1, max pooling, pooling stride 2;

Layer 3, convolution layer: kernel size 5 × 5 × 6, 16 kinds of convolution kernels, kernel stride 1;

Layer 4, pooling layer: pooling kernel size 2 × 2 × 1, max pooling, pooling stride 2;

Layer 5, convolution layer: kernel size 5 × 5 × 16, 64 kinds of convolution kernels, kernel stride 1;

Layer 6, fully connected layer: all output feature values of layer 5 are spliced in order into a long column vector serving as the input nodes of layer 6, which form a fully connected layer with 84 output nodes;

Layer 7, output layer: assuming M gesture types in total, the 84 nodes output by layer 6 serve as input nodes and form a fully connected layer with M output nodes.
Step S23.2: preprocess the training samples, i.e. preprocess the collected sample data as in step S22, and label each sample according to its gesture type. A sample label is 0-1 vector data; the label data of the n-th frame picture training sample is recorded as b_n. Assuming M gesture types in total, b_n has size M × 1, with element values:

$$b_n(m)=\begin{cases}1, & \text{the }n\text{-th sample belongs to gesture type }m\\ 0, & \text{otherwise}\end{cases}$$
and S23.3, training parameters of the neural network model by taking the sample picture as input data of the neural network model and taking the sample label vector as output data of the model.
The method of the present invention further comprises a step S3 for improving the accuracy of the recognition output.

Step S3: record the recognition result of each n-th frame depth image and, for the same target, select the recognition result with the highest recognition rate as the output recognition result.
The specific implementation of step S3 is as follows:

Step S31: record the recognition result of the n-th frame depth image; if n < N, return to step S2 to recognize the (n+1)-th frame sample; if n = N, execute step S32.

Step S32: integrate the N frame recognition results and output the final recognition result by voting. Assuming M gesture types in total, the number of votes received by each gesture is recorded as T_m (m = 1, 2, …, M); the final result R is the class with the most votes:

$$R=\underset{m}{\arg\max}\ T_m$$
According to the invention, the millimeter wave radar monitors the gesture initial action, and sends a trigger signal to the laser radar, so that the resource waste and performance reduction of the laser radar caused by long-term standby are reduced.
As shown in fig. 8, the upper computer subsystem 2 includes a real-time monitoring module 10, an analysis and evaluation module 11, a configuration control module 12, a storage and calculation module 13, a link monitoring module 14, and a data transmission module 15.
The real-time monitoring module 10 monitors the operation condition, heartbeat condition and respiration condition of the locomotive crew member in real time.
The analysis and evaluation module 11 analyzes the normative of the operation behaviors such as the gesture actions, the call responses and the like of the locomotive crew members in real time and evaluates the health and fatigue conditions of the locomotive crew members.
Specifically, a deep learning algorithm comprehensively analyzes the incoming real-time heartbeat frequency and respiratory frequency data together with historical data, giving a highly reliable real-time evaluation of the driver's current health and fatigue state as well as a highly reliable prediction of them; the result is stored or/and output to external equipment through an interface. In one embodiment, the deep learning algorithm adopted is a Long Short-Term Memory (LSTM) network, and for each identified driver the identity information, historical evaluation results, historical measurement results and current measurement results are fully used to ensure the accuracy of the analysis and evaluation of the driver's health and fatigue condition.
A deep learning algorithm also comprehensively analyzes the incoming crew member's movement track, position and 3D point cloud data, giving an evaluation of the accuracy and compliance of the operator's working position and actions; the result is output to the storage unit 5 for storage or/and output to external equipment through the interface unit 4. In one embodiment, the deep learning algorithm adopted is the Fast R-CNN convolutional neural network, which analyzes and reasons over the operator's movement track, position and 3D point cloud data to obtain the evaluation of the accuracy and compliance of the operator's position and actions.
A deep learning algorithm likewise comprehensively analyzes the gesture action recognition results, their confidence values and historical data, giving a real-time evaluation of the completeness and compliance of the current gesture operation; the result is stored. In one embodiment, the deep learning algorithm adopted is a Long Short-Term Memory (LSTM) network, and for each identified crew member the historical evaluation results, historical measurement results and current measurement results are fully used to ensure the accuracy of the analysis and evaluation of the driver's gesture operation completeness and compliance. The same applies to call answering.
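For illustration, a minimal LSTM classifier over the vital-sign sequence is sketched below; the feature layout, hidden size and the 3 output states (normal / fatigued / abnormal) are assumptions, not taken from the patent text:

```python
import torch
import torch.nn as nn

class FatigueLSTM(nn.Module):
    """Illustrative LSTM over a sequence of per-interval measurements
    such as [heartbeat_frequency, breathing_frequency]."""
    def __init__(self, n_features=2, hidden=64, n_states=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_states)

    def forward(self, x):             # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # classify from the last time step
```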
The configuration control module 12 performs configuration management and operation control of each sensor of the sensor subsystem, and controls cooperative work of other modules of the upper computer subsystem.
The storage calculation module 13 performs intelligent processing on the data of the sensor subsystem, performs storage management on analysis and evaluation results, key evidence obtaining data and the like, and performs compression processing on data with large data volume.
When intelligently processing the voice data, the storage and calculation module 13 can simultaneously judge whether a crew operation call response exists and accurately identify it.
The link monitoring module 14 monitors the working condition of the communication link between each sensor of the sensor subsystem and the upper computer subsystem.
The data transmission module 15 transmits and dumps the analysis and evaluation result of the operation normative of the locomotive crew, the operation behavior evidence data of the locomotive crew, the fatigue and health evaluation result of the locomotive crew and the like to the monitoring center subsystem through the communication transmission subsystem.
As shown in fig. 9, the communication transmission subsystem 3 includes a real-time data sending and receiving module 16 and an overall data dump module 17.
The real-time data sending and receiving module 16 receives, in real time, the key data of the crew operation compliance analysis and evaluation results and the crew fatigue and health evaluation results from the upper computer subsystem 2, and sends them to the monitoring center subsystem 4 in real time.
The overall data dump module 17 transfers all data related to the crew operation process from the upper computer subsystem 2 and stores it in the monitoring center subsystem 4.
As shown in fig. 10, the monitoring center subsystem 4 includes a comprehensive monitoring module 18, a post analysis module 19, and a data center module 20.
The comprehensive monitoring module 18 displays in real time, within the management scope of the monitoring center subsystem 4, the number of online locomotives, the number of online crew members, the real-time operation execution of the online crew members, the total number of alarms for online locomotives and crew members, the overall working state of the online crew members, and a comprehensive analysis of their operation execution.
The post-analysis module 19 plays back crew members' historical data, searches alarms, evaluates crew operation execution and health conditions from the historical data, and comprehensively evaluates and analyzes all crew members within the management scope of the monitoring center subsystem 4.
The data center module 20 stores and manages the mass data accumulated in crew members' daily work and uses big data technology to analyze, compute and deeply mine this data for further use.
In summary, the invention takes an advanced miniature millimeter wave radar as the core sensor, combined with a voice input array and an auxiliary lidar. Through intelligent perception of the crew member's position, motion track, gesture actions, call-and-response and other operating behaviors in the cab, and through intelligent analysis and judgment of fatigue and health based on heartbeat/breathing-frequency detection together with locomotive information such as LKJ data, it comprehensively tracks the operating condition and physical state of the locomotive crew. The main innovations of the invention include: advanced detection technology whose operation is not affected or interfered with by the environment; real-time fine-grained identification and quantitative scoring of crew operating behaviors; both real-time monitoring with alarms and after-the-fact analysis and evidence collection; and all-round intelligent monitoring of the safe operation of locomotive crew members. The system can be applied in a variety of locomotive cabs to routinely evaluate and check the accuracy and normativity of crew members' standard operations in daily work, thereby encouraging crew members to strengthen professional learning and training, raising working standards, and improving locomotive driving safety.
It should be noted that:
the method used in the present invention can be converted into program steps and devices storable in a computer storage medium and executed by being called by a controller. The devices are to be understood as functional modules implemented by a computer program; the computer program can be stored in a computer-readable storage medium storing one or more programs that, when executed by a processor, implement the method.
The present invention also provides an electronic device, wherein the electronic device includes:
a processor; and,
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the method described above.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (10)

1. A railway locomotive driving safety assistance system based on intelligent perception, characterized in that it comprises:
a sensor subsystem for collecting operation data of a locomotive crew member, the operation data comprising the crew member's gesture actions, call-and-response voice, position and movement track in the cab, breathing frequency and heartbeat frequency;
an upper computer subsystem for scoring the normativity of the crew member's operating behaviors with a deep learning algorithm according to the operation data, evaluating the crew member's fatigue state and health condition, and raising alarms according to the scoring and evaluation results;
a communication transmission subsystem for uploading the operation data and the scoring and evaluation results to a monitoring center subsystem; and
the monitoring center subsystem, for displaying the on-line status of all locomotives and the operating status of their crew members within the monitoring center's management scope.
2. The railway locomotive driving safety assistance system of claim 1, wherein the sensor subsystem comprises a millimeter wave radar, and the crew member's position and movement track in the cab, breathing frequency and heartbeat frequency are collected with the millimeter wave radar.
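The claim leaves the signal processing open; a common approach, sketched below under assumed parameters, is to track the phase of the range bin containing the seated crew member and separate breathing and heartbeat by spectral peak picking (the frame rate and band edges are illustrative assumptions, not values from the patent):

```python
import numpy as np

def vital_rates(phase: np.ndarray, fs: float = 20.0) -> tuple:
    """phase: unwrapped chest-displacement phase over time; fs: frame rate (Hz).
    Returns (breaths_per_minute, beats_per_minute)."""
    spec = np.abs(np.fft.rfft(phase - phase.mean()))
    freqs = np.fft.rfftfreq(len(phase), d=1.0 / fs)

    def peak_in(lo: float, hi: float) -> float:
        band = (freqs >= lo) & (freqs <= hi)
        return float(freqs[band][spec[band].argmax()])  # strongest line in band

    # Breathing ~0.1-0.5 Hz, heartbeat ~0.8-2.5 Hz (typical resting ranges).
    return peak_in(0.1, 0.5) * 60.0, peak_in(0.8, 2.5) * 60.0
```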
3. The railway locomotive driving safety assistance system of claim 2, wherein the sensor subsystem further comprises a lidar, and the crew member's gesture actions are acquired with the millimeter wave radar and/or the lidar.
4. The railway locomotive driving safety assistance system of claim 3, wherein the millimeter-wave-radar-based method for acquiring gesture actions comprises:
performing feature extraction and classification recognition on the received millimeter wave electromagnetic signals to determine the action-sequence type of the current dynamic gesture, wherein the features comprise one or more of range, azimuth angle, pitch angle and Doppler frequency, the feature extraction is implemented with a multi-domain feature engineering technique based on the range domain-Doppler domain and the time domain-frequency domain, and/or the classification recognition is implemented with a multilayer perceptron.
5. The railway locomotive driving safety assistance system of claim 4, wherein the multi-domain feature engineering technique specifically comprises:
a time domain-frequency domain combined feature composed of the envelope frequency, peak frequency, frequency-component duration and peak-frequency dynamic range;
a range domain-Doppler domain combined feature composed of scattering-center range-Doppler tracks, range-velocity accumulation values, range-velocity dispersion ranges and multi-channel range-Doppler inter-frame differences;
and classifying and combining the features into a dynamic feature-vector sequence over consecutive frames (see the sketch following this claim).
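As a minimal sketch of the range-Doppler part of this feature engineering, assuming an FMCW frame of complex IF samples (the frame shape, window choice and exact feature set are illustrative, not the patent's):

```python
import numpy as np

def range_doppler_features(frame: np.ndarray) -> dict:
    """frame: complex IF samples, shape (n_chirps, n_samples_per_chirp)."""
    win = np.hanning(frame.shape[1])
    rng = np.fft.fft(frame * win, axis=1)              # range FFT (fast time)
    rd = np.fft.fftshift(np.fft.fft(rng, axis=0), 0)   # Doppler FFT (slow time)
    mag = np.abs(rd)
    peak = np.unravel_index(mag.argmax(), mag.shape)
    return {
        "peak_doppler_bin": int(peak[0]),    # dominant hand velocity
        "peak_range_bin": int(peak[1]),      # dominant hand range
        "rv_accumulation": float(mag.sum()),  # range-velocity accumulation value
        "rv_spread": float((mag > 0.5 * mag.max()).sum()),  # dispersion extent
    }

# A dynamic gesture is then represented by the sequence of such per-frame
# feature vectors over consecutive frames, fed to the classifier.
```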
6. The railway locomotive driving safety assistance system of claim 3, wherein the lidar-based method for acquiring gesture actions comprises:
collecting, with the lidar, a depth image of the operator in the working state, the depth image being a laser image with the workplace background removed;
and feeding the depth image into a trained neural network for human gesture recognition, detecting, and outputting a recognition result.
7. The railway locomotive driving safety assistance system of claim 6, wherein the lidar-based method for acquiring gesture actions further comprises:
acquiring, with the same lidar, a pure background image of the operator's workplace and real-time images of the operator in the working state, the real-time images and the pure background image having the same pixel dimensions;
cancelling the pure background image from each real-time image to obtain the depth image;
and normalizing each frame of real-time image before cancellation and/or each frame of depth image after cancellation, wherein there are n frames of real-time images, n being a positive integer greater than zero (see the sketch following this claim).
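A minimal sketch of this cancellation and normalization step, assuming stacked depth frames as arrays (the array shapes and min-max normalization scheme are illustrative assumptions):

```python
import numpy as np

def cancel_background(frames: np.ndarray, background: np.ndarray) -> np.ndarray:
    """frames: (n, H, W) real-time depth frames; background: (H, W) pure
    background image captured by the same lidar. Returns normalized depth images."""
    assert frames.shape[1:] == background.shape, "same pixel dimensions required"
    depth = np.abs(frames.astype(np.float64) - background)  # cancellation
    # Per-frame min-max normalization to [0, 1] after cancellation.
    mins = depth.min(axis=(1, 2), keepdims=True)
    maxs = depth.max(axis=(1, 2), keepdims=True)
    return (depth - mins) / np.maximum(maxs - mins, 1e-9)
```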
8. The railway locomotive driving safety assistance system of claim 7, wherein the method of feeding the depth image into the neural network for recognition comprises:
taking the pre-cancellation real-time image together with the post-cancellation depth image as the input data of the neural network model, and taking the human gesture recognition result as the model's output data.
9. The railway locomotive driving safety assistance system of claim 3, 4 or 5, wherein the method for acquiring gesture actions with both the millimeter wave radar and the lidar comprises (see the sketch following this claim):
controlling the millimeter wave radar to send a trigger signal to the lidar when the millimeter wave radar detects the operator's initial gesture motion;
controlling the lidar to begin collecting depth images of the operator in the working state after receiving the trigger signal;
and feeding the depth images into a trained neural network for human gesture recognition, detecting, and outputting a recognition result.
10. The railway locomotive driving safety assistance system of claim 1, wherein the sensor subsystem comprises an LKJ docking module for collecting the locomotive's LKJ signals for the upper computer subsystem, and the upper computer subsystem performs the scoring and evaluation with reference to the locomotive's LKJ signals.
CN202110602504.0A 2021-05-31 2021-05-31 Railway locomotive driving safety auxiliary system based on intelligent sensing Pending CN113420961A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110602504.0A CN113420961A (en) 2021-05-31 2021-05-31 Railway locomotive driving safety auxiliary system based on intelligent sensing

Publications (1)

Publication Number Publication Date
CN113420961A true CN113420961A (en) 2021-09-21

Family

ID=77713407

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110602504.0A Pending CN113420961A (en) 2021-05-31 2021-05-31 Railway locomotive driving safety auxiliary system based on intelligent sensing

Country Status (1)

Country Link
CN (1) CN113420961A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116665334A (en) * 2023-07-28 2023-08-29 倍施特科技(集团)股份有限公司 Face recognition-based driver self-service reporting method and device

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140324888A1 (en) * 2011-12-09 2014-10-30 Nokia Corporation Method and Apparatus for Identifying a Gesture Based Upon Fusion of Multiple Sensor Signals
CN107126224A (en) * 2017-06-20 2017-09-05 中南大学 A kind of real-time monitoring of track train driver status based on Kinect and method for early warning and system
CN108958490A (en) * 2018-07-24 2018-12-07 Oppo(重庆)智能科技有限公司 Electronic device and its gesture identification method, computer readable storage medium
CN109189019A (en) * 2018-09-07 2019-01-11 辽宁奇辉电子系统工程有限公司 A kind of engine drivers in locomotive depot value multiplies standardization monitoring system
CN110450784A (en) * 2019-07-30 2019-11-15 深圳普捷利科技有限公司 A kind of driver status monitoring method and system based on fmcw radar
CN110717988A (en) * 2018-07-12 2020-01-21 唐义 Virtual fitting system and method adopting three-dimensional laser radar
CN111062240A (en) * 2019-10-16 2020-04-24 中国平安财产保险股份有限公司 Method and device for monitoring automobile driving safety, computer equipment and storage medium
CN111968341A (en) * 2020-08-21 2020-11-20 无锡威孚高科技集团股份有限公司 Fatigue driving detection system and method
CN112158151A (en) * 2020-10-08 2021-01-01 南昌智能新能源汽车研究院 Automatic driving automobile gesture control system and method based on 5G network
CN112597965A (en) * 2021-01-05 2021-04-02 株洲中车时代电气股份有限公司 Driving behavior recognition method and device and computer readable storage medium
CN112782662A (en) * 2021-01-30 2021-05-11 湖南森鹰智造科技有限公司 Dynamic gesture recognition monitoring facilities
US20210141092A1 (en) * 2019-11-07 2021-05-13 Nio Usa, Inc. Scene perception using coherent doppler lidar

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
N. Susithra et al.: "Deep Learning-Based Activity Monitoring for Smart Environment Using Radar", Springer *
Zhou Zaoli: "Research on Target Detection Methods Based on LFMCW Millimeter Wave Radar", China Master's Theses Full-text Database - Information Science and Technology *
Guo Peng et al.: "Research on Gesture Recognition Based on Depth Images", Foreign Electronic Measurement Technology *

Similar Documents

Publication Publication Date Title
CN112998668B (en) Millimeter wave-based non-contact far-field multi-human-body respiration heart rate monitoring method
US20180313950A1 (en) CNN-Based Remote Locating and Tracking of Individuals Through Walls
CN112686094B (en) Non-contact identity recognition method and system based on millimeter wave radar
EP2428921A1 (en) Predictive and adaptive wide area surveillance
CN116027324B (en) Fall detection method and device based on millimeter wave radar and millimeter wave radar equipment
CN116602663B (en) Intelligent monitoring method and system based on millimeter wave radar
CN101282266A (en) Intelligent instruction-preventing microwave radar wireless sensor network
Li et al. Human behavior recognition using range-velocity-time points
CN113447905A (en) Double-millimeter-wave radar human body falling detection device and detection method
CN113420961A (en) Railway locomotive driving safety auxiliary system based on intelligent sensing
CN114814832A (en) Millimeter wave radar-based real-time monitoring system and method for human body falling behavior
CN112782662A (en) Dynamic gesture recognition monitoring facilities
CN114818916A (en) Road target classification method based on millimeter wave radar multi-frame point cloud sequence
US20220180619A1 (en) Object recognition method and apparatus
CN115755015A (en) Method, device, equipment and medium for detecting living body in cabin
CN113960587A (en) Millimeter wave radar multi-target tracking method based on category information feedback
CN113420610A (en) Human body gesture recognition method based on fusion of millimeter waves and laser radar, electronic device and storage medium
CN116849643A (en) Method for detecting falling of wearable equipment based on neural network
CN113126086B (en) Life detection radar weak target detection method based on state prediction accumulation
Kaushik Radar as a Security Measure-Real Time Neural Model based Human Detection and Behaviour Classification
CN115982620A (en) Millimeter wave radar human body falling behavior identification method and system based on multi-class three-dimensional features and Transformer
CN116008982A (en) Radar target identification method based on trans-scale feature aggregation network
CN116125406A (en) Method for evaluating performance of space-based surveillance radar based on SNR estimation and track-pointing track report
Pearce et al. Regional trajectory analysis through multi-person tracking with mmWave radar
CN115236749A (en) Living body detection method, apparatus and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210921)