CN115984787A - Intelligent vehicle-mounted real-time alarm method for industrial brain public transport - Google Patents


Info

Publication number
CN115984787A
CN115984787A
Authority
CN
China
Prior art keywords
vehicle
monitoring
driver
bus
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310264493.9A
Other languages
Chinese (zh)
Inventor
王德龙
张烁
续敏
张灵敏
李庆龙
赵美楠
Current Assignee
Qilu Yunshang Digital Technology Co ltd
Original Assignee
Qilu Yunshang Digital Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Qilu Yunshang Digital Technology Co ltd filed Critical Qilu Yunshang Digital Technology Co ltd
Priority to CN202310264493.9A priority Critical patent/CN115984787A/en
Publication of CN115984787A publication Critical patent/CN115984787A/en
Pending legal-status Critical Current


Abstract

The invention relates to the technical field of vehicle alarms, and in particular to an intelligent vehicle-mounted real-time alarm method for industrial-brain public transport, which comprises the following steps: S1: acquire the running information of the bus through the vehicle-mounted networking system, and perform monitoring and identification through video inside and outside the bus; S2: extract the monitoring-information images and preprocess them; S3: monitor in real time the driver's driving state and the passenger behavior extracted from the in-bus monitoring information, through a state monitoring algorithm and a behavior monitoring algorithm respectively; S4: the control center receives the running information and monitoring information of the bus and raises an alarm for any abnormal information found in the monitoring. The invention digitally monitors the vehicle running and monitoring information, monitors the driver's driving concentration and driving posture through the state monitoring algorithm and the behavior monitoring algorithm respectively, and monitors possible suspicious passenger behavior in the vehicle through the behavior monitoring algorithm, thereby reducing the vehicle traffic-accident rate and improving the personal and property safety of passengers.

Description

Intelligent vehicle-mounted real-time alarm method for industrial brain public transport
Technical Field
The invention relates to the technical field of vehicle alarms, and in particular to an intelligent vehicle-mounted real-time alarm method for industrial-brain public transport.
Background
The industrial brain enables development based on data resources and digital technology. Buses are now widely used in cities around the world, and as the number of buses increases, so does the frequency of traffic accidents. To reduce or avoid dangerous situations such as traffic accidents, the condition of the bus needs to be monitored in a data-driven manner.
A bus must travel on crowded urban road sections with frequent starts and stops and many people around it, so monitoring the driver's state is important for preventing traffic accidents. The prior patent CN115311819A discloses an intelligent vehicle-mounted real-time warning system and method for a smart bus that reduces the rate of bus traffic accidents only by monitoring the driver's fatigue state; however, most traffic accidents in recent years have been caused by distracted driving or by passenger interference. The present invention therefore provides an intelligent vehicle-mounted real-time alarm method for industrial-brain public transport that monitors the driver's driving concentration and driving posture separately and monitors passenger behavior inside the bus, improving the personal and property safety of passengers while reducing the rate of bus traffic accidents.
Disclosure of Invention
The invention aims to remedy the defects described in the background by providing an intelligent vehicle-mounted real-time alarm method for industrial-brain public transport.
The technical scheme adopted by the invention is as follows:
the intelligent bus-mounted real-time warning method for the brain of the industry comprises the following steps:
s1: the method comprises the steps that running information of the bus is obtained through a vehicle-mounted networking system, and monitoring and identification are carried out through video inside and outside the bus;
s2: extracting a monitoring information image and preprocessing the image;
s3: monitoring the driving state of a driver and the behavior of passengers extracted from monitoring information in the bus in real time through a state monitoring algorithm and a behavior monitoring algorithm respectively;
s4: the control center receives the running information and the monitoring information of the bus and gives an alarm for the abnormal information existing in the monitoring.
As a preferred technical scheme of the invention: the preprocessing in S2 comprises image enhancement, image normalization, and image detection and cropping.
As a preferred technical scheme of the invention: the driving state of the driver in S3 comprises driving concentration and driving posture, which are monitored through the state monitoring algorithm and the behavior monitoring algorithm respectively.
As a preferred technical scheme of the invention: in the state monitoring algorithm, the driver's gaze-concentration regions are divided by a K-means algorithm into the left side of the vehicle, the front of the vehicle, and the right side of the vehicle, where the left side includes the left rearview mirror and the left lane, the front includes the far front, the near front, and the in-vehicle instruments, and the right side includes the right rearview mirror and the right lane.
As a preferred technical scheme of the invention: the steps of the K-means algorithm are as follows:
s3.1: inputting the number 3 of the clusters on the left side of the vehicle, the clusters in front of the vehicle and the clusters on the right side of the vehicle and a data set containing n objects, and determining an initial clustering center point for each cluster;
s3.2: distributing the data in the data set to the nearest cluster according to the Euclidean distance principle;
s3.3: using the sample data mean value in each cluster as a clustering center;
s3.4: and repeating the steps S3.2 and S3.3 until the algorithm is converged, and outputting 3 result clusters.
As a preferred technical scheme of the invention: in the state monitoring algorithm, the gaze-concentration regions for the driver's gaze falling on the left side, front, and right side of the vehicle are divided by the K-means algorithm, and the driver's gaze activity β is calculated (the formula is reproduced only as an image in the source), where f is the number of image frames extracted per second from the surveillance video, T is the duration of a time window, A_i is the annotated gaze region of the i-th frame, and C is the cluster-center value of the single time window. A threshold on the gaze activity β is set to judge the driver's concentration: when the gaze activity is below the threshold, the driver is judged to be in a distracted state, and the alarm reminds the driver whenever a distracted driving state is detected.
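Since the activity formula itself appears only as an image in the source, the following sketch assumes one plausible definition: the fraction of consecutive extracted frames whose annotated gaze region A_i changes, over a window of f·T frames. The threshold value is likewise an assumed placeholder:

```python
def gaze_activity(regions, f, T):
    """Hedged sketch of the gaze-activity index beta. Assumed proxy: the
    fraction of consecutive frame pairs whose annotated region A_i changes,
    over a window of f*T frames (f frames extracted per second, window of
    T seconds). The patent's actual formula is not reproduced in the source."""
    frames = regions[:f * T]
    if len(frames) < 2:
        return 0.0
    changes = sum(1 for a, b in zip(frames, frames[1:]) if a != b)
    return changes / (len(frames) - 1)

def is_distracted(regions, f, T, threshold=0.1):
    """Driver is judged distracted when gaze activity falls below the
    threshold; 0.1 is an illustrative value, not from the patent."""
    return gaze_activity(regions, f, T) < threshold
```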
As a preferred technical scheme of the invention: in the behavior monitoring algorithm, the motion features of the joint points in a skeleton sequence are used as the index for selecting abnormal-behavior key frames, and it is judged whether the driver's driving posture or the behavior of people in the vehicle is suspicious. A spatio-temporal graph G = (V, E) is constructed on the skeleton sequence of the key frames. The node set V = {v_ti | t = 1, …, T; i = 1, …, N} contains all key points of the skeleton sequence, where t is the time at which each video frame is captured, T is the number of frames in the captured video sequence, and v_tj denotes the j-th monitored joint. The edge set E consists of two subsets: the first subset, the intra-frame edge set E_F = {(v_ti, v_tj) | (i, j) ∈ H}, contains the connections inside a single-frame human skeleton, where H is the set of connections between human joints; the second subset, the inter-frame edge set E_S = {(v_ti, v_(t+1)i)}, connects the same key point in adjacent frames. The spatio-temporal graph convolution of the 9-layer hybrid network performs the graph convolution operation on the intra-frame edge set E_F and the inter-frame edge set E_S to generate a high-level feature map, which is input to a softmax classifier to identify and classify abnormal driver behaviors and suspicious passenger behaviors.
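The node and edge sets described above can be constructed mechanically. The joint names and skeleton links in this sketch are illustrative stand-ins, not taken from the patent:

```python
def build_st_graph(num_frames, joints, skeleton_links):
    """Sketch of the spatio-temporal graph G = (V, E): one node v_ti per
    joint i per frame t; intra-frame edges follow the human-skeleton
    connection set H (skeleton_links); inter-frame edges join the same
    joint in adjacent frames."""
    V = [(t, j) for t in range(num_frames) for j in joints]
    E_intra = [((t, i), (t, j)) for t in range(num_frames)
               for (i, j) in skeleton_links]                 # subset E_F
    E_inter = [((t, j), (t + 1, j)) for t in range(num_frames - 1)
               for j in joints]                              # subset E_S
    return V, E_intra, E_inter
```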
As a preferred technical scheme of the invention: the hybrid network is constructed from a convolutional layer, a temporal graph convolutional network, and a second convolutional layer (the layer parameters are reproduced only as images in the source); it performs the graph convolution operation on the intra-frame edge set and the inter-frame edge set and extracts the spatial features and temporal-context features of the skeleton sequence.
As a preferred technical scheme of the invention: in the temporal graph convolutional network of the hybrid network, the convolution process is as follows:

Y = σ( Λ^(-1/2) ((A + I) ⊗ M) Λ^(-1/2) X W )

where A is the adjacency matrix representing the in-body connections of the key points in the single-frame skeleton, I is the identity matrix representing the connections between video frames, Λ is the degree matrix, σ is the activation function of the temporal graph convolutional network, X is the input feature data representing the skeleton joint-point data, W and M are the parameters to be trained, and Y is the output data;
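A minimal NumPy sketch of one such normalized graph-convolution step, under the assumption that the patent follows the standard ST-GCN form; σ is taken as ReLU (the patent names an activation function but not which one), and the trainable edge-importance mask M is omitted for brevity:

```python
import numpy as np

def st_graph_conv(X, A, W):
    """One normalized graph-convolution step in the assumed ST-GCN form
    Y = sigma(Lambda^(-1/2) (A + I) Lambda^(-1/2) X W). ReLU stands in for
    the unspecified activation; the mask M is left out of this sketch."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                     # add self/inter-frame links I
    deg = A_hat.sum(axis=1)                   # degree matrix Lambda (diagonal)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    Y = D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W
    return np.maximum(Y, 0.0)                 # sigma = ReLU (assumed)
```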
The following calculation is performed for each mixing unit:

x_{l+1} = σ( F(x_l, W_l) )

where x_l is the input of the l-th layer, F denotes the mixing unit, W_l denotes its training-parameter set, σ is the activation function of the hybrid network, and x_{l+1} is the output of the l-th layer.
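The per-unit recurrence x_{l+1} = σ(F(x_l, W_l)) amounts to a simple loop over the 9 units. The mixing unit F below is an illustrative linear map, not the patent's actual layer, and ReLU again stands in for the unnamed activation:

```python
import numpy as np

def run_hybrid_network(x, weights, unit):
    """Layer-by-layer sketch of the hybrid network: each mixing unit
    computes x_{l+1} = sigma(F(x_l, W_l)). `unit` and `weights` are
    illustrative stand-ins for the patent's layers."""
    for W in weights:                          # one W_l per mixing unit
        x = np.maximum(unit(x, W), 0.0)        # sigma = ReLU (assumed)
    return x

def linear_unit(x, W):
    """Illustrative mixing unit: a plain linear map F(x, W) = x @ W."""
    return x @ W
```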
As a preferred technical scheme of the invention: in S4, the control center raises an alarm for abnormal vehicle information according to the vehicle running information uploaded by the vehicle-mounted networking system, and monitors and raises alarms for abnormal driver postures and suspicious passenger behavior according to the monitoring information.
Compared with the prior art, the intelligent vehicle-mounted real-time alarm method for industrial-brain public transport has the following beneficial effects:
the vehicle-mounted networking system digitally monitors vehicle operation; by digitally monitoring the surveillance images, the driver's driving concentration and driving posture are monitored through the state monitoring algorithm and the behavior monitoring algorithm respectively, and possible suspicious passenger behavior inside the vehicle is monitored through the behavior monitoring algorithm, thereby reducing the vehicle traffic-accident rate and improving the personal and property safety of passengers.
Drawings
FIG. 1 is a flow chart of a method of a preferred embodiment of the present invention.
Detailed Description
It should be noted that, in the case of no conflict, the embodiments and the features in the embodiments may be combined with each other, and the technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a preferred embodiment of the present invention provides an industrial brain bus intelligent vehicle-mounted real-time warning method, which includes the following steps:
s1: the method comprises the steps that running information of the bus is obtained through a vehicle-mounted networking system, and monitoring and identification are carried out through video inside and outside the bus;
s2: extracting a monitoring information image and preprocessing the image;
s3: monitoring the driving state of a driver and the behavior of passengers extracted from monitoring information in the bus in real time through a state monitoring algorithm and a behavior monitoring algorithm respectively;
s4: the control center receives the running information and the monitoring information of the bus and gives an alarm for the abnormal information existing in the monitoring.
The preprocessing in S2 comprises image enhancement, image normalization, and image detection and cropping.
The driving state of the driver in S3 comprises driving concentration and driving posture, which are monitored through the state monitoring algorithm and the behavior monitoring algorithm respectively.
In the state monitoring algorithm, the driver's gaze-concentration regions are divided by the K-means algorithm into the left side of the vehicle, the front of the vehicle, and the right side of the vehicle: the left side comprises the left rearview mirror and left lane, the front comprises the far front, the near front, and the in-vehicle instruments, and the right side comprises the right rearview mirror and right lane.
The steps of the K-means algorithm are as follows:
s3.1: inputting the number 3 of the clusters on the left side of the vehicle, the clusters in front of the vehicle and the clusters on the right side of the vehicle and a data set containing n objects, and determining an initial clustering center point for each cluster; in this embodiment, the data set samples of n objects are respectively
Figure SMS_38
Each sample belongs to one of the clusters on the left side of the vehicle, the front side of the vehicle, or the right side of the vehicle, as required by the driver's gaze concentration area. Is provided with>
Figure SMS_39
A cluster, in this embodiment->
Figure SMS_40
. Device for combining or screening>
Figure SMS_41
Represent sets of samples belonging to clusters on the left side of the vehicle, the front side of the vehicle, or the right side of the vehicle, respectively, namely:
Figure SMS_42
s3.2: distributing the data in the data set to the nearest cluster according to the Euclidean distance principle;
s3.3: using the sample data mean value in each cluster as a clustering center, wherein the initial clustering center points are respectively:
Figure SMS_43
s3.4: and repeating the steps S3.2 and S3.3 until the algorithm is converged, and outputting 3 result clusters. Each sample can be obtained for l iterations
Figure SMS_44
Associated result cluster->
Figure SMS_45
Figure SMS_46
In the state monitoring algorithm, a sight line concentration area when the sight line of a driver falls on the left side of the vehicle, the front side of the vehicle and the right side of the vehicle is divided through a K-means algorithm;
wherein
Figure SMS_47
Representing the square of the euclidean distance. Then, a new cluster center point can be determined by calculating the mean of the samples in each cluster:
Figure SMS_48
the iteration is repeated until the algorithm converges. Finally, will obtain
Figure SMS_49
In a cluster, i.e.>
Figure SMS_50
Each cluster has a cluster center point->
Figure SMS_51
For a cluster-like center value of a single time window, then->
Figure SMS_52
. These clusters correspond to the division of the area where the driver's sight line is concentrated. Finally calculating the liveness of the driver sight line->
Figure SMS_53
Figure SMS_54
Wherein, the first and the second end of the pipe are connected with each other,
Figure SMS_55
for the number of image frames extracted per second from a surveillance video, a value is selected based on the number of image frames extracted per second>
Figure SMS_56
Is the time of a time window>
Figure SMS_57
Is a first->
Figure SMS_58
Annotated region of frame, <' > or>
Figure SMS_59
Is a cluster-like center value of a single time window, < >>
Figure SMS_60
(ii) a Dividing line of sight activity->
Figure SMS_61
The threshold value of (2) is used for judging the driver concentration, when the sight line activity is smaller than the threshold value, the driver is judged to be in a distracted state, and the driver is reminded through the alarm when the driver is monitored to be in the distracted driving state.
Specifically, to ensure judgment precision, the size of the time window is determined from the input feature value by a decision-tree algorithm, where the feature is the average annotated-region area mean_area, i.e. the average area of the annotated region within a time window. The procedure is as follows:
1. Set the initial time-window size to t_0.
2. For each time window, compute β and mean_area.
3. Use the decision-tree algorithm to decide whether the window size needs adjustment:
a. if β < β_min, increase t by a percentage p_1;
b. if β > β_max, decrease t by a percentage p_2;
c. if β_min ≤ β ≤ β_max, leave t unchanged.
4. Re-evaluate the driver's gaze activity β using the new window size.
5. Repeat steps 2-4 until the preset precision is reached.
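The adjustment rule in step 3 can be sketched directly; the threshold values β_min, β_max and percentages p_1, p_2 are illustrative, as the patent leaves their values unspecified:

```python
def adjust_window(t, beta, beta_min=0.1, beta_max=0.8, p1=0.25, p2=0.25):
    """Sketch of the decision-tree window adjustment: grow the window by
    percentage p1 when activity beta is below beta_min, shrink it by p2
    when beta is above beta_max, otherwise keep it. All numeric defaults
    are illustrative placeholders."""
    if beta < beta_min:
        return t * (1.0 + p1)      # step 3a: increase t
    if beta > beta_max:
        return t * (1.0 - p2)      # step 3b: decrease t
    return t                       # step 3c: leave t unchanged
```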
Secondly, in the behavior monitoring algorithm, the motion features of the joint points in a skeleton sequence are used as the index for selecting abnormal-behavior key frames, and it is judged whether the driver's driving posture or the behavior of people in the vehicle is suspicious. A spatio-temporal graph G = (V, E) is constructed on the skeleton sequence of the key frames. The node set V = {v_ti | t = 1, …, T; i = 1, …, N} contains all key points of the skeleton sequence, where t is the time at which each video frame is captured, T is the number of frames in the captured video sequence, and v_tj denotes the j-th monitored joint. The edge set E consists of two subsets: the first subset, the intra-frame edge set E_F = {(v_ti, v_tj) | (i, j) ∈ H}, contains the connections inside a single-frame human skeleton, where H is the set of connections between human joints; the second subset, the inter-frame edge set E_S = {(v_ti, v_(t+1)i)}, connects the same key point in adjacent frames. The spatio-temporal graph convolution of the 9-layer hybrid network performs the graph convolution operation on the intra-frame edge set E_F and the inter-frame edge set E_S to generate a high-level feature map, which is input to a softmax classifier to identify and classify abnormal driver behaviors and suspicious passenger behaviors.
In the hybrid network, a hybrid network constructed from a convolutional layer, a temporal graph convolutional network, and a second convolutional layer (the layer parameters are reproduced only as images in the source) performs the graph convolution operation on the intra-frame edge set and the inter-frame edge set and extracts the spatial features and temporal-context features of the skeleton sequence.
In the temporal graph convolutional network of the hybrid network, the convolution process is as follows:

Y = σ( Λ^(-1/2) ((A + I) ⊗ M) Λ^(-1/2) X W )

where A is the adjacency matrix representing the in-body connections of the key points in the single-frame skeleton, I is the identity matrix representing the connections between video frames, Λ is the degree matrix, σ is the activation function of the temporal graph convolutional network, X is the input feature data representing the skeleton joint-point data, W and M are the parameters to be trained, and Y is the output data;
the following calculations were performed for each mixing unit:
Figure SMS_84
wherein the content of the first and second substances,
Figure SMS_85
is the first->
Figure SMS_86
Floor input,. Or>
Figure SMS_87
Represents a mixing unit, <' > or>
Figure SMS_88
Represents a training parameter set, is selected>
Figure SMS_89
For an activation function of a mixing network>
Figure SMS_90
Is the first->
Figure SMS_91
And (6) outputting the layers.
In S4, the control center raises an alarm for abnormal vehicle information according to the vehicle running information uploaded by the vehicle-mounted networking system, and monitors and raises alarms for abnormal driver postures and suspicious passenger behavior according to the monitoring information.
In this embodiment, the basic information of the bus is acquired through the vehicle-mounted networking system and the running state of the bus is monitored in real time; an alarm is raised when the bus is in an abnormal running state. For example, if the bus stops for a long time on a road section that is neither congested nor at a traffic-light intersection, an abnormality alarm is sent to the monitoring center, and a worker can obtain the real-time situation by extracting video-image information from the interior and exterior monitoring of the bus and make contact to handle it.
While the bus is running, images inside and outside the bus are extracted from the interior and exterior monitoring videos, and the extracted image information undergoes preprocessing operations such as enhancement, normalization, and detection cropping. The K-means algorithm is applied to the left side of the vehicle (the left rearview mirror and left lane), the right side of the vehicle (the right rearview mirror and right lane), and the front of the vehicle (the far front, the near front, and the in-vehicle instruments). Based on dynamic clustering of the gaze over the left, right, and front of the vehicle, three cluster centers are obtained and the range of each center is divided, so that each gaze sample is assigned to the left, front, or right region. For example, one frame is extracted per second over a three-second window, and the driver's gaze activity β is calculated (the formula is reproduced only as an image in the source), where f is the number of image frames extracted per second from the surveillance video, T is the duration of the time window, A_i is the annotated gaze region of the i-th frame, and C is the cluster-center value of the single time window. A gaze-activity threshold is divided, and when the driver's gaze activity is below the threshold, the driver is judged to be in a distracted driving state, such as fatigued or inattentive driving. When the distracted state is detected, the alarm reminds the driver to concentrate on driving; for repeated distracted driving, such as five distractions within three minutes, an abnormal-driving report is automatically sent to the control center.
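The escalation rule described above (reminder on each distraction, report after e.g. five distractions within three minutes) can be sketched with a sliding-window counter; function and label names here are illustrative:

```python
from collections import deque

def make_escalator(max_events=5, window_s=180.0):
    """Sketch of the embodiment's escalation rule: each distraction event
    triggers the on-board reminder, and once max_events occur within a
    window_s-second sliding window (five within three minutes in the
    example), abnormal driving is reported to the control center."""
    events = deque()

    def on_distraction(timestamp):
        events.append(timestamp)
        while events and timestamp - events[0] > window_s:
            events.popleft()               # drop events outside the window
        if len(events) >= max_events:
            return "report_control_center"
        return "remind_driver"

    return on_distraction
```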
The driver's driving posture and in-vehicle passenger behavior are monitored separately by the behavior monitoring algorithm. With the skeleton-sequence joint points as the identification index for abnormal postures and abnormal behaviors, a spatio-temporal graph G = (V, E) is constructed on the human skeleton sequence. The node set V contains all key points of the skeleton sequence; the edge set E consists of two subsets, where the intra-frame edge set E_F = {(v_ti, v_tj) | (i, j) ∈ H} contains the connections inside a single-frame human skeleton, H being the set of connections between human joints, and the inter-frame edge set E_S = {(v_ti, v_(t+1)i)} contains the connections of the same key point between adjacent frames. A 9-layer hybrid network constructed from a convolutional layer, a temporal graph convolutional network, and a second convolutional layer performs the spatio-temporal graph convolution operation on the intra-frame edge set E_F and the inter-frame edge set E_S:
the time pattern convolution network adopts the following convolution process:
Figure SMS_112
wherein the content of the first and second substances,
Figure SMS_113
is an adjacency matrix representing an in-body connection of a keypoint in the single-frame skeleton, < > or>
Figure SMS_117
Is a unit matrix representing a connection between video frames, is asserted>
Figure SMS_119
Degree matrix, <' > based on>
Figure SMS_115
Represents an activation function of the time-patterned convolutional network, based on the comparison of the activation function and the evaluation of the activation function>
Figure SMS_116
Representing skeletal joint point data for input feature data, <' >>
Figure SMS_118
And &>
Figure SMS_120
For a parameter to be trained, is selected>
Figure SMS_114
Is the output data;
for each mixing unit, take layer 5 as an example:
Figure SMS_121
wherein the content of the first and second substances,
Figure SMS_122
for entry at level 5, based on the number of the incoming calls>
Figure SMS_123
Represents a mixing unit, <' > or>
Figure SMS_124
Represents a training parameter set, is selected>
Figure SMS_125
For mixing network activation functions>
Figure SMS_126
Is the first->
Figure SMS_127
And (6) outputting the layers.
The skeleton map is operated on by the convolutional layers, and the activation maps are passed to the subsequent graph convolutional layers; the use of the bottleneck residual module allows pedestrian skeleton features to be extracted more deeply while reducing the computational complexity of the network. The generated high-level feature map is then input to the softmax classifier to identify and classify abnormal driver behaviors and suspicious passenger behaviors.
When an abnormal driving state of the driver is identified and persists beyond a certain time, an alarm is raised to the alarm center. When a passenger exhibits suspicious behavior, a voice prompt is issued; if the suspicious behavior continues after the prompt, an alarm is raised to the control center, whose staff judge the received alarm information and take corresponding measures.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present specification describes embodiments, not every embodiment includes only a single embodiment, and such description is for clarity purposes only, and it is to be understood that all embodiments may be combined as appropriate by one of ordinary skill in the art to form other embodiments as will be apparent to those of skill in the art from the description herein.

Claims (10)

1. An intelligent vehicle-mounted real-time alarm method for industrial-brain public transport, characterized by comprising the following steps:
s1: the method comprises the steps that running information of the bus is obtained through a vehicle-mounted networking system, and monitoring and identification are carried out through video inside and outside the bus;
s2: extracting a monitoring information image and preprocessing the image;
s3: monitoring the driving state of a driver and the behavior of passengers extracted from monitoring information in the bus in real time through a state monitoring algorithm and a behavior monitoring algorithm respectively;
s4: the control center receives the running information and the monitoring information of the bus and gives an alarm for the abnormal information existing in the monitoring.
2. The intelligent vehicle-mounted real-time warning method for the industrial brain bus according to claim 1, wherein the preprocessing operation in the step S2 comprises image enhancement processing, image normalization processing, image detection and clipping processing.
3. The intelligent vehicle-mounted real-time alarm method for industrial-brain public transport according to claim 1, characterized in that the driving state of the driver in S3 comprises driving concentration and driving posture, which are monitored through the state monitoring algorithm and the behavior monitoring algorithm respectively.
4. The intelligent vehicle-mounted real-time alarm method for industrial-brain public transport according to claim 3, characterized in that in the state monitoring algorithm, the driver's gaze-concentration regions are divided by a K-means algorithm into the left side of the vehicle, the front of the vehicle, and the right side of the vehicle, wherein the left side comprises the left rearview mirror and left lane, the front comprises the far front, the near front, and the in-vehicle instruments, and the right side comprises the right rearview mirror and right lane.
5. The intelligent vehicle-mounted real-time alarm method for the industrial brain bus according to claim 4, wherein the K-means algorithm comprises the following steps:
S3.1: inputting the number of clusters, 3 (vehicle left, vehicle front and vehicle right), and a data set containing n objects, and determining an initial cluster center for each cluster;
S3.2: assigning each object in the data set to the nearest cluster according to Euclidean distance;
S3.3: taking the mean of the sample data in each cluster as the new cluster center;
S3.4: repeating steps S3.2 and S3.3 until the algorithm converges, and outputting the 3 resulting clusters.
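The loop of steps S3.1-S3.4 can be sketched as follows. This is a minimal NumPy illustration, not the patented implementation: the 2-D gaze-point data, the deterministic initialization and the convergence tolerance are all assumptions for the example.

```python
import numpy as np

def kmeans(points, k=3, max_iter=100, tol=1e-6):
    """Cluster gaze points into k regions per steps S3.1-S3.4."""
    # S3.1: determine an initial cluster center for each cluster
    # (here: deterministic picks spread evenly over the data set)
    init_idx = np.linspace(0, len(points) - 1, k).astype(int)
    centers = points[init_idx].copy()
    for _ in range(max_iter):
        # S3.2: assign every object to the nearest center (Euclidean distance)
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # S3.3: new center = mean of the sample data in each cluster
        new_centers = np.array([points[labels == c].mean(axis=0)
                                if np.any(labels == c) else centers[c]
                                for c in range(k)])
        # S3.4: converged once the centers stop moving
        if np.linalg.norm(new_centers - centers) < tol:
            centers = new_centers
            break
        centers = new_centers
    return labels, centers

# Three well-separated blobs standing in for left / front / right gaze regions
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(loc, 0.1, size=(50, 2))
                 for loc in ([-2.0, 0.0], [0.0, 0.0], [2.0, 0.0])])
labels, centers = kmeans(pts, k=3)
print(sorted(np.round(centers[:, 0]).astype(int).tolist()))  # [-2, 0, 2]
```

With separated blobs and spread-out initialization the algorithm recovers one center per gaze region, matching the 3 result clusters of step S3.4.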
6. The intelligent vehicle-mounted real-time alarm method for the industrial brain bus according to claim 5, wherein in the state monitoring algorithm, the gaze-concentration regions into which the driver's line of sight falls (vehicle left, vehicle front and vehicle right) are divided through the K-means algorithm, and the driver's gaze activity A is calculated within each time window [formula supplied only as images in the published record], wherein f is the number of image frames extracted per second from the surveillance video, T is the length of a time window, r_i is the annotated gaze region of the i-th frame, and c is the cluster-center value for a single time window; a gaze-activity threshold A_0 is set, and when the gaze activity falls below the threshold the driver is judged to be in a distracted state and is reminded through the alarm.
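The distraction test of claim 6 can be illustrated with a stand-in for the gaze-activity measure. The patent gives the exact formula only as images, so the region-switch-rate definition below, the symbol names and the threshold value are illustrative assumptions, not the claimed formula.

```python
def gaze_activity(regions, f, T):
    """Hypothetical gaze-activity score over a window of f*T frames:
    the fraction of frame transitions where the annotated gaze region
    (a K-means cluster label per extracted frame) changes."""
    window = regions[: f * T]
    switches = sum(1 for a, b in zip(window, window[1:]) if a != b)
    return switches / max(len(window) - 1, 1)

THRESHOLD = 0.05  # assumed gaze-activity threshold A_0

# A distracted driver fixates on one region for the whole window;
# an attentive driver alternates between road and mirrors.
fixated = [1] * 50               # 10 fps x 5 s window, always "vehicle front"
scanning = [1, 1, 0, 1, 2] * 10  # regular left / right mirror checks

for label, window in [("fixated", fixated), ("scanning", scanning)]:
    a = gaze_activity(window, f=10, T=5)
    state = "distracted" if a < THRESHOLD else "attentive"
    print(f"{label}: activity={a:.2f} -> {state}")
```

A fixated window yields activity 0 and trips the distraction alarm; a scanning window stays well above the threshold.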
7. The intelligent vehicle-mounted real-time alarm method for the industrial brain bus according to claim 6, wherein in the behavior monitoring algorithm, the motion characteristics of the joint points in the skeleton sequence are used as the index for selecting abnormal-behavior key frames, and it is judged whether suspicious postures exist in the driving posture of the driver and among the persons in the vehicle; a spatio-temporal graph G = (V, E) is constructed on the skeleton sequence of the key frames, wherein the node set V contains all key points of the skeleton sequence, and the edge set E consists of two subsets: the first subset, the intra-frame edge set E_S = { v_ti v_tj | (i, j) ∈ H }, contains the connections inside the single-frame human skeleton, wherein t is the capture time of each video frame, i and j index the monitored joint points, and H is the set of connecting lines between human joints; the second subset, the inter-frame edge set E_F = { v_ti v_(t+1)i }, contains the connections of the same key point between adjacent frames; a spatio-temporal graph convolution through a 9-layer hybrid network then performs the graph convolution operation on the intra-frame edge set E_S and the inter-frame edge set E_F to generate high-level feature maps, which are input into a softmax classifier to identify and classify abnormal behaviors of the driver and suspicious behaviors of passengers.
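The two edge subsets of claim 7 can be built mechanically from a joint-connection table. The 5-joint toy skeleton and the `build_edges` helper below are illustrative assumptions; the patent does not specify the skeleton topology.

```python
def build_edges(num_frames, num_joints, H):
    """Construct the spatio-temporal graph edges of claim 7.

    Nodes are (t, i) pairs: joint i observed in frame t.
    E_S: intra-frame skeleton bones, one copy per frame.
    E_F: the same joint connected across adjacent frames."""
    E_S = [((t, i), (t, j)) for t in range(num_frames) for (i, j) in H]
    E_F = [((t, i), (t + 1, i)) for t in range(num_frames - 1)
           for i in range(num_joints)]
    return E_S, E_F

# Toy skeleton H: head-neck, neck-left hand, neck-right hand, neck-hip
H = [(0, 1), (1, 2), (1, 3), (1, 4)]
E_S, E_F = build_edges(num_frames=3, num_joints=5, H=H)
print(len(E_S), len(E_F))  # 3 frames x 4 bones = 12; 2 gaps x 5 joints = 10
```

Every bone is repeated in each frame (E_S), while E_F threads each joint through time, which is what lets the later graph convolution mix spatial and temporal context.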
8. The intelligent vehicle-mounted real-time alarm method for the industrial brain bus according to claim 7, wherein the hybrid network, constructed from a convolutional layer, a temporal graph convolutional network and a further convolutional layer, performs the graph convolution operation on the intra-frame edge set and the inter-frame edge set, and extracts the spatial features and the temporal context features of the skeleton sequence.
9. The intelligent vehicle-mounted real-time alarm method for the industrial brain bus according to claim 8, wherein in the temporal graph convolutional network of the hybrid network, the convolution process is:

f_out = σ( D^(-1/2) (A + I) D^(-1/2) f_in M W )

wherein A is the adjacency matrix representing the intra-body connections of the key points in the single-frame skeleton, I is the identity matrix representing the connections between video frames, D is the degree matrix, σ is the activation function of the temporal graph convolutional network, f_in is the input feature data representing the skeleton joint-point data, M and W are the parameters to be trained, and f_out is the output data;
the following calculation is performed for each mixing unit:

h_(l+1) = σ( F( h_l, Θ_l ) )

wherein h_l is the input of the l-th layer, F denotes the mixing unit, Θ_l is the set of training parameters, σ is the activation function of the hybrid network, and h_(l+1) is the output of the l-th layer.
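The per-layer convolution of claim 9 matches the standard normalized graph convolution used in skeleton-based (ST-GCN-style) action recognition. A minimal NumPy sketch under that assumption follows; the 5-joint skeleton, channel counts, ReLU activation and random weights are illustrative, and the trainable edge mask M is folded away for brevity.

```python
import numpy as np

def graph_conv(f_in, A, W):
    """One spatial graph-convolution layer:
    f_out = ReLU( D^(-1/2) (A + I) D^(-1/2) f_in W )."""
    A_hat = A + np.eye(A.shape[0])                           # A + I (self-loops)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))   # D^(-1/2)
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ f_in @ W, 0.0)

# Toy 5-joint skeleton adjacency: head-neck, neck-hands, neck-hip
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (1, 3), (1, 4)]:
    A[i, j] = A[j, i] = 1.0

rng = np.random.default_rng(0)
f_in = rng.normal(size=(5, 3))   # 3 input channels per joint (e.g. x, y, score)
W = rng.normal(size=(3, 8))      # W projects to 8 output channels
f_out = graph_conv(f_in, A, W)
print(f_out.shape)  # (5, 8)
```

Each joint's output mixes its own features with those of its skeleton neighbors, degree-normalized so highly connected joints (the neck here) do not dominate; stacking such layers with temporal convolutions yields the mixing-unit recurrence of claim 9.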
10. The intelligent vehicle-mounted real-time alarm method for the industrial brain bus according to claim 9, wherein in step S4 the control center raises an alarm for abnormal vehicle information according to the vehicle running information uploaded by the vehicle-mounted networking system, and monitors and raises an alarm for abnormal driver postures and suspicious passenger behaviors according to the monitoring information.
CN202310264493.9A 2023-03-20 2023-03-20 Intelligent vehicle-mounted real-time alarm method for industrial brain public transport Pending CN115984787A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310264493.9A CN115984787A (en) 2023-03-20 2023-03-20 Intelligent vehicle-mounted real-time alarm method for industrial brain public transport


Publications (1)

Publication Number Publication Date
CN115984787A true CN115984787A (en) 2023-04-18

Family

ID=85963475

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310264493.9A Pending CN115984787A (en) 2023-03-20 2023-03-20 Intelligent vehicle-mounted real-time alarm method for industrial brain public transport

Country Status (1)

Country Link
CN (1) CN115984787A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112149618A (en) * 2020-10-14 2020-12-29 紫清智行科技(北京)有限公司 Pedestrian abnormal behavior detection method and device suitable for inspection vehicle
US20210012128A1 (en) * 2019-03-18 2021-01-14 Beijing Sensetime Technology Development Co., Ltd. Driver attention monitoring method and apparatus and electronic device
CN112389448A (en) * 2020-11-23 2021-02-23 重庆邮电大学 Abnormal driving behavior identification method based on vehicle state and driver state
CN112633057A (en) * 2020-11-04 2021-04-09 北方工业大学 Intelligent monitoring method for abnormal behaviors in bus
CN113239884A (en) * 2021-06-04 2021-08-10 重庆能源职业学院 Method for recognizing human body behaviors in elevator car
CN113378771A (en) * 2021-06-28 2021-09-10 济南大学 Driver state determination method and device, driver monitoring system and vehicle
CN113642400A (en) * 2021-07-12 2021-11-12 东北大学 Graph convolution action recognition method, device and equipment based on 2S-AGCN
CN115035596A (en) * 2022-06-05 2022-09-09 东北石油大学 Behavior detection method and apparatus, electronic device, and storage medium
WO2022266853A1 (en) * 2021-06-22 2022-12-29 Intel Corporation Methods and devices for gesture recognition


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
MENGQI SHI et al.: "Skeleton Action Recognition Based on Transformer Adaptive Graph Convolution", Journal of Physics: Conference Series, pages 1-7 *
ZHANG Yingji: "Research on Distracted-Driving Detection Algorithms Based on Eye-Movement and Hand-Behavior Recognition", China Master's Theses Full-text Database, Engineering Science and Technology I, vol. 2023, no. 3, pages 026-107 *
TIAN Wenhao: "Research on Video Action Recognition Based on Multi-Feature Fusion", China Master's Theses Full-text Database, Information Science and Technology, vol. 2023, no. 1, pages 138-2843 *
BAI Zhonghao et al.: "Driver Distraction Detection by Multi-Information Fusion Based on Graph Convolutional Networks", Automotive Engineering, vol. 42, no. 8, pages 1027-1033 *

Similar Documents

Publication Publication Date Title
CN110119676B (en) Driver fatigue detection method based on neural network
CN108960065B (en) Driving behavior detection method based on vision
CN108791299B (en) Driving fatigue detection and early warning system and method based on vision
CN111469802B (en) Seat belt state determination system and method
CN109460699B (en) Driver safety belt wearing identification method based on deep learning
KR101833359B1 (en) Method and apparatus for collecting traffic information from bigdata of outside image of car
US11465634B1 (en) Automobile detection system
CN109190523B (en) Vehicle detection tracking early warning method based on vision
WO2018058958A1 (en) Road vehicle traffic alarm system and method therefor
WO2019223655A1 (en) Detection of non-motor vehicle carrying passenger
CN110866427A (en) Vehicle behavior detection method and device
CN112633057B (en) Intelligent monitoring method for abnormal behavior in bus
CN109977930B (en) Fatigue driving detection method and device
CN111597974B (en) Monitoring method and system for personnel activities in carriage based on TOF camera
CN103871200A (en) Safety warning system and method used for automobile driving
CN113076856B (en) Bus safety guarantee system based on face recognition
CN106570444A (en) On-board smart prompting method and system based on behavior identification
TWI774034B (en) Driving warning method, system and equipment based on internet of vehicle
CN107264526B (en) A kind of lateral vehicle method for early warning, system, storage medium and terminal device
CN110781872A (en) Driver fatigue grade recognition system with bimodal feature fusion
Yi et al. Safety belt wearing detection algorithm based on human joint points
KR101337554B1 (en) Apparatus for trace of wanted criminal and missing person using image recognition and method thereof
CN115984787A (en) Intelligent vehicle-mounted real-time alarm method for industrial brain public transport
CN111709396A (en) Driving skill subject two and three examination auxiliary evaluation method based on human body posture
CN108241866A (en) A kind of method, apparatus guided to driving behavior and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination