CN110298257A - Driver behavior recognition method based on human body multi-part features - Google Patents
- Publication number
- CN110298257A (application CN201910483000.4A; granted as CN110298257B)
- Authority
- CN
- China
- Prior art keywords
- human body
- driving behavior
- key point
- stage
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The present invention provides a driver behavior recognition method based on multi-part human-body features, comprising: establishing an image dataset for driving behavior recognition; constructing a neural network model; training a driving behavior recognition model based on multi-part human-body features; and testing the trained model. The invention combines human-body key-point regions with the global image region and performs recognition on the discriminative local regions; because human key-point features are fused into the representation, the recognition accuracy is further improved, which is of significant practical value in the field of traffic safety.
Description
Technical field
The invention belongs to the field of image processing and relates to pattern recognition methods, in particular to a static driver behavior recognition method based on multi-part human-body features.
Background art
In recent years, with the rapid development of China's economy and the continuous improvement of living standards, the automobile has become an indispensable means of transportation in daily life and production. Along with the popularization of the automobile, however, road traffic accidents occur frequently. Recent statistics released by the State Administration of Work Safety and the Ministry of Transport show that the annual death toll from road accidents in China remains the second highest in the world, and accidents involving private cars and freight vehicles account for about 80% of the national total. Road safety currently faces many challenges, such as frequent violations by traffic participants, inadequate management and law enforcement, and insufficient road-safety supervision. Unsafe driver behavior is one of the most important causes of traffic accidents. These facts show that curbing the high incidence of road accidents, standardizing driving behavior, and reducing the harm of traffic accidents are urgent tasks.
Unsafe driving behaviors typically fall into several categories. On the one hand, as the pace of life accelerates and daily life becomes closely tied to the mobile phone, drivers often pick up a phone or read and send messages while driving. Such behaviors take the driver's eyes off the road ahead, take both hands off the steering wheel, and cause inattention; once an emergency arises, the driver can rarely react in time, which may lead to a serious traffic accident. In addition, during long drives some drivers smoke, talk with the front-seat passenger, or take both hands off the wheel, all of which carry safety risks and significantly raise the accident rate. The latest regulations implementing the Road Traffic Safety Law explicitly prohibit behaviors that interfere with safe driving, such as holding and dialing a mobile phone while driving a motor vehicle.
The above risky driving behaviors are difficult for the relevant authorities to monitor and manage, and real-time manual supervision by traffic administrators is impossible. Preventing unsafe behavior therefore depends largely on the driver's own safety awareness, and there is still no effective measure for standardizing driving behavior.
Summary of the invention
To solve the above problems, the present invention provides a driver behavior recognition method based on multi-part human-body features. The multi-part feature extraction method used can capture the spatial information of driving behavior in an image and recognize the behavior in real time. Because different driving behaviors involve different movements of local body parts, the invention exploits multi-part human-body features by combining the human key-point localization method Convolutional Pose Machines (CPM) with a VGG model to perform driver behavior recognition.
In order to achieve the above object, the invention provides the following technical scheme.
A driver behavior recognition method based on multi-part human-body features comprises the following steps:
Step 1: establish an image dataset for driving behavior recognition.
Obtain sample image data and build an image dataset whose samples cover various driving behaviors; divide the dataset into a training set and a test set such that the drivers in the test images do not appear in the training images.
Step 2: construct the neural network model.
The neural network model consists of a Convolutional Pose Machines model and a convolutional neural network; in the convolutional neural network, the fully connected layers for local-region and global-region feature extraction are independent of each other, while the convolutional layers are shared.
Step 3: train the driving behavior recognition model based on multi-part human-body features.
Build the network model, train it, and optimize the network parameters with stochastic gradient descent.
Step 4: test the driving behavior recognition model based on multi-part human-body features.
Given a driving behavior image, normalize the test image to the input size of the model and obtain the behavior recognition result of the test image by forward propagation.
Further, step 2 specifically includes the following process:
Step 201: annotate 5 key points in the training and test samples: head, right hand, right elbow, left hand, and left elbow; the training and test images cover various driving behaviors.
Step 202: define $Y_p \in Z$ as the pixel location of the $p$-th key point, where $Z$ is the set of all positions $(u, v)$ in the image. The goal of human key-point localization is to predict the positions $Y = (Y_1, \ldots, Y_P)$ of all key points in the image. The pose machine consists of a sequence of multi-class classifiers $g_t(\cdot)$.
At each stage $t \in \{1, \ldots, T\}$ and for each candidate key-point position $Y_p = z$, $z \in Z$, the classifier $g_t(\cdot)$ outputs a confidence value based on the feature $\mathbf{x}_z$ extracted at image position $z$ and on contextual information about $Y_p$ from the output of the previous-stage classifier. In particular, for the first stage ($t = 1$) the confidence output by classifier $g_1(\cdot)$ is given by formula (1):

$$g_1(\mathbf{x}_z) \rightarrow \{b_1^p(Y_p = z)\}_{p \in \{0, \ldots, P\}} \tag{1}$$

where $b_1^p(Y_p = z)$ is the score that classifier $g_1$ assigns in stage 1 to the $p$-th key point being at image position $z$.
For each position $z = (u, v)^T$ in the image, the confidence values of key point $p$ form a map $\mathbf{b}_t^p \in \mathbb{R}^{w \times h}$, where $w$ and $h$ are the width and height of the image, so that

$$\mathbf{b}_t^p[u, v] = b_t^p(Y_p = z). \tag{2}$$

The set of confidence maps of all human key points is denoted $\mathbf{b}_t \in \mathbb{R}^{w \times h \times (P+1)}$, where "$P + 1$" stands for the $P$ human key points plus the background.
In the subsequent stages, for key point $p$ the classifier outputs a confidence value based on the feature information $\mathbf{x}'_z$ of the input image and on the contextual features derived from the previous stage's confidence maps of formula (2), as shown in formula (3):

$$g_t\big(\mathbf{x}'_z,\ \psi_t(z, \mathbf{b}_{t-1})\big) \rightarrow \{b_t^p(Y_p = z)\}_{p \in \{0, \ldots, P\}}, \quad t > 1 \tag{3}$$

where $\psi_{t>1}(\cdot)$ is a mapping from the confidence maps $\mathbf{b}_{t-1}$ to context features, and $\mathbf{x}'$ is an image feature; in the original pose machine framework, $\mathbf{x} = \mathbf{x}'$.
Step 203: a half-body CPM network model is used; the model comprises the following four stages.
The first stage consists of a basic convolutional neural network, namely a white convolution module made up of 7 convolutional layers and 3 pooling layers. The module is fully convolutional, and its convolutional layers do not change the spatial size of the image. After the three pooling operations, the input image yields one response map per key point, P + 1 maps in total.
The second stage takes as input, through a cascade structure, the convolution results of the original image, the response maps output by stage one, and a center-constraint map generated from a Gaussian template; its final output is P + 1 response maps.
The third and fourth stages use the intermediate convolution results of the second stage instead of the convolution results of the original image; the rest remains unchanged.
Step 204: perform loss computation and error back-propagation at the output of each stage; at each scale, compute the response map of every key point and accumulate the maps into the final overall response map.
Step 205: obtain the position of each key point and draw the corresponding rectangular region around it.
Step 206: perform a forward pass of VGG-16 on the global image to obtain the corresponding feature vector, and use an RoI Pooling layer to map the key-point regions onto the feature map.
Step 207: concatenate the 5 key-point feature vectors with the global feature vector to form the final feature vector, classify it with softmax, and output the corresponding driving behavior.
Further, the image feature $\mathbf{x}'$ used in the subsequent stages of step 202 differs from that of the first stage, and the classifiers $g_t(\cdot)$ use a random forest algorithm.
Further, in step 203, when whole-body key points are to be detected, the third-stage structure is repeated.
Further, step 3 specifically includes the training of the Convolutional Pose Machines model and the training of the VGG-16 model.
In the training of the Convolutional Pose Machines model, let $b_*^p(z)$ be the ground-truth response map of key point $p$ and $b_t^p(z)$ the response map output by the model at stage $t$. The loss function of each stage is then

$$f_t = \sum_{p=1}^{P+1} \sum_{z \in Z} \left\| b_t^p(z) - b_*^p(z) \right\|_2^2$$

and the total loss of the four stages is

$$F = \sum_{t=1}^{4} f_t.$$

The training of VGG-16 minimizes the softmax loss according to the behavior labels of the samples. Let $P(\alpha \mid I, r)$ be the probability output by softmax that the driving behavior belongs to class $\alpha$; then for a batch of training samples the loss function is

$$L = -\frac{1}{M} \sum_{i=1}^{M} \log P(l_i \mid I_i, r_i)$$

where $l_i$ is the correct behavior label of image $I_i$ and $M$ is the batch size.
Further, when the VGG-16 of step 3 is trained, its parameters are initialized with a model pre-trained on the ImageNet-1K dataset.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. The invention combines human key-point regions with the global region and performs driving behavior recognition on the discriminative local regions. Because human key-point features are fused into the representation, the recognition accuracy is further improved, which is of significant practical value in the field of traffic safety.
2. The invention uses the Convolutional Pose Machines model to extract information from multiple human key-point regions, significantly improving the recognition accuracy of the model.
Detailed description of the invention
Fig. 1 is the flow chart of the present invention.
Fig. 2 shows sample pictures of different driving behaviors.
Fig. 3 is a schematic diagram of the framework of the driving behavior recognition model based on multi-part human-body features.
Fig. 4 is a diagram of the pose machine architecture.
Fig. 5 is a schematic diagram of the CPM model structure.
Fig. 6 is a schematic diagram of the convolution module structure.
Fig. 7 is a schematic diagram of the relay (intermediate) supervision.
Specific embodiment
The technical solution provided by the invention is described in detail below with reference to specific embodiments. It should be understood that the following embodiments only illustrate the invention and do not limit its scope.
In the present invention, a sequential pose machine architecture is first used to detect the positions of five key points: head, left elbow, right elbow, left wrist, and right wrist, and a local region is drawn around each. Then a convolutional forward pass of the VGG-16 model over the global image produces the corresponding feature vector, and an RoI Pooling layer maps the key-point regions onto the feature map. Finally, the 5 local feature vectors and the global feature vector are concatenated and classified with softmax, which outputs the corresponding driving behavior.
Specifically, the driving behavior recognition method based on multi-part human-body features provided by the invention, whose flow is shown in Fig. 1, comprises the following steps:
Step 1: establish the image dataset for driving behavior recognition.
The sample data come from two sources. One part is the driving behavior dataset provided by the Kaggle platform, with a picture size of 640*480 and 25,000 pictures in total, such as the non-Chinese driver images in Fig. 2. The other part is a self-built driving behavior database recorded with an in-vehicle camera (Logitech C920) under different angles and lighting conditions. The captured pictures are 1320*946 and are cropped to 640*480 for data uniformity, such as the Chinese driver images in Fig. 2, about 5,000 pictures in total. The sample sizes of the 10 behavior classes are almost identical: normal driving, phoning with the left hand, phoning with the right hand, texting with the left hand, texting with the right hand, smoking with the left hand, smoking with the right hand, drinking, talking with the front-seat passenger, and both hands off the steering wheel.
The collected image dataset is divided into a training set of 29,000 pictures and a test set of 1,000 pictures. The original images are down-sampled to 368*368, and the labels 0 to 9 represent the behavior classes of the samples. For accuracy, the test set covers all 10 driving behaviors with 100 pictures per behavior, and the drivers in the test pictures do not appear in the training pictures.
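The driver-independent split described above (no driver appears in both the training and test sets) can be sketched as follows; the driver IDs and the toy samples are hypothetical, since the patent does not specify how drivers are identified:

```python
def driver_independent_split(samples, test_drivers):
    """Split (driver_id, image_path, label) tuples so that no test driver
    ever appears in the training set."""
    train = [s for s in samples if s[0] not in test_drivers]
    test = [s for s in samples if s[0] in test_drivers]
    return train, test

# Toy example with three hypothetical drivers; driver "C" is held out entirely.
samples = [("A", "img0.jpg", 0), ("B", "img1.jpg", 3), ("C", "img2.jpg", 0)]
train, test = driver_independent_split(samples, test_drivers={"C"})
```

In practice the same idea would be applied to the 29,000/1,000 split of step 1, grouping all images of each recorded driver before splitting.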
Step 2: construct the neural network model.
In this step the designed model consists of a Convolutional Pose Machines model and a convolutional neural network; the structure is shown in Fig. 3. The convolutional neural network module uses the VGG-16 model, in which the fully connected layers for local-region and global-region feature extraction are independent of each other while the convolutional layers are shared. To increase the processing speed of the network, an RoI Pooling layer is introduced. The specific construction process is as follows.
Step 201: since no public driver key-point annotation dataset is available, about 10,000 training samples are annotated by hand, about 1,000 per driving behavior, plus 600 test samples with 100 per behavior. As shown in Fig. 3, 5 key points are annotated in total: head, right hand, right elbow, left hand, and left elbow.
Step 202: define $Y_p \in Z$ as the pixel location of the $p$-th key point, where $Z$ is the set of all positions $(u, v)$ in the image. The goal of human key-point localization is to predict the positions $Y = (Y_1, \ldots, Y_P)$ of all key points in the image. The pose machine consists of a sequence of multi-class classifiers $g_t(\cdot)$, as shown in Fig. 4.
At each stage $t \in \{1, \ldots, T\}$ and for each candidate key-point position $Y_p = z$, $z \in Z$, the classifier $g_t(\cdot)$ outputs a confidence value based on the feature $\mathbf{x}_z$ extracted at image position $z$ and on contextual information about $Y_p$ from the output of the previous-stage classifier. In particular, for the first stage ($t = 1$) the confidence output by classifier $g_1(\cdot)$ is given by formula (1):

$$g_1(\mathbf{x}_z) \rightarrow \{b_1^p(Y_p = z)\}_{p \in \{0, \ldots, P\}} \tag{1}$$

where $b_1^p(Y_p = z)$ is the score that classifier $g_1$ assigns in stage 1 to the $p$-th key point being at image position $z$. For each position $z = (u, v)^T$ in the image, the confidence values of key point $p$ form a map $\mathbf{b}_t^p \in \mathbb{R}^{w \times h}$, where $w$ and $h$ are the width and height of the image:

$$\mathbf{b}_t^p[u, v] = b_t^p(Y_p = z). \tag{2}$$

For ease of notation, the set of confidence maps of all human key points is written $\mathbf{b}_t \in \mathbb{R}^{w \times h \times (P+1)}$, "$P + 1$" standing for the $P$ human key points plus the background.
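Each stage thus outputs P + 1 belief maps, and the predicted location of key point p is the pixel of highest confidence in its map. A minimal pure-Python sketch of this argmax decoding, using a tiny hypothetical 3×3 map for illustration:

```python
def argmax_location(belief_map):
    """Return (u, v) of the highest-confidence pixel in a 2-D belief map,
    i.e. the predicted key-point location for that map."""
    best, best_uv = float("-inf"), (0, 0)
    for v, row in enumerate(belief_map):
        for u, val in enumerate(row):
            if val > best:
                best, best_uv = val, (u, v)
    return best_uv

# Hypothetical belief map for one key point; the peak sits at u=2, v=1.
b_p = [[0.1, 0.0, 0.2],
       [0.0, 0.3, 0.9],
       [0.1, 0.2, 0.1]]
location = argmax_location(b_p)  # -> (2, 1)
```

A real CPM would produce 46×46 maps per key point (see step 203); the decoding logic is the same.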
In the subsequent stages, for key point $p$ the classifier outputs a confidence value based on the image features $\mathbf{x}'_z$ and on the contextual features derived from the previous stage's confidence maps of formula (2), as shown in formula (3):

$$g_t\big(\mathbf{x}'_z,\ \psi_t(z, \mathbf{b}_{t-1})\big) \rightarrow \{b_t^p(Y_p = z)\}_{p \in \{0, \ldots, P\}}, \quad t > 1 \tag{3}$$

where $\psi_{t>1}(\cdot)$ maps the confidence maps $\mathbf{b}_{t-1}$ to context features. As $t$ increases, the confidence values output by the classifier become more and more accurate for each key point. Moreover, the image feature $\mathbf{x}'$ used in the subsequent stages may differ from that of the first stage; in the original pose machine architecture, $\mathbf{x} = \mathbf{x}'$ and the classifiers $g_t(\cdot)$ are random forests.
Step 203: since the key points needed for driving behavior recognition are concentrated on the driver's upper body, the invention uses a half-body network model; the CPM half-body model is divided into four stages, as shown in Fig. 5.
The first stage of the CPM model consists of a basic convolutional neural network, namely the white convolution module. The module consists of 7 convolutional layers and 3 pooling layers, as shown in Fig. 6. It is fully convolutional, and its convolutional layers do not change the spatial size of the image. The input image is 368*368; after the three pooling operations, one response map is output per key point, P + 1 maps in total, each of size 46*46.
The second stage of the model takes as input, through a cascade structure, the convolution results of the original image, the response maps output by stage one, and the center-constraint map generated from a Gaussian template; its final output is likewise P + 1 response maps of size 46*46. The center constraint generated by the Gaussian template is the "center" module in Fig. 5, whose main function is to pull the responses toward the center of the image.
The third and fourth stages use the intermediate convolution results of the second stage instead of the convolution results of the original image; the rest remains unchanged. To design a more complex network structure, for example when whole-body key points need to be detected, it suffices to repeat the third-stage structure.
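The four-stage wiring of step 203 — stage 1 runs on the image alone, stage 2 fuses the image convolution results with stage 1's belief maps and the Gaussian center map, and stages 3-4 reuse stage 2's intermediate features — can be sketched with placeholder stage functions. The stage bodies below are hypothetical stand-ins (simple tagging lambdas), not the actual CPM convolutions; only the data flow is faithful to the text:

```python
def cpm_forward(image, stage1, stage2, later_stage, center_map, num_stages=4):
    """Run a CPM-style cascade; each stage returns (belief_maps, features)."""
    beliefs, _ = stage1(image)                            # stage 1: image only
    beliefs, feats = stage2(image, beliefs, center_map)   # stage 2: adds context
    for _ in range(num_stages - 2):                       # stages 3..T reuse
        beliefs, feats = later_stage(feats, beliefs, center_map)  # stage-2 feats
    return beliefs

# Placeholder stages that merely record which path produced the beliefs.
s1 = lambda img: ("b1", None)
s2 = lambda img, b, c: ("b2", "f2")
sl = lambda f, b, c: (f"b_after_{f}", f)
out = cpm_forward("img", s1, s2, sl, center_map="gauss")  # -> "b_after_f2"
```

Note that the loop never touches the original image again, matching the statement that stages three and four replace the image convolutions with stage two's intermediate results.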
Step 204: to alleviate the vanishing-gradient problem, the invention introduces a relay (intermediate) supervision mechanism, i.e., loss computation and error back-propagation are performed at the output of each stage, as shown in Fig. 7. In addition, the data are expanded over multiple scales: the response map of each key point is computed at every scale, and the maps are accumulated into the final overall response map.
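The multi-scale accumulation of step 204 can be sketched as an element-wise sum of per-scale response maps. The maps below are assumed to have already been resized to a common resolution (a real implementation would interpolate); the 2×2 values are illustrative only:

```python
def accumulate_scales(maps_per_scale):
    """Element-wise sum of same-sized 2-D response maps computed at
    several scales, yielding the final overall response map."""
    h, w = len(maps_per_scale[0]), len(maps_per_scale[0][0])
    total = [[0.0] * w for _ in range(h)]
    for m in maps_per_scale:
        for v in range(h):
            for u in range(w):
                total[v][u] += m[v][u]
    return total

scale_a = [[0.1, 0.2], [0.3, 0.4]]   # hypothetical response map at scale A
scale_b = [[0.0, 0.1], [0.2, 0.6]]   # hypothetical response map at scale B
combined = accumulate_scales([scale_a, scale_b])
```

The key-point location would then be read off the accumulated map, as in the argmax decoding shown earlier.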
Step 205: obtain the position of each key point and draw the corresponding rectangular region around it. The key-point regions extracted in step 2 are denoted $r_{head}$, $r_{left\text{-}hand}$, $r_{right\text{-}hand}$, $r_{left\text{-}elbow}$, and $r_{right\text{-}elbow}$.
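Drawing the rectangular region of step 205 amounts to placing an axis-aligned box around each detected key point and clamping it to the image bounds. A minimal sketch; the 32-pixel half-size is an assumed value, as the patent does not give the region dimensions:

```python
def keypoint_region(u, v, img_w, img_h, half=32):
    """Axis-aligned box (x0, y0, x1, y1) around key point (u, v),
    clamped to the image; `half` is a hypothetical half-width."""
    x0, y0 = max(0, u - half), max(0, v - half)
    x1, y1 = min(img_w, u + half), min(img_h, v + half)
    return x0, y0, x1, y1

# Regions for two of the five key points on a 368x368 input.
r_head = keypoint_region(184, 40, 368, 368)        # -> (152, 8, 216, 72)
r_left_hand = keypoint_region(10, 300, 368, 368)   # near the edge: clamped at 0
```

These boxes are what the RoI Pooling layer of step 206 maps onto the shared VGG-16 feature map.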
Step 206: perform a forward pass of VGG-16 on the global image to obtain the corresponding feature vector, and use an RoI Pooling layer to map the key-point regions onto the feature map.
Step 207: concatenate the 5 key-point feature vectors with the global feature vector to form the final feature vector, classify it with softmax, and output the corresponding driving behavior.
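The concatenate-then-softmax classification of step 207 can be sketched in pure Python. The tiny feature vectors and the scoring function below are hypothetical stand-ins for the pooled RoI features and the trained softmax layer:

```python
import math

def softmax(scores):
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(keypoint_feats, global_feat, score_fn):
    """Concatenate local + global features, then softmax over class scores."""
    feat = [x for f in keypoint_feats for x in f] + global_feat
    return softmax(score_fn(feat))

# Hypothetical 1-D features and a toy scoring function over 2 behavior classes.
probs = classify([[0.1], [0.2]], [0.3], score_fn=lambda f: [sum(f), 0.0])
```

In the real model the concatenated vector would combine five RoI-pooled key-point features with the VGG-16 global feature, and the scores would span the 10 behavior classes.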
Step 3: train the driving behavior recognition model based on multi-part human-body features.
The network model is built with the Caffe open-source framework. The entire training process runs on a server with an Intel Core i7 CPU, an NVIDIA TITAN X GPU, and the Ubuntu 18.04 operating system. Network parameters are optimized with stochastic gradient descent.
Training is mainly divided into the Convolutional Pose Machines model and the VGG-16 model.
In the training of the Convolutional Pose Machines model, let $b_*^p(z)$ be the ground-truth response map of key point $p$ and $b_t^p(z)$ the response map output by the model at stage $t$. The loss function of each stage is

$$f_t = \sum_{p=1}^{P+1} \sum_{z \in Z} \left\| b_t^p(z) - b_*^p(z) \right\|_2^2$$

and the total loss of the four stages is

$$F = \sum_{t=1}^{4} f_t.$$

The training of VGG-16 minimizes the softmax loss according to the behavior labels of the samples. Let $P(\alpha \mid I, r)$ be the probability output by softmax that the driving behavior belongs to class $\alpha$; then for a batch of training samples the loss function is

$$L = -\frac{1}{M} \sum_{i=1}^{M} \log P(l_i \mid I_i, r_i)$$

where $l_i$ is the correct behavior label of image $I_i$ and $M$ is the batch size. During training, the parameters are initialized with a model pre-trained on the ImageNet-1K dataset to speed up convergence. The learning rate is 0.0001, the batch size is 32, and the number of iterations is about 7,000.
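The two loss terms above — the per-stage L2 loss on the CPM belief maps under relay supervision, and the batch-mean cross-entropy on the behavior label — can be sketched in pure Python. The flattened two-pixel "maps" and the probabilities are illustrative only:

```python
import math

def stage_l2_loss(pred_map, target_map):
    """Sum of squared belief-map errors for one stage (maps flattened)."""
    return sum((p - t) ** 2 for p, t in zip(pred_map, target_map))

def total_cpm_loss(per_stage_preds, target_map):
    """Relay supervision: accumulate the L2 loss over all stages."""
    return sum(stage_l2_loss(p, target_map) for p in per_stage_preds)

def batch_cross_entropy(probs_per_sample, labels):
    """Mean negative log-probability of the correct behavior label."""
    m = len(labels)
    return -sum(math.log(p[l]) for p, l in zip(probs_per_sample, labels)) / m

# Four identical toy stages, each predicting [0.5, 0.5] against target [1, 0]:
cpm = total_cpm_loss([[0.5, 0.5]] * 4, [1.0, 0.0])   # -> 4 * 0.5 = 2.0
# Batch of two samples with hypothetical softmax outputs:
ce = batch_cross_entropy([[0.8, 0.2], [0.6, 0.4]], [0, 1])
```

In the full method these two objectives train the CPM and the VGG-16 branch respectively; the sketch only illustrates their arithmetic shape.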
Step 4: test the driving behavior recognition model based on multi-part human-body features. Given a driver behavior image, the test image is normalized to 368 × 368 as the input of the model, and the behavior recognition result of the test image is obtained by forward propagation.
The technical means disclosed in the present invention are not limited to those disclosed in the above embodiments and include technical solutions formed by any combination of the above technical features. It should be pointed out that those skilled in the art can make various improvements and modifications without departing from the principle of the invention, and such improvements and modifications are also regarded as falling within the protection scope of the invention.
Claims (6)
1. A driver behavior recognition method based on multi-part human-body features, characterized by comprising the following steps:
Step 1: establish an image dataset for driving behavior recognition.
Obtain sample image data and build an image dataset whose samples cover various driving behaviors; divide the dataset into a training set and a test set such that the drivers in the test images do not appear in the training images.
Step 2: construct the neural network model.
The neural network model consists of a Convolutional Pose Machines model and a convolutional neural network; in the convolutional neural network, the fully connected layers for local-region and global-region feature extraction are independent of each other, while the convolutional layers are shared.
Step 3: train the driving behavior recognition model based on multi-part human-body features.
Build the network model, train it, and optimize the network parameters with stochastic gradient descent.
Step 4: test the driving behavior recognition model based on multi-part human-body features.
Given a driving behavior image, normalize the test image to the input size of the model and obtain the behavior recognition result of the test image by forward propagation.
2. The driver behavior recognition method based on multi-part human-body features according to claim 1, characterized in that step 2 specifically includes the following process:
Step 201: annotate 5 key points in the training and test samples: head, right hand, right elbow, left hand, and left elbow; the training and test images cover various driving behaviors.
Step 202: define $Y_p \in Z$ as the pixel location of the $p$-th key point, where $Z$ is the set of all positions $(u, v)$ in the image; the goal of human key-point localization is to predict the positions $Y = (Y_1, \ldots, Y_P)$ of all key points in the image; the pose machine consists of a sequence of multi-class classifiers $g_t(\cdot)$.
At each stage $t \in \{1, \ldots, T\}$ and for each candidate key-point position $Y_p = z$, $z \in Z$, the classifier $g_t(\cdot)$ outputs a confidence value based on the feature $\mathbf{x}_z$ extracted at image position $z$ and on contextual information about $Y_p$ from the output of the previous-stage classifier; in particular, for the first stage ($t = 1$) the confidence output by classifier $g_1(\cdot)$ is given by formula (1):

$$g_1(\mathbf{x}_z) \rightarrow \{b_1^p(Y_p = z)\}_{p \in \{0, \ldots, P\}} \tag{1}$$

where $b_1^p(Y_p = z)$ is the score that classifier $g_1$ assigns in stage 1 to the $p$-th key point being at image position $z$;
for each position $z = (u, v)^T$ in the image, the confidence values of key point $p$ form a map $\mathbf{b}_t^p \in \mathbb{R}^{w \times h}$, where $w$ and $h$ are the width and height of the image, so that

$$\mathbf{b}_t^p[u, v] = b_t^p(Y_p = z); \tag{2}$$

the set of confidence maps of all human key points is denoted $\mathbf{b}_t \in \mathbb{R}^{w \times h \times (P+1)}$, where "$P + 1$" stands for the $P$ human key points plus the background;
in the subsequent stages, for key point $p$ the classifier outputs a confidence value based on the feature information $\mathbf{x}'_z$ of the input image and on the contextual features derived from the previous stage's confidence maps of formula (2), as shown in formula (3):

$$g_t\big(\mathbf{x}'_z,\ \psi_t(z, \mathbf{b}_{t-1})\big) \rightarrow \{b_t^p(Y_p = z)\}_{p \in \{0, \ldots, P\}}, \quad t > 1 \tag{3}$$

where $\psi_{t>1}(\cdot)$ is a mapping from the confidence maps $\mathbf{b}_{t-1}$ to context features, and $\mathbf{x}'$ is an image feature; in the original pose machine architecture, $\mathbf{x} = \mathbf{x}'$.
Step 203: a half-body CPM network model is used; the model comprises the following four stages:
the first stage consists of a basic convolutional neural network, namely a white convolution module made up of 7 convolutional layers and 3 pooling layers; the module is fully convolutional, and its convolutional layers do not change the spatial size of the image; after the three pooling operations, the input image yields one response map per key point, P + 1 maps in total;
the second stage takes as input, through a cascade structure, the convolution results of the original image, the response maps output by stage one, and a center-constraint map generated from a Gaussian template; its final output is P + 1 response maps;
the third and fourth stages use the intermediate convolution results of the second stage instead of the convolution results of the original image; the rest remains unchanged.
Step 204: perform loss computation and error back-propagation at the output of each stage; at each scale, compute the response map of every key point and accumulate the maps into the final overall response map.
Step 205: obtain the position of each key point and draw the corresponding rectangular region around it.
Step 206: perform a forward pass of VGG-16 on the global image to obtain the corresponding feature vector, and use an RoI Pooling layer to map the key-point regions onto the feature map.
Step 207: concatenate the 5 key-point feature vectors with the global feature vector to form the final feature vector, classify it with softmax, and output the corresponding driving behavior.
3. The driver behavior recognition method based on multi-part human-body features according to claim 2, characterized in that the image feature $\mathbf{x}'$ used in the subsequent stages of step 202 differs from that of the first stage, and the classifiers $g_t(\cdot)$ use a random forest algorithm.
4. The driver behavior recognition method based on multi-part human-body features according to claim 2, characterized in that in step 203, when whole-body key points are to be detected, the third-stage structure is repeated.
5. The driver behavior recognition method based on multi-part human-body features according to claim 1, characterized in that step 3 specifically includes the training of the Convolutional Pose Machines model and the training of the VGG-16 model;
in the training of the Convolutional Pose Machines model, let $b_*^p(z)$ be the ground-truth response map of key point $p$ and $b_t^p(z)$ the response map output by the model at stage $t$; the loss function of each stage is then

$$f_t = \sum_{p=1}^{P+1} \sum_{z \in Z} \left\| b_t^p(z) - b_*^p(z) \right\|_2^2$$

and the total loss of the four stages is

$$F = \sum_{t=1}^{4} f_t;$$

the training of VGG-16 minimizes the softmax loss according to the behavior labels of the samples; let $P(\alpha \mid I, r)$ be the probability output by softmax that the driving behavior belongs to class $\alpha$; then for a batch of training samples the loss function is

$$L = -\frac{1}{M} \sum_{i=1}^{M} \log P(l_i \mid I_i, r_i)$$

where $l_i$ is the correct behavior label of image $I_i$ and $M$ is the batch size.
6. The driver behavior recognition method based on multi-part human-body features according to claim 1, characterized in that when the VGG-16 of step 3 is trained, its parameters are initialized with a model pre-trained on the ImageNet-1K dataset.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910483000.4A (CN110298257B) | 2019-06-04 | 2019-06-04 | Driver behavior recognition method based on human body multi-part characteristics |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110298257A (en) | 2019-10-01 |
CN110298257B CN110298257B (en) | 2023-08-01 |
Family
ID=68027575
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910483000.4A (CN110298257B, Active) | Driver behavior recognition method based on human body multi-part characteristics | 2019-06-04 | 2019-06-04 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110298257B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108309311A (en) * | 2018-03-27 | 2018-07-24 | 北京华纵科技有限公司 | A kind of real-time doze of train driver sleeps detection device and detection algorithm |
CN109117719A (en) * | 2018-07-02 | 2019-01-01 | 东南大学 | Driving gesture recognition method based on local deformable partial model fusion feature |
CN109543627A (en) * | 2018-11-27 | 2019-03-29 | 西安电子科技大学 | A kind of method, apparatus and computer equipment judging driving behavior classification |
Non-Patent Citations (1)
Title |
---|
YUANZHOUHAN CAO等: "Leveraging Convolutional Pose Machines for Fast and Accurate Head Pose Estimation", 《2018 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS》 * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111259719A (en) * | 2019-10-28 | 2020-06-09 | 浙江零跑科技有限公司 | Cab scene analysis method based on multi-view infrared vision system |
CN111259719B (en) * | 2019-10-28 | 2023-08-25 | 浙江零跑科技股份有限公司 | Cab scene analysis method based on multi-view infrared vision system |
CN111160162A (en) * | 2019-12-18 | 2020-05-15 | 江苏比特达信息技术有限公司 | Cascaded estimation method for human body posture of driver |
CN111160162B (en) * | 2019-12-18 | 2023-04-18 | 江苏比特达信息技术有限公司 | Cascaded driver human body posture estimation method |
CN111717210A (en) * | 2020-06-01 | 2020-09-29 | 重庆大学 | Detection method for separation of driver from steering wheel in relative static state of hands |
CN111832526A (en) * | 2020-07-23 | 2020-10-27 | 浙江蓝卓工业互联网信息技术有限公司 | Behavior detection method and device |
CN113569817A (en) * | 2021-09-23 | 2021-10-29 | 山东建筑大学 | Driver attention dispersion detection method based on image area positioning mechanism |
CN117523664A (en) * | 2023-11-13 | 2024-02-06 | 书行科技(北京)有限公司 | Training method of human motion prediction model, related method and related product |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110298257A | Driving behavior recognition method based on human body multi-part features | |
WO2020181685A1 | Vehicle-mounted video target detection method based on deep learning | |
EP3767522A1 | Image recognition method and apparatus, and terminal and storage medium | |
CN109118055A | Driving behavior scoring method and device | |
CN107679491A | 3D convolutional neural network sign language recognition method fusing multimodal data | |
CN106372571A | Road traffic sign detection and recognition method | |
CN107609481A | Method, apparatus and computer-readable storage medium for generating training data for face recognition | |
CN107481188A | Image super-resolution reconstruction method | |
CN110532925B | Driver fatigue detection method based on space-time graph convolutional network | |
CN105160299A | Facial emotion recognition method based on a Bayes-fusion sparse representation classifier | |
CN111091044B | In-vehicle dangerous scene recognition method for online ride-hailing | |
CN107103279A | Deep-learning-based passenger flow counting method under a vertical viewing angle | |
CN110309723A | Driving behavior recognition method based on fine-grained classification of human body features | |
CN110458494A | Unmanned aerial vehicle logistics delivery method and system | |
CN112733802B | Image occlusion detection method and device, electronic equipment and storage medium | |
CN110363093A | Driver action recognition method and device | |
Cheng et al. | Modeling mixed traffic in shared space using LSTM with probability density mapping | |
CN109767298A | Passenger-driver safety matching method and system | |
Chien et al. | Deep learning based driver smoking behavior detection for driving safety | |
CN105893942A | Adaptive HMM sign language recognition method based on eSC and HOG | |
CN109886338A | Intelligent vehicle test image annotation method, device, system and storage medium | |
CN112052829B | Pilot behavior monitoring method based on deep learning | |
CN109919194A | Method, system, terminal and storage medium for identifying persons in an event | |
CN107103313A | Casualty insurance premium payment method and device for high-risk persons using face recognition | |
CN116704585A | Face recognition method based on quality perception |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||