CN112668553A - Method, device, medium and equipment for detecting discontinuous observation behavior of driver - Google Patents

Method, device, medium and equipment for detecting discontinuous observation behavior of driver

Info

Publication number
CN112668553A
CN112668553A (application CN202110061533.0A)
Authority
CN
China
Prior art keywords
driver
offset
coordinates
lookout
intermittent
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110061533.0A
Other languages
Chinese (zh)
Other versions
CN112668553B (en)
Inventor
梁帆 (Liang Fan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Prophet Big Data Co ltd
Original Assignee
Dongguan Prophet Big Data Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongguan Prophet Big Data Co ltd
Priority to CN202110061533.0A
Publication of CN112668553A
Application granted
Publication of CN112668553B
Legal status: Active

Abstract

The invention relates to the technical field of computer vision and provides, in one aspect, a method for detecting intermittent lookout behavior of a driver, comprising the following detection steps: acquiring continuous, dynamic multi-frame facial images of a driver during driving in a detection scene; preprocessing the facial images and acquiring the coordinates of a plurality of key points of each facial image; converting the two-dimensional facial key point coordinates into three-dimensional space coordinates; calculating the driver's offset degree in the vertical and horizontal directions from the three-dimensional space coordinates of each frame of the facial image, wherein the vertical and horizontal offsets are measured relative to the facial posture of standard driving; and performing anomaly analysis on the offset degrees of each frame in the two directions to obtain abnormal offset time sets in the vertical and horizontal directions. The invention detects the intermittent lookout behavior of the driver during driving by means of artificial intelligence, thereby improving the accuracy and efficiency of real-time detection of the driver's driving standard compliance.

Description

Method, device, medium and equipment for detecting discontinuous observation behavior of driver
Technical Field
The invention relates to the technical field of computer vision, and in particular to a method for detecting intermittent lookout behavior of a driver.
Background
Uninterrupted lookout while driving is one of the most basic operating standards for train and subway drivers: the driver is required to monitor the track ahead at all times. However, a rail train driver spends most of the time operating on the rails, where the scene ahead barely changes, making it difficult to stay focused and keep looking out, so actions such as lowering or turning the head occur. These actions interrupt the lookout and pose a potential safety hazard to train operation; in traffic accidents caused by rail train collisions, most arise because the driver looked away intermittently and failed to take braking measures in time.
The driver's normal viewing angle during lookout corresponds to the scene through the cab window, and the driver's head moves only within a certain range in the vertical and horizontal directions. If the driver's head deviates beyond this normal range for a long time, the driver is not concentrating on observing the road conditions ahead and a traffic accident can easily occur, so a timely early warning is needed to ensure the safety of the driver and others.
Disclosure of Invention
Technical problem solved
In view of the deficiencies of the prior art, the invention provides a method for detecting intermittent lookout behavior of a driver, which can detect the intermittent lookout behavior of a driver during driving by means of artificial intelligence and improve the accuracy and efficiency of real-time detection of the driver's driving standard compliance.
Technical scheme
In order to achieve the purpose, the invention is realized by the following technical scheme:
the invention provides a method for detecting the intermittent lookout behavior of a driver, which comprises the following detection steps:
acquiring continuous and dynamic multi-frame facial images in the driving process of a driver in a detection scene;
preprocessing the face image and acquiring a plurality of key point coordinates of the face image;
converting the two-dimensional face key point coordinates into three-dimensional space coordinates;
calculating the driver's offset degree in the vertical and horizontal directions from the three-dimensional space coordinates of each frame of the facial image, wherein the vertical and horizontal offsets are measured relative to the facial posture of standard driving;
and performing anomaly analysis on the offset degrees of each frame of facial image in the vertical and horizontal directions to obtain abnormal offset time sets in the vertical and horizontal directions.
Another aspect of the present invention provides a device for detecting an intermittent lookout behavior of a driver, including:
the acquisition module is used for continuously and dynamically acquiring multi-frame facial images in the driving process of a driver in the detection environment;
the image processing module is used for carrying out image processing on the acquired driver face image;
a comparison module for comparing the detected driver's facial key point coordinates with the standard driver coordinates; and
an operation processing module for converting the key point coordinates from two dimensions to three-dimensional space coordinates, calculating the driver's offset degree in the vertical and horizontal directions, and performing anomaly analysis.
Another aspect of the present invention provides a medium on which a computer program is stored; when the computer program is executed by a processor, the method for detecting intermittent lookout behavior of a driver according to any one of the embodiments is performed.
Another aspect of the invention provides an apparatus comprising at least one memory for storing one or more programs;
one or more processors which, when executing the one or more programs, implement the method for detecting intermittent lookout behavior of a driver according to any one of the embodiments;
and the processor controls a warning component to give an alarm when the time periods corresponding to the elements of the two abnormal offset sequence sets are output.
Advantageous effects
The invention provides a method for detecting intermittent lookout behavior of a driver, which has the following beneficial effects compared with the prior art:
1. The method collects dynamic facial information of the driver during driving, converts the two-dimensional coordinates of the facial image key points into three-dimensional space coordinates, compares them with the driver's standard head coordinates, calculates the driver's offset degree in the vertical and horizontal directions, and performs anomaly analysis on the offset degree of each frame of facial image in the two directions to obtain abnormal offset time sets in the vertical and horizontal directions, so that the driver can conveniently be warned in time during abnormal offset periods through warning equipment, ensuring the safety of the driver and others. The method detects the intermittent lookout behavior of the driver during driving by means of artificial intelligence and improves the accuracy and efficiency of real-time detection of the driver's driving standard compliance.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart of the steps of the detection method of the present invention;
FIG. 2 is a schematic diagram of the transformation of two-dimensional face key point coordinates into three-dimensional space coordinates according to the present invention;
FIG. 3 is a schematic diagram of the timing anomaly diagnosis of the vertical offset state sequence in the video according to the present invention;
FIG. 4 is a schematic diagram of the timing anomaly diagnosis of the horizontal offset state sequence in the video according to the present invention;
FIG. 5 is a graph illustrating a vertical offset score according to the present invention;
FIG. 6 is a graphical illustration of the horizontal offset score according to the present invention;
FIG. 7 is a schematic diagram of the vertical offset anomaly set of the present invention;
FIG. 8 is a schematic diagram of the horizontal direction offset anomaly set according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Embodiment:
Referring to FIG. 1, the invention provides a method for detecting intermittent lookout behavior of a driver, comprising the following detection steps:
step 1: acquiring continuous and dynamic multi-frame facial images in the driving process of a driver in a detection scene;
step 2: preprocessing the face image and acquiring a plurality of key point coordinates of the face image;
Step 3: converting the two-dimensional facial key point coordinates into three-dimensional space coordinates;
Step 4: calculating the driver's offset degree in the vertical and horizontal directions from the three-dimensional space coordinates of each frame of the facial image, wherein the vertical and horizontal offsets are measured relative to the facial posture of standard driving;
Step 5: performing anomaly analysis on the offset degrees of each frame of facial image in the vertical and horizontal directions to obtain abnormal offset time sets in the vertical and horizontal directions.
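Purely as an illustration of how these five steps might be organized in software, the following Python sketch outlines the pipeline; all function names, data shapes and return values are assumptions made for this outline and are not taken from the patent.

```python
# Illustrative outline of the five detection steps; all names and shapes
# are assumptions for this sketch, not the patent's implementation.
import numpy as np

def acquire_frames(video_frames):
    """Step 1: continuous, dynamic multi-frame facial images of the driver."""
    return list(video_frames)

def extract_keypoints_2d(frame):
    """Step 2: preprocess the face image and return the 2D coordinates of the
    four facial key points (two ears, eyebrow center, nose tip). Stubbed."""
    return np.zeros((4, 2))

def to_3d(keypoints_2d):
    """Step 3: convert the 2D key point coordinates to 3D space coordinates.
    Stubbed; the real conversion is described in steps 31-33."""
    return np.hstack([keypoints_2d, np.zeros((4, 1))])

def offset_degrees(keypoints_3d):
    """Step 4: vertical and horizontal offset degrees c1, c2 relative to the
    standard driving facial posture. Stubbed."""
    return 0.0, 0.0

def anomaly_intervals(c1_seq, c2_seq):
    """Step 5: time-series anomaly analysis of the per-frame offset degrees,
    returning the abnormal offset time sets for the two directions. Stubbed."""
    return [], []

def detect_intermittent_lookout(video_frames):
    c1_seq, c2_seq = [], []
    for frame in acquire_frames(video_frames):
        kp3d = to_3d(extract_keypoints_2d(frame))
        c1, c2 = offset_degrees(kp3d)
        c1_seq.append(c1)
        c2_seq.append(c2)
    return anomaly_intervals(np.array(c1_seq), np.array(c2_seq))
```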
In the detection scene, facial information of the driver while driving is collected as training samples, and scene training is performed to obtain a CNN training model;
the CNN training model is provided with the standard driver center coordinates and two perpendicular standard vector coordinates of the projection plane for normal driving.
The scene training specifically comprises the following steps:
Based on the detection scene, the coordinates of four facial key points (the two ears, the eyebrow center and the nose tip) are recorded while the driver drives normally; they are, in order, (xb1, yb1, zb1), (xb2, yb2, zb2), (xb3, yb3, zb3), (xb4, yb4, zb4).
The standard face center coordinate is O_std = (x_std, y_std, z_std), where
[formula image: computation of O_std from the four standard key point coordinates]
Standard face support plane equation (if the three points (xb1, yb1, zb1), (xb2, yb2, zb2), (xb4, yb4, zb4) are collinear, then set xb4 = xb4 + ε with 0 < ε << 1):
Ax+By+Cz+D=0
where
A=yb1(zb2-zb4)+yb2(zb4-zb1)+yb4(zb1-zb2)
B=zb1(xb2-xb4)+zb2(xb4-xb1)+zb4(xb1-xb2)
C=xb1(yb2-yb4)+xb2(yb4-yb1)+xb4(yb1-yb2)
D=-xb1(yb2zb4-yb4zb2)-xb2(yb4zb1-yb1zb4)-xb4(yb1zb2-yb2zb1)
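The plane coefficients above follow directly from the three key points. The Python sketch below (illustrative only; the function name, argument layout and the concrete value of ε are assumptions) applies these formulas, including the prescribed ε perturbation when the three points are collinear:

```python
import numpy as np

def face_support_plane(b1, b2, b4, eps=1e-6):
    """Compute A, B, C, D of the standard face support plane
    Ax + By + Cz + D = 0 from key points b1, b2, b4 (each an (x, y, z) triple),
    using the formulas given above. If the three points are collinear, perturb
    xb4 slightly (xb4 = xb4 + eps, 0 < eps << 1) as the description prescribes."""
    b1, b2, b4 = map(np.asarray, (b1, b2, b4))
    # Collinearity check: the cross product of the two edge vectors is ~zero.
    if np.allclose(np.cross(b2 - b1, b4 - b1), 0.0):
        b4 = b4.copy()
        b4[0] += eps
    (xb1, yb1, zb1), (xb2, yb2, zb2), (xb4, yb4, zb4) = b1, b2, b4
    A = yb1 * (zb2 - zb4) + yb2 * (zb4 - zb1) + yb4 * (zb1 - zb2)
    B = zb1 * (xb2 - xb4) + zb2 * (xb4 - xb1) + zb4 * (xb1 - xb2)
    C = xb1 * (yb2 - yb4) + xb2 * (yb4 - yb1) + xb4 * (yb1 - yb2)
    D = (-xb1 * (yb2 * zb4 - yb4 * zb2)
         - xb2 * (yb4 * zb1 - yb1 * zb4)
         - xb4 * (yb1 * zb2 - yb2 * zb1))
    return A, B, C, D
```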
Two perpendicular standard vector coordinates of the video projection plane are also recorded
[formula image: the two perpendicular standard vectors of the projection plane]
together with an origin coordinate o = (xo, yo, zo) and the normal vector of the plane
[formula image: the normal vector of the projection plane]
In the present embodiment, the step of acquiring coordinates of a plurality of key points of a face image is as follows:
step 21: collecting a driving video of a driver through a vehicle-mounted camera so as to obtain continuous and dynamic multi-frame facial images of the driver during driving of the automobile;
step 22: adopting a bilateral filter to preprocess the face image;
step 23: and then, acquiring two-dimensional coordinates of four face key points of double ears, eyebrow center and nose tip of the face of the driver by using the trained CNN model.
In step 23, the coordinates of the four facial key points (the two ears, the eyebrow center and the nose tip) of the driver's face are obtained with the trained CNN model; they are, in order, (x1, y1), (x2, y2), (x3, y3), (x4, y4). (Refer to FIG. 2.)
The BF filtering result of the bilateral filter is as follows:
BF(I)q = (1/Wq) · Σp∈S Gσs(‖p - q‖) · Gσr(|Ip - Iq|) · Ip
where
Wq = Σp∈S Gσs(‖p - q‖) · Gσr(|Ip - Iq|)
is the sum of the weights of each pixel value within the filtering window, used to normalize the weights; S is the filtering window and Gσs, Gσr are the spatial and range Gaussian kernels. The CNN comprises four convolutional layers, its activation function is the ReLU function, and its loss functions are the L1-norm and L2-norm loss functions.
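As a hedged illustration of steps 22 and 23, the sketch below uses OpenCV's bilateralFilter as a stand-in for the bilateral filtering step and leaves the key point CNN as a stub, since the patent's trained network and its parameters are not reproduced here; the filter parameter values are common defaults, not values from the patent.

```python
# Hedged sketch of steps 22-23; filter parameters and the key point model
# interface are assumptions.
import cv2
import numpy as np

def preprocess_face(frame_bgr, d=9, sigma_color=75, sigma_space=75):
    """Step 22: denoise the face image with a bilateral filter (edge-preserving
    smoothing). The parameter values here are common OpenCV defaults, not
    values taken from the patent."""
    return cv2.bilateralFilter(frame_bgr, d, sigma_color, sigma_space)

def cnn_keypoints_2d(face_img, keypoint_model):
    """Step 23: obtain the 2D coordinates of the four facial key points
    (left ear, right ear, eyebrow center, nose tip) from a trained CNN.
    `keypoint_model` is any callable returning 8 numbers; the patent's own
    four-convolutional-layer ReLU network is not reproduced here."""
    return np.asarray(keypoint_model(face_img), dtype=float).reshape(4, 2)

# Usage sketch:
# filtered = preprocess_face(frame)
# pts_2d = cnn_keypoints_2d(filtered, keypoint_model=my_trained_model)
```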
In step 3 (referring to FIG. 2), converting the two-dimensional facial key point coordinates into three-dimensional space coordinates comprises:
Step 31: calculating the projection position coordinates of the detected driver's head at the same scale as the standard driver;
Step 32: comparing the obtained projection position coordinates with the standard coordinates, unifying the head distance depth of the detected driver and the standard driver, and obtaining updated projection position coordinates;
Step 33: performing a position translation calculation on the updated projection position coordinates and the standard vector coordinates so that the head centers of the detected driver and the standard driver coincide, obtaining the final three-dimensional space coordinates of the detected driver's facial key points.
In step 31, the projected position coordinates of the detected driver's head on the same scale as the standard driver's head size are calculated:
The coordinates of the key points in the three-dimensional space where the projection plane is located are obtained by calculation as n1, n2, n3, n4:
[formula image: the coordinates n1, n2, n3, n4 of the key points on the projection plane]
The actual coordinates of the key points in the three-dimensional space are then calculated as:
m1 = k1·c + (1 - k1)·n1, m2 = k2·c + (1 - k2)·n2, m3 = k3·c + (1 - k3)·n3, m4 = k4·c + (1 - k4)·n4
where ki ∈ [0, 1] (i = 1, 2, 3, 4) are the projection scale coefficients, and the camera position coordinate is c = (xc, yc, zc).
Then, a segmentation parameter h is set to obtain a projection scale coefficient set of four key points
Figure BDA0002902845310000071
Figure BDA0002902845310000072
Wherein i1,i2,i3,i4∈[0,h]and i1,i2,i3,i4∈N。
For each element KP = (k1, k2, k3, k4) in K, the face frame distances d14, d24, d34, d12 are calculated as
dij = |mi - mj|
and the corresponding frame distance error εP is then calculated as
εP = ε14 + ε24 + ε34 + ε12, where εij = |dij - lij|.
The scale coefficient KP corresponding to the minimum value of {εP} is taken, and the coordinates of the four key points in three-dimensional space are calculated as m1 = (mx1, my1, mz1), m2 = (mx2, my2, mz2), m3 = (mx3, my3, mz3), m4 = (mx4, my4, mz4).
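One way to realize the scale-coefficient search described above is a brute-force grid search over the discretized set K, picking the candidate with the smallest frame distance error εP. The sketch below is an assumption-laden illustration: the function name, the value of h, and the interpretation of lij as the standard driver's inter-key-point distances are all assumptions.

```python
import itertools
import numpy as np

def solve_scale_coefficients(n_pts, cam_c, std_dists, h=20):
    """Grid-search the projection scale coefficients (k1..k4).
    n_pts: (4, 3) key point coordinates on the projection plane (n1..n4).
    cam_c: camera position coordinate c = (xc, yc, zc).
    std_dists: dict of standard frame distances lij for the index pairs
               (0, 3), (1, 3), (2, 3), (0, 1); assumed to be measured from the
               standard driver's key points.
    h: segmentation parameter controlling the grid resolution (assumed value).
    Returns the best 3D key point coordinates m1..m4 as a (4, 3) array."""
    n_pts = np.asarray(n_pts, float)
    cam_c = np.asarray(cam_c, float)
    pairs = [(0, 3), (1, 3), (2, 3), (0, 1)]       # d14, d24, d34, d12
    grid = np.arange(h + 1) / h                    # ki = i/h, i = 0..h
    best_err, best_m = np.inf, None
    for ks in itertools.product(grid, repeat=4):
        # mi = ki * c + (1 - ki) * ni
        m = np.array([k * cam_c + (1 - k) * n for k, n in zip(ks, n_pts)])
        err = sum(abs(np.linalg.norm(m[i] - m[j]) - std_dists[(i, j)])
                  for i, j in pairs)
        if err < best_err:
            best_err, best_m = err, m
    return best_m
```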
In step 32, the obtained projection position coordinates are compared with the standard coordinates, the head distance depths of the detected driver and the standard driver are unified, and updated projection position coordinates are obtained:
The coordinates g1, g2, g3, g4 of the four key points in three-dimensional space after depth unification are calculated as
gi = mi + kg·(xf, yf, zf)
[formula images: the definitions of kg and (xf, yf, zf)]
where Cg is a fluctuation parameter, an empirical constant obtained by training on historical data.
In step 33, a position translation calculation is performed on the updated projection position coordinates and the standard coordinates so that the head centers of the detected driver and the standard driver coincide, giving the final coordinates of the detected driver's facial key points. After unifying the head center, the updated coordinates of the four key points in three-dimensional space are h1 = (hx1, hy1, hz1), h2 = (hx2, hy2, hz2), h3 = (hx3, hy3, hz3), h4 = (hx4, hy4, hz4):
hi = gi + [formula image: the translation that aligns the detected head center with the standard face center]
In step 4, the driver's offset degree in the vertical and horizontal directions is calculated from the three-dimensional space coordinates of the facial key points.
The vertical offset degree c1 and the horizontal offset degree c2 are calculated as
[formula images: the definitions of c1 and c2 in terms of the vectors w, u and v below]
where
wx=xh3-xB3,wy=yh3-yB3,wz=zh3-zB3
ux=xB3-xb4,uy=yB3-yb4,uz=zB3-zb4,
vx=xb2-xb1,vy=yb2-yb1,vz=zb2-zb1
and xB3 = xb3 - At, yB3 = yb3 - Bt, zB3 = zb3 - Ct, with
t = (A·xb3 + B·yb3 + C·zb3 + D) / (A² + B² + C²),
so that B3 is the projection of the standard key point b3 onto the plane Ax + By + Cz + D = 0,
and xh3 = hx3 - At′, yh3 = hy3 - Bt′, zh3 = hz3 - Ct′, with
t′ = (A·hx3 + B·hy3 + C·hz3 + D) / (A² + B² + C²),
i.e. the projection of the detected key point h3 onto the same plane.
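The exact expressions for c1 and c2 appear only as formula images in the source text and are not reproduced above. Purely as one plausible, hedged reading, the sketch below computes the vectors w, u and v exactly as defined and then uses a normalized (cosine-style) projection of w onto u and v as the offset degrees; this normalization is an assumption, not the patent's formula.

```python
import numpy as np

def offset_degrees(h3, B3, b1, b2, b4):
    """Illustrative computation of the vertical/horizontal offset degrees.
    h3, B3: detected and standard eyebrow-center points projected onto the
            face support plane (as in the definition of w above).
    b1, b2: the two ear key points; b4: the nose-tip key point.
    The cosine-style normalization below is an assumption, not the patent's
    image-only formula."""
    h3, B3, b1, b2, b4 = map(np.asarray, (h3, B3, b1, b2, b4))
    w = h3 - B3          # offset of the detected point from the standard point
    u = B3 - b4          # vertical facial axis (nose tip -> eyebrow center)
    v = b2 - b1          # horizontal facial axis (ear -> ear)
    eps = 1e-9           # avoid division by zero for a zero offset
    c1 = float(np.dot(w, u) / (np.linalg.norm(w) * np.linalg.norm(u) + eps))
    c2 = float(np.dot(w, v) / (np.linalg.norm(w) * np.linalg.norm(v) + eps))
    return c1, c2
```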
In step 5, the step of acquiring the time set of the abnormal offset in the vertical direction and the horizontal direction comprises the following steps:
step 51: generating two groups of offset state sequences according to the offset degree of each frame of facial image in the vertical direction and the horizontal direction;
step 52: carrying out time sequence threshold value screening processing on the two groups of offset state sequences;
step 53: classifying the two groups of sequences after screening to obtain two classification sets, and calculating the offset degree score of each element of the two classification sets;
step 54: and carrying out abnormality diagnosis according to the score condition to obtain an abnormal offset time set in the vertical direction and the horizontal direction.
Specifically, in step 51, time-series anomaly diagnosis is performed on the offset state sequences (see FIGS. 3 and 4) to obtain the abnormal offset time sets in the vertical and horizontal directions in the video, i.e. the driver's intermittent lookout time set. Time-series threshold screening is then performed on the two groups of driver offset state sequences in the video:
The time sequence of the driver's offset states in the video is identified; the two state sequences of vertical offset Ur and horizontal offset Le are each smoothed with Kalman filtering, and the two processed state sequences are screened with thresholds to obtain two updated sequences Ur′ and Le′ (see FIGS. 5 and 6):
[formula images: the threshold screening that produces Ur′ from Ur and Le′ from Le]
where μ1 and μ2 are the vertical offset threshold and the horizontal offset threshold respectively, empirical constants obtained by training on historical data. The vertical offset threshold is the maximum absolute value of the offset when the rail train driver's line of sight is at the upper or lower edge of the train window. The horizontal offset threshold is the maximum absolute value of the offset when the driver's line of sight is at the left or right edge of the train window. In this example, μ1 = 0.26 and μ2 = 0.2.
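A minimal sketch of the smoothing and threshold screening, assuming a simple one-dimensional constant-state Kalman filter (the patent only states that Kalman filtering is used, without parameters) and assuming that screening zeroes out frames whose absolute smoothed offset does not exceed the threshold:

```python
import numpy as np

def kalman_smooth_1d(seq, process_var=1e-3, meas_var=1e-2):
    """Smooth a 1D offset-state sequence with a simple constant-state Kalman
    filter. The noise variances are illustrative assumptions."""
    x, p = float(seq[0]), 1.0
    out = []
    for z in seq:
        p += process_var                 # predict
        k = p / (p + meas_var)           # Kalman gain
        x += k * (z - x)                 # update with measurement z
        p *= (1.0 - k)
        out.append(x)
    return np.array(out)

def threshold_screen(seq, mu):
    """Keep only frames whose absolute smoothed offset exceeds the threshold mu
    (one plausible reading of the image-only screening formula); values at or
    below the threshold are zeroed out."""
    seq = np.asarray(seq, float)
    return np.where(np.abs(seq) > mu, seq, 0.0)

# Example with the thresholds quoted in this embodiment (mu1 = 0.26, mu2 = 0.2):
# ur_prime = threshold_screen(kalman_smooth_1d(ur), 0.26)
# le_prime = threshold_screen(kalman_smooth_1d(le), 0.20)
```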
Classifying the two groups of sequences after screening to obtain two classification sets, and calculating the deviation degree score of each element of the two classification sets:
K-means clustering is applied to the sequences Ur′ and Le′, dividing the two groups of sequences into n1 and n2 parts respectively according to the cluster centers, which gives the two classification sets {Uri} and {Lei} (see the shaded parts of FIGS. 5 and 6).
The offset degree score of each element in the classification sets is calculated separately:
G1(Uri) = cUr · max(Uri) · Σu Uri(u)
G2(Lei) = cLe · max(Lei) · Σv Lei(v)
where cUr and cLe are the vertical and horizontal calculation constants respectively, obtained by training on historical data.
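A hedged sketch of the classification and scoring step: it assumes that k-means is run on the frame indices of the above-threshold samples to split each screened sequence into segments, and scores each segment as G = c · max(segment) · Σ segment following the score formulas above; the use of scikit-learn's KMeans and the clustering of frame indices are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_and_score(seq_prime, n_parts, c_const):
    """Cluster the above-threshold frames of a screened sequence into n_parts
    segments with k-means on the frame index (one plausible reading of
    "dividing the sequences according to the clustering center"), then score
    each segment as G = c_const * max(segment) * sum(segment)."""
    seq_prime = np.asarray(seq_prime, float)
    idx = np.flatnonzero(seq_prime)              # frames with nonzero offset
    if len(idx) == 0:
        return []
    n_parts = min(n_parts, len(idx))
    labels = KMeans(n_clusters=n_parts, n_init=10).fit_predict(
        idx.reshape(-1, 1).astype(float))
    segments = []
    for lab in range(n_parts):
        frames = idx[labels == lab]
        vals = seq_prime[frames]
        score = c_const * vals.max() * vals.sum()
        segments.append((frames.min(), frames.max(), score))
    return segments  # list of (start_frame, end_frame, offset-degree score)
```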
Abnormality diagnosis is performed according to the scores to obtain the abnormal offset time sets in the vertical and horizontal directions:
Threshold screening is performed on the offset scores of the two groups of sequences to obtain the offset abnormal sequence sets
[formula images: threshold screening of the scores G1 and G2 against ν1 and ν2]
where ν1 and ν2 are the vertical anomaly threshold and the horizontal anomaly threshold respectively, empirical constants obtained by training on historical data; a result of 1 indicates an anomaly and 0 indicates normal. In this example, ν1 = 20 and ν2 = 17.
For the two offset abnormal sequence sets, the time periods corresponding to their elements are output, giving the vertical direction offset anomaly set and the horizontal direction offset anomaly set (see FIGS. 7 and 8). In this example the video frame rate is 25 frames/s, and time t = frame number / video frame rate.
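Finally, a small sketch converting the scored segments into abnormal offset time periods using the stated relation t = frame number / video frame rate (25 frames/s in this example) and the example anomaly thresholds ν1 = 20 and ν2 = 17; the segment representation carries over from the previous sketch and is an assumption.

```python
def abnormal_time_periods(segments, score_threshold, fps=25.0):
    """Turn scored segments (start_frame, end_frame, score) into abnormal
    offset time periods in seconds, keeping only segments whose score exceeds
    the anomaly threshold (result 1 = abnormal, 0 = normal in the text).
    Uses t = frame number / video frame rate, 25 frames/s in this example."""
    return [(start / fps, end / fps)
            for start, end, score in segments
            if score > score_threshold]

# Example with the thresholds quoted in this embodiment:
# vertical_anomalies   = abnormal_time_periods(ur_segments, 20)   # nu1 = 20
# horizontal_anomalies = abnormal_time_periods(le_segments, 17)   # nu2 = 17
```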
By collecting the driver's dynamic facial information during driving, converting the two-dimensional coordinates of the facial image key points into three-dimensional space coordinates, comparing them with the driver's standard head coordinates, calculating the driver's offset degree in the vertical and horizontal directions, and performing anomaly analysis on the offset degree of each frame of facial image in the two directions, this scheme obtains the abnormal offset time sets in the vertical and horizontal directions, so that the driver can conveniently be warned in time during abnormal offset periods through warning equipment, ensuring the safety of the driver and others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for detecting intermittent lookout behavior of a driver, characterized by comprising the following detection steps:
acquiring continuous and dynamic multi-frame facial images in the driving process of a driver in a detection scene;
preprocessing the face image and acquiring a plurality of key point coordinates of the face image;
converting the two-dimensional face key point coordinates into three-dimensional space coordinates;
calculating the driver's offset degree in the vertical and horizontal directions from the three-dimensional space coordinates of each frame of the facial image, wherein the vertical and horizontal offsets are measured relative to the facial posture of standard driving;
and performing anomaly analysis on the offset degrees of each frame of facial image in the vertical and horizontal directions to obtain abnormal offset time sets in the vertical and horizontal directions.
2. The method for detecting the intermittent lookout behavior of the driver as claimed in claim 1, wherein in the detection scene, a CNN training model is obtained by collecting the facial information of the driver as a training sample and performing scene training;
the CNN training model is provided with a center coordinate of a standard driver and two vertical standard vector coordinates of a projection plane when the driver drives normally.
3. The method for detecting the intermittent lookout behavior of the driver as claimed in claim 1, wherein the step of acquiring coordinates of a plurality of key points of the face image is as follows:
collecting a driving video of a driver through a vehicle-mounted camera so as to obtain continuous and dynamic multi-frame facial images of the driver during driving of the automobile;
adopting a bilateral filter to preprocess the face image;
and then acquiring the two-dimensional coordinates of the four facial key points of the driver's face (the two ears, the eyebrow center and the nose tip) by using the trained CNN model.
4. The method for detecting the intermittent lookout behavior of the driver as claimed in claim 1, wherein the step of converting the coordinates of the key points of the two-dimensional face into the coordinates of the three-dimensional space comprises:
calculating the projection position coordinates of the detected driver's head at the same scale as the standard driver;
comparing the obtained projection position coordinates with the standard coordinates, unifying the head distance depth of the detected driver and the standard driver, and obtaining updated projection position coordinates;
and performing a position translation calculation on the updated projection position coordinates and the standard vector coordinates so that the head centers of the detected driver and the standard driver coincide, obtaining the final three-dimensional space coordinates of the detected driver's facial key points.
5. The method for detecting the intermittent lookout behavior of the driver as claimed in claim 1, wherein the step of acquiring the abnormal offset time sets in the vertical direction and the horizontal direction comprises the steps of:
generating two groups of offset state sequences according to the offset degree of each frame of facial image in the vertical direction and the horizontal direction;
carrying out time sequence threshold value screening processing on the two groups of offset state sequences;
classifying the two groups of sequences after screening to obtain two classification sets, and calculating the offset degree score of each element of the two classification sets;
and carrying out abnormality diagnosis according to the score condition to obtain an abnormal offset time set in the vertical direction and the horizontal direction.
6. The method for detecting the intermittent lookout behavior of the driver as claimed in claim 5, wherein the step of performing time sequence threshold screening processing on the two offset state sequences comprises the following steps:
carrying out time sequence identification on the offset state of a driver in a driving video;
using Kalman filtering to respectively carry out smoothing treatment on the two groups of state sequences of vertical offset and horizontal offset;
and carrying out threshold value screening on the two groups of processed state sequences to obtain two groups of updated sequences.
7. The method for detecting the intermittent lookout behavior of the driver as claimed in claim 5, wherein the step of performing the abnormality diagnosis according to the scores is as follows:
carrying out threshold value screening on the offset scores of the two groups of sequences to obtain an offset abnormal sequence set;
and outputting time periods corresponding to the two groups of abnormal offset sequence set elements respectively to obtain a vertical abnormal offset set and a horizontal abnormal offset set.
8. A device for detecting intermittent lookout behavior of a driver, characterized by comprising:
the acquisition module is used for continuously and dynamically acquiring multi-frame facial images in the driving process of a driver in the detection environment;
the image processing module is used for carrying out image processing on the acquired driver face image;
a comparison module for comparing the detected driver's facial key point coordinates with the standard driver coordinates; and
an operation processing module for converting the key point coordinates from two dimensions to three-dimensional space coordinates, calculating the driver's offset degree in the vertical and horizontal directions, and performing anomaly analysis.
9. A medium having stored thereon a computer program, which when executed by a processor performs a method of detecting intermittent lookout behavior of a driver as claimed in any one of claims 1 to 7.
10. An apparatus, comprising:
at least one memory for storing one or more programs;
one or more processors that, when executing the one or more programs, implement a method of detecting intermittent lookout behavior of a driver as recited in any one of claims 1-7;
wherein the processor controls a warning component to give an alarm when the time periods corresponding to the elements of the two abnormal offset sequence sets are output.
CN202110061533.0A 2021-01-18 2021-01-18 Method, device, medium and equipment for detecting discontinuous observation behavior of driver Active CN112668553B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110061533.0A CN112668553B (en) 2021-01-18 2021-01-18 Method, device, medium and equipment for detecting discontinuous observation behavior of driver

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110061533.0A CN112668553B (en) 2021-01-18 2021-01-18 Method, device, medium and equipment for detecting discontinuous observation behavior of driver

Publications (2)

Publication Number Publication Date
CN112668553A true CN112668553A (en) 2021-04-16
CN112668553B CN112668553B (en) 2022-05-13

Family

ID=75415506

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110061533.0A Active CN112668553B (en) 2021-01-18 2021-01-18 Method, device, medium and equipment for detecting discontinuous observation behavior of driver

Country Status (1)

Country Link
CN (1) CN112668553B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102393989A (en) * 2011-07-28 2012-03-28 山西智济电子科技有限公司 Real-time monitoring system of driver working state
US20190135295A1 (en) * 2017-11-09 2019-05-09 Toyota Jidosha Kabushiki Kaisha Driver condition detection system
US20190318151A1 (en) * 2018-04-13 2019-10-17 Omron Corporation Image analysis apparatus, method, and program
CN110414419A (en) * 2019-07-25 2019-11-05 四川长虹电器股份有限公司 A kind of posture detecting system and method based on mobile terminal viewer
CN111985403A (en) * 2020-08-20 2020-11-24 中再云图技术有限公司 Distracted driving detection method based on face posture estimation and sight line deviation
CN112149615A (en) * 2020-10-12 2020-12-29 平安科技(深圳)有限公司 Face living body detection method, device, medium and electronic equipment

Also Published As

Publication number Publication date
CN112668553B (en) 2022-05-13

Similar Documents

Publication Publication Date Title
Kumar et al. Driver drowsiness monitoring system using visual behaviour and machine learning
Alioua et al. Driver’s fatigue detection based on yawning extraction
CN109389806B (en) Fatigue driving detection early warning method, system and medium based on multi-information fusion
CN102696041B (en) The system and method that the cost benefit confirmed for eye tracking and driver drowsiness is high and sane
EP3355104B1 (en) Method and device and computer program for determining a representation of a spectacle glass rim
US20140204193A1 (en) Driver gaze detection system
Paone et al. Baseline face detection, head pose estimation, and coarse direction detection for facial data in the SHRP2 naturalistic driving study
Batista A drowsiness and point of attention monitoring system for driver vigilance
CN104616438A (en) Yawning action detection method for detecting fatigue driving
García et al. Driver monitoring based on low-cost 3-D sensors
CN104361332A (en) Human face eye region positioning method for fatigue driving detection
Ahmed et al. Robust driver fatigue recognition using image processing
Choi et al. Driver drowsiness detection based on multimodal using fusion of visual-feature and bio-signal
US10740633B2 (en) Human monitoring system incorporating calibration methodology
CN109711239B (en) Visual attention detection method based on improved mixed increment dynamic Bayesian network
CN104077568A (en) High-accuracy driver behavior recognition and monitoring method and system
CN103839055B (en) A kind of detection method in pilot's line of vision direction
CN114005167A (en) Remote sight estimation method and device based on human skeleton key points
CN113989788A (en) Fatigue detection method based on deep learning and multi-index fusion
CN108108651B (en) Method and system for detecting driver non-attentive driving based on video face analysis
CN115393830A (en) Fatigue driving detection method based on deep learning and facial features
CN115346197A (en) Driver distraction behavior identification method based on bidirectional video stream
CN112668553B (en) Method, device, medium and equipment for detecting discontinuous observation behavior of driver
CN112926364A (en) Head posture recognition method and system, automobile data recorder and intelligent cabin
CN114022918A (en) Multi-posture-based learner excitement state label algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Building 7, No. 124 Dongbao Road, Dongcheng Street, Dongguan City, Guangdong Province, 523128

Patentee after: Guangdong Prophet Big Data Co.,Ltd.

Country or region after: China

Address before: 523128 Room 401, building 6, No.5 Weifeng Road, Dongcheng Street, Dongguan City, Guangdong Province

Patentee before: Dongguan prophet big data Co.,Ltd.

Country or region before: China