CN114037979A - Lightweight driver fatigue state detection method - Google Patents
- Publication number: CN114037979A
- Application number: CN202111321068.6A
- Authority: CN (China)
- Prior art keywords: face, driver, mouth, fatigue state, detection
- Legal status: Pending (the legal status is an assumption, not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06T7/11: Region-based segmentation (under G06T7/00 Image analysis; G06T7/10 Segmentation; edge detection)
- G06T2207/10016: Video; image sequence (under G06T2207/00 Indexing scheme for image analysis or image enhancement; G06T2207/10 Image acquisition modality)
Abstract
The invention relates to the technical field of driver state recognition, and in particular discloses a lightweight driver fatigue state detection method. A lightweight face detection and screening network (based on the RetinaFace face detection algorithm) locates the driver's face within the camera's detection range and, at the same time, preliminarily locates the key position points: the centers of both eyes, the nose tip, and the left and right corners of the mouth. The driver's face image is then cropped from the adjusted image. Non-frontal facial states such as a turned or tilted face are detected from measures such as the Euclidean distances and angles between the eye centers and the nose tip; the images are uniformly aligned and horizontally corrected, then normalized so that all images share the same position scale. Finally, the feature regions are examined to determine the driver's fatigue state. Overall, the method achieves fast and accurate detection of the driver's fatigue state using a neural network with a small number of parameters.
Description
Technical Field
The invention relates to the technical field of driver state recognition, and in particular to a lightweight driver fatigue state detection method.
Background
Every year, traffic accidents cause enormous loss of life and property, and fatigue driving is one of their causes. During long drives, extended driving time and distance keep the driver in a continuous, high-intensity working state, which easily leads to fatigue. The vehicle cabin is also a relatively confined environment, and some drivers become fatigued under the influence of the cramped space. Road traffic safety law provides for license-point deductions and fines for fatigued driving, and criminal liability applies if a major traffic accident results. Developing a method that can rapidly identify fatigue driving is therefore of great significance.
Fatigue driving refers to driving in a drowsy or physically fatigued state. It can arise from many underlying causes, such as excessive sleepiness, sleep deprivation, circadian rhythm disruption from shift work, taking sedative drugs, and drinking alcohol while tired. A driver's actions in the moments immediately before an accident directly determine whether it occurs; reacting even half a second sooner can be enough to avoid it.
Current methods for detecting driver fatigue fall into contact and non-contact approaches. Contact detection uses physiological sensors to monitor changes in the driver's physiological indicators and to judge whether the driver has entered a fatigued state. Such methods correlate most strongly with fatigue, but existing contact methods both interfere with driving and are costly. Among non-contact approaches, vehicle-driving-characteristic detection can effectively detect driver fatigue, but it requires adding many sensors and other devices to the vehicle, making deployment complex and expensive. There are also methods based on driving behavior, such as steering-wheel behavior. The most widely studied approach is detection of the driver's facial fatigue features while driving, typically based on features such as eyelid closure, degree of yawning, and blink frequency over a period of time. Facial fatigue detection generally proceeds by first locating the driver's face with a convolutional neural network, then locating and classifying the eye and mouth regions within it, and finally judging fatigue from the state of those regions per unit time.
Although existing neural-network-based fatigue detection methods achieve good accuracy, the trained weight files are large and the models' parameter counts and computational costs are high, making them slow and unsuitable for mobile and embedded devices. Since the driver's face must be detected in real time, a lightweight model is needed.
Disclosure of Invention
The invention provides a lightweight driver fatigue state detection method, which solves the following technical problem: existing neural-network-based fatigue detection methods produce large weight files and have huge parameter counts and computational costs, making them unsuitable for real-time detection of the driver's face on vehicle-mounted embedded devices.
To solve the above technical problem, the present invention provides a lightweight driver fatigue state detection method comprising the steps of:
S1, acquiring in real time a face video of all people in the detection area;
S2, performing face detection and screening on the face video frame by frame using a lightweight face detection and screening network, so that only the driver's face key position points and face image frame are marked on the output image;
S3, cropping the driver's face image from the output image and performing face alignment and horizontal correction on it;
S4, normalizing the aligned and corrected driver face image so that all images are at the same position scale;
S5, extracting the required feature regions from the face key position points and the face image frame in the driver face image at the same position scale;
S6, determining the driver's fatigue state from changes in the feature regions over a set number of consecutive frames.
Further, in step S2, the face detection and screening of the face video is implemented by a face detection and screening network in which a face screening module is added to the RetinaFace face detection algorithm; the RetinaFace face detection algorithm detects the face region and marks the face key position points, and the face screening module removes the face regions of passengers other than the driver.
Further, the RetinaFace face detection algorithm performs feature extraction using depthwise separable convolution, which consists of a channel-wise (depthwise) convolution with 3 × 3 kernels and a point-wise convolution with 1 × 1 kernels.
Further, the face screening module keeps the face region with the largest area as the driver's face region and rejects the remaining face regions.
Further, in step S3, face alignment and horizontal correction of the driver face image are performed by affine transformation, and the driver face image is cropped from the output image using OpenCV.
Further, let (x_i, y_i)^T be the coordinates of the i-th feature point located on the face and (x'_i, y'_i)^T the coordinates of the same point after affine alignment. The affine transformation is expressed as the linear system

x'_i = a·x_i + b·y_i + e
y'_i = c·x_i + d·y_i + f

where [[a, b], [c, d]] is the transformation matrix and a, b, c, d, e, f are the affine transformation factors; the linear system is solved by the least-squares method.
the feature points include the key position points of the face and the upper left corner point and the lower right corner point of the face image frame.
Further, the face key position points comprise the left eye center, right eye center, nose tip center, left mouth corner, and right mouth corner.
Further, in step S5, the required feature regions include the left-eye and right-eye regions corresponding to the left and right eye centers, and the mouth region corresponding to the left and right mouth corners;
assuming the face image frame lies entirely in the first quadrant after affine transformation, the upper-left corner point A' of the face image frame has coordinates (x'_a, y'_a), the lower-right corner point B' has coordinates (x'_b, y'_b), the left eye center C' has (x'_c, y'_c), the right eye center D' has (x'_d, y'_d), the nose tip center E' has (x'_e, y'_e), the left mouth corner F' has (x'_f, y'_f), and the right mouth corner G' has (x'_g, y'_g); then:
the horizontal length of the left-eye and right-eye regions is h1 = α1·H, where H = x'_b − x'_a is the horizontal length of the face image frame and α1 is the eye horizontal scale factor;
the vertical length of the left-eye and right-eye regions is w1 = α2·W, where W = y'_b − y'_a is the vertical length of the face image frame and α2 is the eye vertical scale factor;
the horizontal length of the mouth region is h2 = α3·(x'_g − x'_f), where α3 is the mouth horizontal scale factor;
the vertical length of the mouth region is w2 = α4·(y'_f − y'_g), where α4 is the mouth vertical scale factor;
the left-eye region has upper-left corner (x'_c − h1/2, y'_c − w1/2) and lower-right corner (x'_c + h1/2, y'_c + w1/2);
the right-eye region has upper-left corner (x'_d − h1/2, y'_d − w1/2) and lower-right corner (x'_d + h1/2, y'_d + w1/2).
Further, in step S6, the cropped left-eye or right-eye region is selected for fatigue determination: if the blink frequency reaches 10–15 times per minute, or a single eye closure exceeds 0.5 s, or the eyes are closed for more than 0.8 s within 2 seconds, the driver is determined to be in a fatigued driving state. Alternatively, the determination is made from the state detected in the mouth region: when the mouth remains open for 3 seconds or more, the driver is determined to be yawning and thus in a fatigued driving state.
Further, step S6 uses a three-layer convolutional fatigue state detection network for driver fatigue detection.
The invention provides a lightweight driver fatigue state detection method. A camera first acquires the driver's face image in real time; a lightweight face detection and screening network (based on the RetinaFace face detection algorithm) then locates the driver's face within the camera's detection range while preliminarily locating the key position points of the two eye centers, the nose tip, and the left and right mouth corners. The driver's face image is cropped from the adjusted image; non-frontal states such as a turned or tilted face are detected from the Euclidean distances and angles between the eye centers and the nose tip; the images are uniformly face-aligned and horizontally corrected, then normalized so that all images are at the same position scale; and finally the feature regions are examined to determine the driver's fatigue state.
The invention uses a lightweight network to detect the face region and key position points, combines face screening and basic image transformations to obtain face regions of equal size and, from them, feature regions of equal size, and finally feeds the feature regions into a specially trained detection network that outputs the driver's current fatigue state detection result.
Drawings
FIG. 1 is a flow chart illustrating steps of a method for detecting a fatigue state of a driver in a lightweight manner according to an embodiment of the present invention;
fig. 2 is a structural diagram of a face detection screening network according to an embodiment of the present invention;
FIG. 3 is an effect diagram of a face screening module according to an embodiment of the present invention;
FIG. 4 is a simplified process flow diagram of a method for detecting fatigue status of a driver in a lightweight manner according to an embodiment of the present invention;
fig. 5 is a full-process illustration of the lightweight driver fatigue state detection method provided by an embodiment of the invention.
Detailed Description
The embodiments of the present invention are described in detail below with reference to the accompanying drawings. The drawings are provided solely for illustration and are not to be construed as limiting the invention; many variations are possible without departing from its spirit and scope.
To achieve fast, accurate, and lightweight detection of the driver's fatigue state, an embodiment of the invention provides a lightweight driver fatigue state detection method whose step flow is shown in fig. 1 and which comprises the steps of:
S1, acquiring in real time a face video of all people in the detection area;
S2, performing face detection and screening on the face video frame by frame using a lightweight face detection and screening network, so that only the driver's face key position points and face image frame are marked on the output image;
S3, cropping the driver's face image from the output image and performing face alignment and horizontal correction on it;
S4, normalizing the aligned and corrected driver face image so that all images are at the same position scale;
S5, extracting the required feature regions from the face key position points and the face image frame in the driver face image at the same position scale;
S6, determining the driver's fatigue state from changes in the feature regions over a set number of consecutive frames.
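The steps above can be sketched as a single processing loop. The skeleton below is illustrative only: the stage functions are passed in as callables, and every name is a placeholder rather than a component defined by the patent.

```python
# Structural sketch of steps S1-S6; stage implementations are injected as
# callables so the skeleton stays independent of any concrete model.
def run_pipeline(frames, detect_face, align, normalize, crop_regions, judge):
    """Apply stages S2-S5 to each frame, then the temporal decision S6."""
    region_states = []
    for frame in frames:                 # S1: frames from the cabin camera
        face = detect_face(frame)        # S2: detect and screen the driver's face
        if face is None:
            continue                     # no driver face found in this frame
        face = normalize(align(face))    # S3 + S4: align, level, and rescale
        region_states.append(crop_regions(face))   # S5: eye and mouth regions
    return judge(region_states)          # S6: decision over consecutive frames

# Example with trivial stand-in stages:
result = run_pipeline(
    frames=[1, 2, None, 3],                 # one frame with no detectable face
    detect_face=lambda f: f,                # pass-through "detector"
    align=lambda f: f,
    normalize=lambda f: f,
    crop_regions=lambda f: f,
    judge=lambda states: len(states) >= 3,  # "fatigued" once enough evidence
)
print(result)  # → True
```

Passing the stages in as parameters keeps the control flow of S1–S6 visible without committing to any particular detector or classifier.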
It should be further noted that, in step S2, face detection and screening of the face video is implemented by a face detection and screening network that adds a face screening module to the RetinaFace face detection algorithm. Referring to fig. 2, the RetinaFace face detection algorithm detects face regions and marks face key position points; the face screening module then removes the face regions of passengers other than the driver, as shown in fig. 3. Finally, the driver's face region (including the driver's face key position points and face image frame) is marked on the input original image. A passenger who strays into the target detection area has a face area smaller than the driver's, so the driver's face region can be selected and non-target faces rejected: the face screening module keeps the face region with the largest area as the driver's face region and removes the rest.
Other face-detection convolutional neural networks, such as MTCNN (multi-task cascaded convolutional neural network), can detect more accurately and align faces at the same time, but they are slow. In a standard convolution, for example, 32 kernels of size 3 × 3 each traverse the data of 16 input channels to produce the 32 required output channels, needing 16 × 32 × 3 × 3 = 4608 parameters; such a parameter count is too large for fatigue-driving detection, which requires fast judgments. By contrast, the RetinaFace face detection algorithm uses a lightweight deep neural network that meets the speed requirement of fatigue-driving detection while preserving accuracy. RetinaFace performs feature extraction with depthwise separable convolution: a channel-wise (depthwise) convolution with 3 × 3 kernels followed by a point-wise convolution with 1 × 1 kernels. Sixteen 3 × 3 kernels traverse the 16 channels separately to obtain 16 feature maps, and before the fusion operation 32 kernels of size 1 × 1 traverse those 16 feature maps; the required parameters are 16 × 3 × 3 + 16 × 32 × 1 × 1 = 656, a reduction of nearly 4000 parameters.
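The parameter arithmetic above can be verified with a short calculation (bias terms are omitted, matching the counts in the text):

```python
# Parameter counts for a 3x3 convolution from 16 input to 32 output channels,
# comparing a standard convolution with its depthwise separable equivalent.
def standard_conv_params(c_in, c_out, k):
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    depthwise = c_in * k * k           # one k x k filter per input channel
    pointwise = c_in * c_out * 1 * 1   # 1x1 cross-channel fusion
    return depthwise + pointwise

print(standard_conv_params(16, 32, 3))        # → 4608
print(depthwise_separable_params(16, 32, 3))  # → 656
```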
Specifically, in step S3, the driver face image is face-aligned and horizontally corrected by affine transformation, and the driver face image is cropped from the output image with OpenCV. Non-aligned, non-horizontal facial states such as a turned or tilted face are detected from the Euclidean distances and angles between the eye centers and the nose tip. Let (x_i, y_i)^T be the coordinates of the i-th feature point located on the face and (x'_i, y'_i)^T the coordinates after affine alignment; the affine transformation is expressed as the linear system

x'_i = a·x_i + b·y_i + e
y'_i = c·x_i + d·y_i + f

where [[a, b], [c, d]] is the transformation matrix and a, b, c, d, e, f are the affine transformation factors. Solving the system by least squares gives the values of a, b, c, d, e, and f, so the driver's face can be adjusted to a normal, level, frontal pose, which simplifies cropping the feature regions later.
The feature points include the face key position points and the upper-left and lower-right corner points of the face image frame. Since this embodiment extracts facial features with the RetinaFace face detection algorithm, the face key position points are the left eye center, right eye center, nose tip center, left mouth corner, and right mouth corner. Assuming the whole face image frame lies in the first quadrant, the upper-left and lower-right corner points of the frame before transformation are A(x_a, y_a) and B(x_b, y_b), which become A'(x'_a, y'_a) and B'(x'_b, y'_b) after transformation. The coordinates of the left eye center, right eye center, nose tip center, left mouth corner, and right mouth corner before transformation are C(x_c, y_c), D(x_d, y_d), E(x_e, y_e), F(x_f, y_f), and G(x_g, y_g), and after transformation they are C'(x'_c, y'_c), D'(x'_d, y'_d), E'(x'_e, y'_e), F'(x'_f, y'_f), and G'(x'_g, y'_g).
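The least-squares solution of the six affine factors can be sketched with NumPy as follows. The landmark coordinates and the "true" transform are made-up test data, not values from the patent:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit of x' = a*x + b*y + e, y' = c*x + d*y + f."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src      # x' rows: [x, y, 0, 0, 1, 0]
    A[0::2, 4] = 1.0
    A[1::2, 2:4] = src      # y' rows: [0, 0, x, y, 0, 1]
    A[1::2, 5] = 1.0
    params, *_ = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)
    return params            # (a, b, c, d, e, f)

# Seven landmarks: five face key points plus the two frame corner points.
src = np.array([[30, 40], [70, 38], [50, 60], [38, 80], [62, 79],
                [0, 0], [100, 120]], float)
true = np.array([1.02, 0.08, -0.08, 1.02, 4.0, -2.0])   # a, b, c, d, e, f
a, b, c, d, e, f = true
dst = np.stack([a * src[:, 0] + b * src[:, 1] + e,
                c * src[:, 0] + d * src[:, 1] + f], axis=1)

params = fit_affine(src, dst)
print(np.allclose(params, true))  # → True
```

With at least three non-collinear correspondences the system is determined; with seven points it is overdetermined and `lstsq` returns the least-squares solution, which here recovers the generating transform exactly.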
Even for the same driver, face images captured at different times differ in size, so step S4 further normalizes the images so that all of them are at the same position scale, at which the eye and mouth areas of the same driver are consistent. This allows feature regions of identical size to be cropped and fed into the final three-layer convolutional fatigue state detection network.
In this embodiment, in step S5 the required feature regions are the left-eye and right-eye regions corresponding to the left and right eye centers and the mouth region corresponding to the left and right mouth corners, i.e. three regions are cropped in total. Specifically:
the horizontal length of the left-eye and right-eye regions is h1 = α1·H, where H = x'_b − x'_a is the horizontal length of the face image frame and α1 is the eye horizontal scale factor;
the vertical length of the left-eye and right-eye regions is w1 = α2·W, where W = y'_b − y'_a is the vertical length of the face image frame and α2 is the eye vertical scale factor;
the horizontal length of the mouth region is h2 = α3·(x'_g − x'_f), where α3 is the mouth horizontal scale factor;
the vertical length of the mouth region is w2 = α4·(y'_f − y'_g), where α4 is the mouth vertical scale factor;
the left-eye region has upper-left corner (x'_c − h1/2, y'_c − w1/2) and lower-right corner (x'_c + h1/2, y'_c + w1/2);
the right-eye region has upper-left corner (x'_d − h1/2, y'_d − w1/2) and lower-right corner (x'_d + h1/2, y'_d + w1/2).
In this example, [α1, α2, α3, α4] = [0.75, 0.45, 1.05, 1.08], obtained by testing.
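A minimal numeric sketch of the eye-region crop, assuming each eye box of size h1 × w1 is centered on the corresponding eye center; the frame and landmark coordinates below are illustrative, not from the patent:

```python
ALPHA1, ALPHA2 = 0.75, 0.45   # eye horizontal / vertical scale factors (tested values)

def eye_box(center, frame_tl, frame_br, a1=ALPHA1, a2=ALPHA2):
    """Return (top-left, bottom-right) of an eye box centered on `center`."""
    H = frame_br[0] - frame_tl[0]   # horizontal length of the face frame
    W = frame_br[1] - frame_tl[1]   # vertical length of the face frame
    h1, w1 = a1 * H, a2 * W         # eye-box dimensions
    cx, cy = center
    return (cx - h1 / 2, cy - w1 / 2), (cx + h1 / 2, cy + w1 / 2)

# Illustrative aligned face frame (100 wide, 120 tall) and a left-eye center:
tl, br = eye_box(center=(30.0, 40.0), frame_tl=(0.0, 0.0), frame_br=(100.0, 120.0))
print(tl, br)  # → (-7.5, 13.0) (67.5, 67.0)
```

Negative coordinates (as in this toy example) would be clipped to the image bounds in practice.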
In step S6, the cropped left-eye or right-eye region is selected for fatigue determination: if the blink frequency reaches 10–15 times per minute, or a single eye closure exceeds 0.5 s, or the eyes are closed for more than 0.8 s within 2 seconds, the driver is determined to be in a fatigued driving state. Alternatively, the determination is made from the state detected in the mouth region: when the mouth remains open for 3 seconds or more, the driver is determined to be yawning and thus in a fatigued driving state.
Since a continuous image sequence carries no explicit time scale but the number of frames read per second is fixed, yawning can be judged from the number of consecutive frames in which the mouth is open, and the yawning frequency over a fixed period then follows. Step S6 uses a three-layer convolutional fatigue state detection network for driver fatigue detection.
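The time thresholds above convert naturally into frame counts once the capture rate is fixed. The sketch below assumes 25 frames per second and omits the blink-frequency criterion for brevity; it is an illustration of the threshold logic, not the patent's detection network:

```python
FPS = 25  # assumed capture rate (frames per second)

def _longest_run(flags):
    """Length of the longest run of consecutive True values."""
    best = cur = 0
    for f in flags:
        cur = cur + 1 if f else 0
        best = max(best, cur)
    return best

def is_fatigued(eye_closed, mouth_open, fps=FPS):
    """Per-frame eye-closed / mouth-open booleans over one analysis window."""
    # A single eye closure longer than 0.5 s:
    if _longest_run(eye_closed) > 0.5 * fps:
        return True
    # More than 0.8 s of total closure inside any 2-second window:
    win = 2 * fps
    for start in range(max(1, len(eye_closed) - win + 1)):
        if sum(eye_closed[start:start + win]) > 0.8 * fps:
            return True
    # Mouth held open for 3 s or more counts as yawning:
    if _longest_run(mouth_open) >= 3 * fps:
        return True
    return False

# A 1-second eye closure at 25 fps trips the 0.5 s rule:
print(is_fatigued([False] * 10 + [True] * 25 + [False] * 10, [False] * 45))  # → True
print(is_fatigued([False] * 45, [False] * 45))  # → False
```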
From the viewpoint of the process flow, the fatigue state detection process of steps S1 to S6 may refer to fig. 4. From the perspective of the image example, the fatigue state detection process of steps S1 to S6 may refer to fig. 5.
To sum up, in the lightweight driver fatigue state detection method of the embodiments, a camera acquires the driver's face image in real time; a lightweight face detection and screening network (based on the RetinaFace face detection algorithm) locates the driver's face within the camera's detection range and preliminarily locates the key position points of the two eye centers, the nose tip, and the left and right mouth corners; the driver's face image is cropped from the adjusted image; non-frontal states such as a turned or tilted face are detected from the Euclidean distances and angles between the eye centers and the nose tip, and the images are uniformly face-aligned and horizontally corrected; because even the same driver's face images differ in size at different times, the images are further normalized so that all of them are at the same position scale; and finally the feature regions are examined to determine the driver's fatigue state.
The embodiment of the invention uses a lightweight network to detect the face region and key position points, combines face screening and basic image transformations to obtain face regions of equal size and, from them, feature regions of equal size, and finally feeds the feature regions into the specially trained three-layer convolutional fatigue state detection network, which outputs the driver's current fatigue state detection result.
The above embodiments are preferred embodiments of the present invention, but the invention is not limited to them; any change, modification, substitution, combination, or simplification that does not depart from the spirit and principle of the invention is an equivalent and falls within its scope.
Claims (10)
1. A lightweight driver fatigue state detection method, characterized by comprising the steps of:
S1, acquiring in real time a face video of all people in the detection area;
S2, performing face detection and screening on the face video frame by frame using a lightweight face detection and screening network, so that only the driver's face key position points and face image frame are marked on the output image;
S3, cropping the driver's face image from the output image and performing face alignment and horizontal correction on it;
S4, normalizing the aligned and corrected driver face image so that all images are at the same position scale;
S5, extracting the required feature regions from the face key position points and the face image frame in the driver face image at the same position scale;
S6, determining the driver's fatigue state from changes in the feature regions over a set number of consecutive frames.
2. The lightweight driver fatigue state detection method according to claim 1, characterized in that in step S2 the face detection and screening of the face video is implemented by a face detection and screening network in which a face screening module is added to the RetinaFace face detection algorithm; the RetinaFace face detection algorithm detects the face region and marks the face key position points, and the face screening module removes the face regions of passengers other than the driver.
3. The lightweight driver fatigue state detection method according to claim 2, characterized in that: the RetinaFace face detection algorithm performs feature extraction using depthwise separable convolution, which consists of a channel-wise convolution with 3 × 3 kernels and a point-wise convolution with 1 × 1 kernels.
4. The lightweight driver fatigue state detection method according to claim 3, characterized in that: the face screening module keeps the face region with the largest area as the driver's face region and rejects the other face regions.
5. The lightweight driver fatigue state detection method according to claim 4, characterized in that: in step S3, face alignment and horizontal correction of the driver face image are performed by affine transformation, and the driver face image is cropped from the output image using OpenCV.
6. The lightweight driver fatigue state detection method according to claim 5, characterized in that: letting (x_i, y_i)^T be the coordinates of the i-th feature point located on the face and (x'_i, y'_i)^T the coordinates of the same point after affine alignment, the affine transformation is expressed as the linear system

x'_i = a·x_i + b·y_i + e
y'_i = c·x_i + d·y_i + f

where [[a, b], [c, d]] is the transformation matrix and a, b, c, d, e, f are the affine transformation factors; the linear system is solved by the least-squares method;
the feature points include the key position points of the face and the upper left corner point and the lower right corner point of the face image frame.
7. The lightweight driver fatigue state detection method according to claim 6, characterized in that: the face key position points comprise the left eye center, right eye center, nose tip center, left mouth corner, and right mouth corner.
8. The lightweight driver fatigue state detection method according to claim 7, characterized in that: in step S5, the required feature regions include the left-eye and right-eye regions corresponding to the left and right eye centers and the mouth region corresponding to the left and right mouth corners;
assuming the face image frame lies entirely in the first quadrant after affine transformation, the upper-left corner point A' of the face image frame has coordinates (x'_a, y'_a), the lower-right corner point B' has coordinates (x'_b, y'_b), the left eye center C' has (x'_c, y'_c), the right eye center D' has (x'_d, y'_d), the nose tip center E' has (x'_e, y'_e), the left mouth corner F' has (x'_f, y'_f), and the right mouth corner G' has (x'_g, y'_g); then:
the horizontal length of the left-eye and right-eye regions is h1 = α1·H, where H = x'_b − x'_a is the horizontal length of the face image frame and α1 is the eye horizontal scale factor;
the vertical length of the left-eye and right-eye regions is w1 = α2·W, where W = y'_b − y'_a is the vertical length of the face image frame and α2 is the eye vertical scale factor;
the horizontal length of the mouth region is h2 = α3·(x'_g − x'_f), where α3 is the mouth horizontal scale factor;
the vertical length of the mouth region is w2 = α4·(y'_f − y'_g), where α4 is the mouth vertical scale factor;
the upper left corner coordinate of the left eye region isThe coordinate of the lower right corner is
The upper left corner coordinate of the right eye region isThe coordinate of the lower right corner is
9. The lightweight driver fatigue state detection method according to claim 8, characterized in that: in step S6, the cropped left-eye or right-eye region is selected for fatigue determination, and if the blink frequency reaches 10–15 times per minute, or a single eye closure exceeds 0.5 s, or the eyes are closed for more than 0.8 s within 2 seconds, the driver is determined to be in a fatigued driving state; alternatively, the determination is made from the state detected in the mouth region: when the mouth remains open for 3 seconds or more, the driver is determined to be yawning and thus in a fatigued driving state.
10. The lightweight driver fatigue state detection method according to claim 9, characterized in that: step S6 uses a three-layer convolutional fatigue state detection network for driver fatigue detection.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111321068.6A | 2021-11-09 | 2021-11-09 | Lightweight driver fatigue state detection method |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN114037979A | 2022-02-11 |
Family

ID: 80136883

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202111321068.6A | | 2021-11-09 | 2021-11-09 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN114037979A |
Cited By (1)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
|---|---|---|---|---|
| CN114582090A | 2022-02-27 | 2022-06-03 | 武汉铁路职业技术学院 | Rail vehicle driving monitoring and early-warning system |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |