CN114332833A - Binary differentiable fatigue detection method based on face key points - Google Patents

Binary differentiable fatigue detection method based on face key points

Info

Publication number
CN114332833A
CN114332833A CN202111672550.4A
Authority
CN
China
Prior art keywords
binary
feature
eye
area
differentiable
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111672550.4A
Other languages
Chinese (zh)
Inventor
严圣军
秦宇
王栋
王振
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Zhiying Robot Technology Co ltd
Jiangsu Tianying Environmental Protection Energy Equipment Co Ltd
China Tianying Inc
Original Assignee
Shanghai Zhiying Robot Technology Co ltd
Jiangsu Tianying Environmental Protection Energy Equipment Co Ltd
China Tianying Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Zhiying Robot Technology Co ltd, Jiangsu Tianying Environmental Protection Energy Equipment Co Ltd, China Tianying Inc
Priority to CN202111672550.4A
Publication of CN114332833A
Legal status: Pending

Abstract

The invention discloses a binary differentiable fatigue detection method based on face key points. The method collects facial video data of a driver; obtains the face position information and key point coordinates in each frame of the video data; crops left-eye and right-eye images according to the key point coordinates to produce a data set; establishes a binary differentiable deep convolution segmentation network model and trains a human eye segmentation model; inputs new video data and, based on the MTCNN (Multi-Task Cascaded Convolutional Networks) face detection algorithm and the eye segmentation model, obtains the left-eye and right-eye segmentation regions in each frame and computes the corresponding areas; and, from the computed areas of the left-eye and right-eye segmentation regions, calculates the eye-closure condition within a time window t by formula. The invention provides a differentiable binarization human eye segmentation algorithm and a method for calculating whether the eyes are closed, and improves the accuracy of driver fatigue detection.

Description

Binary differentiable fatigue detection method based on face key points
Technical Field
The invention relates to a fatigue detection method, in particular to a binary differentiable fatigue detection method based on face key points, and belongs to the technical field of intelligent driving.
Background
With national socioeconomic development, rising living standards, and the build-out of domestic road infrastructure, the types and numbers of vehicles in the municipal sanitation field, including road sweepers, garbage transfer trucks, sprinkler trucks, and the like, are increasing day by day. Meanwhile, artificial intelligence technology has developed rapidly in recent years, and many novel, efficient deep learning algorithms have emerged. Technical progress needs real application scenarios to drive it, and safe assisted driving of motor vehicles is a representative direction in which deep learning is being applied in machine vision. During motor vehicle operation, a portion of traffic accidents arise from dangerous driving behavior, causing great losses to social and economic development and to countless citizens. In the sanitation field in particular, sanitation vehicles must operate across different time periods, some at night when few other vehicles are on the road, so drivers on duty are highly prone to involuntary fatigued driving. Real-time detection of and early warning against driver fatigue during sanitation vehicle operation is therefore necessary, and can effectively reduce traffic accidents involving sanitation vehicles.
In current research, fatigue-driving judgment methods based on face key points mainly rely on deep-learning target detection. For example, patent CN108460345 extracts a video stream containing the driver's whole facial expression within a unit time, processes each frame of the stream, uses the face key points in each frame to judge whether the eyes are closed or the mouth is opened excessively, and finally applies the PERCLOS method to count how often eye closure or excessive mouth opening occurs in the stream per unit time, thereby judging whether the driver is driving fatigued. However, when tired, the eyes and mouth do not close completely, which makes such fatigue detection inaccurate.
Disclosure of Invention
The invention aims to provide a binary differentiable fatigue detection method based on face key points and to improve the accuracy of driver fatigue detection.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
a binary differentiable fatigue detection method based on face key points is characterized by comprising the following steps:
step one: collecting face video data of a driver;
step two: acquiring face position information and key point coordinates in each frame of video data;
step three: cutting the left eye and right eye pictures according to the coordinates of the key points to produce a data set;
step four: establishing a binary differentiable depth convolution segmentation network model and training a human eye segmentation model;
step five: inputting new video data, acquiring the segmentation areas of the left eye and the right eye in each frame of image based on an MTCNN face detection algorithm and an eye segmentation model, and calculating corresponding areas;
step six: calculating the eye-closure condition within time t from the computed areas of the left-eye and right-eye segmentation regions, using the formula
close_eye = (area_1 + area_2 + ... + area_t) / (t · max(area_1, area_2, ..., area_t))
wherein close_eye is the eye-closure condition, t is the computed time window, T is the closure threshold, area_i is the area of the eye region segmented at time i, and area_1, area_2, ..., area_t denote the areas of the segmented eye region at the different times.
Further, the first step is specifically: facial video data of the driver are collected by a camera installed in the cockpit, and the camera's field of view reserves room for the driver's range of movement.
Further, the second step is specifically: the collected facial video data are decomposed frame by frame into image data, and each frame is then processed with the MTCNN (Multi-Task Cascaded Convolutional Networks) face detection algorithm to obtain the face position information and key point coordinates in each frame.
Further, the keypoint coordinates include left eye coordinates, right eye coordinates, nose coordinates, and mouth coordinates.
Further, the third step is specifically:
3.1. From the obtained left eye coordinates (x_left, y_left) and right eye coordinates (x_right, y_right), generate the upper-left and lower-right corner coordinates of the left-eye and right-eye rectangular regions: x_left_top = x_left - 20, y_left_top = y_left - 20, x_left_bottom = x_left + 20, y_left_bottom = y_left + 20; x_right_top = x_right - 20, y_right_top = y_right - 20, x_right_bottom = x_right + 20, y_right_bottom = y_right + 20; then crop the corresponding left-eye and right-eye regions according to these coordinates.
3.2. Make a probability map label and a threshold map label separately: within the rectangular regions, the left eyeball region can be described as D1 and the right eyeball region as D2; compute the perimeter L1 and area A1 of D1 and the perimeter L2 and area A2 of D2, and compute the shrink offsets of D1 and D2 respectively as
d1 = A1(1 - r^2) / L1,  d2 = A2(1 - r^2) / L2
where r = 0.4;
3.3. Shrink D1 to D1' and D2 to D2' using the Vatti clipping algorithm; the interiors of D1' and D2' are filled with the value 0, i.e. D1' and D2' constitute the probability map label;
3.4. Expand D1 to D1'' and D2 to D2'' using the Vatti clipping algorithm, and make the values within the bands D1''-D1 and D2''-D2 follow a normal distribution, i.e. D1'' and D2'' constitute the threshold map label.
Further, the fourth step is specifically:
4.1, unifying the size of the input pictures and ensuring that the size of the input pictures is a multiple of 32;
4.2. First, the input picture passes through a five-layer feature pyramid, each layer halving the picture size relative to the previous layer for a final 32x reduction, and the features produced by every layer are stored; second, the features of each pyramid layer are up-sampled to the same size and all feature layers are fused; then the fused features are convolved, keeping the feature size unchanged, to generate in one branch a probability map probability_feature and in the other a threshold map threshold_feature; finally, a binary map binary_feature is generated from probability_feature and threshold_feature;
4.3. The loss function of the binary differentiable deep convolution segmentation network model is
Loss = L_binary_feature + L_probability_feature + L_threshold_feature;
4.4, setting other hyper-parameters of the training.
Further, the method for producing the binary map binary_feature is specifically: a differentiable binarization equation is adopted to produce the binary map binary_feature, and the differentiable binarization equation can be expressed as
binary_feature(i,j) = 1 / (1 + e^(-k(probability_feature(i,j) - threshold_feature(i,j))))
where k is set to 30 and i, j each index a position in the two-dimensional matrix.
Further, the loss function is specifically: the loss of the binary differentiable deep convolution segmentation network model consists of three parts, namely L_binary_feature, L_probability_feature, and L_threshold_feature.
L_binary_feature and L_probability_feature use the cross-entropy loss function and can be expressed as
L_binary_feature = L_probability_feature = -∑_(i∈S) [y_i·log x_i + (1 - y_i)·log(1 - x_i)]
L_threshold_feature uses the L1 loss function and can be expressed as
L_threshold_feature = ∑_(i∈S) |y_i - x_i|
namely
Loss = L_binary_feature + L_probability_feature + L_threshold_feature
wherein L_binary_feature denotes the binary map loss, L_probability_feature the probability map loss, and L_threshold_feature the threshold map loss.
Further, the fifth step is specifically: input a video stream of duration t, detect the left-eye and right-eye key point coordinates with the MTCNN model, extract the left-eye and right-eye regions, input the left-eye and right-eye region information into the binary differentiable deep convolution segmentation network model to obtain the actual extent of the segmented eyeballs, and compute the segmented areas, recorded as eye_area = [area_1, area_2, ..., area_t].
Compared with the prior art, the invention has the following advantages and effects:
1. The invention segments the human eye region with a differentiable binarization segmentation method based on face key points. The method can set the threshold adaptively during training and segment the human eye efficiently and accurately, which is key to subsequently judging correctly whether the eye is closed; it removes the post-processing operation of traditional segmentation and, compared with traditional methods, offers good robustness;
2. The invention proposes a method for adaptively judging along the time dimension whether the eyes are closed. A threshold T = 0.5 is set; within a window of duration t the left and right eyes are segmented frame by frame, the per-frame segmented areas are recorded as eye_area = [area_1, area_2, ..., area_t], and the eye closure is calculated by the formula
close_eye = (area_1 + area_2 + ... + area_t) / (t · max(area_1, area_2, ..., area_t))
If the ratio is smaller than the threshold T, the eyes are in the closed state; this improves the accuracy of the eye-closure judgment.
Drawings
Fig. 1 is a flowchart of a binary differentiable fatigue detection method based on face key points according to the present invention.
Fig. 2 is a schematic diagram of the face and key points detected with MTCNN according to the present invention.
FIG. 3 is a schematic diagram of making a probability map label and a threshold label of the present invention.
FIG. 4 is a schematic diagram of a binary differentiable deep convolution segmentation network model of the present invention.
Detailed Description
To elaborate the technical solutions adopted by the present invention to achieve its intended technical objects, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, embodiments of the present invention, and technical means or technical features in the embodiments may be replaced without creative effort. The present invention is described in detail below with reference to the drawings and in conjunction with the embodiments.
As shown in fig. 1, the binary differentiable fatigue detection method based on face key points of the present invention includes the following steps:
Step one: collecting facial video data of the driver.
Facial video data of the driver are collected by a camera installed in the cockpit, and the camera's field of view reserves room for the driver's range of movement.
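In Python-style code this capture step may be sketched as follows; the camera index and the 10-second window at 30 fps are illustrative assumptions, since the patent only specifies an in-cab camera facing the driver.

```python
import cv2

# Hypothetical capture loop: camera index 0 and a 10 s window at 30 fps
# are assumptions, not values given in the patent.
cap = cv2.VideoCapture(0)
frames = []
while len(frames) < 10 * 30:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()
```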
Step two: acquiring the face position information and the key point coordinates in each frame of the video data.
The collected facial video data are decomposed frame by frame into image data, and each frame is then processed with the MTCNN (Multi-Task Cascaded Convolutional Networks) face detection algorithm to obtain the face position information and key point coordinates in each frame. As shown in fig. 2, the key point coordinates include a left eye coordinate, a right eye coordinate, a nose coordinate, and two mouth coordinates.
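The patent does not name an MTCNN implementation; the sketch below assumes the facenet-pytorch package, whose detector returns bounding boxes plus five landmarks (left eye, right eye, nose, two mouth corners) per face.

```python
from facenet_pytorch import MTCNN
from PIL import Image

mtcnn = MTCNN(keep_all=False)  # assume a single driver face per frame

img = Image.open("frame_0001.jpg")  # hypothetical decomposed frame
boxes, probs, landmarks = mtcnn.detect(img, landmarks=True)
# landmarks[0] is a 5x2 array: left eye, right eye, nose,
# left mouth corner, right mouth corner
(x_left, y_left), (x_right, y_right) = landmarks[0][0], landmarks[0][1]
```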
Step three: cutting the left-eye and right-eye pictures according to the key point coordinates to produce a data set.
3.1. From the obtained left eye coordinates (x_left, y_left) and right eye coordinates (x_right, y_right), generate the upper-left and lower-right corner coordinates of the left-eye and right-eye rectangular regions: x_left_top = x_left - 20, y_left_top = y_left - 20, x_left_bottom = x_left + 20, y_left_bottom = y_left + 20; x_right_top = x_right - 20, y_right_top = y_right - 20, x_right_bottom = x_right + 20, y_right_bottom = y_right + 20; then crop the corresponding left-eye and right-eye regions according to these coordinates.
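Continuing the sketch above, the 40x40 eye crops can be taken with array slicing; the border clipping is an added safeguard not spelled out in the patent.

```python
import numpy as np

def crop_eye(frame: np.ndarray, x: float, y: float, half: int = 20) -> np.ndarray:
    """Cut a (2*half) x (2*half) patch centred on an eye key point,
    clipped to the image border (the clipping is an assumption)."""
    h, w = frame.shape[:2]
    x0, y0 = max(int(x) - half, 0), max(int(y) - half, 0)
    x1, y1 = min(int(x) + half, w), min(int(y) + half, h)
    return frame[y0:y1, x0:x1]

frame = np.array(img)  # the frame detected above
left_eye_img = crop_eye(frame, x_left, y_left)
right_eye_img = crop_eye(frame, x_right, y_right)
```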
3.2. Make a probability map label and a threshold map label separately. As shown in FIG. 3, within the rectangular regions the left eyeball region can be described as D1 and the right eyeball region as D2; compute the perimeter L1 and area A1 of D1 and the perimeter L2 and area A2 of D2, and compute the shrink offsets of D1 and D2 respectively as
d1 = A1(1 - r^2) / L1,  d2 = A2(1 - r^2) / L2
where r = 0.4;
3.3. Shrink D1 to D1' and D2 to D2' using the Vatti clipping algorithm; the interiors of D1' and D2' are filled with the value 0, i.e. D1' and D2' constitute the probability map label;
3.4. Expand D1 to D1'' and D2 to D2'' using the Vatti clipping algorithm, and make the values within the bands D1''-D1 and D2''-D2 follow a normal distribution, i.e. D1'' and D2'' constitute the threshold map label.
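The Vatti clipping algorithm is available through the pyclipper package (the library choice is an assumption); a sketch of the shrink/expand used for the two labels, with offset d = A(1 - r^2)/L, follows.

```python
import numpy as np
import pyclipper

def offset_contour(contour: np.ndarray, r: float = 0.4, shrink: bool = True) -> np.ndarray:
    """Shrink (probability-map label) or expand (threshold-map label) an
    eyeball contour, given as integer pixel coordinates of shape (N, 2),
    by d = A * (1 - r**2) / L via Vatti clipping."""
    pts = contour.tolist()
    area = abs(pyclipper.Area(pts))
    perimeter = float(np.sum(np.linalg.norm(
        np.roll(contour, -1, axis=0) - contour, axis=1)))
    d = area * (1 - r ** 2) / perimeter
    pco = pyclipper.PyclipperOffset()
    pco.AddPath(pts, pyclipper.JT_ROUND, pyclipper.ET_CLOSEDPOLYGON)
    result = pco.Execute(-d if shrink else d)
    return np.array(result[0])

# D1_prime = offset_contour(D1)                  # shrunk contour, probability map label
# D1_double = offset_contour(D1, shrink=False)   # expanded contour, threshold map label
```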
Step four: establishing a binary differentiable deep convolution segmentation network model and training the human eye segmentation model.
4.1, unifying the size of the input pictures and ensuring that the size of the input pictures is a multiple of 32;
4.2. As shown in fig. 4, first the input picture passes through a five-layer feature pyramid, each layer halving the picture size relative to the previous layer for a final 32x reduction, and the features produced by every layer are stored; second, the features of each pyramid layer are up-sampled to the same size and all feature layers are fused; then the fused features are convolved, keeping the feature size unchanged, to generate in one branch a probability map probability_feature and in the other a threshold map threshold_feature; finally, a binary map binary_feature is generated from probability_feature and threshold_feature.
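A compact PyTorch sketch of this fusion-and-heads stage is given below; the backbone producing the five pyramid levels, the channel counts, and the 3x3 head convolutions are assumptions, and the binarization step itself is shown after the differentiable equation below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionHeads(nn.Module):
    """Upsample the five pyramid features to one size, fuse them, and run
    two convolutional heads: probability_feature and threshold_feature."""
    def __init__(self, in_channels, mid=64):
        super().__init__()
        self.lateral = nn.ModuleList([nn.Conv2d(c, mid, 1) for c in in_channels])
        n = mid * len(in_channels)
        self.prob_head = nn.Conv2d(n, 1, 3, padding=1)
        self.thresh_head = nn.Conv2d(n, 1, 3, padding=1)

    def forward(self, feats):
        size = feats[0].shape[-2:]  # size of the finest pyramid level
        fused = torch.cat(
            [F.interpolate(lat(f), size=size, mode="bilinear", align_corners=False)
             for lat, f in zip(self.lateral, feats)], dim=1)
        prob = torch.sigmoid(self.prob_head(fused))      # probability_feature
        thresh = torch.sigmoid(self.thresh_head(fused))  # threshold_feature
        return prob, thresh
```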
In previous state-of-the-art methods, binary_feature is generated by comparing each element of the probability map against a hyper-parameter threshold t, described by the formula
binary_feature(i,j) = 1 if probability_feature(i,j) ≥ t, and 0 otherwise.
In such methods the hyper-parameter t cannot be learned during training, so a model trained on a large number of complex scenes lacks robustness.
The binary map binary_feature is produced specifically as follows:
the invention proposes a differentiable binarization equation, which is adopted to produce the binary map binary_feature and can be expressed as
binary_feature(i,j) = 1 / (1 + e^(-k(probability_feature(i,j) - threshold_feature(i,j))))
where k is set to 30 and i, j each index a position in the two-dimensional matrix. Under training on a large amount of data, threshold_feature can adjust itself and learn the most appropriate parameters, finally completing the model's segmentation.
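As a sketch, the equation maps directly onto a sigmoid in PyTorch; unlike the hard threshold, its gradient with respect to threshold_feature is nonzero, which is what allows the threshold to be learned.

```python
import torch

def differentiable_binarize(prob: torch.Tensor, thresh: torch.Tensor,
                            k: float = 30.0) -> torch.Tensor:
    """binary_feature = 1 / (1 + exp(-k * (prob - thresh))): a smooth,
    differentiable stand-in for hard thresholding."""
    return torch.sigmoid(k * (prob - thresh))
```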
4.3. The loss function of the binary differentiable deep convolution segmentation network model is
Loss = L_binary_feature + L_probability_feature + L_threshold_feature
The loss function is specifically:
the loss of the binary differentiable deep convolution segmentation network model consists of three parts, namely L_binary_feature, L_probability_feature, and L_threshold_feature.
L_binary_feature and L_probability_feature use the cross-entropy loss function and can be expressed as
L_binary_feature = L_probability_feature = -∑_(i∈S) [y_i·log x_i + (1 - y_i)·log(1 - x_i)]
L_threshold_feature uses the L1 loss function and can be expressed as
L_threshold_feature = ∑_(i∈S) |y_i - x_i|
namely
Loss = L_binary_feature + L_probability_feature + L_threshold_feature
wherein L_binary_feature denotes the binary map loss, L_probability_feature the probability map loss, and L_threshold_feature the threshold map loss.
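A sketch of the three-part loss under the patent's formula, assuming equal weights and per-pixel targets prob_gt (the filled probability-map label) and thresh_gt (the threshold-map label):

```python
import torch.nn.functional as F

def segmentation_loss(prob, binary, thresh, prob_gt, thresh_gt):
    """Loss = L_binary_feature + L_probability_feature + L_threshold_feature:
    cross-entropy on the binary and probability maps, L1 on the threshold map."""
    l_probability = F.binary_cross_entropy(prob, prob_gt)
    l_binary = F.binary_cross_entropy(binary, prob_gt)  # same 0/1 target (assumption)
    l_threshold = F.l1_loss(thresh, thresh_gt)
    return l_binary + l_probability + l_threshold
```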
4.4. Set the other training hyper-parameters; for example, the learning rate lr is initialized to 0.01, the weight parameters are initialized with Xavier initialization, and so on, and the segmentation network is trained.
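A sketch of that setup, reusing the FusionHeads module from above; the optimizer choice and channel counts are assumptions, while lr = 0.01 and Xavier initialization come from the patent.

```python
import torch.nn as nn
import torch.optim as optim

def init_weights(m):
    if isinstance(m, nn.Conv2d):
        nn.init.xavier_uniform_(m.weight)  # Xavier initialization
        if m.bias is not None:
            nn.init.zeros_(m.bias)

model = FusionHeads(in_channels=[64, 128, 256, 512, 512])  # assumed channels
model.apply(init_weights)
optimizer = optim.SGD(model.parameters(), lr=0.01)  # initial lr from the patent
```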
Step five: inputting new video data, acquiring the segmentation areas of the left eye and the right eye in each frame of image based on an MTCNN face detection algorithm and an eye segmentation model, and calculating corresponding areas.
A video stream of duration t is input; the left-eye and right-eye key point coordinates are detected with the MTCNN model; the left-eye and right-eye regions are extracted and input into the binary differentiable deep convolution segmentation network model to obtain the actual extent of the segmented eyeballs; and the segmented areas are computed, recorded as eye_area = [area_1, area_2, ..., area_t].
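A sketch of the area computation; counting pixels above 0.5 in the (soft) binary map is an assumed reading of "calculating the segmented area".

```python
def eye_areas(binary_maps, thr=0.5):
    """Per-frame segmented-eyeball area in pixels; binary_maps is a
    hypothetical list of per-frame binary_feature tensors."""
    return [float((m > thr).sum()) for m in binary_maps]

# eye_area = eye_areas(binary_maps_over_window)  # [area_1, area_2, ..., area_t]
```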
Step six: calculating the eye-closure condition within time t from the computed areas of the left-eye and right-eye segmentation regions, using the formula
close_eye = (area_1 + area_2 + ... + area_t) / (t · max(area_1, area_2, ..., area_t))
wherein close_eye is the eye-closure condition, t is the computed time window, T is the closure threshold, area_i is the area of the eye region segmented at time i, and area_1, area_2, ..., area_t denote the areas of the segmented eye region at the different times.
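Since the original formula appears only as an image, the sketch below encodes one consistent reading of the surrounding description: the window-average area normalized by the window's largest (most open) area, compared against T = 0.5.

```python
def close_eye_ratio(eye_area, T=0.5):
    """Return (ratio, closed): a ratio below T indicates eye closure.
    The max-normalization is a reconstruction, not taken verbatim
    from the patent."""
    t = len(eye_area)
    m = max(eye_area)
    if m == 0:  # no eyeball segmented at all in the window: treat as closed
        return 0.0, True
    ratio = sum(eye_area) / (t * m)
    return ratio, ratio < T
```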
The invention provides, based on face key points, a differentiable binarization human eye segmentation algorithm and a method for calculating whether the eyes are closed, aimed at the problem of driver fatigue. The proposed differentiable binarization segmentation algorithm segments the human eye adaptively, which makes the segmentation more accurate and more robust in complex environments such as strong or dim lighting, and to a certain extent eliminates the post-processing operation that completes segmentation in the current state of the art. The invention further proposes a method for adaptively judging along the time dimension whether the eyes are closed, with the formula
close_eye = (area_1 + area_2 + ... + area_t) / (t · max(area_1, area_2, ..., area_t))
The formula takes the change of state over continuous time into account, better expresses the driver's state within the period, and makes the judgment more reasonable and accurate.
Although the present invention has been described with reference to a preferred embodiment, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (9)

1. A binary differentiable fatigue detection method based on face key points is characterized by comprising the following steps:
step one: collecting face video data of a driver;
step two: acquiring face position information and key point coordinates in each frame of video data;
step three: cutting the left eye and right eye pictures according to the coordinates of the key points to produce a data set;
step four: establishing a binary differentiable depth convolution segmentation network model and training a human eye segmentation model;
step five: inputting new video data, acquiring the segmentation areas of the left eye and the right eye in each frame of image based on an MTCNN face detection algorithm and an eye segmentation model, and calculating corresponding areas;
step six: calculating the eye-closure condition within time t from the computed areas of the left-eye and right-eye segmentation regions, using the formula
close_eye = (area_1 + area_2 + ... + area_t) / (t · max(area_1, area_2, ..., area_t))
wherein close_eye is the eye-closure condition, t is the computed time window, T is the closure threshold, area_i is the area of the eye region segmented at time i, and area_1, area_2, ..., area_t denote the areas of the segmented eye region at the different times.
2. The binary differentiable fatigue detection method based on the face key points as claimed in claim 1, wherein: the first step is specifically: facial video data of the driver are collected by a camera installed in the cockpit, and the camera's field of view reserves room for the driver's range of movement.
3. The binary differentiable fatigue detection method based on the face key points as claimed in claim 1, wherein: the second step is specifically: the collected facial video data are decomposed frame by frame into image data, and each frame is then processed with the MTCNN (Multi-Task Cascaded Convolutional Networks) face detection algorithm to obtain the face position information and key point coordinates in each frame.
4. The binary differentiable fatigue detection method based on the face key points as claimed in claim 3, wherein: the key point coordinates include left eye coordinates, right eye coordinates, nose coordinates, and mouth coordinates.
5. The binary differentiable fatigue detection method based on the face key points as claimed in claim 1, wherein: the third step is specifically as follows:
3.1. From the obtained left eye coordinates (x_left, y_left) and right eye coordinates (x_right, y_right), generate the upper-left and lower-right corner coordinates of the left-eye and right-eye rectangular regions: x_left_top = x_left - 20, y_left_top = y_left - 20, x_left_bottom = x_left + 20, y_left_bottom = y_left + 20; x_right_top = x_right - 20, y_right_top = y_right - 20, x_right_bottom = x_right + 20, y_right_bottom = y_right + 20; then crop the corresponding left-eye and right-eye regions according to these coordinates.
3.2. Make a probability map label and a threshold map label separately: within the rectangular regions, the left eyeball region may be described as D1 and the right eyeball region as D2; compute the perimeter L1 and area A1 of D1 and the perimeter L2 and area A2 of D2, and compute the shrink offsets of D1 and D2 respectively as
d1 = A1(1 - r^2) / L1,  d2 = A2(1 - r^2) / L2
where r = 0.4;
3.3. Shrink D1 to D1' and D2 to D2' using the Vatti clipping algorithm; the interiors of D1' and D2' are filled with the value 0, i.e. D1' and D2' constitute the probability map label;
3.4. Expand D1 to D1'' and D2 to D2'' using the Vatti clipping algorithm, and make the values within the bands D1''-D1 and D2''-D2 follow a normal distribution, i.e. D1'' and D2'' constitute the threshold map label.
6. The binary differentiable fatigue detection method based on the face key points as claimed in claim 1, wherein: the fourth step is specifically as follows:
4.1, unifying the size of the input pictures and ensuring that the size of the input pictures is a multiple of 32;
4.2. First, the input picture passes through a five-layer feature pyramid, each layer halving the picture size relative to the previous layer for a final 32x reduction, and the features produced by every layer are stored; second, the features of each pyramid layer are up-sampled to the same size and all feature layers are fused; then the fused features are convolved, keeping the feature size unchanged, to generate in one branch a probability map probability_feature and in the other a threshold map threshold_feature; finally, a binary map binary_feature is generated from probability_feature and threshold_feature;
4.3. The loss function of the binary differentiable deep convolution segmentation network model is
Loss = L_binary_feature + L_probability_feature + L_threshold_feature
4.4, setting other hyper-parameters of the training.
7. The binary differentiable fatigue detection method based on the face key points as claimed in claim 6, wherein: the production method of the binary image binary _ feature specifically comprises the following steps:
a differentiable binarization equation is adopted to produce the binary map binary_feature, and the differentiable binarization equation can be expressed as
binary_feature(i,j) = 1 / (1 + e^(-k(probability_feature(i,j) - threshold_feature(i,j))))
where k is set to 30 and i, j each index a position in the two-dimensional matrix.
8. The binary differentiable fatigue detection method based on the face key points as claimed in claim 7, wherein: the loss function is specifically:
the loss of the binary differentiable deep convolution segmentation network model consists of three parts, namely L_binary_feature, L_probability_feature, and L_threshold_feature.
L_binary_feature and L_probability_feature use the cross-entropy loss function and can be expressed as
L_binary_feature = L_probability_feature = -∑_(i∈S) [y_i·log x_i + (1 - y_i)·log(1 - x_i)]
L_threshold_feature uses the L1 loss function and can be expressed as
L_threshold_feature = ∑_(i∈S) |y_i - x_i|
namely
Loss = L_binary_feature + L_probability_feature + L_threshold_feature
wherein L_binary_feature denotes the binary map loss, L_probability_feature the probability map loss, and L_threshold_feature the threshold map loss.
9. The binary differentiable fatigue detection method based on the face key points as claimed in claim 7, wherein: the fifth step is specifically: input a video stream of duration t, detect the left-eye and right-eye key point coordinates with the MTCNN model, extract the left-eye and right-eye regions, input the left-eye and right-eye region information into the binary differentiable deep convolution segmentation network model to obtain the actual extent of the segmented eyeballs, and compute the segmented areas, recorded as eye_area = [area_1, area_2, ..., area_t].
CN202111672550.4A 2021-12-31 2021-12-31 Binary differentiable fatigue detection method based on face key points Pending CN114332833A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111672550.4A CN114332833A (en) 2021-12-31 2021-12-31 Binary differentiable fatigue detection method based on face key points

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111672550.4A CN114332833A (en) 2021-12-31 2021-12-31 Binary differentiable fatigue detection method based on face key points

Publications (1)

Publication Number Publication Date
CN114332833A true CN114332833A (en) 2022-04-12

Family

ID=81020334

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111672550.4A Pending CN114332833A (en) 2021-12-31 2021-12-31 Binary differentiable fatigue detection method based on face key points

Country Status (1)

Country Link
CN (1) CN114332833A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116912808A (en) * 2023-09-14 2023-10-20 四川公路桥梁建设集团有限公司 Bridge girder erection machine control method, electronic equipment and computer readable medium
CN116912808B (en) * 2023-09-14 2023-12-01 四川公路桥梁建设集团有限公司 Bridge girder erection machine control method, electronic equipment and computer readable medium

Similar Documents

Publication Publication Date Title
CN109829403B (en) Vehicle anti-collision early warning method and system based on deep learning
CN110119676A (en) A kind of Driver Fatigue Detection neural network based
CN104751600B (en) Anti-fatigue-driving safety means and its application method based on iris recognition
CN107194346A (en) A kind of fatigue drive of car Forecasting Methodology
CN101941425B (en) Intelligent recognition device and method for fatigue state of driver
CN108280397A (en) Human body image hair detection method based on depth convolutional neural networks
CN111242015B (en) Method for predicting driving dangerous scene based on motion profile semantic graph
CN109117788A (en) A kind of public transport compartment crowding detection method merging ResNet and LSTM
CN107315998B (en) Vehicle class division method and system based on lane line
CN105956552A (en) Face black list monitoring method
CN104978567A (en) Vehicle detection method based on scenario classification
CN110310241A (en) A kind of more air light value traffic image defogging methods of fusion depth areas segmentation
CN111553214B (en) Method and system for detecting smoking behavior of driver
CN110717389A (en) Driver fatigue detection method based on generation of countermeasure and long-short term memory network
CN112818775B (en) Forest road rapid identification method and system based on regional boundary pixel exchange
Wang et al. The research on edge detection algorithm of lane
CN114332833A (en) Binary differentiable fatigue detection method based on face key points
Dewangan et al. Towards the design of vision-based intelligent vehicle system: methodologies and challenges
CN112949560A (en) Method for identifying continuous expression change of long video expression interval under two-channel feature fusion
CN116311180A (en) Multi-method fusion fatigue driving detection method
CN115761574A (en) Weak surveillance video target segmentation method and device based on frame labeling
CN110232327B (en) Driving fatigue detection method based on trapezoid cascade convolution neural network
CN115147450B (en) Moving target detection method and detection device based on motion frame difference image
CN113989495B (en) Pedestrian calling behavior recognition method based on vision
CN104091344B (en) Road dividing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination