CN110263663A - Multi-level driver fatigue monitoring method based on multi-dimensional facial features - Google Patents

Multi-level driver fatigue monitoring method based on multi-dimensional facial features Download PDF

Info

Publication number
CN110263663A
CN110263663A (application CN201910455869.8A)
Authority
CN
China
Prior art keywords
image
value
face
fatigue
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910455869.8A
Other languages
Chinese (zh)
Inventor
谢非
吴沛林
王鼎
刘文慧
汪璠
杨继全
韩嘉宁
光蔚然
刘益剑
郭钊利
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Intelligent High-End Equipment Industry Research Institute Co Ltd
Nanjing University
Nanjing Normal University
Original Assignee
Nanjing Intelligent High-End Equipment Industry Research Institute Co Ltd
Nanjing Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Intelligent High-End Equipment Industry Research Institute Co Ltd, Nanjing Normal University filed Critical Nanjing Intelligent High-End Equipment Industry Research Institute Co Ltd
Priority to CN201910455869.8A priority Critical patent/CN110263663A/en
Publication of CN110263663A publication Critical patent/CN110263663A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Abstract

The invention discloses a multi-level driver fatigue monitoring method based on multi-dimensional facial features, comprising: Step 1: capture a sequence of consecutive video frames and determine whether a face is present in each image. If a face is detected in the image, crop the face portion as a face detection image and pass it to the next step; otherwise read the next frame and repeat the check. Step 2: extract the facial feature points and record the position coordinates of each feature point in the image. Step 3: compute the multi-dimensional facial features. Step 4: judge the facial state, which covers whether the driver is dozing, yawning, lowering the head, or turning the head. Step 5: fuse the multi-dimensional facial features and the facial state, compute a fatigue assessment value over K frames, and classify the fatigue level as normal, mild, moderate, or severe according to the assessment value, completing the multi-level driver fatigue monitoring function.

Description

Multi-level driver fatigue monitoring method based on multi-dimensional facial features
Technical field
The present invention relates to the technical fields of intelligent transportation and image processing, and in particular to a multi-level driver fatigue monitoring method based on multi-dimensional facial features.
Background technique
With the accelerated pace of modern life, widespread sleep deprivation, lack of exercise, and irregular diet leave many people easily fatigued, and the resulting fatigued driving has become one of the main causes of traffic accidents and one of the most serious hidden dangers to traffic safety. Statistics show that accidents caused by fatigued driving account for 15% of personal-injury accidents and more than 20% of accident fatalities. Research on multi-level driver fatigue monitoring therefore has great significance and application value for detecting whether a driver is in a fatigued state.
At present the most common means of fatigue detection is driving-behavior analysis, i.e., judging the driver's fatigue state from operations such as steering-wheel rotation and braking behavior. Other approaches include wearable EEG fatigue monitors and ear-mounted drowsiness reminders, which analyze the driver's fatigue state mainly through EEG signal analysis and head-movement angle. The methods of the prior art generally involve a large amount of computation, and their processing speed is not high.
Summary of the invention
In view of the deficiencies of the prior art, the invention discloses a multi-level driver fatigue recognition method based on multi-dimensional facial features, comprising the following steps:
Step 1: capture a sequence of consecutive video frames, read each frame, and detect whether a face is present. If a face region is detected in the current frame, crop the face portion as a face detection image and go to Step 2 for processing; if no face region is detected, skip to the next frame and continue the detection;
Step 2: using a probabilistic random forest method, train a regression model that detects 68 facial feature points from the cropped face detection image; apply the regression model to the face detection image to extract the feature points, and record their position information, i.e., the pixel coordinates of each feature point in the face detection image;
Step 3: construct the multi-dimensional facial features from the feature-point position information. The multi-dimensional facial features comprise a left-eye closure feature, a right-eye closure feature, a mouth closure feature, a mouth-to-face height-ratio feature, and an eye-closure frequency feature;
Step 4: judge the facial state from the multi-dimensional facial features. The facial state covers whether the driver is dozing, yawning, lowering the head, or turning the head;
Step 5: from the multi-dimensional facial features and the facial state, compute a fatigue assessment value over K frames of images, classify the fatigue level as normal, mild fatigue, moderate fatigue, or severe fatigue according to the assessment value, and complete the multi-level driver fatigue recognition.
Step 1 includes the following steps:
Step 1-1: calculate the average image brightness. When the average brightness is greater than 70, the infrared fill lamp is switched off and the infrared monocular camera captures a sequence of color video frames; when the average brightness is less than or equal to 70, the infrared fill lamp is switched on and the camera captures a sequence of grayscale video frames. The average image brightness is computed as:
Brightness = (1 / (3 × W × H)) × Σ_{i,j} [R(i, j) + G(i, j) + B(i, j)]
where Brightness is the average image brightness, W and H are the image width and height in pixels, and R(i, j), G(i, j) and B(i, j) are respectively the red, green and blue component values of the pixel in row i and column j.
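The day/night switching in step 1-1 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the averaged (R + G + B) / 3 form of the brightness calculation and the plain nested-list image layout are assumptions.

```python
# Day/night switching from step 1-1: the average brightness of the incoming
# frame decides whether the infrared fill lamp is switched on.

def average_brightness(image):
    """Mean of (R + G + B) / 3 over all pixels; image is a list of rows
    of (R, G, B) tuples (an assumed layout for illustration)."""
    total = sum((r + g + b) / 3 for row in image for (r, g, b) in row)
    return total / (len(image) * len(image[0]))

def infrared_lamp_on(image, threshold=70):
    """Lamp on (grayscale capture) when brightness <= threshold."""
    return average_brightness(image) <= threshold

bright = [[(200, 200, 200)] * 2] * 2   # daylight-like frame
dark = [[(30, 30, 30)] * 2] * 2        # night-time frame
print(infrared_lamp_on(bright))  # False -> lamp off, capture colour frames
print(infrared_lamp_on(dark))    # True  -> lamp on, capture grayscale frames
```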
Step 1-2: if the frame read is a color face image, obtain the face grayscale image through the color-image grayscale conversion algorithm; if the frame read is already a grayscale face image, go directly to step 1-3;
Step 1-3: compress the resolution of the face grayscale image to 800 × 600 with an image compression algorithm to obtain the compressed image (see Yin Jingxia, "Optimization and implementation of an image compression algorithm", Hunan University, master's thesis, 2017, pp. 56-61). This step reduces the amount of computation in step 1-4 and speeds up detection.
Step 1-4: detect whether a face is present in the compressed image (see Chen Chao, "Design and implementation of a face recognition system based on deep learning", Nanjing University of Posts and Telecommunications, master's thesis, 2018, pp. 15-38). If a face region is detected in the current frame, crop the face portion as the face detection image; if no face region is detected, skip to the next frame and return to step 1-2 for processing.
The color-image grayscale conversion algorithm in step 1-2 obtains the face grayscale image as follows.
The gray value is calculated according to the formula:
face_Gray(i, j) = k_R × R(i, j) + k_G × G(i, j) + k_B × B(i, j)
where face_Gray(i, j) is the gray value of the pixel in row i and column j of the face grayscale image; k_R, k_G and k_B are the weights of the red, green and blue components of the pixel in row i and column j of the face detection image, with values k_R = 0.2973, k_G = 0.6274 and k_B = 0.0753; and R(i, j), G(i, j) and B(i, j) are respectively the red, green and blue component values of the pixel in row i and column j of the face detection image.
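The weighted grayscale conversion of step 1-2 can be sketched directly from the formula. A minimal sketch using the weights given in the text; the nested-list image layout is an assumption for illustration.

```python
# Grayscale conversion with the channel weights given in step 1-2.

K_R, K_G, K_B = 0.2973, 0.6274, 0.0753  # weights from the patent text

def to_gray(image):
    """Convert an RGB image (list of rows of (R, G, B) tuples) to a
    grayscale image via face_Gray = kR*R + kG*G + kB*B."""
    return [
        [K_R * r + K_G * g + K_B * b for (r, g, b) in row]
        for row in image
    ]

# A 1x2 image: a pure red pixel and a pure white pixel.
img = [[(255, 0, 0), (255, 255, 255)]]
gray = to_gray(img)
print(round(gray[0][0], 2))  # red pixel -> 0.2973 * 255 = 75.81
print(round(gray[0][1], 2))  # white pixel -> 255.0 (the weights sum to 1)
```

Note that the three weights sum to exactly 1, so a white pixel maps to full brightness.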
In step 2, the probabilistic random forest method is used on the cropped face detection image to train the regression model that detects the 68 facial feature points, comprising:
Step 2-1-1: acquire the data set. The IBUG public data set (https://ibug.doc.ic.ac.uk/resources) is used for the training and test sets; it contains about N1 (generally 5000) face images, each annotated with 68 facial feature points.
Step 2-1-2: preprocess the data set. To reduce the computation of model training, apply grayscale conversion to the images in the data set, implemented as in step 1-2;
Step 2-1-3: augment the data set. Since the public data set contains only about 5000 images, it cannot meet the needs of regression-model training, so the data set is expanded as follows: the N1 original images are rotated at their original size about the image center, clockwise by 15°, 30°, 45°, 60° and 75° in turn, and likewise counterclockwise by 15°, 30°, 45°, 60° and 75°. Because only the face portion of each image is used, the corner-filling strategy requiring the least data processing is chosen: the gray values of the blank corner regions produced by the rotation are all set to 0. Finally, all images in the expanded data set are used as the training grayscale images;
Step 2-1-4: construct the pixel-difference feature. The pixel-difference feature is the difference between four times a pixel's gray value and the gray values of its four neighbours:
feature = 4 × f(a, b) − f(a−1, b) − f(a+1, b) − f(a, b−1) − f(a, b+1)
where f(a, b) is the pixel gray value at coordinate (a, b) of the training grayscale image. The coordinate (a, b) is chosen within a pixel radius r centered on the feature point estimated in the previous stage. The pixel radius r means that the abscissa of the chosen pixel differs from the abscissa of the estimated feature point by at most r pixels, or its ordinate differs from the ordinate of the estimated feature point by at most r pixels. As the cascade level increases, the chosen pixel radius r shrinks correspondingly: r is initialized to 5 here, and at the end of each cascade stage r is decreased by 1 until it reaches 0.
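The pixel-difference feature of step 2-1-4 is a negated discrete Laplacian: four times the center pixel minus its four axis-aligned neighbours. A minimal sketch on a plain 2-D list image; real use would sample (a, b) within the radius r around an estimated feature point.

```python
# Pixel-difference feature from step 2-1-4.

def pixel_diff_feature(img, a, b):
    """4*f(a,b) minus the four axis-aligned neighbours; img is a 2-D list
    of gray values, and (a, b) must not lie on the image border."""
    return (4 * img[a][b]
            - img[a - 1][b] - img[a + 1][b]
            - img[a][b - 1] - img[a][b + 1])

# On a constant image the feature is 0; a bright centre pixel stands out.
flat = [[10] * 3 for _ in range(3)]
spike = [row[:] for row in flat]
spike[1][1] = 50

print(pixel_diff_feature(flat, 1, 1))   # 0
print(pixel_diff_feature(spike, 1, 1))  # 4*50 - 4*10 = 160
```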
Step 2-1-5: choose the optimal tree-node split feature. Using the feature value defined in step 2-1-4, compute the feature value of the t-th annotated feature point in all training samples, and randomly select one of these feature values as the threshold ε, where a training sample is any one image among the training grayscale images and t ranges over 1 to 68, the ordinal of the annotated facial feature point. When a training sample's feature value is less than ε, the sample is assigned to the left node; when its feature value is greater than or equal to ε, the sample is assigned to the right node;
Compute the sample feature-value variance of the left and right nodes, denoted varL and varR respectively, and compute the overall variance var:
var = (Nl × varL + Nr × varR) / N
where N is the total number of training samples at the current node, and Nl and Nr are respectively the sample counts of the left node and the right node. Choose the t-th feature point with the smallest overall variance as the predicted feature point, and save the corresponding coordinate information and the threshold ε as the model of the t-th feature point. For the 68 annotated facial feature points, 68 different models are finally obtained;
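The node-split scoring of step 2-1-5 can be sketched as follows: samples go left or right of a random threshold on the feature value, and the split with the smallest overall variance is kept. The sample-count-weighted form of the overall variance is an assumption made for this sketch, as is the toy feature data.

```python
# Node-split scoring from step 2-1-5 (weighted-variance form assumed).

def variance(xs):
    """Population variance of a non-empty list."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def split_score(features, eps):
    """Overall variance after splitting samples at threshold eps:
    var = (Nl*varL + Nr*varR) / N, smaller is better."""
    left = [f for f in features if f < eps]
    right = [f for f in features if f >= eps]
    n = len(features)
    var_l = variance(left) if left else 0.0
    var_r = variance(right) if right else 0.0
    return (len(left) * var_l + len(right) * var_r) / n

feats = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8]
print(split_score(feats, 3.0))   # clean split -> small weighted variance
print(split_score(feats, 1.05))  # poor split -> larger weighted variance
```

Among candidate thresholds, the one minimizing this score would be saved together with the sampled pixel coordinate as the node's split model.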
Step 2-1-6: compute the local probability. Its calculation formula is:
p(fi) = num(fi) / Σ_{k=1..I} num(fk)
where p(fi) is the final probability value computed at the fi-th leaf node, num(fi) is the number of training samples that fall into the fi-th leaf node, and I is the number of all leaf nodes, here taken as 68.
Step 2-1-7: multi-model fusion based on averaging. Each trained model can, to a certain degree, correctly predict the position of one feature point. To enhance the accuracy of the output feature-point coordinates, the outputs of the different models for the same input are combined by averaging:
Final = (1/M) × Σ_{ii=1..M} s_ii(x_jj, y_jj), for jj = 1, …, L
where M is the number of models, L is the number of feature points per sample (L = 68 in this example), and s_ii(x_jj, y_jj) is the result returned by the ii-th model for the jj-th feature point. Final denotes all feature-point positions finally determined after combining the models' results for each feature point, and x_jj and y_jj are respectively the abscissa and ordinate of the jj-th feature point;
Step 2-1-8: take the face detection image as input, feed it into the models trained in step 2-1-5, and output the positions of the 68 feature points through the multi-model fusion of step 2-1-7; the position information is the pixel location of each of the 68 feature points. The abscissa of the n-th feature point is denoted xn and its ordinate yn, where n is the ordinal of the point, 1 to 68.
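The averaging fusion of step 2-1-7 reduces to a per-landmark mean over the M models' predictions. A minimal sketch with made-up coordinates; real models would supply the per-point regressions.

```python
# Multi-model fusion by averaging (step 2-1-7).

def fuse_landmarks(predictions):
    """predictions: list of M lists of (x, y) tuples, one list per model,
    each of length L. Returns the per-point mean over the M models."""
    m = len(predictions)
    n_points = len(predictions[0])
    fused = []
    for j in range(n_points):
        x = sum(p[j][0] for p in predictions) / m
        y = sum(p[j][1] for p in predictions) / m
        fused.append((x, y))
    return fused

# Three models predicting two landmarks each.
preds = [
    [(10.0, 20.0), (30.0, 40.0)],
    [(12.0, 22.0), (28.0, 38.0)],
    [(11.0, 21.0), (32.0, 42.0)],
]
print(fuse_landmarks(preds))  # [(11.0, 21.0), (30.0, 40.0)]
```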
Step 3 includes the following steps:
Step 3-1: from the feature-point position information, compute the left-eye closure feature and the right-eye closure feature separately; the left-eye closure feature is denoted left_eye and the right-eye closure feature right_eye, and their calculation formulas are as follows:
Step 3-2: from the feature-point position information, compute the mouth closure feature, denoted mouth; its calculation formula is as follows:
Step 3-3: from the feature-point position information, compute the mouth-to-face height-ratio feature, denoted mouth_face; its calculation formula is as follows:
where abs denotes the absolute value;
Step 3-4: from the left-eye and right-eye closure features obtained in step 3-1, compute the eye-closure frequency feature in real time, denoted eye_degree, with the following formula:
where K is a fixed value adjusted according to the frame rate, set to 5 times the image-processing frame rate; here the image-processing frame rate is set to 10, so K = 50. Δt_s is the processing time of the s-th frame. The eye-closure frequency feature is used, rather than the left- or right-eye closure feature directly, in order to reject the influence of rapid blinking on the facial-state judgment.
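The windowed accumulation behind step 3-4 can be sketched as follows. The patent's exact eye_degree formula is not reproduced above, so this sketch assumes a simple form: per-frame closure accumulated over the last K frames, which naturally suppresses ordinary blinks (a short closure barely moves the total). The closure threshold 0.2 is an illustrative assumption, not a value from the text.

```python
# Sliding-window eye-closure accumulation (assumed form of step 3-4).

from collections import deque

class EyeClosureWindow:
    def __init__(self, k=50):
        self.window = deque(maxlen=k)  # K = 5 x processing frame rate

    def update(self, left_eye, right_eye):
        """Record one frame and return the closed-frame count in the window."""
        # Treat a frame as "closed" when both closure features are small.
        closed = 1 if (left_eye < 0.2 and right_eye < 0.2) else 0
        self.window.append(closed)
        return sum(self.window)

w = EyeClosureWindow(k=5)
degrees = [w.update(l, r) for l, r in
           [(0.3, 0.3), (0.1, 0.1), (0.1, 0.15), (0.3, 0.3), (0.1, 0.1)]]
print(degrees)  # [0, 1, 2, 2, 3]
```

A sustained closure drives the count toward K, while a single blink contributes at most one or two frames before sliding out of the window.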
In step 3, only 18 of the 68 feature points are used when computing the left-eye closure, right-eye closure, mouth closure and mouth-to-face height-ratio features, which keeps the computation simple and ensures real-time processing speed; the selected feature points are strongly associated with the computed features, giving high accuracy and clear descriptive significance.
Step 4 includes the following steps:
Step 4-1: judge from the eye-closure frequency feature whether the driver is dozing. If
eye_degree > thr1,
the driver is judged to be dozing and sleep = 1;
if
eye_degree ≤ thr1,
the driver is judged not to be dozing and sleep = 0. Here sleep indicates whether the driver is in a dozing state.
thr1 is a set threshold whose value equals 50 times the frame rate; with the image-processing frame rate set to 10, thr1 = 500;
Step 4-2: judge from the mouth closure feature mouth and the mouth-to-face height-ratio feature mouth_face whether the driver is lowering the head, turning the head, or yawning. When
thr4 > mouth > thr2 and mouth_face ≥ thr3,
the driver is judged to be lowering the head and bow = 1; otherwise bow = 0. Here bow indicates whether the driver is in a head-lowered state.
When
thr4 > mouth > thr2 and mouth_face < thr3,
the driver is judged to be turning the head and swivel = 1; otherwise swivel = 0. Here swivel indicates whether the driver is in a head-turning state.
When
mouth ≥ thr4,
the driver is judged to be yawning and yawn = 1; otherwise yawn = 0. Here yawn indicates whether the driver is in a yawning state.
thr2, thr3 and thr4 are set thresholds: thr2 is set to 0.25; thr3 is set to the driver's mouth-to-face height ratio in the non-fatigued state; thr4 is set to 0.5.
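The decisions of step 4 can be combined into one function. A sketch following the thresholds in the text (thr1 = 50 × frame rate, thr2 = 0.25, thr4 = 0.5); thr3 is the driver's resting mouth-to-face ratio and is passed in, since the text calibrates it per driver. Treating the mouth_face = thr3 boundary as "bow" rather than "swivel" is an assumption of this sketch.

```python
# Facial-state decisions from step 4.

def facial_state(eye_degree, mouth, mouth_face,
                 frame_rate=10, thr2=0.25, thr3=1.0, thr4=0.5):
    """Return the four binary state flags sleep/bow/swivel/yawn."""
    thr1 = 50 * frame_rate                      # dozing threshold
    sleep = 1 if eye_degree > thr1 else 0
    bow = 1 if (thr2 < mouth < thr4 and mouth_face >= thr3) else 0
    swivel = 1 if (thr2 < mouth < thr4 and mouth_face < thr3) else 0
    yawn = 1 if mouth >= thr4 else 0
    return {"sleep": sleep, "bow": bow, "swivel": swivel, "yawn": yawn}

print(facial_state(eye_degree=600, mouth=0.3, mouth_face=1.2))
# sleep=1 (600 > 500), bow=1 (0.25 < 0.3 < 0.5 and 1.2 >= 1.0)
print(facial_state(eye_degree=100, mouth=0.6, mouth_face=0.8)["yawn"])  # 1
```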
Step 5 includes the following steps:
Step 5-1: compute the fatigue assessment value over K frames, denoted fatigue_degree, with the following formula:
where K is a fixed value adjusted according to the frame rate, set to 5 times the frame rate, and Δt_s is the processing time of the s-th frame;
Step 5-2: classify the fatigue level as normal, mild fatigue, moderate fatigue or severe fatigue according to the fatigue assessment value:
when
fatigue_degree ≤ degree1,
the fatigue level is normal;
when
degree1 < fatigue_degree ≤ degree2,
the fatigue level is mild fatigue;
when
degree2 < fatigue_degree ≤ degree3,
the fatigue level is moderate fatigue;
when
fatigue_degree > degree3,
the fatigue level is severe fatigue.
degree1, degree2 and degree3 in step 5-2 are fatigue thresholds whose selection depends on the actual processing frame rate: degree1 is 100 times the image-processing frame rate, degree2 is 250 times, and degree3 is 350 times. With the image-processing frame rate set to 10, degree1 = 1000, degree2 = 2500 and degree3 = 3500.
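The grading of step 5-2 can be sketched directly. The thresholds scale with the image-processing frame rate as stated in the text: degree1/degree2/degree3 are 100×, 250× and 350× the frame rate (1000, 2500 and 3500 at 10 fps).

```python
# Fatigue grading from step 5-2.

def fatigue_grade(fatigue_degree, frame_rate=10):
    """Map a fatigue assessment value to one of the four fatigue levels."""
    d1, d2, d3 = 100 * frame_rate, 250 * frame_rate, 350 * frame_rate
    if fatigue_degree <= d1:
        return "normal"
    if fatigue_degree <= d2:
        return "mild fatigue"
    if fatigue_degree <= d3:
        return "moderate fatigue"
    return "severe fatigue"

print(fatigue_grade(800))   # normal
print(fatigue_grade(1800))  # mild fatigue
print(fatigue_grade(3000))  # moderate fatigue
print(fatigue_grade(4000))  # severe fatigue
```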
Through the implementation of the above technical solution, the beneficial effects of the present invention are: (1) it solves the problems of blink misjudgment, speech-movement interference, distinguishing dozing from yawning, and multi-level fatigue monitoring, with strong anti-interference capability and good multi-level fatigue recognition rate and stability; (2) it establishes multi-dimensional facial features, which enhances the accuracy of the fatigue-level judgment and facilitates adjustment of the fatigue grading; (3) it applies grayscale conversion and cropping to the images, which greatly reduces the amount of computation and significantly improves processing speed.
The present invention provides a multi-level driver fatigue monitoring method built on multi-dimensional facial features that solves the problems of blink misjudgment, speech-movement interference, distinguishing dozing from yawning, and multi-level fatigue monitoring, laying a technical foundation for, and providing a safeguard to, intelligent transportation and safe driving.
Detailed description of the invention
The present invention is further illustrated below with reference to the accompanying drawings and the detailed embodiments, from which the above and other advantages of the invention will become apparent.
Fig. 1 is a schematic workflow diagram of the method of the present invention.
Fig. 2 shows the relative positions of the 68 facial feature points provided in an embodiment of the present invention.
Fig. 3 is a blink-rejection recognition result provided in an embodiment of the present invention.
Fig. 4 is a head-turning recognition result provided in an embodiment of the present invention.
Fig. 5 is a yawning recognition result provided in an embodiment of the present invention.
Fig. 6 is a detection image in which the fatigue level is determined to be normal, provided in an embodiment of the present invention.
Fig. 7 is a detection image in which the fatigue level is determined to be mild fatigue, provided in an embodiment of the present invention.
Fig. 8 is a detection image in which the fatigue level is determined to be moderate fatigue, provided in an embodiment of the present invention.
Fig. 9 is a detection image in which the fatigue level is determined to be severe fatigue, provided in an embodiment of the present invention.
Specific embodiment
The present invention will be further described with reference to the accompanying drawings and embodiments.
Embodiment
In this example, face color images are captured by a CCD industrial camera, and the color images containing the face are processed by a computer.
Referring to Fig. 1, which is a schematic workflow diagram of the multi-level driver fatigue monitoring method constructed on multi-dimensional facial features in the embodiment of the present invention, the method comprises the following steps:
Step 1: capture a sequence of consecutive video frames with the infrared monocular camera, read each frame, and detect whether a face is present. If a face region is detected in the frame, crop the face portion as the face detection image and go to step 2 for processing; if no face region is detected in the frame, skip to the next frame and repeat the image preprocessing and detection.
Step 2: extract the 68 feature points in the face detection image using the random-forest-based facial feature point extraction, and record the feature-point position information in the face detection image, i.e., the pixel coordinates of the feature points in the face detection image.
Step 3: construct the multi-dimensional facial features from the feature-point position information recorded in step 2. The multi-dimensional facial features comprise the eye closure features, the mouth closure feature, the mouth-to-face height-ratio feature, and the eye-closure frequency feature.
Step 4: judge the facial state from the multi-dimensional facial features constructed in step 3. The facial state covers whether the driver is dozing, yawning, lowering the head, or turning the head.
Step 5: from the multi-dimensional facial features of step 3 and the facial state of step 4, compute the fatigue assessment value over K frames, classify the fatigue level as normal, mild fatigue, moderate fatigue, or severe fatigue according to the assessment value, and complete the multi-level driver fatigue monitoring.
The steps of the present invention are described further below with reference to the drawings and the specific embodiment.
Step 1 comprises:
Step 1-1: calculate the average image brightness. When the average brightness is greater than 70, the infrared fill lamp is switched off and the infrared monocular camera captures a sequence of color video frames; when the average brightness is less than or equal to 70, the infrared fill lamp is switched on and the camera captures a sequence of grayscale video frames. The average image brightness is computed as:
Brightness = (1 / (3 × W × H)) × Σ_{i,j} [R(i, j) + G(i, j) + B(i, j)]
where Brightness is the average image brightness, W and H are the image width and height in pixels, and R(i, j), G(i, j) and B(i, j) are respectively the red, green and blue component values of the pixel in row i and column j.
Step 1-2: if the frame read is a color face image, obtain the face grayscale image through the color-image grayscale conversion algorithm, according to the formula:
face_Gray(i, j) = k_R × R(i, j) + k_G × G(i, j) + k_B × B(i, j)
where face_Gray(i, j) is the gray value of the pixel in row i and column j of the face grayscale image; k_R, k_G and k_B are the weights of the red, green and blue components of each pixel in the face color image, with values k_R = 0.2973, k_G = 0.6274 and k_B = 0.0753; and R(i, j), G(i, j) and B(i, j) are respectively the red, green and blue component values of the pixel in row i and column j.
Step 1-3: compress the resolution of the face grayscale image to 800 × 600 with an image compression algorithm (see Yin Jingxia, "Optimization and implementation of an image compression algorithm", Hunan University, master's thesis, 2017, pp. 56-61) to obtain the compressed image. This step reduces the amount of computation in step 1-4 and speeds up detection.
Step 1-4: detect whether a face is present in the compressed image (see Chen Chao, "Design and implementation of a face recognition system based on deep learning", Nanjing University of Posts and Telecommunications, master's thesis, 2018, pp. 15-38). In this embodiment a face region is detected in the frame, and the face portion is cropped as the face detection image, finally yielding the face grayscale image.
Referring to Fig. 2, which shows the relative positions of the 68 facial feature points provided in the embodiment of the present invention, the 68 facial feature points in the face grayscale image are extracted in step 2. Step 2 comprises:
training, with the probabilistic random forest method, the regression model that detects the 68 facial feature points from the cropped face detection image. The training proceeds as follows:
Step 2-1-1: acquire the data set. The IBUG public data set (https://ibug.doc.ic.ac.uk/resources) is used for the training and test sets; it contains about 5000 face images, each annotated with 68 facial feature points.
Step 2-1-2: preprocess the data set. To reduce the computation of model training, the data set is first preprocessed to obtain the training grayscale images; the preprocessing here is the grayscale conversion, implemented as in step 1-2.
Step 2-1-3: augment the data set. Since the public data set contains only about 5000 images, it cannot meet the needs of regression-model training, so the data set is expanded as follows: every image in the data set is rotated at its original size through the ten rotation angles ±15°, ±30°, ±45°, ±60° and ±75° in turn. The expanded images generated from the rotated originals are padded outward on all four sides so that the generated images have the same size.
Step 2-1-4: construct the pixel-difference feature. The selected feature is the difference between four times a pixel's gray value and the gray values of its four neighbours:
feature = 4 × f(a, b) − f(a−1, b) − f(a+1, b) − f(a, b−1) − f(a, b+1)
where f(a, b) is the pixel gray value at coordinate (a, b) of the training grayscale image. The coordinate (a, b) is chosen within a pixel radius r centered on the feature point estimated in the previous stage. The pixel radius r means that the abscissa of the chosen pixel differs from the abscissa of the estimated feature point by at most r pixels, or its ordinate differs from the ordinate of the estimated feature point by at most r pixels. As the cascade level increases, the chosen pixel radius r shrinks correspondingly: r is initialized to 5 here, and at the end of each cascade stage r is decreased by 1 until it reaches 0.
Step 2-1-5: choose the optimal tree-node split feature. Using the feature value defined in the previous step, compute the feature values of all points, and randomly select one of them as the threshold ε; compute the feature value of every training sample of the current node under this feature. When the feature value is less than ε, the sample is assigned to the left node; when the feature value is greater than or equal to ε, the sample is assigned to the right node.
Then compute the sample feature-value variance of the left and right nodes, denoted varL and varR respectively, and the overall variance:
var = (Nl × varL + Nr × varR) / N
where N is the total number of samples at the current node, and Nl and Nr are respectively the sample counts of the left node and the right node. Choose the candidate feature with the smallest variance as the predicted feature, and save the corresponding coordinate information and the threshold ε as the model of one feature point. Since 68 feature points are detected, 68 different models are finally obtained;
Step 2-1-6: compute the feature value of the local probability, with the following formula:

p(fi) = num(fi) / (num(f1) + num(f2) + ... + num(fI))
where p(fi) is the final probability value computed for the fi-th leaf node, num(fi) is the number of training samples falling into the fi-th leaf node, and I is the number of all leaf nodes; here the value of I is 68.
Step 2-1-7: multi-model fusion based on averaging: each trained model can, to a certain degree, correctly predict the position of one feature point. To enhance the accuracy of the output feature-point coordinate information, this patent processes the results that the same input produces under the different models by averaging:

Final(xjj, yjj) = (1/M) × Σ(ii = 1..M) sii(xjj, yjj)
where M is the number of models and L is the number of feature points each sample contains, L = 68 in this example; sii(xjj, yjj) denotes the result that the ii-th model returns for the jj-th feature point.
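The averaging fusion reduces to a mean over the model axis. The (M, L, 2) array layout below, M models by L landmarks by (x, y), is an assumed representation:

```python
import numpy as np

def fuse_models(predictions):
    """Averaging-based multi-model fusion (step 2-1-7): the final position of
    each landmark is the mean of the M per-model predictions.

    predictions: array-like of shape (M, L, 2), i.e. M models, L landmarks,
    each an (x, y) pixel coordinate."""
    preds = np.asarray(predictions, dtype=float)
    return preds.mean(axis=0)  # shape (L, 2): fused landmark coordinates
```

Averaging suppresses the independent per-model localisation noise, which is the stated motivation for this step.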
Step 2-1-8: take the face-detection image as input, feed it into the models trained in step 2-1-5, and finally output the position information of the 68 points; the position information is the pixel location of each of the 68 feature points. The abscissa of the n-th feature point is denoted xn and its ordinate yn, where n is the ordinal 1 to 68.
In an embodiment of the present invention, step 3 comprises:
Step 3-1: according to the feature-point position information saved in step 2, compute the left-eye closure feature and the right-eye closure feature separately. The left-eye closure feature is denoted left_eye and the right-eye closure feature right_eye; the calculation formulas are as follows:
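The exact left_eye/right_eye formulas are not reproduced in this text. A common stand-in for an eye-closure measure over a 68-point annotation is the eye aspect ratio (EAR), sketched here under the assumption of dlib-style 0-based eye indices (left eye 36-41, right eye 42-47):

```python
import numpy as np

# dlib-style 68-point indexing (0-based): an assumption, not from the patent.
LEFT_EYE = range(36, 42)
RIGHT_EYE = range(42, 48)

def eye_closure(pts, idx):
    """EAR-style closure: mean of the two vertical eyelid distances divided
    by the horizontal eye width. Smaller values mean a more closed eye.
    pts: sequence of 68 (x, y) landmark coordinates."""
    p = np.asarray([pts[i] for i in idx], dtype=float)
    v1 = np.linalg.norm(p[1] - p[5])  # upper-lower lid pair 1
    v2 = np.linalg.norm(p[2] - p[4])  # upper-lower lid pair 2
    h = np.linalg.norm(p[0] - p[3])   # eye corner to eye corner
    return (v1 + v2) / (2.0 * h)
```

The left_eye and right_eye features would then be `eye_closure(pts, LEFT_EYE)` and `eye_closure(pts, RIGHT_EYE)`.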
Step 3-2: according to the feature-point position information saved in step 2, compute the mouth closure feature, denoted mouth; the calculation formula is as follows:
Step 3-3: according to the feature-point position information saved in step 2, compute the mouth-to-face height ratio feature, denoted mouth_face; the calculation formula is as follows:
where abs denotes taking the absolute value.
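The mouth and mouth_face formulas are likewise not reproduced in this text. The sketch below uses hypothetical dlib-style landmark indices (inner-lip points 62/66, mouth corners 60/64, nose-bridge top 27, chin 8) and is only one plausible reading of the two features:

```python
import numpy as np

def mouth_closure(pts):
    """Mouth openness as vertical inner-lip gap over mouth width.
    The specific 18-point formula of the patent is an assumption here."""
    p = lambda i: np.asarray(pts[i], dtype=float)
    return np.linalg.norm(p(62) - p(66)) / np.linalg.norm(p(60) - p(64))

def mouth_face_ratio(pts):
    """mouth_face: mouth height relative to face height, using abs() as the
    text notes ('abs denotes taking the absolute value')."""
    p = lambda i: np.asarray(pts[i], dtype=float)
    mouth_h = abs(p(62)[1] - p(66)[1])   # vertical lip gap
    face_h = abs(p(27)[1] - p(8)[1])     # nose bridge to chin
    return mouth_h / face_h
```

A wide-open mouth raises mouth_closure toward and past the yawn threshold, while a lowered head compresses the apparent face height and raises mouth_face.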
Referring to Fig. 3, Fig. 3 is an anti-blink recognition result provided by an embodiment of the present invention. In the figure the eyes are in the closed state, yet the fatigue level is not raised. The meaning of the parameters on the left of the figure is as follows: degree indicates the fatigue level, and degree-0 indicates the fatigue level is at its lowest, i.e. the fatigue level is normal. Meanwhile no yawning, head-turned or head-lowered state is detected from the facial features, so the status shown in the upper-left corner is Normal. In an embodiment of the present invention, step 3-4 computes the eye-closure frequency feature in real time; this step does not use the left-eye or right-eye closure feature directly, and the purpose of using the eye-closure frequency feature is to eliminate the influence of rapid blinking on the judgement of the facial state. Step 3-4 comprises:
Step 3-4: according to the left-eye closure feature and right-eye closure feature obtained in step 3-1, compute the eye-closure frequency feature in real time; the eye-closure frequency feature is denoted eye_degree, and the formula is as follows:
where K is a fixed value modified according to the frame rate; its value is 5 times the image-processing frame rate. The image-processing frame rate is set to 10 here, so K = 50. Δts is the processing time of the s-th frame.
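The eye_degree formula itself is not reproduced in this text; one way to realise an eye-closure frequency over the last K frames is a sliding window counting closed frames. The per-frame closed/open threshold `closed_thr` below is an assumption:

```python
from collections import deque

class EyeClosureFrequency:
    """Sliding-window eye-closure frequency over the last K frames,
    K = 5 * frame_rate as in the text (frame rate 10 -> K = 50).
    A frame counts as 'closed' when either eye's closure feature falls
    below closed_thr (the threshold value is an assumption)."""
    def __init__(self, frame_rate=10, closed_thr=0.2):
        self.K = 5 * frame_rate
        self.closed_thr = closed_thr
        self.window = deque(maxlen=self.K)  # one bool per processed frame

    def update(self, left_eye, right_eye):
        self.window.append(min(left_eye, right_eye) < self.closed_thr)
        return sum(self.window)  # closed-frame count within the window
```

Because a blink closes the eyes for only a few frames, its contribution to the window count stays small, which matches the stated purpose of rejecting rapid blinks.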
In step 3, in the course of computing the left-eye closure feature, right-eye closure feature, mouth closure feature and mouth-to-face height ratio feature, only 18 of the 68 feature points are selected, which has the advantage of simple computation and guarantees real-time operating speed; the selected feature points are strongly associated with the computed features, with high accuracy and clear descriptive meaning.
In an embodiment of the present invention, step 4 comprises:
Step 4-1: according to the eye-closure frequency feature computed in step 3, judge whether the user is dozing off. If
eye_degree > thr1
the user is judged to be dozing off, and sleep = 1 is set at this time;
if
eye_degree ≤ thr1
the user is judged not to be dozing off, and sleep = 0 is set at this time.
thr1 in the formulas is a set threshold; the choice of thr1 is influenced by the image-processing frame rate, and here the value of thr1 equals 50 times the frame rate. The image-processing frame rate is set to 10, so thr1 = 500.
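The doze-off rule of step 4-1 is a single frame-rate-scaled threshold comparison:

```python
def is_dozing(eye_degree, frame_rate=10):
    """Step 4-1 decision: thr1 = 50 * frame rate (frame rate 10 -> 500);
    sleep = 1 iff eye_degree > thr1, else sleep = 0."""
    thr1 = 50 * frame_rate
    return 1 if eye_degree > thr1 else 0
```

Scaling thr1 with the frame rate keeps the decision consistent when eye_degree is accumulated over more (or fewer) processed frames per second.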
Step 4-2: according to the mouth value and mouth_face value computed in step 3, determine whether the head is lowered, whether the head is turned and whether the user is yawning. When
thr4 > mouth > thr2 and mouth_face ≥ thr3
the head is judged to be lowered, and bow = 1 is set at the same time; otherwise bow = 0. When
thr4 > mouth > thr2 and mouth_face ≤ thr3
the head is judged to be turned, as shown in Fig. 4, which is a one-frame head-turn recognition result provided by an embodiment of the present invention; the facial features in the figure are in the head-turned state. At the same time swivel = 1 is set; otherwise swivel = 0. The meaning of the parameters on the left of the figure is as follows: degree indicates the fatigue level, and degree-0 indicates the fatigue level is at its lowest, i.e. the fatigue level is normal. Meanwhile the facial features show a head-turn, but no yawning or head-lowered state is detected, so the upper-left status display is Normal and Swivel, indicating that only the head-turned state is detected.
When
mouth ≥ thr4
the user is judged to be yawning, as shown in Fig. 5, which is a one-frame yawn recognition result provided by an embodiment of the present invention; the facial features in the figure are in the yawning state. At the same time yawn = 1 is set; otherwise yawn = 0. The meaning of the parameters on the left of the figure is as follows: degree indicates the fatigue level, and degree-1 indicates the fatigue level is at a lower level, i.e. the fatigue level is slight fatigue. Meanwhile the facial features show yawning, but no head-turned or head-lowered state is detected, so the upper-left status display is Normal and Yawn, indicating that only the yawning state is detected.
thr2 and thr3 in the formulas are set thresholds: thr2 is set to 0.25; thr3 is set to the mouth-to-face height ratio feature of the face in the non-fatigued state; thr4 is set to 0.5.
In an embodiment of the present invention, step 5 comprises:
Step 5-1: compute the fatigue assessment value over K frames; the fatigue assessment value is denoted fatigue_degree, and the formula is as follows:
K in the formula is a fixed value whose value is 5 times the image-processing frame rate; the image-processing frame rate is set to 10 in this example, so K is 50. Δts is the processing time of the s-th frame.
Step 5-2: according to the fatigue assessment value in step 5-1, divide the fatigue level into four grades: normal, slight fatigue, moderate fatigue and severe fatigue:
When
fatigue_degree ≤ degree1
the fatigue level is: normal. As shown in Fig. 6, Fig. 6 is a detection image in which the fatigue level is judged normal, provided by an embodiment of the present invention. The meaning of the parameters on the left of the figure is as follows: degree indicates the fatigue level, and degree-0 indicates the fatigue level is at its lowest, i.e. the fatigue level is normal. Meanwhile no yawning, head-turned or head-lowered state is detected from the facial features, so the upper-left status display is Normal.
When
fatigue_degree > degree1 and fatigue_degree ≤ degree2
the fatigue level is: slight fatigue. As shown in Fig. 7, Fig. 7 is a detection image in which the fatigue level is judged slight fatigue, provided by an embodiment of the present invention. The meaning of the parameters on the left of the figure is as follows: degree indicates the fatigue level, and degree-1 indicates the fatigue level is at a lower state, i.e. the fatigue level is slight fatigue. Meanwhile no yawning, head-turned or head-lowered state is detected from the facial features, so the upper-left status display is Normal.
When
fatigue_degree > degree2 and fatigue_degree ≤ degree3
the fatigue level is: moderate fatigue. As shown in Fig. 8, Fig. 8 is a detection image in which the fatigue level is judged moderate fatigue, provided by an embodiment of the present invention. The meaning of the parameters on the left of the figure is as follows: degree indicates the fatigue level, and degree-2 indicates the fatigue level is at a middling state, i.e. the fatigue level is moderate fatigue. Meanwhile no yawning, head-turned or head-lowered state is detected from the facial features, so the upper-left status display is Normal.
When
fatigue_degree > degree3
the fatigue level is: severe fatigue. As shown in Fig. 9, Fig. 9 is a detection image in which the fatigue level is judged severe fatigue, provided by an embodiment of the present invention. The meaning of the parameters on the left of the figure is as follows: degree indicates the fatigue level, and degree-3 indicates the fatigue level is at its highest state, i.e. the fatigue level is severe fatigue. Meanwhile the facial features show a head-lowered state, but no yawning or head-turned state is detected, so the upper-left status display is Normal and Nodding head.
degree1, degree2 and degree3 in the above step 5-2 formulas are fatigue thresholds, and the choice of thresholds is influenced by the actual processing frame rate. Preferably, degree1 takes a value of 100 times the processing frame rate, degree2 takes 250 times the processing frame rate, and degree3 takes 350 times the processing frame rate. The image-processing frame rate is set to 10 in this example, so here degree1 = 1000, degree2 = 2500 and degree3 = 3500.
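The four-grade classification of step 5-2 with the frame-rate-scaled thresholds can be sketched as:

```python
def fatigue_level(fatigue_degree, frame_rate=10):
    """Step 5-2 grading. Thresholds scale with the processing frame rate:
    degree1/2/3 = 100/250/350 times the frame rate
    (frame rate 10 -> 1000 / 2500 / 3500)."""
    d1, d2, d3 = 100 * frame_rate, 250 * frame_rate, 350 * frame_rate
    if fatigue_degree <= d1:
        return "normal"
    if fatigue_degree <= d2:
        return "slight fatigue"
    if fatigue_degree <= d3:
        return "moderate fatigue"
    return "severe fatigue"
```

The boundary values belong to the lower grade, matching the ≤ comparisons in the text.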
The present invention provides a driver multi-level fatigue monitoring method based on multidimensional facial features. There are many specific methods and approaches for implementing this technical solution, and the above is only a preferred embodiment of the present invention. It should be noted that for those of ordinary skill in the art, various improvements and modifications may be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention. Each component part not specified in this embodiment can be implemented using existing technology.

Claims (8)

1. A driver multi-level fatigue monitoring method based on multidimensional facial features, characterized by comprising the following steps:
Step 1: acquire a continuous video frame sequence, read each frame image, and detect whether a face is present; if a face region is detected in the current frame image, crop the face part of the image out separately as the face-detection image and go to step 2 for processing; if no face region is detected, jump to the next frame image and continue detecting whether a face is present;
Step 2: on the cropped face-detection image, use a method based on a probability random forest to train a regression model for detecting 68 facial feature points, extract the facial feature points from the face-detection image with the regression model, and record the feature-point position information in the face-detection image, i.e. the pixel coordinates of the feature points in the face-detection image;
Step 3: construct the multidimensional facial features according to the feature-point position information; the multidimensional facial features comprise a left-eye closure feature, a right-eye closure feature, a mouth closure feature, a mouth-to-face height ratio feature and an eye-closure frequency feature;
Step 4: judge the facial state according to the multidimensional facial features; the facial state includes: whether the driver is dozing off, whether the driver is yawning, whether the head is lowered and whether the head is turned;
Step 5: compute a fatigue assessment value over K frame images according to the multidimensional facial features and the facial state, and according to the fatigue assessment value divide the fatigue monitoring result into four grades (normal, slight fatigue, moderate fatigue and severe fatigue), completing the driver multi-level fatigue recognition.
2. The method according to claim 1, characterized in that step 1 comprises the following steps:
Step 1-1: compute the average image brightness value; when the average brightness value is greater than 70, the infrared lamp is turned off and the infrared monocular camera acquires a continuous colour video frame sequence; when the average brightness value is less than or equal to 70, the infrared lamp is turned on and the infrared monocular camera acquires a continuous grayscale video frame sequence; the average image brightness value is computed as follows:
wherein Brightness is the average image brightness value, and R(i, j), G(i, j) and B(i, j) are respectively the red, green and blue component values of the pixel in row i and column j of the image;
Step 1-2: if the image read is a colour face image, obtain the face grayscale image through a colour image graying algorithm; if the image read is already a face grayscale image, proceed directly to step 1-3;
Step 1-3: compress the resolution of the face grayscale image to 800 × 600 through an image compression algorithm, obtaining the compressed image;
Step 1-4: first detect whether a face is present in the compressed image; if a face region is detected in the current frame image, crop the face part of the image out separately as the face-detection image; if no face region is detected in the current frame image, jump to the next frame image and return to step 1-2 for processing.
3. The method according to claim 2, characterized in that in step 1-2, obtaining the face grayscale image through the colour image graying algorithm specifically comprises:
computing the gray value according to the following formula:
face_Gray(i, j) = kR × R(i, j) + kG × G(i, j) + kB × B(i, j)
wherein face_Gray(i, j) is the gray value of the pixel in row i and column j of the face grayscale image; kR, kG and kB are respectively the red, green and blue weights of the pixel in row i and column j of the face-detection image; and R(i, j), G(i, j) and B(i, j) are respectively the red, green and blue component values of the pixel in row i and column j of the face-detection image.
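The weighted graying of claim 3 can be applied to a whole image at once. The claim leaves kR, kG and kB unspecified, so the standard ITU-R BT.601 luminance weights below are an assumption:

```python
import numpy as np

# Standard luminance weights (ITU-R BT.601); the claim does not fix these.
K_R, K_G, K_B = 0.299, 0.587, 0.114

def to_gray(rgb):
    """face_Gray(i, j) = kR*R(i, j) + kG*G(i, j) + kB*B(i, j), vectorised
    over every pixel of an (H, W, 3) RGB array."""
    rgb = np.asarray(rgb, dtype=float)
    return K_R * rgb[..., 0] + K_G * rgb[..., 1] + K_B * rgb[..., 2]
```

Because the weights sum to 1, a neutral pixel such as (100, 100, 100) maps to gray value 100.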
4. The method according to claim 3, characterized in that in step 2, training the regression model for detecting the 68 facial feature points on the cropped face-detection image using the method based on a probability random forest comprises:
Step 2-1-1: acquiring the data set: the IBUG public data set is used as the training set and test set; the data set contains about N1 face images, each annotated with 68 facial feature points;
Step 2-1-2: preprocessing the data set: graying each image in the data set;
Step 2-1-3: augmenting the data set: the data set is extended as follows: the N1 images in the data set are rotated while keeping their original size, taking the image centre coordinate as the rotation centre and rotating clockwise by 15°, 30°, 45°, 60° and 75° in turn, and likewise counterclockwise by 15°, 30°, 45°, 60° and 75° in turn; the gray values of the blank parts produced at the four corners after rotation are all set to 0; finally, all images in the extended data set are used as training grayscale images;
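The rotation augmentation of step 2-1-3 can be sketched as below. Nearest-neighbour inverse mapping keeps the sketch dependency-free; in practice `cv2.getRotationMatrix2D` plus `cv2.warpAffine` would typically be used instead:

```python
import numpy as np

def rotate_about_center(img, angle_deg):
    """Rotate a grayscale image about its centre, keeping the original size
    and filling the exposed corners with gray value 0 (step 2-1-3)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    th = np.deg2rad(angle_deg)
    out = np.zeros_like(img)          # corners left at gray value 0
    ys, xs = np.mgrid[0:h, 0:w]
    # inverse mapping: for each output pixel, find its source pixel
    sx = np.cos(th) * (xs - cx) + np.sin(th) * (ys - cy) + cx
    sy = -np.sin(th) * (xs - cx) + np.cos(th) * (ys - cy) + cy
    sxi, syi = np.rint(sx).astype(int), np.rint(sy).astype(int)
    ok = (sxi >= 0) & (sxi < w) & (syi >= 0) & (syi < h)
    out[ys[ok], xs[ok]] = img[syi[ok], sxi[ok]]
    return out

# clockwise then counterclockwise, as listed in step 2-1-3
ANGLES = [15, 30, 45, 60, 75, -15, -30, -45, -60, -75]
```

Applying all ten angles plus the original multiplies the data set size by 11. Note the annotated 68 landmark coordinates must be rotated with the same transform, which the sketch omits.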
Step 2-1-4: constructing the pixel-difference feature: the pixel-difference feature feature is obtained by subtracting the gray values of the four surrounding pixels from four times the gray value of a pixel, with the formula:
feature = 4 × f(a, b) - f(a-1, b) - f(a+1, b) - f(a, b-1) - f(a, b+1)
wherein f(a, b) denotes the gray value at coordinate (a, b) of the training grayscale image; the coordinates (a, b) are selected by taking the feature point estimated in the previous stage as the centre, with r the chosen pixel radius; the pixel radius r here means that the abscissa of the pixel differs from the abscissa of the estimated feature-point coordinate by at most r pixels, or the ordinate of the pixel differs from the ordinate of the estimated feature-point coordinate by at most r pixels;
Step 2-1-5: choosing the optimal tree-node split feature: according to the feature value feature defined in step 2-1-4, compute the feature value of the t-th annotated feature point in all training samples, and randomly select one of all the feature values as the threshold ε, where a training sample is any one image among the training grayscale images; t takes values 1 to 68 and represents the ordinal of the annotated facial feature points; when the feature value feature of a training sample is less than ε, the sample is assigned to the left node; when the feature value feature of a training sample is greater than or equal to ε, the sample is assigned to the right node;
compute the sample feature-value variance of the left and right nodes; the left-node sample feature variance is denoted varL and the right-node sample feature-value variance varR; then compute the overall variance var:

var = (Nl × varL + Nr × varR) / N
wherein N is the number of all training samples at the current node, and Nl and Nr are respectively the sample counts of the left node and the right node; the candidate with the smallest overall variance is chosen as the predicted feature for the t-th feature point, and the coordinate information and threshold ε corresponding to this feature are saved as the model of the t-th feature point; according to the 68 annotated facial feature points, 68 different models are finally obtained;
Step 2-1-6: computing the feature value of the local probability:

p(fi) = num(fi) / (num(f1) + num(f2) + ... + num(fI))
wherein p(fi) is the final probability value computed for the fi-th leaf node, num(fi) is the number of training samples falling into the fi-th leaf node, and I is the number of all leaf nodes, the value of I here being 68;
Step 2-1-7: multi-model fusion based on averaging: process the results that the same input produces under the different models by averaging:

Final(xjj, yjj) = (1/M) × Σ(ii = 1..M) sii(xjj, yjj)
wherein M is the number of models and L is the number of feature points each sample contains, L = 68; sii(xjj, yjj) denotes the result that the ii-th model returns for the jj-th feature point; Final denotes the finally determined position of each feature point after combining the results returned for the jj-th feature point over all models; xjj and yjj respectively denote the abscissa and ordinate of the jj-th feature point;
Step 2-1-8: taking the face-detection image as input, feed it into the models trained in step 2-1-5, and then output the position information of the 68 feature points through the multi-model fusion method of step 2-1-7; the position information is the pixel location of each of the 68 feature points, the abscissa of the n-th point being denoted xn and its ordinate yn, where n is the ordinal 1 to 68.
5. The method according to claim 4, characterized in that step 3 comprises the following steps:
Step 3-1: according to the feature-point position information, compute the left-eye closure feature and the right-eye closure feature separately; the left-eye closure feature is denoted left_eye and the right-eye closure feature right_eye, with the following calculation formulas:
Step 3-2: according to the feature-point position information, compute the mouth closure feature, denoted mouth, with the following calculation formula:
Step 3-3: according to the feature-point position information, compute the mouth-to-face height ratio feature, denoted mouth_face, with the following calculation formula:
wherein abs denotes taking the absolute value;
Step 3-4: according to the left-eye closure feature and right-eye closure feature obtained in step 3-1, compute the eye-closure frequency feature in real time, denoted eye_degree, with the following formula:
wherein K is a fixed value modified according to the frame rate, and Δts is the processing time of the s-th frame.
6. The method according to claim 5, characterized in that step 4 comprises the following steps:
Step 4-1: judge whether the user is dozing off according to the eye-closure frequency feature. If:
eye_degree > thr1,
the user is judged to be dozing off, and sleep = 1 is set at this time;
if:
eye_degree ≤ thr1,
the user is judged not to be dozing off, and sleep = 0 is set at this time; sleep above indicates whether the dozing state is present;
thr1 in the formulas is a set threshold;
Step 4-2: according to the mouth closure feature value mouth and the mouth-to-face height ratio feature value mouth_face, determine whether the head is lowered, whether the head is turned and whether the user is yawning. When:
thr4 > mouth > thr2 and mouth_face ≥ thr3
the head is judged to be lowered, and bow = 1 is set at the same time; otherwise bow = 0; bow above indicates whether the head-lowered state is present;
When:
thr4 > mouth > thr2 and mouth_face ≤ thr3
the head is judged to be turned, and swivel = 1 is set at the same time; otherwise swivel = 0; swivel above indicates whether the head-turned state is present;
When:
mouth ≥ thr4
the user is judged to be yawning, and yawn = 1 is set at the same time; otherwise yawn = 0; yawn above indicates whether the yawning state is present;
thr2 and thr3 in the formulas are set thresholds: thr2 is set to 0.25; thr3 is set to the mouth-to-face height ratio feature of the face in the non-fatigued state; thr4 is set to 0.5.
7. The method according to claim 6, characterized in that step 5 comprises the following steps:
Step 5-1: compute the fatigue assessment value over K frames; the fatigue assessment value is denoted fatigue_degree, with the following formula:
K in the formula is a fixed value modified according to the frame rate, its value being 5 times the frame rate;
Step 5-2: according to the fatigue assessment value, divide the fatigue level into four grades: normal, slight fatigue, moderate fatigue and severe fatigue:
When:
fatigue_degree ≤ degree1
the fatigue level is normal;
When:
fatigue_degree > degree1 and fatigue_degree ≤ degree2
the fatigue level is slight fatigue;
When:
fatigue_degree > degree2 and fatigue_degree ≤ degree3
the fatigue level is moderate fatigue;
When:
fatigue_degree > degree3
the fatigue level is severe fatigue;
degree1, degree2 and degree3 in step 5-2 are fatigue thresholds.
8. The method according to claim 7, characterized in that degree1 takes a value of 100 times the image-processing frame rate, degree2 takes a value of 250 times the image-processing frame rate, and degree3 takes a value of 350 times the image-processing frame rate.
CN201910455869.8A 2019-05-29 2019-05-29 A kind of driver's multistage drowsiness monitor method based on multidimensional facial characteristics Pending CN110263663A (en)

Publications (1)

Publication Number Publication Date
CN110263663A true CN110263663A (en) 2019-09-20



Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102982316A (en) * 2012-11-05 2013-03-20 安维思电子科技(广州)有限公司 Driver abnormal driving behavior recognition device and method thereof
CN106845327A (en) * 2015-12-07 2017-06-13 展讯通信(天津)有限公司 The training method of face alignment model, face alignment method and device
CN107169463A (en) * 2017-05-22 2017-09-15 腾讯科技(深圳)有限公司 Method for detecting human face, device, computer equipment and storage medium
CN108460345A (en) * 2018-02-08 2018-08-28 电子科技大学 A kind of facial fatigue detection method based on face key point location
CN108875642A (en) * 2018-06-21 2018-11-23 长安大学 A kind of method of the driver fatigue detection of multi-index amalgamation
CN109191791A (en) * 2018-10-30 2019-01-11 罗普特(厦门)科技集团有限公司 A kind of fatigue detection method and device merging multiple features
CN109243144A (en) * 2018-10-16 2019-01-18 南京伊斯特机械设备有限公司 A kind of recognition of face warning system and its method for fatigue driving
CN109271875A (en) * 2018-08-24 2019-01-25 中国人民解放军火箭军工程大学 A kind of fatigue detection method based on supercilium and eye key point information
CN109543577A (en) * 2018-11-09 2019-03-29 上海物联网有限公司 A kind of fatigue driving detection method for early warning based on facial expression feature


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111382714A (en) * 2020-03-13 2020-07-07 Oppo广东移动通信有限公司 Image detection method, device, terminal and storage medium
CN111382714B (en) * 2020-03-13 2023-02-17 Oppo广东移动通信有限公司 Image detection method, device, terminal and storage medium
CN112124320A (en) * 2020-09-10 2020-12-25 恒大新能源汽车投资控股集团有限公司 Vehicle control method and system and vehicle
CN113205619A (en) * 2021-03-15 2021-08-03 广州朗国电子科技有限公司 Door lock face recognition method, equipment and medium based on wireless network


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190920)