CN107563346A - A method for determining driver fatigue based on eye image processing - Google Patents

A method for determining driver fatigue based on eye image processing

Info

Publication number
CN107563346A
CN107563346A (application CN201710849821.6A)
Authority
CN
China
Prior art keywords
face
template
human eye
frame
eyes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710849821.6A
Other languages
Chinese (zh)
Inventor
华国栋
许长勇
严加权
田学牧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Oak Traffic Interconnection Technology Co Ltd
Original Assignee
Nanjing Oak Traffic Interconnection Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Oak Traffic Interconnection Technology Co Ltd
Priority to CN201710849821.6A
Publication of CN107563346A
Pending legal-status Critical Current


Landscapes

  • Image Analysis (AREA)

Abstract

The present invention discloses a method for determining driver fatigue based on eye image processing. First, an initial face template and an initial eye template are set according to the driver's face. Next, an opening segment of the video is processed as a free training stage. The eye region in each subsequent video frame is then detected to decide whether the eyes are closed. From the number of consecutive closed-eye frames and the current frame count, a PERCLOS value is computed. If the PERCLOS value exceeds 0.15, the driver is considered fatigued and the system raises an alarm; otherwise detection continues with the next frame.

Description

A method for determining driver fatigue based on eye image processing
Technical field
The invention belongs to the field of digital image processing and relates to a method that uses image processing to measure the driver's blink behavior and thereby determine whether the driver is fatigued.
Background technology
Traffic accidents are among the greatest public hazards to human life in the world today; at least 500,000 people die in traffic accidents every year. Driver fatigue is one of the major causes of fatal accidents. With the rapid development of China's expressway network and the general increase in vehicle speeds, driver fatigue detection has become an important part of safe driving.
Early driving-fatigue assessment was carried out from a medical perspective using medical instruments; such detection methods have poor real-time performance and are inconvenient to operate. In 1998, Zhou Peng analyzed the causes of driver-fatigue accidents and proposed methods for dispelling latent fatigue accidents from the perspectives of human physiology, modern neurology, and electronic engineering, i.e., methods for eliminating the abnormal fatigue and reduced alertness caused by prolonged driving. In 2000, Shi Jian, Wu Pengyuan, Zhuo Bin et al. found that the degree of fatigue is related to steering-wheel operation; their method for judging driving safety uses sensors to measure the motion of the pedals, steering wheel, and other controls. If the steering wheel remains motionless for a period of time, the driver is assumed to be fatigued, inattentive, or dozing. In 2001, Li Zengyong and Wang Chengtao studied effective schemes for preventing and alleviating driving fatigue from an ergonomics perspective.
At present, the percentage of time the eyes are closed over a given period (Percentage of Eyelid Closure Over the Pupil Over Time, abbreviated PERCLOS) is regarded as the most effective real-time measure of driving fatigue, and many researchers have developed driver fatigue detection algorithms based on it. For example, Guo Yongcai of Chongqing University used infrared sources at two wavelengths, 850 nm and 940 nm, captured the reflected images and subtracted them to separate the eyes from the rest of the face, computed the eye area, and derived the PERCLOS value. That method requires two infrared sources, so equipment cost is high and the procedure is complex. Researchers at Liaocheng University used a classifier based on Haar-like features to detect eye closure in single frames, improving detection speed, but the method has not yet been applied in practice.
The content of the invention
The purpose of the present invention is to propose a method for determining driver fatigue based on eye image processing that improves the accuracy of driver fatigue detection.
The technical problem to be solved by the present invention is to propose a method for determining driver fatigue based on eye image processing, comprising:
Step 1: Pre-define parameters st_train and end_train, which mark the first and last frames of the video stream used for training. Define a parameter counter, initialized to 0, which counts the number of consecutive closed-eye frames in the video stream. Also set the initial face template and eye template according to the driver's face.
Step 2: Process frames st_train through end_train of the video stream one by one and optimize the face and eye templates by free training, as follows:
(1) Apply gray-level quantization to the i-th video frame (containing R, G, B channels); after quantization each of the R, G, B channels takes gray values 1 to 8.
(2) Using the initial face template and eye template, locate the face region maps: matrix Ao marks the face region with 1, Ab marks the background region with 1, and At marks the combined face and background region with 1.
(3) If the current frame is frame st_train, describe the color model of the face with a kernel-weighted histogram based on Ao and Ab, and compute the un-normalized likelihood ratio L and the normalized likelihood ratio Ln, both 8 × 8 × 8 matrices.
(4) Detect the face and locate the eyes using the mean-shift algorithm, compute new face and eye templates, and recompute the un-normalized likelihood ratio L and the normalized likelihood ratio Ln.
(5) If further training is needed, read the next video frame and repeat from sub-step (1) of Step 2. When training ends, output the new face template and eye template together with the corresponding un-normalized likelihood ratio L and normalized likelihood ratio Ln.
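The quantization and likelihood-ratio computation of sub-steps (1)–(3) above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names, the Epanechnikov-style spatial weighting, and the scaling of Ln into [0, 1] are assumptions made for the sketch.

```python
import numpy as np

def quantize_rgb(frame):
    """Sub-step (1): quantize each R, G, B channel of an HxWx3 uint8
    frame into 8 gray levels, giving values 1..8 per channel."""
    return frame // 32 + 1  # maps 0..255 to 1..8

def kernel_weighted_histogram(quantized, mask):
    """Kernel-weighted 8x8x8 color histogram over pixels where mask == 1.
    Pixels near the region center receive higher weight
    (an Epanechnikov-style profile, assumed here)."""
    h, w, _ = quantized.shape
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    # normalized squared distance from the region center
    d2 = ((ys - cy) / h) ** 2 + ((xs - cx) / w) ** 2
    weights = np.maximum(1.0 - d2 / d2.max(), 0.0)
    hist = np.zeros((8, 8, 8))
    idx = quantized[ys, xs] - 1          # back to 0..7 bin indices
    np.add.at(hist, (idx[:, 0], idx[:, 1], idx[:, 2]), weights)
    return hist

def likelihood_ratios(quantized, Ao, Ab, eps=1e-6):
    """Sub-step (3): un-normalized likelihood ratio L = face histogram /
    background histogram, and Ln = L scaled into [0, 1].
    Both are 8x8x8 matrices, as in the patent."""
    Hf = kernel_weighted_histogram(quantized, Ao)
    Hb = kernel_weighted_histogram(quantized, Ab)
    L = Hf / (Hb + eps)
    Ln = L / L.max()
    return L, Ln
```

Note that with this scaling the bin holding the largest ratio always has Ln = 1, which matches the embodiment's reported Ln(8,8,8) = 1.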
Step 3: Detect the eye region in each subsequent video frame and judge whether the eyes are closed, as follows:
(1) Apply gray-level quantization to the i-th video frame (containing R, G, B channels); after quantization each of the R, G, B channels takes gray values 1 to 8.
(2) Using the optimized face template and eye template, locate the face region map of the current frame and compute the matrices Ao, Ab and At.
(3) Based on Ao and Ab, describe the color model of the face with a kernel-weighted histogram and compute the un-normalized likelihood ratio L and the normalized likelihood ratio Ln.
(4) Detect the face and locate the eyes using the mean-shift algorithm, compute new face and eye templates, and recompute the un-normalized likelihood ratio L and the normalized likelihood ratio Ln.
(5) Convert the eye region image corresponding to the eye template to grayscale, then apply edge detection and binarization to bring out the eye contour.
(6) Apply the Hough transform to the binary image and use Hough circle detection to detect the eyes. If the Hough transform detects a circle, the driver's eyes are considered open and counter is reset to 0; otherwise the eyes are considered closed and counter is incremented by 1.
Step 4: Compute the PERCLOS value from the current counter value and the frame count. If the PERCLOS value exceeds 0.15, the warning level is reached and the system raises an alarm; otherwise return to sub-step (1) of Step 3.
Compared with the prior art, the present invention has the following features: (1) it uses an ordinary visible-light video capture device, so cost is low; (2) the face and eye localization algorithms in the video frames are simple, so computational efficiency is high.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention.
Fig. 2 is a schematic diagram of the PERCLOS calculation.
Fig. 3 is the first video frame in Embodiment 1.
Fig. 4 is the face image in the first video frame of Embodiment 1.
Fig. 5 is the eye image in the first video frame of Embodiment 1.
Fig. 6 is the result of gray-level quantization of the first video frame of Embodiment 1.
Fig. 7 is a schematic diagram of the matrix Ao computed from the first video frame of Embodiment 1.
Fig. 8 is a schematic diagram of the matrix Ab computed from the first video frame of Embodiment 1.
Fig. 9 is a schematic diagram of the matrix At computed from the first video frame of Embodiment 1.
Fig. 10 is the new face template obtained by free training on the first video frame of Embodiment 1.
Fig. 11 is the new eye template obtained by free training on the first video frame of Embodiment 1.
Fig. 12 is the new face template obtained from the free training stage of Embodiment 1.
Fig. 13 is the new eye template obtained from the free training stage of Embodiment 1.
Fig. 14 is the binarization result of the eye region of the 26th frame in Embodiment 1.
Fig. 15 is a schematic diagram of the circle found by Hough detection in the eye region of the 26th frame in Embodiment 1.
Fig. 16 is a closed-eye video frame detected in real time in Embodiment 1.
Fig. 17 is the face portion of the closed-eye video frame detected in real time in Embodiment 1.
Detailed description of the embodiments
The basic idea of the present invention is as follows. Under normal circumstances, a person blinks 10 to 15 times per minute, with an average interval of 4 to 5 seconds between blinks and an average blink duration of about 0.2 seconds. When the blink frequency is low there are only two possibilities: either the driver is fatigued and drowsy, or the driver is distracted with a dull gaze, in which case the eyes remain open. The first state can be judged by counting the number of consecutive closed-eye frames; the second can be judged by computing the blink frequency. The present invention therefore determines driver fatigue by recognizing the closed-eye state in the video stream and computing the PERCLOS value.
PERCLOS is the proportion of time during which the eyes are closed within a given period. The degree of driving fatigue can be determined by measuring the duration of eye closure: the longer the eyes are closed, the more severe the fatigue. Research shows that the PERCLOS value computed as the percentage of time the eyes are more than 80% closed (the P80 criterion) correlates strongly with the degree of fatigue and discriminates fatigue with high accuracy, so the present invention adopts this criterion. The PERCLOS measurement is illustrated in Fig. 2, where the curve shows the eye aperture over time during an open-close-open cycle: t1 is the moment the closing eye reaches 80% aperture; t2 is the moment the closing eye reaches 20% aperture; t3 is the moment the reopening eye reaches 20% aperture; t4 is the moment the reopening eye reaches 80% aperture. The fraction of the period from t1 to t4 during which the eyes are closed is
f = (t3 − t2) / (t4 − t1) × 100%    (1)
A PERCLOS value is generally computed every 60 seconds to judge the driver's degree of fatigue; the proportion of the 60-second window occupied by eye closure is:
PERCLOS = t_close / 60 × 100%    (2)
where t_close is the total eye-closed time (in seconds) within the 60-second window.
When the computed PERCLOS value exceeds 0.15, the driver is judged to be in a fatigued state.
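As a minimal sketch, the frame-based PERCLOS check of formula (2) might look like this. The 25 Hz frame rate is taken from Embodiment 1; the function names and the windowing constant are illustrative, not from the patent.

```python
FRAME_RATE_HZ = 25                    # as in Embodiment 1 (40 ms per frame)
WINDOW_FRAMES = 60 * FRAME_RATE_HZ    # one 60-second evaluation window

def perclos(closed_frames, total_frames):
    """Fraction of the processed frames in which the eyes were closed."""
    if total_frames == 0:
        return 0.0
    return closed_frames / total_frames

def fatigue_alarm(closed_frames, total_frames, threshold=0.15):
    """True when the PERCLOS value exceeds the 0.15 warning threshold."""
    return perclos(closed_frames, total_frames) > threshold
```

For example, 300 closed-eye frames in a 1500-frame (60 s) window gives PERCLOS = 0.2 and triggers the alarm, while 100 closed-eye frames (PERCLOS ≈ 0.067) does not.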
The method for determining driver fatigue based on eye image processing proposed by the present invention comprises:
Step 1: Pre-define parameters st_train and end_train, which mark the first and last frames of the video stream used for training. Define a parameter counter, initialized to 0, which counts the number of consecutive closed-eye frames in the video stream. Also set the initial face template and eye template according to the driver's face.
Step 2: Process frames st_train through end_train of the video stream one by one and optimize the face and eye templates by free training, as follows:
(1) Apply gray-level quantization to the i-th video frame (containing R, G, B channels); after quantization each of the R, G, B channels takes gray values 1 to 8.
(2) Using the initial face template and eye template, locate the face region maps: matrix Ao marks the face region with 1, Ab marks the background region with 1, and At marks the combined face and background region with 1.
(3) If the current frame is frame st_train, describe the color model of the face with a kernel-weighted histogram based on Ao and Ab, and compute the un-normalized likelihood ratio L and the normalized likelihood ratio Ln. Ln is obtained by scaling the elements of L into the range 0 to 1.
(4) Detect the face and locate the eyes using the mean-shift algorithm, compute new face and eye templates, and recompute the un-normalized likelihood ratio L and the normalized likelihood ratio Ln.
Mean shift is a concept first proposed by Fukunaga et al. in 1975 in work on the estimation of probability density gradients, where it originally denoted the mean shift vector. In 1995, Yizong Cheng generalized the basic mean-shift algorithm: he defined a family of kernel functions providing a weighting scheme, so that different sample points contribute unequally, which greatly extended the applicability of mean shift. The basic principle of the mean-shift algorithm is as follows.
Given n sample points x_i, i = 1, …, n, in the d-dimensional space R^d, the basic form of the mean-shift vector at a point x is defined as
M_h(x) = (1/k) · Σ_{x_i ∈ S_h} (x_i − x)    (3)
where S_h is a high-dimensional sphere of radius h, i.e. the set of points y satisfying formula (4), and k is the number of the n sample points x_i that fall inside S_h:
S_h(x) = { y : (y − x)ᵀ (y − x) ≤ h² }    (4)
If the sample points x_i are drawn from a probability density function f(x), then since a non-zero probability density gradient points in the direction of fastest density increase, on average more of the sample points inside S_h lie along the direction of the density gradient, and the mean-shift vector M_h(x) therefore points in the direction of the density gradient.
(5) If further training is needed, read the next video frame and repeat from sub-step (1) of Step 2. When training ends, output the optimized face template and eye template together with the corresponding un-normalized likelihood ratio L and normalized likelihood ratio Ln.
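The mean-shift iteration behind formulas (3) and (4) can be sketched as follows: repeatedly move x to the mean of the sample points inside the radius-h ball S_h until the shift vanishes, which climbs toward a local density mode. The patent applies this to the likelihood-weighted face region; the sketch below shows it on plain 2-D sample points, and the function name and parameters are illustrative.

```python
import numpy as np

def mean_shift_mode(points, start, h, max_iter=50, tol=1e-3):
    """Iterate x <- mean of the sample points within radius h of x
    (formulas 3 and 4), ascending the density gradient to a local mode."""
    points = np.asarray(points, dtype=float)
    x = np.asarray(start, dtype=float)
    for _ in range(max_iter):
        d = np.linalg.norm(points - x, axis=1)
        inside = points[d <= h]          # the k points falling in S_h
        if len(inside) == 0:
            break                        # no support: stop where we are
        new_x = inside.mean(axis=0)      # x + M_h(x)
        if np.linalg.norm(new_x - x) < tol:
            x = new_x
            break
        x = new_x
    return x
```

Started near a cluster of samples, the iterate converges to (approximately) the cluster's densest point; in the patent this same hill-climbing localizes the face in the likelihood map.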
Step 3: Detect the eye region in each subsequent video frame and judge whether the eyes are closed, as follows:
(1) Apply gray-level quantization to the i-th video frame (containing R, G, B channels); after quantization each of the R, G, B channels takes gray values 1 to 8.
(2) Using the optimized face template and eye template, locate the face region map of the current frame and compute the matrices Ao, Ab and At.
(3) Based on Ao and Ab, describe the color model of the face with a kernel-weighted histogram and compute the un-normalized likelihood ratio L and the normalized likelihood ratio Ln.
(4) Detect the face and locate the eyes using the mean-shift algorithm, compute new face and eye templates, and recompute the un-normalized likelihood ratio L and the normalized likelihood ratio Ln.
(5) Convert the eye region image corresponding to the eye template to grayscale, then apply edge detection and binarization to bring out the eye contour.
(6) Apply the Hough transform to the binary image and use Hough circle detection to detect the eyes. If the Hough transform detects a circle, the driver's eyes are considered open and counter is reset to 0; otherwise the eyes are considered closed and counter is incremented by 1.
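Sub-step (5) can be sketched as follows. The patent does not specify the edge detector; this dependency-free version uses central-difference gradient magnitudes with a fixed threshold as a stand-in (a real implementation might use Canny), and the function name and threshold are assumptions.

```python
import numpy as np

def eye_contour_binary(eye_gray, thresh=30):
    """Sub-step (5): approximate edge detection via gradient magnitudes
    on a grayscale eye region, then binarize to bring out the contour."""
    g = eye_gray.astype(float)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:-1] = g[:, 2:] - g[:, :-2]   # horizontal central difference
    gy[1:-1, :] = g[2:, :] - g[:-2, :]   # vertical central difference
    mag = np.hypot(gx, gy)               # gradient magnitude
    return (mag > thresh).astype(np.uint8)  # 1 on edges, 0 elsewhere
```

On an eye image, the strong intensity change around the iris boundary survives the threshold, so the binary map keeps roughly the circular contour that the subsequent Hough step looks for.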
The principle of Hough circle detection is as follows.
Let the coordinate plane of the original image be the x–y plane. The general equation of a circle is
(x − a)² + (y − b)² = r²    (5)
which in parametric (polar) form becomes
x = a + r cos θ, y = b + r sin θ    (6)
where (a, b) is the center of the circle and r its radius. The Hough transform maps the image space (x, y) to the parameter space (a, b, r). The basic idea of Hough circle detection is as follows: quantize the parameter space (a, b, r) appropriately to obtain a three-dimensional accumulator array; compute the gradient of the image intensity to obtain the edge pixels; then, for each edge pixel (xi, yi), increment the accumulator cell of every parameter point at distance r from it. After all edge pixels are processed, find the peaks of the three-dimensional accumulator array; the coordinates of the peak cells correspond to the centers of circles in the x–y plane.
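The three-dimensional (a, b, r) accumulator described above can be sketched in a brute-force way for a small 0/1 edge map. The function name, the angular sampling, and the choice to return only the best-voted cell are assumptions for the sketch; a caller can then deem the eye "open" when the vote count is a reasonable fraction of the circle circumference 2πr.

```python
import numpy as np

def hough_best_circle(binary, r_min, r_max, n_theta=90):
    """Hough transform for circles (x - a)^2 + (y - b)^2 = r^2 on a 0/1
    edge map: every edge pixel votes for all candidate centers (a, b)
    at each radius r, and the (a, b, r) accumulator cell with the most
    votes is returned as (a, b, r, votes)."""
    h, w = binary.shape
    ys, xs = np.nonzero(binary)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    best = (0, 0, 0, 0)
    for r in range(r_min, r_max + 1):
        acc = np.zeros((h, w), dtype=int)   # (a, b) slice of the 3-D array
        for y, x in zip(ys, xs):            # each edge point votes along a circle
            a = np.round(x - r * cos_t).astype(int)
            b = np.round(y - r * sin_t).astype(int)
            ok = (a >= 0) & (a < w) & (b >= 0) & (b < h)
            # count each candidate center at most once per edge point
            for a0, b0 in set(zip(a[ok].tolist(), b[ok].tolist())):
                acc[b0, a0] += 1
        b0, a0 = np.unravel_index(np.argmax(acc), acc.shape)
        if acc[b0, a0] > best[3]:
            best = (int(a0), int(b0), r, int(acc[b0, a0]))
    return best
```

On a blank edge map the vote count is 0 (no circle, i.e. eye closed in the patent's logic); on an edge map containing a circle, the returned cell lands at its center and radius.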
Step 4: Compute the PERCLOS value from the current counter value and the frame count. If the PERCLOS value exceeds 0.15, the warning level is reached and the system raises an alarm; otherwise return to sub-step (1) of Step 3.
The invention is further described below with reference to the drawings and an embodiment.
Embodiment 1
A video of simulated driving recorded in a driving simulator is selected; the frame rate of the video is 25 Hz, i.e. 40 ms per frame.
Step 1: st_train and end_train are chosen as 1 and 50 respectively, so the first 2 seconds of video are used for free training. The first frame is shown in Fig. 3, and the initial face and eye templates in Figs. 4 and 5. counter = 0.
Step 2: In the free training stage, the gray-level quantization result of the first video frame is shown in Fig. 6, and the computed matrices Ao, Ab and At in Figs. 7-9, where white regions have value 1 and black regions value 0; the white regions of the three matrices represent, respectively, the face region, the background region, and the combined face and background region of the first frame.
For the first video frame, the computed un-normalized likelihood ratio L and normalized likelihood ratio Ln are both 8 × 8 × 8 matrices, whose non-zero entries include
L (1,1,1)=2.7226, L (2,2,2)=1.8554, L (3,3,3)=1.6318, L (8,8,8)=3.8918,
Ln (1,1,1)=0.6996, Ln (2,2,2)=0.4767, Ln (3,3,3)=0.4193, Ln (8,8,8)=1.
Then, with the mean-shift algorithm, new face and eye templates are computed, as shown in Figs. 10 and 11. Since this is the first frame processed, the face and eye templates are identical to their initial values, and the un-normalized likelihood ratio L and normalized likelihood ratio Ln are unchanged.
Subsequent frames are then processed by repeating Step 2. After 25 frames have been processed, the computed face and eye templates are as shown in Figs. 12 and 13, and the non-zero entries of the un-normalized likelihood ratio L and normalized likelihood ratio Ln include
L (1,1,1)=2.7959, L (2,2,2)=1.7279, L (3,3,3)=1.6318, L (8,8,8)=3.9223,
Ln (1,1,1)=0.7128, Ln (2,2,2)=0.4405, Ln (3,3,3)=0.4160, Ln (8,8,8)=1.
Step 3: Subsequent frames are processed and eye closure is detected in each frame. Sub-steps (1)-(4) are similar to the free training steps; afterwards the eye region image corresponding to the eye template is converted to grayscale, and edge detection and binarization are applied to bring out the eye contour. The binarized eye region detected in the 26th frame is shown in Fig. 14, and the circle obtained by Hough detection in Fig. 15. This shows that the driver's eyes are open at this point, so counter is set to 0. When the frame shown in Fig. 16 is processed later, the face region obtained is as shown in Fig. 17; most of the eye region is closed and no circle can be detected by the Hough transform, so the driver is judged to be closing his eyes and counter is incremented by 1.
Step 4: The PERCLOS value is computed from the current counter value and the frame count. If the PERCLOS value exceeds 0.15, the system raises an alarm; otherwise Step 3 is repeated.
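The per-frame loop of Steps 3-4 can be sketched end to end as follows. The patent computes PERCLOS "from the current counter value and the frame count"; this sketch interprets that as the closed-frame fraction of the frames processed so far, which is an assumption, and the function name is illustrative.

```python
def process_stream(eye_closed_flags, threshold=0.15):
    """Driver-fatigue loop of Steps 3-4 over per-frame closed-eye flags:
    maintain the consecutive-closed-eye counter and the running
    closed-frame total, and raise the alarm once PERCLOS exceeds the
    threshold. Returns (alarm frame index or None, final PERCLOS)."""
    counter = 0          # consecutive closed-eye frames (reset on open eyes)
    closed_total = 0     # closed-eye frames seen so far
    for i, closed in enumerate(eye_closed_flags, start=1):
        if closed:
            counter += 1
            closed_total += 1
        else:
            counter = 0
        perclos = closed_total / i
        if perclos > threshold:
            return i, perclos        # warning level reached: alarm
    return None, closed_total / max(len(eye_closed_flags), 1)
```

For instance, eight open-eye frames followed by two closed-eye frames reach PERCLOS 0.2 at the tenth frame, which exceeds the 0.15 threshold and triggers the alarm.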

Claims (2)

1. one kind realizes that driver fatigue sentences method for distinguishing based on eye image processing, it is characterised in that it comprises the following steps: Step 1, parameter st_train, end_train is pre-defined, it is fixed for representing to be used for the starting frame number trained in video flowing Adopted parameter counter, and 0 is initialized as, the parameter is used to count the frame number continuously closed one's eyes in video flowing, meanwhile, according to driving Member face situation, sets initial face template, human eye template;Step 2, the st_train frames of video flowing are handled frame by frame to the End_train frames, by free training, optimize face and human eye template;Comprise the following steps that:(1) by i-th of frame of video(Bag Containing R, G, B triple channel information)Grey level quantization processing, the gray value of R, G, B triple channel is 1 ~ 8 after quantization;(2) according to initial Face template, human eye template, search out face area figure, and face area is represented with 1 in matrix A o, and Ab represents background area with 1, At represents face and background area with 1;(3) if present frame is st_train frames, according to Ao and Ab, weighted using kernel function Histogram describes the color model of face, and calculating does not normalize likelihood ratio L and normalization likelihood ratio Ln, and they are all 8 × 8 × 8 Matrix;(4) using mean shift algorithms detection face, positioning eyes, new face template, human eye template are calculated, together When recalculate and do not normalize likelihood ratio L and normalization likelihood ratio Ln;(5) if still needing to train, next frame video image is read, (1) in repeat step 2, terminate until training, export new face template, human eye template, and do not normalize likelihood accordingly Than L and normalization likelihood ratio Ln;Step 3, the human eye area in follow-up frame video image is detected, judges whether human 
eye closes, is had Body step is as follows:(1) by i-th of frame of video(Include R, G, B triple channel information)Grey level quantization processing, R, G, B threeway after quantization The gray value in road is 1 ~ 8;(2) face template and human eye template that basis has optimized, the face area of current frame image is searched out Figure, calculating matrix Ao, Ab and At;(3) according to Ao and Ab, the color model of face is described using kernel function weighted histogram, is counted Calculation does not normalize likelihood ratio L and normalization likelihood ratio Ln;(4) mean shift algorithms detection face, positioning eyes, meter are utilized New face template, human eye template are calculated, while recalculates and does not normalize likelihood ratio L and normalization likelihood ratio Ln;(5) by people Eye areas figure corresponding to eye template is converted into gray scale image, remakes rim detection and binary conversion treatment, protrudes eyes in image Profile;(6) binary map is made into hough conversion, the method justified according to hough change detections, detects the eyes in figure, if Hough change detections have circle into figure, then it is assumed that driver's eyes are not closed in figure, set counter=0, otherwise it is assumed that driving Member's eyes closed, counter increases by 1;Step 4, according to current counter values and frame number, PERCLOS values are calculated, if PERCLOS values are more than given threshold, then reach warning value, and system sends alarm, otherwise perform (1) in step 3.
2. The method for determining driver fatigue based on eye image processing according to claim 1, characterized in that the set threshold for the PERCLOS parameter in Step 4 is 0.15.
CN201710849821.6A 2017-09-20 2017-09-20 A method for determining driver fatigue based on eye image processing Pending CN107563346A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710849821.6A CN107563346A (en) 2017-09-20 2017-09-20 A method for determining driver fatigue based on eye image processing


Publications (1)

Publication Number Publication Date
CN107563346A true CN107563346A (en) 2018-01-09

Family

ID=60981609

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710849821.6A Pending CN107563346A (en) 2017-09-20 2017-09-20 One kind realizes that driver fatigue sentences method for distinguishing based on eye image processing

Country Status (1)

Country Link
CN (1) CN107563346A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1830389A (en) * 2006-04-21 2006-09-13 太原理工大学 Device for monitoring fatigue driving state and its method
CN101593425A (en) * 2009-05-06 2009-12-02 深圳市汉华安道科技有限责任公司 A kind of fatigue driving monitoring method and system based on machine vision
CN103324284A (en) * 2013-05-24 2013-09-25 重庆大学 Mouse control method based on face and eye detection
CN106846734A (en) * 2017-04-12 2017-06-13 南京理工大学 A kind of fatigue driving detection device and method


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108742656A (en) * 2018-03-09 2018-11-06 华南理工大学 Fatigue state detection method based on face feature point location
WO2019169896A1 (en) * 2018-03-09 2019-09-12 华南理工大学 Fatigue state detection method based on facial feature point positioning
CN110278367A (en) * 2018-03-14 2019-09-24 厦门歌乐电子企业有限公司 Eye detection method, system, equipment and its medium based on filming apparatus
CN110278367B (en) * 2018-03-14 2021-11-19 厦门歌乐电子企业有限公司 Human eye detection method, system, device and medium based on shooting device
CN109165630A (en) * 2018-09-19 2019-01-08 南京邮电大学 A kind of fatigue monitoring method based on two-dimentional eye recognition
CN111145497A (en) * 2020-01-06 2020-05-12 广东工业大学 Portable driver fatigue detection equipment
CN111580651A (en) * 2020-04-30 2020-08-25 英华达(上海)科技有限公司 Terminal control method, terminal, and computer-readable storage medium
CN111580651B (en) * 2020-04-30 2023-12-15 英华达(上海)科技有限公司 Terminal control method, terminal and computer readable storage medium
CN112668393A (en) * 2020-11-30 2021-04-16 海纳致远数字科技(上海)有限公司 Fatigue degree detection device and method based on face recognition and key point detection
CN112686927A (en) * 2020-12-31 2021-04-20 上海易维视科技有限公司 Human eye position regression calculation method
CN112686927B (en) * 2020-12-31 2023-05-12 上海易维视科技有限公司 Human eye position regression calculation method

Similar Documents

Publication Publication Date Title
CN107563346A (en) One kind realizes that driver fatigue sentences method for distinguishing based on eye image processing
CN108053615B (en) Method for detecting fatigue driving state of driver based on micro-expression
CN101593425B (en) Machine vision based fatigue driving monitoring method and system
CN112241658B (en) Fatigue driving early warning method based on depth camera
CN103839379B (en) Automobile and driver fatigue early warning detecting method and system for automobile
Ji et al. Fatigue state detection based on multi-index fusion and state recognition network
CN102054163B (en) Method for testing driver fatigue based on monocular vision
CN103824420B (en) Fatigue driving identification system based on heart rate variability non-contact measurement
CN110728241A (en) Driver fatigue detection method based on deep learning multi-feature fusion
CN104013414A (en) Driver fatigue detecting system based on smart mobile phone
CN106250801A (en) Based on Face datection and the fatigue detection method of human eye state identification
CN108596087B (en) Driving fatigue degree detection regression model based on double-network result
CN109740477A (en) Study in Driver Fatigue State Surveillance System and its fatigue detection method
CN110991324B (en) Fatigue driving detection method based on various dynamic characteristics and Internet of things technology
CN111753674A (en) Fatigue driving detection and identification method based on deep learning
CN109977930A (en) Method for detecting fatigue driving and device
CN108229245A (en) Method for detecting fatigue driving based on facial video features
Tang et al. Real-time image-based driver fatigue detection and monitoring system for monitoring driver vigilance
CN113989788A (en) Fatigue detection method based on deep learning and multi-index fusion
CN112528843A (en) Motor vehicle driver fatigue detection method fusing facial features
Chen Research on driver fatigue detection strategy based on human eye state
CN103729646A (en) Eye image validity detection method
Rani et al. Development of an Automated Tool for Driver Drowsiness Detection
Guo et al. Monitoring and detection of driver fatigue from monocular cameras based on Yolo v5
CN113548056A (en) Automobile safety driving assisting system based on computer vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180109