CN109044363A - Driver Fatigue Detection based on head pose and eye movement - Google Patents

Driver Fatigue Detection based on head pose and eye movement

Info

Publication number
CN109044363A
CN109044363A (application CN201811029521.4A)
Authority
CN
China
Prior art keywords
eye movement
user
head
head pose
driver fatigue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811029521.4A
Other languages
Chinese (zh)
Inventor
韩鹏
林林庆
邱健
彭力
骆开庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Normal University
Original Assignee
South China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Normal University filed Critical South China Normal University
Priority to CN201811029521.4A priority Critical patent/CN109044363A/en
Publication of CN109044363A publication Critical patent/CN109044363A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1116 Determining posture transitions
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/18 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state for vehicle drivers or machine operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2503/00 Evaluating a particular growth phase or type of persons or animals
    • A61B2503/20 Workers
    • A61B2503/22 Motor vehicles operators, e.g. drivers, pilots, captains

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Psychiatry (AREA)
  • Molecular Biology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Multimedia (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Surgery (AREA)
  • Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Child & Adolescent Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)
  • Social Psychology (AREA)
  • Dentistry (AREA)
  • Physiology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Ophthalmology & Optometry (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a driver fatigue detection method based on head pose and eye movement, comprising the following steps: acquiring an image sequence; detecting facial feature points and eye-region image information; analyzing the facial feature points to solve the head pose, and analyzing the eye-region image information to solve the eye movement vector; calculating the time for which the head pose angle and the eye movement vector exceed a safety threshold; and monitoring the driver's driving state in real time. In accordance with the physiological habits of human visual observation, the driver fatigue detection method based on head pose and eye movement of the present invention treats head pose and eye movement as effective gaze information and takes the facial orientation as the reference gaze direction. The method requires no calibration step and no head-motion compensation, and thus achieves truly unconstrained driver fatigue detection.

Description

Driver Fatigue Detection based on head pose and eye movement
Technical field
The present invention relates to the field of human-computer interaction technology, and in particular to a driver fatigue detection method based on head pose and eye movement.
Background technique
With economic and social development, the motor vehicle has become one of the most important modes of travel. At the same time, traffic accidents have grown rapidly and unavoidably. Fatigue driving is a major and growing cause of traffic accidents, and it is a serious traffic safety risk for which technical means can provide advance warning. According to statistics, the casualties caused by fatigue driving are no fewer than those caused by drunk driving. At present, the inattentive state caused by driver fatigue leads to frequent and serious casualty accidents. A device is therefore needed that can detect the driver's fatigue state in real time and give early warning; such a device would be an effective means of preventing traffic accidents caused by fatigue driving.
Among the relatively mature driver fatigue monitoring systems available today, the reliable ones detect fatigue by extracting the pulse signal, but they require the driver to wear a physiological-signal sensor while driving, which burdens the driver and introduces certain hidden dangers to driving safety. Meanwhile, traditional off-board, contact-based, non-real-time driving fatigue detection methods no longer meet the requirements of the era; a driver fatigue detection device is needed that can detect and warn of the driver's fatigue state in real time during driving.
When a driver is fatigued, features such as frequent head lowering, head raising, nodding, eye closure, and an abnormally slow blink rate usually appear. Monitoring the driver's head pose in real time and estimating it accurately is therefore an important link in fatigue driving detection. Applying a driver fatigue monitoring method based on head pose and gaze tracking to this field makes it possible to alert the driver to abnormal driving states and, when necessary, to take effective measures such as stopping driving, thereby reducing accident risk and protecting the driver's personal and property safety.
Summary of the invention
In view of this, to solve the above problems in the prior art, the present invention provides a driver fatigue detection method based on head pose and eye movement, so as to solve the problem of real-time driver fatigue monitoring in the field of non-intrusive human-computer interaction.
To achieve the above object, the technical scheme of the present invention is as follows.
A driver fatigue detection method based on head pose and eye movement, comprising the following steps:
Step 1: acquire an image sequence through a non-intrusive image acquisition unit;
Step 2: for each frame of the image sequence, detect facial feature point image information and eye-region image information;
Step 3: solve the head pose based on the facial feature point image information;
Step 4: solve the eye movement vector based on the eye-region image information;
Step 5: combine the head pose and the eye movement vector to determine the driver's driving state.
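The five steps above can be sketched as a per-frame processing loop. The detector functions below are hypothetical stand-in stubs, since the patent names the steps but does not fix an implementation; only the over-threshold counting in the final step follows the decision logic described later in the text.

```python
def solve_head_pose(landmarks):
    # Step 3 stand-in: pretend the pose angle is the mean landmark value.
    return sum(landmarks) / len(landmarks)

def solve_eye_vector(eye_region):
    # Step 4 stand-in: (dx, dy) pupil-to-corner offset, passed through.
    return eye_region

def driving_state(frames, threshold=10.0, max_safe_frames=3):
    """Steps 1-5: count consecutive frames whose head-pose angle or
    eye-vector magnitude exceeds the safety threshold; flag fatigue
    when the run outlasts the maximum safe duration."""
    over = 0
    for landmarks, eye_region in frames:        # Step 1: image sequence
        angle = solve_head_pose(landmarks)       # Steps 2-3
        dx, dy = solve_eye_vector(eye_region)    # Step 4
        deviation = max(abs(angle), (dx * dx + dy * dy) ** 0.5)
        over = over + 1 if deviation > threshold else 0
        if over > max_safe_frames:               # Step 5: decision
            return "fatigued"
    return "normal"
```

With synthetic input, a sustained large head-pose deviation trips the fatigue flag while small deviations do not.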
Further, step 3 comprises the following sub-steps:
Step 31: perform facial feature point detection on the face image region and identify at least three facial feature points;
Step 32: align the facial feature points to obtain the spatial coordinates of the user's head;
Step 33: construct a geometric three-dimensional head model based on the spatial coordinates of the user's head, and obtain the user's head pose angles;
Step 34: cast a ray with a feature point on the face as its endpoint; the ray direction is the direction of the user's head pose angle vector.
Further, the facial feature points include the left eye corner, the right eye corner, the left mouth corner, the right mouth corner, and the nose tip.
Further, step 4 comprises the following sub-steps:
Step 41: based on the eye-region image information, obtain the iris position, segment the pupil edge in the eye region, and locate the pupil center;
Step 42: based on the relative position between the pupil center and the eye corner, construct the pupil-center-to-inner-eye-corner vector (Δx, Δy) to obtain the user's eye movement vector.
Further, step 5 comprises the following sub-steps:
Step 51: set the safety threshold of the user's head pose angle and eye movement vector as D1;
Step 52: record the time t1 for which the user's head pose angle or eye movement vector exceeds the safety threshold D1;
Step 53: set the maximum safe time for which the user's head pose angle or eye movement vector may exceed the safety threshold D1 as t2;
Step 54: calculate the difference T between the times t1 and t2;
Step 55: compare T with 0 to obtain the user's driving state;
Step 56: count the user's number of blinks K1 based on the eye movement vector, and set the upper limit of the safe number of blinks as K2 and the lower limit as K3;
Step 57: compare K1 with K2 and K3 to obtain the user's driving state.
Compared with the prior art, the driver fatigue detection method based on head pose and eye movement of the present invention follows the physiological habits of human visual observation, treats head pose and eye movement as effective gaze information, and takes the head pose as the reference gaze direction; it then establishes a spatial mapping model and calculates the time for which the head pose and eye movement vector exceed the safety threshold, monitoring the driver's driving state in real time. The method requires no calibration step and no head-motion compensation, and thus achieves truly unconstrained driver fatigue detection.
Brief description of the drawings
Fig. 1 is a logic diagram of the driver fatigue detection method based on head pose and eye movement of the present invention.
Fig. 2 shows embodiment 1 of the non-intrusive face and eye image/video acquisition unit.
Fig. 3 shows embodiment 1 of the relative position between the non-intrusive image acquisition unit and the user.
Fig. 4 shows embodiment 2 of the relative position between the non-intrusive image acquisition unit and the user.
Fig. 5 shows embodiment 3 of the relative position between the non-intrusive image acquisition unit and the user.
Fig. 6 shows embodiment 2 of the non-intrusive face and eye image/video acquisition unit.
Fig. 7 shows the pupil-center-to-inner-eye-corner eye movement vector.
Fig. 8 shows pupil center localization with the gray integral function and the Snake model.
Specific embodiment
The specific implementation of the present invention is further described below with reference to the drawings and specific embodiments. It should be noted that any process (such as a convolutional neural network) or parameter not described in detail below can be realized or understood by those skilled in the art with reference to the prior art. The described embodiments are only a part of the embodiments of the present invention, not all of them; all other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work shall fall within the protection scope of the present invention.
Embodiment 1
As shown in Fig. 1, a driver fatigue detection method based on head pose and eye movement comprises the following steps:
Step 1: acquire an image sequence through a non-intrusive image acquisition unit;
The face image acquisition unit is a camera arranged near the monitoring screen observed by the user, used to capture the user's face region. The user need not wear any auxiliary acquisition device (non-intrusive); the camera acquires images of the user in real time, and the images are converted to 256-level grayscale.
The camera lens of the present invention is equipped with an infrared filter, which filters out the visible band and retains the infrared band. The camera's photosensitive element, CCD or CMOS, is sensitive to the infrared band. An infrared LED light source array is arranged in a ring near the camera lens; its brightness can be adjusted by a PWM signal according to the needs of the actual environment. The present invention can be implemented with a single camera, or with multiple cameras in combination to enhance the imaging effect.
As shown in Fig. 2, one embodiment of the non-intrusive image acquisition unit includes an interactive screen, a camera, and an infrared LED light source. The camera is characterized by its sensitivity to the infrared band emitted by the infrared LED light source; its front end is fitted with an infrared filter lens whose characteristic parameters, such as the cutoff frequency, are adjusted according to the LED light source actually used and the distance to the user. When the LED light source operates in the band above 800 nm, the cutoff frequency is chosen above 800 nm; generally, a band-pass filter performs better than a high-pass filter.
As shown in Fig. 3, in embodiment 1 of the relative position between the non-intrusive image acquisition unit and the user, the unit is positioned below the user's face.
As shown in Fig. 4, in embodiment 2 of the relative position between the non-intrusive image acquisition unit and the user, the unit is positioned directly in front of the user's face.
As shown in Fig. 5, in embodiment 3 of the relative position between the non-intrusive image acquisition unit and the user, the unit is positioned above the user's face.
Embodiment 2
Although in the embodiments of the present invention image sampling can be completed with only one camera and one infrared LED light source, a multi-source, multi-camera system with a division of labor can be used to acquire higher-quality image signals. As shown in Fig. 6, another embodiment of the non-intrusive image acquisition unit provides a scheme in which camera 1 is equipped with an infrared LED light source and shoots the wide scene to capture the face, while cameras 2 and 3 shoot high-definition images of the eye regions; the upper-left and upper-right corners of the interactive screen are each equipped with an infrared LED light source.
Step 2: based on the image sequence, detect the facial feature point image information and the eye-region image information in each frame;
Step 3: analyze the head pose image information based on the facial feature points in the image and solve the head pose; take the head pose as a ray direction, intersect the ray with the image acquisition unit to obtain an intersection point, and take this intersection point as the reference head pose point;
Step 3 comprises the following sub-steps:
Step 31: perform facial feature point detection on the face region and identify five facial feature points;
Step 32: align the five facial feature points by a method based on convolutional neural networks to obtain the spatial coordinates of the user's head;
Step 33: construct a geometric three-dimensional head model based on the spatial coordinates of the user's head, and solve the head's rotation axis and rotation angle. Let the rotation angle be α, and let the direction angles of the rotation axis under the geometric three-dimensional model be βx, βy and βz; these are converted to a quaternion by the formulas:
w = cos(α/2)
x = sin(α/2)·cos(βx)
y = sin(α/2)·cos(βy)
z = sin(α/2)·cos(βz)
where w, x, y and z are the components of the quaternion.
Based on the quaternion, the user's head pose angles are obtained by the quaternion-to-Euler-angle conversion formulas,
where ψ is the left-right (yaw) angle of the facial orientation, φ is the up-down (pitch) angle of the facial orientation, and the remaining angle is the roll angle of the facial orientation;
Step 34: cast a ray with a feature point on the face as its endpoint; the ray direction is the direction of the user's head pose angle vector.
The five facial feature points include the left eye corner, the right eye corner, the left mouth corner, the right mouth corner, and the nose tip.
The embodiment of the present invention builds a three-dimensional geometric model from the two-dimensional image sequence through the five facial feature points, obtains three-dimensional head pose information, and thereby obtains the head pose.
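Step 33's axis-angle-to-quaternion formulas, together with the quaternion-to-Euler step that yields the head pose angles, can be sketched as follows. The patent's own Euler-angle formula image is not reproduced in this text, so the second function uses the standard yaw-pitch-roll convention as a stand-in assumption rather than the patent's exact formula:

```python
import math

def axis_angle_to_quaternion(alpha, bx, by, bz):
    """Patent formulas: w = cos(a/2), x = sin(a/2)cos(bx), etc.,
    where bx, by, bz are the direction angles of the rotation axis."""
    w = math.cos(alpha / 2)
    x = math.sin(alpha / 2) * math.cos(bx)
    y = math.sin(alpha / 2) * math.cos(by)
    z = math.sin(alpha / 2) * math.cos(bz)
    return w, x, y, z

def quaternion_to_euler(w, x, y, z):
    """Standard quaternion-to-Euler conversion (assumed convention):
    yaw = left-right angle, pitch = up-down angle, roll = roll angle."""
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - x * z))))
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    return yaw, pitch, roll
```

For example, a 90-degree rotation about the vertical axis maps to a quaternion whose Euler decomposition is a pure 90-degree yaw.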
Step 4: based on the reference head pose point, designate a gaze area on the device screen centered on that point; taking the center of the gaze area as the base point, the inner two-thirds of the area is divided off as the safe zone;
Based on the gaze area, analyze the eye-region image information and solve the eye movement vector; the eye movement vector projected onto the device screen serves as the gaze point.
Step 4 comprises the following sub-steps:
Step 41: detect the eye region falling within the gaze area by a deep learning method (a convolutional neural network method); obtain the iris position through the gray integral function, then segment the pupil edge in the eye region based on the Snake model and locate the pupil center;
Step 42: based on the relative position between the pupil center and the eye corner, construct the pupil-center-to-inner-eye-corner eye movement vector (Δx, Δy) to obtain the user's gaze point within the gaze area.
Further, the eye movement vector illustrated in this embodiment is the pupil-center-to-inner-eye-corner vector, as shown in Fig. 7. First, the inner eye corner is one of the facial feature points and has already been obtained by the reference gaze direction detection submodule. The eye region is segmented with an eye classifier, using a deep learning method (such as a convolutional neural network) together with a Hough transform image processing method. Within the eye region, the iris region is obtained through the gray integral function, the Snake model is then used to segment the iris edge, and the pupil center is fitted; the process is shown in Fig. 8. Then the camera coordinate system is taken as the world coordinate system, and an extension ray is drawn from a point on the face to meet the interactive screen at a point.
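The gray integral (projection) localization described above can be sketched minimally as follows. This is an illustration under simplifying assumptions: the iris/pupil is the darkest region of the eye image, so the row and column with minimal gray sums approximate its center; the CNN eye detector, Hough transform, and Snake refinement are omitted, and a plain 2D list of gray values stands in for the eye-region image.

```python
def gray_integral_center(eye):
    """Coarse pupil localization: return the (row, col) whose
    horizontal and vertical gray-sum projections are minimal."""
    row_sums = [sum(row) for row in eye]
    col_sums = [sum(row[c] for row in eye) for c in range(len(eye[0]))]
    return row_sums.index(min(row_sums)), col_sums.index(min(col_sums))

def eye_movement_vector(pupil_center, inner_corner):
    """Step 42: pupil-center-to-inner-eye-corner vector (dx, dy);
    both points are given as (row, col) pixel coordinates."""
    return (pupil_center[1] - inner_corner[1],
            pupil_center[0] - inner_corner[0])
```

On a bright synthetic patch with a single dark pixel, the projection minima recover that pixel's position.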
Step 5: based on the head pose image information and the eye-region image information, obtain the driving state of the user gazing at the area.
Step 5 comprises the following sub-steps:
Step 51: set the safety threshold of the user's head pose angle and eye movement vector as D1;
Step 52: obtain the time t1 for which the user's head pose angle or eye movement vector exceeds the safety threshold D1;
Step 53: set the maximum safe time for which the user's head pose angle or eye movement vector may exceed the safety threshold D1 as t2;
Step 54: calculate the difference T between the times t1 and t2;
Step 55: based on T, when T is greater than or equal to zero, determine that the user is in a fatigued driving state; when T is less than zero, determine that the user is in a normal driving state;
Step 56: count the user's number of blinks K1 based on the eye movement vector, and set the upper limit of the safe number of blinks as K2 and the lower limit as K3;
Step 57: based on the user's number of blinks K1, when K1 is greater than K3 and less than K2, determine that the user is in a normal driving state; when K1 is less than K3 or greater than K2, determine that the user is in a fatigued driving state.
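Steps 51-57 reduce to two small decision rules, which the sketch below expresses directly; the default threshold values are illustrative assumptions, since the patent leaves D1, t2, K2 and K3 unspecified:

```python
def pose_state(t1, t2):
    """Steps 51-55: T = t1 - t2; fatigued when the over-threshold
    time t1 reaches the maximum safe time t2 (T >= 0)."""
    T = t1 - t2
    return "fatigued" if T >= 0 else "normal"

def blink_state(k1, k2=20, k3=5):
    """Steps 56-57: normal only when the blink count K1 lies strictly
    between the lower limit K3 and the upper limit K2."""
    return "normal" if k3 < k1 < k2 else "fatigued"
```

Both abnormally frequent and abnormally rare blinking are flagged, matching the two-sided test in step 57.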
In summary, the driver fatigue detection method based on head pose and eye movement of the present invention follows the physiological habits of human visual observation, treats head pose and eye movement as effective gaze information, and takes the head pose as the reference gaze direction; it then establishes a spatial mapping model, calculates the time for which the head pose and eye movement vector exceed the safety threshold, and monitors the driver's driving state in real time. The method requires no calibration step and no head-motion compensation, and thus achieves truly unconstrained driver fatigue detection.
The above detailed description illustrates possible embodiments of the present invention; the embodiments do not limit the patent scope of the present invention, and all equivalent implementations or changes made without departing from the present invention shall be included within the patent scope of this case.

Claims (5)

1. A driver fatigue detection method based on head pose and eye movement, characterized by comprising the following steps:
Step 1: acquire an image sequence through a non-intrusive image acquisition unit;
Step 2: for each frame of the image sequence, detect facial feature point image information and eye-region image information;
Step 3: solve the head pose based on the facial feature point image information;
Step 4: solve the eye movement vector based on the eye-region image information;
Step 5: combine the head pose and the eye movement vector to determine the driver's driving state.
2. The driver fatigue detection method based on head pose and eye movement according to claim 1, characterized in that step 3 comprises the following sub-steps:
Step 31: perform facial feature point detection on the face image region and identify at least three facial feature points;
Step 32: align the facial feature points to obtain the spatial coordinates of the user's head;
Step 33: construct a geometric three-dimensional head model based on the spatial coordinates of the user's head, and obtain the user's head pose angles;
Step 34: cast a ray with a feature point on the face as its endpoint; the ray direction is the direction of the user's head pose angle vector.
3. The driver fatigue detection method based on head pose and eye movement according to claim 1 or 2, characterized in that the facial feature points include the left eye corner, the right eye corner, the left mouth corner, the right mouth corner, and the nose tip.
4. The driver fatigue detection method based on head pose and eye movement according to claim 1, characterized in that step 4 comprises the following sub-steps:
Step 41: based on the eye-region image information, obtain the iris position, segment the pupil edge in the eye region, and locate the pupil center;
Step 42: based on the relative position between the pupil center and the eye corner, construct the pupil-center-to-inner-eye-corner vector (Δx, Δy) to obtain the user's eye movement vector.
5. The driver fatigue detection method based on head pose and eye movement according to claim 1, characterized in that step 5 comprises the following sub-steps:
Step 51: set the safety threshold of the user's head pose angle and eye movement vector as D1;
Step 52: record the time t1 for which the user's head pose angle or eye movement vector exceeds the safety threshold D1;
Step 53: set the maximum safe time for which the user's head pose angle or eye movement vector may exceed the safety threshold D1 as t2;
Step 54: calculate the difference T between the times t1 and t2;
Step 55: compare T with 0 to obtain the user's driving state;
Step 56: count the user's number of blinks K1 based on the eye movement vector, and set the upper limit of the safe number of blinks as K2 and the lower limit as K3;
Step 57: compare K1 with K2 and K3 to obtain the user's driving state.
CN201811029521.4A 2018-09-04 2018-09-04 Driver Fatigue Detection based on head pose and eye movement Pending CN109044363A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811029521.4A CN109044363A (en) 2018-09-04 2018-09-04 Driver Fatigue Detection based on head pose and eye movement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811029521.4A CN109044363A (en) 2018-09-04 2018-09-04 Driver Fatigue Detection based on head pose and eye movement

Publications (1)

Publication Number Publication Date
CN109044363A true CN109044363A (en) 2018-12-21

Family

ID=64759601

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811029521.4A Pending CN109044363A (en) 2018-09-04 2018-09-04 Driver Fatigue Detection based on head pose and eye movement

Country Status (1)

Country Link
CN (1) CN109044363A (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109878528A (en) * 2019-01-31 2019-06-14 电子科技大学 Head movement attitude detection system towards vehicle-mounted stereo visual system
CN109965843A (en) * 2019-03-14 2019-07-05 华南师范大学 A kind of eye movements system passing picture based on filtering
CN110197169A (en) * 2019-06-05 2019-09-03 南京邮电大学 A kind of contactless learning state monitoring system and learning state detection method
CN110909596A (en) * 2019-10-14 2020-03-24 广州视源电子科技股份有限公司 Side face recognition method, device, equipment and storage medium
CN111626221A (en) * 2020-05-28 2020-09-04 四川大学 Driver gazing area estimation method based on human eye information enhancement
CN112084820A (en) * 2019-06-14 2020-12-15 初速度(苏州)科技有限公司 Personnel state detection method and device based on head information
CN112232128A (en) * 2020-09-14 2021-01-15 南京理工大学 Eye tracking based method for identifying care needs of old disabled people
CN112869701A (en) * 2021-01-11 2021-06-01 上海微创医疗机器人(集团)股份有限公司 Sight line detection method, surgical robot system, control method, and storage medium
CN112926364A (en) * 2019-12-06 2021-06-08 北京四维图新科技股份有限公司 Head posture recognition method and system, automobile data recorder and intelligent cabin
CN113331839A (en) * 2021-05-28 2021-09-03 武汉科技大学 Network learning attention monitoring method and system based on multi-source information fusion
CN113591762A (en) * 2021-08-09 2021-11-02 重庆理工大学 Safe driving early warning method under free angle
CN113591699A (en) * 2021-07-30 2021-11-02 西安电子科技大学 Online visual fatigue detection system and method based on deep learning
TWI754806B (en) * 2019-04-09 2022-02-11 栗永徽 System and method for locating iris using deep learning
CN114432098A (en) * 2022-01-27 2022-05-06 中山大学附属第一医院 Gait orthotic devices based on model
CN117227740A (en) * 2023-09-14 2023-12-15 南京项尚车联网技术有限公司 Multi-mode sensing system and method for intelligent driving vehicle

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030169907A1 (en) * 2000-07-24 2003-09-11 Timothy Edwards Facial image processing system
CN101419664A (en) * 2007-10-25 2009-04-29 株式会社日立制作所 Sight direction measurement method and sight direction measurement device
CN105279493A (en) * 2015-10-22 2016-01-27 四川膨旭科技有限公司 System for identifying visions of drivers in vehicle running process
CN106598221A (en) * 2016-11-17 2017-04-26 电子科技大学 Eye key point detection-based 3D sight line direction estimation method
CN107193383A (en) * 2017-06-13 2017-09-22 华南师范大学 A kind of two grades of Eye-controlling focus methods constrained based on facial orientation
CN108427503A (en) * 2018-03-26 2018-08-21 京东方科技集团股份有限公司 Human eye method for tracing and human eye follow-up mechanism

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030169907A1 (en) * 2000-07-24 2003-09-11 Timothy Edwards Facial image processing system
CN101419664A (en) * 2007-10-25 2009-04-29 株式会社日立制作所 Sight direction measurement method and sight direction measurement device
CN105279493A (en) * 2015-10-22 2016-01-27 四川膨旭科技有限公司 System for identifying visions of drivers in vehicle running process
CN106598221A (en) * 2016-11-17 2017-04-26 电子科技大学 Eye key point detection-based 3D sight line direction estimation method
CN107193383A (en) * 2017-06-13 2017-09-22 华南师范大学 A kind of two grades of Eye-controlling focus methods constrained based on facial orientation
CN108427503A (en) * 2018-03-26 2018-08-21 京东方科技集团股份有限公司 Human eye method for tracing and human eye follow-up mechanism

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Chi Jiannan: "A Survey of Driving Fatigue Monitoring Methods", Transport Energy Conservation & Environmental Protection *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109878528A (en) * 2019-01-31 2019-06-14 University of Electronic Science and Technology of China Head movement pose detection system for vehicle-mounted stereo vision systems
CN109965843A (en) * 2019-03-14 2019-07-05 South China Normal University Eye movement system based on filtered image transmission
CN109965843B (en) * 2019-03-14 2022-05-24 South China Normal University Eye movement system based on filtered image transmission
TWI754806B (en) * 2019-04-09 2022-02-11 栗永徽 System and method for locating iris using deep learning
CN110197169A (en) * 2019-06-05 2019-09-03 Nanjing University of Posts and Telecommunications Non-contact learning state monitoring system and learning state detection method
CN110197169B (en) * 2019-06-05 2022-08-26 Nanjing University of Posts and Telecommunications Non-contact learning state monitoring system and learning state detection method
CN112084820A (en) * 2019-06-14 2020-12-15 Momenta (Suzhou) Technology Co., Ltd. Personnel state detection method and device based on head information
CN112084820B (en) * 2019-06-14 2022-06-24 Momenta (Suzhou) Technology Co., Ltd. Personnel state detection method and device based on head information
CN110909596A (en) * 2019-10-14 2020-03-24 Guangzhou Shiyuan Electronic Technology Co., Ltd. Side-face recognition method, apparatus, device and storage medium
CN112926364A (en) * 2019-12-06 2021-06-08 Beijing NavInfo Technology Co., Ltd. Head pose recognition method and system, dashcam and smart cockpit
CN112926364B (en) * 2019-12-06 2024-04-19 Beijing NavInfo Technology Co., Ltd. Head pose recognition method and system, dashcam and smart cockpit
CN111626221A (en) * 2020-05-28 2020-09-04 Sichuan University Driver gaze area estimation method based on eye information enhancement
CN112232128A (en) * 2020-09-14 2021-01-15 Nanjing University of Science and Technology Eye-tracking-based method for identifying care needs of elderly and disabled people
CN112232128B (en) * 2020-09-14 2022-09-13 Nanjing University of Science and Technology Eye-tracking-based method for identifying care needs of elderly and disabled people
CN112869701A (en) * 2021-01-11 2021-06-01 Shanghai MicroPort MedBot (Group) Co., Ltd. Gaze detection method, surgical robot system, control method, and storage medium
CN112869701B (en) * 2021-01-11 2024-03-29 Shanghai MicroPort MedBot (Group) Co., Ltd. Gaze detection method, surgical robot system, control method, and storage medium
CN113331839A (en) * 2021-05-28 2021-09-03 Wuhan University of Science and Technology Online learning attention monitoring method and system based on multi-source information fusion
CN113591699A (en) * 2021-07-30 2021-11-02 Xidian University Online visual fatigue detection system and method based on deep learning
CN113591699B (en) * 2021-07-30 2024-02-09 Xidian University Online visual fatigue detection system and method based on deep learning
CN113591762A (en) * 2021-08-09 2021-11-02 Chongqing University of Technology Safe-driving early-warning method under free viewing angles
CN113591762B (en) * 2021-08-09 2023-07-25 Chongqing University of Technology Safe-driving early-warning method under free viewing angles
CN114432098A (en) * 2022-01-27 2022-05-06 First Affiliated Hospital of Sun Yat-sen University Model-based gait orthosis device
CN117227740A (en) * 2023-09-14 2023-12-15 Nanjing Xiangshang Internet-of-Vehicles Technology Co., Ltd. Multimodal sensing system and method for intelligent driving vehicles
CN117227740B (en) * 2023-09-14 2024-03-19 Nanjing Xiangshang Internet-of-Vehicles Technology Co., Ltd. Multimodal sensing system and method for intelligent driving vehicles

Similar Documents

Publication Publication Date Title
CN109044363A (en) Driver Fatigue Detection based on head pose and eye movement
Chen et al. DeepPhys: Video-based physiological measurement using convolutional attention networks
CN107193383B (en) Two-stage gaze tracking method based on face orientation constraint
US5570698A (en) System for monitoring eyes for detecting sleep behavior
US7692550B2 (en) Method and system for detecting operator alertness
US7692551B2 (en) Method and system for detecting operator alertness
JP5109922B2 (en) Driver monitoring device and program for driver monitoring device
Batista A drowsiness and point of attention monitoring system for driver vigilance
Ghosh et al. Real time eye detection and tracking method for driver assistance system
JP6855872B2 (en) Face recognition device
CN109076176A (en) Eye position detection device and method, imaging device with image sensor having rolling-shutter drive system, and illumination control method therefor
CN109964230A (en) Method and apparatus for eyes measurement acquisition
JP2004192552A (en) Eye opening/closing determining apparatus
Kunka et al. Non-intrusive infrared-free eye tracking method
US10188292B2 (en) Device for screening convergence insufficiency and related computer implemented methods
Yoon et al. Driver’s eye-based gaze tracking system by one-point calibration
Anwar et al. Development of real-time eye tracking algorithm
Apoorva et al. Review on Drowsiness Detection
Hegde et al. Low cost eye based human computer interface system (Eye controlled mouse)
Huang et al. An optimized eye locating and tracking system for driver fatigue monitoring
CN106682588A (en) Real-time pupil detection and tracking method
EP1901252A2 (en) Method and system for detecting operator alertness
Venugopal et al. Real Time Implementation of Eye Tracking System Using Arduino Uno Based Hardware Interface
Thayanithi et al. Real-Time Voice Enabled Drivers' Safety Monitoring System Using Image Processing
Malla Automated video-based measurement of eye closure using a remote camera for detecting drowsiness and behavioural microsleeps

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2018-12-21