CN110276273A - Driver fatigue detection method fusing facial features and image-based pulse heart rate estimation - Google Patents

Driver fatigue detection method fusing facial features and image-based pulse heart rate estimation

Info

Publication number
CN110276273A
CN110276273A (application CN201910466298.8A; granted as CN110276273B)
Authority
CN
China
Prior art keywords
driver
heart rate
image
eyes
fatigue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910466298.8A
Other languages
Chinese (zh)
Other versions
CN110276273B (en)
Inventor
罗堪
都可钦
李建兴
黄炳法
陈炜
马莹
刘肖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian University of Technology
Original Assignee
Fujian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian University of Technology filed Critical Fujian University of Technology
Priority to CN201910466298.8A priority Critical patent/CN110276273B/en
Publication of CN110276273A publication Critical patent/CN110276273A/en
Application granted granted Critical
Publication of CN110276273B publication Critical patent/CN110276273B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/02Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/024Detecting, measuring or recording pulse rate or heart rate
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/16Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
    • A61B5/168Evaluating attention deficit, hyperactivity
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/16Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
    • A61B5/18Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state for vehicle drivers or machine operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2503/00Evaluating a particular growth phase or type of persons or animals
    • A61B2503/20Workers
    • A61B2503/22Motor vehicles operators, e.g. drivers, pilots, captains

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Veterinary Medicine (AREA)
  • Surgery (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Molecular Biology (AREA)
  • Developmental Disabilities (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Cardiology (AREA)
  • Hospice & Palliative Care (AREA)
  • Social Psychology (AREA)
  • Psychology (AREA)
  • Psychiatry (AREA)
  • Child & Adolescent Psychology (AREA)
  • Educational Technology (AREA)
  • Physiology (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Measuring Pulse, Heart Rate, Blood Pressure Or Blood Flow (AREA)

Abstract

The present invention relates to a driver fatigue detection method fusing facial features and image-based pulse heart rate estimation, comprising the following steps: 1) data initialization is performed for the driver; 2) a video acquisition module performs face detection on the driver and acquires and transmits a current facial feature image; 3) facial localization is performed with a pre-trained model; 4) the localized current facial feature image is processed: the current eye-closure time ratio is calculated with the PERCLOS algorithm, and the current yawning frequency with a yawn detection algorithm; 5) the difference between the current left and right cheek heart rates P1 and P2 is calculated using IPPG; 6) a fuzzy neural network system judges whether the driver is fatigued; 7) the time t during which the video acquisition module detects no face is monitored for t > 20 min; if so, step 1) is repeated the next time a face is detected; if not, the fatigue warning continues to be issued.

Description

Driver fatigue detection method fusing facial features and image-based pulse heart rate estimation
Technical field
The present invention relates to the field of detection technology, and in particular to a driver fatigue detection method fusing facial features and image-based pulse heart rate estimation.
Background technique
Road traffic accidents account for 2.2% of global deaths each year, and the toll caused by road traffic accidents is expected to keep rising over the next 20 years. Fatigued driving is a major cause of traffic accidents; fatigue arises within the driver-vehicle-environment system, and a fatigued driver exhibits characteristic physiological and psychological phenomena. By driving duration, fatigued driving can be divided into short-term and long-term driving fatigue. Short-term driving fatigue shows as: frequent blinking, mild tiredness and reduced attention to safety; late or inaccurate gear shifts and lapses of concentration; failure to adapt the vehicle's speed to changing road conditions in time. Long-term driving fatigue shows as: dry mouth, incessant yawning and repeated nodding, with difficulty keeping the head upright; dry, sore eyes that drift shut, dozing off and blurred vision; listlessness, slowed reactions and sluggish judgment; frequently steering subconsciously, lane departure, disorientation and blindly increasing speed. Studies show that in a fatigued state the driver's physiological responses slow down, responses to stimuli are delayed, and physiological indicators deviate from their normal values, so biosensors detecting changes in these indicators can be used to judge whether the driver has entered a fatigued state. Over a long drive, the amplitude of the driver's heart rate fluctuations reflects the psychological and physiological load the driver bears.
In the prior art, fatigue detection technologies for driver state fall broadly into the following three categories:
The first is detection based on vehicle behavior. When the driver is fatigued, awareness weakens and judgment slows; fatigue signs such as frequent nodding and yawning appear, the grip on the steering wheel loosens, and sudden events are handled too late. This method identifies the operating state of the moving vehicle to judge whether the driver is in a dangerous driving state. Its shortcoming is susceptibility to many uncontrollable factors, such as complex road conditions, changeable weather, personal driving habits and differences between vehicles; on bumpy roads or at low speeds, detecting the driver's fatigue state from vehicle behavior therefore becomes considerably harder.
The second is behavioral detection based on image processing. A camera is usually used to obtain real-time state detection information about the driver, image processing techniques extract features, and the driver's real-time state is analyzed to judge the driver's fatigue state. Its shortcomings are that obtaining sufficiently clear images of the driver's state often requires expensive special-purpose image acquisition equipment, and that the method lacks a direct quantitative standard and does not fully account for individual differences.
The third is detection based on human physiological parameters, which reflects the driver's degree of fatigue most directly. Such methods generally require direct contact with some part of the driver's body to complete the detection process, i.e. the driver must wear professional medical equipment to acquire the relevant physiological parameters. Fatigue is judged mainly from physiological signals closely tied to sympathetic and parasympathetic activity; common signals include EEG and ECG. The shortcomings: because a device must be in direct contact with the driver to complete monitoring, the driver must wear professional acquisition equipment, yet many drivers are not in the habit of wearing detection devices while driving, which reduces driving comfort and in turn driving safety; beyond this, the high cost of such devices further weakens their adoption.
At present, electroencephalogram (EEG) fatigue monitoring achieves comparatively high accuracy at home and abroad and is regarded as the current gold standard for fatigue monitoring, but EEG signals must be acquired directly from the head, which interferes with the driver's operation; moreover, EEG signals are easily disturbed by electromagnetic fields and are unsuitable for practical driving scenarios.
Heart rate and heart rate variability metrics from the electrocardiogram (ECG) are an important physiological indicator for judging fatigued driving, but direct measurement requires conductive electrodes attached at measurement points on the driver, i.e. the driver must wear ECG acquisition equipment; many drivers are not in the habit of wearing such devices while driving, which reduces comfort and interferes with normal driving operations.
Imaging photoplethysmography (IPPG) is a contactless physiological parameter detection technique based on indirect measurement. Its principle is to use an imaging device to record video of the measured body surface, extract the subcutaneous shallow-vessel blood perfusion information contained in the video, and analyze that perfusion information to obtain physiological parameters such as heart rate, respiratory rate and heart rate variability. It is non-invasive and non-contact and works at a distance (greater than 0.5 m), giving it unique advantages for assessing cardiovascular health and similar physiological parameters. IPPG can effectively extract pulse rate variability parameters that reflect fatigue state. However, relative movement between the measured region and the image acquisition device produces motion artifacts. Considering the noise interference of real scenes, there is as yet no mature IPPG method for driver fatigue assessment.
The present invention therefore proposes a driver fatigue detection method fusing facial features and image-based pulse heart rate estimation. The main technical problems to be solved are: reducing the interference of motion artifacts with IPPG source signal extraction under dynamic conditions, so that an accurate estimate of the driver's pulse heart rate can be obtained while the car is moving; and combining facial feature data acquired from facial landmarks during driving with the pulse heart rate features of the driver's current state, so as to calculate the driver's degree of fatigue accurately.
Summary of the invention
In view of the above deficiencies in the prior art, the object of the present invention is to propose a driver fatigue detection method fusing facial features and image-based pulse heart rate estimation, intended to solve the main technical problems of reducing the interference of motion artifacts with IPPG source signal extraction under dynamic conditions, so that an accurate estimate of the driver's pulse heart rate can be obtained while the car is moving, and of combining facial feature data acquired from facial landmarks during driving with the pulse heart rate features of the driver's current state, so as to calculate the driver's degree of fatigue accurately.
To achieve the above object, the invention adopts the following technical scheme:
A driver fatigue detection method fusing facial features and image-based pulse heart rate estimation comprises the following steps:
1) data initialization is performed for the driver;
2) a video acquisition module performs face detection on the driver and acquires and transmits a current facial feature image comprising an eye image, a mouth image and left and right cheek images;
3) a localization module receives the current facial feature image and inputs it to a pre-trained model for facial localization, which includes eye localization, mouth localization and left and right cheek localization;
4) an image pre-processing module processes the localized current facial feature image to detect the current eye width L and iris height H and the current mouth length M and height N; the current eye-closure time ratio is calculated with the PERCLOS algorithm, and the current yawning frequency with a yawn detection algorithm;
5) the driver's current left and right cheek images are acquired by continuous photography and, after localization, IPPG is used to calculate the current left and right cheek heart rates P1 and P2 and to check whether their difference is less than 5; if so, the heart rate difference is valid; if not, it is invalid and must be re-acquired and recalculated;
6) the current eye-closure time ratio, current yawning frequency and valid heart rate difference are input to the fuzzy neural network system to judge whether the driver is fatigued; if so, a fatigue warning is issued and step 7) is executed; if not, step 2) is executed after an interval of T minutes;
7) the time t during which the video acquisition module detects no face is monitored for t > 20 min; if so, step 1) is repeated the next time a face is detected; if not, the fatigue warning continues to be issued.
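The steps above can be sketched as a minimal monitoring pass. Every module here (face acquisition, localization, PERCLOS, yawn rate, cheek heart rates, fuzzy neural network) is injected as a placeholder callable, not the patent's actual implementation; only the |P1 - P2| < 5 validity test of step 5) is taken from the text.

```python
def monitor_step(acquire, locate, perclos, yawn_rate, hr_pair, fnn):
    """One pass of steps 2)-6). Returns (verdict, hr_diff_valid).

    All six callables are hypothetical stand-ins for the patent's modules.
    """
    frame = acquire()                       # step 2): capture a facial feature image
    eyes, mouth, cheeks = locate(frame)     # step 3): localize eyes, mouth, cheeks
    p = perclos(eyes)                       # step 4): eye-closure time ratio
    y = yawn_rate(mouth)                    # step 4): yawning frequency
    p1, p2 = hr_pair(cheeks)                # step 5): left/right cheek heart rates
    valid = abs(p1 - p2) < 5                # step 5): validity test from the patent
    verdict = fnn(p, y, p1 - p2) if valid else None   # step 6)
    return verdict, valid
```

With stub callables this runs end to end; an invalid heart rate pair (difference of 5 or more) yields no verdict, matching the re-acquisition branch of step 5).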
Preferably, the data initialization for the driver in step 1) proceeds as follows: when the driver starts the car, the video acquisition module acquires initial facial feature images of the driver including the eyes, mouth and left and right cheeks; real-time frames of the initial facial feature image are scanned with a scanning window of size 80 × 80; the position of the driver's face in the captured image is located first and a face bounding rectangle is output; the face inside the bounding rectangle is then mapped by an affine transformation to a standard face of size 150 × 150; alignment is then performed with the 128-dimensional feature vectors extracted by the pre-trained model, and the driver's eyes and mouth are located and marked according to the distances between feature vectors; the eye and mouth coordinates are recorded, and coordinate subtraction yields the eye width and iris height with the eyes open and the mouth length and height with the mouth closed; finally the PERCLOS algorithm and the yawn detection algorithm are used to calculate the initial eye-closure time ratio and the initial yawning frequency respectively, completing the data initialization.
Preferably, in step 4) the current eye-closure time ratio is calculated with the PERCLOS algorithm as follows: when the driver's eyes are 100% open, the ratio of iris height H1 to eye width L1 is S1 = H1/L1; when the eyes are 80% open, the ratio of the 80% iris height H2 to eye width L2 is S2 = H2/L2; when the eyes are 20% open, the ratio of the 20% iris height H3 to eye width L3 is S3 = H3/L3. If the current measured ratio S satisfies 0.2S1 < S < S1, the eyes are judged to be in the open state; if S < 0.2S1, the eyes are judged to be in the closed state. The current eye-closure time ratio is obtained by calculating the time occupied by the closed-eye state per unit time.
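The per-frame open/closed decision above can be sketched as a small helper, where s1 is the calibrated fully-open ratio H1/L1 and the 0.2 factor is the patent's closure threshold:

```python
def eye_state(h, w, s1, closed_frac=0.2):
    """Classify eye openness from measured iris height h and eye width w.

    s1 is the calibrated fully-open ratio H1/L1. Following the patent,
    the eye counts as closed when h/w falls below closed_frac * s1
    (i.e. below 20% openness) and open otherwise.
    """
    s = h / w
    return "closed" if s < closed_frac * s1 else "open"
```

For example, with a calibrated s1 of 0.5, a measured ratio of 0.05 is well below the 0.1 closure threshold and classifies as closed.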
Preferably, in step 4) the current yawning frequency is calculated with the yawn detection algorithm as follows: record the mouth length M1 and height N1 with the driver's mouth closed and the length M2 and height N2 with the mouth open; if the measured values satisfy the specified inequalities relating M2 and N2 to M1 and N1, the mouth is judged to be in the yawning-open state. The current yawning frequency is obtained by counting the number of yawns per unit time while the driver is driving.
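Counting yawns per unit time can be sketched as follows. The patent's exact inequalities on M1, N1, M2 and N2 are not recoverable from the text, so the per-frame yawn test here uses an ASSUMED criterion (open mouth height exceeding 1.5 times the closed-mouth height); only the count-per-minute logic follows the description.

```python
def yawns_per_minute(open_heights, n1, fps, ratio_thr=1.5):
    """Count yawn onsets (closed-to-open transitions) in a frame sequence.

    open_heights: per-frame mouth height N2; n1: calibrated closed height N1.
    ratio_thr is an ASSUMED threshold standing in for the patent's inequalities.
    Returns the yawning frequency in yawns per minute.
    """
    count, prev = 0, False
    for n2 in open_heights:
        now = n2 > ratio_thr * n1        # assumed per-frame yawn test
        if now and not prev:             # count only the onset of each yawn
            count += 1
        prev = now
    minutes = len(open_heights) / fps / 60.0
    return count / minutes if minutes else 0.0
```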
Preferably, the fuzzy neural network system of step 6) consists of a five-layer structure: an input layer, a fuzzification layer, a fuzzy rule layer, a state layer and an output layer.
Preferably, the current eye-closure time ratio, current yawning frequency and valid heart rate difference are input at the input layer and fuzzified by the fuzzification layer. The fuzzification layer converts the current eye-closure time ratio into blinking yes/no states and a blink duration of long or short, converts the current yawning frequency into yawning yes/no states and a single-yawn duration of long or short, and converts the valid heart rate difference into the two cases greater than 5 or less than 5. According to the fuzzy rules of the fuzzy rule layer, the state layer associates the fuzzified information with severe fatigue, mild fatigue and wakefulness respectively, and the output layer outputs one of two states, fatigued or awake.
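A crude sketch of the fuzzification step, mapping the crisp inputs onto the linguistic states named above. The patent only names the states, so every numeric threshold below (except the heart-rate-difference cutoff of 5) is an assumption for illustration.

```python
def fuzzify(blink_dur_s, yawn_count, yawn_dur_s, hr_diff,
            blink_long_thr=0.5, yawn_long_thr=4.0):
    """Map crisp measurements to the linguistic states of the fuzzification layer.

    blink_long_thr and yawn_long_thr (seconds) are ASSUMED values; the patent
    only distinguishes 'long' from 'short'. The hr_diff cutoff of 5 is from
    the patent.
    """
    return {
        "blink":        blink_dur_s > 0,            # blinking yes/no
        "blink_long":   blink_dur_s > blink_long_thr,
        "yawn":         yawn_count > 0,             # yawning yes/no
        "yawn_long":    yawn_dur_s > yawn_long_thr,
        "hr_diff_gt_5": hr_diff > 5,                # greater than 5 / less than 5
    }
```

The rule layer would then match combinations of these states (e.g. long yawns plus a large heart-rate difference) against severe fatigue, mild fatigue or wakefulness.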
Preferably, the current left and right cheek heart rates of step 5) are calculated with IPPG as follows: extract the driver's current left and right cheek images, separate them into the three channels R, G and B, take the G channel (the channel with the least noise) as the source signal channel, and apply spatial pixel averaging to the separated G channel image:

x(k) = (1/(h·w)) Σ_{i=1}^{h} Σ_{j=1}^{w} x_{i,j}(k)  (1)

where k is the image frame index and K the total number of frames; x(k) is the one-dimensional source signal of the G channel; x_{i,j}(k) is the color intensity value of pixel (i, j) in the G channel; and h and w are the height and width of the G channel image respectively.
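Equation (1), the spatial pixel average of the G channel, can be sketched with NumPy; the cheek video is assumed to arrive as a (K, h, w, 3) array in RGB channel order.

```python
import numpy as np

def g_channel_signal(frames):
    """Spatial pixel average of the G channel per frame (equation (1)):
    x(k) = (1/(h*w)) * sum over i, j of x_{i,j}(k), giving a 1-D source
    signal of length K from K cheek images.

    frames: array-like of shape (K, h, w, 3), RGB order (assumed layout).
    """
    frames = np.asarray(frames, dtype=float)
    g = frames[..., 1]              # G channel, the lowest-noise channel
    return g.mean(axis=(1, 2))      # one value per frame, shape (K,)
```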
Afterwards, the signal is denoised with the empirical mode decomposition (EMD) method, as follows:
(1) Take the mean of the envelopes of the original signal s(t) to obtain the mean signal m_1(t).
(2) Compute the first residual h_1(t) = s(t) - m_1(t) and check whether h_1(t) satisfies the IMF conditions; if not, return to step (1) with h_1(t) as the signal to be screened, i.e.
h_2(t) = h_1(t) - m_2(t)  (2)
The screening is repeated k times:
h_k(t) = h_{k-1}(t) - m_k(t)  (3)
until h_k(t) satisfies the IMF conditions, yielding the first IMF component IMF_1, i.e.
IMF_1 = h_k(t)  (4)
(3) Subtracting IMF_1 from the original signal s(t) gives the residual r_1(t), i.e.
r_1(t) = s(t) - IMF_1  (5)
(4) Let s_1(t) = r_1(t); taking s_1(t) as the new original signal, repeat the above steps to obtain the second IMF component IMF_2, and so on n times.
(5) When the n-th residual r_n(t) has become a monotonic function from which no further IMF can be extracted, the EMD decomposition is complete. The original signal s(t) can then be expressed as the combination of the n IMF components and a mean trend component r_n(t), namely:
s(t) = Σ_{i=1}^{n} IMF_i(t) + r_n(t)  (6)
An energy spectrum analysis with an ARMA model is then performed on the EMD-decomposed signal whose frequency lies in the 0.75-2.0 Hz heartbeat band, corresponding to the normal human heart rate range of 45-120 beats/min. The frequency at the energy peak is the pulse frequency f_h, and the heart rate is then:
R = 60·f_h  (7)
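The final spectral step can be sketched as below. A plain FFT periodogram stands in for the patent's ARMA energy spectrum, but the 0.75-2.0 Hz heartbeat band and the conversion R = 60·f_h follow the text.

```python
import numpy as np

def heart_rate_bpm(x, fs, f_lo=0.75, f_hi=2.0):
    """Estimate heart rate as 60 * f_h, where f_h is the frequency of the
    energy peak inside the 0.75-2.0 Hz heartbeat band (45-120 beats/min).

    x: denoised 1-D pulse signal; fs: sampling rate in Hz.
    An FFT periodogram is used here in place of the patent's ARMA spectrum.
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                          # remove DC before the spectrum
    spec = np.abs(np.fft.rfft(x)) ** 2        # energy spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)  # restrict to the heartbeat band
    fh = freqs[band][np.argmax(spec[band])]   # peak frequency in the band
    return 60.0 * fh                          # equation (7)
```

A 1.2 Hz sinusoid sampled at 30 frames/s, for example, should yield roughly 72 beats/min.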
Preferably, the fatigue warning in step 6) includes a mild fatigue warning and a severe fatigue warning.
With the above technical scheme, the invention acquires the driver's facial image in a non-contact manner through a network camera and extracts facial feature images, reducing interference with driving operations. Facial localization with a pre-trained model accurately locates the subject's eyes and mouth, so the real-time state of the eyes and mouth can be computed precisely; the PERCLOS algorithm then calculates the current eye-closure time ratio, the yawn detection algorithm calculates the current yawning frequency, and IPPG calculates the current left and right cheek heart rates. Since research shows that the similarity between the human pulse and heart rate exceeds 99.9%, the driver's pulse heart rate can be estimated under dynamic conditions and combined with blink frequency and yawning frequency; the fuzzy neural network system fuses this information to measure the driver's degree of fatigue accurately in real time, reducing the risk of fatigued driving and improving the driver's safety factor. Compared with contact heart rate acquisition by traditional wearable devices, the present invention only requires a video acquisition module such as a camera mounted behind the steering wheel: no electrodes need to be affixed to the skin to acquire ECG signals and no electrode positions need adjusting, and the failure of eye feature extraction when the driver wears glasses, a problem in some earlier studies, is overcome by estimating heart rate changes with IPPG. Compared with traditional image-processing behavioral detection, pulse heart rate physiological data strongly correlated with fatigue is added. By face tracking, the invention reduces the relative displacement between the driver and the camera, reducing the influence of motion artifacts on IPPG heart rate measurement under dynamic conditions.
Detailed description of the invention
The present invention is further elaborated below in conjunction with the accompanying drawings:
Fig. 1 is a flow diagram of the driver fatigue detection method of the present invention fusing facial features and image-based pulse heart rate estimation;
Fig. 2 is a flow diagram of the data initialization of the present invention;
Fig. 3 is a schematic diagram of the eye states used by the PERCLOS algorithm of the present invention;
Fig. 4 is a schematic diagram of the mouth states used by the yawn detection algorithm of the present invention;
Fig. 5 is a flow diagram of the eye and mouth state judgment of the present invention;
Fig. 6 is a flow diagram of the initial heart rate calculation of the present invention;
Fig. 7 is a structural schematic diagram of the fuzzy neural network system of the present invention;
Fig. 8 is a schematic diagram of the fuzzy rules of the fuzzy neural network system of the present invention.
Detailed description of the embodiments
As shown in Figs. 1-8, the driver fatigue detection method of the present invention fusing facial features and image-based pulse heart rate estimation comprises the following steps:
1) data initialization is performed for the driver;
2) a video acquisition module performs face detection on the driver and acquires and transmits a current facial feature image comprising an eye image, a mouth image and left and right cheek images;
3) a localization module receives the current facial feature image and inputs it to a pre-trained model for facial localization, which includes eye localization, mouth localization and left and right cheek localization;
4) an image pre-processing module processes the localized current facial feature image to detect the current eye width L and iris height H and the current mouth length M and height N; the current eye-closure time ratio is calculated with the PERCLOS algorithm, and the current yawning frequency with the yawn detection algorithm;
5) the driver's current left and right cheek images are acquired by continuous photography and, after localization, IPPG is used to calculate the current left and right cheek heart rates P1 and P2 and to check whether their difference is less than 5; if so, the heart rate difference is valid; if not, it is invalid and must be re-acquired and recalculated;
6) the current eye-closure time ratio, current yawning frequency and valid heart rate difference are input to the fuzzy neural network system to judge whether the driver is fatigued; if so, a fatigue warning is issued, the fatigue warning including a mild fatigue warning and a severe fatigue warning, and step 7) is executed; if not, step 2) is executed after an interval of T minutes;
7) the time t during which the video acquisition module detects no face is monitored for t > 20 min; if so, step 1) is repeated the next time a face is detected; if not, the fatigue warning continues to be issued.
As driving time lengthens, fatigue deepens; as a person tires, cardiac activity slows, so heart rate can objectively reflect the driver's current state. The eye-closure time reflects the driver's degree of fatigue to a certain extent. Fatigue is judged from eye closure via PERCLOS (percentage of eyelid closure over the pupil over time), defined as the proportion of time per unit time (generally 1 minute or 30 seconds) during which the eyes are closed beyond a certain degree (70% or 80%). Yawning frequency is the total number of yawns per unit time while driving; for example, if the driver yawns once in a given minute, the yawning frequency is 1/min, and if twice in a given minute, 2/min, and so on. Thus, using the three parameters heart rate, PERCLOS and yawning frequency, the present invention achieves non-contact, accurate detection of the driver's degree of fatigue.
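The PERCLOS definition above reduces to a ratio of closed frames within a window (30 s or 1 min in the patent), which can be sketched as:

```python
def perclos(closed_flags):
    """PERCLOS over a window: fraction of frames in which the eyes are
    judged closed. closed_flags is an iterable of per-frame booleans
    covering the chosen unit time (e.g. a 30 s or 1 min window).
    """
    flags = list(closed_flags)
    return sum(flags) / len(flags) if flags else 0.0
```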
Preferably, the data initialization for the driver in step 1) proceeds as follows: when the driver starts the car, the video acquisition module acquires initial facial feature images of the driver including the eyes, mouth and left and right cheeks; real-time frames of the initial facial feature image are scanned with a scanning window of size 80 × 80; the position of the driver's face in the captured image is located first and a face bounding rectangle is output; the face inside the bounding rectangle is then mapped by an affine transformation to a standard face of size 150 × 150; alignment is then performed with the 128-dimensional feature vectors extracted by the pre-trained model, and the driver's eyes and mouth are located and marked according to the distances between feature vectors; the eye and mouth coordinates are recorded, and coordinate subtraction yields the eye width and iris height with the eyes open and the mouth length and height with the mouth closed; finally the PERCLOS algorithm and the yawn detection algorithm are used to calculate the initial eye-closure time ratio and the initial yawning frequency respectively, completing the data initialization. During the first ten minutes of driving, the video acquisition module records the current eye and mouth information to initialize the data; the PERCLOS value and the mouth rest-state value calculated on this basis can be used to quantify the eye-closed and mouth-open states, improving detection accuracy.
As shown in Fig. 3, in step 4) the current eye-closure time ratio is calculated with the PERCLOS algorithm as follows: when the driver's eyes are 100% open, the ratio of iris height H1 to eye width L1 is S1 = H1/L1; when the eyes are 80% open, the ratio of the 80% iris height H2 to eye width L2 is S2 = H2/L2; when the eyes are 20% open, the ratio of the 20% iris height H3 to eye width L3 is S3 = H3/L3. If the ratio of eye height to width lies between 20% and 100% of the fully-open value, i.e. the current ratio S satisfies 0.2S1 < S < S1, the eyes are judged to be in the open state; if it falls below 20%, i.e. S < 0.2S1, the eyes are judged to be in the closed state. The current eye-closure time ratio is obtained by calculating the time occupied by the closed-eye state per unit time.
As shown in Fig. 4, in step 4) the current yawning frequency is calculated with the yawn detection algorithm as follows: record the mouth length M1 and height N1 with the driver's mouth closed and the length M2 and height N2 with the mouth open; if the measured values satisfy the specified inequalities relating M2 and N2 to M1 and N1, the mouth is judged to be in the yawning-open state. The current yawning frequency is obtained by counting the number of yawns per unit time while the driver is driving.
As shown in Fig. 6, during the first ten minutes of driving, any two minutes within the ten are selected at random and the camera is switched to high-speed photography mode for those two minutes, making it convenient to calculate the driver's heart rate from facial data. To overcome the motion artifacts produced by relative motion between driver and camera under driving conditions, the driver's left and right cheeks are precisely localized and tracked. Fig. 6 is the flow chart of initial heart rate extraction: the camera's high-speed mode starts, left and right cheek images of the driver are acquired by continuous photography (30 frames/s), and IPPG is used to calculate the driver's heart rate. If the calculated difference between the left and right cheek heart rates is less than 2, the measurement is valid and the driver's initial valid heart rate is taken as the mean of the left and right cheek heart rates; if the difference is 2 or greater, the measurement is invalid and must be repeated.
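The calibration validity rule of Fig. 6 can be sketched as below; the 2 beats/min threshold is the initial-calibration value from this paragraph (monitoring in step 5) uses the looser threshold of 5).

```python
def initial_heart_rate(p_left, p_right, max_diff=2.0):
    """Initial heart rate calibration rule from Fig. 6: the measurement
    is valid only if the left and right cheek estimates differ by less
    than max_diff beats/min, and the driver's initial heart rate is then
    their mean. Returns None for an invalid measurement (re-measure).
    """
    if abs(p_left - p_right) >= max_diff:
        return None
    return 0.5 * (p_left + p_right)
```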
Fatigue is judged with a fuzzy neural network system. A fuzzy system is a control method based on the theory of fuzzy mathematics, and an artificial neural network has strong self-learning and adaptive abilities. A fuzzy neural network combines the learning mechanism of a neural network with the linguistic inference ability of a fuzzy system; it is efficient and convenient, and its training cycle is short, which makes it well suited to driver-fatigue monitoring. In the initial stage the driver is awake; during this period the fuzzy neural network is in its self-learning stage and memorizes the driver's initial state. Afterwards the network performs offline adaptive learning, monitors the driver's state in real time, and issues voice warnings.
As shown in Fig. 7 and Fig. 8, the fuzzy neural network system of step 6) consists of a five-layer structure: an input layer, a fuzzification layer, a fuzzy-rule layer, a state layer and an output layer.
Preferably, the fuzzy neural network system serves as the state-analysis module of the invention. The current eye-closure time ratio, the current yawning frequency and the effective heart-rate difference are fed in through the input layer and fuzzified by the fuzzification layer: the current eye-closure time ratio is converted into whether the eyes blink (yes/no) and whether a single blink is long or short; the current yawning frequency is converted into whether the driver yawns (yes/no) and whether a single yawn is long or short; and the effective heart-rate difference is converted into greater than 5 or less than 5. According to the rules of the fuzzy-rule layer, the state layer associates the fuzzified information with severe fatigue, mild fatigue or wakefulness, and the output layer then outputs one of two states: fatigued or awake.
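The fuzzification and rule stages can be illustrated with a toy stand-in. The thresholds and the crisp rule table below are invented for the sketch; the patent's network learns its memberships and rules rather than using fixed cutoffs:

```python
def fuzzify(perclos_ratio, yawn_freq, hr_diff,
            perclos_th=0.15, yawn_th=0.5, hr_th=5.0):
    """Crisp inputs -> coarse linguistic labels. Membership functions are
    simplified to hard thresholds purely for illustration."""
    return {
        "eyes_closed_long": perclos_ratio > perclos_th,
        "yawning_often": yawn_freq > yawn_th,
        "hr_changed": hr_diff > hr_th,
    }

def infer_state(fuzzy_inputs):
    """Toy rule base: the more indicators triggered, the deeper the fatigue."""
    score = sum(fuzzy_inputs.values())
    if score >= 2:
        return "severe fatigue"
    if score == 1:
        return "mild fatigue"
    return "awake"
```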
Preferably, the method of step 6) for calculating the current left- and right-cheek heart rates with the IPPG technique is as follows: extract the driver's current left- and right-cheek images and separate them into the R, G and B channels; take the G channel, which has the least noise, as the source-signal channel, and apply spatial pixel averaging to the separated G-channel image:

x(k) = (1/(h·w)) · Σ_{i=1}^{h} Σ_{j=1}^{w} x_{i,j}(k),  k = 1, 2, ..., K   (1)

where k is the frame index and K is the total number of frames; x(k) is the one-dimensional source signal of the G channel; x_{i,j}(k) is the colour intensity of pixel (i, j) in the G channel; and h and w are the height and width of the G-channel image.
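The spatial pixel averaging of the G channel can be sketched with NumPy; the frame layout (K, h, w, 3) in RGB order is an assumption:

```python
import numpy as np

def g_channel_trace(frames):
    """Eq. (1): average the G-channel intensity over all h*w pixels of each
    of the K frames, giving the length-K one-dimensional source signal."""
    frames = np.asarray(frames, dtype=float)  # shape (K, h, w, 3), RGB
    return frames[..., 1].mean(axis=(1, 2))   # G channel, spatial mean per frame
```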
Then the signal is denoised with the empirical mode decomposition (EMD) method, as follows:
(1) Take the mean of the upper and lower envelopes of the original signal s(t) to obtain m1(t).
(2) Compute the first-order residual h1(t) = s(t) - m1(t) and check whether h1(t) satisfies the IMF conditions. If not, return to step (1) with h1(t) as the signal to be sifted, i.e.

h2(t) = h1(t) - m2(t)   (2)

The sifting is repeated k times,

hk(t) = hk-1(t) - mk(t)   (3)

until hk(t) satisfies the IMF conditions, which yields the first IMF component:

IMF1 = hk(t)   (4)
(3) Subtracting IMF1 from the original signal s(t) gives the residual r1(t):

r1(t) = s(t) - IMF1   (5)

(4) Let s1(t) = r1(t) and take s1(t) as the new original signal; repeating the above steps yields the second component IMF2, and so on n times.
(5) When the n-th residual rn(t) has become a monotonic function from which no further IMF can be extracted, the EMD decomposition is complete. The original signal s(t) can then be expressed as the combination of n IMF components and a mean-trend component rn(t), that is:

s(t) = Σ_{i=1}^{n} IMFi + rn(t)   (6)
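Steps (1)-(5) can be sketched as follows. This simplified version replaces the IMF test with a fixed sifting count and uses cubic-spline envelopes (a common choice, assumed here), so it illustrates the structure of the algorithm rather than a production implementation:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def mean_envelope(x, t):
    """m(t): mean of the upper and lower cubic-spline envelopes of x(t),
    or None when x has too few extrema (i.e. it is essentially monotonic)."""
    maxima = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    minima = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    if len(maxima) < 2 or len(minima) < 2:
        return None
    upper = CubicSpline(t[maxima], x[maxima])(t)
    lower = CubicSpline(t[minima], x[minima])(t)
    return (upper + lower) / 2.0

def emd(signal, t, max_imfs=5, sift_iters=10):
    """Simplified EMD: sift each IMF a fixed number of times, subtract it
    from the residue, and stop once the residue is monotonic (r_n(t))."""
    residue = np.asarray(signal, dtype=float).copy()
    imfs = []
    for _ in range(max_imfs):
        if mean_envelope(residue, t) is None:  # residue is the trend term
            break
        h = residue.copy()
        for _ in range(sift_iters):
            m = mean_envelope(h, t)
            if m is None:
                break
            h = h - m
        imfs.append(h)
        residue = residue - h
    return imfs, residue
```

By construction the components satisfy Eq. (6): the input signal equals the sum of the extracted IMFs plus the final residue.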
An ARMA model is then used to perform energy-spectrum analysis on the EMD components whose frequency lies in the heartbeat band of 0.75-2.0 Hz, i.e. the band corresponding to a normal human heart rate of 45-120 beats/min. The frequency at the highest point of the energy spectrum is the heartbeat frequency fh, and the heart rate is

R = 60·fh   (7)
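The peak-picking step of Eq. (7) can be sketched as follows; a plain FFT periodogram is used here in place of the ARMA spectral estimate, purely for illustration:

```python
import numpy as np

def heart_rate_bpm(pulse, fs, band=(0.75, 2.0)):
    """Locate the power-spectrum peak inside the heartbeat band
    (0.75-2.0 Hz, i.e. 45-120 beats/min) and convert via R = 60 * f_h."""
    pulse = np.asarray(pulse, dtype=float)
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fs)
    power = np.abs(np.fft.rfft(pulse - pulse.mean())) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    f_h = freqs[in_band][np.argmax(power[in_band])]
    return 60.0 * f_h
```

For a 20 s clip at 30 frames/s the frequency resolution is 0.05 Hz, i.e. 3 beats/min, so longer windows give finer estimates.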
With the above method, the heart-rate signals at the driver's left and right cheeks can be measured and compared.
With the above technical scheme, the invention acquires the driver's facial image in a non-contact manner through a network camera and extracts facial-feature images, reducing interference with driving. The current eye-closure time ratio is calculated with the PERCLOS algorithm, the current yawning frequency with the yawn-detection algorithm, and the current left- and right-cheek heart rates with the IPPG technique. Since research shows that the similarity between the human pulse and the heart rate is above 99.9%, the pulse heart rate estimated while the driver is in motion, combined with the blink frequency and yawning frequency, allows the fuzzy neural network system to fuse the information and measure the driver's degree of fatigue accurately in real time, reducing the risk of fatigued driving and improving driving safety. Compared with contact heart-rate acquisition by traditional wearable devices, the invention only requires mounting the video-acquisition module, such as the camera, behind the steering wheel: the driver's heart rate is estimated by the IPPG technique without attaching electrodes to the skin to collect ECG signals and without adjusting electrode positions. Compared with traditional image-processing behaviour-feature detection, it adds pulse heart rate, a physiological datum highly correlated with fatigue.
The above description shall not limit the protection scope of the present invention in any way.

Claims (8)

1. A driver fatigue detection method fusing facial features and image pulse heart rate estimation, characterized by comprising the following steps:
1) performing data initialization for the driver;
2) performing face detection on the driver with a video acquisition module, and acquiring and sending current facial-feature images including an eye image, a mouth image and left- and right-cheek images;
3) receiving the current facial-feature images with a positioning module and inputting them into a pre-trained model for face positioning, the face positioning including eye positioning, mouth positioning and left- and right-cheek positioning;
4) processing the positioned current facial-feature images with an image pre-processing module to detect the current eye width L and iris height H and the current mouth length M and height N; calculating the current eye-closure time ratio with the PERCLOS algorithm, and calculating the current yawning frequency with the yawn-detection algorithm;
5) acquiring the driver's current left- and right-cheek images by continuous shooting and, after positioning, calculating the current left- and right-cheek heart rates P1 and P2 with the IPPG technique; if the difference between P1 and P2 is less than 5, the heart-rate difference is valid; otherwise it is invalid and the images must be re-acquired and the calculation repeated;
6) inputting the current eye-closure time ratio, the current yawning frequency and the effective heart-rate difference into a fuzzy neural network system to judge whether the driver is fatigued; if so, issuing a fatigue warning and executing step 7); if not, executing step 2) after an interval of T minutes;
7) monitoring whether the time t during which the video acquisition module detects no face satisfies t > 20 min; if so, repeating from step 1) when a face is next detected; if not, continuously issuing the fatigue warning.
2. The driver fatigue detection method fusing facial features and image pulse heart rate estimation according to claim 1, characterized in that the method of step 1) for performing data initialization for the driver is as follows: when the driver starts driving, the video acquisition module acquires the driver's initial facial-feature images including the eyes, mouth and left and right cheeks; real-time frames of the initial facial-feature images are scanned with a scanning window of size 80 × 80; the position of the driver's face in the captured image is first located and the face's bounding rectangle is output; the face within the rectangle is then transformed by an affine transformation into a standard face of 150 × 150 pixels and aligned with the 128-dimensional feature vector extracted by the pre-trained model; according to the distances between feature vectors, the driver's eyes and mouth are located and marked, the eye and mouth coordinates are recorded, and coordinate subtraction is used to calculate the eye width and iris height when the eyes are open and the mouth length and height in the closed state; the PERCLOS algorithm and the yawn-detection algorithm are then applied to calculate the initial eye-closure time ratio and the initial yawning frequency, completing the data initialization.
3. The driver fatigue detection method fusing facial features and image pulse heart rate estimation according to claim 1, characterized in that the method of step 4) for calculating the current eye-closure time ratio with the PERCLOS algorithm is as follows: when the driver's eyes are 100% open, the ratio of iris height H1 to eye width L1 is S1 = H1/L1; when the eyes are 80% open, the ratio of iris height H2 to eye width L2 is S2 = H2/L2; and when the eyes are 20% open, the ratio of iris height H3 to eye width L3 is S3 = H3/L3; if 0.2S1 < S2 < S1 is satisfied, the eyes are judged to be open, and if S3 < 0.2S1 is satisfied, the eyes are judged to be closed; the current eye-closure time ratio is obtained by accumulating the time occupied by the closed state per unit time.
4. The driver fatigue detection method fusing facial features and image pulse heart rate estimation according to claim 1, characterized in that the method of step 4) for calculating the current yawning frequency with the yawn-detection algorithm is as follows: record the length M1 and height N1 of the driver's mouth in the closed state, and the length M2 and height N2 when the mouth is open; if the ratios M2/M1 and N2/N1 satisfy the set threshold conditions, the mouth is judged to be in the open, yawning state; the current yawning frequency is obtained by counting the number of yawns per unit time while the driver is driving.
5. The driver fatigue detection method fusing facial features and image pulse heart rate estimation according to claim 1, characterized in that the fuzzy neural network system of step 6) consists of a five-layer structure: an input layer, a fuzzification layer, a fuzzy-rule layer, a state layer and an output layer.
6. The driver fatigue detection method fusing facial features and image pulse heart rate estimation according to claim 5, characterized in that the current eye-closure time ratio, the current yawning frequency and the effective heart-rate difference are fed in through the input layer and fuzzified by the fuzzification layer: the current eye-closure time ratio is converted into whether the eyes blink (yes/no) and whether a single blink is long or short; the current yawning frequency is converted into whether the driver yawns (yes/no) and whether a single yawn is long or short; and the effective heart-rate difference is converted into greater than 5 or less than 5; according to the rules of the fuzzy-rule layer, the state layer associates the fuzzified information with severe fatigue, mild fatigue or wakefulness, and the output layer then outputs one of two states: fatigued or awake.
7. The driver fatigue detection method fusing facial features and image pulse heart rate estimation according to claim 1, characterized in that the method of step 6) for calculating the current left- and right-cheek heart rates with the IPPG technique is as follows: extract the driver's current left- and right-cheek images and separate them into the R, G and B channels; take the G channel, which has the least noise, as the source-signal channel, and apply spatial pixel averaging to the separated G-channel image:

x(k) = (1/(h·w)) · Σ_{i=1}^{h} Σ_{j=1}^{w} x_{i,j}(k),  k = 1, 2, ..., K   (1)

where k is the frame index and K is the total number of frames; x(k) is the one-dimensional source signal of the G channel; x_{i,j}(k) is the colour intensity of pixel (i, j) in the G channel; and h and w are the height and width of the G-channel image;
the signal is then denoised with the empirical mode decomposition (EMD) method, as follows:
(1) take the mean of the upper and lower envelopes of the original signal s(t) to obtain m1(t);
(2) compute the first-order residual h1(t) = s(t) - m1(t) and check whether h1(t) satisfies the IMF conditions; if not, return to step (1) with h1(t) as the signal to be sifted, i.e.

h2(t) = h1(t) - m2(t)   (2)

the sifting is repeated k times,

hk(t) = hk-1(t) - mk(t)   (3)

until hk(t) satisfies the IMF conditions, which yields the first IMF component:

IMF1 = hk(t)   (4)

(3) subtracting IMF1 from the original signal s(t) gives the residual r1(t):

r1(t) = s(t) - IMF1   (5)

(4) let s1(t) = r1(t) and take s1(t) as the new original signal; repeating the above steps yields the second component IMF2, and so on n times;
(5) when the n-th residual rn(t) has become a monotonic function from which no further IMF can be extracted, the EMD decomposition is complete; the original signal s(t) can then be expressed as the combination of n IMF components and a mean-trend component rn(t), that is:

s(t) = Σ_{i=1}^{n} IMFi + rn(t)   (6)

an ARMA model is then used to perform energy-spectrum analysis on the EMD components whose frequency lies in the heartbeat band of 0.75-2.0 Hz, i.e. the band corresponding to a normal human heart rate of 45-120 beats/min; the frequency at the highest point of the energy spectrum is the heartbeat frequency fh, and the heart rate is

R = 60·fh   (7).
8. The driver fatigue detection method fusing facial features and image pulse heart rate estimation according to claim 1, characterized in that the fatigue warning of step 6) includes a mild fatigue warning and a severe fatigue warning.
CN201910466298.8A 2019-05-30 2019-05-30 Driver fatigue detection method integrating facial features and image pulse heart rate estimation Active CN110276273B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910466298.8A CN110276273B (en) 2019-05-30 2019-05-30 Driver fatigue detection method integrating facial features and image pulse heart rate estimation


Publications (2)

Publication Number Publication Date
CN110276273A true CN110276273A (en) 2019-09-24
CN110276273B CN110276273B (en) 2024-01-02

Family

ID=67960473

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910466298.8A Active CN110276273B (en) 2019-05-30 2019-05-30 Driver fatigue detection method integrating facial features and image pulse heart rate estimation

Country Status (1)

Country Link
CN (1) CN110276273B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008020458A2 (en) * 2006-08-18 2008-02-21 Ananya Innovations Limited A method and system to detect drowsy state of driver
US20090261979A1 (en) * 1992-05-05 2009-10-22 Breed David S Driver Fatigue Monitoring System and Method
CN103714660A (en) * 2013-12-26 2014-04-09 苏州清研微视电子科技有限公司 System for achieving fatigue driving judgment on basis of image processing and fusion between heart rate characteristic and expression characteristic
CN106372621A (en) * 2016-09-30 2017-02-01 防城港市港口区高创信息技术有限公司 Face recognition-based fatigue driving detection method
CN107731306A (en) * 2017-10-10 2018-02-23 江西科技师范大学 A kind of contactless heart rate extracting method based on thermal imaging
CN109460703A (en) * 2018-09-14 2019-03-12 华南理工大学 A kind of non-intrusion type fatigue driving recognition methods based on heart rate and facial characteristics


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KUO, JONNY et al.: "Evaluation of a video-based measure of driver heart rate", Journal of Safety Research, vol. 54, pp. 55-59 *
LIU, Yi (刘祎) et al.: "Non-contact heart rate measurement method based on face video" (基于人脸视频的非接触式心率测量方法), Nanotechnology and Precision Engineering (《纳米技术与精密工程》), vol. 14, no. 01, 31 March 2016, pp. 76-79 *
XU, Miaoyu (徐妙语): "Research on driver fatigue detection algorithms based on facial feature points" (基于人脸特征点的驾驶员疲劳检测算法研究), China Master's Theses Full-text Database, Engineering Science and Technology II (《中国优秀硕士论文全文数据库 工程科技Ⅱ辑》), no. 06, 15 June 2018, pp. 1-42 *

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110705453A (en) * 2019-09-29 2020-01-17 中国科学技术大学 Real-time fatigue driving detection method
CN110728241A (en) * 2019-10-14 2020-01-24 湖南大学 Driver fatigue detection method based on deep learning multi-feature fusion
CN110811649A (en) * 2019-10-31 2020-02-21 太原理工大学 Fatigue driving detection method based on bioelectricity and behavior characteristic fusion
CN110930641A (en) * 2019-11-28 2020-03-27 同济大学 Fatigue driving early warning system and method based on physiological information analysis
CN111274997A (en) * 2020-02-17 2020-06-12 天津中科智能识别产业技术研究院有限公司 Iris recognition neural network model training method based on binocular fusion
CN111274997B (en) * 2020-02-17 2023-02-24 天津中科智能识别产业技术研究院有限公司 Iris recognition neural network model training method based on binocular fusion
CN111652036A (en) * 2020-03-30 2020-09-11 华南理工大学 Fatigue driving identification method based on fusion of heart rate and facial features of vision
CN111652036B (en) * 2020-03-30 2023-05-26 华南理工大学 Fatigue driving identification method based on vision and fusing heart rate and facial features
CN112052905B (en) * 2020-09-11 2023-02-03 重庆科技学院 Method for extracting multi-operation fatigue features of driver based on recurrent neural network
CN112052905A (en) * 2020-09-11 2020-12-08 重庆科技学院 Method for extracting multi-operation fatigue features of driver based on recurrent neural network
CN112220480B (en) * 2020-10-21 2023-08-04 合肥工业大学 Driver state detection system based on millimeter wave radar and camera fusion and vehicle
CN112220480A (en) * 2020-10-21 2021-01-15 合肥工业大学 Driver state detection system and vehicle based on millimeter wave radar and camera fusion
CN112401857A (en) * 2020-11-23 2021-02-26 杭州艺兴科技有限公司 Driver drunk driving detection method
CN113255478A (en) * 2021-05-10 2021-08-13 厦门理工学院 Composite fatigue detection method, terminal equipment and storage medium
GB2607994A (en) * 2021-06-02 2022-12-21 Lenovo Beijing Ltd Fatigue measurement method, apparatus, and computer-readable medium
GB2607994B (en) * 2021-06-02 2023-09-20 Lenovo Beijing Ltd Fatigue measurement method, apparatus, and computer-readable medium
CN113792663A (en) * 2021-09-15 2021-12-14 东北大学 Detection method and device for drunk driving and fatigue driving of driver and storage medium
CN113792663B (en) * 2021-09-15 2024-05-14 东北大学 Method, device and storage medium for detecting drunk driving and fatigue driving of driver
CN113974633B (en) * 2021-10-12 2023-02-14 浙江大学 Traffic risk prevention and control method, device, equipment and electronic equipment
CN113974633A (en) * 2021-10-12 2022-01-28 浙江大学 Traffic risk prevention and control method, device, equipment and electronic equipment
CN116168508A (en) * 2022-05-20 2023-05-26 海南大学 Driving fatigue detection and early warning control method and device for man-machine co-driving
CN116168508B (en) * 2022-05-20 2023-10-24 海南大学 Driving fatigue detection and early warning control method and device for man-machine co-driving
CN117115894A (en) * 2023-10-24 2023-11-24 吉林省田车科技有限公司 Non-contact driver fatigue state analysis method, device and equipment
CN117426754A (en) * 2023-12-22 2024-01-23 山东锋士信息技术有限公司 PNN-LVQ-based feature weight self-adaptive pulse wave classification method
CN117426754B (en) * 2023-12-22 2024-04-19 山东锋士信息技术有限公司 PNN-LVQ-based feature weight self-adaptive pulse wave classification method

Also Published As

Publication number Publication date
CN110276273B (en) 2024-01-02

Similar Documents

Publication Publication Date Title
CN110276273A (en) Merge the Driver Fatigue Detection of facial characteristics and the estimation of image pulse heart rate
Zhang et al. Driver drowsiness detection using multi-channel second order blind identifications
Solaz et al. Drowsiness detection based on the analysis of breathing rate obtained from real-time image recognition
Pratama et al. A review on driver drowsiness based on image, bio-signal, and driver behavior
Favilla et al. Heart rate and heart rate variability from single-channel video and ICA integration of multiple signals
CN112434611B (en) Early fatigue detection method and system based on eye movement subtle features
Liu et al. Driver fatigue detection through pupil detection and yawing analysis
Ghosh et al. Real time eye detection and tracking method for driver assistance system
Hachisuka Human and vehicle-driver drowsiness detection by facial expression
Hussein et al. Driver drowsiness detection techniques: A survey
CN112220480A (en) Driver state detection system and vehicle based on millimeter wave radar and camera fusion
Awais et al. Automated eye blink detection and tracking using template matching
CN113989788A (en) Fatigue detection method based on deep learning and multi-index fusion
Coetzer et al. Driver fatigue detection: A survey
Khan et al. Efficient Car Alarming System for Fatigue Detectionduring Driving
Xu et al. Ivrr-PPG: An illumination variation robust remote-PPG algorithm for monitoring heart rate of drivers
Yarlagadda et al. Driver drowsiness detection using facial parameters and rnns with lstm
Yin et al. A driver fatigue detection method based on multi-sensor signals
Anumas et al. Driver fatigue monitoring system using video face images & physiological information
WO2019124087A1 (en) Biological state estimating device, method, and program
WO2023184832A1 (en) Physiological state detection method and apparatus, electronic device, storage medium, and program
Bulygin et al. Image-Based Fatigue Detection of Vehicle Driver: State-of-the-Art and Reference Model
Chiou et al. Abnormal driving behavior detection using sparse representation
Nopsuwanchai et al. Driver-independent assessment of arousal states from video sequences based on the classification of eyeblink patterns
Du et al. Online vigilance analysis combining video and electrooculography features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant