CN112617835A - Multi-feature fusion fatigue detection method based on transfer learning - Google Patents

Multi-feature fusion fatigue detection method based on transfer learning

Info

Publication number
CN112617835A
CN112617835A (application CN202011492334.7A)
Authority
CN
China
Prior art keywords
data
volunteer
fatigue
electroencephalogram
electrocardio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011492334.7A
Other languages
Chinese (zh)
Other versions
CN112617835B (en)
Inventor
陶鹏鹏
张帅青
檀旭栋
黄海平
胡素君
王汝传
王睿
李欣祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN202011492334.7A
Publication of CN112617835A
Application granted
Publication of CN112617835B
Legal status: Active (granted)


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B 5/18 Devices for psychotechnics for vehicle drivers or machine operators
    • A61B 5/162 Testing reaction times
    • A61B 5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7235 Details of waveform analysis
    • A61B 5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B 5/7267 Classification of physiological signals or data involving training the classification device
    • A61B 2503/00 Evaluating a particular growth phase or type of persons or animals
    • A61B 2503/20 Workers
    • A61B 2503/22 Motor vehicle operators, e.g. drivers, pilots, captains


Abstract

The multi-feature fusion fatigue detection method based on transfer learning improves on existing fatigue detection methods that rely on a single physiological feature. It acquires the electroencephalogram, electrocardiogram, and electrooculogram signals closest to the essence of the fatigue state and fuses them with facial image features, further improving the model recognition rate. Models are trained separately on the 4 kinds of sensor data and fused at the decision level with a weighted average method, so the model retains a certain robustness when a sensor fails. At the same time, the invention introduces a transfer learning strategy, reducing the influence of individual differences between drivers on the stability of the fatigue detection model.

Description

Multi-feature fusion fatigue detection method based on transfer learning
Technical Field
The invention relates to the field of safe driving of automobiles, in particular to a multi-feature fusion fatigue detection method based on transfer learning.
Background
Fatigue driving is a major cause of traffic accidents, and countless accidents are caused by it every year, so enterprises and research institutions at home and abroad have successively carried out research on driving fatigue detection. Current fatigue detection methods fall mainly into three types:
1. Detection methods based on the behavior of the running vehicle, such as the steering-wheel deflection angle, steering-wheel acceleration, grip force on the steering wheel, the lateral position of the vehicle, and changes in driving speed. Although vehicle-behavior features are easy to acquire and do not interfere with the driver's operation, they are affected by vehicle type, driving habits, and road conditions, so fatigue detection models built on vehicle-behavior features often struggle to give stable results under different conditions.
2. Detection methods based on the driver's facial image features, such as head position, eye behavior, and mouth state. Eye features are the most important indicators of the fatigue state: after a driver becomes fatigued, blink frequency decreases, eye-closure time increases markedly compared with the normal state, eye-open time shortens, and the degree of eye opening also decreases; in deep fatigue the driver's eyes may even remain closed for long periods. Facial image features, particularly eye features, therefore reflect the driver's state well.
3. Detection methods based on the driver's physiological signals, such as the electroencephalogram (EEG), electrocardiogram (ECG), electromyogram (EMG), and electrooculogram (EOG). Physiological features are regarded as the most accurate and reliable indicators for detecting fatigue, and the electroencephalogram in particular is known as the 'gold standard' of fatigue detection; by processing and analyzing these physiological indicators, the driver's fatigue state can be detected with high precision.
In recent years many fatigue detection methods based on a single feature or on multi-feature fusion have been proposed, but these methods do not consider the influence of individual differences between drivers on model performance in practical situations.
Disclosure of Invention
In order to solve the problems that single-feature fatigue detection methods have low accuracy and that individual differences between drivers affect the stability of a fatigue detection model, the invention provides a fatigue detection method that fuses electroencephalogram, electrocardiogram, electrooculogram, and facial image features and incorporates a transfer learning strategy.
The invention relates to a multi-feature fusion fatigue detection method based on transfer learning, which comprises the following steps:
step 1: selecting a plurality of volunteers;
step 2: carrying out laboratory simulated driving with each volunteer, acquiring real-time electroencephalogram, electrocardiogram, electrooculogram, and facial image signals, and carrying out a reaction-time test on each volunteer at intervals to finish data acquisition;
step 3: dividing the electroencephalogram, electrocardiogram, electrooculogram, and facial image signals of each volunteer according to time windows, respectively extracting features, setting labels according to the corresponding reaction times, and forming a labeled data set D_s = {x_1, x_2, x_3, …, x_n}, Y = {Y_1, Y_2, Y_3, …, Y_n} from the data and labels of all volunteers, where x_i represents the feature data of the i-th volunteer and Y_i the state-label data of the i-th volunteer;
step 4: carrying out data pre-collection and feature extraction on a driver to obtain driver feature data x_tq and volunteer status labels Y, computing the maximum mean discrepancy between x_tq and the data of each volunteer in the data set D_s of step 3, and screening out the m volunteers whose maximum mean discrepancy from the driver's physiological data is smallest;
step 5: using the labeled physiological data of the m volunteers obtained in step 4 and the driver feature data x_tq of step 4 to respectively train 4 transfer learning models based on deep autoencoders (TLDA) on each volunteer's electroencephalogram, electrocardiogram, electrooculogram, and facial image data, training m × 4 TLDA models in total, inputting the driver's feature data into the trained models, and obtaining each TLDA model's evaluation result P(y_ij) for the driver's fatigue state, where P(y_ij) represents the probability that the TLDA model for the j-th sensor data of the i-th volunteer outputs fatigue;
step 6: integrating, by a weighted average method, the outputs of the electroencephalogram, electrocardiogram, electrooculogram, and facial image models of each volunteer in step 5 to obtain the per-volunteer evaluation result

P(y_i) = (1/4) Σ_{j=1}^{4} P(y_ij)

and counting the conditional probability P(y_i|Y), where P(y_i|Y) represents the probability that the i-th TLDA model outputs fatigue given that the true label is fatigue or non-fatigue;

step 7: using the evaluation results P(y_i) of each volunteer model from step 6 and the conditional probabilities P(y_i|Y) to calculate the final evaluation result

Y' = P(Y | y_1, …, y_m) = P(Y) Π_{i=1}^{m} P(y_i|Y) / Σ_Y P(Y) Π_{i=1}^{m} P(y_i|Y)
Further, in step 2, the electroencephalogram, electrocardiogram, and electrooculogram signals are collected at a sampling frequency of 512 Hz, and a facial video of the subject is recorded at 30 fps.
Further, in step 3, the electroencephalogram signal is processed as follows: wavelet-threshold denoising is applied first, the alpha, beta, and theta waves are then obtained by wavelet decomposition, and the energy and sample entropy of each frequency band, together with combinations of them, are calculated as the electroencephalogram features.
Further, the electrocardiogram signal is processed as follows: the R-wave peak points are marked first and the R-R intervals are calculated; the mean R-R interval, the standard deviation of the R-R intervals, and the proportion of successive R-R interval differences greater than 50 ms to the total number of R-R intervals (PNN50) are then calculated as the electrocardiogram features.
Further, the electrooculogram signal is processed as follows: the peak and the left and right zero points of each blink are located first; the eye-closing and eye-opening durations of each blink, the average blink duration and blink frequency within the time window, and the combined feature PAVR, i.e., the ratio of the maximum amplitude of the electrooculogram signal during each blink to the blink duration, are then calculated.
Further, the facial image is processed as follows: a CLM localization model is used to mark the human eyes and obtain the upper-lower eyelid distance, from which the eye feature PERCLOS is calculated, i.e., the ratio of the time during which the eyelid distance is less than 30% of its eyes-open value to the total time-window length, with the corresponding reaction time used as the label.
The beneficial effects of the invention are as follows: the invention adds decision-level fusion of multiple physiological features and a transfer learning strategy. Decision-level fusion of multiple physiological features improves the accuracy of fatigue detection and the robustness of the model, and the transfer learning strategy effectively reduces the influence of individual differences between drivers on the model's evaluation performance, so the method has stronger stability.
Drawings
So that the present invention may be more readily and clearly understood, the invention is described in further detail below with reference to the specific embodiments illustrated in the accompanying drawings.
FIG. 1 is a flow chart of data acquisition for the method of the present invention;
FIG. 2 shows the eye landmark positions used for the facial features;
FIG. 3 is a flow chart of multi-feature decision fusion of the method of the present invention;
fig. 4 is a flowchart of a transfer learning strategy of the method of the present invention.
Detailed Description
As shown in FIGS. 1-4, the multi-feature fusion fatigue detection method based on transfer learning according to the present invention includes the following steps:
Step 1: select 20 volunteers of different ages and occupations, each with a driving age of more than 1 year, 10 men and 10 women;
The 20 volunteers in step 1 must cover the three age groups 20-29, 30-39, and 40-49, while ensuring that the volunteers are engaged in different industries and that there are 10 men and 10 women.
Step 2: each volunteer carries out laboratory simulation driving and acquires real-time electroencephalogram, electrocardio, electrooculogram and facial image signals, reaction time test is carried out every 10 seconds, each experiment is carried out for 10 minutes, and the time interval of each experiment is not less than 24 hours;
in step 2, collecting electroencephalogram, electrocardio and eye electric signals at a sampling frequency of 512HZ, and recording a facial video of a human subject at a frequency of 30fps, wherein the position of an electroencephalogram signal sampling electrode comprises Fz: midline frontal electrode, Cz: midline central electrode, C3: left hemisphere center electrode, C4: right hemisphere central electrode, Pz: a midline top electrode; the ocular electrical signals include horizontal ocular electrical: EOG-V and vertical electro-oculogram: EOG-H; a single-channel grayscale image of face image 512x424 resolution, at a frame rate of 30 fps.
The reaction time is measured by displaying a button on the computer screen in front of the subject every 10 seconds and recording the time t_s at which the button appears and the time t_e at which the subject presses it; the reaction time is recorded as t = t_e - t_s. Since the minimum reaction time a driver needs before starting to brake is 0.4 seconds and the minimum time for the braking effect to take hold is 0.3 seconds, 0.7 seconds in total, a reaction time of 0.7 seconds is taken as the threshold between wakefulness and fatigue: more than 0.7 seconds is labeled fatigue, and less than 0.7 seconds is labeled awake. Each test lasted 10 minutes, and each volunteer completed 3 tests in different time periods, with intervals of not less than 24 hours.
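A minimal sketch of this labeling rule (the 0.7 s threshold is from the description; the function name and array handling are illustrative):

```python
import numpy as np

FATIGUE_THRESHOLD_S = 0.7  # 0.4 s minimum reaction + 0.3 s braking effect

def label_reaction_times(t_appear, t_press):
    """Label each probe: 1 = fatigue (t_e - t_s > 0.7 s), 0 = awake."""
    reaction = np.asarray(t_press) - np.asarray(t_appear)
    return (reaction > FATIGUE_THRESHOLD_S).astype(int)

# Three probes with reaction times 0.55 s, 0.82 s, 0.70 s -> [0, 1, 0]
print(label_reaction_times([0.0, 10.0, 20.0], [0.55, 10.82, 20.70]))
```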
And step 3: dividing the EEG, ECG, EEG and facial image signals of each experiment according to a time window of 10 seconds, and respectively carrying out feature extractionTaking the corresponding reaction time as a label, and forming a labeled data set D by the data and the labels of all the volunteerss={x1,x2,x3…xn},Y={Y1,Y2,Y3…YnIn which xiCharacteristic data representing the i volunteer, YiStatus label data representing the ith volunteer.
The feature extraction and feature selection of the invention are shown in FIG. 1. The electroencephalogram signal is first processed with wavelet-threshold denoising, the alpha, beta, and theta waves are then obtained by wavelet decomposition, the energy E_α, E_β, E_θ and sample entropy SE_α, SE_β, SE_θ of each frequency band are calculated, and the ratios F_θ/β and F_(θ+α)/β are calculated as features:

F_θ/β = E_θ / E_β,   F_(θ+α)/β = (E_θ + E_α) / E_β

where E_α, E_β, and E_θ respectively represent the energy of the alpha, beta, and theta waves.
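The band energies and ratio features can be sketched as follows with PyWavelets; the wavelet family ('db4'), the 6-level decomposition, the mapping of detail levels to EEG bands at 512 Hz, and the sample-entropy defaults (m = 2, r = 0.2·std) are assumptions not fixed by the description, and the wavelet-threshold denoising step is omitted:

```python
import numpy as np
import pywt

def band_energies(eeg):
    """Wavelet-decompose one EEG channel sampled at 512 Hz and return
    (E_theta, E_alpha, E_beta). With a 6-level decomposition the detail
    bands are roughly cD6: 4-8 Hz (theta), cD5: 8-16 Hz (~alpha),
    cD4: 16-32 Hz (~beta)."""
    coeffs = pywt.wavedec(eeg, 'db4', level=6)  # [cA6, cD6, cD5, ..., cD1]
    e_theta = np.sum(coeffs[1] ** 2)
    e_alpha = np.sum(coeffs[2] ** 2)
    e_beta = np.sum(coeffs[3] ** 2)
    return e_theta, e_alpha, e_beta

def sample_entropy(x, m=2, r_factor=0.2):
    """Plain O(N^2) sample entropy with common defaults (m=2, r=0.2*std)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def pair_count(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)  # Chebyshev
        return (np.sum(d <= r) - len(t)) / 2  # matched pairs, no self-matches

    b, a = pair_count(m), pair_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def eeg_ratio_features(eeg):
    e_t, e_a, e_b = band_energies(eeg)
    return {'F_theta/beta': e_t / e_b,
            'F_(theta+alpha)/beta': (e_t + e_a) / e_b}
```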
The electrocardiogram signal is processed by marking the R-wave peak points and then calculating the R-R intervals:

RR_i = R_{i+1} - R_i

where R_{i+1} and R_i are the timestamps of the (i+1)-th and i-th R-wave peaks, respectively.
The following features are calculated from the R-R intervals: the mean R-R interval

RR_mean = (1/N) Σ_{i=1}^{N} RR_i,

the standard deviation of the R-R intervals

SDNN = sqrt( (1/N) Σ_{i=1}^{N} (RR_i - RR_mean)² ),

and the proportion of successive R-R interval differences greater than 50 ms to the total number of R-R intervals (PNN50).
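A compact sketch of these heart-rate-variability features, assuming the R-peak timestamps (in seconds) have already been detected upstream (the description does not specify the peak detector):

```python
import numpy as np

def rr_features(r_peak_times):
    """Mean RR, SDNN, and pNN50 from R-peak timestamps in seconds.
    pNN50 follows the standard definition: the share of successive
    R-R interval differences exceeding 50 ms."""
    rr = np.diff(np.asarray(r_peak_times))   # RR_i = R_{i+1} - R_i
    return {
        'mean_rr': rr.mean(),                # mean R-R interval
        'sdnn': rr.std(),                    # standard deviation of RR
        'pnn50': np.mean(np.abs(np.diff(rr)) > 0.050),
    }

# Example: print(rr_features([0.0, 0.8, 1.62, 2.40, 3.25]))
```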
The electrooculogram signal is processed by first searching for the peak and the left and right zero points of each blink, then calculating the eye-closing and eye-opening durations of each blink, the average blink duration and blink frequency within the time window, and the combined feature PAVR:

PAVR = A_max / T_blink

where A_max is the maximum amplitude of the electrooculogram signal during each blink and T_blink is the corresponding blink duration.
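A rough sketch of the blink features; the threshold-crossing blink detector below is an illustrative stand-in for the peak and zero-point search described above, and it assumes a clean vertical-EOG channel whose window starts and ends with the eyes open:

```python
import numpy as np

def blink_features(eog, fs=512):
    """Per-window blink features from one vertical-EOG channel."""
    x = np.asarray(eog, dtype=float) - np.median(eog)
    above = x > 0.5 * x.max()                  # assumed blink threshold
    edges = np.flatnonzero(np.diff(above.astype(int)))
    starts, ends = edges[::2], edges[1::2]
    n = min(len(starts), len(ends))            # drop any unpaired edge
    starts, ends = starts[:n], ends[:n]
    if n == 0:
        return {'mean_blink_dur': 0.0, 'blink_freq': 0.0, 'pavr': 0.0}
    durations = (ends - starts) / fs           # blink durations, seconds
    amplitudes = np.array([x[s:e].max() for s, e in zip(starts, ends)])
    return {
        'mean_blink_dur': durations.mean(),
        'blink_freq': n / (len(x) / fs),       # blinks per second
        'pavr': (amplitudes / durations).mean(),  # A_max / T_blink per blink
    }
```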
For the facial image signal, the human eyes are marked with the facial feature-point localization model CLM; the eye contour is marked with 6 points as shown in FIG. 2, 2 at the eye corners, 2 on the upper eyelid, and 2 on the lower eyelid, and the upper-lower eyelid distance d is calculated. From d the feature PERCLOS is obtained:

PERCLOS = (time during which d is below 30% of the eyes-open eyelid distance) / (total time-window length)
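PERCLOS itself reduces to a one-liner once the per-frame eyelid distances d are available from the CLM landmarks; d_open, the fully-open eyelid distance, is assumed to be calibrated per subject:

```python
import numpy as np

def perclos(eyelid_dist, d_open):
    """Fraction of frames in the window with d < 30% of the eyes-open value."""
    return np.mean(np.asarray(eyelid_dist) < 0.30 * d_open)

# e.g. a 10 s window at 30 fps gives 300 eyelid-distance samples
```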
The extracted electroencephalogram, electrocardiogram, electrooculogram, and facial image features are packaged into the model input data D_s = {x_1, x_2, x_3, …, x_n}.
And 4, step 4: when the model is used, a driver firstly carries out data pre-collection and feature extraction to obtain driver feature data xtqAnd volunteer status label Y, will xtqAnd D in step 3sAnd respectively solving the maximum average value difference of the data of each volunteer in the data set, and screening out m volunteers with the minimum maximum average value difference with the physiological data of the driver.
In step 4, the maximum mean discrepancy MMD between the pre-collected driver data x_tq = {z_1, z_2, z_3, …, z_m} and the volunteer input feature data D_s = {x_1, x_2, x_3, …, x_n} collected in the previous experiments is calculated for each volunteer as:

MMD(Z, X) = || (1/|Z|) Σ_{z∈Z} φ(z) - (1/|X|) Σ_{x∈X} φ(x) ||_H

evaluated with Z = x_tq and X equal to each volunteer's data in turn,
where φ: χ → H denotes the mapping from the original feature space to the Hilbert space, and K(x, x') is the kernel that realizes this space mapping; the Gaussian kernel used in the invention is:

K(x, x') = exp( -||x - x'||² / (2σ²) )
where σ is a hyperparameter of the kernel function that defines the characteristic length scale on which similarity between samples is measured, i.e., the scale relating distances between samples before and after the feature-space mapping, and x and x' are the inputs of the space-mapping kernel function.
The resulting MMD values are sorted, and the m volunteers with the smallest MMD values, i.e., with the smallest difference from the distribution of the driver's physiological data, are screened out.
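A sketch of this screening step, with the squared MMD expanded through the kernel trick; the kernel bandwidth sigma and the value of m are illustrative:

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """K(x, x') = exp(-||x - x'||^2 / (2 sigma^2)) for all row pairs."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(x, z, sigma=1.0):
    """Squared MMD between driver features x and one volunteer's features z:
    mean K(x,x) + mean K(z,z) - 2 mean K(x,z)."""
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(z, z, sigma).mean()
            - 2 * gaussian_kernel(x, z, sigma).mean())

def select_volunteers(x_tq, volunteer_data, m=5, sigma=1.0):
    """Indices of the m volunteers whose data has the smallest MMD
    to the driver's pre-collected features x_tq."""
    scores = [mmd2(x_tq, z, sigma) for z in volunteer_data]
    return np.argsort(scores)[:m]
```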
And 5: using the labeled physiological data of m volunteers obtained in step 4 and x of step 4tqRespectively training 4 migration learning models TLDA based on a depth self-encoder according to the electroencephalogram data, the electrocardio data, the electrooculogram data and the face image number of each volunteer, training m multiplied by 4 TLDA models in total, and obtaining the evaluation result P (y) of each TLDA model on the fatigue state of the driverij),P(yij) The TLDA model representing the jth sensor data of the ith volunteer outputs the probability that the result is fatigue.
In step 5, so that the model has a certain robustness and can still output results after a sensor fails, 4 TLDA transfer learning models are trained for each volunteer, one each on the electroencephalogram, electrocardiogram, electrooculogram, and facial image data.
The TLDA model is trained by feeding the screened volunteer data (the source domain) and the driver data (the target domain) jointly into a deep autoencoder, with the KL divergence between the source-domain and target-domain data in the hidden-layer feature space added to the autoencoder's optimization objective, so that the encoding and decoding process yields more abstract, higher-level features while the difference between the data distributions of the two domains is reduced.
The output layer of the TLDA models uses softmax regression, so each TLDA model's evaluation P(y_ij) of the driver's fatigue state, i.e., the output of the TLDA model for the j-th sensor data of the i-th volunteer, is the probability of judging fatigue.
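A minimal PyTorch sketch of such a TLDA-style model, under stated assumptions: a single sigmoid encoding layer, an MSE reconstruction loss on both domains, cross-entropy on the labeled source (volunteer) data only, and a symmetric KL term between the mean hidden activations of the two domains standing in for the hidden-layer distribution matching; the layer sizes and loss weights are not from the patent:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TLDA(nn.Module):
    def __init__(self, n_in, n_hidden=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.dec = nn.Linear(n_hidden, n_in)
        self.clf = nn.Linear(n_hidden, 2)     # softmax output: awake/fatigue

    def forward(self, x):
        h = self.enc(x)
        return h, self.dec(h), self.clf(h)

def tlda_loss(model, xs, ys, xt, alpha=1.0, beta=0.1):
    hs, recon_s, logits = model(xs)           # source (volunteer) batch
    ht, recon_t, _ = model(xt)                # target (driver) batch
    recon = F.mse_loss(recon_s, xs) + F.mse_loss(recon_t, xt)
    clf = F.cross_entropy(logits, ys)         # labels exist only for source
    ps = hs.mean(0) / hs.mean(0).sum()        # mean hidden activations,
    pt = ht.mean(0) / ht.mean(0).sum()        # normalized per domain
    kl = (ps * (ps / pt).log()).sum() + (pt * (pt / ps).log()).sum()
    return recon + alpha * clf + beta * kl

# Training loop sketch: loss = tlda_loss(model, xs, ys, xt); loss.backward()
```

After training, P(y_ij) is read off the softmax output, e.g. torch.softmax(logits, dim=1)[:, 1] for the fatigue probability.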
Step 6, outputting the final evaluation result by fusing the evaluation results of the electroencephalogram, electrocardio, electrooculogram and facial image models of each volunteer in the step 5 by using decision level
Figure BDA0002841064010000062
And the conditional probability P (y) is countedi|Y)。
The flow of step 6 is shown in FIG. 3. To ensure robustness across the 4 sensors during operation, i.e., so that the model can still carry out fatigue detection normally when one sensor fails, the invention uses a weighted average method to integrate the outputs of each volunteer's 4 models (electroencephalogram, electrocardiogram, electrooculogram, and facial image) into the volunteer-level judgment result

P(y_i) = (1/4) Σ_{j=1}^{4} P(y_ij)

and counts the conditional probability P(y_i|Y), where P(y_i|Y) represents the probability that the i-th volunteer's model outputs fatigue given that the true label is fatigue or non-fatigue.
And 7: final output assessment result P (y) for each volunteer using step 6i) And conditional probability P (y)iY) calculating the final evaluation result
Figure BDA0002841064010000071
In step 7, a posterior probability can be obtained using the Bayesian formula because the final output evaluation results of each volunteer are independent of each other
Figure BDA0002841064010000072
The posterior probability was used as the final evaluation result Y' fusing the m volunteer models.
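The two fusion stages can be sketched together as follows; the uniform prior P(Y = fatigue) = 0.5, the 0.5 threshold on each volunteer-level output, and the handling of p_cond as a table estimated on held-out labeled data are all assumptions:

```python
import numpy as np

def fuse(p_sensor, p_cond, prior=0.5):
    """Decision-level fusion of m volunteer models.

    p_sensor: (m, 4) array of P(y_ij) from each volunteer's 4 sensor models
              (a failed sensor's column can simply be masked from the mean);
    p_cond:   (m, 2) array, P(model i outputs fatigue | Y) for Y in
              {awake, fatigue};
    returns the posterior P(Y = fatigue | y_1, ..., y_m).
    """
    p_i = p_sensor.mean(axis=1)               # step 6: weighted average
    y_i = p_i > 0.5                           # each model's fatigue verdict
    like = np.where(y_i[:, None], p_cond, 1 - p_cond)  # P(y_i | Y)
    joint = np.array([1 - prior, prior]) * like.prod(axis=0)
    return joint[1] / joint.sum()             # Bayes posterior, step 7
```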
The invention improves on existing fatigue detection methods based on a single physiological feature: it acquires the electroencephalogram, electrocardiogram, and electrooculogram signals closest to the essence of the fatigue state and fuses them with facial image features, further improving the recognition rate; it trains models separately on the 4 kinds of sensor data and fuses them at the decision level with a weighted average method, so the method retains a certain robustness when a sensor fails. At the same time, the invention introduces a transfer learning strategy, reducing the influence of individual differences between drivers on the stability of the fatigue detection model.
The above is only a preferred embodiment of the present invention and is not intended to limit the invention; all equivalent variations made using the contents of the specification and drawings fall within the protection scope of the invention.

Claims (6)

1. A multi-feature fusion fatigue detection method based on transfer learning comprises the following steps:
step 1: selecting a plurality of volunteers;
step 2: carrying out laboratory simulated driving with each volunteer, acquiring real-time electroencephalogram, electrocardiogram, electrooculogram, and facial image signals, and carrying out a reaction-time test on each volunteer at intervals to finish data acquisition;
step 3: dividing the electroencephalogram, electrocardiogram, electrooculogram, and facial image signals of each volunteer according to time windows, respectively extracting features, setting labels according to the corresponding reaction times, and forming a labeled data set D_s = {x_1, x_2, x_3, …, x_n}, Y = {Y_1, Y_2, Y_3, …, Y_n} from the data and labels of all volunteers, where x_i represents the feature data of the i-th volunteer and Y_i the state-label data of the i-th volunteer;
step 4: performing data pre-collection and feature extraction on the driver to obtain driver feature data x_tq and volunteer status labels Y, computing the maximum mean discrepancy between x_tq and the data of each volunteer in the data set D_s of step 3, and screening out the m volunteers whose maximum mean discrepancy from the driver's physiological data is smallest;
step 5: using the labeled physiological data of the m volunteers obtained in step 4 and the driver feature data x_tq of step 4 to respectively train 4 transfer learning models based on deep autoencoders (TLDA) on each volunteer's electroencephalogram, electrocardiogram, electrooculogram, and facial image data, training m × 4 TLDA models in total, inputting the driver's feature data into the trained models, and obtaining each TLDA model's evaluation result P(y_ij) for the driver's fatigue state, where P(y_ij) represents the probability that the TLDA model for the j-th sensor data of the i-th volunteer outputs fatigue;
step 6: integrating, by a weighted average method, the outputs of the electroencephalogram, electrocardiogram, electrooculogram, and facial image models of each volunteer in step 5 to obtain the per-volunteer evaluation result

P(y_i) = (1/4) Σ_{j=1}^{4} P(y_ij)

and counting the conditional probability P(y_i|Y), where P(y_i|Y) represents the probability that the i-th TLDA model outputs fatigue given that the true label is fatigue or non-fatigue;

step 7: using the evaluation results P(y_i) of each volunteer model from step 6 and the conditional probabilities P(y_i|Y) to calculate the final evaluation result

Y' = P(Y | y_1, …, y_m) = P(Y) Π_{i=1}^{m} P(y_i|Y) / Σ_Y P(Y) Π_{i=1}^{m} P(y_i|Y).
2. The multi-feature fusion fatigue detection method based on transfer learning according to claim 1, wherein in step 2 the electroencephalogram, electrocardiogram, and electrooculogram signals are collected at a sampling frequency of 512 Hz, and a facial video of the subject is recorded at 30 fps.
3. The multi-feature fusion fatigue detection method based on transfer learning according to claim 1, wherein in step 3 the electroencephalogram signal is processed as follows: wavelet-threshold denoising is applied first, the alpha, beta, and theta waves are then obtained by wavelet decomposition, and the energy and sample entropy of each frequency band, together with combinations of them, are calculated as the electroencephalogram features.
4. The multi-feature fusion fatigue detection method based on transfer learning according to claim 1, wherein the electrocardiogram signal is processed as follows: the R-wave peak points are marked first and the R-R intervals are calculated; the mean R-R interval, the standard deviation of the R-R intervals, and the proportion of successive R-R interval differences greater than 50 ms to the total number of R-R intervals (PNN50) are then calculated as the electrocardiogram features.
5. The multi-feature fusion fatigue detection method based on transfer learning according to claim 1, wherein the electrooculogram signal is processed as follows: the peak and the left and right zero points of each blink are located first; the eye-closing and eye-opening durations of each blink, the average blink duration and blink frequency within the time window, and the combined feature PAVR, i.e., the ratio of the maximum amplitude of the electrooculogram signal during each blink to the blink duration, are then calculated.
6. The multi-feature fusion fatigue detection method based on transfer learning according to claim 1, wherein the facial image is processed as follows: a CLM localization model is used to mark the human eyes and obtain the upper-lower eyelid distance, from which the eye feature PERCLOS is calculated, i.e., the ratio of the time during which the eyelid distance is less than 30% of its eyes-open value to the total time-window length, with the corresponding reaction time used as the label.
CN202011492334.7A 2020-12-17 2020-12-17 Multi-feature fusion fatigue detection method based on transfer learning Active CN112617835B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011492334.7A CN112617835B (en) 2020-12-17 2020-12-17 Multi-feature fusion fatigue detection method based on transfer learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011492334.7A CN112617835B (en) 2020-12-17 2020-12-17 Multi-feature fusion fatigue detection method based on transfer learning

Publications (2)

Publication Number Publication Date
CN112617835A true CN112617835A (en) 2021-04-09
CN112617835B CN112617835B (en) 2022-12-13

Family

ID=75316231

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011492334.7A Active CN112617835B (en) 2020-12-17 2020-12-17 Multi-feature fusion fatigue detection method based on transfer learning

Country Status (1)

Country Link
CN (1) CN112617835B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114224344A (en) * 2021-12-31 2022-03-25 杭州电子科技大学 Fatigue state real-time detection system based on EEG and transfer learning
CN114343661A (en) * 2022-03-07 2022-04-15 西南交通大学 Method, device and equipment for estimating reaction time of high-speed rail driver and readable storage medium
CN117079255A (en) * 2023-10-17 2023-11-17 江西开放大学 Fatigue driving detection method based on face recognition and voice interaction
CN117290781A (en) * 2023-10-24 2023-12-26 中汽研汽车检验中心(宁波)有限公司 Driver KSS grade self-evaluation training method for DDAW system test
CN117636488A (en) * 2023-11-17 2024-03-01 中国科学院自动化研究所 Multi-mode fusion learning ability assessment method and device and electronic equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110772268A (en) * 2019-11-01 2020-02-11 哈尔滨理工大学 Multimode electroencephalogram signal and 1DCNN migration driving fatigue state identification method
WO2020226696A1 (en) * 2019-12-05 2020-11-12 Huawei Technologies Co. Ltd. System and method of generating a video dataset with varying fatigue levels by transfer learning

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110772268A (en) * 2019-11-01 2020-02-11 哈尔滨理工大学 Multimode electroencephalogram signal and 1DCNN migration driving fatigue state identification method
WO2020226696A1 (en) * 2019-12-05 2020-11-12 Huawei Technologies Co. Ltd. System and method of generating a video dataset with varying fatigue levels by transfer learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LAN-LAN CHEN 等: "Cross-subject driver status detection from physiological signals based on hybrid feature selection and transfer learning", 《EXPERT SYSTEMS WITH APPLICATIONS》 *
RITA CHATTOPADHYAY: "Multisource domain adaptation and its application to early detection of fatigue", 《ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA》 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114224344A (en) * 2021-12-31 2022-03-25 杭州电子科技大学 Fatigue state real-time detection system based on EEG and transfer learning
CN114224344B (en) * 2021-12-31 2024-05-07 杭州电子科技大学 Fatigue state real-time detection system based on EEG and migration learning
CN114343661A (en) * 2022-03-07 2022-04-15 西南交通大学 Method, device and equipment for estimating reaction time of high-speed rail driver and readable storage medium
CN114343661B (en) * 2022-03-07 2022-05-27 西南交通大学 Method, device and equipment for estimating reaction time of driver in high-speed rail and readable storage medium
CN117079255A (en) * 2023-10-17 2023-11-17 江西开放大学 Fatigue driving detection method based on face recognition and voice interaction
CN117079255B (en) * 2023-10-17 2024-01-05 江西开放大学 Fatigue driving detection method based on face recognition and voice interaction
CN117290781A (en) * 2023-10-24 2023-12-26 中汽研汽车检验中心(宁波)有限公司 Driver KSS grade self-evaluation training method for DDAW system test
CN117636488A (en) * 2023-11-17 2024-03-01 中国科学院自动化研究所 Multi-mode fusion learning ability assessment method and device and electronic equipment

Also Published As

Publication number Publication date
CN112617835B (en) 2022-12-13

Similar Documents

Publication Publication Date Title
CN112617835B (en) Multi-feature fusion fatigue detection method based on transfer learning
Picot et al. Drowsiness detection based on visual signs: blinking analysis based on high frame rate video
Ueno et al. Development of drowsiness detection system
CN112241658B (en) Fatigue driving early warning method based on depth camera
Friedrichs et al. Camera-based drowsiness reference for driver state classification under real driving conditions
Bamidele et al. Non-intrusive driver drowsiness detection based on face and eye tracking
CN109460703B (en) Non-invasive fatigue driving identification method based on heart rate and facial features
CN110859609B (en) Multi-feature fusion fatigue driving detection method based on voice analysis
CN111753674A (en) Fatigue driving detection and identification method based on deep learning
CN113743471B (en) Driving evaluation method and system
Bittner et al. Detecting of fatigue states of a car driver
CN112884063B (en) P300 signal detection and identification method based on multi-element space-time convolution neural network
CN114358194A (en) Gesture tracking based detection method for abnormal limb behaviors of autism spectrum disorder
Liu et al. A review of driver fatigue detection: Progress and prospect
Singh et al. Physical and physiological drowsiness detection methods
CN110097012B (en) Fatigue detection method for monitoring eye movement parameters based on N-range image processing algorithm
Dehzangi et al. Unobtrusive driver drowsiness prediction using driving behavior from vehicular sensors
Ukwuoma et al. Deep learning review on drivers drowsiness detection
CN117272155A (en) Intelligent watch-based driver road anger disease detection method
Chen et al. Deep learning approach for detection of unfavorable driving state based on multiple phase synchronization between multi-channel EEG signals
CN111281382A (en) Feature extraction and classification method based on electroencephalogram signals
Haupt et al. Steering wheel motion analysis for detection of the driver’s drowsiness
CN115736920A (en) Depression state identification method and system based on bimodal fusion
CN112438741B (en) Driving state detection method and system based on electroencephalogram feature transfer learning
Ma et al. Driver Drowsiness Detection Based On ResNet-18 And Transfer Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant