CN108309311A - Real-time drowsiness detection device and detection algorithm for train drivers - Google Patents

Real-time drowsiness detection device and detection algorithm for train drivers

Info

Publication number
CN108309311A
Authority
CN
China
Prior art keywords
driver
detection
eyes
image
doze
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810257717.2A
Other languages
Chinese (zh)
Inventor
黄晋
张恩德
胡志坤
白云仁
胡昱坤
刘尧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Hua Longitudinal Science And Technology Co Ltd
Tsinghua University
Original Assignee
Beijing Hua Longitudinal Science And Technology Co Ltd
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Hua Longitudinal Science And Technology Co Ltd and Tsinghua University
Priority to CN201810257717.2A priority Critical patent/CN108309311A/en
Publication of CN108309311A publication Critical patent/CN108309311A/en
Pending legal-status Critical Current

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/103: Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11: Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1103: Detecting eye twinkling
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/16: Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/168: Evaluating attention deficit, hyperactivity
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/16: Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/18: Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state for vehicle drivers or machine operators
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235: Details of waveform analysis
    • A61B5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/74: Details of notification to user or communication with user or patient; user input means
    • A61B5/746: Alarms related to a physiological condition, e.g. details of setting alarm thresholds or avoiding false alarms
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597: Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06V40/165: Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G06V40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2503/00: Evaluating a particular growth phase or type of persons or animals
    • A61B2503/20: Workers
    • A61B2503/22: Motor vehicles operators, e.g. drivers, pilots, captains
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/467: Encoded features or binary features, e.g. local binary patterns [LBP]

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • Developmental Disabilities (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Educational Technology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Social Psychology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Signal Processing (AREA)
  • Mathematical Physics (AREA)
  • Fuzzy Systems (AREA)
  • Geometry (AREA)

Abstract

The present invention provides a deep-learning-based drowsiness detection algorithm for train drivers, comprising the following steps. Step 1: acquire a frame of the driver's face image from the camera and obtain the face region by extracting local binary pattern (LBP) features. Step 2: locate the facial key points, including the positions of the eyes, nose and mouth, using a method that combines random forests with global linear regression. Step 3: input the extracted eye-region image into a trained convolutional neural network model for classification to obtain the eye state. Step 4: compute the degree of fatigue from the open/closed state of the eyes using the P80 criterion of PERCLOS combined with the blink frequency. Step 5: trigger an alarm once a fatigue state is detected, so as to warn the drowsy driver. The real-time train driver drowsiness detection device and detection algorithm of the present invention feature fast detection, high judgment accuracy and strong stability.

Description

Real-time drowsiness detection device and detection algorithm for train drivers
Technical field
The present invention relates to train driving technology, and more particularly to a real-time drowsiness detection device and detection algorithm for train drivers.
Background art
As a common means of transport, railways are characterized by large freight volume, high speed and long transportation distances. With the rapid growth of the number of railway lines and the transportation distances between regions, fatigue detection for train drivers is of great significance for improving the safety of long-duration driving.
At present, research methods in the field of fatigue detection fall into two broad classes: one detects fatigue directly from the driver's own characteristics, while the other infers fatigue indirectly from the behaviour of the vehicle. Vehicle-behaviour-based methods judge fatigue indirectly from how the driver operates the vehicle, combining driving-state indicators such as lane tracking and the distance to the vehicle ahead. Because driving behaviour varies greatly between drivers and driving environments (lighting, road surface and so on) differ widely, the measured information is unreliable.
Most current fatigue detection methods are driver-based. A camera captures video of the driver's face, the images are analysed, and features such as head pose and eye opening/closing frequency are computed to infer the driver's degree of fatigue. This approach is non-contact, requires no special sensor and is low-cost, so it has attracted wide attention. Most existing video-based fatigue detection systems are built with OpenCV: an AdaBoost algorithm detects the driver's face, the vertical gradient matrix of the face region is computed and projected horizontally, the relative position of the eyes in the image is obtained from the structural features of the face, and the open/closed state of the eyes is determined from distances. Parameters of each eye state are then computed according to the PERCLOS measurement principle, and whether the driver is in a fatigued driving state is judged by comparing each index against its threshold. The detection accuracy of this method drops when the illumination changes, when the driver wears glasses, or when the head deflection angle changes. For trains in particular, the driver's activity space is large, the head pose varies greatly, and the driver may even leave the seat. If existing fatigue detection technology is applied directly, conventional eye-localization techniques struggle with faces at large angles and often mislocate the eyes; the accuracy actually measured on board is therefore very low and falls short of the minimum requirement for practical application.
Summary of the invention
The object of the present invention is to provide a real-time drowsiness detection device and detection algorithm for train drivers that is fast in detection, high in judgment accuracy and strong in stability.
The technical solution of the present invention is a deep-learning-based drowsiness detection device for train drivers, comprising an image processing module, a neural network classification module and an assessment and warning module, characterized in that:
the image processing module mainly comprises a face-region detection unit and a facial key point localization unit; the face-region detection unit uses a wide-angle camera, so each video frame contains the face region as well as non-face regions; the face-region detection unit identifies the region of the video image where the face is located and marks it, for example with a rectangular box; the facial key point localization unit detects facial key points using a method that combines random forests with global linear regression;
the neural network classification module classifies the eye images produced by the above image processing module; it uses a convolutional neural network model that treats feature extraction as an adaptive, self-learning process and finds the features with the best classification performance through machine learning;
the assessment and warning module combines the classification results of the above neural network classification module with the time series to predict the train driver's driving state.
Further, the assessment and warning module comprises a drowsiness-state judgment unit and a looking-ahead judgment unit. The drowsiness-state judgment unit calculates the degree of fatigue according to the P80 criterion of PERCLOS combined with the blink frequency; the PERCLOS measurement parameter is the percentage of the unit time during which the degree of eye closure exceeds a given closure threshold. The looking-ahead judgment unit evaluates the facial deflection angle range from the extracted facial key points; if this range is exceeded, the driver is judged not to be looking ahead, and an accumulated count of such events triggers an alarm, ensuring that the driver stays alert at all times during long train-driving sessions.
Further, the present invention also provides a detection algorithm for the deep-learning-based train driver drowsiness detection device, comprising the following steps:
Step 1: acquire a frame of the driver's face image from the camera and obtain the face region by extracting local binary features with the local binary pattern extraction algorithm;
Step 2: determine the facial key points, including the positions of the eyes, nose and mouth, using a method that combines random forests with global linear regression;
Step 3: input the extracted eye-region image into a trained convolutional neural network model for classification to obtain the eye state;
Step 4: compute the degree of fatigue from the open/closed state of the eyes using the P80 criterion of PERCLOS combined with the blink frequency;
the detection window is set to 30 seconds and the PERCLOS threshold to 40%; if the threshold is exceeded, the driver is judged to be in a fatigue state;
Step 5: trigger an alarm once a fatigue state is detected, so as to warn the drowsy driver.
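The five steps above amount to a single per-frame loop. The following Python sketch illustrates only that control flow; the callables detect_face_region, locate_keypoints and classify_eye_state are hypothetical stand-ins for steps 1 to 3, and the alarm is a placeholder, so this is not the patented implementation itself.

```python
import time
from collections import deque

import cv2  # OpenCV, assumed available for camera capture

WINDOW_SECONDS = 30        # PERCLOS detection window (step 4)
PERCLOS_THRESHOLD = 0.40   # 40% closed-eye ratio triggers the fatigue judgment


def trigger_alarm():
    print("WARNING: driver appears drowsy")  # placeholder for the on-board alarm


def monitor(detect_face_region, locate_keypoints, classify_eye_state, camera_index=0):
    """Sketch of the per-frame detection loop described in steps 1-5."""
    cap = cv2.VideoCapture(camera_index)
    history = deque()  # (timestamp, eyes_closed) pairs inside the sliding window
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        face = detect_face_region(frame)                    # step 1
        if face is None:
            continue
        keypoints = locate_keypoints(frame, face)           # step 2
        eyes_closed = classify_eye_state(frame, keypoints)  # step 3
        now = time.time()
        history.append((now, eyes_closed))
        while history and now - history[0][0] > WINDOW_SECONDS:
            history.popleft()
        closed_ratio = sum(c for _, c in history) / len(history)  # step 4 (PERCLOS)
        if closed_ratio > PERCLOS_THRESHOLD:
            trigger_alarm()                                 # step 5
    cap.release()
```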
The beneficial effects of the present invention are:
(1) Fast detection. First, face-region detection uses the LBP algorithm; compared with the Haar algorithm, LBP can be computed with simple comparison operations in a small neighbourhood, so detection is faster. Second, key point localization uses the method that combines random forests with global linear regression and performs the regression on local binary index features, which is currently among the fastest facial key point localization techniques.
(2) High judgment accuracy. A convolutional neural network model trained by deep learning automatically learns eye-state features, has excellent generalization ability, and clearly improves accuracy over conventional classification methods.
(3) Strong stability. A series of factors such as illumination variation, wearing of glasses and head deflection are taken into account, so the method is more robust and its judgment accuracy reaches an industrial standard.
(4) Compared with the Haar features used in earlier schemes, the LBP feature extraction method has the following advantages: 1. no illumination normalization is required, so the image variance need not be computed and the computation cost is small; 2. the classifier file occupies little storage space, which makes it convenient to store on embedded devices; 3. the computation is simple, with no complicated division or special operations, which makes it convenient for hardware implementation; 4. compared with Haar features, the detection time of this feature is short and the real-time performance of detection is good. A minimal sketch of the basic LBP operator follows this list.
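To illustrate why LBP is cheap to compute, the following sketch evaluates the basic 3x3 LBP code of a single pixel using only comparisons and bit shifts, with no illumination normalization or variance computation; it is the generic textbook operator, not the classifier trained for this invention.

```python
import numpy as np


def lbp_code(gray, y, x):
    """Basic 8-neighbour LBP code of pixel (y, x): each neighbour that is at
    least as bright as the centre contributes one bit. Only comparisons and
    shifts are needed, so no normalization or variance computation is required."""
    center = gray[y, x]
    # 8 neighbours, clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if gray[y + dy, x + dx] >= center:
            code |= 1 << bit
    return code


# Example on a random 5x5 gray patch (values 0..255)
patch = np.random.randint(0, 256, (5, 5), dtype=np.uint8)
print(lbp_code(patch, 2, 2))
```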
Description of the drawings
Fig. 1 is a schematic diagram of the modules of the train driver drowsiness detection device;
Fig. 2 is a flow diagram of the train driver drowsiness detection algorithm;
Fig. 3 is a schematic diagram of the network architecture of the drowsiness detection algorithm.
Detailed description of the embodiments
The technical scheme of the present invention is described in detail below with reference to the accompanying drawings.
As shown in Fig. 1, the present embodiment provides a deep-learning-based drowsiness detection device for train drivers, comprising an image processing module, a neural network classification module and an assessment and warning module, wherein:
The image processing module mainly comprises a face-region detection unit and a facial key point localization unit.
The monitoring video of the driver is taken as input; each frame is 600*400 pixels, and LBP feature extraction is applied to each frame to perform face detection and obtain the face region.
The face-region detection unit uses a wide-angle camera, so each video frame contains the face region as well as non-face regions. To speed up face-region detection and at the same time exclude interference from the activity of other personnel, only the middle third of the image, i.e. the area around the driver's seat, is searched, which greatly speeds up detection.
The face-region detection unit identifies the region of the video image where the face is located, generally marking it with a rectangular box or the like; the face region marked in this way is not an exact facial contour curve.
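A minimal sketch of this detection step is shown below. It uses OpenCV's stock LBP frontal-face cascade (the lbpcascade_frontalface_improved.xml file distributed with OpenCV) as a stand-in for the detector of the invention, and restricts the search to the middle third of a 600*400 frame as described above.

```python
import cv2

# Assumed to be available locally; OpenCV ships LBP cascades in its data/lbpcascades folder.
FACE_CASCADE = cv2.CascadeClassifier("lbpcascade_frontalface_improved.xml")


def detect_face_region(frame):
    """Return the largest face rectangle (x, y, w, h) found in the middle third
    of the frame, or None. Coordinates are mapped back to the full frame."""
    h, w = frame.shape[:2]          # expected 400 rows x 600 columns
    x0, x1 = w // 3, 2 * w // 3     # middle third, around the driver's seat
    roi = frame[:, x0:x1]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, fw, fh = max(faces, key=lambda f: f[2] * f[3])  # keep the largest box
    return (int(x) + x0, int(y), int(fw), int(fh))        # full-frame coordinates
```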
The facial key point localization unit detects facial key points at high speed using a method that combines random forests with global linear regression.
The localization method of the facial key point localization unit can be expressed by the following formula:
S_t = S_{t-1} + R_t(I, S_{t-1})
where S_t denotes the absolute shape, R_t denotes a regressor, and I denotes the image; R_t predicts a shape increment from the image and the current shape information and adds it to the current shape to form a new shape. t denotes the cascade level; in general the shape is predicted through a multi-level cascade.
The set of key points is called the shape; the shape contains the position information of the key points, which is usually expressed in one of two forms: the first is the position of the key points relative to the whole image, and the second is the position of the key points relative to the face box (which identifies the position of the face in the whole image). The first form is called the absolute shape, whose values generally lie between 0 and the width or height of the image; the second form is called the relative shape, whose values typically lie between 0 and 1.
The two shape forms can be converted into each other through the face box. A random forest is assigned to each key point and the outputs of the random forests form a feature; the local features output by the random forests of all key points are concatenated into what is called the local binary feature (LBF), and this LBF is then used in a global regression to predict the shape increment.
The coordinates of each key point are regressed: the input is the local binary feature matrix of the picture, the regression target is the vector formed by all the ΔS coordinates, and the result is the weight matrix W_t; W_t is the parameter of the linear regression, and λ is a model parameter that prevents over-fitting during training. For a new picture, the extracted local binary feature is multiplied by W_t to obtain the predicted ΔS, which is finally added to the shape S from the previous cascade stage to obtain the new shape S.
In the LBF algorithm, each stage of the multi-stage cascaded regression is split into two parts as described above: the local binary feature is first extracted with the random forests, and this local binary feature is then used in a global linear regression to predict the shape increment ΔS.
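The following sketch shows one stage of this cascaded regression under simplifying assumptions: the random-forest outputs are replaced by an arbitrary binary feature matrix, and the global linear regression is solved as a ridge regression with the regularization weight lambda mentioned above. It illustrates the training of W_t and the update S_t = S_{t-1} + (local binary feature) * W_t, not the trained model of the invention.

```python
import numpy as np


def train_stage(features, targets, lam=1.0):
    """Global linear regression of one cascade stage.
    features: (n_samples, d) local binary features of each training image
    targets:  (n_samples, 2k) shape increments Delta S (k key points)
    Returns the ridge-regularized weight matrix W_t (lambda prevents over-fitting)."""
    d = features.shape[1]
    return np.linalg.solve(features.T @ features + lam * np.eye(d),
                           features.T @ targets)


def apply_stage(shape_prev, feature, W_t):
    """S_t = S_{t-1} + Delta S, where Delta S is predicted from the binary feature."""
    return shape_prev + feature @ W_t


# Toy example: 100 samples, 512-dim binary features, 68 key points (136 coordinates)
rng = np.random.default_rng(0)
phi = rng.integers(0, 2, (100, 512)).astype(float)   # stand-in binary features
delta_s = rng.normal(size=(100, 136))                # stand-in shape increments
W = train_stage(phi, delta_s, lam=10.0)
s_new = apply_stage(np.zeros(136), phi[0], W)
print(s_new.shape)                                   # (136,)
```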
The neural network classification module classifies the eye images generated by the image processing module. In earlier classification models, features were usually extracted in advance: after many features were extracted, correlation analysis was carried out to find the features that best represent the selected target and to remove features that are irrelevant to classification or correlated with each other. However, such feature extraction depends heavily on human experience and subjective judgment; different extracted features greatly affect classification performance, and even the order of feature extraction can influence the final classification result. The quality of image preprocessing also affects the extracted features.
The present embodiment uses a convolutional neural network model that treats feature extraction as an adaptive, self-learning process and finds the features with the best classification performance through machine learning.
Each hidden unit of the convolutional network extracts local features of the eye image from the previous layer and maps them to a feature plane; the feature-mapping function uses the sigmoid function as the activation function of the convolutional network, so that the feature maps are shift-invariant. Each neuron is connected to a local receptive field of the previous layer, and neurons in the same feature plane share weights, giving a degree of invariance to shift and rotation. Each feature-extraction (convolution) layer is followed by a down-sampling layer that computes local averages and performs a secondary extraction. This structure of two-stage feature extraction gives the network a high tolerance to distortions of the input samples. The convolutional neural network classifier ensures robustness to image shift, scaling and distortion through local receptive fields, shared weights and subsampling.
The assessment and warning module combines the classification results of the neural network classification module with the time series to predict the train driver's driving state.
The assessment and warning module comprises a drowsiness-state judgment unit and a looking-ahead judgment unit.
The drowsiness-state judgment unit calculates the degree of fatigue according to the P80 criterion of PERCLOS combined with the blink frequency.
The PERCLOS measurement parameter is the percentage of the unit time during which the degree of eye closure exceeds a given closure threshold. Under the P80 standard of the PERCLOS method, the eyes are counted as closed when the eyelid covers more than 80% of the pupil; over a given period, the proportion of eye images that the above neural network classification module classifies as closed out of the total number is computed. The alarm is also triggered if the driver blinks very frequently or if the blink frequency falls below a certain threshold.
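A sketch of this PERCLOS bookkeeping over a sliding window is given below. The 30-second window and 40% threshold follow the values used elsewhere in this embodiment; the blink-frequency bounds are illustrative assumptions, since the patent does not give concrete numbers for them.

```python
from collections import deque


class PerclosMonitor:
    """Sliding-window PERCLOS (P80) bookkeeping. A frame counts as 'closed'
    when the CNN classifies the eye image as closed (eyelid covering more
    than 80% of the pupil)."""

    def __init__(self, window_s=30.0, perclos_threshold=0.40,
                 min_blinks_per_min=2, max_blinks_per_min=40):  # blink bounds are illustrative
        self.window_s = window_s
        self.perclos_threshold = perclos_threshold
        self.min_blinks = min_blinks_per_min
        self.max_blinks = max_blinks_per_min
        self.frames = deque()   # (timestamp, closed: bool)

    def update(self, timestamp, closed):
        """Record one frame's classification and return the fatigue judgment."""
        self.frames.append((timestamp, closed))
        while self.frames and timestamp - self.frames[0][0] > self.window_s:
            self.frames.popleft()
        return self.is_fatigued()

    def is_fatigued(self):
        if not self.frames:
            return False
        closed_flags = [c for _, c in self.frames]
        perclos = sum(closed_flags) / len(closed_flags)
        # A blink is counted on every open -> closed transition
        blinks = sum(1 for prev, cur in zip(closed_flags, closed_flags[1:])
                     if cur and not prev)
        blinks_per_min = blinks * 60.0 / self.window_s
        abnormal_blinking = (blinks_per_min < self.min_blinks or
                             blinks_per_min > self.max_blinks)
        return perclos > self.perclos_threshold or abnormal_blinking
```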
The looking-ahead judgment unit evaluates the facial deflection angle range from the facial key points extracted in module one; if this range is exceeded, the driver is judged not to be looking ahead, and an accumulated count of such events triggers an alarm, ensuring that the driver stays alert and keeps watching the situation ahead throughout long train-driving sessions.
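One possible way to approximate the facial deflection from the key points is sketched below, using the horizontal offset of the nose tip from the midpoint of the two eye centres; this particular formula and the 25-degree threshold are assumptions made for illustration, not values taken from the patent.

```python
import math


def yaw_estimate(left_eye, right_eye, nose_tip):
    """Rough yaw angle (degrees) from three key points given as (x, y) tuples.
    0 means the nose tip is centred between the eyes (roughly frontal); large
    positive or negative values indicate the head is turned away."""
    eye_mid_x = (left_eye[0] + right_eye[0]) / 2.0
    eye_dist = math.hypot(right_eye[0] - left_eye[0], right_eye[1] - left_eye[1])
    if eye_dist == 0:
        return 0.0
    # Normalized horizontal offset of the nose tip, mapped to an angle
    offset = (nose_tip[0] - eye_mid_x) / eye_dist
    return math.degrees(math.asin(max(-1.0, min(1.0, offset))))


def looking_ahead(left_eye, right_eye, nose_tip, max_abs_yaw_deg=25.0):  # threshold is illustrative
    return abs(yaw_estimate(left_eye, right_eye, nose_tip)) <= max_abs_yaw_deg
```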
As shown in Fig. 2, the present invention also provides a deep-learning-based drowsiness detection algorithm for train drivers, comprising the following steps:
Step 1: acquire a frame of the driver's face image from the camera and obtain the face region by extracting local binary features with the local binary pattern extraction algorithm.
The monitoring video of the driver is taken as input. Because a wide-angle camera is used, each frame contains the face region as well as non-face regions; to speed up face-region detection and exclude interference from the activity of other personnel, only the middle third of the image, i.e. the area around the driver's seat, is detected, which greatly speeds up detection. Each frame is 600*400 pixels, and LBP feature extraction is applied to each frame to perform face detection and obtain the face region.
Step 2: determine the facial key points, including the positions of the eyes, nose and mouth, using a method that combines random forests with global linear regression.
A random forest is assigned to each key point, and the outputs of the random forests form the local binary feature (LBF); the local features output by the random forests of all key points are concatenated and used in a global regression to predict the shape increment.
Step 3: input the extracted eye-region image into a trained convolutional neural network model for classification to obtain the eye state.
The training image data are first normalized: the image size is adjusted to 28*28 pixels and the brightness is uniformly adjusted to 160 lumens (to weaken the influence of lighting variation); the image is then fed into the previously trained model, which outputs a two-state judgment of whether the eyes are open or closed.
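A minimal sketch of this normalization is shown below. It assumes that the "160 lumens" figure refers to a target mean gray level of 160 on the 0-255 scale, which is an interpretation rather than a statement of the patent.

```python
import cv2
import numpy as np


def preprocess_eye_patch(eye_bgr, size=(28, 28), target_mean=160.0):
    """Resize an eye-region crop to 28x28 and shift its brightness so that the
    mean gray level is approximately 160, reducing the effect of lighting
    changes before the CNN classification."""
    gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, size, interpolation=cv2.INTER_AREA)
    shifted = gray.astype(np.float32) + (target_mean - float(gray.mean()))
    return np.clip(shifted, 0, 255).astype(np.uint8)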
The network model of the drowsiness detection algorithm is shown in Fig. 3. The whole network has 7 layers, comprising an input layer, two convolutional layers, two down-sampling layers and two fully connected layers (output).
The convolutional layers enhance the original signal features and reduce noise, while the down-sampling layers subsample the image by exploiting local image correlation, which reduces the amount of data to be processed while preserving useful information. The mapping from one feature plane to the next can thus be regarded as a convolution operation, and a down-sampling layer can be regarded as a blur filter that performs a second round of feature extraction. The spatial resolution decreases from hidden layer to hidden layer while the number of feature planes per layer increases, so that more feature information can be detected.
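A minimal PyTorch sketch of a network with this layer count (an input layer, two convolutional layers, two average-pooling down-sampling layers and two fully connected layers, with sigmoid activations as described above) is given below; the filter counts and the output convention are assumptions for illustration.

```python
import torch
from torch import nn


class EyeStateNet(nn.Module):
    """Seven-layer eye-state classifier: input, two convolutional layers, two
    average-pooling (down-sampling) layers, two fully connected layers.
    Filter counts (6 and 16) are assumed, not taken from the patent."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 1x28x28 -> 6x24x24
            nn.Sigmoid(),
            nn.AvgPool2d(2),                  # -> 6x12x12 (local averaging)
            nn.Conv2d(6, 16, kernel_size=5),  # -> 16x8x8
            nn.Sigmoid(),
            nn.AvgPool2d(2),                  # -> 16x4x4
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 4 * 4, 84),
            nn.Sigmoid(),
            nn.Linear(84, 2),                 # two outputs: eyes open / closed
        )

    def forward(self, x):
        return self.classifier(self.features(x))


# Example: classify one normalized 28x28 eye patch
model = EyeStateNet()
patch = torch.rand(1, 1, 28, 28)              # batch of one grayscale image
logits = model(patch)
print(logits.argmax(dim=1))                   # 0 = open, 1 = closed (assumed convention)
```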
Step 4: compute the degree of fatigue from the open/closed state of the eyes using the P80 criterion of PERCLOS combined with the blink frequency.
The detection window is set to 30 seconds and the PERCLOS threshold to 40%; if the threshold is exceeded, the driver is judged to be in a fatigue state.
Step 5: trigger an alarm once a fatigue state is detected, so as to warn the drowsy driver.
Although the present invention has been described by way of example and in terms of preferred embodiments, the embodiments are not intended to limit the invention. Any equivalent change or modification made without departing from the spirit and scope of the present invention also falls within the protection scope of the present invention. The protection scope of the present invention is therefore defined by the appended claims.

Claims (3)

1. A deep-learning-based drowsiness detection device for train drivers, comprising an image processing module, a neural network classification module and an assessment and warning module, characterized in that:
the image processing module mainly comprises a face-region detection unit and a facial key point localization unit; the face-region detection unit uses a wide-angle camera, so each video frame contains the face region as well as non-face regions; the face-region detection unit identifies the region of the video image where the face is located and marks it, for example with a rectangular box; the facial key point localization unit detects facial key points using a method that combines random forests with global linear regression;
the neural network classification module classifies the eye images produced by the above image processing module; it uses a convolutional neural network model that treats feature extraction as an adaptive, self-learning process and finds the features with the best classification performance through machine learning;
the assessment and warning module combines the classification results of the above neural network classification module with the time series to predict the train driver's driving state.
2. The deep-learning-based drowsiness detection device for train drivers according to claim 1, characterized in that: the assessment and warning module comprises a drowsiness-state judgment unit and a looking-ahead judgment unit; the drowsiness-state judgment unit calculates the degree of fatigue according to the P80 criterion of PERCLOS combined with the blink frequency; the PERCLOS measurement parameter is the percentage of the unit time during which the degree of eye closure exceeds a given closure threshold; the looking-ahead judgment unit evaluates the facial deflection angle range from the extracted facial key points; if this range is exceeded, the driver is judged not to be looking ahead, and an accumulated count of such events triggers an alarm, ensuring that the driver stays alert at all times during long train-driving sessions.
3. A detection algorithm for the deep-learning-based train driver drowsiness detection device according to claim 1, characterized by comprising the following steps:
Step 1: acquire a frame of the driver's face image from the camera and obtain the face region by extracting local binary features with the local binary pattern extraction algorithm;
Step 2: determine the facial key points, including the positions of the eyes, nose and mouth, using a method that combines random forests with global linear regression;
Step 3: input the extracted eye-region image into a trained convolutional neural network model for classification to obtain the eye state;
Step 4: compute the degree of fatigue from the open/closed state of the eyes using the P80 criterion of PERCLOS combined with the blink frequency;
the detection window is set to 30 seconds and the PERCLOS threshold to 40%; if the threshold is exceeded, the driver is judged to be in a fatigue state;
Step 5: trigger an alarm once a fatigue state is detected, so as to warn the drowsy driver.
CN201810257717.2A 2018-03-27 2018-03-27 Real-time drowsiness detection device and detection algorithm for train drivers Pending CN108309311A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810257717.2A CN108309311A (en) Real-time drowsiness detection device and detection algorithm for train drivers

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810257717.2A CN108309311A (en) Real-time drowsiness detection device and detection algorithm for train drivers

Publications (1)

Publication Number Publication Date
CN108309311A true CN108309311A (en) 2018-07-24

Family

ID=62899328

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810257717.2A Pending CN108309311A (en) Real-time drowsiness detection device and detection algorithm for train drivers

Country Status (1)

Country Link
CN (1) CN108309311A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105718913A (en) * 2016-01-26 2016-06-29 浙江捷尚视觉科技股份有限公司 Robust face characteristic point positioning method
CN107194346A (en) * 2017-05-19 2017-09-22 福建师范大学 A kind of fatigue drive of car Forecasting Methodology
CN107330378A (en) * 2017-06-09 2017-11-07 湖北天业云商网络科技有限公司 A kind of driving behavior detecting system based on embedded image processing

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109166177A (en) * 2018-08-27 2019-01-08 清华大学 Air navigation aid in a kind of art of craniomaxillofacial surgery
CN109145852A (en) * 2018-08-31 2019-01-04 辽宁工业大学 A kind of driver fatigue state recognition method for opening closed state based on eyes
CN109145852B (en) * 2018-08-31 2022-06-17 辽宁工业大学 Driver fatigue state identification method based on eye opening and closing state
US11514688B2 (en) 2018-10-22 2022-11-29 5DT, Inc. Drowsiness detection system
WO2020084469A1 (en) * 2018-10-22 2020-04-30 5Dt, Inc A drowsiness detection system
CN109583338A (en) * 2018-11-19 2019-04-05 山东派蒙机电技术有限公司 Driver Vision decentralized detection method based on depth integration neural network
CN109466586A (en) * 2018-11-23 2019-03-15 周家鸿 A kind of city rail electric train safe operation means of defence, system and device
CN109543627A (en) * 2018-11-27 2019-03-29 西安电子科技大学 A kind of method, apparatus and computer equipment judging driving behavior classification
CN109543627B (en) * 2018-11-27 2023-08-01 西安电子科技大学 Method and device for judging driving behavior category and computer equipment
CN109859085A (en) * 2018-12-25 2019-06-07 深圳市天彦通信股份有限公司 Safe early warning method and Related product
CN109770925A (en) * 2019-02-03 2019-05-21 闽江学院 A kind of fatigue detection method based on depth time-space network
CN109948509A (en) * 2019-03-11 2019-06-28 成都旷视金智科技有限公司 Obj State monitoring method, device and electronic equipment
CN110119672A (en) * 2019-03-26 2019-08-13 湖北大学 A kind of embedded fatigue state detection system and method
CN110119676B (en) * 2019-03-28 2023-02-03 广东工业大学 Driver fatigue detection method based on neural network
CN110119676A (en) * 2019-03-28 2019-08-13 广东工业大学 A kind of Driver Fatigue Detection neural network based
CN110063736B (en) * 2019-05-06 2022-03-08 苏州国科视清医疗科技有限公司 Eye movement parameter monitoring fatigue detection and wake-up promotion system based on MOD-Net network
CN110063736A (en) * 2019-05-06 2019-07-30 苏州国科视清医疗科技有限公司 The awake system of fatigue detecting and rush of eye movement parameter monitoring based on MOD-Net network
CN110119714B (en) * 2019-05-14 2022-02-25 山东浪潮科学研究院有限公司 Driver fatigue detection method and device based on convolutional neural network
CN110119714A (en) * 2019-05-14 2019-08-13 济南浪潮高新科技投资发展有限公司 A kind of Driver Fatigue Detection and device based on convolutional neural networks
CN110263641A (en) * 2019-05-17 2019-09-20 成都旷视金智科技有限公司 Fatigue detection method, device and readable storage medium storing program for executing
CN110287795A (en) * 2019-05-24 2019-09-27 北京爱诺斯科技有限公司 A kind of eye age detection method based on image analysis
CN110288567A (en) * 2019-05-24 2019-09-27 北京爱诺斯科技有限公司 A kind of image analysis method for eye
CN110188655A (en) * 2019-05-27 2019-08-30 上海蔚来汽车有限公司 Driving condition evaluation method, system and computer storage medium
CN110298257B (en) * 2019-06-04 2023-08-01 东南大学 Driver behavior recognition method based on human body multi-part characteristics
CN110298257A (en) * 2019-06-04 2019-10-01 东南大学 A kind of driving behavior recognition methods based on human body multiple location feature
CN110298994A (en) * 2019-07-01 2019-10-01 南京派光智慧感知信息技术有限公司 A kind of track train driving behavior comprehensive monitoring warning system
CN112241647B (en) * 2019-07-16 2023-05-09 青岛点之云智能科技有限公司 Dangerous driving behavior early warning device and method based on depth camera
CN112241647A (en) * 2019-07-16 2021-01-19 青岛点之云智能科技有限公司 Dangerous driving behavior early warning device and method based on depth camera
CN112241658A (en) * 2019-07-17 2021-01-19 青岛大学 Fatigue driving early warning system and method based on depth camera
CN112241658B (en) * 2019-07-17 2023-09-01 青岛大学 Fatigue driving early warning method based on depth camera
CN110443211A (en) * 2019-08-09 2019-11-12 紫荆智维智能科技研究院(重庆)有限公司 Detection system and method are slept in train driving doze based on vehicle-mounted GPU
CN110674701A (en) * 2019-09-02 2020-01-10 东南大学 Driver fatigue state rapid detection method based on deep learning
CN111259719A (en) * 2019-10-28 2020-06-09 浙江零跑科技有限公司 Cab scene analysis method based on multi-view infrared vision system
CN111259719B (en) * 2019-10-28 2023-08-25 浙江零跑科技股份有限公司 Cab scene analysis method based on multi-view infrared vision system
CN111035096A (en) * 2020-01-09 2020-04-21 郑州铁路职业技术学院 Engineering constructor fatigue detection system based on safety helmet
CN111409555A (en) * 2020-04-10 2020-07-14 中国科学院重庆绿色智能技术研究院 Multi-functional intelligent recognition vehicle-mounted rearview mirror
CN111645695A (en) * 2020-06-28 2020-09-11 北京百度网讯科技有限公司 Fatigue driving detection method and device, computer equipment and storage medium
CN111645695B (en) * 2020-06-28 2022-08-09 北京百度网讯科技有限公司 Fatigue driving detection method and device, computer equipment and storage medium
CN112114671A (en) * 2020-09-22 2020-12-22 上海汽车集团股份有限公司 Human-vehicle interaction method and device based on human eye sight and storage medium
CN113158850A (en) * 2021-04-07 2021-07-23 大连海事大学 Ship driver fatigue detection method and system based on deep learning
CN113158850B (en) * 2021-04-07 2024-01-05 大连海事大学 Ship driver fatigue detection method and system based on deep learning
CN113361452A (en) * 2021-06-24 2021-09-07 中国科学技术大学 Driver fatigue driving real-time detection method and system based on deep learning

Similar Documents

Publication Publication Date Title
CN108309311A (en) Real-time drowsiness detection device and detection algorithm for train drivers
US20230154207A1 (en) Driver fatigue detection method and system based on combining a pseudo-3d convolutional neural network and an attention mechanism
CN103839379B (en) Automobile and driver fatigue early warning detecting method and system for automobile
CN110119676A (en) A kind of Driver Fatigue Detection neural network based
CN108875642A (en) A kind of method of the driver fatigue detection of multi-index amalgamation
Ji et al. Fatigue state detection based on multi-index fusion and state recognition network
Lenskiy et al. Driver’s eye blinking detection using novel color and texture segmentation algorithms
CN108389220B (en) Remote sensing video image motion target real-time intelligent cognitive method and its device
CN106250801A (en) Based on Face datection and the fatigue detection method of human eye state identification
CN107491769A (en) Method for detecting fatigue driving and system based on AdaBoost algorithms
CN111626272A (en) Driver fatigue monitoring system based on deep learning
CN108609018B (en) For analyzing Forewarning Terminal, early warning system and the parser of dangerous driving behavior
CN110472511A (en) A kind of driver status monitoring device based on computer vision
CN109740477A (en) Study in Driver Fatigue State Surveillance System and its fatigue detection method
CN112528843A (en) Motor vehicle driver fatigue detection method fusing facial features
Devi et al. Fuzzy based driver fatigue detection
Dari et al. A neural network-based driver gaze classification system with vehicle signals
Hasan et al. State-of-the-art analysis of modern drowsiness detection algorithms based on computer vision
Saif et al. Robust drowsiness detection for vehicle driver using deep convolutional neural network
Al Redhaei et al. Realtime driver drowsiness detection using machine learning
CN113989788A (en) Fatigue detection method based on deep learning and multi-index fusion
CN114220158A (en) Fatigue driving detection method based on deep learning
Bergasa et al. Visual monitoring of driver inattention
Panicker et al. Open-eye detection using iris–sclera pattern analysis for driver drowsiness detection
Sharma et al. Development of a drowsiness warning system based on the fuzzy logic

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180724