CN110188655A - Driving condition evaluation method, system and computer storage medium - Google Patents

Driving condition evaluation method, system and computer storage medium Download PDF

Info

Publication number
CN110188655A
CN110188655A (application CN201910445107.XA)
Authority
CN
China
Prior art keywords
information
driving
driving condition
fusion
membership
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910445107.XA
Other languages
Chinese (zh)
Inventor
钱少华
王笑悦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NIO Co Ltd
Original Assignee
NIO Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NIO Co Ltd filed Critical NIO Co Ltd
Priority to CN201910445107.XA priority Critical patent/CN110188655A/en
Publication of CN110188655A publication Critical patent/CN110188655A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/251 Fusion techniques of input or preprocessed data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a driving condition evaluation method, system, and computer storage medium. The method includes the following steps: acquiring image information of a driver; processing the image information using a plurality of cascaded neural networks to obtain driving condition information at multiple depths about the driver, the driving condition information reflecting the physiological state of the driver while driving; and fusing the driving condition information using a preset fusion rule to obtain a driving condition evaluation value, the evaluation value indicating the degree of risk of the driver's driving behavior. The method thus enables evaluation of the driver's driving behavior.

Description

Driving condition evaluation method, system and computer storage medium
Technical field
The present invention relates to a mechanism for evaluating the driving condition of a driver, and more particularly to a driving condition evaluation method, a driving condition evaluation system, and a computer storage medium.
Background technique
A driver monitoring system can assist driving to some extent: by monitoring the driver's driving condition, it issues a prompt when an improper driving behavior occurs, encouraging the driver to correct poor driving habits. Such monitoring systems can be widely applied in logistics companies, driver-training companies, and other commercial entities that need to evaluate driver behavior.
Existing driver monitoring system designs remain imperfect in recognition accuracy, recognition types, and alarm decision-making. Monitoring methods based on physiological parameters require sensors in contact with the driver's body, and their detection accuracy is strongly affected by individual differences and chance, so accurate judgments are hard to make. Moreover, physiological-parameter-based monitoring is costly and inconvenient to deploy because of the required physical contact with the driver.
In addition, image-sensor-based monitoring methods are also used in this field. Aleksandra et al. adopted the Viola-Jones algorithm framework for recognition: features are extracted with traditional image templates and then classified by AdaBoost or SVM to determine the image target. For iris detection, biological features can be used for feature extraction. However, these methods lag considerably behind neural networks in recognition accuracy, and are likewise inferior in detection precision.
Summary of the invention
The present invention provides a driving condition evaluation method, system, and computer storage medium that use a deep-learning-based neural network for feature extraction in target detection, yielding better detection accuracy and robustness. On the other hand, existing driver monitoring and early-warning system designs that use deep-learning image recognition lack a sound alarm decision scheme: most judge fatigue from eye-closure frequency via the PERCLOS index, but fatigue is not related to eye-closure frequency alone; conversely, inattention is a dangerous situation requiring an alarm even when the driver is not fatigued. Multi-parameter detection and joint alarming are therefore needed. In DMS systems that use neural networks to detect the face, applying a simple threshold alarm or the traditional PERCLOS index to the obtained parameters often produces results inconsistent with the actual situation. The driving condition evaluation method, system, and computer storage medium provided by the invention perform multi-information fusion based on fuzzy logic in alarm decision-making, use multiple neural networks for multi-target recognition, and comprehensively evaluate the recognized parameters.
According to one aspect of the present invention, a driving condition evaluation method is provided, including the following steps: acquiring image information of a driver; processing the image information using a plurality of cascaded neural networks to obtain driving condition information at multiple depths about the driver, the driving condition information reflecting the physiological state of the driver while driving; and fusing the driving condition information using a preset fusion rule to obtain a driving condition evaluation value, the evaluation value indicating the degree of risk of the driver's driving behavior.
According to another aspect of the present invention, a driving condition evaluation system is provided, comprising: an image acquisition module for acquiring image information of a driver; a plurality of cascaded neural networks for processing the image information to obtain driving condition information at multiple depths about the driver, the driving condition information reflecting the physiological state of the driver while driving; a fusion module that fuses the driving condition information using a preset fusion rule to obtain a driving condition evaluation value; and an evaluation module for indicating, according to the driving condition evaluation value, the degree of risk of the driver's driving behavior.
In one embodiment of the driving condition evaluation system, a camera serves as the image acquisition module to capture the driver's behavior information. Multiple cascaded deep neural networks detect the driver's facial feature points and dangerous actions; danger levels are judged through multi-information fusion, and the driver is monitored with early warning. The system may use three neural networks that focus respectively on behavior, facial details, and so on, increasing recognition accuracy and accurately obtaining blink frequency, yawning frequency, head-turning frequency, gaze direction, and the positional relationship between the hands and the steering wheel. Compared with the prior art, this embodiment improves detection accuracy and remedies the problem of single-factor decision logic.
According to yet another aspect of the invention, a computer storage medium is provided for storing instructions which, when executed by a processor, perform any of the methods of the invention.
Detailed description of the invention
The above and other objects and advantages of the invention will become more fully apparent from the following detailed description taken in conjunction with the accompanying drawings, in which identical or similar elements are denoted by the same reference numerals.
Fig. 1 is a diagram of a driving condition evaluation system according to an embodiment of the invention;
Fig. 2 is a flow chart of a driving condition evaluation method according to an embodiment of the invention;
Fig. 3 is a flow chart of a driving condition evaluation method according to an embodiment of the invention;
Fig. 4 is a flow chart of a driving condition evaluation method according to an embodiment of the invention;
Fig. 5 is a diagram of the membership relations between mouth status information and each driving evaluation grade according to an embodiment of the invention;
Fig. 6 is a diagram of the membership relations between eye status information and each driving evaluation grade according to an embodiment of the invention;
Fig. 7 is a diagram of the membership relations between gaze direction information and each driving evaluation grade according to an embodiment of the invention;
Fig. 8 is a diagram of the membership relations between head status information and each driving evaluation grade according to an embodiment of the invention;
Fig. 9 is a diagram of the membership relations between hand status information and each driving evaluation grade according to an embodiment of the invention;
Figure 10 is a diagram of human-eye training samples according to an embodiment of the invention;
Figure 11 is a diagram of generating driving condition information according to an embodiment of the invention; and
Figure 12 is a diagram of generating driving condition information according to an embodiment of the invention.
Specific embodiment
For brevity and illustration, the principles of the invention are described herein mainly with reference to example embodiments. Those skilled in the art will readily recognize, however, that the same principles apply equally to all types of driving condition evaluation methods, systems, and computer storage media, and that these same or similar principles can be implemented therein, without any such variation departing from the true spirit and scope of the present patent application.
Fig. 1 shows a driving condition evaluation system according to an embodiment of the invention. The system includes an image acquisition module 100, a plurality of cascaded neural networks 101-103, a fusion module 110, and an evaluation module 120.
The image acquisition module 100 can be any of various types of image acquisition devices, or an imaging component that forms part of such a device (e.g., a CMOS or CCD photosensitive array). The image acquisition module 100 images the driver, and the imaging range should cover at least the face and the hands (specifically, since the hands normally couple with the vehicle at the 3 o'clock and 9 o'clock positions of the steering wheel, the imaging can be aimed at the steering wheel region). Although a single image acquisition device is shown in the figure, imaging the driver with multiple cooperating imaging devices is also intended to fall within the scope of the invention. The cooperating devices may, for example, be devices of different focal lengths or different resolutions working together; they can ultimately form a single image, with regions of differing levels of detail, to be processed by the cascaded neural networks 101-103 of the invention. For cost reasons, an image acquisition module 100 with only a single image acquisition device may be used. The installation position of the device must guarantee that image information of the desired regions of the driver can be collected.
Alternatively, as a redundancy backup scheme, the image acquisition module 100 may include multiple mutually redundant image acquisition devices, only one of which outputs an image to the downstream cascaded neural networks 101-103 during normal operation. The remaining devices can be kept in cold or warm standby, to be activated promptly as a replacement when the working device fails. Cold standby means the remaining devices are not in a powered standby state; on failure of the current device, the system immediately powers up a backup device to replace it. Warm standby means the remaining devices are kept powered and on standby; on failure of the current device, the system immediately switches to a backup device. The cold-standby scheme extends device life and saves energy, while the warm-standby scheme reduces the latency of replacing a failed device; those skilled in the art may choose a suitable scheme as needed, and the invention is not limited in this respect.
The cascaded neural networks 101-103 process the image information to obtain driving condition information at multiple depths about the driver. Three cascaded networks are shown in Fig. 1, but those skilled in the art may, when applying the principles of the invention, use a different number of cascade levels as needed to obtain a different number of items of depth driving condition information. Each item of driving condition information reflects, in a single dimension, granularity, or level (for example, gaze direction or mouth state), the physiological state of the driver while driving.
Continuing with Fig. 1, the cascaded neural networks include a first neural network 101, a second neural network 102, and a third neural network 103. The first neural network 101 processes the image information to obtain the first-depth driving condition information; the second neural network 102 processes the first-depth driving condition information identified by the first neural network 101 to obtain the second-depth driving condition information; and the third neural network 103 processes the second-depth driving condition information identified by the second neural network 102 to obtain the third-depth driving condition information. This arrangement evaluates the driver's condition from multiple dimensions, from the macroscopic to the microscopic, making the driving condition evaluation more accurate. Correspondingly, the image regions corresponding to the third-depth driving condition information can carry finer detail; when multiple cooperating imaging devices are used to obtain the image, those regions can be imaged with, for example, a higher-resolution device to form part of the final image.
The fusion module 110 fuses the driving condition information using a preset fusion rule to obtain a driving condition evaluation value. The evaluation module 120 indicates, according to the driving condition evaluation value, the degree of risk of the driver's driving behavior.
In a concrete application, the first-depth driving condition information may include the position of the face and the hand status information (specifically, determined from the positions of the hands); the second-depth driving condition information may include eye status information, mouth status information, and head status information; and the third-depth driving condition information may include gaze direction information.
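The three-stage cascade described above can be sketched as follows. This is a hypothetical illustration only: the stage functions stand in for the actual networks (a detector, a landmark network, and a gaze network), and all returned values are placeholders rather than real inference results.

```python
# Hypothetical sketch of the three-stage cascade: each stage consumes
# the previous stage's output. Network inference is stubbed out.

def stage1_detect(frame):
    """First network: face box and hand boxes (placeholder values)."""
    return {"face_box": (120, 60, 200, 200), "hand_boxes": [(40, 300, 80, 80)]}

def stage2_landmarks(frame, face_box):
    """Second network: landmarks -> eye/mouth/head state (placeholders)."""
    return {"eye_closed": False, "mouth_open": False, "head_offset_deg": 5.0}

def stage3_gaze(frame, face_state):
    """Third network: gaze direction from the cropped eye region (placeholder)."""
    return {"gaze_offset_deg": 8.0}

def run_cascade(frame):
    """Chain the three depths into one per-frame state dictionary."""
    s1 = stage1_detect(frame)
    s2 = stage2_landmarks(frame, s1["face_box"])
    s3 = stage3_gaze(frame, s2)
    return {**s1, **s2, **s3}
```

The per-frame dictionary produced here is the kind of input the fusion module would accumulate over the evaluation window.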
Figure 11 shows an example of forming driving condition information at multiple different depths. Box 1101 indicates the face position, and boxes 1102 and 1103 indicate hand positions (used to generate hand status information). The identification process can be carried out by the first neural network 101, which can be, for example, a Yolo network.
As an embodiment of the present invention, the first neural network 101 divides the image information into multiple grid cells, detects the objects (such as the face and hands) and their corresponding bounding-box positions in those cells, iterates according to the boxes' confidence, and outputs the face position and hand status information. Specifically, the Yolo network processes the input image into a tensor. Per the Yolo design, the input image can be divided into, for example, a 7*7 grid, with each cell predicting, for example, 2 bounding boxes (rectangles containing some object), for 49*2 = 98 boxes in total. The output tensor corresponds to the 7*7 grid of the input image; each cell corresponds to a 30-dimensional vector containing the following information: the probabilities of 20 object classes, the confidences of the 2 boxes, and the positions of the 2 boxes. Since Yolo supports recognizing several different objects, multiple values indicate the probability that any object is present at the cell's position. Each box needs 4 values to express its position (the x and y coordinates of its center point, and its width and height), so the 2 boxes need 8 values in total. The confidence of a box equals the probability that the object is in the box times the IOU (intersection over union) between the box and the object's actual box. Yolo's structure is very simple, consisting of CNN convolutions, pooling (downsampling), and two fully connected layers. At the start of training the predicted boxes may be random, but by always selecting the one with the better IOU, each box gradually becomes good at predicting certain situations (perhaps particular object sizes, aspect ratios, object types, etc.) as training proceeds. In this sense it resembles evolution, or unsupervised learning.
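The box-confidence relation above (confidence = probability the object is in the box, times the IOU between the predicted and actual boxes) can be illustrated with a short sketch. The `(x1, y1, x2, y2)` corner format is an assumption made for clarity; Yolo itself encodes boxes as center, width, and height.

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def box_confidence(p_object, predicted_box, actual_box):
    """Confidence = P(object in box) * IOU(predicted, actual)."""
    return p_object * iou(predicted_box, actual_box)
```

A perfect overlap with certain object presence gives confidence 1.0; confidence falls as either the localization or the object probability degrades.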
When the first neural network 101 recognizes the driver performing a predetermined action, the evaluation module 120 can raise an alarm directly, enabling an immediate alarm for dangerous actions. Besides raising alarms, the evaluation module 120 can record the various dangerous actions and driving condition evaluation values, store them on an onboard computer for later access, or store them in the cloud for convenient retrieval and statistics; they can additionally be sent to any entity interested in the driving condition evaluation values.
With continued reference to Figure 11, the second neural network 102 can process the face position and related information identified by the first neural network 101 to obtain the eye status information, mouth status information, and head status information, all of which derive from the facial image information. The third neural network 103 processes the eye status information identified by the second neural network 102 to obtain the gaze direction information (1105, 1106). The fusion module 110 fuses the eye status, mouth status, head status, hand status, and gaze direction information, each according to its memberships in the respective driving evaluation grades, based on the preset fusion rule.
As an embodiment of the present invention, a facial image can be cropped out of the image information according to the position of the face; each layer of the second neural network then iterates on the key-point heat map output by the previous layer, and finally outputs multiple feature points on the facial image (a key-point heat map). Specifically, the second neural network 102 can be a Deep Alignment Network (DAN). Each stage of DAN takes the whole image as input; by using the whole image, DAN effectively overcomes problems caused by head pose and initialization, achieving better detection. DAN adds key-point heat maps. It consists of multiple stages, each with three inputs and one output: the rectified picture, the key-point heat map, and the feature map generated by a fully connected layer, the output being the face shape. The role of the CONNECTION LAYER is to apply a series of transformations to a stage's output to generate the three inputs required by the next stage.
Referring to Figs. 5-9, the horizontal axis in each figure is an item of driving condition information obtained through neural-network processing, namely the eye status, mouth status, head status, hand status, and gaze direction information. The eye status information is the number of frames within a first preset time in which the eyes are closed in the image information; the mouth status information is the number of frames in which the mouth is open; the head status information is the number of frames in which the head deviates from the normal driving direction; the hand status information is the number of frames in which the hands are off the steering wheel; and the gaze direction information is the number of frames in which the gaze deviates from the normal driving direction. The first preset time can be set as needed by those skilled in the art when carrying out the invention, for example 60 seconds. If the image acquisition module captures 30 frames per second, then 1800 frames are collected in 60 seconds. The explanation of specific values below will proceed with this example.
Figs. 5-9 show the membership of each item of driving condition information in the different driving evaluation grades. In the examples of Figs. 5-9, though not necessarily, the memberships of a particular value of driving condition information across the driving evaluation grades sum to 1. For example, when the mouth status information is 50 frames, its membership in the first driving evaluation grade (hereinafter the first grade) is 1, while its memberships in the second, third, and fourth driving evaluation grades (hereinafter the second, third, and fourth grades) are 0; the memberships sum to 1+0+0+0 = 1. Note that although the embodiments of the invention are described with four evaluation grades, the invention is not limited in this respect; more or fewer evaluation grades are allowed under its principles. As a non-limiting example, the membership of each item of driving condition information in each driving evaluation grade can be determined by preset rules (such as expert rules), and the system may automatically adjust the preset rules in due course (for example, through self-learning) so that the memberships of each item of driving condition information in each grade become more reasonable. The driving evaluation grades express the degree of risk of the driver's physiological state, with the risk increasing from the first grade to the fourth. Specifically, for example, if the mouth-open frame count representing the mouth status information is 175 frames, the memberships in the third and fourth grades are both 0.5; this single physiological indicator already reflects a higher risk, because a mouth that stays open for long periods may mean the driver is talking on a communication device, or dozing off (or yawning at high frequency).
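The membership curves of Figs. 5-9 are given only graphically in the patent. The sketch below reconstructs the mouth-status example with assumed triangular membership functions; the grade centers (50, 100, 150, 200 frames) are chosen purely so that 50 frames maps entirely to the first grade and 175 frames maps half to the third grade and half to the fourth, matching the values quoted above, and do not come from the patent.

```python
def tri(x, left, center, right):
    """Triangular membership: 0 outside (left, right), 1 at center."""
    if x <= left or x >= right:
        return 0.0
    if x <= center:
        return (x - left) / (center - left)
    return (right - x) / (right - center)

# Assumed grade centers for the mouth-open frame count (per 1800-frame
# window); the patent's actual breakpoints are shown only in Fig. 5.
CENTERS = [50, 100, 150, 200]

def mouth_memberships(frames):
    """Memberships of a frame count in grades 1..4 (sums to 1 in range)."""
    mus = []
    for i, c in enumerate(CENTERS):
        left = CENTERS[i - 1] if i > 0 else c - 50
        right = CENTERS[i + 1] if i < len(CENTERS) - 1 else c + 50
        mus.append(tri(frames, left, c, right))
    return mus
```

With these assumed centers, adjacent triangles overlap so that any value between the first and last centers has memberships summing to 1, consistent with the property noted in the text.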
The second neural network 102 can obtain the eye status, mouth status, and head status information as follows. First, the face identified by the first neural network 101 is cropped out and fed into the second neural network 102 for detection of 68 feature points. The proportions of the eyes and lips are then used to calculate the eye open/closed and mouth open/closed situations.
Here (xi, yi) is the coordinate of the i-th of the 68 facial feature points in a frame, the function max(x, y, z) finds the maximum value, and S_eye denotes the area of the recognized eye region. P_eyestate determines the state of the eyes: if P_eyestate is below the set threshold, the eyes are judged to be closed in this frame; if P_eyestate is greater than or equal to the threshold, the eyes are judged to be open.
The open or closed state of the mouth in a frame can be determined in the same way:
Here (xi, yi) is the coordinate of the i-th of the 68 facial feature points in a frame, the function max(x, y, z) finds the maximum value, and S_mouth denotes the area of the recognized mouth region. P_mouthstate determines the state of the mouth: if P_mouthstate is below the set threshold, the mouth is judged to be closed in this frame; if P_mouthstate is greater than or equal to the threshold, the mouth is judged to be open.
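The exact formulas for P_eyestate and P_mouthstate appear as equations in the original document and are not reproduced in this text. The sketch below uses a simple height-to-width ratio of the landmark region as a stand-in proxy, with assumed threshold values, purely to illustrate the thresholding logic described above.

```python
def region_openness(points):
    """Height-to-width ratio of a landmark region of (x, y) tuples.
    A stand-in for the patent's P_eyestate / P_mouthstate formulas,
    which are not reproduced here; it behaves the same way under a
    threshold (smaller when the region is more closed)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    return height / width if width else 0.0

EYE_THRESHOLD = 0.2    # assumed value; below it the eye counts as closed
MOUTH_THRESHOLD = 0.6  # assumed value; at or above it the mouth counts as open

def eye_closed(eye_points):
    return region_openness(eye_points) < EYE_THRESHOLD

def mouth_open(mouth_points):
    return region_openness(mouth_points) >= MOUTH_THRESHOLD
```

Per-frame booleans like these would then be accumulated into the frame counts used on the horizontal axes of Figs. 5 and 6.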
Referring to Figs. 5 and 6: if the mouth is calculated to be open for a total of 50 frames within the first preset time (60 seconds, i.e., 1800 frames at 30 fps), its membership in the first grade is 1 and its memberships in the second, third, and fourth grades are 0. If the eyes are calculated to be closed for a total of 150 frames within the first preset time, the membership in the second grade is 1 and the memberships in the first, third, and fourth grades are 0.
Referring to Figs. 8 and 9, the head status information is the number of frames within the first preset time in which the head deviates from the normal driving direction, and the hand status information is the number of frames in which the hands are off the steering wheel. Specifically, the head direction can be compared with the normal driving direction (straight ahead): if the offset angle exceeds a predetermined value (for example, 15 degrees), the head is judged to deviate from the normal driving direction in this frame; if the offset angle does not exceed the predetermined value (15 degrees), the head is judged not to deviate. The hand status information is determined according to whether the hands are off the steering wheel. Optionally, where necessary, the normal grip positions (generally the 3 o'clock and 9 o'clock positions) can be detected, and any frame in which the hands are not at a normal grip position is judged as hands-off.
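The per-frame angle test and the frame counting over the evaluation window can be sketched as follows. The 15-degree head limit and 20-degree gaze limit come from the description; the function names are illustrative.

```python
HEAD_LIMIT_DEG = 15.0  # head-offset threshold stated in the description
GAZE_LIMIT_DEG = 20.0  # gaze-offset threshold stated in the description

def deviates(angle_deg, limit_deg):
    """True when this frame's deviation from straight ahead exceeds the limit."""
    return abs(angle_deg) > limit_deg

def count_deviating_frames(angles_deg, limit_deg):
    """Frames in the window (e.g. 1800 frames = 60 s at 30 fps) that deviate."""
    return sum(1 for a in angles_deg if deviates(a, limit_deg))
```

The resulting counts are exactly the horizontal-axis quantities of Figs. 7 and 8, which the membership functions then map into the evaluation grades.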
The third neural network 103 can obtain the gaze direction information as follows. Referring to Figs. 7 and 10, the gaze direction information is the number of frames within the first preset time in which the gaze deviates from the normal driving direction in the image information. Specifically, the gaze direction can be compared with the normal driving direction (straight ahead): if the offset angle exceeds a predetermined value (for example, 20 degrees), the gaze is judged to deviate from the normal driving direction in this frame; if the offset angle does not exceed the predetermined value (20 degrees), the gaze is judged not to deviate. Here, the human eyes detected via the second neural network 102 can be cropped out individually and fed into a separately trained eye-gaze network (103), which computes the gaze orientation and the gaze angle. Figure 10 shows human-eye training samples as a non-limiting example.
In an embodiment of the present invention, the input of the third neural network 103 is an image of the eye region and its output is the gaze direction; the method proposed by Seonwook Park et al. in "Deep Pictorial Gaze Estimation" may be used to detect the gaze direction. Specifically, an eye image is cropped from the image information according to a plurality of feature points on the face image, and the third neural network performs multiple iterations according to the position of the pupil in the eye image and outputs the gaze direction information.
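The cropping step can be sketched as follows; the margin value and the function name are illustrative assumptions, and a real implementation would use the eye landmarks produced by the second neural network rather than hard-coded points.

```python
import numpy as np

# Sketch: crop a padded eye patch around eye landmarks before feeding it
# to a gaze network, as the text describes. The 40% margin is an assumed
# value, not the patent's exact procedure.

def crop_eye(image, eye_points, margin=0.4):
    """Crop a padded bounding box around the given (x, y) eye landmarks."""
    pts = np.asarray(eye_points)
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    pad_x = int((x1 - x0) * margin)
    pad_y = int((y1 - y0) * margin)
    h, w = image.shape[:2]
    x0 = max(int(x0) - pad_x, 0)
    y0 = max(int(y0) - pad_y, 0)
    x1 = min(int(x1) + pad_x, w)
    y1 = min(int(y1) + pad_y, h)
    return image[y0:y1, x0:x1]

# Example: two landmark points inside a 100x200 frame yield a small patch.
patch = crop_eye(np.zeros((100, 200, 3)), [(50, 40), (80, 50)])
assert patch.shape == (18, 54, 3)
```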
The fusion module 110 may also fuse the eye state information, mouth state information, head state information, hand state information and gaze direction information according to their respective membership degrees based on the preset fusion rules, and perform defuzzification according to the membership degree of each driving evaluation grade in the fused result, so as to calculate the driving condition evaluation value. The present invention places no restriction on the order or manner of fusing the pieces of information, but the following embodiment gives one possible form.
The eye state information and the mouth state information are fused according to their respective membership degrees based on the preset fusion rules, generating a first fusion. As an example, the preset fusion rules are as shown in Table 1 (where the first, second, third and fourth grades are denoted Lv1, Lv2, Lv3, Lv4):
Table 1
Here the first row indicates the membership of the mouth state information with respect to each driving evaluation grade, the first column indicates the membership of the eye state information with respect to each driving evaluation grade, and each cell of the table gives the membership with respect to each driving evaluation grade after fusion. Further, for each driving evaluation grade, the smaller of the eye state information's and the mouth state information's membership degrees may be taken as the first fusion's membership degree for that grade. For example, referring to Fig. 5, when the mouth state information takes the value 175 frames, the membership with respect to the third and fourth grades is 0.5 each, which is denoted:
MOUTH@175:
Lv3→0.5
Lv4→0.5
Referring back to Fig. 6, when the eye state information takes the value 275 frames, the membership with respect to the third grade is 1, which is denoted:
EYE@275:
Lv3→1
Looking up the table, the fused first fusion's membership with respect to each driving evaluation grade can be determined:
Since the mouth state information has membership with respect to Lv3 and Lv4, and the eye information has membership with respect to Lv3, the fused intermediate quantity has membership with respect to Lv3 and Lv4 (shown in bold in the table). Further, the smaller of the eye state information's and the mouth state information's memberships with respect to Lv3 and Lv4 is taken as the first fusion's membership for those grades, so the first fusion's memberships with respect to Lv3 and Lv4 are 0.5 (0.5 < 1) and 0.5, respectively.
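The table-driven fusion of the worked example can be sketched as follows. Tables 1-4 are figures in the original patent and are not reproduced here, so the `RULE` mapping below is a hypothetical fragment covering only the grades that appear in the example; the min/max operators follow the text (the smaller membership of the two inputs wins, and memberships landing on the same output grade are combined by max).

```python
# Sketch of table-driven fuzzy fusion (assumed rule fragment, since the
# patent's Tables 1-4 are only available as figures).

RULE = {  # (eye grade, mouth grade) -> fused grade (assumed values)
    ("Lv3", "Lv3"): "Lv3",
    ("Lv3", "Lv4"): "Lv4",
}

def fuse(a, b, rule=RULE):
    """Fuse two membership dicts {grade: degree} via a lookup table,
    taking min over each input pair and max over coinciding outputs."""
    out = {}
    for ga, ma in a.items():
        for gb, mb in b.items():
            fused_grade = rule[(ga, gb)]
            out[fused_grade] = max(out.get(fused_grade, 0.0), min(ma, mb))
    return out

eye = {"Lv3": 1.0}                # EYE@275 from the text
mouth = {"Lv3": 0.5, "Lv4": 0.5}  # MOUTH@175 from the text
first_fusion = fuse(eye, mouth)
assert first_fusion == {"Lv3": 0.5, "Lv4": 0.5}  # matches the example
```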
Similarly, the head state information and the gaze direction information may be fused according to their respective membership degrees based on the preset fusion rules (Table 2), generating a second fusion; the hand state information and the second fusion are fused according to their respective membership degrees based on the preset fusion rules (Table 3), generating a third fusion; and the first fusion and the third fusion are fused according to their respective membership degrees based on the preset fusion rules (Table 4), generating a fourth fusion. The fusion rules in Tables 1-4 together constitute the preset fusion rules described herein.
Table 2
In Table 2, the first row indicates the membership of the head state information with respect to each driving evaluation grade, the first column indicates the membership of the gaze direction information with respect to each driving evaluation grade, and the cells give the second fusion's membership with respect to each driving evaluation grade after fusion.
Table 3
In Table 3, the first row indicates the membership of the hand state information with respect to each driving evaluation grade, the first column indicates the membership of the second fusion with respect to each driving evaluation grade, and the cells give the third fusion's membership with respect to each driving evaluation grade after fusion.
Table 4
In Table 4, the first row indicates the membership of the first fusion with respect to each driving evaluation grade, the first column indicates the membership of the third fusion with respect to each driving evaluation grade, and the cells give the fourth fusion's membership with respect to each driving evaluation grade after fusion.
Similarly, for each driving evaluation grade, the smaller of the head state information's and the gaze direction information's membership degrees is taken as the second fusion's membership for that grade; the smaller of the hand state information's and the second fusion's membership degrees is taken as the third fusion's membership for that grade; and the smaller of the first fusion's and the third fusion's membership degrees is taken as the fourth fusion's membership for that grade. The fourth fusion's membership with respect to each driving evaluation grade is then used, together with the weight of each driving evaluation grade, to calculate the driving condition evaluation value.
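The four-stage cascade can be sketched as follows. Because Tables 2-4 are figures in the original, a single hypothetical rule stands in for every stage here: the fused grade is assumed to be the more severe (higher-numbered) of the two input grades, while the fused membership is the smaller of the two, as the text specifies. The input memberships other than the eye/mouth example are likewise assumed values.

```python
# Sketch of the four-stage fusion cascade (Tables 1-4), under the assumed
# rule "fused grade = more severe input grade, membership = min of inputs".

def fuse(a, b):
    """Fuse two membership dicts {grade: degree} (hypothetical rule)."""
    out = {}
    for ga, ma in a.items():
        for gb, mb in b.items():
            g = max(ga, gb, key=lambda s: int(s[2:]))  # higher Lv number
            out[g] = max(out.get(g, 0.0), min(ma, mb))
    return out

eye   = {"Lv3": 1.0}                # from the worked example
mouth = {"Lv3": 0.5, "Lv4": 0.5}    # from the worked example
head  = {"Lv1": 1.0}                # assumed
gaze  = {"Lv2": 0.8}                # assumed
hand  = {"Lv1": 1.0}                # assumed

first  = fuse(eye, mouth)    # Table 1
second = fuse(head, gaze)    # Table 2
third  = fuse(hand, second)  # Table 3
fourth = fuse(first, third)  # Table 4
assert fourth == {"Lv3": 0.5, "Lv4": 0.5}
```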
It should be noted that although a specific fusion order is given in the embodiments herein, those skilled in the art may adjust the order of fusion as needed when implementing the principles of the present invention.
Turning to Fig. 1, the fusion module 110 may perform defuzzification according to the membership degree of each driving evaluation grade in the fourth fusion, so as to calculate the driving condition evaluation value.
The defuzzification process may refer to the following formula:
Here, S_Lvi denotes the fourth fusion's membership degree with respect to the i-th driving evaluation grade (following the example above, i may take 1, 2, 3 and 4), w_Lvi is the weight of the i-th grade, and the calculated L may serve as the driving condition evaluation value. The evaluation module 120 then compares the calculated L (the driving condition evaluation value) with a predetermined value to decide whether to raise an alarm.
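The formula itself appears only as a figure in the original. A common defuzzification choice consistent with the description (memberships S_Lvi combined with per-grade weights w_Lvi into a single value L) is the weighted average below; treat both the formula and the weight values as assumptions, not the patent's exact definitions.

```python
# Sketch: weighted-average defuzzification (assumed form of the formula).
# L = sum(S_Lvi * w_Lvi) / sum(S_Lvi) over the driving evaluation grades.

def defuzzify(memberships, weights):
    """Collapse per-grade memberships into one evaluation value L."""
    num = sum(memberships[g] * weights[g] for g in memberships)
    den = sum(memberships.values())
    return num / den if den else 0.0

fourth_fusion = {"Lv3": 0.5, "Lv4": 0.5}  # from the worked example
weights = {"Lv1": 0.0, "Lv2": 1.0, "Lv3": 2.0, "Lv4": 3.0}  # illustrative
L = defuzzify(fourth_fusion, weights)
assert L == 2.5  # midway between the Lv3 and Lv4 weights
```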
To guarantee the timeliness of detection, the system may recalculate the driving condition evaluation value after the preset time has elapsed.
It should be noted that, for brevity of description, the computing functions are described as included in the cascaded networks at each level; in alternative embodiments, however, these computing functions may also be implemented by separate modules or units, or as independent steps. Specifically, the eye state information, mouth state information and head state information may each be a set of feature points, and the number of frames in the image information within the first preset time in which the eyes are closed, the number of frames in which the mouth is open, and the number of frames in which the head deviates from the normal driving direction may each be calculated by a separate module or unit. Likewise, the hand state information may be a hand position, and the number of frames in the image information within the first preset time in which the hands are off the steering wheel may be calculated by a separate module or unit. All of these embodiment variants fall within the protection scope of the present invention. As a further example, again for brevity of description, the first neural network may output the position of the face and the hand state information and crop the face image from the image information according to the position of the face; correspondingly, cropping the face image from the image information according to the position of the face may also be performed by a separate module or unit. The second neural network may output the plurality of feature points on the face image and crop the eye image from the image information according to those feature points; correspondingly, cropping the eye image from the image information according to the plurality of feature points on the face image may also be performed by a separate module or unit. All of these embodiment variants fall within the protection scope of the present invention.
Fig. 2 shows a flowchart of a driving condition evaluation method according to an embodiment of the invention. The evaluation method may be implemented, for example, using the system shown in Fig. 1.
First, in step 201, image information of the driver is acquired; next, in step 202, the image information is processed using a plurality of cascaded neural networks to obtain driving condition information of multiple depths about the driver; finally, in step 203, the driving condition information is fused using the preset fusion rules to obtain a driving condition evaluation value, which indicates the risk of the driver's driving behavior.
Referring to Fig. 3, processing the image information using the plurality of cascaded neural networks in step 202 to obtain the driving condition information of multiple depths about the driver may include the following steps: processing the image information using a first neural network to obtain first-depth driving condition information among the driving condition information of the multiple depths (step 3021); processing the first-depth driving condition information recognized by the first neural network using a second neural network to obtain second-depth driving condition information (step 3022); and processing the second-depth driving condition information recognized by the second neural network using a third neural network to obtain third-depth driving condition information (step 3023). More specifically, the first-depth driving condition information may include the position of the face and the hand state information; the second-depth driving condition information may include the eye state information, mouth state information and head state information; and the third-depth driving condition information may include the gaze direction information.
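The three-stage cascade of steps 3021-3023 amounts to function composition, where each stage consumes the previous stage's output. The stage functions below are stand-ins (assumptions) for the three neural networks, returning the kinds of information the text lists for each depth.

```python
# Sketch: composition of the three cascaded stages. The bodies are dummy
# stand-ins for the actual networks; only the data flow is illustrated.

def first_network(image):        # step 3021: face position, hand state
    return {"face_box": (10, 10, 90, 90), "hand_state": "on_wheel"}

def second_network(first_out):   # step 3022: eye, mouth, head state
    return {"eye": "open", "mouth": "closed", "head": "forward"}

def third_network(second_out):   # step 3023: gaze direction
    return {"gaze": "forward"}

def cascade(image):
    d1 = first_network(image)    # first-depth information
    d2 = second_network(d1)      # second-depth information
    d3 = third_network(d2)       # third-depth information
    return d1, d2, d3

d1, d2, d3 = cascade(object())
assert d3 == {"gaze": "forward"}
```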
Fusing the driving condition information using the preset fusion rules to obtain the driving condition evaluation value may include: forming, from each of the eye state information, mouth state information, head state information, hand state information and gaze direction information, based on first preset rules, its membership degree with respect to each driving evaluation grade, where a driving evaluation grade indicates the risk of a single physiological state of the driver. As a non-limiting example, the eye state information is the number of frames in the image information within the first preset time in which the eyes are closed; the mouth state information is the number of frames in which the mouth is open; the head state information is the number of frames in which the head deviates from the normal driving direction; the hand state information is the number of frames in which the hands are off the steering wheel; and the gaze direction information is the number of frames in which the gaze deviates from the normal driving direction.
Furthermore, the eye state information, mouth state information, head state information, hand state information and gaze direction information may be fused according to their respective membership degrees based on the preset fusion rules, with defuzzification performed according to the membership degree of each driving evaluation grade in the fused result, so as to calculate the driving condition evaluation value.
Specifically, referring to Fig. 4, the eye state information and the mouth state information may be fused according to their respective membership degrees based on the preset fusion rules, generating a first fusion (step 41); the head state information and the gaze direction information are fused according to their respective membership degrees based on the preset fusion rules, generating a second fusion (step 42); the hand state information and the second fusion are fused according to their respective membership degrees based on the preset fusion rules, generating a third fusion (step 43); the first fusion and the third fusion are fused according to their respective membership degrees based on the preset fusion rules, generating a fourth fusion (step 44); and defuzzification is then performed according to the membership degree of each driving evaluation grade in the fourth fusion, so as to calculate the driving condition evaluation value.
Here, for each driving evaluation grade, the smaller of the eye state information's and the mouth state information's membership degrees may be taken as the first fusion's membership for that grade; the smaller of the head state information's and the gaze direction information's membership degrees as the second fusion's membership; the smaller of the hand state information's and the second fusion's membership degrees as the third fusion's membership; and the smaller of the first fusion's and the third fusion's membership degrees as the fourth fusion's membership. The fourth fusion's membership with respect to each driving evaluation grade is then used, together with the weight of each driving evaluation grade, to calculate the driving condition evaluation value.
In addition, if the first neural network recognizes that the driver is performing a predetermined action, an alarm may be raised directly (shown with dashed lines in Fig. 4). If the driving condition evaluation value is higher than a predetermined value, an alarm is raised. To guarantee timeliness, the driving condition evaluation value may also be recalculated after a second preset time has elapsed.
In addition, the present invention also provides a computer storage medium for storing instructions that, when executed by a processor, perform the method described above.
Analogously to Fig. 11, Fig. 12 is a screenshot of the running interface of a system according to an embodiment of the present invention; it shows real-time tracking of the driving condition information of different depths. The driving condition evaluation method, system and computer storage medium according to one or more embodiments of the invention described above can improve detection accuracy and also resolve the problem of single-factor decision logic, making the monitoring and early warning of the driver more complete.
The examples above primarily illustrate the driving condition evaluation method, system and computer storage medium of the present disclosure. Although only some embodiments of the present invention have been described, those of ordinary skill in the art will appreciate that the present invention may be implemented in many other forms without departing from its spirit and scope. Accordingly, the examples and embodiments shown are to be regarded as illustrative rather than restrictive, and the present invention may cover various modifications and substitutions without departing from the spirit and scope of the present invention as defined by the appended claims.

Claims (25)

1. A driving condition evaluation method, characterized in that the method comprises the following steps:
acquiring image information of a driver;
processing the image information using a plurality of cascaded neural networks to obtain driving condition information of multiple depths about the driver, the driving condition information reflecting a physiological state of the driver while driving; and
fusing the driving condition information using preset fusion rules to obtain a driving condition evaluation value, the driving condition evaluation value indicating a risk of the driving behavior of the driver.
2. The method according to claim 1, wherein processing the image information using the plurality of cascaded neural networks to obtain the driving condition information of multiple depths about the driver comprises:
processing the image information using a first neural network to obtain first-depth driving condition information among the driving condition information of the multiple depths;
processing the first-depth driving condition information recognized by the first neural network using a second neural network to obtain second-depth driving condition information among the driving condition information of the multiple depths; and
processing the second-depth driving condition information recognized by the second neural network using a third neural network to obtain third-depth driving condition information among the driving condition information of the multiple depths.
3. The method according to claim 2, wherein the first-depth driving condition information comprises a position of a face and hand state information; the second-depth driving condition information comprises eye state information, mouth state information and head state information; and the third-depth driving condition information comprises gaze direction information.
4. The method according to claim 3, wherein fusing the driving condition information using the preset fusion rules to obtain the driving condition evaluation value comprises:
forming, from each of the eye state information, the mouth state information, the head state information, the hand state information and the gaze direction information, based on first preset rules, its membership degree with respect to each driving evaluation grade, a driving evaluation grade indicating a risk of a single physiological state of the driver.
5. The method according to claim 4, wherein the eye state information is a number of frames in the image information within a first preset time in which the eyes are closed; the mouth state information is a number of frames in the image information within the first preset time in which the mouth is open; the head state information is a number of frames in the image information within the first preset time in which the head deviates from a normal driving direction; the hand state information is a number of frames in the image information within the first preset time in which the hands are off the steering wheel; and the gaze direction information is a number of frames in the image information within the first preset time in which the gaze deviates from the normal driving direction.
6. The method according to claim 4 or 5, wherein the eye state information, the mouth state information, the head state information, the hand state information and the gaze direction information are fused according to their respective membership degrees based on the preset fusion rules, and defuzzification is performed according to the membership degree of each driving evaluation grade in the fused result, so as to calculate the driving condition evaluation value.
7. The method according to claim 6, wherein:
the eye state information and the mouth state information are fused according to their respective membership degrees based on the preset fusion rules, generating a first fusion;
the head state information and the gaze direction information are fused according to their respective membership degrees based on the preset fusion rules, generating a second fusion;
the hand state information and the second fusion are fused according to their respective membership degrees based on the preset fusion rules, generating a third fusion;
the first fusion and the third fusion are fused according to their respective membership degrees based on the preset fusion rules, generating a fourth fusion; and
defuzzification is performed according to the membership degree of each driving evaluation grade in the fourth fusion, so as to calculate the driving condition evaluation value.
8. The method according to claim 7, wherein, for each driving evaluation grade:
the smaller of the eye state information's and the mouth state information's membership degrees is taken as the first fusion's membership degree with respect to that grade;
the smaller of the head state information's and the gaze direction information's membership degrees is taken as the second fusion's membership degree with respect to that grade;
the smaller of the hand state information's and the second fusion's membership degrees is taken as the third fusion's membership degree with respect to that grade;
the smaller of the first fusion's and the third fusion's membership degrees is taken as the fourth fusion's membership degree with respect to that grade; and
the driving condition evaluation value is calculated from the fourth fusion's membership degree with respect to each driving evaluation grade based on the weight of each driving evaluation grade.
9. The method according to claim 3, wherein an alarm is raised if the first neural network recognizes that the driver is performing a predetermined action.
10. The method according to claim 1 or 5, wherein an alarm is raised if the driving condition evaluation value is higher than a predetermined value.
11. The method according to claim 1, wherein the driving condition evaluation value is recalculated after a second preset time has elapsed.
12. The method according to claim 2, wherein:
the first neural network divides the image information into a plurality of grids, detects objects in the plurality of grids and their corresponding bounding-box positions, the objects including a face and hands, iterates according to the confidence of the bounding boxes, outputs the position of the face and the hand state information, and crops a face image from the image information according to the position of the face;
each layer in the second neural network outputs a key-point heat map to the next layer for iteration, and the network finally outputs a plurality of feature points on the face image and crops an eye image from the image information according to the plurality of feature points on the face image; and
the third neural network performs multiple iterations according to the position of the pupil in the eye image and outputs the gaze direction information.
13. A driving condition evaluation system, characterized in that the system comprises:
an image acquisition module for acquiring image information of a driver;
a plurality of cascaded neural networks for processing the image information to obtain driving condition information of multiple depths about the driver, the driving condition information reflecting a physiological state of the driver while driving;
a fusion module for fusing the driving condition information using preset fusion rules to obtain a driving condition evaluation value; and
an evaluation module for indicating a risk of the driving behavior of the driver according to the driving condition evaluation value.
14. The system according to claim 13, wherein the plurality of cascaded neural networks comprise:
a first neural network for processing the image information to obtain first-depth driving condition information among the driving condition information of the multiple depths;
a second neural network for processing the first-depth driving condition information recognized by the first neural network to obtain second-depth driving condition information among the driving condition information of the multiple depths; and
a third neural network for processing the second-depth driving condition information recognized by the second neural network to obtain third-depth driving condition information among the driving condition information of the multiple depths.
15. The system according to claim 14, wherein the first-depth driving condition information comprises a position of a face and hand state information; the second-depth driving condition information comprises eye state information, mouth state information and head state information; and the third-depth driving condition information comprises gaze direction information.
16. The system according to claim 15, wherein the fusion module forms, from each of the eye state information, the mouth state information, the head state information, the hand state information and the gaze direction information, based on first preset rules, its membership degree with respect to each driving evaluation grade, a driving evaluation grade indicating a risk of a single physiological state of the driver.
17. The system according to claim 16, wherein the eye state information is a number of frames in the image information within a first preset time in which the eyes are closed; the mouth state information is a number of frames in the image information within the first preset time in which the mouth is open; the head state information is a number of frames in the image information within the first preset time in which the head deviates from a normal driving direction; the hand state information is a number of frames in the image information within the first preset time in which the hands are off the steering wheel; and the gaze direction information is a number of frames in the image information within the first preset time in which the gaze deviates from the normal driving direction.
18. The system according to claim 16 or 17, wherein the fusion module fuses the eye state information, the mouth state information, the head state information, the hand state information and the gaze direction information according to their respective membership degrees based on the preset fusion rules, and performs defuzzification according to the membership degree of each driving evaluation grade in the fused result, so as to calculate the driving condition evaluation value.
19. The system according to claim 18, wherein the fusion module is further configured to:
fuse the eye state information and the mouth state information according to their respective membership degrees based on the preset fusion rules, generating a first fusion;
fuse the head state information and the gaze direction information according to their respective membership degrees based on the preset fusion rules, generating a second fusion;
fuse the hand state information and the second fusion according to their respective membership degrees based on the preset fusion rules, generating a third fusion;
fuse the first fusion and the third fusion according to their respective membership degrees based on the preset fusion rules, generating a fourth fusion; and
perform defuzzification according to the membership degree of each driving evaluation grade in the fourth fusion, so as to calculate the driving condition evaluation value.
20. The system according to claim 19, wherein the fusion module is further configured to, for each driving evaluation grade:
take the smaller of the eye state information's and the mouth state information's membership degrees as the first fusion's membership degree with respect to that grade;
take the smaller of the head state information's and the gaze direction information's membership degrees as the second fusion's membership degree with respect to that grade;
take the smaller of the hand state information's and the second fusion's membership degrees as the third fusion's membership degree with respect to that grade;
take the smaller of the first fusion's and the third fusion's membership degrees as the fourth fusion's membership degree with respect to that grade; and
calculate the driving condition evaluation value from the fourth fusion's membership degree with respect to each driving evaluation grade based on the weight of each driving evaluation grade.
21. The system according to claim 15, wherein the evaluation module raises an alarm when the first neural network recognizes that the driver is performing a predetermined action.
22. The system according to claim 13 or 17, wherein the evaluation module raises an alarm when the driving condition evaluation value is higher than a predetermined value.
23. The system according to claim 13, wherein the system recalculates the driving condition evaluation value after a second preset time has elapsed.
24. The system according to claim 14, wherein:
the first neural network divides the image information into multiple grids, detects the objects in the multiple grids and their corresponding bounding-box positions, the objects including a face and hands, iterates according to the confidence of the bounding boxes, outputs the position of the face and the hand state information, and crops a face image from the image information according to the position of the face;
each layer of the second neural network iterates to output a keypoint heatmap to the next layer, finally outputting multiple feature points in the face image, and an eye image is cropped from the image information according to the multiple feature points on the face image; and
the third neural network performs multiple iterations according to the position of the pupil in the eye image and outputs gaze direction information.
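The data flow claimed above (grid detection, face crop, landmark points, eye crop, gaze estimation) can be sketched with stub networks. Everything numeric here — image size, grid count, box, landmark positions — is an illustrative assumption, not content of the patent:

```python
import numpy as np

# Toy 96x96 "camera frame"; the three networks of claim 24 are replaced by
# stubs, so only the claimed data flow is shown.
image = np.zeros((96, 96, 3), dtype=np.uint8)
GRID = 6  # the first network splits the frame into GRID x GRID cells

def detect_face(img):
    """Stub for the first network: (x, y, w, h) of the highest-confidence face box."""
    return (24, 16, 48, 48)

def face_landmarks(face_img):
    """Stub for the second network: feature points (here, two eye corners)."""
    return [(12, 18), (34, 18)]

x, y, w, h = detect_face(image)
cell = (int((x + w / 2) / image.shape[1] * GRID),
        int((y + h / 2) / image.shape[0] * GRID))  # grid cell holding the box center
face = image[y:y + h, x:x + w]                     # crop the face image

(ex, ey), _ = face_landmarks(face)
eye = face[ey - 6:ey + 6, ex - 6:ex + 6]           # crop an eye region around a landmark
# A third stub network would then map the pupil position in `eye` to a gaze direction.
```

Assigning each detection to the grid cell containing its box center mirrors grid-based single-shot detectors (YOLO-style), which matches the claim's "divided into multiple grids" wording.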
25. A computer storage medium storing instructions which, when executed by a processor, perform the method according to any one of claims 1-12.
CN201910445107.XA 2019-05-27 2019-05-27 Driving condition evaluation method, system and computer storage medium Pending CN110188655A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910445107.XA CN110188655A (en) 2019-05-27 2019-05-27 Driving condition evaluation method, system and computer storage medium

Publications (1)

Publication Number Publication Date
CN110188655A true CN110188655A (en) 2019-08-30

Family

ID=67717929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910445107.XA Pending CN110188655A (en) 2019-05-27 2019-05-27 Driving condition evaluation method, system and computer storage medium

Country Status (1)

Country Link
CN (1) CN110188655A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104240446A (en) * 2014-09-26 2014-12-24 长春工业大学 Fatigue driving warning system on basis of human face recognition
CN104809445A (en) * 2015-05-07 2015-07-29 吉林大学 Fatigue driving detection method based on eye and mouth states
CN105286802A (en) * 2015-11-30 2016-02-03 华南理工大学 Driver fatigue detection method based on video information
US20170001648A1 (en) * 2014-01-15 2017-01-05 National University Of Defense Technology Method and Device for Detecting Safe Driving State of Driver
CN107657236A (en) * 2017-09-29 2018-02-02 厦门知晓物联技术服务有限公司 Vehicle security drive method for early warning and vehicle-mounted early warning system
CN108309311A (en) * 2018-03-27 2018-07-24 北京华纵科技有限公司 A kind of real-time doze of train driver sleeps detection device and detection algorithm
CN108960065A (en) * 2018-06-01 2018-12-07 浙江零跑科技有限公司 A kind of driving behavior detection method of view-based access control model
WO2019028798A1 (en) * 2017-08-10 2019-02-14 北京市商汤科技开发有限公司 Method and device for monitoring driving condition, and electronic device
CN109409347A (en) * 2018-12-27 2019-03-01 哈尔滨理工大学 A method of based on facial features localization fatigue driving
CN109460780A (en) * 2018-10-17 2019-03-12 深兰科技(上海)有限公司 Safe driving of vehicle detection method, device and the storage medium of artificial neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MAREK KOWALSKI ET AL.: "Deep Alignment Network: A convolutional neural network for robust face alignment" *
SEONWOOK PARK ET AL.: "Deep Pictorial Gaze Estimation" *
SHI Jian et al.: "Identification and Analysis of Active Safety Factors of Automobile Drivers" *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112699721A (en) * 2019-10-23 2021-04-23 通用汽车环球科技运作有限责任公司 Context-dependent adjustment of off-road glance time
CN112699721B (en) * 2019-10-23 2023-08-25 通用汽车环球科技运作有限责任公司 Context-dependent adjustment of off-road glance time
CN111160237A (en) * 2019-12-27 2020-05-15 智车优行科技(北京)有限公司 Head pose estimation method and apparatus, electronic device, and storage medium
CN115398509A (en) * 2020-06-09 2022-11-25 株式会社日立物流 Operation assistance method, operation assistance system, and operation assistance server
CN111950371A (en) * 2020-07-10 2020-11-17 上海淇毓信息科技有限公司 Fatigue driving early warning method and device, electronic equipment and storage medium
CN111950371B (en) * 2020-07-10 2023-05-19 上海淇毓信息科技有限公司 Fatigue driving early warning method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110188655A (en) Driving condition evaluation method, system and computer storage medium
Chirra et al. Deep CNN: A Machine Learning Approach for Driver Drowsiness Detection Based on Eye State.
Ji et al. Fatigue state detection based on multi-index fusion and state recognition network
CN103839379B (en) Automobile and driver fatigue early warning detecting method and system for automobile
CN108309311A (en) Real-time drowsiness detection device and algorithm for train drivers
CN108664947A (en) Fatigue driving early warning method based on expression recognition
CN105769120A (en) Fatigue driving detection method and device
Yang et al. All in one network for driver attention monitoring
CN102324166A (en) Fatigue driving detection method and device
CN104224204A (en) Driver fatigue detection system based on infrared detection technology
CN104331160A (en) Lip state recognition-based intelligent wheelchair human-computer interaction system and method
CN102629321A (en) Facial expression recognition method based on evidence theory
CN109002774A (en) Fatigue monitoring device and method based on convolutional neural networks
Naz et al. Driver fatigue detection using mean intensity, SVM, and SIFT
Hasan et al. State-of-the-art analysis of modern drowsiness detection algorithms based on computer vision
Devi et al. Fuzzy based driver fatigue detection
Wathiq et al. Optimized driver safety through driver fatigue detection methods
Liu et al. 3DCNN-based real-time driver fatigue behavior detection in urban rail transit
CN110084217B (en) Eye movement parameter monitoring fatigue detection method based on MOD-Net network
Singh et al. Driver fatigue detection using machine vision approach
Aarthi et al. Driver drowsiness detection using deep learning technique
Wang et al. Driving fatigue detection based on feature fusion of information entropy
Sulaiman et al. A systematic review on Evaluation of Driver Fatigue Monitoring Systems based on Existing Face/Eyes Detection Algorithms
Gupta et al. Real time driver drowsiness detecion using transfer learning
Subbaiah et al. Driver drowsiness detection methods: A comprehensive survey

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination