CN106485191A - Driver fatigue state detection method and system
Driver fatigue state detection method and system
- Publication number
- CN106485191A CN106485191A CN201510555903.0A CN201510555903A CN106485191A CN 106485191 A CN106485191 A CN 106485191A CN 201510555903 A CN201510555903 A CN 201510555903A CN 106485191 A CN106485191 A CN 106485191A
- Authority
- CN
- China
- Prior art keywords
- state
- eyes
- eye
- driver
- confidence level
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
- G06V40/175—Static expression
Abstract
The present invention provides a driver fatigue state detection method in which eye SIFT features are fed into a classification model to compute confidence levels, and the open/closed state of the eyes is judged from those confidence levels. The method is unaffected by face angle, is rotation-invariant, measures the eye state with high accuracy, and is computationally simple, so the driver's fatigue state can be detected in time and the requirement of real-time detection can be met. The present invention also provides a corresponding driver fatigue state detection system.
Description
Technical field
The present invention relates to the field of safe driving, and in particular to a driver fatigue state detection method and system.
Background technology
With the rapid development of transportation, fatigued driving has become one of the main causes of frequent traffic accidents. Because of their working environment and driving time, drivers often cannot perceive in time that they are in a fatigued driving state, which creates a great safety hazard. Automatic detection of driver fatigue is therefore an important means of preventing traffic accidents. A large body of experimental data shows that the percentage of time the eyes are closed per unit time correlates well with the degree of fatigue, so detecting the state of the driver's eyes is of particular significance.
In recent years, with the rapid development of image processing and pattern recognition, judging driver fatigue by monitoring the driver's eye state through video has become a feasible approach. The key to eye state detection is finding features that distinguish open eyes from closed eyes; features commonly used by researchers include edge features such as the iris and eyelids, geometric features, and color features.
At present there are many methods for detecting the eye state, mainly template matching, Hough-transform-based methods, and eye difference methods under infrared illumination. Template matching requires many templates to be stored in advance, so its memory footprint is large and it is hard to generalize. Hough-transform ellipse detection is computationally expensive and has poor real-time performance. Eye difference methods under infrared illumination require a complex system and are easily disturbed by the light source position, the illumination angle, and reflections from facial skin.
Content of the invention
In view of this, embodiments of the present invention provide a driver fatigue state detection method and system.
An object of the present invention is to provide a driver fatigue state detection method in which a classification model is trained in advance on eye scale-invariant feature transform (SIFT) features, the classification model computing a corresponding confidence level from an eye SIFT feature. The method includes:
obtaining a face contour image of the driver;
normalizing the face contour image to obtain a normalized face image;
extracting eye SIFT features from the normalized face image, the eye SIFT features including a left-eye SIFT feature and a right-eye SIFT feature;
inputting the left-eye SIFT feature and the right-eye SIFT feature into the classification model to compute a first confidence level for the left-eye SIFT feature and a second confidence level for the right-eye SIFT feature, respectively;
determining the open/closed state of the driver's eyes from the comparison of the first confidence level and the second confidence level with a preset confidence interval; and
determining that the driver is in a fatigue state when the driver's eyes are in the closed state.
Optionally, determining the open/closed state of the driver's eyes from the comparison of the first confidence level and the second confidence level with the preset confidence interval includes:
determining a left-eye open state when the first confidence level is above the preset confidence interval, and a left-eye closed state when the first confidence level is below the preset confidence interval;
determining a right-eye open state when the second confidence level is above the preset confidence interval, and a right-eye closed state when the second confidence level is below the preset confidence interval; and
determining that the driver is in the eyes-open state when the left eye and/or the right eye is in the open state, and that the driver is in the eyes-closed state when both the left eye and the right eye are in the closed state.
Optionally, determining the open/closed state of the driver's eyes from the comparison of the first confidence level and the second confidence level with the preset confidence interval includes:
determining a left-eye unstable state when the first confidence level lies within the preset confidence interval, and a right-eye unstable state when the second confidence level lies within the preset confidence interval; performing a joint probability calculation on the first confidence level of the left-eye unstable state and the second confidence level of the right-eye unstable state to obtain a probability value; and determining that the driver is in the eyes-open state when the probability value is greater than a preset threshold, and in the eyes-closed state when the probability value is not greater than the preset threshold.
Optionally, normalizing the face contour image to obtain the normalized face image includes:
obtaining the eye positions and the face contour size in the face contour image;
calculating the driver's face size, position, and pose features from the eye positions and the face contour size; and
normalizing the face contour image by image mapping according to the face size, position, and pose features to obtain the normalized face image.
Optionally, extracting the eye SIFT features from the normalized face image includes:
determining the image region needed to compute the eye SIFT feature descriptor;
rotating the coordinate axes to the orientation of the keypoint to ensure rotation invariance;
computing the orientation histogram of each seed point to form the feature vector;
normalizing the feature vector of the keypoint; and
thresholding the descriptor vector to clip out-of-range gradient values.
Optionally, after determining that the driver is in the fatigue state when the driver's eyes are in the closed state, the method further includes:
issuing an alarm or decelerating the vehicle when the driver is in the fatigue state, the alarm including at least one of a sound prompt, a light prompt, or a vibration prompt.
A further object of the present invention is to provide a driver fatigue state detection system in which a classification model is trained in advance on eye SIFT features, the classification model computing a corresponding confidence level from an eye SIFT feature. The system includes:
a first extraction unit, configured to extract a face contour image of the driver;
a first processing unit, configured to normalize the face contour image to obtain a normalized face image;
a second extraction unit, configured to extract eye SIFT features from the normalized face image, the eye SIFT features including a left-eye SIFT feature and a right-eye SIFT feature;
a second processing unit, configured to input the left-eye SIFT feature and the right-eye SIFT feature into the classification model to compute a first confidence level for the left-eye SIFT feature and a second confidence level for the right-eye SIFT feature, respectively;
a first determining unit, configured to determine the open/closed state of the driver's eyes from the comparison of the first confidence level and the second confidence level with a preset confidence interval; and
a second determining unit, configured to determine that the driver is in a fatigue state when the driver's eyes are in the closed state.
Optionally, the first determining unit is further configured to:
determine a left-eye open state when the first confidence level is above the preset confidence interval, and a left-eye closed state when the first confidence level is below the preset confidence interval;
determine a right-eye open state when the second confidence level is above the preset confidence interval, and a right-eye closed state when the second confidence level is below the preset confidence interval; and
determine that the driver is in the eyes-open state when the left eye and/or the right eye is in the open state, and that the driver is in the eyes-closed state when both the left eye and the right eye are in the closed state.
Optionally, the first determining unit is further configured to:
determine a left-eye unstable state when the first confidence level lies within the preset confidence interval, and a right-eye unstable state when the second confidence level lies within the preset confidence interval; perform a joint probability calculation on the first confidence level of the left-eye unstable state and the second confidence level of the right-eye unstable state to obtain a probability value; and determine that the driver is in the eyes-open state when the probability value is greater than a preset threshold, and in the eyes-closed state when the probability value is not greater than the preset threshold.
Optionally, the first processing unit is further configured to:
obtain the eye positions and the face contour size in the face contour image;
calculate the driver's face size, position, and pose features from the eye positions and the face contour size; and
normalize the face contour image by image mapping according to the face size, position, and pose features to obtain the normalized face image.
Optionally, the second extraction unit is further configured to:
determine the image region needed to compute the eye SIFT feature descriptor;
rotate the coordinate axes to the orientation of the keypoint to ensure rotation invariance;
compute the orientation histogram of each seed point to form the feature vector;
normalize the feature vector of the keypoint; and
threshold the descriptor vector to clip out-of-range gradient values.
Optionally, the system further includes:
a danger warning unit, configured to issue an alarm or decelerate the vehicle when the driver is in the fatigue state, the alarm including at least one of a sound prompt, a light prompt, or a vibration prompt.
In the driver fatigue state detection method and system provided by the present invention, eye SIFT features are input into a classification model to compute confidence levels, and the open/closed state of the eyes is judged from those confidence levels. The method is unaffected by face angle, is rotation-invariant, measures the eye state with high accuracy, and is computationally simple, so the driver's fatigue state can be detected in time and the requirement of real-time detection can be met.
Brief description of the drawings
Fig. 1 is a flow chart of an embodiment of the driver fatigue state detection method provided by the present invention;
Fig. 2 is a flow chart of another embodiment of the driver fatigue state detection method provided by the present invention;
Fig. 3 is a structural diagram of an embodiment of the driver fatigue state detection system provided by the present invention.
Detailed description of the embodiments
To help those skilled in the art better understand the solution of the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The terms "first", "second", "third", "fourth", and so on in the description, claims, and drawings are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data so used may be interchanged where appropriate, so that the embodiments described herein can be implemented in orders other than those illustrated or described here. Moreover, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units not expressly listed or inherent to the process, method, product, or device.
A SIFT (scale-invariant feature transform) feature is a local image feature. It is invariant to rotation, scaling, and brightness changes, and it also remains stable to some degree under viewpoint changes, affine transformations, and noise.
An SVM (support vector machine) is a learning method built on the VC-dimension theory of statistical learning theory and the principle of structural risk minimization. Given limited sample information, it seeks the best trade-off between model complexity (i.e., the learning accuracy on the specific training samples) and learning capacity (i.e., the ability to classify arbitrary samples without error), in order to obtain the best generalization ability.
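As a minimal sketch of how such an SVM-based eye classifier could be set up (not the patent's actual training code), the following trains a linear SVM by subgradient descent on the hinge loss. The data are synthetic stand-ins for 128-dimensional eye SIFT descriptors; the cluster means, learning rate, and regularization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for 128-D eye SIFT descriptors:
# label +1 = open eye, -1 = closed eye.
X = np.vstack([rng.normal(0.6, 0.05, (50, 128)),    # "open eye" descriptors
               rng.normal(0.2, 0.05, (50, 128))])   # "closed eye" descriptors
y = np.array([1.0] * 50 + [-1.0] * 50)
mu = X.mean(axis=0)            # center features so the classes straddle zero
Xc = X - mu

w, lam, lr = np.zeros(128), 1e-3, 0.05
for _ in range(100):                       # hinge-loss subgradient descent
    active = y * (Xc @ w) < 1.0            # samples violating the margin
    grad = lam * w
    if active.any():
        grad = grad - (y[active, None] * Xc[active]).sum(axis=0) / len(y)
    w -= lr * grad

def eye_confidence(descriptor):
    """Signed score: large positive means 'open', negative means 'closed'."""
    return float((descriptor - mu) @ w)

train_acc = float(np.mean(np.sign(Xc @ w) == y))
print(train_acc)
```

On well-separated synthetic clusters like these the linear model separates the training set perfectly; the signed score plays the role of the confidence level described later in the text.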
As shown in Fig. 1, the present invention provides a driver fatigue state detection method in which a classification model is trained in advance on eye SIFT features, the classification model computing a corresponding confidence level from an eye SIFT feature. The method includes the following steps.
The classification model here may be a support vector machine trained in advance on a large number of eye SIFT features (features of open eyes and features of closed eyes). After training, the model has learned the range of eye SIFT feature values for normally open eyes and for closed eyes. A real-time image of the driver is then processed to obtain a face contour image, and the eye SIFT features extracted from it are fed into the model to obtain a confidence level expressing the degree of similarity. The confidence level indicates whether the eyes are open or closed: the higher the confidence level, the closer the eyes are to the open state; the lower the confidence level, the closer the eyes are to the closed state. This is described in detail below.
S101, extracting the face contour image of the driver.
The positions of the facial features, including the facial features and the face contour, may be located with the ASM (active shape model) algorithm. Concretely, methods such as coarse face detection and connectivity filtering of edge-detected regions may be used: first a coarse rectangular face region is detected, then edge detection, binarization of the effective information, section connectivity filtering, contour point correction, and vertical-horizontal projection of the priority region and the whole image are performed to obtain an accurate face contour. Scale transformation, histogram modification, and similar methods may then be used to obtain the normalized face image. No limitation is imposed here.
S102, normalizing the face contour image to obtain a normalized face image.
As noted in step S101, the normalized face image may be obtained by processing the face contour image, for example by scale transformation and histogram modification. For instance, the eye positions and face contour size may be obtained from the face contour image; the driver's face size, position, and pose features may be calculated from them; and the face contour image may then be normalized by image mapping according to the face size, position, and pose features to obtain the normalized face image. No limitation is imposed here.
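One way the eye-based image mapping of S102 could be sketched, assuming the eye centers have already been located (e.g., by ASM): build a similarity transform (rotation, uniform scale, translation) that maps the detected eye centers onto fixed canonical positions in a normalized face image. The function name, canonical coordinates, and 64x64 target size are illustrative assumptions, not values from the patent.

```python
import numpy as np

def eye_alignment_transform(left_eye, right_eye,
                            canon_left=(16.0, 24.0), canon_right=(48.0, 24.0)):
    """Return a 2x3 affine matrix M such that M @ [x, y, 1] maps the
    detected eye centers onto the canonical eye centers."""
    src = np.array([left_eye, right_eye], dtype=float)
    dst = np.array([canon_left, canon_right], dtype=float)

    # Scale: ratio of canonical to detected inter-eye distance.
    s = np.linalg.norm(dst[1] - dst[0]) / np.linalg.norm(src[1] - src[0])
    # Rotation: angle that makes the detected eye line match the canon line.
    a_src = np.arctan2(*(src[1] - src[0])[::-1])   # arctan2(dy, dx)
    a_dst = np.arctan2(*(dst[1] - dst[0])[::-1])
    th = a_dst - a_src
    R = s * np.array([[np.cos(th), -np.sin(th)],
                      [np.sin(th),  np.cos(th)]])
    # Translation: send the midpoint of the eyes to the canonical midpoint.
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return np.hstack([R, t.reshape(2, 1)])

M = eye_alignment_transform(left_eye=(100.0, 130.0), right_eye=(160.0, 120.0))
# Applying M to the detected left eye lands on the canonical left eye.
mapped = M @ np.array([100.0, 130.0, 1.0])
print(np.round(mapped, 3))
```

The resulting matrix would then be applied to the whole face contour image (e.g., with a warp routine) to produce the normalized face image.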
S103, extracting eye SIFT features from the normalized face image, where the eye SIFT features include a left-eye SIFT feature and a right-eye SIFT feature.
The eye SIFT features may be extracted in various ways, for example as follows:
S1, determining the image region needed to compute the eye SIFT feature descriptor.
The feature descriptor depends on the scale at which the feature point lies, so the gradients should be computed on the Gaussian image corresponding to the feature point. The neighborhood around the keypoint is divided into d × d subregions (for example d = 4), each subregion serving as a seed point, and each seed point has n orientations (for example n = 8).
S2, rotating the coordinate axes to the orientation of the keypoint to ensure rotation invariance.
S3, computing the orientation histogram of each seed point to form the feature vector.
The sample points in the neighborhood are assigned to the corresponding subregions, and the gradient values in each subregion are distributed over the n orientations. Gaussian weights are applied to the gradient values; the rotated sample coordinates within the circular radius are assigned to the d × d subregions, and the gradient of each sample point and the subregions it influences are computed and distributed over the n orientations. The gradient of each of the n orientations of a seed point is computed by linear interpolation: the position of a sample point within its subregion is linearly interpolated to compute its contribution to each seed point.
S4, normalizing the feature vector of the keypoint.
After the feature vector is formed, it is normalized to remove the influence of illumination changes. A global drift in the image intensity values is also removed, because each image gradient is obtained by subtracting neighboring pixel values.
S5, thresholding the descriptor vector to clip out-of-range gradient values.
Non-linear illumination and camera saturation can make the gradient values in some directions too large while having little effect on the directions themselves, so a threshold (typically 0.2 after vector normalization) is set to clip the larger gradient values. The feature vector is then normalized once more to improve the distinctiveness of the feature.
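The steps S1-S5 above can be sketched as follows in a toy form: build a 4 × 4 × 8 = 128-dimensional SIFT-style descriptor from a 16 × 16 patch. Rotation to the keypoint orientation (S2) and Gaussian weighting are omitted for brevity; this only shows the histogram, normalize, clip-at-0.2, renormalize pipeline, and is an illustration rather than a full SIFT implementation.

```python
import numpy as np

def sift_like_descriptor(patch, d=4, n=8):
    """patch: 16x16 grayscale array -> 128-D descriptor (d*d cells, n bins)."""
    gy, gx = np.gradient(patch.astype(float))       # per-pixel gradients
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)     # orientation in [0, 2*pi)
    cell = patch.shape[0] // d                      # pixels per subregion
    desc = np.zeros((d, d, n))
    for i in range(patch.shape[0]):
        for j in range(patch.shape[1]):
            b = int(ang[i, j] / (2 * np.pi) * n) % n    # orientation bin
            desc[i // cell, j // cell, b] += mag[i, j]  # vote by magnitude
    v = desc.ravel()
    v /= np.linalg.norm(v) + 1e-12      # S4: normalize against illumination
    v = np.minimum(v, 0.2)              # S5: clip large gradient values
    v /= np.linalg.norm(v) + 1e-12      # renormalize for distinctiveness
    return v

patch = np.random.default_rng(1).random((16, 16))
v = sift_like_descriptor(patch)
print(v.shape, round(float(np.linalg.norm(v)), 3))
```

The final renormalization after clipping is what gives the descriptor its robustness to saturated gradients, as described in S5.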
It is worth noting that judging the eye state with eye SIFT features is not affected by the face angle and is rotation-invariant. The technical solution of the present invention measures the eye state with high accuracy, is computationally simple and fast, and can meet the requirement of real-time detection.
Concretely, a 16 × 16 neighborhood centered on the feature point may be taken as the sampling window; the relative orientations of the sample points and the feature point are Gaussian-weighted and accumulated into orientation histograms of 8 bins each, finally yielding a 4 × 4 × 8 = 128-dimensional feature descriptor. The manner of extracting SIFT features is not limited to the above; other forms may also be used, and no limitation is imposed here.
One way to obtain a confidence level from eye SIFT features is as follows. After the eye SIFT feature vectors of two images have been generated, the Euclidean distance between keypoint feature vectors is used as the similarity measure for keypoints in the two images. For a given keypoint in the first image, the two closest keypoints in the second image are found by traversal. If the distance to the closest keypoint divided by the distance to the second-closest keypoint is less than a preset threshold, the pair is judged to be a matching pair and the two keypoints are judged to be similar. The other keypoints are processed analogously to determine the similarity of the two images, from which the confidence level of the eye SIFT feature is obtained. The confidence level shows whether the eye in the current frame is close to open, close to closed, fully open, or fully closed, i.e., it expresses the trend of the eyes opening or closing. Those of ordinary skill in the art will understand this, so it is not repeated here.
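The keypoint matching described above (a distance-ratio test) can be sketched as follows, assuming descriptors are rows of numpy arrays. The 0.8 ratio threshold is a conventional choice, not a value given by the patent.

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Return list of (i, j): descriptor i in desc_a matched to j in desc_b."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # Euclidean distances
        j1, j2 = np.argsort(dists)[:2]               # two nearest neighbors
        if dists[j1] < ratio * dists[j2]:            # closest / 2nd-closest
            matches.append((i, int(j1)))
    return matches

a = np.array([[0.0, 0.0], [5.0, 5.0]])
b = np.array([[0.1, 0.0], [9.0, 9.0], [5.0, 5.1]])
m = ratio_test_matches(a, b)
print(m)   # → [(0, 0), (1, 2)]
```

The fraction of keypoints that survive this test against an "open eye" reference versus a "closed eye" reference is one plausible way to turn the matching into a confidence level.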
S104, inputting the left-eye SIFT feature and the right-eye SIFT feature into the classification model to compute the first confidence level of the left-eye SIFT feature and the second confidence level of the right-eye SIFT feature, respectively.
Step S103 described how to compute the similarity of eye SIFT features, i.e., the confidence level, from feature descriptors and keypoints. In step S104 this method is applied to the left-eye SIFT feature and the right-eye SIFT feature: the left-eye SIFT feature corresponds to the first confidence level, which expresses whether the left eye is open or closed, and the right-eye SIFT feature corresponds to the second confidence level, which expresses whether the right eye is open or closed. It should be noted that a confidence interval may be set in advance: above the interval the eye may be considered open, below the interval it may be considered closed, and within the interval the state is unstable and it cannot be determined with certainty whether the eye is open or closed. In that case the confidence levels over three consecutive frames may be examined: if the confidence level falls over three consecutive frames, the eye may be considered to be closing. The three consecutive frames may be the frames before or the frames after the current frame. Whether the confidence level is falling may be judged by subtracting the confidence level of the eye SIFT feature in the earlier frame from that in the later frame; a negative difference indicates a falling confidence level. Those of ordinary skill in the art will understand this, so it is not repeated here.
S105, determining the open/closed state of the driver's eyes from the comparison of the first confidence level and the second confidence level with the preset confidence interval.
Step S104 described how to determine whether an eye is open or closed from the relation of the confidence level to the confidence interval. The comparison result for the first confidence level gives the open/closed state of the left eye, and the comparison result for the second confidence level gives the open/closed state of the right eye. The driver is judged to be in the eyes-open state when at least one eye is open, and in the eyes-closed state when both eyes are closed.
Concretely, a left-eye open state is determined when the first confidence level is above the preset confidence interval, and a left-eye closed state when the first confidence level is below it; a right-eye open state is determined when the second confidence level is above the preset confidence interval, and a right-eye closed state when the second confidence level is below it. The driver is determined to be in the eyes-open state when the left eye and/or the right eye is open, and in the eyes-closed state when both the left eye and the right eye are closed.
It should be noted that when the judgment of the eyes is unstable, i.e., it cannot be decided whether they are open or closed, a joint probability calculation may also be used to determine the driver's state. A left-eye unstable state is determined when the first confidence level lies within the preset confidence interval, and a right-eye unstable state when the second confidence level lies within it. A joint probability calculation is performed on the first confidence level of the left-eye unstable state and the second confidence level of the right-eye unstable state to obtain a probability value: the driver is determined to be in the eyes-open state when the probability value is greater than a preset threshold, and in the eyes-closed state when it is not. In the joint probability calculation, the first confidence level and the second confidence level may each be multiplied by a coefficient and the results summed to obtain the probability value. Those of ordinary skill in the art will understand how to use a joint probability calculation, so it is not described in detail here; the preset threshold may be obtained from statistics of drivers' eye open/closed states.
S106, determining that the driver is in a fatigue state when the driver's eyes are in the closed state.
Step S105 determines whether the driver's eyes are open or closed. When the driver's eyes are determined to be closed, the driver can be determined to be in a fatigued driving state, because the eyes close automatically when a person becomes drowsy and reactions become sluggish, which is very dangerous while driving; detecting driver fatigue in time is therefore of great significance to safe driving. It should be noted that when the driver is judged to be in the fatigue state, follow-up safety measures may also be taken as a warning, for example automatically decelerating the vehicle or giving the driver a voice reminder; no specific limitation is imposed.
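The warning step above can be sketched as a simple dispatch over the alarm options the patent lists (sound, light, or vibration prompt, plus vehicle deceleration); the action names and return format are illustrative.

```python
def handle_fatigue(is_fatigued, alarms=("sound", "light", "vibration")):
    """Return the list of actions to take for the current fatigue judgment."""
    if not is_fatigued:
        return []                                    # alert driver: no action
    return [f"prompt:{a}" for a in alarms] + ["decelerate_vehicle"]

print(handle_fatigue(True))
print(handle_fatigue(False))   # → []
```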
In the driver fatigue state detection method provided by the present invention, eye SIFT features are fed into a classification model to obtain confidence levels, and the open/closed state of the eyes is judged from those confidence levels. The method is unaffected by face angle, is rotation-invariant, measures the eye state with high accuracy, and is computationally simple, so the driver's fatigue state can be detected in time and the requirement of real-time detection can be met.
As shown in Fig. 2, the present invention also provides another embodiment of the driver fatigue state detection method, including:
S201, extracting the face contour image of the driver.
This is similar to step S101 in the previous embodiment and is not repeated here.
S202, obtaining the eye positions and the face contour size in the face contour image.
This is similar to step S102 in the previous embodiment and is not repeated here.
S203, calculating the driver's face size, position, and pose features from the eye positions and the face contour size.
This is similar to step S102 in the previous embodiment and is not repeated here.
S204, normalizing the face contour image by image mapping according to the face size, position, and pose features to obtain the normalized face image.
This is similar to step S102 in the previous embodiment and is not repeated here. It should be noted that other implementations of face image normalization may also be used; those of ordinary skill in the art will understand this, so they are not introduced here.
S205, extracting eye SIFT features from the normalized face image, where the eye SIFT features include a left-eye SIFT feature and a right-eye SIFT feature.
It should be noted that the eye SIFT feature extraction process mentioned in step S205 is described in the previous embodiment; those of ordinary skill in the art will understand it, so it is not introduced again here.
S206, described left eye SIFT feature and right eye SIFT feature are inputted described disaggregated model and counted
Calculate and obtain the first confidence level of described left eye SIFT feature and the second of described right eye SIFT feature respectively
Confidence level.
In this embodiment, SIFT features are extracted for both of the driver's eyes, the feature of each eye is fed into the classification model to obtain a confidence, and the open or closed state of the eyes is judged from the confidences. This improves the fault tolerance of the judgment, and the judging process is simple to carry out.
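The patent does not name the classifier that turns a SIFT descriptor into a confidence. A minimal sketch of this step, assuming a linear (logistic) model whose `weights` and `bias` were fitted offline on labelled open-eye and closed-eye descriptors — both parameter names are hypothetical stand-ins:

```python
import numpy as np

def eye_confidence(sift_feature, weights, bias):
    """Score one eye's SIFT descriptor with a pre-trained linear model.

    Returns a sigmoid score in [0, 1]: values near 1 suggest an open eye,
    values near 0 a closed eye.
    """
    z = float(np.dot(weights, sift_feature) + bias)
    return 1.0 / (1.0 + np.exp(-z))
```

Any probabilistic classifier (an SVM with probability calibration, for example) would fit the same role; the only property the method relies on is a scalar confidence per eye.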
S207: Determine the open/closed state of the driver's eyes according to the comparison of the first confidence and the second confidence with the preset confidence interval; execute S201 when the driver's eyes are open, and execute S208 when the driver's eyes are closed.
The left eye is determined to be open when the first confidence is greater than the preset confidence interval, and closed when the first confidence is less than the preset confidence interval.
The right eye is determined to be open when the second confidence is greater than the preset confidence interval, and closed when the second confidence is less than the preset confidence interval.
When the left eye and/or the right eye is open, the driver is determined to be in the open-eye state; when both the left eye and the right eye are closed, the driver is determined to be in the closed-eye state.
Further, the left eye is determined to be in an unstable state when the first confidence lies within the preset confidence interval, and the right eye is determined to be in an unstable state when the second confidence lies within the preset confidence interval. A joint-probability calculation is then performed on the first confidence corresponding to the left-eye unstable state and the second confidence corresponding to the right-eye unstable state to obtain a probability value; the driver is determined to be in the open-eye state when the probability value is greater than a preset threshold, and in the closed-eye state when it is not.
By performing the corresponding operation for each situation, the detection of eye opening and closing becomes more accurate. In particular, when a confidence lies within the confidence interval, the joint-probability method combines the first confidence and the second confidence into a probability value, and the preset threshold then decides whether the driver is in the open-eye or the closed-eye state. This adapts to various scenarios and improves the flexibility of the method of the present invention.
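The branching in S207 can be sketched as follows. The interval bounds `lo`/`hi`, the threshold value, and the use of a simple product as the "joint probability" are assumptions — the patent does not define the exact joint-probability formula:

```python
def eye_state(confidence, lo, hi):
    """Classify one eye from its confidence and the preset interval [lo, hi]."""
    if confidence > hi:
        return "open"
    if confidence < lo:
        return "closed"
    return "unstable"   # confidence lies within the preset interval

def driver_eye_state(c1, c2, lo, hi, threshold):
    """Combine per-eye decisions as S207 describes.

    c1/c2 are the left-eye and right-eye confidences. Either eye open
    means the driver is open-eyed; both closed means closed-eyed; any
    unstable eye falls back to the joint-probability comparison.
    """
    left, right = eye_state(c1, lo, hi), eye_state(c2, lo, hi)
    if "open" in (left, right):
        return "open"
    if left == "closed" and right == "closed":
        return "closed"
    # At least one eye is in the unstable band: joint probability decides.
    joint = c1 * c2
    return "open" if joint > threshold else "closed"
```

A "closed" result here routes the pipeline to S208 (fatigue determination), while "open" loops back to S201 for the next frame.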
S208: Determine that the driver is in a fatigue state when the driver's eyes are in the closed-eye state.
Step S208 is similar to S106 in the previous embodiment; not repeated here.
S209: Issue an alarm or decelerate the vehicle, where the alarm includes at least one of a sound prompt, a light prompt, or a vibration prompt.
Safety measures need to be taken promptly once the driver is determined to be in a fatigue state. The voice reminder may include a message such as "For your safety and the safety of others, please take a rest!"; the vibration prompt may be delivered through the seat; the light prompt may use a flashing red light, and so on. Vehicle deceleration can be chosen flexibly according to the scenario: for example, when driving on a highway, where a minimum speed is required and a sudden reduction of speed is prone to cause accidents, a voice prompt may be chosen instead. No limitation is imposed here.
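The scenario-dependent choice above can be sketched like this; the 60 km/h boundary and the action names are purely illustrative, since the patent only says that deceleration should be avoided when a sudden slowdown would itself be dangerous:

```python
def choose_alert(speed_kmh, highway_min_speed=60):
    """Pick safety actions per S209's guidance, given the current speed."""
    alerts = ["voice", "light", "vibration"]
    if speed_kmh >= highway_min_speed:
        # Sudden deceleration at highway speed is accident-prone: warn only.
        return alerts
    # At low speed, automatic deceleration is an additional safe option.
    return alerts + ["decelerate"]
```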
A driver fatigue state detection method has been described above. Accordingly, the present invention also provides a driver fatigue state detection system, which is introduced in detail below.
With reference to Fig. 3, the present invention provides an embodiment of a driver fatigue state detection system. A classification model is trained in advance on eye SIFT features and is used to evaluate eye SIFT features and output corresponding confidences. The system includes:
a first extraction unit 301, configured to extract the face contour image of the driver;
a first processing unit 302, configured to normalize the face contour image to obtain a normalized face image;
a second extraction unit 303, configured to extract eye SIFT features from the normalized face image, where the eye SIFT features include a left-eye SIFT feature and a right-eye SIFT feature;
a second processing unit 304, configured to input the left-eye SIFT feature and the right-eye SIFT feature into the classification model and calculate, respectively, a first confidence for the left-eye SIFT feature and a second confidence for the right-eye SIFT feature;
a first determining unit 305, configured to determine the open/closed state of the driver's eyes according to the comparison of the first confidence and the second confidence with the preset confidence interval; and
a second determining unit 306, configured to determine that the driver is in a fatigue state when the driver's eyes are in the closed-eye state.
Optionally, the first determining unit 305 is further configured to:
determine the left eye to be open when the first confidence is greater than the preset confidence interval, and closed when the first confidence is less than the preset confidence interval;
determine the right eye to be open when the second confidence is greater than the preset confidence interval, and closed when the second confidence is less than the preset confidence interval; and
determine the driver to be in the open-eye state when the left eye and/or the right eye is open, and in the closed-eye state when both the left eye and the right eye are closed.
Optionally, the first determining unit 305 is further configured to:
determine the left eye to be in an unstable state when the first confidence lies within the preset confidence interval, and the right eye to be in an unstable state when the second confidence lies within the preset confidence interval; perform a joint-probability calculation on the first confidence corresponding to the left-eye unstable state and the second confidence corresponding to the right-eye unstable state to obtain a probability value; and determine the driver to be in the open-eye state when the probability value is greater than a preset threshold, and in the closed-eye state when it is not.
Optionally, the first processing unit 302 is further configured to:
obtain the eye positions and the face contour size in the face contour image;
calculate the face size, position, and pose features of the driver according to the eye positions and the face contour size; and
normalize the face contour image using an image mapping method according to the face size, position, and pose features to obtain a normalized face image.
Optionally, the second extraction unit 303 is further configured to:
determine the image region required for computing the eye SIFT descriptor;
rotate the coordinate axes to the direction of the key point to ensure rotational invariance;
compute the orientation histogram of each seed point to form a feature vector;
normalize the feature vector of the key point; and
threshold the descriptor vector to clip gradient values that exceed the allowed range.
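The last two steps — normalizing the feature vector and thresholding it to clip out-of-range gradient values — match the standard SIFT descriptor post-processing. A sketch, assuming the conventional 0.2 clipping value from Lowe's SIFT since the patent does not state the number:

```python
import numpy as np

def finalize_sift_descriptor(hist, clip=0.2):
    """Normalize a raw SIFT orientation histogram and clip large gradients.

    L2-normalize the feature vector, clamp entries above `clip` to
    suppress dominant gradient magnitudes (which improves robustness to
    illumination changes), then renormalize to unit length.
    """
    v = np.asarray(hist, dtype=float)
    v = v / (np.linalg.norm(v) + 1e-12)      # first normalization
    v = np.minimum(v, clip)                  # threshold the descriptor
    return v / (np.linalg.norm(v) + 1e-12)   # renormalize to unit length
```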
Optionally, the system further includes:
a danger early-warning unit 307, configured to issue an alarm or decelerate the vehicle when the driver is in a fatigue state, where the alarm includes at least one of a sound prompt, a light prompt, or a vibration prompt.
In the eye open/closed state monitoring system provided by the present invention, the eye SIFT features are input into a classification model to obtain confidences, and the open/closed state of the eyes is judged from the confidences. The judgment is not affected by the face angle, has rotational invariance and high accuracy of eye measurement, and its calculation process is simple, so the driver's fatigue state can be detected in time and the requirement of real-time detection can be met.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the system, apparatus, and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
It should be understood that the system, apparatus, and method disclosed in the several embodiments provided herein may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a division by logical function, and other divisions are possible in an actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the couplings, direct couplings, or communication connections shown or discussed between components may be indirect couplings or communication connections through interfaces, apparatuses, or units, and may be electrical, mechanical, or of other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, may each exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be completed by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium, which may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and so on.
A driver fatigue state detection method and system provided by the present invention have been introduced in detail above. For those of ordinary skill in the art, the specific implementation and the scope of application may vary according to the ideas of the embodiments of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.
Claims (12)
1. A driver fatigue state detection method, characterized in that a classification model is trained in advance on eye scale-invariant feature transform (SIFT) features, the classification model being used to evaluate eye SIFT features and output corresponding confidences, the method comprising:
obtaining a face contour image of a driver;
normalizing the face contour image to obtain a normalized face image;
extracting eye SIFT features from the normalized face image, wherein the eye SIFT features comprise a left-eye SIFT feature and a right-eye SIFT feature;
inputting the left-eye SIFT feature and the right-eye SIFT feature into the classification model and calculating, respectively, a first confidence for the left-eye SIFT feature and a second confidence for the right-eye SIFT feature;
determining the open/closed state of the driver's eyes according to the comparison of the first confidence and the second confidence with a preset confidence interval; and
determining that the driver is in a fatigue state when the driver's eyes are in a closed-eye state.
2. The method according to claim 1, characterized in that determining the open/closed state of the driver's eyes according to the comparison of the first confidence and the second confidence with the preset confidence interval comprises:
determining the left eye to be open when the first confidence is greater than the preset confidence interval, and closed when the first confidence is less than the preset confidence interval;
determining the right eye to be open when the second confidence is greater than the preset confidence interval, and closed when the second confidence is less than the preset confidence interval; and
determining the driver to be in an open-eye state when the left eye and/or the right eye is open, and in a closed-eye state when both the left eye and the right eye are closed.
3. The method according to claim 1 or 2, characterized in that determining the open/closed state of the driver's eyes according to the comparison of the first confidence and the second confidence with the preset confidence interval comprises:
determining the left eye to be in an unstable state when the first confidence lies within the preset confidence interval, and the right eye to be in an unstable state when the second confidence lies within the preset confidence interval;
performing a joint-probability calculation on the first confidence corresponding to the left-eye unstable state and the second confidence corresponding to the right-eye unstable state to obtain a probability value; and
determining the driver to be in an open-eye state when the probability value is greater than a preset threshold, and in a closed-eye state when the probability value is not greater than the preset threshold.
4. The method according to claim 1, characterized in that normalizing the face contour image to obtain a normalized face image comprises:
obtaining eye positions and a face contour size in the face contour image;
calculating the face size, position, and pose features of the driver according to the eye positions and the face contour size; and
normalizing the face contour image using an image mapping method according to the face size, position, and pose features to obtain a normalized face image.
5. The method according to claim 1, characterized in that extracting eye SIFT features from the normalized face image comprises:
determining the image region required for computing the eye SIFT descriptor;
rotating the coordinate axes to the direction of the key point to ensure rotational invariance;
computing the orientation histogram of each seed point to form a feature vector;
normalizing the feature vector of the key point; and
thresholding the descriptor vector to clip gradient values that exceed the allowed range.
6. The method according to claim 1, characterized in that, after determining that the driver is in a fatigue state when the driver's eyes are in the closed-eye state, the method further comprises:
issuing an alarm or decelerating the vehicle when the driver is in the fatigue state, wherein the alarm comprises at least one of a sound prompt, a light prompt, or a vibration prompt.
7. A driver fatigue state detection system, characterized in that a classification model is trained in advance on eye SIFT features, the classification model being used to perform corresponding confidence calculations on eye SIFT features, the system comprising:
a first extraction unit, configured to extract a face contour image of a driver;
a first processing unit, configured to normalize the face contour image to obtain a normalized face image;
a second extraction unit, configured to extract eye SIFT features from the normalized face image, wherein the eye SIFT features comprise a left-eye SIFT feature and a right-eye SIFT feature;
a second processing unit, configured to input the left-eye SIFT feature and the right-eye SIFT feature into the classification model and calculate, respectively, a first confidence for the left-eye SIFT feature and a second confidence for the right-eye SIFT feature;
a first determining unit, configured to determine the open/closed state of the driver's eyes according to the comparison of the first confidence and the second confidence with a preset confidence interval; and
a second determining unit, configured to determine that the driver is in a fatigue state when the driver's eyes are in a closed-eye state.
8. The system according to claim 7, characterized in that the first determining unit is further configured to:
determine the left eye to be open when the first confidence is greater than the preset confidence interval, and closed when the first confidence is less than the preset confidence interval;
determine the right eye to be open when the second confidence is greater than the preset confidence interval, and closed when the second confidence is less than the preset confidence interval; and
determine the driver to be in an open-eye state when the left eye and/or the right eye is open, and in a closed-eye state when both the left eye and the right eye are closed.
9. The system according to claim 7 or 8, characterized in that the first determining unit is further configured to:
determine the left eye to be in an unstable state when the first confidence lies within the preset confidence interval, and the right eye to be in an unstable state when the second confidence lies within the preset confidence interval;
perform a joint-probability calculation on the first confidence corresponding to the left-eye unstable state and the second confidence corresponding to the right-eye unstable state to obtain a probability value; and
determine the driver to be in an open-eye state when the probability value is greater than a preset threshold, and in a closed-eye state when the probability value is not greater than the preset threshold.
10. The system according to claim 7, characterized in that the first processing unit is further configured to:
obtain eye positions and a face contour size in the face contour image;
calculate the face size, position, and pose features of the driver according to the eye positions and the face contour size; and
normalize the face contour image using an image mapping method according to the face size, position, and pose features to obtain a normalized face image.
11. The system according to claim 7, characterized in that the second extraction unit is further configured to:
determine the image region required for computing the eye SIFT descriptor;
rotate the coordinate axes to the direction of the key point to ensure rotational invariance;
compute the orientation histogram of each seed point to form a feature vector;
normalize the feature vector of the key point; and
threshold the descriptor vector to clip gradient values that exceed the allowed range.
12. The system according to claim 7, characterized in that the system further comprises:
a danger early-warning unit, configured to issue an alarm or decelerate the vehicle when the driver is in a fatigue state, wherein the alarm comprises at least one of a sound prompt, a light prompt, or a vibration prompt.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510555903.0A CN106485191B (en) | 2015-09-02 | 2015-09-02 | A kind of method for detecting fatigue state of driver and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106485191A true CN106485191A (en) | 2017-03-08 |
CN106485191B CN106485191B (en) | 2018-12-11 |
Family
ID=58237920
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510555903.0A Active CN106485191B (en) | 2015-09-02 | 2015-09-02 | A kind of method for detecting fatigue state of driver and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106485191B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102096810A (en) * | 2011-01-26 | 2011-06-15 | 北京中星微电子有限公司 | Method and device for detecting fatigue state of user before computer |
CN102156871A (en) * | 2010-02-12 | 2011-08-17 | 中国科学院自动化研究所 | Image classification method based on category correlated codebook and classifier voting strategy |
CN103049740A (en) * | 2012-12-13 | 2013-04-17 | 杜鹢 | Method and device for detecting fatigue state based on video image |
CN103839379A (en) * | 2014-02-27 | 2014-06-04 | 长城汽车股份有限公司 | Automobile and driver fatigue early warning detecting method and system for automobile |
CN103971093A (en) * | 2014-04-22 | 2014-08-06 | 大连理工大学 | Fatigue detection method based on multi-scale LBP algorithm |
CN104688251A (en) * | 2015-03-02 | 2015-06-10 | 西安邦威电子科技有限公司 | Method for detecting fatigue driving and driving in abnormal posture under multiple postures |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107704805A (en) * | 2017-09-01 | 2018-02-16 | 深圳市爱培科技术股份有限公司 | method for detecting fatigue driving, drive recorder and storage device |
CN107704805B (en) * | 2017-09-01 | 2018-09-07 | 深圳市爱培科技术股份有限公司 | Method for detecting fatigue driving, automobile data recorder and storage device |
CN107578008B (en) * | 2017-09-02 | 2020-07-17 | 吉林大学 | Fatigue state detection method based on block feature matrix algorithm and SVM |
CN107578008A (en) * | 2017-09-02 | 2018-01-12 | 吉林大学 | Fatigue state detection method based on blocking characteristic matrix algorithm and SVM |
CN108372785B (en) * | 2018-04-25 | 2023-06-23 | 吉林大学 | Image recognition-based automobile unsafe driving detection device and detection method |
CN108372785A (en) * | 2018-04-25 | 2018-08-07 | 吉林大学 | A kind of non-security driving detection device of the automobile based on image recognition and detection method |
WO2019205633A1 (en) * | 2018-04-27 | 2019-10-31 | 京东方科技集团股份有限公司 | Eye state detection method and detection apparatus, electronic device, and computer readable storage medium |
US11386710B2 (en) | 2018-04-27 | 2022-07-12 | Boe Technology Group Co., Ltd. | Eye state detection method, electronic device, detecting apparatus and computer readable storage medium |
WO2020024395A1 (en) * | 2018-08-02 | 2020-02-06 | 平安科技(深圳)有限公司 | Fatigue driving detection method and apparatus, computer device, and storage medium |
US11055512B2 (en) | 2018-08-06 | 2021-07-06 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method, apparatus and server for determining mental state of human |
CN109192275A (en) * | 2018-08-06 | 2019-01-11 | 百度在线网络技术(北京)有限公司 | The determination method, apparatus and server of personage's state of mind |
WO2020034541A1 (en) * | 2018-08-14 | 2020-02-20 | 深圳壹账通智能科技有限公司 | Driver drowsiness detection method, computer readable storage medium, terminal device, and apparatus |
CN110059650A (en) * | 2019-04-24 | 2019-07-26 | 京东方科技集团股份有限公司 | Information processing method, device, computer storage medium and electronic equipment |
WO2020237664A1 (en) * | 2019-05-31 | 2020-12-03 | 驭势(上海)汽车科技有限公司 | Driving prompt method, driving state detection method and computing device |
CN111242065B (en) * | 2020-01-17 | 2020-10-13 | 江苏润杨汽车零部件制造有限公司 | Portable vehicle-mounted intelligent driving system |
CN111242065A (en) * | 2020-01-17 | 2020-06-05 | 江苏润杨汽车零部件制造有限公司 | Portable vehicle-mounted intelligent driving system |
CN113449584A (en) * | 2020-03-24 | 2021-09-28 | 丰田自动车株式会社 | Eye opening degree calculation device |
CN113449584B (en) * | 2020-03-24 | 2023-09-26 | 丰田自动车株式会社 | Eye opening degree calculating device |
CN113454645A (en) * | 2021-05-27 | 2021-09-28 | 华为技术有限公司 | Driving state detection method and device, equipment, storage medium, system and vehicle |
CN113454645B (en) * | 2021-05-27 | 2022-08-09 | 华为技术有限公司 | Driving state detection method and device, equipment, storage medium, system and vehicle |
CN113255558A (en) * | 2021-06-09 | 2021-08-13 | 北京惠朗时代科技有限公司 | Driver fatigue driving low-consumption identification method and device based on single image |
CN114170069A (en) * | 2021-11-25 | 2022-03-11 | 杭州电子科技大学上虞科学与工程研究院有限公司 | Automatic eye closing processing method based on continuous multiple pictures |
CN114220158A (en) * | 2022-02-18 | 2022-03-22 | 电子科技大学长三角研究院(湖州) | Fatigue driving detection method based on deep learning |
CN117079255A (en) * | 2023-10-17 | 2023-11-17 | 江西开放大学 | Fatigue driving detection method based on face recognition and voice interaction |
CN117079255B (en) * | 2023-10-17 | 2024-01-05 | 江西开放大学 | Fatigue driving detection method based on face recognition and voice interaction |
Also Published As
Publication number | Publication date |
---|---|
CN106485191B (en) | 2018-12-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106485191A (en) | A kind of method for detecting fatigue state of driver and system | |
CN105095829B (en) | A kind of face identification method and system | |
CN105354902B (en) | A kind of security management method and system based on recognition of face | |
CN106557726B (en) | Face identity authentication system with silent type living body detection and method thereof | |
US6690814B1 (en) | Image processing apparatus and method | |
CN106056079B (en) | A kind of occlusion detection method of image capture device and human face five-sense-organ | |
CN110223322B (en) | Image recognition method and device, computer equipment and storage medium | |
CN107644204A (en) | A kind of human bioequivalence and tracking for safety-protection system | |
CN102375970B (en) | A kind of identity identifying method based on face and authenticate device | |
US10445602B2 (en) | Apparatus and method for recognizing traffic signs | |
CN109460704B (en) | Fatigue detection method and system based on deep learning and computer equipment | |
JP6222948B2 (en) | Feature point extraction device | |
CN102938058A (en) | Method and system for video driving intelligent perception and facing safe city | |
KR101937323B1 (en) | System for generating signcription of wireless mobie communication | |
KR102005150B1 (en) | Facial expression recognition system and method using machine learning | |
CN110705357A (en) | Face recognition method and face recognition device | |
JP2008146539A (en) | Face authentication device | |
CN107977639A (en) | A kind of face definition judgment method | |
CN108734235A (en) | A kind of personal identification method and system for electronic prescription | |
Solymár et al. | Banknote recognition for visually impaired | |
CN109977771A (en) | Verification method, device, equipment and the computer readable storage medium of driver identification | |
CN112926522B (en) | Behavior recognition method based on skeleton gesture and space-time diagram convolution network | |
CN106909879A (en) | A kind of method for detecting fatigue driving and system | |
Cheong et al. | A novel face detection algorithm using thermal imaging | |
CN109815937A (en) | Fatigue state intelligent identification Method, device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |