CN115830579A - Driving state monitoring method and system and vehicle

Driving state monitoring method and system and vehicle

Info

Publication number
CN115830579A
Authority
CN
China
Prior art keywords
driving state
image data
driver
state monitoring
data
Prior art date
Legal status
Pending
Application number
CN202211336389.8A
Other languages
Chinese (zh)
Inventor
胡束芒
林枝叶
颉毅
赵龙
吴会肖
Current Assignee
Great Wall Motor Co Ltd
Original Assignee
Great Wall Motor Co Ltd
Priority date
Filing date
Publication date
Application filed by Great Wall Motor Co Ltd filed Critical Great Wall Motor Co Ltd
Priority to CN202211336389.8A
Publication of CN115830579A

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The embodiments of the disclosure provide a driving state monitoring method, a driving state monitoring system and a vehicle. The driving state monitoring method comprises the following steps: acquiring first facial image data of a driver during driving; extracting relevant facial features from the first facial image data, wherein the facial features comprise features of at least one of the head and the facial organs; and inputting the facial features into a preset driving state monitoring model to monitor the driving state of the driver and obtain a monitoring result. The driving state monitoring model is obtained by training on second facial image data of the driver in a normal driving state, and the training includes training a driving state reference feature matrix constructed from facial features extracted from the second facial image data.

Description

Driving state monitoring method and system and vehicle
Technical Field
The disclosure relates to the technical field of automobile driving state monitoring, in particular to a driving state monitoring method, a driving state monitoring system and a vehicle.
Background
Because the postures, appearances and facial organs of individual drivers differ markedly, a conventional Driver Monitoring System (DMS), which applies a generic algorithm for monitoring, is prone to false alarms and false detections for the particular driver states of some individuals, greatly degrading the driver's experience and trust in the whole-vehicle monitoring system.
Disclosure of Invention
The embodiments of the disclosure aim to provide a driving state monitoring method, a driving state monitoring system and a vehicle, so as to solve the technical problems that the existing driver state monitoring systems are prone to false alarms and false detections and give the driver a poor experience.
In order to solve the technical problem, the embodiment of the present disclosure adopts the following technical solutions:
a driving state monitoring method, comprising:
acquiring first facial image data of a driver during driving;
extracting relevant facial features from the first facial image data, wherein the facial features comprise features of at least one of the head and the facial organs;
inputting the facial features into a preset driving state monitoring model to monitor the driving state of the driver to obtain a monitoring result;
the driving state monitoring model is obtained by training second face image data of the driver in a normal driving state, and the training comprises training a driving state reference feature matrix constructed according to facial features extracted from the second face image data.
With the driving state monitoring method, the driving state monitoring system and the vehicle of the embodiments, first facial image data of the driver is acquired during the driver's daily driving, relevant facial features are extracted from the first facial image data, and the facial features are input into a preset driving state monitoring model, trained with second facial image data of the driver in a normal driving state, to monitor the driving state of the driver and obtain a monitoring result.
In some embodiments, training a driving state reference feature matrix constructed from facial features extracted from the second facial image data includes:
calibrating the face key points in the second face image data to obtain reference face key points;
extracting facial features at key points of the reference face;
classifying the facial features to obtain a posture classification result of the driver;
constructing the driving state reference feature matrix according to the posture classification result and the feature value corresponding to the facial feature;
and training the driving state reference characteristic matrix to obtain the driving state monitoring model.
In the process of training the driving state monitoring model, a more comprehensive and accurate driving state reference feature matrix can be constructed from multi-source features, so that reference facial features are collected as comprehensively and accurately as possible to serve as the standard for driving state monitoring judgments; training this driving state reference feature matrix then yields a more accurate and credible driving state monitoring model.
In some embodiments, the method further comprises:
acquiring driving environment data, wherein the driving environment data comprises at least one of distance data between a driver and equipment in the vehicle, vehicle working condition data and environment data around the vehicle;
calculating the driving state reference characteristic matrix according to the driving environment data to obtain a probability distribution curve of a face characteristic state corresponding to the posture classification result;
and determining a monitoring standard corresponding to a monitoring result in the driving state monitoring model according to the probability distribution curve.

In this way, the driving state reference feature matrix is computed against the driving environment data to obtain a probability distribution curve of the facial feature state corresponding to the posture classification result, and the monitoring standard for judging the driving state is determined from the statistical results of that curve, yielding a more accurate driving state monitoring model.
In some embodiments, the method further comprises: optimizing the driving state monitoring model according to the abnormal driving state data in the monitoring result, so that a more accurate driving state monitoring model can be obtained.
In some embodiments, optimizing the driving state monitoring model according to abnormal driving state data in the monitoring result comprises:
acquiring vehicle control information of a time period corresponding to the abnormal driving state data;
judging whether the vehicle control information meets a preset condition or not;
if yes, comparing the abnormal driving state data with the corresponding driving state reference characteristic matrix, and calculating an offset;
if the offset is smaller than a preset offset threshold, determining that the abnormal driving state data are problem data;
and training the driving state monitoring model by using the problem data.
By further judging, from the vehicle control information of the time period corresponding to the abnormal driving state data, whether the abnormal driving state data are problem data, and then training the driving state monitoring model with the problem data, the accuracy of training-data selection in model training can be improved and the model effectively optimized.
In some embodiments, the method further comprises:
acquiring current driving state data of a driver before the driving state monitoring model operates;
inputting the current driving state data of the driver into the driving state monitoring model for self-checking;
if the self-checking is successful, controlling the driving state monitoring model to operate, and monitoring the driving state of the driver; if the self-checking fails, adjusting the driving state reference characteristic matrix according to the frequency of the abnormal driving state;
and training and updating the driving state monitoring model according to the adjusted driving state reference characteristic matrix.
Before driver state monitoring is carried out, the driving state monitoring model performs a self-check against the driver's current driving state and adaptively adjusts its monitoring parameters (such as a monitoring threshold), which guarantees monitoring accuracy during subsequent vehicle operation and improves the user experience.
In some embodiments, after extracting facial features from the first and/or second facial image data, the method further comprises:
carrying out face recognition according to the facial features to obtain a face recognition result;
and associating the driver identity information corresponding to the face recognition result with the first face image data and/or the second face image data.
When facial features are extracted from the second facial image data, face recognition can be performed on those features, the driver identity information obtained from the recognition result is associated with the second facial image data, and different driving state reference feature matrices are subsequently constructed for different driver identities. Identity-managed reference state information can thus be built from the individual feature differences of specific drivers, and a personalized driving state monitoring model trained for each driver, improving the accuracy of driving state monitoring and the user experience.
When facial features are extracted from the first facial image data, face recognition is performed on those features and the driver identity information corresponding to the recognition result is associated with the first facial image data, so that when the driving state monitoring model subsequently monitors the driver state, personalized and accurate monitoring and recognition can be carried out according to the driver identity, avoiding misrecognition and the like.
In some embodiments, the method further comprises:
acquiring second face image data of a driver during driving according to a preset image acquisition period, and updating the second face image data;
updating the driving state reference feature matrix based on the updated second face image data.
By regularly updating the second facial image data, facial image data can be collected dynamically while the driver is in a normal driving state and used as training data: relevant facial features are extracted, the driving state reference feature matrix is constructed, and the driving state monitoring model is trained dynamically so that its parameters are continuously optimized, further improving the accuracy and credibility of the driving state monitoring model.
The embodiment of the present disclosure further provides a driving state monitoring system, including:
an acquisition module configured to acquire first facial image data while a driver is driving;
an extraction module configured to extract relevant facial features from the first facial image data, wherein the facial features include features of at least one of the head and the facial organs;
a monitoring module configured to input the facial features into a preset driving state monitoring model to monitor the driving state of the driver and obtain a monitoring result;
wherein the driving state monitoring model is obtained by training second facial image data of the driver in a normal driving state, and the training includes training a driving state reference feature matrix constructed from facial features extracted from the second facial image data.
Embodiments of the present disclosure also provide a vehicle including a control device, the control device including a memory having a computer program stored thereon and a processor that implements the above method when executing the computer program stored on the memory.
The embodiment of the present disclosure further provides a computer-readable storage medium, on which computer-executable instructions are stored, and when the computer-executable instructions are executed by a processor, the driving state monitoring method is implemented.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present disclosure, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of a driving condition monitoring method according to an embodiment of the present disclosure;
FIG. 2 is an architecture diagram of a driving condition monitoring system according to an embodiment of the present disclosure;
FIG. 3 is another flow chart of a driving condition monitoring method of an embodiment of the present disclosure;
FIG. 4 is a graph illustrating key point calibration of the mouth features in the driving state monitoring method according to the embodiment of the disclosure;
FIG. 5 is a diagram of a calibration of key points of ocular features in a driving state monitoring method according to an embodiment of the disclosure;
fig. 6 is a probability distribution curve of a certain face characteristic state in the driving state monitoring method according to the embodiment of the disclosure;
fig. 7 is a flow chart illustrating a self-inspection of a driving state monitoring model in a driving state monitoring method according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a driving state monitoring system according to an embodiment of the disclosure.
Detailed Description
Various aspects and features of the disclosure are described herein with reference to the drawings.
It will be understood that various modifications may be made to the embodiments of the present application. Accordingly, the foregoing description should not be considered as limiting, but merely as exemplifications of embodiments. Other modifications will occur to those skilled in the art within the scope and spirit of the disclosure.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the disclosure and, together with a general description of the disclosure given above, and the detailed description of the embodiments given below, serve to explain the principles of the disclosure.
These and other characteristics of the present disclosure will become apparent from the following description of preferred forms of embodiment, given as non-limiting examples, with reference to the attached drawings.
It should also be understood that, although the present disclosure has been described with reference to some specific examples, a person of skill in the art shall certainly be able to achieve many other equivalent forms of the disclosure, having the characteristics as set forth in the claims and hence all coming within the field of protection defined thereby.
The above and other aspects, features and advantages of the present disclosure will become more apparent in view of the following detailed description when taken in conjunction with the accompanying drawings.
Specific embodiments of the present disclosure are described hereinafter with reference to the accompanying drawings; however, it is to be understood that the disclosed embodiments are merely examples of the disclosure, which may be embodied in various forms. Well-known and/or repeated functions and structures are not described in detail so as not to obscure the present disclosure with unnecessary detail. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present disclosure in virtually any appropriately detailed structure.
The specification may use the phrases "in one embodiment," "in another embodiment," "in yet another embodiment," or "in other embodiments," which may each refer to one or more of the same or different embodiments in accordance with the disclosure.
When a driver state monitoring system operates, it first collects video images of the driver's upper body (mainly the face) during driving, extracts facial features (including state features of the head, eyes, mouth and the like), and then detects and judges abnormal driver states from the extracted facial features. Because different drivers differ in individual posture, appearance, facial organs and the like, the driver state monitoring system may misrecognize, which affects its monitoring and recognition performance.
The differences in individual posture, appearance and facial organs among drivers affect the system's monitoring and recognition mainly for the following reasons: 1) the face image may be occluded, so the extracted feature information is incomplete; 2) the extraction of face key points or other feature algorithms is distorted, reducing accuracy; and 3) the facial feature results are unsuitable for judging the driver's state, for example small eyes may be misjudged as closed eyes, and a large mouth is more easily misjudged as yawning or speaking.
In view of this, the embodiments of the present disclosure provide a driving state monitoring method and system, and a vehicle.
Fig. 1 shows a flowchart of a driving state monitoring method according to an embodiment of the present disclosure, and as shown in fig. 1, a driving state monitoring method according to an embodiment of the present disclosure includes:
s101: first face image data when a driver drives is acquired.
In this embodiment, the driving state monitoring method may be applied to vehicle-mounted devices such as a vehicle-mounted computer, a vehicle-mounted monitoring device, or a vehicle event data recorder. The vehicle-mounted device comprises a camera and a server: the camera captures the driver's seat area inside the vehicle and collects facial images of the driver to obtain the first facial image data, and the server processes the acquired first facial image data so as to monitor the driving state of the driver.
The in-vehicle apparatus including the above-described camera and server constitutes the driving state monitoring system of the present embodiment. As shown in fig. 2, the driving state monitoring system includes a driver monitoring camera unit, a state monitoring management unit and a service unit, wherein the driver monitoring camera unit can acquire the first facial image data of the driver in real time and send the first facial image data to the state monitoring management unit through a transmission channel (a wireless or wired transmission channel) so as to monitor the driving state of the driver through the state monitoring management unit. The collected first facial image data may be facial image data at any time or in any time period while driving.
The driving state includes a normal driving state and an abnormal driving state, and the abnormal driving state may include a driver in a fatigue driving state, a distracted driving state, a dangerous driving state, and the like.
S102: Extracting relevant facial features from the first facial image data, wherein the facial features comprise features of at least one of the head and the facial organs.
After receiving the first facial image data sent by the camera, the server may process the first facial image data to extract facial features related to driving state detection and recognition.
Facial features include static and/or dynamic features of the head and at least one of the facial organs (eyes, nose, mouth, eyebrows, ears). The head features may include head postures, such as head-up, head-down, nodding or head-turning motions, as well as parameters that further describe the head, such as head-swing frequency, head pitch angle (pitch), yaw angle (yaw) and roll angle (roll). The eye features may include eye shape and eye opening/closing motion features, as well as parameters that further describe the eyes, such as eye openness, inter-eye distance, blink frequency and eye-closure duration. The mouth features may include mouth-opening and mouth-closing motion features, as well as parameters such as mouth openness. The facial features may also include physical features such as facial contours.
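By way of non-limiting illustration, the per-frame feature set described above could be organized as a simple data structure, as in the following Python sketch (all field names are hypothetical and merely mirror the parameters listed in this paragraph):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FacialFeatures:
    """Per-frame facial features extracted from one face image (illustrative)."""
    # Head pose parameters
    pitch_deg: float                            # head pitch angle
    yaw_deg: float                              # head yaw angle
    roll_deg: float                             # head roll angle
    head_swing_freq: Optional[float] = None     # head-swing frequency, if tracked
    # Eye parameters
    eye_openness: Optional[float] = None        # eyelid distance / pupil diameter
    inter_eye_distance: Optional[float] = None  # pixel distance between the eyes
    blink_freq: Optional[float] = None          # blinks per minute
    eye_closure_s: Optional[float] = None       # eye-closure duration in seconds
    # Mouth parameters
    mouth_openness: Optional[float] = None      # e.g., mouth aspect ratio (MAR)
```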
Specifically, when feature extraction is performed, feature values of feature point positions predetermined in the first facial image data may be extracted, and facial features of corresponding parts may be extracted.
To improve the accuracy and efficiency of facial feature extraction, before step S102 is performed, the method further comprises: preprocessing the first facial image data.
Specifically, the acquired first facial image data can be cropped and compressed to improve data-processing efficiency; it can also undergo data cleaning and the like to obtain frontal face views from which facial features can be accurately identified, so that face key points can subsequently be determined and facial features extracted.
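A minimal sketch of such preprocessing, assuming OpenCV is available (the center-crop strategy and the target resolution are illustrative assumptions, not values taken from the patent):

```python
import cv2
import numpy as np

def preprocess_face_image(frame: np.ndarray, target_size=(256, 256)) -> np.ndarray:
    """Crop and compress a raw camera frame before key-point extraction."""
    h, w = frame.shape[:2]
    side = min(h, w)                           # center-crop to a square region
    y0, x0 = (h - side) // 2, (w - side) // 2
    cropped = frame[y0:y0 + side, x0:x0 + side]
    # Down-scale to a fixed size to improve data-processing efficiency
    return cv2.resize(cropped, target_size, interpolation=cv2.INTER_AREA)
```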
S103: and inputting the facial features into a preset driving state monitoring model to monitor the driving state of the driver to obtain a monitoring result.
The driving state monitoring model is obtained by training second face image data of the driver in a normal driving state, and the training comprises training a driving state reference feature matrix constructed according to facial features extracted from the second face image data.
In this embodiment, the driving state monitoring model may be obtained by training in advance using the second facial image data of the driver in the normal driving state. Specifically, in the daily driving process of the driver, second facial image data is collected and used as training data, relevant facial features are extracted from the second facial image data, then a driving state reference feature matrix is constructed according to the extracted facial features, and the driving state reference feature matrix is trained to obtain the driving state monitoring model. Then, the facial features extracted in step S102 are input to the driving state monitoring model, the driving state of the driver is monitored, and whether or not the driver is in an abnormal driving state (e.g., a fatigue driving state) is determined.
Optionally, acquiring the second facial image data while the driver is in the normal driving state includes: acquiring facial image data of the driver within a preset time after the vehicle is started and begins running.
The preset time may be the first 3 to 5 minutes of the vehicle's ignition cycle. During this period the user has just entered the vehicle, completed the relevant settings and is focused on driving, so the driver is unlikely to be driving fatigued; the facial image data collected in this period is therefore used as the second facial image data of the driver in a normal driving state.
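For instance, the collection window could be gated on the elapsed time since ignition, as in this sketch (the 5-minute window follows the 3-5 minute range given above; the class and method names are hypothetical):

```python
import time

CALIBRATION_WINDOW_S = 5 * 60  # first 3-5 minutes of the ignition cycle

class SecondImageCollector:
    """Collects second facial image data only during the start-up window."""

    def __init__(self):
        self.ignition_time = time.monotonic()
        self.samples = []

    def maybe_collect(self, frame) -> bool:
        """Store the frame as normal-driving-state data if still in the window."""
        if time.monotonic() - self.ignition_time <= CALIBRATION_WINDOW_S:
            self.samples.append(frame)
            return True
        return False
```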
Since the driving state reference feature matrix to be constructed comprises multi-source features, a multi-source feature fusion algorithm layer (shown in fig. 2) can be set up during construction of the driving state monitoring model to fuse the extracted reference facial features into a more comprehensive and accurate driving state reference feature matrix. The reference facial features are collected as fully and accurately as possible to serve as the standard for driving state monitoring judgments, and training the driving state reference feature matrix then yields a more accurate and credible driving state monitoring model.
When the driving state is monitored from the acquired first facial image data, the facial features extracted from the first facial image data can be fused by the multi-source feature fusion algorithm layer, and the fused facial features are then compared with the driving state reference feature matrix to identify and judge the driving state of the driver.
In other embodiments, the first facial image data of the driver may be used directly as the input of the driving state monitoring model. As shown in fig. 2, the constructed driving state monitoring model may include a visual feature detection algorithm layer that performs feature extraction on the collected first facial image data; each extracted facial feature is then fused by the multi-source feature fusion algorithm layer and compared with the driving state reference feature matrix by the abnormal-state judgment module to identify and judge the driving state of the driver.
The driving state monitoring model may be a fatigue-driving monitoring model that monitors whether the user is driving fatigued, or an automatic-driving monitoring model that monitors whether the user is performing assisted automatic driving, and so on. It may also be a multi-state monitoring model that monitors several driving states of the driver at the same time, for example monitoring distracted driving at the same time as fatigue driving. The specific type of the driving state monitoring model is not particularly limited in this disclosure.
In an embodiment, when the driving state monitoring model is used for monitoring, if a feature value at a feature point position is not extracted from the first facial image data of the driver to be identified, and that feature value is a variable in the driving state reference feature matrix, it can be determined that the feature point may be occluded; the occluded part can then be predicted from the driving state reference feature matrix and the driving state judged and predicted accordingly, improving the accuracy of driving state monitoring.
The driving state monitoring model can be a machine learning model such as a support vector machine (SVM), a decision tree, or a convolutional neural network (CNN).
Further, as shown in fig. 2, the state monitoring management unit may include a state monitoring message processing module, and may send the monitoring result obtained by the abnormal state judgment module to the service unit, so that the state response policy module in the service unit may adopt a corresponding policy, thereby improving driving safety.
The driving state monitoring method provided by the embodiments of the disclosure acquires first facial image data of the driver during daily driving, extracts relevant facial features from the first facial image data, and inputs the facial features into a preset driving state monitoring model, trained with second facial image data of the driver in a normal driving state, to monitor the driving state of the driver and obtain a monitoring result. Because the driving state monitoring model is obtained by training a driving state reference feature matrix constructed from facial features extracted from the second facial image data, the driver's reference state information can be built from the individual feature differences of the specific driver, and training yields a more comprehensive, accurate, reasonable and credible driving state monitoring model. This improves the accuracy of driving state monitoring, makes it easier to take appropriate measures against an abnormal driving state in time (such as keeping a safe distance from the vehicles ahead and behind), and improves the driving safety of the driver.

In some embodiments, as shown in fig. 3, training a driving state reference feature matrix constructed from facial features extracted from the second facial image data includes:
s201: calibrating the face key points in the second face image data to obtain reference face key points;
s202: extracting facial features at key points of the reference face;
s203: classifying the facial features to obtain a posture classification result of the driver;
s204: constructing the driving state reference feature matrix according to the posture classification result and the feature value corresponding to the facial feature;
s205: and training the driving state reference characteristic matrix to obtain the driving state monitoring model.
As shown in fig. 2, the constructed driving state monitoring model may include a face feature calibration algorithm layer, which calibrates the face key points used to characterize facial features and thereby obtains reference face key points; the facial features at the reference face key points are then extracted, yielding the reference facial features of the driver in a normal driving state. The reference facial features may be stored in a feature configuration database for subsequent construction of the driving state reference feature matrix. As shown in fig. 4, 20 key points around the mouth may be calibrated to obtain their coordinates for extracting the mouth features; as shown in fig. 5, 13 key points around the eye may be calibrated, where P1 to P8 are 8 eyelid key points and P9 to P13 are 5 pupil key points.
After the facial features are extracted according to the calibrated reference face key points, they can be classified. For example, the mouth may be divided into three categories, large, medium and small, and within each category further divided into sub-categories, such as open mouth and closed mouth, according to the mouth opening size.
The mouth can be classified into the three major categories according to the ratio of the inter-eye distance to the mouth width, determined with the face frontal and the mouth closed: the mouth is classified as large when the ratio of the inter-eye distance to the mouth width is less than 65%, as medium when the ratio is 65-80%, and as small when the ratio is greater than 80%.
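These boundaries translate directly into a classification rule; a sketch in Python, using the 65% and 80% thresholds stated above:

```python
def classify_mouth(inter_eye_distance: float, mouth_width: float) -> str:
    """Classify mouth size from the inter-eye-distance / mouth-width ratio,
    measured with the face frontal and the mouth closed."""
    ratio = inter_eye_distance / mouth_width
    if ratio < 0.65:
        return "large"       # mouth wide relative to the eyes
    elif ratio <= 0.80:
        return "medium"
    else:
        return "small"
```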
Further, as shown in fig. 4, the mouth state may be calculated from the extracted feature values of the mouth key points. The distance between the upper lip and the lower lip is calculated as: lip distance = ||P15 - P19||.
The maximum lip distance is the lip distance calculated when the head faces forward, the facial and mouth expression is neutral, and the mouth wears no makeup; it can be obtained by statistical analysis of a large amount of frontal-face second facial image data.
The maximum mouth width is the mouth width calculated with the face frontal, the mouth completely closed and relaxed (not smiling), and the lips pulled flat: maximum mouth width = ||P1 - P7||.
To ensure that a more accurate maximum mouth width is obtained under the frontal closed-mouth condition, a function or a straight line is fitted to the line of key points between the upper and lower lips, and the maximum mouth width is calculated from the fit.
In this embodiment, the mouth aspect ratio (MAR) may also be calculated from the lip distance and the mouth width, for example:

MAR = ||P15 - P19|| / ||P1 - P7||

or an equivalent form averaged over several upper-to-lower lip key-point pairs.
The reference mouth openness in the normal driving state is obtained from this calculation. For example, a mouth state with MAR of about 0.75 or below is determined as normal mouth opening; when MAR > 0.75 and the mouth-opening frequency reaches a preset threshold, yawning can be determined and the driver judged to be in a fatigue driving state.
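A sketch of the MAR computation and the yawning judgment, assuming the 20 mouth key points of fig. 4 are given as a 0-indexed array (so P15 is pts[14]); the MAR form and the window-based frequency check are reconstructions from the definitions above, not the patent's exact equations:

```python
import numpy as np

def mouth_aspect_ratio(pts: np.ndarray) -> float:
    """MAR = upper-to-lower lip distance / mouth width (assumed form)."""
    lip_distance = np.linalg.norm(pts[14] - pts[18])  # ||P15 - P19||
    mouth_width = np.linalg.norm(pts[0] - pts[6])     # ||P1 - P7||
    return float(lip_distance / mouth_width)

def is_yawning(mar_history, mar_threshold: float = 0.75,
               freq_threshold: int = 3) -> bool:
    """Flag yawning when MAR exceeds the threshold often enough in a window."""
    return sum(m > mar_threshold for m in mar_history) >= freq_threshold
```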
For the classification of the eye features, in this embodiment the eye shapes may be classified into three types, small, medium and large, according to eyelid shape and pupil visibility. Small eye: the eyelid is flat and thin, and nearly half of the pupil is not visible. Medium eye: most of the pupil is visible, with almost no eye white above or below the pupil. Large eye: the pupil is fully visible, with clearly visible eye white above and below it.
Further, the pupil diameter and the eyelid aspect ratio (EAR) may be calculated from the extracted feature values of the eye key points.

Wherein pupil diameter = ||P9 - P11|| or 2 × ||P9 - P13||.

Taking P1 and P5 as the eye corners and (P2, P8), (P3, P7), (P4, P6) as the upper-lower eyelid point pairs, the eyelid aspect ratio may, for example, take the form:

EAR = (||P2 - P8|| + ||P3 - P7|| + ||P4 - P6||) / (2 × ||P1 - P5||)

Further, the eye openness may be calculated from the eyelid distance and the pupil diameter:

eye openness = eyelid distance / pupil diameter
for large eyes, the normal opening remains almost 100%, and therefore, the eye features when the normal opening of the eye is greater than 100% are classified as large eyes; and the normal openness of the small eyes is about 50%, and therefore, the eye features having the normal openness of the eyes of about 50% are classified as the small eyes.
More preferably, the eye type can be judged comprehensively from the eyelid aspect ratio and the eye openness under the frontal-face condition. In general, the smaller the eyelid aspect ratio, the flatter the eye; the smaller the eye openness, the lower the pupil-exposure ratio. That is, an eye whose eyelid aspect ratio is below a preset aspect-ratio threshold and whose openness is below a preset openness threshold is determined to be of the small-eye category.
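The combined judgment can be sketched as follows (the numeric thresholds are hypothetical placeholders for the preset aspect-ratio and openness thresholds; openness is expressed as a fraction, so 1.0 corresponds to 100%):

```python
def classify_eye(eyelid_aspect_ratio: float, eye_openness: float,
                 ear_threshold: float = 0.25,
                 openness_threshold: float = 0.6) -> str:
    """Judge eye type from eyelid aspect ratio and eye openness (frontal face)."""
    if eyelid_aspect_ratio < ear_threshold and eye_openness < openness_threshold:
        return "small"   # flat eyelid and low pupil-exposure ratio
    if eye_openness >= 1.0:
        return "large"   # eyelid distance reaches or exceeds the pupil diameter
    return "medium"
```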
After the extracted facial features are classified, the classification categories and the corresponding feature values may be stored as reference facial features in a feature configuration database as shown in fig. 2 for subsequent recognition of the driving state.
The driving state reference feature matrix is then constructed from the posture classification results obtained above and the feature values corresponding to the facial features. Specifically, a feature-value matrix may be built from the feature values (for example, position coordinates) of the feature points in the facial features corresponding to the different posture classification results in the normal driving state, and the driving state reference feature matrix constructed from it. For example, in this embodiment each column vector may hold the feature values of the feature points of a given part, and the reference facial features in the normal driving state may then be determined from the feature values of at least one feature point of that part and/or the relationships between different feature points.
Illustratively, the inter-eye distance can be calculated from the feature values of the feature points in a first column vector of the reference feature matrix containing the eye features, and the mouth width from a second column vector containing the mouth features; the mouth is then divided into the large, medium and small categories according to the ratio of the inter-eye distance to the mouth width. In this way the size of a particular driver's mouth is determined in light of individual feature differences, and the reference facial features representing the driver's normal state are then obtained from the determined mouth size, mouth openness and similar features. That is, the driving state reference feature matrix constructed above represents the reference facial features of the driver in a normal driving state.
In this embodiment, constructing the driving state reference feature matrix in step S204 yields facial reference feature data that is as comprehensive as possible, so that the driving state can be effectively identified. The driving state monitoring model is then built on this reference feature matrix and trained, producing the final driving state monitoring model for judging the driving state of the driver.
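By way of illustration, such a matrix could be assembled by stacking per-frame feature vectors, one column per feature value (the feature keys here are hypothetical):

```python
import numpy as np
from typing import Dict, List

FEATURE_KEYS = ["inter_eye_distance", "mouth_width", "mar",
                "eye_openness", "pitch_deg", "yaw_deg", "roll_deg"]

def build_reference_matrix(samples: List[Dict[str, float]]) -> np.ndarray:
    """Stack per-frame normal-driving feature vectors into the driving state
    reference feature matrix (rows = frames, columns = feature values)."""
    return np.array([[s[k] for k in FEATURE_KEYS] for s in samples])
```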
In some embodiments, the method further comprises:
s301: acquiring driving environment data, wherein the driving environment data comprises at least one of distance data between a driver and equipment in the vehicle, vehicle working condition data and environment data around the vehicle;
s302: calculating the driving state reference characteristic matrix according to the driving environment data to obtain a probability distribution curve of a face characteristic state corresponding to the posture classification result;
s303: and determining a monitoring standard corresponding to a monitoring result in the driving state monitoring model according to the statistical parameters of the probability distribution curve.
The facial features extracted in the normal driving state come from second facial image data collected in different driving environments, and driving environment data, such as the distance between the driver and the camera or the external driving environment, may affect the accuracy of the extracted facial features and hence of the facial feature states calculated from them. Therefore, in this embodiment, to ensure a more accurate driving state monitoring model, the driving state reference feature matrix is calculated against the driving environment data to obtain a probability distribution curve of the facial feature state corresponding to the posture classification result, and the judgment criterion (the monitoring standard) for the driving state is then determined from the statistical results of the probability distribution curve.
As shown in fig. 2, while the vehicle travels, the driving environment data may be transmitted to the service unit through the transmission channel in the form of CAN signals and then forwarded to the state monitoring management unit for driving state monitoring. A driver control information module in the service unit can acquire vehicle operating-condition data such as gear/pedal opening, vehicle speed/acceleration, steering-wheel angle/angular acceleration, and lateral/yaw acceleration.
Fig. 6 is a probability distribution curve of a facial feature state in an embodiment of the disclosure, in which the horizontal axis represents the driving environment and the vertical axis represents the probability value of the MAR (mouth aspect ratio) for the mouth-openness facial feature state. As shown in fig. 6, in this embodiment the data in the most concentrated range may be selected according to the statistical results of the probability distribution curve, and statistical parameters such as the probability centroid or the statistical mean of the facial feature state may be used as the monitoring standard corresponding to that facial feature state, yielding more accurate driving state judgment criteria. For example, when judging whether the driver is driving fatigued by calculating the MAR, the computed MAR differs across driving environments. Therefore, in this embodiment the driving state reference feature matrix is calculated for each driving environment to obtain the probability distribution curve of the MAR; since the curve approximately follows a normal distribution, the probability centroid corresponding to the facial feature state (for example, the mouth openness at which MAR > 0.75 serves as the reference for judging fatigue driving) can be used as the fatigue-driving judgment criterion. By optimizing the judgment criterion in this way, a more accurate driving state monitoring model is obtained.
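A sketch of deriving such a monitoring standard from collected MAR samples; taking the mean of the most concentrated one-sigma band as the probability centroid is an assumption about the statistic used:

```python
import numpy as np

def monitoring_criterion(mar_samples: np.ndarray) -> float:
    """Select the most concentrated range of the MAR distribution and use its
    statistical mean (probability centroid) as the monitoring standard."""
    mu, sigma = mar_samples.mean(), mar_samples.std()
    concentrated = mar_samples[np.abs(mar_samples - mu) <= sigma]
    return float(concentrated.mean())
```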
In some embodiments, the method further comprises:
s401: and optimizing the monitoring parameters of the driving state monitoring model according to the abnormal driving state data in the monitoring result.
After the driving state of the driver is monitored in steps S101 to S103, when it is monitored that the driver is in an abnormal driving state, the abnormal driving state data may be acquired, and the driving state monitoring model may be optimized according to the abnormal driving state data.
In some embodiments, step S401 specifically includes:
s4011: acquiring vehicle control information of a time period corresponding to the abnormal driving state data;
s4012: judging whether the vehicle control information meets a preset condition or not;
s4013: if yes, comparing the abnormal driving state data with the corresponding driving state reference characteristic matrix, and calculating an offset;
s4014: if the offset is smaller than a preset offset threshold, determining that the abnormal driving state data is problem data;
s4015: and training the driving state monitoring model by using the problem data.
Specifically, problem data (e.g., misrecognition data) in the driver monitoring process may be screened on the basis of the vehicle control information during driving. In this embodiment, when abnormal driving state data is produced by the driving state monitoring model, the vehicle control information of the corresponding time period may be acquired and it may be judged whether the driver made sufficient control inputs to the vehicle during that period. If so, the corresponding facial features are extracted from the abnormal driving state data and compared with the corresponding driving state reference feature matrix to check for deviation. If there is no significant deviation, the abnormal driving state data is determined to be problem data that was probably misrecognized and is treated as facial image data of the normal driving state. After a preset amount of problem data has been collected, the problem data and the acquired second facial image data are used together as a training data set and input into the driving state monitoring model for training, so as to obtain the driving state reference feature matrix for judging the driving state.
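The screening logic can be sketched as follows (the relative-offset definition and the threshold value are assumptions; the patent states only that an offset against the reference matrix is computed and compared with a preset offset threshold):

```python
import numpy as np

def screen_problem_data(abnormal_features: np.ndarray,
                        reference_matrix: np.ndarray,
                        control_input_ok: bool,
                        offset_threshold: float = 0.1) -> bool:
    """Return True if abnormal-state data is likely misrecognition (problem data)."""
    if not control_input_ok:      # driver made no sufficient control input
        return False
    reference = reference_matrix.mean(axis=0)         # column-wise reference values
    offset = (np.linalg.norm(abnormal_features - reference)
              / np.linalg.norm(reference))            # relative offset (assumed)
    return offset < offset_threshold                  # close to the normal reference
```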
In other embodiments, the problem data and the second facial image data may be input into a separate driving state reference feature model (which is a data screening model) for training, so as to obtain reference features for determining the driving state.
Therefore, after the driver's reference driving state is calibrated, this embodiment can perform self-learning training on the reference driving state data based on the calibration results with higher offset degrees, so as to improve the monitoring effect of the driving state monitoring model.
In other embodiments, step S401 may adaptively adjust the monitoring parameters of the driving state monitoring model according to the abnormal driving state data, modifying the parameter thresholds used to identify the driving state according to the driving state reference feature matrix obtained in steps S201 to S204 so as to improve monitoring accuracy. For example, since the openness of the small-eye reference category is smaller, a lower eye-openness threshold is set on top of the small-eye classification for determining that the driver's eyes are closed, and whether the driver is in a fatigue driving state is then judged from the duration of the eye-closed state.
In some embodiments, the method further comprises:
s501: acquiring current driving state data of a driver before the driving state monitoring model operates;
s502: inputting the current driving state data of the driver into the driving state monitoring model for self-checking;
s503: if the self-checking is successful, controlling the driving state monitoring model to operate, and monitoring the driving state of the driver; if the self-checking fails, adjusting the driving state reference characteristic matrix according to the frequency of the abnormal driving state;
s504: and training and updating the driving state monitoring model according to the adjusted driving state reference characteristic matrix.
Specifically, when the vehicle is ignited and started, within a preset time (for example, 5 minutes) before the driving state monitoring model enters normal operation, the model performs a self-check according to the driver's current driving state and adaptively adjusts its monitoring parameters (for example, a monitoring threshold), ensuring monitoring accuracy during subsequent vehicle operation and improving the user experience.
As shown in fig. 7, the self-check of the driving state monitoring model is performed through interaction between the driver monitoring algorithm layer (the state monitoring management unit) and the driver monitoring function service layer (the service unit). During the self-check, if the frequency of abnormal driving states is higher than a preset frequency threshold (for example, 5 times per minute), the self-check time can be extended appropriately, for example by 1-2 minutes; if the frequency is still above the threshold after the extension, the self-check is judged to have failed, the driving state reference feature matrix must be readjusted, and the driving state monitoring model is trained and optimized on the adjusted matrix. After a successful self-check, the driving state monitoring model fully enters its normal working state, monitors the driver's state in real time during driving, and reports any detected abnormal driving state to the service layer. The service layer can then apply a corresponding prompt strategy to remind the driver in time to focus on driving and ensure driving safety; as shown in fig. 2, prompts can be issued through different output channels, for example a TTS (text-to-speech) announcement, a prompt on the center-console display, an instrument-cluster reminder, or an alarm through the speaker.
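The self-check flow described above can be sketched as follows (model.predict and get_state are hypothetical stand-ins for the monitoring model and the camera frame source; the 5-per-minute threshold and the 1-2 minute extension follow the example values above):

```python
import time

def self_check(model, get_state, freq_threshold: float = 5.0,
               base_window_s: int = 60, extension_s: int = 120) -> bool:
    """Run the pre-operation self-check; return True on success."""

    def abnormal_per_minute(window_s: int) -> float:
        start, count = time.monotonic(), 0
        while time.monotonic() - start < window_s:
            if model.predict(get_state()) == "abnormal":
                count += 1
        return count / (window_s / 60.0)

    if abnormal_per_minute(base_window_s) <= freq_threshold:
        return True        # self-check passed, start normal monitoring
    if abnormal_per_minute(extension_s) <= freq_threshold:
        return True        # passed after the extended self-check window
    return False           # failed: readjust the reference feature matrix
```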
In some embodiments, after extracting facial features from the first facial image data and/or the second facial image data in step S102, the method further comprises:
s601: carrying out face recognition according to the facial features to obtain a face recognition result;
s602: and associating the driver identity information corresponding to the face recognition result with the first face image data and/or the second face image data.
In this embodiment, when facial features are extracted from the second facial image data, face recognition can be performed on those features, the driver identity information obtained from the recognition result is associated with the second facial image data, and different driving state reference feature matrices are subsequently constructed for different driver identities. Identity-managed reference state information can thus be built from the individual feature differences of specific drivers, and a personalized driving state monitoring model trained for each driver, improving the accuracy of driving state monitoring and the user experience.
When facial features are extracted from the first facial image data, face recognition is performed on those features and the driver identity information corresponding to the recognition result is associated with the first facial image data, so that when the driving state monitoring model subsequently monitors the driver state, personalized and accurate monitoring and recognition can be carried out according to the driver identity and misrecognition avoided (for example, the corresponding driving state monitoring model can be selected according to the face recognition result).
In some embodiments, the method further comprises:
s701: acquiring second face image data of a driver during driving according to a preset image acquisition period, and updating the second face image data;
s702: updating the driving state reference feature matrix based on the updated second face image data.
In this embodiment, the second facial image data of the driver in the normal driving state can be acquired periodically for updating, and the driving state reference feature matrix is updated adaptively, further improving the accuracy of driving state monitoring. For example, for a novice driver the facial features in the normal state may initially show some randomness, for example a larger mouth openness that reflects surprise rather than yawning; as driving experience grows, the driving state reference feature matrix becomes increasingly stable. By periodically updating the reference feature set of the driving state reference feature matrix, continuous training in step with the driver's experience therefore yields a more accurate dynamic driving state monitoring model.
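One simple way to realize such a periodic update is to blend newly collected normal-driving features into the existing reference matrix (the exponential-smoothing rule and the value of alpha are assumptions; the patent specifies only that the matrix is updated from the refreshed second facial image data):

```python
import numpy as np

def update_reference_matrix(current: np.ndarray, new_samples: np.ndarray,
                            alpha: float = 0.1) -> np.ndarray:
    """Blend a batch of newly collected feature vectors into the reference
    matrix; both arrays must share the same shape (rows x feature columns)."""
    return (1.0 - alpha) * current + alpha * new_samples
```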
The embodiments of the disclosure can thus acquire the second facial image data of the driver in a normal driving state in real time during daily driving, dynamically extract facial features, construct the driving state reference feature matrix, and train the driving state monitoring model, continuously and dynamically optimizing the model's parameters and further improving the credibility of the driving state monitoring model.
Fig. 8 shows a schematic structural diagram of a driving state monitoring system according to an embodiment of the present disclosure. As shown in fig. 8, an embodiment of the present disclosure further provides a driving state monitoring system, including:
an acquisition module 10 configured to acquire first face image data while a driver is driving;
an extraction module 20 configured to extract relevant facial features from the first facial image data, wherein the facial features include features of at least one of the head and the facial organs;
the monitoring module 30 is configured to input the facial features into a preset driving state monitoring model to monitor the driving state of the driver, so as to obtain a monitoring result;
the driving state monitoring model is obtained by training second face image data of the driver in a normal driving state, and the training comprises training a driving state reference feature matrix constructed according to facial features extracted from the second face image data.
In some embodiments, further comprising a training module configured to:
calibrating the face key points in the second face image data to obtain reference face key points;
extracting facial features at key points of the reference face;
classifying the facial features to obtain a posture classification result of the driver;
constructing the driving state reference feature matrix according to the posture classification result and the feature value corresponding to the facial feature;
and training the driving state reference characteristic matrix to obtain the driving state monitoring model.
In some embodiments, the training module is further configured to:
acquiring driving environment data, wherein the driving environment data comprises at least one of distance data between a driver and equipment in the vehicle, vehicle working condition data and environment data around the vehicle;
calculating the driving state reference characteristic matrix according to the driving environment data to obtain a probability distribution curve of a face characteristic state corresponding to the posture classification result;
and determining a monitoring standard corresponding to a monitoring result in the driving state monitoring model according to the statistical parameters of the probability distribution curve.
In some embodiments, further comprising a model optimization module configured to:
and optimizing the driving state monitoring model according to the abnormal driving state data in the monitoring result.
In some embodiments, the model optimization module is further configured to:
acquiring vehicle control information of a time period corresponding to the abnormal driving state data;
judging whether the vehicle control information meets a preset condition or not;
if yes, comparing the abnormal driving state data with the corresponding driving state reference characteristic matrix, and calculating an offset;
if the offset is smaller than a preset offset threshold, determining that the abnormal driving state data is problem data;
and training the driving state monitoring model by using the problem data.
In some embodiments, the system further includes a model self-check module configured to:
acquiring current driving state data of a driver before the driving state monitoring model operates;
inputting the current driving state data of the driver into the driving state monitoring model for self-checking;
if the self-checking is successful, controlling the driving state monitoring model to operate, and monitoring the driving state of the driver; if the self-checking fails, adjusting the driving state reference characteristic matrix according to the frequency of the abnormal driving state;
and training and updating the driving state monitoring model according to the adjusted driving state reference characteristic matrix.
In some embodiments, further comprising an association module configured to: after extracting facial features from the first and/or second facial image data,
carrying out face recognition according to the facial features to obtain a face recognition result;
and associating the driver identity information corresponding to the face recognition result with the first face image data and/or the second face image data.
In some embodiments, further comprising an update module configured to:
acquiring second face image data of a driver during driving according to a preset image acquisition period, and updating the second face image data;
updating the driving state reference feature matrix based on the updated second face image data (see the rolling-window sketch below).
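One way to realize the periodic update is a rolling window over recently acquired second facial image data, so the reference matrix tracks gradual changes in the driver's appearance and habits; the 500-frame window size is an assumption.

```python
from collections import deque

class ReferenceUpdater:
    """Rolling window of recently acquired normal-driving images; the reference
    feature matrix is rebuilt once per acquisition period."""
    def __init__(self, build_reference_matrix, window=500):
        self.images = deque(maxlen=window)  # old frames fall out automatically
        self.build = build_reference_matrix

    def on_acquisition_period(self, new_images):
        self.images.extend(new_images)        # update the second face image data
        return self.build(list(self.images))  # rebuild the reference feature matrix
```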
The driving state monitoring system provided in the embodiments of the present disclosure corresponds to the driving state monitoring method in the above embodiments; any optional features of the method embodiments also apply to the system embodiments and are not repeated here.
The embodiments of the present disclosure further provide a vehicle including a control device. The control device includes a memory storing a computer program and a processor that implements the driving state monitoring method described above when executing the computer program stored in the memory.
The memory may include volatile memory (e.g., random-access memory (RAM), which may take the form of magnetic RAM, ferroelectric RAM, or any other suitable form) and non-volatile memory (e.g., disk storage, flash memory, erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), memristor-based non-volatile solid-state memory, etc.).
The processor may be a processing device including at least one general-purpose processing unit, such as a microprocessor, central processing unit (CPU), or graphics processing unit (GPU). More particularly, the processor may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor executing another instruction set, or a processor executing a combination of instruction sets. The processor may also be one or more special-purpose processing devices such as an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), or a system on a chip (SoC).
The electronic device may further include a communication interface for communicating with an external device (e.g., a network device), and a communication bus through which the processor, the memory, and the communication interface communicate with each other.
The electronic device includes, but is not limited to, vehicle-mounted devices such as a vehicle-mounted server, a vehicle-mounted display, and a vehicle event data recorder, and also includes terminal devices such as a handheld device (e.g., a mobile phone, a tablet computer, etc.), a wearable device (e.g., a smart watch, a smart bracelet, a pedometer, etc.), and the like.
The embodiment of the disclosure also provides a computer-readable storage medium, on which computer-executable instructions are stored, and when the computer-executable instructions are executed by a processor, the driving state monitoring method is realized.
The computer-executable instructions of the embodiments of the present disclosure may be organized into one or more computer-executable components or modules. Aspects of the disclosure may be implemented with any number and combination of such components or modules. For example, aspects of the disclosure are not limited to the specific computer-executable instructions or the specific components or modules illustrated in the figures and described herein. Other embodiments may include different computer-executable instructions or components having more or less functionality than illustrated and described herein.
The foregoing description is merely an explanation of the preferred embodiments of the present disclosure and the principles of the technology employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to the particular combinations of the features described above, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, solutions in which the above features are replaced with (but not limited to) features with similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
While the present disclosure has been described in detail with reference to the embodiments, the present disclosure is not limited to the specific embodiments, and those skilled in the art can make various modifications and alterations based on the concept of the present disclosure, and the modifications and alterations should fall within the scope of the present disclosure as claimed.

Claims (10)

1. A driving state monitoring method, characterized by comprising:
acquiring first face image data of a driver during driving;
extracting relevant facial features from the first facial image data, wherein the facial features comprise features of at least one part of the head and five sense organs;
inputting the facial features into a preset driving state monitoring model to monitor the driving state of the driver to obtain a monitoring result;
the driving state monitoring model is obtained by training second face image data of the driver in a normal driving state, and the training comprises training a driving state reference feature matrix constructed according to facial features extracted from the second face image data.
2. The driving state monitoring method according to claim 1, wherein training a driving state reference feature matrix constructed from facial features extracted from the second facial image data includes:
calibrating the face key points in the second face image data to obtain reference face key points;
extracting facial features at the reference face key points;
classifying the facial features to obtain a posture classification result of the driver;
constructing the driving state reference feature matrix according to the posture classification result and the feature value corresponding to the facial feature;
and training the driving state reference feature matrix to obtain the driving state monitoring model.
3. The driving state monitoring method according to claim 2, characterized in that the method further comprises:
acquiring driving environment data, wherein the driving environment data comprises at least one of distance data between a driver and equipment in the vehicle, vehicle working condition data and environment data around the vehicle;
performing calculation on the driving state reference feature matrix according to the driving environment data to obtain a probability distribution curve of the facial feature state corresponding to each posture classification result;
and determining a monitoring standard corresponding to a monitoring result in the driving state monitoring model according to the statistical parameters of the probability distribution curve.
4. The driving state monitoring method according to claim 1, characterized in that the method further comprises:
and optimizing the driving state monitoring model according to the abnormal driving state data in the monitoring result.
5. The driving state monitoring method according to claim 4, wherein optimizing the driving state monitoring model according to abnormal driving state data in the monitoring result includes:
acquiring vehicle control information of a time period corresponding to the abnormal driving state data;
judging whether the vehicle control information meets a preset condition or not;
if yes, comparing the abnormal driving state data with the corresponding driving state reference feature matrix, and calculating an offset;
if the offset is smaller than a preset offset threshold, determining that the abnormal driving state data is problem data;
and training the driving state monitoring model by using the problem data.
6. The driving state monitoring method according to claim 1, characterized by further comprising:
acquiring current driving state data of a driver before the driving state monitoring model operates;
inputting the current driving state data of the driver into the driving state monitoring model for self-checking;
if the self-check succeeds, controlling the driving state monitoring model to operate and monitoring the driving state of the driver; if the self-check fails, adjusting the driving state reference feature matrix according to the frequency of abnormal driving states;
and training and updating the driving state monitoring model according to the adjusted driving state reference feature matrix.
7. The driving state monitoring method according to claim 1, wherein after extracting facial features from the first facial image data and/or the second facial image data, the method further comprises:
carrying out face recognition according to the facial features to obtain a face recognition result;
and associating the driver identity information corresponding to the face recognition result with the first face image data and/or the second face image data.
8. The driving state monitoring method according to claim 1, characterized in that the method further comprises:
acquiring second face image data of a driver during driving according to a preset image acquisition period, and updating the second face image data;
updating the driving state reference feature matrix based on the updated second face image data.
9. A driving state monitoring system, comprising:
an acquisition module configured to acquire first facial image data while a driver is driving;
an extraction module configured to extract relevant facial features from the first facial image data, wherein the facial features include features of at least one part of the head and five sense organs;
the monitoring module is configured to input the first facial image data into a preset driving state monitoring model to monitor the driving state of the driver to obtain a monitoring result;
the driving state monitoring model is obtained by training second face image data of the driver in a normal driving state, and the training comprises training a driving state reference feature matrix constructed according to facial features extracted from the second face image data.
10. A vehicle, characterized by comprising a control device, wherein the control device comprises a memory storing a computer program and a processor that implements the method according to any one of claims 1 to 8 when executing the computer program stored in the memory.
CN202211336389.8A 2022-10-28 2022-10-28 Driving state monitoring method and system and vehicle Pending CN115830579A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211336389.8A CN115830579A (en) 2022-10-28 2022-10-28 Driving state monitoring method and system and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211336389.8A CN115830579A (en) 2022-10-28 2022-10-28 Driving state monitoring method and system and vehicle

Publications (1)

Publication Number Publication Date
CN115830579A true CN115830579A (en) 2023-03-21

Family

ID=85525754

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211336389.8A Pending CN115830579A (en) 2022-10-28 2022-10-28 Driving state monitoring method and system and vehicle

Country Status (1)

Country Link
CN (1) CN115830579A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116749988A (en) * 2023-06-20 2023-09-15 中国第一汽车股份有限公司 Driver fatigue early warning method and device, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
US20210012128A1 (en) Driver attention monitoring method and apparatus and electronic device
JP7146959B2 (en) DRIVING STATE DETECTION METHOD AND DEVICE, DRIVER MONITORING SYSTEM AND VEHICLE
US20210009150A1 (en) Method for recognizing dangerous action of personnel in vehicle, electronic device and storage medium
CN109902562B (en) Driver abnormal posture monitoring method based on reinforcement learning
EP3033999B1 (en) Apparatus and method for determining the state of a driver
WO2021004138A1 (en) Screen display method, terminal device, and storage medium
CN112016457A (en) Driver distraction and dangerous driving behavior recognition method, device and storage medium
WO2021016873A1 (en) Cascaded neural network-based attention detection method, computer device, and computer-readable storage medium
WO2008127465A1 (en) Real-time driving danger level prediction
CN111434553B (en) Brake system, method and device, and fatigue driving model training method and device
CN110341617B (en) Eyeball tracking method, device, vehicle and storage medium
KR20190083155A (en) Apparatus and method for detecting state of vehicle driver
US11453401B2 (en) Closed eye determination device
CN111160239A (en) Concentration degree evaluation method and device
CN115830579A (en) Driving state monitoring method and system and vehicle
JP2019016178A (en) Drowsy driving alarm system
CN115937830A (en) Special vehicle-oriented driver fatigue detection method
Pandey et al. A survey on visual and non-visual features in Driver’s drowsiness detection
Flores-Monroy et al. Visual-based real time driver drowsiness detection system using CNN
CN109657550B (en) Fatigue degree detection method and device
JP7046748B2 (en) Driver status determination device and driver status determination method
KR102401607B1 (en) Method for analyzing driving concentration level of driver
US20220284718A1 (en) Driving analysis device and driving analysis method
CN116189153A (en) Method and device for identifying sight line of driver, vehicle and storage medium
WO2021262166A1 (en) Operator evaluation and vehicle control based on eyewear data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination