CN111062300A - Driving state detection method, device, equipment and computer readable storage medium - Google Patents

Driving state detection method, device, equipment and computer readable storage medium

Info

Publication number
CN111062300A
CN111062300A (application CN201911271338.XA)
Authority
CN
China
Prior art keywords
data
head
hand
driving state
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911271338.XA
Other languages
Chinese (zh)
Inventor
曾伟
蒋鑫龙
许欢莉
潘志文
高晨龙
张宇欣
张辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Semisky Technology Co ltd
Original Assignee
Shenzhen Semisky Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Semisky Technology Co ltd filed Critical Shenzhen Semisky Technology Co ltd
Priority to CN201911271338.XA
Publication of CN111062300A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/014Hand-worn input/output arrangements, e.g. data gloves

Abstract

The invention provides a driving state detection method, a device, equipment and a computer readable storage medium, wherein the method comprises the following steps: collecting head posture data and hand motion data of a user; performing feature extraction on the head posture data to obtain head feature data, and performing feature extraction on the hand motion data to obtain hand feature data; and analyzing the head feature data and the hand feature data to determine the driving state of the user. According to the invention, the driving state of the user is analyzed and determined based on the head posture data and the hand motion data, which improves the applicability of the detection and ensures the accuracy of the detection result; moreover, unlike existing image-based approaches, it does not intrude on the user's privacy.

Description

Driving state detection method, device, equipment and computer readable storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a driving state detection method, device, and apparatus, and a computer-readable storage medium.
Background
With economic development, motor vehicles have become one of the most important modes of travel, and traffic accidents have inevitably increased along with them. Among the many contributing factors, driver distraction is an increasingly prominent cause of traffic accidents. To improve driving safety, the driver's state can be detected in real time by technical means, so that distraction can be discovered in time and an early warning can be issued in advance.
Conventional driving state detection methods generally detect the driving state based on visual information or on the driving state of the vehicle. Methods based on visual information capture images with a camera, recognize the images and analyze the driving state; their detection accuracy is easily degraded by environmental factors (such as face occlusion and lighting conditions), and their use is easily restricted. Methods based on the driving state of the vehicle analyze the driver's state from data such as the vehicle's speed and displacement; they depend heavily on the vehicle type, the road conditions and the driver's habits, which easily compromises their accuracy in practical use.
Disclosure of Invention
The invention mainly aims to provide a driving state detection method, a driving state detection device, driving state detection equipment and a computer readable storage medium, and aims to solve the technical problems that existing driving state detection has low applicability and that its detection accuracy is easily degraded by the conditions of use.
In order to achieve the above object, an embodiment of the present invention provides a driving state detection method, including:
collecting head posture data and hand motion data of a user;
performing feature extraction on the head posture data to obtain head feature data, and performing feature extraction on the hand motion data to obtain hand feature data;
and analyzing the head characteristic data and the hand characteristic data to determine the driving state of the user.
Further, to achieve the above object, an embodiment of the present invention further provides a driving state detection device, including:
the data acquisition module is used for acquiring head posture data and hand motion data of a user;
the characteristic extraction module is used for carrying out characteristic extraction on the head posture data to obtain head characteristic data and carrying out characteristic extraction on the hand motion data to obtain hand characteristic data;
and the data analysis module is used for analyzing the head characteristic data and the hand characteristic data and determining the driving state of the user.
Further, to achieve the above object, an embodiment of the present invention further provides a driving state detection apparatus, which includes a processor, a memory, and a computer program stored on the memory and executable by the processor, wherein the computer program, when executed by the processor, implements the steps of the driving state detection method as described above.
Furthermore, to achieve the above object, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the driving state detection method as described above.
According to the embodiment of the invention, the head posture data and hand motion data of the user are collected while the user is driving, and the driving state of the user is analyzed and determined based on these data. Because the analysis and detection are based on head posture data and hand motion data, the data collection process is not easily affected by the actual conditions of use, which improves the applicability of the detection and ensures the accuracy of the detection result; compared with existing approaches such as image analysis, the user's privacy is not invaded. Meanwhile, because the analysis and detection are based on multiple kinds of data, multiple state factors are considered comprehensively, improving the accuracy of the detection result.
Drawings
Fig. 1 is a schematic diagram of a hardware structure of a driving state detection device according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating a driving state detection method according to a first embodiment of the present invention;
fig. 3 is a schematic view of a head-worn apparatus according to a first embodiment of the driving state detection method of the present invention;
fig. 4 is a schematic view of a hand-worn device according to a first embodiment of the driving state detection method of the present invention;
fig. 5 is a schematic view of a coordinate system according to a fifth embodiment of the driving state detecting method of the present invention;
fig. 6 is a functional block diagram of the driving state detecting apparatus according to the first embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The driving state detection method according to the embodiment of the present invention is mainly applied to a driving state detection device, which may be any device having a data processing function, such as a personal computer (PC), a notebook computer, a vehicle-mounted terminal, or a mobile phone.
Referring to fig. 1, fig. 1 is a schematic diagram of a hardware structure of a driving state detection device according to an embodiment of the present invention. In this embodiment of the present invention, the driving state detection device may include a processor 1001 (e.g., a central processing unit, CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used for realizing connection and communication among these components; the user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); the network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wi-Fi interface); the memory 1005 may be a random access memory (RAM) or a non-volatile memory, such as a magnetic disk memory, and may optionally be a storage device independent of the processor 1001. Those skilled in the art will appreciate that the hardware configuration depicted in fig. 1 does not limit the present invention; the device may include more or fewer components than those shown, combine some components, or arrange the components differently.
With continued reference to FIG. 1, the memory 1005 of FIG. 1, which is one type of computer-readable storage medium, may include an operating system, a network communication module, and a computer program. In fig. 1, the network communication module may be used to connect to a cloud server for data interaction with the cloud server; and the processor 1001 may call a computer program stored in the memory 1005 and execute the driving state detection method provided by the embodiment of the present invention.
Based on the hardware architecture, embodiments of the driving state detection method of the present invention are provided.
The embodiment of the invention provides a driving state detection method.
Referring to fig. 2, fig. 2 is a flowchart illustrating a driving state detection method according to a first embodiment of the present invention.
In this embodiment, the driving state detection method includes the steps of:
step S10, collecting head posture data and hand motion data of a user;
in order to solve the technical problems that the existing driving state detection is low in applicability and detection accuracy is easily reduced due to use conditions, the embodiment provides a driving state detection method, head posture data and hand motion data of a user are collected in the driving process of the user, the driving state of the user is analyzed based on the head posture data and the hand motion data, and the driving state of the user is determined; meanwhile, because the analysis and detection are carried out based on various data, various state factors are comprehensively considered, and the accuracy of the detection result is improved.
The driving state detection method in this embodiment is implemented by a driving state detection device, which may be a mobile phone, a vehicle-mounted terminal, a personal computer, or the like; in this embodiment a mobile phone is taken as an example. When the user is driving, the mobile phone first needs to collect the user's head posture data and hand motion data. The collection itself may be performed by wearable devices: for example, the head posture data may be acquired by a smart headband, smart glasses, a smart earphone or a similar head-worn device, and the hand motion data by a smart bracelet, a smart watch or a similar hand-worn device. The mobile phone can be connected to these wearable devices (either in a wired manner or wirelessly, e.g. over Wi-Fi); when a wearable device acquires head posture data or hand motion data, it sends the data to the mobile phone, which thereby obtains the user's head posture data and hand motion data. The head posture data may comprise head acceleration data characterizing movement of the head in a certain direction, head angular velocity data characterizing changes of head angle (head rotation), and head quaternion data characterizing the current posture of the head. It is worth mentioning that the head posture includes motion postures and static postures; when the user's head is in a static posture, its current posture can be accurately determined from the head quaternion data. The hand motion data, similarly, may include hand acceleration data characterizing movement of the hand in a certain direction, hand angular velocity data characterizing changes of hand angle (hand rotation), and hand quaternion data characterizing the current posture of the hand. Of course, in practice, the head posture data and hand motion data may include more data types.
Optionally, referring to fig. 3, fig. 3 is a schematic diagram of a head-worn device for acquiring head posture data according to the present embodiment. The head-worn device may include a power supply unit, a data processing unit, a sensor unit, a Wi-Fi transmission unit, and a device switch. The data processing unit is connected with the power supply unit, the sensor unit and the Wi-Fi transmission unit; it obtains sensor data from the sensor unit for analysis, and corrects the driving behavior through a behavior correction unit when it finds that the attention of the user under analysis is distracted. The power supply unit powers the whole head-worn device; the sensor unit comprises an accelerometer, a gyroscope, a magnetometer and the like, and is used for acquiring the user's head posture data; the Wi-Fi transmission unit transmits the collected head posture data to the mobile phone; the device switch turns the head-worn device on or off. The head acceleration data is measured by the accelerometer, and the head angular velocity data by the gyroscope. The head quaternion data may be obtained in various ways: for example, it may be calculated from the head acceleration data and head angular velocity data by an attitude-solution algorithm integrated in the head-worn device, or absolute quaternion data may be calculated from the head acceleration data, head angular velocity data, and head magnetometer data measured by the magnetometer; the manner of obtaining the head quaternion data may be set according to actual needs. Similarly, referring to fig. 4, fig. 4 is a schematic diagram of a hand-worn device for collecting hand motion data according to the present embodiment; the hand-worn device may likewise include a power supply unit, a data processing unit, a sensor unit, a Wi-Fi transmission unit, and a device switch. The function of each part of the hand-worn device is similar to the corresponding part of the head-worn device and is not described in detail here. Through the head-worn device and the hand-worn device, the mobile phone collects the user's head posture data and hand motion data for the subsequent operations.
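The patent leaves the attitude-solution algorithm unspecified. As a minimal illustration of one ingredient of such an algorithm, the following Python sketch propagates an orientation quaternion from gyroscope readings alone (the function names are illustrative, not from the patent); a real device would typically also fuse accelerometer and, optionally, magnetometer data, e.g. with a Madgwick or Mahony filter.

```python
import numpy as np

def quat_multiply(q, r):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def integrate_gyro(q, omega_rad_s, dt):
    """One attitude update step: q' = q + 0.5 * q * (0, wx, wy, wz) * dt,
    followed by renormalization to keep q a unit quaternion."""
    q_dot = 0.5 * quat_multiply(q, np.array([0.0, *omega_rad_s]))
    q = q + q_dot * dt
    return q / np.linalg.norm(q)
```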
Further, in this embodiment, the head posture data and hand motion data of the user may be collected at a certain frequency; for example, the acquisition frequency may be set to 80 Hz, i.e. 80 head posture samples and 80 hand motion samples per second.
Step S20, performing feature extraction on the head posture data to obtain head feature data, and performing feature extraction on the hand motion data to obtain hand feature data;
in this embodiment, when the mobile phone obtains the head posture data and the hand motion data, it can perform feature extraction on each of them to obtain the corresponding feature data; for convenience of explanation, the feature data corresponding to the head posture data is called head feature data, and the feature data corresponding to the hand motion data is called hand feature data. For the feature extraction, relevant extraction rules may be preset, covering the types of feature data and the manner of extraction; for example, representative values that reflect the distribution or amplitude of the head posture data and hand motion data may be selected as feature data, or values obtained by transforming the head posture data may be used as feature data.
Step S30, analyzing the head feature data and the hand feature data to determine the driving state of the user.
In the embodiment, when the head characteristic data and the hand characteristic data are obtained, the mobile phone can analyze the two types of characteristic data, so that the driving state of the user is determined according to the analysis result; the driving state of the user includes concentration and distraction, and whether the user is currently concentrating or not can be estimated by analyzing the head feature data and the hand feature data. The analysis process of the head characteristic data and the hand characteristic data can be to preset a relevant model or algorithm, then input the head characteristic data and the hand characteristic data into the model or algorithm, and determine the driving state of the user according to the output result of the model or algorithm. It should be noted that, during the analysis, the head feature data and the hand feature data may be analyzed respectively, then the analysis results of the two types of feature data are obtained, and then the analysis results of the two types of feature data are combined, so as to determine the driving state of the user; or the head characteristic data and the hand characteristic data are fused, for example, the two types of characteristic data are converted to obtain comprehensive characteristic data, then the comprehensive characteristic data are analyzed, and the driving state of the user is determined according to the analysis result.
Further, after the step S30, the method further includes:
and if the driving state of the user is distracted, prompting based on a preset prompting rule.
In this embodiment, if the current driving state of the user is detected to be concentration, step S10 may simply be executed again, i.e. the driving state of the user continues to be monitored; if the current driving state is detected to be distraction, a prompt can be issued based on a preset prompting rule. For example, the mobile phone may ring and/or vibrate to alert the user, or prompt information may be sent to the wearable device to make it ring and/or vibrate. In addition, different prompting modes can be adopted according to how long the user has been distracted: for example, when distraction is first detected the mobile phone can prompt by vibrating, and when the distracted state lasts longer than 3 seconds the vibration prompt can be considered to have failed to attract the user's attention, whereupon the mobile phone prompts the user by ringing. Alternatively, different prompt volumes can be used according to the duration of the distraction: the longer the user remains distracted, the louder the ring.
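A minimal sketch of the escalating prompt logic just described; the 3-second threshold comes from the example above, while the class name, method names and placeholder device calls are hypothetical.

```python
import time

DISTRACTION_ESCALATE_S = 3.0  # threshold from the example above

class DistractionAlerter:
    def __init__(self):
        self.distracted_since = None

    def update(self, is_distracted: bool):
        if not is_distracted:
            self.distracted_since = None  # reset once attention returns
            return
        now = time.monotonic()
        if self.distracted_since is None:
            self.distracted_since = now
            self.vibrate()  # first-line, low-intrusion prompt
        elif now - self.distracted_since > DISTRACTION_ESCALATE_S:
            self.ring()     # escalate when vibration failed to regain attention

    def vibrate(self):
        print("vibrate")    # placeholder for a device vibration API

    def ring(self):
        print("ring")       # placeholder for a device ringtone API
```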
In the embodiment, head posture data and hand motion data of a user are collected; performing feature extraction on the head posture data to obtain head feature data, and performing feature extraction on the hand motion data to obtain hand feature data; and analyzing the head characteristic data and the hand characteristic data to determine the driving state of the user. Through the above manner, the head posture data and the hand movement data of the user are collected in the driving process of the user, the driving state of the user is analyzed based on the head posture data and the hand movement data, and the driving state of the user is determined; meanwhile, because the analysis and detection are carried out based on various data, various state factors are comprehensively considered, and the accuracy of the detection result is improved.
A second embodiment of the driving state detection method of the present invention is proposed based on the above-described embodiment shown in fig. 2.
In this embodiment, the step S30 includes:
step S31, analyzing the head characteristic data based on a first analysis model to obtain a corresponding first state analysis result, and analyzing the hand characteristic data based on a second analysis model to obtain a corresponding second state analysis result;
in the embodiment, when the head characteristic data and the hand characteristic data are obtained, the mobile phone can analyze the two types of characteristic data, so that the driving state of the user is determined according to the analysis result; when the analysis is carried out, the head characteristic data and the hand characteristic data are respectively analyzed, then the analysis results of the two types of characteristic data are obtained, and then the analysis results of the two types of characteristic data are combined, so that the driving state of the user is determined.
Specifically, the mobile phone may first obtain the first analysis model and the second analysis model from the cloud server. The first analysis model is used for analyzing the head feature data, and the second analysis model for analyzing the hand feature data. Both models are pre-trained by the cloud server, which collects sample data from sample users. The sample data includes head sample data and hand sample data whose driving state is distraction; to collect these, a number of distraction scenarios may be preset, for example typing a reply to a WeChat message, making a phone call, eating and drinking, talking with others, touching up makeup, adjusting the in-vehicle sound system or air conditioner, or looking out of the window while driving, and the head sample data and hand sample data of the sample users are then collected under these conditions. In addition, head sample data and hand sample data whose driving state is concentration are also included. After the head sample data and hand sample data are obtained, the two types of sample data undergo denoising, timestamp alignment and sliding-window processing. Denoising removes the noise in the two types of sample data; timestamp alignment mainly makes the start times of the two types of sample data consistent (i.e. obtains head sample data and hand sample data corresponding to the same time period); and sliding-window processing intercepts one or more segments of data through a sliding window for model training (the size and moving step of the sliding window may be fixed or dynamically changed). After the above processing, the two types of sample data are labeled accordingly, i.e. each sample is marked as corresponding to concentration or distraction. Of course, the class labels may differ between data types: for example, head sample data with concentration may be labeled 1, head sample data with distraction -1, hand sample data with concentration 2, and hand sample data with distraction -2. Feature extraction is then performed on the two types of sample data respectively (the process is similar to the extraction in step S20 and is not repeated here), yielding two initial training data sets corresponding to the head sample features and the hand sample features, which are used separately for model training. Current machine learning offers many classification algorithms, such as decision trees, support vector machines (SVMs), naive Bayes, and random forests.
The first analysis model and the second analysis model used in this embodiment may be binary-classification support vector machine (SVM) models; it should be understood that other algorithm models may be adopted in specific applications according to actual requirements. A support vector machine is a generalized linear classifier that performs binary classification of data in a supervised learning manner; training mainly consists in finding a hyperplane that separates the different samples while maximizing the distance from the points in the sample set to the classification hyperplane, thereby realizing the classification of the data. During training, taking the first analysis model as an example, the training data set of head sample data can be randomly divided into 10 equal parts and the model trained with ten-fold cross validation, continuously adjusting the model parameters to obtain the model with the highest average classification accuracy on the labeled samples; this model is taken as the final classification model SVM_ONE, i.e. the first analysis model. The training process of the second analysis model is similar and is not described here.
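A hedged sketch of the described training procedure using scikit-learn, assuming a feature matrix X (one row per window) and binary labels y; the parameter grid is illustrative, the cv=10 setting mirrors the ten equal parts mentioned above, and probability=True is chosen so the model can emit the probability values used by the decision-layer fusion described later.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_svm_one(X, y):
    """Train the head-state classifier (SVM_ONE in the text) with
    ten-fold cross validation, tuning C and the RBF kernel width."""
    pipe = make_pipeline(StandardScaler(), SVC(probability=True))
    grid = GridSearchCV(
        pipe,
        param_grid={"svc__C": [0.1, 1, 10, 100],
                    "svc__gamma": ["scale", 0.01, 0.1]},
        cv=10,               # ten-fold cross validation, as described
        scoring="accuracy",  # pick the highest average classification accuracy
    )
    grid.fit(X, y)
    return grid.best_estimator_
```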
After training, the cloud server holds the first analysis model and the second analysis model, and the mobile phone can obtain them from the cloud server. The head feature data is then analyzed by the first analysis model to obtain a corresponding first state analysis result, and the hand feature data by the second analysis model to obtain a corresponding second state analysis result. Analysis by the first and second analysis models is the process of classifying the feature data with the hyperplane found during training: the state analysis result corresponding to the feature data is determined from the positional relation between the feature data and the hyperplane.
And step S32, performing fusion processing on the first state analysis result and the second state analysis result, and determining the driving state of the user according to the fusion result.
In this embodiment, when the mobile phone obtains the first state analysis result and the second state analysis result, it performs decision-layer fusion on them and determines from the fusion result whether the driving state of the user is concentration or distraction. The first and second state analysis results may take the form of distraction probability values estimated by the models: the greater the distraction probability value, the more likely the user is distracted. The fusion process may weight the two analysis results and compare the weighted result (the fusion result) with a preset distraction threshold: if the weighted result is greater than the preset distraction threshold, the driving state of the user is considered to be distraction; if it is less than or equal to the preset distraction threshold, the driving state of the user is considered to be concentration. For example, suppose the first and second state analysis results are distraction probability values ranging from 0 to 1, and for a certain user the first state analysis result is 0.7 and the second is 0.8; this means that the distraction probability obtained from analyzing the user's head posture data is 0.7 and that obtained from the hand motion data is 0.8. The mobile phone then fuses the two results with weighting coefficients of 0.5 each, giving 0.7 × 0.5 + 0.8 × 0.5 = 0.75, and compares this with a preset distraction threshold of 0.5; since the result is greater than the preset distraction threshold, the driving state of the user is determined to be distraction. Of course, in practice the first and second state analysis results may instead take the form of concentration probability values (the higher the concentration probability, the less likely the user is distracted) or other forms; and besides the weighting described above, the fusion processing may also average the two state analysis results or take their maximum or minimum.
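A minimal sketch of the weighted decision-layer fusion, assuming both models output distraction probabilities; the 0.5 weights and the 0.5 threshold reproduce the worked example above, and the function name is illustrative.

```python
def fuse_decisions(p_head: float, p_hand: float,
                   w_head: float = 0.5, w_hand: float = 0.5,
                   threshold: float = 0.5) -> bool:
    """Weighted decision-layer fusion of the two distraction
    probabilities; returns True when the driver counts as distracted."""
    fused = w_head * p_head + w_hand * p_hand
    return fused > threshold

# Worked example from the text: 0.7*0.5 + 0.8*0.5 = 0.75 > 0.5 -> distracted
print(fuse_decisions(0.7, 0.8))  # True
```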
In this way, the head posture data and hand motion data are analyzed by different analysis models, keeping the analysis of the different data types relatively independent; once the analysis results for the different data types are obtained, they are fused to give the final judgment, and this fusion analysis of multiple kinds of data improves the accuracy of driving state detection.
Further, after the mobile phone determines the driving state of the user, the collected head posture data and hand motion data and the determined driving state can be uploaded to the cloud server, so that the cloud server can update the first analysis model and the second analysis model with them. In other words, the cloud server may collect the actual usage data (head posture data, hand motion data, and the determined driving states) of many driving state detection devices and use it for updating. In the updating itself, two brand-new models may be retrained from scratch on the new data together with the old sample data, or the old models may be trained incrementally on the new data. After the cloud server updates the models, the driving state detection device can download the updated models from the cloud server and then detect the driving state based on them. Uploading the actual usage data to the cloud server to update the models realizes continuous iterative optimization, yields models better suited to the user's actual conditions of use, and can improve the accuracy of user state detection.
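The first of the two update strategies (retraining a fresh model on old plus newly uploaded samples) could look like the following sketch; the function names are illustrative. Note that standard SVMs have no native incremental-training mode, so the incremental path would require a learner that supports it (e.g. scikit-learn's SGDClassifier with partial_fit).

```python
import numpy as np

def update_model(old_X, old_y, new_X, new_y, train_fn):
    """Server-side refresh: retrain a brand-new model on the old sample
    data plus newly uploaded field data. `train_fn` could be, e.g., the
    train_svm_one sketch above."""
    X = np.vstack([old_X, new_X])
    y = np.concatenate([old_y, new_y])
    return train_fn(X, y)
```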
A third embodiment of the driving state detection method of the present invention is proposed based on the above-described embodiment shown in fig. 2.
In this embodiment, after the step S10, the method further includes:
step S40, respectively carrying out denoising processing on the head posture data and the hand motion data;
in this embodiment, when the mobile phone obtains the head posture data and the hand motion data, in order to improve the efficiency of the subsequent analysis process and the accuracy of the analysis result, the head posture data and the hand motion data may be preprocessed first, so as to obtain more regular data. Specifically, because the collected head posture data and hand motion data may have a lot of noise data due to the high sensitivity of the sensor when the data are collected, the head posture data and the hand motion data can be denoised first; the median filter which can be adopted in the embodiment is a linear filter, is a nonlinear signal processing technology which is based on the ordering statistical theory and can effectively inhibit noise, and has the basic principle that the value of one point in a digital sequence is replaced by the median of each point value in a neighborhood of the point, and the surrounding pixel values are close to the real values, so that isolated noise points are eliminated; of course, in practice, other denoising methods may be used.
Step S50, performing windowed segmentation on the de-noised head posture data to obtain head window data, and performing windowed segmentation on the de-noised hand motion data to obtain hand window data;
the data volume of the head posture data and the hand motion data obtained by the mobile phone may be relatively large, and after the denoising processing is performed, windowing segmentation (that is, taking a certain part of data) can be performed on the denoised head posture data and the denoised hand motion data respectively to avoid the increase of the calculation amount and the reduction of the operation rate caused by the increase of the data, so as to improve the efficiency of subsequent feature extraction. In this embodiment, the size and the moving step of the sliding window are not specifically limited, the size and the moving step of the sliding window may be fixed or may be dynamically changed, and the administrator may set the size and the moving step as needed. For example, taking the head pose data as an example, if the data acquisition frequency is f equal to 80Hz, that is, 80 head pose data are acquired every second, the manager may set the size of the sliding window for feature extraction to be fixed to L equal to 2f, the moving step length to be fixed to f, that is, the window moves forward by 80 data every time, and there are 160 head pose data acquired in the current window for 2 seconds. After denoising, head window data and hand window data can be obtained respectively.
The step S20 includes:
and step S21, performing feature extraction on the head window data to obtain head feature data, and performing feature extraction on the hand window data to obtain hand feature data.
In this embodiment, after the head window data and the hand window data are obtained, feature extraction may be performed on the head window data to obtain head feature data; and extracting the characteristics of the hand window data to obtain hand characteristic data. The specific feature extraction process is not described herein again.
In this way, the head posture data and hand motion data are preprocessed by denoising and windowed segmentation before feature extraction, yielding more regular data; this helps improve the efficiency of the subsequent feature extraction and state detection, as well as the accuracy of the detection result.
A fourth embodiment of the driving state detection method of the present invention is proposed based on the above-described embodiment shown in fig. 2.
In this embodiment, after the step S10, the method further includes:
a step S60 of performing time stamp alignment on the head posture data and the hand motion data;
after the collected head posture data and hand motion data are from different body parts of the user, there may be a difference in collection time during collection, resulting in a difference in data time, for example, the collected head posture data corresponds to a time of 8 hours 1 minute 1 second to 8 hours 1 minute 3 seconds, and the collected hand motion data corresponds to a time of 8 hours 1 minute 2 seconds to 8 hours 1 minute 4 seconds, and if the data corresponding to different times are directly used for detection, the accuracy of the detection result may be reduced. In this regard, in this embodiment, after the head posture data and the hand motion data are acquired, the time stamp alignment is performed on the head posture data and the hand motion data, and the time stamp alignment may be considered to be to make the start times of the head posture data and the hand motion data consistent to obtain the head posture data and the hand motion data corresponding to the same time period. For example, the above-mentioned head posture data of 8 hours 1 minute 1 second to 8 hours 1 minute 3 second and 8 hours 1 minute 2 second to 8 hours 1 minute 4 second correspond to 8 hours 1 minute 2 second to 8 hours 1 minute 3 second (i.e., the time remaining the same) after alignment.
The step S20 includes:
and step S22, performing feature extraction on the aligned head posture data to obtain head feature data, and performing feature extraction on the aligned hand motion data to obtain hand feature data.
In this embodiment, after obtaining the aligned head posture data and the aligned hand motion data, feature extraction may be performed on the aligned head posture data to obtain head feature data; and performing feature extraction on the aligned hand motion data to obtain hand feature data. The specific feature extraction process is not described herein again.
In this embodiment, before feature extraction is performed, the head posture data and hand motion data are timestamp-aligned, which helps improve the accuracy of the subsequent analysis and detection results.
A fifth embodiment of the driving state detection method of the present invention is proposed based on the above-described embodiment shown in fig. 2.
In this embodiment, the head pose data includes head acceleration data, head angular velocity data, and head quaternion data,
the step S20 includes:
step S23, performing feature extraction on the head acceleration data to obtain time domain feature data and frequency domain feature data of the head acceleration data; performing feature extraction on the head angular velocity data to obtain time domain feature data and frequency domain feature data of the head angular velocity data; performing feature extraction on the head quaternion data to obtain time domain feature data of the head quaternion data;
in this embodiment, the head posture data collected by the mobile phone through the head-worn device includes head acceleration data, head angular velocity data and head quaternion data, and the hand motion data collected through the hand-worn device includes hand acceleration data, hand angular velocity data and hand quaternion data. For the feature extraction process of step S20, this embodiment takes the head posture data as an example.
In this embodiment, the feature data extracted from the head posture data includes time domain and frequency domain feature data of the head acceleration data, time domain and frequency domain feature data of the head angular velocity data, and time domain feature data of the head quaternion data. Time domain features are features related to time in the evolution of a data/signal sequence, while frequency domain features serve to discover periodic characteristics in the data/signal. Thus the time domain features of the head acceleration data are the time-related features of all head acceleration data in the head posture data (or in the preprocessed head window data), and its frequency domain features are the periodic features of all head acceleration data therein; the time domain features of the head angular velocity data are the time-related features of all head angular velocity data in the head posture data (or in the preprocessed head window data), and its frequency domain features are the periodic features of all head angular velocity data therein; and the time domain features of the head quaternion data are the time-related features of all quaternions in the head posture data (or in the preprocessed head window data).
The feature data of the hand motion data, similar to the head posture data, also includes time domain feature data and frequency domain feature data of hand acceleration data, time domain feature data and frequency domain feature data of hand angular velocity data, and time domain feature data of hand quaternion data, which are not described in detail again.
Further, the above feature extraction process of the head pose data specifically includes:
acquiring one or more of the maximum value, the minimum value, the standard deviation, the average value and the number of samples above the mean of the head acceleration data, as the time domain feature data of the head acceleration data; performing an FFT (fast Fourier transform) on the head acceleration data and acquiring one or more of its direct current component, amplitude mean, amplitude standard deviation, amplitude skewness and amplitude kurtosis, as the frequency domain feature data of the head acceleration data; acquiring one or more of the maximum value, the minimum value, the standard deviation, the average value and the number of samples above the mean of the head angular velocity data, as the time domain feature data of the head angular velocity data; performing an FFT on the head angular velocity data and acquiring one or more of its direct current component, amplitude mean, amplitude standard deviation, amplitude skewness and amplitude kurtosis, as the frequency domain feature data of the head angular velocity data; and acquiring one or more of the maximum value, the minimum value, the standard deviation, the average value and the number of samples above the mean of the head quaternion data, as the time domain feature data of the head quaternion data.
Similarly, the feature extraction process for the hand motion data may include:
acquiring one or more of the maximum value, the minimum value, the standard deviation, the average value and the number of samples above the mean of the hand acceleration data, as the time domain feature data of the hand acceleration data; performing an FFT (fast Fourier transform) on the hand acceleration data and acquiring one or more of its direct current component, amplitude mean, amplitude standard deviation, amplitude skewness and amplitude kurtosis, as the frequency domain feature data of the hand acceleration data; acquiring one or more of the maximum value, the minimum value, the standard deviation, the average value and the number of samples above the mean of the hand angular velocity data, as the time domain feature data of the hand angular velocity data; performing an FFT on the hand angular velocity data and acquiring one or more of its direct current component, amplitude mean, amplitude standard deviation, amplitude skewness and amplitude kurtosis, as the frequency domain feature data of the hand angular velocity data; and acquiring one or more of the maximum value, the minimum value, the standard deviation, the average value and the number of samples above the mean of the hand quaternion data, as the time domain feature data of the hand quaternion data.
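The per-signal features listed above can be computed roughly as in the following NumPy/SciPy sketch; interpreting the direct current component f_d as the DC bin of the FFT amplitude spectrum is an assumption, and the helper names are illustrative.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def time_features(x: np.ndarray) -> list:
    """t_max, t_min, t_std, t_mean, t_above (samples above the mean)."""
    return [x.max(), x.min(), x.std(), x.mean(), int((x > x.mean()).sum())]

def freq_features(x: np.ndarray) -> list:
    """f_d, f_mean, f_std, f_skew, f_kurt of the FFT amplitude spectrum."""
    amp = np.abs(np.fft.rfft(x))
    dc, rest = amp[0], amp[1:]
    return [dc, rest.mean(), rest.std(), skew(rest), kurtosis(rest)]
```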
It should be noted that, in the present embodiment, a coordinate system may be established in advance for the feature extraction. Specifically, as shown in fig. 5 (a schematic diagram of the coordinate system), the vertical direction is taken as the Z axis, the direction straight ahead of the vehicle as the Y axis, and the direction perpendicular to both (which may be thought of as perpendicular to the two side windows) as the X axis; the origin of the coordinate system may be the center of the driver's face or another position. In general, the driver's head posture consists mostly of movements in the horizontal and pitch directions (i.e. rotations around the Z axis and the X axis), while the hand movements span the horizontal, front-back and pitch directions (i.e. rotations around the X, Y and Z axes). Therefore the features of the head posture data may be taken with respect to the X and Z axes, and the features of the hand motion data with respect to the X, Y and Z axes.
For example, for the head, the i-th segment of head posture data (or the head window data of the i-th window) contains the head accelerations along the X and Z axes, the head angular velocities along the X and Z axes, and the four components of the head quaternion. From these head posture data, 40 time domain features (the maximum value t_max, minimum value t_min, standard deviation t_std, average value t_mean, and number of samples above the mean t_above, for each of the 8 signals) and 20 frequency domain features (the direct current component f_d, amplitude mean f_mean, amplitude standard deviation f_std, amplitude skewness f_skew, and amplitude kurtosis f_kurt, for each of the 4 acceleration and angular velocity signals) can be obtained, as shown in Tables 1 and 2 below.

TABLE 1 Time domain features of the head
Signals: head acceleration X, Z; head angular velocity X, Z; head quaternion q1-q4 (8 signals)
Features per signal: t_max, t_min, t_std, t_mean, t_above (5 × 8 = 40 features)

TABLE 2 Frequency domain features of the head
Signals: head acceleration X, Z; head angular velocity X, Z (4 signals)
Features per signal: f_d, f_mean, f_std, f_skew, f_kurt (5 × 4 = 20 features)

Likewise, for the hand, the i-th segment of hand motion data (or the hand window data of the i-th window) contains the hand accelerations along the X, Y and Z axes, the hand angular velocities along the X, Y and Z axes, and the four components of the hand quaternion. From the hand motion data, 50 time domain features and 30 frequency domain features can be obtained, as shown in Tables 3 and 4 below.

TABLE 3 Time domain features of the hand
Signals: hand acceleration X, Y, Z; hand angular velocity X, Y, Z; hand quaternion q1-q4 (10 signals)
Features per signal: t_max, t_min, t_std, t_mean, t_above (5 × 10 = 50 features)

TABLE 4 Frequency domain features of the hand
Signals: hand acceleration X, Y, Z; hand angular velocity X, Y, Z (6 signals)
Features per signal: f_d, f_mean, f_std, f_skew, f_kurt (5 × 6 = 30 features)
Step S24, acquiring head feature data based on the time domain feature data and the frequency domain feature data of the head acceleration data, the time domain feature data and the frequency domain feature data of the head angular velocity data, and the time domain feature data of the head quaternion data.
In this embodiment, when the time domain and frequency domain feature data of the head acceleration data, the time domain and frequency domain feature data of the head angular velocity data, and the time domain feature data of the head quaternion data are obtained, they are further combined according to a certain rule, for convenience of subsequent model analysis, into a feature vector, feature matrix or the like; this feature vector or feature matrix may be regarded as the head feature data used for the subsequent analysis. For example, for the above i-th segment of head posture data (or the head window data of the i-th window), the obtained head feature data may be expressed as the concatenation of: the features of the head acceleration in the X-axis direction (time domain and frequency domain), the features of the head acceleration in the Z-axis direction (time domain and frequency domain), the features of the head angular velocity in the X-axis direction (time domain and frequency domain), the features of the head angular velocity in the Z-axis direction (time domain and frequency domain), and the features of the head quaternion components (time domain only, j = 1, 2, 3, 4).

For another example, the hand feature data obtained for the i-th segment of hand motion data (or the hand window data of the i-th window) may be expressed as the concatenation of: the features of the hand acceleration in the X-, Y- and Z-axis directions (time domain and frequency domain), the features of the hand angular velocity in the X-, Y- and Z-axis directions (time domain and frequency domain), and the features of the hand quaternion components (time domain only, j = 1, 2, 3, 4).
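As an illustration, reusing the time_features and freq_features helpers from the earlier sketch, the 60-dimensional head feature vector described above could be assembled as follows; the function name and argument layout are assumptions, not from the patent.

```python
import numpy as np

def head_feature_vector(acc_x, acc_z, gyro_x, gyro_z, quat) -> np.ndarray:
    """Concatenate the head features of one window into a single vector:
    4 signals x (5 time + 5 frequency) + 4 quaternion components x 5 time
    = 60 features. `quat` has shape (L, 4); the others have shape (L,)."""
    parts = []
    for signal in (acc_x, acc_z, gyro_x, gyro_z):
        parts += time_features(signal) + freq_features(signal)
    for j in range(4):  # quaternion components: time domain only
        parts += time_features(quat[:, j])
    return np.asarray(parts)
```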
In this way, multiple time domain and frequency domain features that can characterize the user's state are extracted, improving the accuracy of the subsequent driving state detection.
It should be noted that the above embodiments can be freely combined in practical application. For example, when the head posture data and hand motion data are acquired, they are denoised, segmented with windows and timestamp-aligned, and feature extraction is then performed, the extracted features comprising time domain and frequency domain features; the features are then fed into the models obtained from the cloud server for separate analysis, giving two analysis results; and the two results are fused to determine the driving state of the user. Other combinations are of course possible, and the above examples are not intended to limit the present application.
In addition, the embodiment of the invention also provides a driving state detection device.
Referring to fig. 6, fig. 6 is a functional block diagram of the driving state detecting device according to the first embodiment of the present invention.
In this embodiment, the driving state detection device includes:
the data acquisition module 10 is used for acquiring head posture data and hand motion data of a user;
the feature extraction module 20 is configured to perform feature extraction on the head posture data to obtain head feature data, and perform feature extraction on the hand motion data to obtain hand feature data;
a data analysis module 30, configured to analyze the head feature data and the hand feature data, and determine a driving state of the user.
Based on the driving state detection device shown in fig. 1, each virtual function module of the driving state detection apparatus above is stored in the memory 1005 as part of a computer program; when the computer program is executed by the processor 1001, the modules realize the driving state detection function.
Further, the data analysis module 30 includes:
the data analysis unit is used for analyzing the head characteristic data based on a first analysis model to obtain a corresponding first state analysis result, and analyzing the hand characteristic data based on a second analysis model to obtain a corresponding second state analysis result;
and the state determining unit is used for carrying out fusion processing on the first state analysis result and the second state analysis result and determining the driving state of the user according to the fusion result.
Further, the driving state detection device further includes:
the de-noising processing module is used for respectively carrying out de-noising processing on the head posture data and the hand motion data;
the window segmentation module is used for performing windowed segmentation on the de-noised head posture data to obtain head window data and performing windowed segmentation on the de-noised hand motion data to obtain hand window data;
the feature extraction module 20 is further configured to perform feature extraction on the head window data to obtain head feature data, and perform feature extraction on the hand window data to obtain hand feature data.
Further, the driving state detection device further includes:
a timestamp alignment module to timestamp align the head pose data and the hand motion data;
the feature extraction module 20 is configured to perform feature extraction on the aligned head posture data to obtain head feature data, and perform feature extraction on the aligned hand motion data to obtain hand feature data.
Further, the head pose data includes head acceleration data, head angular velocity data, and head quaternion data,
the feature extraction module 20 is specifically configured to:
performing feature extraction on the head acceleration data to obtain time domain feature data and frequency domain feature data of the head acceleration data;
performing feature extraction on the head angular velocity data to obtain time domain feature data and frequency domain feature data of the head angular velocity data;
performing feature extraction on the head quaternion data to obtain time domain feature data of the head quaternion data;
and acquiring head characteristic data based on the time domain characteristic data and the frequency domain characteristic data of the head acceleration data, the time domain characteristic data and the frequency domain characteristic data of the head angular velocity data and the time domain characteristic data of the head quaternion data.
Further, the time domain feature data of the head acceleration data includes one or more of the maximum value, minimum value, mean value, standard deviation, and number of mean-line crossings of the head acceleration data;
the frequency domain feature data of the head acceleration data includes one or more of the direct current component, amplitude mean, amplitude standard deviation, amplitude slope, and amplitude kurtosis of the head acceleration data;
the time domain feature data of the head angular velocity data includes one or more of the maximum value, minimum value, mean value, standard deviation, and number of mean-line crossings of the head angular velocity data;
the frequency domain feature data of the head angular velocity data includes one or more of the direct current component, amplitude mean, amplitude standard deviation, amplitude slope, and amplitude kurtosis of the head angular velocity data;
the time domain feature data of the head quaternion data includes one or more of the maximum value, minimum value, mean value, standard deviation, and number of mean-line crossings of the head quaternion data.
The hand feature data is constructed from the hand motion data in the same way; a sketch of these feature computations follows.
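The listed statistics map directly onto NumPy operations. In the sketch below, the number of mean-line crossings is counted as sign changes of the mean-removed signal, and the amplitude slope is read as the least-squares slope of the magnitude spectrum; that reading of the translated term, like the helper names, is an assumption.

```python
import numpy as np
from scipy.stats import kurtosis

def time_domain_features(sig):
    # Max, min, mean, standard deviation, and number of mean-line crossings.
    centered = sig - sig.mean()
    crossings = int(np.sum(np.diff(np.signbit(centered).astype(np.int8)) != 0))
    return np.array([sig.max(), sig.min(), sig.mean(), sig.std(), crossings])

def frequency_domain_features(sig):
    # DC component, amplitude mean/std, amplitude slope (least-squares fit
    # over the magnitude spectrum -- an assumed interpretation), kurtosis.
    mag = np.abs(np.fft.rfft(sig))
    dc, ac = mag[0], mag[1:]
    slope = np.polyfit(np.arange(len(ac)), ac, 1)[0]
    return np.array([dc, ac.mean(), ac.std(), slope, kurtosis(ac)])

def head_feature_vector(acc, gyro, quat):
    # Time + frequency domain features for acceleration and angular
    # velocity channels; time domain only for the quaternion channels.
    feats = []
    for ch in acc.T:
        feats += [time_domain_features(ch), frequency_domain_features(ch)]
    for ch in gyro.T:
        feats += [time_domain_features(ch), frequency_domain_features(ch)]
    for ch in quat.T:
        feats += [time_domain_features(ch)]
    return np.hstack(feats)

# Usage on one 100-sample window: 3-axis acc, 3-axis gyro, 4-d quaternion.
rng = np.random.default_rng(4)
vec = head_feature_vector(rng.normal(size=(100, 3)),
                          rng.normal(size=(100, 3)),
                          rng.normal(size=(100, 4)))
print(vec.shape)   # -> (80,) = 6 channels * (5+5) + 4 channels * 5
```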
Further, the driving state includes distraction, and the driving state detection device further includes:
and the prompting module is used for prompting based on a preset prompting rule if the driving state of the user is distracted.
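The application leaves the preset prompting rule open; one plausible form, sketched below, escalates the prompt as the distracted state persists, with thresholds that are purely illustrative.

```python
import time

# An illustrative preset prompting rule: escalate as distraction persists.
PROMPT_RULES = [
    (0.0, "chime"),           # immediately: soft chime
    (3.0, "voice_warning"),   # after 3 s:   spoken warning
    (8.0, "strong_alert"),    # after 8 s:   louder, repeated alert
]

def prompt_for(distracted_since, now=None):
    # Return the prompt level for a distraction that began at
    # `distracted_since` (seconds, on the same clock as `now`).
    now = time.monotonic() if now is None else now
    elapsed = now - distracted_since
    level = None
    for threshold, name in PROMPT_RULES:
        if elapsed >= threshold:
            level = name
    return level

print(prompt_for(distracted_since=0.0, now=4.2))   # -> voice_warning
```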
The function implementation of each module in the driving state detection device corresponds to the steps of the driving state detection method embodiments described above; their functions and implementation processes are not described in detail herein.
In addition, the embodiment of the invention also provides a computer readable storage medium.
The computer-readable storage medium of the present invention stores a computer program which, when executed by a processor, implements the steps of the driving state detection method described above.
For the method implemented when the computer program is executed, reference may be made to the embodiments of the driving state detection method of the present invention; details are not repeated herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a(n)..." does not exclude the presence of other identical elements in the process, method, article, or system that comprises that element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware; in many cases, however, the former is the preferred implementation. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium as described above (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A driving state detection method characterized by comprising:
collecting head posture data and hand motion data of a user;
performing feature extraction on the head posture data to obtain head feature data, and performing feature extraction on the hand motion data to obtain hand feature data;
and analyzing the head characteristic data and the hand characteristic data to determine the driving state of the user.
2. The driving state detection method of claim 1, wherein the step of analyzing the head feature data and the hand feature data to determine the driving state of the user comprises:
analyzing the head characteristic data based on a first analysis model to obtain a corresponding first state analysis result, and analyzing the hand characteristic data based on a second analysis model to obtain a corresponding second state analysis result;
and fusing the first state analysis result and the second state analysis result, and determining the driving state of the user according to the fusion result.
3. The driving state detection method of claim 1, wherein after the step of collecting head posture data and hand motion data of the user, the method further comprises:
respectively carrying out denoising processing on the head posture data and the hand motion data;
performing windowed segmentation on the de-noised head posture data to obtain head window data, and performing windowed segmentation on the de-noised hand motion data to obtain hand window data;
the steps of performing feature extraction on the head posture data to obtain head feature data and performing feature extraction on the hand motion data to obtain hand feature data comprise:
and performing feature extraction on the head window data to obtain head feature data, and performing feature extraction on the hand window data to obtain hand feature data.
4. The driving state detection method of claim 1, wherein after the step of collecting head posture data and hand motion data of the user, the method further comprises:
performing timestamp alignment on the head posture data and the hand motion data;
the steps of performing feature extraction on the head posture data to obtain head feature data and performing feature extraction on the hand motion data to obtain hand feature data comprise:
and performing feature extraction on the aligned head posture data to obtain head feature data, and performing feature extraction on the aligned hand motion data to obtain hand feature data.
5. The driving state detection method according to claim 1, wherein the head posture data includes head acceleration data, head angular velocity data, and head quaternion data,
the step of performing feature extraction on the head posture data to obtain head feature data comprises:
performing feature extraction on the head acceleration data to obtain time domain feature data and frequency domain feature data of the head acceleration data;
performing feature extraction on the head angular velocity data to obtain time domain feature data and frequency domain feature data of the head angular velocity data;
performing feature extraction on the head quaternion data to obtain time domain feature data of the head quaternion data;
and acquiring head characteristic data based on the time domain characteristic data and the frequency domain characteristic data of the head acceleration data, the time domain characteristic data and the frequency domain characteristic data of the head angular velocity data and the time domain characteristic data of the head quaternion data.
6. The driving state detection method according to claim 5, wherein the time domain feature data of the head acceleration data includes one or more of the maximum value, minimum value, mean value, standard deviation, and number of mean-line crossings of the head acceleration data;
the frequency domain feature data of the head acceleration data includes one or more of the direct current component, amplitude mean, amplitude standard deviation, amplitude slope, and amplitude kurtosis of the head acceleration data;
the time domain feature data of the head angular velocity data includes one or more of the maximum value, minimum value, mean value, standard deviation, and number of mean-line crossings of the head angular velocity data;
the frequency domain feature data of the head angular velocity data includes one or more of the direct current component, amplitude mean, amplitude standard deviation, amplitude slope, and amplitude kurtosis of the head angular velocity data;
the time domain feature data of the head quaternion data includes one or more of the maximum value, minimum value, mean value, standard deviation, and number of mean-line crossings of the head quaternion data.
7. The driving state detection method according to any one of claims 1 to 6, characterized in that the driving state includes distraction,
after the step of analyzing the head feature data and the hand feature data and determining the driving state of the user, the method further includes:
and if the driving state of the user is distracted, prompting based on a preset prompting rule.
8. A driving state detection device characterized by comprising:
the data acquisition module is used for acquiring head posture data and hand motion data of a user;
the characteristic extraction module is used for carrying out characteristic extraction on the head posture data to obtain head characteristic data and carrying out characteristic extraction on the hand motion data to obtain hand characteristic data;
and the data analysis module is used for analyzing the head characteristic data and the hand characteristic data and determining the driving state of the user.
9. A driving state detection apparatus, characterized in that the driving state detection apparatus comprises a processor, a memory, and a computer program stored on the memory and executable by the processor, wherein the computer program, when executed by the processor, implements the steps of the driving state detection method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, wherein the computer program, when being executed by a processor, carries out the steps of the driving state detection method according to any one of claims 1 to 7.
CN201911271338.XA 2019-12-11 2019-12-11 Driving state detection method, device, equipment and computer readable storage medium Pending CN111062300A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911271338.XA CN111062300A (en) 2019-12-11 2019-12-11 Driving state detection method, device, equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN111062300A true CN111062300A (en) 2020-04-24

Family

ID=70300659

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911271338.XA Pending CN111062300A (en) 2019-12-11 2019-12-11 Driving state detection method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111062300A (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103366506A (en) * 2013-06-27 2013-10-23 北京理工大学 Device and method for automatically monitoring telephone call behavior of driver when driving
CN105677039A (en) * 2016-02-16 2016-06-15 北京博研智通科技有限公司 Method, device and wearable device for gesture-based driving status detection
CN107742399A (en) * 2017-11-16 2018-02-27 百度在线网络技术(北京)有限公司 For sending the method and device of alarm signal
CN109063686A (en) * 2018-08-29 2018-12-21 安徽华元智控科技有限公司 A kind of fatigue of automobile driver detection method and system
CN109389806A (en) * 2018-11-08 2019-02-26 山东大学 Fatigue driving detection method for early warning, system and medium based on multi-information fusion
CN109846459A (en) * 2019-01-18 2019-06-07 长安大学 A kind of fatigue driving state monitoring method
CN110393531A (en) * 2019-05-23 2019-11-01 重庆大学 A kind of method for detecting fatigue driving and system based on smart machine
CN110188710A (en) * 2019-06-03 2019-08-30 石家庄铁道大学 Train driver dynamic behaviour recognition methods
CN110547807A (en) * 2019-09-17 2019-12-10 深圳市赛梅斯凯科技有限公司 driving behavior analysis method, device, equipment and computer readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI, Mengxin. Xuzhou: China University of Mining and Technology Press *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112115601A (en) * 2020-09-10 2020-12-22 西北工业大学 Reliable user attention monitoring estimation representation model
CN112115601B (en) * 2020-09-10 2022-05-17 西北工业大学 Reliable user attention monitoring estimation representation model
CN112277957A (en) * 2020-10-27 2021-01-29 广州汽车集团股份有限公司 Early warning method and system for driver distraction correction and storage medium
CN112277957B (en) * 2020-10-27 2022-06-24 广州汽车集团股份有限公司 Early warning method and system for driver distraction correction and storage medium
CN116755567A (en) * 2023-08-21 2023-09-15 北京中科心研科技有限公司 Equipment interaction method and system based on gesture data, electronic equipment and medium

Similar Documents

Publication Publication Date Title
CN109726771B (en) Abnormal driving detection model building method, device and storage medium
US9881221B2 (en) Method and system for estimating gaze direction of vehicle drivers
US11611621B2 (en) Event detection system
US11847911B2 (en) Object-model based event detection system
US10867195B2 (en) Systems and methods for monitoring driver state
JP6394735B2 (en) Detection of limbs using hierarchical context-aware
Braunagel et al. Driver-activity recognition in the context of conditionally autonomous driving
JP7061685B2 (en) Motion recognition, driving motion analysis methods and devices, and electronic devices
US10902331B2 (en) Systems and methods for providing visual allocation management
CN111062300A (en) Driving state detection method, device, equipment and computer readable storage medium
Chuang et al. Estimating gaze direction of vehicle drivers using a smartphone camera
CN110765807B (en) Driving behavior analysis and processing method, device, equipment and storage medium
US10394321B2 (en) Information acquiring method, information acquiring apparatus, and user equipment
JPWO2015025704A1 (en) Video processing apparatus, video processing method, and video processing program
CN110547807A (en) driving behavior analysis method, device, equipment and computer readable storage medium
Dua et al. AutoRate: How attentive is the driver?
WO2019097595A1 (en) Vehicle external communication apparatus, vehicle external communication method, information processing device, and vehicle external communication program
CN114936330A (en) Method and related device for pushing information in vehicle driving scene
Ouyang et al. Multiwave: A novel vehicle steering pattern detection method based on smartphones
CN110366388B (en) Information processing method, information processing apparatus, and computer-readable storage medium
CN115690750A (en) Driver distraction detection method and device
CN112651266A (en) Pedestrian detection method and device
Isaza et al. Dynamic set point model for driver alert state using digital image processing
US20220153278A1 (en) Cognitive Heat Map: A Model for Driver Situational Awareness
CN112637420B (en) Driving behavior recognition method and device and computer system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518000 706, building 3b, hongrongyuan Shangjun phase II, Longping community, Dalang street, Longhua District, Shenzhen City, Guangdong Province

Applicant after: Jintu computing technology (Shenzhen) Co.,Ltd.

Address before: 518000 room 905, building 3b, hongrongyuan Shangjun phase II, Longping community, Dalang street, Longhua District, Shenzhen City, Guangdong Province

Applicant before: SHENZHEN SEMISKY TECHNOLOGY Co.,Ltd.