CN109711260B - Fatigue state detection method, terminal device and medium - Google Patents


Info

Publication number
CN109711260B
CN109711260B (application CN201811432989.8A)
Authority
CN
China
Prior art keywords
video data
fatigue
detected
fatigue state
electroencephalogram
Prior art date
Legal status
Active
Application number
CN201811432989.8A
Other languages
Chinese (zh)
Other versions
CN109711260A
Inventor
冯超
李先华
叶政强
路红杰
郭鑫
Current Assignee
Xiamen Beiyang Brain Computer Interface And Intelligent Health Innovation Research Institute
Xiamen Beiyang Ruiheng Intelligent Health Co.,Ltd.
Original Assignee
Neural Flex Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Neural Flex Technology Shenzhen Co Ltd
Priority to CN201811432989.8A
Publication of CN109711260A
Application granted
Publication of CN109711260B

Landscapes

  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention relates to the technical field of data processing and provides a fatigue state detection method, a terminal device and a medium. The method comprises: acquiring an electroencephalogram signal of an object to be detected, and synchronously acquiring first video data about the environment scene to which the object belongs while the electroencephalogram signal is acquired; analyzing each segment of electroencephalogram signal corresponding to the object in a fatigue state, and marking the identified signals as fatigue early warning signals; determining the first video data corresponding to the acquisition time of each fatigue early warning signal; and storing the association relationship between the fatigue early warning signal and the determined first video data, so that when first video data about the same environment scene is acquired again, whether the object is in a fatigue state can be determined from the pre-stored association relationship. By adding the environment scene as a consideration factor in the judgment process, the invention improves both the detection efficiency and the detection accuracy of the user's fatigue state.

Description

Fatigue state detection method, terminal device and medium
Technical Field
The invention belongs to the technical field of data processing, and particularly relates to a fatigue state detection method, terminal equipment and a computer readable storage medium.
Background
In recent years traffic accidents have occurred frequently, posing a serious threat to people's lives and property, so accident prevention has become increasingly important. Fatigue driving is a major contributor to the accident rate; accurately judging whether a driver is driving while fatigued or drowsy is therefore a key focus and technical problem of current traffic accident prevention research.
In the prior art, because the electroencephalogram signal reflects brain activity directly and objectively, has high temporal resolution, and cannot be deliberately controlled or forged, a driver's fatigue state is generally determined by analyzing and processing the electroencephalogram signal. However, judging the fatigue state directly from the electroencephalogram signal relies on a single consideration factor, so the judgment accuracy remains low.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method for detecting a fatigue state, a terminal device, and a computer-readable storage medium, so as to solve the problem that the conventional method for detecting a fatigue state has a single consideration factor.
A first aspect of an embodiment of the present invention provides a method for detecting a fatigue state, including:
acquiring an electroencephalogram signal of an object to be detected, and synchronously acquiring first video data about an environment scene to which the object to be detected belongs when the electroencephalogram signal is acquired;
analyzing each section of electroencephalogram signal corresponding to the object to be detected in a fatigue state, and marking the determined electroencephalogram signal as a fatigue early warning signal;
determining the first video data corresponding to the acquisition time according to the acquisition time of the fatigue early warning signal;
and storing the association relationship between the fatigue early warning signal and the determined first video data, so as to determine whether the object to be detected is in a fatigue state based on the pre-stored association relationship when the first video data about the environmental scene to which the object to be detected belongs is collected again.
A second aspect of an embodiment of the present invention provides a fatigue state detection apparatus, including:
the device comprises a collecting unit, a processing unit and a processing unit, wherein the collecting unit is used for collecting an electroencephalogram signal of an object to be detected and synchronously collecting first video data about an environment scene to which the object to be detected belongs when the electroencephalogram signal is collected;
the first analysis unit is used for analyzing each section of electroencephalogram signal corresponding to the object to be detected in a fatigue state and marking the determined electroencephalogram signal as a fatigue early warning signal;
the determining unit is used for determining the first video data corresponding to the acquisition time according to the acquisition time of the fatigue early warning signal;
and the association storage unit is used for storing the fatigue early warning signal and the determined association relationship of the first video data so as to determine whether the object to be detected is in a fatigue state or not based on the pre-stored association relationship when the first video data about the environmental scene to which the object to be detected belongs is acquired again.
A third aspect of the embodiments of the present invention provides a terminal device, including a memory and a processor, where the memory stores a computer program operable on the processor, and the processor implements the steps of the method for detecting a fatigue state as described above when executing the computer program.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium, which stores a computer program, and when the processor executes the computer program, the processor implements the steps of the method for detecting a fatigue state as described above.
In the embodiment of the invention, while the electroencephalogram signal of the object to be detected is collected, the first video data of the environment scene to which the object belongs is collected synchronously, and each segment of electroencephalogram signal corresponding to the object in a fatigue state is identified as a fatigue early warning signal, so that the first video data collected together with each fatigue early warning signal can be accurately obtained and stored; the actual environment is thus used to establish the relationship between the electroencephalogram signal and the fatigue state. When first video data about the environmental scene to which the object belongs is collected again, whether the object is in a fatigue state can be quickly pre-judged from the pre-stored association relationship. Because the environment scene is added as a consideration factor in the judgment process, both the detection efficiency and the detection accuracy of the user's fatigue state are improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart of an implementation of a method for detecting a fatigue state according to an embodiment of the present invention;
fig. 2 is a flowchart of a specific implementation of the fatigue state detection method S102 according to an embodiment of the present invention;
fig. 3 is a flowchart of a specific implementation of the fatigue state detection method S103 according to an embodiment of the present invention;
fig. 4 is a flowchart illustrating a detailed implementation of the fatigue state detecting method S103 according to another embodiment of the present invention;
fig. 5 is a block diagram of a fatigue state detection apparatus according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Fig. 1 shows the implementation flow of the fatigue state detection method provided by the embodiment of the present invention, applied here to a vehicle-cab scenario; the method comprises steps S101 to S104. The specific implementation principle of each step is as follows:
s101: the method comprises the steps of collecting an electroencephalogram signal of an object to be detected, and synchronously collecting first video data about an environment scene to which the object to be detected belongs when the electroencephalogram signal is collected.
In the embodiment of the invention, a fatigue state detection device is preset in the vehicle cab. The device comprises a camera and an electroencephalogram signal collector. The camera captures video data within its shooting range; the electroencephalogram signal collector collects the electroencephalogram signal of the object to be detected. The electroencephalogram signals include, but are not limited to, resting-state electroencephalogram signals, visual evoked potential (VEP) signals, motor imagery electroencephalogram signals, event-related potential (ERP) signals, and the like. The object to be detected is specifically the driver of the vehicle, who is in the fatigue state detection stage.
Specifically, the preset shooting angle of the camera included in the fatigue state detection device covers the environment scene outside the cab; that is, the camera shoots the traffic conditions outside the vehicle. In the embodiment of the invention, the data acquired by the camera and by the electroencephalogram signal collector are synchronized in time, so that while the collector acquires the electroencephalogram signal, the camera synchronously acquires the first video data of the environment scene to which the object to be detected belongs.
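The time synchronization described above can be sketched as a shared-clock buffer that tags every electroencephalogram segment and video frame with one monotonic timestamp. The patent does not specify an implementation; the class and method names below are invented for this illustration.

```python
import time
from dataclasses import dataclass, field

@dataclass
class TimestampedSample:
    t: float          # shared-clock timestamp in seconds
    payload: object   # an EEG segment or a video frame

@dataclass
class SyncBuffer:
    """Keeps EEG segments and video frames aligned on one clock."""
    eeg: list = field(default_factory=list)
    video: list = field(default_factory=list)

    def add_eeg(self, segment, t=None):
        self.eeg.append(TimestampedSample(t if t is not None else time.monotonic(), segment))

    def add_video(self, frame, t=None):
        self.video.append(TimestampedSample(t if t is not None else time.monotonic(), frame))

    def video_at(self, t, tol=0.05):
        """Return the video frame whose timestamp is closest to t (within tol seconds)."""
        best = min(self.video, key=lambda s: abs(s.t - t), default=None)
        return best if best is not None and abs(best.t - t) <= tol else None
```

With both collectors writing into one such buffer, "first video data acquired synchronously with the electroencephalogram signal" is simply `video_at(t)` for the segment's timestamp.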
S102: analyzing each section of electroencephalogram signal corresponding to the object to be detected in a fatigue state, and marking the determined electroencephalogram signal as a fatigue early warning signal.
The electroencephalogram signals acquired at each acquisition moment are processed by a preset algorithm, which comprises: performing preprocessing, feature extraction, feature smoothing and filtering, feature selection, dynamic feature extraction, fatigue monitoring and other operations on the electroencephalogram signals, and identifying whether the electroencephalogram signal acquired at each moment was generated by the object to be detected in a fatigue state. The preset algorithm may be, for example, a fatigue detection algorithm based on MindWave brainwaves, a fatigue-driving electroencephalogram detection based on a matching pursuit algorithm, or the like.
And for any section of electroencephalogram signal, if the section of electroencephalogram signal is identified to be the electroencephalogram signal generated by the object to be detected in the fatigue state, marking the section of electroencephalogram signal as a fatigue early warning signal so as to represent the corresponding relation between the section of electroencephalogram signal and the fatigue state.
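As a rough illustration of how a segment might be flagged, the sketch below uses the (theta + alpha) / beta band-power ratio, a commonly cited drowsiness indicator. It stands in for, and is far simpler than, the preprocessing-to-monitoring pipeline named above; the sampling rate and threshold are arbitrary assumptions.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean power of `signal` in the [lo, hi) Hz band, via a simple periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

def is_fatigue_segment(segment, fs=256, threshold=1.0):
    """Flag a segment when (theta + alpha) / beta exceeds a threshold --
    a common drowsiness indicator; the threshold here is illustrative only."""
    theta = band_power(segment, fs, 4, 8)
    alpha = band_power(segment, fs, 8, 13)
    beta = band_power(segment, fs, 13, 30)
    return (theta + alpha) / max(beta, 1e-12) > threshold
```

A segment for which `is_fatigue_segment` returns true would then be marked as a fatigue early warning signal.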
S103: and determining the first video data corresponding to the acquisition time according to the acquisition time of the fatigue early warning signal.
In the embodiment of the present invention, searching for the first video data corresponding to the marked fatigue early warning signal according to its acquisition time comprises: searching for the first video data acquired synchronously with the fatigue early warning signal at the acquisition time; or, taking the acquisition time as a starting point, tracing back over a preset length of the first video data.
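Both lookup strategies reduce to a time-window query over a timestamped video index; the index format below (sorted `(timestamp, frame)` pairs) is an assumption of this sketch, not from the patent.

```python
def find_warning_clip(video_index, t_warning, lookback=10.0):
    """Return the video frames in [t_warning - lookback, t_warning].

    video_index: list of (timestamp, frame) pairs sorted by timestamp.
    A lookback of ~0 gives the synchronously acquired data; a larger
    lookback implements the 'trace back a preset length' variant.
    """
    return [frame for t, frame in video_index
            if t_warning - lookback <= t <= t_warning]
```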
S104: and storing the fatigue early warning signal and the determined incidence relation of the first video data, so as to determine whether the object to be detected is in a fatigue state based on the prestored incidence relation when the first video data about the environmental scene to which the object to be detected belongs is collected again.
The determined first video data is associated with the fatigue state of the object to be detected, and since the first video data is environmental scene data outside the vehicle cab, the determined first video data is identified as fatigue scene data. By storing the association relationship between the fatigue early warning signal and the fatigue scene data in a preset data table, whenever first video data generated in real time is collected in a subsequent period, whether a record corresponding to that first video data exists in the data table can be quickly determined.
If the current first video data exists in the data table, it is determined that the object to be detected is in a fatigue state at that moment; if it does not exist in the data table, whether the object to be detected is in a fatigue state at that moment is further determined based on the electroencephalogram signal acquired in real time.
Preferably, after the first video data corresponding to the acquisition time of a fatigue early warning signal is determined, the first video data is stored in a buffer. Similarly, within a subsequent preset time, the first video data corresponding to the acquisition time of each detected segment of fatigue early warning signal is stored in the buffer. If N pieces of first video data with the same or similar features are detected in the buffer (N is a preset value greater than one), that first video data is determined to be fatigue scene data. Only then is the association relationship between the fatigue early warning signal and the fatigue scene data stored in the preset data table, which improves the detection accuracy of the fatigue scene data.
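The buffering rule above (promote a scene to fatigue scene data only after N matching clips have co-occurred with fatigue early warning signals) might look like the following sketch, where a hashable "scene signature" stands in for real video features:

```python
from collections import defaultdict

class FatigueSceneTable:
    """Promotes a scene signature to fatigue scene data once it has been seen
    alongside N fatigue early warning signals.  The string signatures used
    here are illustrative stand-ins for matched video features."""

    def __init__(self, n_required=3):
        self.n_required = n_required
        self.buffer = defaultdict(int)   # scene signature -> warning count
        self.table = set()               # confirmed fatigue scene data

    def record_warning(self, scene_signature):
        """Called each time a fatigue warning co-occurs with this scene."""
        self.buffer[scene_signature] += 1
        if self.buffer[scene_signature] >= self.n_required:
            self.table.add(scene_signature)

    def is_fatigue_scene(self, scene_signature):
        """Fast lookup used when the scene is encountered again."""
        return scene_signature in self.table
```

A scene found by `is_fatigue_scene` triggers the fast pre-judgment; otherwise the system falls back to the real-time electroencephalogram analysis.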
In the embodiment of the invention, while the electroencephalogram signal of the object to be detected is collected, the first video data of the environment scene to which the object belongs is collected synchronously, and each segment of electroencephalogram signal corresponding to the object in a fatigue state is identified as a fatigue early warning signal, so that the first video data collected together with each fatigue early warning signal can be accurately obtained and stored; the actual environment is thus used to establish the relationship between the electroencephalogram signal and the fatigue state. When first video data about the environmental scene to which the object belongs is collected again, whether the object is in a fatigue state can be quickly pre-judged from the pre-stored association relationship. Because the environment scene is added as a consideration factor in the judgment process, both the detection efficiency and the detection accuracy of the user's fatigue state are improved.
As an embodiment of the present invention, fig. 2 shows a flowchart of a specific implementation of the fatigue state detection method S102 provided by the embodiment of the present invention, which is detailed as follows:
s1021: and synchronously acquiring second video data related to the human face characteristics of the object to be detected when the electroencephalogram signals are acquired.
The fatigue state detection device comprises an electroencephalogram signal collector, a first camera for shooting the traffic conditions outside the vehicle, and a second camera for shooting face features inside the vehicle; the preset shooting angle of the second camera covers the head area of the vehicle driver. The first camera, the second camera and the electroencephalogram signal collector acquire their respective data in time synchronization, so that while the collector acquires the electroencephalogram signal, the first camera synchronously acquires the first video data of the environment scene to which the object to be detected belongs, and the second camera synchronously acquires the second video data of the face features of the object to be detected.
S1022: and analyzing the second video data through a preset algorithm to determine the second video data corresponding to the object to be detected in the fatigue state.
In the embodiment of the invention, the second video data is analyzed and processed through a preset algorithm so as to output the face features contained in each image frame in the second video data. The face features include face information, pixel information, sound information, time information, and the like. The face information includes the size of the eyes, the position of the corners of the mouth, the degree of curvature of the lips, and the like. The extraction method of the face information includes, but is not limited to, the following various methods:
A priori rule-based mode: a plurality of pre-acquired face sample images are obtained, in which the face information is labelled. A prior rule for detecting face information is obtained by training on the face sample images and their corresponding labelled face information. After the image frames in the second video data are pre-transformed to strengthen their target features, candidate points or regions corresponding to each piece of face information are identified from the image frames according to the prior rule.
Geometry-based approach: constructing a geometric model with variable parameters according to the shape characteristics of the facial features of the human face, and setting an evaluation function; and the evaluation function is used for measuring the matching degree of the region to be detected in the image frame and the geometric model. The variable parameters are continuously adjusted by selecting different regions to be detected in the image frame, so that the output value of the evaluation function is minimized, and the geometric model can converge and position an image region containing the human face characteristics.
Color information based approach: and establishing a color model of the facial features by using a statistical method, traversing each candidate region in the image frame, and positioning candidate points corresponding to the face information in the image frame according to the matching degree of the color of the measured point in the candidate region and the color model.
Mode based on appearance information: and positioning the sub-images in the area near the facial features in the facial sample image, taking the sub-images as a whole, mapping the sub-images into a point in a high-dimensional space, so that a point set in the high-dimensional space can be used for describing the facial features of the same type, and obtaining a corresponding distribution model by using a statistical method. The face information contained in the image frame can be judged by calculating the matching degree of each region to be detected in the image frame and the distribution model.
In the embodiment of the invention, the output human face features are processed by calling a preset video machine learning model or calling a preset judgment condition so as to determine whether the object to be detected is in a fatigue state. The video machine learning model can be an existing human face fatigue detection model; the preset judgment condition may be a rule for judging by using a fatigue parameter obtained in advance by the face fatigue detection model. For example, it is detected whether the blinking frequency of the object to be detected in the image frame exceeds a predetermined threshold, whether the eye closing time exceeds a predetermined threshold, and/or the yawning frequency exceeds a predetermined threshold, etc.
As a specific implementation example of the present invention, the method for processing the output facial features by invoking a preset determination condition to determine whether the object to be detected is in a fatigue state may further include:
and acquiring various human face characteristic parameter values associated with the object to be detected and fatigue reference values corresponding to the human face characteristic parameter values. The human face characteristic parameter values comprise eye opening amplitude, eyebrow droop degree, mouth corner droop degree and mouth-shaped bending degree. And respectively judging whether the parameter values of the various human face characteristics reach the corresponding fatigue reference values. And if any one of the face characteristic parameter values reaches the corresponding fatigue reference value, adding one to the fatigue index of the object to be detected. And when the fatigue index of the object to be detected is larger than a preset threshold value, determining that the object to be detected is in a fatigue state.
Preferably, in the above implementation example, the fatigue index of the object to be detected is incremented only when M of the face feature parameter values all reach their respective corresponding fatigue reference values. M is an integer greater than or equal to one.
Preferably, the M face feature parameter values are selected in advance. For example, the pre-selected face feature parameter values are the eye opening amplitude and the eyebrow droop degree. In that case, the fatigue index of the object to be detected is incremented only when the eye opening amplitude reaches its corresponding fatigue reference value and the eyebrow droop degree also reaches its corresponding fatigue reference value.
If the judgment result obtained from the corresponding face features is that the object to be detected is in a fatigue state, the second video data acquired at that moment is selected and determined to be second video data corresponding to the object to be detected in the fatigue state.
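The fatigue-index rule of this implementation example (increment only when all M pre-selected face feature parameters reach their reference values, then compare the accumulated index against a threshold) can be sketched as follows. The parameter names, and the reading of "reaches" as `>=`, are assumptions made for illustration:

```python
def fatigue_index_step(params, references, selected=("eye_opening", "brow_droop")):
    """Return 1 when every selected face feature parameter reaches its fatigue
    reference value (the M-of-K rule from the text), else 0.  Parameter names
    are hypothetical; real values would come from a face analysis model."""
    return int(all(params[k] >= references[k] for k in selected))

def is_fatigued(frames, references, threshold=5):
    """Accumulate the index over a sequence of per-frame parameter dicts and
    report fatigue once it exceeds the preset threshold."""
    return sum(fatigue_index_step(p, references) for p in frames) > threshold
```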
S1023: and for the determined second video data, marking the electroencephalogram signals synchronously acquired with the second video data as fatigue early warning signals.
As analyzed above, the data acquired by the first camera, the second camera and the electroencephalogram signal collector are synchronized in time, so that for the determined second video data, the electroencephalogram signal acquired synchronously with it can be obtained, and that segment of electroencephalogram signal is marked as a fatigue early warning signal.
In the embodiment of the invention, while the electroencephalogram signal of the object to be detected is collected, second video data about the face features of the object is collected synchronously, so that the detection device can judge whether the vehicle driver is in a fatigue state either from the driver's electroencephalogram signal or from the driver's video feature data, which improves the flexibility of the fatigue detection mode.
As an embodiment of the present invention, fig. 3 shows a specific implementation flow of the fatigue state detection method S103 provided in the embodiment of the present invention, which is detailed as follows:
s1031: and constructing and training an electroencephalogram signal machine model based on the fatigue early warning signal.
In the embodiment of the invention, the marked fatigue early warning signal is utilized to construct and train an electroencephalogram signal machine model, so that the electroencephalogram signal machine model after training can be used for judging whether the currently detected electroencephalogram signal is the electroencephalogram signal generated by the object to be detected in a fatigue state. The training process of the electroencephalogram signal machine model can be regarded as training a classifier by using a machine learning method to solve the classification problem.
The machine learning methods include, but are not limited to, the k-nearest neighbor method, perceptrons, naive Bayes, decision trees, logistic regression models, support vector machines, AdaBoost, Bayesian networks, neural network methods, and the like.
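As one concrete instance of the listed methods, a k-nearest-neighbor classifier over electroencephalogram feature vectors can be sketched in a few lines. The two-dimensional feature vectors and 0/1 labels below are toy stand-ins for real extracted EEG features (1 = fatigue early warning signal).

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Majority vote among the k training feature vectors nearest to x.
    train_X: (n, d) feature array; train_y: (n,) integer labels."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest_labels = train_y[np.argsort(dists)[:k]]
    return int(np.bincount(nearest_labels).argmax())
```

In the patent's terms, `train_X`/`train_y` would be features of marked fatigue early warning signals and ordinary signals, and `knn_predict` plays the role of the trained electroencephalogram signal machine model.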
S1032: analyzing the electroencephalogram signals acquired at each moment through the electroencephalogram signal machine model to obtain a first detection result about whether the object to be detected is in a fatigue state.
And sequentially inputting the electroencephalogram signals collected at each moment into the electroencephalogram signal machine model after the training is finished, and outputting a first detection result after the input electroencephalogram signals are identified and processed through the electroencephalogram signal machine model. The first detection result comprises that the object to be detected is in a fatigue state at the moment or the object to be detected is not in the fatigue state at the moment.
S1033: and if the first detection result indicates that the object to be detected is in a fatigue state, acquiring the first video data correspondingly acquired at the moment.
If the first detection result output by the electroencephalogram signal machine model is that the object to be detected is in a fatigue state at the moment, marking the electroencephalogram signal acquired at the moment as a fatigue early warning signal, and searching first video data corresponding to the acquisition time, wherein the method comprises the following steps: searching first video data synchronously acquired with the fatigue early warning signal in the acquisition time; or, taking the acquisition time as a starting point, backtracking the first video data with a preset length.
Preferably, as another embodiment of the present invention, as shown in fig. 4, before the above S1033, steps S1034 to S1035 are further included; the above step S1033 includes S10331. The implementation principle of each step is as follows:
s1034: and constructing and training a video machine learning model according to the second video data synchronously acquired with the fatigue early warning signal.
In addition to obtaining the electroencephalogram signal machine model through training, in the embodiment of the present invention the video machine learning model, which detects face features to determine whether the object to be detected is in a fatigue state, also needs to be updated and trained, so as to ensure that the trained video machine learning model has a higher generalization capability.
After the electroencephalogram signal generated by the object to be detected in the fatigue state is determined through the electroencephalogram signal machine model, second video data synchronously acquired with the electroencephalogram signal can be determined, and the second video data is marked as fatigue characteristic data. And training a video machine learning model according to the marked fatigue characteristic data and the second video data of the unmarked fatigue characteristic data.
S1035: and analyzing the second video data acquired at each moment through the video machine learning model to obtain a second detection result about whether the object to be detected is in a fatigue state.
The electroencephalogram signal and the second video data acquired synchronously at each subsequent moment are input into the trained electroencephalogram signal machine model and the trained video machine learning model respectively: the electroencephalogram signal machine model identifies the input electroencephalogram signal and outputs the first detection result, and the video machine learning model identifies the input second video data and outputs the second detection result. Each detection result indicates whether the object to be detected is in a fatigue state at that moment.
S10331: if both the first detection result and the second detection result indicate that the object to be detected is in a fatigue state, acquire the first video data correspondingly acquired at that moment.
If the first detection result output by the electroencephalogram signal machine model and the second detection result output by the video machine learning model both indicate that the object to be detected is in a fatigue state at that moment, the electroencephalogram signal acquired at that moment is marked as a fatigue early warning signal, and the first video data corresponding to the acquisition time is searched for in either of two ways: searching for the first video data synchronously acquired with the fatigue early warning signal at the acquisition time; or, taking the acquisition time as a starting point, backtracking over first video data of a preset length.
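The two look-up strategies (a synchronous look-up at the acquisition time, or backtracking a preset length from it) might be sketched as follows; the dictionary-based video index and all names are assumptions for illustration, not the patent's data structures.

```python
def find_first_video(video_index, acq_time, backtrack=None):
    """Locate environment (first) video data for a fatigue early
    warning signal.

    video_index: mapping of acquisition timestamp -> video clip.
    With backtrack=None, return only the clip acquired synchronously
    at acq_time (if any).  With backtrack=T, return every clip in the
    window [acq_time - T, acq_time], in time order.
    """
    if backtrack is None:
        clip = video_index.get(acq_time)
        return [clip] if clip is not None else []
    return [clip for t, clip in sorted(video_index.items())
            if acq_time - backtrack <= t <= acq_time]
```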
In the embodiment of the present invention, the electroencephalogram signal machine model and the video machine learning model are used simultaneously to judge whether the signal data acquired at each moment relate to the fatigue state of the user, and the synchronously acquired first video data are extracted only when the detection results of the two models agree. This improves the extraction accuracy of the environmental video data, so that higher identification accuracy can be achieved when the user's fatigue state is identified based on the correspondence between the environmental video data and the fatigue early warning signal.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence; the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
Fig. 5 is a block diagram illustrating the structure of a fatigue state detection apparatus according to an embodiment of the present invention, which corresponds to the fatigue state detection method according to the embodiment of the present invention. For convenience of explanation, only the portions related to the present embodiment are shown.
Referring to fig. 5, the apparatus includes:
the acquisition unit 51 is configured to acquire an electroencephalogram signal of an object to be detected, and acquire first video data related to an environmental scene to which the object to be detected belongs synchronously when the electroencephalogram signal is acquired.
The first analyzing unit 52 is configured to analyze each segment of the electroencephalogram signal corresponding to the object to be detected in a fatigue state, and mark the determined electroencephalogram signal as a fatigue early warning signal.
The determining unit 53 is configured to determine, according to the acquisition time of the fatigue early warning signal, the first video data corresponding to the acquisition time.
The association storage unit 54 is configured to store the fatigue early warning signal and the determined association relationship of the first video data, so as to determine whether the object to be detected is in a fatigue state based on the pre-stored association relationship when the first video data about the environmental scene to which the object to be detected belongs is acquired again.
Optionally, the first parsing unit 52 includes:
the acquisition subunit, configured to synchronously acquire second video data about the human face characteristics of the object to be detected when the electroencephalogram signal is acquired;
the analysis subunit, configured to analyze the second video data through a preset algorithm to determine the second video data corresponding to the object to be detected in a fatigue state; and
the marking subunit, configured to mark, for the determined second video data, the electroencephalogram signals synchronously acquired with the second video data as fatigue early warning signals.
Optionally, the determining unit 53 includes:
the first training subunit, configured to construct and train an electroencephalogram signal machine model based on the fatigue early warning signal;
the detection subunit, configured to analyze the electroencephalogram signals acquired at each moment through the electroencephalogram signal machine model to obtain a first detection result about whether the object to be detected is in a fatigue state; and
the first obtaining subunit, configured to obtain the first video data correspondingly acquired at that moment if the first detection result indicates that the object to be detected is in a fatigue state.
Optionally, the fatigue state detection device further includes:
the construction unit, configured to construct and train a video machine learning model according to the second video data synchronously acquired with the fatigue early warning signal; and
the second analysis unit, configured to analyze the second video data acquired at each moment through the video machine learning model to obtain a second detection result about whether the object to be detected is in a fatigue state.
The first obtaining subunit is specifically configured to:
obtain the first video data correspondingly acquired at that moment if both the first detection result and the second detection result indicate that the object to be detected is in a fatigue state.
Optionally, the determining unit 53 includes:
the second acquisition subunit, configured to acquire the acquisition time of the fatigue early warning signal; and
the determining subunit, configured to determine the first video data acquired within a preset time length before the acquisition time.
Optionally, the association storage unit 54 includes:
the storage subunit, configured to store the first video data in a buffer area;
the judging subunit, configured to judge whether N pieces of first video data have been added to the buffer area within the latest preset time length; and
the association subunit, configured to store the association relationship between the fatigue early warning signal and the first video data if the judgment result is yes.
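The buffer-and-threshold behaviour of the storage, judging, and association subunits might be sketched as follows. The class, its interface, and the explicit timestamps are hypothetical, introduced only to illustrate the condition that N pieces of first video data (with N greater than 1, as stated in claim 6) must have arrived within the latest preset time length before the association is stored.

```python
from collections import deque

class AssociationStore:
    """Illustrative sketch: buffer first-video entries and persist the
    warning/video association only when at least n entries arrived
    within the last `window` seconds."""

    def __init__(self, n, window):
        self.n, self.window = n, window
        self.buffer = deque()     # (arrival_time, video) pairs
        self.associations = []    # persisted (signal, videos) pairs

    def add_video(self, video, now):
        # storage subunit: place first video data in the buffer area
        self.buffer.append((now, video))

    def try_store(self, warning_signal, now):
        # judging subunit: count entries within the latest window
        recent = [v for t, v in self.buffer if now - t <= self.window]
        if len(recent) >= self.n:
            # association subunit: persist the association relationship
            self.associations.append((warning_signal, recent))
            return True
        return False
```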
In the embodiment of the present invention, while the electroencephalogram signal of the object to be detected is collected, the first video data of the environmental scene to which the object to be detected belongs is synchronously collected, and each fatigue early warning signal segment corresponding to the object to be detected in a fatigue state is analyzed, so that the first video data collected in correspondence with the fatigue early warning signal can be accurately obtained and stored, and the relationship between the electroencephalogram signal and the fatigue state is established using the actual environment. When the first video data about the environmental scene to which the object to be detected belongs is collected again, whether the object to be detected is in a fatigue state can be quickly predicted based on the pre-stored association relationship; since the environmental scene is added as a consideration factor in the judging process, both the detection efficiency and the detection accuracy of the user's fatigue state are improved.
Fig. 6 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 6, the terminal device 6 of this embodiment includes: a processor 60, a memory 61 and a computer program 62 stored in said memory 61 and executable on said processor 60, such as a detection program of a fatigue state. The processor 60, when executing the computer program 62, implements the steps in the above-described embodiments of the method for detecting fatigue states, such as the steps 101 to 104 shown in fig. 1. Alternatively, the processor 60, when executing the computer program 62, implements the functions of the modules/units in the above-mentioned device embodiments, such as the functions of the units 51 to 54 shown in fig. 5.
Illustratively, the computer program 62 may be partitioned into one or more modules/units that are stored in the memory 61 and executed by the processor 60 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 62 in the terminal device 6.
The terminal device 6 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, the processor 60 and the memory 61. Those skilled in the art will appreciate that fig. 6 is merely an example of the terminal device 6 and does not constitute a limitation of the terminal device 6, which may include more or fewer components than those shown, combine some components, or use different components; for example, the terminal device may also include input/output devices, network access devices, buses, etc.
The processor 60 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 61 may be an internal storage unit of the terminal device 6, such as a hard disk or a memory of the terminal device 6. The memory 61 may also be an external storage device of the terminal device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 6. Further, the memory 61 may also include both an internal storage unit and an external storage device of the terminal device 6. The memory 61 is used for storing the computer program and other programs and data required by the terminal device. The memory 61 may also be used to temporarily store data that has been output or is to be output.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A method for detecting a fatigue state, comprising:
acquiring an electroencephalogram signal of an object to be detected, and synchronously acquiring first video data about an environment scene to which the object to be detected belongs when the electroencephalogram signal is acquired; the environment scene is an environment scene outside the cockpit;
analyzing each section of electroencephalogram signal corresponding to the object to be detected in a fatigue state, and marking the determined electroencephalogram signal as a fatigue early warning signal;
determining the first video data corresponding to the acquisition time according to the acquisition time of the fatigue early warning signal;
and storing the fatigue early warning signal and the determined incidence relation of the first video data, so as to determine whether the object to be detected is in a fatigue state based on the prestored incidence relation when the first video data about the environmental scene to which the object to be detected belongs is collected again.
2. The method for detecting the fatigue state according to claim 1, wherein the analyzing out the electroencephalogram signals corresponding to the object to be detected in the fatigue state, and marking the determined electroencephalogram signals as fatigue early warning signals comprises:
synchronously acquiring second video data related to the human face characteristics of the object to be detected when the electroencephalogram signals are acquired;
analyzing the second video data through a preset algorithm to determine the second video data corresponding to the object to be detected in a fatigue state;
and for the determined second video data, marking the electroencephalogram signals synchronously acquired with the second video data as fatigue early warning signals.
3. The method for detecting the fatigue state according to claim 1 or 2, wherein the determining the first video data corresponding to the acquisition time according to the acquisition time of the fatigue warning signal comprises:
constructing and training an electroencephalogram signal machine model based on the fatigue early warning signal;
analyzing the electroencephalogram signals acquired at each moment through the electroencephalogram signal machine model to obtain a first detection result about whether the object to be detected is in a fatigue state;
and if the first detection result indicates that the object to be detected is in a fatigue state, acquiring the first video data correspondingly acquired at the moment.
4. The method of detecting a fatigue state according to claim 3, further comprising:
constructing and training a video machine learning model according to second video data which are synchronously acquired with the fatigue early warning signal and are related to the human face characteristics of the object to be detected;
analyzing the second video data acquired at each moment through the video machine learning model to obtain a second detection result about whether the object to be detected is in a fatigue state;
if the first detection result indicates that the object to be detected is in a fatigue state, acquiring the first video data correspondingly acquired at the moment, including:
and if the first detection result and the second detection result are both that the object to be detected is in a fatigue state, acquiring the first video data correspondingly acquired at the moment.
5. The method for detecting the fatigue state according to claim 1, wherein the determining the first video data corresponding to the acquisition time according to the acquisition time of the fatigue warning signal comprises:
acquiring the acquisition time of the fatigue early warning signal;
and determining the first video data acquired within a preset time length before the acquisition time.
6. The method for detecting a fatigue state according to claim 1, wherein the storing the relationship between the fatigue warning signal and the determined first video data comprises:
storing the first video data to a buffer area;
judging whether N pieces of first video data are added in the cache region within the latest preset time; wherein N is a preset value greater than 1;
and if so, storing the association relationship between the fatigue early warning signal and the first video data.
7. A fatigue state detection device, comprising:
the device comprises a collecting unit, a processing unit and a processing unit, wherein the collecting unit is used for collecting an electroencephalogram signal of an object to be detected and synchronously collecting first video data about an environment scene to which the object to be detected belongs when the electroencephalogram signal is collected; the environment scene is an environment scene outside the cockpit;
the first analysis unit is used for analyzing each section of electroencephalogram signal corresponding to the object to be detected in a fatigue state and marking the determined electroencephalogram signal as a fatigue early warning signal;
the determining unit is used for determining the first video data corresponding to the acquisition time according to the acquisition time of the fatigue early warning signal;
and the association storage unit is used for storing the fatigue early warning signal and the determined association relationship of the first video data so as to determine whether the object to be detected is in a fatigue state or not based on the pre-stored association relationship when the first video data about the environmental scene to which the object to be detected belongs is acquired again.
8. The detection apparatus of claim 7, wherein the first parsing unit comprises:
the acquisition subunit is used for synchronously acquiring second video data related to the human face characteristics of the object to be detected when the electroencephalogram signal is acquired;
the analysis subunit is configured to analyze the second video data through a preset algorithm to determine the second video data corresponding to the object to be detected in a fatigue state;
and the marking subunit is used for marking the electroencephalogram signals synchronously acquired with the second video data as fatigue early warning signals for the determined second video data.
9. A terminal device comprising a memory and a processor, the memory storing a computer program operable on the processor, wherein the processor implements the steps of the method according to any one of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 5.
CN201811432989.8A 2018-11-28 2018-11-28 Fatigue state detection method, terminal device and medium Active CN109711260B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811432989.8A CN109711260B (en) 2018-11-28 2018-11-28 Fatigue state detection method, terminal device and medium


Publications (2)

Publication Number Publication Date
CN109711260A CN109711260A (en) 2019-05-03
CN109711260B true CN109711260B (en) 2021-03-05

Family

ID=66254491


Country Status (1)

Country Link
CN (1) CN109711260B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104000586A (en) * 2014-05-12 2014-08-27 燕山大学 Stroke patient rehabilitation training system and method based on brain myoelectricity and virtual scene
CN106448062A (en) * 2016-10-26 2017-02-22 深圳市元征软件开发有限公司 Fatigue driving detection method and device
CN108304764A (en) * 2017-04-24 2018-07-20 中国民用航空局民用航空医学中心 Fatigue state detection device and detection method in simulated flight driving procedure
CN108650418A (en) * 2018-05-09 2018-10-12 广东小天才科技有限公司 Tired based reminding method, device, intelligent terminal and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105094029B (en) * 2015-07-24 2019-01-18 上海帝仪科技有限公司 Safety management system and method
CN105719431A (en) * 2016-03-09 2016-06-29 深圳市中天安驰有限责任公司 Fatigue driving detection system
CN105893980B (en) * 2016-04-26 2019-02-26 北京科技大学 A kind of attention focus evaluation method and system
CN108665084B (en) * 2017-03-31 2021-12-10 中移物联网有限公司 Method and system for predicting driving risk
CN107137096A (en) * 2017-06-22 2017-09-08 中国科学院心理研究所 A kind of multi-modal physiology and behavioral data merge acquisition system
CN107874756A (en) * 2017-11-21 2018-04-06 博睿康科技(常州)股份有限公司 The precise synchronization method of eeg collection system and video acquisition system




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240329

Address after: Room 704-04, 7th Floor, No. 588 Jiahe Road, Torch Park, Torch High tech Zone, Xiamen City, Fujian Province, 361000

Patentee after: Xiamen Beiyang Ruiheng Intelligent Health Co.,Ltd.

Country or region after: China

Patentee after: Xiamen Beiyang Brain Computer Interface and Intelligent Health Innovation Research Institute

Address before: 518000 room 210, building 5, Shenzhen Software Park, No.2, Gaoxin middle third road, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: NEURAL FLEX TECHNOLOGY(SHENZHEN) Co.,Ltd.

Country or region before: China
