CN114898530A - Detection method and detection equipment for fatigue driving - Google Patents

Detection method and detection equipment for fatigue driving

Info

Publication number
CN114898530A
Authority
CN
China
Prior art keywords
fatigue driving
driver
eye information
information
score
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210468108.8A
Other languages
Chinese (zh)
Inventor
杨岩
黄爱萍
白媛媛
程雪
顾韵晗
霍来超
靳立才
兰国庆
李开亮
牛志慧
孙岚
王晨
王伟
吴桐
夏雪
朱婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bank of China Ltd
Original Assignee
Bank of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bank of China Ltd filed Critical Bank of China Ltd
Priority to CN202210468108.8A
Publication of CN114898530A
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 - Alarms for ensuring the safety of persons
    • G08B21/06 - Alarms for ensuring the safety of persons indicating a condition of sleep, e.g. anti-dozing alarms
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 - Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models
    • B60W40/08 - Estimation or calculation of such driving parameters related to drivers or passengers
    • B60W2040/0818 - Inactivity or incapacity of driver
    • B60W2040/0827 - Inactivity or incapacity of driver due to sleepiness
    • B60W2040/0836 - Inactivity or incapacity of driver due to alcohol
    • B60W50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 - Interaction between the driver and the control system
    • B60W50/14 - Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/146 - Display means

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a detection method and detection device for fatigue driving, relating to the technical field of big data. The detection device comprises a display, an image collector, and a controller. In response to a detection trigger instruction, the controller controls the display to play a first animation and controls the image collector to collect eye information, where the eye information comprises information generated by the driver's eyes, under the stimulus of the first animation, while the driver watches it. The eye information is input into a fatigue driving model, and if the output result of the fatigue driving model meets a preset rule, first prompt information is played.

Description

Detection method and detection equipment for fatigue driving
Technical Field
The embodiment of the application relates to the technical field of big data, in particular to a detection method and detection equipment for fatigue driving.
Background
Public safety hazards arise for many reasons. One of them is fatigue driving, which leaves drivers poorly observant, unable to concentrate, and with reduced dynamic visual acuity, and thereby creates public safety hazards.
Providing a detection method for fatigue driving, so as to reduce the public hazards caused by fatigued drivers, has therefore become an urgent technical problem to be solved.
Disclosure of Invention
The embodiments of the application provide a detection method and detection device for fatigue driving, in which eye information is input into a fatigue driving model and first prompt information is played if the output result of the fatigue driving model meets a preset rule. After receiving the first prompt information, the driver can take a rest, which reduces the public hazards caused by a driver remaining continuously in a driving state.
A first aspect of an embodiment of the present application provides a device for detecting fatigue driving, including a controller, a display, and an image collector;
the controller is configured to: responding to the detection trigger instruction, controlling the display to play the first animation, and controlling the image collector to collect eye information, wherein the eye information comprises information generated by eyes of a driver due to stimulation of the first animation when the driver watches the first animation;
and input the eye information into a fatigue driving model; if the output result of the fatigue driving model meets a preset rule, control the display to play first prompt information, wherein the fatigue driving model is a model built from historical data, the historical data is information generated when fatigued drivers watched the first animation, and the first prompt information is used to prompt that the driver is in a fatigue driving state.
With reference to a first implementation manner of the first aspect, the display includes a transparent screen, the transparent screen includes a display surface used for showing the first animation, the image collector is arranged on the side of the transparent screen facing away from the display surface, and the image collector collects the eye information through the transparent screen.
With reference to a second implementation manner of the first aspect, the detection device further includes an alcohol detector; the controller is further configured to:
in response to the detection trigger instruction, control the alcohol detector to detect the alcohol content of the gas exhaled by the driver and output the alcohol content to the controller;
and if the alcohol content is greater than an alcohol content threshold, control the display to play second prompt information, wherein the second prompt information is used to prompt that the driver is in a drunk driving state.
With reference to the third implementation manner of the first aspect, the apparatus further includes a speaker, and the controller is further configured to:
if the output result of the fatigue driving model accords with the preset rule, controlling a loudspeaker to play first prompt information;
and if the alcohol content is greater than the alcohol content threshold value, controlling the loudspeaker to play a second prompt message.
With reference to a fourth implementation manner of the first aspect, the eye information includes different types of eye information, and each type of eye information corresponds to one fatigue driving model; the controller is further configured to:
input each type of eye information into the corresponding fatigue driving model to obtain a single score;
add the plurality of single scores to obtain a comprehensive score;
and play the first prompt information if the comprehensive score is smaller than a score threshold.
With reference to the fifth implementation manner of the first aspect, each piece of eye information corresponds to one weight value, and the greater the randomness of the eye information generated by the driver in a fatigue driving state, the smaller the weight value corresponding to the eye information is;
the step of adding the plurality of single scores to obtain the comprehensive score comprises the following steps:
calculating a weighted score according to the single score and a target weight value, wherein the target weight value is the weight value corresponding to the eye information that produced the single score;
and adding the weighted scores to obtain a comprehensive score.
A second aspect of the embodiments of the present application provides a method for detecting fatigue driving, including:
in response to the detection trigger instruction, playing a first animation and collecting eye information, wherein the eye information comprises information generated by eyes of a driver due to stimulation of the first animation when the driver watches the first animation;
and inputting the eye information into a fatigue driving model, and playing first prompt information if the output result of the fatigue driving model meets a preset rule, wherein the fatigue driving model is a model built from historical data, the historical data is information generated when fatigued drivers watched the first animation, and the first prompt information is used to prompt that the driver is in a fatigue driving state.
With reference to a first implementation manner of the second aspect, the detection method further includes:
detecting the alcohol content of the expired gas of the driver in response to the detection trigger instruction;
and if the alcohol content is greater than the alcohol content threshold value, playing second prompt information, wherein the second prompt information is used for prompting the driver to be in the drunk driving state.
With reference to a second implementation manner of the second aspect, the eye information includes different types of eye information;
inputting each eye information into a corresponding fatigue driving model to obtain a single score;
carrying out addition calculation on the plurality of single scores to obtain a comprehensive score;
and if the comprehensive score is smaller than the score threshold value, playing the first prompt message.
With reference to a third implementation manner of the second aspect, each type of eye information corresponds to one weight value, and the step of adding the plurality of single scores to obtain a comprehensive score includes:
calculating a weighted score according to the single score and a target weight value, wherein the target weight value is the weight value corresponding to the eye information that produced the single score;
and adding the weighted scores to obtain a comprehensive score.
The technical solution provided by the application is suitable for a detection device. The detection device may include a display, an image collector, and a controller. In response to a detection trigger instruction, the controller controls the display to play the first animation and controls the image collector to collect eye information, where the eye information includes information generated by the driver's eyes, under the stimulus of the first animation, while the driver watches it. The eye information is input into the fatigue driving model, and if the output result of the fatigue driving model meets a preset rule, first prompt information is played. The detection device provided by the embodiments of the application thus collects the driver's eye information while the driver watches the first animation and inputs that eye information into the fatigue driving model; if the output result of the model meets the preset rule, the driver is determined to be in a fatigue driving state, and in that case the display plays the first prompt information. After receiving the first prompt information, the driver can take a rest, so the detection device provided by the embodiments of the application can reduce the public hazards caused by a driver remaining continuously in a driving state.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the embodiments of the application and, together with the description, serve to explain the principles of the embodiments of the application and are not to be construed as unduly limiting the embodiments of the application.
FIG. 1 is a schematic diagram of a fatigue driving detection apparatus provided in one possible embodiment;
FIG. 2 is a flow chart illustrating interaction of components of the fatigue driving detection apparatus provided in FIG. 1;
FIG. 3 is a schematic diagram of a display showing first prompt information, according to a feasible implementation;
FIG. 4 is a schematic diagram of a fatigue driving detection apparatus according to a possible embodiment;
FIG. 5 is a schematic diagram of a display showing first prompt information and second prompt information, according to a feasible implementation;
FIG. 6 is a flow chart of a method for detecting fatigue driving according to one possible embodiment;
FIG. 7 is a flow chart of a method for detecting drunk driving according to a possible embodiment;
FIG. 8 is a flowchart of a method for calculating eye information according to one possible embodiment;
FIG. 9 is a flowchart of a method for calculating a comprehensive score according to a possible embodiment.
Detailed Description
In order to make the technical solutions of the embodiments of the present application better understood by those of ordinary skill in the art, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the embodiments of the present application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the following exemplary examples do not represent all implementations consistent with the examples of this application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the embodiments of the application, as detailed in the appended claims.
Public safety hazards arise for many reasons. One of them is fatigue driving, which leaves drivers poorly observant, unable to concentrate, and with reduced dynamic visual acuity, and thereby creates public safety hazards. A 2014 survey by the AAA Foundation for Traffic Safety in the United States found that crashes caused by driver fatigue account for about 21 percent of fatal traffic accidents in the United States, claiming roughly 6,400 lives each year, and the fatigue driving situation in China is likewise not optimistic.
To reduce the public hazards caused by fatigue driving, a first aspect of the embodiments of the present application provides a fatigue driving detection device. The fatigue driving detection device provided by this embodiment is suitable for law-enforcement inspection scenarios of traffic control departments. Before detection, the driver needs to cooperate by taking off sunglasses or glasses for the check.
Specifically, referring to fig. 1 and fig. 2, fig. 1 is a block diagram of a fatigue driving detection apparatus provided in a feasible embodiment, and fig. 2 is an interaction flowchart of components in the fatigue driving detection apparatus provided in fig. 1. It can be seen that the fatigue driving detecting apparatus includes a controller 1, a display 2 and an image collector 3.
In the embodiments of the present application, the controller refers to the master device that, following a predetermined sequence, changes the wiring of the main circuit or control circuit and the resistance values in the circuit to control the starting, speed regulation, braking and reversing of the motor. It consists of a program counter, an instruction register, an instruction decoder, a timing generator and an operation controller, and can coordinate and direct the operation of the whole computer system.
In the embodiments of the present application, the display 2 is configured to receive a signal, form an image, and present the image. As applied to the fatigue driving detection device, the display is configured to present the first animation.
In the embodiment of the present application, the image collector 3 is a video input device, and may be, but is not limited to, a camera (camera).
The following describes the interaction process of the controller 1, the display 2 and the image collector 3:
In the embodiments of the present application, in response to the detection trigger instruction, the controller 1 is configured to execute step S21: control the display to play the first animation and control the image collector to collect eye information. The detection trigger instruction is triggered manually.
In the embodiment of the present application, the eye information includes information generated when the driver views the first animation.
In the embodiments of the application, the first animation contains at least one motion element, and the motion element can undergo morphological changes. As the form of the motion element changes, the person watching the first animation is stimulated and generates eye information. In the embodiments of the present application, the person watching the first animation may include the driver.
In the embodiment of the present application, the motion element is an element that can be changed in form. In some feasible implementations, the motion element may be a graphic that is presented in a first animation. In some feasible implementations, the first animation may be played in an Augmented Reality system (AR), and in an AR application scenario, the motion element may be an anchor point in the Augmented Reality system. In some feasible implementations, the first animation may be played in a Virtual Reality system (VR), and the motion element may be one object shown in the first animation. It is noted that the embodiments of the present application are merely exemplary in describing several forms of motion elements. In practice, the motion element may be, but is not limited to, the above forms.
Highway driving presents a driving environment in which the driving task is monotonous, stimuli are sparse, and danger signals appear unpredictably. Such an environment reduces the driver's alertness, so the driver cannot detect and react to danger signals in time, and the likelihood of an accident increases. In the technical solution provided by the embodiments of the application, the first animation can be played in VR/AR form, and the first animation played in VR/AR can reproduce a driving environment with a monotonous driving task, sparse stimuli, and unpredictable danger signals. The solution provided by this embodiment can therefore detect whether the driver is in a fatigue driving state in such a driving environment.
The morphological changes of the moving elements are explained below. In some feasible implementations, the change in the form of the moving element may be a change in the displacement of the moving element; in some feasible implementations, the morphological change of the motion element may be a change in color of the motion element. In some feasible implementations, the morphological change of the motion element may be a deformation of some graphics zooming in and out. It should be noted that the embodiments of the present application are merely exemplary in describing several forms of morphological changes of motion elements. In the process of practical application, the form of the morphological change of the motion element can be, but is not limited to, the above forms.
In the embodiment of the application, in the process of the form change of the motion element, a driver watching the first animation is stimulated to generate eye information.
In the embodiments of the present application, the eye information is information generated by movement of the driver's eyeball or eyelid. It may include, but is not limited to, blink frequency, blink time mean, percentage of eye closure, pupil diameter, average fixation time, fixation location, average speed of eye jump, peak speed of eye jump, and the like. The technical solution provided by the embodiments of the application collects the user's eye information rather than the user's facial information, which protects the driver's privacy to a certain extent.
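As a non-limiting illustration of how one collected sample of eye information might be organized in software, the following Python sketch groups the metrics listed above into a single record; the field names, types, and units are assumptions and are not part of the original disclosure:

    from dataclasses import dataclass

    @dataclass
    class EyeInfoSample:
        """One collected sample of eye information (illustrative fields and units)."""
        blink_frequency: float          # blinks per minute
        blink_time_mean: float          # mean blink duration, in seconds
        eye_closure_percent: float      # percentage of eye closure
        pupil_diameter_mm: float        # pupil diameter, in millimetres
        mean_fixation_time: float       # average gaze/fixation time, in seconds
        fixation_location: tuple        # (x, y) gaze position on the display
        eye_jump_mean_speed: float      # average saccade speed, degrees per second
        eye_jump_peak_speed: float      # peak saccade speed, degrees per second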
The eye information is input into the fatigue driving model, and if the output result of the fatigue driving model meets the preset rule, the controller 1 is configured to perform step S22: control the display to play the first prompt information.
In the embodiments of the application, the fatigue driving model is a model built from historical data, the historical data is information generated when fatigued drivers watched the first animation, and the first prompt information is used to prompt that the driver is in a fatigue driving state.
The fatigue driving model may be constructed with any model-building method commonly used in the art; the embodiments of the present application do not limit this. For example, in some feasible embodiments, the fatigue driving model may be established using a support vector machine.
In the embodiments of the application, the fatigue driving model learns, through training on the historical data, the range of eye information associated with fatigue. When the collected eye information falls within that range, inputting the eye information into the fatigue driving model yields an output result that meets the preset rule.
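For illustration only, the following Python sketch shows one possible way such a model could be built with a support vector machine and used to check whether its output meets a preset rule; the historical values, feature ordering, and the scikit-learn dependency are assumptions rather than the patented implementation:

    import numpy as np
    from sklearn.svm import SVC

    # Hypothetical historical data: each row is a vector of eye metrics recorded while a
    # driver watched the first animation; label 1 = fatigued driver, 0 = alert driver.
    X_hist = np.array([
        [0.22, 0.41, 6.5, 3.1, 2.8, 310.0],
        [0.20, 0.38, 6.1, 3.3, 2.6, 330.0],
        [0.08, 0.25, 2.0, 4.2, 1.1, 480.0],
        [0.10, 0.27, 2.4, 4.0, 1.3, 460.0],
    ])
    y_hist = np.array([1, 1, 0, 0])

    fatigue_model = SVC(kernel="rbf")   # one possible fatigue driving model (an SVM)
    fatigue_model.fit(X_hist, y_hist)

    def output_meets_preset_rule(eye_features):
        # Here the "preset rule" is simply: the model classifies the sample as fatigued.
        return fatigue_model.predict([eye_features])[0] == 1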
Further, to improve the accuracy of the detection result, the embodiments of the application also disclose updating the fatigue driving model at preset time intervals. The embodiments of the present application do not specifically limit the preset time; for example, the fatigue driving model may be rebuilt with different data at each preset interval.
The following explains the playing form of the first prompt message:
in a possible implementation, the first prompt information may be presented on the display 2 in the form of an image frame. Specifically, referring to fig. 3, fig. 3 is a display showing a first prompt message provided by a feasible implementation. In this embodiment, the first prompt message is displayed in a text form, and the prompt message is "the driver is in a fatigue driving state". In some feasible embodiments, the prompt information may also be presented in the form of image frames.
In some feasible embodiments, the fatigue driving detection device further includes a speaker 4. The eye information is input into the fatigue driving model, and if the output result of the fatigue driving model meets the preset rule, the controller controls the speaker to play the first prompt information as audio.
In some feasible embodiments, the eye information is input into the fatigue driving model, and if the output result of the fatigue driving model conforms to the preset rule, the controller may control the speaker to play the first prompt information in the form of audio, and at the same time, control the display to show the first prompt information in the form of image frames.
As a feasible implementation, to relieve the driver's eyestrain, the background of the first animation is an eye-protection background, which is used to relieve the driver's eyestrain. The eye-protection background can be, but is not limited to, a solid background, such as a green background; it can also be a natural landscape, for example, in some feasible implementations, grass. This embodiment only introduces several eye-protection backgrounds by way of example; in practice, any background that relieves eyestrain can serve as the eye-protection background in the embodiments of the application.
To improve the accuracy of the detection result of the fatigue driving detection device provided by the embodiments of the application, as a feasible implementation, the image collector may collect different types of eye information and transmit the collected types of eye information to the controller. In this embodiment, the controller stores in advance the fatigue driving model corresponding to each type of eye information, and then inputs each type of eye information into its corresponding fatigue driving model according to that correspondence to obtain a single score. The controller then performs a weighted summation of the single scores to obtain a comprehensive score, and if the comprehensive score is smaller than a score threshold, the first prompt information is played. The embodiments of the present application do not specifically limit the score threshold; for example, in some feasible implementations, the score threshold may be 80 points.
According to the technical scheme provided by the embodiment, in the process of determining whether the driver is in the fatigue driving state, different types of eye information are comprehensively considered, and the accuracy of the obtained detection result is further ensured.
In general, each type of eye information contributes differently to the evaluation of whether the driver is in a fatigued state. For example:
For the eye information "blink frequency": when the driver is in a fatigue driving state, the trend in blink frequency is not obvious and the blink frequency shows clear randomness. Therefore, blink frequency contributes less to the evaluation of whether the driver is fatigued.
For the eye information "blink time mean": when the driver is in a fatigue driving state, the coefficient of variation of the blink time mean gradually decreases, and the blink time mean is stable, reliable, and consistent across people. Therefore, the blink time mean contributes greatly to the evaluation of whether the driver is fatigued.
For the eye information "pupil diameter": when the driver is in a fatigue driving state, the pupil diameter decreases significantly. Therefore, the pupil diameter contributes greatly to the evaluation of whether the driver is fatigued.
For the eye information "average fixation time": when the driver is in a fatigue driving state, the average fixation time on relevant positions is prolonged. Therefore, the average fixation time contributes greatly to the evaluation of whether the driver is fatigued.
For the eye information "average speed of eye jump": when the driver is in a fatigue driving state, the average speed of eye jump decreases significantly. Therefore, the average speed of eye jump contributes greatly to the evaluation of whether the driver is fatigued.
For the eye information "peak speed of eye jump": when the driver is in a fatigue driving state, the peak speed of eye jump decreases significantly. Therefore, the peak speed of eye jump contributes greatly to the evaluation of whether the driver is fatigued.
in the embodiment of the application, the eye information generated by the driver in the fatigue driving state is more random, and the eye information contributes less to the evaluation of whether the driver is in the fatigue state. According to the technical scheme, the eye information is endowed with the weight value according to the contribution of the eye information to the evaluation of whether the driver is in the fatigue state, and the larger the contribution of the eye information to the evaluation of whether the driver is in the fatigue state is, the larger the weight value corresponding to the eye information is.
Table 1 is a table of the weight values for the eye information provided in a possible embodiment:
TABLE 1
(The published document presents Table 1 as an image; its contents are not reproduced here.)
According to the technical solution provided by the embodiments of the application, the controller calculates a weighted score from each single score and a target weight value, where the target weight value is the weight value corresponding to the eye information that produced the single score; the weighted scores are then added to obtain the comprehensive score. Because the weight values of the eye information are taken into account when calculating the comprehensive score, the resulting comprehensive score reflects more accurately whether the driver is in a fatigue driving state.
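A minimal sketch of this weighted aggregation is given below; the weight values are placeholders standing in for Table 1, and the 80-point threshold is only the example value mentioned above:

    # Placeholder weight values standing in for Table 1 (larger weight = larger contribution).
    WEIGHT_VALUES = {
        "blink_time_mean":     0.25,
        "eye_closure_percent": 0.20,
        "pupil_diameter_mm":   0.20,
        "mean_fixation_time":  0.15,
        "eye_jump_mean_speed": 0.15,
        "blink_frequency":     0.05,   # high randomness, so a small weight value
    }

    SCORE_THRESHOLD = 80   # example score threshold mentioned above

    def comprehensive_score(single_scores):
        """single_scores maps each type of eye information to the single score produced
        by its corresponding fatigue driving model."""
        return sum(score * WEIGHT_VALUES[name] for name, score in single_scores.items())

    def should_play_first_prompt(single_scores):
        # A comprehensive score below the threshold indicates a fatigue driving state.
        return comprehensive_score(single_scores) < SCORE_THRESHOLD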
As a feasible implementation manner, the display may be a transparent screen, the transparent screen includes a display surface, the display surface is used for displaying the first animation, and one side of the transparent screen, which is away from the display surface, is provided with an image collector, so that the image collector collects eye information through the transparent screen.
Specifically, referring to fig. 4, fig. 4 is a schematic diagram of a fatigue driving detection device according to a possible embodiment, wherein the fatigue driving detection device may include: a transparent screen 2, an image collector 3 and a controller (not shown in fig. 4). The image collector 3 is arranged on the side of the transparent screen 2 facing away from the display surface, that is, on the back of the transparent screen 2, and the image collector 3 can collect eye information through the transparent screen 2.
In the embodiments of the application, the transparent screen 2 operates on the electrowetting principle: the change of the image is controlled by applying a voltage, and no polarizer needs to be added to the transparent screen 2, so its monochrome transmittance can exceed 40 percent, and, with no backlight installed, the image collector behind the display can see through it easily. Accordingly, the image collector 3 arranged at the back of the transparent screen 2 can collect the driver's eye information through the transparent screen 2.
The fatigue driving detection equipment provided by the embodiment of the application adopts the transparent screen 2, so that the image collector 3 can be arranged on the back of the transparent screen 2. The image collector 3 arranged on the back of the transparent screen 2 can collect the eye information of the driver through the transparent screen 2.
So that the fatigue driving detection device also has an alcohol detection function, as a feasible implementation the fatigue driving detection device further includes an alcohol detector. Specifically, with continued reference to fig. 4, the fatigue driving detection device further includes an alcohol detector 4, which is disposed on the side away from the handle 5. The controller is further configured to: in response to the detection trigger instruction, control the alcohol detector to detect the alcohol content of the gas exhaled by the driver; and if the alcohol content is greater than an alcohol content threshold, control the display to play second prompt information, which is used to prompt that the driver is in a drunk driving state. In practical application, the alcohol content threshold may be set as required; the embodiments of the present application do not limit it.
In some feasible implementations, the second prompting message is presented in a text form. In some feasible embodiments, the prompt information may also be presented in the form of audio. The embodiments of the present application are not intended to be unduly limited herein.
As a feasible implementation manner, the first prompt message and the second prompt message may contain different contents in order to distinguish whether the driver is in a fatigue driving state or a drunk driving state.
For example, when the first prompt message and the second prompt message are both displayed in the form of audio, the eye information is input into the fatigue driving model, and if the output result of the fatigue driving model meets the preset rule and the detected alcohol content is greater than the alcohol content threshold, the controller may control the speaker to play the first prompt message "the driver is in the fatigue driving state" and the second prompt message "the driver is in the drunk driving state".
For example, when the first prompt message and the second prompt message are both displayed in the form of image frames, the eye information is input into the fatigue driving model, and if the output result of the fatigue driving model meets the preset rule and the detected alcohol content is greater than the alcohol content threshold value, the controller may simultaneously display the first prompt message "the driver is in the fatigue driving state" and the second prompt message "the driver is in the drunk driving state" on the transparent screen 2. In particular, reference may be made to FIG. 5.
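The combined prompting logic described above can be sketched as follows; the alcohol threshold value and the message strings are illustrative only, since the disclosure leaves the threshold configurable:

    ALCOHOL_THRESHOLD = 20.0   # illustrative threshold, e.g. mg of alcohol per 100 ml

    def prompts_to_play(fatigue_rule_met, alcohol_content):
        """Return the prompt messages to display and/or play as audio; both may apply."""
        prompts = []
        if fatigue_rule_met:
            prompts.append("The driver is in a fatigue driving state")   # first prompt information
        if alcohol_content > ALCOHOL_THRESHOLD:
            prompts.append("The driver is in a drunk driving state")     # second prompt information
        return prompts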
Furthermore, the fatigue driving detection device provided by the embodiments of the application can detect drunk driving and fatigue driving at the same time, so it has the advantages of short detection time and high detection efficiency.
According to the technical solution provided by the embodiments of the application, in response to the detection trigger instruction, the controller controls the display to play the first animation and controls the image collector to collect eye information, where the eye information includes information generated by the driver's eyes, under the stimulus of the first animation, while the driver watches it; the eye information is input into the fatigue driving model, and if the output result of the fatigue driving model meets a preset rule, first prompt information is played. The detection device provided by the embodiments of the application thus collects the driver's eye information while the driver watches the first animation and inputs it into the fatigue driving model; if the output result meets the preset rule, the driver is determined to be in a fatigue driving state, and in that case the first prompt information is played. According to the solution disclosed by the application, the first prompt information is played when the driver is determined to be in a fatigue driving state. After receiving the first prompt information, the driver can take a rest, which reduces the public hazards caused by a driver remaining continuously in a driving state.
A second aspect of the embodiment of the present application provides a method for detecting fatigue driving, specifically referring to fig. 6, where fig. 6 is a flowchart of a method for detecting fatigue driving provided by a feasible embodiment, and the method is applicable to a device for detecting fatigue driving provided by the embodiment of the present application, and the method includes steps S61 to S62:
S61. In response to the detection trigger instruction, play the first animation and collect eye information, where the eye information includes information generated by the driver's eyes, under the stimulus of the first animation, while the driver watches it.
S62. Input the eye information into the fatigue driving model, and play first prompt information if the output result of the fatigue driving model meets a preset rule, where the fatigue driving model is a model built from historical data, the historical data is information generated when fatigued drivers watched the first animation, and the first prompt information is used to prompt that the driver is in a fatigue driving state.
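For orientation only, steps S61 and S62 can be pictured as the following flow; the device interface below is a hypothetical stand-in, not the claimed apparatus:

    class FatigueDetectionDevice:
        """Hypothetical stand-in for the detection device; real hardware I/O is omitted."""
        def play_first_animation(self):
            print("playing first animation")
        def collect_eye_information(self):
            return [0.21, 0.40, 6.3, 3.2, 2.7, 320.0]   # illustrative eye-metric vector
        def play_prompt(self, text):
            print(text)

    def detect_fatigue_driving(device, fatigue_rule):
        # S61: on the detection trigger, play the first animation and collect eye information.
        device.play_first_animation()
        eye_info = device.collect_eye_information()
        # S62: feed the eye information to the fatigue driving model; prompt if the rule is met.
        if fatigue_rule(eye_info):
            device.play_prompt("The driver is in a fatigue driving state")

    # Example usage, reusing the SVM-based rule sketched earlier:
    # detect_fatigue_driving(FatigueDetectionDevice(), output_meets_preset_rule)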
On the basis of the detection method provided in the foregoing embodiment, an embodiment of the present application further provides a detection method for drunk driving. Specifically, referring to fig. 7, fig. 7 is a flowchart of the detection method for drunk driving provided in a feasible embodiment; on the basis of the detection method provided in fig. 6, the detection method further includes S71 to S72:
S71. In response to the detection trigger instruction, detect the alcohol content of the gas exhaled by the driver.
S72. If the alcohol content is greater than the alcohol content threshold, play second prompt information, where the second prompt information is used to prompt that the driver is in a drunk driving state.
On the basis of the detection method provided in the foregoing embodiment, an embodiment of the present application further provides a method for calculating eye information, specifically, referring to fig. 8, where fig. 8 is a flowchart of a method for calculating eye information provided in a feasible embodiment, and the method for calculating eye information further includes, on the basis of the detection method provided in fig. 6 or fig. 7, S81 to S83:
S81. Input each type of eye information into the corresponding fatigue driving model to obtain a single score.
S82. Add the plurality of single scores to obtain a comprehensive score.
S83. If the comprehensive score is smaller than the score threshold, play the first prompt information.
On the basis of the detection method provided in the foregoing embodiment, an embodiment of the present application further provides a method for calculating a comprehensive score. Specifically, referring to fig. 9, fig. 9 is a flowchart of a method for calculating a comprehensive score provided in a feasible embodiment; in the calculation method provided in fig. 8, S82 includes S91 to S92:
S91. Calculate a weighted score according to the single score and a target weight value, where the target weight value is the weight value corresponding to the eye information that produced the single score.
S92. Add the weighted scores to obtain a comprehensive score.
Optionally, the background of the first animation is an eye-protecting background, and the eye-protecting background is used for relieving eye fatigue of the driver.
Optionally, the first prompt message is played in the form of image frame and/or audio;
and/or the second prompt message is played in the form of image frames and/or audio.
It will be appreciated that, in order to implement the above-described functions, the electronic device includes corresponding hardware and/or software modules for performing the respective functions. The elements and algorithm steps of the various examples described in connection with the embodiments disclosed herein may be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A detection device for fatigue driving, characterized by comprising a controller, a display and an image collector; the controller is configured to:
responding to a detection trigger instruction, controlling the display to play a first animation, and controlling the image collector to collect eye information, wherein the eye information comprises information generated when a driver watches the first animation;
and input the eye information into a fatigue driving model; if the output result of the fatigue driving model meets a preset rule, control the display to play first prompt information used for prompting that the driver is in a fatigue driving state, the fatigue driving model being constructed based on information generated when fatigued drivers watched the first animation.
2. The detection apparatus according to claim 1, wherein the display includes a transparent screen, the transparent screen includes a display surface, the display surface is used for displaying the first animation, the image collector is disposed on a side of the transparent screen away from the display surface, and the image collector collects the eye information through the transparent screen.
3. The detection apparatus according to claim 1 or 2, further comprising an alcohol detector; the controller is further configured to:
in response to a detection trigger instruction, controlling the alcohol detector to detect the alcohol content of the gas exhaled by the driver and outputting the alcohol content to the controller;
and if the alcohol content is greater than the alcohol content threshold value, controlling the display to play second prompt information, wherein the second prompt information is used for prompting that the driver is in a drunk driving state.
4. The detection device of claim 3, further comprising a speaker, the controller further configured to:
if the output result of the fatigue driving model accords with a preset rule, controlling the loudspeaker to play the first prompt message;
and/or if the alcohol content is larger than the alcohol content threshold value, controlling the loudspeaker to play the second prompt message.
5. The detection apparatus according to claim 1 or 2, wherein the eye information includes different kinds of eye information, each kind of the eye information corresponding to one of the fatigue driving models; the controller is further configured to:
inputting each type of eye information into the corresponding fatigue driving model to obtain a single score;
adding the plurality of single scores to obtain a comprehensive score;
and if the comprehensive score is smaller than a score threshold value, playing the first prompt message.
6. The apparatus according to claim 5, wherein each of the eye information corresponds to a weight value that is inversely proportional to randomness of the eye information generated when the driver is in a fatigue driving state; the controller is further configured to:
calculating a weighted score according to the single score and a target weight value, wherein the target weight value is the weight value corresponding to the eye information that produced the single score;
and performing addition calculation on the weighted scores to obtain the comprehensive score.
7. A detection method of fatigue driving, the method being applied to the detection apparatus of any one of claims 1 to 6, characterized in that the detection method comprises:
in response to a detection trigger instruction, playing a first animation and collecting eye information, wherein the eye information comprises information generated by eyes of a driver due to stimulation of the first animation when the driver watches the first animation;
inputting the eye information into a fatigue driving model, and playing first prompt information if the output result of the fatigue driving model meets a preset rule, wherein the fatigue driving model is a model built from historical data, the historical data is information generated when fatigued drivers watched the first animation, and the first prompt information is used for prompting that the driver is in a fatigue driving state.
8. The detection method according to claim 7, characterized in that the detection method further comprises:
detecting the alcohol content of the expired gas of the driver in response to a detection trigger instruction;
and if the alcohol content is greater than the alcohol content threshold value, playing second prompt information, wherein the second prompt information is used for prompting that the driver is in a drunk driving state.
9. The detection method according to claim 7 or 8, characterized in that the eye information includes different kinds of eye information;
inputting each type of eye information into a corresponding fatigue driving model to obtain a single score;
adding the plurality of single scores to obtain a comprehensive score;
and if the comprehensive score is smaller than a score threshold value, playing the first prompt message.
10. The detection method according to claim 9, wherein each of the eye information corresponds to a weight value, and the weight value is inversely proportional to randomness of the eye information generated when the driver is in a fatigue driving state; the step of adding the plurality of single scores to obtain the comprehensive score comprises the following steps:
calculating a weighted score according to the single score and a target weight value, wherein the target weight value is the weight value corresponding to the eye information that produced the single score;
and performing addition calculation on the weighted scores to obtain the comprehensive score.
CN202210468108.8A 2022-04-29 2022-04-29 Detection method and detection equipment for fatigue driving Pending CN114898530A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210468108.8A CN114898530A (en) 2022-04-29 2022-04-29 Detection method and detection equipment for fatigue driving

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210468108.8A CN114898530A (en) 2022-04-29 2022-04-29 Detection method and detection equipment for fatigue driving

Publications (1)

Publication Number Publication Date
CN114898530A true CN114898530A (en) 2022-08-12

Family

ID=82720050

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210468108.8A Pending CN114898530A (en) 2022-04-29 2022-04-29 Detection method and detection equipment for fatigue driving

Country Status (1)

Country Link
CN (1) CN114898530A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104967837A (en) * 2015-06-30 2015-10-07 西安三星电子研究有限公司 Device and method for adjusting three-dimensional display effect
CN112849147A (en) * 2021-01-04 2021-05-28 詹昌文 Drunk driving and fatigue driving monitoring and early warning system and method
CN113838265A (en) * 2021-09-27 2021-12-24 科大讯飞股份有限公司 Fatigue driving early warning method and device and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination