WO2019056259A1 - Method and terminal for fatigue driving warning - Google Patents

Method and terminal for fatigue driving warning

Info

Publication number
WO2019056259A1
Authority
WO
WIPO (PCT)
Prior art keywords
fatigue
component
time
driver
determining
Prior art date
Application number
PCT/CN2017/102689
Other languages
English (en)
French (fr)
Inventor
徐家林
Original Assignee
深圳传音制造有限公司
Priority date
Filing date
Publication date
Application filed by 深圳传音制造有限公司
Priority to PCT/CN2017/102689
Publication of WO2019056259A1


Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B 21/02 - Alarms for ensuring the safety of persons
    • G08B 21/06 - Alarms for ensuring the safety of persons indicating a condition of sleep, e.g. anti-dozing alarms

Definitions

  • the invention relates to signal processing technology, in particular to a method and a terminal for fatigue driving warning.
  • Driving fatigue means that a driver's alertness and safe-driving ability decline as the driver becomes tired; slow reactions and sluggish judgment are its main manifestations.
  • Fatigue driving has become a major factor in today's traffic accidents and seriously threatens people's lives and property. It is therefore necessary to monitor the driver's fatigue state in real time and give early warning of fatigue driving, so as to reduce traffic accidents caused by driver fatigue.
  • At present, a fatigue driving warning function is provided in the on-board artificial intelligence systems of some high-end vehicles.
  • These functions usually detect the driver's EEG changes, head posture, and degree of eyelid drooping, combined with detection of the steering wheel's rotation amplitude and grip force and with road tracking by the on-board camera, to obtain a series of detection results; the on-board artificial intelligence system then calculates the driver's state of wakefulness from these results.
  • The invention provides a method and terminal for fatigue driving warning that determine a real-time fatigue threshold according to the continuous driving time, the current date, and the current time, and use the real-time fatigue threshold as the criterion for judging the driver's degree of fatigue, thereby reducing the computational complexity of the fatigue driving judgment and improving the accuracy of the fatigue driving warning.
  • a method for fatigue driving warning is provided, which is applied to a terminal, and the terminal is provided with a camera; the method includes:
  • the driver is alerted when it is determined that the fatigue value is greater than or equal to the real-time fatigue threshold.
  • the determining of the fatigue threshold reduction component according to the continuous driving time, the current date, and the current time includes:
  • the fatigue threshold reduction component is determined based on the first reduced component, the second reduced component, and the third reduced component.
  • before the determining of the fatigue threshold reduction component according to the continuous driving time, the current date, and the current time, the method further includes:
  • the continuous driving time is obtained according to the starting time and the ending time.
  • the alerting the driver includes:
  • after the driver is alerted, the method further includes: acquiring location information of the terminal; obtaining, according to the location information of the terminal and pre-stored parking area data, information on the nearest parking area having the smallest distance from the terminal; and displaying the nearest parking area information to the driver.
  • the obtaining a driver's head drooping frequency according to the video obtained from the camera includes: obtaining a face image of the driver according to a video obtained from the camera;
  • the head drooping frequency is N/M.
  • the eye state information includes a blink frequency, a blink duration ratio, and a closed eye speed
  • the obtaining of the driver's eye state information includes: obtaining an upper eyelid position according to an area of the binocular image that matches a preset upper eyelid feature; obtaining a lower eyelid position according to an area of the binocular image that matches a preset lower eyelid feature; and obtaining, according to the upper eyelid position and the lower eyelid position, an eye-opening distance indicating the distance between the upper eyelid and the lower eyelid;
  • within the preset time period, a face image whose eye-opening distance is less than or equal to a preset first eye-opening threshold and greater than a second eye-opening threshold is determined to be a semi-closed image, the first eye-opening threshold being greater than the second eye-opening threshold;
  • the obtaining, according to the head droop frequency and the eye state information, the fatigue value of the driver including:
  • the fatigue value of the driver is obtained according to the first fatigue component, the second fatigue component, the third fatigue component, and the fourth fatigue component.
  • the obtaining of the fatigue value of the driver according to the first fatigue component, the second fatigue component, the third fatigue component, and the fourth fatigue component includes: weighting and summing the first fatigue component, the second fatigue component, the third fatigue component, and the fourth fatigue component according to a preset first weight, second weight, third weight, and fourth weight, respectively, to obtain the fatigue value of the driver,
  • the first weight is a weight of the first fatigue component
  • the second weight is a weight of the second fatigue component
  • the third weight is a weight of the third fatigue component
  • the fourth weight is the weight of the fourth fatigue component.
  • a terminal comprising: a real-time threshold determining module, configured to determine a fatigue threshold reduction component according to a continuous driving time, a current date, and a current time, and to determine a real-time fatigue threshold according to an initial fatigue threshold and the fatigue threshold reduction component;
  • An image processing module configured to obtain, according to the video obtained from the camera, a driver's head droop frequency and driver's eye state information
  • a fatigue value obtaining module configured to obtain the fatigue value of the driver according to the head droop frequency and the eye state information
  • the warning module is configured to provide an early warning to the driver when determining that the fatigue value is greater than or equal to the real-time fatigue threshold.
  • the real-time threshold determining module is specifically configured to: determine a first decreasing component according to the continuous driving time, wherein the magnitude of the first decreasing component increases as the continuous driving time increases; determine a second decreasing component according to a preset fatigue-prone date interval and the current date, the second decreasing component being the reduced component corresponding to the fatigue-prone date interval containing the current date; determine a third decreasing component according to a preset fatigue-prone time interval and the current time, the third decreasing component being the reduced component corresponding to the fatigue-prone time interval containing the current time; and determine the fatigue threshold reduction component based on the first decreasing component, the second decreasing component, and the third decreasing component.
  • the real-time threshold determining module is further configured to: before the fatigue threshold reduction component is determined according to the continuous driving time, the current date, and the current time, determine the moving speed of the vehicle according to position signals of the terminal obtained within a preset time period; determine the moment at which the moving speed of the vehicle changes from 0 to greater than 0 as the starting moment of the continuous driving time; determine the current time as the ending moment of the continuous driving time, wherein the moving speed of the vehicle is greater than 0 between the starting moment and the ending moment; and obtain the continuous driving time according to the starting moment and the ending moment.
  • the warning module is specifically configured to: when determining that the fatigue value is greater than or equal to the real-time fatigue threshold, determine the fatigue level of the driver according to the amount by which the fatigue value exceeds the real-time fatigue threshold; and obtain warning information corresponding to the fatigue level and alert the driver according to the warning information.
  • an optional nearest parking area information display module is configured to: acquire location information of the terminal; obtain, according to the location information of the terminal and pre-stored parking area data, information on the nearest parking area having the smallest distance from the terminal; and display the nearest parking area information to the driver.
  • the image processing module is specifically configured to: obtain a face image of the driver according to the video obtained from the camera; determine a binocular image in the face image according to a preset binocular feature and a preset eyebrow feature; acquire the center point of the line connecting the two eyes in the binocular image and the lowest point of the face image in the vertical direction; if the difference in the vertical direction between the position coordinates of the center point and those of the lowest point is smaller than a preset difference, determine the face image to be a head drooping image; and determine the head drooping frequency according to the number N of head drooping images acquired within a preset time period and the number M of face images acquired within the preset time period, the head drooping frequency being N/M.
  • the eye state information includes a blink frequency, a blink duration ratio, and a closed-eye speed; and the image processing module is specifically configured to: obtain an upper eyelid position according to an area of the binocular image that matches a preset upper eyelid feature; obtain a lower eyelid position according to an area of the binocular image that matches a preset lower eyelid feature; obtain, according to the upper eyelid position and the lower eyelid position, an eye-opening distance indicating the distance between the upper eyelid and the lower eyelid; within the preset time period, determine a face image whose eye-opening distance is less than or equal to a preset first eye-opening threshold and greater than a second eye-opening threshold to be a semi-closed image, the first eye-opening threshold being greater than the second eye-opening threshold; within the preset time period, determine a face image whose eye-opening distance is less than or equal to the preset second eye-opening threshold to be a closed-eye image; determine the blink frequency according to the number of times the closed-eye image is continuously acquired within the preset time period; determine (X+Y)/Z as the blink duration ratio according to the number X of closed-eye images, the number Y of semi-closed images, and the number Z of face images acquired within the preset time period; and determine the duration of the semi-closed images continuously acquired before the closed-eye image is acquired within the preset time period, the maximum value of the duration being determined as the closed-eye speed.
  • the fatigue value obtaining module is configured to: obtain a first fatigue component according to the head drooping frequency within the preset time period; obtain a second fatigue component according to the blink duration ratio within the preset time period; obtain a third fatigue component according to the blink frequency within the preset time period; obtain a fourth fatigue component according to the closed-eye speed within the preset time period; and obtain the fatigue value of the driver according to the first fatigue component, the second fatigue component, the third fatigue component, and the fourth fatigue component.
  • the fatigue value obtaining module is further configured to: weight and sum the first fatigue component, the second fatigue component, the third fatigue component, and the fourth fatigue component according to a preset first weight, second weight, third weight, and fourth weight, respectively, to obtain the fatigue value of the driver, wherein the first weight is the weight of the first fatigue component, the second weight is the weight of the second fatigue component, the third weight is the weight of the third fatigue component, and the fourth weight is the weight of the fourth fatigue component.
  • a terminal is also provided, comprising: a memory, a processor, and a computer program, wherein the computer program is stored in the memory and the processor runs the computer program to perform the method of fatigue driving warning of the first aspect of the present invention and its various possible designs.
  • a storage medium is also provided, comprising: a readable storage medium and a computer program, the computer program being used to implement the method of fatigue driving warning of the first aspect of the present invention and its various possible designs.
  • the method and the terminal provided by the invention determine the fatigue threshold reducing component according to the continuous driving time, the current date and the current time, and determine the real-time fatigue threshold according to the initial fatigue threshold and the fatigue threshold reducing component;
  • determining the real-time fatigue threshold in this way improves the accuracy of the fatigue driving judgment; on the other hand, the driver's head drooping frequency and the driver's eye state information are obtained from the video obtained from the camera, and the fatigue value of the driver is then obtained from the head drooping frequency and the eye state information; calculating the fatigue value from these two factors improves the calculation accuracy of the fatigue value and reduces the possibility of false warnings; finally, when it is determined that the fatigue value is greater than or equal to the real-time fatigue threshold, the driver is alerted.
  • the method provided by the invention is applied to the terminal and determines the real-time fatigue threshold according to the continuous driving time, the current date, and the current time, using the real-time fatigue threshold as the criterion for judging the driver's degree of fatigue, thereby reducing the computational complexity of the fatigue driving judgment and improving the accuracy of the fatigue driving warning.
  • FIG. 1 is an application scenario of a fatigue driving warning according to an embodiment of the present invention
  • FIG. 2 is a schematic flow chart of a method for early warning of fatigue driving according to an embodiment of the present invention
  • FIG. 3 is a schematic flow chart of another method for fatigue driving warning according to an embodiment of the present invention.
  • FIG. 4 is a schematic flow chart of another method for fatigue driving warning according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of comparison of face images in a frontal and head sag state according to an embodiment of the present invention
  • FIG. 6 is a schematic flowchart of still another method for fatigue driving warning according to an embodiment of the present invention.
  • FIG. 7 is a schematic flowchart of still another method for fatigue driving warning according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a terminal for fatigue driving warning according to an embodiment of the present invention.
  • FIG. 9 is a schematic structural diagram of hardware of a terminal provided by the present invention.
  • FIG. 1 is an application scenario of a fatigue driving warning according to an embodiment of the present invention.
  • the embodiment shown in FIG. 1 takes the mobile phone 12 as an example of the terminal: the driver 11 obtains fatigue driving warnings through the mobile phone 12 mounted on a support frame.
  • the mobile phone 12 is pre-installed on the support frame by the driver 11, and the camera on the mobile phone 12 faces the head position of the driver, and acquires the face image of the driver 11 for recognition and fatigue determination.
  • the fatigue driving warning is integrated into the widely used mobile phone 12, and the driver 11 can activate the fatigue driving warning function while using the mobile phone 12 every day; fatigue driving warning is thus realized without additional equipment, at low cost and with a simple structure.
  • FIG. 2 is a schematic flow chart of a method for early warning of fatigue driving according to an embodiment of the present invention.
  • the method of fatigue driving warning shown in FIG. 2 is applied to a terminal provided with a camera.
  • the camera may be a passive infrared camera or an active infrared camera.
  • the invention is not limited thereto, and may be other devices having an image acquisition function.
  • the method shown in Figure 2 includes the following steps:
  • S110 Determine a fatigue threshold reduction component according to the continuous travel time, the current date, and the current time, and determine a real-time fatigue threshold according to the initial fatigue threshold and the fatigue threshold reduction component.
  • the driver will face greater fatigue risk as the driving time is longer. For example, if the continuous driving time exceeds 4 hours, the driver can be considered to be in a fatigue driving state. In the case that other factors are constant, the longer the continuous driving time, the more likely the driver is to fatigue, and the corresponding reduction of the real-time fatigue threshold can improve the accuracy of the fatigue driving judgment.
  • the current date reflects the impact of seasonal changes or weather changes on the driver. For example, drivers are more prone to fatigue in hot summers or hot weather than in cool autumn, increasing the risk of fatigue driving. On these fatigue-prone dates, reducing the real-time fatigue threshold can also improve the accuracy of fatigue driving judgments.
  • the current time reflects the impact of the time of day on the driver. For example, the driver is more prone to fatigue after 11 o'clock in the evening than at 8 o'clock in the morning. As another example, for a driver who is used to napping for half an hour at 1 p.m., continuing to drive between 12:30 and 1:30 p.m. is very likely to cause fatigue. At these fatigue-prone times, reducing the real-time fatigue threshold can also improve the accuracy of the fatigue driving judgment.
  • the fatigue threshold reduction component is determined by integrating the effects of the continuous travel time, the current date, and the current time on the driver.
  • the method for determining the fatigue threshold reduction component may specifically obtain three reduction values according to the continuous travel time, the current date, and the current time, and then weight the three reduced values to obtain a fatigue threshold reduction component;
  • alternatively, a preset fatigue threshold reduction component calculation model may comprehensively calculate the reduction values produced by the continuous driving time, the current date, and the current time, taking the cross-influence among the three into account, to finally obtain the fatigue threshold reduction component.
  • during periods when fatigue is likely to occur, if the driver is still judged against the high initial fatigue threshold, the terminal may fail to recognize that the driver has entered the initial fatigue state. During such periods the driver also deteriorates faster, and the time from initial fatigue to severe fatigue is shorter. Once the initial fatigue state is not detected promptly, a driver who quickly enters severe fatigue faces a great risk of traffic accidents; moreover, a severely fatigued driver is confused and likely to ignore the warning issued by the terminal, reducing the warning's effectiveness. Therefore, lowering the real-time fatigue threshold during fatigue-prone periods improves the accuracy of the fatigue driving judgment with a simple calculation.
  • the fatigue threshold reduction component is subtracted from the initial fatigue threshold to obtain the real-time fatigue threshold.
  • the real-time fatigue threshold is thus determined by considering the continuous driving time, the current date, and the current time, which helps improve the accuracy of the fatigue driving judgment.
  • the frame pictures constituting the video may be obtained in real time from the video captured by the camera, and the driver's face image is obtained after removing the background of each frame image.
  • the driver's face position and the specific positions of the eyes can be located in the face image, and the movement of the driver's head and of those specific eye locations can then be tracked in the video.
  • the pattern tracking technique is used to obtain the driver's head up and down movement and the upper and lower eyelid opening and closing conditions, thereby determining the driver's head drooping frequency and the driver's eye state information.
  • one manifestation of a person's fatigue is that the eye muscles relax and the distance between the upper and lower eyelids decreases; by the time the head drooping frequency increases, the driver may already be severely fatigued.
  • therefore, the face images contained in the video obtained from the camera are analyzed to obtain the driver's head drooping frequency and the driver's eye state information.
  • the fatigue value of the driver may be calculated as the sum of the fatigue indicated by the head drooping frequency and the fatigue indicated by the eye state information; or a reference fatigue value may be determined from the eye state information, a fatigue value increase component determined from the head drooping frequency, and the sum of the reference fatigue value and the increase component taken as the fatigue value of the driver; or weights may be determined separately for the head drooping frequency and the eye state information, and the fatigue component indicated by the head drooping frequency and the fatigue component indicated by the eye state information weighted and summed to obtain the fatigue value of the driver.
  • if the fatigue value were judged from a single factor, a large calculation error would be likely. For example, a nodding movement during a conversation between the driver and a passenger would be recognized by the terminal as an increased head drooping frequency and would wrongly yield a high fatigue value.
  • the fatigue value is calculated from two factors: the head sag frequency and the eye state information, which can improve the calculation accuracy of the fatigue value and reduce the possibility of false warning.
  • there is no fixed order between the process of S110 and the process of executing S120-S130.
  • the two processes may be executed simultaneously, or one may be executed before the other; the present invention is not limited to the execution order shown in FIG. 2.
  • S140 alert the driver when determining that the fatigue value is greater than or equal to the real-time fatigue threshold.
  • the fatigue value is compared with the real-time fatigue threshold, and when the fatigue value is greater than or equal to the real-time fatigue threshold, the driver is judged to be fatigue driving, and the driver is alerted.
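  • As a rough illustration of how S110-S140 fit together, the following sketch (not taken from the patent) strings the steps into one evaluation cycle; compute_reduction_component, compute_fatigue_value, and alert_driver are assumed placeholder stubs standing in for the computations detailed in the later embodiments.

```python
from datetime import datetime

# Placeholder stubs; the detailed computations are sketched in the later embodiments.
def compute_reduction_component(driving_hours, current_date, current_time):
    return 0.0  # assumed stub

def compute_fatigue_value(head_droop_frequency, eye_state):
    return 0.0  # assumed stub

def alert_driver(excess):
    print(f"Fatigue warning, excess over real-time threshold: {excess:.2f}")

def fatigue_warning_step(initial_threshold, driving_hours, head_droop_frequency, eye_state):
    """One evaluation cycle following S110-S140 (illustrative only)."""
    now = datetime.now()
    # S110: real-time threshold = initial threshold minus the reduction component
    reduction = compute_reduction_component(driving_hours, now.date(), now.time())
    real_time_threshold = initial_threshold - reduction
    # S120-S130: fatigue value from head drooping frequency and eye state information
    fatigue_value = compute_fatigue_value(head_droop_frequency, eye_state)
    # S140: warn when the fatigue value reaches the real-time threshold
    if fatigue_value >= real_time_threshold:
        alert_driver(fatigue_value - real_time_threshold)
        return True
    return False
```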
  • the fatigue threshold reduction component is determined according to the continuous driving time, the current date and the current time, and the real-time fatigue threshold is determined according to the initial fatigue threshold and the fatigue threshold reducing component;
  • determining the real-time fatigue threshold in this way improves the accuracy of the fatigue driving judgment; on the other hand, the driver's head drooping frequency and the driver's eye state information are obtained from the video obtained from the camera, and the fatigue value of the driver is then obtained from the head drooping frequency and the eye state information; calculating the fatigue value from these two factors improves the calculation accuracy of the fatigue value and reduces the possibility of false warnings; finally, when it is determined that the fatigue value is greater than or equal to the real-time fatigue threshold, the driver is alerted.
  • the method provided by this embodiment is applied to the terminal and determines the real-time fatigue threshold according to the continuous driving time, the current date, and the current time, using the real-time fatigue threshold as the criterion for judging the driver's degree of fatigue, thereby reducing the computational complexity of the fatigue driving judgment and improving the warning accuracy of fatigue driving.
  • in this embodiment, the driver's degree of fatigue is judged differently for different fatigue-prone periods, which reduces the difficulty of the fatigue driving judgment and improves the warning accuracy of fatigue driving.
  • FIG. 3 is a schematic flow chart of another method for fatigue driving warning according to an embodiment of the present invention.
  • the method embodiment shown in FIG. 3 is an implementation manner for determining a fatigue threshold reduction component in the embodiment shown in FIG. 2.
  • the process shown in FIG. 3 is specifically:
  • the determination of the first reduced component may be divided into a first determination process used when the continuous driving time is less than the fatigue driving time threshold, and a second determination process used when the continuous driving time is greater than or equal to the fatigue driving time threshold.
  • the fatigue travel time threshold may be a boundary time at which the driver is prone to fatigue, such as 3 hours, 3.5 hours, or 4 hours.
  • when the continuous driving time is less than the fatigue driving time threshold, the first decreasing component increases with the continuous driving time at a first constant rate; when the continuous driving time is greater than or equal to the fatigue driving time threshold, the first decreasing component increases with the continuous driving time at a rate greater than the first constant rate.
  • a specific calculation of the first reduced component may be expressed as a piecewise function of the continuous driving time, where D1 is the first decreasing component, t is the continuous driving time, and T is the preset fatigue driving time threshold; once t exceeds T, the first decreasing component increases more rapidly with the continuous driving time, which is consistent with the fatigue law of the human body and is conducive to accurately determining the first reduced component.
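  • The concrete formula for D1 is not reproduced above, so the following is only an assumed piecewise form consistent with the description: linear growth at a first constant rate while t < T and steeper growth once t >= T. The coefficients k1 and k2 are illustrative values, not the patent's.

```python
def first_reduction_component(t: float, T: float = 4.0,
                              k1: float = 0.5, k2: float = 2.0) -> float:
    """Illustrative D1(t): grows at rate k1 while t < T and at a faster rate k2 once t >= T.

    t and T are the continuous driving time and the fatigue driving time threshold in hours;
    k1 and k2 are assumed coefficients, not values taken from the patent.
    """
    if t < T:
        return k1 * t
    # beyond the fatigue driving time threshold the component rises more steeply
    return k1 * T + k2 * (t - T)
```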
  • the following describes an implementation by which the terminal obtains the continuous driving time; specifically, the continuous driving time is determined according to changes in the moving speed of the vehicle.
  • the location signal of the terminal is acquired, and the moving speed of the terminal may be determined according to the location signal obtained in the preset time period.
  • the terminal is mounted on the vehicle, so the moving speed of the terminal is determined as the moving speed of the vehicle.
  • the moment at which the moving speed of the vehicle changes from 0 to greater than 0 is determined as the starting moment of the continuous driving time, and the current time is determined as the ending moment of the continuous driving time, wherein the moving speed of the vehicle is greater than 0 between the starting moment and the ending moment.
  • the vehicle is always in a driving state between the start time and the end time. According to the starting time and the ending time, the continuous driving time is obtained.
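  • A minimal sketch of this bookkeeping, assuming the terminal samples a speed value periodically (for example, derived from consecutive position fixes); resetting whenever the speed returns to 0 follows the description above, while the class and parameter names are illustrative.

```python
class DrivingTimeTracker:
    """Tracks continuous driving time from periodic speed samples (illustrative)."""

    def __init__(self):
        self.start_time = None  # moment at which the speed last changed from 0 to > 0

    def update(self, timestamp: float, speed: float) -> float:
        """Feed one speed sample; returns the continuous driving time in seconds."""
        if speed > 0:
            if self.start_time is None:
                self.start_time = timestamp      # speed changed from 0 to > 0: start counting
            return timestamp - self.start_time   # the current time is the ending moment
        self.start_time = None                   # vehicle stopped: the run is over
        return 0.0
```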
  • S220 Determine a second decreasing component according to the preset fatigue-prone date interval and the current date, the second decreasing component being the decreasing component corresponding to the fatigue-prone date interval containing the current date.
  • the fatigue-prone date intervals may be set according to the seasonal temperature distribution. For example, three fatigue-prone date intervals may be preset for June-July, July-August, and August-September. Since the weather is generally hottest in July-August, that interval can correspond to the largest reduced component, and the June-July and August-September intervals may correspond to the same or different reduced components.
  • alternatively, the fatigue-prone date intervals can be preset by the user in the terminal. Owing to physical differences between people, different users may be prone to fatigue in different periods of the year, so letting the user preset the fatigue-prone date intervals can improve the accuracy of the second reduced component.
  • the current date is first obtained from the system clock of the terminal, the target fatigue-prone date interval containing the current date is then determined among the above fatigue-prone date intervals, and the reduced component corresponding to the target fatigue-prone date interval is determined as the second reduced component.
  • the fatigue-prone time intervals may be preset by the system or customized by the user according to his own work and rest habits.
  • the current time is obtained from the system clock of the terminal, the target fatigue-prone time interval containing the current time is then determined among the fatigue-prone time intervals, and the reduced component corresponding to the target fatigue-prone time interval is determined as the third reduced component.
  • the continuous driving time, the current date, and the current time all affect the likelihood that the driver becomes fatigued; according to their degree of influence, the first, second, and third reduced components corresponding to the three may be weighted and summed to obtain the fatigue threshold reduction component.
  • the effects of the three are not independent and may cross each other.
  • the weights of the first reduced component, the second reduced component, and the third reduced component are equal.
  • in another implementation, when the continuous driving time is less than the fatigue driving time threshold, the weights corresponding to the second reduced component and the third reduced component are greater than the weight of the first reduced component; when the continuous driving time is greater than or equal to the fatigue driving time threshold, the weights corresponding to the second reduced component and the third reduced component are less than the weight of the first reduced component.
  • in this embodiment, the first decreasing component, the second decreasing component, and the third decreasing component are obtained according to the continuous driving time, the current date, and the current time, respectively, and the fatigue threshold reduction component is then determined from the first, second, and third decreasing components; this comprehensively considers the impact of the continuous driving time, the current date, and the current time on the driver's fatigue and improves how well the fatigue threshold reduction component matches the driver's current state.
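  • The sketch below combines the three reduced components and derives the real-time threshold; the specific weight values, and the choice to switch them around the fatigue driving time threshold T, are assumptions for illustration and are not prescribed by the description.

```python
def reduction_component(d1: float, d2: float, d3: float,
                        t: float, T: float = 4.0) -> float:
    """Weighted sum of the three reduced components (weights are illustrative).

    Before the fatigue driving time threshold T the date and time components (d2, d3)
    are given more weight than the driving-time component (d1); at or beyond T the
    opposite holds, following the description above.
    """
    if t < T:
        w1, w2, w3 = 0.2, 0.4, 0.4
    else:
        w1, w2, w3 = 0.6, 0.2, 0.2
    return w1 * d1 + w2 * d2 + w3 * d3

def real_time_threshold(initial_threshold: float, reduction: float) -> float:
    """The reduction component is subtracted from the initial fatigue threshold."""
    return initial_threshold - reduction
```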
  • the terminal provides an early warning to the driver when determining that the fatigue value is greater than or equal to the real-time fatigue threshold.
  • a specific implementation may be: when it is determined that the fatigue value is greater than or equal to the real-time fatigue threshold, the fatigue level of the driver is determined according to the amount by which the fatigue value exceeds the real-time fatigue threshold.
  • a plurality of fatigue levels may be preset, and different fatigue levels may correspond to different preset warning information. Since the driver's ability to respond to external information at different levels of fatigue is different, different content or different types of warning information can be set for different fatigue levels. Different types of early warning information may be, for example, splash screen prompt information, voice prompt information, and the like.
  • the warning information of different contents may be, for example, audible prompts such as a simple beep, "Warning: fatigue driving", or "Fatigue driving detected, please stop and rest".
  • the terminal obtains the warning information corresponding to the fatigue level and alerts the driver according to that warning information.
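  • A hedged example of the level-based warning described here; the level boundaries and message texts are invented for illustration, since the description only requires that several fatigue levels and corresponding warning contents be preset.

```python
from typing import Optional

def warning_message(fatigue_value: float, real_time_threshold: float) -> Optional[str]:
    """Map the excess over the real-time threshold to a fatigue level and a message.

    The level boundaries and messages are assumed examples, not values from the patent.
    """
    excess = fatigue_value - real_time_threshold
    if excess < 0:
        return None                                   # below threshold: no warning
    if excess < 0.1:
        return "Beep: early signs of fatigue"         # level 1
    if excess < 0.3:
        return "Warning: fatigue driving"             # level 2
    return "Severe fatigue, please stop and rest"     # level 3
```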
  • the warning information may also include a preset telephone number; the terminal may dial the preset number, for example the number of a family member, so that the family member can talk to the driver when the call is received and advise the driver to stop and rest.
  • the terminal can also display the nearest parking area information to the user. Specifically, the location information of the terminal is first acquired, and then the information of the nearest parking area, i.e. the parking area with the smallest distance from the terminal, is obtained according to the location information of the terminal and the pre-stored parking area data.
  • the parking area data may be offline downloaded map data, parking lot data, or the like, or may be requested in real time to obtain the parking area data near the terminal from the network server.
  • the distances between all parking areas and the terminal are compared, and the parking area with the smallest distance from the terminal is determined as the nearest parking area. Information on the nearest parking area, such as its name, the route to it, and the remaining parking spaces, is then obtained. Finally, the nearest parking area information is shown to the driver.
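  • The nearest-parking-area lookup can be sketched as a simple minimum-distance search over the pre-stored parking area data; the haversine distance and the dictionary fields used below are assumptions, not a specification from the patent.

```python
import math

def nearest_parking_area(terminal_pos, parking_areas):
    """Return the pre-stored parking area closest to the terminal (illustrative).

    terminal_pos is a (lat, lon) pair; parking_areas is a list of dicts assumed to
    carry 'name', 'lat', 'lon' and, optionally, route and free-space information.
    """
    def haversine(p, q):
        r = 6371000.0  # mean Earth radius in metres
        lat1, lon1, lat2, lon2 = map(math.radians, (p[0], p[1], q[0], q[1]))
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))

    return min(parking_areas,
               key=lambda area: haversine(terminal_pos, (area["lat"], area["lon"])))
```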
  • FIG. 4 is a schematic flow chart of still another method for fatigue driving warning according to an embodiment of the present invention.
  • the method embodiment shown in FIG. 4 is an implementation manner in which the terminal obtains the driver's head droop frequency according to the video obtained from the camera in the embodiment shown in FIG. 2.
  • the method shown in Figure 4 is specifically:
  • a frame image may first be obtained from the video, and a preset face recognition model is used to determine whether the frame image contains a face image; if so, the background is removed to obtain the face image, and if not, the next frame image is acquired. Since the camera faces the driver's head, face images can be obtained continuously from the video as long as the driver is driving, and each face image corresponds to one point in time.
  • both the human eye region and the eyebrow region have the characteristics of darker color and bilateral symmetry, and the shape of the eye and the shape of the eyebrow are more distinct from other parts of the facial features.
  • the binocular feature and the preset eyebrow feature are used as the positioning basis, and the binocular image is determined in the face image, which can improve the positioning accuracy.
  • if the difference in the vertical direction between the position coordinates of the center point and those of the lowest point is smaller than a preset difference, the face image is determined to be a head drooping image.
  • the center point of the connection of the eyes in the binocular image may correspond to the position of the tip of the bridge of the nose, and the lowest point of the vertical direction of the face image may correspond to the position of the chin.
  • when the driver faces the camera directly, the vertical distance between these facial features is at its largest; when the driver looks up, lowers the head, or turns the head to either side, the relative positions of the facial features in the captured face image change.
  • FIG. 5 is a schematic diagram of comparison of face images in a frontal and head sag state according to an embodiment of the present invention.
  • FIG. 5 shows a frontal face image and a head-drooping face image acquired by the camera.
  • in the face image acquired when the driver's head droops, the vertical distance between the facial features becomes smaller, so that the difference H1 in the vertical direction between the center point position coordinates and the lowest point position coordinates is smaller than the standard value H0, and the face image is determined to be a head drooping image.
  • S350 Determine a head sag frequency according to the number N of head sag images acquired in the preset time period and the number M of face images acquired in the preset time period, and the head sag frequency is N/M.
  • in this embodiment, the driver's face image is obtained from the video, the binocular image is located in the face image, the lowest point in the vertical direction is determined in the face image, and the center point of the line connecting the two eyes is determined in the binocular image; a head drooping image is identified according to the difference in the vertical direction between the center point position coordinates and the lowest point position coordinates, and finally the head drooping frequency is determined to be N/M from the number N of head drooping images and the number M of face images acquired in the preset time period. The driver's head drooping frequency is thus determined from the video with high accuracy.
  • FIG. 6 is a schematic flow chart of still another method for fatigue driving warning according to an embodiment of the present invention.
  • the method embodiment shown in FIG. 6 is an implementation of obtaining the driver's eye state information according to the video obtained from the camera in the embodiment shown in FIG. 2.
  • the eye state information may be a blink frequency, a blink duration ratio, and a closed eye speed.
  • the specific process of obtaining the driver's eye state information shown in FIG. 6 may be:
  • S410 Obtain an upper eyelid position according to an area in the binocular image that matches the preset upper eyelid feature; and obtain a lower eyelid position according to the region of the binocular image that matches the preset lower eyelid feature.
  • the upper eyelid feature and the lower eyelid feature may be obtained by training in a video sample in advance.
  • the upper eyelid position is the lowest point of the upper eyelid, and the lower eyelid position is the highest point of the lower eyelid; the difference between the upper eyelid position and the lower eyelid position along the direction of the face midline is determined as the eye-opening distance.
  • the midline of the face is the line connecting the midpoint between the eyebrows and the midpoint between the eyes.
  • the face image may be tilted or rotated because of the deflection of the driver's head, but the relative positional relationship between the facial features does not change; therefore, calculating the eye-opening distance with the face midline as the reference line can reduce the effect of head motion on the distance and improve the accuracy of the eye-opening distance.
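  • A possible way to compute the eye-opening distance along the face midline is to project the vector from the upper eyelid point to the lower eyelid point onto the midline direction, as sketched below; the point representation is an assumption.

```python
import math

def eye_opening_distance(upper_eyelid, lower_eyelid, midline_top, midline_bottom) -> float:
    """Length of the upper-to-lower eyelid vector projected onto the face midline.

    All points are (x, y) pixel coordinates. Using the midline as the reference axis
    keeps the distance meaningful when the head is tilted or rotated in the image plane.
    """
    mx, my = midline_bottom[0] - midline_top[0], midline_bottom[1] - midline_top[1]
    norm = math.hypot(mx, my) or 1.0
    ux, uy = mx / norm, my / norm                      # unit vector along the midline
    dx, dy = lower_eyelid[0] - upper_eyelid[0], lower_eyelid[1] - upper_eyelid[1]
    return abs(dx * ux + dy * uy)                      # length of the projection
```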
  • S440 Determine, in the preset time period, a face image whose eye-opening distance is less than or equal to the preset second eye-opening threshold as a closed-eye image.
  • S430 and S440 may be executed simultaneously or sequentially.
  • the execution order of S430 and S440 is not limited in this embodiment, and the execution sequence shown in FIG. 6 is an optional execution sequence.
  • an eye-opening distance less than or equal to the preset first eye-opening threshold and greater than the second eye-opening threshold indicates that the driver in the face image is between the normal eyes-open state and the eyes-closed state; this may be a drowsy squinting state or an intermediate stage of a blink, and in this embodiment such an image is determined to be a semi-closed image.
  • an eye-opening distance less than or equal to the preset second eye-opening threshold indicates that the driver in the face image is in an eyes-closed or nearly closed state, and in this embodiment such an image is determined to be a closed-eye image.
  • S450 Determine a blink frequency according to the number of times the closed-eye image is continuously acquired within a preset time period.
  • each face image corresponds to one point in time. Therefore, the ratio of the number X of closed-eye images and the number Y of semi-closed images to the number Z of face images directly corresponds to the proportion of the time period spent in the closed-eye and semi-closed states.
  • the blink frequency is the ratio of the number of times the closed-eye image is continuously acquired within the preset time period to the length of the preset time period, where one such time refers to one run of consecutively acquired closed-eye images.
  • for example, if closed-eye images are continuously acquired 3 times within the preset time period (5 closed-eye images in the first run, 6 in the second, and 6 in the third) and the preset time period is 20 seconds, the blink frequency is 3/20 (times per second). Under normal circumstances the blink frequency varies little, whereas in a fatigued state the driver may blink rapidly to stay alert, or may doze with the eyes half open and not blink for a long time. Therefore, the blink frequency can be used as a reference factor for judging whether the driver is in a fatigue state.
  • S460 Determine (X+Y)/Z as a blink duration ratio according to the number X of closed eye images, the number Y of semi-closed images, and the number Z of face images respectively acquired in the preset time period.
  • the blink duration ratio is the ratio of the sum of the time the driver spends in the closed-eye state and the time spent in the semi-closed state to the preset time period.
  • the semi-closed state can reflect both the intermediate stage of a blink and the squinting that occurs when the driver is drowsy. It can be seen that the blink duration ratio can also be used as a reference factor for judging whether the driver is in a fatigue state.
  • S470 Determine a length of time of the semi-closed image continuously acquired before acquiring the closed-eye image in the preset time period, and determine a maximum value of the length of time as the closed-eye speed.
  • the closed eye speed is the longest time the driver uses from the semi-closed eye state to the closed eye state.
  • for example, if the duration of the semi-closed images continuously acquired before the closed-eye image is 0.5 seconds the first time, 0.7 seconds the second time, and 0.8 seconds the third time, the closed-eye speed is 0.8 seconds.
  • the closed eye speed can also be used as a reference factor for judging whether the driver is in a fatigue state.
  • S450, S460, and S470 may be executed simultaneously, or sequentially sequentially or in other orders. The present invention does not limit the execution order of S450, S460, and S470.
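  • Assuming the face images of the preset time period have already been classified as open, semi-closed, or closed-eye as described above, the three eye state metrics can be computed roughly as follows; the per-frame label encoding is an assumption.

```python
def eye_state_metrics(frame_states, period_seconds: float):
    """Blink frequency, blink duration ratio, and closed-eye speed from per-frame labels.

    frame_states is a sequence of 'open', 'semi', or 'closed' labels covering the
    preset time period, assumed to be evenly spaced in time.
    """
    frame_states = list(frame_states)
    z = len(frame_states)
    x = frame_states.count("closed")
    y = frame_states.count("semi")

    # blink frequency: number of runs of consecutive closed-eye frames per second
    runs, prev = 0, None
    for state in frame_states:
        if state == "closed" and prev != "closed":
            runs += 1
        prev = state
    blink_frequency = runs / period_seconds if period_seconds else 0.0

    # blink duration ratio: (X + Y) / Z
    blink_duration_ratio = (x + y) / z if z else 0.0

    # closed-eye speed: longest run of semi-closed frames immediately before a closed run
    frame_dt = period_seconds / z if z else 0.0
    longest = current = 0
    prev = None
    for state in frame_states:
        if state == "semi":
            current += 1
        elif state == "closed":
            if prev == "semi":
                longest = max(longest, current)
            current = 0
        else:
            current = 0
        prev = state
    closed_eye_speed = longest * frame_dt

    return blink_frequency, blink_duration_ratio, closed_eye_speed
```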
  • FIG. 7 is a schematic flowchart diagram of still another method for fatigue driving warning according to an embodiment of the present invention.
  • the method embodiment shown in FIG. 7 is an implementation of obtaining the fatigue value of the driver according to the head drooping frequency and the eye state information in the embodiment shown in FIG. 2. The specific process of the embodiment shown in FIG. 7 can be:
  • S510 Obtain a first fatigue component according to a head droop frequency in a preset time period, obtain a second fatigue component according to a blink duration ratio in a preset time period, and obtain a third fatigue component according to the blink frequency in the preset time period.
  • the fourth fatigue component is obtained according to the closed eye speed within the preset time period.
  • the magnitude and sign of the first, second, third, and fourth fatigue components are determined according to the degree to which the head drooping frequency, the blink duration ratio, the blink frequency, and the closed-eye speed deviate from normal, respectively. Taking the blink frequency as an example, when the blink frequency is greater than an upper limit frequency or less than a lower limit frequency, the third fatigue component is positive and takes a preset fixed value; when the blink frequency is less than or equal to the upper limit frequency and greater than or equal to the lower limit frequency, the third fatigue component is zero.
  • the first weight is the weight of the first fatigue component
  • the second weight is the weight of the second fatigue component
  • the third weight is the weight of the third fatigue component
  • the fourth weight is the weight of the fourth fatigue component.
  • the first fatigue component, the second fatigue component, the third fatigue component, and the fourth fatigue component may be weighted and summed according to the preset first weight, the second weight, the third weight, and the fourth weight, respectively. Get the driver's fatigue value.
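  • A minimal sketch of the weighted summation described above; the weight values are placeholders, since the description only requires preset first to fourth weights.

```python
def driver_fatigue_value(f1: float, f2: float, f3: float, f4: float,
                         weights=(0.4, 0.2, 0.2, 0.2)) -> float:
    """Weighted sum of the four fatigue components (head drooping frequency, blink
    duration ratio, blink frequency, closed-eye speed). The weight values shown are
    placeholders; the description only requires preset first to fourth weights."""
    w1, w2, w3, w4 = weights
    return w1 * f1 + w2 * f2 + w3 * f3 + w4 * f4
```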
  • FIG. 8 is a schematic diagram of a terminal for fatigue driving warning according to an embodiment of the present invention.
  • a terminal shown in FIG. 8 includes:
  • the real-time threshold determination module 801 is configured to determine a fatigue threshold reduction component according to the continuous travel time, the current date, and the current time, and determine a real-time fatigue threshold according to the initial fatigue threshold and the fatigue threshold reduction component.
  • the image processing module 802 is configured to obtain a driver's head droop frequency and driver's eye state information according to the video obtained from the camera.
  • the fatigue value obtaining module 803 is configured to obtain a fatigue value of the driver according to the head droop frequency and the eye state information.
  • the warning module 804 is configured to provide an early warning to the driver when determining that the fatigue value is greater than or equal to the real-time fatigue threshold.
  • the terminal of the embodiment shown in FIG. 8 is correspondingly used to perform the steps performed by the terminal in the method embodiment shown in FIG. 2, and the implementation principle and technical effects are similar, and details are not described herein again.
  • the real-time threshold determining module is specifically configured to: determine a first decreasing component according to the continuous driving time, wherein the magnitude of the first decreasing component increases as the continuous driving time increases; determine a second decreasing component according to a preset fatigue-prone date interval and the current date, the second decreasing component being the reduced component corresponding to the fatigue-prone date interval containing the current date; determine a third decreasing component according to a preset fatigue-prone time interval and the current time, the third decreasing component being the reduced component corresponding to the fatigue-prone time interval containing the current time; and determine the fatigue threshold reduction component according to the first, second, and third decreasing components.
  • the real-time threshold determining module is further configured to: before the fatigue threshold reduction component is determined according to the continuous driving time, the current date, and the current time, determine the moving speed of the vehicle according to position signals of the terminal obtained within a preset time period; determine the moment at which the moving speed of the vehicle changes from 0 to greater than 0 as the starting moment of the continuous driving time; determine the current time as the ending moment of the continuous driving time, wherein the moving speed of the vehicle is greater than 0 between the starting moment and the ending moment; and obtain the continuous driving time according to the starting moment and the ending moment.
  • the warning module is specifically configured to: when determining that the fatigue value is greater than or equal to the real-time fatigue threshold, determine the fatigue level of the driver according to the amount by which the fatigue value exceeds the real-time fatigue threshold; and obtain warning information corresponding to the fatigue level and alert the driver according to the warning information.
  • the nearest parking area information display module is configured to: acquire location information of the terminal; obtain, according to the location information of the terminal and the pre-stored parking area data, information on the nearest parking area having the smallest distance from the terminal; and display the nearest parking area information to the driver.
  • the image processing module is specifically configured to: obtain a face image of the driver according to the video obtained from the camera; determine a binocular image in the face image according to a preset binocular feature and a preset eyebrow feature; acquire the center point of the line connecting the two eyes in the binocular image and the lowest point of the face image in the vertical direction; if the difference in the vertical direction between the position coordinates of the center point and those of the lowest point is smaller than a preset difference, determine the face image to be a head drooping image; and determine the head drooping frequency according to the number N of head drooping images acquired within the preset time period and the number M of face images acquired within the preset time period, the head drooping frequency being N/M.
  • the eye state information includes a blink frequency, a blink duration ratio, and a closed-eye speed; and the image processing module is configured to: obtain an upper eyelid position according to an area of the binocular image that matches a preset upper eyelid feature; obtain a lower eyelid position according to an area of the binocular image that matches a preset lower eyelid feature; obtain, according to the upper eyelid position and the lower eyelid position, an eye-opening distance indicating the distance between the upper eyelid and the lower eyelid; within the preset time period, determine a face image whose eye-opening distance is less than or equal to a preset first eye-opening threshold and greater than a second eye-opening threshold to be a semi-closed image, the first eye-opening threshold being greater than the second eye-opening threshold; within the preset time period, determine a face image whose eye-opening distance is less than or equal to the preset second eye-opening threshold to be a closed-eye image; determine the blink frequency according to the number of times the closed-eye image is continuously acquired within the preset time period; determine (X+Y)/Z as the blink duration ratio according to the number X of closed-eye images, the number Y of semi-closed images, and the number Z of face images acquired within the preset time period; and determine the duration of the semi-closed images continuously acquired before the closed-eye image is acquired within the preset time period, the maximum value of the duration being determined as the closed-eye speed.
  • the fatigue value obtaining module is configured to: obtain a first fatigue component according to the head drooping frequency within the preset time period; obtain a second fatigue component according to the blink duration ratio within the preset time period; obtain a third fatigue component according to the blink frequency within the preset time period; obtain a fourth fatigue component according to the closed-eye speed within the preset time period; and obtain the fatigue value of the driver according to the first fatigue component, the second fatigue component, the third fatigue component, and the fourth fatigue component.
  • the fatigue value obtaining module is further configured to: weight and sum the first fatigue component, the second fatigue component, the third fatigue component, and the fourth fatigue component according to a preset first weight, second weight, third weight, and fourth weight, respectively, to obtain the fatigue value of the driver, wherein the first weight is the weight of the first fatigue component, the second weight is the weight of the second fatigue component, the third weight is the weight of the third fatigue component, and the fourth weight is the weight of the fourth fatigue component.
  • FIG. 9 is a schematic structural diagram of hardware of a terminal provided by the present invention.
  • the terminal shown in FIG. 9 may be a mobile terminal.
  • Mobile terminals include, but are not limited to, mobile phones, personal digital assistants (PDAs), tablets, and portable devices (for example, portable computers, pocket computers, or handheld computers) having image capture capability.
  • the embodiment of the present invention does not limit the form of the terminal.
  • the terminal includes: a processor 911 and a memory 912;
  • the memory 912 is configured to store a computer program, and the memory may also be a flash memory.
  • the processor 911 is configured to execute execution instructions of the memory storage to implement the steps performed by the terminal in the method for the fatigue driving warning described above. For details, refer to the related description in the foregoing method embodiments.
  • the memory 912 can be either standalone or integrated with the processor 911.
  • the terminal may further include:
  • a bus 913 is provided for connecting the memory 912 and the processor 911.
  • the present invention also provides a readable storage medium having an execution instruction stored therein; when at least one processor of the terminal executes the execution instruction, the terminal performs the method of fatigue driving warning provided by the various embodiments described above.
  • the present invention also provides a program product comprising an execution instruction stored in a readable storage medium.
  • At least one processor of the terminal may read the execution instructions from a readable storage medium, and the at least one processor executes the execution instructions such that the terminal implements the method of fatigue driving warning provided by the various embodiments described above.
  • the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), or an application-specific integrated circuit (ASIC).
  • the general-purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed in connection with the present application may be performed directly by a hardware processor, or by a combination of hardware and software modules in the processor.

Landscapes

  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

A method and terminal for fatigue driving warning. The method of fatigue driving warning is applied to a terminal provided with a camera; the method includes: determining a fatigue threshold reduction component according to the continuous driving time, the current date, and the current time, and determining a real-time fatigue threshold according to an initial fatigue threshold and the fatigue threshold reduction component (S110); obtaining the driver's head drooping frequency and the driver's eye state information according to the video obtained from the camera (S120); obtaining the driver's fatigue value according to the head drooping frequency and the eye state information (S130); and warning the driver when it is determined that the fatigue value is greater than or equal to the real-time fatigue threshold (S140). The method and terminal determine the real-time fatigue threshold according to the continuous driving time, the current date, and the current time, and use the real-time fatigue threshold as the criterion for judging the driver's degree of fatigue, which reduces the computational complexity of the fatigue driving judgment and improves the accuracy of the fatigue driving warning.

Description

疲劳驾驶预警的方法和终端 技术领域
本发明涉及信号处理技术,尤其涉及一种疲劳驾驶预警的方法和终端。
背景技术
驾驶疲劳是指汽车驾驶员的警觉和安全驾驶能力随着驾驶员的疲劳而下降,反应迟钝、判断迟缓节奏缓慢等是驾驶员驾驶疲劳的主要表现形式。疲劳驾驶已成为现今交通事故的重要因素,严重威胁着人们的生命和财产安全,有必要对驾驶员的疲劳状态进行实时监测,并对疲劳驾驶进行预警,以减少由驾驶员疲劳驾驶而引发的交通事故。
目前,在一些高端汽车装备的车载人工智能系统中提供有疲劳驾驶预警功能。这些疲劳驾驶预警功能通常是对驾驶员的脑电图变化信息、头部姿势信息和眼睑下垂程度进行检测,结合对方向盘的转动幅度以及方向盘的紧握力检测,以及通过车载摄像头对道路追踪检测,获得一系列的检测结果,车载人工智能系统根据检测结果计算驾驶员的清醒状态。
对于出租司机、货运司机来说,由于工作需要,长时间的驾驶导致出现疲劳驾驶的概率上升,而车载人工智能系统一般成本较高,无法在大多数的出租车和货车上推广装备。现有的疲劳驾驶预警方式计算复杂、难度大,对设备要求太高。
发明内容
本发明提供一种疲劳驾驶预警的方法和终端,根据连续行车时间、当前日期和当前时刻确定实时疲劳阈值,以实时疲劳阈值作为驾驶员疲劳程度的判断标准,降低了疲劳驾驶判断的计算复杂程度,提高了疲劳驾驶的预警准确性。
根据本发明的第一方面,提供一种疲劳驾驶预警的方法,应用于终端,所述终端设置有摄像头;所述方法包括:
根据连续行车时间、当前日期和当前时刻,确定疲劳阈值减小分量,并根据初始疲劳阈值和所述疲劳阈值减小分量,确定实时疲劳阈值;
根据从所述摄像头获得的视频,获得驾驶员的头部下垂频率和驾驶员的眼部状态 信息;
根据所述头部下垂频率和眼部状态信息,获得所述驾驶员的疲劳值;
在确定所述疲劳值大于或等于所述实时疲劳阈值时,向所述驾驶员进行预警。
作为一种可选的实施方式，所述根据连续行车时间、当前日期和当前时刻，确定疲劳阈值减小分量，包括：
根据所述连续行车时间,确定第一减小分量,其中,所述第一减小分量的大小随着所述连续行车时间的增大而增大;
根据预设的易疲劳日期区间和所述当前日期,确定第二减小分量,所述第二减小分量为包含所述当前日期的所述易疲劳日期区间对应的减小分量;
根据预设的易疲劳时刻区间和所述当前时刻,确定第三减小分量,所述第三减小分量为包含所述当前时刻的所述易疲劳时刻区间对应的减小分量;
根据所述第一减小分量、第二减小分量和第三减小分量,确定所述疲劳阈值减小分量。
作为一种可选的实施方式,在所述根据连续行车时间、当前日期和当前时刻,确定疲劳阈值减小分量之前,还包括:
根据预设时间段内获得的所述终端的位置信号,确定车辆移动速度;
将所述车辆移动速度由0变为大于0的时刻，确定为所述连续行车时间的起算时刻；
将所述当前时刻确定为所述连续行车时间的终止时刻,其中,在所述起算时刻至所述终止时刻之间,所述车辆移动速度大于0;
根据所述起算时刻和所述终止时刻,获得所述连续行车时间。
作为一种可选的实施方式,所述在确定所述疲劳值大于或等于所述实时疲劳阈值时,向所述驾驶员进行预警,包括:
在确定所述疲劳值大于或等于所述实时疲劳阈值时,根据所述疲劳值相对于所述实时疲劳阈值的超出量,确定所述驾驶员的疲劳等级;
获得与所述疲劳等级对应的预警信息,并根据所述预警信息向所述驾驶员进行预警。
作为一种可选的实施方式,在所述向所述驾驶员进行预警之后,还包括:获取所述终端的位置信息;根据所述终端的位置信息和预存储的停车区域数据,获得与所述终端距离最小的最近停车区域信息;向所述驾驶员显示所述最近停车区域信息。
作为一种可选的实施方式,所述根据从所述摄像头获得的视频,获得驾驶员的头部下垂频率,包括:根据从所述摄像头获得的视频,获得所述驾驶员的人脸图像;
根据预设的双眼特征和预设的眉毛特征,在所述人脸图像中确定双眼图像;
获取所述双眼图像中双眼连线的中心点,以及所述人脸图像的竖直方向的最低点;
若所述中心点位置坐标与所述最低点的位置坐标在竖直方向上的差值小于预设差值,则将所述人脸图像确定为头部下垂图像;
根据预设时间段内获取到的所述头部下垂图像数量N,和所述预设时间段内获取到的所述人脸图像数量M,确定所述头部下垂频率,所述头部下垂频率为N/M。
作为一种可选的实施方式,所述眼部状态信息包括眨眼频率、眨眼持续时间比和闭眼速度;
所述根据从所述摄像头获得的视频,获得驾驶员的眼部状态信息,包括:根据所述双眼图像中与预设的上眼睑特征匹配的区域,获得上眼睑位置;根据所述双眼图像中与预设的下眼睑特征匹配的区域,获得下眼睑位置;
根据所述上眼睑位置和所述下眼睑位置,获得指示上眼睑与下眼睑之间距离的睁眼距离;
在所述预设时间段内,将所述睁眼距离小于或等于预设第一睁眼阈值,并且大于第二睁眼阈值的人脸图像,确定为半闭图像,所述第一睁眼阈值大于所述第二睁眼阈值;
在所述预设时间段内,将所述睁眼距离小于或等于预设第二睁眼阈值的人脸图像,确定为闭眼图像;
根据所述预设时间段内,连续获取到所述闭眼图像的次数,确定所述眨眼频率;
根据在所述预设时间段内,分别获取到的所述闭眼图像的数量X、半闭图像的数量Y和人脸图像的数量Z,将(X+Y)/Z确定为所述眨眼持续时间比;
确定所述预设时间段内,在获取到所述闭眼图像前连续获取到的所述半闭图像的时间长度,将所述时间长度的最大值确定为所述闭眼速度。
作为一种可选的实施方式,所述根据所述头部下垂频率和眼部状态信息,获得所述驾驶员的疲劳值,包括:
根据所述预设时段内的所述头部下垂频率,获得第一疲劳分量;根据所述预设时段内的所述眨眼持续时间比,获得第二疲劳分量;根据所述预设时段内的所述眨眼频率,获得第三疲劳分量;根据所述预设时段内的所述闭眼速度,获得第四疲劳分量;
根据所述第一疲劳分量、第二疲劳分量、第三疲劳分量、第四疲劳分量,得到所述驾驶员的疲劳值。
作为一种可选的实施方式,所述根据所述第一疲劳分量、第二疲劳分量、第三疲劳分量、第四疲劳分量,得到所述驾驶员的疲劳值,包括:根据预设的第一权重、第二权重、第三权重和第四权重,分别对所述第一疲劳分量、第二疲劳分量、第三疲劳分量和第四疲劳分量加权求和,获得所述驾驶员的疲劳值,
其中,所述第一权重为所述第一疲劳分量的权重,所述第二权重为所述第二疲劳分量的权重,所述第三权重为所述第三疲劳分量的权重,所述第四权重为所述第四疲劳分量的权重。
根据本发明的第二方面,提供一种终端,包括:实时阈值确定模块,用于根据连续行车时间、当前日期和当前时刻,确定疲劳阈值减小分量,并根据初始疲劳阈值和所述疲劳阈值减小分量,确定实时疲劳阈值;
图像处理模块,用于根据从所述摄像头获得的视频,获得驾驶员的头部下垂频率和驾驶员的眼部状态信息;
疲劳值获得模块,用于根据所述头部下垂频率和眼部状态信息,获得所述驾驶员的疲劳值;
预警模块,用于在确定所述疲劳值大于或等于所述实时疲劳阈值时,向所述驾驶员进行预警。
作为一种可选的实施方式,所述实时阈值确定模块具体用于:根据所述连续行车时间,确定第一减小分量,其中,所述第一减小分量的大小随着所述连续行车时间的增大而增大;根据预设的易疲劳日期区间和所述当前日期,确定第二减小分量,所述第二减小分量为包含所述当前日期的所述易疲劳日期区间对应的减小分量;根据预设的易疲劳时刻区间和所述当前时刻,确定第三减小分量,所述第三减小分量为包含所述当前时刻的所述易疲劳时刻区间对应的减小分量;根据所述第一减小分量、第二减小分量和第三减小分量,确定所述疲劳阈值减小分量。
作为一种可选的实施方式,所述实时阈值确定模块,还用于:在所述根据连续行车时间、当前日期和当前时刻,确定疲劳阈值减小分量之前,根据预设时间段内获得的所述终端的位置信号,确定车辆移动速度;将所述车辆移动速度由0变为大于0的时刻,确定为所述连续形成时间的起算时刻;将所述当前时刻确定为所述连续行车时间的终止时刻,其中,在所述起算时刻至所述终止时刻之间,所述车辆移动速度大于 0;根据所述起算时刻和所述终止时刻,获得所述连续行车时间。
作为一种可选的实施方式,所述预警模块具体用于:在确定所述疲劳值大于或等于所述实时疲劳阈值时,根据所述疲劳值相对于所述实时疲劳阈值的超出量,确定所述驾驶员的疲劳等级;获得与所述疲劳等级对应的预警信息,并根据所述预警信息向所述驾驶员进行预警。
作为一种可选的实施方式,还包括最近停车区域信息显示模块,用于:获取所述终端的位置信息;根据所述终端的位置信息和预存储的停车区域数据,获得与所述终端距离最小的最近停车区域信息;向所述驾驶员显示所述最近停车区域信息。
作为一种可选的实施方式,所述图像处理模块具体用于:根据从所述摄像头获得的视频,获得所述驾驶员的人脸图像;根据预设的双眼特征和预设的眉毛特征,在所述人脸图像中确定双眼图像;获取所述双眼图像中双眼连线的中心点,以及所述人脸图像的竖直方向的最低点;若所述中心点位置坐标与所述最低点的位置坐标在竖直方向上的差值小于预设差值,则将所述人脸图像确定为头部下垂图像;根据预设时间段内获取到的所述头部下垂图像数量N,和所述预设时间段内获取到的所述人脸图像数量M,确定所述头部下垂频率,所述头部下垂频率为N/M。
作为一种可选的实施方式,所述眼部状态信息包括眨眼频率、眨眼持续时间比和闭眼速度;图像处理模块具体用于:根据所述双眼图像中与预设的上眼睑特征匹配的区域,获得上眼睑位置;根据所述双眼图像中与预设的下眼睑特征匹配的区域,获得下眼睑位置;根据所述上眼睑位置和所述下眼睑位置,获得指示上眼睑与下眼睑之间距离的睁眼距离;在所述预设时间段内,将所述睁眼距离小于或等于预设第一睁眼阈值,并且大于第二睁眼阈值的人脸图像,确定为半闭图像,所述第一睁眼阈值大于所述第二睁眼阈值;在所述预设时间段内,将所述睁眼距离小于或等于预设第二睁眼阈值的人脸图像,确定为闭眼图像;根据所述预设时间段内,连续获取到所述闭眼图像的次数,确定所述眨眼频率;根据在所述预设时间段内,分别获取到的所述闭眼图像的数量X、半闭图像的数量Y和人脸图像的数量Z,将(X+Y)/Z确定为所述眨眼持续时间比;确定所述预设时间段内,在获取到所述闭眼图像前连续获取到的所述半闭图像的时间长度,将所述时间长度的最大值确定为所述闭眼速度。
作为一种可选的实施方式,所述疲劳值获得模块具体用于:根据所述预设时段内的所述头部下垂频率,获得第一疲劳分量,根据所述预设时段内的所述眨眼持续时间比,获得第二疲劳分量;根据所述预设时段内的所述眨眼频率,获得第三疲劳分量; 根据所述预设时段内的所述闭眼速度,获得第四疲劳分量;根据所述第一疲劳分量、第二疲劳分量、第三疲劳分量、第四疲劳分量,得到所述驾驶员的疲劳值。
作为一种可选的实施方式,所述疲劳值获得模块进一步用于:根据预设的第一权重、第二权重、第三权重和第四权重,分别对所述第一疲劳分量、第二疲劳分量、第三疲劳分量和第四疲劳分量加权求和,获得所述驾驶员的疲劳值,其中,所述第一权重为所述第一疲劳分量的权重,所述第二权重为所述第二疲劳分量的权重,所述第三权重为所述第三疲劳分量的权重,所述第四权重为所述第四疲劳分量的权重。
根据本发明的第三方面,提供一种终端,包括:存储器、处理器以及计算机程序,所述计算机程序存储在所述存储器中,所述处理器运行所述计算机程序执行本发明第一方面及第一方面各种可能的设计的所述疲劳驾驶预警的方法。
根据本发明的第四方面,提供一种存储介质,包括:可读存储介质和计算机程序,所述计算机程序用于实现本发明第一方面及第一方面各种可能的设计所述疲劳驾驶预警的方法。
本发明提供的方法和终端,一方面根据连续行车时间、当前日期和当前时刻,确定疲劳阈值减小分量,并根据初始疲劳阈值和疲劳阈值减小分量,确定实时疲劳阈值;在易疲劳时段降低实时疲劳阈值,提高了疲劳驾驶判断的准确性;另一方面根据从摄像头获得的视频,获得驾驶员的头部下垂频率和驾驶员的眼部状态信息;然后根据头部下垂频率和眼部状态信息,获得驾驶员的疲劳值;从头部下垂频率和眼部状态信息两种因素对疲劳值进行计算,可以提高疲劳值的计算准确性,降低错误预警的可能;最后,在确定疲劳值大于或等于实时疲劳阈值时,向驾驶员进行预警。本发明提供的方法应用于终端,并且根据连续行车时间、当前日期和当前时刻确定实时疲劳阈值,以实时疲劳阈值作为驾驶员疲劳程度的判断标准,降低了疲劳驾驶判断的计算复杂程度,提高了疲劳驾驶的预警准确性。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图做简单地介绍,显而易见地,下面描述中的附图是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本发明实施例提供的一种疲劳驾驶预警的应用场景;
图2为本发明实施例提供的一种疲劳驾驶预警的方法流程示意图;
图3为本发明实施例提供的另一种疲劳驾驶预警的方法流程示意图;
图4为本发明实施例提供的再一种疲劳驾驶预警的方法流程示意图;
图5为本发明实施例提供的一种正面和头部下垂状态下人脸图像的对比示意图;
图6为本发明实施例提供的又一种疲劳驾驶预警的方法流程示意图;
图7为本发明实施例提供的又一种疲劳驾驶预警的方法流程示意图;
图8为本发明实施例提供的一种疲劳驾驶预警的终端示意图;
图9为本发明提供的一种终端的硬件结构示意图。
具体实施方式
为使本发明实施例的目的、技术方案和优点更加清楚,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
本发明的说明书和权利要求书及上述附图中的术语“第一”、“第二”、“第三”“第四”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本发明的实施例能够以除了在这里图示或描述的那些以外的顺序实施。应当理解,本申请中使用的术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。取决于语境,如在此所使用的“若”可以被解释成为“在……时”或“当……时”或“响应于确定”或“响应于检测”。应当理解,本申请中使用的术语“初显疲劳状态”、“重度疲劳状态”是用于定义驾驶员的疲劳程度,其中,与处于“初显疲劳状态”相比较,驾驶员在处于“重度疲劳状态”时更加疲劳。
下面以具体地实施例对本发明的技术方案进行详细说明。下面这几个具体的实施例可以相互结合,对于相同或相似的概念或过程可能在某些实施例不再赘述。
图1为本发明实施例提供的一种疲劳驾驶预警的应用场景。图1所示实施例以终端为手机12为例进行示意,驾驶员11通过安装在支撑架上的手机12,实现疲劳驾驶 预警。具体地,手机12被驾驶员11预先安装在支撑架上,手机12上的摄像头朝向驾驶员的头部位置,并获取驾驶员11的人脸图像进行识别和疲劳判断。现有技术中,出租车、货车等非高端车辆中,没有转载功能齐备的车辆智能系统,驾驶员通常需要通过手机实现网络接单、路线导航等功能,因此在车内设置支撑架,并将手机安装在朝向人脸位置是驾驶员日常习惯的做法。图1所示实施例中将疲劳驾驶预警集成在广泛使用的手机12中,驾驶员11可以在日常习惯使用手机12的同时开启疲劳驾驶预警功能,无需增加额外设备,以较低的成本、简单的结构实现疲劳驾驶预警。
图2为本发明实施例提供的一种疲劳驾驶预警的方法流程示意图。图2所示的疲劳驾驶预警的方法应用于终端,该终端设置有摄像头。其中,摄像头可以是被动红外摄像头,也可以是主动红外摄像头,本发明不限于此,也可以是其他具有图像获取功能的装置。图2所示的方法包括以下步骤:
S110，根据连续行车时间、当前日期和当前时刻，确定疲劳阈值减小分量，并根据初始疲劳阈值和所述疲劳阈值减小分量，确定实时疲劳阈值。
其中,驾驶员在连续行车时间越大,就将面临越大的疲劳风险。例如在连续行车时间超过4小时,可以认为驾驶员处于疲劳驾驶状态。在其他因素不变的情况下,连续行车时间越大,驾驶员越容易产生疲劳,相应地降低实时疲劳阈值可以提高疲劳驾驶判断的准确性。
当前日期反映的是季节变化或天气变化对驾驶员的影响。例如,与凉爽的秋季相比较,驾驶员在炎热的夏季或者是高温天气中更容易产生乏力感,增大了疲劳驾驶的风险。在这些易疲劳的日期,降低实时疲劳阈值也可以提高疲劳驾驶判断的准确性。
当前时刻反映的是一天之中时间对驾驶员的影响。例如,与上午8点相比,驾驶员在夜晚11点以后更容易产生疲劳感。又例如,对于有中午1点午睡半小时习惯的驾驶员而言,中午12点半到1点半若是仍然在驾驶过程中,则非常容易产生疲劳感。在这些易疲劳的时刻,降低实时疲劳阈值同样可以提高疲劳驾驶判断的准确性。
综合连续行车时间、当前日期和当前时刻三者对驾驶员的影响,确定疲劳阈值减小分量。疲劳阈值减小分量的确定方式,具体地可以是根据连续行车时间、当前日期和当前时刻分别获得三个减小值,再对三个减小值加权求和获得疲劳阈值减小分量;也可以是以预设的疲劳阈值减小分量计算模型对连续行车时间、当前日期和当前时刻产生的减小值进行综合计算,加入三者之间的交叉影响考虑,最终获得疲劳阈值减小分量。
在易产生疲劳的时期,如果还是以较高的初始疲劳阈值对驾驶员是否疲劳驾驶进行判断,很可能无法从驾驶员已经表现出的细微动作判断是否处于初显疲劳状态,而在易产生疲劳的时期驾驶员在初显疲劳状态下更容易、更快地加剧困顿状态,从初显疲劳进入重度疲劳的时间较短。一旦没有快速判断驾驶员的初显疲劳状态,快速进入重度疲劳状态后的驾驶员将面临极大的交通事故风险,并且驾驶员在重度疲劳状态下,意识模糊,很可能忽略掉终端发出的预警,导致预警效率下降。因此,在易疲劳时段降低实时疲劳阈值,以简单的计算方法提高疲劳驾驶判断的准确性。通过预设置一初始疲劳阈值,将初始疲劳阈值减去疲劳阈值减小分量,获得实时疲劳阈值。实时疲劳阈值是当前时刻下考虑连续行车时间、当前日期和当前时刻因素后确定的,有利于提高疲劳驾驶的准确性。
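As a rough illustration of the relation just described in S110, the sketch below derives the real-time fatigue threshold by subtracting the reduction component from a preset initial threshold. The function name and the example values are our own assumptions and do not come from the patent.

```python
def real_time_fatigue_threshold(initial_threshold: float,
                                reduction_component: float) -> float:
    """Real-time fatigue threshold = initial threshold - reduction component.

    The reduction component is assumed to have been derived already from the
    continuous driving time, the current date and the current time; it is
    clamped here only so the resulting threshold cannot become negative."""
    reduction = max(0.0, min(reduction_component, initial_threshold))
    return initial_threshold - reduction


# Hypothetical example: an initial threshold of 100 lowered by 18 during an
# easily fatigued period gives a real-time threshold of 82.
print(real_time_fatigue_threshold(100.0, 18.0))  # -> 82.0
```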
S120,根据从摄像头获得的视频,获得驾驶员的头部下垂频率和驾驶员的眼部状态信息。
具体地,可以是根据从摄像头实时获得的视频,实时获得组成该视频的帧图片,对每一帧图像去除背景后获得了驾驶员的人脸图像。根据预设的计算模型,或者是预设的特征信息,可以对人脸图像中驾驶员的人脸位置和眼部的特定位置进行定位,进而可以在视频中追踪获得驾驶员的头部移动情况和眼部特定位置的移动情况。例如,利用图形跟踪技术获得驾驶员的头部上下移动情况和上下眼睑开合情况,从而确定驾驶员的头部下垂频率和驾驶员的眼部状态信息。
S130,根据头部下垂频率和眼部状态信息,获得驾驶员的疲劳值。
其中,根据对人疲劳时表现出的状态可知,人在疲劳时的一种表现是眼部肌肉放松,上下眼睑距离减小,而在表现出现头部下垂频率提高时,驾驶员可能已经进行重度疲劳状态了。本实施例通过摄像头获得的视频,获得视频中包含的人脸图像,从而分析获得驾驶员的头部下垂频率和驾驶员的眼部状态信息。计算驾驶员的疲劳值的方式,可以是头部下垂频率与眼部状态信息指示的疲劳值之和;也可以是以眼部状态信息确定一基准疲劳值,再以头部下垂频率确定一疲劳值增加分量,再以基准疲劳值和疲劳值增加分量之和确定为驾驶员的疲劳值;还可以是预先分别确定头部下垂频率和眼部状态信息的权重,再对头部下垂频率指示的疲劳分量和眼部状态信息指示的疲劳分量加权求和,获得驾驶员的疲劳值。
如果从单一的一种因素判断疲劳值,很可能导致疲劳值计算出现较大误差,例如驾驶员与乘客交流中的点头动作,被终端识别为头部下垂频率提高而错误得到一个较 高的疲劳值。本实施例从头部下垂频率和眼部状态信息两种因素对疲劳值进行计算,可以提高疲劳值的计算准确性,降低错误预警的可能。
本实施例中,执行S110的过程和执行S120~S130的过程,没有确定的顺序,这两个过程可以同时执行,也可以先执行其中一个过程再执行另一个过程,本发明不限于图2所示的执行顺序。
S140,在确定疲劳值大于或等于实时疲劳阈值时,向驾驶员进行预警。
具体地,将疲劳值与实时疲劳阈值进行比较,在疲劳值大于或等于实时疲劳阈值时,判断驾驶员为疲劳驾驶,向驾驶员进行预警。
本实施例提供的方法中,一方面根据连续行车时间、当前日期和当前时刻,确定疲劳阈值减小分量,并根据初始疲劳阈值和疲劳阈值减小分量,确定实时疲劳阈值;在易疲劳时段降低实时疲劳阈值,提高了疲劳驾驶判断的准确性;另一方面根据从摄像头获得的视频,获得驾驶员的头部下垂频率和驾驶员的眼部状态信息;然后根据头部下垂频率和眼部状态信息,获得驾驶员的疲劳值;从头部下垂频率和眼部状态信息两种因素对疲劳值进行计算,可以提高疲劳值的计算准确性,降低错误预警的可能;最后,在确定疲劳值大于或等于实时疲劳阈值时,向驾驶员进行预警。本实施例提供的方法应用于终端,并且根据连续行车时间、当前日期和当前时刻确定实时疲劳阈值,以实时疲劳阈值作为驾驶员疲劳程度的判断标准,降低了疲劳驾驶判断的计算复杂程度,提高了疲劳驾驶的预警准确性。本实施例以不同的易疲劳时段来对驾驶员的疲劳程度进行判断,降低了疲劳驾驶的判断难度,并提高了疲劳驾驶的预警准确性。
为了进一步描述图2所示实施例,下面结合图3所示的实施例对确定疲劳阈值减小分量的过程进行详细说明。
图3为本发明实施例提供的另一种疲劳驾驶预警的方法流程示意图。如图3所示的方法实施例,是图2所示实施例中确定疲劳阈值减小分量的一种实现方式,图3所示的过程具体为:
S210,根据连续行车时间,确定第一减小分量,其中,第一减小分量的大小随着连续行车时间的增大而增大。
具体地,第一减小分量可以分为小于疲劳行车时间阈值的第一确定过程,和大于或等于疲劳行车时间阈值的第二确定过程。疲劳行车时间阈值可以是驾驶员易于疲劳的分界时间,例如3小时、3.5小时或者4小时。在第一确定过程中,第一减小分量以第一常系数随着连续行车时间的增大而增大,在第二确定过程中,第一减小分量以 大于第一常系数的变化率随着连续行车时间的增大而增大。
第一减小分量的一种具体计算方法可以是:
若t<T小时,则D1=t*a,
若t≥T小时,则D1=(t-T)*b+c;
其中,D1为第一减小分量,t为连续行车时间,T为预设的疲劳行车时间阈值,a,b,c依次为符号为正的第一常系数、第二常系数、第三常系数,并且,b大于a,且c=4*a。
由于驾驶员在长时间驾驶过程中,后期产生疲劳感的可能性明显大于前期,因此在第二确定过程中,第一减小分量更快速地随着连续行车时间的增大而增大,符合人体规律,有利于准确地确定第一减小分量。
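A minimal sketch of the piecewise rule for the first reduction component D1 given above. The text only requires b > a and c = 4*a; the numeric values of T, a and b below are placeholders chosen to satisfy those constraints.

```python
def first_reduction_component(t: float,
                              T: float = 4.0,  # assumed fatigue driving-time threshold, hours
                              a: float = 1.0,  # assumed first constant coefficient
                              b: float = 3.0   # assumed second constant coefficient, b > a
                              ) -> float:
    """D1 = t*a for t < T, and D1 = (t - T)*b + c for t >= T, with c = 4*a.

    D1 therefore grows with the continuous driving time t, and grows faster
    once the fatigue driving-time threshold T has been exceeded."""
    c = 4.0 * a
    if t < T:
        return t * a
    return (t - T) * b + c
```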
其中,在图2和图3所示的实施例中,终端获得连续行车时间的一种实现方式,具体可以是根据车辆的移动速度变化来确定连续行车时间。在本实现方式中,获取终端的位置信号,根据预设时间段内获得的位置信号,可以确定终端的移动速度。终端安装在车辆上,因此将终端的移动速度确定为车辆移动速度。将车辆移动速度由0变为大于0的时刻,确定为连续形成时间的起算时刻;并且将当前时刻确定为连续行车时间的终止时刻,其中,在起算时刻至终止时刻之间,车辆移动速度大于0。换而言之,在起算时刻至终止时刻之间,车辆一直处于驾驶状态。根据起算时刻和终止时刻,获得连续行车时间。
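The paragraph above derives the continuous driving time from the terminal's position signal. A simplified sketch under our own assumptions (the vehicle speed per timestamp is taken as already computed from consecutive position fixes; all names are illustrative):

```python
from datetime import datetime
from typing import List, Optional, Tuple

def continuous_driving_time(speed_samples: List[Tuple[datetime, float]],
                            now: datetime) -> Optional[float]:
    """speed_samples: chronologically ordered (timestamp, vehicle speed) pairs,
    where the speed is derived from the terminal's position signal.

    The start time is the most recent moment at which the speed changed from 0
    to a value greater than 0 and has stayed greater than 0 ever since; the end
    time is `now`. Returns the continuous driving time in hours, or None if the
    vehicle is currently not moving."""
    if not speed_samples or speed_samples[-1][1] <= 0:
        return None
    start = speed_samples[-1][0]
    for ts, speed in reversed(speed_samples):
        if speed <= 0:
            break
        start = ts
    return (now - start).total_seconds() / 3600.0
```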
S220,根据预设的易疲劳日期区间和当前日期,确定第二减小分量,第二减小分量为包含当前日期的易疲劳日期区间对应的减小分量。
其中,一种易疲劳日期区间可以是依据气温时间分布来设置。例如预设6-7月份、7-8月份以及8-9月份三个易疲劳日期区间,由于一般7-8月份天气最为炎热,可以对应最大的减小分量,6-7月份和8-9月份两个易疲劳日期区间可以是相同的减小分量,也可以是不同的减小分量。
另一种易疲劳日期区间可以是用户在终端中预设置的。根据不同人的体质差异,在一年中不同时间段可能存在不同的易疲劳日期区间。以用户自己预设的易疲劳日期区间进行第二减小分量的确定,能够提高第二减小分量的准确性。
具体地,首先从终端的系统时钟获得当前日期,然后在上述易疲劳日期区间中确定包含当前日期的目标易疲劳日期区间,将目标易疲劳日期区间对应的减小分量确定为第二减小分量。
S230,根据预设的易疲劳时刻区间和当前时刻,确定第三减小分量,第三减小分 量为包含当前时刻的易疲劳时刻区间对应的减小分量。
其中,当前时刻的设置方式可以是根据系统预设的,也可以是用户根据自身的作息习惯进行自定义设置的。
具体地,首先从终端的系统时钟获得当前时刻,然后在上述易疲劳时刻区间中确定包含当前时刻的目标易疲劳时刻区间,将目标易疲劳时刻区间对应的减小分量确定为第三减小分量。
S240,根据第一减小分量、第二减小分量和第三减小分量,确定疲劳阈值减小分量。
具体地,连续行车时间、当前日期和当前时刻三者都会对驾驶员产生疲劳的可能性造成影响,但根据影响程度不同,可以对三者对应的第一减小分量、第二减小分量和第三减小分量进行加权求和获得疲劳阈值减小分量。然而三者的影响并不是独立的,可能相互交叉影响。
在一种可选的实现方式中,第一减小分量、第二减小分量和第三减小分量的权重相等。
在另一种可选的实现方式中,在连续行车时间小于疲劳行车时间阈值时,第二减小分量和第三减小分量对应的权重大于第一减小分量的权重;在连续行车时间大于或等于疲劳行车时间阈值时,第二减小分量和第三减小分量对应的权重小于第一减小分量的权重。由此可以更加准确地确定疲劳阈值减小分量。
本实施例根据连续行车时间、当前日期和当前时刻分别获得第一减小分量、第二减小分量和第三减小分量,再根据第一减小分量、第二减小分量和第三减小分量确定疲劳阈值减小分量,综合考量了连续行车时间、当前日期和当前时刻对驾驶员产生疲劳的影响程度,提高了疲劳阈值减小分量与驾驶员当前状态的匹配程度。
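A sketch of step S240 combining the three reduction components, using the second optional weighting rule described above; the concrete weight values are placeholders, not values from the patent.

```python
def fatigue_threshold_reduction(d1: float, d2: float, d3: float,
                                t: float, T: float = 4.0) -> float:
    """Weighted sum of the three reduction components.

    Following the second optional rule: before the fatigue driving-time
    threshold T is reached, the date component d2 and the time-of-day
    component d3 weigh more than d1; once t >= T, d1 weighs more."""
    if t < T:
        w1, w2, w3 = 0.2, 0.4, 0.4
    else:
        w1, w2, w3 = 0.6, 0.2, 0.2
    return w1 * d1 + w2 * d2 + w3 * d3
```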
在图2所示实施例中,终端在确定疲劳值大于或等于实时疲劳阈值时,向驾驶员进行预警,一种具体的实现方式可以是:在确定疲劳值大于或等于实时疲劳阈值时,根据疲劳值相对于实时疲劳阈值的超出量,确定驾驶员的疲劳等级。本实施例中可以预设多个疲劳等级,不同疲劳等级可以对应预设不同的预警信息。由于驾驶员在不同的疲劳等级对外界信息的反应能力不同,因此可以对不同疲劳等级设置不同内容或者不同类型的预警信息。不同类型的预警信息例如可以是闪屏提示信息、语音提示信息等。不同内容的预警信息例如可以是:“滴滴滴”“警报提示、疲劳驾驶”“疲劳驾驶、请停车休息”等不同内容的声音提示信息。终端获得与疲劳等级对应的预警信息, 并根据预警信息向驾驶员进行预警。其中,在预警信息疲劳值相对于实时疲劳阈值的超出量达到预设上限时,预警信息可以是预设的号码,终端可以获得预设的号码,并以该号码为被叫号码拨打电话。例如,可以预设家人的号码,家人在接到电话时与驾驶员通话,规劝其停车休息。
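An illustrative mapping from the exceedance of the fatigue value over the real-time threshold to a fatigue level and a warning action, consistent with the paragraph above. The level boundaries are assumptions; the message strings are the examples quoted in the text.

```python
def warn_driver(fatigue_value: float, real_time_threshold: float):
    """Return a (warning type, warning content) pair, or None if no warning."""
    if fatigue_value < real_time_threshold:
        return None                                   # not fatigued, no warning
    excess = fatigue_value - real_time_threshold
    if excess < 5:
        return ("beep", "滴滴滴")                       # level 1: tone prompt
    if excess < 15:
        return ("voice", "警报提示、疲劳驾驶")           # level 2: voice alert
    if excess < 30:
        return ("voice", "疲劳驾驶、请停车休息")         # level 3: ask driver to stop and rest
    return ("call", "preset_contact_number")          # exceedance above the preset upper limit
```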
在一种实施例中,若驾驶员在获知自己已处于疲劳驾驶状态时,需要寻找停车区域进行暂时休息。终端在向驾驶员进行预警之后,还可以向用户显示最近停车区域信息。具体地可以是,首先,获取终端的位置信息,然后,根据终端的位置信息和预存储的停车区域数据,获得与终端距离最小的最近停车区域信息。其中,停车区域数据可以是离线下载的地图数据、停车场数据等,也可以是实时地向网络服务器请求获得终端附近的停车区域数据。比较所有停车区域与终端的距离,将与终端距离最小的停车区域确定为最近停车区域。获取最近停车区域信息,例如最近停车区域的名称、路线,剩余车位等。最后,向驾驶员显示最近停车区域信息。
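A minimal nearest-parking-area lookup matching the paragraph above. The record layout and the flat Euclidean distance are simplifying assumptions; a real implementation would use geodesic distance and, where available, online parking data.

```python
import math
from typing import Dict, List, Tuple

def nearest_parking_area(terminal_pos: Tuple[float, float],
                         parking_areas: List[Dict]) -> Dict:
    """parking_areas: pre-stored records such as
    {"name": ..., "lat": ..., "lon": ..., "route": ..., "free_spaces": ...}.

    Returns the record with the smallest distance to the terminal position."""
    def distance(area: Dict) -> float:
        return math.hypot(area["lat"] - terminal_pos[0],
                          area["lon"] - terminal_pos[1])
    return min(parking_areas, key=distance)
```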
为了更加清楚地描述上述实施例,下面结合图4、图5、图6和图7,分别对终端获得驾驶员的头部下垂频率、驾驶员的眼部状态信息和驾驶员的疲劳值的过程进行详细说明。
图4为本发明实施例提供的再一种疲劳驾驶预警的方法流程示意图。如图4所示的方法实施例,是图2所示实施例中终端根据从摄像头获得的视频,获得驾驶员的头部下垂频率的一种实现方式。如图4所示的方法具体是:
S310,根据从摄像头获得的视频,获得驾驶员的人脸图像。
具体地,可以是先从视频获得帧图像,在以预设的人脸识别模型对帧图像是否包含人脸图像进行判断,若有,则去除背景获得人脸图像,若没有,则继续获取下一个帧图像。由于摄像头朝向驾驶员的头部,因此只要驾驶员处于驾驶状态,就可以从视频中连续获得人脸图像,且每一个人脸图像都对于一个时间点。
S320,根据预设的双眼特征和预设的眉毛特征,在人脸图像中确定双眼图像。
具体地,人眼区域和眉毛区域都具有颜色较深以及左右对称的特征,而且眼睛的形状和眉毛的形状都与人脸五官上其他部位具有较为明显的区别性。本实施例以双眼特征和预设的眉毛特征作为定位依据,在人脸图像中确定双眼图像,可以提高定位的准确性。
S330,获取双眼图像中双眼连线的中心点,以及人脸图像的竖直方向的最低点。
S340,若中心点位置坐标与最低点的位置坐标在竖直方向上的差值小于预设差值, 则将人脸图像确定为头部下垂图像。
其中,双眼图像中双眼连线的中心点可以与鼻梁顶端位置对应,人脸图像的竖直方向的最低点可以与下巴位置对应。根据摄像头获取图像的规律可知,在获取正面人脸图像时,人脸的五官之间距离最大,而在抬头、低头、左右侧头的情况下,都会导致获得的人脸图像中五官相对位置发生变化。
图5为本发明实施例提供的一种正面和头部下垂状态下人脸图像的对比示意图。图5示出了摄像头获取的正面的人脸图像和头部下垂的人脸图像。如图5所示,若以正面图像中,中心点位置坐标与最低点的位置坐标在竖直方向上的差值为标准值H0,则在驾驶员头部下垂时,摄像头获取到的人脸五官距离必然在竖直方向上变小,使得中心点位置坐标与最低点的位置坐标在竖直方向上的差值H1小于标准值H0。在中心点位置坐标与最低点的位置坐标在竖直方向上的差值小于预设差值时,表明驾驶员的头部下垂到一定程度,则将人脸图像确定为头部下垂图像。
S350,根据预设时间段内获取到的头部下垂图像数量N,和预设时间段内获取到的人脸图像数量M,确定头部下垂频率,头部下垂频率为N/M。
例如，在1分钟内获得120张人脸图像，即M=120。其中下垂图像数量为25张即N=25，则头部下垂频率为25/120。
本实施例从视频获得驾驶员的人脸图像,再从人脸图像定位到双眼图像,在人脸图像中确定竖直方向的最低点,在双眼图像中确定双眼连线的中心点,并且根据双眼连线的中心点位置坐标与最低点的位置坐标在竖直方向上的差值,确定头部下垂图像,最后根据预设时间段内获取到的头部下垂图像数量N,和预设时间段内获取到的人脸图像数量M,确定头部下垂频率为N/M,实现了根据视频确定驾驶员的头部下垂频率,并且具有较高的准确性。
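A sketch of the head-droop decision and the N/M frequency of steps S310-S350, assuming face and eye localisation has already produced, for each frame, the centre of the line joining the two eyes and the lowest point of the face in image coordinates (y grows downward); the detection itself is outside this sketch.

```python
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) in image coordinates, y grows downward

def is_head_droop(eye_center: Point, lowest_point: Point,
                  preset_difference: float) -> bool:
    """A face image counts as a head-droop image when the vertical distance
    between the centre of the eye line and the lowest point of the face is
    smaller than the preset difference (H1 < H0 in FIG. 5)."""
    return abs(lowest_point[1] - eye_center[1]) < preset_difference

def head_droop_frequency(frames: List[Tuple[Point, Point]],
                         preset_difference: float) -> float:
    """frames: one (eye_center, lowest_point) pair per face image acquired in
    the preset time period. Returns N/M, the number of head-droop images over
    the number of face images."""
    if not frames:
        return 0.0
    n = sum(is_head_droop(c, p, preset_difference) for c, p in frames)
    return n / len(frames)
```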
图6为本发明实施例提供的又一种疲劳驾驶预警的方法流程示意图。如图6所示的方法实施例,是图2所示实施例中终端根据从摄像头获得的视频,获得驾驶员的眼部状态信息的一种实现方式,如图6所示的实施例中,眼部状态信息可以是眨眼频率、眨眼持续时间比和闭眼速度。相应地,图6所示的获得驾驶员的眼部状态信息的具体过程,可以是:
S410,根据双眼图像中与预设的上眼睑特征匹配的区域,获得上眼睑位置;根据双眼图像中与预设的下眼睑特征匹配的区域,获得下眼睑位置。
其中,上眼睑特征和下眼睑特征可以是预先通过视频样本训练获得的。
S420,根据上眼睑位置和下眼睑位置,获得指示上眼睑与下眼睑之间距离的睁眼距离。
具体地,上眼睑位置是上眼睑最低点的位置,下眼睑位置是下眼睑最高点的位置,将上眼睑位置和下眼睑位置在人脸中线方向上的差值,确定为睁眼距离。人脸中线是双眉之间中点与双眼之间的中点的连线。人脸图像可能因为驾驶员头部的偏转产生倾斜或者转动,但人脸五官之间的相对运动关系不会改变,因此,以人脸中线为参考线计算睁眼距离,能够降低头部运动对差值的影响,提高睁眼距离的准确性。
S430,在预设时间段内,将睁眼距离小于或等于预设第一睁眼阈值,并且大于第二睁眼阈值的人脸图像,确定为半闭图像,第一睁眼阈值大于第二睁眼阈值。
S440,在预设时间段内,将睁眼距离小于或等于预设第二睁眼阈值的人脸图像,确定为闭眼图像。
S430和S440可以同时执行,也可以先后顺序执行,本实施例对S430和S440的执行顺序不做限定,图6所示的执行顺序为一种可选的执行顺序。具体地,睁眼距离小于或等于预设第一睁眼阈值,并且大于第二睁眼阈值,表明人脸图像中的驾驶员处于正常睁眼状态和闭眼状态之间,可能是困顿的眯眼状态,也可能是眨眼的中间过程状态,本实施例均判定为半闭眼。睁眼距离小于或等于预设第二睁眼阈值,表明人脸图像中的驾驶员处于闭眼状态或接近闭眼状态,本实施例均判定为闭眼。
S450,根据预设时间段内连续获取到闭眼图像的次数,确定眨眼频率。
具体地,每个人脸图像都对应一个时间点,因此,以闭眼图像的数量X、半闭图像的数量Y和人脸图像的数量Z之间的比例关系,可以直接对应闭眼时间、半闭眼时间和预设时间段之间的比例关系。眨眼频率是以预设时间段内连续获取到闭眼图像的次数和预设时间段的比值,其中,连续获取到闭眼图像的次数,是指出现连续获取到闭眼图像这种情况的次数。例如,在预设时间段内连续获取到闭眼图像3次,第一次连续获取到5张闭眼图像,第二次连续获取到6张闭眼图像,第三次连续获取到6张闭眼图像,而预设时间段为20秒,则眨眼频率为3/20(次/秒)。人在正常情况下,眨眼频率变化幅度较小,而处于疲劳状态下,可能为了强打精神而快速眨眼,也可能有睁眼睡觉的习惯而长时间不眨眼。因此,眨眼频率可以作为判断驾驶员是否处于疲劳状态的一个参考因素。
S460,根据在预设时间段内分别获取到的闭眼图像的数量X、半闭图像的数量Y和人脸图像的数量Z,将(X+Y)/Z确定为眨眼持续时间比。
具体地,眨眼持续时间比是指驾驶员处于闭眼状态的时间和处于半闭眼状态的时间之和,占预设时间段的比值。眨眼持续时间比可以用来表示驾驶员眨眼的中间过程,以及驾驶员处于困顿状态时微眯着双眼的状态。可见,眨眼持续时间比也可以作为判断驾驶员是否处于疲劳状态的一个参考因素。
S470,确定预设时间段内,在获取到闭眼图像前连续获取到的半闭图像的时间长度,将时间长度的最大值确定为闭眼速度。
具体地,闭眼速度是驾驶员从半闭眼状态到闭眼状态所用的最长时间。例如,在预设时间段内,第一次获取到闭眼图像前连续获取到的半闭图像的时间长度为0.5秒,第二次获取到闭眼图像前连续获取到的半闭图像的时间长度为0.7秒,第三次获取到闭眼图像前连续获取到的半闭图像的时间长度为0.8秒,则闭眼速度为0.8秒。闭眼速度越慢,表明驾驶员很可能处于疲劳状态。闭眼速度也可以作为判断驾驶员是否处于疲劳状态的一个参考因素。S450、S460和S470可以同时执行,也可以依次先后顺序执行或者以其他顺序执行,本发明不对S450、S460和S470的执行顺序做限定。
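A sketch that classifies each frame by its eye-open distance and then derives the three eye-state quantities of steps S430-S470 (blink frequency, blink duration ratio and eye-closing speed). The frame rate, the thresholds and the treatment of a run of consecutive closed-eye images as one blink follow our reading of the text and are assumptions.

```python
from typing import List, Tuple

def eye_state_metrics(open_distances: List[float], fps: float,
                      first_threshold: float,
                      second_threshold: float) -> Tuple[float, float, float]:
    """open_distances: eye-open distance per face image in the preset period.
    first_threshold > second_threshold. Returns (blink frequency, blink
    duration ratio, eye-closing speed in seconds)."""
    assert first_threshold > second_threshold
    states = []
    for d in open_distances:
        if d <= second_threshold:
            states.append("closed")      # closed-eye image
        elif d <= first_threshold:
            states.append("half")        # half-closed image
        else:
            states.append("open")

    z = len(states)
    x = states.count("closed")
    y = states.count("half")
    period_seconds = z / fps if fps else 0.0

    # Blink frequency: number of runs of consecutive closed-eye images per second.
    closed_runs = sum(1 for i, s in enumerate(states)
                      if s == "closed" and (i == 0 or states[i - 1] != "closed"))
    blink_frequency = closed_runs / period_seconds if period_seconds else 0.0

    # Blink duration ratio: (X + Y) / Z.
    blink_duration_ratio = (x + y) / z if z else 0.0

    # Eye-closing speed: longest run of half-closed images immediately
    # preceding a closed-eye image, converted to seconds.
    eye_closing_speed, half_run = 0.0, 0
    for s in states:
        if s == "half":
            half_run += 1
        else:
            if s == "closed" and half_run:
                eye_closing_speed = max(eye_closing_speed, half_run / fps)
            half_run = 0
    return blink_frequency, blink_duration_ratio, eye_closing_speed
```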
图7为本发明实施例提供的又一种疲劳驾驶预警的方法流程示意图。如图7所示的方法实施例,是图2所示实施例中终端根据头部下垂频率和眼部状态信息,获得驾驶员的疲劳值的一种实现方式,图7所示实施例的具体过程可以是:
S510,根据预设时段内的头部下垂频率,获得第一疲劳分量,根据预设时段内的眨眼持续时间比,获得第二疲劳分量;根据预设时段内的眨眼频率,获得第三疲劳分量;根据预设时段内的闭眼速度,获得第四疲劳分量。
具体地,根据头部下垂频率、眨眼持续时间比、眨眼频率和闭眼速度,分别对驾驶员疲劳的指示程度,确定上述第一疲劳分量、第二疲劳分量、第三疲劳分量和第四疲劳分量的大小和正负。以眨眼频率为例,在眨眼频率大于上限频率时,或者小于下限频率时,第三疲劳分量为正,且为预设固定值,在眨眼频率小于或等于上限频率,并且大于或等于下限频率时,第三疲劳分量为0。
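For example, the rule just quoted for the third fatigue component can be sketched as follows (the limit frequencies and the fixed value are preset quantities not specified numerically in the text):

```python
def third_fatigue_component(blink_frequency: float,
                            lower_frequency: float,
                            upper_frequency: float,
                            preset_fixed_value: float) -> float:
    """Positive preset fixed value when the blink frequency is above the upper
    limit or below the lower limit, and 0 when it lies within the limits."""
    if blink_frequency > upper_frequency or blink_frequency < lower_frequency:
        return preset_fixed_value
    return 0.0
```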
S520,根据第一疲劳分量、第二疲劳分量、第三疲劳分量、第四疲劳分量,得到驾驶员的疲劳值。
其中,第一权重为第一疲劳分量的权重,第二权重为第二疲劳分量的权重,第三权重为第三疲劳分量的权重,第四权重为第四疲劳分量的权重。具体地,可以是根据预设的第一权重、第二权重、第三权重和第四权重,分别对第一疲劳分量、第二疲劳分量、第三疲劳分量和第四疲劳分量加权求和,获得驾驶员的疲劳值。
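The final weighted sum of step S520 then looks as follows; the weights are preset in the text but not given numerically, so the placeholders below merely sum to 1.

```python
def fatigue_value(f1: float, f2: float, f3: float, f4: float,
                  w1: float = 0.4, w2: float = 0.2,
                  w3: float = 0.2, w4: float = 0.2) -> float:
    """Weighted sum of the head-droop, blink-duration-ratio, blink-frequency
    and eye-closing-speed fatigue components."""
    return w1 * f1 + w2 * f2 + w3 * f3 + w4 * f4
```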
图8为本发明实施例提供的一种疲劳驾驶预警的终端示意图。图8所示的一种终端,包括:
实时阈值确定模块801,用于根据连续行车时间、当前日期和当前时刻,确定疲劳阈值减小分量,并根据初始疲劳阈值和疲劳阈值减小分量,确定实时疲劳阈值。
图像处理模块802,用于根据从摄像头获得的视频,获得驾驶员的头部下垂频率和驾驶员的眼部状态信息。
疲劳值获得模块803,用于根据头部下垂频率和眼部状态信息,获得驾驶员的疲劳值。
预警模块804,用于在确定疲劳值大于或等于实时疲劳阈值时,向驾驶员进行预警。
图8所示实施例的终端对应地可用于执行图2所示方法实施例中终端执行的步骤,其实现原理和技术效果类似,此处不再赘述。
在上述实施例中,所述实时阈值确定模块具体用于:根据所述连续行车时间,确定第一减小分量,其中,所述第一减小分量的大小随着所述连续行车时间的增大而增大;根据预设的易疲劳日期区间和所述当前日期,确定第二减小分量,所述第二减小分量为包含所述当前日期的所述易疲劳日期区间对应的减小分量;根据预设的易疲劳时刻区间和所述当前时刻,确定第三减小分量,所述第三减小分量为包含所述当前时刻的所述易疲劳时刻区间对应的减小分量;根据所述第一减小分量、第二减小分量和第三减小分量,确定所述疲劳阈值减小分量。
在上述实施例中,所述实时阈值确定模块,还用于:在所述根据连续行车时间、当前日期和当前时刻,确定疲劳阈值减小分量之前,根据预设时间段内获得的所述终端的位置信号,确定车辆移动速度;将所述车辆移动速度由0变为大于0的时刻,确定为所述连续形成时间的起算时刻;将所述当前时刻确定为所述连续行车时间的终止时刻,其中,在所述起算时刻至所述终止时刻之间,所述车辆移动速度大于0;根据所述起算时刻和所述终止时刻,获得所述连续行车时间。
在上述实施例中,所述预警模块具体用于:在确定所述疲劳值大于或等于所述实时疲劳阈值时,根据所述疲劳值相对于所述实时疲劳阈值的超出量,确定所述驾驶员的疲劳等级;获得与所述疲劳等级对应的预警信息,并根据所述预警信息向所述驾驶员进行预警。
在上述实施例中,还包括最近停车区域信息显示模块,用于:获取所述终端的位 置信息;根据所述终端的位置信息和预存储的停车区域数据,获得与所述终端距离最小的最近停车区域信息;向所述驾驶员显示所述最近停车区域信息。
在上述实施例中,图像处理模块具体用于:根据从所述摄像头获得的视频,获得所述驾驶员的人脸图像;根据预设的双眼特征和预设的眉毛特征,在所述人脸图像中确定双眼图像;获取所述双眼图像中双眼连线的中心点,以及所述人脸图像的竖直方向的最低点;若所述中心点位置坐标与所述最低点的位置坐标在竖直方向上的差值小于预设差值,则将所述人脸图像确定为头部下垂图像;根据预设时间段内获取到的所述头部下垂图像数量N,和所述预设时间段内获取到的所述人脸图像数量M,确定所述头部下垂频率,所述头部下垂频率为N/M。
在上述实施例中,所述眼部状态信息包括眨眼频率、眨眼持续时间比和闭眼速度;图像处理模块具体用于:根据所述双眼图像中与预设的上眼睑特征匹配的区域,获得上眼睑位置;根据所述双眼图像中与预设的下眼睑特征匹配的区域,获得下眼睑位置;根据所述上眼睑位置和所述下眼睑位置,获得指示上眼睑与下眼睑之间距离的睁眼距离;在所述预设时间段内,将所述睁眼距离小于或等于预设第一睁眼阈值,并且大于第二睁眼阈值的人脸图像,确定为半闭图像,所述第一睁眼阈值大于所述第二睁眼阈值;在所述预设时间段内,将所述睁眼距离小于或等于预设第二睁眼阈值的人脸图像,确定为闭眼图像;根据所述预设时间段内,连续获取到所述闭眼图像的次数,确定所述眨眼频率;根据在所述预设时间段内,分别获取到的所述闭眼图像的数量X、半闭图像的数量Y和人脸图像的数量Z,将(X+Y)/Z确定为所述眨眼持续时间比;确定所述预设时间段内,在获取到所述闭眼图像前连续获取到的所述半闭图像的时间长度,将所述时间长度的最大值确定为所述闭眼速度。
在上述实施例中,疲劳值获得模块具体用于:根据所述预设时段内的所述头部下垂频率,获得第一疲劳分量,根据所述预设时段内的所述眨眼持续时间比,获得第二疲劳分量;根据所述预设时段内的所述眨眼频率,获得第三疲劳分量;根据所述预设时段内的所述闭眼速度,获得第四疲劳分量;根据所述第一疲劳分量、第二疲劳分量、第三疲劳分量、第四疲劳分量,得到所述驾驶员的疲劳值。
在上述实施例中,疲劳值获得模块进一步用于:根据预设的第一权重、第二权重、第三权重和第四权重,分别对所述第一疲劳分量、第二疲劳分量、第三疲劳分量和第四疲劳分量加权求和,获得所述驾驶员的疲劳值,其中,所述第一权重为所述第一疲劳分量的权重,所述第二权重为所述第二疲劳分量的权重,所述第三权重为所述第三 疲劳分量的权重,所述第四权重为所述第四疲劳分量的权重。
图9为本发明提供的一种终端的硬件结构示意图。图9所示的终端可以是移动终端。移动终端包括但不限于手机、个人数字助理(Personal Digital Assistant,PDA)、平板电脑、便携设备(例如,便携式计算机、袖珍式计算机或手持式计算机)等具有图像采集功能的移动设备。本发明实施例对终端的形式并不限定。
如图9所示,该终端包括:处理器911以及存储器912;其中
存储器912,用于存储计算机程序,该存储器还可以是闪存(flash)。
处理器911,用于执行存储器存储的执行指令,以实现上述疲劳驾驶预警的方法中终端执行的各个步骤。具体可以参见前面方法实施例中的相关描述。
可选地,存储器912既可以是独立的,也可以跟处理器911集成在一起。
当所述存储器912是独立于处理器911之外的器件时,所述终端还可以包括:
总线913,用于连接所述存储器912和处理器911。
本发明还提供一种可读存储介质,可读存储介质中存储有执行指令,当终端的至少一个处理器执行该执行指令时,终端执行上述的各种实施方式提供的疲劳驾驶预警的方法。
本发明还提供一种程序产品,该程序产品包括执行指令,该执行指令存储在可读存储介质中。终端的至少一个处理器可以从可读存储介质读取该执行指令,至少一个处理器执行该执行指令使得终端实施上述的各种实施方式提供的疲劳驾驶预警的方法。
在上述终端或者服务器的实施例中,应理解,处理器可以是中央处理单元(英文:Central Processing Unit,简称:CPU),还可以是其他通用处理器、数字信号处理器(英文:Digital Signal Processor,简称:DSP)、专用集成电路(英文:Application Specific Integrated Circuit,简称:ASIC)等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。
最后应说明的是:以上各实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述各实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分或者全部技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的范围。

Claims (10)

  1. 一种疲劳驾驶预警的方法,其特征在于,应用于终端,所述终端设置有摄像头;所述方法包括:
    根据连续行车时间、当前日期和当前时刻,确定疲劳阈值减小分量,并根据初始疲劳阈值和所述疲劳阈值减小分量,确定实时疲劳阈值;
    根据从所述摄像头获得的视频,获得驾驶员的头部下垂频率和驾驶员的眼部状态信息;
    根据所述头部下垂频率和眼部状态信息,获得所述驾驶员的疲劳值;
    在确定所述疲劳值大于或等于所述实时疲劳阈值时,向所述驾驶员进行预警。
  2. 根据权利要求1所述的方法，其特征在于，所述根据连续行车时间、当前日期和当前时刻，确定疲劳阈值减小分量，包括：
    根据所述连续行车时间,确定第一减小分量,其中,所述第一减小分量的大小随着所述连续行车时间的增大而增大;
    根据预设的易疲劳日期区间和所述当前日期,确定第二减小分量,所述第二减小分量为包含所述当前日期的所述易疲劳日期区间对应的减小分量;
    根据预设的易疲劳时刻区间和所述当前时刻,确定第三减小分量,所述第三减小分量为包含所述当前时刻的所述易疲劳时刻区间对应的减小分量;
    根据所述第一减小分量、第二减小分量和第三减小分量,确定所述疲劳阈值减小分量。
  3. 根据权利要求1或2所述的方法,其特征在于,在所述根据连续行车时间、当前日期和当前时刻,确定疲劳阈值减小分量之前,还包括:
    根据预设时间段内获得的所述终端的位置信号,确定车辆移动速度;
    将所述车辆移动速度由0变为大于0的时刻，确定为所述连续行车时间的起算时刻；
    将所述当前时刻确定为所述连续行车时间的终止时刻,其中,在所述起算时刻至所述终止时刻之间,所述车辆移动速度大于0;
    根据所述起算时刻和所述终止时刻,获得所述连续行车时间。
  4. 根据权利要求1所述的方法,其特征在于,所述在确定所述疲劳值大于或等于所述实时疲劳阈值时,向所述驾驶员进行预警,包括:
    在确定所述疲劳值大于或等于所述实时疲劳阈值时,根据所述疲劳值相对于所述 实时疲劳阈值的超出量,确定所述驾驶员的疲劳等级;
    获得与所述疲劳等级对应的预警信息,并根据所述预警信息向所述驾驶员进行预警。
  5. 根据权利要求1或4所述的方法,其特征在于,在所述向所述驾驶员进行预警之后,还包括:
    获取所述终端的位置信息;
    根据所述终端的位置信息和预存储的停车区域数据,获得与所述终端距离最小的最近停车区域信息;
    向所述驾驶员显示所述最近停车区域信息。
  6. 根据权利要求1所述的方法,其特征在于,所述根据从所述摄像头获得的视频,获得驾驶员的头部下垂频率,包括:
    根据从所述摄像头获得的视频,获得所述驾驶员的人脸图像;
    根据预设的双眼特征和预设的眉毛特征,在所述人脸图像中确定双眼图像;
    获取所述双眼图像中双眼连线的中心点,以及所述人脸图像的竖直方向的最低点;
    若所述中心点位置坐标与所述最低点的位置坐标在竖直方向上的差值小于预设差值,则将所述人脸图像确定为头部下垂图像;
    根据预设时间段内获取到的所述头部下垂图像数量N,和所述预设时间段内获取到的所述人脸图像数量M,确定所述头部下垂频率,所述头部下垂频率为N/M。
  7. 根据权利要求6所述的方法,其特征在于,所述眼部状态信息包括眨眼频率、眨眼持续时间比和闭眼速度;
    所述根据从所述摄像头获得的视频,获得驾驶员的眼部状态信息,包括:
    根据所述双眼图像中与预设的上眼睑特征匹配的区域,获得上眼睑位置;根据所述双眼图像中与预设的下眼睑特征匹配的区域,获得下眼睑位置;
    根据所述上眼睑位置和所述下眼睑位置,获得指示上眼睑与下眼睑之间距离的睁眼距离;
    在所述预设时间段内,将所述睁眼距离小于或等于预设第一睁眼阈值,并且大于第二睁眼阈值的人脸图像,确定为半闭图像,所述第一睁眼阈值大于所述第二睁眼阈值;
    在所述预设时间段内,将所述睁眼距离小于或等于预设第二睁眼阈值的人脸图像,确定为闭眼图像;
    根据所述预设时间段内,连续获取到所述闭眼图像的次数,确定所述眨眼频率;
    根据在所述预设时间段内,分别获取到的所述闭眼图像的数量X、半闭图像的数量Y和人脸图像的数量Z,将(X+Y)/Z确定为所述眨眼持续时间比;
    确定所述预设时间段内,在获取到所述闭眼图像前连续获取到的所述半闭图像的时间长度,将所述时间长度的最大值确定为所述闭眼速度。
  8. 根据权利要求7所述的方法,其特征在于,所述根据所述头部下垂频率和眼部状态信息,获得所述驾驶员的疲劳值,包括:
    根据所述预设时段内的所述头部下垂频率,获得第一疲劳分量;
    根据所述预设时段内的所述眨眼持续时间比,获得第二疲劳分量;
    根据所述预设时段内的所述眨眼频率,获得第三疲劳分量;
    根据所述预设时段内的所述闭眼速度,获得第四疲劳分量;
    根据所述第一疲劳分量、第二疲劳分量、第三疲劳分量、第四疲劳分量,得到所述驾驶员的疲劳值。
  9. 根据权利要求8所述的方法,其特征在于,所述根据所述第一疲劳分量、第二疲劳分量、第三疲劳分量、第四疲劳分量,得到所述驾驶员的疲劳值,包括:
    根据预设的第一权重、第二权重、第三权重和第四权重,分别对所述第一疲劳分量、第二疲劳分量、第三疲劳分量和第四疲劳分量加权求和,获得所述驾驶员的疲劳值,
    其中,所述第一权重为所述第一疲劳分量的权重,所述第二权重为所述第二疲劳分量的权重,所述第三权重为所述第三疲劳分量的权重,所述第四权重为所述第四疲劳分量的权重。
  10. 一种终端,其特征在于,包括:
    实时阈值确定模块,用于根据连续行车时间、当前日期和当前时刻,确定疲劳阈值减小分量,并根据初始疲劳阈值和所述疲劳阈值减小分量,确定实时疲劳阈值;
    图像处理模块,用于根据从摄像头获得的视频,获得驾驶员的头部下垂频率和驾驶员的眼部状态信息;
    疲劳值获得模块,用于根据所述头部下垂频率和眼部状态信息,获得所述驾驶员的疲劳值;
    预警模块,用于在确定所述疲劳值大于或等于所述实时疲劳阈值时,向所述驾驶员进行预警。
PCT/CN2017/102689 2017-09-21 2017-09-21 疲劳驾驶预警的方法和终端 WO2019056259A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/102689 WO2019056259A1 (zh) 2017-09-21 2017-09-21 疲劳驾驶预警的方法和终端

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/102689 WO2019056259A1 (zh) 2017-09-21 2017-09-21 疲劳驾驶预警的方法和终端

Publications (1)

Publication Number Publication Date
WO2019056259A1 true WO2019056259A1 (zh) 2019-03-28

Family

ID=65810984

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/102689 WO2019056259A1 (zh) 2017-09-21 2017-09-21 疲劳驾驶预警的方法和终端

Country Status (1)

Country Link
WO (1) WO2019056259A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111179551A (zh) * 2019-12-17 2020-05-19 西安工程大学 一种危化品运输驾驶员实时监控方法
CN112550145A (zh) * 2020-11-25 2021-03-26 国家电网有限公司 一种工程车辆疲劳驾驶干预系统

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104183091A (zh) * 2014-08-14 2014-12-03 苏州清研微视电子科技有限公司 一种自适应调整疲劳驾驶预警系统灵敏度的系统
CN105069976A (zh) * 2015-07-28 2015-11-18 南京工程学院 一种疲劳检测和行驶记录综合系统及疲劳检测方法
US20160090097A1 (en) * 2014-09-29 2016-03-31 The Boeing Company System for fatigue detection using a suite of physiological measurement devices
CN105701971A (zh) * 2016-03-23 2016-06-22 江苏大学 一种基于虹膜识别的防止疲劳驾驶系统、装置及方法
CN105788028A (zh) * 2016-03-21 2016-07-20 上海仰笑信息科技有限公司 具有疲劳驾驶预警功能的行车记录仪
CN106080194A (zh) * 2016-06-14 2016-11-09 李英德 防疲劳驾驶的预警方法和系统
CN106448062A (zh) * 2016-10-26 2017-02-22 深圳市元征软件开发有限公司 疲劳驾驶检测方法及装置
CN106740862A (zh) * 2016-11-29 2017-05-31 深圳市元征科技股份有限公司 驾驶员状态监控方法及驾驶员状态监控装置



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17925762

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17925762

Country of ref document: EP

Kind code of ref document: A1