WO2018028068A1 - Fatigue driving monitoring method and cloud server - Google Patents

Fatigue driving monitoring method and cloud server

Info

Publication number
WO2018028068A1
WO2018028068A1 PCT/CN2016/105631 CN2016105631W
Authority
WO
WIPO (PCT)
Prior art keywords
driver
fatigue
driving state
information
fatigue driving
Prior art date
Application number
PCT/CN2016/105631
Other languages
English (en)
French (fr)
Inventor
刘均
刘新
宋朝忠
欧阳张鹏
Original Assignee
深圳市元征科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市元征科技股份有限公司 filed Critical 深圳市元征科技股份有限公司
Publication of WO2018028068A1 publication Critical patent/WO2018028068A1/zh

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60K: ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K28/00: Safety devices for propulsion-unit control, specially adapted for, or arranged in, vehicles, e.g. preventing fuel supply or ignition in the event of potentially dangerous conditions
    • B60K28/02: Safety devices for propulsion-unit control responsive to conditions relating to the driver
    • B60K28/06: Safety devices for propulsion-unit control responsive to incapacity of driver
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R16/00: Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R16/02: Electric or fluid circuits: electric constitutive elements
    • B60R16/023: Electric constitutive elements for transmission of signals between vehicle parts or subsystems
    • B60R16/0231: Circuits relating to the driving or the functioning of the vehicle

Definitions

  • the invention relates to the technical field of vehicle monitoring, in particular to a fatigue driving monitoring method and a cloud server.
  • in the prior art, the driver is prompted by a warning or reminder to stop and rest after driving for a period of time before driving again, but this method only serves as a reminder and has no binding force; if the driver does not take the initiative to comply, there is no way to force the driver to rest, and the safety hazard is difficult to eliminate.
  • the main object of the present invention is to provide a fatigue driving monitoring method and a cloud server, which aim to fundamentally eliminate the safety hazard caused by driver fatigue driving.
  • the fatigue driving monitoring method proposed by the present invention comprises the following steps:
  • a control command is generated according to the fatigue driving state, and the control command is transmitted to the smart terminal.
  • generating a control instruction according to the fatigue driving state, and transmitting the control instruction to the smart terminal, includes: when the driver is currently in a fatigue driving state, the driver's fatigue driving state is compared with a preset driving fatigue level, and a control command corresponding to the preset driving fatigue level is issued.
  • the current vital sign information is a video image including the driver's head, face, or hand; and the analyzing of the current vital sign information to determine the driving state of the driver includes:
  • the current vital sign information is a pulse signal including the driver's heart rate, respiration, or blood pressure; and the analyzing and processing the current vital sign information to determine the driving state of the driver includes:
  • the driving state of the driver is a fatigue driving state.
  • the invention also provides a fatigue driving monitoring method, comprising the following steps:
  • the intelligent terminal receives the control command and transmits the control command to the vehicle controller.
  • the invention also provides a cloud server, comprising:
  • a remote receiving port configured to receive a fatigue state determination request sent by the smart terminal, where the fatigue state determination request includes information about a current physical condition of the driver;
  • a judging module configured to perform an analysis process on the current vital sign information to determine a driving state of the driver
  • the instruction module is configured to generate a control instruction according to the fatigue driving state when determining that the driver is currently in a fatigue driving state, and send the control instruction to the smart terminal.
  • the instruction module further includes:
  • the driver's fatigue driving state is compared with a preset driving fatigue level, and a control instruction corresponding to the preset driving fatigue level is issued.
  • the current vital sign information is a video image including the driver's head, face, or hand;
  • the determining module further includes:
  • a positioning submodule configured to detect the video image, and locate a feature image in the video image
  • An analysis submodule configured to analyze the feature image to determine feature information of the feature image
  • the determining submodule is configured to compare the feature information with the preset statistical model to determine a driving state of the driver.
  • the current vital sign information is a pulse signal including the driver's heart rate, respiration, or blood pressure.
  • the determining module includes:
  • a conversion submodule for processing the pulse signal and converting it into a digital signal
  • a comparison sub-module configured to determine whether the value of the digital signal exceeds a threshold, and whether the duration of exceeding the threshold is greater than a first preset duration;
  • the determining sub-module is configured to determine that the driving state of the driver is a fatigue driving state when the digital signal exceeds the threshold for a duration longer than the first preset duration.
  • the cloud server determines whether the driver is currently in a fatigue state according to the information, generates a corresponding control command according to the level of the fatigue state, and transmits it to the intelligent vehicle unit; the intelligent terminal sends the control command to the vehicle controller, which executes it;
  • in this way the driver's fatigue state can be monitored and, once fatigue occurs, the vehicle controller is forced to execute control commands such as decelerating and stopping the vehicle, thereby fundamentally eliminating the safety hazard caused by fatigue driving.
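The monitoring loop summarized above (terminal sends vital signs, cloud judges, cloud issues a command, terminal forwards it to the vehicle controller) can be sketched as follows. This is an illustrative sketch only: `cloud_judge`, `cloud_instruction`, and `monitor_once` are hypothetical names, and the eye-opening test stands in for whatever judging logic the cloud server actually runs.

```python
def cloud_judge(vital_sign_info):
    """Toy judging module: fatigued if the reported eye opening is low.

    The 0.5 cutoff is illustrative, not a value from the patent.
    """
    return vital_sign_info.get("eye_opening", 1.0) < 0.5


def cloud_instruction(fatigued):
    """Toy instruction module: emit a deceleration command when fatigued."""
    return "decelerate" if fatigued else None


def monitor_once(vital_sign_info, vehicle_controller):
    """One request/response cycle: terminal -> cloud -> terminal -> controller."""
    command = cloud_instruction(cloud_judge(vital_sign_info))
    if command is not None:
        vehicle_controller(command)  # forced execution by the vehicle controller
    return command
```

In the real system the judging step would be one of the video-image or pulse-signal analyses described in the embodiments below.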
  • FIG. 1 is a schematic block diagram of a module of an embodiment of a cloud server according to the present invention.
  • FIG. 2 is a schematic block diagram of another embodiment of a cloud server according to the present invention.
  • FIG. 3 is a flow chart of an embodiment of a fatigue driving monitoring method according to the present invention.
  • FIG. 4 is a schematic diagram of a module framework of still another embodiment of a cloud server according to the present invention.
  • the terms "first", "second", and the like in the present invention are used for the purpose of description only, and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
  • features defined by "first" or "second" may include at least one of those features, either explicitly or implicitly.
  • the meaning of "a plurality" is at least two, such as two, three, etc., unless specifically defined otherwise.
  • the terms "connected", "fixed" and the like should be understood broadly, unless otherwise clearly defined and limited.
  • "fixed" may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical connection or an electrical connection; it may be a direct connection or an indirect connection through an intermediate medium; and it may be an internal connection between two elements or an interaction relationship between two elements, unless explicitly defined otherwise.
  • the specific meanings of the above terms in the present invention can be understood on a case-by-case basis.
  • the fatigue driving monitoring system can be referred to FIG. 1 and FIG.
  • the architecture can involve:
  • the information collecting component is configured to continuously generate information data including the driver's vital signs; it may adopt a camera or wearable sensors such as a heart rate sensor, a blood pressure sensor, or a respiration sensor. The camera can be mounted on the vehicle and arranged to scan the driver;
  • the wearable sensor body can be embedded in the seat belt, and the sensors can communicate wirelessly with the intelligent terminal through the network or Bluetooth;
  • the intelligent terminal may be an intelligent vehicle unit or a mobile terminal. The vehicle unit may be disposed on the vehicle, communicate with the media devices and controllers on the vehicle through the CAN bus, and communicate with the cloud server through the wireless network; it may specifically be an on-board computer system using wireless communication technology, such as a telematics box (T-BOX) or an on-board diagnostic system (OBD). The mobile terminal may specifically be a mobile phone, a tablet computer, a smart bracelet, a smart watch, etc., and realizes wireless communication with the media devices and controllers on the vehicle and with the cloud server through a wireless network;
  • the cloud server establishes communication with the intelligent terminal through the remote interface and the network.
  • a memory for storing executable instructions of the processor; and a processor for acquiring a video image or pulse data containing the driver's vital signs; detecting the video image and locating the feature image in the video image; analyzing the feature image to determine feature information of the feature image; determining the driving state of the driver according to the feature information; or processing and converting the pulse data and determining the driving state of the driver according to the converted data.
  • a fatigue driving monitoring method provided by an example of the present invention includes:
  • Step S10 Receive a fatigue state determination request sent by the smart terminal, where the fatigue state determination request includes current physical condition information of the driver;
  • the cloud server receives the fatigue state determination request by means of the remote receiving interface; the request includes information data on the driver's current vital signs, and the request and information data are stored in the memory.
  • the information on the driver's current vital signs may specifically be a video image including posture characteristics of the driver, and the video image may specifically include a facial feature, head feature, or hand feature of the driver; or it may be a pulse signal including vital signs of the driver such as heart rate, blood pressure, and respiratory frequency.
  • Step S20 Perform analysis processing on the current vital sign information to determine the driving state of the driver;
  • the above data is invoked and analyzed accordingly to finally determine the driving state of the driver, which is either a fatigue driving state or a non-fatigue driving state.
  • Step S30 when it is determined that the driver is currently in a fatigue driving state, generating a control instruction according to the fatigue driving state, and transmitting the control instruction to the smart terminal.
  • the control command may be a general deceleration driving signal or an emergency braking signal;
  • the cloud server may transmit the control command to the smart terminal through the wireless network.
  • the cloud server determines whether the driver is currently in a fatigue state according to the information, generates a corresponding control command according to the level of the fatigue state, and transmits it to the intelligent vehicle unit; the intelligent terminal sends the control command to the vehicle controller, which executes it;
  • in this way the driver's fatigue state can be monitored and, once fatigue occurs, the vehicle controller is forced to execute control commands such as decelerating and stopping the vehicle, thereby fundamentally eliminating the safety hazard caused by fatigue driving.
  • the fatigue driving monitoring method includes:
  • Step 101 Receive a fatigue state determination request from the smart terminal, where the request includes information about the driver's current physical condition; the current vital sign information is a video image including the driver's head, face, or hand;
  • the video image containing the driver's vital signs is recorded by the camera and should at least include the driver's facial image, so that by analyzing the facial image it can be determined whether the driver in the video is in a fatigue state, such as a dozing state or physical discomfort. It can also include the driver's hand image, for example whether the driver's hands are placed on the steering wheel; if the hands are off the steering wheel, it can be judged that the driver is in a state of fatigue, whereupon a deceleration or emergency braking command is issued to the controller to prevent driving safety hazards caused by driver fatigue.
  • Step 201 Perform analysis processing on the current vital sign information to determine the driving state of the driver;
  • the embodiment of the present disclosure determines the driver information in the video based on the previously obtained recognition model, face detection and tracking technology, and then presents the recognized driver information to the user who views the video;
  • Step 201a detecting the video image, and positioning a feature image in the video image
  • the video captured by the camera can be sent to the cloud server through the wireless network, and the positioning sub-module of the cloud server detects and analyzes the received video image. Alternatively, if the method is applied on the driver's side, a client application implementing the fatigue driving detection method may be installed in the camera, or the driver's terminal device, such as a mobile phone, may be connected to the camera in a wired or wireless manner and the analysis performed by application software in the phone.
  • the analysis process first detects the video image. A video image is composed of successive frame images, so detecting the video image means detecting each frame image: each frame image is scanned, the feature image appearing in it is located, and the position coordinates of the feature image in the frame image are marked to determine the position information of the feature image.
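As an illustration of the frame-by-frame scan just described, the toy sketch below treats a frame as a 2D list of grayscale values and reduces "locating the feature image" to finding the bounding box of dark pixels; a real system would use a face or feature detector instead. All names and the dark-pixel heuristic are hypothetical.

```python
def locate_feature(frame, dark_threshold=100):
    """Return (min_row, min_col, max_row, max_col) of the dark region, or None.

    Stand-in for a real feature detector: the "feature image" is assumed to
    be the region of pixels darker than dark_threshold.
    """
    coords = [(r, c)
              for r, row in enumerate(frame)
              for c, value in enumerate(row)
              if value < dark_threshold]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return (min(rows), min(cols), max(rows), max(cols))


def scan_video(frames):
    """Detect each frame in turn and mark the feature position per frame."""
    return [locate_feature(frame) for frame in frames]
```

The per-frame coordinates returned here play the role of the marked position information used by the later analysis steps.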
  • Step 201b analyzing the feature image to determine feature information of the feature image
  • the feature image may have various categories according to the determination of the driving state of the driver, for example, a head image, an eye image, a mouth image, a hand image including a steering wheel; and an eye image according to the resolution of the camera It can also be subdivided into iris images, pupil images, and so on.
  • the analysis sub-module analyzes the feature image according to its respective attribute characteristics and determines the feature information contained in it. For example, if the feature image is an eye image, the feature information may include an opening degree value between the upper and lower eyelids, a pupil opening degree characteristic parameter, an eyeball contour size value, and the like.
  • Step 201c comparing the feature information of the feature image with a preset statistical model to determine the driving state of the driver; if the driver is in a fatigue driving state, proceed to step 301; if in a non-fatigue driving state, return to step 101 and monitor the information data of the next moment;
  • a preset number of feature images are collected as sample data, and the sample data is analyzed according to a preset algorithm to obtain the preset statistical model; the signal interaction process is shown in FIG. 1.
  • the driving state of the driver may be determined based on the feature information alone, or the driving information of the driver may be determined after the feature information is compared with the reference information.
  • various states can be set according to requirements, such as an awake state (non-fatigue driving state), a fatigue state (fatigue driving state), a semi-fatigue state (fatigue driving state), and the like;
  • the preset statistical model may include: a driver's head movement range threshold; the feature image may include: a head image. Specifically, the determining sub-module determines whether the movement trajectory of the positioning coordinate exceeds a threshold value of the driver's head movement range. If the time length exceeding the threshold is greater than the first preset time length, determining that the driving state of the driver is a fatigue driving state.
  • the driver's head movement range threshold may be obtained by collecting a large amount of the driver's driving video information and then analyzing and modeling it according to the driver's individual driving habits. For example, some drivers prefer to listen to songs while driving and their heads sway with the music, while other drivers are motionless and focused on driving; the head movement range thresholds determined for these two types of drivers will differ.
  • if the head movement range exceeds the range threshold of the preset statistical model, and the duration of exceeding the threshold is longer than the first preset duration, for example 4 seconds, the driver's head may be considered to have been lowered for more than 4 seconds; at this point it is very likely that the driver has lowered his head because he is dozing, and it is judged that he is in a fatigue driving state.
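A minimal sketch of this head-movement check, assuming samples arrive as (timestamp in seconds, head position) pairs and taking the 4-second first preset duration from the text; the sampling format and function name are assumptions made for illustration.

```python
def head_fatigue(samples, y_range, first_preset_duration=4.0):
    """Fatigue if the head stays outside y_range longer than the preset duration.

    samples: list of (timestamp_seconds, head_y) pairs in time order.
    y_range: (low, high) bounds from the driver's head-movement-range model.
    """
    low, high = y_range
    start = None  # timestamp when the head first left the allowed range
    for t, y in samples:
        if y < low or y > high:
            if start is None:
                start = t
            if t - start >= first_preset_duration:
                return True  # e.g. head lowered for more than 4 seconds
        else:
            start = None  # head returned to its normal range; reset the timer
    return False
```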
  • the preset statistical model may include: an eye opening threshold; the feature image may include: an eye image.
  • the determining sub-module determines whether the eye opening characteristic parameter is less than a preset eye opening threshold; if the duration for which it remains below the threshold is greater than a second preset duration, the driving state of the driver is determined to be a fatigue driving state. For example, if the driver slightly closes both eyes due to fatigue, the detected eye opening becomes smaller; if the opening remains below the preset eye opening threshold for a period of time, for example a second preset duration of 5 seconds, it can be judged that the driver has slightly closed his eyes for 5 seconds, and it is determined that the driver has entered the fatigue driving state.
  • the present embodiment extracts a frame image to be detected by performing a preset step size division on a video image, and analyzes the image to be detected, thereby greatly reducing the number of data analysis of the video image and improving the determination efficiency of the driving state;
  • the driving state represented by the feature information is quickly and accurately determined by comparing the feature information in the feature image such as the head image and the eye image with the preset statistical model.
  • Step 301 If it is determined that the driver is currently in a fatigue driving state, generate a control instruction according to the fatigue driving state, and send the control instruction to the smart terminal.
  • the step 301 further includes: when determining that the driver is currently in a fatigue driving state, comparing the fatigue driving state of the driver with a preset driving fatigue level, and issuing a control instruction corresponding to the preset driving fatigue level.
  • the driver is determined to be in a fatigue driving state according to the current physical condition information, and the driving state of the driver is compared with the preset driving fatigue level, and a control command corresponding to the preset driving fatigue level is issued.
  • the specific implementation may be performed by setting a plurality of comparison thresholds for the preset statistical model. When the feature parameters in the feature image belong to different comparison threshold ranges, different driving states within different comparison threshold ranges are determined.
  • for example, the eye opening threshold is divided into 80% and 50%, and the preset driving fatigue levels correspond to not sending a deceleration command, sending a deceleration command, and sending an emergency braking command. If the driver's eye opening is greater than 80%, the driver is considered awake and no deceleration command is sent. When the eye opening is between 50% and 80%, the driver is considered half-awake and half-tired, and a warning can be sent to remind the driver to consider stopping to rest. If the eye opening is less than 50%, a deceleration command is issued and the driver is reminded to refocus or advised to brake and rest. Further, when the driver's eye opening is detected to be 0, that is, the eyes are closed, an emergency braking command may be sent to brake the vehicle and prevent a safety accident caused by fatigue driving.
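The 80%/50% tiering in this example can be sketched as a simple mapping; the returned command labels are illustrative, not identifiers from the patent.

```python
def command_for_eye_opening(opening):
    """Map an eye-opening ratio (0.0 to 1.0) to a control action."""
    if opening > 0.8:
        return "none"            # awake: no deceleration command
    if opening > 0.5:
        return "warning"         # half-awake, half-tired: remind the driver
    if opening > 0.0:
        return "decelerate"      # fatigued: issue a deceleration command
    return "emergency_brake"     # eyes closed: brake the vehicle
```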
  • the fatigue driving monitoring method includes:
  • Step 102 Receive, from the smart terminal, information including the driver's current vital signs;
  • the current vital sign information is a pulse signal including the driver's heart rate, respiration, or blood pressure;
  • the pulse signal including the driver's heart rate, respiration, or blood pressure is recorded by the sensor, and the driver's heart rate, respiration, or blood pressure is analyzed to determine whether the driver is in a state of fatigue or discomfort. For example, if the heart rate exceeds 160 beats/min or is less than 40 beats/min, the driver is considered to have physical discomfort such as palpitations or chest tightness caused by heart disease.
  • the signal may also include the driver's respiration: if the driver's respiratory rate exceeds 24 breaths/min or is less than 12 breaths/min, it may likewise be determined that the driver is in a state of fatigue or discomfort. It may also include the driver's blood pressure: if the systolic pressure is higher than 150 mmHg, the diastolic pressure is higher than 120 mmHg, the systolic pressure is lower than 80 mmHg, or the diastolic pressure is lower than 50 mmHg, it can also be judged that the driver is in a state of fatigue or discomfort, thereby preventing driving safety hazards caused by driver discomfort.
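The normal ranges quoted above (heart rate 40 to 160 beats/min, respiration 12 to 24 breaths/min, systolic 80 to 150 mmHg, diastolic 50 to 120 mmHg) can be sketched as a simple range check; the function and key names are hypothetical.

```python
# Normal ranges taken from the examples in the text.
NORMAL_RANGES = {
    "heart_rate":  (40, 160),   # beats/min
    "respiration": (12, 24),    # breaths/min
    "systolic":    (80, 150),   # mmHg
    "diastolic":   (50, 120),   # mmHg
}


def abnormal_signs(readings):
    """Return the names of vital signs that fall outside their normal range."""
    out = []
    for name, value in readings.items():
        low, high = NORMAL_RANGES[name]
        if not (low <= value <= high):
            out.append(name)
    return out
```

Any non-empty result would flag a possible fatigue or discomfort state for the later steps.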
  • Step 202 Perform analysis processing on the current vital sign information to determine the driving state of the driver.
  • Step 202R processing the pulse signal and converting it into a digital signal
  • the processing first performs amplification, filtering, and noise reduction on the pulse electrical signal to improve the reliability of the sampled signal, and then converts the electrical signal to obtain a digital signal. The value of the digital signal directly reflects whether the heart rate, respiratory frequency, or blood pressure is high or low; the conversion sub-module may specifically include an amplifying circuit, a filtering circuit, and an analog-to-digital converter.
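A software-only sketch of this conversion chain, with a moving average standing in for the filtering/noise-reduction circuit and rounding standing in for the analog-to-digital converter; the gain and window size are illustrative values, not parameters from the patent.

```python
def convert_pulse(signal, gain=2.0, window=3):
    """Amplify, moving-average filter, and quantize a sampled pulse signal."""
    # Amplification stage (stand-in for the amplifying circuit).
    amplified = [gain * x for x in signal]
    # Moving-average smoothing (stand-in for the filtering circuit).
    filtered = []
    for i in range(len(amplified)):
        chunk = amplified[max(0, i - window + 1): i + 1]
        filtered.append(sum(chunk) / len(chunk))
    # Quantization to integers (stand-in for the analog-to-digital converter).
    return [round(x) for x in filtered]
```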
  • Step 202S Determine whether the value of the digital signal exceeds a threshold, and determine whether a duration exceeding the threshold is greater than a first preset duration;
  • the comparison sub-module invokes the digital signal and compares its value with a preset threshold; for example, the heart rate threshold range is set to 40 to 160, so a digital signal value of 180 exceeds the threshold and timing is started. The next data value is compared and still exceeds the threshold, and so on, until the N-th group of data falls below the threshold; the time interval between the sampling time of the first value that exceeded the threshold and that of the (N-1)-th group of data is then calculated. If the time interval is less than the first preset duration, comparison returns to the next data; if the time interval is greater than or equal to the first preset duration, proceed to step 202T;
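Steps 202S and 202T can be sketched as follows, assuming samples arrive as (sampling time in seconds, value) pairs and using the 40 to 160 heart-rate range from the example; the 60-second default for the first preset duration matches the 1-minute example and is otherwise an assumption.

```python
def fatigue_from_signal(samples, low=40, high=160, first_preset_duration=60.0):
    """Fatigue driving state if values stay outside [low, high] long enough."""
    first_exceed = None  # sampling time of the first out-of-range value in a run
    for t, value in samples:
        if value < low or value > high:
            if first_exceed is None:
                first_exceed = t  # start timing (step 202S)
            if t - first_exceed >= first_preset_duration:
                return True       # step 202T: fatigue driving state
        else:
            first_exceed = None   # value fell back within the threshold; reset
    return False
```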
  • Step 202T if the duration of exceeding the threshold is greater than the first preset duration, determining that the driving state of the driver is a fatigue driving state;
  • if the duration of exceeding the threshold is greater than the first preset duration, for example the heart rate stays at 180 beats/min for 1 minute, the driver is considered to be in poor physical condition and in a fatigue driving state;
  • Step 302 If the driver is in a fatigue driving state, generate a control instruction according to the fatigue driving state, and send the control instruction to the smart terminal.
  • the signal interaction process is shown in Figure 2.
  • different levels of the fatigue state generate different levels of control commands, where the control command may be a general deceleration driving signal or an emergency braking signal; the cloud server may send the control command to the intelligent terminal through the wireless network.
  • the driver's fatigue driving state is compared with a preset driving fatigue level, and a control command corresponding to the preset driving fatigue level is issued.
  • based on the current vital sign information, it is determined that the driver is in a fatigue driving state; the driving state of the driver is then compared with the preset driving fatigue level, and a control command corresponding to the preset driving fatigue level is issued.
  • the specific implementation may be performed by setting a plurality of comparison thresholds for the preset statistical model. When the feature parameters in the feature image belong to different comparison threshold ranges, different driving states within different comparison threshold ranges are determined.
  • for example, the blood pressure threshold is divided into 80%, 100%, and 120% of a 150 mmHg systolic reference, and the preset driving fatigue levels correspond to not sending a deceleration command, sending a deceleration command, and sending an emergency braking command. If the driver's systolic pressure is greater than 80% of 150 mmHg, the driver is considered awake and no deceleration command is sent; when the systolic pressure is between 80% and 100% of 150 mmHg, the driver is considered semi-awake and semi-tired, and a warning can be sent to remind the driver to consider stopping before driving again; when the systolic pressure is higher than 100% of 150 mmHg, a deceleration command is issued and the driver is reminded to refocus or advised to brake and rest; further, when the driver's systolic pressure is detected to be higher than 120% of 150 mmHg, an emergency braking command can be sent to brake the vehicle to prevent a safety accident caused by fatigue driving.
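The percentage tiers in this example can be sketched as a mapping as well. The source's first tier reads ambiguously, so this sketch assumes one consistent reading: below 80% of the 150 mmHg reference no command is sent, 80% to 100% triggers a warning, 100% to 120% a deceleration command, and above 120% emergency braking. Command labels and names are illustrative.

```python
SYSTOLIC_REF = 150.0  # mmHg reference from the text


def command_for_systolic(systolic_mmhg):
    """Map a systolic pressure to a control action by percentage of 150 mmHg."""
    ratio = systolic_mmhg / SYSTOLIC_REF
    if ratio > 1.2:
        return "emergency_brake"  # above 120%: brake the vehicle
    if ratio > 1.0:
        return "decelerate"       # 100% to 120%: deceleration command
    if ratio > 0.8:
        return "warning"          # 80% to 100%: semi-awake, send a warning
    return "none"                 # below 80% (assumed tier): no command
```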
  • the present invention also provides a cloud server.
  • the cloud server includes a remote receiving port 10, a judging module 20, and an instruction module 30.
  • the remote receiving port 10 is configured to receive a fatigue state determination request sent by the smart terminal, where the fatigue state determination request includes information about the driver's current physical condition. The cloud server receives the fatigue state determination request through the wireless network via the remote receiving interface 10; the request includes information data on the driver's current vital signs, and the request and information data are stored in the memory.
  • the judging module 20 is configured to analyze the current vital sign information and determine the driving state of the driver: the above data is invoked and analyzed accordingly to finally determine the driving state of the driver, which here is either a fatigue driving state or a non-fatigue driving state.
  • the command module 30 is configured to generate a control instruction according to the fatigue driving state when determining that the driver is currently in a fatigue driving state, and send the control instruction to the smart terminal.
  • the control command may be a general deceleration driving signal or an emergency braking signal;
  • the cloud server may transmit the control command to the smart terminal through the wireless network.
  • the level of the fatigue state may be divided so as to generate a control command corresponding to the fatigue state level; in this embodiment, the command module 30 is specifically configured to compare the fatigue driving state of the driver with the preset driving fatigue level and issue a control command corresponding to the preset driving fatigue level.
  • the level of the fatigue state and the corresponding control commands can be set according to actual needs.
  • the specific implementation may be performed by setting a plurality of comparison thresholds for the preset statistical model. When the feature parameters in the feature image belong to different comparison threshold ranges, different driving states within different comparison threshold ranges are determined.
  • the cloud server determines whether the driver is currently in a fatigue state according to the information, generates a corresponding control command according to the level of the fatigue state, and transmits it to the intelligent vehicle unit; the intelligent terminal sends the control command to the vehicle controller, which executes it;
  • in this way the driver's fatigue state can be monitored and, once fatigue occurs, the vehicle controller is forced to execute control commands such as decelerating and stopping the vehicle, thereby fundamentally eliminating the safety hazard caused by fatigue driving.
  • Different kinds of current vital sign information call for different structures of the judging module 20, as described in detail below.
  • When the current vital sign information is a video image containing the driver's head, face, or hands, the judging module 20 further includes a positioning sub-module 201, an analysis sub-module 202, and a determination sub-module 203. The video image contains the driver's bodily features, specifically facial, head, or hand features. The smart terminal periodically connects to the camera through its collection module, captures the video images, and, through its sending module and a network interface, packages the vital sign information together with the fatigue state request and sends them to the cloud server; the sending module and the remote receiving port 10 may specifically be input/output (I/O) interfaces.
  • This embodiment determines the driver information in the video based on a pre-obtained recognition model together with face detection and tracking technology, and then presents the recognized driver information to the user viewing the video.
  • The positioning sub-module 201 is configured to detect the video image and locate the feature images within it. A video consists of successive frame images, so detecting the video means detecting each frame: each frame is scanned, the feature images appearing in it are located, and their position coordinates within the frame are marked to determine the position information of each feature image.
  • The analysis sub-module 202 is configured to analyze the feature images and determine their feature information. Depending on the criteria used to judge the driver's driving state, feature images fall into several categories, for example head images, eye images, mouth images, and hand images containing the steering wheel; depending on the camera resolution, eye images may be further subdivided into iris images, pupil images, and the like. The sub-module analyzes each feature image according to its respective attributes and determines the feature information it contains; if the feature image is an eye image, for instance, the feature information may include the opening value between the upper and lower eyelids, pupil-opening characteristic parameters, and eyeball contour dimensions.
  • The determination sub-module 203 is configured to compare the feature information with the preset statistical model and determine the driver's driving state. If the driver is in a fatigue driving state, the command module 30 is triggered; if in a non-fatigue driving state, control returns to the remote receiving port 10 to monitor the information data at the next moment.
  • In another embodiment, the difference from the above is that the current vital sign information is a pulse signal containing the driver's heart rate, respiration, or blood pressure, and accordingly the judging module 20 includes a conversion sub-module 204, a comparison sub-module 205, and a judging sub-module 206.
  • The conversion sub-module 204 is configured to process the pulse signal and convert it into a digital signal. A sensor records a pulse signal containing vital signs such as the driver's heart rate, respiration, or blood pressure; the electrical pulse signal is first amplified, filtered, and denoised to improve the reliability of the sampled signal, and is then converted into a digital signal whose value directly reflects the heart rate, respiratory rate, or blood pressure. The conversion sub-module 204 may specifically include an amplifier circuit, a filter circuit, and an analog-to-digital converter.
  • The comparison sub-module 205 is configured to judge whether the duration for which the value of the digital signal exceeds the threshold is greater than a first preset duration. It retrieves the digital signal and compares its value with the preset threshold: if the over-threshold duration is less than the first preset duration, it moves on to compare the next data; if the duration is greater than or equal to the first preset duration, the judging sub-module 206 is triggered.
  • The judging sub-module 206 is configured to judge the driver's driving state to be a fatigue driving state when the over-threshold duration of the digital signal exceeds the first preset duration, and to trigger the command module 30.
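The comparison and judging logic above (flag fatigue only when the signal value stays outside its threshold range for longer than the first preset duration) can be sketched as follows. This is an illustrative sketch; the 40-160 heart-rate range and 60-second duration are example values, since the patent leaves the thresholds configurable:

```python
def over_threshold_duration(samples, lo, hi):
    """Given (timestamp_seconds, value) samples, return the longest time span
    during which consecutive samples stayed outside the [lo, hi] range."""
    longest = 0.0
    start = None  # timestamp of the first out-of-range sample in the current run
    for t, v in samples:
        if v < lo or v > hi:
            if start is None:
                start = t
            longest = max(longest, t - start)
        else:
            start = None  # value back in range: the run ends
    return longest

def is_fatigued(samples, lo=40, hi=160, first_preset_duration=60):
    """Judging sub-module stand-in: fatigue if the value stays outside the
    threshold range for longer than the first preset duration (seconds)."""
    return over_threshold_duration(samples, lo, hi) > first_preset_duration

# A heart rate of 180 bpm sustained for ~90 s is judged a fatigue driving state.
stream = [(t, 180) for t in range(0, 91, 5)] + [(95, 120)]
print(is_fatigued(stream))  # True
```

Samples that dip back into range reset the timer, matching the description of returning to compare the next data when the over-threshold duration is still too short.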

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Automation & Control Theory (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Traffic Control Systems (AREA)

Abstract

A fatigue driving monitoring method and a cloud server. The fatigue driving monitoring method comprises: receiving a fatigue state determination request sent by a smart terminal, the request containing the driver's current vital sign information; analyzing the current vital sign information to determine the driver's driving state; and, when the driver is determined to be currently in a fatigue driving state, generating a control command according to the fatigue driving state and sending the control command to the smart terminal. The method and cloud server can monitor the driver's fatigue state and, once fatigue driving occurs, force the vehicle controller to execute the control command, for example decelerating and stopping the vehicle, thereby fundamentally eliminating the safety hazards of fatigue driving.

Description

Fatigue driving monitoring method and cloud server
Technical Field
The present invention relates to the technical field of vehicle monitoring, and in particular to a fatigue driving monitoring method and a cloud server.
Background Art
At present, as automobiles become increasingly widespread, the safety hazards caused by driving factors are also increasing.
Traffic accidents caused by driver fatigue occur from time to time. How to detect whether a driver is drowsy or weak and promptly decelerate and stop the vehicle has become an urgent problem.
In the prior art, warnings or reminders prompt the driver to stop and rest after several hours of continuous driving. However, such prompts are merely advisory and have no binding force; if the driver chooses to ignore them, there is no way to force the driver to rest, and the safety hazard is difficult to eliminate.
Summary of the Invention
The main object of the present invention is to provide a fatigue driving monitoring method and a cloud server, aiming to fundamentally eliminate the safety hazards caused by driver fatigue.
To achieve the above object, the fatigue driving monitoring method proposed by the present invention comprises the following steps:
receiving a fatigue state determination request sent by a smart terminal, the request containing the driver's current vital sign information;
analyzing the current vital sign information to determine the driver's driving state;
when the driver is determined to be currently in a fatigue driving state, generating a control command according to the fatigue driving state and sending the control command to the smart terminal.
Preferably, generating a control command according to the fatigue driving state and sending it to the smart terminal when the driver is determined to be currently in a fatigue driving state comprises: comparing the driver's fatigue driving state with preset driving fatigue levels and issuing the control command corresponding to the matched preset driving fatigue level.
In a first preferred embodiment, the current vital sign information is a video image containing the driver's head, face, or hands, and analyzing the current vital sign information to determine the driver's driving state comprises:
detecting the video image and locating the feature images within it;
analyzing the feature images to determine their feature information;
comparing the feature information of the feature images with a preset statistical model to determine the driver's driving state;
wherein the preset statistical model is obtained by collecting a preset number of feature images as sample data and analyzing the sample data with a preset algorithm.
In a second preferred embodiment, the current vital sign information is a pulse signal containing the driver's heart rate, respiration, or blood pressure, and analyzing the current vital sign information to determine the driver's driving state comprises:
processing the pulse signal and converting it into a digital signal;
judging whether the duration for which the value of the digital signal exceeds a threshold is greater than a first preset duration;
if the over-threshold duration is greater than the first preset duration, judging the driver's driving state to be a fatigue driving state.
The present invention further proposes a fatigue driving monitoring method comprising the following steps:
collecting information containing the driver's current vital signs;
sending a fatigue state determination request containing the current vital sign information to a cloud server;
the cloud server receiving the fatigue state determination request, which contains the driver's current vital sign information;
analyzing the current vital sign information to determine the driver's driving state;
when the driver is determined to be currently in a fatigue driving state, generating a control command according to the fatigue driving state;
the smart terminal receiving the control command and sending it to the vehicle controller.
The present invention further provides a cloud server, comprising:
a remote receiving port configured to receive a fatigue state determination request sent by a smart terminal, the request containing the driver's current vital sign information;
a judging module configured to analyze the current vital sign information and determine the driver's driving state;
a command module configured to generate a control command according to the fatigue driving state when the driver is determined to be currently in a fatigue driving state, and to send the control command to the smart terminal.
Preferably, the command module is further configured to: when the driver is determined to be currently in a fatigue driving state, compare the driver's fatigue driving state with preset driving fatigue levels and issue the control command corresponding to the preset driving fatigue level.
In a first preferred embodiment, the current vital sign information is a video image containing the driver's head, face, or hands, and accordingly the judging module further comprises:
a positioning sub-module configured to detect the video image and locate the feature images within it;
an analysis sub-module configured to analyze the feature images and determine their feature information;
a determination sub-module configured to compare the feature information with the preset statistical model and determine the driver's driving state.
In a second preferred embodiment, the current vital sign information is a pulse signal containing the driver's heart rate, respiration, or blood pressure, and accordingly the judging module comprises:
a conversion sub-module configured to process the pulse signal and convert it into a digital signal;
a comparison sub-module configured to judge whether the duration for which the value of the digital signal exceeds a threshold is greater than a first preset duration;
a judging sub-module configured to judge the driver's driving state to be a fatigue driving state when that duration is greater than the first preset duration.
In the technical solution of the present invention, information containing the driver's current vital signs is collected on the smart on-board unit side and sent to the cloud server. After receiving the information, the cloud server determines whether the driver is currently fatigued, generates a control command corresponding to the fatigue level, and transmits it to the smart on-board unit; the smart terminal forwards the control command to the vehicle controller, which executes it. The driver's fatigue state can thus be monitored, and once fatigue driving occurs, the vehicle controller is forced to execute the control command, for example decelerating and stopping the vehicle, thereby fundamentally eliminating the safety hazards of fatigue driving.
Brief Description of the Drawings
To explain the embodiments of the present invention or the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from the structures shown without creative effort.
Fig. 1 is a block diagram of an embodiment of the cloud server of the present invention;
Fig. 2 is a block diagram of another embodiment of the cloud server of the present invention;
Fig. 3 is a flowchart of an embodiment of the fatigue driving monitoring method of the present invention;
Fig. 4 is a block diagram of a further embodiment of the cloud server of the present invention.
The realization of the object, functional features, and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
It should be noted that all directional indications (such as up, down, left, right, front, rear, and so on) in the embodiments of the present invention are only used to explain the relative positional relationships and movements between components in a particular posture (as shown in the drawings); if the particular posture changes, the directional indications change accordingly.
In addition, descriptions involving "first", "second", and the like in the present invention are for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature qualified with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "plurality" means at least two, for example two or three, unless otherwise explicitly and specifically defined.
In the present invention, unless otherwise explicitly specified and defined, the terms "connected", "fixed", and the like shall be understood broadly. For example, "fixed" may be a fixed connection, a detachable connection, or an integral whole; it may be a mechanical or an electrical connection; it may be a direct connection or an indirect connection through an intermediary, or the internal communication of two elements or the interaction between two elements, unless otherwise explicitly defined. For those of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the specific circumstances.
In addition, the technical solutions of the various embodiments of the present invention may be combined with one another, provided that the combination can be realized by those of ordinary skill in the art; when a combination of technical solutions is contradictory or cannot be realized, it shall be deemed not to exist and to fall outside the protection scope claimed by the present invention.
The fatigue driving monitoring system of an embodiment of the present invention may be as shown in Figs. 1 and 2. The architecture may involve:
an information collection element for continuously generating information data containing the driver's vital signs; a camera or wearable sensors such as a heart-rate sensor, blood-pressure sensor, or respiration sensor may be used. The camera may be mounted on the vehicle facing the driver to scan the driver's posture, and the sensors may be embedded in the seat belt; both may communicate wirelessly with the smart terminal via a network or Bluetooth;
a smart terminal, which may be a smart on-board unit or a mobile terminal. The on-board unit may be installed in the vehicle, communicating with the vehicle's media devices and controllers via the CAN bus and with the cloud server via a wireless network; specifically it may be an on-board computer system using wireless communication technology such as a T-BOX (Telematics BOX) or an on-board diagnostic system OBD (On-Board Diagnostic). The mobile terminal may specifically be a mobile phone, tablet, smart band, or smart watch, communicating wirelessly with the vehicle's media devices and controllers and with the cloud server via a wireless network;
a cloud server, which establishes communication with the smart terminal through a remote interface and the network and is configured with a memory and a processor;
the memory stores executable instructions for the processor; the processor obtains video images or pulse data containing the driver's vital signs, detects the video images and locates the feature images within them, analyzes the feature images to determine their feature information, determines the driver's driving state from the feature information, and processes and converts the pulse data, determining the driver's driving state from the converted data.
Based on the above system framework, the fatigue driving monitoring method of the embodiments of the present invention is described in detail below.
Referring to Fig. 3, the fatigue driving monitoring method provided by an example of the present invention comprises:
Step S10: receiving a fatigue state determination request sent by a smart terminal, the request containing the driver's current vital sign information;
The cloud server receives the fatigue state request through its remote receiving interface; the request includes information data on the driver's current vital signs, and the request and data are stored in the memory. The driver's current vital sign information may specifically be a video image containing the driver's bodily features, in particular the driver's facial, head, or hand features, or pulse signals carrying vital signs such as heart rate, blood pressure, and respiratory rate.
Step S20: analyzing the current vital sign information to determine the driver's driving state;
The above data are retrieved and analyzed accordingly to finally determine the driver's driving state: a fatigue driving state or a non-fatigue driving state.
Step S30: when the driver is determined to be currently in a fatigue driving state, generating a control command according to the fatigue driving state and sending the control command to the smart terminal.
Once the driver is judged to be in a fatigue driving state, control commands of different levels are generated according to the level of the fatigue driving state; the control command here may be an ordinary deceleration signal or an emergency braking signal. The cloud server may then send the control command to the smart terminal via the wireless network.
In the technical solution of the present invention, information containing the driver's current vital signs is collected on the smart on-board unit side and sent to the cloud server. After receiving the information, the cloud server determines whether the driver is currently fatigued, generates a control command corresponding to the fatigue level, and transmits it to the smart on-board unit; the smart terminal forwards the control command to the vehicle controller, which executes it. The driver's fatigue state can thus be monitored, and once fatigue driving occurs, the vehicle controller is forced to execute the control command, for example decelerating and stopping the vehicle, thereby fundamentally eliminating the safety hazards of fatigue driving.
Further, different current vital sign information corresponds to different ways of generating the control command, as described in detail below.
In one approach, the fatigue driving monitoring method comprises:
Step 101: receiving a fatigue state determination request from the smart terminal, the request containing the driver's current vital sign information; the current vital sign information is a video image containing the driver's head, face, or hands;
A video containing the driver's vital signs is recorded by a camera. The video should at least include the driver's face, so that by analyzing the facial footage it can be determined whether the driver is in a fatigued state, such as dozing off or being physically unwell. The video may also include the driver's hands, for example whether both hands are on the steering wheel; if both hands leave the wheel, the driver may likewise be judged fatigued, whereupon a deceleration or emergency braking command is issued to the controller to prevent the safety hazards caused by driver fatigue.
Step 201: analyzing the above current vital sign information to determine the driver's driving state;
To automatically identify the driver appearing in the video, frame images are extracted from the video, the frames containing face images are further extracted, and face recognition is performed on these frames with a preset algorithm to identify the driver information in the video. Specifically, this embodiment of the present disclosure determines the driver information in the video based on a pre-obtained recognition model together with face detection and tracking technology, and then presents the recognized driver information to the user viewing the video;
Step 201a: detecting the video image and locating the feature images within it;
If the method is applied in a cloud server, the video captured by the camera may be sent to the cloud server via the wireless network, and the positioning sub-module of the cloud server detects and analyzes the received video images. If the method is applied in the driver's terminal (client device), a client application of the fatigue driving detection method may be installed in the camera, or the driver's terminal device, such as a mobile phone, may be connected to the camera by wire or wirelessly, with the application software in the phone performing the subsequent analysis. The analysis first detects the video image: a video consists of successive frame images, so detecting the video is the process of detecting each frame; each frame is scanned, the feature images appearing in the frame are located, and the position coordinates of each feature image within the frame are marked to determine its position information.
Step 201b: analyzing the feature images to determine their feature information;
Depending on the criteria used to judge the driver's driving state, feature images may be of several categories, for example head images, eye images, mouth images, or hand images containing the steering wheel; depending on the camera resolution, eye images may be further subdivided into iris images, pupil images, and the like. The analysis sub-module analyzes each feature image according to its respective attributes and determines the feature information it contains. For example, if the feature image is an eye image, the feature information may include the opening value between the upper and lower eyelids, pupil-opening characteristic parameters, and eyeball contour dimensions.
Step 201c: comparing the feature information of the feature images with a preset statistical model to determine the driver's driving state; if the driver is in a fatigue driving state, proceed to step 301; if in a non-fatigue driving state, return to step 101 to monitor the information data at the next moment;
The preset statistical model is obtained by collecting a preset number of feature images as sample data and analyzing them with a preset algorithm; for the signal interaction flow, refer to Fig. 1.
In this step, the driver's driving state may be judged from the feature information alone, or by comparing the feature information with reference information. Moreover, multiple driving states may be defined as required, such as an alert state (non-fatigue driving), a fatigued state (fatigue driving), and a semi-fatigued state (fatigue driving);
The preset statistical model may include a head-movement-range threshold, and the feature image may include a head image. Specifically, the determination sub-module judges whether the trajectory of the located coordinates exceeds the head-movement-range threshold; if the duration over the threshold is greater than a first preset duration, the driver's driving state is judged to be fatigue driving. The head-movement-range threshold may be obtained by collecting a large amount of the driver's driving video and then analyzing and modeling it to fit the individual driver's habits. For example, some drivers like listening to music while driving, so their heads sway with the tune, while others drive motionless and focused; the head-movement thresholds determined for these two types of drivers will differ. When the head movement continuously exceeds the model's range threshold for a period, for example a first preset duration of 4 seconds, the driver's head can be considered lowered for more than 4 seconds, which is very likely because the driver has dozed off, and the state is judged to be fatigue driving.
Alternatively, the preset statistical model may include an eye-opening threshold, and the feature image may include an eye image. Specifically, the determination sub-module judges whether the eye-opening characteristic parameter is smaller than the preset eye-opening threshold; if it remains below the threshold for longer than a second preset duration, the driver's driving state is judged to be fatigue driving. For example, if the driver half-closes the eyes due to fatigue, the detected eye opening becomes smaller than the preset eye-opening threshold; if it stays below the threshold for a period, for example a second preset duration of 5 seconds, the driver can be judged to have half-closed the eyes for 5 seconds and to have entered a fatigue driving state.
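The eye-opening check above, in which the opening must stay below a threshold continuously for longer than the second preset duration, can be sketched over per-frame measurements. This is a minimal illustration; the 0.5 threshold, 5-second duration, and 25 fps frame rate are example assumptions, not values fixed by the patent:

```python
def eyes_closed_too_long(openings, threshold=0.5, max_seconds=5.0, fps=25):
    """openings: per-frame eye-opening values in [0, 1]. Return True when the
    opening stays below the threshold for more than max_seconds of
    consecutive frames (the second preset duration)."""
    max_frames = int(max_seconds * fps)
    run = 0  # consecutive below-threshold frames so far
    for opening in openings:
        run = run + 1 if opening < threshold else 0
        if run > max_frames:
            return True  # judged a fatigue driving state
    return False

# 2 s of open eyes followed by 6 s of half-closed eyes at 25 fps.
frames = [0.9] * 50 + [0.3] * 150
print(eyes_closed_too_long(frames))  # True
```

A single frame back above the threshold resets the run, so brief blinks do not trigger the fatigue judgment.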
In summary, this embodiment divides the video image at a preset step size, extracts the frames to be detected, and analyzes only those frames, greatly reducing the amount of video data to analyze and improving the efficiency of determining the driving state; it also compares feature information in feature images such as head and eye images with the preset statistical model, thereby quickly and accurately determining the driving state the feature information represents.
Step 301: if the driver is determined to be currently in a fatigue driving state, generating a control command according to the fatigue driving state and sending the control command to the smart terminal.
Preferably, step 301 further includes: when the driver is determined to be currently in a fatigue driving state, comparing the driver's fatigue driving state with preset driving fatigue levels and issuing the control command corresponding to the preset driving fatigue level.
The driver is judged from the current vital sign information to be in a fatigue driving state; the driving state is compared with preset driving fatigue levels, and the control command corresponding to the matched level is issued. This may be implemented by setting multiple comparison thresholds on the preset statistical model: when the feature parameters of the feature image fall into different threshold ranges, different driving states are determined. For example, the eye-opening thresholds may be 80% and 50%, with the preset driving fatigue levels correspondingly mapped to no deceleration command, a deceleration command, and an emergency braking command. If the driver's eye opening is above 80%, the driver is considered alert and no deceleration command is sent. If the eye opening hovers between 80% and 50%, the driver is considered half-alert and half-tired, and a warning may be sent reminding the driver to consider stopping to rest. If the eye opening falls below 50%, a deceleration command is issued and the driver is reminded to refresh or advised to brake and rest. Further, if the driver's eye opening is detected to be 0, that is, the eyes are closed, an emergency braking command may be sent to brake the vehicle and prevent an accident caused by fatigue driving.
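The graded mapping in this example, with eye-opening thresholds of 80% and 50% mapped to no command, a warning, deceleration, and emergency braking, can be written as a small lookup. The command names here are illustrative, not identifiers from the patent:

```python
def command_for_eye_opening(opening):
    """Map an eye-opening ratio in [0, 1] to the example's graded commands."""
    if opening > 0.8:
        return None                # alert: no command sent
    if opening > 0.5:
        return "warning"           # semi-alert: suggest stopping to rest
    if opening > 0.0:
        return "decelerate"        # fatigued: slow the vehicle, prompt a rest
    return "emergency_brake"       # eyes closed: brake the vehicle

for opening in (0.9, 0.6, 0.2, 0.0):
    print(opening, command_for_eye_opening(opening))
```

In a full system the chosen command would then be sent to the smart terminal and forwarded to the vehicle controller, as the surrounding text describes.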
In another approach, the fatigue driving monitoring method comprises:
Step 102: receiving from the smart terminal information containing the driver's current vital signs; the current vital sign information is a pulse signal containing the driver's heart rate, respiration, or blood pressure;
The difference from the above embodiment is that the current vital sign information is a pulse signal containing the driver's heart rate, respiration, or blood pressure. Pulse signals containing such vital signs are recorded by sensors, and by analyzing the driver's heart rate, respiration, or blood pressure it is determined whether the driver is fatigued or unwell. For example, a heart rate above 160 beats/min or below 40 beats/min suggests physical discomfort such as palpitations or chest tightness caused by heart disease. The driver's respiration may also be used: a respiratory rate above 24 breaths/min or below 12 breaths/min likewise indicates that the driver is fatigued or unwell. The driver's blood pressure may also be used: a systolic pressure above 150 mmHg with a diastolic pressure above 120 mmHg, or a systolic pressure below 80 mmHg with a diastolic pressure below 50 mmHg, likewise indicates that the driver is fatigued or unwell. Emergency braking is then issued to the controller to prevent safety hazards caused by the driver's physical condition.
Step 202: analyzing the above current vital sign information to determine the driver's driving state;
Step 202R: processing the pulse signal and converting it into a digital signal;
The processing first amplifies, filters, and denoises the pulsed electrical signal to improve the reliability of the sampled signal, then converts the electrical signal into a digital signal whose value directly reflects the heart rate, respiratory rate, or blood pressure; the conversion sub-module may specifically include an amplifier circuit, a filter circuit, and an analog-to-digital converter.
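The conditioning chain described above (amplification, filtering/denoising, then analog-to-digital conversion) can be imitated in software to show the data flow. The gain, the moving-average window standing in for the filter circuit, and the 8-bit full scale are illustrative choices, not specified by the patent:

```python
def condition_pulse(signal, gain=2.0, window=3, full_scale=255):
    """Amplify, smooth with a moving average (a stand-in for the filter
    circuit), then quantize to integer codes like an analog-to-digital
    converter with the given full scale."""
    amplified = [gain * v for v in signal]
    # Moving-average filter to suppress sample-to-sample noise.
    smoothed = [
        sum(amplified[max(0, i - window + 1): i + 1])
        / len(amplified[max(0, i - window + 1): i + 1])
        for i in range(len(amplified))
    ]
    # Quantize, clipped to the converter's range [0, full_scale].
    return [min(full_scale, max(0, round(v))) for v in smoothed]

print(condition_pulse([60.0, 61.0, 120.0, 60.5]))  # → [120, 121, 161, 161]
```

The quantized values are what the comparison sub-module would then check against the preset thresholds.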
Step 202S: judging whether the value of the digital signal exceeds the threshold, and whether the duration over the threshold is greater than a first preset duration;
The comparison sub-module retrieves the digital signal and compares its value with the preset threshold. For example, with a heart-rate threshold range of 40 to 160, a value of 180 exceeds the threshold and timing begins. If subsequent samples continue to exceed the threshold until the N-th group of data first falls back below it, the time interval between the sampling time of the (N-1)-th group and the sampling time of the first over-threshold sample (the duration for which the value of the digital signal exceeds the threshold) is computed. If this interval is less than the first preset duration, the comparison returns to the next data; if it is greater than or equal to the first preset duration, proceed to step 202T;
Step 202T: if the duration over the threshold is greater than the first preset duration, judging the driver's driving state to be a fatigue driving state;
If the duration over the threshold exceeds the first preset duration, for example a heart rate of 180 beats/min sustained for 1 minute, the driver is considered to be in poor physical condition and in a fatigue driving state;
Step 302: if the driver is in a fatigue driving state, generating a control command according to the fatigue driving state and sending the control command to the smart terminal. For the signal interaction flow, refer to Fig. 2.
Once the driver is judged to be in a fatigue driving state, control commands of different levels are generated according to the level of the fatigue state; the control command may be an ordinary deceleration signal or an emergency braking signal, and the cloud server may send it to the smart terminal via the wireless network.
Preferably, when the driver is determined to be currently in a fatigue driving state, the driver's fatigue driving state is compared with preset driving fatigue levels and the control command corresponding to the preset driving fatigue level is issued.
The driver is determined from the current vital sign information to be in a fatigue driving state; the driving state is compared with the preset driving fatigue levels and the corresponding control command is issued. This may be implemented by setting multiple comparison thresholds on the preset statistical model: when the feature parameters fall into different threshold ranges, different driving states are determined. For example, blood-pressure thresholds may be set at 80%, 100%, and 120% of a 150 mmHg systolic reference, with the preset driving fatigue levels correspondingly mapped to no deceleration command, a deceleration command, and an emergency braking command. If the driver's systolic pressure stays at or below 80% of 150 mmHg, the driver is considered alert and no deceleration command is sent. If the systolic pressure hovers between 80% and 100% of 150 mmHg, the driver is considered half-alert and half-tired, and a warning may be sent reminding the driver to consider stopping to rest. If the systolic pressure rises above 100% of 150 mmHg, a deceleration command is issued and the driver is reminded to refresh or advised to brake and rest. Further, if the driver's systolic pressure is detected above 120% of 150 mmHg, an emergency braking command may be sent to brake the vehicle and prevent an accident caused by fatigue driving.
The present invention further provides a cloud server. Referring to Fig. 4, the cloud server includes a remote receiving port 10, a judging module 20, and a command module 30.
The remote receiving port 10 is configured to receive a fatigue state determination request sent by the smart terminal, the request containing the driver's current vital sign information. The cloud server receives the request through the remote receiving interface 10 via the wireless network; the request includes information data on the driver's current vital signs, and the request and data are stored in the memory.
The judging module 20 is configured to analyze the current vital sign information and determine the driver's driving state: the above data are retrieved and analyzed accordingly to finally determine the driving state, which here is either a fatigue driving state or a non-fatigue driving state.
The command module 30 is configured to generate a control command according to the fatigue driving state when the driver is determined to be currently in a fatigue driving state, and to send the control command to the smart terminal.
Once the driver is judged to be in a fatigue driving state, control commands of different levels are generated according to the level of the fatigue driving state; the control command may be an ordinary deceleration signal or an emergency braking signal, and the cloud server may send it to the smart terminal via the wireless network.
Further, when the driver is determined to be currently in a fatigue driving state, the fatigue state may be graded into levels so as to generate control commands corresponding to the levels. In this embodiment, the command module 30 is specifically configured to compare the driver's fatigue driving state with the preset driving fatigue levels and issue the control command corresponding to the preset driving fatigue level. The fatigue levels and their corresponding control commands may be set according to actual needs. This may be implemented by setting multiple comparison thresholds on the preset statistical model: when the feature parameters of the feature image fall into different threshold ranges, different driving states are determined.
In the technical solution of the present invention, information containing the driver's current vital signs is collected on the smart on-board unit side and sent to the cloud server. After receiving the information, the cloud server determines whether the driver is currently fatigued, generates a control command corresponding to the fatigue level, and transmits it to the smart on-board unit; the smart terminal forwards the control command to the vehicle controller, which executes it. The driver's fatigue state can thus be monitored, and once fatigue driving occurs, the vehicle controller is forced to execute the control command, for example decelerating and stopping the vehicle, thereby fundamentally eliminating the safety hazards of fatigue driving.
Further, different current vital sign information corresponds to different specific structures of the judging module 20, as described in detail below.
In an embodiment of the present invention, as shown in Fig. 1, the current vital sign information is a video image containing the driver's head, face, or hands, and accordingly the judging module 20 further includes a positioning sub-module 201, an analysis sub-module 202, and a determination sub-module 203.
The driver's current vital sign information is a video image containing the driver's bodily features, which may specifically include facial, head, or hand features. The smart terminal periodically establishes a communication connection with the camera through its collection module, collects the video images, and then, through its sending module and a network interface, packages the vital sign information together with the fatigue state request and sends them to the cloud server; the sending module and the remote receiving port 10 may specifically be input/output (I/O) interfaces.
In this embodiment, to automatically identify the driver appearing in the video, frame images are extracted from the video, the frames containing face images are further extracted, and face recognition is performed on these frames with a preset algorithm to identify the driver information in the video. Specifically, this embodiment determines the driver information in the video based on a pre-obtained recognition model together with face detection and tracking technology, and then presents the recognized driver information to the user viewing the video.
The positioning sub-module 201 is configured to detect the video image and locate the feature images within it. The video image is first detected: a video consists of successive frame images, so detecting the video is the process of detecting each frame; each frame is scanned, the feature images appearing in it are located, and the position coordinates of each feature image within the frame are marked to determine its position information.
The analysis sub-module 202 is configured to analyze the feature images and determine their feature information. Depending on the criteria used to judge the driver's driving state, feature images may be of several categories, for example head images, eye images, mouth images, or hand images containing the steering wheel; depending on the camera resolution, eye images may be further subdivided into iris images, pupil images, and the like. The analysis sub-module analyzes each feature image according to its respective attributes and determines the feature information it contains. For example, if the feature image is an eye image, the feature information may include the opening value between the upper and lower eyelids, pupil-opening characteristic parameters, and eyeball contour dimensions.
The determination sub-module 203 is configured to compare the feature information with the preset statistical model and determine the driver's driving state. If the driver is in a fatigue driving state, the command module 30 is triggered; if in a non-fatigue driving state, control returns to the remote receiving port 10 to monitor the information data at the next moment.
In another embodiment of the cloud server of the present invention, as shown in Fig. 2, the difference from the above embodiment is that the current vital sign information is a pulse signal containing the driver's heart rate, respiration, or blood pressure, and accordingly the judging module 20 includes a conversion sub-module 204, a comparison sub-module 205, and a judging sub-module 206.
The conversion sub-module 204 is configured to process the pulse signal and convert it into a digital signal. Pulse signals containing vital signs such as the driver's heart rate, respiration, or blood pressure are recorded by sensors; the electrical pulse signal is first amplified, filtered, and denoised to improve the reliability of the sampled signal, and is then converted into a digital signal whose value directly reflects the heart rate, respiratory rate, or blood pressure. The conversion sub-module 204 may specifically include an amplifier circuit, a filter circuit, and an analog-to-digital converter.
The comparison sub-module 205 is configured to judge whether the duration for which the value of the digital signal exceeds the threshold is greater than a first preset duration. It retrieves the digital signal and compares its value with the preset threshold: if the over-threshold duration is less than the first preset duration, it returns to compare the next data; if the duration is greater than or equal to the first preset duration, the judging sub-module 206 is triggered.
The judging sub-module 206 is configured to judge the driver's driving state to be a fatigue driving state when the over-threshold duration of the digital signal exceeds the first preset duration, and to trigger the command module 30.
The above are only preferred embodiments of the present invention and do not thereby limit its patent scope; any equivalent structural transformation made using the contents of the specification and drawings of the present invention under its inventive concept, or any direct or indirect application in other related technical fields, is included within the patent protection scope of the present invention.

Claims (8)

  1. A fatigue driving monitoring method, characterized by comprising the following steps:
    receiving a fatigue state determination request sent by a smart terminal, the fatigue state determination request containing the driver's current vital sign information;
    analyzing the current vital sign information to determine the driver's driving state;
    when the driver is determined to be currently in a fatigue driving state, generating a control command according to the fatigue driving state and sending the control command to the smart terminal.
  2. The fatigue driving monitoring method of claim 1, characterized in that generating a control command according to the fatigue driving state and sending the control command to the smart terminal when the driver is determined to be currently in a fatigue driving state comprises:
    when the driver is currently in a fatigue driving state, comparing the driver's fatigue driving state with preset driving fatigue levels and issuing the control command corresponding to the driving fatigue level matching the current driving state.
  3. The fatigue driving monitoring method of claim 2, characterized in that the current vital sign information is a video image containing the driver's head, face, or hands, and analyzing the current vital sign information to determine the driver's driving state comprises:
    detecting the video image and locating the feature images within the video image;
    analyzing the feature images to determine their feature information;
    comparing the feature information of the feature images with a preset statistical model to determine the driver's driving state; wherein
    a preset number of feature images are collected as sample data and analyzed with a preset algorithm to obtain the preset statistical model.
  4. The fatigue driving monitoring method of claim 2, characterized in that the current vital sign information is a pulse signal containing the driver's heart rate, respiration, or blood pressure, and analyzing the current vital sign information to determine the driver's driving state comprises:
    processing the pulse signal and converting it into a digital signal;
    judging whether the duration for which the value of the digital signal exceeds a threshold is greater than a first preset duration;
    if the over-threshold duration is greater than the first preset duration, judging the driver's driving state to be a fatigue driving state.
  5. A cloud server, characterized by comprising:
    a remote receiving port configured to receive a fatigue state determination request sent by a smart terminal, the fatigue state determination request containing the driver's current vital sign information;
    a judging module configured to analyze the current vital sign information and determine the driver's driving state;
    a command module configured to generate a control command according to the fatigue driving state when the driver is determined to be currently in a fatigue driving state, and to send the control command to the smart terminal.
  6. The cloud server of claim 5, characterized in that the command module is further configured to:
    when the driver is determined to be currently in a fatigue driving state, compare the driver's fatigue driving state with preset driving fatigue levels and issue the control command corresponding to the preset driving fatigue level.
  7. The cloud server of claim 6, characterized in that the current vital sign information is a video image containing the driver's head, face, or hands, and accordingly the judging module further comprises:
    a positioning sub-module configured to detect the video image and locate the feature images within it;
    an analysis sub-module configured to analyze the feature images and determine their feature information;
    a determination sub-module configured to compare the feature information with the preset statistical model and determine the driver's driving state.
  8. The cloud server of claim 6, characterized in that the current vital sign information is a pulse signal containing the driver's heart rate, respiration, or blood pressure, and accordingly the judging module comprises:
    a conversion sub-module configured to process the pulse signal and convert it into a digital signal;
    a comparison sub-module configured to judge whether the duration for which the value of the digital signal exceeds a threshold is greater than a first preset duration;
    a judging sub-module configured to judge the driver's driving state to be a fatigue driving state when the over-threshold duration of the digital signal is greater than the first preset duration.
PCT/CN2016/105631 2016-08-12 2016-11-14 疲劳驾驶监控方法及云端服务器 WO2018028068A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610666315.9 2016-08-12
CN201610666315.9A CN106218405A (zh) 2016-08-12 2016-08-12 疲劳驾驶监控方法及云端服务器

Publications (1)

Publication Number Publication Date
WO2018028068A1 true WO2018028068A1 (zh) 2018-02-15

Family

ID=57548702

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/105631 WO2018028068A1 (zh) 2016-08-12 2016-11-14 疲劳驾驶监控方法及云端服务器

Country Status (2)

Country Link
CN (1) CN106218405A (zh)
WO (1) WO2018028068A1 (zh)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108438000A (zh) * 2018-05-15 2018-08-24 北京兴科迪电子技术研究院 驾驶员突发疾病察觉装置、方法和系统
CN109305039A (zh) * 2018-10-29 2019-02-05 成都云科新能汽车技术有限公司 一种安全驾驶监测系统及方法
CN109318710A (zh) * 2018-09-07 2019-02-12 深圳腾视科技有限公司 一种带自动限制汽车行驶速度的驾驶员状态监测仪
CN109472253A (zh) * 2018-12-28 2019-03-15 华人运通控股有限公司 行车安全智能提醒方法、装置、智能方向盘和智能手环
CN111563456A (zh) * 2020-05-07 2020-08-21 安徽江淮汽车集团股份有限公司 驾乘行为预警方法及系统
CN112528792A (zh) * 2020-12-03 2021-03-19 深圳地平线机器人科技有限公司 疲劳状态检测方法、装置、介质及电子设备
CN112733683A (zh) * 2020-12-31 2021-04-30 深圳市元征科技股份有限公司 司机保健方法、车载设备及计算机可读存储介质
CN112829767A (zh) * 2021-02-22 2021-05-25 清华大学苏州汽车研究院(相城) 一种基于监测驾驶员误操作的自动驾驶控制系统及方法
CN113312958A (zh) * 2021-03-22 2021-08-27 广州宸祺出行科技有限公司 一种基于司机状态的派单优先度调整方法及装置
CN113771859A (zh) * 2021-08-31 2021-12-10 智新控制系统有限公司 智能行车干预方法、装置、设备及计算机可读存储介质
CN114148337A (zh) * 2021-12-31 2022-03-08 阿维塔科技(重庆)有限公司 驾驶员状态信息提示方法、装置及计算机可读存储介质
CN114212030A (zh) * 2021-12-27 2022-03-22 深圳市有方科技股份有限公司 渣土车监控管理系统
CN114520823A (zh) * 2020-11-03 2022-05-20 北京地平线机器人技术研发有限公司 基于疲劳驾驶状态的通信建立方法、装置及系统
CN115035687A (zh) * 2022-06-07 2022-09-09 公安部第三研究所 一种基于座椅承压分析的驾驶人疲劳状态监测系统
WO2022222295A1 (zh) * 2021-04-19 2022-10-27 博泰车联网科技(上海)股份有限公司 车载视频监控的方法、系统、存储介质和车载终端
CN116176600A (zh) * 2023-04-25 2023-05-30 合肥工业大学 一种智能健康座舱的控制方法
CN116439710A (zh) * 2023-04-11 2023-07-18 中国人民解放军海军特色医学中心 一种基于生理信号的舰船驾驶员疲劳检测系统及方法
CN115798247B (zh) * 2022-10-10 2023-09-22 深圳市昊岳科技有限公司 一种基于大数据的智慧公交云平台

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106710309A (zh) * 2017-02-23 2017-05-24 国网四川省电力公司检修公司 一种gps车辆管理系统
TWI634454B (zh) * 2017-05-19 2018-09-01 致伸科技股份有限公司 人體感知檢測系統及其方法
CN107169481A (zh) * 2017-06-28 2017-09-15 上海与德科技有限公司 一种提醒方法及装置
CN107307855A (zh) * 2017-08-02 2017-11-03 沈阳东康智能科技有限公司 基于可穿戴设备的车载人体健康监控系统及方法
WO2019028798A1 (zh) * 2017-08-10 2019-02-14 北京市商汤科技开发有限公司 驾驶状态监控方法、装置和电子设备
CN109421732B (zh) * 2017-08-16 2021-08-31 深圳如一探索科技有限公司 设备控制方法及装置
CN109398084A (zh) * 2017-08-18 2019-03-01 江苏斯诺物联科技有限公司 一种基于驾驶员生理参数的疲劳驾驶检测系统
CN109598174A (zh) * 2017-09-29 2019-04-09 厦门歌乐电子企业有限公司 驾驶员状态的检测方法、及其装置和系统
CN107571735A (zh) * 2017-10-13 2018-01-12 苏州小黄人汽车科技有限公司 一种机动车驾驶员状态监测系统及监测方法
US10710590B2 (en) 2017-12-19 2020-07-14 PlusAI Corp Method and system for risk based driving mode switching in hybrid driving
US10406978B2 (en) 2017-12-19 2019-09-10 PlusAI Corp Method and system for adapting augmented switching warning
US10620627B2 (en) 2017-12-19 2020-04-14 PlusAI Corp Method and system for risk control in switching driving mode
CN108454620B (zh) * 2018-04-10 2023-06-30 西华大学 一种汽车预防撞与自主救援系统
CN110493296A (zh) * 2018-05-15 2019-11-22 上海博泰悦臻网络技术服务有限公司 疲劳驾驶提醒方法及云端服务器
CN108764179A (zh) * 2018-05-31 2018-11-06 惠州市德赛西威汽车电子股份有限公司 一种基于人脸识别技术的共享汽车解锁方法和系统
US10915769B2 (en) 2018-06-04 2021-02-09 Shanghai Sensetime Intelligent Technology Co., Ltd Driving management methods and systems, vehicle-mounted intelligent systems, electronic devices, and medium
US10970571B2 (en) 2018-06-04 2021-04-06 Shanghai Sensetime Intelligent Technology Co., Ltd. Vehicle control method and system, vehicle-mounted intelligent system, electronic device, and medium
CN108819900A (zh) * 2018-06-04 2018-11-16 上海商汤智能科技有限公司 车辆控制方法和系统、车载智能系统、电子设备、介质
CN108860150B (zh) * 2018-07-03 2021-05-04 百度在线网络技术(北京)有限公司 汽车制动方法、装置、设备及计算机可读存储介质
CN108974014A (zh) * 2018-07-18 2018-12-11 武汉理工大学 云端服务器对智能汽车安全驾驶的监控系统及方法
CN109784188A (zh) * 2018-12-18 2019-05-21 深圳壹账通智能科技有限公司 驾驶疲劳度评价方法、装置、计算机设备和存储介质
CN109849660A (zh) * 2019-01-29 2019-06-07 合肥革绿信息科技有限公司 一种车辆安全控制系统
CN109649357B (zh) * 2019-01-30 2020-04-21 大连交通大学 一种基于虹膜识别控制的刹车系统
CN110096957B (zh) * 2019-03-27 2023-08-08 苏州清研微视电子科技有限公司 基于面部识别和行为识别融合的疲劳驾驶监测方法和系统
CN110053555A (zh) * 2019-04-15 2019-07-26 深圳市英泰斯达智能技术有限公司 一种车辆安全驾驶辅助系统及汽车
CN110217237A (zh) * 2019-04-16 2019-09-10 安徽酷哇机器人有限公司 车辆远程控制系统和车辆远程控制方法
CN110084176A (zh) * 2019-04-23 2019-08-02 努比亚技术有限公司 一种疲劳驾驶提醒的方法、装置、终端设备及存储介质
CN110147738B (zh) * 2019-04-29 2021-01-22 中国人民解放军海军特色医学中心 一种驾驶员疲劳监测预警方法及系统
CN110341639A (zh) * 2019-06-18 2019-10-18 平安科技(深圳)有限公司 一种汽车安全预警的方法、装置、设备及存储介质
CN111191545B (zh) * 2019-12-20 2024-01-12 河南嘉晨智能控制股份有限公司 一种驾驶员行为实时监控分析系统及方法
CN111158350A (zh) * 2020-01-16 2020-05-15 斯润天朗(北京)科技有限公司 基于诊断的质量监控平台及系统
CN112124320A (zh) * 2020-09-10 2020-12-25 恒大新能源汽车投资控股集团有限公司 车辆控制方法、系统及车辆
CN113212369A (zh) * 2021-05-11 2021-08-06 江苏爱玛车业科技有限公司 电动汽车控制方法、系统及电动汽车
CN114523984A (zh) * 2021-12-28 2022-05-24 东软睿驰汽车技术(沈阳)有限公司 提示方法、提示装置、提示系统及计算机可读存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104299451A (zh) * 2014-10-28 2015-01-21 天津工业大学 防止高速路连环撞车的智能预警系统
CN105225421A (zh) * 2015-10-10 2016-01-06 英华达(南京)科技有限公司 疲劳驾驶控制系统及方法
CN105488957A (zh) * 2015-12-15 2016-04-13 小米科技有限责任公司 疲劳驾驶检测方法及装置
CN105678959A (zh) * 2016-02-25 2016-06-15 重庆邮电大学 一种疲劳驾驶监控预警方法及系统
CN105799509A (zh) * 2014-12-30 2016-07-27 北京奇虎科技有限公司 一种防疲劳驾驶系统及方法

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4946447B2 (ja) * 2007-01-12 2012-06-06 横浜ゴム株式会社 疲労評価方法および疲労評価装置。
DE102010004089A1 (de) * 2010-01-12 2010-09-30 Daimler Ag Verfahren und Vorrichtung zum Betrieb eines Fahrzeuges
CN105069977A (zh) * 2015-07-28 2015-11-18 宋婉毓 一种预防疲劳驾驶的提醒装置及方法


Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108438000A (zh) * 2018-05-15 2018-08-24 北京兴科迪电子技术研究院 驾驶员突发疾病察觉装置、方法和系统
CN109318710A (zh) * 2018-09-07 2019-02-12 深圳腾视科技有限公司 一种带自动限制汽车行驶速度的驾驶员状态监测仪
CN109305039A (zh) * 2018-10-29 2019-02-05 成都云科新能汽车技术有限公司 一种安全驾驶监测系统及方法
CN109472253A (zh) * 2018-12-28 2019-03-15 华人运通控股有限公司 行车安全智能提醒方法、装置、智能方向盘和智能手环
CN109472253B (zh) * 2018-12-28 2024-04-16 华人运通(上海)云计算科技有限公司 行车安全智能提醒方法、装置、智能方向盘和智能手环
CN111563456A (zh) * 2020-05-07 2020-08-21 安徽江淮汽车集团股份有限公司 驾乘行为预警方法及系统
CN114520823A (zh) * 2020-11-03 2022-05-20 北京地平线机器人技术研发有限公司 基于疲劳驾驶状态的通信建立方法、装置及系统
CN114520823B (zh) * 2020-11-03 2024-02-20 北京地平线机器人技术研发有限公司 基于疲劳驾驶状态的通信建立方法、装置及系统
CN112528792A (zh) * 2020-12-03 2021-03-19 深圳地平线机器人科技有限公司 疲劳状态检测方法、装置、介质及电子设备
CN112528792B (zh) * 2020-12-03 2024-05-31 深圳地平线机器人科技有限公司 疲劳状态检测方法、装置、介质及电子设备
CN112733683A (zh) * 2020-12-31 2021-04-30 深圳市元征科技股份有限公司 司机保健方法、车载设备及计算机可读存储介质
CN112829767A (zh) * 2021-02-22 2021-05-25 清华大学苏州汽车研究院(相城) 一种基于监测驾驶员误操作的自动驾驶控制系统及方法
CN112829767B (zh) * 2021-02-22 2024-05-17 清华大学苏州汽车研究院(相城) 一种基于监测驾驶员误操作的自动驾驶控制系统及方法
CN113312958B (zh) * 2021-03-22 2024-04-12 广州宸祺出行科技有限公司 一种基于司机状态的派单优先度调整方法及装置
CN113312958A (zh) * 2021-03-22 2021-08-27 广州宸祺出行科技有限公司 一种基于司机状态的派单优先度调整方法及装置
WO2022222295A1 (zh) * 2021-04-19 2022-10-27 博泰车联网科技(上海)股份有限公司 车载视频监控的方法、系统、存储介质和车载终端
CN113771859B (zh) * 2021-08-31 2024-01-26 智新控制系统有限公司 智能行车干预方法、装置、设备及计算机可读存储介质
CN113771859A (zh) * 2021-08-31 2021-12-10 智新控制系统有限公司 智能行车干预方法、装置、设备及计算机可读存储介质
CN114212030A (zh) * 2021-12-27 2022-03-22 深圳市有方科技股份有限公司 渣土车监控管理系统
CN114148337A (zh) * 2021-12-31 2022-03-08 阿维塔科技(重庆)有限公司 驾驶员状态信息提示方法、装置及计算机可读存储介质
CN115035687A (zh) * 2022-06-07 2022-09-09 公安部第三研究所 一种基于座椅承压分析的驾驶人疲劳状态监测系统
CN115798247B (zh) * 2022-10-10 2023-09-22 深圳市昊岳科技有限公司 一种基于大数据的智慧公交云平台
CN116439710B (zh) * 2023-04-11 2023-10-20 中国人民解放军海军特色医学中心 一种基于生理信号的舰船驾驶员疲劳检测系统及方法
CN116439710A (zh) * 2023-04-11 2023-07-18 中国人民解放军海军特色医学中心 一种基于生理信号的舰船驾驶员疲劳检测系统及方法
CN116176600B (zh) * 2023-04-25 2023-09-29 合肥工业大学 一种智能健康座舱的控制方法
CN116176600A (zh) * 2023-04-25 2023-05-30 合肥工业大学 一种智能健康座舱的控制方法

Also Published As

Publication number Publication date
CN106218405A (zh) 2016-12-14

Similar Documents

Publication Publication Date Title
WO2018028068A1 (zh) 疲劳驾驶监控方法及云端服务器
US10076705B2 (en) System and method for detecting user attention
US11032457B2 (en) Bio-sensing and eye-tracking system
US20220319520A1 (en) Voice interaction wakeup electronic device, method and medium based on mouth-covering action recognition
WO2019088769A1 (ko) 개방형 api 기반 의료 정보 제공 방법 및 시스템
KR20170110505A (ko) 다이나믹 비젼 센서의 이미지 표현 및 처리 방법과 장치
JP2017208109A5 (zh)
US20170263264A1 (en) Method for recording sound of video-recorded object and mobile terminal
EP3377963A1 (en) Electronic device and control method thereof
US9823815B2 (en) Information processing apparatus and information processing method
KR20100129629A (ko) 움직임 검출에 의한 전자장치 동작 제어방법 및 이를 채용하는 전자장치
WO2020190060A1 (en) Electronic device for measuring blood pressure and method for measuring blood pressure
US11308733B2 (en) Gesture detection using ultrasonic clicks
US20220319467A1 (en) Shooting control method and electronic device
CN108881782B (zh) 一种视频通话方法及终端设备
CN110881105B (zh) 一种拍摄方法及电子设备
WO2019194651A1 (ko) 전자 장치에서 생체 정보 측정 방법 및 장치
WO2020045710A1 (ko) 수면측정장치, 및 이를 구비하는 수면측정 시스템
CN112954222B (zh) 一种连拍方法及电子设备
CN108737762B (zh) 一种视频通话方法及终端设备
WO2021125667A1 (ko) 전자 장치 및 전자 장치에서 광 센서 데이터의 피크 포인트를 검출하는 방법
EP4002199A1 (en) Method and device for behavior recognition based on line-of-sight estimation, electronic equipment, and storage medium
CN113721768A (zh) 一种穿戴设备的控制方法、装置、系统及可读存储介质
US10798337B2 (en) Communication device, communication system, and non-transitory computer readable medium storing program
WO2016074278A1 (zh) 用于网络医院紧急救助的信息交互系统和方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16912522

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16912522

Country of ref document: EP

Kind code of ref document: A1