WO2020237664A1 - Driving prompt method, driving state detection method and computing device - Google Patents

Driving prompt method, driving state detection method and computing device

Info

Publication number
WO2020237664A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
driving
facial feature
detected object
image information
Prior art date
Application number
PCT/CN2019/089639
Other languages
French (fr)
Chinese (zh)
Inventor
郑睿姣
叶凌峡
Original Assignee
驭势(上海)汽车科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 驭势(上海)汽车科技有限公司
Priority to CN201980000877.1A (published as CN110582437A)
Priority to PCT/CN2019/089639 (published as WO2020237664A1)
Publication of WO2020237664A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models
    • B60W40/08: Estimation or calculation of such parameters related to drivers or passengers
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08: Interaction between the driver and the control system
    • B60W50/14: Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W50/16: Tactile feedback to the driver, e.g. vibration or force feedback to the driver on the steering wheel or the accelerator pedal
    • B60W2050/0001: Details of the control system
    • B60W2050/0019: Control system elements or transfer functions
    • B60W2050/0028: Mathematical models, e.g. for simulation
    • B60W2050/0029: Mathematical model of the driver
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597: Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G06V40/172: Classification, e.g. identification

Definitions

  • This application relates to the field of automatic driving technology, and in particular to a driving reminder method, a driving state detection method, and a computing device.
  • Autonomous vehicles (Autonomous vehicles; Self-piloting automobiles), also known as unmanned vehicles and computer-driven vehicles, are intelligent vehicles that realize unmanned driving through a computer's automatic driving system.
  • Autonomous driving can be divided into several levels:
  • Level L0: the driver has complete control of the vehicle.
  • Level L1: the automatic system can sometimes assist the driver in completing certain driving tasks.
  • Level L2, assisted driving: the automatic system can complete certain driving tasks, but the driver needs to monitor the driving environment, complete the remaining tasks, and be ready to take over at any time if a problem occurs. At this level, incorrect perception or judgment by the automatic system can be corrected by the driver at any time, and most car manufacturers can provide such a system. L2 can be divided into different usage scenarios based on speed and environment, such as low-speed traffic-jam assistance on ring roads, fast driving on highways, and automatic parking with the driver in the car.
  • Level L3, semi-autonomous driving: the automatic system can not only complete certain driving tasks but also monitor the driving environment under certain conditions; however, the driver must be ready to regain control of driving when the automatic system requests it. Therefore, at this level, the driver still cannot sleep or rest deeply.
  • The difference between L3 and L2 is that at L3 the vehicle is responsible for monitoring the surroundings, while the human driver only needs to remain attentive for emergencies.
  • Level L4, highly automated driving: the automated system can complete driving tasks and monitor the driving environment in certain environments and under specific conditions. Current L4 deployments are mostly urban, such as fully automated valet parking, and can also be combined directly with taxi services. At this stage, within the operating scope of autonomous driving, all driving-related tasks are independent of the driver and passengers; responsibility for perceiving the environment lies with the autonomous driving system, and different design and deployment approaches exist here.
  • Level L5, fully automated driving: the automated system can complete all driving tasks under all conditions.
  • the current automatic driving system has developed corresponding driver state detection methods to monitor the driver's state to ensure that the driver can concentrate on driving.
  • an embodiment of the present application proposes a driving reminder method, a driving state detection method, and a computing device to solve the problems in the prior art.
  • an embodiment of the present application discloses a driving reminder method, including:
  • An embodiment of the present application also discloses a computing device, including:
  • One or more processors.
  • An embodiment of the present application also discloses one or more machine-readable media, on which instructions are stored, which when executed by one or more processors, cause a computing device to execute the foregoing method.
  • the method proposed in the embodiment of the present application can solve the problem of the high false alarm rate of the existing driver state detection system, and ensure that the driver has the ability to safely take over the vehicle within a specified time range.
  • Fig. 1 is a block diagram of an automatic driving system according to an embodiment of the application.
  • Fig. 2 is a block diagram of visual algorithm processing according to an embodiment of the application.
  • Fig. 3 shows a flowchart of a driving reminding method according to an embodiment.
  • 4A to 4D are flowcharts of sub-steps of the driving reminding method shown in FIG. 3.
  • Fig. 5 schematically shows a block diagram of a computing device for executing the method according to the present application.
  • Fig. 6 schematically shows a storage unit for holding or carrying program codes for implementing the method according to the present application.
  • The embodiments of this application propose a driving reminding method and device applied to an automatic driving system, which can solve the problem of the high false alarm rate of existing driver state detection systems and ensure that the driver is able to safely take over the vehicle within the specified time frame.
  • the embodiment of the application proposes a driving reminding method, which is applied to an automatic driving system of a vehicle.
  • the automatic driving system can detect the information inside and outside the car, and this information can be input into the automatic driving system as the basis for the automatic driving system to judge and perform operations.
  • Information outside the vehicle may include traffic environment information and natural environment information; traffic environment information, for example, road condition information, traffic light information, obstacle information, etc.; natural environment information, for example, temperature, humidity, light, etc. This information can be obtained through detection elements such as sensors, cameras, and radars outside the vehicle.
  • In-vehicle information includes, for example, in-vehicle environment information, driver status information, and driver's operation information on the vehicle, etc. These information can be acquired through detection elements such as in-vehicle sensors and cameras.
  • Fig. 1 shows a system block diagram of an automatic driving system proposed in an embodiment of the application.
  • the automatic driving system of a vehicle can be composed of software, hardware, or a combination of software and hardware.
  • The automatic driving system may include a vehicle sensor module 10, a vehicle-mounted camera module 20, a driver state detection module 30, an automatic driving system main control module 40, a wake-up strategy control module 50, and a human-computer interaction interface module 60.
  • The vehicle sensor module 10 and the vehicle-mounted camera module 20 may be hardware devices connected to the on-board computer through a connection such as a data bus; the driver state detection module 30, the automatic driving system main control module 40, and the wake-up strategy control module 50 may be computer programs in the on-board computer's processor; the human-computer interaction interface module 60 may be a software module or a hardware module.
  • the vehicle sensor module 10 and the vehicle camera module 20 are used to collect information in the vehicle, and the vehicle sensor module 10 is used to detect whether there is a driver at the driving position through sensors.
  • The vehicle sensor module 10 may be, for example, a pressure sensor installed in the driver's seat.
  • the vehicle sensor module 10 may be used to receive related sensor signals of the vehicle, such as driving position pressure sensor signals, seat belt signals, etc., to determine whether the driver is in the driving position.
  • the vehicle-mounted camera module 20 is used to collect multiple frames of video images of the driving position.
  • the vehicle-mounted camera module 20 may be one or more of a normal camera, a high-definition camera, and a stereo camera.
  • the multiple frames of video images may be continuous or discontinuous.
  • the vehicle-mounted camera module 20 can be installed at the A-pillar position in the vehicle to collect image information of the driver and monitor the status of the driver.
  • The installation position of the vehicle-mounted camera module is preferably chosen so that it captures as much of the driver's facial information as possible, and it must not interfere with the driver's operation, for example by obstructing the driver's view or controls.
  • the signal and video image of the sensor module may be sent to the driver state detection module 30 of the onboard computer.
  • the driver state detection module 30 may use at least one of the sensor signal and the video image to determine the driving state of the driver, and send the driving state to the automatic driving system main control module 40.
  • the driver state detection module 30 may also send the driving state to the wake-up strategy control module 50.
  • the driver state detection module 30 may receive the image information.
  • the driver state detection module 30 performs driver detection through a face classifier to determine whether there is a human face in the detected frame of video image. When there is a human face, the driver's rectangular area can be determined, and the driver's facial feature points can be located in the rectangular area of the human face to obtain facial feature information, and the driver's state can be determined based on the facial feature information.
  • FIG. 2 shows a visual algorithm processing block diagram of the driver state detection module 30.
  • the processing flow of the driver state detection module 30 includes four parts: video image input, driver detection, facial feature point positioning, and driver state judgment.
  • the video image input process is used to obtain the video image of the on-board camera module 20;
  • the driver detection process is used to determine whether the driver's facial image information exists in the multi-frame video images;
  • the facial feature point positioning process is used to determine the facial feature points from the image information;
  • the driver state judgment process is used to judge the driver's driving state based on the facial feature information.
  • The main control module 40 of the automatic driving system can be used to send system status signals according to the operating conditions of the automatic driving system, such as a signal that the system is malfunctioning, that an emergency has occurred, or that the automatic driving system cannot accurately determine the road conditions ahead.
  • the wake-up strategy control module 50 may be used to send different instructions according to the driver state and/or the state signal of the automatic driving system, and use different reminding methods to remind the driver to take over the vehicle.
  • Different reminding methods can be applied to the driver according to the driver's state and the confidence level, to ensure that the driver can take over the driving task safely and smoothly within the specified time.
  • the wake-up strategy control module 50 formulates a wake-up strategy according to the driving state, and executes the wake-up strategy through the human-computer interaction interface module 60.
  • The driver state detection module 30 and the wake-up strategy control module 50 may be software modules in the on-board computer's processor, and the human-computer interaction interface module 60 may be a hardware module that sends notification information to the driver according to the control instructions given by the wake-up strategy control module 50; the control instructions include different wake-up modes.
  • the human-computer interaction interface module 60 executes a corresponding wake-up mode for the driver to remind the driver to take over the driving task.
  • the human-computer interaction interface module 60 may include, for example, a sound module, a light module, a vibration module, a display module, etc., which are not particularly limited in this application.
  • The above description of the automatic driving system is only for convenience of description and does not limit the present application to the scope of the listed embodiments. It can be understood that, after understanding the principle of the system, those skilled in the art may arbitrarily combine the various modules, or form subsystems connected with other modules, without departing from this principle, and make various modifications and changes in the form and details of the application field of the above method and system.
  • the above-mentioned driver state detection module 30, automatic driving system main control module 40, wake-up strategy control module 50 and human-computer interaction interface module 60 are separate software modules.
  • These modules may also be integrated in pairs or in larger combinations, and any such variation or modification falls within the protection scope of this application.
  • For example, the driver state detection module 30 and the automatic driving system main control module 40 can be integrated together in the form of software; the wake-up strategy control module 50 and the human-computer interaction interface module 60 can be integrated together in the form of software; the driver state detection module 30, the automatic driving system main control module 40, and the wake-up strategy control module 50 can be integrated in the form of software; or the driver state detection module 30, the automatic driving system main control module 40, the wake-up strategy control module 50, and the human-computer interaction interface module 60 can all be integrated together in the form of software. This application does not particularly limit whether the above modules are implemented alone or in combination; all such variations fall within the protection scope of this application.
  • FIG. 3 is a flowchart of the steps of the driving reminding method according to the first embodiment of the application. As shown in FIG. 3, the driving reminder method of the embodiment of the present application is applied to an automatic driving system and includes the following steps:
  • In step S101, the automatic driving system obtains the facial image information of the detected object.
  • the automatic driving system may obtain the facial image information of the driver in the driving position of the vehicle where the automatic driving system is located through a sensor or a camera.
  • the face image information of the detected object may include a face image recognized through face recognition technology.
  • The vehicle-mounted camera module 20 in FIG. 1 may be used to shoot video, for example continuous video images at 30 frames per second.
  • a face classifier can be used to analyze and detect the video image to determine whether there is face image information.
  • The face classifier may be obtained by extracting MB-LBP (Multiscale Block LBP) features from a training set containing face and non-face samples, and then training with a cascaded AdaBoost algorithm.
  • the rectangular area of the face can be obtained through an algorithm.
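  • As a rough illustration of the MB-LBP feature mentioned above (a minimal sketch, not the patent's actual implementation; the 3x3 block-mean grid and the clockwise bit ordering are illustrative assumptions), each of the eight surrounding block means is compared with the center block mean to form an 8-bit code:

```python
def mb_lbp_code(block_means):
    """Compute an 8-bit MB-LBP code from a 3x3 grid of block mean
    intensities: each surrounding block's mean is compared with the
    center block's mean, clockwise from the top-left corner (the bit
    ordering is an illustrative assumption)."""
    center = block_means[1][1]
    neighbors = [
        block_means[0][0], block_means[0][1], block_means[0][2],
        block_means[1][2], block_means[2][2], block_means[2][1],
        block_means[2][0], block_means[1][0],
    ]
    code = 0
    for bit, mean in enumerate(neighbors):
        if mean >= center:
            code |= 1 << bit
    return code
```

A cascaded AdaBoost classifier would then select discriminative MB-LBP codes over many block positions and scales to decide whether a window contains a face.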
  • judging whether there is face image information in the image can also be implemented by means of machine learning.
  • a machine learning model can be used to determine whether there is face image information in the image.
  • The application of the machine learning model includes a training phase and a use phase. In the training phase, multiple images containing face image information and images not containing face image information are input into the machine learning model; these images are labeled "containing" or "not containing" and used as samples to train the model. In the use phase, a new image is input into the trained machine learning model, which automatically outputs a judgment of whether the image contains face image information.
  • judging whether there is face image information in the image can also be implemented by means of deep learning in machine learning.
  • Deep learning uses a neural network model containing multiple hidden layers to establish an analysis-and-learning network modeled on the human brain, imitating the mechanism of the human brain to interpret data such as text, images, and sounds. Deep learning usually requires a larger amount of training data to train the neural network model; these training data are, for example, a large number of images labeled "containing face image information" or "not containing face image information". In the use phase after training, a new image is input into the neural network model, which automatically outputs a judgment of whether the image contains facial image information; the accuracy of the output is significantly improved compared with traditional machine learning models.
  • step S102 can be performed as follows:
  • Facial feature points, such as the eyes, mouth corners, nose tip, and face contour, are located within the aforementioned rectangular face area; the position information of these feature points is then used as the facial feature information for subsequently determining the driving state.
  • To determine the position information of the facial feature points, an initial shape is given for the rectangular face region, image features at the key feature points are extracted, and the initial shape is regressed to a position close to, or even equal to, the true shape.
  • The position information of the facial feature points can be determined by solving this regression problem with the Supervised Descent Method (SDM), using the Histogram of Oriented Gradients (HOG) as the image feature;
  • The Histogram of Oriented Gradients feature is a feature descriptor formed by computing and accumulating histograms of gradient orientations over local regions of the image, and will not be described in detail here.
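  • The idea behind a single HOG cell can be sketched as follows (a minimal illustration only; the bin count and the central-difference gradient are common choices, not values taken from the patent):

```python
import math

def hog_cell_histogram(cell, bins=9):
    """Orientation histogram for one cell of grayscale values:
    central-difference gradients, unsigned orientation (0-180 degrees),
    magnitude-weighted votes into `bins` equal-width bins."""
    hist = [0.0] * bins
    bin_width = 180.0 / bins
    for y in range(1, len(cell) - 1):
        for x in range(1, len(cell[0]) - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]   # horizontal gradient
            gy = cell[y + 1][x] - cell[y - 1][x]   # vertical gradient
            magnitude = math.hypot(gx, gy)
            angle = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(angle // bin_width) % bins] += magnitude
    return hist
```

A full HOG descriptor would additionally normalize these per-cell histograms over overlapping blocks and concatenate them.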
  • determining the position information of the facial feature points may also be obtained through machine learning.
  • a machine learning model can be used to obtain the location information of facial feature points.
  • The application of the machine learning model includes a training phase and a use phase. In the training phase, multiple face images annotated with the position information of the facial feature points are input into the machine learning model to train it; in the use phase, a new face image is input into the trained machine learning model, which automatically outputs the position information of the facial feature points of that face image.
  • determining the position information of the facial feature points can also be obtained by means of deep learning in machine learning.
  • a large amount of training data can be used to train the neural network model.
  • These training data are, for example, face images annotated with the position information of facial feature points. In the use phase, a new face image is input into the neural network model, which automatically outputs the position information of the facial feature points of the face image; the accuracy of the output is significantly improved compared with traditional machine learning models.
  • step S103 can be executed as follows:
  • the driving state of the detected object can be obtained based on facial feature information.
  • the corresponding facial feature information can be obtained through facial feature points.
  • the facial feature information includes, for example, position information of facial feature points.
  • The location information can be used to extract feature descriptors to determine the driver's state.
  • The driver's eye area can be located from the position information of the facial feature points; descriptors such as the aspect ratio of the driver's eyes are extracted from the eye area, and an SVM algorithm is used to determine the state of the eyes, which may include, for example, open, closed, and half-open.
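  • The eye aspect ratio mentioned above is commonly computed from six landmarks around the eye; the sketch below illustrates the computation (the landmark ordering and the fixed thresholds are assumptions for illustration; the patent itself uses an SVM rather than fixed thresholds to classify the state):

```python
import math

def _dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_aspect_ratio(pts):
    """pts: six (x, y) eye landmarks ordered p1..p6, where p1/p4 are the
    horizontal eye corners and (p2, p6), (p3, p5) are the vertical pairs."""
    vertical = _dist(pts[1], pts[5]) + _dist(pts[2], pts[4])
    return vertical / (2.0 * _dist(pts[0], pts[3]))

def eye_state(pts, open_thresh=0.25, half_thresh=0.15):
    """Classify the eye as open / half-open / closed by thresholding the
    aspect ratio (threshold values are illustrative assumptions)."""
    ear = eye_aspect_ratio(pts)
    if ear >= open_thresh:
        return "open"
    if ear >= half_thresh:
        return "half-open"
    return "closed"
```

The aspect ratio drops toward zero as the eyelids close, which is what makes it a useful descriptor for the open/closed decision.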
  • Similarly, the driver's mouth area can be located from the position information of the facial feature points; descriptors such as the aspect ratio of the driver's mouth are extracted from the mouth area, and a specific algorithm determines the state of the mouth, which may include, for example, open, closed, and half-open.
  • the head posture of the driver can be calculated by combining the internal and external parameters of the on-board camera module 20.
  • the head posture of the driver can be calculated according to the driver's current captured image, combined with the deflection angle of the camera relative to the x-y-z three-axis coordinate system.
  • the angle and transformation relationship between the axis of the driver's head and the axis of the body are used to determine the driver's head posture.
  • When the driver's head posture deviates from the body axis by less than a preset angle, for example between 0 and 15 degrees, the driver is considered to be in a normal posture; when the deviation of the head posture from the body axis is larger than 15 degrees, it is considered that the driver may be sleeping.
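  • The head-posture check described above amounts to thresholding the deviation angle between the head axis and the body axis; a minimal sketch using the 15-degree boundary from the text (the function name is illustrative):

```python
def head_posture_state(deviation_deg, normal_max_deg=15.0):
    """Return 'normal' when the head-to-body deviation angle lies within
    the preset range (0 to 15 degrees here, per the text); otherwise flag
    that the driver may be sleeping."""
    if 0.0 <= deviation_deg <= normal_max_deg:
        return "normal"
    return "possibly sleeping"
```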
  • The automatic driving system may set one or more of the states of the eyes, mouth, and head to correspond to the driving state of the driver. In some embodiments, once the automatic driving system has determined the states of one or more of the eyes, mouth, head, etc., the driving state of the driver can be determined.
  • The driving state can be divided into multiple driving state levels. Taking detection of the driving state from the driver's eyes as an example, three state levels (0/1/2) can be set, where a lower level means a more alert driver. For example, if either eye is closed at a given moment, the count of consecutive closed-eye frames is incremented. If the number of consecutive closed-eye frames is greater than the maximum closed-eye threshold, the driving state level is 2; if it is between the maximum and minimum thresholds, the level is 1; otherwise, the level is 0.
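  • The level assignment above can be sketched as scanning the per-frame eye states for the longest run of closed-eye frames (the frame thresholds are illustrative assumptions; the patent leaves the actual values unspecified):

```python
def eye_driving_state_level(closed_flags, min_run=15, max_run=45):
    """closed_flags: per-frame booleans, True when either eye is closed.
    Returns driving-state level 0/1/2 (lower means more alert), based on
    the longest run of consecutive closed-eye frames."""
    run = longest = 0
    for closed in closed_flags:
        run = run + 1 if closed else 0
        longest = max(longest, run)
    if longest > max_run:
        return 2
    if longest > min_run:
        return 1
    return 0
```

Resetting the run counter on every open-eye frame means normal blinks never accumulate enough consecutive frames to raise the level.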
  • Similarly, when it is detected that the driver's head has been drooping for longer than a first duration, the driving state level is 2; when the drooping time is between a second duration and the first duration (the first duration being longer than the second), the level is 1; otherwise, the level is 0.
  • the open and closed state of the eyes can be determined based on the SVM classifier by extracting feature information from the eye region.
  • The feature information may include the fusion of the LBP feature, the Hu moment feature, and the rotation-invariant uniform-pattern LBP histogram feature of the gray-level image; this fused information is used as the feature description of the eye region.
  • The SVM classifier first extracts the fused feature information of the images, and is then trained on a labeled sample set using the support vector machine algorithm, yielding a classifier capable of judging the open/closed state of the eyes.
  • step S104 can be performed as follows:
  • The predetermined condition is based on one or a combination of the following: the driver's driving state, the current system state signal of the automatic driving system, and the accuracy of the driving state judgment (for example, the system confidence).
  • For example, the predetermined condition may be: the current system status signal of the automatic driving system indicates a system error or a vehicle failure. That is, as long as the system status signal contains a signal related to a system error or vehicle failure, the wake-up strategy control module 50 shown in FIG. 1 determines the reminding mode according to the driving state of the driver and sends the corresponding reminding information through the human-computer interaction interface module 60.
  • Alternatively, the predetermined condition may be: the driving state level is 1 or 2 (for example, the driver is not alert or less alert), and the current system status signal of the automatic driving system indicates a system error or a vehicle failure. That is, two conditions must be satisfied simultaneously: the driver is not in a fit driving state, and the system status signal indicates a system error or a vehicle failure.
  • In other words, the system status signal must show that the driver's intervention is required for the predetermined conditions to be met; when the predetermined conditions are met, the wake-up strategy control module 50 determines the reminding method according to the driving state of the driver and sends the corresponding reminding information through the human-computer interaction interface module 60.
  • The system confidence, that is, the accuracy of the system's judgment, may be determined, for example, by the following steps:
  • The system confidence is determined according to the number of consecutive frames in which facial image information is detected, and the system confidence includes two or more confidence levels.
  • The automatic driving system may use the number of consecutive frames in which a face is detected to grade the confidence of the system's result, using three levels (0/1/2) for characterization, where a higher level means higher confidence. If the number of consecutive face-detection frames is greater than the maximum threshold, the confidence level is 2; if it is between the maximum and minimum thresholds, the confidence level is 1; otherwise, the confidence level is 0.
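  • The confidence grading above follows the same thresholding pattern as the driving-state levels; a minimal sketch (the threshold values are illustrative assumptions, not taken from the patent):

```python
def system_confidence(consecutive_face_frames, min_thresh=5, max_thresh=20):
    """Map the number of consecutive frames with a detected face to a
    confidence level 0/1/2 (higher means more confident)."""
    if consecutive_face_frames > max_thresh:
        return 2
    if consecutive_face_frames > min_thresh:
        return 1
    return 0
```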
  • the predetermined condition may be: the driving state level is 1 or 2 (for example, the driver is relatively unconscious or very unconscious), and the current system state signal of the automatic driving system is a system error Or vehicle failure, and the aforementioned system confidence is 1 or 2 (that is, the confidence level is high or medium); that is, it is necessary to satisfy that the driver is not in a driving state, and the system status signal shows that the driver is required to intervene, and the system confidence is high
  • the wake-up strategy control module 50 of the on-board control system determines the reminding method according to the driving state of the driver, and sends the corresponding reminder information through the human-computer interaction interface module 60.
  • the operation of issuing corresponding reminder information based on the driving state of the driver may be, for example, setting reminding methods of different intensities for different driving state levels, for example, three reminding methods of high, medium, and low intensity.
  • the reminding methods of different intensities can be realized by one or more of methods such as volume, light flashing, steering wheel vibration, and seat vibration.
  • the difference between a high-intensity reminder, a medium-intensity reminder, and a low-intensity reminder lies in the strength or frequency of the reminding means used.
  • a high-intensity reminder can use a high-decibel volume and high-frequency light flashing.
  • a medium-intensity reminder can use a medium-decibel volume, medium-frequency light flashing or medium-frequency yellow light flashing, and medium-frequency steering wheel vibration or seat vibration;
  • a low-intensity reminder can use a low-decibel volume, low-frequency light flashing or green light flashing, and low-frequency steering wheel vibration or seat vibration.
  • when the corresponding reminder information is issued based on the driving state, the system confidence can also be used as one of the reference factors; that is, the corresponding reminding method can be determined based on both the driving state and the system confidence, and reminder information of the corresponding level can be issued.
  • Table 1 shows an example of multiple reminding methods set for the driving state when the predetermined conditions are met, as follows:
    Driving state level | System confidence | Reminder intensity
    1 (not sober)       | 1 (high)          | High intensity
    2 (less sober)      | 1 (high)          | Medium intensity
    3 (awake)           | 2 (medium)        | Low intensity
    2 (less sober)      | 3 (low)           | Low intensity
    3 (awake)           | 3 (low)           | Low intensity
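The lookup implied by Table 1 can be sketched minimally as follows. The dictionary representation is an implementation choice, not part of the application; combinations of state and confidence not listed in the table simply return no reminder here:

```python
# Reminder intensity lookup corresponding to Table 1.
# Keys are (driving_state_level, system_confidence_level) as in the table.
REMINDER_TABLE = {
    (1, 1): "high",
    (2, 1): "medium",
    (3, 2): "low",
    (2, 3): "low",
    (3, 3): "low",
}

def reminder_intensity(state_level, confidence_level):
    """Return the reminder intensity for a (state, confidence) pair,
    or None if the combination is not covered by Table 1."""
    return REMINDER_TABLE.get((state_level, confidence_level))
```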
  • steps S101 to S104 may respectively include the following sub-steps.
  • the step S101, that is, the step of acquiring face image information of the detected object, may include the following sub-steps:
  • S1011: Use a face classifier to detect whether there is face image information in the collected image;
  • S1012: When it is determined that face image information exists in the collected image, extract facial feature information from the face image information.
  • an algorithm can be used to obtain a rectangular area of the human face. After the rectangular area of the human face is determined, at least one piece of facial feature information can be obtained from the rectangular area of the human face.
  • a face classifier may be used to detect whether there is a face image.
  • the face classifier can be obtained by extracting MBLBP features from a training set containing human faces and non-human faces, and training them using a cascaded AdaBoost algorithm.
  • the AdaBoost algorithm is used to select the rectangular features (weak classifiers) that best represent the face, the weak classifiers are combined into a strong classifier according to a weighted voting method, and then several strong classifiers obtained by training are connected in series to form a cascaded classifier.
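The cascade structure described above can be sketched as follows; the stage and weak-classifier representation here is an illustrative assumption, not the application's implementation. Each stage is a weighted vote over weak classifiers, and a candidate window is rejected as soon as any stage's score falls below that stage's threshold:

```python
def cascade_predict(stages, features):
    """Evaluate a cascade of boosted strong classifiers.

    `stages` is a list of (weak_classifiers, threshold) pairs, where each
    weak classifier is (feature_index, weight, predict_fn). A window is
    rejected as soon as one stage's weighted vote falls below its
    threshold -- the structure of a cascaded AdaBoost detector. All names
    here are illustrative placeholders.
    """
    for weak_classifiers, threshold in stages:
        score = sum(weight * predict(features[idx])
                    for idx, weight, predict in weak_classifiers)
        if score < threshold:
            return False  # early rejection: not a face
    return True  # passed every stage: accepted as a face
```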
  • the aforementioned MBLBP feature refers to the Multiscale Block LBP feature. Compared with the LBP feature, the MBLBP feature used in the embodiment of the present application is more robust and characterizes the image more completely.
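A minimal sketch of the MBLBP code for one image block follows the standard Multiscale Block LBP formulation: the block is split into a 3x3 grid of sub-blocks, and the mean intensity of each outer sub-block is compared with the mean of the center sub-block to form an 8-bit code. The bit ordering is an illustrative choice:

```python
import numpy as np

def mblbp_code(block):
    """Compute the Multiscale Block LBP code of a square image block.

    The block is divided into a 3x3 grid of sub-blocks; the mean of each
    of the 8 outer sub-blocks is compared with the mean of the center
    sub-block, yielding an 8-bit code.
    """
    block = np.asarray(block, dtype=np.float64)
    h, w = block.shape
    sh, sw = h // 3, w // 3
    means = np.array([
        [block[r * sh:(r + 1) * sh, c * sw:(c + 1) * sw].mean()
         for c in range(3)]
        for r in range(3)
    ])
    center = means[1, 1]
    # outer sub-blocks in clockwise order starting from the top-left
    neighbors = [means[0, 0], means[0, 1], means[0, 2], means[1, 2],
                 means[2, 2], means[2, 1], means[2, 0], means[1, 0]]
    code = 0
    for bit, m in enumerate(neighbors):
        if m >= center:
            code |= 1 << bit
    return code
```

Because the comparison is between sub-block means rather than single pixels, the descriptor is less sensitive to noise than plain LBP, which matches the robustness claim above.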
  • the facial feature information of the face image information may be further determined.
  • the rectangular area of the face can be obtained.
  • the facial feature points can be obtained, and the location information of the facial feature points can be determined by a positioning method and used as the facial feature information. That is, obtaining the facial feature information may include automatically locating the positions of the facial feature points according to the rectangular area of the human face.
  • the facial feature information may be the positions of the various parts that make up the human face, such as the eyes, the corners of the mouth, the tip of the nose, and the contour of the face. These feature positions can be obtained through algorithmic positioning.
  • to determine the position information of the facial feature points, an initial shape is given for the rectangular region of the face, image features of the key feature points are extracted, and the initial shape is regressed through continuous iteration to a position close to or even equal to the true shape.
  • the facial feature information may include eye information of the detected object; the eye information may include, for example, a ratio of eye height to eye width.
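The height-to-width ratio mentioned above can be sketched from located eye landmarks as follows. The six-point layout (left corner, two upper points, right corner, two lower points) is a common landmark convention assumed here for illustration; the application does not specify one:

```python
def eye_openness_ratio(eye_points):
    """Ratio of eye height to eye width from located feature points.

    `eye_points` is assumed to be six (x, y) landmarks in the order:
    left corner, upper 1, upper 2, right corner, lower 2, lower 1.
    This ordering is an illustrative assumption.
    """
    (x0, y0), (x1, y1), (x2, y2), (x3, y3), (x4, y4), (x5, y5) = eye_points
    width = ((x3 - x0) ** 2 + (y3 - y0) ** 2) ** 0.5
    # average the two vertical distances to get the eye height
    height = (((x5 - x1) ** 2 + (y5 - y1) ** 2) ** 0.5 +
              ((x4 - x2) ** 2 + (y4 - y2) ** 2) ** 0.5) / 2.0
    return height / width
```

A small ratio indicates a nearly closed eye, which is what the open/closed classification in step S1031 below relies on.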
  • step S102 may include the following sub-steps:
  • S1021: Determine facial feature points according to the face image information;
  • S1022: Determine the location information of the facial feature points, and use the location information as facial feature information.
  • Sub-step S1021, that is, the step of determining facial feature points according to the face image information, may include:
  • S1021a: Obtain a rectangular area of the face according to the face image information;
  • S1021b: Extract the initial shape of at least one facial feature point from the rectangular area of the face;
  • Sub-step S1022, that is, determining the location information of the facial feature points and using the location information as the facial feature information, may include:
  • the image features include histogram of oriented gradient (HOG) features.
  • the facial feature points include, for example, the eyes, the corners of the mouth, the tip of the nose, and the contour of the face.
  • the facial feature points can first be obtained from the aforementioned rectangular area of the face; the position information of the facial feature points can then be used as facial feature information for the subsequent determination of the driving state.
  • an initial shape is given, image features of the key feature points are extracted, and the initial shape is regressed through continuous iteration to a position close to or even equal to the true shape.
  • determining the position information of the facial feature points may be solved by using a supervised descent algorithm (Supervised Descent Method, SDM), with histogram of oriented gradient (Histogram of Oriented Gradient, HOG) features used as the image features.
  • the supervised descent algorithm is a method for minimizing a non-linear least squares (Non-linear Least Squares) objective function. By learning a series of descent directions and the scales of those directions, the objective function converges to its minimum at a very fast speed, avoiding the problems of solving the Jacobian and Hessian matrices.
  • the histogram of oriented gradient feature is a feature descriptor formed by calculating and counting the gradient orientation histograms of local areas of an image. It is in essence statistical information about the image gradient, and maintains good invariance to geometric and photometric deformations of the image.
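The SDM cascade described above can be sketched as a sequence of learned linear updates of the shape: x_{k+1} = x_k + R_k * phi(x_k) + b_k, where phi extracts image features around the current shape (HOG in the application). In this sketch the learned descent matrices and biases are passed in, and `feature_fn` stands in for the HOG extractor; all parameter names are illustrative:

```python
import numpy as np

def sdm_refine(shape0, feature_fn, descent_mats, biases):
    """Apply a cascade of learned supervised-descent steps to an
    initial shape estimate.

    Each step applies the SDM update x <- x + R @ phi(x) + b, where the
    matrices R and offsets b are learned offline from training data and
    phi is an image feature extractor (HOG in the application).
    """
    shape = np.asarray(shape0, dtype=np.float64)
    for R, b in zip(descent_mats, biases):
        phi = feature_fn(shape)   # features around the current shape
        shape = shape + R @ phi + b
    return shape
```

Because the descent directions are learned rather than computed from derivatives, no Jacobian or Hessian of the objective is needed at run time, which is the advantage noted above.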
  • the aforementioned step S103, that is, the step of judging the driving state of the detected object according to the facial feature information, may include the following sub-steps:
  • S1031: Extract eye feature information according to the eye information, and determine the open and closed state of the eyes;
  • S1032: Determine the driving state of the detected object by using the eye open and closed states in multiple continuous images, where the driving state includes two or more driving state levels.
  • feature information can be extracted from the eye region, and the open and closed state of the eyes can then be obtained using an SVM classifier.
  • the fused information of gray-level rotation-invariant uniform-pattern LBP features, Hu moment features, and histogram features can be used to describe the eye area.
  • the state of the driver is judged based on the open and closed state of the human eyes in consecutive frames. For example, three different levels (0/1/2) may be set; the lower the level, the more awake the driver. Specifically, if either eye is closed at a given moment, the count of consecutive closed-eye frames is incremented. If the number of consecutive closed-eye frames is greater than the maximum consecutive closed-eye threshold, the driver's state level is 2; if the number is between the maximum and minimum thresholds, the driver's state level is 1; otherwise, the driver's state level is 0.
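The frame-counting logic described above can be sketched as follows; the threshold values are illustrative assumptions, not values taken from the application:

```python
def update_closed_count(count, left_closed, right_closed):
    """Increment the consecutive closed-eye frame count when either eye
    is closed at the current moment; reset it to zero otherwise."""
    return count + 1 if (left_closed or right_closed) else 0

def driver_state_level(closed_frames, min_closed=3, max_closed=10):
    """Map the count of consecutive closed-eye frames to a driver state
    level 0/1/2 (lower means more awake).

    min_closed / max_closed are illustrative placeholder thresholds.
    """
    if closed_frames > max_closed:
        return 2
    if min_closed <= closed_frames <= max_closed:
        return 1
    return 0
```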
  • the facial feature information may also include mouth information of the detected object; the mouth information includes at least one of the ratio of the mouth height of the detected object to the mouth width, and the area of the mouth.
  • the facial feature information may further include head posture information of the detected object; the head posture information may include, for example, the angle between the current head axis direction and the preset head axis direction.
  • the embodiment of the present application proposes a driving reminder method, which has at least the following advantages compared with the prior art:
  • the driving reminding method proposed in the embodiments of the present application can solve the problem of the high false alarm rate of the existing driver state detection system, and ensure that the driver has the ability to safely take over the vehicle within a specified time range.
  • the driver does not need to remain awake all the time or pay attention to the road conditions ahead, and may even sleep, but needs to be awakened in the event of a system failure.
  • the system can adopt different intensities of reminding methods according to the detected driver's status, so that the driver can complete the switching of the driving task subject in a short time.
  • the driving reminder method proposed in this application can be applied to an automatic driving system above the L3 level, can solve the problem of high false alarm rate of the existing driver state detection system, and ensure that the driver has the ability to safely take over the vehicle within a specified time range.
  • the driving reminder method proposed in the optional embodiment of the present application at least includes the following advantages:
  • the driving reminder method proposed in some embodiments of this application uses MBLBP features as the feature descriptor in the face detection process. This feature characterizes image information more completely, is more robust than LBP features, and is more efficient than Haar-like features.
  • the embodiment of this application uses the supervised descent method to solve the problem of minimizing the nonlinear least squares (Non-linear Least Squares) objective function.
  • this method has a fast processing speed and accurate calculation results, and can overcome the shortcomings of many second-order optimization schemes, such as non-differentiability and the high computational cost of the Hessian matrix.
  • the embodiment of the present application also proposes a driving state detection method, which is used to detect the state of the driver of an automatic driving vehicle, including the aforementioned steps S101 to S103.
  • an embodiment of the present application also proposes a driving reminder device, including:
  • a memory in which a computer readable program is stored
  • the processor is connected to the memory and is used to execute the computer-readable program to perform the following operations:
  • an embodiment of the present application also provides a driving state detection device, including:
  • a memory in which a computer readable program is stored
  • the processor is connected to the memory and is used to execute the computer-readable program to perform the following operations:
  • the driving state of the detected object is acquired.
  • the embodiment of the present application also proposes an automatic driving system, including:
  • the vehicle sensor module is used to detect whether the detected object is in the driving position
  • the vehicle-mounted camera module is used to obtain images of the detected object;
  • a memory in which a computer readable program is stored
  • the processor is connected to the vehicle sensor module, the vehicle camera module, and the memory, acquires sensor signals and the image, and is used to execute the computer-readable program to perform the following operations:
  • the embodiment of the present application also proposes an automatic driving system for detecting the driving state of the detected object, including:
  • the vehicle sensor module is used to detect whether the detected object is in the driving position
  • the vehicle-mounted camera module is used to obtain images of the detected object;
  • a memory in which a computer readable program is stored
  • the processor is connected to the vehicle sensor module, the vehicle camera module, and the memory, acquires sensor signals and the image, and is used to execute the computer-readable program to perform the following operations:
  • the driving state of the detected object is acquired.
  • Each component embodiment of the present application may be implemented by hardware, or by software modules running on one or more processors, or by a combination of them.
  • a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all components in the computing device according to the embodiments of the present application.
  • This application can also be implemented as a device or device program (for example, a computer program and a computer program product) for executing part or all of the methods described herein.
  • Such a program for implementing the present application may be stored on a computer-readable medium, or may have the form of one or more signals. Such signals can be downloaded from Internet websites, or provided on carrier signals, or provided in any other form.
  • FIG. 5 shows a computing device that can implement the method according to the present application.
  • the computing device traditionally includes a processor 1010 and a computer program product in the form of a memory 1020 or a computer readable medium.
  • the memory 1020 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM.
  • the memory 1020 has a storage space 1030 for the program code 1031 for executing any method step of the above method.
  • the storage space 1030 for program codes may include various program codes 1031 for implementing various steps in the above method. These program codes can be read out from or written into one or more computer program products.
  • These computer program products include program code carriers such as hard disks, compact disks (CDs), memory cards or floppy disks.
  • Such a computer program product is usually a portable or fixed storage unit as described with reference to FIG. 6.
  • the storage unit may have storage segments, storage spaces, etc., arranged similarly to the memory 1020 in the computing device of FIG. 5.
  • for example, the program code can be compressed in an appropriate form.
  • the storage unit includes computer-readable codes 1031', that is, codes that can be read by a processor such as 1010. These codes, when run by a computing device, cause the computing device to execute each step of the methods described above.
  • the embodiment of the present application provides a computing device, including: one or more processors; and one or more machine-readable media on which instructions are stored which, when executed by the one or more processors, cause the computing device to execute the method described in one or more of the embodiments of the present application.

Abstract

Disclosed by the present application is a driving prompt method, a driving state detection method and a computing device. The driving prompt method comprises: acquiring face image information of a detected subject; acquiring at least one piece of facial feature information of the detected subject from within the face image information; determining the driving state of the detected subject according to the facial feature information; and sending corresponding prompt information on the basis of the driving state when a predetermined condition is met. The method proposed in embodiments of the present application may solve the problem of high false alarm rates of existing driver state detection systems, and ensure that a driver is able to safely take over a vehicle within a specified time range.

Description

Driving reminding method, driving state detection method and computing device

Technical field
This application relates to the field of automatic driving technology, and in particular to a driving reminder method, a driving state detection method, and a computing device.
Background
Autonomous vehicles (self-piloting automobiles), also known as unmanned vehicles or computer-driven vehicles, are intelligent vehicles that realize unmanned driving through a computer-based automatic driving system. They have a history of several decades in the 20th century, and at the beginning of the 21st century showed a trend toward practical use.
According to the degree of driver participation, autonomous driving can be divided into several levels:
Level L0: The driver has complete control of the vehicle;
Level L1: The automatic system can sometimes assist the driver in completing certain driving tasks;
Level L2 (assisted driving): The automatic system can complete certain driving tasks, but the driver needs to monitor the driving environment, complete the remaining tasks, and be ready to take over at any time if problems occur. At this level, erroneous perception and judgment by the automatic system can be corrected by the driver at any time, and most car manufacturers can provide this system. L2 can be divided into different usage scenarios by speed and environment, such as low-speed traffic jams on ring roads, fast driving on highways, and automatic parking with the driver in the car;
Level L3 (semi-automatic driving): The automatic system can both complete certain driving tasks and monitor the driving environment in certain situations, but the driver must be ready to regain driving control when the automatic system requests it. So at this level, the driver still cannot sleep or rest deeply. The difference between L3 and L2 is that the vehicle is responsible for monitoring the surroundings, while the human driver only needs to remain attentive in case of need;
Level L4 (highly automated driving): The automatic system can complete driving tasks and monitor the driving environment in certain environments and under specific conditions. At present, L4 deployment is mostly for urban use, whether as fully automated valet parking or in direct combination with ride-hailing services. At this stage, within the range in which automatic driving can operate, all driving-related tasks no longer concern the driver and passengers; the responsibility for perceiving the environment rests entirely with the automatic driving system, and different design and deployment approaches exist here;
Level L5 (fully automated driving): The automatic system can complete all driving tasks under all conditions.
It can be seen that the higher the level, the lower the driver's participation; the lower the level, the higher the driver's participation.
For the L0, L1, and L2 levels, in which driver participation is high, current automatic driving systems have developed corresponding driver state detection methods to monitor the driver's state and ensure that the driver can concentrate on driving.
With the development of autonomous driving technology, automatic driving systems can judge more and more complex situations, and the situations requiring driver participation are gradually decreasing. However, existing driver state detection methods are mainly applied at automation levels with high driver participation; if transferred to driving levels with low driver participation, they are prone to problems such as high false alarm rates and frequent false alarms, which affect the driver's mood and mental state.
Therefore, it is necessary to propose a driver state detection method suitable for automation levels with low driver participation.
Summary of the invention
In view of the foregoing problems, an embodiment of the present application proposes a driving reminder method, a driving state detection method, and a computing device to solve the problems in the prior art.
In order to solve the above problems, an embodiment of the present application discloses a driving reminder method, including:
acquiring face image information of a detected object;
acquiring at least one piece of facial feature information of the detected object from the face image information;
determining the driving state of the detected object based on the facial feature information;
when a predetermined condition is met, issuing corresponding reminder information based on the driving state.
An embodiment of the present application also discloses a computing device, including:
one or more processors; and
one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the computing device to execute the above-mentioned method.
An embodiment of the present application also discloses one or more machine-readable media having instructions stored thereon which, when executed by one or more processors, cause a computing device to execute the above-mentioned method.
It can be seen from the above that the embodiments of the present application include the following advantages:
The method proposed in the embodiments of the present application can solve the problem of the high false alarm rate of existing driver state detection systems, and ensure that the driver is able to safely take over the vehicle within a specified time range.
Description of the drawings
In order to describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained based on these drawings without creative work.
Fig. 1 is a block diagram of an automatic driving system according to an embodiment of the application.
Fig. 2 is a block diagram of visual algorithm processing according to an embodiment of the application.
Fig. 3 is a flowchart of a driving reminding method according to an embodiment.
Figs. 4A to 4D are flowcharts of sub-steps of the driving reminding method shown in Fig. 3.
Fig. 5 schematically shows a block diagram of a computing device for executing the method according to the present application.
Fig. 6 schematically shows a storage unit for holding or carrying program code implementing the method according to the present application.
Detailed description
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员所获得的所有其他实施例,都属于本申请保护的范围。The technical solutions in the embodiments of the present application will be clearly and completely described below in conjunction with the drawings in the embodiments of the present application. Obviously, the described embodiments are only a part of the embodiments of the present application, rather than all of the embodiments. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art fall within the protection scope of this application.
对于逐渐提高的自动驾驶级别,本申请实施例提出一种应用于自动驾驶系统的驾驶方法和装置,能够解决现有的驾驶员状态检测系统误报率高的问题,确保驾驶员有能力在规定时间范围内安全接管车辆。For the gradually increasing level of automatic driving, the embodiments of this application propose a driving method and device applied to an automatic driving system, which can solve the problem of high false alarm rate of the existing driver state detection system and ensure that the driver has the ability to Safely take over the vehicle within the time frame.
本申请实施例提出一种驾驶提醒方法,应用于车辆的自动驾驶系统。自动驾驶系统能够检测车内信息和车外信息,这些信息可以输入自动驾驶系统,作为自动驾驶系统判断和执行操作的依据。The embodiment of the application proposes a driving reminding method, which is applied to an automatic driving system of a vehicle. The automatic driving system can detect the information inside and outside the car, and this information can be input into the automatic driving system as the basis for the automatic driving system to judge and perform operations.
车外的信息可以包括交通环境信息和自然环境信息;交通环境信息例如为路况信息、交通灯信息、障碍信息等;自然环境信息例如包括温度、湿度、光线等。这些信息可以通过车外传感器、摄像头、雷达等检测元件获取。Information outside the vehicle may include traffic environment information and natural environment information; traffic environment information, for example, road condition information, traffic light information, obstacle information, etc.; natural environment information, for example, temperature, humidity, light, etc. This information can be obtained through detection elements such as sensors, cameras, and radars outside the vehicle.
车内信息例如包括车内环境信息、驾驶人员状态信息、驾驶员对车辆的操作 信息等,这些信息可以通过车内传感器和摄像头等检测元件获取。In-vehicle information includes, for example, in-vehicle environment information, driver status information, and driver's operation information on the vehicle, etc. These information can be acquired through detection elements such as in-vehicle sensors and cameras.
本申请主要针对车内信息处理,提出一种驾驶提醒方法和装置。图1所示为本申请一实施例提出的自动驾驶系统的系统框图。如图1所示,车辆的自动驾驶系统可以由软件、硬件或者软硬件结合的方式组成,自动驾驶系统可以包括车辆传感器模块10、车载摄像头模块20、驾驶员状态检测模块30、自动驾驶系统主控模块40,唤醒策略控制模块50和人机交互接口模块60。在一些实施例中,车辆传感器模块10和车载摄像头模块20可以是硬件装置,通过例如数据总线等连接方式连接于车载电脑;驾驶员状态检测模块30、自动驾驶系统主控模块40,唤醒策略控制模块50可以是车载电脑处理器中的计算机程序;人机交互接口模块60可以为软件模块或者硬件模块。This application mainly aims at information processing in the vehicle, and proposes a driving reminder method and device. Fig. 1 shows a system block diagram of an automatic driving system proposed in an embodiment of the application. As shown in Figure 1, the automatic driving system of a vehicle can be composed of software, hardware, or a combination of software and hardware. The automatic driving system can include a vehicle sensor module 10, a vehicle-mounted camera module 20, a driver state detection module 30, and an automatic driving system master. The control module 40, the wake-up strategy control module 50 and the human-computer interaction interface module 60. In some embodiments, the vehicle sensor module 10 and the vehicle camera module 20 may be hardware devices, which are connected to the vehicle computer through a connection such as a data bus; the driver state detection module 30, the automatic driving system main control module 40, and the wake-up strategy control The module 50 may be a computer program in a vehicle-mounted computer processor; the human-computer interaction interface module 60 may be a software module or a hardware module.
车辆传感器模块10和车载摄像头模块20用于采集车内的信息,车辆传感器模块10用于通过传感器检测驾驶位置是否有驾驶员。在一些实施例中,所述传感器模块10可以是设置在驾驶座位的压力传感器。在一些实施例中,所述车辆传感器模块10可以用于接收车辆的相关传感器信号,例如驾驶位压力传感器信号、安全带信号等,用于判断驾驶员是否在驾驶位置。The vehicle sensor module 10 and the vehicle camera module 20 are used to collect information in the vehicle, and the vehicle sensor module 10 is used to detect whether there is a driver at the driving position through sensors. In some embodiments, the sensor module 10 may be a pressure sensor installed in the driver's seat. In some embodiments, the vehicle sensor module 10 may be used to receive related sensor signals of the vehicle, such as driving position pressure sensor signals, seat belt signals, etc., to determine whether the driver is in the driving position.
车载摄像头模块20用于采集驾驶位置的多帧视频图像。在一些实施例中,所述车载摄像头模块20可以是普通摄像头、高清摄像头、立体摄像头等中的一种或多种。在一些实施例中,所述多帧视频图像可以是连续的,也可以是不连续的。在一些实施例中,车载摄像头模块20可以安装在车内A柱位置,用于采集驾驶员的图像信息,监控驾驶员状态,所述车载摄像头模块的安装位置以能够获取较大程度的驾驶员面部信息为最佳,且不能影响驾驶员的操作,例如不能遮挡驾驶员的操作。The vehicle-mounted camera module 20 is used to collect multiple frames of video images of the driving position. In some embodiments, the vehicle-mounted camera module 20 may be one or more of a normal camera, a high-definition camera, and a stereo camera. In some embodiments, the multiple frames of video images may be continuous or discontinuous. In some embodiments, the vehicle-mounted camera module 20 can be installed at the A-pillar position in the vehicle to collect image information of the driver and monitor the status of the driver. The installation position of the vehicle-mounted camera module can be used to obtain a greater degree of driver The facial information is the best and cannot affect the operation of the driver, for example, it cannot obscure the operation of the driver.
In some embodiments, the sensor signals and video images may be sent to the driver state detection module 30 of the on-board computer. In some embodiments, the driver state detection module 30 may use at least one of the sensor signals and the video images to determine the driving state of the driver, and send the driving state to the automatic driving system main control module 40. In some embodiments, the driver state detection module 30 may also send the driving state to the wake-up strategy control module 50. In some embodiments, the driver state detection module 30 may receive the image information. In some embodiments, the driver state detection module 30 performs driver detection through a face classifier to determine whether a human face exists in the detected frame of video image. When a face exists, a rectangular face region of the driver can be determined, the driver's facial feature points can be located within that region to obtain facial feature information, and the driver's state can be determined based on the facial feature information.
In some embodiments, Fig. 2 shows a block diagram of the visual algorithm processing of the driver state detection module 30. As shown in Fig. 2, the processing flow of the driver state detection module 30 includes four parts: video image input, driver detection, facial feature point localization, and driver state judgment. The video image input stage obtains the video images from the vehicle-mounted camera module 20; the driver detection stage determines whether the driver's facial image information exists in the multiple frames of video images; the facial feature point localization stage determines facial feature points based on the facial image information; and the driver state judgment stage judges the driver's driving state based on the facial features.
Returning to Fig. 1, the automatic driving system main control module 40 may be used to send system status signals according to the operating conditions of the automatic driving system, such as a signal indicating a system fault, an emergency, or that the automatic driving system cannot accurately judge the road conditions ahead.
The wake-up strategy control module 50 may be used to send different instructions according to the driver state and/or the status signal of the automatic driving system, using different reminding methods to remind the driver to take over the vehicle. In some embodiments, upon receiving a system status signal from the automatic driving system, for example when a vehicle failure or system failure occurs, different reminding methods may be applied to the driver according to the driver state and the confidence level, ensuring that the driver can take over the driving task safely and smoothly within the specified time.
In some embodiments, the wake-up strategy control module 50 formulates a wake-up strategy according to the driving state and executes it through the human-computer interaction interface module 60. In some embodiments, the driver state detection module 30 and the wake-up strategy control module 50 may be software modules in the on-board computer's processor, while the human-computer interaction interface module 60 may be a hardware module that issues notification information to the driver according to control instructions given by the wake-up strategy control module 50, the control instructions containing different wake-up modes. According to the wake-up strategy control instruction, the human-computer interaction interface module 60 executes the corresponding wake-up mode to remind the driver to take over the driving task. The human-computer interaction interface module 60 may include, for example, a sound module, a light module, a vibration module, and a display module, which are not particularly limited in this application.
It is worth noting that the above description of the automatic driving system is only for convenience of description and does not limit this application to the scope of the listed embodiments. It can be understood that those skilled in the art, after understanding the principle of the system, may arbitrarily combine the modules, or form subsystems connected with other modules, without departing from this principle, and may make various modifications and changes to the form and details of the application fields of the above method and system. For example, the above-mentioned driver state detection module 30, automatic driving system main control module 40, wake-up strategy control module 50, and human-computer interaction interface module 60 may exist as separate software modules. As another example, these modules may also be integrated in pairs or in larger groups; any such variation or modification falls within the protection scope of this application.
For example, the driver state detection module 30 and the automatic driving system main control module 40 may be integrated together in the form of software; the wake-up strategy control module 50 and the human-computer interaction interface module 60 may be integrated together in the form of software; the driver state detection module 30, the automatic driving system main control module 40, and the wake-up strategy control module 50 may be integrated together in the form of software; or the driver state detection module 30, the automatic driving system main control module 40, the wake-up strategy control module 50, and the human-computer interaction interface module 60 may all be integrated together in the form of software. This application does not particularly limit whether the above modules are implemented separately or in combination. All such variations fall within the protection scope of this application.
Fig. 3 is a flowchart of the steps of the driving reminder method according to the first embodiment of the application. As shown in Fig. 3, the driving reminder method of this embodiment is applied to an automatic driving system and includes the following steps:
S101: obtain facial image information of a detected object;
In some embodiments, the automatic driving system obtains the facial image information of the detected object. The automatic driving system may obtain, through a sensor or a camera, the facial image information of the driver in the driving position of the vehicle in which the automatic driving system is located.
In one embodiment, the facial image information of the detected object may include a face image recognized through face recognition technology. In one embodiment, the vehicle-mounted camera module 20 in Fig. 1 may be used to shoot video, for example continuous video images at 30 frames per second. A face classifier may then be used to analyze and detect the video images to determine whether facial image information exists. In one embodiment, the face classifier may be obtained by extracting MBLBP (Multiscale Block LBP) features from a training set containing face and non-face samples, and then training with a cascaded AdaBoost algorithm.
In some embodiments, after the face classifier concludes that facial image information exists, a rectangular face region can be obtained through an algorithm.
In other embodiments, judging whether facial image information exists in an image may also be implemented by machine learning. For example, a machine learning model can be used to determine whether an image contains facial image information. The use of a machine learning model includes a training phase and a usage phase. In the training phase, multiple images containing facial image information and images not containing it can be input into the model, labeled as "containing" or "not containing", and used as samples to train the model. In the usage phase, a new image is input into the trained model, and the model automatically outputs a judgment result of whether the image contains facial image information.
In still other embodiments, judging whether facial image information exists in an image may also be implemented by deep learning within machine learning. Deep learning uses neural network models containing multiple hidden layers to build and simulate a neural network that analyzes and learns like the human brain, imitating the brain's mechanisms to interpret data such as text, images, and sounds. Deep learning usually requires a larger amount of training data to train the neural network model, for example a large number of images labeled "contains facial image information" or "does not contain facial image information". In the usage phase after training, a new image is input into the neural network model, which automatically outputs a judgment of whether the image contains facial image information, with accuracy significantly higher than that of traditional machine learning models.
It is worth noting that the above is for illustration only; the ways of judging whether facial image information exists in an image are not limited to those described, and those skilled in the art may make arbitrary variations on the determination method, all of which fall within the scope of this application.
After step S101 is performed, according to an embodiment of this application, step S102 may be performed as follows:
S102: obtain facial feature information of the detected object from the facial image information;
In this step, facial feature points such as the eyes, mouth corners, nose tip, and face contour may first be located within the aforementioned rectangular face region; the position information of these facial feature points is then used as the facial feature information for subsequently determining the driving state.
In some embodiments, further, to determine the position information of the facial feature points, an initial shape is given for the rectangular face region, image features of the key feature points are extracted, and the initial shape is iteratively regressed to a position close to or even equal to the true shape.
In some embodiments, the position information of the facial feature points may be solved using the aforementioned Supervised Descent Method (SDM), with the Histogram of Oriented Gradients (HOG) as the image feature. The HOG feature is a feature descriptor formed by computing and accumulating histograms of gradient orientations over local regions of the image, and is not described further here.
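The core idea of the HOG descriptor mentioned above — accumulating gradient magnitudes into an orientation histogram over a local region — can be sketched in plain Python. This is only an illustrative simplification: a real HOG implementation (such as the one SDM pipelines typically use) additionally divides the region into cells and applies block normalization.

```python
import math

def hog_histogram(patch, num_bins=9):
    """Minimal sketch of a HOG-style descriptor: accumulate gradient
    magnitudes into an orientation histogram over one image patch.
    `patch` is a 2D list of gray values; real HOG adds cells, blocks,
    and block normalization."""
    h, w = len(patch), len(patch[0])
    hist = [0.0] * num_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]  # horizontal gradient
            gy = patch[y + 1][x] - patch[y - 1][x]  # vertical gradient
            mag = math.hypot(gx, gy)
            # unsigned orientation in [0, 180) degrees
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang // (180.0 / num_bins)) % num_bins] += mag
    total = sum(hist) or 1.0
    return [v / total for v in hist]  # L1-normalized histogram
```

On a patch containing only a vertical edge, all gradient energy falls into the 0-degree bin, which is what makes the histogram a compact description of local edge structure.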
In some embodiments, the position information of the facial feature points may also be obtained by machine learning. For example, a machine learning model can be used to obtain the position information of the facial feature points. The use of the machine learning model includes a training phase and a usage phase. In the training phase, multiple face images annotated with the position information of facial feature points can be input into the machine learning model to train it. In the usage phase, a new face image is input into the trained model, which automatically outputs the position information of the facial feature points of that image.
In some embodiments, the position information of the facial feature points may also be obtained by deep learning within machine learning. In the training phase, a large amount of training data, for example face images annotated with the position information of facial feature points, can be used to train a neural network model. In the usage phase, a new face image is input into the neural network model, which automatically outputs the position information of the facial feature points of that image, with accuracy significantly higher than that of traditional machine learning models.
It is worth noting that the above is for illustration only; the ways of determining the position information of the facial feature points are not limited to those described, and those skilled in the art may make arbitrary variations on the determination method, all of which fall within the scope of this application.
After step S102 is performed, according to an embodiment of this application, step S103 may be performed as follows:
S103: obtain the driving state of the detected object based on the facial feature information;
In this step, the driving state of the detected object can be obtained based on the facial feature information. Corresponding facial feature information can be obtained from the facial feature points; it includes, for example, the position information of the facial feature points, which can be used to extract feature descriptors for determining the driver's state.
For example, in one embodiment, the driver's eye region can be located from the position information of the facial feature points; descriptors such as the aspect ratio of the driver's eyes are extracted from the position information of the eye region, and an SVM algorithm is used to determine the state of the eyes, which may include, for example, open, closed, and half-open.
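The eye aspect-ratio descriptor described above can be sketched as a small computation over the located eye landmarks. The six-point landmark layout and the thresholds below are illustrative assumptions, not taken from this application; the application itself feeds such a descriptor into an SVM classifier rather than applying fixed thresholds.

```python
def eye_aspect_ratio(eye):
    """`eye` is a list of six (x, y) landmarks in a common 6-point
    layout (an assumption): [outer corner, two upper-lid points,
    inner corner, two lower-lid points]. Returns height / width."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    height = (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / 2.0
    width = dist(eye[0], eye[3])
    return height / width

def eye_state(ear, open_t=0.25, half_t=0.15):
    """Map the aspect ratio to a coarse eye state; the threshold
    values here are illustrative placeholders."""
    if ear >= open_t:
        return "open"
    if ear >= half_t:
        return "half-open"
    return "closed"
```

A wide-open eye yields a noticeably larger height-to-width ratio than a closed one, which is why this single scalar is already a useful input feature for the eye-state classifier.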
Similarly, in other embodiments, the driver's mouth region can be located from the position information of the facial feature points; descriptors such as the aspect ratio of the driver's mouth are extracted from the position information of the mouth region, and a specific algorithm is used to determine the state of the mouth, which may include, for example, open, closed, and half-open.
Similarly, in other embodiments, the driver's head pose can be calculated by combining the intrinsic and extrinsic parameters of the vehicle-mounted camera module 20. For example, the head pose can be calculated from the driver's currently captured image combined with the deflection angles of the camera relative to the x-y-z three-axis coordinate system. Alternatively, the angle and transformation relationship between the axis of the driver's head and the axis of the body can be used to determine the head pose. For example, when the head is at less than a specific angle relative to the body, such as between 0 and 15 degrees, the driver is considered to be in a normal posture; when the head pose exceeds 15 degrees relative to the body, the driver may be considered to be sleeping.
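The head-body angle rule above can be sketched as follows. Representing the head axis and body axis as 2D direction vectors in the image plane is an illustrative assumption; the application also allows computing the pose from the camera's intrinsic and extrinsic parameters, which this sketch does not cover.

```python
import math

def axis_angle_deg(head_axis, body_axis):
    """Angle in degrees between two 2D direction vectors, e.g. the
    head axis and body axis estimated from the captured image
    (the 2D-vector representation is an assumption)."""
    dot = head_axis[0] * body_axis[0] + head_axis[1] * body_axis[1]
    n1 = math.hypot(*head_axis)
    n2 = math.hypot(*body_axis)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def head_posture(angle_deg, normal_max=15.0):
    # Per the rule above: within 0-15 degrees of the body axis -> normal,
    # beyond that the driver may be sleeping.
    return "normal" if angle_deg <= normal_max else "possibly sleeping"
```

For instance, a head axis aligned with the body axis gives 0 degrees (normal posture), while a drooping head tilted 45 degrees from the body axis is flagged as possibly sleeping.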
In some embodiments, the driver's driving state is determined by one or more of these methods and used for subsequently issuing the corresponding reminder information. In some embodiments, the automatic driving system may designate one or more of the eye, mouth, and head postures to correspond to the driver's driving state. In some embodiments, when the automatic driving system has determined one or more of the eye, mouth, and head postures, the driver's driving state can be determined.
The driving state can be divided into multiple driving state levels. Taking eye-based detection of the driving state as an example, three levels (0/1/2) can be set, with a lower level indicating greater alertness. For example, if either eye is closed at a given moment, the count of consecutive closed-eye frames is incremented. If the number of consecutive closed-eye frames is greater than the maximum consecutive closed-eye count, the driver's driving state level is 2; if it is between the maximum and minimum consecutive closed-eye counts, the level is 1; otherwise, the level is 0.
As another example, when it is detected that the user's head has drooped for longer than a first duration, the driver's driving state level is 2; when the detected head-droop time is between the first duration and a second duration, where the first duration is greater than the second duration, the level is 1; otherwise, the level is 0.
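The two grading rules above (consecutive closed-eye frames and head-droop duration) can be sketched as small threshold functions. The exact handling of values that fall on a threshold boundary is an assumption, since the text only says "between" the two limits.

```python
def eye_closure_level(consecutive_closed, min_count, max_count):
    """Driving state level (0/1/2) from the count of consecutive
    closed-eye frames; lower level means a more alert driver."""
    if consecutive_closed > max_count:
        return 2
    if min_count <= consecutive_closed <= max_count:
        return 1
    return 0

def head_droop_level(droop_time, first_duration, second_duration):
    """Driving state level (0/1/2) from head-droop time; per the text,
    the first duration is greater than the second."""
    if droop_time > first_duration:
        return 2
    if second_duration <= droop_time <= first_duration:
        return 1
    return 0
```

Both graders share the same three-band structure, so either facial cue (or both) can feed the same downstream reminder logic.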
The open or closed state of the eyes can be judged by extracting feature information from the eye region and applying an SVM classifier. The feature information may include the fusion of the gray-scale rotation-invariant uniform-pattern LBP feature, HU moment features, and histogram features, which together describe the eye region. In some embodiments, the SVM classifier first extracts the fused feature information of the images and is then trained on a data sample set based on the support vector machine algorithm, yielding a classifier capable of judging the open or closed state of the eyes.
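The building block of the rotation-invariant LBP feature mentioned above can be sketched in a few lines: compare each pixel's eight neighbors against the center to form an 8-bit code, then take the minimum over all circular bit rotations so that rotated versions of the same texture map to one code. This is only the basic form; the uniform-pattern refinement and the fusion with HU moments and histograms described in the text are not shown.

```python
def lbp_code(patch3x3):
    """Basic 8-neighbor LBP code of the center pixel of a 3x3 patch
    (given as a 2D list of gray values)."""
    c = patch3x3[1][1]
    # Neighbors taken in circular order around the center.
    nbrs = [patch3x3[0][0], patch3x3[0][1], patch3x3[0][2],
            patch3x3[1][2], patch3x3[2][2], patch3x3[2][1],
            patch3x3[2][0], patch3x3[1][0]]
    code = 0
    for i, v in enumerate(nbrs):
        if v >= c:
            code |= 1 << i
    return code

def rotation_invariant(code, bits=8):
    """Map an LBP code to its rotation-invariant representative:
    the minimum value over all circular bit rotations."""
    mask = (1 << bits) - 1
    return min(((code >> i) | (code << (bits - i))) & mask
               for i in range(bits))
```

For example, a bright center surrounded by darker neighbors yields code 0, and codes 0b00000010 and 0b00000001 collapse to the same rotation-invariant value.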
After step S103 is performed, according to an embodiment of this application, step S104 may be performed as follows:
S104: when a predetermined condition is met, issue corresponding reminder information based on the driving state.
In this step, the predetermined condition is determined based on at least one of, or a combination of, the driver's driving state, the current system status signal of the automatic driving system, and the accuracy of the judgment of the driver's driving state (for example, the system confidence).
In some embodiments, the predetermined condition may be that the current system status signal of the automatic driving system indicates a system error or vehicle failure; that is, as long as the system status signal contains a signal related to a system error or vehicle failure, the wake-up strategy control module 50 shown in Fig. 1 determines the reminding method according to the driver's driving state and issues the corresponding reminder information through the human-computer interaction interface module 60.
In other embodiments, the predetermined condition may be that the driving state level is 1 or 2 (for example, the driver is very drowsy or somewhat drowsy) and the current system status signal of the automatic driving system indicates a system error or vehicle failure; that is, the condition is met only when the driver is not in a fit driving state and the system status signal indicates a system error or vehicle failure requiring driver intervention. When the predetermined condition is met, the wake-up strategy control module 50 determines the reminding method according to the driver's driving state and issues the corresponding reminder information through the human-computer interaction interface module 60.
In other embodiments, the system confidence, i.e. the accuracy of the system's judgment, may be introduced as a basis for determining whether the predetermined condition is met, or for deciding to issue reminder information of different levels. Determining the system confidence may include, for example, the following steps:
counting the number of consecutive detections of facial image information; and
determining the system confidence according to that count, the system confidence including two or more system confidence levels.
In one embodiment, the automatic driving system may use the number of consecutive face detections to grade the confidence of the system's result, characterized by three levels (0/1/2), a higher level indicating higher confidence. If the number of consecutive face detections is greater than the maximum threshold for consecutive face detections, the confidence is 2; if it is between the maximum and minimum thresholds, the confidence is 1; otherwise, the confidence is 0.
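The confidence grading just described has the same three-band shape as the driving state levels and can be sketched directly; boundary handling at the exact thresholds is again an assumption.

```python
def detection_confidence(consecutive_faces, min_threshold, max_threshold):
    """System confidence (0/1/2) from the number of consecutive frames
    in which a face was detected; a higher level means the state
    judgment is more trustworthy."""
    if consecutive_faces > max_threshold:
        return 2
    if min_threshold <= consecutive_faces <= max_threshold:
        return 1
    return 0
```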
Taking the system confidence into consideration, the predetermined condition may be: the driving state level is 1 or 2 (for example, the driver is somewhat drowsy or very drowsy), the current system status signal of the automatic driving system indicates a system error or vehicle failure, and the aforementioned system confidence is 1 or 2 (i.e. the confidence is medium or high). That is, the predetermined condition is considered met only when the driver is not in a fit driving state, the system status signal indicates that driver intervention is required, and the system confidence is above a certain range. When the predetermined condition is met, the wake-up strategy control module 50 of the on-board control system determines the reminding method according to the driver's driving state and issues the corresponding reminder information through the human-computer interaction interface module 60.
The operation of issuing corresponding reminder information based on the driver's driving state may be, for example, setting reminding methods of different intensities for different driving state levels, such as high, medium, and low intensity. Each intensity may be realized by one or more of sound volume, flashing lights, steering wheel vibration, seat vibration, and the like. The difference between the high-, medium-, and low-intensity reminding methods lies in how strong the chosen means is. For example, with a sound reminder, the high-intensity method may use a high-decibel volume, a high light-flashing frequency or a red light flashing at high frequency, and high-frequency steering wheel or seat vibration; the medium-intensity method may use a medium-decibel volume, a medium flashing frequency or a yellow light flashing at medium frequency, and medium-frequency steering wheel or seat vibration; the low-intensity method may use a low-decibel volume, a low flashing frequency or a flashing green light, and low-frequency steering wheel or seat vibration.
When issuing corresponding reminder information based on the driving state, the system confidence may also be used as one of the reference factors; that is, the corresponding reminding method may be determined, and reminder information of the corresponding level issued, based on both the driving state and the system confidence.
Table 1 shows an example of multiple reminding methods set for the driving state when the predetermined condition is met, as follows:
Driving state level | System confidence | Reminding method
-------------------|-------------------|-----------------
1 (not awake)      | 1 (high)          | High intensity
2 (less awake)     | 1 (high)          | Medium intensity
3 (awake)          | 2 (medium)        | Low intensity
2 (less awake)     | 3 (low)           | Low intensity
3 (awake)          | 3 (low)           | Low intensity

Table 1
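The mapping of Table 1 can be sketched as a small lookup from (driving state level, system confidence) pairs to reminder intensity. The table only lists five combinations; what to do for unlisted pairs is left open by the text, so the sketch below simply returns None for them.

```python
# (driving state level, system confidence) -> reminder intensity,
# transcribed from Table 1; pairs not listed there are left undefined.
REMINDER_TABLE = {
    (1, 1): "high",
    (2, 1): "medium",
    (3, 2): "low",
    (2, 3): "low",
    (3, 3): "low",
}

def reminder_intensity(state_level, confidence):
    """Return the reminder intensity for a (state, confidence) pair,
    or None when the pair is not covered by Table 1."""
    return REMINDER_TABLE.get((state_level, confidence))
```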
It is worth noting that the intensity levels of the above reminding methods and the specific reminding means at each level are merely illustrative, and the same reminder intensity level may also correspond to different reminding means; this application is not particularly limited in this respect. Those skilled in the art may set the reminding means themselves within the scope of the principles disclosed in this application.
In some embodiments, steps S101 to S104 may each include the following sub-steps.
In an optional embodiment, further, as shown in Fig. 4A, step S101, i.e. the step of obtaining facial image information of the detected object, may include the following sub-steps:
S1011: use a face classifier to detect whether facial image information exists in the collected image;
S1012: when it is determined that facial image information exists in the collected image, extract facial feature information from the facial image information;
In some embodiments, when it is determined that a face exists in the collected image, an algorithm can be used to obtain the rectangular face region. After the rectangular face region is determined, at least one piece of facial feature information can be obtained from it.
For example, in sub-step S1011, a face classifier may be used to detect whether a face image exists. The face classifier may be obtained by extracting MBLBP features from a training set containing face and non-face samples and training with a cascaded AdaBoost algorithm.
In some embodiments, during face detection the AdaBoost algorithm is used to select the rectangular features (weak classifiers) that best represent a face, the weak classifiers are combined into a strong classifier by weighted voting, and several trained strong classifiers are then connected in series to form a cascaded classifier.
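The weighted-vote construction of a strong classifier from weak classifiers, and the serial cascade of strong classifiers, can be sketched as follows. The weak classifiers here are placeholder callables rather than trained MBLBP feature tests, and the acceptance threshold is an illustrative convention.

```python
def strong_classify(weak_classifiers, sample, threshold=0.5):
    """AdaBoost-style strong classifier: a weighted vote of weak
    classifiers. Each entry is (alpha, h), where alpha is the weak
    classifier's weight and h(sample) returns 0 or 1."""
    total_alpha = sum(alpha for alpha, _ in weak_classifiers)
    score = sum(alpha * h(sample) for alpha, h in weak_classifiers)
    return score >= threshold * total_alpha

def cascade_classify(stages, sample):
    """Cascaded classifier: the sample must pass every strong-classifier
    stage in series; any stage may reject early, which is how cascade
    detectors discard non-face windows cheaply."""
    return all(strong_classify(stage, sample) for stage in stages)
```

The early-reject behavior of the cascade is the key design choice: most image windows contain no face and are dismissed by the first, cheapest stages, so the expensive later stages run only on promising candidates.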
The aforementioned MBLBP feature refers to the Multiscale Block LBP feature. Compared with the LBP feature, the MBLBP feature used in the embodiments of this application is more robust and characterizes the image more completely.
In sub-step S1012, when it is determined that face image information is present in the captured image, the facial feature information of the face image information may further be determined.
After sub-step S1011 determines that face image information is present, the face rectangle can be obtained. From the face rectangle, the facial feature points can then be obtained, and a localization method is used to determine the position information of those feature points, which serves as the facial feature information. That is, the facial feature information may include the positions of facial feature points automatically located within the face rectangle.
The facial feature information may be the positions of the parts that make up the face, such as the eyes, the corners of the mouth, the tip of the nose, and the face contour. These positions can be obtained by algorithmic localization. To determine the position information of the facial feature points, an initial shape is given for the face rectangle, image features are extracted at the key feature points, and the initial shape is regressed through repeated iterations to a position close to, or even equal to, the true shape.
In an optional embodiment, the facial feature information may include eye information of the detected object; the eye feature information may include, for example, the ratio of eye height to eye width.
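The eye height-to-width ratio can be computed from the located eye landmarks. A common choice (an assumption here; the text does not fix the landmark count) is six points per eye, with the vertical opening averaged over two lid point pairs:

```python
def eye_aspect_ratio(eye_points):
    # eye_points: six (x, y) landmarks [p1..p6]; p1/p4 are the horizontal
    # eye corners, p2/p3 the upper lid, p6/p5 the lower lid (a hypothetical
    # but common ordering). A small ratio indicates a closed eye.
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    p1, p2, p3, p4, p5, p6 = eye_points
    height = (dist(p2, p6) + dist(p3, p5)) / 2.0  # mean vertical opening
    width = dist(p1, p4)                          # horizontal extent
    return height / width
```

Thresholding this ratio per frame gives the open/closed decision that the later steps accumulate over consecutive frames.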
Therefore, further, as shown in FIG. 4B and FIG. 4C, step S102 may include the following sub-steps:
S1021: determining facial feature points according to the face image information;
S1022: determining the position information of the facial feature points and using the position information as the facial feature information.
Sub-step S1021, the step of determining facial feature points according to the face image information, may include:
S1021a: obtaining the face rectangle according to the face image information;
S1021b: extracting the initial shape of at least one facial feature point from the face rectangle.
Sub-step S1022, the step of determining the position information of the facial feature points and using it as the facial feature information, may include:
S1022a: correcting the initial shape with the image features of the facial feature points based on a supervised descent algorithm,
where the image features include histogram of oriented gradients features.
In some embodiments, the facial feature points, such as the eyes, the corners of the mouth, the tip of the nose, and the face contour, may first be located within the aforementioned face rectangle; the position information of these feature points is then used as the facial feature information for the subsequent determination of the driving state. For the face rectangle, an initial shape is given, image features are extracted at the key feature points, and the initial shape is regressed through repeated iterations to a position close to, or even equal to, the true shape.
Further, the position information of the facial feature points may be solved with the Supervised Descent Method (SDM), using the Histogram of Oriented Gradients (HOG) as the image feature. The supervised descent method minimizes a non-linear least squares objective function: by learning a sequence of descent directions and their step scales, it makes the objective converge to its minimum very quickly, avoiding the difficulty of computing the Jacobian and Hessian matrices. The HOG feature is a descriptor built by computing and accumulating histograms of gradient orientations over local image regions; it is essentially gradient statistics of the image, and it remains largely invariant to both geometric and photometric deformations.
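The learned descent of SDM can be sketched as a short cascade of linear updates: at stage k the shape x is moved by R_k·phi(x) + b_k, where phi extracts image features (HOG in the text) around the current landmarks, and R_k, b_k are regressed from training data. The phi, R and b below are toy stand-ins, not learned values.

```python
def sdm_refine(x0, stages, phi):
    # x0: initial shape as a flat list of coordinates.
    # stages: list of (R, b) learned per stage; R is a matrix (list of
    # rows), b a vector. phi maps the current shape to a feature vector.
    x = list(x0)
    for R, b in stages:
        f = phi(x)
        delta = [sum(r_ij * f_j for r_ij, f_j in zip(row, f)) + b_i
                 for row, b_i in zip(R, b)]
        x = [x_i + d_i for x_i, d_i in zip(x, delta)]
    return x
```

Because each (R, b) pair is fitted by ordinary least squares on training pairs, no Jacobian or Hessian of the feature map is needed at run time, which is the point made above.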
As shown in FIG. 4D, the aforementioned step S103, the step of judging the driving state of the detected object according to the facial feature information, may include the following sub-steps:
S1031: extracting eye feature information according to the eye information and determining the open/closed state of the eyes;
S1032: determining the driving state of the detected object from the eye open/closed states of multiple consecutive images, the driving state comprising two or more driving state levels.
In sub-step S1031, feature information can be extracted from the eye region, and the open/closed state of the eyes is then obtained with an SVM classifier. As mentioned above, the eye region can be described by the fused information of the rotation-invariant uniform LBP feature on grayscale, the Hu moment feature, and the histogram feature.
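A minimal sketch of the fusion step: the three descriptors of the eye region (rotation-invariant uniform LBP histogram, Hu moments, grayscale histogram) are each normalized and concatenated into one vector for the SVM. The per-descriptor L1 normalization is an assumption; the text does not specify the fusion rule.

```python
def fuse_eye_features(lbp_hist, hu_moments, gray_hist):
    # Each descriptor is normalized separately so that no single descriptor
    # dominates the SVM input by scale, then the three are concatenated.
    def l1_normalize(v):
        s = sum(abs(x) for x in v)
        return [x / s for x in v] if s else list(v)
    return l1_normalize(lbp_hist) + l1_normalize(hu_moments) + l1_normalize(gray_hist)
```

The fused vector is what the SVM sees at training and at inference; the descriptor ordering just has to stay fixed between the two.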
In sub-step S1032, the driver's state is judged from the eye open/closed states over consecutive frames. For example, three state levels (0/1/2) may be defined, where a lower level means a more alert driver. Specifically, if either eye is closed at a given moment, the count of consecutive closed-eye frames is incremented. If the count of consecutive closed-eye frames exceeds the maximum consecutive-closure threshold, the driver's state level is 2; if it lies between the minimum and maximum thresholds, the level is 1; otherwise, the level is 0.
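The frame-counting rule above can be written directly; the threshold values are tunable parameters, not values fixed by the text.

```python
def update_closed_run(run, left_closed, right_closed):
    # The closed-eye run increments when either eye is closed in the
    # current frame and resets to zero otherwise.
    return run + 1 if (left_closed or right_closed) else 0

def driver_state_level(closed_run, min_consec, max_consec):
    # Maps the current run of closed-eye frames to a state level:
    # 0 = alert, 1 = drowsy, 2 = severely drowsy; lower is more awake.
    if closed_run > max_consec:
        return 2
    if closed_run >= min_consec:
        return 1
    return 0
```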
In an embodiment, the facial feature information may also include mouth information of the detected object; the mouth information includes at least one of the ratio of mouth height to mouth width and the area of the mouth. In other embodiments, the facial feature information may also include head posture information of the detected object; the head posture information may include, for example, the angle between the current head axis direction and a preset head axis direction. Although the foregoing embodiments are described with eye feature information as the example, it is clear to those skilled in the art that the facial feature information is not limited to the aforementioned eye feature information.
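The head-posture measure mentioned above, the angle between the current head axis and the preset axis, follows from the two direction vectors; how those vectors are estimated from the image is outside this sketch.

```python
import math

def head_axis_deviation(current_axis, reference_axis):
    # Returns the angle in degrees between two 3-D direction vectors.
    dot = sum(a * b for a, b in zip(current_axis, reference_axis))
    na = math.sqrt(sum(a * a for a in current_axis))
    nb = math.sqrt(sum(b * b for b in reference_axis))
    cos_t = max(-1.0, min(1.0, dot / (na * nb)))  # clamp rounding error
    return math.degrees(math.acos(cos_t))
```

A large deviation (for example, a drooping or turned head) can then feed into the same state-level logic as the eye features.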
From the above, the embodiments of the present application propose a driving reminder method that, compared with the prior art, has at least the following advantages:
The driving reminder method proposed in the embodiments of the present application can solve the problem of the high false-alarm rate of existing driver state detection systems and ensure that the driver is able to safely take over the vehicle within the prescribed time.
For automated driving systems at level L3 and above, for example, the driver does not need to stay awake at all times or continuously watch the road ahead, and may even sleep, but must be ready to be woken if the system fails. When the automated driving system fails, it can issue reminders of different intensities according to the detected driver state, so that the driver can complete the handover of the driving task in a short time.
The driving reminder method proposed in this application can be applied to automated driving systems at level L3 and above; it can solve the problem of the high false-alarm rate of existing driver state detection systems and ensure that the driver is able to safely take over the vehicle within the prescribed time.
In addition, the driving reminder method proposed in the optional embodiments of the present application has at least the following further advantages:
First, the driving reminder method proposed in some embodiments of this application uses the MBLBP feature as the feature descriptor of the face detection process; this feature characterizes image information more completely, is more robust than the LBP feature, and is more efficient than the Haar-like feature.
Second, the embodiments of this application use the supervised descent method to minimize a non-linear least squares objective function; the method is fast and accurate, and overcomes the shortcomings of many second-order optimization schemes, such as non-differentiability and the heavy computation of the Hessian matrix.
The embodiments of the present application also propose a driving state detection method for detecting the state of the driver of an automated vehicle, comprising the aforementioned steps S101 to S103.
Corresponding to the aforementioned driving reminder method, the embodiments of the present application also propose a driving reminder device, comprising:
a memory in which a computer-readable program is stored; and
a processor, connected to the memory, for executing the computer-readable program to perform the following operations:
acquiring face image information of the detected object;
acquiring at least one item of facial feature information of the detected object from the face image information;
acquiring the driving state of the detected object based on the facial feature information; and
issuing corresponding reminder information based on the driving state when a predetermined condition is met.
Corresponding to the aforementioned driving state detection method, the embodiments of the present application also propose a driving state detection device, comprising:
a memory in which a computer-readable program is stored; and
a processor, connected to the memory, for executing the computer-readable program to perform the following operations:
acquiring face image information of the detected object;
acquiring at least one item of facial feature information of the detected object from the face image information; and
acquiring the driving state of the detected object based on the facial feature information.
The embodiments of the present application also propose an automated driving system, comprising:
a vehicle sensor module for detecting whether the detected object is in the driving position;
a vehicle-mounted camera module for acquiring an image of the detected object;
a memory in which a computer-readable program is stored; and
a processor, connected to the vehicle sensor module, the vehicle-mounted camera module, and the memory, which acquires the sensor signals and the image, and executes the computer-readable program to perform the following operations:
acquiring face image information of the detected object from the image;
acquiring at least one item of facial feature information of the detected object from the face image information;
acquiring the driving state of the detected object based on the facial feature information; and
issuing corresponding reminder information based on the driving state when a predetermined condition is met.
The embodiments of the present application also propose an automated driving system for detecting the driving state of the detected object, comprising:
a vehicle sensor module for detecting whether the detected object is in the driving position;
a vehicle-mounted camera module for acquiring an image of the detected object;
a memory in which a computer-readable program is stored; and
a processor, connected to the vehicle sensor module, the vehicle-mounted camera module, and the memory, which acquires the sensor signals and the image, and executes the computer-readable program to perform the following operations:
acquiring face image information of the detected object from the image;
acquiring at least one item of facial feature information of the detected object from the face image information; and
acquiring the driving state of the detected object based on the facial feature information.
The component embodiments of the present application may be implemented in hardware, as software modules running on one or more processors, or as a combination of the two. Those skilled in the art should understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the computing device according to the embodiments of the present application. The present application may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for executing part or all of the methods described herein. Such a program implementing the present application may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
For example, FIG. 5 shows a computing device that can implement the method according to the present application. The computing device conventionally includes a processor 1010 and a computer program product or computer-readable medium in the form of a memory 1020. The memory 1020 may be an electronic memory such as flash memory, EEPROM (electrically erasable programmable read-only memory), EPROM, a hard disk, or ROM. The memory 1020 has a storage space 1030 for program code 1031 for executing any of the method steps of the above methods. For example, the storage space 1030 for program code may include individual program codes 1031 for implementing the various steps of the above methods. These program codes can be read from, or written into, one or more computer program products. These computer program products comprise program code carriers such as hard disks, compact discs (CDs), memory cards, or floppy disks. Such a computer program product is usually a portable or fixed storage unit as described with reference to FIG. 6. The storage unit may have storage segments, storage spaces, and so on, arranged similarly to the memory 1020 in the computing device of FIG. 5. The program code may, for example, be compressed in an appropriate form. Typically, the storage unit includes computer-readable code 1031', that is, code readable by a processor such as the processor 1010, which, when run by a computing device, causes the computing device to execute the steps of the methods described above.
The embodiments of the present application provide a computing device, comprising: one or more processors; and one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the computing device to execute the method described in one or more of the embodiments of the present application.
The embodiments in this specification are described in a progressive manner. Each embodiment focuses on its differences from the other embodiments, and for the parts that are the same or similar, the embodiments may be referred to one another.
Although preferred embodiments of the present application have been described, those skilled in the art can make additional changes and modifications to these embodiments once they grasp the basic inventive concept. Therefore, the appended claims are intended to be interpreted as covering the preferred embodiments and all changes and modifications falling within the scope of the embodiments of the present application.
Finally, it should also be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another and do not necessarily require or imply any actual relationship or order between those entities or operations. Moreover, the terms "include", "comprise", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or computing device comprising a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or computing device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or computing device comprising that element.
The driving reminder method and driving reminder device provided by the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method of the present application and its core idea. Meanwhile, those of ordinary skill in the art may, following the ideas of the present application, make changes to the specific implementations and the scope of application. In summary, the contents of this specification should not be construed as limiting the present application.
Industrial Applicability
The driving reminder method proposed in the embodiments of the present application can solve the problem of the high false-alarm rate of existing driver state detection systems and ensure that the driver is able to safely take over the vehicle within the prescribed time.
For automated driving systems at level L3 and above, for example, the driver does not need to stay awake at all times or continuously watch the road ahead, and may even sleep, but must be ready to be woken if the system fails. When the automated driving system fails, it can issue reminders of different intensities according to the detected driver state, so that the driver can complete the handover of the driving task in a short time.
The driving reminder method proposed in this application can be applied to automated driving systems at level L3 and above; it can solve the problem of the high false-alarm rate of existing driver state detection systems and ensure that the driver is able to safely take over the vehicle within the prescribed time.
In addition, the driving reminder method proposed in the optional embodiments of the present application has at least the following further advantages:
First, the driving reminder method proposed in some embodiments of this application uses the MBLBP feature as the feature descriptor of the face detection process; this feature characterizes image information more completely, is more robust than the LBP feature, and is more efficient than the Haar-like feature.
Second, the embodiments of this application use the supervised descent method to minimize a non-linear least squares objective function; the method is fast and accurate, and overcomes the shortcomings of many second-order optimization schemes, such as non-differentiability and the heavy computation of the Hessian matrix.

Claims (19)

  1. A driving reminder method, characterized by comprising:
    acquiring face image information of a detected object;
    acquiring at least one item of facial feature information of the detected object from the face image information;
    acquiring a driving state of the detected object based on the facial feature information; and
    issuing corresponding reminder information based on the driving state when a predetermined condition is met.
  2. The driving reminder method according to claim 1, wherein the step of acquiring the face image information of the detected object comprises:
    using a face classifier to detect whether face image information is present in a captured image; and
    when it is determined that face image information is present in the captured image, extracting facial feature information from the face image information.
  3. The driving reminder method according to claim 2, wherein the face classifier is obtained by extracting MBLBP features from a training set containing face and non-face samples and training with a cascaded AdaBoost algorithm.
  4. The driving reminder method according to claim 1, wherein the step of acquiring the at least one item of facial feature information of the detected object from the face image information comprises:
    determining facial feature points according to the face image information; and
    determining position information of the facial feature points and using the position information as the facial feature information.
  5. The driving reminder method according to claim 4, wherein the facial feature points comprise at least one of: eyes, mouth corners, nose tip, and face contour.
  6. The driving reminder method according to claim 4, wherein the step of determining facial feature points according to the face image information comprises:
    obtaining a face rectangle according to the face image information; and
    extracting an initial shape of at least one facial feature point from the face rectangle;
    and wherein the step of determining the position information of the facial feature points and using the position information as the facial feature information comprises:
    correcting the initial shape with image features of the facial feature points.
  7. The driving reminder method according to claim 4, wherein, in the step of obtaining the at least one item of facial feature information from the face image information according to the facial feature points, the facial feature information comprises at least one of: eye information, mouth information, and head posture information of the detected object.
  8. The driving reminder method according to claim 7, wherein the facial feature information comprises eye information of the detected object, and
    the step of judging the driving state of the detected object according to the facial feature information comprises:
    extracting eye feature information according to the eye information and determining an open/closed state of the eyes; and
    determining the driving state of the detected object from the eye open/closed states of multiple consecutive images, the driving state comprising two or more driving state levels.
  9. The driving reminder method according to claim 8, wherein the eye feature information comprises a ratio of eye height to eye width.
  10. The driving reminder method according to claim 7, wherein the facial feature information comprises mouth information of the detected object, and
    the mouth information comprises at least one of a ratio of mouth height to mouth width of the detected object and an area of the mouth.
  11. The driving reminder method according to claim 7, wherein the facial feature information comprises head posture information of the detected object, and
    the head posture information comprises an angle between a current head axis direction and a preset head axis direction.
  12. The driving reminder method according to claim 1, wherein the predetermined condition comprises:
    receiving a system failure signal; or
    receiving a vehicle failure signal.
  13. The driving reminder method according to claim 12, wherein the step of issuing the corresponding reminder information based on the driving state comprises:
    sending the corresponding reminder information according to the driving state, a system confidence level, and the system state signal.
  14. The driving reminder method according to claim 13, further comprising:
    determining the system confidence level, which comprises:
    counting the number of times face image information is consecutively detected; and
    determining the system confidence level according to that count, the system confidence level comprising two or more system confidence levels.
  15. A driving state detection method, characterized by comprising:
    acquiring face image information of a detected object;
    acquiring at least one item of facial feature information of the detected object from the face image information; and
    judging a driving state of the detected object according to the facial feature information.
  16. A computing device for executing driving reminders, the computing device comprising:
    a memory in which a computer-readable program is stored; and a processor, connected to the memory, for executing the computer-readable program to perform the following operations:
    acquiring face image information of a detected object;
    acquiring at least one item of facial feature information of the detected object from the face image information;
    acquiring a driving state of the detected object based on the facial feature information; and
    issuing corresponding reminder information based on the driving state when a predetermined condition is met.
  17. The computing device according to claim 16, further comprising:
    a vehicle sensor module for detecting whether the detected object is in the driving position; and
    a vehicle-mounted camera module for acquiring an image of the detected object,
    wherein, in the operation of acquiring the face image information of the detected object, the face image information is obtained from the image acquired by the vehicle-mounted camera module.
  18. A computing device for performing driving state detection, the computing device comprising:
    a memory storing a computer-readable program; and
    a processor connected to the memory and configured to execute the computer-readable program to perform the following operations:
    acquiring facial image information of a detected object;
    acquiring at least one piece of facial feature information of the detected object from the facial image information; and
    acquiring a driving state of the detected object based on the facial feature information.
  19. The computing device according to claim 18, further comprising:
    a vehicle sensor module configured to detect whether the detected object is in the driving position; and
    a vehicle-mounted camera module configured to capture images of the detected object;
    wherein, in the operation of acquiring the facial image information of the detected object, the facial image information is obtained from the images captured by the vehicle-mounted camera module.
PCT/CN2019/089639 2019-05-31 2019-05-31 Driving prompt method, driving state detection method and computing device WO2020237664A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980000877.1A CN110582437A (en) 2019-05-31 2019-05-31 driving reminding method, driving state detection method and computing device
PCT/CN2019/089639 WO2020237664A1 (en) 2019-05-31 2019-05-31 Driving prompt method, driving state detection method and computing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/089639 WO2020237664A1 (en) 2019-05-31 2019-05-31 Driving prompt method, driving state detection method and computing device

Publications (1)

Publication Number Publication Date
WO2020237664A1 2020-12-03

Family

ID=68815615

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/089639 WO2020237664A1 (en) 2019-05-31 2019-05-31 Driving prompt method, driving state detection method and computing device

Country Status (2)

Country Link
CN (1) CN110582437A (en)
WO (1) WO2020237664A1 (en)


Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110979340A (en) * 2019-12-20 2020-04-10 北京海纳川汽车部件股份有限公司 Vehicle and control method and device thereof
CN111645694B (en) * 2020-04-15 2021-08-06 南京航空航天大学 Driver driving state monitoring system and method based on attitude estimation
CN112053224B (en) * 2020-09-02 2023-08-18 中国银行股份有限公司 Service processing monitoring realization method, device and system
CN112693469A (en) * 2021-01-05 2021-04-23 中国汽车技术研究中心有限公司 Method and device for testing vehicle taking over by driver, electronic equipment and medium
CN112977476A (en) * 2021-02-20 2021-06-18 纳瓦电子(上海)有限公司 Radar detection-based vehicle driving method and automatic driving vehicle
CN113076801A (en) * 2021-03-04 2021-07-06 广州铁路职业技术学院(广州铁路机械学校) Train on-road state intelligent linkage detection system and method
CN113191214A (en) * 2021-04-12 2021-07-30 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Driver misoperation risk early warning method and system
CN113715766B (en) * 2021-08-17 2022-05-24 厦门星图安达科技有限公司 Method for detecting people in vehicle

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103714660A (en) * 2013-12-26 2014-04-09 苏州清研微视电子科技有限公司 System for achieving fatigue driving judgment on basis of image processing and fusion between heart rate characteristic and expression characteristic
CN104688251A (en) * 2015-03-02 2015-06-10 西安邦威电子科技有限公司 Method for detecting fatigue driving and driving in abnormal posture under multiple postures
US9460601B2 (en) * 2009-09-20 2016-10-04 Tibet MIMAR Driver distraction and drowsiness warning and sleepiness reduction for accident avoidance
CN106485191A (en) * 2015-09-02 2017-03-08 腾讯科技(深圳)有限公司 A kind of method for detecting fatigue state of driver and system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109803583A (en) * 2017-08-10 2019-05-24 北京市商汤科技开发有限公司 Driver monitoring method, apparatus and electronic equipment
CN107657236A (en) * 2017-09-29 2018-02-02 厦门知晓物联技术服务有限公司 Vehicle security drive method for early warning and vehicle-mounted early warning system
CN109435959B (en) * 2018-10-24 2020-10-09 斑马网络技术有限公司 Fatigue driving processing method, vehicle, storage medium, and electronic device


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113591533A (en) * 2021-04-27 2021-11-02 浙江工业大学之江学院 Anti-fatigue driving method, device, equipment and storage medium based on road monitoring
CN115284976A (en) * 2022-08-10 2022-11-04 东风柳州汽车有限公司 Automatic adjusting method, device and equipment for vehicle seat and storage medium
CN115284976B (en) * 2022-08-10 2023-09-12 东风柳州汽车有限公司 Automatic adjustment method, device and equipment for vehicle seat and storage medium
CN115796494A (en) * 2022-11-16 2023-03-14 北京百度网讯科技有限公司 Work order processing method and work order information display method for unmanned vehicle
CN115796494B (en) * 2022-11-16 2024-03-29 北京百度网讯科技有限公司 Work order processing method and work order information display method for unmanned vehicle
CN116901975A (en) * 2023-09-12 2023-10-20 深圳市九洲卓能电气有限公司 Vehicle-mounted AI security monitoring system and method thereof
CN116901975B (en) * 2023-09-12 2023-11-21 深圳市九洲卓能电气有限公司 Vehicle-mounted AI security monitoring system and method thereof
CN117622177A (en) * 2024-01-23 2024-03-01 青岛创新奇智科技集团股份有限公司 Vehicle data processing method and device based on industrial large model

Also Published As

Publication number Publication date
CN110582437A (en) 2019-12-17

Similar Documents

Publication Publication Date Title
WO2020237664A1 (en) Driving prompt method, driving state detection method and computing device
US11783601B2 (en) Driver fatigue detection method and system based on combining a pseudo-3D convolutional neural network and an attention mechanism
CN111741884B (en) Traffic distress and road rage detection method
CN102263937B (en) Driver's driving behavior monitoring device and monitoring method based on video detection
CN110765807B (en) Driving behavior analysis and processing method, device, equipment and storage medium
CN104021370B (en) The driver status monitoring method and system of a kind of view-based access control model information fusion
JP4702100B2 (en) Dozing determination device and dozing operation warning device
Tang et al. Driver lane change intention recognition of intelligent vehicle based on long short-term memory network
JP5666383B2 (en) Sleepiness estimation apparatus and sleepiness estimation method
CN103824420A (en) Fatigue driving identification system based on heart rate variability non-contact measuring
US20080186154A1 (en) Method and Device for Driver Support
CN105956548A (en) Driver fatigue state detection method and device
JP4182131B2 (en) Arousal level determination device and arousal level determination method
CN101950355A (en) Method for detecting fatigue state of driver based on digital video
CN101599207A (en) A kind of fatigue driving detection device and automobile
Chen et al. Driver behavior monitoring and warning with dangerous driving detection based on the internet of vehicles
Yan et al. Recognizing driver inattention by convolutional neural networks
CN109664894A (en) Fatigue driving safety pre-warning system based on multi-source heterogeneous data perception
CN110281944A (en) Driver status based on multi-information fusion monitors system
WO2022110737A1 (en) Vehicle anticollision early-warning method and apparatus, vehicle-mounted terminal device, and storage medium
CN114771545A (en) Intelligent safe driving system
CN115937830A (en) Special vehicle-oriented driver fatigue detection method
CN103569084B (en) Drive arrangement for detecting and method thereof
Ohn-Bar et al. Vision on wheels: Looking at driver, vehicle, and surround for on-road maneuver analysis
CN207579730U (en) A kind of intelligence control system of vehicles steering indicating light

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19930279

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19930279

Country of ref document: EP

Kind code of ref document: A1