WO2019028798A1 - Driving state monitoring method, apparatus and electronic device - Google Patents

Driving state monitoring method, apparatus and electronic device

Info

Publication number
WO2019028798A1
WO2019028798A1 (PCT application PCT/CN2017/096957)
Authority
WO
WIPO (PCT)
Prior art keywords
driver
face
key point
information
image
Prior art date
Application number
PCT/CN2017/096957
Other languages
English (en)
French (fr)
Inventor
Wang Fei (王飞)
Original Assignee
北京市商汤科技开发有限公司 (Beijing SenseTime Technology Development Co., Ltd.)
Priority date
Filing date
Publication date
Priority to CN201780053499.4A priority Critical patent/CN109803583A/zh
Priority to PCT/CN2017/096957 priority patent/WO2019028798A1/zh
Application filed by 北京市商汤科技开发有限公司 (Beijing SenseTime Technology Development Co., Ltd.)
Priority to KR1020207007113A priority patent/KR102391279B1/ko
Priority to CN201880003399.5A priority patent/CN109937152B/zh
Priority to JP2018568375A priority patent/JP6933668B2/ja
Priority to EP18845078.7A priority patent/EP3666577A4/en
Priority to PCT/CN2018/084526 priority patent/WO2019029195A1/zh
Priority to SG11202002549WA priority patent/SG11202002549WA/en
Priority to US16/177,198 priority patent/US10853675B2/en
Publication of WO2019028798A1 publication Critical patent/WO2019028798A1/zh
Priority to CN201910152525.XA priority patent/CN110399767A/zh
Priority to KR1020207027781A priority patent/KR20200124278A/ko
Priority to PCT/CN2019/129370 priority patent/WO2020173213A1/zh
Priority to JP2020551547A priority patent/JP2021517313A/ja
Priority to SG11202009720QA priority patent/SG11202009720QA/en
Priority to TW109106588A priority patent/TWI758689B/zh
Priority to US17/034,290 priority patent/US20210009150A1/en
Priority to US17/085,953 priority patent/US20210049386A1/en
Priority to US17/085,972 priority patent/US20210049387A1/en
Priority to US17/085,989 priority patent/US20210049388A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/087Interaction between the driver and the control system where the control system corrects or modifies a request from the driver
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/143Alarm means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2540/00Input parameters relating to occupants
    • B60W2540/225Direction of gaze
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2540/00Input parameters relating to occupants
    • B60W2540/229Attention level, e.g. attentive to driving, reading or sleeping

Definitions

  • the present application relates to computer vision technology, and more particularly to a driving state monitoring method, a medium, a driving state monitoring device, and an electronic device.
  • Since the driver's driving state has a serious impact on safe driving, the driver should be kept in a good driving state as far as possible.
  • The factors that affect the driver's driving state typically include fatigue and distraction caused by attending to other things, such as a mobile phone, while driving. Specifically, driving continuously for a long time, poor sleep quality, and insufficient sleep often lead to disorders of the driver's physiological and mental functions, which may cause the driver's driving skills to decline and ultimately affect the driving state; and when the driver is distracted by other things, such as a mobile phone, while driving, the driver may be unable to keep abreast of road conditions in time.
  • The embodiments of the present application provide a technical solution for driving state monitoring.
  • A driving state monitoring method comprises: detecting face key points in a driver image; determining state information of at least a partial region of the driver's face according to the face key points; determining, according to a plurality of pieces of state information of the at least partial region of the driver's face over a period of time, a parameter value of at least one indicator for characterizing the driving state; and determining a driving state monitoring result of the driver based on the parameter value.
  • A driving state monitoring apparatus comprises: a face key point detection module, configured to detect face key points in a driver image; a region state determination module, configured to determine state information of at least a partial region of the driver's face according to the face key points; an indicator parameter value determination module, configured to determine, according to a plurality of pieces of state information of the at least partial region of the driver's face over a period of time, a parameter value of at least one indicator for characterizing the driving state; and a driving state determination module, configured to determine a driving state monitoring result of the driver based on the parameter value.
  • An electronic device includes: a memory for storing a computer program; and a processor for executing the computer program stored in the memory. When the computer program is executed, the following instructions are executed: an instruction for detecting face key points in a driver image; an instruction for determining state information of at least a partial region of the driver's face according to the face key points; an instruction for determining, according to a plurality of pieces of state information of the at least partial region of the driver's face over a period of time, a parameter value of at least one indicator for characterizing the driving state; and an instruction for determining a driving state monitoring result of the driver based on the parameter value.
  • A computer storage medium has a computer program stored thereon; when the computer program is executed by a processor, the steps in the method embodiments of the present application are performed, for example: detecting face key points in a driver image; determining state information of at least a partial region of the driver's face according to the face key points; determining, according to a plurality of pieces of state information of the at least partial region of the driver's face over a period of time, a parameter value of at least one indicator for characterizing the driving state; and determining a driving state monitoring result of the driver based on the parameter value.
  • A computer program, when executed by a processor, performs the steps in the method embodiments of the present application, for example: detecting face key points in a driver image; determining state information of at least a partial region of the driver's face according to the face key points; determining, according to a plurality of pieces of state information of the at least partial region of the driver's face over a period of time, a parameter value of at least one indicator for characterizing the driving state; and determining a driving state monitoring result of the driver based on the parameter value.
  • The present application can detect the face key points of the driver image, so that regions such as the eyes and the mouth can be located based on the face key points to obtain the state information of the corresponding regions; and by analyzing the state information of at least a partial region of the driver's face over a period of time, the parameter value of at least one indicator that quantifies the driver's driving state can be obtained, so that the present application can reflect the driver's driving state in a timely and objective manner based on the parameter values of the quantified indicators.
  • FIG. 1 is a block diagram of an exemplary apparatus for implementing an embodiment of the present application
  • FIG. 3 is a schematic diagram of the key points on the eyelid lines after eyelid line positioning in the present application.
  • FIG. 4 is a schematic diagram of the mouth key points among the face key points in the present application.
  • FIG. 5 is a schematic diagram of the pupil edge after pupil positioning in the present application.
  • Figure 6 is a schematic structural view of an embodiment of the device of the present application.
  • FIG. 7 is a schematic diagram of an application scenario of the present application.
  • Embodiments of the present application can be applied to computer systems/servers that can operate with numerous other general purpose or special purpose computing system environments or configurations.
  • Examples of well-known computing systems, environments, and/or configurations suitable for use with the computer system/server include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, minicomputer systems, mainframe computer systems, and distributed cloud computing environments including any of the above, and the like.
  • the computer system/server can be described in the general context of computer system executable instructions (such as program modules, etc.) executed by the computer system.
  • program modules may include routines, programs, target programs, components, logic, and data structures, etc., which perform particular tasks or implement particular abstract data types.
  • the computer system/server can be implemented in a distributed cloud computing environment where tasks are performed by remote processing devices that are linked through a communication network.
  • program modules may be located on a local or remote computing system storage medium including storage devices.
  • FIG. 1 illustrates an exemplary device 200 suitable for implementing the present application, which may be a control system/electronic system configured in a car, a mobile terminal (e.g., a smart mobile phone), a personal computer (i.e., a PC, e.g., a desktop or notebook computer), a tablet, a server, or the like.
  • The device 200 includes one or more processors, a communication unit, and the like; the one or more processors may be, for example, one or more central processing units (CPUs) 201 and/or one or more graphics processing units (GPUs), etc.
  • the communication unit 212 may include, but is not limited to, a network card, which may include, but is not limited to, an IB (Infiniband) network card.
  • The processor can communicate with the read-only memory 202 and/or the random access memory 203 to execute executable instructions, connect to the communication portion 212 via the bus 204, and communicate with other target devices via the communication portion 212, thereby completing the corresponding steps in the present application.
  • The processor performs the steps of: detecting face key points in the driver image; determining state information of at least a partial region of the driver's face according to the face key points; determining, according to a plurality of pieces of state information of the at least partial region of the driver's face over a period of time, a parameter value of at least one indicator for characterizing the driving state; and determining a driving state monitoring result of the driver based on the parameter value.
  • In the RAM 203, various programs and data required for the operation of the device can be stored.
  • the CPU 201, the ROM 202, and the RAM 203 are connected to each other through a bus 204.
  • ROM 202 is an optional module.
  • The RAM 203 stores executable instructions, or executable instructions are written into the ROM 202 at runtime; the executable instructions cause the central processing unit 201 to perform the steps included in the driving state monitoring method described above.
  • An input/output (I/O) interface 205 is also coupled to bus 204.
  • the communication unit 212 may be integrated, or may be configured to have a plurality of sub-modules (for example, a plurality of IB network cards) and be respectively connected to the bus.
  • The following components are connected to the I/O interface 205: an input portion 206 including a keyboard, a mouse, and the like; an output portion 207 including, for example, a cathode ray tube (CRT), a liquid crystal display (LCD), and the like; a storage portion 208 including a hard disk and the like; and a communication portion 209 including a network interface card such as a LAN card or a modem. The communication portion 209 performs communication processing via a network such as the Internet.
  • A drive 210 is also connected to the I/O interface 205 as needed.
  • a removable medium 211 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory or the like is mounted on the drive 210 as needed so that a computer program read therefrom is installed in the storage portion 208 as needed.
  • FIG. 1 is only an optional implementation manner.
  • the number and type of the components in FIG. 1 may be selected, deleted, added, or replaced according to actual needs.
  • Separate or integrated arrangements can also be used; for example, the GPU and the CPU can be separately disposed, or the GPU can be integrated on the CPU, and the communication unit can be separately disposed or integrated on the CPU or the GPU, etc.
  • In particular, embodiments of the present application include a computer program product, which comprises a computer program tangibly embodied on a machine-readable medium; the computer program comprises program code for performing the steps shown in the flowchart, and the program code may comprise instructions corresponding to the steps provided by the present application, for example, an instruction for detecting face key points in a driver image; an instruction for determining state information of at least a partial region of the driver's face according to the face key points; an instruction for determining, according to a plurality of pieces of state information of the at least partial region of the driver's face over a period of time, a parameter value of at least one indicator for characterizing the driving state; and an instruction for determining a driving state monitoring result of the driver based on the parameter value.
  • the computer program can be downloaded and installed from the network via the communication portion 209, and/or installed from the removable medium 211.
  • When the computer program is executed by the central processing unit (CPU) 201, the above-described instructions described in the present application are executed.
  • The driving state monitoring technical solution provided by the present application can be implemented by an electronic device capable of running a computer program, such as a single-chip microcomputer, an FPGA (Field Programmable Gate Array), a microprocessor, a smart mobile phone, a notebook computer, a tablet computer (PAD), a desktop computer, or a server; the computer program (which may also be referred to as program code) may be stored in a computer-readable storage medium such as a flash memory, a cache, a hard disk, or an optical disk.
  • S300: detect face key points in the driver image.
  • In an optional example, step S300 in the present application may be performed by the processor invoking an instruction stored in the memory for acquiring face key points, or may be performed by the face key point detection module 800 executed by the processor.
  • The driver image in the present application is typically an image frame in a video captured by a camera (e.g., an infrared camera) aimed at the cab; that is, the face key point detection module 800 can detect face key points for each image frame in the video captured by the camera, in real time or offline.
  • The face key point detection module 800 can detect the face key points of each driver image in a variety of ways; for example, it can use a corresponding neural network to obtain the face key points of each driver image.
  • In an optional example, the face key point detection module 800 can provide each driver image captured by the camera to a face detection deep neural network for detecting the face position; the face detection deep neural network performs face position detection on each input driver image and outputs face position information (for example, face bounding box information) for each input driver image. The face key point detection module 800 then provides each driver image and the face position information output for it by the face detection deep neural network to a face key point deep neural network for detecting face key points; each driver image together with its corresponding face position information forms a set of input information of the face key point deep neural network. The face key point deep neural network can determine the region to be detected in the driver image according to the face position information and perform face key point detection on the image of that region, so that it outputs the face key points of each driver image, for example, the coordinate information of a plurality of key points in the driver image together with the number of each key point. The face key point detection module 800 thus obtains the face key points of each driver image from the output of the face key point deep neural network.
  • By detecting face key points with neural networks, the face key point detection module 800 can accurately and timely detect the face key points of each image frame (i.e., each driver image) in the video.
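  • For illustration only, the two-stage detection just described might be organized along the following lines; face_detection_net and face_keypoint_net are hypothetical stand-ins for the face detection deep neural network and the face key point deep neural network, not models defined by this application.

```python
def detect_face_keypoints(driver_image, face_detection_net, face_keypoint_net):
    """Sketch of the two-stage face key point detection described above."""
    # Stage 1: the face detection network outputs face position information,
    # e.g. a bounding box (x_min, y_min, x_max, y_max) for the driver image.
    face_box = face_detection_net(driver_image)
    if face_box is None:
        return []  # no face found in this frame

    # Stage 2: the key point network takes the driver image together with the
    # face position information and outputs numbered key point coordinates,
    # e.g. a list of (keypoint_number, x, y) tuples.
    return face_keypoint_net(driver_image, face_box)


# Per-frame usage over a video stream, in real time or offline:
# for frame in video_frames:
#     keypoints = detect_face_keypoints(frame, face_detection_net, face_keypoint_net)
```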
  • In addition, the quality of a driver image captured by an infrared camera tends to be better than that of a driver image captured by an ordinary camera, especially in dark environments such as at night, in cloudy weather, or in a tunnel, where the image captured by the infrared camera is usually significantly better; this helps improve the accuracy of the face key points detected by the face key point detection module 800, which in turn helps improve the accuracy of driving state monitoring.
  • S310: determine state information of at least a partial region of the driver's face according to the face key points.
  • In an optional example, step S310 in the present application may be performed by the processor invoking an instruction stored in the memory for determining state information of at least a partial region of the driver's face, or may be performed by the region state determination module 810 executed by the processor.
  • The state information of at least a partial region of the driver's face in the present application is set for the indicators used to characterize the driving state; that is, this state information is mainly used to form the parameter values that characterize the driving state.
  • The at least partial region of the driver's face may include at least one of: the driver's eye region, the driver's mouth region, and the driver's entire face region.
  • The state information of the at least partial region of the driver's face may include at least one of: eye open/closed state information, mouth opening/closing state information, face orientation information, and line-of-sight direction information. For example, in the present application the state information of the at least partial region of the driver's face includes at least two of the above four types of information, so that the state information of different regions of the face can be considered comprehensively to improve the accuracy of driving state detection.
  • The above eye open/closed state information can be used to perform closed-eye detection on the driver, such as detecting whether the driver's eyes are half closed ("half" indicates a state of incompletely closed eyes, such as squinting in a dozing state), whether the eyes are closed, the number of eye closures, the degree of eye closure, and so on.
  • The eye open/closed state information may optionally be information obtained by normalizing the height by which the eyes are open.
  • The above mouth opening/closing state information can be used to perform yawn detection on the driver, such as detecting whether the driver yawns, the number of yawns, and so on.
  • The mouth opening/closing state information may optionally be information obtained by normalizing the height by which the mouth is open.
  • The above face orientation information can be used to perform face orientation detection on the driver, such as detecting whether the driver's face is turned sideways or whether the driver turns his or her head back.
  • The face orientation information may optionally be the angle between the front of the driver's face and the front of the vehicle the driver is driving.
  • the above-described line-of-sight direction information can be used to perform driver's line of sight detection, such as detecting whether the driver is looking ahead, etc., and the line-of-sight direction information can be used to determine whether the driver's line of sight has deviated or the like.
  • the line-of-sight direction information may optionally be an angle between the driver's line of sight and the front of the vehicle on which the driver is driving.
  • In an optional example, the region state determination module 810 can directly perform calculations on the eye key points among the face key points detected by the face key point detection module 800, and obtain the eye open/closed state information according to the calculation result. In one optional example, the region state determination module 810 directly calculates the distance between key point 53 and key point 57 among the face key points as well as the distance between key point 52 and key point 55, and normalizes the distance between key point 53 and key point 57 by the distance between key point 52 and key point 55; the normalized value is the eye open/closed state information. In another optional example, the region state determination module 810 directly calculates the distance between key point 72 and key point 73 among the face key points as well as the distance between key point 52 and key point 55, and normalizes the former by the latter; the normalized value is the eye open/closed state information. The distances between key points in this application are calculated using the coordinate information of the key points.
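  • A minimal sketch of the direct key point calculation above, assuming keypoints is a dictionary mapping each key point number to its (x, y) coordinates in the driver image and using the numbering of the example:

```python
import math

def eye_open_state(keypoints, use_second_pair=False):
    """Normalized eye-opening value from face key points (see the example above)."""
    def dist(a, b):
        (x1, y1), (x2, y2) = keypoints[a], keypoints[b]
        return math.hypot(x1 - x2, y1 - y2)

    # Vertical eye opening: key points 53/57 in the first example,
    # key points 72/73 in the second example.
    opening = dist(72, 73) if use_second_pair else dist(53, 57)
    # Horizontal reference distance between key points 52 and 55,
    # used to normalize the opening.
    return opening / dist(52, 55)  # eye open/closed state information
```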
  • In an optional example, the region state determination module 810 may first locate the eyes in the driver image by using the eye key points among the face key points obtained by the face key point detection module 800 (e.g., the coordinate information of the eye key points in the driver image) to obtain an eye image, obtain the upper eyelid line and the lower eyelid line by using the eye image, and then obtain the eye open/closed state information by calculating the interval between the upper eyelid line and the lower eyelid line. An optional example is as follows:
  • The region state determination module 810 can locate the position of an eye in the driver image (e.g., the position of the right eye or the left eye) by using the eye key points among the face key points detected by the face key point detection module 800;
  • The region state determination module 810 cuts the eye image out of the driver image according to the located eye position, and performs resizing processing (if needed) on the cropped eye image to obtain an eye image of a predetermined size, for example, by enlarging/reducing the cropped eye image;
  • The region state determination module 810 provides the cropped and resized eye image to a first neural network for eyelid line positioning (which may also be referred to as an eyelid line positioning deep neural network); the first neural network performs eyelid line positioning on the input eye image, so that it outputs eyelid line information for each input eye image. The eyelid line information may include upper eyelid line key points, lower eyelid line key points, and the like; for example, the first neural network outputs the number of each upper eyelid line key point and its coordinate information in the eye image, the number of each lower eyelid line key point and its coordinate information in the eye image, the number of the inner eye corner key point and its coordinate information in the eye image, and the number of the outer eye corner key point and its coordinate information in the eye image. The number of upper eyelid line key points output by the first neural network is generally the same as the number of lower eyelid line key points. For example, in FIG. 3, key point 12, key point 13, key point 14, key point 15, key point 16, key point 17, key point 18, key point 19, key point 20, and key point 21 are the key points of the upper eyelid line; key point 0, key point 1, key point 2, key point 3, key point 4, key point 5, key point 6, key point 7, key point 8, and key point 9 are the key points of the lower eyelid line; key point 10 is the key point of the inner eye corner; key point 11 is the key point of the outer eye corner; and the inner eye corner key point and the outer eye corner key point can be regarded as key points shared by the upper eyelid line and the lower eyelid line;
  • The region state determination module 810 calculates the average distance between the upper eyelid line and the lower eyelid line according to the upper eyelid line key points and the lower eyelid line key points, and calculates the eye corner distance by using the inner eye corner key point and the outer eye corner key point output by the first neural network; the region state determination module 810 then normalizes the average distance by the calculated eye corner distance to obtain the interval between the upper eyelid line and the lower eyelid line.
  • The upper eyelid line in the present application may be represented by the trajectory, or a fitted line, of a certain number of key points at the upper eyelid of a single eye; for example, in FIG. 3, the trajectory formed by key point 11, key point 12, key point 13, key point 14, key point 15, key point 16, key point 17, key point 18, key point 19, key point 20, key point 21, and key point 10 may represent the upper eyelid line, or a line obtained by fitting these key points may represent the upper eyelid line. The lower eyelid line in the present application may likewise be represented by the trajectory, or a fitted line, of a certain number of key points at the lower eyelid; for example, in FIG. 3, the trajectory formed by key point 11, key point 0, key point 1, key point 2, key point 3, key point 4, key point 5, key point 6, key point 7, key point 8, key point 9, and key point 10 may represent the lower eyelid line, or a line obtained by fitting these key points may represent the lower eyelid line.
  • Key point 10 is the inner eye corner key point and key point 11 is the outer eye corner key point; key point 10 can be classified among the key points of the upper eyelid line or among the key points of the lower eyelid line, and key point 11 can likewise be classified among the key points of the upper eyelid line or placed among the key points of the lower eyelid line.
  • In an optional example, the average distance between the upper eyelid line and the lower eyelid line can be calculated as follows: calculate the distance between key point 0 and key point 12, the distance between key point 1 and key point 13, the distance between key point 2 and key point 14, and so on up to the distance between key point 9 and key point 21, and then divide the sum of the above 10 distances by 10; the quotient is the average distance between the upper eyelid line and the lower eyelid line. Normalizing the average distance by the calculated eye corner distance may be: calculate the eye corner distance between key point 10 and key point 11, and divide the average distance by the eye corner distance to perform the normalization; the quotient obtained by the division is the interval between the upper eyelid line and the lower eyelid line.
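  • As a sketch of the eyelid-line interval computation above (using the FIG. 3 numbering: key points 12-21 on the upper eyelid line, 0-9 on the lower eyelid line, 10 and 11 at the eye corners), assuming eyelid_points maps each key point number to its (x, y) coordinates in the eye image:

```python
import math

def eyelid_interval(eyelid_points):
    """Interval between the upper and lower eyelid lines, per the example above."""
    def dist(a, b):
        (x1, y1), (x2, y2) = eyelid_points[a], eyelid_points[b]
        return math.hypot(x1 - x2, y1 - y2)

    # Average of the ten distances between opposite key points:
    # (0, 12), (1, 13), ..., (9, 21).
    pair_distances = [dist(lower, lower + 12) for lower in range(10)]
    average_distance = sum(pair_distances) / len(pair_distances)

    # Normalize by the eye corner distance between key points 10 and 11.
    return average_distance / dist(10, 11)
```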
  • The interval between the upper eyelid line and the lower eyelid line in the present application may also be expressed in other manners; for example, the quotient of the distance between a pair of key points located in the middle of the eye and opposite in the upper and lower eyelid lines and the eye corner distance may be used as the interval between the upper eyelid line and the lower eyelid line. As another example, the interval may be represented by a non-normalized value: the average distance between the upper eyelid line and the lower eyelid line may be used directly as the interval, or the distance between a pair of key points located in the middle of the eyelid lines and opposite in the upper and lower positions may be used as the interval. This application does not limit the specific form of the interval between the upper eyelid line and the lower eyelid line.
  • By positioning the upper eyelid line and the lower eyelid line from the eye image, accurate upper and lower eyelid lines can be obtained, and the interval between them calculated on this basis can reflect the opening of the driver's eyes more objectively and accurately, which is conducive to improving the accuracy of the eye open/closed state information and ultimately helps improve the accuracy of determining the driving state.
  • In an optional example, the region state determination module 810 can directly perform calculations on the mouth key points among the face key points detected by the face key point detection module 800, and obtain the mouth opening/closing state information according to the calculation result. In one optional example, as shown in FIG. 4, the region state determination module 810 directly calculates the distance between key point 87 and key point 93 as well as the distance between key point 84 and key point 90 among the face key points, and normalizes the distance between key point 87 and key point 93 by the distance between key point 84 and key point 90; the normalized value is the mouth opening/closing state information. In another optional example, the region state determination module 810 directly calculates the distance between key point 88 and key point 92 as well as the distance between key point 84 and key point 90 among the face key points, and normalizes the distance between key point 88 and key point 92 by the distance between key point 84 and key point 90; the normalized value is the mouth opening/closing state information.
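  • A minimal sketch of the mouth calculation above, assuming keypoints maps each face key point number to its (x, y) coordinates and following the FIG. 4 numbering:

```python
import math

def mouth_open_state(keypoints, use_second_pair=False):
    """Normalized mouth-opening value from face key points (see the example above)."""
    def dist(a, b):
        (x1, y1), (x2, y2) = keypoints[a], keypoints[b]
        return math.hypot(x1 - x2, y1 - y2)

    # Vertical opening: key points 87/93 in the first example, 88/92 in the second.
    opening = dist(88, 92) if use_second_pair else dist(87, 93)
    # Mouth corner distance between key points 84 and 90, used for normalization.
    return opening / dist(84, 90)  # mouth opening/closing state information
```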
  • In an optional example, the region state determination module 810 may first locate the mouth in the driver image by using the mouth key points among the face key points detected by the face key point detection module 800 (e.g., the coordinate information of the mouth key points in the driver image), obtain a mouth image by cropping or the like, obtain the upper lip line and the lower lip line by using the mouth image, and then obtain the mouth opening/closing state information by calculating the interval between the upper lip line and the lower lip line. An optional example is as follows:
  • The region state determination module 810 can locate the position of the mouth in the driver image by using the mouth key points among the face key points detected by the face key point detection module 800;
  • The region state determination module 810 cuts the mouth image out of the driver image according to the located mouth position, and performs resizing processing on the cropped mouth image to obtain a mouth image of a predetermined size, for example, by enlarging/reducing the cropped mouth image; a possible realization of this cropping and resizing step is sketched below.
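  • The sketch below uses OpenCV; the margin and the 128×128 target size are illustrative assumptions rather than values fixed by the application, and the same helper could equally be applied to the eye image.

```python
import cv2
import numpy as np

def crop_region(image, points, margin=0.2, size=(128, 128)):
    """Cut a region (e.g. the mouth) out of the driver image and resize it.

    image: H x W x C array; points: iterable of (x, y) key points of the region.
    """
    xs = np.array([p[0] for p in points], dtype=np.float32)
    ys = np.array([p[1] for p in points], dtype=np.float32)
    w, h = xs.max() - xs.min(), ys.max() - ys.min()

    # Expand the tight bounding box by a margin, clipped to the image borders.
    x0 = int(max(xs.min() - margin * w, 0))
    y0 = int(max(ys.min() - margin * h, 0))
    x1 = int(min(xs.max() + margin * w, image.shape[1]))
    y1 = int(min(ys.max() + margin * h, image.shape[0]))

    # Enlarge/reduce the cropped image to the predetermined size.
    return cv2.resize(image[y0:y1, x0:x1], size)
```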
  • The region state determination module 810 provides the cropped and resized mouth image to a second neural network for lip line positioning (which may also be referred to as a lip key point positioning deep neural network); the second neural network performs lip line positioning on the input mouth image, so that it outputs lip key point information for each input mouth image, for example, the number of each upper lip line key point and its coordinate information in the mouth image, the number of each lower lip line key point and its coordinate information in the mouth image, and so on. The number of upper lip line key points output by the second neural network is usually the same as the number of lower lip line key points.
  • The two mouth corner key points can be regarded as key points shared by the upper lip line and the lower lip line, or as key points independent of both; the present application is not limited in this respect.
  • The region state determination module 810 calculates the average distance between the upper lip line and the lower lip line according to the upper lip line key points and the lower lip line key points, and calculates the mouth corner distance by using the two mouth corner key points output by the second neural network; the region state determination module 810 then normalizes the average distance by the calculated mouth corner distance to obtain the interval between the upper lip line and the lower lip line.
  • The upper lip line in the present application may be represented by the trajectory, or a fitted line, of a certain number of key points on the upper and lower contours of the upper lip, or by the trajectory, or a fitted line, of a certain number of key points on the contour of the upper lip; the lower lip line may likewise be represented by the trajectory, or a fitted line, of a certain number of key points on the contours of the lower lip. The two mouth corner key points can be classified among the key points of the upper lip line or among the key points of the lower lip line, which is not limited by the embodiments of the present application.
  • The interval between the upper lip line and the lower lip line in the present application may also be expressed in other manners; for example, the quotient of the distance between a pair of key points located in the middle of the lip lines and opposite in the upper and lower positions and the mouth corner distance may be used as the interval between the upper lip line and the lower lip line. As another example, the interval may be represented by a non-normalized value: the average distance between the upper lip line and the lower lip line may be used directly as the interval, or the distance between a pair of opposite key points in the middle of the lip lines may be used as the interval. This application does not limit the specific form of the interval between the upper lip line and the lower lip line.
  • By performing upper lip line positioning and lower lip line positioning on the cropped and resized mouth image, the present application can obtain the upper lip line and the lower lip line, and the interval calculated on this basis can reflect the opening of the driver's mouth more objectively and accurately, which is conducive to improving the accuracy of the mouth opening/closing state information and ultimately helps improve the accuracy of determining the driving state.
  • In an optional example, the region state determination module 810 can obtain head posture feature information according to the face key points obtained by the face key point detection module 800, and determine the face orientation information according to the head posture feature information; the face orientation information can represent the direction and angle of face rotation, where the direction of rotation can be to the left, to the right, downward and/or upward.
  • The region state determination module 810 can obtain the face orientation information of each driver image in a variety of manners; in an optional example, it can use corresponding neural networks to obtain the face orientation information of each driver image.
  • In an optional example, the region state determination module 810 can provide the face key points obtained by the face key point detection module 800 to a third neural network for extracting head posture feature information (which may also be referred to as a head posture feature extraction deep neural network); the third neural network extracts head posture feature information from the currently input set of face key points and outputs it. The region state determination module 810 then provides the head posture feature information output by the third neural network to a fourth neural network for estimating the head orientation (which may also be referred to as a head orientation detection deep neural network); the fourth neural network performs face orientation calculation on the currently input head posture feature information, and the region state determination module 810 obtains, from the information output by the fourth neural network, the face orientation information corresponding to the set of face key points.
  • Using the third neural network for extracting head posture feature information and the fourth neural network for estimating head orientation to obtain the face orientation information is a mature approach with good real-time performance, so the region state determination module 810 can accurately and timely detect the face orientation information corresponding to each image frame (i.e., each driver image) in the video, which is conducive to improving the accuracy of determining the degree of fatigue driving.
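  • For illustration, the face orientation pipeline could be wired up as below; head_pose_feature_net (the third neural network) and head_orientation_net (the fourth neural network) are assumed callables, not models provided by this application.

```python
def face_orientation(face_keypoints, head_pose_feature_net, head_orientation_net):
    """Sketch of the face orientation estimation described above."""
    # Extract head posture feature information from one set of face key points.
    pose_features = head_pose_feature_net(face_keypoints)
    # Estimate the direction and angle of face rotation from those features,
    # e.g. returning angles relative to the front of the vehicle.
    return head_orientation_net(pose_features)
```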
  • In an optional example, the region state determination module 810 can determine the pupil edge position by using the eye image located by the eye key points among the face key points, and calculate the pupil center position based on the pupil edge position; the line-of-sight direction information is usually obtained by calculation from the pupil center position and the eye center position in the eye image, for example, by calculating the vector from the eye center position to the pupil center position in the eye image, and this vector can be used as the line-of-sight direction information.
  • In an optional example, the region state determination module 810 provides the cropped and enlarged eye image to a fifth neural network for pupil positioning (which may also be referred to as a pupil key point positioning neural network); the fifth neural network performs pupil key point detection on the currently input eye image and outputs the detected pupil key points for that eye image, and the region state determination module 810 obtains the pupil edge position based on the pupil key points output by the fifth neural network. In an optional example, the circle at the pupil position in FIG. 5 is the pupil edge position obtained by the region state determination module 810; by performing calculation on the pupil edge position (for example, calculating its center position), the region state determination module 810 can obtain the pupil center position.
  • In an optional example, the region state determination module 810 can obtain the eye center position based on the upper eyelid line and the lower eyelid line; for example, the region state determination module 810 adds up the coordinate information of all the key points of the upper eyelid line and the lower eyelid line and divides the sum by the number of those key points, and takes the resulting coordinate information as the eye center position. The region state determination module 810 may also acquire the eye center position in other manners, for example, by performing calculation on the eye key points among the face key points obtained by the face key point detection module 800; the present application does not limit the specific implementation by which the region state determination module 810 obtains the eye center position.
  • By acquiring the pupil center position on the basis of pupil key point detection, the present application can obtain a more accurate pupil center position, and by acquiring the eye center position on the basis of eyelid line positioning, it can obtain a more accurate eye center position; therefore, when the pupil center position and the eye center position are used to determine the line-of-sight direction, more accurate line-of-sight direction information can be obtained. In addition, using pupil key point detection to locate the pupil center position, and using the pupil center position and the eye center position to determine the line-of-sight direction, makes the determination of the line-of-sight direction both accurate and easy to implement.
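  • A minimal sketch of the line-of-sight vector computation above, assuming the pupil edge key points and the eyelid line key points are given as (x, y) coordinates in the eye image:

```python
import numpy as np

def gaze_direction(pupil_edge_points, eyelid_points):
    """Line-of-sight direction vector, per the description above."""
    pupil_edge = np.asarray(pupil_edge_points, dtype=np.float32)
    eyelids = np.asarray(eyelid_points, dtype=np.float32)

    # Pupil center: mean of the detected pupil edge positions.
    pupil_center = pupil_edge.mean(axis=0)
    # Eye center: coordinates of all upper and lower eyelid line key points
    # summed and divided by their number, i.e. their mean.
    eye_center = eyelids.mean(axis=0)

    # The vector from the eye center to the pupil center serves as the
    # line-of-sight direction information.
    return pupil_center - eye_center
```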
  • The present application can also employ existing neural networks to detect the pupil edge position and the eye center position.
  • S320: determine, according to a plurality of pieces of state information of at least a partial region of the driver's face over a period of time, a parameter value of at least one indicator for characterizing the driving state.
  • In an optional example, step S320 in the present application may be performed by the processor invoking an instruction stored in the memory for determining, based on the plurality of pieces of state information of the driver's face region over a period of time, a parameter value of at least one indicator for characterizing the driving state, or may be performed by the indicator parameter value determination module 820 executed by the processor.
  • The present application quantifies the driver's driving state (e.g., the degree of fatigue driving and/or the degree of driving concentration); for example, the degree of fatigue driving is quantified as at least one of an indicator based on the degree of eye closure, an indicator based on yawning, an indicator based on the degree of line-of-sight deviation, and an indicator based on the degree of head turning, while the degree of driving concentration is quantified as at least one of an indicator based on the degree of line-of-sight deviation and an indicator based on the degree of head turning. The indicator based on the degree of eye closure may include at least one of: the number of eye closures, the frequency of eye closures, the number of semi-closed eyes, and the frequency of semi-closed eyes; the indicator based on yawning may include at least one of: whether a yawn occurs and the number of yawns; the indicator based on the degree of line-of-sight deviation may include at least one of: whether the line of sight deviates and whether the line of sight deviates seriously; and the indicator based on the degree of head turning (which may also be referred to as an indicator based on the degree of turning the face, or an indicator based on the degree of turning back) may include at least one of: whether the head is turned, whether the head is turned for a short time, and whether the head is turned for a long time.
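  • For illustration, the quantified indicators listed above could be collected in a simple structure such as the following; the field names are assumptions made only for this sketch.

```python
from dataclasses import dataclass

@dataclass
class IndicatorValues:
    """Illustrative holder for the quantified indicator parameter values."""
    # Indicators based on the degree of eye closure.
    closed_eye_count: int = 0
    semi_closed_eye_count: int = 0
    # Indicator based on yawning.
    yawn_count: int = 0
    # Indicators based on the degree of line-of-sight deviation.
    gaze_deviation: bool = False
    severe_gaze_deviation: bool = False
    # Indicators based on the degree of head turning.
    head_turn: bool = False
    long_head_turn: bool = False
```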
  • In an optional example, the region state determination module 810 can obtain the state information of a series of driver face regions over a period of time in the video (e.g., the state information of the driver's face region in a plurality of consecutive image frames, which may also be referred to as a series of consecutive pieces of state information of the driver's face region), and the indicator parameter value determination module 820 obtains the parameter value of each indicator by performing statistics on this series of state information.
  • In an optional example, when the indicator parameter value determination module 820 determines that the interval between the upper and lower eyelid lines is less than a first predetermined interval and that this phenomenon persists for N frames (e.g., 5 frames or 6 frames), it determines that the driver has had a closed-eye phenomenon; the indicator parameter value determination module 820 can record one eye closure and can also record the duration of this eye closure. When the indicator parameter value determination module 820 determines that the interval between the upper eyelid line and the lower eyelid line is not less than the first predetermined interval but less than a second predetermined interval, and that this phenomenon persists for N1 frames, it determines that the driver has had a semi-closed-eye phenomenon (which may also be referred to as a blink phenomenon, etc.); the indicator parameter value determination module 820 can record one semi-closed eye and can also record the duration of this semi-closed eye.
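  • The frame-counting logic for closed and semi-closed eyes might look roughly like the sketch below; the thresholds and frame counts correspond to the first/second predetermined intervals and N/N1 above, but are left as user-chosen parameters.

```python
def classify_eye_events(intervals, first_interval, second_interval, n_frames, n1_frames):
    """Count closed-eye and semi-closed-eye events from per-frame eyelid intervals."""
    closed_eyes = semi_closed_eyes = 0
    closed_run = semi_run = 0
    for interval in intervals:
        if interval < first_interval:
            closed_run += 1
            semi_run = 0
        elif interval < second_interval:
            semi_run += 1
            closed_run = 0
        else:
            closed_run = semi_run = 0
        # Record one closed-eye event once the phenomenon persists for N frames.
        if closed_run == n_frames:
            closed_eyes += 1
        # Record one semi-closed-eye (blink-like) event after N1 frames.
        if semi_run == n1_frames:
            semi_closed_eyes += 1
    return closed_eyes, semi_closed_eyes
```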
  • In an optional example, when the indicator parameter value determination module 820 determines that the interval between the upper lip line and the lower lip line is greater than a third predetermined interval and that this phenomenon persists for N2 frames (e.g., 10 frames or 11 frames), it determines that the driver has had a yawning phenomenon, and the indicator parameter value determination module 820 can record one yawn.
  • In an optional example, when the indicator parameter value determination module 820 determines that the face orientation information is greater than a first orientation and that this phenomenon persists for N3 frames (e.g., 9 frames or 10 frames), it determines that the driver has had a long-time large-angle head-turning phenomenon; the indicator parameter value determination module 820 can record one long-time large-angle head turn and can also record the duration of this head turn. When the indicator parameter value determination module 820 determines that the face orientation information is not greater than the first orientation but greater than a second orientation, and that this phenomenon persists for N3 frames (e.g., 9 frames or 10 frames), it determines that the driver has had a long-time small-angle head-turning phenomenon; the indicator parameter value determination module 820 can record one small-angle head-turning deviation and can also record the duration of this head turn.
  • In an optional example, when the indicator parameter value determination module 820 determines that the angle between the line-of-sight direction information and the front of the vehicle is greater than a first angle and that this phenomenon persists for N4 frames (e.g., 8 frames or 9 frames), it determines that the driver has had a serious line-of-sight deviation phenomenon; the indicator parameter value determination module 820 can record one serious line-of-sight deviation and can also record the duration of this serious deviation. When the indicator parameter value determination module 820 determines that the angle between the line-of-sight direction information and the front of the vehicle is not greater than the first angle but greater than a second angle, and that this phenomenon persists for N4 frames (e.g., 9 frames or 10 frames), it determines that the driver has had a line-of-sight deviation phenomenon; the indicator parameter value determination module 820 can record one line-of-sight deviation and can also record the duration of this deviation.
  • The specific values of the first predetermined interval, the second predetermined interval, the third predetermined interval, the first orientation, the second orientation, the first angle, the second angle, N, N1, N2, N3, and N4 can be set according to the actual situation; this application does not limit their specific sizes.
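  • One convenient way to group these tunable values is a single configuration object, as sketched below; every numeric default here is an arbitrary illustration, since the application does not fix the values.

```python
from dataclasses import dataclass

@dataclass
class MonitoringThresholds:
    """Illustrative grouping of the tunable values mentioned above."""
    first_interval: float = 0.15      # first predetermined eyelid interval
    second_interval: float = 0.30     # second predetermined eyelid interval
    third_interval: float = 0.50      # third predetermined lip interval
    first_orientation: float = 45.0   # first face orientation angle, degrees
    second_orientation: float = 20.0  # second face orientation angle, degrees
    first_angle: float = 45.0         # first line-of-sight angle, degrees
    second_angle: float = 20.0        # second line-of-sight angle, degrees
    n: int = 5                        # frame counts N, N1..N4
    n1: int = 6
    n2: int = 10
    n3: int = 9
    n4: int = 8
```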
  • S330: determine the driving state monitoring result of the driver based on the parameter value.
  • In an optional example, step S330 of the present application may be performed by the processor invoking an instruction stored in the memory for determining the driving state monitoring result of the driver according to the parameter value, or may be performed by the driving state determination module 830 executed by the processor.
  • The driving state monitoring result of the present application may include a driver fatigue monitoring result; of course, the driving state monitoring result may also be expressed in other forms, for example, it may include a driver attention monitoring result, and so on.
  • The above driver fatigue monitoring result may optionally be a degree of fatigue driving.
  • The degree of fatigue driving may include: a normal driving level (which may also be referred to as a non-fatigue driving level) and a fatigue driving level; the fatigue driving level may be a single level or may be divided into a number of different levels.
  • In an optional example, the above fatigue driving level can be divided into a prompt fatigue driving level (which may also be referred to as a mild fatigue driving level) and a warning fatigue driving level (which may also be referred to as a severe fatigue driving level); of course, fatigue driving can also be divided into more levels, such as a mild fatigue driving level, a moderate fatigue driving level, and a severe fatigue driving level. This application does not limit the different levels into which fatigue driving is divided.
  • The above driver attention monitoring result may optionally be a degree of driving concentration.
  • the concentration driving degree may include: a focused driving level (also referred to as an undistracted driving level) and a distracting driving level; wherein the distracting driving level is included It can be one level or divided into several different levels.
  • the above distracting driving level can be divided into: prompt distracting driving level (also called mild distracting driving level) and warning distracting driving.
  • Level also known as severe fatigue driving level
  • distracted driving levels can be divided into more levels, such as mild distracted driving levels, moderate distracting driving levels, and severe distracted driving levels. This application does not limit the different levels of distraction driving.
  • Each level included in the degree of fatigue driving in the present application corresponds to a preset condition.
  • The determine driving state module 830 should determine, in real time, which preset conditions are satisfied by the indicator parameter values calculated by the determine indicator parameter value module 820, and may determine the level corresponding to the satisfied preset condition as the driver's current degree of fatigue driving.
  • The preset conditions corresponding to the normal driving level may include:
  • Condition 1, there is no semi-closed-eye or closed-eye phenomenon;
  • Condition 2, there is no yawning phenomenon;
  • Condition 3, there is no head-turn deviation phenomenon, or there is only a brief small-angle head-turn deviation phenomenon (i.e., the duration of the small-angle head-turn deviation does not exceed a preset duration, for example, 2-3 seconds);
  • Condition 4, there is no gaze deviation phenomenon, or there is only a brief small-angle gaze deviation phenomenon (i.e., the duration of the small-angle gaze deviation does not exceed a preset duration, for example, 3-4 seconds);
  • When condition 1, condition 2, condition 3 and condition 4 are all satisfied, the determine driving state module 830 can determine that the driver is currently at the normal driving level.
  • The preset conditions corresponding to the prompt fatigue driving level may include:
  • Condition 11, there is a semi-closed-eye phenomenon;
  • Condition 22, there is a yawning phenomenon;
  • Condition 33, there is a small-angle head-turn deviation phenomenon within a relatively short time range (i.e., the duration of the small-angle head-turn deviation does not exceed a preset duration, for example, 5-8 seconds);
  • Condition 44, there is a small-angle gaze deviation phenomenon within a relatively short time range (i.e., the duration of the small-angle gaze deviation does not exceed a preset duration, for example, 5-8 seconds);
  • When any one of condition 11, condition 22, condition 33 and condition 44 is satisfied, the determine driving state module 830 may determine that the driver is currently at the prompt fatigue driving level. At this time, the control module 840 may prompt the driver by sound (such as voice or ringing), light (lighting or flashing lights, etc.) and/or vibration, so as to remind the driver to raise driving attention, to encourage the driver to concentrate on driving or to encourage the driver to rest.
  • The preset conditions corresponding to the warning fatigue driving level may include:
  • Condition 111, there is a closed-eye phenomenon, or the number of eye closures within a period of time reaches a preset number, or the eye-closure time within a period of time reaches a preset time;
  • Condition 222, the number of yawns within a period of time reaches a preset number;
  • Condition 333, there is a head-turn deviation phenomenon over a relatively long time range (i.e., the duration of the head-turn deviation exceeds a preset duration, for example, 5-8 seconds);
  • Condition 444, there is a gaze deviation phenomenon over a relatively long time range (i.e., the duration of the gaze deviation exceeds a preset duration, for example, 5-8 seconds), or there is a serious gaze deviation phenomenon (i.e., the angle of the gaze deviation exceeds a predetermined angle);
  • When any one of condition 111, condition 222, condition 333 and condition 444 is satisfied, the determine driving state module 830 may determine that the driver is currently at the warning fatigue driving level. At this time, the control module 840 may take over the current driving (i.e., switch to a take-over driving mode) to ensure driving safety as much as possible; taking over the current driving may optionally be switching to a driverless mode/automatic driving mode. At the same time, the control module 840 may also prompt the driver by sound (such as voice or ringing), light (lighting or flashing lights, etc.) and/or vibration, so as to remind the driver to raise driving attention, to encourage the driver to concentrate on driving or to encourage the driver to rest.
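  • The following Python sketch illustrates one possible way the determine driving state module 830 could map accumulated indicator parameter values to the normal, prompt fatigue and warning fatigue driving levels according to preset conditions like those listed above; the dictionary keys and numeric thresholds are hypothetical placeholders, not values specified by the application.

```python
# Illustrative mapping from accumulated indicator values to a fatigue driving
# level; field names and exact thresholds are assumptions for this sketch.

def fatigue_level(ind):
    """ind: per-window indicator values, e.g. closed_eyes, semi_closed_eyes,
    yawns, head_turn_s, small_head_turn_s, gaze_dev_s, small_gaze_dev_s,
    severe_gaze_dev (bool); durations are in seconds."""
    warning = (
        ind["closed_eyes"] > 0 or ind["yawns"] >= 3
        or ind["head_turn_s"] > 8 or ind["gaze_dev_s"] > 8
        or ind["severe_gaze_dev"]
    )
    if warning:
        return "warning_fatigue"          # e.g. take over driving + prompt
    prompt = (
        ind["semi_closed_eyes"] > 0 or ind["yawns"] > 0
        or ind["small_head_turn_s"] > 3 or ind["small_gaze_dev_s"] > 4
    )
    if prompt:
        return "prompt_fatigue"           # sound/light/vibration prompt
    return "normal"

print(fatigue_level({"closed_eyes": 0, "semi_closed_eyes": 1, "yawns": 0,
                     "head_turn_s": 0, "small_head_turn_s": 2,
                     "gaze_dev_s": 0, "small_gaze_dev_s": 1,
                     "severe_gaze_dev": False}))   # -> "prompt_fatigue"
```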
  • Each level included in the degree of focused driving in the present application corresponds to a preset condition.
  • The determine driving state module 830 should determine, in real time, which preset conditions are satisfied by the indicator parameter values calculated by the determine indicator parameter value module 820, and may determine the level corresponding to the satisfied preset condition as the driver's current degree of focused driving.
  • The preset conditions corresponding to the focused driving level may include:
  • Condition a, there is no head-turn deviation phenomenon, or there is only a brief small-angle head-turn deviation phenomenon (i.e., the duration of the small-angle head-turn deviation does not exceed a preset duration, for example, 2-3 seconds);
  • Condition b, there is no gaze deviation phenomenon, or there is only a brief small-angle gaze deviation phenomenon (i.e., the duration of the small-angle gaze deviation does not exceed a preset duration, for example, 3-4 seconds);
  • When both condition a and condition b are satisfied, the determine driving state module 830 can determine that the driver is currently at the focused driving level.
  • The preset conditions corresponding to the prompt distracted driving level may include:
  • Condition aa, there is a small-angle head-turn deviation phenomenon within a relatively short time range (i.e., the duration of the small-angle head-turn deviation does not exceed a preset duration, for example, 5-8 seconds);
  • Condition bb, there is a small-angle gaze deviation phenomenon within a relatively short time range (i.e., the duration of the small-angle gaze deviation does not exceed a preset duration, for example, 5-8 seconds);
  • When either condition aa or condition bb is satisfied, the determine driving state module 830 may determine that the driver is currently at the prompt distracted driving level. At this time, the control module 840 may prompt the driver by sound (such as voice or ringing), light (lighting or flashing lights, etc.) and/or vibration, so as to remind the driver to raise driving concentration and to encourage the driver to return the distracted attention to driving.
  • The preset conditions corresponding to the warning distracted driving level may include:
  • Condition aaa, there is a head-turn deviation phenomenon over a relatively long time range (i.e., the duration of the head-turn deviation exceeds a preset duration, for example, 5-8 seconds);
  • Condition bbb, there is a gaze deviation phenomenon over a relatively long time range (i.e., the duration of the gaze deviation exceeds a preset duration, for example, 5-8 seconds), or there is a serious gaze deviation phenomenon (i.e., the angle of the gaze deviation exceeds a predetermined angle);
  • When either condition aaa or condition bbb is satisfied, the determine driving state module 830 may determine that the driver is currently at the warning distracted driving level. At this time, the control module 840 may take over the current driving (i.e., switch to a take-over driving mode) to ensure driving safety as much as possible; taking over the current driving may optionally be switching to a driverless mode/automatic driving mode. At the same time, the control module 840 may also prompt the driver by sound (such as voice or ringing), light (lighting or flashing lights, etc.) and/or vibration, so as to remind the driver to raise driving concentration and to return the distracted attention to driving.
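  • Analogously, a minimal sketch of how the degree of focused driving could be derived from only the head-turn and gaze indicators is shown below; the field names and thresholds are again assumptions for illustration.

```python
# Illustrative mapping for the focused-driving degree: only head-turn and
# gaze indicators are consulted; names/thresholds are placeholders.

def attention_level(ind):
    if ind["head_turn_s"] > 8 or ind["gaze_dev_s"] > 8 or ind["severe_gaze_dev"]:
        return "warning_distracted"       # take over driving + prompt
    if ind["small_head_turn_s"] > 3 or ind["small_gaze_dev_s"] > 4:
        return "prompt_distracted"        # sound/light/vibration prompt
    return "focused"
```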
  • The above operations performed by the control module 840 may be implemented by the processor invoking instructions stored in the memory for performing the control operations corresponding to the driving state monitoring result.
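  • A minimal sketch of how such a control operation could be dispatched from the monitoring result is shown below; the action names are placeholders for whatever prompt, alarm or take-over interfaces a vehicle actually exposes.

```python
# Placeholder dispatch from a monitoring-result level to control actions.

def control_action(level):
    if level in ("warning_fatigue", "warning_distracted"):
        return ["switch_to_takeover_driving_mode",
                "voice_prompt", "warning_light", "vibration"]
    if level in ("prompt_fatigue", "prompt_distracted"):
        return ["voice_prompt", "warning_light", "vibration"]
    return []                              # normal / focused: no action
```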
  • Referring to FIG. 7, an application scenario in which an embodiment of the present application may be implemented is schematically illustrated.
  • The driver drives the car 100 on the road. The car 100 is equipped with a camera capable of capturing the driver's face, and the images captured by the camera are transmitted in real time to a data processing unit such as a CPU or a microprocessor installed in the car (not shown in FIG. 7).
  • The data processing unit analyzes the received images in real time and determines, according to the real-time analysis result, whether the driver is currently in a fatigue driving state or in a distracted driving state.
  • When the result of the determination is that the driver is currently in a fatigue driving state or in a distracted driving state, corresponding measures should be taken in time, for example, issuing a voice prompt, lighting a warning lamp and activating a vibration device at the same time, or switching to a driverless mode/automatic driving mode, and so on.
  • The camera and the data processing unit involved in the above application scenario are usually devices configured on the car 100 itself.
  • However, the camera and the data processing unit may also be the camera and data processing unit of an electronic device carried by the driver; for example, they may be the camera and data processing unit of the driver's smart mobile phone or tablet computer.
  • In an optional example, the driver's smart mobile phone is mounted on the console of the car 100 by a fixed bracket, with the camera of the smart mobile phone aimed at the driver's face. After the corresponding APP installed in the smart mobile phone is started, the camera in the smart mobile phone captures images, and the data processing unit in the smart mobile phone analyzes the captured images in real time and determines, according to the real-time analysis result, whether the driver is currently in a fatigue driving state or in a distracted driving state.
  • When the result of the determination is that the driver is currently in a fatigue driving state or in a distracted driving state, the driver can be alerted by sound, light and/or vibration.
  • In one application scenario, the images of the driver's face captured by the camera in the car 100 may be transmitted in real time via a wireless network to a back-end electronic device, for example, to a desktop computer in the driver's home/office, or to the server corresponding to the relevant APP.
  • The back-end electronic device analyzes the received images in real time and determines, according to the real-time analysis result, whether the driver is currently in a fatigue driving state or in a distracted driving state.
  • When the result of the determination is that the driver is currently in a fatigue driving state or in a distracted driving state, the back-end electronic device should promptly take corresponding measures; for example, it sends an alert command through the wireless network to the smart mobile phone carried by the driver or to the data processing unit in the car 100, so that the smart mobile phone or the data processing unit in the car 100 can, according to the alert command, issue a voice prompt and/or light a warning lamp and/or switch to a driverless mode, and so on.
  • In another application scenario, the images of the driver's face captured by the camera configured on the car 100 itself may be transmitted in real time to an electronic device carried by the driver via a short-range wireless transmission method such as Bluetooth, for example, to the smart mobile phone, tablet computer or notebook computer that the driver carries.
  • The electronic device carried by the driver analyzes the received images in real time and determines, according to the real-time analysis result, whether the driver is currently in a fatigue driving state or in a distracted driving state.
  • When the result of the determination is that the driver is currently in a fatigue driving state or in a distracted driving state, the electronic device carried by the driver should promptly take corresponding measures; for example, it sends an alert command to the data processing unit in the car 100 via a short-range wireless transmission method such as Bluetooth, so that the data processing unit configured on the car 100 can, according to the alert command, issue a voice prompt and/or light a warning lamp and/or switch to a driverless mode, and so on; alternatively, the electronic device carried by the driver itself issues a voice prompt and/or a vibration prompt.
  • Any neural network mentioned in the embodiments of the present application is a neural network pre-trained in a supervised, semi-supervised or unsupervised manner for a specific task (such as a key point localization task), and the specific manner of training is not limited by the embodiments of the present application.
  • For example, the neural network may be pre-trained in a supervised manner, such as pre-training the neural network with annotated data for an organ of the face.
  • The network structure of the neural network may be flexibly designed according to the needs of the key point localization task, and is not limited by the embodiments of the present application.
  • For example, the neural network may include, but is not limited to, convolutional layers, non-linear ReLU layers, pooling layers, fully connected layers, etc.; the greater the number of network layers, the deeper the network. The network structure may adopt, but is not limited to, the structure of networks such as AlexNet, Deep Residual Network (ResNet) or VGGNet (Visual Geometry Group Network).
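  • As a toy illustration of the layer types mentioned above (convolution, ReLU, pooling, fully connected), the following PyTorch sketch regresses the 2D coordinates of a fixed number of key points from a cropped grayscale face-region image; it only illustrates the layer types and does not reproduce the networks actually used in the application.

```python
import torch
import torch.nn as nn

class TinyKeypointNet(nn.Module):
    """Toy convolution/ReLU/pooling/fully-connected stack that regresses
    (x, y) coordinates for num_keypoints key points from a 64x64 crop."""
    def __init__(self, num_keypoints=11):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_keypoints * 2),     # (x, y) per key point
        )

    def forward(self, x):                          # x: (N, 1, 64, 64)
        return self.head(self.features(x))         # (N, num_keypoints * 2)

net = TinyKeypointNet()
print(net(torch.randn(2, 1, 64, 64)).shape)        # torch.Size([2, 22])
```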
  • The driving state monitoring technical solution provided by the present application obtains face key points from the captured driver image, so that face regions such as the eyes and the mouth can be located based on the acquired face key points.
  • By recognizing the images of face regions such as the eyes and the mouth, state information of these face regions can be obtained, for example, eye open/closed state information, mouth open/closed state information, face orientation information and gaze direction information.
  • By comprehensively analyzing multiple pieces of state information of the driver's face over a period of time, parameter values of at least one indicator used to characterize the driving state can be quantified, for example, semi-closed-eye time/semi-closed-eye frequency, closed-eye time/closed-eye frequency, number of yawns, side-face time and degree of gaze deviation, so that the driving state, such as the driver's degree of fatigue driving or degree of distracted driving, can be measured in time according to the parameter values of the quantified indicators.
  • Since the present application detects the driving state, such as the driver's degree of fatigue driving or degree of distracted driving, based on the captured driver image, it can avoid the need to arrange components such as sensors on the driver's body, which would be required when detecting fatigue or distraction based on the driver's physiological signals, so that driving state monitoring is easy to implement.
  • Since the indicators used to characterize the driving state are indicators that quantify driving states such as fatigue driving or distracted driving, the present application can convert the driver images captured in real time into the parameter values of the quantified indicators in a timely manner.
  • Therefore, the present application can determine the driver's degree of fatigue driving or degree of distracted driving by using the parameter values of the indicators obtained in real time, so that driving states such as the driver's degree of fatigue driving or degree of distracted driving can be determined objectively and accurately.
  • The methods and apparatuses, electronic devices and computer readable storage media of the present application may be implemented in many ways, for example, by software, hardware, firmware, or any combination of software, hardware and firmware.
  • the above-described sequence of steps for the method is for illustrative purposes only, and the steps of the method of the present application are not limited to the order specifically described above unless otherwise specifically stated.
  • the present application can also be implemented as a program recorded in a recording medium, the program including machine readable instructions for implementing the method according to the present application.
  • the present application also covers a recording medium storing a program for executing the method according to the present application.

Abstract

A driving state monitoring method, apparatus, electronic device, computer readable medium and computer program, the method comprising: detecting face key points of a driver image; determining state information of at least part of the driver's face according to the face key points; determining, according to multiple pieces of state information of at least part of the driver's face within a period of time, a parameter value of at least one indicator used to characterize the driving state; and determining the driver's driving state monitoring result according to the parameter value.

Description

驾驶状态监控方法、装置和电子设备 技术领域
本申请涉及计算机视觉技术,尤其是涉及一种驾驶状态监控方法、介质、驾驶状态监控装置以及电子设备。
背景技术
由于驾驶员的驾驶状态对安全行车的影响非常严重,因此,应尽可能的使驾驶员处于良好的驾驶状态。
对驾驶员的驾驶状态产生影响的因素通常包括:疲劳以及在驾驶过程中由于顾及手机等其他事物而导致的分心等;具体的,驾驶员长时间连续驾驶交通工具、睡眠质量差及睡眠时间不足等情况,往往会引起使驾驶员的生理机能及心理机能等出现失调现象,从而会导致驾驶员的驾驶技能下降,最终会影响驾驶员的驾驶状态;而当驾驶员在驾驶过程中将其注意力分散到手机等其他事物时,会导致驾驶员无法及时了解道路情况。
发明内容
本申请实施方式提供一种驾驶状态监控技术方案。
根据本申请实施方式的其中一个方面,提供了一种驾驶状态监控方法,该方法包括:检测驾驶员图像的人脸关键点;根据所述人脸关键点确定驾驶员人脸至少部分区域的状态信息;根据一段时间内的驾驶员人脸至少部分区域的多个状态信息,确定用于表征驾驶状态的至少一个指标的参数值;根据所述参数值确定驾驶员的驾驶状态监测结果。
根据本申请实施方式的其中另一个方面,提供了一种驾驶状态监控装置,该装置包括:检测人脸关键点模块,用于检测驾驶员图像的人脸关键点;确定区域状态模块,用于根据所述人脸关键点确定驾驶员人脸至少部分区域的状态信息;确定指标参数值模块,用于根据一段时间内的驾驶员人脸至少部分区域的多个状态信息,确定用于表征驾驶状态的至少一个指标的参数值;确定驾驶状态模块,用于根据所述参数值确定驾驶员的驾驶状态监测结果。
根据本申请实施方式的再一个方面,提供了一种电子设备,包括:存储器,用于存储计算机程序;处理器,用于执行所述存储器中存储的计算机程序,所述计算机程序被执行时,下述指令被运行:用于检测驾驶员图像的人脸关键点的指令;用于根据所述人脸关键点确定驾驶员人脸至少部分区域的状态信息的指令;用于根据一段时间内的驾驶员人脸至少部分区域的多个状态信息,确定用于表征驾驶状态的至少一个指标的参数值的指令;用于根据所述参数值确定驾驶员的驾驶状态监测结果的指令。
根据本申请实施方式的再一个方面,提供了一种计算机存储介质,其上存储有计算机程序,该计算机程序被处理器执行时,执行本申请方法实施方式中的各步骤,例如,检测驾驶员图像的人脸关键点;根据所述人脸关键点确定驾驶员人脸至少部分区域的状态信息;根据一段时间内的驾驶员人脸至 少部分区域的多个状态信息,确定用于表征驾驶状态的至少一个指标的参数值;根据所述参数值确定驾驶员的驾驶状态监测结果。
根据本申请实施方式的再一个方面,提供了一种计算机程序,该计算机程序被处理器执行时,执行本申请方法实施方式中的各个步骤,例如,检测驾驶员图像的人脸关键点;根据所述人脸关键点确定驾驶员人脸至少部分区域的状态信息;根据一段时间内的驾驶员人脸至少部分区域的多个状态信息,确定用于表征驾驶状态的至少一个指标的参数值;根据所述参数值确定驾驶员的驾驶状态监测结果。
基于本申请提供的驾驶状态监控方法、驾驶状态监控装置、电子设备以及计算机存储介质,本申请通过检测驾驶员图像的人脸关键点,使本申请可以基于人脸关键点对眼睛以及嘴巴等区域进行定位,进而可以获得相应区域的状态信息;通过对一段时间内的驾驶员人脸至少部分区域的状态信息进行分析,可以获得对驾驶员驾驶状态进行量化的至少一个指标的参数值,从而本申请可以根据量化的指标的参数值及时客观的反映出驾驶员的驾驶状态。
下面通过附图和实施方式,对本申请的技术方案做进一步的详细描述。
附图说明
构成说明书的一部分的附图描述了本申请的实施方式,并且连同描述一起用于解释本申请的原理。参照附图,根据下面的详细描述,可以更加清楚地理解本申请,其中:
图1为实现本申请实施方式的一示例性设备的框图;
图2为本申请方法一个实施方式的流程图;
图3为本申请的眼睑线定位后的眼睑线上的关键点示意图;
图4为本申请的人脸关键点中的嘴巴关键点示意图;
图5为本申请的经过瞳孔定位后的瞳孔边沿示意图;
图6为本申请装置一个实施方式的结构示意图;
图7为本申请的一个应用场景示意图。
具体实施例
现在将参照附图来详细描述本申请的各种示例性实施方式。应该注意到:除非另外具体说明,否则在这些实施方式中阐述的部件和步骤的相对布置、数字表达式和数值不限制本申请的范围。
同时,应当明白,为了便于描述,附图中所示出的各个部分的尺寸并不是按照实际的比例关系绘制的。
以下对至少一个示例性实施方式的描述实际上仅仅是说明性的,决不作为对本申请及其应用或使用的任何限制。
对于相关领域普通技术人员已知的技术、方法和设备可能不作详细讨论,但在适当情况下,所述 技术、方法和设备应当被视为说明书的一部分。
应该注意到:相似的标号和字母在下面的附图中表示类似项,因此,一旦某一项在一附图中被定义,则在随后的附图中可能不会对其进行进一步讨论。
本申请实施方式可以应用于计算机系统/服务器,其可与众多其它通用或者专用计算系统环境或配置一起操作。适于与计算机系统/服务器一起使用的众所周知的计算系统、环境和/或配置的例子包括但不限于:个人计算机系统、服务器计算机系统、瘦客户机、厚客户机、手持或膝上设备、基于微处理器的系统、机顶盒、可编程消费电子产品、网络个人电脑、小型计算机系统、大型计算机系统和包括上述任何系统的分布式云计算技术环境,等等。
计算机系统/服务器可以在由计算机系统执行的计算机系统可执行指令(诸如程序模块等)的一般语境下描述。通常,程序模块可以包括例程、程序、目标程序、组件、逻辑以及数据结构等等,它们执行特定的任务或者实现特定的抽象数据类型。计算机系统/服务器可以在分布式云计算环境中实施,分布式云计算环境中,任务是由通过通信网络链接的远程处理设备执行的。在分布式云计算环境中,程序模块可以位于包括存储设备的本地或者远程计算系统存储介质上。
示例性设备
图1示出了适于实现本申请的示例性设备200,设备200可以是汽车中配置的控制系统/电子系统、移动终端(例如,智能移动电话等)、个人计算机(即PC,例如,台式计算机或者笔记型计算机等)、平板电脑以及服务器等。图1中,设备200包括一个或者多个处理器、通信部等,所述一个或者多个处理器可以为:一个或者多个中央处理单元(CPU)201,和/或,一个或者多个图像处理器(GPU)213等,处理器可以根据存储在只读存储器(ROM)202中的可执行指令或者从存储部分208加载到随机访问存储器(RAM)203中的可执行指令而执行各种适当的动作和处理。通信部212可以包括但不限于网卡,所述网卡可以包括但不限于IB(Infiniband)网卡。处理器可与只读存储器202和/或随机访问存储器230中通信以执行可执行指令,通过总线204与通信部212相连、并经通信部212与其他目标设备通信,从而完成本申请中的相应步骤。在一个可选的示例中,处理器所执行的步骤包括:检测驾驶员图像的人脸关键点;根据所述人脸关键点确定驾驶员人脸至少部分区域的状态信息;根据一段时间内的驾驶员人脸至少部分区域的多个状态信息,确定用于表征驾驶状态的至少一个指标的参数值;根据所述参数值确定驾驶员的驾驶状态监测结果。
此外,在RAM 203中,还可以存储有装置操作所需的各种程序以及数据。CPU201、ROM202以及RAM203通过总线204彼此相连。在有RAM203的情况下,ROM202为可选模块。RAM203存储可执行指令,或在运行时向ROM202中写入可执行指令,可执行指令使中央处理单元201执行上述物体分割方法所包括的步骤。输入/输出(I/O)接口205也连接至总线204。通信部212可以集成设置,也可以设置为具有多个子模块(例如,多个IB网卡),并分别与总线连接。
以下部件连接至I/O接口205:包括键盘、鼠标等的输入部分206;包括诸如阴极射线管(CRT)、液晶显示器(LCD)等以及扬声器等的输出部分207;包括硬盘等的存储部分208;以及包括诸如LAN卡、调制解调器等的网络接口卡的通信部分209。通信部分209经由诸如因特网的网络执行通信处理。 驱动器210也根据需要连接至I/O接口205。可拆卸介质211,诸如磁盘、光盘、磁光盘、半导体存储器等等,根据需要安装在驱动器210上,以便于从其上读出的计算机程序根据需要被安装在存储部分208中。
需要特别说明的是,如图1所示的架构仅为一种可选实现方式,在具体实践过程中,可以根据实际需要对上述图1的部件数量和类型进行选择、删减、增加或替换;在不同功能部件设置上,也可采用分离设置或集成设置等实现方式,例如,GPU和CPU可分离设置,再例如,可以将GPU集成在CPU上,通信部可分离设置,也可集成设置在CPU或GPU上等。这些可替换的实施方式均落入本申请的保护范围。
特别地,根据本申请的实施方式,下文参考流程图描述的过程可以被实现为计算机软件程序,例如,本申请实施方式包括一种计算机程序产品,该计算机程序产品包括有形地包含在机器可读介质上的计算机程序,计算机程序包含用于执行流程图所示的步骤的程序代码,程序代码可包括对应执行本申请提供的步骤对应的指令,例如,用于检测驾驶员图像的人脸关键点的指令;用于根据所述人脸关键点确定驾驶员人脸至少部分区域的状态信息的指令;用于根据一段时间内的驾驶员人脸至少部分区域的多个状态信息,确定用于表征驾驶状态的至少一个指标的参数值的指令;用于根据所述参数值确定驾驶员的驾驶状态监测结果的指令。
在这样的实施方式中,该计算机程序可以通过通信部分209从网络上被下载及安装,和/或从可拆卸介质211被安装。在该计算机程序被中央处理单元(CPU)201执行时,执行本申请中记载的上述指令。
示例性实施例
本申请提供的驾驶状态监控技术方案可由单片机、FPGA(Field Programmable Gate Array,现场可编程门阵列)、微处理器、智能移动电话、笔记型计算机、平板电脑(PAD)、台式计算机或者服务器等能够运行计算机程序(也可以称为程序代码)的电子设备实现,该计算机程序可以存储于闪存、缓存、硬盘或者光盘等计算机可读存储介质中。
下面结合图2至图6对本申请提供的驾驶状态监控技术方案进行说明。
图2中,S300、检测驾驶员图像的人脸关键点。
在一个可选示例中,本申请中的步骤S300可以由处理器调用存储器中存储的用于获取人脸关键点的指令执行,也可以由被处理器运行的检测人脸关键点模块800执行。
在一个可选示例中,本申请中的驾驶员图像通常为通过摄像头(如红外摄像头等)针对驾驶室摄取到的视频中的图像帧,即检测人脸关键点模块800可以针对摄像头摄取到的视频中的各图像帧分别实时或离线检测人脸关键点。检测人脸关键点模块800可以采用多种方式检测各驾驶员图像的人脸关键点;例如,检测人脸关键点模块800可以利用相应的神经网络来获得各驾驶员图像的人脸关键点。
一个可选示例中,首先,检测人脸关键点模块800可以将基于摄像头而产生的各驾驶员图像分别提供给用于检测人脸位置的人脸检测深度神经网络,人脸检测深度神经网络针对输入的每一幅驾驶员 图像进行人脸位置检测,并针对输入的每一幅驾驶员图像分别输出人脸位置信息,例如,人脸检测深度神经网络针对输入的每一幅驾驶员图像分别输出人脸外接框信息;其次,检测人脸关键点模块800将输入人脸检测深度神经网络的每一幅驾驶员图像以及人脸检测深度神经网络针对每一幅驾驶员图像而输出的人脸位置信息(例如,人脸外接框信息)分别提供给用于检测人脸关键点的人脸关键点深度神经网络,每一幅驾驶员图像及其对应的人脸位置信息形成人脸关键点深度神经网络的一组输入信息,针对每一组输入信息,人脸关键点深度神经网络可以根据人脸位置信息确定出驾驶员图像中需要检测的区域,并针对需要检测的区域的图像进行人脸关键点检测,从而人脸关键点深度神经网络会针对每一幅驾驶员图像输出人脸关键点,例如,人脸关键点深度神经网络针对每一幅驾驶员图像产生多个关键点在驾驶员图像中的坐标信息以及每一个关键点的编号,并输出,从而检测人脸关键点模块800根据人脸关键点深度神经网络输出的信息获取到视频中的每一幅驾驶员图像的人脸关键点,例如,针对视频中的每一幅驾驶员图像,检测人脸关键点模块800均获取到各个关键点的编号以及各关键点在该驾驶员图像中的坐标信息。
在采用现有的发展较为成熟,具有较好的实时性的用于检测人脸位置的人脸检测深度神经网络和用于检测人脸关键点的人脸关键点深度神经网络来检测人脸关键点的情况下,针对摄像头(尤其是红外摄像头)所摄取到的视频,检测人脸关键点模块800可以准确及时的检测出视频中的各图像帧(即各驾驶员图像)的人脸关键点。另外,由于驾驶员所在区域(如车内或者驾驶室等)的光线往往较复杂,而红外摄像头所摄取的驾驶员图像的质量往往会优于普通摄像头所摄取的驾驶员图像的质量,尤其是在夜晚或者阴天或者隧道内等外部光线较暗环境下,红外摄像头所摄取到驾驶员图像通常明显优于普通摄像头所摄取的驾驶员图像的质量,从而有利于提高检测人脸关键点模块800所检测出的人脸关键点的准确性,进而有利于提高驾驶状态监控的准确性。
S310、根据人脸关键点确定驾驶员人脸至少部分区域的状态信息。
在一个可选示例中,本申请中的步骤S310可以由处理器调用存储器中存储的用于确定驾驶员人脸至少部分区域的状态信息的指令执行,也可以由被处理器运行的确定区域状态模块810执行。
在一个可选示例中,本申请的驾驶员人脸至少部分区域的状态信息是针对用于表征驾驶状态的指标而设置的,即驾驶员人脸至少部分区域的状态信息主要用于形成用于表征驾驶状态的指标的参数值。
在一个可选示例中,驾驶员人脸至少部分区域可以包括:驾驶员人脸眼部区域、驾驶员人脸嘴部区域以及驾驶员面部整个区域等中的至少一个。驾驶员人脸至少部分区域的状态信息可以包括:眼睛睁合状态信息、嘴巴开合状态信息、人脸朝向信息以及视线方向信息中的至少一个,例如,本申请中的驾驶员人脸至少部分区域的状态信息包括上述四种信息中的至少二种,以综合考虑人脸不同区域的状态信息来提高驾驶状态信息检测的准确性。
上述眼睛睁合状态信息可以用于进行驾驶员的闭眼检测,如检测驾驶员是否半闭眼(“半”表示非完全闭眼的状态,如瞌睡状态下的眯眼等)、是否闭眼、闭眼次数、闭眼幅度等。眼睛睁合状态信息可以可选的为对眼睛睁开的高度进行归一化处理后的信息。上述嘴巴开合状态信息可以用于进行驾 驶员的哈欠检测,如检测驾驶员是否打哈欠、哈欠次数等。嘴巴开合状态信息可以可选的为对嘴巴张开的高度进行归一化处理后的信息。上述人脸朝向信息可以用于进行驾驶员的人脸朝向检测,如检测驾驶员是否侧脸或者是否回头等。人脸朝向信息可以可选的为驾驶员人脸正前方与驾驶员所驾驶的车辆正前方之间的夹角。上述视线方向信息可以用于进行驾驶员的视线检测,如检测驾驶员是否目视前方等,视线方向信息可以用于判断驾驶员的视线是否发生了偏离现象等。视线方向信息可以可选的为驾驶员的视线与驾驶员所驾驶的车辆正前方之间的夹角。
在一个可选示例中,确定区域状态模块810可以直接利用检测人脸关键点模块800所检测出的人脸关键点中的眼睛关键点进行计算,从而根据计算结果获得眼睛睁合状态信息;一个可选的例子,确定区域状态模块810直接计算人脸关键点中的关键点53和关键点57之间的距离,并计算关键点52和关键点55之间的距离,确定区域状态模块810利用关键点52和关键点55之间的距离对关键点53和关键点57之间的距离进行归一化处理,归一化处理后的数值即为眼睛睁合状态信息;另一个可选的例子,确定区域状态模块810可以直接计算人脸关键点中的关键点72和关键点73之间的距离,并计算关键点52和关键点55之间的距离,确定区域状态模块810利用关键点52和关键点55之间的距离对关键点72和关键点73之间的距离进行归一化处理,归一化处理后的数值即为眼睛睁合状态信息。本申请中的关键点之间的距离是利用关键点的坐标计算的。
在一个可选示例中,确定区域状态模块810可以先利用检测人脸关键点模块800所获得的人脸关键点中的眼睛关键点(例如,眼睛关键点在驾驶员图像中的坐标信息)对驾驶员图像中的眼睛进行定位,以获得眼睛图像,并利用该眼睛图像获得上眼睑线和下眼睑线,确定区域状态模块810通过计算上眼睑线和下眼睑线之间的间隔,获得眼睛睁合状态信息。一个可选的例子如下:
首先,确定区域状态模块810可以利用检测人脸关键点模块800所检测出的人脸关键点中的眼睛关键点定位驾驶员图像中的眼睛的位置(例如,定位右眼的位置或者左眼的位置);
其次,确定区域状态模块810根据定位出的眼睛的位置从驾驶员图像中剪切出眼睛图像,将剪切出的眼睛图像进行大小调整处理(如需要的话),以获得具有预定大小的眼睛图像,如通过对剪切出的眼睛图像进行放大/缩小处理获得预定大小的眼睛图像;
再次,确定区域状态模块810将剪切并调整大小后的眼睛图像提供给用于眼睑线定位的第一神经网络(也可以称为眼睑线定位深度神经网络),由第一神经网络针对输入的眼睛图像进行眼睑线定位,从而第一神经网络会针对输入的每一幅眼睛图像输出眼睑线信息,所述眼睑线信息可包括上眼睑线关键点、下眼睑线关键点等,例如,第一神经网络输出各上眼睑线关键点的编号及其在眼睛图像中的坐标信息、各下眼睑线关键点的编号及其在眼睛图像中的坐标信息、内眼角关键点的编号及其在眼睛图像中的坐标信息以及外眼角关键点的编号及其在眼睛图像中的坐标信息;第一神经网络输出的上眼睑线关键点的数量与下眼睑线关键点的数量通常相同;例如,图3中,关键点12、关键点13、关键点14、关键点15、关键点16、关键点17、关键点18、关键点19、关键点20、关键点21为上眼睑线关键点,关键点0、关键点1、关键点2、关键点3、关键点4、关键点5、关键点6、关键点7、关键点8、关键点9为下眼睑关键点,关键点10为内眼角关键点,关键点11为外眼角关键点;另外,内眼 角关键点和外眼角关键点可以作为属于上眼睑线和下眼睑线共享的关键点;
最后,确定区域状态模块810根据各上眼睑线关键点和各下眼睑线关键点计算上眼睑线和下眼睑线之间的平均距离,并利用第一神经网络输出的内眼角关键点和外眼角关键点计算眼角距离,确定区域状态模块810利用计算出的眼角距离对平均距离进行归一化处理,从而获得上眼睑线和下眼睑线之间的间隔。
本申请中的上眼睑线可以由单眼上眼睑处的一定数量关键点的轨迹表示或拟合线表示;例如,在图3中,由关键点11、关键点12、关键点13、关键点14、关键点15、关键点16、关键点17、关键点18、关键点19、关键点20、关键点21以及关键点10的轨迹可表示上眼睑线或者这些关键点拟合而得的线表示上眼睑线;本申请中的下眼睑线可以由单眼下眼睑处的一定数量的关键点的轨迹表示或拟合线表示;例如,在图3中,由关键点11、关键点0、关键点1、关键点2、关键点3、关键点4、关键点5、关键点6、关键点7、关键点8、关键点9以及关键点10的轨迹可表示下眼睑线或者这些关键点拟合而得的线表示下眼睑线。关键点10为内眼角关键点,关键点11为外眼角关键点,关键点10可以划归在上眼睑线关键点中,也可以划归在下眼睑线关键点中,同样的,键点11可以划归在上眼睑线关键点中,也可以划归在下眼睑线关键点中。
图3中,计算上眼睑线和下眼睑线之间的平均距离可以为:计算关键点0与关键点12之间的距离,计算关键点1与关键点13之间的距离、计算关键点2与关键点14之间的距离、计算关键点3与关键点15之间的距离、计算关键点4与关键点16之间的距离、计算关键点5与关键点17之间的距离、计算关键点6与关键点18之间的距离、计算关键点7与关键点19之间的距离、计算关键点8与关键点20之间的距离、计算关键点9与关键点21之间的距离,计算上述10个距离之和与10的商,该商即为上眼睑线和下眼睑线之间的平均距离。
图3中,利用计算出的眼角距离对平均距离进行归一化处理可以为:计算关键点10与关键点11之间的眼角距离,并计算上述平均距离除以眼角距离的商,以进行归一化处理,两者相除所获得的商即为上眼睑线和下眼睑线之间的间隔。
需要特别说明的是,本申请中的上眼睑线和下眼睑线之间的间隔也可以采用其他方式来表示,例如,可以将位于眼睛中部,上下位置相对的一组关键点之间的距离与眼角距离的商作为上眼睑线和下眼睑线之间的间隔;再例如,可以采用非归一化的数值来表示,可选的,可以将上眼睑线和下眼睑线之间的平均距离作为上眼睑线和下眼睑线之间的间隔,也可以将位于眼睑线中部,上下位置相对的一组关键点之间的距离作为上眼睑线和下眼睑线之间的间隔等。本申请不限制上眼睑线和下眼睑线之间的间隔的具体表现形式。
本申请通过眼睛图像进行上眼睑线和下眼睑线定位,可以获得准确的上眼睑线和下眼睑线,从而基于定位获得的上眼睑线和下眼睑线来计算上眼睑线和下眼睑线之间的间隔,可以较为客观准确的反映出驾驶员眼睛的形态,有利于提高眼睛睁合状态信息的准确性,最终有利于提高确定驾驶状态的准确性。
在一个可选示例中,确定区域状态模块810可以直接利用检测人脸关键点模块800所检测出的人 脸关键点中的嘴巴关键点进行计算,从而根据计算结果获得嘴巴开合状态信息;一个可选的例子,如图4所示,确定区域状态模块810直接计算人脸关键点中的关键点87和关键点93之间的距离,并计算关键点84和关键点90之间的距离,确定区域状态模块810利用关键点84和关键点90之间的距离对关键点87和关键点93之间的距离进行归一化处理,归一化处理后的数值即为嘴巴开合状态信息;另一个可选的例子,确定区域状态模块810直接计算人脸关键点中的关键点88和关键点92之间的距离,并计算关键点84和关键点90之间的距离,确定区域状态模块810利用关键点84和关键点90之间的距离对关键点88和关键点92之间的距离进行归一化处理,归一化处理后的数值即为嘴巴开合状态信息。
在一个可选示例中,确定区域状态模块810可以先利用检测人脸关键点模块800所检测出的人脸关键点中的嘴巴关键点(例如,嘴巴关键点在驾驶员图像中的坐标信息)对驾驶员图像中的嘴巴进行定位,通过剪切等方式可以获得嘴巴图像,并利用该嘴巴图像获得上唇线和下唇线,确定区域状态模块810通过计算上唇线和下唇线之间的间隔,获得嘴巴开合状态信息;一个可选的例子如下:
首先,确定区域状态模块810可以利用检测人脸关键点模块800所检测出的人脸关键点中的嘴巴关键点定位驾驶员图像中的嘴巴的位置。
其次,确定区域状态模块810根据定位出的嘴巴的位置从驾驶员图像中剪切出嘴巴图像,并将剪切出的嘴巴图像进行大小调整处理,以获得具有预定大小的嘴巴图像,如通过对剪切出的嘴巴图像进行放大/缩小处理获得预定大小的嘴巴图像。
再次,确定区域状态模块810将剪切并调整大小后的嘴巴图像提供给用于唇线定位的第二神经网络(也可以称为嘴唇关键点定位深度神经网络),由第二神经网络针对输入的嘴巴图像进行唇线定位,从而第二神经网络会针对输入的每一幅嘴巴图像输出唇线关键点信息。例如,第二神经网络输出各上唇线关键点的编号及其在嘴巴图像中的坐标信息、各下唇线关键点的编号及其在嘴巴图像中的坐标信息等;第二神经网络输出的上唇线关键点的数量与下唇线关键点的数量通常相同,两个嘴角关键点可以看作上唇线和下唇线共有的关键点,也可看作是不属于上唇线和下唇线而独立存在的关键点,本申请实施例对此并不限制。
最后,确定区域状态模块810根据各上唇线关键点和各下唇线关键点计算上唇线和下唇线之间的平均距离,并利用第二神经网络输出的两个嘴角关键点计算嘴角距离,确定区域状态模块810利用计算出的嘴角距离对上述平均距离进行归一化处理,从而获得上唇线和下唇线之间的间隔。
本申请中的上唇线可以由上唇上下轮廓处各自的一定数量的关键点表示的轨迹信息或拟合线表示;也可以由上唇上轮廓处的一定数量的关键点表示的轨迹信息或拟合线表示;本申请中的下唇线可以由下唇上下轮廓处各自的一定数量关键点表示的轨迹信息或拟合线表示;也可以由下唇下轮廓处的一定数量的关键点表示的轨迹信息或拟合线表示;两个嘴角关键点可以划归在上唇线关键点中,也可以划归在下唇线关键点中,本申请实施例对此并不限制。
确定区域状态模块810计算上唇线和下唇线之间的平均距离以及利用嘴角距离对该平均距离进行归一化处理的具体方式可以参照上述针对图3的说明,在此不再详细说明。
需要特别说明的是,本申请中的上唇线和下唇线之间的间隔也可以采用其他方式来表示,例如,可以将位于唇线中部,上下位置相对的一组关键点之间的距离与嘴角距离的商作为上唇线和下唇线之间的间隔;再例如,可以采用非归一化的数值来表示,可选的,可以将上唇线和下唇线之间的平均距离作为上唇线和下唇线之间的间隔,也可以将位于唇线中部,上下位置相对的一组关键点之间的距离作为上唇线和下唇线之间的间隔等。本申请不限制上眼睑线和下眼睑线之间的间隔的具体表现形式。
本申请通过针对剪切调整大小的嘴巴图像进行上唇线定位以及下唇线定位,可以获得准确的上唇线和下唇线,从而基于唇线定位获得的上唇线和下唇线来计算上唇线和下唇线之间的间隔,可以较为客观准确的反映出驾驶员嘴巴的形态,有利于提高嘴巴开合状态信息的准确性,最终有利于提高确定驾驶状态的准确性。
在一个可选示例中,由于人脸关键点中通常会包含有头部姿态特征信息,因此,确定区域状态模块810可以根据检测人脸关键点模块800所获得的人脸关键点获取到头部姿态特征信息,从而确定区域状态模块810可以根据头部姿态特征信息确定出人脸朝向信息,人脸朝向信息可以表现出人脸转动的方向以及角度,这里的转动的方向可以为向左转动、向右转动、向下转动和/或者向上转动等。
确定区域状态模块810可以采用多种方式获得各驾驶员图像的人脸朝向信息;在一个可选示例中,确定区域状态模块810可以利用相应的神经网络来获得各驾驶员图像的人脸朝向信息;一个可选的例子,确定区域状态模块810可以将检测人脸关键点模块800所获得的人脸关键点提供给用于提取头部姿态特征信息的第三神经网络(也可以称为头部姿态特征检测深度神经网络),由第三神经网络针对当前输入的一组人脸关键点进行头部姿态特征信息提取出来,并针对当前输入的该组人脸关键点输出头部姿态特征信息,确定区域状态模块810将第三神经网络输出的头部姿态特征信息提供给用于估测头部朝向的第四神经网络(也可以称为头部朝向检测深度神经网络),由第四神经网络针对当前输入的一组头部姿态特征信息进行人脸朝向计算,确定区域状态模块810基于第四神经网络输出的信息获取到一组人脸关键点对应的人脸朝向信息。
在采用现有的发展较为成熟,具有较好的实时性的用于提取头部姿态特征信息的第三神经网络和用于估测头部朝向的第四神经网络来获取人脸朝向信息的情况下,针对摄像头摄取到的视频,确定区域状态模块810可以准确及时的检测出视频中的各图像帧(即各驾驶员图像)所对应的人脸朝向信息,从而有利于提高确定疲劳驾驶程度的准确性。
在一个可选示例中,确定区域状态模块810可以利用人脸关键点中的眼睛关键点所定位的眼睛图像,确定出瞳孔边沿位置,并根据瞳孔边沿位置计算瞳孔中心位置,确定区域状态模块810通常对瞳孔中心位置与眼睛图像中的眼睛中心位置进行计算,即可获得视线方向信息,例如,计算瞳孔中心位置与眼睛图像中的眼睛中心位置的向量,该向量即可作为视线方向信息。
一个可选的例子,在确定区域状态模块810获得了从驾驶员图像中剪切并放大的眼睛图像后,确定区域状态模块810将剪切放大后的眼睛图像提供给用于瞳孔定位的第五神经网络(也可以称为瞳孔关键点定位神经网络),由第五神经网络针对当前输入的眼睛图像进行瞳孔关键点检测,并针对当前输入的眼睛图像输出检测到的瞳孔关键点,确定区域状态模块810根据第五神经网络输出的瞳孔关键 点获取到瞳孔边沿位置,例如,图5中的瞳孔位置处的圆圈即为确定区域状态模块810获取到的瞳孔边沿位置;确定区域状态模块810通过对瞳孔边沿位置进行计算(例如,计算圆心位置),即可获得瞳孔中心位置。
在一个可选示例中,确定区域状态模块810可以基于上述上眼睑线和下眼睑线获取到眼睛中心位置,例如,确定区域状态模块810将上眼睑线和下眼睑线的所有关键点的坐标信息进行相加,并除以上眼睑线和下眼睑线的所有关键点的数量,确定区域状态模块810将相除后获得的坐标信息作为眼睛中心位置。当然,确定区域状态模块810也可以采用其他方式获取眼睛中心位置,例如,确定区域状态模块810针对检测人脸关键点模块800所获得的人脸关键点中的眼睛关键点进行计算,从而获得眼睛中心位置;本申请不限制确定区域状态模块810获取眼睛中心位置的具体实现方式。
本申请通过在瞳孔关键点检测的基础上来获取瞳孔中心位置,可以获取到更为准确的瞳孔中心位置;通过在眼睑线定位的基础上来获取眼睛中心位置,可以获取到更为准确的眼睛中心位置,从而本申请在利用瞳孔中心位置和眼睛中心位置来确定视线方向时,可以获得较为准确的视线方向信息。另外,通过利用瞳孔关键点检测的方式来定位瞳孔中心位置,并利用瞳孔中心位置和眼睛中心位置来确定视线方向,使确定视线方向的实现方式在具有准确性,还兼具有易于实现的特点。
在一个可选示例中,本申请可以采用现有的神经网络来实现瞳孔边沿位置的检测以及眼睛中心位置的检测。
S320、根据一段时间内的驾驶员人脸至少部分区域的多个状态信息,确定用于表征驾驶状态的至少一个指标的参数值。
在一个可选示例中,本申请中的步骤S320可以由处理器调用存储器中存储的用于根据一段时间内的驾驶员面部区域的多个状态信息确定用于表征驾驶状态的至少一个指标的参数值的指令执行,也可以由被处理器运行的确定指标参数值模块820执行。
在一个可选示例中,本申请对驾驶员的驾驶状态(例如,疲劳驾驶程度和/或驾驶专注程度)进行了量化,例如,将驾驶员的疲劳驾驶程度量化为基于闭眼程度的指标、基于打哈欠的指标、基于视线偏离程度的指标以及基于转头程度的指标中的至少一个;将驾驶专注程度量化为基于视线偏离程度的指标以及基于转头程度的指标中的至少一个。通过对驾驶员的驾驶状态进行量化,有利于及时客观的衡量驾驶员的驾驶状态。
上述基于闭眼程度的指标可以包括:闭眼次数、闭眼频率、半闭眼次数以及半闭眼频率中的至少一个;上述基于打哈欠的指标可以包括:是否打哈欠以及打哈欠的次数中的至少一个;上述基于视线偏离程度的指标可以包括:视线是否偏离以及视线是否严重偏离等中的至少一个;上述基于转头程度的指标(也可以称为基于转脸程度的指标或者基于回头程度的指标)可以包括:是否转头、是否短时间转头以及是否长时间转头中的至少一个。
在一个可选示例中,对于摄像头摄取到的驾驶员的视频而言,确定区域状态模块810可以获得视频中任意一段时间内的一系列的驾驶员面部区域的状态信息(例如,多个连续的驾驶员面部区域的状态信息,也可以称为一系列连续的驾驶员面部区域的状态信息),而确定指标参数值模块820通过针 对一系列的驾驶员面部区域的状态信息进行统计,即可获得用于表征驾驶状态的各指标的参数值。
在一个可选示例中,确定指标参数值模块820在判断出上眼睑线和下眼睑线之间的间隔小于第一预定间隔,小于第一预定间隔的这一现象持续了N帧(例如,持续了5帧或者6帧等),则确定指标参数值模块820确定驾驶员出现了一次闭眼现象,确定指标参数值模块820可以记录一次闭眼,也可以记录本次闭眼时长;确定指标参数值模块820在判断出上眼睑线和下眼睑线之间的间隔不小于第一预定间隔,小于第二预定间隔,在不小于第一预定间隔,小于第二预定间隔的这一现象持续了N1帧(例如,持续了5帧或者6帧等),则确定指标参数值模块820确定驾驶员出现了一次半闭眼现象(也可以称为眯眼现象等),确定指标参数值模块820可以记录一次半闭眼,也可以记录本次半闭眼时长。
在一个可选示例中,确定指标参数值模块820在判断出上唇线和下唇线之间的间隔大于第三预定间隔,在大于第三预定间隔的这一现象持续了N2帧(例如,持续了10帧或者11帧等),则确定指标参数值模块820确定驾驶员出现了一次打哈欠现象,确定指标参数值模块820可以记录一次哈欠。
在一个可选示例中,确定指标参数值模块820在判断出人脸朝向信息大于第一朝向,大于第一朝向的这一现象持续了N3帧(例如,持续了9帧或者10帧等),则确定指标参数值模块820确定驾驶员出现了一次长时间大角度转头现象,确定指标参数值模块820可以记录一次长时间大角度转头,也可以记录本次转头时长;确定指标参数值模块820在判断出人脸朝向信息不大于第一朝向,大于第二朝向,在不大于第一朝向,大于第二朝向的这一现象持续了N3帧(例如,持续了9帧或者10帧等),则确定指标参数值模块820确定驾驶员出现了一次长时间小角度转头现象,确定指标参数值模块820可以记录一次小角度转头偏离,也可以记录本次转头时长。
在一个可选示例中,确定指标参数值模块820在判断出视线方向信息和汽车正前方之间的夹角大于第一夹角,大于第一夹角的这一现象持续了N4帧(例如,持续了8帧或9帧等),则确定指标参数值模块820确定驾驶员出现了一次视线严重偏离现象,确定指标参数值模块820可以记录一次视线严重偏离,也可以记录本次视线严重偏离时长;确定指标参数值模块820在判断出视线方向信息和汽车正前方之间的夹角不大于第一夹角,大于第二夹角,在不大于第一夹角,大于第二夹角的这一现象持续了N4帧(例如,持续了9帧或10帧等),则确定指标参数值模块820确定驾驶员出现了一次视线偏离现象,确定指标参数值模块820可以记录一次视线偏离,也可以记录本次视线偏离时长。
在一个可选示例中,上述第一预定间隔、第二预定间隔、第三预定间隔、第一朝向、第二朝向、第一夹角、第二夹角、N1、N2、N3以及N4的具体取值可以根据实际情况设置,本申请不限制具体取值的大小。
S330、根据上述参数值确定驾驶员的驾驶状态监测结果。
在一个可选示例中,本申请的步骤S330可以由处理器调用存储器中存储的用于根据上述参数值确定驾驶员的驾驶状态监测结果的指令执行,也可以由被处理器运行的确定驾驶状态模块830执行。
在一个可选示例中,本申请的驾驶状态监测结果可以包括:驾驶员疲劳监测结果;当然,本申请的驾驶状态监测结果也可以表现为其他形式,例如,驾驶状态监测结果可以包括:驾驶员注意力监测结果等。
上述驾驶状态监测结果可以可选的为疲劳驾驶程度,例如,疲劳驾驶程度可以包括:正常驾驶级别(也可以称为非疲劳驾驶级别)以及疲劳驾驶级别;其中的疲劳驾驶级别可以为一个级别,也可以被划分为多个不同的级别,例如,上述疲劳驾驶级别可以被划分为:提示疲劳驾驶级别(也可以称为轻度疲劳驾驶级别)和警告疲劳驾驶级别(也可以称为重度疲劳驾驶级别);当然,疲劳驾驶程度可以被划分为更多级别,例如,轻度疲劳驾驶级别、中度疲劳驾驶级别以及重度疲劳驾驶级别等。本申请不限制疲劳驾驶程度所包括的不同级别。
上述驾驶员注意力监测结果可以可选的为专注驾驶程度,例如,专注驾驶程度可以包括:专注驾驶级别(也可以称为未分心驾驶级别)以及分心驾驶级别;其中的分心驾驶级别可以为一个级别,也可以被划分为多个不同的级别,例如,上述分心驾驶级别可以被划分为:提示分心驾驶级别(也可以称为轻度分心驾驶级别)和警告分心驾驶级别(也可以称为重度疲劳驾驶级别);当然,分心驾驶程度可以被划分为更多级别,例如,轻度分心驾驶级别、中度分心驾驶级别以及重度分心驾驶级别等。本申请不限制分心驾驶程度所包括的不同级别。
在一个可选示例中,本申请中的疲劳驾驶程度所包含的每一个级别均对应有预设条件,确定驾驶状态模块830应实时的判断确定指标参数值模块820所统计的指标的参数值所满足的预设条件,确定驾驶状态模块830可以将被满足的预设条件所对应的级别确定为驾驶员的当前疲劳驾驶程度。
在一个可选示例中,正常驾驶级别对应的预设条件可以包括:
条件1、不存在半闭眼以及闭眼现象;
条件2,不存在打哈欠现象;
条件3、不存在转头偏离现象或者存在短暂的小角度转头偏离现象(即小角度转头偏离的时长不超过一预设时长,例如,2-3秒等);
条件4、不存在视线偏离现象或者存在短暂的小角度视线偏离现象(即小角度视线偏离的时长不超过一预设时长,例如,3-4秒等);
在上述条件1、条件2、条件3以及条件4均满足的情况下,确定驾驶状态模块830可以确定出驾驶员当前处于正常驾驶级别。
在一个可选示例中,提示疲劳驾驶级别对应的预设条件可以包括:
条件11、存在半闭眼现象;
条件22、存在打哈欠现象;
条件33、存在较短时间范围内的小角度转头偏离现象(即小角度转头偏离的时长不超过一预设时长,例如,5-8秒等);
条件44,存在较短时间范围内的小角度视线偏离现象(即小角度视线偏离的时长不超过一预设时长,例如,5-8秒等);
在上述条件11、条件22、条件33以及条件44中的其中任一条件满足的情况下,确定驾驶状态模块830可以确定出驾驶员当前处于提示疲劳驾驶级别,此时,控制模块840可以通过声(如语音或者响铃等)/光(亮灯或者灯光闪烁等)/震动等方式提示驾驶员,以便于提醒驾驶员提高驾驶注意力, 促使驾驶员专心驾驶汽车或者促使驾驶员进行休息等。
在一个可选示例中,警告疲劳驾驶级别对应的预设条件可以包括:
条件111、存在闭眼现象或者在一段时间内的闭眼次数达到一预设次数或者在一段时间内的闭眼时间达到一预设时间;
条件222、在一段时间内的打哈欠的次数达到一预设次数;
条件333、存在较长时间范围内的转头偏离现象(即转头偏离的时长超过一预设时长,例如,5-8秒等);
条件444,存在较长时间范围内的视线偏离现象(即视线偏离的时长超过一预定时长,例如,5-8秒等)或者存在严重视线偏离现象(即视线偏离的角度超过一预定角度);
在上述条件111、条件222、条件333以及条件444中的其中任一条件满足的情况下,确定驾驶状态模块830可以确定出驾驶员当前处于警告疲劳驾驶级别,此时,控制模块840可以通过接管当前驾驶(即切换为接管驾驶模式)等方式,尽可能的保证行驶安全;接管当前驾驶可以可选的为切换为无人驾驶模式/自动驾驶模式等;同时,控制模块840还可以通过声(如语音或者响铃等)/光(亮灯或者灯光闪烁等)/震动等方式提示驾驶员,以便于提醒驾驶员提高驾驶注意力,促使驾驶员专心驾驶汽车或者促使驾驶员进行休息等。
在一个可选示例中,本申请中的专注驾驶程度所包含的每一个级别均对应有预设条件,确定驾驶状态模块830应实时的判断确定指标参数值模块820所统计的指标的参数值所满足的预设条件,确定驾驶状态模块830可以将被满足的预设条件所对应的级别确定为驾驶员的当前专注驾驶程度。
在一个可选示例中,专注驾驶级别对应的预设条件可以包括:
条件a、不存在转头偏离现象或者存在短暂的小角度转头偏离现象(即小角度转头偏离的时长不超过一预设时长,例如,2-3秒等);
条件b、不存在视线偏离现象或者存在短暂的小角度视线偏离现象(即小角度视线偏离的时长不超过一预设时长,例如,3-4秒等);
在上述条件a以及条件b均满足的情况下,确定驾驶状态模块830可以确定出驾驶员当前处于专注驾驶级别。
在一个可选示例中,提示分心驾驶级别对应的预设条件可以包括:
条件aa、存在较短时间范围内的小角度转头偏离现象(即小角度转头偏离的时长不超过一预设时长,例如,5-8秒等);
条件bb,存在较短时间范围内的小角度视线偏离现象(即小角度视线偏离的时长不超过一预设时长,例如,5-8秒等);
在上述条件aa以及条件bb中的其中任一条件满足的情况下,确定驾驶状态模块830可以确定出驾驶员当前处于提示分心驾驶级别,此时,控制模块840可以通过声(如语音或者响铃等)/光(亮灯或者灯光闪烁等)/震动等方式提示驾驶员,以便于提醒驾驶员提高驾驶专注度,促使驾驶员将被分散的注意力回归到驾驶上。
在一个可选示例中,警告分心驾驶级别对应的预设条件可以包括:
条件aaa、存在较长时间范围内的转头偏离现象(即转头偏离的时长超过一预设时长,例如,5-8秒等);
条件bbb,存在较长时间范围内的视线偏离现象(即视线偏离的时长超过一预定时长,例如,5-8秒等)或者存在严重视线偏离现象(即视线偏离的角度超过一预定角度);
在上述条件aaa以及条件bbb中的其中任一条件满足的情况下,确定驾驶状态模块830可以确定出驾驶员当前处于警告分心驾驶级别,此时,控制模块840可以通过接管当前驾驶(即切换为接管驾驶模式)等方式,尽可能的保证行驶安全;接管当前驾驶可以可选的为切换为无人驾驶模式/自动驾驶模式等;同时,控制模块840还可以通过声(如语音或者响铃等)/光(亮灯或灯光闪烁等)/震动等方式提示驾驶员,以便于提醒驾驶员提高驾驶专注度,促使将被分散的注意力回归到驾驶上。
在一个可选示例中,控制模块840所执行的上述操作可以由处理器调用存储器中存储的用于执行与驾驶状态监测结果对应的控制操作的指令来实现。
应用场景示例
参考图7,示意性地示出了根据本申请实施方式的可以在其中实现的一个应用场景。
图7中,驾驶员驾驶汽车100在道路上行驶,汽车100中安装有能够摄取到驾驶员面部的摄像头,摄像头所摄取的图像被实时的传输至设置于汽车中的CPU或者微处理器等数据处理单元(图7中未示出)处,由该数据处理单元对其接收到的图像进行实时分析,并根据图像的实时分析结果判断驾驶员当前是否处于疲劳驾驶状态或者是否处于分心驾驶状态,在判断的结果为驾驶员当前处于疲劳驾驶状态或者处于分心驾驶状态时,应及时的采用相应的措施,例如,发出语音提示,并点亮警示灯,同时启动震动装置;再例如,切换为无人驾驶模式/自动驾驶模式等等。
需要特别说明的是,上述应用场景中所涉及的摄像头和数据处理单元通常是汽车100自身配置设备,然而,上述摄像头和数据处理单元也可以是驾驶员自身携带的电子设备中的摄像头和数据处理单元,例如,上述应用场景中的摄像头和数据处理单元可以为驾驶员的智能移动电话或者平板电脑等电子设备自带的摄像头和数据处理单元;一个可选的例子,驾驶员的智能移动电话通过固定支架而被设置在汽车100的操作台上,该智能移动电话的摄像头对准驾驶员面部,在智能移动电话中安装的相应APP被启动运行后,智能移动电话中的摄像头处于摄取图像的工作状态,智能移动电话中的数据处理单元对摄像头摄取到的图像进行实时分析,并根据图像的实时分析结果判断驾驶员当前是否处于疲劳驾驶状态或者是否处于分心驾驶状态,在判断的结果为驾驶员当前处于疲劳驾驶状态或者处于分心驾驶状态时,可以通过声、光和/或震动等方式警示驾驶员。
另外,在一个应用场景中,基于汽车100中的摄像头(例如,汽车100自身配置的摄像头或者驾驶员随身携带的电子设备的摄像头等)所摄取到驾驶员面部的图像可以通过无线网络被实时的传输至后台电子设备,例如,被实时传输至驾驶员家中/办公室中的台式计算机,再例如,被实时传输至相应APP对应的服务器,由后台电子设备对其接收到的图像进行实时分析,并根据图像的实时分析结果 判断驾驶员当前是否处于疲劳驾驶状态或者是否处于分心驾驶状态,在判断的结果为驾驶员当前处于疲劳驾驶状态或者处于分心驾驶状态时,后台电子设备应及时的采用相应措施,例如,后台电子设备通过无线网络向驾驶员随身携带的智能移动电话或者汽车100中的数据处理单元等发送警示命令,使智能移动电话或者汽车100中的数据处理单元等可以根据该警示命令发出语音提示和/或点亮警示灯和/或切换为无人驾驶模式等等。
还有,在一个应用场景中,基于汽车100自身配置的摄像头所摄取到驾驶员面部的图像可以通过蓝牙等近距离无线传输方式被实时的传输至驾驶员随身携带的电子设备,例如,摄取到的图像被实时传输至驾驶员随身携带的智能移动电话或者平板电脑或者笔记型计算机,由驾驶员随身携带的电子设备对其接收到的图像进行实时分析,并根据图像的实时分析结果判断驾驶员当前是否处于疲劳驾驶状态或者是否处于分心驾驶状态,在判断的结果为驾驶员当前处于疲劳驾驶状态或者处于分心驾驶状态时,驾驶员随身携带的电子设备应及时的采用相应措施,例如,驾驶员随身携带的电子设备通过蓝牙等近距离无线传输方式向汽车100中的数据处理单元发送警示命令,使汽车100自身配置的数据处理单元可以根据该警示命令发出语音提示和/或点亮警示灯和/或切换为无人驾驶模式等等;再例如,驾驶员随身携带的电子设备自身发出语音提示和/或震动提示等。
可以理解,本发明实施例中提及的任一种神经网络是针对特定任务(如关键点信息定位任务等)采用监督、半监督或无监督等方式预先训练完成的神经网络,训练的具体方式本申请实施例并不限定。在一个可选示例中,神经网络可采用监督方式预先训练完成,如采用人脸某器官的标注数据预先训练所述神经网络。所述神经网络的网络结构可根据关键点信息定位任务的需要灵活设计,本申请实施例并不限制。例如,神经网络可包括但不限于卷积层、非线性Relu层、池化层、全连接层等,网络层数越多,网络越深;又例如神经网络的网络结构可采用但不限于ALexNet、深度残差网络(Deep Residual Network,ResNet)或VGGnet(Visual Geometry Group Network)等网络的结构。
然而,本领域技术人员完全可以理解,本申请实施方式的适用场景不受到该框架任何方面的限制。上述应用场景是以汽车驾驶为例,可以理解,本申请实施方式的使用场景还可以广泛应用于船、飞机、货车、列车、地铁、轻轨等其他各种交通工具驾驶过程中驾驶员驾驶状态的监控。
本申请提供的驾驶状态监控技术方案,通过从摄取到的驾驶员图像中获取人脸关键点,使本申请可以基于获取到的人脸关键点对眼睛、嘴巴等面部区域进行定位,并通过对眼睛、嘴巴等面部区域的图像进行识别,可以获得面部区域的状态信息,例如,眼睛睁合状态信息、嘴巴开合状态信息、人脸朝向信息以及视线方向信息等;本申请通过对一段时间内的驾驶员面部的多个状态信息进行综合分析,可以量化出用于表征驾驶状态的至少一个指标的参数值,例如,半闭眼时间/半闭眼频率、闭眼时间/闭眼频率、打哈欠次数、侧脸时间以及视线偏离程度等等,从而根据量化的指标的参数值可以及时的衡量出驾驶员的疲劳驾驶程度或者分心驾驶程度等驾驶状态。
由于本申请是基于摄取的驾驶员图像来检测驾驶员的疲劳驾驶程度或者分心驾驶程度等驾驶状态,因此,本申请可以避免基于驾驶员生理信号检测驾驶员的疲劳驾驶程度或者分心驾驶程度等而需要在驾驶员身上设置传感器等元器件的现象,使驾驶员驾驶状态监测易于实施;由于本申请中的用于 表征驾驶状态的指标是针对驾驶员疲劳驾驶或者分心驾驶等驾驶状态进行量化的指标,本申请可以实现及时的将实时摄取到的驾驶员图像转换为量化出的指标的参数值,因此,本申请可以利用实时获取到的指标的参数值及时的判别驾驶员的疲劳驾驶程度或者分心驾驶程度,从而可以较为客观准确的确定出驾驶员的疲劳驾驶程度或者分心驾驶程度等驾驶状态。
可能以许多方式来实现本申请的方法和装置、电子设备以及计算机可读存储介质。例如,可通过软件、硬件、固件或者软件、硬件、固件的任何组合来实现本申请的方法和装置、电子设备以及计算机可读存储介质。用于方法的步骤的上述顺序仅是为了进行说明,本申请的方法的步骤不限于以上具体描述的顺序,除非以其它方式特别说明。此外,在一些实施方式中,还可将本申请实施为记录在记录介质中的程序,这些程序包括用于实现根据本申请的方法的机器可读指令。因而,本申请还覆盖存储用于执行根据本申请的方法的程序的记录介质。
本申请的描述是为了示例和描述起见而给出的,而并不是无遗漏的或者将本申请限于所公开的形式。很多修改和变化对于本领域的普通技术人员而言是显然的。选择和描述实施方式是为了更好说明本申请的原理和实际应用,并且使本领域的普通技术人员能够理解本申请从而设计适于特定用途的带有各种修改的各种实施方式。

Claims (28)

  1. 一种驾驶状态监控方法,其特征在于,包括:
    检测驾驶员图像的人脸关键点;
    根据所述人脸关键点确定驾驶员人脸至少部分区域的状态信息;
    根据一段时间内的驾驶员人脸至少部分区域的多个状态信息,确定用于表征驾驶状态的至少一个指标的参数值;
    根据所述参数值确定驾驶员的驾驶状态监测结果。
  2. 根据权利要求1所述的方法,其特征在于,所述检测驾驶员图像的人脸关键点之前,还包括:
    通过红外摄像头拍摄驾驶员图像。
  3. 根据权利要求1或2所述的方法,其特征在于,
    所述驾驶状态监测结果包括:驾驶员疲劳监测结果,所述驾驶员人脸至少部分区域的状态信息包括:眼睛睁合状态信息、嘴巴开合状态信息、人脸朝向信息以及视线方向信息中的至少一个;和/或,
    所述驾驶状态监测结果包括:驾驶员注意力监测结果,所述驾驶员人脸至少部分区域的状态信息包括:人脸朝向信息以及视线方向信息中的至少一个。
  4. 根据权利要求1至3任一项所述的方法,其特征在于,所述用于表征驾驶状态的至少一个指标包括:基于闭眼程度的指标、基于打哈欠的指标、基于视线偏离程度的指标以及基于人脸朝向偏离程度的指标中的至少一个。
  5. 根据权利要求4所述的方法,其特征在于,所述基于闭眼程度的指标的参数值包括以下至少之一:驾驶员的闭眼次数、闭眼频率、闭眼持续时长、闭眼幅度、半闭眼次数、半闭眼频率;和/或,
    所述基于打哈欠的指标的参数值包括以下至少之一:驾驶员的哈欠状态、打哈欠次数、时长、频率;和/或,
    基于视线方向偏离程度的指标的参数值包括以下至少之一:视线方向偏离角度、时长、频率;和/或,
    基于人脸朝向偏离程度的指标的参数值包括以下至少之一:转头次数、转头持续时长、转头频率。
  6. 根据权利要求3至5任一项所述的方法,其特征在于,所述根据所述参数值确定驾驶员的驾驶状态监测结果包括:
    将所述参数值所满足的预设条件所对应的疲劳驾驶程度等级和/或注意力确定为驾驶状态监测结果。
  7. 根据权利要求1-6任一所述的方法,其特征在于,所述方法还包括:
    执行与所述驾驶状态监测结果对应的控制操作。
  8. 根据权利要求7所述的方法,其特征在于,所述执行与所述驾驶状态监测结果对应的控制操作,包括以下至少之一:
    如果确定的所述驾驶状态监测结果满足提示/告警预定条件,输出与所述提示/告警预定条件相应的提示/告警信息;
    如果确定的所述驾驶状态监测结果满足驾驶模式切换预定条件,将驾驶模式切换为接管驾驶模式。
  9. 根据权利要求3-8任一所述的方法,其特征在于,所述根据所述人脸关键点确定驾驶员人脸至少部分区域的状态信息,包括:
    根据所述人脸关键点分割出所述图像中的人眼区域图像;
    基于第一神经网络对所述人眼区域图像进行上眼睑线和下眼睑线的检测;
    根据所述上眼睑线和下眼睑线之间的间隔确定驾驶员的眼睛睁合状态信息。
  10. 根据权利要求3-9任一所述的方法,其特征在于,所述根据所述人脸关键点确定驾驶员人脸至少部分区域的状态信息,包括:
    根据所述人脸关键点分割出所述图像中的嘴部区域图像;
    基于第二神经网络对所述嘴部区域图像进行上唇线和下唇线的检测;
    根据所述上唇线和下唇线之间的间隔确定驾驶员的嘴巴开合状态信息。
  11. 根据权利要求3-10任一所述的方法,其特征在于,所述根据所述人脸关键点确定驾驶员面部区域的状态信息包括:
    根据所述人脸关键点获取头部姿态特征信息;
    根据所述头部姿态特征信息确定人脸朝向信息。
  12. 根据权利要求11所述的方法,其特征在于,所述根据所述人脸关键点获取头部姿态特征信息,根据所述头部姿态特征信息确定人脸朝向信息包括:
    基于所述人脸关键点并经第三神经网络提取头部姿态特征信息;
    基于所述头部姿态特征信息并经第四神经网络进行头部朝向估计,获得人脸朝向信息。
  13. 根据权利要求3-12任一所述的方法,其特征在于,所述根据所述人脸关键点确定驾驶员人脸至少部分区域的状态包括:
    根据所述人脸关键点中的眼睛关键点所定位的眼睛图像确定瞳孔边沿位置,并根据所述瞳孔边沿位置计算瞳孔中心位置;
    根据所述瞳孔中心位置与眼睛中心位置计算所述视线方向信息。
  14. 根据权利要求13所述的方法,其特征在于,所述根据所述人脸关键点中的眼睛关键点所定位的眼睛图像确定瞳孔边沿位置包括:
    基于第五神经网络对根据所述人脸关键点分割出的所述图像中的眼睛区域图像进行瞳孔边沿位置的检测,并根据第五神经网络输出的信息获取到瞳孔边沿位置。
  15. 一种驾驶状态监控装置,其特征在于,包括:
    检测人脸关键点模块,用于检测驾驶员图像的人脸关键点;
    确定区域状态模块,用于根据所述人脸关键点确定驾驶员人脸至少部分区域的状态信息;
    确定指标参数值模块,用于根据一段时间内的驾驶员人脸至少部分区域的多个状态信息,确定用于表征驾驶状态的至少一个指标的参数值;
    确定驾驶状态模块,用于根据所述参数值确定驾驶员的驾驶状态监测结果。
  16. 根据权利要求15所述的装置,其特征在于,所述装置还包括:
    控制模块,用于执行与所述驾驶状态监测结果对应的控制操作。
  17. 一种电子设备,包括:
    存储器,用于存储计算机程序;
    处理器,用于执行所述存储器中存储的计算机程序,所述计算机程序被执行时,下述指令被运行:
    用于检测驾驶员图像的人脸关键点的指令;
    用于根据所述人脸关键点确定驾驶员人脸至少部分区域的状态信息的指令;
    用于根据一段时间内的驾驶员人脸至少部分区域的多个状态信息,确定用于表征驾驶状态的至少一个指标的参数值的指令;
    用于根据所述参数值确定驾驶员的驾驶状态监测结果的指令。
  18. 根据权利要求17所述的电子设备,其特征在于,被运行的指令还包括:
    用于通过红外摄像头拍摄驾驶员图像的指令。
  19. 根据权利要求17或18所述的电子设备,其特征在于,被运行的指令还包括:
    用于执行与所述驾驶状态监测结果对应的控制操作的指令。
  20. 根据权利要求19所述的电子设备,其特征在于,所述用于执行与所述驾驶状态监测结果对应的控制操作的指令,包括以下至少之一:
    用于如果确定的所述驾驶状态监测结果满足提示/告警预定条件,输出与所述提示/告警预定条件相应的提示/告警信息的指令;
    用于如果确定的所述驾驶状态监测结果满足驾驶模式切换预定条件,将驾驶模式切换为接管驾驶模式的指令。
  21. 根据权利要求17-20任一所述的电子设备,其特征在于,所述用于根据所述人脸关键点确定驾驶员人脸至少部分区域的状态信息的指令,包括:
    用于根据所述人脸关键点分割出所述图像中的人眼区域图像的指令;
    用于基于第一神经网络对所述人眼区域图像进行上眼睑线和下眼睑线的检测的指令;
    用于根据所述上眼睑线和下眼睑线之间的间隔确定驾驶员的眼睛睁合状态信息的指令。
  22. 根据权利要求17-20任一所述的电子设备,其特征在于,所述用于根据所述人脸关键点确定驾驶员人脸至少部分区域的状态信息的指令,包括:
    用于根据所述人脸关键点分割出所述图像中的嘴部区域图像的指令;
    用于基于第二神经网络对所述嘴部区域图像进行上唇线和下唇线的检测的指令;
    用于根据所述上唇线和下唇线之间的间隔确定驾驶员的嘴巴开合状态信息的指令。
  23. 根据权利要求17-22任一所述的电子设备,其特征在于,所述用于根据所述人脸关键点确定驾驶员面部区域的状态信息的指令包括:
    用于根据所述人脸关键点获取头部姿态特征信息的指令;
    用于根据所述头部姿态特征信息确定人脸朝向信息的指令。
  24. 根据权利要求23所述的电子设备,其特征在于,所述用于根据所述人脸关键点获取头部姿态特征信息的指令,以及用于根据所述头部姿态特征信息确定人脸朝向信息的指令包括:
    用于基于所述人脸关键点并经第三神经网络提取头部姿态特征信息的指令;
    用于基于所述头部姿态特征信息并经第四神经网络进行头部朝向估计,获得人脸朝向信息的指令。
  25. 根据权利要求17-24任一所述的电子设备,其特征在于,所述用于根据所述人脸关键点确定驾驶员人脸至少部分区域的状态的指令包括:
    用于根据所述人脸关键点中的眼睛关键点所定位的眼睛图像确定瞳孔边沿位置,并根据所述瞳孔边沿位置计算瞳孔中心位置的指令;
    用于根据所述瞳孔中心位置与眼睛中心位置计算所述视线方向信息的指令。
  26. 根据权利要求25所述的电子设备,其特征在于,所述用于根据所述人脸关键点中的眼睛关键点所定位的眼睛图像确定瞳孔边沿位置的指令包括:
    用于基于第五神经网络对根据所述人脸关键点分割出的所述图像中的眼睛区域图像进行瞳孔边沿位置的检测,并根据第五神经网络输出的信息获取到瞳孔边沿位置的指令。
  27. 一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被处理器执行时实现上述权利要求1-14中任一项所述的方法中的步骤。
  28. 一种计算机程序,包括计算机可读代码,当所述计算机可读代码在设备中运行时,所述设备中的处理器执行用于实现权利要求1-14中的任一权利要求所述的方法中的步骤的可执行指令。
PCT/CN2017/096957 2017-08-10 2017-08-10 驾驶状态监控方法、装置和电子设备 WO2019028798A1 (zh)

Priority Applications (19)

Application Number Priority Date Filing Date Title
CN201780053499.4A CN109803583A (zh) 2017-08-10 2017-08-10 驾驶状态监控方法、装置和电子设备
PCT/CN2017/096957 WO2019028798A1 (zh) 2017-08-10 2017-08-10 驾驶状态监控方法、装置和电子设备
KR1020207007113A KR102391279B1 (ko) 2017-08-10 2018-04-25 운전 상태 모니터링 방법 및 장치, 운전자 모니터링 시스템 및 차량
CN201880003399.5A CN109937152B (zh) 2017-08-10 2018-04-25 驾驶状态监测方法和装置、驾驶员监控系统、车辆
JP2018568375A JP6933668B2 (ja) 2017-08-10 2018-04-25 運転状態監視方法及び装置、運転者監視システム、並びに車両
EP18845078.7A EP3666577A4 (en) 2017-08-10 2018-04-25 METHOD AND DEVICE FOR DRIVING CONDITION MONITORING, DRIVER MONITORING SYSTEM AND VEHICLE
PCT/CN2018/084526 WO2019029195A1 (zh) 2017-08-10 2018-04-25 驾驶状态监测方法和装置、驾驶员监控系统、车辆
SG11202002549WA SG11202002549WA (en) 2017-08-10 2018-04-25 Driving state monitoring methods and apparatuses, driver monitoring systems, and vehicles
US16/177,198 US10853675B2 (en) 2017-08-10 2018-10-31 Driving state monitoring methods and apparatuses, driver monitoring systems, and vehicles
CN201910152525.XA CN110399767A (zh) 2017-08-10 2019-02-28 车内人员危险动作识别方法和装置、电子设备、存储介质
SG11202009720QA SG11202009720QA (en) 2017-08-10 2019-12-27 Method and apparatus for identifying dangerous actions of persons in vehicle, electronic device and storage medium
KR1020207027781A KR20200124278A (ko) 2017-08-10 2019-12-27 차량 내 인원의 위험 동작 인식 방법 및 장치, 전자 기기, 저장 매체
PCT/CN2019/129370 WO2020173213A1 (zh) 2017-08-10 2019-12-27 车内人员危险动作识别方法和装置、电子设备、存储介质
JP2020551547A JP2021517313A (ja) 2017-08-10 2019-12-27 車両乗員の危険動作の認識方法及び装置、電子機器、並びに記憶媒体
TW109106588A TWI758689B (zh) 2017-08-10 2020-02-27 車內人員危險動作識別方法和裝置、電子設備、儲存介質
US17/034,290 US20210009150A1 (en) 2017-08-10 2020-09-28 Method for recognizing dangerous action of personnel in vehicle, electronic device and storage medium
US17/085,953 US20210049386A1 (en) 2017-08-10 2020-10-30 Driving state monitoring methods and apparatuses, driver monitoring systems, and vehicles
US17/085,972 US20210049387A1 (en) 2017-08-10 2020-10-30 Driving state monitoring methods and apparatuses, driver monitoring systems, and vehicles
US17/085,989 US20210049388A1 (en) 2017-08-10 2020-10-30 Driving state monitoring methods and apparatuses, driver monitoring systems, and vehicles

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/096957 WO2019028798A1 (zh) 2017-08-10 2017-08-10 驾驶状态监控方法、装置和电子设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/084526 Continuation WO2019029195A1 (zh) 2017-08-10 2018-04-25 驾驶状态监测方法和装置、驾驶员监控系统、车辆

Publications (1)

Publication Number Publication Date
WO2019028798A1 true WO2019028798A1 (zh) 2019-02-14

Family ID=65273075

Family Applications (3)

Application Number Title Priority Date Filing Date
PCT/CN2017/096957 WO2019028798A1 (zh) 2017-08-10 2017-08-10 驾驶状态监控方法、装置和电子设备
PCT/CN2018/084526 WO2019029195A1 (zh) 2017-08-10 2018-04-25 驾驶状态监测方法和装置、驾驶员监控系统、车辆
PCT/CN2019/129370 WO2020173213A1 (zh) 2017-08-10 2019-12-27 车内人员危险动作识别方法和装置、电子设备、存储介质

Family Applications After (2)

Application Number Title Priority Date Filing Date
PCT/CN2018/084526 WO2019029195A1 (zh) 2017-08-10 2018-04-25 驾驶状态监测方法和装置、驾驶员监控系统、车辆
PCT/CN2019/129370 WO2020173213A1 (zh) 2017-08-10 2019-12-27 车内人员危险动作识别方法和装置、电子设备、存储介质

Country Status (8)

Country Link
US (5) US10853675B2 (zh)
EP (1) EP3666577A4 (zh)
JP (2) JP6933668B2 (zh)
KR (2) KR102391279B1 (zh)
CN (3) CN109803583A (zh)
SG (2) SG11202002549WA (zh)
TW (1) TWI758689B (zh)
WO (3) WO2019028798A1 (zh)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110001652A (zh) * 2019-03-26 2019-07-12 深圳市科思创动科技有限公司 驾驶员状态的监测方法、装置及终端设备
CN110188655A (zh) * 2019-05-27 2019-08-30 上海蔚来汽车有限公司 驾驶状态评价方法、系统及计算机存储介质
CN110826521A (zh) * 2019-11-15 2020-02-21 爱驰汽车有限公司 驾驶员疲劳状态识别方法、系统、电子设备和存储介质
CN111160126A (zh) * 2019-12-11 2020-05-15 深圳市锐明技术股份有限公司 驾驶状态确定方法、装置、车辆及存储介质
WO2020173135A1 (zh) * 2019-02-28 2020-09-03 北京市商汤科技开发有限公司 神经网络训练及眼睛睁闭状态检测方法、装置及设备
CN111860280A (zh) * 2020-07-15 2020-10-30 南通大学 一种基于深度学习的驾驶员违章行为识别系统
CN112528792A (zh) * 2020-12-03 2021-03-19 深圳地平线机器人科技有限公司 疲劳状态检测方法、装置、介质及电子设备
CN112585656A (zh) * 2020-02-25 2021-03-30 华为技术有限公司 特殊路况的识别方法、装置、电子设备和存储介质
CN112660141A (zh) * 2020-12-29 2021-04-16 长安大学 一种通过驾驶行为数据的驾驶员驾驶分心行为识别方法
CN113128295A (zh) * 2019-12-31 2021-07-16 湖北亿咖通科技有限公司 一种车辆驾驶员危险驾驶状态识别方法及装置
CN113313019A (zh) * 2021-05-27 2021-08-27 展讯通信(天津)有限公司 一种分神驾驶检测方法、系统及相关设备
WO2022042203A1 (zh) * 2020-08-31 2022-03-03 魔门塔(苏州)科技有限公司 一种人体关键点的检测方法及装置
CN114187581A (zh) * 2021-12-14 2022-03-15 安徽大学 一种基于无监督学习的驾驶员分心细粒度检测方法
CN116052136A (zh) * 2023-03-27 2023-05-02 中国科学技术大学 分心检测方法、车载控制器和计算机存储介质

Families Citing this family (82)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018033137A1 (zh) * 2016-08-19 2018-02-22 北京市商汤科技开发有限公司 在视频图像中展示业务对象的方法、装置和电子设备
CN109803583A (zh) * 2017-08-10 2019-05-24 北京市商汤科技开发有限公司 驾驶状态监控方法、装置和电子设备
JP6888542B2 (ja) * 2017-12-22 2021-06-16 トヨタ自動車株式会社 眠気推定装置及び眠気推定方法
US10850746B2 (en) * 2018-07-24 2020-12-01 Harman International Industries, Incorporated Coordinating delivery of notifications to the driver of a vehicle to reduce distractions
US11170240B2 (en) * 2019-01-04 2021-11-09 Cerence Operating Company Interaction system and method
US10657396B1 (en) * 2019-01-30 2020-05-19 StradVision, Inc. Method and device for estimating passenger statuses in 2 dimension image shot by using 2 dimension camera with fisheye lens
CN111661059B (zh) * 2019-03-08 2022-07-08 虹软科技股份有限公司 分心驾驶监测方法、系统及电子设备
CN111845749A (zh) * 2019-04-28 2020-10-30 郑州宇通客车股份有限公司 一种自动驾驶车辆的控制方法及系统
CN109977930B (zh) * 2019-04-29 2021-04-02 中国电子信息产业集团有限公司第六研究所 疲劳驾驶检测方法及装置
GB2583742B (en) * 2019-05-08 2023-10-25 Jaguar Land Rover Ltd Activity identification method and apparatus
CN110263641A (zh) * 2019-05-17 2019-09-20 成都旷视金智科技有限公司 疲劳检测方法、装置及可读存储介质
US11281920B1 (en) * 2019-05-23 2022-03-22 State Farm Mutual Automobile Insurance Company Apparatuses, systems and methods for generating a vehicle driver signature
WO2020237664A1 (zh) * 2019-05-31 2020-12-03 驭势(上海)汽车科技有限公司 驾驶提醒方法、驾驶状态检测方法和计算设备
CN112241645A (zh) * 2019-07-16 2021-01-19 广州汽车集团股份有限公司 一种疲劳驾驶检测方法及其系统、电子设备
JP7047821B2 (ja) * 2019-07-18 2022-04-05 トヨタ自動車株式会社 運転支援装置
US10991130B2 (en) * 2019-07-29 2021-04-27 Verizon Patent And Licensing Inc. Systems and methods for implementing a sensor based real time tracking system
FR3100640B1 (fr) * 2019-09-10 2021-08-06 Faurecia Interieur Ind Procédé et dispositif de détection de bâillements d’un conducteur d’un véhicule
CN112758098B (zh) * 2019-11-01 2022-07-22 广州汽车集团股份有限公司 基于驾驶员状态等级的车辆驾驶权限接管控制方法及装置
CN110942591B (zh) * 2019-11-12 2022-06-24 博泰车联网科技(上海)股份有限公司 驾驶安全提醒系统以及方法
CN110837815A (zh) * 2019-11-15 2020-02-25 济宁学院 一种基于卷积神经网络的驾驶员状态监测方法
CN110968718B (zh) * 2019-11-19 2023-07-14 北京百度网讯科技有限公司 目标检测模型负样本挖掘方法、装置及电子设备
CN110909715B (zh) * 2019-12-06 2023-08-04 重庆商勤科技有限公司 基于视频图像识别吸烟的方法、装置、服务器及存储介质
JP2021096530A (ja) * 2019-12-13 2021-06-24 トヨタ自動車株式会社 運転支援装置、運転支援プログラムおよび運転支援システム
CN111160237A (zh) * 2019-12-27 2020-05-15 智车优行科技(北京)有限公司 头部姿态估计方法和装置、电子设备和存储介质
CN111191573A (zh) * 2019-12-27 2020-05-22 中国电子科技集团公司第十五研究所 一种基于眨眼规律识别的驾驶员疲劳检测方法
CN113126296B (zh) * 2020-01-15 2023-04-07 未来(北京)黑科技有限公司 一种提高光利用率的抬头显示设备
CN111243236A (zh) * 2020-01-17 2020-06-05 南京邮电大学 一种基于深度学习的疲劳驾驶预警方法及系统
US11873000B2 (en) 2020-02-18 2024-01-16 Toyota Motor North America, Inc. Gesture detection for transport control
JP7402084B2 (ja) * 2020-03-05 2023-12-20 本田技研工業株式会社 乗員行動判定装置
CN111783515A (zh) * 2020-03-18 2020-10-16 北京沃东天骏信息技术有限公司 行为动作识别的方法和装置
US11912307B2 (en) 2020-03-18 2024-02-27 Waymo Llc Monitoring head movements of drivers tasked with monitoring a vehicle operating in an autonomous driving mode
CN111460950B (zh) * 2020-03-25 2023-04-18 西安工业大学 自然驾驶通话行为中基于头-眼证据融合的认知分心方法
JP7380380B2 (ja) * 2020-03-26 2023-11-15 いすゞ自動車株式会社 運転支援装置
CN111626101A (zh) * 2020-04-13 2020-09-04 惠州市德赛西威汽车电子股份有限公司 一种基于adas的吸烟监测方法及系统
US20230154226A1 (en) * 2020-05-27 2023-05-18 Mitsubishi Electric Corporation Gesture detection apparatus and gesture detection method
WO2021240668A1 (ja) * 2020-05-27 2021-12-02 三菱電機株式会社 ジェスチャ検出装置およびジェスチャ検出方法
CN111611970B (zh) * 2020-06-01 2023-08-22 城云科技(中国)有限公司 一种基于城管监控视频的乱扔垃圾行为检测方法
CN111652128B (zh) * 2020-06-02 2023-09-01 浙江大华技术股份有限公司 一种高空电力作业安全监测方法、系统和存储装置
CN111767823A (zh) * 2020-06-23 2020-10-13 京东数字科技控股有限公司 一种睡岗检测方法、装置、系统及存储介质
JP7359087B2 (ja) * 2020-07-02 2023-10-11 トヨタ自動車株式会社 ドライバモニタ装置及びドライバモニタ方法
CN111785008A (zh) * 2020-07-04 2020-10-16 苏州信泰中运物流有限公司 一种基于gps和北斗定位的物流监控管理方法、装置及计算机可读存储介质
CN113920576A (zh) * 2020-07-07 2022-01-11 奥迪股份公司 车上人员的丢物行为识别方法、装置、设备及存储介质
US20220414796A1 (en) * 2020-07-08 2022-12-29 Pilot Travel Centers, LLC Computer implemented oil field logistics
CN111797784B (zh) * 2020-07-09 2024-03-05 斑马网络技术有限公司 驾驶行为监测方法、装置、电子设备及存储介质
US11776319B2 (en) * 2020-07-14 2023-10-03 Fotonation Limited Methods and systems to predict activity in a sequence of images
CN111860292A (zh) * 2020-07-16 2020-10-30 科大讯飞股份有限公司 基于单目相机的人眼定位方法、装置以及设备
CN111832526A (zh) * 2020-07-23 2020-10-27 浙江蓝卓工业互联网信息技术有限公司 一种行为检测方法及装置
CN112061065B (zh) * 2020-07-27 2022-05-10 大众问问(北京)信息科技有限公司 一种车内行为识别报警方法、设备、电子设备及存储介质
US11651599B2 (en) * 2020-08-17 2023-05-16 Verizon Patent And Licensing Inc. Systems and methods for identifying distracted driver behavior from video
CN112069931A (zh) * 2020-08-20 2020-12-11 深圳数联天下智能科技有限公司 一种状态报告的生成方法及状态监控系统
CN112016457A (zh) * 2020-08-27 2020-12-01 青岛慕容信息科技有限公司 驾驶员分神以及危险驾驶行为识别方法、设备和存储介质
CN112163470A (zh) * 2020-09-11 2021-01-01 高新兴科技集团股份有限公司 基于深度学习的疲劳状态识别方法、系统、存储介质
CN112307920B (zh) * 2020-10-22 2024-03-22 东云睿连(武汉)计算技术有限公司 一种高风险工种作业人员行为预警装置及方法
CN112347891B (zh) * 2020-10-30 2022-02-22 南京佑驾科技有限公司 基于视觉的舱内喝水状态检测方法
CN112270283A (zh) * 2020-11-04 2021-01-26 北京百度网讯科技有限公司 异常驾驶行为确定方法、装置、设备、车辆和介质
CN112356839A (zh) * 2020-11-06 2021-02-12 广州小鹏自动驾驶科技有限公司 一种驾驶状态监测方法、系统及汽车
JP2022077282A (ja) * 2020-11-11 2022-05-23 株式会社コムテック 警報システム
KR102443980B1 (ko) * 2020-11-17 2022-09-21 주식회사 아르비존 차량 제어 방법
TWI739675B (zh) * 2020-11-25 2021-09-11 友達光電股份有限公司 影像辨識方法及裝置
CN112455452A (zh) * 2020-11-30 2021-03-09 恒大新能源汽车投资控股集团有限公司 驾驶状态的检测方法、装置及设备
CN112766050B (zh) * 2020-12-29 2024-04-16 富泰华工业(深圳)有限公司 着装及作业检查方法、计算机装置及存储介质
CN112754498B (zh) * 2021-01-11 2023-05-26 一汽解放汽车有限公司 驾驶员的疲劳检测方法、装置、设备及存储介质
JP2022130086A (ja) * 2021-02-25 2022-09-06 トヨタ自動車株式会社 タクシー車両およびタクシーシステム
CN114005104A (zh) * 2021-03-23 2022-02-01 深圳市创乐慧科技有限公司 一种基于人工智能的智能驾驶方法、装置及相关产品
CN113139531A (zh) * 2021-06-21 2021-07-20 博泰车联网(南京)有限公司 困倦状态检测方法及装置、电子设备、可读存储介质
CN113298041A (zh) * 2021-06-21 2021-08-24 黑芝麻智能科技(上海)有限公司 用于标定驾驶员分心参考方向的方法及系统
CN113486759B (zh) * 2021-06-30 2023-04-28 上海商汤临港智能科技有限公司 危险动作的识别方法及装置、电子设备和存储介质
CN113537135A (zh) * 2021-07-30 2021-10-22 三一重机有限公司 一种驾驶监测方法、装置、系统及可读存储介质
CN113734173B (zh) * 2021-09-09 2023-06-20 东风汽车集团股份有限公司 车辆智能监控方法、设备及存储介质
KR102542683B1 (ko) * 2021-09-16 2023-06-14 국민대학교산학협력단 손 추적 기반 행위 분류 방법 및 장치
FR3127355B1 (fr) * 2021-09-20 2023-09-29 Renault Sas procédé de sélection d’un mode de fonctionnement d’un dispositif de capture d’images pour reconnaissance faciale
KR102634012B1 (ko) * 2021-10-12 2024-02-07 경북대학교 산학협력단 딥러닝 기반 객체 분류를 이용한 운전자 행동 검출 장치
CN114162130B (zh) * 2021-10-26 2023-06-20 东风柳州汽车有限公司 驾驶辅助模式切换方法、装置、设备及存储介质
CN114005105B (zh) * 2021-12-30 2022-04-12 青岛以萨数据技术有限公司 驾驶行为检测方法、装置以及电子设备
CN114582090A (zh) * 2022-02-27 2022-06-03 武汉铁路职业技术学院 一种轨道车辆驾驶监测预警系统
CN114666378A (zh) * 2022-03-03 2022-06-24 武汉科技大学 一种重型柴油车车载远程监控系统
KR20230145614A (ko) 2022-04-07 2023-10-18 한국기술교육대학교 산학협력단 운전자 안전 모니터링 시스템 및 방법
CN115035502A (zh) * 2022-07-08 2022-09-09 北京百度网讯科技有限公司 驾驶员的行为监测方法、装置、电子设备及存储介质
CN114898341B (zh) * 2022-07-14 2022-12-20 苏州魔视智能科技有限公司 疲劳驾驶预警方法、装置、电子设备及存储介质
CN115601709B (zh) * 2022-11-07 2023-10-27 北京万理软件开发有限公司 煤矿员工违规统计系统、方法、装置以及存储介质
CN116311181B (zh) * 2023-03-21 2023-09-12 重庆利龙中宝智能技术有限公司 一种异常驾驶的快速检测方法及系统
CN116645732B (zh) * 2023-07-19 2023-10-10 厦门工学院 一种基于计算机视觉的场地危险活动预警方法及系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000102510A (ja) * 1998-09-29 2000-04-11 Oki Electric Ind Co Ltd 眼の開度測定方法および装置
US7253739B2 (en) * 2005-03-10 2007-08-07 Delphi Technologies, Inc. System and method for determining eye closure state
CN101030316A (zh) * 2007-04-17 2007-09-05 北京中星微电子有限公司 一种汽车安全驾驶监控系统和方法
CN101540090A (zh) * 2009-04-14 2009-09-23 华南理工大学 基于多元信息融合的驾驶员疲劳监测装置及其监测方法
CN101692980A (zh) * 2009-10-30 2010-04-14 吴泽俊 疲劳驾驶检测方法
CN105980228A (zh) * 2014-02-12 2016-09-28 株式会社电装 驾驶辅助装置

Family Cites Families (107)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2546415B2 (ja) * 1990-07-09 1996-10-23 トヨタ自動車株式会社 車両運転者監視装置
US7738678B2 (en) * 1995-06-07 2010-06-15 Automotive Technologies International, Inc. Light modulation techniques for imaging objects in or around a vehicle
JP3843513B2 (ja) * 1996-12-24 2006-11-08 トヨタ自動車株式会社 車両用警報装置
JPH11161798A (ja) * 1997-12-01 1999-06-18 Toyota Motor Corp 車両運転者監視装置
JP3495934B2 (ja) * 1999-01-08 2004-02-09 矢崎総業株式会社 事故防止システム
US20120231773A1 (en) * 1999-08-27 2012-09-13 Lipovski Gerald John Jack Cuboid-based systems and methods for safe mobile texting.
AU2001258867A1 (en) * 2000-05-04 2001-11-12 Jin-Ho Song Automatic vehicle management apparatus and method using wire and wireless communication network
JP2003131785A (ja) * 2001-10-22 2003-05-09 Toshiba Corp インタフェース装置および操作制御方法およびプログラム製品
US6926429B2 (en) * 2002-01-30 2005-08-09 Delphi Technologies, Inc. Eye tracking/HUD system
AU2003211065A1 (en) * 2002-02-19 2003-09-09 Volvo Technology Corporation System and method for monitoring and managing driver attention loads
US6873714B2 (en) * 2002-02-19 2005-03-29 Delphi Technologies, Inc. Auto calibration and personalization of eye tracking system using larger field of view imager with higher resolution
JP2004017939A (ja) * 2002-06-20 2004-01-22 Denso Corp 車両用情報報知装置及びプログラム
JP3951231B2 (ja) * 2002-12-03 2007-08-01 オムロン株式会社 安全走行情報仲介システムおよびそれに用いる安全走行情報仲介装置と安全走行情報の確認方法
US7639148B2 (en) * 2003-06-06 2009-12-29 Volvo Technology Corporation Method and arrangement for controlling vehicular subsystems based on interpreted driver activity
KR100494848B1 (ko) 2004-04-16 2005-06-13 에이치케이이카 주식회사 차량 탑승자가 차량 내부에서 수면을 취하는지 여부를감지하는 방법 및 장치
DE102005018697A1 (de) * 2004-06-02 2005-12-29 Daimlerchrysler Ag Verfahren und Vorrichtung zur Warnung eines Fahrers im Falle eines Verlassens der Fahrspur
JP4564320B2 (ja) * 2004-09-29 2010-10-20 アイシン精機株式会社 ドライバモニタシステム
CN1680779A (zh) * 2005-02-04 2005-10-12 江苏大学 驾驶员疲劳监测方法及装置
EP1894180A4 (en) * 2005-06-09 2011-11-02 Greenroad Driving Technologies Ltd SYSTEM AND METHOD FOR DISPLAYING A DRIVING PROFILE
US20070041552A1 (en) * 2005-06-13 2007-02-22 Moscato Jonathan D Driver-attentive notification system
JP2007237919A (ja) * 2006-03-08 2007-09-20 Toyota Motor Corp 車両用入力操作装置
CN101489467B (zh) * 2006-07-14 2011-05-04 松下电器产业株式会社 视线方向检测装置和视线方向检测方法
US20130150004A1 (en) * 2006-08-11 2013-06-13 Michael Rosen Method and apparatus for reducing mobile phone usage while driving
CN100462047C (zh) * 2007-03-21 2009-02-18 汤一平 基于全方位计算机视觉的安全驾驶辅助装置
JP2008302741A (ja) * 2007-06-05 2008-12-18 Toyota Motor Corp 運転支援装置
US20130275899A1 (en) * 2010-01-18 2013-10-17 Apple Inc. Application Gateway for Providing Different User Interfaces for Limited Distraction and Non-Limited Distraction Contexts
JP5208711B2 (ja) * 2008-12-17 2013-06-12 アイシン精機株式会社 Eye open/closed state determination device and program
US10019634B2 (en) * 2010-06-04 2018-07-10 Masoud Vaziri Method and apparatus for an eye tracking wearable computer
US9460601B2 (en) * 2009-09-20 2016-10-04 Tibet MIMAR Driver distraction and drowsiness warning and sleepiness reduction for accident avoidance
CN101877051A (zh) * 2009-10-30 2010-11-03 江苏大学 Driver attention state monitoring method and device
US20110224875A1 (en) * 2010-03-10 2011-09-15 Cuddihy Mark A Biometric Application of a Polymer-based Pressure Sensor
US10592757B2 (en) * 2010-06-07 2020-03-17 Affectiva, Inc. Vehicular cognitive data collection using multiple devices
US10074024B2 (en) * 2010-06-07 2018-09-11 Affectiva, Inc. Mental state analysis using blink rate for vehicles
CN101950355B (zh) * 2010-09-08 2012-09-05 中国人民解放军国防科学技术大学 Driver fatigue state detection method based on digital video
JP5755012B2 (ja) * 2011-04-21 2015-07-29 キヤノン株式会社 Information processing device, processing method therefor, program, and imaging device
US11270699B2 (en) * 2011-04-22 2022-03-08 Emerging Automotive, Llc Methods and vehicles for capturing emotion of a human driver and customizing vehicle response
JP5288045B2 (ja) * 2011-07-11 2013-09-11 トヨタ自動車株式会社 Vehicle emergency evacuation device
US8744642B2 (en) * 2011-09-16 2014-06-03 Lytx, Inc. Driver identification based on face data
CN102436715B (zh) * 2011-11-25 2013-12-11 大连海创高科信息技术有限公司 Fatigue driving detection method
KR20140025812A (ko) * 2012-08-22 2014-03-05 삼성전기주식회사 Drowsy driving detection device and method
JP2014048760A (ja) * 2012-08-29 2014-03-17 Denso Corp Information presentation system for presenting information to a vehicle driver, information presentation device, and information center
JP6036065B2 (ja) * 2012-09-14 2016-11-30 富士通株式会社 Gaze position detection device and gaze position detection method
US9405982B2 (en) * 2013-01-18 2016-08-02 GM Global Technology Operations LLC Driver gaze detection system
US20140272811A1 (en) * 2013-03-13 2014-09-18 Mighty Carma, Inc. System and method for providing driving and vehicle related assistance to a driver
US10210761B2 (en) * 2013-09-30 2019-02-19 Sackett Solutions & Innovations, LLC Driving assistance systems and methods
JP5939226B2 (ja) 2013-10-16 2016-06-22 トヨタ自動車株式会社 Driving assistance device
KR101537936B1 (ko) * 2013-11-08 2015-07-21 현대자동차주식회사 Vehicle and control method thereof
US10417486B2 (en) * 2013-12-30 2019-09-17 Alcatel Lucent Driver behavior monitoring systems and methods for driver behavior monitoring
JP6150258B2 (ja) * 2014-01-15 2017-06-21 みこらった株式会社 Self-driving vehicle
US20150310758A1 (en) * 2014-04-26 2015-10-29 The Travelers Indemnity Company Systems, methods, and apparatus for generating customized virtual reality experiences
US20160001785A1 (en) * 2014-07-07 2016-01-07 Chin-Jung Hsu Motion sensing system and method
US9714037B2 (en) * 2014-08-18 2017-07-25 Trimble Navigation Limited Detection of driver behaviors using in-vehicle systems and methods
US9796391B2 (en) * 2014-10-13 2017-10-24 Verizon Patent And Licensing Inc. Distracted driver prevention systems and methods
TW201615457A (zh) * 2014-10-30 2016-05-01 鴻海精密工業股份有限公司 Vehicle safety recognition and response system and method
CN104408879B (zh) * 2014-11-19 2017-02-01 湖南工学院 Fatigue driving early-warning processing method, device, and system
US10614726B2 (en) * 2014-12-08 2020-04-07 Life Long Driver, Llc Behaviorally-based crash avoidance system
CN104574817A (zh) * 2014-12-25 2015-04-29 清华大学苏州汽车研究院(吴江) Machine-vision-based fatigue driving early-warning system suitable for smartphones
JP2016124364A (ja) * 2014-12-26 2016-07-11 本田技研工業株式会社 Awakening device
US10705521B2 (en) 2014-12-30 2020-07-07 Visteon Global Technologies, Inc. Autonomous driving interface
DE102015200697A1 (de) * 2015-01-19 2016-07-21 Robert Bosch Gmbh Method and device for detecting microsleep of a vehicle driver
CN104688251A (zh) * 2015-03-02 2015-06-10 西安邦威电子科技有限公司 Method for detecting fatigued and abnormal-posture driving under multiple postures
FR3033303B1 (fr) * 2015-03-03 2017-02-24 Renault Sas Device and method for predicting the vigilance level of a motor vehicle driver
EP3286057B1 (en) * 2015-04-20 2021-06-09 Bayerische Motoren Werke Aktiengesellschaft Apparatus and method for controlling a user situation awareness modification of a user of a vehicle, and a user situation awareness modification processing system
CN105139583A (zh) * 2015-06-23 2015-12-09 南京理工大学 Vehicle danger reminding method based on a portable smart device
CN106327801B (zh) * 2015-07-07 2019-07-26 北京易车互联信息技术有限公司 Fatigue driving detection method and device
CN204915314U (zh) * 2015-07-21 2015-12-30 戴井之 Automobile safe driving device
CN105096528B (zh) * 2015-08-05 2017-07-11 广州云从信息科技有限公司 Fatigue driving detection method and system
US10769459B2 (en) * 2015-08-31 2020-09-08 Sri International Method and system for monitoring driving behaviors
CN105261153A (zh) * 2015-11-03 2016-01-20 北京奇虎科技有限公司 Vehicle driving monitoring method and device
CN105354985B (zh) * 2015-11-04 2018-01-12 中国科学院上海高等研究院 Fatigue driving monitoring device and method
JP6641916B2 (ja) * 2015-11-20 2020-02-05 オムロン株式会社 Automated driving support device, automated driving support system, automated driving support method, and automated driving support program
CN105574487A (zh) * 2015-11-26 2016-05-11 中国第一汽车股份有限公司 Driver attention state detection method based on facial features
CN105654753A (zh) * 2016-01-08 2016-06-08 北京乐驾科技有限公司 Intelligent in-vehicle safe driving assistance method and system
CN105769120B (zh) * 2016-01-27 2019-01-22 深圳地平线机器人科技有限公司 Fatigue driving detection method and device
FR3048544B1 (fr) * 2016-03-01 2021-04-02 Valeo Comfort & Driving Assistance Device and method for monitoring a motor vehicle driver
US10108260B2 (en) * 2016-04-01 2018-10-23 Lg Electronics Inc. Vehicle control apparatus and method thereof
WO2017208529A1 (ja) * 2016-06-02 2017-12-07 オムロン株式会社 Driver state estimation device, driver state estimation system, driver state estimation method, driver state estimation program, subject state estimation device, subject state estimation method, subject state estimation program, and recording medium
US20180012090A1 (en) * 2016-07-07 2018-01-11 Jungo Connectivity Ltd. Visual learning system and method for determining a driver's state
JP2018022229A (ja) * 2016-08-01 2018-02-08 株式会社デンソーテン Safe driving behavior notification system and safe driving behavior notification method
CN106218405A (zh) * 2016-08-12 2016-12-14 深圳市元征科技股份有限公司 Fatigue driving monitoring method and cloud server
CN106446811A (zh) * 2016-09-12 2017-02-22 北京智芯原动科技有限公司 Deep-learning-based driver fatigue detection method and device
EP3513265A4 (en) * 2016-09-14 2020-04-22 Nauto Global Limited SYSTEMS AND METHODS FOR DETERMINING ALMOST COLLISIONS
CN106355838A (zh) * 2016-10-28 2017-01-25 深圳市美通视讯科技有限公司 Fatigue driving detection method and system
US10246014B2 (en) * 2016-11-07 2019-04-02 Nauto, Inc. System and method for driver distraction determination
CN106709420B (zh) * 2016-11-21 2020-07-10 厦门瑞为信息技术有限公司 Method for monitoring the driving behavior of commercial vehicle drivers
US10467488B2 (en) * 2016-11-21 2019-11-05 TeleLingo Method to analyze attention margin and to prevent inattentive and unsafe driving
CN106585629B (zh) * 2016-12-06 2019-07-12 广东泓睿科技有限公司 Vehicle control method and device
CN106585624B (zh) * 2016-12-07 2019-07-26 深圳市元征科技股份有限公司 Driver state monitoring method and device
CN106781282A (zh) * 2016-12-29 2017-05-31 天津中科智能识别产业技术研究院有限公司 Intelligent driving driver fatigue early-warning system
CN106909879A (zh) * 2017-01-11 2017-06-30 开易(北京)科技有限公司 Fatigue driving detection method and system
CN106985750A (zh) * 2017-01-17 2017-07-28 戴姆勒股份公司 In-vehicle safety monitoring system for a vehicle, and automobile
FR3063557B1 (fr) * 2017-03-03 2022-01-14 Valeo Comfort & Driving Assistance Device for determining the attention state of a vehicle driver, on-board system comprising such a device, and associated method
WO2018167991A1 (ja) * 2017-03-14 2018-09-20 オムロン株式会社 Driver monitoring device, driver monitoring method, learning device, and learning method
US10922566B2 (en) * 2017-05-09 2021-02-16 Affectiva, Inc. Cognitive state evaluation for vehicle navigation
US10289938B1 (en) * 2017-05-16 2019-05-14 State Farm Mutual Automobile Insurance Company Systems and methods regarding image distification and prediction models
US10402687B2 (en) * 2017-07-05 2019-09-03 Perceptive Automata, Inc. System and method of predicting human interaction with vehicles
US10592785B2 (en) * 2017-07-12 2020-03-17 Futurewei Technologies, Inc. Integrated system for detection of driver condition
CN109803583A (zh) * 2017-08-10 2019-05-24 北京市商汤科技开发有限公司 Driving state monitoring method and apparatus, and electronic device
JP6666892B2 (ja) * 2017-11-16 2020-03-18 株式会社Subaru Driving support device and driving support method
CN107933471B (zh) * 2017-12-04 2019-12-20 惠州市德赛西威汽车电子股份有限公司 Method for proactively calling for rescue in an accident and vehicle-mounted automatic distress system
CN108407813A (zh) * 2018-01-25 2018-08-17 惠州市德赛西威汽车电子股份有限公司 Big-data-based anti-fatigue safe driving method for a vehicle
US10322728B1 (en) * 2018-02-22 2019-06-18 Futurewei Technologies, Inc. Method for distress and road rage detection
US10776644B1 (en) * 2018-03-07 2020-09-15 State Farm Mutual Automobile Insurance Company Image analysis technologies for assessing safety of vehicle operation
US10915769B2 (en) * 2018-06-04 2021-02-09 Shanghai Sensetime Intelligent Technology Co., Ltd Driving management methods and systems, vehicle-mounted intelligent systems, electronic devices, and medium
US10970571B2 (en) * 2018-06-04 2021-04-06 Shanghai Sensetime Intelligent Technology Co., Ltd. Vehicle control method and system, vehicle-mounted intelligent system, electronic device, and medium
JP6870660B2 (ja) * 2018-06-08 2021-05-12 トヨタ自動車株式会社 Driver monitoring device
CN108961669A (zh) * 2018-07-19 2018-12-07 上海小蚁科技有限公司 Safety early-warning method and device for ride-hailing vehicles, storage medium, and server

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000102510A (ja) * 1998-09-29 2000-04-11 Oki Electric Ind Co Ltd Method and device for measuring the degree of eye opening
US7253739B2 (en) * 2005-03-10 2007-08-07 Delphi Technologies, Inc. System and method for determining eye closure state
CN101030316A (zh) * 2007-04-17 2007-09-05 北京中星微电子有限公司 Automobile safe driving monitoring system and method
CN101540090A (zh) * 2009-04-14 2009-09-23 华南理工大学 Driver fatigue monitoring device based on multi-source information fusion and monitoring method thereof
CN101692980A (zh) * 2009-10-30 2010-04-14 吴泽俊 Fatigue driving detection method
CN105980228A (zh) * 2014-02-12 2016-09-28 株式会社电装 Driving assistance device

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7227385B2 (ja) 2019-02-28 2023-02-21 ベイジン センスタイム テクノロジー ディベロップメント カンパニー リミテッド Neural network training and eye open/closed state detection method, apparatus, and device
JP2022517398A (ja) * 2019-02-28 2022-03-08 ベイジン センスタイム テクノロジー ディベロップメント カンパニー リミテッド Neural network training and eye open/closed state detection method, apparatus, and device
WO2020173135A1 (zh) * 2019-02-28 2020-09-03 北京市商汤科技开发有限公司 Neural network training and eye open/closed state detection method, apparatus, and device
CN110001652A (zh) * 2019-03-26 2019-07-12 深圳市科思创动科技有限公司 Driver state monitoring method, apparatus, and terminal device
CN110001652B (zh) * 2019-03-26 2020-06-23 深圳市科思创动科技有限公司 Driver state monitoring method, apparatus, and terminal device
CN110188655A (zh) * 2019-05-27 2019-08-30 上海蔚来汽车有限公司 Driving state evaluation method, system, and computer storage medium
CN110826521A (zh) * 2019-11-15 2020-02-21 爱驰汽车有限公司 Driver fatigue state recognition method, system, electronic device, and storage medium
CN111160126B (zh) * 2019-12-11 2023-12-19 深圳市锐明技术股份有限公司 Driving state determination method, apparatus, vehicle, and storage medium
CN111160126A (zh) * 2019-12-11 2020-05-15 深圳市锐明技术股份有限公司 Driving state determination method, apparatus, vehicle, and storage medium
CN113128295A (zh) * 2019-12-31 2021-07-16 湖北亿咖通科技有限公司 Method and apparatus for recognizing a vehicle driver's dangerous driving state
CN112585656A (zh) * 2020-02-25 2021-03-30 华为技术有限公司 Method and apparatus for recognizing special road conditions, electronic device, and storage medium
CN111860280A (zh) * 2020-07-15 2020-10-30 南通大学 Deep-learning-based driver violation behavior recognition system
WO2022042203A1 (zh) * 2020-08-31 2022-03-03 魔门塔(苏州)科技有限公司 Human body key point detection method and apparatus
CN112528792A (zh) * 2020-12-03 2021-03-19 深圳地平线机器人科技有限公司 Fatigue state detection method, apparatus, medium, and electronic device
CN112660141A (zh) * 2020-12-29 2021-04-16 长安大学 Method for recognizing driver distraction behavior from driving behavior data
CN113313019A (zh) * 2021-05-27 2021-08-27 展讯通信(天津)有限公司 Distracted driving detection method, system, and related device
CN114187581B (zh) * 2021-12-14 2024-04-09 安徽大学 Fine-grained driver distraction detection method based on unsupervised learning
CN114187581A (zh) * 2021-12-14 2022-03-15 安徽大学 Fine-grained driver distraction detection method based on unsupervised learning
CN116052136A (zh) * 2023-03-27 2023-05-02 中国科学技术大学 Distraction detection method, vehicle-mounted controller, and computer storage medium
CN116052136B (zh) * 2023-03-27 2023-09-05 中国科学技术大学 Distraction detection method, vehicle-mounted controller, and computer storage medium

Also Published As

Publication number Publication date
US20210009150A1 (en) 2021-01-14
TW202033395A (zh) 2020-09-16
CN110399767A (zh) 2019-11-01
WO2020173213A1 (zh) 2020-09-03
JP2021517313A (ja) 2021-07-15
KR20200051632A (ko) 2020-05-13
US20190065873A1 (en) 2019-02-28
US10853675B2 (en) 2020-12-01
US20210049387A1 (en) 2021-02-18
SG11202009720QA (en) 2020-10-29
EP3666577A4 (en) 2020-08-19
US20210049388A1 (en) 2021-02-18
JP6933668B2 (ja) 2021-09-08
CN109937152A (zh) 2019-06-25
US20210049386A1 (en) 2021-02-18
SG11202002549WA (en) 2020-04-29
KR20200124278A (ko) 2020-11-02
EP3666577A1 (en) 2020-06-17
CN109803583A (zh) 2019-05-24
CN109937152B (zh) 2022-03-25
JP2019536673A (ja) 2019-12-19
TWI758689B (zh) 2022-03-21
WO2019029195A1 (zh) 2019-02-14
KR102391279B1 (ko) 2022-04-26

Similar Documents

Publication Publication Date Title
WO2019028798A1 (zh) Driving state monitoring method and apparatus, and electronic device
WO2020078465A1 (zh) Driving state analysis method and apparatus, driver monitoring system, and vehicle
Ramzan et al. A survey on state-of-the-art drowsiness detection techniques
WO2019232972A1 (zh) Driving management method and system, vehicle-mounted intelligent system, electronic device, and medium
WO2020078464A1 (zh) Driving state detection method and apparatus, driver monitoring system, and vehicle
Hossain et al. IOT based real-time drowsy driving detection system for the prevention of road accidents
Wang et al. Driver fatigue detection: a survey
US10655978B2 (en) Controlling an autonomous vehicle based on passenger behavior
Lashkov et al. Driver dangerous state detection based on OpenCV & dlib libraries using mobile video processing
CN110879973A (zh) Facial feature recognition and detection method for driver fatigue state
KR20190083155A (ko) Driver state detection apparatus and method thereof
JP2007163864A (ja) Display control device, display control method, display control program, and display control program recording medium
Bergasa et al. Visual monitoring of driver inattention
Guria et al. Iot-enabled driver drowsiness detection using machine learning
CN113901866A (zh) Machine-vision fatigue driving early-warning method
P Mathai A New Proposal for Smartphone-Based Drowsiness Detection and Warning System for Automotive Drivers
Shostak et al. Using Internet of Things Technologies to Ensure Cargo Transportation Safety
US20230206658A1 (en) Apparatus, method, and computer program for determining driver's behavior
Phayde et al. Real-Time Drowsiness Diagnostic System Using Opencv Algorithm
KR102520188B1 (ko) Vehicle device for determining a driver's state using artificial intelligence, and control method thereof
Pradhan et al. Driver Drowsiness Detection Model System Using EAR
Rajput et al. Accident Prevention Using Drowsiness Detection
US20230128944A1 (en) Seizure prediction machine learning models
KR20230071593A (ko) Vehicle device for determining a driver's gaze state using artificial intelligence, and control method thereof
Wong Driver monitoring scheme in advanced driver assistance systems perspective

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17921289

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 19.06.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17921289

Country of ref document: EP

Kind code of ref document: A1