WO2020181840A1 - Distracted driving monitoring method, system and electronic device - Google Patents

Distracted driving monitoring method, system and electronic device

Info

Publication number
WO2020181840A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
detection result
distracted driving
driving behavior
result
Prior art date
Application number
PCT/CN2019/122790
Other languages
English (en)
French (fr)
Inventor
张骁迪
张志伟
鲍天龙
丁春辉
王进
Original Assignee
虹软科技股份有限公司 (ArcSoft Corporation Limited)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 虹软科技股份有限公司 (ArcSoft Corporation Limited)
Priority to EP19817571.3A (published as EP3730371A4)
Priority to KR1020217032527A (published as KR102543161B1)
Priority to US16/626,350 (published as US11783599B2)
Priority to JP2021552987A (published as JP7407198B2)
Publication of WO2020181840A1


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B60W40/09 Driving style or behaviour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 Interaction between the driver and the control system
    • B60W50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 Interaction between the driver and the control system
    • B60W50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W50/16 Tactile feedback to the driver, e.g. vibration or force feedback to the driver on the steering wheel or the accelerator pedal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0001 Details of the control system
    • B60W2050/0002 Automatic control, details of type of controller or control system architecture
    • B60W2050/0004 In digital systems, e.g. discrete-time systems involving sampling
    • B60W2050/0005 Processor details or data handling, e.g. memory registers or chip architecture
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 Interaction between the driver and the control system
    • B60W50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/143 Alarm means
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 Interaction between the driver and the control system
    • B60W50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/146 Display means
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2420/00 Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W2420/40 Photo, light or radio wave sensitive means, e.g. infrared sensors
    • B60W2420/403 Image sensing, e.g. optical camera
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2540/00 Input parameters relating to occupants
    • B60W2540/229 Attention level, e.g. attentive to driving, reading or sleeping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30268 Vehicle interior
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Definitions

  • the present invention relates to computer vision processing technology, and in particular to a method, system and electronic equipment for monitoring distracted driving.
  • At least some embodiments of the present invention provide a distracted driving monitoring method, system, and electronic equipment, so as to at least partially solve the problem of traffic accidents caused by failure to monitor the distracted driving behavior of the driver during driving.
  • a method for monitoring distracted driving, including: collecting a driver image; detecting a target in the driver image to obtain a detection result, wherein the target corresponds to a distracted driving behavior; obtaining a judgment result of the driving behavior according to the detection result; and issuing an alarm signal when the judgment result indicates that a distracted driving behavior occurs.
  • the method collects an image of the driver through an image acquisition module, wherein the image acquisition module is an independent camera device or a camera device integrated on an electronic device.
  • the target includes at least one of the following: cigarettes, mobile phones, water cups, and food.
  • the distracted driving behavior corresponding to the target includes at least one of the following: smoking, making phone calls, drinking water, and eating.
  • the detection result indicates whether the driver image contains a target object, and when the detection result indicates that the driver image contains the target object, the judgment result of the driving behavior is a distracted driving behavior.
  • the detection result includes the type of the target object and the probability value corresponding to the type.
  • the method includes: screening detection results according to the probability value.
  • the method includes: comparing the probability value corresponding to the type in the detection result with a first threshold to obtain a comparison result; and screening the detection result according to the comparison result.
  • the method includes: when the comparison result indicates that the probability value corresponding to the type in the detection result is greater than the first threshold, retaining the detection result; otherwise, discarding the detection result.
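  • The probability-threshold screening described above can be sketched as follows. This is an illustrative sketch only: the detection records, class names, and the threshold value are assumptions, not values fixed by the text.

```python
# Hypothetical sketch of the screening step: keep only detections whose
# class probability exceeds the first threshold. The threshold value and
# the detection-record format are assumptions for illustration.

FIRST_THRESHOLD = 0.5  # per-type thresholds could be used instead

def screen_detections(detections, threshold=FIRST_THRESHOLD):
    """Retain detections whose probability is greater than the threshold."""
    return [d for d in detections if d["probability"] > threshold]

detections = [
    {"type": "cigarette", "probability": 0.92},     # above threshold: retained
    {"type": "mobile_phone", "probability": 0.31},  # below threshold: discarded
]
kept = screen_detections(detections)
```

As the claims note, all target types may share one threshold, or each type may carry its own; the latter would replace the single constant with a per-type lookup.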
  • the method includes: detecting a face area after the driver image is collected.
  • the detection result includes the position of the target object.
  • the method includes: evaluating the rationality of the detection result by analyzing the relative position relationship between the position of the target and a preset reasonable area.
  • evaluating the rationality of the detection result includes: calculating the intersection ratio between the position of the target and the preset reasonable area corresponding to the target.
  • when the intersection ratio is greater than the second threshold, it means that the position of the target appears in the preset reasonable area and the target detection result is credible; otherwise, the target detection result is discarded.
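  • The rationality check above can be sketched as follows. The "intersection ratio" is interpreted here as intersection-over-union (IoU); intersection over the target's own area is another plausible reading. The box format (x1, y1, x2, y2), the coordinates, and the threshold are illustrative assumptions.

```python
# Minimal IoU sketch for the rationality check: compare the detected target
# box against a preset "reasonable" region and keep the result only if the
# intersection ratio exceeds a second threshold. All numbers are assumed.

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (zero area if the boxes do not overlap)
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

SECOND_THRESHOLD = 0.2                 # assumed value
target_box = (100, 200, 160, 260)      # hypothetical detected target position
reasonable_area = (80, 180, 200, 300)  # hypothetical preset region below the face
credible = iou(target_box, reasonable_area) > SECOND_THRESHOLD
```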
  • the method further includes: preprocessing the driver image to obtain a preprocessed image; wherein the preprocessing includes at least one of the following: image scaling, pixel value normalization, and image enhancement.
  • the method uses a deep learning algorithm to obtain the position, type, and probability value of the target in the driver image or the preprocessed image, where the probability value is the probability that the target belongs to the type.
  • the method determines the final judgment result by combining judgment results of consecutive frames.
  • the method uses a queue structure to store the judgment result of each frame in the last t seconds and maintain the queue; the queue records are traversed, and if the proportion of a certain driving behavior in the last t seconds exceeds the third threshold, that driving behavior is taken as the final judgment result.
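  • The queue-based vote over the last t seconds can be sketched as follows. The frame rate, window length, and third-threshold value are assumptions; the patent leaves these parameters open.

```python
# Sketch of the sliding-window vote: per-frame judgments for the last
# t seconds are held in a bounded queue, and a distracted behavior is
# reported only when its share of the window exceeds a third threshold.
from collections import Counter, deque

FPS = 10            # assumed frame rate
WINDOW_SECONDS = 3  # the "last t seconds"
THIRD_THRESHOLD = 0.6

# maxlen makes the deque drop the oldest frame automatically
window = deque(maxlen=FPS * WINDOW_SECONDS)

def update(frame_judgment):
    """Push one per-frame judgment and return the smoothed final judgment."""
    window.append(frame_judgment)
    behavior, count = Counter(window).most_common(1)[0]
    if behavior != "normal" and count / len(window) > THIRD_THRESHOLD:
        return behavior
    return "normal"

# 25 "smoking" frames among 30: proportion 25/30 exceeds 0.6
result = "normal"
for judgment in ["smoking"] * 25 + ["normal"] * 5:
    result = update(judgment)
```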
  • a distracted driving monitoring system, including: an image acquisition module configured to collect driver images; a detection module configured to detect targets in the driver image to obtain a detection result; a logical judgment module configured to obtain the judgment result of the driving behavior according to the detection result; and a communication module configured to issue an alarm signal when the judgment result indicates that a distracted driving behavior occurs.
  • the image acquisition module is an independent camera device or a camera device integrated on an electronic device.
  • the target includes at least one of the following: cigarettes, mobile phones, water cups, and food.
  • the distracted driving behavior corresponding to the target includes at least one of the following: smoking, making phone calls, drinking water, and eating.
  • the detection result includes at least one of the following: whether there is a target, the location of the target, the type of the target, and the probability value corresponding to the type.
  • the logical judgment module is configured to filter the detection result according to the probability value.
  • the logic judgment module obtains the comparison result by comparing the probability value corresponding to the type in the detection result with a first threshold, and screens the detection result according to the comparison result; when the comparison result indicates that the probability value corresponding to the type is greater than the first threshold, the detection result is retained; otherwise, the detection result is discarded.
  • the detection module is configured to detect the face area after the driver image is collected.
  • the logical judgment module evaluates the rationality of the detection result by analyzing the relative position relationship between the position of the target and a preset reasonable area.
  • evaluating the rationality of the detection result includes: calculating the intersection ratio between the position of the target and the preset reasonable area corresponding to the target.
  • when the intersection ratio is greater than the second threshold, it means that the position of the target appears in the preset reasonable area and the target detection result is credible; otherwise, the target detection result is discarded.
  • the detection module uses a deep learning algorithm to obtain the position, type, and probability value of the target in the driver image, where the probability value is the probability that the target belongs to the type.
  • the logical judgment module determines the final judgment result by combining judgment results of consecutive frames.
  • the logical judgment module uses a queue structure to store the judgment result of each frame in the last t seconds and maintains the queue; the queue records are traversed, and if the proportion of a certain driving behavior in the last t seconds exceeds the third threshold, that driving behavior is taken as the final judgment result.
  • an electronic device, including: a processor; and a memory configured to store executable instructions of the processor; wherein the processor is configured to execute the executable instructions to perform any of the distracted driving monitoring methods described above.
  • a storage medium, wherein the storage medium includes a stored program, and the device where the storage medium is located is controlled to execute any one of the foregoing distracted driving monitoring methods when the program runs.
  • the detection result is obtained by collecting the driver image and detecting the target in the driver image; the judgment result of the driving behavior is obtained according to the detection result; and when the judgment result indicates that a distracted driving behavior occurs, an alarm signal is issued. The driver's distracted driving behavior is thus monitored and warned of in real time, so as to urge the driver to concentrate, ensure safe driving, and avoid traffic accidents. In addition, the specific distracted driving behavior can be determined and different warning prompts given, which can serve as a basis for law enforcement, data analysis, or further manual confirmation, thereby solving the problem of traffic accidents caused by the failure to monitor the driver's distracted driving behavior while driving.
  • FIG. 1 is a flowchart of an optional distracted driving monitoring method according to an embodiment of the present invention.
  • FIG. 2 is a flowchart of another optional distracted driving monitoring method according to an embodiment of the present invention.
  • FIG. 3 is a structural block diagram of an optional distracted driving monitoring system according to an embodiment of the present invention.
  • FIG. 4 is a structural block diagram of an optional electronic device according to an embodiment of the present invention.
  • the embodiments of the present invention can be applied to a computer system/server, which can operate with many other general-purpose or special-purpose computing system environments or configurations.
  • Examples of well-known computing systems, environments and/or configurations suitable for use with computer systems/servers include, but are not limited to: personal computer systems, server computer systems, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network personal computers, minicomputer systems, mainframe computer systems, and distributed cloud computing environments including any of the above systems.
  • the computer system/server may be described in the general context of computer system executable instructions (such as program modules, etc.) executed by the computer system.
  • program modules can include routines, programs, objects, components, logic, and data structures, etc., which perform specific tasks or implement specific abstract data types.
  • the computer system/server can be implemented in a distributed cloud computing environment, and tasks are performed by remote processing equipment linked through a communication network.
  • program modules may be located on a storage medium of a local or remote computing system including a storage device.
  • FIG. 1 is a flowchart of an optional distracted driving monitoring method according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:
  • the driver image is collected; the target object in the driver image is detected to obtain the detection result; the judgment result of the driving behavior is obtained according to the detection result; and when the judgment result indicates that a distracted driving behavior occurs, an alarm signal is issued. The driver's distracted driving behavior can thus be monitored and alarmed in real time, so as to urge the driver to concentrate, ensure safe driving, and avoid traffic accidents.
  • Step S10: collecting driver images.
  • the image of the driver may be acquired through an image acquisition module, where the image acquisition module may be an independent camera device or a camera device integrated on an electronic device, such as an independent infrared camera, depth camera, RGB camera, or mono camera, or a camera that comes with an electronic device such as a mobile phone, tablet, driving recorder, navigator, operation panel, or center console.
  • the driver image can be obtained by intercepting image frames in the video collected by the image collection module.
  • the light in the car usually changes with the driving environment: during the day in fine weather, the light in the car (for example, the driver's cab) is brighter, while at night, on cloudy days, or in tunnels the light in the driver's cab is relatively dark.
  • an infrared camera is less affected by changes in light and can work around the clock; therefore, an infrared camera (including a near-infrared camera, etc.) can be chosen to obtain driver images of better quality than an ordinary camera, thereby improving the accuracy of the distracted driving monitoring results.
  • the image acquisition module may be set in any position in the vehicle where the driver's face can be photographed, for example, near the dashboard, near the center console, near the rearview mirror, etc.
  • the number of image acquisition modules can be one or more.
  • video frame images may be acquired every predetermined number of frames to reduce the frequency of acquiring video frame images and optimize computing resources.
  • the driver image may be preprocessed, and the preprocessing includes at least one of the following: image scaling, pixel value normalization, and image enhancement; thus a driver image whose definition, size, etc. meet the requirements can be obtained.
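  • Two of the preprocessing options above (scaling and pixel value normalization) can be sketched as follows. The sketch is library-free and illustrative: images are represented as nested lists of grayscale pixel values, and the output size and [0, 1] normalization scheme are assumptions.

```python
# Hedged sketch of image scaling (nearest-neighbor) and pixel value
# normalization; the 224x224 target size and 0-255 input range are assumed.

def scale_nearest(image, out_h, out_w):
    """Nearest-neighbor scaling to a fixed (out_h, out_w) size."""
    h, w = len(image), len(image[0])
    return [[image[r * h // out_h][c * w // out_w] for c in range(out_w)]
            for r in range(out_h)]

def normalize(image):
    """Map 0..255 pixel values into the range 0.0..1.0."""
    return [[px / 255.0 for px in row] for row in image]

# Dummy 480x640 grayscale frame standing in for a captured driver image
frame = [[(r + c) % 256 for c in range(640)] for r in range(480)]
net_input = normalize(scale_nearest(frame, 224, 224))
```

Image enhancement (e.g. contrast stretching) would be a further step of the same shape: a per-pixel or per-region transform applied before the detector runs.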
  • Step S12: detecting a target in the driver image to obtain a detection result, wherein the target corresponds to a distracted driving behavior.
  • the target object in the driver image may be detected by the detection module to obtain the detection result.
  • the detection result may indicate whether the driver image contains the target object.
  • the target includes at least one of the following: cigarettes, mobile phones, water cups, and food.
  • the distracted driving behavior corresponding to the target object includes at least one of the following: smoking, making phone calls, drinking water, and eating.
  • the driver image can be input to the target detection algorithm to detect the target in the driver image, where the target detection algorithm can be obtained by offline training on a large number of samples.
  • the target detection algorithm may be a deep learning algorithm, such as YOLO, Faster R-CNN, or SSD.
  • Step S14: obtaining the judgment result of the driving behavior according to the detection result.
  • the judgment result of the driving behavior can be obtained according to the detection result through the logic judgment module.
  • the judgment result of the driving behavior includes normal driving behavior and distracted driving behavior.
  • when the detection result indicates that the driver image contains a target object, the driving behavior judgment result is a distracted driving behavior; when the detection result indicates that the driver image does not contain a target object, the driving behavior judgment result is a normal driving behavior.
  • Step S16: when the judgment result indicates that a distracted driving behavior occurs, an alarm signal is issued.
  • an alarm signal may be issued according to the judgment result through the communication module.
  • the alarm signal can be at least one of the following: sound prompt, light prompt, vibration prompt.
  • the sound prompt includes a voice or a bell.
  • the light prompt includes a steady or flashing light.
  • the driver image can also be transmitted to the monitoring center in real time, as a basis for law enforcement or for data collection, data analysis, further manual confirmation, etc.
  • the driver's distracted driving behavior can be monitored and alarmed, so as to urge the driver to concentrate, ensure safe driving, and avoid traffic accidents.
  • the above-mentioned distracted driving monitoring method can only judge whether the driver's behavior is a normal driving behavior or a distracted driving behavior and give a simple alarm, but cannot determine which specific distracted driving behavior occurred or give different warning prompts.
  • FIG. 2 is a flowchart of another optional distracted driving monitoring method according to an embodiment of the present invention. As shown in FIG. 2, the method includes the following steps:
  • the driver image is collected; the target in the driver image is detected to obtain the detection result; the detection result is screened to determine the type of the target; the judgment result of the driving behavior is obtained according to the type of the target; and when the judgment result indicates that a distracted driving behavior occurs, an alarm signal is issued.
  • in addition to judging whether the driver's behavior is normal driving or distracted driving and issuing a simple alarm signal, the method can also determine the specific type of distracted driving behavior and give different warning prompts, so as to urge the driver to concentrate, ensure safe driving, and avoid traffic accidents; at the same time, this can be used as a basis for law enforcement, data analysis, or further manual confirmation.
  • step S20 is basically the same as the step S10 shown in FIG. 1, and will not be repeated here.
  • steps S22 to S28 will be described in detail below.
  • the target object in the driver image may be detected by the detection module to obtain the detection result.
  • the detection result may indicate whether the driver image contains the target object.
  • the target includes at least one of the following: cigarettes, mobile phones, water cups, and food.
  • the distracted driving behavior corresponding to the target object includes at least one of the following: smoking, making phone calls, drinking water, and eating.
  • in addition to indicating whether the driver image contains the target, when the driver image contains the target, the detection result of step S22 can also include the type of the target and the probability value corresponding to the type.
  • the probability value represents the probability that the target object belongs to the type.
  • the value range is 0 to 1.
  • the detection result can be screened by the logic judgment module to determine the type of the target object.
  • screening the detection results and determining the type of the target includes: comparing the probability value corresponding to the type in the detection result with a first threshold to obtain the comparison result, and screening the target detection result according to the comparison result; wherein multiple targets of different types can share the same first threshold, or each type of target can correspond to its own first threshold.
  • when the comparison result indicates that the probability value corresponding to the type in the detection result is greater than the first threshold, the detection result is retained; otherwise, the detection result is discarded.
  • the type of the target can be determined.
  • the detection result with the highest probability value is retained to determine the type of the target object.
  • the driver image can be input to the target detection algorithm to detect the target in the driver image, where the target detection algorithm can be obtained by offline training on a large number of samples.
  • the target detection algorithm may be a deep learning algorithm, such as YOLO, Faster R-CNN, or SSD.
  • Step S26: obtaining the judgment result of the driving behavior according to the type of the target.
  • the judgment result of the driving behavior can be obtained according to the type of the target object through the logic judgment module.
  • the judgment result of the driving behavior includes normal driving behavior and various specific distracted driving behaviors.
  • when the detection result indicates that the driver image does not contain a target object, the driving behavior judgment result is normal driving behavior; when the detection result indicates that the driver image contains a target object, the driving behavior judgment result is distracted driving behavior, and the specific distracted driving behavior can be further judged, for example, smoking, making phone calls, drinking water, or eating. Specifically, if the target type is a cigarette, it is determined that the specific distracted driving behavior is smoking; if the target type is a water cup, it is determined that the specific distracted driving behavior is drinking water.
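  • The type-to-behavior judgment of step S26 can be sketched as a simple lookup. The class names and the detection-record format are hypothetical; only the example pairings (cigarette to smoking, water cup to drinking water) come from the text.

```python
# Hypothetical mapping from detected target type to the specific distracted
# driving behavior, following the examples in the text.

TARGET_TO_BEHAVIOR = {
    "cigarette": "smoking",
    "mobile_phone": "making a phone call",
    "water_cup": "drinking water",
    "food": "eating",
}

def judge(detection):
    """Return the specific behavior, or 'normal driving' when no target remains
    after screening."""
    if detection is None:
        return "normal driving"
    return TARGET_TO_BEHAVIOR.get(detection["type"], "normal driving")

behavior = judge({"type": "water_cup", "probability": 0.87})
```

The per-behavior warning prompts of step S28 could then key off the returned behavior string.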
  • Step S28: when the judgment result indicates that a distracted driving behavior occurs, an alarm signal is issued.
  • an alarm signal may be issued according to the judgment result through the communication module.
  • the alarm signal can be at least one of the following: sound prompt, light prompt, vibration prompt.
  • the sound prompt includes a voice or a bell.
  • the light prompt includes a steady or flashing light.
  • voice broadcasts can be used to give different prompts for the various specific distracted driving behaviors that appear.
  • the driver image can also be transmitted to the monitoring center in real time, as a basis for law enforcement or for data collection, data analysis, or further manual confirmation.
  • the distracted driving monitoring method in the embodiment of the present invention further includes initializing the hardware and software before collecting the driver image in step S10 or S20.
  • step S11 may be further included: detecting the face area. It should be noted that step S11 can be performed before, after or at the same time as step S12 or S22 (that is, detecting the target in the driver image and obtaining the detection result).
  • the detection result can also include the position of the target, where the position of the target can be represented by a rectangular frame, specified by the coordinates of the upper-left and lower-right corners, the coordinates of the upper-right and lower-left corners, or the coordinates of all four corners.
  • before step S14 or S24, step S13 may further be included: evaluating the rationality of the detection result.
  • the rationality of the detection result may be evaluated by analyzing the relative position relationship between the position of the target object and the preset reasonable area.
  • evaluating the rationality of the target position includes calculating the intersection ratio between the position of the target object and the preset reasonable area corresponding to the target object, and comparing the intersection ratio with a second threshold; when the intersection ratio is greater than the second threshold, It means that the location of the target object appears in the preset reasonable area, the detection result is credible, and the next step can be performed; otherwise, the target detection result is discarded.
  • the preset reasonable area can be preset according to the reasonable area where the distracted driving behavior may appear in the face area.
  • the preset reasonable area corresponding to the behavior of making a call may be the two sides or the area below the face area; the preset reasonable area corresponding to the smoking behavior may be the area below the face.
  • by adding step S11 and/or step S13, that is, by detecting the face area and/or evaluating the rationality of the detection result, the accuracy of the distracted driving monitoring result can be improved.
  • step S14 or step S26 can also determine the final judgment result by combining the judgment results of consecutive frames, so as to judge distracted driving behavior more accurately and reduce the false detection rate.
  • combining the judgment results of consecutive frames includes using a queue structure to store the judgment result of each frame in the last t seconds and maintaining the queue; the queue records are traversed, and if the proportion of a certain driving behavior in the last t seconds exceeds the third threshold, that driving behavior is taken as the final judgment result.
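The per-frame queue described above can be sketched as follows. This is a minimal illustration under assumptions the text does not fix: a constant frame rate (so the last t seconds map to a fixed number of frames) and example values for t and the third threshold.

```python
from collections import Counter, deque


class BehaviorSmoother:
    """Store the per-frame judgment for the last t seconds and report a
    final judgment only when one behavior's share exceeds the threshold."""

    def __init__(self, fps=10, t_seconds=3, third_threshold=0.6):
        # A bounded deque automatically drops frames older than t seconds.
        self.window = deque(maxlen=fps * t_seconds)
        self.third_threshold = third_threshold

    def update(self, frame_judgment):
        """Record one frame's judgment; return the final judgment for the
        window, or None if no behavior dominates it yet."""
        self.window.append(frame_judgment)
        behavior, count = Counter(self.window).most_common(1)[0]
        if count / len(self.window) > self.third_threshold:
            return behavior
        return None
```

A real system would feed `update()` once per processed frame and raise the alarm only when it returns a distracted driving behavior.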
  • a distracted driving monitoring system is also provided, and the distracted driving monitoring system 30 includes:
  • the image collection module 300 is configured to collect driver images
  • the image acquisition module 300 may be an independent camera device or a camera device integrated on an electronic device, for example, an independent infrared camera, depth camera, RGB camera or Mono camera, or a camera built into an electronic device such as a mobile phone, tablet, driving recorder, navigator, operation panel or center console.
  • the driver image can be obtained by capturing image frames from the video collected by the image collection module.
  • the light in the car usually changes with the driving environment: in the daytime in fine weather, the light in the car (for example, the driver's cab) is relatively bright, while at night, on a cloudy day or in a tunnel it is relatively dark. An infrared camera, by contrast, is less affected by changes in illumination and can work around the clock. Therefore, an infrared camera (including a near-infrared camera, etc.) can be selected as the image acquisition module 300 to obtain driver images of better quality than those from an ordinary camera, thereby improving the accuracy of the distracted driving monitoring results.
  • the image acquisition module 300 can be installed in any position in the vehicle where the driver's face can be photographed, for example, near the dashboard, near the center console, near the rearview mirror, and so on.
  • the number of image acquisition modules can be one or more.
  • video frame images may be acquired every predetermined number of frames to reduce the frequency of acquiring video frame images and optimize computing resources.
  • the driver image may be preprocessed through the image acquisition module 300, where the preprocessing includes at least one of the following: image scaling, pixel value normalization and image enhancement; in this way, a driver image that meets the requirements for clarity and size can be obtained.
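The scaling and pixel-value-normalization steps listed above can be sketched roughly as follows. This is a simplified illustration on plain nested lists: the target size is an assumed example, nearest-neighbor sampling is one arbitrary choice of resizing method, and image enhancement is omitted.

```python
def preprocess(image, target_size=(416, 416)):
    """Nearest-neighbor resize plus pixel-value normalization to [0, 1].

    `image` is a list of rows of pixels, each pixel a tuple of 0-255
    channel values; `target_size` is an assumed (height, width) that a
    downstream detector might expect.
    """
    h, w = len(image), len(image[0])
    th, tw = target_size
    # Nearest-neighbor sampling: map each output cell to a source pixel.
    resized = [
        [image[min(r * h // th, h - 1)][min(c * w // tw, w - 1)] for c in range(tw)]
        for r in range(th)
    ]
    # Normalize every channel value from the 0-255 range into [0, 1].
    return [[tuple(v / 255.0 for v in px) for px in row] for row in resized]
```

A production pipeline would normally use an image library for the resize; the nested-list version here only makes the arithmetic explicit.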
  • the detection module 302 is configured to detect the target in the driver image and obtain the detection result
  • the detection result may indicate whether the driver image contains the target object.
  • the target includes at least one of the following: cigarettes, mobile phones, water glasses, and food.
  • the distracted driving behavior corresponding to the target object includes at least one of the following: smoking, making phone calls, drinking water, and eating.
  • the detection module 302 uses a target detection algorithm to detect the target in the driver image, where the target detection algorithm can be obtained by offline training on a large number of samples.
  • the target detection algorithm may be a deep learning algorithm, such as yolo, faster-RCNN, SSD, etc.
  • the logical judgment module 304 is configured to obtain the judgment result of the driving behavior according to the detection result
  • the judgment result of the driving behavior includes normal driving behavior and distracted driving behavior.
  • when the detection result indicates that the driver image contains a target object, the driving behavior judgment result is distracted driving behavior; when the detection result indicates that the driver image does not contain a target object, the driving behavior judgment result is normal driving behavior.
  • the communication module 306 is configured to issue an alarm signal when the judgment result indicates that a distracted driving behavior occurs.
  • the alarm signal may be at least one of the following: a sound prompt, a light prompt, and a vibration prompt.
  • sound prompts include a voice announcement or a ring tone
  • light prompts include a steady light or a flashing light.
  • the communication module can also transmit the driver image to the monitoring center in real time, as a basis for law enforcement or for data collection, data analysis, further manual confirmation, and the like.
  • the above-mentioned image acquisition module 300, detection module 302, logic judgment module 304 and communication module 306 can be configured in the distracted driving monitoring system independently of one another, or partially or fully integrated into one large module. In this way, the distracted driving monitoring system can monitor the driver's distracted driving behavior in real time and raise an alarm, so as to urge the driver to concentrate, ensure safe driving and avoid traffic accidents.
  • the detection module 302 can not only detect whether the driver image contains a target object, but can also detect the type of the target object and the probability value corresponding to that type.
  • the probability value represents the probability that the target object belongs to the type.
  • the value range is 0 to 1.
  • the logic judgment module 304 is configured to filter the detection results according to the probability value and determine the type of the target object. Since multiple targets, or interfering objects other than the target, may be detected in each frame of the driver image, some of them are erroneous detection targets. In order to remove these erroneous detection targets, optionally, in the embodiment of the present invention, the logic judgment module 304 is configured to compare the probability value corresponding to the type in the detection result with a first threshold to obtain a comparison result, and to filter the target detection results according to the comparison result, where multiple targets of different types may share the same first threshold or each type of target may correspond to its own first threshold. When the comparison result indicates that the probability value corresponding to the type in the detection result is greater than the first threshold, the detection result is retained; otherwise, the detection result is discarded.
  • when only one detection result has a probability value greater than the first threshold, the type of the target can be determined directly; when there are multiple such detection results, only the one with the highest probability value is retained, and the type of the target is determined accordingly.
  • the judgment result of the driving behavior includes normal driving behavior and various specific distracted driving behaviors.
  • when the detection result indicates that the driver image does not contain a target object, the driving behavior judgment result is normal driving behavior; when the detection result indicates that the driver image contains the target object, the driving behavior judgment result is distracted driving behavior, and,
  • the logic judgment module 304 can further judge various specific distracted driving behaviors, for example, smoking, making phone calls, drinking water, eating, etc. Specifically, for example, if the target type is smoke, it is determined that the specific distracted driving behavior is smoking; if the target type is a water cup, it is determined that the specific distracted driving behavior is drinking water.
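The mapping from detected target type to a specific distracted driving behavior can be sketched as follows. The string labels are hypothetical names chosen for this example; only the pairings themselves (cigarette to smoking, water cup to drinking water, and so on) come from the text.

```python
# Assumed label mapping, following the examples given in the text.
TARGET_TO_BEHAVIOR = {
    "cigarette": "smoking",
    "mobile_phone": "making_a_phone_call",
    "water_cup": "drinking_water",
    "food": "eating",
}


def judge_driving_behavior(detected_type):
    """No detected target means normal driving; otherwise look up the
    specific distracted driving behavior for the detected target type."""
    if detected_type is None:
        return "normal_driving"
    return TARGET_TO_BEHAVIOR.get(detected_type, "distracted_driving")
```

The generic "distracted_driving" fallback keeps the binary judgment available when a target type has no specific behavior label.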
  • the communication module 306 issues an alarm signal according to the judgment result.
  • the alarm signal can be at least one of the following: sound prompt, light prompt, vibration prompt.
  • the sound prompt includes a voice or a bell
  • the light prompt includes a light or a flashing light.
  • voice broadcasts can be used to give different prompts to various specific distracted driving behaviors that appear.
  • the detection module 302 may also be configured to detect the face area and the position of the target after the driver image is collected, where the position of the target can be represented by a rectangular box defined by the coordinates of the upper-left and lower-right corners, the upper-right and lower-left corners, or all four corners (upper left, lower right, upper right and lower left).
  • the logic judgment module 304 may also be configured to evaluate the rationality of the detection result.
  • the rationality of the detection result may be evaluated by analyzing the relative position relationship between the position of the target object and the preset reasonable area.
  • evaluating the rationality of the target position includes calculating the intersection-over-union between the position of the target object and the preset reasonable area corresponding to that target object, and comparing the intersection-over-union with a second threshold; when the intersection-over-union is greater than the second threshold, the position of the target object appears in the preset reasonable area, the detection result is credible, and the next step can be performed; otherwise, the target detection result is discarded.
  • the preset reasonable area can be preset according to the reasonable area where the distracted driving behavior may appear in the face area.
  • the preset reasonable area corresponding to the behavior of making a call may be the two sides or the area below the face area; the preset reasonable area corresponding to the smoking behavior may be the area below the face.
  • the logic judgment module 304 can also be configured to determine the final judgment result by combining the judgment results of consecutive frames, so as to judge distracted driving behavior more accurately and reduce the false detection rate.
  • combining the judgment results of consecutive frames includes using a queue structure to store the judgment result of each frame in the last t seconds and maintaining the queue; the queue records are traversed, and if the proportion of a certain driving behavior in the last t seconds exceeds the third threshold, that driving behavior is taken as the final judgment result.
  • an electronic device is also provided.
  • the electronic device 40 includes: a processor 400; and a memory 402 configured to store executable instructions of the processor 400; wherein the processor 400 is configured to execute any one of the above-mentioned distracted driving monitoring methods by executing the executable instructions.
  • a storage medium is also provided, where the storage medium includes a stored program, and when the program runs, the device on which the storage medium is located is controlled to execute any one of the foregoing distracted driving monitoring methods.
  • the disclosed technical content can be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the units may be a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units or modules, and may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be realized in the form of hardware or software functional unit.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of the present invention, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to make a computer device (which may be a personal computer, a server or a network device, etc.) execute all or part of the steps of the method described in each embodiment of the present invention.
  • the aforementioned storage media include various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Traffic Control Systems (AREA)
  • Emergency Alarm Devices (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)

Abstract

A distracted driving monitoring method, system and electronic device. The distracted driving monitoring method collects a driver image (S10); detects a target object in the driver image to obtain a detection result (S12); obtains a driving behavior judgment result according to the detection result (S14); and issues an alarm signal when the judgment result indicates that distracted driving behavior has occurred (S16). The method can monitor the driver's distracted driving behavior in real time and raise an alarm, thereby urging the driver to concentrate, ensuring safe driving and avoiding traffic accidents. In addition, it can determine which specific distracted driving behavior has occurred and give different alarm prompts, which is helpful as a basis for law enforcement or for data collection, data analysis and further manual confirmation, thus solving the problem of traffic accidents caused by the failure to monitor distracted driving behavior that occurs while a driver is driving.

Description

Distracted driving monitoring method, system and electronic device. Technical Field
The present invention relates to computer vision processing technology, and in particular to a distracted driving monitoring method, system and electronic device.
Background
With national economic development, the number of vehicles in China continues to rise, and household and transport vehicles are increasing rapidly. At the same time, the number of traffic accidents has risen just as quickly, causing many casualties and heavy property losses. How to reduce the number of traffic accidents is a concern shared by the whole of society. Behaviors such as making phone calls, smoking, drinking water or eating while driving distract the driver and are among the most common causes of traffic accidents. It is therefore necessary to monitor a driver's behavior during driving, so that when distracted driving behavior such as making phone calls, smoking, drinking water or eating occurs, the driver can be warned in time or the behavior can be reported to a regulatory authority, thereby reducing the risk of traffic accidents.
Summary of the Invention
At least some embodiments of the present invention provide a distracted driving monitoring method, system and electronic device, so as to at least partially solve the problem of traffic accidents caused by the failure to monitor distracted driving behavior that occurs while a driver is driving.
According to one embodiment of the present invention, a distracted driving monitoring method is provided, including: collecting a driver image; detecting a target object in the driver image to obtain a detection result, wherein the target object corresponds to a distracted driving behavior; obtaining a driving behavior judgment result according to the detection result; and issuing an alarm signal when the judgment result indicates that the distracted driving behavior has occurred.
Optionally, the method collects the driver image through an image acquisition module, wherein the image acquisition module is an independent camera device or a camera device integrated on an electronic device.
Optionally, the target object includes at least one of the following: a cigarette, a mobile phone, a water cup and food, and the distracted driving behavior corresponding to the target object includes at least one of the following: smoking, making a phone call, drinking water and eating.
Optionally, the detection result indicates whether the driver image contains a target object, and when the detection result indicates that the driver image contains a target object, the driving behavior judgment result is distracted driving behavior.
Optionally, the detection result includes the type of the target object and the probability value corresponding to that type.
Optionally, the method includes: filtering the detection results according to the probability value.
Optionally, the method includes: comparing the probability value corresponding to the type in the detection result with a first threshold to obtain a comparison result, and filtering the detection results according to the comparison result.
Optionally, the method includes: when the comparison result indicates that the probability value corresponding to the type in the detection result is greater than the first threshold, retaining the detection result; otherwise, discarding the detection result.
Optionally, when there are multiple detection results whose probability values are greater than the first threshold, only the detection result with the highest probability value is retained.
Optionally, the method includes: after collecting the driver image, detecting the face area.
Optionally, the detection result includes the position of the target object.
Optionally, the method includes: evaluating the rationality of the detection result by analyzing the relative position relationship between the position of the target object and a preset reasonable area.
Optionally, evaluating the rationality of the detection result by analyzing the relative position relationship between the position of the target object and the preset reasonable area includes: calculating the intersection-over-union between the position of the target object and the preset reasonable area corresponding to the target object, and comparing the intersection-over-union with a second threshold; when the intersection-over-union is greater than the second threshold, the position of the target object appears in the preset reasonable area and the target detection result is credible; otherwise, the target detection result is discarded.
Optionally, after collecting the driver image, the method further includes: preprocessing the driver image to obtain a preprocessed image, wherein the preprocessing includes at least one of the following: image scaling, pixel value normalization and image enhancement.
Optionally, the method uses a deep learning algorithm to obtain the position, type and probability value of the target object in the driver image or the preprocessed image, wherein the probability value is the probability that the target object belongs to that type.
Optionally, the method determines the final judgment result by combining the judgment results of consecutive frames.
Optionally, the method uses a queue structure to store the judgment result of each frame in the last t seconds and maintains the queue; the queue records are traversed, and if the proportion of a driving behavior in the last t seconds exceeds a third threshold, that driving behavior is taken as the final judgment result.
According to one embodiment of the present invention, a distracted driving monitoring system is further provided, including: an image acquisition module configured to collect a driver image; a detection module configured to detect a target object in the driver image and obtain a detection result; a logic judgment module configured to obtain a driving behavior judgment result according to the detection result; and a communication module configured to issue an alarm signal when the judgment result indicates that distracted driving behavior has occurred.
Optionally, the image acquisition module is an independent camera device or a camera device integrated on an electronic device.
Optionally, the target object includes at least one of the following: a cigarette, a mobile phone, a water cup and food, and the distracted driving behavior corresponding to the target object includes at least one of the following: smoking, making a phone call, drinking water and eating.
Optionally, the detection result includes at least one of the following: whether a target object exists, the position of the target object, the type of the target object, and the probability value corresponding to that type.
Optionally, the logic judgment module is configured to filter the detection results according to the probability value.
Optionally, the logic judgment module obtains a comparison result by comparing the probability value corresponding to the type in the detection result with a first threshold, and filters the detection results according to the comparison result; when the comparison result indicates that the probability value corresponding to the type in the detection result is greater than the first threshold, the detection result is retained; otherwise, the detection result is discarded.
Optionally, when there are multiple detection results whose probability values are greater than the first threshold, only the detection result with the highest probability value is retained.
Optionally, the detection module is configured to detect the face area after the driver image is collected.
Optionally, the logic judgment module evaluates the rationality of the detection result by analyzing the relative position relationship between the position of the target object and a preset reasonable area.
Optionally, evaluating the rationality of the detection result by analyzing the relative position relationship between the position of the target object and the preset reasonable area includes: calculating the intersection-over-union between the position of the target object and the preset reasonable area corresponding to the target object, and comparing the intersection-over-union with a second threshold; when the intersection-over-union is greater than the second threshold, the position of the target object appears in the preset reasonable area and the target detection result is credible; otherwise, the target detection result is discarded.
Optionally, the detection module uses a deep learning algorithm to obtain the position, type and probability value of the target object in the driver image, wherein the probability value is the probability that the target object belongs to that type.
Optionally, the logic judgment module determines the final judgment result by combining the judgment results of consecutive frames.
Optionally, the logic judgment module uses a queue structure to store the judgment result of each frame in the last t seconds and maintains the queue; the queue records are traversed, and if the proportion of a driving behavior in the last t seconds exceeds a third threshold, that driving behavior is taken as the final judgment result.
According to one embodiment of the present invention, an electronic device is further provided, including: a processor; and a memory configured to store executable instructions of the processor, wherein the processor is configured to execute any one of the above distracted driving monitoring methods by executing the executable instructions.
According to one embodiment of the present invention, a storage medium is further provided, wherein the storage medium includes a stored program, and when the program runs, a device on which the storage medium is located is controlled to execute any one of the above distracted driving monitoring methods.
In at least some embodiments of the present invention, by collecting a driver image, detecting a target object in the driver image to obtain a detection result, obtaining a driving behavior judgment result according to the detection result, and issuing an alarm signal when the judgment result indicates that distracted driving behavior has occurred, the driver's distracted driving behavior can be monitored in real time and an alarm can be raised, thereby urging the driver to concentrate, ensuring safe driving and avoiding traffic accidents. In addition, it is possible to determine which specific distracted driving behavior has occurred and give different alarm prompts, which is helpful as a basis for law enforcement, for data analysis or for further manual confirmation, thus solving the problem of traffic accidents caused by the failure to monitor distracted driving behavior that occurs while a driver is driving.
Brief Description of the Drawings
The drawings described here are used to provide a further understanding of the present invention and form a part of this application. The illustrative embodiments of the present invention and their descriptions are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:
Fig. 1 is a schematic diagram of an optional distracted driving monitoring method according to an embodiment of the present invention;
Fig. 2 is a flowchart of another optional distracted driving monitoring method according to an embodiment of the present invention;
Fig. 3 is a structural block diagram of an optional distracted driving monitoring system according to an embodiment of the present invention;
Fig. 4 is a structural block diagram of an optional electronic device according to an embodiment of the present invention.
Detailed Description
In order to enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second" and the like in the specification, claims and above drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data so used can be interchanged where appropriate, so that the embodiments of the present invention described here can be implemented in orders other than those illustrated or described here. In addition, the terms "including" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device that includes a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units that are not clearly listed or that are inherent to the process, method, product or device.
The embodiments of the present invention can be applied to computer systems/servers, which can operate together with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known computing systems, environments and/or configurations suitable for use with computer systems/servers include but are not limited to: personal computer systems, server computer systems, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network personal computers, small computer systems, mainframe computer systems, distributed cloud computing technology environments including any of the above systems, and so on.
Computer systems/servers can be described in the general context of computer-system-executable instructions (such as program modules) executed by a computer system. Generally, program modules can include routines, programs, object programs, components, logic, data structures and the like, which perform specific tasks or implement specific abstract data types. Computer systems/servers can be implemented in distributed cloud computing environments, where tasks are performed by remote processing devices linked through a communication network. In a distributed cloud computing environment, program modules can be located on local or remote computing system storage media including storage devices.
The present invention is described below through detailed embodiments.
Referring to Fig. 1, which is a flowchart of an optional distracted driving monitoring method according to an embodiment of the present invention. As shown in Fig. 1, the method includes the following steps:
S10: collecting a driver image;
S12: detecting a target object in the driver image to obtain a detection result, wherein the target object corresponds to a distracted driving behavior;
S14: obtaining a driving behavior judgment result according to the detection result;
S16: issuing an alarm signal when the judgment result indicates that distracted driving behavior has occurred.
In the embodiment of the present invention, through the above steps, that is, by collecting a driver image, detecting a target object in the driver image to obtain a detection result, obtaining a driving behavior judgment result according to the detection result, and issuing an alarm signal when the judgment result indicates that distracted driving behavior has occurred, the driver's distracted driving behavior can be monitored in real time and an alarm can be raised, thereby urging the driver to concentrate, ensuring safe driving and avoiding traffic accidents.
Each of the above steps is described in detail below.
Step S10: collecting a driver image.
Optionally, in the embodiment of the present invention, the driver image can be acquired through an image acquisition module, where the image acquisition module can be an independent camera device or a camera device integrated on an electronic device, for example an independent infrared camera, depth camera, RGB camera or Mono camera, or a camera built into an electronic device such as a mobile phone, tablet computer, driving recorder, navigator, operation panel or center console. The driver image can be obtained by capturing image frames from the video collected by the image acquisition module.
Since the light inside the vehicle (for example, in the driver's cab) usually changes with the driving environment, being relatively bright on a clear day but relatively dark at night, on a cloudy day or in a tunnel, whereas an infrared camera is less affected by illumination changes and can work around the clock, an infrared camera (including a near-infrared camera, etc.) can be selected to acquire the driver image, so as to obtain a driver image of better quality than that from an ordinary camera, thereby improving the accuracy of the distracted driving monitoring result.
Optionally, in the embodiment of the present invention, the image acquisition module can be installed in at least one position in the vehicle where the driver's face can be photographed, for example, near the dashboard, near the center console, near the rearview mirror, and so on. The number of image acquisition modules can be one or more.
Optionally, in the embodiment of the present invention, video frame images can be acquired every predetermined number of frames, so as to reduce the acquisition frequency of video frame images and optimize computing resources.
Optionally, in the embodiment of the present invention, the driver image can be preprocessed, where the preprocessing includes at least one of the following: image scaling, pixel value normalization and image enhancement; in this way, a driver image that meets requirements on clarity, size and the like can be obtained.
Step S12: detecting a target object in the driver image to obtain a detection result, wherein the target object corresponds to a distracted driving behavior.
Optionally, in the embodiment of the present invention, the target object in the driver image can be detected through a detection module to obtain the detection result.
Optionally, in the embodiment of the present invention, the detection result can indicate whether the driver image contains a target object.
Optionally, in the embodiment of the present invention, the target object includes at least one of the following: a cigarette, a mobile phone, a water cup and food. The distracted driving behavior corresponding to the target object includes at least one of the following: smoking, making a phone call, drinking water and eating.
Optionally, in the embodiment of the present invention, the driver image can be input into a target detection algorithm to detect the target object in the driver image, where the target detection algorithm can be obtained by offline training on a large number of samples. Preferably, the target detection algorithm can be a deep learning algorithm, such as yolo, faster-RCNN or SSD.
Step S14: obtaining a driving behavior judgment result according to the detection result.
Optionally, in the embodiment of the present invention, the driving behavior judgment result can be obtained according to the detection result through a logic judgment module.
Optionally, in the embodiment of the present invention, the driving behavior judgment result includes normal driving behavior and distracted driving behavior. When the detection result indicates that the driver image contains a target object, the driving behavior judgment result is distracted driving behavior; when the detection result indicates that the driver image does not contain a target object, the driving behavior judgment result is normal driving behavior.
Step S16: issuing an alarm signal when the judgment result indicates that distracted driving behavior has occurred.
Optionally, in the embodiment of the present invention, the alarm signal can be issued according to the judgment result through a communication module. The alarm signal can be at least one of the following: a sound prompt, a light prompt and a vibration prompt. Specifically, the sound prompt includes a voice announcement or a ring tone, and the light prompt includes a steady or flashing light.
Optionally, in the embodiment of the present invention, when the judgment result indicates that distracted driving behavior has occurred, the driver image can also be transmitted to a monitoring center in real time, as a basis for law enforcement or for data collection, data analysis, further manual confirmation and the like.
Through the above steps, the driver's distracted driving behavior can be monitored and an alarm can be raised, thereby urging the driver to concentrate, ensuring safe driving and avoiding traffic accidents. However, the above distracted driving monitoring method can only judge whether the driver's behavior is normal driving or distracted driving and give a simple alarm; it cannot determine which specific distracted driving behavior has occurred and give different alarm prompts.
Referring to Fig. 2, which is a flowchart of another optional distracted driving monitoring method according to an embodiment of the present invention. As shown in Fig. 2, the method includes the following steps:
S20: collecting a driver image;
S22: detecting a target object in the driver image to obtain a detection result, wherein the target object corresponds to a distracted driving behavior;
S24: filtering the detection results and determining the type of the target object;
S26: obtaining a driving behavior judgment result according to the type of the target object;
S28: issuing an alarm signal when the judgment result indicates that distracted driving behavior has occurred.
In the embodiment of the present invention, through the above steps, that is, by collecting a driver image, detecting a target object in the driver image to obtain a detection result, filtering the detection results and determining the type of the target object, obtaining a driving behavior judgment result according to the type of the target object, and issuing an alarm signal when the judgment result indicates that distracted driving behavior has occurred, it is possible not only to judge whether the driver's behavior is normal driving or distracted driving and issue a simple alarm signal, but also to determine which specific distracted driving behavior has occurred and give different alarm prompts, thereby urging the driver to concentrate, ensuring safe driving and avoiding traffic accidents; at the same time, this is helpful as a basis for law enforcement, for data analysis or for further manual confirmation.
Step S20 is basically the same as step S10 shown in Fig. 1 and will not be repeated here. Steps S22 to S28 are described in detail below.
S22: detecting a target object in the driver image to obtain a detection result, wherein the target object corresponds to a distracted driving behavior.
Optionally, in the embodiment of the present invention, the target object in the driver image can be detected through a detection module to obtain the detection result.
Optionally, in the embodiment of the present invention, the detection result can indicate whether the driver image contains a target object.
Optionally, in the embodiment of the present invention, the target object includes at least one of the following: a cigarette, a mobile phone, a water cup and food. The distracted driving behavior corresponding to the target object includes at least one of the following: smoking, making a phone call, drinking water and eating.
Since distracted driving behavior includes smoking, making phone calls, drinking water, eating and the like, in order to judge more clearly which specific distracted driving behavior has occurred, optionally, in the embodiment of the present invention, in addition to indicating whether the driver image contains a target object, when a target object is detected in the driver image the detection result of step S22 can also include the type of the target object and the probability value corresponding to that type. The probability value represents the probability that the target object belongs to that type; preferably, the value range is 0 to 1.
S24: filtering the detection results and determining the type of the target object.
Optionally, in the embodiment of the present invention, the detection results can be filtered and the type of the target object determined through a logic judgment module.
Since multiple target objects, or interfering objects other than target objects, may be detected in each frame of the driver image, some of them are erroneous detection targets. In order to remove these erroneous detection targets, optionally, in the embodiment of the present invention, filtering the detection results and determining the type of the target object includes: comparing the probability value corresponding to the type in the detection result with a first threshold to obtain a comparison result, and filtering the target detection results according to the comparison result, where multiple target objects of different types can share the same first threshold or each type of target object can correspond to its own first threshold. When the comparison result indicates that the probability value corresponding to the type in the detection result is greater than the first threshold, the detection result is retained; otherwise, the detection result is discarded. When only one detection result has a probability value greater than the first threshold, the type of the target object can be determined directly. When there are multiple detection results whose probability values are greater than the first threshold, only the detection result with the highest probability value is retained, and the type of the target object is determined accordingly.
Optionally, in the embodiment of the present invention, the driver image can be input into a target detection algorithm to detect the target object in the driver image, where the target detection algorithm can be obtained by offline training on a large number of samples. Preferably, the target detection algorithm can be a deep learning algorithm, such as yolo, faster-RCNN or SSD.
Step S26: obtaining a driving behavior judgment result according to the type of the target object.
Optionally, in the embodiment of the present invention, the driving behavior judgment result can be obtained according to the type of the target object through the logic judgment module.
Optionally, in the embodiment of the present invention, the driving behavior judgment result includes normal driving behavior and various specific distracted driving behaviors. When the detection result indicates that the driver image does not contain a target object, the driving behavior judgment result is normal driving behavior; when the detection result indicates that the driver image contains a target object, the driving behavior judgment result is distracted driving behavior, and, according to the type of the target object, various specific distracted driving behaviors can be further determined, for example, smoking, making a phone call, drinking water, eating and so on. Specifically, for example, if the target type is a cigarette, the specific distracted driving behavior is judged to be smoking; if the target type is a water cup, the specific distracted driving behavior is judged to be drinking water.
Step S28: issuing an alarm signal when the judgment result indicates that distracted driving behavior has occurred.
Optionally, in the embodiment of the present invention, the alarm signal can be issued according to the judgment result through a communication module. The alarm signal can be at least one of the following: a sound prompt, a light prompt and a vibration prompt. Specifically, the sound prompt includes a voice announcement or a ring tone, and the light prompt includes a steady or flashing light. Preferably, voice broadcasts can be used to give different prompts for the various specific distracted driving behaviors that occur.
Optionally, in the embodiment of the present invention, when the judgment result indicates that distracted driving behavior has occurred, the driver image can also be transmitted to a monitoring center in real time, as a basis for law enforcement, for data collection, for data analysis or for further manual confirmation.
Through the above steps, it is possible not only to judge whether the driver's behavior is normal driving or distracted driving and issue a simple alarm signal, but also to determine which specific distracted driving behavior has occurred and give different alarm prompts, thereby urging the driver to concentrate, ensuring safe driving and avoiding traffic accidents; at the same time, this is helpful as a basis for law enforcement, for data collection, for data analysis or for further manual confirmation.
Optionally, the distracted driving monitoring method of the embodiment of the present invention further includes initializing the hardware and software before the driver image is collected in step S10 or S20.
In order to improve the accuracy of the distracted driving monitoring result, optionally, in the embodiment of the present invention, after step S10 or S20 (that is, collecting the driver image), step S11 can further be included: detecting the face area. It should be noted that step S11 can be performed before, after or at the same time as step S12 or S22 (that is, detecting the target object in the driver image and obtaining the detection result).
Since the overall area of the driver image is large, multiple target objects or interfering objects other than target objects may appear in this area at the same time. In order to improve the accuracy of the distracted driving monitoring result, optionally, in the embodiment of the present invention, the detection result of step S12 or S22 can also include the position of the target object, where the position of the target object can be represented by a rectangular box defined by the coordinates of the upper-left and lower-right corners, the upper-right and lower-left corners, or all four corners.
When the detection result of step S12 or S22 includes the position of the target object, step S14 or S24 can further include step S13: evaluating the rationality of the detection result. Optionally, in the embodiment of the present invention, the rationality of the detection result can be evaluated by analyzing the relative position relationship between the position of the target object and a preset reasonable area. Specifically, evaluating the rationality of the target position includes calculating the intersection-over-union between the position of the target object and the preset reasonable area corresponding to that target object, and comparing the intersection-over-union with a second threshold; when the intersection-over-union is greater than the second threshold, the position of the target object appears in the preset reasonable area, the detection result is credible, and the next step can be performed; otherwise the target detection result is discarded. The preset reasonable area can be set in advance according to the reasonable area in which the distracted driving behavior may appear relative to the face area. For example, the preset reasonable area corresponding to phone-call behavior can be the areas on the two sides of or below the face area; the preset reasonable area corresponding to smoking behavior can be the area below the face.
By adding step S11 and/or step S13, that is, by detecting the face area and/or evaluating the rationality of the detection result, the accuracy of the distracted driving monitoring result can be improved.
In order to further improve the accuracy of the distracted driving monitoring result, optionally, in the embodiment of the present invention, step S14 or step S26 can also determine the final judgment result by combining the judgment results of consecutive frames, so as to judge distracted driving behavior more accurately and reduce the false detection rate. Specifically, combining the judgment results of consecutive frames includes using a queue structure to store the judgment result of each frame in the last t seconds and maintaining the queue; the queue records are traversed, and if the proportion of a certain driving behavior in the last t seconds exceeds a third threshold, that driving behavior is taken as the final judgment result.
According to one embodiment of the present invention, a distracted driving monitoring system is further provided. The distracted driving monitoring system 30 includes:
an image acquisition module 300 configured to collect a driver image;
Optionally, in the embodiment of the present invention, the image acquisition module 300 can be an independent camera device or a camera device integrated on an electronic device, for example an independent infrared camera, depth camera, RGB camera or Mono camera, or a camera built into an electronic device such as a mobile phone, tablet computer, driving recorder, navigator, operation panel or center console. The driver image can be obtained by capturing image frames from the video collected by the image acquisition module.
Since the light inside the vehicle (for example, in the driver's cab) usually changes with the driving environment, being relatively bright on a clear day but relatively dark at night, on a cloudy day or in a tunnel, whereas an infrared camera is less affected by illumination changes and can work around the clock, an infrared camera (including a near-infrared camera, etc.) can be selected as the image acquisition module 300 to acquire the driver image, so as to obtain a driver image of better quality than that from an ordinary camera, thereby improving the accuracy of the distracted driving monitoring result.
Optionally, in the embodiment of the present invention, the image acquisition module 300 can be installed in at least one position in the vehicle where the driver's face can be photographed, for example, near the dashboard, near the center console, near the rearview mirror, and so on. The number of image acquisition modules can be one or more.
Optionally, in the embodiment of the present invention, video frame images can be acquired every predetermined number of frames, so as to reduce the acquisition frequency of video frame images and optimize computing resources.
Optionally, in the embodiment of the present invention, the driver image can be preprocessed through the image acquisition module 300, where the preprocessing includes at least one of the following: image scaling, pixel value normalization and image enhancement; in this way, a driver image that meets requirements on clarity, size and the like can be obtained.
a detection module 302 configured to detect a target object in the driver image and obtain a detection result;
Optionally, in the embodiment of the present invention, the detection result can indicate whether the driver image contains a target object.
Optionally, in the embodiment of the present invention, the target object includes at least one of the following: a cigarette, a mobile phone, a water cup and food. The distracted driving behavior corresponding to the target object includes at least one of the following: smoking, making a phone call, drinking water and eating.
Optionally, in the embodiment of the present invention, the detection module 302 uses a target detection algorithm to detect the target object in the driver image, where the target detection algorithm can be obtained by offline training on a large number of samples. Preferably, the target detection algorithm can be a deep learning algorithm, such as yolo, faster-RCNN or SSD.
a logic judgment module 304 configured to obtain a driving behavior judgment result according to the detection result;
Optionally, in the embodiment of the present invention, the driving behavior judgment result includes normal driving behavior and distracted driving behavior. When the detection result indicates that the driver image contains a target object, the driving behavior judgment result is distracted driving behavior; when the detection result indicates that the driver image does not contain a target object, the driving behavior judgment result is normal driving behavior.
a communication module 306 configured to issue an alarm signal when the judgment result indicates that distracted driving behavior has occurred.
Optionally, in the embodiment of the present invention, the alarm signal can be at least one of the following: a sound prompt, a light prompt and a vibration prompt. Specifically, the sound prompt includes a voice announcement or a ring tone, and the light prompt includes a steady or flashing light.
Optionally, in the embodiment of the present invention, when the judgment result indicates that distracted driving behavior has occurred, the communication module can also transmit the driver image to a monitoring center in real time, as a basis for law enforcement or for data collection, data analysis, further manual confirmation and the like.
The above image acquisition module 300, detection module 302, logic judgment module 304 and communication module 306 can be configured in the distracted driving monitoring system independently of one another, or partially or fully integrated into one large module. In this way, the distracted driving monitoring system can monitor the driver's distracted driving behavior in real time and raise an alarm, thereby urging the driver to concentrate, ensuring safe driving and avoiding traffic accidents.
Since distracted driving behavior includes smoking, making phone calls, drinking water, eating and the like, it is desirable to judge more clearly which specific distracted driving behavior has occurred. Optionally, in another distracted driving monitoring system of the embodiment of the present invention, in addition to detecting whether the driver image contains a target object, the detection module 302 can also detect the type of the target object and the probability value corresponding to that type. The probability value represents the probability that the target object belongs to that type; preferably, the value range is 0 to 1.
Then, the logic judgment module 304 is configured to filter the detection results according to the probability value and determine the type of the target object. Since multiple target objects, or interfering objects other than target objects, may be detected in each frame of the driver image, some of them are erroneous detection targets. In order to remove these erroneous detection targets, optionally, in the embodiment of the present invention, the logic judgment module 304 is configured to compare the probability value corresponding to the type in the detection result with a first threshold to obtain a comparison result, and to filter the target detection results according to the comparison result, where multiple target objects of different types can share the same first threshold or each type of target object can correspond to its own first threshold. When the comparison result indicates that the probability value corresponding to the type in the detection result is greater than the first threshold, the detection result is retained; otherwise, the detection result is discarded. When only one detection result has a probability value greater than the first threshold, the type of the target object can be determined directly. When there are multiple detection results whose probability values are greater than the first threshold, only the detection result with the highest probability value is retained, and the type of the target object is determined accordingly. Optionally, in the embodiment of the present invention, the driving behavior judgment result includes normal driving behavior and various specific distracted driving behaviors. When the detection result indicates that the driver image does not contain a target object, the driving behavior judgment result is normal driving behavior; when the detection result indicates that the driver image contains a target object, the driving behavior judgment result is distracted driving behavior, and, according to the type of the target object, the logic judgment module 304 can further determine various specific distracted driving behaviors, for example, smoking, making a phone call, drinking water, eating and so on. Specifically, for example, if the target type is a cigarette, the specific distracted driving behavior is judged to be smoking; if the target type is a water cup, the specific distracted driving behavior is judged to be drinking water.
After that, the communication module 306 issues an alarm signal according to the judgment result. The alarm signal can be at least one of the following: a sound prompt, a light prompt and a vibration prompt. Specifically, the sound prompt includes a voice announcement or a ring tone, and the light prompt includes a steady or flashing light. Preferably, voice broadcasts can be used to give different prompts for the various specific distracted driving behaviors that occur.
In yet another distracted driving monitoring system of the embodiment of the present invention, the detection module 302 can also be configured to detect the face area and the position of the target object after the driver image is collected, where the position of the target object can be represented by a rectangular box defined by the coordinates of the upper-left and lower-right corners, the upper-right and lower-left corners, or all four corners.
The logic judgment module 304 can also be configured to evaluate the rationality of the detection result. Optionally, in the embodiment of the present invention, the rationality of the detection result can be evaluated by analyzing the relative position relationship between the position of the target object and a preset reasonable area. Specifically, evaluating the rationality of the target position includes calculating the intersection-over-union between the position of the target object and the preset reasonable area corresponding to that target object, and comparing the intersection-over-union with a second threshold; when the intersection-over-union is greater than the second threshold, the position of the target object appears in the preset reasonable area, the detection result is credible, and the next step can be performed; otherwise the target detection result is discarded. The preset reasonable area can be set in advance according to the reasonable area in which the distracted driving behavior may appear relative to the face area. For example, the preset reasonable area corresponding to phone-call behavior can be the areas on the two sides of or below the face area; the preset reasonable area corresponding to smoking behavior can be the area below the face.
In still another distracted driving monitoring system of the embodiment of the present invention, the logic judgment module 304 can also be configured to determine the final judgment result by combining the judgment results of consecutive frames, so as to judge distracted driving behavior more accurately and reduce the false detection rate. Specifically, combining the judgment results of consecutive frames includes using a queue structure to store the judgment result of each frame in the last t seconds and maintaining the queue; the queue records are traversed, and if the proportion of a certain driving behavior in the last t seconds exceeds a third threshold, that driving behavior is taken as the final judgment result.
According to one embodiment of the present invention, an electronic device is further provided. The electronic device 40 includes: a processor 400; and a memory 402 configured to store executable instructions of the processor 400, wherein the processor 400 is configured to execute any one of the above distracted driving monitoring methods by executing the executable instructions.
According to one embodiment of the present invention, a storage medium is further provided. The storage medium includes a stored program, and when the program runs, a device on which the storage medium is located is controlled to execute any one of the above distracted driving monitoring methods.
Those skilled in the art can fully understand that the application scenarios of the embodiments of the present invention are not limited to automobile driving, and can also be widely applied to monitoring the driving state of drivers during the operation of ships, airplanes, trains, subways, light rail and various other means of transport.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the superiority or inferiority of the embodiments.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis. For parts not detailed in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed technical content can be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units may be a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units or modules, and may be in electrical or other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to make a computer device (which may be a personal computer, a server, a network device, or the like) execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk or an optical disk.
The above are only preferred implementations of the present invention. It should be noted that those of ordinary skill in the art can make several improvements and refinements without departing from the principles of the present invention, and these improvements and refinements shall also fall within the protection scope of the present invention.

Claims (32)

  1. A distracted driving monitoring method, the method comprising:
    collecting a driver image;
    detecting a target object in the driver image to obtain a detection result, wherein the target object corresponds to a distracted driving behavior;
    obtaining a driving behavior judgment result according to the detection result;
    issuing an alarm signal when the judgment result indicates that the distracted driving behavior has occurred.
  2. The method according to claim 1, wherein the driver image is collected through an image acquisition module, and the image acquisition module is an independent camera device or a camera device integrated on an electronic device.
  3. The method according to claim 1, wherein the target object comprises at least one of the following: a cigarette, a mobile phone, a water cup and food, and the distracted driving behavior corresponding to the target object comprises at least one of the following: smoking, making a phone call, drinking water and eating.
  4. The method according to claim 1, wherein the detection result indicates whether the driver image contains a target object, and when the detection result indicates that the driver image contains a target object, the driving behavior judgment result is distracted driving behavior.
  5. The method according to claim 1 or 4, wherein the detection result comprises the type of the target object and the probability value corresponding to that type.
  6. The method according to claim 5, wherein the method comprises: filtering detection results according to the probability value.
  7. The method according to claim 6, wherein the method comprises: comparing the probability value corresponding to the type in the detection result with a first threshold to obtain a comparison result, and filtering detection results according to the comparison result.
  8. The method according to claim 7, wherein the method comprises: when the comparison result indicates that the probability value corresponding to the type in the detection result is greater than the first threshold, retaining the detection result; otherwise, discarding the detection result.
  9. The method according to claim 8, wherein when there are multiple detection results whose probability values are greater than the first threshold, only the detection result with the highest probability value is retained.
  10. The method according to claim 1, wherein the method comprises: after the collecting of the driver image, detecting a face area.
  11. The method according to claim 1, 4 or 10, wherein the detection result comprises the position of the target object.
  12. The method according to claim 11, wherein the method comprises: evaluating the rationality of the detection result by analyzing the relative position relationship between the position of the target object and a preset reasonable area.
  13. The method according to claim 12, wherein evaluating the rationality of the detection result by analyzing the relative position relationship between the position of the target object and the preset reasonable area comprises: calculating the intersection-over-union between the position of the target object and the preset reasonable area corresponding to the target object, and comparing the intersection-over-union with a second threshold; when the intersection-over-union is greater than the second threshold, the position of the target object appears in the preset reasonable area and the target detection result is credible; otherwise, the target detection result is discarded.
  14. The method according to claim 1, wherein after the collecting of the driver image, the method further comprises:
    preprocessing the driver image to obtain a preprocessed image, wherein the preprocessing comprises at least one of the following: image scaling, pixel value normalization and image enhancement.
  15. The method according to claim 1 or 14, wherein a deep learning algorithm is used to obtain the position, type and probability value of the target object in the driver image or the preprocessed image, and the probability value is the probability that the target object belongs to that type.
  16. The method according to claim 1, further comprising: determining a final judgment result by combining the judgment results of consecutive frames.
  17. The method according to claim 16, wherein a queue structure is used to store the judgment result of each frame in the last t seconds and maintain the queue; the queue records are traversed, and if the proportion of a driving behavior in the last t seconds exceeds a third threshold, that driving behavior is taken as the final judgment result.
  18. A distracted driving monitoring system, comprising:
    an image acquisition module configured to collect a driver image;
    a detection module configured to detect a target object in the driver image and obtain a detection result;
    a logic judgment module configured to obtain a driving behavior judgment result according to the detection result;
    a communication module configured to issue an alarm signal when the judgment result indicates that distracted driving behavior has occurred.
  19. The distracted driving monitoring system according to claim 18, wherein the image acquisition module is an independent camera device or a camera device integrated on an electronic device.
  20. The distracted driving monitoring system according to claim 18, wherein the target object comprises at least one of the following: a cigarette, a mobile phone, a water cup and food, and the distracted driving behavior corresponding to the target object comprises at least one of the following: smoking, making a phone call, drinking water and eating.
  21. The distracted driving monitoring system according to claim 18, wherein the detection result comprises at least one of the following: whether a target object exists, the position of the target object, the type of the target object, and the probability value corresponding to that type.
  22. The distracted driving monitoring system according to claim 21, wherein the logic judgment module is configured to filter the detection results according to the probability value.
  23. The distracted driving monitoring system according to claim 22, wherein the logic judgment module obtains a comparison result by comparing the probability value corresponding to the type in the detection result with a first threshold, and filters detection results according to the comparison result; when the comparison result indicates that the probability value corresponding to the type in the detection result is greater than the first threshold, the detection result is retained; otherwise, the detection result is discarded.
  24. The distracted driving monitoring system according to claim 23, wherein when there are multiple detection results whose probability values are greater than the first threshold, only the detection result with the highest probability value is retained.
  25. The distracted driving monitoring system according to claim 18 or 21, wherein the detection module is configured to detect a face area after the driver image is collected.
  26. The distracted driving monitoring system according to claim 25, wherein the logic judgment module evaluates the rationality of the detection result by analyzing the relative position relationship between the position of the target object and a preset reasonable area.
  27. The distracted driving monitoring system according to claim 26, wherein evaluating the rationality of the detection result by analyzing the relative position relationship between the position of the target object and the preset reasonable area comprises:
    calculating the intersection-over-union between the position of the target object and the preset reasonable area corresponding to the target object, and comparing the intersection-over-union with a second threshold; when the intersection-over-union is greater than the second threshold, the position of the target object appears in the preset reasonable area and the target detection result is credible; otherwise, the target detection result is discarded.
  28. The distracted driving monitoring system according to claim 18, wherein the detection module uses a deep learning algorithm to obtain the position, type and probability value of the target object in the driver image, and the probability value is the probability that the target object belongs to that type.
  29. The distracted driving monitoring system according to claim 18, wherein the logic judgment module determines a final judgment result by combining the judgment results of consecutive frames.
  30. The distracted driving monitoring system according to claim 29, wherein the logic judgment module uses a queue structure to store the judgment result of each frame in the last t seconds and maintains the queue; the queue records are traversed, and if the proportion of a driving behavior in the last t seconds exceeds a third threshold, that driving behavior is taken as the final judgment result.
  31. An electronic device, comprising:
    a processor; and
    a memory configured to store executable instructions of the processor;
    wherein the processor is configured to execute the distracted driving monitoring method according to any one of claims 1 to 17 by executing the executable instructions.
  32. A storage medium, the storage medium comprising a stored program, wherein when the program runs, a device on which the storage medium is located is controlled to execute the distracted driving monitoring method according to any one of claims 1 to 17.
PCT/CN2019/122790 2019-03-08 2019-12-03 Distracted driving monitoring method, system and electronic device WO2020181840A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP19817571.3A EP3730371A4 (en) 2019-03-08 2019-12-03 STEERING WHEEL DISTRACTION MONITORING METHOD AND SYSTEM, AND ELECTRONIC DEVICE
KR1020217032527A KR102543161B1 (ko) 2019-03-08 2019-12-03 산만 운전 모니터링 방법, 시스템 및 전자기기
US16/626,350 US11783599B2 (en) 2019-03-08 2019-12-03 Distracted-driving monitoring method, system and electronic device
JP2021552987A JP7407198B2 (ja) 2019-03-08 2019-12-03 ながら運転のモニタリング方法、システム及び電子機器

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910175982.0A 2019-03-08 2019-03-08 Distracted driving monitoring method, system and electronic device
CN201910175982.0 2019-03-08

Publications (1)

Publication Number Publication Date
WO2020181840A1 true WO2020181840A1 (zh) 2020-09-17

Family

ID=70475931

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/122790 WO2020181840A1 (zh) 2019-03-08 2019-12-03 Distracted driving monitoring method, system and electronic device

Country Status (6)

Country Link
US (1) US11783599B2 (zh)
EP (1) EP3730371A4 (zh)
JP (1) JP7407198B2 (zh)
KR (1) KR102543161B1 (zh)
CN (1) CN111661059B (zh)
WO (1) WO2020181840A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112677981A (zh) * 2021-01-08 2021-04-20 Zhejiang Sany Equipment Co., Ltd. Intelligent assistance method and device for safe driving of working machinery

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112208547B * 2020-09-29 2021-10-01 英博超算(南京)科技有限公司 A safe automatic driving system
CN112347891B * 2020-10-30 2022-02-22 南京佑驾科技有限公司 Vision-based method for detecting in-cabin drinking state
CN112613441A (zh) * 2020-12-29 2021-04-06 新疆爱华盈通信息技术有限公司 Recognition and early-warning method for abnormal driving behavior, and electronic device
CN113191244A (zh) * 2021-04-25 2021-07-30 上海夏数网络科技有限公司 A method for detecting non-standard driver behavior
CN113335296B (zh) * 2021-06-24 2022-11-29 Dongfeng Motor Group Co., Ltd. An adaptive distracted driving detection system and method
CN117163054B (zh) * 2023-08-30 2024-03-12 广州方驰信息科技有限公司 A big data analysis system and method for virtual reality video fusion

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104239847A (zh) * 2013-06-14 2014-12-24 由田新技股份有限公司 Driving warning method and vehicle electronic device
CN104598934A (zh) * 2014-12-17 2015-05-06 安徽清新互联信息科技有限公司 A driver smoking behavior monitoring method
CN106709420A (zh) * 2016-11-21 2017-05-24 厦门瑞为信息技术有限公司 A method for monitoring the driving behavior of commercial vehicle drivers
US20180238686A1 (en) * 2011-02-15 2018-08-23 Guardvant, Inc. Cellular phone and personal protective equipment usage monitoring system
CN108609018A (zh) * 2018-05-10 2018-10-02 郑州天迈科技股份有限公司 Early-warning terminal, early-warning system and analysis algorithm for analyzing dangerous driving behavior
CN108629282A (zh) * 2018-03-29 2018-10-09 福州海景科技开发有限公司 Smoking detection method, storage medium and computer
CN110399767A (zh) * 2017-08-10 2019-11-01 北京市商汤科技开发有限公司 Method and device for recognizing dangerous actions of vehicle occupants, electronic device, and storage medium

Family Cites Families (17)

Publication number Priority date Publication date Assignee Title
JP4775658B2 (ja) * 2006-12-27 2011-09-21 アイシン・エィ・ダブリュ株式会社 Feature recognition device, host-vehicle position recognition device, navigation device, and feature recognition method
JP4942604B2 (ja) * 2007-10-02 2012-05-30 本田技研工業株式会社 Vehicle telephone call determination device
CN102436715B (zh) * 2011-11-25 2013-12-11 大连海创高科信息技术有限公司 Fatigue driving detection method
KR101417408B1 (ko) * 2012-11-15 2014-07-14 현대자동차주식회사 Object recognition method and system using radar
JP2015012679A (ja) 2013-06-28 2015-01-19 株式会社日立製作所 Axial-gap rotating electric machine
CN105069842A (zh) * 2015-08-03 2015-11-18 百度在线网络技术(北京)有限公司 Modeling method and device for a three-dimensional road model
CN105632104B (zh) * 2016-03-18 2019-03-01 内蒙古大学 Fatigue driving detection system and method
CN106529565B (zh) * 2016-09-23 2019-09-13 北京市商汤科技开发有限公司 Target recognition model training and target recognition method and device, and computing device
WO2018085804A1 (en) * 2016-11-07 2018-05-11 Nauto Global Limited System and method for driver distraction determination
KR102342143B1 (ko) * 2017-08-08 2021-12-23 주식회사 만도모빌리티솔루션즈 Deep-learning-based autonomous vehicle, autonomous driving control device, and autonomous driving control method
JP6972756B2 (ja) * 2017-08-10 2021-11-24 富士通株式会社 Control program, control method, and information processing device
CN107704805B (zh) * 2017-09-01 2018-09-07 深圳市爱培科技术股份有限公司 Fatigue driving detection method, driving recorder and storage device
US10915769B2 (en) * 2018-06-04 2021-02-09 Shanghai Sensetime Intelligent Technology Co., Ltd Driving management methods and systems, vehicle-mounted intelligent systems, electronic devices, and medium
JP6870660B2 (ja) * 2018-06-08 2021-05-12 トヨタ自動車株式会社 Driver monitoring device
CN109086662B (zh) * 2018-06-19 2021-06-15 浙江大华技术股份有限公司 Abnormal behavior detection method and device
CN109063574B (zh) * 2018-07-05 2021-04-23 顺丰科技有限公司 Bounding-box prediction method, system and device based on deep neural network detection
US10882398B2 (en) * 2019-02-13 2021-01-05 Xevo Inc. System and method for correlating user attention direction and outside view



Also Published As

Publication number Publication date
KR102543161B1 (ko) 2023-06-14
EP3730371A1 (en) 2020-10-28
CN111661059A (zh) 2020-09-15
US20220180648A1 (en) 2022-06-09
JP2022523247A (ja) 2022-04-21
CN111661059B (zh) 2022-07-08
US11783599B2 (en) 2023-10-10
EP3730371A4 (en) 2020-11-18
JP7407198B2 (ja) 2023-12-28
KR20210135313A (ko) 2021-11-12

Similar Documents

Publication Publication Date Title
WO2020181840A1 (zh) Distracted driving monitoring method, system and electronic device
CN109937152B (zh) Driving state monitoring method and device, driver monitoring system, and vehicle
WO2019232972A1 (zh) Driving management method and system, vehicle-mounted intelligent system, electronic device, and medium
CN110889351B (zh) Video detection method and device, terminal device, and readable storage medium
CN108275114B (zh) Fuel tank anti-theft monitoring system
CN105469035A (zh) Driver bad-driving-behavior detection system based on binocular video analysis
US10810866B2 (en) Perimeter breach warning system
CN111629181B (zh) Fire emergency lane monitoring system and method
CN109377694B (zh) Monitoring method and system for community vehicles
US11423673B2 (en) Method and device for detecting state of holding steering wheel
CN113239754A (zh) Dangerous driving behavior detection and positioning method and system applied to the Internet of Vehicles
CN111860210A (zh) Method and device for detecting both hands off the steering wheel, electronic device, and storage medium
CN101930540A (zh) Video-based multi-feature-fusion flame detection device and method
CN110913209A (zh) Camera occlusion detection method and device, electronic device, and monitoring system
CN109685083A (zh) Multi-scale detection method for drivers illegally using mobile phones while driving
CN108932503A (zh) Method and device for recognizing obstacles ahead of a vehicle in severe weather, storage medium, and terminal
CN110913212B (zh) Optical-flow-based occlusion monitoring method and device for intelligent vehicle-mounted cameras, and driving assistance system
CN108162866A (zh) Lane recognition system and method based on a streaming-media exterior rearview mirror system
TWI706381B (zh) Image object detection method and system
CN211979500U (zh) Vehicle-mounted information aggregation and processing system
CN113450567A (zh) Artificial intelligence early warning system
CN110163037B (zh) Method, device, system, processor and storage medium for monitoring driver state
CN113744498B (zh) System and method for driver attention monitoring
CN112528910B (zh) Method and device for detecting hand off the steering wheel, electronic device, and storage medium
US11106917B2 (en) Surveillance system with human-machine interface

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2019817571

Country of ref document: EP

Effective date: 20191219

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19817571

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021552987

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20217032527

Country of ref document: KR

Kind code of ref document: A