WO2020181840A1 - Distracted driving monitoring method, system and electronic device - Google Patents
Distracted driving monitoring method, system and electronic device
- Publication number
- WO2020181840A1 (PCT/CN2019/122790)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target
- detection result
- distracted driving
- driving behavior
- result
- Prior art date
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/08—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
- B60W40/09—Driving style or behaviour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/08—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
- B60W50/16—Tactile feedback to the driver, e.g. vibration or force feedback to the driver on the steering wheel or the accelerator pedal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W2050/0001—Details of the control system
- B60W2050/0002—Automatic control, details of type of controller or control system architecture
- B60W2050/0004—In digital systems, e.g. discrete-time systems involving sampling
- B60W2050/0005—Processor details or data handling, e.g. memory registers or chip architecture
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
- B60W2050/143—Alarm means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
- B60W2050/146—Display means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2420/00—Indexing codes relating to the type of sensors based on the principle of their operation
- B60W2420/40—Photo, light or radio wave sensitive means, e.g. infrared sensors
- B60W2420/403—Image sensing, e.g. optical camera
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2540/00—Input parameters relating to occupants
- B60W2540/229—Attention level, e.g. attentive to driving, reading or sleeping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30268—Vehicle interior
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Definitions
- the present invention relates to computer vision processing technology, and in particular to a method, system and electronic equipment for monitoring distracted driving.
- At least some embodiments of the present invention provide a distracted driving monitoring method, system, and electronic equipment, so as to at least partially solve the problem of traffic accidents caused by failure to monitor the distracted driving behavior of the driver during driving.
- a method for monitoring distracted driving, including: collecting a driver image; detecting a target in the driver image to obtain a detection result, wherein the target corresponds to a distracted driving behavior; obtaining a judgment result of the driving behavior according to the detection result; and issuing an alarm signal when the judgment result indicates that a distracted driving behavior occurs.
- the method collects an image of the driver through an image acquisition module, wherein the image acquisition module is an independent camera device or a camera device integrated on an electronic device.
- the target includes at least one of the following: cigarettes, mobile phones, water glasses, and food.
- the distracted driving behavior corresponding to the target includes at least one of the following: smoking, making phone calls, drinking water, and eating.
- the detection result indicates whether the driver image contains a target object, and when the detection result indicates that the driver image contains the target object, the judgment result of the driving behavior is a distracted driving behavior.
- the detection result includes the type of the target object and the probability value corresponding to the type.
- the method includes: screening detection results according to the probability value.
- the method includes: comparing the probability value corresponding to the category in the detection result with a first threshold to obtain a comparison result; and screening the detection result according to the comparison result.
- the method includes: when the comparison result indicates that the probability value corresponding to the category in the detection result is greater than the first threshold, retaining the detection result; otherwise, discarding the detection result.
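The threshold-based screening described in the preceding items can be sketched as follows. This is a minimal illustration, not the patented implementation: the detection record layout (a dict with `"type"` and `"prob"` fields) and the threshold value are assumptions for the example.

```python
# Illustrative first threshold; the patent does not fix a concrete value.
FIRST_THRESHOLD = 0.5

def screen_detections(detections, threshold=FIRST_THRESHOLD):
    """Retain detections whose probability value exceeds the first
    threshold; otherwise the detection result is discarded."""
    retained = []
    for det in detections:
        if det["prob"] > threshold:  # comparison result: greater -> retain
            retained.append(det)
    return retained

dets = [{"type": "cigarette", "prob": 0.92},
        {"type": "water glass", "prob": 0.31}]
print(screen_detections(dets))  # only the cigarette detection survives
```

As the claims note, a single shared threshold or one threshold per target type are both possible; the per-type variant would simply look the threshold up by `det["type"]`.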
- the method includes: detecting a face area after the driver image is collected.
- the detection result includes the position of the target object.
- the method includes: evaluating the rationality of the detection result by analyzing the relative position relationship between the position of the target and a preset reasonable area.
- evaluating the rationality of the detection result includes: calculating the intersection-over-union ratio between the position of the target and the preset reasonable area corresponding to the target.
- when the intersection-over-union ratio is greater than the second threshold, the position of the target appears in the preset reasonable area and the target detection result is credible; otherwise, the target detection result is discarded.
- the method further includes: preprocessing the driver image to obtain a preprocessed image; wherein the preprocessing includes at least one of the following: image scaling, pixel value normalization, and image enhancement.
- the method uses a deep learning algorithm to obtain the position, type, and probability value of the target in the driver image or the preprocessed image, where the probability value is the probability that the target belongs to the type.
- the method determines the final judgment result by combining judgment results of consecutive frames.
- the method uses a queue structure to store the judgment result of each frame in the last t seconds and maintain the queue; the queue records are traversed, and if the proportion of a driving behavior in the last t seconds exceeds the third threshold, that driving behavior is taken as the final judgment result.
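The queue-based combination of consecutive frame judgments can be sketched like this. It is a minimal interpretation of the claim under stated assumptions: the class name, the window length `t_seconds`, and the `third_threshold` value are illustrative, and timestamps are passed in explicitly so the behavior is deterministic.

```python
from collections import deque
import time

class BehaviorQueue:
    """Store per-frame judgment results for the last t seconds and
    derive a final judgment when one behavior's proportion exceeds
    the third threshold (parameter values are illustrative)."""

    def __init__(self, t_seconds=3.0, third_threshold=0.6):
        self.t = t_seconds
        self.threshold = third_threshold
        self.queue = deque()  # (timestamp, behavior) records

    def push(self, behavior, now=None):
        now = time.monotonic() if now is None else now
        self.queue.append((now, behavior))
        # maintain the queue: drop records older than t seconds
        while self.queue and now - self.queue[0][0] > self.t:
            self.queue.popleft()

    def final_judgment(self):
        """Traverse the queue records; return the behavior whose share
        of the last t seconds exceeds the threshold, else None."""
        if not self.queue:
            return None
        counts = {}
        for _, behavior in self.queue:
            counts[behavior] = counts.get(behavior, 0) + 1
        behavior, count = max(counts.items(), key=lambda kv: kv[1])
        if count / len(self.queue) > self.threshold:
            return behavior
        return None
```

Combining frames this way is what lets the method suppress one-off false detections: a single misdetected frame cannot push any behavior's proportion over the threshold.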
- a distracted driving monitoring system, including: an image acquisition module configured to collect driver images; a detection module configured to detect targets in the driver image to obtain a detection result; a logical judgment module configured to obtain the judgment result of the driving behavior according to the detection result; and a communication module configured to issue an alarm signal when the judgment result indicates that a distracted driving behavior occurs.
- the image acquisition module is an independent camera device or a camera device integrated on an electronic device.
- the target includes at least one of the following: cigarettes, mobile phones, water glasses, and food.
- the distracted driving behavior corresponding to the target includes at least one of the following: smoking, making phone calls, drinking water, and eating.
- the detection result includes at least one of the following: whether there is a target, a location of the target, a type of the target, and a probability value corresponding to the category.
- the logical judgment module is configured to filter the detection result according to the probability value.
- the logic judgment module obtains the comparison result by comparing the probability value corresponding to the category in the detection result with a first threshold, and screens the detection result according to the comparison result; when the comparison result indicates that the probability value corresponding to the category is greater than the first threshold, the detection result is retained; otherwise, the detection result is discarded.
- the detection module is configured to detect the face area after the driver image is collected.
- the logical judgment module evaluates the rationality of the detection result by analyzing the relative position relationship between the position of the target and a preset reasonable area.
- evaluating the rationality of the detection result includes: calculating the intersection-over-union ratio between the position of the target and the preset reasonable area corresponding to the target.
- when the intersection-over-union ratio is greater than the second threshold, the position of the target appears in the preset reasonable area and the target detection result is credible; otherwise, the target detection result is discarded.
- the detection module uses a deep learning algorithm to obtain the position, type, and probability value of the target in the driver image, where the probability value is the probability that the target belongs to the type.
- the logical judgment module determines the final judgment result by combining judgment results of consecutive frames.
- the logical judgment module uses a queue structure to store the judgment result of each frame in the last t seconds and maintain the queue; traverse the queue records, and if the proportion of driving behavior in the last t seconds exceeds the third threshold, Take this driving behavior as the final judgment result.
- an electronic device, including: a processor; and a memory configured to store executable instructions of the processor; wherein the processor is configured to execute the executable instructions to perform any of the distracted driving monitoring methods described above.
- a storage medium, wherein the storage medium includes a stored program, and the device where the storage medium is located is controlled to execute any one of the foregoing distracted driving monitoring methods when the program runs.
- the driver image is collected; the target in the driver image is detected to obtain the detection result; the judgment result of the driving behavior is obtained according to the detection result; and when the judgment result indicates the occurrence of a distracted driving behavior, an alarm signal is issued. The driver's distracted driving behavior is thus monitored and warned against in real time, so as to urge the driver to concentrate, ensure safe driving, and avoid traffic accidents. In addition, the specific distracted driving behavior can be determined and different warning prompts given, which can serve as a basis for law enforcement, data analysis or further manual confirmation, thereby solving the problem of traffic accidents caused by the failure to monitor the driver's distracted driving behavior during driving.
- Fig. 1 is a schematic diagram of an optional distracted driving monitoring method according to an embodiment of the present invention
- FIG. 2 is a flowchart of another optional distracted driving monitoring method according to an embodiment of the present invention.
- FIG. 3 is a structural block diagram of an optional distracted driving monitoring system according to an embodiment of the present invention.
- Fig. 4 is a structural block diagram of an optional electronic device according to an embodiment of the present invention.
- the embodiments of the present invention can be applied to a computer system/server, which can operate with many other general-purpose or special-purpose computing system environments or configurations.
- Examples of well-known computing systems, environments and/or configurations suitable for use with computer systems/servers include, but are not limited to: personal computer systems, server computer systems, handheld or laptop devices, microprocessor-based systems, set-top boxes, Programmable consumer electronics products, network personal computers, small computer systems, large computer systems, and distributed cloud computing technology environments including any of the above systems, etc.
- the computer system/server may be described in the general context of computer system executable instructions (such as program modules, etc.) executed by the computer system.
- program modules can include routines, programs, target programs, components, logic, and data structures, etc., which perform specific tasks or implement specific abstract data types.
- the computer system/server can be implemented in a distributed cloud computing environment, and tasks are performed by remote processing equipment linked through a communication network.
- program modules may be located on a storage medium of a local or remote computing system including a storage device.
- referring to FIG. 1, a flowchart of an optional distracted driving monitoring method according to an embodiment of the present invention is shown. As shown in FIG. 1, the method includes the following steps:
- the driver image is collected; the target object in the driver image is detected to obtain the detection result; the judgment result of the driving behavior is obtained according to the detection result; when the judgment result indicates distracted driving During the behavior, an alarm signal is issued; the driver’s distracted driving behavior can be monitored and alarmed in real time, so as to urge the driver to concentrate, ensure safe driving and avoid traffic accidents.
- Step S10 collecting driver images
- the image of the driver may be acquired through an image acquisition module, where the image acquisition module may be an independent camera device or a camera device integrated on an electronic device, such as an independent infrared camera, depth camera, RGB camera or mono camera, or a camera that comes with an electronic device such as a mobile phone, tablet, driving recorder, navigator, operation panel, or center console.
- the driver image can be obtained by intercepting image frames in the video collected by the image collection module.
- the light in the car usually changes with the driving environment: during the day when the weather is fine, the light in the car (for example, the driver's cab) is brighter, while at night, on a cloudy day or in a tunnel the light in the driver's cab is relatively dark.
- an infrared camera is less affected by changes in light and can work around the clock; therefore, an infrared camera (including a near-infrared camera, etc.) can be chosen to obtain driver images of better quality than an ordinary camera, thereby improving the accuracy of the distracted driving monitoring results.
- the image acquisition module may be set in any position in the vehicle where the driver's face can be photographed, for example, near the dashboard, near the center console, near the rearview mirror, etc.
- the number of image acquisition modules can be one or more.
- video frame images may be acquired every predetermined number of frames to reduce the frequency of acquiring video frame images and optimize computing resources.
- the driver image may be preprocessed, and the preprocessing includes at least one of the following: image scaling, pixel value normalization, image enhancement; thus, the definition, size, etc. can be obtained The driver image that meets the requirements.
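Two of the named preprocessing steps, image scaling and pixel value normalization, can be sketched in a few lines. These are common generic formulations (8-bit values mapped into [0, 1], nearest-neighbour resampling), chosen for illustration; the patent names the steps but does not specify formulas, and the function names are assumptions.

```python
def normalize_pixels(image):
    """Map 8-bit pixel values into [0, 1] (a common normalization;
    the patent only names 'pixel value normalization')."""
    return [[px / 255.0 for px in row] for row in image]

def scale_nearest(image, out_h, out_w):
    """Minimal nearest-neighbour image scaling sketch for a 2-D
    list-of-lists grayscale image."""
    in_h, in_w = len(image), len(image[0])
    return [[image[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)]
            for r in range(out_h)]

img = [[0, 128], [255, 64]]
print(normalize_pixels(img)[1][0])  # 1.0
print(scale_nearest(img, 4, 4))     # each source pixel repeated 2x2
```

In practice a CV library would perform both steps on real frames; the point here is only the shape of the transformation that brings images to the size and value range the detector expects.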
- Step S12 detecting a target in the driver's image to obtain a detection result; wherein the target corresponds to a distracted driving behavior;
- the target object in the driver image may be detected by the detection module to obtain the detection result.
- the detection result may indicate whether the driver image contains the target object.
- the target includes at least one of the following: cigarettes, mobile phones, water glasses, and food.
- the distracted driving behavior corresponding to the target object includes at least one of the following: smoking, making phone calls, drinking water, and eating.
- the driver image can be input to the target detection algorithm to detect the target in the driver image, where the target detection algorithm can be obtained by offline training a large number of samples.
- the target detection algorithm may be a deep learning algorithm, such as yolo, faster-RCNN, SSD, etc.
- Step S14 Obtain the judgment result of the driving behavior according to the detection result
- the judgment result of the driving behavior can be obtained according to the detection result through the logic judgment module.
- the judgment result of the driving behavior includes normal driving behavior and distracted driving behavior.
- when the detection result indicates that the driver image contains a target object, the driving behavior judgment result is a distracted driving behavior; when the detection result indicates that the driver image does not contain a target object, the driving behavior judgment result is a normal driving behavior.
- Step S16 When the judgment result indicates that a distracted driving behavior occurs, an alarm signal is issued.
- an alarm signal may be issued according to the judgment result through the communication module.
- the alarm signal can be at least one of the following: sound prompt, light prompt, vibration prompt.
- the sound prompt includes a voice or a bell.
- the light prompt includes a light or a flashing light.
- the driver image can also be transmitted to the monitoring center in real time, as a basis for law enforcement or for data collection, data analysis, further manual confirmation, and so on.
- the driver's distracted driving behavior can be monitored and alarmed, so as to urge the driver to concentrate, ensure safe driving, and avoid traffic accidents.
- the above-mentioned distracted driving monitoring method can only judge whether the driver's behavior is normal driving or distracted driving and give a simple alarm; it cannot determine which specific distracted driving behavior occurred or give differentiated warning prompts.
- referring to FIG. 2, a flowchart of another optional distracted driving monitoring method according to an embodiment of the present invention is shown. As shown in FIG. 2, the method includes the following steps:
- the driver image is collected; the target in the driver image is detected to obtain the detection result; the detection result is screened to determine the type of the target; the judgment result of the driving behavior is obtained according to the type of the target; and when the judgment result indicates that a distracted driving behavior occurs, an alarm signal is issued.
- in addition to judging whether the driver's behavior is normal driving or distracted driving and issuing a simple alarm signal, this method can determine the specific type of distracted driving behavior and give different warning prompts, so as to urge the driver to concentrate, ensure safe driving, and avoid traffic accidents; at the same time, it can be used as a basis for law enforcement, data analysis or further manual confirmation.
- step S20 is basically the same as the step S10 shown in FIG. 1, and will not be repeated here.
- steps S22 to S28 will be described in detail below.
- the target object in the driver image may be detected by the detection module to obtain the detection result.
- the detection result may indicate whether the driver image contains the target object.
- the target includes at least one of the following: cigarettes, mobile phones, water glasses, and food.
- the distracted driving behavior corresponding to the target object includes at least one of the following: smoking, making phone calls, drinking water, and eating.
- in addition to indicating whether the driver image contains the target, when the driver image contains the target, the detection result of step S22 can also include the type of the target and the probability value corresponding to the type.
- the probability value represents the probability that the target object belongs to the type.
- its value ranges from 0 to 1.
- the detection result can be screened by the logic judgment module to determine the type of the target object.
- screening the detection results and determining the type of the target includes: comparing the probability value corresponding to the type in the detection result with a first threshold to obtain a comparison result, and screening the target detection result according to the comparison result; wherein multiple targets of different types can share the same first threshold, or each type of target can have its own first threshold.
- when the comparison result indicates that the probability value corresponding to the category in the detection result is greater than the first threshold, the detection result is retained; otherwise, the detection result is discarded.
- the type of the target can be determined.
- the detection result with the highest probability value is retained to determine the type of the target object.
- the driver image can be input to the target detection algorithm to detect the target in the driver image, where the target detection algorithm can be obtained by offline training a large number of samples.
- the target detection algorithm may be a deep learning algorithm, such as yolo, faster-RCNN, SSD, etc.
- Step S26 Obtain the judgment result of the driving behavior according to the type of the target
- the judgment result of the driving behavior can be obtained according to the type of the target object through the logic judgment module.
- the judgment result of the driving behavior includes normal driving behavior and various specific distracted driving behaviors.
- when the detection result indicates that the driver image does not contain a target object, the driving behavior judgment result is normal driving behavior; when the detection result indicates that the driver image contains a target object, the driving behavior judgment result is a distracted driving behavior.
- various specific distracted driving behaviors can be further judged, for example, smoking, making phone calls, drinking water, or eating. Specifically, if the target type is a cigarette, it is determined that the specific distracted driving behavior is smoking; if the target type is a water cup, it is determined that the specific distracted driving behavior is drinking water.
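The mapping from target type to specific distracted driving behavior described above is essentially a lookup table. The sketch below illustrates that logic; the string labels and function name are assumptions for the example, not identifiers from the patent.

```python
# Illustrative mapping from detected target type to the corresponding
# specific distracted driving behavior (labels are assumed names).
TARGET_TO_BEHAVIOR = {
    "cigarette": "smoking",
    "mobile phone": "making a phone call",
    "water cup": "drinking water",
    "food": "eating",
}

def judge_behavior(detected_types):
    """Return the specific distracted behaviors for the detected targets,
    or 'normal driving' when no known target is present."""
    behaviors = [TARGET_TO_BEHAVIOR[t] for t in detected_types
                 if t in TARGET_TO_BEHAVIOR]
    return behaviors or ["normal driving"]

print(judge_behavior(["cigarette"]))  # ['smoking']
print(judge_behavior([]))             # ['normal driving']
```

Keeping this mapping in data rather than code is what allows the system to give a different warning prompt per behavior (e.g. a distinct voice broadcast for smoking versus drinking water).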
- Step S28 When the judgment result indicates that a distracted driving behavior occurs, an alarm signal is issued.
- an alarm signal may be issued according to the judgment result through the communication module.
- the alarm signal can be at least one of the following: sound prompt, light prompt, vibration prompt.
- the sound prompt includes a voice or a bell.
- the light prompt includes a light or a flashing light.
- voice broadcasts can be used to give different prompts to various specific distracted driving behaviors that appear.
- the driver image can also be transmitted to the monitoring center in real time as a basis for law enforcement, or for data collection, data analysis or further manual confirmation.
- the distracted driving monitoring method in the embodiment of the present invention further includes initializing the hardware and software before collecting the driver image in step S10 or S20.
- step S11 may be further included: detecting the face area. It should be noted that step S11 can be performed before, after or at the same time as step S12 or S22 (that is, detecting the target in the driver image and obtaining the detection result).
- the detection result can also include the position of the target, where the position of the target can be represented by a rectangular frame, given by the coordinates of the upper-left and lower-right corners, the coordinates of the upper-right and lower-left corners, or the coordinates of all four corners (upper left, lower right, upper right, and lower left).
- step S13 may further be included before step S14 or S24: evaluating the rationality of the detection result.
- the rationality of the detection result may be evaluated by analyzing the relative position relationship between the position of the target object and the preset reasonable area.
- evaluating the rationality of the target position includes calculating the intersection-over-union between the position of the target object and the preset reasonable area corresponding to the target object, and comparing the intersection-over-union with a second threshold; when the intersection-over-union is greater than the second threshold, the position of the target object lies in the preset reasonable area, the detection result is credible, and the next step can be performed; otherwise, the target detection result is discarded.
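The rationality check described above can be sketched as an intersection-over-union test between the detected box and the preset reasonable region; the (x1, y1, x2, y2) box format and the 0.5 default for the second threshold are assumptions for illustration:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as
    (x1, y1, x2, y2) upper-left / lower-right corners."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_plausible(target_box, reasonable_region, second_threshold=0.5):
    """Keep the detection only if the target box overlaps the preset
    reasonable region more than the second threshold."""
    return iou(target_box, reasonable_region) > second_threshold
```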
- the preset reasonable area can be preset according to the reasonable area where the distracted driving behavior may appear in the face area.
- the preset reasonable area corresponding to the behavior of making a call may be the areas on the two sides of or below the face area; the preset reasonable area corresponding to the smoking behavior may be the area below the face.
- through step S11 and/or step S13, that is, detecting the face area and/or evaluating the rationality of the detection result, the accuracy of the distracted driving monitoring result can be improved.
- step S14 or step S26 can also determine the final judgment result by combining the judgment results of consecutive frames, so as to judge the distracted driving behavior more accurately and reduce the false detection rate.
- combining the judgment results of consecutive frames includes using a queue structure to store the judgment result of each frame in the last t seconds and maintaining the queue; the queue records are traversed, and if the proportion of a certain driving behavior in the last t seconds exceeds a third threshold, that driving behavior is taken as the final judgment result.
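The consecutive-frame voting described above can be sketched with a fixed-length queue; approximating "the last t seconds" by a frame count and using 0.6 as the third threshold are assumptions for illustration:

```python
from collections import deque, Counter

class JudgmentQueue:
    """Sliding window of per-frame judgment results. The window length
    stands in for the last t seconds of frames; the third threshold is
    the fraction of the window a behavior must occupy."""
    def __init__(self, max_frames, third_threshold=0.6):
        self.window = deque(maxlen=max_frames)
        self.third_threshold = third_threshold

    def push(self, judgment):
        # Appending past maxlen automatically drops the oldest frame,
        # which maintains the queue as described in the text.
        self.window.append(judgment)

    def final_judgment(self):
        """Return the behavior whose share of the window exceeds the
        third threshold, or None if no behavior qualifies."""
        if not self.window:
            return None
        behavior, count = Counter(self.window).most_common(1)[0]
        if count / len(self.window) > self.third_threshold:
            return behavior
        return None
```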
- a distracted driving monitoring system is also provided, and the distracted driving monitoring system 30 includes:
- the image collection module 300 is configured to collect driver images
- the image acquisition module 300 may be an independent camera device or a camera device integrated on an electronic device, for example, an independent infrared camera, depth camera, RGB camera, or Mono camera, or a camera that comes with an electronic device such as a mobile phone, tablet, driving recorder, navigator, operation panel, or center console.
- the driver image can be obtained by intercepting image frames in the video collected by the image collection module.
- the light in the car usually changes with the driving environment: on a fine day the light in the car (for example, the driver's cab) is brighter, while at night, on a cloudy day, or in a tunnel the light in the driver's cab is relatively dark. An infrared camera is less affected by changes in illumination and can work around the clock, so an infrared camera (including a near-infrared camera, etc.) can be selected as the image acquisition module 300; the driver images obtained are of better quality than those from an ordinary camera, thereby improving the accuracy of the distracted driving monitoring results.
- the image acquisition module 300 can be installed in any position in the vehicle where the driver's face can be photographed, for example, near the dashboard, near the center console, near the rearview mirror, and so on.
- the number of image acquisition modules can be one or more.
- video frame images may be acquired every predetermined number of frames to reduce the frequency of acquiring video frame images and optimize computing resources.
- the driver image may be preprocessed through the image acquisition module 300, the preprocessing including at least one of the following: image scaling, pixel value normalization, and image enhancement, so as to obtain a driver image that meets the requirements for clarity and size.
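Two of the preprocessing steps named above, pixel value normalization and image scaling, can be sketched minimally with plain Python lists standing in for a real image library; the function names are illustrative:

```python
def normalize_pixels(pixels):
    """Scale 0-255 pixel values to the 0-1 range."""
    return [p / 255.0 for p in pixels]

def nearest_neighbor_resize(img, out_w, out_h):
    """img is a 2D list of pixel rows; a minimal stand-in for the
    image scaling step (nearest-neighbor interpolation)."""
    in_h, in_w = len(img), len(img[0])
    return [[img[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)]
            for r in range(out_h)]
```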
- the detection module 302 is configured to detect the target in the driver image and obtain the detection result
- the detection result may indicate whether the driver image contains the target object.
- the target includes at least one of the following: cigarettes, mobile phones, water glasses, and food.
- the distracted driving behavior corresponding to the target object includes at least one of the following: smoking, making phone calls, drinking water, and eating.
- the detection module 302 uses a target detection algorithm to detect the target in the driver image, where the target detection algorithm can be obtained by offline training on a large number of samples.
- the target detection algorithm may be a deep learning algorithm, such as YOLO, Faster R-CNN, SSD, etc.
- the logical judgment module 304 is configured to obtain the judgment result of the driving behavior according to the detection result
- the judgment result of the driving behavior includes normal driving behavior and distracted driving behavior.
- when the detection result indicates that the driver image contains a target object, the driving behavior judgment result is a distracted driving behavior; when the detection result indicates that the driver image does not contain a target object, the driving behavior judgment result is a normal driving behavior.
- the communication module 306 is configured to issue an alarm signal when the judgment result indicates that a distracted driving behavior occurs.
- the alarm signal may be at least one of the following: a sound prompt, a light prompt, and a vibration prompt.
- voice prompts include voice or ringing
- light prompts include lighting or flashing lights.
- the communication module can also transmit the driver image to the monitoring center in real time, as a basis for law enforcement or for data collection, data analysis, further manual confirmation, etc.
- the above-mentioned image acquisition module 300, detection module 302, logic judgment module 304, and communication module 306 can be configured in the distracted driving monitoring system as mutually independent modules, or partially or fully integrated into one larger module. In this way, the distracted driving monitoring system can monitor the driver's distracted driving behavior in real time and raise alarms, so as to urge the driver to concentrate, ensure safe driving, and avoid traffic accidents.
- the detection module 302 can not only detect whether the driver image contains a target object, but can also detect the type of the target object and the probability value corresponding to that type.
- the probability value represents the probability that the target object belongs to the type, and its value ranges from 0 to 1.
- the logic judgment module 304 is configured to screen the detection results according to the probability value and determine the type of the target object. Since multiple targets, or interfering objects other than the target, may be detected in each frame of the driver image, some of the detections are erroneous. To remove these erroneous detections, optionally, in the embodiment of the present invention, the logic judgment module 304 is configured to compare the probability value corresponding to the type in the detection result with a first threshold to obtain a comparison result, and to screen the detection results according to the comparison result, wherein targets of different types may share the same first threshold, or each type of target may correspond to its own first threshold. When the comparison result indicates that the probability value corresponding to the type in the detection result is greater than the first threshold, the detection result is retained; otherwise, the detection result is discarded.
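The probability-value screening described above, combined with the keep-highest rule stated in claim 9, can be sketched as follows; the dictionary-based detection format and the 0.5 fallback threshold are assumptions for illustration:

```python
def filter_detections(detections, first_thresholds):
    """detections: list of dicts with 'type' and 'score' keys.
    first_thresholds: per-type threshold dict (types may instead
    share one value, as the text allows). Keeps detections whose
    score exceeds their type's first threshold; if several survive,
    only the highest-scoring one is retained (per claim 9)."""
    kept = [d for d in detections
            if d["score"] > first_thresholds.get(d["type"], 0.5)]
    return max(kept, key=lambda d: d["score"]) if kept else None
```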
- the type of the target can be determined.
- the judgment result of the driving behavior includes normal driving behavior and various specific distracted driving behaviors.
- when the detection result indicates that the driver image does not contain the target object, the driving behavior judgment result is normal driving behavior; when the detection result indicates that the driver image contains the target object, the driving behavior judgment result is distracted driving behavior, and,
- the logic judgment module 304 can further distinguish various specific distracted driving behaviors, for example, smoking, making phone calls, drinking water, eating, etc. For example, if the target type is a cigarette, the specific distracted driving behavior is determined to be smoking; if the target type is a water cup, the specific distracted driving behavior is determined to be drinking water.
- the communication module 306 issues an alarm signal according to the judgment result.
- the alarm signal can be at least one of the following: sound prompt, light prompt, vibration prompt.
- the sound prompt includes a voice or a bell
- the light prompt includes a light or a flashing light.
- voice broadcasts can be used to give different prompts to various specific distracted driving behaviors that appear.
- the detection module 302 may also be configured to detect the face area and the position of the target after the driver image is collected, where the position of the target may be represented by a rectangular box, given by the coordinates of the upper-left and lower-right corners, the coordinates of the upper-right and lower-left corners, or the coordinates of all four corners (upper left, lower right, upper right, and lower left).
- the logic judgment module 304 may also be configured to evaluate the rationality of the detection result.
- the rationality of the detection result may be evaluated by analyzing the relative position relationship between the position of the target object and the preset reasonable area.
- evaluating the rationality of the target position includes calculating the intersection-over-union between the position of the target object and the preset reasonable area corresponding to the target object, and comparing the intersection-over-union with a second threshold; when the intersection-over-union is greater than the second threshold, the position of the target object lies in the preset reasonable area, the detection result is credible, and the next step can be performed; otherwise, the target detection result is discarded.
- the preset reasonable area can be preset according to the reasonable area where the distracted driving behavior may appear in the face area.
- the preset reasonable area corresponding to the behavior of making a call may be the areas on the two sides of or below the face area; the preset reasonable area corresponding to the smoking behavior may be the area below the face.
- the logic judgment module 304 can also be configured to determine the final judgment result by combining the judgment results of consecutive frames, so as to judge the distracted driving behavior more accurately and reduce the false detection rate.
- combining the judgment results of consecutive frames includes using a queue structure to store the judgment result of each frame in the last t seconds and maintaining the queue; the queue records are traversed, and if the proportion of a certain driving behavior in the last t seconds exceeds a third threshold, that driving behavior is taken as the final judgment result.
- an electronic device is also provided.
- the electronic device 40 includes: a processor 400; and a memory 402 configured to store executable instructions of the processor 400; wherein the processor 400 is configured to execute any one of the above-mentioned distracted driving monitoring methods by executing the executable instructions.
- a storage medium is also provided, wherein the storage medium includes a stored program, and the device where the storage medium is located is controlled to execute any one of the foregoing distracted driving monitoring methods when the program runs.
- the disclosed technical content can be implemented in other ways.
- the device embodiments described above are merely illustrative.
- the division of the units may be a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the displayed or discussed mutual coupling, direct coupling, or communication connection may be indirect coupling or a communication connection through some interfaces, units, or modules, and may be in electrical or other forms.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place, or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- the above-mentioned integrated unit can be realized in the form of hardware or software functional unit.
- when the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
- the technical solution of the present invention, in essence, or the part that contributes over the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the method described in each embodiment of the present invention.
- the aforementioned storage media include: a USB flash drive, read-only memory (ROM), random access memory (RAM), a removable hard disk, a magnetic disk, an optical disc, and other media that can store program code.
Claims (32)
- A distracted driving monitoring method, the method comprising: collecting a driver image; detecting a target object in the driver image to obtain a detection result, wherein the target object corresponds to a distracted driving behavior; obtaining a judgment result of the driving behavior according to the detection result; and issuing an alarm signal when the judgment result indicates that the distracted driving behavior occurs.
- The method according to claim 1, wherein the driver image is collected by an image acquisition module, and wherein the image acquisition module is an independent camera device or a camera device integrated on an electronic device.
- The method according to claim 1, wherein the target object comprises at least one of the following: a cigarette, a mobile phone, a water cup, or food, and the distracted driving behavior corresponding to the target object comprises at least one of the following: smoking, making a phone call, drinking water, or eating.
- The method according to claim 1, wherein the detection result indicates whether the driver image contains a target object, and when the detection result indicates that the driver image contains a target object, the judgment result of the driving behavior is a distracted driving behavior.
- The method according to claim 1 or 4, wherein the detection result comprises the type of the target object and a probability value corresponding to the type.
- The method according to claim 5, wherein the method comprises: screening the detection result according to the probability value.
- The method according to claim 6, wherein the method comprises: comparing the probability value corresponding to the type in the detection result with a first threshold to obtain a comparison result, and screening the detection result according to the comparison result.
- The method according to claim 7, wherein the method comprises: when the comparison result indicates that the probability value corresponding to the type in the detection result is greater than the first threshold, retaining the detection result; otherwise, discarding the detection result.
- The method according to claim 8, wherein when there are multiple detection results whose probability values are greater than the first threshold, only the detection result with the highest probability value is retained.
- The method according to claim 1, wherein the method comprises: detecting a face area after collecting the driver image.
- The method according to claim 1, 4 or 10, wherein the detection result comprises the position of the target object.
- The method according to claim 11, wherein the method comprises: evaluating the rationality of the detection result by analyzing the relative position relationship between the position of the target object and a preset reasonable area.
- The method according to claim 12, wherein evaluating the rationality of the detection result by analyzing the relative position relationship between the position of the target object and the preset reasonable area comprises: calculating the intersection-over-union between the position of the target object and the preset reasonable area corresponding to the target object, and comparing the intersection-over-union with a second threshold; when the intersection-over-union is greater than the second threshold, the position of the target object appears in the preset reasonable area and the target detection result is credible; otherwise, the target detection result is discarded.
- The method according to claim 1, wherein after collecting the driver image, the method further comprises: preprocessing the driver image to obtain a preprocessed image, wherein the preprocessing comprises at least one of the following: image scaling, pixel value normalization, or image enhancement.
- The method according to claim 1 or 14, wherein a deep learning algorithm is used to obtain the position, type, and probability value of the target object in the driver image or the preprocessed image, wherein the probability value is the probability that the target object belongs to the type.
- The method according to claim 1, further comprising: determining a final judgment result by combining the judgment results of consecutive frames.
- The method according to claim 16, wherein a queue structure is used to store the judgment result of each frame in the last t seconds and to maintain the queue; the queue records are traversed, and if the proportion of a certain driving behavior in the last t seconds exceeds a third threshold, that driving behavior is taken as the final judgment result.
- A distracted driving monitoring system, comprising: an image acquisition module configured to collect a driver image; a detection module configured to detect a target object in the driver image to obtain a detection result; a logic judgment module configured to obtain a judgment result of the driving behavior according to the detection result; and a communication module configured to issue an alarm signal when the judgment result indicates that a distracted driving behavior occurs.
- The distracted driving monitoring system according to claim 18, wherein the image acquisition module is an independent camera device or a camera device integrated on an electronic device.
- The distracted driving monitoring system according to claim 18, wherein the target object comprises at least one of the following: a cigarette, a mobile phone, a water cup, or food, and the distracted driving behavior corresponding to the target object comprises at least one of the following: smoking, making a phone call, drinking water, or eating.
- The distracted driving monitoring system according to claim 18, wherein the detection result comprises at least one of the following: whether a target object exists, the position of the target object, the type of the target object, or a probability value corresponding to the type.
- The distracted driving monitoring system according to claim 21, wherein the logic judgment module is configured to screen the detection result according to the probability value.
- The distracted driving monitoring system according to claim 22, wherein the logic judgment module compares the probability value corresponding to the type in the detection result with a first threshold to obtain a comparison result, and screens the detection result according to the comparison result; when the comparison result indicates that the probability value corresponding to the type in the detection result is greater than the first threshold, the detection result is retained; otherwise, the detection result is discarded.
- The distracted driving monitoring system according to claim 23, wherein when there are multiple detection results whose probability values are greater than the first threshold, only the detection result with the highest probability value is retained.
- The distracted driving monitoring system according to claim 18 or 21, wherein the detection module is configured to detect a face area after the driver image is collected.
- The distracted driving monitoring system according to claim 25, wherein the logic judgment module evaluates the rationality of the detection result by analyzing the relative position relationship between the position of the target object and a preset reasonable area.
- The distracted driving monitoring system according to claim 26, wherein evaluating the rationality of the detection result by analyzing the relative position relationship between the position of the target object and the preset reasonable area comprises: calculating the intersection-over-union between the position of the target object and the preset reasonable area corresponding to the target object, and comparing the intersection-over-union with a second threshold; when the intersection-over-union is greater than the second threshold, the position of the target object appears in the preset reasonable area and the target detection result is credible; otherwise, the target detection result is discarded.
- The distracted driving monitoring system according to claim 18, wherein the detection module uses a deep learning algorithm to obtain the position, type, and probability value of the target object in the driver image, wherein the probability value is the probability that the target object belongs to the type.
- The distracted driving monitoring system according to claim 18, wherein the logic judgment module determines a final judgment result by combining the judgment results of consecutive frames.
- The distracted driving monitoring system according to claim 29, wherein the logic judgment module uses a queue structure to store the judgment result of each frame in the last t seconds and maintains the queue; the queue records are traversed, and if the proportion of a certain driving behavior in the last t seconds exceeds a third threshold, that driving behavior is taken as the final judgment result.
- An electronic device, comprising: a processor; and a memory configured to store executable instructions of the processor; wherein the processor is configured to execute the distracted driving monitoring method according to any one of claims 1 to 17 by executing the executable instructions.
- A storage medium, wherein the storage medium comprises a stored program, and wherein when the program runs, a device where the storage medium is located is controlled to execute the distracted driving monitoring method according to any one of claims 1 to 17.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP19817571.3A EP3730371A4 (en) | 2019-03-08 | 2019-12-03 | STEERING WHEEL DISTRACTION MONITORING METHOD AND SYSTEM, AND ELECTRONIC DEVICE |
KR1020217032527A KR102543161B1 (ko) | 2019-03-08 | 2019-12-03 | 산만 운전 모니터링 방법, 시스템 및 전자기기 |
US16/626,350 US11783599B2 (en) | 2019-03-08 | 2019-12-03 | Distracted-driving monitoring method, system and electronic device |
JP2021552987A JP7407198B2 (ja) | 2019-03-08 | 2019-12-03 | ながら運転のモニタリング方法、システム及び電子機器 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910175982.0A CN111661059B (zh) | 2019-03-08 | 2019-03-08 | 分心驾驶监测方法、系统及电子设备 |
CN201910175982.0 | 2019-03-08 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020181840A1 true WO2020181840A1 (zh) | 2020-09-17 |
Family
ID=70475931
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/122790 WO2020181840A1 (zh) | 2019-03-08 | 2019-12-03 | 分心驾驶监测方法、系统及电子设备 |
Country Status (6)
Country | Link |
---|---|
US (1) | US11783599B2 (zh) |
EP (1) | EP3730371A4 (zh) |
JP (1) | JP7407198B2 (zh) |
KR (1) | KR102543161B1 (zh) |
CN (1) | CN111661059B (zh) |
WO (1) | WO2020181840A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112677981A (zh) * | 2021-01-08 | 2021-04-20 | 浙江三一装备有限公司 | 用于作业机械安全驾驶的智能辅助方法及装置 |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112208547B (zh) * | 2020-09-29 | 2021-10-01 | 英博超算(南京)科技有限公司 | 一种安全自动驾驶系统 |
CN112347891B (zh) * | 2020-10-30 | 2022-02-22 | 南京佑驾科技有限公司 | 基于视觉的舱内喝水状态检测方法 |
CN112613441A (zh) * | 2020-12-29 | 2021-04-06 | 新疆爱华盈通信息技术有限公司 | 异常驾驶行为的识别和预警方法、电子设备 |
CN113191244A (zh) * | 2021-04-25 | 2021-07-30 | 上海夏数网络科技有限公司 | 一种驾驶员不规范行为检测方法 |
CN113335296B (zh) * | 2021-06-24 | 2022-11-29 | 东风汽车集团股份有限公司 | 一种分心驾驶自适应检测系统及方法 |
CN117163054B (zh) * | 2023-08-30 | 2024-03-12 | 广州方驰信息科技有限公司 | 一种用于虚拟实景视频融合的大数据分析系统及方法 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104239847A (zh) * | 2013-06-14 | 2014-12-24 | 由田新技股份有限公司 | 行车警示方法及车用电子装置 |
CN104598934A (zh) * | 2014-12-17 | 2015-05-06 | 安徽清新互联信息科技有限公司 | 一种驾驶员吸烟行为监控方法 |
CN106709420A (zh) * | 2016-11-21 | 2017-05-24 | 厦门瑞为信息技术有限公司 | 一种监测营运车辆驾驶人员驾驶行为的方法 |
US20180238686A1 (en) * | 2011-02-15 | 2018-08-23 | Guardvant, Inc. | Cellular phone and personal protective equipment usage monitoring system |
CN108609018A (zh) * | 2018-05-10 | 2018-10-02 | 郑州天迈科技股份有限公司 | 用于分析危险驾驶行为的预警终端、预警系统及分析算法 |
CN108629282A (zh) * | 2018-03-29 | 2018-10-09 | 福州海景科技开发有限公司 | 一种吸烟检测方法、存储介质及计算机 |
CN110399767A (zh) * | 2017-08-10 | 2019-11-01 | 北京市商汤科技开发有限公司 | 车内人员危险动作识别方法和装置、电子设备、存储介质 |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4775658B2 (ja) * | 2006-12-27 | 2011-09-21 | アイシン・エィ・ダブリュ株式会社 | 地物認識装置・自車位置認識装置・ナビゲーション装置・地物認識方法 |
JP4942604B2 (ja) * | 2007-10-02 | 2012-05-30 | 本田技研工業株式会社 | 車両用電話通話判定装置 |
CN102436715B (zh) * | 2011-11-25 | 2013-12-11 | 大连海创高科信息技术有限公司 | 疲劳驾驶检测方法 |
KR101417408B1 (ko) * | 2012-11-15 | 2014-07-14 | 현대자동차주식회사 | 레이더를 이용한 객체 인식방법 및 시스템 |
JP2015012679A (ja) | 2013-06-28 | 2015-01-19 | 株式会社日立製作所 | アキシャルギャップ型回転電機 |
CN105069842A (zh) * | 2015-08-03 | 2015-11-18 | 百度在线网络技术(北京)有限公司 | 道路三维模型的建模方法和装置 |
CN105632104B (zh) * | 2016-03-18 | 2019-03-01 | 内蒙古大学 | 一种疲劳驾驶检测系统和方法 |
CN106529565B (zh) * | 2016-09-23 | 2019-09-13 | 北京市商汤科技开发有限公司 | 目标识别模型训练和目标识别方法及装置、计算设备 |
WO2018085804A1 (en) * | 2016-11-07 | 2018-05-11 | Nauto Global Limited | System and method for driver distraction determination |
KR102342143B1 (ko) * | 2017-08-08 | 2021-12-23 | 주식회사 만도모빌리티솔루션즈 | 딥 러닝 기반 자율 주행 차량, 딥 러닝 기반 자율 주행 제어 장치 및 딥 러닝 기반 자율 주행 제어 방법 |
JP6972756B2 (ja) * | 2017-08-10 | 2021-11-24 | 富士通株式会社 | 制御プログラム、制御方法、及び情報処理装置 |
CN107704805B (zh) * | 2017-09-01 | 2018-09-07 | 深圳市爱培科技术股份有限公司 | 疲劳驾驶检测方法、行车记录仪及存储装置 |
US10915769B2 (en) * | 2018-06-04 | 2021-02-09 | Shanghai Sensetime Intelligent Technology Co., Ltd | Driving management methods and systems, vehicle-mounted intelligent systems, electronic devices, and medium |
JP6870660B2 (ja) * | 2018-06-08 | 2021-05-12 | トヨタ自動車株式会社 | ドライバ監視装置 |
CN109086662B (zh) * | 2018-06-19 | 2021-06-15 | 浙江大华技术股份有限公司 | 一种异常行为检测方法及装置 |
CN109063574B (zh) * | 2018-07-05 | 2021-04-23 | 顺丰科技有限公司 | 一种基于深度神经网络检测的包络框的预测方法、系统及设备 |
US10882398B2 (en) * | 2019-02-13 | 2021-01-05 | Xevo Inc. | System and method for correlating user attention direction and outside view |
- 2019-03-08 CN CN201910175982.0A patent/CN111661059B/zh active Active
- 2019-12-03 US US16/626,350 patent/US11783599B2/en active Active
- 2019-12-03 KR KR1020217032527A patent/KR102543161B1/ko active IP Right Grant
- 2019-12-03 EP EP19817571.3A patent/EP3730371A4/en active Pending
- 2019-12-03 WO PCT/CN2019/122790 patent/WO2020181840A1/zh unknown
- 2019-12-03 JP JP2021552987A patent/JP7407198B2/ja active Active
Also Published As
Publication number | Publication date |
---|---|
KR102543161B1 (ko) | 2023-06-14 |
EP3730371A1 (en) | 2020-10-28 |
CN111661059A (zh) | 2020-09-15 |
US20220180648A1 (en) | 2022-06-09 |
JP2022523247A (ja) | 2022-04-21 |
CN111661059B (zh) | 2022-07-08 |
US11783599B2 (en) | 2023-10-10 |
EP3730371A4 (en) | 2020-11-18 |
JP7407198B2 (ja) | 2023-12-28 |
KR20210135313A (ko) | 2021-11-12 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase | Ref document number: 2019817571; Country of ref document: EP; Effective date: 20191219 |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19817571; Country of ref document: EP; Kind code of ref document: A1 |
ENP | Entry into the national phase | Ref document number: 2021552987; Country of ref document: JP; Kind code of ref document: A |
NENP | Non-entry into the national phase | Ref country code: DE |
ENP | Entry into the national phase | Ref document number: 20217032527; Country of ref document: KR; Kind code of ref document: A |