WO2022078077A1 - Driving risk early warning method, apparatus, computing device and storage medium - Google Patents


Info

Publication number
WO2022078077A1
Authority
WO
WIPO (PCT)
Prior art keywords
dangerous
driver
driving behavior
time period
dangerous driving
Prior art date
Application number
PCT/CN2021/114418
Other languages
English (en)
French (fr)
Inventor
侯琛
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Publication of WO2022078077A1
Related US application: US 17/968,341, published as US20230048112A1

Classifications

    (CPC codes, deduplicated; parent sections: G06V — image or video recognition or understanding; B60W — conjoint control of vehicle sub-units of different type or different function)
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • B60W40/08 Estimation or calculation of non-directly measurable driving parameters related to drivers or passengers
    • B60W40/09 Driving style or behaviour
    • B60W50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2040/0818 Inactivity or incapacity of driver
    • B60W2040/0827 Inactivity or incapacity of driver due to sleepiness
    • B60W2040/0836 Inactivity or incapacity of driver due to alcohol
    • B60W2050/143 Alarm means
    • B60W2050/146 Display means
    • B60W2420/403 Image sensing, e.g. optical camera
    • B60W2556/10 Historical data

Definitions

  • the embodiments of the present application relate to the technical field of intelligent driving, and in particular, to early warning of driving risks.
  • a camera is installed on the vehicle to collect the driver's driving behavior in real time, and driving risk prompts are issued based on that behavior.
  • Embodiments of the present application provide a driving risk early warning method, device, computing device, and storage medium, so as to improve the accuracy of vehicle driving risk early warning and realize accurate warning of the various dangerous scenarios that may be caused by the driver's current dangerous driving behavior.
  • an embodiment of the present application provides a driving risk warning method, the method is executed by a computing device, and the method includes:
  • an embodiment of the present application provides a driving risk warning device, including:
  • an obtaining unit configured to obtain the data of the dangerous driving behavior of the driver in the first time period, and to obtain the corresponding relationship between the number of occurrences of the dangerous driving behavior and the actual number of occurrences of the dangerous scene;
  • a prediction unit used for predicting the target number of times that the driver encounters different dangerous scenarios in the first time period according to the actual number of occurrences of the dangerous driving behavior involved in the dangerous driving behavior data, and the corresponding relationship;
  • a determining unit configured to generate early warning information according to the target number of times.
  • an embodiment of the present application provides a computing device, including a processor and a memory;
  • the memory is configured to store a computer program;
  • the processor is configured to execute the computer program to implement the driving risk warning method described in the above aspects.
  • an embodiment of the present application provides a computer-readable storage medium, where the storage medium includes a computer program, and the computer program is used to execute the driving risk early warning method described in the above aspect.
  • an embodiment of the present application provides a computer program product; the program product includes a computer program stored in a readable storage medium, from which at least one processor of a computer can read the computer program; the at least one processor executes the computer program so that the computer implements the driving risk warning method described in the above aspects.
  • with the driving risk early warning method, device, computing device, and storage medium described above, the dangerous driving behavior data of the driver in the first time period is acquired, together with the corresponding relationship between the number of occurrences of dangerous driving behaviors and the actual number of occurrences of dangerous scenes. Because this corresponding relationship truly reflects how actually occurring dangerous scenes relate to the dangerous driving behaviors the driver exhibited before encountering them, the target number of times the driver may encounter each dangerous scene in the first time period can be accurately predicted from the corresponding relationship and the occurrence counts in the dangerous driving behavior data, and early warning information generated from these target counts warns the driver of the risk.
  • because the early warning information is determined from the dangerous scenes that the driver's own dangerous driving behavior may cause, the driver can clearly understand the serious consequences of that behavior through the warning.
  • this result-oriented early warning approach is intuitive, can effectively improve the efficiency of human-computer interaction, and helps regulate driving behavior.
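  • The data flow described above can be sketched as follows. This is a minimal illustration only: the patent does not specify the form of the corresponding relationship, so a simple linear model (expected scene occurrences per behavior occurrence) is assumed, and all behavior names, scene names, and rates are hypothetical.

```python
from collections import defaultdict

# Hypothetical correspondence: expected occurrences of each dangerous
# scene per occurrence of each dangerous driving behavior (one simple
# linear form such a mapping could take; the rates are made up).
CORRESPONDENCE = {
    "fatigue driving":    {"lane deviation": 0.4, "front collision": 0.2},
    "distracted driving": {"pedestrian collision": 0.3, "lane deviation": 0.1},
}

def predict_scene_counts(behavior_counts):
    """Predict the target number of times each dangerous scene may occur
    in the first time period, from per-behavior occurrence counts."""
    scene_counts = defaultdict(float)
    for behavior, count in behavior_counts.items():
        for scene, rate in CORRESPONDENCE.get(behavior, {}).items():
            scene_counts[scene] += rate * count
    return dict(scene_counts)

def generate_warning(scene_counts, threshold=1.0):
    """Generate result-oriented warning messages for scenes whose
    predicted count reaches the threshold."""
    return [
        f"Warning: current driving behavior may lead to "
        f"{count:.1f} '{scene}' events in this period."
        for scene, count in scene_counts.items()
        if count >= threshold
    ]

# E.g. behavior counts observed in the first time period:
scenes = predict_scene_counts({"fatigue driving": 10, "distracted driving": 5})
warnings = generate_warning(scenes)
```

Because the warning text names the predicted dangerous scenes rather than the behaviors themselves, it matches the result-oriented style the embodiments describe.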
  • FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 2 is a system architecture diagram involved in an embodiment of the application
  • FIG. 3 is a schematic flowchart of a driving risk early warning method provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an image processing involved in an embodiment of the present application.
  • FIG. 6 is another schematic diagram of early warning information involved in an embodiment of the application.
  • FIG. 7 is another schematic flowchart of a driving risk warning method provided by an embodiment of the present application.
  • FIG. 8 is another schematic flowchart of a driving risk early warning method provided by an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of a method for predicting dangerous driving behavior provided by an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of a driving risk warning device provided by an embodiment of the application.
  • FIG. 11 is a schematic structural diagram of a device for predicting dangerous driving behavior provided by an embodiment of the application.
  • FIG. 12 is a block diagram of a computing device involved in an embodiment of the present application.
  • V2X: Vehicle to Everything
  • V2X obtains vehicle information through sensors and vehicle-mounted terminals mounted on the vehicle, and realizes vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), vehicle-to-pedestrian (V2P), and vehicle-to-network (V2N) interconnection through various communication technologies.
  • V2I: Vehicle to Infrastructure
  • V2P: Vehicle to Pedestrian
  • V2N: Vehicle to Network
  • Artificial Intelligence is a theory, method, technology and application system that uses digital computers or machines controlled by digital computers to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use knowledge to obtain the best results.
  • artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can respond in a similar way to human intelligence.
  • Artificial intelligence is to study the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision-making.
  • Artificial intelligence technology is a comprehensive discipline, involving a wide range of fields, including both hardware-level technology and software-level technology.
  • the basic technologies of artificial intelligence generally include technologies such as sensors, special artificial intelligence chips, cloud computing, distributed storage, big data processing technology, operation/interaction systems, and mechatronics.
  • Artificial intelligence software technology mainly includes computer vision technology, speech processing technology, natural language processing technology, and machine learning/deep learning.
  • Computer Vision (CV) is a science that studies how to make machines "see": it uses cameras and computers instead of human eyes to identify, track, and measure targets, and further performs graphics processing so that the processed image is more suitable for human observation or for transmission to an instrument for detection. As a scientific discipline, computer vision studies related theories and technologies, attempting to build artificial intelligence systems that can obtain information from images or multidimensional data.
  • Computer vision technology usually includes image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, 3D object reconstruction, 3D technology, virtual reality, augmented reality, and simultaneous localization and mapping; it also includes common biometric identification technologies such as face recognition and fingerprint recognition.
  • Intelligent driving technology includes high-precision maps, environmental perception, behavioral decision-making, path planning, motion control and other technologies. Intelligent driving technology has a wide range of application prospects. The embodiments of the present application are applied in the technical field of intelligent driving, and are used for early warning of the driving risk of the driver, so as to assist the driver to drive safely.
  • the intercommunication between the vehicle driven by the driver and the network can be realized through the Internet of Vehicles technology.
  • for example, the corresponding relationship can be obtained through the network, the predicted number of times the driver encounters dangerous scenes can be obtained, and so on.
  • the collected images (such as driver images or driving road conditions images, etc.) can also be analyzed through computer vision technology to obtain dangerous driving behavior data or dangerous scene data.
  • B corresponding to A means that B is associated with A. In one implementation, B may be determined from A. However, determining B according to A does not mean that B is determined only according to A; B may also be determined according to A and/or other information.
  • words such as “first” and “second” are used to distinguish the same or similar items with basically the same function and effect.
  • words “first”, “second” and the like do not limit the quantity and execution order, and the words “first”, “second” and the like are not necessarily different.
  • FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • vehicle A, vehicle B, and vehicle C are all moving vehicles, and the driving risk early warning method provided by the embodiment of the present application may provide an early warning service for the driver of at least one of these vehicles while driving.
  • the driving risk early warning method provided in the embodiment of the present application may be executed by a computing device, and the computing device may be a terminal device or a server capable of processing vehicle early warning; the server may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server that provides cloud computing services.
  • the terminal may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, a vehicle terminal, a smart TV, etc., but is not limited thereto.
  • the terminal device and the server can be directly or indirectly connected through wired or wireless communication, which is not limited in this application.
  • the computing device is a server, it can be connected to the vehicle driven by the driver through the Internet of Vehicles, so as to obtain the dangerous driving behavior from the vehicle and return early warning information to the vehicle.
  • FIG. 2 is a system architecture diagram involved in an embodiment of the application. As shown in FIG. 2 , the system architecture includes: a driver, a computing device, a vehicle-mounted camera, and an early warning system.
  • the on-board camera is installed on the vehicle to collect driver images.
  • the in-vehicle camera may be an AI camera, and the AI camera may use computer vision technology to obtain the driver's dangerous driving behavior data.
  • the computing device is respectively connected to the on-board camera and the early warning system, and can obtain data from the on-board camera and the early warning system, and can also send data to the on-board camera and the early warning system.
  • the computing device can obtain the driver's driving behavior data from the in-vehicle camera, and can obtain the dangerous scene data of the vehicle from the early warning system.
  • the above-mentioned computing device may be used to execute the technical solutions of the embodiments of the present application, for example, to obtain the dangerous driving behavior data of the driver in the first time period, and to obtain the corresponding relationship between the number of occurrences of dangerous driving behaviors and the actual number of occurrences of dangerous scenes.
  • the computing device may send the generated early warning information to an early warning system, and the early warning system displays the early warning information.
  • the computing device when the above-mentioned computing device has a display function, for example, when it has a display screen, the computing device can directly display the warning information.
  • the early warning system is installed on the vehicle to give early warning of dangerous scenes and save the data of dangerous scenes that have actually occurred in the vehicle.
  • the above-mentioned early warning system may be installed on the computing device.
  • the driving risk early warning method in the related art predicts the possibility that a driver will encounter a dangerous scene based on the driver's historical driving behavior; it is difficult for such early warning of dangerous scenes to play an effective role.
  • the embodiment of the present application provides a driving risk prediction method and device, which acquires the dangerous driving behavior data of the driver in the first time period and the corresponding relationship between the number of occurrences of dangerous driving behaviors and the actual number of occurrences of dangerous scenes. Because this corresponding relationship truly reflects how actually occurring dangerous scenes relate to the dangerous driving behaviors the driver exhibited before encountering them, the target number of times the driver may encounter different dangerous scenes in the first time period can be accurately predicted from the corresponding relationship and the occurrence counts in the dangerous driving behavior data.
  • early warning information generated from these target counts warns the driver of the risk. Because the early warning information is determined from the dangerous scenes that the driver's own dangerous driving behavior may cause, the driver can clearly understand the serious consequences of that behavior through the warning. This result-oriented early warning approach is intuitive, can effectively improve the efficiency of human-computer interaction, and helps regulate driving behavior.
  • FIG. 3 is a schematic flowchart of a driving risk warning method according to an embodiment of the present application. As shown in FIG. 3 , the method of the embodiment of the present application includes:
  • the execution body of the embodiment of the present application is the aforementioned computing device.
  • the above-mentioned execution body is a unit having a data processing function in the computing device, for example, a processor in the computing device.
  • the dangerous driving behaviors involved in the embodiments of the present application include preset M different types of dangerous driving behaviors, as shown in Table 1, including fatigue driving, distracted driving, drunk driving, not wearing a seat belt, etc., where M is an integer greater than or equal to 1.
  • the type of dangerous driving behavior may be set according to actual needs, which is not limited in this embodiment of the present application.
  • Table 1 — Types of dangerous driving behavior:
    Type 1 dangerous driving behavior: fatigue driving
    Type 2 dangerous driving behavior: distracted driving
    Type 3 dangerous driving behavior: drunk driving
    Type 4 dangerous driving behavior: not wearing a seat belt
    ...
  • the computing device may obtain the driving speed of the vehicle from the driving system of the vehicle to determine whether the vehicle is speeding.
  • the type of dangerous driving behavior may be stored in the vehicle camera shown in FIG. 2 in advance, and in actual use, the computing device may obtain the type of dangerous driving behavior from the vehicle camera.
  • the type of risky driving behavior may be pre-stored in the computing device.
  • the dangerous scenarios involved in the embodiments of the present application include preset N different types of dangerous scenarios, as shown in Table 2, including lane deviation, pedestrian collision, front collision, safe vehicle distance, etc., where N is an integer greater than or equal to 1.
  • the type of the dangerous scene may be set according to actual needs, which is not limited in this embodiment of the present application.
  • Table 2 — Types of dangerous scenarios:
    Type 1 dangerous scenario: lane deviation
    Type 2 dangerous scenario: pedestrian collision
    Type 3 dangerous scenario: front collision
    Type 4 dangerous scenario: safe vehicle distance
    ...
  • the type of the dangerous scene may be stored in the early warning system shown in FIG. 2 in advance, and in actual use, the computing device may obtain the type of the dangerous scene from the early warning system.
  • the type of hazard scenario may be previously stored in the computing device.
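  • Wherever the preset type tables are stored — the vehicle camera, the early warning system, or the computing device — they might simply be held as small lookup structures. A sketch using the type numbers and names from Tables 1 and 2:

```python
# Preset dangerous driving behavior types (Table 1) and dangerous
# scenario types (Table 2); M and N are the respective table sizes.
DANGEROUS_BEHAVIORS = {
    1: "fatigue driving",
    2: "distracted driving",
    3: "drunk driving",
    4: "not wearing a seat belt",
}
DANGEROUS_SCENES = {
    1: "lane deviation",
    2: "pedestrian collision",
    3: "front collision",
    4: "safe vehicle distance",
}
M, N = len(DANGEROUS_BEHAVIORS), len(DANGEROUS_SCENES)
```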
  • in the above S301, the dangerous driving behavior data of the driver in the first time period can be obtained in at least the following two ways.
  • the computing device can obtain the dangerous driving behavior data of the driver in the first way.
  • the vehicle-mounted camera shown in FIG. 2 is a second vehicle-mounted camera with an image recognition function
  • the computing device may obtain the driver's dangerous driving behavior data by using the second method, specifically:
  • the computing device generates dangerous driving behavior data based on the driver image collected by the first vehicle-mounted camera, and the specific process includes the following steps C1 to C3:
  • step C1: the driver images collected by the first vehicle-mounted camera in the first time period are obtained.
  • the first in-vehicle camera is installed on the vehicle at a position facing the driver, and is used to collect images of the driver in real time.
  • the first vehicle-mounted camera is communicatively connected with the computing device, and can send the driver image collected in the current time period to the computing device.
  • step C2: according to the types of dangerous driving behaviors, identify the actual occurrence counts of different types of dangerous driving behaviors in the driver images.
  • the computing device obtains, from the first vehicle-mounted camera, the driver images collected by the first vehicle-mounted camera in the first time period. Next, according to the preset M types of dangerous driving behaviors, the facial features and behavioral characteristics of the driver in the driver images are identified, to determine which of the M types of dangerous driving behaviors the driver exhibited in the first time period and the actual number of times each of them occurred.
  • the computing device includes a pre-trained image recognition model that can recognize the facial and behavioral characteristics of the driver.
  • the computing device inputs the driver image into the image recognition model, and the image recognition model recognizes the driver's facial features and behavioral features, wherein the facial features include eye state (such as whether the eyes are open or squinting), mouth state (such as whether the mouth is open or closed, and the size of the opening), head position, etc., and the behavioral features include hand movements and upper-body movements.
  • the computing device compares the facial features and behavioral features of the driver identified by the image recognition model with the facial features and behavioral features corresponding to each of the M dangerous driving behaviors, to determine the dangerous driving behaviors corresponding to the identified features.
  • the facial features and behavioral features of the driver corresponding to different dangerous driving behaviors can be set according to the actual situation.
  • step C3: obtaining dangerous driving behavior data according to the actual occurrence counts.
  • the above identification method is used for each driver image to identify the dangerous driving behavior of the driver in each driver image.
  • based on the dangerous driving behavior of the driver identified in each driver image, the actual occurrences of each dangerous driving behavior across the multiple driver images are counted. For example, if fatigue driving is identified in 10 out of 1000 driver images collected in the current time period, it can be determined that the driver engaged in fatigue driving 10 times in the current time period. Each identified dangerous driving behavior, together with its number of occurrences, is taken as the first dangerous driving behavior data; that is, the first dangerous driving behavior data includes each dangerous driving behavior identified for the driver in the current time period and the number of times it occurred.
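  • Steps C2–C3 (per-image identification followed by counting) can be sketched as follows; `recognize_behaviors` is a hypothetical stand-in for the image recognition model, and the frame format is made up for illustration:

```python
from collections import Counter

def recognize_behaviors(driver_image):
    """Hypothetical stand-in for the image recognition model: returns the
    set of dangerous driving behaviors detected in one driver image."""
    return driver_image.get("behaviors", set())

def count_dangerous_behaviors(driver_images):
    """Steps C2-C3: identify dangerous behaviors per image, then count
    the actual occurrences of each behavior over the time period."""
    totals = Counter()
    for image in driver_images:
        totals.update(recognize_behaviors(image))
    return totals

# E.g. fatigue driving identified in 10 of 1000 frames => 10 occurrences.
frames = ([{"behaviors": {"fatigue driving"}}] * 10
          + [{"behaviors": set()}] * 990)
```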
  • the computing device obtains the dangerous driving behavior data of the driver in the first time period from the second vehicle-mounted camera.
  • the vehicle-mounted camera shown in FIG. 2 is a second vehicle-mounted camera
  • the second vehicle-mounted camera is a camera with an image processing function, such as an AI camera.
  • the second vehicle-mounted camera may include a camera module and a processor, and the camera module is used to collect the driver's image in real time, and send the collected driver's image to the processor.
  • the processor sequentially processes the collected driver images to obtain the dangerous driving behavior data. Specifically, the processor performs image recognition processing on the driver images according to the preset M types of dangerous driving behaviors, and obtains the dangerous driving behavior of the driver in the first time period.
  • the processor generates dangerous driving behavior data according to the identified dangerous driving behavior of the driver. It should be noted that the method for the processor to identify the dangerous driving behavior of the driver in the driver image in this method is basically the same as the method for the above-mentioned computing device to identify the dangerous driving behavior of the driver in the driver image. This will not be repeated here.
  • the duration of the first time period is the duration for which the first vehicle-mounted camera or the second vehicle-mounted camera collects the image of the driver.
  • the computing device can obtain the dangerous driving behavior data first, and then obtain the corresponding relationship, or can obtain the corresponding relationship, and then obtain the dangerous driving behavior data, and can also obtain the dangerous driving behavior data and the corresponding relationship at the same time.
  • the corresponding relationship between the number of occurrences of dangerous driving behaviors and the actual number of occurrences of dangerous scenarios may be generated in advance.
• the computing device may directly obtain the pre-generated corresponding relationship, thereby avoiding the time and computing resources consumed by generating the corresponding relationship, so as to realize timely early warning of driving risks. For example, the computing device generates the corresponding relationship once a day, so that within one day after the corresponding relationship is generated, it can be used directly in the driving risk warning process.
  • the above-mentioned corresponding relationship may be generated by the computing device when acquiring the dangerous driving behavior data.
  • the computing device acquires the dangerous driving behavior data of the driver in the first time period, and in response to the acquired dangerous driving behavior data, the computing device starts to generate the corresponding relationship.
• the embodiment of the present application obtains the correspondence between the number of occurrences of dangerous driving behaviors and the actual number of occurrences of dangerous scenes. This correspondence truly reflects the frequency correlation between the actual occurrence of dangerous scenes and the dangerous driving behaviors the driver performed before encountering them. Based on this correspondence, one or more dangerous scenes that may be caused by the driver's dangerous driving behavior can be accurately predicted, thereby improving the accuracy of early warning of driving risks.
  • the above-mentioned corresponding relationship may be generated based on the historical dangerous driving behavior data and historical dangerous scene data of the driver.
  • the above-mentioned corresponding relationship may also be generated based on historical dangerous driving behavior data and historical dangerous scene data of other drivers.
  • the computing device predicts the target number of times the driver encounters different dangerous scenarios in the first time period according to the occurrences of the dangerous driving behavior involved in the dangerous driving behavior data and the corresponding relationship.
• the following formula (1) can be used to determine the number of occurrences of each dangerous scene by the driver within the first time period t: a_{j,t} = c_{j,1}·b_{1,t} + c_{j,2}·b_{2,t} + ... + c_{j,M}·b_{M,t} (1); that is, the vector of predicted scene counts equals the matrix C_{N×M} multiplied by the vector of behavior counts.
• C_{N×M} is the corresponding relationship between the number of occurrences of dangerous driving behaviors and the actual number of occurrences of dangerous scenes
• C_{N×M} is a matrix with N rows and M columns.
• b_{i,t} is the number of occurrences of the i-th dangerous driving behavior by the driver in the time period t, where i is greater than or equal to 1 and less than or equal to M.
• a_{j,t} is the predicted number of occurrences of the j-th dangerous scene by the driver in the time period t, where j is greater than or equal to 1 and less than or equal to N.
• the four dangerous driving behaviors are the fatigue driving, not wearing a seat belt, drunk driving, and distracted driving in Table 1, and the four dangerous scenes are those listed in Table 2.
• the corresponding relationship C_{N×M} between the occurrences of dangerous driving behaviors and the actual occurrences of dangerous scenes is then a matrix with 4 rows and 4 columns (C_{4×4}), and all of its parameters are known.
• a_{1,t} is the predicted number of occurrences of lane deviation, a_{2,t} is the predicted number of occurrences of pedestrian collision, a_{3,t} is the predicted number of occurrences of front collision, and a_{4,t} is the predicted number of occurrences of the safe vehicle distance scene.
• This embodiment of the present application generates early warning information by predicting the target number of occurrences of each dangerous scene by the driver in the first time period, so that the driver can learn from the warning how serious the consequences of his dangerous driving behavior may be.
• this result-oriented early warning method gives intuitive prompts, which can effectively improve the efficiency of human-computer interaction and achieve the purpose of regulating driving behavior.
  • the methods for generating early warning information according to the target number of times include but are not limited to the following:
• the warning information includes the target number of times the driver encounters different dangerous scenes in the first time period, for example, as shown in the accompanying figure.
  • the number of targets corresponding to each dangerous scenario in the early warning information may be sorted according to the number of occurrences.
• the early warning information may include the dangerous scenes whose target count is not 0, while excluding the dangerous scenes whose target count is 0.
  • the early warning information includes one or more dangerous scenarios in which the predicted number of targets of the driver in the first time period is not 0.
• for example, the warning information includes the two dangerous scenes of safe vehicle distance and pedestrian collision, where it is predicted that, in the first time period, the target number of occurrences of the safe vehicle distance scene is 10 and the target number of pedestrian collisions is 1.
• after generating the warning information according to the above method, the computing device sends the generated warning information to the warning system, and the warning system outputs the warning information to the driver, so that the driver can drive safely according to the warning information.
  • the computing device can also directly output the warning information to the driver.
• the driving risk early warning method acquires the dangerous driving behavior data of the driver in the first time period and the corresponding relationship between the number of occurrences of dangerous driving behaviors and the actual number of occurrences of dangerous scenes. Because this correspondence truly reflects the relationship between the actual occurrence of dangerous scenes and the dangerous driving behaviors the driver performed before encountering them, the target number of times the driver may encounter different dangerous scenes in the first time period can be accurately predicted from the correspondence and the occurrence counts in the dangerous driving behavior data, and warning information is generated based on these target counts. Because the warning information is determined from the dangerous scenes that the driver's dangerous driving behavior may cause, the driver can clearly understand the serious consequences of his dangerous driving behavior through the warning. This result-oriented early warning method is intuitive and can effectively improve the efficiency of human-computer interaction and regulate driving behavior.
• FIG. 7 is another schematic flowchart of the driving risk early warning method provided by the embodiment of the application. As shown in FIG. 7, the above S301 of obtaining the corresponding relationship between the number of occurrences of dangerous driving behaviors and the actual number of occurrences of dangerous scenes may include:
  • the historical dangerous driving behavior data includes the dangerous driving behavior that the driver actually occurred in the historical time period.
  • the historical dangerous scene data includes the dangerous scenes that the driver actually encountered in the historical time period.
  • the above-mentioned preset historical time period may be the historical time period closest to the current moment, for example, the preset historical time period is the first three months of the first time period.
  • the computing device can obtain the historical dangerous driving behavior data of the driver within a preset historical time period from the in-vehicle camera.
• the computing device can obtain the historical dangerous driving behavior data of the driver within the preset historical time period from its own storage device.
  • the computing device may obtain the driver's historical hazard scene data within a preset historical time period from the early warning system.
• the computing device can obtain, according to the driver's historical dangerous driving behavior data, the historical number of times the driver performed each of the M types of dangerous driving behaviors in the historical time period, as shown in Table 3:
• Table 3:
Type of dangerous driving behavior | Name of dangerous driving behavior | Behavior history count
Type 1 dangerous driving behavior  | fatigue driving                    | b1
Type 2 dangerous driving behavior  | not wearing seat belt              | b2
Type 3 dangerous driving behavior  | drunk driving                      | b3
Type 4 dangerous driving behavior  | distracted driving                 | b4
...                                | ...                                | ...
• the right-hand column of Table 3 displays the historical number of times the driver performed each dangerous driving behavior in the historical time period.
• for example, the behavior history count of the driver's type 2 dangerous driving behavior in the historical time period is b2.
• the computing device can obtain, according to the driver's historical dangerous scene data, the historical number of times the driver actually encountered each of the N types of dangerous scenes within the historical time period, as shown in Table 4:
• Table 4:
Type of dangerous scene | Name of dangerous scene | Scene history count
Type 1 dangerous scene  | lane deviation          | a1
Type 2 dangerous scene  | pedestrian collision    | a2
Type 3 dangerous scene  | front collision         | a3
Type 4 dangerous scene  | safe vehicle distance   | a4
...                     | ...                     | ...
• the right-hand column of Table 4 shows the historical number of times the driver actually encountered each dangerous scene during the historical time period.
• for example, the scene history count of the second dangerous scene actually encountered by the driver during the historical time period is a2.
• the linear mapping relationship is as shown in formula (2): [a_1, a_2, ..., a_N]^T = C_{N×M} · [b_1, b_2, ..., b_M]^T (2)
• a_1, a_2 to a_N are the historical counts of the different types of dangerous scenes in Table 4 encountered by the driver in the historical time period
• b_1, b_2 to b_M are the behavior history counts of the different types of dangerous driving behaviors in Table 3 performed by the driver during the historical time period.
• by solving the linear mapping relationship, a matrix C_{N×M} can be obtained, and the matrix C_{N×M} is taken as the correspondence between the occurrences of dangerous driving behaviors and the actual occurrences of dangerous scenes.
• the linear mapping relationship is as shown in formula (3): [b_1, b_2, ..., b_M]^T = A_{M×N} · [a_1, a_2, ..., a_N]^T (3)
• a_1, a_2 to a_N are the historical counts of the different types of dangerous scenes in Table 4 encountered by the driver in the historical time period
• b_1, b_2 to b_M are the behavior history counts of the different types of dangerous driving behaviors in Table 3 performed by the driver during the historical time period.
• similarly, the matrix A_{M×N} can be obtained, and the matrix A_{M×N} is taken as the correspondence between the occurrences of dangerous driving behaviors and the actual occurrences of dangerous scenes,
• the matrix A_{M×N} can be understood as a transformed form of the matrix C_{N×M} that maps in the opposite direction.
• in some cases the solution is not unique, and any solution can be selected; in other cases, the solving process may fail to converge.
  • the solution with the smallest norm can be selected as the solution of the matrix.
  • any type of norm can be selected, for example, the 2-norm can be selected.
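As one concrete reading of the least-norm choice above: with a single historical observation pair, the system C·b = a is one equation in M unknowns per row, so the solution is not unique, and the minimum 2-norm solution has a closed form. The counts and helper names below are hypothetical:

```python
def min_norm_row(a_j, b):
    """Minimum 2-norm row c_j satisfying c_j . b = a_j.  For a single
    observation the constraint is one equation in M unknowns, and the
    least-norm solution is c_j = (a_j / ||b||^2) * b."""
    nb2 = sum(x * x for x in b)
    return [a_j * x / nb2 for x in b]

def min_norm_correspondence(a, b):
    """Build the N x M matrix C row by row so that C maps the behavior
    counts b exactly onto the scene counts a, choosing for each row the
    solution with the smallest 2-norm."""
    return [min_norm_row(a_j, b) for a_j in a]

# Hypothetical historical counts for one window: behaviors b, scenes a.
b_hist = [4, 2, 0, 2]
a_hist = [6, 1, 3, 5]
C = min_norm_correspondence(a_hist, b_hist)
```

With several historical windows, the same idea generalizes to a minimum-norm least-squares fit (e.g., a pseudo-inverse); this sketch only covers the single-window case.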
  • FIG. 8 is another schematic flowchart of the driving risk early warning method provided by the embodiment of the application. As shown in FIG. 8 , the above-mentioned S303 generates early warning information according to the target number of times, which may include:
• the target number of times the driver encounters different dangerous scenes in the first time period is: a_{1,t}, a_{2,t}, a_{3,t}, ..., a_{N,t}.
  • the probability of being alerted corresponding to different dangerous scenarios is determined.
• the warning probability of each dangerous scene can be determined by formula (4): p_j = a_{j,t} / (a_{1,t} + a_{2,t} + ... + a_{N,t}) (4)
• p_j is the warning probability of the j-th dangerous scene
• a_{j,t} is the predicted number of times the driver encounters the j-th dangerous scene in the time period t
  • j is greater than or equal to 1 and less than or equal to N.
  • t takes the first time period.
  • the warning probability of each dangerous scenario in the first time period can be determined according to the above formula (4).
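Reading formula (4) as a normalization of the predicted counts, it can be sketched as follows; the function name and the counts are illustrative assumptions:

```python
def warning_probabilities(a):
    """One reading of formula (4): normalize the predicted scene counts
    a_{j,t} so that the N warning probabilities sum to 1."""
    total = sum(a)
    return [x / total for x in a] if total else [0.0] * len(a)

# Predicted counts for four scenes in the first time period (made up).
p = warning_probabilities([10, 1, 5, 4])
```

A scene predicted to occur more often thus receives a proportionally larger warning probability.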
  • S802. Determine at least one dangerous scene to be warned according to the probability of being warned, and generate the warning information according to the at least one dangerous scene to be warned.
  • the early warning information includes at least one dangerous scene to be warned.
  • one or several dangerous scenarios with the highest warning probability are determined as the dangerous scenarios to be warned.
• the above S802 of determining at least one dangerous scene to be warned according to the warning probability of each dangerous scene includes Step A and Step B:
• Step A: generate a random number.
  • the random number obeys a 0-1 uniform distribution.
• Step B: determine at least one dangerous scene to be warned according to the warning probabilities respectively corresponding to the different dangerous scenes and the random number.
  • At least one dangerous scene to be warned is determined, including the following two situations:
• in the first situation, ε is a random number and p_1 + p_2 + ... + p_k is the sum of the warning probabilities of the first k dangerous scenes; when p_1 + ... + p_k < ε ≤ p_1 + ... + p_{k+1}, the (k+1)-th dangerous scene is regarded as the dangerous scene to be warned.
• for example, if the random number ε = 0.5, p_1 = 0.1, p_2 = 0.3, and p_3 = 0.2, then ε is greater than the sum of p_1 and p_2 (i.e., 0.4) and less than the sum of p_1, p_2 and p_3 (i.e., 0.6), so it can be determined that k = 2, and the third dangerous scene (i.e., the front collision in Table 2) is determined as the dangerous scene to be warned.
• in the second situation, when p_1 + ... + p_{k-1} < ε ≤ p_1 + ... + p_k, the k-th dangerous scene is regarded as the dangerous scene to be warned. For example, if ε = 0.2, p_1 = 0.1, p_2 = 0.3, and p_3 = 0.2, then ε is less than the sum of p_1 and p_2 (i.e., 0.4) and greater than p_1 (i.e., 0.1), so the second dangerous scene (i.e., the pedestrian collision in Table 2) is determined as the dangerous scene to be warned.
• similarly, when ε is greater than 0 and less than or equal to p_1, the first dangerous scene is regarded as the dangerous scene to be warned. For example, if ε = 0.1, p_1 = 0.2, p_2 = 0.3, and p_3 = 0.2, then ε is less than p_1 (i.e., 0.2) and greater than 0, so the first dangerous scene (i.e., the lane deviation in Table 2) is determined as the dangerous scene to be warned.
• when multiple dangerous scenes to be warned need to be determined, after each dangerous scene to be warned is determined, the warning probability of the determined scene is excluded from the set of warning probabilities of the dangerous scenes.
  • the warning probability of the remaining dangerous scenarios is normalized, and the warning probability of the remaining dangerous scenarios is re-determined.
• a random number is then re-generated, and the re-generated random number and the re-determined warning probabilities of the remaining dangerous scenes are substituted into the above formula (5), (6) or (7) to determine the next dangerous scene to be warned.
• for example, the computing device first generates a random number ε_1, determines the warning probability of each of the N dangerous scenes according to the above formula (4), and thereby determines the first dangerous scene to be warned.
• next, a random number ε_2 is generated, and the random number ε_2 and the warning probabilities of the remaining N-1 dangerous scenes are substituted into formula (5), (6) or (7) to determine the second dangerous scene to be warned.
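The sampling procedure of formulas (5)-(7), including the exclusion and renormalization steps described above, can be sketched as follows. The probabilities, function names, and the injectable random source are illustrative assumptions:

```python
import random

def pick_scene(probs, eps):
    """Formulas (5)-(7) in one rule: return the first index whose
    cumulative warning probability reaches the random number eps."""
    cum = 0.0
    for k, p in enumerate(probs):
        cum += p
        if eps <= cum:
            return k
    return len(probs) - 1  # guard against floating-point shortfall

def scenes_to_warn(probs, n, rng=random.random):
    """Draw n distinct scenes to warn about: after each pick, drop that
    scene's probability and renormalize the rest, as described above."""
    probs = list(probs)
    indices = list(range(len(probs)))
    picked = []
    for _ in range(min(n, len(probs))):
        k = pick_scene(probs, rng())
        picked.append(indices.pop(k))
        probs.pop(k)
        total = sum(probs)
        probs = [q / total for q in probs] if total else probs
    return picked
```

With ε = 0.5 and probabilities [0.1, 0.3, 0.2, 0.4], the cumulative sum first reaches 0.5 at the third scene, matching the worked example above.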
• in this way, the warning probability of each dangerous scene is determined according to the predicted number of occurrences of each dangerous scene by the driver in the current time period; at least one dangerous scene to be warned is determined according to the warning probabilities, and the determined dangerous scenes are carried in the warning information, so that the driver can directly learn the current dangerous scenes from the warning information and take timely measures to improve the safety of vehicle driving.
• a false warning can be understood as a warning that should not have been issued
• a missed warning can be understood as a warning that should have been issued but was not.
• as shown in Table 5, compared with the existing early warning method, the early warning method of the embodiment of the present application has a low false warning rate and a low missed warning rate; that is, it significantly improves the accuracy of early warning.
  • the embodiments of the present application further provide a method for predicting dangerous driving behavior.
  • FIG. 9 is a schematic flowchart of a method for predicting dangerous driving behavior provided by an embodiment of the present application. As shown in FIG. 9, the method of the embodiment of the present application includes:
  • the method for predicting the dangerous driving behavior of the driver provided by the embodiment of the present application can be applied to the process of confirming responsibility for a traffic accident.
  • the dangerous driving behavior that the driver may have in the second time period is predicted according to the dangerous scene that occurs in the vehicle in the second time period.
  • the above-mentioned second time period may be the first time period or any historical time period, or may also be a time period after the first time period and adjacent to the first time period.
  • the methods for obtaining the dangerous scene data that occur in the vehicle within the second time period include but are not limited to the following:
• a camera, such as a driving recorder, is installed on the vehicle for photographing the external environment of the vehicle.
  • the driving environment images of the vehicle in the second time period are obtained from the camera, and the driving environment images are analyzed to obtain the dangerous scene data that occurred in the vehicle in the second time period.
• alternatively, the driving environment images of the vehicle in the second time period can be obtained from roadside equipment, for example, from cameras installed on both sides of the road, and these driving environment images can be analyzed to obtain the dangerous scene data of the vehicle in the second time period.
  • the first dangerous scene data includes dangerous scenes that occur in the vehicle within the second time period, and the number of occurrences of each dangerous scene.
  • the computing device may obtain the dangerous scene data first, and then obtain the corresponding relationship, or may first obtain the corresponding relationship, and then obtain the dangerous scene data, and may also obtain the dangerous scene data and the corresponding relationship at the same time.
  • the above-mentioned corresponding relationship may be generated in advance, or may be triggered and generated when the computing device acquires the dangerous scene data.
• the following formula (8) can be used to determine the predicted number of occurrences of each dangerous driving behavior by the driver in the second time period: [b_{1,t}, b_{2,t}, ..., b_{M,t}]^T = A_{M×N} · [a_{1,t}, a_{2,t}, ..., a_{N,t}]^T (8)
• A_{M×N} is the corresponding relationship between the number of occurrences of dangerous driving behaviors and the actual number of occurrences of dangerous scenes
• A_{M×N} is a matrix with M rows and N columns.
• b_{i,t} is the predicted number of occurrences of the i-th dangerous driving behavior by the driver in the second time period t, where i is greater than or equal to 1 and less than or equal to M.
• a_{j,t} is the actual number of occurrences of the j-th dangerous scene in the second time period t, where j is greater than or equal to 1 and less than or equal to N.
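By analogy with formula (1), formula (8) is a matrix-vector product in the opposite direction. The matrix entries and scene counts below are hypothetical illustrations:

```python
def predict_behavior_counts(A, a):
    """Formula (8): multiply the M x N matrix A by the vector a of
    actual dangerous-scene counts to predict the per-behavior counts
    b_{i,t} for the second time period."""
    return [sum(A[i][j] * a[j] for j in range(len(a))) for i in range(len(A))]

# Hypothetical 4 x 4 mapping from the four scenes in Table 2 (columns)
# back to the four behaviors in Table 1 (rows).
A = [
    [0.6, 0.2, 0.3, 0.4],  # fatigue driving
    [0.0, 0.1, 0.1, 0.2],  # not wearing seat belt
    [0.1, 0.4, 0.3, 0.0],  # drunk driving
    [0.3, 0.5, 0.4, 0.2],  # distracted driving
]
a_actual = [2, 1, 0, 5]    # actual scene counts in the second time period
b = predict_behavior_counts(A, a_actual)
```

Each entry of `b` is the predicted count of one dangerous driving behavior, which can then be carried in the prediction information.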
  • the prediction information is used to indicate the possible dangerous driving behavior of the driver in the second time period.
  • the prediction information includes the number of times of each dangerous driving behavior predicted by the driver in the second time period in the above-mentioned S902.
  • the prediction information includes one or several dangerous driving behaviors whose predicted occurrence times are not zero, for example, the predicted information includes the dangerous driving behaviors that are predicted to occur the most times.
  • the method for predicting dangerous driving behavior obtains the dangerous scene data of the driver in the second time period, and obtains the corresponding relationship between the number of occurrences of dangerous driving behavior and the actual number of occurrences of dangerous scenes.
  • the corresponding relationship can truly reflect the correlation between the occurrences of dangerous driving behaviors and the occurrences of dangerous scenes. In this way, the possible dangerous driving behavior of the driver in the second time period can be accurately predicted according to the corresponding relationship and the occurrence times of each dangerous scene in the dangerous scene data.
  • FIG. 10 is a schematic structural diagram of a driving risk warning device according to an embodiment of the present application.
  • the early warning device may be an electronic device, or a component of the electronic device (eg, an integrated circuit, a chip, etc.), and the electronic device may be the computing device shown in FIG. 2 .
  • the early warning apparatus 100 may include: an acquisition unit 110 , a prediction unit 120 and a confirmation unit 130 .
  • the obtaining unit 110 is configured to obtain the dangerous driving behavior data of the driver in the first time period, and obtain the corresponding relationship between the occurrence times of the dangerous driving behavior and the actual occurrence times of the dangerous scene;
  • the prediction unit 120 is configured to predict the target number of times the driver encounters different dangerous scenarios in the first time period according to the actual occurrence times of the dangerous driving behavior involved in the dangerous driving behavior data and the corresponding relationship;
  • the determining unit 130 is configured to generate early warning information according to the target number of times.
  • the driving risk warning device of the embodiments of the present application can be used to implement the technical solutions of the above method embodiments, and the implementation principles and technical effects thereof are similar, and will not be repeated here.
• the above-mentioned obtaining unit 110 is specifically configured to: obtain the historical dangerous driving behavior data and historical dangerous scene data of the driver in the historical time period; determine, according to the driver's historical dangerous driving behavior data, the behavior history counts of the different dangerous driving behaviors performed by the driver in the historical time period; determine, according to the driver's historical dangerous scene data, the scene history counts of the different dangerous scenes encountered by the driver in the historical time period; and obtain the corresponding relationship according to the behavior history counts and the scene history counts.
  • the early warning information includes a predicted target number of occurrences of each dangerous scenario by the driver in the current time period.
  • the above determining unit 130 is specifically configured to determine the probability of being alerted respectively corresponding to different dangerous scenarios according to the number of targets; determine at least one dangerous scenario to be alerted according to the probability of being alerted, and The warning information is generated by at least one dangerous scene to be warned.
  • the above determining unit 130 is specifically configured to generate a random number; according to the probability of being alerted and the random number corresponding to different dangerous scenarios, respectively, determine at least one dangerous scenario to be alerted.
• the above determining unit 130 is specifically configured to: when the random number is greater than the sum of the warning probabilities of the first k dangerous scenes and less than or equal to the sum of the warning probabilities of the first k+1 dangerous scenes, determine the (k+1)-th dangerous scene as the dangerous scene to be warned; and when the random number is greater than a first value and less than or equal to the sum of the warning probabilities of the first k dangerous scenes, determine the k-th dangerous scene as the dangerous scene to be warned, where the first value is the sum of the warning probabilities of the first k-1 dangerous scenes when k is greater than 1, and 0 when k is equal to 1.
  • the above-mentioned obtaining unit 110 is specifically configured to obtain the driver image collected by the first vehicle-mounted camera in the first period of time; according to the type of dangerous driving behavior, identify different types of driver images The actual number of dangerous driving behaviors; obtain the dangerous driving behavior data according to the actual number of occurrences.
  • the above-mentioned obtaining unit 110 is specifically configured to obtain the dangerous driving behavior data of the driver in the first time period from the second vehicle-mounted camera, wherein the second vehicle-mounted camera is used to collect the image of the driver, and According to the type of dangerous driving behavior, the collected driver images are identified as dangerous driving behaviors, and the dangerous driving behavior data are generated.
  • the duration of the current time period is the duration of the first vehicle-mounted camera to collect the driver's image.
  • the driving risk warning device of the embodiments of the present application can be used to implement the technical solutions of the above method embodiments, and the implementation principles and technical effects thereof are similar, and will not be repeated here.
  • FIG. 11 is a schematic structural diagram of an apparatus for predicting dangerous driving behavior provided by an embodiment of the present application.
  • the prediction apparatus may be an electronic device, or a component of the electronic device (eg, an integrated circuit, a chip, etc.), and the electronic device may be the computing device shown in FIG. 2 .
  • the prediction apparatus 300 may include: an acquisition unit 310 , a prediction unit 320 and a confirmation unit 330 .
  • the acquiring unit 310 is configured to acquire dangerous scene data that occurs in the vehicle within the second time period.
  • the predicting unit 320 is configured to predict the predicted number of dangerous driving behaviors of the driver in the second time period according to the occurrence times of the dangerous scenes involved in the dangerous scene data and the corresponding relationship.
  • the determining unit 330 is configured to generate prediction information according to the number of predictions.
  • the device for predicting dangerous driving behavior in the embodiment of the present application can be used to execute the technical solutions of the above-mentioned embodiments of the method for predicting dangerous driving behavior.
  • FIG. 12 is a block diagram of a computing device involved in an embodiment of the application.
  • the device may be the computing device shown in FIG. 2 and is used to execute the driving risk early warning method described in the foregoing embodiment.
• for the parts of FIG. 12 not described here, please refer to the description in the foregoing method embodiment.
  • the computing device 200 shown in FIG. 12 includes a memory 201 , a processor 202 , and a communication interface 203 .
  • the memory 201, the processor 202, and the communication interface 203 are communicatively connected to each other.
  • the memory 201, the processor 202, and the communication interface 203 may be connected by a network to realize the communication connection.
  • the computing device 200 described above may also include a bus 204 .
• as shown in FIG. 12, the memory 201, the processor 202, and the communication interface 203 of the computing device 200 are connected to each other through the bus 204 for communication.
  • the memory 201 may be a read-only memory (Read Only Memory, ROM), a static storage device, a dynamic storage device, or a random access memory (Random Access Memory, RAM).
  • the memory 201 may store a program, and when the program stored in the memory 201 is executed by the processor 202, the processor 202 and the communication interface 203 are used to execute the above-mentioned early warning method.
  • the processor 202 may adopt a general-purpose central processing unit (Central Processing Unit, CPU), a microprocessor, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a graphics processor (graphics processing unit, GPU) or one or more integrated circuit.
  • the processor 202 can also be an integrated circuit chip with signal processing capability.
  • the early warning method of the present application may be implemented by an integrated logic circuit of hardware in the processor 202 or an instruction in the form of software.
• the above-mentioned processor 202 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the software module may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory 201, and the processor 202 reads the information in the memory 201, and implements the early warning method of the embodiment of the present application in combination with its hardware.
  • the communication interface 203 uses a transceiver module such as, but not limited to, a transceiver to enable communication between the computing device 200 and other devices or a communication network.
  • the data set can be obtained through the communication interface 203 .
  • the bus 204 may include a pathway for communicating information between the various components of the computing device 200 (eg, memory 201, processor 202, communication interface 203).
  • an embodiment of the present application further provides a storage medium, where the storage medium is used to store a computer program, and the computer program is used to execute the method provided by the foregoing embodiment.
  • the embodiments of the present application also provide a computer program product including instructions, which, when executed on a computer, cause the computer to execute the methods provided by the above embodiments.
  • the above-mentioned embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • when implemented in software, they can be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of the present application are generated.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave).
  • the computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server, a data center, or the like that includes an integration of one or more available media.
  • the usable media may be magnetic media (eg, floppy disks, hard disks, magnetic tapes), optical media (eg, DVD), or semiconductor media (eg, Solid State Disk (SSD)), and the like.

Abstract

一种驾驶风险的预警方法、装置、计算设备及存储介质,驾驶风险的预警方法包括:获取驾驶员在第一时间段内的危险驾驶行为数据,以及获取危险驾驶行为的发生次数与危险场景的实际发生次数之间的对应关系(S301),根据该对应关系和危险驾驶行为数据中所涉及危险驾驶行为的实际发生次数,准确预测出驾驶员在第一时间段内可能发生每一种危险场景的目标次数(S302),并基于目标次数进行风险预警(S303),由于预警信息是根据驾驶员的危险驾驶行为可能造成的危险场景确定的,使得驾驶员通过预警能够明确自己做出的危险驾驶行为可能会导致何种严重后果,这种基于结果导向的预警方式提示直观,能够有效提升人机交互的效率,起到规范驾驶行为的目的。

Description

驾驶风险的预警方法、装置、计算设备及存储介质
本申请要求于2020年10月15日提交中国专利局、申请号为202011105843.X、申请名称为“驾驶风险的预警方法、装置、计算设备及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请实施例涉及智能驾驶技术领域,尤其涉及驾驶风险预警。
背景技术
在车辆的行驶过程中,如何实现安全驾驶是每个驾驶员最关心的问题。
随着计算机视觉技术的发展,计算机视觉技术在安全驾驶领域得到了广泛应用。例如在相关技术中,在车辆上安装摄像头,该摄像头用于实时采集驾驶员的驾驶行为,并基于驾驶员的驾驶行为进行驾驶风险提示。
然而,相关技术中对驾驶行为的风险提示有些时候并不能得到驾驶员的重视,难以实现有效的人机交互。
发明内容
本申请实施例提供一种驾驶风险的预警方法、装置、计算设备及存储介质,用以提高车辆驾驶风险的预警准确性,实现对驾驶员当前的危险驾驶行为可能引起的多种危险场景进行准确预警。
一方面,本申请实施例提供一种驾驶风险的预警方法,所述方法由计算设备执行,所述方法包括:
获取驾驶员在第一时间段内的危险驾驶行为数据,以及获取危险驾驶行为的发生次数与危险场景的实际发生次数之间的对应关系;
根据所述危险驾驶行为数据中所涉及危险驾驶行为的实际发生次数,以及所述对应关系,预测所述驾驶员在所述第一时间段内遭遇不同危险场景的目标次数;
根据所述目标次数生成预警信息。
另一方面,本申请实施例提供一种驾驶风险的预警装置,包括:
获取单元,用于获取驾驶员在第一时间段内的危险驾驶行为数据,以及获取危险驾驶行为的发生次数与危险场景的实际发生次数之间的对应关系;
    预测单元,用于根据所述危险驾驶行为数据中所涉及危险驾驶行为的实际发生次数,以及所述对应关系,预测所述驾驶员在所述第一时间段内遭遇不同危险场景的目标次数;
确定单元,用于根据所述目标次数生成预警信息。
另一方面,本申请实施例提供一种计算设备,包括处理器和存储器;
所述存储器,用于存储计算机程序;
所述处理器,用于执行所述计算机程序以实现上述方面所述的驾驶风险的预警方法。
另一方面,本申请实施例提供了一种计算机可读存储介质,所述存储介质包括计算机程序,所述计算机程序用于执行上述方面所述的驾驶风险的预警方法。
另一方面,本申请实施例提供一种计算机程序产品,所述程序产品包括计算机程序,所述计算机程序存储在可读存储介质中,计算机的至少一个处理器可以从所述可读存储介质读取所述计算机程序,所述至少一个处理器执行所述计算机程序使得计算机实施上述方面所述的驾驶风险的预警方法。
本申请实施例提供的驾驶风险的预警方法、装置、计算设备及存储介质,获取驾驶员在第一时间段内的危险驾驶行为数据,以及获取危险驾驶行为的发生次数与危险场景的实际发生次数之间的对应关系。由于该对应关系可以真实反映出实际出现危险场景和造成驾驶员遭遇该危险场景前所作出危险驾驶行为间在次数上的关联,从而通过该对应关系和危险驾驶行为数据中所涉及危险驾驶行为的发生次数,准确预测出驾驶员在第一时间段内可能遭遇不同危险场景的目标次数,并基于目标次数生成的预警信息对驾驶员进行风险预警,由于预警信息是根据驾驶员的危险驾驶行为可能造成的危险场景确定的,使得驾驶员通过预警能够明确自己做出的危险驾驶行为可能会导致何种严重后果,这种基于结果导向的预警方式提示直观,能够有效提升人机交互的效率,起到规范驾驶行为的目的。
附图说明
图1为本申请实施例提供的一种应用场景示意图;
图2为本申请实施例涉及的系统架构图;
图3为本申请实施例提供的驾驶风险的预警方法的一种流程示意图;
图4为本申请实施例涉及的一种图像处理示意图;
图5为本申请实施例涉及的预警信息的一种示意图;
图6为本申请实施例涉及的预警信息的另一种示意图;
图7为本申请实施例提供的驾驶风险的预警方法的另一种流程示意图;
图8为本申请实施例提供的驾驶风险的预警方法的又一种流程示意图;
图9为本申请实施例提供的危险驾驶行为的预测方法的一种流程示意图;
图10为本申请实施例提供的驾驶风险的预警装置的一种结构示意图;
图11为本申请实施例提供的危险驾驶行为的预测装置的一种结构示意图;
图12为本申请实施例涉及的计算设备的框图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行描述。
为了便于理解本申请的实施例,首先对本申请实施例涉及到的相关概念进行如下简单介绍:
车联网(vehicle to everything,V2X)是通过装载在车上的传感器、车载终端等提供车辆信息,并通过各种通信技术实现车与车之间(Vehicle to Vehicle,V2V)、车与路之间(Vehicle to Infrastructure,V2I)、车与人之间(Vehicle to Pedestrian,V2P)、车与网络之间(Vehicle to Network,V2N)的相互通信。
人工智能(Artificial Intelligence,AI)是利用数字计算机或者数字计算机控制的机器模拟、延伸和扩展人的智能,感知环境、获取知识并使用知识获得最佳结果的理论、方法、技术及应用系统。换句话说,人工智能是计算机科学的一个综合技术,它企图了解智能的实质,并生产出一种新的能以人类智能相似的方式做出反应的智能机器。人工智能也就是研究各种智能机器的设计原理与实现方法,使机器具有感知、推理与决策的功能。
人工智能技术是一门综合学科,涉及领域广泛,既有硬件层面的技术也有软件层面的技术。人工智能基础技术一般包括如传感器、专用人工智能芯片、云计算、分布式存储、大数据处理技术、操作/交互系统、机电一体化等技术。人工智能软件技术主要包括计算机视觉技术、语音处理技术、自然语言处理技术以及机器学习/深度学习等几大方向。
计算机视觉技术(Computer Vision,CV)是一门研究如何使机器“看”的科学,更进一步的说,就是指用摄影机和电脑代替人眼对目标进行识别、跟踪和测量等机器视觉,并进一步做图形处理,使电脑处理成为更适合人眼观察或传送给仪器检测的图像。作为一个科学学科,计算机视觉研究相关的理论和技术,试图建立能够从图像或者多维数据中获取信息的人工智能系统。计算机视觉技术通常包括图像处理、图像识别、图像语义理解、图像检索、OCR、视频处理、视频语义理解、视频内容/行为识别、三维物体重建、3D技术、虚拟现实、增强现实、同步定位与地图构建等技术,还包括常见的人脸识别、指纹识别等生物特征识别技术。
智能驾驶技术包括高精地图、环境感知、行为决策、路径规划、运动控制等技术,智能驾驶技术有着广泛的应用前景。本申请实施例应用于智能驾驶技术领域,用于对驾驶员的驾驶风险进行预警,以辅助驾驶员进行安全驾驶。
在本申请实施例中,可以通过车联网技术可以实现驾驶员所驾驶的车辆与网络之间的相互通信,在一些实现方式中通过网络获取对应关系,获取所预测的驾驶员遭遇危险场景的次数等。还可以通过计算机视觉技术对采集的图像(例如驾驶员图像或行驶路况图像等)进行分析,得到危险驾驶行为数据或危险场景数据。
应理解,在本申请实施例中,“与A对应的B”表示B与A相关联。在一种实现方式中,可以根据A确定B。但还应理解,根据A确定B并不意味着仅仅根据A确定B,还可以根据A和/或其它信息确定B。
在本申请的描述中,除非另有说明,“多个”是指两个或多于两个。
另外,为了便于清楚描述本申请实施例的技术方案,在本申请的实施例中,采用了“第一”、“第二”等字样对功能和作用基本相同的相同项或相似项进行区分。本领域技术人员可以理解“第一”、“第二”等字样并不对数量和执行次序进行限定,并且“第一”、“第二”等字样也并不限定一定不同。
图1为本申请实施例提供的一种应用场景示意图,如图1所示,车辆A、车辆B和车辆C均为行驶中的车辆,本申请实施例提供的驾驶风险的预警方法可以为其中至少一个车辆的驾驶员在行驶过程中提供预警服务。
本申请实施例提供的驾驶风险的预警方法可以由计算设备执行,该计算设备可以为具有车辆预警处理能力的终端设备或者服务器,其中,服务器可以是独立的物理服务器,也可以是多个物理服务器构成的服务器集群或者分布式系统,还可以是提供云计算服务的云服务器。终端可以是智能手机、平板电脑、笔记本电脑、台式计算机、智能音箱、智能手表、车载终端、智能电视等,但并不局限于此。终端设备以及服务器可以通过有线或无线通信方式进行直接或间接地连接,本申请在此不做限制。当该计算设备为服务器时,可以通过车联网与驾驶员所驾驶车辆连接,以从该车辆获取危险驾驶行为,并向该车辆返回预警信息。
图2为本申请实施例涉及的系统架构图,如图2所示,该系统架构包括:驾驶员、计算设备、车载摄像头和预警系统。
其中,车载摄像头安装在车辆上,用于采集驾驶员图像。可选的,该车载摄像头可以为AI摄像头,该AI摄像头可以利用计算机视觉技术,获得驾驶员的危险驾驶行为数据。
计算设备分别与车载摄像头和预警系统通信连接,可以从车载摄像头和预警系统中获得数据,也可以向车载摄像头和预警系统发送数据。例如,计算设备可以从车载摄像头中获得驾驶员的驾驶行为数据,以及可以从预警系统中获得车辆所发生的危险场景数据。
可选的,上述计算设备可以用于执行本申请实施例的技术方案,例如可以用于获取驾驶员在第一时间段内的危险驾驶行为数据,以及获取危险驾驶行为的发生次数与危险场景的实际发生次数之间的对应关系;并根据危险驾驶行为数据中所涉及危险驾驶行为的发生次数,以及对应关系,预测驾驶员在第一时间段内遭遇不同危险场景的目标次数,最后根据目标次数,生成预警信息。
在一些实施例中,若上述计算设备不具有显示功能时,计算设备可以将生成的预警信息发送给预警系统,预警系统显示预警信息。
在一些实施例中,上述计算设备具有显示功能时,例如具有显示屏时,该计算设备可以直接显示预警信息。
预警系统安装在车辆上,用于对危险场景进行预警,并且保存车辆真实发生过的危险场景数据。
在一些实施例中,若计算设备为车载设备,则上述预警系统可以安装在该计算设备上。
相关技术中的驾驶风险预警方法,是基于驾驶员的历史驾驶行为预测驾驶员发生一种危险场景的可能性,其预警准确性低,无法对驾驶员当前时刻的危险驾驶行为可能引起的多种危险场景进行预警,难以起到有效的预警作用。为了解决上述技术问题,本申请实施例提供一种驾驶风险的预警方法与装置,获取驾驶员在第一时间段内的危险驾驶行为数据,以及获取危险驾驶行为的发生次数与危险场景的实际发生次数之间的对应关系。由于该对应关系可以真实反映出实际出现危险场景和造成驾驶员遭遇该危险场景前所作出危险驾驶行为间在次数上的关联,从而通过该对应关系和危险驾驶行为数据中所涉及危险驾驶行为的发生次数,准确预测出驾驶员在第一时间段内可能遭遇不同危险场景的目标次数,并基于目标次数生成的预警信息对驾驶员进行风险预警,由于预警信息是根据驾驶员的危险驾驶行为可能造成的危险场景确定的,使得驾驶员通过预警能够明确自己做出的危险驾驶行为可能会导致何种严重后果,这种基于结果导向的预警方式提示直观,能够有效提升人机交互的效率,起到规范驾驶行为的目的。
下面通过一些实施例对本申请实施例的技术方案进行详细说明。下面这几个实施例可以相互结合,对于相同或相似的概念或过程可能在某些实施例不再赘述。
图3为本申请实施例提供的驾驶风险的预警方法的一种流程示意图。如图3所示,本申请实施例的方法包括:
S301、获取驾驶员在第一时间段内的危险驾驶行为数据,以及获取危险驾驶行为的发生次数与危险场景的实际发生次数之间的对应关系。
本申请实施例的执行主体为前述计算设备。在一些实施例中,上述执行主体为该计算设备中具有数据处理功能的单元,例如为计算设备中的处理器。
本申请实施例涉及的危险驾驶行为包括预设的M种不同类型的危险驾驶行为,如表1所示,包括疲劳驾驶、分神驾驶、醉酒驾驶、未系安全带等,其中,M为大于或等于1的整数。危险驾驶行为的类型可以根据实际需要进行设定,本申请实施例对此不做限制。
表1
危险驾驶行为的类型 危险驾驶行为的名称
第1种危险驾驶行为 疲劳驾驶
第2种危险驾驶行为 未系安全带
第3种危险驾驶行为 醉酒驾驶
第4种危险驾驶行为 分神驾驶
…… ……
需要说明的是,若危险驾驶行为包括超速时,计算设备可以从车辆的驾驶系统处获得车辆的行驶速度,以确定车辆是否超速。
在一些实施例中,危险驾驶行为的类型可以事先保存在图2所示的车载摄像头中,在实际使用时,计算设备可以从车载摄像头中获得危险驾驶行为的类型。
在一些实施例中,危险驾驶行为的类型可以事先保存在计算设备中。
本申请实施例涉及的危险场景包括预设的N种不同类型的危险场景,如表2所示,包括车道偏移、行人碰撞、前碰撞、安全车距等,其中,N为大于或等于1的整数。危险场景的类型可以根据实际需要进行设定,本申请实施例对此不做限制。
表2
危险场景的类型 危险场景的名称
第1种危险场景 车道偏移
第2种危险场景 行人碰撞
第3种危险场景 前碰撞
第4种危险场景 安全车距
…… ……
在一些实施例中,危险场景的类型可以事先保存在图2所示的预警系统中,在实际使用时,计算设备可以从预警系统中获得危险场景的类型。
在一些实施例中,危险场景的类型可以事先保存在计算设备中。
本申请实施例中,上述S301中获取驾驶员在第一时间段内的危险驾驶行为数据的方式至少包括如下两种,其中,当图2所示的车载摄像头为不具有图像识别功能的第一车载摄像头时,计算设备可以采用方式一获得驾驶员的危险驾驶行为数据。当图2所示的车载摄像头为具有图像识别功能的第二车载摄像头时,计算设备可以采用方式二获得驾驶员的危险驾驶行为数据,具体为:
方式一,计算设备基于第一车载摄像头采集的驾驶员图像,生成危险驾驶行为数据,具体过程包括如下步骤C1至步骤C3:
步骤C1,获得第一车载摄像头在第一时间段内所采集的驾驶员图像。
该第一车载摄像头安装在车辆上朝向驾驶员的位置处,用于实时采集驾驶员图像。同时,该第一车载摄像头与计算设备通信连接,可以将当前时间段内所采集的驾驶员图像发送给计算设备。
步骤C2,根据危险驾驶行为的类型,识别驾驶员图像中不同类型危险驾驶行为的实际发生次数。
具体的,计算设备从第一车载摄像头处获得第一时间段内该第一车载摄像头所采集的驾驶员图像。接着,根据预设的M种危险驾驶行为的类型,对驾驶员图像中驾驶员的面部特征和行为特征进行识别,判断驾驶员在第一时间段内出现了M种危险驾驶行为中的哪些类型危险驾驶行为以及所出现危险驾驶行为的实际发生次数。例如计算设备包括事先训练好的图像识别模型,该图像识别模型可以识别驾驶员的面部特征和行为特征。计算设备将驾驶员图像输入该图像识别模型中,该图像识别模型识别出驾驶员的面部特征和行为特征,其中面部特征包括眼睛状态(例如眼睛是睁开还是眯眼)、嘴巴状态(例如嘴巴是张开还是闭住,张开的大小)、头部位置等,行为特征包括手部动作和上身动作等。计算设备将图像识别模型识别出的驾驶员的面部特征和行为特征,与M种危险驾驶行为中每一种危险驾驶行为对应的面部特征和行为特征进行比对,以确定该识别出的面部特征和行为特征所对应的危险驾驶行为。例如,若识别出驾驶员的眼睛为眯眼且嘴巴张开,可以判断驾驶员出现疲劳驾驶;若在驾驶员图像中未识别出安全带,则可以判断驾驶员未系安全带;若识别出驾驶员手部举起,且手里握有电话,则可以判断驾驶员分神驾驶等。其中,不同的危险驾驶行为所对应的驾驶员的面部特征和行为特征可以根据实际情况进行设定。
步骤C3,根据所述实际发生次数获得危险驾驶行为数据。
若第一时间段内采集有多张驾驶员图像,针对每一张驾驶员图像采用上述识别方法,识别出每一张驾驶员图像中驾驶员发生的危险驾驶行为。根据每一张驾驶员图像中驾驶员发生的危险驾驶行为,统计出多张驾驶员图像中每一种危险驾驶行为的实际发生次数,例如,1000张驾驶员图像中,有10张驾驶员图像中识别出疲劳驾驶,则可以确定当前时间段内驾驶员发生10次疲劳驾驶。将识别出的每一种危险驾驶行为,以及每一种危险驾驶行为的发生次数,作为危险驾驶行为数据,也就是说,危险驾驶行为数据包括识别出的当前时间段内驾驶员发生的各危险驾驶行为,以及各危险驾驶行为的发生次数。需要说明的是,对于未系安全带的危险驾驶行为,在采集的连续多张驾驶员图像中均未识别出安全带,则确定驾驶员未系安全带的实际发生次数为1次,而不是多次。
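上述按帧统计危险驾驶行为实际发生次数的逻辑(包括“未系安全带”这类持续状态在连续多帧中只计1次的处理),可以用如下Python示意代码表达。其中每帧的识别结果为假设输入,仅用于演示统计方式:

```python
from collections import Counter

# 假设每帧驾驶员图像的识别结果是该帧中出现的危险驾驶行为集合
frame_results = [
    {"疲劳驾驶"},
    {"疲劳驾驶", "未系安全带"},
    {"未系安全带"},
    set(),
    {"分神驾驶", "未系安全带"},
]

# "未系安全带"这类持续状态,在连续多帧中出现时只计1次
PERSISTENT = {"未系安全带"}

counts = Counter()
prev = set()
for behaviors in frame_results:
    for b in behaviors:
        if b in PERSISTENT:
            if b not in prev:        # 仅在该状态重新出现时计1次
                counts[b] += 1
        else:
            counts[b] += 1           # 瞬时行为按帧逐次计数
    prev = behaviors

print(dict(counts))
```

由于第4帧中安全带状态被中断,第5帧的“未系安全带”被视为一次新的发生,与文中“连续多张图像只计1次”的规则一致。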
方式二,计算设备从第二车载摄像头中,获得驾驶员在第一时间段内的危险驾驶行为数据。
在该方式中,图2所示的车载摄像头为第二车载摄像头,该第二车载摄像头为具有图像处理功能的摄像头,例如为AI摄像头。该第二车载摄像头可以包括摄像模组和处理器,摄像模组用于实时采集驾驶员图像,并将采集的驾驶员图像发送给处理器。如图4所示,该处理器基于计算机视觉技术,依次对采集到的驾驶员图像进行处理,获得左侧所示的危险驾驶行为数据。具体是,处理器根据预设的M种危险驾驶行为的类型,对驾驶员图像进行图像识别处理,获得驾驶员在第一时间段内所发生的危险驾驶行为,例如处理器识别驾驶员图像中驾驶员的面部特征和行为特征,若识别出驾驶员的眼睛为闭眼状态且嘴巴张开,可以判断驾驶员出现疲劳驾驶,若在驾驶员图像中未识别出安全带,则可以判断驾驶员未系安全带。这样处理器根据识别出的驾驶员发生的危险驾驶行为生成危险驾驶行为数据。需要说明的是,该方式中处理器识别驾驶员图像中驾驶员的危险驾驶行为的方法与上述计算设备识别驾驶员图像中驾驶员的危险驾驶行为的方法基本相同,可以参照上述具体描述,在此不再赘述。
在一些实施例中,第一时间段的时长为第一车载摄像头或第二车载摄像头采集驾驶员图像的时长。
需要说明的是,本申请实施例中,获取危险驾驶行为数据与获取上述对应关系之间没有先后顺序。也就是说,计算设备可以先获取危险驾驶行为数据,再获取对应关系,也可以先获取对应关系,再获取危险驾驶行为数据,还可以同时获取危险驾驶行为数据和对应关系。
在一些实施例中,危险驾驶行为的发生次数与危险场景的实际发生次数之间的对应关系可以预先生成,在进行驾驶风险预警时,计算设备直接获取生成好的对应关系即可,进而避免生成对应关系所消耗的时间资源和计算资源,从而实现驾驶风险的及时预警。例如,计算设备每天生成一次对应关系,这样在该对应关系生成后的一天内,可以在驾驶风险预警过程中直接使用该对应关系。
在一些实施例中,上述对应关系可以是计算设备在获取危险驾驶行为数据时生成的。例如,计算设备获取驾驶员在第一时间段内的危险驾驶行为数据,响应于获得的危险驾驶行为数据,计算设备开始生成该对应关系。
由于多数交通事故是由人为因素造成的,因此,驾驶员的危险驾驶行为的发生次数与危险场景的发生次数之间存在关联。基于此,本申请实施例通过获取危险驾驶行为的发生次数与危险场景的实际发生次数之间的对应关系,该对应关系可以真实反映出实际出现危险场景和造成驾驶员遭遇该危险场景前所作出危险驾驶行为间在次数上的关联。基于该对应关系准确预测驾驶员的危险驾驶行为可能导致的一个或多个危险场景,进而提高驾驶风险的预警准确性。
本申请实施例中,上述对应关系可以是基于本驾驶员的历史危险驾驶行为数据和历史危险场景数据生成的。可选的,上述对应关系还可以是基于其他驾驶员的历史危险驾驶行为数据和历史危险场景数据生成的。
S302、根据危险驾驶行为数据中所涉及危险驾驶行为的实际发生次数,以及对应关系,预测驾驶员在第一时间段内遭遇不同危险场景的目标次数。
举例说明,假设当前时间段为10s,这10s内采集了1000张驾驶员图像,对这1000张驾驶员图像进行解析,获得的危险驾驶行为数据为:10次疲劳驾驶和5次分神驾驶,这样可以获得疲劳驾驶的发生次数为10,分神驾驶的发生次数为5。计算设备根据危险驾驶行为数据中所涉及危险驾驶行为的发生次数,以及对应关系,预测驾驶员在第一时间段内遭遇不同危险场景的目标次数。
在一些实施例中,可以通过如下公式(1),确定出驾驶员在第一时间段t内发生每一种危险场景的次数:
$$\begin{bmatrix} a_{1,t} \\ a_{2,t} \\ \vdots \\ a_{N,t} \end{bmatrix} = C_{N\times M} \begin{bmatrix} b_{1,t} \\ b_{2,t} \\ \vdots \\ b_{M,t} \end{bmatrix} \quad (1)$$
其中,$C_{N\times M}$ 为危险驾驶行为的发生次数与危险场景的实际发生次数之间的对应关系,$C_{N\times M}$ 为N行M列的矩阵。$b_{i,t}$ 为驾驶员在时间段t内发生第i种危险驾驶行为的发生次数,i大于等于1且小于等于M。$a_{j,t}$ 为预测的驾驶员在时间段t内发生第j种危险场景的次数,j大于等于1且小于等于N。
结合上述示例,取t为当前时间段,驾驶员在当前时间段内发生10次疲劳驾驶和5次分神驾驶的危险驾驶行为,结合表1,可以确定 $b_{1,t}=10$,$b_{4,t}=5$,其他均为0。这样,将 $b_{1,t}=10$,$b_{4,t}=5$ 带入上述公式(1),可以预测出驾驶员在第一时间段内遭遇不同危险场景的目标次数分别为 $a_{1,t}$、$a_{2,t}$ 至 $a_{N,t}$。
举例说明,假设上述M和N均为4,即4种危险驾驶行为依次为表1中的疲劳驾驶、未系安全带、醉酒驾驶和分神驾驶,4种危险场景依次为表2中的车道偏移、行人碰撞、前碰撞、安全车距。假设从危险驾驶行为数据中获得驾驶员在第一时间段内发生10次疲劳驾驶和5次分神驾驶的危险驾驶行为,没有发生未系安全带和醉酒驾驶,则 $b_{1,t}=10$,$b_{4,t}=5$,$b_{2,t}$ 和 $b_{3,t}$ 均为0。这样,将 $b_{1,t}=10$,$b_{4,t}=5$ 带入上述公式(1),可以得到如下结果:

$$\begin{bmatrix} a_{1,t} \\ a_{2,t} \\ a_{3,t} \\ a_{4,t} \end{bmatrix} = C_{4\times 4} \begin{bmatrix} 10 \\ 0 \\ 0 \\ 5 \end{bmatrix}$$

此时,危险驾驶行为的发生次数与危险场景的实际发生次数之间的对应关系 $C_{4\times 4}$ 为4行4列的矩阵,即

$$C_{4\times 4} = \begin{bmatrix} c_{11} & c_{12} & c_{13} & c_{14} \\ c_{21} & c_{22} & c_{23} & c_{24} \\ c_{31} & c_{32} & c_{33} & c_{34} \\ c_{41} & c_{42} & c_{43} & c_{44} \end{bmatrix}$$

其中的各参数均已知。$a_{1,t}$ 为车道偏移的发生次数,$a_{2,t}$ 为行人碰撞的发生次数,$a_{3,t}$ 为前碰撞的发生次数,$a_{4,t}$ 为安全车距的发生次数。
这样,可以预测驾驶员在第一时间段内可能遭遇车道偏移的目标次数为 $a_{1,t}=c_{11}\times 10+c_{14}\times 5$,可能遭遇行人碰撞的目标次数为 $a_{2,t}=c_{21}\times 10+c_{24}\times 5$,可能遭遇前碰撞的目标次数为 $a_{3,t}=c_{31}\times 10+c_{34}\times 5$,可能遭遇安全车距的目标次数为 $a_{4,t}=c_{41}\times 10+c_{44}\times 5$。
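上述公式(1)的预测过程本质上是一次矩阵乘法,可以用如下Python示意代码表达。其中矩阵 C 的取值为假设数据,仅用于演示计算方式,并非真实求解结果:

```python
import numpy as np

# 对应关系矩阵 C(N×M,此处 N=M=4):数值为假设示例,
# 实际中由历史数据求解得到(见公式(2))
C = np.array([
    [0.5, 0.0, 0.8, 0.2],   # 车道偏移
    [0.1, 0.0, 0.6, 0.1],   # 行人碰撞
    [0.3, 0.0, 0.9, 0.4],   # 前碰撞
    [0.6, 0.1, 0.7, 0.5],   # 安全车距
])

# 第一时间段内各危险驾驶行为的实际发生次数 b_t:
# 10 次疲劳驾驶、5 次分神驾驶,其余为 0
b_t = np.array([10, 0, 0, 5])

# 公式(1):a_t = C @ b_t,得到各危险场景的目标次数
a_t = C @ b_t
print(a_t)  # 例如 a_{1,t} = 0.5*10 + 0.2*5 = 6.0
```

这里的矩阵乘法与文中逐项展开的 $a_{1,t}=c_{11}\times 10+c_{14}\times 5$ 等式完全对应。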
S303、根据目标次数生成预警信息。
本申请实施例通过预测出的驾驶员在第一时间段内发生每一种危险场景的目标次数,生成预警信息,使得驾驶员通过预警能够明确自己做出的危险驾驶行为可能会导致何种严重后果,这种基于结果导向的预警方式提示直观,能够有效提升人机交互的效率,起到规范驾驶行为的目的。
本申请实施例中,根据目标次数生成预警信息的方式包括但不限于如下几种:
方式一,预警信息包括驾驶员在第一时间段内遭遇不同危险场景的目标次数,例如图5所示,该预警信息包括预测驾驶员在当前时间段内可能分别遭遇N种危险场景的目标次数。
在一种实现方式中,预警信息中每一种危险场景对应的目标次数可以根据发生次数的大小进行排序。
在一种实现方式中,预警信息可以包括目标次数不为0的危险场景,而不包括目标次数为0的危险场景。
方式二,预警信息包括预测的驾驶员在第一时间段内的目标次数不为0的一个或多个危险场景,例如图6所示,预警信息包括安全车距和行人碰撞两个危险场景,其中预测在第一时间段内驾驶员发生安全车距危险场景的目标次数为10次,发生行人碰撞的目标次数为1次。
根据上述方法生成预警信息后,计算设备将生成的预警信息发送给预警系统,预警系统向驾驶员输出该预警信息,使得驾驶员根据该预警信息进行安全驾驶。
可选的,若计算设备具有显示功能或语音提示功能时,计算设备也可以直接将该预警信息输出给驾驶员。
本申请实施例提供的驾驶风险的预警方法,获取驾驶员在第一时间段内的危险驾驶行为数据,以及获取危险驾驶行为的发生次数与危险场景的实际发生次数之间的对应关系。由于该对应关系可以真实反映出实际出现危险场景和造成驾驶员遭遇该危险场景前所作出危险驾驶行为间在次数上的关联,从而通过该对应关系和危险驾驶行为数据中所涉及危险驾驶行为的发生次数,准确预测出驾驶员在第一时间段内可能遭遇不同危险场景的目标次数,并基于目标次数生成的预警信息对驾驶员进行风险预警,由于预警信息是根据驾驶员的危险驾驶行为可能造成的危险场景确定的,使得驾驶员通过预警能够明确自己做出的危险驾驶行为可能会导致何种严重后果,这种基于结果导向的预警方式提示直观,能够有效提升人机交互的效率,起到规范驾驶行为的目的。
在上述实施例的基础上,下面结合图7对上述S301中获取危险驾驶行为的发生次数与危险场景的实际发生次数之间的对应关系的具体过程进行详细介绍。
图7为本申请实施例提供的驾驶风险的预警方法的另一种流程示意图,如图7所示,上述S301中获取危险驾驶行为的发生次数与危险场景的实际发生次数之间的对应关系,可以包括:
S701、获取驾驶员在历史时间段内的历史危险驾驶行为数据和历史危险场景数据。
需要说明的是,历史危险驾驶行为数据包括驾驶员在历史时间段内真实发生的危险驾驶行为。历史危险场景数据包括驾驶员在历史时间段内真实遭遇的危险场景。
上述预设的历史时间段可以为距离当前时刻最近的历史时间段,例如预设历史时间段为第一时间段的前3个月。
在一些实施例中,若历史危险驾驶行为数据保存在图2所示的车载摄像头,这样计算设备可以从车载摄像头中获得驾驶员在预设历史时间段内的历史危险驾驶行为数据。
在一些实施例中,若危险驾驶行为数据保存在图2所示的计算设备的存储装置中,这样计算设备可以从自身的存储设备中获得驾驶员在预设历史时间段内的历史危险驾驶行为数据。
在一些实施例中,若历史危险场景数据保存在图2所示的预警系统中,计算设备可以从预警系统中获得驾驶员在预设历史时间段内的历史危险场景数据。
S702、根据驾驶员的历史危险驾驶行为数据,获得驾驶员在历史时间段内发生不同危险驾驶行为的行为历史次数。
具体的,计算设备可以根据驾驶员的历史危险驾驶行为数据,获得驾驶员在历史时间段内发生M种危险驾驶行为中每一种危险驾驶行为的行为历史次数,如表3所示:
表3
危险驾驶行为的类型 危险驾驶的名称 行为历史次数
第1种危险驾驶行为 疲劳驾驶 b1
第2种危险驾驶行为 未系安全带 b2
第3种危险驾驶行为 醉酒驾驶 b3
第4种危险驾驶行为 分神驾驶 b4
…… …… ……
如表3所示,右侧显示驾驶员在该历史时间段发生各危险驾驶行为的行为历史次数,例如,驾驶员在该历史时间段内发生第2种危险驾驶行为的行为历史次数为b2。
S703、根据驾驶员的历史危险场景数据,获取驾驶员在历史时间段内遭遇不同危险场景的场景历史次数。
同理,计算设备可以根据驾驶员的历史危险场景数据,获得驾驶员在历史时间段内实际发生N种危险场景中每一种危险场景的场景历史次数,如表4所示:
表4
危险场景的类型 危险场景的名称 场景历史次数
第1种危险场景 车道偏移 a1
第2种危险场景 行人碰撞 a2
第3种危险场景 前碰撞 a3
第4种危险场景 安全车距 a4
…… …… ……
如表4所示,右侧显示驾驶员在该历史时间段内实际发生各危险场景的场景历史次数,例如,驾驶员在该历史时间段内实际发生第2种危险场景的场景历史次数为a2。
S704、根据行为历史次数和场景历史次数获得所述对应关系。
在一些实施例中,若对应关系中危险驾驶行为的发生次数与危险场景的实际发生次数之间具有线性映射关系,其线性映射关系如公式(2)所示:
$$\begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_N \end{bmatrix} = C_{N\times M} \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_M \end{bmatrix} \quad (2)$$

公式(2)中,$a_1$、$a_2$ 至 $a_N$ 为驾驶员在历史时间段内发生表4中各不同类型的危险场景的场景历史次数,$b_1$、$b_2$ 至 $b_M$ 为驾驶员在历史时间段内发生表3中各不同类型的危险驾驶行为的行为历史次数。
将表3和表4中的数据带入公式(2)中可以得到矩阵 $C_{N\times M}$,将矩阵 $C_{N\times M}$ 作为危险驾驶行为的发生次数与危险场景的实际发生次数之间的对应关系。
在一些实施例中,若对应关系中危险场景的实际发生次数与危险驾驶行为的发生次数之间具有线性映射关系,其线性映射关系如公式(3)所示:
$$\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_M \end{bmatrix} = A_{M\times N} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_N \end{bmatrix} \quad (3)$$

公式(3)中,$a_1$、$a_2$ 至 $a_N$ 为驾驶员在历史时间段内发生表4中各不同类型的危险场景的场景历史次数,$b_1$、$b_2$ 至 $b_M$ 为驾驶员在历史时间段内发生表3中各不同类型的危险驾驶行为的行为历史次数。
将表3和表4中的数据带入公式(3)中可以得到矩阵 $A_{M\times N}$,将矩阵 $A_{M\times N}$ 作为危险驾驶行为的发生次数与危险场景的实际发生次数之间的对应关系,其中矩阵 $A_{M\times N}$ 可以理解为矩阵 $C_{N\times M}$ 的一种变形。
在实际求解矩阵 $A_{M\times N}$ 和 $C_{N\times M}$ 时,可能存在解不唯一的情况,此时选择任意一个解即可。在另一些情况中,在求解的过程中可能存在不收敛的情况,此时可以选择范数最小的解作为矩阵的解,对于范数,可以选择任何类型的范数,例如选择2范数。
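上述由行为历史次数和场景历史次数求解对应关系矩阵的过程,可以用如下Python示意代码表达。其中历史数据为假设示例;np.linalg.lstsq 在解不唯一时返回范数最小的最小二乘解,与文中“选择范数最小的解”的处理方式一致:

```python
import numpy as np

# 假设收集了 T=3 个历史时间段的数据(数值仅为假设示例):
# B 的每一列是某个历史时间段内 M 种危险驾驶行为的行为历史次数,
# A 的每一列是同一时间段内 N 种危险场景的场景历史次数
B = np.array([[10, 4, 7],
              [ 1, 0, 2],
              [ 0, 1, 0],
              [ 5, 3, 6]], dtype=float)      # M=4 行, T=3 列
A = np.array([[ 6, 2, 5],
              [ 1, 0, 1],
              [ 5, 2, 4],
              [ 8, 4, 7]], dtype=float)      # N=4 行, T=3 列

# 求解 C 使得 A ≈ C B,等价于 Bᵀ Cᵀ ≈ Aᵀ;
# lstsq 在解不唯一时给出范数最小的最小二乘解
C_T, *_ = np.linalg.lstsq(B.T, A.T, rcond=None)
C = C_T.T                                    # N×M 的对应关系矩阵

# 验证:用求得的 C 重建场景历史次数
print(np.round(C @ B, 2))
```

求得的 C 随后可直接代入公式(1)用于预测;求 $A_{M\times N}$ 时只需交换上式中 A 与 B 的角色。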
在上述实施例的基础上,下面结合图8对上述S303中根据预测的驾驶员在当前时间段内发生每一种危险场景的次数,生成预警信息的一种实现方式进行详细介绍。
图8为本申请实施例提供的驾驶风险的预警方法的又一种流程示意图,如图8所示,上述S303中根据目标次数生成预警信息,可以包括:
S801、根据所述目标次数,确定不同危险场景分别对应的被预警概率。
具体的,参照上述公式(1),根据驾驶员在第一时间段内所涉及危险驾驶行为的实际发生次数,预测出第一时间段内驾驶员遭遇不同危险场景的目标次数为:$a_{1,t}$、$a_{2,t}$、$a_{3,t}$、……、$a_{N,t}$。
接着,根据目标次数,确定不同危险场景分别对应的被预警概率。例如,根据公式(4)可以确定出每一种危险场景的被预警概率:
$$p_j = \frac{a_{j,t}}{a_{1,t}+a_{2,t}+\cdots+a_{N,t}} \quad (4)$$

其中,$p_j$ 为第j种危险场景的被预警概率,$a_{j,t}$ 为预测的驾驶员在时间段t内遭遇第j种危险场景的目标次数,j大于等于1且小于等于N。此时,t取第一时间段。
本步骤中可以根据上述公式(4)确定出第一时间段内每一种危险场景的被预警概率。
S802、根据被预警概率,确定至少一个待预警的危险场景,并根据所述至少一个待预警的危险场景生成所述预警信息。
其中,预警信息包括至少一个待预警的危险场景。
在一些实施例中,将被预警概率最大的一个或几个危险场景确定为待预警的危险场景。
在一些实施例中,为了使得被预警概率不为0的危险场景具有同等的被预警机会,则上述S802根据每个危险场景的被预警概率,确定至少一个待预警的危险场景,包括步骤A和步骤B:
步骤A,生成一个随机数。
可选的,该随机数服从0-1均匀分布。
步骤B,根据不同危险场景分别对应被预警概率和随机数,确定至少一个待预警的危险场景。
其中,根据每个危险场景的被预警概率和随机数,确定至少一个待预警的危险场景,包括如下两种情况:
情况1,在随机数大于前k种危险场景的被预警概率之和,且小于或等于前k+1种危险场景的被预警概率之和的情况下,则确定第k+1种危险场景为待预警的危险场景。
具体的,参照公式(5)所示:
$$\sum_{j=1}^{k} p_j < \theta \le \sum_{j=1}^{k+1} p_j \quad (5)$$

其中,$\theta$ 为随机数,$\sum_{j=1}^{k} p_j$ 为前k种危险场景的被预警概率之和,$\sum_{j=1}^{k+1} p_j$ 为前k+1种危险场景的被预警概率之和,k为大于等于1且小于等于N的正整数。
当随机数θ满足上述公式(5)时,将第k+1种危险场景确定为待预警的危险场景。例如 $\theta=0.5$,$p_1=0.1$,$p_2=0.3$,$p_3=0.2$,θ大于 $p_1$ 和 $p_2$ 的和(即0.4),且小于 $p_1$、$p_2$ 和 $p_3$ 的和(即0.6),可以确定k=2,则将第3种危险场景(即表2中的前碰撞)确定为待预警的危险场景。
情况2,在随机数大于第一数值,且小于或等于前k种危险场景的被预警概率之和的情况下,则确定第k种危险场景为待预警的危险场景。
具体的,若k大于1,参照公式(6)所示:
$$\sum_{j=1}^{k-1} p_j < \theta \le \sum_{j=1}^{k} p_j \quad (6)$$

当随机数θ满足上述公式(6)时,将第k种危险场景确定为待预警的危险场景。例如 $\theta=0.2$,$p_1=0.1$,$p_2=0.3$,$p_3=0.2$,θ小于 $p_1$ 和 $p_2$ 的和(即0.4),且大于 $p_1$,则将第2种危险场景(即表2中的行人碰撞)确定为待预警的危险场景。
若k=1,参照公式(7)所示:
$$0 < \theta \le p_1 \quad (7)$$

当随机数θ满足上述公式(7)时,将第1种危险场景确定为待预警的危险场景。例如 $\theta=0.1$,$p_1=0.2$,$p_2=0.3$,$p_3=0.2$,θ小于 $p_1$(即0.2)且大于0,则将第1种危险场景(即表2中的车道偏移)确定为待预警的危险场景。
在本申请的一些实施例中,若上述待预警的危险场景包括多个时,则每次确定一个待预警的危险场景后,将该确定的待预警危险场景的被预警概率从上述危险场景的被预警概率中剔除。对剩余的危险场景的被预警概率进行归一化,重新确定剩余的各危险场景的被预警概率。同时重新确定随机数,并使用重新确定的随机数和重新确定的各危险场景的被预警概率,执行上述公式(5)或公式(6)或公式(7),确定待预警的危险场景。
举例说明,假设需要确定的待预警的危险场景为2个,计算设备首先生成随机数 $\theta_1$,并根据上述公式(4)确定出N个危险场景中每个危险场景的被预警概率。接着,根据上述公式(5)或公式(6)或公式(7),确定待预警的危险场景1。接着,从上述N个危险场景的被预警概率中剔除待预警的危险场景1的被预警概率,对剩余的N-1个危险场景的被预警概率进行归一化处理,重新确定出N-1个危险场景中每一个危险场景的被预警概率。同时,生成随机数 $\theta_2$,并将随机数 $\theta_2$ 和N-1个危险场景的被预警概率带入公式(5)或公式(6)或公式(7),确定待预警的危险场景2。
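上述公式(4)至公式(7)的被预警概率计算与待预警危险场景的随机选取过程,可以用如下Python示意代码表达。其中函数名为说明性假设命名,仅用于演示选取逻辑:

```python
import numpy as np

def warning_probabilities(a_t):
    """公式(4):p_j = a_{j,t} / (a_{1,t}+...+a_{N,t})"""
    a_t = np.asarray(a_t, dtype=float)
    return a_t / a_t.sum()

def pick_scene(p, theta):
    """公式(5)~(7):当前 k 种场景概率之和 < theta <= 前 k+1 种之和时,
    选出第 k+1 种危险场景(返回 0 基下标)。"""
    cum = np.cumsum(p)
    for k, c in enumerate(cum):
        if theta <= c:
            return k
    return len(p) - 1   # 浮点误差兜底

def pick_scenes(a_t, num, rng=None):
    """选出 num 个待预警的危险场景:每选出一个后,
    将其被预警概率剔除并对剩余概率归一化,再重新生成随机数。"""
    if rng is None:
        rng = np.random.default_rng()
    p = warning_probabilities(a_t)
    remaining = list(range(len(p)))
    chosen = []
    for _ in range(num):
        theta = rng.random()
        idx = pick_scene(p, theta)
        chosen.append(remaining.pop(idx))   # 记录原始场景下标
        p = np.delete(p, idx)
        p = p / p.sum()                     # 剩余概率归一化
    return chosen

# 文中示例:θ=0.5,p=[0.1, 0.3, 0.2, 0.4],累积和为 [0.1, 0.4, 0.6, 1.0],
# 0.4 < 0.5 <= 0.6,选出第 3 种危险场景(0 基下标为 2)
print(pick_scene([0.1, 0.3, 0.2, 0.4], 0.5))  # → 2
```

这种按累积概率区间抽取的方式使被预警概率不为0的危险场景都有机会被预警,概率越大的场景被选中的机会越大。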
本申请实施例提供的驾驶风险的预警方法,通过根据预测的驾驶员在当前时间段内发生每一种危险场景的次数,确定每一种危险场景的被预警概率,根据每个危险场景的被预警概率,确定至少一个待预警的危险场景,并将确定出的待预警的危险场景携带在预警信息中,这样驾驶员可以通过该预警信息直接获得当前可能会发生的危险场景,便于驾驶员及时采取措施,以提高车辆驾驶的安全性。
上面图3至图8所示的实施例对本申请实施例提供的驾驶风险的预警方法进行了详细描述,下面结合实验结果来进一步说明本申请实施例的技术效果。具体的,在模拟器中进行测试,得到本申请与已有技术的漏预警率和虚预警率的对比,如表5所示:
表5
其中,虚预警可以理解为本不应预警却进行了预警,漏预警可以理解为本应预警但没有预警。如表5所示,相比于已有的预警方法,本申请实施例的预警方法的虚预警率和漏预警率均更低,显著提高了预警的准确性。
在上述实施例的基础上,本申请实施例还提供一种危险驾驶行为的预测方法。
图9为本申请实施例提供的危险驾驶行为的预测方法的一种流程示意图。如图9所示,本申请实施例的方法包括:
S901、获取车辆在第二时间段内发生的危险场景数据,以及获取危险驾驶行为的发生次数与危险场景的实际发生次数之间的对应关系。
本申请实施例提供的驾驶员的危险驾驶行为的预测方法,可以应用于交通事故的责任确认过程中。例如,根据第二时间段内车辆发生的危险场景,来预测在第二时间段内驾驶员可能存在的危险驾驶行为。
上述第二时间段可以为第一时间段或任意历史时间段,或者也可以为第一时间段之后的、且与第一时间段相邻的时间段。
本申请实施例中获取车辆在第二时间段内发生的危险场景数据的方式包括但不限于如下几种:
方式一,车辆上安装有用于拍摄车辆外部环境的摄像头,例如行程记录仪。从该摄像头中获得第二时间段内该车辆的行驶环境图像,对这些行驶环境图像进行分析,获得车辆在第二时间段内发生的危险场景数据。
方式二,可以从路侧设备,例如从安装在道路两侧的摄像头处获得车辆在第二时间段内的行驶环境图像,对这些行驶环境图像进行分析,获得车辆在第二时间段内发生的危险场景数据。
可选的,第一危险场景数据包括第二时间段内车辆所发生的危险场景,以及各危险场景的发生次数。
需要说明的是,本申请实施例中,获取危险场景数据与获取危险驾驶行为的发生次数与危险场景的实际发生次数之间的对应关系之间没有先后顺序。也就是说,计算设备可以先获取危险场景数据,再获取对应关系,也可以先获取对应关系,再获取危险场景数据,还可以同时获取危险场景数据和对应关系。
可选的,上述对应关系可以是事先生成的,也可以是计算设备在获取危险场景数据时触发生成的。
S902、根据危险场景数据中所涉及危险场景的发生次数,以及对应关系,预测在第二时间段内驾驶员发生危险驾驶行为的预测次数。
在一些实施例中,可以通过如下公式(8),预测出驾驶员在第二时间段内发生每一种危险驾驶行为的预测次数:

$$\begin{bmatrix} b_{1,t} \\ b_{2,t} \\ \vdots \\ b_{M,t} \end{bmatrix} = A_{M\times N} \begin{bmatrix} a_{1,t} \\ a_{2,t} \\ \vdots \\ a_{N,t} \end{bmatrix} \quad (8)$$

其中,$A_{M\times N}$ 为危险驾驶行为的发生次数与危险场景的实际发生次数之间的对应关系,$A_{M\times N}$ 为M行N列的矩阵。$b_{i,t}$ 为预测的驾驶员在第二时间段t内发生第i种危险驾驶行为的预测次数,i大于等于1且小于等于M。$a_{j,t}$ 为车辆在第二时间段t内发生第j种危险场景的发生次数,j大于等于1且小于等于N。
结合上述示例,取t表示第二时间段,车辆在第二时间段内发生的危险场景为:10次车道偏移和2次前碰撞,结合表2,可以确定 $a_{1,t}=10$,$a_{3,t}=2$,其他均为0。这样,将 $a_{1,t}=10$,$a_{3,t}=2$ 带入上述公式(8),可以预测出驾驶员在第二时间段内每一种危险驾驶行为的预测次数。
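上述公式(8)的反向预测过程同样是一次矩阵乘法,可以用如下Python示意代码表达。其中矩阵 A 的取值为假设数据,仅用于演示计算方式:

```python
import numpy as np

# A(M×N)为对应关系矩阵的"反向"形式(数值仅为假设示例)
A_mat = np.array([
    [0.8, 0.1, 0.4, 0.3],   # 疲劳驾驶
    [0.0, 0.0, 0.1, 0.0],   # 未系安全带
    [0.1, 0.2, 0.3, 0.1],   # 醉酒驾驶
    [0.4, 0.5, 0.6, 0.2],   # 分神驾驶
])

# 第二时间段内各危险场景的发生次数:10 次车道偏移、2 次前碰撞
a_t = np.array([10, 0, 2, 0])

# 公式(8):b_t = A @ a_t,得到各危险驾驶行为的预测次数
b_t = A_mat @ a_t
print(b_t)  # 例如疲劳驾驶的预测次数 = 0.8*10 + 0.4*2 = 8.8
```

预测次数最大的分量即对应S903中"预测的发生次数最多的危险驾驶行为"。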
S903、根据在第二时间段内驾驶员发生每一种危险驾驶行为的预测次数,生成预测信息。
该预测信息用于指示第二时间段内驾驶员可能存在的危险驾驶行为。
在一些实施例中,该预测信息包括上述S902预测的在第二时间段内驾驶员发生每一种危险驾驶行为的次数。
在一些实施例中,该预测信息包括预测的发生次数不为0的一个或几个危险驾驶行为,例如预测信息包括预测的发生次数最多的危险驾驶行为。
本申请实施例提供的危险驾驶行为的预测方法,通过获取驾驶员在第二时间段内的危险场景数据,以及获取危险驾驶行为的发生次数与危险场景的实际发生次数之间的对应关系,该对应关系可以真实反映出危险驾驶行为的发生次数和危险场景的发生次数之间的关联性。这样可以根据该对应关系和危险场景数据中每一种危险场景的发生次数,准确预测出驾驶员在第二时间段内可能存在的危险驾驶行为。
图10为本申请实施例提供的驾驶风险的预警装置的一种结构示意图。该预警装置可以是电子设备,也可以是电子设备的部件(例如,集成电路,芯片等等),该电子设备可以为图2所示的计算设备。如图10所示,该预警装置100可以包括:获取单元110、预测单元120和确定单元130。
获取单元110,用于获取驾驶员在第一时间段内的危险驾驶行为数据,以及获取危险驾驶行为的发生次数与危险场景的实际发生次数之间的对应关系;
预测单元120,用于根据危险驾驶行为数据中所涉及危险驾驶行为的实际发生次数,以及对应关系,预测驾驶员在第一时间段内遭遇不同危险场景的目标次数;
确定单元130,用于根据目标次数生成预警信息。
本申请实施例的驾驶风险的预警装置,可以用于执行上述各方法实施例的技术方案,其实现原理和技术效果类似,此处不再赘述。
在一种可能的实现方式中,上述获取单元110,具体用于获取驾驶员在历史时间段内的历史危险驾驶行为数据和历史危险场景数据;根据驾驶员的历史危险驾驶行为数据,确定驾驶员在历史时间段内发生不同危险驾驶行为的行为历史次数,以及根据驾驶员的历史危险场景数据,确定驾驶员在历史时间段内遭遇不同危险场景的场景历史次数;以及根据行为历史次数和场景历史次数,获得对应关系。
可选的,预警信息包括预测的驾驶员在当前时间段内发生每一种危险场景的目标次数。
在一种可能的实现方式中,上述确定单元130,具体用于根据目标次数,确定不同危险场景分别对应的被预警概率;根据被预警概率,确定至少一个待预警的危险场景,并根据所述至少一个待预警的危险场景生成所述预警信息。
在一种可能的实现方式中,上述确定单元130,具体用于生成一个随机数;根据不同危险场景分别对应被预警概率和随机数,确定至少一个待预警的危险场景。
在一种可能的实现方式中,上述确定单元130,具体用于在随机数大于前k种危险场景的被预警概率之和,且小于或等于前k+1种危险场景的被预警概率之和的情况下,则确定第k+1种危险场景为待预警的危险场景;在随机数大于第一数值,且小于或等于前k种危险场景的被预警概率之和的情况下,则确定第k种危险场景为待预警的危险场景;其中,当k大于1时,第一数值为前k-1种危险场景的被预警概率之和,当k等于1时,第一数值为0。
在一种可能的实现方式中,上述获取单元110,具体用于获得第一车载摄像头在第一时间段内所采集的驾驶员图像;根据危险驾驶行为的类型,识别驾驶员图像中不同类型的危险驾驶行为的实际发生次数;根据实际发生次数获得危险驾驶行为数据。
在一种可能的实现方式中,上述获取单元110,具体用于从第二车载摄像头获得驾驶员在第一时间段内的危险驾驶行为数据,其中第二车载摄像头用于采集驾驶员图像,并根据危险驾驶行为的类型对采集的驾驶员图像进行危险驾驶行为识别,生成危险驾驶行为数据。
可选的,当前时间段的时长为第一车载摄像头采集驾驶员图像的时长。
本申请实施例的驾驶风险的预警装置,可以用于执行上述各方法实施例的技术方案,其实现原理和技术效果类似,此处不再赘述。
图11为本申请实施例提供的危险驾驶行为的预测装置的一种结构示意图。该预测装置可以是电子设备,也可以是电子设备的部件(例如,集成电路,芯片等等),该电子设备可以为图2所示的计算设备。如图11所示,该预测装置300可以包括:获取单元310、预测单元320和确定单元330。
获取单元310,用于获取车辆在第二时间段内发生的危险场景数据,以及获取危险驾驶行为的发生次数与危险场景的实际发生次数之间的对应关系。
预测单元320,用于根据危险场景数据中所涉及危险场景的发生次数,以及对应关系,预测在第二时间段内驾驶员发生的危险驾驶行为的预测次数。
确定单元330,用于根据预测次数,生成预测信息。
本申请实施例的危险驾驶行为的预测装置,可以用于执行上述危险驾驶行为的预测方法实施例的技术方案,其实现原理和技术效果类似,此处不再赘述。
图12为本申请实施例涉及的计算设备的框图,该设备可以是图2所示的计算设备,用于执行上述实施例所述的驾驶风险的预警方法,具体参见上述方法实施例中的说明。
图12所示的计算设备200包括存储器201、处理器202、通信接口203。存储器201、处理器202、通信接口203之间彼此通信连接。例如,存储器201、处理器202、通信接口203之间可以采用网络连接的方式,实现通信连接。或者,上述计算设备200还可以包括总线204。存储器201、处理器202、通信接口203通过总线204实现彼此之间的通信连接。图12以存储器201、处理器202、通信接口203通过总线204实现彼此之间通信连接的计算设备200为例进行示出。
存储器201可以是只读存储器(Read Only Memory,ROM),静态存储设备,动态存储设备或者随机存取存储器(Random Access Memory,RAM)。存储器201可以存储程序,当存储器201中存储的程序被处理器202执行时,处理器202和通信接口203用于执行上述预警方法。
处理器202可以采用通用的中央处理器(Central Processing Unit,CPU),微处理器,应用专用集成电路(Application Specific Integrated Circuit,ASIC),图形处理器(graphics processing unit,GPU)或者一个或多个集成电路。
处理器202还可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,本申请的预警方法可以通过处理器202中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器202还可以是通用处理器、数字信号处理器(digital signal processing,DSP)、专用集成电路(ASIC)、现成可编程门阵列(field programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器201,处理器202读取存储器201中的信息, 结合其硬件完成本申请实施例的预警方法。
通信接口203使用例如但不限于收发器一类的收发模块,来实现计算设备200与其他设备或通信网络之间的通信。例如,可以通过通信接口203获取数据集。
当上述计算设备200包括总线204时,总线204可包括在计算设备200各个部件(例如,存储器201、处理器202、通信接口203)之间传送信息的通路。
另外,本申请实施例还提供了一种存储介质,所述存储介质用于存储计算机程序,所述计算机程序用于执行上述实施例提供的方法。
本申请实施例还提供了一种包括指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述实施例提供的方法。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘Solid State Disk(SSD))等。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。另外,各个方法实施例之间、各个装置实施例之间也可以互相参考,在不同实施例中的相同或对应内容可以互相引用,不做赘述。

Claims (13)

  1. 一种驾驶风险的预警方法,所述方法由计算设备执行,所述方法包括:
    获取驾驶员在第一时间段内的危险驾驶行为数据,以及获取危险驾驶行为的发生次数与危险场景的实际发生次数之间的对应关系;
    根据所述危险驾驶行为数据中所涉及危险驾驶行为的实际发生次数,以及所述对应关系,预测所述驾驶员在所述第一时间段内遭遇不同危险场景的目标次数;
    根据所述目标次数生成预警信息。
  2. 根据权利要求1所述的方法,所述获取所述危险驾驶行为的发生次数与危险场景的实际发生次数之间的对应关系,包括:
    获取所述驾驶员在历史时间段内的历史危险驾驶行为数据和历史危险场景数据;
    根据所述驾驶员的所述历史危险驾驶行为数据,确定所述驾驶员在所述历史时间段内发生不同危险驾驶行为的行为历史次数;
    根据所述驾驶员的所述历史危险场景数据,确定所述驾驶员在所述历史时间段内遭遇不同危险场景的场景历史次数;
    根据所述行为历史次数和场景历史次数,获得所述对应关系。
  3. 根据权利要求1或2所述的方法,所述预警信息包括所述目标次数。
  4. 根据权利要求1或2所述的方法,所述根据所述目标次数生成预警信息,包括:
    根据所述目标次数,确定不同危险场景分别对应的被预警概率;
    根据所述被预警概率,确定至少一个待预警的危险场景,并根据所述至少一个待预警的危险场景生成所述预警信息。
  5. 根据权利要求4所述的方法,所述根据所述被预警概率,确定至少一个待预警的危险场景,包括:
    生成一个随机数;
    根据不同危险场景分别对应的被预警概率和所述随机数,确定所述至少一个待预警的危险场景。
  6. 根据权利要求5所述的方法,所述根据不同危险场景分别对应的被预警概率和所述随机数,确定所述至少一个待预警的危险场景,包括:
    在所述随机数大于前k种危险场景的被预警概率之和,且小于或等于前k+1种危险场景的被预警概率之和的情况下,则确定第k+1种危险场景为所述待预警的危险场景;
    在所述随机数大于第一数值,且小于或等于前k种危险场景的被预警概率之和的情况下,则确定第k种危险场景为所述待预警的危险场景;
    其中,当k大于1时,所述第一数值为前k-1种危险场景的被预警概率之和,当k等于1时,所述第一数值为0。
  7. 根据权利要求1或2所述的方法,所述获取驾驶员在第一时间段内的危险驾驶行为数据,包括:
    获得第一车载摄像头在所述第一时间段内所采集的驾驶员图像;
    根据所述危险驾驶行为的类型,识别所述驾驶员图像中不同类型的危险驾驶行为的实际发生次数;
    根据所述实际发生次数获得所述危险驾驶行为数据。
  8. 根据权利要求1或2所述的方法,所述获取驾驶员在第一时间段内的危险驾驶行为数据,包括:
    从第二车载摄像头获得所述驾驶员在所述第一时间段内的所述危险驾驶行为数据,所述第二车载摄像头用于采集驾驶员图像,并根据所述危险驾驶行为的类型对采集的所述驾驶员图像进行危险驾驶行为识别,生成所述危险驾驶行为数据。
  9. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    获取所述驾驶员所在车辆在第二时间段内发生的危险场景数据;
    根据所述危险场景数据中所涉及危险场景的发生次数以及所述对应关系,预测在所述第二时间段内所述驾驶员发生的危险驾驶行为的预测次数。
  10. 一种驾驶风险的预警装置,包括:
    获取单元,用于获取驾驶员在第一时间段内的第一危险驾驶行为数据,以及获取危险驾驶行为的发生次数与危险场景的实际发生次数之间的对应关系;
    预测单元,用于根据所述第一危险驾驶行为数据中所涉及危险驾驶行为的实际发生次数,以及所述对应关系,预测所述驾驶员在所述第一时间段内遭遇不同危险场景的目标次数;
    确定单元,用于根据所述目标次数生成预警信息。
  11. 一种计算设备,包括:存储器,处理器;
    所述存储器,用于存储计算机程序;
    所述处理器,用于执行所述计算机程序以实现如上述权利要求1至9任一项所述的驾驶风险的预警方法。
  12. 一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机程序,所述计算机程序用于执行如权利要求1至9任一项所述的驾驶风险的预警方法。
  13. 一种包括指令的计算机程序产品,当其在计算机上运行时,使得所述计算机执行权利要求1至9任一项所述的驾驶风险的预警方法。
PCT/CN2021/114418 2020-10-15 2021-08-25 驾驶风险的预警方法、装置、计算设备及存储介质 WO2022078077A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/968,341 US20230048112A1 (en) 2020-10-15 2022-10-18 Warning method and apparatus for driving risk, computing device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011105843.XA CN112193252A (zh) 2020-10-15 2020-10-15 驾驶风险的预警方法、装置、计算设备及存储介质
CN202011105843.X 2020-10-15

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/968,341 Continuation-In-Part US20230048112A1 (en) 2020-10-15 2022-10-18 Warning method and apparatus for driving risk, computing device and storage medium

Publications (1)

Publication Number Publication Date
WO2022078077A1 true WO2022078077A1 (zh) 2022-04-21

Family

ID=74009143

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/114418 WO2022078077A1 (zh) 2020-10-15 2021-08-25 驾驶风险的预警方法、装置、计算设备及存储介质

Country Status (3)

Country Link
US (1) US20230048112A1 (zh)
CN (1) CN112193252A (zh)
WO (1) WO2022078077A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782926A (zh) * 2022-06-17 2022-07-22 清华大学 驾驶场景识别方法、装置、设备、存储介质和程序产品

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10745009B2 (en) * 2016-12-21 2020-08-18 Samsung Electronics Co., Ltd. Electronic apparatus for determining a dangerous situation of a vehicle and method of operating the same
CN112193252A (zh) * 2020-10-15 2021-01-08 腾讯科技(深圳)有限公司 驾驶风险的预警方法、装置、计算设备及存储介质
CN113506012A (zh) * 2021-07-22 2021-10-15 中冶南方城市建设工程技术有限公司 一种基于手机车联网数据的驾驶行为风险指数判定方法
CN117084683B (zh) * 2023-08-07 2024-04-19 中国人民解放军32302部队 一种技术人员心理状态监测评估方法及系统
CN117292504A (zh) * 2023-11-11 2023-12-26 克伦斯(天津)轨道交通技术有限公司 交通安全监控方法、装置、设备及介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102015004748A1 (de) * 2015-04-11 2016-10-13 Audi Ag Verfahren zur Vorhersage einer gefährlichen Fahrsituation
US20180170375A1 (en) * 2016-12-21 2018-06-21 Samsung Electronics Co., Ltd. Electronic apparatus and method of operating the same
CN111274881A (zh) * 2020-01-10 2020-06-12 中国平安财产保险股份有限公司 驾驶安全的监控方法、装置、计算机设备及存储介质
CN111310562A (zh) * 2020-01-10 2020-06-19 中国平安财产保险股份有限公司 基于人工智能的车辆驾驶风险管控方法及其相关设备
CN111489588A (zh) * 2020-03-30 2020-08-04 腾讯科技(深圳)有限公司 车辆驾驶风险预警方法及装置、设备、存储介质
CN112193252A (zh) * 2020-10-15 2021-01-08 腾讯科技(深圳)有限公司 驾驶风险的预警方法、装置、计算设备及存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102015004748A1 (de) * 2015-04-11 2016-10-13 Audi Ag Verfahren zur Vorhersage einer gefährlichen Fahrsituation
US20180170375A1 (en) * 2016-12-21 2018-06-21 Samsung Electronics Co., Ltd. Electronic apparatus and method of operating the same
CN111274881A (zh) * 2020-01-10 2020-06-12 中国平安财产保险股份有限公司 驾驶安全的监控方法、装置、计算机设备及存储介质
CN111310562A (zh) * 2020-01-10 2020-06-19 中国平安财产保险股份有限公司 基于人工智能的车辆驾驶风险管控方法及其相关设备
CN111489588A (zh) * 2020-03-30 2020-08-04 腾讯科技(深圳)有限公司 车辆驾驶风险预警方法及装置、设备、存储介质
CN112193252A (zh) * 2020-10-15 2021-01-08 腾讯科技(深圳)有限公司 驾驶风险的预警方法、装置、计算设备及存储介质

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782926A (zh) * 2022-06-17 2022-07-22 清华大学 驾驶场景识别方法、装置、设备、存储介质和程序产品
CN114782926B (zh) * 2022-06-17 2022-08-26 清华大学 驾驶场景识别方法、装置、设备、存储介质和程序产品

Also Published As

Publication number Publication date
US20230048112A1 (en) 2023-02-16
CN112193252A (zh) 2021-01-08

Similar Documents

Publication Publication Date Title
WO2022078077A1 (zh) 驾驶风险的预警方法、装置、计算设备及存储介质
US11783590B2 (en) Method, apparatus, device and medium for classifying driving scenario data
US11475770B2 (en) Electronic device, warning message providing method therefor, and non-transitory computer-readable recording medium
Yu et al. Examination and prediction of drivers’ reaction when provided with V2I communication-based intersection maneuver strategies
US20190050711A1 (en) Method, storage medium and electronic device for detecting vehicle crashes
WO2020042984A1 (zh) 一种车辆行为检测方法及装置
US20190087668A1 (en) Electronic device and control method thereof
DE102017221617A1 (de) System und Verfahren zur Verkehrsstauabschätzung
Ali et al. Cooperate or not? Exploring drivers’ interactions and response times to a lane-changing request in a connected environment
US20180032891A1 (en) Adaptive architecture for crash prediction in vehicle collision avoidance systems
US20220035733A1 (en) Method and apparatus for checking automatic driving algorithm, related device and storage medium
CN110910673B (zh) 一种提醒行人碰撞车辆的方法及系统、摄像头、移动终端
EP4130766A1 (en) Battery detection method and apparatus
WO2022134711A1 (zh) 驾驶风格识别方法、辅助驾驶方法及装置
US20220019713A1 (en) Estimation of probability of collision with increasing severity level for autonomous vehicles
Nieto et al. On creating vision‐based advanced driver assistance systems
Lyu et al. Safety margins–a novel approach from risk homeostasis theory for evaluating the impact of advanced driver assistance systems on driving behavior in near-crash events
Kolekar et al. Behavior prediction of traffic actors for intelligent vehicle using artificial intelligence techniques: A review
Yan et al. Automatic identification method for driving risk status based on multi-sensor data
CN116597516A (zh) 训练方法、分类方法、检测方法、装置、系统及设备
US20230091574A1 (en) Driving assistance processing method and apparatus, computer-readable medium, and electronic device
CN108960160B (zh) 基于非结构化预测模型来预测结构化状态量的方法和装置
Virgilio G et al. Vision-based blind spot warning system by deep neural networks
CN115576990A (zh) 视觉真值数据与感知数据的评测方法、装置、设备及介质
CN115631626A (zh) 一种车辆数据监控分析方法、装置、设备及介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21879123

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 31/08/2023)