CN117035142A - Vehicle collision detection method, device, computer device and storage medium - Google Patents

Vehicle collision detection method, device, computer device and storage medium

Info

Publication number
CN117035142A
CN117035142A CN202211041546.2A
Authority
CN
China
Prior art keywords
vehicle
hidden layer
sample
collision
layer vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211041546.2A
Other languages
Chinese (zh)
Inventor
钟子宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202211041546.2A priority Critical patent/CN117035142A/en
Publication of CN117035142A publication Critical patent/CN117035142A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/08Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/095Predicting travel path or likelihood of collision
    • B60W30/0953Predicting travel path or likelihood of collision the prediction being responsive to vehicle dynamic parameters
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/08Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/095Predicting travel path or likelihood of collision
    • B60W30/0956Predicting travel path or likelihood of collision the prediction being responsive to traffic or environmental parameters
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/143Alarm means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/146Display means

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Theoretical Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Transportation (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Analysis (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Algebra (AREA)
  • Computational Mathematics (AREA)
  • Computing Systems (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present application relates to a vehicle collision detection method, apparatus, computer device, storage medium, and computer program product. The method comprises the following steps: acquiring first driving characteristic data and second driving characteristic data; outputting first hidden layer vectors in different rounds through the first hidden layer, where the first hidden layer vector output in the current round is generated according to the second driving characteristic data and the second hidden layer vector output by the second hidden layer in the previous round; outputting second hidden layer vectors in different rounds through the second hidden layer, where the second hidden layer vector output in the current round is generated according to the first driving characteristic data and the first hidden layer vector output in the current round; and determining the probability of collision according to the first hidden layer vector and the second hidden layer vector output in the last round. By adopting the method, the accuracy of the generated collision probability can be improved. The embodiments of the application can be applied to the traffic field and the artificial intelligence field.

Description

Vehicle collision detection method, device, computer device and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and apparatus for detecting a collision of a vehicle, a computer device, and a storage medium.
Background
While vehicles are being driven, collisions can occur; for example, traffic accidents in which two vehicles collide cannot always be avoided, causing personal injury and property loss. Safety early warning for running vehicles is therefore needed to reduce the probability of vehicle collisions.
In the prior art, vehicle collision detection is mainly performed according to the distance between two vehicles; for example, when the distance between two vehicles is less than or equal to a set threshold, a collision early-warning signal is sent. However, because detection relies only on the distance between the two vehicles, the collision detection result may be inaccurate.
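The distance-only baseline criticized here can be stated in a few lines; the threshold value below is an illustrative assumption, not taken from the application:

```python
def distance_warning(distance_m: float, threshold_m: float = 50.0) -> bool:
    """Prior-art rule: send a collision early-warning signal when the
    distance between the two vehicles is at or below a set threshold."""
    return distance_m <= threshold_m
```

As the background notes, such a rule ignores how the two vehicles influence each other (speeds, driver behavior), which is what the method below models.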
Disclosure of Invention
In view of the foregoing, it is desirable to provide a vehicle collision detection method, apparatus, computer device, computer-readable storage medium, and computer program product that are capable of improving the accuracy of vehicle collision detection.
In a first aspect, the present application provides a vehicle collision detection method, the method comprising:
acquiring first driving characteristic data of a first vehicle, second driving characteristic data of a second vehicle driving behind the first vehicle, a first hidden layer and a second hidden layer;
outputting corresponding first hidden layer vectors in different rounds through the first hidden layer; the first hidden layer vector output in the current round is generated according to the second driving characteristic data and the second hidden layer vector output by the second hidden layer in the previous round;
outputting corresponding second hidden layer vectors in different rounds through the second hidden layer; the second hidden layer vector output in the current round is generated according to the first driving characteristic data and the first hidden layer vector output by the first hidden layer in the current round;
and determining the probability of collision between the first vehicle and the second vehicle according to the first hidden layer vector output by the first hidden layer in the last round and the second hidden layer vector output by the second hidden layer in the last round.
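As a rough sketch of the four steps above, the two hidden layers can be alternated round by round. All names, dimensions, and the random weights are illustrative assumptions; in the application the weights are learned model parameters:

```python
import numpy as np

def coupled_forward(x_first, x_second, n_rounds, dim=8, seed=0):
    """Alternate the two hidden layers for n_rounds and return the
    last-round hidden layer vectors, as in the four method steps above.

    x_first / x_second: driving feature vectors of the first (front)
    and second (rear) vehicle.
    """
    rng = np.random.default_rng(seed)
    d_in = x_first.shape[0]
    # Illustrative random weights standing in for learned parameters.
    W1, W2 = rng.normal(size=(dim, d_in)), rng.normal(size=(dim, d_in))
    U1, U2 = rng.normal(size=(dim, dim)), rng.normal(size=(dim, dim))
    b1, b2 = np.zeros(dim), np.zeros(dim)

    h_second = np.zeros(dim)  # initial second hidden layer vector
    for _ in range(n_rounds):
        # First hidden layer: second vehicle's features plus the
        # *previous* round's second hidden layer vector.
        h_first = np.tanh(W2 @ x_second + U2 @ h_second + b1)
        # Second hidden layer: first vehicle's features plus the
        # *current* round's first hidden layer vector.
        h_second = np.tanh(W1 @ x_first + U1 @ h_first + b2)
    return h_first, h_second
```

The last-round pair `(h_first, h_second)` is what the output layer consumes to produce the collision probability.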
In one embodiment, the vehicle collision detection method is performed by a vehicle collision detection model, which is determined by the following formulas:

H_t = tanh(W_{t-1}·X_{t-1} + U_{t-1}·H_{t-1} + B_t)

H_{t-1} = tanh(W_t·X_t + U_t·H_t + B_{t-1})

Y = σ(V_t·H_t + V_{t-1}·H_{t-1} + A_t + A_{t-1})

wherein X_t represents the first driving characteristic data and X_{t-1} represents the second driving characteristic data; H_t represents the first hidden layer vector and H_{t-1} represents the second hidden layer vector (on the right-hand side of the first formula, H_{t-1} is the second hidden layer vector output in the previous round); Y represents the collision state between the first vehicle and the second vehicle (1 indicates that the first vehicle collides with the second vehicle, 0 indicates that it does not); W_t represents the first data weight of the first driving characteristic data and W_{t-1} represents the second data weight of the second driving characteristic data; V_t represents the first vector weight of the first hidden layer vector and V_{t-1} represents the second vector weight of the second hidden layer vector; U_t represents the third vector weight of the first hidden layer vector and U_{t-1} represents the fourth vector weight of the second hidden layer vector; B_t, B_{t-1}, A_t, and A_{t-1} represent parameter vectors; tanh represents the activation function of the hidden layers and σ represents the activation function of the output layer.
In one embodiment, the vehicle collision detection method is performed by a vehicle collision detection model comprising a first hidden layer and a second hidden layer; the vehicle collision detection model is obtained through a model training step comprising:
acquiring first sample driving characteristic data of a first sample vehicle, second sample driving characteristic data of a second sample vehicle and collision sample labels corresponding to the first sample vehicle and the second sample vehicle, and acquiring a vehicle collision detection model to be trained;
from the second round onward, determining the first prediction hidden layer vector output in the current round according to the second sample driving characteristic data and the second prediction hidden layer vector output in the previous round;
determining the second prediction hidden layer vector output in the current round according to the first sample driving characteristic data and the first prediction hidden layer vector output in the current round;
determining the prediction probability of collision between the first sample vehicle and the second sample vehicle according to the first prediction hidden layer vector and the second prediction hidden layer vector output in the current round;
adjusting model parameters of the vehicle collision detection model according to the difference between the prediction probability and the collision sample label;
and taking the second prediction hidden layer vector output in the current round as the previous round's second prediction hidden layer vector for the next round, entering the next round, and returning to the step of determining the first prediction hidden layer vector output in the current round according to the second sample driving characteristic data and the second prediction hidden layer vector output in the previous round, until the training stop condition is reached, so as to obtain a trained vehicle collision detection model; the vehicle collision detection model is used to determine the collision probability between at least two vehicles.
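The parameter-adjustment step can be illustrated on the output layer alone: holding the two last-round prediction hidden layer vectors fixed, the output weights are moved against the gradient of the binary cross-entropy between the predicted probability and the collision sample label. This is a hedged sketch with illustrative names, not the application's actual training procedure (which adjusts all model parameters):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_output_layer(h_first, h_second, label, V, lr=0.1, steps=200):
    """Fit the output-layer weights V = (V_t, V_{t-1}, bias) by gradient
    descent on binary cross-entropy against the collision label (0/1),
    treating the two hidden layer vectors as fixed inputs."""
    x = np.concatenate([h_first, h_second, [1.0]])  # superposed input + bias
    for _ in range(steps):
        p = sigmoid(V @ x)       # predicted collision probability
        grad = (p - label) * x   # d(BCE)/dV for a logistic output layer
        V = V - lr * grad        # adjust the model parameters
    return V, sigmoid(V @ x)
```

With a single labeled sample the predicted probability is driven toward the label, mirroring "adjusting model parameters according to the difference between the prediction probability and the collision sample label."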
In one embodiment, the first sample travel feature data comprises a plurality of first sample travel feature sub-data; the second sample travel feature data includes a plurality of second sample travel feature sub-data; the plurality of first sample driving characteristic sub-data and the plurality of second sample driving characteristic sub-data each include characteristic data belonging to a vehicle type and characteristic data belonging to a driving object type; the feature data belonging to the vehicle type refers to feature data related to the running of the vehicle; the feature data belonging to the driving object type refers to feature data related to the behavior of the driving object.
In one embodiment, the characteristic data belonging to the vehicle type includes at least one of the driving speed of the vehicle, a driving area image of the vehicle, the distance between at least two vehicles, position information of the vehicle, size information of the vehicle, weight information of the vehicle, the actual number of persons carried by the vehicle, the highest driving speed of the vehicle, the average driving speed of the vehicle, and traffic data in the driving area;
the characteristic data belonging to the driving object type includes at least one of the number of times the driving object adjusts the seat, the number of times the brake is pressed, the number of times the accelerator is pressed, the steering wheel swing amplitude, the number of steering wheel swings, and the number of upshift and downshift operations during driving.
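The two feature groups listed above might be assembled into a fixed-layout numeric vector as follows; every field name and value here is an illustrative assumption:

```python
import numpy as np

# Vehicle-type features (related to the running of the vehicle).
vehicle_features = {
    "speed_kmh": 72.0,
    "distance_to_other_m": 35.0,
    "weight_t": 1.6,
    "max_speed_kmh": 180.0,
    "avg_speed_kmh": 64.0,
}

# Driving-object-type features (related to the driver's behavior).
driver_features = {
    "seat_adjust_count": 1,
    "brake_count": 12,
    "throttle_count": 20,
    "steering_swing_count": 3,
    "shift_count": 6,
}

def to_feature_vector(vehicle: dict, driver: dict) -> np.ndarray:
    """Concatenate both feature groups in a fixed key order so that
    every sample shares the same layout."""
    keys_v, keys_d = sorted(vehicle), sorted(driver)
    return np.array([float(vehicle[k]) for k in keys_v] +
                    [float(driver[k]) for k in keys_d])

x = to_feature_vector(vehicle_features, driver_features)
```

A vector of this shape would serve as X_t or X_{t-1} in the model formulas.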
In a second aspect, the present application also provides a vehicle collision detection apparatus, the apparatus comprising:
and the input layer module is used for acquiring first driving characteristic data of a first vehicle and second driving characteristic data of a second vehicle driving behind the first vehicle.
The hidden layer module is used for outputting corresponding first hidden layer vectors in different rounds through the first hidden layer, where the first hidden layer vector output in the current round is generated according to the second driving characteristic data and the second hidden layer vector output by the second hidden layer in the previous round; and for outputting corresponding second hidden layer vectors in different rounds through the second hidden layer, where the second hidden layer vector output in the current round is generated according to the first driving characteristic data and the first hidden layer vector output by the first hidden layer in the current round.
The output layer module is used for determining the probability of collision between the first vehicle and the second vehicle according to the first hidden layer vector output by the first hidden layer in the last round and the second hidden layer vector output by the second hidden layer in the last round.
In one embodiment, the hidden layer module is further configured to: from the second round onward, determine, through the first hidden layer, the first hidden layer vector output in the current round according to the second driving characteristic data and the second hidden layer vector output by the second hidden layer in the previous round; determine, through the second hidden layer, the second hidden layer vector output in the current round according to the first driving characteristic data and the first hidden layer vector output in the current round; and take the second hidden layer vector output in the current round as the previous round's second hidden layer vector for the next round, enter the next round, and return to the step of determining the first hidden layer vector output in the current round according to the second driving characteristic data and the second hidden layer vector output in the previous round, until the execution stop condition is reached, so as to obtain the first hidden layer vectors and second hidden layer vectors output in different rounds.
In one embodiment, the hidden layer module is further configured to obtain an initial second hidden layer vector and determine, through the first hidden layer, the first hidden layer vector output in the first round according to the second driving characteristic data and the initial second hidden layer vector; and to determine, through the second hidden layer, the second hidden layer vector output in the first round according to the first driving characteristic data and the first hidden layer vector output in the first round.
In one embodiment, the output layer module is further configured to determine a first vector weight corresponding to the first hidden layer vector output in the last round, and determine a second vector weight corresponding to the second hidden layer vector output in the last round; fusing the first vector weight and the first hidden layer vector output in the last round to obtain a fused first hidden layer vector; fusing the second vector weight and the second hidden layer vector output in the last round to obtain a fused second hidden layer vector; and superposing the fused first hidden layer vector and the fused second hidden layer vector to obtain a superposed hidden layer vector, and determining the collision probability of the first vehicle and the second vehicle through the superposed hidden layer vector.
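The fuse-and-superpose computation described for the output layer module can be sketched as follows; the weight vectors are illustrative, and the output activation σ is assumed to be the logistic sigmoid:

```python
import numpy as np

def collision_probability(h_first, h_second, v_first, v_second):
    """Weight each last-round hidden layer vector (fuse), add the two
    weighted vectors (superpose), and squash to a probability."""
    fused_first = v_first * h_first          # fused first hidden layer vector
    fused_second = v_second * h_second       # fused second hidden layer vector
    superposed = fused_first + fused_second  # superposed hidden layer vector
    return 1.0 / (1.0 + np.exp(-superposed.sum()))  # sigma output layer
```

With zero hidden vectors the superposed sum is zero and the probability is exactly 0.5, i.e. no evidence either way.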
In one embodiment, the vehicle collision detection device is further configured to trigger generation of collision avoidance early warning information when it is determined that the probability of collision between the first vehicle and the second vehicle is greater than or equal to a preset probability threshold; the anti-collision early warning information includes at least one of voice reminding information, an anti-collision reminding picture, a suggested driving speed, and a suggested driving lane.
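The warning trigger of this embodiment reduces to a threshold check on the predicted probability; the threshold value and message fields below are illustrative assumptions:

```python
def collision_warning(probability: float, threshold: float = 0.8):
    """Return collision-avoidance early warning information when the
    predicted probability reaches the preset threshold, else None."""
    if probability >= threshold:
        return {
            "voice_alert": "Collision risk ahead, please slow down.",
            "suggested_speed_kmh": 40,
            "suggested_lane": "right",
        }
    return None
```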
In one embodiment, the vehicle collision detection device is further configured to trigger to display a collision avoidance alert screen; the anti-collision reminding picture comprises a first anti-collision reminding picture displayed by the first vehicle and a second anti-collision reminding picture displayed by the second vehicle; the first anti-collision reminding picture comprises a first virtual vehicle model corresponding to the first vehicle, a second virtual vehicle model corresponding to the second vehicle and a first perception area surrounding the first virtual vehicle model; wherein when the probability of the first vehicle colliding with the second vehicle is greater than or equal to a preset probability threshold, a first perception sub-region of the first perception region, which faces the direction of the second virtual vehicle model, is highlighted; the second anti-collision reminding picture comprises a second virtual vehicle model corresponding to the second vehicle, a first virtual vehicle model corresponding to the first vehicle and a second perception area surrounding the second virtual vehicle model; and when the probability of collision of the first vehicle with the second vehicle is greater than or equal to a preset probability threshold, highlighting a second perception subarea in the second perception area towards the direction of the first virtual vehicle model.
In one embodiment, the vehicle collision detection apparatus is deployed with a collision detection model, which is determined by the following formulas:

H_t = tanh(W_{t-1}·X_{t-1} + U_{t-1}·H_{t-1} + B_t)

H_{t-1} = tanh(W_t·X_t + U_t·H_t + B_{t-1})

Y = σ(V_t·H_t + V_{t-1}·H_{t-1} + A_t + A_{t-1})

wherein X_t represents the first driving characteristic data and X_{t-1} represents the second driving characteristic data; H_t represents the first hidden layer vector and H_{t-1} represents the second hidden layer vector (on the right-hand side of the first formula, H_{t-1} is the second hidden layer vector output in the previous round); Y represents the collision state between the first vehicle and the second vehicle (1 indicates that the first vehicle collides with the second vehicle, 0 indicates that it does not); W_t represents the first data weight of the first driving characteristic data and W_{t-1} represents the second data weight of the second driving characteristic data; V_t represents the first vector weight of the first hidden layer vector and V_{t-1} represents the second vector weight of the second hidden layer vector; U_t represents the third vector weight of the first hidden layer vector and U_{t-1} represents the fourth vector weight of the second hidden layer vector; B_t, B_{t-1}, A_t, and A_{t-1} represent parameter vectors; tanh represents the activation function of the hidden layers and σ represents the activation function of the output layer.
In one embodiment, the vehicle collision detection device further includes a training module. The training module is configured to acquire first sample driving characteristic data of a first sample vehicle, second sample driving characteristic data of a second sample vehicle, and collision sample labels corresponding to the first sample vehicle and the second sample vehicle, and to acquire a vehicle collision detection model to be trained; from the second round onward, determine the first prediction hidden layer vector output in the current round according to the second sample driving characteristic data and the second prediction hidden layer vector output in the previous round; determine the second prediction hidden layer vector output in the current round according to the first sample driving characteristic data and the first prediction hidden layer vector output in the current round; determine the prediction probability of collision between the first sample vehicle and the second sample vehicle according to the first prediction hidden layer vector and the second prediction hidden layer vector output in the current round; adjust model parameters of the vehicle collision detection model according to the difference between the prediction probability and the collision sample label; and take the second prediction hidden layer vector output in the current round as the previous round's second prediction hidden layer vector for the next round, enter the next round, and return to the step of determining the first prediction hidden layer vector output in the current round according to the second sample driving characteristic data and the second prediction hidden layer vector output in the previous round, until the training stop condition is reached, so as to obtain a trained vehicle collision detection model; the vehicle collision detection model is used to determine the collision probability between at least two vehicles.
In one embodiment, the training module is further configured to obtain first sample information corresponding to the first sample vehicle, the first sample information including the first sample driving characteristic data and first vehicle collision information of the first sample vehicle; obtain second sample information corresponding to the second sample vehicle, the second sample information including the second sample driving characteristic data and second vehicle collision information of the second sample vehicle; and take the first vehicle collision information or the second vehicle collision information as the collision sample labels corresponding to the first sample vehicle and the second sample vehicle.
In one embodiment, the training module is further configured to obtain feature data collected for the first sample vehicle, obtain first sample driving feature data, and obtain a collision state of the first sample vehicle; determining a vehicle that is traveling behind the first sample vehicle but that is not colliding with the first sample vehicle when the collision state characterizes that the first sample vehicle is not colliding, and regarding the determined vehicle as a second sample vehicle; generating first vehicle collision information according to the device identification of the first sample vehicle, the device identification of the second sample vehicle and the collision state of the first sample vehicle; and obtaining first sample information of the first sample vehicle according to the first sample driving characteristic data and the first vehicle collision information.
In one embodiment, the training module is further configured to determine a vehicle that collides with the first sample vehicle when the collision state indicates that the first sample vehicle collides, and to take the determined vehicle as the second sample vehicle.
In one embodiment, the training module is further configured to obtain first sample travel feature data and second sample travel feature data, the first sample travel feature data including a plurality of first sample travel feature sub-data; the second sample travel feature data includes a plurality of second sample travel feature sub-data; the plurality of first sample driving characteristic sub-data and the plurality of second sample driving characteristic sub-data each include characteristic data belonging to a vehicle type and characteristic data belonging to a driving object type; the feature data belonging to the vehicle type refers to feature data related to the running of the vehicle; the feature data belonging to the driving object type refers to feature data related to the behavior of the driving object.
In one embodiment, the training module is further configured to obtain a plurality of first sample driving characteristic sub-data and a plurality of second sample driving characteristic sub-data; the plurality of first sample driving characteristic sub-data and the plurality of second sample driving characteristic sub-data each include characteristic data belonging to a vehicle type and characteristic data belonging to a driving object type; the characteristic data belonging to the vehicle type refers to characteristic data related to the running of the vehicle; the characteristic data belonging to the driving object type refers to characteristic data related to the behavior of the driving object; the characteristic data belonging to the vehicle type includes at least one of the driving speed of the vehicle, a driving area image of the vehicle, the distance between at least two vehicles, position information of the vehicle, size information of the vehicle, weight information of the vehicle, the actual number of persons carried by the vehicle, the highest driving speed of the vehicle, the average driving speed of the vehicle, and traffic data in the driving area; the characteristic data belonging to the driving object type includes at least one of the number of times the driving object adjusts the seat, the number of times the brake is pressed, the number of times the accelerator is pressed, the steering wheel swing amplitude, the number of steering wheel swings, and the number of upshift and downshift operations during driving.
In one embodiment, the vehicle collision detection device is further configured to acquire a vehicle collision detection model to be trained, where the vehicle collision detection model to be trained is determined by the following formulas:

H_t = tanh(W_{t-1}·X_{t-1} + U_{t-1}·H_{t-1} + B_t)

H_{t-1} = tanh(W_t·X_t + U_t·H_t + B_{t-1})

Y = σ(V_t·H_t + V_{t-1}·H_{t-1} + A_t + A_{t-1})

wherein X_t represents the first sample driving characteristic data and X_{t-1} represents the second sample driving characteristic data; H_t represents the first prediction hidden layer vector and H_{t-1} represents the second prediction hidden layer vector (on the right-hand side of the first formula, H_{t-1} is the second prediction hidden layer vector output in the previous round); Y represents the collision state between the first sample vehicle and the second sample vehicle (1 indicates that the first sample vehicle collides with the second sample vehicle, 0 indicates that it does not); W_t represents the first data weight of the first sample driving characteristic data and W_{t-1} represents the second data weight of the second sample driving characteristic data; V_t represents the first vector weight of the first prediction hidden layer vector and V_{t-1} represents the second vector weight of the second prediction hidden layer vector; U_t represents the third vector weight of the first prediction hidden layer vector and U_{t-1} represents the fourth vector weight of the second prediction hidden layer vector; B_t, B_{t-1}, A_t, and A_{t-1} represent parameter vectors; tanh represents the hidden layer activation function and σ represents the output layer activation function.
In a third aspect, the present application also provides a computer device, where the computer device includes a memory and a processor, where the memory stores a computer program, and where the processor implements steps in any one of the vehicle collision detection methods provided by the embodiments of the present application when the computer program is executed.
In a fourth aspect, the present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of any of the vehicle collision detection methods provided by the embodiments of the present application.
In a fifth aspect, the present application also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of any of the vehicle collision detection methods provided by the embodiments of the present application.
According to the vehicle collision detection method, apparatus, computer device, storage medium, and computer program product described above, by acquiring the second driving characteristic data of the second vehicle, the first hidden layer can determine the first hidden layer vector of the current round according to the second driving characteristic data and the second hidden layer vector output by the second hidden layer in the previous round, so that the hidden layer of the first vehicle is influenced by the driving characteristic data and hidden layer of the second vehicle. Likewise, by acquiring the first driving characteristic data of the first vehicle, the second hidden layer can determine the second hidden layer vector of the current round according to the first driving characteristic data and the first hidden layer vector output by the first hidden layer in the current round, so that the hidden layer of the second vehicle is influenced by the driving characteristic data and hidden layer of the first vehicle. By obtaining the first hidden layer vectors and second hidden layer vectors output in different rounds, the probability of collision between the first vehicle and the second vehicle can be determined jointly from the first hidden layer vector and the second hidden layer vector output in the last round. Because the collision probability is determined from the mutual influence between the first vehicle and the second vehicle, the method improves the accuracy of the determined collision probability compared with the traditional method of issuing collision warnings based only on the distance between the two vehicles.
Drawings
FIG. 1 is a diagram of an application environment of a vehicle collision detection method in one embodiment;
FIG. 2 is a flow chart of a method of detecting a vehicle collision in one embodiment;
FIG. 3 is a schematic illustration of an interaction relationship between two vehicles in one embodiment;
FIG. 4 is a schematic illustration of an interaction relationship between two vehicles in another embodiment;
FIG. 5 is a schematic diagram of an anti-collision reminding screen according to an embodiment;
FIG. 6 is a flow chart of a model training step in one embodiment;
FIG. 7 is a schematic diagram of generation of a prediction hidden layer vector in one embodiment;
FIG. 8 is a schematic flow chart of construction and training of a vehicle collision detection model in one embodiment;
FIG. 9 is a flow chart of a method of detecting a vehicle collision in one embodiment;
FIG. 10 is a block diagram showing the construction of a vehicle collision detecting apparatus in one embodiment;
FIG. 11 is an internal block diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The vehicle collision detection method provided by the embodiments of the present application can be applied to an application environment as shown in fig. 1, in which the vehicle 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process; it may be integrated on the server 104, or located on the cloud or on other servers. Either the vehicle 102 or the server 104 may perform the vehicle collision detection method provided in the embodiments of the present application on its own, or the two may cooperate to perform it. Taking the cooperative case as an example, the server 104 may determine the vehicle 102 and the vehicle traveling behind the vehicle 102, taking the vehicle 102 as a first vehicle and the vehicle traveling behind it as a second vehicle. The server 104 obtains first driving characteristic data of the first vehicle and second driving characteristic data of the second vehicle, determines the probability of collision between the first vehicle and the second vehicle from these data through a pre-trained vehicle collision detection model, and returns the probability to the vehicle 102 for display.
It should be noted that the method may be applied to a vehicle or an electronic device. Electronic devices include, but are not limited to, mobile phones, computers, intelligent voice interaction devices, smart home appliances, vehicle-mounted terminals, aircraft, and the like. Vehicles include, but are not limited to, automobiles, watercraft, drones, and the like. The server 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
The embodiments of the present application can be used in the traffic field; for example, collision detection can be performed on vehicles on a road so that anti-collision warnings can be issued based on the detection result, thereby reducing the probability of vehicle collisions and forming part of an intelligent traffic system. An intelligent traffic system (Intelligent Traffic System, ITS), also called an intelligent transportation system (Intelligent Transportation System), is a comprehensive transportation system that effectively applies advanced science and technology (information technology, computer technology, data communication technology, sensor technology, electronic control technology, automatic control theory, operations research, artificial intelligence, etc.) to transportation, service control and vehicle manufacturing, strengthening the connection among vehicles, roads and users, thereby guaranteeing safety, improving efficiency, improving the environment and saving energy.
Embodiments of the present application may also be used in the field of artificial intelligence; for example, the probability of collision between a first vehicle and a second vehicle may be determined via a pre-trained vehicle collision detection model. Artificial intelligence (AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use the knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines so that the machines have the functions of perception, reasoning and decision-making. Artificial intelligence technology is a comprehensive discipline that covers a wide range of fields, at both the hardware level and the software level. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technology mainly includes computer vision technology, speech processing technology, natural language processing technology, and machine learning/deep learning.
It should be noted that the terms "first," "second," and the like as used herein do not denote any order, quantity, or importance, but are merely used to distinguish one element from another. The singular forms "a," "an," or "the" and similar terms do not denote a limitation of quantity, but rather denote the presence of at least one, unless the context clearly dictates otherwise. Terms such as "plural" or "multiple" mentioned in the embodiments of the present application each mean "at least two."
In one embodiment, as shown in fig. 2, a vehicle collision detection method is provided. The method is described as applied to a computer device, which is the vehicle or the server of fig. 1, and the vehicle collision detection method includes the following steps:
step 202, acquiring first driving characteristic data of a first vehicle, second driving characteristic data of a second vehicle driving behind the first vehicle, a first hidden layer and a second hidden layer.
The vehicle may specifically be an unmanned vehicle (also referred to as an autonomous vehicle) or a vehicle driven by a driver. Feature data refers to data collected during the running of the vehicle; the feature data may belong to the vehicle type or to the driving object type. Feature data of the vehicle type is feature data related to the running of the vehicle itself, for example the driving speed of the vehicle, the position information of the vehicle, the distance between at least two vehicles, and the like. Feature data of the driving object type is feature data related to the behavior of the object driving the vehicle, for example the number of times the driving object (for example, a driver) steps on the brake during driving, the number of times the accelerator is stepped on, the number of steering wheel swings, and the like. As will be readily appreciated, when the feature data are collected during the running of the first vehicle, the plurality of collected feature data are integrated to obtain the first driving characteristic data; when the feature data are collected during the running of the second vehicle, the plurality of collected feature data are integrated to obtain the second driving characteristic data.
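The two kinds of feature data described above can be pictured as a simple record per vehicle. The following sketch is purely illustrative; the field names and the flattening order are assumptions, not a schema fixed by the application.

```python
from dataclasses import dataclass

@dataclass
class DrivingFeatures:
    # Vehicle-type features (related to the running of the vehicle itself)
    speed_kmh: float          # driving speed of the vehicle
    position: tuple           # (x, y) position information
    gap_to_other_m: float     # distance to the other vehicle
    # Driving-object-type features (related to the driver's behavior)
    brake_presses: int        # times the brake was stepped on
    throttle_presses: int     # times the accelerator was stepped on
    steering_swings: int      # number of steering wheel swings

    def as_vector(self) -> list:
        """Integrate the individual feature data into one numeric vector."""
        return [self.speed_kmh, *self.position, self.gap_to_other_m,
                float(self.brake_presses), float(self.throttle_presses),
                float(self.steering_swings)]

# First driving characteristic data for a hypothetical front vehicle
front = DrivingFeatures(60.0, (0.0, 0.0), 12.5, 2, 5, 3)
print(len(front.as_vector()))  # 7 numeric features
```

The same container would hold the second driving characteristic data of the rear vehicle; only the collected values differ.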
Specifically, when vehicle collision detection is desired, the computer device may determine the first vehicle and the second vehicle, wherein the second vehicle may be a vehicle traveling behind the first vehicle. Once the first vehicle and the second vehicle are determined, the computer device may obtain the first driving characteristic data of the first vehicle and the second driving characteristic data of the second vehicle, and obtain a trained vehicle collision detection model. The vehicle collision detection model includes a first hidden layer and a second hidden layer, where a hidden layer refers to any layer of a machine learning model other than the input layer and the output layer.
In one embodiment, the computer device may determine the vehicle currently driven by the driving object and take it as the current vehicle. An image acquisition device arranged on the current vehicle can acquire an image of the driving area of the current vehicle. The computer device may analyze the driving area image to determine the vehicle in front of the current vehicle and the vehicle behind the current vehicle. When the probability of collision between the current vehicle and the vehicle in front of it needs to be determined, the current vehicle may be regarded as the second vehicle and the vehicle in front of it as the first vehicle. When the probability of collision between the current vehicle and the vehicle behind it needs to be determined, the current vehicle may be regarded as the first vehicle and the vehicle behind it as the second vehicle.
In one embodiment, when the driving area image of the current vehicle is acquired, the computer device may examine the driving area image to obtain the license plate number of the vehicle in front of the current vehicle and/or the license plate number of the vehicle behind it, and then obtain the corresponding driving feature data by querying with the acquired license plate number.
In one embodiment, a driving equipment terminal, for example an in-vehicle terminal, may be disposed in a vehicle. Through the driving equipment terminal, feature data can be collected while the vehicle is running, driving characteristic data can be generated from the collected feature data, and the generated driving characteristic data can be sent to the server. The server stores the received driving characteristic data together with the corresponding vehicle identifier, so that when vehicle collision detection is required, the first driving characteristic data and the second driving characteristic data can be quickly obtained. The vehicle identifier may be any information that uniquely identifies a vehicle; for example, when the vehicle is an automobile, the vehicle identifier may be its license plate number.
In one embodiment, when the vehicle is an unmanned vehicle, the driving characteristic data collected for the vehicle may also include feature data of the driving object type, such as the number of times the brake is stepped on, the number of times the accelerator is stepped on, and the number of steering wheel swings. In this case the driving object is not a real human driver but the driving equipment terminal that controls the automatic operation of the vehicle, or the unmanned driving system deployed in that terminal. Accordingly, feature data of the driving object type, such as the number of brake applications, accelerator applications and steering wheel swings, are not data produced by a real driving object but data generated when the driving equipment terminal controls the vehicle; for example, during automatic driving, the driving equipment terminal may brake the unmanned vehicle according to the current road environment, and the feature data of the driving object type are determined from the number of times the unmanned vehicle brakes.
Step 204, outputting corresponding first hidden layer vectors at different rounds through the first hidden layer respectively; the first hidden layer vector output by the current round is generated according to the second driving characteristic data and the second hidden layer vector output by the second hidden layer at the previous round.
Step 206, outputting corresponding second hidden layer vectors at different rounds through the second hidden layer respectively; the second hidden layer vector output by the current round is generated according to the first driving characteristic data and the first hidden layer vector output by the first hidden layer at the current round.
Specifically, the running condition of the first vehicle and the running condition of the second vehicle both affect the collision probability between the first vehicle and the second vehicle. For example, referring to fig. 3, when the running speed of the second vehicle (e.g., the rear vehicle) exceeds that of the first vehicle (e.g., the front vehicle) and the two vehicles are close to each other, or, referring to fig. 4, when the second vehicle is close behind the first vehicle and the first vehicle suddenly brakes, the first vehicle may collide with the second vehicle. There is therefore an interaction relationship between the first vehicle and the second vehicle during traveling, and the collision probability between them can be determined through this interaction relationship. Because of this mutual influence, when the vehicle collision detection model is constructed, the first hidden layer (for the first vehicle) is made to depend on the second driving characteristic data and the second hidden layer of the second vehicle, and the second hidden layer (for the second vehicle) is made to depend on the first driving characteristic data and the first hidden layer of the first vehicle, so that the interaction between the first driving characteristic data and the second driving characteristic data can be taken into account simultaneously. FIG. 3 illustrates the interaction relationship between two vehicles in one embodiment, and fig. 4 illustrates the interaction relationship between two vehicles in another embodiment.
When the first hidden layer vector and the second hidden layer vector need to be determined, the second driving characteristic data and the second hidden layer vector output by the second hidden layer in the previous round can be input to the first hidden layer, which outputs the first hidden layer vector of the current round. Further, the computer device inputs the first hidden layer vector output by the first hidden layer in the current round, together with the first driving characteristic data, to the second hidden layer, which outputs the second hidden layer vector of the current round. This cycle repeats until the execution stop condition is reached, yielding the first hidden layer vectors output by the first hidden layer over multiple rounds and the second hidden layer vectors output by the second hidden layer over multiple rounds.
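The alternating round-by-round update described above can be sketched as follows. This is a minimal illustration with random weights and a fixed round count; in the application itself the parameters would come from the trained vehicle collision detection model, and the update functions are assumed to be simple tanh layers for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # hidden layer width (illustrative)

# Illustrative weights for the two coupled hidden layers.
W1_x, W1_h = rng.normal(size=(DIM, DIM)), rng.normal(size=(DIM, DIM))
W2_x, W2_h = rng.normal(size=(DIM, DIM)), rng.normal(size=(DIM, DIM))

def first_hidden(x2, h2_prev):
    # First hidden layer: second vehicle's driving features + previous-round h2.
    return np.tanh(W1_x @ x2 + W1_h @ h2_prev)

def second_hidden(x1, h1_curr):
    # Second hidden layer: first vehicle's driving features + current-round h1.
    return np.tanh(W2_x @ x1 + W2_h @ h1_curr)

x1 = rng.normal(size=DIM)   # first (front) vehicle's driving feature data
x2 = rng.normal(size=DIM)   # second (rear) vehicle's driving feature data
h2 = np.zeros(DIM)          # initial second hidden layer vector

h1_rounds, h2_rounds = [], []
for _ in range(5):               # fixed round count as the stop condition
    h1 = first_hidden(x2, h2)    # current-round h1 from x2 and previous-round h2
    h2 = second_hidden(x1, h1)   # current-round h2 from x1 and current-round h1
    h1_rounds.append(h1)
    h2_rounds.append(h2)
```

The vectors output in the last round, `h1_rounds[-1]` and `h2_rounds[-1]`, are what the output layer would consume.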
In one embodiment, the execution stop condition may be freely set according to the need, for example, the stop condition may be set to "when the preset round is reached, it is determined that the execution stop condition is reached", or may be set to "when the first hidden layer vector output by the first hidden layer tends to be stable, and the second hidden layer vector output by the second hidden layer tends to be stable, it is determined that the execution stop condition is reached".
In one embodiment, the second driving characteristic data may be divided into a plurality of second driving characteristic sub-data; for example, the driving speed of the second vehicle, the number of times the driving object steps on the brake of the second vehicle, and the like may each serve as one second driving characteristic sub-datum. The computer device may sort the plurality of second driving characteristic sub-data to obtain a second driving characteristic sub-data sequence. The computer device inputs the first sub-datum in the sequence, together with the second hidden layer vector output by the second hidden layer in the previous round, to the first hidden layer, which outputs the first feature vector at the current moment. Further, the computer device inputs the first feature vector output at the current moment, together with the next sub-datum in the sequence, to the first hidden layer, which outputs the first feature vector at the next moment. This cycle repeats until the last sub-datum in the sequence has been processed; the first feature vector output at the last moment is then taken as the first hidden layer vector output by the current round. Similarly, the computer device may determine the second hidden layer vector output by the current round in the same manner.
Step 208, determining a probability of collision between the first vehicle and the second vehicle according to the first hidden layer vector output by the first hidden layer at the last round and the second hidden layer vector output by the second hidden layer at the last round.
Specifically, when the first hidden layer vector output by the first hidden layer in the last round and the second hidden layer vector output by the second hidden layer in the last round are obtained, that is, when the final first hidden layer vector and the final second hidden layer vector are obtained, the computer device can determine the collision probability of the first vehicle and the second vehicle from both vectors simultaneously, since the collision state of the two vehicles is acted upon by both vehicles jointly. More specifically, the computer device inputs the final first hidden layer vector and the final second hidden layer vector to the output layer of the vehicle collision detection model, and the output layer jointly determines the probability of collision between the first vehicle and the second vehicle based on the two input vectors.
In the vehicle collision detection method, by acquiring the second driving characteristic data of the second vehicle, the first hidden layer can determine the first hidden layer vector of the current round according to the second driving characteristic data and the second hidden layer vector output by the second hidden layer in the previous round, so that the hidden layer of the first vehicle is influenced by the driving characteristic data and the hidden layer of the second vehicle. By acquiring the first driving characteristic data of the first vehicle, the second hidden layer can determine the second hidden layer vector of the current round according to the first driving characteristic data and the first hidden layer vector output by the first hidden layer in the current round, so that the hidden layer of the second vehicle is influenced by the driving characteristic data and the hidden layer of the first vehicle. By obtaining the first hidden layer vectors and second hidden layer vectors output in different rounds, the probability of collision between the first vehicle and the second vehicle can be jointly determined from the first hidden layer vector and the second hidden layer vector output in the last round. Because the collision probability is determined according to the interaction between the first vehicle and the second vehicle, this method improves the accuracy of the determined collision probability compared with the traditional approach of issuing a collision warning based solely on the distance between the two vehicles.
In one embodiment, outputting the corresponding first hidden layer vectors in different rounds through the first hidden layer, and outputting the corresponding second hidden layer vectors in different rounds through the second hidden layer, includes: starting from the second round, determining, through the first hidden layer, the first hidden layer vector output in the current round according to the second driving characteristic data and the second hidden layer vector output by the second hidden layer in the previous round; determining, through the second hidden layer, the second hidden layer vector output in the current round according to the first driving characteristic data and the first hidden layer vector output in the current round; taking the second hidden layer vector output in the current round as the previous-round second hidden layer vector of the next round, entering the next round, and returning to the step of determining the first hidden layer vector output in the current round according to the second driving characteristic data and the second hidden layer vector output by the second hidden layer in the previous round, continuing execution until the execution stop condition is reached, thereby obtaining the first hidden layer vectors and second hidden layer vectors output in different rounds.
Specifically, starting from the second round, the computer device inputs the second driving characteristic data and the second hidden layer vector output in the previous round to the first hidden layer, which outputs the first hidden layer vector of the current round. Further, the computer device inputs the first hidden layer vector of the current round and the first driving characteristic data to the second hidden layer, which outputs the second hidden layer vector of the current round. The computer device takes the second hidden layer vector output in the current round as the previous-round second hidden layer vector of the next round, enters the next round, and returns to the step of inputting the second driving characteristic data and the previous-round second hidden layer vector to the first hidden layer; this continues until the execution stop condition is reached, yielding the first hidden layer vectors and second hidden layer vectors output in different rounds.
In one embodiment, after the first hidden layer vector and the second hidden layer vector output in the current round are obtained, the collision probability between the first vehicle and the second vehicle for the current round may be determined based on them. When the collision probabilities determined in two or more successive rounds differ only slightly, the iteration can be stopped and the final collision probability obtained; for example, the iteration may be stopped when the difference between the collision probabilities determined in successive rounds is less than or equal to a preset difference threshold. In this way, the accuracy of the determined probability of collision between the first vehicle and the second vehicle can be further improved.
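The convergence-based stop condition above can be sketched as a simple check on successive per-round probabilities. The threshold value and function name here are hypothetical; the application only requires that iteration stop once successive probabilities are close enough.

```python
def run_until_stable(prob_per_round, eps=1e-3):
    """Return the index of the first round whose collision probability
    differs from the previous round's by no more than eps (a hypothetical
    preset difference threshold); falls back to the last round."""
    for t in range(1, len(prob_per_round)):
        if abs(prob_per_round[t] - prob_per_round[t - 1]) <= eps:
            return t
    return len(prob_per_round) - 1

# Example: probabilities computed after each round of hidden-layer updates.
probs = [0.40, 0.55, 0.61, 0.612, 0.6121]
print(run_until_stable(probs))  # stops at index 4, where the change is <= 1e-3
```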
In one embodiment, the first driving feature data and the second driving feature data may be feature data obtained by performing feature extraction processing on the raw data through an input layer of the vehicle collision detection model. For example, the computer device may input first raw data corresponding to the first driving feature data to an input layer of the vehicle collision detection model, and the input layer performs feature extraction processing on the first raw data to obtain the first driving feature data. And the computer equipment can input second original data corresponding to the second running characteristic data to an input layer of the vehicle collision detection model, and the input layer performs characteristic extraction processing on the second original data to obtain the second running characteristic data.
In the embodiment of the present application, because the hidden layer vector output in one round influences the hidden layer vector output in the next round, obtaining the first hidden layer vector and the second hidden layer vector over multiple rounds allows the generated vectors to become increasingly accurate.
In one embodiment, the method further includes a step of determining the first hidden layer vector and the second hidden layer vector of the first round, which includes: acquiring an initial second hidden layer vector, and determining, through the first hidden layer, the first hidden layer vector output in the first round according to the second driving characteristic data and the initial second hidden layer vector; and determining, through the second hidden layer, the second hidden layer vector output in the first round according to the first driving characteristic data and the first hidden layer vector output in the first round.
Specifically, the computer device may randomly generate an initial second hidden layer vector to obtain an initial second hidden layer vector, and input the initial second hidden layer vector and the second driving feature data into the first hidden layer to obtain a first hidden layer vector output by the first round. Further, the computer device inputs the first driving characteristic data and the first hidden layer vector output by the first round into the second hidden layer to obtain the second hidden layer vector output by the first round.
In this embodiment, by generating the first hidden layer vector and the second hidden layer vector of the first round, the first hidden layer vector and the second hidden layer vector of the subsequent round may be determined based on the first hidden layer vector and the second hidden layer vector of the first round, and thus, the probability of collision between the first vehicle and the second vehicle may be determined based on the first hidden layer vector and the second hidden layer vector of the subsequent round.
In one embodiment, determining the probability of the first vehicle colliding with the second vehicle based on the first hidden layer vector output by the first hidden layer at the last round and the second hidden layer vector output by the second hidden layer at the last round includes: determining a first vector weight corresponding to a first hidden layer vector output in the last round, and determining a second vector weight corresponding to a second hidden layer vector output in the last round; fusing the first vector weight and the first hidden layer vector output in the last round to obtain a fused first hidden layer vector; fusing the second vector weight with the second hidden layer vector output in the last round to obtain a fused second hidden layer vector; and superposing the fused first hidden layer vector and the fused second hidden layer vector to obtain a superposed hidden layer vector, and determining the collision probability of the first vehicle and the second vehicle through the superposed hidden layer vector.
Specifically, the output layer in the vehicle collision detection model may jointly determine a collision probability between the first vehicle and the second vehicle from the final first hidden layer vector and the second hidden layer vector. The computer device may input the first hidden layer vector output by the last round and the second hidden layer vector output by the last round to an output layer in the vehicle collision detection model, determine, by the output layer, a first vector weight corresponding to the first hidden layer vector output by the last round, determine a second vector weight corresponding to the second hidden layer vector output by the last round, perform weighted summation processing on the first hidden layer vector output by the last round and the second hidden layer vector output by the last round based on the first vector weight and the second vector weight, obtain a superimposed hidden layer vector, and determine a probability of collision between the first vehicle and the second vehicle according to the superimposed hidden layer vector.
In one embodiment, the weighted summation processing of the first hidden layer vector output by the last round and the second hidden layer vector output by the last round based on the first vector weight and the second vector weight includes: fusing the first vector weight with the first hidden layer vector output in the last round to obtain a fused first hidden layer vector, for example, multiplying the first vector weight with the first hidden layer vector output in the last round by computer equipment to obtain a fused first hidden layer vector; fusing the second vector weight with the second hidden layer vector output in the last round to obtain a fused second hidden layer vector, for example, multiplying the second vector weight with the second hidden layer vector output in the last round by the computer equipment to obtain a fused second hidden layer vector; and superposing the fused first hidden layer vector and the fused second hidden layer vector to obtain a superposed hidden layer vector, so that the probability of collision between the first vehicle and the second vehicle is determined based on the superposed hidden layer vector fused with the first hidden layer vector and the second hidden layer vector.
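The weighted fusion and superposition described above, followed by mapping the superimposed vector to a probability, can be sketched as a tiny output layer. The sigmoid at the end is an assumption for illustration; the application specifies only that the output layer determines the collision probability from the superimposed hidden layer vector, and all parameter values here are placeholders for learned ones.

```python
import numpy as np

def output_layer(h1_final, h2_final, w1, w2, v, b=0.0):
    """Sketch of the output layer: weight, fuse and superimpose the final
    hidden layer vectors, then squash the result to a probability.
    w1, w2 (vector weights), v and b are assumed learned parameters."""
    fused1 = w1 * h1_final            # fused first hidden layer vector
    fused2 = w2 * h2_final            # fused second hidden layer vector
    superimposed = fused1 + fused2    # superimposed hidden layer vector
    score = float(v @ superimposed) + b
    return 1.0 / (1.0 + np.exp(-score))  # collision probability in (0, 1)

h1 = np.array([0.2, -0.5, 0.8])   # first hidden layer vector, last round
h2 = np.array([0.1, 0.4, -0.3])   # second hidden layer vector, last round
p = output_layer(h1, h2, w1=0.6, w2=0.4, v=np.array([1.0, 0.5, -0.2]))
```

Multiplying each hidden layer vector by its own weight before superposition lets the model learn how much each vehicle's state contributes to the joint collision outcome.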
In the above embodiment, since there is an interaction between the first vehicle and the second vehicle, determining the probability of collision from the superimposed hidden layer vector, which combines the first hidden layer vector and the second hidden layer vector, can improve the accuracy of the determined collision probability.
In one embodiment, the method further includes a pre-crash warning step, which includes: triggering generation of anti-collision early warning information when the probability of collision between the first vehicle and the second vehicle is determined to be greater than or equal to a preset probability threshold, where the anti-collision early warning information includes at least one of voice reminder information, an anti-collision reminder screen, a suggested travel speed, and a suggested travel lane.
Specifically, the computer device may determine whether the probability of collision between the first vehicle and the second vehicle is greater than or equal to a preset probability threshold, and if so, trigger the first vehicle and/or the second vehicle to generate anti-collision early warning information. For example, when the first vehicle is the vehicle currently driven by the driving object and the second vehicle is a vehicle traveling behind the current vehicle, the computer device may trigger the current vehicle, that is, the first vehicle, to generate the anti-collision early warning information. Correspondingly, when the second vehicle is the current vehicle and the first vehicle is a vehicle traveling in front of the current vehicle, the computer device may trigger the current vehicle, that is, the second vehicle, to generate the anti-collision early warning information.
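The threshold check itself is simple; a minimal sketch follows, where the 0.8 default threshold is an assumed example value rather than one specified in the text:

```python
# Hedged sketch of the warning trigger: the warning fires when the
# collision probability is greater than or equal to the preset
# probability threshold. The 0.8 default is an illustrative assumption.
def should_warn(collision_prob: float, threshold: float = 0.8) -> bool:
    return collision_prob >= threshold
```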
In one embodiment, the anti-collision early warning information includes at least one of voice reminder information, an anti-collision reminder screen, a suggested travel speed, and a suggested travel lane. For example, when the probability of collision between the first vehicle and the second vehicle is greater than or equal to the preset probability threshold, the first vehicle and/or the second vehicle may broadcast the voice reminder information aloud. The first vehicle and/or the second vehicle may also display the anti-collision reminder screen on the local driving device terminal. The first vehicle and/or the second vehicle may further generate a suggested travel speed and a suggested travel lane according to information such as the road condition of the travel area and the density of vehicles in the travel area, and broadcast the suggested travel speed and/or the suggested travel lane by voice, or display them in text or another form.
In one embodiment, in addition to triggering the first vehicle and/or the second vehicle to generate the anti-collision early warning information, the computer device may trigger transmission of the anti-collision early warning information to vehicles located around the first vehicle and/or the second vehicle. For example, when a voice broadcasting device such as a loudspeaker is disposed on the first vehicle and the computer device triggers the driving device terminal of the first vehicle to generate the anti-collision early warning information, the driving device terminal can trigger the voice broadcasting device to broadcast the information toward surrounding vehicles so as to remind them to drive carefully. Alternatively, the computer device may acquire an image of the travel area around the first vehicle through an image acquisition device disposed on the first vehicle, determine the vehicles around the first vehicle from the acquired image, and trigger those vehicles to generate anti-collision early warning information. Correspondingly, the computer device can also trigger sending anti-collision early warning information to vehicles around the second vehicle. By sending anti-collision early warning information to surrounding vehicles, the surrounding vehicles can determine in time the collision conditions of vehicles in the travel area, adjust their travel paths and speeds accordingly, and thus greatly improve driving safety.
In one embodiment, the first vehicle and/or the second vehicle may be an unmanned vehicle (also referred to as an autopilot vehicle), on which the unmanned system and the vehicle collision detection system provided by the embodiment of the present application are deployed, and when the probability of collision between the first vehicle and the second vehicle is greater than or equal to a preset probability threshold, the vehicle collision detection system may generate collision warning information, so that the unmanned system may adjust the running speed and the running path according to the generated collision warning information. For example, when the first vehicle is an unmanned vehicle and the vehicle collision detection system on the first vehicle determines that the collision probability between the first vehicle and the second vehicle is greater than or equal to a preset probability threshold, the vehicle collision detection system may generate collision avoidance early warning information and transmit the collision avoidance early warning information to the unmanned system, so that the unmanned system adjusts the speed and the path based on the received collision avoidance early warning information. Accordingly, when the second vehicle is an unmanned vehicle, the speed and the path can be adjusted by the method. It will be readily appreciated that the unmanned system and the vehicle collision detection system may be the same system or may be different systems. When the unmanned system and the vehicle collision detection system are the same system, the vehicle collision detection system may be one system module in the unmanned system. When the unmanned system and the vehicle collision detection system are not the same system, the vehicle collision detection system is another system independent of the unmanned system. 
By arranging the vehicle collision detection system in the unmanned vehicle, the probability of the vehicle collision can be determined based on the arranged vehicle collision detection system, and then the unmanned system can adjust the driving strategy based on the probability of the vehicle collision, so that the unmanned safety is improved.
In one embodiment, the anti-collision reminder screens include a first anti-collision reminder screen displayed by the first vehicle and a second anti-collision reminder screen displayed by the second vehicle. The first anti-collision reminder screen includes a first virtual vehicle model corresponding to the first vehicle, a second virtual vehicle model corresponding to the second vehicle, and a first perception area surrounding the first virtual vehicle model; when the probability of collision between the first vehicle and the second vehicle is greater than or equal to the preset probability threshold, the first perception sub-area of the first perception area that faces the second virtual vehicle model is highlighted. The second anti-collision reminder screen includes the second virtual vehicle model corresponding to the second vehicle, the first virtual vehicle model corresponding to the first vehicle, and a second perception area surrounding the second virtual vehicle model; when the probability of collision between the first vehicle and the second vehicle is greater than or equal to the preset probability threshold, the second perception sub-area of the second perception area that faces the first virtual vehicle model is highlighted.
Specifically, a driving device terminal may be installed in each of the first vehicle and the second vehicle, and the anti-collision reminder screen may be displayed through the driving device terminal. The virtual vehicle model of the vehicle is displayed in the anti-collision reminder screen, and a plurality of perception sub-areas around the virtual vehicle model together constitute the perception area. For example, the first vehicle may display a first anti-collision reminder screen in which the first virtual vehicle model corresponding to the first vehicle and the second virtual vehicle model corresponding to the second vehicle are displayed, with a plurality of first perception sub-areas surrounding the first virtual vehicle model. The second vehicle may display a second anti-collision reminder screen in which the second virtual vehicle model corresponding to the second vehicle and the first virtual vehicle model corresponding to the first vehicle are displayed, with a plurality of second perception sub-areas surrounding the second virtual vehicle model.
When the collision probability between the first vehicle and the second vehicle is greater than or equal to the preset probability threshold, the first vehicle may highlight, in the first anti-collision reminder screen, the target first perception sub-area among the plurality of first perception sub-areas that faces the second virtual vehicle model, so as to alert the driving object of the first vehicle to pay attention to the second vehicle based on the highlighted target first perception sub-area. Correspondingly, the second vehicle may highlight the target second perception sub-area facing the first virtual vehicle model in the second anti-collision reminder screen, so as to alert the driving object of the second vehicle to pay attention to the first vehicle. For example, referring to FIG. 5, when the collision probability between the first vehicle and the second vehicle is greater than or equal to the preset probability threshold, a first virtual vehicle model 501 of the first vehicle and a second virtual vehicle model 502 of the second vehicle may be displayed in the first anti-collision reminder screen, and a target first perception sub-area 503 facing the second virtual vehicle model may be highlighted. FIG. 5 illustrates a schematic diagram of an anti-collision reminder screen in one embodiment.
It will be readily appreciated that the non-highlighted perception sub-areas may be displayed in transparent form. "Front" and "rear" are relative concepts: front is not limited to directly ahead and may also be obliquely ahead, and rear is not limited to directly behind and may also be obliquely behind. By highlighting, in the reminder screen, the perception sub-area facing the corresponding virtual vehicle model, the corresponding driving object can be intuitively reminded to guard against a collision event, which improves the user experience and reduces personal and property losses.
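One plausible way to select which perception sub-area to highlight is to treat the sub-areas as equal angular sectors around the ego vehicle's model and pick the sector that faces the other vehicle. The sketch below assumes planar coordinates, eight sectors, and a counter-clockwise layout starting from the +x axis, none of which is specified in the text:

```python
import math

# Hypothetical selection of the perception sub-area facing the other
# vehicle: the ring around the ego model is split into n equal angular
# sectors; sector 0 starts at the +x axis, proceeding counter-clockwise.
def facing_subregion(ego_xy, other_xy, n_subregions=8):
    dx = other_xy[0] - ego_xy[0]
    dy = other_xy[1] - ego_xy[1]
    angle = math.atan2(dy, dx) % (2 * math.pi)  # quadrant-aware bearing
    return int(angle // (2 * math.pi / n_subregions))
```

For example, a vehicle directly ahead on the +x axis maps to sector 0, while one directly behind maps to sector 4 of eight.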
In this embodiment, by generating the anti-collision early warning information, the probability of collision between the first vehicle and the second vehicle can be reduced based on that information, thereby reducing personal and property losses.
In one embodiment, a vehicle collision detection method is performed by a vehicle collision detection model that includes a first hidden layer and a second hidden layer; the vehicle collision detection model is obtained by a model training step, referring to fig. 6, which includes:
step 602, acquiring first sample driving characteristic data of a first sample vehicle, second sample driving characteristic data of a second sample vehicle, collision sample tags corresponding to the first sample vehicle and the second sample vehicle, and acquiring a vehicle collision detection model to be trained.
Specifically, when the vehicle collision detection model needs to be trained, the computer device may acquire the first sample travel characteristic data of the first sample vehicle and the second sample travel characteristic data of the second sample vehicle, and may obtain a collision sample tag between the first sample vehicle and the second sample vehicle. The vehicle collision detection model to be trained is then trained with the first sample travel characteristic data, the second sample travel characteristic data, and the collision sample tag.
In one embodiment, a computer device may store a first sample information set and a second sample information set. The first sample information set stores first sample information corresponding to a plurality of first sample vehicles, and the second sample information set stores second sample information corresponding to a plurality of second sample vehicles. The first sample information may include a device identification of the first sample vehicle, the first sample travel characteristic data, a device identification of a second sample vehicle associated with the first sample vehicle, and collision information between the first sample vehicle and the associated second sample vehicle (referred to as first vehicle collision information). The second sample vehicle associated with the first sample vehicle may be a sample vehicle that has collided with the first sample vehicle, or a sample vehicle traveling behind the first sample vehicle. The first vehicle collision information reflects the historical collision condition between the first sample vehicle and the second sample vehicle: if the first sample vehicle has collided with the second sample vehicle, the collision state in the first vehicle collision information is set to a target value; otherwise, it is set to a non-target value. Accordingly, the second sample information may include a device identification of the second sample vehicle, the second sample travel characteristic data, a device identification of the first sample vehicle associated with the second sample vehicle, and collision information between the second sample vehicle and the associated first sample vehicle (referred to as second vehicle collision information).
In one embodiment, where the computer device stores the first sample information set and the second sample information set, the computer device may extract first sample information from the first sample information set. Since the first sample information may include the device identification of the first sample vehicle, the first sample travel characteristic data, the device identification of the associated second sample vehicle, and the collision information between the two, the computer device may determine the first sample vehicle, its first sample travel characteristic data, and the associated second sample vehicle based on the extracted first sample information. Further, the computer device may extract the corresponding second sample information from the second sample information set based on the determined second sample vehicle, and determine the second sample travel characteristic data from the extracted second sample information. Since collision information may be included in both the first sample information and the second sample information, the computer device may determine the collision sample tag corresponding to the first sample vehicle and the second sample vehicle based on the first vehicle collision information in the first sample information or the second vehicle collision information in the second sample information.
It is readily understood that the computer device may train the vehicle collision detection model based on each of the first sample information in the first sample information set and each of the second sample information in the second sample information set.
Step 604: starting from the second round (i.e., every round except the first), determine the first prediction hidden layer vector output by the current round according to the second sample travel characteristic data and the second prediction hidden layer vector output by the previous round.
Step 606: determine the second prediction hidden layer vector output by the current round according to the first sample travel characteristic data and the first prediction hidden layer vector output by the current round.
Step 608: determine the prediction probability of a collision between the first sample vehicle and the second sample vehicle according to the first prediction hidden layer vector and the second prediction hidden layer vector output by the current round.
Specifically, from a second round other than the first round, the first hidden layer in the vehicle collision detection model outputs a first predicted hidden layer vector of the current round according to the second sample running characteristic data and a second predicted hidden layer vector output by the previous round, so that the second hidden layer outputs a second predicted hidden layer vector of the current round according to the first sample running characteristic data and the first predicted hidden layer vector output by the current round. Further, an output layer in the vehicle collision detection model outputs a predicted probability of collision between the first sample vehicle and the second sample vehicle according to the first predicted hidden layer vector and the second predicted hidden layer vector output by the current round.
Step 610, adjusting model parameters of the vehicle collision detection model according to the difference between the prediction probability and the collision sample label.
Specifically, the computer device adjusts the model parameters of the vehicle collision detection model along the gradient-descent direction that minimizes the difference between the prediction probability and the collision sample tag. The model parameters include a model parameter matrix, a model parameter vector, and hidden layer parameters.
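As an illustration of step 610, the difference between the predicted probability and the collision sample label could be measured with binary cross-entropy, followed by a plain gradient-descent parameter update. The loss choice and the learning rate are assumptions; the text only requires that parameters move in the direction that reduces the difference:

```python
import numpy as np

# Illustrative difference measure and parameter update for step 610.
# Binary cross-entropy and the 0.01 learning rate are assumptions.
def bce_loss(prob, label):
    eps = 1e-12  # guard against log(0)
    return float(-(label * np.log(prob + eps) + (1 - label) * np.log(1 - prob + eps)))

def sgd_step(param, grad, lr=0.01):
    # move parameters opposite the gradient to shrink the difference
    return param - lr * grad
```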
Step 612: take the second prediction hidden layer vector output by the current round as the second prediction hidden layer vector output by the previous round for the next round, enter the next round, and return to the step of determining the first prediction hidden layer vector output by the current round according to the second sample travel characteristic data and the second prediction hidden layer vector output by the previous round, continuing execution until the training stop condition is reached, at which point training stops and a trained vehicle collision detection model is obtained; the vehicle collision detection model is used to determine a collision probability between at least two vehicles.
Specifically, the computer device takes the second prediction hidden layer vector output by the current round as the second prediction hidden layer vector output by the previous round for the next round, enters the next round, and returns to the step of determining the first prediction hidden layer vector output by the current round according to the second sample travel characteristic data and the second prediction hidden layer vector output by the previous round, continuing execution until the training stop condition is reached, at which point a trained vehicle collision detection model is obtained. The training stop condition may be set freely as required; for example, it may be determined to be reached when a preset number of rounds is completed.
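The round chaining of steps 604–612 can be traced with a small NumPy sketch. The gradient update of step 610 is deliberately omitted here, and all shapes, parameter names, and the zero initialization are illustrative assumptions:

```python
import numpy as np

# Forward-only trace of the round loop: each round's second prediction
# hidden layer vector seeds the next round. Parameter updates (step 610)
# are elided; the p["..."] key names are assumptions.
def run_rounds(x_t, x_tm1, p, n_rounds=5):
    h2 = p["h2_init"]  # initial second prediction hidden layer vector
    probs = []
    for _ in range(n_rounds):
        # first hidden layer: second vehicle's features + previous round's h2
        h1 = np.tanh(p["W_tm1"] @ x_tm1 + p["U_tm1"] @ h2 + p["B_tm1"])
        # second hidden layer: first vehicle's features + current round's h1
        h2 = np.tanh(p["W_t"] @ x_t + p["U_t"] @ h1 + p["B_t"])
        # output layer: predicted collision probability for this round
        z = p["V_t"] @ h1 + p["V_tm1"] @ h2 + p["A"]
        probs.append(float(1.0 / (1.0 + np.exp(-z))))
    return probs
```

With all-zero parameters each round degenerates to the sigmoid midpoint of 0.5, which makes the loop easy to sanity-check before real parameters are plugged in.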
In one embodiment, for the first round, the vehicle collision detection model may randomly generate an initial second predicted hidden layer vector and generate the first predicted hidden layer vector for the first round from the second sample travel characteristic data and the initial second predicted hidden layer vector. Further, the vehicle collision detection model may output a second predictive hidden layer vector of the first round based on the first sample travel characteristic data and the first predictive hidden layer vector of the first round.
In one embodiment, the vehicle collision detection model to be trained is determined by the following equations:

H_t = tanh(W_{t-1} X_{t-1} + U_{t-1} H_{t-1} + B_{t-1})

H_{t-1} = tanh(W_t X_t + U_t H_t + B_t)

Y = σ(V_t H_t + V_{t-1} H_{t-1} + A_t + A_{t-1})

wherein X_t represents the first sample travel characteristic data, X_{t-1} represents the second sample travel characteristic data, H_t represents the first prediction hidden layer vector, H_{t-1} represents the second prediction hidden layer vector, and Y represents the collision state between the first sample vehicle and the second sample vehicle (1 indicates that the first sample vehicle collides with the second sample vehicle, 0 indicates that it does not); W_t represents the first data weight of the first sample travel characteristic data, and W_{t-1} represents the second data weight of the second sample travel characteristic data; V_t represents the first vector weight of the first prediction hidden layer vector, and V_{t-1} represents the second vector weight of the second prediction hidden layer vector; U_t represents the third vector weight of the first prediction hidden layer vector, and U_{t-1} represents the fourth vector weight of the second prediction hidden layer vector; B_t, B_{t-1}, A_t, and A_{t-1} represent parameter vectors; tanh represents the hidden layer activation function, and σ represents the output layer activation function.
Specifically, the vehicle collision detection model may obtain the second prediction hidden layer vector H_{t-1} output by the previous round, and input that vector together with the second sample travel characteristic data X_{t-1} into the formula H_t = tanh(W_{t-1} X_{t-1} + U_{t-1} H_{t-1} + B_{t-1}) to obtain the first prediction hidden layer vector H_t output by the current round. The computer device then inputs the first prediction hidden layer vector H_t of the current round and the first sample travel characteristic data X_t into the formula H_{t-1} = tanh(W_t X_t + U_t H_t + B_t) to obtain the second prediction hidden layer vector H_{t-1} output by the current round. Finally, the vehicle collision detection model inputs the first prediction hidden layer vector H_t and the second prediction hidden layer vector H_{t-1} output by the current round into the output-layer formula Y = σ(V_t H_t + V_{t-1} H_{t-1} + A_t + A_{t-1}) to obtain the predicted probability of a collision between the first sample vehicle and the second sample vehicle for the current round.
In one embodiment, referring to FIG. 7, for the first round, the vehicle collision detection model may generate the first prediction hidden layer vector of the first round based on the initial second prediction hidden layer vector and the second sample travel characteristic data, and output the second prediction hidden layer vector of the first round based on the first prediction hidden layer vector of the first round and the first sample travel characteristic data. For every round other than the first, the vehicle collision detection model generates the first prediction hidden layer vector of the current round according to the second prediction hidden layer vector of the previous round and the second sample travel characteristic data, and generates the second prediction hidden layer vector of the current round according to the first prediction hidden layer vector of the current round and the first sample travel characteristic data. FIG. 7 illustrates a schematic diagram of the generation of prediction hidden layer vectors in one embodiment.
In one embodiment, the trained vehicle collision detection model may be expressed by the following formulas, in which each parameter takes its trained value (denoted here by a hat):

H_t = tanh(Ŵ_{t-1} X_{t-1} + Û_{t-1} H_{t-1} + B̂_{t-1})

H_{t-1} = tanh(Ŵ_t X_t + Û_t H_t + B̂_t)

Y = σ(V̂_t H_t + V̂_{t-1} H_{t-1} + Â_t + Â_{t-1})

wherein X_t represents the first travel characteristic data, X_{t-1} represents the second travel characteristic data, H_t represents the first hidden layer vector, H_{t-1} represents the second hidden layer vector, and Y represents the collision state between the first vehicle and the second vehicle (1 indicates that the first vehicle collides with the second vehicle, 0 indicates that it does not); Ŵ_t represents the first data weight of the first travel characteristic data, and Ŵ_{t-1} represents the second data weight of the second travel characteristic data; V̂_t represents the first vector weight of the first hidden layer vector, and V̂_{t-1} represents the second vector weight of the second hidden layer vector; Û_t represents the third vector weight of the first hidden layer vector, and Û_{t-1} represents the fourth vector weight of the second hidden layer vector; B̂_t, B̂_{t-1}, Â_t, and Â_{t-1} represent parameter vectors; tanh represents the hidden layer activation function, and σ represents the output layer activation function.
In the above vehicle collision detection model training step, by acquiring the second sample travel characteristic data of the second sample vehicle, the first hidden layer can determine the first prediction hidden layer vector of the current round according to the second sample travel characteristic data and the second prediction hidden layer vector output by the previous round, so that the hidden layer of the first sample vehicle is influenced by the travel characteristic data and hidden layer of the second sample vehicle. By acquiring the first sample travel characteristic data of the first sample vehicle, the second hidden layer can determine the second prediction hidden layer vector of the current round according to the first sample travel characteristic data and the first prediction hidden layer vector output by the current round, so that the hidden layer of the second sample vehicle is influenced by the travel characteristic data and hidden layer of the first sample vehicle. With both the first and second prediction hidden layer vectors available, the prediction probability of a collision can be determined from them jointly, achieving the purpose of determining the collision prediction probability from the mutual influence between the first sample vehicle and the second sample vehicle. A vehicle collision model trained on collision prediction probabilities that reflect this mutual influence can, in use, likewise determine the collision probability based on the mutual influence between the first vehicle and the second vehicle, improving the accuracy of the collision probability.
In one embodiment, acquiring the first sample travel characteristic data of the first sample vehicle, the second sample travel characteristic data of the second sample vehicle, and the collision sample tag corresponding to the first sample vehicle and the second sample vehicle includes: acquiring first sample information corresponding to the first sample vehicle, the first sample information including the first sample travel characteristic data of the first sample vehicle and first vehicle collision information; acquiring second sample information corresponding to the second sample vehicle, the second sample information including the second sample travel characteristic data of the second sample vehicle and second vehicle collision information; and using the first vehicle collision information or the second vehicle collision information as the collision sample tag corresponding to the first sample vehicle and the second sample vehicle.
Specifically, when the vehicle collision detection model needs to be trained, the computer device may acquire the first sample information corresponding to the first sample vehicle and the second sample information corresponding to the second sample vehicle. As described above, the first sample information may include the first sample travel characteristic data and the first vehicle collision information, and the second sample information may include the second sample travel characteristic data and the second vehicle collision information. The computer device may therefore determine the collision sample tag corresponding to the first sample vehicle and the second sample vehicle from at least one of the two; for example, it may directly use the first vehicle collision information, or directly use the second vehicle collision information, as the collision sample tag.
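A minimal sketch of taking the label directly from either vehicle's collision information follows; the `collided` key is a hypothetical stand-in for the stored collision state:

```python
# Hypothetical sketch: the collision sample label is 1 when the stored
# collision state is the target (collided) value, otherwise 0. The
# "collided" key is an assumed field name.
def collision_sample_label(first_info=None, second_info=None):
    info = first_info if first_info is not None else second_info
    return 1 if info.get("collided") else 0
```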
In this embodiment, by generating the first sample information and the second sample information, it is possible to cause the subsequent determination of the first sample travel feature data, the second sample travel feature data, and the collision sample tag based on the first sample information and the second sample information.
In one embodiment, the second sample vehicle is a vehicle that has collided with the first sample vehicle, or a vehicle that travels behind the first sample vehicle but has not collided with it.
In this embodiment, by pairing the first sample vehicle both with a second sample vehicle that collided with it and with a second sample vehicle that merely travels behind it, the vehicle collision detection model can subsequently be trained on sample vehicle pairs with different collision states. A model trained on sample vehicles with different collision states can output a more accurate collision probability than one trained on sample vehicles that all share the same collision state.
In one embodiment, the step of constructing the first sample information includes: acquiring characteristic data acquired for a first sample vehicle, obtaining first sample running characteristic data, and acquiring a collision state of the first sample vehicle; determining a vehicle that is traveling behind the first sample vehicle but that is not colliding with the first sample vehicle when the collision state indicates that the first sample vehicle is not colliding, and taking the determined vehicle as a second sample vehicle; generating first vehicle collision information according to the device identification of the first sample vehicle, the device identification of the second sample vehicle and the collision state of the first sample vehicle; and obtaining first sample information of the first sample vehicle according to the first sample driving characteristic data and the first vehicle collision information.
Specifically, the generation process of the first sample information is as follows: while the first sample vehicle is traveling, the driving device terminal in the first sample vehicle may collect feature data in real time, for example, the number of times the driving object steps on the accelerator and the brake, as well as the travel speed and vehicle size of the first sample vehicle. When the first sample vehicle has not collided, the driving device terminal may set the collision state of the first sample vehicle to a non-collision state, determine a vehicle that travels behind the first sample vehicle but has not collided with it, and take the determined vehicle as the second sample vehicle. The driving device terminal in the first sample vehicle then generates the first vehicle collision information from the device identification of the first sample vehicle, the device identification of the second sample vehicle, and the collision state of the first sample vehicle, and obtains the first sample information of the first sample vehicle from the first sample travel characteristic data and the first vehicle collision information.
In one embodiment, the method further comprises: when the collision state indicates that the first sample vehicle has collided, the driving apparatus terminal may set the collision state of the first sample vehicle to the collision state, determine the vehicle that collided with the first sample vehicle, and take that vehicle as the second sample vehicle. The driving apparatus terminal in the first sample vehicle then generates the first vehicle collision information according to the device identification of the first sample vehicle, the device identification of the second sample vehicle, and the collision state of the first sample vehicle, and obtains the first sample information of the first sample vehicle according to the first sample driving characteristic data and the first vehicle collision information.
Similarly, the driving device terminal corresponding to the second sample vehicle may also generate second sample information based on the corresponding information.
In this embodiment, by generating the first sample information and the second sample information, the first sample travel feature data, the second sample travel feature data, and the collision sample tag may be determined based on the first sample information and the second sample information, and thus the corresponding model may be trained based on the determined data and tags.
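The sample-information construction described above can be illustrated with a minimal Python sketch. All field and function names here (device identifications, the `collision_state` flag, etc.) are hypothetical, chosen only to mirror the patent's description:

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class SampleInfo:
    travel_features: Dict[str, float]  # first sample driving characteristic data
    collision_info: dict               # first vehicle collision information

def build_first_sample_info(first_device_id: str,
                            second_device_id: str,
                            collided: bool,
                            travel_features: Dict[str, float]) -> SampleInfo:
    # First vehicle collision information: the device identifications of both
    # sample vehicles plus the collision state of the first sample vehicle.
    collision_info = {
        "first_device": first_device_id,
        "second_device": second_device_id,
        "collision_state": 1 if collided else 0,
    }
    # First sample information = driving feature data + collision information.
    return SampleInfo(travel_features=travel_features,
                      collision_info=collision_info)
```

The second sample information would be built symmetrically from the second vehicle's data.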
In one embodiment, the first sample travel feature data includes a plurality of first sample travel feature sub-data; the second sample travel feature data includes a plurality of second sample travel feature sub-data; the plurality of first sample travel feature sub-data and the plurality of second sample travel feature sub-data each include feature data belonging to a vehicle type and feature data belonging to a driving object type; the feature data belonging to the vehicle type refers to feature data related to the running of the vehicle; the feature data belonging to the driving object type refers to feature data related to the behavior of the driving object.
Specifically, the first sample travel feature data may include a plurality of first sample travel feature sub-data, and the plurality of first sample travel feature sub-data may include feature data belonging to a vehicle type and feature data belonging to a driving object type. The second sample travel feature data may include a plurality of second sample travel feature sub-data, and the plurality of second sample travel feature sub-data may also include feature data belonging to a vehicle type and feature data belonging to a driving object type. As will be readily understood, the feature data belonging to the vehicle type and the feature data belonging to the driving object type included in the plurality of first sample travel feature sub-data are feature data collected for the first sample vehicle. The feature data belonging to the vehicle type and the feature data belonging to the driving object type included in the plurality of second sample travel feature sub-data are feature data collected for the second sample vehicle. Wherein the feature data belonging to the vehicle type refers to feature data related to running of the vehicle; the feature data belonging to the driving object type refers to feature data related to the behavior of the driving object, which refers to an object driving the vehicle.
In this embodiment, the sample driving feature data includes both feature data related to the running of the vehicle and feature data related to the behavior of the driving object, so it reflects the driving characteristics of the vehicle as well as the behavior characteristics of the driving object. The vehicle collision detection model can therefore be trained on feature data of both the vehicle type and the driving object type, and, once trained, can combine both types of feature data to obtain the collision probability. Compared with a collision probability obtained from a single piece of feature data, a collision probability determined from multiple kinds of feature data is more accurate.
In one embodiment, the feature data belonging to the vehicle type includes at least one of: the driving speed of the vehicle, an image of the driving area of the vehicle, the distance between at least two vehicles, position information of the vehicle, size information of the vehicle, weight information of the vehicle, the approved passenger capacity of the vehicle, the highest driving speed of the vehicle, the average driving speed of the vehicle, and traffic flow data in the driving area. The feature data belonging to the driving object type includes at least one of: the number of times the driving object adjusts the seat, steps on the vehicle brake, or steps on the vehicle accelerator during driving, the steering wheel swing amplitude, the number of steering wheel swings, and the number of upshift and downshift operations.
In one embodiment, when the first sample driving characteristic data needs to be generated, the computer device may collect feature data belonging to the vehicle type for the first sample vehicle and feature data belonging to the driving object type for the driving object driving the first sample vehicle, take the collected feature data as first sample driving feature sub-data, thereby obtaining a plurality of first sample driving feature sub-data, and combine them to obtain the first sample driving characteristic data. Correspondingly, when the second sample driving characteristic data needs to be generated, the computer device may collect feature data belonging to the vehicle type for the second sample vehicle and feature data belonging to the driving object type for the driving object driving the second sample vehicle, take the collected feature data as second sample driving feature sub-data, thereby obtaining a plurality of second sample driving feature sub-data, and combine them to obtain the second sample driving characteristic data.
In the above embodiment, the vehicle collision detection model may be trained based on multiple types of feature data, so that the vehicle collision detection model obtained by training may also integrate multiple types of feature data to obtain the collision probability with improved accuracy when in use.
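As an illustration of combining the two types of feature sub-data into one driving-feature vector, the sketch below concatenates vehicle-type and driving-object-type sub-data in a fixed key order. The keys and values are invented examples, not the patent's feature set:

```python
# Feature data belonging to the vehicle type (illustrative keys).
vehicle_features = {
    "speed_kmh": 72.0,      # driving speed of the vehicle
    "gap_ahead_m": 18.5,    # distance to the vehicle ahead
    "avg_speed_kmh": 65.0,  # average driving speed
}
# Feature data belonging to the driving object type (illustrative keys).
driver_features = {
    "brake_presses": 4,     # number of times the brake was stepped on
    "throttle_presses": 9,  # number of times the accelerator was stepped on
    "wheel_swings": 2,      # number of steering wheel swings
}

def to_feature_vector(vehicle: dict, driver: dict) -> list:
    # A fixed (sorted) key order keeps the vector layout stable across samples.
    return [vehicle[k] for k in sorted(vehicle)] + [driver[k] for k in sorted(driver)]

x_t = to_feature_vector(vehicle_features, driver_features)
```

Each sample vehicle would yield one such vector per collection time.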
In one embodiment, referring to fig. 8, fig. 8 shows a schematic flowchart of the construction and training of a vehicle collision detection model in one embodiment. In this example, the vehicle is specifically an automobile.
Step 802, data input stage. In the data input stage, log data are obtained from a cloud database on the vehicle-machine side; the log data include the license plate number, the sample driving feature data, and the collision state Y_t ∈ {0, 1}. For the current log data, if the collision state indicates that the current vehicle has collided, a license plate number image of the other vehicle that collided with the current vehicle is also obtained. The collected sample feature data consist of two parts: feature data belonging to the vehicle type (also called vehicle running feature data) and feature data belonging to the driving object type (also called user driving behavior feature data). The feature data belonging to the vehicle type include: the vehicle's own speed, road surface images collected by the vehicle, the distance to the vehicles ahead and behind, the distance to the vehicles on the left and right, vehicle position information, vehicle length and width, vehicle weight, fuel consumption, approved passenger capacity, highest speed, engine displacement, average speed, speed limit data, overspeed data, traffic flow data, and the like. The feature data belonging to the driving object type include: the number of times the owner taps the mobile phone (or in-vehicle screen) while driving, the number of seat adjustments while driving, the number of air conditioner adjustments, music listening data, video playing data, the number of brake presses, the number of accelerator presses, the steering wheel swing amplitude, the number of steering wheel swings, the number of upshift and downshift operations, and the like.
Step 804, label construction stage. In the label construction stage, the license plate number, the vehicle identification, and the collision state Y_t ∈ {0, 1} of each vehicle from the data input stage are input (Y_t represents the collision state of a vehicle: 1 means the vehicle has collided, 0 means it has not). For each vehicle in the data input stage, the license plate number image of the other vehicle that collided with the current vehicle, or of the other vehicle traveling behind the current vehicle, is determined; the license plate number of the other vehicle is extracted by an image-recognition machine learning model to obtain the license plate number information. A collision sample label Y is then generated from the license plate number of the current vehicle, the license plate number of the other vehicle (either the one that collided with the current vehicle or the one traveling behind it), and the collision state of the current vehicle.
Step 806, sample construction stage. In the construction stage of training and test samples, the sample driving feature data X_{t-1}, X_t obtained in the data input stage are input (X_t representing the first sample driving characteristic data and X_{t-1} the second sample driving characteristic data), together with the collision sample label Y. The first sample driving characteristic data may be the driving feature data of the current vehicle; the second sample driving characteristic data may be the driving feature data of the vehicle that collided with the current vehicle, or of the vehicle traveling behind the current vehicle. The collision sample label Y is associated with the first sample driving characteristic data X_t and the second sample driving characteristic data X_{t-1} by vehicle identification (which may be the license plate number), yielding the first sample information S_t (also called the training sample of the current vehicle) and the second sample information S_{t-1} (also called the training sample of the rear vehicle behind the current vehicle). When a plurality of first sample information and second sample information are obtained, the plurality of first sample information S_t are randomly split in a certain proportion a into first training samples S_t^train (also called training samples of the current vehicle) and first test samples S_t^test (also called test samples of the front vehicle). Similarly, the plurality of second sample information S_{t-1} are randomly split in the same proportion into second training samples S_{t-1}^train (also called training samples of the rear vehicle) and second test samples S_{t-1}^test (also called test samples of the rear vehicle).
Construction of prediction samples: during driving, images of the license plates of the vehicles ahead and behind are collected by the vehicle-end camera and processed with image-recognition technology to obtain the license plate numbers of the front and rear vehicles. The cloud database on the vehicle-machine side is then matched against these license plate numbers to obtain the driving feature data of the front and rear vehicles; the current vehicle's driving feature data serves as the first prediction sample and the other vehicle's driving feature data as the second prediction sample.
Step 808, construction stage of the vehicle collision detection model. In this stage, the vehicle collision detection model is constructed based on the following formulas, where X_t and X_{t-1} are the first and second driving feature data, H_t and H_{t-1} are the first and second hidden layer vectors (H_{t-1} on the right-hand side of the first formula being the second hidden layer vector output in the previous round), W_x^(1) and W_x^(2) are the data weights of X_t and X_{t-1}, W_h^(1) and W_h^(2) are the vector weights of H_t and H_{t-1}, W_o^(1) and W_o^(2) are the output-layer vector weights, and b_1, b_2, b_3 are the parameter vectors:

H_t = tanh(W_x^(2) · X_{t-1} + W_h^(2) · H_{t-1} + b_1)

H_{t-1} = tanh(W_x^(1) · X_t + W_h^(1) · H_t + b_2)

P{Y = 1} = σ(W_o^(1) · H_t + W_o^(2) · H_{t-1} + b_3)

Here tanh is the activation function of the hidden layers and σ is the activation function of the output layer.
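This coupled hidden-layer structure, in which each hidden layer consumes one vehicle's feature data together with the other layer's output, with tanh hidden activations and a sigmoid output layer, can be given a minimal numerical sketch. The weight shapes, random initialization, and round count below are illustrative assumptions, not trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 6, 4                                  # feature size and hidden size (assumed)
Wx1, Wx2 = rng.normal(size=(h, d)), rng.normal(size=(h, d))  # data weights for X_t, X_{t-1}
Wh1, Wh2 = rng.normal(size=(h, h)), rng.normal(size=(h, h))  # vector weights for H_t, H_{t-1}
Wo1, Wo2 = rng.normal(size=(1, h)), rng.normal(size=(1, h))  # output-layer vector weights
b1, b2, b3 = np.zeros(h), np.zeros(h), np.zeros(1)           # parameter vectors

def collision_probability(x_t, x_prev, rounds=3):
    """x_t: first vehicle's feature data; x_prev: second (rear) vehicle's."""
    h2 = np.zeros(h)                         # initial second hidden layer vector
    for _ in range(rounds):
        # First hidden layer: second vehicle's data + previous second hidden vector.
        h1 = np.tanh(Wx2 @ x_prev + Wh2 @ h2 + b1)
        # Second hidden layer: first vehicle's data + current first hidden vector.
        h2 = np.tanh(Wx1 @ x_t + Wh1 @ h1 + b2)
    # Output layer: sigmoid over the two last-round hidden vectors.
    z = (Wo1 @ h1 + Wo2 @ h2 + b3)[0]
    return float(1.0 / (1.0 + np.exp(-z)))
```

With random weights the output is meaningless; the point is the data flow between the two hidden layers across rounds.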
Step 810, training and test stage of the vehicle collision detection model. The first training samples and the second training samples are input into the vehicle collision detection model to be trained, and the first test samples and the second test samples are input into the model as well. According to the loss-minimization principle and the gradient descent method, the model parameter matrices and parameter vectors are obtained, together with the hidden layer sequence comprising the hidden layer parameters.
Step 812, prediction stage of the vehicle collision detection model. The first prediction sample and the second prediction sample are input into the model, together with the model parameter matrices and parameter vectors obtained in step 810 and the hidden layer sequence comprising the hidden layer parameters, and the probability P{Y} that the vehicle corresponding to the first prediction sample collides with the vehicle corresponding to the second prediction sample at the current moment is obtained from the model's output formula.
In the above embodiment, by training the vehicle collision detection model, the accurate collision probability of the vehicle can be output based on the trained vehicle collision detection model.
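The loss-minimization and gradient-descent training of step 810 can be illustrated with a minimal sketch. As a simplification, only the output-layer weights are updated here, using the binary cross-entropy loss between the predicted probability and the collision sample label; the loss choice, learning rate, and step count are assumptions, and a full implementation would backpropagate into both hidden layers as well:

```python
import numpy as np

def bce_loss(p, y):
    """Binary cross-entropy between predicted probability p and label y."""
    eps = 1e-9
    return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def train_output_layer(h1, h2, y, wo1, wo2, b3, lr=0.1, steps=200):
    """Gradient descent on the output-layer parameters for one sample pair.

    h1, h2: last-round hidden layer vectors; y: collision sample label in {0, 1}.
    """
    for _ in range(steps):
        z = wo1 @ h1 + wo2 @ h2 + b3
        p = 1.0 / (1.0 + np.exp(-z))
        grad = p - y               # dL/dz for sigmoid output + BCE loss
        wo1 -= lr * grad * h1      # gradient-descent updates
        wo2 -= lr * grad * h2
        b3 -= lr * grad
    return wo1, wo2, b3
```

Training would iterate this over all training sample pairs until the stop condition, then evaluate on the test samples.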
In one embodiment, referring to fig. 9, fig. 9 illustrates a vehicle collision detection method in one embodiment, the vehicle collision detection method comprising the steps of:
s902, acquiring first sample driving characteristic data of a first sample vehicle, second sample driving characteristic data of a second sample vehicle and collision sample labels corresponding to the first sample vehicle and the second sample vehicle through computer equipment, and acquiring a vehicle collision detection model to be trained.
S904, from the second round on (i.e., every round other than the first), determining, through the vehicle collision detection model, the first prediction hidden layer vector output at the current round according to the second sample driving characteristic data and the second prediction hidden layer vector output at the previous round.
S906, determining, through the vehicle collision detection model, the second prediction hidden layer vector output at the current round according to the first sample driving characteristic data and the first prediction hidden layer vector output at the current round; determining the prediction probability that the first sample vehicle and the second sample vehicle collide according to the first prediction hidden layer vector and the second prediction hidden layer vector output at the current round; and adjusting the model parameters of the vehicle collision detection model according to the difference between the prediction probability and the collision sample label.
S908, through the vehicle collision detection model, taking the second prediction hidden layer vector output at the current round as the second prediction hidden layer vector output at the previous round for the next round, entering the next round, and returning to the step of determining the first prediction hidden layer vector output at the current round according to the second sample driving characteristic data and the second prediction hidden layer vector output at the previous round, continuing execution until the training stop condition is reached, so as to obtain the trained vehicle collision detection model.
S910, acquiring first driving characteristic data of a first vehicle, second driving characteristic data of a second vehicle driving behind the first vehicle and a trained vehicle collision detection model through computer equipment.
S912, acquiring an initial second hidden layer vector through a vehicle collision detection model, and determining a first hidden layer vector output by the first round according to the second driving characteristic data and the initial second hidden layer vector through the first hidden layer; and determining a second hidden layer vector output by the first round according to the first running characteristic data and the first hidden layer vector output by the first round through the second hidden layer.
S914, through the vehicle collision detection model, from the second round except the first round, through the first hidden layer, and according to the second running characteristic data and the second hidden layer vector output by the second hidden layer in the previous round, determining the first hidden layer vector output in the current round; and determining a second hidden layer vector output by the current round through the second hidden layer according to the first driving characteristic data and the first hidden layer vector output by the current round.
S916, through the vehicle collision detection model, taking the second hidden layer vector output at the current round as the second hidden layer vector output at the previous round for the next round, entering the next round, and returning to the step of determining the first hidden layer vector output at the current round according to the second driving characteristic data and the second hidden layer vector output by the second hidden layer at the previous round, continuing execution until the execution stop condition is reached, so as to obtain the first hidden layer vectors and the second hidden layer vectors output at different rounds.
S918, determining the probability of collision between the first vehicle and the second vehicle according to the first hidden layer vector output by the first hidden layer at the last round and the second hidden layer vector output by the second hidden layer at the last round through the vehicle collision detection model.
S920, triggering, by the computer device, generation of anti-collision early warning information when it is determined that the probability of collision between the first vehicle and the second vehicle is greater than or equal to a preset probability threshold; the anti-collision early warning information comprises at least one of voice reminding information, an anti-collision reminding picture, a suggested driving speed, and a suggested driving lane.
In the vehicle collision detection method, since the collision probability is determined according to the mutual influence relationship between the first vehicle and the second vehicle, compared with the conventional method for performing collision early warning according to the distance between the first vehicle and the second vehicle only, the accuracy of the determined collision probability between the first vehicle and the second vehicle can be improved.
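The detection-and-warning flow above (run the trained model, compare the output probability with a preset threshold, and trigger the warning) can be sketched minimally as follows. The threshold value and the contents of the warning message are assumptions, not values prescribed by the patent:

```python
PROB_THRESHOLD = 0.5  # preset probability threshold (assumed value)

def collision_warning(probability: float, threshold: float = PROB_THRESHOLD):
    """Return anti-collision early warning information, or None if not triggered.

    The warning is triggered when the collision probability is greater than
    or equal to the preset probability threshold.
    """
    if probability < threshold:
        return None
    # The warning information includes at least one of: voice reminder,
    # anti-collision reminder picture, suggested speed, suggested lane.
    return {
        "voice_reminder": "Vehicle approaching from behind - please slow down",
        "suggested_speed_kmh": 40,
    }
```

In deployment, `probability` would come from the trained vehicle collision detection model's output for the current vehicle pair.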
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in these flowcharts may include multiple sub-steps or stages, which are not necessarily executed at the same moment but may be executed at different moments; their order of execution is not necessarily sequential, and they may be executed in turn or alternately with at least part of the other steps or sub-steps.
The application also provides an application scene, which applies the vehicle collision detection method. Specifically, the application of the vehicle collision detection method in the application scene is as follows:
when the current vehicle runs on the road, the vehicle-mounted terminal of the current vehicle can generate first running characteristic data and send the first running characteristic data to the server for storage. An image acquisition device is arranged on the current vehicle, and can acquire a vehicle image of a front vehicle running in front of the current vehicle or acquire a vehicle image of a rear vehicle running behind the current vehicle, and the acquired vehicle image is subjected to image recognition to obtain a license plate number. The vehicle-mounted terminal of the current vehicle pulls the driving characteristic data corresponding to the determined license plate number from the server, and takes the pulled driving characteristic data as second driving characteristic data. The method comprises the steps that a trained vehicle collision detection model is deployed on a vehicle-mounted terminal of a current vehicle, the vehicle-mounted terminal of the current vehicle inputs first driving characteristic data and second driving characteristic data into the vehicle collision detection model, the collision probability between the current vehicle and a front vehicle is output through the vehicle collision detection model, or the collision probability between the current vehicle and a rear vehicle is output, and collision early warning is conducted when the collision probability is greater than or equal to a preset probability threshold value so as to remind a vehicle owner of slowing down based on the collision early warning.
The above application scenario is only illustrative, and it is to be understood that the application of the vehicle collision detection method provided by the embodiments of the present application is not limited to the above scenario.
Based on the same inventive concept, an embodiment of the application also provides a vehicle collision detection device for implementing the vehicle collision detection method above. The implementation principle of the device is similar to that of the method, so for the specific limitations in the one or more embodiments of the vehicle collision detection device provided below, reference may be made to the limitations of the vehicle collision detection method above; they are not repeated here.
In one embodiment, as shown in fig. 10, there is provided a vehicle collision detection apparatus 1000 including: an input layer module 1002, a hidden layer module 1004, and an output layer module 1006, wherein:
the input layer module 1002 is configured to obtain first driving characteristic data of a first vehicle, and second driving characteristic data of a second vehicle that runs behind the first vehicle.
The hidden layer module 1004 is configured to output corresponding first hidden layer vectors at different rounds through the first hidden layer respectively; the first hidden layer vector output by the current round is generated according to the second driving characteristic data and the second hidden layer vector output by the second hidden layer at the previous round; outputting corresponding second hidden layer vectors at different rounds through the second hidden layers respectively; the second hidden layer vector output by the current round is generated according to the first driving characteristic data and the first hidden layer vector output by the first hidden layer at the current round.
The output layer module 1006 is configured to determine a probability of collision between the first vehicle and the second vehicle according to a first hidden layer vector output by the first hidden layer at a last round and a second hidden layer vector output by the second hidden layer at the last round.
In one embodiment, the hidden layer module 1004 is further configured to: from the second round on (i.e., every round other than the first), determine, through the first hidden layer, the first hidden layer vector output at the current round according to the second driving characteristic data and the second hidden layer vector output by the second hidden layer at the previous round; determine, through the second hidden layer, the second hidden layer vector output at the current round according to the first driving characteristic data and the first hidden layer vector output at the current round; and take the second hidden layer vector output at the current round as the second hidden layer vector output at the previous round for the next round, enter the next round, and return to the step of determining the first hidden layer vector output at the current round according to the second driving characteristic data and the second hidden layer vector output by the second hidden layer at the previous round, continuing execution until the execution stop condition is reached, so as to obtain the first hidden layer vectors and the second hidden layer vectors output at different rounds.
In one embodiment, the hidden layer module 1004 is further configured to obtain an initial second hidden layer vector, pass through the first hidden layer, and determine a first hidden layer vector output in the first round according to the second driving characteristic data and the initial second hidden layer vector; and determining a second hidden layer vector output by the first round according to the first running characteristic data and the first hidden layer vector output by the first round through the second hidden layer.
In one embodiment, the output layer module 1006 is further configured to determine a first vector weight corresponding to a first hidden layer vector output in a last round, and determine a second vector weight corresponding to a second hidden layer vector output in the last round; fusing the first vector weight and the first hidden layer vector output in the last round to obtain a fused first hidden layer vector; fusing the second vector weight with the second hidden layer vector output in the last round to obtain a fused second hidden layer vector; and superposing the fused first hidden layer vector and the fused second hidden layer vector to obtain a superposed hidden layer vector, and determining the collision probability of the first vehicle and the second vehicle through the superposed hidden layer vector.
In one embodiment, the vehicle collision detection apparatus 1000 is further configured to trigger generation of anti-collision early warning information when it is determined that the probability of collision between the first vehicle and the second vehicle is greater than or equal to a preset probability threshold; the anti-collision early warning information comprises at least one of voice reminding information, an anti-collision reminding picture, a suggested driving speed, and a suggested driving lane.
In one embodiment, the vehicle collision detection apparatus 1000 is further configured to trigger displaying a collision avoidance alert screen; the anti-collision reminding picture comprises a first anti-collision reminding picture displayed by a first vehicle and a second anti-collision reminding picture displayed by a second vehicle; the first anti-collision reminding picture comprises a first virtual vehicle model corresponding to a first vehicle, a second virtual vehicle model corresponding to a second vehicle and a first perception area surrounding the first virtual vehicle model; when the probability of collision between the first vehicle and the second vehicle is greater than or equal to a preset probability threshold value, the first perception subarea of the first perception area, which faces the direction of the second virtual vehicle model, is highlighted; the second anti-collision reminding picture comprises a second virtual vehicle model corresponding to a second vehicle, a first virtual vehicle model corresponding to the first vehicle and a second perception area surrounding the second virtual vehicle model; and when the probability of collision of the first vehicle and the second vehicle is greater than or equal to a preset probability threshold value, highlighting a second perception subarea in the second perception area towards the direction of the first virtual vehicle model.
In one embodiment, the vehicle collision detection apparatus 1000 is deployed with a collision detection model determined by the following formulas:

H_t = tanh(W_x^(2) · X_{t-1} + W_h^(2) · H_{t-1} + b_1)

H_{t-1} = tanh(W_x^(1) · X_t + W_h^(1) · H_t + b_2)

P{Y = 1} = σ(W_o^(1) · H_t + W_o^(2) · H_{t-1} + b_3)

wherein X_t represents the first driving characteristic data and X_{t-1} the second driving characteristic data; H_t represents the first hidden layer vector and H_{t-1} the second hidden layer vector (H_{t-1} on the right-hand side of the first formula being the second hidden layer vector output in the previous round); Y represents the collision state between the first vehicle and the second vehicle (1 meaning the first vehicle collides with the second vehicle, 0 meaning it does not); W_x^(1) is the first data weight of the first driving characteristic data and W_x^(2) the second data weight of the second driving characteristic data; W_h^(1) is the first vector weight of the first hidden layer vector and W_h^(2) the second vector weight of the second hidden layer vector; W_o^(1) is the third vector weight of the first hidden layer vector and W_o^(2) the fourth vector weight of the second hidden layer vector; b_1, b_2, b_3 represent the parameter vectors; tanh represents the activation function of the hidden layers and σ the activation function of the output layer.
In one embodiment, the vehicle collision detection apparatus 1000 further includes a training module 1008 configured to: acquire first sample driving characteristic data of a first sample vehicle, second sample driving characteristic data of a second sample vehicle, and collision sample labels corresponding to the first sample vehicle and the second sample vehicle, and acquire a vehicle collision detection model to be trained; from the second round on, determine the first prediction hidden layer vector output at the current round according to the second sample driving characteristic data and the second prediction hidden layer vector output at the previous round; determine the second prediction hidden layer vector output at the current round according to the first sample driving characteristic data and the first prediction hidden layer vector output at the current round; determine the prediction probability that the first sample vehicle and the second sample vehicle collide according to the first prediction hidden layer vector and the second prediction hidden layer vector output at the current round; adjust the model parameters of the vehicle collision detection model according to the difference between the prediction probability and the collision sample label; and take the second prediction hidden layer vector output at the current round as the second prediction hidden layer vector output at the previous round for the next round, enter the next round, and return to the step of determining the first prediction hidden layer vector output at the current round according to the second sample driving characteristic data and the second prediction hidden layer vector output at the previous round, continuing execution until the training stop condition is reached, so as to obtain the trained vehicle collision detection model; the vehicle collision detection model is used to determine the collision probability between at least two vehicles.
In one embodiment, the training module 1008 is further configured to obtain first sample information corresponding to the first sample vehicle, the first sample information including the first sample driving characteristic data of the first sample vehicle and first vehicle collision information; obtain second sample information corresponding to the second sample vehicle, the second sample information including the second sample driving characteristic data of the second sample vehicle and second vehicle collision information; and take the first vehicle collision information or the second vehicle collision information as the collision sample labels corresponding to the first sample vehicle and the second sample vehicle.
In one embodiment, the training module 1008 is further configured to determine a vehicle that is traveling behind the first sample vehicle but that is not colliding with the first sample vehicle when the collision status indicates that the first sample vehicle is not colliding, and treat the determined vehicle as a second sample vehicle; generating first vehicle collision information according to the device identification of the first sample vehicle, the device identification of the second sample vehicle and the collision state of the first sample vehicle; and obtaining first sample information of the first sample vehicle according to the first sample driving characteristic data and the first vehicle collision information.
In one embodiment, the training module 1008 is further configured to determine a vehicle that collides with the first sample vehicle when the collision status indicates that the first sample vehicle is in a collision, and treat the determined vehicle as the second sample vehicle.
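The two sample-construction rules above (a trailing non-colliding vehicle as a negative sample, the actual collision partner as a positive sample) can be sketched as follows. The record layout is hypothetical; the patent describes categories of information, not a concrete schema.

```python
def build_first_sample_info(first_vehicle, candidate_vehicles, features):
    """Sketch of the sample-construction rule: if the first sample
    vehicle did not collide, pick a vehicle travelling behind it that
    did not collide with it (negative sample, label 0); if it did
    collide, pick the vehicle it collided with (positive sample,
    label 1). Field names are illustrative assumptions."""
    if first_vehicle["collided_with"] is None:
        second = next(v for v in candidate_vehicles
                      if v["behind"] == first_vehicle["id"]
                      and v["collided_with"] != first_vehicle["id"])
        label = 0
    else:
        second = next(v for v in candidate_vehicles
                      if v["id"] == first_vehicle["collided_with"])
        label = 1
    collision_info = {
        "first_id": first_vehicle["id"],   # device identification of the first sample vehicle
        "second_id": second["id"],         # device identification of the second sample vehicle
        "label": label,                    # collision state of the first sample vehicle
    }
    return {"features": features, "collision_info": collision_info}
```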
In one embodiment, the training module 1008 is further configured to obtain first sample travel feature data and second sample travel feature data, the first sample travel feature data including a plurality of first sample travel feature sub-data; the second sample travel feature data includes a plurality of second sample travel feature sub-data; the plurality of first sample travel feature sub-data and the plurality of second sample travel feature sub-data each include feature data belonging to a vehicle type and feature data belonging to a driving object type; the feature data belonging to the vehicle type refers to feature data related to the running of the vehicle; the feature data belonging to the driving object type refers to feature data related to the behavior of the driving object.
In one embodiment, training module 1008 is further configured to obtain a plurality of first sample travel feature sub-data and a plurality of second sample travel feature sub-data; the plurality of first sample travel feature sub-data and the plurality of second sample travel feature sub-data each include feature data belonging to a vehicle type and feature data belonging to a driving object type; the feature data belonging to the vehicle type refers to feature data related to the running of the vehicle; the feature data belonging to the driving object type refers to feature data related to the behavior of the driving object; the feature data belonging to the vehicle type includes at least one of the travel speed of the vehicle, an image of the vehicle's travel area, the distance between at least two vehicles, position information of the vehicle, size information of the vehicle, weight information of the vehicle, the actual approved passenger capacity of the vehicle, the highest travel speed of the vehicle, the average travel speed of the vehicle, and traffic data in the travel area; the feature data belonging to the driving object type includes at least one of the number of seat adjustments, the number of brake applications, the number of accelerator applications, the steering wheel swing amplitude, the number of steering wheel swings, and the number of upshift and downshift operations performed by the driving object while driving.
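The two feature categories above can be assembled into a single travel-feature vector. The concrete feature names below are illustrative; the patent names the categories but does not fix a schema.

```python
# Hypothetical feature values for one vehicle; keys are illustrative.
vehicle_features = {            # feature data belonging to the vehicle type
    "speed_kmh": 72.0,
    "distance_to_lead_m": 18.5,
    "vehicle_weight_kg": 1580.0,
    "avg_speed_kmh": 64.0,
}
driver_features = {             # feature data belonging to the driving object type
    "brake_presses": 4,
    "accelerator_presses": 9,
    "steering_swings": 2,
    "seat_adjustments": 0,
}

def to_feature_vector(vehicle, driver):
    """Concatenate vehicle-type and driving-object-type sub-features
    into one travel-feature vector, in a fixed (sorted) key order so
    every sample is encoded consistently."""
    return ([float(vehicle[k]) for k in sorted(vehicle)]
            + [float(driver[k]) for k in sorted(driver)])

x = to_feature_vector(vehicle_features, driver_features)
```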
In one embodiment, the vehicle collision detection apparatus 1000 is further configured to acquire a vehicle collision detection model to be trained, where the vehicle collision detection model to be trained is determined by the following formula:
[formula image not reproduced in this text]
wherein X_t represents the first sample travel characteristic data and X_{t-1} represents the second sample travel characteristic data; H_t represents the first prediction hidden layer vector and H_{t-1} represents the second prediction hidden layer vector; Y represents the collision status between the first sample vehicle and the second sample vehicle (1 indicating that the first sample vehicle collides with the second sample vehicle, 0 indicating that it does not); W_t represents the first data weight of the first sample travel characteristic data and W_{t-1} represents the second data weight of the second sample travel characteristic data; V_t represents the first vector weight of the first prediction hidden layer vector and V_{t-1} represents the second vector weight of the second prediction hidden layer vector; U_t represents the third vector weight of the first prediction hidden layer vector and U_{t-1} represents the fourth vector weight of the second prediction hidden layer vector; B_t, B_{t-1}, A_t, and A_{t-1} respectively represent parameter vectors; tanh represents the hidden layer activation function, and σ represents the output layer activation function.
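The formula image itself does not survive in this text. Based solely on the variable definitions above and the round structure described in the embodiments, one plausible reconstruction of the per-round recurrence and output layer is the following; the exact pairing of weights with terms is an assumption, not taken from the patent:

```latex
H_t = \tanh\left( W_{t-1}\, X_{t-1} + U_{t-1}\, H_{t-1}^{\,\mathrm{prev}} + B_{t-1} \right) \\
H_{t-1} = \tanh\left( W_t\, X_t + U_t\, H_t + B_t \right) \\
Y = \sigma\left( V_t\, H_t + V_{t-1}\, H_{t-1} + A_t + A_{t-1} \right)
```

Here H_{t-1}^{prev} denotes the second prediction hidden layer vector carried over from the previous round, matching the loop described in the training embodiments.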
Each of the modules in the above vehicle collision detection apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, a processor in the computer device in hardware form, or may be stored in a memory in the computer device in software form, so that the processor can invoke and execute the operations corresponding to each of the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 11. The computer device includes a processor, a memory, an Input/Output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is for storing vehicle collision detection data. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a vehicle collision detection method.
It will be appreciated by those skilled in the art that the structure shown in FIG. 11 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the steps in the above-described method embodiments.
The user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to travel characteristic data, and data used for analysis, storage, and presentation, etc.) involved in the present application are information and data authorized by the user or fully authorized by all parties, and the collection, use, and processing of the relevant data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
Those skilled in the art will appreciate that implementing all or part of the methods described above may be accomplished by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may include the flows of the method embodiments described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processor referred to in the embodiments provided in the present application may be, but is not limited to, a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, or a data processing logic unit based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in the combination of these technical features, they should all be considered within the scope of this specification.
The foregoing embodiments represent only a few implementations of the application and are described in relative detail, but they are not to be construed as limiting the scope of the application. It should be noted that several variations and improvements can be made by those of ordinary skill in the art without departing from the concept of the application, and these all fall within the scope of protection of the application. Accordingly, the scope of protection of the application shall be subject to the appended claims.

Claims (15)

1. A vehicle collision detection method, characterized in that the method comprises:
acquiring first driving characteristic data of a first vehicle, second driving characteristic data of a second vehicle driving behind the first vehicle, a first hidden layer and a second hidden layer;
outputting corresponding first hidden layer vectors at different rounds through the first hidden layer respectively; the first hidden layer vector output by the current round is generated according to the second driving characteristic data and the second hidden layer vector output by the second hidden layer in the previous round;
Outputting corresponding second hidden layer vectors at different rounds through the second hidden layers respectively; the second hidden layer vector output by the current round is generated according to the first driving characteristic data and the first hidden layer vector output by the first hidden layer at the current round;
and determining the probability of collision between the first vehicle and the second vehicle according to the first hidden layer vector output by the first hidden layer in the last round and the second hidden layer vector output by the second hidden layer in the last round.
2. The method according to claim 1, wherein the outputting corresponding first hidden layer vectors at different rounds through the first hidden layer, and outputting corresponding second hidden layer vectors at different rounds through the second hidden layer, respectively, comprises:
from the second round except the first round, determining a first hidden layer vector output in the current round through the first hidden layer according to the second running characteristic data and a second hidden layer vector output in the previous round by the second hidden layer;
determining a second hidden layer vector output by the current round through the second hidden layer according to the first driving characteristic data and the first hidden layer vector output by the current round;
and taking the second hidden layer vector output in the current round as the second hidden layer vector of the previous round for the next round, entering the next round, and returning to the step of determining, through the first hidden layer, the first hidden layer vector output in the current round according to the second driving characteristic data and the second hidden layer vector output by the second hidden layer in the previous round, continuing execution until an execution stop condition is reached, so as to obtain the first hidden layer vectors and second hidden layer vectors output in different rounds.
3. The method according to claim 2, wherein the method further comprises:
acquiring an initial second hidden layer vector, passing through the first hidden layer, and determining a first hidden layer vector output by a first round according to the second driving characteristic data and the initial second hidden layer vector;
and determining, through the second hidden layer, a second hidden layer vector output in the first round according to the first driving characteristic data and the first hidden layer vector output in the first round.
4. The method of claim 1, wherein the determining the probability of the first vehicle colliding with the second vehicle based on the first hidden layer vector output by the first hidden layer at the last round and the second hidden layer vector output by the second hidden layer at the last round comprises:
Determining a first vector weight corresponding to the first hidden layer vector output in the last round, and determining a second vector weight corresponding to the second hidden layer vector output in the last round;
fusing the first vector weight and the first hidden layer vector output in the last round to obtain a fused first hidden layer vector;
fusing the second vector weight and the second hidden layer vector output in the last round to obtain a fused second hidden layer vector;
and superposing the fused first hidden layer vector and the fused second hidden layer vector to obtain a superposed hidden layer vector, and determining the collision probability of the first vehicle and the second vehicle through the superposed hidden layer vector.
5. The method according to claim 1, wherein the method further comprises:
triggering generation of anti-collision early warning information when the probability of collision between the first vehicle and the second vehicle is determined to be greater than or equal to a preset probability threshold; the anti-collision early warning information at least comprises at least one of voice reminding information, anti-collision reminding pictures, suggested driving speeds and suggested driving lanes.
6. The method of claim 5, wherein the anti-collision alert screen comprises a first anti-collision alert screen displayed by the first vehicle and a second anti-collision alert screen displayed by the second vehicle; the first anti-collision reminding picture comprises a first virtual vehicle model corresponding to the first vehicle, a second virtual vehicle model corresponding to the second vehicle and a first perception area surrounding the first virtual vehicle model; wherein when the probability of the first vehicle colliding with the second vehicle is greater than or equal to a preset probability threshold, a first perception sub-region of the first perception region, which faces the direction of the second virtual vehicle model, is highlighted;
The second anti-collision reminding picture comprises a second virtual vehicle model corresponding to the second vehicle, a first virtual vehicle model corresponding to the first vehicle and a second perception area surrounding the second virtual vehicle model; and when the probability of collision of the first vehicle with the second vehicle is greater than or equal to a preset probability threshold, highlighting a second perception subarea in the second perception area towards the direction of the first virtual vehicle model.
7. The method according to any one of claims 1 to 6, characterized in that the vehicle collision detection method is performed by a vehicle collision detection model comprising a first hidden layer and a second hidden layer; the vehicle collision detection model is obtained through a model training step comprising:
acquiring first sample driving characteristic data of a first sample vehicle, second sample driving characteristic data of a second sample vehicle and collision sample labels corresponding to the first sample vehicle and the second sample vehicle, and acquiring a vehicle collision detection model to be trained;
from a second round except the first round, determining a first prediction hidden layer vector output in the current round according to the second sample running characteristic data and a second prediction hidden layer vector output in the previous round;
determining a second prediction hidden layer vector output in the current round according to the first sample travel characteristic data and the first prediction hidden layer vector output in the current round;
determining the prediction probability of collision between the first sample vehicle and the second sample vehicle according to the first prediction hidden layer vector and the second prediction hidden layer vector which are output by the current turn;
according to the difference between the prediction probability and the collision sample label, adjusting model parameters of the vehicle collision detection model;
taking the second prediction hidden layer vector output in the current round as the second prediction hidden layer vector of the previous round for the next round, entering the next round, and returning to the step of determining the first prediction hidden layer vector output in the current round according to the second sample travel characteristic data and the second prediction hidden layer vector output in the previous round, continuing execution until a training stop condition is reached, so as to obtain a trained vehicle collision detection model; the vehicle collision detection model is used to determine a collision probability between at least two vehicles.
8. The method of claim 7, wherein the acquiring first sample travel characteristic data of a first sample vehicle, second sample travel characteristic data of a second sample vehicle, collision sample tags corresponding to the first sample vehicle and the second sample vehicle, comprises:
Acquiring first sample information corresponding to a first sample vehicle; the first sample information includes first sample travel characteristic data of the first sample vehicle and first vehicle collision information;
acquiring second sample information corresponding to a second sample vehicle; the second sample information includes second sample travel characteristic data of the second sample vehicle and second vehicle collision information;
and using the first vehicle collision information or the second vehicle collision information as the collision sample labels corresponding to the first sample vehicle and the second sample vehicle.
9. The method of claim 8, wherein the constructing of the first sample information includes:
acquiring characteristic data acquired for the first sample vehicle, obtaining first sample driving characteristic data, and acquiring a collision state of the first sample vehicle;
determining a vehicle that is traveling behind the first sample vehicle but that is not colliding with the first sample vehicle when the collision state characterizes that the first sample vehicle is not colliding, and regarding the determined vehicle as a second sample vehicle;
generating first vehicle collision information according to the device identification of the first sample vehicle, the device identification of the second sample vehicle and the collision state of the first sample vehicle;
And obtaining first sample information of the first sample vehicle according to the first sample driving characteristic data and the first vehicle collision information.
10. The method according to claim 9, wherein the method further comprises:
and when the collision state represents that the first sample vehicle has collided, determining the vehicle that collided with the first sample vehicle, and taking the determined vehicle as the second sample vehicle.
11. A vehicle collision detection apparatus, characterized in that the apparatus comprises:
the input layer module is used for acquiring first driving characteristic data of a first vehicle and second driving characteristic data of a second vehicle driving behind the first vehicle;
the hidden layer module is used for outputting corresponding first hidden layer vectors at different rounds through the first hidden layer respectively; the first hidden layer vector output by the current round is generated according to the second driving characteristic data and the second hidden layer vector output by the second hidden layer in the previous round; outputting corresponding second hidden layer vectors at different rounds through the second hidden layers respectively; the second hidden layer vector output by the current round is generated according to the first driving characteristic data and the first hidden layer vector output by the first hidden layer at the current round;
And the output layer module is used for determining the collision probability of the first vehicle and the second vehicle according to the first hidden layer vector output by the first hidden layer in the last round and the second hidden layer vector output by the second hidden layer in the last round.
12. The apparatus of claim 11, wherein the hidden layer module is further configured to: from the second round onward, determine, through the first hidden layer, a first hidden layer vector output in the current round according to the second driving characteristic data and the second hidden layer vector output by the second hidden layer in the previous round; determine, through the second hidden layer, a second hidden layer vector output in the current round according to the first driving characteristic data and the first hidden layer vector output in the current round; and take the second hidden layer vector output in the current round as the second hidden layer vector of the previous round for the next round, enter the next round, and return to the step of determining the first hidden layer vector output in the current round according to the second driving characteristic data and the second hidden layer vector output by the second hidden layer in the previous round, continuing execution until an execution stop condition is reached, so as to obtain the first hidden layer vectors and second hidden layer vectors output in different rounds.
13. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 10 when the computer program is executed.
14. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 10.
15. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any one of claims 1 to 10.
CN202211041546.2A 2022-08-29 2022-08-29 Vehicle collision detection method, device, computer device and storage medium Pending CN117035142A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211041546.2A CN117035142A (en) 2022-08-29 2022-08-29 Vehicle collision detection method, device, computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211041546.2A CN117035142A (en) 2022-08-29 2022-08-29 Vehicle collision detection method, device, computer device and storage medium

Publications (1)

Publication Number Publication Date
CN117035142A true CN117035142A (en) 2023-11-10

Family

ID=88637795

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211041546.2A Pending CN117035142A (en) 2022-08-29 2022-08-29 Vehicle collision detection method, device, computer device and storage medium

Country Status (1)

Country Link
CN (1) CN117035142A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117475090A (en) * 2023-12-27 2024-01-30 粤港澳大湾区数字经济研究院(福田) Track generation model, track generation method, track generation device, terminal and medium
CN117475090B (en) * 2023-12-27 2024-06-11 粤港澳大湾区数字经济研究院(福田) Track generation model, track generation method, track generation device, terminal and medium

Similar Documents

Publication Publication Date Title
US11840239B2 (en) Multiple exposure event determination
Yi et al. A machine learning based personalized system for driving state recognition
CN108388834A (en) The object detection mapped using Recognition with Recurrent Neural Network and cascade nature
CN110494863A (en) Determine autonomous vehicle drives free space
WO2020042984A1 (en) Vehicle behavior detection method and apparatus
US11919545B2 (en) Scenario identification for validation and training of machine learning based models for autonomous vehicles
CN109598943A (en) The monitoring method of vehicle violation, apparatus and system
US11756309B2 (en) Contrastive learning for object detection
US12005922B2 (en) Toward simulation of driver behavior in driving automation
Peng et al. Intelligent method for identifying driving risk based on V2V multisource big data
CN112793576B (en) Lane change decision method and system based on rule and machine learning fusion
CN112487954A (en) Pedestrian street crossing behavior prediction method facing plane intersection
CN113287120A (en) Vehicle driving environment abnormity monitoring method and device, electronic equipment and storage medium
Xue et al. A context-aware framework for risky driving behavior evaluation based on trajectory data
CN117035142A (en) Vehicle collision detection method, device, computer device and storage medium
Shinde et al. Smart traffic control system using YOLO
CN113052071B (en) Method and system for rapidly detecting distraction behavior of driver of hazardous chemical substance transport vehicle
CN114048536A (en) Road structure prediction and target detection method based on multitask neural network
Islam et al. Enhancing Longitudinal Velocity Control With Attention Mechanism-Based Deep Deterministic Policy Gradient (DDPG) for Safety and Comfort
Shinmura et al. Estimation of driver's insight for safe passing based on pedestrian attributes
Hamzah et al. Parking Violation Detection on The Roadside of Toll Roads with Intelligent Transportation System Using Faster R-CNN Algorithm
CN112149790A (en) Method and apparatus for checking robustness of artificial neural network
Ma et al. Lane change analysis and prediction using mean impact value method and logistic regression model
Park et al. Recognition Assistant Framework Based on Deep Learning for Autonomous Driving: Restoring Damaged Road Sign Information
CN116989818B (en) Track generation method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination