CN117485372A - Collision processing method of vehicle, training method and device of collision prediction model


Info

Publication number
CN117485372A
Authority
CN
China
Prior art keywords
collision
vehicle
voxels
index
prediction model
Prior art date
Legal status
Pending
Application number
CN202311635574.1A
Other languages
Chinese (zh)
Inventor
张睿文
Current Assignee
Apollo Zhilian Beijing Technology Co Ltd
Original Assignee
Apollo Zhilian Beijing Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Apollo Zhilian Beijing Technology Co Ltd filed Critical Apollo Zhilian Beijing Technology Co Ltd
Priority to CN202311635574.1A
Publication of CN117485372A
Legal status: Pending

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00 Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001 Planning or execution of driving tasks
    • B60W60/0015 Planning or execution of driving tasks specially adapted for safety
    • B60W30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units, or advanced driver assistance systems for ensuring comfort, stability and safety or drive control systems for propelling or retarding the vehicle
    • B60W30/08 Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/09 Taking automatic action to avoid collision, e.g. braking and steering
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0001 Details of the control system
    • B60W2050/0019 Control system elements or transfer functions
    • B60W2050/0028 Mathematical models, e.g. for simulation
    • B60W2050/0031 Mathematical model of the vehicle
    • B60W2050/0043 Signal treatments, identification of variables or parameters, parameter estimation or state estimation

Abstract

The disclosure provides a collision processing method for a vehicle and a training method and device for a collision prediction model, relating to the technical field of artificial intelligence and in particular to deep learning, image processing, automatic driving, and the like. The specific implementation scheme is as follows: acquiring data collected by a sensor on a vehicle; extracting a first feature and a second feature from the collected data, the first feature characterizing parameter information of the vehicle and the second feature characterizing parameter information of voxels in the three-dimensional space where the vehicle is located; calculating a collision index for the voxels according to the first feature and the second feature, the collision index characterizing the collision situation between the vehicle and the object corresponding to a voxel; and controlling the running of the vehicle according to the collision index of the voxels.

Description

Collision processing method of vehicle, training method and device of collision prediction model
Technical Field
The disclosure relates to the technical field of artificial intelligence, in particular to the technical fields of deep learning, image processing, automatic driving and the like.
Background
With the development of artificial intelligence technology, deep learning models have been widely applied, for example in automatic driving scenarios. In autonomous driving, a deep learning model senses the surrounding environment by processing data from sensors such as cameras, radar, and lidar, and also makes the decisions and control outputs that drive the vehicle.
An autonomous vehicle may collide with obstacles while running. Currently, the collision risk of an autonomous vehicle is determined by sensing whether obstacles exist around it, a method that is not precise enough for subsequent autonomous driving control.
Disclosure of Invention
The present disclosure provides a collision processing method and apparatus for a vehicle, a training method and apparatus for a collision prediction model, an electronic device, a storage medium, and a computer program product.
According to an aspect of the present disclosure, there is provided a collision processing method of a vehicle, including: acquiring data collected by a sensor on a vehicle; extracting a first feature and a second feature from the collected data, the first feature characterizing parameter information of the vehicle and the second feature characterizing parameter information of voxels in the three-dimensional space where the vehicle is located; calculating a collision index of the voxels according to the first feature and the second feature, the collision index characterizing the collision situation between the vehicle and the object corresponding to a voxel; and controlling the running of the vehicle according to the collision index of the voxels.
According to another aspect of the present disclosure, there is provided a training method of a collision prediction model, including: obtaining a training sample, the training sample comprising sample data collected by a sensor on a vehicle and labeled collision indexes for voxels in a three-dimensional space, where the three-dimensional space is constructed from the collected sample data, the labeled collision indexes are calculated from the parameter information of the vehicle and the parameter information of the voxels, and both sets of parameter information are determined from the collected sample data; inputting the training sample into a collision prediction model to obtain predicted collision indexes for the voxels; calculating a first loss from the labeled collision indexes and the predicted collision indexes; and adjusting parameters of the collision prediction model according to the first loss until a convergence condition is met, thereby obtaining a trained collision prediction model.
According to another aspect of the present disclosure, there is provided a collision processing method of a vehicle, including: acquiring data acquired by a sensor on a vehicle; inputting the acquired data into a collision prediction model to obtain collision indexes of voxels in a three-dimensional space where the vehicle is located; the collision prediction model is obtained through training according to the training method; and controlling the running of the vehicle according to the collision index of the voxels.
According to another aspect of the present disclosure, there is provided a collision processing method of a vehicle, including: acquiring data acquired by a sensor on a vehicle; inputting the acquired data into a collision prediction model to obtain collision indexes of voxels in a three-dimensional space where the vehicle is located and an occupation state and/or a motion state; the collision prediction model is obtained through training according to the training method; and controlling the running of the vehicle according to the collision index of the voxels and the occupation state and/or the motion state.
According to another aspect of the present disclosure, there is provided a collision processing apparatus of a vehicle, including: a data acquisition module for acquiring data collected by the sensors on the vehicle; a feature extraction module for extracting a first feature and a second feature from the collected data, the first feature characterizing parameter information of the vehicle and the second feature characterizing parameter information of voxels in the three-dimensional space where the vehicle is located; an index calculation module for calculating a collision index of the voxels according to the first feature and the second feature, the collision index characterizing the collision situation between the vehicle and the object corresponding to a voxel; and a running control module for controlling the running of the vehicle according to the collision index of the voxels.
According to another aspect of the present disclosure, there is provided a training apparatus of a collision prediction model, including: a sample acquisition module for acquiring training samples, a training sample comprising sample data collected by a sensor on a vehicle and labeled collision indexes for voxels in a three-dimensional space, where the three-dimensional space is constructed from the collected sample data, the labeled collision indexes are calculated from the parameter information of the vehicle and the parameter information of the voxels, and both sets of parameter information are determined from the collected sample data; a model prediction module for inputting the training sample into a collision prediction model to obtain predicted collision indexes for the voxels; a loss calculation module for calculating a first loss from the labeled collision indexes and the predicted collision indexes; and a parameter adjustment module for adjusting the parameters of the collision prediction model according to the first loss until a convergence condition is met, thereby obtaining a trained collision prediction model.
According to another aspect of the present disclosure, there is provided a collision processing apparatus of a vehicle, including: the first acquisition module is used for acquiring data acquired by the sensors on the vehicle; the first prediction module is used for inputting the acquired data into a collision prediction model to obtain collision indexes of voxels in a three-dimensional space where the vehicle is located; the collision prediction model is obtained through training according to the training device; and the first control module is used for controlling the running of the vehicle according to the collision index of the voxels.
According to another aspect of the present disclosure, there is provided a collision processing apparatus of a vehicle, including: the second acquisition module is used for acquiring data acquired by the sensors on the vehicle; the second prediction module is used for inputting the acquired data into a collision prediction model to obtain collision indexes of voxels in a three-dimensional space where the vehicle is located and an occupied state and/or a motion state; the collision prediction model is obtained through training according to the training device; and the second control module is used for controlling the running of the vehicle according to the collision index of the voxels and the occupation state and/or the motion state.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a collision processing method of the vehicle or a training method of the collision prediction model.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the above-described collision processing method of the vehicle or the above-described training method of the collision prediction model.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the above-described collision processing method of a vehicle or the above-described training method of a collision prediction model.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flow chart of a collision handling method of a vehicle according to an embodiment of the present disclosure;
FIG. 2 is a schematic view of an application scenario of a collision handling method of a vehicle according to an embodiment of the disclosure;
FIG. 3 is a schematic view of a collision handling apparatus of a vehicle according to an embodiment of the disclosure;
FIG. 4 is a flowchart of a method of training a collision prediction model according to an embodiment of the present disclosure;
FIG. 5 is a schematic illustration of a method of training a collision prediction model according to an embodiment of the disclosure;
FIG. 6 is a schematic diagram of a training apparatus for a collision prediction model according to an embodiment of the present disclosure;
FIG. 7 is a flowchart of a collision processing method of a vehicle according to another embodiment of the present disclosure;
FIG. 8 is a schematic view of a collision handling apparatus of a vehicle according to another embodiment of the disclosure;
FIG. 9 is a flowchart of a collision processing method of a vehicle according to yet another embodiment of the present disclosure;
FIG. 10 is a schematic view of a collision processing apparatus of a vehicle according to yet another embodiment of the disclosure;
FIG. 11 is a block diagram of an electronic device for implementing the methods of embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an embodiment of the present disclosure, there is provided an embodiment of a collision processing method of a vehicle, it being noted that the steps shown in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from that herein.
Fig. 1 is a flowchart of a collision processing method of a vehicle according to an embodiment of the present disclosure, which includes, as shown in fig. 1, the following steps S101 to S104:
and step S101, acquiring data acquired by a sensor on the vehicle.
In a specific implementation, the vehicle may be an autonomous vehicle, and the sensors on the vehicle enable it to sense and understand the surrounding environment. These sensors may include cameras, radar, LiDAR, ultrasonic sensors, a Global Positioning System (GPS), a Global Navigation Satellite System (GNSS), an Inertial Measurement Unit (IMU), and so forth. The data collected by the sensors may comprise point cloud data, image data, or both.
Step S102, extracting a first feature and a second feature from the acquired data. The first characteristic is used for representing parameter information of the vehicle, and the second characteristic is used for representing parameter information of voxels in a three-dimensional space where the vehicle is located.
In a specific implementation, the three-dimensional space in which the vehicle is located is constructed from the data acquired by the sensors, and the voxels in this space may also be referred to as grids. The second features extracted from the acquired data generally characterize the parameter information of voxels other than those at the vehicle's own position, in particular voxels in front of the vehicle. The parameter information of the vehicle may include position, speed, acceleration, length, width, height, etc., and the parameter information of a voxel may include position, speed, orientation, size, occupancy state, etc.
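The disclosure does not fix a concrete voxelization scheme. As a point of reference only, a minimal sketch of mapping a point cloud onto a regular voxel grid might look as follows; the grid origin, voxel size, grid shape, and function names are illustrative assumptions, not taken from the disclosure:

```python
import numpy as np

def voxelize(points: np.ndarray, origin: np.ndarray, voxel_size: float,
             grid_shape: tuple) -> np.ndarray:
    """Map an (N, 3) point cloud to a boolean occupancy grid.

    All parameters here are assumed for illustration; the disclosure only
    states that the 3D space around the vehicle is divided into voxels
    (grids) constructed from the sensor data.
    """
    idx = np.floor((points - origin) / voxel_size).astype(int)
    # Keep only points that fall inside the grid bounds.
    inside = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    idx = idx[inside]
    grid = np.zeros(grid_shape, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

# Example: a 100 m x 100 m x 8 m space around the vehicle at 0.5 m resolution.
points = np.random.uniform(-40.0, 40.0, size=(1000, 3))
grid = voxelize(points, origin=np.array([-50.0, -50.0, -2.0]),
                voxel_size=0.5, grid_shape=(200, 200, 16))
```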
In a specific implementation, different feature extraction modes can be adopted for data acquired by different sensors. For example, features may be extracted with LSS, BEVFormer, or the like for data collected by cameras, and with BEVFusion or the like for data collected jointly by cameras and radar. LSS is short for Lift, Splat, Shoot, a method for converting the image features of multi-view cameras into a Bird's Eye View (BEV). BEVFormer extracts the image features captured by surround-view cameras and converts them into BEV space through model learning. BEVFusion performs task-independent learning by unifying multi-modal features in a shared bird's-eye-view representation space.
Step S103, calculating the collision index of the voxels according to the first feature and the second feature. The collision index is used for representing the collision condition between the vehicle and the object corresponding to the voxel.
The collision indexes may include, among other things, a collision distance (Distance to Collision, DTC), a collision time (Time to Collision, TTC), a steering time (Time to Steer, TTS), a braking threat coefficient (Brake Threat Number, BTN), and a steering threat coefficient (Steer Threat Number, STN).
In one implementation example, the collision distance of a voxel may be calculated from a first feature characterizing the position, speed, length, width, and height of the vehicle and a second feature characterizing the position and speed of the voxel. Specifically, if the speed of the voxel is not 0, the voxel corresponds to a dynamic object, and its collision distance is the distance travelled while the vehicle decelerates to match the voxel's speed; if the speed of the voxel is 0, the voxel corresponds to a static object, and its collision distance is the distance between the position of the vehicle and the position of the voxel.
In another example of implementation, the collision time of a voxel may be calculated from a first feature characterizing the position, velocity, acceleration, length, width and height of the vehicle and a second feature characterizing the position and velocity of the voxel.
In other implementation examples, further collision indexes may be derived from those already calculated. For example, the collision time of a voxel may be calculated from its collision distance: if the acceleration of the vehicle is 0, the vehicle is in uniform motion, and the collision time follows from the collision distance and the vehicle speed; if the acceleration is positive and constant, the vehicle is uniformly accelerating, its speed can be derived from the acceleration, and the collision time again follows from the collision distance and that speed.
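Reading the two rules above literally, the per-voxel collision distance and collision time could be sketched as follows; the constant-deceleration model, the 4 m/s² braking rate, and all names are assumptions made for illustration:

```python
import math

def collision_distance(ego_pos, ego_speed, voxel_pos, voxel_speed,
                       decel: float = 4.0) -> float:
    """Per-voxel collision distance (DTC), following the two cases above.

    `decel` (an assumed comfortable braking deceleration in m/s^2) and the
    constant-deceleration kinematics are illustrative; the disclosure only
    says "the distance travelled while decelerating to the voxel's speed".
    """
    if voxel_speed != 0.0:
        # Dynamic object: distance covered while slowing from ego_speed
        # to voxel_speed at constant deceleration: (v0^2 - v1^2) / (2a).
        return max(ego_speed**2 - voxel_speed**2, 0.0) / (2.0 * decel)
    # Static object: straight-line distance between vehicle and voxel.
    return math.dist(ego_pos, voxel_pos)

def collision_time(dtc: float, ego_speed: float, ego_accel: float) -> float:
    """Per-voxel collision time (TTC) derived from DTC, as described above."""
    if ego_accel == 0.0:
        # Uniform motion: t = d / v.
        return dtc / ego_speed if ego_speed > 0 else float("inf")
    # Uniform acceleration: solve d = v0*t + 0.5*a*t^2 for t >= 0.
    disc = ego_speed**2 + 2.0 * ego_accel * dtc
    return (-ego_speed + math.sqrt(disc)) / ego_accel if disc >= 0 else float("inf")
```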
And step S104, controlling the running of the vehicle according to the collision index of the voxels.
In a specific implementation, both the running state of the vehicle, such as acceleration, deceleration, and lane changes, and the running route of the vehicle, such as real-time path planning, may be controlled according to the collision indexes of the voxels.
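As an illustration of this control step, one possible mapping from voxel-level collision times to a coarse driving action is sketched below; the thresholds and the three-way decision are hypothetical and not prescribed by the disclosure:

```python
def plan_action(voxel_ttcs: list, brake_ttc: float = 3.0,
                warn_ttc: float = 6.0) -> str:
    """Pick a coarse driving action from per-voxel collision times (seconds).

    The thresholds are assumed values; the disclosure only states that
    speed, lane changes, and route planning may be controlled according
    to the voxel collision indexes.
    """
    min_ttc = min(voxel_ttcs, default=float("inf"))
    if min_ttc < brake_ttc:
        return "brake"    # imminent risk: decelerate
    if min_ttc < warn_ttc:
        return "replan"   # moderate risk: consider lane change or a new path
    return "keep"         # no nearby risk: keep the current plan
```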
In this embodiment, the parameter information of the vehicle may reflect the self state of the vehicle, the parameter information of the voxel may reflect the situation of the surrounding environment of the vehicle, the collision index of the voxel representing the interaction situation between the vehicle and the surrounding environment may be obtained by combining the self state of the vehicle and the situation of the surrounding environment of the vehicle, the collision processing of the vehicle may be performed more accurately and conveniently on the basis of the collision index of the voxel level, and the robustness of the vehicle driving system may be further improved.
In an alternative embodiment, the number of collision indexes of the voxels is at least two. In this embodiment, the step S104 specifically includes: if all the collision indexes of the voxels are consistent, controlling the running of the vehicle according to the collision indexes of the voxels.
In this embodiment, at least two collision indexes are calculated for a voxel in step S103, and "all the collision indexes are consistent" means that they are mutually consistent, i.e. self-consistent. For example, suppose the collision distance DTC1 and the collision time TTC1 of a voxel are calculated from the first feature and the second feature, and a collision time TTC2 is then calculated from the collision distance DTC1 and the vehicle speed V1; if TTC1 and TTC2 differ, or differ by a large margin, the collision distance DTC1 and the collision time TTC1 of the voxel are not consistent.
In this embodiment, the collision indexes are used to control the running of the vehicle only when all the collision indexes of the voxels are consistent, which further improves the accuracy of vehicle running control. Conversely, inconsistency among the collision indexes of a voxel indicates that the acquired data may be wrong or the extracted features inaccurate; in that case these collision indexes are not used to control the vehicle, and in practical applications all of the voxel's data, such as its parameter information and collision indexes, may be discarded to preserve the accuracy of vehicle running control.
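Following the DTC1/TTC1/TTC2 example above, a minimal self-consistency check might be sketched as follows, under the assumption of uniform motion and with an assumed relative tolerance:

```python
def indexes_consistent(dtc1: float, ttc1: float, ego_speed: float,
                       rel_tol: float = 0.2) -> bool:
    """Check that independently computed collision indexes agree.

    Recomputes TTC from DTC and the vehicle speed (uniform motion assumed)
    and compares it with the directly computed TTC. `rel_tol` is an assumed
    tolerance, not a value from the disclosure.
    """
    if ego_speed <= 0:
        return True  # nothing to cross-check for a stationary vehicle
    ttc2 = dtc1 / ego_speed
    return abs(ttc1 - ttc2) <= rel_tol * max(ttc1, ttc2)

# Inconsistent indexes suggest bad sensor data or inaccurate features;
# in that case the voxel's data can be discarded rather than used for control.
```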
The above embodiment is described below with a specific example. The host vehicle in Fig. 2 corresponds to the above-mentioned vehicle. The sensors on the host vehicle collect data, from which the occupancy grid map shown in Fig. 2 is obtained, and existing perception methods identify the building, the road edge, the green belt, and the obstacle vehicle in front of the host vehicle, namely the car in Fig. 2. For the host vehicle, a feature characterizing its parameter information is extracted from the sensor data. For the first voxel at the upper-left corner of the car, a feature characterizing the parameter information of that voxel is extracted from the sensor data, and the collision distance DTC of the first voxel is calculated as 18 m from the host-vehicle feature and the first-voxel feature. For the second voxel at the lower-left corner of the car, a feature characterizing its parameter information is extracted likewise, and the collision time TTC of the second voxel is calculated as 5 s from the host-vehicle feature and the second-voxel feature. The host vehicle can then control its travel based on the collision distance of the first voxel and/or the collision time of the second voxel.
In an alternative embodiment, the collision handling method further comprises: and determining the occupation state and/or the motion state of the voxels according to the second characteristics. In this embodiment, the step S104 includes: and controlling the running of the vehicle according to the collision index of the voxels and the occupation state and/or the motion state.
The occupancy state of a voxel is either occupied or unoccupied and may be determined from the features characterizing the voxel's occupancy. The motion state of a voxel is either static or moving and may be determined from the features characterizing the voxel's speed: a speed of 0 represents a static state, and a non-zero speed represents motion.
It should be noted that when a voxel is unoccupied, the position of the voxel is empty, and its motion state and collision index need not be considered; that is, the motion state and collision index of a voxel are determined only when the voxel is occupied.
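A hedged sketch of this per-voxel state decision follows; the occupancy-score threshold and the speed epsilon are assumed values:

```python
def voxel_state(occ_score: float, speed: float,
                occ_thresh: float = 0.5, eps: float = 0.1):
    """Derive occupancy and motion state from voxel features.

    `occ_score` is an assumed per-voxel occupancy probability taken from the
    second feature; the disclosure states only that occupancy follows the
    occupancy feature and that zero speed means a static voxel.
    """
    occupied = occ_score >= occ_thresh
    if not occupied:
        return "unoccupied", None  # empty voxel: motion/collision not considered
    motion = "moving" if abs(speed) > eps else "static"
    return "occupied", motion
```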
In this embodiment, not only the collision index of the voxel but also the occupation state and/or the motion state of the voxel can be output, and richer data is provided for the running control of the subsequent vehicle, so that the accuracy of the running control of the vehicle can be further improved.
According to an embodiment of the present disclosure, there is also provided an embodiment of a collision processing apparatus of a vehicle, in which fig. 3 is a schematic diagram of the collision processing apparatus of the vehicle according to the embodiment of the present disclosure, the collision processing apparatus includes a data acquisition module 301, a feature extraction module 302, an index calculation module 303, and a travel control module 304. The data acquisition module 301 is configured to acquire data acquired by sensors on a vehicle. The feature extraction module 302 is configured to extract a first feature and a second feature from the collected data; the first characteristic is used for representing parameter information of the vehicle, and the second characteristic is used for representing parameter information of voxels in a three-dimensional space where the vehicle is located. The index calculation module 303 is configured to calculate a collision index of the voxel according to the first feature and the second feature; the collision index is used for representing the collision condition between the vehicle and the object corresponding to the voxel. The driving control module 304 is configured to control driving of the vehicle according to the collision index of the voxel.
It should be noted that the data acquisition module 301, the feature extraction module 302, the index calculation module 303, and the travel control module 304 correspond to steps S101 to S104 in the above embodiments, and the four modules are the same as examples and application scenarios implemented by the corresponding steps, but are not limited to those disclosed in the above embodiments.
In an optional embodiment, the number of collision indexes of the voxels is at least two, and the driving control module is configured to control the running of the vehicle according to the collision indexes of the voxels if all of these collision indexes are consistent.
In an alternative embodiment, the collision processing apparatus further comprises a state determination module for determining an occupancy state and/or a motion state of the voxel from the second feature; the driving control module is specifically used for controlling the driving of the vehicle according to the collision index of the voxels and the occupation state and/or the motion state.
In an alternative embodiment, the collision indicator comprises at least one of: collision distance, collision time, steering time, braking threat coefficient, steering threat coefficient.
In an alternative embodiment, the parameter information of the vehicle includes at least one of: position, velocity, acceleration, length, width, and height.
In an alternative embodiment, the parameter information of the voxels comprises at least one of: position, speed, orientation, size, occupancy status.
In an alternative embodiment, the acquired data comprises point cloud data and/or image data.
In accordance with embodiments of the present disclosure, embodiments of a method of training a collision prediction model are provided, it being noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, such as a set of computer executable instructions, and, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order other than that illustrated herein.
Fig. 4 is a flowchart of a training method of a collision prediction model according to an embodiment of the present disclosure, which includes, as shown in fig. 4, the following steps S401 to S404:
step S401, obtaining training samples. The training sample comprises sample data acquired by a sensor on a vehicle and collision indexes for labeling voxels in a three-dimensional space, wherein the three-dimensional space is constructed according to the acquired sample data, the labeled collision indexes are calculated according to the parameter information of the vehicle and the parameter information of the voxels, and the parameter information of the vehicle and the parameter information of the voxels are determined according to the acquired sample data.
In a specific implementation, the vehicle may be an autonomous vehicle, and the sensors provided on the vehicle may include cameras, radar, LiDAR, ultrasonic sensors, a global positioning system, a global navigation satellite system, an inertial measurement unit, and the like. The sample data collected by the sensors may comprise point cloud sample data, image sample data, or both.
In a specific implementation, the three-dimensional space in which the vehicle is located is constructed from the sample data acquired by the sensors, and the voxels in this space may also be referred to as grids; here, voxels generally refer to voxels other than those at the vehicle's own position, in particular voxels in front of the vehicle. The parameter information of the vehicle may include position, speed, acceleration, length, width, height, etc., and the parameter information of a voxel may include position, speed, orientation, size, occupancy state, etc.
In a specific implementation, the collision indexes of the voxels may be labeled manually, or automatically with a pre-trained labeling model. The collision indexes may include collision distance, collision time, steering time, braking threat coefficient, steering threat coefficient, and the like. In one specific example, the collision distance of a voxel may be calculated from the position, speed, length, width, and height of the vehicle together with the position and speed of the voxel. In another specific example, the collision time of a voxel may be calculated from the position, speed, acceleration, length, width, and height of the vehicle together with the position and speed of the voxel.
And step S402, inputting the training sample into a collision prediction model to obtain predicted collision indexes for the voxels. The collision prediction model is typically a deep learning model. In a specific example, it comprises a feature extraction layer and a detection head. The feature extraction layer outputs one or more bird's-eye-view or voxelized feature layers; as in the inference case, different feature extraction modes may be adopted for data collected by different sensors, for example LSS or BEVFormer for sample data collected by cameras, and BEVFusion for sample data collected jointly by cameras and radar. The detection head applies 3D convolutions to the feature layers to compute a voxelized output that includes, in particular, a collision index for each voxel.
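The disclosure leaves the network details open beyond a feature extraction layer that outputs BEV or voxelized feature layers and a 3D-convolution detection head. A minimal PyTorch-style sketch under those constraints might be the following; the channel sizes and the two-channel head (a collision logit plus one regressed collision index) are assumptions:

```python
import torch
from torch import nn

class CollisionPredictionModel(nn.Module):
    """Minimal sketch: voxel feature volume in, per-voxel predictions out.

    Stands in for the disclosure's feature-extraction layer plus 3D-conv
    detection head; a real system would build the voxel features with
    LSS / BEVFormer / BEVFusion from camera and radar/lidar data.
    """

    def __init__(self, in_channels: int = 32, hidden: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(          # stand-in feature extractor
            nn.Conv3d(in_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Detection head: per voxel, a collision logit (will a collision
        # occur) and a regressed collision index such as DTC or TTC.
        self.head = nn.Conv3d(hidden, 2, kernel_size=1)

    def forward(self, voxel_feats: torch.Tensor) -> torch.Tensor:
        # voxel_feats: (B, C, X, Y, Z) -> (B, 2, X, Y, Z)
        return self.head(self.backbone(voxel_feats))

model = CollisionPredictionModel()
out = model(torch.randn(1, 32, 200, 200, 16))   # (1, 2, 200, 200, 16)
```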
Step S403, calculating a first loss according to the labeled collision indexes and the predicted collision indexes. In a specific implementation, the first loss generally includes a classification loss and a regression loss. The classification loss corresponds to the deviation in predicting whether a collision occurs between the vehicle and a voxel, while different collision indexes correspond to different regression losses: taking the collision distance as an example, the regression loss corresponds to the deviation of the relative collision distance regressed at the voxel's center point; taking the collision time as an example, the regression loss relates the voxel's center-point position to the collision time. The first loss may be obtained by weighting the classification loss and the regression loss.
And step S404, adjusting parameters of the collision prediction model according to the first loss until convergence conditions are met, and obtaining a trained collision prediction model. In a specific implementation, parameters of the collision prediction model are continuously adjusted according to the calculated first loss until convergence conditions are met, and training is stopped. The convergence condition may be set according to an actual situation, for example, the convergence condition may be set such that the first loss is smaller than a certain threshold.
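Putting steps S402 to S404 together, a hedged training-loop sketch is given below; the optimizer, loss weights, masking scheme, and convergence threshold are all assumptions, and `loader` is a hypothetical data loader yielding voxel features and per-voxel labels:

```python
import torch
from torch import nn

def train(model: nn.Module, loader, epochs: int = 10,
          loss_threshold: float = 1e-3,
          cls_weight: float = 1.0, reg_weight: float = 1.0) -> nn.Module:
    """Adjust the model from the first loss until a convergence condition.

    `loader` is assumed to yield (voxel_feats, cls_label, index_label, mask)
    batches; the loss weighting and the convergence test are illustrative.
    """
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    bce = nn.BCEWithLogitsLoss()
    smooth_l1 = nn.SmoothL1Loss()
    first_loss = torch.tensor(float("inf"))
    for _ in range(epochs):
        for voxel_feats, cls_label, index_label, mask in loader:
            pred = model(voxel_feats)              # (B, 2, X, Y, Z)
            # Classification term: does a collision occur at this voxel?
            cls_loss = bce(pred[:, 0], cls_label)
            # Regression term: collision index, only on labeled voxels.
            reg_loss = (smooth_l1(pred[:, 1][mask], index_label[mask])
                        if mask.any() else torch.zeros((), device=pred.device))
            first_loss = cls_weight * cls_loss + reg_weight * reg_loss
            opt.zero_grad()
            first_loss.backward()
            opt.step()
        if first_loss.item() < loss_threshold:  # assumed convergence condition
            break
    return model
```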
In the embodiment, model training is performed by using sample data acquired by the sensors on the vehicle, and the interaction condition between the vehicle and the surrounding environment is introduced by learning the characteristics related to the collision indexes of the voxels in the three-dimensional space where the vehicle is located, so that the collision prediction model obtained by training can predict the collision indexes of the voxels, and more accurate data is provided for the collision processing of the subsequent vehicle.
In an alternative embodiment, the training sample further comprises labeled occupancy states and/or motion states of the voxels, where the occupancy state of a voxel is occupied or unoccupied and the motion state is static or moving. In this embodiment, step S402 specifically includes: inputting the training sample into the collision prediction model to obtain predicted collision indexes for the voxels together with predicted occupancy states and/or predicted motion states. The training method then further includes: calculating a second loss from the labeled and predicted occupancy states, and/or a third loss from the labeled and predicted motion states. Step S404 specifically includes: adjusting the parameters of the collision prediction model according to the first loss together with the second loss and/or the third loss.
In this embodiment, if the training sample includes a collision index and an occupancy state that label the voxels, a second loss is calculated according to the labeled occupancy state and the occupancy state predicted by the collision prediction model, where the second loss is typically a classification loss, and finally parameters of the collision prediction model are adjusted according to the first loss and the second loss.
If the training sample comprises a collision index and a motion state for labeling the voxels, calculating a third loss according to the labeled motion state and the motion state predicted by the collision prediction model, wherein the third loss is usually a regression loss, and finally adjusting parameters of the collision prediction model according to the first loss and the third loss.
If the training sample comprises a collision index, an occupied state and a motion state for labeling the voxels, calculating a second loss according to the labeled occupied state and the occupied state predicted by the collision prediction model, calculating a third loss according to the labeled motion state and the motion state predicted by the collision prediction model, and finally adjusting parameters of the collision prediction model according to the first loss, the second loss and the third loss. In a specific implementation, the first loss, the second loss, and the third loss may be weighted, and parameters of the collision prediction model may be adjusted according to the weighted results.
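Extending the first loss to the multi-task case described here, the weighted combination might be sketched as follows; the weights w1 to w3 are assumed values, as the disclosure only states that the losses are weighted:

```python
def total_loss(first_loss, second_loss=None, third_loss=None,
               w1: float = 1.0, w2: float = 0.5, w3: float = 0.5):
    """Weighted combination of the collision-index loss (first), occupancy
    loss (second), and motion-state loss (third). The weights w1..w3 are
    assumed values; the disclosure only states that the losses are weighted."""
    loss = w1 * first_loss
    if second_loss is not None:   # occupancy labels present
        loss = loss + w2 * second_loss
    if third_loss is not None:    # motion-state labels present
        loss = loss + w3 * third_loss
    return loss
```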
In the example shown in fig. 5, sample data including image sample data and point cloud sample data is input into a collision prediction model for training, a feature extraction layer in the collision prediction model is used for outputting a voxelized feature layer, the voxelized output is obtained by prediction after passing through a detection head, each voxel corresponds to a vector and includes a predicted occupation state, a motion state and a collision index, a first loss, a second loss and a third loss are calculated according to the occupation state, the motion state and the collision index marked by the voxel in the sample data, and finally parameters of the collision prediction model are adjusted according to the first loss, the second loss and the third loss.
In this embodiment, in the training process of the collision prediction model, not only the features related to the collision indexes of the voxels are learned, but also the features related to the occupation state and/or the motion state of the voxels are learned, so that the collision prediction model obtained by training can predict parameters of multiple dimensions of the voxels, and richer data is provided for the collision processing of the following vehicles.
There is further provided an embodiment of a training apparatus for a collision prediction model according to an embodiment of the present disclosure, wherein fig. 6 is a schematic diagram of the training apparatus for a collision prediction model according to an embodiment of the present disclosure, the training apparatus including a sample acquisition module 601, a model prediction module 602, a loss calculation module 603, and a parameter adjustment module 604. The sample acquisition module 601 is configured to acquire a training sample; the training sample comprises sample data collected by a sensor on a vehicle and labeled collision indexes for voxels in a three-dimensional space, where the three-dimensional space is constructed from the collected sample data, the labeled collision indexes are calculated from the parameter information of the vehicle and the parameter information of the voxels, and both sets of parameter information are determined from the collected sample data. The model prediction module 602 is configured to input the training sample into a collision prediction model to obtain predicted collision indexes for the voxels. The loss calculation module 603 is configured to calculate a first loss from the labeled collision indexes and the predicted collision indexes. The parameter adjustment module 604 is configured to adjust parameters of the collision prediction model according to the first loss until a convergence condition is satisfied, thereby obtaining a trained collision prediction model.
It should be noted that the sample acquiring module 601, the model predicting module 602, the loss calculating module 603, and the parameter adjusting module 604 correspond to the steps S401 to S404 in the above embodiments, and the four modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of the above embodiments.
In an alternative embodiment, the collision indicator comprises at least one of: collision distance, collision time, steering time, braking threat coefficient, steering threat coefficient.
In an alternative embodiment, the parameter information of the vehicle includes at least one of: position, velocity, acceleration, length, width, and height.
In an alternative embodiment, the parameter information of the voxels comprises at least one of: position, speed, orientation, size, occupancy status.
In an alternative embodiment, the acquired sample data comprises point cloud sample data and/or image sample data.
In an alternative embodiment, the training sample further comprises labeled occupancy states and/or motion states of the voxels; the model prediction module is specifically configured to input the training sample into the collision prediction model to obtain predicted collision indexes for the voxels together with predicted occupancy states and/or predicted motion states; the loss calculation module is further configured to calculate a second loss from the labeled and predicted occupancy states and/or a third loss from the labeled and predicted motion states; and the parameter adjustment module is specifically configured to adjust the parameters of the collision prediction model according to the first loss together with the second loss and/or the third loss.
According to an embodiment of the present disclosure, there is provided an embodiment of a collision processing method of a vehicle, it being noted that the steps shown in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from that herein.
Fig. 7 is a flowchart of a collision processing method of a vehicle according to an embodiment of the present disclosure, which includes, as shown in fig. 7, the following steps S701 to S703:
step S701, acquiring data acquired by a sensor on the vehicle.
In a specific implementation, the vehicle may be an autonomous vehicle, and sensors provided on the vehicle may enable the vehicle to sense and understand the surrounding environment, and these sensors may include cameras, radars, lidars, ultrasonic sensors, global positioning systems, global navigation satellite systems, inertial measurement units, and the like. The data collected by the sensor can comprise point cloud data or image data, and can also comprise the point cloud data and the image data.
Step S702, inputting the acquired data into a collision prediction model to obtain collision indexes of voxels in a three-dimensional space where the vehicle is located; the collision prediction model is obtained through training according to the training method of the embodiment.
Step S703, controlling the running of the vehicle according to the collision index of the voxel. In a specific implementation, the running state of the vehicle may be controlled according to the collision index of the voxel, for example, the acceleration, the deceleration, the lane change, etc. of the vehicle may be controlled, and the running route of the vehicle may be controlled, for example, the path of the vehicle may be planned in real time according to the collision index of the voxel, etc.
In this embodiment, the collision prediction model provided in the foregoing embodiment predicts the collision index of the voxel, and predicts the collision risk in combination with the interaction situation between the vehicle and the surrounding environment, so that the collision processing of the vehicle can be performed more accurately and conveniently on the basis of the collision index of the voxel level, and the robustness of the vehicle driving system can be improved.
In an optional embodiment, the step S703 specifically includes: and if all the collision indexes of the voxels are consistent, controlling the running of the vehicle according to the collision indexes of the voxels.
In this embodiment, the collision prediction model outputs at least two collision indexes per voxel, and "all the collision indexes are consistent" means that they are mutually consistent, i.e. self-consistent. For example, suppose the model outputs a collision distance DTC1 and a collision time TTC1 for a voxel, and a collision time TTC2 is then calculated from the collision distance DTC1 and the vehicle speed V1; if TTC1 and TTC2 differ, or differ by a large margin, the collision distance DTC1 is inconsistent with the collision time TTC1. The vehicle speed V1 can be obtained from data collected by the sensors on the vehicle.
In this embodiment, the collision indexes are used to control the running of the vehicle only when all the collision indexes of the voxels are consistent, which further improves the accuracy of vehicle running control. Conversely, inconsistency among the collision indexes of a voxel indicates that the collected data may be wrong or the model's predictions inaccurate; in that case these collision indexes are not used to control the vehicle, and in practical applications all of the voxel's data, such as its parameter information and collision indexes, may be discarded to preserve the accuracy of vehicle running control.
There is further provided, according to an embodiment of the present disclosure, a collision processing apparatus embodiment of a vehicle, in which fig. 8 is a schematic view of a collision processing apparatus of a vehicle according to an embodiment of the present disclosure, the collision processing apparatus including a first acquisition module 801, a first prediction module 802, and a first control module 803. The first acquisition module 801 is configured to acquire data acquired by sensors on a vehicle. The first prediction module 802 is configured to input the collected data into a collision prediction model, so as to obtain a collision index of a voxel in a three-dimensional space where the vehicle is located; the collision prediction model is obtained through training by the training device of the embodiment. The first control module 803 is configured to control the travel of the vehicle according to the collision index of the voxel.
It should be noted that, the first obtaining module 801, the first predicting module 802, and the first controlling module 803 correspond to steps S701 to S703 in the above embodiment, and the three modules are the same as examples and application scenarios implemented by the corresponding steps, but are not limited to those disclosed in the above embodiment.
In an alternative embodiment, the number of collision indexes of the voxels is at least two, and the first control module 803 is specifically configured to control the running of the vehicle according to the collision indexes of the voxels if all the collision indexes of the voxels are consistent.
In an alternative embodiment, the acquired data comprises point cloud data and/or image data.
According to an embodiment of the present disclosure, there is provided an embodiment of a collision processing method of a vehicle, it being noted that the steps shown in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from that herein.
Fig. 9 is a flowchart of a collision processing method of a vehicle according to an embodiment of the present disclosure, which includes, as shown in fig. 9, the following steps S901 to S903:
Step S901, acquiring data acquired by a sensor on a vehicle. The step S901 is the same as the example and application scenario implemented in the step S701, but is not limited to the disclosure of the above embodiment.
Step S902, inputting the acquired data into a collision prediction model to obtain collision indexes, occupation states and/or motion states of voxels in a three-dimensional space where the vehicle is located; the collision prediction model is obtained through training according to the training method of the embodiment.
In a specific implementation, the three-dimensional space is constructed from the acquired data, and voxels in the three-dimensional space may also be referred to as grids, where voxels generally refer to voxels other than the corresponding location of the vehicle, and in particular voxels located in front of the vehicle location.
In the training method of the above embodiment, if the training sample includes a collision index and an occupation state for labeling the voxels, the collision index and the occupation state of the voxels are obtained by inputting the data acquired by the vehicle sensor into the collision prediction model obtained by using the training sample; if the training sample comprises collision indexes and motion states for marking the voxels, inputting data acquired by a vehicle sensor into a collision prediction model obtained by using the training sample, so as to obtain the collision indexes and motion states of the voxels; if the training sample comprises collision indexes, occupation states and motion states for marking the voxels, the collision indexes, occupation states and motion states of the voxels can be obtained by inputting data acquired by the sensors on the vehicle into a collision prediction model obtained by using the training sample.
Step S903, controlling the running of the vehicle according to the collision index of the voxel and the occupancy state and/or the motion state. In a specific implementation, the running state of the vehicle may be controlled according to the collision index and the occupation state and/or the motion state of the voxel, for example, acceleration, deceleration, lane change, etc. of the vehicle may be controlled according to the collision index, the occupation state and the motion state of the voxel, and the running route of the vehicle may be controlled, for example, the path of the vehicle may be planned in real time according to the collision index and the motion state of the voxel, etc.
In this embodiment, the collision prediction model provided in the foregoing embodiment predicts the collision index of the voxel, and predicts the collision risk in combination with the interaction situation between the vehicle and the surrounding environment, so that the collision processing of the vehicle can be performed more accurately and conveniently on the basis of the collision index of the voxel level, and the robustness of the vehicle driving system can be improved. In addition, the collision prediction model provided by the embodiment can also predict the voxel occupation state and/or the motion state, and provides more abundant data for the collision processing of the subsequent vehicles.
In an optional embodiment, the step S903 specifically includes: if all the collision indexes of the voxels are consistent, controlling the running of the vehicle according to the collision indexes of the voxels and the occupancy state and/or the motion state. The collision prediction model outputs at least two collision indexes per voxel, and "all the collision indexes are consistent" means that they are mutually consistent, i.e. self-consistent.
In this embodiment, the collision indexes and the occupancy state and/or motion state are used to control the vehicle only when all the collision indexes of a voxel are consistent, which further improves the accuracy of driving control. Conversely, an inconsistency among a voxel's collision indexes indicates that the collected data may be erroneous or that the model's prediction may be inaccurate; in that case the collision indexes and states are not used for control, and in practice all of the voxel's data, such as its parameter information, collision indexes, occupancy state, and motion state, may be discarded to preserve control accuracy.
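A minimal sketch of this consistency gate follows, assuming the model outputs two collision indexes per voxel (collision distance and collision time) and that consistency means the distance divided by the ego speed agrees with the predicted time within a tolerance; both the relation and the tolerance are illustrative assumptions.

```python
import numpy as np

def consistent_mask(collision_distance: np.ndarray,
                    collision_time: np.ndarray,
                    ego_speed: float,
                    rel_tol: float = 0.2) -> np.ndarray:
    """Per-voxel boolean mask: True where the two indexes self-agree."""
    implied_time = collision_distance / max(ego_speed, 1e-3)
    tolerance = rel_tol * np.maximum(collision_time, 1e-3)
    return np.abs(implied_time - collision_time) <= tolerance

# Voxels where the mask is False would have all their data (parameter
# information, collision indexes, occupancy and motion states) discarded,
# as described in the embodiment above.
```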
According to an embodiment of the present disclosure, there is also provided an embodiment of a collision processing apparatus of a vehicle. Fig. 10 is a schematic diagram of the collision processing apparatus, which includes a second acquisition module 1001, a second prediction module 1002, and a second control module 1003. The second acquisition module 1001 is configured to acquire data collected by the sensors on the vehicle. The second prediction module 1002 is configured to input the collected data into a collision prediction model to obtain the collision indexes of the voxels in the three-dimensional space where the vehicle is located, together with their occupancy states and/or motion states; the collision prediction model is trained by the training device of the foregoing embodiment. The second control module 1003 is configured to control the driving of the vehicle according to the voxels' collision indexes and their occupancy states and/or motion states.
It should be noted that the second acquisition module 1001, the second prediction module 1002, and the second control module 1003 correspond to steps S901 to S903 of the above embodiment; the examples and application scenarios implemented by these three modules are the same as those of the corresponding steps, though they are not limited to what the above embodiment discloses.
In an optional embodiment, the number of collision indexes of the voxels is at least two, and the second control module is specifically configured to control the driving of the vehicle according to the voxels' collision indexes and their occupancy states and/or motion states if all the collision indexes of the voxels are consistent.
In an alternative embodiment, the acquired data comprises point cloud data and/or image data.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of users' personal information comply with the provisions of applicable laws and regulations and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium, a computer program product, and an autonomous vehicle.
Fig. 11 illustrates a schematic block diagram of an example electronic device 1100 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 11, the electronic device 1100 includes a computing unit 1101 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1102 or a computer program loaded from a storage unit 1108 into a Random Access Memory (RAM) 1103. In the RAM 1103, various programs and data required for the operation of the electronic device 1100 can also be stored. The computing unit 1101, ROM 1102, and RAM 1103 are connected to each other by a bus 1104. An input/output (I/O) interface 1105 is also connected to bus 1104.
A number of components in the electronic device 1100 are connected to the I/O interface 1105, including: an input unit 1106 such as a keyboard, a mouse, etc.; an output unit 1107 such as various types of displays, speakers, and the like; a storage unit 1108, such as a magnetic disk, optical disk, etc.; and a communication unit 1109 such as a network card, modem, wireless communication transceiver, or the like. The communication unit 1109 allows the electronic device 1100 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunications networks.
The computing unit 1101 may be a variety of general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 1101 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1101 performs the respective methods and processes described above, for example, the collision processing method of the vehicle or the training method of the collision prediction model. For example, in some embodiments, the collision processing method of the vehicle or the training method of the collision prediction model may be implemented as a computer software program, which is tangibly embodied on a machine-readable medium, such as the storage unit 1108. In some embodiments, some or all of the computer program may be loaded and/or installed onto the electronic device 1100 via the ROM 1102 and/or the communication unit 1109. When the computer program is loaded into the RAM 1103 and executed by the computing unit 1101, one or more steps of the above-described collision processing method of the vehicle or training method of the collision prediction model may be performed. Alternatively, in other embodiments, the computing unit 1101 may be configured to perform the collision processing method of the vehicle or the training method of the collision prediction model by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
According to an embodiment of the present disclosure, there is also provided an autonomous vehicle including the electronic device of the above embodiment, capable of executing the collision processing method of the vehicle or the training method of the collision prediction model of the embodiments of the present disclosure. The autonomous vehicle is equipped with sensors such as a camera, radar, lidar, ultrasonic sensors, a global positioning system, a global navigation satellite system, and an inertial measurement unit.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flows shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (42)

1. A collision processing method of a vehicle, comprising:
acquiring data acquired by a sensor on a vehicle;
extracting a first feature and a second feature from the acquired data; wherein the first feature characterizes parameter information of the vehicle, and the second feature characterizes parameter information of voxels in a three-dimensional space where the vehicle is located;
calculating a collision index of the voxels according to the first feature and the second feature; the collision index is used for representing the collision condition between the vehicle and the object corresponding to the voxel;
and controlling the running of the vehicle according to the collision index of the voxels.
2. The collision processing method according to claim 1, wherein the number of collision indexes of the voxels is at least two, and the controlling the running of the vehicle according to the collision indexes of the voxels comprises:
and if all the collision indexes of the voxels are consistent, controlling the running of the vehicle according to the collision indexes of the voxels.
3. The collision processing method according to claim 1 or 2, further comprising:
determining an occupancy state and/or a motion state of the voxels according to the second feature;
wherein the controlling the running of the vehicle according to the collision index of the voxel comprises:
and controlling the running of the vehicle according to the collision indexes of the voxels and the occupancy state and/or the motion state.
4. The collision processing method according to claim 1, wherein the collision index includes at least one of: collision distance, collision time, steering time, braking threat coefficient, steering threat coefficient.
5. The collision processing method according to claim 1, wherein the parameter information of the vehicle includes at least one of: position, velocity, acceleration, length, width, and height.
6. The collision processing method according to claim 1, wherein the parameter information of the voxels includes at least one of: position, speed, orientation, size, occupancy status.
7. The collision processing method according to claim 1, wherein the acquired data includes point cloud data and/or image data.
8. A method of training a collision prediction model, comprising:
obtaining a training sample; the training sample comprises sample data acquired by a sensor on a vehicle and collision indexes for labeling voxels in a three-dimensional space, wherein the three-dimensional space is constructed according to the acquired sample data, the labeled collision indexes are calculated according to the parameter information of the vehicle and the parameter information of the voxels, and the parameter information of the vehicle and the parameter information of the voxels are determined according to the acquired sample data;
inputting the training sample into a collision prediction model to obtain the predicted collision indexes of the voxels;
calculating a first loss according to the labeled collision indexes and the predicted collision indexes;
and adjusting parameters of the collision prediction model according to the first loss until a convergence condition is met, so as to obtain a trained collision prediction model.
9. The training method of claim 8, wherein the collision indicator comprises at least one of: collision distance, collision time, steering time, braking threat coefficient, steering threat coefficient.
10. The training method of claim 8, wherein the parameter information of the vehicle includes at least one of: position, velocity, acceleration, length, width, and height.
11. The training method of claim 8, wherein the parameter information of the voxels comprises at least one of: position, speed, orientation, size, occupancy status.
12. The training method of claim 8, wherein the collected sample data comprises point cloud sample data and/or image sample data.
13. The training method according to any one of claims 8-12, wherein the training sample further comprises an occupancy state and/or a motion state labeling the voxels;
the inputting the training sample into a collision prediction model to obtain the predicted collision indexes of the voxels comprises: inputting the training sample into the collision prediction model to obtain the predicted collision indexes of the voxels and a predicted occupancy state and/or a predicted motion state;
the training method further comprises: calculating a second loss according to the labeled occupancy state and the predicted occupancy state, and/or calculating a third loss according to the labeled motion state and the predicted motion state;
and the adjusting parameters of the collision prediction model according to the first loss comprises: adjusting the parameters of the collision prediction model according to the first loss and the second loss and/or the third loss.
14. A collision processing method of a vehicle, comprising:
acquiring data acquired by a sensor on a vehicle;
inputting the acquired data into a collision prediction model to obtain collision indexes of voxels in a three-dimensional space where the vehicle is located; wherein the collision prediction model is trained according to the training method of any one of claims 8-12;
and controlling the running of the vehicle according to the collision index of the voxels.
15. The collision processing method according to claim 14, wherein the number of collision indexes of the voxels is at least two, and the controlling the running of the vehicle according to the collision indexes of the voxels comprises:
and if all the collision indexes of the voxels are consistent, controlling the running of the vehicle according to the collision indexes of the voxels.
16. The collision processing method according to claim 14 or 15, wherein the acquired data includes point cloud data and/or image data.
17. A collision processing method of a vehicle, comprising:
acquiring data acquired by a sensor on a vehicle;
inputting the acquired data into a collision prediction model to obtain collision indexes of voxels in a three-dimensional space where the vehicle is located and an occupancy state and/or a motion state; wherein the collision prediction model is trained according to the training method of claim 13;
and controlling the running of the vehicle according to the collision indexes of the voxels and the occupancy state and/or the motion state.
18. The collision processing method according to claim 17, wherein the number of collision indexes of the voxels is at least two, and the controlling the running of the vehicle according to the collision indexes of the voxels and the occupancy state and/or the motion state comprises:
and if all the collision indexes of the voxels are consistent, controlling the running of the vehicle according to the collision indexes of the voxels and the occupancy state and/or the motion state.
19. The collision processing method according to claim 17 or 18, wherein the acquired data comprises point cloud data and/or image data.
20. A collision processing apparatus of a vehicle, comprising:
the data acquisition module is used for acquiring data acquired by the sensors on the vehicle;
the feature extraction module is used for extracting a first feature and a second feature from the acquired data; wherein the first feature characterizes parameter information of the vehicle, and the second feature characterizes parameter information of voxels in a three-dimensional space where the vehicle is located;
an index calculation module, configured to calculate a collision index of the voxel according to the first feature and the second feature; the collision index is used for representing the collision condition between the vehicle and the object corresponding to the voxel;
and the running control module is used for controlling the running of the vehicle according to the collision index of the voxels.
21. The collision processing apparatus according to claim 20, wherein the number of collision indexes of the voxels is at least two, and the running control module is specifically configured to control the running of the vehicle according to the collision indexes of the voxels if all the collision indexes of the voxels are consistent.
22. The collision processing apparatus according to claim 20 or 21, further comprising: a state determining module for determining an occupancy state and/or a motion state of the voxel according to the second feature;
the running control module is specifically configured to control the running of the vehicle according to the collision indexes of the voxels and the occupancy state and/or the motion state.
23. The collision processing apparatus of claim 20, wherein the collision indicator comprises at least one of: collision distance, collision time, steering time, braking threat coefficient, steering threat coefficient.
24. The collision processing apparatus according to claim 20, wherein the parameter information of the vehicle includes at least one of: position, velocity, acceleration, length, width, and height.
25. The collision processing apparatus of claim 20, wherein the parameter information of the voxels comprises at least one of: position, speed, orientation, size, occupancy status.
26. The collision processing apparatus of claim 20, wherein the acquired data comprises point cloud data and/or image data.
27. A training device of a collision prediction model, comprising:
The sample acquisition module is used for acquiring training samples; the training sample comprises sample data acquired by a sensor on a vehicle and collision indexes for labeling voxels in a three-dimensional space, wherein the three-dimensional space is constructed according to the acquired sample data, the labeled collision indexes are calculated according to the parameter information of the vehicle and the parameter information of the voxels, and the parameter information of the vehicle and the parameter information of the voxels are determined according to the acquired sample data;
the model prediction module is used for inputting the training sample into a collision prediction model to obtain the predicted collision indexes of the voxels;
the loss calculation module is used for calculating a first loss according to the labeled collision indexes and the predicted collision indexes;
and the parameter adjustment module is used for adjusting the parameters of the collision prediction model according to the first loss until the convergence condition is met, so as to obtain a trained collision prediction model.
28. The training device of claim 27, wherein the collision indicator comprises at least one of: collision distance, collision time, steering time, braking threat coefficient, steering threat coefficient.
29. The training device of claim 27, wherein the parameter information of the vehicle comprises at least one of: position, velocity, acceleration, length, width, and height.
30. The training device of claim 27, wherein the parameter information of the voxels comprises at least one of: position, speed, orientation, size, occupancy status.
31. The training device of claim 27, wherein the acquired sample data comprises point cloud sample data and/or image sample data.
32. The training device of any of claims 27-31, wherein the training sample further comprises an occupancy state and/or a motion state labeling the voxels;
the model prediction module is specifically configured to input the training sample into the collision prediction model to obtain the predicted collision indexes of the voxels and a predicted occupancy state and/or a predicted motion state;
the loss calculation module is further configured to calculate a second loss according to the labeled occupancy state and the predicted occupancy state, and/or calculate a third loss according to the labeled motion state and the predicted motion state;
The parameter adjustment module is specifically configured to adjust parameters of the collision prediction model according to the first loss, the second loss, and/or the third loss.
33. A collision processing apparatus of a vehicle, comprising:
the first acquisition module is used for acquiring data acquired by the sensors on the vehicle;
the first prediction module is used for inputting the acquired data into a collision prediction model to obtain the collision indexes of voxels in a three-dimensional space where the vehicle is located; wherein the collision prediction model is trained by the training apparatus of any one of claims 27-31;
and the first control module is used for controlling the running of the vehicle according to the collision index of the voxels.
34. The collision processing apparatus according to claim 33, wherein the number of collision indexes of the voxels is at least two, and the first control module is specifically configured to control the running of the vehicle according to the collision indexes of the voxels if all the collision indexes of the voxels are consistent.
35. The collision processing apparatus according to claim 33 or 34, wherein the acquired data comprises point cloud data and/or image data.
36. A collision processing apparatus of a vehicle, comprising:
The second acquisition module is used for acquiring data acquired by the sensors on the vehicle;
the second prediction module is used for inputting the acquired data into a collision prediction model to obtain the collision indexes of voxels in a three-dimensional space where the vehicle is located and an occupancy state and/or a motion state; wherein the collision prediction model is trained by the training apparatus of claim 32;
and the second control module is used for controlling the running of the vehicle according to the collision indexes of the voxels and the occupancy state and/or the motion state.
37. The collision processing apparatus according to claim 36, wherein the number of collision indexes of the voxels is at least two, and the second control module is specifically configured to control the running of the vehicle according to the collision indexes of the voxels and the occupancy state and/or the motion state if all the collision indexes of the voxels are consistent.
38. The collision processing apparatus according to claim 36 or 37, wherein the acquired data includes point cloud data and/or image data.
39. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the collision processing method according to any one of claims 1-7, the training method according to any one of claims 8-13, or the collision processing method according to any one of claims 14-19.
40. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the collision processing method according to any one of claims 1-7, the training method according to any one of claims 8-13, or the collision processing method according to any one of claims 14-19.
41. A computer program product comprising a computer program which, when executed by a processor, implements the collision processing method according to any one of claims 1-7, the training method according to any one of claims 8-13, or the collision processing method according to any one of claims 14-19.
42. An autonomous vehicle comprising the electronic device of claim 39.
Priority application: CN202311635574.1A, filed 2023-11-30 (priority date 2023-11-30), "Collision processing method of vehicle, training method and device of collision prediction model", status pending.
Publication: CN117485372A, published 2024-02-02.
Family ID: 89680093; family member: CN117485372A (CN, pending).


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination