CN117350077A - Vehicle function verification method, device, equipment and medium - Google Patents

Vehicle function verification method, device, equipment and medium

Info

Publication number
CN117350077A
Authority
CN
China
Prior art keywords
vehicle
state
state information
determining
performance index
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311469090.4A
Other languages
Chinese (zh)
Inventor
孔艺婷
徐佳晙
韩佳良
郑鸣斐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Horizon Shanghai Artificial Intelligence Technology Co Ltd
Original Assignee
Horizon Shanghai Artificial Intelligence Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Horizon Shanghai Artificial Intelligence Technology Co Ltd filed Critical Horizon Shanghai Artificial Intelligence Technology Co Ltd
Priority to CN202311469090.4A priority Critical patent/CN117350077A/en
Publication of CN117350077A publication Critical patent/CN117350077A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiments of the disclosure disclose a vehicle function verification method, device, equipment and medium. The method comprises the following steps: determining first own vehicle state information of an own vehicle, and first state information of target objects around the own vehicle relative to the own vehicle, based on simulation environment data; determining at least one state offset corresponding to the first state information based on a preset perception deviation simulation rule; determining, based on the first state information and each state offset, second state information of the target object relative to the own vehicle corresponding to each state offset; and verifying a preset vehicle function of the own vehicle based on the first own vehicle state information and the second state information to obtain a verification result. The embodiments of the disclosure help provide effective perception deviation threshold reference data for the development and maintenance of a perception algorithm, so that by optimizing the perception algorithm its perception deviation can be kept within the range allowed by the preset vehicle function, ensuring that the preset vehicle function meets the corresponding functional safety requirements.

Description

Vehicle function verification method, device, equipment and medium
Technical Field
The disclosure relates to the technical field of auxiliary driving, in particular to a vehicle function verification method, device, equipment and medium.
Background
In the field of assisted driving, a vehicle generally provides functions such as automatic emergency braking (Autonomous Emergency Braking, AEB), adaptive cruise control (Adaptive Cruise Control, ACC) and automatic assisted navigation driving (Navigate on Autopilot, NOA) as services to the user. The implementation of these functions generally depends on the result of a sensing module on the vehicle perceiving the obstacles around the vehicle; the obstacles may include, for example, other vehicles around the own vehicle, pedestrians, cyclists, and the like. The sensing result is obtained through a corresponding sensing algorithm, which may be based on at least one kind of sensor data among vision, laser radar, ultrasonic radar, millimeter-wave radar, and the like. In application, the sensing algorithm is affected by factors such as hardware and environmental changes and is prone to sensing deviation, so that the vehicle functions may fail to meet the corresponding functional safety requirements.
Disclosure of Invention
To solve the technical problem that a vehicle function cannot meet its functional safety requirements due to perception deviation, the embodiments of the disclosure provide a vehicle function verification method, device, equipment and medium, in which the vehicle function is verified through perception deviation simulation so as to effectively determine the tolerance of the vehicle function to perception deviation.
In a first aspect of the present disclosure, there is provided a method for verifying a vehicle function, including: determining first own vehicle state information of an own vehicle and first state information of target objects around the own vehicle relative to the own vehicle based on simulation environment data; determining at least one state offset corresponding to the first state information based on a preset perception deviation simulation rule; determining second state information of the target object corresponding to each state offset relative to the own vehicle based on the first state information and each state offset; and verifying the preset vehicle function of the own vehicle based on the first own vehicle state information and each second state information to obtain a verification result.
In a second aspect of the present disclosure, there is provided a vehicle function verification apparatus including: the first processing module is used for determining first own vehicle state information of the own vehicle and first state information of target objects around the own vehicle relative to the own vehicle based on simulation environment data; the second processing module is used for determining at least one state offset corresponding to the first state information based on a preset perception deviation simulation rule; the third processing module is used for determining second state information of the target object corresponding to each state offset relative to the own vehicle based on the first state information and each state offset; and the fourth processing module is used for verifying the preset vehicle function of the own vehicle based on the first own vehicle state information and the second state information to obtain a verification result.
In a third aspect of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing the vehicle function verification method according to any one of the above embodiments of the present disclosure.
In a fourth aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing the processor-executable instructions; the processor is configured to read the executable instructions from the memory and execute the instructions to implement the method for verifying a vehicle function according to any one of the embodiments of the present disclosure.
In a fifth aspect of the present disclosure, a computer program product is provided, which when executed by a processor, performs a method of verifying a vehicle function provided by any of the above embodiments of the present disclosure.
According to the vehicle function verification method, device, equipment and medium provided by the embodiments of the disclosure, a simulation environment is established so that accurate own vehicle state information, and first state information of target objects around the own vehicle relative to the own vehicle, can be determined based on simulation environment data. Then, based on a preset perception deviation simulation rule, the various performance index values (such as perception errors in various real scenes) that a perception algorithm would exhibit in a real scene are simulated as state offsets corresponding to the first state information. The state offsets are applied to the first state information to obtain second state information, so that the second state information contains the perception deviation that the perception algorithm would have in a real scene. The second state information is then used in the processing of the preset vehicle function, the preset vehicle function is verified, and the extent to which the preset vehicle function tolerates each perception deviation is determined. On this basis, the perception deviation threshold of the perception algorithm under which the preset vehicle function still meets the functional safety requirements can be obtained, providing effective perception deviation threshold reference data for the development and maintenance of the perception algorithm, so that perception algorithm developers can optimize the perception algorithm to keep its perception deviation within the range allowed by the preset vehicle function, ensuring that the preset vehicle function meets the corresponding functional safety requirements.
Drawings
FIG. 1 is an exemplary application scenario of a vehicle function verification method provided by the present disclosure;
FIG. 2 is a flow chart of a method of verifying vehicle functionality provided by an exemplary embodiment of the present disclosure;
FIG. 3 is a flow chart diagram of a method of verifying vehicle functionality provided by another exemplary embodiment of the present disclosure;
FIG. 4 is a flow chart of a method of verifying vehicle functionality provided by yet another exemplary embodiment of the present disclosure;
FIG. 5 is a flow chart of step 202 provided by an exemplary embodiment of the present disclosure;
FIG. 6 is a flow chart of step 202 provided by another exemplary embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a two-stage perceptual deviation simulation provided by an exemplary embodiment of the present disclosure;
FIG. 8 is a flow chart diagram of a method of verifying vehicle functionality provided by yet another exemplary embodiment of the present disclosure;
fig. 9 is a schematic structural view of a verification device for vehicle functions provided in an exemplary embodiment of the present disclosure;
fig. 10 is a schematic structural view of a verification device of a vehicle function provided in another exemplary embodiment of the present disclosure;
fig. 11 is a block diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
For the purpose of illustrating the present disclosure, exemplary embodiments of the present disclosure are described in detail below with reference to the drawings. Obviously, the described embodiments are only some, not all, embodiments of the present disclosure, and it should be understood that the present disclosure is not limited to the exemplary embodiments.
It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless it is specifically stated otherwise.
Summary of the disclosure
In implementing the present disclosure, the inventors found that in the field of assisted driving, a vehicle generally provides functions such as automatic emergency braking (Autonomous Emergency Braking, AEB), adaptive cruise control (Adaptive Cruise Control, ACC) and automatic assisted navigation driving (Navigate on Autopilot, NOA) as services to the user. The implementation of these functions generally depends on the result of a sensing module on the vehicle perceiving the obstacles around the vehicle; the obstacles may include, for example, other vehicles around the own vehicle, pedestrians, cyclists, and the like. The sensing result is obtained through a corresponding sensing algorithm, which may be based on at least one kind of sensor data among vision, laser radar, ultrasonic radar, millimeter-wave radar, and the like. In application, the sensing algorithm is affected by factors such as hardware and environmental changes and is prone to sensing deviation, so that the vehicle functions may fail to meet the corresponding functional safety requirements.
Exemplary overview
Fig. 1 is an exemplary application scenario of a vehicle function verification method provided by the present disclosure.
As shown in fig. 1, the vehicle function verification method of the present disclosure may be implemented based on a simulation system comprising environment simulation, perception deviation simulation and preset vehicle function verification. The environment simulation determines an accurate perception result based on simulation environment data, where the perception result comprises first own vehicle state information of the own vehicle and first state information of target objects around the own vehicle relative to the own vehicle. The perception deviation simulation applies perception deviation to the perception result, specifically to the first state information of the target object: at least one state offset corresponding to the first state information is determined based on a preset perception deviation simulation rule, and second state information of the target object relative to the own vehicle, corresponding to each state offset, is determined based on the first state information and each state offset; the second state information is thus a perception result of the target object that contains perception deviation. The preset vehicle function verification verifies the preset vehicle function of the own vehicle based on the first own vehicle state information and the second state information containing perception deviation, and obtains a verification result, so as to check whether the preset vehicle function can still meet the corresponding functional safety requirements under various perception deviations.
When the preset vehicle function is verified, the response of the preset vehicle function to the perception result containing perception deviation and the environment simulation form a closed loop, and whether the response of the preset vehicle function succeeds is determined, so that it can be determined whether the preset vehicle function can meet the corresponding functional safety requirements under the perception deviation. On this basis, the perception deviation threshold of the perception algorithm under which the preset vehicle function still meets the functional safety requirements can be obtained, providing effective perception deviation threshold reference data for the development and maintenance of the perception algorithm, so that perception algorithm developers can optimize the perception algorithm, keep its perception deviation threshold within the range allowed by the preset vehicle function, and ensure that the preset vehicle function meets the corresponding functional safety requirements.
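The closed-loop arrangement described above (environment simulation, biased perception, function response fed back into the environment) can be sketched in Python. The environment interface, function names and step counts below are illustrative assumptions, not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class State:
    """Kinematic state: longitudinal/lateral position (m) and velocity (m/s)."""
    x: float
    y: float
    vx: float
    vy: float

def run_closed_loop(env, perception_bias, vehicle_function, n_steps=100, dt=0.1):
    """One verification episode: the function response to the biased perception
    result is fed back into the environment simulation, closing the loop."""
    for _ in range(n_steps):
        ego, target = env.ground_truth()         # accurate simulated perception
        biased_target = perception_bias(target)  # inject a state offset
        action = vehicle_function(ego, biased_target)
        env.step(action, dt)                     # loop closes here
    return env.outcome()                         # e.g. collision / no-collision
```

Running one episode per state offset, with `perception_bias` applying that offset, yields the per-offset verification results discussed below.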
Exemplary method
Fig. 2 is a flow chart of a vehicle function verification method provided by an exemplary embodiment of the present disclosure. This embodiment may be applied to electronic devices such as a terminal or a server. As shown in fig. 2, the method includes the following steps:
in step 201, first own vehicle state information of an own vehicle and first state information of a target object around the own vehicle relative to the own vehicle are determined based on simulation environment data.
The simulation environment data is pre-built simulation data comprising data of the own vehicle and its surrounding environment. It may include behavior information of the own vehicle, behavior information of other road participants around the own vehicle (which may be called objects or target objects), environment information, road information, and the like. The behavior information of the own vehicle may include one or more of its position, orientation (heading angle), speed, acceleration, angular velocity, etc. in a preset coordinate system (or first coordinate system). The preset coordinate system may be, for example, the world coordinate system or a reference coordinate system rigidly attached to the world coordinate system. The behavior information of the other road participants may include one or more of their position, orientation, speed, acceleration, angular velocity, etc. in the preset coordinate system. The environment information may include time information (such as day or night) and weather information (such as sunny, rainy or snowy weather). The road information may include the road type (such as expressway or urban road), lane line information, road conditions (such as flat road, uphill or downhill), and the like. The simulation environment data is used to simulate the running of the own vehicle in a certain environment and to provide a simulated perception result for the verification of the preset vehicle function of the own vehicle; the perception result comprises the first own vehicle state information of the own vehicle and the first state information of target objects around the own vehicle relative to the own vehicle. The first own vehicle state information may include the position, orientation, speed, acceleration, angular velocity, etc. of the own vehicle.
The target object may be a dynamic object around the own vehicle, such as another vehicle, a pedestrian or a rider around the own vehicle. The first state information of the target object relative to the own vehicle may include the position, orientation, speed, acceleration, angular velocity, etc. of the target object relative to the own vehicle. Compared with a perception result obtained by a perception algorithm in a real environment, the perception result obtained based on the simulation environment data is determined from the states of the own vehicle and surrounding objects in the simulation environment rather than from an actual perception algorithm, is not affected by factors such as environmental changes, and is therefore an accurate perception result free of perception deviation.
In some alternative embodiments, the number of target objects may be one or more. For a plurality of target objects, first state information corresponding to each target object may be determined.
Step 202, determining at least one state offset corresponding to the first state information based on a preset perception deviation simulation rule.
The preset perception deviation simulation rule may be any implementable rule, such as a random rule or a regular rule. A random rule means that the at least one state offset is determined randomly. A regular rule may include determining the at least one state offset at equal intervals, arithmetic-progression intervals, and the like. The specific rule is not limited.
In some alternative embodiments, the first state information includes one or more of a position, a velocity, an acceleration, an orientation angle, an angular velocity, and the like of the target object. The position, the speed and the acceleration can be further divided into a transverse state and a longitudinal state. For example, the positions are divided into a lateral position and a longitudinal position. The speed is divided into a transverse speed and a longitudinal speed. Acceleration is divided into lateral acceleration and longitudinal acceleration. Each state offset corresponding to the first state information may include an offset corresponding to each state. Different states may correspond to offsets of different sizes or the same size, and are not particularly limited.
In some alternative embodiments, different state offsets may also be determined for different target objects.
For example, consider the offset of the longitudinal position. If the target object is another vehicle, the offsets corresponding to the longitudinal position of the target object may be a plurality of offsets obtained by dividing the range [x1, x2] with division precision Δx1. If the target object is a pedestrian or a rider, the offsets corresponding to the longitudinal position of the target object may be a plurality of offsets obtained by dividing the range [x3, x4] with division precision Δx2. Here x1 may be equal to or different from x3, x2 may be equal to or different from x4, and Δx1 and Δx2 may be the same or different. For example, x1 = -0.25 meter, x2 = 0.25 meter, x3 = -0.15 meter, x4 = 0.15 meter, Δx1 = Δx2 = 0.01 meter. Taking [x1, x2] as an example, the offsets obtained by division may include -0.25 meter, -0.24 meter, -0.23 meter, ..., -0.01 meter, 0, 0.01 meter, ..., 0.25 meter. For another example, x1 = -0.5 meter, x2 = 0.5 meter, x3 = -0.5 meter, x4 = 0.5 meter, Δx1 = 0.02 meter, Δx2 = 0.01 meter. The offsets of the other states are determined on the same principle as those of the longitudinal position. For example, for other vehicles, the offsets of the lateral position may be obtained by dividing the range [-0.5 meter, 0.5 meter] with a division precision of 0.1 meter; for pedestrians and riders, the range [-2 meters, 2 meters] may be divided with a division precision of 0.2 meter.
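The range-splitting just described can be expressed as a small helper. The ranges and precisions below simply restate the example values from the text; this is an illustrative sketch, not the patent's implementation:

```python
def split_offsets(lo, hi, step):
    """Enumerate state offsets in [lo, hi] at the given division precision.
    Counting in integers avoids floating-point drift across the range."""
    n = int(round((hi - lo) / step))
    return [round(lo + k * step, 6) for k in range(n + 1)]

# Longitudinal-position offsets for other vehicles: [-0.25 m, 0.25 m], 0.01 m steps
vehicle_offsets = split_offsets(-0.25, 0.25, 0.01)
# Pedestrians / riders: [-0.15 m, 0.15 m], 0.01 m steps
pedestrian_offsets = split_offsets(-0.15, 0.15, 0.01)
```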
In step 203, based on the first state information and each state offset, second state information of the target object corresponding to each state offset with respect to the own vehicle is determined.
After each state offset corresponding to the first state information is obtained, each state offset is applied to the first state information, so as to simulate the perceived deviation through each state offset, and second state information containing the perceived deviation, corresponding to the first state information under each state offset, is obtained.
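Applying each state offset to the first state information, as described above, amounts to a per-field addition. The field names below are illustrative assumptions:

```python
def apply_offset(first_state, offset):
    """Second state information = accurate first state + simulated perception
    deviation. Fields absent from the offset are left unchanged."""
    return {k: v + offset.get(k, 0.0) for k, v in first_state.items()}

first = {"lon_pos": 30.0, "lat_pos": 0.2, "lon_vel": 8.0}
second = apply_offset(first, {"lon_pos": -0.25})  # target appears 0.25 m nearer
```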
Step 204, based on the first vehicle state information and the second state information, the preset vehicle function of the vehicle is verified, and a verification result is obtained.
The preset vehicle function may be at least one of the AEB function, the ACC function, the NOA function and other functions that depend on the sensing result of a sensing algorithm. Verifying the preset vehicle function means taking the first own vehicle state information and each piece of second state information containing perception deviation as a perception result; a function algorithm (or function application program) corresponding to the preset vehicle function then performs the corresponding processing based on this perception result, to determine whether the preset vehicle function is triggered and whether the own vehicle reaches the expected result after the preset vehicle function is triggered. In this way it is determined whether the preset vehicle function can reach the expected result under the state offset of each piece of second state information, thereby verifying which state offsets the preset vehicle function can tolerate, and this tolerance for each state offset can be taken as the verification result. Alternatively, a performance index threshold of the perception algorithm that the preset vehicle function can tolerate may be determined from the tolerance for each state offset and taken as the verification result. The specific content of the verification result is not limited.
In some optional embodiments, the corresponding processing performed by the function algorithm of the preset vehicle function based on the perception result may include: determining, based on the first own vehicle state information and the second state information of the target object, whether the relative relation between the target object and the own vehicle satisfies the condition for triggering the preset vehicle function; and, if the condition is satisfied, triggering the preset vehicle function and computing control information (or an action) for the own vehicle from the first own vehicle state information and the second state information of the target object, such as a lateral and/or longitudinal control instruction used to control the running of the own vehicle. The expected result is a result in which the preset vehicle function maintains a safe state between the own vehicle and the target object; it may include, for example, no collision occurring, or the minimum distance between the own vehicle and the target object being greater than a minimum safe-distance threshold. Whether the preset vehicle function reaches the expected result under the state offset of each piece of second state information can be judged from the state change of the own vehicle after the preset vehicle function is triggered, in combination with the simulation environment data.
For example, for the AEB function, the corresponding processing based on the perception result may include: for the second state information of any state offset, judging whether its relative relation to the first own vehicle state information satisfies the condition for triggering the AEB function; if the AEB function is triggered under that second state information, the AEB function algorithm determines the action of the own vehicle (such as deceleration), and the state change of the own vehicle under that action can then be determined based on a motion model. From the state change of the own vehicle it can be determined whether the own vehicle reaches the expected result after the AEB function is triggered; specifically, in combination with the simulation environment data, it can be determined whether the own vehicle collides with the target object. If braking succeeds and no collision with the target object occurs, it can be determined that the AEB function reaches the expected result under this state offset. The determination of the specific expected result may be set according to the specific requirements of the AEB function; this is only an exemplary illustration. In this way, the environment simulation, the perception deviation simulation and the preset vehicle function verification form an in-the-loop system, on the basis of which effective verification of the preset vehicle function can be realized. For different vehicle functions, the way the perception result is processed and the way of judging whether the own vehicle reaches the expected result after the function is triggered can be set according to the specific vehicle function, and are not described in detail here.
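A minimal sketch of the AEB check described above, assuming a time-to-collision trigger, constant braking deceleration and a point-mass motion model — all illustrative choices (including the 6 m/s² and 1.5 s values) that the patent does not specify:

```python
def verify_aeb(rel_pos, rel_speed, decel=6.0, ttc_trigger=1.5, dt=0.05):
    """Return True if AEB, fed this (possibly offset) relative state, keeps the
    own vehicle from colliding with the target. rel_pos is the gap in metres;
    rel_speed < 0 means the target is approaching the own vehicle."""
    gap, closing = rel_pos, -rel_speed           # closing speed > 0: approaching
    braking = False
    while gap > 0.0 and closing > 0.0:
        if gap / closing < ttc_trigger:          # assumed trigger condition (TTC)
            braking = True
        if braking:
            closing = max(0.0, closing - decel * dt)
        gap -= closing * dt
    return gap > 0.0                             # expected result: no collision
```

Sweeping `rel_pos` and `rel_speed` over the second state information produced by each offset then yields, per offset, whether the function still reaches the expected result.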
According to the vehicle function verification method provided by this embodiment, a simulation environment is established so that accurate own vehicle state information, and first state information of target objects around the own vehicle relative to the own vehicle, can be determined based on simulation environment data. Then, based on a preset perception deviation simulation rule, the various performance index values (such as errors in various real scenes) of a perception algorithm in a real scene are simulated as state offsets corresponding to the first state information. The state offsets are applied to the first state information to obtain second state information, so that the second state information contains the perception deviation of the perception algorithm in a real scene. The second state information is used in the processing of the preset vehicle function, the preset vehicle function is verified, and the tolerance of the preset vehicle function for each perception deviation is determined. On this basis, the allowable perception deviation threshold of the preset vehicle function can be obtained, and effective perception deviation threshold reference data can be provided for the development and maintenance of the perception algorithm, so that perception algorithm developers can optimize the perception algorithm, keep its perception deviation threshold within the range allowed by the preset vehicle function, and ensure that the preset vehicle function meets the corresponding functional safety requirements.
Fig. 3 is a flow chart illustrating a method of verifying a vehicle function according to another exemplary embodiment of the present disclosure.
In some alternative embodiments, as shown in fig. 3, determining the first vehicle state information of the vehicle and the first state information of the target object around the vehicle relative to the vehicle in step 201 based on the simulation environment data includes:
in step 2011, based on the simulation environment data, the first behavior information of the own vehicle in the first coordinate system and the second behavior information of the target object in the first coordinate system are determined.
The first behavior information may include, among other things, a position (e.g., an initial position), an orientation angle, a speed, an acceleration, etc. of the simulated vehicle in the first coordinate system. Similarly, the second behavior information may include a position, an orientation angle, a velocity, an acceleration, etc. of the target object in the first coordinate system.
Step 2012, determining first host vehicle status information of the host vehicle in a first coordinate system based on the first behavior information.
The first vehicle state information of the vehicle under the first coordinate system may include state information of the vehicle at the current moment in the process of simulating the running of the vehicle. For example, the simulated vehicle and the target object run according to the simulated initial state from the initial time, and the first vehicle state information of the real-time vehicle is determined based on the simulated environment data as the vehicle state sensing result at the current time in the vehicle running process. Specifically, the first vehicle state information of the vehicle may be determined based on the first behavior information such as the position, the speed, the acceleration, the angle of orientation, and the like of the vehicle included in the simulation environment data, and the motion model.
In some alternative embodiments, based on the position, orientation angle, speed, acceleration, and the like of the own vehicle at the initial moment in the first coordinate system, together with a motion model, the state information of the own vehicle at any moment during travel (taken as the current moment) may be determined as the first own vehicle state information. The motion model may include a uniform-velocity motion model, a uniform-acceleration motion model, and the like, and may be set according to actual requirements.
In some alternative embodiments, based on the position, orientation angle, speed, acceleration, and the like of the own vehicle at any given moment in the first coordinate system, together with the motion model, the state information of the own vehicle at any later moment may also be determined as the first own vehicle state information.
In some alternative embodiments, based on the position, orientation angle, speed, acceleration, and the like of the own vehicle at the previous moment in the first coordinate system, together with the motion model, the state information of the own vehicle at the current moment may be determined as the first own vehicle state information.
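As a concrete illustration of the motion-model propagation described above, the following is a minimal sketch assuming a uniform-acceleration model with a constant heading; the `VehicleState` type and its field names are hypothetical, not from the disclosure:

```python
import math
from dataclasses import dataclass

@dataclass
class VehicleState:
    x: float        # position along the first coordinate system's x-axis (m)
    y: float        # position along the y-axis (m)
    heading: float  # orientation angle (rad)
    v: float        # speed along the heading (m/s)
    a: float        # acceleration along the heading (m/s^2)

def propagate(state: VehicleState, dt: float) -> VehicleState:
    """Advance the state by dt seconds under a uniform-acceleration model."""
    # Distance travelled: s = v*dt + 0.5*a*dt^2 (heading assumed constant here)
    s = state.v * dt + 0.5 * state.a * dt * dt
    return VehicleState(
        x=state.x + s * math.cos(state.heading),
        y=state.y + s * math.sin(state.heading),
        heading=state.heading,
        v=state.v + state.a * dt,
        a=state.a,
    )
```

Applying `propagate` repeatedly from the initial moment yields the state at any later moment, matching either of the alternatives above (propagating from the initial moment, or step by step from the previous moment).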
In step 2013, based on the second behavior information, original state information of the target object in the first coordinate system is determined.
The original state information is similar to the first own vehicle state information and is the state information of the target object at the moment corresponding to the first own vehicle state information.
In some optional embodiments, the state information of the target object at any time may be determined based on the second behavior information such as the position, the speed, the orientation angle, the acceleration and the like of the target object at the initial time in the first coordinate system and the motion model, so that the state information of the target object at the time corresponding to the first vehicle state information may be obtained.
In some optional embodiments, the second behavior information may also include the original state information of the target object at each moment of the own vehicle's travel, and the state information of the target object at the moment corresponding to the first own vehicle state information may be obtained from the second behavior information as the original state information of the target object.
In step 2014, a vehicle coordinate system is determined based on the first vehicle status information.
The own vehicle coordinate system may be determined according to the pose of the own vehicle in the first coordinate system. Accordingly, the pose of the own vehicle may be determined based on the first own vehicle state information, and the own vehicle coordinate system corresponding to the first own vehicle state information may then be determined from that pose, with the position of the own vehicle as the origin of the own vehicle coordinate system, the heading direction of the own vehicle as the longitudinal axis, and the direction perpendicular to the heading as the transverse axis.
In step 2015, the original state information is converted into a vehicle coordinate system, and the state information of the target object in the vehicle coordinate system is obtained as the first state information.
The original state information may be converted into the own vehicle coordinate system according to the conversion relationship between the own vehicle coordinate system and the first coordinate system, yielding the state information of the target object in the own vehicle coordinate system. This state information describes the target object relative to the own vehicle, and may therefore be used as the first state information of the target object.
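The coordinate conversion in step 2015 can be sketched as a translation followed by a rotation; a minimal sketch assuming a planar pose (position plus heading), with the function and parameter names chosen for illustration:

```python
import math

def world_to_ego(obj_x: float, obj_y: float, obj_heading: float,
                 ego_x: float, ego_y: float, ego_heading: float):
    """Convert a target-object pose from the first (world) coordinate system
    into the own-vehicle coordinate system: origin at the ego position,
    longitudinal axis along the ego heading."""
    dx, dy = obj_x - ego_x, obj_y - ego_y
    cos_h, sin_h = math.cos(ego_heading), math.sin(ego_heading)
    # Rotate the relative position by -ego_heading
    lon = dx * cos_h + dy * sin_h     # longitudinal offset (ahead of the ego)
    lat = -dx * sin_h + dy * cos_h    # lateral offset
    rel_heading = obj_heading - ego_heading
    return lon, lat, rel_heading
```

Velocities and accelerations would be converted with the same rotation (without the translation), since they are direction vectors rather than positions.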
According to this embodiment, the behavior information of the own vehicle and the target object can be obtained from the simulation environment data, and the state information of both at any moment, namely the first own vehicle state information and the first state information of the target object, can then be obtained based on that behavior information, yielding accurate sensing results and providing accurate and effective sensing-result data for perception deviation simulation.
Fig. 4 is a flowchart illustrating a method of verifying a vehicle function according to still another exemplary embodiment of the present disclosure.
In some alternative embodiments, as shown in fig. 4, determining at least one state offset corresponding to the first state information based on the preset perceived deviation simulation rule in step 202 includes:
In step 2021, a performance index range corresponding to the sensing algorithm is determined based on the preset sensing deviation simulation rule.
The performance index range corresponding to the sensing algorithm may be determined by any applicable rule. For example, it may be preset, may be determined based on historical test results of the performance index of the sensing algorithm, or may be determined from a predicted collision time that is itself derived from the first own vehicle state information and the first state information. The specific determination rule is not limited.
In some alternative embodiments, the performance index range corresponding to the sensing algorithm may include a performance index range corresponding to each of at least one state. The at least one state may include at least one of the above-described lateral position, longitudinal position, lateral velocity, longitudinal velocity, lateral acceleration, longitudinal acceleration, orientation angle, angular velocity, and the like. The performance index ranges for different states may differ. For example, the performance index range corresponding to the longitudinal position of the target object may be [-0.25, 0.25] and that corresponding to the lateral position [-0.5, 0.5]; the specific performance index range for each state is not limited.
At step 2022, at least one state offset is determined based on the performance index range.
After the performance index range is determined, at least one performance index value may be sampled within the range as a state offset, thereby obtaining the at least one state offset.
In some alternative embodiments, for each state of each target object, at least one offset corresponding to the state may be determined based on a performance index range corresponding to the state, and further at least one state offset may be determined based on at least one offset corresponding to each state. That is, the at least one state offset may comprise a state offset of the at least one target object. The state offset of each target object may include a state offset of at least one state of the target object, and the state offset of each state may include at least one offset corresponding to the state.
For example, taking one target object, the state offset of the target object may include state offsets respectively corresponding to five states: the longitudinal position, the lateral position, the longitudinal speed, the lateral speed, and the angular speed. Taking the longitudinal position among these five states as an example, the state offset corresponding to the longitudinal position may include one or more offsets determined based on the performance index range corresponding to the longitudinal position, for example, the 51 offsets -0.25 meters, -0.24 meters, -0.23 meters, …, -0.01 meters, 0, 0.01 meters, …, 0.25 meters, or the 50 offsets excluding 0 meters.
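The enumeration of evenly spaced offsets above can be sketched as follows; `sample_offsets` is an illustrative helper, and the rounding is only there to keep the floating-point sequence tidy:

```python
def sample_offsets(lower: float, upper: float, step: float):
    """Enumerate offsets in [lower, upper] at the given sampling interval.
    Rounding avoids floating-point drift accumulating along the sequence."""
    n = int(round((upper - lower) / step))
    return [round(lower + i * step, 10) for i in range(n + 1)]
```

For the longitudinal-position example, `sample_offsets(-0.25, 0.25, 0.01)` yields the 51 offsets from -0.25 meters to 0.25 meters; filtering out 0 gives the 50-offset variant.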
According to the embodiment, the performance index range corresponding to the sensing algorithm is determined, and at least one state offset is determined based on the performance index range, so that accurate and effective determination of the state offset is realized.
Fig. 5 is a flow chart of step 202 provided by an exemplary embodiment of the present disclosure.
In some alternative embodiments, step 2021 determines a performance index range corresponding to the sensing algorithm based on a preset sensing deviation simulation rule, including:
Step 20211, determining historical performance index values of the perception algorithm based on the historical test results of the performance index of the perception algorithm.
The historical test results of the performance index of the perception algorithm may be test results obtained by performing performance index tests on the actual perception algorithm. The historical performance index values may include performance index values of the perception algorithm obtained through historical tests, where a performance index value refers to a perception deviation (or error) value of the perception algorithm. The historical performance index values may include at least one performance index value for each of the various states of the various objects perceived by the perception algorithm, for example, performance index values respectively corresponding to the longitudinal position, lateral position, longitudinal speed, lateral speed, angular speed, and other states of other vehicles perceived by the perception algorithm. Each state corresponds to at least one performance index value; for example, the longitudinal position corresponds to a first number of performance index values. These performance index values may be obtained by testing the perception algorithm under various test scenarios.
Step 20212, determining a performance indicator range based on the historical performance indicator values.
Wherein, the historical performance index value can be comprehensively considered to determine the performance index range. For example, a historical performance indicator mean may be determined based on the historical performance indicator values, and a performance indicator range may be determined based on the historical performance indicator mean. The performance indicator range may also be determined based on a maximum and a minimum of the historical performance indicator values. The specific determination rule is not limited.
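The two determination rules mentioned above (extreme-based and mean-based) can be sketched as follows; the symmetric mean-based rule and its scaling factor `k` are assumptions for illustration, not the disclosure's exact formula:

```python
def range_from_extremes(values):
    """Performance index range from the minimum and maximum historical values."""
    return (min(values), max(values))

def range_from_mean(values, k: float = 3.0):
    """Symmetric range of +/- k times the mean absolute historical deviation."""
    mean_abs = sum(abs(v) for v in values) / len(values)
    return (-k * mean_abs, k * mean_abs)
```

Either rule (or a combination) can then feed the segmentation step that produces the state offsets.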
According to this embodiment, the historical performance index values are determined from the historical test results of the performance index of the perception algorithm, and the performance index range is determined based on those values. Because the historical test results of the performance index are comprehensively considered, the accuracy and effectiveness of the performance index range are improved, which in turn improves the accuracy and reliability of vehicle function verification.
In some alternative embodiments, step 2022 determines at least one state offset based on the performance metric range, comprising:
In step 20221, the performance index segmentation accuracy is obtained.
The performance index segmentation accuracy refers to the interval at which the performance index range is divided, and may also be referred to as the performance index sampling interval.
In some alternative embodiments, for the same target object, the performance index segmentation accuracies of different states may be the same or different. For example, the segmentation accuracy of the longitudinal position may differ from that of the lateral position. Illustratively, the segmentation accuracy of the longitudinal position is 0.01 meters, and that of the lateral position is 0.1 meters. The specific performance index segmentation accuracy is not limited.
In some alternative embodiments, the performance index segmentation accuracy of the same state may be the same or different for different target objects. For example, the segmentation accuracy of the lateral position may differ between other vehicles and pedestrians. Illustratively, the segmentation accuracy of the lateral position of another vehicle is 0.1 meters, and that of the lateral position of a pedestrian is 0.2 meters.
Step 20222, dividing the performance index range based on the performance index segmentation accuracy to obtain the at least one state offset.
For each state of each target object, the performance index range corresponding to that state may be divided, based on the segmentation accuracy corresponding to the state, into at least one performance index value spaced at the segmentation accuracy; the at least one state offset of the target object is then obtained from the at least one performance index value corresponding to each state.
For example, for the longitudinal position: if the target object is another vehicle, the performance index range may be [-0.25, 0.25] meters and the segmentation accuracy 0.01 meters, so the at least one performance index value corresponding to the longitudinal position includes -0.25 meters, -0.24 meters, -0.23 meters, …, -0.01 meters, 0, 0.01 meters, …, 0.25 meters. If the target object is a vulnerable road user (VRU), such as a pedestrian or a cyclist, the performance index range may be [-0.15, 0.15] meters and the segmentation accuracy 0.01 meters.
For example, for the lateral position: if the target object is another vehicle, the performance index range may be [-0.5, 0.5] meters and the segmentation accuracy 0.1 meters. If the target object is a VRU, the performance index range may be [-2, 2] meters and the segmentation accuracy 0.2 meters.
For example, for the longitudinal speed: if the target object is another vehicle, the performance index range may be [-4, 4] meters/second and the segmentation accuracy 0.5 meters/second. If the target object is a pedestrian, the performance index range may be [-1.5, 1.5] meters/second and the segmentation accuracy 0.5 meters/second. If the target object is a cyclist, the performance index range may be [-6, 6] meters/second and the segmentation accuracy 0.5 meters/second.
For example, for the lateral speed: if the target object is another vehicle, the performance index range may be [-1.5, 1.5] meters/second and the segmentation accuracy 0.5 meters/second. If the target object is a VRU, the performance index range may be [-0.6, 0.6] meters/second and the segmentation accuracy 0.1 meters/second.
The above examples are merely illustrative; in practical applications, the specific performance index ranges and segmentation accuracies are not limited to those of the above examples.
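The per-object, per-state ranges and segmentation accuracies exemplified above lend themselves to a simple lookup table; the dictionary below mirrors the numbers in the text, while the key names and the `offsets_for` helper are illustrative, not normative:

```python
# (performance index range, segmentation accuracy) per (object type, state);
# the numbers follow the examples in the text and are illustrative only.
PERCEPTION_BIAS_CONFIG = {
    ("vehicle", "longitudinal_position"): ((-0.25, 0.25), 0.01),
    ("vehicle", "lateral_position"):      ((-0.5, 0.5),   0.1),
    ("vru",     "longitudinal_position"): ((-0.15, 0.15), 0.01),
    ("vru",     "lateral_position"):      ((-2.0, 2.0),   0.2),
    ("vehicle", "longitudinal_speed"):    ((-4.0, 4.0),   0.5),
    ("vehicle", "lateral_speed"):         ((-1.5, 1.5),   0.5),
    ("vru",     "lateral_speed"):         ((-0.6, 0.6),   0.1),
}

def offsets_for(obj_type: str, state: str):
    """Divide the configured range at the configured segmentation accuracy."""
    (lo, hi), step = PERCEPTION_BIAS_CONFIG[(obj_type, state)]
    n = int(round((hi - lo) / step))
    return [round(lo + i * step, 10) for i in range(n + 1)]
```

Keeping the configuration in one table makes it easy to tighten a single state's accuracy without touching the enumeration logic.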
According to this embodiment, the performance index range is divided at the segmentation accuracy, so that uniformly distributed state offsets for each state of each target object can be obtained for simulating the perception deviation, improving the coverage of vehicle function verification. Moreover, by setting different segmentation accuracies, the performance index threshold that the preset vehicle function can tolerate can be obtained at different precisions, which further improves the accuracy of the perception algorithm's performance index threshold and provides a more accurate and effective threshold reference for the development of the perception algorithm.
Fig. 6 is a flow chart of step 202 provided by another exemplary embodiment of the present disclosure.
In some alternative embodiments, step 2021 determines a performance index range corresponding to the sensing algorithm based on a preset sensing deviation simulation rule, including:
Step 2021a, determining a predicted collision time based on the first own vehicle state information and the first state information.
The predicted collision time (Time to Collision, abbreviated TTC) may be determined from the relative position and relative speed of the target object and the own vehicle, for example, the relative position divided by the relative speed. The relative position and relative speed may be determined based on the first own vehicle state information and the first state information. Specifically, the position of the target object relative to the own vehicle (i.e., their relative position) and the speed of the target object in the own vehicle coordinate system may be determined based on the first state information; based on that speed and the speed of the own vehicle, the speed of the target object relative to the own vehicle (i.e., their relative speed) may be determined. The predicted collision time may also be determined from the longitudinal positions and longitudinal speeds of the target object and the own vehicle, i.e., the collision time of the target object and the own vehicle in the longitudinal direction is taken as the predicted collision time.
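The longitudinal TTC variant described above can be sketched as a single division, with the non-closing case handled explicitly; the sign convention (positive closing speed = shrinking gap) is an assumption for illustration:

```python
def time_to_collision(rel_lon_position: float, rel_lon_speed: float) -> float:
    """Predicted collision time (TTC) along the longitudinal axis.
    rel_lon_position: longitudinal distance of the target ahead of the ego (m).
    rel_lon_speed: closing speed (m/s), positive when the gap is shrinking.
    Returns float('inf') when the vehicles are not closing."""
    if rel_lon_speed <= 0:
        return float("inf")
    return rel_lon_position / rel_lon_speed
```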
Step 2021b, determining a performance index range based on the predicted collision time.
As the own vehicle and the target object continuously approach each other, the proportion of the own vehicle's perception field of view occupied by the target object increases, so more target object information is captured and provided to the perception module as a perception basis, making the perception result of the perception algorithm more accurate; the computation of the target object information by the perception module converges over time, and the deviation after convergence is smaller than the deviation when the target object is first perceived. The relative relationship between the own vehicle and the target object can therefore be divided into at least two phases based on the predicted collision time, and different performance index ranges can be determined for the different phases. For example, when the own vehicle first perceives the target object (the predicted collision time is larger), the performance index range is determined to be a larger range; when the predicted collision time is small, the performance index range is determined to be a smaller range.
By determining the performance index range from the predicted collision time, this embodiment comprehensively considers both the changing proportion of the target object in the own vehicle's perception field of view as the two continuously approach and the convergence of the perception algorithm's computation over the target object, so that the simulation more closely approximates the real perception deviation and its evolution, further improving the accuracy and effectiveness of the performance index range and, in turn, the effectiveness and reliability of vehicle function verification.
In some alternative embodiments, step 2021b determines the performance index range based on the predicted collision time, comprising:
acquiring a collision time threshold; determining the performance index range as a first performance index range in response to the predicted collision time being greater than or equal to the collision time threshold; or, in response to the predicted collision time being less than the collision time threshold, determining the performance index range as a second performance index range; the second performance index range is smaller than the first performance index range.
The first performance index range may be determined based on the aforementioned historical performance index values, or may be a preset range. For example, a target performance index range over which the historical performance index values are distributed may be determined based on those values; the first performance index range may then be a range larger than the target performance index range, and the second performance index range a range smaller than it, so that the second performance index range is smaller than the first performance index range.
According to this embodiment, a larger performance index range is used when the predicted collision time is greater than or equal to the collision time threshold, simulating the larger perception deviation of a distant target object; as the own vehicle approaches the target object and the predicted collision time falls below the threshold, a smaller performance index range is used, simulating the smaller perception deviation of a nearby target object. This more accurately simulates the actual perception deviation of the perception algorithm, further improving the accuracy and effectiveness of the performance index range and the effectiveness and reliability of vehicle function verification.
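The two-stage selection can be sketched as a single comparison; the default ranges below are placeholders, not normative values:

```python
def staged_range(ttc: float, ttc_threshold: float,
                 first_range=(-0.25, 0.25), second_range=(-0.1, 0.1)):
    """Pick the performance index range by TTC stage: the wider first range
    while the target is still far (TTC >= threshold), the narrower second
    range once the own vehicle closes in."""
    return first_range if ttc >= ttc_threshold else second_range
```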
In some alternative embodiments, step 2022 determines at least one state offset based on the performance metric range, comprising:
determining the at least one state offset based on the first performance index range and a first segmentation accuracy in response to the performance index range being the first performance index range; or, determining the at least one state offset based on the second performance index range and a second segmentation accuracy in response to the performance index range being the second performance index range.
For the specific principle of determining the at least one state offset based on the first performance index range and the first segmentation accuracy, or based on the second performance index range and the second segmentation accuracy, reference may be made to the foregoing step 2022 and its sub-steps, which are not repeated here.
In some alternative embodiments, the second segmentation accuracy may be less than or equal to the first segmentation accuracy.
Illustratively, for the longitudinal position, the first performance index range may be [-0.25, 0.25] meters with a first segmentation accuracy of 0.01 meters, and the second performance index range may be [-0.1, 0.1] meters with a second segmentation accuracy of 0.002 meters.
Illustratively, FIG. 7 is a schematic diagram of a two-stage perceptual deviation simulation provided by an exemplary embodiment of the present disclosure. As shown in fig. 7, in the course of one simulated travel of the own vehicle, if the TTC of the own vehicle and the target object is greater than or equal to the collision time threshold (thr), the perceived deviation simulation applies the first state offset amount to the first state information. If the TTC is less than thr, a second state offset is applied to the first state information, the second state offset being less than the first state offset.
The embodiment can simulate the perception deviation in different ranges at different predicted collision time, and is beneficial to improving the effectiveness and accuracy of the perception deviation simulation.
In some alternative embodiments, step 2021b determines the performance index range based on the predicted collision time, comprising:
determining a third performance index range corresponding to the first state information based on the predicted collision time and the previous performance index range corresponding to the previous state information of the target object; the third performance index range is less than or equal to the previous performance index range; the previous state information is first state information of a target object determined based on simulation environment data at a previous moment; the third performance index range is taken as the performance index range.
Since the predicted collision time between the own vehicle and the target object decreases during travel, the performance index range is continuously narrowed, so the third performance index range corresponding to the first state information at the current moment is smaller than or equal to the previous performance index range. Compared with the two-stage scheme of the first and second performance index ranges, this embodiment can more accurately simulate the change of the perception deviation of a real perception algorithm, thereby further improving the effectiveness of the performance index range.
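One way to realize the monotonic narrowing is to scale a full range by the remaining TTC and clip it against the previous step's range; the linear scaling and the `ttc_far` horizon below are assumptions for illustration, not the disclosure's exact rule:

```python
def narrowed_range(ttc: float, prev_range,
                   full_range=(-0.25, 0.25), ttc_far: float = 6.0):
    """Shrink the performance index range as TTC decreases, never letting it
    grow past the previous step's range (monotonic narrowing)."""
    scale = min(max(ttc / ttc_far, 0.0), 1.0)   # 1 when far, toward 0 when close
    candidate = (full_range[0] * scale, full_range[1] * scale)
    # Clip against the previous range so the range can only stay or shrink
    return (max(candidate[0], prev_range[0]), min(candidate[1], prev_range[1]))
```

Even if the TTC momentarily increases (e.g. the target brakes), the clip keeps the third performance index range no larger than the previous one, as the embodiment requires.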
Fig. 8 is a flowchart illustrating a method of verifying a vehicle function according to still another exemplary embodiment of the present disclosure.
In some alternative embodiments, the first state information includes at least one state of a longitudinal position, a lateral position, a longitudinal speed, a lateral speed, a longitudinal acceleration, a lateral acceleration, an orientation angle, an angular speed; each state offset includes an offset corresponding to each state in the first state information.
Step 203 of determining second state information of the target object corresponding to each state offset with respect to the own vehicle based on the first state information and each state offset includes:
Step 2031, for any state offset, taking the sum of each state in the first state information and the corresponding offset in the state offset as the second state information corresponding to that state offset.
The state offset represents the simulated perception deviation. Therefore, for each state of each target object, the sum of the state and its corresponding offset may be taken as the state after the perception deviation is applied (or, the state including the perception deviation), and the post-deviation states corresponding to all states may together serve as the second state information, corresponding to the first state information, after the perception deviation is applied. For each state, each offset yields one post-deviation state, so at least one offset yields at least one post-deviation state; that is, the second state information corresponding to the first state information under each state offset can be obtained.
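The per-state addition is a one-liner; the dictionary keys below are illustrative state names:

```python
def apply_offset(first_state: dict, state_offset: dict) -> dict:
    """Second state information = first state information + offset, per state.
    States missing from the offset are passed through unchanged."""
    return {k: v + state_offset.get(k, 0.0) for k, v in first_state.items()}
```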
According to this embodiment, the second state information of the first state information under each state offset is obtained by summing each offset with the state obtained from the simulation environment data, realizing effective simulation of different perception deviations.
In some optional embodiments, verifying the preset vehicle function of the own vehicle based on the first own vehicle state information and the second state information in step 204, and obtaining the verification result includes:
Step 2041, determining the function response state of the preset vehicle function corresponding to each state offset based on the first own vehicle state information and each piece of second state information.
For each piece of second state information, a function response state of the preset vehicle function may be determined based on the first own vehicle state information and that second state information. The function response state may include two states: success and failure. Success indicates that, under the first own vehicle state information, the preset vehicle function can tolerate the perception deviation represented by the state offset of the second state information, that is, the preset vehicle function can achieve its expected result under that perception deviation; for example, the AEB function successfully performs emergency braking and avoids a collision with the target object. Conversely, failure indicates that the preset vehicle function fails to achieve the expected result under the perception deviation.
Based on the second status information, the tolerance of the preset vehicle function to various perceived deviations under the first vehicle status information in the simulation environment can be determined.
In some alternative embodiments, at least one piece of first own vehicle state information may be determined based on the simulation environment data, so that the tolerance of the preset vehicle function to various perception deviations can be determined with the own vehicle in various states.
Step 2042, determining a performance index threshold of a perception algorithm corresponding to the preset vehicle function based on the function response states corresponding to the preset vehicle function under each state offset, and taking the performance index threshold as a verification result.
The state offset represents a performance index of the perception algorithm. Therefore, the function response states of the preset vehicle function under the various state offsets in various environments can be combined to determine which performance index values of the perception algorithm the preset vehicle function can tolerate, and the performance index threshold of the perception algorithm corresponding to the preset vehicle function can be determined based on that tolerance, for example, from each tolerable performance index value.
In some alternative embodiments, similar to the performance level range, performance level segmentation accuracy described above, different states of different target objects may each have their corresponding performance level thresholds. Such as a performance index threshold for a longitudinal position, a performance index threshold for a lateral position, a performance index threshold for a longitudinal speed, a performance index threshold for a lateral speed, etc. The details are not described in detail.
According to this embodiment, the performance index threshold of the perception algorithm corresponding to the preset vehicle function is determined as the verification result from the function response states of the preset vehicle function under the state offsets, so that an accurate and effective performance index threshold reference can be provided for the development and optimization of the perception algorithm.
In some alternative embodiments, determining the function response state of the preset vehicle function corresponding to each state offset in step 2041 based on the first vehicle state information and each second state information includes:
determining a triggering state of a preset vehicle function according to the first vehicle state information and the second state information corresponding to the state offset aiming at any state offset; responding to the triggering state as triggering, and determining the state of the own vehicle reaching an expected result under the preset vehicle function based on the first own vehicle state information, the second state information and a function algorithm of the preset vehicle function; in response to reaching the state of the expected result, determining that the corresponding function response state of the preset vehicle function under the state offset is successful; or in response to the state of reaching the expected result being not reached, determining that the corresponding function response state of the preset vehicle function under the state offset is failed.
Wherein the triggering state of the preset vehicle function indicates whether the preset vehicle function is to be triggered. Such as an AEB function, determines whether the AEB function needs to be triggered based on the first host status information and the second status information of the target object. Different vehicle functions may have different trigger judgment rules, and are not particularly limited. If the trigger state is determined to be the trigger, the state that the own vehicle reaches the expected result under the preset vehicle function can be determined based on the function algorithm of the preset vehicle function. The state may be determined in conjunction with simulation environment data. Specifically, a control instruction of the own vehicle may be determined based on a preset vehicle function algorithm, for example, the AEB function calculates the deceleration of the own vehicle, and then a driving path of the own vehicle under the control instruction may be calculated, and in combination with the driving path of the target object in the simulation environment data, it is determined whether the own vehicle can reach the expected result under the condition that the preset vehicle function is triggered. Such as whether the AEB function can successfully stop, avoiding collision with the target object. And whether the NOA function can automatically avoid the obstacle for the target object. The expected results for different vehicle functions may be set according to actual requirements of the vehicle functions, and the present disclosure is not limited. If the preset vehicle function can reach the expected result, the corresponding function response state of the preset vehicle function under the state offset is determined to be successful. Otherwise, determining the functional response state as failure.
This embodiment combines the function algorithm of the preset vehicle function with the simulation environment data to form an in-the-loop system, so that after the preset vehicle function is triggered, whether its function response state is success can be determined in combination with the simulation environment data, realizing accurate and effective verification of the vehicle function on the basis of the in-the-loop system.
In some alternative embodiments, step 2042 of determining the performance index threshold of the sensing algorithm corresponding to the preset vehicle function based on the respective corresponding function response states of the preset vehicle function at each state offset includes:
determining target state offsets for which the function response state is success, based on the function response states of the preset vehicle function under the respective state offsets; and determining a performance index threshold of the perception algorithm corresponding to the preset vehicle function based on the target state offsets.
The maximum state offset and the minimum state offset may be determined based on the target state offset, and the performance index threshold of the sensing algorithm corresponding to the preset vehicle function may be determined based on the maximum state offset and the minimum state offset. For example, a maximum state offset and a minimum state offset may be used as performance index thresholds for the sensing algorithm, or a first target state offset less than the maximum state offset and a second target state offset greater than the minimum state offset may be used as performance index thresholds. The performance index thresholds for different target objects and different states can be determined according to the corresponding state offsets.
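A minimal sketch of deriving the threshold from the successful offsets, under the assumption (stated above) that the maximum and minimum successful offsets bound the tolerable range; the optional margin models the tighter first/second target offsets, and all names are illustrative:

```python
def perception_thresholds(results, margin=0.0):
    """results: dict mapping state offset -> 'success'/'failure'.
    Returns (lower, upper) performance index thresholds derived from the
    minimum and maximum successful offsets, optionally tightened by a
    margin; returns None if no offset succeeded. Hypothetical sketch."""
    ok = [off for off, state in results.items() if state == "success"]
    if not ok:
        return None
    return (min(ok) + margin, max(ok) - margin)
```

For example, if offsets of -0.2 to 0.2 succeed but ±0.4 fail, the perception algorithm's tolerable error range is reported as (-0.2, 0.2).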
In some alternative embodiments, the driving process of the own vehicle in each of a plurality of environments may be simulated through the simulation environment data, and the function response states of the preset vehicle function under the plurality of state offsets may be determined based on the first own vehicle state information at a plurality of times during the driving process (e.g., each time step of the driving process), and used to determine the performance index threshold of the perception algorithm. The more simulation environments are covered, the more accurate the resulting performance index threshold of the perception algorithm can be.
In some alternative embodiments, performance index thresholds of the perception algorithm under different state information and different predicted collision times can be determined by combining various first own vehicle state information and predicted collision times of the own vehicle, for example, performance index thresholds corresponding to different speeds of the own vehicle and different predicted collision times. The refinement granularity of the performance index threshold may be set according to actual requirements, which is not limited by the present disclosure.
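The refinement by speed band and predicted collision time could be sketched as a bucketed aggregation; the band labels and tuple layout below are hypothetical, not prescribed by the disclosure:

```python
def bucketed_thresholds(samples):
    """samples: iterable of (speed_band, ttc_band, offset, state) tuples.
    Returns {(speed_band, ttc_band): (min_ok, max_ok)} — the tolerable
    offset range per bucket, built from successful samples only.
    Hypothetical sketch of per-condition threshold refinement."""
    buckets = {}
    for speed_band, ttc_band, off, state in samples:
        if state != "success":
            continue
        key = (speed_band, ttc_band)
        lo, hi = buckets.get(key, (off, off))
        buckets[key] = (min(lo, off), max(hi, off))
    return buckets
```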
In these embodiments, the performance index threshold of the perception algorithm is determined from the target state offsets whose function response state is success. Since a successful function response state means that the preset vehicle function can meet the corresponding safety requirement under that state offset, an accurate and effective performance index threshold that the preset vehicle function can tolerate from the perception algorithm is obtained.
In some alternative embodiments, step 2042 of determining the performance index threshold of the sensing algorithm corresponding to the preset vehicle function based on the respective corresponding function response states of the preset vehicle function at each state offset includes:
determining the state change quantity of the own vehicle based on the response states of the functions; determining second own vehicle state information after the state change of the own vehicle and third state information of target objects around the own vehicle relative to the own vehicle based on the state change quantity, the first own vehicle state information and the simulation environment data; the second state information of the vehicle is used as the first state information of the vehicle, the third state information is used as the first state information, and the step of determining at least one state offset corresponding to the first state information based on a preset perception deviation simulation rule is repeatedly executed; and determining a performance index threshold of a perception algorithm corresponding to the preset vehicle function based on the function response states respectively corresponding to the preset vehicle function under each state offset obtained by each execution in response to the repeated execution times meeting the preset condition.
The state change amount of the own vehicle can be determined according to the control instruction of the own vehicle determined by the function algorithm of the preset vehicle function and the running time of the own vehicle. The running time may be the time interval between adjacent moments, so as to enter the simulation test of the next moment. The second own vehicle state information after the state change of the own vehicle, i.e., the own vehicle state information at the next moment, may be determined based on the state change amount and the first own vehicle state information. Based on the simulation environment data, third state information of the target object relative to the second own vehicle state information, i.e., the state information of the target object at the next moment, may then be determined. Taking the second own vehicle state information as the first own vehicle state information and the third state information as the first state information, the step of determining at least one state offset corresponding to the first state information based on the preset perception deviation simulation rule and the subsequent steps are repeatedly executed, completing the verification of the next moment and obtaining the function response states of the preset vehicle function under the state offsets at the next moment. Through continuous iteration, the running of the own vehicle in the simulation environment is simulated, and the function response states of the preset vehicle function during the running process are determined. The performance index threshold of the perception algorithm is then determined by combining the function response states of the preset vehicle function at each moment of the running process of the own vehicle in each environment.
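The iterative step-by-step verification described above can be sketched as a closed loop that perceives with an offset, decides, brakes, and advances one time interval. The one-dimensional kinematics, the AEB-style trigger rule, and all names are stand-ins chosen for illustration:

```python
# Minimal concrete stand-in for the iterative verification loop; the
# 1-D kinematics, trigger rule, and parameter values are illustrative only.

def run_closed_loop(ego_speed, ego_pos, obj_pos, offsets, steps, dt=0.1,
                    ttc_trigger=2.0, decel=8.0):
    """Per step: compute the gap (first state info), apply each perception
    offset (second state info), record the trigger decision, then advance
    the own vehicle by one time interval (the state change amount)."""
    history = []
    for _ in range(steps):
        gap = obj_pos - ego_pos
        step = {}
        for off in offsets:
            perceived_gap = gap + off          # apply the state offset
            triggered = (ego_speed > 0 and
                         perceived_gap / max(ego_speed, 1e-9) < ttc_trigger)
            step[off] = "triggered" if triggered else "idle"
        history.append(step)
        # advance using the braking decision of the zero-offset branch
        if step.get(0.0) == "triggered":
            ego_speed = max(0.0, ego_speed - decel * dt)
        ego_pos += ego_speed * dt
    return history
```

The per-step, per-offset response records in `history` play the role of the function response states collected at each moment.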
According to this method, accurate own vehicle state information and first state information of the target objects around the own vehicle relative to the own vehicle can be determined based on the simulation environment data. Various performance index values of the perception algorithm in real scenes (such as errors in various real scenes) can then be simulated based on the preset perception deviation simulation rule and used as state offsets. Each state offset is applied to the first state information to obtain second state information, which thus incorporates a performance index value of the perception algorithm in a real scene. The second state information is used for the processing of the preset vehicle function, yielding the function response results of the preset vehicle function under the various state offsets, such as whether the preset vehicle function is triggered and whether the expected function result is reached after triggering.
The embodiments of the present disclosure may be implemented alone or in any combination without collision, and may specifically be set according to actual needs, which is not limited by the present disclosure.
Any of the vehicle function verification methods provided by the embodiments of the present disclosure may be performed by any suitable device having data processing capabilities, including, but not limited to: a terminal device, a server, and the like. Alternatively, any of the vehicle function verification methods provided by the embodiments of the present disclosure may be executed by a processor, for example by the processor calling corresponding instructions stored in a memory to execute any of the vehicle function verification methods mentioned in the embodiments of the present disclosure. Details are not repeated below.
Exemplary apparatus
Fig. 9 is a schematic structural view of a vehicle function verification device provided in an exemplary embodiment of the present disclosure. The device of this embodiment may be used to implement an embodiment of a verification method for a vehicle function according to the present disclosure, where the device shown in fig. 9 includes: a first processing module 51, a second processing module 52, a third processing module 53 and a fourth processing module 54.
The first processing module 51 is configured to determine, based on the simulation environment data, first own vehicle status information of the own vehicle and first status information of a target object around the own vehicle with respect to the own vehicle.
The second processing module 52 is configured to determine at least one state offset corresponding to the first state information based on a preset perceptual deviation simulation rule.
The third processing module 53 is configured to determine, based on the first state information and each state offset, second state information of the target object corresponding to each state offset with respect to the own vehicle.
The fourth processing module 54 is configured to verify a preset vehicle function of the vehicle based on the first vehicle state information and the second state information, and obtain a verification result.
Fig. 10 is a schematic structural view of a verification device for vehicle functions provided in another exemplary embodiment of the present disclosure.
In some alternative embodiments, the first processing module 51 includes: the first determination unit 511, the second determination unit 512, the third determination unit 513, the fourth determination unit 514, and the fifth determination unit 515.
The first determining unit 511 is configured to determine, based on the simulation environment data, first behavior information of the own vehicle in the first coordinate system and second behavior information of the target object in the first coordinate system.
The second determining unit 512 is configured to determine first vehicle status information of the vehicle in the first coordinate system based on the first behavior information.
The third determining unit 513 is configured to determine raw state information of the target object in the first coordinate system based on the second behavior information.
A fourth determining unit 514 is configured to determine a vehicle coordinate system based on the first vehicle state information.
A fifth determining unit 515, configured to convert the original state information into a vehicle coordinate system, and obtain, as the first state information, state information of the target object in the vehicle coordinate system.
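The conversion performed by the fourth and fifth determining units — building a vehicle coordinate system from the own vehicle's state and expressing the object's state in it — amounts to a translation plus rotation. A hedged sketch (x forward, y left; names illustrative):

```python
import math

def to_vehicle_frame(obj_xy, ego_xy, ego_heading):
    """Convert a first-coordinate-system (world) object position into the
    vehicle coordinate system defined by the own vehicle's position and
    heading. Convention (x forward, y left) is an assumption for
    illustration; the disclosure does not fix one."""
    dx = obj_xy[0] - ego_xy[0]
    dy = obj_xy[1] - ego_xy[1]
    c, s = math.cos(ego_heading), math.sin(ego_heading)
    # rotate the translated vector by -heading
    return (c * dx + s * dy, -s * dx + c * dy)
```

For example, an object 5 m due north of an ego vehicle that is facing north appears 5 m straight ahead in the vehicle frame.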
In some alternative embodiments, the second processing module 52 includes:
the first processing unit 521 is configured to determine a performance index range corresponding to the sensing algorithm based on a preset sensing deviation simulation rule.
The second processing unit 522 is configured to determine at least one state offset based on the performance index range.
In some alternative embodiments, the first processing unit 521 is specifically configured to:
Determining a historical performance index value of the sensing algorithm based on historical test results of the performance index of the sensing algorithm; and determining the performance index range based on the historical performance index value.
In some alternative embodiments, the second processing unit 522 is specifically configured to:
Obtaining a performance index segmentation precision; and segmenting the performance index range based on the performance index segmentation precision to obtain at least one state offset.
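Segmenting a performance index range at a given precision into candidate state offsets can be sketched as follows (a uniform step is an assumption; the disclosure only requires that the range be divided at the configured precision):

```python
def segment_offsets(lo, hi, precision):
    """Divide the performance index range [lo, hi] into candidate state
    offsets spaced by the segmentation precision. Uniform spacing is an
    illustrative choice."""
    n = int(round((hi - lo) / precision))
    return [lo + i * precision for i in range(n + 1)]
```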
In some alternative embodiments, the first processing unit 521 is specifically configured to:
a predicted collision time is determined based on the first vehicle state information and the first state information. Based on the predicted collision time, a performance index range is determined.
In some alternative embodiments, the first processing unit 521 is specifically configured to:
acquiring a collision time threshold; determining the performance index range as a first performance index range in response to the predicted collision time being greater than or equal to the collision time threshold; or, in response to the predicted collision time being less than the collision time threshold, determining the performance index range as a second performance index range; the second performance index range is smaller than the first performance index range.
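The time-to-collision-dependent range selection above could look like the following sketch, where the threshold and the two ranges are placeholder values (the disclosure leaves them to be configured):

```python
def performance_index_range(ego_speed, gap, obj_speed, ttc_threshold=3.0,
                            wide=(-1.0, 1.0), narrow=(-0.3, 0.3)):
    """Return the first (wider) performance index range when the predicted
    collision time is at or above the threshold, and the second (narrower)
    range when a collision is imminent. All numbers are illustrative."""
    closing = ego_speed - obj_speed
    ttc = float("inf") if closing <= 0 else gap / closing
    return wide if ttc >= ttc_threshold else narrow
```

The intuition matches the claim: when collision is imminent, only small perception errors are tolerable, so the narrower second range is explored.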
In some alternative embodiments, the second processing unit 522 is specifically configured to:
determining at least one state offset based on the first performance index range and the first segmentation accuracy in response to the performance index range being the first performance index range; or, in response to the performance index range being the second performance index range, determining at least one state offset based on the second performance index range and the second segmentation accuracy.
In some alternative embodiments, the first processing unit 521 is specifically configured to:
Determining a third performance index range corresponding to the first state information based on the predicted collision time and the previous performance index range corresponding to the previous state information of the target object; the third performance index range is less than or equal to the previous performance index range; the previous state information is first state information of a target object determined based on simulation environment data at a previous moment; the third performance index range is taken as the performance index range.
In some alternative embodiments, the first state information includes at least one state of a longitudinal position, a lateral position, a longitudinal speed, a lateral speed, a longitudinal acceleration, a lateral acceleration, an orientation angle, an angular speed; each state offset includes an offset corresponding to each state in the first state information. The third processing module 53 includes:
the third processing unit 531 is configured to, for any state offset, use a sum of each state in the first state information and an offset corresponding to the state offset as second state information corresponding to the state offset.
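The componentwise sum performed by the third processing unit can be sketched directly; the field names below are illustrative stand-ins for the listed states (longitudinal/lateral position, speed, etc.):

```python
def apply_offset(first_state, offset):
    """Second state information = componentwise sum of the first state
    information and one state offset. Missing offset components default
    to zero. Field names are illustrative."""
    return {k: first_state[k] + offset.get(k, 0.0) for k in first_state}
```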
In some alternative embodiments, the fourth processing module 54 includes:
the fourth processing unit 541 is configured to determine, based on the first vehicle state information and the second state information, a function response state of the preset vehicle function corresponding to each state offset.
And a fifth processing unit 542, configured to determine a performance index threshold of a perception algorithm corresponding to the preset vehicle function based on the function response states corresponding to the preset vehicle function under each state offset, and take the performance index threshold as a verification result.
In some alternative embodiments, the fourth processing unit 541 is specifically configured to:
For any state offset, determining a triggering state of the preset vehicle function according to the first own vehicle state information and the second state information corresponding to the state offset; in response to the triggering state being triggered, determining whether the own vehicle reaches an expected result under the preset vehicle function based on the first own vehicle state information, the second state information and a function algorithm of the preset vehicle function; in response to the expected result being reached, determining that the function response state of the preset vehicle function under the state offset is success; or, in response to the expected result not being reached, determining that the function response state of the preset vehicle function under the state offset is failure.
In some alternative embodiments, fifth processing unit 542 is specifically configured to:
determining target state offsets for which the function response state is success, based on the function response states of the preset vehicle function under the respective state offsets; and determining a performance index threshold of the perception algorithm corresponding to the preset vehicle function based on the target state offsets.
In some alternative embodiments, fifth processing unit 542 is specifically configured to:
determining a state change amount of the own vehicle based on each function response state; determining second own vehicle state information after the state change of the own vehicle and third state information of the target objects around the own vehicle relative to the own vehicle based on the state change amount, the first own vehicle state information and the simulation environment data; taking the second own vehicle state information as the first own vehicle state information and the third state information as the first state information, and repeatedly executing the step of determining at least one state offset corresponding to the first state information based on the preset perception deviation simulation rule; and in response to the number of repetitions meeting a preset condition, determining the performance index threshold of the perception algorithm corresponding to the preset vehicle function based on the function response states of the preset vehicle function under the state offsets obtained by each execution.
The beneficial technical effects corresponding to the exemplary embodiments of the present apparatus may refer to the corresponding beneficial technical effects of the foregoing exemplary method section, and will not be described herein.
Exemplary electronic device
Fig. 11 is a block diagram of an electronic device provided in an embodiment of the present disclosure, including at least one processor 11 and a memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
Memory 12 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. Non-volatile memory may include, for example, Read-Only Memory (ROM), hard disks, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 11 to implement the methods of the various embodiments of the present disclosure described above and/or other desired functions.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
The input device 13 may include, for example, a keyboard, a mouse, and the like.
The output device 14 may output various information to the outside, which may include, for example, a display, a speaker, a printer, and a communication network and a remote output apparatus connected thereto, etc.
Of course, only some of the components of the electronic device 10 relevant to the present disclosure are shown in fig. 11, with components such as buses, input/output interfaces, etc. omitted for simplicity. In addition, the electronic device 10 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer readable storage Medium
In addition to the methods and apparatus described above, embodiments of the present disclosure may also provide a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform steps in the methods of the various embodiments of the present disclosure described in the "exemplary methods" section above.
The computer program product may write program code for performing the operations of embodiments of the present disclosure in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium, having stored thereon computer program instructions, which when executed by a processor, cause the processor to perform the steps in the methods of the various embodiments of the present disclosure described in the "exemplary methods" section above.
The computer-readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, Random Access Memory (RAM), Read-Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM or flash memory), optical fiber, portable Compact Disk Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present disclosure have been described above in connection with specific embodiments, but the advantages, benefits, effects, etc. mentioned in this disclosure are merely examples and are not to be considered as necessarily possessed by the various embodiments of the present disclosure. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, since the disclosure is not necessarily limited to practice with the specific details described.
Various modifications and alterations to this disclosure may be made by those skilled in the art without departing from the spirit and scope of this application. Thus, the present disclosure is intended to include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (17)

1. A method of verifying a vehicle function, comprising:
determining first own vehicle state information of an own vehicle and first state information of target objects around the own vehicle relative to the own vehicle based on simulation environment data;
determining at least one state offset corresponding to the first state information based on a preset perception deviation simulation rule;
determining second state information of the target object corresponding to each state offset relative to the own vehicle based on the first state information and each state offset;
and verifying the preset vehicle function of the own vehicle based on the first own vehicle state information and each second state information to obtain a verification result.
2. The method of claim 1, wherein the determining at least one state offset corresponding to the first state information based on a preset perceived deviation simulation rule comprises:
Determining a performance index range corresponding to a sensing algorithm based on a preset sensing deviation simulation rule;
the at least one state offset is determined based on the performance index range.
3. The method of claim 2, wherein the determining, based on the preset perceived deviation simulation rule, a performance index range corresponding to a perceived algorithm includes:
determining a historical performance index value of the sensing algorithm based on a historical test result of the performance index of the sensing algorithm;
the performance index range is determined based on the historical performance index values.
4. The method of claim 2, wherein the determining the at least one state offset based on the performance metric range comprises:
acquiring the performance index segmentation precision;
and dividing the performance index range based on the performance index dividing precision to obtain the at least one state offset.
5. The method of claim 2, wherein the determining, based on the preset perceived deviation simulation rule, a performance index range corresponding to a perceived algorithm includes:
determining a predicted collision time based on the first own vehicle state information and the first state information;
And determining the performance index range based on the predicted collision time.
6. The method of claim 5, wherein the determining the performance metric range based on the predicted collision time comprises:
acquiring a collision time threshold;
determining the performance index range as a first performance index range in response to the predicted collision time being greater than or equal to the collision time threshold; or,
determining the performance index range as a second performance index range in response to the predicted collision time being less than the collision time threshold; the second performance index range is smaller than the first performance index range.
7. The method of claim 6, wherein the determining the at least one state offset based on the performance metric range comprises:
determining the at least one state offset based on the first performance index range and a first segmentation accuracy in response to the performance index range being the first performance index range;
in response to the performance index range being the second performance index range, determining the at least one state offset based on the second performance index range and a second segmentation accuracy.
8. The method of claim 5, wherein the determining the performance metric range based on the predicted collision time comprises:
determining a third performance index range corresponding to the first state information based on the predicted collision time and a previous performance index range corresponding to the previous state information of the target object; the third performance index range is less than or equal to the previous performance index range; the previous state information is first state information of the target object determined based on the simulation environment data at a previous moment;
and taking the third performance index range as the performance index range.
9. The method of any of claims 1-8, wherein the determining first host state information of a host vehicle and first state information of a target object surrounding the host vehicle relative to the host vehicle based on simulation environment data comprises:
determining first behavior information of the own vehicle under a first coordinate system and second behavior information of the target object under the first coordinate system based on the simulation environment data;
determining the first vehicle state information of the vehicle under the first coordinate system based on the first behavior information;
Determining original state information of the target object under the first coordinate system based on the second behavior information;
determining a vehicle coordinate system based on the first vehicle state information;
and converting the original state information into the vehicle coordinate system, and obtaining the state information of the target object under the vehicle coordinate system as the first state information.
10. The method of any of claims 1-8, wherein the first status information includes at least one of a longitudinal position, a lateral position, a longitudinal speed, a lateral speed, a longitudinal acceleration, a lateral acceleration, an orientation angle, an angular speed; each state offset includes an offset corresponding to each state in the first state information;
the determining, based on the first state information and the state offsets, second state information of the target object corresponding to each state offset with respect to the own vehicle, includes:
and regarding any state offset, taking the sum of each state in the first state information and the corresponding offset in the state offset as the second state information corresponding to the state offset.
11. The method according to any one of claims 1-8, wherein the verifying the preset vehicle function of the own vehicle based on the first own vehicle state information and each of the second state information, and obtaining a verification result includes:
determining function response states of the preset vehicle functions respectively corresponding to the state offsets based on the first vehicle state information and the second state information;
and determining a performance index threshold of a perception algorithm corresponding to the preset vehicle function based on the function response states respectively corresponding to the preset vehicle function under the state offset, and taking the performance index threshold as the verification result.
12. The method of claim 11, wherein the determining, based on the function response states of the preset vehicle functions respectively corresponding to the state offsets, a performance index threshold of a perception algorithm corresponding to the preset vehicle functions includes:
determining a state change amount of the own vehicle based on each of the function response states;
determining second own vehicle state information after the state change of the own vehicle and third state information of a target object around the own vehicle relative to the own vehicle based on the state change amount, the first own vehicle state information and the simulation environment data;
The second state information of the own vehicle is used as the first state information of the own vehicle, the third state information is used as the first state information, and the step of determining at least one state offset corresponding to the first state information based on a preset perception deviation simulation rule is repeatedly executed;
and responding to the repeated execution times to meet a preset condition, and determining the performance index threshold of the perception algorithm corresponding to the preset vehicle function based on the function response states respectively corresponding to the preset vehicle function under the state offsets respectively obtained by each execution.
13. The method of claim 11, wherein the determining, based on the first own vehicle state information and the second state information, the function response state of the preset vehicle function corresponding to each of the state offsets comprises:
determining a triggering state of the preset vehicle function according to the first own vehicle state information and the second state information corresponding to the state offset;
in response to the triggering state being triggered, determining, based on the first own vehicle state information, the second state information and a function algorithm of the preset vehicle function, an expected-result state indicating whether the own vehicle reaches an expected result under the preset vehicle function;
in response to the expected-result state being reached, determining that the function response state of the preset vehicle function under the state offset is success; or,
in response to the expected-result state being not reached, determining that the function response state of the preset vehicle function under the state offset is failure.
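The claim-13 decision for a single state offset can be illustrated with an AEB-style emergency-braking function as the preset vehicle function. The trigger distance, braking deceleration, and field meanings below are illustrative assumptions, not values from the patent.

```python
# Hypothetical claim-13 decision for one offset sample; thresholds are
# illustrative assumptions.
def function_response_state(ego_speed_mps, perceived_gap_m,
                            trigger_gap_m=20.0, decel_mps2=6.0):
    """Return 'success', 'failure' or 'not_triggered' for one offset sample."""
    # Triggering state: the function fires only when the perceived gap to
    # the target object falls below the trigger distance.
    if perceived_gap_m >= trigger_gap_m:
        return "not_triggered"
    # Expected result: the own vehicle stops within the perceived gap,
    # using the kinematic stopping distance v^2 / (2 * a).
    stopping_distance = ego_speed_mps ** 2 / (2.0 * decel_mps2)
    return "success" if stopping_distance <= perceived_gap_m else "failure"
```

For example, at 10 m/s with a 15 m perceived gap the stopping distance is about 8.3 m, so the response state is success; at 20 m/s it grows to about 33.3 m and the state is failure.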
14. The method of claim 13, wherein the determining the performance index threshold of the perception algorithm corresponding to the preset vehicle function based on the function response states of the preset vehicle function under the respective state offsets comprises:
determining, from the function response states of the preset vehicle function under the respective state offsets, a target state offset whose function response state is success;
and determining the performance index threshold of the perception algorithm corresponding to the preset vehicle function based on the target state offset.
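One way to realize claim 14 is to keep the offsets whose response state is success (the target state offsets) and derive the perception-algorithm threshold from them, here taken as the largest absolute offset the function still tolerates. The max-of-successes aggregation rule is an illustrative assumption.

```python
# Hypothetical claim-14 aggregation: the threshold is the largest absolute
# offset that still produced a successful function response.
def performance_index_threshold(response_by_offset):
    """Map {state_offset: response_state} to a tolerated-error threshold."""
    target_offsets = [off for off, state in response_by_offset.items()
                      if state == "success"]
    if not target_offsets:
        return None  # no offset was tolerated
    return max(abs(off) for off in target_offsets)
```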
15. A vehicle function verification apparatus, comprising:
a first processing module configured to determine, based on simulation environment data, first own vehicle state information of an own vehicle and first state information of a target object around the own vehicle relative to the own vehicle;
a second processing module configured to determine, based on a preset perception deviation simulation rule, at least one state offset corresponding to the first state information;
a third processing module configured to determine, based on the first state information and each state offset, second state information of the target object relative to the own vehicle corresponding to each state offset;
and a fourth processing module configured to verify a preset vehicle function of the own vehicle based on the first own vehicle state information and the second state information to obtain a verification result.
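The claim-15 apparatus can be sketched as four chained processing steps; the four functions below mirror the four modules, and every name is an illustrative stand-in rather than the patent's implementation.

```python
# Hypothetical pipeline mirroring the four processing modules of claim 15.
def verify_vehicle_function(sim_env, deviation_rule, apply_offset, verify):
    # First processing module: own-vehicle and target-object state from
    # simulation environment data.
    ego_state, target_state = sim_env()
    # Second: state offsets from the preset perception deviation rule.
    offsets = deviation_rule(target_state)
    # Third: "second" (perturbed) target states, one per offset.
    perturbed = [apply_offset(target_state, off) for off in offsets]
    # Fourth: verification result from ego state plus perturbed states.
    return verify(ego_state, perturbed)
```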
16. A computer-readable storage medium storing a computer program for executing the vehicle function verification method of any one of claims 1-14.
17. An electronic device, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to read the executable instructions from the memory and execute the instructions to implement the vehicle function verification method of any one of claims 1-14.
CN202311469090.4A, filed 2023-11-06 (priority 2023-11-06): Vehicle function verification method, device, equipment and medium. Status: Pending. Publication: CN117350077A (en).

Priority Applications (1)

Application Number: CN202311469090.4A
Priority Date / Filing Date: 2023-11-06
Title: Vehicle function verification method, device, equipment and medium

Publications (1)

Publication Number: CN117350077A
Publication Date: 2024-01-05

Family ID: 89370981

Country Status (1)

CN: CN117350077A (en)

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination