CN115257803A - Functional scene extraction method suitable for high-speed automatic driving - Google Patents


Info

Publication number
CN115257803A
CN115257803A · Application CN202210765745.1A
Authority
CN
China
Prior art keywords
vehicle
data
speed
scene
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210765745.1A
Other languages
Chinese (zh)
Inventor
叶福恒
张宇飞
郑建明
覃斌
张建军
刘迪
王晓非
付忠显
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
FAW Group Corp
Original Assignee
FAW Group Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FAW Group Corp filed Critical FAW Group Corp
Priority to CN202210765745.1A priority Critical patent/CN115257803A/en
Publication of CN115257803A publication Critical patent/CN115257803A/en
Pending legal-status Critical Current

Classifications

    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/001 Planning or execution of driving tasks
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W2520/00 Input parameters relating to overall vehicle dynamics
    • B60W2520/06 Direction of travel
    • B60W2520/10 Longitudinal speed
    • B60W2520/105 Longitudinal acceleration
    • B60W2520/12 Lateral speed
    • B60W2520/125 Lateral acceleration
    • B60W2540/18 Steering angle

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)

Abstract

A method for extracting high-speed autopilot function scenes, the method comprising: fixing data acquisition equipment on the host vehicle and acquiring data about the host vehicle and target vehicles in real time; creating a data-partition sequence-number column from the host-vehicle speed and the number of surrounding target vehicles, and partitioning the acquired data according to that column; performing data cleaning on each resulting partition; processing the driving parameters of the host vehicle on the cleaned data to obtain its predicted driving area, and marking any object that enters the predicted driving area as a front target; classifying the high-speed autopilot function scenes and extracting data for the following, cut-in, and cut-out scenes; and, after integrating the extracted scene data, randomly selecting N1 and N2 non-repeating integers from 1 to N, where the corresponding values are the inspection numbers, and inspecting the extracted data according to those numbers.

Description

Scene extraction method suitable for high-speed automatic driving function
Technical Field
The invention relates to the technical field of high-speed automatic driving function scenes, in particular to a method for extracting a high-speed automatic driving function scene.
Background
As domestic intelligent vehicles gradually reach L3/L4 autonomy, their systems mainly involve ADAS functions such as traffic-jam following, highway pilot driving, and automated valet parking. The most common is the high-speed automatic driving function, which automatically adjusts the following speed according to the speed of the lead vehicle and the lane lines, and assists the driver in keeping the vehicle in its lane.
Representative scenes related to the high-speed automatic driving function are extracted, the data are sliced and extracted, and the data segments are analyzed, providing scene-data support for the development and experimental verification of high-speed automatic driving decision models.
However, the very high data acquisition frequency and very long acquisition mileage cause vehicle data to grow explosively, with scenes numbering in the millions. How to extract representative, effective high-speed automatic driving function scenes from such data has therefore become a major difficulty in data extraction.
In the prior art, patent document CN114064656A discloses "an automatic driving scene recognition and conversion method based on a road-side sensing system", which can recognize and convert a large amount of driving data collected by a road-side sensing system and output slice data with functional marks, providing a data source for building large-scale databases for automatic driving training. Patent document CN114067243A discloses "an automatic driving scene identification method, system, device, and medium", which can accurately distinguish current environmental characteristics and differences, implement deep clustering of the environment, and assist migration of automatic driving decision algorithms; however, it does not address the scene segmentation method of the present invention.
Therefore, conventional technical means cannot extract representative, effective scene data from the large volume of data generated by the high-speed automatic driving function.
Disclosure of Invention
The invention solves the problem that prior technical means cannot extract representative, effective scene data from the large volume of data generated by the high-speed automatic driving function.
The invention relates to a scene extraction method suitable for the high-speed automatic driving function, comprising the following steps:
Step S1, fixing data acquisition equipment on the host vehicle and acquiring data about the host vehicle and target vehicles;
Step S2, creating a data-partition sequence-number column from the host-vehicle speed and the number of surrounding target vehicles, and partitioning the acquired data according to that column;
Step S3, performing data cleaning on each partition obtained in step S2;
Step S4, processing the driving parameters of the host vehicle using the data cleaned in step S3 to obtain the predicted driving area of the host vehicle, and marking any object entering the predicted driving area as a front target;
Step S5, classifying the high-speed automatic driving function scenes and extracting data for the following, cut-in, and cut-out scenes;
Step S6, after integrating the extracted scene data, randomly selecting N1 and N2 non-repeating integers from 1 to N, where the corresponding values are the inspection numbers, and inspecting the extracted data according to those numbers.
Further, in an embodiment of the present invention, in the step S1, the data acquisition device includes a millimeter wave radar, an intelligent camera, and a laser radar.
Further, in an embodiment of the present invention, in step S1, the host-vehicle data include the vehicle speed, lateral acceleration, longitudinal acceleration, steering-wheel angle, heading angle, longitude, and latitude;
the target-vehicle data include the relative longitudinal distance between the target vehicle and the host vehicle, the relative lateral distance between the target vehicle and the host vehicle, the relative speed between the target vehicle and the host vehicle, the absolute lateral acceleration of the target vehicle, and the absolute longitudinal acceleration of the target vehicle.
Further, in an embodiment of the present invention, in step S3, performing data cleaning on each partition comprises the following steps:
Step S301, performing quadratic-spline interpolation up-sampling on the host-vehicle speed, the absolute speed of the target vehicle, and the relative speed between the target vehicle and the host vehicle;
Step S302, constructing the standard deviation of a Gaussian kernel function and, after determining the mean, performing the convolution operation;
Step S303, down-sampling the host-vehicle speed, target-vehicle absolute speed, and relative-speed data at equal intervals back to the original sampling frequency.
Further, in an embodiment of the present invention, the convolution operation of step S302 uses the formula:
y(n) = Σ_i G(i) · h(n - i)
where y(n) is the filtered (updated) value, G(i) is the Gaussian kernel function template, and h(n - i) is the original host-vehicle speed, target-vehicle absolute speed, or relative-speed data.
Further, in an embodiment of the present invention, in step S4, processing the driving parameters of the host vehicle to obtain its predicted driving area comprises the following steps:
Step S401, acquiring the collected driving curvature of the host vehicle and its steering-wheel angle data;
Step S402, computing a weighted average of the driving curvature and the steering-wheel angle data, and taking the resulting track, with the host vehicle as origin and its heading as direction, as the predicted path of the host vehicle;
Step S403, translating the predicted path radially by half a lane width to each side to obtain the predicted driving area of the host vehicle.
Further, in an embodiment of the present invention, in step S5, the following scene includes following start, uniform-speed following, following acceleration, following deceleration, following brake-to-stop, in-lane overtaking, and following on a curve.
Further, in an embodiment of the present invention, in step S5, data are extracted for the following, cut-in, and cut-out scenes of the high-speed automatic driving function according to the following principle:
the relative speed between the target vehicle and the host vehicle is used as the extraction condition; when the relative speed lies in different intervals, the target vehicle and the host vehicle are in different interaction states;
an identification column is created for these interaction states: when the relative speed is greater than 0 the column value is 0, otherwise it is 1;
the identification column is differenced forward to obtain its forward-difference sequence, whose tail value is set to 1;
the identification column is differenced backward to obtain its backward-difference sequence, whose head value is set to 1;
the time points at which the two difference sequences are nonzero are screened: the nonzero points of the backward-difference sequence, together with the time axis corresponding to the nonzero points of the forward-difference sequence, form the start and end times of each relative-speed interval;
the odd-numbered intervals correspond to the vehicle state of the first interval, and the even-numbered intervals to the opposite state;
when two consecutive intervals of the same state are separated by less than 1 s, they are merged;
the initial action of the target vehicle and the host vehicle is taken to be lane following.
Further, in an embodiment of the present invention, in step S6, integrating the extracted scene data comprises the following steps:
Step S601, selecting 150-180 extracted scene data items;
Step S602, numbering the scene data to be inspected sequentially from 1;
Step S603, setting the number of scene data items, the acceptance limit, and the rejection limit for each inspection round according to the selected data volume and the expected accuracy.
Further, in an embodiment of the present invention, in step S6, inspecting the extracted data according to the numbers specifically comprises:
if the number of nonconforming items d1 found in the first inspection of the scene data is less than or equal to Ac1, the inspection result is accepted;
if d1 is greater than or equal to Re1, the inspection result is rejected;
if d1 lies between Ac1 and Re1, a second inspection is performed under the same scheme and the nonconforming counts of the two inspections are accumulated as d1 + d2;
if d1 + d2 is less than or equal to the acceptance limit Ac2, the inspection result is accepted;
if d1 + d2 is greater than or equal to the rejection limit Re2, the inspection result is rejected.
The invention solves the problem that prior technical means cannot extract representative, effective scene data from the large volume of data generated by the high-speed automatic driving function. Its specific beneficial effects are as follows:
1. The method innovatively applies a partitioning algorithm, a cleaning algorithm, a relative-speed-based scene recognition algorithm, and a scene extraction algorithm to objective natural-driving data such as the host-vehicle speed, the target-vehicle speed, and the lateral and longitudinal distances between the target vehicle and the host vehicle, enabling efficient, automatic extraction of high-speed automatic driving function scene data from big data.
2. The data-cleaning algorithm based on the Gaussian kernel function corrects data that would interfere with the functional scenes more accurately.
3. The scene recognition algorithm based on the relative speed of the target vehicle reduces the number of algorithm parameters.
4. Creating the data-partition sequence-number column from the vehicle speed and the number of surrounding targets allows the acquired data to be partitioned accordingly and processed in parallel by the partitioning algorithm, greatly shortening computation time.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a general flowchart of a functional scene extraction method according to an embodiment.
FIG. 2 is a flow chart of partitioning, according to an embodiment.
Fig. 3 is a diagram of a quadratic spline solution according to an embodiment.
Fig. 4 is a schematic diagram of a predicted travel area according to an embodiment.
Fig. 5 is a key scene diagram of following a vehicle according to the embodiment.
Fig. 6 is a diagram of a method of extracting a vehicle state according to the embodiment.
FIG. 7 is a cut-in key scenario diagram in accordance with an embodiment.
Fig. 8 is a cut-out key scene diagram according to an embodiment.
Fig. 9 is a flow diagram of a subsampling test according to an embodiment.
Detailed Description
Various embodiments of the present invention are described below with reference to the accompanying drawings. The embodiments described with reference to the drawings are illustrative, are intended to explain the invention, and are not to be construed as limiting it.
The method for extracting high-speed automatic driving function scenes in this embodiment comprises the following steps:
Step S1, fixing data acquisition equipment on the host vehicle and acquiring data about the host vehicle and target vehicles;
Step S2, creating a data-partition sequence-number column from the host-vehicle speed and the number of surrounding target vehicles, and partitioning the acquired data according to that column;
Step S3, performing data cleaning on each partition obtained in step S2;
Step S4, processing the driving parameters of the host vehicle using the data cleaned in step S3 to obtain the predicted driving area of the host vehicle, and marking any object entering the predicted driving area as a front target;
Step S5, classifying the high-speed automatic driving function scenes and extracting data for the following, cut-in, and cut-out scenes;
Step S6, after integrating the extracted scene data, randomly selecting N1 and N2 non-repeating integers from 1 to N, where the corresponding values are the inspection numbers, and inspecting the extracted scene data according to those numbers.
In this embodiment, in step S1, the data acquisition device includes a millimeter wave radar, an intelligent camera, and a laser radar.
In this embodiment, in the step S1, the vehicle-related data includes a vehicle speed, a lateral acceleration, a longitudinal acceleration, a steering angle, a heading angle, a longitude and a latitude of the vehicle;
the target vehicle related data comprises a relative longitudinal distance between the target vehicle and the vehicle, a relative transverse distance between the target vehicle and the vehicle, a relative speed between the target vehicle and the vehicle, an absolute transverse acceleration of the target vehicle and an absolute longitudinal acceleration of the target vehicle.
In this embodiment, in step S3, performing data cleaning on each partition comprises the following steps:
Step S301, performing quadratic-spline interpolation up-sampling on the host-vehicle speed, the absolute speed of the target vehicle, and the relative speed between the target vehicle and the host vehicle;
Step S302, constructing the standard deviation of a Gaussian kernel function and, after determining the mean, performing the convolution operation;
Step S303, down-sampling the host-vehicle speed, target-vehicle absolute speed, and relative-speed data at equal intervals back to the original sampling frequency.
In this embodiment, the convolution operation of step S302 uses the formula:
y(n) = Σ_i G(i) · h(n - i)
where y(n) is the filtered (updated) value, G(i) is the Gaussian kernel function template, and h(n - i) is the original host-vehicle speed, target-vehicle absolute speed, or relative-speed data.
In this embodiment, in step S4, processing the driving parameters of the host vehicle to obtain its predicted driving area comprises the following steps:
Step S401, acquiring the collected driving curvature of the host vehicle and its steering-wheel angle data;
Step S402, computing a weighted average of the driving curvature and the steering-wheel angle data, and taking the resulting track, with the host vehicle as origin and its heading as direction, as the predicted path of the host vehicle;
Step S403, translating the predicted path radially by half a lane width to each side to obtain the predicted driving area of the host vehicle.
In this embodiment, in step S5, the following scene includes following starting, following at a constant speed, following accelerating, following decelerating, following braking and stopping, passing along the lane, and traveling along a curve.
In this embodiment, in step S5, the principle for extracting data from the following, cut-in, and cut-out scenes of the high-speed automatic driving function is as follows:
the relative speed between the target vehicle and the host vehicle is used as the extraction condition; when the relative speed lies in different intervals, the target vehicle and the host vehicle are in different interaction states;
an identification column is created for these interaction states: when the relative speed is greater than 0 the column value is 0, otherwise it is 1;
the identification column is differenced forward to obtain its forward-difference sequence, whose tail value is set to 1;
the identification column is differenced backward to obtain its backward-difference sequence, whose head value is set to 1;
the time points at which the two difference sequences are nonzero are screened: the nonzero points of the backward-difference sequence, together with the time axis corresponding to the nonzero points of the forward-difference sequence, form the start and end times of each relative-speed interval;
the odd-numbered intervals correspond to the vehicle state of the first interval, and the even-numbered intervals to the opposite state;
when two consecutive intervals of the same state are separated by less than 1 s, they are merged;
the initial action of the target vehicle and the host vehicle is taken to be lane following.
In this embodiment, integrating the extracted scene data in step S6 comprises the following steps:
Step S601, selecting 150-180 extracted scene data items;
Step S602, numbering the scene data to be inspected sequentially from 1;
Step S603, setting the number of scene data items, the acceptance limit, and the rejection limit for each inspection round according to the selected data volume and the expected accuracy.
In this embodiment, in step S6, inspecting the extracted data according to the numbers specifically comprises:
if the number of nonconforming items d1 found in the first inspection of the scene data is less than or equal to Ac1, the inspection result is accepted;
if d1 is greater than or equal to Re1, the inspection result is rejected;
if d1 lies between Ac1 and Re1, a second inspection is performed under the same scheme and the nonconforming counts of the two inspections are accumulated as d1 + d2;
if d1 + d2 is less than or equal to the acceptance limit Ac2, the inspection result is accepted;
if d1 + d2 is greater than or equal to the rejection limit Re2, the inspection result is rejected.
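The two-round decision rule above can be sketched as a small function. The limit values Ac1, Re1, Ac2, and Re2 are plan-dependent inputs that the text does not fix, so the numeric values in the usage example are illustrative assumptions only, not the patent's parameters.

```python
def two_round_inspection(d1, d2, ac1, re1, ac2, re2):
    """Two-round sampling inspection as described in step S6.

    d1, d2: nonconforming counts found in rounds 1 and 2.
    ac1/re1 and ac2/re2: acceptance/rejection limits for each round
    (plan-specific values, assumed to come from a double-sampling table).
    Returns "accept" or "reject".
    """
    if d1 <= ac1:
        return "accept"          # round 1 already acceptable
    if d1 >= re1:
        return "reject"          # round 1 already unacceptable
    # Otherwise a second sample is drawn; decide on the cumulative count.
    total = d1 + d2
    return "accept" if total <= ac2 else "reject"
```

For example, with assumed limits Ac1 = 2, Re1 = 5, Ac2 = 6, finding d1 = 3 defects triggers the second round, and the cumulative count d1 + d2 then decides the outcome.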
Based on the above method for extracting high-speed automatic driving function scenes, and with reference to fig. 1, this embodiment provides a concrete implementation with specific objects:
S1, data acquisition:
S101: a millimeter-wave radar, an intelligent camera, and a laser radar are fixed around the scene-collection vehicle, and objective data on surrounding traffic participants are collected in real time. The collected data satisfy the requirements of scene extraction and include:
host vehicle: speed, lateral acceleration, longitudinal acceleration, steering-wheel angle, heading angle, longitude, and latitude;
target vehicle: relative longitudinal distance to the host vehicle, relative lateral distance to the host vehicle, relative speed with respect to the host vehicle, absolute lateral acceleration of the target vehicle, and absolute longitudinal acceleration of the target vehicle.
S2: a data-partition sequence-number column is created from the host-vehicle speed and the number of surrounding targets, and the acquired data are partitioned according to this column so that each partition contains approximately the same number of data rows. The partitioning rules are as follows (flow shown in fig. 2):
S201: first, the data whose host-vehicle speed is nonzero are selected from all data, and the data within each run of consecutive time points are taken as a partition;
S202: partitions whose end time differs from the start time of the next partition by less than 1 s are merged;
S203: the partition-ending threshold is set to i = 10;
S204: the difference between each partition's row count and the maximum partition row count is computed; if it exceeds i%, each nonzero-speed interval is re-partitioned where the number of surrounding targets is less than 1;
S205: the difference between each partition's row count and the maximum is checked again; if it still exceeds i%, the data are re-partitioned where the number of surrounding vehicles is greater than 1 and the relative distance and relative speed remain stable for 2 s;
S206: the difference between each partition's row count and the maximum is checked once more; if it still exceeds i%, i is updated to 10 × (number of iterations);
S207: steps S204 to S206 are repeated until the condition is met;
if the final value of i exceeds 30, the data are cleaned, filtered, and re-partitioned so that the difference between each partition's row count and the maximum partition row count is below 30%.
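The first two partitioning rules (S201: runs of nonzero host-vehicle speed, S202: merging partitions separated by less than 1 s) can be sketched as follows. Function and variable names are illustrative, and the refinement rules S203 onward are not covered; this is a sketch under those assumptions, not the patent's implementation.

```python
import numpy as np

def moving_partitions(t, v, min_gap=1.0):
    """Return (start_idx, end_idx) partitions per steps S201-S202:
    contiguous runs of samples with nonzero speed become partitions, and
    adjacent partitions separated by less than `min_gap` seconds are merged.

    t: sample timestamps in seconds; v: host-vehicle speeds.
    """
    t = np.asarray(t, float)
    moving = np.asarray(v) != 0
    idx = np.flatnonzero(moving)          # indices of all moving samples
    if idx.size == 0:
        return []
    # Split wherever consecutive moving samples are not adjacent in time.
    breaks = np.flatnonzero(np.diff(idx) > 1)
    runs = np.split(idx, breaks + 1)
    parts = [(int(r[0]), int(r[-1])) for r in runs]
    # Merge partitions whose inter-partition time gap is below min_gap (S202).
    merged = [parts[0]]
    for s, e in parts[1:]:
        if t[s] - t[merged[-1][1]] < min_gap:
            merged[-1] = (merged[-1][0], e)
        else:
            merged.append((s, e))
    return merged
```

With 1 Hz samples the sub-second merge rule never fires, while with 10 Hz samples nearby runs collapse into one partition, which is the intended effect of S202.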
S3, data cleaning:
S301: the data-cleaning steps are executed for each partition;
S302: quadratic-spline interpolation up-sampling is applied to the host-vehicle speed, the absolute speed of the target vehicle, and the relative speed, raising the data sampling rate to at least 100 Hz. The interpolation method is as follows:
suppose 4 points, x0, x1, x2, x3, have 3 intervals, and require 3 quadratic splines, each quadratic spline being ax ^2+ bx + c, so total 9 unknowns are shown in FIG. 3.
In the figure, two endpoints of x0 and x3 both have a quadratic function to pass through, and 2 equations can be determined.
Two quadratic functions pass through two intermediate points of X1 and X2, and 4 equations can be determined.
The intermediate points must be connected, and it is necessary to ensure that the first derivatives of the left and right quadratic functions are equal, that is:
2a1x1+b1=2a2x2+b2
2a2x2+b2=2a3x3+b32 equations can be determined, in this case 8 equations.
Constrained with free boundaries, then 2a0x0+b0=0。
This gives 9 equations in the 9 unknowns; solving them simultaneously completes the up-sampling.
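The 9-equation system above can be assembled and solved numerically. The sketch below follows the setup in the text (pieces a_i x^2 + b_i x + c_i, interpolation at both endpoints of each interval, first-derivative continuity at the interior points, and a free boundary taken as zero derivative at x0); it is an illustration, not the patent's implementation.

```python
import numpy as np

def quadratic_spline_coeffs(x, y):
    """Solve the 9-unknown system for 3 quadratic pieces over 4 points:
    6 interpolation equations, 2 derivative-continuity equations at the
    interior points x1 and x2, and one free-boundary equation S1'(x0) = 0.
    Returns a (3, 3) array whose rows are (a, b, c) for each piece."""
    A = np.zeros((9, 9))
    rhs = np.zeros(9)
    row = 0
    for i in range(3):                      # piece i covers [x[i], x[i+1]]
        for j in (i, i + 1):                # it passes through both endpoints
            A[row, 3*i:3*i+3] = [x[j]**2, x[j], 1.0]
            rhs[row] = y[j]
            row += 1
    for i in (1, 2):                        # derivative continuity at x1, x2
        A[row, 3*(i-1):3*(i-1)+3] = [2*x[i], 1.0, 0.0]
        A[row, 3*i:3*i+3] = [-2*x[i], -1.0, 0.0]
        row += 1
    A[row, 0:3] = [2*x[0], 1.0, 0.0]        # free boundary: S1'(x0) = 0
    return np.linalg.solve(A, rhs).reshape(3, 3)
```

As a sanity check, sampling a single quadratic whose derivative vanishes at x0 (e.g. y = x^2 + 5 on x = 0, 1, 2, 3) should recover that quadratic on every piece.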
S303: a Gaussian template is constructed with standard deviation σ of the Gaussian kernel and mean μ = 15, truncated outside μ ± nσ with n = 4, as follows:
G(i) = (1 / (√(2π) σ)) · exp(-(i - μ)^2 / (2σ^2))
S304: the convolution operation is performed:
y(n) = Σ_i G(i) · h(n - i)
where y(n) is the updated value after filtering, G(i) is the Gaussian kernel function template, and h(n - i) is the original data.
S305: down-sampling is performed, resampling the data at equal intervals back to the original sampling frequency.
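The template-and-convolution cleaning of steps S303 and S304 can be sketched as follows. The values of sigma and the truncation half-width are illustrative choices, not the patent's exact parameters, and the kernel is normalized to unit gain so that constant signals pass through unchanged.

```python
import numpy as np

def gaussian_smooth(h, sigma=2.0, half_width=8):
    """Gaussian-kernel cleaning sketch: build a discrete Gaussian template G
    truncated at +/- half_width samples around the mean, normalize it, and
    compute y(n) = sum_i G(i) h(n - i) via convolution."""
    i = np.arange(-half_width, half_width + 1)
    G = np.exp(-(i**2) / (2 * sigma**2))
    G /= G.sum()                           # unit gain
    # Pad with edge values so the output keeps the original length.
    hp = np.pad(np.asarray(h, float), half_width, mode="edge")
    return np.convolve(hp, G, mode="valid")
```

Applied to a noisy speed trace, the output has visibly smaller sample-to-sample jumps while preserving the underlying trend, which is the purpose of the cleaning step.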
S4, front target vehicle identification:
S401, acquire the driving curvature of the host vehicle from the sensors, together with the steering wheel angle of the host vehicle;
S402, from the weighted average of the steering wheel angle and the driving curvature of the host vehicle, take the current driving track, with the vehicle center as the origin and extending along the forward direction of the host vehicle, as the predicted path;
S403, translate the predicted path radially by 1/2 lane width to each side to obtain the predicted driving area, as shown in FIG. 4.
S404, when a detected object enters the predicted driving area of the host vehicle, mark the object as a front target.
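A minimal geometric sketch of S403 and S404, assuming the predicted path is a circular arc of the weighted-average curvature and an illustrative 3.75 m lane width (both assumptions of this sketch):

```python
import math

def is_front_target(px, py, curvature, lane_width=3.75):
    """Mark an object as a front target when it falls inside the predicted
    driving area: the predicted path widened by half a lane width on each
    side. Vehicle at the origin, x forward, y to the left; the predicted
    path is a circular arc of the given curvature through the origin."""
    if abs(curvature) < 1e-9:
        lateral = py                        # straight-line predicted path
    else:
        r = 1.0 / curvature                 # arc centre at (0, r)
        lateral = math.copysign(1.0, r) * (abs(r) - math.hypot(px, py - r))
    return px > 0.0 and abs(lateral) <= lane_width / 2.0
```

An object 30 m ahead and 0.5 m off-centre is inside the area, while one a full lane to the side is not.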
S5, identifying functional scenes:
S501, classify the high-speed automatic driving function scenes: a situation in which a target vehicle exists in front of the host vehicle is defined as a following scene, and key scenes are extracted from the following scene, as shown in FIG. 5.
The principle of the scene extraction method is as follows:
The relative speed between the host vehicle and the front target vehicle is taken as the main extraction condition; when the relative speed falls in different intervals, the host vehicle and the front vehicle are defined to be in different interaction states. For example, when the relative speed is less than 0 km/h the host vehicle is overtaking the front vehicle, and when the relative speed is greater than 0 km/h the host vehicle is approaching the front vehicle. The different states are extracted as follows, as shown in FIG. 6:
create an identification column: when the relative speed is greater than 0 the column value is 0, otherwise 1;
take the forward difference of the identification column to obtain its derivative column, and set the tail value to 1;
take the backward difference of the identification column to obtain its reverse derivative column, and set the head value to 1;
screen the time points at which the derivative column and the reverse derivative column are non-zero: the non-zero points of the reverse derivative column give the start times, and the non-zero points of the derivative column give the end times, of each relative speed interval;
the odd-numbered relative speed intervals correspond to the host vehicle state of the first interval, and the even-numbered intervals to the opposite state;
when two consecutive intervals of the same state are separated by less than 1 s, merge them;
the initial states of the host vehicle and the target vehicle are both taken to be car-following.
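The identification-column procedure above can be sketched as follows. It is NumPy-based, and index-level details such as the merge test are this sketch's own choices:

```python
import numpy as np

def speed_state_intervals(rel_v, dt=0.1, min_dur=1.0):
    """Identification-column method: the flag is 0 where the relative speed
    is > 0 and 1 otherwise; interval start points come from the backward
    difference (head set to 1) and end points from the forward difference
    (tail set to 1); consecutive same-state intervals separated by a blip
    shorter than min_dur seconds are merged."""
    flag = np.where(np.asarray(rel_v, float) > 0, 0, 1)
    fwd = np.append(np.diff(flag), 1)   # "derivative column", tail value 1
    bwd = np.append(1, np.diff(flag))   # "reverse derivative column", head value 1
    starts = np.flatnonzero(bwd != 0)
    ends = np.flatnonzero(fwd != 0)
    merged = []
    for s, e, st in zip(starts, ends, flag[starts]):
        if merged and int(st) == merged[-1][2]:
            merged[-1][1] = int(e)      # same state resumes: extend it
        elif merged and (e - s + 1) * dt < min_dur:
            merged[-1][1] = int(e)      # short opposite-state blip: absorb
        else:
            merged.append([int(s), int(e), int(st)])
    return merged
```

Each returned triple is (start index, end index, state), with state 0 meaning relative speed greater than 0 over the interval.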
The following starting extraction method comprises the following steps:
Screen intervals in which the relative speed is greater than 0 km/h and the duration is greater than 1 s; within the interval the minimum host vehicle speed is less than 1 km/h, the maximum host vehicle speed is greater than 5 km/h, the average longitudinal acceleration of the host vehicle is greater than 0 m/s², the maximum absolute speed of the target vehicle is less than 160 km/h, the minimum absolute speed of the target vehicle is less than 2 km/h, the average longitudinal absolute acceleration of the target vehicle is greater than 0 m/s², and the minimum relative distance is less than 10 m.
The uniform-speed car following extraction method comprises the following steps:
Extract intervals in which the relative speed is within a range of ±5 km/h and the duration after merging is greater than 5 s; the minimum host vehicle speed is greater than 5 km/h, the difference between the maximum and minimum host vehicle speeds is less than 5 km/h, the maximum absolute steering wheel angle of the host vehicle is less than 30°, and the absolute speed of the target vehicle is less than 160 km/h.
The following acceleration extraction method comprises the following steps:
Extract intervals in which the relative speed is greater than 0 km/h and the duration after merging is greater than 1 s; the minimum host vehicle speed is greater than 5 km/h; the absolute steering wheel angle of the host vehicle is less than 30°; the average longitudinal acceleration of the host vehicle is greater than 0 m/s²; the average longitudinal absolute acceleration of the target vehicle is greater than 0 m/s²; the maximum absolute speed of the target vehicle is less than 160 km/h; the minimum absolute speed of the target vehicle is greater than 0 km/h.
The following deceleration extraction method comprises the following steps:
Extract intervals in which the relative speed is less than 0 km/h and the duration after merging is greater than 1 s; the minimum host vehicle speed is greater than 5 km/h; the maximum absolute steering wheel angle of the host vehicle is less than 30°; the average longitudinal acceleration of the host vehicle is less than 0 m/s²; the relative speed is less than -5 km/h; the average relative speed is greater than -60 km/h; the maximum absolute speed of the target vehicle is less than 160 km/h; the minimum absolute speed of the target vehicle is greater than 0 km/h.
The following braking extraction method comprises the following steps:
Extract intervals in which the relative speed is less than 0 km/h and the duration after merging is greater than 1 s; the minimum host vehicle speed is less than 1 km/h; the maximum host vehicle speed is greater than 5 km/h; the maximum absolute steering wheel angle of the host vehicle is less than 30°; the average longitudinal acceleration of the host vehicle is less than 0 m/s²; the maximum absolute speed of the target vehicle is less than 160 km/h; the minimum relative distance is less than 15 m; the average longitudinal absolute acceleration of the target vehicle is less than 0 m/s².
The lane-passing extraction rule is as follows:
The relative speed between the host vehicle and the vehicle to the right or left is greater than 0 km/h; the maximum absolute steering wheel angle of the host vehicle is less than 30°; the maximum absolute speed of the target vehicle is less than 160 km/h.
Curve driving extraction rule:
Extract intervals in which the radius of the host vehicle's driving track is less than 1 km; when two consecutive intervals are separated by less than 1 s, merge them.
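As one example of applying these rules, the uniform-speed following conditions can be checked per candidate interval roughly as follows (function name and array layout are illustrative; speeds in km/h, angles in degrees):

```python
import numpy as np

def is_uniform_following(ego_v, rel_v, steer_deg, target_v, dt=0.1):
    """Check one merged interval against the uniform-speed following rule:
    |relative speed| within 5 km/h, duration > 5 s, minimum ego speed
    > 5 km/h, ego speed spread < 5 km/h, |steering wheel angle| < 30 deg,
    and target absolute speed < 160 km/h."""
    ego_v = np.asarray(ego_v, float)
    return bool(len(ego_v) * dt > 5.0
                and np.max(np.abs(rel_v)) <= 5.0
                and ego_v.min() > 5.0
                and ego_v.max() - ego_v.min() < 5.0
                and np.max(np.abs(steer_deg)) < 30.0
                and np.max(target_v) < 160.0)
```

The other following rules (start, acceleration, deceleration, braking) follow the same shape with their respective thresholds.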
S502, the cut-in key scene is defined for data extraction as follows:
Definition: within a continuous time period, the overlap rate between the target vehicle and the host vehicle exceeds 30%, the lateral speed of the target vehicle exceeds a set threshold, the target vehicle is initially located at the front right or the right side, and the final overlap rate exceeds 90%.
Start time: the moment the overlap rate between the target object and the host vehicle first exceeds 30% (adjustable).
End time: the moment the overlap rate reaches 90% or more, or the distance between the target vehicle center and the host vehicle driving center is less than 0.5 m (adjustable) while the lateral speed is less than 0.3 m/s (adjustable).
Initial time: trace back from the start time to the moment the target object first appears, or to the moment its approach speed toward the host vehicle falls below 0.3 m/s, whichever is reached first.
Host vehicle braking start: the first press of the host vehicle brake pedal during the cut-in.
Host vehicle braking end: the last release of the host vehicle brake pedal during the cut-in.
As shown in FIG. 7, cut-in from the right of the host lane: the target vehicle at the start time is located at the front right or the right side.
Cut-in from the left of the host lane: the target vehicle at the start time is located at the front left or the left side.
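A simplified sketch of the start/end detection for the cut-in scene; the relative-distance criterion of the end time is omitted here for brevity, and the thresholds follow the adjustable defaults above:

```python
def cut_in_times(overlap, lateral_v, start_thr=0.30, end_thr=0.90):
    """Locate the start and end samples of a cut-in: start when the
    target/ego overlap rate first exceeds 30% (adjustable), end when it
    first reaches 90% or the lateral speed settles below 0.3 m/s.
    Returns (start_index, end_index); either may be None."""
    start = end = None
    for k, (ov, lv) in enumerate(zip(overlap, lateral_v)):
        if start is None:
            if ov > start_thr:
                start = k                   # overlap just crossed 30%
        elif ov >= end_thr or abs(lv) < 0.3:
            end = k                         # settled in front of the ego car
            break
    return start, end
```

The cut-out scene of S503 is the mirror image: start above 90% overlap, end when the overlap first falls below 0.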
S503, the cut-out key scene is defined as follows:
Definition: within a continuous time period, the overlap rate between the target vehicle and the host vehicle starts above 90%, the lateral speed of the target vehicle exceeds a set value, and the final overlap rate falls below 0%.
Start time: while the target vehicle travels in the host vehicle's lane, the moment its lateral speed rises above 0.3 m/s (adjustable), provided the final overlap rate falls below 0%.
End time: the moment the overlap rate between the target vehicle and the host vehicle first falls below 0.
As shown in FIG. 8, cut-out to the right of the host lane: at the end time the target vehicle is located at the front right or the right side of the host vehicle.
Cut-out to the left of the host lane: at the end time the target vehicle is located at the front left or the left side of the host vehicle.
S6, data extraction results:
S601, each extracted scene possesses a unique identifier, and the identifier is present in every row of the extracted data;
S602, the extracted data contains all of the original data information and uses a unified time axis.
S7, checking the data extraction results:
S701, integrate the extracted samples:
1. Select 150-180 extracted scene segments;
2. Number the scene segments to be checked sequentially from 1;
3. According to the sample size and the expected accuracy, set the number of samples, the acceptance limit and the rejection limit for each inspection round, expressed as:
(n1, n2 | Ac1, Re1; Ac2, Re2), wherein: n1 is the first-round sample size; n2 is the second-round sample size; Ac1 is the acceptance limit of the first inspection; Re1 is the rejection limit of the first inspection; Ac2 is the cumulative acceptance limit of the second inspection; Re2 is the cumulative rejection limit of the second inspection.
S702, randomly select n1 (and, if needed, n2) non-repeating integers from 1 to N; each value is the number of a segment to be checked;
S703, check the extracted data segments according to their numbers; the checking process is shown in FIG. 9;
S704, if the unqualified quantity d1 found in the first test sample is less than or equal to Ac1, the inspection result is regarded as acceptance;
S705, if the unqualified quantity d1 found in the first test sample is greater than or equal to Re1, the inspection result is regarded as rejection;
S706, if the unqualified quantity d1 found in the first test sample lies between Ac1 and Re1, carry out the second inspection according to the plan and accumulate the unqualified quantities of the two inspections, d1 + d2;
S707, if d1 + d2 is less than or equal to the acceptance limit Ac2, the inspection result is regarded as acceptance; if d1 + d2 is greater than or equal to the rejection limit Re2, the inspection result is regarded as rejection.
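The decision logic of this double sampling plan (with the acceptance branch for d1 ≤ Ac1 as stated in claim 10) reduces to a small pure function; the default limits below are illustrative only:

```python
def double_sampling_verdict(d1, d2, ac1=1, re1=3, ac2=4, re2=5):
    """(n1, n2 | Ac1, Re1; Ac2, Re2) double sampling plan decision.
    d1 and d2 are the unqualified counts found in the first and second
    random samples of extracted scene segments."""
    if d1 <= ac1:
        return "accept"                 # first sample passes outright
    if d1 >= re1:
        return "reject"                 # first sample fails outright
    total = d1 + d2                     # Ac1 < d1 < Re1: draw the second sample
    # compare the cumulative unqualified count with the second-round limits
    return "accept" if total <= ac2 else "reject"
```

For a well-formed plan Re2 = Ac2 + 1, so the cumulative count always yields a verdict after the second round.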
The functional scene extraction method suitable for high-speed automatic driving provided by the invention has been described in detail above. Specific examples are used herein to explain the principle and implementation of the invention, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, a person skilled in the art may, following the idea of the invention, make changes to the specific embodiments and the scope of application. In view of the above, the content of this specification should not be construed as limiting the invention.

Claims (10)

1. A scene extraction method suitable for a high-speed automatic driving function is characterized by comprising the following steps:
s1, fixing data acquisition equipment on a vehicle, and acquiring related data of the vehicle and a target vehicle;
S2, creating a data partition serial-number column according to the host vehicle speed and the number of surrounding target vehicles, and partitioning the acquired data according to that column;
step S3, performing data cleaning on each partition obtained in the step S2;
step S4, processing the driving parameters of the vehicle according to the data cleaned in the step S3 to obtain a predicted driving area of the vehicle, and marking the target object entering the predicted driving area of the vehicle as a front target object;
s5, classifying the high-speed automatic driving function scene, and extracting data of a following scene, a cut-in scene and a cut-out scene of the high-speed automatic driving function scene;
and S6, after integrating the extracted scene data, randomly selecting non-repeating N1 and N2 integers from 1 to N respectively; the corresponding values are the numbers to be checked, and the extracted scene data is checked according to those numbers.
2. The method for extracting scenes with the function of high-speed automatic driving according to claim 1, wherein in the step S1, the data acquisition equipment comprises millimeter wave radar, smart camera and laser radar.
3. The method as claimed in claim 1, wherein in step S1, the vehicle-related data includes vehicle speed, lateral acceleration, longitudinal acceleration, steering angle, heading angle, longitude and latitude;
the target vehicle related data comprises a relative longitudinal distance between the target vehicle and the vehicle, a relative transverse distance between the target vehicle and the vehicle, a relative speed between the target vehicle and the vehicle, an absolute transverse acceleration of the target vehicle and an absolute longitudinal acceleration of the target vehicle.
4. The method for extracting scenes with the function of high-speed automatic driving according to claim 1, wherein in the step S3, each partition performs data washing, comprising the following steps:
step S301, performing quadratic spline interpolation up-sampling on the host vehicle speed, the relative speed between the target vehicle and the host vehicle, and the absolute speed of the target vehicle;
step S302, constructing the standard deviation of the Gaussian kernel function and, after determining the mean value, performing the convolution operation;
step S303, down-sampling the host vehicle speed, the relative speed between the target vehicle and the host vehicle, and the absolute speed of the target vehicle at equal intervals back to the original data sampling frequency.
5. The method for extracting functional scenes for high-speed automatic driving according to claim 4, wherein in step S303, the convolution formula is:
y(n) = Σᵢ G(i)·h(n - i)
in the formula, y (n) is an updated value after filtering, G (i) is a Gaussian kernel function template, and h (n-i) is original data of the speed of the vehicle, the relative speed between the target vehicle and the absolute speed between the target vehicle and the vehicle.
6. The method according to claim 1, wherein in step S4, the step of processing the driving parameters of the vehicle to obtain the predicted driving area of the vehicle comprises the steps of:
step S401, acquiring the collected driving curvature of the vehicle and steering wheel angle data of the vehicle;
step S402, calculating a weighted average of the host vehicle driving curvature and the steering wheel angle data of the host vehicle, and taking the resulting driving track, with the vehicle center as the origin and extending along the host vehicle's forward direction, as the vehicle predicted path;
and step S403, respectively translating the vehicle predicted path to two sides along the radial direction by 1/2 lane width to obtain a vehicle predicted driving area.
7. The method for extracting scenes with the function of high-speed automatic driving according to claim 1, wherein in the step S5, the following scenes comprise following start, uniform-speed following, following acceleration, following deceleration, following braking, lane passing and curve driving.
8. The method for extracting scenes with the function of high-speed automatic driving according to claim 1, wherein in the step S5, the following scenes, cut-in scenes and cut-out scenes of the scenes with the function of high-speed automatic driving are extracted according to the following principles:
taking the relative speed of the target vehicle and the vehicle as an extraction condition, and when the relative speed is in different intervals, the target vehicle and the vehicle are in different interaction states;
creating an identification column for different interaction states of the target vehicle and the vehicle, wherein when the relative speed is greater than 0, the identification column is 0, otherwise, the identification column is 1;
derivation is carried out on the identification columns of the target vehicle and the vehicle in different interactive states to obtain a derivative column of the identification columns, and a tail value is set to be 1;
carrying out reverse derivation on the identification columns of the target vehicle and the vehicle in different interactive states to obtain a reverse derivation sequence of the identification columns, and setting a head value to be 1;
screening the time points at which the derivative column and the reverse derivative column are non-zero: the non-zero points of the reverse derivative column give the start times, and the non-zero points of the derivative column give the end times, of each relative speed interval;
the relative speed interval constituting the odd number order is the vehicle state corresponding to the first interval, and the even number order is opposite.
When the interval of two continuous same states is less than 1s, merging;
the initial states of the target vehicle and the host vehicle are both taken to be car-following.
9. The method for extracting scenes with the function of high-speed automatic driving according to claim 1, wherein the step S6 of integrating the extracted scene data comprises the following steps:
step S601, selecting 150-180 pieces of extracted scene data;
step S602, sequentially increasing the number of the scene data to be detected from 1;
step S603, setting the amount of scene data, the receiving limit, and the rejecting limit corresponding to the inspection round according to the selected amount of scene data and the expected accuracy.
10. The method for extracting scenes with the function of high-speed automatic driving according to claim 1, wherein in the step S6, the extracted data is checked according to the number, specifically:
if the unqualified quantity d1 found in the first-time inspection scene data is less than or equal to Ac1, the inspection result is considered to be received;
if the unqualified quantity d1 found in the first-time inspection scene data is greater than or equal to Re1, the inspection result is considered to be rejected;
if the unqualified quantity d1 found in the first inspection scene data is between Ac1 and Re1, performing second inspection according to the scheme, and accumulating the unqualified quantities of the two inspections, namely d1+ d2;
if the sum of d1 and d2 is less than or equal to the acceptance limit Ac2, the test result is considered as acceptance;
if the sum of d1 and d2 is greater than or equal to the rejection limit Re2, the test result is considered as rejection.
CN202210765745.1A 2022-07-01 2022-07-01 Functional scene extraction method suitable for high-speed automatic driving Pending CN115257803A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210765745.1A CN115257803A (en) 2022-07-01 2022-07-01 Functional scene extraction method suitable for high-speed automatic driving

Publications (1)

Publication Number Publication Date
CN115257803A true CN115257803A (en) 2022-11-01

Family

ID=83763099

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210765745.1A Pending CN115257803A (en) 2022-07-01 2022-07-01 Functional scene extraction method suitable for high-speed automatic driving

Country Status (1)

Country Link
CN (1) CN115257803A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117272690A (en) * 2023-11-21 2023-12-22 中汽智联技术有限公司 Method, equipment and medium for extracting dangerous cut-in scene of automatic driving vehicle
CN117272690B (en) * 2023-11-21 2024-02-23 中汽智联技术有限公司 Method, equipment and medium for extracting dangerous cut-in scene of automatic driving vehicle


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination