Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing an S-shaped road simulated obstacle early warning method based on three-dimensional camera networking.
The technical scheme is as follows: to achieve the above purpose, the invention adopts the following technical scheme. The S-shaped road simulated obstacle early warning method based on three-dimensional camera networking comprises the following steps:
step one, arranging a three-dimensional camera above each curve of an S-shaped road, with the lens of each three-dimensional camera facing downwards;
step two, establishing a three-dimensional coordinate system with each three-dimensional camera as the origin, and then shooting the simulated obstacles and the simulated automobile in the S-shaped road scene with the three-dimensional cameras to obtain field-of-view point cloud data;
step three, performing a random sample consensus estimation algorithm on the field-of-view point cloud data to obtain field-of-view point cloud data with the reference surface removed, and performing straight-through filtering and voxel filtering on the field-of-view point cloud data with the reference surface removed to obtain filtered field-of-view point cloud data;
step four, performing Euclidean clustering on the filtered field-of-view point cloud data to obtain simulated automobile point cloud data, independent simulated obstacle point cloud data and the number i of independent simulated obstacles;
step five, respectively carrying out bounding box estimation on the independent simulated obstacle point cloud data and the simulated automobile point cloud data to obtain the centroid coordinate P_i(x_i, y_i, z_i) of the independent simulated obstacle point cloud data, the centroid coordinate R(x_0, y_0, z_0) of the simulated automobile point cloud data and the shape of the independent simulated obstacle, wherein x_i, y_i and z_i respectively represent the X-axis, Y-axis and Z-axis coordinates of the centroid coordinate of the independent simulated obstacle point cloud data, and x_0, y_0 and z_0 respectively represent the X-axis, Y-axis and Z-axis coordinates of the centroid coordinate of the simulated automobile point cloud data;
step six, calculating the Euclidean distance D_i between each independent simulated obstacle point cloud data and the simulated automobile point cloud data by using the centroid coordinate of each independent simulated obstacle point cloud data and the centroid coordinate of the simulated automobile point cloud data;
step seven, sorting the D_i in ascending order to obtain the minimum Euclidean distance D_min and the maximum Euclidean distance D_max between the independent simulated obstacle point cloud data and the simulated automobile point cloud data;
step eight, removing the Z-axis coordinate values from the centroid coordinate of each independent simulated obstacle point cloud data and the centroid coordinate of the simulated automobile point cloud data to obtain the de-Z-axis centroid coordinate B_i(x_i, y_i) of each independent simulated obstacle point cloud data, containing only the X-axis and Y-axis coordinate values, and the de-Z-axis centroid coordinate A(x_0, y_0) of the simulated automobile point cloud data;
step nine, calculating the Euclidean distance d_i of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data on the XOY plane by using the de-Z-axis centroid coordinate of each independent simulated obstacle point cloud data, containing only the X-axis and Y-axis coordinate values, and the de-Z-axis centroid coordinate of the simulated automobile point cloud data;
step ten, calculating the azimuth angle of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data by using the de-Z-axis centroid coordinate of each independent simulated obstacle point cloud data, containing only the X-axis and Y-axis coordinate values, the de-Z-axis centroid coordinate of the simulated automobile point cloud data, and an arc tangent function;
step eleven, reading the driving speed V of the simulated automobile through the automobile control system of the simulated automobile, and calculating from the driving speed V the driving time T_i from the simulated automobile point cloud data to each independent simulated obstacle point cloud data;
step twelve, transmitting the number i of independent simulated obstacles, the shapes of the independent simulated obstacles, the minimum Euclidean distance D_min and maximum Euclidean distance D_max between the independent obstacle point cloud data and the simulated automobile point cloud data, the Euclidean distance d_i of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data on the XOY plane, the azimuth angle of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data, and the driving time T_i from the simulated automobile point cloud data to each independent simulated obstacle point cloud data to the simulated automobile control system, so as to give early warning in advance when the automobile runs on the S-shaped road.
The technical scheme is further designed as follows: the third step specifically comprises the following steps:
301) selecting an estimation model as the reference plane model;
302) randomly selecting part of the point clouds in the field-of-view point cloud data as an initial value, fitting the estimation model with this part of the point clouds, and judging from the error between this part of the point clouds and the reference surface model whether it consists of point clouds inside the reference surface model;
303) when the part of the point clouds consists of point clouds inside the reference surface model, testing all other field-of-view point cloud data with the reference surface model, and if a certain point cloud fits the reference surface model, expanding it into the part of the point clouds forming the initial value; when the part of the point clouds does not consist of point clouds inside the reference surface model, randomly re-selecting part of the point clouds in the field-of-view point cloud data as the initial value;
304) when at least 85% of the point clouds are classified as point clouds inside the reference surface model, regarding the estimation model as a reasonable estimation model;
305) storing the final reasonable estimation model as field-of-view point cloud data containing only the reference surface;
306) subtracting the field-of-view point cloud data containing only the reference surface from the field-of-view point cloud data to obtain the field-of-view point cloud data with the reference surface removed;
307) inputting the limiting ranges in the three coordinate axis directions of the straight-through filtering Cartesian coordinate system, and then performing straight-through filtering on the field-of-view point cloud data with the reference surface removed to obtain straight-through filtered field-of-view point cloud data;
308) creating a three-dimensional voxel grid for the straight-through filtered field-of-view point cloud data, and then replacing all points in each voxel grid with the center of gravity of the point set in that voxel grid to obtain the filtered field-of-view point cloud data.
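Steps 301) to 308) can be sketched in Python with NumPy. The following is a minimal illustration only, not the patented implementation; the plane distance threshold, inlier ratio and voxel leaf size are assumed values:

```python
import numpy as np

def ransac_plane(points, n_iters=100, dist_thresh=2.0, min_inlier_ratio=0.85):
    """Steps 302)-304): randomly sample 3 points, fit a plane, and accept the
    model once at least min_inlier_ratio of the cloud lies within dist_thresh."""
    rng = np.random.default_rng(0)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:            # degenerate (collinear) sample, retry
            continue
        normal /= norm
        dist = np.abs((points - p1) @ normal)
        mask = dist < dist_thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
        if best_mask.mean() >= min_inlier_ratio:  # step 304: model is reasonable
            break
    return best_mask               # True = point belongs to the reference surface

def pass_through(points, x_rng, y_rng, z_rng):
    """Step 307): keep only points inside the per-axis limiting ranges."""
    keep = np.ones(len(points), dtype=bool)
    for axis, (lo, hi) in enumerate((x_rng, y_rng, z_rng)):
        keep &= (points[:, axis] >= lo) & (points[:, axis] <= hi)
    return points[keep]

def voxel_filter(points, leaf=1.0):
    """Step 308): replace all points of each voxel by the voxel's center of gravity."""
    keys = np.floor(points / leaf).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.reshape(-1)
    sums = np.zeros((inv.max() + 1, 3))
    np.add.at(sums, inv, points)
    counts = np.bincount(inv)
    return sums / counts[:, None]
```

Removing the reference surface is then `points[~ransac_plane(points)]`, followed by `pass_through` and `voxel_filter` as in step three.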
The fourth step specifically comprises the following steps:
401) selecting a certain point of the filtered field-of-view point cloud data as the current point;
402) finding the k points nearest to the current point through a high-dimensional tree neighbor search algorithm;
403) clustering the points among the nearest points of the current point that fall within the set threshold number range into a set Q, and expanding the set Q until the number of points in the set Q no longer increases, thereby obtaining simulated automobile point cloud data or independent simulated obstacle point cloud data;
404) regarding a set with fewer than twenty thousand points as simulated automobile point cloud data and a set with twenty thousand or more points as independent simulated obstacle point cloud data, and counting to obtain the number of independent simulated obstacles.
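Steps 401) to 404) can be sketched as region-growing Euclidean clustering. This is an illustrative sketch with a brute-force neighbour search standing in for the high-dimensional (k-d) tree, and the radius and size thresholds are assumed, scaled-down values:

```python
import numpy as np

def euclidean_cluster(points, radius, min_size, max_size):
    """Steps 401)-403): grow a set Q from a seed point by repeatedly adding
    every point within `radius` of a point already in Q, until Q no longer
    grows; keep only sets whose size lies in the threshold number range."""
    n = len(points)
    visited = np.zeros(n, dtype=bool)
    clusters = []
    for seed in range(n):
        if visited[seed]:
            continue
        visited[seed] = True
        queue, members = [seed], [seed]
        while queue:
            idx = queue.pop()
            d = np.linalg.norm(points - points[idx], axis=1)
            for nb in np.flatnonzero((d <= radius) & ~visited):
                visited[nb] = True
                members.append(int(nb))
                queue.append(int(nb))
        if min_size <= len(members) <= max_size:
            clusters.append(sorted(members))
    return clusters

def classify(clusters, car_threshold):
    """Step 404): sets below the threshold are the simulated automobile,
    sets at or above it are independent simulated obstacles."""
    car = [c for c in clusters if len(c) < car_threshold]
    obstacles = [c for c in clusters if len(c) >= car_threshold]
    return car, obstacles
```

In the method itself the size window is five thousand to forty thousand points and the automobile threshold is twenty thousand; a k-d tree would replace the brute-force distance scan in practice.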
The fifth step specifically comprises the following steps:
501) respectively calculating the moment of inertia and the eccentricity of the simulated automobile point cloud data and the independent simulated obstacle point cloud data;
502) respectively calculating the eigenvalues and corresponding eigenvectors of the simulated automobile point cloud data and the independent simulated obstacle point cloud data to obtain the centroid coordinate R(x_0, y_0, z_0) of the simulated automobile point cloud data and the centroid coordinate P_i(x_i, y_i, z_i) of the independent obstacle point cloud data, and establishing an independent simulated obstacle point cloud data coordinate system taking the centroid coordinate of the independent simulated obstacle point cloud data as the origin and the eigenvectors as the coordinate axes;
503) respectively calculating the boundary values of each independent simulated obstacle point cloud data in its independent simulated obstacle point cloud data coordinate system, and adding a bounding box to the independent simulated obstacle point cloud data;
504) forming a directed bounding box for the independent simulated obstacle point cloud data by utilizing the moment of inertia, the eccentricity and the bounding box;
505) performing three-dimensional shape matching on the directed bounding box of the independent simulated obstacle point cloud data to obtain the shape of the independent simulated obstacle.
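The eigenvector-based part of steps 502) and 503) can be illustrated with a compact PCA sketch; this is an assumed minimal version and omits the moment-of-inertia and eccentricity descriptors of steps 501) and 504):

```python
import numpy as np

def oriented_bounding_box(points):
    """Eigen-decompose the covariance of the cloud, take the eigenvectors as
    the axes of the directed bounding box (step 502), and project the points
    onto those axes to get the boundary values (step 503)."""
    centroid = points.mean(axis=0)           # centroid coordinate of the cloud
    centered = points - centroid
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))  # ascending eigenvalues
    axes = eigvecs[:, ::-1]                  # main axis (largest eigenvalue) first
    local = centered @ axes                  # cloud in the box coordinate system
    extents = local.max(axis=0) - local.min(axis=0)  # boundary values per axis
    return centroid, axes, extents
```

The returned extents give the directed bounding box dimensions, which three-dimensional shape matching (step 505) would then classify as cuboid, cylinder, and so on.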
The sixth step is specifically as follows:
according to the formula

D_i = sqrt((x_i − x_0)^2 + (y_i − y_0)^2 + (z_i − z_0)^2)    (1)

the Euclidean distance D_i between each independent simulated obstacle point cloud data and the simulated automobile point cloud data is calculated.
The ninth step specifically comprises:
according to the formula

d_i = sqrt((x_i − x_0)^2 + (y_i − y_0)^2)    (2)

the Euclidean distance d_i of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data on the XOY plane is calculated.
The step ten specifically comprises the following steps:
1001) removing the Z-axis coordinate values from the centroid coordinate R(x_0, y_0, z_0) of the simulated automobile point cloud data and the centroid coordinate P_i(x_i, y_i, z_i) of the independent simulated obstacle point cloud data to obtain A(x_0, y_0) and B_i(x_i, y_i);
1002) utilizing the abscissas of A(x_0, y_0) and B_i(x_i, y_i) according to the following formula

Δx_i = x_0 − x_i    (3)

to calculate the abscissa difference Δx_i;
1003) utilizing the ordinates of A(x_0, y_0) and B_i(x_i, y_i) according to the following formula

Δy_i = y_0 − y_i    (4)

to calculate the ordinate difference Δy_i;
1004) according to the arc tangent function

θ_i = arctan(Δx_i / Δy_i)    (5)

calculating the azimuth angle θ_i of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data, wherein, when θ_i is greater than 0, it is judged that the independent simulated obstacle point cloud data is positioned θ_i degrees to the right front of the simulated automobile point cloud data; when θ_i is less than 0, it is judged that the independent simulated obstacle point cloud data is positioned θ_i degrees to the left front of the simulated automobile point cloud data; and when θ_i is equal to 0, it is judged that the independent simulated obstacle point cloud data is positioned directly in front of the simulated automobile point cloud data.
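Step 1004) and the left/right judgement can be illustrated with a short Python sketch. The sign convention (positive θ meaning right front) and the assumption that the ordinate difference is positive (obstacle ahead of the automobile) are assumptions of this sketch, since the original arc tangent figure is not reproduced here:

```python
import math

def azimuth_deg(car_xy, obstacle_xy):
    """Formulas (3), (4) and the arc tangent: dx = x_0 - x_i, dy = y_0 - y_i,
    theta = arctan(dx / dy) in degrees, with dy assumed positive."""
    dx = car_xy[0] - obstacle_xy[0]
    dy = car_xy[1] - obstacle_xy[1]
    theta = math.degrees(math.atan(dx / dy))
    if theta > 0:
        side = "right front"      # assumed convention: positive theta = right
    elif theta < 0:
        side = "left front"
    else:
        side = "directly ahead"
    return theta, side
```

For the first obstacle of the embodiment (Δx_1 = 84.392 mm, Δy_1 = 107.804 mm) this gives an azimuth of about 38 degrees.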
In step eleven, the driving time T_i from the simulated automobile point cloud data to each independent simulated obstacle point cloud data is calculated according to the formula

T_i = D_i / V    (6)
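The relation T_i = D_i / V, reconstructed here since the original formula image is not reproduced, can be checked with a one-line helper; the units (distance in mm, speed in mm/s, time in s) are an assumption of this sketch:

```python
def travel_time_s(distance_mm, speed_mm_per_s):
    """Step eleven as reconstructed: T_i = D_i / V."""
    if speed_mm_per_s <= 0:
        raise ValueError("driving speed V must be positive")
    return distance_mm / speed_mm_per_s
```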
The invention has the beneficial effects that:
compared with the prior art, the invention obtains field-of-view point cloud data of the simulated obstacles through the three-dimensional camera network and computer communication, and performs filtering and denoising, Euclidean clustering and Euclidean distance calculation on the obtained field-of-view point cloud data to produce accurate three-dimensional point clouds. The three-dimensional camera network, formed by a router and network cables, acquires the field-of-view point cloud data of the simulated obstacles on each road section and feeds them back to a computer. Because the three-dimensional cameras are sensitive to the depth information of targets in the scene, the method overcomes the defect that two-dimensional images cannot support road early warning in complex scenes and therefore cannot guarantee successful early warning and obstacle avoidance for automatic driving vehicles.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
The embodiment provides an S-shaped road simulated obstacle early warning method based on three-dimensional camera networking, which is as shown in fig. 1:
step 1: firstly, building a three-dimensional camera and an S-shaped road simulation obstacle early warning system, and then building communication between a networking three-dimensional camera and an automobile control system;
the method is realized by the following steps:
101, building an indoor three-dimensional camera and an S-shaped road simulated obstacle early warning system by using an iron rectangular three-dimensional camera lifting support, a three-dimensional camera box and a three-dimensional camera;
the three-dimensional camera used in this embodiment is a Wuxi Microvision sensing MEMS-3D camera, model PCA-P/S600, using an 850 nm infrared light source; fig. 2 shows a diagram of the indoor three-dimensional camera and S-shaped road simulated obstacle early warning system.
102, setting the height of the iron rectangular three-dimensional camera lifting support to 500 mm, fixing the three-dimensional camera in the three-dimensional camera box, fixing the three-dimensional camera box at the top of the support so that the three-dimensional camera lens faces downwards, and shooting the S-shaped road from an overlooking visual angle;
103, in the embodiment, the S-shaped road is provided with three curves, a first three-dimensional camera is placed at the first curve of the S-shaped road, a second three-dimensional camera is placed at the second curve of the S-shaped road, and a third three-dimensional camera is placed at the third curve of the S-shaped road;
104, connecting a first three-dimensional camera, a second three-dimensional camera and a third three-dimensional camera into a networking three-dimensional camera through a router and a network cable;
the working distance of the three-dimensional camera is 300-600mm, and the data interface adopts two modes of USB3.0 and GigE.
And 105, establishing communication connection between the networking three-dimensional camera and the automobile control system, and sending data acquired by the networking three-dimensional camera to the automobile control system through a network.
In this embodiment, the overlook viewing angle point cloud data of the simulated obstacle is obtained by operating a computer and three-dimensional camera adaptive software, and as shown in fig. 3, a communication protocol diagram of the networking three-dimensional camera is shown.
Step 2: establishing a system coordinate system with each three-dimensional camera as an origin, and then shooting S-shaped road simulation barriers and simulation automobiles in a scene by using the three-dimensional cameras to obtain field-of-view point cloud data;
the resolution of the field-of-view point cloud data obtained by shooting in this embodiment is 1280 × 1024, the acquisition time is 0.4-0.8s, the coordinate axis value of the three-dimensional camera system is (200, 400, 600), the coordinate system takes the three-dimensional camera as the origin, the three-dimensional camera parameters are initialized, 0 is calibrated to be a gray-scale image in a non-output PCD format, and 1 is calibrated to be a depth point cloud image in an output PCD format.
And step 3: performing a random sampling consistency estimation algorithm on the field-of-view point cloud data to obtain field-of-view point cloud data with a reference surface removed, and performing direct filtering and voxel filtering on the field-of-view point cloud data with the reference surface removed to obtain filtered field-of-view point cloud data;
the method is realized by the following steps:
step 301, selecting an estimation model as a reference plane model;
in the embodiment, the reference surface model is used as a plane model, and the estimation model is a model of the point cloud data of the field of view.
Step 302, randomly selecting part of point clouds in the field point cloud data as initial values, fitting an estimation model by using the part of point clouds, and judging whether the part of point clouds is the point clouds in the reference surface model or not by using errors of the part of point clouds and the reference surface model;
step 303, when part of the point clouds is the point clouds in the reference surface model, testing all other field-of-view point cloud data by using the reference surface model, if a certain point cloud is suitable for the reference surface model, considering the point cloud to be the point cloud in the reference surface model, and expanding the point cloud to the part of the point clouds of the initial value; when part of the point clouds are not the point clouds in the reference surface model, part of the point clouds in the view field point cloud data are selected again randomly as initial values;
step 304, when at least 85% of the point clouds are classified as point clouds in the reference surface model, the estimation model is considered reasonable;
step 305, storing the final reasonable estimation model as the field-of-view point cloud data only comprising the reference surface;
step 306, subtracting the view field point cloud data only containing the reference surface from the view field point cloud data to obtain the view field point cloud data without the reference surface;
the estimation model of the point cloud data of the field of view is optimized, and the type of the reference plane model is a plane model.
307, manually inputting three coordinate axis direction limiting ranges in a direct filtering Cartesian coordinate system, and then performing direct filtering on the field point cloud data without the reference surface to obtain field point cloud data after direct filtering;
the straight-through filtering adopted by the embodiment is implemented by setting threshold parameters in three coordinate axis directions in a Cartesian coordinate system, designating points in a parameter range to pass, and filtering points out of the parameter range, so as to realize basic filtering of the point cloud data of the field of view with a reference surface removed. Wherein the range of an X axis in a Cartesian coordinate system is-200 mm to 200mm, the range of a Y axis in the Cartesian coordinate system is-400 mm to 400mm, the range of a Z axis in the Cartesian coordinate system is 300mm to 400mm, and the parameter range of the direct filtering is 300mm to 400mm.
And 308, creating a three-dimensional voxel grid for the directly filtered field point cloud data, and replacing all points in each voxel grid with the gravity center of a point set in the voxel grid to obtain the filtered field point cloud data.
The voxel filtering adopted in this embodiment down-samples the data through a voxelized grid, reducing the field-of-view point cloud data and facilitating the subsequent point cloud segmentation and clustering. All points in each voxel grid are replaced by the center of gravity of the point set in that voxel grid, and the filtered target point cloud data is retained. The screening effects of three-dimensional voxel grid thresholds of different specifications are compared to determine the optimal de-noising threshold of the voxel filtering, which is 1 × 1.
And 4, step 4: carrying out Euclidean clustering on the filtering field point cloud data to obtain simulated automobile point cloud data, independent simulated obstacle point cloud data and independent simulated obstacle number i, wherein i represents a positive integer which is greater than or equal to 1 and less than or equal to 10;
step 401, selecting a certain point of the filtering field point cloud data as a current point;
step 402, finding k points nearest to the current point through a high-dimensional tree neighbor search algorithm;
step 403, clustering points which meet a set threshold number range in the nearest points of the current point into a set Q, and expanding the set Q until the number of points in the set Q is not increased any more to obtain simulated automobile point cloud data or independent simulated obstacle point cloud data;
the filtering view field point cloud data are stored, efficient indexing is carried out on the filtering view field point cloud data through high-dimensional tree neighbor searching, a minimum point cloud number range is set to be five thousand, a maximum point cloud number range is set to be forty thousand, when the point cloud data in the Euclidean clustering result contain five thousand to forty thousand, the point cloud data are determined to be simulated automobile point cloud data or independently simulated obstacle point cloud data, and otherwise the point cloud data are not determined to be simulated automobile point cloud data or independently simulated obstacle point cloud data.
And step 404, regarding the set Q with the number of the points less than twenty thousand as simulated automobile point cloud data, regarding the set Q with the number of the points more than or equal to twenty thousand as independent simulated obstacle point cloud data, and counting to obtain the number of the independent simulated obstacles.
In this embodiment, the simulated automobile point cloud data is removed from the filtered point cloud data, sets of at least twenty thousand points are taken as independent simulated obstacle point cloud data, and the number i of independent simulated obstacles obtained is 4.
Step 5: respectively carrying out bounding box estimation on the independent simulated obstacle point cloud data and the simulated automobile point cloud data to obtain the centroid coordinate P_i(x_i, y_i, z_i) of the independent simulated obstacle point cloud data, the centroid coordinate R(x_0, y_0, z_0) of the simulated automobile point cloud data and the shape of the independent simulated obstacle, wherein x_i, y_i and z_i respectively represent the X-axis, Y-axis and Z-axis coordinates of the centroid coordinate of the independent simulated obstacle point cloud data, x_i and y_i are each greater than 0 cm and less than or equal to 18 cm, and z_i is greater than 0 cm and less than or equal to 35 cm; x_0, y_0 and z_0 respectively represent the X-axis, Y-axis and Z-axis coordinates of the centroid coordinate of the simulated automobile point cloud data, x_0 and y_0 are each greater than 0 cm and less than or equal to 18 cm, and z_0 is greater than 0 cm and less than or equal to 35 cm;
the method is realized by the following steps:
step 501, respectively calculating inertia moment and eccentricity of the simulated automobile point cloud data and the independent simulated obstacle point cloud data;
step 502, respectively calculating the eigenvalues and corresponding eigenvectors of the simulated automobile point cloud data and the independent simulated obstacle point cloud data to obtain the centroid coordinate R(x_0, y_0, z_0) of the simulated automobile point cloud data and the centroid coordinate P_i(x_i, y_i, z_i) of the independent obstacle point cloud data, and establishing an independent simulated obstacle point cloud data coordinate system taking the centroid coordinate of the independent simulated obstacle point cloud data as the origin and the eigenvectors as the coordinate axes;
in this embodiment, the main axis direction of the directional bounding box of the independent simulated obstacle point cloud data is determined, the X-axis component and the Y-axis component of the coordinate of the independent simulated obstacle point cloud data are obtained, and the covariance matrix, the eigenvalue, and the corresponding eigenvector are calculated, where the eigenvector corresponding to the largest eigenvalue is the main axis direction of the directional bounding box of the independent simulated obstacle point cloud data, that is, the coordinate axis of the independent simulated obstacle coordinate system.
Step 503, respectively calculating a boundary value of each independent simulated obstacle point cloud data in each independent simulated obstacle point cloud data coordinate system, and adding a boundary frame for the independent simulated obstacle point cloud data;
in the embodiment, a direction vector is determined in each independent simulated obstacle point cloud data coordinate system, coordinate points of independent simulated obstacle point cloud data are projected onto the direction vector, the maximum value and the minimum value of X-axis and Y-axis components of each independent simulated obstacle point cloud data coordinate in each direction are calculated, namely the boundary value of each independent simulated obstacle point cloud data, and then a boundary frame is added to the independent simulated obstacle point cloud data, wherein a directional bounding box range is set to be 2 × 2 × 2, and the iteration times are 300 times.
Step 504, forming a directed bounding box for independently simulating the cloud data of the obstacle points by using the inertia moment, the eccentricity and the bounding box;
and 505, performing three-dimensional shape matching on the directed bounding box of the independent simulated obstacle point cloud data to obtain the shape of the independent simulated obstacle.
In this embodiment, different independent obstacle shapes are distinguished for different gray values of each independent simulated obstacle point cloud data, and as shown in fig. 4, the independent simulated obstacle point cloud data is a directed bounding box graph, where a first obstacle is a cuboid, a second obstacle is a cylinder, a third obstacle is a cuboid, and a fourth obstacle is a cuboid.
Step 6: calculating the Euclidean distance D of each independent obstacle point cloud data and the simulated automobile point cloud data by using the mass center coordinate of each independent obstacle point cloud data and the mass center coordinate of the simulated automobile point cloud data i Wherein D is i More than or equal to 1 cm and less than or equal to 25 cm;
the method is realized by the following steps:
According to the formula

D_i = sqrt((x_i − x_0)^2 + (y_i − y_0)^2 + (z_i − z_0)^2)    (1)

the Euclidean distance D_i between each independent obstacle point cloud data and the simulated automobile point cloud data is calculated, wherein sqrt(·) represents taking the square root and (·)^2 represents the square.
In this embodiment, the centroid coordinate of the simulated automobile point cloud data is (180.067, 195.015, 56.724) (mm), the centroid coordinate of the first simulated obstacle point cloud data is (95.675, 87.211, 37.316) (mm), the centroid coordinate of the second simulated obstacle point cloud data is (121.621, 38.961, 45.879) (mm), the centroid coordinate of the third simulated obstacle point cloud data is (56.320, 126.597, 69.123) (mm), and the centroid coordinate of the fourth simulated obstacle point cloud data is (150.875, 99.110, 54.449) (mm). The number of Euclidean distance iterations is set to 400, and the Euclidean distance D_i between each independent simulated obstacle point cloud data and the simulated automobile point cloud data is obtained and stored, wherein D_1 = 138.276 mm, D_2 = 180.432 mm, D_3 = 141.056 mm and D_4 = 100.272 mm.
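These distances can be recomputed from the printed centroids by applying the three-dimensional Euclidean distance formula directly. The sketch below is illustrative; recomputation reproduces D_1 and D_4 to the millimetre and the ascending order D_4 < D_1 < D_3 < D_2, while the recomputed D_2 and D_3 differ somewhat from the printed values, presumably because the stored centroids carry more precision than the printed ones:

```python
import numpy as np

# Printed centroid coordinates from this embodiment (mm)
car = np.array([180.067, 195.015, 56.724])
obstacles = np.array([
    [95.675,  87.211, 37.316],   # first simulated obstacle
    [121.621, 38.961, 45.879],   # second
    [56.320, 126.597, 69.123],   # third
    [150.875, 99.110, 54.449],   # fourth
])

# D_i = sqrt((x_i - x_0)^2 + (y_i - y_0)^2 + (z_i - z_0)^2)
D = np.linalg.norm(obstacles - car, axis=1)

# Step 7: ascending sort gives D_min = D[order[0]] and D_max = D[order[-1]]
order = np.argsort(D)
```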
Step 7: sorting the D_i in ascending order to obtain the minimum Euclidean distance D_min and the maximum Euclidean distance D_max between the independent obstacle point cloud data and the simulated automobile point cloud data;
In this example, D_4 < D_1 < D_3 < D_2, giving the minimum Euclidean distance D_min = D_4, namely 100.272 mm, and the maximum Euclidean distance D_max = D_2, namely 180.432 mm. Fig. 5 shows the distance information maps of the second and fourth simulated obstacle point cloud data.
Step 8: removing the Z-axis coordinate values from the centroid coordinate of each independent obstacle point cloud data and the centroid coordinate of the simulated automobile point cloud data to obtain the de-Z-axis centroid coordinate B_i(x_i, y_i) of each independent obstacle point cloud data, containing only the X-axis and Y-axis coordinate values, and the de-Z-axis centroid coordinate A(x_0, y_0) of the simulated automobile point cloud data;
In this embodiment, the de-Z-axis centroid coordinate of the simulated automobile point cloud data containing only the X-axis and Y-axis coordinate values is A(180.067, 195.015) (mm), and the de-Z-axis centroid coordinates of the first to fourth simulated obstacle point cloud data containing only the X-axis and Y-axis coordinate values are B_1(95.675, 87.211) (mm), B_2(121.621, 38.961) (mm), B_3(56.320, 126.597) (mm) and B_4(150.875, 99.110) (mm).
Step 9: calculating the Euclidean distance d_i of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data on the XOY plane by using the de-Z-axis centroid coordinate of each independent obstacle point cloud data, containing only the X-axis and Y-axis coordinate values, and the de-Z-axis centroid coordinate of the simulated automobile point cloud data, wherein d_i is greater than or equal to 1 cm and less than or equal to 20 cm;
the method is realized by the following steps:
According to the formula

d_i = sqrt((x_i − x_0)^2 + (y_i − y_0)^2)    (2)

the Euclidean distance d_i of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data on the XOY plane is calculated;
The Euclidean distances on the XOY plane obtained in this embodiment are d_1 = 137.437 mm, d_2 = 167.722 mm, d_3 = 141.905 mm and d_4 = 100.578 mm.
Step 10: calculating to obtain an azimuth angle of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data by using a Z-axis-removed centroid coordinate of each independent obstacle point cloud data only containing an X-axis coordinate value and a Y-axis coordinate value and a Z-axis-removed centroid coordinate and an arc tangent function of the simulated automobile point cloud data;
Step 1001: removing the Z-axis coordinate value from the centroid coordinate R(x_0, y_0, z_0) of the simulated automobile point cloud data and from the centroid coordinate P_i(x_i, y_i, z_i) of each simulated obstacle point cloud data to obtain A(x_0, y_0) and B_i(x_i, y_i);
Step 1002: using the abscissas of A(x_0, y_0) and B_i(x_i, y_i), according to
Δx_i = x_0 - x_i (3)
calculating the transverse coordinate difference Δx_i, where -20 cm ≤ Δx_i ≤ +20 cm;
In this embodiment, the transverse coordinate difference of x_0 and x_1 is Δx_1 = 84.392 mm, that of x_0 and x_2 is Δx_2 = 58.446 mm, that of x_0 and x_3 is Δx_3 = 123.747 mm, and that of x_0 and x_4 is Δx_4 = 29.192 mm.
Step 1003: using the ordinates of A(x_0, y_0) and B_i(x_i, y_i), according to
Δy_i = y_0 - y_i (4)
calculating the longitudinal coordinate difference Δy_i, where -20 cm ≤ Δy_i ≤ +20 cm;
In this embodiment, the longitudinal coordinate difference of y_0 and y_1 is Δy_1 = 107.804 mm, that of y_0 and y_2 is Δy_2 = 156.054 mm, that of y_0 and y_3 is Δy_3 = 68.418 mm, and that of y_0 and y_4 is Δy_4 = 95.905 mm.
Step 1004: according to the arc tangent function
θ_i = arctan(Δy_i / Δx_i)
calculating the azimuth angle θ_i of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data, where ∈ denotes the belonging operation; when Δx_i ∈ (0, +20] cm, it is judged that the simulated obstacle point cloud data is positioned θ_i degrees to the front right of the simulated automobile point cloud data; when Δx_i ∈ [-20, 0) cm, it is judged that the simulated obstacle point cloud data is positioned θ_i degrees to the front left of the simulated automobile point cloud data; when Δx_i is equal to 0, it is judged that the simulated obstacle point cloud data is positioned directly in front of the simulated automobile point cloud data.
This embodiment shows that the first simulated obstacle point cloud data is positioned 51.9° to the front right of the simulated automobile point cloud data, namely θ_1 = 51.9°; the second, 69.4° to the front right, namely θ_2 = 69.4°; the third, 28.9° to the front right, namely θ_3 = 28.9°; and the fourth, 73.0° to the front right, namely θ_4 = 73.0°.
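Steps 1001 to 1004 can be sketched as below. The arc tangent of Δy_i/Δx_i reproduces the embodiment's four azimuth angles to within about 0.1° (the residual differences again come from the rounded centroids), and the sign of Δx_i decides front right versus front left; the function and variable names are illustrative:

```python
import math

A = (180.067, 195.015)                        # automobile de-Z-axis centroid (mm)
B = [(95.675, 87.211), (121.621, 38.961),
     (56.320, 126.597), (150.875, 99.110)]    # obstacle de-Z-axis centroids (mm)

def azimuth(a, b):
    """Azimuth angle theta_i (degrees) and the side on which the obstacle lies."""
    dx = a[0] - b[0]                          # eq. (3): transverse difference
    dy = a[1] - b[1]                          # eq. (4): longitudinal difference
    if dx == 0:
        return 90.0, "directly ahead"         # obstacle straight ahead of the vehicle
    theta = math.degrees(math.atan(dy / dx))  # arc tangent function of step 1004
    return theta, ("front right" if dx > 0 else "front left")

results = [azimuth(A, b) for b in B]          # [(theta_i, side), ...]
```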
Step 11: reading in a simulated automobile running speed V through an automobile control system, wherein V represents the simulated automobile running speed of more than or equal to 1 cm per second and less than or equal to 5 cm per second according to
Calculating the driving time T from the simulated automobile point cloud data to each independent simulated obstacle point cloud data i Wherein, T i 4 seconds or more and 20 seconds or less;
the simulated automobile running speed is 3 cm per second in the embodiment, and T is obtained by calculation 1 =4.58 seconds, T 2 =5.59 seconds, T 3 =4.73 seconds, T 4 =4.33 seconds.
Step 12: the number i of the independent simulated obstacles, the shapes of the independent simulated obstacles, the minimum Euclidean distance D of the cloud data of each independent obstacle point and the simulated automobile point cloud data min And a maximum Euclidean distance D max And the Euclidean distance d of the cloud data of each independent simulated obstacle point relative to the cloud data of the simulated automobile point on an XOY plane i Azimuth angle of each independent simulated obstacle point cloud data relative to simulated automobile point cloud data and driving time T from the simulated automobile point cloud data to each independent simulated obstacle point cloud data i The data are transmitted to an automobile control system, so that the automobile can be early warned in advance when running on the S-shaped road;
In this embodiment, the obtained number of independent simulated obstacles is i = 4; the first obstacle is a cuboid, the second a cylinder, the third a cuboid and the fourth a cuboid; D_1 = 138.276 mm, D_2 = 180.432 mm, D_3 = 141.056 mm, D_4 = 100.272 mm; D_max = 180.432 mm, D_min = 100.272 mm; d_1 = 137.437 mm, d_2 = 167.722 mm, d_3 = 141.905 mm, d_4 = 100.578 mm; T_1 = 4.58 s, T_2 = 5.59 s, T_3 = 4.73 s, T_4 = 4.33 s; the azimuth angle of the first simulated obstacle point cloud data is θ_1 = 51.9° to the front right, that of the second is θ_2 = 69.4° to the front right, that of the third is θ_3 = 28.9° to the front right, and that of the fourth is θ_4 = 73.0° to the front right. This road condition information is sent to the automobile control system through the networked three-dimensional cameras, so that the automobile can give an early warning in advance for the simulated obstacles.
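The road condition message of step 12 could be bundled as a simple record before transmission to the control system; the patent does not specify a message format, so every field name below is illustrative, with the values taken from the embodiment:

```python
# Step 12 sketch: assemble the per-obstacle warning data for the control system.
# Field names are illustrative; values are the embodiment's.
message = {
    "obstacle_count": 4,
    "shapes": ["cuboid", "cylinder", "cuboid", "cuboid"],
    "D_min_mm": 100.272,
    "D_max_mm": 180.432,
    "distances_mm": [137.437, 167.722, 141.905, 100.578],
    "azimuths_deg": [51.9, 69.4, 28.9, 73.0],   # all to the front right
    "times_s": [4.58, 5.59, 4.73, 4.33],
}
```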
The technical solutions of the present invention are not limited to the above embodiment; all technical solutions obtained by equivalent substitution fall within the protection scope of the present invention.