CN114446092B - S-shaped road simulated obstacle early warning method based on three-dimensional camera networking - Google Patents

S-shaped road simulated obstacle early warning method based on three-dimensional camera networking

Info

Publication number
CN114446092B
CN114446092B
Authority
CN
China
Prior art keywords
cloud data
point cloud
simulated
independent
obstacle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210060614.3A
Other languages
Chinese (zh)
Other versions
CN114446092A (en)
Inventor
赵东
王中杰
王宇杰
曹毓铧
王青
陶旭
张见
杨成东
李晨
汪磊
王新蕾
苏琳琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dragon Totem Technology Hefei Co ltd
Shenzhen Dragon Totem Technology Achievement Transformation Co ltd
Original Assignee
Wuxi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuxi University
Priority to CN202210060614.3A
Publication of CN114446092A
Application granted
Publication of CN114446092B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/16 Anti-collision systems
    • G08G1/165 Anti-collision systems for passive traffic, e.g. including static obstacles, trees
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125 Traffic data processing
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/04 Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an S-shaped road simulated obstacle early warning method based on three-dimensional camera networking, belonging to the technical field of intelligent automobile obstacle detection and early warning. First, networked three-dimensional cameras are set up to communicate with an automobile control system. The three-dimensional cameras then shoot the S-shaped road simulated obstacles and a simulated automobile from a top-down viewing angle to obtain field-of-view point cloud data, and a system coordinate system is established with each three-dimensional camera as the origin. The reference surface and noise are removed by random sample consensus (RANSAC) estimation, pass-through filtering and voxel filtering. Independent simulated obstacle point cloud data are then obtained by Euclidean clustering and bounding-box estimation, the road condition state information of the simulated obstacles is calculated, and the information is sent to the automobile control system for early warning. Because the three-dimensional camera shoots the S-shaped road simulated obstacles without an external light source and directly obtains depth information, the invention overcomes the defect of the prior art that an intelligent automobile cannot obtain obstacle depth information under adverse light-source and weather conditions.

Description

S-shaped road simulated obstacle early warning method based on three-dimensional camera networking
Technical Field
The invention belongs to the field of intelligent automobile obstacle early warning, and particularly relates to an S-shaped road simulated obstacle early warning method based on three-dimensional camera networking.
Background
With the rise of artificial intelligence technology, motion path planning for automatic driving vehicles has received increasing attention; early warning and obstacle avoidance planning is a key part of an automatic driving vehicle and is of great significance to its research. The existing highway simulated obstacle early warning technology in China has two defects: it is disturbed by environments with no light source or low visibility and therefore cannot send accurate and timely warnings to the automobile; and two-dimensional images cannot be acquired for road early warning in complex scenes, so the automatic driving vehicle cannot be successfully warned and cannot avoid obstacles.
Disclosure of Invention
The invention aims to: aiming at the defects in the prior art, the invention provides an S-shaped road simulated obstacle early warning method based on three-dimensional camera networking.
The technical scheme is as follows: in order to achieve the above purpose, the invention adopts the following technical scheme. The S-shaped road simulated obstacle early warning method based on three-dimensional camera networking is characterized by comprising the following steps:
step one, arranging a three-dimensional camera above each curve of an S-shaped road, and arranging a lens of the three-dimensional camera downwards;
step two, establishing a three-dimensional coordinate system with each three-dimensional camera as an origin, and then shooting the simulated obstacles and the simulated automobile in the S-shaped highway scene with the three-dimensional cameras to obtain field-of-view point cloud data;
step three, performing a random sample consensus (RANSAC) estimation algorithm on the field-of-view point cloud data to obtain field-of-view point cloud data with the reference surface removed, and performing pass-through filtering and voxel filtering on the field-of-view point cloud data with the reference surface removed to obtain filtered field-of-view point cloud data;
step four, performing Euclidean clustering on the filtered field-of-view point cloud data to obtain simulated automobile point cloud data, independent simulated obstacle point cloud data and the number of independent simulated obstacles i;
step five, respectively carrying out bounding-box estimation on the independent simulated obstacle point cloud data and the simulated automobile point cloud data to obtain the centroid coordinates P_i(x_i, y_i, z_i) of the independent simulated obstacle point cloud data, the centroid coordinates R(x_0, y_0, z_0) of the simulated automobile point cloud data, and the shape of each independent simulated obstacle, wherein x_i, y_i and z_i respectively represent the X-axis, Y-axis and Z-axis coordinates of the centroid of the independent simulated obstacle point cloud data, and x_0, y_0 and z_0 respectively represent the X-axis, Y-axis and Z-axis coordinates of the centroid of the simulated automobile point cloud data;
step six, calculating the Euclidean distance D_i between each independent simulated obstacle point cloud data and the simulated automobile point cloud data from the centroid coordinates of each independent simulated obstacle point cloud data and the centroid coordinates of the simulated automobile point cloud data;
step seven, sorting the D_i in ascending order to obtain the minimum Euclidean distance D_min and the maximum Euclidean distance D_max between the independent simulated obstacle point cloud data and the simulated automobile point cloud data;
step eight, removing the Z-axis coordinate value from the centroid coordinates of each independent simulated obstacle point cloud data and from the centroid coordinates of the simulated automobile point cloud data to obtain the de-Z-axis centroid coordinates B_i(x_i, y_i) of each independent simulated obstacle point cloud data, containing only X-axis and Y-axis coordinate values, and the de-Z-axis centroid coordinates A(x_0, y_0) of the simulated automobile point cloud data;
step nine, calculating the Euclidean distance d_i of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data on the XOY plane from the de-Z-axis centroid coordinates of each independent simulated obstacle point cloud data, containing only X-axis and Y-axis coordinate values, and the de-Z-axis centroid coordinates of the simulated automobile point cloud data;
step ten, calculating the azimuth angle of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data from the de-Z-axis centroid coordinates of each independent simulated obstacle point cloud data, containing only X-axis and Y-axis coordinate values, the de-Z-axis centroid coordinates of the simulated automobile point cloud data, and the arctangent function;
step eleven, reading the driving speed V of the simulated automobile through its automobile control system, and calculating the driving time T_i from the simulated automobile point cloud data to each independent simulated obstacle point cloud data according to V;
step twelve, transmitting the number of independent simulated obstacles i, the shape of each independent simulated obstacle, the minimum Euclidean distance D_min and maximum Euclidean distance D_max between the independent obstacle point cloud data and the simulated automobile point cloud data, the Euclidean distance d_i of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data on the XOY plane, the azimuth angle of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data, and the driving time T_i from the simulated automobile point cloud data to each independent simulated obstacle point cloud data to the simulated automobile control system, so that the automobile is warned in advance when running on the S-shaped road.
The technical scheme is further designed as follows: the third step specifically comprises the following steps:
301) selecting an estimation model as a reference surface model;
302) randomly selecting part of the point clouds in the field-of-view point cloud data as an initial value, fitting the estimation model with this part of the point clouds, and judging from the error between this part of the point clouds and the reference surface model whether it belongs to the point clouds inside the reference surface model;
303) when the part of the point clouds belongs to the point clouds inside the reference surface model, testing all other field-of-view point cloud data with the reference surface model, and if a point cloud fits the reference surface model, expanding it into the initial-value part of the point clouds; when the part of the point clouds does not belong to the point clouds inside the reference surface model, randomly re-selecting part of the point clouds in the field-of-view point cloud data as the initial value;
304) when at least 85% of the point clouds are classified as point clouds inside the reference surface model, taking the estimation model as a reasonable estimation model;
305) storing the final reasonable estimation model as field-of-view point cloud data containing only the reference surface;
306) subtracting the field-of-view point cloud data containing only the reference surface from the field-of-view point cloud data to obtain the field-of-view point cloud data with the reference surface removed;
307) inputting the limited ranges in the three coordinate-axis directions of the pass-through-filtering Cartesian coordinate system, and then performing pass-through filtering on the field-of-view point cloud data with the reference surface removed to obtain pass-through-filtered field-of-view point cloud data;
308) creating a three-dimensional voxel grid for the pass-through-filtered field-of-view point cloud data, and then replacing all points in each voxel grid with the center of gravity of the point set in that voxel grid to obtain the filtered field-of-view point cloud data.
The fourth step specifically comprises the following steps:
401) selecting a point of the filtered field-of-view point cloud data as the current point;
402) finding the k points nearest to the current point through a k-d tree (high-dimensional tree) neighbour search algorithm;
403) clustering the points among the current point's nearest neighbours that meet the set threshold number range into a set Q, and expanding the set Q until the number of points in it no longer increases, obtaining simulated automobile point cloud data or independent simulated obstacle point cloud data;
404) regarding a set with fewer than twenty thousand points as simulated automobile point cloud data and a set with twenty thousand points or more as independent simulated obstacle point cloud data, and counting to obtain the number of independent simulated obstacles.
The fifth step specifically comprises the following steps:
501) respectively calculating the moment of inertia and eccentricity of the simulated automobile point cloud data and the independent simulated obstacle point cloud data;
502) respectively calculating the eigenvalues and corresponding eigenvectors of the simulated automobile point cloud data and the independent simulated obstacle point cloud data to obtain the centroid coordinates R(x_0, y_0, z_0) of the simulated automobile point cloud data and the centroid coordinates P_i(x_i, y_i, z_i) of the independent obstacle point cloud data, and establishing an independent simulated obstacle point cloud data coordinate system with the centroid coordinates of the independent simulated obstacle point cloud data as the origin and the eigenvectors as the coordinate axes;
503) respectively calculating the boundary values of each independent simulated obstacle point cloud data in its own coordinate system, and adding a bounding box to the independent simulated obstacle point cloud data;
504) forming an oriented bounding box of the independent simulated obstacle point cloud data from the moment of inertia, the eccentricity and the bounding box;
505) carrying out three-dimensional shape matching on the oriented bounding box of the independent simulated obstacle point cloud data to obtain the shape of the independent simulated obstacle.
The sixth step is specifically as follows:
according to the formula
D_i = √[(x_0 - x_i)² + (y_0 - y_i)² + (z_0 - z_i)²]  (1)
the Euclidean distance D_i between each independent simulated obstacle point cloud data and the simulated automobile point cloud data is calculated.
The ninth step specifically comprises:
according to the formula
d_i = √[(x_0 - x_i)² + (y_0 - y_i)²]  (2)
the Euclidean distance d_i of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data on the XOY plane is calculated.
The step ten specifically comprises the following steps:
1001) removing the Z-axis coordinate value from the centroid coordinates R(x_0, y_0, z_0) of the simulated automobile point cloud data and the centroid coordinates P_i(x_i, y_i, z_i) of the independent simulated obstacle point cloud data to obtain A(x_0, y_0) and B_i(x_i, y_i);
1002) using the abscissas of A(x_0, y_0) and B_i(x_i, y_i), according to the formula
Δx_i = x_0 - x_i  (3)
calculating the abscissa difference Δx_i;
1003) using the ordinates of A(x_0, y_0) and B_i(x_i, y_i), according to the formula
Δy_i = y_0 - y_i  (4)
calculating the ordinate difference Δy_i;
1004) according to the arctangent function
θ_i = arctan(Δy_i / Δx_i)  (5)
calculating the azimuth angle θ_i of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data, wherein θ_i ∈ [0°, 90°]; when Δx_i > 0, the simulated obstacle point cloud data is judged to be located θ_i degrees to the right front of the simulated automobile point cloud data; when Δx_i < 0, the simulated obstacle point cloud data is judged to be located θ_i degrees to the left front of the simulated automobile point cloud data; when θ_i equals 0, the independent simulated obstacle point cloud data is judged to be located directly in front of the simulated automobile point cloud data.
In the step eleven, according to the formula
T_i = d_i / V  (6)
the driving time T_i from the simulated automobile point cloud data to each independent simulated obstacle point cloud data is calculated.
The invention has the beneficial effects that:
compared with the prior art, the method and the device have the advantages that the three-dimensional camera networking and the computer communication are utilized to obtain the view field point cloud data of the simulated barrier, the obtained view field point cloud data are subjected to filtering denoising, euclidean clustering and Euclidean distance calculation, and then the accurate three-dimensional data point cloud is obtained. The method comprises the steps of respectively obtaining the view field point cloud data of the simulated obstacles of each road section by utilizing a three-dimensional camera network, forming the three-dimensional camera network by utilizing a router and a network cable, feeding the view field point cloud data of the simulated obstacles of each road section back to a computer, and enabling the three-dimensional camera to be sensitive to the depth information of a target in a scene, so that the defects that a two-dimensional image cannot be obtained for road early warning in a complex scene, and the automatic driving vehicle cannot be successfully early warned and obstacle avoidance cannot be guaranteed are overcome.
Drawings
FIG. 1 is a flow chart of the S-shaped road simulated obstacle early warning method based on three-dimensional camera networking according to the invention;
FIG. 2 is a diagram of the three-dimensional camera and S-shaped road simulated obstacle early warning system according to the present invention;
FIG. 3 is a communication protocol diagram of the networked three-dimensional cameras in accordance with the present invention;
FIG. 4 is an oriented bounding box diagram of the independent simulated obstacle point cloud data of the present invention;
FIG. 5 is a distance information diagram of the second and fourth simulated obstacle point cloud data of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
The embodiment provides an S-shaped road simulated obstacle early warning method based on three-dimensional camera networking, as shown in FIG. 1:
step 1: firstly, building a three-dimensional camera and an S-shaped road simulation obstacle early warning system, and then building communication between a networking three-dimensional camera and an automobile control system;
the method is realized by the following steps:
101, building an indoor three-dimensional camera and an S-shaped road simulated obstacle early warning system by using an iron rectangular three-dimensional camera lifting support, a three-dimensional camera box and a three-dimensional camera;
the three-dimensional camera used in the embodiment is a tin-free micro-vision sensing MEMS-3D camera, the model is PCA-P/S600, an infrared light source of 850nm is adopted, and as shown in FIG. 2, a diagram of an indoor three-dimensional camera and an S-shaped road simulated obstacle early warning system is shown.
102, setting the height of the iron rectangular three-dimensional camera lifting support to 500 mm, fixing the three-dimensional camera in the three-dimensional camera box, fixing the box at the top of the support with the camera lens facing downwards, and shooting the S-shaped road from a top-down viewing angle;
103, in the embodiment, the S-shaped road is provided with three curves, a first three-dimensional camera is placed at the first curve of the S-shaped road, a second three-dimensional camera is placed at the second curve of the S-shaped road, and a third three-dimensional camera is placed at the third curve of the S-shaped road;
104, connecting a first three-dimensional camera, a second three-dimensional camera and a third three-dimensional camera into a networking three-dimensional camera through a router and a network cable;
the working distance of the three-dimensional camera is 300-600mm, and the data interface adopts two modes of USB3.0 and GigE.
105, establishing a communication connection between the networked three-dimensional cameras and the automobile control system, and sending the data acquired by the networked three-dimensional cameras to the automobile control system through the network.
In this embodiment, the top-down viewing-angle point cloud data of the simulated obstacles are obtained by operating the computer and the three-dimensional camera's adapted software; FIG. 3 shows the communication protocol diagram of the networked three-dimensional cameras.
Step 2: establishing a system coordinate system with each three-dimensional camera as the origin, and then shooting the S-shaped road simulated obstacles and the simulated automobile in the scene with the three-dimensional cameras to obtain field-of-view point cloud data;
the resolution of the field-of-view point cloud data obtained by shooting in this embodiment is 1280 × 1024, the acquisition time is 0.4-0.8s, the coordinate axis value of the three-dimensional camera system is (200, 400, 600), the coordinate system takes the three-dimensional camera as the origin, the three-dimensional camera parameters are initialized, 0 is calibrated to be a gray-scale image in a non-output PCD format, and 1 is calibrated to be a depth point cloud image in an output PCD format.
Step 3: performing the random sample consensus estimation algorithm on the field-of-view point cloud data to obtain field-of-view point cloud data with the reference surface removed, and performing pass-through filtering and voxel filtering on the field-of-view point cloud data with the reference surface removed to obtain filtered field-of-view point cloud data;
the method is realized by the following steps:
step 301, selecting an estimation model as a reference plane model;
In this embodiment, the reference surface model is a plane model, and the estimation model is a model of the field-of-view point cloud data.
Step 302, randomly selecting part of point clouds in the field point cloud data as initial values, fitting an estimation model by using the part of point clouds, and judging whether the part of point clouds is the point clouds in the reference surface model or not by using errors of the part of point clouds and the reference surface model;
Step 303, when the part of the point clouds belongs to the point clouds inside the reference surface model, testing all other field-of-view point cloud data with the reference surface model; if a point cloud fits the reference surface model, it is considered a point cloud inside the reference surface model and expanded into the initial-value part of the point clouds; when the part of the point clouds does not belong to the point clouds inside the reference surface model, part of the point clouds in the field-of-view point cloud data is randomly re-selected as the initial value;
Step 304, when at least 85% of the point clouds are classified as point clouds inside the reference surface model, the estimation model is considered reasonable;
step 305, storing the final reasonable estimation model as the field-of-view point cloud data only comprising the reference surface;
step 306, subtracting the view field point cloud data only containing the reference surface from the view field point cloud data to obtain the view field point cloud data without the reference surface;
the estimation model of the point cloud data of the field of view is optimized, and the type of the reference plane model is a plane model.
307, manually inputting the limited ranges in the three coordinate-axis directions of the pass-through-filtering Cartesian coordinate system, and then performing pass-through filtering on the field-of-view point cloud data with the reference surface removed to obtain pass-through-filtered field-of-view point cloud data;
the straight-through filtering adopted by the embodiment is implemented by setting threshold parameters in three coordinate axis directions in a Cartesian coordinate system, designating points in a parameter range to pass, and filtering points out of the parameter range, so as to realize basic filtering of the point cloud data of the field of view with a reference surface removed. Wherein the range of an X axis in a Cartesian coordinate system is-200 mm to 200mm, the range of a Y axis in the Cartesian coordinate system is-400 mm to 400mm, the range of a Z axis in the Cartesian coordinate system is 300mm to 400mm, and the parameter range of the direct filtering is 300mm to 400mm.
308, creating a three-dimensional voxel grid for the pass-through-filtered field-of-view point cloud data, and replacing all points in each voxel grid with the center of gravity of the point set in that grid to obtain the filtered field-of-view point cloud data.
The voxel filtering adopted in this embodiment down-samples the data through a voxelized grid, reducing the field-of-view point cloud data, which benefits the subsequent point cloud segmentation and clustering. All points in each voxel grid are replaced by the center of gravity of the point set in that grid, and the filtered target point cloud data are retained; the screening effects of three-dimensional voxel-grid thresholds of different sizes are compared to determine the optimal denoising threshold of the voxel filtering, which is 1 × 1 × 1.
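For readers who want to reproduce the Step 3 pipeline, the sketch below shows one possible implementation with Open3D and NumPy. The library and its calls are not part of the patent; the 2 mm RANSAC distance threshold is an assumed value, while the pass-through ranges and the 1 × 1 × 1 mm voxel size follow this embodiment.

```python
# A minimal sketch of the Step 3 filtering pipeline, assuming Open3D and NumPy.
import numpy as np
import open3d as o3d

def filter_field_of_view(pcd: o3d.geometry.PointCloud) -> o3d.geometry.PointCloud:
    # RANSAC plane estimation (steps 301-306): fit the reference surface and
    # drop its inlier points. The 2 mm distance threshold is an assumption.
    _, inliers = pcd.segment_plane(distance_threshold=2.0,
                                   ransac_n=3,
                                   num_iterations=1000)
    pcd = pcd.select_by_index(inliers, invert=True)   # remove the reference surface

    # Pass-through filtering (step 307), with the embodiment's ranges in mm:
    # X in [-200, 200], Y in [-400, 400], Z in [300, 400].
    pts = np.asarray(pcd.points)
    mask = ((pts[:, 0] >= -200) & (pts[:, 0] <= 200) &
            (pts[:, 1] >= -400) & (pts[:, 1] <= 400) &
            (pts[:, 2] >= 300) & (pts[:, 2] <= 400))
    pcd = pcd.select_by_index(np.where(mask)[0].tolist())

    # Voxel filtering (step 308): Open3D replaces the points of each voxel by
    # their average; the embodiment's optimal voxel size is 1 x 1 x 1 (mm).
    return pcd.voxel_down_sample(voxel_size=1.0)
```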
Step 4: carrying out Euclidean clustering on the filtered field-of-view point cloud data to obtain simulated automobile point cloud data, independent simulated obstacle point cloud data and the number of independent simulated obstacles i, where i is a positive integer greater than or equal to 1 and less than or equal to 10;
step 401, selecting a certain point of the filtering field point cloud data as a current point;
step 402, finding k points nearest to the current point through a high-dimensional tree neighbor search algorithm;
step 403, clustering points which meet a set threshold number range in the nearest points of the current point into a set Q, and expanding the set Q until the number of points in the set Q is not increased any more to obtain simulated automobile point cloud data or independent simulated obstacle point cloud data;
the filtering view field point cloud data are stored, efficient indexing is carried out on the filtering view field point cloud data through high-dimensional tree neighbor searching, a minimum point cloud number range is set to be five thousand, a maximum point cloud number range is set to be forty thousand, when the point cloud data in the Euclidean clustering result contain five thousand to forty thousand, the point cloud data are determined to be simulated automobile point cloud data or independently simulated obstacle point cloud data, and otherwise the point cloud data are not determined to be simulated automobile point cloud data or independently simulated obstacle point cloud data.
Step 404, a set Q with fewer than twenty thousand points is regarded as simulated automobile point cloud data, and a set Q with twenty thousand points or more is regarded as independent simulated obstacle point cloud data; the number of independent simulated obstacles is obtained by counting.
In this embodiment, the simulated automobile point cloud data are removed from the filtered point cloud data, the minimum point number for independent simulated obstacle point cloud data is set to twenty thousand points, and the number of independent simulated obstacles obtained is i = 4.
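A minimal sketch of the Step 4 Euclidean clustering follows, using a k-d tree for the neighbour search as in step 402. SciPy's cKDTree stands in for the patent's "high-dimensional tree", and the 5 mm growing radius is an assumption; the 5,000-40,000 point validity range and the 20,000-point split between automobile and obstacle clusters follow steps 403-404.

```python
# A sketch of Step 4: k-d tree neighbour search with region growing
# (Euclidean clustering). Radius of 5 mm is an assumed parameter.
import numpy as np
from scipy.spatial import cKDTree

def euclidean_cluster(points: np.ndarray, radius: float = 5.0,
                      min_pts: int = 5_000, max_pts: int = 40_000):
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = [seed], {seed}
        while queue:                      # expand the set Q until it stops growing
            idx = queue.pop()
            for nb in tree.query_ball_point(points[idx], r=radius):
                if nb in unvisited:
                    unvisited.remove(nb)
                    cluster.add(nb)
                    queue.append(nb)
        if min_pts <= len(cluster) <= max_pts:
            clusters.append(np.fromiter(cluster, dtype=int))
    cars = [c for c in clusters if len(c) < 20_000]        # simulated automobile
    obstacles = [c for c in clusters if len(c) >= 20_000]  # independent obstacles
    return cars, obstacles, len(obstacles)                 # i = obstacle count
```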
Step 5: respectively carrying out bounding-box estimation on the independent simulated obstacle point cloud data and the simulated automobile point cloud data to obtain the centroid coordinates P_i(x_i, y_i, z_i) of the independent simulated obstacle point cloud data, the centroid coordinates R(x_0, y_0, z_0) of the simulated automobile point cloud data, and the shape of each independent simulated obstacle, wherein x_i, y_i and z_i respectively represent the X-axis, Y-axis and Z-axis coordinates of the centroid of the independent simulated obstacle point cloud data, x_i and y_i are greater than 0 cm and less than or equal to 18 cm, and z_i is greater than 0 cm and less than or equal to 35 cm; x_0, y_0 and z_0 respectively represent the X-axis, Y-axis and Z-axis coordinates of the centroid of the simulated automobile point cloud data, x_0 and y_0 are greater than 0 cm and less than or equal to 18 cm, and z_0 is greater than 0 cm and less than or equal to 35 cm;
the method is realized by the following steps:
step 501, respectively calculating inertia moment and eccentricity of the simulated automobile point cloud data and the independent simulated obstacle point cloud data;
Step 502, respectively calculating the eigenvalues and corresponding eigenvectors of the simulated automobile point cloud data and the independent simulated obstacle point cloud data to obtain the centroid coordinates R(x_0, y_0, z_0) of the simulated automobile point cloud data and the centroid coordinates P_i(x_i, y_i, z_i) of the independent obstacle point cloud data, and establishing an independent simulated obstacle point cloud data coordinate system with the centroid coordinates of the independent simulated obstacle point cloud data as the origin and the eigenvectors as the coordinate axes;
In this embodiment, the main-axis direction of the oriented bounding box of the independent simulated obstacle point cloud data is determined: the X-axis and Y-axis components of the coordinates of the independent simulated obstacle point cloud data are taken, and the covariance matrix, eigenvalues and corresponding eigenvectors are calculated; the eigenvector corresponding to the largest eigenvalue is the main-axis direction of the oriented bounding box, i.e. a coordinate axis of the independent simulated obstacle coordinate system.
Step 503, respectively calculating a boundary value of each independent simulated obstacle point cloud data in each independent simulated obstacle point cloud data coordinate system, and adding a boundary frame for the independent simulated obstacle point cloud data;
In this embodiment, a direction vector is determined in each independent simulated obstacle point cloud data coordinate system, the coordinate points of the independent simulated obstacle point cloud data are projected onto the direction vector, and the maximum and minimum of the X-axis and Y-axis components of each independent simulated obstacle point cloud data coordinate in each direction, i.e. the boundary values of each independent simulated obstacle point cloud data, are calculated; a bounding box is then added to the independent simulated obstacle point cloud data, with the oriented-bounding-box range set to 2 × 2 × 2 and the number of iterations set to 300.
Step 504, forming an oriented bounding box of the independent simulated obstacle point cloud data from the moment of inertia, the eccentricity and the bounding box;
Step 505, carrying out three-dimensional shape matching on the oriented bounding box of the independent simulated obstacle point cloud data to obtain the shape of the independent simulated obstacle.
In this embodiment, different independent obstacle shapes are distinguished by the different gray values of each independent simulated obstacle point cloud data; FIG. 4 shows the oriented bounding box diagram of the independent simulated obstacle point cloud data, in which the first obstacle is a cuboid, the second is a cylinder, the third is a cuboid and the fourth is a cuboid.
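The centroid and oriented-bounding-box computation of Step 5 can be sketched as follows; Open3D's get_oriented_bounding_box() internally performs the covariance/eigenvector analysis described in steps 501-503, and the shape-matching of step 505 is only hinted at through the box extents. This is an illustration under those assumptions, not the patented implementation.

```python
# A sketch of Step 5: the cluster centroid gives P_i(x_i, y_i, z_i) or
# R(x_0, y_0, z_0); the oriented bounding box is built from the eigenvectors
# of the cluster's covariance matrix.
import numpy as np
import open3d as o3d

def centroid_and_obb(points: np.ndarray):
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
    centroid = points.mean(axis=0)           # centroid coordinates of the cluster
    obb = pcd.get_oriented_bounding_box()    # axes follow the covariance eigenvectors
    return centroid, obb                     # obb.extent hints at the obstacle shape
```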
Step 6: calculating the Euclidean distance D_i between each independent obstacle point cloud data and the simulated automobile point cloud data from the centroid coordinates of each independent obstacle point cloud data and the centroid coordinates of the simulated automobile point cloud data, where D_i is greater than or equal to 1 cm and less than or equal to 25 cm;
the method is realized by the following steps:
according to the formula
D_i = √[(x_0 - x_i)² + (y_0 - y_i)² + (z_0 - z_i)²]  (1)
the Euclidean distance D_i between each independent obstacle point cloud data and the simulated automobile point cloud data is calculated, where √(·) denotes the square root and (·)² denotes the square.
In this embodiment, the centroid coordinates of the simulated automobile point cloud data are (180.067, 195.015, 56.724) mm, the centroid coordinates of the first simulated obstacle point cloud data are (95.675, 87.211, 37.316) mm, those of the second are (121.621, 38.961, 45.879) mm, those of the third are (56.320, 126.597, 69.123) mm, and those of the fourth are (150.875, 99.110, 54.449) mm. The number of Euclidean-distance iterations is set to 400, and the Euclidean distances D_i between each independent simulated obstacle point cloud data and the simulated automobile point cloud data are obtained and stored: D_1 = 138.276 mm, D_2 = 180.432 mm, D_3 = 141.056 mm, D_4 = 100.272 mm.
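The distances of formula (1) can be checked directly from these centroids; the NumPy snippet below is an illustrative calculation, not part of the patent. D_1 and D_4 reproduce the reported 138.276 mm and 100.272 mm.

```python
# Formula (1): D_i = sqrt((x_0-x_i)^2 + (y_0-y_i)^2 + (z_0-z_i)^2), in mm,
# using the centroid coordinates reported in this embodiment.
import numpy as np

R = np.array([180.067, 195.015, 56.724])    # simulated automobile centroid R
P = np.array([[ 95.675,  87.211, 37.316],   # obstacle centroids P_1..P_4
              [121.621,  38.961, 45.879],
              [ 56.320, 126.597, 69.123],
              [150.875,  99.110, 54.449]])

D = np.linalg.norm(P - R, axis=1)           # Euclidean distances D_i
print(np.round(D, 3))                       # D_1 and D_4 match the reported values
```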
Step 7: sorting the D_i in ascending order to obtain the minimum Euclidean distance D_min and the maximum Euclidean distance D_max between the independent obstacle point cloud data and the simulated automobile point cloud data;
In this embodiment, D_4 < D_1 < D_3 < D_2, giving the minimum Euclidean distance D_min = D_4 = 100.272 mm and the maximum Euclidean distance D_max = D_2 = 180.432 mm. FIG. 5 shows the distance information of the second and fourth simulated obstacle point cloud data.
Step 8: removing the Z-axis coordinate value from the centroid coordinates of each independent obstacle point cloud data and from the centroid coordinates of the simulated automobile point cloud data to obtain the de-Z-axis centroid coordinates B_i(x_i, y_i) of each independent obstacle point cloud data, containing only X-axis and Y-axis coordinate values, and the de-Z-axis centroid coordinates A(x_0, y_0) of the simulated automobile point cloud data;
In this embodiment, the de-Z-axis centroid coordinates of the simulated automobile point cloud data, containing only X-axis and Y-axis coordinate values, are A(180.067, 195.015) mm, and those of the first to fourth simulated obstacle point cloud data are B_1(95.675, 87.211) mm, B_2(121.621, 38.961) mm, B_3(56.320, 126.597) mm and B_4(150.875, 99.110) mm.
Step 9: calculating the Euclidean distance d_i of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data on the XOY plane from the de-Z-axis centroid coordinates of each independent obstacle point cloud data, containing only X-axis and Y-axis coordinate values, and the de-Z-axis centroid coordinates of the simulated automobile point cloud data, where d_i is greater than or equal to 1 cm and less than or equal to 20 cm;
The method is realized as follows: according to the formula
d_i = √[(x_0 - x_i)² + (y_0 - y_i)²]  (2)
the Euclidean distance d_i of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data on the XOY plane is calculated.
In this embodiment the distances obtained are d_1 = 137.437 mm, d_2 = 167.722 mm, d_3 = 141.905 mm and d_4 = 100.578 mm.
Step 10: calculating the azimuth angle of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data from the de-Z-axis centroid coordinates of each independent obstacle point cloud data, containing only X-axis and Y-axis coordinate values, the de-Z-axis centroid coordinates of the simulated automobile point cloud data, and the arctangent function;
Step 1001, removing the Z-axis coordinate value from the centroid coordinates R(x_0, y_0, z_0) of the simulated automobile point cloud data and the centroid coordinates P_i(x_i, y_i, z_i) of the simulated obstacle point cloud data to obtain A(x_0, y_0) and B_i(x_i, y_i);
Step 1002, using the abscissas of A(x_0, y_0) and B_i(x_i, y_i), according to
Δx_i = x_0 - x_i  (3)
the abscissa difference Δx_i is calculated, where Δx_i is greater than or equal to -20 cm and less than or equal to +20 cm;
In this embodiment the abscissa differences are Δx_1 = 84.392 mm, Δx_2 = 58.446 mm, Δx_3 = 123.747 mm and Δx_4 = 29.192 mm.
Step 1003, using the ordinates of A(x_0, y_0) and B_i(x_i, y_i), according to
Δy_i = y_0 - y_i  (4)
the ordinate difference Δy_i is calculated, where Δy_i is greater than or equal to -20 cm and less than or equal to +20 cm;
In this embodiment the ordinate differences are Δy_1 = 107.804 mm, Δy_2 = 156.054 mm, Δy_3 = 68.418 mm and Δy_4 = 95.905 mm.
Step 1004, according to the arctangent function
θ_i = arctan(Δy_i / Δx_i)  (5)
the azimuth angle θ_i of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data is calculated, wherein θ_i ∈ [0°, 90°], ∈ denoting set membership; when Δx_i > 0, the simulated obstacle point cloud data is judged to be located θ_i degrees to the right front of the simulated automobile point cloud data; when Δx_i < 0, the simulated obstacle point cloud data is judged to be located θ_i degrees to the left front of the simulated automobile point cloud data; when θ_i equals 0, the simulated obstacle point cloud data is judged to be located directly in front of the simulated automobile point cloud data.
In this embodiment the first simulated obstacle point cloud data is located 51.9° to the right front of the simulated automobile point cloud data, i.e. θ_1 = 51.9°; the second is located 69.4° to the right front, i.e. θ_2 = 69.4°; the third is located 28.9° to the right front, i.e. θ_3 = 28.9°; and the fourth is located 73.0° to the right front, i.e. θ_4 = 73.0°.
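These azimuths can be verified with the snippet below; arctan(Δy_i/Δx_i), as reconstructed in formula (5), approximately reproduces the reported angles (small differences come from rounding in the published Δ values). This is an illustrative check, not the patented code.

```python
# Formulas (3)-(5): azimuth theta_i = arctan(dy_i / dx_i), using the abscissa
# and ordinate differences reported in this embodiment (mm).
import numpy as np

dx = np.array([ 84.392,  58.446, 123.747, 29.192])   # dx_i = x_0 - x_i
dy = np.array([107.804, 156.054,  68.418, 95.905])   # dy_i = y_0 - y_i

theta = np.degrees(np.arctan2(dy, dx))               # right front, since dx_i > 0
print(np.round(theta, 1))   # close to the reported 51.9, 69.4, 28.9 and 73.0 degrees
```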
Step 11: reading in the simulated automobile driving speed V through the automobile control system, where V is greater than or equal to 1 cm per second and less than or equal to 5 cm per second; according to
T_i = d_i / V  (6)
the driving time T_i from the simulated automobile point cloud data to each independent simulated obstacle point cloud data is calculated, where T_i is greater than or equal to 4 seconds and less than or equal to 20 seconds;
the simulated automobile running speed is 3 cm per second in the embodiment, and T is obtained by calculation 1 =4.58 seconds, T 2 =5.59 seconds, T 3 =4.73 seconds, T 4 =4.33 seconds.
Step 12: transmitting the number of independent simulated obstacles i, the shape of each independent simulated obstacle, the minimum Euclidean distance D_min and maximum Euclidean distance D_max between the independent obstacle point cloud data and the simulated automobile point cloud data, the Euclidean distance d_i of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data on the XOY plane, the azimuth angle of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data, and the driving time T_i from the simulated automobile point cloud data to each independent simulated obstacle point cloud data to the automobile control system, so that the automobile can be warned in advance when running on the S-shaped road;
In this embodiment the number of independent simulated obstacles obtained is i = 4; the first obstacle is a cuboid, the second a cylinder, the third a cuboid and the fourth a cuboid; D_1 = 138.276 mm, D_2 = 180.432 mm, D_3 = 141.056 mm, D_4 = 100.272 mm; D_max = 180.432 mm, D_min = 100.272 mm; d_1 = 137.437 mm, d_2 = 167.722 mm, d_3 = 141.905 mm, d_4 = 100.578 mm; T_1 = 4.58 seconds, T_2 = 5.59 seconds, T_3 = 4.73 seconds, T_4 = 4.33 seconds; the azimuth of the first simulated obstacle point cloud data is θ_1 = 51.9° to the right front, that of the second is θ_2 = 69.4° to the right front, that of the third is θ_3 = 28.9° to the right front, and that of the fourth is θ_4 = 73.0° to the right front. This road condition information is sent to the automobile control system through the networked three-dimensional cameras, so that the automobile can give early warning in advance for the simulated obstacles.
The technical solutions of the present invention are not limited to the above embodiments, and all technical solutions obtained by using equivalent substitution modes fall within the scope of the present invention.

Claims (8)

1. An S-shaped road simulated obstacle early warning method based on three-dimensional camera networking, characterized by comprising the following steps:
step one, arranging a three-dimensional camera above each curve of an S-shaped road, with the lens of the three-dimensional camera facing downwards;
step two, establishing a three-dimensional coordinate system with each three-dimensional camera as an origin, and then shooting the simulated obstacles and the simulated automobile in the S-shaped highway scene with the three-dimensional cameras to obtain field-of-view point cloud data;
step three, performing a random sample consensus estimation algorithm on the field-of-view point cloud data to obtain field-of-view point cloud data with the reference surface removed, and performing pass-through filtering and voxel filtering on the field-of-view point cloud data with the reference surface removed to obtain filtered field-of-view point cloud data;
step four, performing Euclidean clustering on the filtered field-of-view point cloud data to obtain simulated automobile point cloud data, independent simulated obstacle point cloud data and the number of independent simulated obstacles i;
step five, respectively carrying out bounding-box estimation on the independent simulated obstacle point cloud data and the simulated automobile point cloud data to obtain the centroid coordinates P_i(x_i, y_i, z_i) of the independent simulated obstacle point cloud data, the centroid coordinates R(x_0, y_0, z_0) of the simulated automobile point cloud data, and the shape of each independent simulated obstacle, wherein x_i, y_i and z_i respectively represent the X-axis, Y-axis and Z-axis coordinates of the centroid of the independent simulated obstacle point cloud data, and x_0, y_0 and z_0 respectively represent the X-axis, Y-axis and Z-axis coordinates of the centroid of the simulated automobile point cloud data;
step six, calculating the Euclidean distance D_i between each independent simulated obstacle point cloud data and the simulated automobile point cloud data from the centroid coordinates of each independent simulated obstacle point cloud data and the centroid coordinates of the simulated automobile point cloud data;
step seven, sorting the D_i in ascending order to obtain the minimum Euclidean distance D_min and the maximum Euclidean distance D_max between the independent simulated obstacle point cloud data and the simulated automobile point cloud data;
step eight, removing the Z-axis coordinate value from the centroid coordinates of each independent simulated obstacle point cloud data and from the centroid coordinates of the simulated automobile point cloud data to obtain the de-Z-axis centroid coordinates B_i(x_i, y_i) of each independent simulated obstacle point cloud data, containing only X-axis and Y-axis coordinate values, and the de-Z-axis centroid coordinates A(x_0, y_0) of the simulated automobile point cloud data;
step nine, calculating the Euclidean distance d_i of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data on the XOY plane from the de-Z-axis centroid coordinates of each independent simulated obstacle point cloud data, containing only X-axis and Y-axis coordinate values, and the de-Z-axis centroid coordinates of the simulated automobile point cloud data;
step ten, calculating the azimuth angle of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data from the de-Z-axis centroid coordinates of each independent simulated obstacle point cloud data, containing only X-axis and Y-axis coordinate values, the de-Z-axis centroid coordinates of the simulated automobile point cloud data, and the arctangent function;
step eleven, reading the driving speed V of the simulated automobile through its automobile control system, and calculating the driving time T_i from the simulated automobile point cloud data to each independent simulated obstacle point cloud data according to V;
step twelve, transmitting the number of independent simulated obstacles i, the shape of each independent simulated obstacle, the minimum Euclidean distance D_min and maximum Euclidean distance D_max between the independent obstacle point cloud data and the simulated automobile point cloud data, the Euclidean distance d_i of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data on the XOY plane, the azimuth angle of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data, and the driving time T_i from the simulated automobile point cloud data to each independent simulated obstacle point cloud data to the simulated automobile control system, so that the automobile is warned in advance when running on the S-shaped road.
2. The S-shaped road simulated obstacle early warning method based on three-dimensional camera networking according to claim 1, wherein the third step specifically comprises the following steps:
1) selecting an estimation model as a reference surface model;
2) randomly selecting part of the point clouds in the field-of-view point cloud data as an initial value, fitting the estimation model with this part of the point clouds, and judging from the error between this part of the point clouds and the reference surface model whether it belongs to the point clouds inside the reference surface model;
3) when the part of the point clouds belongs to the point clouds inside the reference surface model, testing all other field-of-view point cloud data with the reference surface model, and if a point cloud fits the reference surface model, expanding it into the initial-value part of the point clouds; when the part of the point clouds does not belong to the point clouds inside the reference surface model, randomly re-selecting part of the point clouds in the field-of-view point cloud data as the initial value;
4) when at least 85% of the point clouds are classified as point clouds inside the reference surface model, taking the estimation model as a reasonable estimation model;
5) storing the final reasonable estimation model as field-of-view point cloud data containing only the reference surface;
6) subtracting the field-of-view point cloud data containing only the reference surface from the field-of-view point cloud data to obtain the field-of-view point cloud data with the reference surface removed;
7) inputting the limited ranges in the three coordinate-axis directions of the pass-through-filtering Cartesian coordinate system, and then performing pass-through filtering on the field-of-view point cloud data with the reference surface removed to obtain pass-through-filtered field-of-view point cloud data;
8) creating a three-dimensional voxel grid for the pass-through-filtered field-of-view point cloud data, and then replacing all points in each voxel grid with the center of gravity of the point set in that voxel grid to obtain the filtered field-of-view point cloud data.
3. The S-shaped road simulated obstacle early warning method based on three-dimensional camera networking according to claim 1, wherein the fourth step specifically comprises the following steps:
1) selecting a point of the filtered field-of-view point cloud data as the current point;
2) finding the k points nearest to the current point through a k-d tree (high-dimensional tree) neighbour search algorithm;
3) clustering the points among the current point's nearest neighbours that meet the set threshold number range into a set Q, and expanding the set Q until the number of points in it no longer increases, obtaining simulated automobile point cloud data or independent simulated obstacle point cloud data;
4) regarding a set with fewer than twenty thousand points as simulated automobile point cloud data and a set with twenty thousand points or more as independent simulated obstacle point cloud data, and counting to obtain the number of independent simulated obstacles.
4. The S-shaped road simulated obstacle early warning method based on three-dimensional camera networking according to claim 1, wherein the fifth step specifically comprises the following steps:
1) respectively calculating the moment of inertia and eccentricity of the simulated automobile point cloud data and the independent simulated obstacle point cloud data;
2) respectively calculating the eigenvalues and corresponding eigenvectors of the simulated automobile point cloud data and the independent simulated obstacle point cloud data to obtain the centroid coordinates R(x_0, y_0, z_0) of the simulated automobile point cloud data and the centroid coordinates P_i(x_i, y_i, z_i) of the independent obstacle point cloud data, and establishing an independent simulated obstacle point cloud data coordinate system with the centroid coordinates of the independent simulated obstacle point cloud data as the origin and the eigenvectors as the coordinate axes;
3) calculating, in each independent simulated obstacle point cloud data coordinate system, the boundary values of each independent simulated obstacle point cloud data, and adding a bounding box to the independent simulated obstacle point cloud data;
4) forming an oriented bounding box of the independent simulated obstacle point cloud data from the moment of inertia, the eccentricity and the bounding box;
5) carrying out three-dimensional shape matching on the oriented bounding box of the independent simulated obstacle point cloud data to obtain the shape of the independent simulated obstacle.
5. The S-shaped road simulated obstacle early warning method based on three-dimensional camera networking according to claim 1, wherein the sixth step is specifically:
according to the formula
D_i = √[(x_0 - x_i)² + (y_0 - y_i)² + (z_0 - z_i)²]  (1)
calculating the Euclidean distance D_i between each independent simulated obstacle point cloud data and the simulated automobile point cloud data.
6. The S-shaped road simulated obstacle early warning method based on three-dimensional camera networking according to claim 1, wherein the ninth step specifically comprises:
according to the formula
d_i = √[(x_0 - x_i)² + (y_0 - y_i)²]  (2)
calculating the Euclidean distance d_i of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data on the XOY plane.
7. The S-shaped road simulated obstacle early warning method based on three-dimensional camera networking according to claim 1, wherein the tenth step specifically comprises the following steps:
1) Center of mass coordinates R (x) to simulated automotive point cloud data 0 ,y 0 ,z 0 ) And mass center coordinate P of independent simulation obstacle point cloud data i (x i ,y i ,z i ) Removing Z-axis coordinate value to obtain A (x) 0 ,y 0 ) And B i (x i ,y i );
2) Using the abscissas of A(x_0, y_0) and B_i(x_i, y_i), according to the following formula

Δx_i = x_0 − x_i   (3)

calculating the abscissa difference Δx_i;
3) Using the ordinates of A(x_0, y_0) and B_i(x_i, y_i), according to the following formula

Δy_i = y_0 − y_i   (4)

calculating the ordinate difference Δy_i;
4) According to the arctangent function

θ_i = arctan(Δx_i / Δy_i)   (5)

calculating the azimuth angle θ_i of each independent simulated obstacle point cloud data relative to the simulated automobile point cloud data, wherein θ_i ∈ (−90°, 90°);
when θ_i > 0, judging that the independent simulated obstacle point cloud data is located θ_i degrees to the right front of the simulated automobile point cloud data; when θ_i < 0, judging that the independent simulated obstacle point cloud data is located |θ_i| degrees to the left front of the simulated automobile point cloud data; and when θ_i is equal to 0, judging that the independent simulated obstacle point cloud data is located directly in front of the simulated automobile point cloud data.
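A sketch of the azimuth computation of claim 7; numpy's arctan2 evaluates the reconstructed formula (5) while handling Δy_i = 0. The sign convention (positive θ_i meaning right front) follows the reconstruction above and is an assumption:

```python
import numpy as np

def azimuths(car_xy, obstacles_xy):
    """Azimuth θ_i of each obstacle centroid relative to the car, in degrees."""
    dx = car_xy[0] - obstacles_xy[:, 0]   # Δx_i = x_0 − x_i, formula (3)
    dy = car_xy[1] - obstacles_xy[:, 1]   # Δy_i = y_0 − y_i, formula (4)
    # arctan2 keeps the sign that selects left or right front
    return np.degrees(np.arctan2(dx, dy))
```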
8. The S-shaped road simulated obstacle early warning method based on three-dimensional camera networking according to claim 1, wherein in the eleventh step, according to the formula

T_i = d_i / v   (6)

the driving time T_i for the simulated automobile point cloud data to reach each independent simulated obstacle point cloud data is calculated, where v is the traveling speed of the simulated automobile.
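Under the reconstructed formula (6), the warning time is an element-wise division of the planar distances by the traveling speed v, which this sketch assumes is given in m/s:

```python
import numpy as np

def time_to_reach(d, v):
    """Driving time T_i = d_i / v for each planar distance d_i at speed v."""
    return np.asarray(d) / v

# e.g. time_to_reach([12.0, 30.0], v=8.3) -> about [1.45, 3.61] seconds
```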
CN202210060614.3A 2022-01-19 2022-01-19 S-shaped road simulated obstacle early warning method based on three-dimensional camera networking Active CN114446092B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210060614.3A CN114446092B (en) 2022-01-19 2022-01-19 S-shaped road simulated obstacle early warning method based on three-dimensional camera networking

Publications (2)

Publication Number Publication Date
CN114446092A CN114446092A (en) 2022-05-06
CN114446092B (en) 2022-12-27

Family

ID=81367795

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210060614.3A Active CN114446092B (en) 2022-01-19 2022-01-19 S-shaped road simulated obstacle early warning method based on three-dimensional camera networking

Country Status (1)

Country Link
CN (1) CN114446092B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007164671A (en) * 2005-12-16 2007-06-28 Matsushita Electric Ind Co Ltd Device for deciding approaching obstacle and system for warning collision with obstacle
CN201266439Y (en) * 2008-10-13 2009-07-01 交通部公路科学研究所 System for early-warning curve barrier
CN110893617A (en) * 2018-09-13 2020-03-20 深圳市优必选科技有限公司 Obstacle detection method and device and storage device
JP2020075708A (en) * 2018-10-26 2020-05-21 株式会社小糸製作所 Road surface drawing device of vehicle
CN112101092A (en) * 2020-07-31 2020-12-18 北京智行者科技有限公司 Automatic driving environment sensing method and system
CN112230245A (en) * 2020-09-21 2021-01-15 卡斯柯信号有限公司 System and method for detecting active obstacles of train in tunnel based on laser radar
CN112606804A (en) * 2020-12-08 2021-04-06 东风汽车集团有限公司 Control method and control system for active braking of vehicle
CN113378647A (en) * 2021-05-18 2021-09-10 浙江工业大学 Real-time rail obstacle detection method based on three-dimensional point cloud
CN113602265A (en) * 2021-08-17 2021-11-05 东风汽车集团股份有限公司 Processing cooperative lane changing method and system based on vehicle-to-vehicle communication

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100440269C (en) * 2006-06-12 2008-12-03 黄席樾 Intelligent detecting prewarning method for expressway automobile running and prewaring system thereof
DE102007048809A1 (en) * 2006-10-13 2008-07-10 Continental Teves Ag & Co. Ohg Method and device for detecting hidden objects in traffic
CN102431495B (en) * 2011-12-01 2014-01-15 北京理工大学 77GHz millimeter wave corner false-alarm inhibiting system for automobile active anticollision radar
FR3012784B1 (en) * 2013-11-04 2016-12-30 Renault Sa DEVICE FOR DETECTING THE LATERAL POSITION OF A PIETON IN RELATION TO THE TRACK OF THE VEHICLE
CN109927719B (en) * 2017-12-15 2022-03-25 百度在线网络技术(北京)有限公司 Auxiliary driving method and system based on obstacle trajectory prediction

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Obstacle detection method in agricultural navigation based on point cloud maps; Ji Changying et al.; 《农业工程学报》 (Transactions of the Chinese Society of Agricultural Engineering); 2015-04-08 (No. 07); full text *
Obstacle-avoidance navigation algorithm and simulation experiments for a wheeled AGV traveling along vineyard rows; Xie Yongliang et al.; 《农业机械学报》 (Transactions of the Chinese Society for Agricultural Machinery); 2018-05-25 (No. 07); full text *

Also Published As

Publication number Publication date
CN114446092A (en) 2022-05-06

Similar Documents

Publication Publication Date Title
CN108932736B (en) Two-dimensional laser radar point cloud data processing method and dynamic robot pose calibration method
CN108759667B (en) Front truck distance measuring method under vehicle-mounted camera based on monocular vision and image segmentation
CN108445480B (en) Mobile platform self-adaptive extended target tracking system and method based on laser radar
CN111260683A (en) Target detection and tracking method and device for three-dimensional point cloud data
WO2022188663A1 (en) Target detection method and apparatus
CN108549084B (en) Target detection and attitude estimation method based on sparse two-dimensional laser radar
CN110647835A (en) Target detection and classification method and system based on 3D point cloud data
CN111369541A (en) Vehicle detection method for intelligent automobile under severe weather condition
US11442162B2 (en) Millimeter wave radar modeling-based method for object visibility determination
CN110674705A (en) Small-sized obstacle detection method and device based on multi-line laser radar
CN113111887A (en) Semantic segmentation method and system based on information fusion of camera and laser radar
Barua et al. A self-driving car implementation using computer vision for detection and navigation
CN110969064A (en) Image detection method and device based on monocular vision and storage equipment
Burger et al. Fast multi-pass 3D point segmentation based on a structured mesh graph for ground vehicles
CN111880191A (en) Map generation method based on multi-agent laser radar and visual information fusion
Jun et al. Autonomous driving system design for formula student driverless racecar
CN114155265A (en) Three-dimensional laser radar road point cloud segmentation method based on YOLACT
CN114299039B (en) Robot and collision detection device and method thereof
CN116109601A (en) Real-time target detection method based on three-dimensional laser radar point cloud
EP3555854B1 (en) A method of tracking objects in a scene
CN112666573B (en) Detection method for retaining wall and barrier behind mine unloading area vehicle
CN114446092B (en) S-shaped road simulated obstacle early warning method based on three-dimensional camera networking
Gao et al. Design and implementation of autonomous mapping system for ugv based on lidar
CN115239978A (en) Camera and laser radar automatic calibration method based on instance segmentation
CN115307622A (en) Autonomous mapping method and system based on deep learning in dynamic environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230925

Address after: Room 2202, 22 / F, Wantong building, No. 3002, Sungang East Road, Sungang street, Luohu District, Shenzhen City, Guangdong Province

Patentee after: Shenzhen dragon totem technology achievement transformation Co.,Ltd.

Address before: 230000 floor 1, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee before: Dragon totem Technology (Hefei) Co.,Ltd.

Effective date of registration: 20230925

Address after: 230000 floor 1, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee after: Dragon totem Technology (Hefei) Co.,Ltd.

Address before: No.333 Xishan Avenue, Wuxi City, Jiangsu Province

Patentee before: Wuxi University
