CN111640323A - Road condition information acquisition method - Google Patents

Road condition information acquisition method

Info

Publication number
CN111640323A
CN111640323A
Authority
CN
China
Prior art keywords
grid
state
map
obstacle
sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010511372.6A
Other languages
Chinese (zh)
Inventor
张力沛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiangyun Zhihui Technology Co ltd
Original Assignee
Beijing Xiangyun Zhihui Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiangyun Zhihui Technology Co ltd filed Critical Beijing Xiangyun Zhihui Technology Co ltd
Priority to CN202010511372.6A priority Critical patent/CN111640323A/en
Publication of CN111640323A publication Critical patent/CN111640323A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/09Arrangements for giving variable traffic instructions
    • G08G1/0962Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G1/096708Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control
    • G08G1/096725Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control where the received information generates an automatic action on the vehicle control
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/04Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/09Arrangements for giving variable traffic instructions
    • G08G1/0962Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0968Systems involving transmission of navigation instructions to the vehicle
    • G08G1/0969Systems involving transmission of navigation instructions to the vehicle having a display in the form of a map
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems
    • G08G1/166Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Atmospheric Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a road condition information acquisition method. The method acquires data points of the road ahead through a sensor, establishes a fitting function of the road edge from the characteristics of the road edge data points, builds a grid map on that basis, divides the grid map into a drivable area and a non-drivable area, and detects dynamic obstacles in the drivable area, processing them to extract their information. Applied to an unmanned vehicle, the method provides good perception of the surrounding environment and plays an important role in autonomous cruise control, collision warning and path planning.

Description

Road condition information acquisition method
Technical Field
The invention relates to the technical field of information integration, in particular to a road condition information acquisition method.
Background
With the rapid development of science and technology, the unmanned vehicle is no longer a futuristic vehicle of the imagination but has gradually entered public view as a real research result. Unmanned vehicles traveling in urban environments need good perception of the surrounding environment, including perception of road structure and detection of other dynamic obstacles. Reliable road condition information acquisition plays a crucial role in autonomous cruise control, collision warning and path planning.
Sensors with road condition information acquisition functions, such as cameras, radar and GPS, are usually mounted on the unmanned vehicle. Lidar has the advantages of being unaffected by weather and illumination, not depending on texture or color for identification, and being insensitive to shadow noise. In addition, lidar scans at a high frequency, returns rich data, and its scan data is convenient to process quickly. Using lidar to sense the environment around the unmanned vehicle therefore offers good robustness and speed, and it has good application prospects on unmanned vehicles.
Because the data received by the sensor is rich, road edge detection and grid map construction can suffer from redundant computation, significant inefficiency, and an inability to guarantee the accuracy and stability of road edge detection, which is one of the reasons unmanned vehicles cannot yet be widely and safely deployed.
Disclosure of Invention
In view of the above drawbacks, the technical problem solved by the present invention is to provide a road condition information acquisition method that addresses the redundant computation, significant inefficiency, and inability to guarantee accurate and stable road edge detection encountered during road edge detection and grid map construction in the existing related field.
The invention provides a road condition information acquisition method, which comprises the following steps:
step 1, scanning a road condition data point in front by using a sensor to acquire information;
step 2, acquiring a fitting function of the road edge according to a road edge detection algorithm;
step 3, establishing a grid map according to the information acquired in the preceding steps;
step 4, displaying the fitting function of the road edge as a line in the grid map, and dividing the area ahead into a drivable area and a non-drivable area;
step 5, detecting dynamic obstacles in the travelable area according to the conflict coefficient rule;
and 6, processing and extracting information of the dynamic obstacles in the travelable area.
Preferably, the step 2 includes:
step 2.1, taking the position of the sensor as the origin O, taking the line of the sensor's emission direction in the same vertical plane as the direction of motion as the X axis, taking the horizontal line through the origin O perpendicular to the X axis as the Y axis, establishing the sensor XOY plane coordinate system, and taking the line through the origin O perpendicular to the XOY plane as the Z axis, thereby obtaining scan data point coordinates (x_i, y_i), wherein i is the data point serial number, and scan data points are read in order of serial number;
2.2, screening scanning data point information of the same layer;
step 2.3, screening out the road edge points among the same-layer data points according to the characteristic that road edge data point coordinates (x, y) satisfy a linear function y = kx + b, wherein k is the slope of the line and b is its intercept;
2.4, clustering different road edge points on each layer;
step 2.5, then performing a least-squares line fit to each class on each layer, the fitted line function satisfying:

$$k = \frac{n\sum_{i=1}^{n} x_i y_i - \sum_{i=1}^{n} x_i \sum_{i=1}^{n} y_i}{n\sum_{i=1}^{n} x_i^2 - \left(\sum_{i=1}^{n} x_i\right)^2}, \qquad b = \frac{1}{n}\left(\sum_{i=1}^{n} y_i - k \sum_{i=1}^{n} x_i\right)$$

wherein k is the slope of the fitted line, b is the intercept of the fitted line, and n is the number of road edge points;
and 2.6, fusing the plurality of scanning layers together to obtain a final road edge fitting function.
Preferably, the step 3 comprises:
step 3.1, establishing a sensor model and a scan map according to the information acquired in the preceding steps;
step 3.2, initializing the global grid map, i.e., setting the sensor model to the unknown state;
step 3.3, fusing the frame-t scan map with the frame-(t−1) global map to update the grid map;
and step 3.4, judging the state of each grid cell among the obstacle state (O), the obstacle-free state (F), the unknown state (Ω) and the conflict state (Φ), wherein Ω = {F, O}; cells in the same state are marked with the same color in the grid map, and cells in different states are marked with different colors.
Preferably, said step 3.1 comprises:
step 3.1.1, dividing the detection range of the sensor, i.e., the XOY plane coordinate system, into m × n grid cells C of size l × l to obtain the sensor model, the coordinate values of a grid cell C in the sensor coordinate system being (x_c, y_c, z_c), wherein (x_c, y_c) are the coordinates of the cell's center point in the sensor XOY plane and z_c is the maximum height of the k scan points contained in the cell, z_c = max(P_{i,z}), i = 1, 2, ..., k;
step 3.1.2, capturing N data points P_i = {x_i, y_i, z_i, d_i} at sensor time t and projecting all points onto the m × n grid to obtain the scan map, the point set P determining grid cell C's state being:

$$P = \{\, P_i \mid \theta^{-} \le \arctan(y_i / x_i) \le \theta^{+},\; i = 1, 2, \ldots, N \,\}$$

wherein (θ⁻, θ⁺) are the minimum and maximum angles of each grid cell with respect to the X axis, and N is the number of data points captured by the sensor.
Preferably, said step 3.3 comprises:
step 3.3.1, establishing an X′O′Y′ grid map, i.e., the global map, from the estimate of the position of the XOY coordinate system after one scan cycle; a grid cell C_i′ in the X′O′Y′ grid map is the estimate at time t of the frame-(t−1) grid cell C_i after one scan cycle, and each grid cell C_i′ in the X′O′Y′ grid map corresponds one-to-one with a grid cell C_i in the frame-t grid map;
and 3.3.2, judging the scanning map and the global map according to an updating rule, and updating the grid map.
Preferably, the update rule of step 3.3.2 is:
if a grid cell C_i of frame t−1 moves into an unknown area of the global map at frame t, the frame-t grid cell C_i corresponding to C_i′ is fused with a grid cell in the unknown state;
if a grid cell C_i of frame t−1 moves into a known area of the global map at frame t, the frame-t grid cell C_i corresponding to C_i′ is fused with the grid cell C_j of frame t−1 nearest to C_i′,
wherein the basic probability function of each grid cell of the scan map at time t is m_1 and that of each grid cell at time t−1 is m_2, and the fusion formula is:

$$\begin{aligned} m(F) &= m_1(F)\,m_2(F) + m_1(F)\,m_2(\Omega) + m_1(\Omega)\,m_2(F) \\ m(O) &= m_1(O)\,m_2(O) + m_1(O)\,m_2(\Omega) + m_1(\Omega)\,m_2(O) \\ m(\Omega) &= m_1(\Omega)\,m_2(\Omega) \\ m(\Phi) &= K \end{aligned}$$

wherein the conflict coefficient K = m_1(F)m_2(O) + m_1(O)m_2(F), and m(F), m(O), m(Ω), m(Φ) respectively denote the basic probability functions of the four states of a grid cell (obstacle-free, obstacle, unknown and conflict) and satisfy

$$m(F) + m(O) + m(\Omega) + m(\Phi) = 1.$$
Preferably, the determination rule of step 3.4 is:
if there exists d_i ∈ [r − √2·l/2, r + √2·l/2] and the cell height z_c > 0.1 m, then the state of grid cell C is the obstacle state (O);
if there is no d_i ∈ [r − √2·l/2, r + √2·l/2] and (r + √2·l/2) < min(d_i), then the state of grid cell C is the obstacle-free state (F);
if there exists d_i ∈ [r − √2·l/2, r + √2·l/2] and (r + √2·l/2) > min(d_i), then the state of grid cell C is the unknown state (Ω);
the gray-area grid cells beyond the sensor's horizontal scan angle limit are set to the unknown state (Ω),
wherein r is the distance from the cell's center point to the sensor.
Preferably, the basic probability function values of the states obtained by the determination in step 3.4 are:
the basic probability function value of the obstacle state (O) is m (f) 0, m (O) 1- λ1,m(Ω)=λ1
The basic probability function value of the barrier-free state (F) is m (F) ═ 1- λ2,m(O)=0,m(Ω)=λ2
The basic probability function value of the unknown state (Ω) is m (f) 0, m (o) 0, and m (Ω) 1;
wherein λ is12∈[0,1]Respectively, the false alarm rate and the missed detection rate of the sensor.
Preferably, the detection rule of step 5 is:
the collision coefficient is divided into two parts for detection, the detection time is from t-1 to t,
when a certain grid is changed from an obstacle-free state at the time t-1 to an obstacle-existing state at the time t, the detection result is that a dynamic obstacle enters the grid;
when a certain grid is changed from an obstacle-existing state at the time t-1 to an obstacle-free state at the time t, the detection result is that a dynamic obstacle is separated from the grid.
Preferably, the obstacle processing method in step 6 includes an eight-neighborhood region labeling algorithm, and the eight-neighborhood region labeling algorithm includes the steps of:
step 6.1, scanning the grids to find an obstacle grid;
step 6.2, judging the marking condition of the obstacle grids;
step 6.3, checking the left, lower-left, lower and lower-right grid cells in the eight-neighborhood of each unmarked cell screened out in the preceding step;
and step 6.4, marking the same-color grid cells of the same obstacle with the same cluster number and enclosing them together in a hollow rectangular frame.
According to the above scheme, the road condition information acquisition method provided by the invention acquires data points of the road ahead through a sensor, establishes a fitting function of the road edge from the characteristics of the road edge data points, builds a grid map on that basis, and divides the grid map into a drivable area and a non-drivable area, so that dynamic obstacles in the drivable area can be detected, processed and their information extracted. Applied to an unmanned vehicle, the method provides good perception of the surrounding environment and plays an important role in autonomous cruise control, collision warning and path planning.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a performance parameter table of a sensor IBEO-LUX2010 four-wire laser radar used in a road condition information acquisition method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a manner in which a vehicle body coordinate system and a sensor coordinate system are established in the embodiment of FIG. 1;
fig. 3 is a block diagram of a road edge detection algorithm of a road condition information obtaining method according to an embodiment of the present invention;
fig. 4 is a flow chart of a clustering algorithm of a road condition information obtaining method according to an embodiment of the present invention;
fig. 5 is a schematic diagram of road edge detection experimental results obtained on the BJUT-IV unmanned vehicle experimental platform using the road condition information acquisition method according to an embodiment of the present invention;
fig. 6 is a block diagram of a process of establishing a grid map according to the road condition information obtaining method provided in the embodiment of the present invention;
fig. 7 is a schematic diagram of a sensor grid map model of a road condition information acquisition method according to an embodiment of the present invention;
fig. 8 is a schematic diagram illustrating position estimation of a grid map according to a road condition information obtaining method provided in an embodiment of the present invention;
fig. 9 is a flow chart of an improved eight-neighborhood region labeling algorithm of the road condition information obtaining method according to the embodiment of the present invention;
fig. 10 is a schematic diagram illustrating a dynamic obstacle detection process and a result of a road condition information obtaining method according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1 to 9, a road condition information acquisition method provided by an embodiment of the present invention will now be described. The method comprises the following steps:
step 1, scanning front road condition data points by using a sensor to acquire information
The sensor may be a 64-line lidar, another multi-line lidar, or the like. Multi-line lidar offers high detection accuracy, long detection range, wide coverage and a moderate amount of scan data, which meets the unmanned vehicle's real-time requirements well and speeds up overall algorithm processing, allowing the vehicle to attain a higher autonomous driving speed. In the experiments of this embodiment, an IBEO-LUX2010 four-line lidar was used; its performance parameters are shown in fig. 1. This sensor can accurately and stably acquire road edge information and detect dynamic obstacles.
More specifically, the road condition information acquisition method uses 2 layers of sensor scan data for detection. The sensor was tested mounted on the unmanned vehicle: when an obstacle in the driving area occludes part of the road edge, i.e., when one layer's road edge data cannot be obtained, the other layer's road edge data can still be acquired by the sensor, which greatly improves the anti-interference capability and accuracy of road edge detection.
Step 2, obtaining a fitting function of the road edge according to a road edge detection algorithm
Referring to fig. 2, a vehicle body coordinate system (subscript V) and the sensor XOY plane coordinate system (subscript L) are established, where the line of the sensor's emission direction in the same vertical plane as the direction of motion is the X axis, the horizontal line through the origin O perpendicular to the X axis is the Y axis, and the line through the origin O perpendicular to the XOY plane is the Z axis. The figure shows the directions of the sensor's 4 scan layers; α denotes the rotation angle of the sensor coordinate system about the Y_L axis relative to the vehicle body coordinate system, i.e., the angle it makes with the ground.
Referring to fig. 3 to 5, the more specific steps of obtaining the fitting function of the road edge according to the road edge detection algorithm include:
step 2.1, establishing the sensor XOY plane coordinate system as described above to obtain scan data point coordinates (x_i, y_i), wherein i is the data point serial number, and scan data points are read in order of serial number;
2.2, screening scanning data point information of the same layer;
step 2.3, screening out the road edge points among the same-layer data points according to the following characteristics: when the sensor scans a road edge, the returned data points show stable serial-number continuity within a scan layer; the coordinates (x, y) of consecutive road edge data points satisfy a linear function y = kx + b; and the slopes formed between consecutive road edge scan points in the sensor coordinate system are equal. In addition, the number of points scanned by the sensor on the same road edge is at least 7. Here k is the slope of the line and b its intercept;
step 2.4, clustering the different road edge points in each layer: referring to fig. 4, data points of the same road edge are grouped into the same object class according to the characteristics of points scanned on a road edge, to facilitate line fitting by the least squares method. Specifically, the scan data points (x_i, y_i) are read by serial number, the points of the same layer are screened out, the road edge points among them are screened out, and each road edge point whose distance to the previous road edge point is less than 1 m is marked as belonging to the same class; if the distance is greater than 1 m, the point is marked as a new class, the class number is incremented by 1, and the point is marked;
step 2.5, then performing a least-squares line fit to each class on each layer, the fitted line function satisfying:

$$k = \frac{n\sum_{i=1}^{n} x_i y_i - \sum_{i=1}^{n} x_i \sum_{i=1}^{n} y_i}{n\sum_{i=1}^{n} x_i^2 - \left(\sum_{i=1}^{n} x_i\right)^2}, \qquad b = \frac{1}{n}\left(\sum_{i=1}^{n} y_i - k \sum_{i=1}^{n} x_i\right)$$

wherein k is the slope of the fitted line, b is the intercept of the fitted line, and n is the number of road edge points;
step 2.6, fusing the multiple scan layers together to obtain the final road edge fitting function.
In the present embodiment, referring to fig. 3, the scan data points (x_i, y_i) are read by serial number; n = m = k = 0 and r = 1 are initialized; the points of the same layer are screened out; among them, the scan points matching the road edge point feature function are screened; runs of at least 7 consecutive matching points are marked as road edge points; the different road edge points are clustered; and a least-squares line is fitted to each class. Here n is the data point number increment, m = i + n is the data point serial number, k is the slope between road edge points, [k_t1, k_t2] is the variable threshold interval of the slope between road edge points, with k_t1 = tan β − ε and k_t2 = tan β + ε, wherein β is the angle between the vehicle's driving direction and the lane line, ε is the measurement error, and r is the count of data points matching the road edge features.
In this embodiment, referring to fig. 5, the BJUT-IV platform is equipped with an IBEO-LUX four-line lidar, GPS, inertial navigation, a camera and other devices. In the experimental environment the average unmanned driving speed is 10 km/h, the scanning frequency of the four-line lidar is set to 12.5 Hz, the measurement error is 0.04, the radar height above the ground is 0.846 m, and the angle α between the radar coordinate system's XOY plane and the ground is 1.6°. The lidar transmits its scan data over 100 MB/s Ethernet to the algorithm program for processing. In scene (a), the angle β between the unmanned vehicle's driving direction and the lane is 1.3°, giving a variable slope threshold interval between road edge points of [k_t1, k_t2] = [−0.017, 0.063]. Panel (b) shows the raw lidar data. In (c), road edge data sets are extracted from the radar data according to the road edge data point characteristics; these include both real road edge points and interference points: the blue interference points come from scanning the side of a vehicle body and the red interference points from scanning a wall. Panel (d) shows the result after clustering, an ellipse being generated automatically around each cluster. In (e), the road edge classes of each layer are divided into left and right road edges, the interfering classes are removed, and finally left and right road edge lines are fitted by the least squares method; the angle γ between the driving direction and the right road edge, 2.08°, is displayed on the unmanned vehicle. This shows that the road edge detection method detects road edges accurately and stably.
Step 3, establishing a grid map according to the information acquired in the step
Referring to fig. 6 to 8, the more specific steps of building the grid map include:
step 3.1, establishing the sensor model shown in fig. 7 and the scan map according to the information acquired in the preceding steps, obtaining the current frame's sensor scan information,
step 3.1.1, dividing the detection range of the sensor, i.e., the XOY plane coordinate system, into m × n grid cells C of size l × l to obtain the sensor model, the coordinate values of each grid cell C in the sensor XOY plane coordinate system being (x_c, y_c, z_c), wherein (x_c, y_c) are the coordinates of the cell's center point in the sensor XOY plane and z_c is the maximum height of the k scan points contained in the cell, z_c = max(P_{i,z}), i = 1, 2, ..., k,
step 3.1.2, capturing N data points P_i = {x_i, y_i, z_i, d_i} with the sensor and projecting all points onto the m × n grid to obtain the scan map, the point set P determining grid cell C's state being:

$$P = \{\, P_i \mid \theta^{-} \le \arctan(y_i / x_i) \le \theta^{+},\; i = 1, 2, \ldots, N \,\}$$

wherein (θ⁻, θ⁺) are the minimum and maximum angles of each grid cell with respect to the X axis, and N is the number of data points captured by the sensor;
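The projection of step 3.1 can be sketched in Python as follows. The map extent (80 m ahead, 16 m to each side, matching the experiment described later) and the cell size l = 0.4 m are assumptions made for illustration; the patent does not fix the cell size here.

```python
import numpy as np

L = 0.4                          # cell edge length l (assumed)
M, N = int(80 / L), int(32 / L)  # m x n grid cells

def build_scan_map(points):
    """points: (N, 4) array of (x, y, z, d) in the sensor frame, x in [0, 80),
    y in [-16, 16). Returns the per-cell maximum height z_c (step 3.1.1)."""
    z_c = np.full((M, N), -np.inf)   # -inf marks cells that received no point
    for x, y, z, d in points:
        col = int(x / L)
        row = int((y + 16.0) / L)    # shift y so indices start at 0
        if 0 <= col < M and 0 <= row < N:
            z_c[col, row] = max(z_c[col, row], z)
    return z_c

scan = build_scan_map(np.array([[12.3, -1.7, 0.25, 12.4],
                                [12.4, -1.6, 0.05, 12.5]]))
```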
step 3.2, initializing the global grid map, i.e., setting the sensor model to the unknown state;
step 3.3, updating the grid map by fusing the frame-t scan map with the frame-(t−1) global map; the global map stores the scan data of the previous frame's grid map and defines the grid state of the unknown area.
Specifically, step 3.3.1, as shown in fig. 8, an X′O′Y′ grid map is established from the estimate of the position of the XOY coordinate system after one scan cycle; a grid cell C_i′ in the X′O′Y′ grid map is the estimate at time t of the frame-(t−1) grid cell C_i after one scan cycle, computed from the heading change
γ = γ_2 − γ_1,
wherein γ_1 is the angle between the vehicle body and the road edge in frame t−1 and γ_2 is the angle between the vehicle body and the road edge in frame t, and each grid cell C_i′ in the X′O′Y′ grid map corresponds one-to-one with a grid cell C_i in the frame-t grid map.
Step 3.3.2, the scan map and the global map are judged according to an update rule and the grid map is updated, using the DSmT fusion rule. Specifically, in the grid map the state of each grid cell may be the obstacle-free state (F) or the obstacle state (O), so the unknown state of a cell is defined as Ω = {F, O} and the frame of discernment of a cell is 2^Ω, with corresponding basic probability functions [m(F), m(O), m(Ω), m(Φ)] respectively representing the four states of a cell (obstacle-free, obstacle, unknown and conflict). The four basic probability functions must satisfy

$$m(F) + m(O) + m(\Omega) + m(\Phi) = 1.$$

If X_1 and X_2 are two evidences of different natures within the frame of discernment, with basic probability functions m_1(X_1) and m_2(X_2), the DSmT fusion rule is

$$m(A) = \sum_{X_1 \cap X_2 = A} m_1(X_1)\, m_2(X_2),$$

wherein the conflict coefficient is

$$K = \sum_{X_1 \cap X_2 = \varnothing} m_1(X_1)\, m_2(X_2),$$

and K ∈ [0, 1]: the closer the conflict coefficient is to 1, the greater the conflict and inconsistency of the two evidences; conversely, the closer it is to 0, the smaller the conflict and the higher their consistency.
The update rule is as follows: if a grid cell C_i of frame t−1 moves into an unknown area of the global map at frame t, the frame-t grid cell C_i corresponding to C_i′ is fused with a grid cell in the unknown state;
if a grid cell C_i of frame t−1 moves into a known area of the global map at frame t, the frame-t grid cell C_i corresponding to C_i′ is fused with the grid cell C_j of frame t−1 nearest to C_i′,
wherein the basic probability function of each grid cell of the scan map at time t is m_1 and that at time t−1 is m_2, and the fusion formula is:

$$\begin{aligned} m(F) &= m_1(F)\,m_2(F) + m_1(F)\,m_2(\Omega) + m_1(\Omega)\,m_2(F) \\ m(O) &= m_1(O)\,m_2(O) + m_1(O)\,m_2(\Omega) + m_1(\Omega)\,m_2(O) \\ m(\Omega) &= m_1(\Omega)\,m_2(\Omega) \\ m(\Phi) &= K \end{aligned}$$

wherein the conflict coefficient K = m_1(F)m_2(O) + m_1(O)m_2(F), and m(F), m(O), m(Ω), m(Φ) respectively denote the basic probability functions of the four states of a grid cell (obstacle-free, obstacle, unknown and conflict), satisfying m(F) + m(O) + m(Ω) + m(Φ) = 1.
Specifically, when x_{t−1} > 80 m or y_{t−1} ∉ (−16 m, 16 m), i.e., the grid cell C_i of frame t−1 moves into an unknown area of the global map at frame t, the frame-t grid cell C_i corresponding to C_i′ is fused with a grid cell in the unknown state; when x_{t−1} < 80 m and y_{t−1} ∈ (−16 m, 16 m), i.e., the grid cell C_i of frame t−1 moves into a known area of the global map at frame t, the frame-t grid cell C_i corresponding to C_i′ is fused with the grid cell C_j of frame t−1 nearest to C_i′;
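A minimal Python sketch of the per-cell fusion formula above, assuming each cell's basic probability assignment is stored as a dictionary over F (obstacle-free), O (obstacle) and W (Ω, unknown); the conflict mass m(Φ) is kept as K rather than renormalized away, as in the formula.

```python
def fuse(m1, m2):
    """Combine the frame-t scan map mass m1 with the frame-(t-1) mass m2."""
    K = m1['F'] * m2['O'] + m1['O'] * m2['F']   # conflict coefficient
    return {
        'F': m1['F'] * m2['F'] + m1['F'] * m2['W'] + m1['W'] * m2['F'],
        'O': m1['O'] * m2['O'] + m1['O'] * m2['W'] + m1['W'] * m2['O'],
        'W': m1['W'] * m2['W'],
        'Phi': K,                               # retained, not renormalized
    }

# A cell free at t-1 but occupied at t yields a large conflict mass:
print(fuse({'F': 0.0, 'O': 0.9, 'W': 0.1},
           {'F': 0.9, 'O': 0.0, 'W': 0.1}))
# {'F': 0.09, 'O': 0.09, 'W': 0.01, 'Phi': 0.81} -- masses sum to 1
```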
And step 3.4, the state of each grid cell is judged among: the obstacle state (O), the obstacle-free state (F), the unknown state (Ω) and the conflict state (Φ), wherein Ω = {F, O}; cells in the same state are marked with the same color in the grid map and cells in different states with different colors. The specific judgment rules and the basic probability function values of each state are as follows:
if there exists d_i ∈ [r − √2·l/2, r + √2·l/2] and the cell height z_c > 0.1 m, the state of grid cell C is the obstacle state (O), with basic probability function values
m(F) = 0, m(O) = 1 − λ_1, m(Ω) = λ_1;
if there is no d_i ∈ [r − √2·l/2, r + √2·l/2] and (r + √2·l/2) < min(d_i), the state of grid cell C is the obstacle-free state (F), with basic probability function values
m(F) = 1 − λ_2, m(O) = 0, m(Ω) = λ_2;
if there exists d_i ∈ [r − √2·l/2, r + √2·l/2] and (r + √2·l/2) > min(d_i), the state of grid cell C is the unknown state (Ω), with basic probability function values m(F) = 0, m(O) = 0, m(Ω) = 1;
the gray-area grid cells beyond the sensor's horizontal scan angle limit are set to the unknown state (Ω), wherein r is the distance from the cell's center point to the sensor and λ_1, λ_2 ∈ [0, 1] are respectively the sensor's false alarm rate and missed detection rate. A grid map established by the above process can accurately describe the road environment information in front of the unmanned vehicle.
Step 4, displaying a fitting function of the road edge by using a line in the grid map, and dividing the front area into a travelable area and a non-travelable area
Once this step is complete, processing is restricted to the drivable area, which on the one hand greatly reduces the amount of data to process and on the other hand reduces interference, preventing obstacle information from the non-drivable area from disturbing the detection result and improving the accuracy of dynamic obstacle detection.
Step 5, detecting dynamic obstacles in the travelable area according to the rule of the collision coefficient
The specific detection rule is: the conflict coefficient is split into two parts, K = C_1 + C_2 = m_1(F)m_2(O) + m_1(O)m_2(F), detected over the interval from t−1 to t. C_1 = m_1(F)m_2(O): when a grid cell changes from the obstacle-free state at time t−1 to the obstacle state at time t, the detection result is that a dynamic obstacle has entered the cell. C_2 = m_1(O)m_2(F): when a grid cell changes from the obstacle state at time t−1 to the obstacle-free state at time t, the detection result is that a dynamic obstacle has left the cell.
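A minimal sketch of this detection rule, splitting K into C_1 and C_2 per cell and thresholding each part; the threshold 0.1 follows the experiment described below, and the function name is invented.

```python
def detect_dynamic(m_t, m_t1, eps1=0.1, eps2=0.1):
    """m_t / m_t1: frame-t and frame-(t-1) masses, i.e. m1 and m2 above."""
    C1 = m_t['F'] * m_t1['O']
    C2 = m_t['O'] * m_t1['F']
    if C1 > eps1:
        return 'dynamic obstacle entered the cell'
    if C2 > eps2:
        return 'dynamic obstacle left the cell'
    return 'no change'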
Step 6, clustering and information extraction are performed on the dynamic obstacles in the drivable area using dilation and erosion algorithms and an improved eight-neighborhood region labeling algorithm
More specifically, referring to fig. 9, the specific steps of the improved eight-neighborhood region labeling algorithm include:
step 6.1, scanning the grids to find an obstacle grid;
step 6.2, judging the marking condition of the obstacle grids;
step 6.3, checking the left, lower-left, lower and lower-right grid cells in the eight-neighborhood of each unmarked cell screened out in the preceding step;
and step 6.4, marking the same-color grid cells of the same obstacle with the same cluster number and enclosing them together in a hollow rectangular frame. The dilation-erosion processing prevents the scan points of a single obstacle from breaking apart into several obstacles, which can occur because the sensor has a certain missed detection rate and false alarm rate or because of the influence of the obstacle's shape, the weather and other factors, and which would greatly interfere with obstacle clustering and information extraction; the processing thus connects the breaks within the same obstacle and smooths the obstacle's contour. These clustering and information extraction algorithms effectively reduce redundant computation and markedly improve efficiency.
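A single-pass Python sketch of steps 6.1 to 6.4, assuming the grid is scanned from the bottom row upward so that a cell's left, lower-left, lower and lower-right neighbors have already been visited. A full implementation would also merge label equivalences when two runs of the same obstacle meet; that bookkeeping is omitted here for brevity.

```python
def label_obstacles(grid):
    """grid: 2D list of booleans (True = dynamic obstacle cell).
    Returns an equally sized 2D list of cluster numbers (0 = background)."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 1
    for r in range(rows):                # bottom-up: row 0 first
        for c in range(cols):
            if not grid[r][c]:           # step 6.1: find obstacle cells
                continue
            # steps 6.2-6.3: inherit the label of an already-marked
            # left / lower-left / lower / lower-right neighbor
            for dr, dc in ((0, -1), (-1, -1), (-1, 0), (-1, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and labels[rr][cc]:
                    labels[r][c] = labels[rr][cc]
                    break
            if not labels[r][c]:         # step 6.4: open a new cluster
                labels[r][c] = next_label
                next_label += 1
    return labels
```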
As another specific embodiment of the present invention, a specific application experiment of the road condition information acquisition method is now described. Referring to fig. 10, the figure shows the experimental results formed as the same frame of data, extracted while the unmanned vehicle is driving, passes through the different algorithm processing stages. In the scene shown, 1 is a dynamic vehicle, and 2 is 2 dynamic pedestrians with 1 tricycle loaded with goods. Because the two pedestrians push the tricycle forward together, are very close to each other, and have exactly the same velocity direction, they are clustered into the same obstacle after algorithm processing, which does not affect the autonomous driving of the unmanned vehicle.
A grid map is built; referring to fig. 10(b), 3 marks the data points where the lidar sweeps the ground. According to the sensor model, data points higher than 10 cm are treated as obstacle points, so the ground points in fig. 10(c) are treated as obstacle-free cells. Referring to fig. 10(c), in the DSmT-fused grid map the lidar's false alarm rate λ_1 and missed detection rate λ_2 are both set to 0.1, and the conflict coefficient thresholds ε_1 and ε_2 are both set to 0.1. The white cells in the figure are in the obstacle-free state; the green cells are in the obstacle state; the gray cells are in the unknown state. A red cell indicates that the conflict coefficient C_1 > ε_1, in which case a dynamic obstacle is considered to have entered the cell; a blue cell indicates that C_2 > ε_2, in which case a dynamic obstacle is considered to be leaving the cell. A small number of blue cells may appear for obstacle 2 while obstacle 1 shows none, because C_2 depends on the obstacle's relative speed, relative position, size, the cell size, the assignment of the basic probability function and other factors; C_2 therefore cannot be obtained stably, so the conflict coefficient cannot stably give the obstacle's direction of motion. C_1, by contrast, reflects the cell state stably, so the red cells should be regarded as dynamic obstacles.
As for the road division result, referring to fig. 10(d): first, the fitting function of the road edges is obtained and displayed as blue straight lines in the grid map, dividing the 80 m × 32 m area in front of the unmanned vehicle into a drivable area (between the road edges) and a non-drivable area (outside them). In the non-drivable area, red cells are changed to the obstacle state, blue cells to the obstacle-free state, and the green and gray cells keep their states; cells in the drivable area are not changed. This effectively eliminates the interference of the non-drivable area with dynamic obstacle detection. Referring to fig. 10(d), the dynamic obstacle 1 in the drivable area has many breaks, and the red cells belonging to a single obstacle are split into several blocks, making post-processing such as clustering, tracking and information extraction difficult. After the dynamic obstacles in the drivable area are processed by the dilation-erosion algorithm, the result of fig. 10(e) is obtained: dynamic obstacles 1 and 2 are each connected into a whole, which makes clustering and tracking convenient and better matches physical reality, while having no negative effect on dynamic obstacle detection or on the autonomous driving of the unmanned vehicle.
Referring to fig. 10(f), the improved eight-neighborhood labeling algorithm is applied to cluster the dynamic obstacles in the drivable area: the red cells of the same obstacle are marked with the same cluster number and enclosed together in a hollow blue rectangular frame. The cluster number of dynamic obstacle 1 is 1; the 15.62 m displayed below its blue frame is the distance between the obstacle's center point and the center of the unmanned vehicle, and the black label at the lower left corner, 1-long: 1.60 m and 1-wide: 1.20 m, gives the length and width of obstacle 1. Similarly, the cluster number of dynamic obstacle 2 is 2; the 23.63 m displayed below its frame is the distance between the obstacle's center point and the center of the unmanned vehicle, and the black label at the lower right corner, 2-long: 0.40 m and 2-wide: 0.80 m, gives the length and width of obstacle 2. The length and width labels are fixed at the lower corners and do not move, while the distance labels move with the center of each dynamic obstacle.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A road condition information acquisition method is characterized by comprising the following steps:
step 1, scanning a road condition data point in front by using a sensor to acquire information;
step 2, acquiring a fitting function of the road edge according to a road edge detection algorithm;
step 3, establishing a grid map according to the information acquired in the preceding steps;
step 4, displaying the fitting function of the road edge as a line in the grid map, and dividing the area ahead into a drivable area and a non-drivable area;
step 5, detecting dynamic obstacles in the travelable area according to the conflict coefficient rule;
and 6, processing and extracting information of the dynamic obstacles in the travelable area.
2. The road condition information acquiring method according to claim 1, wherein the step 2 comprises:
step 2.1, taking the position of the sensor as the origin O, taking the line of the sensor's emission direction in the same vertical plane as the direction of motion as the X axis, taking the horizontal line through the origin O perpendicular to the X axis as the Y axis, establishing the sensor XOY plane coordinate system, and taking the line through the origin O perpendicular to the XOY plane as the Z axis, thereby obtaining scan data point coordinates (x_i, y_i), wherein i is the data point serial number, and scan data points are read in order of serial number;
2.2, screening scanning data point information of the same layer;
step 2.3, screening out the road edge points among the same-layer data points according to the characteristic that road edge data point coordinates (x, y) satisfy a linear function y = kx + b, wherein k is the slope of the line and b is its intercept;
2.4, clustering different road edge points on each layer;
step 2.5, then performing a least-squares line fit to each class on each layer, the fitted line function satisfying:

$$k = \frac{n\sum_{i=1}^{n} x_i y_i - \sum_{i=1}^{n} x_i \sum_{i=1}^{n} y_i}{n\sum_{i=1}^{n} x_i^2 - \left(\sum_{i=1}^{n} x_i\right)^2}, \qquad b = \frac{1}{n}\left(\sum_{i=1}^{n} y_i - k \sum_{i=1}^{n} x_i\right)$$

wherein k is the slope of the fitted line, b is the intercept of the fitted line, and n is the number of road edge points;
and 2.6, fusing the plurality of scanning layers together to obtain a final road edge fitting function.
3. The road condition information acquiring method according to claim 2, wherein the step 3 comprises:
step 3.1, establishing a sensor model and a scan map according to the information acquired in the preceding steps;
step 3.2, initializing the global grid map, i.e., setting the sensor model to the unknown state;
step 3.3, fusing the frame-t scan map with the frame-(t−1) global map to update the grid map;
and step 3.4, judging the state of each grid cell among the obstacle state (O), the obstacle-free state (F), the unknown state (Ω) and the conflict state (Φ), wherein Ω = {F, O}; cells in the same state are marked with the same color in the grid map, and cells in different states are marked with different colors.
4. The road condition information acquiring method according to claim 3, wherein the step 3.1 comprises:
step 3.1.1, dividing the detection range of the sensor, i.e., the XOY plane coordinate system, into m × n grid cells C of size l × l to obtain the sensor model, the coordinate values of a grid cell C in the sensor coordinate system being (x_c, y_c, z_c), wherein (x_c, y_c) are the coordinates of the cell's center point in the sensor XOY plane and z_c is the maximum height of the k scan points contained in the cell, z_c = max(P_{i,z}), i = 1, 2, ..., k;
step 3.1.2, capturing N data points P_i = {x_i, y_i, z_i, d_i} at sensor time t and projecting all points onto the m × n grid to obtain the scan map, the point set P determining grid cell C's state being:

$$P = \{\, P_i \mid \theta^{-} \le \arctan(y_i / x_i) \le \theta^{+},\; i = 1, 2, \ldots, N \,\}$$

wherein (θ⁻, θ⁺) are the minimum and maximum angles of each grid cell with respect to the X axis, and N is the number of data points captured by the sensor.
5. The road condition information acquiring method according to claim 4, wherein the step 3.3 comprises:
step 3.3.1, establishing an X′O′Y′ grid map, i.e., the global map, from the estimate of the position of the XOY coordinate system after one scan cycle; a grid cell C_i′ in the X′O′Y′ grid map is the estimate at time t of the frame-(t−1) grid cell C_i after one scan cycle, and each grid cell C_i′ in the X′O′Y′ grid map corresponds one-to-one with a grid cell C_i in the frame-t grid map;
and 3.3.2, judging the scanning map and the global map according to an updating rule, and updating the grid map.
6. The road condition information acquiring method according to claim 5, wherein the updating rule of the step 3.3.2 is as follows:
if a grid cell C_i of frame t−1 moves into an unknown area of the global map at frame t, the frame-t grid cell C_i corresponding to C_i′ is fused with a grid cell in the unknown state;
if a grid cell C_i of frame t−1 moves into a known area of the global map at frame t, the frame-t grid cell C_i corresponding to C_i′ is fused with the grid cell C_j of frame t−1 nearest to C_i′,
wherein the basic probability function of each grid cell of the scan map at time t is m_1 and that of each grid cell at time t−1 is m_2, and the fusion formula is:

$$\begin{aligned} m(F) &= m_1(F)\,m_2(F) + m_1(F)\,m_2(\Omega) + m_1(\Omega)\,m_2(F) \\ m(O) &= m_1(O)\,m_2(O) + m_1(O)\,m_2(\Omega) + m_1(\Omega)\,m_2(O) \\ m(\Omega) &= m_1(\Omega)\,m_2(\Omega) \\ m(\Phi) &= K \end{aligned}$$

wherein the conflict coefficient K = m_1(F)m_2(O) + m_1(O)m_2(F), and m(F), m(O), m(Ω), m(Φ) respectively denote the basic probability functions of the four states of a grid cell (obstacle-free, obstacle, unknown and conflict) and satisfy

$$m(F) + m(O) + m(\Omega) + m(\Phi) = 1.$$
7. The road condition information acquisition method according to claim 6, wherein the determination rule of step 3.4 is:
if there exists d_i ∈ [r − √2·l/2, r + √2·l/2] and the cell height z_c > 0.1 m, then the state of grid cell C is the obstacle state (O);
if there is no d_i ∈ [r − √2·l/2, r + √2·l/2] and (r + √2·l/2) < min(d_i), then the state of grid cell C is the obstacle-free state (F);
if there exists d_i ∈ [r − √2·l/2, r + √2·l/2] and (r + √2·l/2) > min(d_i), then the state of grid cell C is the unknown state (Ω);
the gray-area grid cells beyond the sensor's horizontal scan angle limit are set to the unknown state (Ω),
wherein r is the distance from the cell's center point to the sensor.
8. The road condition information acquisition method according to claim 7, wherein the basic probability function values of each state determined in step 3.4 are:
the basic probability function values of the obstacle state (O) are m(F) = 0, m(O) = 1 − λ_1, m(Ω) = λ_1;
the basic probability function values of the obstacle-free state (F) are m(F) = 1 − λ_2, m(O) = 0, m(Ω) = λ_2;
the basic probability function values of the unknown state (Ω) are m(F) = 0, m(O) = 0, m(Ω) = 1;
wherein λ_1, λ_2 ∈ [0, 1] are respectively the false alarm rate and the missed detection rate of the sensor.
9. The road condition information acquiring method according to claim 1, wherein the detection rule of step 5 is:
the collision coefficient is divided into two parts for detection, the detection time is from t-1 to t,
when a certain grid is changed from an obstacle-free state at the time t-1 to an obstacle-existing state at the time t, the detection result is that a dynamic obstacle enters the grid;
when a certain grid is changed from an obstacle-existing state at the time t-1 to an obstacle-free state at the time t, the detection result is that a dynamic obstacle is separated from the grid.
10. The road condition information acquiring method according to claim 1, wherein the obstacle processing method in step 6 includes an eight-neighborhood region labeling algorithm, and the eight-neighborhood region labeling algorithm includes the steps of:
step 6.1, scanning the grids to find an obstacle grid;
step 6.2, judging the marking condition of the obstacle grids;
step 6.3, checking the left, lower-left, lower and lower-right grid cells in the eight-neighborhood of each unmarked cell screened out in the preceding step;
and step 6.4, marking the same-color grid cells of the same obstacle with the same cluster number and enclosing them together in a hollow rectangular frame.
CN202010511372.6A 2020-06-08 2020-06-08 Road condition information acquisition method Withdrawn CN111640323A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010511372.6A CN111640323A (en) 2020-06-08 2020-06-08 Road condition information acquisition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010511372.6A CN111640323A (en) 2020-06-08 2020-06-08 Road condition information acquisition method

Publications (1)

Publication Number Publication Date
CN111640323A true CN111640323A (en) 2020-09-08

Family

ID=72333359

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010511372.6A Withdrawn CN111640323A (en) 2020-06-08 2020-06-08 Road condition information acquisition method

Country Status (1)

Country Link
CN (1) CN111640323A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112068156A (en) * 2020-09-14 2020-12-11 上海应用技术大学 Collision avoidance method and system for coke pusher
CN112068156B (en) * 2020-09-14 2023-06-20 上海应用技术大学 Anti-collision method and system for coke pusher
CN112630788A (en) * 2020-12-08 2021-04-09 广州小鹏自动驾驶科技有限公司 Environmental obstacle position information detection method and device
CN112703144A (en) * 2020-12-21 2021-04-23 华为技术有限公司 Control method, related device and computer-readable storage medium
CN112904841A (en) * 2021-01-12 2021-06-04 北京布科思科技有限公司 Single-line positioning obstacle avoidance method and device in non-horizontal orientation, equipment and storage medium
CN112904841B (en) * 2021-01-12 2023-11-03 北京布科思科技有限公司 Non-horizontal single-line positioning obstacle avoidance method, device, equipment and storage medium
CN113317733A (en) * 2021-06-04 2021-08-31 深圳飞鼠动力科技有限公司 Path planning method and cleaning robot
CN113317733B (en) * 2021-06-04 2023-01-31 深圳飞鼠动力科技有限公司 Path planning method and cleaning robot
CN114445415A (en) * 2021-12-14 2022-05-06 中国科学院深圳先进技术研究院 Method for dividing a drivable region and associated device
CN115047880A (en) * 2022-06-17 2022-09-13 湖北亦煌科技有限公司 Intelligent path planning method for robot in unknown dynamic environment

Similar Documents

Publication Publication Date Title
CN109961440B (en) Three-dimensional laser radar point cloud target segmentation method based on depth map
CN111640323A (en) Road condition information acquisition method
US11709058B2 (en) Path planning method and device and mobile device
WO2020043041A1 (en) Method and device for point cloud data partitioning, storage medium, and electronic device
Moras et al. Credibilist occupancy grids for vehicle perception in dynamic environments
CN107330925B (en) Multi-obstacle detection and tracking method based on laser radar depth image
Wijesoma et al. Road-boundary detection and tracking using ladar sensing
WO2022188663A1 (en) Target detection method and apparatus
Broggi et al. A full-3D voxel-based dynamic obstacle detection for urban scenario using stereo vision
CN112464812B (en) Vehicle-based concave obstacle detection method
Miyasaka et al. Ego-motion estimation and moving object tracking using multi-layer lidar
CN112346463B (en) Unmanned vehicle path planning method based on speed sampling
CN109872329A (en) A kind of ground point cloud fast partition method based on three-dimensional laser radar
JP2002352225A (en) Obstacle detector and its method
Almansa-Valverde et al. Mobile robot map building from time-of-flight camera
CN114120283A (en) Method for distinguishing unknown obstacles in road scene three-dimensional semantic segmentation
CN115223039A (en) Robot semi-autonomous control method and system for complex environment
CN114821526A (en) Obstacle three-dimensional frame detection method based on 4D millimeter wave radar point cloud
JP4956099B2 (en) Wall detector
CN113076824B (en) Parking space acquisition method and device, vehicle-mounted terminal and storage medium
Jianmin et al. Road and obstacle detection based on multi-layer laser radar in driverless car
Habermann et al. 3D point clouds segmentation for autonomous ground vehicle
CN114217641B (en) Unmanned aerial vehicle power transmission and transformation equipment inspection method and system in non-structural environment
CN115797900B (en) Vehicle-road gesture sensing method based on monocular vision
Bulatov et al. Context-based urban terrain reconstruction from images and videos

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (Application publication date: 20200908)