CN117836667A - Static and non-static object point cloud identification method based on road side sensing unit


Info

Publication number: CN117836667A
Application number: CN202280026656.3A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: static, point cloud, data, area, areas
Legal status: Pending (status assumed by Google Patents; not a legal conclusion)
Inventors: 赵聪 (Zhao Cong), 请求不公布姓名 (name withheld at the inventor's request), 魏斯瑀 (Wei Siyu)
Current assignee: Ma Ruzheng
Original assignee: Ma Ruzheng
Application filed by Ma Ruzheng
Publication of CN117836667A


Classifications

    • G01S17/89 Lidar systems specially adapted for specific applications, for mapping or imaging
    • G01S17/87 Combinations of systems using electromagnetic waves other than radio waves
    • G01S17/93 Lidar systems specially adapted for anti-collision purposes
    • G01S17/931 Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G01S7/003 Transmission of data between radar, sonar or lidar systems and remote stations
    • G01S7/40 Means for monitoring or calibrating
    • G01S7/4808 Evaluating distance, position or velocity data
    • G01S7/4972 Alignment of sensor
    • G06F18/20 Pattern recognition; Analysing
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F9/451 Execution arrangements for user interfaces
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G08G1/0116 Measuring and analyzing of parameters relative to traffic conditions based on data from roadside infrastructure, e.g. beacons
    • G08G1/0133 Traffic data processing for classifying traffic situation
    • G08G1/0141 Measuring and analyzing of parameters relative to traffic conditions for traffic information dissemination
    • G08G1/0145 Measuring and analyzing of parameters relative to traffic conditions for active traffic flow control
    • G08G1/04 Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • G08G1/048 Detecting movement of traffic with provision for compensation of environmental or other condition, e.g. snow, vehicle stopped at detector
    • G08G1/164 Anti-collision systems; centralised systems, e.g. external to vehicles


Abstract

A method for rapid identification of dynamic and static objects based on a roadside sensing unit comprises the following steps: 1. construct a roadside lidar holographic sensing scene and collect a batch of point cloud data as prior information; 2. establish an effective target identification range and extract a static point cloud background for subsequent matching; 3. compare the voxel features of the current frame to be identified with those of the recorded static point cloud background, identify the non-static areas showing large variation differences, and record the remaining parts as static objects; 4. record the non-static areas as temporary static areas at a fixed frequency, and distinguish short-time static objects from dynamic objects by comparing the non-static areas with the recorded temporary static areas.

Description

Static and non-static object point cloud identification method based on road side sensing unit

Technical Field
The invention belongs to the technical field of data processing, and in particular relates to a static and non-static object point cloud identification method based on a roadside sensing unit, oriented mainly towards target perception on the infrastructure side in a vehicle-road cooperative environment.
Background
With the continued improvement and development of the national economy and automobile industry technology in China, the automobile has become an indispensable means of transportation in daily life and production. It must be acknowledged, however, that while automobiles have greatly improved the way we produce and live, they are accompanied by increasingly serious problems such as traffic accidents, traffic congestion, economic losses and environmental pollution. To improve the traffic environment, governments and researchers around the world have actively explored effective ways of addressing traffic safety, and automatic driving technology has emerged from this effort. With the vehicle as the software and hardware carrier, electronic equipment such as on-board sensors, decision units and actuators provides intelligent support, so that the vehicle can make driving decisions based on its surrounding environment; traffic risks caused by variation in driver skill are thereby avoided, and vehicle safety is improved. In addition, as dedicated short-range communication technology, sensor technology and vehicle control technology mature, the pace at which automatic driving and unmanned driving technology moves from the laboratory to practical application is accelerating.
However, judging from the current state of automatic driving technology, it is difficult to effectively resolve traffic safety risks by relying on single-vehicle intelligence alone, mainly for the following reasons: 1. the sensing capability of on-board equipment is limited, and in certain scenes insufficient sensing data can lead to decision errors; 2. the sensing range of on-board equipment is limited and constrained by the installation position; much sensing equipment cannot make full use of its theoretical sensing range and is often occluded by other vehicles around the automatic driving vehicle, so that possible dangers are missed; 3. decision-making capability still needs to be improved, and the current level of intelligence of automatic driving decision systems struggles to cope with complex and changeable traffic environments. In response to these problems, countries around the world have begun to propose a new path: the vehicle-road cooperative system. Its basic idea is to use multi-disciplinary, cross-domain methods together with advanced wireless communication and sensing technologies to acquire vehicle and road information in real time, to realize information interaction and sharing between vehicles and intelligent roadside facilities through vehicle-vehicle and vehicle-road communication, and to achieve intelligent cooperation between vehicles and roads, thereby improving road traffic safety, road traffic efficiency and the utilization of road traffic system resources. In general, a vehicle-road cooperative system can be divided, according to the installation position of its sensors, into two subsystems: an intelligent roadside system and an intelligent on-board system. The intelligent roadside system mainly undertakes tasks such as the acquisition and release of traffic flow information, the control of roadside equipment, vehicle-road communication, and traffic management and control; the intelligent on-board system mainly completes tasks such as acquiring the vehicle motion state and the vehicle's surrounding environment, vehicle-vehicle/vehicle-road communication, safety early warning and assisted vehicle control. The two subsystems transmit and share information through vehicle-road communication, realizing data interaction, widening the perception range and data volume available to the automatic driving vehicle, strengthening its decision basis and improving driving safety.
As things currently stand, the main realizable application of the intelligent roadside system is to provide auxiliary perception information for automatic driving vehicles. Roadside sensing equipment is richer in type than on-board sensing equipment, its installation position is relatively unconstrained, and hard conditions such as energy supply are more abundant. Common roadside sensing devices include: 1. inductive loop coils; 2. millimeter-wave radar; 3. UWB technology; 4. visual means; 5. lidar; and so on. Among these sensing means, the technologies usable as engineering solutions are mainly visual detection methods and lidar. Both have data forms that are simple and easy to understand, and mature target detection techniques. However, if roadside visual data are to serve the vehicle side, the transmitted data must be detection results, because image data with large viewing-angle differences can hardly be fused at the raw-data level and only detection results can be fused. Lidar data, in contrast, take the form of point cloud coordinates, and data fusion can be achieved through coordinate transformation; compared with post-fusion based on detection results, this pre-fusion approach loses less semantic information and is more conducive to improving the recognition accuracy of detection results.
It should be noted, however, that even though pre-fusion is more advantageous, it still has a major limitation: the amount of data to be transmitted is large, because what is transmitted are raw data. A single frame of point cloud data is generally between several MB and tens of MB, so one-to-one transmission can reach several hundred MB per second, and in complex intelligent traffic environments with many vehicles the total transmission volume per second can even reach several GB. The amount of transmitted data must therefore be reduced.
There are various ways to reduce the data volume, such as sparse sampling of the point cloud or extraction of skeleton structures. An effective method suited to automatic driving scenes is to extract the important parts of the data: for a target vehicle, objects such as surrounding vehicles, non-motor vehicles, pedestrians and roadblock facilities are far more important than objects such as the road surface, greening plants and buildings on both sides. The corresponding low-value objects can therefore be identified and screened out of the raw data, reducing the data volume. Moreover, after the low-value objects are removed, the usefulness of the remaining objects naturally increases: with interfering factors removed, the important objects stand out more and are no longer affected by low-value data.
Prior Art
CN108152831 A
CN106846494 A
CN107133966 A
CN110443978 A
Interpretation of the terms
To make the description of the present invention more accurate and clear, the terms that may appear in the present invention are explained as follows:
Roadside sensing unit: in a vehicle-road cooperative scene, environment sensing equipment arranged around the road, with roadside posts or gantries as mounting bases. In the present invention, the roadside sensing unit refers specifically to a lidar system, and the two terms should be regarded as the same.
Point cloud data: a set of vectors in a three-dimensional coordinate system, generally including at least the three-dimensional coordinates X, Y, Z, used to represent the shape of the external surface of an object; depending on the device, other information can sometimes also be obtained. In the present invention, the symbol D is generally used to represent point cloud data and partially processed data derived from it, and content expressed with the symbol D should be understood as representing point cloud data.
Static object D_s: mainly the road surface and its auxiliary facilities, buildings, roadside greening plants and the like. Without considering reconstruction, extension or frequent road maintenance, such objects remain in a fixed position with an unchanged appearance over the long term; an object with no obvious change in position or appearance within one month can be regarded as a static object. The basis for judging whether a change is obvious is: a centroid shift of the horizontal projection of the object of more than 1 meter is regarded as an obvious change, or a change of contour side length or volume of more than 5% of the original data is regarded as an obvious change.
Short-time static object D_st: in the present invention, mainly temporarily parked vehicles, standing pedestrians and the like. Such objects show no change in position or state in the short term, but the possibility of movement at the next moment cannot be excluded. An object that does not belong to the static objects and shows no obvious change in position or appearance within 5 frames is regarded as a short-time static object. The basis for judging an obvious change is: a centroid shift of the horizontal projection of the object of more than 0.3 m is regarded as an obvious change, or a change of contour side length or volume of more than 5% of the original data is regarded as an obvious change.
Dynamic object D_d: an object that is in motion when observed. An object that does not belong to the static objects and shows an obvious change in position or appearance within 2 consecutive frames can be regarded as a dynamic object. The basis for judging an obvious change is the same: a centroid shift of the horizontal projection of the object of more than 0.3 m, or a change of contour side length or volume of more than 5% of the original data.
Non-static object D_ns: the union of the short-time static objects and the dynamic objects.
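As a reading aid only (not part of the original filing), the following minimal sketch illustrates the "obvious change" test used in the three definitions above; the function name, the use of an axis-aligned bounding box as a stand-in for the contour volume, and the array layout are illustrative assumptions.

    import numpy as np

    def obvious_change(points_prev, points_curr, centroid_thresh=0.3):
        # points_prev / points_curr: (N, 3) arrays of X, Y, Z coordinates of one object.
        # Centroid shift of the horizontal (bird's-eye-view) projection.
        shift = np.linalg.norm(points_curr[:, :2].mean(axis=0)
                               - points_prev[:, :2].mean(axis=0))
        # Axis-aligned bounding-box volume as a simple stand-in for the contour volume.
        vol_prev = np.prod(points_prev.max(axis=0) - points_prev.min(axis=0))
        vol_curr = np.prod(points_curr.max(axis=0) - points_curr.min(axis=0))
        vol_change = abs(vol_curr - vol_prev) / max(vol_prev, 1e-9)
        return shift > centroid_thresh or vol_change > 0.05

    # Static objects use a 1 m centroid threshold; short-time static and dynamic
    # objects use 0.3 m, e.g. obvious_change(prev, curr, centroid_thresh=1.0).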
Raw data D_0: the point cloud data set used in the preprocessing part of the invention, generally comprising about 1000 frames of point cloud data; it needs to cover most of the common traffic scenes of the road section where the roadside sensing unit is installed.
Point cloud data to be identified: different from the raw data D_0, these are the point cloud data frames actually used for identification while the invention is in use; the identification results can support subsequent work such as point cloud target detection. In the description they are denoted as the i-th frame data D_i.
Key road surface area: the road-surface-containing area that the method of the invention identifies with emphasis, usually the road area that can be clearly distinguished within the lidar scanning range and that constitutes the main traffic path.
Data value: the degree to which point cloud data influence driving decisions when used by an automatic driving vehicle. In the invention, the judgment basis is the possible risk an object poses to the automatic driving vehicle; without considering the data understanding capability of the automatic driving system, the data value of dynamic objects is generally considered the largest and that of static objects the smallest.
Invalid data: also called invalid point cloud data; point cloud data with little data value, generally including buildings, slopes, open spaces and the like on both sides of the road, and generally the point cloud data lying more than 5 m beyond the road edge. In practical application, the extent of the invalid data can be adjusted according to the identification requirements; for example, in road scenes with roadside guardrails, such as urban expressways, the point cloud data outside the road guardrails can be regarded as invalid data.
Valid data: also called valid point cloud data; the point cloud data remaining after the invalid data are removed. In the expressions of the invention, the extraction of valid data is denoted by adding a single quotation mark (').
Boundary equation set E_b: the set of function boundaries separating the invalid data from the valid data, established by manually selecting boundary points after the point cloud data are projected into a bird's-eye view and then fitting with the least squares method.
Static point cloud background B: a purely static background space containing no short-time static objects and no dynamic objects; for the vehicle-road cooperative scene addressed by the invention, a traffic environment containing no traffic participants such as non-permanently parked vehicles, non-motor vehicles or pedestrians.
Valid point cloud data to be identified: the point cloud data set obtained by clipping the point cloud data to be identified with the boundary equation set E_b; denoted in the description as the valid i-th frame data D_i'.
Statistical space: point cloud data are dense near the sensor and sparse far from it. To prevent this characteristic from affecting the subsequent recognition work, the point cloud data are divided into several statistical spaces, and superposition and downsampling operations are used to make the point cloud density approximately equal everywhere.
Point cloud density: an index describing how dense the point cloud is, characterized as the number of points per unit volume. The specific calculation method and its parameters can be determined according to the equipment conditions.
Scanning distance L: the distance from a point to the lidar center, which can be characterized by the planar distance after the point cloud data are projected into a bird's-eye view.
Point cloud target detection algorithm: a target detection algorithm used to detect point cloud data and classify it into specific categories (such as large vehicles, small vehicles, pedestrians and the like). Its function differs from that of the identification method of the invention, which only identifies specific point cloud data sets and does not detect specific categories.
Detection confidence P: the confidence of the output result obtained when the point cloud data representing a certain object are input into a point cloud target detection algorithm. In particular, if the point cloud target detection algorithm does not detect a result, the detection confidence is taken to be 0.
Recognition trigger distance threshold DT: the sensing distance range within which most point cloud target detection algorithms perform well. In the present invention, performing well is defined as being able to detect all non-static objects in the area with a detection confidence of not less than 75%.
Point cloud data for identification: the result of clipping the point cloud data with the recognition trigger distance threshold; in the expressions of the invention, the operation of extracting point cloud data for identification is denoted by adding a double quotation mark (").
Point cloud data for identification to be identified: the point cloud data set obtained by clipping the valid point cloud data to be identified with the recognition trigger distance threshold DT; denoted in the description as the i-th frame data for identification D_i".
Voxel: a volume element (volume pixel), defined analogously to a pixel in two-dimensional space; it is the smallest unit of three-dimensional space division and is represented by a spatial cube. The side length of the cube can be set artificially, and voxel descriptions of different sizes give models of different fineness.
Voxelization: converting point cloud data into voxels; in the context of the present invention this is denoted by the subscript v.
Voxelized point cloud data to be identified: the point cloud data set obtained by voxelizing the point cloud data for identification to be identified; denoted in the description as the voxelized i-th frame data D_v.
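A minimal voxelization sketch (illustrative only; the voxel side length and the index representation are assumptions, since the side length is left to be set by the user):

    import numpy as np

    def voxelize(points, voxel_size=0.2):
        # points: (N, 3) array of X, Y, Z coordinates; returns the set of occupied
        # voxel indices (each point is mapped to the cube that contains it).
        indices = np.floor(points[:, :3] / voxel_size).astype(np.int64)
        return {tuple(idx) for idx in indices}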
Point cloud coordinate system origin: the point cloud data is typically represented in three-dimensional coordinate form, and the coordinate system origin of the point cloud data is referred to as the point cloud coordinate system origin in the present invention.
Annular spare identification area: to avoid parts of static or non-static objects being left incomplete by the clipping operation, an annular spare identification area is added outside the recognition trigger distance threshold and divided into several sub-areas at fixed angles. When a non-static area is identified at a position close to the edge of the recognition trigger distance threshold, the horizontal angle between the non-static area and the X axis of the point cloud coordinate system is recorded, and the annular spare identification sub-area corresponding to that angle is appended to the non-static area.
Sliding window method: using a data screening frame of fixed size, sub-data are continuously screened out of the original data along a certain direction, so that operations are applied only to the sub-data; this reduces the amount of data processing and strengthens the identification of local features.
Background difference method: in general, a method for detecting non-static objects by comparing the current frame of an image sequence with a background reference model; in the present invention it is applied to three-dimensional point cloud data.
Static area A_s: when the variation amplitude of features such as the number, position and distribution of the voxels in a spatial region containing several voxels, compared with the static point cloud background, is smaller than the judgment threshold, the object or scene in that region can be considered unchanged, i.e. it is a static area. Essentially a subset of the static objects.
Non-static area A_ns: when the variation amplitude of features such as the number, position and distribution of the voxels in a spatial region containing several voxels, compared with the static point cloud background, is larger than the judgment threshold, the object or scene in that region can be considered changed, i.e. it is a non-static area.
Temporary static area A_st: a spatial region containing several voxels, formed by preserving a non-static area at a fixed frequency. It is used to distinguish dynamic objects from short-time static objects; the part remaining after the dynamic objects are separated out corresponds to the short-time static objects.
Dynamic area A_d: a spatial region containing several voxels and belonging to a non-static area; when the variation amplitude of features such as the number, position and distribution of the voxels in it, compared with the temporary static area, is larger than the judgment threshold, the object or scene in the region can be considered changed, i.e. it is a dynamic area. Essentially a subset of the dynamic objects.
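The area definitions above can be read together as a voxel-level background difference. The following minimal sketch (illustrative only; the region size, the occupancy-difference criterion and the threshold value are assumptions, not values fixed by the patent) shows how the occupied voxels of one frame might be compared against the static point cloud background to split the frame into static and non-static areas:

    def split_static_non_static(frame_voxels, background_voxels,
                                region_size=5, diff_thresh=3):
        # Group voxel indices into cubic regions of `region_size` voxels per side and
        # mark a region as non-static when the number of voxels occupied in the frame
        # but absent from the background (or vice versa) exceeds `diff_thresh`.
        regions = {}
        for v in frame_voxels | background_voxels:
            key = tuple(c // region_size for c in v)
            regions.setdefault(key, 0)
            if (v in frame_voxels) != (v in background_voxels):
                regions[key] += 1
        non_static = {k for k, diff in regions.items() if diff > diff_thresh}
        static = set(regions) - non_static
        return static, non_static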
Disclosure of Invention
The invention provides a method for rapidly identifying dynamic and static objects and segmenting their point clouds based on a roadside sensing unit. Oriented towards real traffic environments, it relies on the fact that the position of the roadside lidar is essentially fixed, takes point cloud background data collected in advance as prior information, and rapidly screens out regions to be identified that show obvious change by comparing the frame to be identified with the background data, thereby greatly reducing invalid point cloud data and the amount of data to be transmitted. The application scene of the method is not limited to a single-radar environment; it is also applicable to multi-radar networking, and the extracted regions to be identified can be fused with other data formats to improve detection accuracy.
The flow of the invention is shown in Fig. 1 and Fig. 2, and the invention is characterized by comprising the following steps:
(I) Data acquisition
A roadside lidar sensing scene oriented towards the vehicle-road cooperative environment is constructed, and a batch of point cloud data is collected for the preprocessing stage. The collected point cloud data must provide comprehensive coverage without occlusion; they should contain the static objects, mainly the road surface and its auxiliary facilities, buildings, roadside greening plants and the like, and should also contain enough non-static objects, such as pedestrians, non-motor vehicles and vehicles, that can be distinguished manually, with a recommended total of not less than 300.
The constructed roadside lidar scene should ensure that no large-area occlusion exists within the scanning range, i.e. all key road surface areas within the scanning range should be clearly visible. The left part of Fig. 3 shows a poor layout point in which half the road width is lost because of the central divider, and the right part of Fig. 3 shows a better example.
The format of the point cloud data collected by the invention is shown in the following table.

    Field          | Description
    X, Y, Z        | Three-dimensional coordinates
    Intensity      | Reflected intensity value
    R, G, B        | Three-channel color values
    Return Number  | Echo number

The point cloud data include the three-dimensional coordinates X, Y, Z, the reflected intensity value Intensity, the three-channel color values RGB and the echo number Return Number. The invention uses only the three-dimensional coordinates X, Y, Z as the basis for point cloud data extraction; when the screening result is transmitted, the three-dimensional coordinates, the reflected intensity value and the three-channel color values are transmitted together, so that the vehicle end is not prevented from using the data by a loss of information. It should be understood that the point cloud data formats applicable to the present invention are not limited to the above example; any point cloud data containing the three-dimensional coordinates X, Y, Z fall within the applicable scope of the invention.
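A minimal sketch (illustrative; the field names are not prescribed by the patent) of holding the per-point record described above in memory as a NumPy structured array; only X, Y, Z are actually required by the method:

    import numpy as np

    point_dtype = np.dtype([
        ("x", np.float32), ("y", np.float32), ("z", np.float32),  # 3-D coordinates
        ("intensity", np.float32),                                 # reflected intensity
        ("r", np.uint8), ("g", np.uint8), ("b", np.uint8),         # color channels
        ("return_number", np.uint8),                               # echo number
    ])

    frame = np.zeros(4, dtype=point_dtype)  # e.g. a placeholder 4-point frame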
The data acquisition of the invention is divided into two stages. The first is the data acquisition stage serving the preprocessing work: it provides the data source for preprocessing, and the collected data must satisfy the requirements described below so as to comprehensively reflect the normal road conditions of the scene in which the roadside sensing unit is installed. The second is the data acquisition stage serving daily use: no detailed requirements are placed on data acquisition in this stage, and it is only necessary to ensure that the roadside sensing unit works normally. In the invention, the point cloud data set acquired in the preprocessing stage is collectively called the raw data D_0, and the point cloud data acquired in the use stage are named D_1, D_2 and so on in chronological (frame number) order. The following focuses on the data acquisition work of the preprocessing stage.
Considering the requirements of later preprocessing, data of not less than 1000 frames generally need to be acquired, and the specific number of frames can be adjusted appropriately according to the layout scene of the roadside sensing unit. During acquisition, the influence of environmental factors should be considered, for example the double reduction of road surface point cloud density and quality caused by fewer lidar echoes when the ground is wet on rainy days, or abnormal point cloud quality in some areas at night when high-intensity high beams on vehicles interfere with the normal lidar echoes. In addition, although the intensity of visible light is generally considered to have little effect on lidar, for rigor the collected data should be appropriately spread over several time periods. Based on the above requirements, the invention proposes the following raw data acquisition scheme:
(1) when conditions allow, operate the roadside lidar in heavy rain, or simulate rainy weather by large-area, large-scale water spraying on the road surface, and collect more than 200 frames of data;
(2) at night, illuminate the road surface with strong-light generating equipment, in particular covering road markings made of high-reflectivity materials, and collect more than 200 frames of data;
(3) in the daily environment, acquire data in the three time periods of morning, noon and evening, collecting more than 200 frames of data in each period.
It should be understood that, besides the acquisition scheme described above, other acquisition schemes adapted to the actual scene all belong to the acquisition scheme variants applicable to the present invention, such as the exemplary variants described below.
For regions with little rain but frequent wind and sand throughout the year, the following variant acquisition scheme can be adopted:
(1) in weather with poor visibility such as dust storms or strong wind, operate the roadside lidar and collect more than 200 frames of data;
(2) at night, illuminate the road surface with strong-light generating equipment, in particular covering road markings made of high-reflectivity materials, and collect more than 200 frames of data;
(3) in a clear-weather environment, acquire data in the three time periods of morning, noon and evening, collecting more than 200 frames of data in each period.
For northern regions with longer winters where ice and snow cover the road surface more severely, the following variant acquisition scheme can be adopted:
(1) in snowy weather or when the road surface is covered by ice and snow, operate the roadside lidar and collect more than 200 frames of data;
(2) at night, illuminate the road surface with strong-light generating equipment, in particular covering road markings made of high-reflectivity materials, and collect more than 200 frames of data;
(3) in a clear-weather environment, acquire data in the three time periods of morning, noon and evening, collecting more than 200 frames of data in each period.
Data acquisition schemes to which the present invention is applicable include, but are not limited to, the cases described above.
In the above environments, data acquisition should be arranged in combination with the traffic conditions within the scanning range of the roadside lidar, so as to ensure that single frames containing fewer than 2 vehicles or pedestrians account for not less than 50% of the acquired data, and that the total number of samples of non-static objects such as vehicles or pedestrians is not less than 300. At the same time, no visible area may be occluded for a long time, i.e. the proportion of frames in which the key road surface area is clearly visible must be not less than 90% of the total data. If the acquired data cannot meet these conditions, another time period must be selected and the acquisition repeated.
(II) Preprocessing
First, a boundary equation set E_b is established and the invalid data are removed from the raw data. The linear relation between point cloud density and scanning distance is then obtained by equidistant sampling of the point cloud data. The static objects and non-static objects are separated in each frame of point cloud data; the non-static objects are detected with a point cloud target detection algorithm to establish the distribution curve of detection confidence against scanning distance, and the scanning distance range in which the confidence is above the threshold is selected as the recognition trigger distance threshold. All static objects are clipped with the recognition trigger distance threshold, and the clipped static objects of multiple frames are superposed and appropriately sampled to establish the static point cloud background B used for multi-frame identification. Finally, the static point cloud background is voxelized, giving B_v. The flow chart of the preprocessing stage is shown in Fig. x.
First, the scanning range of a roadside lidar is generally quite wide: the farthest scanning distance of high-end equipment exceeds one hundred meters, and the effective scanning distance can also reach more than 50 meters. The scanned data therefore necessarily contain many kinds of objects, such as surrounding buildings and greening plants. Compared with elements such as vehicles, pedestrians and the road surface, such objects have very low data value for traffic environment detection and can be removed directly. In the present invention, therefore, point cloud data that lie more than 5 m from the road edge and have little data value are defined as invalid data, generally covering the buildings, slopes and open areas on both sides of the road, whereas the road, non-static objects and the like are defined as valid data. Invalid data can be removed before the subsequent calculations, reducing the amount of data to be processed. The method for removing invalid data proposed by the invention is as follows:
(1) Project the point cloud data onto the horizontal plane, i.e. consider only the X, Y values of the point cloud data, to form a bird's-eye view.
(2) Establish the rejection boundaries manually, which can be done with common point cloud visualization tools such as 3DReshaper. A rejection boundary is a boundary that clearly separates invalid data from valid data; a conservative strategy is recommended in practice, and if there is no obvious road boundary and an area is hard to classify manually as road or non-road, it can be treated as a road area, i.e. as valid data.
(3) Construct boundary equations from points on the boundaries: select suitable points on each boundary chosen in the previous step and fit the plane equation of the boundary using the least squares method or similar (a sketch of this fitting step follows this list). The number of points should generally be not less than 30, with a spacing of not less than 50 cm between adjacent points. If the length of a boundary cannot meet these requirements, merging adjacent boundaries may be considered. For computational reasons, it is recommended that the number of boundary equations does not exceed 6.
(4) Finally, integrate all boundary equations into the boundary equation set E_b and record the rejection direction of each, to be used as the data screening condition in the actual identification process.
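A minimal sketch of step (3), under the assumption that the manually selected boundary points are already available as an (N, 2) bird's-eye-view array; the line parameterization and the keep/reject sign convention are illustrative choices:

    import numpy as np

    def fit_boundary_line(boundary_pts):
        # Least-squares fit of a bird's-eye-view boundary line a*x + b*y + c = 0
        # through manually selected points (>= 30 points, spaced >= 50 cm apart).
        # Fits y = m*x + q and rewrites it as m*x - y + q = 0; a near-vertical
        # boundary would need the roles of x and y swapped.
        x, y = boundary_pts[:, 0], boundary_pts[:, 1]
        A = np.column_stack([x, np.ones_like(x)])
        m, q = np.linalg.lstsq(A, y, rcond=None)[0]
        return np.array([m, -1.0, q])

    def keep_valid(points, line, keep_sign):
        # Keep the points lying on the recorded "valid data" side of the boundary.
        a, b, c = line
        side = a * points[:, 0] + b * points[:, 1] + c
        return points[np.sign(side) == keep_sign]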
It should be understood that, because the installation scenes of roadside sensing units differ, the definition of invalid data may be reasonably biased; for example, for an urban expressway or an elevated road, if there are no objects interfering with the traffic flow on either side of the road, the point cloud data outside the guardrails on both sides can be removed. For such scenes, the following variant invalid-data removal method can be adopted:
If, after the point cloud data are projected onto the horizontal plane, objects such as greening plants and roadside facilities do not intrude into the road space, i.e. the point cloud data of these objects do not cover the road data, the processing method is the same as that proposed by the invention. If, as shown in Fig. 5, a tree crown already covers part of the road surface area, the lower data should first be screened out with a height threshold before projection onto the horizontal plane; the height threshold is any value between the bottom of the crown and the maximum height of the ordinary vehicles passing through the key road surface area, i.e. the crown area is removed while the vehicle data are retained.
For level road sections, i.e. road sections with a longitudinal slope of no more than 3%, the height threshold can be chosen as a fixed value. For steep road sections, i.e. road sections with a longitudinal slope of more than 3%, the height threshold can be distributed stepwise, or a spatial plane equation can be constructed. A stepwise distribution means that, for road sections whose plane coordinates lie within a certain area, the same fixed height threshold is chosen; it takes the following form:
H = h_1 when x_1 ≤ X ≤ x_2 and y_1 ≤ Y ≤ y_2; H = h_2 when x_3 ≤ X ≤ x_4 and y_3 ≤ Y ≤ y_4; and so on,
where X and Y correspond to the X, Y values of the point cloud data; x_1, x_2, x_3, x_4 and y_1, y_2, y_3, y_4 denote the lower and upper area thresholds in the X and Y directions respectively; H denotes the height threshold; and h_1, h_2 denote the height thresholds chosen for the different plane coordinate areas.
The spatial plane equation is fitted from the X, Y, Z coordinates of road surface points and then translated upwards so that the plane satisfies the segmentation condition of the height threshold. It takes the following form:
Ax+By+Cz+D=0
where A, B, C, D are the coefficients of the plane equation, and x, y and z correspond to the X, Y, Z coordinates of the point cloud data respectively. During fitting, the road section data need to be sampled randomly, with not less than 100 sampling points, and the plane equation is fitted using the least squares method. In use, the Z value can be calculated from the X, Y coordinates of a point, and the resulting Z value is the height threshold of the current area.
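A minimal sketch of this plane-equation variant (illustrative; the upward translation `lift` stands in for the clearance between the fitted road plane and the chosen height threshold and is an assumed parameter):

    import numpy as np

    def fit_height_plane(road_pts, lift=2.0):
        # Fit z = a*x + b*y + d to randomly sampled road points (>= 100 points) by
        # least squares, then translate the plane upwards by `lift` meters (an
        # assumed clearance). Returns (A, B, C, D) of A*x + B*y + C*z + D = 0.
        x, y, z = road_pts[:, 0], road_pts[:, 1], road_pts[:, 2]
        A = np.column_stack([x, y, np.ones_like(x)])
        a, b, d = np.linalg.lstsq(A, z, rcond=None)[0]
        return np.array([a, b, -1.0, d + lift])

    def height_threshold(plane, x, y):
        # Height threshold H at plane position (x, y): solve the plane equation for z.
        A, B, C, D = plane
        return -(A * x + B * y + D) / C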
After the above processing is completed, the subsequent processing method is the same as the invalid data eliminating method suggested by the present invention.
The invalid data removal methods applicable to the invention are not limited to the above; other methods with the same function can be used as variant schemes. In addition, the invalid data removal step can also be placed after other steps; it is performed here only as a preliminary step for obtaining the linear relation between point cloud density and scanning distance, in order to reduce the amount of data calculation, so technical schemes that change the order of the steps are likewise regarded as variants of the invention. After the processing of the above steps, the set of valid data D_0' is obtained.
Secondly, because of the limits of the physical acquisition capability of the hardware, point cloud data are denser near the sensor and sparser far from it, i.e. the point cloud density is closely related to the scanning distance. Point cloud target recognition algorithms are in turn closely related to the point cloud density of the target, so objects with high point cloud density are easier to recognize. To improve the accuracy of the subsequent point cloud target identification, the relation between point cloud density and scanning distance therefore needs to be established first. According to the scanning principle of the lidar, the distance between two points on the same loop line is linearly related to the distance of the loop line from the center, from which it is inferred that the point cloud density is linearly related to the scanning distance.
As shown in Fig. 7, sample points are collected at equal intervals on each scanning loop line from inside to outside with a sampling interval of 0.5 meter; with each sample point as the center, the number of points within a radius of 10 cm is recorded as the point cloud density; 30 sample points are selected on each loop line, and the statistical results are filled into the following table.
    Point cloud density | 1st sample point | 2nd sample point | …… | 30th sample point | Mean point cloud density
    1st loop line       |                  |                  |    |                   |
    2nd loop line       |                  |                  |    |                   |
    3rd loop line       |                  |                  |    |                   |
    ……                  |                  |                  |    |                   |
The mean point cloud density of each loop line is computed; the corresponding loop line distance and the mean point cloud density are taken as the x and y values respectively, and least squares fitting is used to establish the linear relation between point cloud density and scanning distance, expressed as:
ρ=k·L
Where ρ is the point cloud density, L is the scan distance, and k is the linear function parameter.
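A minimal sketch (illustrative) of fitting the linear relation ρ = k·L by least squares through the origin from the per-loop-line means in the table above:

    import numpy as np

    def fit_density_distance(loop_distances, loop_densities):
        # Least-squares slope k of rho = k * L through the origin, where
        # loop_distances holds the scanning distance of each loop line and
        # loop_densities the corresponding mean point cloud density.
        L = np.asarray(loop_distances, dtype=float)
        rho = np.asarray(loop_densities, dtype=float)
        return float((L @ rho) / (L @ L))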
After the above steps are completed, each frame of point cloud data is divided by manual extraction into two parts: the static objects D_s and the non-static objects D_ns. Static objects are the road surface and its auxiliary facilities, buildings, roadside greening plants and the like; without considering reconstruction, extension or frequent road maintenance, such objects remain in an unchanged position and state over the long term, and an object whose position and appearance have not changed obviously within one month can be regarded as a static object. The non-static objects are all objects in the point cloud data other than the static objects, and are divided into dynamic objects D_d and short-time static objects D_st. Dynamic objects, such as moving vehicles and walking pedestrians, are in motion when observed; an object that does not belong to the static objects and shows an obvious change in position or appearance within 2 consecutive frames can be regarded as a dynamic object. Short-time static objects, such as temporarily parked vehicles or standing pedestrians, show no change in position or state in the short term, but the possibility of movement at the next moment cannot be excluded.
The static objects D_s are used to extract the static point cloud background B. The static point cloud background is a purely static background space containing no short-time static objects and no dynamic objects; for the vehicle-road cooperative scene addressed by the invention, this is a traffic environment containing no traffic participants such as non-permanently parked vehicles and pedestrians. Its effect is shown in Fig. 4.
The non-static objects D_ns are used to obtain the recognition trigger distance threshold DT. The recognition trigger distance threshold is the sensing distance range within which most point cloud target detection methods perform well. Because point cloud data are sparse far from the sensor and dense near it, a distant non-static object may be described by only one or two scan lines, and such a sparse set of points is difficult for most point cloud target detection algorithms to detect. To meet the detection requirements of most point cloud detection algorithms, a suitable recognition trigger distance threshold therefore needs to be established to represent the trigger distance of the subsequent method. For the point cloud lying beyond the recognition trigger distance threshold, even if a non-static object exists it cannot be detected, or the confidence of the detection result is too low; transmitting such a result to the vehicle end may cause decision errors there. Point cloud data lying beyond the recognition trigger distance threshold are therefore rejected as low-value data.
To obtain the recognition trigger distance threshold DT, the relation between the scanning distance L and the detection confidence P is established. The final result of the invention is provided to the vehicle end, and at present the point cloud target detection algorithms built in by different automatic-driving vehicle manufacturers differ; for practical reasons, the invention therefore selects several common point cloud target detection algorithms as the test algorithms of the preprocessing stage, including VoxelNet, PIXOR and PointRCNN. Specifically:
VoxelNet is a typical voxel-based point cloud processing method. It divides the three-dimensional point cloud into a certain number of voxels; after random sampling and normalization of the points, several voxel feature encoding layers perform local feature extraction on each non-empty voxel to obtain voxel-wise features; the features are then further abstracted by three-dimensional convolution operations, which enlarge the receptive field and learn a geometric spatial representation; finally, a Region Proposal Network performs classification detection and position regression of objects.
PIXOR is a typical projection-based point cloud processing method. The point cloud is projected to obtain a two-dimensional bird's-eye view whose channels are height and reflectivity, and a structurally fine-tuned RetinaNet is then used for object detection and positioning. The overall process is closer to conventional image object detection methods.
PointRCNN is a typical point cloud processing method that works on the raw point cloud structure. The whole framework comprises two stages: the first stage generates 3D proposals in a bottom-up manner, and the second stage refines the proposals in canonical coordinates to obtain the final detection results. Instead of projecting the point cloud into a bird's-eye view or voxels, or working from an RGB image, the stage-1 sub-network divides the point cloud of the whole scene into foreground and background points and directly generates a small number of high-quality three-dimensional proposals from the point cloud in a bottom-up manner. The stage-2 sub-network transforms the pooled points of each proposal into canonical coordinates to better learn local spatial features, and combines these with the global semantic features of each point learned in stage 1 for box refinement and confidence prediction.
These three methods are typical representatives of the three most mainstream categories of point cloud target detection algorithms at present and can well simulate the intelligent perception of an automatic driving vehicle. It should be understood that the three algorithms selected in the present invention cannot fully represent all point cloud target detection algorithms; adopting other point cloud target detection algorithms as the test algorithms of the preprocessing stage should therefore be regarded as one of the variant schemes.
Since the length of a common vehicle is generally between 3.5 meters and 4.5 meters, a plurality of scan lines are generally spanned, which may cause the front point cloud of the vehicle to be dense, the rear point cloud to be sparse, or even be shielded with little point cloud data. Therefore, it is necessary to specify the average point cloud density of the non-stationary object represented by the vehicle.
The invention also adopts a random sampling method to obtain the average point cloud density of the non-static object, but the sampling method is slightly different from the point cloud sampling, and the method is specifically described below. The proportion of the point cloud sampling needs to be established by referring to hardware equipment parameters and the actual total number of the point clouds of the target, and the random sampling method adopted by the proposal of the invention is as follows:
(1) for a non-static object whose total number of points is greater than 3000, sample randomly 3 times, 300 points each time, and finally calculate the average point cloud density over the 3 samplings;
(2) for a non-static object whose total number of points is greater than 1000, sample randomly 3 times, 100 points each time, and finally calculate the average point cloud density over the 3 samplings;
(3) for a non-static object whose total number of points is greater than 500, sample randomly 3 times, 75 points each time, and finally calculate the average point cloud density over the 3 samplings;
(4) for a non-static object whose total number of points is less than 500, sample 100 points randomly and calculate the average point cloud density.
In this sampling method, the point cloud density is calculated in the same way as before: with each sampled point as the center, the number of points within a 10 cm radius is taken as the point cloud density.
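The tiered sampling above can be summarized in a short sketch. This is an illustrative reading of the rules rather than code from the patent: the local density is the number of points within a 10 cm radius of each sampled point, and the tier boundaries (3000/1000/500 points) follow the list above; the brute-force neighbor count is an implementation assumption.

```python
import numpy as np

def local_density(points: np.ndarray, center: np.ndarray, radius: float = 0.10) -> int:
    """Number of points within `radius` metres of `center` (brute force)."""
    return int(np.sum(np.linalg.norm(points - center, axis=1) <= radius))

def average_density(points: np.ndarray) -> float:
    """Average point cloud density of one non-static object, an (N, 3) array."""
    n = len(points)
    if n > 3000:
        rounds, k = 3, 300
    elif n > 1000:
        rounds, k = 3, 100
    elif n > 500:
        rounds, k = 3, 75
    else:
        rounds, k = 1, min(100, n)
    per_round = []
    for _ in range(rounds):
        idx = np.random.choice(n, size=k, replace=False)
        per_round.append(np.mean([local_density(points, points[i]) for i in idx]))
    return float(np.mean(per_round))
```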
In addition to the above sampling method, other sampling methods for obtaining the point cloud density of non-static objects may be considered variants applicable to the present invention. Variant schemes with different sampling proportions may be adopted for different types of objects, as follows:
(1) for a pedestrian among the non-static objects, sample randomly 2 times, 100 points each time (if there are not enough points, use all of them), and finally calculate the average point cloud density over the 2 samplings;
(2) for a non-motor vehicle among the non-static objects, sample randomly 3 times, 100 points each time (if there are not enough points, use all of them), and finally calculate the average point cloud density over the 3 samplings;
(3) for a small car among the non-static objects, sample randomly 3 times, 200 points each time (if there are not enough points, use all of them), and finally calculate the average point cloud density over the 3 samplings;
(4) for a large vehicle among the non-static objects, sample randomly 3 times, 300 points each time (if there are not enough points, use all of them), and finally calculate the average point cloud density over the 3 samplings.
After the average point cloud density of each non-static object is established, each non-static object is input into the different back-end algorithms for detection, and each detection result and its detection confidence P are obtained. As shown in fig. 8, a distribution graph of the scanning distance L and the detection confidence P is drawn, using the following formula:
P_ij = (n_j - n_i)_{p>75%} / (n_j - n_i)
where j and i represent the upper and lower limits of the recognition trigger distance threshold DT, i being the nearest distance threshold and j the farthest; n_i and n_j represent the total numbers of non-static targets within distances i and j respectively; and (n_j - n_i)_{p>75%} denotes the total number of non-static targets in the range [i, j] whose detection confidence is greater than 75%. It should be understood that 75% is only the decision threshold recommended by the present invention; its actual value may be adjusted according to the scene in which the road side sensing unit is installed.
It should be noted that the reason for the lower limit of the recognition trigger distance threshold is that a lidar device generally has a vertical scan angle parameter, i.e. an object that is physically too close to the lidar and lower than its mounting height cannot be scanned. In that case a vehicle may be scanned over only half of its body although it is very close to the sensor, so a lower distance limit guaranteeing that the full vehicle body can be captured needs to be set.
Appropriate values of i and j are selected such that P_ij is greater than 75%, and they serve as the target extraction range in actual recognition. Initial values of i and j can be established by direct observation of the distribution graph; starting from these, the upper and lower limits of the range are repeatedly shifted in 0.5 m offsets, and the widest range is taken as the final values of i and j. Finally, the values of i and j need to be converted into boundary equation form, generally expressed as circle equations, so the recognition trigger distance threshold DT is represented as an annular region bounded by the equations of two circles.
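A compact sketch of this search is given below. It assumes that each non-static target has already been reduced to a (scanning distance, detection confidence) pair; the 75% threshold and the 0.5 m offset follow the text, while the ±5-step sweep around initial bounds i0, j0 is an assumption made for illustration.

```python
import numpy as np

def p_ij(dist: np.ndarray, conf: np.ndarray, i: float, j: float, thr: float = 0.75) -> float:
    """Share of targets in [i, j] whose detection confidence exceeds thr."""
    mask = (dist >= i) & (dist <= j)
    return float((conf[mask] > thr).sum() / mask.sum()) if mask.any() else 0.0

def search_trigger_range(dist, conf, i0, j0, step=0.5, thr=0.75):
    """Shift the bounds in 0.5 m offsets and keep the widest [i, j] with P_ij > thr."""
    best = (i0, j0) if p_ij(dist, conf, i0, j0, thr) > thr else None
    for i in np.arange(max(0.0, i0 - 5 * step), i0 + 5 * step + 1e-9, step):
        for j in np.arange(j0 - 5 * step, j0 + 5 * step + 1e-9, step):
            if j <= i:
                continue
            if p_ij(dist, conf, i, j, thr) > thr and (best is None or j - i > best[1] - best[0]):
                best = (i, j)
    return best   # (i, j) in metres, or None if no range qualifies
```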
After the recognition trigger distance threshold DT is obtained, it is used to cut the aforementioned static object D_s in the same way as invalid data is removed, i.e. the point cloud data outside the trigger distance threshold DT is removed in a manner analogous to linear programming, yielding the static object for recognition D'_s.
Then the static object for recognition needs to be converted into a static point cloud background B. Single-frame data can only reflect the state of the scene at one moment, so multi-frame data must be superposed and integrated into a point cloud background that covers most conditions of the scene. However, because point cloud data is sparse outside and dense inside, simple superposition easily makes the dense places denser while the sparse parts remain relatively sparse. As a result, the outer part may become so sparse that it is treated as noise data, making it difficult for the system to distinguish changes in the point cloud distribution, or the inner part may become so dense that the distribution is overly sensitive, so that even point cloud changes caused by the sensor's own vibration are recognized as object motion. The present invention avoids these problems by adopting a partition-wise superposition method with sampling.
Considering that the working principle of the lidar is rotary scanning, the scanned data can be regarded as annularly distributed. As shown in fig. 6, the point cloud data of each frame is divided into n statistical spaces at gradually increasing intervals from the innermost side to the outermost side. The specific division interval needs to refer to the scanning range and point cloud density parameters of the hardware; the interval division proposed by the invention is as follows:
where r represents the inner ring width, l represents the side length of a grid square, and R represents the distance of the inner ring from the origin.
Starting from the initial frame, the static object for recognition of each subsequent frame is superposed in turn; at every superposition the point cloud density of each statistical space is counted, the point cloud density at this point being calculated as:
ρ = n / S
where ρ is the point cloud density, n is the total number of points contained in the statistical space, S is the horizontal projection area of the statistical space, and r, l and R are as defined above.
If the point cloud density of a certain statistical space is greater than a preset threshold α, the point cloud in that space is randomly downsampled to keep its density below the threshold. The threshold adopted by the invention is 2000 points/m².
It should be understood that the above parameter values are only reference values; the actual values should be established according to the actual performance of the road side sensing unit, the main criteria being:
after the subsequent processing is completed, the point cloud density in every statistical space should be approximately equal; in particular, the point cloud densities of the outermost and innermost statistical spaces need to be checked;
the number of statistical spaces should preferably not exceed 500, otherwise the computation may become excessive, and should not be fewer than 100, otherwise a single statistical space becomes too large, which hinders establishing the point cloud density threshold α and easily produces statistical spaces with the same point cloud density but very different point distributions, to the detriment of subsequent calculations.
All parameters satisfying the above requirements can be used as actual parameters for calculation.
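The partition-wise superposition can be sketched as follows. Since the exact interval formula is given only by reference to fig. 6, this sketch uses rings whose width grows linearly with the ring index; the starting width and growth step are assumptions, while the density cap α = 2000 points/m² follows the threshold quoted above.

```python
import numpy as np

ALPHA = 2000.0        # density cap, points per square metre (from the text)
RING_WIDTH0 = 1.0     # innermost ring width in metres (assumed)
GROWTH = 0.2          # per-ring width increase in metres (assumed)

def ring_edges(max_range: float) -> np.ndarray:
    """Radial edges of the statistical spaces, widths growing outward."""
    edges, r, w = [0.0], 0.0, RING_WIDTH0
    while r < max_range:
        r += w
        edges.append(r)
        w += GROWTH
    return np.asarray(edges)

def stack_frame(background: list, frame: np.ndarray, edges: np.ndarray) -> None:
    """Append one frame's static points to the background, downsampling over-dense rings."""
    d = np.linalg.norm(frame[:, :2], axis=1)                  # horizontal scan distance
    for k in range(len(edges) - 1):
        pts = frame[(d >= edges[k]) & (d < edges[k + 1])]
        area = np.pi * (edges[k + 1] ** 2 - edges[k] ** 2)    # horizontal projection area S
        if len(pts) and len(pts) / area > ALPHA:              # rho = n / S above the cap
            keep = np.random.choice(len(pts), int(ALPHA * area), replace=False)
            pts = pts[keep]
        background.append(pts)
```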
After the superposition and downsampling are completed, the static point cloud background B is obtained. Finally, to make it comparable in later matching, it needs to be voxelized. Because the lidar is limited by its mechanical structure, the laser points emitted in one scanning revolution cannot be guaranteed to fall on the same positions as those of the next revolution; in other words, a point-by-point position comparison is not only tedious and complex but also meaningless. For this purpose, the invention introduces voxel features as the basis for comparison.
A voxel is short for volume pixel; conceptually it resembles a pixel in two dimensions and is the smallest unit of three-dimensional space. Voxelization expresses a three-dimensional model uniformly with voxels and can be understood as the generalization of two-dimensional pixelation to three-dimensional space. The simplest voxelization is binary voxelization, in which a voxel value is either 0 or 1. Describing three-dimensional scene data with voxels makes it possible to express complex three-dimensional scenes and to describe vertically overlapping objects correctly. Fig. 9 is an example of point cloud voxelization; it can be seen that the voxel representation has a smaller data volume with little loss of semantic information.
The voxel size, i.e. the cube side length v, needs to be established according to the density of the acquired point cloud data: if it is too large, semantic information is easily lost; if it is too small, the data-reduction benefit of voxelization is not realized. Testing shows that a voxel size of 1/20 to 1/40 of the statistical space size used in the first step generally works well. The voxel size adopted by the invention is 30 cm × 30 cm × 30 cm, but the actual value may be changed according to the actual usage effect.
The present invention proposes to calculate the voxel to which any point in the point cloud data belongs by the following formula:
r = ⌊(x - x_0) / v⌋, c = ⌊(y - y_0) / v⌋, h = ⌊(z - z_0) / v⌋
where x, y and z are the coordinates of any point in the point cloud data; x_0, y_0 and z_0 are the origin coordinates of the point cloud data, which are not simply 0 because multiple radars may be networked so that the radar center is not at (0, 0, 0); v is the voxel side length; and r, c, h are the coordinate indices of the voxel.
The number of points contained in each voxel is then counted according to the voxel coordinate index ordering; if the number of points is smaller than 5, the voxel is treated as empty and deleted from the voxel list, and finally all non-empty voxels are retained.
In addition to the above method, other voxelization approaches are applicable to the present invention, such as using the corresponding functions in the PCL library, and should be considered variants of the method used in the present invention.
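A minimal numpy sketch of this voxelization is given below, assuming the index formula above and the 5-point emptiness criterion; the dictionary layout keyed by (r, c, h) is an implementation assumption, and the PCL route mentioned above would be an equivalent alternative.

```python
import numpy as np

def voxelize(points: np.ndarray, origin: np.ndarray, v: float = 0.30, min_points: int = 5):
    """Return {(r, c, h): point array} for all non-empty voxels."""
    idx = np.floor((points - origin) / v).astype(int)   # (r, c, h) index per point
    voxels = {}
    for key, p in zip(map(tuple, idx), points):
        voxels.setdefault(key, []).append(p)
    # voxels with fewer than min_points points are treated as empty and dropped
    return {k: np.asarray(ps) for k, ps in voxels.items() if len(ps) >= min_points}
```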
(III) static object identification
When a certain frame of point cloud data is actually identified, invalid data is first removed from it, the recognition area is separated using the recognition trigger threshold, and the recognition area is voxelized into a three-dimensional matrix. A sliding window method is then used to compare the result with the static point cloud background after the same voxelization; if the rate of change between a continuous area in the sliding window and the background value is larger than the decision threshold, the area is marked as a non-static area, otherwise it is marked as a static area.
The invention borrows the idea of foreground-background separation algorithms from image processing: based on the static point cloud background obtained in the first step, a sliding window method is used to compare it with the point cloud distribution of the current frame; if the distribution of some area is found to change excessively, a newly added object that does not belong to the static objects exists in that area, and non-static objects are thereby extracted.
First, a road side sensing unit is installed in the scene to be detected, and the preprocessing flow described above is used to complete the early-stage data processing, yielding the voxelized static point cloud background B_v, the recognition trigger distance threshold DT and the boundary equation set E_b.
The road side sensing unit is then put into actual use. The data collected during system start-up is generally unstable, so the method disclosed by the invention usually starts identifying dynamic and static targets only after waiting 3-5 minutes. The first frame of point cloud data at the formal start is recorded as D_1, and the subsequent frames as D_2, D_3, ..., D_i.
Thereafter, the boundary equation set E_b obtained in preprocessing is used to cut each frame of data a first time, yielding the valid data D'_1, D'_2, ..., D'_i. Each frame is then cut a second time using the recognition trigger distance threshold DT to obtain the data for recognition D''_1, D''_2, ..., D''_i. Finally, the recognition data is voxelized to obtain the voxelized data D''_v1, D''_v2, ..., D''_vi.
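The two cutting steps can be sketched as a single filter over each frame. This is a hedged illustration: the boundary planes are applied as half-space inequalities a·x + b·y + c·z + d ≥ 0 (the sign convention and the plane-tuple layout are assumptions), and the trigger threshold DT is applied as an annulus (r_min, r_max) of horizontal distance; voxelization then follows as sketched earlier.

```python
import numpy as np

def cut_frame(frame: np.ndarray, planes: list, dt: tuple) -> np.ndarray:
    """First cut by boundary planes E_b, then by the trigger annulus DT = (r_min, r_max)."""
    keep = np.ones(len(frame), dtype=bool)
    for a, b, c, d in planes:                         # invalid-data rejection
        keep &= (frame @ np.array([a, b, c]) + d) >= 0.0
    r = np.linalg.norm(frame[:, :2], axis=1)          # horizontal distance to the lidar
    keep &= (r >= dt[0]) & (r <= dt[1])               # second cut with DT
    return frame[keep]
```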
Comparing whether individual voxels exist is meaningless and cannot reflect the semantic information of an object, so the voxelized data of the current frame and the voxelized static background are matched from inside to outside with reference to the sliding window method and background difference method of image processing. The sliding window adopted by the invention is 5 voxels per side, i.e. one sliding window contains at most 125 voxels. It should be understood that these parameter values are only reference values; the actual values should be established according to the actual run-time performance of the method.
Since environmental vibration may cause fluctuations in the scanning result, the point cloud distributions of a static object differ slightly between consecutive frames, so a decision threshold needs to be set to avoid false alarms. If the voxel distribution difference inside a window exceeds the decision threshold β, all voxels contained in the sliding window are marked as a non-static area A_ns; otherwise they are marked as a static area A_s. The comparison flow proposed by the present invention is:
(1) if the highest voxel Z-axis value in the sliding window, compared with the same area in the static point cloud background, changes by no more than 20%, the area where the window is located is marked as a static area; otherwise the next comparison is carried out;
(2) if the total number of voxels in the sliding window, compared with the same area in the static point cloud background, changes by no more than 20%, the area where the window is located is marked as a static area; otherwise the next comparison is carried out;
(3) the centroid position of the voxels in the sliding window is calculated; if, compared with the same area in the static point cloud background, the position deviation is no more than 2 voxel side lengths, the area where the window is located is marked as a static area, otherwise it is marked as a non-static area. The centroid is calculated as:
x = (Σ x_i)/n, y = (Σ y_i)/n, z = (Σ z_i)/n
where x, y and z denote the coordinates of the centroid, x_i, y_i and z_i are the coordinate indices of each voxel, and n is the total number of voxels contained in the sliding window.
(4) when a non-static region is identified again, if it is adjacent to the already known non-static area A_ns-1, it is likewise marked as A_ns-1; otherwise it is marked as non-static area A_ns-2, and so on.
After the comparison process is finished, the union of all static areas is the static object D_s in the final result, and the remaining non-static areas are reclassified by the dynamic object identification described below.
It should be understood that the above comparison process is only one of the proposed schemes; other methods for comparing the point cloud in the sliding window with the static point cloud background are applicable to the present invention and should be considered variants of the scheme.
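For illustration, the three-step window comparison can be written over sets of occupied voxel indices as below. The 5×5×5 window and the 20% / 2-voxel thresholds follow the text; treating centroids in voxel-index units and the handling of windows that are empty on one side are assumptions.

```python
import numpy as np

def window_voxels(voxels: set, origin: tuple, w: int = 5) -> np.ndarray:
    """Occupied voxel indices falling inside the w*w*w window starting at `origin`."""
    r0, c0, h0 = origin
    return np.array([v for v in voxels
                     if r0 <= v[0] < r0 + w and c0 <= v[1] < c0 + w and h0 <= v[2] < h0 + w])

def classify_window(frame_vox: set, bg_vox: set, origin: tuple, w: int = 5) -> str:
    cur, ref = window_voxels(frame_vox, origin, w), window_voxels(bg_vox, origin, w)
    if len(cur) == 0 or len(ref) == 0:
        return "static" if len(cur) == len(ref) else "non-static"
    rate = lambda a, b: abs(a - b) / max(abs(b), 1e-6)
    if rate(cur[:, 2].max(), ref[:, 2].max()) <= 0.20:       # (1) Z-axis highest value
        return "static"
    if rate(len(cur), len(ref)) <= 0.20:                     # (2) total voxel count
        return "static"
    shift = np.linalg.norm(cur.mean(axis=0) - ref.mean(axis=0))
    return "static" if shift <= 2.0 else "non-static"        # (3) centroid, voxel units
```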
For the edge region at the trigger distance threshold, a situation may occur in which only half of a vehicle is included; for example, a vehicle entering the lidar scanning range from outside will still be identified because the point cloud distribution of that region changes, but the point cloud target detection algorithm may fail to detect it, or detect it wrongly, because the data is incomplete. Since the recognition trigger distance threshold is generally smaller than the actual scanning range of the lidar, such an incomplete vehicle is in fact complete in the raw data.
Therefore, the invention adds a recognition redundancy value on top of the recognition trigger distance threshold: an annular standby recognition area with a width of 1.5 m is extended outward from the boundary where the recognition trigger distance threshold lies and, as shown in fig. 10, is divided into sub-areas by scanning angle (10 degrees). If a dynamic area is identified in the edge area, the outer standby sub-area nearest to it is added to the dynamic area. The nearest outer standby sub-area can be found directly from the distance between each standby sub-area and the centroid of the dynamic area.
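An illustrative sketch of this edge supplement follows. The 1.5 m standby ring split into 10-degree sectors and the centroid-distance criterion come from the text; the dictionary layout of the sectors is an assumption.

```python
import numpy as np

def nearest_spare_sector(dyn_centroid: np.ndarray, spare_sectors: dict):
    """spare_sectors: {sector_id: (N, 3) points of that 10-degree slice of the 1.5 m ring}.
    Returns the id of the sector whose centroid is closest to the dynamic area's centroid."""
    best_id, best_d = None, np.inf
    for sid, pts in spare_sectors.items():
        if len(pts) == 0:
            continue
        d = np.linalg.norm(pts.mean(axis=0) - dyn_centroid)
        if d < best_d:
            best_id, best_d = sid, d
    return best_id
```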
(IV) dynamic object identification
In the above identification process, all areas to be identified in a certain frame of data are recorded at a certain frequency as temporary static areas A_st. When subsequent frames are identified, the areas to be identified are matched with the temporary static areas; if features such as the size and position of the two areas are found to be unchanged, they can be regarded as short-time static objects, otherwise they are dynamic objects. Finally the complete recognition area is traversed, and the identification results are distributed to each vehicle at different frequencies.
Based on the above analysis, a short-time static object carries a risk greater than that of a static object but smaller than that of a dynamic object, and is a secondary identification target of the present invention, so its transmission frequency can lie between the two: static objects are not transmitted, or are transmitted at a low, minute-level frequency; dynamic data is transmitted in real time at high frequency; and short-time static objects are transmitted at a medium, second-level frequency.
The recognition of short-time static objects also differs slightly from that of non-static objects. Since the position of a short-time static object does not move, the difference in point cloud distribution between consecutive frames is almost negligible; in other words, if two objects belonging to the non-static category show almost no change of features between consecutive frames, they can be considered the same object. Based on this idea, the following method is adopted for identification.
In the actual identification process, all non-static areas A_ns of a recognition frame are recorded at fixed frequency intervals as temporary static areas A_st for secondary matching. In particular, the 1st frame of data serves as the starting frame; since its non-static areas have no temporary static areas to compare against, all of them are recorded as temporary static areas, but in the output they are all treated as dynamic objects. The fixed frequency interval may generally be 1/2 to 1/5 of the lidar acquisition frequency; the present invention selects every 5 frames.
After all non-static areas of the frame to be identified have been extracted in the previous step, each individual point cloud space is taken as an object to be matched, since the non-static areas are clearly discontinuous point cloud data. A correspondence table can be established during the preceding matching step, or the areas can be recorded together and then clustered into sub-areas using the Euclidean distance; the former is recommended in the present invention. The sub-areas of the two sets are denoted A_ns-i and A_st-j respectively.
The non-static areas A_ns and the temporary static areas A_st are then compared sub-area by sub-area; the invention proposes to compare the individual sub-areas as follows:
(1) the sub-areas of the non-static areas A_ns and of the temporary static areas A_st are each sorted by the scanning distance of their centroids, and the centroid positions of the sub-areas are compared in turn; if a pair of sub-areas A_ns-i, A_st-j satisfies that their centroid distance is no more than 0.3 m and no other matching candidate exists within 1 m, proceed to the next step; otherwise all non-static sub-areas that do not satisfy the condition are marked as dynamic areas;
(2) the horizontal projection sizes of A_ns-i and A_st-j are compared; if both areas lie in the edge region and the rate of change is within 15%, or both lie in the interior region and the rate of change is below 5%, proceed to the next step; otherwise all non-static sub-areas that do not satisfy the condition are marked as dynamic areas;
(3) the highest voxel Z-axis values of A_ns-i and A_st-j are compared; if both areas lie in the edge region and the rate of change is within 15%, or both lie in the interior region and the rate of change is below 5%, then A_ns-i can be regarded as A_st-j, i.e. A_ns-i and A_st-j characterize the same object, which is marked as a short-time static object; otherwise all non-static sub-areas that do not satisfy the condition are still marked as dynamic areas;
(4) since dynamic objects are in fact also recorded whenever the temporary static areas are recorded, if the frame-to-frame comparison shows that some sub-area A_st-j of the temporary static areas cannot be matched by any sub-area of the non-static areas of the later frame, then A_st-j is not a short-time static object and can be removed from the temporary static areas, reducing the amount of comparison thereafter.
Finally, after the comparison is completed, the union of all dynamic areas is the dynamic object D_d; after the dynamic objects are removed, the remaining part of the temporary static areas is the short-time static object D_st, which at the same time serves as the temporary static area A_st for the next comparison.
It should be understood that the above comparison procedure is only one of the proposed schemes of the present invention; other methods for comparing the sub-areas of the non-static areas with the temporary static areas are applicable to the present invention and should be considered variants of the scheme.
When the temporary static areas are recorded, a counter is attached to them; if a short-time static object still exists after a comparison, the counter value is increased by 1. The system can manually set the transmission frequency of short-time static objects; for example, if the counter threshold is set to 3, the transmission frequency of a short-time static object is 1/3 of that of a dynamic object.
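The sub-area matching rules above can be sketched as follows. Each region is summarized by its centroid, horizontal projection size, highest Z value and whether it lies in the edge area; the 0.3 m / 1 m / 15% / 5% thresholds follow the text, while the Region record layout is an assumption made for illustration.

```python
import numpy as np
from dataclasses import dataclass
from typing import Optional

@dataclass
class Region:
    centroid: np.ndarray   # (3,) metres
    area: float            # horizontal projection size
    z_max: float           # highest voxel Z-axis value
    is_edge: bool          # lies in the edge (redundancy) area

def _rate(a: float, b: float) -> float:
    return abs(a - b) / max(abs(b), 1e-6)

def match(ns: Region, temp_static: list) -> Optional[Region]:
    """Return the temporary static sub-area matching `ns`, or None (=> dynamic area)."""
    dists = [np.linalg.norm(ns.centroid - c.centroid) for c in temp_static]
    close = [i for i, d in enumerate(dists) if d <= 0.3]
    nearby = [i for i, d in enumerate(dists) if d <= 1.0]
    if len(close) != 1 or len(nearby) != 1:        # (1) unique candidate within 0.3 m / 1 m
        return None
    st = temp_static[close[0]]
    tol = 0.15 if (ns.is_edge and st.is_edge) else 0.05
    if _rate(ns.area, st.area) > tol:              # (2) horizontal projection size
        return None
    if _rate(ns.z_max, st.z_max) > tol:            # (3) Z-axis highest value
        return None
    return st                                      # short-time static object
```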
Brief description of the drawings
FIG. 1 is a flow chart of a preprocessing stage
FIG. 2 recognition phase flow chart
FIG. 3 bad data sample example and available data sample example
FIG. 4 static point cloud background data visualization example
FIG. 5 road side greenbelt intrusion road space data visualization example
FIG. 6 statistical space division schematic
FIG. 7 is a schematic diagram of a point cloud sampling method
FIG. 8 distribution graph of distance L and confidence P
FIG. 9 example of point cloud voxelization
FIG. 10 edge area supplement illustration
FIG. 11 test case single frame data visualization illustration
Detailed Description
The road side sensing unit is arranged according to the patent disclosure. This case adopts a pole-mounted installation: the lidar is installed at a height of about 5 meters, and the scanning range covers a section of two-way, two-lane road together with surrounding buildings, greening trees and other objects. The farthest scanning distance of the actually identified data is about 120 meters, the data acquisition frequency is 10 Hz, and each frame contains more than one hundred thousand points. A visualization is shown in fig. 11.
First, data is collected and processed according to the first step. Because the whole test took place on sunny days, lidar scans of rainy weather could not be acquired, so the corresponding point cloud data was obtained instead by large-scale sprinkling of water on the road surface. Frames were acquired in turn at around 8:00 in the morning, around 13:00 at noon and around 21:00 at night, about 600 frames each time; to avoid contingencies, data was collected on 3 consecutive days. At night the experimental vehicle was driven into the scanning area with its high beam on to acquire about 250 frames of point cloud data under strong light irradiation. The total number of data frames was close to 6000; after manual screening, about 2000 frames of point cloud data were obtained for extracting the static point cloud background.
The boundary equation set is established manually with the help of a common point cloud data visualization tool; in this case a certain domestic point cloud processing software is selected as the visualization tool. Since the selected example road section has an obvious curb as the road boundary, the road area and the non-road area can be clearly divided. Point cloud sampling is performed in the curb region by manually sampling points along the extension direction of the curb at 50 cm intervals, giving a point cloud sample for fitting the road boundary. Based on the sampled points, the plane equations of the road boundary are fitted using the least squares method or a similar approach; the plane equations of the left and right road boundaries are finally obtained, the rejection direction is recorded, and they serve as the invalid-data rejection boundary, i.e. the first data screening condition in the actual identification process.
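A small sketch of such a boundary fit is given below. The patent only states that a least squares method is used; the total-least-squares plane fit via SVD shown here is one reasonable reading, not necessarily the exact procedure used in this case.

```python
import numpy as np

def fit_plane(points: np.ndarray):
    """Best-fit plane a*x + b*y + c*z + d = 0 through the sampled curb points (N, 3)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)   # total least squares
    normal = vt[-1]                               # direction of least variance
    d = -float(normal @ centroid)
    return float(normal[0]), float(normal[1]), float(normal[2]), d
```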
Next, taking 0.5 m as the sampling interval, sample points are collected at equal intervals on every scanning loop from inside to outside; with each sample point as the center, the number of points within a 10 cm radius is recorded as the point cloud density. On average 30 sample points are selected on each loop; part of the sampling results for this case are listed in the following table.
Point cloud density   Sample 1   Sample 2   ...   Sample 30   Mean density
Loop 1                28         23         ...   26          27
Loop 2                25         24         ...   27          26
Loop 3                24         25         ...   23          24
...
Loop 29               7          8          ...   6           9
Loop 30               9          6          ...   7           9
Loop 31               9          7          ...   8           8
...
Loop 58               3          2          ...   2           2
Loop 59               2          2          ...   1           1
Loop 60               2          0          ...   1           1
The mean point cloud density of each loop is calculated; taking the mean point cloud density and the corresponding loop distance as the x and y values respectively, the linear relation between point cloud density and scanning distance is established by least squares fitting. The result for this example is expressed as:
ρ = 0.97·L
where ρ is the point cloud density, L is the scanning distance, and 0.97 is the fitted linear coefficient.
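As a hedged illustration of this step, the fit can be done with an ordinary least-squares line over the per-loop values; the quoted single-coefficient form ρ = 0.97·L would correspond to the special case of a fit through the origin, and the variable ordering here is an assumption.

```python
import numpy as np

def fit_density_distance(loop_distance: np.ndarray, loop_density: np.ndarray):
    """Least-squares linear fit of per-loop mean density against scan distance."""
    slope, intercept = np.polyfit(loop_distance, loop_density, deg=1)
    return float(slope), float(intercept)

def fit_through_origin(loop_distance: np.ndarray, loop_density: np.ndarray) -> float:
    """Single-coefficient fit rho = k * L, matching the form quoted above."""
    return float(np.sum(loop_distance * loop_density) / np.sum(loop_distance ** 2))
```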
Then, based on manual means, the static objects and non-static objects in each frame of data are separated, the static objects being used to obtain the static point cloud background. Unlike the general description of the invention, the point cloud target detection algorithm adopted in this case only needs the original point cloud data as input and does not require the non-static objects to be extracted separately for detection. Every point cloud target detection algorithm should be trained to a well-performing model before use; an accuracy above 85% can be regarded as a good recognition effect. When point cloud sampling is performed on a detected target, the labeling box output by the algorithm can be used as the extraction boundary, and all points inside the visual labeling box belong to the detected target.
The average point cloud density is obtained by the random sampling method. The sampling proportion is established with reference to the parameters of the selected lidar and the actual total number of points of the target; the scheme adopted in this case is:
for a non-static object whose total number of points is greater than 3000, sample randomly 3 times, 300 points each time, and finally calculate the average point cloud density over the 3 samplings;
for a non-static object whose total number of points is greater than 1000, sample randomly 3 times, 100 points each time, and finally calculate the average point cloud density over the 3 samplings;
for a non-static object whose total number of points is greater than 500, sample randomly 3 times, 75 points each time, and finally calculate the average point cloud density over the 3 samplings;
for a non-static object whose total number of points is less than 500, sample 100 points randomly and calculate the average point cloud density.
In this sampling method, the point cloud density is calculated in the same way as before: with each sampled point as the center, the number of points within a 10 cm radius is taken as the point cloud density.
After the average point cloud density of each non-static object is established, each non-static object is input into the back-end algorithm for detection, and each detection result and its detection confidence P are obtained. A distribution graph of the scanning distance L and the detection confidence P is drawn, using the following formula:
P_ij = (n_j - n_i)_{p>75%} / (n_j - n_i)
where j and i represent the upper and lower limits of the recognition trigger distance threshold, i being the nearest distance threshold and j the farthest; n_i and n_j represent the total numbers of non-static targets within distances i and j respectively; and (n_j - n_i)_{p>75%} represents the total number of non-static targets in the range [i, j] whose confidence is greater than 75%.
Appropriate values of i and j are selected such that P_ij is greater than 75%, and they serve as the non-static object extraction range in actual recognition. The values of i and j selected here are 3 and 45, the corresponding non-static object extraction range running from 4 m to 25 m from the lidar center.
For each frame of point cloud data, with reference to the scanning range and point cloud density parameters of the lidar, the data is divided into 93 statistical spaces at gradually increasing intervals from the innermost side outward. The interval division used in this case is:
where r represents the inner ring width, l represents the side length of a grid square, and R represents the distance of the inner ring from the origin.
Starting from the initial frame, the point cloud data of each subsequent frame is superposed in turn, and the point cloud density of every statistical space is checked at each superposition; the point cloud density is calculated as:
ρ = n / S
where ρ is the point cloud density, n is the total number of points contained in the statistical space, S is the area of the statistical space, r is the inner ring width, l is the side length of a grid square, and R is the distance of the inner ring from the origin.
If the point cloud density of a certain statistical space is greater than the preset threshold of 2000 points/m², the point cloud in that space is randomly downsampled to keep its density, and the desired static point cloud background B is finally obtained.
To facilitate subsequent calculation, the static point cloud background is voxelized. The voxel size adopted is 30 cm × 30 cm × 30 cm. The voxel to which any point in the point cloud data belongs is calculated by the following formula:
r = ⌊(x - x_0) / v⌋, c = ⌊(y - y_0) / v⌋, h = ⌊(z - z_0) / v⌋
where x, y and z are the coordinates of any point in the point cloud data; x_0, y_0 and z_0 are the origin coordinates of the point cloud data, which are not simply 0 because multiple radars may be networked so that the radar center is not at (0, 0, 0); v is the voxel side length; and r, c, h are the coordinate indices of the voxel.
The number of points contained in each voxel is counted according to the voxel coordinate index ordering; if the number of points is smaller than 5, the voxel is treated as empty and deleted from the voxel list. Finally all non-empty voxels are retained, yielding the voxelized static point cloud background B_v.
Third, the newly acquired data is voxelized and the non-static areas are screened out using the sliding window method. First, invalid data rejection, recognition-area separation and voxelization are applied in turn to each frame of point cloud data, in the same way as in the static point cloud background processing; after the two rounds of data screening, each frame of point cloud data contains on average about 15000 voxels.
Using the sliding window method and background difference method from image processing, the voxelized data of the current frame is matched with the voxelized static background from inside to outside. The sliding window adopted in this case is 5 voxels per side, i.e. one sliding window contains at most 125 voxels.
Because environmental vibration may cause fluctuations in the scanning result, the point cloud distributions of a static object differ slightly between consecutive frames, so a trigger threshold needs to be set to avoid false alarms. If the voxel distribution difference inside a window exceeds the fixed threshold, all voxels contained in the sliding window are marked as a non-static area A_ns; otherwise they are marked as a static area A_s. The specific comparison process is:
(1) if the highest voxel Z-axis value in the sliding window, compared with the same area in the static point cloud background, changes by no more than 20%, the area where the window is located is marked as a static area; otherwise the next comparison is carried out;
(2) if the total number of voxels in the sliding window, compared with the same area in the static point cloud background, changes by no more than 20%, the area where the window is located is marked as a static area; otherwise the next comparison is carried out;
(3) the centroid position of the voxels in the sliding window is calculated; if, compared with the same area in the static point cloud background, the position deviation is no more than 2 voxel side lengths, the area where the window is located is marked as a static area, otherwise it is marked as a non-static area. The centroid is calculated as:
x = (Σ x_i)/n, y = (Σ y_i)/n, z = (Σ z_i)/n
where x, y and z denote the coordinates of the centroid, x_i, y_i and z_i are the coordinate indices of each voxel, and n is the total number of voxels contained in the sliding window.
When a non-static region is identified again, if it is adjacent to the already known non-static area A_ns-1, it is likewise marked as A_ns-1; otherwise it is marked as non-static area A_ns-2, and so on.
Finally, the point cloud data of all static areas is extracted as the static objects. Since a static object is continuous point cloud data, its recognition rate is difficult to evaluate directly, so it is converted into the non-static object recognition rate for comparison. In this case, the non-static object recognition rate of the interior area (horizontal distance from the lidar center smaller than 23 m) reaches more than 97%; in the edge area (horizontal distance greater than 23 m and smaller than 25 m) the recognition rate reaches more than 85%, because the truncated parts of vehicles crossing the boundary differ in size; the average non-static object recognition rate is about 92%.
The experimental vehicle is used to simulate roadside parking behavior in order to test the method of step four. Every 5 frames of data, all non-static areas A_ns in the detection frame are recorded as temporary static areas A_st for secondary matching. The recorded features include the horizontal projection size, centroid position and highest Z-axis value of each sub-area of the non-static areas A_ns.
After all non-static areas of the frame to be identified have been extracted in the third step, its sub-areas are compared in turn with all sub-areas recorded in the temporary static areas. The features and order of the comparison are as follows:
(1) the sub-areas of the non-static areas A_ns and of the temporary static areas A_st are each sorted by the scanning distance of their centroids, and the centroid positions of the sub-areas are compared in turn; if a pair of sub-areas A_ns-i, A_st-j satisfies that their centroid distance is no more than 0.3 m and no other matching candidate exists within 1 m, proceed to the next step; otherwise all non-static sub-areas that do not satisfy the condition are marked as dynamic areas;
(2) the horizontal projection sizes of A_ns-i and A_st-j are compared; if both areas lie in the edge region and the rate of change is within 15%, or both lie in the interior region and the rate of change is below 5%, proceed to the next step; otherwise all non-static sub-areas that do not satisfy the condition are marked as dynamic areas;
(3) the highest voxel Z-axis values of A_ns-i and A_st-j are compared; if both areas lie in the edge region and the rate of change is within 15%, or both lie in the interior region and the rate of change is below 5%, then A_ns-i can be regarded as A_st-j, i.e. A_ns-i and A_st-j characterize the same object, which is marked as a short-time static object; otherwise all non-static sub-areas that do not satisfy the condition are still marked as dynamic areas;
(4) since dynamic objects are in fact also recorded whenever the temporary static areas are recorded, if the frame-to-frame comparison shows that some sub-area A_st-j of the temporary static areas cannot be matched by any sub-area of the non-static areas of the later frame, then A_st-j is not a short-time static object and can be removed from the temporary static areas, reducing the amount of comparison thereafter.
Finally, after the comparison is completed, the union of all dynamic areas is the dynamic object D_d; after the dynamic objects are removed, the remaining part of the temporary static areas is the short-time static object D_st, which at the same time serves as the temporary static area A_st for the next comparison.

Claims (11)

  1. A static and non-static object point cloud identification method based on a road side sensing unit comprises the following steps:
    data acquisition (one)
    Constructing a roadside laser radar sensing scene facing the vehicle-road cooperative environment, and collecting the original point cloud data D_0 for preprocessing;
    (II) pretreatment
    2.1) First establish the boundary equation set E_b, and remove invalid data from the raw data D_0 to obtain valid point cloud data;
    2.2 Equidistant sampling is carried out on the effective point cloud data, and a linear relation between the point cloud density and the scanning distance is established;
    2.3) Identify the static object D_s in each frame of valid point cloud data, and establish the static point cloud background B;
    2.4) Perform a voxelization operation on the static point cloud background to obtain the voxelized static point cloud background B_v;
    (III) static object identification
    3.1 Separating the identification area from the effective point cloud data by utilizing an identification triggering threshold value;
    3.2) Voxelize the identification area into a three-dimensional matrix and compare it with the voxelized static point cloud background B_v using a sliding window method; if the rate of change between a continuous region in the sliding window and the region at the same position in the voxelized static point cloud background B_v is larger than the static area decision threshold, the continuous region is marked as a non-static area, otherwise it is marked as a static area; the union of all static areas is the static object in the valid point cloud data.
  2. The method of claim 1, further comprising the step of:
    (IV) non-static object identification
    4.1) In the static object identification process, all static areas in a certain frame of voxelized point cloud data are recorded at fixed frequency intervals as temporary static areas A_st;
    4.2) When a subsequent frame is identified, the non-static areas of the voxelized point cloud data to be identified are matched with the temporary static areas A_st; if the rate of change of the size and position of the two areas is smaller than the short-time static object decision threshold, the non-static area of the voxelized point cloud data to be identified is regarded as a short-time static object, otherwise it is regarded as a dynamic object;
    4.3 Repeating 4.1) and 4.2) until the complete valid point cloud data is traversed.
  3. The method of claim 1, wherein the raw point cloud data collected comprises static objects and non-static objects.
  4. The method of claim 1, wherein the static objects include pavement and its attendant facilities, buildings and roadside greening plants; the non-stationary object includes pedestrians, non-motor vehicles, and/or vehicles.
  5. The method of claim 1, wherein the method for establishing the linear relationship between the point cloud density and the scanning distance comprises:
    and taking 0.5 meter as a sampling interval, collecting sample points at equal intervals from inside to outside on each scanning loop, recording points within a radius of 10cm with each sampling point as a center as point cloud density, and counting the average point cloud density of each loop, so that the linear relation between the point cloud density and the scanning distance can be established.
  6. The method of claim 1, wherein the static point cloud background B is established as follows:
    2.3.1 Detecting a non-static object by using a point cloud target detection algorithm to establish a distribution curve of detection confidence and scanning distance, and selecting a scanning distance range with the confidence higher than a threshold value as a recognition trigger distance threshold value;
    2.3.2 Cutting all static objects by using the recognition trigger distance threshold value, and establishing a static point cloud background B by overlapping and properly sampling the cut static objects for multi-frame recognition.
  7. The method of claim 6, wherein the method for establishing the recognition trigger distance threshold DT is:
    extracting various non-static objects, performing point cloud sampling on each non-static object according to a certain proportion, counting the average point cloud density of each non-static object, and determining the corresponding scanning distance L;
    detecting each detection target by using a point cloud target detection algorithm to obtain a detection result and a detection confidence coefficient P;
    and drawing a distribution curve graph of the distance L and the confidence coefficient P, and utilizing the following formula:
    P_ij = (n_j - n_i)_{p>75%} / (n_j - n_i)
    wherein n_i and n_j represent the total numbers of sampled targets within distances i and j respectively, and (n_j - n_i)_{p>75%} represents the total number of sampled targets whose confidence is greater than 75% in the range [i, j]; with 0.5 m as the sampling interval for the upper and lower limits, appropriate values of i and j are selected such that P_ij is greater than 75%, and the boundary equations converted from the values of i and j constitute the trigger distance threshold DT.
  8. The method of claim 6, wherein the voxelized static point cloud background B_v is constructed as follows:
    based on a manual extraction method, separating static objects from non-static objects in the effective point cloud data of each frame, and separating an identification area for the static objects by utilizing an identification trigger distance threshold DT;
    then dividing the point cloud coordinate system into n statistical spaces at gradually increasing intervals from the origin outward; superposing the valid point cloud data of each subsequent frame in turn from the initial frame; detecting the point cloud density of each statistical space at every superposition, and randomly sampling the valid point cloud data in a space to keep its density if the point cloud density is larger than the threshold α;
    finally obtaining a static point cloud background B with moderate point cloud density;
    finally, performing a voxelization operation on the static point cloud background to obtain the voxelized static point cloud background B_v.
  9. The method according to claim 1, wherein the marking method of the static area and the non-static area is as follows:
    firstly, the point cloud data frame to be identified is cut using the boundary equation set E_b and the recognition trigger distance threshold DT, and is voxelized at a density of 1/20 to 1/40 of the statistical space size, obtaining the voxelized point cloud data to be recognized D_v';
    matching a certain continuous region in the voxelized point cloud data to be identified with a region at the same position in the voxelized static point cloud background by adopting a sliding window and background difference method from outside to inside;
    if the difference in the rate of change between the two exceeds the static area decision threshold, it is marked as a non-static area A_ns, otherwise as a static area A_s;
    when a dynamic region is detected again, if it is adjacent to the known non-static area A_ns-1, it is likewise marked as A_ns-1, otherwise it is marked as non-static area A_ns-2, and so on.
  10. The method of claim 7, wherein an annular standby identification area with a width of 1.5 m is extended outside the edge area and is divided into a plurality of sub-areas by scanning angle; if a non-static area is detected in the edge area, the annular standby identification area within the scanning angle range of the non-static area is merged into the non-static area.
  11. The method of claim 2, wherein, in the identification of short-time static objects, the non-static areas A_ns are recorded at fixed frequency intervals as temporary static areas A_st for secondary matching; after all non-static areas have been extracted from the voxelized point cloud data to be identified by the sliding window and background difference method, the sub-areas belonging to the non-static areas to be identified are compared in turn with the sub-areas in the temporary static areas; if two sub-areas A_ns-i, A_st-j exist such that the difference of position and morphological features between them is smaller than the decision threshold, they are regarded as the same object and marked as a short-time static object; otherwise as a dynamic object.
CN202280026656.3A 2021-01-01 2022-04-01 Static and non-static object point cloud identification method based on road side sensing unit Pending CN117836667A (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
CN202110000327 2021-01-01
CN202110228419 2021-03-01
PCT/CN2021/085147 WO2022141911A1 (en) 2021-01-01 2021-04-01 Roadside sensing unit-based method for quick recognition of dynamic target point cloud and point cloud segmentation
CNPCT/CN2021/085147 2021-04-01
PCT/CN2022/084912 WO2022206974A1 (en) 2021-01-01 2022-04-01 Roadside sensing unit-based static and non-static object point cloud recognition method

Publications (1)

Publication Number Publication Date
CN117836667A true CN117836667A (en) 2024-04-05

Family

ID=82260124

Family Applications (5)

Application Number Title Priority Date Filing Date
CN202180011148.3A Pending CN116685873A (en) 2021-01-01 2021-04-01 Vehicle-road cooperation-oriented perception information fusion representation and target detection method
CN202280026657.8A Pending CN117441197A (en) 2021-01-01 2022-04-01 Laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field
CN202280026659.7A Pending CN117836653A (en) 2021-01-01 2022-04-01 Road side millimeter wave radar calibration method based on vehicle-mounted positioning device
CN202280026658.2A Pending CN117441113A (en) 2021-01-01 2022-04-01 Vehicle-road cooperation-oriented perception information fusion representation and target detection method
CN202280026656.3A Pending CN117836667A (en) 2021-01-01 2022-04-01 Static and non-static object point cloud identification method based on road side sensing unit

Family Applications Before (4)

Application Number Title Priority Date Filing Date
CN202180011148.3A Pending CN116685873A (en) 2021-01-01 2021-04-01 Vehicle-road cooperation-oriented perception information fusion representation and target detection method
CN202280026657.8A Pending CN117441197A (en) 2021-01-01 2022-04-01 Laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field
CN202280026659.7A Pending CN117836653A (en) 2021-01-01 2022-04-01 Road side millimeter wave radar calibration method based on vehicle-mounted positioning device
CN202280026658.2A Pending CN117441113A (en) 2021-01-01 2022-04-01 Vehicle-road cooperation-oriented perception information fusion representation and target detection method

Country Status (3)

Country Link
CN (5) CN116685873A (en)
GB (2) GB2618936A (en)
WO (9) WO2022141910A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114724362B (en) * 2022-03-23 2022-12-27 中交信息技术国家工程实验室有限公司 Vehicle track data processing method
CN115358530A (en) * 2022-07-26 2022-11-18 上海交通大学 Vehicle-road cooperative sensing roadside test data quality evaluation method
CN115113157B (en) * 2022-08-29 2022-11-22 成都瑞达物联科技有限公司 Beam pointing calibration method based on vehicle-road cooperative radar
CN115480243B (en) * 2022-09-05 2024-02-09 江苏中科西北星信息科技有限公司 Multi-millimeter wave radar end-edge cloud fusion calculation integration and application method thereof
CN115166721B (en) * 2022-09-05 2023-04-07 湖南众天云科技有限公司 Radar and GNSS information calibration fusion method and device in roadside sensing equipment
CN115272493B (en) * 2022-09-20 2022-12-27 之江实验室 Abnormal target detection method and device based on continuous time sequence point cloud superposition
CN115235478B (en) * 2022-09-23 2023-04-07 武汉理工大学 Intelligent automobile positioning method and system based on visual label and laser SLAM
CN115830860B (en) * 2022-11-17 2023-12-15 西部科学城智能网联汽车创新中心(重庆)有限公司 Traffic accident prediction method and device
CN115966084B (en) * 2023-03-17 2023-06-09 江西昂然信息技术有限公司 Holographic intersection millimeter wave radar data processing method and device and computer equipment
CN116189116B (en) * 2023-04-24 2024-02-23 江西方兴科技股份有限公司 Traffic state sensing method and system
CN117471461B (en) * 2023-12-26 2024-03-08 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Road side radar service device and method for vehicle-mounted auxiliary driving system
CN117452392B (en) * 2023-12-26 2024-03-08 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Radar data processing system and method for vehicle-mounted auxiliary driving system

Family Cites Families (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6661370B2 (en) * 2001-12-11 2003-12-09 Fujitsu Ten Limited Radar data processing apparatus and data processing method
US9562971B2 (en) * 2012-11-22 2017-02-07 Geosim Systems Ltd. Point-cloud fusion
KR101655606B1 (en) * 2014-12-11 2016-09-07 현대자동차주식회사 Apparatus for tracking multi object using lidar and method thereof
TWI597513B (en) * 2016-06-02 2017-09-01 財團法人工業技術研究院 Positioning system, onboard positioning device and positioning method thereof
CN105892471B (en) * 2016-07-01 2019-01-29 北京智行者科技有限公司 Automatic driving method and apparatus
WO2018126248A1 (en) * 2017-01-02 2018-07-05 Okeeffe James Micromirror array for feedback-based image resolution enhancement
KR102056147B1 (en) * 2016-12-09 2019-12-17 (주)엠아이테크 Registration method of distance data and 3D scan data for autonomous vehicle and method thereof
CN106846494A (en) * 2017-01-16 2017-06-13 青岛海大新星软件咨询有限公司 Oblique photograph three-dimensional building thing model automatic single-body algorithm
US10281920B2 (en) * 2017-03-07 2019-05-07 nuTonomy Inc. Planning for unknown objects by an autonomous vehicle
CN108629231B (en) * 2017-03-16 2021-01-22 百度在线网络技术(北京)有限公司 Obstacle detection method, apparatus, device and storage medium
CN107133966B (en) * 2017-03-30 2020-04-14 浙江大学 Three-dimensional sonar image background segmentation method based on sampling consistency algorithm
CN108932462B (en) * 2017-05-27 2021-07-16 华为技术有限公司 Driving intention determining method and device
FR3067495B1 (en) * 2017-06-08 2019-07-05 Renault S.A.S METHOD AND SYSTEM FOR IDENTIFYING AT LEAST ONE MOVING OBJECT
CN109509260B (en) * 2017-09-14 2023-05-26 Apollo Intelligent Technology (Beijing) Co., Ltd. Labeling method, device and readable medium for dynamic obstacle point clouds
CN107609522B (en) * 2017-09-19 2021-04-13 Donghua University Information fusion vehicle detection system based on laser radar and machine vision
CN108152831B (en) * 2017-12-06 2020-02-07 China Agricultural University Laser radar obstacle identification method and system
CN108010360A (en) * 2017-12-27 2018-05-08 CETHIK Group Co., Ltd. Automatic driving environment perception system based on vehicle-road cooperation
CN108639059B (en) * 2018-05-08 2019-02-19 Tsinghua University Method and device for quantifying driver manipulation behavior based on the least action principle
CN109188379B (en) * 2018-06-11 2023-10-13 Shenzhen Baotuzhe Technology Co., Ltd. Automatic calibration method for driving assistance radar working angle
JPWO2020009060A1 (en) * 2018-07-02 2021-08-05 Sony Semiconductor Solutions Corporation Information processing equipment and information processing methods, computer programs, and mobile equipment
US10839530B1 (en) * 2018-09-04 2020-11-17 Apple Inc. Moving point detection
CN109297510B (en) * 2018-09-27 2021-01-01 Baidu Online Network Technology (Beijing) Co., Ltd. Relative pose calibration method, device, equipment and medium
CN111429739A (en) * 2018-12-20 2020-07-17 Alibaba Group Holding Limited Driving assistance method and system
JP7217577B2 (en) * 2019-03-20 2023-02-03 Faurecia Clarion Electronics Co., Ltd. Calibration device and calibration method
CN110220529B (en) * 2019-06-17 2023-05-23 Shenzhen Shuxiang Technology Co., Ltd. Roadside positioning method for automatic driving vehicles
CN110532896B (en) * 2019-08-06 2022-04-08 Beihang University Road vehicle detection method based on fusion of roadside millimeter wave radar and machine vision
CN110443978B (en) * 2019-08-08 2021-06-18 Nanjing Lianshun Technology Co., Ltd. Fall alarm device and method
CN110458112B (en) * 2019-08-14 2020-11-20 Shanghai Eye Control Technology Co., Ltd. Vehicle detection method and device, computer equipment and readable storage medium
CN110850378B (en) * 2019-11-22 2021-11-19 Shenzhen Chenggu Technology Co., Ltd. Automatic calibration method and device for roadside radar equipment
CN110850431A (en) * 2019-11-25 2020-02-28 Mengshi (Shanghai) Technology Co., Ltd. System and method for measuring trailer deflection angle
CN110906939A (en) * 2019-11-28 2020-03-24 Anhui Jianghuai Automobile Group Corp., Ltd. Automatic driving positioning method and device, electronic equipment, storage medium and automobile
CN111121849B (en) * 2020-01-02 2021-08-20 Continental Investment (China) Co., Ltd. Automatic calibration method for sensor orientation parameters, edge computing unit and roadside sensing system
CN111999741B (en) * 2020-01-17 2023-03-14 Qingdao Huituo Intelligent Machine Co., Ltd. Roadside laser radar target detection method and device
CN111157965B (en) * 2020-02-18 2021-11-23 Chongqing Innovation Center of Beijing Institute of Technology Vehicle-mounted millimeter wave radar installation angle self-calibration method, device and storage medium
CN111476822B (en) * 2020-04-08 2023-04-18 Zhejiang University Laser radar target detection and motion tracking method based on scene flow
CN111554088B (en) * 2020-04-13 2022-03-22 Chongqing University of Posts and Telecommunications Multifunctional V2X intelligent roadside base station system
CN111192295B (en) * 2020-04-14 2020-07-03 Zhongzhixing Technology Co., Ltd. Target detection and tracking method, apparatus, and computer-readable storage medium
CN111537966B (en) * 2020-04-28 2022-06-10 Southeast University Array antenna error correction method suitable for millimeter wave vehicle-mounted radar field
CN111766608A (en) * 2020-06-12 2020-10-13 Suzhou Fanxiang Automotive Technology Co., Ltd. Environmental perception system based on laser radar
CN111880191B (en) * 2020-06-16 2023-03-28 Peking University Map generation method based on multi-agent laser radar and visual information fusion
CN111880174A (en) * 2020-07-03 2020-11-03 Wuhu Xiongshi Automotive Technology Co., Ltd. Roadside service system for supporting automatic driving control decisions and control method thereof
CN111914664A (en) * 2020-07-06 2020-11-10 Tongji University Vehicle multi-target detection and trajectory tracking method based on re-identification
CN111985322B (en) * 2020-07-14 2024-02-06 Xi'an University of Technology Road environment element sensing method based on laser radar
CN111862157B (en) * 2020-07-20 2023-10-10 Chongqing University Multi-vehicle target tracking method integrating machine vision and millimeter wave radar
CN112019997A (en) * 2020-08-05 2020-12-01 Ruijie Networks Co., Ltd. Vehicle positioning method and device
CN112509333A (en) * 2020-10-20 2021-03-16 Zhihui Hutong Technology Co., Ltd. Roadside parking vehicle track identification method and system based on multi-sensor sensing

Also Published As

Publication number Publication date
WO2022206974A1 (en) 2022-10-06
CN116685873A (en) 2023-09-01
WO2022141914A1 (en) 2022-07-07
GB2620877A (en) 2024-01-24
CN117441197A (en) 2024-01-23
WO2022141910A1 (en) 2022-07-07
GB2618936A (en) 2023-11-22
WO2022206978A1 (en) 2022-10-06
WO2022206977A1 (en) 2022-10-06
WO2022141911A1 (en) 2022-07-07
WO2022206942A1 (en) 2022-10-06
WO2022141912A1 (en) 2022-07-07
WO2022141913A1 (en) 2022-07-07
CN117836653A (en) 2024-04-05
CN117441113A (en) 2024-01-23
GB202316625D0 (en) 2023-12-13
GB202313215D0 (en) 2023-10-11

Similar Documents

Publication Publication Date Title
CN117836667A (en) Static and non-static object point cloud identification method based on road side sensing unit
CN115605777A (en) Dynamic target point cloud rapid identification and point cloud segmentation method based on road side sensing unit
CN108710875B (en) Road vehicle counting method and device for aerial photography based on deep learning
US10846874B2 (en) Method and apparatus for processing point cloud data and storage medium
CN108920481B (en) Road network reconstruction method and system based on mobile phone positioning data
CN112581612B (en) Vehicle-mounted grid map generation method and system based on fusion of laser radar and all-round-looking camera
Chen et al. Architecture of vehicle trajectories extraction with roadside LiDAR serving connected vehicles
CN102779280B (en) Traffic information extraction method based on laser sensor
CN108345822A (en) Point cloud data processing method and device
CN110320504A (en) Unstructured road detection method based on a laser radar point cloud statistical geometric model
CN106842231A (en) Road edge identification and tracking method
CN103679655A (en) LiDAR point cloud filtering method based on gradient and region growing
CN105160309A (en) Three-lane detection method based on image morphological segmentation and region growing
CN103500329B (en) Street lamp automatic extraction method based on vehicle-mounted mobile laser scanning point cloud
CN110263717A (en) Land use status determination method incorporating street view imagery
CN113569915B (en) Multi-strategy rail transit obstacle recognition method based on laser radar
CN102855759A (en) Automatic collection method for traffic flow information from high-resolution satellite remote sensing
CN103310199A (en) Vehicle model identification method based on high-resolution remote sensing data
CN114782729A (en) Real-time target detection method based on laser radar and vision fusion
Gong et al. Pedestrian detection method based on roadside light detection and ranging
Sun et al. Objects detection with 3-d roadside lidar under snowy weather
You et al. Segmentation of individual mangrove trees using UAV-based LiDAR data
CN115620165B (en) Method, device, equipment and medium for evaluating slow-traffic system facilities in urban built-up areas
Kamenetsky et al. Aerial car detection and urban understanding
Wu et al. Grid-based lane identification with roadside LiDAR data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination