US20240005674A1 - Road edge recognition based on laser point cloud - Google Patents

Road edge recognition based on laser point cloud

Info

Publication number
US20240005674A1
US20240005674A1 (application US 18/548,042)
Authority
US
United States
Prior art keywords
road edge
point cloud
edge points
cloud data
current frame
Prior art date
Legal status
Pending
Application number
US18/548,042
Other languages
English (en)
Inventor
Chao Huang
Yue Ye
Current Assignee
Shanghai Xiantu Intelligent Technology Co Ltd
Original Assignee
Shanghai Xiantu Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Xiantu Intelligent Technology Co Ltd filed Critical Shanghai Xiantu Intelligent Technology Co Ltd
Assigned to Shanghai Xiantu Intelligent Technology Co., Ltd. reassignment Shanghai Xiantu Intelligent Technology Co., Ltd. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HUANG, CHAO, YE, Yue
Assigned to Shanghai Xiantu Intelligent Technology Co., Ltd. reassignment Shanghai Xiantu Intelligent Technology Co., Ltd. CHANGE OF ADDRESS OF ASSIGNEE Assignors: Shanghai Xiantu Intelligent Technology Co., Ltd.
Publication of US20240005674A1 publication Critical patent/US20240005674A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4808Evaluating distance, position or velocity data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Definitions

  • the present disclosure relates to the field of laser radar technologies, and in particular to a road edge recognition method based on a laser point cloud, an electronic device, and a storage medium.
  • a self-driving sweeping vehicle provided with a laser radar may perform automatic sweeping along a road based on a road edge detection result of the laser radar.
  • the self-driving sweeping vehicle has to perform close-to-edge sweeping; otherwise, the sweeping quality will be affected.
  • the safety and stability of the self-driving sweeping vehicle should be guaranteed while the close-to-edge sweeping with high accuracy is performed.
  • the present disclosure provides a road edge recognition method based on a laser point cloud, an electronic device, and a storage medium.
  • the present disclosure provides a road edge recognition method based on a laser point cloud, which includes: obtaining point cloud data of a current frame collected by a laser radar and pose information corresponding to a current vehicle; determining, based on the pose information, offline road edge points corresponding to the current frame in a pre-stored offline road edge point set; extracting a ground point cloud set by processing the point cloud data; determining, based on a type of the laser radar, a corresponding extraction algorithm to extract candidate road edge points of the current frame from the ground point cloud set; and selecting road edge points closest to the current vehicle in the candidate road edge points and in the offline road edge points as target road edge points.
  • the current vehicle includes a self-driving sweeping vehicle provided with a laser radar and a positioning sensor.
  • processing the point cloud data to extract the ground point cloud set includes: selecting a preset number of point clouds as initial point clouds, and performing plane fitting on the initial point clouds based on a random sample consensus algorithm to obtain a plane; calculating a distance of each remaining point cloud from the plane, and determining whether the distance is less than a threshold; and in response to determining that the distance is less than the threshold, adding that point cloud to the ground point cloud set.
  • before determining the candidate road edge points of the current frame, the method further includes: based on a selected region of interest, filtering the ground point cloud set to determine ground point clouds in the region of interest; where the region of interest includes a region within a preset distance from both sides of the offline road edge points.
  • the type of the laser radar includes a forward radar and a lateral radar; determining, based on the type of the laser radar, the corresponding extraction algorithm to extract the candidate road edge points of the current frame from the ground point cloud set includes: when the laser radar is a forward radar, detecting scanning points on each laser scanning line of the current frame based on a sliding window to determine points with a height change exceeding a threshold on each scanning line as the candidate road edge points; when the laser radar is a lateral radar, determining points with a voxel height difference between adjacent voxels along a vertical direction of the current vehicle exceeding a threshold as the candidate road edge points by a voxel gradient algorithm.
  • the method further includes: determining the candidate road edge points of the current frame as observation values, and inputting the candidate road edge points of a previous frame into a kinematic model to obtain results as prediction values; and filtering the observation values and the prediction values by a Kalman filtering algorithm to obtain filtered candidate road edge points.
  • the offline road edge point set includes road edge points obtained by processing dense point cloud data collected by a laser radar with a large number of channels.
  • processing the dense point cloud data includes: traversing point cloud data of each frame and merging the point cloud data of the current frame, the point cloud data of a plurality of frames before the current frame and the point cloud data of a plurality of frames after the current frame to obtain merged point cloud data; based on a random sample consensus algorithm, extracting a ground point cloud set from the merged point cloud data; and based on a normal vector feature of a plane formed by ground points near the current vehicle, determining the offline road edge points.
  • the method further includes: establishing an actual road edge by fitting the target road edge points.
  • an electronic device including: a processor; a memory storing instructions executable by the processor; where the processor is configured to perform the method according to the first aspect.
  • a non-transitory computer readable storage medium storing computer programs thereon, where the programs are executed by a processor to perform the method according to the first aspect.
  • FIG. 1 is a flowchart illustrating a road edge recognition method based on a laser point cloud according to an exemplary embodiment of the present disclosure.
  • FIG. 2 is a flowchart illustrating steps of processing dense point cloud data according to an exemplary embodiment of the present disclosure.
  • FIG. 3 is a schematic diagram illustrating a road edge recognition system based on a laser point cloud according to an exemplary embodiment of the present disclosure.
  • Although the terms first, second, third, and the like may be used in the present disclosure to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one category of information from another. For example, without departing from the scope of the present disclosure, first information may be referred to as second information; similarly, the second information may also be referred to as the first information.
  • the term “if” as used herein may be interpreted as “when”, “upon”, or “in response to determining”.
  • a self-driving sweeping vehicle may, based on a positioning sensor provided on itself, for example, a global positioning system (GPS) sensor, obtain a self-positioning so as to determine its position in a road and then perform close-to-edge sweeping in combination with a road map.
  • the self-driving sweeping vehicle is provided with a laser radar with a small number of channels, which collects sparse point cloud data, resulting in limited accuracy of the online road edge detection result. Furthermore, once a mistake or error occurs in the online detection, the stability and safety of the close-to-edge sweeping may be affected.
  • target road edge points are determined by combining candidate road edge points determined based on point cloud data of a current frame collected in real time with offline road edge points determined based on pre-collected point cloud data.
  • point cloud data of a current frame collected by a laser radar and pose information corresponding to a current vehicle may be firstly obtained.
  • offline road edge points corresponding to the current frame in a pre-stored offline road edge point set are determined.
  • a ground point cloud set is extracted by processing the point cloud data.
  • a corresponding extraction algorithm is determined to extract candidate road edge points of the current frame from the ground point cloud set.
  • road edge points closest to the vehicle in the candidate road edge points and the offline road edge points are selected as target road edge points.
  • the offline data corresponding to the current data collected in real time is determined based on the pose information of the vehicle, and by comparing the distances of the online road edge points and the offline road edge points from the vehicle, the road edge points closest to the vehicle are selected as the target road edge points.
  • the inaccuracy caused by relying solely on the offline road edge in the presence of positioning errors and road changes can be avoided, and the real-time performance of the road edge recognition can be guaranteed;
  • candidate road edge points are determined based on point cloud data collected in real time, and then combined with the pre-stored offline road edge points so as to ensure the accuracy of the road edge recognition by using the high-accuracy offline road edge points.
  • FIG. 1 is a flowchart illustrating a road edge recognition method based on a laser point cloud according to an exemplary embodiment of the present disclosure. As shown in FIG. 1 , the method includes the following steps 101 to 105 .
  • step 101 point cloud data of a current frame collected by a laser radar and pose information corresponding to a current vehicle are obtained.
  • the vehicle includes a self-driving sweeping vehicle provided with a laser radar and a positioning sensor.
  • a laser radar with a small number of channels may be employed to perform online road edge detection, where the collected point cloud data is relatively sparse.
  • a four-channel laser radar may be employed to collect the point cloud data of the current frame.
  • the current position information may be collected by a positioning sensor, for example, by a GPS sensor.
  • data of each frame includes the current positioning information and the point cloud data.
  • the positioning information corresponding to the vehicle may be converted into the pose information corresponding to the vehicle so as to obtain point cloud data of each frame and the pose information corresponding to the vehicle.
  • the GPS positioning data may be converted into Mercator coordinates to obtain the pose information of the vehicle on an XY plane.
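For illustration, this conversion might be sketched with the standard spherical (Web) Mercator formulas; the disclosure does not specify the exact projection or datum, so both are assumptions here:

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius, as used by spherical Mercator

def gps_to_mercator(lon_deg: float, lat_deg: float):
    """Convert GPS longitude/latitude in degrees to spherical Mercator x/y in meters."""
    x = EARTH_RADIUS_M * math.radians(lon_deg)
    y = EARTH_RADIUS_M * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y
```

The resulting x/y pair can serve as the vehicle position on the XY plane; heading would come from the positioning sensor or from consecutive poses.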
  • a frame rate refers to the number of revolutions that the motor of the laser radar completes within one second, namely, the number of times that one full circle of scanning is completed each second.
  • Point cloud data of one frame refers to one point cloud picture, which corresponds to the point clouds obtained by the motor of the laser radar completing one-circle scanning.
  • offline road edge points corresponding to the current frame in a pre-stored offline road edge point set are determined.
  • the offline road edge points corresponding to the current frame may be determined from the offline road edge point set based on the pose information of the current vehicle.
  • the offline road edge point set includes the road edge points obtained by processing dense point cloud data collected by a laser radar with a large number of channels.
  • when the point cloud data for the offline road edge points is collected, a laser radar with a large number of channels (64/128 channels) may be employed to collect dense point cloud data.
  • the point cloud data may include much ground information in which the roads and the road edge structures are clear and thus can be used as a high-accuracy map.
  • the positioning data may also be obtained and converted into the pose information of the vehicle in Mercator coordinates, such that data of each frame includes point cloud data and vehicle pose information.
  • FIG. 2 is a flowchart illustrating steps of processing dense point cloud data according to an exemplary embodiment of the present disclosure. As shown in FIG. 2 , the processing may include the following steps 201 to 203 .
  • step 201 point cloud data of each frame is traversed, and the point cloud data of the current frame, the point cloud data of a plurality of frames before the current frame and the point cloud data of a plurality of frames after the current frame are merged to obtain merged point cloud data.
  • the point cloud data of previous and next frames may be merged.
  • a value k may be set.
  • the point cloud data of the n-th frame, the point cloud data of a plurality of frames before the n-th frame and the point cloud data of a plurality of frames after the n-th frame can be merged, such that data of the n-th frame includes the vehicle pose information of the n-th frame and the point cloud data obtained by merging the point cloud data of the (n−k)-th frame to the (n+k)-th frame.
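This multi-frame merging can be sketched as follows, assuming each frame's points are given in vehicle coordinates and each pose is a vehicle-to-world rotation/translation pair (the pose representation is an assumption):

```python
import numpy as np

def merge_frames(frames, poses, n, k):
    """Merge point clouds of frames n-k..n+k into one world-frame cloud.

    frames: list of (Ni, 3) arrays in each frame's vehicle coordinates.
    poses:  list of (R, t) pairs, R a 3x3 rotation, t a 3-vector (vehicle -> world).
    """
    merged = []
    for i in range(max(0, n - k), min(len(frames), n + k + 1)):
        R, t = poses[i]
        merged.append(frames[i] @ R.T + t)  # transform each cloud into the world frame
    return np.vstack(merged)
```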
  • the merged point cloud data is denser and has more obvious road edge features.
  • a ground point cloud set is extracted from the merged point cloud data.
  • the ground point cloud set may be determined according to a plane obtained by fitting.
  • the above merged point cloud data may be divided into different zones based on coordinates. For each zone, a preset amount of point cloud data is randomly selected from the point cloud data in the current zone as initial ground point cloud data. Then, plane fitting is performed on the initial ground point cloud data based on the random sample consensus algorithm to obtain a ground description model of each zone. Finally, the distance of each point cloud in the zone from the fitted plane is calculated. If the distance is less than a preset threshold, the point is classified as a ground point; otherwise, it is classified as an obstacle point.
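A minimal sketch of the RANSAC ground fitting is shown below; for brevity it fits one plane to the whole cloud rather than one ground model per zone, and the iteration count and inlier threshold are illustrative assumptions:

```python
import numpy as np

def ransac_ground(points, n_iters=100, dist_thresh=0.05, rng=None):
    """Return a boolean mask marking ground points found by RANSAC plane fitting.

    Each iteration fits a plane through 3 random points; the plane with the
    most inliers (point-to-plane distance below dist_thresh) wins.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:  # degenerate sample: the 3 points are collinear
            continue
        normal /= norm
        dist = np.abs((points - p0) @ normal)  # unsigned point-to-plane distances
        mask = dist < dist_thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask
```

Points outside the returned mask would then be treated as obstacle points.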
  • the ground points and the road edge points have a limited height
  • the ground points can be selected based on a height feature by referring to a ground obtained by fitting.
  • the offline road edge points are determined.
  • ground point clouds within a preset distance from the vehicle may be selected and a normal vector of the plane is calculated to determine the offline road edge points.
  • the ground point cloud data within a preset distance from both sides of the vehicle may be selected, and a normal vector of the plane formed by the points around each point is calculated as the normal vector feature of this point. Since the road edge is perpendicular to the ground, the normal vectors of the road edge points in the dense point clouds all have the feature of being parallel to the ground and pointing toward the inner side of the road. Therefore, the offline road edge points can be determined through the above calculation.
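The normal-vector feature can be sketched per point as the smallest-eigenvalue eigenvector of the neighborhood covariance; the neighborhood radius, minimum neighbor count, and horizontality threshold below are assumptions, and the "pointing toward the inner side of the road" check is omitted for brevity:

```python
import numpy as np

def edge_points_by_normals(points, radius=0.3, horiz_thresh=0.3):
    """Flag points whose local surface normal is roughly parallel to the ground.

    For each point, the normal is taken as the eigenvector with the smallest
    eigenvalue of the covariance of its neighbors; a small |z| component means
    the local surface is near-vertical, as expected for a curb face.
    """
    flags = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        nbrs = points[np.linalg.norm(points - p, axis=1) < radius]
        if len(nbrs) < 5:          # too few neighbors for a stable normal
            continue
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        evals, evecs = np.linalg.eigh(cov)
        normal = evecs[:, 0]       # eigenvector of the smallest eigenvalue
        if abs(normal[2]) < horiz_thresh:
            flags[i] = True
    return flags
```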
  • the preset distance from both sides of the vehicle is not limited in the present disclosure and may be set by those skilled in the art based on actual requirements.
  • outliers and noise points may be filtered from the offline road edge points by using a filter to obtain a road edge point set.
  • the offline road edge points are collected by using a laser radar with a large number of channels, and the point cloud data of a plurality of frames is merged such that the point cloud data of each frame is denser, thereby ensuring both the high accuracy and the correctness of the offline road edge points.
  • Extracting the online road edge points is further described below.
  • a ground point cloud set is extracted by processing the point cloud data.
  • the point cloud data of the current frame collected by the laser radar may be processed to determine the ground point cloud set.
  • a preset number of point clouds may be selected as initial point clouds, and plane fitting is performed on the initial point clouds based on the random sample consensus algorithm to obtain a plane; a distance of other point cloud from the plane is calculated and whether the distance is less than a threshold is determined; if the distance is less than the threshold, the other point cloud is added to the ground point cloud set.
  • a preset number of point clouds may be randomly selected as initial point clouds, and plane fitting is performed on the initial point clouds based on the random sample consensus algorithm to obtain a plane; a distance of other point cloud from the plane is calculated and whether the distance is less than a threshold is determined; if the distance is less than the threshold, the other point cloud is added to the ground point cloud set; and if the distance is not less than the threshold, the other point cloud is removed from the point cloud data.
  • the ground point cloud set includes the road edge points, and the road edge points need to be further determined from the ground point cloud set based on a preset extraction algorithm.
  • further screening may be performed on the ground point cloud set in combination with the offline road edge points to reduce the size of the point cloud data to be processed.
  • the ground point cloud set is filtered to determine ground point clouds in the region of interest; where the region of interest includes a region within a preset distance from both sides of the offline road edge points.
  • a region within a preset distance from both sides of the offline road edge points may be selected as a region of interest.
  • the ground point cloud set determined from the online collection of the laser radar may be screened to determine the ground point clouds in the region of interest.
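The region-of-interest screening might look like the sketch below, keeping only ground points within a preset XY distance of some offline road edge point; a brute-force distance matrix is used for clarity (a KD-tree would scale better), and the 1 m default distance is an assumption:

```python
import numpy as np

def filter_roi(ground_pts, offline_edge_pts, max_dist=1.0):
    """Keep ground points within max_dist (in the XY plane) of any offline road edge point."""
    # (N, M) matrix of XY distances between ground points and offline edge points
    d = np.linalg.norm(
        ground_pts[:, None, :2] - offline_edge_pts[None, :, :2], axis=2
    )
    return ground_pts[d.min(axis=1) < max_dist]
```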
  • in step 103, the values to be preset may be selected by those skilled in the art based on actual requirements, which is not limited herein.
  • a corresponding extraction algorithm is determined to extract candidate road edge points of the current frame from the ground point cloud set.
  • the laser radar is usually mounted on the top of the vehicle or on the periphery of the vehicle.
  • the laser radar mounted on the periphery of the vehicle usually has fewer than 8 channels
  • the laser radar mounted on the top of the vehicle usually has no fewer than 16 channels.
  • the self-driving sweeping vehicle may obtain the offline road edge points by the steps 201 to 203 based on the dense point cloud data collected by the laser radar with a large number of channels mounted on the top of the vehicle.
  • the self-driving sweeping vehicle may collect real-time point cloud data for online road edge detection by using the laser radars with a small number of channels mounted on the front and sides of the vehicle.
  • the type of the laser radar may be determined by the mounting position of the laser radar, and based on the different mounting positions, a corresponding extraction algorithm is determined to extract the candidate road edge points of the current frame from the ground point cloud set.
  • the type of the laser radar includes a forward radar and a lateral radar.
  • the laser radar is a forward radar
  • scanning points on each laser scanning line of the current frame are detected based on a sliding window to determine points with a height change exceeding a threshold on each scanning line as the candidate road edge points.
  • the above scanning points are traversed to calculate a height difference from the point P(k−a) to the point P(k+b), where the values of a and b may be adjusted based on experience. If the height difference is greater than a preset threshold, the points between the point P(k−a) and the point P(k+b) are filtered to obtain the candidate road edge points on the scanning line, and the above step is repeated until extraction of the candidate road edge points on all scanning lines is completed.
  • the scanning lines may be divided into left and right sides with the scanning central point of the scanning lines as center, and then the above detection operation is performed on the left and right sides respectively.
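For one scanning line, the sliding-window height test described above might be sketched as follows, where heights holds the ordered point heights along the line and a, b, and the threshold are tuned empirically as the disclosure notes:

```python
def scanline_candidates(heights, a=2, b=2, thresh=0.1):
    """Return indices of candidate road edge points on one scan line: points
    inside any window [k-a, k+b] whose end-to-end height change exceeds thresh."""
    candidates = set()
    for k in range(a, len(heights) - b):
        if abs(heights[k + b] - heights[k - a]) > thresh:
            # the whole window around the jump becomes candidate road edge points
            candidates.update(range(k - a, k + b + 1))
    return sorted(candidates)
```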
  • points with a voxel height difference between adjacent voxels along a vertical direction of the vehicle exceeding a threshold are determined as the candidate road edge points based on a voxel gradient algorithm.
  • all points within a side surface of a vehicle body are divided into k*k voxels. For each voxel, a minimum value of heights of the internal points of the voxel is calculated as a height of the voxel.
  • adjacent voxels along a vertical direction of the vehicle are traversed from left to right sequentially. When the height difference between adjacent voxels exceeds a preset threshold, the point is taken as a candidate road edge point.
  • the side surface of the vehicle body may be divided into left and right sides and then the above voxel gradient processing is performed on the left and right sides respectively.
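A minimal sketch of the voxel gradient step, assuming a 2D grid over the side region with each cell's height taken as the minimum z of its points; the cell size, the threshold, and the choice of the −y neighbor as "left" are assumptions:

```python
import numpy as np

def voxel_edge_candidates(points, cell=0.2, thresh=0.1):
    """Grid points into XY cells, take each cell's minimum z as its height,
    and flag cells whose height jumps versus the laterally adjacent cell."""
    ix = np.floor(points[:, 0] / cell).astype(int)
    iy = np.floor(points[:, 1] / cell).astype(int)
    heights = {}
    for i, j, z in zip(ix, iy, points[:, 2]):
        key = (i, j)
        heights[key] = min(z, heights.get(key, z))  # per-voxel minimum height
    flagged = []
    for (i, j), h in heights.items():
        left = heights.get((i, j - 1))  # neighbor one cell to the left (-y)
        if left is not None and abs(h - left) > thresh:
            flagged.append((i, j))
    return flagged
```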
  • thresholds of the height difference on the scanning lines and the height difference between voxels can be selected by those skilled in the art, which is not limited in the present disclosure.
  • the candidate road edge points of the current frame are determined as observation values and the candidate road edge points of a previous frame are input into a kinematic model to obtain results as prediction values; the observation values and the prediction values are filtered by a Kalman filtering algorithm to obtain filtered candidate road edge points.
  • the kinematic model of the self-driving sweeping vehicle is a constant turn rate and velocity model. Assuming the observation and process noises are both Gaussian, the candidate road edge points of the previous frame may be propagated through the kinematic model to obtain prediction values of those points moving to the current frame, while the candidate road edge points obtained by online detection on the current frame are taken as the observation values; the observation values and the prediction values are then fused by Kalman filtering to obtain filtered candidate road edge points.
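One predict/update cycle of the filter can be illustrated as below; note that the disclosure's constant turn rate and velocity (CTRV) model is nonlinear and would in practice need an extended or unscented Kalman filter, so a generic linear step is shown here as a stand-in:

```python
import numpy as np

def kalman_update(x, P, z, F, Q, H, R):
    """One predict/update cycle of a linear Kalman filter.

    x, P : prior state and covariance; z : observation;
    F, Q : motion model and its noise; H, R : observation model and its noise.
    """
    # Predict: propagate the previous frame's state through the motion model
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: fuse the online-detected observation with the prediction
    y = z - H @ x_pred                    # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```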
  • road edge points closest to the vehicle in the candidate road edge points and in the offline road edge points are selected as target road edge points.
  • the road edge points closest to the vehicle are determined as the target road edge points.
  • the online detected road edge points may be compared with the pre-stored offline road edge points to select the points closest to the vehicle as the target road edge points.
  • an actual road edge can be established by fitting the target road edge points.
  • fitting may be performed on the final target road edge points to establish the actual road edge.
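The disclosure does not specify the fitting method; as one simple possibility, a low-degree polynomial can be fit through the target road edge points in the vehicle frame:

```python
import numpy as np

def fit_road_edge(edge_points, degree=2):
    """Fit a polynomial y = f(x) through the target road edge points
    (a simple stand-in for the road edge fitting step)."""
    coeffs = np.polyfit(edge_points[:, 0], edge_points[:, 1], degree)
    return np.poly1d(coeffs)
```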
  • a planned reference line of a vehicle center of the self-driving sweeping vehicle and a planned reference line of a sweeping brush of the self-driving sweeping vehicle may be generated at the same time based on a dynamic programming algorithm.
  • the defined optimization target problem fully takes into account several constraint conditions such as the vehicle center reference line of the self-driving vehicle, the sweeping brush reference line, the acceleration change amount, the distance from the obstacle, the curvature of the trajectory, and vehicular dynamics limitations, and then a high-accuracy close-to-edge trajectory can be generated by using a solver, which is not limited in the present disclosure.
  • offline data corresponding to current data collected in real time is determined, and by comparing the distances of the online road edge points and the offline road edge points from the vehicle, the road edge points closest to the vehicle are selected as target road edge points.
  • candidate road edge points are determined based on point cloud data collected in real time, and then combined with the pre-stored offline road edge points so as to ensure the accuracy of the road edge recognition by using the high-accuracy offline road edge points.
  • FIG. 3 is a schematic diagram illustrating a road edge recognition system based on a laser point cloud according to an exemplary embodiment of the present disclosure.
  • the system includes a data receiving module 301 , configured to obtain point cloud data of a current frame collected by a laser radar and pose information corresponding to a current vehicle; an offline road edge point determination module 302 , configured to, determine, based on the pose information, offline road edge points corresponding to the current frame in a pre-stored offline road edge point set; a ground point cloud extracting module 303 , configured to extract a ground point cloud set by processing the point cloud data; a candidate road edge point extracting module 304 , configured to, determine, based on a type of the laser radar, a corresponding extraction algorithm to extract candidate road edge points of the current frame from the ground point cloud set; a target road edge point selecting module 305 , configured to select road edge points closest to the current vehicle in the candidate road edge points and in the offline road edge points as target road edge points.
  • the current vehicle includes a self-driving sweeping vehicle provided with a laser radar and a positioning sensor.
  • the ground point cloud extracting module is further configured to: select a preset number of point clouds as initial point clouds, and perform plane fitting on the initial point clouds based on a random sample consensus algorithm to obtain a plane; calculate a distance of the other point clouds from the fitted plane, and determine whether the distance is less than a threshold; and in response to determining that the distance is less than the threshold, add the other point clouds to the ground point cloud set.
  • the system further includes: a region-of-interest filtering module configured to, based on a selected region of interest, filter the ground point cloud set to determine ground point clouds in the region of interest; where the region of interest includes a region within a preset distance from both sides of the offline road edge points.
  • the type of the laser radar includes a forward radar and a lateral radar; the candidate road edge point extracting module is further configured to: when the laser radar is a forward radar, detect scanning points on each laser scanning line of the current frame based on a sliding window to determine points with a height change exceeding a threshold on each scanning line as the candidate road edge points; when the laser radar is a lateral radar, determine points with a voxel height difference between adjacent voxels along a vertical direction of the current vehicle exceeding a threshold as the candidate road edge points by a voxel gradient algorithm.
  • the system further includes a filtering module, configured to: determine the candidate road edge points of the current frame as observation values and input the candidate road edge points of a previous frame into a kinematic model to obtain results as prediction values; and filter the observation values and the prediction values by a Kalman filtering algorithm to obtain filtered candidate road edge points.
  • the offline road edge point set includes road edge points obtained by processing dense point cloud data collected by a laser radar with a large number of channels.
  • processing the dense point cloud data includes: traversing point cloud data of each frame, and merging the point cloud data of the current frame, the point cloud data of a plurality of frames before the current frame and the point cloud data of a plurality of frames after the current frame to obtain merged point cloud data; based on a random sample consensus algorithm, extracting a ground point cloud set from the merged point cloud data; and based on normal vector features of a plane formed by ground points near the current vehicle, determining the offline road edge points.
  • the system further includes an establishing module configured to establish an actual road edge by fitting the target road edge points.
  • since the system embodiments substantially correspond to the method embodiments, reference may be made to the relevant parts of the descriptions of the method embodiments.
  • the system embodiments described above are merely illustrative, where the modules described as separate members may or may not be physically separated, and the members displayed as modules may or may not be physical modules, i.e., they may be located in one place or distributed across a plurality of network units. Some or all of the modules may be selected according to actual requirements to achieve the objectives of the solutions in the embodiments. Those of ordinary skill in the art can understand and implement them without creative work.
  • the systems, apparatuses, modules or units described in the above embodiments may be specifically implemented by a computer chip or an entity or may be implemented by a product with a particular function.
  • a typical implementing device may be a computer, and the computer may specifically be a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email transceiver, a game console, a tablet computer, a wearable device, or a combination of any of the above devices.
  • the present disclosure further provides an embodiment of an electronic device.
  • the electronic device includes a processor and a memory for storing machine executable instructions, where the processor and the memory are connected with each other via an internal bus.
  • the device may also include an external interface to communicate with other devices or components.
  • by reading and executing machine executable instructions in the memory corresponding to road edge recognition logic, the processor is caused to: obtain point cloud data of a current frame collected by a laser radar and pose information corresponding to a current vehicle; based on the pose information, determine offline road edge points corresponding to the current frame in a pre-stored offline road edge point set; process the point cloud data to extract a ground point cloud set; based on a type of the laser radar, determine a corresponding extraction algorithm to extract candidate road edge points of the current frame from the ground point cloud set; and take, as target road edge points, the road edge points closest to the current vehicle among the candidate road edge points and the offline road edge points.
  • an embodiment of the present disclosure further provides a computer readable storage medium storing computer programs which, when executed by a processor, perform the steps of the road edge recognition method based on a laser point cloud in the embodiments of the present disclosure.
  • for the detailed descriptions of the steps of the road edge recognition method based on a laser point cloud, reference may be made to the above contents, which will not be repeated herein.
  • the computing device may include one or more processors (CPU), an input and output interface, a network interface and a memory.
  • the memory may include a non-permanent memory, a random access memory (RAM) and/or a non-volatile memory in a computer readable medium, for example, a Read-Only Memory (ROM) or a flash RAM.
  • the computer readable medium includes permanent and non-permanent, removable and non-removable media, which can store information by any method or technology.
  • the information may be computer readable instructions, data structures, program modules or other data.
  • the examples of the computer storage medium include, but are not limited to: a Phase Change Random Access Memory (PRAM), a Static Random Access Memory (SRAM), a Dynamic Random Access Memory (DRAM) and other types of RAM, a Read-Only Memory (ROM), an Electrically-Erasable Programmable Read-Only Memory (EEPROM), a flash memory or other memory technology, a CD-ROM, a Digital Versatile Disc (DVD) or other optical storage, a cassette magnetic tape, a magnetic disk storage or other magnetic storage device, or any other non-transmission medium for storing information accessible by computing devices.
  • the computer readable medium does not include transitory computer readable media such as modulated data signals and carrier waves.
  • one or more embodiments of the present disclosure may be provided as methods, systems, or computer program products.
  • one or more embodiments of the present disclosure may be adopted in the form of entirely hardware embodiments, entirely software embodiments, or embodiments combining software and hardware.
  • one or more embodiments of the present disclosure may be adopted in the form of computer program products implemented on one or more computer usable storage media (including but not limited to magnetic disk memory, CD-ROM, optical memory and so on) containing computer usable program code.
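The RANSAC-based ground extraction described in the embodiments above can be sketched as follows. This is an illustrative sketch, not the patented implementation: the function names (`fit_plane`, `extract_ground`), the seed size, the 0.1 m distance threshold, and the iteration count are all assumptions, since the disclosure only specifies the fit-plane / distance-test / add-inlier loop.

```python
# Hypothetical sketch of RANSAC ground extraction; names and parameters are assumed.
import numpy as np

def fit_plane(points: np.ndarray) -> np.ndarray:
    """Least-squares plane through a seed set; returns (a, b, c, d) for the
    plane a*x + b*y + c*z + d = 0 with a unit-length normal."""
    centroid = points.mean(axis=0)
    # The plane normal is the singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return np.append(normal, -normal.dot(centroid))

def extract_ground(cloud: np.ndarray, n_seed: int = 3, threshold: float = 0.1,
                   iterations: int = 50, rng=None) -> np.ndarray:
    """RANSAC: repeatedly fit a plane to randomly chosen seed points, keep the
    plane with the most inliers, and return those inliers as the ground set."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(cloud), dtype=bool)
    for _ in range(iterations):
        seed = cloud[rng.choice(len(cloud), size=n_seed, replace=False)]
        a, b, c, d = fit_plane(seed)
        # Distance of every point to the candidate plane (unit normal).
        dist = np.abs(cloud @ np.array([a, b, c]) + d)
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return cloud[best_inliers]
```

With a near-horizontal ground plane plus elevated clutter, the best-scoring plane is the ground, and the returned inliers form the ground point cloud set of the embodiment.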
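The two candidate-extraction strategies (sliding window for the forward radar, voxel gradient for the lateral radar) can be sketched as below. Function names, window size, voxel size, and the height thresholds are illustrative assumptions; the disclosure only states the height-change and adjacent-voxel-difference criteria.

```python
# Hypothetical sketch of the two candidate road-edge extraction strategies.
import numpy as np

def sliding_window_candidates(scan_line: np.ndarray, window: int = 5,
                              height_threshold: float = 0.08) -> np.ndarray:
    """Forward radar: slide a window along one scan line (rows are x, y, z
    ordered along the line) and flag the window centre wherever the height
    range inside the window exceeds the threshold."""
    z = scan_line[:, 2]
    idx = []
    for i in range(len(z) - window + 1):
        w = z[i:i + window]
        if w.max() - w.min() > height_threshold:
            idx.append(i + window // 2)   # centre of the window
    return scan_line[sorted(set(idx))]

def voxel_gradient_candidates(points: np.ndarray, voxel: float = 0.2,
                              height_threshold: float = 0.08) -> np.ndarray:
    """Lateral radar: bin points into voxels along y and flag points in voxels
    whose top height differs from an adjacent voxel's by more than the
    threshold (a simple 1-D voxel gradient)."""
    bins = np.floor(points[:, 1] / voxel).astype(int)
    top = {}
    for b, z in zip(bins, points[:, 2]):
        top[b] = max(top.get(b, z), z)
    flagged = {b for b in top
               if (b + 1 in top and abs(top[b + 1] - top[b]) > height_threshold)
               or (b - 1 in top and abs(top[b - 1] - top[b]) > height_threshold)}
    return points[np.isin(bins, list(flagged))]
```

Both functions return the subset of input points that would serve as candidate road edge points for the respective radar type.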
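The filtering module's Kalman step can be illustrated with a minimal one-dimensional update: the previous frame's edge position pushed through a kinematic model serves as the prediction, and the current frame's detection as the observation. The constant-position model and all noise variances here are illustrative assumptions, not values from the disclosure.

```python
# Minimal 1-D Kalman update sketch; noise parameters are assumed.
def kalman_update(prev_estimate: float, prev_var: float, observation: float,
                  process_var: float = 0.01, obs_var: float = 0.04):
    """Blend a kinematic-model prediction with the current observation."""
    # Predict: assume the edge position is unchanged between frames, with
    # uncertainty growing by the process noise.
    pred = prev_estimate
    pred_var = prev_var + process_var
    # Update: weight prediction vs. observation by the Kalman gain.
    gain = pred_var / (pred_var + obs_var)
    estimate = pred + gain * (observation - pred)
    var = (1.0 - gain) * pred_var
    return estimate, var
```

Applied per candidate road edge point, the filtered estimate lies between the prediction and the observation, suppressing per-frame detection jitter.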
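The multi-frame merging step used to build the offline road edge point set can be sketched as follows: each frame's cloud is transformed into a common world frame by its pose and concatenated with its neighbouring frames. The planar (x, y, yaw) pose representation and the window size are simplifying assumptions for illustration.

```python
# Hypothetical sketch of merging the current frame with neighbouring frames.
import numpy as np

def merge_frames(clouds, poses, index: int, half_window: int = 2) -> np.ndarray:
    """Merge the cloud at `index` with up to `half_window` frames before and
    after it, expressing all points in the world frame via each frame's
    (x, y, yaw) pose."""
    merged = []
    lo = max(0, index - half_window)
    hi = min(len(clouds), index + half_window + 1)
    for cloud, (x, y, yaw) in zip(clouds[lo:hi], poses[lo:hi]):
        c, s = np.cos(yaw), np.sin(yaw)
        rot = np.array([[c, -s], [s, c]])          # 2-D rotation by yaw
        world_xy = cloud[:, :2] @ rot.T + np.array([x, y])
        merged.append(np.column_stack([world_xy, cloud[:, 2]]))
    return np.vstack(merged)
```

The merged, denser cloud is then fed to the same RANSAC ground extraction, after which the offline road edge points are selected from the normal-vector features of the ground plane near the vehicle.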

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
US18/548,042 2021-09-29 2022-01-06 Road edge recognition based on laser point cloud Pending US20240005674A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN202111155064.5A CN115248447B (zh) 2021-09-29 2021-09-29 Road edge recognition method and system based on laser point cloud
CN202111155064.5 2021-09-29
PCT/CN2022/070542 WO2023050638A1 (zh) 2021-09-29 2022-01-06 Road edge recognition based on laser point cloud

Publications (1)

Publication Number Publication Date
US20240005674A1 true US20240005674A1 (en) 2024-01-04

Family

ID=83697148

Family Applications (1)

Application Number Title Priority Date Filing Date
US18/548,042 Pending US20240005674A1 (en) 2021-09-29 2022-01-06 Road edge recognition based on laser point cloud

Country Status (4)

Country Link
US (1) US20240005674A1 (de)
CN (1) CN115248447B (de)
DE (1) DE112022000949T5 (de)
WO (1) WO2023050638A1 (de)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117572451A (zh) * 2024-01-11 2024-02-20 广州市杜格科技有限公司 Traffic information collection method, device and medium based on multi-line laser radar

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116518992B (zh) * 2023-04-14 2023-09-08 之江实验室 Unmanned vehicle positioning method and device in degraded scenarios
CN116858195B (zh) * 2023-06-08 2024-04-02 中铁第四勘察设计院集团有限公司 Existing railway surveying method based on UAV lidar technology
CN116449335B (zh) * 2023-06-14 2023-09-01 上海木蚁机器人科技有限公司 Drivable area detection method and device, electronic device and storage medium
CN116772894B (zh) * 2023-08-23 2023-11-14 小米汽车科技有限公司 Positioning initialization method and device, electronic device, vehicle and storage medium
CN116977226B (zh) * 2023-09-22 2024-01-19 天津云圣智能科技有限责任公司 Point cloud data layered processing method and device, electronic device and storage medium

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5598694B2 (ja) * 2009-08-25 2014-10-01 スズキ株式会社 Target detection device and target detection method
US9285230B1 (en) * 2013-12-20 2016-03-15 Google Inc. Methods and systems for detecting road curbs
CN106004659B (zh) * 2016-08-03 2017-08-04 安徽工程大学 Vehicle surrounding environment perception system and control method thereof
CN107272019B (zh) * 2017-05-09 2020-06-05 深圳市速腾聚创科技有限公司 Road edge detection method based on lidar scanning
US10866101B2 (en) * 2017-06-13 2020-12-15 Tusimple, Inc. Sensor calibration and time system for ground truth static scene sparse flow generation
CN108589599B (zh) * 2018-04-28 2020-07-21 上海仙途智能科技有限公司 Unmanned sweeping system
CN108931786A (zh) * 2018-05-17 2018-12-04 北京智行者科技有限公司 Road edge detection device and method
CN109798903B (zh) * 2018-12-19 2021-03-30 广州文远知行科技有限公司 Method and device for obtaining road information from map data
CN109738910A (zh) * 2019-01-28 2019-05-10 重庆邮电大学 Road edge detection method based on three-dimensional lidar
CN109858460B (zh) * 2019-02-20 2022-06-10 重庆邮电大学 Lane line detection method based on three-dimensional lidar
CN110349192B (zh) * 2019-06-10 2021-07-13 西安交通大学 Tracking method of an online target tracking system based on three-dimensional laser point clouds
US11549815B2 (en) * 2019-06-28 2023-01-10 GM Cruise Holdings LLC. Map change detection
CN110376604B (zh) * 2019-08-09 2022-11-15 北京智行者科技股份有限公司 Road edge detection method based on single-line lidar
US11182612B2 (en) * 2019-10-28 2021-11-23 The Chinese University Of Hong Kong Systems and methods for place recognition based on 3D point cloud
CN111104908A (zh) * 2019-12-20 2020-05-05 北京三快在线科技有限公司 Road edge determination method and device
CN111401176B (zh) * 2020-03-09 2022-04-26 中振同辂(江苏)机器人有限公司 Road edge detection method based on multi-line lidar
CN111985322B (zh) * 2020-07-14 2024-02-06 西安理工大学 Road environment element perception method based on lidar
CN112037328A (zh) * 2020-09-02 2020-12-04 北京嘀嘀无限科技发展有限公司 Method, apparatus, device and storage medium for generating road edges in a map
CN112149572A (zh) * 2020-09-24 2020-12-29 知行汽车科技(苏州)有限公司 Road edge detection method, device and storage medium
CN112395956B (zh) * 2020-10-27 2023-06-02 湖南大学 Traversable area detection method and system for complex environments
CN112597839B (zh) * 2020-12-14 2022-07-08 上海宏景智驾信息科技有限公司 Road boundary detection method based on vehicle-mounted millimeter-wave radar
CN112650230B (zh) * 2020-12-15 2024-05-03 广东盈峰智能环卫科技有限公司 Adaptive edge-following operation method, device and robot based on single-line lidar
CN112964264B (zh) * 2021-02-07 2024-03-26 上海商汤临港智能科技有限公司 Road edge detection method and device, high-precision map, vehicle and storage medium


Also Published As

Publication number Publication date
WO2023050638A1 (zh) 2023-04-06
CN115248447B (zh) 2023-06-02
CN115248447A (zh) 2022-10-28
DE112022000949T5 (de) 2023-12-28

Similar Documents

Publication Publication Date Title
US20240005674A1 (en) Road edge recognition based on laser point cloud
CN107341819B (zh) Target tracking method and storage medium
US9870512B2 (en) Lidar-based classification of object movement
WO2021072696A1 (zh) Target detection and tracking method and system, movable platform, camera and medium
CN109521757B (zh) Static obstacle recognition method and device
JP2021523443A (ja) Association of lidar data and image data
US8620032B2 (en) System and method for traffic signal detection
JP5822255B2 (ja) Object identification device and program
CN116129376A (zh) Road edge detection method and device
CN110794406B (zh) Multi-source sensor data fusion system and method
CN110674705A (zh) Small obstacle detection method and device based on multi-line lidar
CN109871745A (zh) Method and system for recognizing empty parking spaces, and vehicle
CN115240149A (zh) Three-dimensional point cloud detection and recognition method and device, electronic device and storage medium
US20220171975A1 (en) Method for Determining a Semantic Free Space
JP7418476B2 (ja) Method and device for determining drivable area information
CN103714528B (zh) Object segmentation device and method
JP2019179495A (ja) Sensor processing system, ranging system, mobile body, sensor processing method, and program
CN115792945B (zh) Floating obstacle detection method and device, electronic device, and storage medium
CN114397671B (zh) Target heading angle smoothing method and device, and computer readable storage medium
US11948367B2 (en) Multi-object tracking for autonomous vehicles
Yoonseok et al. Railway track extraction from mobile laser scanning data
CN116263504A (zh) Vehicle recognition method and device, electronic device and computer readable storage medium
Bota et al. A framework for object detection, tracking and classification in urban traffic scenarios using stereovision
JP2006078261A (ja) Object detection device
RU2798739C1 (ru) Object tracking method at the recognition stage for unmanned vehicles

Legal Events

Date Code Title Description
AS Assignment

Owner name: SHANGHAI XIANTU INTELLIGENT TECHNOLOGY CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HUANG, CHAO;YE, YUE;REEL/FRAME:064710/0191

Effective date: 20230615

AS Assignment

Owner name: SHANGHAI XIANTU INTELLIGENT TECHNOLOGY CO., LTD., CHINA

Free format text: CHANGE OF ADDRESS OF ASSIGNEE;ASSIGNOR:SHANGHAI XIANTU INTELLIGENT TECHNOLOGY CO., LTD.;REEL/FRAME:064845/0589

Effective date: 20230830

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION