WO2021128777A1 - Method, apparatus, device, and storage medium for detecting a travelable region - Google Patents

Method, apparatus, device, and storage medium for detecting a travelable region

Info

Publication number
WO2021128777A1
Authority
WO
WIPO (PCT)
Prior art keywords
road surface
boundary line
surface boundary
point cloud
vehicle
Prior art date
Application number
PCT/CN2020/098286
Other languages
English (en)
Other versions
WO2021128777A8 (fr)
Inventor
Dixiao CUI
Zhihao Jiang
Shengliang XU
Jinwen GUO
Lei Wang
Original Assignee
Suzhou Zhijia Science & Technologies Co., Ltd.
Plusai, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Zhijia Science & Technologies Co., Ltd., Plusai, Inc. filed Critical Suzhou Zhijia Science & Technologies Co., Ltd.
Publication of WO2021128777A1 publication Critical patent/WO2021128777A1/fr
Publication of WO2021128777A8 publication Critical patent/WO2021128777A8/fr

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931 Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 Determining position

Definitions

  • the present disclosure relates to a method and apparatus for detecting a travelable region, a device, and a storage medium.
  • a primary task for autonomous vehicles is the accurate and robust sensing of a surrounding environment.
  • the autonomous vehicle may be able to detect a travelable region.
  • a travelable region is a region in which the autonomous vehicle can drive.
  • the travelable region may be a road.
  • a travelable region may have typical structured features, such as roadsides, guardrails, vegetation, etc., which mark the boundaries of the travelable region.
  • Detection of the travelable region in such an environment may comprise estimating the shape of a road surface and detecting boundaries on the left and right sides of the road.
  • a roadside model is generally assumed to be in the form of polynomials, polyline segments, etc., and predicted roadside points are obtained by using a roadside model.
  • the roadside may be directly detected by a sensor, for example, a camera, a laser, etc., to obtain detected roadside points.
  • the predicted roadside points and the detected roadside points may be merged, and road surface boundary lines may be constructed according to the merged roadside points. A region between the road surface boundary lines may then be defined as the travelable region.
  • a problem with this approach is that, in an actual environment, the shape of a road surface may be diverse and variable.
  • the road surface may be left-circular (turning to the left), right-circular, left-spiral (spiraling to the left, e.g. around a hill), or right-spiral, etc. As a result, the assumed roadside model is quite likely to be inconsistent with actual road conditions. Therefore, the roadside points predicted by the assumed roadside model are very likely to be inaccurate, resulting in unreliable construction of the road surface boundary line and low reliability in accurately detecting a travelable region.
  • Embodiments of the present disclosure provide a method and apparatus for detecting a travelable region, a device, and a storage medium.
  • a first aspect of the present disclosure provides a method for detecting a travelable region, comprising:
  • the three-dimensional point cloud data comprises three-dimensional coordinate information of multiple laser points within a surrounding sensing area of the vehicle
  • the obstacle region is a region occupied by an obstacle
  • Detecting the travelable region in this way may be more reliable than some prior art approaches which predict a travelable region by modeling a road surface using polynomials and polyline segments. Determining a candidate road surface boundary line from multiple reference tracks based on a displaced GPS track of the vehicle may prove to be more reliable than relying solely on a road model or sensor data. This approach may also allow the candidate road surface boundary line to be determined dynamically based on the GPS track and three-dimensional point cloud data. Correcting the candidate road surface boundary according to laser points in the three-dimensional point cloud data which are close to the candidate road surface boundary line may help to further improve the accuracy.
  • a second aspect of the present disclosure provides an apparatus for detecting a travelable region, comprising:
  • a point cloud data acquisition module configured to acquire three-dimensional point cloud data measured by a vehicle in a current frame, wherein the three-dimensional point cloud data comprises three-dimensional coordinate information of multiple laser points within a surrounding sensing area of the vehicle;
  • an obstacle region determination module configured to determine an obstacle region within the surrounding sensing area according to the three-dimensional point cloud data, wherein the obstacle region is a region occupied by an obstacle;
  • a reference track acquisition module configured to acquire a global positioning system (GPS) track corresponding to a current traveling position of the vehicle, and to perform displacement operations on the GPS track to obtain multiple reference tracks;
  • a candidate road surface boundary line determination module configured to determine a candidate road surface boundary line from the multiple reference tracks according to each reference track and the obstacle region;
  • a road surface boundary line correction module configured to correct the candidate road surface boundary line according to three-dimensional point cloud data of multiple laser points close to the candidate road surface boundary line, to obtain a road surface boundary line in the current frame.
  • a third aspect of the present disclosure provides a system (e.g. a computer device or terminal mountable in an autonomous vehicle) for detecting a travelable region, the system comprising a processor and a memory, wherein at least one instruction is stored in the memory, and the instruction is loaded and executed by the processor to implement the operations performed in the method for detecting a travelable region according to any of the above possible implementations.
  • a fourth aspect of the present disclosure provides a non-transitory computer-readable storage medium storing at least one instruction that is executable by a processor to implement the method of the first aspect of the present disclosure.
  • Fig. 1 is a schematic diagram of an implementation environment provided in an embodiment of the present disclosure
  • Fig. 2 is a schematic diagram of a system for detecting a travelable region provided in an embodiment of the present disclosure
  • Fig. 3 is a flowchart of a method for detecting a travelable region provided in an embodiment of the present disclosure
  • Fig. 4 is a flowchart of a method for detecting a travelable region provided in an embodiment of the present disclosure
  • Fig. 5 is a schematic flowchart of regression fitting by using a Gaussian process provided in an embodiment of the present disclosure
  • Fig. 6 is a block diagram of an apparatus for detecting a travelable region provided in an embodiment of the present disclosure.
  • Fig. 7 is a schematic structural diagram of a terminal provided in an embodiment of the present disclosure.
  • Fig. 1 is a schematic diagram of an implementation environment provided in an embodiment of the present disclosure.
  • the implementation environment comprises a global pose measurement system 101, a three-dimensional laser radar 102, and a terminal 103.
  • the global pose measurement system 101, the three-dimensional laser radar 102, and the terminal 103 are communicatively connected. For example, they may be connected by a data transmission device such as, but not limited to, a switch. In one example, a gigabit switch may be used to connect the global pose measurement system 101 with the three-dimensional laser radar 102 and the terminal 103.
  • the three-dimensional laser radar 102 may be installed directly above the roof of an autonomous vehicle, so that it can scan the surrounding environment of the autonomous vehicle.
  • the global pose measurement system 101 may be mounted to the autonomous vehicle.
  • the global pose measurement system 101 may use a GPS/inertial navigation system (INS) , which comprises an inertial measurement unit (IMU) , a GPS receiver and/or a GPS antenna, etc., for acquiring pose data of a vehicle body.
  • Pose data may for example include position data and/or orientation data of the autonomous vehicle.
  • the pose data may include speed data or velocity data (speed and direction of travel) of the autonomous vehicle.
  • the IMU and the GPS antenna can be installed on the vertical line at the center of the rear axle of the autonomous vehicle.
  • the terminal 103 may be installed inside the autonomous vehicle.
  • the terminal may for example be a computing device including one or more processors, a memory such as random access memory or read only memory, and a machine readable storage medium such as a hard drive or solid state drive etc.
  • the terminal may be used for data processing calculations and software and program operation.
  • the three-dimensional laser radar 102 acquires three-dimensional point cloud data and transmits the three-dimensional point cloud data to the terminal 103.
  • the global pose measurement system 101 acquires pose data of a vehicle body, and transmits the vehicle body pose data to the terminal 103.
  • the three-dimensional point cloud data and the vehicle body pose data may correspond to each other according to a time stamp.
  • the three-dimensional point cloud data may include a time stamp and may be matched with vehicle body pose data having a same time stamp.
  • the three-dimensional point cloud data may include multiple frames of three-dimensional point cloud data, with each frame corresponding to a respective point in time. Each frame may be associated with a time stamp.
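  • As an illustration only, the time-stamp matching described above might be sketched as follows; the pose record layout, the field name, and the tolerance value are assumptions, not taken from the disclosure:

```python
def match_pose(frame_stamp, poses, tolerance=0.05):
    """Pair a point-cloud frame with the pose record whose time stamp is
    closest, rejecting matches more than `tolerance` seconds apart.

    Each pose is assumed to be a dict with a "stamp" key (hypothetical)."""
    best = min(poses, key=lambda pose: abs(pose["stamp"] - frame_stamp))
    return best if abs(best["stamp"] - frame_stamp) <= tolerance else None
```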
  • the terminal 103 may store a high-precision map, a GPS track, etc.
  • a track may, for example, be a trajectory of the vehicle.
  • a GPS track may, for example, be a past or projected future trajectory of the autonomous vehicle based on the current GPS pose of the vehicle and/or based on previous positions of the autonomous vehicle.
  • the terminal 103 determines an obstacle region within the surrounding sensing area of the vehicle according to the three-dimensional point cloud data.
  • the surrounding sensing area of the vehicle may correspond to surroundings of the vehicle which may be sensed by a sensing system, such as the three-dimensional laser radar.
  • An obstacle region is a region of the three-dimensional point cloud data containing an obstacle.
  • the terminal 103 may perform displacement operations on the pre-stored GPS track to obtain multiple reference tracks, determine a candidate road surface boundary line from the multiple reference tracks according to each reference track and the obstacle region, and correct the candidate road surface boundary line according to three-dimensional point cloud data of multiple laser points close to the candidate road surface boundary line. In this way, a road surface boundary line in the current frame of the three-dimensional point cloud data may be obtained.
  • Fig. 2 is a schematic diagram of a system for detecting a travelable region provided in an embodiment of the present disclosure.
  • the system comprises: a data preprocessing module, a road surface boundary line splicing module, and a Gaussian process-based iterative optimization module.
  • the Gaussian process-based iterative optimization module is connected to the data preprocessing module and the road surface boundary line splicing module respectively.
  • the data preprocessing module is configured to perform motion compensation and coordinate system conversion on acquired three-dimensional point cloud data, acquire an obstacle raster map, generate a candidate road surface boundary line, and so on.
  • the three-dimensional point cloud data input in the data preprocessing module may for example, be rotary three-dimensional laser point cloud data or galvanometric three-dimensional laser point cloud data.
  • the road surface boundary line splicing module is configured to splice an acquired road surface boundary line in a current frame and a road surface boundary line in a previous frame.
  • the Gaussian process-based iterative optimization module is configured to perform regression fitting by using a Gaussian process on laser points close to the candidate road surface boundary line, and output the parameters of a road surface boundary point, a noise point, and a road surface boundary line; and the Gaussian process-based iterative optimization module is also configured to perform regression fitting by using a Gaussian process on road surface boundary points corresponding to the spliced road surface boundary line, and output optimized parameters of the road surface boundary line in the current frame.
  • candidate road surface boundary points, that is, the laser points close to the candidate road surface boundary line, need to be input;
  • the maximum number of iterations and the parameters of a clustering and segmentation algorithm also need to be input.
  • current training data is used to determine the parameters of the road surface boundary line
  • test data is divided into interior points and noise points by means of the clustering and segmentation algorithm. Differences between the newly added interior points in the test data and interior points in the training data are calculated, and differences between the newly added noise points in the test data and noise points in the training data are calculated until the number of iterations or iteration precision is satisfied.
  • the current training data is interior points determined during each iteration.
  • the interior points are road surface boundary points used to determine the parameters of the road surface boundary line
  • the test data is laser points introduced in each iteration.
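  • The iterative optimization described above might be sketched as follows. The squared-exponential kernel, the residual threshold used in place of a full clustering and segmentation algorithm, and the convergence test are illustrative assumptions; the disclosure does not fix these choices:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=2.0, variance=1.0):
    """Squared-exponential covariance between two 1-D coordinate arrays."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(x_train, y_train, x_query, noise=0.3):
    """Posterior mean of a zero-mean Gaussian process at x_query."""
    K = rbf_kernel(x_train, x_train) + noise ** 2 * np.eye(len(x_train))
    return rbf_kernel(x_query, x_train) @ np.linalg.solve(K, y_train)

def iterative_boundary_fit(x, y, threshold=1.0, max_iter=10, noise=0.3):
    """Iteratively split candidate boundary points into interior points
    (used to fit the boundary line) and noise points, refitting the
    Gaussian process on the interior points of each iteration."""
    interior = np.ones(len(x), dtype=bool)           # start from all points
    for _ in range(max_iter):
        mean = gp_predict(x[interior], y[interior], x, noise)
        new_interior = np.abs(y - mean) < threshold  # residual test
        if np.array_equal(new_interior, interior):   # classification stable
            break
        interior = new_interior
    return interior
```

Interior points returned by the final iteration would correspond to road surface boundary points, and the fitted mean to the boundary line.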
  • the system for detecting a travelable region further comprises a pose correction module which is connected to the road surface boundary line splicing module.
  • the pose correction module is configured to correct second pose data according to first pose data in the previous frame, the road surface boundary line in the previous frame, and the road surface boundary line in the current frame to obtain third pose data.
  • the road surface boundary line splicing module is configured to splice the acquired road surface boundary line in the current frame and the road surface boundary line in the previous frame according to the third pose data.
  • the pose data may comprise GPS data and IMU data, wherein the IMU data comprises motion information, wheel speed information, traveling direction information, etc. of the vehicle.
  • the pose correction module further comprises a Gaussian process-based observation model used for acquiring accurate pose data.
  • the system for detecting a travelable region further comprises a map creation and update module which is connected to the Gaussian process-based iterative optimization module and configured to merge new detection results of a partial road surface, comprising merging, into a map, the road surface boundary line output by the Gaussian process-based iterative optimization module.
  • Fig. 3 is a flowchart of a method for detecting a travelable region provided in an embodiment of the present disclosure. Referring to Fig. 3, this embodiment comprises:
  • Block 301 acquiring three-dimensional point cloud data measured by a vehicle in a current frame, wherein the three-dimensional point cloud data comprises three-dimensional coordinate information of multiple laser points within the surrounding sensing area of the vehicle.
  • Block 302 determining an obstacle region within the surrounding sensing area according to the three-dimensional point cloud data, wherein the obstacle region means a region occupied by an obstacle.
  • Block 303 acquiring a global positioning system (GPS) track corresponding to a current traveling position of the vehicle, and performing displacement operations on the GPS track to obtain multiple reference tracks.
  • Block 304 determining a candidate road surface boundary line from the multiple reference tracks according to each reference track and the obstacle region.
  • Block 305 correcting the candidate road surface boundary line according to three-dimensional point cloud data of multiple laser points close to the candidate road surface boundary line, to obtain a road surface boundary line in the current frame.
  • determining an obstacle region within the surrounding sensing area according to the three-dimensional point cloud data comprises:
  • performing displacement operations on the GPS track to obtain multiple reference tracks comprises:
  • the specified orientation is the left or right side of the vehicle; and the displacement operation is at least one of a translation operation and a rotation operation.
  • correcting the candidate road surface boundary line according to three-dimensional point cloud data of multiple laser points close to the candidate road surface boundary line, to obtain a road surface boundary line in the current frame comprises:
  • N is an integer greater than 1;
  • the method further comprises:
  • splicing the road surface boundary line in the current frame and the road surface boundary line detected by the vehicle in the previous frame to obtain a spliced road surface boundary line comprises:
  • before determining an obstacle region within the surrounding sensing area according to the three-dimensional point cloud data, the method further comprises:
  • the second pose data comprises GPS data and inertial measurement unit (IMU) data.
  • the three-dimensional point cloud data is three-dimensional point cloud data in a Cartesian coordinate system.
  • the three-dimensional point cloud data may be rotary three-dimensional laser point cloud data or galvanometric three-dimensional laser point cloud data, although the present disclosure is not limited thereto.
  • the three-dimensional point cloud data measured by the vehicle in the current frame is acquired, wherein the three-dimensional point cloud data comprises the three-dimensional coordinate information of the multiple laser points within the surrounding sensing area of the vehicle; the obstacle region is determined within the surrounding sensing area according to the three-dimensional point cloud data, wherein the obstacle region means a region occupied by an obstacle; the global positioning system (GPS) track corresponding to the current traveling position of the vehicle is acquired, and the displacement operations are performed on the GPS track to obtain the multiple reference tracks; the candidate road surface boundary line is determined from the multiple reference tracks according to each reference track and the obstacle region; and the candidate road surface boundary line is corrected according to the three-dimensional point cloud data of the multiple laser points close to the candidate road surface boundary line, to obtain the road surface boundary line in the current frame.
  • the displacement operations are performed on the GPS track to obtain the multiple reference tracks, and the candidate road surface boundary line is determined from the multiple reference tracks according to the reference tracks and the obstacle region, that is, the position and shape of the candidate road surface boundary line are dynamically determined according to the GPS track and real-time obstacle information on the road, thereby preliminarily determining the travelable region.
  • the candidate road surface boundary line is corrected according to the three-dimensional point cloud data close to the candidate road surface boundary line, to obtain the road surface boundary line, thereby ensuring the accuracy of detecting a travelable region.
  • Fig. 4 is a flowchart of a method for detecting a travelable region provided in an embodiment of the present disclosure. Referring to Fig. 4, this embodiment comprises:
  • Block 401 acquiring, by a terminal, three-dimensional point cloud data measured by a vehicle in a current frame, wherein the three-dimensional point cloud data comprises three-dimensional coordinate information of multiple laser points within the surrounding sensing area of the vehicle.
  • the three-dimensional point cloud data may be rotary three-dimensional laser point cloud data or galvanometric three-dimensional laser point cloud data, and certainly, may alternatively be other types of three-dimensional laser point cloud data, which is not limited in the present disclosure.
  • the three-dimensional point cloud data is three-dimensional point cloud data in a Cartesian coordinate system.
  • the method for acquiring, by a terminal, three-dimensional point cloud data measured by a vehicle in a current frame is: acquiring, by the terminal, original three-dimensional point cloud data in the current frame that is obtained through scanning by a three-dimensional laser radar, and converting the original three-dimensional point cloud data into Cartesian coordinates to obtain three-dimensional point cloud data in the Cartesian coordinate system.
  • the original data obtained through scanning by the three-dimensional laser radar mainly comprises distance and angle information.
  • the present disclosure converts the acquired original three-dimensional point cloud data into Cartesian coordinates.
  • the coordinate conversion method is: converting, according to the distance and angle information of a laser point in the original three-dimensional point cloud data, the laser point into a three-dimensional coordinate point (x, y, z) in the Cartesian coordinate system.
  • where H is the installation height of the three-dimensional laser radar above the horizontal plane of the Cartesian coordinate system, r is the distance between the laser point and the laser radar, ω is the vertical angle of the scanning line where the laser point is located, and α is the rotation angle of that scanning line.
  • the coordinate conversion of the three-dimensional point cloud data is based on rigid body transformation, that is, the relative position of the laser point in the three-dimensional point cloud data does not change.
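  • Under the definitions above, the conversion might be sketched as follows; the axis convention (y pointing forward, x to the right) and the default mounting height are assumptions, not taken from the disclosure:

```python
import math

def to_cartesian(r, omega, alpha, H=2.0):
    """Convert one laser return to a Cartesian point (x, y, z).

    r     -- distance between the laser point and the laser radar
    omega -- vertical angle of the scanning line (radians)
    alpha -- rotation angle of the scanning line (radians)
    H     -- installation height of the radar above the horizontal plane
    """
    x = r * math.cos(omega) * math.sin(alpha)
    y = r * math.cos(omega) * math.cos(alpha)
    z = r * math.sin(omega) + H
    return (x, y, z)
```

Because this is a fixed trigonometric mapping applied per point, relative positions between points are preserved, consistent with the rigid body transformation noted above.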
  • Block 402 performing, by the terminal, motion compensation on the three-dimensional point cloud data according to second pose data of the vehicle.
  • the second pose data is pose data of a vehicle body in the current frame, and comprises GPS data and inertial measurement unit (IMU) data.
  • the three-dimensional laser radar requires one scan cycle to complete the scanning of the three-dimensional point cloud data in one frame. During this cycle, the pose of the vehicle body changes relative to its pose at the start of the scan. Therefore, to accurately obtain the three-dimensional point cloud data in the current frame, it is necessary to perform motion compensation on the original three-dimensional point cloud data in the current frame obtained through scanning by the three-dimensional laser radar.
  • the implementation of performing, by the terminal, motion compensation on the three-dimensional point cloud data according to second pose data of the vehicle is: acquiring, by the terminal, the pose change of the vehicle body relative to that at the initial moment of the scanning cycle according to the second pose data of the vehicle, wherein the pose change is characterized by a rotation matrix R and a translation matrix T, and performing motion compensation on the original three-dimensional point cloud data according to R and T.
  • where P(t-1) is the original three-dimensional point cloud data in one frame, P(t) is the three-dimensional point cloud data in that frame obtained after the motion compensation, R is the rotation matrix, T is the translation matrix, s_R is the proportion of the three-dimensional point cloud which requires rotational motion compensation, and s_T is the proportion of the three-dimensional point cloud which requires translational motion compensation.
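  • A simplified sketch of such motion compensation, assuming the rotation is a pure yaw and that both the rotation and the translation are interpolated linearly by each point's fraction of the scan cycle, might look like:

```python
import numpy as np

def yaw_matrix(theta):
    """Rotation about the vertical axis by `theta` radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def motion_compensate(points, fractions, yaw, T):
    """Correct each raw point by the share of ego-motion accumulated
    when it was measured.

    points    -- (N, 3) array of raw laser points
    fractions -- per-point fraction of the scan cycle elapsed, in [0, 1]
    yaw       -- rotation accumulated over the whole cycle (radians)
    T         -- translation accumulated over the whole cycle
    """
    T = np.asarray(T, dtype=float)
    out = np.empty((len(points), 3))
    for i, (p, s) in enumerate(zip(points, fractions)):
        # apply the fractional rotation and fractional translation
        out[i] = yaw_matrix(s * yaw) @ np.asarray(p, dtype=float) + s * T
    return out
```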
  • Block 403 determining, by the terminal, an obstacle region within the surrounding sensing area according to the three-dimensional point cloud data, wherein the obstacle region is a region occupied by an obstacle.
  • the terminal divides the space within the sensing area into multiple regions, and determines the obstacle region from the multiple regions according to the height value of a laser point in each region.
  • the implementation of dividing, by the terminal, the space within the sensing area into multiple regions, and determining the obstacle region from the multiple regions according to the height value of a laser point in each region may be: setting, by the terminal, a raster resolution, and establishing a spatial raster map in the Cartesian coordinate system; computing the maximum height, the minimum height, and the height difference of the laser points in each raster within the sensing area; and marking a raster as an obstacle raster if its height difference, maximum height, and minimum height meet certain thresholds.
  • the raster resolution can be set according to requirements, which is not limited in the present disclosure.
  • the raster map is established in the Cartesian coordinate system, and whether a raster is an obstacle raster is determined according to the height difference, the maximum height, and the minimum height of the laser points in that raster, such that most obstacles on the road can be accurately detected.
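  • The raster test described above might be sketched as follows; the resolution and the threshold values are illustrative, not taken from the disclosure:

```python
from collections import defaultdict

def obstacle_rasters(points, resolution=0.2, min_height_diff=0.3, max_height=3.0):
    """Return the set of rasters (grid cells) judged to contain an obstacle.

    A raster is flagged when the height difference of its laser points is
    large enough and the maximum height stays below a ceiling."""
    cells = defaultdict(list)
    for x, y, z in points:
        # bin each laser point into its raster by floor division
        cells[(int(x // resolution), int(y // resolution))].append(z)
    flagged = set()
    for cell, heights in cells.items():
        if max(heights) - min(heights) >= min_height_diff and max(heights) <= max_height:
            flagged.add(cell)
    return flagged
```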
  • Block 404 acquiring, by the terminal, a GPS track corresponding to a current traveling position of the vehicle, and performing displacement operations on the GPS track to obtain multiple reference tracks.
  • the terminal stores a traveling GPS track of the vehicle, and the GPS track may have been obtained from previous traveling of the vehicle.
  • the method for acquiring, by the terminal, a GPS track corresponding to a current traveling position of the vehicle may be: finding, by the terminal according to the pre-stored traveling GPS track of the vehicle and the current traveling position of the vehicle, a segment of GPS track which is close to the current traveling position to serve as the GPS track corresponding to the current traveling position.
  • the length of the GPS track corresponding to the current traveling position that is acquired by the terminal can be set according to requirements, which is not limited in the present disclosure.
  • the implementation of performing, by the terminal, displacement operations on the GPS track to obtain multiple reference tracks is: performing multiple displacement operations on the GPS track in a specified orientation of a body of the vehicle respectively to obtain the multiple reference tracks, wherein the specified orientation is the left or right side of the vehicle; and the displacement operation is at least one of a translation operation and a rotation operation.
  • a first reference track is obtained by translating the GPS track to the left side of the vehicle body by a first distance and rotating it by a first angle
  • a second reference track is obtained by translating the GPS track to the right side of the vehicle body by a second distance and rotating it by a second angle
  • a third reference track is obtained by translating the GPS track to the left side of the vehicle body by a third distance and rotating it by a third angle
  • a fourth reference track is obtained by translating the GPS track to the right side of the vehicle body by a fourth distance and rotating it by a fourth angle; and so on.
  • the translation distance and the rotation angle of the GPS track can be set according to requirements, which is not limited in the present disclosure.
  • the number of reference tracks can be determined according to the number of lane lines. For example, taking a single lane as an example, the number of lane lines is 2, in which case the number of reference tracks can be greater than 2. In general, a larger number of reference tracks is more favorable to obtaining an accurate candidate road surface boundary line, and the present disclosure imposes no limitation on the number of reference tracks.
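The displacement operations of Block 404 can be illustrated with a small sketch. The lateral offsets, rotation angles, and the choice of the track's first point as the rotation pivot are all illustrative assumptions; the patent leaves these parameters open.

```python
import numpy as np

def make_reference_tracks(gps_track, offsets, angles_deg):
    """Generate reference tracks by translating and rotating a GPS track.

    `gps_track` is an (N, 2) array of x, y points in the vehicle frame
    (x forward, y left).  Each (offset, angle) pair yields one reference
    track: a lateral translation (positive = left of the vehicle body,
    negative = right) followed by a rotation about the track's first point.
    """
    tracks = []
    for dy, ang in zip(offsets, np.radians(angles_deg)):
        # Translate laterally to the specified side of the vehicle body.
        shifted = gps_track + np.array([0.0, dy])
        # Rotate the shifted track about its first point.
        c, s = np.cos(ang), np.sin(ang)
        rot = np.array([[c, -s], [s, c]])
        pivot = shifted[0]
        tracks.append((shifted - pivot) @ rot.T + pivot)
    return tracks
```

For a single lane, one might generate tracks offset roughly half a lane width to each side, e.g. `make_reference_tracks(track, [1.75, -1.75], [0.0, 0.0])`.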
  • Block 405 determining, by the terminal, a candidate road surface boundary line from the multiple reference tracks according to each reference track and the obstacle region.
  • the implementation of this block is: collecting, by the terminal, statistics on the length and direction of the obstacle rasters occupied on each reference track, and determining the reference track whose statistical result meets a threshold as the candidate road surface boundary line. Since the length and direction of the obstacle rasters along a road boundary line often have specific features, collecting these statistics for each reference track and selecting the reference track whose statistical result meets the threshold ensures the accuracy of the candidate road surface boundary line.
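A simplified sketch of the track-scoring idea follows. The patent's statistic also weighs the direction of the occupied rasters; this sketch keeps only the length (hit-count) statistic, and the `min_hits` threshold is an illustrative assumption.

```python
def pick_candidate_boundary(tracks, obstacle_cells, resolution=0.5,
                            min_hits=3):
    """Choose a candidate road-surface boundary line among reference tracks.

    Counts how many obstacle rasters each reference track passes through,
    and returns the track with the highest count, provided the count meets
    the threshold.  `tracks` is a list of tracks, each a list of (x, y)
    points; `obstacle_cells` is a set of (i, j) raster indices.
    """
    best, best_hits = None, -1
    for track in tracks:
        # Count track points that fall inside an obstacle raster.
        hits = sum(
            (int(x // resolution), int(y // resolution)) in obstacle_cells
            for x, y in track)
        if hits > best_hits:
            best, best_hits = track, hits
    return best if best_hits >= min_hits else None
```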
  • Block 406 correcting, by the terminal, the candidate road surface boundary line according to laser points in the three-dimensional point cloud data which are close to the candidate road surface boundary line, to obtain a road surface boundary line in the current frame.
  • the implementation of this block is: converting, by the terminal, the three-dimensional point cloud data of the multiple laser points into data in a polar coordinate system to obtain the radius, height, and angle of each laser point; dividing the multiple laser points into N regions according to the angles of the multiple laser points, where N is an integer greater than 1; and performing regression fitting by using a Gaussian process on laser points in each region according to the radius and height of each laser point respectively to obtain the road surface boundary line.
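The polar conversion and angular partitioning described above can be sketched as follows; the number of regions N is a free parameter in the patent.

```python
import numpy as np

def split_into_angular_regions(points, n_regions=8):
    """Convert laser points to polar form and group them into N angular sectors.

    `points` is an (N, 3) array of x, y, z.  Returns, per sector, the
    (radius, height) pairs on which the Gaussian-process regression of the
    patent would then be fit.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    radius = np.hypot(x, y)                               # polar radius
    angle = np.mod(np.arctan2(y, x), 2 * np.pi)           # angle in [0, 2*pi)
    # Assign each point to one of n_regions equal angular sectors.
    sector = (angle / (2 * np.pi / n_regions)).astype(int)
    sector = np.clip(sector, 0, n_regions - 1)
    return [np.column_stack((radius[sector == k], z[sector == k]))
            for k in range(n_regions)]
```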
  • Fig. 5 is a schematic diagram of a regression fitting process by using a Gaussian process.
  • the terminal selects laser points whose distance from the vehicle body and whose height both fall within certain ranges as the initial values of the Gaussian regression process, that is, these laser points are assumed to be the initial road surface boundary points.
  • a seed point, that is, a point near a road surface boundary point whose error with respect to the model falls within the error range, is determined as an expanded road surface boundary point, while points with larger errors are eliminated as noise points; fitting is then performed again, and so on, until all the laser points have been involved in the fitting.
  • the method for selecting, by the terminal, the initial value of the Gaussian regression process is: using, by the terminal, the density-based spatial clustering of applications with noise (DBSCAN) method to cluster and segment the laser points, and when the largest clustering result therein meets a threshold, using the laser points in this cluster as the initial road surface boundary points.
  • the thresholds to be met by the largest clustering result may be, for example, a minimum of 60 laser points and a minimum distance of 10 meters. The present disclosure imposes no specific limitation on the thresholds.
  • an accurate road surface boundary line can be obtained by performing regression fitting by using a Gaussian process on the laser points close to the candidate road surface boundary line.
  • the method for detecting a travelable region based on a Gaussian process that is proposed in the present disclosure can effectively improve the adaptability of the existing method for detecting a travelable region to a complex traffic flow and the robustness to road surface fluctuations, etc., and significantly improve the performance and precision of autonomous vehicles in detecting travelable regions under complex road conditions.
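The iterative Gaussian-process fitting loop described above can be sketched compactly in numpy. This is a rough illustration under stated assumptions: the RBF kernel length scale, noise level, and residual tolerance are placeholders, and the initial seed set is simply passed in (the patent initializes it from points near the vehicle body or via DBSCAN clustering).

```python
import numpy as np

def gp_fit_predict(r_train, z_train, r_query, length=5.0, noise=0.05):
    """Minimal 1-D Gaussian-process regression with an RBF kernel."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    # Kernel matrix with observation-noise regularization on the diagonal.
    K = k(r_train, r_train) + noise ** 2 * np.eye(len(r_train))
    alpha = np.linalg.solve(K, z_train)
    return k(r_query, r_train) @ alpha

def fit_boundary(radii, heights, seed_mask, tol=0.15, max_iter=10):
    """Iteratively expand seed points, mirroring the patent's fitting loop.

    Starts from an assumed set of initial road surface boundary points
    (`seed_mask`), fits the GP, adopts points whose residual is within
    `tol` as expanded boundary points, discards the rest as noise, and
    refits until the seed set stops changing.
    """
    seeds = seed_mask.copy()
    for _ in range(max_iter):
        pred = gp_fit_predict(radii[seeds], heights[seeds], radii)
        new_seeds = np.abs(heights - pred) < tol
        if np.array_equal(new_seeds, seeds):
            break
        seeds = new_seeds
    return seeds, gp_fit_predict(radii[seeds], heights[seeds], radii)
```

In this sketch a gross outlier (e.g. a point 2 m above the locally fitted surface) never enters the seed set, while smooth boundary points are progressively absorbed.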
  • Block 407 acquiring, by the terminal, a road surface boundary line detected by the vehicle in a previous frame, and splicing the road surface boundary line detected in the current frame and the road surface boundary line detected by the vehicle in the previous frame to obtain a spliced road surface boundary line.
  • the terminal acquires first pose data of the vehicle in the previous frame and second pose data thereof in the current frame; and corrects the second pose data according to the first pose data, the road surface boundary line detected by the vehicle in the previous frame, and the road surface boundary line detected by the vehicle in the current frame, to obtain third pose data of the vehicle.
  • the terminal splices, according to the third pose data, the road surface boundary line detected by the vehicle in the previous frame and the road surface boundary line detected by the vehicle in the current frame, to obtain the spliced road surface boundary line.
  • an accurate splicing result of the road surface boundary lines can be obtained by splicing, according to the third pose data obtained after correction, the road surface boundary line detected by the vehicle in the previous frame and the road surface boundary line detected by the vehicle in the current frame.
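The splicing step can be illustrated with a 2-D rigid-transform sketch. Computing the corrected (third) pose from the two detected boundary lines is outside this sketch, so `pose_delta` is assumed to already contain the corrected inter-frame motion of the vehicle.

```python
import numpy as np

def splice_boundaries(prev_pts, curr_pts, pose_delta):
    """Splice the previous frame's boundary points into the current frame.

    `pose_delta` = (dx, dy, dtheta) is the (already corrected) motion of the
    vehicle between the previous and current frames.  The previous frame's
    boundary points are mapped into the current vehicle frame and then
    concatenated with the current frame's boundary points.
    """
    dx, dy, dth = pose_delta
    c, s = np.cos(dth), np.sin(dth)
    # Inverse rigid transform: express the old points in the new pose's frame.
    shifted = prev_pts - np.array([dx, dy])
    rot = np.array([[c, s], [-s, c]])    # rotation by -dtheta
    return np.vstack((shifted @ rot.T, curr_pts))
```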
  • Block 408 performing, by the terminal, regression fitting by using a Gaussian process on road surface boundary points corresponding to the spliced road surface boundary line to obtain an optimized road surface boundary line in the current frame.
  • the block of performing, by the terminal, regression fitting by using a Gaussian process on road surface boundary points corresponding to the spliced road surface boundary line is the same as the method for performing, by the terminal, Gaussian regression fitting on the laser points, which will not be repeatedly described herein.
  • the regression fitting by using a Gaussian process in the present disclosure is actually applied in two processes.
  • the first process is to perform regression fitting by using a Gaussian process on the laser points in the three-dimensional point cloud of a single frame
  • the second process is to splice the road surface boundary line in a current frame and the road surface boundary line in a previous frame after the road surface boundary line in the current frame is obtained, and perform regression fitting by using a Gaussian process on the road surface boundary points corresponding to the spliced road surface boundary line.
  • the obtained road surface boundary line is continuously optimized by means of the regression fitting by using a Gaussian process in the two processes, so as to obtain a reliable travelable region.
  • the terminal can obtain a road surface model composed of multiple sensing subgraphs by continuously splicing and merging the road surface boundary lines in multiple frames, wherein the curve generated by a Gaussian process in each sensing subgraph represents left and right road surface boundary lines.
  • the three-dimensional point cloud data measured by the vehicle in the current frame is acquired, wherein the three-dimensional point cloud data comprises the three-dimensional coordinate information of the multiple laser points within the surrounding sensing area of the vehicle; the obstacle region is determined within the surrounding sensing area according to the three-dimensional point cloud data, wherein the obstacle region means a region occupied by an obstacle; the global positioning system (GPS) track corresponding to the current traveling position of the vehicle is acquired, and the displacement operations are performed on the GPS track to obtain the multiple reference tracks; the candidate road surface boundary line is determined from the multiple reference tracks according to each reference track and the obstacle region; and the candidate road surface boundary line is corrected according to the three-dimensional point cloud data of the multiple laser points close to the candidate road surface boundary line, to obtain the road surface boundary line in the current frame.
  • the displacement operations are performed on the GPS track to obtain the multiple reference tracks, and the candidate road surface boundary line is determined from the multiple reference tracks according to the reference tracks and the obstacle region, that is, the position and shape of the candidate road surface boundary line are dynamically determined according to the GPS track and real-time obstacle information on the road, thereby preliminarily determining the travelable region.
  • the candidate road surface boundary line is corrected according to the three-dimensional point cloud data close to the candidate road surface boundary line, to obtain the road surface boundary line, thereby ensuring the accuracy of detecting a travelable region.
  • Fig. 6 is a block diagram of an apparatus for detecting a travelable region provided in an embodiment of the present disclosure. Referring to Fig. 6, this embodiment comprises:
  • a point cloud data acquisition module 601 configured to acquire three-dimensional point cloud data measured by a vehicle in a current frame, wherein the three-dimensional point cloud data comprises three-dimensional coordinate information of multiple laser points within the surrounding sensing area of the vehicle;
  • an obstacle region determination module 602 configured to determine an obstacle region within the surrounding sensing area according to the three-dimensional point cloud data, wherein the obstacle region means a region occupied by an obstacle;
  • a reference track acquisition module 603 configured to acquire a global positioning system (GPS) track corresponding to a current traveling position of the vehicle, and perform displacement operations on the GPS track to obtain multiple reference tracks;
  • a candidate road surface boundary line determination module 604 configured to determine a candidate road surface boundary line from the multiple reference tracks according to each reference track and the obstacle region;
  • a road surface boundary line correction module 605 configured to correct the candidate road surface boundary line according to three-dimensional point cloud data of multiple laser points close to the candidate road surface boundary line, to obtain a road surface boundary line in the current frame.
  • the obstacle region determination module 602 is configured to divide the space within the sensing area into multiple regions; and determine the obstacle region from the multiple regions according to the height value of a laser point in each region.
  • the reference track acquisition module 603 is configured to perform multiple displacement operations on the GPS track in a specified orientation of a body of the vehicle to obtain the multiple reference tracks, wherein the specified orientation is the left or right side of the vehicle; and the displacement operation is at least one of a translation operation and a rotation operation.
  • the road surface boundary line correction module 605 is configured to convert the three-dimensional point cloud data of the multiple laser points into data in a polar coordinate system to obtain the radius, height, and angle of each laser point; divide the multiple laser points into N regions according to the angles of the multiple laser points, where N is an integer greater than 1; and perform regression fitting by using a Gaussian process on laser points in each region according to the radius and height of each laser point respectively to obtain the road surface boundary line.
  • the road surface boundary line correction module 605 is further configured to acquire a road surface boundary line detected by the vehicle in a previous frame; splice the road surface boundary line detected in the current frame and the road surface boundary line detected by the vehicle in the previous frame to obtain a spliced road surface boundary line; and perform regression fitting by using a Gaussian process on road surface boundary points corresponding to the spliced road surface boundary line to obtain an optimized road surface boundary line in the current frame.
  • the road surface boundary line correction module 605 is configured to acquire first pose data of the vehicle in the previous frame and second pose data thereof in the current frame; correct the second pose data according to the first pose data, the road surface boundary line detected by the vehicle in the previous frame, and the road surface boundary line detected by the vehicle in the current frame, to obtain third pose data of the vehicle; and splice, according to the third pose data, the road surface boundary line detected by the vehicle in the previous frame and the road surface boundary line detected by the vehicle in the current frame, to obtain the spliced road surface boundary line.
  • a motion compensation module is configured to perform motion compensation on the three-dimensional point cloud data according to the second pose data of the vehicle, wherein the second pose data comprises GPS data and inertial measurement unit (IMU) data.
  • the three-dimensional point cloud data is three-dimensional point cloud data in a Cartesian coordinate system
  • the three-dimensional point cloud data is rotary three-dimensional laser point cloud data or galvanometric three-dimensional laser point cloud data, although the present disclosure is not limited thereto.
  • the three-dimensional point cloud data measured by the vehicle in the current frame is acquired, wherein the three-dimensional point cloud data comprises the three-dimensional coordinate information of the multiple laser points within the surrounding sensing area of the vehicle; the obstacle region is determined within the surrounding sensing area according to the three-dimensional point cloud data, wherein the obstacle region means a region occupied by an obstacle; the global positioning system (GPS) track corresponding to the current traveling position of the vehicle is acquired, and the displacement operations are performed on the GPS track to obtain the multiple reference tracks; the candidate road surface boundary line is determined from the multiple reference tracks according to each reference track and the obstacle region; and the candidate road surface boundary line is corrected according to the three-dimensional point cloud data of the multiple laser points close to the candidate road surface boundary line, to obtain the road surface boundary line in the current frame.
  • the displacement operations are performed on the GPS track to obtain the multiple reference tracks, and the candidate road surface boundary line is determined from the multiple reference tracks according to the reference tracks and the obstacle region, that is, the position and shape of the candidate road surface boundary line are dynamically determined according to the GPS track and real-time obstacle information on the road, thereby preliminarily determining the travelable region.
  • the candidate road surface boundary line is corrected according to the three-dimensional point cloud data close to the candidate road surface boundary line, to obtain the road surface boundary line, thereby ensuring the accuracy of detecting a travelable region.
  • when the apparatus for detecting a travelable region provided in the above embodiment detects a travelable region, the division of the functional modules described above is merely used as an example for illustration. In practical applications, the above functions can be allocated to different functional modules according to requirements, that is, the internal structure of the device is divided into different functional modules to complete all or some of the functions described above.
  • the apparatus for detecting a travelable region provided in the above embodiment and the embodiments of the method for detecting a travelable region belong to the same concept. Reference is made to the method embodiments for the specific implementation process thereof, which will not be repeated herein.
  • the modules may be implemented by one or more processors of the terminal.
  • Fig. 7 shows a structural block diagram of a terminal 700 provided in an exemplary embodiment of the present disclosure.
  • the terminal 700 may be: a smart phone, an industrial personal computer, a tablet computer, a Moving Picture Experts Group Audio Layer III (MP3) player, a Moving Picture Experts Group Audio Layer IV (MP4) player, a notebook computer, or a desktop computer.
  • the terminal 700 may also be referred to as other names such as user equipment, a portable terminal, a laptop terminal, and a desktop terminal.
  • the terminal 700 comprises: a processor 701 and a memory 702.
  • the processor 701 may comprise one or more processing cores, such as a quad-core processor and an octa-core processor.
  • the processor 701 may be implemented in the form of at least one of the following hardware: a digital signal processor (DSP) , a field-programmable gate array (FPGA) , and a programmable logic array (PLA) .
  • the processor 701 may also comprise a main processor and a coprocessor.
  • the main processor is a processor for processing data in a wake-up state, and is also referred to as a central processing unit (CPU); and the coprocessor is a low-power processor for processing data in a standby state.
  • the processor 701 may be integrated with a graphics processing unit (GPU) , and the GPU is configured to render and draw content that needs to be displayed on a display screen.
  • the processor 701 may further comprise an artificial intelligence (AI) processor configured to process computing operations related to machine learning.
  • the memory 702 may comprise one or more computer-readable storage media which may be non-transitory.
  • the memory 702 may further comprise a high-speed random access memory and a non-volatile memory, such as one or more magnetic disk storage devices and flash storage devices.
  • the non-transitory computer-readable storage media in the memory 702 are configured to store at least one instruction which is executed by the processor 701 to implement the method for detecting a travelable region provided in the method embodiments of the present application.
  • the terminal 700 further optionally comprises: a peripheral device interface 703 and at least one peripheral device.
  • the processor 701, the memory 702, and the peripheral device interface 703 may be connected by means of a bus or a signal line.
  • Each peripheral device may be connected to the peripheral device interface 703 by means of a bus, a signal line, or a circuit board.
  • the peripheral device comprises: at least one of a radio frequency circuit 704, a touch display screen 705, a camera component 706, an audio circuit 707, a positioning component 708, and a power supply 709.
  • the peripheral device interface 703 may be configured to connect at least one peripheral device related to input/output (I/O) to the processor 701 and the memory 702.
  • the processor 701, the memory 702, and the peripheral device interface 703 are integrated on the same chip or circuit board.
  • any one or two of the processor 701, the memory 702, and the peripheral device interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • the radio frequency circuit 704 is configured to receive and transmit radio frequency (RF) signals which are also referred to as electromagnetic signals.
  • the radio frequency circuit 704 communicates with a communication network and other communication devices through electromagnetic signals.
  • the radio frequency circuit 704 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal.
  • the radio frequency circuit 704 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, etc.
  • the radio frequency circuit 704 can communicate with other terminals by means of at least one wireless communication protocol.
  • the wireless communication protocol includes but is not limited to: a metropolitan area network, various generations of mobile communication networks (2G, 3G, 4G, and 5G) , a wireless local area network, and/or a Wireless Fidelity (WiFi) network.
  • the radio frequency circuit 704 may further comprise near field communication (NFC) related circuits, which is not limited in the present application.
  • the display screen 705 is configured to display a user interface (UI) .
  • the UI may comprise graphics, text, icons, videos, and any combination thereof.
  • the display screen 705 also has the capability of collecting a touch signal on or above the surface of the display screen 705.
  • the touch signal can be input to the processor 701 as a control signal for processing.
  • the display screen 705 can be further configured to provide a virtual button and/or a virtual keyboard which are/is also referred to as a soft button and/or a soft keyboard.
  • the display screen 705 can even be set in a non-rectangular irregular shape, that is, a special-shaped screen.
  • the display screen 705 can be manufactured by using materials such as a liquid crystal display (LCD) and an organic light-emitting diode (OLED) .
  • the camera component 706 is configured to collect an image or a video.
  • the camera component 706 comprises a front camera and a rear camera.
  • the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the back surface of the terminal.
  • there are at least two rear cameras, each of which is any one of a main camera, a depth-of-field camera, a wide-angle camera, and a long-focus camera, so as to implement a bokeh function through the combination of the main camera and the depth-of-field camera, implement panoramic shooting and virtual reality (VR) shooting functions through the combination of the main camera and the wide-angle camera, or implement other shooting functions through other combinations.
  • the camera component 706 may further comprise a flash.
  • the flash may be a single-color temperature flash or a dual-color temperature flash.
  • the dual-color temperature flash means the combination of a warm light flash and a cold light flash, which can be used for light compensation at different color temperatures.
  • the audio circuit 707 may comprise a microphone and a loudspeaker.
  • the microphone is configured to collect acoustic waves of a user and an environment, and convert the acoustic waves into electrical signals and input them to the processor 701 for processing, or input them to the radio frequency circuit 704 to implement speech communication.
  • the microphone can also be an array microphone or an omnidirectional collection microphone.
  • the loudspeaker is configured to convert electrical signals from the processor 701 or the radio frequency circuit 704 into acoustic waves.
  • the loudspeaker may be a traditional thin film loudspeaker or a piezoelectric ceramic loudspeaker.
  • when the loudspeaker is a piezoelectric ceramic loudspeaker, it can not only convert electrical signals into acoustic waves audible to humans, but also convert electrical signals into acoustic waves inaudible to humans for ranging purposes.
  • the audio circuit 707 may further comprise an earphone jack.
  • the positioning component 708 is configured to position a current geographical location of the terminal 700 to implement navigation or a location-based service (LBS) .
  • the positioning component 708 may be a positioning component based on the global positioning system (GPS) of the United States, the Beidou System of China, the GLONASS system of Russia, or the Galileo system of the European Union.
  • the power supply 709 is configured to supply power to various components in the terminal 700.
  • the power supply 709 may be an alternating current power supply, a direct current power supply, a disposable battery, or a rechargeable battery.
  • the rechargeable battery may support wired charging or wireless charging.
  • the rechargeable battery can be further configured to support a fast charging technology.
  • the terminal 700 further comprises one or more sensors 710.
  • the one or more sensors 710 include but are not limited to: an acceleration sensor 711, a gyroscope sensor 712, a pressure sensor 713, a fingerprint sensor 714, an optical sensor 715, and a proximity sensor 716.
  • the acceleration sensor 711 can detect a value of acceleration on three coordinate axes of a coordinate system established with the terminal 700.
  • the acceleration sensor 711 can be configured to detect components of gravitational acceleration on the three coordinate axes.
  • the processor 701 may control the touch display screen 705 to display the user interface in a landscape view or a portrait view according to a gravity acceleration signal collected by the acceleration sensor 711.
  • the acceleration sensor 711 can be further configured to collect game data or motion data of a user.
  • the gyroscope sensor 712 can detect a body direction and a rotation angle of the terminal 700, and the gyroscope sensor 712 can cooperate with the acceleration sensor 711 to collect a 3D action of the user on the terminal 700.
  • the processor 701 can implement the following functions according to data collected by the gyroscope sensor 712: action sensing (for example, changing the UI based on the user’s tilt operation) , image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 713 may be disposed on a side frame of the terminal 700 and/or a lower layer of the touch display screen 705.
  • the processor 701 performs left-right hand recognition or a shortcut operation according to the grip signal collected by the pressure sensor 713.
  • the processor 701 controls the operable control on the UI interface according to the user’s pressure operation on the touch display screen 705.
  • the operable control comprises at least one of a button control, a scroll bar control, an icon control, and a menu control.
  • the fingerprint sensor 714 is configured to collect the user’s fingerprint, and the processor 701 recognizes the user’s identity based on the fingerprint collected by the fingerprint sensor 714, or the fingerprint sensor 714 recognizes the user’s identity according to the collected fingerprint.
  • the processor 701 authorizes the user to perform related sensitive operations, comprising unlocking the screen, viewing encrypted information, downloading software, making payment, changing settings, etc.
  • the fingerprint sensor 714 may be disposed on the front surface, back surface, or side surface of the terminal 700. When a physical button or a manufacturer’s logo is provided on the terminal 700, the fingerprint sensor 714 may be integrated with the physical button or the manufacturer’s logo.
  • the optical sensor 715 is configured to collect an ambient light intensity.
  • the processor 701 can control display luminance of the touch display screen 705 according to the ambient light intensity collected by the optical sensor 715. Specifically, when the ambient light intensity is high, the display luminance of the touch display screen 705 is turned up; and when the ambient light intensity is low, the display luminance of the touch display screen 705 is turned down.
  • the processor 701 can also dynamically adjust shooting parameters of the camera component 706 according to the ambient light intensity collected by the optical sensor 715.
  • the proximity sensor 716 which is also referred to as a distance sensor, is usually disposed on the front panel of the terminal 700.
  • the proximity sensor 716 is configured to collect the distance between the user and the front surface of the terminal 700.
  • when the proximity sensor 716 detects that the distance between the user and the front surface of the terminal 700 gradually decreases, the processor 701 controls the touch display screen 705 to switch from a screen-on state to a screen-off state.
  • when the proximity sensor 716 detects that the distance between the user and the front surface of the terminal 700 gradually increases, the processor 701 controls the touch display screen 705 to switch from a screen-off state to a screen-on state.
  • Fig. 7 does not constitute a limitation on the terminal 700, and may comprise components more or fewer than those shown, a combination of some components, or components arranged in different manners.
  • a computer-readable storage medium, for example, a memory comprising instructions, is further provided, and the instructions can be executed by a processor in a terminal to complete the method for detecting a travelable region in the above embodiments.
  • the computer-readable storage medium may be a ROM, a random access memory (RAM) , a CD-ROM, a magnetic tape, a floppy disk, or an optical data storage device.
  • all or some blocks for implementing the above embodiments may be completed by means of hardware, or may be completed by a program instructing related hardware.
  • the program may be stored in a computer-readable storage medium.
  • the above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Traffic Control Systems (AREA)

Abstract

Disclosed is a method for detecting a travelable region. The method comprises: acquiring three-dimensional point cloud data measured by a vehicle in a current frame, the three-dimensional point cloud data comprising three-dimensional coordinate information of multiple laser points within the surrounding sensing area of the vehicle (301); determining an obstacle region within the surrounding sensing area according to the three-dimensional point cloud data, the obstacle region representing a region occupied by an obstacle (302); acquiring a global positioning system (GPS) track corresponding to a current traveling position of the vehicle, and performing displacement operations on the GPS track to obtain multiple reference tracks (303); determining a candidate road surface boundary line from the multiple reference tracks according to each reference track and the obstacle region (304); and correcting the candidate road surface boundary line according to three-dimensional point cloud data of multiple laser points close to the candidate road surface boundary line, to obtain a road surface boundary line in the current frame.
PCT/CN2020/098286 2019-12-23 2020-06-24 Method, apparatus, device, and storage medium for detecting a drivable area WO2021128777A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911334670.6A 2019-12-23 2019-12-23 Drivable area detection method, apparatus, device, and storage medium
CN201911334670.6 2019-12-23

Publications (2)

Publication Number Publication Date
WO2021128777A1 (fr) 2021-07-01
WO2021128777A8 WO2021128777A8 (fr) 2021-07-29

Family

ID=70036100

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/098286 WO2021128777A1 (fr) 2019-12-23 2020-06-24 Method, apparatus, device, and storage medium for detecting a drivable area

Country Status (2)

Country Link
CN (1) CN110967024A (fr)
WO (1) WO2021128777A1 (fr)


Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110967024A (zh) 2019-12-23 2020-04-07 苏州智加科技有限公司 Drivable area detection method, apparatus, device, and storage medium
CN111603772B (zh) * 2020-05-20 2023-03-28 腾讯科技(深圳)有限公司 Region detection method, apparatus, device, and storage medium
CN111829514B (zh) * 2020-06-29 2023-08-18 燕山大学 Road surface condition preview method for integrated vehicle chassis control
CN112184736B (zh) * 2020-10-10 2022-11-11 南开大学 Multi-plane extraction method based on Euclidean clustering
CN112417965B (zh) * 2020-10-21 2021-09-14 湖北亿咖通科技有限公司 Laser point cloud processing method, electronic apparatus, and storage medium
CN112598061B (zh) * 2020-12-23 2023-05-26 中铁工程装备集团有限公司 Clustering and grading method for tunnel surrounding rock
CN112947460A (zh) * 2021-03-01 2021-06-11 北京玄马知能科技有限公司 Automatic route preset planning method for an inspection robot based on a laser point cloud model
CN113536883B (zh) * 2021-03-23 2023-05-02 长沙智能驾驶研究院有限公司 Obstacle detection method, vehicle, device, and computer storage medium
CN113253293B (zh) * 2021-06-03 2021-09-21 中国人民解放军国防科技大学 Method for eliminating laser point cloud distortion, and computer-readable storage medium
CN113665500B (zh) * 2021-09-03 2022-07-19 南昌智能新能源汽车研究院 Environment perception system and method for an all-weather unmanned transport vehicle
CN113835103A (zh) * 2021-09-22 2021-12-24 深圳市镭神智能系统有限公司 Track obstacle detection method, system, and computer device
CN115205501B (zh) * 2022-08-10 2023-05-23 小米汽车科技有限公司 Road surface condition display method, apparatus, device, and medium
CN116311095B (zh) * 2023-03-16 2024-01-02 广州市衡正工程质量检测有限公司 Road surface detection method based on region division, computer device, and storage medium


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102004008868A1 (de) * 2004-02-20 2005-09-08 Daimlerchrysler Ag Method for detecting a traffic lane for a motor vehicle
CN106780524B (zh) * 2016-11-11 2020-03-06 厦门大学 Automatic road boundary extraction method from three-dimensional point clouds
CN110008941B (zh) * 2019-06-05 2020-01-17 长沙智能驾驶研究院有限公司 Drivable area detection method and apparatus, computer device, and storage medium
CN110598541B (zh) * 2019-08-05 2021-07-23 香港理工大学深圳研究院 Method and device for extracting road edge information

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170104287A (ko) * 2016-03-07 2017-09-15 한국전자통신연구원 Drivable area recognition apparatus and drivable area recognition method thereof
CN108021844A (zh) * 2016-10-31 2018-05-11 高德软件有限公司 Road edge recognition method and apparatus
CN107169464A (zh) * 2017-05-25 2017-09-15 中国农业科学院农业资源与农业区划研究所 Road boundary detection method based on laser point clouds
CN110033482A (zh) * 2018-01-11 2019-07-19 沈阳美行科技有限公司 Curb recognition method and apparatus based on laser point clouds
CN108460416A (zh) * 2018-02-28 2018-08-28 武汉理工大学 Structured-road drivable area extraction method based on three-dimensional LiDAR
CN108984599A (zh) * 2018-06-01 2018-12-11 青岛秀山移动测量有限公司 Vehicle-mounted laser point cloud road surface extraction method using the driving trajectory as a reference
CN110008921A (zh) * 2019-04-12 2019-07-12 北京百度网讯科技有限公司 Road boundary generation method, apparatus, electronic device, and storage medium
CN110967024A (zh) * 2019-12-23 2020-04-07 苏州智加科技有限公司 Drivable area detection method, apparatus, device, and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI, Huibin; SHI, Yun; ZHANG, Wenli; XIANG, Mingtao; LIU, Hanhai: "Road boundary detection based on vehicle LiDAR", ENGINEERING OF SURVEYING AND MAPPING, vol. 27, no. 12, 1 December 2018 (2018-12-01), pages 37-43, XP055827162, ISSN: 1006-7949, DOI: 10.19349/j.cnki.issn1006-7949.2018.12.008 *
SUN, Pengpeng; ZHAO, Xiangmo; XU, Zhigang; WANG, Runmin; MIN, Haigen: "A 3D LiDAR Data-Based Dedicated Road Boundary Detection Algorithm for Autonomous Vehicles", IEEE ACCESS, IEEE, USA, vol. 7, 28 February 2019 (2019-02-28), USA, pages 29623-29638, XP011715637, DOI: 10.1109/ACCESS.2019.2902170 *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114115231A (zh) * 2021-10-25 2022-03-01 南京工业大学 Point cloud correction method and system for the spatial pose of a mobile robot in hospital scenarios
CN114115231B (zh) * 2021-10-25 2023-07-25 南京工业大学 Point cloud correction method and system for the spatial pose of a mobile robot
CN114035584A (zh) * 2021-11-18 2022-02-11 上海擎朗智能科技有限公司 Method for a robot to detect obstacles, robot, and robot system
CN114035584B (zh) * 2021-11-18 2024-03-29 上海擎朗智能科技有限公司 Method for a robot to detect obstacles, robot, and robot system
CN114119903A (zh) * 2021-11-25 2022-03-01 武汉理工大学 Dynamic traffic simulation method based on a real-scene three-dimensional city
CN114119903B (zh) * 2021-11-25 2024-04-09 武汉理工大学 Dynamic traffic simulation method based on a real-scene three-dimensional city
CN114624726A (zh) * 2022-03-17 2022-06-14 南通探维光电科技有限公司 Wheel axle recognition system and wheel axle recognition method
CN115062422A (zh) * 2022-04-29 2022-09-16 厦门大学 Modeling method and system for predicting the bucket fill rate of a loader
CN115994940B (zh) * 2022-11-09 2023-12-08 荣耀终端有限公司 Crease degree test method for a foldable-screen device, device, and storage medium
CN115994940A (zh) * 2022-11-09 2023-04-21 荣耀终端有限公司 Crease degree test method for a foldable-screen device, device, and storage medium
CN115793652A (zh) * 2022-11-30 2023-03-14 上海木蚁机器人科技有限公司 Driving control method and apparatus, and electronic device
CN115793652B (zh) * 2022-11-30 2023-07-14 上海木蚁机器人科技有限公司 Driving control method and apparatus, and electronic device
CN116449335A (zh) * 2023-06-14 2023-07-18 上海木蚁机器人科技有限公司 Drivable area detection method and apparatus, electronic device, and storage medium
CN116449335B (zh) * 2023-06-14 2023-09-01 上海木蚁机器人科技有限公司 Drivable area detection method and apparatus, electronic device, and storage medium
CN116524029B (zh) * 2023-06-30 2023-12-01 长沙智能驾驶研究院有限公司 Obstacle detection method, apparatus, device, and storage medium for a rail vehicle
CN116524029A (zh) * 2023-06-30 2023-08-01 长沙智能驾驶研究院有限公司 Obstacle detection method, apparatus, device, and storage medium for a rail vehicle

Also Published As

Publication number Publication date
WO2021128777A8 (fr) 2021-07-29
CN110967024A (zh) 2020-04-07

Similar Documents

Publication Publication Date Title
WO2021128777A1 Method, apparatus, device, and storage medium for detecting a drivable area
CN110967011B Positioning method, apparatus, device, and storage medium
US11205282B2 (en) Relocalization method and apparatus in camera pose tracking process and storage medium
CN111126182B Lane line detection method and apparatus, electronic device, and storage medium
US11276183B2 (en) Relocalization method and apparatus in camera pose tracking process, device, and storage medium
US11978219B2 (en) Method and device for determining motion information of image feature point, and task performing method and device
CN110986930B Device positioning method and apparatus, electronic device, and storage medium
CN110148178B Camera positioning method and apparatus, terminal, and storage medium
CN110095128B Method, apparatus, device, and storage medium for acquiring missing road information
CN111104893B Target detection method and apparatus, computer device, and storage medium
CN111126276B Lane line detection method and apparatus, computer device, and storage medium
CN111256676B Mobile robot positioning method and apparatus, and computer-readable storage medium
CN111192341A Method and apparatus for generating a high-precision map, autonomous driving device, and storage medium
CN112406707B Vehicle early-warning method, vehicle, apparatus, terminal, and storage medium
CN110570465B Real-time localization and map construction method and apparatus, and computer-readable storage medium
CN112150560A Method and apparatus for determining a vanishing point, and computer storage medium
CN110633336B Method and apparatus for determining a laser data search range, and storage medium
CN114299468A Method, apparatus, terminal, storage medium, and product for detecting a lane merging entrance
CN111928861B Map construction method and apparatus
CN111538009B Radar point labeling method and apparatus
CN111444749B Method and apparatus for recognizing road surface guide signs, and storage medium
CN111664860B Positioning method and apparatus, smart device, and storage medium
CN114623836A Vehicle pose determination method and apparatus, and vehicle
CN116258810A Road surface element rendering method, apparatus, device, and storage medium
WO2019233299A1 Mapping method and apparatus, and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20904962

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20904962

Country of ref document: EP

Kind code of ref document: A1