WO2021051736A1 - Method and apparatus for determining a sensing area, storage medium, and vehicle - Google Patents

Method and apparatus for determining a sensing area, storage medium, and vehicle

Info

Publication number
WO2021051736A1
WO2021051736A1 PCT/CN2020/073693
Authority
WO
WIPO (PCT)
Prior art keywords
area
point cloud
drivable
drivable area
vehicle
Prior art date
Application number
PCT/CN2020/073693
Other languages
English (en)
Chinese (zh)
Inventor
牟加俊
Original Assignee
深圳市速腾聚创科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市速腾聚创科技有限公司 filed Critical 深圳市速腾聚创科技有限公司
Priority to PCT/CN2020/073693 priority Critical patent/WO2021051736A1/fr
Priority to CN202080005490.8A priority patent/CN112789521B/zh
Publication of WO2021051736A1 publication Critical patent/WO2021051736A1/fr

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02: Systems using the reflection of electromagnetic waves other than radio waves

Definitions

  • This application relates to the field of automatic driving, and in particular to a method, device, storage medium, and vehicle for determining a sensing area.
  • in the field of autonomous driving, the perception system is an extremely important component, and the scenes faced by autonomous vehicles are diverse, for example: intersections with complex scenes containing various pedestrians and cars, and highways with simple scenes containing only high-speed motor vehicles.
  • Self-driving vehicles perform environmental perception in different scenarios to determine autonomous driving strategies for those scenarios, such as emergency braking, automatic acceleration, or lane centering strategies.
  • self-driving vehicles currently use a preset sensing area for environmental perception; the size of the sensing area is related to the detection range of equipment such as lidar, which leads to poor flexibility.
  • the method for determining a sensing area, the device, the storage medium, and the vehicle provided in the embodiments of this application can adaptively adjust the range of the sensing area according to the range of the drivable area, which allows computing resources to be concentrated on processing the sampling points within the sensing area and reduces the amount of computation for environmental perception.
  • the technical solution is as follows:
  • an embodiment of the present application provides a method for determining a perception area, and the method includes:
  • a point cloud collected by a point cloud collection device is acquired; a drivable area of the vehicle is constructed according to the point cloud; and the range of the sensing area of the vehicle is adjusted according to the range of the drivable area; wherein the area of the sensing area is larger than the area of the drivable area.
  • constructing the drivable area of the vehicle according to the point cloud includes:
  • a random sample consensus (RANSAC) algorithm is used to identify the point cloud to obtain ground points and obstacle points;
  • the drivable area of the vehicle is constructed according to the ground point.
  • the adjusting the range of the sensing area of the vehicle according to the range of the drivable area includes:
  • the geometric center of the drivable area is determined, the minimum circumscribed rectangle of the drivable area is determined according to the geometric center, and the minimum circumscribed rectangle is used as the perception area.
  • the acquiring of the point cloud collected by the point cloud collecting device includes:
  • the method further includes:
  • the complexity of the drivable area is calculated based on the area of the drivable area, the maximum length of the drivable area in the traveling direction, and the maximum length of the drivable area in the vertical direction of the traveling direction.
  • calculating the complexity of the drivable area according to the area of the drivable area, the maximum length of the drivable area in the traveling direction, and the maximum length of the drivable area in the direction perpendicular to the traveling direction includes:
  • S represents the area of the drivable area
  • L represents the maximum length of the drivable area in the direction of travel
  • W represents the maximum length of the drivable area in the vertical direction of the travel direction
  • w1 represents the weight of S
  • w2 represents the weight of L
  • w3 represents the weight of W
  • w1, w2, and w3 are integers greater than 0.
  • the method further includes:
  • when it is recognized that the vehicle is driving on an urban road, the parameter value of w1 is the maximum value among the three weights;
  • when it is recognized that the vehicle is driving on an expressway, the parameter value of w2 is the maximum value among the three weights;
  • when it is recognized that the vehicle is driving on a national highway, the parameter value of w3 is the maximum value among the three weights.
  • an embodiment of the present application provides a device for determining a perception area, and the device for determining includes:
  • an acquiring unit, configured to acquire the point cloud collected by the point cloud collecting device;
  • a construction unit, configured to construct a drivable area of the vehicle according to the point cloud;
  • the adjusting unit is configured to adjust the range of the sensing area of the vehicle according to the range of the drivable area; wherein the area of the sensing area is larger than the area of the drivable area.
  • an embodiment of the present application provides a computer storage medium, the computer storage medium stores a plurality of instructions, and the instructions are suitable for being loaded by a processor and executing the above method steps.
  • an embodiment of the present application provides a device for determining a sensing area, which may include a processor and a memory; the memory stores a computer program, and the computer program is suitable for being loaded and executed by the processor to perform the above method steps.
  • an embodiment of the present application provides a vehicle that includes one or more point cloud collection devices and the aforementioned sensing area determination device.
  • the point cloud collection device may be a lidar, a camera, or another type of point cloud collection device, and one or more point cloud collection devices are installed on the vehicle.
  • the drivable area is constructed according to the point cloud collected by the point cloud collection device, and the range of the sensing area is adjusted according to the range of the drivable area; the area of the sensing area is larger than the area of the drivable area and is positively correlated with it, increasing as the area of the drivable area increases and decreasing as it decreases.
  • the embodiment of the present application can therefore adaptively adjust the size of the perception area according to the size of the drivable area, which makes it convenient to focus computation on the key sampling points in the point cloud and reduces the amount of computation for environmental perception.
  • Fig. 1 is a schematic diagram of a point cloud provided by an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a method for determining a sensing area provided by an embodiment of the present application
  • Figure 3 is a schematic diagram of a drivable area provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a sensing area provided by an embodiment of the present application.
  • Fig. 5 is a schematic diagram of a sensing area provided by an embodiment of the present application.
  • Fig. 6 is a schematic structural diagram of a device for determining a sensing area provided by the present application.
  • FIG. 7 is a schematic diagram of another structure of a device for determining a sensing area provided by the present application.
  • FIG. 1 shows a schematic diagram of a point cloud that can be applied to an embodiment of the present application.
  • the point cloud acquisition device scans at a position 10 to obtain the point cloud shown in FIG. 1.
  • the point cloud is a collection of sampling points of the spatial distribution and surface characteristics of the object collected by the point cloud acquisition device in the same spatial coordinate system.
  • the point cloud acquisition device may be a radar or a camera.
  • when the point cloud acquisition device is a lidar, the point cloud is a laser point cloud.
  • the parameters of each sampling point in the point cloud include three-dimensional coordinates and laser reflection intensity; the laser reflection intensity is related to the object's surface material, roughness, and laser incident angle, as well as to the emission energy and emission angle of the emitted laser.
  • when the point cloud acquisition device is a camera, the parameters of each sampling point in the point cloud include three-dimensional coordinates and color information.
  • the point cloud can also be a fusion of the sampling points collected by a lidar and a camera; in this case, the parameters of each sampling point in the point cloud include three-dimensional coordinates, laser reflection intensity, and color information; for example, the color information may be represented using RGB.
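  • As a non-limiting illustration of how such fused sampling points might be organized (the field names and data layout below are assumptions for illustration, not part of the patent), a minimal Python sketch:

    import numpy as np

    # Illustrative layout for a fused point cloud: each sampling point carries
    # 3-D coordinates, a laser reflection intensity, and an RGB colour.
    point_dtype = np.dtype([
        ("x", np.float32), ("y", np.float32), ("z", np.float32),
        ("intensity", np.float32),        # laser reflection intensity
        ("rgb", np.uint8, (3,)),          # colour sampled from the camera image
    ])

    def make_fused_cloud(xyz, intensity, rgb):
        """Pack per-point attributes from the lidar and the camera into one array."""
        cloud = np.empty(len(xyz), dtype=point_dtype)
        cloud["x"], cloud["y"], cloud["z"] = xyz[:, 0], xyz[:, 1], xyz[:, 2]
        cloud["intensity"] = intensity
        cloud["rgb"] = rgb
        return cloud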
  • FIG. 2 provides a schematic flowchart of a method for determining a sensing area according to an embodiment of the present application.
  • the method of the embodiment of the present application may include the following steps:
  • S201 Acquire a point cloud collected by a point cloud collection device.
  • the point cloud collection device is set on the vehicle, and the number of point cloud collection devices can be one or more.
  • the point cloud acquisition device can be a laser radar.
  • the laser radar emits an outgoing laser.
  • when the outgoing laser encounters an object (such as another vehicle, a pedestrian, a flower bed, etc.), it is reflected to form a reflected laser.
  • the laser radar obtains the sampling point according to the reflected laser.
  • the radar sends multiple outgoing laser beams multiple times within a preset time period to obtain the point cloud of the object.
  • when the number of lidars is one, the lidar is set on the top of the vehicle and can scan over a preset angle in the horizontal direction and over a preset angle in the vertical direction; when the number of lidars is more than one, the lidars can be installed at the front, rear, left, right, and top of the vehicle to avoid scanning blind spots around the vehicle.
  • the point cloud collection device may collect the point cloud periodically, and the determining device obtains the point cloud periodically collected by the point cloud collection device.
  • when the lidar is set on the top of the vehicle, with a horizontal field of view of 360 degrees and a vertical field of view of 45 degrees, the lidar periodically scans objects around the vehicle to generate a point cloud; scanning all around the vehicle is achieved through the lidar, reducing the scanning blind area around the vehicle.
  • the number of lidars is 5, which are respectively arranged on the top, the front, the rear, and the two sides of the vehicle.
  • the lidar placed on the top of the vehicle has a horizontal field of view of 360 degrees and a vertical field of view of 45 degrees; the lidar installed at the front of the vehicle has a horizontal field of view of 180 degrees and a vertical field of view of 45 degrees; the lidar installed at the rear of the vehicle has a horizontal field of view of 180 degrees and a vertical field of view of 45 degrees; and the lidars installed on the two sides of the vehicle each have a horizontal field of view of 180 degrees and a vertical field of view of 45 degrees.
  • the 5 lidars periodically scan objects around the vehicle to generate point clouds, avoiding scanning blind spots around the vehicle.
  • S202 Construct a drivable area of the vehicle according to the point cloud.
  • the drivable area refers to the area where the vehicle can drive normally.
  • the drivable area is generally a closed area, and the drivable area includes multiple sampling points.
  • the boundary of the drivable area is formed by the sampling points of the obstacles closest to the vehicle.
  • the sampling points corresponding to the drivable area are a subset of the point cloud; the sampling points in the drivable area may be the sampling points collected by the point cloud collecting device from the ground, with the sampling points corresponding to obstacles on the ground excluded.
  • Figure 3 is a schematic diagram of a drivable area.
  • the determining device of the embodiment of the present application constructs a drivable area 11 based on the point cloud in Fig. 1.
  • the drivable area 11 is a closed area, and the drivable area 11 contains multiple sampling points.
  • the shape of the drivable area 11 is irregular
  • the boundary of the drivable area 11 is irregular
  • the sampling points on the boundary are the sampling points on the obstacle closest to the vehicle.
  • constructing the drivable area of the vehicle according to the point cloud includes:
  • a random sample consensus (RANSAC) algorithm is used to identify the point cloud to obtain ground points and obstacle points;
  • the drivable area of the vehicle is constructed according to the ground point.
  • the sampling points belonging to the plane in the point cloud are ground points, and then the ground points in the point cloud are eliminated to obtain obstacle points.
  • a polar coordinate grid is established with the origin of the vehicle coordinate system as its origin; the XOY plane is divided into multiple grid cells at a preset angular resolution, and the obstacle points are placed into the grid cells; for each grid cell, the distance r from each obstacle point to the origin of the vehicle coordinate system is calculated and the obstacle point with the smallest r is extracted; the obstacle points with the smallest r extracted from the grid cells are projected onto the XOY plane to obtain the drivable area of the vehicle.
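  • A minimal sketch of the polar-grid step described above, assuming the ground/obstacle split has already been produced (for example by the RANSAC step); the angular resolution and the function name are illustrative:

    import numpy as np

    def drivable_boundary(obstacle_points, angle_res_deg=1.0):
        """For each angular bin around the vehicle coordinate origin, keep the
        obstacle point with the smallest distance r and project it onto the
        XOY plane; the resulting ring of points bounds the drivable area."""
        angles = np.degrees(np.arctan2(obstacle_points[:, 1], obstacle_points[:, 0])) % 360.0
        ranges = np.hypot(obstacle_points[:, 0], obstacle_points[:, 1])
        bins = (angles // angle_res_deg).astype(int)
        boundary = []
        for b in np.unique(bins):
            idx = np.where(bins == b)[0]
            nearest = idx[np.argmin(ranges[idx])]
            boundary.append(obstacle_points[nearest, :2])   # projection onto XOY
        return np.asarray(boundary)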
  • before using the random sample consensus (RANSAC) algorithm to identify the point cloud and obtain the ground points and obstacle points, the method further includes: acquiring the height of the point cloud collection device and the installation angle of the point cloud collection device, and obtaining the height value of each sampling point in the point cloud according to the height of the point cloud collection device and the installation angle of the point cloud collection device.
  • the method further includes: removing the sampling points in the point cloud whose height is greater than the height of the point cloud collection device.
  • the height of the point cloud collection device represents the height of the point cloud collection device above the ground, and sampling points in the point cloud that lie above the point cloud collection device are excluded. For example, if the height of the point cloud collection device is 3 m, eliminating the sampling points with a height greater than 3 m in the point cloud can further reduce the amount of computation for constructing a drivable area. For example, in some highway scenes that mainly focus on obstacles near the ground ahead, this method can be selected to further reduce the amount of computation.
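  • A minimal sketch of this pre-filtering step, assuming point heights are expressed in the same frame as the mounting height of the point cloud collection device (the 3 m value follows the example above and is otherwise arbitrary):

    import numpy as np

    def drop_points_above_sensor(points, sensor_height_m=3.0):
        """Discard sampling points whose height exceeds the mounting height of the
        point cloud collection device, so the drivable-area construction has
        fewer sampling points to process."""
        return points[points[:, 2] <= sensor_height_m]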
  • constructing the drivable area of the vehicle according to the point cloud includes:
  • At least three sampling points are randomly selected from the point cloud to form the plane to be recognized; wherein the heights of the at least three sampling points are all less than the preset height;
  • if the number of sampling points contained in the plane to be identified is greater than a preset number, the plane to be identified is used as the drivable area.
  • the point cloud includes multiple sampling points, and the sampling points have spatial location information.
  • the location information of the sampling points is represented by (x, y, z)
  • the x-axis and the y-axis constitute a horizontal plane
  • the z-axis is perpendicular to the horizontal plane.
  • obstacles around the vehicle, such as a vehicle ahead, a flower bed, a pedestrian, or a stone in the middle of the road, have a certain height; the height of a sampling point can be indicated by the size of its z value, and the preset height should be less than the maximum height of the sampling points in the point cloud.
  • the determining device first determines the boundary information of the road on which the vehicle is currently traveling and determines the sampling points belonging to the road from the point cloud according to the boundary information of the road. The determining device then selects, from the sampling points of the road, the sampling points whose z value is less than the preset height, randomly selects at least three sampling points from the selected sampling points, uses the at least three sampling points to form the plane to be recognized, and counts the number of sampling points contained in the plane to be recognized.
  • if the number of sampling points contained in the plane to be identified is greater than the preset number, the plane to be identified is used as the drivable area; if the number of sampling points contained in the plane to be identified is less than or equal to the preset number, at least three sampling points are selected again from the above-screened sampling points and the above operation is repeated until a plane containing more than the preset number of sampling points is found in the point cloud.
  • the preset number is related to the number of sampling points contained in the point cloud collected in S201.
  • the preset number is calculated from the number of sampling points in the point cloud and a preset ratio value; for example, the preset ratio value is 30%.
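  • A minimal sketch of the plane-search loop described above; the preset height, distance threshold, and iteration cap are illustrative values, and the 30% ratio follows the example above:

    import numpy as np

    def find_drivable_plane(points, preset_height=0.3, ratio=0.30,
                            dist_thresh=0.15, max_iters=200, seed=None):
        """Repeatedly pick three low sampling points, form a candidate plane,
        count the sampling points lying on it, and accept the plane as the
        drivable area once it contains more than the preset number of points."""
        rng = np.random.default_rng(seed)
        low = points[points[:, 2] < preset_height]        # candidate ground points
        preset_number = int(ratio * len(points))           # e.g. 30% of the cloud
        for _ in range(max_iters):
            p1, p2, p3 = low[rng.choice(len(low), 3, replace=False)]
            normal = np.cross(p2 - p1, p3 - p1)
            if np.linalg.norm(normal) < 1e-8:
                continue                                    # collinear sample, retry
            normal = normal / np.linalg.norm(normal)
            on_plane = np.abs((points - p1) @ normal) < dist_thresh
            if on_plane.sum() > preset_number:
                return points[on_plane]                     # sampling points of the drivable area
        return None                                         # no plane met the preset number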
  • constructing the drivable area of the vehicle according to the point cloud includes:
  • the drivable area is determined according to the ground point.
  • the ground point represents the sampling point in the point cloud that represents the ground on which the vehicle is traveling.
  • the ground points in the point cloud can be identified using a ground point detection algorithm, and the drivable area can be constructed based on the ground points. For example, a clustering algorithm, a convolutional neural network, or a linear fitting process can be used to identify the ground points in the point cloud; the sampling points other than the ground points correspond to obstacles, which the vehicle needs to avoid while driving.
  • the determining device evaluates the complexity of the drivable area, and selects different environment perception algorithms for environment perception according to different complexity.
  • when the complexity is high, an environment perception algorithm with high computational complexity is chosen for environment perception, improving the accuracy of environment perception; when the complexity of the drivable area is low, an environment perception algorithm with lower computational complexity is chosen for environment perception, reducing the amount of computation for environment perception. For example, when a vehicle is driving on a city road, the complexity of the drivable area will be higher; when the vehicle is driving on an expressway, the complexity of the drivable area will be lower.
  • the method for evaluating the complexity of the drivable area includes:
  • the complexity of the drivable area is calculated according to the area of the drivable area, the maximum length of the drivable area in the traveling direction, and the maximum length of the drivable area in the vertical direction of the traveling direction.
  • the area of the drivable area represents the size of the two-dimensional space occupied by the drivable area
  • the travel direction represents the forward direction of the vehicle
  • the maximum length of the drivable area in the travel direction represents the maximum span of the drivable area in the travel direction.
  • the maximum length of the drivable area in the direction perpendicular to the direction of travel indicates the maximum span of the drivable area perpendicular to the direction of travel.
  • the driving direction of the vehicle is the horizontal direction, the maximum length of the drivable area 11 in the horizontal direction is L, and the maximum length of the drivable area 11 in the vertical direction is W.
  • the complexity of the drivable area 11 is positively correlated with the area of the drivable area 11, the maximum length of the drivable area 11 in the direction of travel, and the maximum length of the drivable area 11 in the direction perpendicular to the direction of travel; that is, the larger the area of the drivable area 11 and the larger these lengths, the higher the complexity.
  • the determining device evaluates the complexity of the drivable area according to the following formula:
  • S represents the area of the drivable area
  • L represents the maximum length of the drivable area in the direction of travel
  • W represents the maximum length of the drivable area in the vertical direction of the travel direction
  • w1 represents the weight of S
  • w2 represents the weight of L
  • w3 represents the weight of W
  • the size of w1, w2, and w3 can be determined according to actual needs.
  • w1, w2, and w3 can be fixed values, or they can be adjusted adaptively according to the scene in which the vehicle is located; for example, when the determining device recognizes that the vehicle is driving in a city road scene, it sets w1 > w2 > w3; when it recognizes that the vehicle is driving on an expressway, it sets w2 > w3 > w1; when it recognizes that the vehicle is traveling on a national highway, it sets w3 > w2 > w1.
  • the difference between the area of the perception area and the area of the drivable area can be related to the scene in which the vehicle is located; if the scene is an urban road scene, the perception area is increased horizontally and the value of w2 is larger, which better anticipates changes of the roads to the left and right; if the scene is a highway scene, the perception area is increased longitudinally and the value of w3 is larger, which better handles situations with high longitudinal speed.
  • the value of each weight is also related to the scene complexity of the drivable area. When the scene complexity is high, the value of each weight is larger than when the scene complexity is low.
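  • The exact formula is not reproduced in this text; as a hedged illustration, the sketch below assumes the complexity is a weighted sum w1*S + w2*L + w3*W, with the weight orderings taken from the scene rules above (urban road w1 > w2 > w3, expressway w2 > w3 > w1, national highway w3 > w2 > w1); the numeric weight values themselves are illustrative:

    def drivable_area_complexity(S, L, W, scene="urban"):
        """Evaluate drivable-area complexity as an assumed weighted sum of the
        area S, the maximum length L along the travel direction, and the maximum
        length W perpendicular to the travel direction."""
        weights = {
            "urban":      (3, 2, 1),   # w1 largest
            "expressway": (1, 3, 2),   # w2 largest
            "national":   (1, 2, 3),   # w3 largest
        }
        w1, w2, w3 = weights[scene]
        return w1 * S + w2 * L + w3 * W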
  • S203 Adjust the range of the vehicle's perception area according to the range of the drivable area.
  • the perception area is the area in which the vehicle performs environmental perception in order to execute different automatic driving strategies according to the environmental perception results; for example, the vehicle performs emergency braking, automatic parking, automatic acceleration, or lane centering according to the environmental perception results in the perception area.
  • the sampling points corresponding to the sensing area are a subset of the point cloud.
  • the area of the sensing area is larger than the area of the drivable area.
  • the area of the sensing area is positively correlated with the area of the drivable area: the larger the area of the drivable area, the larger the area of the sensing area; the smaller the area of the drivable area, the smaller the area of the sensing area.
  • the shape of the sensing area is a polygon that covers the drivable area.
  • the polygon can be a quadrilateral, pentagon, or hexagon, etc.
  • the difference between the area of the sensing area and the area of the drivable area is a preset value; the preset value can be determined according to actual needs and is not limited in the embodiments of the present application.
  • the determining device determines the range of the sensing area 12 according to the range of the drivable area 11.
  • the sensing area 12 is quadrilateral in shape, the area of the sensing area 12 is larger than the area of the drivable area 11, and the sensing area 12 covers the drivable area 11.
  • the shape of the sensing area is a rectangle, and the rectangle is the smallest circumscribed rectangle of the drivable area
  • the determining device determines the geometric center of the drivable area, determines the minimum circumscribed rectangle of the drivable area according to the geometric center, and uses the minimum circumscribed rectangle as the sensing area.
  • the determining device determines the minimum circumscribed rectangle according to the range of the drivable area 11 and uses the minimum circumscribed rectangle as the sensing area.
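  • A minimal sketch of the rectangular case of S203; for simplicity it uses the smallest axis-aligned rectangle centred on the geometric centre of the drivable area (the patent's minimum circumscribed rectangle may instead be an oriented rectangle), and the helper names are illustrative:

    import numpy as np

    def sensing_area_rectangle(drivable_xy):
        """Return the corners (lo, hi) of the smallest axis-aligned rectangle that
        is centred on the geometric centre of the drivable area and covers it."""
        center = drivable_xy.mean(axis=0)                 # geometric centre
        half = np.maximum(drivable_xy.max(axis=0) - center,
                          center - drivable_xy.min(axis=0))
        return center - half, center + half

    def points_in_sensing_area(cloud_xy, lo, hi):
        """Keep only sampling points inside the sensing area so that subsequent
        environment perception runs on fewer points."""
        mask = np.all((cloud_xy >= lo) & (cloud_xy <= hi), axis=1)
        return cloud_xy[mask]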
  • the embodiment of this application determines the relative positional relationship between the fields of view among multiple fields of view, and quantitatively measures the echo intensities of the overlapping area between two adjacent fields of view in those two fields of view.
  • the error of the echo intensity between the two adjacent fields of view is then determined; according to a reference field of view among the multiple fields of view and the errors between adjacent fields of view, the correction coefficient of the field of view to be corrected is determined, and the echo intensity of the field of view to be corrected is corrected based on the correction coefficient, so as to correct the reflectivity of the detected object. This addresses the problem in the related art that hardware differences between the multiple channels of a lidar cause the same object to be detected with different reflectivities, so that the object cannot be accurately identified. By correcting the multiple channels to achieve consistency among them, the lidar can accurately reflect the outline of an object when detecting it with multiple channels, reducing the difficulty of object identification and improving detection accuracy.
  • FIG. 6 shows a schematic structural diagram of an apparatus for determining a sensing area provided by an exemplary embodiment of the present application, which is referred to as the determining apparatus 6 hereinafter.
  • the determining device 6 can be implemented as all or a part of the vehicle through software, hardware or a combination of both.
  • the determining device 6 includes: an acquisition unit 601, a construction unit 602, and an adjustment unit 603.
  • the obtaining unit 601 is configured to obtain the point cloud collected by the point cloud collecting device;
  • the constructing unit 602 is configured to construct a drivable area of the vehicle according to the point cloud;
  • the adjustment unit 603 is configured to adjust the range of the sensing area of the vehicle according to the range of the drivable area; wherein the area of the sensing area is larger than the area of the drivable area.
  • the construction unit 602 is specifically configured to:
  • a random sample consensus (RANSAC) algorithm is used to identify the point cloud to obtain ground points and obstacle points;
  • the drivable area of the vehicle is constructed according to the ground point.
  • the adjustment unit 603 is specifically configured to:
  • the geometric center of the drivable area is determined, the minimum circumscribed rectangle of the drivable area is determined according to the geometric center, and the minimum circumscribed rectangle is used as the perception area.
  • the acquiring unit 601 is specifically configured to:
  • the determining device 6 further includes:
  • a calculation unit, configured to calculate the complexity of the drivable area based on the area of the drivable area, the maximum length of the drivable area in the traveling direction, and the maximum length of the drivable area in the direction perpendicular to the traveling direction.
  • calculating the complexity of the drivable area according to the area of the drivable area, the maximum length of the drivable area in the traveling direction, and the maximum length of the drivable area in the direction perpendicular to the traveling direction includes:
  • S represents the area of the drivable area
  • L represents the maximum length of the drivable area in the direction of travel
  • W represents the maximum length of the drivable area in the vertical direction of the travel direction
  • w1 represents the weight of S
  • w2 represents the weight of L
  • w3 represents the weight of W
  • w1, w2, and w3 are integers greater than 0.
  • the determining device 6 further includes:
  • the weight adjustment unit is used to adjust the parameter values of w1, w2, and w3 according to the scene where the vehicle is located.
  • the determining device 6 provided in the foregoing embodiment executes the method for determining the sensing area
  • only the division of the foregoing functional modules is used as an example for illustration; in practical applications, the foregoing functions can be allocated to different functional modules as needed, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above.
  • the device for determining the sensing area provided in the foregoing embodiment and the method for determining the sensing area belong to the same concept. For the implementation process, please refer to the method embodiment for details, and will not be repeated here.
  • the embodiment of the present application also provides a computer storage medium.
  • the computer storage medium may store a plurality of instructions, and the instructions are suitable for being loaded by a processor and executing the method steps of the embodiments shown in FIGS. 2 to 5 above.
  • the specific execution process please refer to the specific description of the embodiment shown in FIG. 2 to FIG. 5, which will not be repeated here.
  • the present application also provides a computer program product that stores at least one instruction, and the at least one instruction is loaded and executed by the processor to implement the method for determining the sensing area as described in each of the above embodiments.
  • FIG. 7 provides a schematic structural diagram of an apparatus for determining a sensing area according to an embodiment of the present application.
  • the apparatus is hereinafter referred to as the determining device 7.
  • the determining device 7 may include: at least one processor 701, a memory 702, and at least one communication bus 703.
  • the communication bus 703 is used to implement connection and communication between these components.
  • the processor 701 may include one or more processing cores.
  • the processor 701 uses various interfaces and lines to connect the various parts of the entire determining device 7, and executes the various functions of the determining device 7 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 702 and by calling the data stored in the memory 702.
  • the processor 701 may be implemented in hardware in the form of at least one of digital signal processing (DSP), a field-programmable gate array (FPGA), and a programmable logic array (PLA).
  • DSP Digital Signal Processing
  • FPGA Field-Programmable Gate Array
  • PLA Programmable Logic Array
  • the processor 701 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like.
  • the CPU mainly processes the operating system, user interface, and application programs; the GPU is used to render and draw the content that needs to be displayed on the display; the modem is used to process wireless communication. It is understandable that the above-mentioned modem may not be integrated into the processor 701, but may be implemented by a chip alone.
  • the memory 702 may include random access memory (Random Access Memory, RAM), and may also include read-only memory (Read-Only Memory, ROM).
  • the memory 702 includes a non-transitory computer-readable storage medium.
  • the memory 702 may be used to store instructions, programs, codes, code sets, or instruction sets.
  • the memory 702 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for at least one function (such as a touch function, a sound playback function, an image playback function, etc.), instructions used to implement the foregoing method embodiments, and so on; the data storage area may store the data and the like involved in the foregoing method embodiments.
  • the memory 702 may also be at least one storage device located far away from the foregoing processor 701.
  • the processor 701 may be used to call a computer program stored in the memory 702, and specifically execute the following steps:
  • a point cloud collected by a point cloud collection device is acquired; a drivable area of the vehicle is constructed according to the point cloud; and the range of the sensing area of the vehicle is adjusted according to the range of the drivable area; wherein the area of the sensing area is larger than the area of the drivable area.
  • the execution by the processor 701 to construct the drivable area of the vehicle according to the point cloud includes:
  • a random sample consensus (RANSAC) algorithm is used to identify the point cloud to obtain ground points and obstacle points;
  • the drivable area of the vehicle is constructed according to the ground point.
  • the processor 701 executing the adjustment of the range of the vehicle's perception area according to the range of the drivable area includes:
  • the geometric center of the drivable area is determined, the minimum circumscribed rectangle of the drivable area is determined according to the geometric center, and the minimum circumscribed rectangle is used as the perception area.
  • the processor 701 executing the acquisition of the point cloud collected by the point cloud collection device includes:
  • the processor 701 is further configured to execute:
  • the complexity of the drivable area is calculated based on the area of the drivable area, the maximum length of the drivable area in the traveling direction, and the maximum length of the drivable area in the vertical direction of the traveling direction.
  • the processor 701 executing the calculation of the complexity of the drivable area according to the area of the drivable area, the maximum length of the drivable area in the traveling direction, and the maximum length of the drivable area in the direction perpendicular to the traveling direction includes:
  • S represents the area of the drivable area
  • L represents the maximum length of the drivable area in the direction of travel
  • W represents the maximum length of the drivable area in the vertical direction of the travel direction
  • w1 represents the weight of S
  • w2 represents the weight of L
  • w3 represents the weight of W
  • w1, w2, and w3 are integers greater than 0.
  • the processor 701 is further configured to execute:
  • FIG. 7 and the method embodiment of FIG. 2 are based on the same concept, and the technical effects brought about by them are also the same.
  • for the specific implementation process of FIG. 7, reference may be made to the description of the embodiment shown in FIG. 2, which will not be repeated here.
  • the program can be stored in a computer-readable storage medium, and when executed, it may include the procedures of the above-mentioned method embodiments.
  • the storage medium can be a magnetic disk, an optical disc, a read-only memory, a random access memory, or the like.

Landscapes

  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present invention relates to a method and apparatus for determining a sensing area, a storage medium, and a vehicle, belonging to the field of autonomous driving. The method comprises the steps of: acquiring a point cloud collected by a point cloud collection device (S201); constructing a drivable area on the basis of the point cloud (S202); and adjusting the range of the sensing area on the basis of the range of the drivable area (S203), the area of the sensing area being larger than the area of the drivable area. In the method, the size of the sensing area can be adjusted adaptively on the basis of the size of the drivable area, which facilitates the computation of key sampling points in the point cloud and reduces the amount of computation for environment sensing.
PCT/CN2020/073693 2020-01-22 2020-01-22 Procédé et appareil de détermination de la zone de détection, support d'informations et véhicule WO2021051736A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/073693 WO2021051736A1 (fr) 2020-01-22 2020-01-22 Procédé et appareil de détermination de la zone de détection, support d'informations et véhicule
CN202080005490.8A CN112789521B (zh) 2020-01-22 2020-01-22 感知区域的确定方法、装置、存储介质及车辆

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/073693 WO2021051736A1 (fr) 2020-01-22 2020-01-22 Procédé et appareil de détermination de la zone de détection, support d'informations et véhicule

Publications (1)

Publication Number Publication Date
WO2021051736A1 true WO2021051736A1 (fr) 2021-03-25

Family

ID=74883418

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/073693 WO2021051736A1 (fr) 2020-01-22 2020-01-22 Procédé et appareil de détermination de la zone de détection, support d'informations et véhicule

Country Status (2)

Country Link
CN (1) CN112789521B (fr)
WO (1) WO2021051736A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI786765B (zh) * 2021-08-11 2022-12-11 中華電信股份有限公司 自適應配置雷達參數的雷達和方法

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113552574B (zh) * 2021-07-13 2023-01-06 上海欧菲智能车联科技有限公司 一种区域探测方法、装置、存储介质及电子设备
CN113625243A (zh) * 2021-07-28 2021-11-09 山东浪潮科学研究院有限公司 恶劣天气下提高激光雷达图像信噪比的方法及装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170015354A1 (en) * 2015-07-13 2017-01-19 Volvo Car Corporation Lane change control arrangement, a vehicle comprising such arrangement and a method for controlling lane changes
CN107632296A (zh) * 2017-09-08 2018-01-26 深圳市速腾聚创科技有限公司 激光雷达控制方法及激光雷达
CN108169729A (zh) * 2018-01-17 2018-06-15 上海禾赛光电科技有限公司 激光雷达的视场的调整方法、介质、激光雷达系统
CN108427124A (zh) * 2018-02-02 2018-08-21 北京智行者科技有限公司 一种多线激光雷达地面点分离方法及装置、车辆
CN109061606A (zh) * 2018-09-19 2018-12-21 深圳市速腾聚创科技有限公司 智能感知激光雷达系统及智能感知激光雷达控制方法
CN110275167A (zh) * 2019-06-03 2019-09-24 浙江吉利控股集团有限公司 一种雷达探测的控制方法、控制器及终端

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10620638B2 (en) * 2017-08-18 2020-04-14 Wipro Limited Method, system, and device for guiding autonomous vehicles based on dynamic extraction of road region
CN109840448A (zh) * 2017-11-24 2019-06-04 百度在线网络技术(北京)有限公司 用于无人驾驶车辆的信息输出方法和装置
CN110244321B (zh) * 2019-04-22 2023-09-26 武汉理工大学 一种基于三维激光雷达的道路可通行区域检测方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170015354A1 (en) * 2015-07-13 2017-01-19 Volvo Car Corporation Lane change control arrangement, a vehicle comprising such arrangement and a method for controlling lane changes
CN107632296A (zh) * 2017-09-08 2018-01-26 深圳市速腾聚创科技有限公司 激光雷达控制方法及激光雷达
CN108169729A (zh) * 2018-01-17 2018-06-15 上海禾赛光电科技有限公司 激光雷达的视场的调整方法、介质、激光雷达系统
CN108427124A (zh) * 2018-02-02 2018-08-21 北京智行者科技有限公司 一种多线激光雷达地面点分离方法及装置、车辆
CN109061606A (zh) * 2018-09-19 2018-12-21 深圳市速腾聚创科技有限公司 智能感知激光雷达系统及智能感知激光雷达控制方法
CN110275167A (zh) * 2019-06-03 2019-09-24 浙江吉利控股集团有限公司 一种雷达探测的控制方法、控制器及终端

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI786765B (zh) * 2021-08-11 2022-12-11 中華電信股份有限公司 自適應配置雷達參數的雷達和方法

Also Published As

Publication number Publication date
CN112789521A (zh) 2021-05-11
CN112789521B (zh) 2024-04-26

Similar Documents

Publication Publication Date Title
WO2021051736A1 (fr) Procédé et appareil de détermination de la zone de détection, support d'informations et véhicule
JP6441993B2 (ja) レーザー点クラウドを用いる物体検出のための方法及びシステム
CN110765922B (zh) 一种agv用双目视觉物体检测障碍物系统
JP5820774B2 (ja) 路面境界推定装置及びプログラム
EP4033324B1 (fr) Procédé et dispositif de détection d'informations d'obstacle pour robot mobile
CN112513679B (zh) 一种目标识别的方法和装置
CN109635816B (zh) 车道线生成方法、装置、设备以及存储介质
CN102248947A (zh) 使用3-d激光测距仪的目标和车辆检测及跟踪
CN112560800B (zh) 路沿检测方法、装置及存储介质
CN115406457A (zh) 一种可行驶区域检测方法、系统、设备及存储介质
CN113734176A (zh) 智能驾驶车辆的环境感知系统、方法、车辆及存储介质
JP2019211403A (ja) 対象位置計測装置及び対象位置計測プログラム
CN111912418A (zh) 删除移动载体不可行驶区域内障碍物的方法、装置及介质
CN116257052A (zh) 机器人的速度规划方法、设备及计算机可读存储介质
CN113436336B (zh) 地面点云分割方法和装置及自动驾驶车辆
CN115311646A (zh) 一种障碍物检测的方法及装置
CN113763308B (zh) 一种地面检测方法、装置、服务器及介质
JP7265027B2 (ja) 処理装置及び点群削減方法
CN115164919A (zh) 基于双目相机的空间可行驶区域地图构建方法及装置
CN114549764A (zh) 基于无人车的障碍物识别方法、装置、设备及存储介质
WO2024042607A1 (fr) Dispositif de reconnaissance du monde extérieur et procédé de reconnaissance du monde extérieur
CN115705671A (zh) Rgbd相机障碍物检测方法、装置系统及移动工具
CN117690133A (zh) 点云数据标注方法、装置、电子设备、车辆及介质
CN117789155A (zh) 一种黑色障碍物检测方法及装置、相关产品
CN116507937A (zh) 用于重建机动车辆环境中的地面的表面拓扑的方法和处理单元以及包括这种处理单元的机动车辆

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20865059

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 09/11/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20865059

Country of ref document: EP

Kind code of ref document: A1