CN111275816A - Method for acquiring point cloud data and related equipment - Google Patents

Info

Publication number
CN111275816A
CN111275816A
Authority
CN
China
Prior art keywords
virtual object
road
point
virtual
laser beam
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010116696.XA
Other languages
Chinese (zh)
Other versions
CN111275816B (en)
Inventor
李登宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202010116696.XA priority Critical patent/CN111275816B/en
Publication of CN111275816A publication Critical patent/CN111275816A/en
Priority to PCT/CN2021/077748 priority patent/WO2021170015A1/en
Application granted granted Critical
Publication of CN111275816B publication Critical patent/CN111275816B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01M TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M 17/00 Testing of vehicles
    • G01M 17/007 Wheeled or endless-tracked vehicles
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 Lidar systems specially adapted for specific applications

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The application discloses a method for acquiring point cloud data and related equipment, belonging to the technical field of artificial intelligence. The method comprises the following steps: determining a first scanning road section according to a first position point and a road topology model; determining three-dimensional model data of each of a plurality of virtual objects distributed over the first scanning road section; and determining the point cloud data acquired by a laser detection device at the first position point according to the laser scanning range of the laser detection device simulated on the vehicle and the three-dimensional model data of each virtual object. Because a road topology model of the virtual space can be constructed in advance, the road section currently to be scanned can be determined directly from the road topology model when the point cloud data of the vehicle at the first position point is determined, without traversing all nodes of a K-D tree as in the related art. This reduces the amount of data processed during point cloud acquisition and correspondingly improves the efficiency of point cloud acquisition.

Description

Method for acquiring point cloud data and related equipment
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a method for acquiring point cloud data and a related device.
Background
With the development of artificial intelligence technology, autonomous vehicles are receiving increasing attention because they enable unmanned driving. An autonomous vehicle is equipped with a laser radar (lidar) that emits a large number of laser beams into the surrounding environment. Each beam is reflected by objects such as buildings, and the vehicle's surroundings can be reconstructed from the reflected beams, allowing the vehicle to plan its driving route accordingly. Before an autonomous vehicle is put into use, it is subjected to a simulation test, which proceeds as follows: a virtual space is constructed, and point cloud data is simulated for the position points at which the various laser beams strike virtual objects in that space, the point cloud data comprising the three-dimensional position information of those points; the response of the autonomous vehicle to the simulated point cloud data is then examined, thereby testing the performance of the autonomous vehicle.
In the related art, for a given virtual space, a K-dimensional (K-D) tree is constructed according to the spatial position relationship of the virtual objects in that space. Each non-leaf node in the K-D tree corresponds to a bounding box indicating a portion of the virtual space, and the bounding box of each child node is a subset of the bounding box of its parent node. Each leaf node corresponds to a virtual object in the virtual space, namely a virtual object inside the bounding box of the leaf node's parent. When simulated point cloud data is acquired, the K-D tree is traversed from the root node towards the leaf nodes for each laser beam emitted by the lidar. If the beam does not intersect the bounding box of a non-leaf node, that node and its subtree are skipped; if it does intersect, the children of that node are tested in turn, until a leaf node is reached. This identifies the virtual object intersected by the beam and yields the point cloud data of the beam on that object. Performing this procedure for all laser beams produces the point cloud data of each beam.
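For illustration only, the related-art traversal can be sketched as a ray/box "slab" test driven by a recursive descent; the node layout below (dicts with `min`/`max` bounds and an `object` field at leaves) is a hypothetical simplification, not the patent's data structure:

```python
def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: does a ray intersect an axis-aligned bounding box?"""
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:                 # ray parallel to this axis
            if o < lo or o > hi:
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_far >= max(t_near, 0.0)

def traverse(node, origin, direction):
    """Collect the leaf objects whose bounding boxes the beam passes through."""
    if not ray_hits_aabb(origin, direction, node["min"], node["max"]):
        return []                          # skip this node and its whole subtree
    if "object" in node:                   # leaf node: candidate virtual object
        return [node["object"]]
    hits = []
    for child in node["children"]:
        hits += traverse(child, origin, direction)
    return hits
```

Even with this pruning, the traversal above must run once per beam, which is the cost the patent's road topology model is meant to avoid.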
In this process, the K-D tree must be traversed for every laser beam, so acquiring the point cloud data consumes a large amount of computing resources.
Disclosure of Invention
The application provides a method and related equipment for acquiring point cloud data, which can improve the efficiency of acquiring the point cloud data. The technical scheme is as follows:
in a first aspect, a method for acquiring point cloud data is provided, and the method is applied to a computer device. In the method, in the case that the vehicle is at a first location point in a virtual space, a first scanned segment is determined from the first location point and a road topology model indicating a road topology in the virtual space; determining three-dimensional model data for each of a plurality of virtual objects distributed over a first scan segment; and determining point cloud data acquired by the laser detection device at the first position point according to the laser scanning range of the laser detection device simulated on the vehicle and the three-dimensional model data of each virtual object in the plurality of virtual objects.
The present application exploits the observation that, while travelling, an autonomous vehicle only needs to identify objects near the road, which allows the amount of data processed when determining point cloud data to be reduced. For a virtual space to be simulated, a road topology model indicating the road topology of that space can be constructed in advance. When the point cloud data for the vehicle at the first position point is subsequently determined, the road section currently to be scanned can be obtained directly from the road topology model, without traversing all nodes of a K-D tree as in the related art. This reduces the amount of data processed during point cloud acquisition and correspondingly improves the efficiency of point cloud acquisition.
According to a first aspect, in one possible implementation form of the present application, the road topology model includes a plurality of road nodes, and road information of each of the plurality of road nodes. In this scenario, the implementation manner of determining the first scanning road segment according to the first location point and the road topology model may be: determining a road node corresponding to the first position point from a plurality of road nodes according to the road information of each road node; and determining a first scanning road section according to the road node corresponding to the first position point.
To enable the first scanning road section to be determined quickly, the road sections of the virtual space are stored in the road topology model in advance in the form of road nodes, so that the first scanning road section can subsequently be located quickly from the first position point, improving the efficiency of point cloud acquisition.
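As an illustrative sketch of this lookup, assuming each road node stores a centre point and the nodes are ordered along the road (the dictionary layout and the `reach` parameter are hypothetical, not the patent's data format):

```python
import math

def scan_segment(position, road_nodes, reach=1):
    """Match the vehicle's position to the nearest road node, then take that
    node plus `reach` neighbouring nodes in each direction as the segment."""
    idx = min(range(len(road_nodes)),
              key=lambda i: math.dist(position, road_nodes[i]["center"]))
    return road_nodes[max(0, idx - reach): idx + reach + 1]
```

The point of the sketch is that the lookup touches only the road nodes, independent of how many virtual objects populate the space.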
According to a first aspect, in one possible implementation form of the present application, the road topology model includes a plurality of road nodes, and virtual object information of each of the plurality of road nodes. The virtual object information of a first road node among the plurality of road nodes includes three-dimensional model data of each virtual object distributed at the first road node, and the first road node is any one of the plurality of road nodes. In this scenario, determining the three-dimensional model data of each of the plurality of virtual objects distributed over the first scan segment may be implemented by: three-dimensional model data of each of a plurality of virtual objects distributed in the first scanning section is acquired from the virtual object information of each road node.
In order to further improve the efficiency of acquiring the point cloud data, the related information of the virtual objects distributed on each road section can be stored in the road topology model in advance. Therefore, after the first scanning road section is determined, the three-dimensional model data of each virtual object in the plurality of virtual objects distributed on the first scanning road section can be directly determined according to the road topology model, so that the efficiency of acquiring the point cloud data is improved.
In a possible implementation form of the application according to the first aspect, the distance between the virtual objects distributed at the first road node and the centerline of the first road node is within a first distance threshold.
During driving, virtual objects on either side of the road that are very far from the autonomous vehicle have no influence on its travel, so such objects can be filtered out. Accordingly, in the road topology model, the virtual objects distributed at each road node include only those within the first distance threshold of that node's centerline. This simplifies the road topology model, reduces the subsequent computation needed to acquire point cloud data from it, and improves the efficiency of point cloud acquisition.
According to a first aspect, in one possible implementation form of the present application, the three-dimensional model data of the first virtual object includes three-dimensional position information of each of a plurality of surface points of the first virtual object, the three-dimensional position information including a height of the surface point, and the first virtual object is any one of the plurality of virtual objects. In this scenario, the altitude in the three-dimensional model data of the plurality of virtual objects distributed on the first scanned link is within the altitude threshold.
Likewise, the parts of roadside virtual objects that are very high above the ground have no influence on the travel of the autonomous vehicle; for example, the upper floors of buildings on either side of the road do not affect its driving. Such virtual objects, or the over-height parts of them, can therefore also be filtered out. In that case, the three-dimensional model data of the virtual objects distributed at a road node contains no three-dimensional position information for surface points whose height exceeds the height threshold. This simplifies the road topology model, reduces the subsequent computation needed to acquire point cloud data from it, and improves the efficiency of point cloud acquisition.
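The two filtering rules (distance from the road centerline, and surface-point height) can be sketched as simple predicates; the 2-D centerline, the thresholds, and the `(x, y, z)` point layout are assumptions for illustration:

```python
import math

def within_centerline(point_xy, a, b, max_dist):
    """Perpendicular distance from a point to the infinite line through the
    centerline endpoints a and b (2-D), compared with the distance threshold."""
    (ax, ay), (bx, by), (px, py) = a, b, point_xy
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    return num / math.hypot(bx - ax, by - ay) <= max_dist

def filter_surface_points(points, a, b, max_dist, max_height):
    """Keep surface points (x, y, z) near the centerline and below the
    height threshold, so the stored model data stays small."""
    return [p for p in points
            if within_centerline(p[:2], a, b, max_dist) and p[2] <= max_height]
```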
According to the first aspect, in one possible implementation manner of the present application, the three-dimensional model data of the first virtual object in the plurality of virtual objects includes three-dimensional position information of a bounding box of the first virtual object, where the bounding box of the first virtual object refers to a geometric body that surrounds the first virtual object, and the first virtual object is any virtual object in the plurality of virtual objects. In this scenario, according to the laser scanning range of the laser detection device simulated on the vehicle and the three-dimensional model data of each virtual object in the plurality of virtual objects, the implementation manner of determining the point cloud data acquired by the laser detection device at the first position point may be: determining an angle range of the laser beam emitted by the laser detection device covering the first virtual object from the laser scanning range according to the three-dimensional position information of the bounding box of the first virtual object, the direction of the vehicle and the first position point; determining the intersection point of the first laser beam on the first virtual object according to the three-dimensional model data of the first virtual object, and taking the three-dimensional position information of the intersection point as the point cloud data corresponding to the first laser beam, wherein the first laser beam is any one laser beam in the angle range.
In the present application, obtaining the point cloud data of the laser detection device at the first position point is to substantially determine the intersection point of each laser beam on the virtual object. Thus, the point cloud data of the laser detection device at the first location point may be determined by the above-described implementation.
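A simplified sketch of how such an angle interval could be obtained from the horizontal corners of a bounding box; it ignores the wrap-around at 0°/360° (a box straddling the heading axis would need extra handling), so it is illustrative only:

```python
import math

def angle_range_of_box(vehicle_pos, heading_deg, box_corners):
    """Horizontal angle interval, relative to the vehicle heading, that covers
    a bounding box; only beams inside it need intersection tests."""
    angles = []
    for cx, cy in box_corners:
        a = math.degrees(math.atan2(cy - vehicle_pos[1], cx - vehicle_pos[0]))
        angles.append((a - heading_deg) % 360.0)   # angle seen from the vehicle
    return min(angles), max(angles)
```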
In a possible implementation manner of the present application, the determining the intersection point of the first laser beam on the first virtual object according to the three-dimensional model data of the first virtual object may be: when the first virtual object is a near-distance virtual object, determining a plurality of first surface elements of the first virtual object according to three-dimensional position information of each surface point in three-dimensional model data of the first virtual object, wherein each first surface element is a surface formed by three or more surface points in each surface point of the first virtual object, and the near-distance virtual object is a virtual object with the distance from the first position point within a second distance threshold; and determining a first surface element which is intersected with the first laser beam in the plurality of first surface elements, and taking an intersection point between the first laser beam and the intersected first surface element as an intersection point of the first laser beam on the first virtual object.
In a possible implementation manner of the present application, the determining the intersection point of the first laser beam on the first virtual object according to the three-dimensional model data of the first virtual object may be: determining a plurality of second surface elements of the bounding box according to the three-dimensional position information of the bounding box of the first virtual object under the condition that the first virtual object is a long-distance virtual object, wherein the long-distance virtual object is a virtual object of which the distance from the first position point is between a second distance threshold and a first distance threshold, and the first distance threshold is greater than the second distance threshold; and determining a second surface element which is intersected with the first laser beam in the plurality of second surface elements, and taking an intersection point between the first laser beam and the intersected second surface element as an intersection point of the first laser beam on the first virtual object.
An autonomous vehicle needs to know the detailed three-dimensional structure of nearby objects in order to avoid traffic accidents, whereas for distant objects only a rough contour is needed to guide current planning. Therefore, for a first laser beam within the angle range, different processing can be used to determine its intersection point depending on the distance of the virtual object from the first position point: the surface elements of the full three-dimensional model for near objects, and the surface elements of the bounding box for far objects.
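For the near-object case, the intersection of a beam with a triangular surface element can be computed with the well-known Möller-Trumbore test; the far-object case would apply the same test to the triangulated faces of the bounding box. This is a sketch of the standard algorithm, not necessarily the computation the patent specifies:

```python
def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore: return the hit point of a ray on a triangle, or None."""
    sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    cross = lambda a, b: (a[1] * b[2] - a[2] * b[1],
                          a[2] * b[0] - a[0] * b[2],
                          a[0] * b[1] - a[1] * b[0])
    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:                      # beam parallel to the triangle
        return None
    inv, tvec = 1.0 / det, sub(origin, v0)
    u = dot(tvec, pvec) * inv               # first barycentric coordinate
    if not 0.0 <= u <= 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(direction, qvec) * inv          # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) * inv                 # distance along the beam
    if t < eps:                             # intersection behind the origin
        return None
    return tuple(o + t * d for o, d in zip(origin, direction))
```

The returned hit point is exactly the three-dimensional position information that would be emitted as one point cloud sample.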
According to the first aspect, in one possible implementation manner of the present application, before determining the intersection point of the first laser beam on the first virtual object from the three-dimensional model data of the first virtual object, if there is no other virtual object obstructing the first virtual object in the projection direction of the first laser beam, the operation of determining the intersection point of the first laser beam on the first virtual object is performed.
Because of occlusion between virtual objects, the same laser beam may in theory be projected onto several virtual objects, but in practice each beam can yield only one point cloud sample. Taking occlusion into account when determining the point cloud data therefore improves the consistency between the point cloud data acquired in the virtual space and that acquired in the real world.
According to a first aspect, in one possible implementation form of the present application, in the method, if there are other virtual objects that occlude the first virtual object in the projection direction of the first laser beam, and the other virtual objects are non-strictly-occluding objects, the operation of determining the intersection point of the first laser beam on the first virtual object is performed, the non-strictly-occluding objects being objects in which there is a through hole through which the laser beam can pass. Accordingly, after the intersection point of the first laser beam on the first virtual object is determined, if the first laser beam does not have an intersection point on other virtual objects, the three-dimensional position information of the intersection point of the first laser beam on the first virtual object is used as the point cloud data corresponding to the first laser beam.
In determining whether another virtual object occludes the first virtual object in the projection direction of the first laser beam, the other virtual object could be treated as a simple model that is impermeable to the laser beam. In the real world, however, a complex object such as a tree contains through-holes that a laser beam can pass through, so the beam may still reach the first virtual object even though the tree's bounding box blocks it. In that case the point cloud data can be determined by the implementation above, improving the consistency between the point cloud data acquired in the virtual space and that acquired in the real world.
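The occlusion rule can be sketched as follows: the candidate objects along a beam are ordered by distance, and a non-strictly-occluding object stops the beam only if the beam actually intersects one of its surface elements, represented here by a precomputed `hit` flag (a hypothetical simplification of the bin-level test):

```python
def point_for_beam(candidates):
    """candidates: dicts with 'distance', 'name', and 'hit' (does the beam
    actually strike a surface element of this object?). Walking from the
    nearest candidate outward, the first real strike yields the single point
    cloud sample for this beam; a non-strict occluder whose through-hole the
    beam passes (hit=False) lets the beam continue to objects behind it."""
    for c in sorted(candidates, key=lambda c: c["distance"]):
        if c["hit"]:
            return c["name"]
    return None    # the beam passed through every candidate
```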
According to the first aspect, in a possible implementation manner of the present application, determining the three-dimensional model data of each of the plurality of virtual objects distributed over the first scanning road section according to the virtual object information of each road node may proceed as follows: the first scanning road section is divided into a plurality of sector areas centred on the first position point, yielding boundary information for each sector area; the sector areas are arranged in sequence according to the scanning direction of the laser detection device; for the first arranged sector area, the three-dimensional model data of the virtual objects located in it is acquired from the virtual object information of each road node according to its boundary information; for the ith arranged sector area, the displacement of the vehicle while the laser detection device scans from the first sector area to the ith sector area is determined from the scanning speed of the laser detection device and the moving speed of the vehicle, where i is a positive integer greater than or equal to 2 and less than or equal to the number of sector areas; the boundary information of the ith sector area is updated according to that displacement; and the three-dimensional model data of the virtual objects in the ith sector area is acquired from the virtual object information of each road node according to the updated boundary information.
The vehicle keeps moving forward while the lidar scans. If the vehicle's displacement during one horizontal revolution of the lidar is large, then, to improve the consistency between the point cloud data acquired during the simulation test and that acquired in the real world, the three-dimensional model data of the virtual objects distributed on the first scanning road section and on its two sides can be determined region by region.
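The per-sector bookkeeping can be sketched as follows, assuming a uniform vehicle speed along a fixed heading and one full horizontal revolution per scan period (all names are illustrative):

```python
def sector_boundaries(position, speed, scan_rate_hz, n_sectors, heading=(1.0, 0.0)):
    """Start angle and displacement-corrected centre of each sector: the
    vehicle moves while the lidar sweeps, so later sectors are offset by the
    distance travelled since the sweep began."""
    sweep_time = 1.0 / scan_rate_hz          # one full horizontal revolution
    dt = sweep_time / n_sectors              # time to sweep one sector
    sectors = []
    for i in range(n_sectors):
        shift = speed * dt * i               # displacement since sector 0
        centre = (position[0] + heading[0] * shift,
                  position[1] + heading[1] * shift)
        sectors.append({"start_deg": 360.0 * i / n_sectors, "centre": centre})
    return sectors
```

For example, at 10 m/s and a 10 Hz scan rate, the vehicle travels 1 m per revolution, so with four sectors each successive sector's boundary shifts 0.25 m along the heading.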
According to the first aspect, in one possible implementation manner of the present application, the point cloud data acquired by the laser detection device at the first position point includes the point cloud data corresponding to each laser beam emitted by the laser detection device. In this case, after the point cloud data acquired at the first position point is determined, the point cloud data corresponding to each laser beam is cached; when the vehicle is at a second position point in the virtual space, a second scanning road section is determined according to the second position point and the road topology model, and the plurality of virtual objects distributed on it are determined; and if the same virtual object appears among the virtual objects distributed on both the first and second scanning road sections, and its relative distance to the vehicle does not change between the first and second position points, the cached point cloud data of the laser beams projected onto that object is used as the point cloud data of the corresponding laser beams at the second position point.
During a simulation test, the point cloud data must be updated in time at each position point as the vehicle moves through the virtual space. For those objects around the vehicle whose relative position to the vehicle does not change as it moves, there is no need to determine the point cloud data again. The amount of computation required to determine the point cloud data can therefore be reduced according to the relative state between the vehicle and the surrounding virtual objects.
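A minimal sketch of this caching idea, with hypothetical names (`rel_dist_unchanged` stands in for whatever check determines that an object's relative distance to the vehicle has not changed between the two position points):

```python
def reuse_cached_points(cache, prev_objects, curr_objects, rel_dist_unchanged):
    """Reuse cached per-beam point cloud data for objects seen at both
    positions whose relative distance to the vehicle did not change."""
    reused = {}
    for obj in set(prev_objects) & set(curr_objects):
        if rel_dist_unchanged(obj) and obj in cache:
            reused[obj] = cache[obj]   # beams on this object keep their points
    return reused
```

Only the objects outside `reused` would need fresh intersection tests at the second position point.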
In a second aspect, an apparatus for acquiring point cloud data is provided, which has the function of implementing the method behavior of acquiring point cloud data in the first aspect. The device comprises at least one module for implementing the method for acquiring point cloud data provided by the first aspect.
In a third aspect, a computer device is provided, which comprises a memory for storing computer instructions and a processor for reading the computer instructions to execute the method for acquiring point cloud data according to the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, which has instructions stored therein, which when run on a computer, cause the computer to perform the method of acquiring point cloud data of the first aspect described above.
In a fifth aspect, a computer program product containing instructions is provided, which when run on a computer causes the computer to perform the method of acquiring point cloud data as described in the first aspect above.
The technical effects obtained by the above second, third, fourth and fifth aspects are similar to the technical effects obtained by the corresponding technical means in the first aspect, and are not described herein again.
Drawings
Fig. 1 is a schematic diagram of a laser beam emitted by a laser radar according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a laser beam emitted by another lidar provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a laser beam emitted by another lidar provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of an exemplary laser beam intersecting a triangular bin according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of an architecture of a simulation system according to an embodiment of the present application;
fig. 6 is a flowchart of a method for acquiring point cloud data according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a road topology model provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a bounding box of a virtual object provided by an embodiment of the present application;
fig. 9 is a schematic view of a high-rise building provided by an embodiment of the present application;
fig. 10 is a schematic diagram of a road node provided in an embodiment of the present application;
fig. 11 is a schematic diagram of a sectorized area provided in an embodiment of the present application;
FIG. 12 is a schematic diagram of a method for determining an angle range according to an embodiment of the present disclosure;
FIG. 13 is a schematic diagram of an autonomous vehicle provided in accordance with an embodiment of the present application;
FIG. 14 is a schematic view of a laser beam projection provided by an embodiment of the present application;
FIG. 15 is a schematic diagram of an apparatus for acquiring point cloud data according to an embodiment of the present disclosure;
fig. 16 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Before explaining the embodiments of the present application in detail, an application scenario of the embodiments of the present application is explained.
As society's demands for intelligent, economical, and safe driving have risen, autonomous driving technology has become one of the key development directions of the automobile industry and is receiving increasing attention from vehicle manufacturers and internet companies. An autonomous vehicle (also called an unmanned vehicle, computer-driven vehicle, or wheeled mobile robot) is an intelligent vehicle that achieves unmanned driving through a computer system. Autonomous vehicles have a development history spanning several decades of the 20th century, and in the 21st century they have shown a trend towards practical deployment.
In the architecture of an autonomous vehicle, the sensing layer is likened to the "eyes" of the vehicle. It comprises vision-system sensors, such as vehicle-mounted cameras, and radar-system sensors, such as vehicle-mounted millimeter-wave radar, laser radar, and ultrasonic radar. Among these, lidar has been considered an essential basis for achieving autonomous driving. The autonomous vehicle equipped with the lidar, or the simulated vehicle equipped with a simulated lidar model, may be referred to as the ego vehicle.
For the sake of convenience in the following description, the operation principle of the lidar will be explained.
A laser radar is a system that integrates laser ranging, GPS positioning, and inertial measurement devices. It can acquire data and generate accurate digital models: the combination of these three techniques makes it possible to locate, with high accuracy, the point where a laser beam strikes. Compared with other detection products, lidar is accurate, fast, and efficient. Its defining characteristic is the ability to generate three-dimensional position information, from which the position, size, external appearance, and even material of an object can be determined quickly, yielding an accurate digital model.
Lidar is a radar system that detects characteristics such as position, velocity, etc. of a target by emitting a laser beam. The laser radar emits a large number of laser beams to the surrounding environment in a short time, and the distance between the laser radar and the surrounding object is calculated by measuring the flight time of the reflected laser beams. The laser radar can instantly construct a 3D map of the surrounding environment, and has the advantages of high measurement accuracy, good directivity and the like.
During operation the lidar continuously emits laser beams into the surrounding environment, calculates the distance between the vehicle and the position point onto which each laser beam is projected by measuring the beam's transit time, and thereby quickly creates point cloud data of the surrounding environment. In short, a lidar is a device that quickly creates point cloud data for the various position points of the surrounding environment. The point cloud data includes the three-dimensional coordinates (XYZ) of the position point onto which the laser beam is projected and the laser reflection intensity (Intensity). The laser reflection intensity is related to the surface material, roughness, and incident angle at the corresponding position point, as well as the emission energy of the instrument and the wavelength of the laser beam.
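The distance calculation described above can be sketched in a few lines. This is an illustrative example, not from the patent: the lidar measures the round-trip transit time of a reflected beam, and the one-way distance is half the round trip at the speed of light.

```python
# Time-of-flight ranging: distance from the round-trip transit time
# of a reflected laser beam (illustrative sketch, not from the patent).
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_time_s: float) -> float:
    """One-way distance to the illuminated position point."""
    return C * round_trip_time_s / 2.0

# A return after ~667 nanoseconds corresponds to roughly 100 m.
print(tof_distance_m(667e-9))
```

At the 100–200 m ranging range typical for autonomous driving, returns arrive within about a microsecond, which is why a lidar can sample so many position points per second.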
A lidar can create point cloud data for up to 1.5 million points per second, and this ability to rapidly create point cloud data gives lidar a strong position in fields such as autonomous driving. For example, in autonomous driving, the point cloud data output by the lidar can provide surrounding environment information to the autonomous vehicle, and is used in the development and verification of algorithms such as perception, planning, decision making, and information fusion. In addition, point cloud data may also be used as training data for these algorithms.
Fig. 1 is a schematic diagram of laser beams emitted by a lidar according to an embodiment of the present application. In fig. 1, each concentric circle corresponds to the point cloud data obtained after one group of lasers scans a full circle in the horizontal direction, and each such circle may also be referred to as a point cloud concentric circle. The vertical separation angle between two adjacent groups of lasers is constant; therefore, the farther the beams travel from the lidar, the larger the spacing between the point cloud concentric circles scanned by adjacent lasers.
The position information indicated by part of the point cloud data in fig. 1 forms concentric circles because there is no obstacle around the autonomous vehicle and the laser beams hit the ground, forming a set of concentric circles; the specific principle is shown in fig. 2. If an obstacle exists around the autonomous vehicle, the shape corresponding to the point cloud data is the shape of the obstacle, such as the point cloud data on the buildings around the autonomous vehicle in fig. 1.
Several parameters involved in the operation of the lidar are explained below.
Azimuth angle: the scanning range of the laser beams during operation of the lidar is referred to as the field of view (FOV). The azimuth angle includes a horizontal azimuth angle and a vertical azimuth angle. The horizontal azimuth angle refers to the scanning angle of the lidar in the horizontal direction; the vertical azimuth angle refers to the scanning angle of the lidar in the vertical direction.
At present, most lidar systems adopt a rotating design. As shown in fig. 2, the main body of the lidar is fixed on the base of a rotating motor and rotates continuously during operation, so that the lidar can scan 360° around itself. That is, the horizontal azimuth angle of these lidars is 360°.
The vertical azimuth angle refers to the scanning angle of the lidar in the vertical direction, typically within 40°. Fig. 3 shows a schematic diagram of the laser scanning lines of a lidar with a vertical azimuth angle of −16° to 7°.
Data update frequency: the lidar scans continuously in circles during operation, and the autonomous vehicle moves while the lidar is scanning. Therefore, each time the lidar completes one revolution, the point cloud data of the laser beam emitted in a given direction is updated. For example, if the scanning frequency of the lidar is 10 Hz, i.e., the lidar rotates 10 times per second, then the point cloud data of the laser beam emitted in a given direction is updated 10 times per second.
Angular resolution: the angular resolution of the lidar is divided into horizontal angular resolution and vertical angular resolution. Horizontal angular resolution refers to the minimum angular separation between scan lines in the horizontal direction. It changes with the scanning frequency, i.e., the rotation speed of the lidar: the faster the rotation, the larger the horizontal separation between scan lines and hence the larger the horizontal angular resolution. Vertical angular resolution refers to the angular separation of two scan lines in the vertical direction. As shown in fig. 3, the vertical angular resolution of the lidar is 1° between lines 1 and 5, 0.33° between lines 6 and 30, and 1° between lines 31 and 40.
Ranging range: the ranging range of lidars applied in the field of autonomous driving is generally about 100–200 m.
Data rate: the amount of point cloud data generated by the lidar per second, in points/sec. It may also be referred to as the data rate (Data Rate) or measurement points (Measurement Points). For example, consider a 40-beam lidar with a scanning frequency of 20 Hz and a horizontal angular resolution of 0.45° (each beam produces 800 points per revolution). The number of point cloud data points generated per second is therefore: 40 × 20 × 800 = 640,000 points/sec.
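The data-rate arithmetic above can be sketched as a small function (an illustrative example, not from the patent; the points-per-revolution figure follows from 360° divided by the horizontal angular resolution):

```python
# Point cloud data rate: beams × revolutions/s × points per revolution
# (illustrative sketch of the arithmetic in the text above).
def points_per_second(num_beams: int, scan_hz: float, h_res_deg: float) -> int:
    points_per_rev = round(360.0 / h_res_deg)  # e.g. 360 / 0.45 = 800
    return int(num_beams * scan_hz * points_per_rev)

print(points_per_second(40, 20, 0.45))  # 640000
```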
In addition, the large amount of point cloud data scanned by the lidar is sent in the format of User Datagram Protocol (UDP) packets to other devices that need to use it. Taking the Hesai Pandar40P (one type of lidar) as an example, a UDP packet transmitted by the Pandar40P totals 1304 bytes, comprising a 42-byte header and a 1262-byte data block region. All multi-byte values are little endian (Little Endian) and unsigned. Little endian refers to a storage order in which the low-order byte comes first and the high-order byte comes after, which will not be described in detail here.
The data block region in one UDP packet includes 10 data blocks, each 124 bytes long and representing one complete set of point cloud data. The 124 bytes of a data block comprise: a 2-byte flag, 2 bytes of horizontal rotation angle information, and 40 sets of laser beam information, each set consisting of 2 bytes of distance information and 1 byte of intensity information. The data block region further includes 22 bytes of additional information, containing data such as sensor temperature status, motor speed, and timestamp information (the timestamp indicates the packing time of the data block, expressed in microseconds).
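A minimal parsing sketch of one 124-byte data block with the field layout described above (this is an assumption-laden illustration, not the vendor's official parser; the exact on-wire field meanings should be taken from the device manual):

```python
# Unpack one 124-byte data block: a 2-byte flag, a 2-byte horizontal
# rotation angle, then 40 × (2-byte distance, 1-byte intensity),
# all little-endian unsigned, per the layout described in the text.
# Illustrative sketch only — not an official Pandar40P parser.
import struct

def parse_data_block(block: bytes):
    assert len(block) == 124
    flag, azimuth = struct.unpack_from("<HH", block, 0)
    channels = []
    offset = 4
    for _ in range(40):
        distance, intensity = struct.unpack_from("<HB", block, offset)
        channels.append((distance, intensity))
        offset += 3
    return flag, azimuth, channels

# A zero-filled block parses to zeroed fields and 40 channels.
flag, azimuth, channels = parse_data_block(bytes(124))
print(flag, azimuth, len(channels))  # 0 0 40
```

Note that 2 + 2 + 40 × 3 = 124 bytes, matching the block length, and 10 × 124 + 22 = 1262 bytes, matching the data block region.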
The above explains the operating principle of the lidar. Currently, to ensure the safety of autonomous vehicles, autonomous vehicles can be tested before they are put into use.
Currently, the industry covers as many actual driving environments and traffic conditions as possible through a large number of road tests. However, according to statistical estimates, an autonomous vehicle under test would need to be tested and verified over billions of kilometers to reach a sufficient degree of verification. Clearly, accumulating that many kilometers through real-world road testing is very difficult and time consuming. Real road tests also suffer from high danger and a low occurrence rate of critical scenarios.
Therefore, in countries with developed automotive industries, vehicle manufacturers, tier-one suppliers, and equipment suppliers often first use simulation testing to verify the functionality of autonomous vehicles. Using simulation software, a virtual space is generated or reproduced based on the real traffic environment, and the test checks whether the autonomous vehicle can correctly identify the surrounding environment in that virtual space, react promptly and accurately, and adopt appropriate driving behavior. It is well recognized in the industry that only by combining simulation testing with actual road testing can a comprehensive, systematic, and effective verification result be obtained. Therefore, performing simulation tests on autonomous vehicles is increasingly important.
The simulation test of an autonomous vehicle mainly simulates, in a simulation system, the point cloud data output by a detection device on the vehicle, transmits the point cloud data to the autonomous driving system, and observes whether the vehicle's response is correct. Because lidar is a basic detection device for autonomous driving, the simulation test of an autonomous vehicle may amount to determining the point cloud data of the lidar in a virtual space.
In the real world, the lidar detects surrounding real objects with laser beams to acquire their spatial information. In a simulation test, the entire environment is virtual, including virtual vehicles, virtual traffic conditions, and virtual roads, buildings, and other static objects. Therefore, in the simulation test, the simulated lidar is required to detect the virtual environment and output point cloud data.
In virtual space, all objects are represented by three-dimensional models. Three-dimensional models are the input data of computer graphics. A three-dimensional model not only describes the concrete geometric shape of an object in the real world but also records surface information such as the material, color, and texture of the object's surface.
A three-dimensional model describes the surface geometry of an object through geometric elements such as points, lines, and surfaces and the topological relationships among them. The surface properties of the object are described by adding material, color, texture, and other information to the surface elements constituting the object's surface. The industry-standard three-dimensional rendering engine, the Open Graphics Library (OpenGL), uses a three-dimensional model based on a boundary representation to describe three-dimensional objects.
Such a three-dimensional model represents the surface of an object with polygons. The vertices of a polygon, or the polygon itself, carry corresponding colors, normal vectors, and textures to describe attributes of the object surface other than shape. Specifically, a three-dimensional model is composed of two parts: geometry and surface properties. For convenience of description, the three-dimensional model can be written as:
Model = {GS, A}
Model denotes the three-dimensional model, GS denotes the data set of the three-dimensional model's geometry, and A denotes the data set of the three-dimensional model's surface properties. The surface properties are not relevant to the embodiments of the present application, so A will not be described in detail here.
GS is represented by points, lines, polygons, and the topological relationships between them. Thus, GS can be expressed as:
GS = {V, Tri}
V denotes the set of all points in the three-dimensional model and can be represented as follows:
V = {Vi(xi, yi, zi) | i = 1, ..., N}
where N is the number of points in the three-dimensional model. In GS, the surface of the three-dimensional model is represented by a set of polygons whose vertices are three-dimensional points in V. That is, each surface of the three-dimensional model is decomposed into several polygons, and the surfaces (also referred to as surface elements) of the model are expressed by these polygons. Since all the vertices of a polygon come from the vertex set V of the three-dimensional model, a polygon with k vertices is represented by k vertices of V arranged counterclockwise. Because the set V already contains the coordinates of the vertices, only the index numbers of the vertices in V need to be specified when representing a polygon. In theory, the number of vertices of a polygon used to represent a surface may be arbitrary. In practice, however, when the number of vertices exceeds 3, concave polygons can appear, and OpenGL's rendering of concave polygons is unstable. In addition, if the number of polygon vertices exceeds 3, errors in the vertex coordinates may leave the vertices out of a common plane, which may cause the polygon not to be displayed. Thus, in actual use, although the rendering engine supports arbitrary polygons, the surface of the three-dimensional model can be expressed using only a set of triangles for stability and efficiency. For example, in the three-dimensional model, the set of all triangles is Tri, which can be expressed as follows:
Tri = {tri(ai, bi, ci) | vai, vbi, vci ∈ V, i = 1, ..., M}
ai, bi, and ci are the indices in the vertex set V of the three vertices of the ith triangle in Tri, arranged in counterclockwise order. V and Tri together completely express the geometric structure of the surface of the three-dimensional model.
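The {V, Tri} representation above can be sketched as a small data structure (an illustrative example, not from the patent; the `Mesh` name and the square example are assumptions):

```python
# The Model = {GS, A} geometry part, GS = {V, Tri}: vertices carry
# coordinates, triangles store counterclockwise indices into V.
# Illustrative sketch only — not the patent's implementation.
from dataclasses import dataclass

@dataclass
class Mesh:
    V: list    # [(x, y, z), ...] vertex coordinates
    Tri: list  # [(ai, bi, ci), ...] indices into V, counterclockwise

# A unit square in the z = 0 plane, decomposed into two triangles.
square = Mesh(
    V=[(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
    Tri=[(0, 1, 2), (0, 2, 3)],
)
print(len(square.V), len(square.Tri))  # 4 2
```

Storing indices rather than repeated coordinates means shared vertices appear once in V, which is exactly why a polygon "only needs the index numbers of its vertices in V".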
In the simulation test, point cloud data is calculated by computing the intersection points of simulated laser beams with the triangular surface elements of the three-dimensional models. Fig. 4 is a schematic diagram of a laser beam intersecting triangular surface elements according to an embodiment of the present application. As shown in fig. 4, the arrowed line is a simulated laser beam, and V1 to V8 are 8 vertices in the three-dimensional model of the simulated object. The laser beam intersects both surface element V1V2V5 and surface element V2V3V6 of the three-dimensional model. Because the intersection point on surface element V1V2V5 is closer to the origin of the laser beam than the intersection point on surface element V2V3V6, the intersection point on V1V2V5 is the position point that the ray-simulated laser beam illuminates on the object corresponding to the three-dimensional model, and the three-dimensional position information of this intersection point is the point cloud data scanned when the simulated laser beam is projected onto the object. Therefore, simulating the lidar's output point cloud data in the simulation space means determining the intersection information of all laser beams with the surrounding virtual objects.
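The laser-beam/surface-element intersection described above is a standard ray–triangle test. The sketch below uses the well-known Möller–Trumbore algorithm as one possible way to compute it (the patent does not specify an algorithm; this is an illustration, not the patent's method). It returns the distance t along the ray, so the nearest hit among all surface elements — as with V1V2V5 versus V2V3V6 in fig. 4 — is simply the smallest t.

```python
# Ray–triangle intersection (Möller–Trumbore), an illustrative way to
# compute the laser-beam / surface-element intersections in the text.
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Distance t along the ray to the triangle, or None if no hit."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:          # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = sub(origin, v0)
    u = dot(s, p) * inv
    if u < 0.0 or u > 1.0:      # barycentric coordinate out of range
        return None
    q = cross(s, e1)
    v = dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None

# A ray along +z from below hits a triangle in the z = 1 plane at t = 1.
print(ray_triangle((0.25, 0.25, 0), (0, 0, 1),
                   (0, 0, 1), (1, 0, 1), (0, 1, 1)))  # 1.0
```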
The embodiment of the present application provides a method for acquiring point cloud data, which can acquire the point cloud data of a lidar in a virtual space. The point cloud data can be used for verification of an autonomous driving system and for the development and testing of algorithms such as perception, planning, decision making, and information fusion.
Fig. 5 is a schematic structural diagram of a simulation system according to an embodiment of the present application. As shown in FIG. 5, the simulation system 500 includes a simulation platform 501 and an autopilot platform 502. The simulation platform 501 and the autopilot platform 502 may be connected for communication via a wired or wireless connection.
The simulation platform 501 may run on a computer. It is configured to obtain point cloud data by the method provided in the embodiment of the present application and send the point cloud data to the autopilot platform 502 over a network. The autopilot platform 502 controls the autonomous vehicle to respond according to the point cloud data, so as to test the performance of the autonomous vehicle. The autopilot platform may run on an autonomous vehicle.
In addition, when no communication takes place between the simulation platform 501 and the autopilot platform 502, the point cloud data acquired by the simulation platform 501 can also be transferred to the autopilot platform 502 by means of a storage medium, which is not described in detail here.
In addition, as shown in fig. 5, the simulation system 500 may further include an algorithm platform 503, and the algorithm platform 503 may perform other algorithm processing and the like based on the point cloud data obtained by the simulation platform 501, and also will not be described in detail here.
The method for acquiring point cloud data provided by the embodiment of the present application is explained in detail below. Fig. 6 is a flowchart of a method for acquiring point cloud data according to an embodiment of the present application, where the method may be applied to the simulation platform in fig. 5. As shown in fig. 6, the method includes the steps of:
Step 601: in the case where the vehicle is at a first position point in the virtual space, determine a first scanned road segment according to the first position point and a road topology model, the road topology model indicating the road topology in the virtual space.
In the embodiment of the present application, it is considered that an autonomous vehicle only needs to determine the objects near its driving road while driving. Therefore, in order to reduce the amount of data processed when determining point cloud data, a road topology model can be constructed in advance for the virtual space to be simulated. The road topology model indicates the road topology in the virtual space. Thus, when the point cloud data of the vehicle at the first position point is subsequently determined, the road segment currently to be scanned can be determined based on the road topology model through step 601, without traversing the entire space around the first position point.
The first position point may be a current position of a laser detection device simulated on the vehicle. The laser detection device may be a laser radar or other types of laser detection devices, which are not illustrated herein.
In one possible implementation, the road topology model may include a plurality of road nodes (roads) and road information for each road node. Each road node corresponds to a segment of road; that is, in the embodiment of the present application, each road node essentially indicates a segment of road. The road information of each road node indicates various attributes of the corresponding road, such that the specific position of the corresponding road in the virtual space can be determined from the road information of the road node. That is, the topological relationships between the roads in the virtual space can be constructed from the road information of the road nodes in the road topology model. For example, the road information of each road node may include the identification of the road node, the length of the corresponding road, and the boundary position information of the corresponding road (the boundary position information may include start and end position information of the road, the connection relationships between the road and other roads, and the like).
Since the road information of each road node may indicate the specific location of the corresponding road, based on the road topology model, determining the first scanned road segment according to the first position point and the road topology model in step 601 may be implemented as: determining, from the plurality of road nodes, the road node corresponding to the first position point according to the road information of each road node; and determining the first scanned road segment according to the road node corresponding to the first position point.
For example, the road information of a road node includes the boundary position information of the corresponding road. In this case, the area of the road corresponding to each road node can be determined from that road node's road information. The area containing the first position point can then be screened out from the determined areas, and the road node corresponding to the screened-out area is the road node corresponding to the first position point.
In addition, after the road node corresponding to the first position point is determined, the portion of that road node within a specified distance of the first position point may be used as the first scanned road segment. The specified distance is preconfigured. For example, if the specified distance is 20 meters, the portion of the road node within 20 meters before and after the first position point may be used as the first scanned road segment.
Optionally, after the road node corresponding to the first position point is determined, the road node corresponding to the first position point in the road topology model, together with the other road nodes having a connection relationship with it, may be used as the first scanned road segment. For example, fig. 7 is a schematic diagram of a road topology model provided in the embodiment of the present application. As shown in fig. 7, the road topology model includes 7 road nodes with the connection relationships depicted there: the road indicated by road node 1 is connected to the road indicated by road node 2, the road indicated by road node 2 is connected to the road indicated by road node 3, the road indicated by road node 3 is connected to the road indicated by road node 4, and the road indicated by road node 4 is connected to the roads indicated by road nodes 5, 6, and 7, respectively. Assuming that the road node corresponding to the first position point is road node 2, the three road segments indicated by road node 1, road node 2, and road node 3 may be used as the first scanned road segment. Further, a road node having connection relationships with multiple road nodes in fig. 7 may also be referred to as a junction road node (junction).
The above merely illustrates two possible implementations of determining the first scanned road segment from the road node corresponding to the first position point. The embodiment of the present application does not limit the specific implementation; it is only required that the determined first scanned road segment be a road segment near the first position point of the autonomous vehicle. Other implementations are not enumerated here.
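The second strategy above — take the ego vehicle's road node plus its connected nodes — can be sketched over a minimal adjacency map of the fig. 7 topology (an illustration, not the patent's data structure; the integer node IDs are assumptions):

```python
# A minimal road topology model as an adjacency map, mirroring the
# connection relationships of fig. 7 (illustrative sketch only).
road_graph = {
    1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5, 6, 7],
    5: [4], 6: [4], 7: [4],
}

def first_scanned_segment(node_of_position: int) -> list:
    """Road node containing the first position point, plus its neighbors."""
    return sorted([node_of_position] + road_graph[node_of_position])

print(first_scanned_segment(2))  # [1, 2, 3]
```

With the ego vehicle on road node 2, the first scanned road segment is nodes 1, 2, and 3, exactly as in the example; at the junction node 4, it would expand to nodes 3 through 7.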
After the first scanned road segment is determined through step 601, in order to quickly determine the virtual objects distributed on it, the preconfigured road topology model may include, in addition to the road information of each road node, virtual object information for each road node. The virtual object information of each road node includes the three-dimensional model data of the virtual objects distributed at the corresponding road node, so that subsequently, in step 602, the virtual objects distributed on the first scanned road segment can be determined quickly and directly from the road topology model. It should be noted that, in the embodiment of the present application, the virtual objects distributed on a road segment include virtual objects distributed on the road itself and on both sides of the road.
In one possible implementation, the three-dimensional model data of the virtual object may include three-dimensional position information of each surface point on the virtual object, so as to subsequently determine point cloud data according to the three-dimensional position information of the surface point.
In the embodiment of the present application, the three-dimensional model data of the virtual objects can be preprocessed in advance according to the requirements on road conditions during the driving of an autonomous vehicle, so as to further increase the speed of determining point cloud data later. The preprocessing methods for three-dimensional model data provided in the embodiments of the present application are explained below.
In one possible implementation, it is considered that during driving an autonomous vehicle needs to know the detailed three-dimensional structure of nearby objects to avoid traffic accidents, while for distant objects only the approximate contour is needed to guide current planning. Therefore, for a first virtual object among the plurality of virtual objects, the three-dimensional model data of the first virtual object may further include the three-dimensional position information of the bounding box of the first virtual object, the bounding box being a geometric body that encloses the first virtual object. The first virtual object is any one of the plurality of virtual objects. That is, in the road topology model, the three-dimensional model data of each virtual object may further include the three-dimensional position information of the bounding box of the corresponding virtual object, so that point cloud data can later be determined quickly from the outline of the virtual object.
Fig. 8 is a schematic diagram of a bounding box of a virtual object according to an embodiment of the present application. The bounding box shown in fig. 8 may also be referred to as an AABB (axis-aligned bounding box). A bounding box is a simple geometric space that contains a virtual object of complex shape. The purpose of adding bounding boxes to virtual objects is to perform collision detection quickly, or to perform filtering before accurate collision detection. As shown in fig. 8, when a laser beam collides with the bounding box, it may collide with the virtual object inside the bounding box; when the laser beam does not collide with the bounding box, it cannot collide with the virtual object inside. Thus, the bounding box of a virtual object represents the approximate outline of the virtual object.
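The quick filtering step described above can be sketched as a standard slab-method ray/AABB overlap test (an illustration of the general technique, not the patent's code): if the simulated laser beam misses the box, the per-triangle intersection of the enclosed object can be skipped entirely.

```python
# Slab-method ray vs. axis-aligned bounding box test (illustrative
# sketch, not the patent's implementation). direction need not be
# normalized; origin is the laser-beam origin.
def ray_hits_aabb(origin, direction, box_min, box_max) -> bool:
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        if abs(direction[axis]) < 1e-12:
            # Ray parallel to these slabs: must already lie between them.
            if not (box_min[axis] <= origin[axis] <= box_max[axis]):
                return False
        else:
            t1 = (box_min[axis] - origin[axis]) / direction[axis]
            t2 = (box_max[axis] - origin[axis]) / direction[axis]
            t_near = max(t_near, min(t1, t2))
            t_far = min(t_far, max(t1, t2))
            if t_near > t_far:
                return False
    return True

print(ray_hits_aabb((0, 0, 0), (1, 0, 0), (2, -1, -1), (3, 1, 1)))  # True
print(ray_hits_aabb((0, 0, 0), (0, 1, 0), (2, -1, -1), (3, 1, 1)))  # False
```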
In another possible implementation, it is considered that virtual objects on both sides of the road that are very far from the autonomous vehicle have no effect on its driving. Therefore, virtual objects far from both sides of the road can be filtered out. In this case, in the road topology model, for a first road node among the plurality of road nodes, the distance between each virtual object distributed at the first road node and the center line of the first road node is within a first distance threshold. That is, in the road topology model, the distance between the virtual objects on both sides of the road corresponding to each road node and the center line of that road node is within the first distance threshold.
The first distance threshold is a preset distance threshold, for example 10 meters. For convenience in the following description, this preprocessing method is referred to as distant-object removal processing. In addition, the first distance threshold should be larger than half the road width, where road width refers to the distance between the two boundaries on either side of a road node.
In another possible implementation, it is considered that parts of virtual objects on the road and on both sides of the road that are extremely high above the ground have no influence on the driving of the autonomous vehicle. For example, the upper floors of high-rise buildings on both sides of the road do not affect the vehicle. It is thus also possible to filter out virtual objects, or parts of virtual objects, whose height above the ground is particularly large. In this case, in the road topology model, the three-dimensional model data of each virtual object includes the three-dimensional position information of each of a plurality of surface points of the corresponding virtual object, the three-dimensional position information includes the height of the surface point, and the heights in the three-dimensional model data of the virtual objects distributed at a road node are within a height threshold. Fig. 9 is a schematic diagram of a high-rise building provided in the embodiment of the present application. As shown in fig. 9, the three-dimensional model data of the high-rise portion of the building is not included in the road topology model. For convenience in the following description, this preprocessing method is referred to as high-object removal processing.
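The two preprocessing passes can be sketched as simple filters (an illustration only; the object layout, field names such as `t_offset`, and the example thresholds are assumptions, not the patent's data format):

```python
# Distant-object removal: drop objects whose lateral distance from the
# road center line exceeds the first distance threshold. High-object
# removal: drop surface points above the height threshold.
# Illustrative sketch — field names and thresholds are assumed.
def remove_distant(objects, first_distance_threshold=10.0):
    """Keep objects whose |offset from the center line| is within threshold."""
    return [o for o in objects if abs(o["t_offset"]) <= first_distance_threshold]

def remove_high_points(surface_points, height_threshold=8.0):
    """Keep surface points (x, y, z) whose height z is within threshold."""
    return [p for p in surface_points if p[2] <= height_threshold]

trees = [{"name": "tree", "t_offset": 5.0}, {"name": "far sign", "t_offset": 25.0}]
print([o["name"] for o in remove_distant(trees)])  # ['tree']
points = [(0, 0, 2.0), (0, 0, 50.0)]
print(remove_high_points(points))                  # [(0, 0, 2.0)]
```

Run once offline over the road topology model, these filters shrink the data volume so that no per-frame filtering is needed at simulation time.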
It should be noted that the three-dimensional model data of each virtual object in the road topology model may be filtered in advance; in that case the amount of data contained in the road topology model is relatively small, and the simplified three-dimensional model data of the virtual objects can be used directly regardless of where the vehicle is located. Alternatively, the road topology model may include only the three-dimensional position information of each surface point on each virtual object, and the three-dimensional model data of the virtual objects on the first scanned road segment is filtered by the above filtering methods when the first scanned road segment is determined; however, the filtering then has to be repeated at every position point, so the required amount of computation is still large.
In addition, considering that different processing manners may subsequently be adopted for virtual objects at different distances from the first position point, in another possible implementation, in the road topology model, the three-dimensional model data of each virtual object may further include the specific position of the corresponding virtual object on the road. Specifically, for any road node, a road starting point may be set on the road indicated by that road node; the road starting point may be the position point at the center of the road at its starting position. The specific position of any virtual object on, or on either side of, the road indicated by the road node can then be indicated by two parameters: one is the distance between the center point of the bounding box of the virtual object and the center line of the road, which may be denoted t-offset; the other is the distance along the travel direction of the road between the center point of the bounding box of the virtual object and the road starting point, which may be denoted s-offset.
Fig. 10 is a schematic diagram of a road node according to an embodiment of the present application. As shown in fig. 10, the road starting point on the road indicated by the road node is point A, and the t-offset and s-offset of one tree along the road are 5 meters and 50 meters, respectively. Once the specific position of the road starting point is known, the distance between the tree and the first position point can be determined from the first position point and the two parameters t-offset and s-offset.
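Recovering a world position and an ego distance from the (s-offset, t-offset) road coordinates can be sketched as follows, assuming for simplicity a straight road that starts at point A and runs along +x (this simplifying assumption, and the numbers reused from the fig. 10 example, are illustrative only — a curved road would require integrating along the center line):

```python
# (s-offset, t-offset) road coordinates to world coordinates for a
# straight road from A along +x: s advances along the road, t is the
# lateral offset from the center line. Illustrative sketch only.
import math

def road_to_world(a_xy, s_offset, t_offset):
    return (a_xy[0] + s_offset, a_xy[1] + t_offset)

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

A = (0.0, 0.0)                       # road starting point, fig. 10
tree = road_to_world(A, 50.0, 5.0)   # the tree: s = 50 m, t = 5 m
ego = road_to_world(A, 20.0, 0.0)    # an assumed first position point
print(tree, round(distance(tree, ego), 2))  # (50.0, 5.0) 30.41
```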
Alternatively, the specific position of each virtual object on the road in the road topology model may be indicated directly by the absolute coordinates of the center point of the virtual object's bounding box in the geodetic coordinate system, so that, for any road node, the above-mentioned road starting point need not be configured.
Step 602: three-dimensional model data for each of a plurality of virtual objects distributed over the first scan segment is determined.
As can be seen from the explanation of the road topology model in step 601, in one possible implementation, the road topology model may include a plurality of road nodes and virtual object information of each road node, where the virtual object information of each road node includes three-dimensional model data of each virtual object distributed in the corresponding road node. At this time, the implementation manner of step 602 may be: three-dimensional model data of each of a plurality of virtual objects distributed in the first scanning section is acquired from the virtual object information of each road node.
Based on the description of the road topology model in step 601, in one possible implementation, for a first virtual object among the plurality of virtual objects distributed over the first scanned segment, the three-dimensional model data of the first virtual object includes: three-dimensional position information of each surface point on the first virtual object. The first virtual object is any one of the plurality of virtual objects.
In another possible implementation, the three-dimensional model data of the first virtual object may further include: three-dimensional position information of a bounding box of the first virtual object.
In addition, based on the above preprocessing method for the three-dimensional model data in the road topology model, if the three-dimensional model data of the virtual objects in the road topology model has been preprocessed in advance, the three-dimensional model data of the virtual objects obtained in step 602 is the preprocessed data.
Thus, in one possible implementation, if distant-object removal is performed in advance on the virtual objects in the road topology model, the distance between each virtual object distributed over the first scanned segment and the center line of the first scanned segment is within the first distance threshold. The first distance threshold is not described in detail here.
In another possible implementation, if tall-object removal is performed in advance on the virtual objects in the road topology model, the heights in the three-dimensional model data of the virtual objects distributed over the first scanned segment are all within the height threshold.
In addition, in one possible implementation, when the three-dimensional model data of each of the plurality of virtual objects distributed over the first scanned road section is acquired from the virtual object information of each road node, the three-dimensional model data of the virtual objects related to the first scanned road section in the road topology model may be acquired directly. This assumes that the displacement of the vehicle is substantially 0 during one horizontal scanning revolution of the laser radar while the vehicle is at the first position point.
In another possible implementation, if the displacement of the vehicle during one horizontal scanning revolution of the laser radar at the first position point is relatively large, the vehicle moves forward while the laser radar scans. In that case, to improve the consistency between the point cloud data acquired by the laser radar in the simulation test and the point cloud data that would be acquired in the real world, the three-dimensional model data of each of the plurality of virtual objects distributed over the first scanned road section may be determined region by region.
The three-dimensional model data of each of the plurality of virtual objects distributed over the first scanned road section may be determined region by region as follows: dividing the first scanned road section into a plurality of sector areas centered on the first position point, to obtain boundary information of each sector area; arranging the plurality of sector areas in sequence according to the scanning direction of the laser detection device; for the first arranged sector area, acquiring the three-dimensional model data of the virtual objects located in the first sector area from the virtual object information of each road node according to the boundary information of the first sector area; for the i-th arranged sector area, determining the displacement of the vehicle while the laser detection device scans from the first sector area to the i-th sector area, based on the scanning speed of the laser detection device and the moving speed of the vehicle, where i is a positive integer greater than or equal to 2 and less than or equal to the number of divided sector areas; updating the boundary information of the i-th sector area according to that displacement; and acquiring the three-dimensional model data of the virtual objects located in the i-th sector area from the virtual object information of each road node according to the updated boundary information of the i-th sector area.
The boundary information of each sector area indicates the coverage area of that sector area. The first arranged sector area corresponds to the area scanned by the laser radar while the vehicle is still at the first position point, so the three-dimensional model data of the virtual objects located in it can be acquired directly from the virtual object information of each road node. By the time the laser radar scans the i-th arranged sector area, the vehicle has moved forward by some displacement, so the boundary information of the i-th sector area can be updated according to the vehicle's position point after the movement. This is equivalent to dynamically updating the virtual objects in each sector area, so that the acquired virtual objects match the objects that would be acquired in the real world as closely as possible.
In addition, the boundary information of the i-th sector area may be updated according to the movement displacement as follows: determine the position point of the vehicle after it has moved, from the first position point and the movement displacement; divide the first scanned road section into a plurality of sector areas centered on that moved position point; and arrange the sector areas in the same manner as before. The boundary information of the i-th arranged sector area is then the updated boundary information of the i-th sector area.
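The sector division and boundary update described above can be sketched as follows (a minimal illustration with hypothetical function names; the scan period is taken as the time of one full lidar revolution, and the vehicle is assumed to move at constant speed along its heading):

```python
import math

def sector_boundaries(center, num_sectors, start_angle=0.0):
    """Return [(center, lo, hi), ...]: the apex point and angular
    boundaries of num_sectors equal sector areas around `center`."""
    step = 2.0 * math.pi / num_sectors
    return [(center, start_angle + i * step, start_angle + (i + 1) * step)
            for i in range(num_sectors)]

def updated_sector(first_point, speed, scan_period, num_sectors, i,
                   heading_rad=0.0, start_angle=0.0):
    """Boundary of the i-th sector (1-based) after re-centering on the
    vehicle position reached while the lidar swept sectors 1..i-1."""
    # Displacement accumulated while sweeping (i - 1) of num_sectors sectors.
    dt = scan_period * (i - 1) / num_sectors
    moved = (first_point[0] + speed * dt * math.cos(heading_rad),
             first_point[1] + speed * dt * math.sin(heading_rad))
    return sector_boundaries(moved, num_sectors, start_angle)[i - 1]
```

For example, with 8 sectors, a vehicle speed of 20 m/s, and a 0.1 s revolution, the center of the 5th sector is displaced by 20 × 0.1 × 4/8 = 1.0 m relative to the first position point.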
Fig. 11 is a schematic diagram of sector areas provided in an embodiment of the present application. As shown in fig. 11, when the vehicle is at the first position point, the first scanned road section may be divided into 8 sector areas centered on the first position point; the 8 sector areas are labeled ①②③④⑤⑥⑦⑧ in fig. 11.
For example, for the 8 sector areas shown in fig. 11, assuming that the scanning direction of the lidar follows the sequence ① → ② → … → ⑧, the sequence of the 8 sector areas may be sector area ①, sector area ②, sector area ③, sector area ④, sector area ⑤, sector area ⑥, sector area ⑦, sector area ⑧; the sequence of the 8 sector areas may also be sector area ②, sector area ③, sector area ④, sector area ⑤, sector area ⑥, sector area ⑦, sector area ⑧, sector area ①.
Further, the number of divided sector areas may be set according to the scanning speed of the laser radar and the moving speed of the vehicle. In one possible implementation, the number of divided sector areas is negatively correlated with the scanning speed of the laser radar and positively correlated with the moving speed of the vehicle. That is, the faster the laser radar scans relative to the vehicle's movement, the smaller the displacement of the vehicle during one scanning revolution, and the more slowly the objects around the vehicle change during that revolution; the number of divided sector areas can therefore be smaller and, correspondingly, the coverage area of each sector area larger.
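One way to realize this correlation is to allocate roughly one sector per fixed amount of vehicle displacement during a single lidar revolution; the constants and function name below are illustrative, not from the source:

```python
def sector_count(vehicle_speed, scan_rate_hz, meters_per_sector=0.25,
                 min_sectors=4, max_sectors=64):
    """Heuristic sector count: one sector per `meters_per_sector` of vehicle
    displacement during a single lidar revolution, clamped to a sane range.

    Displacement per revolution grows with vehicle speed and shrinks with
    scan rate, so the count correlates positively with speed and
    negatively with scan rate.
    """
    displacement = vehicle_speed / scan_rate_hz  # meters moved per revolution
    n = round(displacement / meters_per_sector)
    return max(min_sectors, min(max_sectors, n))
```

At 20 m/s and a 10 Hz lidar the vehicle moves 2 m per revolution, giving 8 sectors; doubling the scan rate halves the displacement and reduces the count.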
In addition, after the sector areas are divided, the process of acquiring the three-dimensional model data of the virtual objects in each sector area can be processed in parallel, so that the efficiency of finally acquiring the point cloud data is improved.
Step 603: and determining point cloud data acquired by the laser detection device at the first position point according to the laser scanning range of the laser detection device simulated on the vehicle and the three-dimensional model data of each virtual object in the plurality of virtual objects.
In the embodiment of the present application, the point cloud data acquired by the laser detection device at the first position point is essentially the set of intersection points of the laser beams on the virtual objects. Therefore, in one possible implementation, step 603 may specifically be: for a first virtual object, determining, from the laser scanning range, the angle range of the laser beams emitted by the laser detection device that cover the first virtual object, according to the three-dimensional position information of the bounding box of the first virtual object, the orientation of the vehicle, and the first position point, where the first virtual object is any one of the plurality of virtual objects; and, for a first laser beam within that angle range, determining the intersection point of the first laser beam on the first virtual object according to the three-dimensional model data of the first virtual object, and taking the three-dimensional position information of the intersection point as the point cloud data corresponding to the first laser beam, where the first laser beam is any one laser beam within the angle range.
The first virtual object refers to any virtual object in the plurality of virtual objects, that is, for any virtual object determined in step 602, the above implementation manner is adopted to determine the point cloud data projected onto the virtual object by the laser beam.
Fig. 12 is a schematic diagram of an angle range provided by an embodiment of the present application. As shown in fig. 12, assume there is a target vehicle in front of the autonomous vehicle. According to the orientation of the autonomous vehicle (for example, its heading direction), the position information of the laser radar (i.e., the first position point), and the AABB box information of the target vehicle, the angle range of the laser beams irradiating the target vehicle may be determined as: a–b degrees in the transverse direction, and m–n degrees in the longitudinal direction. (Only the transverse angle range is illustrated in fig. 12; the longitudinal angle range is not shown.)
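A minimal sketch of computing the transverse angle range from an AABB box (illustrative function, 2-D top-down view; it assumes the box does not contain the lidar and does not straddle the ±180° wrap-around behind the vehicle):

```python
import math

def lateral_angle_range(lidar_pos, heading_rad, aabb_min, aabb_max):
    """Transverse angle interval [a, b] (radians, relative to the vehicle
    heading) of the laser beams that can hit an axis-aligned bounding box,
    computed from the bearings of the four box corners."""
    corners = [(x, y) for x in (aabb_min[0], aabb_max[0])
                      for y in (aabb_min[1], aabb_max[1])]
    angles = []
    for cx, cy in corners:
        a = math.atan2(cy - lidar_pos[1], cx - lidar_pos[0]) - heading_rad
        # Normalize to (-pi, pi] so min/max are meaningful.
        angles.append(math.atan2(math.sin(a), math.cos(a)))
    return min(angles), max(angles)
```

For a 2 m × 2 m box centered 10 m straight ahead, the range is symmetric: roughly ±atan(1/9) ≈ ±6.3° around the heading.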
During driving, the autonomous vehicle needs to know the detailed three-dimensional structure of nearby objects in order to avoid traffic accidents, whereas for distant objects only the approximate contour is needed to guide path planning. Therefore, when determining, for the first laser beam within the angle range, the intersection point of the first laser beam on the first virtual object according to the three-dimensional model data of the first virtual object, different processing manners can be adopted for virtual objects at different distances from the first position point.
In a possible implementation manner, in a case that the first virtual object is a close-range virtual object, the determining, according to the three-dimensional model data of the first virtual object, an intersection point of the first laser beam on the first virtual object may specifically be: determining a plurality of first surface elements of the first virtual object according to the three-dimensional position information of each surface point in the three-dimensional model data of the first virtual object, wherein each first surface element is a surface formed by three or more surface points in each surface point of the first virtual object, and the near-distance virtual object is a virtual object of which the distance from the first position point is within a second distance threshold; and determining a first surface element which is intersected with the first laser beam in the plurality of first surface elements, and taking an intersection point between the first laser beam and the intersected first surface element as an intersection point of the first laser beam on the first virtual object.
That is, for a virtual object at a close distance, the exact surface structure of the virtual object is considered when determining the point cloud data.
For a close-range virtual object, point cloud data is obtained for every laser beam within the determined angle range. Because the distance and angle of each laser beam relative to the first virtual object, and the relative position of each surface element on the first virtual object, are known, the surface element the laser beam intersects can be determined directly; an intersection calculation then yields the intersection point detected by that beam, and the position information of the intersection point is the point cloud data.
As shown in fig. 12, assuming that the first virtual object is a target vehicle opposite to the vehicle, the process of obtaining point cloud data for all laser beams may be:
First, the laser beam at a° transversely and (m+n)/2° longitudinally is taken and intersected with the outermost surface element A of the target vehicle (that is, the surface element where the origin is located in fig. 13), to obtain point cloud data.
Next, the ray at (a+0.45)° transversely is taken (0.45° is the horizontal resolution of the laser detection device) and intersected with surface element A. If it intersects surface element A, point cloud data is obtained; if not, the surface element to the left of surface element A is taken for intersection, and so on until an intersecting surface element is found and point cloud data is obtained.
The calculation in the longitudinal direction is similar to that in the transverse direction, except that the surface elements above and below are taken for the intersection calculation. Performing the intersection calculation for all laser rays in the range of a–b degrees transversely and m–n degrees longitudinally yields all the point cloud data of the target vehicle detected by the laser radar.
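The per-beam surface-element intersection can be implemented with a standard ray/triangle test such as Moeller-Trumbore; the sketch below is one common formulation, not the patent's specific procedure:

```python
def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Moeller-Trumbore ray/triangle intersection.

    Returns the 3-D intersection point, or None if the ray misses the
    triangle. A surface element formed by more than three surface points
    can be split into triangles, each tested in turn.
    """
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, edge2)
    det = dot(edge1, h)
    if abs(det) < eps:          # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = sub(origin, v0)
    u = inv * dot(s, h)
    if u < 0.0 or u > 1.0:      # outside the triangle along edge1
        return None
    q = cross(s, edge1)
    v = inv * dot(direction, q)
    if v < 0.0 or u + v > 1.0:  # outside the triangle along edge2
        return None
    t = inv * dot(edge2, q)
    if t <= eps:                # intersection behind the ray origin
        return None
    return (origin[0] + t * direction[0],
            origin[1] + t * direction[1],
            origin[2] + t * direction[2])
```

The returned intersection point is exactly the three-dimensional position information used as the point cloud data for that beam.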
The intersection of the laser beams can be processed in parallel, and will not be described in detail here.
The above is used to explain the process of acquiring point cloud data for a close-range virtual object. In another possible implementation manner, in the case that the first virtual object is a distant virtual object, the determining the intersection point of the first laser beam on the first virtual object according to the three-dimensional model data of the first virtual object may specifically be: determining a plurality of second surface elements of the bounding box according to the three-dimensional position information of the bounding box of the first virtual object under the condition that the first virtual object is a long-distance virtual object, wherein the long-distance virtual object is a virtual object of which the distance from the first position point is between a second distance threshold and a first distance threshold, and the first distance threshold is greater than the second distance threshold; and determining a second surface element which is intersected with the first laser beam in the plurality of second surface elements, and taking an intersection point between the first laser beam and the intersected second surface element as an intersection point of the first laser beam on the first virtual object.
That is, for a distant virtual object, only the approximate contour of the virtual object is considered when determining the point cloud data. For example, for buildings, enclosing walls, and the like at the outermost edge of the road, the laser beam can be intersected directly with the AABB box to reduce the amount of calculation. The specific intersection process is not described in detail here.
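Intersecting a beam directly with an AABB box is commonly done with the slab method; the following is an illustrative sketch rather than the patent's exact procedure:

```python
def ray_aabb_intersect(origin, direction, box_min, box_max):
    """Slab-method ray/AABB test: returns the nearest intersection point
    on the box surface, or None if the forward ray misses the box."""
    t_near, t_far = 0.0, float("inf")
    for k in range(3):
        if abs(direction[k]) < 1e-12:
            # Ray parallel to this slab: must already lie between its planes.
            if origin[k] < box_min[k] or origin[k] > box_max[k]:
                return None
            continue
        t1 = (box_min[k] - origin[k]) / direction[k]
        t2 = (box_max[k] - origin[k]) / direction[k]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
        if t_near > t_far:
            return None
    return tuple(origin[k] + t_near * direction[k] for k in range(3))
```

The entry point on the box surface then serves as the coarse "contour" return for the distant object.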
In addition, considering the shielding among virtual objects, the same laser beam may in theory be projected onto several different virtual objects, but in practice each laser beam can acquire only one piece of point cloud data. For example, in fig. 12, suppose the angle range of the target vehicle is 20–60 degrees transversely and 10–20 degrees longitudinally, and the angle range of the laser beams projected onto the enclosing wall farther behind the target vehicle also includes 20–60 degrees transversely and 10–20 degrees longitudinally. Then the laser beams in that angle range only need to undergo the intersection calculation once, with the target vehicle; no repeated calculation is needed for the enclosing wall behind it.
Therefore, before determining the intersection point of the first laser beam on the first virtual object according to the three-dimensional model data of the first virtual object, it is further determined whether there is another virtual object blocking the first virtual object in the projection direction of the first laser beam. If there is no other virtual object obstructing the first virtual object in the projection direction of the first laser beam, an operation of determining an intersection point of the first laser beam on the first virtual object is performed. If there are other virtual objects obstructing the first virtual object in the projection direction of the first laser beam, the operation of determining the intersection point of the first laser beam on the first virtual object need not be performed. That is, in this case, it is not necessary to determine the point cloud data of the projection point of the first laser beam on the first virtual object.
Whether there is another virtual object blocking the first virtual object in the projection direction of the first laser beam may be determined based on the angle of the first laser beam, the first position point, and the virtual object information of each virtual object determined in step 602. In one possible implementation, if, in the projection direction of the first laser beam, the coverage area of another virtual object's bounding box lies between the first position point and the first virtual object, it is considered that another virtual object blocking the first virtual object exists in that projection direction.
In the process described above for determining whether another virtual object blocks the first virtual object in the projection direction of the first laser beam, the other virtual object is treated as a simple model that the laser beam cannot penetrate. In the real world, however, an object with a complex model, such as a tree, has through holes that a laser beam can pass through; the first laser beam may therefore still be projected onto the first virtual object even though it is nominally blocked by the bounding box corresponding to the tree.
Therefore, in the embodiment of the present application, if another virtual object blocks the first virtual object in the projection direction of the first laser beam but that object is a non-strictly shielding object, the operation of determining the intersection point of the first laser beam on the first virtual object is still performed; a non-strictly shielding object is an object with through holes through which a laser beam can pass. In that case, after the intersection point of the first laser beam on the first virtual object is determined, if the first laser beam has no intersection point on the other virtual object, the three-dimensional position information of the intersection point on the first virtual object is taken as the point cloud data corresponding to the first laser beam.
If the first laser beam has no intersection point on the other virtual object, it indicates that the first laser beam can pass through a through hole of that object. In this case, the three-dimensional position information of the intersection point of the first laser beam on the first virtual object is taken as the point cloud data corresponding to the first laser beam.
It should be noted that, when the method provided by the present application is applied, whether the shielding virtual object is a non-strictly shielding object need not be distinguished; a simple model can be used uniformly for all objects, which further reduces the amount of calculation.
Optionally, the shielding situation may be ignored altogether, and the point cloud data of each laser beam in the angle range determined directly for all virtual objects. Subsequently, for a same laser beam, if the beam corresponds to multiple pieces of point cloud data, it means the beam could be projected onto multiple position points; the point cloud data of the position point closest to the first position point is then selected as the point cloud data of that beam. Although the amount of calculation may be larger, this simplifies the process of acquiring the point cloud data.
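This nearest-point selection can be sketched in a few lines (hypothetical helper; `beam_hits` holds every candidate intersection of one beam on all virtual objects, with occlusion ignored):

```python
import math

def select_nearest(beam_hits, lidar_pos):
    """Keep only the candidate hit closest to the lidar: a laser beam
    physically returns from the first surface it reaches, so the nearest
    of the per-object intersection points is the beam's point cloud datum."""
    if not beam_hits:
        return None
    return min(beam_hits, key=lambda p: math.dist(p, lidar_pos))
```

Applied per beam after all per-object intersections, this resolves occlusion without any explicit blocking test.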
Steps 601 to 603 above explain how to determine the point cloud data acquired by the laser detection device at the first position point. In a simulation test, as the vehicle moves through the virtual space, the point cloud data at each position point needs to be updated in time. However, during the movement of the vehicle there may be objects around it whose relative positions to the vehicle do not change, and for those virtual objects the point cloud data need not be determined again. Therefore, in the embodiment of the present application, the amount of calculation required to determine the point cloud data can be reduced according to the relative state between the vehicle and the surrounding virtual objects.
Specifically, after the point cloud data acquired by the laser detection device at the first position point has been determined through step 603, the point cloud data corresponding to each laser beam may be cached. When the vehicle is at a second position point in the virtual space, a second scanned road section is determined according to the second position point and the road topology model, and the plurality of virtual objects distributed over the second scanned road section is determined. If a same virtual object exists among the virtual objects distributed over the first scanned road section and those distributed over the second scanned road section, and the relative distance between that virtual object and the vehicle does not change between the first position point and the second position point, the cached point cloud data corresponding to the laser beams projected onto that virtual object is taken as the point cloud data corresponding to the corresponding laser beams at the second position point.
That the relative distance between the same virtual object and the vehicle does not change between the first position point and the second position point indicates that the virtual object remained relatively static with respect to the vehicle while the vehicle moved, so the point cloud data need not be determined again for that virtual object.
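A minimal cache along these lines (class and method names are illustrative) keys each cached point by object and beam, and reuses it only when the object's distance to the vehicle is unchanged:

```python
class PointCloudCache:
    """Per-beam cache keyed by (object id, beam index); entries can be
    reused at the next position point for objects whose distance to the
    vehicle has not changed (relatively static objects)."""

    def __init__(self):
        self._cache = {}

    def store(self, obj_id, beam_id, point):
        self._cache[(obj_id, beam_id)] = point

    def reuse(self, obj_id, beam_id, prev_dist, new_dist, tol=1e-6):
        """Return the cached point if the object stayed relatively static
        between the two position points, else None (recompute needed)."""
        if abs(prev_dist - new_dist) <= tol:
            return self._cache.get((obj_id, beam_id))
        return None
```

When `reuse` returns None, the intersection for that beam is recomputed as in step 603 and the cache entry is overwritten.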
It should be noted that while the vehicle is running, even if it is in a stationary state, dynamic objects around it, such as other vehicles, still change. The second position point and the first position point may therefore be the same position point; in that case the point cloud data need not be repeatedly calculated for the static objects around the vehicle.
In addition, while the vehicle is driving, some laser beams may always be projected onto the ground or onto parts of the vehicle itself (such as the base on which the laser radar is mounted), and the point cloud data of those beams essentially does not change. Therefore, in the embodiment of the present application, the point cloud data of all the laser beams of the laser detection device may first be initialized, and the initialized point cloud data is subsequently updated only where the point cloud data changes, further reducing the amount of calculation.
As shown in fig. 14, part of the laser beams of the laser radar are emitted toward the ground, and the point cloud data of those beams essentially does not change afterward; for the remaining lasers, if no target object is detected, the point cloud data can be set to null. This point cloud data may be stored in a configuration file. When the simulation starts, the configuration file is loaded to generate a point cloud cache. After the simulation has started, if the laser radar detects a virtual object, the null point cloud data in the cache is updated.
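The configuration-file initialization and the subsequent update of empty entries can be sketched as follows (the JSON layout and function names are assumptions, not from the source):

```python
import json

def load_initial_cloud(path):
    """Load the per-beam initial point cloud from a JSON configuration file:
    fixed ground returns keep their points, beams with no target are null."""
    with open(path) as f:
        return {int(beam_id): (tuple(point) if point is not None else None)
                for beam_id, point in json.load(f).items()}

def update_cloud(cloud, detections):
    """Overwrite only the beams for which the lidar detected a virtual
    object; every other beam keeps its initialized value."""
    for beam_id, point in detections.items():
        cloud[beam_id] = point
    return cloud
```

Reloading the configuration file, as described below, amounts to calling `load_initial_cloud` again to reset every beam.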
In addition, based on the configuration file, if no other virtual object appears around the vehicle for a certain period while it is driving, the configuration file can be reloaded, which is equivalent to re-initializing the point cloud data of all laser beams; subsequently, the point cloud data continues to be updated according to the detected virtual objects.
In the embodiment of the application, it is exploited that an autonomous vehicle only needs to perceive the objects near its driving road, which reduces the amount of data processed when determining the point cloud data. For a virtual space to be simulated, a road topology model for the virtual space may be constructed in advance; the road topology model indicates the road topology in the virtual space. When the point cloud data for the vehicle at the first position point is subsequently determined, the road section currently to be scanned can be determined directly based on the road topology model, without traversing all nodes of a K-D tree as in the related art. This reduces the amount of data processed in acquiring the point cloud data and, correspondingly, improves the efficiency of point cloud data acquisition.
Fig. 15 is a schematic diagram of an apparatus for acquiring point cloud data according to an embodiment of the present disclosure. As shown in fig. 15, the apparatus 1500 includes:
a determining module 1501, configured to determine, if the vehicle is at a first location point in the virtual space, a first scanned segment according to the first location point and a road topology model, where the road topology model is used to indicate a road topology in the virtual space. The specific implementation manner may refer to step 601 in the embodiment of fig. 6.
The determining module 1501 is further configured to determine three-dimensional model data of each of the plurality of virtual objects distributed in the first scan segment. The specific implementation manner may refer to step 602 in the embodiment of fig. 6.
The determining module 1501 is further configured to determine point cloud data acquired by the laser detection device at the first position point according to the laser scanning range of the laser detection device simulated on the vehicle and the three-dimensional model data of each virtual object in the plurality of virtual objects. The specific implementation manner may refer to step 603 in the embodiment of fig. 6.
Optionally, the road topology model includes a plurality of road nodes, and road information of each of the plurality of road nodes;
the determination module is to:
determining a road node corresponding to the first position point from a plurality of road nodes according to the road information of each road node;
and determining a first scanning road section according to the road node corresponding to the first position point.
Optionally, the road topology model includes a plurality of road nodes and virtual object information of each of the plurality of road nodes, the virtual object information of the first road node includes three-dimensional model data of each virtual object distributed at the first road node, and the first road node is any one of the plurality of road nodes;
the determination module is to:
three-dimensional model data of each of a plurality of virtual objects distributed in the first scanning section is acquired from the virtual object information of each road node.
Optionally, the distance between the virtual object distributed over the first scan segment and the centerline of the first scan segment is within a first distance threshold.
Optionally, the three-dimensional model data of the first virtual object of the plurality of virtual objects includes three-dimensional position information of each surface point of the plurality of surface points of the first virtual object, the three-dimensional position information including a height of the surface point, the first virtual object being any one of the plurality of virtual objects;
the altitude in the three-dimensional model data of the virtual object distributed over the first scanned segment is within the altitude threshold.
Optionally, the three-dimensional model data of the first virtual object in the plurality of virtual objects includes three-dimensional position information of a bounding box of the first virtual object, where the bounding box of the first virtual object is a geometric body that surrounds the first virtual object, and the first virtual object is any virtual object in the plurality of virtual objects;
the determination module is to:
determining an angle range of the laser beam emitted by the laser detection device covering the first virtual object from the laser scanning range according to the three-dimensional position information of the bounding box of the first virtual object, the direction of the vehicle and the first position point;
determining the intersection point of the first laser beam on the first virtual object according to the three-dimensional model data of the first virtual object, and taking the three-dimensional position information of the intersection point as the point cloud data corresponding to the first laser beam, wherein the first laser beam is any one laser beam in the angle range.
Optionally, the determining module is configured to:
when the first virtual object is a near-distance virtual object, determining a plurality of first surface elements of the first virtual object according to the three-dimensional position information of each surface point in the three-dimensional model data of the first virtual object, wherein each first surface element is a surface formed by three or more of the surface points of the first virtual object, and a near-distance virtual object is a virtual object whose distance from the first position point is within a second distance threshold;
and determining a first surface element, among the plurality of first surface elements, that intersects the first laser beam, and taking the intersection point between the first laser beam and the intersecting first surface element as the intersection point of the first laser beam on the first virtual object.
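For the near-distance case, the surface-element intersection can be illustrated with the standard Möller–Trumbore ray/triangle test; this is a sketch with illustrative names standing in for the patent's first-surface-element step, assuming each surface element is a triangle:

```python
def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller–Trumbore ray/triangle intersection.
    Returns the 3D intersection point, or None if the beam misses."""
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:
        return None                      # beam parallel to the surface element
    inv = 1.0 / det
    tvec = sub(origin, v0)
    u = dot(tvec, pvec) * inv
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(direction, qvec) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) * inv
    if t < eps:
        return None                      # intersection behind the sensor
    return tuple(origin[i] + t * direction[i] for i in range(3))
```

The three-dimensional position of the returned point is what becomes the point cloud entry for that beam.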
Optionally, the determining module is configured to:
in the case that the first virtual object is a long-distance virtual object, determining a plurality of second surface elements of the bounding box according to the three-dimensional position information of the bounding box of the first virtual object, wherein a long-distance virtual object is a virtual object whose distance from the first position point is between a second distance threshold and a first distance threshold, the first distance threshold being greater than the second distance threshold;
and determining a second surface element, among the plurality of second surface elements, that intersects the first laser beam, and taking the intersection point between the first laser beam and the intersecting second surface element as the intersection point of the first laser beam on the first virtual object.
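For the far-distance case, intersecting the beam with the bounding box itself can be sketched with the classic slab method for an axis-aligned box; the names are illustrative:

```python
def ray_box_intersect(origin, direction, box_min, box_max):
    """Slab test for a ray against an axis-aligned bounding box.
    Returns the entry point on the box, or None if the beam misses."""
    t_near, t_far = 0.0, float("inf")
    for i in range(3):
        if abs(direction[i]) < 1e-12:
            # Beam parallel to this pair of faces: must lie between them.
            if origin[i] < box_min[i] or origin[i] > box_max[i]:
                return None
        else:
            t1 = (box_min[i] - origin[i]) / direction[i]
            t2 = (box_max[i] - origin[i]) / direction[i]
            if t1 > t2:
                t1, t2 = t2, t1
            t_near, t_far = max(t_near, t1), min(t_far, t2)
            if t_near > t_far:
                return None              # slab intervals do not overlap
    return tuple(origin[i] + t_near * direction[i] for i in range(3))
```

Because a far object contributes only a coarse hit, intersecting its six box faces is much cheaper than testing every surface point.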
Optionally, the determining module is further configured to:
if there is no other virtual object obstructing the first virtual object in the projection direction of the first laser beam, an operation of determining an intersection point of the first laser beam on the first virtual object is performed.
Optionally, the determining module is further configured to:
if other virtual objects which shield the first virtual object exist in the projection direction of the first laser beam, and the other virtual objects are non-strictly shielding objects, the operation of determining the intersection point of the first laser beam on the first virtual object is executed, wherein the non-strictly shielding objects refer to objects with through holes through which the laser beam can penetrate;
and after determining the intersection point of the first laser beam on the first virtual object, the determining module is further configured to:
and if the first laser beam does not have the intersection point on other virtual objects, taking the three-dimensional position information of the intersection point of the first laser beam on the first virtual object as the point cloud data corresponding to the first laser beam.
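The occlusion rule above amounts to keeping, per beam, the candidate intersection closest to the sensor; a non-strictly occluding object that the beam passes through simply contributes no candidate. A minimal sketch with hypothetical names:

```python
def nearest_hit(origin, hits):
    """Among all candidate intersection points of one beam, keep the one
    closest to the sensor; None entries (beam passed through a hole in a
    non-strictly occluding object) are ignored."""
    def d2(p):  # squared distance from the sensor to the hit
        return sum((p[i] - origin[i]) ** 2 for i in range(3))
    hits = [h for h in hits if h is not None]
    return min(hits, key=d2) if hits else None
```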
Optionally, the determining module is configured to:
dividing the first scanning road section into a plurality of sector areas by taking the first position point as a center to obtain boundary information of each sector area;
arranging the plurality of sector areas in sequence according to the scanning direction of the laser detection device;
for the first sector area in the arrangement, acquiring three-dimensional model data of the virtual objects located in the first sector area from the virtual object information of each road node according to the boundary information of the first sector area;
for the ith sector area in the arrangement, determining the displacement of the vehicle while the laser detection device scans from the first sector area to the ith sector area, based on the scanning speed of the laser detection device and the moving speed of the vehicle, wherein i is a positive integer greater than or equal to 2 and less than or equal to the number of sector areas obtained by the division;
updating the boundary information of the ith sector area according to the displacement;
and acquiring three-dimensional model data of the virtual objects located in the ith sector area from the virtual object information of each road node according to the updated boundary information of the ith sector area.
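A minimal sketch of the sector scheme: the scan circle is divided into sectors, and each later sector's apex is shifted by the distance the vehicle travels while the scanner sweeps to it. Constant vehicle speed and scan rate are assumed, and all names are illustrative:

```python
import math

def sector_boundaries(center, start_angle, n_sectors, scan_rate_rad_s, speed_xy):
    """Split a full scan into n_sectors sectors and shift each sector's
    apex by the vehicle displacement accumulated during the sweep.
    Returns a list of (apex_xy, angle_start, angle_end)."""
    width = 2 * math.pi / n_sectors
    out = []
    for i in range(n_sectors):
        dt = (i * width) / scan_rate_rad_s   # time to sweep to sector i+1
        cx = center[0] + speed_xy[0] * dt    # displaced sector apex
        cy = center[1] + speed_xy[1] * dt
        out.append(((cx, cy),
                    start_angle + i * width,
                    start_angle + (i + 1) * width))
    return out
```

With the updated boundaries, the object query for each sector already accounts for the vehicle's motion during the scan.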
Optionally, the point cloud data acquired by the laser detection device at the first position point includes point cloud data corresponding to each laser beam emitted by the laser detection device;
the device also includes:
the cache module is used for caching point cloud data corresponding to each laser beam;
the determining module is further configured to: when the vehicle is located at a second position point in the virtual space, determine a second scanning road section according to the second position point and the road topology model; determine a plurality of virtual objects distributed on the second scanning road section; and if the same virtual object exists among the plurality of virtual objects distributed on the first scanning road section and the plurality of virtual objects distributed on the second scanning road section, and the relative distance between the same virtual object and the vehicle does not change between the first position point and the second position point, use the cached point cloud data corresponding to the laser beams projected onto the same virtual object as the point cloud data corresponding to the corresponding laser beams at the second position point.
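The cache rule can be sketched as a per-(object, beam) dictionary whose entries are reused only when the object-to-vehicle distance is unchanged; this is an assumption-laden illustration, not the patent's actual data structure:

```python
class BeamPointCache:
    """Cache of per-beam intersection points, keyed by (object id, beam id).
    Entries are reused only when the object's relative distance to the
    vehicle is unchanged between two position points."""
    def __init__(self):
        self._points = {}

    def store(self, object_id, beam_id, point):
        self._points[(object_id, beam_id)] = point

    def reuse(self, object_id, beam_id, prev_rel_dist, cur_rel_dist, tol=1e-6):
        # Reuse only if the object-to-vehicle distance is unchanged.
        if abs(prev_rel_dist - cur_rel_dist) > tol:
            return None
        return self._points.get((object_id, beam_id))
```

A cache hit skips the full intersection computation for that beam at the second position point.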
In this embodiment of the application, it is considered that an autonomous vehicle only needs to perceive the objects near its driving road, so the amount of data processed when determining point cloud data can be reduced. For a virtual space to be simulated, a road topology model for the virtual space may be constructed in advance, where the road topology model indicates the road topology in the virtual space. When the point cloud data for the vehicle at the first position point is subsequently determined, the road section currently to be scanned can be determined directly from the road topology model, without traversing all nodes of a K-D tree as in the related art. This reduces the amount of data processed when acquiring point cloud data and correspondingly improves the efficiency of point cloud data acquisition.
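The road-topology lookup described here can be sketched as a short graph walk from the road node nearest the vehicle, instead of a K-D tree traversal over all objects; the node structure below is an assumed illustration:

```python
def scan_segment_for(position, road_nodes, reach=50.0):
    """Collect the road nodes forming the segment to scan, by walking the
    road graph outward from the node nearest the vehicle and stopping at
    nodes beyond the scan reach.
    road_nodes: {node_id: {"center": (x, y), "neighbors": [node_id, ...]}}"""
    def d(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    start = min(road_nodes, key=lambda n: d(road_nodes[n]["center"], position))
    segment, frontier = {start}, [start]
    while frontier:                      # walk along the road topology only
        node = frontier.pop()
        for nb in road_nodes[node]["neighbors"]:
            if nb not in segment and d(road_nodes[nb]["center"], position) <= reach:
                segment.add(nb)
                frontier.append(nb)
    return segment
```

Only the virtual objects attached to the returned nodes then need to be considered, which is the source of the claimed reduction in processing.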
It should be noted that the apparatus for acquiring point cloud data provided in the above embodiment is described only in terms of the division into the functional modules above. In practice, these functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus for acquiring point cloud data and the method for acquiring point cloud data provided by the above embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, and details are not repeated here.
Fig. 16 is a schematic structural diagram of a computer device according to an embodiment of the present application. The simulation platform in Fig. 5 may be implemented by a computer device as shown in Fig. 16. Referring to Fig. 16, the computer device includes at least one processor 1601, a communication bus 1602, a memory 1603, and at least one communication interface 1604.
The processor 1601 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the present application.
Communication bus 1602 may include a path that conveys information between the aforementioned components.
The memory 1603 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 1603 may be self-contained and coupled to the processor 1601 via the communication bus 1602, or may be integrated with the processor 1601.
The memory 1603 is used to store the program code for executing the solution of the application, and execution is controlled by the processor 1601. The processor 1601 is used to execute the program code stored in the memory 1603, and the program code may include one or more software modules. The simulation platform in Fig. 5 may determine data for developing an application through one or more of the software modules in the program code, using the processor 1601 and the memory 1603. The one or more software modules may be any of the modules in Fig. 15.
The communication interface 1604 may be implemented by any transceiver-type device for communicating with other devices or communication networks, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
In a specific implementation, as one embodiment, the computer device may include multiple processors, such as the processor 1601 and the processor 1605 shown in Fig. 16. Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor here may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
The computer device may be a general-purpose computer device or a special-purpose computer device. In a specific implementation, the computer device may be a desktop computer, a laptop computer, a web server, a personal digital assistant (PDA), a mobile phone, a tablet computer, a wireless terminal device, a communication device, or an embedded device. The type of the computer device is not limited in the embodiments of the application.
In the above embodiments, the implementation may be realized wholly or partly by software, hardware, firmware, or any combination thereof. When software is used, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a digital versatile disc (DVD)), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above embodiments are not intended to limit the present application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principle of the present application shall fall within the protection scope of the present application.

Claims (15)

1. A method of acquiring point cloud data, applied to a computing device, the method comprising:
under the condition that a vehicle is located at a first position point in a virtual space, determining a first scanning road section according to the first position point and a road topology model, wherein the road topology model is used for indicating road topology in the virtual space;
determining three-dimensional model data for each of a plurality of virtual objects distributed on the first scanning road section;
and determining point cloud data acquired at the first position point by a laser detection device simulated on the vehicle, according to a laser scanning range of the laser detection device and the three-dimensional model data of each of the plurality of virtual objects.
2. The method of claim 1, wherein the road topology model includes a plurality of road nodes, and virtual object information for each of the plurality of road nodes, the virtual object information for a first road node of the plurality of road nodes including three-dimensional model data of respective virtual objects distributed at the first road node, the first road node being any one of the plurality of road nodes;
the determining three-dimensional model data for each of a plurality of virtual objects distributed on the first scanning road section comprises:
and acquiring three-dimensional model data of each virtual object in a plurality of virtual objects distributed in the first scanning road section from the virtual object information of each road node.
3. The method of claim 2, wherein a distance between a virtual object distributed on the first scanning road section and a centerline of the first scanning road section is within a first distance threshold.
4. The method according to claim 2 or 3, wherein the three-dimensional model data of a first virtual object of the plurality of virtual objects comprises three-dimensional position information of each of a plurality of surface points of the first virtual object, the three-dimensional position information comprising a height of the surface point, the first virtual object being any one of the plurality of virtual objects; and
the heights in the three-dimensional model data of the virtual objects distributed on the first scanning road section are within a height threshold.
5. The method according to any one of claims 1 to 4, wherein the three-dimensional model data of a first virtual object of the plurality of virtual objects includes three-dimensional position information of a bounding box of the first virtual object, the bounding box of the first virtual object being a geometric body that surrounds the first virtual object;
the determining point cloud data acquired by the laser detection device at the first position point according to the laser scanning range of the laser detection device simulated on the vehicle and the three-dimensional model data of each virtual object in the plurality of virtual objects comprises:
determining an angle range of the laser beam emitted by the laser detection device covering the first virtual object from the laser scanning range according to the three-dimensional position information of the bounding box of the first virtual object, the orientation of the vehicle and the first position point;
determining an intersection point of a first laser beam on the first virtual object according to the three-dimensional model data of the first virtual object, and taking the three-dimensional position information of the intersection point as point cloud data corresponding to the first laser beam, wherein the first laser beam is any one laser beam in the angle range.
6. The method of claim 5, wherein determining the intersection point of the first laser beam on the first virtual object from the three-dimensional model data of the first virtual object comprises:
when the first virtual object is a near-distance virtual object, determining a plurality of first surface elements of the first virtual object according to three-dimensional position information of each surface point in three-dimensional model data of the first virtual object, wherein each first surface element is a surface formed by three or more surface points in each surface point of the first virtual object, and the distance between the near-distance virtual object and the first position point is within a second distance threshold;
determining a first bin of the plurality of first bins that intersects the first laser beam, and taking an intersection between the first laser beam and the intersecting first bin as an intersection point of the first laser beam on the first virtual object.
7. The method of claim 5, wherein determining the intersection point of the first laser beam on the first virtual object from the three-dimensional model data of the first virtual object comprises:
determining a plurality of second surface elements of a bounding box of the first virtual object according to three-dimensional position information of the bounding box of the first virtual object under the condition that the first virtual object is a long-distance virtual object, wherein the distance between the long-distance virtual object and the first position point is between a second distance threshold and a first distance threshold, and the first distance threshold is greater than the second distance threshold;
determining a second bin of the plurality of second bins that intersects the first laser beam, and taking an intersection between the first laser beam and the intersecting second bin as an intersection point of the first laser beam on the first virtual object.
8. The method of claim 2, wherein determining three-dimensional model data for each of a plurality of virtual objects distributed over the first scan segment based on the virtual object information for the respective road node comprises:
dividing the first scanning road section into a plurality of sector areas by taking the first position point as a center to obtain boundary information of each sector area;
arranging the plurality of sector areas in sequence according to a scanning direction of the laser detection device;
for the first sector area in the arrangement, acquiring three-dimensional model data of the virtual objects located in the first sector area from the virtual object information of each road node according to the boundary information of the first sector area;
for the ith sector area in the arrangement, determining a displacement of the vehicle while the laser detection device scans from the first sector area to the ith sector area, based on a scanning speed of the laser detection device and a moving speed of the vehicle, wherein i is a positive integer greater than or equal to 2 and less than or equal to the number of sector areas obtained by the division;
updating the boundary information of the ith sector area according to the displacement;
and acquiring three-dimensional model data of the virtual objects located in the ith sector area from the virtual object information of each road node according to the updated boundary information of the ith sector area.
9. The method of any of claims 1 to 8, wherein the point cloud data acquired by the laser detection device at the first location point comprises point cloud data corresponding to each laser beam emitted by the laser detection device;
after determining the point cloud data acquired by the laser detection device at the first position point, the method further comprises:
caching point cloud data corresponding to each laser beam;
under the condition that the vehicle is located at a second position point in the virtual space, determining a second scanning road section according to the second position point and the road topology model;
determining a plurality of virtual objects distributed on the second scanning section;
and if the same virtual object exists in the plurality of virtual objects distributed on the first scanning road section and the plurality of virtual objects distributed on the second scanning road section, and the relative distance between the same virtual object and the vehicle does not change between the first position point and the second position point, using the point cloud data corresponding to the laser beams projected onto the same virtual object in the cache as the point cloud data corresponding to the corresponding laser beams at the second position point.
10. An apparatus for acquiring point cloud data, the apparatus comprising:
a determination module to: under the condition that a vehicle is located at a first position point in a virtual space, determining a first scanning road section according to the first position point and a road topology model, wherein the road topology model is used for indicating road topology in the virtual space;
determining three-dimensional model data for each of a plurality of virtual objects distributed on the first scanning road section;
and determining point cloud data acquired at the first position point by a laser detection device simulated on the vehicle, according to a laser scanning range of the laser detection device and the three-dimensional model data of each of the plurality of virtual objects.
11. The apparatus of claim 10, wherein the road topology model includes a plurality of road nodes, and virtual object information for each of the plurality of road nodes, the virtual object information for a first road node of the plurality of road nodes including three-dimensional model data for respective virtual objects distributed at the first road node, the first road node being any one of the plurality of road nodes;
the determining module is configured to:
and acquiring three-dimensional model data of each virtual object in a plurality of virtual objects distributed in the first scanning road section from the virtual object information of each road node.
12. The apparatus according to claim 10 or 11, wherein the three-dimensional model data of a first virtual object of the plurality of virtual objects includes three-dimensional position information of a bounding box of the first virtual object, the bounding box of the first virtual object refers to a geometric body that surrounds the first virtual object, and the first virtual object is any virtual object of the plurality of virtual objects;
the determining module is configured to:
determining an angle range of the laser beam emitted by the laser detection device covering the first virtual object from the laser scanning range according to the three-dimensional position information of the bounding box of the first virtual object, the orientation of the vehicle and the first position point;
determining an intersection point of a first laser beam on the first virtual object according to the three-dimensional model data of the first virtual object, and taking the three-dimensional position information of the intersection point as point cloud data corresponding to the first laser beam, wherein the first laser beam is any one laser beam in the angle range.
13. The apparatus of claim 11, wherein the determination module is to:
dividing the first scanning road section into a plurality of sector areas by taking the first position point as a center to obtain boundary information of each sector area;
arranging the plurality of sector areas in sequence according to a scanning direction of the laser detection device;
for the first sector area in the arrangement, acquiring three-dimensional model data of the virtual objects located in the first sector area from the virtual object information of each road node according to the boundary information of the first sector area;
for the ith sector area in the arrangement, determining a displacement of the vehicle while the laser detection device scans from the first sector area to the ith sector area, based on a scanning speed of the laser detection device and a moving speed of the vehicle, wherein i is a positive integer greater than or equal to 2 and less than or equal to the number of sector areas obtained by the division;
updating the boundary information of the ith sector area according to the displacement;
and acquiring three-dimensional model data of the virtual objects located in the ith sector area from the virtual object information of each road node according to the updated boundary information of the ith sector area.
14. The apparatus of any of claims 10 to 13, wherein the point cloud data acquired by the laser detection device at the first location point comprises point cloud data corresponding to each laser beam emitted by the laser detection device;
the device further comprises:
the cache module is used for caching point cloud data corresponding to each laser beam;
the determining module is further configured to determine a second scanning road section according to a second position point and the road topology model when the vehicle is at the second position point in the virtual space; determining a plurality of virtual objects distributed on the second scanning section; and if the same virtual object exists in the plurality of virtual objects distributed on the first scanning road section and the plurality of virtual objects distributed on the second scanning road section, and the relative distance between the same virtual object and the vehicle does not change between the first position point and the second position point, using the point cloud data corresponding to the laser beams projected onto the same virtual object in the cache as the point cloud data corresponding to the corresponding laser beams at the second position point.
15. A computing device, comprising a memory to store computer instructions and a processor to read the computer instructions to perform the method of any one of claims 1-9.
CN202010116696.XA 2020-02-25 2020-02-25 Method for acquiring point cloud data and related equipment Active CN111275816B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010116696.XA CN111275816B (en) 2020-02-25 2020-02-25 Method for acquiring point cloud data and related equipment
PCT/CN2021/077748 WO2021170015A1 (en) 2020-02-25 2021-02-24 Method for acquiring point cloud data, and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010116696.XA CN111275816B (en) 2020-02-25 2020-02-25 Method for acquiring point cloud data and related equipment

Publications (2)

Publication Number Publication Date
CN111275816A true CN111275816A (en) 2020-06-12
CN111275816B CN111275816B (en) 2022-05-10

Family

ID=71000366

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010116696.XA Active CN111275816B (en) 2020-02-25 2020-02-25 Method for acquiring point cloud data and related equipment

Country Status (2)

Country Link
CN (1) CN111275816B (en)
WO (1) WO2021170015A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112446958A (en) * 2020-11-13 2021-03-05 山东产研信息与人工智能融合研究院有限公司 Road traffic accident auxiliary processing method and system based on laser point cloud
CN112504290A (en) * 2020-11-06 2021-03-16 北京航迹科技有限公司 Method, apparatus, device and storage medium for determining nearest road boundary
CN112540616A (en) * 2020-12-11 2021-03-23 北京赛目科技有限公司 Laser point cloud generation method and device based on unmanned driving
WO2021170015A1 (en) * 2020-02-25 2021-09-02 华为技术有限公司 Method for acquiring point cloud data, and related device
CN113516865A (en) * 2021-03-17 2021-10-19 北京易控智驾科技有限公司 Mine unmanned road network vehicle queuing method and device based on high-precision map
CN114577491A (en) * 2022-01-26 2022-06-03 上海和夏新能源科技有限公司 Vehicle chassis testing system and method based on intelligent driving and ADAS simulation

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114202625B (en) * 2021-12-10 2023-03-14 北京百度网讯科技有限公司 Method and device for extracting road shoulder line and electronic equipment
CN115930933B (en) * 2022-09-20 2023-07-04 江苏海洋大学 Multi-agent collaborative mapping method based on cluster control

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2833322A1 (en) * 2013-07-30 2015-02-04 The Boeing Company Stereo-motion method of three-dimensional (3-D) structure information extraction from a video for fusion with 3-D point cloud data
CN108564615A (en) * 2018-04-20 2018-09-21 驭势(上海)汽车科技有限公司 Method, apparatus, system and the storage medium of simulated laser radar detection
CN108564874A (en) * 2018-05-07 2018-09-21 腾讯大地通途(北京)科技有限公司 Method, the method for model training, equipment and the storage medium of surface mark extraction
CN108732556A (en) * 2018-08-17 2018-11-02 西南交通大学 A kind of mobile lidar emulation mode based on geometry intersection operation
CN109271893A (en) * 2018-08-30 2019-01-25 百度在线网络技术(北京)有限公司 A kind of generation method, device, equipment and storage medium emulating point cloud data
CN109508579A (en) * 2017-09-15 2019-03-22 百度在线网络技术(北京)有限公司 For obtaining the method and device of virtual point cloud data
CN109960857A (en) * 2019-02-28 2019-07-02 东软睿驰汽车技术(沈阳)有限公司 A kind of emulation mode of laser radar, device and emulation platform
US20190318536A1 (en) * 2016-06-20 2019-10-17 Ocean Maps GmbH Method for Generating 3D Data Relating to an Object
US10491885B1 (en) * 2018-06-13 2019-11-26 Luminar Technologies, Inc. Post-processing by lidar system guided by camera information
CN110781827A (en) * 2019-10-25 2020-02-11 中山大学 Road edge detection system and method based on laser radar and fan-shaped space division
CN110824443A (en) * 2019-04-29 2020-02-21 当家移动绿色互联网技术集团有限公司 Radar simulation method and device, storage medium and electronic equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109839922B (en) * 2017-11-28 2020-11-10 百度在线网络技术(北京)有限公司 Method and apparatus for controlling unmanned vehicle
US10877152B2 (en) * 2018-03-27 2020-12-29 The Mathworks, Inc. Systems and methods for generating synthetic sensor data
US11073597B2 (en) * 2018-05-11 2021-07-27 Continental Automotive Systems, Inc. Three-dimensional point cloud generated by high-resolution flash LIDAR
CN110779730A (en) * 2019-08-29 2020-02-11 浙江零跑科技有限公司 L3-level automatic driving system testing method based on virtual driving scene vehicle on-ring
CN111275816B (en) * 2020-02-25 2022-05-10 华为技术有限公司 Method for acquiring point cloud data and related equipment

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2833322A1 (en) * 2013-07-30 2015-02-04 The Boeing Company Stereo-motion method of three-dimensional (3-D) structure information extraction from a video for fusion with 3-D point cloud data
US20190318536A1 (en) * 2016-06-20 2019-10-17 Ocean Maps GmbH Method for Generating 3D Data Relating to an Object
CN109508579A (en) * 2017-09-15 2019-03-22 百度在线网络技术(北京)有限公司 For obtaining the method and device of virtual point cloud data
CN108564615A (en) * 2018-04-20 2018-09-21 驭势(上海)汽车科技有限公司 Method, apparatus, system and the storage medium of simulated laser radar detection
CN108564874A (en) * 2018-05-07 2018-09-21 腾讯大地通途(北京)科技有限公司 Method, the method for model training, equipment and the storage medium of surface mark extraction
US10491885B1 (en) * 2018-06-13 2019-11-26 Luminar Technologies, Inc. Post-processing by lidar system guided by camera information
CN108732556A (en) * 2018-08-17 2018-11-02 西南交通大学 A kind of mobile lidar emulation mode based on geometry intersection operation
CN109271893A (en) * 2018-08-30 2019-01-25 百度在线网络技术(北京)有限公司 A kind of generation method, device, equipment and storage medium emulating point cloud data
CN109960857A (en) * 2019-02-28 2019-07-02 Neusoft Reach Automotive Technology (Shenyang) Co., Ltd. Lidar simulation method, apparatus and simulation platform
CN110824443A (en) * 2019-04-29 2020-02-21 Dangjia Mobile Green Internet Technology Group Co., Ltd. Radar simulation method and device, storage medium and electronic equipment
CN110781827A (en) * 2019-10-25 2020-02-11 Sun Yat-sen University Road edge detection system and method based on laser radar and fan-shaped space division

Non-Patent Citations (2)

Title
FAN QIANG et al.: "The research about modeling technology of irregular virtual reality scene based on point cloud data", IEEE *
LI HUIBIN et al.: "Research and implementation of road boundary detection and lane marking recognition methods based on laser point clouds", China Master's Theses Full-text Database, Engineering Science and Technology II *

Cited By (9)

Publication number Priority date Publication date Assignee Title
WO2021170015A1 (en) * 2020-02-25 2021-09-02 Huawei Technologies Co., Ltd. Method for acquiring point cloud data, and related device
CN112504290A (en) * 2020-11-06 2021-03-16 Beijing Voyager Technology Co., Ltd. Method, apparatus, device and storage medium for determining nearest road boundary
CN112446958A (en) * 2020-11-13 2021-03-05 Shandong Industrial Research Institute of Information and Artificial Intelligence Integration Co., Ltd. Road traffic accident auxiliary processing method and system based on laser point clouds
CN112446958B (en) * 2020-11-13 2023-07-28 Shandong Industrial Research Institute of Information and Artificial Intelligence Integration Co., Ltd. Road traffic accident auxiliary processing method and system based on laser point clouds
CN112540616A (en) * 2020-12-11 2021-03-23 Beijing Saimo Technology Co., Ltd. Laser point cloud generation method and device for unmanned driving
CN112540616B (en) * 2020-12-11 2021-07-16 Beijing Saimo Technology Co., Ltd. Laser point cloud generation method and device for unmanned driving
CN113516865A (en) * 2021-03-17 2021-10-19 Beijing Yikong Zhijia Technology Co., Ltd. High-precision-map-based vehicle queuing method and device for unmanned mine road networks
CN113516865B (en) * 2021-03-17 2022-07-05 Beijing Yikong Zhijia Technology Co., Ltd. High-precision-map-based vehicle queuing method and device for unmanned mine road networks
CN114577491A (en) * 2022-01-26 2022-06-03 Shanghai Hexia New Energy Technology Co., Ltd. Vehicle chassis testing system and method based on intelligent driving and ADAS simulation

Also Published As

Publication number Publication date
WO2021170015A1 (en) 2021-09-02
CN111275816B (en) 2022-05-10

Similar Documents

Publication Publication Date Title
CN111275816B (en) Method for acquiring point cloud data and related equipment
EP3506212B1 (en) Method and apparatus for generating raster map
CN110869981B (en) Vector data encoding of high definition map data for autonomous vehicles
US11353589B2 (en) Iterative closest point process based on lidar with integrated motion estimation for high definition maps
KR102548282B1 (en) High-precision mapping method and device
CN103236079A (en) Improved three-dimensional model voxelization-based inner sphere construction method
US11668573B2 (en) Map selection for vehicle pose system
CN114445565A (en) Data processing method and device, electronic equipment and computer readable medium
US11206093B2 (en) Propagation path search method and apparatus
CN116348739A (en) Map generation system and method based on ray casting and semantic image
JP7118490B1 (en) Information processing system, information processing method, program, mobile object, management server
Wu et al. A Non-rigid hierarchical discrete grid structure and its application to UAVs conflict detection and path planning
KR20150136347A (en) Apparatus and method for traversing acceleration structure in a ray tracing system
Zhou et al. Crater identification simulation using LiDAR on Lunar rover
CN112530022A (en) Method for computer-implemented simulation of LIDAR sensors in a virtual environment
US20220221585A1 (en) Systems and methods for monitoring lidar sensor health
KR102087076B1 (en) Pre-prediction system for occlusion area in aviation survey using uav and optimal route determine method of uav using the same
CN113468735A (en) Laser radar simulation method, device and system and storage medium
Farzadpour et al. Modeling and optimizing the coverage performance of the lidar sensor network
CN115081303B (en) Laser radar virtual modeling and simulation method, electronic device and storage medium
KR102524995B1 (en) Map generating method of electronic apparatus
JP7228298B1 (en) Information processing system, information processing method, program, mobile object, management server
Tepsa 3D Perception System Design for Heavy-Duty Field Robots
Mills Informative View Planning for Autonomous Exploration in Unstructured 3D Environments
Sandström Static Extrinsic Calibration of a Vehicle-Mounted Lidar Using Spherical Targets

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant