CN117635786A - Point cloud processing method, device, equipment and storage medium

Point cloud processing method, device, equipment and storage medium

Info

Publication number
CN117635786A
CN117635786A
Authority
CN
China
Prior art keywords
point cloud
coordinate
point
points
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210949687.8A
Other languages
Chinese (zh)
Inventor
邱靖烨
余丽
辛喆
陆亚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN202210949687.8A priority Critical patent/CN117635786A/en
Publication of CN117635786A publication Critical patent/CN117635786A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The application discloses a point cloud processing method, device, equipment and storage medium, belonging to the technical field of computer vision. The method comprises the following steps: acquiring a first point cloud and a second point cloud of a target area; adding the first point cloud to the second point cloud based on the pose transformation relation between the first coordinate system of the first point cloud and the second coordinate system of the second point cloud, to obtain an initial point cloud comprising a plurality of first coordinate points and a plurality of second coordinate points; for any first coordinate point, determining a first connecting line between that first coordinate point and the phase center of the laser radar that acquired the first point cloud; and determining the coordinate points to be removed among the plurality of second coordinate points included in the initial point cloud, to obtain a target point cloud. Because, besides adding the first point cloud to the second point cloud, the coordinate points to be removed are also removed from the initial point cloud, the update covers a more comprehensive range and the accuracy of the updated target point cloud is high.

Description

Point cloud processing method, device, equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of computer vision, in particular to a point cloud processing method, a device, equipment and a storage medium.
Background
With the development of computer vision technology, the application scenarios of point clouds have become wider and wider. For example, map construction is performed based on a point cloud. In the process of map construction, a target area needs to be scanned to obtain a point cloud. However, in the scanned target area, the content of part of the area may change over time. In this case, the area whose content has changed needs to be scanned again to obtain a new point cloud, and the previously collected old point cloud is updated based on the new point cloud by a corresponding point cloud processing method.
Disclosure of Invention
The embodiment of the application provides a point cloud processing method, device, equipment and storage medium, which can be used for updating previously collected old point clouds based on new point clouds. The technical scheme is as follows:
in one aspect, an embodiment of the present application provides a point cloud processing method, where the method includes:
acquiring a first point cloud and a second point cloud of a target area, wherein the acquisition time of the first point cloud is later than that of the second point cloud;
adding the first point cloud into the second point cloud based on a pose transformation relation between a first coordinate system of the first point cloud and a second coordinate system of the second point cloud to obtain an initial point cloud comprising a plurality of first coordinate points and a plurality of second coordinate points, wherein the first coordinate points are coordinates of the corresponding first point cloud in the second coordinate system, and the second coordinate points are coordinates of the corresponding second point cloud in the second coordinate system;
for any first coordinate point, determining a first connecting line of the any first coordinate point and a phase center of a laser radar for acquiring the first point cloud;
and determining coordinate points to be removed in a plurality of second coordinate points included in the initial point cloud to obtain a target point cloud, wherein the coordinate points to be removed are the second coordinate points with the distance from the first connecting line meeting the removal condition.
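As an informal illustration of the steps above, the following sketch transforms the first point cloud into the second coordinate system, builds each first connecting line as a segment from the lidar phase center to a first coordinate point, and removes second coordinate points close to those lines. The function names, the numpy representation, and the 0.05 threshold are assumptions for illustration, not details from the patent.

```python
import numpy as np

def merge_and_cull(first_pts, second_pts, T, sensor_center, dist_thresh=0.05):
    """Sketch: add the first cloud to the second, then cull old points
    near any sensor-to-new-point connecting line."""
    # Step 1: express the first point cloud in the second coordinate system.
    R, t = T[:3, :3], T[:3, 3]
    first_in_second = first_pts @ R.T + t  # the "first coordinate points"

    # Step 2: for each connecting line, flag old points lying close to it.
    keep = np.ones(len(second_pts), dtype=bool)
    for p in first_in_second:
        d = p - sensor_center                    # direction of the connecting line
        length = np.linalg.norm(d)
        u = d / length
        v = second_pts - sensor_center
        s = v @ u                                # projection along the line
        on_segment = (s > 0) & (s < length)      # between sensor and the new point
        perp = np.linalg.norm(v - np.outer(s, u), axis=1)
        keep &= ~(on_segment & (perp < dist_thresh))

    # Step 3: target point cloud = surviving old points plus the new points.
    return np.vstack([second_pts[keep], first_in_second])
```

An old point seen "through" by a newer scan (e.g. a wall that has been demolished) sits on the line of sight to a farther new point and is culled, while off-line points survive.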
In one possible implementation manner, the determining a coordinate point to be removed from a plurality of second coordinate points included in the initial point cloud includes:
acquiring a reference point set corresponding to any first coordinate point, wherein the reference point set corresponding to any first coordinate point comprises a plurality of reference points positioned on a first connecting line corresponding to any first coordinate point;
determining any second coordinate point as a candidate coordinate point based on there being, among the reference point sets corresponding to the plurality of first coordinate points, a reference point whose distance from the second coordinate point is smaller than a distance threshold;
and determining the distance between the candidate coordinate point and a first connecting line corresponding to the candidate coordinate point, and determining the candidate coordinate point as the coordinate point to be removed based on the fact that the distance meets a removal condition.
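A minimal sketch of this reference-point variant, under assumed names and parameter values (16 samples per connecting line, 0.1 distance threshold — neither value is specified by the patent):

```python
import numpy as np

def candidate_points(second_pts, first_pts_in_second, sensor_center,
                     n_refs=16, dist_thresh=0.1):
    """Mark an old point as a candidate if a reference point sampled on
    some first connecting line lies within dist_thresh of it."""
    # Reference point sets: evenly spaced samples on every connecting line.
    ts = np.linspace(0.0, 1.0, n_refs)[:, None]
    refs = np.concatenate([sensor_center + ts * (p - sensor_center)
                           for p in first_pts_in_second])

    # A second coordinate point is a candidate if its nearest reference
    # point (the "target reference point") is closer than the threshold.
    is_candidate = np.zeros(len(second_pts), dtype=bool)
    for i, q in enumerate(second_pts):
        nearest = np.min(np.linalg.norm(refs - q, axis=1))
        is_candidate[i] = nearest < dist_thresh
    return is_candidate
```

In practice the nearest-reference search would use a KD-tree rather than this brute-force loop; the loop keeps the sketch self-contained.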
In one possible implementation manner, the determining a coordinate point to be removed from a plurality of second coordinate points included in the initial point cloud includes:
determining a curvature of a candidate coordinate point of the plurality of second coordinate points, the candidate coordinate point being determined based on the first connection line;
calculating a first distance between the candidate coordinate point and the first connecting line corresponding to the candidate coordinate point based on the curvature of the candidate coordinate point being greater than a first threshold; and determining that the first distance meets the removal condition based on the first distance being smaller than a second threshold, and determining the candidate coordinate point as a coordinate point to be removed;
or, based on the curvature of the candidate coordinate point being smaller than a third threshold, calculating a second distance between the plane where the candidate coordinate point is located and the first coordinate point corresponding to the candidate coordinate point; and determining that the second distance meets the removal condition based on the second distance being greater than a fourth threshold, and determining the candidate coordinate point as the coordinate point to be removed.
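The two curvature branches can be sketched as follows. The function signature, the way the local plane is represented (point plus normal), and all four threshold values are illustrative assumptions:

```python
import numpy as np

def should_remove(candidate, curvature, sensor_center, first_pt, plane_normal,
                  t1=0.5, t2=0.05, t3=0.1, t4=0.2):
    """Two-branch removal test for a candidate coordinate point."""
    if curvature > t1:
        # Edge-like point: first distance = candidate to the connecting line.
        d = first_pt - sensor_center
        u = d / np.linalg.norm(d)
        v = candidate - sensor_center
        first_dist = np.linalg.norm(v - (v @ u) * u)
        return first_dist < t2           # removal condition, branch 1
    if curvature < t3:
        # Planar point: second distance = candidate's local plane to first_pt.
        n = plane_normal / np.linalg.norm(plane_normal)
        second_dist = abs((first_pt - candidate) @ n)
        return second_dist > t4          # removal condition, branch 2
    return False                         # intermediate curvature: keep
```

Intuitively, an edge point sitting right on a new line of sight is stale, and a planar point whose plane is far behind the new measurement has been seen through; both are removed.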
In a possible implementation manner, the determining of the any second coordinate point as a candidate coordinate point, based on there being a reference point whose distance is smaller than the distance threshold in the reference point sets corresponding to the plurality of first coordinate points, includes:
searching a plurality of reference points included in a reference point set corresponding to the plurality of first coordinate points for a reference point closest to any one of the second coordinate points as a target reference point;
and determining any second coordinate point as the candidate coordinate point based on the fact that the distance between the any second coordinate point and the target reference point corresponding to the any second coordinate point is smaller than the distance threshold value.
In one possible implementation manner, the acquiring the first point cloud and the second point cloud of the target area includes:
acquiring a third point cloud and a fourth point cloud of the target area, wherein the acquisition time of the third point cloud is later than that of the fourth point cloud;
and performing downsampling processing on the third point cloud according to the voxel size to obtain the uniformly distributed first point cloud, and performing downsampling processing on the fourth point cloud according to the voxel size to obtain the uniformly distributed second point cloud.
In one possible implementation manner, the method further includes, before adding the first point cloud to the second point cloud, based on a pose transformation relationship between a first coordinate system of the first point cloud and a second coordinate system of the second point cloud:
for any first initial coordinate point of the first point cloud in the first coordinate system, searching a second initial coordinate point corresponding to the any first initial coordinate point in second initial coordinate points included in the second point cloud, and determining a pose transformation relation between the first coordinate system and the second coordinate system based on a plurality of first initial coordinate points and second initial coordinate points corresponding to each of the plurality of first initial coordinate points;
or, the pose transformation relation between the first coordinate system and the second coordinate system is obtained from a pose obtaining device, and the pose obtaining device obtains the pose transformation relation through measurement.
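The first option, deriving the pose transformation from matched initial coordinate points, can be sketched with the closed-form SVD (Kabsch) solution. This assumes the correspondences have already been established by the nearest-point search described above; in practice the search and the solve would be iterated, ICP-style:

```python
import numpy as np

def estimate_pose(first_init, second_init):
    """Pose from matched pairs: second_init[i] ~= R @ first_init[i] + t.
    Returns a 4x4 homogeneous transform from the first to the second
    coordinate system."""
    c1, c2 = first_init.mean(axis=0), second_init.mean(axis=0)
    H = (first_init - c1).T @ (second_init - c2)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c2 - R @ c1
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```

The second option simply reads this 4x4 transform from a pose-measuring device instead of estimating it.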
In one possible implementation manner, after the obtaining the target point cloud, the method further includes:
acquiring a fifth point cloud of the target area, wherein the acquisition pose of the fifth point cloud is different from that of the first point cloud;
and updating the target point cloud based on the fifth point cloud.
In another aspect, a point cloud processing apparatus is provided, the apparatus including:
the acquisition module is used for acquiring a first point cloud and a second point cloud of the target area, wherein the acquisition time of the first point cloud is later than that of the second point cloud;
the adding module is used for adding the first point cloud into the second point cloud based on a pose transformation relation between a first coordinate system of the first point cloud and a second coordinate system of the second point cloud to obtain an initial point cloud comprising a plurality of first coordinate points and a plurality of second coordinate points, wherein the first coordinate points are coordinates of the corresponding first point cloud in the second coordinate system, and the second coordinate points are coordinates of the corresponding second point cloud in the second coordinate system;
the determining module is used for determining a first connecting line of any first coordinate point and a phase center of the laser radar for collecting the first point cloud for any first coordinate point;
And the removing module is used for determining coordinate points to be removed in a plurality of second coordinate points included in the initial point cloud to obtain a target point cloud, wherein the coordinate points to be removed are the second coordinate points, and the distance between the second coordinate points and the first connecting line meets the removing condition.
In a possible implementation manner, the removing module is configured to obtain a reference point set corresponding to any first coordinate point, where the reference point set corresponding to that first coordinate point comprises a plurality of reference points located on the first connecting line corresponding to that first coordinate point; determine any second coordinate point as a candidate coordinate point based on there being, among the reference point sets corresponding to the plurality of first coordinate points, a reference point whose distance from the second coordinate point is smaller than a distance threshold; and determine the distance between the candidate coordinate point and the first connecting line corresponding to the candidate coordinate point, and determine the candidate coordinate point as the coordinate point to be removed based on the distance meeting a removal condition.
In one possible implementation, the removing module is configured to determine a curvature of a candidate coordinate point of the plurality of second coordinate points, where the candidate coordinate point is determined based on the first connecting line; calculate a first distance between the candidate coordinate point and the first connecting line corresponding to the candidate coordinate point based on the curvature of the candidate coordinate point being greater than a first threshold; determine that the first distance meets the removal condition based on the first distance being smaller than a second threshold, and determine the candidate coordinate point as a coordinate point to be removed; or, based on the curvature of the candidate coordinate point being smaller than a third threshold, calculate a second distance between the plane where the candidate coordinate point is located and the first coordinate point corresponding to the candidate coordinate point; and determine that the second distance meets the removal condition based on the second distance being greater than a fourth threshold, and determine the candidate coordinate point as the coordinate point to be removed.
In a possible implementation manner, the removing module is configured to find, from a plurality of reference points included in a reference point set corresponding to a plurality of first coordinate points, a reference point closest to the any second coordinate point as a target reference point; and determining any second coordinate point as the candidate coordinate point based on the fact that the distance between the any second coordinate point and the target reference point corresponding to the any second coordinate point is smaller than the distance threshold value.
In a possible implementation manner, the acquiring module is configured to acquire a third point cloud and a fourth point cloud of the target area, where the acquisition time of the third point cloud is later than that of the fourth point cloud; and perform downsampling processing on the third point cloud according to the voxel size to obtain the uniformly distributed first point cloud, and perform downsampling processing on the fourth point cloud according to the voxel size to obtain the uniformly distributed second point cloud.
In a possible implementation manner, the acquiring module is further configured to, for any first initial coordinate point of the first point cloud in the first coordinate system, find, among the second initial coordinate points included in the second point cloud, a second initial coordinate point corresponding to that first initial coordinate point, and determine the pose transformation relation between the first coordinate system and the second coordinate system based on a plurality of first initial coordinate points and the second initial coordinate point corresponding to each of them; or, the pose transformation relation between the first coordinate system and the second coordinate system is obtained from a pose obtaining device, and the pose obtaining device obtains the pose transformation relation through measurement.
In one possible implementation, the apparatus further includes: the updating module is used for acquiring a fifth point cloud of the target area, and the acquisition pose of the fifth point cloud is different from that of the first point cloud; and updating the target point cloud based on the fifth point cloud.
In another aspect, a computer device is provided, where the computer device includes a processor and a memory, where at least one computer program is stored in the memory, and the at least one computer program is loaded and executed by the processor, so that the computer device implements any one of the point cloud processing methods described above.
In another aspect, there is also provided a computer readable storage medium having stored therein at least one computer program loaded and executed by a processor to cause a computer to implement any one of the above-described point cloud processing methods.
In another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions so that the computer device performs any of the above-described point cloud processing methods.
The technical scheme provided by the embodiment of the application at least brings the following beneficial effects:
the first connecting line between any first coordinate point and the phase center of the laser radar reflects the laser emission path along which that first coordinate point was measured, so the emission trajectory of the first point cloud is rebuilt in the second coordinate system based on the first connecting lines. By determining the coordinate points to be removed, the process of updating the second point cloud based on the first point cloud not only adds the first point cloud to the second point cloud but also removes the coordinate points to be removed from the initial point cloud; the update covers a more comprehensive range, and the accuracy of the updated target point cloud is higher.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic illustration of an implementation environment provided by embodiments of the present application;
fig. 2 is a flowchart of a point cloud processing method provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a positional relationship of an area according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a laser radar scanning process according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a point cloud processing device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a server according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a point cloud processing device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
An embodiment of the application provides a point cloud processing method, please refer to fig. 1, which shows a schematic diagram of an implementation environment of the method provided in the embodiment of the application. The implementation environment may include: a terminal 11 and a server 12.
Optionally, the method may be performed independently by the terminal 11 or the server 12, or may be performed interactively by the terminal 11 and the server 12. An example of the interactive execution is as follows: the terminal 11 is installed with an application program capable of acquiring a first point cloud and a second point cloud; after the application program acquires the first point cloud and the second point cloud, it can send them to the server 12; the server 12 determines the coordinate points to be removed based on the first point cloud and the second point cloud by applying the method provided by the embodiment of the present application, obtains a target point cloud according to the coordinate points to be removed, and stores the target point cloud. Optionally, the server 12 sends the target point cloud to the terminal 11, and the terminal 11 stores the target point cloud.
Optionally, the terminal 11 may be any electronic product that can perform human-computer interaction with a user through one or more of a keyboard, a touch pad, a touch screen, a remote controller, voice interaction, or a handwriting device, such as a PC (Personal Computer), a mobile phone, a smart phone, a PDA (Personal Digital Assistant), a wearable device, a PPC (Pocket PC), a tablet computer, a smart in-vehicle terminal, a smart television, or a smart speaker. The server 12 may be a single server, a server cluster comprising a plurality of servers, or a cloud computing service center. The terminal 11 establishes a communication connection with the server 12 through a wired or wireless network.
Those skilled in the art will appreciate that the above-described terminal 11 and server 12 are by way of example only, and that other terminals or servers, either now present or later, may be suitable for use in the present application, and are intended to be within the scope of the present application and are incorporated herein by reference.
The embodiment of the application provides a point cloud processing method, which can be based on the implementation environment shown in fig. 1, and the method can be independently executed by a terminal or a server, or can be interactively realized by the terminal and the server. Taking the example that the method is applied to a server, a flowchart of the method is shown in fig. 2, and includes steps 201 to 204.
In step 201, a first point cloud and a second point cloud of a target area are acquired, wherein the acquisition time of the first point cloud is later than the acquisition time of the second point cloud.
In one possible implementation manner, the process of the server acquiring the first point cloud and the second point cloud includes: acquiring a third point cloud and a fourth point cloud of the target area, wherein the acquisition time of the third point cloud is later than that of the fourth point cloud; and performing downsampling processing on the third point cloud according to the voxel size to obtain the uniformly distributed first point cloud, and performing downsampling processing on the fourth point cloud according to the voxel size to obtain the uniformly distributed second point cloud.
The third point cloud and the fourth point cloud are point cloud data obtained by scanning the target area with a laser radar. The first laser radar that acquires the third point cloud and the second laser radar that acquires the fourth point cloud may be the same laser radar or different laser radars. The pose of the first laser radar when collecting the third point cloud may be the same as or different from the pose of the second laser radar when collecting the fourth point cloud, which is not limited in this embodiment. In addition, when the laser radars, including the first laser radar and the second laser radar, collect point cloud data, the acquisition principle is, for example, TOF (Time of Flight), but may be another principle. The laser beam emitted by the laser radar during acquisition is not limited in this application, and includes, but is not limited to, infrared light near the 950 nm (nanometer) band.
The triggering mode of collecting the third point cloud based on the first laser radar is not limited; optionally, the collection of the third point cloud is triggered by the requirement of periodically collecting the point cloud of the target area. Taking a high-precision map as the application scenario as an example, the information on regional buildings provided by the high-precision map should reflect the current latest situation as far as possible. Therefore, periodic point cloud collection needs to be performed, according to a collection period, on the regional buildings included in the high-precision map, so that the high-precision map is updated in time. Based on this, the maintainer of the high-precision map may scan the target area with the first laser radar according to the collection period, so as to realize the collection of the third point cloud. The collection period may be set based on experience, for example, to three days, one week, or one month.
Optionally, the collection of the third point cloud may also be triggered by a collection instruction. Still taking the high-precision map as the application scenario as an example, the high-precision map may provide an error-correction control during use; when a user of the high-precision map detects that part of the scene in the high-precision map differs from reality, the user can trigger the error-correction control to report it. The maintainer receives the collection instruction triggered through the error-correction control and, according to the collection instruction, performs point cloud collection at the place reported by the user, that is, at the target area, so as to obtain the third point cloud. The target area in the above embodiment may be any area where point cloud collection is required; the content included in the target area is not limited in the embodiment of the present application, and may include various contents such as buildings, roads, trees, and street lamps.
As can be understood from the above examples, the acquisition time of the third point cloud refers to the time when the first laser radar scans the target area. After the first laser radar acquires the third point cloud, the server obtains the third point cloud based on its communication connection with the first laser radar. It should be noted that the time at which the server obtains the third point cloud may be the same as or different from the acquisition time of the third point cloud. The two times are the same when, for example, immediately after the first laser radar scans the target area to obtain the third point cloud, the third point cloud is sent to the server based on the communication connection between the server and the first laser radar, so that the server updates the point cloud based on the third point cloud.
The time at which the server obtains the third point cloud differs from the acquisition time when, for example, the scanning range of the first laser radar is limited, so the first laser radar needs to scan the target area in a plurality of poses to obtain multi-frame point cloud data including the third point cloud. In that case, although the first laser radar has finished acquiring the third point cloud, it stores the third point cloud in its storage space, and the multi-frame point cloud data in the storage space are not transmitted to the server together until the scanning of the target area in the plurality of poses is finished. Alternatively, when the number of frames of point cloud data stored in the storage space reaches a storage threshold set based on experience, that threshold number of frames of point cloud data, including the third point cloud, are sent to the server.
It should be noted that the foregoing examples are intended to distinguish the acquisition time of the third point cloud from the time at which the server obtains it, and are not intended to limit the process of the server obtaining the third point cloud; the interaction process between the server and the first laser radar may be as shown in the foregoing embodiments, or may be otherwise, which is not limited in the embodiment of the present application.
Optionally, after obtaining the third point cloud, the server may acquire a fourth point cloud of the target area based on the third point cloud. The acquisition time of the fourth point cloud is earlier than that of the third point cloud, and the acquisition time of the fourth point cloud refers to the time when the second laser radar scans the target area. It should be noted that, since the fourth point cloud may be either single-frame point cloud data or multi-frame point cloud data, and the acquisition times corresponding to the two cases differ, the two cases of the fourth point cloud are explained below for ease of understanding.
In the first case, the fourth point cloud may be single-frame point cloud data collected by the second laser radar in one pose. For example, when evaluating the experimental effect of the point cloud processing method provided by the embodiment of the application, the purpose of the server's execution is to assess the accuracy of the determined coordinate points to be removed, and there is no requirement that the updated target point cloud be put into use. Therefore, the fourth point cloud may be unprocessed single-frame point cloud data; in this case, the acquisition time of the fourth point cloud is the time when the second laser radar collects that single frame of point cloud data.
In the second case, the fourth point cloud may be point cloud data obtained by processing multi-frame point cloud data of the target area. The processing may be unifying the multi-frame point cloud data into a world coordinate system to obtain a reference point cloud representing the three-dimensional form of the target region. Alternatively, the processing may be that, after the multi-frame point clouds are unified into the world coordinate system to obtain the reference point cloud, the reference point cloud is updated based on a newly collected frame of point cloud data of the target area by applying the method provided in the embodiment of the present application; that is, before the fourth point cloud is updated based on the third point cloud, the reference point cloud has already been updated based on other point clouds to obtain the fourth point cloud. In the second case, the acquisition time of the fourth point cloud is the latest among the acquisition times of the frames in the multi-frame point cloud data.
Whichever of the cases shown in the above embodiments applies to the fourth point cloud, the acquisition time of the third point cloud is later than that of the fourth point cloud. Because the third point cloud is acquired later, the target area it represents is closer to the current state of the target area than the target area represented by the fourth point cloud; therefore, after acquiring the fourth point cloud based on the third point cloud, the server can update the fourth point cloud with the third point cloud, thereby improving the accuracy of the updated fourth point cloud.
Regarding the process of acquiring the fourth point cloud based on the third point cloud: for example, a target area is determined based on the third point cloud, and a fourth point cloud covering an area that contains the target area and is no smaller than it is determined from the range of the target area. Taking a high-precision map as the application scene and area A as the target area, suppose the high-precision map includes area A, area B, and area C. The positional relationship of area A, area B, and area C is shown in fig. 3, where area B is an elliptical region containing area A. After determining, based on the third point cloud, that the target area is area A, the server determines from the range of area A that the point cloud corresponding to area B is the fourth point cloud, and extracts the point cloud data corresponding to area B from the point cloud data used to construct the high-precision map, thereby obtaining the fourth point cloud.
After acquiring the third and fourth point clouds, the server may downsample them according to a voxel size: downsampling the third point cloud yields a uniformly distributed first point cloud, and downsampling the fourth point cloud yields a uniformly distributed second point cloud. Besides being uniformly distributed as described above, the downsampled first and second point clouds also have a smaller data volume than the unprocessed third and fourth point clouds. Working with the smaller first and second point clouds improves the efficiency of updating the second point cloud based on the first point cloud. The voxel size used for downsampling may be set from experience, for example a cube with a side length of 2 cm.
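As an illustrative sketch (not the patent's concrete implementation), voxel downsampling can be realized by bucketing points into cubic voxels and replacing each occupied voxel with the centroid of its points; the 2 cm voxel side length below follows the example above:

```python
from collections import defaultdict

def voxel_downsample(points, voxel=0.02):
    """Replace all points falling into the same cubic voxel of side
    `voxel` (metres) with their centroid."""
    buckets = defaultdict(list)
    for p in points:
        key = tuple(int(c // voxel) for c in p)  # integer voxel index
        buckets[key].append(p)
    # One representative point (centroid) per occupied voxel.
    return [tuple(sum(cs) / len(ps) for cs in zip(*ps))
            for ps in buckets.values()]
```

Because every output point stands for one voxel, the result is roughly uniformly spaced at the voxel size, which matches the uniformity property described above.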
Of course, the server may obtain the third and fourth point clouds as described above and then downsample them to obtain the first and second point clouds subsequently used to obtain the target point cloud, or it may choose not to downsample, that is, to use the third point cloud directly as the first point cloud and the fourth point cloud as the second point cloud. Either way, since the first point cloud is derived from the third point cloud, the laser radar used to collect the first point cloud is the first laser radar that collected the third point cloud, and the acquisition time of the first point cloud is that of the third point cloud. Likewise, the laser radar for the second point cloud is the second laser radar that collected the fourth point cloud, and the acquisition time of the second point cloud is that of the fourth point cloud. Because the third point cloud was acquired later than the fourth point cloud, the first point cloud was also acquired later than the second point cloud.
In step 202, based on the pose transformation relationship between the first coordinate system of the first point cloud and the second coordinate system of the second point cloud, the first point cloud is added to the second point cloud to obtain an initial point cloud comprising a plurality of first coordinate points and a plurality of second coordinate points, where the first coordinate points are the coordinates of the corresponding first point cloud in the second coordinate system and the second coordinate points are the coordinates of the corresponding second point cloud in the second coordinate system.
In one possible implementation, before adding the first point cloud to the second point cloud, the server also needs to acquire the pose transformation relationship between the first coordinate system and the second coordinate system. The acquisition process includes: for each first initial coordinate point of the first point cloud in the first coordinate system, searching among the second initial coordinate points included in the second point cloud for the second initial coordinate point corresponding to it, and then determining the pose transformation relationship between the two coordinate systems based on the first initial coordinate points and their corresponding second initial coordinate points.
A second initial coordinate point corresponds to a first initial coordinate point when the reflection points they indicate are the same physical point. Taking the laser radar scanning process shown in fig. 4 as an example, fig. 4 (1) illustrates the second laser radar scanning the target area to obtain the second point cloud, and fig. 4 (2) illustrates the first laser radar scanning the target area to obtain the first point cloud. The reflection point indicated by the first initial coordinate point a and the reflection point indicated by the second initial coordinate point a shown in fig. 4 are located at the same position on the street lamp; that is, they are the same point, so the second initial coordinate point corresponding to the first initial coordinate point a is the second initial coordinate point a.
By obtaining each first initial coordinate point and its corresponding second initial coordinate point, the coordinates of the same physical point in both the first and second coordinate systems are obtained, and the pose transformation relationship between the two coordinate systems can be computed from these coordinate pairs. Illustratively, the pose transformation relationship between the first coordinate system and the second coordinate system refers to a pose transformation matrix comprising rotation parameters and translation parameters. The translation parameters indicate the positional relationship between the coordinate origins of the two coordinate systems, and the rotation parameters indicate the attitude relationship between them.
Alternatively, instead of computing the pose transformation relationship as shown in the above embodiment, the server may obtain it from a pose acquisition device that measures the pose transformation relationship between the first and second coordinate systems. The pose acquisition device may be built into the first laser radar or mounted on it, so that when the first laser radar collects the first point cloud, the pose of the first coordinate system is measured, and the pose transformation relationship between the first and second coordinate systems is then determined from the pose of the first coordinate system and the pose of the second coordinate system.
Illustratively, the pose of a coordinate system (whether the first or the second) refers to the position of its coordinate origin in the world coordinate system together with its orientation. The pose of the second coordinate system may likewise be measured by a pose acquisition device, and may be acquired synchronously with the second point cloud or asynchronously. The second coordinate system may be the world coordinate system itself or a different coordinate system, which the embodiments of the present application do not limit. The pose acquisition device is, for example, an IMU (Inertial Measurement Unit) or another device capable of measuring pose.
In one possible implementation, after acquiring the pose transformation relationship between the first and second coordinate systems, the server adds the first point cloud to the second point cloud based on that relationship. Adding the first point cloud to the second point cloud means determining the coordinates of the first point cloud in the second coordinate system; since the first point cloud is a set of first initial coordinate points, these coordinates are the first coordinate points corresponding to the first initial coordinate points in the second coordinate system. Optionally, for any first initial coordinate point, the corresponding first coordinate point is determined by multiplying the coordinate of that point by the pose transformation matrix; the product is the first coordinate point corresponding to that first initial coordinate point. Performing this step for every first initial coordinate point in the first point cloud determines the first coordinate points of the first point cloud in the second coordinate system, thereby adding the first point cloud to the second point cloud and obtaining the initial point cloud comprising the first coordinate points and the second coordinate points.
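The matrix multiplication above can be sketched as follows. The 4x4 homogeneous pose matrix (rotation plus translation) and the yaw-only rotation are illustrative assumptions, not the patent's concrete parametrization:

```python
import math

def make_pose(yaw, tx, ty, tz):
    """4x4 homogeneous pose matrix: rotation about the z-axis by `yaw`
    radians (rotation parameters), then translation (tx, ty, tz)
    (translation parameters)."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0.0, tx],
            [s,  c, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def to_second_frame(T, point):
    """Map a first initial coordinate point into the second coordinate
    system by multiplying it with the pose matrix T."""
    x, y, z = point
    return tuple(T[i][0] * x + T[i][1] * y + T[i][2] * z + T[i][3]
                 for i in range(3))
```

Applying `to_second_frame` to every first initial coordinate point yields the first coordinate points of the initial point cloud.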
Similarly to the first coordinate points, the second coordinate points represent the coordinates of the corresponding second point cloud in the second coordinate system; the part of the second point cloud corresponding to a second coordinate point is its second initial coordinate point. Note that because the first and second point clouds are both point cloud data of the target area, some first coordinate points may coincide with second initial coordinate points. Taking the laser radar scanning process shown in fig. 4 as an example, the street lamp scanned by the first laser radar is also present when the second laser radar scans the target area. Hence, in the second coordinate system, the first coordinate point falling on the street lamp coincides with the second initial coordinate point falling on the street lamp. In this case, after determining the plurality of first coordinate points, the server may screen out those second initial coordinate points that coincide with first coordinate points and use the remaining second initial coordinate points as the second coordinate points.
Optionally, the server may also choose not to filter out the duplicated second initial coordinate points, that is, to take all second initial coordinate points as second coordinate points and remove the duplicates through the subsequent step of determining the coordinate points to be removed. The principle of removing a duplicated second initial coordinate point is described in the process of determining the coordinates to be removed in step 204 and is not repeated here.
In step 203, for any first coordinate point, a first line connecting that first coordinate point to the phase center of the laser radar used to collect the first point cloud is determined.
In one possible implementation, the laser radar used to collect the first point cloud is the first laser radar in the above embodiments, and its phase center refers to the position from which the first laser radar emits laser beams, as shown in fig. 4. Note, however, that since the first coordinate points are the coordinates of the first point cloud in the second coordinate system, the phase center of the first laser radar used to establish the lines must correspondingly be moved from the first coordinate system to the second coordinate system. That is, the phase center of the first laser radar in the second coordinate system is determined based on the pose transformation relationship between the first and second coordinate systems. This determination is similar to that of a first coordinate point and can likewise be performed by matrix multiplication, which is not repeated here.
With the phase center of the first laser radar determined in the second coordinate system, a first line is established between each first coordinate point and the phase center; the first line represents the emission trajectory of the first laser radar's laser beam when that first coordinate point was collected. Continuing with fig. 4 as an example: after the first point cloud is added to the second point cloud, the second initial coordinate point a is the first coordinate point of the first initial coordinate point a in the second coordinate system, and the position of the first laser radar in the second coordinate system is that of the laser radar shown in fig. 4 (1). In this case, the line between that laser radar and the second initial coordinate point a in fig. 4 (1) reflects the emission trajectory of the laser beam with which the first laser radar collected the first initial coordinate point a. Establishing a first line from each first coordinate point in the initial point cloud to the phase center of the first laser radar thus reconstructs, in the second coordinate system, the emission trajectory of the laser beam corresponding to each first coordinate point.
In step 204, the coordinate points to be removed among the plurality of second coordinate points included in the initial point cloud are determined to obtain the target point cloud, a coordinate point to be removed being a second coordinate point whose distance from a first line satisfies the removal condition.
Illustratively, a coordinate point to be removed is a second coordinate point that occludes a first coordinate point. Continuing with fig. 4, consider the second coordinate point B occluding the first coordinate point B. Fig. 4 (2) shows the emission trajectory of the laser beam corresponding to the first initial coordinate point B; its reconstruction in the second coordinate system is the first line connecting the first coordinate point B and the phase center of the first laser radar in fig. 4 (1). This first line passes through a bush; that is, in the second coordinate system, a laser beam emitted by the first laser radar at the scanning angle corresponding to the first coordinate point B could not reach the first coordinate point B, but would instead be reflected back to the first laser radar by the second coordinate point B located in the bush. In other words, the second coordinate point B occludes the first coordinate point B. However, since the first initial coordinate point B corresponding to the first coordinate point B is a real coordinate point acquired by the first laser radar scanning the target area, the first coordinate point B should not be occluded by any second coordinate point; that is, the second coordinate point B was no longer present when the first laser radar acquired the first point cloud, so it belongs to the coordinate points to be removed.
It should be noted that the foregoing example is intended to illustrate the principle of determining the coordinate points to be removed in the embodiments of the present application, and does not limit the relationship between a coordinate point to be removed and the first line. A coordinate point to be removed may lie exactly on the first line, that is, at a distance of 0, like the second coordinate point B shown in fig. 4, or at a nonzero distance that nevertheless satisfies the removal condition.
In addition, in the process of determining the coordinate points to be removed, the server may first determine candidate coordinate points, that is, second coordinate points that may occlude first coordinate points. The method includes: acquiring, for each first coordinate point, a corresponding reference point set comprising a plurality of reference points located on the first line corresponding to that first coordinate point; determining a second coordinate point as a candidate coordinate point based on its distance to a reference point in the reference point sets being smaller than a distance threshold; and determining the distance between each candidate coordinate point and its corresponding first line, and determining the candidate coordinate point as a coordinate point to be removed based on that distance satisfying the removal condition.
The manner of placing the reference points on the first line corresponding to a first coordinate point is not limited. For example, an interval threshold may be set from experience, and, starting from the first coordinate point or from the phase center of the first laser radar, one reference point is placed on the first line at every interval-threshold distance, yielding a reference point set comprising a plurality of reference points. Alternatively, the number of reference points may be set from experience, the spacing between reference points determined from the length of the first line corresponding to the first coordinate point and that number, and the reference points then placed on the first line starting from the first coordinate point or the phase center of the first laser radar. As these examples show, the number of reference points corresponding to different first coordinate points may be the same or different, and so may the spacing between the reference points.
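The first placement scheme (a fixed interval threshold along each first line) can be sketched as below; the 5 cm spacing is an assumed value, not one given in the text:

```python
import math

def reference_points(phase_center, first_point, spacing=0.05):
    """Place reference points every `spacing` metres along the first line,
    starting from the phase centre and ending at the first coordinate point."""
    d = [b - a for a, b in zip(phase_center, first_point)]
    length = math.sqrt(sum(c * c for c in d))
    n = max(1, int(length / spacing))  # number of intervals along the line
    # n + 1 evenly spaced points, endpoints included.
    return [tuple(a + c * k / n for a, c in zip(phase_center, d))
            for k in range(n + 1)]
```

Because `n` depends on the line length, different first coordinate points naturally get different numbers of reference points, as noted above.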
In one possible implementation, after the reference point set corresponding to each first coordinate point is determined, candidate coordinate points may be determined from the plurality of second coordinate points based on these reference point sets. The determination process includes: searching, among all reference points in the reference point sets corresponding to the first coordinate points, for the reference point closest to a given second coordinate point, as the target reference point; and determining that second coordinate point as a candidate coordinate point based on its distance to its target reference point being smaller than the distance threshold.
Optionally, when determining the target reference point, the server's search range may be the reference point sets corresponding to the first coordinate points, or the reference point sets together with the first coordinate points themselves. Illustratively, the server searches for the target reference point using a k-d tree (k-dimensional tree), a data structure for fast nearest-neighbor and approximate-nearest-neighbor search in multidimensional space; a node in the k-d tree represents a multidimensional coordinate, that is, one of the reference points involved in the above embodiment. With the k-d tree, the server can quickly determine the target reference point closest to a second coordinate point.
Illustratively, the server may implement the search for the target reference point with the k-d tree data structure, or with another data structure or search method. After the target reference point is found, the server calculates the distance between the second coordinate point and the target reference point, and when that distance is smaller than the distance threshold, determines the second coordinate point as a candidate coordinate point. The distance threshold may be set from experience and from the implementation environment, such as the floor area of the target area. Determining candidate coordinate points from the plurality of second coordinate points screens the second coordinate points and reduces the number of second coordinate points whose distance from a first line must subsequently be determined.
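The candidate screening above can be sketched with a brute-force nearest-reference search standing in for the k-d tree query (in practice a k-d tree such as `scipy.spatial.cKDTree` would replace the inner loop); the default threshold value is an assumption:

```python
def nearest_reference(point, refs):
    """Brute-force stand-in for the k-d tree query: return the reference
    point closest to `point` and the squared distance to it."""
    best, best_d2 = None, float("inf")
    for r in refs:
        d2 = sum((a - b) ** 2 for a, b in zip(point, r))
        if d2 < best_d2:
            best, best_d2 = r, d2
    return best, best_d2

def candidate_points(second_points, refs, dist_threshold=0.1):
    """Keep only the second coordinate points whose nearest reference
    point lies within the distance threshold."""
    return [p for p in second_points
            if nearest_reference(p, refs)[1] < dist_threshold ** 2]
```

Squared distances are compared against the squared threshold to avoid a square root per point; a k-d tree reduces each query from linear to roughly logarithmic time.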
Alternatively, after screening the candidate coordinate points from the second coordinate points, the server determines the coordinate points to be removed from among them. Because determining whether a candidate coordinate point is to be removed depends on the distance between the candidate coordinate point and its corresponding first line, and the way that distance is evaluated depends in turn on the curvature at the candidate coordinate point, the server needs to determine the curvature of each candidate coordinate point among the plurality of second coordinate points.
In one possible implementation, the process of determining the curvature of a candidate coordinate point includes: taking the candidate coordinate point as the center, setting a reference radius from experience, determining the reference surface corresponding to the candidate coordinate point from the center and the reference radius, calculating the curvature of the reference surface based on the plurality of second coordinate points it contains, and taking that curvature as the curvature of the candidate coordinate point. For example, with a reference radius of 10 cm set from experience, the reference surface is obtained and the second coordinate points within it are determined; the coefficients of a local parabolic fitting formula are then obtained by least squares from those second coordinate points, giving a curvature function from which the curvature of the reference surface is calculated. The curvature of the reference surface indicates its degree of bending: the greater the curvature, the more bent the surface. Given the curvature of a candidate coordinate point, determining whether the distance between the candidate coordinate point and its corresponding first line satisfies the removal condition, and hence whether the candidate coordinate point is a coordinate point to be removed, may include but is not limited to the following two ways.
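The local parabolic fit itself is not reproduced here. As a loose illustrative proxy under the same idea, one can least-squares-fit a plane z = a*x + b*y + c to the neighbours within the reference radius and use the RMS residual as a flatness score: near zero for planar neighbourhoods, larger for curved ones. This is an assumed stand-in, not the patent's curvature formula:

```python
import math

def flatness_score(neighbors):
    """Fit z = a*x + b*y + c by least squares (3x3 normal equations
    solved with Cramer's rule) and return the RMS residual as a crude
    curvature proxy for the neighbourhood."""
    n = len(neighbors)
    sxx = sum(x * x for x, y, z in neighbors)
    sxy = sum(x * y for x, y, z in neighbors)
    syy = sum(y * y for x, y, z in neighbors)
    sx = sum(x for x, y, z in neighbors)
    sy = sum(y for x, y, z in neighbors)
    sxz = sum(x * z for x, y, z in neighbors)
    syz = sum(y * z for x, y, z in neighbors)
    sz = sum(z for x, y, z in neighbors)
    M = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    v = [sxz, syz, sz]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(M)
    coeffs = []
    for i in range(3):  # Cramer's rule for a, b, c
        Mi = [row[:] for row in M]
        for r in range(3):
            Mi[r][i] = v[r]
        coeffs.append(det3(Mi) / d)
    a, b, c = coeffs
    return math.sqrt(sum((z - (a * x + b * y + c)) ** 2
                         for x, y, z in neighbors) / n)
```

Comparing the score against the first or third threshold then selects between the two determination ways described next.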
In the first determination way, based on the curvature of the candidate coordinate point being greater than a first threshold, a first distance between the candidate coordinate point and its corresponding first line is calculated; based on the first distance being smaller than a second threshold, the first distance is determined to satisfy the removal condition, and the candidate coordinate point is determined as a coordinate point to be removed.
Optionally, the curvature of the candidate coordinate point being greater than the first threshold means that its reference surface is non-planar. In this case the first distance between the candidate coordinate point and its corresponding first line, that is, the perpendicular distance from the point to the line, is calculated; the first line corresponding to a candidate coordinate point is the first line on which its target reference point lies. The first distance is then compared with the second threshold, and when the first distance is smaller than the second threshold, the candidate coordinate point is determined as a coordinate point to be removed that would occlude the first coordinate point.
Illustratively, the second threshold may be set from experience. By setting the second threshold, the aperture radius of the laser beam is also taken into account when determining the coordinate points to be removed; this avoids removing only those second coordinate points lying exactly on a first line while missing those that are merely close to it yet still occlude the laser beam, making the determined coordinate points to be removed more comprehensive and more accurate.
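The first distance in this way is an ordinary perpendicular point-to-line distance, which can be sketched as:

```python
import math

def point_to_line_distance(p, a, b):
    """Perpendicular distance from point p to the infinite line through
    a and b (e.g. the phase centre and a first coordinate point)."""
    ab = [v - u for u, v in zip(a, b)]
    ap = [v - u for u, v in zip(a, p)]
    # |AB x AP| / |AB| gives the perpendicular distance.
    cross = (ab[1] * ap[2] - ab[2] * ap[1],
             ab[2] * ap[0] - ab[0] * ap[2],
             ab[0] * ap[1] - ab[1] * ap[0])
    return (math.sqrt(sum(c * c for c in cross))
            / math.sqrt(sum(c * c for c in ab)))
```

A result smaller than the second threshold marks the candidate coordinate point for removal.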
In the second determination way, based on the curvature of the candidate coordinate point being smaller than a third threshold, a second distance between the plane on which the candidate coordinate point lies and the first coordinate point corresponding to the candidate coordinate point is calculated; based on the second distance being greater than a fourth threshold, the second distance is determined to satisfy the removal condition, and the candidate coordinate point is determined as a coordinate point to be removed.
Optionally, the curvature of the candidate coordinate point being smaller than the third threshold means that its reference surface is a plane. The third threshold may be any value set from experience and may be the same as or different from the first threshold, which the embodiments of the present application do not limit. The plane on which the candidate coordinate point lies is its reference surface, and the first coordinate point corresponding to the candidate coordinate point is the first coordinate point connected to the first line on which its target reference point lies. Taking the second coordinate point B in fig. 4 as a candidate coordinate point, the first coordinate point corresponding to it is the first coordinate point B.
After calculating the second distance between the reference surface of the candidate coordinate point and its corresponding first coordinate point, the server compares the second distance with the fourth threshold; when the second distance is greater than the fourth threshold, the second distance is determined to satisfy the removal condition, that is, the candidate coordinate point is a coordinate point to be removed. The fourth threshold may be the same as or different from the second threshold, which the embodiments of the present application do not limit.
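The second distance is a point-to-plane distance. Assuming the reference surface is represented by any point on it and a normal vector (a representation not specified in the text), it can be sketched as:

```python
import math

def point_to_plane_distance(p, plane_point, normal):
    """Unsigned distance from point p to the plane through `plane_point`
    with normal vector `normal` (need not be unit length)."""
    n_len = math.sqrt(sum(c * c for c in normal))
    return abs(sum((a - b) * c
                   for a, b, c in zip(p, plane_point, normal))) / n_len
```

A first coordinate point lying well behind a fitted plane (second distance above the fourth threshold) indicates that the plane's candidate coordinate point occludes it.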
In one possible implementation, consider the case from step 202 in which second initial coordinate points coinciding with first coordinate points are not screened out when the second coordinate points are determined; such coinciding second coordinate points are also identified as coordinate points to be removed during the removal determination. Call a second coordinate point that coincides with a first coordinate point a coincident coordinate point. Because a coincident coordinate point coincides with a first coordinate point, its target reference point, namely that first coordinate point, is at distance 0, which is smaller than the distance threshold, so the coincident coordinate point is determined as a candidate coordinate point. And because the first coordinate point lies on the first line corresponding to the target reference point, the distance between the coincident coordinate point and that first line satisfies the removal condition, so the coincident coordinate point is determined as a coordinate point to be removed.
Through the above steps, the server can determine, among the second coordinate points, the coordinate points to be removed that would occlude first coordinate points, and remove them from the initial point cloud to obtain a target point cloud that does not include the coordinate points to be removed. By removing these coordinate points, the process of updating the second point cloud based on the first point cloud not only adds originally missing content to the second point cloud but also removes redundant content from it, redundant content being content that no longer exists in the current target area. For example, suppose that when the second point cloud was collected the target area contained a street lamp and a bush, and that by the time the first point cloud was collected a building A had been newly built in the target area. The first point cloud then includes building A, the street lamp, and the bush, and adding the first point cloud to the second point cloud adds the point cloud data of building A, that is, the originally missing content. If the bush has shed its leaves by the time the first point cloud is acquired, the leaves of the bush included in the second point cloud are redundant content. With the method provided by the embodiments of the present application, the point cloud data of the leaves on the bush can be determined as coordinate points to be removed and removed, ensuring that the updated target point cloud is closer to the current state of the target area and has high accuracy.
In one possible implementation, because the laser radar has a limited scanning range, the point cloud data acquired for the target area may be multi-frame point cloud data that includes the first point cloud. In this case, after updating the second point cloud based on the first point cloud to obtain the target point cloud, the server may further acquire a fifth point cloud of the target area, whose acquisition pose differs from that of the first point cloud, and update the target point cloud based on the fifth point cloud.
The fifth point cloud differs from the first point cloud in the acquisition pose of the laser radar; their acquisition times may be the same or different. In addition, the laser radar that collects the fifth point cloud and the first laser radar that collects the first point cloud may be the same device or different devices, which is not limited in the embodiments of the present application. The process of updating the target point cloud based on the fifth point cloud is similar to the process of updating the second point cloud based on the first point cloud in steps 201-204, and is not repeated here.
In summary, in the point cloud processing method provided by the embodiments of the present application, the first connection line between a first coordinate point and the phase center of the laser radar reproduces the emission trajectory of the laser pulse that produced that point, so the emission trajectories of the first point cloud are reconstructed in the second coordinate system from the first connection lines. Because every point of the first point cloud was successfully acquired, its acquisition path cannot have been occluded, and therefore the coordinate points to be removed determined from the first connection lines, that is, the points that would occlude the first point cloud, are determined with high accuracy. In addition, when determining the coordinate points to be removed among the candidate coordinate points, different determination modes can be selected according to the curvature of the candidate coordinate points, which provides high flexibility. By determining the coordinate points to be removed, the update of the second point cloud based on the first point cloud not only adds the first point cloud to the second point cloud but also removes the coordinate points to be removed from the initial point cloud, so the update is more comprehensive and the updated target point cloud is more accurate.
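The overall flow summarized above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the patented implementation: the function name, the ray-sampling step, and the distance threshold are hypothetical, and the brute-force distance computation stands in for whatever spatial index a production system would use.

```python
import numpy as np

def update_point_cloud(first_pts, second_pts, T, sensor_center,
                       step=0.05, dist_thresh=0.1):
    """Hypothetical sketch of the described update pipeline.

    first_pts     : (N, 3) newly acquired points, in the first coordinate system
    second_pts    : (M, 3) historical points, in the second coordinate system
    T             : (4, 4) pose transform from the first to the second system
    sensor_center : (3,) lidar phase center, expressed in the second system
    """
    # Step 1: transform the first point cloud into the second coordinate system.
    homo = np.hstack([first_pts, np.ones((len(first_pts), 1))])
    first_in_second = (T @ homo.T).T[:, :3]

    keep = np.ones(len(second_pts), dtype=bool)
    for p in first_in_second:
        # Step 2: sample reference points along the ray from the phase center
        # to the point (the "first connection line" / emission trajectory).
        direction = p - sensor_center
        length = np.linalg.norm(direction)
        ts = np.arange(0.0, 1.0, step / length) if length > step else np.array([0.0])
        refs = sensor_center + ts[:, None] * direction

        # Step 3: a historical point lying within dist_thresh of the ray would
        # have blocked the new measurement, so it must be gone -> remove it.
        d = np.linalg.norm(second_pts[:, None, :] - refs[None, :, :], axis=2)
        keep &= d.min(axis=1) >= dist_thresh

    # Target point cloud: surviving historical points plus the new points.
    return np.vstack([second_pts[keep], first_in_second])
```

With an identity transform and the sensor at the origin, a historical point sitting on the ray to a new point is dropped while off-ray points survive.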
Referring to fig. 5, an embodiment of the present application provides a point cloud processing apparatus, including: an acquisition module 501, an addition module 502, a determination module 503, and a removal module 504;
the acquiring module 501 is configured to acquire a first point cloud and a second point cloud of the target area, where an acquisition time of the first point cloud is later than an acquisition time of the second point cloud;
the adding module 502 is configured to add the first point cloud to the second point cloud based on a pose transformation relationship between the first coordinate system of the first point cloud and the second coordinate system of the second point cloud, to obtain an initial point cloud including a plurality of first coordinate points and a plurality of second coordinate points, where the first coordinate points are coordinates of the corresponding first point cloud in the second coordinate system, and the second coordinate points are coordinates of the corresponding second point cloud in the second coordinate system;
a determining module 503, configured to determine, for any first coordinate point, a first connection line between that first coordinate point and the phase center of the laser radar that acquires the first point cloud;
the removing module 504 is configured to determine coordinate points to be removed among the plurality of second coordinate points included in the initial point cloud, to obtain a target point cloud, where a coordinate point to be removed is a second coordinate point whose distance from the first connection line satisfies a removal condition.
Optionally, the removing module 504 is configured to: obtain a reference point set corresponding to each first coordinate point, where the reference point set corresponding to a first coordinate point includes a plurality of reference points located on the first connection line corresponding to that first coordinate point; determine any second coordinate point as a candidate coordinate point based on there being, in the reference point sets corresponding to the plurality of first coordinate points, a reference point whose distance from that second coordinate point is smaller than a distance threshold; and determine the distance between the candidate coordinate point and the first connection line corresponding to the candidate coordinate point, and determine the candidate coordinate point as a coordinate point to be removed based on that distance satisfying the removal condition.
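The candidate-selection step above can be made concrete with a k-d tree over the pooled reference points, which also mirrors the nearest-reference-point search described below for the removing module. This is a hypothetical helper, not the embodiment's code; `scipy.spatial.cKDTree` is used merely as one possible index structure, and the threshold is illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def candidate_points(second_pts, ref_pts, dist_thresh=0.1):
    """Return indices of candidate coordinate points: second coordinate
    points whose nearest reference point (pooled over the reference point
    sets of all first connection lines) lies within dist_thresh."""
    tree = cKDTree(ref_pts)            # index every sampled reference point
    dists, _ = tree.query(second_pts)  # nearest-reference distance per point
    return np.flatnonzero(dists < dist_thresh)
```

For reference points sampled along a single vertical ray, a point on the ray is selected as a candidate and an off-ray point is not.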
Optionally, the removing module 504 is configured to: determine the curvature of a candidate coordinate point among the plurality of second coordinate points, where the candidate coordinate point is determined based on the first connection line; when the curvature of the candidate coordinate point is greater than a first threshold, calculate a first distance between the candidate coordinate point and the first connection line corresponding to the candidate coordinate point, and when the first distance is smaller than a second threshold, determine that the first distance satisfies the removal condition and determine the candidate coordinate point as a coordinate point to be removed; or, when the curvature of the candidate coordinate point is smaller than a third threshold, calculate a second distance between the plane in which the candidate coordinate point lies and the first coordinate point corresponding to the candidate coordinate point, and when the second distance is greater than a fourth threshold, determine that the second distance satisfies the removal condition and determine the candidate coordinate point as a coordinate point to be removed.
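The curvature-gated branch can be sketched as follows. The helper signature and the four thresholds `t1`..`t4` are hypothetical stand-ins for the first through fourth thresholds in the text, and the curvature and local plane of the candidate are assumed to have been estimated elsewhere (e.g., from a local neighborhood).

```python
import numpy as np

def should_remove(candidate, ray_origin, ray_end, curvature,
                  plane_point, plane_normal,
                  t1=0.5, t2=0.1, t3=0.1, t4=0.2):
    """Curvature-gated removal test (illustrative thresholds).

    High curvature (edges, vegetation): remove when the perpendicular
    distance from the candidate to the first connection line is small.
    Low curvature (planar surface): remove when the new point lies
    farther than t4 from the candidate's local plane.
    """
    if curvature > t1:
        # First distance: perpendicular point-to-ray distance.
        d = ray_end - ray_origin
        v = candidate - ray_origin
        dist_to_line = np.linalg.norm(np.cross(d, v)) / np.linalg.norm(d)
        return dist_to_line < t2
    if curvature < t3:
        # Second distance: distance from the first coordinate point
        # (the new measurement) to the candidate's plane.
        dist_to_plane = abs(np.dot(ray_end - plane_point, plane_normal))
        return dist_to_plane > t4
    return False
```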
Optionally, the removing module 504 is configured to: search, among the reference points included in the reference point sets corresponding to the plurality of first coordinate points, for the reference point closest to a given second coordinate point as the target reference point; and determine that second coordinate point as a candidate coordinate point based on the distance between it and its target reference point being smaller than the distance threshold.
Optionally, the acquiring module 501 is configured to: acquire a third point cloud and a fourth point cloud of the target area, where the acquisition time of the third point cloud is later than that of the fourth point cloud; and downsample the third point cloud according to a voxel size to obtain the evenly distributed first point cloud, and downsample the fourth point cloud according to the voxel size to obtain the evenly distributed second point cloud.
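Voxel-grid downsampling of the kind described here (one representative per occupied voxel of a given size, yielding an evenly distributed cloud) might be implemented as follows. Taking the per-voxel centroid is one common choice, not necessarily the one the embodiment uses.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one representative (the centroid) per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)  # voxel index per point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)   # accumulate coordinates per voxel
    np.add.at(counts, inverse, 1.0)    # count points per voxel
    return sums / counts[:, None]      # centroid of each voxel
```

Two points falling in the same unit voxel collapse to their centroid, while a point in another voxel is kept separately.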
Optionally, the obtaining module 501 is further configured to: search, for each first initial coordinate point of the first point cloud in the first coordinate system, among the second initial coordinate points included in the second point cloud for the second initial coordinate point corresponding to that first initial coordinate point, and determine the pose transformation relationship between the first coordinate system and the second coordinate system based on the plurality of first initial coordinate points and their corresponding second initial coordinate points; or, obtain the pose transformation relationship between the first coordinate system and the second coordinate system from a pose acquisition device, which obtains the pose transformation relationship through measurement.
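The first alternative, estimating the pose transformation from matched point pairs, is essentially the least-squares rigid alignment used inside ICP-style registration. The sketch below shows the SVD (Kabsch) solution for one set of correspondences; finding the correspondences themselves, i.e. the nearest-point search the text describes, is assumed to have been done, and the helper name is illustrative.

```python
import numpy as np

def estimate_pose(first_pts, matched_second_pts):
    """Least-squares rigid alignment (Kabsch/SVD). Assumes the i-th rows
    of the two (N, 3) arrays are corresponding points; returns the 4x4
    transform T mapping the first coordinate system into the second."""
    c1 = first_pts.mean(axis=0)
    c2 = matched_second_pts.mean(axis=0)
    H = (first_pts - c1).T @ (matched_second_pts - c2)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c2 - R @ c1
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```

In a full ICP loop this estimation would alternate with re-matching nearest points until the transform converges; with exact correspondences a known rotation and translation are recovered directly.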
Optionally, the apparatus further comprises: the updating module is used for acquiring a fifth point cloud of the target area, and the acquisition pose of the fifth point cloud is different from that of the first point cloud; the target point cloud is updated based on the fifth point cloud.
With the above apparatus, since the first connection line between a first coordinate point and the phase center of the laser radar reproduces the emission trajectory of that point, the emission trajectories of the first point cloud are reconstructed in the second coordinate system from the first connection lines. Because the first point cloud consists of successfully acquired points, its acquisition paths were not occluded, so the coordinate points to be removed, determined from the first connection lines as the points that would occlude the first point cloud, are determined with high accuracy. By determining the coordinate points to be removed, the update of the second point cloud based on the first point cloud not only adds the first point cloud to the second point cloud but also removes the coordinate points to be removed from the initial point cloud, so the update is more comprehensive and the updated target point cloud is more accurate.
It should be noted that the division of functional modules in the apparatus provided by the foregoing embodiment is merely illustrative. In practical applications, the foregoing functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to perform all or part of the functions described above. In addition, the apparatus embodiment and the method embodiments provided above belong to the same concept; for the specific implementation of the apparatus, refer to the method embodiments, which is not repeated here.
Fig. 6 is a schematic structural diagram of a server provided in an embodiment of the present application. The server may vary considerably depending on its configuration or performance, and may include one or more processors 601 and one or more memories 602, where the one or more memories 602 store at least one computer program that is loaded and executed by the one or more processors 601 so that the server implements the point cloud processing method provided by each of the method embodiments. The processor is, for example, a CPU (Central Processing Unit). Of course, the server may also have a wired or wireless network interface, a keyboard, an input/output interface, and other components for implementing the functions of the device, which are not described here.
Fig. 7 is a schematic structural diagram of a point cloud processing device according to an embodiment of the present application. The device may be a terminal, for example: a smart phone, a tablet, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop, or a desktop computer. A terminal may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
Generally, the terminal includes: a processor 701 and a memory 702.
Processor 701 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 701 may be implemented in at least one of the hardware forms DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 701 may also include a main processor and a coprocessor. The main processor, also referred to as a CPU (Central Processing Unit), processes data in the awake state; the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 701 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
Memory 702 may include one or more computer-readable storage media, which may be non-transitory. The memory 702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 702 is configured to store at least one instruction for execution by the processor 701 to cause the terminal to implement a point cloud processing method provided by a method embodiment in the present application.
In some embodiments, the terminal may further optionally include: a peripheral interface 703 and at least one peripheral. The processor 701, the memory 702, and the peripheral interface 703 may be connected by a bus or signal lines. The individual peripheral devices may be connected to the peripheral device interface 703 via buses, signal lines or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 704, a display 705, a camera assembly 706, audio circuitry 707, a positioning assembly 708, and a power supply 709.
The peripheral interface 703 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 701 and the memory 702. In some embodiments, the processor 701, the memory 702, and the peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 701, the memory 702, and the peripheral interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 704 is configured to receive and transmit RF (Radio Frequency) signals, also referred to as electromagnetic signals. The radio frequency circuit 704 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 704 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 704 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 704 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 704 may also include NFC (Near Field Communication) related circuitry, which is not limited in this application.
The display screen 705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 705 is a touch display, the display 705 also has the ability to collect touch signals at or above the surface of the display 705. The touch signal may be input to the processor 701 as a control signal for processing. At this time, the display 705 may also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards. In some embodiments, the display 705 may be one, disposed on the front panel of the terminal; in other embodiments, the display 705 may be at least two, respectively disposed on different surfaces of the terminal or in a folded design; in other embodiments, the display 705 may be a flexible display disposed on a curved surface or a folded surface of the terminal. Even more, the display 705 may be arranged in a non-rectangular irregular pattern, i.e. a shaped screen. The display 705 may be made of LCD (Liquid Crystal Display ), OLED (Organic Light-Emitting Diode) or other materials.
The camera assembly 706 is used to capture images or video. Optionally, the camera assembly 706 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the back of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fused shooting functions. In some embodiments, camera assembly 706 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 707 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and environments, converting the sound waves into electric signals, and inputting the electric signals to the processor 701 for processing, or inputting the electric signals to the radio frequency circuit 704 for voice communication. For the purpose of stereo acquisition or noise reduction, a plurality of microphones can be respectively arranged at different parts of the terminal. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 701 or the radio frequency circuit 704 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 707 may also include a headphone jack.
The positioning component 708 is used to determine the current geographic location of the terminal to enable navigation or LBS (Location Based Service). The positioning component 708 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 709 is used to power the various components in the terminal. The power supply 709 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power supply 709 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal further includes one or more sensors 710. The one or more sensors 710 include, but are not limited to: acceleration sensor 711, gyroscope sensor 712, pressure sensor 713, fingerprint sensor 714, optical sensor 715, and proximity sensor 716.
The acceleration sensor 711 can detect the magnitudes of accelerations on three coordinate axes of a coordinate system established with the terminal. For example, the acceleration sensor 711 may be used to detect the components of the gravitational acceleration in three coordinate axes. The processor 701 may control the display screen 705 to display a user interface in a landscape view or a portrait view based on the gravitational acceleration signal acquired by the acceleration sensor 711. The acceleration sensor 711 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 712 may detect a body direction and a rotation angle of the terminal, and the gyro sensor 712 may collect a 3D motion of the user to the terminal in cooperation with the acceleration sensor 711. The processor 701 may implement the following functions based on the data collected by the gyro sensor 712: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 713 may be disposed at a side frame of the terminal and/or at a lower layer of the display screen 705. When the pressure sensor 713 is disposed at a side frame of the terminal, a grip signal of the terminal by a user may be detected, and the processor 701 performs left-right hand recognition or quick operation according to the grip signal collected by the pressure sensor 713. When the pressure sensor 713 is disposed at the lower layer of the display screen 705, the processor 701 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 705. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 714 is used to collect a fingerprint of the user, and the processor 701 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 714, or the fingerprint sensor 714 identifies the identity of the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 701 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying for and changing settings, etc. The fingerprint sensor 714 may be provided on the front, back or side of the terminal. When a physical key or vendor Logo (trademark) is provided on the terminal, the fingerprint sensor 714 may be integrated with the physical key or vendor Logo.
The optical sensor 715 is used to collect the ambient light intensity. In one embodiment, the processor 701 may control the display brightness of the display screen 705 based on the ambient light intensity collected by the optical sensor 715. Specifically, when the intensity of the ambient light is high, the display brightness of the display screen 705 is turned up; when the ambient light intensity is low, the display brightness of the display screen 705 is turned down. In another embodiment, the processor 701 may also dynamically adjust the shooting parameters of the camera assembly 706 based on the ambient light intensity collected by the optical sensor 715.
A proximity sensor 716, also referred to as a distance sensor, is typically provided on the front panel of the terminal. The proximity sensor 716 is used to collect the distance between the user and the front face of the terminal. In one embodiment, when the proximity sensor 716 detects that the distance between the user and the front face of the terminal gradually decreases, the processor 701 controls the display 705 to switch from the bright screen state to the off screen state; when the proximity sensor 716 detects that the distance between the user and the front surface of the terminal gradually increases, the processor 701 controls the display screen 705 to switch from the off-screen state to the on-screen state.
Those skilled in the art will appreciate that the architecture shown in fig. 7 is not limiting of the point cloud processing device and may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
In an exemplary embodiment, a computer device is also provided. The computer device includes a processor and a memory, and the memory stores at least one computer program. The at least one computer program is loaded and executed by the processor to cause the computer device to implement any one of the point cloud processing methods described above.
In an exemplary embodiment, a computer-readable storage medium is also provided, in which at least one computer program is stored. The at least one computer program is loaded and executed by a processor of a computer device to cause the computer device to implement any one of the point cloud processing methods described above.
In one possible implementation, the computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product or a computer program is also provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs any one of the point cloud processing methods described above.
It should be noted that the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals referred to in this application are all authorized by the user or fully authorized by the parties, and the collection, use, and processing of the relevant data comply with relevant laws, regulations, and standards of the relevant countries and regions. For example, the first point cloud and the second point cloud referred to herein are both acquired with sufficient authorization.
It should be understood that references herein to "a plurality" mean two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects.
The foregoing description is merely of exemplary embodiments of the present application and is not intended to limit the present application. Any modification, equivalent replacement, or improvement made within the principles of the present application shall fall within the protection scope of the present application.

Claims (10)

1. A method of point cloud processing, the method comprising:
acquiring a first point cloud and a second point cloud of a target area, wherein the acquisition time of the first point cloud is later than that of the second point cloud;
adding the first point cloud into the second point cloud based on a pose transformation relation between a first coordinate system of the first point cloud and a second coordinate system of the second point cloud to obtain an initial point cloud comprising a plurality of first coordinate points and a plurality of second coordinate points, wherein the first coordinate points are coordinates of the corresponding first point cloud in the second coordinate system, and the second coordinate points are coordinates of the corresponding second point cloud in the second coordinate system;
for any first coordinate point, determining a first connecting line of the any first coordinate point and a phase center of a laser radar for acquiring the first point cloud;
and determining coordinate points to be removed in a plurality of second coordinate points included in the initial point cloud to obtain a target point cloud, wherein the coordinate points to be removed are the second coordinate points with the distance from the first connecting line meeting the removal condition.
2. The method of claim 1, wherein the determining a coordinate point to be removed from a plurality of second coordinate points included in the initial point cloud comprises:
acquiring a reference point set corresponding to any first coordinate point, wherein the reference point set corresponding to any first coordinate point comprises a plurality of reference points positioned on a first connecting line corresponding to any first coordinate point;
determining any second coordinate point as a candidate coordinate point based on there being, in the reference point sets corresponding to the plurality of first coordinate points, a reference point whose distance from the any second coordinate point is smaller than a distance threshold;
and determining the distance between the candidate coordinate point and a first connecting line corresponding to the candidate coordinate point, and determining the candidate coordinate point as the coordinate point to be removed based on the fact that the distance meets a removal condition.
3. The method of claim 1, wherein the determining a coordinate point to be removed from a plurality of second coordinate points included in the initial point cloud comprises:
determining a curvature of a candidate coordinate point of the plurality of second coordinate points, the candidate coordinate point being determined based on the first connection line;
calculating a first distance between the candidate coordinate point and a first connecting line corresponding to the candidate coordinate point based on the curvature of the candidate coordinate point being greater than a first threshold value; determining that the first distance meets the removal condition based on the first distance being smaller than a second threshold value, and determining the candidate coordinate point as a coordinate point to be removed;
Or, based on the curvature of the candidate coordinate point being smaller than a third threshold value, calculating a second distance between a plane where the candidate coordinate point is located and a first coordinate point corresponding to the candidate coordinate point; and determining that the second distance meets the removal condition based on the fact that the second distance is larger than a fourth threshold value, and determining the candidate coordinate point as the coordinate point to be removed.
4. The method of claim 2, wherein the determining any second coordinate point as a candidate coordinate point based on there being, in the reference point sets corresponding to the plurality of first coordinate points, a reference point whose distance from the any second coordinate point is smaller than a distance threshold comprises:
searching a plurality of reference points included in a reference point set corresponding to the plurality of first coordinate points for a reference point closest to any one of the second coordinate points as a target reference point;
and determining any second coordinate point as the candidate coordinate point based on the fact that the distance between the any second coordinate point and the target reference point corresponding to the any second coordinate point is smaller than the distance threshold value.
5. The method of any of claims 1-4, wherein obtaining the first point cloud and the second point cloud of the target area comprises:
acquiring a third point cloud and a fourth point cloud of the target area, wherein the acquisition time of the third point cloud is later than that of the fourth point cloud;
and carrying out downsampling treatment on the third point cloud according to the size of the voxels to obtain the first point cloud with uniform distribution, and carrying out downsampling treatment on the fourth point cloud according to the size of the voxels to obtain the second point cloud with uniform distribution.
6. The method of any of claims 1-4, wherein, before adding the first point cloud to the second point cloud based on a pose transformation relationship between a first coordinate system of the first point cloud and a second coordinate system of the second point cloud, the method further comprises:
for any first initial coordinate point of the first point cloud in the first coordinate system, searching, among the second initial coordinate points included in the second point cloud, for the second initial coordinate point corresponding to the first initial coordinate point, and determining the pose transformation relationship between the first coordinate system and the second coordinate system based on a plurality of first initial coordinate points and the second initial coordinate point corresponding to each of the plurality of first initial coordinate points;
or, obtaining the pose transformation relationship between the first coordinate system and the second coordinate system from a pose acquisition device, the pose acquisition device obtaining the pose transformation relationship through measurement.
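The correspondence-based branch of claim 6 — matched point pairs determining a pose transformation — is commonly solved in closed form with the SVD-based Kabsch method. The sketch below is one such solution under the assumption of known one-to-one correspondences; the patent does not specify which solver is used.

```python
import numpy as np

def estimate_pose(first_pts, second_pts):
    """Estimate the rigid transform (R, t) mapping the first coordinate
    system into the second from matched pairs first_pts[i] <-> second_pts[i],
    using the SVD-based Kabsch solution."""
    c1, c2 = first_pts.mean(axis=0), second_pts.mean(axis=0)
    H = (first_pts - c1).T @ (second_pts - c2)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c2 - R @ c1
    return R, t
```

With exact correspondences the solver recovers the rotation and translation exactly; in practice the correspondence search and this solver would be iterated, as in ICP.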
7. The method of any one of claims 1-4, wherein after the target point cloud is obtained, the method further comprises:
acquiring a fifth point cloud of the target area, wherein the acquisition pose of the fifth point cloud is different from that of the first point cloud;
and updating the target point cloud based on the fifth point cloud.
8. A point cloud processing apparatus, the apparatus comprising:
the acquisition module is configured to acquire a first point cloud and a second point cloud of a target area, wherein the acquisition time of the first point cloud is later than that of the second point cloud;
the adding module is configured to add the first point cloud to the second point cloud based on a pose transformation relationship between a first coordinate system of the first point cloud and a second coordinate system of the second point cloud, to obtain an initial point cloud comprising a plurality of first coordinate points and a plurality of second coordinate points, wherein the first coordinate points are coordinates of points of the first point cloud in the second coordinate system, and the second coordinate points are coordinates of points of the second point cloud in the second coordinate system;
the determining module is configured to determine, for any first coordinate point, a first connecting line between the first coordinate point and the phase center of the lidar that collects the first point cloud;
and the removing module is configured to determine coordinate points to be removed among the plurality of second coordinate points included in the initial point cloud to obtain a target point cloud, wherein the coordinate points to be removed are second coordinate points whose distance from the first connecting line satisfies a removal condition.
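The determining and removing modules together express the core idea: old points lying near the line of sight between the lidar phase center and a newly observed point are likely stale (e.g. a surface the new scan saw through) and can be removed. A hedged numpy sketch, with the removal condition simplified to a fixed perpendicular-distance threshold — the patent's actual condition may differ:

```python
import numpy as np

def point_to_line_distance(points, origin, target):
    """Perpendicular distance from each point to the line through the lidar
    phase center (`origin`) and a first coordinate point (`target`)."""
    d = target - origin
    d = d / np.linalg.norm(d)
    v = points - origin
    proj = (v @ d)[:, None] * d        # component along the line direction
    return np.linalg.norm(v - proj, axis=1)

def remove_near_line(second_points, first_point, lidar_center, threshold):
    """Drop second coordinate points whose distance to the first connecting
    line is below the threshold (a simplified removal condition)."""
    dists = point_to_line_distance(second_points, lidar_center, first_point)
    return second_points[dists >= threshold]
```

A point 5 cm off the connecting line is removed under a 10 cm threshold, while a point 2 m away from the line is kept.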
9. A computer device, comprising a processor and a memory, wherein at least one computer program is stored in the memory, and the at least one computer program is loaded and executed by the processor, so that the computer device implements the point cloud processing method according to any one of claims 1 to 7.
10. A computer-readable storage medium, wherein at least one computer program is stored in the computer-readable storage medium, and the at least one computer program is loaded and executed by a processor, so that a computer implements the point cloud processing method according to any one of claims 1 to 7.
CN202210949687.8A 2022-08-09 2022-08-09 Point cloud processing method, device, equipment and storage medium Pending CN117635786A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210949687.8A CN117635786A (en) 2022-08-09 2022-08-09 Point cloud processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117635786A true CN117635786A (en) 2024-03-01

Family

ID=90029064

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210949687.8A Pending CN117635786A (en) 2022-08-09 2022-08-09 Point cloud processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117635786A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200074652A1 (en) * 2018-08-30 2020-03-05 Baidu Online Network Technology (Beijing) Co., Ltd. Method for generating simulated point cloud data, device, and storage medium
US20210270958A1 (en) * 2021-05-20 2021-09-02 Beijing Baidu Netcom Science And Technology Co., Ltd. Radar point cloud data processing method and device, apparatus, and storage medium
CN113362445A (en) * 2021-05-25 2021-09-07 上海奥视达智能科技有限公司 Method and device for reconstructing object based on point cloud data
CN113776544A (en) * 2020-06-10 2021-12-10 杭州海康威视数字技术股份有限公司 Point cloud map updating method and device, electronic equipment and positioning system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIE Shichao: "Research on Point Cloud Map Construction and Updating Methods for Autonomous Driving with Multi-Sensor Fusion", China Master's Theses Full-text Database, 15 February 2021 (2021-02-15) *

Similar Documents

Publication Publication Date Title
CN111126182B (en) Lane line detection method, lane line detection device, electronic device, and storage medium
CN110097576B (en) Motion information determination method of image feature point, task execution method and equipment
CN110148178B (en) Camera positioning method, device, terminal and storage medium
CN110095128B (en) Method, device, equipment and storage medium for acquiring missing road information
CN112150560B (en) Method, device and computer storage medium for determining vanishing point
CN111784841B (en) Method, device, electronic equipment and medium for reconstructing three-dimensional image
CN112991439B (en) Method, device, electronic equipment and medium for positioning target object
CN112565806A (en) Virtual gift presenting method, device, computer equipment and medium
CN111984755B (en) Method and device for determining target parking spot, electronic equipment and storage medium
CN111127541A (en) Vehicle size determination method and device and storage medium
CN111369684B (en) Target tracking method, device, equipment and storage medium
CN111754564B (en) Video display method, device, equipment and storage medium
CN113592874B (en) Image display method, device and computer equipment
CN114384466A (en) Sound source direction determining method, sound source direction determining device, electronic equipment and storage medium
CN112243083B (en) Snapshot method and device and computer storage medium
CN112329909B (en) Method, apparatus and storage medium for generating neural network model
CN117635786A (en) Point cloud processing method, device, equipment and storage medium
CN115545592A (en) Display positioning method, device, equipment and storage medium
CN112717393A (en) Virtual object display method, device, equipment and storage medium in virtual scene
CN111402873A (en) Voice signal processing method, device, equipment and storage medium
CN111523876A (en) Payment mode display method, device and system and storage medium
CN114093020A (en) Motion capture method, motion capture device, electronic device and storage medium
CN113409235B (en) Vanishing point estimation method and apparatus
CN117911482B (en) Image processing method and device
CN116069051B (en) Unmanned aerial vehicle control method, device, equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination