CN109493407B - Method and device for realizing laser point cloud densification and computer equipment

Info

Publication number
CN109493407B
Authority
CN
China
Prior art keywords
view
point cloud
target scene
laser
trained
Prior art date
Legal status
Active
Application number
CN201811374889.4A
Other languages
Chinese (zh)
Other versions
CN109493407A (en)
Inventor
陈仁
孙银健
黄天
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201811374889.4A
Publication of CN109493407A
Application granted
Publication of CN109493407B

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 15/00 3D [Three Dimensional] image rendering
                    • G06T 15/005 General purpose rendering architectures
                • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
                    • G06T 17/05 Geographic models
                • G06T 2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T 2207/10 Image acquisition modality
                        • G06T 2207/10028 Range image; Depth image; 3D point clouds
                        • G06T 2207/10032 Satellite or aerial image; Remote sensing
                        • G06T 2207/10044 Radar image
                    • G06T 2207/20 Special algorithmic details
                        • G06T 2207/20084 Artificial neural networks [ANN]
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N 3/00 Computing arrangements based on biological models
                    • G06N 3/02 Neural networks
                        • G06N 3/04 Architecture, e.g. interconnection topology
                            • G06N 3/045 Combinations of networks

Abstract

The invention discloses a method, an apparatus, and a computer device for realizing laser point cloud densification. The method comprises: acquiring an original point cloud of a target scene; projecting the original point cloud onto a cylindrical surface according to a front view angle to generate a first front view, the front view angle being related to the azimuth angle at which the laser radar collects the original point cloud; mapping the first front view to a second front view based on a mapping relationship between front views of different resolutions constructed by a deep learning model, the resolution of the second front view being higher than that of the first front view; and projecting the second front view into the coordinate system of the original point cloud to obtain a dense point cloud of the target scene. The method, apparatus, and computer device solve the problem in the prior art that the densification effect of the laser point cloud is relatively poor.

Description

Method and device for realizing laser point cloud densification and computer equipment
Technical Field
The invention relates to the technical field of computers, in particular to a method and a device for realizing laser point cloud densification and computer equipment.
Background
A high-precision map is a map used for driving assistance, semi-autonomous driving, or unmanned driving, and is composed of a series of map elements, for example, curbs, guardrails, and the like. In the generation of a high-precision map, map elements are first extracted from the laser point cloud, and the extracted map elements are then edited manually to generate the high-precision map.
It follows that the extraction of map elements depends on the laser point cloud: if the laser point cloud is too sparse, the accuracy of the map elements is low, which ultimately affects the production efficiency of the high-precision map. Existing laser point cloud densification schemes therefore generally adopt an interpolation method to up-sample the laser point cloud and thereby achieve a densification effect.
However, interpolation is limited by the rules on which it depends, for example nearest-neighbor interpolation or bilinear interpolation, so the densification effect of the laser point cloud is relatively poor.
Disclosure of Invention
In order to solve the problem of relatively poor densification effect of laser point clouds in the related art, embodiments of the present invention provide a method, an apparatus, and a computer device for realizing densification of laser point clouds.
The technical scheme adopted by the invention is as follows:
according to a first aspect of the disclosure, a method for realizing laser point cloud densification includes: acquiring an original point cloud of a target scene; projecting the original point cloud to a cylindrical surface according to a front view visual angle to generate a first front view, wherein the front view visual angle is related to an azimuth angle when the laser radar collects the original point cloud; mapping a second front view from the first front view based on a mapping relation between front views with different resolutions constructed by a deep learning model, wherein the resolution of the second front view is higher than that of the first front view; and projecting the second front view to a coordinate system where the original point cloud is located to obtain dense point cloud of the target scene.
According to a second aspect of the disclosure, an apparatus for realizing laser point cloud densification includes: the original point cloud obtaining module is used for obtaining an original point cloud of a target scene; the front view acquisition module is used for projecting the original point cloud to a cylindrical surface according to a front view visual angle to generate a first front view, wherein the front view visual angle is related to an azimuth angle when the laser radar collects the original point cloud; the front view mapping module is used for mapping a first front view to obtain a second front view based on a mapping relation between front views with different resolutions, which is constructed by a deep learning model, and the resolution of the second front view is higher than that of the first front view; and the dense point cloud acquisition module is used for projecting the second front view to the coordinate system where the original point cloud is located to obtain the dense point cloud of the target scene.
According to a third aspect of the disclosure, a computer device comprises a processor and a memory, the memory having stored thereon computer readable instructions which, when executed by the processor, implement the method of laser point cloud densification as described above.
According to a fourth aspect of the disclosure, a computer-readable storage medium has stored thereon a computer program which, when being executed by a processor, implements the method of laser point cloud densification as described above.
In the technical scheme, the original point cloud of the target scene is projected onto a cylindrical surface according to the front view angle to generate a first front view; a second front view is then obtained from the first front view according to the mapping relationship between front views of different resolutions constructed by the deep learning model; and the second front view is projected into the coordinate system of the original point cloud to obtain a dense point cloud of the target scene. That is, based on the deep learning model, a mapping relationship between front views of different resolutions is constructed and applied to the first front view of the actual scene to obtain a second front view whose resolution is higher than that of the first front view, from which a dense point cloud of the target scene is formed. The densification of the original point cloud is thus realized without being limited by the rules on which interpolation depends, solving the problem in the prior art that the densification effect of the laser point cloud is relatively poor.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a schematic illustration of an implementation environment in accordance with the present invention.
Fig. 2 is a block diagram illustrating a hardware architecture of a server according to an example embodiment.
FIG. 3 is a flow chart illustrating a method of achieving laser point cloud densification in accordance with an exemplary embodiment.
Fig. 4 is a schematic diagram of a laser point cloud collected by the lidar according to the corresponding embodiment of fig. 3.
Fig. 5 is a simplified schematic of fig. 4.
FIG. 6 is a flow chart of one embodiment of step 330 of the corresponding embodiment of FIG. 3.
Fig. 7 is a diagram illustrating a specific implementation of a front view generation process according to the corresponding embodiment in fig. 6.
Fig. 8 is a schematic diagram of a distance view, a height view, and an intensity view of the synthesized to-be-projected view according to the embodiment of fig. 6.
Fig. 9 is a schematic illustration of a first lower resolution front view according to the corresponding embodiment of fig. 6.
FIG. 10 is a flow chart of one embodiment of step 350 of the corresponding embodiment of FIG. 3.
Fig. 11 is a schematic diagram of a model structure of a convolutional neural network model in the corresponding embodiment of fig. 10.
Fig. 12 is a second, higher resolution front view schematic diagram of the embodiment of fig. 10.
FIG. 13 is a flow chart illustrating another method of achieving laser point cloud densification in accordance with an exemplary embodiment.
FIG. 14 is a flow chart of one embodiment of step 410 of the corresponding embodiment of FIG. 13.
FIG. 15 is a flow chart of one embodiment of step 415 in the corresponding embodiment of FIG. 14.
FIG. 16 is a comparative schematic diagram illustrating a reference target based registration process according to an exemplary embodiment.
FIG. 17 is a flowchart of one embodiment of step 4153 in the corresponding embodiment of FIG. 15.
FIG. 18 is a flow chart illustrating another method of achieving laser point cloud densification in accordance with an exemplary embodiment.
Fig. 19 is a block diagram illustrating an apparatus for implementing laser point cloud densification according to an example embodiment.
FIG. 20 is a block diagram illustrating a computer device according to an example embodiment.
While specific embodiments of the invention have been shown by way of example in the drawings and will be described in detail hereinafter, such drawings and description are not intended to limit the scope of the inventive concepts in any way, but rather to explain the inventive concepts to those skilled in the art by reference to the particular embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
As mentioned above, existing laser point cloud densification schemes generally adopt an interpolation method to up-sample the laser point cloud and thereby achieve a densification effect. However, interpolation is limited by the rules on which it depends, for example nearest-neighbor interpolation or bilinear interpolation, so the densification effect of the laser point cloud is relatively poor.
To overcome the defects of such interpolation, edge-preserving and region-based interpolation methods have further been introduced:
The edge-based interpolation method enhances the edges of the laser point cloud to a certain extent, so that the visual effect of the laser point cloud is better and the densification effect is improved.
The region-based interpolation method first divides the original low-resolution laser point cloud into different regions, then maps interpolation points back to the original low-resolution laser point cloud, determines the region to which each interpolation point belongs, designs different interpolation formulas according to the neighborhood pixels of the interpolation point, and finally calculates the values of the interpolation points in their respective regions according to those formulas, thereby improving the densification effect of the laser point cloud.
As can be seen from the above, existing laser point cloud densification schemes are still limited by the rules on which interpolation depends, so the densification effect of the laser point cloud is inevitably relatively poor, the accuracy of the map elements is low, and the production efficiency of the high-precision map is ultimately affected.
Therefore, the invention provides a method for realizing laser point cloud densification that avoids being limited by the rules on which interpolation depends and greatly improves the densification effect of the laser point cloud, together with a corresponding apparatus for realizing laser point cloud densification. The apparatus can be deployed in a computer device with a von Neumann architecture, for example a server, so as to carry out the method.
Fig. 1 is a schematic diagram of an implementation environment related to the method for realizing laser point cloud densification. The implementation environment includes a user terminal 110 and a server side 130.
The user terminal 110 is deployed in a vehicle, an aircraft, or a robot, and may be a desktop computer, a laptop computer, a tablet computer, a smartphone, a palmtop computer, a personal digital assistant, a navigator, a smart computer, or the like, which is not limited herein.
The user terminal 110 and the server side 130 establish a network connection in advance through a wireless or wired network, and data transmission between the user terminal 110 and the server side 130 is realized through this network connection. For example, the transmitted data includes a high-precision map of the target scene, etc.
Here, the server side 130 may be a single server, a server cluster composed of a plurality of servers, or a cloud computing center composed of a plurality of servers, as shown in fig. 1. A server is an electronic device that provides background services for users; for example, the background services include a laser point cloud densification service, a map element extraction service, a high-precision map generation service, and the like.
For the target scene, after the server 130 obtains the original point cloud, the original point cloud may be densified to obtain the dense point cloud of the target scene, and then the map elements are extracted based on the dense point cloud of the target scene.
After the map elements are extracted, the server 130 may display the extracted map elements through a configured display screen to generate a high-precision map of the target scene under the control of the editor.
Of course, the densification of the laser point cloud, the extraction of the map elements, and the generation of the high-precision map may be performed in the same server or may be performed in different servers according to actual operation requirements, for example, the densification of the laser point cloud is performed in the servers 131 and 132, the extraction of the map elements is performed in the server 133, and the generation of the high-precision map is performed in the server 134.
The high-precision map of the target scene may be further stored, for example, to the server 130, or may be stored in other cache spaces, which is not limited herein.
Then, for the user end 110 using the high-precision map, for example, when the unmanned vehicle intends to pass through the target scene, the user end 110 carried by the unmanned vehicle will correspondingly obtain the high-precision map of the target scene, so as to assist the unmanned vehicle to safely pass through the target scene.
It should be noted that the laser point cloud acquired by the server 130 may be acquired in advance by another acquisition device and stored in the server 130, or may be acquired in real time by the client 110 and uploaded to the server 130 when a vehicle, an airplane, or a robot carrying the client 110 passes through a target scene, which is not limited herein.
Fig. 2 is a block diagram illustrating a hardware architecture of a server according to an example embodiment. Such a server is suitable for use in the server side 130 of the implementation environment shown in fig. 1.
It should be noted that this server is only an example adapted to the present invention and should not be considered as providing any limitation to the scope of use of the present invention. Nor should such a server be construed as requiring reliance on, or necessity of, one or more components of the exemplary server 200 shown in fig. 2.
The hardware structure of the server 200 may be greatly different due to the difference of configuration or performance, as shown in fig. 2, the server 200 includes: a power supply 210, an interface 230, at least one memory 250, at least one Central Processing Unit (CPU) 270, a display screen 280, and an input device 290.
Specifically, power supply 210 is used to provide operating voltages for the various components on server 200.
The interface 230 includes at least one wired or wireless network interface 231, at least one serial-to-parallel conversion interface 233, at least one input/output interface 235, and at least one USB interface 237, etc. for communicating with external devices. For example, to interact with the user terminal 110 in the implementation environment shown in fig. 1.
The storage 250 is used as a carrier for resource storage, and may be a read-only memory, a random access memory, a magnetic disk or an optical disk, etc., and the resources stored thereon include an operating system 251, an application 253, data 255, etc., and the storage manner may be a transient storage or a permanent storage.
The operating system 251 is used for managing and controlling the components and applications 253 on the server 200 to implement the computation and processing of the mass data 255 by the central processor 270, and may be Windows Server™, Mac OS X™, Unix™, Linux, FreeBSD™, or the like.
The application 253 is a computer program that performs at least one specific task on the operating system 251, and may include at least one module (not shown in fig. 2), each of which may contain a series of computer-readable instructions for the server 200. For example, the apparatus for realizing the laser point cloud densification may be regarded as an application 253 deployed in the server 200 to realize the method for realizing the laser point cloud densification.
The data 255 may be a photograph, picture, or laser point cloud, stored in the memory 250.
The central processor 270 may include one or more processors and is configured to communicate with the memory 250 through a communication bus to read the computer-readable instructions stored in the memory 250 and thereby implement operations and processing on the mass data 255 in the memory 250. For example, the method for realizing the laser point cloud densification is accomplished by the central processor 270 reading a series of computer-readable instructions stored in the memory 250.
The display screen 280 may be a liquid crystal display, an electronic ink display, or the like, and the display screen 280 provides an output interface between the electronic device 200 and the user so that the output content formed by any one or combination of text, pictures, or videos can be displayed to the user through the output interface. For example, map elements available for editing are displayed in the target scene map.
The input component 290 may be a touch layer covered on the display screen 280, a key, a trackball or a touch pad arranged on the housing of the electronic device 200, or an external keyboard, a mouse, a touch pad, etc. for receiving various control instructions input by the user, so as to generate a high-precision map of the target scene under the control of an editor. For example, an edit instruction for a map element in the target scene map.
Furthermore, the present invention can be implemented by hardware circuits or by a combination of hardware circuits and software, and thus, the implementation of the present invention is not limited to any specific hardware circuits, software, or a combination of both.
Referring to fig. 3, in an exemplary embodiment, a method for implementing laser point cloud densification is applied to a server side of the implementation environment shown in fig. 1, and the structure of the server side may be as shown in fig. 2.
The method for realizing the laser point cloud densification can be executed by a server side, and can also be understood as being executed by a device which is deployed in the server side and realizes the laser point cloud densification. In the following method embodiments, for convenience of description, the main body of execution of each step is described as an apparatus for realizing the laser point cloud densification, but the method is not limited thereto.
The method for realizing the laser point cloud densification can comprise the following steps:
step 310, obtaining an original point cloud of a target scene.
First, it is explained that the laser point cloud is generated by scanning an entity in a target scene with laser, and is substantially a dot matrix image, that is, it is composed of a plurality of sampling points corresponding to the entity in the target scene.
The target scene may be a road on which a vehicle can travel and a surrounding environment thereof, may also be inside a building on which a robot can travel, or may be a channel on which an unmanned aerial vehicle flies at low altitude and a surrounding environment thereof, which is not limited in this embodiment.
It should be noted that the high-precision map of the target scene provided by the present embodiment may be applicable to different application scenes according to different target scenes, for example, the road and its surrounding environment are applicable to the driving scene of the auxiliary vehicle, the interior of the building is applicable to the traveling scene of the auxiliary robot, and the channel and its surrounding environment are applicable to the low-altitude flight scene of the auxiliary unmanned aerial vehicle.
It should be understood that, in a laser point cloud represented as a dot matrix image, one frame of laser point cloud contains up to 120,000 pixel points. If multiple frames of laser point clouds were collected directly to generate a dense point cloud and map elements were extracted from it, the real-time requirement would be difficult to meet. Here, the original point cloud refers to a sparse laser point cloud, e.g., a single frame of laser point cloud.
Regarding the acquisition of the original point cloud, the original point cloud can be derived from a pre-stored laser point cloud, and can also be derived from a laser point cloud acquired in real time, and then is acquired in a local reading or network downloading manner.
In other words, for the device for realizing the laser point cloud densification, the laser point cloud collected in real time may be obtained to facilitate the densification of the laser point cloud in real time, and the laser point cloud collected in a historical time period may also be obtained to facilitate the densification of the laser point cloud when the processing task is few, which is not specifically limited in this embodiment.
It is noted that the laser point cloud is generated and collected by scanning laser emitted by the laser radar, and in the collection process, the laser radar may be pre-deployed in the collection device, so that the collection device collects the laser point cloud for the target scene. For example, the collecting device is a vehicle, the laser radar is deployed in the vehicle as a vehicle-mounted component in advance, and when the vehicle runs through a target scene, laser point clouds of the target scene are correspondingly collected.
Step 330, projecting the original point cloud to a cylindrical surface according to a view angle of a front view to generate a first front view.
As mentioned above, one frame of laser point cloud contains up to 120,000 pixel points. To improve the efficiency of laser point cloud densification, in this embodiment the densification relies on the first front view.
Firstly, it can be understood that, based on the hardware characteristics of the laser radar, the laser radar rotates through 360° when collecting the laser point cloud and emits laser once per rotation step, so that a frame of laser point cloud is formed by sweeping the target scene through a designated angle; that is, the azimuth angle range over which the laser radar collects the laser point cloud is 0°-360°. The designated angle can be set flexibly according to the actual requirements of the application scene; for example, if the designated angle is set to 360°, the laser radar needs to sweep the target scene through 360°.
Based on this collection method, the front view angle is related to the azimuth angle at which the laser radar collects the laser point cloud. It can also be understood that, when the laser radar collects the laser point cloud at a given azimuth, the entities it observes forward in the target scene lie within a certain angle range, and this angle range is regarded as the front view angle of the laser radar. The front view angle, therefore, essentially indicates the angle range over which the laser radar can look forward at the entities in the target scene.
Taking a road on which a vehicle can travel and its surrounding environment as an example of a target scene, the target scene is swept by a laser radar 360 ° to form a frame of laser point cloud, as shown in fig. 4, entities in the target scene are represented in the laser point cloud, and the entities include a person 401, a rider 402, a vehicle 403, a vehicle 404, a vehicle 405, and the like.
As shown in fig. 5, assuming that the azimuth angle of the laser point cloud collected by the laser radar 400 is 0 °, the rider 402 and the vehicle 403 in the target scene can be observed at the same time, and the person 401, the vehicle 404 and the vehicle 405 in the target scene cannot be observed, so that the front view angle at this time can be determined to be 407 for the azimuth angle of 0 °.
Therefore, different view angles of the front view can be determined and obtained by taking the laser radar as the center and along with the gradual change of the azimuth angle when the laser point cloud is collected.
It is worth mentioning that the laser radar emits laser once per rotation step and then scans the entities that can be observed within the front view angle in the target scene. If the rotation angle is too small, there is a probability of repeated scanning; if it is too large, there is a problem of missed scanning. Therefore, it is preferable to set the rotation angle to the angle range indicated by the front view angle, which not only improves the collection efficiency of the laser point cloud but also helps prolong the service life of the laser radar.
For example, if at an azimuth angle of 0° the angle range indicated by the front view angle 407 is 60°, the rotation angle of the lidar may be set to 60°; the lidar then only needs to rotate 6 times to sweep 360° around the target scene, and the remaining azimuth angles at which it collects the laser point cloud are accordingly 60°, 120°, 180°, 240°, and 300°. That is, the azimuth angle substantially indicates the direction from which the lidar collects the laser point cloud.
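As a concrete illustration of this relationship, the following sketch assigns each point of a point cloud to the azimuth sector, and hence the front view angle, from which it was observed; the function name and the 60° sector width are assumptions taken from the example above, not values fixed by the patent.

```python
import numpy as np

def azimuth_sector(points, fov_deg=60.0):
    """Assign each point of an (N, 3) point cloud (columns x, y, z) to the
    azimuth sector it was observed from, assuming each front view angle
    spans `fov_deg` degrees (60 degrees in the example above)."""
    azimuth = np.degrees(np.arctan2(points[:, 1], points[:, 0])) % 360.0
    return (azimuth // fov_deg).astype(int)   # sector index 0..5 for 60-degree sectors
```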
Then, after determining the view angle of the front view according to the azimuth angle for the obtained original point cloud, the original point cloud may be projected to a cylindrical surface to generate a first front view based on the determined view angle of the front view. The cylindrical surface can be a cylindrical surface or an elliptic cylindrical surface.
Because the original point cloud is a three-dimensional image and the first front view is a two-dimensional image, the projection in the embodiment is substantially to planarize a three-dimensional space structure of an entity in a target scene described by the original point cloud, and further to express the entity in the target scene through a two-dimensional image form, namely the first front view, so that the complexity of image processing is reduced, and the efficiency of laser point cloud densification is improved.
And 350, mapping the first front view to obtain a second front view based on the mapping relation between the front views with different resolutions constructed by the deep learning model.
The mapping relation is constructed by performing model training on the deep learning model based on a large number of training samples and further constructing the deep learning model through the model training, wherein the training samples comprise a front view of the original point cloud to be trained and a front view of the dense point cloud to be trained, and the resolution of the front view of the original point cloud to be trained is lower than that of the front view of the dense point cloud to be trained.
That is, the model training substantially takes the front view of the original point cloud to be trained in the training sample as the training input, and takes the front view of the dense point cloud to be trained in the training sample as the training true value, i.e., the training output, so that the mapping relationship between the front views with different resolutions can be constructed and obtained based on the deep learning model through a large number of training samples.
Then, based on the mapping relationship between the front views with different resolutions constructed by the deep learning model, a second front view can be mapped from the first front view, and the resolution of the second front view is higher than that of the first front view.
Step 370, projecting the second front view to the coordinate system where the original point cloud is located, to obtain a dense point cloud of the target scene.
Specifically, the second front view is divided into a distance view, a height view, and an intensity view in an image channel coding manner. Wherein the image channels include a red channel R representing red, a green channel G representing green, and a blue channel B representing blue.
And respectively projecting the distance view, the height view and the intensity view to a coordinate system where the original point clouds are located according to the view angle of the front view to obtain the dense point clouds of the target scene. The coordinate system of the original point cloud refers to a geographical coordinate system of a real world observed from an azimuth angle set by a laser radar for collecting the original point cloud.
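As an illustration of this back-projection, a minimal sketch is given below. The channel layout (height in R, distance in G, intensity in B) follows the encoding example given later in this description, while the grid geometry, the interpretation of the distance channel as horizontal range, and the function name are assumptions of the sketch rather than formulas stated in the patent.

```python
import numpy as np

def front_view_to_points(front_view, fov_deg=360.0):
    """Split an H x W x 3 second front view into height (R), distance (G) and
    intensity (B) channels and back-project every non-empty pixel into the
    coordinate system of the original point cloud, treating each column as
    one azimuth over a `fov_deg` sweep."""
    height_map = front_view[..., 0]            # R channel: height view
    dist_map = front_view[..., 1]              # G channel: distance view
    intensity = front_view[..., 2]             # B channel: intensity view
    h, w = dist_map.shape
    azimuth = np.radians(np.linspace(0.0, fov_deg, w, endpoint=False))
    az = np.broadcast_to(azimuth, (h, w))
    x = dist_map * np.cos(az)                  # horizontal range back to x, y
    y = dist_map * np.sin(az)
    z = height_map
    valid = dist_map > 0                       # skip pixels with no measurement
    return np.stack([x[valid], y[valid], z[valid], intensity[valid]], axis=1)
```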
Through the process, the density of the original point cloud is realized based on the mapping relation constructed by the deep learning model, the limitation to the rule depended by the interpolation value method is avoided, and the density effect of the laser point cloud is effectively improved.
In addition, the map element extraction depends on the dense point cloud of the target scene, so that the data input scale of entity perception in the target scene is favorably improved, namely, the environment perception information is enriched, the environment perception difficulty is effectively reduced, the accuracy of the map element extraction is favorably improved, and the production efficiency of the high-precision map is fully ensured.
Referring to fig. 6, in an exemplary embodiment, step 330 may include the following steps:
and 331, traversing an azimuth angle when the laser radar collects the original point cloud, and determining the view angle of the front view according to the traversed azimuth angle.
As mentioned above, with the lidar as the center, different view angles of the front view can be determined as the azimuth angle changes gradually when the original point cloud is collected.
As shown in fig. 5, when the traversed azimuth angle is 0 °, the determined view angle of the front view is 407. When the traversed azimuth is 270 °, the determined view angle of the front view is 408.
Step 333, acquiring a to-be-projected view of the original point cloud in the view angle of the front view.
Specifically, first, a distance view, a height view, and an intensity view of the raw point cloud from the view of the front view are acquired.
As explained in conjunction with fig. 4, for the view angle 407 of the front view, a distance view, a height view and an intensity view at the view angle 407 of the front view are obtained based on a portion of the original point cloud 409.
Similarly, for the view perspective 408, a distance view, a height view, and an intensity view at the view perspective 408 are obtained based on portions of the raw point cloud 410.
Secondly, synthesizing the distance view, the height view and the intensity view into the view to be projected according to an image channel coding mode.
Wherein the image channels include a red channel R representing red, a green channel G representing green, and a blue channel B representing blue.
For example, the height view is input into the red channel R, the distance view is input into the green channel G, and the intensity view is input into the blue channel B, and then synthesized into the view to be projected.
Step 335, projecting the view to be projected to a local area corresponding to the view angle of the front view in the cylindrical surface.
Referring to fig. 4 and 7, taking a circular cylinder as the cylindrical surface and the traversed azimuth angle of 0° as an example: based on the part 409 of the original point cloud, a distance view, a height view, and an intensity view are obtained from the corresponding front view angle 407, as shown in figs. 8(a)-8(c), and synthesized into a view to be projected 701. Assuming that the local area of the cylinder corresponding to this front view angle is 702, the view to be projected 701 is projected onto the local area 702 to obtain the projection on the cylinder, shown as 703 in fig. 7.
And 337, unfolding the cylindrical surface to obtain the first front view after the traversal is completed.
As mentioned above, to form the laser point cloud of the target scene, the laser radar needs to scan the target scene by rotating the designated angle, and accordingly, the azimuth angle changes gradually, so that the traversal of the azimuth angle is completed, and it can also be understood that the laser radar has scanned the designated angle on the target scene. For example, the specified angle is 360 °.
At this time, the cylindrical surface completes the projection of the partial area corresponding to each front view angle, and the cylindrical surface is expanded into a plane, that is, the height of the cylindrical surface is taken as the width of the plane, and the perimeter of the cylindrical surface is taken as the length of the plane, so as to obtain a first front view, as shown in fig. 9. As is clear from fig. 9, the projection of the synthesized view to be projected 701 on the cylinder based on the front view perspective 407 is only a part of the first front view, which substantially also includes the projection of the synthesized view to be projected on the cylinder based on the remaining front view perspectives.
Through the process, the generation of the front view is realized, so that the representation of the entity in the target scene in the form of the two-dimensional image is realized, the complexity of image processing is reduced, and the efficiency of laser point cloud densification is improved.
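To make the generation process concrete, the sketch below builds a first front view from a full 360° sweep. The grid size, the height range, and the nearest-cell assignment are illustrative assumptions and are not values fixed by the patent. In the resulting array the azimuth axis plays the role of the cylinder perimeter (the length of the unfolded plane) and the vertical axis the cylinder height (its width), matching the unfolding described above.

```python
import numpy as np

def build_front_view(points, intensities, width=2048, height=64,
                     z_min=-3.0, z_max=5.0):
    """Sketch of the cylindrical front-view generation described above; the
    grid size, the height range and the nearest-cell assignment are
    illustrative assumptions, not values fixed by the patent."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    azimuth = np.degrees(np.arctan2(y, x)) % 360.0           # column <- azimuth
    dist = np.sqrt(x ** 2 + y ** 2)                          # horizontal range
    cols = np.clip((azimuth / 360.0 * width).astype(int), 0, width - 1)
    rows = np.clip(((z - z_min) / (z_max - z_min) * height).astype(int),
                   0, height - 1)
    view = np.zeros((height, width, 3), dtype=np.float32)
    view[rows, cols, 0] = z              # R channel: height view
    view[rows, cols, 1] = dist           # G channel: distance view
    view[rows, cols, 2] = intensities    # B channel: intensity view
    return view
```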
Referring to fig. 10, in an exemplary embodiment, the deep learning model is a convolutional neural network model.
In this embodiment, the convolutional neural network model has a model structure including an input layer, a hidden layer, and an output layer. Wherein, the hidden layer further comprises a convolution layer and a full connection layer. The convolution layer is used for extracting image features, and the full-connection layer is used for fully connecting the image features extracted by the convolution layer.
Optionally, the hidden layer may further include an active layer and a pooling layer. The activation layer is used for improving the convergence rate of the convolutional neural network model, and the pooling layer is used for reducing the complexity of the fully-connected image features.
Optionally, the convolutional layer is configured with a plurality of channels, each channel being available for input of images having different channel information in the same input image.
For example, if the convolutional layer is configured with three channels A1, A2, and A3, a color image can be fed into these three channels according to the image channel coding mode: the portion of the color image corresponding to the red channel R is input to channel A1, the portion corresponding to the green channel G is input to channel A2, and the portion corresponding to the blue channel B is input to channel A3.
Accordingly, step 350 may include the steps of:
and 351, inputting the first front view into the convolutional neural network model, and extracting to obtain multi-channel image features.
And 353, fully connecting the multi-channel image features to obtain global features.
Step 355, performing feature mapping on the global features based on the mapping relationship constructed by the convolutional neural network model to obtain the second front view.
In one embodiment, as shown in fig. 11, the convolutional neural network model, i.e., a multi-channel convolutional neural network model, has a model structure comprising: an input layer 601, convolutional layers 602, a fully connected layer 603, and an output layer 604. Each convolutional layer 602 is configured with a plurality of channels.
The learning process based on the mapping relation is explained by combining the model structure of the convolutional neural network model.
First, a first front view is input from the input layer 601, and feature extraction is performed through a plurality of channels arranged in the convolutional layer 602 to obtain multi-channel image features. Wherein, each channel image feature corresponds to a convolution kernel provided by the convolution layer.
Each channel image feature is also regarded as a local feature, and is used for describing a boundary, a spatial position, and a relative directional relationship of an entity corresponding to the original point cloud in the target scene, for example, the local feature includes a spatial relationship feature, a shape feature, and the like.
Then, the multi-channel image features are output from the convolutional layers 602 to the fully connected layer 603 for full connection, so as to obtain global features describing properties such as the color and texture of the entity corresponding to the original point cloud in the target scene.
After obtaining the global features, a second front view is obtained by global feature learning based on the mapping relationship constructed by the convolutional neural network model and is output through the output layer 604.
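A minimal sketch of such a multi-channel convolutional mapping is shown below. The layer sizes are assumptions, and the fully connected mapping described above is replaced by a sub-pixel (PixelShuffle) layer for brevity, so the sketch should be read as an illustration of the idea rather than as the patented model structure.

```python
import torch
import torch.nn as nn

class FrontViewSuperResolution(nn.Module):
    """Illustrative sketch: multi-channel convolutions extract local features
    from the 3-channel first front view, and a final mapping step produces a
    higher-resolution 3-channel second front view."""

    def __init__(self, scale=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # A sub-pixel layer stands in for the final mapping to higher resolution.
        self.mapping = nn.Sequential(
            nn.Conv2d(64, 3 * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, first_front_view):                       # (N, 3, H, W)
        return self.mapping(self.features(first_front_view))   # (N, 3, sH, sW)
```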
Referring back to fig. 9, the first front view shows significant data loss, as indicated by the white dots "snowflakes" in fig. 9, and referring back to fig. 12, the second front view shows no significant data loss, i.e., the second front view has higher resolution than the first front view, which is beneficial for the densification of the laser point cloud.
Under the cooperation of the embodiment, the feature mapping based on the convolutional neural network model is realized, the limitation to the rule depending on the interpolation method is avoided, the accuracy of the feature mapping is favorably improved, the densification effect of the laser point cloud is further sufficiently ensured, and abundant data support is favorably provided for environmental perception.
Referring to fig. 13, in an exemplary embodiment, the method as described above may further include the steps of:
step 410, acquiring an original point cloud to be trained and a dense point cloud to be trained aiming at the same target scene.
The original point cloud to be trained and the dense point cloud to be trained may be from the same lidar or from different lidar, which is not limited herein.
Specifically, as shown in fig. 14, in an embodiment, the obtaining process may include the following steps:
step 411, acquiring a single frame of laser point cloud of the target scene, taking the single frame of laser point cloud as the original point cloud to be trained, and taking the acquisition time of the original point cloud to be trained as the current time.
And 413, determining adjacent time according to the current time, and acquiring a plurality of frames of laser point clouds acquired at the adjacent time aiming at the target scene.
And 415, overlapping the obtained frames of laser point clouds to obtain the dense point cloud to be trained.
If a plurality of frames of laser point clouds collected at adjacent moments come from the same laser radar, superposition of the plurality of frames of laser point clouds collected at the adjacent moments can be realized through information such as moving speed, geographical position and the like provided by inertial navigation equipment configured in the collecting equipment.
On the contrary, if the frames of laser point clouds collected at adjacent moments come from different laser radars, the superposition can be realized by registering the frames of laser point clouds collected at adjacent moments.
In this process, super-resolution reconstruction is realized: temporal resolution (multiple frames of laser point clouds of the same target scene at different moments) is traded for spatial resolution (the dense point cloud to be trained), so that the laser point clouds used for model training are richer and the mapping relation can be constructed accurately.
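A hedged sketch of this superposition is given below, assuming that each neighbouring frame comes with a 4x4 pose matrix derived from the moving speed and geographical position reported by the inertial navigation equipment; registration between frames from different laser radars is treated separately in the following embodiments.

```python
import numpy as np

def superimpose_frames(frames, poses):
    """Build the dense point cloud to be trained: transform each neighbouring
    frame (N x 3 array) into a common coordinate system with its 4x4 pose
    matrix and concatenate the transformed frames."""
    merged = []
    for pts, pose in zip(frames, poses):
        homogeneous = np.hstack([pts, np.ones((pts.shape[0], 1))])
        merged.append((homogeneous @ pose.T)[:, :3])   # into the common frame
    return np.vstack(merged)
```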
And 430, generating a corresponding front view by projecting the original point cloud to be trained and the dense point cloud to be trained, and taking the front view of the original point cloud to be trained and the front view of the dense point cloud to be trained as the training samples.
Projection, consistent with the generation of the aforementioned first front view, will not be described repeatedly here.
And step 450, guiding the deep learning model to carry out model training according to the training samples, and constructing the mapping relation.
Model training, which is essentially based on a large number of training samples to iteratively optimize the parameters of a deep learning model so that a specified algorithm function constructed from the parameters satisfies a convergence condition.
The deep learning model comprises a convolutional neural network model, a cyclic neural network model, a deep neural network model and the like.
The specified algorithm functions include, but are not limited to, a maximum expectation (EM) function, a loss function (e.g., a softmax-based loss function), and the like.
For example, the parameters of the deep learning model are initialized randomly, and the loss value of the loss function constructed by the randomly initialized parameters is calculated according to the current training sample.
And if the loss value of the loss function does not reach the minimum value, updating the parameters of the deep learning model, and calculating the loss value of the loss function constructed by the updated parameters according to the next training sample.
This iterates in a loop until the loss value of the loss function reaches its minimum, i.e., the loss function is considered to have converged; at this point the deep learning model has converged and meets the preset precision requirement, and the iteration stops.
Otherwise, iteratively updating the parameters of the deep learning model, and iteratively calculating the loss value of the loss function constructed by the updated parameters according to the rest training samples until the loss function is converged.
It is worth mentioning that if the iteration number reaches the iteration threshold before the loss function converges, the iteration is also stopped, so as to ensure the efficiency of the model training.
And when the deep learning model converges and meets the preset precision requirement, the deep learning model is represented to complete model training, and therefore a mapping relation can be established based on the deep learning model which completes model training.
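The sketch below illustrates such a training loop. The optimizer, the L1 loss, the loader yielding (sparse front view, dense front view) pairs, and the simple convergence test are assumptions made for illustration; the patent does not prescribe them.

```python
import torch
import torch.nn as nn

def train_mapping(model, loader, max_iterations=100_000, tolerance=1e-4):
    """Train the front-view mapping: the front view of the original point
    cloud to be trained is the input, the front view of the dense point cloud
    to be trained is the training truth value, and iteration stops on
    convergence or when the iteration threshold is reached."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.L1Loss()            # assumed loss; not fixed by the patent
    step, previous = 0, float("inf")
    while step < max_iterations:
        for sparse_view, dense_view in loader:
            prediction = model(sparse_view)
            loss = criterion(prediction, dense_view)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            step += 1
            if abs(previous - loss.item()) < tolerance or step >= max_iterations:
                return model           # converged or iteration threshold reached
            previous = loss.item()
    return model
```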
After the construction of the mapping relation is completed, the device for realizing the densification of the laser point cloud has the mapping capability of the first front view.
Then, the first front view is input into the deep learning model, so that the front view of the dense point cloud of the target scene can be obtained according to the mapping relation, and further the densification of the original point cloud is realized.
Referring to FIG. 15, in an exemplary embodiment, step 415 may include the steps of:
step 4151, for the same entity in the target scene, segmenting the acquired frames of laser point clouds to obtain corresponding reference targets.
And 4153, registering the acquired frames of laser point clouds according to the reference target obtained by segmentation.
And 4155, overlapping the plurality of frames of laser point clouds after registration to obtain the dense point cloud to be trained.
As described above, the plurality of frames of laser point clouds obtained are substantially a plurality of frames of laser point clouds of the same target scene at different times, and may be derived from the same laser radar or different laser radars. It should be understood that if the laser point clouds are derived from the same laser radar, the frames of laser point clouds necessarily correspond to the same coordinate system, whereas if the laser point clouds are derived from different laser radars, the frames of laser point clouds may not maintain a uniform coordinate system, and the superposition result and thus the accurate construction of the mapping relationship are influenced.
For example, in a target scene, each entity may not be completely stationary, but there is relative motion, for example, a vehicle running on a road, and for this reason, if several frames of laser point clouds collected at different times are directly superimposed on the vehicle running on the road, a running track of the vehicle on the road is formed, as shown in 501 in fig. 16(a), and thus, the densification of the original point clouds fails, that is, the outline of the vehicle cannot be clearly and specifically obtained.
Therefore, in this embodiment, for a plurality of frames of laser point clouds of the same target scene but originating from different laser radars, registration is performed between the plurality of frames of laser point clouds first before the plurality of frames of laser point clouds are superimposed.
The aim of registration is to keep a unified coordinate system among the frames of laser point clouds that target the same scene but originate from different laser radars. Optionally, the registration includes processing methods such as geometric correction, projective transformation, and scale unification, which are not limited in this embodiment.
The reference target is an intersection region in a plurality of frames of laser point clouds of the same target scene but from different laser radars, and is essentially a pixel point set in the laser point clouds, and can also be understood as an image representing the same entity in the same target scene in the plurality of frames of laser point clouds. For example, assuming that the same entity in the target scene is a person, a rider, a vehicle, etc., the reference target is a set of pixel points constituting the person, the rider, the vehicle, etc., in the laser point cloud, that is, an image corresponding to the person, the rider, the vehicle, etc.
Segmentation of the reference target is essentially image segmentation and optionally includes: general segmentation, semantic segmentation, instance segmentation, etc. General segmentation further includes threshold segmentation, region segmentation, edge segmentation, histogram segmentation, etc., which are not specifically limited in this embodiment.
Registration based on the reference target can also be understood as follows: the goal of registration is that the reference targets in the frames of laser point clouds that target the same scene but originate from different laser radars completely overlap, so that the different laser point clouds are unified into the same coordinate system.
Still taking the vehicle running on the road in the target scene as an example for explanation, as shown in 501 in fig. 16(b), the running vehicle is used as a reference target for registration, so that the intersection areas are completely overlapped, that is, the running track of the vehicle on the road is eliminated, and the outline of the vehicle is displayed more clearly and specifically.
In the process, the registration of different laser point clouds is realized, the different laser point clouds are fully ensured to be in the same coordinate system, so that the superposition correctness is ensured, and the construction accuracy of the mapping relation is further ensured.
Referring to FIG. 17, in an exemplary embodiment, step 4153 may include the steps of:
in step 4153a, a projective transformation function is constructed for the acquired frames of laser point clouds.
Step 4153c, estimating parameters of the projective transformation function according to the reference object obtained by the segmentation.
And 4153e, performing projection transformation among the acquired frames of laser point clouds according to the projection transformation function for completing parameter estimation.
In this embodiment, the registration is implemented based on a projection transformation, that is, projection transformation is performed between the reference point cloud and the target point cloud, so that the target point cloud is completely overlapped with an intersection region in the reference point cloud through rotation and translation.
Now, two frames of laser point clouds among the plurality of frames of laser point clouds are respectively used as a reference point cloud and a target point cloud, and a registration process is described as follows.
It is noted that the coordinate system of the reference point cloud is a geographical coordinate system of the real world observed from the azimuth angle set by the laser radar for collecting the reference point cloud, and the coordinate system of the target point cloud is a geographical coordinate system of the real world observed from the azimuth angle set by the laser radar for collecting the target point cloud, so that the coordinate system of the reference point cloud and the coordinate system of the target point cloud may have a difference and further registration is required.
Specifically, the projective transformation function constructed for the reference point cloud and the target point cloud is as shown in the calculation formula (1):
$$Z_c\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}f_x & 0 & u_0\\ 0 & f_y & v_0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}R & t\end{bmatrix}\begin{bmatrix}X_w\\ Y_w\\ Z_w\\ 1\end{bmatrix}\qquad(1)$$
where f_x represents the ratio of physical sizes, in the x-axis direction, between a pixel point in the reference point cloud and a pixel point in the target point cloud; f_y represents the corresponding ratio in the y-axis direction; (u_0, v_0) represents the origin of the coordinate system of the target point cloud; R represents the rotation between the coordinate system of the reference point cloud and that of the target point cloud; and t represents the translation between the two coordinate systems.
(u, v, Z_c) represents the three-dimensional coordinates of a pixel point in the target point cloud, and (X_w, Y_w, Z_w) represents the three-dimensional coordinates of the corresponding pixel point in the reference point cloud.
From the above, the registration relationship between the target point cloud and the reference point cloud is determined essentially by estimating the parameters of the projective transformation function, i.e., f_x, f_y, (u_0, v_0), R, and t.
For this, 6 sets of feature points corresponding to the target point cloud and the reference point cloud need to be obtained. And the characteristic points are related to the reference target obtained by segmentation, namely the characteristic points are pixel points of the reference target in the reference point cloud or the target point cloud.
Preferably, for sampling points (such as corners, vertexes, end points, gravity center points, inflection points and the like) with clearly displayed boundaries and distinct edges and corners of the entity in the target scene in the laser point cloud, 6 pixel points uniformly distributed in the reference point cloud or the target point cloud as much as possible are correspondingly extracted as feature points, so that the significant features of the reference target in the reference point cloud or the target point cloud are reflected, and the registration accuracy between the reference point cloud and the target point cloud is further improved.
After the parameter estimation of the projective transformation function is completed, the registration relationship between the reference point cloud and the target point cloud is determined. The target point cloud is then projectively transformed according to the reference point cloud (X_w, Y_w, Z_w), so that the coordinate system of the target point cloud, i.e., (u, v, Z_c), is transformed to the coordinate system of the reference point cloud.
Through the cooperation of the above embodiments, registration based on projective transformation is realized: the amount of computation in the registration process is greatly reduced, which improves the efficiency of densifying the laser point cloud and, in turn, the production efficiency of the high-precision map; meanwhile, because the feature points reflect the salient features of the reference target in the laser point cloud, the accuracy of the registration process is also improved.
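Purely for illustration, the parameter estimation and the subsequent projective transformation described above can be sketched with a direct linear transform over the six feature-point pairs; the function names below are assumptions, not part of this disclosure, and six correspondences are exactly enough to fix the eleven degrees of freedom of the 3x4 matrix up to scale.

```python
import numpy as np

def estimate_projection_matrix(ref_pts, tgt_pts):
    """Direct linear transform: estimate the 3x4 matrix P (up to scale) such that
    Zc * [u, v, 1]^T ~ P @ [Xw, Yw, Zw, 1]^T, from at least 6 correspondences.
    ref_pts: iterable of (Xw, Yw, Zw); tgt_pts: iterable of (u, v)."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(ref_pts, tgt_pts):
        X = np.array([Xw, Yw, Zw, 1.0])
        A.append(np.concatenate([X, np.zeros(4), -u * X]))
        A.append(np.concatenate([np.zeros(4), X, -v * X]))
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

def transform_reference_to_target(P, ref_xyz):
    """Apply P to (N, 3) reference-frame points; returns (u, v, Zc) per point.
    Zc is only defined up to the global scale of P."""
    homogeneous = np.hstack([ref_xyz, np.ones((len(ref_xyz), 1))])
    projected = homogeneous @ P.T            # rows are (Zc*u, Zc*v, Zc)
    return np.column_stack([projected[:, 0] / projected[:, 2],
                            projected[:, 1] / projected[:, 2],
                            projected[:, 2]])
```

In practice the 6 correspondences would come from the segmented reference target, after which the estimated matrix is applied to the whole frame.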
Referring to fig. 18, in an exemplary embodiment, the method as described above may further include the steps of:
Step 610, extracting map elements based on the dense point cloud of the target scene to obtain map element data.
The target scene comprises at least one entity corresponding to the map element. It is also understood that the entity is an entity that actually exists in the target scene, and the map element is an element that is presented in the target scene map.
Specifically, the map elements and their corresponding entities differ according to the application scenario. For example, in a driver-assistance scenario, the map elements include lane lines, ground signs, curbs, fences, traffic signboards and the like, and the entities are correspondingly the actual lane lines, ground signs, curbs, fences, traffic signboards, etc. In a scenario assisting low-altitude flight of unmanned aerial vehicles, the map elements include street lamps, vegetation, buildings, traffic signboards and the like, and the entities are correspondingly the actual street lamps, vegetation, buildings, traffic signboards, etc.
Then, for a high-precision map, the map element data includes at least the three-dimensional position of the map element in the target scene. The three-dimensional position refers to a geographical position of an entity corresponding to the map element in the target scene. Optionally, the map element data further includes a color, a category, and the like of the map element in the target scene.
For example, if the map element is a lane line element, the map element data includes the three-dimensional position of the lane line in the target scene, the color of the lane line, the form of the lane line (solid line, dashed line, double yellow line), and so on.
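As a purely hypothetical example of how one lane line element could be serialized for later editing (the field names below are assumptions, not a storage format defined by this disclosure):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LaneLineElement:
    """Hypothetical record for one lane line element extracted from the dense point cloud."""
    points_3d: List[Tuple[float, float, float]] = field(default_factory=list)  # 3D position in the target scene
    color: str = "white"    # e.g. "white", "yellow"
    form: str = "solid"     # e.g. "solid", "dashed", "double_yellow"
```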
Step 630, displaying the map element in the target scene map according to the three-dimensional position of the map element in the target scene in the map element data.
The target scene map is a map matched with a target scene.
The editor of the map elements may select to edit all the types of map elements at the same time, or select one type of map element to edit, which is not limited in this embodiment.
If the editor selects to edit the lane line element, the corresponding lane line element data is loaded in the target scene map, so that the lane line element is displayed according to the three-dimensional position of the lane line element in the target scene in the lane line element data.
It should be noted that the map element data, such as lane line element data, is stored in advance according to a specified storage format after extraction is completed, so as to facilitate reading when an editor edits the map element.
Step 650, acquiring an editing instruction for the map elements in the target scene map and, in response, generating a high-precision map of the target scene.
After the map elements are displayed in the target scene map, an editor can check them against the laser point cloud of the target scene.
If a map element does not meet the requirements, for example it fails the precision requirement, its position, shape or category is off, or it is missing because a vehicle occluded it during collection, the editor can edit it further. An editing instruction for the map element in the target scene map is then acquired, and in response to this instruction the map element is edited accordingly, so that a high-precision map containing the edited map elements is finally generated.
In a specific application scenario, a high-precision map is an indispensable link in realizing unmanned driving. It can faithfully restore the target scene and thus improve the positioning accuracy of unmanned equipment (such as unmanned vehicles, unmanned aerial vehicles and robots); it can cope with failures of the environment-sensing equipment (such as sensors) of the unmanned equipment under special conditions, effectively making up for their shortcomings; and it enables global path planning for the unmanned equipment, so that reasonable travel strategies are formulated in advance. The high-precision map therefore plays an irreplaceable role in unmanned driving. Through the embodiments of the invention, densification of the laser point cloud is realized and the accuracy of map element extraction is fully ensured; moreover, the precision of the high-precision map is improved, its production cost is effectively reduced, its production efficiency is raised, and large-scale batch production of high-precision maps becomes possible.
The following is an embodiment of the apparatus of the present invention, which can be used to implement the method for realizing laser point cloud densification according to the present invention. For details not disclosed in the embodiment of the apparatus of the present invention, please refer to the embodiment of the method for implementing laser point cloud densification according to the present invention.
Referring to fig. 19, in an exemplary embodiment, an apparatus 900 for realizing laser point cloud densification includes, but is not limited to: an original point cloud acquisition module 910, a front view acquisition module 930, a front view mapping module 950, and a dense point cloud acquisition module 970.
The original point cloud obtaining module 910 is configured to obtain an original point cloud of a target scene.
A front view acquiring module 930, configured to project the original point cloud to a cylindrical surface according to a front view viewing angle, so as to generate a first front view, where the front view viewing angle is related to an azimuth angle when the laser radar collects the original point cloud.
A front view mapping module 950, configured to map a second front view from the first front view based on a mapping relationship between front views with different resolutions, which is constructed by a deep learning model, where the resolution of the second front view is higher than that of the first front view.
And the dense point cloud obtaining module 970 is configured to project the second front view to the coordinate system where the original point cloud is located, so as to obtain the dense point cloud of the target scene.
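Taken together, the four modules form a single processing chain. Purely as an illustrative sketch (not part of the patented apparatus), the wiring could look as follows in Python, with the two projection helpers and the mapping model passed in as callables:

```python
import numpy as np

def densify_point_cloud(raw_points: np.ndarray, to_front_view, mapping_model, from_front_view) -> np.ndarray:
    """Illustrative end-to-end flow of the apparatus:
    raw point cloud -> low-resolution front view -> high-resolution front view -> dense point cloud.
    The three callables stand in for the front view acquisition, front view mapping
    and dense point cloud acquisition modules."""
    low_res_view = to_front_view(raw_points)        # project onto the cylindrical surface
    high_res_view = mapping_model(low_res_view)     # deep-learning mapping between resolutions
    return from_front_view(high_res_view)           # back-project into the original coordinate system
```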
In an exemplary embodiment, the front view acquisition module 930 includes, but is not limited to: an azimuth traversing unit, a to-be-projected view acquiring unit, a to-be-projected view projecting unit and a cylindrical surface unfolding unit.
The azimuth traversing unit is used for traversing the azimuth of the laser radar when the original point cloud is collected, and determining the view angle of the front view according to the traversed azimuth.
And the view to be projected acquisition unit is used for acquiring a view to be projected of the original point cloud in the view angle of the front view.
And the projection unit of the view to be projected is used for projecting the view to be projected to a local area corresponding to the view angle of the front view in the cylindrical surface.
And the cylindrical surface unfolding unit is used for unfolding the cylindrical surface to obtain the first front view after the traversal is completed.
In an exemplary embodiment, the to-be-projected view acquiring unit includes, but is not limited to: a view acquisition subunit and a view synthesis subunit.
The view acquisition subunit is used for acquiring a distance view, a height view and an intensity view of the original point cloud in the view angle of the front view.
And the view synthesis subunit is used for synthesizing the distance view, the height view and the intensity view into the view to be projected according to an image channel coding mode.
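As a non-authoritative illustration of how such a three-channel front view might be assembled, the numpy sketch below projects a point cloud onto an unrolled cylindrical grid and fills distance, height and intensity channels; the grid size, vertical field of view and collision handling (later points overwrite earlier ones) are assumptions, not values taken from this disclosure.

```python
import numpy as np

def cylindrical_front_view(points, width=512, height=64, v_fov=(-0.4363, 0.0873)):
    """points: (N, 4) array of (x, y, z, intensity) in the lidar frame.
    Returns a (height, width, 3) image whose channels are distance, height and intensity."""
    x, y, z, intensity = points.T
    distance = np.sqrt(x**2 + y**2 + z**2)
    azimuth = np.arctan2(y, x)                                   # horizontal angle in [-pi, pi]
    elevation = np.arcsin(z / np.maximum(distance, 1e-6))

    # Map angles to pixel indices on the unrolled cylinder.
    col = ((azimuth + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    row = (v_fov[1] - elevation) / (v_fov[1] - v_fov[0]) * (height - 1)
    row = np.clip(row, 0, height - 1).astype(int)                # out-of-FOV points land on edge rows

    view = np.zeros((height, width, 3), dtype=np.float32)
    view[row, col, 0] = distance      # distance view
    view[row, col, 1] = z             # height view
    view[row, col, 2] = intensity     # intensity view
    return view
```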
In an exemplary embodiment, the front view mapping module 950 includes, but is not limited to: a feature extraction unit, a feature connection unit and a feature mapping unit.
And the feature extraction unit is used for inputting the first front view into the convolutional neural network model and extracting to obtain multi-channel image features.
And the feature connection unit is used for fully connecting the multi-channel image features to obtain global features.
And the feature mapping unit is used for performing feature mapping on the global features based on the mapping relation constructed by the convolutional neural network model to obtain the second front view.
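A compact PyTorch sketch of a network with this extract / connect / map structure is given below for illustration only; the layer widths, the upsampling factor and the use of a 1x1 convolution in place of a literal fully connected layer (which would otherwise fix the input size) are assumptions, not the network defined by this disclosure.

```python
import torch
import torch.nn as nn

class FrontViewSuperResolution(nn.Module):
    """Maps a low-resolution 3-channel front view to a higher-resolution one."""
    def __init__(self, scale: int = 2):
        super().__init__()
        self.features = nn.Sequential(             # multi-channel image feature extraction
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.global_mix = nn.Conv2d(64, 64, 1)      # stand-in for the "full connection" step
        self.mapping = nn.Sequential(               # feature mapping to the high-resolution view
            nn.Conv2d(64, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),                 # rearranges channels into a larger image
        )

    def forward(self, low_res_view):                # (B, 3, H, W) -> (B, 3, scale*H, scale*W)
        x = self.features(low_res_view)
        x = self.global_mix(x)
        return self.mapping(x)
```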
In an exemplary embodiment, the dense point cloud acquisition module 970 includes, but is not limited to: a segmentation unit and a back projection unit.
The segmentation unit is used for segmenting the second front view into a distance view, a height view and an intensity view according to an image channel coding mode;
and the back projection unit is used for projecting the distance view, the height view and the intensity view to a coordinate system where the original point clouds are located according to the front view respectively to obtain dense point clouds of the target scene.
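For illustration, the back projection can be sketched as the inverse of the assumed cylindrical projection above: each non-empty pixel contributes one point whose horizontal position is recovered from its distance and height channels and its column azimuth (function and parameter names are assumptions).

```python
import numpy as np

def back_project_front_view(view):
    """view: (H, W, 3) front view with distance, height and intensity channels.
    Returns (M, 4) points (x, y, z, intensity) for pixels holding a non-zero distance."""
    H, W, _ = view.shape
    rows, cols = np.nonzero(view[:, :, 0])           # pixels that actually hold a return
    distance = view[rows, cols, 0]
    z = view[rows, cols, 1]
    intensity = view[rows, cols, 2]

    azimuth = cols / (W - 1) * 2 * np.pi - np.pi     # invert the column-to-azimuth mapping
    horizontal = np.sqrt(np.maximum(distance**2 - z**2, 0.0))
    x = horizontal * np.cos(azimuth)
    y = horizontal * np.sin(azimuth)
    return np.column_stack([x, y, z, intensity])
```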
In an exemplary embodiment, the apparatus 900 further includes, but is not limited to: a point cloud acquisition module, a projection module and a training module.
The point cloud obtaining module is used for obtaining an original point cloud to be trained and a dense point cloud to be trained aiming at the same target scene.
And the projection module is used for generating a corresponding front view by projecting the original point cloud to be trained and the dense point cloud to be trained, and taking the front view of the original point cloud to be trained and the front view of the dense point cloud to be trained as the training samples.
And the training module is used for guiding the deep learning model to carry out model training according to the training samples, and constructing the mapping relation between front views with different resolutions through the deep learning model completing the model training.
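A minimal training-loop sketch for such a model is shown below, assuming the data loader already yields pairs of front-view tensors built from the to-be-trained original point cloud and the to-be-trained dense point cloud; the optimizer choice and the L1 reconstruction loss are assumptions.

```python
import torch
import torch.nn as nn

def train_mapping_model(model, loader, epochs=10, lr=1e-3, device="cpu"):
    """loader yields (low_res_view, high_res_view) tensor pairs built from the
    front views of the to-be-trained original and dense point clouds."""
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.L1Loss()                          # assumed reconstruction loss
    for _ in range(epochs):
        for low_res, high_res in loader:
            low_res, high_res = low_res.to(device), high_res.to(device)
            optimizer.zero_grad()
            loss = criterion(model(low_res), high_res)
            loss.backward()
            optimizer.step()
    return model
```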
In an exemplary embodiment, the point cloud acquisition module includes, but is not limited to: a single-frame point cloud acquisition unit, an adjacent-frame point cloud acquisition unit and an adjacent-frame point cloud superposition unit.
The single-frame point cloud acquiring unit is used for acquiring a single-frame laser point cloud in the target scene, taking the single-frame laser point cloud as the original point cloud to be trained, and taking the acquisition time of the original point cloud to be trained as the current time.
And the adjacent frame point cloud acquisition unit is used for determining adjacent time according to the current time and acquiring a plurality of frames of laser point clouds acquired at the adjacent time aiming at the target scene.
And the adjacent-frame point cloud superposition unit is used for superposing the obtained plurality of frames of laser point clouds to obtain the dense point cloud to be trained.
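Once the frames are registered into a common coordinate system, the superposition itself reduces to stacking the point arrays; a trivial sketch, with the registration assumed already done:

```python
import numpy as np

def superpose_registered_frames(frames):
    """frames: iterable of (Ni, 4) point arrays already registered to the reference frame.
    Returns one dense (sum Ni, 4) point cloud used as the to-be-trained dense sample."""
    return np.vstack(list(frames))
```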
In an exemplary embodiment, the adjacent-frame point cloud superposition unit includes, but is not limited to: a target segmentation subunit, a registration subunit and a superposition subunit.
And the target segmentation subunit is used for segmenting the same entity in the target scene to obtain a corresponding reference target from the acquired frames of laser point clouds.
And the registration subunit is used for registering the acquired frames of laser point clouds according to the reference target obtained by segmentation.
And the superposition subunit is used for superposing the plurality of frames of laser point clouds after the registration is completed to obtain the dense point clouds to be trained.
In an exemplary embodiment, the registration subunit includes, but is not limited to: a function constructing subunit, a parameter estimation subunit and a projective transformation subunit.
The function constructing subunit is used for constructing a projection transformation function for the acquired frames of laser point clouds.
And the parameter estimation subunit is used for estimating parameters of the projective transformation function according to the reference target obtained by segmentation.
And the projection transformation subunit is used for performing projection transformation among the acquired frames of laser point clouds according to the projection transformation function for completing parameter estimation.
In an exemplary embodiment, the apparatus 900 further includes, but is not limited to: an element extraction module, an element display module and a map generation module.
And the element extraction module is used for extracting map elements based on the dense point cloud of the target scene to obtain map element data.
And the element display module is used for displaying the map elements in the target scene map according to the three-dimensional positions of the map elements in the target scene in the map element data.
And the map generation module is used for acquiring the editing instruction aiming at the map elements in the target scene map and responding to the editing instruction to generate the high-precision map of the target scene.
It should be noted that, when the apparatus for realizing laser point cloud densification provided in the foregoing embodiment performs processing for realizing laser point cloud densification, only the division of the above functional modules is taken as an example, and in practical applications, the above functions may be distributed by different functional modules according to needs, that is, the internal structure of the apparatus for realizing laser point cloud densification is divided into different functional modules to complete all or part of the above described functions.
In addition, the apparatus for implementing laser point cloud densification and the embodiment of the method for implementing laser point cloud densification provided in the above embodiments belong to the same concept, wherein the specific manner in which each module executes operations has been described in detail in the method embodiment, and is not described herein again.
Referring to fig. 20, in an exemplary embodiment, a computer device 1000 includes at least one processor 1001, at least one memory 1002, and at least one communication bus 1003.
Wherein the memory 1002 has computer readable instructions stored thereon, the processor 1001 reads the computer readable instructions stored in the memory 1002 through the communication bus 1003.
The computer readable instructions, when executed by the processor 1001, implement the method for implementing laser point cloud densification in the embodiments described above.
In an exemplary embodiment, a computer readable storage medium has a computer program stored thereon, and the computer program is executed by a processor to implement the method for implementing laser point cloud densification in the above embodiments.
The above-mentioned embodiments are merely preferred examples of the present invention, and are not intended to limit the embodiments of the present invention, and those skilled in the art can easily make various changes and modifications according to the main concept and spirit of the present invention, so that the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (20)

1. A method for realizing laser point cloud densification is characterized by comprising the following steps:
acquiring an original point cloud of a target scene;
traversing an azimuth angle at which the laser radar collects the original point cloud, determining a view angle of a front view according to the traversed azimuth angle, acquiring a to-be-projected view of the original point cloud at the view angle of the front view, projecting the to-be-projected view to a local area, corresponding to the view angle of the front view, in a cylindrical surface, and unfolding the cylindrical surface to obtain a first front view after the traversal is completed;
mapping a second front view from the first front view based on a mapping relation between front views with different resolutions constructed by a deep learning model, wherein the resolution of the second front view is higher than that of the first front view;
and projecting the second front view to a coordinate system where the original point cloud is located to obtain dense point cloud of the target scene.
2. The method of claim 1, wherein said acquiring a to-be-projected view of the original point cloud at the view angle of the front view comprises:
acquiring a distance view, a height view and an intensity view of the original point cloud at the view angle of the front view;
and synthesizing the distance view, the height view and the intensity view into the view to be projected according to an image channel coding mode.
3. The method of claim 1, in which the deep learning model is a convolutional neural network model;
the mapping relation between front views with different resolutions, which is constructed based on the deep learning model, is mapped by the first front view to obtain a second front view, and the method comprises the following steps:
inputting the first front view into the convolutional neural network model, and extracting to obtain multi-channel image features;
fully connecting the multi-channel image features to obtain global features;
and performing feature mapping on the global features based on the mapping relation constructed by the convolutional neural network model to obtain the second front view.
4. The method of claim 1, wherein projecting the second front view to a coordinate system of the original point cloud to obtain a dense point cloud of the target scene comprises:
according to an image channel coding mode, dividing the second front view into a distance view, a height view and an intensity view;
and respectively projecting the distance view, the height view and the intensity view to a coordinate system where the original point clouds are located according to the front view to obtain dense point clouds of the target scene.
5. The method of claim 1, wherein the method further comprises:
aiming at the same target scene, acquiring an original point cloud to be trained and a dense point cloud to be trained;
generating a corresponding front view by projecting the original point cloud to be trained and the dense point cloud to be trained, and taking the front view of the original point cloud to be trained and the front view of the dense point cloud to be trained as training samples;
and guiding the deep learning model to carry out model training according to the training samples, and constructing a mapping relation between front views with different resolutions through the deep learning model after model training is completed.
6. The method of claim 5, wherein the obtaining an original point cloud to be trained and a dense point cloud to be trained for the same target scene comprises:
acquiring a single-frame laser point cloud of the target scene, taking the single-frame laser point cloud as the original point cloud to be trained, and taking the acquisition time of the original point cloud to be trained as the current time;
determining adjacent moments according to the current moment, and acquiring a plurality of frames of laser point clouds collected at the adjacent moments aiming at the target scene;
and overlapping the obtained frames of laser point clouds to obtain the dense point cloud to be trained.
7. The method of claim 6, wherein the overlapping the acquired frames of laser point clouds to obtain the dense point cloud to be trained comprises:
for the same entity in the target scene, segmenting the acquired multiple frames of laser point clouds to obtain corresponding reference targets;
registering the acquired frames of laser point clouds according to the reference target obtained by segmentation;
and overlapping a plurality of frames of laser point clouds after the registration to obtain the dense point clouds to be trained.
8. The method of claim 7, wherein registering the acquired frames of laser point clouds according to the segmented reference target comprises:
constructing a projection transformation function for the obtained frames of laser point clouds;
estimating parameters of the projective transformation function according to the reference target obtained by segmentation;
and performing projection transformation among the obtained frames of laser point clouds according to a projection transformation function for completing parameter estimation.
9. The method of any of claims 1 to 8, further comprising:
extracting map elements based on the dense point cloud of the target scene to obtain map element data;
displaying the map element in a target scene map according to the three-dimensional position of the map element in the map element data in the target scene;
and acquiring an editing instruction aiming at the map elements in the target scene map, responding to the editing instruction, and generating a high-precision map of the target scene.
10. An apparatus for realizing laser point cloud densification, comprising:
the original point cloud obtaining module is used for obtaining an original point cloud of a target scene;
the system comprises a front view acquisition module, a front view acquisition module and a front view acquisition module, wherein the front view acquisition module is used for traversing an azimuth angle when the laser radar acquires the original point cloud, determining a front view visual angle according to the traversed azimuth angle, acquiring a to-be-projected view of the original point cloud in the front view visual angle, projecting the to-be-projected view to a local area, corresponding to the front view visual angle, in a cylindrical surface, and expanding the cylindrical surface to obtain a first front view after the traversal is completed;
the front view mapping module is used for mapping a first front view to obtain a second front view based on a mapping relation between front views with different resolutions, which is constructed by a deep learning model, and the resolution of the second front view is higher than that of the first front view;
and the dense point cloud acquisition module is used for projecting the second front view to the coordinate system where the original point cloud is located to obtain the dense point cloud of the target scene.
11. The apparatus of claim 10, wherein the front view acquisition module comprises:
the view acquisition subunit is used for acquiring a distance view, a height view and an intensity view of the original point cloud in the view angle of the front view;
and the view synthesis unit is used for synthesizing the distance view, the height view and the intensity view into the view to be projected according to an image channel coding mode.
12. The apparatus of claim 10, in which the deep learning model is a convolutional neural network model;
the front view mapping module includes:
the feature extraction unit is used for inputting the first front view into the convolutional neural network model and extracting to obtain multi-channel image features;
the feature connection unit is used for fully connecting the multi-channel image features to obtain global features;
and the feature mapping unit is used for performing feature mapping on the global features based on the mapping relation constructed by the convolutional neural network model to obtain the second front view.
13. The apparatus of claim 10, wherein the dense point cloud acquisition module comprises:
the segmentation unit is used for segmenting the second front view into a distance view, a height view and an intensity view according to an image channel coding mode;
and the back projection unit is used for projecting the distance view, the height view and the intensity view to a coordinate system where the original point clouds are located according to the front view respectively to obtain dense point clouds of the target scene.
14. The apparatus of claim 10, wherein the apparatus further comprises:
the point cloud obtaining module is used for obtaining an original point cloud to be trained and a dense point cloud to be trained aiming at the same target scene;
the projection module is used for generating a corresponding front view by projecting the original point cloud to be trained and the dense point cloud to be trained, and taking the front view of the original point cloud to be trained and the front view of the dense point cloud to be trained as training samples;
and the training module is used for guiding the deep learning model to carry out model training according to the training samples, and constructing the mapping relation between front views with different resolutions through the deep learning model completing the model training.
15. The apparatus of claim 14, wherein the point cloud acquisition module comprises:
the single-frame point cloud acquisition unit is used for acquiring a single-frame laser point cloud of the target scene, taking the single-frame laser point cloud as the original point cloud to be trained, and taking the acquisition time of the original point cloud to be trained as the current time;
the adjacent frame point cloud obtaining unit is used for determining adjacent time according to the current time and obtaining a plurality of frames of laser point clouds collected at the adjacent time aiming at the target scene;
and the adjacent frame point cloud overlapping unit is used for overlapping the obtained plurality of frames of laser point clouds to obtain the dense point cloud to be trained.
16. The apparatus of claim 15, wherein the adjacent frame point cloud overlay unit comprises:
the target segmentation subunit is used for segmenting the same entity in the target scene to obtain a corresponding reference target from the obtained frames of laser point clouds;
the registration subunit is used for registering the acquired frames of laser point clouds according to the reference target obtained by segmentation;
and the superposition subunit is used for superposing the plurality of frames of laser point clouds after the registration is completed to obtain the dense point clouds to be trained.
17. The apparatus of claim 16, wherein the registration subunit comprises:
the function constructing subunit is used for constructing a projection transformation function for the acquired plurality of frames of laser point clouds;
a parameter estimation subunit, configured to estimate parameters of the projective transformation function according to the reference target obtained by the segmentation;
and the projection transformation subunit is used for performing projection transformation among the acquired frames of laser point clouds according to the projection transformation function for completing parameter estimation.
18. The apparatus of any of claims 10 to 17, further comprising:
the element extraction module is used for extracting map elements based on the dense point cloud of the target scene to obtain map element data;
the element display module is used for displaying the map elements in the target scene map according to the three-dimensional positions of the map elements in the target scene in the map element data;
and the map generation module is used for acquiring the editing instruction aiming at the map elements in the target scene map and responding to the editing instruction to generate the high-precision map of the target scene.
19. A computer device, comprising:
a processor; and
a memory having stored thereon computer readable instructions which, when executed by the processor, implement the method of laser point cloud densification of any of claims 1 to 9.
20. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the method for laser point cloud densification of any of claims 1 to 9.
CN201811374889.4A 2018-11-19 2018-11-19 Method and device for realizing laser point cloud densification and computer equipment Active CN109493407B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811374889.4A CN109493407B (en) 2018-11-19 2018-11-19 Method and device for realizing laser point cloud densification and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811374889.4A CN109493407B (en) 2018-11-19 2018-11-19 Method and device for realizing laser point cloud densification and computer equipment

Publications (2)

Publication Number Publication Date
CN109493407A CN109493407A (en) 2019-03-19
CN109493407B true CN109493407B (en) 2022-03-25

Family

ID=65696832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811374889.4A Active CN109493407B (en) 2018-11-19 2018-11-19 Method and device for realizing laser point cloud densification and computer equipment

Country Status (1)

Country Link
CN (1) CN109493407B (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110097084B (en) * 2019-04-03 2021-08-31 浙江大学 Knowledge fusion method for training multitask student network through projection characteristics
CN111854748B (en) * 2019-04-09 2022-11-22 北京航迹科技有限公司 Positioning system and method
CN110276794B (en) * 2019-06-28 2022-03-01 Oppo广东移动通信有限公司 Information processing method, information processing device, terminal device and server
CN110363771B (en) * 2019-07-15 2021-08-17 武汉中海庭数据技术有限公司 Isolation guardrail shape point extraction method and device based on three-dimensional point cloud data
CN110824496B (en) * 2019-09-18 2022-01-14 北京迈格威科技有限公司 Motion estimation method, motion estimation device, computer equipment and storage medium
CN111009011B (en) * 2019-11-28 2023-09-19 深圳市镭神智能系统有限公司 Method, device, system and storage medium for predicting vehicle direction angle
CN110992485B (en) * 2019-12-04 2023-07-07 北京恒华伟业科技股份有限公司 GIS map three-dimensional model azimuth display method and device and GIS map
CN111192265B (en) * 2019-12-25 2020-12-01 中国科学院上海微系统与信息技术研究所 Point cloud based semantic instance determination method and device, electronic equipment and storage medium
CN111221808A (en) * 2019-12-31 2020-06-02 武汉中海庭数据技术有限公司 Unattended high-precision map quality inspection method and device
CN111439594B (en) * 2020-03-09 2022-02-18 兰剑智能科技股份有限公司 Unstacking method and system based on 3D visual guidance
CN111476242B (en) * 2020-03-31 2023-10-20 北京经纬恒润科技股份有限公司 Laser point cloud semantic segmentation method and device
CN111724478B (en) * 2020-05-19 2021-05-18 华南理工大学 Point cloud up-sampling method based on deep learning
CN111667522A (en) * 2020-06-04 2020-09-15 上海眼控科技股份有限公司 Three-dimensional laser point cloud densification method and equipment
CN112308889B (en) * 2020-10-23 2021-08-31 香港理工大学深圳研究院 Point cloud registration method and storage medium by utilizing rectangle and oblateness information
CN112446953B (en) * 2020-11-27 2021-11-23 广州景骐科技有限公司 Point cloud processing method, device, equipment and storage medium
CN112837410B (en) * 2021-02-19 2023-07-18 北京三快在线科技有限公司 Training model and point cloud processing method and device
CN113219489B (en) * 2021-05-13 2024-04-16 深圳数马电子技术有限公司 Point-to-point determination method, device, computer equipment and storage medium for multi-line laser
CN113359141B (en) * 2021-07-28 2021-12-17 东北林业大学 Forest fire positioning method and system based on unmanned aerial vehicle multi-sensor data fusion
CN114266863B (en) * 2021-12-31 2024-02-09 西安交通大学 3D scene graph generation method, system, device and readable storage medium based on point cloud
CN115690641B (en) * 2022-05-25 2023-08-01 中仪英斯泰克进出口有限公司 Screen control method and system based on image display
CN115965928B (en) * 2023-03-16 2023-07-07 安徽蔚来智驾科技有限公司 Point cloud characteristic enhancement and target detection method, equipment, medium and vehicle

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104346608A (en) * 2013-07-26 2015-02-11 株式会社理光 Sparse depth map densing method and device
CN108198145A (en) * 2017-12-29 2018-06-22 百度在线网络技术(北京)有限公司 For the method and apparatus of point cloud data reparation
CN108154560A (en) * 2018-01-25 2018-06-12 北京小马慧行科技有限公司 Laser point cloud mask method, device and readable storage medium storing program for executing

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Multi-View 3D Object Detection Network for Autonomous Driving;Chen Xiaozhi 等;《IEEE Conference on Computer Vision and Pattern Recognition (CVPR)》;IEEE;20170726;全文 *
Vehicle Detection from 3D Lidar Using Fully Convolutional Network;Li Bo 等;《Computer Vision and Pattern Recognition》;IEEE;20160829;全文 *
An efficient super-resolution fusion method for face 3D point clouds; Tan Hongchun (谭红春); 《光学技术》; 20161115; Vol. 42, No. 6; full text *

Also Published As

Publication number Publication date
CN109493407A (en) 2019-03-19

Similar Documents

Publication Publication Date Title
CN109493407B (en) Method and device for realizing laser point cloud densification and computer equipment
CN110160502B (en) Map element extraction method, device and server
US20210241514A1 (en) Techniques for real-time mapping in a movable object environment
CN108648270B (en) Unmanned aerial vehicle real-time three-dimensional scene reconstruction method capable of realizing real-time synchronous positioning and map construction
US10297074B2 (en) Three-dimensional modeling from optical capture
US20190026400A1 (en) Three-dimensional modeling from point cloud data migration
US10477178B2 (en) High-speed and tunable scene reconstruction systems and methods using stereo imagery
AU2016211612B2 (en) Map-like summary visualization of street-level distance data and panorama data
JP2020516853A (en) Video-based positioning and mapping method and system
JP7440005B2 (en) High-definition map creation method, apparatus, device and computer program
CN113192200B (en) Method for constructing urban real scene three-dimensional model based on space-three parallel computing algorithm
US11544898B2 (en) Method, computer device and storage medium for real-time urban scene reconstruction
Kim et al. Interactive 3D building modeling method using panoramic image sequences and digital map
CN114648640B (en) Target object monomer method, device, equipment and storage medium
Guan et al. Detecting visually salient scene areas and deriving their relative spatial relations from continuous street-view panoramas
CN103955959A (en) Full-automatic texture mapping method based on vehicle-mounted laser measurement system
CN112002007B (en) Model acquisition method and device based on air-ground image, equipment and storage medium
US20220113423A1 (en) Representation data generation of three-dimensional mapping data
US11868377B2 (en) Systems and methods for providing geodata similarity
Leberl et al. Automated photogrammetry for three-dimensional models of urban spaces
Steinemann et al. Determining the outline contour of vehicles In 3D-LIDAR-measurements
WO2021250734A1 (en) Coordinate conversion device, coordinate conversion method, and coordinate conversion program
CN114387532A (en) Boundary identification method and device, terminal, electronic equipment and unmanned equipment
Salah et al. Summarizing large scale 3D mesh for urban navigation
Oh et al. Hue-saturation-depth Guided Image-based Lidar Upsampling Technique for Ultra-high-resolution and Omnidirectional 3D Scanning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant