CN111462275A - Map production method and device based on laser point cloud


Info

Publication number
CN111462275A
Authority
CN
China
Prior art keywords
point cloud
vectorization
dimensional
point
laser
Prior art date
Legal status
Granted
Application number
CN201910058303.1A
Other languages
Chinese (zh)
Other versions
CN111462275B (en)
Inventor
李艳丽
郭云巧
高俊帆
闫瑞仙
蔡金华
Current Assignee
Beijing Jingbangda Trade Co Ltd
Beijing Jingdong Qianshi Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd and Beijing Jingdong Shangke Information Technology Co Ltd
Priority to CN201910058303.1A
Publication of CN111462275A
Application granted
Publication of CN111462275B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 - 2D [Two Dimensional] image generation
    • G06T 11/20 - Drawing from basic elements, e.g. lines or circles
    • G06T 11/206 - Drawing of charts or graphs
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 - Segmentation by performing operations on regions, e.g. growing, shrinking or watersheds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a map production method and device based on laser point cloud, and relates to the field of computer technology. One embodiment of the method comprises: collecting a laser point cloud of a target area and performing semantic parsing on it to determine the category attribute of each point in the laser point cloud and the instance number to which it belongs; combining points that share the same category attribute and the same instance number into a point cloud cluster, and determining the shape of the cluster from the correspondence between category attribute and shape; and determining the vectorization rule corresponding to that shape and applying it to the cluster to obtain a vectorized map of the target area. The laser point cloud collected in this embodiment covers multiple angles and positions, so compared with a perspective view it better preserves the integrity and consistency of objects while avoiding the occlusion of fine structures; performing three-dimensional vectorization separately on point clouds of different shapes enriches the element types and completes the generation of a three-dimensional high-precision map.

Description

Map production method and device based on laser point cloud
Technical Field
The invention relates to the technical field of computers, in particular to a map production method and device based on laser point cloud.
Background
A high-precision map is used to guide unmanned vehicles and robots in autonomous driving. Compared with an ordinary electronic navigation map, it provides more accurate (centimeter-level) positions and richer road element information (road lines, curbs, isolation barriers and the like), and is an indispensable part of autonomous driving technology.
A three-dimensional high-precision map is a map drawn in three-dimensional space. Compared with a two-dimensional high-precision map, it has the following advantages:
1) richer three-dimensional information improves the accuracy of autonomous driving decisions, for example guiding vehicles travelling in different directions in an overpass area;
2) broader applications: besides assisting vehicle navigation, it can be used to construct simulation environments and to fuse virtual scenes into the real environment for mixed-reality applications.
Current three-dimensional high-precision map production methods mainly include:
1) semi-automatic production based on an image data source: taking panoramic images as the data source, extracting elements such as road lines and curbs from the panoramic images, and projecting the element vertices into three-dimensional space with a structure-from-motion reconstruction algorithm, thereby obtaining a projection model of the roads and curbs in three-dimensional space;
2) semi-automatic production combining images and laser point clouds: similarly taking panoramic images as the data source, extracting elements such as road lines, curbs and license plates, and projecting the element positions onto the point cloud according to the matching relation between the point cloud and the images, thereby obtaining a projection model in three-dimensional space.
An ideal three-dimensional high-precision map production method should have good accuracy, strong robustness and rich types of extracted elements. Measured against these three criteria, the inventors found that the prior art has at least the following problems:
1) road elements are extracted from perspective views (panoramic or ordinary cameras); because of perspective foreshortening (near objects appear large and distant ones small), distant road lines and curbs have low resolution and detection errors can amount to tens of centimeters of physical distance, so both methods have difficulty detecting and positioning distant regions accurately;
2) road elements in a perspective view are often occluded by vehicles, pedestrians, trees and the like, and a single occluder can hide a whole road section, making road elements difficult to extract in occluded areas, so both methods lack robustness;
3) both methods extract only linear elements such as road lines and curbs, and do not handle surface-type or body-type elements such as road indications and isolation piers, so the extracted element types are too limited;
4) in the second method, the automatic algorithm may suffer from low robustness and low precision, while high-precision map making demands high precision, so subsequent repair must rely on an interactive editing platform or manual work, which entails a heavy workload.
Disclosure of Invention
In view of this, embodiments of the present invention provide a map production method and device based on laser point cloud, which can at least address the prior-art defects of low production accuracy, weak robustness and limited types of extracted elements for three-dimensional high-precision maps.
In order to achieve the above object, according to an aspect of an embodiment of the present invention, there is provided a map production method based on laser point cloud, including:
collecting laser point clouds in a target area, and performing semantic analysis on the laser point clouds to determine the category attribute of each point cloud point in the laser point clouds and the number of an example to which the point cloud point belongs;
combining point cloud points with the same category attribute under the same instance number to obtain a point cloud cluster, and determining the shape of the point cloud cluster according to the corresponding relation between the category attribute and the shape;
and determining a vectorization rule corresponding to the shape to carry out vectorization processing on the point cloud cluster to obtain a vectorization map of the target area.
Optionally, the acquiring the laser point cloud of the target area includes:
in the vehicle traveling process, carrying out point cloud collection operation on the target area by using a vehicle-mounted device according to a preset frequency to obtain a single-frame laser point cloud;
and splicing the collected single-frame laser point clouds according to the coordinates of the single-frame laser point clouds in a preset coordinate system to obtain the laser point clouds in the target area.
Optionally, before performing semantic parsing on the laser point cloud, the method further includes: and determining the volume of the laser point cloud, and when the volume exceeds a preset volume threshold, carrying out block cutting processing on the laser point cloud according to a preset size to obtain a plurality of laser point cloud blocks.
Optionally, the semantic parsing includes scene parsing and instance segmentation;
the semantic analysis is carried out on the laser point cloud to determine the category attribute and the number of the example of each point cloud point in the laser point cloud, and the method comprises the following steps:
performing scene analysis on the laser point cloud to determine the category attribute of each point cloud point in the laser point cloud; and
and carrying out instance segmentation on the laser point cloud to obtain the number of the instance to which each point cloud point belongs, and establishing the corresponding relation between the instance number and the category attribute by combining the determined category attribute of each point cloud point.
Optionally, before point cloud points with the same category attribute under the same instance number are combined to obtain a point cloud cluster, the method further includes:
acquiring a region element, and determining point cloud points of which the category attributes do not belong to the region element to remove the determined point cloud points; and
and filling the cavity of the laser point cloud with the point cloud points removed by using a cavity repairing mode.
Optionally, the determining a vectorization rule corresponding to the shape to perform vectorization processing on the point cloud cluster includes: projecting the point cloud cluster into a two-dimensional image space to obtain a two-dimensional front view; carrying out two-dimensional vectorization on the two-dimensional front view to obtain a two-dimensional vectorization result; and projecting the two-dimensional vectorization result to a three-dimensional space according to the corresponding relation between the two-dimensional image pixel and the three-dimensional space point to obtain a three-dimensional vectorization result.
Optionally, the shape of the point cloud cluster is a line shape;
the two-dimensional vectorization of the two-dimensional front view to obtain a two-dimensional vectorization result includes: performing mask region extraction on the point cloud projection point region in the two-dimensional front view by using a skeleton extraction mode to obtain a mask region skeleton; and sequentially connecting the pixel points of the images in the adjacent areas in the mask area framework to obtain a framework vectorization segment.
Optionally, the shape of the point cloud cluster is a surface shape;
the two-dimensional vectorization of the two-dimensional front view to obtain a two-dimensional vectorization result includes: extracting edge pixel points from a point cloud projection point area in the two-dimensional front view in a contour tracing mode; and sequentially connecting the extracted edge pixel points to obtain a two-dimensional vectorization contour of the point cloud projection point region.
Optionally, the shape of the point cloud cluster is a body shape;
the determining a vectorization rule corresponding to the shape to perform vectorization processing on the point cloud cluster includes: determining the coordinates of each point cloud point in the body type point cloud cluster under a preset coordinate system, extracting the minimum coordinate value and the maximum coordinate value on each coordinate axis, and taking the extracted minimum coordinate value and the extracted maximum coordinate value as the vectorization value of the body type point cloud cluster in a three-dimensional space.
In order to achieve the above object, according to another aspect of the embodiments of the present invention, there is provided a map production apparatus based on laser point cloud, including:
the point cloud analysis module is used for acquiring laser point clouds in a target area and performing semantic analysis on the laser point clouds to determine the category attribute of each point cloud point in the laser point clouds and the number of the example to which the point cloud point belongs;
the point cloud combination module is used for combining point cloud points with the same category attribute under the same instance number to obtain a point cloud cluster, and determining the shape of the point cloud cluster according to the corresponding relation between the category attribute and the shape;
and the map production module is used for determining a vectorization rule corresponding to the shape so as to carry out vectorization processing on the point cloud cluster to obtain the vectorization map of the target area.
Optionally, the point cloud analyzing module is configured to:
in the vehicle traveling process, carrying out point cloud collection operation on the target area by using a vehicle-mounted device according to a preset frequency to obtain a single-frame laser point cloud;
and splicing the collected single-frame laser point clouds according to the coordinates of the single-frame laser point clouds in a preset coordinate system to obtain the laser point clouds in the target area.
Optionally, the point cloud analyzing module is further configured to: and determining the volume of the laser point cloud, and when the volume exceeds a preset volume threshold, carrying out block cutting processing on the laser point cloud according to a preset size to obtain a plurality of laser point cloud blocks.
Optionally, the semantic parsing includes scene parsing and instance segmentation;
the point cloud analysis module is used for:
performing scene analysis on the laser point cloud to determine the category attribute of each point cloud point in the laser point cloud; and
and carrying out instance segmentation on the laser point cloud to obtain the number of the instance to which each point cloud point belongs, and establishing the corresponding relation between the instance number and the category attribute by combining the determined category attribute of each point cloud point.
Optionally, the system further comprises a point cloud repairing module, configured to:
acquiring a region element, and determining point cloud points of which the category attributes do not belong to the region element to remove the determined point cloud points; and
and filling the cavity of the laser point cloud with the point cloud points removed by using a cavity repairing mode.
Optionally, the map production module is configured to: projecting the point cloud cluster into a two-dimensional image space to obtain a two-dimensional front view; carrying out two-dimensional vectorization on the two-dimensional front view to obtain a two-dimensional vectorization result; and projecting the two-dimensional vectorization result to a three-dimensional space according to the corresponding relation between the two-dimensional image pixel and the three-dimensional space point to obtain a three-dimensional vectorization result.
Optionally, the shape of the point cloud cluster is a line shape;
the map production module is used for: performing mask region extraction on the point cloud projection point region in the two-dimensional front view by using a skeleton extraction mode to obtain a mask region skeleton; and sequentially connecting the pixel points of the images in the adjacent areas in the mask area framework to obtain a framework vectorization segment.
Optionally, the shape of the point cloud cluster is a surface shape;
the map production module is used for: extracting edge pixel points from a point cloud projection point area in the two-dimensional front view in a contour tracing mode; and sequentially connecting the extracted edge pixel points to obtain a two-dimensional vectorization contour of the point cloud projection point region.
Optionally, the shape of the point cloud cluster is a body shape;
the map production module is used for: determining the coordinates of each point cloud point in the body type point cloud cluster under a preset coordinate system, extracting the minimum coordinate value and the maximum coordinate value on each coordinate axis, and taking the extracted minimum coordinate value and the extracted maximum coordinate value as the vectorization value of the body type point cloud cluster in a three-dimensional space.
To achieve the above object, according to still another aspect of the embodiments of the present invention, there is provided an electronic device for map production based on laser point cloud.
The electronic device of the embodiment of the invention comprises: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement any of the above-described laser point cloud based map production methods.
To achieve the above object, according to a further aspect of the embodiments of the present invention, there is provided a computer readable medium having a computer program stored thereon, the program, when executed by a processor, implementing any one of the above-mentioned laser point cloud based map production methods.
According to the scheme provided by the invention, an embodiment of the invention has the following advantages or beneficial effects: the collected laser point cloud covers multiple angles and positions, so compared with a perspective view it better preserves the integrity and consistency of objects, allows various road elements to be extracted fully automatically, accurately and robustly, and avoids the occlusion of fine structures; removing clutter and repairing holes in the laser point cloud reduces large-area occlusion by vehicles, trees and the like and improves the robustness of the method; finally, performing three-dimensional vectorization separately on linear, surface-type and body-type elements enriches the element types and completes the generation of the three-dimensional high-precision map.
Further effects of the above-mentioned non-conventional alternatives will be described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
fig. 1 is a schematic main flow diagram of a map production method based on laser point cloud according to an embodiment of the present invention;
FIG. 2 is a schematic main flow diagram of an alternative laser point cloud based map production method according to an embodiment of the present invention;
FIG. 3 is a schematic main flow diagram of another alternative laser point cloud based map production method according to an embodiment of the present invention;
FIG. 4 is a schematic main flow diagram of another alternative laser point cloud based map production method according to an embodiment of the present invention;
FIG. 5 is a schematic flow chart of another alternative method for producing a map based on laser point cloud according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the main modules of a map production device based on laser point cloud according to the embodiment of the invention;
FIG. 7 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;
FIG. 8 is a schematic block diagram of a computer system suitable for use with a mobile device or server implementing an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that the method provided by the embodiments of the present invention is suitable not only for map production but also for indoor scene reconstruction, for example reconstruction of a large shopping mall. For map production the elements in the corresponding semantic parsing are trees, vehicles, pedestrians and the like; for indoor scene reconstruction they are tables, chairs and the like.
However, one of the contributions of the present invention is the reconstruction of line-type and surface-type elements, which is designed for curbs, road lines, zebra crossings and the like; indoor scenes contain too few such elements to showcase this contribution.
The terms used in the present invention are explained as follows:
Structure-from-motion three-dimensional reconstruction: instead of artificially placing marker points in the scene, tracking and positioning in an unknown environment rely entirely on natural scene images or other orientation sensors to recover the three-dimensional structure of the target and the motion trajectory of the camera.
Point cloud: a collection of a large number of points that express the spatial distribution and surface characteristics of a target in the same spatial reference frame.
Principal component analysis: often used to reduce the dimensionality of a data set while retaining the features that contribute most to its variance. This is done by keeping the lower-order principal components and ignoring the higher-order ones; the low-order components tend to preserve the most important aspects of the data.
Skeleton algorithm: the "skeleton" of an image is its central part and one of the important features describing the geometric and topological properties of the image; the process of finding the skeleton is generally called "thinning" the image.
Contour tracing: when identifying a target in an image, its edge often needs to be tracked, which is also called contour tracking. As the name implies, contour tracing tracks a boundary by finding edge points in sequence.
Thinning (vertex decimation): vectorized data often contain many redundant points, which is inconvenient for subsequent processing; redundant data waste storage space and can make the represented graphics unsmooth or non-compliant with standards. Reducing the number of data points as much as possible by some rule, while keeping the shape of the vector curve unchanged, is therefore called thinning.
Referring to fig. 1, a main flowchart of a map production method based on laser point cloud provided by an embodiment of the present invention is shown, including the following steps:
s101: collecting laser point clouds in a target area, and performing semantic analysis on the laser point clouds to determine the category attribute of each point cloud point in the laser point clouds and the number of an example to which the point cloud point belongs;
s102: combining point cloud points with the same category attribute under the same instance number to obtain a point cloud cluster, and determining the shape of the point cloud cluster according to the corresponding relation between the category attribute and the shape;
s103: and determining a vectorization rule corresponding to the shape to carry out vectorization processing on the point cloud cluster to obtain a vectorization map of the target area.
In the above embodiment, for step S101, the target area in the present invention may be a road, a hill, a street view and the like; the description here mainly takes a street view as an example. The laser point cloud of the target area is formed by continuously collecting point clouds with a vehicle-mounted device and stitching them together.
Semantic parsing is performed on the stitched laser point cloud to obtain the category of each point in the road point cloud, segment out point cloud clusters such as vehicles, trees, pedestrians, road surfaces, isolation piers and lane lines, and assign a category label to each cluster, thereby semanticizing the point cloud.
Specifically, the method comprises the following steps:
Step one: performing scene parsing on the laser point cloud to determine the category attribute of each point cloud point in the laser point cloud; and
Step two: performing instance segmentation on the laser point cloud to obtain the number of the instance to which each point cloud point belongs, and establishing the corresponding relation between the instance number and the category attribute by combining the determined category attribute of each point cloud point.
For step one, the category attribute of each point cloud point in the road point cloud is obtained with an existing point cloud scene parsing algorithm, for example PointCNN (arXiv, 2018), as shown in fig. 2.
The segmentation is at point level, and points of the same class share the same semantic label (Label). Taking trees, vehicles and pedestrians as examples, Label=1 represents trees, Label=2 represents vehicles and Label=3 represents pedestrians.
In general, a foreground object lies in front of the background scene and may occlude it (Label may be set to 0 for the background).
For step two, foreground objects usually need to be separated out, because some of them are occluders that may later need to be removed to improve vectorization accuracy.
Therefore, different foreground objects of the same category can be further distinguished with a point cloud instance segmentation algorithm so as to assign an instance number (ID) to each of them. For example, ID=1 represents the first tree region in the point cloud and ID=2 the second tree region: there are several trees in the scene, and although their categories are the same they belong to different instances.
Further, each point cloud point in the background scene may also be given an instance number, for example ID=4 for the first building region in the point cloud and ID=5 for the second; these are not removed. However, instance segmentation of the background scene generally has low precision, and the background scene does not need element removal, so element removal is usually applied only to foreground objects.
The main purpose of this stage is to parse the background scene and the foreground objects and to perform instance segmentation on the foreground objects. These are two different tasks, and the prior art does not describe a method that completes both steps together.
In addition, the present invention does not concern the specific training process and only uses the trained models: the laser point cloud is fed into a trained model, and the category Label and/or instance ID of each point cloud point can be parsed.
For step S102, the subsequent vectorization of the point cloud is performed on point cloud clusters, since the specific shape cannot be recognized from a single point cloud point.
For example, ID=1 represents the first tree region in the point cloud and Label=1 represents a tree; assuming the tree consists of 20 point cloud points, these 20 points share the same category attribute and instance number, so they are combined to obtain the point cloud cluster of that tree.
Similarly, ID=4 represents the first building region in the point cloud and its Label represents a building; assuming the building consists of 200 point cloud points, these 200 points share the same category attribute and instance number, so they are combined to obtain the point cloud cluster of that building.
For step S103, in the road element vectorization stage it is first necessary to determine which shape each point cloud cluster belongs to according to its category. This may be implemented with a predefined lookup table whose entries have the form {category, whether the category is line-, surface- or body-shaped, how the category is vectorized}, as illustrated by the sketch below.
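For illustration only, one possible form of such a lookup table is sketched here; the category and rule names are hypothetical placeholders mirroring the line/surface/body grouping described below.

```python
# Hypothetical lookup table: category -> (shape, vectorization rule name).
SHAPE_LOOKUP = {
    "lane_line":      ("line",    "skeleton_vectorization"),
    "speed_bump":     ("line",    "skeleton_vectorization"),
    "zebra_crossing": ("surface", "contour_vectorization"),
    "parking_area":   ("surface", "contour_vectorization"),
    "curb":           ("body",    "bounding_box_vectorization"),
    "isolation_pier": ("body",    "bounding_box_vectorization"),
}

def shape_and_rule(category):
    """Return the shape and vectorization rule registered for a category."""
    return SHAPE_LOOKUP[category]
```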
If occluder removal and hole repair have been performed beforehand (see the description of figs. 2 to 4 below), the point cloud clusters come from the road point cloud after occluder removal and hole repair; if not, the clusters include both occluders and road elements.
The invention vectorizes the point cloud clusters in three-dimensional space and divides them into line type, surface type and body type according to the labels from semantic parsing. Line-type elements include lane lines, speed bumps and the like; surface-type elements include zebra crossings, blind roads, parking areas and the like; body-type elements include curbs, isolation piers and the like. Three different strategies are adopted to vectorize the point cloud clusters of these three shapes.
Vectorization of linear and surface-type point cloud clusters is similar and is described with reference to fig. 5; only the vectorization of body-type point cloud clusters is described here:
Body-type point cloud vectorization is carried out in three-dimensional space, and the vectorization is represented by the bounding box of the current point cloud cluster:
A three-dimensional coordinate system is constructed with the longitude and latitude of the landmark GPS as the X and Y coordinates and the height as the Z coordinate, and the vectorization of the current point cloud cluster is represented by its circumscribed bounding box: the leftmost and rightmost X coordinates of the points in the cluster give the extent of the bounding box along its length, the foremost and rearmost Y coordinates give its width, and the lowest and highest Z coordinates give its height.
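As a concrete illustration of this circumscribed bounding box, a short numpy sketch follows; the (N, 3) array layout with X, Y, Z columns is an assumption of the sketch.

```python
import numpy as np

def body_bounding_box(cluster_xyz):
    """cluster_xyz: (N, 3) array of point coordinates (columns X, Y, Z as above)."""
    mins = cluster_xyz.min(axis=0)   # leftmost / foremost / lowest extents
    maxs = cluster_xyz.max(axis=0)   # rightmost / rearmost / highest extents
    # The body-type cluster is vectorized as its axis-aligned bounding box.
    return mins, maxs
```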
In practice the shape of a body-type point cloud cluster is not fixed, so to improve vectorization accuracy the first two (line-type and surface-type) modes are often adopted instead.
After the line-type, surface-type and body-type laser point clouds have been vectorized independently, the vectorization results lie in the same three-dimensional space and need no further fusion between shapes. However, for the different road element point cloud clusters, after vectorization is performed as described above, all vector elements are finally merged to obtain the three-dimensional vectorized map.
With the method provided by this embodiment, the collected laser point cloud covers multiple angles and positions, better preserves the integrity and consistency of objects than a perspective view, and avoids the occlusion of fine structures; performing three-dimensional vectorization separately on point clouds of different shapes enriches the element types and completes the generation of the three-dimensional high-precision map.
Referring to fig. 2, a main flowchart of an alternative map production method based on laser point cloud according to an embodiment of the present invention is shown, including the following steps:
s201: in the process of vehicle advancing, carrying out point cloud collection operation on a target area by using a vehicle-mounted device according to a preset frequency to obtain a single-frame laser point cloud;
s202: splicing the collected single-frame laser point clouds according to the coordinates of the single-frame laser point clouds in a preset coordinate system to obtain the laser point clouds in the target area;
s203: performing semantic analysis on the laser point cloud to determine the category attribute of each point cloud point in the laser point cloud and the number of the example to which the point cloud point belongs;
s204: combining point cloud points with the same category attribute under the same instance number to obtain a point cloud cluster, and determining the shape of the point cloud cluster according to the corresponding relation between the category attribute and the shape;
s205: and determining a vectorization rule corresponding to the shape to carry out vectorization processing on the point cloud cluster to obtain a vectorization map of the target area.
In the above embodiment, the descriptions of steps S101 to S103 shown in fig. 1 can be referred to for steps S203 to S205, respectively, and are not repeated herein.
In the above embodiment, for steps S201 and S202, the vehicle-mounted device may be a point cloud and/or image acquisition system built from a set of software and hardware, such as the Apollo vehicle-mounted device, the NavInfo (SiWei TuXin) vehicle-mounted device, and the like. Common hardware includes cameras (capturing video and images), lidar (capturing laser point clouds) and GPS/IMU (capturing and locating the vehicle trajectory); common software functions include triggering acquisition, storing data, synchronizing hardware devices, and so on.
During travel, the vehicle-mounted device collects point clouds at a fixed frequency (for example, once every 10 meters), obtaining single-frame laser point clouds.
To conveniently express the positional relationship between the point clouds, Gauss plane coordinates are adopted and the point cloud of the road (such as a street view) is projected onto a plane; for a road of small extent (relative to the earth), the influence of the spherical surface can be ignored. The coordinates used at this stage are relative coordinates, i.e. the coordinates of the current point relative to the previous point, so the single-frame laser point cloud coordinates are in a relative coordinate system.
Because each position also has an absolute coordinate, every frame of point cloud can be projected into the absolute coordinate system according to its relative coordinates to complete point cloud stitching, finally yielding a large-scale laser point cloud. In the absolute coordinate system, each positioning coordinate is independent of the previous one.
Finally, a complete point cloud is stitched from the individual frames; it may be referred to as the stitched point cloud or simply the laser point cloud.
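As an illustration only, a minimal stitching sketch follows, under the assumption that each single frame comes with a pose (rotation R and translation t) mapping its relative coordinates into the absolute coordinate system.

```python
import numpy as np

def stitch_frames(frames, poses):
    """frames: list of (Ni, 3) arrays, each in its own relative coordinate system.
    poses: list of (R, t) pairs mapping each frame into the absolute system."""
    world_points = []
    for points, (R, t) in zip(frames, poses):
        world_points.append(points @ R.T + t)   # transform frame into absolute coordinates
    return np.vstack(world_points)              # the stitched laser point cloud
```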
In addition, the single frames of laser point cloud collected by the vehicle-mounted device share the same resolution, where resolution refers to the ratio between unit distance in the point cloud and physical unit distance. Because the laser point cloud is obtained by projecting physical distances measured from the light emitted by the lidar, it is highly accurate, and apart from a very few noise points the resolution is the same at every position.
The collected laser point cloud also covers multiple angles and positions. Because it is stitched from point clouds acquired from many angles by a continuously moving vehicle, the acquisition covers 360 degrees: if a background area is occluded by foreground trees, pedestrians and the like at one angle and its points are not acquired, the originally occluded area becomes visible when the vehicle moves to another position and the area is captured from other angles, and its points are finally stitched in. In other words, fine objects do not affect the point cloud of the whole background. The 360-degree coverage of the laser point cloud reflects the multiple angles; the continuous advance of the vehicle-mounted device reflects the multiple positions.
With the method of this embodiment, the laser point cloud collected by the moving vehicle-mounted device covers multiple angles and positions, avoids occlusion by fine objects, and has a consistent resolution at every position; compared with a perspective view it better preserves the integrity and consistency of the road and improves the robustness of the collected point cloud.
Referring to fig. 3, a main flowchart of an alternative map production method based on laser point cloud according to an embodiment of the present invention is shown, which includes the following steps:
s301: collecting laser point clouds in a target area, determining the volume of the laser point clouds, and when the volume exceeds a preset volume threshold, cutting the laser point clouds into blocks according to a preset size to obtain a plurality of laser point cloud blocks;
s302: performing semantic analysis on the multiple laser point cloud blocks to determine the category attribute and the number of the example of each point in the multiple laser point cloud blocks;
s303: combining point cloud points with the same category attribute under the same instance number to obtain a point cloud cluster, and determining the shape of the point cloud cluster according to the corresponding relation between the category attribute and the shape;
s304: and determining a vectorization rule corresponding to the shape to carry out vectorization processing on the point cloud cluster to obtain a vectorization map of the target area.
In the above embodiment, the descriptions of steps S101 to S103 shown in fig. 1 can be referred to in steps S302 to S304, which are not described herein again.
In the above embodiment, for step S301, the number of points can be huge; for example, one field acquisition may collect a point cloud covering several tens of kilometers. Since the processing capability of existing point cloud parsing algorithms is limited, the point cloud can be diced into blocks before semantic parsing, each block parsed independently, and the parsing results finally fused, which greatly reduces the amount of point cloud processed at a time.
Whether the point cloud needs to be diced can be judged from its volume: if the volume exceeds the limit, the point cloud is diced; otherwise it can be parsed subsequently as a whole.
As for the dicing method, the point cloud can be diced along the driving trajectory of the vehicle-mounted device, specifically: a section (block) of point cloud with an overlapping area is cut out every fixed travel distance. For example, a point cloud block 20 meters long and 100 meters wide and high is cut every 10 meters along the driving trajectory, where the length direction of the bounding box is the tangential direction of the trajectory point and the height direction is perpendicular to the ground.
The driving trajectory may be a series of ordered trajectory points and may be curved. Each trajectory point corresponds to one frame of point cloud, so the dicing can also be performed by the number of trajectory points, i.e. by extracting the frames corresponding to consecutive trajectory points. For example, the vehicle-mounted device may collect hundreds of thousands of single-frame point clouds, and after stitching and dicing only a few dozen laser point cloud blocks undergo semantic parsing.
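For illustration, a sketch of trajectory-based dicing under the example sizes above follows; for brevity the blocks are axis-aligned boxes around the covered trajectory points, rather than boxes oriented along the trajectory tangent as described in the embodiment.

```python
import numpy as np

def dice_along_track(cloud, track_points, step=10.0, block_length=20.0, half_extent=50.0):
    """cloud: (N, 3) stitched point cloud; track_points: (M, 3) ordered trajectory points.
    One block is cut every `step` meters and spans `block_length` meters of travel,
    so consecutive blocks overlap by (block_length - step) meters."""
    seg = np.linalg.norm(np.diff(track_points, axis=0), axis=1)
    dist = np.concatenate([[0.0], np.cumsum(seg)])      # travel distance of each track point

    blocks = []
    for start in np.arange(0.0, dist[-1], step):
        covered = track_points[(dist >= start) & (dist < start + block_length)]
        if len(covered) == 0:
            continue
        lo = covered.min(axis=0) - half_extent          # ~100 m extent in width and height
        hi = covered.max(axis=0) + half_extent
        inside = np.all((cloud >= lo) & (cloud <= hi), axis=1)
        blocks.append(cloud[inside])
    return blocks
```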
However, different dicing modes produce different blocks, and some blocks may have overlapping regions; therefore, after semantic parsing of each block is completed, the instance IDs assigned in the overlapping regions need to be merged (which can also be understood as deduplication). Overlap may occur in both the horizontal and the vertical direction and refers to point clouds overlapping in three-dimensional space.
The above shows only one dicing mode, and the ID fusion described for it is not necessarily optimal. It should also be noted that dicing the point cloud is not essential; it mainly addresses the case where memory and computation cannot keep up with an overly large point cloud. If the laser point cloud is diced, the subsequent semantic parsing performs scene parsing and instance segmentation on each block.
With the method of this embodiment, when the amount of point cloud is large and the processing capability of semantic parsing is limited, the laser point cloud can be diced to increase the processing speed of point cloud semantic parsing.
Referring to fig. 4, a main flowchart of another alternative map production method based on laser point cloud provided by the embodiment of the present invention is shown, which includes the following steps:
s401: collecting laser point clouds in a target area, and performing semantic analysis on the laser point clouds to determine the category attribute of each point cloud point in the laser point clouds and the number of an example to which the point cloud point belongs;
s402: acquiring a region element, and determining point cloud points of which the category attributes do not belong to the region element to remove the determined point cloud points;
s403: filling the cavities in the laser point cloud with the point cloud points removed by a cavity repairing mode;
s404: combining point cloud points with the same category attribute under the same instance number to obtain a point cloud cluster, and determining the shape of the point cloud cluster according to the corresponding relation between the category attribute and the shape;
s405: and determining a vectorization rule corresponding to the shape to carry out vectorization processing on the point cloud cluster to obtain a vectorization map of the target area.
In the above embodiment, steps S401, S404, and S405 can be respectively described with reference to fig. 1 to fig. 3, and are not described again here.
In the above embodiment, for step S402, road elements are often occluded by pedestrians, trees and vehicles, so a set of non-road elements to be removed may be defined, and before road vectorization or point cloud cluster combination, the point cloud whose semantic label marks one of these non-road elements is removed.
Note that the basis for removing a point is the Label of the point cloud, not its instance ID: whether a point is deleted is determined from its Label, and if its Label marks it as, for example, a pedestrian, the point is removed.
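A minimal sketch of this Label-based removal follows; the numeric label codes for removable elements are hypothetical.

```python
import numpy as np

# Hypothetical label codes for removable foreground elements (e.g. vehicle, pedestrian, tree).
NON_ROAD_LABELS = {2, 3, 6}

def remove_non_road_points(points, labels):
    """Keep only points whose Label is not a removable non-road element."""
    labels = np.asarray(labels)
    keep = ~np.isin(labels, list(NON_ROAD_LABELS))
    return points[keep], labels[keep]
```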
For step S403, after the clutter has been removed from the laser point cloud, holes may appear in the point cloud clusters of road elements such as road lines, road isolation piles and isolation piers.
In order to realize high-precision map making more completely later and avoid roads being broken by holes, hole removal (which can also be understood as hole repair) can further be performed on each road element.
For example, occluders generally stand on the ground, while the curbs, lane lines and the like of interest in high-precision mapping lie on the ground; after the occluders are removed, some holes may remain on the ground, and the point cloud can be completed to repair them. Without repair, some broken vectorized road elements may appear.
Hole removal means filling the holes in the road element point cloud clusters with a point cloud hole repair method. Such methods already exist in the prior art, so their specific implementation is not repeated here.
With the method of this embodiment, point cloud clusters such as pedestrians, vehicles and trees that may occlude road elements (all of which can be removed) are eliminated, and hole repair is performed on road element point cloud clusters such as road surfaces and isolation piers that may have been occluded, so that high-precision map making can subsequently be realized more completely and roads are not broken by holes.
Referring to fig. 5, a main flowchart of another alternative map production method based on laser point cloud provided by the embodiment of the present invention is shown, which includes the following steps:
s501: collecting laser point clouds in a target area, and performing semantic analysis on the laser point clouds to determine the category attribute of each point cloud point in the laser point clouds and the number of an example to which the point cloud point belongs;
s502: combining point cloud points with the same category attribute under the same instance number to obtain a point cloud cluster, and determining the shape of the point cloud cluster according to the corresponding relation between the category attribute and the shape;
s503: projecting the point cloud cluster into a two-dimensional image space to obtain a two-dimensional front view;
s504: carrying out two-dimensional vectorization on the two-dimensional front view to obtain a two-dimensional vectorization result;
s505: and projecting the two-dimensional vectorization result to a three-dimensional space according to the corresponding relation between the two-dimensional image pixel and the three-dimensional space point to obtain a three-dimensional vectorization result.
In the above embodiment, the descriptions of steps S501 and S502 in fig. 1 to 4 can be referred to, and are not repeated herein.
In the above embodiment, steps S503 to S505 mainly apply to linear point cloud clusters and surface-type point cloud clusters.
A) Vectorization of linear point cloud clusters
Since vectorizing a linear point cloud cluster in a front view reduces the influence of noise, the linear cluster is first projected into a two-dimensional front view in which the projected pixels are white and all other areas black, i.e. a two-dimensional black-and-white front view (other colors could be used, but black and white are usual because of their large contrast). Two-dimensional vectorization is carried out on this black-and-white front view, and the two-dimensional result is then projected into three-dimensional space to obtain the three-dimensional vectorization result.
The specific implementation process comprises the following steps:
1) two-dimensional black and white front view synthesis
First, two orthogonal principal components are extracted by principal component analysis (PCA, a spatial dimensionality-reduction algorithm), and the three-dimensional point cloud cluster is projected into the two-dimensional space whose coordinates are these two principal components. The point cloud points falling in this two-dimensional space are then discretized into an image.
For example, the points are projected into the two-dimensional image space at a granularity of 0.5 m/pixel, and a pixel hit by a point cloud point is set to 255, otherwise to 0.
Principal component analysis is used mainly because the transformation of the point cloud from three-dimensional space preserves the important information in the two-dimensional space.
Besides principal component analysis, other methods are possible: for example, each point cloud point is located by GPS at acquisition time, so the combined point cloud cluster also carries coordinates. If the heights of all point cloud points are set to 0 (or another dimension is dropped), the point cloud cluster is likewise projected into the two-dimensional image space.
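For illustration, a sketch of this front-view synthesis with scikit-learn's PCA and the 0.5 m/pixel granularity mentioned above; the rasterization details and the returned pixel-to-point mapping (kept for the later back-projection step) are assumptions of the sketch, not the disclosed implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

def synthesize_front_view(cluster_xyz, meters_per_pixel=0.5):
    """Project a 3D cluster onto its two principal components and rasterize it."""
    pca = PCA(n_components=2)
    uv = pca.fit_transform(cluster_xyz)                 # (N, 2) coordinates in the PCA plane
    uv -= uv.min(axis=0)
    pix = np.floor(uv / meters_per_pixel).astype(int)   # discretize into pixel indices

    h, w = pix[:, 1].max() + 1, pix[:, 0].max() + 1
    image = np.zeros((h, w), dtype=np.uint8)
    image[pix[:, 1], pix[:, 0]] = 255                   # pixels hit by a point become white

    # keep a pixel -> 3D point index mapping for the later back-projection step
    pixel_to_point = {(int(v), int(u)): i for i, (u, v) in enumerate(pix)}
    return image, pixel_to_point
```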
2) Noise and hole culling
Firstly, filtering small-scale noise points by using a filtering algorithm (such as median filtering with the size of 5 pixels);
large scale noise regions are then filtered with an image hole patching strategy (e.g., flipping tile colors with connected region areas less than 2000 pixels). For the area of the connected region, in the image, the depth-first search algorithm may be used to search for the adjacent pixels according to 4-way connection or 8-way connection, which is a simple prior art and is not described herein again.
It should be noted that the hole repairing is different from the point cloud hole repairing. The point cloud hole repairing is to repair the point cloud with semantics; the hole patching is patching without semantics for the image. Even if point cloud hole repairing is carried out, the following image hole repairing is also needed to be realized, and mainly, partial noise may exist in the front view synthesis, and the noise removal and the hole repairing are needed to smooth the image.
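A sketch of this noise and hole culling with OpenCV follows, using the 5-pixel median filter and the 2000-pixel area threshold as the example values given above; flipping both small white and small black regions is one possible reading of the strategy.

```python
import cv2
import numpy as np

def clean_front_view(image, max_area=2000):
    """image: uint8 black-and-white front view (255 where a point projects)."""
    image = cv2.medianBlur(image, 5)                    # remove small-scale noise points

    def flip_small_regions(img, value, limit):
        # flip connected regions of `value` whose area is below `limit`
        mask = (img == value).astype(np.uint8)
        n, comp, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
        for i in range(1, n):                           # component 0 is the background
            if stats[i, cv2.CC_STAT_AREA] < limit:
                img[comp == i] = 255 - value
        return img

    image = flip_small_regions(image, 255, max_area)    # isolated white noise patches
    image = flip_small_regions(image, 0, max_area)      # small black holes inside the mask
    return image
```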
3) Two-dimensional element vectorization:
First, a skeleton extraction algorithm (such as a gradient shortest-path skeleton extraction algorithm) is applied to the white mask region formed by the projected points from step 1) to obtain the skeleton of the mask region; then neighboring pixels are connected in sequence to locally link the skeleton, yielding skeleton vectorized segments.
The points on the skeleton are discrete pixels, and skeleton vectorization links these discrete pixels together.
Further, a longest skeleton line can be searched for.
Considering that a skeleton vectorized segment may have many branches, in order to extract the longest branch, the longest skeleton line can be searched starting from a vertex that has only one neighbouring point, using an exhaustive search: the search proceeds in sequence from an end that has no adjacent skeleton vectorized segment and terminates at an end that has no other adjacent skeleton segment.
Alternatively, each skeleton vectorized segment has a start point and an end point, and if two skeleton vectorized segments are connected end to end they are joined in sequence; by analogy, a longest skeleton line is obtained.
Further, if there are several roads in one area, several skeleton lines may finally be obtained in the above manner.
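As an illustration only, a sketch of the skeleton extraction with scikit-image follows, together with a simplified single walk from one endpoint; the embodiment's exhaustive longest-line search would compare the walks from every endpoint, which is omitted here for brevity.

```python
import numpy as np
from skimage.morphology import skeletonize

def skeleton_path(mask_image):
    """mask_image: uint8 front view, 255 on the white mask region.
    Returns one ordered list of skeleton pixels (row, col)."""
    skel = skeletonize(mask_image > 0)                   # boolean skeleton image
    pixels = {tuple(p) for p in np.argwhere(skel)}

    def neighbors(p):
        r, c = p
        return [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr or dc) and (r + dr, c + dc) in pixels]

    # walk from an endpoint (a pixel with a single neighbor) through the skeleton
    endpoints = [p for p in pixels if len(neighbors(p)) == 1]
    if not endpoints:
        return sorted(pixels)
    path, visited = [endpoints[0]], {endpoints[0]}
    while True:
        unvisited = [q for q in neighbors(path[-1]) if q not in visited]
        if not unvisited:
            break
        path.append(unvisited[0])
        visited.add(unvisited[0])
    return path                                          # ordered skeleton pixels
```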
4) Three-dimensional element vectorization
According to the mapping between two-dimensional image pixels and three-dimensional point cloud points established in step 1), the consecutive pixels on the two-dimensional element vector line are projected into three-dimensional space; this maps them onto a line constructed from sequentially adjacent point cloud points in three-dimensional space, i.e. a continuous set of three-dimensional points, completing the three-dimensional vectorization.
For the skeleton vectorized segments obtained in step 3), the consecutive pixels on the two-dimensional element vectorization are the consecutive pixels on the skeleton vectorized segments.
If only the skeleton vectorized segments are projected into three-dimensional space, the resulting three-dimensional point sets may need to be connected afterwards; if the longest skeleton line is projected instead, this connection step may be unnecessary.
In addition, since the point set after three-dimensional vectorization can be too large, a vertex thinning algorithm is used to thin the point set in three-dimensional space.
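Continuing the earlier sketches, the back-projection and vertex thinning might look as follows; the pixel_to_point mapping comes from the hypothetical front-view sketch above, the thinning uses a standard Douglas-Peucker decimation, and the 0.05 m tolerance is an assumed value.

```python
import numpy as np

def back_project(path_pixels, pixel_to_point, cluster_xyz):
    """Map ordered 2D skeleton pixels back to their 3D points (step 4)."""
    ids = [pixel_to_point[p] for p in path_pixels if p in pixel_to_point]
    return cluster_xyz[ids]

def douglas_peucker(points, tol=0.05):
    """Vertex thinning: drop points while keeping the polyline shape (tol in meters)."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    line = end - start
    norm = np.linalg.norm(line)
    if norm == 0:
        dists = np.linalg.norm(points - start, axis=1)
    else:
        # distance of every point to the line supporting the start-end segment
        dists = np.linalg.norm(np.cross(points - start, line), axis=1) / norm
    idx = int(np.argmax(dists))
    if dists[idx] <= tol:
        return np.vstack([start, end])
    left = douglas_peucker(points[: idx + 1], tol)
    right = douglas_peucker(points[idx:], tol)
    return np.vstack([left[:-1], right])
```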
B) Vectorization of surface point cloud clusters
Similarly to the vectorization of linear point cloud clusters, a surface-type point cloud cluster is projected into a two-dimensional front view, vectorized there, and the two-dimensional vectorization result is then projected into three-dimensional space to obtain the three-dimensional vector result.
Steps 1), 2) and 4) are the same as for linear point cloud clusters and are not repeated here; only the two-dimensional vectorization step for surface-type point cloud clusters is described below:
3) Two-dimensional element vectorization: the two-dimensional vectorized contour of the white mask region is extracted with a contour tracing algorithm.
Two-dimensional vectorization connects a series of pixel points in sequence, and the contour tracing algorithm serves two purposes: (1) finding the edge pixels of the mask region and (2) connecting the edge pixels in sequence; it therefore completes the two-dimensional element vectorization by itself.
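For illustration, a minimal contour-tracing sketch with OpenCV follows (assuming the OpenCV 4 return signature of findContours); returning only the largest external contour is a simplification of this sketch.

```python
import cv2
import numpy as np

def trace_contour(mask_image):
    """mask_image: uint8 front view, 255 on the white mask region.
    Returns the ordered edge pixels of the largest contour as an (M, 2) array."""
    contours, _ = cv2.findContours(mask_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return np.empty((0, 2), dtype=int)
    largest = max(contours, key=cv2.contourArea)
    return largest.reshape(-1, 2)      # sequentially connected (x, y) edge pixels
```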
In addition, considering that some surface-type point cloud clusters are too large, such as isolation piles in the middle of a road, the surface-type point cloud cluster can optionally be segmented, for example into one cluster every 400 meters along the driving track; surface vectorization is then performed on each cluster separately, and adjacent surface-type vector elements are fused. The fusion strategy includes, but is not limited to, setting an adsorption threshold, for example 10 centimeters, and adsorbing the edge vertices of the current surface-type vector element onto the surface-type vector element of the neighboring point cloud cluster within that threshold.
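A sketch of such an adsorption-based fusion is given below; the 0.1 m default mirrors the 10 cm example above, and the vertex-to-vertex snapping rule is an illustrative assumption about how the adsorption is carried out.

import numpy as np

def adsorb_vertices(current, neighbour, threshold=0.1):
    """Snap each vertex of `current` (N,3) onto its nearest vertex of `neighbour` (M,3) if within threshold."""
    current = np.asarray(current, dtype=float).copy()
    neighbour = np.asarray(neighbour, dtype=float)
    if len(neighbour) == 0:
        return current
    for i, v in enumerate(current):
        dists = np.linalg.norm(neighbour - v, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= threshold:
            current[i] = neighbour[j]          # adsorb onto the neighbouring surface element
    return current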
The method provided by this embodiment performs three-dimensional vectorization on linear elements and surface elements separately, which enriches the element types and completes the generation of the three-dimensional high-precision map.
According to the method provided by the embodiment of the invention, the collected laser point cloud covers multiple angles and multiple positions; compared with a perspective view, it better preserves the integrity and consistency of objects, allows various road elements to be extracted fully automatically and accurately, and avoids occlusion of fine parts. Removing clutter and repairing holes in the laser point cloud reduces large-area occlusion by vehicles, trees and the like, improving the robustness of the method. Finally, three-dimensional vectorization is performed separately on the linear elements, surface elements and body-type elements, which enriches the element types and completes the generation of the three-dimensional high-precision map.
Referring to fig. 6, a schematic diagram of main modules of a map production apparatus 600 based on laser point cloud according to an embodiment of the present invention is shown, including:
the point cloud analysis module 601 is configured to collect laser point clouds in a target area, perform semantic analysis on the laser point clouds, and determine the category attribute of each point cloud point in the laser point clouds and the number of the instance to which it belongs;
the point cloud combination module 602 is configured to combine point cloud points with the same category attribute under the same instance number to obtain a point cloud cluster, and determine the shape of the point cloud cluster according to the correspondence between the category attribute and the shape;
and the map production module 603 is configured to determine a vectorization rule corresponding to the shape, so as to perform vectorization processing on the point cloud cluster to obtain a vectorization map of the target area.
In the implementation apparatus of the present invention, the point cloud analysis module 601 is configured to: during vehicle travel, perform a point cloud collection operation on the target area with a vehicle-mounted device at a preset frequency to obtain single-frame laser point clouds; and splice the collected single-frame laser point clouds according to their coordinates in a preset coordinate system to obtain the laser point cloud of the target area.
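A minimal sketch of the splicing step is given below, assuming each single-frame point cloud comes with a pose (rotation R, translation t) expressed in the preset coordinate system; the pose source (for example a GNSS/INS solution) is an assumption and is not specified in this passage.

import numpy as np

def splice_frames(frames):
    """frames: iterable of (points (N,3), R (3,3), t (3,)) -> one stitched (M,3) point cloud."""
    world_points = [pts @ R.T + t for pts, R, t in frames]   # transform each frame into the preset frame
    return np.vstack(world_points)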
In the implementation apparatus of the present invention, the point cloud analysis module 601 is further configured to: determine the volume of the laser point cloud and, when the volume exceeds a preset volume threshold, cut the laser point cloud into blocks according to a preset size to obtain a plurality of laser point cloud blocks.
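For illustration, a sketch of such block cutting is given below, assuming the "volume" is measured as the number of points and that blocks are cut on a regular horizontal grid; both the point threshold and the block size are illustrative assumptions.

import numpy as np
from collections import defaultdict

def cut_into_blocks(points, max_points=5_000_000, block_size=100.0):
    """Split an (N,3) point cloud into spatial blocks once it exceeds max_points."""
    points = np.asarray(points)
    if len(points) <= max_points:
        return [points]
    keys = np.floor(points[:, :2] / block_size).astype(int)   # XY grid cell index per point
    blocks = defaultdict(list)
    for key, p in zip(map(tuple, keys), points):
        blocks[key].append(p)
    return [np.asarray(b) for b in blocks.values()]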
In the implementation device of the invention, the semantic analysis comprises scene analysis and instance segmentation;
the point cloud analyzing module 601 is configured to:
performing scene analysis on the laser point cloud to determine the category attribute of each point cloud point in the laser point cloud; and
performing instance segmentation on the laser point cloud to obtain the number of the instance to which each point cloud point belongs, and establishing the correspondence between the instance number and the category attribute in combination with the determined category attribute of each point cloud point.
The apparatus further comprises a point cloud repairing module 604 (not shown) for:
acquiring a region element, and determining point cloud points of which the category attributes do not belong to the region element to remove the determined point cloud points; and
and filling the cavity of the laser point cloud with the point cloud points removed by using a cavity repairing mode.
In the implementation apparatus of the present invention, the map production module 603 is configured to:
projecting the point cloud cluster into a two-dimensional image space to obtain a two-dimensional front view; carrying out two-dimensional vectorization on the two-dimensional front view to obtain a two-dimensional vectorization result; and projecting the two-dimensional vectorization result to a three-dimensional space according to the corresponding relation between the two-dimensional image pixel and the three-dimensional space point to obtain a three-dimensional vectorization result.
In the implementation device, the shape of the point cloud cluster is a line shape;
the map production module 603 is configured to: performing mask region extraction on the point cloud projection point region in the two-dimensional front view by using a skeleton extraction mode to obtain a mask region skeleton; and sequentially connecting the pixel points of the images in the adjacent areas in the mask area framework to obtain a framework vectorization segment.
In the implementation device, the shape of the point cloud cluster is a surface shape;
the map production module 603 is configured to: extracting edge pixel points from a point cloud projection point area in the two-dimensional front view in a contour tracing mode; and sequentially connecting the extracted edge pixel points to obtain a two-dimensional vectorization contour of the point cloud projection point region.
In the implementation device, the shape of the point cloud cluster is a body shape;
the map production module 603 is configured to:
determining the coordinates of each point cloud point in the body type point cloud cluster under a preset coordinate system, extracting the minimum coordinate value and the maximum coordinate value on each coordinate axis, and taking the extracted minimum coordinate value and the extracted maximum coordinate value as the vectorization value of the body type point cloud cluster in a three-dimensional space.
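In other words, the body-type vectorization amounts to computing an axis-aligned three-dimensional bounding box; a minimal sketch is given below.

import numpy as np

def body_bounding_box(points):
    """Return (min_xyz, max_xyz) of an (N,3) body-type point cloud cluster in the preset coordinate system."""
    points = np.asarray(points)
    return points.min(axis=0), points.max(axis=0)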
In addition, the detailed implementation of the apparatus in the embodiment of the present invention has already been described in detail in the method above, so it is not repeated here.
According to the apparatus provided by the embodiment of the invention, the collected laser point cloud covers multiple angles and multiple positions; compared with a perspective view, it better preserves the integrity and consistency of objects, allows various road elements to be extracted fully automatically and accurately, and avoids occlusion of fine parts. Removing clutter and repairing holes in the laser point cloud reduces large-area occlusion by vehicles, trees and the like, improving the robustness of the method. Finally, three-dimensional vectorization is performed separately on the linear elements, surface elements and body-type elements, which enriches the element types and completes the generation of the three-dimensional high-precision map.
Fig. 7 illustrates an exemplary system architecture 700 of a laser point cloud based map production method or a laser point cloud based map production apparatus to which embodiments of the invention may be applied.
As shown in fig. 7, the system architecture 700 may include terminal devices 701, 702, 703, a network 704 and a server 705 (by way of example only). The network 704 serves to provide a medium for communication links between the terminal devices 701, 702, 703 and the server 705. Network 704 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
A user may use the terminal devices 701, 702, 703 to interact with a server 705 over a network 704, to receive or send messages or the like. Various communication client applications may be installed on the terminal devices 701, 702, 703.
The terminal devices 701, 702, 703 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 705 may be a server providing various services, such as a background management server (for example only) providing support for shopping websites browsed by users using the terminal devices 701, 702, 703. The background management server can analyze and process the received data such as the information inquiry request and the like, and feed back the processing result to the terminal equipment.
It should be noted that the map production method based on laser point cloud provided by the embodiment of the present invention is generally executed by the server 705, and accordingly, the map production apparatus based on laser point cloud is generally disposed in the server 705.
It should be understood that the number of terminal devices, networks, and servers in fig. 7 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 8, shown is a block diagram of a computer system 800 suitable for use with a terminal device implementing an embodiment of the present invention. The terminal device shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 8, the computer system 800 includes a Central Processing Unit (CPU)801 that can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)802 or a program loaded from a storage section 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data necessary for the operation of the system 800 are also stored. The CPU 801, ROM 802, and RAM 803 are connected to each other via a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
The following components are connected to the I/O interface 805: an input section 806 including a keyboard, a mouse and the like; an output section 807 including a cathode ray tube (CRT), a liquid crystal display (LCD) and the like, as well as a speaker and the like; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card, a modem and the like. The communication section 809 performs communication processing via a network such as the Internet. A drive 810 is also connected to the I/O interface 805 as necessary. A removable medium 811, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory or the like, is mounted on the drive 810 as necessary, so that a computer program read out therefrom is installed into the storage section 808 as necessary.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 809 and/or installed from the removable medium 811. When executed by the Central Processing Unit (CPU) 801, the computer program performs the above-described functions defined in the system of the present invention.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor comprising a point cloud analysis module, a point cloud combination module and a map production module. The names of these modules do not in some cases constitute a limitation on the modules themselves; for example, the point cloud analysis module may also be described as a "module that collects and parses point clouds".
As another aspect, the present invention also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being incorporated into the apparatus. The computer readable medium carries one or more programs which, when executed by a device, cause the device to:
collecting laser point clouds in a target area, and performing semantic analysis on the laser point clouds to determine the category attribute of each point cloud point in the laser point clouds and the number of an example to which the point cloud point belongs;
combining point cloud points with the same category attribute under the same instance number to obtain a point cloud cluster, and determining the shape of the point cloud cluster according to the corresponding relation between the category attribute and the shape;
and determining a vectorization rule corresponding to the shape to carry out vectorization processing on the point cloud cluster to obtain a vectorization map of the target area.
According to the technical scheme of the embodiment of the invention, the collected laser point cloud covers multiple angles and multiple positions; compared with a perspective view, it better preserves the integrity and consistency of objects, allows rich road elements to be extracted fully automatically and accurately, and avoids occlusion of small parts. Removing clutter and repairing holes in the laser point cloud reduces large-area occlusion by vehicles, trees and the like, improving the robustness of the method. Finally, three-dimensional vectorization is performed separately on the linear elements, surface elements and body-type elements, which enriches the element types and completes the generation of the three-dimensional high-precision map.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (20)

1. A map production method based on laser point cloud is characterized by comprising the following steps:
collecting laser point clouds in a target area, and performing semantic analysis on the laser point clouds to determine the category attribute of each point cloud point in the laser point clouds and the number of an example to which the point cloud point belongs;
combining point cloud points with the same category attribute under the same instance number to obtain a point cloud cluster, and determining the shape of the point cloud cluster according to the corresponding relation between the category attribute and the shape;
and determining a vectorization rule corresponding to the shape to carry out vectorization processing on the point cloud cluster to obtain a vectorization map of the target area.
2. The method of claim 1, wherein the acquiring a laser point cloud of a target area comprises:
in the vehicle traveling process, carrying out point cloud collection operation on the target area by using a vehicle-mounted device according to a preset frequency to obtain a single-frame laser point cloud;
and splicing the collected single-frame laser point clouds according to the coordinates of the single-frame laser point clouds in a preset coordinate system to obtain the laser point clouds in the target area.
3. The method of claim 1, further comprising, prior to semantically resolving the laser point cloud:
and determining the volume of the laser point cloud, and when the volume exceeds a preset volume threshold, carrying out block cutting processing on the laser point cloud according to a preset size to obtain a plurality of laser point cloud blocks.
4. The method of claim 1, wherein the semantic parsing comprises scene parsing and instance segmentation;
the semantic analysis is carried out on the laser point cloud to determine the category attribute and the number of the example of each point cloud point in the laser point cloud, and the method comprises the following steps:
performing scene analysis on the laser point cloud to determine the category attribute of each point cloud point in the laser point cloud; and
performing instance segmentation on the laser point cloud to obtain the number of the instance to which each point cloud point belongs, and establishing the correspondence between the instance number and the category attribute in combination with the determined category attribute of each point cloud point.
5. The method of claim 1, wherein before combining point cloud points with the same category attribute under the same instance number to obtain a point cloud cluster, the method further comprises:
acquiring a region element, and determining point cloud points of which the category attributes do not belong to the region element to remove the determined point cloud points; and
and filling the cavity of the laser point cloud with the point cloud points removed by using a cavity repairing mode.
6. The method according to claim 1, wherein the determining a vectorization rule corresponding to the shape to perform vectorization processing on the point cloud cluster comprises:
projecting the point cloud cluster into a two-dimensional image space to obtain a two-dimensional front view;
carrying out two-dimensional vectorization on the two-dimensional front view to obtain a two-dimensional vectorization result;
and projecting the two-dimensional vectorization result to a three-dimensional space according to the corresponding relation between the two-dimensional image pixel and the three-dimensional space point to obtain a three-dimensional vectorization result.
7. The method of claim 6, wherein the shape of the point cloud cluster is a line shape;
the two-dimensional vectorization of the two-dimensional front view to obtain a two-dimensional vectorization result includes:
performing mask region extraction on the point cloud projection point region in the two-dimensional front view by using a skeleton extraction mode to obtain a mask region skeleton;
and sequentially connecting the adjacent pixel points in the mask region skeleton to obtain a skeleton vectorization segment.
8. The method of claim 6, wherein the shape of the point cloud cluster is a surface;
the two-dimensional vectorization of the two-dimensional front view to obtain a two-dimensional vectorization result includes:
extracting edge pixel points from a point cloud projection point area in the two-dimensional front view in a contour tracing mode;
and sequentially connecting the extracted edge pixel points to obtain a two-dimensional vectorization contour of the point cloud projection point region.
9. The method of claim 1, wherein the shape of the point cloud cluster is a body shape;
the determining a vectorization rule corresponding to the shape to perform vectorization processing on the point cloud cluster includes:
determining the coordinates of each point cloud point in the body type point cloud cluster under a preset coordinate system, extracting the minimum coordinate value and the maximum coordinate value on each coordinate axis, and taking the extracted minimum coordinate value and the extracted maximum coordinate value as the vectorization value of the body type point cloud cluster in a three-dimensional space.
10. A map production device based on laser point cloud is characterized by comprising:
the point cloud analysis module is used for acquiring laser point clouds in a target area and performing semantic analysis on the laser point clouds to determine the category attribute of each point cloud point in the laser point clouds and the number of the example to which the point cloud point belongs;
the point cloud combination module is used for combining point cloud points with the same category attribute under the same instance number to obtain a point cloud cluster, and determining the shape of the point cloud cluster according to the corresponding relation between the category attribute and the shape;
and the map production module is used for determining a vectorization rule corresponding to the shape so as to carry out vectorization processing on the point cloud cluster to obtain the vectorization map of the target area.
11. The apparatus of claim 10, wherein the point cloud analysis module is configured to:
in the vehicle traveling process, carrying out point cloud collection operation on the target area by using a vehicle-mounted device according to a preset frequency to obtain a single-frame laser point cloud;
and splicing the collected single-frame laser point clouds according to the coordinates of the single-frame laser point clouds in a preset coordinate system to obtain the laser point clouds in the target area.
12. The apparatus of claim 10, wherein the point cloud analysis module is further configured to:
and determining the volume of the laser point cloud, and when the volume exceeds a preset volume threshold, carrying out block cutting processing on the laser point cloud according to a preset size to obtain a plurality of laser point cloud blocks.
13. The apparatus of claim 10, wherein the semantic parsing comprises scene parsing and instance segmentation;
the point cloud analysis module is used for:
performing scene analysis on the laser point cloud to determine the category attribute of each point cloud point in the laser point cloud; and
performing instance segmentation on the laser point cloud to obtain the number of the instance to which each point cloud point belongs, and establishing the correspondence between the instance number and the category attribute in combination with the determined category attribute of each point cloud point.
14. The apparatus of claim 10, further comprising a point cloud repair module to:
acquiring a region element, and determining point cloud points of which the category attributes do not belong to the region element to remove the determined point cloud points; and
and filling the cavity of the laser point cloud with the point cloud points removed by using a cavity repairing mode.
15. The apparatus of claim 10, wherein the map production module is to:
projecting the point cloud cluster into a two-dimensional image space to obtain a two-dimensional front view;
carrying out two-dimensional vectorization on the two-dimensional front view to obtain a two-dimensional vectorization result;
and projecting the two-dimensional vectorization result to a three-dimensional space according to the corresponding relation between the two-dimensional image pixel and the three-dimensional space point to obtain a three-dimensional vectorization result.
16. The apparatus of claim 15, wherein the shape of the point cloud cluster is a line shape;
the map production module is used for:
performing mask region extraction on the point cloud projection point region in the two-dimensional front view by using a skeleton extraction mode to obtain a mask region skeleton;
and sequentially connecting the adjacent pixel points in the mask region skeleton to obtain a skeleton vectorization segment.
17. The apparatus of claim 15, wherein the shape of the point cloud cluster is a surface;
the map production module is used for:
extracting edge pixel points from a point cloud projection point area in the two-dimensional front view in a contour tracing mode;
and sequentially connecting the extracted edge pixel points to obtain a two-dimensional vectorization contour of the point cloud projection point region.
18. The apparatus of claim 10, wherein the shape of the point cloud cluster is a body shape;
the map production module is used for:
determining the coordinates of each point cloud point in the body type point cloud cluster under a preset coordinate system, extracting the minimum coordinate value and the maximum coordinate value on each coordinate axis, and taking the extracted minimum coordinate value and the extracted maximum coordinate value as the vectorization value of the body type point cloud cluster in a three-dimensional space.
19. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-9.
20. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-9.
CN201910058303.1A 2019-01-22 2019-01-22 Map production method and device based on laser point cloud Active CN111462275B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910058303.1A CN111462275B (en) 2019-01-22 2019-01-22 Map production method and device based on laser point cloud

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910058303.1A CN111462275B (en) 2019-01-22 2019-01-22 Map production method and device based on laser point cloud

Publications (2)

Publication Number Publication Date
CN111462275A true CN111462275A (en) 2020-07-28
CN111462275B CN111462275B (en) 2024-03-05

Family

ID=71682188

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910058303.1A Active CN111462275B (en) 2019-01-22 2019-01-22 Map production method and device based on laser point cloud

Country Status (1)

Country Link
CN (1) CN111462275B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103389103A (en) * 2013-07-03 2013-11-13 北京理工大学 Geographical environmental characteristic map construction and navigation method based on data mining
CN105488498A (en) * 2016-01-15 2016-04-13 武汉光庭信息技术股份有限公司 Lane sideline automatic extraction method and lane sideline automatic extraction system based on laser point cloud
WO2017154061A1 (en) * 2016-03-07 2017-09-14 三菱電機株式会社 Map making device and map making method
US20180211399A1 (en) * 2017-01-26 2018-07-26 Samsung Electronics Co., Ltd. Modeling method and apparatus using three-dimensional (3d) point cloud
US20180275277A1 (en) * 2017-03-22 2018-09-27 Here Global B.V. Method, apparatus and computer program product for mapping and modeling a three dimensional structure
CN107330903A (en) * 2017-06-29 2017-11-07 西安理工大学 Skeleton extraction method for a human body point cloud model
CN107862738A (en) * 2017-11-28 2018-03-30 武汉大学 Indoor structure three-dimensional reconstruction method based on mobile laser measurement point cloud

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
孙朋朋; 闵海根; 徐志刚; 赵祥模: "Real-time ground point cloud extraction algorithm using extended vertices", Computer Engineering and Applications, no. 24 *
程健; 项志宇; 于海滨; 刘济林: "Real-time vehicle detection based on three-dimensional LiDAR in complex urban environments", Journal of Zhejiang University (Engineering Science), no. 12 *

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111856499B (en) * 2020-07-30 2021-06-18 浙江华睿科技有限公司 Map construction method and device based on laser radar
CN111856499A (en) * 2020-07-30 2020-10-30 浙江大华技术股份有限公司 Map construction method and device based on laser radar
CN112051574A (en) * 2020-08-05 2020-12-08 华友天宇科技(武汉)股份有限公司 Automatic rotary tillage ship based on high-precision map
CN111929657A (en) * 2020-08-26 2020-11-13 北京布科思科技有限公司 Laser radar noise filtering method, device and equipment
CN111929657B (en) * 2020-08-26 2023-09-19 北京布科思科技有限公司 Noise filtering method, device and equipment for laser radar
CN112380894A (en) * 2020-09-30 2021-02-19 北京智汇云舟科技有限公司 Video overlapping area target duplicate removal method and system based on three-dimensional geographic information system
CN112380894B (en) * 2020-09-30 2024-01-19 北京智汇云舟科技有限公司 Video overlapping region target deduplication method and system based on three-dimensional geographic information system
CN112417965A (en) * 2020-10-21 2021-02-26 湖北亿咖通科技有限公司 Laser point cloud processing method, electronic device and storage medium
CN112417965B (en) * 2020-10-21 2021-09-14 湖北亿咖通科技有限公司 Laser point cloud processing method, electronic device and storage medium
CN112489123A (en) * 2020-10-30 2021-03-12 江阴市智行工控科技有限公司 Three-dimensional positioning method for surface target of truck in steel mill reservoir area
CN112489123B (en) * 2020-10-30 2021-09-10 江阴市智行工控科技有限公司 Three-dimensional positioning method for surface target of truck in steel mill reservoir area
CN112330680A (en) * 2020-11-04 2021-02-05 中山大学 Lookup table-based method for accelerating point cloud segmentation
CN112330680B (en) * 2020-11-04 2023-07-21 中山大学 Method for accelerating point cloud segmentation based on lookup table
CN112419719A (en) * 2020-11-18 2021-02-26 济南北方交通工程咨询监理有限公司 Method and system for evaluating traffic operation safety of highway
CN112419719B (en) * 2020-11-18 2022-06-07 济南北方交通工程咨询监理有限公司 Method and system for evaluating traffic operation safety of highway
CN113343840A (en) * 2021-06-02 2021-09-03 合肥泰瑞数创科技有限公司 Object identification method and device based on three-dimensional point cloud
CN113343840B (en) * 2021-06-02 2022-03-08 合肥泰瑞数创科技有限公司 Object identification method and device based on three-dimensional point cloud
CN113587943A (en) * 2021-07-28 2021-11-02 广州小鹏自动驾驶科技有限公司 Map processing method and device
CN113822914A (en) * 2021-09-13 2021-12-21 中国电建集团中南勘测设计研究院有限公司 Method for unifying oblique photography measurement model, computer device, product and medium
WO2023045044A1 (en) * 2021-09-27 2023-03-30 北京大学深圳研究生院 Point cloud coding method and apparatus, electronic device, medium, and program product
CN113838193A (en) * 2021-09-29 2021-12-24 北京市商汤科技开发有限公司 Data processing method and device, computer equipment and storage medium
CN113917452B (en) * 2021-09-30 2022-05-24 北京理工大学 Blind road detection device and method combining vision and radar
CN113917452A (en) * 2021-09-30 2022-01-11 北京理工大学 Blind road detection device and method combining vision and radar
CN114419250B (en) * 2021-12-27 2023-03-10 广州极飞科技股份有限公司 Point cloud data vectorization method and device and vector map generation method and device
CN114419250A (en) * 2021-12-27 2022-04-29 广州极飞科技股份有限公司 Point cloud data vectorization method and device and vector map generation method and device
CN115435773A (en) * 2022-09-05 2022-12-06 北京远见知行科技有限公司 High-precision map collecting device for indoor parking lot
CN115435773B (en) * 2022-09-05 2024-04-05 北京远见知行科技有限公司 High-precision map acquisition device for indoor parking lot
CN115937466A (en) * 2023-02-17 2023-04-07 烟台市地理信息中心 Three-dimensional model generation method, system and storage medium integrating GIS
CN117011413A (en) * 2023-09-28 2023-11-07 腾讯科技(深圳)有限公司 Road image reconstruction method, device, computer equipment and storage medium
CN117011413B (en) * 2023-09-28 2024-01-09 腾讯科技(深圳)有限公司 Road image reconstruction method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN111462275B (en) 2024-03-05

Similar Documents

Publication Publication Date Title
CN111462275B (en) Map production method and device based on laser point cloud
US10670416B2 (en) Traffic sign feature creation for high definition maps used for navigating autonomous vehicles
US20180158235A1 (en) Method and apparatus for generating a cleaned object model for an object in a mapping database
CN110796714B (en) Map construction method, device, terminal and computer readable storage medium
US10354433B2 (en) Method and apparatus for generating an abstract texture for a building facade or model
CN108765487A (en) Rebuild method, apparatus, equipment and the computer readable storage medium of three-dimensional scenic
US11590989B2 (en) Training data generation for dynamic objects using high definition map data
Gao et al. SUM: A benchmark dataset of semantic urban meshes
CN111695488A (en) Interest plane identification method, device, equipment and storage medium
CN113593017A (en) Method, device and equipment for constructing surface three-dimensional model of strip mine and storage medium
US10030982B2 (en) Generalising topographical map data
CN111652241B (en) Building contour extraction method integrating image features and densely matched point cloud features
CN113010793A (en) Method, device, equipment, storage medium and program product for map data processing
CN110765542A (en) Lightweight method of high-precision digital elevation model
CN115187647A (en) Vector-based road three-dimensional live-action structured modeling method
CN114758086A (en) Method and device for constructing urban road information model
CN114972758A (en) Instance segmentation method based on point cloud weak supervision
CN112837414A (en) Method for constructing three-dimensional high-precision map based on vehicle-mounted point cloud data
CN110377776B (en) Method and device for generating point cloud data
EP4202833A1 (en) Method, apparatus, and system for pole extraction from a single image
EP4202835A1 (en) Method, apparatus, and system for pole extraction from optical imagery
Namouchi et al. Piecewise horizontal 3d roof reconstruction from aerial lidar
CN116091716A (en) High-precision map automatic manufacturing system and method based on deep learning
CN115546422A (en) Building three-dimensional model construction method and system and electronic equipment
Nakagawa et al. Fusing stereo linear CCD image and laser range data for building 3D urban model

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
TA01 Transfer of patent application right

Effective date of registration: 20210226

Address after: Room a1905, 19 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Beijing Jingdong Qianshi Technology Co.,Ltd.

Address before: 101, 1st floor, building 2, yard 20, Suzhou street, Haidian District, Beijing 100080

Applicant before: Beijing Jingbangda Trading Co.,Ltd.

Effective date of registration: 20210226

Address after: 101, 1st floor, building 2, yard 20, Suzhou street, Haidian District, Beijing 100080

Applicant after: Beijing Jingbangda Trading Co.,Ltd.

Address before: 100086 8th Floor, 76 Zhichun Road, Haidian District, Beijing

Applicant before: BEIJING JINGDONG SHANGKE INFORMATION TECHNOLOGY Co.,Ltd.

Applicant before: BEIJING JINGDONG CENTURY TRADING Co.,Ltd.

TA01 Transfer of patent application right
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant