CN109726728B - Training data generation method and device

Publication number: CN109726728B
Authority: CN (China)
Prior art keywords: point, sample, cloud data, point cloud, points
Legal status: Active (granted)
Application number: CN201711044197.9A
Other languages: Chinese (zh)
Other versions: CN109726728A
Inventors: 胡胜伟, 王涛, 贾双成
Current Assignee: Alibaba China Co Ltd
Original Assignee: Alibaba China Co Ltd
Application filed by Alibaba China Co Ltd
Priority to CN201711044197.9A
Publication of CN109726728A
Application granted
Publication of CN109726728B


Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a training data generation method and apparatus. Original point cloud data of a road surface is acquired; points corresponding to a road traffic sign are determined from the original point cloud data as target points; with the target points corresponding to the road traffic sign as a reference, the original point cloud data of the road surface is divided into regions to obtain a set of sample areas, where the sample areas include positive sample areas, in each of which at least one target point exists, and negative sample areas, in which no target point exists; and a gray-scale map is generated for each sample area from the point cloud data falling into that area, the gray-scale maps of the positive sample areas being used as positive samples and the gray-scale maps of the negative sample areas being used as negative samples for training a road traffic sign recognition model. No manual production is required, which saves human resources, and a sufficient number of training samples can be generated by controlling the number of sample areas into which the data is divided.

Description

Training data generation method and device
Technical Field
The present application relates to the field of training data generation technologies, and in particular, to a training data generation method and apparatus.
Background
When various road traffic sign recognition models are trained by machine learning, a large amount of training sample data is needed to ensure the recognition accuracy of the road traffic sign recognition models obtained by training.
Existing training sample data is generally produced manually: images of the road surface are labeled by hand, with images containing a road surface traffic sign used as positive samples and images not containing a road surface traffic sign used as negative samples.
Obviously, this manual way of producing training data consumes a large amount of human resources, and the number of training samples it generates is often not large enough, so the recognition accuracy of the trained road traffic sign recognition model cannot be guaranteed.
Disclosure of Invention
In view of this, the present application provides a training data generation method and apparatus, so as to solve the problems that the existing manual way of producing training data consumes human resources and yields an insufficient number of training samples.
In order to achieve the above object, the following solutions are proposed:
a training data generation method, comprising:
acquiring original point cloud data of a road surface;
determining points corresponding to the road traffic signs from the original point cloud data as target points;
taking a target point corresponding to the road surface traffic sign as a reference, performing area division on the original point cloud data of the road surface to obtain a set of sample areas, wherein the sample areas comprise positive sample areas and negative sample areas, at least one target point exists in the positive sample areas, and the target point does not exist in the negative sample areas;
and generating a gray scale map corresponding to the sample area according to the point cloud data falling into the sample area in the original point cloud data, wherein the gray scale map of the positive sample area is used as a positive sample of the training road traffic sign recognition model, and the gray scale map of the negative sample area is used as a negative sample of the training road traffic sign recognition model.
Preferably, the road traffic sign is a lane line, and the area division is performed on the original point cloud data of the road surface by using a target point corresponding to the road traffic sign as a reference to obtain a set of sample areas, including:
starting from any target point corresponding to the lane line, determining a lane position point every first set unit length along the lane line;
determining a relative position point every second set unit length along the direction perpendicular to the lane line, with the lane position point as the foot of the perpendicular;
and generating corresponding rectangular regions of a set rectangle size, with the lane position points and the relative position points as the center points of the rectangles, to form a set of sample regions, wherein a sample region corresponding to a rectangular region containing a target point corresponding to the lane line is a positive sample region, and the sample regions corresponding to the remaining rectangular regions not containing a target point corresponding to the lane line are negative sample regions.
Preferably, the road traffic sign is a road surface marking, and the performing area division on the original point cloud data of the road surface by using a target point corresponding to the road traffic sign as a reference to obtain a set of sample areas includes:
determining a minimum circumscribed rectangle of the area covered by the target points corresponding to the road surface marking;
starting from any point on the central axis of the minimum circumscribed rectangle, determining a road surface marking position point every first set unit length along the central axis;
determining a relative position point every second set unit length along the direction perpendicular to the central axis, with the road surface marking position point as the foot of the perpendicular;
and generating corresponding rectangular regions of a set rectangle size, with the road surface marking position points and the relative position points as the center points of the rectangles, to form a set of sample regions, wherein a sample region corresponding to a rectangular region containing a target point corresponding to the road surface marking is a positive sample region, and the sample regions corresponding to the remaining rectangular regions not containing a target point corresponding to the road surface marking are negative sample regions.
Preferably, the generating a gray scale map of a corresponding sample region according to the point cloud data falling into the sample region in the original point cloud data includes:
determining, for each point in the point cloud data falling into the sample area, the corresponding pixel point in the gray-scale map of the sample area;
and if every pixel point in the gray-scale map of the sample area has a corresponding point in the point cloud data, determining the gray value of each pixel point in the gray-scale map of the sample area according to the reflectivity of the point corresponding to that pixel point.
Preferably, the generating a gray scale map of a corresponding sample region according to the point cloud data falling into the sample region in the original point cloud data includes:
determining, for each point in the point cloud data falling into the sample area, the corresponding pixel point in the gray-scale map of the sample area;
taking the pixel points in the gray-scale map of the sample area that have corresponding points in the point cloud data as first-class pixel points, and determining the reflectivity of each first-class pixel point according to the reflectivity of its corresponding point;
taking the pixel points in the gray-scale map of the sample area that have no corresponding point in the point cloud data as second-class pixel points, and determining the reflectivity of each second-class pixel point according to the reflectivities of the pixel points with known reflectivity within a set range around it;
and determining the gray value of each pixel point in the gray-scale map of the sample area according to the reflectivity of each pixel point in the gray-scale map of the sample area.
Preferably, the determining the reflectivity of each second-class pixel point according to the reflectivity of the pixel point with known reflectivity within the set range around each second-class pixel point includes:
determining the pixel points with known reflectivity within the set range centered on the second-class pixel point;
calculating the distance from each pixel point with known reflectivity to the second-class pixel point;
determining the weight of each pixel point with known reflectivity according to the inverse relation between distance and weight, wherein the sum of the weights of the pixel points with known reflectivity is 1;
and summing the products of the weight and the reflectivity of each pixel point with known reflectivity to obtain the reflectivity of the second-class pixel point.
A training data generating apparatus comprising:
the point cloud data acquisition unit is used for acquiring original point cloud data of a road surface;
the target point determining unit is used for determining points corresponding to the road traffic signs from the original point cloud data as target points;
a sample region set determining unit, configured to perform region division on the original point cloud data of the road surface by using a target point corresponding to the road surface traffic sign as a reference, so as to obtain a set of sample regions, where the sample regions include a positive sample region and a negative sample region, where at least one target point exists in the positive sample region, and the target point does not exist in the negative sample region;
and the gray-scale map generating unit is used for generating a gray-scale map corresponding to the sample area according to the point cloud data falling into the sample area in the original point cloud data, wherein the gray-scale map of the positive sample area is used as a positive sample of the training road traffic sign recognition model, and the gray-scale map of the negative sample area is used as a negative sample of the training road traffic sign recognition model.
Preferably, the road traffic sign is a lane line, and the process of the sample region set determining unit performing region division on the original point cloud data of the road surface with a target point corresponding to the road traffic sign as a reference to obtain the set of sample regions includes:
starting from any target point corresponding to the lane line, determining a lane position point every other first set unit length along the lane line;
determining a relative position point every second set unit length along the direction vertical to the lane line by taking the lane position point as a vertical foot;
and generating corresponding rectangular regions according to a set rectangular size by taking the lane position point and the relative position point as central points of rectangles to form a set of sample regions, wherein the sample region corresponding to the rectangular region containing the target point corresponding to the lane line is a positive sample region, and the sample regions corresponding to the other rectangular regions not containing the target point corresponding to the lane line are negative sample regions.
Preferably, the road traffic sign is a road surface marking, and the process of the sample region set determining unit performing region division on the original point cloud data of the road surface with a target point corresponding to the road traffic sign as a reference to obtain the set of sample regions includes:
determining a minimum circumscribed rectangle of an area covered by a target point corresponding to the pavement marker;
starting from any point on the central axis of the minimum external rectangle, determining a pavement marker position point every other first set unit length along the central axis;
determining a relative position point every second set unit length along the direction perpendicular to the central axis by taking the pavement marking position point as a foot;
and generating corresponding rectangular regions according to a set rectangular size by taking the road surface mark position points and the relative position points as central points of rectangles to form a set of sample regions, wherein the sample regions corresponding to the rectangular regions containing the target points corresponding to the road surface marks are positive sample regions, and the sample regions corresponding to the other rectangular regions not containing the target points corresponding to the road surface marks are negative sample regions.
Preferably, the process of generating the grayscale map of the corresponding sample region by the grayscale map generating unit according to the point cloud data falling into the sample region in the original point cloud data specifically includes:
determining pixel points in a gray scale image of each point in the point cloud data corresponding to the sample area according to the point cloud data falling into the sample area in the original point cloud data;
and if the corresponding points exist in the point cloud data of the pixel points in the gray-scale image of the sample area, determining the gray-scale value of each pixel point in the gray-scale image of the sample area according to the reflectivity of the corresponding point of each pixel point in the gray-scale image of the sample area.
Preferably, the process of generating the grayscale map of the corresponding sample region by the grayscale map generating unit according to the point cloud data falling into the sample region in the original point cloud data specifically includes:
determining pixel points in a gray scale image of each point in the point cloud data corresponding to the sample area according to the point cloud data falling into the sample area in the original point cloud data;
determining the reflectivity of each first-class pixel point according to the reflectivity of the point corresponding to each first-class pixel point by taking the pixel point of the corresponding point in the point cloud data in the gray scale image of the sample area as the first-class pixel point;
determining the reflectivity of each second type pixel point according to the reflectivity of the pixel points with known reflectivity in the set range around each second type pixel point by taking the pixel points without corresponding points in the point cloud data in the gray scale image of the sample area as the second type pixel points;
and determining the gray value of each pixel point in the gray map of the sample area according to the reflectivity of each pixel point in the gray map of the sample area.
Preferably, the process of determining the reflectivity of each second-class pixel point by the gray scale map generation unit according to the reflectivity of the pixel point with known reflectivity in the set range around each second-class pixel point includes:
determining the pixel points with known reflectivity in the range by taking the second type pixel points as the center of the set range;
calculating the distance from each pixel point with known reflectivity to the second-class pixel point;
determining the weight of each pixel point with known reflectivity according to the inverse relation between the distance and the weight, wherein the sum of the weights of the pixel points with known reflectivity is 1;
and adding the multiplication results of the weight and the reflectivity of each pixel point with known reflectivity to obtain the reflectivity of the second type of pixel points.
According to the above technical solution, the training data generation method acquires original point cloud data of a road surface; determines points corresponding to the road traffic sign from the original point cloud data as target points; performs area division on the original point cloud data of the road surface, with a target point corresponding to the road traffic sign as a reference, to obtain a set of sample areas, wherein the sample areas include positive sample areas, in each of which at least one target point exists, and negative sample areas, in which no target point exists; and generates a gray-scale map for each sample area from the point cloud data falling into that area, the gray-scale maps of the positive sample areas being used as positive samples and the gray-scale maps of the negative sample areas being used as negative samples for training a road traffic sign recognition model. Gray-scale maps of positive and negative sample areas can therefore be generated automatically from the original point cloud data of the road surface and used, respectively, as positive and negative samples to train the road traffic sign recognition model. No manual production is required, which saves human resources, and a sufficient number of training samples can be generated by controlling the number of sample areas into which the original point cloud data is divided.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only embodiments of the present application, and for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a flowchart of a training data generation method disclosed in an embodiment of the present application;
FIG. 2 is a flow chart of a method for partitioning a set of sample regions as disclosed in an embodiment of the present application;
FIG. 3 is a schematic illustration of a sample region set determination in accordance with an example of the present application;
FIG. 4 is a flow chart of another method for partitioning a set of sample regions disclosed in an embodiment of the present application;
FIG. 5 is a flowchart of a method for generating a gray scale map of a sample region according to an embodiment of the present disclosure;
FIG. 6 is a flowchart of another sample region gray scale map generation method disclosed in an embodiment of the present application;
fig. 7 is a flowchart of a method for determining the reflectivity of a second type of pixel points according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a training data generating apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The road traffic sign recognition model is a model for recognizing road traffic signs: given an input image, the model determines whether the image contains a road traffic sign. Road traffic signs include lane lines and road surface markings; road surface markings include, for example, turn arrows and speed limit markings.
In order to train and obtain the road traffic sign recognition model, a large amount of training data needs to be obtained in advance. Therefore, the present application provides a training data generation method, which is described with reference to fig. 1.
As shown in fig. 1, the method includes:
s100, acquiring original point cloud data of a road surface;
specifically, the original point cloud data of the road surface acquired in this step covers a road surface area of a set size.
The original point cloud data of the road surface consists of a number of three-dimensional coordinate points, each of which carries a reflectivity value.
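As an illustration only, the following is a minimal sketch of how such point cloud data might be held in memory; the column layout, file format and the use of NumPy are assumptions and are not part of the original disclosure.

import numpy as np

# A road-surface point cloud: N records, each assumed to be [x, y, z, reflectivity].
def load_point_cloud(path):
    # e.g. a plain-text export with one "x y z reflectivity" record per line (assumed format)
    return np.loadtxt(path)                       # shape (N, 4)

# cloud = load_point_cloud("road_surface.xyz")    # hypothetical file name
# xyz, reflectivity = cloud[:, :3], cloud[:, 3]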
Step S110, determining points corresponding to road traffic signs from the original point cloud data as target points;
specifically, a point corresponding to the road traffic sign may be determined from the original point cloud data by an algorithm or manually, and the determined point serves as a target point.
Step S120, taking a target point corresponding to the road traffic sign as a reference, performing area division on the original point cloud data of the road surface to obtain a set of sample areas;
wherein the sample regions comprise a positive sample region in which at least one target point is present and a negative sample region in which the target point is absent.
Specifically, after the original point cloud data and the target points forming the road traffic signs in the original point cloud data are acquired, the original point cloud data may be divided into regions by using the target points as a reference to obtain a set of a plurality of sample regions, where the sample regions include a plurality of positive sample regions and a plurality of negative sample regions. The positive sample area contains target points constituting the road traffic sign, and the negative sample area does not contain target points constituting the road traffic sign.
Step S130, generating a gray scale map corresponding to the sample area according to the point cloud data falling into the sample area in the original point cloud data.
The gray scale map of the positive sample region is used as a positive sample of the training road traffic sign recognition model, and the gray scale map of the negative sample region is used as a negative sample of the training road traffic sign recognition model.
Since the sample regions are divided from the original point cloud data, point cloud data corresponding to the spatial positions of the sample regions can be determined from the original point cloud data as point cloud data falling into the sample regions in the original point cloud data.
Specifically, the points in the point cloud data may be projected along the height direction to obtain two-dimensional point cloud data. The point cloud data falling into each sample area can then be determined from the two-dimensional coordinates of the projected points and the two-dimensional coordinates of the sample areas.
In order to speed up the determination of the point cloud data falling into the sample areas from the original point cloud data, an R-tree structure can be created from the set of determined sample areas, and the original point cloud data can then be traversed to determine the point cloud data corresponding to the spatial position of each sample area.
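A minimal sketch of this R-tree lookup, assuming the Python rtree package and axis-aligned bounding boxes for the sample areas (the rotated rectangles described below would additionally need an exact containment test); the function and parameter names are illustrative only.

from rtree import index

def assign_points_to_regions(points_xy, region_boxes):
    # region_boxes: list of (minx, miny, maxx, maxy) bounding boxes of the sample areas
    idx = index.Index()
    for i, box in enumerate(region_boxes):
        idx.insert(i, box)
    per_region = {i: [] for i in range(len(region_boxes))}
    for p, (x, y) in enumerate(points_xy):            # points projected along the height axis
        for i in idx.intersection((x, y, x, y)):      # candidate sample areas containing (x, y)
            per_region[i].append(p)
    return per_region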
After the point cloud data falling into each sample region is determined, a gray scale map of the corresponding sample region can be generated according to the point cloud data contained in the sample region.
The positive sample area contains a target point corresponding to the road traffic sign, and therefore the corresponding gray-scale map is used as a positive sample for training the road traffic sign recognition model. And the negative sample region does not contain the target point corresponding to the road traffic sign, so the corresponding gray-scale map is used as a negative sample for training the road traffic sign identification model.
When the gray-scale map of the sample region is generated, the reflectivity of the point in the point cloud data contained in the sample region is converted into gray-scale pixels, and then the gray-scale map is generated.
With this method, gray-scale maps of positive sample areas and negative sample areas can be generated automatically from the original point cloud data of the road surface and used, respectively, as positive samples and negative samples to train a road traffic sign recognition model. No manual production is required, which saves human resources, and a sufficient number of training samples can be generated by controlling the number of sample regions into which the original point cloud data is divided.
In another embodiment of the present application, the process in step S120 of dividing the original point cloud data of the road surface into regions, with the target points corresponding to the road traffic sign as a reference, to obtain a set of sample regions is described separately for road traffic signs of different forms.
First, the case where the road traffic sign is a lane line.
Then, in step S120, the process of dividing the set of sample regions can be as shown in fig. 2, and the process includes:
Step S200, starting from any target point corresponding to the lane line, determining a lane position point every first set unit length along the lane line;
specifically, the method and the device can determine a lane position point every first set unit length along the lane line from any one target point corresponding to the lane line. The lane position points can be sequentially determined from the target point to the left and to the right along the straight line where the lane line is located. The number of determined lane position points may be controlled as desired. It will be appreciated that the determined lane position point may be located on the lane line or on an extension of the lane line.
Step S210, determining a relative position point every second set unit length along the direction perpendicular to the lane line, with the lane position point as the foot of the perpendicular;
Specifically, for each lane position point obtained in the previous step, a relative position point is determined every second set unit length in one or both directions perpendicular to the lane line, with the lane position point as the foot of the perpendicular. The number of relative position points determined can be controlled as desired.
The second set unit length may be the same as or different from the first set unit length.
Step S220, with the lane position points and the relative position points as the center points of rectangles, generating corresponding rectangular regions of a set rectangle size to form the set of sample regions.
The sample region corresponding to the rectangular region including the target point corresponding to the lane line is a positive sample region, and the remaining sample regions corresponding to the rectangular regions not including the target point corresponding to the lane line are negative sample regions.
The above process of determining a set of sample regions is described in connection with fig. 3:
as shown in fig. 3, one lane position point is determined every first set unit length x1 in both the left and right directions along the lane line, starting with the target point O1 in the lane line.
A relative position point is then determined every second set unit length x2 in the two directions perpendicular to the lane line, with each determined lane position point as the foot of the perpendicular.
With the determined lane position points and relative position points as the center points of the rectangles, corresponding rectangular regions are generated as sample regions according to the set rectangle size (e.g., width k1 and height k2). The rectangle size may be set by the user. The rectangle width k1 may be chosen to be greater than the first set unit length x1, and the rectangle height k2 may be chosen to be greater than the second set unit length x2.
It is understood that, among the rectangular regions generated along the lane line, when a lane position point is not located on the lane line and its distance from the nearest end point of the lane line exceeds the rectangle width k1, the generated rectangular region will not contain a target point corresponding to the lane line.
Similarly, among the rectangular regions generated in the direction perpendicular to the lane line, when the distance of a relative position point from the closest edge point of the lane line exceeds the rectangle height k2, the generated rectangular region will not contain a target point corresponding to the lane line.
All other generated rectangular regions contain target points corresponding to the lane line.
It is understood that by controlling the sizes of the first set unit length x1 and the second set unit length x2, the number of generated sample regions can be controlled. The smaller the first set unit length x1 and the second set unit length x2, the larger the number of positive sample regions generated. Of course, as the number of generated lane line position points and relative position points increases, the number of generated negative sample regions also increases.
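The following is a minimal sketch of this sliding region generation for a lane line, in Python with NumPy; the representation of the lane line by a point and a direction vector, the parameter names, and the centre-frame containment test used to label a region positive are all assumptions made for illustration.

import numpy as np

def generate_sample_regions(origin, direction, x1, x2, n_along, n_across, k1, k2, targets):
    # origin: a target point on the lane line (2D, after projection along the height axis)
    # direction: approximate direction of the lane line; x1, x2: the two set unit lengths
    # k1, k2: rectangle width (along the lane line) and height (across it)
    # targets: (M, 2) array of target points belonging to the lane line
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    normal = np.array([-d[1], d[0]])                   # perpendicular to the lane line
    targets = np.asarray(targets, float)
    regions = []
    for i in range(-n_along, n_along + 1):             # lane position points, left and right
        lane_pt = np.asarray(origin, float) + i * x1 * d
        for j in range(-n_across, n_across + 1):       # relative position points
            center = lane_pt + j * x2 * normal
            rel = targets - center                     # target coordinates in the rectangle frame
            u, v = rel @ d, rel @ normal
            positive = bool(np.any((np.abs(u) <= k1 / 2) & (np.abs(v) <= k2 / 2)))
            regions.append((center, positive))         # positive => at least one lane-line target inside
    return regions

Smaller x1 and x2 (and larger n_along, n_across) yield more sample regions, which is how the number of generated training samples can be controlled.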
Second, the case where the road traffic sign is a road surface marking, such as a turn arrow.
Then, in step S120, the process of dividing the set of sample regions can be as shown in fig. 4, and the process includes:
s400, determining a minimum circumscribed rectangle of an area covered by a target point corresponding to the road surface mark;
step S410, starting from any point on the central axis of the minimum external rectangle, determining a road surface mark position point every other first set unit length along the central axis;
specifically, the method can determine a road sign position point every first set unit length along the central axis from any point on the central axis of the minimum circumscribed rectangle. The road surface mark position points can be sequentially determined from the starting point along the straight line where the central axis is located leftwards and rightwards. The number of determined pavement marking location points can be controlled as desired. It can be understood that the determined position point of the road surface mark can be positioned on the central axis or on the extension line of the central axis.
Step S420, determining a relative position point every second set unit length along the direction perpendicular to the central axis by taking the road surface mark position point as a foot;
specifically, for the road surface mark position point obtained in the previous step, a relative position point is determined every second set unit length along any one or two directions perpendicular to the central axis by taking the road surface mark position point as a foot. The number of relative position points determined can be controlled as desired.
The second set unit length may be the same as or different from the first set unit length.
Step S430, with the road surface marking position points and the relative position points as the center points of rectangles, generating corresponding rectangular regions of a set rectangle size to form the set of sample regions.
The sample region corresponding to a rectangular region containing a target point corresponding to the road surface marking is a positive sample region, and the sample regions corresponding to the remaining rectangular regions not containing a target point corresponding to the road surface marking are negative sample regions.
It is understood that the process of dividing the set of sample regions in this embodiment is similar to the process when the road traffic sign is a lane line: the minimum circumscribed rectangle of the area covered by the target points corresponding to the road surface marking can be regarded as the area where the lane line is located, and the subsequent steps are similar.
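A minimal sketch of obtaining the central axis of such a marking is given below. Instead of an exact minimum circumscribed rectangle, it uses the principal axis of the target points (via PCA), which for elongated markings coincides with the central axis of that rectangle; this substitution, like the function and variable names, is an assumption for illustration only.

import numpy as np

def marking_axis(target_points_xy):
    # target_points_xy: (M, 2) target points of one road surface marking (after projection)
    pts = np.asarray(target_points_xy, float)
    center = pts.mean(axis=0)
    cov = np.cov((pts - center).T)                     # 2x2 covariance of the point spread
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis_dir = eigvecs[:, np.argmax(eigvals)]          # dominant direction ~ central axis
    return center, axis_dir / np.linalg.norm(axis_dir)

The returned centre and axis direction can then be fed to the same region-generation sketch shown above for lane lines.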
Further, a process of generating a gray scale map corresponding to the sample region from the point cloud data falling into the sample region in the original point cloud data in step S130 is described. The specific implementation process can refer to fig. 5.
As shown in fig. 5, the process includes:
Step S500, for the point cloud data falling into a sample area in the original point cloud data, determining the pixel point in the gray-scale map of the sample area that corresponds to each point;
Specifically, for the point cloud data in the original point cloud data that falls into the sample region, the difference between the maximum abscissa maxX and the minimum abscissa minX corresponds to the horizontal pixel count w of the gray-scale map of the sample region, and the difference between the maximum ordinate maxY and the minimum ordinate minY corresponds to the vertical pixel count h of the gray-scale map of the sample region. The length and width covered by each pixel point can therefore be determined as:
pixel length GridL = (maxX - minX)/w
pixel width GridW = (maxY - minY)/h
Based on this, the positions of the pixel points in the gray scale map of each point corresponding to the sample region in the point cloud data falling into the sample region can be determined.
Taking a point (x, y) in the point cloud data as an example, the abscissa of the corresponding pixel point is:
Piex_X = (x - minX)/GridL
and the ordinate of the corresponding pixel point is:
Piex_Y = (maxY - y)/GridW
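A minimal sketch of this mapping from points to pixel indices; the truncation to integers and the clipping at the image border are assumptions added to obtain valid indices, and Piex_X/Piex_Y follow the notation above.

import numpy as np

def points_to_pixels(points_xy, w, h):
    # points_xy: (N, 2) array of (x, y) for the points falling into one sample region
    minX, minY = points_xy.min(axis=0)
    maxX, maxY = points_xy.max(axis=0)
    GridL = (maxX - minX) / w                      # length covered by one pixel
    GridW = (maxY - minY) / h                      # width covered by one pixel
    px = (points_xy[:, 0] - minX) / GridL          # Piex_X
    py = (maxY - points_xy[:, 1]) / GridW          # Piex_Y
    px = np.clip(px.astype(int), 0, w - 1)         # assumed: truncate and clip to valid indices
    py = np.clip(py.astype(int), 0, h - 1)
    return px, py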
Step S510, if every pixel point in the gray-scale map of the sample region has a corresponding point in the point cloud data, determining the gray value of each pixel point in the gray-scale map of the sample region according to the reflectivity of the point corresponding to that pixel point.
Specifically, in step S500, the points in the point cloud data are mapped to the pixel points of the gray-scale map of the sample region. If the point cloud data is dense enough, each pixel point in the gray-scale map has a corresponding point in the point cloud data. That is, each pixel in the pixel matrix of the gray scale map corresponds to a point in the point cloud data. In this case, the gray scale value of each pixel point can be determined according to the reflectivity of the point corresponding to each pixel point in the gray scale image.
In an optional implementation, the maximum reflectivity maxR and the minimum reflectivity minR among the points corresponding to the pixel points may be obtained, with the maximum reflectivity corresponding to the maximum gray value 255 and the minimum reflectivity corresponding to the minimum gray value 0. Then, when determining the gray value of a pixel point whose corresponding point has reflectivity R, the gray value is C = (R - minR) × 255/(maxR - minR).
Alternatively, the reflectivity of a pixel point can be defined as the reflectivity of its corresponding point. In that case, when determining the gray value of a pixel point whose reflectivity is R, the gray value is likewise C = (R - minR) × 255/(maxR - minR).
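A minimal sketch of this reflectivity-to-gray conversion; the handling of the degenerate case maxR == minR and the uint8 output type are assumptions.

import numpy as np

def reflectivity_to_gray(reflectivity):
    # reflectivity: array with one reflectivity value per pixel of the sample-region gray-scale map
    minR, maxR = reflectivity.min(), reflectivity.max()
    if maxR == minR:                                        # flat reflectivity: map everything to 0 (assumed)
        return np.zeros_like(reflectivity, dtype=np.uint8)
    gray = (reflectivity - minR) * 255.0 / (maxR - minR)    # C = (R - minR) * 255 / (maxR - minR)
    return gray.astype(np.uint8)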
In another embodiment of the present application, an alternative implementation of step S130, in which a gray-scale map of the sample region is generated from the point cloud data falling into the sample region in the original point cloud data, is introduced with reference to fig. 6.
As shown in fig. 6, the process includes:
Step S600, for the point cloud data falling into a sample region in the original point cloud data, determining the pixel point in the gray-scale map of the sample region that corresponds to each point;
specifically, step S600 corresponds to step S500, which can be specifically described with reference to the above embodiments and will not be described herein again.
Step S610, taking the pixel points with corresponding points in the point cloud data in the gray-scale image of the sample area as first-class pixel points, and determining the reflectivity of each first-class pixel point according to the reflectivity of the point corresponding to each first-class pixel point;
step S620, taking the pixel points without corresponding points in the point cloud data in the gray-scale image of the sample area as second-class pixel points, and determining the reflectivity of each second-class pixel point according to the reflectivity of the pixel points with known reflectivity in the set range around each second-class pixel point;
specifically, if the point cloud data is sparse, some pixels may exist in the gray-scale map without corresponding points in the point cloud data, the pixels without corresponding points in the part are defined as second-class pixels, and the rest pixels with corresponding points exist are defined as first-class pixels. It should be noted that the first type pixel points and the second type pixel points are only for convenience of description, and both belong to the pixel points in the gray scale map of the sample area.
For the first type of pixel point, the reflectivity of the first type of pixel point can be determined as the reflectivity of the corresponding point. For the second type of pixel point, the reflectivity of the second type of pixel point can be determined according to the reflectivity of the pixel point with known reflectivity in the surrounding set range.
Step S630, determining the gray value of each pixel point in the gray map of the sample area according to the reflectivity of each pixel point in the gray map of the sample area.
Specifically, through the above steps, the reflectivity is determined for each pixel point (including the first type pixel point and the second type pixel point) in the gray-scale image of the sample area, and in this step, the gray-scale value of each pixel point in the gray-scale image of the sample area can be determined according to the reflectivity of each pixel point.
Similar to the above, the maximum reflectivity maxR and the minimum reflectivity minR among the reflectivities of the pixel points in the gray-scale map can be obtained, with the maximum reflectivity corresponding to the maximum gray value 255 and the minimum reflectivity corresponding to the minimum gray value 0. Then, when determining the gray value of a pixel point whose reflectivity is R, the gray value is C = (R - minR) × 255/(maxR - minR).
In this embodiment, when a pixel point in the gray-scale map of the sample area has no corresponding point in the point cloud data (a second-class pixel point), its reflectivity is determined from the reflectivities of the surrounding pixel points whose reflectivity is known. In this way a gray value can be determined for every pixel point in the gray-scale map, avoiding blank pixel points whose gray value cannot be determined, which ensures that the generated gray-scale map has sufficient features and improves the model training effect.
Optionally, in step S620, an implementation process of determining the reflectivity of each second-class pixel according to the reflectivity of the pixel whose reflectivity is known within the set range around each second-class pixel may be as shown in fig. 7, and includes:
step S700, determining pixel points with known reflectivity in a set range by taking the second-class pixel points as the center of the range;
specifically, the setting range may be set by the user. And searching the pixel points with the determined reflectivity in the set range.
Step S710, calculating the distance from each pixel point with known reflectivity to the second type of pixel points;
s720, determining the weight of each pixel point with known reflectivity according to the inverse relation between the distance and the weight, wherein the sum of the weights of the pixel points with known reflectivity is 1;
suppose that n pixels with known reflectivity are found in the set range, wherein the distance from the ith pixel to the second type pixel is represented as di, and the weight is represented as pi. Then the following formula exists:
p1+p2+…+pi+…+pn=1(1)
since the weight is inversely proportional to the distance, the following formula exists:
selecting one pixel point, defining the weight as pl, and defining the distance between the selected pixel point and the second-class pixel point as dl, then the weights of other pixel points are as follows:
pi=pl*dl/di(2)
the weighted value of each pixel point can be determined by the formulas (1) and (2).
Step S730, summing the products of the weight and the reflectivity of each pixel point with known reflectivity to obtain the reflectivity of the second-class pixel point.
The reflectivity R of the second-class pixel point is R = r1 × p1 + r2 × p2 + … + ri × pi + … + rn × pn,
where ri represents the reflectivity of the i-th pixel point with known reflectivity.
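A minimal sketch of this inverse-distance weighting for a single second-class pixel point; the window radius, the boundary handling, and the behaviour when no known neighbour is found are assumptions made for illustration.

import numpy as np

def fill_pixel_reflectivity(known_mask, reflectivity, row, col, radius=3):
    # known_mask[r, c] is True where a pixel already has a reflectivity (first-class pixel points)
    # (row, col) is a second-class pixel point; radius defines the set range (assumed value)
    h, w = reflectivity.shape
    r0, r1 = max(0, row - radius), min(h, row + radius + 1)
    c0, c1 = max(0, col - radius), min(w, col + radius + 1)
    rs, cs = np.nonzero(known_mask[r0:r1, c0:c1])
    if rs.size == 0:
        return None                                    # no known neighbour in the set range
    dist = np.hypot(rs + r0 - row, cs + c0 - col)      # distances di
    weights = 1.0 / dist                               # weight inversely proportional to distance
    weights /= weights.sum()                           # normalise so that the weights sum to 1
    return float(np.sum(weights * reflectivity[rs + r0, cs + c0]))   # R = sum(ri * pi)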
The following describes the training data generating device provided in the embodiment of the present application, and the training data generating device described below and the training data generating method described above may be referred to in correspondence with each other.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a training data generating apparatus disclosed in the embodiment of the present application. As shown in fig. 8, the apparatus includes:
a point cloud data acquisition unit 11, configured to acquire original point cloud data of a road surface;
a target point determining unit 12, configured to determine, from the original point cloud data, a point corresponding to the road traffic sign as a target point;
a sample region set determining unit 13, configured to perform region division on the original point cloud data of the road surface by using a target point corresponding to the road surface traffic sign as a reference, so as to obtain a set of sample regions, where the sample regions include a positive sample region and a negative sample region, where at least one target point exists in the positive sample region, and the target point does not exist in the negative sample region;
and the gray-scale map generation unit 14 is used for generating a gray-scale map of the corresponding sample area according to the point cloud data falling into the sample area in the original point cloud data, wherein the gray-scale map of the positive sample area is used as a positive sample of the training road traffic sign identification model, and the gray-scale map of the negative sample area is used as a negative sample of the training road traffic sign identification model.
The apparatus can automatically generate gray-scale maps of positive sample areas and negative sample areas from the original point cloud data of the road surface and use them, respectively, as positive samples and negative samples to train a road traffic sign recognition model. No manual production is required, which saves human resources, and a sufficient number of training samples can be generated by controlling the number of sample regions into which the original point cloud data is divided.
Optionally, when the road traffic sign is a lane line, the sample region set determining unit performs region division on the original point cloud data of the road surface by using a target point corresponding to the road traffic sign as a reference to obtain a set of sample regions, specifically including:
starting from any target point corresponding to the lane line, determining a lane position point every other first set unit length along the lane line;
determining a relative position point every second set unit length along the direction vertical to the lane line by taking the lane position point as a vertical foot;
and generating corresponding rectangular regions according to a set rectangular size by taking the lane position point and the relative position point as central points of rectangles to form a set of sample regions, wherein the sample region corresponding to the rectangular region containing the target point corresponding to the lane line is a positive sample region, and the sample regions corresponding to the other rectangular regions not containing the target point corresponding to the lane line are negative sample regions.
Optionally, when the road traffic sign is a road sign, the sample region set determining unit performs region division on the original point cloud data of the road surface by using a target point corresponding to the road traffic sign as a reference to obtain a set of sample regions, specifically including:
determining a minimum circumscribed rectangle of an area covered by a target point corresponding to the pavement marker;
starting from any point on the central axis of the minimum external rectangle, determining a pavement marker position point every other first set unit length along the central axis;
determining a relative position point every second set unit length along the direction perpendicular to the central axis by taking the pavement marking position point as a foot;
and generating corresponding rectangular regions according to a set rectangular size by taking the road surface mark position points and the relative position points as central points of rectangles to form a set of sample regions, wherein the sample regions corresponding to the rectangular regions containing the target points corresponding to the road surface marks are positive sample regions, and the sample regions corresponding to the other rectangular regions not containing the target points corresponding to the road surface marks are negative sample regions.
Optionally, the process of generating the grayscale map of the corresponding sample region by the grayscale map generating unit according to the point cloud data falling into the sample region in the original point cloud data may specifically include:
determining pixel points in a gray scale image of each point in the point cloud data corresponding to the sample area according to the point cloud data falling into the sample area in the original point cloud data;
and if the corresponding points exist in the point cloud data of the pixel points in the gray-scale image of the sample area, determining the gray-scale value of each pixel point in the gray-scale image of the sample area according to the reflectivity of the corresponding point of each pixel point in the gray-scale image of the sample area.
Further optionally, the process of generating the grayscale map corresponding to the sample region by the grayscale map generating unit according to the point cloud data falling into the sample region in the original point cloud data may further include:
determining pixel points in a gray scale image of each point in the point cloud data corresponding to the sample area according to the point cloud data falling into the sample area in the original point cloud data;
determining the reflectivity of the first-class pixel points according to the reflectivity of the points corresponding to the first-class pixel points by taking the pixel points of the corresponding points in the point cloud data in the gray-scale image of the sample area as the first-class pixel points;
determining the reflectivity of each second type pixel point according to the reflectivity of the pixel points with known reflectivity in the set range around each second type pixel point by taking the pixel points without corresponding points in the point cloud data in the gray scale image of the sample area as the second type pixel points;
and determining the gray value of each pixel point in the gray map of the sample area according to the reflectivity of each pixel point in the gray map of the sample area.
Optionally, the process of determining the reflectivity of each second-class pixel point by the grayscale map generating unit according to the reflectivity of the pixel point with known reflectivity within the set range around each second-class pixel point includes:
determining the pixel points with known reflectivity in the range by taking the second type pixel points as the center of the set range;
calculating the distance from each pixel point with known reflectivity to the second-class pixel point;
determining the weight of each pixel point with known reflectivity according to the inverse relation between the distance and the weight, wherein the sum of the weights of the pixel points with known reflectivity is 1;
and adding the multiplication results of the weight and the reflectivity of each pixel point with known reflectivity to obtain the reflectivity of the second type of pixel points.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (12)

1. A method of generating training data, comprising:
acquiring original point cloud data of a road surface;
determining points corresponding to the road traffic signs from the original point cloud data as target points;
taking a target point corresponding to the road surface traffic sign as a reference, performing area division on the original point cloud data of the road surface to obtain a set of sample areas, wherein the sample areas comprise positive sample areas and negative sample areas, at least one target point exists in the positive sample areas, and the target point does not exist in the negative sample areas;
and generating a gray scale map corresponding to the sample area according to the point cloud data falling into the sample area in the original point cloud data, wherein the gray scale map of the positive sample area is used as a positive sample of the training road traffic sign recognition model, and the gray scale map of the negative sample area is used as a negative sample of the training road traffic sign recognition model.
2. The method of claim 1, wherein the road traffic sign is a lane line, and the step of performing area division on the original point cloud data of the road surface by taking a target point corresponding to the road traffic sign as a reference to obtain a set of sample areas comprises:
starting from any target point corresponding to the lane line, determining a lane position point every other first set unit length along the lane line;
determining a relative position point every second set unit length along the direction vertical to the lane line by taking the lane position point as a vertical foot;
and generating corresponding rectangular regions according to a set rectangular size by taking the lane position point and the relative position point as central points of rectangles to form a set of sample regions, wherein the sample region corresponding to the rectangular region containing the target point corresponding to the lane line is a positive sample region, and the sample regions corresponding to the other rectangular regions not containing the target point corresponding to the lane line are negative sample regions.
3. The method of claim 1, wherein the road traffic sign is a road sign, and the step of performing area division on the original point cloud data of the road surface by taking a target point corresponding to the road traffic sign as a reference to obtain a set of sample areas comprises:
determining a minimum circumscribed rectangle of an area covered by a target point corresponding to the pavement marker;
starting from any point on the central axis of the minimum external rectangle, determining a pavement marker position point every other first set unit length along the central axis;
determining a relative position point every second set unit length along the direction perpendicular to the central axis by taking the pavement marking position point as a foot;
and generating corresponding rectangular regions according to a set rectangular size by taking the road surface mark position points and the relative position points as central points of rectangles to form a set of sample regions, wherein the sample regions corresponding to the rectangular regions containing the target points corresponding to the road surface marks are positive sample regions, and the sample regions corresponding to the other rectangular regions not containing the target points corresponding to the road surface marks are negative sample regions.
4. The method of claim 1, wherein generating a gray scale map of a corresponding sample region from point cloud data falling into the sample region in the raw point cloud data comprises:
determining pixel points in a gray scale image of each point in the point cloud data corresponding to the sample area according to the point cloud data falling into the sample area in the original point cloud data;
and if the corresponding points exist in the point cloud data of the pixel points in the gray-scale image of the sample area, determining the gray-scale value of each pixel point in the gray-scale image of the sample area according to the reflectivity of the corresponding point of each pixel point in the gray-scale image of the sample area.
5. The method of claim 1, wherein generating the gray-scale map corresponding to a sample region from the point cloud data falling into the sample region comprises:
determining, for each point of the point cloud data falling into the sample region, the corresponding pixel in the gray-scale map of the sample region;
taking the pixels of the gray-scale map that have corresponding points in the point cloud data as first-type pixels, and determining the reflectivity of each first-type pixel according to the reflectivity of its corresponding point;
taking the pixels of the gray-scale map that have no corresponding point in the point cloud data as second-type pixels, and determining the reflectivity of each second-type pixel according to the reflectivities of the pixels with known reflectivity within a set range around it;
and determining the gray value of each pixel of the gray-scale map according to its reflectivity.
6. The method of claim 5, wherein determining the reflectivity of each second-type pixel according to the reflectivities of the pixels with known reflectivity within the set range around it comprises:
determining the pixels with known reflectivity within the set range centered on the second-type pixel;
calculating the distance from each pixel with known reflectivity to the second-type pixel;
determining the weight of each pixel with known reflectivity according to an inverse relation between distance and weight, wherein the weights of the pixels with known reflectivity sum to 1;
and summing the products of the weight and the reflectivity of each pixel with known reflectivity to obtain the reflectivity of the second-type pixel.
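The interpolation of claims 5 and 6 could look roughly like the following, assuming a reflectivity map and a boolean mask of first-type pixels are already available. The window radius and the specific 1/d weighting are assumptions; the claim only requires that weights fall with distance and sum to 1.

```python
import numpy as np

def fill_unknown_reflectivity(refl, known_mask, window=2):
    """Fill reflectivity for second-type pixels by inverse-distance weighting.

    refl       : (H, W) reflectivity map, valid only where known_mask is True.
    known_mask : (H, W) bool array, True for first-type pixels (those that had a
                 corresponding point in the point cloud data).
    """
    out = refl.astype(float).copy()
    H, W = refl.shape
    for r, c in np.argwhere(~known_mask):
        r0, r1 = max(0, r - window), min(H, r + window + 1)
        c0, c1 = max(0, c - window), min(W, c + window + 1)
        sub_mask = known_mask[r0:r1, c0:c1]
        if not sub_mask.any():
            continue                          # no known neighbour within range
        rr, cc = np.nonzero(sub_mask)
        dists = np.hypot(rr + r0 - r, cc + c0 - c)
        weights = 1.0 / (dists + 1e-9)        # weight inversely related to distance
        weights /= weights.sum()              # weights sum to 1
        out[r, c] = np.sum(weights * out[r0:r1, c0:c1][sub_mask])
    return out
```

A fuller implementation might enlarge the window when no known neighbour is found within range, but that fallback is not something the claim specifies.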
7. A training data generation apparatus, comprising:
a point cloud data acquisition unit, configured to acquire original point cloud data of a road surface;
a target point determination unit, configured to determine points corresponding to a road traffic sign in the original point cloud data as target points;
a sample region set determination unit, configured to divide the original point cloud data of the road surface into regions, with the target points corresponding to the road traffic sign as a reference, to obtain a set of sample regions, wherein the sample regions include positive sample regions and negative sample regions, at least one target point exists in a positive sample region, and no target point exists in a negative sample region;
and a gray-scale map generation unit, configured to generate a gray-scale map corresponding to each sample region according to the point cloud data falling into the sample region, wherein the gray-scale map of a positive sample region is used as a positive sample for training the road traffic sign recognition model, and the gray-scale map of a negative sample region is used as a negative sample for training the road traffic sign recognition model.
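Claim 7's decomposition into units might be mirrored as a thin pipeline object; the class and method names below, and the way gray-scale maps are gathered into positive and negative sample lists, are assumptions rather than anything specified by the patent. Each callable would wrap routines like the sketches shown after claims 2 to 6.

```python
class TrainingDataGenerator:
    """Structural sketch of the apparatus in claim 7 (names are illustrative)."""

    def __init__(self, acquire_point_cloud, find_target_points,
                 divide_sample_regions, region_to_grayscale):
        # The four callables play the roles of the acquisition, target point
        # determination, sample region set determination and gray-scale map
        # generation units, respectively.
        self.acquire_point_cloud = acquire_point_cloud
        self.find_target_points = find_target_points
        self.divide_sample_regions = divide_sample_regions
        self.region_to_grayscale = region_to_grayscale

    def generate(self, source):
        cloud = self.acquire_point_cloud(source)
        targets = self.find_target_points(cloud)
        positives, negatives = [], []
        # divide_sample_regions is assumed to yield (region, is_positive) pairs.
        for region, is_positive in self.divide_sample_regions(cloud, targets):
            gray = self.region_to_grayscale(cloud, region)
            (positives if is_positive else negatives).append(gray)
        return positives, negatives   # positive / negative training samples
```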
8. The apparatus according to claim 7, wherein the road traffic sign is a lane line, and the process by which the sample region set determination unit divides the original point cloud data of the road surface into regions, with the target points corresponding to the road traffic sign as a reference, to obtain the set of sample regions specifically includes:
starting from any target point corresponding to the lane line, determining a lane position point at intervals of a first set unit length along the lane line;
determining relative position points at intervals of a second set unit length along a direction perpendicular to the lane line, with the lane position point as the foot of the perpendicular;
and generating rectangular regions of a set size centered on the lane position points and the relative position points to form the set of sample regions, wherein the sample region corresponding to a rectangular region that contains a target point corresponding to the lane line is a positive sample region, and the sample regions corresponding to rectangular regions that contain no target point corresponding to the lane line are negative sample regions.
9. The apparatus according to claim 7, wherein the road traffic sign is a road surface marking, and the process by which the sample region set determination unit divides the original point cloud data of the road surface into regions, with the target points corresponding to the road traffic sign as a reference, to obtain the set of sample regions specifically includes:
determining the minimum circumscribed rectangle of the area covered by the target points corresponding to the road surface marking;
starting from any point on the central axis of the minimum circumscribed rectangle, determining a marking position point at intervals of a first set unit length along the central axis;
determining relative position points at intervals of a second set unit length along a direction perpendicular to the central axis, with the marking position point as the foot of the perpendicular;
and generating rectangular regions of a set size centered on the marking position points and the relative position points to form the set of sample regions, wherein the sample region corresponding to a rectangular region that contains a target point corresponding to the road surface marking is a positive sample region, and the sample regions corresponding to rectangular regions that contain no target point corresponding to the road surface marking are negative sample regions.
10. The apparatus according to claim 7, wherein the process by which the gray-scale map generation unit generates the gray-scale map corresponding to a sample region from the point cloud data falling into the sample region specifically comprises:
determining, for each point of the point cloud data falling into the sample region, the corresponding pixel in the gray-scale map of the sample region;
and for each pixel of the gray-scale map that has a corresponding point in the point cloud data, determining the gray value of the pixel according to the reflectivity of its corresponding point.
11. The apparatus according to claim 7, wherein the process by which the gray-scale map generation unit generates the gray-scale map corresponding to a sample region from the point cloud data falling into the sample region specifically comprises:
determining, for each point of the point cloud data falling into the sample region, the corresponding pixel in the gray-scale map of the sample region;
taking the pixels of the gray-scale map that have corresponding points in the point cloud data as first-type pixels, and determining the reflectivity of each first-type pixel according to the reflectivity of its corresponding point;
taking the pixels of the gray-scale map that have no corresponding point in the point cloud data as second-type pixels, and determining the reflectivity of each second-type pixel according to the reflectivities of the pixels with known reflectivity within a set range around it;
and determining the gray value of each pixel of the gray-scale map according to its reflectivity.
12. The apparatus according to claim 11, wherein the process by which the gray-scale map generation unit determines the reflectivity of each second-type pixel according to the reflectivities of the pixels with known reflectivity within the set range around it comprises:
determining the pixels with known reflectivity within the set range centered on the second-type pixel;
calculating the distance from each pixel with known reflectivity to the second-type pixel;
determining the weight of each pixel with known reflectivity according to an inverse relation between distance and weight, wherein the weights of the pixels with known reflectivity sum to 1;
and summing the products of the weight and the reflectivity of each pixel with known reflectivity to obtain the reflectivity of the second-type pixel.
CN201711044197.9A 2017-10-31 2017-10-31 Training data generation method and device Active CN109726728B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711044197.9A CN109726728B (en) 2017-10-31 2017-10-31 Training data generation method and device

Publications (2)

Publication Number Publication Date
CN109726728A (en) 2019-05-07
CN109726728B (en) 2020-12-15

Family

ID=66293951

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711044197.9A Active CN109726728B (en) 2017-10-31 2017-10-31 Training data generation method and device

Country Status (1)

Country Link
CN (1) CN109726728B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112419360B (en) * 2020-11-16 2023-02-21 北京理工大学 Background removing and target image segmenting method based on stereo imaging
CN112666553B (en) * 2020-12-16 2023-04-18 动联(山东)电子科技有限公司 Road ponding identification method and equipment based on millimeter wave radar
CN113298910A (en) * 2021-05-14 2021-08-24 阿波罗智能技术(北京)有限公司 Method, apparatus and storage medium for generating traffic sign line map

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
EP3123397A4 (en) * 2014-03-27 2017-11-08 The Georgia Tech Research Corporation Systems and methods for identifying traffic control devices and testing the retroreflectivity of the same
CN104700099B (en) * 2015-03-31 2017-08-11 百度在线网络技术(北京)有限公司 The method and apparatus for recognizing traffic sign
CN104850834A (en) * 2015-05-11 2015-08-19 中国科学院合肥物质科学研究院 Road boundary detection method based on three-dimensional laser radar
CN106570446B (en) * 2015-10-12 2019-02-01 腾讯科技(深圳)有限公司 The method and apparatus of lane line drawing
CN106845321B (en) * 2015-12-03 2020-03-27 高德软件有限公司 Method and device for processing pavement marking information
CN105701449B (en) * 2015-12-31 2019-04-23 百度在线网络技术(北京)有限公司 The detection method and device of lane line on road surface
CN105488498B (en) * 2016-01-15 2019-07-30 武汉中海庭数据技术有限公司 A kind of lane sideline extraction method and system based on laser point cloud
CN106228125B (en) * 2016-07-15 2019-05-14 浙江工商大学 Method for detecting lane lines based on integrated study cascade classifier
CN106295607A (en) * 2016-08-19 2017-01-04 北京奇虎科技有限公司 Roads recognition method and device
CN107122776A (en) * 2017-04-14 2017-09-01 重庆邮电大学 A kind of road traffic sign detection and recognition methods based on convolutional neural networks

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
US9052721B1 (en) * 2012-08-28 2015-06-09 Google Inc. Method for correcting alignment of vehicle mounted laser scans with an elevation map for obstacle detection
US8886387B1 (en) * 2014-01-07 2014-11-11 Google Inc. Estimating multi-vehicle motion characteristics by finding stable reference points
CN104766058A (en) * 2015-03-31 2015-07-08 百度在线网络技术(北京)有限公司 Method and device for obtaining lane line
CN106525000A (en) * 2016-10-31 2017-03-22 武汉大学 A road marking line automatic extracting method based on laser scanning discrete point strength gradients
CN106705962A (en) * 2016-12-27 2017-05-24 首都师范大学 Method and system for acquiring navigation data

Non-Patent Citations (3)

Title
Semiautomated Extraction of Street Light Poles From Mobile LiDAR Point-Clouds; Yongtao Yu et al.; IEEE Transactions on Geoscience and Remote Sensing; 2015-03-31; Vol. 53, No. 3; pp. 1374-1386 *
Traffic Sign Detection in MLS Acquired Point Clouds for Geometric and Image-based Semantic Inventory; Mario Soilán et al.; ISPRS Journal of Photogrammetry and Remote Sensing; 2016-12-31; Vol. 114; pp. 92-101 *
Automatic Recognition and Extraction of Road Markings from Mobile Vehicle-Mounted Laser Point Clouds; Zou Xiaoliang et al.; Geomatics & Spatial Information Technology (《测绘与空间地理信息》); 2012-09-30; Vol. 35, No. 9; pp. 5-8 *

Also Published As

Publication number Publication date
CN109726728A (en) 2019-05-07

Similar Documents

Publication Publication Date Title
CN109726728B (en) Training data generation method and device
US10223816B2 (en) Method and apparatus for generating map geometry based on a received image and probe data
CN102208013B (en) Landscape coupling reference data generation system and position measuring system
CN108416808B (en) Vehicle repositioning method and device
CN107784042B (en) Object matching method and device
KR20210061722A (en) Method, apparatus, computer program and computer readable recording medium for producing high definition map
CN112154445A (en) Method and device for determining lane line in high-precision map
CN112240772B (en) Lane line generation method and device
CN111381585B (en) Method and device for constructing occupied grid map and related equipment
CN113011364B (en) Neural network training, target object detection and driving control method and device
CN113658292A (en) Method, device and equipment for generating meteorological data color spot pattern and storage medium
CN112528477A (en) Road scene simulation method, equipment, storage medium and device
CN114199223A (en) Method and apparatus for providing data for creating digital map and program product
CN102831419A (en) Method for detecting and blurring plate number in street view image rapidly
Komadina et al. Automated 3D urban landscapes visualization using open data sources on the example of the city of Zagreb
KR102384429B1 (en) Method for discriminating the road complex position and generating the reinvestigation path in road map generation
CN112530270B (en) Mapping method and device based on region allocation
CN110542416A (en) Automatic underground garage positioning system and method
CN114425774A (en) Method and apparatus for recognizing walking path of robot, and storage medium
CN111523466B (en) Method and device for classifying urban open space based on big data
CN113902047A (en) Image element matching method, device, equipment and storage medium
CN114549632A (en) Vehicle positioning method and device
CN109978944B (en) Coordinate system establishing method and device and data structure product
KR20210041304A (en) Apparatus and method for detecting road edge
CN116625385B (en) Road network matching method, high-precision map construction method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200430

Address after: 310052 room 508, floor 5, building 4, No. 699, Wangshang Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Alibaba (China) Co.,Ltd.

Address before: 102200, No. 8, No., Changsheng Road, Changping District science and Technology Park, Beijing, China. 1-5

Applicant before: AUTONAVI SOFTWARE Co.,Ltd.

GR01 Patent grant