CN109726728A - Training data generation method and device - Google Patents
Training data generation method and device
- Publication number: CN109726728A (application CN201711044197.9A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
- Classifications: Image Processing; Image Analysis
Abstract
This application discloses a training data generation method and device. Original point cloud data of a road surface is obtained; from the original point cloud data, the points corresponding to a road traffic marking are determined as target points; using the target points as a reference, the original point cloud data of the road surface is divided into regions to obtain a set of sample regions, where the sample regions include positive sample regions, each containing at least one target point, and negative sample regions, which contain no target point; according to the point cloud data falling within each sample region, a grayscale image of the corresponding sample region is generated, where the grayscale images of positive sample regions serve as positive samples for training a road traffic marking recognition model and the grayscale images of negative sample regions serve as negative samples. The application requires no manual production, saving human resources, and by controlling the number of sample regions produced by the division, a sufficient number of training samples can be guaranteed.
Description
Technical field
This application relates to the technical field of training data generation, and more specifically to a training data generation method and device.
Background technique
When training various road traffic marking recognition models with machine learning, a large amount of training sample data is needed to guarantee the recognition accuracy of the trained model.
Existing training sample data are usually produced manually: pictures of the road surface are labeled by hand, with pictures containing a road traffic marking used as positive samples and pictures without one used as negative samples.
Obviously, producing training data manually consumes considerable human resources, and the number of training samples produced is often insufficient, so the recognition accuracy of the trained road traffic marking recognition model cannot be guaranteed.
Summary of the invention
In view of this, the application provides a training data generation method and device, to solve the problems that the existing manual way of producing training data consumes human resources and yields an insufficient number of training samples.
To achieve the above goals, the following scheme is proposed:
A training data generation method, comprising:
obtaining original point cloud data of a road surface;
determining, from the original point cloud data, the points corresponding to a road traffic marking as target points;
dividing the original point cloud data of the road surface into regions using the target points corresponding to the road traffic marking as a reference, to obtain a set of sample regions, the sample regions including positive sample regions and negative sample regions, wherein each positive sample region contains at least one target point and a negative sample region contains no target point;
generating, according to the point cloud data of the original point cloud data falling within each sample region, a grayscale image of the corresponding sample region, wherein the grayscale images of positive sample regions serve as positive samples for training a road traffic marking recognition model and the grayscale images of negative sample regions serve as negative samples.
Preferably, the road traffic marking is a lane line, and dividing the original point cloud data of the road surface into regions using the target points corresponding to the road traffic marking as a reference, to obtain the set of sample regions, comprises:
starting from any target point corresponding to the lane line, determining a lane position point every first set unit length along the lane line;
taking each lane position point as an intersection point, determining a relative position point every second set unit length along the direction perpendicular to the lane line;
taking each lane position point and each relative position point as the center point of a rectangle and generating the corresponding rectangular region according to a set rectangle size, the rectangular regions constituting the set of sample regions, wherein a sample region whose rectangle contains a target point of the lane line is a positive sample region and the remaining sample regions, whose rectangles contain no target point of the lane line, are negative sample regions.
Preferably, the road traffic marking is a road surface marking, and dividing the original point cloud data of the road surface into regions using the target points corresponding to the road traffic marking as a reference, to obtain the set of sample regions, comprises:
determining the minimum circumscribed rectangle of the region covered by the target points of the road surface marking;
starting from any point on the central axis of the minimum circumscribed rectangle, determining a road-surface-marking position point every first set unit length along the central axis;
taking each such position point as an intersection point, determining a relative position point every second set unit length along the direction perpendicular to the central axis;
taking each position point and each relative position point as the center point of a rectangle and generating the corresponding rectangular region according to a set rectangle size, the rectangular regions constituting the set of sample regions, wherein a sample region whose rectangle contains a target point of the road surface marking is a positive sample region and the remaining sample regions, whose rectangles contain no target point of the road surface marking, are negative sample regions.
Preferably, generating the grayscale image of the corresponding sample region according to the point cloud data of the original point cloud data falling within the sample region comprises:
determining, for each point of the point cloud data falling within the sample region, its corresponding pixel in the grayscale image of the sample region;
if every pixel in the grayscale image of the sample region has a corresponding point in the point cloud data, determining the gray value of each pixel in the grayscale image according to the reflectivity of its corresponding point.
Preferably, generating the grayscale image of the corresponding sample region according to the point cloud data of the original point cloud data falling within the sample region comprises:
determining, for each point of the point cloud data falling within the sample region, its corresponding pixel in the grayscale image of the sample region;
taking the pixels of the grayscale image that have a corresponding point in the point cloud data as first-class pixels, and determining the reflectivity of each first-class pixel according to the reflectivity of its corresponding point;
taking the pixels of the grayscale image that have no corresponding point in the point cloud data as second-class pixels, and determining the reflectivity of each second-class pixel according to the known reflectivities of the pixels within a set range around it;
determining the gray value of each pixel in the grayscale image of the sample region according to the reflectivity of that pixel.
Preferably, determining the reflectivity of each second-class pixel according to the known reflectivities of the pixels within the set range around it comprises:
taking the second-class pixel as the center of the set range and finding the pixels of known reflectivity within that range;
calculating the distance from each pixel of known reflectivity to the second-class pixel;
determining a weight for each pixel of known reflectivity such that the weight is inversely proportional to the distance and the weights sum to 1;
summing, over the pixels of known reflectivity, the product of each weight and the corresponding reflectivity, to obtain the reflectivity of the second-class pixel.
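The inverse-distance weighting described above can be sketched as follows. This is an illustrative sketch only; the function name, coordinate convention, neighborhood radius, and fallback value for an empty neighborhood are assumptions not specified by the application:

```python
def fill_reflectivity(center, known, radius=2.0):
    """Estimate the reflectivity of a second-class pixel (no corresponding
    cloud point) by inverse-distance weighting over the first-class pixels
    of known reflectivity within `radius` of it.

    center: (x, y) of the second-class pixel
    known:  list of ((x, y), reflectivity) for pixels of known reflectivity
    """
    cx, cy = center
    # Keep only the known pixels inside the set range around the center.
    nearby = [((x, y), r) for (x, y), r in known
              if ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 <= radius]
    if not nearby:
        return 0.0  # fallback when no known pixel is in range (assumption)
    # Weight is inversely proportional to distance; weights are then
    # normalized so they sum to 1, as the application requires.
    inv = []
    for (x, y), r in nearby:
        d = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
        inv.append((1.0 / max(d, 1e-9), r))
    total = sum(w for w, _ in inv)
    return sum(w / total * r for w, r in inv)
```

A pixel equidistant from two known neighbors thus receives the mean of their reflectivities, and closer neighbors dominate as the distances diverge.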
A training data generating device, comprising:
a point cloud data acquiring unit, for obtaining original point cloud data of a road surface;
a target point determination unit, for determining, from the original point cloud data, the points corresponding to a road traffic marking as target points;
a sample region set determination unit, for dividing the original point cloud data of the road surface into regions using the target points corresponding to the road traffic marking as a reference, to obtain a set of sample regions, the sample regions including positive sample regions, each containing at least one target point, and negative sample regions, which contain no target point;
a grayscale image generation unit, for generating, according to the point cloud data of the original point cloud data falling within each sample region, a grayscale image of the corresponding sample region, wherein the grayscale images of positive sample regions serve as positive samples for training a road traffic marking recognition model and the grayscale images of negative sample regions serve as negative samples.
Preferably, the road traffic marking is a lane line, and the process by which the sample region set determination unit divides the original point cloud data of the road surface into regions using the target points as a reference, to obtain the set of sample regions, specifically includes:
starting from any target point corresponding to the lane line, determining a lane position point every first set unit length along the lane line;
taking each lane position point as an intersection point, determining a relative position point every second set unit length along the direction perpendicular to the lane line;
taking each lane position point and each relative position point as the center point of a rectangle and generating the corresponding rectangular region according to a set rectangle size, the rectangular regions constituting the set of sample regions, wherein a sample region whose rectangle contains a target point of the lane line is a positive sample region and the remaining sample regions, whose rectangles contain no target point of the lane line, are negative sample regions.
Preferably, the road traffic marking is a road surface marking, and the process by which the sample region set determination unit divides the original point cloud data of the road surface into regions using the target points as a reference, to obtain the set of sample regions, specifically includes:
determining the minimum circumscribed rectangle of the region covered by the target points of the road surface marking;
starting from any point on the central axis of the minimum circumscribed rectangle, determining a road-surface-marking position point every first set unit length along the central axis;
taking each such position point as an intersection point, determining a relative position point every second set unit length along the direction perpendicular to the central axis;
taking each position point and each relative position point as the center point of a rectangle and generating the corresponding rectangular region according to a set rectangle size, the rectangular regions constituting the set of sample regions, wherein a sample region whose rectangle contains a target point of the road surface marking is a positive sample region and the remaining sample regions, whose rectangles contain no target point of the road surface marking, are negative sample regions.
Preferably, the process by which the grayscale image generation unit generates the grayscale image of the corresponding sample region according to the point cloud data falling within the sample region specifically includes:
determining, for each point of the point cloud data falling within the sample region, its corresponding pixel in the grayscale image of the sample region;
if every pixel in the grayscale image of the sample region has a corresponding point in the point cloud data, determining the gray value of each pixel in the grayscale image according to the reflectivity of its corresponding point.
Preferably, the process by which the grayscale image generation unit generates the grayscale image of the corresponding sample region according to the point cloud data falling within the sample region specifically includes:
determining, for each point of the point cloud data falling within the sample region, its corresponding pixel in the grayscale image of the sample region;
taking the pixels of the grayscale image that have a corresponding point in the point cloud data as first-class pixels, and determining the reflectivity of each first-class pixel according to the reflectivity of its corresponding point;
taking the pixels of the grayscale image that have no corresponding point in the point cloud data as second-class pixels, and determining the reflectivity of each second-class pixel according to the known reflectivities of the pixels within a set range around it;
determining the gray value of each pixel in the grayscale image of the sample region according to the reflectivity of that pixel.
Preferably, the process by which the grayscale image generation unit determines the reflectivity of each second-class pixel according to the known reflectivities of the pixels within the set range around it comprises:
taking the second-class pixel as the center of the set range and finding the pixels of known reflectivity within that range;
calculating the distance from each pixel of known reflectivity to the second-class pixel;
determining a weight for each pixel of known reflectivity such that the weight is inversely proportional to the distance and the weights sum to 1;
summing, over the pixels of known reflectivity, the product of each weight and the corresponding reflectivity, to obtain the reflectivity of the second-class pixel.
It can be seen from the above technical scheme that the training data generation method of the application obtains original point cloud data of a road surface; determines, from the original point cloud data, the points corresponding to a road traffic marking as target points; divides the original point cloud data of the road surface into regions using the target points as a reference, to obtain a set of sample regions including positive sample regions, each containing at least one target point, and negative sample regions, which contain no target point; and generates, according to the point cloud data falling within each sample region, a grayscale image of the corresponding sample region, wherein the grayscale images of positive sample regions serve as positive samples and those of negative sample regions as negative samples for training a road traffic marking recognition model. It can be seen that the application can automatically generate the grayscale images of positive and negative sample regions from the original point cloud data of the road surface and use them as positive and negative samples to train the recognition model. No manual production is required, saving human resources, and by controlling the number of sample regions obtained from the region division of the original point cloud data, a sufficient number of training samples can be guaranteed.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show only embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a training data generation method disclosed in an embodiment of the present application;
Fig. 2 is a flowchart of a method for dividing to obtain a set of sample regions disclosed in an embodiment of the present application;
Fig. 3 is a schematic diagram of determining an exemplary set of sample regions;
Fig. 4 is a flowchart of another method for dividing to obtain a set of sample regions disclosed in an embodiment of the present application;
Fig. 5 is a flowchart of a method for generating the grayscale image of a sample region disclosed in an embodiment of the present application;
Fig. 6 is a flowchart of another method for generating the grayscale image of a sample region disclosed in an embodiment of the present application;
Fig. 7 is a flowchart of a method for determining the reflectivity of second-class pixels disclosed in an embodiment of the present application;
Fig. 8 is a structural schematic diagram of a training data generating device disclosed in an embodiment of the present application.
Detailed description of the embodiments
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of this application.
A road traffic marking recognition model is a model for recognizing road traffic markings: an image is input to the model, and the model outputs a conclusion on whether the image contains a road traffic marking. Road traffic markings include lane lines and road surface markings, the latter covering painted markings such as turn arrows and speed-limit markings.
To train a road traffic marking recognition model, a large amount of training data must be obtained in advance. To this end, the present application provides a training data generation method, introduced below with reference to Fig. 1.
As shown in Fig. 1, the method comprises:
Step S100, obtaining the original point cloud data of a road surface;
Specifically, the original point cloud data obtained in this step is the original point cloud data of a road surface region of set size.
The original point cloud data of the road surface consists of numerous three-dimensional coordinate points, each labeled with a reflectivity value.
Step S110, determining, from the original point cloud data, the points corresponding to a road traffic marking as target points;
Specifically, the points corresponding to the road traffic marking may be determined from the original point cloud data either by an algorithm or manually, and used as target points.
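The application leaves the choice of algorithm open. As one hypothetical illustration (not taken from the application): road paint is typically more retroreflective than bare asphalt, so a simple reflectivity threshold can serve as an automatic first pass. The function name and threshold value below are assumptions:

```python
def find_target_points(points, min_reflectivity=0.5):
    """Pick candidate road-marking points from a road-surface point cloud.

    points: list of (x, y, z, reflectivity) tuples.
    The cutoff is illustrative only; a usable value depends on the sensor
    and how its reflectivity channel is scaled.
    """
    return [p for p in points if p[3] >= min_reflectivity]
```

In practice such a filter would be followed by clustering or manual review, since wet patches and other bright surfaces can also exceed the threshold.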
Step S120, dividing the original point cloud data of the road surface into regions using the target points corresponding to the road traffic marking as a reference, to obtain a set of sample regions;
The sample regions include positive sample regions and negative sample regions; each positive sample region contains at least one target point, and a negative sample region contains no target point.
Specifically, after the original point cloud data and the target points of the road traffic marking within it have been obtained, the original point cloud data can be divided into regions using the target points as a reference, obtaining a set of multiple sample regions. The set includes several positive sample regions, which contain target points forming the road traffic marking, and several negative sample regions, which contain none.
Step S130, generating, according to the point cloud data of the original point cloud data falling within each sample region, a grayscale image of the corresponding sample region.
The grayscale images of positive sample regions serve as positive samples for training the road traffic marking recognition model, and the grayscale images of negative sample regions serve as negative samples.
Since the sample regions are obtained by dividing the original point cloud data, the point cloud data whose spatial position corresponds to a sample region can be determined from the original point cloud data, as the point cloud data falling within that sample region.
Specifically, the points of the point cloud data can be projected along the height direction to obtain two-dimensional point cloud data. Then, according to the two-dimensional coordinates of the point cloud data and of the sample regions, the point cloud data of the original point cloud data falling within each sample region is determined.
To speed up determining which points of the original point cloud data fall within each sample region, the application may build an R-tree structure from the determined set of sample regions and then traverse the original point cloud data, determining the point cloud data corresponding to the spatial position of each sample region.
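The lookup can be sketched as follows. The application proposes an R-tree over the sample regions to accelerate the query; this dependency-free sketch instead checks every rectangle by brute force, which yields the same assignment more slowly (an R-tree would replace the inner loop with an index query). All names are illustrative assumptions:

```python
def assign_points_to_regions(points_2d, regions):
    """Map each 2-D point (projected along the height axis) to the sample
    regions containing it.

    points_2d: list of (x, y) projected cloud points
    regions:   list of (min_x, min_y, max_x, max_y) sample rectangles
    Returns {region_index: [points...]}.
    """
    hits = {i: [] for i in range(len(regions))}
    for x, y in points_2d:
        for i, (x0, y0, x1, y1) in enumerate(regions):
            # A point on a rectangle's boundary counts as inside it.
            if x0 <= x <= x1 and y0 <= y <= y1:
                hits[i].append((x, y))
    return hits
```

Because sample rectangles may overlap, one point can legitimately land in several regions; the per-region lists above preserve that.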
After the point cloud data falling within each sample region has been determined, the grayscale image of the corresponding sample region can be generated from the point cloud data the region contains.
Since a positive sample region contains target points of the road traffic marking, its grayscale image serves as a positive sample for training the road traffic marking recognition model; since a negative sample region contains no such target points, its grayscale image serves as a negative sample.
When generating the grayscale image of a sample region, the reflectivity of each point of the point cloud data the region contains is converted into a gray pixel, and the grayscale image is generated from these pixels.
The application can automatically generate the grayscale images of positive and negative sample regions from the original point cloud data of the road surface and use them as positive and negative samples to train the road traffic marking recognition model. No manual production is required, saving human resources, and by controlling the number of sample regions obtained from the region division of the original point cloud data, a sufficient number of training samples can be generated.
In another embodiment of the application, the process of step S120 above, dividing the original point cloud data of the road surface into regions using the target points corresponding to the road traffic marking as a reference to obtain the set of sample regions, is introduced separately for different forms of road traffic marking.
First, the road traffic marking is a lane line.
In this case the process of step S120, dividing to obtain the set of sample regions, may be as shown in Fig. 2. The process includes:
Step S200, starting from any target point corresponding to the lane line, determining a lane position point every first set unit length along the lane line;
Specifically, starting from any target point of the lane line, a lane position point is determined every first set unit length along the lane line. Starting from that target point, lane position points may be determined in turn to the left and to the right along the straight line on which the lane line lies. The number of lane position points determined can be controlled as needed. It should be understood that a determined lane position point may lie on the lane line or on its extension.
Step S210, taking each lane position point as an intersection point, determining a relative position point every second set unit length along the direction perpendicular to the lane line;
Specifically, for each lane position point obtained in the previous step, taking it as an intersection point, a relative position point is determined every second set unit length along one or both directions perpendicular to the lane line. The number of relative position points determined can be controlled as needed.
The second set unit length may be the same as or different from the first set unit length.
Step S220, taking each lane position point and each relative position point as the center point of a rectangle, generating the corresponding rectangular regions according to a set rectangle size to constitute the set of sample regions.
A sample region whose rectangle contains a target point of the lane line is a positive sample region; the remaining sample regions, whose rectangles contain no target point of the lane line, are negative sample regions.
The process of determining the set of sample regions is introduced with reference to Fig. 3:
As shown in Fig. 3, starting from a target point O1 on the lane line, a lane position point is determined every first set unit length x1 along the lane line in both the left and right directions.
Taking each lane position point so determined as an intersection point, a relative position point is determined every second set unit length x2 along both the upward and downward directions perpendicular to the lane line.
Taking each lane position point and relative position point so determined as the center point of a rectangle, the corresponding rectangular region is generated according to a set rectangle size (for example, width k1 and height k2) as a sample region. The rectangle size can be set by the user; the rectangle width k1 may be chosen greater than the first set unit length x1, and the rectangle height k2 greater than the second set unit length x2.
It should be understood that, among the rectangular regions generated by sliding along the lane line, when a lane position point does not lie on the lane line and its distance from the nearest endpoint of the lane line exceeds the rectangle width k1, the generated rectangular region will not contain any target point of the lane line.
Similarly, among the rectangular regions generated by sliding perpendicular to the lane line, when a relative position point's distance from the nearest edge point of the lane line exceeds the rectangle height k2, the generated rectangular region will not contain any target point of the lane line.
Apart from these, every other generated rectangular region contains a target point of the lane line.
It should be understood that by controlling the sizes of the first set unit length x1 and the second set unit length x2, the number of sample regions generated can be controlled. The smaller x1 and x2 are, the more positive sample regions are generated. Of course, as the number of generated lane position points and relative position points grows, the number of negative sample regions generated also increases.
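The sliding-rectangle construction of Fig. 3 can be sketched as follows, under the simplifying assumption (not from the application) that the lane line is straight and lies along the x axis; a real lane line would require a local along-line/across-line frame at each target point. All names are illustrative:

```python
def sample_rectangles(start, n_along, n_across, x1, x2, k1, k2):
    """Generate center-size sample rectangles around a lane line assumed
    to lie on the x axis.

    start:    (x, y) target point on the lane line (O1 in Fig. 3)
    n_along:  lane position points generated to each side along the line
    n_across: relative position points to each side across the line
    x1, x2:   first and second set unit lengths
    k1, k2:   rectangle width and height
    Returns a list of (cx, cy, k1, k2) rectangles.
    """
    sx, sy = start
    rects = []
    for i in range(-n_along, n_along + 1):
        cx = sx + i * x1                      # lane position point
        for j in range(-n_across, n_across + 1):
            cy = sy + j * x2                  # relative position point
            rects.append((cx, cy, k1, k2))
    return rects
```

Shrinking x1 and x2 densifies the grid of centers, which is exactly how the application controls the number of sample regions produced.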
Second, the road traffic marking is a road surface marking, such as a turn arrow.
In this case the process of step S120, dividing to obtain the set of sample regions, may be as shown in Fig. 4. The process includes:
Step S400, determining the minimum circumscribed rectangle of the region covered by the target points of the road surface marking;
Step S410, starting from any point on the central axis of the minimum circumscribed rectangle, determining a road-surface-marking position point every first set unit length along the central axis;
Specifically, starting from any point on the central axis of the minimum circumscribed rectangle, a road-surface-marking position point is determined every first set unit length along the central axis. Starting from that point, position points may be determined in turn to the left and to the right along the straight line on which the central axis lies. The number of position points determined can be controlled as needed. It should be understood that a determined position point may lie on the central axis or on its extension.
Step S420, taking each road-surface-marking position point as an intersection point, determining a relative position point every second set unit length along the direction perpendicular to the central axis;
Specifically, for each road-surface-marking position point obtained in the previous step, taking it as an intersection point, a relative position point is determined every second set unit length along one or both directions perpendicular to the central axis. The number of relative position points determined can be controlled as needed.
The second set unit length may be the same as or different from the first set unit length.
Step S430: taking each road surface identification location point and each relative position point as the center point of a rectangle, generating the corresponding rectangular areas according to a set rectangle size; these rectangular areas constitute the set of sample areas.
Each sample area whose rectangular area contains a target point corresponding to the road surface identification is a positive sample area; the remaining sample areas, whose rectangular areas contain no such target point, are negative sample areas.
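The division in steps S410 to S430 can be sketched as follows. This is an illustrative sketch only, not the patent's implementation: the function name and parameters are invented for the example, and the generated rectangles are simplified to be axis-aligned in the point cloud coordinate system.

```python
import math

def make_sample_regions(axis_start, axis_dir, n_axis, d1, n_side, d2, rect_size):
    """Generate sample rectangles around a central axis (illustrative sketch).

    axis_start     -- (x, y) start point on the central axis
    axis_dir       -- direction vector of the central axis
    n_axis, n_side -- how many points to take on each side along / across the axis
    d1, d2         -- first and second set unit lengths
    rect_size      -- (width, height) of every sample rectangle
    """
    sx, sy = axis_start
    norm = math.hypot(*axis_dir)
    ux, uy = axis_dir[0] / norm, axis_dir[1] / norm   # unit vector along the axis
    nx, ny = -uy, ux                                  # perpendicular direction
    w, h = rect_size
    regions = []
    for i in range(-n_axis, n_axis + 1):              # S410: location points
        lx, ly = sx + i * d1 * ux, sy + i * d1 * uy
        for j in range(-n_side, n_side + 1):          # S420: relative position points
            cx, cy = lx + j * d2 * nx, ly + j * d2 * ny
            # S430: rectangle of the set size centered on each point
            regions.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return regions
```

Whether each resulting rectangle is a positive or negative sample area is then decided by testing it for target points, as described above.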
It is understood that the process of dividing the point cloud into the set of sample areas in this embodiment is similar to the process used when the pavement marking is a lane line: the minimum circumscribed rectangle of the region covered by the target points corresponding to the road surface identification plays the role of the region where the lane line lies, and the subsequent steps are analogous.
Further, the process of step S130, generating the grayscale image of each sample area according to the point cloud data that falls into that sample area, is introduced below. A specific implementation may refer to Fig. 5.
As shown in Fig. 5, the process includes:
Step S500: according to the point cloud data falling into a sample area, determining the pixel in the grayscale image of the sample area that each point in the point cloud data corresponds to;
Specifically, for the point cloud data falling into a sample area, the difference between the maximum abscissa maxX and the minimum abscissa minX corresponds to the horizontal pixel count w of the grayscale image of the sample area, and the difference between the maximum ordinate maxY and the minimum ordinate minY corresponds to the vertical pixel count h. The length and width of each pixel can therefore be determined as:

Pixel length GridL = (maxX - minX) / w
Pixel width GridW = (maxY - minY) / h

On this basis, the pixel that each point falling into the sample area corresponds to in the grayscale image can be determined.
Taking a point (x, y) in the point cloud data as an example, the abscissa of its corresponding pixel is:

Piex_X = (x - minX) / GridL

and the ordinate of its corresponding pixel is:

Piex_Y = (maxY - y) / GridW
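The mapping above can be sketched as a small function. The function name, the flooring to integer pixel indices, and the clamping of points on the maximum border are assumptions added to make the example runnable; they are not stated in the text.

```python
def point_to_pixel(x, y, min_x, max_x, min_y, max_y, w, h):
    """Map a point (x, y) inside a sample area to its pixel in a
    w x h grayscale image, following Piex_X and Piex_Y above."""
    grid_l = (max_x - min_x) / w            # pixel length GridL
    grid_w = (max_y - min_y) / h            # pixel width GridW
    px = int((x - min_x) / grid_l)          # Piex_X, floored to an index
    py = int((max_y - y) / grid_w)          # Piex_Y; image y grows downward
    # clamp points lying exactly on the max border into the last pixel
    return min(px, w - 1), min(py, h - 1)
```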
Step S510: if every pixel in the grayscale image of the sample area has a corresponding point in the point cloud data, determining the gray value of each pixel in the grayscale image of the sample area according to the reflectivity of the point corresponding to that pixel.
Specifically, step S500 maps the points in the point cloud data to the pixels of the grayscale image of the sample area. If the point cloud data is dense enough, every pixel in the grayscale image has a corresponding point in the point cloud data; that is, every pixel in the pixel matrix of the grayscale image corresponds to a point in the point cloud data. In this case, the gray value of each pixel can be determined from the reflectivity of its corresponding point.
In an optional embodiment, the maximum reflectivity maxR and the minimum reflectivity minR among the points corresponding to the pixels can be obtained; the maximum reflectivity corresponds to the maximum gray value 255 and the minimum reflectivity corresponds to the minimum gray value 0. Therefore, when determining the gray value of a pixel, if the reflectivity of the point corresponding to the pixel is R, its gray value is C = (R - minR) * 255 / (maxR - minR).
Alternatively, the reflectivity of a pixel may be defined as the reflectivity of its corresponding point. In that case, when determining the gray value of a pixel whose reflectivity is R, the gray value is likewise C = (R - minR) * 255 / (maxR - minR).
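A minimal sketch of this linear mapping follows; the rounding to an integer and the handling of the degenerate case maxR == minR are assumptions added for the example, not part of the text.

```python
def reflectivity_to_gray(r, min_r, max_r):
    """Linear mapping described above: minR -> 0, maxR -> 255,
    C = (R - minR) * 255 / (maxR - minR)."""
    if max_r == min_r:      # degenerate case: all reflectivities equal
        return 0
    return int(round((r - min_r) * 255 / (max_r - min_r)))
```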
In another optional embodiment of the application, a second optional implementation of the above step S130, generating the grayscale image of each sample area according to the point cloud data falling into that sample area, may refer to Fig. 6.
As shown in Fig. 6, the process includes:
Step S600: according to the point cloud data falling into a sample area, determining the pixel in the grayscale image of the sample area that each point in the point cloud data corresponds to;
Specifically, step S600 corresponds to step S500 above; refer to the introduction of the above embodiment, which is not repeated here.
Step S610: taking the pixels in the grayscale image of the sample area that have corresponding points in the point cloud data as first-class pixels, determining the reflectivity of each first-class pixel according to the reflectivity of its corresponding point;
Step S620: taking the pixels in the grayscale image of the sample area that have no corresponding point in the point cloud data as second-class pixels, determining the reflectivity of each second-class pixel according to the reflectivities of the pixels with known reflectivity within a set range around it;
Specifically, if the point cloud data is sparse, some pixels in the grayscale image may have no corresponding point in the point cloud data. The application defines the pixels without corresponding points as second-class pixels, and the remaining pixels, which do have corresponding points, as first-class pixels. It should be noted that "first-class pixel" and "second-class pixel" are merely labels for ease of description; both belong to the pixels of the grayscale image of the sample area.
For a first-class pixel, its reflectivity can be taken to be the reflectivity of its corresponding point. For a second-class pixel, its reflectivity can be determined from the reflectivities of the pixels with known reflectivity within a set range around it.
Step S630: according to the reflectivity of each pixel in the grayscale image of the sample area, determining the gray value of each pixel in the grayscale image of the sample area.
Specifically, the above steps determine a reflectivity for every pixel in the grayscale image of the sample area (both first-class and second-class pixels). In this step, the gray value of each pixel in the grayscale image of the sample area can be determined from these reflectivities.
Similarly to the above, the maximum reflectivity maxR and the minimum reflectivity minR among the pixel reflectivities in the grayscale image can be obtained; the maximum reflectivity corresponds to the maximum gray value 255 and the minimum reflectivity corresponds to the minimum gray value 0. Therefore, when determining the gray value of a pixel whose reflectivity is R, the gray value is C = (R - minR) * 255 / (maxR - minR).
In this embodiment, when the grayscale image of a sample area contains second-class pixels with no corresponding point in the point cloud data, the reflectivity of each second-class pixel is determined from the pixels with known reflectivity around it, so that the gray value of every pixel in the grayscale image can be determined. This avoids blank pixels whose gray value cannot be determined, and guarantees that the generated grayscale image carries enough features, thereby improving the model training effect.
Optionally, the implementation of the above step S620, determining the reflectivity of each second-class pixel according to the reflectivities of the pixels with known reflectivity within the set range around it, may refer to Fig. 7 and includes:
Step S700: taking the second-class pixel as the center of the set range, determining the pixels with known reflectivity within that range;
Specifically, the set range can be set by the user. Within the set range, the pixels whose reflectivity has already been determined are searched for.
Step S710: calculating the distance from each pixel with known reflectivity to the second-class pixel;
Step S720: determining the weight of each pixel with known reflectivity according to the relationship that weight is inversely proportional to distance, where the weights of the pixels with known reflectivity sum to 1;
Assume that n pixels with known reflectivity are found within the set range, and that the distance from the i-th such pixel to the second-class pixel is denoted di and its weight pi. Then:

p1 + p2 + ... + pi + ... + pn = 1 (1)

Since weight is inversely proportional to distance, select one of the pixels, say the l-th, and denote its weight pl and its distance to the second-class pixel dl. The weight of any other pixel is then:

pi = pl * dl / di (2)

The weight of each pixel can be determined from formulas (1) and (2).
Step S730: adding the products of the weight and the reflectivity of each pixel with known reflectivity, obtaining the reflectivity of the second-class pixel:

Reflectivity of the second-class pixel R = r1*p1 + r2*p2 + ... + ri*pi + ... + rn*pn

where ri denotes the reflectivity of the i-th pixel with known reflectivity.
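Steps S700 to S730 amount to inverse-distance weighting. The following is a minimal sketch under that reading; the function name and the input layout (a list of (x, y, reflectivity) neighbors) are illustrative assumptions.

```python
import math

def idw_reflectivity(neighbors, target):
    """Inverse-distance-weighted reflectivity for a second-class pixel.

    neighbors -- list of (x, y, reflectivity) for pixels with known
                 reflectivity inside the set range (steps S700-S710)
    target    -- (x, y) of the second-class pixel
    Weights are proportional to 1/distance and normalized to sum to 1,
    matching formulas (1) and (2) above (step S720).
    """
    tx, ty = target
    inv_d = []
    for x, y, r in neighbors:
        d = math.hypot(x - tx, y - ty)      # S710: distance to the target pixel
        inv_d.append((1.0 / d, r))          # un-normalized weight 1/di
    total = sum(w for w, _ in inv_d)
    # S730: weighted sum of the known reflectivities
    return sum(w / total * r for w, r in inv_d)
```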
The training data generating device provided by the embodiments of the application is described below; the training data generating device described below and the training data generation method described above may refer to each other correspondingly.
Referring to Fig. 8, Fig. 8 is a schematic structural diagram of a training data generating device disclosed in an embodiment of the application. As shown in Fig. 8, the device includes:
a point cloud data acquiring unit 11, configured to acquire original point cloud data of a road surface;
a target point determination unit 12, configured to determine, from the original point cloud data, the points corresponding to a pavement marking as target points;
a sample area set determination unit 13, configured to divide the original point cloud data of the road surface into regions with the target points corresponding to the pavement marking as a reference, obtaining a set of sample areas, where the sample areas include positive sample areas and negative sample areas, a positive sample area contains at least one target point, and a negative sample area contains no target point;
a grayscale image generation unit 14, configured to generate, according to the point cloud data falling into each sample area, the grayscale image of the sample area, where the grayscale image of a positive sample area serves as a positive example for training a pavement marking identification model, and the grayscale image of a negative sample area serves as a negative example for training the pavement marking identification model.
The device of the application can automatically generate grayscale images of positive and negative sample areas from the original point cloud data of a road surface, to be used respectively as positive and negative examples for training a pavement marking identification model. The application requires no manual production, saving human resources, and can guarantee that a sufficient number of training samples is generated by controlling the number of sample areas obtained from the region division of the original point cloud data.
Optionally, when the pavement marking is a lane line, the process by which the sample area set determination unit divides the original point cloud data of the road surface into regions with the target points corresponding to the pavement marking as a reference, obtaining the set of sample areas, specifically includes:
starting from any target point corresponding to the lane line, determining a lane line location point every first set unit length along the lane line;
taking each lane line location point as an intersection point, determining a relative position point every second set unit length along the direction perpendicular to the lane line;
taking each lane line location point and each relative position point as the center point of a rectangle, generating the corresponding rectangular areas according to a set rectangle size to constitute the set of sample areas, where each sample area whose rectangular area contains a target point corresponding to the lane line is a positive sample area, and the remaining sample areas, whose rectangular areas contain no target point corresponding to the lane line, are negative sample areas.
Optionally, when the pavement marking is a road surface identification, the process by which the sample area set determination unit divides the original point cloud data of the road surface into regions with the target points corresponding to the pavement marking as a reference, obtaining the set of sample areas, specifically includes:
determining the minimum circumscribed rectangle of the region covered by the target points corresponding to the road surface identification;
starting from any point on the central axis of the minimum circumscribed rectangle, determining a road surface identification location point every first set unit length along the central axis;
taking each road surface identification location point as an intersection point, determining a relative position point every second set unit length along the direction perpendicular to the central axis;
taking each road surface identification location point and each relative position point as the center point of a rectangle, generating the corresponding rectangular areas according to a set rectangle size to constitute the set of sample areas, where each sample area whose rectangular area contains a target point corresponding to the road surface identification is a positive sample area, and the remaining sample areas, whose rectangular areas contain no target point corresponding to the road surface identification, are negative sample areas.
Optionally, the process by which the grayscale image generation unit generates the grayscale image of a sample area according to the point cloud data falling into the sample area may specifically include:
according to the point cloud data falling into the sample area, determining the pixel in the grayscale image of the sample area that each point in the point cloud data corresponds to;
if every pixel in the grayscale image of the sample area has a corresponding point in the point cloud data, determining the gray value of each pixel in the grayscale image of the sample area according to the reflectivity of the point corresponding to that pixel.
Further optionally, the process by which the grayscale image generation unit generates the grayscale image of a sample area according to the point cloud data falling into the sample area may also include:
according to the point cloud data falling into the sample area, determining the pixel in the grayscale image of the sample area that each point in the point cloud data corresponds to;
taking the pixels in the grayscale image of the sample area that have corresponding points in the point cloud data as first-class pixels, determining the reflectivity of each first-class pixel according to the reflectivity of its corresponding point;
taking the pixels in the grayscale image of the sample area that have no corresponding point in the point cloud data as second-class pixels, determining the reflectivity of each second-class pixel according to the reflectivities of the pixels with known reflectivity within a set range around it;
according to the reflectivity of each pixel in the grayscale image of the sample area, determining the gray value of each pixel in the grayscale image of the sample area.
Optionally, the process by which the grayscale image generation unit determines the reflectivity of each second-class pixel according to the reflectivities of the pixels with known reflectivity within the set range around it includes:
taking the second-class pixel as the center of the set range, determining the pixels with known reflectivity within that range;
calculating the distance from each pixel with known reflectivity to the second-class pixel;
determining the weight of each pixel with known reflectivity according to the relationship that weight is inversely proportional to distance, where the weights of the pixels with known reflectivity sum to 1;
adding the products of the weight and the reflectivity of each pixel with known reflectivity, obtaining the reflectivity of the second-class pixel.
Finally, it should be noted that, herein, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements not only includes those elements but also includes other elements not explicitly listed, or further includes elements inherent to such a process, method, article, or device. In the absence of further restrictions, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes the element.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the identical or similar parts of the embodiments may refer to each other.
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use the application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the application. Therefore, the application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (12)
1. A training data generation method, characterized by comprising:
acquiring original point cloud data of a road surface;
determining, from the original point cloud data, the points corresponding to a pavement marking as target points;
dividing the original point cloud data of the road surface into regions with the target points corresponding to the pavement marking as a reference, obtaining a set of sample areas, the sample areas comprising positive sample areas and negative sample areas, wherein a positive sample area contains at least one target point and a negative sample area contains no target point;
generating, according to the point cloud data falling into each sample area, the grayscale image of the sample area, wherein the grayscale image of a positive sample area serves as a positive example for training a pavement marking identification model, and the grayscale image of a negative sample area serves as a negative example for training the pavement marking identification model.
2. The method according to claim 1, characterized in that the pavement marking is a lane line, and the dividing the original point cloud data of the road surface into regions with the target points corresponding to the pavement marking as a reference, obtaining the set of sample areas, comprises:
starting from any target point corresponding to the lane line, determining a lane line location point every first set unit length along the lane line;
taking each lane line location point as an intersection point, determining a relative position point every second set unit length along the direction perpendicular to the lane line;
taking each lane line location point and each relative position point as the center point of a rectangle, generating the corresponding rectangular areas according to a set rectangle size to constitute the set of sample areas, wherein each sample area whose rectangular area contains a target point corresponding to the lane line is a positive sample area, and the remaining sample areas, whose rectangular areas contain no target point corresponding to the lane line, are negative sample areas.
3. The method according to claim 1, characterized in that the pavement marking is a road surface identification, and the dividing the original point cloud data of the road surface into regions with the target points corresponding to the pavement marking as a reference, obtaining the set of sample areas, comprises:
determining the minimum circumscribed rectangle of the region covered by the target points corresponding to the road surface identification;
starting from any point on the central axis of the minimum circumscribed rectangle, determining a road surface identification location point every first set unit length along the central axis;
taking each road surface identification location point as an intersection point, determining a relative position point every second set unit length along the direction perpendicular to the central axis;
taking each road surface identification location point and each relative position point as the center point of a rectangle, generating the corresponding rectangular areas according to a set rectangle size to constitute the set of sample areas, wherein each sample area whose rectangular area contains a target point corresponding to the road surface identification is a positive sample area, and the remaining sample areas, whose rectangular areas contain no target point corresponding to the road surface identification, are negative sample areas.
4. The method according to claim 1, characterized in that the generating, according to the point cloud data falling into each sample area, the grayscale image of the sample area comprises:
according to the point cloud data falling into the sample area, determining the pixel in the grayscale image of the sample area that each point in the point cloud data corresponds to;
if every pixel in the grayscale image of the sample area has a corresponding point in the point cloud data, determining the gray value of each pixel in the grayscale image of the sample area according to the reflectivity of the point corresponding to that pixel.
5. The method according to claim 1, characterized in that the generating, according to the point cloud data falling into each sample area, the grayscale image of the sample area comprises:
according to the point cloud data falling into the sample area, determining the pixel in the grayscale image of the sample area that each point in the point cloud data corresponds to;
taking the pixels in the grayscale image of the sample area that have corresponding points in the point cloud data as first-class pixels, determining the reflectivity of each first-class pixel according to the reflectivity of its corresponding point;
taking the pixels in the grayscale image of the sample area that have no corresponding point in the point cloud data as second-class pixels, determining the reflectivity of each second-class pixel according to the reflectivities of the pixels with known reflectivity within a set range around it;
according to the reflectivity of each pixel in the grayscale image of the sample area, determining the gray value of each pixel in the grayscale image of the sample area.
6. The method according to claim 5, characterized in that the determining the reflectivity of each second-class pixel according to the reflectivities of the pixels with known reflectivity within the set range around it comprises:
taking the second-class pixel as the center of the set range, determining the pixels with known reflectivity within that range;
calculating the distance from each pixel with known reflectivity to the second-class pixel;
determining the weight of each pixel with known reflectivity according to the relationship that weight is inversely proportional to distance, wherein the weights of the pixels with known reflectivity sum to 1;
adding the products of the weight and the reflectivity of each pixel with known reflectivity, obtaining the reflectivity of the second-class pixel.
7. A training data generating device, characterized by comprising:
a point cloud data acquiring unit, configured to acquire original point cloud data of a road surface;
a target point determination unit, configured to determine, from the original point cloud data, the points corresponding to a pavement marking as target points;
a sample area set determination unit, configured to divide the original point cloud data of the road surface into regions with the target points corresponding to the pavement marking as a reference, obtaining a set of sample areas, the sample areas comprising positive sample areas and negative sample areas, wherein a positive sample area contains at least one target point and a negative sample area contains no target point;
a grayscale image generation unit, configured to generate, according to the point cloud data falling into each sample area, the grayscale image of the sample area, wherein the grayscale image of a positive sample area serves as a positive example for training a pavement marking identification model, and the grayscale image of a negative sample area serves as a negative example for training the pavement marking identification model.
8. The device according to claim 7, characterized in that the pavement marking is a lane line, and the process by which the sample area set determination unit divides the original point cloud data of the road surface into regions with the target points corresponding to the pavement marking as a reference, obtaining the set of sample areas, specifically includes:
starting from any target point corresponding to the lane line, determining a lane line location point every first set unit length along the lane line;
taking each lane line location point as an intersection point, determining a relative position point every second set unit length along the direction perpendicular to the lane line;
taking each lane line location point and each relative position point as the center point of a rectangle, generating the corresponding rectangular areas according to a set rectangle size to constitute the set of sample areas, wherein each sample area whose rectangular area contains a target point corresponding to the lane line is a positive sample area, and the remaining sample areas, whose rectangular areas contain no target point corresponding to the lane line, are negative sample areas.
9. The device according to claim 7, characterized in that the pavement marking is a road surface identification, and the process by which the sample area set determination unit divides the original point cloud data of the road surface into regions with the target points corresponding to the pavement marking as a reference, obtaining the set of sample areas, specifically includes:
determining the minimum circumscribed rectangle of the region covered by the target points corresponding to the road surface identification;
starting from any point on the central axis of the minimum circumscribed rectangle, determining a road surface identification location point every first set unit length along the central axis;
taking each road surface identification location point as an intersection point, determining a relative position point every second set unit length along the direction perpendicular to the central axis;
taking each road surface identification location point and each relative position point as the center point of a rectangle, generating the corresponding rectangular areas according to a set rectangle size to constitute the set of sample areas, wherein each sample area whose rectangular area contains a target point corresponding to the road surface identification is a positive sample area, and the remaining sample areas, whose rectangular areas contain no target point corresponding to the road surface identification, are negative sample areas.
10. The device according to claim 7, characterized in that the process by which the grayscale image generation unit generates the grayscale image of a sample area according to the point cloud data falling into the sample area specifically includes:
according to the point cloud data falling into the sample area, determining the pixel in the grayscale image of the sample area that each point in the point cloud data corresponds to;
if every pixel in the grayscale image of the sample area has a corresponding point in the point cloud data, determining the gray value of each pixel in the grayscale image of the sample area according to the reflectivity of the point corresponding to that pixel.
11. The device according to claim 7, characterized in that the process by which the grayscale image generation unit generates the grayscale image of a sample area according to the point cloud data falling into the sample area specifically includes:
according to the point cloud data falling into the sample area, determining the pixel in the grayscale image of the sample area that each point in the point cloud data corresponds to;
taking the pixels in the grayscale image of the sample area that have corresponding points in the point cloud data as first-class pixels, determining the reflectivity of each first-class pixel according to the reflectivity of its corresponding point;
taking the pixels in the grayscale image of the sample area that have no corresponding point in the point cloud data as second-class pixels, determining the reflectivity of each second-class pixel according to the reflectivities of the pixels with known reflectivity within a set range around it;
according to the reflectivity of each pixel in the grayscale image of the sample area, determining the gray value of each pixel in the grayscale image of the sample area.
12. The device according to claim 11, wherein the process by which the grayscale image generation unit determines the reflectivity of each second-type pixel according to the known reflectivities of the pixels within the set range around it comprises:
taking the second-type pixel as the center of the set range, and identifying the pixels of known reflectivity within that range;
computing the distance from each pixel of known reflectivity to the second-type pixel;
determining a weight for each pixel of known reflectivity such that the weight is inversely proportional to the distance, the weights of all the pixels of known reflectivity summing to 1;
summing the product of each known-reflectivity pixel's weight and its reflectivity to obtain the reflectivity of the second-type pixel.
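The weighting described in this claim is standard inverse-distance interpolation with weights normalised to sum to 1; a sketch under assumed names and data layout:

```python
import numpy as np

def idw_reflectivity(target_rc, known):
    """Inverse-distance-weighted reflectivity for one second-type pixel.
    `known` is a list of ((row, col), reflectivity) pairs of known-
    reflectivity pixels inside the set range; the name and data layout
    are assumptions. Weights are proportional to 1/distance and are
    normalised so they sum to 1, as the claim requires."""
    tr, tc = target_rc
    inv_d, refls = [], []
    for (r, c), refl in known:
        # A known pixel never coincides with the target (the target has
        # no known reflectivity), so the distance is strictly positive.
        inv_d.append(1.0 / np.hypot(r - tr, c - tc))
        refls.append(refl)
    weights = np.asarray(inv_d) / np.sum(inv_d)  # weights sum to 1
    return float(np.dot(weights, refls))         # weighted sum of reflectivities
```

For example, with known pixels at distances 1 and 2 the weights come out 2/3 and 1/3, so the nearer pixel dominates the interpolated value.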
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711044197.9A CN109726728B (en) | 2017-10-31 | 2017-10-31 | Training data generation method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109726728A true CN109726728A (en) | 2019-05-07 |
CN109726728B CN109726728B (en) | 2020-12-15 |
Family
ID=66293951
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711044197.9A Active CN109726728B (en) | 2017-10-31 | 2017-10-31 | Training data generation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109726728B (en) |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8886387B1 (en) * | 2014-01-07 | 2014-11-11 | Google Inc. | Estimating multi-vehicle motion characteristics by finding stable reference points |
US9052721B1 (en) * | 2012-08-28 | 2015-06-09 | Google Inc. | Method for correcting alignment of vehicle mounted laser scans with an elevation map for obstacle detection |
CN104700099A (en) * | 2015-03-31 | 2015-06-10 | 百度在线网络技术(北京)有限公司 | Method and device for recognizing traffic signs |
CN104766058A (en) * | 2015-03-31 | 2015-07-08 | 百度在线网络技术(北京)有限公司 | Method and device for obtaining lane line |
CN104850834A (en) * | 2015-05-11 | 2015-08-19 | 中国科学院合肥物质科学研究院 | Road boundary detection method based on three-dimensional laser radar |
CN105488498A (en) * | 2016-01-15 | 2016-04-13 | 武汉光庭信息技术股份有限公司 | Lane sideline automatic extraction method and lane sideline automatic extraction system based on laser point cloud |
CN105701449A (en) * | 2015-12-31 | 2016-06-22 | 百度在线网络技术(北京)有限公司 | Method and device for detecting lane lines on road surface |
CN106228125A (en) * | 2016-07-15 | 2016-12-14 | 浙江工商大学 | Method for detecting lane lines based on integrated study cascade classifier |
CN106295607A (en) * | 2016-08-19 | 2017-01-04 | 北京奇虎科技有限公司 | Roads recognition method and device |
CN106525000A (en) * | 2016-10-31 | 2017-03-22 | 武汉大学 | A road marking line automatic extracting method based on laser scanning discrete point strength gradients |
CN106570446A (en) * | 2015-10-12 | 2017-04-19 | 腾讯科技(深圳)有限公司 | Lane line extraction method and device |
CN106705962A (en) * | 2016-12-27 | 2017-05-24 | 首都师范大学 | Method and system for acquiring navigation data |
CN106845321A (en) * | 2015-12-03 | 2017-06-13 | 高德软件有限公司 | The treating method and apparatus of pavement markers information |
US20170193312A1 (en) * | 2014-03-27 | 2017-07-06 | Georgia Tech Research Corporation | Systems and Methods for Identifying Traffic Control Devices and Testing the Retroreflectivity of the Same |
CN107122776A (en) * | 2017-04-14 | 2017-09-01 | 重庆邮电大学 | A kind of road traffic sign detection and recognition methods based on convolutional neural networks |
Non-Patent Citations (6)
Title |
---|
HONGHUI ZHANG et al.: "Joint Segmentation of Images and Scanned Point Cloud in Large-Scale Street Scenes With Low-Annotation Cost", IEEE TRANSACTIONS ON IMAGE PROCESSING * |
MARIO SOILÁN et al.: "Traffic Sign Detection in MLS Acquired Point Clouds for Geometric and Image-based Semantic Inventory", ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING * |
YONGTAO YU et al.: "Semiautomated Extraction of Street Light Poles From Mobile LiDAR Point-Clouds", IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING * |
任洪梅: "Road traffic sign recognition based on deep learning" (基于深度学习的路面交通标志识别), 信息通信 (Information & Communications) * |
罗敏: "Research on feature extraction from laser point clouds aided by digital images" (数字图像辅助激光点云特征提取研究), 万方在线 (Wanfang Online) * |
邹晓亮 et al.: "Automatic recognition and extraction of road markings from mobile vehicle-borne laser point clouds" (移动车载激光点云的道路标线自动识别与提取), 测绘与空间地理信息 (Geomatics & Spatial Geographic Information) * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112419360A (en) * | 2020-11-16 | 2021-02-26 | 北京理工大学 | Background removing and target image segmenting method based on stereo imaging |
CN112419360B (en) * | 2020-11-16 | 2023-02-21 | 北京理工大学 | Background removing and target image segmenting method based on stereo imaging |
CN112666553A (en) * | 2020-12-16 | 2021-04-16 | 动联(山东)电子科技有限公司 | Road ponding identification method and equipment based on millimeter wave radar |
CN112666553B (en) * | 2020-12-16 | 2023-04-18 | 动联(山东)电子科技有限公司 | Road ponding identification method and equipment based on millimeter wave radar |
CN113298910A (en) * | 2021-05-14 | 2021-08-24 | 阿波罗智能技术(北京)有限公司 | Method, apparatus and storage medium for generating traffic sign line map |
Also Published As
Publication number | Publication date |
---|---|
CN109726728B (en) | 2020-12-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Wen et al. | A deep learning framework for road marking extraction, classification and completion from mobile laser scanning point clouds | |
US20210365610A1 (en) | Procedural world generation using tertiary data | |
CN110135351B (en) | Built-up area boundary identification method and equipment based on urban building space data | |
US7707012B2 (en) | Simulated city generation | |
CN109165549B (en) | Road identification obtaining method based on three-dimensional point cloud data, terminal equipment and device | |
CN100562895C (en) | A kind of method of the 3 D face animation based on Region Segmentation and speced learning | |
CN101770581B (en) | Semi-automatic detecting method for road centerline in high-resolution city remote sensing image | |
CN109064549B (en) | Method for generating mark point detection model and method for detecting mark point | |
WO2017020465A1 (en) | Modelling method and device for three-dimensional road model, and storage medium | |
CN109726728A | Training data generation method and device | |
WO2017166371A1 (en) | Form optimization control method applied to peripheral buildings of open space and using evaluation of visible sky area | |
CN110782974A (en) | Method of predicting anatomical landmarks and apparatus for predicting anatomical landmarks using the method | |
CN108520197A (en) | A kind of Remote Sensing Target detection method and device | |
CN105678747B (en) | A kind of tooth mesh model automatic division method based on principal curvatures | |
JP6760781B2 (en) | 3D model generator, 3D model generation method and program | |
CN109858374B (en) | Automatic extraction method and device for arrow mark lines in high-precision map making | |
Wang et al. | Automatic high-fidelity 3D road network modeling based on 2D GIS data | |
WO2021051346A1 (en) | Three-dimensional vehicle lane line determination method, device, and electronic apparatus | |
CN112257772B (en) | Road increase and decrease interval segmentation method and device, electronic equipment and storage medium | |
CN106558051A (en) | A kind of improved method for detecting road from single image | |
CN103177451A (en) | Three-dimensional matching algorithm between adaptive window and weight based on picture edge | |
Tang et al. | Assessing the visibility of urban greenery using MLS LiDAR data | |
Bellusci et al. | Semantic interpretation of raw survey vehicle sensory data for lane-level HD map generation | |
CN110306809A (en) | Steel construction curtain wall location and installation method for correcting error based on BIM 3 D laser scanning | |
WO2021200037A1 (en) | Road deterioration determination device, road deterioration determination method, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | | Effective date of registration: 2020-04-30. Address after: Room 508, Floor 5, Building 4, No. 699 Wangshang Road, Changhe Street, Binjiang District, Hangzhou, Zhejiang 310052. Applicant after: Alibaba (China) Co.,Ltd. Address before: No. 8 Changsheng Road, Science and Technology Park, Changping District, Beijing 102200, China, 1-5. Applicant before: AUTONAVI SOFTWARE Co.,Ltd. |
GR01 | Patent grant | ||