CN111179152B - Road identification recognition method and device, medium and terminal - Google Patents


Info

Publication number
CN111179152B
CN111179152B (application CN201811341411.1A)
Authority
CN
China
Prior art keywords
dimensional
road
point cloud
cloud data
identification
Prior art date
Legal status
Active
Application number
CN201811341411.1A
Other languages
Chinese (zh)
Other versions
CN111179152A (en)
Inventor
吕天雄
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201811341411.1A
Publication of CN111179152A
Application granted
Publication of CN111179152B


Classifications

    • G06T3/06
    • G06T17/05 Geographic models
    • G06T7/13 Edge detection
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/20081 Training; Learning
    • G06T2207/30256 Lane; Road marking

Abstract

The embodiment of the invention discloses a road identification recognition method and device, a medium and a terminal. The recognition method may include: acquiring a three-dimensional point cloud data set, wherein the three-dimensional point cloud data set comprises a plurality of three-dimensional point cloud data points obtained by detecting a road; mapping the three-dimensional point cloud data set into a two-dimensional plane image; identifying a road identification in the two-dimensional plane image by adopting a machine learning model; and inversely mapping the road identification in the two-dimensional plane image to a three-dimensional coordinate position. The technical solution in the embodiment of the invention provides higher recognition accuracy.

Description

Road identification recognition method and device, medium and terminal
Technical Field
The present invention relates to the field of electronic map technologies, and in particular, to a method and apparatus for identifying a road identifier, a medium, and a terminal.
Background
An electronic map, i.e., a digital map, is a map that is stored and consulted digitally using computer technology and presented in a paperless manner from collected map data. Electronic maps are very widely used, for example in navigation or cruising.
The electronic map may include road identifiers of the road surface, and identifying the road identifiers of the road surface is a common way of generating the road identifiers in the electronic map.
The accuracy of existing road identification recognition methods remains to be improved.
Disclosure of Invention
The technical problem addressed by the embodiments of the invention is how to improve the accuracy of road identification recognition.
In order to solve the above technical problems, an embodiment of the present invention provides a method for identifying a road identifier, which may include: acquiring a three-dimensional point cloud data set, wherein the three-dimensional point cloud data set comprises a plurality of three-dimensional point cloud data points obtained by detecting a road; mapping the three-dimensional point cloud data set into a two-dimensional plane image; identifying a road mark in the two-dimensional plane image by adopting a machine learning model; and inversely mapping the road mark in the two-dimensional plane image to a three-dimensional coordinate position.
Optionally, the acquiring the three-dimensional point cloud data set includes: generating a three-dimensional frame at preset intervals along acquisition track information, wherein the acquisition track information is track information of equipment for acquiring the three-dimensional point cloud data points; acquiring three-dimensional point cloud data points in the three-dimensional frame; and obtaining the three-dimensional point cloud data set according to the three-dimensional point cloud data points in the three-dimensional frame.
Optionally, the cross section of the three-dimensional frame in the vertical height direction is square, and the preset interval is half of the side length of the square.
Optionally, after inversely mapping the road identifier in the two-dimensional plane image to the three-dimensional coordinates, the method further includes: if the three-dimensional coordinate positions of road identifiers of the same type obtained by inverse mapping overlap in the plane perpendicular to the height direction, taking the circumscribed rectangle of the overlapping road identifiers as the three-dimensional coordinate position of the road identifier.
Optionally, performing numerical expansion by taking the height value of the road surface as a central value to determine the height range of the three-dimensional frame; the surface range of the three-dimensional frame is set according to the width of the road surface, and the surface range is a range which is perpendicular to the height in the three-dimensional frame.
Optionally, obtaining the three-dimensional point cloud data set according to the three-dimensional point cloud data points in the three-dimensional frame includes: performing plane fitting on the three-dimensional point cloud data points in the three-dimensional frame to obtain the height of a fitting plane; and acquiring three-dimensional point cloud data points with the height values and the fitting plane within a preset range as the three-dimensional point cloud data set.
Optionally, mapping the three-dimensional point cloud data set into a two-dimensional plane image includes: and if the plurality of three-dimensional point cloud data points are mapped to the same coordinate in the two-dimensional plane image, taking the average value of the reflectivities of the plurality of three-dimensional point cloud data points as the reflectivities of the coordinate points.
Optionally, the mapping the three-dimensional point cloud data set into a two-dimensional plane image includes: orthographically projecting the three-dimensional point cloud data set to obtain projection data; and carrying out affine transformation on the projection data to obtain the two-dimensional plane image.
Optionally, the identifying the road identifier in the two-dimensional plane image by using a machine learning model includes: calculating the reflectivity range of the three-dimensional point cloud data points related to the two-dimensional plane image, and normalizing the reflectivity over that range to obtain a gray level image; and identifying the gray level image by adopting a machine learning model so as to obtain the road identifier in the two-dimensional plane image.
Optionally, identifying the grayscale image using a machine learning model includes: maximum value filtering is carried out on the gray level image, and a filtered gray level image is obtained; and identifying the filtered gray level image by adopting a machine learning model so as to obtain the road mark in the two-dimensional plane image.
Optionally, the identifying the road identifier in the two-dimensional plane image by using a machine learning model includes: and identifying the two-dimensional plane image by using the deep learning classification model.
Optionally, the identifying the road identifier in the two-dimensional plane image by using a machine learning model includes: and determining the position and the identification type of the road identification in the two-dimensional plane image by adopting a machine learning model.
Optionally, the identifying the location of the road mark in the two-dimensional plane image using the machine learning model includes: determining a road identification position obtained by identifying the two-dimensional plane image by adopting a machine learning model; and scanning the maximum value distribution of the pixel points in the edge range of the road identification position to determine the boundary of the road identification, wherein the edge range comprises a range for expanding the edge of the road identification position.
Optionally, the edge of the road identification recognition position includes: the outermost edge in the road direction.
Optionally, the inversely mapping the road identifier in the two-dimensional plane image to the three-dimensional coordinate position includes: inversely mapping the position of the road identifier in the two-dimensional plane image to the two-dimensional coordinates of the point cloud through inverse affine transformation; and determining a height value according to the three-dimensional point cloud data points adjacent to the two-dimensional coordinates among the originally stored three-dimensional point cloud data points, wherein the two-dimensional coordinates are combined with the height value to generate the three-dimensional coordinates of the road identifier.
Optionally, the identification type includes at least one of the following: left turn identification, right turn identification, straight line identification, and text type identification.
Optionally, the road identification recognition method further includes: and associating the identification type of the road identification to the three-dimensional coordinate position.
Optionally, the position of the road sign in the two-dimensional plane image is rectangular.
Optionally, mapping the three-dimensional point cloud data set into a two-dimensional plane image includes: mapping the three-dimensional point cloud data set to the PNG image, and compressing the height information contained in the position information in the three-dimensional point cloud data set to an alpha channel; the inverse mapping of the road identification in the two-dimensional plane image to the three-dimensional coordinate position comprises: and restoring the information in the alpha channel to be the height coordinate of the three-dimensional coordinate position.
The embodiment of the invention also provides a road identification device, which comprises: the system comprises a point cloud data set acquisition unit, a three-dimensional point cloud data set acquisition unit and a data processing unit, wherein the point cloud data set acquisition unit is suitable for acquiring a three-dimensional point cloud data set, and the three-dimensional point cloud data set comprises a plurality of three-dimensional point cloud data points obtained by detecting a road; the mapping unit is suitable for mapping the three-dimensional point cloud data set into a two-dimensional plane image; the identifying unit is suitable for identifying the road mark in the two-dimensional plane image by adopting a machine learning model; and the inverse mapping unit is suitable for inversely mapping the road mark in the two-dimensional plane image to the three-dimensional coordinate position.
Optionally, the point cloud data set acquisition unit includes: the three-dimensional frame generation subunit is suitable for generating a three-dimensional frame at preset intervals along acquisition track information, wherein the acquisition track information is track information of equipment for acquiring the three-dimensional point cloud data points; the acquisition subunit is suitable for acquiring three-dimensional point cloud data points in the three-dimensional frame; and the data set determining subunit is suitable for obtaining the three-dimensional point cloud data set according to the three-dimensional point cloud data points in the three-dimensional frame.
Optionally, the cross section of the three-dimensional frame in the vertical height direction is square, and the preset interval is half of the side length of the square.
Optionally, the inverse mapping unit is further adapted to, after inversely mapping the road identifiers in the two-dimensional plane image to the three-dimensional coordinates, if the three-dimensional coordinate positions of road identifiers of the same type obtained by inverse mapping overlap in a plane perpendicular to the height direction, take the circumscribed rectangle of the overlapping road identifiers as the three-dimensional coordinate position of the road identifier.
Optionally, the three-dimensional frame generating subunit is adapted to perform numerical expansion by taking the road surface height value as a central value to determine the height range of the three-dimensional frame; the surface range of the three-dimensional frame is set according to the width of the road surface, and the surface range is a range which is perpendicular to the height in the three-dimensional frame.
Optionally, the data set determining subunit includes: the plane fitting module is suitable for carrying out plane fitting on the three-dimensional point cloud data points in the three-dimensional frame to obtain the height of a fitting plane; the data set module is suitable for acquiring three-dimensional point cloud data points with the height values and the fitting plane within a preset range as the three-dimensional point cloud data set.
Optionally, the mapping unit is further adapted to: when a plurality of three-dimensional point cloud data points are mapped to the same coordinate in the two-dimensional plane image, taking the average value of the reflectivities of the three-dimensional point cloud data points as the reflectivities of the coordinate points.
Optionally, the mapping unit includes: the projection subunit is suitable for orthographically projecting the three-dimensional point cloud data set to obtain projection data; and the affine transformation subunit is suitable for carrying out affine transformation on the projection data to obtain the two-dimensional plane image.
Optionally, the identifying unit includes: the normalization subunit, adapted to calculate the reflectivity range of the three-dimensional point cloud data points related to the two-dimensional plane image and to normalize the reflectivity to obtain a gray level image; and the gray level image identification subunit, adapted to identify the gray level image by adopting a machine learning model so as to obtain the road identification in the two-dimensional plane image.
Optionally, the gray image recognition subunit includes: the maximum value filtering module is suitable for carrying out maximum value filtering on the gray level image to obtain a filtered gray level image; and the identification module is suitable for identifying the filtered gray level image by adopting a machine learning model so as to obtain the road identification in the two-dimensional plane image.
Optionally, the identification unit is adapted to identify the two-dimensional planar image using a deep learning classification model.
Optionally, the identification unit is adapted to determine the location of the road marking and the marking type within the two-dimensional planar image using a machine learning model.
Optionally, the identifying unit includes: the road sign recognition position subunit is suitable for determining a road sign recognition position obtained by recognizing the two-dimensional plane image by adopting a machine learning model; and the scanning subunit is suitable for scanning the maximum value distribution of the pixel points in the edge range of the road identification position, and is suitable for determining the boundary of the road identification, wherein the edge range comprises a range for expanding the edge of the road identification position.
Optionally, the edge of the road identification recognition position includes: the outermost edge in the road direction.
Optionally, the inverse mapping unit includes: an inverse affine transformation subunit adapted to inversely map the position of the road identification within the two-dimensional plane image to the two-dimensional coordinates of the point cloud by inverse affine transformation; and the three-dimensional coordinate generation subunit, adapted to determine a height value according to the three-dimensional point cloud data points adjacent to the two-dimensional coordinates among the originally stored three-dimensional point cloud data points, the three-dimensional coordinates of the road identification being generated by combining the two-dimensional coordinates with the height value.
Optionally, the identification type includes at least one of the following: left turn identification, right turn identification, straight line identification, and text type identification.
Optionally, the road identifier identifying device further includes: and the association unit is suitable for associating the identification type of the road identification to the three-dimensional coordinate position.
Optionally, the position of the road sign in the two-dimensional plane image is rectangular.
Optionally, the mapping unit is adapted to map the three-dimensional point cloud data set to the PNG image, and compress height information contained in the position information in the three-dimensional point cloud data set to the alpha channel; the inverse mapping unit is suitable for restoring the information in the alpha channel into the height coordinate of the three-dimensional coordinate position.
The embodiment of the invention also provides a computer readable storage medium, wherein the computer instructions are stored on the computer readable storage medium, and the computer instructions execute the steps of the road identification recognition method when running.
The embodiment of the invention also provides a terminal, which comprises a memory and a processor, wherein the memory stores computer instructions capable of running on the processor, and the computer instructions execute the steps of the road identification recognition method when running.
Compared with the prior art, the technical scheme of the embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, a three-dimensional point cloud data set is acquired and mapped into a two-dimensional plane image; after the road identification in the two-dimensional plane image is recognized, it is inversely mapped to a three-dimensional coordinate position. Because a machine learning model is used to recognize the road identification in the two-dimensional plane image, the diversity of the training samples makes the recognition method more adaptable and suitable for two-dimensional plane images obtained under different conditions, so the recognition result is more accurate and the road identification recognition method in the embodiment of the invention has higher accuracy.
Further, the cross section of the three-dimensional frame is set to be square and the preset interval is half of its side length, so that adjacent three-dimensional frames overlap; this increases the probability that a complete road identification is contained within a three-dimensional frame and can further improve recognition accuracy.
Further, plane fitting is performed on the three-dimensional point cloud data points in the three-dimensional frame to obtain the height of a fitting plane, and the three-dimensional point cloud data points whose height values are within a preset range of the fitting plane are taken as the three-dimensional point cloud data set. In this way the three-dimensional point cloud data can be cleaned and the points near the road surface height screened out, so the data in the three-dimensional point cloud data set are more accurate; because the subsequent recognition is performed on this more accurate data set, the accuracy of the road identification recognition method can be improved.
Furthermore, the gray level image obtained by carrying out normalization processing through calculating the reflectivity range of the three-dimensional point cloud data points related to the two-dimensional plane image is more suitable for machine learning model identification, and the accuracy of identification can be further improved.
Further, maximum filtering of the gray level image fills the mesh-like black holes caused by the scanning pattern of the three-dimensional point cloud data, so the filtered gray level image is closer to the actual road surface marking and easier for the machine learning model to recognize; recognizing the filtered gray level image can therefore improve recognition accuracy.
Drawings
FIG. 1 is a flowchart of a road identification recognition method according to an embodiment of the present invention;
FIG. 2 is a flow chart of acquiring a three-dimensional point cloud data set in an embodiment of the invention;
FIG. 3 is a flow chart of a method for obtaining a three-dimensional point cloud data set from three-dimensional point cloud data points within a three-dimensional frame in accordance with an embodiment of the present invention;
FIG. 4 is a schematic representation of an image form prior to affine transformation in an embodiment of the invention;
FIG. 5 is a schematic representation of an affine transformed image form in an embodiment of the invention;
FIG. 6 is a flow chart of identifying a road marking within the two-dimensional planar image in an embodiment of the invention;
FIG. 7 is a flow chart of a method for identifying the grayscale image in an embodiment of the invention;
FIG. 8 is a flow chart of identifying the location of a road marking within the two-dimensional planar image in an embodiment of the invention;
FIG. 9 is a flow chart of inverse mapping road identifications in the two-dimensional planar image to three-dimensional coordinate positions in accordance with an embodiment of the present invention;
FIG. 10 is a schematic diagram showing the result of fusing identification tags in an embodiment of the present invention;
FIG. 11 is a schematic diagram of a road sign recognition device according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of a point cloud data set acquisition unit according to an embodiment of the present invention;
FIG. 13 is a schematic diagram of a data set determination subunit according to an embodiment of the present invention;
FIG. 14 is a schematic diagram of a mapping unit according to an embodiment of the present invention;
FIG. 15 is a schematic diagram of an identification unit according to an embodiment of the present invention;
FIG. 16 is a schematic diagram showing a gray scale image recognition subunit according to an embodiment of the present invention;
FIG. 17 is a schematic diagram of another identification cell in an embodiment of the invention;
fig. 18 is a schematic diagram of an inverse mapping unit according to an embodiment of the present invention.
Detailed Description
As described above, the accuracy of the existing road identification recognition method is to be improved.
In one road identification recognition method, an image of the road surface may be acquired, and the road identification therein determined by recognizing the image. The road surface images usually need to be recorded and played back manually, so both the efficiency and the recognition accuracy are low.
In another road identification recognition method, three-dimensional point cloud data obtained by collecting road surfaces can be recognized, and road identification in the three-dimensional point cloud data is obtained. Compared with directly acquired image data, the efficiency is higher because the data acquisition does not need manual intervention. But the accuracy of identifying the road identification in the three-dimensional point cloud data still needs to be improved.
For example, when the ground markings are worn, occluded, or adhered together, it is difficult to determine an accurate threshold with which to segment the complete marking area, and such recognition methods generally need lane-line assistance to generate the detection area. The recognition precision and the degree of automation of road identification recognition are therefore difficult to meet the requirements of high-precision map production.
In the embodiment of the invention, a three-dimensional point cloud data set is acquired, the three-dimensional point cloud data set is mapped into a two-dimensional plane image, and after the road mark in the two-dimensional plane image is identified, the road mark in the two-dimensional plane image is inversely mapped to a three-dimensional coordinate position.
By adopting the machine learning model to identify the road mark in the two-dimensional plane image, the adaptability of the identification method is stronger by utilizing the diversity of the training sample, for example, the conditions of abrasion, shielding, adhesion and the like of the ground mark can be covered in the training sample, so that the machine learning model is suitable for identifying the two-dimensional plane image under different conditions, and the identification result is more accurate, and the accuracy of the road mark identification method in the embodiment of the invention is higher.
In order to make the above objects, features and advantages of the present invention more comprehensible, embodiments accompanied with figures are described in detail below.
Fig. 1 is a flowchart of a road identifier identifying method according to an embodiment of the present invention, which may specifically include steps S11 to S14.
Step S11, acquiring a three-dimensional point cloud data set, wherein the three-dimensional point cloud data set comprises a plurality of three-dimensional point cloud data points obtained by detecting a road.
The point cloud data may be a laser point cloud, and specifically, the corresponding surface characteristics, such as reflectivity, may be obtained by using a laser radar (LiDAR) under the same spatial reference system for each sampling point on the object surface, where each sampling point may correspond to a three-dimensional point cloud data point.
In a specific implementation, the three-dimensional point cloud data set may be point cloud data within a certain region range, or may also be point cloud data obtained by screening point cloud data within a certain region range.
And step S12, mapping the three-dimensional point cloud data set into a two-dimensional plane image.
The position coordinates of each data point in the three-dimensional point cloud data set are three-dimensional, each three-dimensional point cloud data point can be mapped to a coordinate point of a two-dimensional planar image, and surface characteristic data, such as reflectivity data, of the three-dimensional point cloud data can be mapped to characteristic data, such as gray values, of the planar image.
And S13, identifying the road mark in the two-dimensional plane image by adopting a machine learning model.
As described above, the three-dimensional point cloud data may include a plurality of data points, i.e., three-dimensional point cloud data points, obtained by sampling with a laser radar (LiDAR) under the same spatial reference system. After the three-dimensional point cloud data set is converted into the two-dimensional plane image, the position information in the coordinate system can still be recovered from the coordinate points of the two-dimensional plane image, which is not possible with a directly acquired road surface image. Implementations that directly acquire road surface images therefore generally require video recording and manual intervention, whereas in the embodiment of the present invention the two-dimensional plane image obtained by converting the three-dimensional point cloud data set allows the position information to be recovered, so the recognition efficiency can be improved.
In addition, the road identification in the two-dimensional plane image is identified by adopting the machine learning model, and the two-dimensional plane image is identified by adopting the machine learning model, so that the adaptability of the identification method is stronger by utilizing the diversity of training samples, the method is suitable for identifying the two-dimensional plane image under different conditions, and the identification result is more accurate.
The machine learning model can be a model adopted in a machine learning algorithm, the input image can be identified by adopting the machine learning model, and the input of the machine learning model can be an original two-dimensional plane image or an image obtained by processing the two-dimensional plane image. The machine learning model recognizes the foregoing inputs and can determine the type of road marking within the two-dimensional planar image. The machine learning model may be selected from various machine learning models that can be implemented by those skilled in the art, such as a deep learning model, or other models suitable for image recognition, and the like, without limitation.
And S14, inversely mapping the road mark in the two-dimensional plane image to a three-dimensional coordinate position.
As previously described, the two-dimensional plane may recover the location information, which may be two-dimensional, and in particular implementations, the three-dimensional location may be recovered in combination with the height value by determining the height value from the data adjacent to the two-dimensional coordinates in the originally saved three-dimensional point cloud data points.
In a specific implementation, referring to fig. 2, acquiring the three-dimensional point cloud data set may include steps S21 to S23.
And S21, generating a three-dimensional frame at preset intervals along acquisition track information, wherein the acquisition track information is track information of equipment for acquiring the three-dimensional point cloud data points.
In a specific implementation, the device for acquiring the three-dimensional point cloud data may be a laser acquisition device, and may be loaded on an acquisition vehicle. The trajectory information may be obtained by a global positioning system (Global Positioning System, GPS) to locate the laser acquisition device or a vehicle on which the laser acquisition device is mounted, which may be referred to as an acquisition trajectory. Track information may also be generated based on the positioning of other systems.
In particular implementations, the height range of the three-dimensional frame may be a range near the road surface. For example, if the road surface height is determined to be Z1, the height range of the three-dimensional frame may be (Z1 - α, Z1 + α), where the value of α may be 0.5 m. The road surface height Z1 can be obtained by subtracting the vehicle height from the height value in the GPS coordinates; the vehicle height can be preset and may be the height above the road surface of the position where the GPS device is mounted.
The cross section of the three-dimensional frame perpendicular to the height direction can be square, and three-dimensional point cloud data can be acquired with a three-dimensional frame once every half of the side length of the square. For example, the side length of the three-dimensional frame may be 20 meters, and the point cloud data in a three-dimensional frame may be acquired along the acquisition track every 10 meters.
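As a minimal illustration of this step (not part of the patent text), the sketch below places overlapping cubic frames along a collection trajectory and selects the points falling inside each frame; the 20 m side length and α = 0.5 m are the example values above, and all function and variable names are hypothetical.

```python
import numpy as np

def generate_frames(track_xy, road_height, side=20.0, alpha=0.5):
    """Place a cubic frame every side/2 metres along the collection track.

    track_xy    : (N, 2) array of GPS positions projected to a planar x/y system
    road_height : estimated road-surface height Z1 (GPS height minus vehicle height)
    Returns a list of (x_min, x_max, y_min, y_max, z_min, z_max) frames.
    """
    step = side / 2.0                      # half the side length -> adjacent frames overlap
    frames, travelled = [], 0.0
    last = track_xy[0]
    for p in track_xy[1:]:
        travelled += np.linalg.norm(p - last)
        last = p
        if travelled >= step:              # drop a new frame every `step` metres of track
            travelled = 0.0
            cx, cy = p
            frames.append((cx - side / 2, cx + side / 2,
                           cy - side / 2, cy + side / 2,
                           road_height - alpha, road_height + alpha))
    return frames

def points_in_frame(points, frame):
    """Select point-cloud rows (x, y, z, reflectivity) whose position falls inside a frame."""
    x0, x1, y0, y1, z0, z1 = frame
    m = ((points[:, 0] >= x0) & (points[:, 0] <= x1) &
         (points[:, 1] >= y0) & (points[:, 1] <= y1) &
         (points[:, 2] >= z0) & (points[:, 2] <= z1))
    return points[m]
```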
And S22, acquiring three-dimensional point cloud data points in the three-dimensional frame.
As previously described, a three-dimensional point cloud data point is a data point obtained by detecting a sampling point, and may include three-dimensional geographic position information and surface characteristics of the sampling point, such as reflectivity. The three-dimensional frame may be a cube frame, a cross-section of which perpendicular to the height direction may be square, the three-dimensional frame may define a spatial range, and the three-dimensional point cloud data points within the three-dimensional frame may be data points whose positions fall within the three-dimensional frame.
Step S23, obtaining the three-dimensional point cloud data set according to the three-dimensional point cloud data points in the three-dimensional frame.
In specific implementation, the three-dimensional point cloud data points in the three-dimensional frame can be further screened to obtain a three-dimensional point cloud data set.
For example, referring to fig. 3, deriving the three-dimensional point cloud data set from the three-dimensional point cloud data points within the three-dimensional frame may include:
step S31, carrying out plane fitting on the three-dimensional point cloud data points in the three-dimensional frame to obtain the height of a fitting plane;
and S32, acquiring three-dimensional point cloud data points with the height values and the fitting plane within a preset range as the three-dimensional point cloud data set.
In particular implementations, any plane-fitting method that can be implemented by those skilled in the art can be used in step S31; there is no limitation in this regard. If the height of the fitting plane is Z2, then (Z2 - β, Z2 + β) can be taken as the preset range. Because it is obtained by plane fitting, (Z2 - β, Z2 + β) is more accurate than (Z1 - α, Z1 + α).
Impurities in the data can be removed by acquiring a rough height range and then further refining the data based on a plane fitting method, for example, three-dimensional point cloud data points of objects with equal distances to the ground, such as a cone barrel, a guardrail and a front car and a rear car, can be removed, the influence of the impurity data on subsequent data is reduced, and the identification is more accurate.
After the plane fitting is performed, a plurality of three-dimensional point cloud data points can be obtained, and the position information of the three-dimensional point cloud data points can be changed from a form of (x, y, z) to a form of (x, y, 0), wherein z is height information in the position information, and x and y are information of two dimensions of a two-dimensional plane perpendicular to the height direction respectively.
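One possible way to realize the plane fitting and the refined height filtering described above is sketched below; a simple least-squares plane is assumed, and the tolerance β = 0.15 m is a hypothetical value, not prescribed by the patent.

```python
import numpy as np

def fit_plane_height(points_xyz):
    """Least-squares fit z = a*x + b*y + c and return the fitted height Z2 at the frame centre."""
    A = np.c_[points_xyz[:, 0], points_xyz[:, 1], np.ones(len(points_xyz))]
    coeffs, *_ = np.linalg.lstsq(A, points_xyz[:, 2], rcond=None)
    cx, cy = points_xyz[:, :2].mean(axis=0)
    return coeffs[0] * cx + coeffs[1] * cy + coeffs[2]

def filter_near_plane(points, beta=0.15):
    """Keep points whose height lies within (Z2 - beta, Z2 + beta) of the fitted plane."""
    z2 = fit_plane_height(points[:, :3])
    keep = np.abs(points[:, 2] - z2) < beta
    return points[keep]
```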
In a specific implementation, when mapping the three-dimensional point cloud data set to the two-dimensional plane image, if a plurality of three-dimensional point cloud data points are mapped to the same coordinate in the two-dimensional plane image, taking the average value of the reflectivities of the three-dimensional point cloud data points as the reflectivities of the coordinate points.
For example, if the position information of each of a plurality of three-dimensional data points is (x3, y3, 0), then these point cloud data points map to the same coordinates in the two-dimensional plane image, where x3 and y3 are specific values of the two dimensions of the two-dimensional plane. In this scenario, the reflectivities of these three-dimensional point cloud data points can be averaged, and the average is taken as the reflectivity value of that coordinate point in the two-dimensional plane image.
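The averaging of reflectivity values that collide on the same pixel could be implemented as in the following sketch; the 0.05 m grid resolution is an assumed value and the column layout (reflectivity in column 3) is hypothetical.

```python
import numpy as np

def rasterize_reflectivity(points, frame, resolution=0.05):
    """Accumulate reflectivity per pixel and divide by the hit count (mean on collisions)."""
    x0, x1, y0, y1, _, _ = frame
    w = int(round((x1 - x0) / resolution))
    h = int(round((y1 - y0) / resolution))
    acc = np.zeros((h, w), dtype=np.float64)     # summed reflectivity per pixel
    cnt = np.zeros((h, w), dtype=np.int32)       # number of points hitting each pixel
    cols = np.clip(((points[:, 0] - x0) / resolution).astype(int), 0, w - 1)
    rows = np.clip(((points[:, 1] - y0) / resolution).astype(int), 0, h - 1)
    np.add.at(acc, (rows, cols), points[:, 3])   # reflectivity assumed in column 3
    np.add.at(cnt, (rows, cols), 1)
    mean = np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)
    return mean, cnt
```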
In an implementation, mapping the three-dimensional point cloud data set into a two-dimensional planar image may include: orthographically projecting the three-dimensional point cloud data set to obtain projection data, and carrying out affine transformation on the projection data to obtain the two-dimensional plane image.
The method of orthographic projection may be to change the position information of the three-dimensional point cloud data point from the form of (x, y, z) to the form of (x, y, 0). The projection data may include a plurality of transformed data in a plurality of (x, y, 0) forms.
Referring to fig. 4 and 5, fig. 4 illustrates an image form before affine transformation, and fig. 5 illustrates an image form after affine transformation. In a specific implementation, affine transformation relationships required in affine transformation can be established according to position information of a data point cloud set.
For example, the affine transformation relationship can be established with the size and shape of the affine-transformed target image according to the plane of the three-dimensional frame perpendicular to the height direction. The region range illustrated in fig. 4 may be a plane of the three-dimensional frame perpendicular to the height direction, and the region range illustrated in fig. 5 may be a size and shape of the target image after affine transformation. The affine transformation relationship may be presented in the form of an affine transformation matrix.
The affine transformation is carried out, so that the view angles of projection data obtained after projection of different three-dimensional point cloud data sets can be unified, and the data with the same view angle can be identified when the road identification is carried out later, thereby avoiding identification errors caused by non-unification of standards and improving the identification accuracy.
The coordinates of the three-dimensional point cloud data points in the plane perpendicular to the height direction can form a mapping relationship with the coordinates of the data points in the two-dimensional image after affine transformation. Thus, after the two-dimensional image is identified, the identification result can be inversely mapped to the three-dimensional coordinate position.
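A sketch of the affine mapping, with the forward matrix kept so that it can later be inverted for step S14, is given below. OpenCV is assumed, the 512 x 512 target image size is illustrative, and the choice of corner correspondences is hypothetical.

```python
import cv2
import numpy as np

def build_affine(frame_corners_xy, img_size=512):
    """frame_corners_xy: three corners of the frame's horizontal plane, in point-cloud x/y."""
    src = np.float32(frame_corners_xy)                       # e.g. lower-left, lower-right, upper-left
    dst = np.float32([[0, img_size - 1], [img_size - 1, img_size - 1], [0, 0]])
    M = cv2.getAffineTransform(src, dst)                     # 2x3 forward affine matrix
    M_inv = cv2.invertAffineTransform(M)                     # kept for the later inverse mapping
    return M, M_inv

def project_points(points_xy, M):
    """Apply the affine transform to orthographically projected (x, y) coordinates."""
    pts = np.c_[points_xy, np.ones(len(points_xy))]
    return pts @ M.T                                         # pixel coordinates (u, v)
```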
It will be appreciated by those skilled in the art that the graphs of fig. 4 and 5 are illustrative only and are not limiting on the shape, reflectivity or gray scale of the actual graph.
Referring to fig. 6, in an implementation, identifying the road identification within the two-dimensional planar image using a machine learning model may include step S61 and step S62.
Step S61, calculating the reflectivity range of the three-dimensional point cloud data points related to the two-dimensional plane image, and normalizing the reflectivity to obtain a gray level image.
In a specific implementation, the three-dimensional point cloud data points related to the two-dimensional planar image may include each point cloud data point in a range of position locations corresponding to the two-dimensional planar image.
In particular, the point cloud data points may be data points after the aforementioned plane fitting and data screening. Therefore, impurities in the data can be prevented from influencing the normalization processing result during the normalization processing, and the accuracy of subsequent identification is higher.
In other implementations, the data points may be data points obtained by obtaining the reflectivity average when a plurality of three-dimensional point cloud data points are mapped to the same coordinates in the two-dimensional plane image. By calculating the reflectivity mean value, the data of the data point can be more accurate, and the identification result can be more accurate.
In a specific implementation, performing the normalization process may include: obtaining the maximum and minimum reflectivity values of the affine-transformed two-dimensional image, and mapping the reflectivity onto a gray value space, for example the numerical range 0 to 255.
The gray level image obtained by carrying out normalization processing through calculating the reflectivity range of the three-dimensional point cloud data points related to the two-dimensional plane image is more suitable for machine learning model identification, and the accuracy of identification can be further improved.
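A min-max normalization of the reflectivity onto the 0-255 gray range, as described above, might look like this sketch (function name is hypothetical).

```python
import numpy as np

def to_gray(reflectivity_img):
    """Map reflectivity values linearly onto the 0-255 gray scale."""
    lo, hi = reflectivity_img.min(), reflectivity_img.max()
    if hi <= lo:                            # flat image: avoid division by zero
        return np.zeros_like(reflectivity_img, dtype=np.uint8)
    gray = (reflectivity_img - lo) / (hi - lo) * 255.0
    return gray.astype(np.uint8)
```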
And step S62, recognizing the gray level image by adopting a machine learning model to obtain the road mark in the two-dimensional plane image.
In the specific implementation, in the process of recognizing the gray level image by adopting the machine learning model, the gray level image can be processed first, and the processed gray level image is recognized by adopting the machine learning model, so that the road recognition mark in the two-dimensional plane image is obtained.
For example, referring to fig. 7, identifying the grayscale image using a machine learning model may include:
step S71, maximum value filtering is carried out on the gray level image, and a filtered gray level image is obtained;
and step S72, recognizing the filtered gray level image by adopting a machine learning model so as to obtain the road mark in the two-dimensional plane image.
In a specific implementation, when three-dimensional point cloud data are obtained, the laser acquisition device can obtain the value of the reflectivity through a laser scanning line. Typically the scan lines may be interlaced, in which case if a two-dimensional image is acquired directly, it may include mesh black holes. These mesh black holes can be filled by maximum filtering the gray image. The window size for maximum filtering of the gray image may be set to fill the mesh black holes.
By performing maximum filtering on the gray level image, the mesh-like black holes caused by the scanning pattern of the three-dimensional point cloud data are filled, so the filtered gray level image is closer to the actual road surface marking and easier for the machine learning model to recognize; recognizing the filtered gray level image can therefore improve recognition accuracy.
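Maximum filtering to fill the mesh-like black holes can be done, for example, with SciPy; the 3 x 3 window size is an assumption and would be chosen to match the gaps between scan lines.

```python
from scipy.ndimage import maximum_filter

def fill_scan_holes(gray, window=3):
    """Replace each pixel by the maximum in its neighbourhood, closing dark scan-line gaps."""
    return maximum_filter(gray, size=window)

# e.g. filtered = fill_scan_holes(to_gray(reflectivity_img), window=3)
```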
In an implementation, identifying the road identification within the two-dimensional planar image using a machine learning model may include: identifying the two-dimensional plane image by using a deep learning classification model. Specifically, convolutional neural network (CNN) detection models such as Faster R-CNN, R-FCN, or Mask R-CNN may be employed. Alternatively, the recognition may be performed using other deep learning models suitable for image processing that can be implemented by those skilled in the art, and the present invention is not limited thereto.
The two-dimensional plane image can be identified directly or the gray-scale image obtained by processing in the above manner can be identified.
When training the deep learning model, diverse samples can be used; for example, worn, occluded, or adhered markings can be included as training samples. In this way, the recognition accuracy can be improved when the deep learning model is used for recognition.
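The patent leaves the choice of detection model open; purely as one illustrative possibility, a torchvision Faster R-CNN could be fine-tuned on gray-scale tiles of road markings. The class list and checkpoint path below are hypothetical, not taken from the patent.

```python
import torch
import torchvision

CLASSES = ["background", "straight_arrow", "left_arrow", "right_arrow", "text"]  # assumed labels

def load_detector(weights_path="road_marking_frcnn.pth"):
    """Build a Faster R-CNN head sized for the assumed classes and load fine-tuned weights."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=len(CLASSES))
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    return model.eval()

def detect(model, gray_u8):
    """gray_u8: HxW uint8 tile; returns boxes, labels and scores predicted by the detector."""
    img = torch.from_numpy(gray_u8).float().div(255.0).unsqueeze(0).repeat(3, 1, 1)
    with torch.no_grad():
        out = model([img])[0]
    return out["boxes"], out["labels"], out["scores"]
```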
In implementations, identifying the road identification within the two-dimensional planar image using a machine learning model may include identifying a location and a category of the road identification. Further description is given below.
Referring to fig. 8, identifying the location of the road marking within the two-dimensional plane image may include step S81 and step S82.
Step S81, a machine learning model is adopted to determine a road identification recognition position obtained by recognizing the two-dimensional plane image.
The machine learning model used to identify the image may be varied, for example, a deep learning classification model may be used for the identification. After identification, a rough location of the road identification, which may be referred to as a road identification location, may be obtained.
And S82, scanning the maximum value distribution of the pixel points in the edge range of the road identification position, and determining the boundary of the road identification, wherein the edge range comprises a range for expanding the edge of the road identification position.
The pixel points within the edge range may be pixels whose distance from the edge of the location identified by the road sign is within a preset threshold range. By carrying out progressive scanning on pixel points in the edge range, the boundary of the road mark can be more accurately determined, and the accuracy of the identification method is improved.
In a specific implementation, the location of the road identifier in the two-dimensional plane image may be the location of a rectangle corresponding to the road identifier. In particular, the rectangle may be a circumscribed rectangle of the road identifier. For example, referring to fig. 5, the position of the straight arrow may be the position indicated by the dashed box 51. Accordingly, the rectangle can be restored to a plane position perpendicular to the height direction in the three-dimensional coordinates.
In the foregoing scanning maximum value distribution, data within a preset range from the outer edge in the road direction, for example, data within a preset range from both sides in the AB direction of the dotted rectangular frame 51 in fig. 5 may be scanned.
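A sketch of this boundary refinement: pixel rows within a margin around the detected box are scanned along the road direction, and the rows whose maxima exceed a threshold define the refined outermost edges. The margin and threshold values are assumptions, and the road direction is assumed to coincide with the image row axis.

```python
import numpy as np

def refine_road_direction_bounds(gray, box, margin=10, thresh=128):
    """box = (x0, y0, x1, y1) in integer pixels; returns refined (y_top, y_bottom)."""
    x0, y0, x1, y1 = box
    top = max(y0 - margin, 0)
    bottom = min(y1 + margin, gray.shape[0])
    strip = gray[top:bottom, x0:x1]
    row_max = strip.max(axis=1)                 # maximum gray value of each scanned row
    bright = np.flatnonzero(row_max >= thresh)  # rows that still contain marking pixels
    if bright.size == 0:
        return y0, y1                           # keep the detector's box if nothing is found
    return top + bright[0], top + bright[-1]    # refined outermost edges in the road direction
```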
In the implementation, the result of the road identification can be used for generating an electronic map or for real-time cruising, and the boundary of the road identification in the road direction is important, so that the boundary of the road identification in the road direction can be accurately determined, and a solid foundation can be laid for subsequent application.
Referring to fig. 9, inversely mapping the road identifications within the two-dimensional planar image to three-dimensional coordinate positions may include:
step S91, inversely mapping the position of the road mark in the two-dimensional plane image to the two-dimensional coordinates of the point cloud through inverse affine transformation;
and step S92, determining a height value according to three-dimensional cloud point data points adjacent to the data of the two-dimensional coordinates in the original stored three-dimensional cloud point data points, wherein the two-dimensional coordinates are combined with the height value to generate three-dimensional coordinates of the road mark.
The inverse affine transformation inverse mapping may be performed according to an affine transformation relationship applied in the affine transformation.
In specific implementations, the height value of the originally stored three-dimensional point cloud data point whose two-dimensional coordinates are closest to the two-dimensional coordinates obtained by the inverse affine transformation can be used as the height value of the transformed three-dimensional coordinates.
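Inverse mapping of a detected pixel position back to three-dimensional coordinates could be sketched as follows; the KD-tree nearest-neighbour lookup over the originally stored points is an implementation choice, not prescribed by the patent, and M_inv is the inverted affine matrix kept from the forward mapping.

```python
import numpy as np
from scipy.spatial import cKDTree

def pixel_to_3d(pixel_uv, M_inv, stored_points):
    """pixel_uv: (u, v) in the 2-D image; stored_points: originally stored (x, y, z) rows."""
    u, v = pixel_uv
    x, y = (M_inv @ np.array([u, v, 1.0]))[:2]     # inverse affine -> point-cloud x, y
    tree = cKDTree(stored_points[:, :2])           # index the original x/y coordinates
    _, idx = tree.query([x, y])                    # nearest originally stored point
    z = stored_points[idx, 2]                      # take its height as the z value
    return np.array([x, y, z])
```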
In a specific implementation, when the three-dimensional point cloud data set is mapped into a two-dimensional plane image, a 4-channel Portable Network Graphics (PNG) image can also be adopted, with the ground height information compressed into its alpha channel, so as to map the three-dimensional point cloud data into the two-dimensional image. In this way the height information, namely the z value, can be restored directly during inverse mapping, avoiding a search for the z value in the stored three-dimensional point cloud data and improving recognition efficiency. The alpha channel is an 8-bit gray scale image channel.
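Packing ground height into the alpha channel of a 4-channel PNG, as described above, could be done roughly as below. The quantization range (z_min, z_max) is an assumption, imageio is used as an example I/O library, and an 8-bit channel limits the vertical resolution accordingly.

```python
import numpy as np
import imageio.v3 as iio

def write_png_with_height(gray, height_map, z_min, z_max, path="tile.png"):
    """Store the uint8 gray image in RGB and the quantized ground height in the alpha channel."""
    alpha = np.clip((height_map - z_min) / (z_max - z_min), 0.0, 1.0) * 255.0
    rgba = np.dstack([gray, gray, gray, alpha.astype(np.uint8)])
    iio.imwrite(path, rgba)

def read_height(path, z_min, z_max):
    """Recover the height coordinate directly from the alpha channel."""
    rgba = iio.imread(path)
    return rgba[..., 3].astype(np.float32) / 255.0 * (z_max - z_min) + z_min
```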
As described above, identifying the road mark in the two-dimensional plane image can obtain the position and the mark type of the road mark. In a specific implementation, the identification type may be represented in a three-dimensional coordinate position obtained by inversely mapping the road identification in the two-dimensional plane image. Or when the position cannot embody the identification type, the identification type of the road identification can be associated to the three-dimensional coordinate position.
In particular implementations, the identification type may include at least one of: left turn identification, right turn identification, straight line identification, and text type identification. The identification type can be other and more types according to the requirements of the actual application scene. Correspondingly, a machine learning model is adopted to identify the road mark in the two-dimensional plane image, so that the mark type and the position of the rectangle corresponding to the road mark can be obtained.
In a specific implementation, when the identification result is stored in data, the identification result may be stored in the manner shown in table 1.
Where Obj_id may be an index of the recognized road identification, Left may be the center point of the left boundary of the aforementioned rectangle, Top the center point of the upper boundary, Right the center point of the right boundary, Bottom the center point of the lower boundary, and type may be the identification type; Straight_arrow and left_arrow represent the straight-ahead identification and the left turn identification, respectively, and text represents the text type identification.
Obj_id   Left     Top      Right    Bottom   type
0000     x,y,z    x,y,z    x,y,z    x,y,z    Straight_arrow
0001     x,y,z    x,y,z    x,y,z    x,y,z    left_arrow
0002     x,y,z    x,y,z    x,y,z    x,y,z    text

TABLE 1
Those skilled in the art will appreciate that the storage means may take other forms, and is not limited herein.
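For illustration only, the record layout of Table 1 could be represented by a small data class; the field names follow the table and nothing beyond it is implied.

```python
from dataclasses import dataclass
from typing import Tuple

Point3 = Tuple[float, float, float]

@dataclass
class RoadMarkingRecord:
    obj_id: str           # index of the recognized road identification, e.g. "0000"
    left: Point3          # centre point of the rectangle's left boundary (x, y, z)
    top: Point3           # centre point of the upper boundary
    right: Point3         # centre point of the right boundary
    bottom: Point3        # centre point of the lower boundary
    type: str             # "Straight_arrow", "left_arrow", "text", ...
```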
With continued reference to fig. 1, in an implementation, after inversely mapping the road identifiers in the two-dimensional plane image to the three-dimensional coordinates, the method may further include step S15: if the three-dimensional coordinate positions of road identifiers of the same type obtained by inverse mapping overlap in the plane perpendicular to the height direction, taking the circumscribed rectangle of the overlapping road identifiers as the three-dimensional coordinate position of the road identifier.
For example, referring to fig. 10 in combination, the rectangle DELK is the position of one road identification in the plane perpendicular to the height direction, the rectangle CFHI is the position of another road identification, and the two identifications are of the same type; the rectangle CFHI may then be taken as the position of the road identification in that plane. The coordinates in the height direction can be determined by taking points on the circumscribed rectangle CFHI, for example the midpoints of the four sides CF, FH, HI and CI, and then determining the height coordinate of each midpoint from the points adjacent to it among the originally stored three-dimensional point cloud data points.
Fusing the recognition results in this way avoids inaccurate position information caused by a single three-dimensional frame not containing the complete road identification, and improves recognition accuracy.
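Fusing two overlapping detections of the same type into their circumscribed rectangle, as in the DELK/CFHI example, might be sketched as follows (axis-aligned rectangles in the plane perpendicular to the height direction are assumed).

```python
def overlaps(a, b):
    """a, b = (x0, y0, x1, y1) rectangles; True if their interiors intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def circumscribed(a, b):
    """Smallest axis-aligned rectangle containing both inputs."""
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def fuse_same_type(rects):
    """Repeatedly merge overlapping rectangles of the same identification type."""
    rects = list(rects)
    merged = True
    while merged:
        merged = False
        for i in range(len(rects)):
            for j in range(i + 1, len(rects)):
                if overlaps(rects[i], rects[j]):
                    rects[i] = circumscribed(rects[i], rects[j])
                    del rects[j]
                    merged = True
                    break
            if merged:
                break
    return rects
```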
It will be appreciated by those skilled in the art that the specific implementation of each step in the foregoing embodiments may be selected and combined according to the actual situation, and is not limited herein.
In the embodiment of the invention, a three-dimensional point cloud data set is acquired and mapped into a two-dimensional plane image; after the road identification in the two-dimensional plane image is recognized, it is inversely mapped to a three-dimensional coordinate position. Because a machine learning model is used to recognize the road identification in the two-dimensional plane image, the diversity of the training samples makes the recognition method more adaptable and suitable for two-dimensional plane images obtained under different conditions, so the recognition result is more accurate and the road identification recognition method in the embodiment of the invention has higher accuracy.
The embodiment of the invention also provides a road sign recognition device, the structural schematic diagram of which is shown in fig. 11, which specifically may include:
A point cloud data set obtaining unit 111 adapted to obtain a three-dimensional point cloud data set, the three-dimensional point cloud data set including a plurality of three-dimensional point cloud data points obtained by detecting a road;
a mapping unit 112 adapted to map the three-dimensional point cloud data set into a two-dimensional planar image;
an identification unit 113 adapted to identify a road identification within the two-dimensional planar image using a machine learning model;
an inverse mapping unit 114 is adapted to inverse map the road identification within the two-dimensional planar image to a three-dimensional coordinate position.
In an implementation, referring to fig. 12, the point cloud data set acquisition unit 111 may include:
a three-dimensional frame generation subunit 121, adapted to generate a three-dimensional frame at preset intervals along acquisition trajectory information, where the acquisition trajectory information is the trajectory information of a device that acquires the three-dimensional point cloud data points;
an acquisition subunit 122 adapted to acquire three-dimensional point cloud data points within the three-dimensional frame;
the data set determining subunit 123 is adapted to obtain the three-dimensional point cloud data set according to the three-dimensional point cloud data points in the three-dimensional frame.
In a specific implementation, the cross section of the three-dimensional frame in the vertical height direction is square, and the preset interval is half of the side length of the square.
With continued reference to fig. 11, in an implementation, the inverse mapping unit 114 is further adapted to, after inversely mapping the road marks in the two-dimensional plane image to the three-dimensional coordinates, if the three-dimensional coordinates of the road marks with the same type obtained by inverse mapping have overlapping portions in a plane perpendicular to the height direction, take the circumscribed rectangle with the overlapping portions as the three-dimensional coordinates of the road marks.
In a specific implementation, the height range of the three-dimensional frame can be obtained by expanding numerically around the road surface height value as a center value; the surface range of the three-dimensional frame is set according to the width of the road surface, the surface range being the range of the three-dimensional frame perpendicular to the height direction.
Referring to fig. 12 and 13 in combination, in an implementation, the data set determining subunit 123 may include:
the plane fitting module 131 is adapted to perform plane fitting on the three-dimensional point cloud data points in the three-dimensional frame to obtain the height of a fitting plane;
the data set module 132 is adapted to obtain three-dimensional point cloud data points with the height value and the fitting plane within a preset range as the three-dimensional point cloud data set.
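A minimal sketch of the work of the plane fitting module 131 and the data set module 132 might look as follows; the least-squares formulation and the 0.15 m threshold are illustrative assumptions.

import numpy as np

def fit_plane(points):
    # Fit z = a*x + b*y + c to the points inside a frame by least squares.
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs                                  # (a, b, c)

def filter_near_plane(points, coeffs, threshold=0.15):
    # Keep only points whose height lies within a preset range of the fitted plane.
    a, b, c = coeffs
    plane_z = a * points[:, 0] + b * points[:, 1] + c
    return points[np.abs(points[:, 2] - plane_z) <= threshold]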
With continued reference to fig. 11, in an implementation, the mapping unit 112 is further adapted to: when a plurality of three-dimensional point cloud data points are mapped to the same coordinate in the two-dimensional plane image, take the average value of the reflectivities of the plurality of three-dimensional point cloud data points as the reflectivity of that coordinate point.
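The averaging of reflectivities for points that collide on the same pixel can be sketched as follows; the array layout and function name are assumptions made purely for illustration.

import numpy as np

def rasterize_mean_reflectivity(pixel_indices, reflectivity, shape):
    """pixel_indices: integer (row, col) each point maps to; shape: (H, W).
    Pixels hit by several points receive the mean of their reflectivities."""
    flat = pixel_indices[:, 0] * shape[1] + pixel_indices[:, 1]
    sums = np.bincount(flat, weights=reflectivity, minlength=shape[0] * shape[1])
    counts = np.bincount(flat, minlength=shape[0] * shape[1])
    image = np.zeros(shape[0] * shape[1])
    np.divide(sums, counts, out=image, where=counts > 0)
    return image.reshape(shape)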
Referring to fig. 14 and 11 in combination, the mapping unit 112 may include:
a projection subunit 141 adapted to orthographically project the three-dimensional point cloud data set to obtain projection data;
an affine transformation subunit 142, adapted to perform affine transformation on the projection data to obtain the two-dimensional planar image.
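A possible, non-limiting sketch of the projection subunit 141 and the affine transformation subunit 142: orthographic projection simply drops the height coordinate, and a scale-plus-translation affine matrix (kept so that it can be inverted later) converts ground coordinates into pixel coordinates; the 0.05 m resolution is an assumed value.

import numpy as np

def build_affine(points_xy, resolution=0.05):
    """Return a 3x3 affine matrix mapping ground (x, y) to pixel (col, row)."""
    x_min, y_min = points_xy.min(axis=0)
    s = 1.0 / resolution
    return np.array([[s, 0.0, -s * x_min],
                     [0.0, s, -s * y_min],
                     [0.0, 0.0, 1.0]])

def project_to_image(points, affine):
    # Orthographic projection: keep only x, y and apply the affine transform.
    xy1 = np.c_[points[:, :2], np.ones(len(points))]
    pix = (affine @ xy1.T).T[:, :2]
    return np.floor(pix).astype(int)               # integer pixel coordinates per point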
Referring to fig. 15 and 11 in combination, in an implementation, the identification unit 113 may include:
a normalization subunit 151, adapted to calculate the reflectivity range of the three-dimensional point cloud data points related to the two-dimensional plane image, and to normalize the reflectivity range to obtain a gray-scale image;
the gray image recognition subunit 152 is adapted to recognize the gray image by using a machine learning model to obtain the road identifier in the two-dimensional plane image.
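The normalization of the reflectivity range into a gray-scale image may be sketched as follows; the 8-bit target range is an assumption.

import numpy as np

def to_gray(reflectivity_image):
    # Linearly stretch the observed reflectivity range onto [0, 255].
    r_min, r_max = reflectivity_image.min(), reflectivity_image.max()
    scale = 255.0 / max(r_max - r_min, 1e-9)
    return ((reflectivity_image - r_min) * scale).astype(np.uint8)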
Referring to fig. 15 and 16 in combination, in an implementation, the gray image recognition subunit 152 may include:
a maximum value filtering module 161, adapted to perform maximum value filtering on the gray scale image, so as to obtain a filtered gray scale image;
the identifying module 162 is adapted to identify the filtered gray-scale image by using a machine learning model to obtain a road identifier in the two-dimensional plane image.
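The maximum-value filtering step can be sketched with an off-the-shelf filter; the 3x3 window size is an assumption, and the filtered image would then be passed to the recognition model.

from scipy.ndimage import maximum_filter

def max_filter(gray, size=3):
    # Replace each pixel by the maximum in its size x size neighbourhood,
    # which thickens thin, bright marking strokes before recognition.
    return maximum_filter(gray, size=size)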
With continued reference to fig. 11, the recognition unit 113 is adapted to recognize the two-dimensional planar image using a deep learning classification model.
In a specific implementation, the identification unit 113 is adapted to determine the location of the road marking within the two-dimensional planar image and the type of marking using a machine learning model.
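The embodiment does not prescribe a particular network architecture; purely as a hedged illustration, a small convolutional classifier over gray-scale crops, with the four identification types listed later in the description as output classes, could look like this.

import torch
import torch.nn as nn

class MarkingClassifier(nn.Module):
    def __init__(self, num_classes=4):        # left turn, right turn, straight-ahead, text
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes),
        )

    def forward(self, x):                      # x: (N, 1, H, W) gray-scale crops
        return self.head(self.features(x))

# Usage sketch: classify a 64x64 crop proposed by the detection stage.
logits = MarkingClassifier()(torch.rand(1, 1, 64, 64))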
Referring to fig. 11 and 17 in combination, in an implementation, the identification unit 113 may include:
a road sign recognition position subunit 171, adapted to determine the road sign recognition position obtained by recognizing the two-dimensional planar image using a machine learning model;
a scanning subunit 172, adapted to scan the maximum-value distribution of the pixel points within the edge range of the road sign recognition position to determine the boundary of the road sign, where the edge range includes a range obtained by extending the edge of the road sign recognition position.
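The boundary refinement performed by the scanning subunit 172 can be sketched as follows for one edge; the scan direction, margin and threshold are illustrative assumptions.

import numpy as np

def refine_bottom_edge(gray, rect, margin=10, threshold=128):
    """rect = (row_min, col_min, row_max, col_max) from the detector.
    Scan the per-row maxima in a band extended beyond the bottom edge and move
    the boundary to the last row whose maximum still looks like marking paint."""
    row_min, col_min, row_max, col_max = rect
    band_end = min(row_max + margin, gray.shape[0])
    row_maxima = gray[row_min:band_end, col_min:col_max].max(axis=1)
    bright = np.nonzero(row_maxima >= threshold)[0]
    new_row_max = row_min + (bright[-1] + 1 if len(bright) else row_max - row_min)
    return (row_min, col_min, new_row_max, col_max)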
In a specific implementation, the edge of the road sign recognition position may include: the outermost edge in the road direction.
Referring to fig. 11 and 18 in combination, in an implementation, the inverse mapping unit 114 may include:
an inverse affine transformation subunit 181 adapted to inversely map the position of the road marking within the two-dimensional planar image to the two-dimensional coordinates of the point cloud by inverse affine transformation;
the three-dimensional coordinate generating subunit 182 is adapted to determine a height value according to the originally stored three-dimensional point cloud data points adjacent to the two-dimensional coordinates, the two-dimensional coordinates being combined with the height value to generate the three-dimensional coordinates of the road marking.
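A minimal sketch of the inverse mapping unit 114: invert the affine matrix used when rendering the image, then take the height from the originally stored points nearest to the recovered ground coordinates; the brute-force nearest-neighbour search and the value of k are assumptions made only for illustration.

import numpy as np

def pixel_to_ground(pixels, affine):
    # Inverse affine transformation: pixel coordinates back to ground (x, y).
    inv = np.linalg.inv(affine)
    pix1 = np.c_[pixels, np.ones(len(pixels))]
    return (inv @ pix1.T).T[:, :2]

def attach_height(ground_xy, original_points, k=8):
    # Height value from the k originally stored points nearest to each (x, y).
    coords = []
    for x, y in ground_xy:
        d2 = (original_points[:, 0] - x) ** 2 + (original_points[:, 1] - y) ** 2
        nearest = original_points[np.argsort(d2)[:k]]
        coords.append((x, y, float(nearest[:, 2].mean())))
    return coords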
In a specific implementation, the identification type may include at least one of: a left-turn identification, a right-turn identification, a straight-ahead identification, and a text-type identification.
In a specific implementation, the road identification recognition device may further comprise an association unit adapted to associate the identification type of the road identification with the three-dimensional coordinate position.
In a specific implementation, the location of the road marking in the two-dimensional planar image may be rectangular.
With continued reference to fig. 11, in an implementation, the mapping unit 112 is adapted to map the three-dimensional point cloud data set to a PNG image and to compress the height information contained in the position information of the three-dimensional point cloud data set into the alpha channel; the inverse mapping unit 114 is adapted to restore the information in the alpha channel to the height coordinate of the three-dimensional coordinate position.
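Packing the height information into the alpha channel of an RGBA image that can then be written as a PNG may be sketched as follows; the 8-bit quantisation against a known [z_min, z_max] range is a lossy simplification assumed here for illustration.

import numpy as np

def pack_height(gray, height, z_min, z_max):
    # Quantise the height to 8 bits and store it as the alpha channel.
    alpha = np.clip(np.round(255.0 * (height - z_min) / max(z_max - z_min, 1e-9)), 0, 255)
    return np.stack([gray, gray, gray, alpha.astype(np.uint8)], axis=-1)

def unpack_height(rgba, z_min, z_max):
    # Restore the height coordinate from the alpha channel.
    return z_min + rgba[..., 3].astype(float) / 255.0 * (z_max - z_min)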
For explanations of the terms, principles, implementations and beneficial effects involved in the road identification recognition device in the embodiment of the present invention, reference may be made to the road identification recognition method in the embodiment of the present invention, which is not repeated herein.
The embodiment of the invention also provides a computer readable storage medium on which computer instructions are stored, where the computer instructions, when run, perform the steps of the road identification recognition method.
The computer readable storage medium may be an optical disc, a mechanical hard disk, a solid-state drive, or the like.
The embodiment of the invention also provides a terminal comprising a memory and a processor, the memory storing computer instructions executable on the processor, where the computer instructions, when run, perform the steps of the road identification recognition method.
The terminal may be a device with data processing capability, such as a vehicle-mounted intelligent device, a smart phone, a tablet computer, a single computer, a server or a server cluster.
Although the present invention is disclosed above, the present invention is not limited thereto. Various changes and modifications may be made by those skilled in the art without departing from the spirit and scope of the invention, and the scope of protection of the invention shall therefore be subject to the scope defined by the claims.

Claims (32)

1. A method of identifying a road identifier, comprising:
generating a three-dimensional frame at preset intervals along acquisition track information, wherein the acquisition track information is track information of equipment for acquiring three-dimensional point cloud data points;
acquiring three-dimensional point cloud data points in the three-dimensional frame;
Obtaining a three-dimensional point cloud data set according to the three-dimensional point cloud data points in the three-dimensional frame, wherein the three-dimensional point cloud data set comprises a plurality of three-dimensional point cloud data points obtained by detecting a road;
mapping the three-dimensional point cloud data set into a two-dimensional plane image;
determining the position and the identification type of the road identification in the two-dimensional plane image by adopting a machine learning model;
inversely mapping the position of the road mark in the two-dimensional plane image to the two-dimensional coordinates of the point cloud through inverse affine transformation;
determining a height value according to three-dimensional point cloud data points adjacent to the two-dimensional coordinates among the originally stored three-dimensional point cloud data points, wherein the two-dimensional coordinates are combined with the height value to generate the three-dimensional coordinates of the road mark;
if the three-dimensional coordinate positions of the road marks of the same type obtained by inverse mapping have overlapping parts on a plane perpendicular to the height direction, taking the circumscribed rectangle of the road marks having the overlapping parts as the three-dimensional coordinate position of the road mark.
2. The method according to claim 1, wherein the cross section of the three-dimensional frame perpendicular to the height direction is square, and the preset interval is half the side length of the square.
3. The road marking recognition method according to claim 1, wherein the height range of the three-dimensional frame is determined by numerical expansion with a road surface height value as a center value; the surface range of the three-dimensional frame is set according to the width of the road surface, and the surface range is a range which is perpendicular to the height in the three-dimensional frame.
4. The method of claim 1, wherein obtaining the three-dimensional point cloud data set from three-dimensional point cloud data points within the three-dimensional frame comprises:
performing plane fitting on the three-dimensional point cloud data points in the three-dimensional frame to obtain the height of a fitting plane;
and acquiring three-dimensional point cloud data points with the height values and the fitting plane within a preset range as the three-dimensional point cloud data set.
5. The method of claim 1, wherein mapping the three-dimensional point cloud data set into a two-dimensional planar image comprises: if a plurality of three-dimensional point cloud data points are mapped to the same coordinate in the two-dimensional plane image, taking the average value of the reflectivities of the plurality of three-dimensional point cloud data points as the reflectivity of that coordinate point.
6. The method of claim 1, wherein mapping the three-dimensional point cloud data set to a two-dimensional planar image comprises:
Orthographically projecting the three-dimensional point cloud data set to obtain projection data;
and carrying out affine transformation on the projection data to obtain the two-dimensional plane image.
7. The method of claim 1, wherein the identifying the road identification within the two-dimensional planar image using the machine learning model comprises:
calculating the reflectivity range of the three-dimensional point cloud data points related to the two-dimensional plane image, and normalizing the reflectivity range to obtain a gray level image;
and identifying the gray level image by adopting a machine learning model so as to obtain the road mark in the two-dimensional plane image.
8. The method of claim 7, wherein identifying the grayscale image using a machine learning model comprises:
maximum value filtering is carried out on the gray level image, and a filtered gray level image is obtained;
and identifying the filtered gray level image by adopting a machine learning model so as to obtain the road mark in the two-dimensional plane image.
9. The method of claim 1, wherein the identifying the road identification within the two-dimensional planar image using the machine learning model comprises: and identifying the two-dimensional plane image by using the deep learning classification model.
10. The method of claim 1, wherein the identifying the location of the road marking within the two-dimensional planar image using a machine learning model comprises:
determining a road identification position obtained by identifying the two-dimensional plane image by adopting a machine learning model;
and scanning the maximum value distribution of the pixel points in the edge range of the road identification position to determine the boundary of the road identification, wherein the edge range comprises a range for expanding the edge of the road identification position.
11. The method of claim 10, wherein the edge of the road identification position comprises: the outermost edge in the road direction.
12. The method of claim 1, wherein the type of identification comprises at least one of: left turn identification, right turn identification, straight-ahead identification, and text type identification.
13. The road identification recognition method according to claim 1, further comprising: and associating the identification type of the road identification to the three-dimensional coordinate position.
14. The method of claim 1, wherein the location of the road marking within the two-dimensional planar image is rectangular.
15. The method of claim 1, wherein mapping the three-dimensional point cloud data set into a two-dimensional planar image comprises: mapping the three-dimensional point cloud data set to the PNG image, and compressing the height information contained in the position information in the three-dimensional point cloud data set to an alpha channel;
the inverse mapping of the road identification in the two-dimensional plane image to the three-dimensional coordinate position comprises: and restoring the information in the alpha channel to be the height coordinate of the three-dimensional coordinate position.
16. A road sign recognition device, characterized by comprising:
the system comprises a point cloud data set acquisition unit, a three-dimensional point cloud data set acquisition unit and a data processing unit, wherein the point cloud data set acquisition unit is suitable for acquiring a three-dimensional point cloud data set, and the three-dimensional point cloud data set comprises a plurality of three-dimensional point cloud data points obtained by detecting a road;
the mapping unit is suitable for mapping the three-dimensional point cloud data set into a two-dimensional plane image;
the identifying unit is suitable for identifying the road mark in the two-dimensional plane image by adopting a machine learning model;
the inverse mapping unit is suitable for inversely mapping the road mark in the two-dimensional plane image to a three-dimensional coordinate position;
the point cloud data set acquisition unit includes: the three-dimensional frame generation subunit is suitable for generating a three-dimensional frame at preset intervals along acquisition track information, wherein the acquisition track information is track information of equipment for acquiring the three-dimensional point cloud data points; the acquisition subunit is suitable for acquiring three-dimensional point cloud data points in the three-dimensional frame; the data set determining subunit is suitable for obtaining the three-dimensional point cloud data set according to the three-dimensional point cloud data points in the three-dimensional frame;
The identification unit is suitable for determining the position of the road mark and the mark type in the two-dimensional plane image by adopting a machine learning model;
the inverse mapping unit includes: an inverse affine transformation subunit adapted to inversely map the position of the road marking within the two-dimensional plane image to the two-dimensional coordinates of the point cloud by inverse affine transformation; the three-dimensional coordinate generation subunit is suitable for determining a height value according to three-dimensional point cloud data points adjacent to the two-dimensional coordinates among the originally stored three-dimensional point cloud data points, and the three-dimensional coordinates of the road mark are generated by combining the two-dimensional coordinates with the height value;
the inverse mapping unit is further adapted to, after inversely mapping the road marks in the two-dimensional plane image to the three-dimensional coordinates, if the three-dimensional coordinate positions of the road marks of the same type obtained by inverse mapping have overlapping portions on a plane perpendicular to the height direction, take the circumscribed rectangle of the road marks having the overlapping portions as the three-dimensional coordinate position of the road mark.
17. The road sign recognition device according to claim 16, wherein the cross section of the three-dimensional frame perpendicular to the height direction is square, and the preset interval is half the side length of the square.
18. The road marking recognition device of claim 16, wherein the three-dimensional frame generation subunit is adapted to numerically expand a height range of the three-dimensional frame with a road surface height value as a center value; the surface range of the three-dimensional frame is set according to the width of the road surface, and the surface range is a range which is perpendicular to the height in the three-dimensional frame.
19. The road identification recognition apparatus of claim 16, wherein the data set determination subunit comprises:
the plane fitting module is suitable for carrying out plane fitting on the three-dimensional point cloud data points in the three-dimensional frame to obtain the height of a fitting plane;
the data set module is suitable for acquiring three-dimensional point cloud data points with the height values and the fitting plane within a preset range as the three-dimensional point cloud data set.
20. The road identification recognition device of claim 16, wherein the mapping unit is further adapted to: when a plurality of three-dimensional point cloud data points are mapped to the same coordinate in the two-dimensional plane image, taking the average value of the reflectivities of the three-dimensional point cloud data points as the reflectivities of the coordinate points.
21. The road identification recognition apparatus according to claim 16, wherein the mapping unit includes:
The projection subunit is suitable for orthographically projecting the three-dimensional point cloud data set to obtain projection data;
and the affine transformation subunit is suitable for carrying out affine transformation on the projection data to obtain the two-dimensional plane image.
22. The road identification recognition apparatus according to claim 16, wherein the recognition unit includes:
the normalization subunit is suitable for calculating the reflectivity range of the three-dimensional point cloud data points related to the two-dimensional plane image and normalizing the reflectivity range to obtain a gray level image;
and the gray level image identification subunit is suitable for identifying the gray level image by adopting a machine learning model so as to obtain the road identification in the two-dimensional plane image.
23. The road identification recognition apparatus of claim 22, wherein the gray image recognition subunit comprises:
the maximum value filtering module is suitable for carrying out maximum value filtering on the gray level image to obtain a filtered gray level image;
and the identification module is suitable for identifying the filtered gray level image by adopting a machine learning model so as to obtain the road identification in the two-dimensional plane image.
24. The road identification recognition device according to claim 16, wherein the recognition unit is adapted to recognize the two-dimensional planar image using a deep learning classification model.
25. The road identification recognition apparatus according to claim 16, wherein the recognition unit includes:
the road sign recognition position subunit is suitable for determining a road sign recognition position obtained by recognizing the two-dimensional plane image by adopting a machine learning model;
and the scanning subunit is suitable for scanning the maximum value distribution of the pixel points in the edge range of the road identification position to determine the boundary of the road identification, wherein the edge range comprises a range obtained by expanding the edge of the road identification position.
26. The road sign recognition device of claim 25, wherein the edge of the road sign recognition location comprises: the outermost edge in the road direction.
27. The road identification recognition device of claim 16, wherein the identification type comprises at least one of: left turn identification, right turn identification, straight-ahead identification, and text type identification.
28. The road identification recognition apparatus of claim 16, further comprising: and the association unit is suitable for associating the identification type of the road identification to the three-dimensional coordinate position.
29. The road sign recognition device of claim 16, wherein the position of the road sign within the two-dimensional planar image is rectangular.
30. The road identification recognition device according to claim 16, wherein the mapping unit is adapted to map a three-dimensional point cloud data set to the PNG image, and compress height information contained in position information in the three-dimensional point cloud data set to an alpha channel;
the inverse mapping unit is suitable for restoring the information in the alpha channel into the height coordinate of the three-dimensional coordinate position.
31. A computer readable storage medium having stored thereon computer instructions, which when run perform the steps of the road identification recognition method of any of claims 1 to 15.
32. A terminal comprising a memory and a processor, the memory having stored thereon computer instructions executable on the processor, wherein the computer instructions, when executed, perform the steps of the road identification recognition method of any one of claims 1 to 15.
CN201811341411.1A 2018-11-12 2018-11-12 Road identification recognition method and device, medium and terminal Active CN111179152B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811341411.1A CN111179152B (en) 2018-11-12 2018-11-12 Road identification recognition method and device, medium and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811341411.1A CN111179152B (en) 2018-11-12 2018-11-12 Road identification recognition method and device, medium and terminal

Publications (2)

Publication Number Publication Date
CN111179152A CN111179152A (en) 2020-05-19
CN111179152B (en) 2023-04-28

Family

ID=70655263

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811341411.1A Active CN111179152B (en) 2018-11-12 2018-11-12 Road identification recognition method and device, medium and terminal

Country Status (1)

Country Link
CN (1) CN111179152B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113721599B (en) * 2020-05-25 2023-10-20 华为技术有限公司 Positioning method and positioning device
CN111695486B (en) * 2020-06-08 2022-07-01 武汉中海庭数据技术有限公司 High-precision direction signboard target extraction method based on point cloud
CN112102409B (en) * 2020-09-21 2023-09-01 杭州海康威视数字技术股份有限公司 Target detection method, device, equipment and storage medium
CN112446884B (en) * 2020-11-27 2024-03-26 广东电网有限责任公司肇庆供电局 Positioning method and device for power transmission line in laser point cloud and terminal equipment
CN112132853B (en) * 2020-11-30 2021-02-05 湖北亿咖通科技有限公司 Method and device for constructing ground guide arrow, electronic equipment and storage medium
CN112507891B (en) * 2020-12-12 2023-02-03 武汉中海庭数据技术有限公司 Method and device for automatically identifying high-speed intersection and constructing intersection vector
CN112683169A (en) * 2020-12-17 2021-04-20 深圳依时货拉拉科技有限公司 Object size measuring method, device, equipment and storage medium
CN112907746A (en) * 2021-03-25 2021-06-04 上海商汤临港智能科技有限公司 Method and device for generating electronic map, electronic equipment and storage medium
CN113205447A (en) * 2021-05-11 2021-08-03 北京车和家信息技术有限公司 Road picture marking method and device for lane line identification
CN114485671A (en) * 2022-01-24 2022-05-13 轮趣科技(东莞)有限公司 Automatic turning method and device for mobile equipment
CN114973910B (en) * 2022-07-27 2022-11-11 禾多科技(北京)有限公司 Map generation method and device, electronic equipment and computer readable medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104766058A (en) * 2015-03-31 2015-07-08 百度在线网络技术(北京)有限公司 Method and device for obtaining lane line
CN107194962A (en) * 2017-04-01 2017-09-22 深圳市速腾聚创科技有限公司 Point cloud and plane picture fusion method and device
CN108107444A (en) * 2017-12-28 2018-06-01 国网黑龙江省电力有限公司检修公司 Substation's method for recognizing impurities based on laser data
CN108256446A (en) * 2017-12-29 2018-07-06 百度在线网络技术(北京)有限公司 For determining the method, apparatus of the lane line in road and equipment
CN108470159A (en) * 2018-03-09 2018-08-31 腾讯科技(深圳)有限公司 Lane line data processing method, device, computer equipment and storage medium
CN108573522A (en) * 2017-03-14 2018-09-25 腾讯科技(深圳)有限公司 A kind of methods of exhibiting and terminal of flag data

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110222757A1 (en) * 2010-03-10 2011-09-15 Gbo 3D Technology Pte. Ltd. Systems and methods for 2D image and spatial data capture for 3D stereo imaging

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104766058A (en) * 2015-03-31 2015-07-08 百度在线网络技术(北京)有限公司 Method and device for obtaining lane line
CN108573522A (en) * 2017-03-14 2018-09-25 腾讯科技(深圳)有限公司 A kind of methods of exhibiting and terminal of flag data
CN107194962A (en) * 2017-04-01 2017-09-22 深圳市速腾聚创科技有限公司 Point cloud and plane picture fusion method and device
CN108107444A (en) * 2017-12-28 2018-06-01 国网黑龙江省电力有限公司检修公司 Substation's method for recognizing impurities based on laser data
CN108256446A (en) * 2017-12-29 2018-07-06 百度在线网络技术(北京)有限公司 For determining the method, apparatus of the lane line in road and equipment
CN108470159A (en) * 2018-03-09 2018-08-31 腾讯科技(深圳)有限公司 Lane line data processing method, device, computer equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Mahdi Javanmardi et al. Precise mobile laser scanning for urban mapping utilizing 3D aerial surveillance data. 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), 2018, full text. *
孔栋; 孙亮; 王建强; 王晓原. Road boundary recognition algorithm based on 3D lidar point clouds. Journal of Guangxi University (Natural Science Edition), 2017, No. 03, full text. *
彭江帆. Research on highway road element extraction methods based on vehicle-mounted laser scanning data. China Master's Theses Full-text Database, 2018, full text. *

Also Published As

Publication number Publication date
CN111179152A (en) 2020-05-19

Similar Documents

Publication Publication Date Title
CN111179152B (en) Road identification recognition method and device, medium and terminal
CN107463918B (en) Lane line extraction method based on fusion of laser point cloud and image data
US10217007B2 (en) Detecting method and device of obstacles based on disparity map and automobile driving assistance system
CN110443225B (en) Virtual and real lane line identification method and device based on feature pixel statistics
CN109165549B (en) Road identification obtaining method based on three-dimensional point cloud data, terminal equipment and device
CN106067003B (en) Automatic extraction method for road vector identification line in vehicle-mounted laser scanning point cloud
WO2018068653A1 (en) Point cloud data processing method and apparatus, and storage medium
JP6442834B2 (en) Road surface height shape estimation method and system
CN105160309A (en) Three-lane detection method based on image morphological segmentation and region growing
CN112257605B (en) Three-dimensional target detection method, system and device based on self-labeling training sample
CN109271861B (en) Multi-scale fusion point cloud traffic signboard automatic extraction method
CN104239867A (en) License plate locating method and system
CN110197173B (en) Road edge detection method based on binocular vision
CN115717894B (en) Vehicle high-precision positioning method based on GPS and common navigation map
CN110379007B (en) Three-dimensional highway curve reconstruction method based on vehicle-mounted mobile laser scanning point cloud
CN112070756B (en) Three-dimensional road surface disease measuring method based on unmanned aerial vehicle oblique photography
US20230005278A1 (en) Lane extraction method using projection transformation of three-dimensional point cloud map
CN109635737A (en) Automobile navigation localization method is assisted based on pavement marker line visual identity
CN113759391A (en) Passable area detection method based on laser radar
CN116168246A (en) Method, device, equipment and medium for identifying waste slag field for railway engineering
Hu Intelligent road sign inventory (IRSI) with image recognition and attribute computation from video log
CN112257668A (en) Main and auxiliary road judging method and device, electronic equipment and storage medium
CN116452852A (en) Automatic generation method of high-precision vector map
Liu et al. Image-translation-based road marking extraction from mobile laser point clouds
CN115909241A (en) Lane line detection method, system, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant