CN112435293A - Method and device for determining structural parameter representation of lane line - Google Patents

Method and device for determining structural parameter representation of lane line

Info

Publication number
CN112435293A
CN112435293A (application CN201910787011.1A)
Authority
CN
China
Prior art keywords
lane line
determining
representation
structural parameter
parameter representation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910787011.1A
Other languages
Chinese (zh)
Other versions
CN112435293B (en)
Inventor
Yang Shuai (杨帅)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Horizon Robotics Technology Research and Development Co Ltd
Original Assignee
Beijing Horizon Robotics Technology Research and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Horizon Robotics Technology Research and Development Co Ltd filed Critical Beijing Horizon Robotics Technology Research and Development Co Ltd
Priority to CN201910787011.1A
Publication of CN112435293A
Application granted
Publication of CN112435293B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed are a method, a device, a computer-readable storage medium and an electronic device for determining a structural parameter representation of a lane line, wherein the method comprises the following steps: obtaining semantic information carried by pixel points in at least one frame of image; determining pixel coordinates of a reference point of a lane line corresponding to the semantic information; acquiring a first spatial coordinate set corresponding to the pixel coordinates according to the pixel coordinates and the camera pose corresponding to the at least one frame of image; acquiring an initial structural parameter representation of the lane line according to the first spatial coordinate set; and determining an optimized structural parameter representation of the lane line according to the initial structural parameter representation, the camera pose corresponding to the at least one frame of image, and the semantic information. By replacing the large number of sampling points that would otherwise represent the lane line in a high-precision map with the structural parameter representation of the lane line, the storage pressure and the access pressure of the high-precision map are effectively reduced.

Description

Method and device for determining structural parameter representation of lane line
Technical Field
The present disclosure relates to the field of image analysis technologies, and in particular, to a method and an apparatus for determining a structural parameter representation of a lane line.
Background
The lane lines are important components in a road scene and are indispensable elements in a high-precision map, and accurate lane line representation in the high-precision map is a premise for realizing automatic driving.
At present, a lidar is often used to scan a lane line to obtain a point cloud of the lane line; sampling points of the lane line are then extracted from the point cloud, and the lane line is represented in a high-precision map by these sampling points. The large number of sampling points causes large storage pressure and access pressure when the high-precision map is used, so determining a lightweight lane line representation method is very important.
Disclosure of Invention
The present disclosure is proposed to solve the above technical problems. The embodiments of the disclosure provide a method and a device for determining a structural parameter representation of a lane line, a computer-readable storage medium and an electronic device, which replace the large number of sampling points corresponding to the lane line in a high-precision map with the structural parameter representation of the lane line, thereby effectively reducing the storage pressure and the access pressure of the high-precision map.
According to a first aspect of the present disclosure, there is provided a method for determining a structured parametric representation of a lane line, comprising:
obtaining semantic information carried by pixel points in at least one frame of image;
determining pixel coordinates of a reference point of a lane line corresponding to the semantic information;
acquiring a first space coordinate set corresponding to the pixel coordinate according to the pixel coordinate and the camera pose corresponding to the at least one frame of image;
acquiring an initial structural parameter representation of the lane line according to the first space coordinate set;
and determining the optimized structural parameter representation of the lane line according to the initial structural parameter representation, the camera pose corresponding to the at least one frame of image and the semantic information.
According to a second aspect of the present disclosure, there is provided an apparatus for determining a structured parametric representation of a lane line, comprising:
the semantic information acquisition module is used for acquiring semantic information carried by pixel points in at least one frame of image;
the pixel coordinate determination module is used for determining the pixel coordinates of the reference point of the lane line corresponding to the semantic information;
the spatial coordinate acquisition module is used for acquiring a first spatial coordinate set corresponding to the pixel coordinates according to the pixel coordinates and the camera pose corresponding to the at least one frame of image;
the first parameter representation module is used for acquiring the initial structural parameter representation of the lane line according to the first space coordinate set;
and the second parameter representation module is used for determining the optimized structural parameter representation of the lane line according to the initial structural parameter representation, the camera pose corresponding to each of the at least one frame of image and the semantic information.
According to a third aspect of the present disclosure, a computer-readable storage medium is provided, which stores a computer program for executing the above-mentioned method for determining a structured parametric representation of a lane line.
According to a fourth aspect of the present disclosure, there is provided an electronic apparatus comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is used for reading the executable instruction from the memory and executing the instruction to realize the structural parameter representation determining method of the lane line.
Compared with the prior art, the method and the device for determining the structural parameter representation of the lane line, the computer-readable storage medium and the electronic device provided by the disclosure at least have the following beneficial effects:
on one hand, this embodiment considers that representing lane lines in a high-precision map with a large number of sampling points causes large storage pressure and access pressure when the high-precision map is used. Therefore, a spatial coordinate set corresponding to the reference points whose semantic information indicates a lane line is determined from the image, a structural parameter representation of the lane line is obtained from this spatial coordinate set, and the obtained representation is optimized to determine a structural parameter representation that can accurately represent the lane line in the high-precision map. Using this structural parameter representation in place of the large number of sampling points realizes a lightweight representation of the lane line in the high-precision map and effectively reduces the storage pressure and access pressure of using the high-precision map.
On the other hand, this embodiment captures images with a vision sensor and then obtains the structural parameter representation of the lane line from them, which avoids the use of an expensive lidar and effectively saves cost.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in more detail embodiments of the present disclosure with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 is a schematic flowchart of a method for determining a structured parameter representation of a lane line according to an exemplary embodiment of the present disclosure;
FIG. 2 is a scene diagram of a method for determining a structured parametric representation of a lane line according to an exemplary embodiment of the present disclosure;
FIG. 3 is a flowchart illustrating step 30 of a method for determining a structured parametric representation of a lane marking according to an exemplary embodiment of the present disclosure;
FIG. 4 is a schematic flow chart diagram illustrating step 40 of a method for determining a structured parametric representation of a lane marking according to an exemplary embodiment of the present disclosure;
FIG. 5 is a flowchart illustrating a step 50 in a method for determining a structured parametric representation of a lane marking according to an exemplary embodiment of the present disclosure;
FIG. 6 is a flowchart illustrating step 503 of a method for determining a structured parametric representation of a lane marking according to an exemplary embodiment of the present disclosure;
FIG. 7 is a flowchart illustrating steps 40 and 50 of a method for determining a structured parametric representation of a lane marking according to an exemplary embodiment of the present disclosure;
fig. 8 is a schematic flowchart illustrating a step 503 of the method for determining a structural parameter representation of a lane line according to an exemplary embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a structural parameter representation determining device for a lane line according to a first exemplary embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of a structural parameter representation determining device for a lane line according to a second exemplary embodiment of the present disclosure;
fig. 11 is a schematic structural diagram of a first second parametric representation unit in a lane line structural parametric representation determining device according to an exemplary embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of a first parametric representation module in the device for determining structured parametric representation of lane marking according to an exemplary embodiment of the present disclosure;
fig. 13 is a schematic structural diagram of a second parameter representation unit in the lane line structural parameter representation determining device according to an exemplary embodiment of the present disclosure;
fig. 14 is a schematic structural diagram of a third second parametric representation unit in the device for determining structured parametric representation of lane marking according to an exemplary embodiment of the present disclosure;
fig. 15 is a block diagram of an electronic device provided in an exemplary embodiment of the present disclosure.
Detailed Description
Hereinafter, example embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of the embodiments of the present disclosure and not all embodiments of the present disclosure, with the understanding that the present disclosure is not limited to the example embodiments described herein.
Summary of the application
The lane line is important information for ensuring safe driving of a vehicle and is an important component of a road scene; accurate lane line representation is a precondition for realizing automatic driving with a high-precision map. At present, point clouds of lane lines are mostly obtained by lidar scanning. The obtained point cloud data contains a large amount of redundant data, so sampling points that can represent the accurate positions of the lane lines need to be extracted from the point cloud data, and the lane lines are represented in the high-precision map by the extracted sampling points. Because a high-precision map contains a large number of lane lines, it accumulates a large number of sampling points, which leads to large storage pressure and access pressure when the high-precision map is used.
In the method for determining a structural parameter representation of a lane line provided by this embodiment, a spatial coordinate set corresponding to the reference points whose semantic information indicates a lane line is determined from the image, a structural parameter representation of the lane line is obtained from this spatial coordinate set, and the obtained representation is optimized to determine a structural parameter representation that can accurately represent the lane line in a high-precision map. Because this structural parameter representation can describe the lane line in the high-precision map with a small amount of data, it replaces the large number of sampling points in the high-precision map, realizes a lightweight representation of the lane line, and effectively reduces the storage pressure and access pressure of using the high-precision map. Moreover, this embodiment captures images with a vision sensor and then obtains the structural parameter representation of the lane line, which avoids the use of an expensive lidar and effectively saves cost.
Exemplary method
Fig. 1 is a schematic flowchart of a method for determining a structural parameter representation of a lane line according to an exemplary embodiment of the present disclosure.
The embodiment can be applied to electronic equipment, and particularly can be applied to a server or a general computer. As shown in fig. 1, a method for determining a structural parameter representation of a lane line according to an exemplary embodiment of the present disclosure includes at least the following steps:
step 10, obtaining semantic information carried by pixel points in at least one frame of image.
When images are captured by a vision sensor mounted on a vehicle, a series of images, that is, at least one frame of image, is obtained. After an image is captured, semantic segmentation is performed on it: different objects in the image are segmented at the pixel level according to the image content, so as to determine the semantic information carried by the pixel points in the image.
When a vehicle A travels on a road surface as shown in fig. 2, the vision sensor mounted on the vehicle captures images in real time. In one possible implementation, an image in the at least one frame of image may be the current frame captured by the vision sensor at the current time; that is, whenever the vision sensor captures a current frame, the semantic information carried by the pixel points in that frame is obtained. As the vision sensor keeps capturing images, the current frame keeps changing, so the semantic information carried by the pixel points in at least one frame of image can be obtained. Of course, there is another possible implementation: a series of images is obtained in advance, and the semantic information carried by the pixel points in one or more frames is obtained at one time.
And step 20, determining the pixel coordinates of the reference points whose semantic information corresponds to the lane line.
During driving, the field of view of the vision sensor contains not only lane lines but also other objects (such as pedestrians, vehicles and the sky), so various objects also appear in the captured images. Because this embodiment needs to determine the structural parameter representation of the lane line, the reference points whose semantic information corresponds to the lane line must be identified and their pixel coordinates determined. This ensures the accuracy of the structural parameter representation of the lane line obtained in the subsequent steps, avoids bringing all pixel points into the subsequent operations, and improves the efficiency of the method.
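As an illustration only, the following sketch shows one way such reference-point pixel coordinates could be collected from a per-pixel label map; the segmentation model producing the label map and the class id LANE_LINE_ID are assumptions of the sketch, not details given by this disclosure.

```python
import numpy as np

LANE_LINE_ID = 1  # assumed class id for "lane line" pixels in the label map

def lane_reference_pixels(label_map: np.ndarray) -> np.ndarray:
    """Return the (u, v) pixel coordinates whose semantic label is 'lane line'.

    label_map: H x W integer array produced upstream by a semantic
    segmentation model (the model itself is outside this sketch).
    """
    v, u = np.nonzero(label_map == LANE_LINE_ID)  # rows are v, columns are u
    return np.stack([u, v], axis=1)               # N x 2 array of pixel coordinates

# usage: pixels = lane_reference_pixels(seg_output_for_one_frame)
```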
And step 30, acquiring a first space coordinate set corresponding to the pixel coordinates according to the pixel coordinates and the camera pose corresponding to the at least one frame of image.
The pixel coordinates of the reference points correspond to the position information of the lane line in the image. After the pixel coordinates of the reference points are obtained, the first spatial coordinates corresponding to these pixel coordinates are determined according to the camera pose corresponding to each frame of image, and the first spatial coordinates together form a first spatial coordinate set. Specifically, the camera pose corresponding to each frame of image may be provided by a positioning module, such as a satellite positioning system and an inertial measurement unit.
And step 40, acquiring the initial structural parameter representation of the lane line according to the first space coordinate set.
The first spatial coordinates in the first spatial coordinate set represent the position of the lane line in the three-dimensional world. The lane line in the three-dimensional world is parameterized according to the first spatial coordinates in the set, that is, the geometric shape of the lane line is described by a small number of parameters, yielding the initial structural parameter representation of the lane line.
And step 50, determining the optimized structural parameter representation of the lane line according to the initial structural parameter representation and the camera pose and semantic information respectively corresponding to at least one frame of image.
Because the initial structural parameter representation may not represent the lane line accurately, it is optimized using the camera pose corresponding to each frame of image and the semantic information carried by the pixel points in each frame of image, so as to determine the optimized structural parameter representation of the lane line.
The method for determining the structural parameter representation of the lane line provided by the embodiment has the beneficial effects that:
on the one hand, this embodiment considers that representing a lane line with a large number of sampling points in a high-precision map results in large storage pressure and access pressure when the high-precision map is used. Therefore, the first spatial coordinate set corresponding to the reference points whose semantic information indicates a lane line is determined, the initial structural parameter representation of the lane line is obtained from this first spatial coordinate set, and the obtained initial structural parameter representation is optimized to determine an optimized structural parameter representation that can accurately represent the lane line in the high-precision map. The optimized structural parameter representation describes the lane line in the high-precision map accurately with a small amount of data, so it replaces the large number of sampling points, realizes a lightweight representation of the lane line, and effectively reduces the storage pressure and access pressure of the high-precision map.
On the other hand, this embodiment captures images with a vision sensor and then obtains the structural parameter representation of the lane line from them, which avoids the use of an expensive lidar and effectively saves cost.
Fig. 3 is a schematic flowchart illustrating a process of acquiring a first spatial coordinate set corresponding to pixel coordinates according to the pixel coordinates and a camera pose corresponding to at least one frame of image in the embodiment shown in fig. 1.
As shown in fig. 3, based on the embodiment shown in fig. 1, in an exemplary embodiment of the present application, the acquiring the first spatial coordinate set corresponding to the pixel coordinate shown in step 30 may specifically include the following steps:
step 301, performing inverse perspective transformation on the pixel coordinate, and acquiring a sixth spatial coordinate corresponding to the pixel coordinate.
Inverse perspective transformation is a technique for converting a two-dimensional plane image into three-dimensional space. Through the inverse perspective transformation, the sixth spatial coordinate corresponding to a pixel coordinate in the two-dimensional plane image can be obtained. The sixth spatial coordinate indicates the position of a reference point of the lane line in three-dimensional space; it is essentially the same as a first spatial coordinate in the first spatial coordinate set, and the name "sixth spatial coordinate" is used only for convenience of distinction.
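A minimal sketch of one common way to realize such an inverse perspective transformation is shown below: the pixel is back-projected along its viewing ray and intersected with a flat ground plane. The flat-ground assumption, the pinhole camera model and the function names are assumptions of the sketch rather than details fixed by this disclosure.

```python
import numpy as np

def inverse_perspective(pixel, K, R_wc, t_wc, ground_z=0.0):
    """Back-project one lane-line pixel onto an assumed flat ground plane.

    pixel      : (u, v) pixel coordinate of a lane-line reference point
    K          : 3x3 camera intrinsic matrix
    R_wc, t_wc : rotation (3x3) and translation (3,) of the camera pose Twc
                 (camera-to-world) supplied by the positioning module
    ground_z   : assumed height of the ground plane in world coordinates
    """
    u, v = pixel
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray in the camera frame
    ray_world = R_wc @ ray_cam                           # same ray in the world frame
    origin = np.asarray(t_wc, dtype=float)               # camera center in the world frame
    s = (ground_z - origin[2]) / ray_world[2]            # intersect the ray with z = ground_z
    return origin + s * ray_world                        # the "sixth" spatial coordinate
```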
Step 302, determining a tracking code of the lane line corresponding to the sixth spatial coordinate in at least one frame of image.
When the vehicle travels on the road surface, as shown in fig. 2, there is often more than one lane line in the field of view of the vision sensor. In order to accurately determine the structural parameter representation of each of the plurality of lane lines, the sixth spatial coordinates need to be divided according to lane line. When a lane line appears in an image for the first time, a tracking code (track id) with a unique identifier is assigned to it, and the tracking code remains unchanged when the lane line appears in subsequent images. Therefore, when the pixel coordinates corresponding to the reference points of a lane line are determined, the tracking code of that lane line is also obtained, and the tracking code is carried along when the pixel coordinates are inverse-perspective-transformed into the sixth spatial coordinates, so that each sixth spatial coordinate corresponds to the tracking code of its lane line.
Step 303, forming a first space coordinate set by using the sixth space coordinates with the same corresponding tracking codes.
The sixth spatial coordinates are clustered according to their tracking codes: sixth spatial coordinates with the same tracking code form one first spatial coordinate set, and sixth spatial coordinates with different tracking codes are added to different first spatial coordinate sets, so that each lane line corresponds to its own first spatial coordinate set.
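The grouping step could be sketched as follows; the (track_id, xyz) pairing is an assumed data layout, not one prescribed here.

```python
from collections import defaultdict

def group_by_track_id(points_with_ids):
    """Group back-projected points into one first spatial coordinate set per lane line.

    points_with_ids: iterable of (track_id, xyz) pairs, where track_id is the
    unique tracking code assigned to a lane line when it first appears.
    """
    coordinate_sets = defaultdict(list)
    for track_id, xyz in points_with_ids:
        coordinate_sets[track_id].append(xyz)
    return dict(coordinate_sets)  # track_id -> list of sixth spatial coordinates
```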
In this embodiment, a sixth spatial coordinate corresponding to the pixel coordinate is obtained through inverse perspective transformation, and a first spatial coordinate set corresponding to each lane line is determined according to a tracking code corresponding to the sixth spatial coordinate, so that the first spatial coordinate in the first spatial coordinate set is ensured to belong to the same lane line, and accuracy of structural parameter representation of each subsequently obtained lane line is ensured.
Fig. 4 shows a schematic flow chart of acquiring an initial structured parametric representation of a lane line according to the first set of spatial coordinates in the embodiment shown in fig. 1.
As shown in fig. 4, based on the embodiment shown in fig. 1, in an exemplary embodiment of the present application, the obtaining of the initial structured parameter representation of the lane line shown in step 40 may specifically include the following steps:
step 4011, randomly selecting at least four fifth spatial coordinates from the first spatial coordinate set;
when the initial structural parameter representation of the lane line is obtained, because there are many first spatial coordinates in the first spatial coordinate set, it is impractical to use every first spatial coordinate; instead, at least four fifth spatial coordinates are randomly selected from the first spatial coordinate set. The name "fifth spatial coordinate" is used only for convenience of distinction; the fifth spatial coordinates are essentially identical to the first spatial coordinates.
And step 4012, obtaining an initial Bezier curve corresponding to the lane line according to the fifth spatial coordinate.
An initial Bezier curve corresponding to the lane line is obtained according to the at least four selected fifth spatial coordinates. Specifically, the initial Bezier curve is a second-order (quadratic) Bezier curve whose parametric form is B(t) = (1-t)^2 * P0 + 2 * t * (1-t) * P1 + t^2 * P2, where t ranges from 0 to 1, P0 and P2 are the start and end points of the Bezier curve determined from the fifth spatial coordinates, and P1 is the control point of the Bezier curve, which controls the shape of the curve.
In this embodiment, at least four fifth spatial coordinates are selected from the large number of first spatial coordinates, and the lane line is represented by a second-order Bezier curve built from these coordinates. The shape of a second-order Bezier curve is close to the shape of a real lane line, so the lane line can be represented accurately by the second-order Bezier curve.
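The following sketch illustrates one way an initial second-order Bezier curve could be obtained from the selected fifth spatial coordinates: P0 and P2 are taken as the extreme samples and the control point P1 is solved by least squares under a chord-length parameterization. The parameterization and fitting strategy are assumptions of the sketch; the disclosure only requires that an initial Bezier curve be obtained from at least four points.

```python
import numpy as np

def fit_quadratic_bezier(points):
    """Fit B(t) = (1-t)^2*P0 + 2*t*(1-t)*P1 + t^2*P2 to at least four sample points.

    points: N x 3 array of fifth spatial coordinates (N >= 4).
    P0 and P2 are taken as the two extreme samples along the lane direction;
    P1 is solved by least squares under a chord-length parameterization
    (the parameterization is an assumption of this sketch).
    """
    points = np.asarray(points, dtype=float)
    # order the samples along their principal direction so t increases along the lane
    center = points.mean(axis=0)
    direction = np.linalg.svd(points - center)[2][0]
    points = points[np.argsort((points - center) @ direction)]

    # assign t in [0, 1] by accumulated chord length
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)]) / np.sum(seg)

    P0, P2 = points[0], points[-1]
    basis = 2.0 * t * (1.0 - t)                                   # coefficient of P1
    residual = points - np.outer((1.0 - t) ** 2, P0) - np.outer(t ** 2, P2)
    P1 = (basis[:, None] * residual).sum(axis=0) / np.sum(basis ** 2)
    return P0, P1, P2

def sample_bezier(P0, P1, P2, ts):
    """Evaluate the curve at parameters ts; the samples serve as second spatial coordinates."""
    ts = np.asarray(ts, dtype=float)[:, None]
    return (1.0 - ts) ** 2 * P0 + 2.0 * ts * (1.0 - ts) * P1 + ts ** 2 * P2
```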
Fig. 5 is a schematic flow chart illustrating the process of determining the optimized structured parametric representation of the lane line according to the initial structured parametric representation, the camera pose corresponding to each of the at least one frame of image, and the semantic information in the embodiment shown in fig. 1.
As shown in fig. 5, based on the embodiment shown in fig. 1, in an exemplary embodiment of the present application, the determining the optimized structural parameter representation of the lane line shown in step 50 may specifically include the following steps:
step 501, according to the initial structural parameter representation, determining a second space coordinate corresponding to the lane line.
The initial structural parameter representation corresponds to a lane line in three-dimensional space. Points are selected on the curve given by the initial structural parameter representation to determine second spatial coordinates corresponding to the lane line, and these second spatial coordinates are used to verify the accuracy of the initial structural parameter representation. Specifically, the initial structural parameter representation is B(t) = (1-t)^2 * P0 + 2 * t * (1-t) * P1 + t^2 * P2; different second spatial coordinates are obtained by selecting different values of t, for example at equal intervals t = 0, 0.1, 0.2, ..., 1, which yields a plurality of points on the curve, that is, the second spatial coordinates.
And step 502, acquiring projection points of the second spatial coordinates in the at least one frame of image according to the second spatial coordinates and the camera pose corresponding to the at least one frame of image.
In order to verify the accuracy of the initial structural parameter representation using the second spatial coordinates, the second spatial coordinates need to be projected into the images according to the camera pose corresponding to each frame of image, so as to determine the projection points of the second spatial coordinates in each frame of image. Specifically, {p_sample} = K * Tcw * {P_sample}, where {p_sample} denotes the projection points, K denotes the camera intrinsic matrix, Tcw denotes the inverse of the camera pose Twc = [R, t] provided by the positioning module, and {P_sample} denotes the second spatial coordinates.
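A minimal sketch of this projection, assuming a pinhole camera model and row-vector point arrays, might look as follows; the function name and array layout are illustrative assumptions.

```python
import numpy as np

def project_points(P_world, K, R_wc, t_wc):
    """Project second spatial coordinates into one image: p ~ K * Tcw * P.

    P_world    : N x 3 points sampled from the initial Bezier curve
    K          : 3x3 camera intrinsic matrix
    R_wc, t_wc : camera pose Twc = [R, t] from the positioning module;
                 Tcw (world-to-camera) is its inverse, [R^T, -R^T t]
    Returns M x 2 pixel coordinates; points behind the camera are dropped.
    """
    P_world = np.asarray(P_world, dtype=float)
    P_cam = (P_world - t_wc) @ R_wc              # apply Tcw row-wise: R^T (P - t)
    P_cam = P_cam[P_cam[:, 2] > 1e-6]            # keep points in front of the camera
    p_hom = P_cam @ K.T                          # homogeneous pixel coordinates
    return p_hom[:, :2] / p_hom[:, 2:3]          # (u, v) projection points
```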
Step 503, determining the optimized structural parameter representation of the lane line according to the projection point and the semantic information.
And determining the optimized structural parameter representation of the lane line according to the projection point of the second space coordinate in each frame image and semantic information carried by the pixel points in each frame image.
In this embodiment, because the initial structural parameter representation may not represent the lane line well, it needs to be optimized. Second spatial coordinates are determined by selecting points on the curve given by the initial structural parameter representation, and the initial structural parameter representation is optimized according to the projection points of the second spatial coordinates in each frame of image and the semantic information carried by the pixel points in each frame of image, so as to determine the optimized structural parameter representation.
Fig. 6 shows a schematic flow chart of determining an optimized structured parametric representation of a lane line from the proxels and semantic information in the embodiment shown in fig. 5.
As shown in fig. 6, on the basis of the embodiment shown in fig. 5, in an exemplary embodiment of the present application, the determining an optimized structural parameter representation of a lane line shown in step 503 may specifically include the following steps:
step 5031, determining semantic information corresponding to the projection point according to the semantic information carried by the pixel point in the at least one frame of image.
After the second spatial coordinates are projected into each frame of image to determine their corresponding projection points, the semantic information corresponding to each projection point needs to be determined first, because different pixel points in an image carry different semantic information.
Step 5032, if the semantic information corresponding to the projection point is a lane line, marking the projection point.
Different second spatial coordinates correspond to different projection points, and different projection points may correspond to different semantic information. Because the initial structural parameter representation is intended to represent a lane line, in theory the semantic information at the projection points of the second spatial coordinates in the image should correspond to the lane line. However, due to the limited accuracy of the initial structural parameters, the semantic information at some projection points is not a lane line. A projection point is therefore marked only when its semantic information is a lane line, so that the accuracy of the initial structural parameter representation can subsequently be judged from the marked projection points.
Specifically, when the semantic information corresponding to a projection point is a lane line, the projection point is counted as a positive vote. Because two lane lines may be very close to each other, such as a double yellow solid line, in one possible implementation, after the semantic information corresponding to the projection point is determined to be a lane line, the tracking code of the lane line corresponding to the projection point is also determined; the projection point is counted as a positive vote only if this tracking code is consistent with the tracking code of the first spatial coordinate set corresponding to the initial structural parameter representation.
Step 5033, determining the sum of the marked projection points corresponding to the initial structural parameter representation.
The number of projection points whose semantic information is a lane line is counted, and the accuracy of the initial structural parameter representation is judged from this sum: the larger the sum of marked projection points, the more accurately the initial structural parameter representation describes the lane line.
Step 5034, if the sum of the marked projection points meets a first preset condition, determining the initial structural parameter representation as an optimized structural parameter representation of the lane line.
The first preset condition is set as being greater than a preset threshold, where the preset threshold is a lower bound on the sum of marked projection points. Only when the sum of marked projection points is greater than the preset threshold can the current initial structural parameter representation accurately represent the lane line, and in that case the current initial structural parameter representation is determined as the optimized structural parameter representation of the lane line.
In this embodiment, projection points whose semantic information is a lane line are marked and the sum of the marked projection points is counted. This sum measures the accuracy of the initial structural parameter representation: the larger the sum, the more accurate the representation. In order to select an accurate initial structural parameter representation, the first preset condition is set as exceeding a preset threshold, and an initial structural parameter representation that meets the first preset condition is determined as the optimized structural parameter representation, so the obtained optimized structural parameter representation has high accuracy.
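Steps 5031 to 5034 could be sketched as a simple voting loop over the projection points of one frame, as below; the class id, the optional per-pixel tracking-code map and the threshold value are assumptions of the sketch.

```python
import numpy as np

LANE_LINE_ID = 1  # assumed class id for "lane line" pixels in the label map

def count_votes(proj_points, label_map, track_map=None, track_id=None):
    """Count projection points that land on lane-line pixels (positive votes).

    proj_points : M x 2 (u, v) projections of one candidate curve in one frame
    label_map   : H x W per-pixel semantic labels of that frame
    track_map / track_id : optional per-pixel tracking codes; when given, a
        projection only counts if it also falls on the same tracking code,
        which separates closely spaced lines such as a double yellow line.
    """
    h, w = label_map.shape
    votes = 0
    for u, v in np.round(np.asarray(proj_points)).astype(int):
        if not (0 <= u < w and 0 <= v < h):
            continue                               # projection falls outside the image
        if label_map[v, u] != LANE_LINE_ID:
            continue                               # semantic information is not a lane line
        if track_map is not None and track_map[v, u] != track_id:
            continue                               # lane line, but a different tracking code
        votes += 1
    return votes

# The candidate is accepted when the total over all frames exceeds a preset
# threshold (the first preset condition); the threshold value is an assumption.
```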
Fig. 7 shows a schematic flow chart of acquiring an initial structured parametric representation of a lane line according to the first spatial coordinates in the embodiment shown in fig. 6.
As shown in fig. 7, based on the embodiment shown in fig. 6, in an exemplary embodiment of the present application, the obtaining of the initial structured parameter representation of the lane line shown in step 40 may specifically include the following steps:
step 4021, selecting a third spatial coordinate set from the first spatial coordinate set.
And selecting a third space coordinate set from the first space coordinate set, wherein the third space coordinate set is a part of the first space coordinate set.
Step 4022, obtaining an initial structural parameter representation of the lane line according to the third space coordinate set.
And acquiring the initial structural parameter representation of the lane line according to the selected third space coordinate set.
On the basis of step 4021 and step 4022, as shown in fig. 7, in an exemplary embodiment of the present application, after step 5033 determines the sum of the marked projection points corresponding to the initial structural parameter representation, the method may further include the following steps:
step 5035, if the sum of the marked projection points does not meet the first preset condition, selecting a fourth spatial coordinate set from the first spatial coordinate set.
If the sum of marked projection points corresponding to the initial structural parameter representation does not meet the first preset condition, that is, the sum is less than or equal to the preset threshold, the current initial structural parameter representation cannot accurately represent the lane line. Therefore, a fourth spatial coordinate set needs to be selected again from the first spatial coordinate set; the fourth spatial coordinate set plays the role of a new third spatial coordinate set, and its spatial coordinates differ from those of the previous third spatial coordinate set.
Step 5036, updating the initial structured parametric representation according to the fourth set of spatial coordinates.
And updating the initial structural parameter representation according to the reselected fourth spatial coordinate set, namely determining a new initial structural parameter representation again according to the fourth spatial coordinate in the fourth spatial coordinate set.
In this embodiment, the initial structural parameter representation is determined from the third spatial coordinate set. When the sum of marked projection points corresponding to this representation does not meet the first preset condition, the accuracy of the current initial structural parameter representation is low, that is, it cannot accurately represent the lane line. Therefore, a new selection is made from the first spatial coordinate set: a fourth spatial coordinate set is determined to update the initial structural parameter representation, and step 50 is executed again with the updated representation. By repeatedly executing step 50 in this way to determine the optimized structural parameter representation, the finally obtained optimized structural parameter representation of the lane line is ensured to have high accuracy.
Fig. 8 is a schematic flow chart of the embodiment shown in fig. 7, which is included after the fourth spatial coordinate set is selected from the first spatial coordinate set.
As shown in fig. 8, on the basis of the embodiment shown in fig. 7, in an exemplary embodiment of the present application, after the step 5035 selects the fourth spatial coordinate set from the first spatial coordinate set, the method may further include the following steps:
step 5037, determining the selection times corresponding to the fourth spatial coordinate set.
When the sum of marked projection points does not meet the first preset condition, a fourth spatial coordinate set is selected from the first spatial coordinate set and the number of selections of the fourth spatial coordinate set is recorded; each time the fourth spatial coordinate set is reselected, the selection count is increased by one.
Step 5038, if the number of selections meets a second preset condition, determining the initial structural parameter representation that the sum of the marked projection points meets a third preset condition as the optimized structural parameter representation of the lane line.
The second preset condition is set as reaching a maximum number of selections. Each selection of a fourth spatial coordinate set corresponds to one iteration, so the maximum number of selections is the maximum number of iterations. If, after many iterations, the sum of marked projection points corresponding to the new initial structural parameter representations obtained from the fourth spatial coordinate sets still cannot meet the first preset condition, the second preset condition prevents the method from falling into endless iteration: once the maximum number of selections is reached, the initial structural parameter representation with the largest sum of marked projection points among all initial structural parameter representations is determined as the optimized structural parameter representation of the lane line. The third preset condition is thus set as having the largest sum of marked projection points, and the optimized structural parameter representation obtained in this way represents the lane line relatively accurately.
In this embodiment, considering that the sum of projection points corresponding to the initial structural parameter representation obtained after multiple iterations may still not meet the first preset condition, the second preset condition is set so that, after the number of selections of the fourth spatial coordinate set reaches the maximum number of selections, the initial structural parameter representation with the largest sum of projection points among all initial structural parameter representations is determined as the optimized structural parameter representation. The optimized structural parameter representation obtained in this way represents the lane line relatively accurately, and the method is prevented from falling into endless iteration.
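Taken together, steps 40 and 50 behave like a RANSAC-style loop: repeatedly sample a coordinate subset, fit a candidate curve, count votes over all frames, accept early when the first preset condition is met, and otherwise return the best candidate after the maximum number of selections. The sketch below strings the earlier helpers together under that reading; the parameter values and data layout are assumptions.

```python
import numpy as np

def optimize_lane_curve(coords, frames, vote_threshold=200, max_rounds=50,
                        sample_size=4, n_curve_samples=11, rng=None):
    """RANSAC-style search for the optimized structured parameter representation.

    coords : N x 3 first spatial coordinate set of one lane line
    frames : list of (K, R_wc, t_wc, label_map) tuples, one per frame
    vote_threshold, max_rounds, sample_size and n_curve_samples are assumed
    values; fit_quadratic_bezier, sample_bezier, project_points and
    count_votes are the helpers sketched above.
    """
    coords = np.asarray(coords, dtype=float)
    rng = np.random.default_rng() if rng is None else rng
    ts = np.linspace(0.0, 1.0, n_curve_samples)
    best_curve, best_votes = None, -1

    for _ in range(max_rounds):
        subset = coords[rng.choice(len(coords), size=sample_size, replace=False)]
        curve = fit_quadratic_bezier(subset)          # candidate initial representation
        samples = sample_bezier(*curve, ts)           # second spatial coordinates
        votes = sum(count_votes(project_points(samples, K, R, t), labels)
                    for K, R, t, labels in frames)
        if votes > best_votes:                        # keep the best candidate so far
            best_curve, best_votes = curve, votes
        if votes > vote_threshold:                    # first preset condition is met
            return curve
    return best_curve                                 # third preset condition: largest vote sum
```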
Exemplary devices
Based on the same concept as the method embodiment of the application, the embodiment of the application also provides a device for determining the structural parameter representation of the lane line.
Fig. 9 is a schematic structural diagram of a structural parameter representation determining apparatus for a lane line according to an exemplary embodiment of the present application.
As shown in fig. 9, an exemplary embodiment of the present application provides a device for determining a structured parameter representation of a lane line, including:
the semantic information acquiring module 91 is configured to acquire semantic information carried by a pixel point in at least one frame of image;
and the pixel coordinate determining module 92 is configured to determine the pixel coordinate of the reference point of the lane line corresponding to the semantic information.
A spatial coordinate obtaining module 93, configured to obtain, according to the pixel coordinate and a camera pose corresponding to the at least one frame of image, a first spatial coordinate set corresponding to the pixel coordinate;
a first parametric representation module 94, configured to obtain an initial structured parametric representation of the lane line according to the first spatial coordinate set;
a second parameter representation module 95, configured to determine an optimized structured parameter representation of the lane line according to the initial structured parameter representation, the respective corresponding camera pose of the at least one frame of image, and the semantic information.
As shown in fig. 10, in an exemplary embodiment, the spatial coordinate acquisition module 93 includes: an inverse perspective transformation unit 931, a tracking code determination unit 932, a first acquisition unit 933;
the inverse perspective transformation unit 931 is configured to perform inverse perspective transformation on the pixel coordinate to obtain a sixth spatial coordinate corresponding to the pixel coordinate;
a tracking code determining unit 932, configured to determine a tracking code of the lane line corresponding to the sixth spatial coordinate in at least one frame of image;
a first obtaining unit 933, configured to combine the sixth spatial coordinates where the corresponding tracking codes are the same into the first spatial coordinate set.
As shown in FIG. 10, in one exemplary embodiment, the first parametric representation module 94 includes: a first selecting unit 9411 and a first parameter representation unit 9412;
a first selecting unit 9411, configured to randomly select at least four fifth spatial coordinates from the first spatial coordinate set;
a first parameter representing unit 9412, configured to obtain an initial bezier curve corresponding to the lane line according to the fifth spatial coordinate.
As shown in fig. 10, in an exemplary embodiment, the second parameter representation module 95 includes: a second obtaining unit 951, a projection point determining unit 952, and a second parameter representation unit 953;
a second obtaining unit 951, configured to determine a second spatial coordinate corresponding to the lane line according to the initial structured parameter representation;
a projection point determining unit 952, configured to obtain, according to the second spatial coordinate and the camera pose corresponding to each of the at least one frame of image, a projection point corresponding to each of the second spatial coordinate in the at least one frame of image;
and the second parameter representation unit 953 is configured to determine an optimized structured parameter representation of the lane line according to the projection point and the semantic information.
As shown in fig. 11, in an exemplary embodiment, the second parameter representation unit 953 includes: a first determination subunit 9531, a projected point marker subunit 9532, a second determination subunit 9533, a first parametric representation subunit 9534;
the first determining subunit 9531 is configured to determine semantic information corresponding to the projection point according to semantic information carried by a pixel point in at least one frame of image;
a projection point marking subunit 9532, configured to mark a projection point if the semantic information corresponding to the projection point is a lane line;
a second determining subunit 9533, configured to determine the sum of the marked projection points corresponding to the initial structural parameter representation;
a first parametric representation subunit 9534, configured to determine the initial structured parametric representation as an optimized structured parametric representation of the lane line if the sum of the marked projection points satisfies a first preset condition.
As shown in FIG. 12, in one exemplary embodiment, the first parametric representation module 94 includes: a second selection unit 9421 and a third parameter representation unit 9422;
a second selecting unit 9421, configured to select a third spatial coordinate set from the first spatial coordinate set;
a third parameter representation unit 9422, configured to obtain an initial structured parameter representation of the lane line according to the third spatial coordinate set;
when the first parameter representation module 94 includes the second selection unit 9421 and the third parameter representation unit 9422, as shown in fig. 13, in an exemplary embodiment, the second parameter representation unit 953 further includes: selecting a subunit 9535, the parameter representation updating subunit 9536;
a selecting subunit 9535, configured to select a fourth spatial coordinate set from the first spatial coordinate set if the sum of the marked projection points does not meet a first preset condition;
a parametric representation updating subunit 9536 for updating the initial structured parametric representation in dependence on the fourth set of spatial coordinates.
As shown in fig. 14, in an exemplary embodiment, the second parameter representation unit 953 further includes: a third determination subunit 9537 and a second parameter representation subunit 9538;
a third determining subunit 9537, configured to determine the selection times corresponding to the fourth spatial coordinate set;
a second parametric representation subunit 9538, configured to, if the number of selections satisfies the second preset condition, determine the initial structured parametric representation that the sum of the marked projection points satisfies the third preset condition as the optimized structured parametric representation of the lane line.
Exemplary electronic device
FIG. 15 illustrates a block diagram of an electronic device in accordance with an embodiment of the present application.
As shown in fig. 15, the electronic device 100 includes one or more processors 101 and memory 102.
The processor 101 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.
Memory 102 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory may include, for example, random access memory (RAM) and/or cache memory, or the like. Non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 101 to implement the above method for determining the structural parameter representation of a lane line of the various embodiments of the present application and/or other desired functions.
In one example, the electronic device 100 may further include: an input device 103 and an output device 104, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
Of course, for the sake of simplicity, only some of the components related to the present application in the electronic apparatus 100 are shown in fig. 15, and components such as a bus, an input/output interface, and the like are omitted. In addition, electronic device 100 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage Medium
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps in the method of determining structured parametric representations of lane lines according to various embodiments of the present application described in the above-mentioned "exemplary methods" section of this specification.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform steps in a method of determining structured parametric representations of lane lines according to various embodiments of the present application, described in the "exemplary methods" section above in this description.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses and systems referred to in this application are only given as illustrative examples and are not intended to require or imply that the connections, arrangements and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses and systems may be connected, arranged and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including", "comprising" and "having" are open-ended words that mean "including, but not limited to" and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the term "and/or", unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to".
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (10)

1. A method for determining a structural parameter representation of a lane line, comprising:
obtaining semantic information carried by pixel points in at least one frame of image;
determining pixel coordinates of a reference point of a lane line corresponding to the semantic information;
acquiring a first space coordinate set corresponding to the pixel coordinates according to the pixel coordinates and the camera poses respectively corresponding to the at least one frame of image;
acquiring an initial structural parameter representation of the lane line according to the first space coordinate set;
and determining the optimized structural parameter representation of the lane line according to the initial structural parameter representation, the camera pose corresponding to the at least one frame of image and the semantic information.
2. The method according to claim 1, wherein said determining the optimized structural parameter representation of the lane line according to the initial structural parameter representation, the camera poses respectively corresponding to the at least one frame of image, and the semantic information comprises:
determining a second space coordinate corresponding to the lane line according to the initial structural parameter representation;
acquiring projection points of the second space coordinate in the at least one frame of image according to the second space coordinate and the camera poses respectively corresponding to the at least one frame of image;
and determining the optimized structural parameter representation of the lane line according to the projection point and the semantic information.
3. The method of claim 2, wherein said determining the optimized structural parameter representation of the lane line according to the projection points and the semantic information comprises:
determining semantic information corresponding to the projection point according to the semantic information carried by the pixel points in the at least one frame of image;
if the semantic information corresponding to a projection point indicates a lane line, marking the projection point;
determining the total number of marked projection points corresponding to the initial structural parameter representation;
and if the total number of marked projection points meets a first preset condition, determining the initial structural parameter representation as the optimized structural parameter representation of the lane line.
4. The method of claim 3, wherein said acquiring an initial structural parameter representation of the lane line according to the first space coordinate set comprises:
selecting a third space coordinate set from the first space coordinate set;
acquiring an initial structural parameter representation of the lane line according to the third space coordinate set;
and after said determining the total number of marked projection points corresponding to the initial structural parameter representation, the method further comprises:
if the total number of marked projection points does not meet the first preset condition, selecting a fourth space coordinate set from the first space coordinate set;
and updating the initial structural parameter representation according to the fourth space coordinate set.
5. The method of claim 4, wherein said selecting a fourth space coordinate set from the first space coordinate set further comprises:
determining the number of times the fourth space coordinate set has been selected;
and if the number of selections meets a second preset condition, determining, as the optimized structural parameter representation of the lane line, the initial structural parameter representation whose total number of marked projection points meets a third preset condition.
6. The method of claim 1, wherein said acquiring an initial structural parameter representation of the lane line according to the first space coordinate set comprises:
randomly selecting at least four fifth space coordinates from the first space coordinate set;
and acquiring an initial Bezier curve corresponding to the lane line according to the fifth space coordinates.
7. The method according to any one of claims 1 to 6, wherein said acquiring a first space coordinate set corresponding to the pixel coordinates according to the pixel coordinates and the camera poses respectively corresponding to the at least one frame of image comprises:
performing inverse perspective transformation on the pixel coordinates to obtain sixth space coordinates corresponding to the pixel coordinates;
determining a tracking code of the lane line corresponding to each sixth space coordinate in the at least one frame of image;
and forming the first space coordinate set from the sixth space coordinates having the same tracking code.
8. An apparatus for determining a structural parameter representation of a lane line, comprising:
the semantic information acquisition module is used for acquiring semantic information carried by pixel points in at least one frame of image;
the pixel coordinate determination module is used for determining the pixel coordinates of the reference point of the lane line corresponding to the semantic information;
the spatial coordinate acquisition module is used for acquiring a first space coordinate set corresponding to the pixel coordinates according to the pixel coordinates and the camera poses respectively corresponding to the at least one frame of image;
the first parameter representation module is used for acquiring the initial structural parameter representation of the lane line according to the first space coordinate set;
and the second parameter representation module is used for determining the optimized structural parameter representation of the lane line according to the initial structural parameter representation, the camera pose corresponding to each of the at least one frame of image and the semantic information.
9. A computer-readable storage medium having stored thereon a computer program for executing the method for determining a structural parameter representation of a lane line according to any one of claims 1 to 7.
10. An electronic device, the electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to read the executable instructions from the memory and execute the instructions to implement the method for determining a structural parameter representation of a lane line according to any one of claims 1 to 7.
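
By way of illustration only, the following Python sketch shows one way the inverse perspective transformation of claim 7 could be realised under a flat-ground assumption: a lane-line pixel is back-projected through the camera intrinsics and intersected with the ground plane z = 0 to yield a spatial coordinate. The pinhole model, the coordinate conventions, and all names (pixel_to_ground, the example intrinsics and pose) are assumptions of this sketch, not taken from the specification.

```python
# Illustrative sketch only: back-project a lane-line pixel onto an assumed
# flat ground plane (z = 0 in the world frame). Conventions (assumed):
# camera axes x-right, y-down, z-forward; world axes x-forward, y-left, z-up.
import numpy as np

def pixel_to_ground(u, v, K, R_wc, C):
    """Back-project pixel (u, v) onto the plane z = 0.

    K    : 3x3 camera intrinsic matrix.
    R_wc : camera-to-world rotation (columns are the camera axes in world frame).
    C    : camera center in world coordinates.
    Returns the intersection point, i.e. one "sixth space coordinate".
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray, camera frame
    ray_world = R_wc @ ray_cam                            # same ray, world frame
    s = -C[2] / ray_world[2]                              # solve C_z + s * r_z = 0
    return C + s * ray_world

# Hypothetical camera: 1.5 m above the ground, looking along world +x.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R_wc = np.array([[0.0, 0.0, 1.0],      # camera z (forward) -> world x
                 [-1.0, 0.0, 0.0],     # camera x (right)   -> world -y
                 [0.0, -1.0, 0.0]])    # camera y (down)    -> world -z
C = np.array([0.0, 0.0, 1.5])

# A pixel 140 px below the principal point maps to a ground point ~10.7 m ahead.
print(pixel_to_ground(640.0, 500.0, K, R_wc, C))
```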
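
Claim 6 represents the lane line by a Bezier curve determined from at least four spatial coordinates. A minimal sketch of one plausible realisation follows, using chord-length parameterisation and a least-squares fit of cubic control points; these particular choices, and the helper names, are illustrative assumptions rather than the patented method.

```python
# Illustrative sketch only: fit a cubic Bezier curve to n >= 4 lane-line points.
import numpy as np

def bernstein_cubic(t):
    """Cubic Bernstein basis values for parameters t, returned as an (n, 4) matrix."""
    t = np.asarray(t, dtype=float)[:, None]
    return np.hstack([(1 - t) ** 3,
                      3 * t * (1 - t) ** 2,
                      3 * t ** 2 * (1 - t),
                      t ** 3])

def fit_cubic_bezier(points):
    """Least-squares control points (shape (4, 3)) for 3D points of shape (n, 3), n >= 4."""
    pts = np.asarray(points, dtype=float)
    # Chord-length parameterisation of the points over [0, 1].
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    t = d / d[-1]
    B = bernstein_cubic(t)                              # (n, 4) design matrix
    ctrl, *_ = np.linalg.lstsq(B, pts, rcond=None)      # solve B @ ctrl ~= pts
    return ctrl

def sample_bezier(ctrl, num=50):
    """Sample points along the curve; these play the role of the second space coordinates."""
    return bernstein_cubic(np.linspace(0.0, 1.0, num)) @ ctrl
```

With exactly four points the least-squares system is square, so the curve interpolates them; with more points it smooths over them, which is one reason a parametric curve can stand in for many individual map sampling points.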
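
Claims 2 to 5 describe a sampling-and-scoring loop reminiscent of RANSAC: a candidate curve fitted from a random subset of the first space coordinate set is sampled, the samples are projected into each frame with the corresponding camera pose, projections landing on lane-line-labelled pixels are marked, and the candidate is accepted once the marked total meets a preset condition; otherwise new subsets are drawn up to a preset number of times and the best candidate is kept. The sketch below, which reuses fit_cubic_bezier and sample_bezier from the previous example, is one hypothetical arrangement; the thresholds, the projection convention, and all names are assumptions of this sketch.

```python
# Illustrative RANSAC-style sketch only; assumes fit_cubic_bezier / sample_bezier above.
import numpy as np

def project(point_world, K, R, t):
    """Project a 3D world point, using x_cam = R @ x_world + t; returns pixel (u, v)."""
    uvw = K @ (R @ point_world + t)
    return uvw[:2] / uvw[2]

def count_marked(ctrl, frames):
    """frames: list of (semantic_mask, K, R, t); mask is True at lane-line pixels."""
    marked = 0
    for mask, K, R, t in frames:
        h, w = mask.shape
        for p in sample_bezier(ctrl, num=50):
            u, v = np.round(project(p, K, R, t)).astype(int)
            if 0 <= v < h and 0 <= u < w and mask[v, u]:
                marked += 1                      # projection falls on a lane-line label
    return marked

def optimise_lane_curve(coords, frames, min_marked=200, max_draws=100):
    """coords: (n, 3) array of first-set spatial coordinates for one tracked lane line."""
    best_ctrl, best_score = None, -1
    rng = np.random.default_rng(0)
    for _ in range(max_draws):                   # illustrative "second preset condition"
        subset = coords[rng.choice(len(coords), size=4, replace=False)]
        ctrl = fit_cubic_bezier(subset)
        score = count_marked(ctrl, frames)
        if score >= min_marked:                  # illustrative "first preset condition"
            return ctrl
        if score > best_score:                   # best so far ("third preset condition")
            best_ctrl, best_score = ctrl, score
    return best_ctrl
```

Drawing minimal subsets keeps each fit cheap, while scoring every candidate against the semantic masks of all frames ties the optimised representation back to the observed image evidence.
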
CN201910787011.1A 2019-08-24 2019-08-24 Method and device for determining structural parameter representation of lane line Active CN112435293B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910787011.1A CN112435293B (en) 2019-08-24 2019-08-24 Method and device for determining structural parameter representation of lane line

Publications (2)

Publication Number Publication Date
CN112435293A CN112435293A (en) 2021-03-02
CN112435293B CN112435293B (en) 2024-04-19

Family

ID=74690023

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910787011.1A Active CN112435293B (en) 2019-08-24 2019-08-24 Method and device for determining structural parameter representation of lane line

Country Status (1)

Country Link
CN (1) CN112435293B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090064946A (en) * 2007-12-17 2009-06-22 한국전자통신연구원 Method and apparatus for generating virtual lane for video based car navigation system
US20150055831A1 (en) * 2012-03-19 2015-02-26 Nippon Soken, Inc. Apparatus and method for recognizing a lane
CN105667518A (en) * 2016-02-25 2016-06-15 福州华鹰重工机械有限公司 Lane detection method and device
US20180181817A1 (en) * 2015-09-10 2018-06-28 Baidu Online Network Technology (Beijing) Co., Ltd. Vehicular lane line data processing method, apparatus, storage medium, and device
US20180285659A1 (en) * 2017-03-31 2018-10-04 Here Global B.V. Method, apparatus, and system for a parametric representation of lane lines
JP2018169947A (en) * 2017-03-30 2018-11-01 株式会社日立情報通信エンジニアリング Lane recognition apparatus and lane recognition program
CN109543520A (en) * 2018-10-17 2019-03-29 天津大学 A kind of lane line parametric method of Semantic-Oriented segmentation result
US20190146520A1 (en) * 2014-03-18 2019-05-16 Ge Global Sourcing Llc Optical route examination system and method
CN109948470A (en) * 2019-03-01 2019-06-28 武汉光庭科技有限公司 'STOP' line ahead detection method and system based on Hough transformation
CN110084095A (en) * 2019-03-12 2019-08-02 浙江大华技术股份有限公司 Method for detecting lane lines, lane detection device and computer storage medium

Also Published As

Publication number Publication date
CN112435293B (en) 2024-04-19

Similar Documents

Publication Publication Date Title
US11422261B2 (en) Robot relocalization method and apparatus and robot using the same
CN109461211B (en) Semantic vector map construction method and device based on visual point cloud and electronic equipment
CN112489126B (en) Vehicle key point information detection method, vehicle control method and device and vehicle
CN108279670B (en) Method, apparatus and computer readable medium for adjusting point cloud data acquisition trajectory
CN111612852B (en) Method and apparatus for verifying camera parameters
CN110853085B (en) Semantic SLAM-based mapping method and device and electronic equipment
CN110068824B (en) Sensor pose determining method and device
CN113554643B (en) Target detection method and device, electronic equipment and storage medium
CN109034214B (en) Method and apparatus for generating a mark
CN112668596B (en) Three-dimensional object recognition method and device, recognition model training method and device
CN112150529B (en) Depth information determination method and device for image feature points
CN112212873B (en) Construction method and device of high-precision map
CN112097742B (en) Pose determination method and device
CN112381873A (en) Data labeling method and device
CN116642490A (en) Visual positioning navigation method based on hybrid map, robot and storage medium
CN115393423A (en) Target detection method and device
CN112435293A (en) Method and device for determining structural parameter representation of lane line
CN112417924B (en) Space coordinate acquisition method and device for marker post
CN112348874B (en) Method and device for determining structural parameter representation of lane line
CN114419564A (en) Vehicle pose detection method, device, equipment, medium and automatic driving vehicle
CN112766068A (en) Vehicle detection method and system based on gridding labeling
CN112528918A (en) Road element identification method, map marking method and device and vehicle
CN112348876A (en) Method and device for acquiring space coordinates of signboards
CN113095347A (en) Deep learning-based mark recognition method and training method, system and electronic equipment thereof
CN111724431B (en) Parallax map obtaining method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant