CN113139031A - Method for generating traffic sign for automatic driving and related device - Google Patents


Publication number
CN113139031A
CN113139031A
Authority
CN
China
Prior art keywords
guideboard
images
point set
feature point
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110541380.XA
Other languages
Chinese (zh)
Other versions
CN113139031B (en)
Inventor
单国航
朱磊
贾双成
李倩
李成军
Current Assignee
Zhidao Network Technology Beijing Co Ltd
Original Assignee
Zhidao Network Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhidao Network Technology Beijing Co Ltd
Priority claimed from CN202110541380.XA
Publication of CN113139031A
Application granted
Publication of CN113139031B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/587 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Library & Information Science (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

The application relates to a method and related apparatus for generating traffic signs for automatic driving. The method comprises: acquiring two images containing the same guideboard; calculating a rotation matrix and a translation matrix between the two images; performing guideboard recognition on the two images to obtain the pixel coordinates of a first guideboard feature point set in each of the two images; calculating the spatial coordinates of the first guideboard feature point set relative to the camera from the rotation matrix and translation matrix between the two images and the pixel coordinates of the first guideboard feature point set in the two images; determining the guideboard spatial plane in which the first guideboard feature point set lies, using the spatial coordinates of the first guideboard feature point set relative to the camera and a horizontal reference plane; and calculating the spatial coordinates of a second guideboard feature point set relative to the camera from the guideboard spatial plane and the pixel coordinates of the second guideboard feature point set in one of the images. The scheme provided by the application can obtain geographical coordinates of a guideboard with high accuracy.

Description

Method for generating traffic sign for automatic driving and related device
Technical Field
The present application relates to the field of navigation technologies, and in particular, to a method and a related apparatus for generating a traffic sign for automatic driving.
Background
With the development of technologies such as artificial intelligence and automatic driving, the construction of intelligent transportation has become a research hotspot, and high-precision maps are an essential part of intelligent-transportation data. A high-precision map can contain various traffic signs: ground elements in the real world, such as lane lines, stop lines and pedestrian crossings, and overhead elements, such as guideboards and traffic lights, can be expressed in a detailed lane-level map to provide data support for navigation in application scenarios such as automatic driving.
The guideboard among the traffic signs serves as an information-bearing carrier for urban geographic entities. It carries navigation information such as place names, routes, distances and directions; as infrastructure distributed at urban road intersections it is spatially distinctive, making it a good carrier for a city's basic internet of things.
In the related art, the geographical coordinates of a guideboard are generated by calculating the spatial coordinates of each feature point of the guideboard separately, and the guideboard is produced from them. If the spatial coordinate of any one feature point has a large calculation error, the accuracy of the resulting guideboard is directly affected.
Disclosure of Invention
In order to solve or partially solve the problems in the related art, the application provides a method and a related device for generating a traffic sign for automatic driving, which can obtain geographical coordinates of a guideboard with high accuracy.
The application provides a method for generating traffic signs for automatic driving in a first aspect, which comprises the following steps:
acquiring two images containing the same guideboard and acquiring geographic position information of a camera when the two images are respectively shot;
calculating a rotation matrix and a translation matrix between the two images;
performing guideboard recognition on the two images to obtain pixel coordinates of a first guideboard feature point set in the two images, wherein the first guideboard feature point set comprises at least three feature points in the guideboard;
calculating the spatial coordinates of the first guideboard feature point set relative to the camera according to the rotation matrix and the translation matrix between the two images and the pixel coordinates of the first guideboard feature point set in the two images;
determining a guideboard spatial plane in which the first guideboard feature point set lies, using the spatial coordinates of the first guideboard feature point set relative to the camera and a horizontal reference plane, the guideboard spatial plane being perpendicular to the horizontal reference plane;
calculating the space coordinate of a second guideboard feature point set relative to the camera according to the guideboard space plane and the pixel coordinate of the second guideboard feature point set in one of the images, wherein the second guideboard feature point set comprises at least two feature points at preset positions on the guideboard;
and generating the geographical coordinates of the guideboard by utilizing the space coordinates of the second guideboard feature point set relative to the camera and the geographical position information of the camera when the two images are shot.
In one embodiment, the calculating a rotation matrix and a translation matrix between the two images includes:
acquiring feature points of each of the two images;
matching the feature points of the two images to obtain a target feature point set successfully matched across the two images; and
calculating a rotation matrix and a translation matrix between the two images using the target feature point set.
In one embodiment, the obtaining pixel coordinates of the first guideboard feature point set in the two images includes:
acquiring feature points in the identified guideboard area in each of the two images;
matching the feature points in the guideboard areas in the two images to obtain a first guideboard feature point set successfully matched in the two images;
and acquiring pixel coordinates of the first guideboard feature point set in the two images respectively.
In one embodiment, the determining a guideboard spatial plane in which the first set of guideboard feature points is located using the spatial coordinates of the first set of guideboard feature points with respect to the camera and a horizontal reference plane includes:
constructing a vertical-plane error equation by a least-squares optimization algorithm from the spatial coordinates of the first guideboard feature point set relative to the camera and the horizontal reference plane; and
obtaining the guideboard spatial plane in which the first guideboard feature point set lies from the vertical-plane error equation.
In one embodiment, said calculating spatial coordinates of said second set of guideboard feature points relative to said camera based on said guideboard spatial plane and pixel coordinates of said second set of guideboard feature points in one of said images comprises:
constructing a system of equations for solving feature-point spatial coordinates, using the guideboard spatial plane and a preset calculation formula; and
substituting the pixel coordinates of the second guideboard feature point set in one of the images into the system of equations in turn, to obtain the spatial coordinates of the second guideboard feature point set relative to the camera.
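One way the "preset calculation formula" could be realized — an assumed sketch, since the patent does not give the formula — is to intersect the viewing ray through each pixel with the fitted guideboard plane. The intrinsic matrix K and the axis convention (z = optical axis, y = vertical, matching the vertical-plane form a·x + c·z + d = 0) are assumptions of this illustration:

```python
import numpy as np

def backproject_to_plane(u, v, K, plane):
    """Intersect the viewing ray through pixel (u, v) with a vertical
    guideboard plane a*x + c*z + d = 0 expressed in the camera frame
    (z = optical axis, y = vertical). K is the 3x3 intrinsic matrix.
    Returns the 3D point on the plane, in camera coordinates."""
    a, c, d = plane
    ray = np.linalg.solve(K, np.array([u, v, 1.0]))  # ray direction, scale-free
    denom = a * ray[0] + c * ray[2]
    if abs(denom) < 1e-12:
        raise ValueError("viewing ray is parallel to the guideboard plane")
    s = -d / denom                       # depth scale at the intersection
    return s * ray
```

This is why only one image is needed for the second guideboard feature point set: once the plane is known, a single pixel observation determines a unique 3D point.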
In one embodiment, the preset positions include:
and one or a combination of more of corner points, center points, line segment intersection points, points on edge lines and vertexes of fonts of the guideboard.
A second aspect of the present application provides an apparatus for generating a traffic sign for automatic driving, including:
an acquisition unit, configured to acquire two images containing the same guideboard and to acquire the geographical position information of the camera when each of the two images was captured;
the first calculation unit is used for calculating a rotation matrix and a translation matrix between the two images;
the identification unit is used for carrying out guideboard identification on the two images and acquiring pixel coordinates of a first guideboard feature point set in the two images, wherein the first guideboard feature point set comprises at least three feature points in the guideboard;
the second calculation unit is used for calculating the space coordinates of the first guideboard feature point set relative to the camera according to the rotation matrix and the translation matrix between the two images and the pixel coordinates of the first guideboard feature point set in the two images;
the determining unit is used for determining a guideboard spatial plane in which the first guideboard feature point set lies, using the spatial coordinates of the first guideboard feature point set relative to the camera and a horizontal reference plane; the guideboard spatial plane is perpendicular to the horizontal reference plane;
a third calculating unit, configured to calculate the spatial coordinates of a second guideboard feature point set relative to the camera according to the guideboard spatial plane and the pixel coordinates of the second guideboard feature point set in one of the images, where the second guideboard feature point set includes at least two feature points at preset positions on the guideboard;
and the generating unit is used for generating the geographical coordinates of the guideboard by utilizing the space coordinates of the second guideboard feature point set relative to the camera and the geographical position information of the camera when the two images are shot.
In one embodiment, the preset positions include:
and one or a combination of more of corner points, center points, line segment intersection points, points on edge lines and vertexes of fonts of the guideboard.
A third aspect of the present application provides an electronic device comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method as described above.
A fourth aspect of the present application provides a non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to perform a method as described above.
The technical scheme provided by the application can comprise the following beneficial effects:
according to the method provided by the embodiment of the application, the guideboard space plane where the first guideboard feature point set is located is determined by using the space coordinate of the first guideboard feature point set relative to the camera. Because the guideboard space plane is determined jointly according to all the first guideboard feature points in the first guideboard feature point set, the influence of calculation errors of some first guideboard feature points on the calculation correctness of the guideboard space plane is avoided to the greatest extent. The fact that the guideboard is perpendicular to the horizontal reference plane in the real world is reflected by setting the guideboard space plane to be perpendicular to the horizontal reference plane, so that the guideboard space plane is corrected, the influence of calculation errors of certain first guideboard feature points is eliminated, and the calculation accuracy of the guideboard space plane is guaranteed. And calculating the space coordinates of the second guideboard feature point set relative to the camera according to the guideboard space plane and the pixel coordinates of the second guideboard feature point set in one of the images, wherein the second guideboard feature point set comprises at least two feature points at preset positions on the guideboard. The second guideboard feature point can be a feature point of a preset position on the guideboard, so that the second guideboard feature point is more representative and is beneficial to ensuring the accuracy of identification and acquisition of the second guideboard feature point. 
The accurate and reliable guideboard space plane is utilized, and the second guideboard feature point set with more representativeness and high accuracy is used for calculation, so that the accuracy, reliability and stability of the second guideboard feature point set relative to the space coordinate of the camera are guaranteed, the geographic coordinate of the guideboard with high accuracy is obtained, and the high-precision guideboard is manufactured.
Further, the method provided by the embodiment of the application can obtain the feature points of each of the two images, match the feature points of the two images to obtain the target feature point set successfully matched in the two images, and calculate the rotation matrix and the translation matrix between the two images by using the target feature point set, so that the accuracy of calculation of the space coordinate of the first guideboard feature point set relative to the camera is ensured, and the acquisition of the geographical coordinates of the guideboard with high accuracy is facilitated.
Further, the method provided by the embodiment of the application can obtain the feature points in the guideboard area identified in each of the two images, match the feature points in the guideboard areas in the two images, and obtain the first guideboard feature point set successfully matched in the two images, so that the pixel coordinates of the first guideboard feature point set in the two images are obtained, and further, the method is favorable for obtaining the geographic coordinates of the guideboard with high accuracy.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The foregoing and other objects, features and advantages of the application will be apparent from the following more particular descriptions of exemplary embodiments of the application, as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the exemplary embodiments of the application.
Fig. 1 is a schematic flow chart illustrating a method for generating a traffic sign for automatic driving according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a traffic sign generation device for automatic driving according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device shown in an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While embodiments of the present application are illustrated in the accompanying drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first," "second," "third," etc. may be used herein to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
In the related art, the geographical coordinates of the guideboard are generated by respectively calculating the spatial coordinates of each feature point in the guideboard, so that the guideboard is manufactured. If the calculation error of the space coordinate of one of the feature points is large, the manufacturing precision of the guideboard is directly influenced.
In view of the foregoing problems, embodiments of the present application provide a method and a related apparatus for generating a traffic sign for automatic driving, which can obtain geographical coordinates of a guideboard with high accuracy.
The technical solutions of the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart illustrating a method for generating a traffic sign for automatic driving according to an embodiment of the present application.
Referring to fig. 1, the method includes:
Step S101: acquiring two images containing the same guideboard, and acquiring the geographical position information of the camera at the time each of the two images was captured.
In the embodiments of the application, video data captured while the vehicle is driving can be acquired by a camera device, which may include, but is not limited to, any device with a camera function mounted on the vehicle, such as a dash cam, a camera, or the driver's mobile phone. The camera device may be a monocular device, and may be mounted at the front of the vehicle to record the guideboards ahead, yielding continuous video frames containing a guideboard. For subsequent processing, frames must be extracted from the video acquired during driving. Video is typically recorded at 30 frames per second, and frames may be extracted according to a preset rule, for example 10, 15, 20 or some other number of frames per second, yielding a sequence of images in which the interval between adjacent frames is the extraction interval. The camera device also records the capture time of each image. In the embodiments of the application, the image-capturing device is referred to simply as the camera.
Wherein the orientation of the camera (i.e. the optical axis of the camera) may be set parallel to a horizontal reference plane (i.e. the horizontal plane).
In addition, the geographical position information of the vehicle or the camera may be acquired by a positioning device fitted in the vehicle or a mobile phone. The positioning device may be implemented with existing systems such as GPS (Global Positioning System), BeiDou, or RTK (Real-Time Kinematic); the application is not limited in this respect. The geographical position information of the vehicle (or camera) may include, but is not limited to, geographical coordinates (e.g. GPS coordinates, longitude and latitude), position, heading angle, and orientation.
The method provided by the embodiments of the application can run on the in-vehicle head unit, or on other equipment with computing and processing capability, such as a computer or mobile phone. Taking the head unit as an example, the camera and the positioning device may be built into it, or be external devices that establish a communication connection with it.
When the camera captures an image, the positioning device collects the geographical position of the vehicle or camera and transmits it to the head unit. The positioning sample acquired at the same moment can then be looked up by the image's capture time. It will be appreciated that the clocks of the camera and the positioning device may be synchronized in advance, so that each captured image corresponds exactly to the current position of the vehicle or camera.
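The time-synchronized lookup described above — finding the positioning sample recorded closest to an image's capture time — can be sketched as a nearest-timestamp search. Function and variable names here are illustrative, not from the patent:

```python
import bisect

def nearest_fix(timestamps, fixes, capture_time):
    """Return the positioning record whose timestamp is closest to an image's
    capture time. `timestamps` is a sorted list of fix times; `fixes` is the
    parallel list of positioning records."""
    i = bisect.bisect_left(timestamps, capture_time)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(timestamps)]
    return fixes[min(candidates, key=lambda j: abs(timestamps[j] - capture_time))]
```

With pre-synchronized clocks, the nearest fix is the sample taken at (or adjacent to) the shutter time.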
Step S102: calculating a rotation matrix and a translation matrix between the two images.
In an alternative embodiment, the specific implementation of step S102, calculating the rotation matrix and the translation matrix between the two images may include the following steps:
11) Acquiring the feature points of each of the two images.
The feature points may include points on the guideboard, or feature points on other fixed objects in the image (such as buildings or billboards); no limitation is imposed here. Specifically, the feature points of each of the two images may be extracted with the BRISK operator, the feature points of each image may be described, and the described feature points used as the feature points of that image.
12) Matching the feature points of the two images to obtain a target feature point set successfully matched across the two images.
In the embodiment of the present application, the two images may include the same object (such as a building, a billboard, a road sign, etc.) under different viewing angles. By matching the feature points on the images, some feature points of the same object on the two images can be successfully matched. And the target characteristic point set is a set of characteristic points which are successfully matched on each picture in the two images.
13) Calculating a rotation matrix and a translation matrix between the two images using the target feature point set.
For example, while the vehicle travels, an image A containing a guideboard is captured at location a, and an image B containing the same guideboard is captured at location b. If there are exactly eight pairs of successfully matched feature points across the two images, the eight-point method can be used to compute the rotation matrix and translation matrix between them. The eight-point method is only an illustration: when more than eight pairs of matched feature points are available, the translation and rotation matrices can be obtained by solving a least-squares problem built from the epipolar constraint.
It is understood that, in step S102, the feature points acquired in the two images may lie inside or outside the guideboard region; that is, they are selected from the entire area of each image. Feature points from the whole of both images are matched to obtain the successfully matched target feature point set, whose points may lie inside or outside the guideboard region of each image. Using this set therefore makes the computed rotation and translation matrices between the two images more accurate and reliable.
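The eight-point computation mentioned above can be sketched with the textbook linear estimate of the essential matrix from matched points in normalized camera coordinates; with more than eight matches, the same linear system is the least-squares solution of the epipolar constraint. This is an illustrative implementation, not the patent's exact formulation:

```python
import numpy as np

def essential_eight_point(x1, x2):
    """Linear (eight-point) estimate of the essential matrix from >= 8 matched
    points given in normalized camera coordinates (Nx2 arrays).
    Solves x2^T E x1 = 0 for each match, then projects the result onto the
    essential manifold (two equal singular values, third zero)."""
    x1 = np.asarray(x1, float)
    x2 = np.asarray(x2, float)
    # Each match contributes one row of the homogeneous system A e = 0.
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0],            x1[:, 1],            np.ones(len(x1)),
    ])
    _, _, vt = np.linalg.svd(A)
    E = vt[-1].reshape(3, 3)             # null vector, reshaped row-major
    u, s, vt = np.linalg.svd(E)
    sigma = (s[0] + s[1]) / 2.0          # enforce the (s, s, 0) spectrum
    return u @ np.diag([sigma, sigma, 0.0]) @ vt
```

The rotation matrix and translation direction can then be recovered from the SVD of E by the standard decomposition (translation is only determined up to scale from two views).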
Step S103: performing guideboard recognition on the two images to obtain the pixel coordinates of the first guideboard feature point set in each of the two images.
Wherein the first set of guideboard feature points may include at least three feature points in a guideboard.
In an alternative embodiment, step S103 — performing guideboard recognition on the two images and obtaining the pixel coordinates, in the two images, of the first guideboard feature point set, which comprises at least three feature points of the guideboard — may be implemented by the following steps:
14) Acquiring the feature points within the recognized guideboard region of each of the two images.
Each of the two images can be recognized separately to detect the guideboard it contains. The image-recognition process may include: training on samples with a deep-learning algorithm to build a model, verifying the model's accuracy, using the verified model to recognize the guideboard in an image, and extracting feature points on the guideboard with a preset algorithm. In this embodiment, the guideboard in each of the two images can be detected with the YOLO v5 algorithm, ensuring reliable acquisition of the guideboard region. The BRISK operator can then be used to extract and describe feature points within the recognized guideboard region of each image, the described feature points serving as that image's feature points.
It is understood that other algorithms, such as DeepLab, may also be used to recognize the guideboard region, and other operators, such as ORB, SURF or SIFT, may be used to extract feature points; no limitation is imposed here.
15) Matching the feature points within the guideboard regions of the two images to obtain a first guideboard feature point set successfully matched across the two images.
The first guideboard feature point set is a set of feature points successfully matched in the guideboard area of each of the two images. Here, in step S103, the feature points acquired in the two images are feature points in the guideboard area, and the feature points to be matched are also feature points in the guideboard area. In step S102, the feature points acquired in the two images may be feature points within the guideboard area or feature points outside the guideboard area.
16) Acquiring the pixel coordinates of the first guideboard feature point set in each of the two images.
In the embodiment of the application, an extracted feature point can be regarded as one pixel and represented by its pixel coordinates. Pixel coordinates describe the position of a pixel on the digital image formed when an object is imaged. To determine them, a pixel coordinate system must first be defined: a rectangular coordinate system u-v with the top-left vertex of the image plane as the origin, in which the abscissa u and ordinate v of a pixel are its column number and row number in the image array, so the pixel coordinate of a point can be written as Puv(u, v). Because the guideboard images at different positions in different frames, the pixel coordinates of the same guideboard feature point differ between images, and the pixel coordinates of each feature point must therefore be acquired in both images.
Step S104, calculating the spatial coordinates of the first guideboard feature point set relative to the camera according to the rotation matrix and translation matrix between the two images and the pixel coordinates of the first guideboard feature point set in each of the two images.
The spatial coordinates of the first guideboard feature point set relative to the camera can be calculated by a triangulation algorithm from the pixel coordinates of the point set in the two images together with the rotation matrix and translation matrix between the two images.
In an alternative embodiment, the specific implementation of step S104, calculating the spatial coordinates of the first guideboard feature point set relative to the camera according to the rotation matrix and translation matrix between the two images and the pixel coordinates of the first guideboard feature point set in each image, may include the following steps:
17) calculating the moving distance of the camera using the geographic position information of the camera when the two images were taken;
18) optimizing the translation matrix between the two images according to the moving distance of the camera to obtain a new translation matrix;
19) obtaining the spatial coordinates of the first guideboard feature point set relative to the camera from the pixel coordinates of the first guideboard feature point set in the two images, the rotation matrix, and the new translation matrix.
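Steps 17) to 19) can be illustrated with a minimal NumPy sketch. It assumes the rotation matrix R and a unit-norm translation t were recovered from an essential-matrix decomposition (so t must be rescaled to metric units by the GPS-derived camera displacement) and uses linear (DLT) triangulation; the function names and conventions here are illustrative, not the patent's:

```python
import numpy as np

def rescale_translation(t_unit, gps_distance_m):
    """Steps 17)-18): the essential-matrix translation is only known
    up to scale; rescale it by the camera displacement (in metres)
    computed from the two GPS fixes."""
    return t_unit / np.linalg.norm(t_unit) * gps_distance_m

def triangulate_point(K, R, t, uv1, uv2):
    """Step 19): linear (DLT) triangulation of one feature seen in
    two views. View 1 is the reference: P1 = K [I | 0]; view 2:
    P2 = K [R | t]. uv1, uv2 are the pixel coordinates of the same
    feature. Returns the 3D point in the first camera's frame."""
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t.reshape(3, 1)])
    # Each view contributes two linear constraints on the homogeneous
    # point X; stack them and take the SVD null vector.
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

In practice this is applied to every matched pair in the first guideboard feature point set, one pixel pair at a time.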
Step S105, determining the guideboard space plane in which the first guideboard feature point set lies, using the spatial coordinates of the first guideboard feature point set relative to the camera and a horizontal reference plane; the guideboard space plane is perpendicular to the horizontal reference plane.
In practical applications, when an image is captured the camera is usually oriented parallel to the horizontal reference plane (i.e., the horizontal plane); that is, the camera's optical axis is kept parallel to the horizontal plane so that the captured image preserves, as far as possible, the positional relationship between objects and the horizontal plane. However, even with the camera so oriented, the calculated guideboard space plane may not be perpendicular to the horizontal plane because of pointing errors, calculation errors, and the like, so the guideboard space plane needs to be corrected to be perpendicular to the horizontal plane.
In an optional implementation manner, the specific implementation manner of determining the guideboard spatial plane in which the first guideboard feature point set is located by using the spatial coordinates of the first guideboard feature point set with respect to the camera and the horizontal reference plane in step S105 may include the following steps:
20) constructing a vertical plane error equation by utilizing a least square optimization algorithm according to the space coordinate of the first guideboard feature point set relative to the camera and the horizontal reference plane;
21) and obtaining the space plane of the guideboard where the first guideboard feature point set is located according to the error equation of the vertical plane.
Specifically, a plane error equation is constructed using a least-squares optimization algorithm:

E = Σ (A·xi + B·yi + C·zi + D)²

where (xi, yi, zi) are the spatial coordinates of the i-th feature point in the first guideboard feature point set relative to the camera, and the sum runs over all points in the set.
Since a guideboard in the real world is perpendicular to the horizontal reference plane, the general plane equation Ax + By + Cz + D = 0 can be constrained with B = 0 to ensure that the guideboard space plane is perpendicular to the horizontal reference plane. The guideboard space plane in which the first guideboard feature point set lies therefore uses the vertical plane equation Ax + Cz + D = 0. In this way the guideboard space plane is corrected, the influence of calculation errors in certain first guideboard feature points is eliminated, and the calculation accuracy of the guideboard space plane is guaranteed.
That is, the plane error equation is modified into a vertical plane error equation, specifically:

E = Σ (A·xi + C·zi + D)²
The spatial coordinates of the first guideboard feature point set relative to the camera are substituted into the error equation in turn to obtain the minimum error and the corresponding values of A, C, and D, thereby determining the guideboard space plane in which the first guideboard feature point set lies. That is, the plane can be expressed by the vertical plane equation Ax + Cz + D = 0.
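A possible sketch of this fit, assuming the camera-frame y axis is perpendicular to the horizontal reference plane (so forcing B = 0 makes the plane vertical): the homogeneous least-squares problem min Σ(A·xi + C·zi + D)² subject to ||(A, C, D)|| = 1 has a closed-form solution via the singular value decomposition, which is used here in place of an iterative optimizer:

```python
import numpy as np

def fit_vertical_plane(points):
    """Fit the vertical plane A*x + C*z + D = 0 to 3D points.

    points: (N, 3) array of camera-frame coordinates (x, y, z), with
    y assumed to be the vertical axis, so fixing B = 0 keeps the plane
    perpendicular to the horizontal reference plane. Minimises
    sum((A*x + C*z + D)**2) subject to ||(A, C, D)|| = 1.
    """
    pts = np.asarray(points, dtype=float)
    # Design matrix [x, z, 1]; the y column is dropped because B = 0.
    M = np.column_stack([pts[:, 0], pts[:, 2], np.ones(len(pts))])
    # The minimiser is the right singular vector belonging to the
    # smallest singular value of M.
    _, _, Vt = np.linalg.svd(M)
    A, C, D = Vt[-1]
    return A, C, D
```

Because every point in the first guideboard feature point set contributes one row of the design matrix, an error in any single point is averaged out by the others, which is exactly the robustness property the text describes.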
It can be understood that, because the guideboard space plane is determined jointly from all the first guideboard feature points in the set, the influence of calculation errors in individual first guideboard feature points on the correctness of the plane calculation is avoided to a great extent.
Step S106, calculating the spatial coordinates of the second guideboard feature point set relative to the camera according to the guideboard space plane and the pixel coordinates of the second guideboard feature point set in one of the images.
The second set of guideboard feature points may include at least two feature points at preset positions on the guideboard.
In an alternative embodiment, the implementation of step S106, in which the spatial coordinates of the second guideboard feature point set (comprising at least two feature points at preset positions on the guideboard) relative to the camera are calculated from the guideboard space plane and the set's pixel coordinates in one of the images, may include the following steps:
22) constructing a feature point space coordinate solving equation set by using the guideboard space plane and a preset calculation formula;
23) and sequentially substituting the pixel coordinates of the second guideboard feature point set in one of the images into an equation set to obtain the space coordinates of the second guideboard feature point set relative to the camera.
Specifically, using the vertical plane equation of the guideboard space plane, Ax + Cz + D = 0, and the preset calculation formula of the pinhole projection model:

Zc·[u, v, 1]^T = K·P

a feature point spatial-coordinate solving equation set is constructed:

Zc·[u, v, 1]^T = K·P,  A·x + C·z + D = 0,  with P = (x, y, z)^T
where Zc is the depth of the feature point in the camera coordinate system, u and v are respectively the abscissa and ordinate of the feature point's pixel coordinates, K is the camera's internal parameter matrix, and P is the spatial coordinate of the feature point.
Substituting the pixel coordinates of the second guideboard feature point set in one of the images into the equation set in turn yields the spatial coordinates of the second guideboard feature point set relative to the camera. For example, if the second guideboard feature point P(xp, yp, zp) has pixel coordinates P(uA, vA) in image A and P(uB, vB) in image B, then substituting P(uA, vA) or P(uB, vB) into the above equation set gives the specific values of the spatial coordinates of P(xp, yp, zp). Preferably, the pixel coordinates of P in the image acquired closest to the current time are used.
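Solving the equation set amounts to intersecting the pixel's viewing ray with the guideboard space plane. A minimal NumPy sketch under the same assumptions as above (pinhole intrinsic matrix K; plane A·x + C·z + D = 0 expressed in the camera frame):

```python
import numpy as np

def backproject_to_plane(K, uv, plane_ACD):
    """Intersect the viewing ray of pixel (u, v) with the vertical
    plane A*x + C*z + D = 0 given in the camera frame.

    From Zc*[u, v, 1]^T = K @ P we get P = Zc * d with
    d = K^-1 @ [u, v, 1]^T.  Substituting into the plane equation:
    A*(Zc*dx) + C*(Zc*dz) + D = 0  =>  Zc = -D / (A*dx + C*dz).
    """
    A, C, D = plane_ACD
    d = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    Zc = -D / (A * d[0] + C * d[2])
    return Zc * d  # spatial coordinates P = (x, y, z) in camera frame
```

Each second guideboard feature point's pixel coordinates are passed through this function to obtain its camera-frame spatial coordinates.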
In the embodiment of the present application, the preset positions include one or a combination of several of: corner points, center points, line-segment intersection points, points on edge lines, and vertices of the characters on the guideboard. That is, each second guideboard feature point is selected from among these positions. This makes the second feature points more representative and easier to identify and acquire, helps ensure the accuracy of the computed spatial coordinates of the second guideboard feature point set relative to the camera, and facilitates obtaining high-accuracy guideboard geographic coordinates. For example, for a triangular guideboard the second guideboard feature point set may include its three corner points; for a square guideboard, its four corner points; and for a circular guideboard, the intersection points of two diameters (e.g., the vertical and horizontal diameters) with the circumference, the circle center, and the like.
Step S107, generating the geographic coordinates of the guideboard using the spatial coordinates of the second guideboard feature point set relative to the camera and the geographic position information of the camera when the two images were taken.
In the embodiment of the application, once the geographic coordinates of the guideboard are determined and the geographic position of the vehicle is known, the distance between the vehicle and the guideboard can be obtained, providing data support for vehicle navigation and accurate driving guidance.
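The text does not spell out the coordinate conversion in step S107. One common way to realize it, shown here purely as an assumed sketch (the camera-frame convention, the heading source, and the local tangent-plane approximation are all assumptions, and a production system would use a proper geodetic library), is to rotate the camera-frame offset into east/north components using the camera's heading and add it to the camera's GPS fix:

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius

def offset_to_geographic(cam_lat, cam_lon, east_m, north_m):
    """Shift a (lat, lon) fix by an east/north offset in metres using
    a small-offset local tangent-plane approximation."""
    dlat = math.degrees(north_m / EARTH_RADIUS_M)
    dlon = math.degrees(east_m / (EARTH_RADIUS_M *
                                  math.cos(math.radians(cam_lat))))
    return cam_lat + dlat, cam_lon + dlon

def camera_point_to_geographic(cam_lat, cam_lon, heading_deg, P):
    """P = (x, y, z) in a camera frame assumed to have x right, y down,
    z forward; heading_deg is the camera's bearing clockwise from true
    north. Both conventions are hypothetical, not from the patent."""
    x, _, z = P
    h = math.radians(heading_deg)
    # Rotate the horizontal components of P into east/north.
    east = z * math.sin(h) + x * math.cos(h)
    north = z * math.cos(h) - x * math.sin(h)
    return offset_to_geographic(cam_lat, cam_lon, east, north)
```

With heading 0 (camera facing north), a point 100 m ahead of the camera maps to a fix 100 m due north of the camera's GPS position.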
As can be seen from this embodiment, the method provided in this embodiment of the application determines the guideboard space plane in which the first guideboard feature point set lies from the spatial coordinates of that set relative to the camera. Because the plane is determined jointly from all the first guideboard feature points, the influence of calculation errors in individual points on the correctness of the plane is largely avoided. Setting the guideboard space plane perpendicular to the horizontal reference plane reflects the fact that a real-world guideboard is perpendicular to the horizontal plane; this corrects the plane, eliminates the influence of calculation errors in certain first guideboard feature points, and guarantees the calculation accuracy of the plane. The spatial coordinates of the second guideboard feature point set relative to the camera are then calculated from the guideboard space plane and the set's pixel coordinates in one of the images, the set comprising at least two feature points at preset positions on the guideboard. Because the second guideboard feature points lie at preset positions, they are more representative, which helps ensure the accuracy of their identification and acquisition.
Calculating with an accurate and reliable guideboard space plane and a representative, high-accuracy second guideboard feature point set guarantees the accuracy, reliability, and stability of the second set's spatial coordinates relative to the camera, thereby yielding high-accuracy geographic coordinates of the guideboard and enabling the production of high-precision guideboards.
Corresponding to the embodiment of the application function implementation method, the application also provides a device for generating the traffic sign for automatic driving, an electronic device and a corresponding embodiment.
Fig. 2 is a schematic structural diagram of a traffic sign generation device for automatic driving according to an embodiment of the present application.
Referring to fig. 2, an embodiment of the present application provides a device for generating a traffic sign for automatic driving, including:
an obtaining unit 201, configured to obtain two images including the same guideboard, and obtain geographic position information of the camera when the two images are taken respectively;
a first calculation unit 202, configured to calculate a rotation matrix and a translation matrix between two images;
the identification unit 203 is used for performing guideboard identification on the two images to acquire pixel coordinates of a first guideboard feature point set in the two images, wherein the first guideboard feature point set comprises at least three feature points in a guideboard;
the second calculating unit 204 is configured to calculate a spatial coordinate of the first road sign feature point set relative to the camera according to the rotation matrix and the translation matrix between the two images and the pixel coordinates of the first road sign feature point set in the two images;
a determining unit 205, configured to determine, by using the spatial coordinates of the first guideboard feature point set relative to the camera and the horizontal reference plane, a guideboard spatial plane where the first guideboard feature point set is located; the guideboard space plane is vertical to the horizontal reference plane;
a third calculating unit 206, configured to calculate, according to the guideboard spatial plane and a pixel coordinate of the second guideboard feature point set in one of the images, a spatial coordinate of the second guideboard feature point set relative to the camera, where the second guideboard feature point set includes at least two feature points in a preset position on the guideboard;
and a generating unit 207, configured to generate the geographic coordinates of the guideboard by using the spatial coordinates of the second set of guideboard feature points with respect to the camera and the geographic position information of the camera when the two images are captured.
Alternatively, the manner of calculating the rotation matrix and the translation matrix between the two images by the first calculation unit 202 may include:
acquiring a characteristic point of each of the two images; matching the characteristic points of the two images to obtain a target characteristic point set successfully matched in the two images; and calculating a rotation matrix and a translation matrix between the two images by using the target characteristic point set.
Optionally, the manner of acquiring the pixel coordinates of the first guideboard feature point set in the two images by the identifying unit 203 may include:
acquiring feature points in the guideboard area identified in each of the two images; matching the feature points in the guideboard areas in the two images to obtain a first guideboard feature point set successfully matched in the two images; and acquiring pixel coordinates of the first guideboard feature point set in the two images respectively.
Optionally, the determining unit 205 may determine, by using the spatial coordinates of the first guideboard feature point set with respect to the camera and the horizontal reference plane, a guideboard spatial plane where the first guideboard feature point set is located, where the method includes:
constructing a vertical plane error equation by utilizing a least square optimization algorithm according to the space coordinate of the first guideboard feature point set relative to the camera and the horizontal reference plane; and obtaining the space plane of the guideboard where the first guideboard feature point set is located according to the error equation of the vertical plane.
Optionally, the manner of calculating, by the third calculating unit 206, the spatial coordinates of the second set of guideboard feature points relative to the camera according to the guideboard spatial plane and the pixel coordinates of the second set of guideboard feature points in one of the images may include:
constructing a feature point space coordinate solving equation set by using the guideboard space plane and a preset calculation formula; and sequentially substituting the pixel coordinates of the second guideboard feature point set in one of the images into an equation set to obtain the space coordinates of the second guideboard feature point set relative to the camera.
Optionally, the preset positions on the guideboard may include: one or a combination of several of corner points, center points, line segment intersection points, points on edge lines and vertexes of fonts of the guideboard.
By implementing the apparatus shown in fig. 2, guideboard geographic coordinates of high accuracy can be obtained.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 3 is a schematic structural diagram of an electronic device shown in an embodiment of the present application.
Referring to fig. 3, an electronic device 300 is further provided in the present embodiment. The electronic device 300 may be used to execute the method for generating a traffic sign for automatic driving provided by the above-described embodiment. The electronic device 300 may be any device having a computing unit, such as a computer, a server, a handheld device (e.g., a smart phone, a tablet computer, etc.), or a vehicle event recorder, and the embodiments of the present application are not limited thereto.
Referring to fig. 3, the electronic device 300 includes a memory 310 and a processor 320.
The Processor 320 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 310 may include various types of storage units, such as system memory, Read Only Memory (ROM), and permanent storage. The ROM may store static data or instructions for the processor 320 or other modules of the computer. The permanent storage device may be a read-write storage device, and may be a non-volatile storage device that does not lose stored instructions and data even after the computer is powered off. In some embodiments, the permanent storage employs a mass storage device (e.g., magnetic or optical disk, flash memory). In other embodiments, the permanent storage may be a removable storage device (e.g., floppy disk, optical drive). The system memory may be a read-write memory device or a volatile read-write memory device, such as dynamic random access memory, and may store instructions and data that some or all of the processors require at runtime. Further, the memory 310 may comprise any combination of computer-readable storage media, including various types of semiconductor memory chips (DRAM, SRAM, SDRAM, flash memory, programmable read-only memory) and magnetic and/or optical disks. In some embodiments, the memory 310 may include a readable and/or writable removable storage device, such as a Compact Disc (CD), a read-only digital versatile disc (e.g., DVD-ROM, dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-density optical disc, a flash memory card (e.g., SD card, mini SD card, Micro-SD card, etc.), a magnetic floppy disc, or the like. Computer-readable storage media do not contain carrier waves or transitory electronic signals transmitted by wireless or wired means.
The memory 310 has stored thereon executable code, which when processed by the processor 320, causes the processor 320 to perform some or all of the steps of the methods described above.
Furthermore, the method according to the present application may also be implemented as a computer program or computer program product comprising computer program code instructions for performing some or all of the steps of the above-described method of the present application.
Alternatively, the present application may also be embodied as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) having stored thereon executable code (or a computer program, or computer instruction code) which, when executed by a processor of an electronic device (or electronic device, server, etc.), causes the processor to perform some or all of the various steps of the above-described methods in accordance with the present application.
Having described embodiments of the present application, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A method of generating a traffic sign for autonomous driving, comprising:
acquiring two images containing the same guideboard and acquiring geographic position information of a camera when the two images are respectively shot;
calculating a rotation matrix and a translation matrix between the two images;
performing guideboard recognition on the two images to obtain pixel coordinates of a first guideboard feature point set in the two images, wherein the first guideboard feature point set comprises at least three feature points in the guideboard;
calculating the spatial coordinates of the first guideboard feature point set relative to the camera according to the rotation matrix and the translation matrix between the two images and the pixel coordinates of the first guideboard feature point set in the two images;
determining a guideboard space plane where the first guideboard feature point set is located by using the space coordinate of the first guideboard feature point set relative to the camera and the horizontal reference plane; the guideboard space plane is vertical to the horizontal reference plane;
calculating the space coordinate of a second guideboard feature point set relative to the camera according to the guideboard space plane and the pixel coordinate of the second guideboard feature point set in one of the images, wherein the second guideboard feature point set comprises at least two feature points at preset positions on the guideboard;
and generating the geographical coordinates of the guideboard by utilizing the space coordinates of the second guideboard feature point set relative to the camera and the geographical position information of the camera when the two images are shot.
2. The method of claim 1, wherein the calculating a rotation matrix and a translation matrix between the two images comprises:
acquiring a characteristic point of each image in the two images;
matching the characteristic points of the two images to obtain a target characteristic point set successfully matched in the two images;
and calculating a rotation matrix and a translation matrix between the two images by using the target characteristic point set.
3. The method according to claim 1, wherein the obtaining pixel coordinates of the first guideboard feature point set in the two images respectively comprises:
acquiring feature points in the identified guideboard area in each of the two images;
matching the feature points in the guideboard areas in the two images to obtain a first guideboard feature point set successfully matched in the two images;
and acquiring pixel coordinates of the first guideboard feature point set in the two images respectively.
4. The method according to any one of claims 1 to 3, wherein the determining the guideboard spatial plane in which the first set of guideboard feature points is located using the spatial coordinates of the first set of guideboard feature points relative to the camera and a horizontal reference plane comprises:
constructing a vertical plane error equation by using a least square optimization algorithm according to the space coordinate and the horizontal reference plane of the first guideboard feature point set relative to the camera;
and obtaining a guideboard space plane where the first guideboard feature point set is located according to the vertical plane error equation.
5. The method of any one of claims 1 to 3, wherein calculating spatial coordinates of a second set of guideboard feature points relative to the camera based on the guideboard spatial plane and pixel coordinates of the second set of guideboard feature points in one of the images comprises:
constructing a feature point space coordinate solving equation set by using the guideboard space plane and a preset calculation formula;
and sequentially substituting the pixel coordinates of the second guideboard feature point set in one of the images into the equation set to obtain the space coordinates of the second guideboard feature point set relative to the camera.
6. The method according to any one of claims 1 to 3, wherein the preset positions comprise:
and one or a combination of more of corner points, center points, line segment intersection points, points on edge lines and vertexes of fonts of the guideboard.
7. An apparatus for generating a traffic sign for autonomous driving, comprising:
the system comprises an acquisition unit, a display unit and a display unit, wherein the acquisition unit is used for acquiring two images containing the same guideboard and acquiring the geographical position information of a camera when the two images are respectively shot;
the first calculation unit is used for calculating a rotation matrix and a translation matrix between the two images;
the identification unit is used for carrying out guideboard identification on the two images and acquiring pixel coordinates of a first guideboard feature point set in the two images, wherein the first guideboard feature point set comprises at least three feature points in the guideboard;
the second calculation unit is used for calculating the space coordinates of the first guideboard feature point set relative to the camera according to the rotation matrix and the translation matrix between the two images and the pixel coordinates of the first guideboard feature point set in the two images;
the determining unit is used for determining a guideboard space plane where the first guideboard feature point set is located by using the space coordinate of the first guideboard feature point set relative to the camera and a horizontal reference plane; the guideboard space plane is vertical to the horizontal reference plane;
a third calculating unit, configured to calculate, according to the guideboard spatial plane and a pixel coordinate of a second guideboard feature point set in one of the images, a spatial coordinate of the second guideboard feature point set with respect to the camera, where the second guideboard feature point set includes at least two feature points in a preset position on the guideboard;
and the generating unit is used for generating the geographical coordinates of the guideboard by utilizing the space coordinates of the second guideboard feature point set relative to the camera and the geographical position information of the camera when the two images are shot.
8. The apparatus of claim 7, wherein the preset position comprises:
and one or a combination of more of corner points, center points, line segment intersection points, points on edge lines and vertexes of fonts of the guideboard.
9. An electronic device, comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method of any one of claims 1-6.
10. A non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to perform the method of any one of claims 1-6.
CN202110541380.XA 2021-05-18 2021-05-18 Method and related device for generating traffic sign for automatic driving Active CN113139031B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110541380.XA CN113139031B (en) 2021-05-18 2021-05-18 Method and related device for generating traffic sign for automatic driving

Publications (2)

Publication Number Publication Date
CN113139031A true CN113139031A (en) 2021-07-20
CN113139031B CN113139031B (en) 2023-11-03

Family

ID=76817561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110541380.XA Active CN113139031B (en) 2021-05-18 2021-05-18 Method and related device for generating traffic sign for automatic driving

Country Status (1)

Country Link
CN (1) CN113139031B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113408509A (en) * 2021-08-20 2021-09-17 智道网联科技(北京)有限公司 Signboard recognition method and device for automatic driving
CN114119963A (en) * 2021-11-19 2022-03-01 智道网联科技(北京)有限公司 Method and device for generating high-precision map guideboard
CN114419594A (en) * 2022-01-17 2022-04-29 智道网联科技(北京)有限公司 Method and device for identifying intelligent traffic guideboard

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018196391A1 (en) * 2017-04-28 2018-11-01 华为技术有限公司 Method and device for calibrating external parameters of vehicle-mounted camera
WO2021026705A1 (en) * 2019-08-09 2021-02-18 华为技术有限公司 Matching relationship determination method, re-projection error calculation method and related apparatus
CN111932627A (en) * 2020-09-15 2020-11-13 蘑菇车联信息科技有限公司 Marker drawing method and system
CN111930877A (en) * 2020-09-18 2020-11-13 蘑菇车联信息科技有限公司 Map guideboard generation method and electronic equipment
CN112598743A (en) * 2021-02-08 2021-04-02 智道网联科技(北京)有限公司 Pose estimation method of monocular visual image and related device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHAN CHUNYAN; YANG WEI; GENG CUIBO: "Artificial landmark-aided pose estimation method for autonomous flight of underground UAVs", Journal of China Coal Society, no. 1 *


Also Published As

Publication number Publication date
CN113139031B (en) 2023-11-03

Similar Documents

Publication Publication Date Title
CN113139031B (en) Method and related device for generating traffic sign for automatic driving
JP4232167B1 (en) Object identification device, object identification method, and object identification program
CN111830953B (en) Vehicle self-positioning method, device and system
CN111261016B (en) Road map construction method and device and electronic equipment
WO2020043081A1 (en) Positioning technique
JP2010511212A (en) Method and apparatus for identifying and locating planar objects in an image
CN101842808A (en) Method of and apparatus for producing lane information
JP4978615B2 (en) Target identification device
JP2010510559A (en) Method and apparatus for detecting an object from ground mobile mapping data
CN110969592B (en) Image fusion method, automatic driving control method, device and equipment
JP2008065087A (en) Apparatus for creating stationary object map
CN111930877B (en) Map guideboard generation method and electronic equipment
CN111340877A (en) Vehicle positioning method and device
CN116097128A (en) Method and device for determining the position of a vehicle
CN112595335B (en) Intelligent traffic driving stop line generation method and related device
CN113838129B (en) Method, device and system for obtaining pose information
CN115205382A (en) Target positioning method and device
CN114863347A (en) Map checking method, device and equipment
JP2012099010A (en) Image processing apparatus and image processing program
CN111488771B (en) OCR hooking method, device and equipment
CN112991434B (en) Method for generating automatic driving traffic identification information and related device
CN114299469A (en) Traffic guideboard generation method, device and equipment
CN114863383A (en) Method for generating intelligent traffic circular guideboard and related device
CN112818866A (en) Vehicle positioning method and device and electronic equipment
CN112801077B (en) Method for SLAM initialization of autonomous vehicles and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant