CN112507887B - Intersection sign extracting and associating method and device - Google Patents
- Publication number
- CN112507887B (granted publication of application CN202011460610.1A)
- Authority
- CN
- China
- Prior art keywords
- intersection
- signboards
- dimensional
- target
- point cloud
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/254—Fusion techniques of classification results, e.g. of results related to same input data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/09—Recognition of logos
Abstract
The invention provides a method and a device for extracting and associating intersection signboards. The method comprises the following steps: acquiring road laser point cloud data and corresponding continuous frame picture data; performing intersection target detection on the continuous frame pictures through an anchor-free target detection network, marking the corresponding track point positions, and cutting the point cloud according to the track point positions to obtain intersection laser point cloud blocks with fixed sizes; performing three-dimensional target detection on features fused from the intersection pictures and the intersection laser point cloud, deriving a two-dimensional bounding box for each target in the image from the camera parameters and the three-dimensional target information, and extracting the intersection signboards from the image; recognizing the content information in the intersection signboards and classifying the signboards according to the direction information they carry; and acquiring the road information near the marked track points and associating the intersection signboards with the corresponding roads. Automatic extraction and association of intersection signboards is thus realized, reducing labor cost while preserving the accuracy of both signboard extraction and road association.
Description
Technical Field
The invention relates to the field of high-precision map production, and in particular to a method and a device for extracting and associating intersection signboards.
Background
In the process of producing a high-precision map, the direction and distance information of intersection signboards needs to be acquired to assist driving. In a conventional detection method, a camera collects signboard images and the signboard information is then obtained through target detection. However, the extracted signboard is two-dimensional image information: three-dimensional attributes such as the signboard's distance are difficult to obtain, and the signboard cannot be associated with its corresponding road. Manual intervention is therefore often required in practice; although it guarantees the accuracy of extraction and association, its labor cost is high.
Disclosure of Invention
In view of this, embodiments of the invention provide a method and a device for extracting and associating intersection signboards, so as to solve the problem that the existing extraction and association process requires manual participation and incurs high labor cost.
In a first aspect of the embodiments of the present invention, a method for extracting and associating intersection signboards is provided, including:
acquiring road laser point cloud data and corresponding continuous frame picture data;
performing intersection target detection on the continuous frame pictures through an anchor-free target detection network, marking the corresponding track point positions when a target is detected, and cutting the point cloud according to the track point positions to obtain intersection laser point cloud blocks with fixed sizes;
performing three-dimensional target detection based on the intersection picture and the fusion characteristics of intersection laser point clouds, acquiring a two-dimensional surrounding frame of a target in an image according to camera parameters and three-dimensional target information, and extracting an intersection signboard in the image;
identifying content information in the intersection signboards, and classifying the signboards according to direction information in the intersection signboards;
and acquiring road information near the marked track points, and associating the intersection signboards and the content information of the signboards with corresponding roads.
In a second aspect of the embodiments of the present invention, there is provided an apparatus for intersection signboard extraction and association, including:
the acquisition module is used for acquiring road laser point cloud data and corresponding continuous frame picture data;
the intersection detection module is used for performing intersection target detection on the continuous frame pictures through an anchor-free target detection network, marking the corresponding track point positions when targets are detected, and cutting the point cloud according to the track point positions to obtain intersection laser point cloud blocks with fixed sizes;
the target extraction module is used for detecting a three-dimensional target based on the intersection picture and the fusion characteristics of intersection laser point clouds, acquiring a two-dimensional surrounding frame of the target in the image according to the camera parameters and the three-dimensional target information, and extracting an intersection signboard in the image;
the identification module is used for identifying content information in the intersection signboards and classifying the signboards according to direction information in the intersection signboards;
and the association module is used for acquiring the road information near the marked track point and associating the intersection signboards and the content information of the signboards with the corresponding roads.
In a third aspect of the embodiments of the present invention, an electronic device is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the steps of the method according to the first aspect of the embodiments of the present invention are implemented.
In a fourth aspect of the embodiments of the present invention, a computer-readable storage medium is provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the method provided in the first aspect of the embodiments of the present invention.
In the embodiment of the invention, the target detection network performs target detection on the picture corresponding to each track point and filters out non-intersection laser point cloud areas; three-dimensional target detection is performed on features fused from the intersection laser point cloud and the picture; the signboard is extracted from the image according to the camera parameters and the three-dimensional information; the geographic and direction information in the intersection signboard picture is recognized; and the road information around the marked track point is acquired to associate the intersection signboard with the corresponding road. Automatic extraction and association of intersection signboards is thus realized, reducing unnecessary labor cost. Cutting the point cloud reduces the amount of computation; target detection on fused point cloud and picture features yields both two-dimensional and three-dimensional signboard information, so that extraction accuracy is preserved even when the image is unclear; and the intersection signboards are accurately associated with the roads.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings based on them without creative effort.
Fig. 1 is a schematic flow chart of a method for extracting and associating intersection signboards according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a road sign in an image according to an embodiment of the invention;
FIG. 3 is a schematic diagram of an intersection sign in a laser point cloud provided by an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an apparatus for intersection signboard extraction and association according to an embodiment of the present invention.
Detailed Description
In order to make the objects, features and advantages of the present invention clearer and easier to understand, the technical solutions in the embodiments of the present invention are described completely below with reference to the accompanying drawings. The embodiments described below are only some, not all, of the embodiments of the present invention. All other embodiments obtained by persons skilled in the art without creative work, based on the embodiments of the present invention, fall within the protection scope of the present invention. The principles and features of the present invention are described below with reference to the accompanying drawings.
The terms "comprises" and "comprising," when used in this specification and claims, and in the accompanying drawings and figures, are intended to cover non-exclusive inclusions, such that a process, method or system, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements. In addition, "first" and "second" are used to distinguish different objects, and are not used to describe a specific order.
Referring to fig. 1, fig. 1 is a schematic flow chart of a method for extracting and associating intersection signboards according to an embodiment of the present invention, including:
s101, acquiring road laser point cloud data and corresponding continuous frame picture data;
the road laser point cloud data is point cloud data acquired by a vehicle-mounted laser radar, and the continuous frame picture data is picture data acquired by a vehicle-mounted camera, as shown in fig. 2 and 3, respectively, an acquired intersection signboard picture and laser point cloud (road signboards in different directions exist in a and b). In the process of manufacturing a high-precision map, a collection vehicle is required to collect laser point cloud data and image data of a road.
It can be understood that the road laser point cloud data, the corresponding continuous frame picture data and the laser point cloud track point information are obtained directly from the high-precision map-making system.
S102, performing intersection target detection on the continuous frame pictures through an anchor-free target detection network, marking the corresponding track point positions when a target is detected, and cutting the point cloud according to the track point positions to obtain intersection laser point cloud blocks with fixed sizes;
specifically, intersection target detection in continuous frame pictures is performed by using a target detection network without anchor points (no anchors), features of different scales are extracted by using resnet as a skeleton network through a cavity convolution and SPP (Spatial Pyramid Pooling), features of multiple scales are fused to construct a thermodynamic diagram, a central point of a target frame is predicted through key point estimation in the thermodynamic diagram, and then the size of the target frame is regressed.
When the target detection network detects a target in a frame, that picture is marked, and the track point corresponding to the picture is determined and marked. A region extending a fixed distance in front of and behind the track point, along the road direction at that point, is then cut out to obtain the fixed-size intersection laser point cloud block.
Non-intersection areas of the laser point cloud (for example, areas containing only channelization/guide markings) are automatically filtered out through intersection target detection, and cutting the laser point cloud into small blocks around the track points effectively reduces the amount of computation.
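The fixed-size cut around a marked track point can be sketched as below. The block dimensions and the road-aligned rectangular crop are assumptions for illustration; the patent only states that the blocks have a fixed size:

```python
import numpy as np

def crop_cloud(points, track_pt, heading, length=50.0, width=30.0):
    """Cut a fixed-size point-cloud block around one trajectory point.

    `points` is an (N, 3) array; `track_pt` is the (x, y) track-point
    position and `heading` the road direction in radians.  The block
    extends `length` metres along the road and `width` metres across it
    (both sizes are illustrative defaults).
    """
    d = points[:, :2] - np.asarray(track_pt)
    # rotate offsets into the road-aligned frame
    c, s = np.cos(-heading), np.sin(-heading)
    along = d[:, 0] * c - d[:, 1] * s
    across = d[:, 0] * s + d[:, 1] * c
    mask = (np.abs(along) <= length / 2) & (np.abs(across) <= width / 2)
    return points[mask]
```

Only points inside the road-aligned rectangle are kept, so every downstream 3-D detection runs on a small block instead of the whole drive's cloud.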
S103, detecting a three-dimensional target based on the fusion characteristics of the intersection picture and the intersection laser point cloud, acquiring a two-dimensional surrounding frame of the target in the image according to the camera parameters and the three-dimensional target information, and extracting an intersection signboard in the image;
the method comprises the steps of utilizing fusion characteristics of pictures and laser point clouds to detect a three-dimensional target, specifically, obtaining a laser point cloud block obtained through cutting, projecting the three-dimensional laser point cloud on a color image corresponding to a track point, fusing according to the proportion relation between an image characteristic diagram and a characteristic diagram projected by the laser point cloud, generating a three-dimensional proposal based on the fused characteristic diagram, converting the three-dimensional proposal into standard coordinates, and predicting a three-dimensional bounding box.
The two-dimensional bounding box of the signboard is then determined from its three-dimensional bounding box and the camera parameters (such as the focal length), yielding the intersection signboard to be extracted.
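Deriving the two-dimensional box from the three-dimensional box and the camera parameters can be sketched with a pinhole projection. The intrinsic parameters (`fx`, `fy`, `cx`, `cy`) and the assumption that the box corners are already expressed in camera coordinates are illustrative:

```python
import numpy as np

def box3d_to_2d(corners_cam, fx, fy, cx, cy):
    """Project the 8 corners of a 3-D box (camera frame, metres) through
    a pinhole model and take the tight 2-D box enclosing them."""
    X, Y, Z = corners_cam[:, 0], corners_cam[:, 1], corners_cam[:, 2]
    u = fx * X / Z + cx  # pixel column
    v = fy * Y / Z + cy  # pixel row
    return u.min(), v.min(), u.max(), v.max()

# a 2 m cube of sign-like extent, 10-12 m in front of the camera
corners = np.array([[x, y, z]
                    for x in (-1.0, 1.0)
                    for y in (-1.0, 1.0)
                    for z in (10.0, 12.0)])
print(box3d_to_2d(corners, 1000.0, 1000.0, 500.0, 500.0))
```

The resulting (u_min, v_min, u_max, v_max) rectangle is what would be cropped from the image as the extracted signboard region.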
S104, identifying content information in the intersection signboards, and classifying the signboards according to direction information in the intersection signboards;
The content information of an intersection signboard comprises at least text information and direction information; the text information (generally place/destination information) is obtained through character recognition.
Specifically, the place-information characters in the intersection signboards are rectified into normally, linearly arranged characters by a spatial transformer network and then recognized by an end-to-end recognition network, which improves the accuracy of character recognition.
The direction information of each signboard is extracted, and the signboards are classified according to the direction information in the intersection signboards.
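A minimal sketch of the classification-by-direction step is given below; the direction keywords are hypothetical, since the patent does not enumerate the direction categories:

```python
# hypothetical direction vocabulary; the patent does not enumerate categories
DIRECTION_KEYWORDS = {
    "left": ["left", "左"],
    "right": ["right", "右"],
    "straight": ["straight", "ahead", "直行"],
}

def classify_sign(text):
    """Bucket a recognised signboard text by the direction words it contains."""
    found = [direction
             for direction, keywords in DIRECTION_KEYWORDS.items()
             if any(k in text for k in keywords)]
    return found or ["unknown"]

print(classify_sign("Airport straight ahead"))  # ['straight']
```

A signboard mentioning several directions would receive several labels; texts with no recognised direction word fall into an "unknown" bucket for manual or downstream handling.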
And S105, acquiring road information near the marked track point, and associating the intersection signboards and the content information of the signboards with corresponding roads.
The marked track points are those at which intersection targets were detected. Road information near the marked track points is acquired, and the intersection signboards extracted from the laser point cloud, together with their content information (that is, road direction, geographic information and the like), are associated with the corresponding roads.
Because the same intersection signboard may appear in consecutive frames, signboards repeatedly extracted from the laser point cloud are removed according to signboard shape and size. The extracted geographic information of the intersection signboards and the associated road attributes are fed back to the high-precision map-making system, completing the automatic extraction of road-associated intersection signboard attributes.
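The de-duplication by signboard shape and size, and the association with the nearest road, can be sketched as below. The distance and size tolerances, and the representation of roads as named reference points near the track point, are illustrative assumptions:

```python
import math

def dedupe_signs(signs, size_tol=0.2, dist_tol=1.0):
    """Drop signboards extracted twice from overlapping frames.

    Two detections count as duplicates when their centres nearly coincide
    (within `dist_tol` metres) and their width/height agree within a
    relative `size_tol` -- both thresholds are illustrative.
    """
    kept = []
    for s in signs:
        is_dup = any(
            math.dist(s["center"], k["center"]) < dist_tol
            and abs(s["w"] - k["w"]) / max(k["w"], 1e-6) < size_tol
            and abs(s["h"] - k["h"]) / max(k["h"], 1e-6) < size_tol
            for k in kept)
        if not is_dup:
            kept.append(s)
    return kept

def associate(sign, roads):
    """Attach a signboard to the nearest road; `roads` maps a road name to
    an (x, y) reference point near the marked track point."""
    return min(roads, key=lambda name: math.dist(sign["center"], roads[name]))
```

The deduplicated signboards and their recognised content would then be written back to the map-making system under the associated road's attributes.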
In this embodiment, intersection target detection gives a rough extraction of the intersection region, which makes intersection point cloud cutting convenient. Combining the intersection image with the intensity information of the laser point cloud, and detecting on fused multi-scale point cloud and image features, extracts the intersection signboards accurately; because the image features and point cloud features complement each other, detection remains effective under occlusion and point cloud holes. For signboard content recognition, combining a character recognition network with a spatial transformer network ensures accurate extraction of the text in signboards across scenes, and the intersection signboards are automatically and accurately associated with the different roads.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Fig. 4 is a schematic structural diagram of an apparatus for extracting and associating intersection signboards according to an embodiment of the present invention, where the apparatus includes:
an obtaining module 410, configured to obtain road laser point cloud data and corresponding continuous frame picture data;
the intersection detection module 420 is used for performing intersection target detection on the continuous frame pictures through an anchor-free target detection network, marking the corresponding track point position when a target is detected, and cutting the point cloud according to the track point position to obtain an intersection laser point cloud block with a fixed size;
specifically, the intersection target detection of the continuous frame pictures by the anchor-free frame target detection network includes:
taking ResNet as a backbone network, and extracting target features of different scales through dilated convolution and spatial pyramid pooling;
and constructing a heatmap by fusing the multi-scale features, predicting the center point of the target frame based on key points in the heatmap, and regressing the size of the target frame.
The target extraction module 430 is used for detecting a three-dimensional target based on the fusion characteristics of the intersection picture and the intersection laser point cloud, acquiring a two-dimensional surrounding frame of the target in the image according to the camera parameters and the three-dimensional target information, and extracting an intersection signboard in the image;
specifically, three-dimensional laser point clouds are projected onto corresponding two-dimensional images based on track point positions, and feature fusion is carried out according to the proportional relation between an image feature map and a laser point cloud projection feature map;
and generating a three-dimensional proposal based on the feature map obtained by fusion, converting the three-dimensional proposal into canonical coordinates, and predicting the three-dimensional bounding box.
The identification module 440 is used for identifying content information in the intersection signboards and classifying the signboards according to direction information in the intersection signboards;
specifically, the identifying of the content information in the intersection signboard includes:
and rectifying the place-information characters in the intersection signboards into normally, linearly arranged characters through a spatial transformer network, and recognizing the place-information characters with an end-to-end recognition network.
The association module 450 is configured to obtain road information near the marked track point, and associate the intersection signboard and the content information of the signboard with the corresponding road.
Preferably, the associating module 450 further includes:
and the duplication removing module is used for removing the intersection signboards repeatedly extracted from the laser point cloud according to the shapes and the sizes of the intersection signboards and feeding back the extracted intersection signboards and the associated roads to the high-precision map making system.
It is understood that, in one embodiment, the electronic device includes a memory, a processor, and a computer program stored in the memory and executable on the processor; the computer program performs steps S101 to S105 of the first embodiment, and by executing it the processor realizes automatic extraction and association of the intersection signboards.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing associated hardware. The program, which when executed performs steps S101 to S105, may be stored in a computer-readable storage medium such as ROM/RAM, a magnetic disk, or an optical disk.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
The above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (8)
1. A method for extracting and associating intersection signboards is characterized by comprising the following steps:
acquiring road laser point cloud data and corresponding continuous frame picture data;
performing intersection target detection on the continuous frame pictures through an anchor-free target detection network, marking the corresponding track point positions when targets are detected, and cutting the point cloud according to the track point positions to obtain intersection laser point cloud blocks with fixed sizes;
extracting target features of different scales through dilated convolution and spatial pyramid pooling, using ResNet as a backbone network;
constructing a heatmap by fusing the multi-scale features, predicting a center point of a target frame based on key points in the heatmap, and regressing the size of the target frame;
performing three-dimensional target detection based on the intersection picture and the fusion characteristics of intersection laser point clouds, acquiring a two-dimensional surrounding frame of a target in an image according to camera parameters and three-dimensional target information, and extracting an intersection signboard in the image;
identifying content information in the intersection signboards, and classifying the signboards according to direction information in the intersection signboards;
and acquiring road information near the marked track points, and associating the intersection signboards and the content information of the signboards with corresponding roads.
2. The method of claim 1, wherein the three-dimensional target detection based on the intersection picture and the intersection laser point cloud fusion features comprises:
projecting the three-dimensional laser point cloud to a corresponding two-dimensional image based on the track point position, and performing feature fusion according to the proportional relation between the image feature map and the laser point cloud projection feature map;
and generating a three-dimensional proposal based on the feature map obtained by fusion, converting the three-dimensional proposal into canonical coordinates, and predicting the three-dimensional bounding box.
3. The method of claim 1, wherein the identifying the content information in the intersection sign comprises:
and rectifying the place-information characters in the intersection signboards into normally, linearly arranged characters through a spatial transformer network, and recognizing the place-information characters with an end-to-end recognition network.
4. The method of claim 1, wherein the obtaining of the road information near the marked track point and the associating of the intersection signboard and the signboard content information with the corresponding road further comprises:
and removing the intersection signboards repeatedly extracted from the laser point cloud according to the shapes and the sizes of the intersection signboards, and feeding back the extracted intersection signboards and the associated roads to the high-precision map making system.
5. An apparatus for intersection sign extraction and association, comprising:
the acquisition module is used for acquiring road laser point cloud data and corresponding continuous frame picture data;
the intersection detection module is used for performing intersection target detection on the continuous frame pictures through an anchor-free target detection network, marking the corresponding track point position when a target is detected, and cutting the point cloud according to the track point position to obtain an intersection laser point cloud block with a fixed size;
the module takes ResNet as a backbone network and extracts target features of different scales through dilated convolution and spatial pyramid pooling; constructs a heatmap by fusing the multi-scale features; predicts the center point of a target frame based on key points in the heatmap; and regresses the size of the target frame;
the target extraction module is used for detecting a three-dimensional target based on the fusion characteristics of the intersection picture and the intersection laser point cloud, acquiring a two-dimensional surrounding frame of the target in the image according to the camera parameters and the three-dimensional target information, and extracting an intersection signboard in the image;
the identification module is used for identifying content information in the intersection signboards and classifying the signboards according to direction information in the intersection signboards;
and the association module is used for acquiring the road information near the marked track point and associating the intersection signboard and the content information of the signboard with the corresponding road.
6. The apparatus of claim 5, wherein the three-dimensional target detection based on the intersection picture and the intersection laser point cloud fusion feature comprises:
projecting the three-dimensional laser point cloud onto a corresponding two-dimensional image based on the track point position, and performing feature fusion according to the proportional relation between the image feature map and the laser point cloud projection feature map; and generating a three-dimensional proposal based on the feature map obtained by fusion, converting the three-dimensional proposal into canonical coordinates, and predicting the three-dimensional bounding box.
7. An electronic device comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method of intersection sign extraction and association of any of claims 1-4.
8. A computer-readable storage medium, in which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the method of intersection sign extraction and association according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011460610.1A CN112507887B (en) | 2020-12-12 | 2020-12-12 | Intersection sign extracting and associating method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112507887A CN112507887A (en) | 2021-03-16 |
CN112507887B true CN112507887B (en) | 2022-12-13 |
Family
ID=74972408
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011460610.1A Active CN112507887B (en) | 2020-12-12 | 2020-12-12 | Intersection sign extracting and associating method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112507887B (en) |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107516077B (en) * | 2017-08-17 | 2020-09-08 | 武汉大学 | Traffic sign information extraction method based on fusion of laser point cloud and image data |
CN108846333B (en) * | 2018-05-30 | 2022-02-18 | 厦门大学 | Method for generating landmark data set of signpost and positioning vehicle |
CN109165549B (en) * | 2018-07-09 | 2021-03-19 | 厦门大学 | Road identification obtaining method based on three-dimensional point cloud data, terminal equipment and device |
FR3086396B1 (en) * | 2018-09-25 | 2021-01-15 | Continental Automotive France | RADAR OR LIDAR IDENTIFICATION SYSTEM FOR SIGNALING SIGNS |
CN111210488B (en) * | 2019-12-31 | 2023-02-03 | 武汉中海庭数据技术有限公司 | High-precision extraction system and method for road upright rod in laser point cloud |
CN111695486B (en) * | 2020-06-08 | 2022-07-01 | 武汉中海庭数据技术有限公司 | High-precision direction signboard target extraction method based on point cloud |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||