CN113640772A - Method and system for realizing target perception in vehicle-road cooperation - Google Patents
Method and system for realizing target perception in vehicle-road cooperation
- Publication number
- CN113640772A (application CN202111087072.0A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- primary
- laser radar
- perception
- splicing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Images
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/4802—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Traffic Control Systems (AREA)
Abstract
The application relates to a method and a system for realizing target perception in vehicle-road cooperation, belonging to the technical field of intelligent networked automobiles and intelligent transportation. The method acquires the original monitoring data output by a plurality of laser radar devices, performs point cloud splicing processing on the point cloud information in the original monitoring data, performs target perception based on the obtained splicing result, and outputs a perception result. The plurality of laser radar devices are preset at specified positions on the road side and are used for monitoring the target area. The method and system can better realize target perception in vehicle-road cooperation.
Description
Technical Field
The application belongs to the technical field of intelligent networked automobiles and intelligent traffic industry, and particularly relates to a method and a system for realizing target perception in vehicle-road cooperation.
Background
Vehicle-road cooperation uses advanced wireless communication, next-generation internet, and related technologies to implement full-scale, dynamic, real-time information interaction between vehicles and roads. On the basis of full-time dynamic traffic information acquisition and fusion, it carries out active vehicle safety control and cooperative road management, fully realizing effective cooperation among people, vehicles, and roads, ensuring traffic safety, and improving traffic efficiency, thereby forming a safe, efficient, and environment-friendly road traffic system. The technologies involved in vehicle-road cooperation include the identification and positioning of targets in the road (such as vehicles and pedestrians), also referred to as target perception.
In the related art, millimeter wave positioning radar is generally arranged on the road side, and real-time perception of targets in the target area is realized based on it. In practice, the applicant has found that because vehicle-road cooperation requires many targets to be identified simultaneously, the requirements on the precision and real-time performance of target positioning and speed measurement are high; given the low positioning precision and small identification range of millimeter wave radar, it cannot well meet the requirements of target perception in vehicle-road cooperation.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
To overcome the problems in the related art at least to some extent, the application provides a method and a system for realizing target perception in vehicle-road cooperation, in which a plurality of laser radars perform target perception together, helping to better realize target perception in vehicle-road cooperation.
In order to achieve the purpose, the following technical scheme is adopted in the application:
in a first aspect,
the application provides a method for realizing target perception in vehicle-road cooperation, which comprises the following steps:
acquiring original monitoring data output by a plurality of laser radar devices;
performing point cloud splicing processing according to point cloud information in the original monitoring data, performing target perception based on the obtained splicing result, and outputting a perception result;
the plurality of laser radar devices are preset at the specified positions of the road side and used for monitoring the target area.
Optionally, the presetting process of the plurality of laser radar devices includes grouping the plurality of laser radar devices according to the position adjacency relation of the laser radar devices;
the point cloud splicing processing is carried out based on the point cloud information in the original monitoring data, target perception is carried out based on the obtained splicing result, and a perception result is output, and the method comprises the following steps:
respectively carrying out primary point cloud splicing processing on original monitoring data output by each group of equipment to correspondingly obtain primary spliced point cloud data of each group of equipment, respectively carrying out primary identification processing on the primary spliced point cloud data to correspondingly obtain a primary sensing result of each group of equipment;
performing secondary point cloud splicing processing according to the primary splicing point cloud data to obtain target area point cloud data;
and performing identification processing on the data of the spliced overlapping area in the point cloud data of the target area, performing uniqueness confirmation on each primary sensing result according to the obtained identification result, and outputting the confirmed sensing result as the sensing result of the target area.
Optionally, the grouping the plurality of laser radar devices according to the position adjacency relationship of the laser radar devices includes:
and sequentially carrying out external parameter calibration on the laser radar equipment based on a preset calibration strategy according to the position adjacent relation of the laser radar equipment, to obtain external parameter information for the primary point cloud splicing processing and the secondary point cloud splicing processing.
In a second aspect of the present invention,
the application provides a system for realizing target perception in vehicle-road cooperation, the system includes:
the laser radar equipment is preset at the specified position of the road side and used for monitoring a target area, generating and outputting original monitoring data;
and the perception processing device is used for carrying out point cloud splicing processing according to the point cloud information in the original monitoring data, carrying out target perception based on the obtained splicing result and outputting a perception result.
Optionally, the presetting process of the plurality of laser radar devices includes grouping the plurality of laser radar devices according to the position adjacency relation of the laser radar devices;
the perception processing device comprises a central server and edge computing nodes which are correspondingly arranged aiming at all groups of equipment respectively;
each edge computing node is used for respectively carrying out primary point cloud splicing processing on original monitoring data output by each group of equipment to correspondingly obtain primary spliced point cloud data of each group of equipment, and respectively carrying out primary identification processing on the basis of each primary spliced point cloud data to correspondingly obtain a primary sensing result of each group of equipment;
the central server is used for carrying out secondary point cloud splicing processing according to the primary splicing point cloud data to obtain target area point cloud data, and
and performing identification processing on the data of the spliced overlapping area in the point cloud data of the target area, performing uniqueness confirmation on each primary sensing result according to the obtained identification result, and outputting the confirmed sensing result as a final sensing result of the target area.
Optionally, the number of lidar devices in each group of devices is determined based on the computational processing power of the corresponding edge compute node.
Optionally, the number of lidar devices in each group of devices is 2 or 3.
Optionally, the laser radar device and the edge computing node are in communication connection based on an optical fiber; and the edge computing nodes and the central server are in communication connection based on Ethernet.
Optionally, the edge computing node is an MEC industrial personal computer.
By adopting the above technical solutions, the present application achieves at least the following beneficial effects:
according to the technical scheme, in the implementation of cooperative target perception of the vehicle and the road, a plurality of laser radar devices are preset at the specified positions of the road side to monitor the target area, point cloud splicing is carried out on the basis of original monitoring data output by the plurality of laser radar devices, and target perception is carried out on the basis of the obtained splicing result. The method can effectively cover a large target area, and can better realize target perception in vehicle-road cooperation based on the performance characteristics of the laser radar equipment.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the technology or prior art of the present application and are incorporated in and constitute a part of this specification. The drawings expressing the embodiments of the present application are used for explaining the technical solutions of the present application, and should not be construed as limiting the technical solutions of the present application.
FIG. 1 is a schematic flow chart illustrating a method for implementing target awareness in vehicle-road coordination according to an embodiment of the present application;
fig. 2 is a schematic block diagram of an application of a system for implementing target awareness in vehicle-road coordination according to an embodiment of the present application.
In the figure, 10-lidar apparatus; 20-a central server; 30-edge compute nodes;
41-Ethernet-to-fiber transceiver; 42-fiber to ethernet transceiver; 43-ethernet switch.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail below. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without making any creative effort, shall fall within the protection scope of the present application.
As described in the background, in the vehicle-road cooperation related art, identification and positioning of an object in a road (or referred to as object perception) are involved. In the related art, generally, millimeter wave positioning radar is arranged on the road side, and real-time perception of a target in a target area is realized based on the millimeter wave positioning radar.
In practice, the applicant has found that because vehicle-road cooperation requires many targets to be identified simultaneously, the requirements on the precision and real-time performance of target positioning and speed measurement are high; given the low positioning precision and small identification range of millimeter wave radar, it cannot well meet the requirements of target perception in vehicle-road cooperation.
Lidar (Light Detection and Ranging), short for laser detection and ranging system, is a radar that uses a laser as its radiation source; it is the product of combining laser technology with radar technology. In principle, a laser radar emits laser beams to its surroundings, measures the distance to obstacles from the time and wavelength of the reflected signals, and builds a 3D representation of the surrounding environment from those reflections.
Laser radar is functionally similar to millimeter wave radar, but offers higher angular, range, and velocity resolution: the angular resolution is no worse than 0.1 mrad, the range resolution can reach 0.1 m, and the velocity resolution can reach within 10 m/s. A laser radar can also track multiple targets within its identification range simultaneously. However, the effective identification distance of a single laser radar is only 100-200 m, so in scenarios such as vehicle-road cooperation that require large-area coverage, a single radar cannot effectively cover the target area.
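The coverage implication of the 100-200 m effective range can be sketched with a back-of-the-envelope calculation; the function name, the overlap margin, and the example numbers below are illustrative assumptions, not values from the patent:

```python
import math

def lidars_needed(segment_len_m, effective_range_m, overlap_m=20.0):
    """Rough estimate of how many roadside lidars cover a road segment,
    assuming each unit covers effective_range_m along the road and
    neighbouring units overlap by overlap_m (illustrative numbers)."""
    step = effective_range_m - overlap_m  # net new coverage per extra unit
    return max(1, math.ceil((segment_len_m - effective_range_m) / step) + 1)

# A 500 m segment with 120 m-range units needs several devices -> 5
count = lidars_needed(500.0, 120.0)
```

This is why the patent distributes multiple devices along the road side rather than relying on one radar.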
In view of the above, the present application provides a method for implementing target perception in vehicle-road coordination, which is based on a manner of implementing target perception by using multiple laser radars to implement target perception in vehicle-road coordination better.
In an embodiment, as shown in fig. 1, a method for implementing target awareness in vehicle-road coordination provided by the present application includes:
step S110, acquiring original monitoring data output by a plurality of laser radar devices;
It should be noted that the plurality of laser radar devices are preset at specified positions on the road side and are used for monitoring the target area. For example, if the target area is a test road section in a vehicle proving ground, the specified positions are the corresponding roadside positions along that section.
And step S120, performing point cloud splicing processing according to point cloud information in the original monitoring data, performing target perception based on the obtained splicing result, and outputting a perception result.
It should be noted that the technical principles of point cloud stitching can be found in published technical literature and are not detailed here. In step S120 of this embodiment, for example, the perception result includes the type attribute of each identified target (such as Car, Truck, Bike, or Pedestrian) and its computed contour size, GPS coordinates, speed, heading angle, and so on.
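The perception result fields listed above can be sketched as a record type; this is a hypothetical illustration (field names and example values are the author's assumptions, not defined by the patent):

```python
from dataclasses import dataclass

@dataclass
class PerceptionResult:
    """One perceived target, per the fields enumerated in step S120."""
    target_type: str   # type attribute, e.g. "Car", "Truck", "Bike", "Pedestrian"
    length_m: float    # contour size of the target
    width_m: float
    height_m: float
    lat: float         # GPS coordinates of the target
    lon: float
    speed_mps: float   # speed
    heading_deg: float # heading angle

r = PerceptionResult("Car", 4.5, 1.8, 1.5, 39.9042, 116.4074, 12.0, 87.5)
print(r.target_type)  # Car
```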
According to the technical scheme, in the implementation of cooperative target perception of the vehicle and the road, a plurality of laser radar devices are preset at the specified positions of the road side to monitor the target area, point cloud splicing is carried out on the basis of original monitoring data output by the plurality of laser radar devices, and target perception is carried out on the basis of the obtained splicing result. The method can effectively cover a large target area, and can better realize target perception in vehicle-road cooperation based on the performance characteristics of the laser radar equipment.
To facilitate understanding of the technical solutions of the present application, the technical solutions of the present application will be described below with reference to another embodiment.
Because a laser radar uses multiple laser transmitters and receivers to build a three-dimensional point cloud for real-time environment perception, each radar device generates a large volume of data in real time.
In the application scenario of the present application, in order to reduce the processing load and improve processing efficiency, this embodiment groups the plurality of laser radar devices during their presetting process according to their position adjacency relation; that is, a predetermined number of laser radar devices at nearby positions form one group.
For example, for a certain road segment, six laser radar devices a, b, c, d, e, and f are arranged in sequence on the road side and divided into group X (devices a, b, and c) and group Y (devices d, e, and f).
At this time, in this embodiment, performing point cloud stitching processing based on point cloud information in the original monitoring data, performing target sensing based on an obtained stitching result, and outputting a sensing result includes:
first, primary point cloud splicing processing is performed separately on the original monitoring data output by each group of devices to obtain the primary spliced point cloud data of each group, and primary identification processing is performed on each set of primary spliced point cloud data to obtain the primary perception result of each group.
Continuing the example above, primary point cloud splicing is performed on the original monitoring data output by devices a, b, and c of group X to obtain the primary spliced point cloud data DYx of group X; primary point cloud splicing is performed on the data output by devices d, e, and f of group Y to obtain the primary spliced point cloud data DYy of group Y; and primary identification is performed on DYx and DYy respectively to obtain the primary perception results GZx and GZy.
Next, secondary point cloud splicing processing is performed on the obtained primary spliced point cloud data (here, DYx and DYy) to obtain the point cloud data of the target area.
Finally, identification processing is performed on the data in the splicing overlap region of the target area point cloud data, the uniqueness of each primary perception result (here, GZx and GZy) is confirmed according to the obtained identification result, and the confirmed perception results are output as the perception result of the target area.
It is easy to understand that, in the process of performing splicing and sensing based on grouping, the same target may be repeatedly sensed in the boundary monitoring area between the groups of devices, in order to ensure correct sensing of the target in the whole target area, the splicing overlapping area of the secondary point cloud splicing processing is identified and processed at the end of the processing process, and the uniqueness of each primary sensing result is confirmed according to the obtained identification result, so that the accuracy of the final sensing result is ensured while the processing efficiency is improved.
Further, it is easy to understand that in the above process, if the laser radar devices are far from the computing device that performs the processing, the original monitoring data must be transmitted over a long distance before being processed and analyzed; limited by network bandwidth and transmission speed, large delays and packet loss may occur, so the real-time performance of target perception cannot be guaranteed.
Therefore, as a preferred embodiment, an edge computing node is provided for each group of devices, and the primary point cloud splicing processing and primary identification processing are performed by the edge computing nodes.
Edge computing is a distributed information technology architecture in which an open platform integrating network, computing, storage, and application capabilities is deployed close to the object or data source, providing services at the nearest end. Processing data at the edge yields faster network responses and meets the industry's basic requirements for real-time business, application intelligence, security, and privacy protection.
In addition, in this embodiment, the grouping of the plurality of laser radar devices according to the position adjacency relationship of the laser radar devices includes:
according to the position adjacent relation of the laser radar equipment, external reference calibration is sequentially carried out on the laser radar equipment on the basis of a preset calibration strategy to obtain external reference information used for primary point cloud splicing processing and secondary point cloud splicing processing, for example, the calibration strategy specifically comprises the steps that one laser radar is set as a main sensor in primary calibration, and coordinate systems of other radars are unified into the same coordinate system through translation and rotation to obtain corresponding external reference information.
In an embodiment, the present application further provides a system for implementing target awareness in vehicle-road coordination, and as shown in fig. 2, a schematic block diagram of an application of the system is provided, where the system includes:
a plurality of laser radar devices 10, which are preset at specified positions on the road side, and are used for monitoring a target area, generating and outputting original monitoring data;
and the perception processing device is used for carrying out point cloud splicing processing according to the point cloud information in the original monitoring data, carrying out target perception based on the obtained splicing result and outputting a perception result.
Specifically, in this embodiment, the process of presetting the plurality of laser radar devices includes grouping the plurality of laser radar devices 10 according to the positional adjacency relationship of the laser radar devices (shown in the upper part of fig. 2);
the perception processing device comprises a central server 20 and edge computing nodes 30 which are correspondingly arranged aiming at all groups of equipment respectively, wherein the edge computing nodes are MEC industrial personal computers (for example);
each edge computing node 30 is configured to perform primary point cloud registration processing on the original monitoring data output by each group of devices, correspondingly obtain primary registration point cloud data of each group of devices, perform primary identification processing on the primary registration point cloud data, and correspondingly obtain a primary sensing result of each group of devices;
a central server 20 for performing a secondary point cloud registration process according to the primary registration point cloud data to obtain target area point cloud data, an
and for performing identification processing on the data of the splicing overlap region in the target area point cloud data, confirming the uniqueness of each primary perception result according to the obtained identification result, and outputting the confirmed perception result as the final perception result of the target area.
Regarding the system for implementing target perception in vehicle-road cooperation in this embodiment, the method interactions involved have been described in detail in the foregoing method embodiment and will not be elaborated here.
Furthermore, it is easily understood that in the above grouping, the number of laser radar devices in each group is determined based on the computing capability of the corresponding edge computing node; specifically, each group contains 2 or 3 devices. For example, three 80-line radar devices may form one group, or two 32-line radar devices may form one group, depending on the specific scenario.
It should be noted that, in this embodiment, the laser radar devices 10 are communicatively connected to the edge computing nodes 30 via optical fiber, and the edge computing nodes 30 are communicatively connected to the central server 20 via Ethernet.
For example, as shown in fig. 2, the original monitoring data of each laser radar device 10 is transmitted over Ethernet to an Ethernet-to-optical-fiber transceiver 41; the optical fibers of the several laser radar devices then run in the same conduit to an optical-fiber-to-Ethernet transceiver 42, from which the data is forwarded through an Ethernet switch 43 to the corresponding edge computing node 30 for processing. The central server 20 is also connected to the Ethernet switch 43 and is communicatively connected to each edge computing node 30.
The above description is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (9)
1. A method for realizing target perception in vehicle-road cooperation is characterized by comprising the following steps:
acquiring original monitoring data output by a plurality of laser radar devices;
performing point cloud splicing processing according to point cloud information in the original monitoring data, performing target perception based on the obtained splicing result, and outputting a perception result;
the plurality of laser radar devices are preset at the specified positions of the road side and used for monitoring the target area.
2. The method of claim 1, wherein the presetting process of the plurality of laser radar devices comprises grouping the plurality of laser radar devices according to the position adjacent relation of the laser radar devices;
the point cloud splicing processing is carried out based on the point cloud information in the original monitoring data, target perception is carried out based on the obtained splicing result, and a perception result is output, and the method comprises the following steps:
respectively carrying out primary point cloud splicing processing on original monitoring data output by each group of equipment to correspondingly obtain primary spliced point cloud data of each group of equipment, respectively carrying out primary identification processing on the primary spliced point cloud data to correspondingly obtain a primary sensing result of each group of equipment;
performing secondary point cloud splicing processing according to the primary splicing point cloud data to obtain target area point cloud data;
and performing identification processing on the data of the spliced overlapping area in the point cloud data of the target area, performing uniqueness confirmation on each primary sensing result according to the obtained identification result, and outputting the confirmed sensing result as the sensing result of the target area.
3. The method of claim 2, wherein grouping the plurality of lidar devices according to their positional adjacency comprises:
and sequentially carrying out external parameter calibration on the laser radar equipment based on a preset calibration strategy according to the position adjacent relation of the laser radar equipment, to obtain external parameter information for the primary point cloud splicing processing and the secondary point cloud splicing processing.
4. A system for realizing target perception in vehicle-road cooperation is characterized by comprising:
the laser radar equipment is preset at the specified position of the road side and used for monitoring a target area, generating and outputting original monitoring data;
and the perception processing device is used for carrying out point cloud splicing processing according to the point cloud information in the original monitoring data, carrying out target perception based on the obtained splicing result and outputting a perception result.
5. The system according to claim 4, wherein presetting the plurality of lidar devices comprises grouping them according to their positional adjacency;
the perception processing apparatus comprises a central server and edge computing nodes, one node provided for each device group;
each edge computing node is configured to perform primary point cloud stitching on the raw monitoring data output by its device group to obtain that group's primary stitched point cloud data, and to perform primary recognition on that data to obtain that group's primary perception result;
the central server is configured to perform secondary point cloud stitching on the primary stitched point cloud data to obtain point cloud data of the target area,
to perform recognition on the data in the stitched overlap regions of the target-area point cloud, to confirm the uniqueness of each primary perception result according to the recognition result, and to output the confirmed perception results as the final perception result of the target area.
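The uniqueness confirmation in the overlap regions amounts to deduplicating targets that two adjacent edge nodes both reported. One simple greedy scheme, assuming each primary perception result is reduced to an (x, y) object center and a distance threshold decides whether two results are the same physical target (the patent does not fix a matching rule, so both the representation and the threshold are illustrative assumptions):

```python
import numpy as np

def confirm_unique(detections, dist_thresh=1.0):
    """Greedily deduplicate primary perception results from
    different edge nodes: keep one representative per cluster of
    (x, y) centers closer than dist_thresh metres. Sketch only."""
    kept = []
    for det in detections:
        if all(np.hypot(det[0] - k[0], det[1] - k[1]) >= dist_thresh
               for k in kept):
            kept.append(det)
    return kept
```

In a real deployment the match would more plausibly use full 3D bounding boxes (e.g. IoU in the overlap region) plus timestamps, but the greedy center-distance version shows the shape of the confirmation step.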
6. The system of claim 5, wherein the number of lidar devices in each group of devices is determined based on the computational processing capabilities of the corresponding edge compute node.
7. The system of claim 6, wherein the number of lidar devices in each group of devices is 2 or 3.
8. The system according to any one of claims 5 to 7, wherein the lidar devices are communicatively connected to the edge computing nodes via optical fiber, and the edge computing nodes are communicatively connected to the central server via Ethernet.
9. The system according to any one of claims 5 to 7, wherein the edge computing node is an MEC industrial personal computer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111087072.0A CN113640772A (en) | 2021-09-16 | 2021-09-16 | Method and system for realizing target perception in vehicle-road cooperation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113640772A true CN113640772A (en) | 2021-11-12 |
Family
ID=78425923
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111087072.0A Pending CN113640772A (en) | 2021-09-16 | 2021-09-16 | Method and system for realizing target perception in vehicle-road cooperation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113640772A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020154964A1 (en) * | 2019-01-30 | 2020-08-06 | Baidu.Com Times Technology (Beijing) Co., Ltd. | A point clouds registration system for autonomous vehicles |
WO2020154967A1 (en) * | 2019-01-30 | 2020-08-06 | Baidu.Com Times Technology (Beijing) Co., Ltd. | Map partition system for autonomous vehicles |
CN112184545A (en) * | 2019-07-05 | 2021-01-05 | 杭州海康威视数字技术股份有限公司 | Vehicle-mounted ring view generating method, device and system |
CN112213735A (en) * | 2020-08-25 | 2021-01-12 | 上海主线科技有限公司 | Laser point cloud noise reduction method for rainy and snowy weather |
CN112712589A (en) * | 2021-01-08 | 2021-04-27 | 浙江工业大学 | Plant 3D modeling method and system based on laser radar and deep learning |
CN112731358A (en) * | 2021-01-08 | 2021-04-30 | 奥特酷智能科技(南京)有限公司 | Multi-laser-radar external parameter online calibration method |
CN113112840A (en) * | 2021-03-15 | 2021-07-13 | 上海交通大学 | Unmanned vehicle over-the-horizon navigation system and method based on vehicle-road cooperation |
Non-Patent Citations (1)
Title |
---|
ZHANG DISI et al.: "Research on a Proving-Ground Vehicle Full-Domain Trajectory Perception Scheme Based on Road Intelligence Technology", Automobile Applied Technology (汽车实用技术), no. 15, pages 34 - 38 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6548691B2 (en) | Image generation system, program and method, simulation system, program and method | |
US10943355B2 (en) | Systems and methods for detecting an object velocity | |
WO2022206977A1 (en) | Cooperative-vehicle-infrastructure-oriented sensing information fusion representation and target detection method | |
WO2018066351A1 (en) | Simulation system, simulation program and simulation method | |
JP2020107324A (en) | Collection and processing of data distributed between vehicles constituting vehicle convoy | |
Shi et al. | VIPS: Real-time perception fusion for infrastructure-assisted autonomous driving | |
WO2018066352A1 (en) | Image generation system, program and method, and simulation system, program and method | |
CN109307869A (en) | For increasing the equipment and lighting device of the visual field of laser radar detection device | |
CN111474947A (en) | Robot obstacle avoidance method, device and system | |
CN112166458B (en) | Target detection and tracking method, system, equipment and storage medium | |
WO2023155580A1 (en) | Object recognition method and apparatus | |
CN112099042B (en) | Vehicle tracking method and system | |
US20230080076A1 (en) | Platooning processing method and apparatus, computer-readable medium, and electronic device | |
CN113985405A (en) | Obstacle detection method and obstacle detection equipment applied to vehicle | |
Poornima et al. | Fog robotics-based intelligence transportation system using line-of-sight intelligent transportation | |
CN115879060A (en) | Multi-mode-based automatic driving perception method, device, equipment and medium | |
CN114179829A (en) | Multi-end cooperative vehicle driving method, device, system and medium | |
CN114660568B (en) | Laser radar obstacle detection method and device | |
CN115049820A (en) | Determination method and device of occlusion region and training method of segmentation model | |
CN113160292A (en) | Laser radar point cloud data three-dimensional modeling device and method based on intelligent mobile terminal | |
CN113640772A (en) | Method and system for realizing target perception in vehicle-road cooperation | |
JP2021101372A (en) | Position detection method, device, apparatus, and readable storage media | |
Chen | Intelligent perception system for vehicle-road cooperation | |
CN112735121A (en) | Holographic sensing system based on image-level laser radar | |
JP2022500737A (en) | How to select the image section of the sensor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||