CN220323539U - Road side sensing equipment - Google Patents


Info

Publication number
CN220323539U
Authority
CN
China
Prior art keywords
sensor
point cloud
cloud data
blind
lidar sensor
Prior art date
Legal status
Active
Application number
CN202321362245.XU
Other languages
Chinese (zh)
Inventor
王方建
蒋成
李雪松
Current Assignee
Beijing Yikong Zhijia Technology Co Ltd
Original Assignee
Beijing Yikong Zhijia Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Yikong Zhijia Technology Co Ltd
Priority to CN202321362245.XU
Application granted
Publication of CN220323539U
Legal status: Active

Landscapes

  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

Embodiments of the present disclosure provide a roadside sensing device including a movable device (100), an unmanned computing platform (110) disposed on the movable device (100), a mounting bar (300) disposed on the movable device (100), and a main lidar sensor (400), a blind-supplement lidar sensor (500), an image sensor (600), a communication unit (700), and a positioning antenna (800) disposed on the mounting bar (300). The roadside sensing device uses the multiple sensors to sense data of the unmanned driving environment in real time, fuses the data sensed by the multiple sensors, and sends the resulting real-time perception data over a V2X communication link to the unmanned vehicle end and/or the cloud end of the automatic driving system, thereby assisting the planning of the automatic driving system and ensuring its safety, stability, and operating efficiency.

Description

Road side sensing equipment
Technical Field
Embodiments of the present disclosure belong to the technical field of unmanned driving, and in particular relate to a roadside sensing device.
Background
The application of surface mine automatic driving systems is currently limited to controlled, independently operated road sections in which only vehicles within the system are allowed to travel. As the operating range of unmanned vehicles in mining areas expands, intersection areas appear where these independently operated road sections meet conventional road sections outside the automatic driving system, creating a demand for mixed operation of unmanned vehicles and human-driven vehicles at such intersections and road sections.
Because roads in mining areas are complex and the terrain is rough, the perception devices on the unmanned vehicles alone cannot ensure effective perception of vehicles and other objects outside the automatic driving system and on the conventional road sections.
Disclosure of Invention
Embodiments of the present disclosure aim to solve at least one of the technical problems in the prior art by providing a roadside sensing device that comprises a plurality of sensors, fuses the data acquired by the plurality of sensors, and obtains real-time perception data of the unmanned driving environment from the fusion result.
According to one aspect of the embodiments of the present disclosure, there is provided a roadside sensing device including a movable device, an unmanned computing platform disposed on the movable device, and a mounting bar disposed on the movable device; the mounting bar is provided with at least one lidar sensor and a communication unit; the unmanned computing platform is electrically connected with the lidar sensor and the communication unit; the at least one lidar sensor is used for acquiring point cloud data; the unmanned computing platform is used for acquiring real-time perception data according to the point cloud data acquired by the at least one lidar sensor; and the communication unit is used for sending the real-time perception data to an autonomous vehicle and/or a cloud.
Optionally, the at least one lidar sensor includes a main lidar sensor and a blind-supplement lidar sensor, the main lidar sensor being configured to acquire first point cloud data and the blind-supplement lidar sensor being configured to acquire second point cloud data.
Optionally, the detection distance or the field angle of the main lidar sensor differs from that of the blind-supplement lidar sensor.
Optionally, the number of scanning beams and the detection distance of the main lidar sensor are both greater than those of the blind-supplement lidar sensor, and the field angle of the main lidar sensor is smaller than that of the blind-supplement lidar sensor.
Optionally, the main lidar sensor has 160 scanning beams, a detection distance of 200 meters, and a scanning angle of 120°; the blind-supplement lidar sensor has 32 scanning beams, a detection distance of 50 meters, and a scanning angle of 360°.
Optionally, the roadside sensing device further comprises an image sensor, and the image sensor is used for acquiring image and/or video data.
Optionally, the unmanned computing platform fuses the first point cloud data acquired by the main lidar sensor and the second point cloud data acquired by the blind-supplement lidar sensor, and acquires the real-time perception data according to the fusion result.
Optionally, the unmanned computing platform fuses the first point cloud data acquired by the main lidar sensor, the second point cloud data acquired by the blind-supplement lidar sensor, and the image and/or video data acquired by the image sensor, and acquires the real-time perception data according to the fusion result.
Optionally, the unmanned computing platform includes: a receiving module for receiving the point cloud data acquired by the at least one lidar sensor; and a processing module for acquiring the real-time perception data according to the point cloud data acquired by the at least one lidar sensor.
In one example, the unmanned computing platform includes: a receiving module for receiving the first point cloud data acquired by the main lidar sensor and the second point cloud data acquired by the blind-supplement lidar sensor; a fusion module for fusing the first point cloud data and the second point cloud data to obtain a fusion result; and a processing module for acquiring the real-time perception data according to the fusion result.
In another example, the unmanned computing platform includes: a receiving module for receiving the first point cloud data acquired by the main lidar sensor, the second point cloud data acquired by the blind-supplement lidar sensor, and the image and/or video data acquired by the image sensor; a fusion module for fusing the first point cloud data, the second point cloud data, and the image and/or video data to obtain the fusion result; and a processing module for acquiring the real-time perception data according to the fusion result.
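To make this module split concrete, the following is a minimal Python skeleton of such a platform; the class and method names are illustrative assumptions rather than anything specified in the patent:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class FusionResult:
    points: np.ndarray  # fused point cloud, shape (N, 3)

class UnmannedComputingPlatform:
    """Illustrative three-module decomposition: receive, fuse, process."""

    def receive(self, first_cloud: np.ndarray, second_cloud: np.ndarray,
                image=None) -> None:
        # Receiving module: collect the raw outputs of the sensors.
        self.first_cloud, self.second_cloud, self.image = (
            first_cloud, second_cloud, image)

    def fuse(self) -> FusionResult:
        # Fusion module: register both clouds into one coordinate system
        # (a concrete sketch appears in the detailed description below).
        raise NotImplementedError

    def process(self, fusion: FusionResult) -> dict:
        # Processing module: derive real-time perception data from the fusion.
        raise NotImplementedError
```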
Optionally, the communication unit includes: a V2V communication module for sending the real-time perception data to the autonomous vehicle; and/or a V2N communication module for sending the real-time perception data to the cloud.
Optionally, the roadside sensing device further includes a positioning antenna, and the positioning antenna is used to obtain real-time position information of the roadside sensing device.
Optionally, the roadside sensing device includes a first bearing bracket, a second bearing bracket, and a third bearing bracket sequentially arranged on the mounting bar; the blind-supplement lidar sensor is arranged on the first bearing bracket, the main lidar sensor and the image sensor are arranged on the second bearing bracket, and the positioning antenna is arranged on the third bearing bracket.
Optionally, the blind-supplement lidar sensor is arranged on the side of the first bearing bracket facing the second bearing bracket; the image sensor is arranged on the side of the second bearing bracket facing the first bearing bracket; and the main lidar sensor is arranged on the side of the second bearing bracket facing the third bearing bracket.
Optionally, the mounting bar is detachably provided to the movable device, and the mounting bar is telescopic or foldable.
Optionally, a fixing pile is arranged at the bottom of the movable device.
Optionally, the movable device is a roadside vehicle.
The roadside sensing device of the embodiments of the present disclosure acquires data of the unmanned driving environment through multiple sensors, fuses the multi-sensor data, and sends the resulting real-time perception data to the unmanned vehicle end and/or the cloud end of the automatic driving system. It thereby meets the need of a surface mine automatic driving system to perceive roads and objects outside the system, assists the planning of the automatic driving system, and ensures the safety, stability, and operating efficiency of the automatic driving system.
Drawings
Fig. 1 is a schematic structural diagram of a roadside sensing device according to an embodiment of the disclosure;
Fig. 2 is a schematic structural diagram of an unmanned computing platform of a roadside sensing device according to an embodiment of the disclosure;
Fig. 3 is an application scenario illustration of a roadside sensing device according to an embodiment of the disclosure;
Fig. 4 is a schematic diagram of a roadside sensing device according to an embodiment of the disclosure.
Detailed Description
In order that those skilled in the art will better understand the technical solutions of the present disclosure, the present disclosure will be described in further detail with reference to the accompanying drawings and detailed description.
Fig. 1 is a schematic structural diagram of a roadside sensing device according to an embodiment of the disclosure. The roadside sensing device comprises a movable device 100, an unmanned computing platform 110 disposed on the movable device, at least one lidar sensor, for example a main lidar sensor 400 and a blind-supplement lidar sensor 500, an image sensor 600, a communication unit 700, and a positioning antenna 800. The movable device 100 may be a roadside vehicle or any other movable apparatus.
It should be noted that, in the embodiments of the present disclosure, the number of the lidar sensors, the main lidar sensor, and/or the blind-complement lidar sensors is not limited, and those skilled in the art may freely select the number of the lidar sensors, the main lidar sensor, and/or the blind-complement lidar sensors on the premise of ensuring that the technical solution of the present disclosure can be implemented.
The unmanned computing platform 110 is electrically connected with the main lidar sensor 400, the blind-supplement lidar sensor 500, the image sensor 600, the communication unit 700, and the positioning antenna 800, respectively. The main lidar sensor 400 and the blind-supplement lidar sensor 500 are configured to acquire first point cloud data and second point cloud data of the unmanned driving environment, respectively. The unmanned computing platform 110 is configured to fuse the first point cloud data and the second point cloud data and obtain real-time perception data according to the fusion result.
The image sensor 600 is a vision sensor for acquiring real-time image and/or video data of the environment, preferably a camera or video camera. The real-time image and/or video data acquired by the image sensor 600 are used for video monitoring of the environment and for assisting unmanned driving.
In a preferred embodiment, the unmanned computing platform 110 may further fuse the first point cloud data acquired by the main lidar sensor 400, the second point cloud data acquired by the blind-supplement lidar sensor 500, and the real-time image and/or video data acquired by the image sensor 600, and obtain the real-time perception data according to the fusion result.
The communication unit 700 is configured to send the real-time perception data to an autonomous vehicle and/or the cloud. The communication unit 700 includes a V2V communication module and a V2N communication module.
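As a rough illustration of the V2V/V2N split, the following Python sketch sends the same perception payload to nearby vehicles (broadcast) and to the cloud (unicast). Plain UDP and the placeholder addresses are stand-ins only; an actual deployment would use an LTE-V2X or DSRC stack, which the patent does not specify:

```python
import json
import socket

def send_perception(perception: dict,
                    v2v_addr=("192.168.255.255", 30000),    # assumed address,
                    v2n_addr=("cloud.example.com", 30001)):  # for illustration only
    """Send the same real-time perception payload to nearby autonomous
    vehicles (V2V, local broadcast) and to the cloud (V2N, unicast)."""
    payload = json.dumps(perception).encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    try:
        sock.sendto(payload, v2v_addr)  # V2V communication module
        sock.sendto(payload, v2n_addr)  # V2N communication module
    finally:
        sock.close()
```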
The positioning antenna 800 is used for acquiring real-time positioning information of the roadside vehicle, for example GPS or BeiDou navigation positioning information.
As shown in fig. 2, a schematic structural diagram of an unmanned computing platform 110 of a roadside sensing device according to an embodiment of the disclosure is provided. The unmanned computing platform 110 includes a receiving module 111, a fusion module 112, and a processing module 113.
The receiving module 111 is configured to receive the first point cloud data acquired by the main lidar sensor 400, the second point cloud data acquired by the blind-supplement lidar sensor 500, and the real-time image and/or video data acquired by the image sensor 600.
The fusion module 112 is configured to fuse the first point cloud data acquired by the main lidar sensor 400 with the second point cloud data acquired by the blind-supplement lidar sensor 500 to obtain the fusion result. Specifically, the fusion module 112 first de-distorts the first and second point cloud data, then performs time alignment and spatial alignment of the two coordinate systems, and registers the two sets of point cloud data into the same coordinate system to obtain the fused point cloud data.
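A minimal Python sketch of this registration step, assuming both clouds are already de-distorted and taken from time-aligned sweeps, and that a 4x4 extrinsic transform between the two sensors is available from an offline calibration (the patent gives no concrete values):

```python
import numpy as np

def fuse_point_clouds(first_cloud: np.ndarray, second_cloud: np.ndarray,
                      T_second_to_first: np.ndarray) -> np.ndarray:
    """Register the blind-supplement cloud into the main lidar frame and merge.

    first_cloud, second_cloud: (N, 3) point arrays, assumed de-distorted and
    time-aligned. T_second_to_first: assumed 4x4 extrinsic transform from
    the blind-supplement lidar frame to the main lidar frame.
    """
    n = second_cloud.shape[0]
    homogeneous = np.hstack([second_cloud, np.ones((n, 1))])   # (N, 4)
    second_in_first = (T_second_to_first @ homogeneous.T).T[:, :3]
    return np.vstack([first_cloud, second_in_first])           # fused cloud
```

Called once per synchronized sweep pair, the returned array is the fused point cloud handed on to the processing module 113.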
In a preferred embodiment, the fusion module 112 may further fuse the first point cloud data acquired by the main lidar sensor 400, the second point cloud data acquired by the blind-supplement lidar sensor 500, and the real-time image and/or video data acquired by the image sensor 600 to obtain the fusion result.
The processing module 113 is configured to obtain the real-time perception data of the unmanned driving environment according to the fusion result obtained by the fusion module 112. Specifically, the processing module 113 applies a series of algorithms to the fusion result, including denoising, segmentation, and object detection, to obtain the real-time perception data.
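The patent does not name these algorithms. As one plausible reading of "denoising, segmentation, object detection", the sketch below chains statistical outlier removal, RANSAC ground-plane segmentation, and DBSCAN clustering using the Open3D library; the library choice and all thresholds are assumptions:

```python
import numpy as np
import open3d as o3d  # an assumed choice of library; the patent names none

def perceive(fused_points: np.ndarray) -> list:
    """Denoise -> ground segmentation -> clustering on the fused cloud."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(fused_points)
    pcd = pcd.voxel_down_sample(voxel_size=0.2)              # thin the cloud
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    _, ground_idx = pcd.segment_plane(distance_threshold=0.3,
                                      ransac_n=3, num_iterations=200)
    objects = pcd.select_by_index(ground_idx, invert=True)   # drop the ground
    labels = np.asarray(objects.cluster_dbscan(eps=0.8, min_points=10))
    pts = np.asarray(objects.points)
    n_clusters = labels.max() + 1 if labels.size else 0      # -1 marks noise
    return [pts[labels == k] for k in range(n_clusters)]     # object candidates
```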
As one example, the unmanned computing platform 110 in embodiments of the present disclosure may employ an intelligent driving computing platform known as MDC.
As shown in fig. 3, an application scenario of this embodiment includes an intersection area formed by four roads A, B, C, and D, where roads A and B are manned roads outside the automatic driving system and roads C and D are unmanned roads inside the system. The roadside sensing device of this embodiment is placed at point RP in fig. 3. Roads A and B are farther from the roadside sensing device and subtend a smaller angle at it, while roads C and D are closer and subtend a larger angle.
Referring to figs. 1 and 2 together, in this embodiment roads A and B are perceived by the main lidar sensor 400 to obtain the first point cloud data, and roads C and D are perceived by the blind-supplement lidar sensor 500 to obtain the second point cloud data. The detection distance and the number of scanning beams of the main lidar sensor 400 are greater than those of the blind-supplement lidar sensor 500, while the field angle of the main lidar sensor 400 is smaller than that of the blind-supplement lidar sensor 500.
As a specific example, the main lidar sensor 400 used to sense roads A and B has 160 scanning beams, an effective detection distance of 200 meters, and a field angle of 120°. The blind-supplement lidar sensor 500 used to scan roads C and D has 32 scanning beams, an effective detection distance of 50 meters, and a field angle of 360°. As an example, a dawset falcon may be selected as the main lidar sensor 400 and a graminex 32 as the blind-supplement lidar sensor 500.
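The complementary roles of the two sensors can be summarized in a small Python sketch. The numeric figures come from this embodiment; the `covered` helper and its simple range/bearing geometry are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class LidarSpec:
    beams: int      # number of scanning beams
    range_m: float  # effective detection distance, meters
    fov_deg: float  # horizontal field angle, degrees

# Figures from the embodiment above.
MAIN_LIDAR = LidarSpec(beams=160, range_m=200.0, fov_deg=120.0)
BLIND_SUPPLEMENT_LIDAR = LidarSpec(beams=32, range_m=50.0, fov_deg=360.0)

def covered(spec: LidarSpec, distance_m: float, bearing_deg: float) -> bool:
    """True if a target at the given distance and bearing (measured from the
    sensor's boresight) lies within the sensor's range and field angle."""
    return distance_m <= spec.range_m and abs(bearing_deg) <= spec.fov_deg / 2

# Roads A/B: far, narrow bearing -> main lidar.
# Roads C/D: near, wide bearing -> blind-supplement lidar.
assert covered(MAIN_LIDAR, 180.0, 40.0)
assert covered(BLIND_SUPPLEMENT_LIDAR, 30.0, 150.0)
```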
After the first and second point cloud data of the intersection area are acquired by the main lidar sensor 400 and the blind-supplement lidar sensor 500, they are transmitted to the unmanned computing platform 110. The receiving module 111 of the unmanned computing platform 110 receives the first and second point cloud data; the fusion module 112 fuses them to obtain the fusion result; and the processing module 113 obtains the real-time perception data of the intersection area from the fusion result. The communication unit 700 then sends the real-time perception data to an autonomous vehicle and/or the cloud.
In this example, the image sensor 600 is configured to acquire image and/or video data of the intersection area and send the data to the unmanned computing platform 110 or to a cloud platform with intelligent driving management and control functions. Illustratively, the image sensor 600 is a camera for acquiring raw image data of the intersection area. The image data acquired by the camera can be streamed to a display device for manual monitoring of the intersection area. Alternatively, the fusion module 112 fuses the first point cloud data, the second point cloud data, and the image and/or video data acquired by the image sensor 600 to obtain the fusion result; the processing module 113 obtains the real-time perception data of the intersection area from the fusion result; and the communication unit 700 then sends the real-time perception data to an autonomous vehicle and/or the cloud.
The roadside sensing device of the embodiments of the present disclosure is designed around the perception requirements of such mixed-traffic intersections: it is erected at the roadside of the intersection, lidar sensors with different parameters are selected to scan the intersection area complementarily, and, together with the vision sensor, they provide the unmanned computing platform with high-precision, high-density laser point cloud data meeting the perception-distance requirement as its perception data source.
Illustratively, fig. 4 shows a schematic diagram of a roadside sensing device according to an embodiment of the disclosure. The roadside sensing device comprises a roadside vehicle 100, an unmanned computing platform 110 arranged on the roadside vehicle 100, and a first bearing bracket 310, a second bearing bracket 320, and a third bearing bracket 330 arranged from low to high on the mounting bar 300 above the vehicle platform 200.
The blind-supplement lidar sensor 500 is arranged on the first bearing bracket 310, the main lidar sensor 400 and the image sensor 600 are arranged on the second bearing bracket 320, and the positioning antenna 800 is arranged on the third bearing bracket 330. The communication unit 700 is disposed below the first bearing bracket 310, near the unmanned computing platform 110.
Illustratively, the first bearing bracket 310, the second bearing bracket 320, and the third bearing bracket 330 are arranged in order from the lowest to the highest position. Arranging the blind-supplement lidar sensor 500 on the first bearing bracket 310 and the main lidar sensor 400 and the image sensor 600 on the second bearing bracket 320 places the main lidar sensor 400 and the blind-supplement lidar sensor 500 at staggered heights. In this embodiment, the blind-supplement lidar sensor 500 may be a mechanically rotating lidar sensor, so that its 360° rotation assists the main lidar sensor 400 and more comprehensive environmental point cloud data is obtained.
The positioning antenna 800 is arranged on the third bearing bracket 330, the highest of the three, and is configured to send the position information of the roadside sensing device to a remote monitoring device, so that a remote operator can monitor and adjust the position of the roadside sensing device.
Illustratively, the blind-supplement lidar sensor 500 is arranged on the side of the first bearing bracket 310 facing the second bearing bracket 320, the image sensor 600 on the side of the second bearing bracket 320 facing the first bearing bracket 310, and the main lidar sensor 400 on the side of the second bearing bracket 320 facing the third bearing bracket 330. By providing the image sensor 600, the roadside sensing device of this embodiment monitors the intersection area and uses the image data acquired by the image sensor 600 to assist the processing of the point cloud data, which further improves the perception accuracy and thus the safety and reliability of the automatic driving system.
Illustratively, as shown in fig. 4, the vehicle platform 200 is disposed on one side of the roadside vehicle 100, and the first bearing bracket 310, the second bearing bracket 320, and the third bearing bracket 330 are all disposed on the side of the mounting bar 300 facing the roadside vehicle 100. This arrangement keeps the ground projection of the device's center of gravity within the ground projection of its load-bearing part, improving the overall stability of the roadside sensing device.
Illustratively, as shown in fig. 4, a fixing pile 900 is provided at the bottom of the vehicle platform 200. When the roadside sensing device stays at a preset sensing position, the fixing pile 900 is anchored to the ground, fixing the position of the device and enhancing its stability; when the roadside sensing device needs to move, the fixing pile 900 is lifted off the ground so that the device can be moved conveniently.
Illustratively, as shown in fig. 4, the mounting bar 300 is detachable from the vehicle platform 200. Preferably, the mounting bar 300 is telescopic or foldable to facilitate its installation and height adjustment.
It should be noted that, in the embodiments of the present disclosure, the number and positions of each element are not strictly limited, and those skilled in the art may freely select the number and positions of each element on the premise that those skilled in the art can implement the technical solution of the present disclosure.
It is to be understood that the above embodiments are merely exemplary embodiments employed to illustrate the principles of the present disclosure, however, the present disclosure is not limited thereto. Various modifications and improvements may be made by those skilled in the art without departing from the spirit and substance of the disclosure, and are also considered to be within the scope of the disclosure.

Claims (18)

1. A roadside sensing device comprising a movable device (100), an unmanned computing platform (110) disposed on the movable device (100), and a mounting bar (300) disposed on the movable device (100); at least one lidar sensor and a communication unit (700) are arranged on the mounting bar (300); the unmanned computing platform (110) is electrically connected with the lidar sensor and the communication unit (700); the at least one lidar sensor is used for acquiring point cloud data; the unmanned computing platform (110) is used for acquiring real-time perception data according to the point cloud data acquired by the at least one lidar sensor; and the communication unit (700) is used for sending the real-time perception data to an autonomous vehicle and/or a cloud.
2. The roadside sensing device according to claim 1, characterized in that the at least one lidar sensor comprises a main lidar sensor (400) and a blind-supplement lidar sensor (500), the main lidar sensor (400) being used for acquiring first point cloud data and the blind-supplement lidar sensor (500) being used for acquiring second point cloud data.
3. The roadside sensing device according to claim 2, characterized in that the detection distance or the field angle of the main lidar sensor (400) differs from that of the blind-supplement lidar sensor (500).
4. The roadside sensing device according to claim 2 or 3, characterized in that the number of scanning beams and the detection distance of the main lidar sensor (400) are both greater than those of the blind-supplement lidar sensor (500); and the field angle of the main lidar sensor (400) is smaller than that of the blind-supplement lidar sensor (500).
5. The roadside sensing device according to claim 4, characterized in that the main lidar sensor (400) has 160 scanning beams, a detection distance of 200 meters, and a scanning angle of 120°; and the blind-supplement lidar sensor (500) has 32 scanning beams, a detection distance of 50 meters, and a scanning angle of 360°.
6. The roadside sensing device according to claim 5, characterized in that the roadside sensing device further comprises an image sensor (600), the image sensor (600) being used for acquiring image and/or video data.
7. The roadside sensing device according to claim 5 or 6, characterized in that the unmanned computing platform (110) fuses the first point cloud data acquired by the main lidar sensor (400) and the second point cloud data acquired by the blind-supplement lidar sensor (500) and acquires the real-time perception data according to the fusion result.
8. The roadside sensing device according to claim 6, characterized in that the unmanned computing platform (110) fuses the first point cloud data acquired by the main lidar sensor (400), the second point cloud data acquired by the blind-supplement lidar sensor (500), and the image and/or video data acquired by the image sensor (600) and acquires the real-time perception data according to the fusion result.
9. The roadside sensing device according to claim 1, wherein the unmanned computing platform (110) comprises:
a receiving module (111) for receiving the point cloud data acquired by the at least one lidar sensor;
and a processing module (113) for acquiring the real-time perception data according to the point cloud data acquired by the at least one lidar sensor.
10. The roadside sensing device according to claim 7, wherein the unmanned computing platform (110) comprises:
a receiving module (111) for receiving the first point cloud data acquired by the main lidar sensor (400) and the second point cloud data acquired by the blind-supplement lidar sensor (500);
a fusion module (112) for fusing the first point cloud data acquired by the main lidar sensor (400) and the second point cloud data acquired by the blind-supplement lidar sensor (500) to obtain a fusion result;
and a processing module (113) for acquiring the real-time perception data according to the fusion result.
11. The roadside sensing device according to claim 8, wherein the unmanned computing platform (110) comprises:
a receiving module (111) for receiving the first point cloud data acquired by the main lidar sensor (400), the second point cloud data acquired by the blind-supplement lidar sensor (500), and the image and/or video data acquired by the image sensor (600);
a fusion module (112) for fusing the first point cloud data acquired by the main lidar sensor (400), the second point cloud data acquired by the blind-supplement lidar sensor (500), and the image and/or video data acquired by the image sensor (600) to obtain the fusion result;
and a processing module (113) for acquiring the real-time perception data according to the fusion result.
12. The roadside sensing device according to claim 1, wherein the communication unit (700) comprises:
a V2V communication module for sending the real-time perception data to the autonomous vehicle; and/or
a V2N communication module for sending the real-time perception data to the cloud.
13. The roadside sensing device according to claim 6, further comprising a positioning antenna (800), the positioning antenna (800) being configured to obtain real-time position information of the roadside sensing device.
14. The roadside sensing device according to claim 13, characterized in that the roadside sensing device comprises a first bearing bracket (310), a second bearing bracket (320), and a third bearing bracket (330) sequentially arranged on the mounting bar (300);
the blind-supplement lidar sensor (500) is arranged on the first bearing bracket (310), the main lidar sensor (400) and the image sensor (600) are arranged on the second bearing bracket (320), and the positioning antenna (800) is arranged on the third bearing bracket (330).
15. The roadside sensing device according to claim 14, characterized in that the blind-supplement lidar sensor (500) is arranged on the side of the first bearing bracket (310) facing the second bearing bracket (320); the image sensor (600) is arranged on the side of the second bearing bracket (320) facing the first bearing bracket (310); and the main lidar sensor (400) is arranged on the side of the second bearing bracket (320) facing the third bearing bracket (330).
16. The roadside sensing device according to claim 1, characterized in that the mounting bar (300) is detachably arranged on the movable device (100), and the mounting bar (300) is telescopic or foldable.
17. The roadside sensing device according to claim 1, characterized in that a fixing pile (900) is provided at the bottom of the movable device (100).
18. The roadside sensing device according to claim 1, characterized in that the movable device (100) is a roadside vehicle.
CN202321362245.XU 2023-05-31 2023-05-31 Road side sensing equipment Active CN220323539U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202321362245.XU CN220323539U (en) 2023-05-31 2023-05-31 Road side sensing equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202321362245.XU CN220323539U (en) 2023-05-31 2023-05-31 Road side sensing equipment

Publications (1)

Publication Number Publication Date
CN220323539U 2024-01-09

Family

ID=89425867

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202321362245.XU Active CN220323539U (en) 2023-05-31 2023-05-31 Road side sensing equipment

Country Status (1)

Country Link
CN (1) CN220323539U (en)

Legal Events

Date Code Title Description
GR01 Patent grant