CN115440034A - Vehicle-road cooperation realization method and system based on camera - Google Patents

Vehicle-road cooperation realization method and system based on camera

Info

Publication number
CN115440034A
Authority
CN
China
Prior art keywords
vehicle
camera
road
coordinates
intelligent network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211016027.0A
Other languages
Chinese (zh)
Other versions
CN115440034B (en)
Inventor
王平
傅良伟
王超
王新红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN202211016027.0A priority Critical patent/CN115440034B/en
Publication of CN115440034A publication Critical patent/CN115440034A/en
Application granted granted Critical
Publication of CN115440034B publication Critical patent/CN115440034B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0108 Measuring and analyzing of parameters relative to traffic conditions based on the source of data
    • G08G1/0125 Traffic data processing
    • G08G1/04 Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a camera-based vehicle-road cooperation realization method and system, comprising an intelligent network-connected vehicle-mounted subsystem arranged on the vehicle and an intelligent network-connected road-side subsystem arranged on the road side. The vehicle-mounted camera and the road-side camera are time-synchronized through GNSS timing, and both subsystems convert pixel coordinates into WGS84 coordinates to realize the spatial alignment of the two cameras. The road-side subsystem pushes its detection results to the vehicle; the vehicle associates and fuses them with its own RTK position information, and obstacles around the vehicle are obtained by fusing the vehicle-side and road-side detection results.

Description

Vehicle-road cooperation realization method and system based on camera
Technical Field
The invention belongs to the technical field of intelligent transportation, and specifically relates to a camera-based vehicle-road cooperation realization method and system.
Background
In recent years, the rapid development of information and communication technology has triggered a profound transformation of the traditional automobile industry and accelerated the maturation of intelligent driving vehicles; driving is becoming ever simpler and more intelligent. However, conventional autonomous and assisted driving systems typically rely solely on on-board sensors to perceive and understand the surrounding driving environment. On the one hand, this requires deploying advanced and complex sensing, computing, and storage devices on the vehicle, greatly increasing its manufacturing and maintenance costs. On the other hand, because the viewpoint of an on-board sensor is usually low, it is physically constrained by severe weather and by complex driving environments such as tunnel entrances and exits or intersections, which greatly limits the range and accuracy of vehicle perception. At the 2020 World Intelligent Connected Vehicles Congress, Professor Li Jun of the China Society of Automotive Engineers, academician of the Chinese Academy of Engineering, identified five major challenges for single-vehicle intelligence: 1) heavy dependence on artificial intelligence makes the "black box effect" difficult to overcome; 2) validation ultimately requires billions of miles of road testing, which is hard to achieve in the short term; 3) fully autonomous driving requires training on at least millions of extreme-condition data samples; 4) too many sensing devices make the cost too high; 5) actual driving safety is difficult to guarantee absolutely.
Therefore, to sense target-state information in a traffic system accurately and effectively, relying only on the vehicle's own sensors and computing resources is far from sufficient, and Internet of Vehicles (IoV) technology, which develops along the vehicle-road cooperation route, has emerged to fill this gap. Vehicle-to-Everything (V2X) technology uses advanced mobile communication technology to connect all traffic units into one network: vehicles with other vehicles, vehicles with pedestrians, vehicles with roadside infrastructure, and so on.
For a vehicle-road cooperative sensing system, the most important task is to use sensors to identify and continuously track traffic participants such as vehicles and pedestrians on the road, obtaining an accurate trajectory for each target to support subsequent path planning and decision making. At present, most schemes detect and track targets with cameras: the vehicle-mounted camera has become standard equipment, and roadside cameras are also widely deployed. However, how to effectively fuse the vehicle-mounted camera and the roadside camera to realize cooperative vehicle-road perception is still rarely studied. The existing research idea is to solve the problem with multi-sensor fusion: through joint calibration of a lidar and a camera, pixel coordinates can be effectively converted into laser point-cloud coordinates, and the point-cloud coordinates are easily converted into east-north-up coordinates through a rigid transformation, which is a common way to establish a unified coordinate system during multi-sensor fusion. However, without a lidar, there is as yet no relevant research on how to construct a unified coordinate system for the vehicle-mounted camera and the road-side camera to realize camera-based vehicle-road cooperative sensing.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a camera-based vehicle-road cooperation realization method and system that guarantee the spatial alignment of the vehicle-road cooperative system during information fusion and provide a unified reference coordinate system for vehicle-road cooperative sensing, thereby improving sensing precision. To achieve the above objects and other advantages, the present invention provides a camera-based vehicle-road cooperation realization system, comprising:
an intelligent network-connected vehicle-mounted subsystem arranged on the vehicle and an intelligent network-connected road-side subsystem arranged on the road side, the two subsystems realizing vehicle-road cooperation through V2X.
Preferably, the intelligent network connection vehicle-mounted subsystem comprises an RTK, a vehicle-mounted camera, a vehicle-mounted computing device and a V2X device, and the devices are integrated together.
Preferably, the intelligent network connection side subsystem comprises a road side camera, a road side computing device and a V2X device, which are integrated together.
A vehicle-road cooperation realization method based on a camera comprises the following steps:
s1, time synchronization is achieved between an intelligent network connection side subsystem and an intelligent network connection vehicle-mounted subsystem through GNSS time service;
s2, the intelligent network connection road side subsystem realizes the spatial alignment of the road side camera through the conversion from the pixel coordinate to the WGS84 coordinate;
s3, the intelligent networking vehicle-mounted subsystem realizes conversion from pixel coordinates to WGS84 coordinates;
s4, the subsystem on the intelligent network connection side pushes the detection result to the subsystem on the intelligent network connection vehicle through V2X;
s5, information fusion is realized between the intelligent network connection vehicle-mounted subsystem and the intelligent network connection side subsystem through a unified WGS84 coordinate system;
preferably, the step S2 relates to a roadside camera, and an RTK may be installed on a test vehicle (a self vehicle in the intelligent internet protocol vehicle-mounted subsystem) to obtain a real-time GPS coordinate of the test vehicle, record a driving video of the test vehicle in a field area of the roadside camera, recognize a position of the test vehicle in real time through a target detection algorithm to obtain a pixel coordinate of the test vehicle, obtain more corresponding GPS coordinates-pixel coordinate pairs after time synchronization, perform training and test set division on the obtained data, perform training by using decision tree regression, a random forest regression algorithm or a neural network deep learning algorithm, and finally obtain a conversion model from the roadside camera pixel coordinate to the GPS coordinate.
Preferably, step S3 relates to the vehicle-mounted camera. The camera needs no intrinsic or extrinsic calibration; the conversion from pixel coordinates to WGS84 coordinates is realized directly by establishing a homography transformation matrix, as follows:
s1: a camera is fixedly arranged on the vehicle body, and a plurality of paper sheets with bright colors are selected to be placed in a proper position in the visual field range of the camera to be used as marks;
s2: acquiring the GPS coordinates of the marking paper sheet by using a handheld high-precision GPS;
s3: acquiring any one frame of detection picture of a camera, and acquiring pixel point coordinates of a marking paper in a picture;
s4: and solving homography transformation of a sensor coordinate system and a GPS coordinate system by using a corresponding GPS coordinate-pixel point coordinate pair through a least square method, and projecting the camera coordinate to the GPS coordinate system. When the sensor observes the P point simultaneously with the GPS, the coordinates of the P point under the pixel coordinate system Ouv and the GPS coordinate system OXY are defined as (u, v) and (X, Y), respectively, the transformation relationship between the two can be expressed as:
Figure BDA0003812541790000041
the camera coordinates of the detected object can be projected to a GPS coordinate system through matrix transformation, and N groups of space corresponding sets (u, v) and (X) are given i ,Y i ) (i =1,2,.., n). From equation (1), the following equation can be determined:
Figure BDA0003812541790000042
Figure BDA0003812541790000043
Figure BDA0003812541790000044
Figure BDA0003812541790000045
defining:
Figure BDA0003812541790000046
Figure BDA0003812541790000047
the least squares solution of the transformation matrix N is then:
Figure BDA0003812541790000051
thus, mapping of the pixel coordinates of the camera to the WGS84 coordinate system (i.e., GPS coordinates) is achieved by the solved transformation matrix.
Preferably, step S4 relates to V2X communication. The intelligent network-connected road-side subsystem may send the messages sensed by the road-side camera to the intelligent network-connected vehicle-mounted subsystem through a PC5 or Uu air interface using the RSM message set, and the message encapsulation format may be JSON, ProtoBuf, or XML.
Preferably, step S5 relates to information fusion between the intelligent network-connected vehicle-mounted subsystem and the intelligent network-connected road-side subsystem. For the vehicle-mounted subsystem, the ego-vehicle information can use the position provided by the on-board RTK, and the information on obstacles around the vehicle can use the perceived-object information provided by the on-board camera; each is fused with the information perceived by the road-side subsystem. Information fusion consists of two steps: target matching first, then filtering. Target matching can adopt methods such as Global Nearest Neighbor (GNN), Joint Probabilistic Data Association (JPDA), or Multiple Hypothesis Tracking (MHT); the filtering algorithm can adopt Kalman filtering, extended Kalman filtering, unscented Kalman filtering, particle filtering, and the like.
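For illustration only, a road-side perception result serialized as JSON before the V2X push might look like the sketch below; the field names are assumptions for this example, since the actual RSM layout is defined by the applicable C-V2X message-set standard:

```python
import json
import time

# Hypothetical RSM-style payload assembled by the road-side subsystem.
rsm = {
    "msg_type": "RSM",
    "timestamp_ms": int(time.time() * 1000),   # GNSS-synchronized capture time
    "rsu_id": "RSU-001",
    "participants": [                          # WGS84 positions from the camera model
        {"id": 17, "type": "vehicle", "lon": 121.2001, "lat": 31.2805, "speed_mps": 8.3},
        {"id": 18, "type": "pedestrian", "lon": 121.2003, "lat": 31.2807, "speed_mps": 1.2},
    ],
}

payload = json.dumps(rsm)          # serialized for the PC5 / Uu air interface
decoded = json.loads(payload)      # what the OBU would recover on receipt
print(decoded["participants"][0]["type"])   # prints "vehicle"
```

ProtoBuf or XML encapsulation would carry the same fields in a different encoding.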
Compared with the prior art, the invention has the following beneficial effects: the system is designed for the space-time synchronization requirement of vehicle-road cooperation. No joint calibration against a lidar is needed; a conversion model maps camera pixel coordinates directly to GPS coordinates under WGS84. For either subsystem, whether the intelligent network-connected vehicle-mounted subsystem or the intelligent network-connected road-side subsystem, the state of every target object is obtained in a unified coordinate system, enabling position-based information fusion, including fusion of the ego vehicle and fusion of obstacles around the vehicle, thereby realizing vehicle-road cooperative perception.
Drawings
FIG. 1 is an architecture diagram of a camera-based vehicle-road cooperative implementation method and implementation system according to the present invention;
fig. 2 is a schematic layout diagram of a camera-based vehicle-road cooperation implementation method and system according to the present invention.
Detailed Description
A more detailed description of the camera-based vehicle-road cooperation realization method and system according to the present invention is given below with reference to the schematic drawings, in which a preferred embodiment of the invention is shown. It should be understood that a person skilled in the art may modify the invention described herein while still achieving its advantageous effects; the following description is therefore to be understood as widely known to those skilled in the art and not as limiting the invention.
Referring to fig. 1-2, a camera-based vehicle-road cooperative implementation system includes: the intelligent network connection vehicle-mounted subsystem is arranged on the vehicle, and the intelligent network connection side subsystem is arranged on the road side, and the intelligent network connection vehicle-mounted subsystem and the intelligent network connection side subsystem realize vehicle-road cooperation through V2X.
The intelligent network-connected vehicle-mounted subsystem comprises an RTK receiver, a vehicle-mounted camera, a vehicle-mounted computing device, and a V2X device, all integrated through a switch. The RTK may be a BeiDou or GPS differential system; the vehicle-mounted camera includes forward-view, rear-view, and surround-view cameras; the vehicle-mounted computing device includes an embedded controller and an industrial personal computer; and the V2X device includes a 5G/4G CPE and a V2X OBU. Its spatial alignment process is: for each camera, establish a conversion model that realizes the conversion from pixel coordinates to WGS84 coordinates. The vehicle-mounted camera may be replaced here by other on-board sensing devices, such as a lidar or a millimeter-wave radar.
The intelligent network-connected road-side subsystem comprises a road-side camera, road-side computing equipment, and a V2X device, all integrated through a switch. The road-side camera includes industrial cameras and network cameras; the road-side computing equipment includes an MEC server and an industrial personal computer; and the V2X device includes a 5G/4G CPE and a V2X RSU. Its spatial alignment process is: establish a conversion model for each road-side camera to realize the conversion from pixel coordinates to WGS84 coordinates.
Furthermore, the vehicle-mounted camera and the road-side camera are time-synchronized through an NTP time server. For either a vehicle-mounted or a road-side camera, a homography transformation matrix can be established to realize the conversion from pixel coordinates to WGS84 coordinates, by the following steps:
s1, fixedly mounting a camera, selecting a plurality of positions in the detection range of the camera, and placing marker paper sheets with bright colors on the positions to mark. The positions are equivalent to sampling of the detection range of the camera, and the selected positions are proper in distance and cover the detection range of the camera as much as possible. The more the number of the position selection is, the higher the calibration precision is, but the workload can be increased; otherwise, the less the number of position selections, the lower the calibration accuracy. In order to solve the homography transformation matrix, at least 9 position points need to be selected, and the distribution positions are shown in fig. 2.
S2, measure and record the GPS coordinates of the positions selected in step S1 with a handheld high-precision GPS data collector. Note that the recorded GPS coordinates must correspond one-to-one with the pixel positions of the paper sheets, so that they can be paired with the pixel coordinates later;
and S3, after marking each position by using a marker paper sheet with bright color, acquiring a detection picture of a frame of camera, wherein each marker position can be clearly seen in the frame of picture as shown in FIG. 2. Acquiring pixel point coordinates of all marked positions in the picture, wherein the pixel point coordinates correspond to the GPS coordinates acquired in the step S2 one by one;
s4, solving homography transformation of a sensor coordinate system and a GPS coordinate system by a least square method by adopting a homography transformation principle of the camera coordinate system and the GPS coordinate system, and projecting the camera coordinate to a WGS84 coordinate system;
and S5, because the distortion and perspective phenomena exist in the picture detected by the camera, the conversion error is larger at some positions by using a homography transformation. Therefore, a partition calibration method can be considered. And repeating the steps S1-S4, acquiring a series of GPS coordinates and pixel point coordinates below and above the detected picture of the camera, and calculating a conversion matrix of the GPS coordinates and the pixel point coordinates. If the road is approximately parallel to the y-axis direction in the pixel coordinate system, the road can be partitioned according to the pixel coordinate values in the y-axis direction, and corresponding mapping transformation matrixes are calculated in different areas through the calibration method;
aiming at the road side camera, a running video of a test vehicle with RTK in a field area of the road side camera is recorded, a corresponding GPS coordinate-pixel point coordinate pair is obtained based on a target detection algorithm, and then training is carried out by utilizing decision tree regression, random forest regression algorithm or neural network deep learning, so that a conversion model from the pixel point coordinate to the GPS coordinate is obtained.
In the intelligent network-connected road-side subsystem, the road-side camera's video is first processed on the road-side MEC, where, for example, YOLOv5 and DeepSORT extract feature information on each target object's state, including not only its category but also its position. The result is then encapsulated as an RSM message using JSON and pushed by the RSU to the vehicle's OBU over the PC5 air interface.
In the intelligent network-connected vehicle-mounted subsystem, the on-board industrial personal computer collects RTK data and performs feature extraction on the on-board camera's video; as before, the target-state features include position information. Meanwhile, the on-board OBU parses the received RSM messages, and the industrial personal computer then fuses the information acquired by the vehicle with the information delivered by the RSU. Data association between on-board and road-side information can use Hungarian matching based on the Mahalanobis distance, and targets are updated with Kalman filtering to obtain the updated ego-vehicle state and the states of surrounding target objects. The target matching and filtering methods are not limited to the Hungarian matching and Kalman filtering mentioned here.
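The association step can be sketched as follows, with illustrative local coordinates in metres and an assumed diagonal innovation covariance (a real pipeline would operate on the unified WGS84-derived positions and follow each match with a Kalman-filter update):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian algorithm

vehicle_tracks = np.array([[0.0, 0.0], [10.0, 5.0]])   # on-board track positions
roadside_dets  = np.array([[9.6, 5.3], [0.4, -0.2]])   # detections parsed from RSM
S_inv = np.linalg.inv(np.diag([0.5, 0.5]))             # inverse innovation covariance

# Squared Mahalanobis-style cost: cost[i, j] = (x_i - z_j)^T S^-1 (x_i - z_j)
diff = vehicle_tracks[:, None, :] - roadside_dets[None, :, :]
cost = np.einsum('ijk,kl,ijl->ij', diff, S_inv, diff)

rows, cols = linear_sum_assignment(cost)               # optimal track-detection pairing
matches = list(zip(rows.tolist(), cols.tolist()))
print(matches)   # prints [(0, 1), (1, 0)]: each track pairs with its nearby detection
```

Matched pairs would then feed the filter update; unmatched detections can spawn new tracks.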
The above description is only a preferred embodiment of the present invention, and does not limit the present invention in any way. It will be understood by those skilled in the art that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (9)

1. A camera-based vehicle-road cooperation realization method, characterized in that, for a plurality of vehicle-mounted cameras in an intelligent network-connected vehicle-mounted subsystem and a plurality of road-side cameras in an intelligent network-connected road-side subsystem, a unified coordinate system is constructed, the vehicle-mounted cameras and the road-side cameras are effectively fused, and camera-based vehicle-road sensing cooperation is realized, the method comprising the following steps:
s1: the vehicle-mounted camera and the roadside camera realize time synchronization through GNSS time service;
s2: the intelligent network connection side subsystem realizes the spatial alignment of each roadside camera through the conversion from pixel coordinates to WGS84 coordinates;
s3: the intelligent network on-board subsystem realizes the spatial alignment of each on-board camera through the conversion from pixel coordinates to WGS84 coordinates;
s4: the intelligent network connection side subsystem pushes a detection result to the intelligent network connection vehicle-mounted subsystem through V2X;
s5: the intelligent network connection vehicle-mounted subsystem and the intelligent network connection side subsystem realize information fusion through a unified WGS84 coordinate system, and therefore vehicle-road sensing cooperation based on a camera is achieved.
2. The method for realizing vehicle-road cooperation based on the camera of claim 1, wherein in the step S2, the conversion from the roadside camera pixel point coordinates to the WGS84 coordinates is realized by establishing a conversion model, and the conversion model from the roadside camera pixel point coordinates to the WGS84 coordinates comprises the following steps:
s21: installing an RTK on a test vehicle to obtain real-time WGS84 coordinates of the test vehicle, simultaneously recording a running video of the test vehicle in a field of view area of a roadside camera, and identifying the position of the test vehicle in real time through a target detection algorithm, so as to obtain pixel point coordinates of the test vehicle, and simultaneously obtaining more corresponding pixel point coordinate pairs after time synchronization;
s22: and dividing the acquired data into a training set and a test set, and training by using a decision tree regression algorithm, a random forest regression algorithm or a neural network deep learning algorithm to finally obtain a conversion model from the pixel point coordinates of the roadside camera to the WGS84 coordinates.
3. The method for realizing vehicle-road cooperation based on the camera according to claim 1, wherein in the step S3, the vehicle-mounted camera directly realizes the conversion from the pixel coordinate to the WGS84 coordinate by establishing a homography transformation matrix, and the specific conversion process includes the following steps:
s31: a camera is arranged on the vehicle body, and a plurality of paper sheets with bright colors are placed in the visual field range of the camera to be used as marks;
s32: acquiring marked GPS coordinates by using a high-precision GPS handset;
s33: acquiring any one frame of detection picture of a camera, and acquiring pixel point coordinates of a marking paper in a picture;
s34: utilizing corresponding pixel point coordinate pairs, solving homography transformation of a sensor coordinate system and a GPS coordinate system through a least square method, projecting camera coordinates to GPS coordinates, defining the coordinates of a P point under the pixel coordinate system Our and the GPS coordinate system OXY to be (u, v) and (x, y) respectively when the sensor and the GPS simultaneously observe the P point, and then expressing the conversion relation between Our and OXY as a formula I, wherein the formula I is as follows:
Figure FDA0003812541780000021
projecting the camera coordinates of the detection target to a GPS coordinate system through matrix transformation, and giving N groups of space corresponding sets (u, v) and (X) i ,Y i ) (i =1,2,. Ann.., n), determining a formula two to a formula five according to formula one, wherein formula two is expressed as:
Figure FDA0003812541780000022
the third public representation is:
Figure FDA0003812541780000023
the formula four is expressed as:
Figure FDA0003812541780000031
the formula five is expressed as:
Figure FDA0003812541780000032
defining a formula six and a formula seven, wherein the formula six is expressed as:
Figure FDA0003812541780000033
the formula seven is expressed as:
Figure FDA0003812541780000034
the least squares solution of the transformation matrix N yields equation eight, which is expressed as:
Figure FDA0003812541780000035
mapping of pixel coordinates of the camera to a WGS84 coordinate system is achieved through the solved transformation matrix.
4. The camera-based vehicle-road cooperative implementation method according to claim 1, wherein in S4, the V2X comprises a V2X PC5 air interface and a V2X Uu air interface, and the intelligent network-connected road-side subsystem pushes the detection result to the intelligent network-connected vehicle-mounted subsystem through the V2X PC5 air interface or the V2X Uu air interface.
5. The method for realizing vehicle-road cooperation based on camera according to claim 1, wherein in step S5, the vehicle information and the obstacle information around the vehicle are respectively fused with the information sensed by the intelligent network link-side subsystem, the vehicle information is position information provided by the vehicle-mounted RTK, and the obstacle information around the vehicle is sensing target information provided by the vehicle-mounted camera.
6. The camera-based vehicle-road cooperative implementation method, characterized in that information fusion is performed sequentially through target matching and a filtering algorithm, wherein the target matching adopts one or a combination of the global nearest neighbor, joint probabilistic data association, and multiple hypothesis tracking methods; the filtering algorithm adopts one or a combination of Kalman filtering, extended Kalman filtering, unscented Kalman filtering, and particle filtering.
7. A camera-based vehicle-road cooperation realization system, used for realizing the camera-based vehicle-road cooperation realization method as claimed in any one of claims 1 to 6, characterized by comprising an intelligent network-connected vehicle-mounted subsystem arranged on a vehicle and an intelligent network-connected road-side subsystem arranged on the road side, wherein the intelligent network-connected vehicle-mounted subsystem comprises an RTK receiver, a vehicle-mounted camera, a vehicle-mounted computing device, and a V2X device integrated together through a switch; the intelligent network-connected road-side subsystem comprises a road-side camera, road-side computing equipment, and a V2X device integrated through a switch.
8. The camera-based vehicle-road cooperative implementation system according to claim 7, wherein the RTK is a BeiDou or GPS differential system, the vehicle-mounted camera comprises a forward-view camera, a rear-view camera and a surround-view camera, the vehicle-mounted computing device comprises an embedded controller and an industrial personal computer, and the V2X device comprises a 5G/4G CPE and a V2X OBU.
9. The camera-based vehicle-road cooperative implementation system according to claim 7, wherein the road-side camera comprises an industrial camera and a network camera, the road-side computing equipment comprises an MEC server, an industrial personal computer and a computer, and the V2X device comprises a 5G/4G CPE and a V2X RSU.
CN202211016027.0A 2022-08-24 2022-08-24 Vehicle-road cooperation realization method and realization system based on camera Active CN115440034B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211016027.0A CN115440034B (en) 2022-08-24 2022-08-24 Vehicle-road cooperation realization method and realization system based on camera


Publications (2)

Publication Number Publication Date
CN115440034A true CN115440034A (en) 2022-12-06
CN115440034B CN115440034B (en) 2023-09-01

Family

ID=84245171

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211016027.0A Active CN115440034B (en) 2022-08-24 2022-08-24 Vehicle-road cooperation realization method and realization system based on camera

Country Status (1)

Country Link
CN (1) CN115440034B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010236891A (en) * 2009-03-30 2010-10-21 Nec Corp Position coordinate conversion method between camera coordinate system and world coordinate system, vehicle-mounted apparatus, road side photographing apparatus, and position coordinate conversion system
CN111476999A (en) * 2020-01-17 2020-07-31 武汉理工大学 Intelligent network-connected automobile over-the-horizon sensing system based on vehicle-road multi-sensor cooperation
CN113099529A (en) * 2021-03-29 2021-07-09 千寻位置网络(浙江)有限公司 Indoor vehicle navigation method, vehicle-mounted terminal, field terminal server and system
JPWO2022009848A1 (en) * 2020-07-07 2022-01-13


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI Xiuzhi: "A Vehicle-Road Cooperative System Based on Multi-dimensional Spatio-temporal Fusion", Information & Communications, no. 12, pages 44 - 46 *

Also Published As

Publication number Publication date
CN115440034B (en) 2023-09-01

Similar Documents

Publication Publication Date Title
CN111951305B (en) Target detection and motion state estimation method based on vision and laser radar
CN110174093B (en) Positioning method, device, equipment and computer readable storage medium
CN105654732A (en) Road monitoring system and method based on depth image
CN112836737A (en) Roadside combined sensing equipment online calibration method based on vehicle-road data fusion
CN108594244B (en) Obstacle recognition transfer learning method based on stereoscopic vision and laser radar
CN103176185A (en) Method and system for detecting road barrier
CN111458721B (en) Exposed garbage identification and positioning method, device and system
CN112382085A (en) System and method suitable for intelligent vehicle traffic scene understanding and beyond visual range perception
US20200341150A1 (en) Systems and methods for constructing a high-definition map based on landmarks
CN112068567B (en) Positioning method and positioning system based on ultra-wideband and visual image
CN112884892B (en) Unmanned mine car position information processing system and method based on road side device
JP2018077162A (en) Vehicle position detection device, vehicle position detection method and computer program for vehicle position detection
CN111538008B (en) Transformation matrix determining method, system and device
KR102362510B1 (en) Image map making system auto-searching the misprounciations of digital map data and modifying error in the image
CN107607939B (en) Optical target tracking and positioning radar device based on real map and image
CN117310627A (en) Combined calibration method applied to vehicle-road collaborative road side sensing system
CN115440034B (en) Vehicle-road cooperation realization method and realization system based on camera
CN115409691A (en) Bimodal learning slope risk detection method integrating laser ranging and monitoring image
EP3223188A1 (en) A vehicle environment mapping system
TWI811954B (en) Positioning system and calibration method of object location
CN117553811B (en) Vehicle-road co-location navigation method and system based on road side camera and vehicle-mounted GNSS/INS
CN115272490B (en) Method for calibrating camera of road-end traffic detection equipment
US20240071034A1 (en) Image processing device, image processing method, and program
CN114659512A (en) Geographic information acquisition system
CN117173251A (en) Data set labeling method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 200092 Siping Road 1239, Shanghai, Yangpu District

Applicant after: TONGJI University

Address before: 200092 Siping Road 1239, Shanghai, Hongkou District

Applicant before: TONGJI University

GR01 Patent grant