CN115440034B - Vehicle-road cooperation realization method and realization system based on camera

Vehicle-road cooperation realization method and realization system based on camera

Info

Publication number
CN115440034B
CN115440034B (application CN202211016027.0A)
Authority
CN
China
Prior art keywords
vehicle
road
camera
intelligent network
subsystem
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211016027.0A
Other languages
Chinese (zh)
Other versions
CN115440034A (en)
Inventor
王平 (Wang Ping)
傅良伟 (Fu Liangwei)
王超 (Wang Chao)
王新红 (Wang Xinhong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN202211016027.0A priority Critical patent/CN115440034B/en
Publication of CN115440034A publication Critical patent/CN115440034A/en
Application granted granted Critical
Publication of CN115440034B publication Critical patent/CN115440034B/en
Legal status: Active


Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/01: Detecting movement of traffic to be counted or controlled
    • G08G1/0104: Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0108: Measuring and analyzing of parameters relative to traffic conditions based on the source of data
    • G08G1/0125: Traffic data processing
    • G08G1/04: Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a camera-based vehicle-road cooperation method and system, comprising an intelligent network vehicle-mounted subsystem arranged on the vehicle and an intelligent network road-side subsystem arranged at the road side. The vehicle-mounted camera and the road-side camera are time-synchronized through GNSS timing, and both subsystems convert pixel coordinates into WGS84 coordinates, thereby realizing spatial alignment of the vehicle-mounted and road-side cameras. The intelligent network road-side subsystem pushes detection results to the vehicle; the vehicle performs association and fusion based on its own RTK position information and the road-side detection results, and the surrounding-obstacle information is fused from the vehicle-mounted camera detection results and the road-side detection results.

Description

Vehicle-road cooperation realization method and realization system based on camera
Technical Field
The invention belongs to the technical field of intelligent traffic, and particularly relates to a vehicle-road cooperation realization method and system based on cameras.
Background
In recent years, the rapid development of information and communication technology has driven profound change in the traditional automobile industry, accelerating the maturation of intelligent driving and making driving simpler and more intelligent. However, conventional autonomous driving and driver-assistance systems typically rely solely on on-board sensors to perceive and understand the surrounding driving environment. On the one hand, this requires deploying advanced and complex sensing, computing and storage equipment on the vehicle, greatly increasing its manufacturing and maintenance costs. On the other hand, because the viewing angle of on-board sensors is generally low, perception is physically limited by severe weather and by complex driving environments such as tunnel entrances and exits and intersections, greatly restricting its range and accuracy. At the 2020 World Intelligent Connected Vehicles Conference, Professor Li Jun of the China Society of Automotive Engineers and the Chinese Academy of Engineering pointed out five challenges of single-vehicle intelligence: 1) heavy dependence on artificial intelligence makes the "black box effect" difficult to overcome; 2) validation ultimately requires billions of miles of road testing, which is difficult to accomplish in the short term; 3) fully automatic driving requires training on at least millions of extreme-condition samples; 4) too many sensing devices lead to excessive cost; 5) absolute safety in actual operation is difficult to guarantee.
Therefore, to sense target states in a traffic system accurately and effectively, the vehicle's own sensors and computing resources alone are far from sufficient; hence the Internet of Vehicles (IoV) technology and the vehicle-road cooperation development route. V2X (Vehicle-to-Everything) technology uses advanced mobile communication to connect the vehicle with all other traffic units, such as other vehicles (V2V), pedestrians (V2P), and road-side infrastructure (V2I), to form a network.
For a vehicle-road cooperative perception system, the most important task is to use sensors to identify and continuously track traffic participants such as vehicles and pedestrians on the road, obtaining accurate trajectories of all targets to support subsequent path planning and decision-making. At present, most schemes detect and track targets with cameras: vehicle-mounted cameras are standard equipment, and road-side cameras are also widely deployed. How to effectively fuse the vehicle-mounted and road-side cameras to achieve cooperative vehicle-road perception is, however, still under study. The existing research approach solves this with multi-sensor fusion: through joint calibration of a lidar and a camera, pixel coordinates can be effectively converted into laser point cloud coordinates, which in turn are easily converted into local east-north plane coordinates by a rigid transformation; this is the common way to establish a unified coordinate system in multi-sensor fusion. However, in the absence of a lidar, how to construct a unified coordinate system for the vehicle-mounted and road-side cameras so as to realize camera-based vehicle-road cooperative perception has not yet been studied.
Disclosure of Invention
In view of the defects in the prior art, the invention aims to provide a camera-based vehicle-road cooperation method and system that guarantee the spatial alignment of the vehicle-road cooperation system during information fusion and provide a unified reference coordinate system for vehicle-road cooperative perception, thereby improving perception accuracy. To achieve the above and other objects, the present invention provides a camera-based vehicle-road cooperation system, including:
an intelligent network vehicle-mounted subsystem arranged on the vehicle and an intelligent network road-side subsystem arranged at the road side, the two subsystems realizing vehicle-road cooperation through V2X.
Preferably, the intelligent network vehicle-mounted subsystem comprises an RTK receiver, a vehicle-mounted camera, vehicle-mounted computing equipment and V2X equipment, integrated together.
Preferably, the intelligent network road-side subsystem comprises a road-side camera, road-side computing equipment and V2X equipment, integrated together.
A vehicle-road cooperation realization method based on cameras comprises the following steps:
s1, the intelligent network link side subsystem and the intelligent network link vehicle subsystem realize time synchronization through GNSS time service;
s2, the intelligent network link side subsystem realizes space alignment of the road side camera through conversion from pixel coordinates to WGS84 coordinates;
s3, the intelligent network vehicle-mounted subsystem realizes conversion from pixel coordinates to WGS84 coordinates;
s4, the intelligent network link side subsystem pushes the detection result to the intelligent network vehicle-mounted subsystem through V2X;
s5, the intelligent network connection vehicle-mounted subsystem and the intelligent network connection side subsystem realize information fusion through a unified WGS84 coordinate system;
preferably, the step S2 involves a road side camera, an RTK may be installed on a test vehicle (the vehicle in the intelligent network vehicle-mounted subsystem) to obtain real-time GPS coordinates of the test vehicle, and meanwhile, a driving video of the test vehicle in a field of view of the road side camera is recorded, and the position of the test vehicle is identified in real time by a target detection algorithm to obtain pixel coordinates of the test vehicle, after time synchronization, more corresponding GPS coordinates—pixel coordinate pairs may be obtained at the same time, the obtained data is divided into a training set and a testing set, and training is performed by using decision tree regression, random forest regression algorithm or neuronal network deep learning algorithm, and finally, a conversion model from the pixel coordinates of the road side camera to the GPS coordinates is obtained.
Preferably, step S3 involves the vehicle-mounted camera. Here the camera needs no intrinsic or extrinsic calibration: the conversion from pixel coordinates to WGS84 coordinates is achieved directly by establishing a homography transformation matrix. The method comprises the following steps:
s1: a camera is fixedly arranged on a vehicle body, and a plurality of bright-colored paper sheets are selected and placed at proper positions in the visual field range of the camera to serve as marks;
s2: acquiring GPS coordinates of the marking paper sheet by using a handheld high-precision GPS;
s3: acquiring any frame of detection picture of a camera, and acquiring pixel point coordinates of a marking paper sheet in the picture;
s4: and solving homography transformation of the sensor coordinate system and the GPS coordinate system by using a corresponding GPS coordinate-pixel point coordinate pair through a least square method, and projecting camera coordinates to the GPS coordinate system. When the sensor observes the P point simultaneously with the GPS, and the coordinates of the P point in the pixel coordinate system Ouv and the GPS coordinate system OXY are defined as (u, v) and (X, Y), respectively, the conversion relationship between the two can be expressed as:
the camera coordinates of the detected targets can be projected under a GPS coordinate system through matrix transformation, and N groups of space corresponding sets (u, v) and (X) are given i ,Y i ) (i=1, 2,) n. The following equation can be determined according to equation (1):
definition:
the least squares solution of the transformation matrix N is:
thus, mapping from the pixel coordinates of the camera to the WGS84 coordinate system (i.e., GPS coordinates) is achieved by the solved transformation matrix.
Preferably, step S4 involves V2X communication: the intelligent network road-side subsystem may send the messages perceived by the road-side camera to the intelligent network vehicle-mounted subsystem through the PC5 air interface or the Uu air interface as the RSM message set, and the message encapsulation format may be JSON, ProtoBuf, or XML; a hypothetical JSON encapsulation is sketched below.

Preferably, step S5 concerns the information fusion between the intelligent network vehicle-mounted subsystem and the intelligent network road-side subsystem. For the vehicle-mounted subsystem, the ego-vehicle information can use the position provided by the on-board RTK, and the surrounding-obstacle information can use the perceived object information provided by the vehicle-mounted camera; each is fused with the information perceived by the intelligent network road-side subsystem. The information fusion comprises two steps: first target matching, then filtering. Target matching may adopt methods such as Global Nearest Neighbor (GNN), Joint Probabilistic Data Association (JPDA), or Multiple Hypothesis Tracking (MHT); the filtering algorithm may adopt Kalman filtering, extended Kalman filtering, unscented Kalman filtering, particle filtering, and the like.
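Regarding the JSON encapsulation mentioned for step S4, a hypothetical packaging of one road-side perception result might look as follows; all field names are assumptions for illustration, not the normative RSM schema:

```python
# Hypothetical JSON packaging of one roadside perception result as an RSM
# payload; every field name here is illustrative, not a standard schema.
import json
import time

rsm = {
    "msgType": "RSM",
    "rsuId": "RSU-001",                      # hypothetical device identifier
    "timestampMs": int(time.time() * 1000),  # GNSS-synchronized timestamp
    "participants": [{
        "id": 17,
        "class": "vehicle",    # category from the roadside detector
        "lat": 31.286512,      # WGS84 position from the pixel-to-WGS84 model
        "lon": 121.502387,
        "speedMps": 8.4,
        "headingDeg": 92.0,
    }],
}
payload = json.dumps(rsm).encode("utf-8")  # bytes the RSU pushes over PC5/Uu
```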
Compared with the prior art, the invention has the following beneficial effects: the design addresses the spatio-temporal synchronization requirements of vehicle-road cooperation without relying on a lidar for joint calibration. By establishing a conversion model, camera pixel coordinates can be mapped directly to GPS coordinates under WGS84, so that even different subsystems, such as the intelligent network vehicle-mounted subsystem and the intelligent network road-side subsystem, obtain the states of all targets in a unified coordinate system. Position-based information fusion, including fusion of the ego vehicle and of the obstacles around the vehicle, is then carried out, realizing vehicle-road cooperative perception.
Drawings
FIG. 1 is a schematic diagram of the camera-based vehicle-road cooperation system according to the present invention;
FIG. 2 is a schematic diagram of a camera detection frame with the calibration marker positions according to the present invention.
Detailed Description
The camera-based vehicle-road cooperation method and system are described in more detail below with reference to the schematic drawings, in which preferred embodiments of the invention are shown. It should be understood that those skilled in the art can modify the invention described herein while still realizing its advantageous effects; the following description is therefore to be understood as widely known to those skilled in the art and not as limiting the invention.
Referring to FIGS. 1-2, a camera-based vehicle-road cooperation system includes an intelligent network vehicle-mounted subsystem arranged on the vehicle and an intelligent network road-side subsystem arranged at the road side, the two realizing vehicle-road cooperation through V2X.
The intelligent network vehicle-mounted subsystem comprises an RTK receiver, vehicle-mounted cameras, vehicle-mounted computing equipment and V2X equipment, integrated together through a switch. The RTK may be a BeiDou or GPS differential system; the vehicle-mounted cameras include a front-view camera, a rear-view camera and a surround-view camera; the vehicle-mounted computing equipment includes an embedded controller and an industrial personal computer; and the V2X equipment includes a 5G/4G CPE and a V2X OBU. The spatial alignment process is: for each camera, a conversion model is established to realize the conversion from pixel coordinates to WGS84 coordinates. The vehicle-mounted camera may also be replaced by other on-board sensing devices, such as a lidar or a millimeter-wave radar.
The intelligent network road-side subsystem comprises road-side cameras, road-side computing equipment and V2X equipment, integrated together through a switch. The road-side cameras include industrial cameras and network cameras; the road-side computing equipment includes an MEC server, an industrial personal computer and a computer; and the V2X equipment includes a 5G/4G CPE and a V2X RSU. The spatial alignment process is: for each road-side camera, a conversion model is established to realize the conversion from pixel coordinates to WGS84 coordinates.
Furthermore, the vehicle-mounted camera and the road-side camera are time-synchronized through an NTP time server. For a vehicle-mounted or road-side camera, a homography transformation matrix can be established to realize the conversion from pixel coordinates to WGS84 coordinates, comprising the following steps:
s1, fixedly installing a camera, selecting a plurality of positions in the detection range of the camera, and placing colorful marking paper sheets on the positions to make marks. The positions are equivalent to sampling the detection range of the camera, and the selected positions are suitable in spacing and cover the detection range of the camera as far as possible. The more the number of the position selection is, the higher the calibration precision is, but the workload is increased; on the contrary, the smaller the position selection quantity is, the lower the calibration precision is. To solve the homography transformation matrix, at least 9 location points need to be selected, and the location of the distribution points is shown in fig. 2.
S2, measuring and recording the GPS coordinates of the positions selected in step S1 with a handheld high-precision GPS data collector. Note that the recorded GPS coordinates must correspond one-to-one with the pixel positions of the marker sheets so that they can later be matched with pixel coordinates;
s3, after each position is marked by using the colorful marking paper sheet, a detection picture of a frame of camera needs to be acquired, and as shown in fig. 2, each marking position can be clearly seen in the frame of picture. Acquiring pixel point coordinates of all the mark positions in the picture, wherein the pixel point coordinates correspond to the GPS coordinates acquired in the step S2 one by one;
s4, adopting a homography conversion principle of a camera coordinate system and a GPS coordinate system, solving homography conversion of the sensor coordinate system and the GPS coordinate system through a least square method, and projecting the camera coordinate to a WGS84 coordinate system;
s5, because the camera detects that the picture has distortion and perspective phenomenon, the conversion error is larger at certain positions by using a homography conversion. Thus, a zonal calibration approach may be considered. Repeating the steps S1-S4, obtaining a series of GPS coordinates and pixel point coordinates of the lower part and the upper part of the detection picture of the camera, and calculating a conversion matrix of the coordinate. If the road is approximately parallel to the y-axis direction in the pixel coordinate system, the road can be partitioned according to the pixel coordinate values in the y-axis direction, and the corresponding mapping transformation matrix is calculated in different areas through the calibration method;
aiming at the road side camera, a driving video of a test vehicle with RTK in the visual field area of the road side camera is recorded, a corresponding GPS coordinate-pixel point coordinate pair is obtained based on a target detection algorithm, and training is carried out by utilizing decision tree regression, random forest regression algorithm or neural network deep learning, so that a conversion model from the pixel point coordinate to the GPS coordinate is obtained.
In the intelligent network road-side subsystem, the road-side MEC processes the road-side camera's video images, for example with YOLOv5 and DeepSORT, extracting feature information about the state of each target object, including both its category and its position. This information is encapsulated into an RSM message in JSON, and the RSU pushes it to the on-board OBU through the PC5 air interface.
In the intelligent network vehicle-mounted subsystem, the on-board industrial personal computer first collects the RTK data and performs feature extraction on the vehicle-mounted camera's video images, likewise obtaining object state features including position information. Meanwhile, the on-board OBU parses the received RSM messages, and the on-board industrial personal computer then fuses the information acquired by the vehicle with the information transmitted from the RSU. The on-board and road-side information can be associated using Hungarian matching based on the Mahalanobis distance, and the targets updated with Kalman filtering to obtain the updated vehicle state and surrounding-object states. The target matching and filtering methods are not limited to the Hungarian matching and Kalman filtering mentioned here.
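As a minimal sketch of this association step (SciPy's Hungarian solver on squared Mahalanobis distances; the chi-square gate and the shared covariance handling are assumptions, not the patent's prescription):

```python
# Sketch of Mahalanobis-distance Hungarian matching between onboard and
# roadside target lists; the gate value and S_inv handling are assumed.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(onboard_xy, roadside_xy, S_inv, gate=9.21):
    """Return matched (onboard, roadside) index pairs inside the gate.

    onboard_xy, roadside_xy: (n, 2) and (m, 2) plane positions derived
    from the unified WGS84 coordinates; S_inv: inverse innovation covariance.
    """
    cost = np.zeros((len(onboard_xy), len(roadside_xy)))
    for i, p in enumerate(onboard_xy):
        for j, q in enumerate(roadside_xy):
            d = p - q
            cost[i, j] = d @ S_inv @ d  # squared Mahalanobis distance
    rows, cols = linear_sum_assignment(cost)
    # 9.21 is roughly the 99% chi-square quantile for 2 degrees of freedom.
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < gate]

# Matched pairs would then feed the Kalman filter update that refreshes the
# vehicle state and the surrounding-object states described above.
```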
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit it in any way. Any equivalent substitution or modification of the disclosed technical solution and technical content made by a person skilled in the art without departing from the scope of the technical solution remains within the scope of the present invention.

Claims (9)

1. A camera-based vehicle-road cooperation realization method, characterized by comprising the following steps: for a plurality of vehicle-mounted cameras in an intelligent network vehicle-mounted subsystem and a plurality of road-side cameras in an intelligent network road-side subsystem, constructing a unified coordinate system and effectively fusing the vehicle-mounted cameras and the road-side cameras to realize camera-based vehicle-road perception cooperation:
s1: the vehicle-mounted camera and the road side camera realize time synchronization through GNSS time service;
s2: the intelligent network link side subsystem realizes the space alignment of each of the road side cameras through the conversion from pixel coordinates to WGS84 coordinates;
s3: the intelligent network vehicle-mounted subsystem realizes the space alignment of each vehicle-mounted camera through the conversion from the pixel coordinates to the WGS84 coordinates;
s4: the intelligent network link side subsystem pushes the detection result to the intelligent network vehicle-mounted subsystem through V2X;
s5: the intelligent network vehicle-mounted subsystem and the intelligent network road-side subsystem realize information fusion through a unified WGS84 coordinate system, so that vehicle road sensing coordination based on cameras is realized.
2. The camera-based vehicle-road cooperation realization method according to claim 1, wherein in S2 a conversion model is established to realize the conversion from road-side camera pixel coordinates to WGS84 coordinates, comprising the following steps:
s21: the method comprises the steps that an RTK is installed on a test vehicle to obtain real-time WGS84 coordinates of the test vehicle, meanwhile, a driving video of the test vehicle in a visual field area of a road side camera is recorded, the position of the test vehicle is identified in real time through an object detection algorithm, so that pixel point coordinates of the test vehicle are obtained, and more corresponding pixel point coordinate pairs are obtained simultaneously after time synchronization;
s22: and dividing the acquired data into a training set and a testing set, training by utilizing decision tree regression, a random forest regression algorithm or a neural network deep learning algorithm, and finally obtaining a conversion model from the pixel point coordinates of the road side camera to the WGS84 coordinates.
3. The camera-based vehicle-road cooperation realization method according to claim 1, wherein in S3 the vehicle-mounted camera realizes the conversion from pixel coordinates to WGS84 coordinates directly by establishing a homography transformation matrix, the conversion process comprising the following steps:
S31: a camera is mounted on the vehicle body, and several brightly colored paper sheets are placed within the camera's field of view to serve as markers;
S32: the GPS coordinates of the markers are acquired with a handheld high-precision GPS receiver;
S33: any one frame of the camera's detection picture is acquired, and the pixel coordinates of the marker sheets in that frame are obtained;
s34: the corresponding pixel point coordinate pair is utilized, the homography transformation of the sensor coordinate system and the GPS coordinate system is solved through a least square method, the camera coordinates are projected to the GPS coordinates, when the sensor and the GPS observe the P point at the same time, the coordinates of the P point under the pixel coordinate system Our and the GPS coordinate system OXY are defined as (u, v) and (x, y) respectively, the conversion relation between Our and OXY is expressed as a formula I, and the formula I is:
projecting the camera coordinates of the detection target to a GPS coordinate system through matrix transformation, and giving N groups of space corresponding sets (u, v) and (X) i ,Y i ) (i=1, 2,., n), determining from equation one, equation two to equation five, the equation two being expressed as:
the notation three is expressed as:
the formula four is expressed as:
the formula five is expressed as:
defining a formula six and a formula seven, wherein the formula six is expressed as:
the formula seven is expressed as:
the least squares solution of the transformation matrix N yields the equation eight, which is expressed as:
the mapping from the pixel coordinates of the camera to the WGS84 coordinate system is realized through the solved transformation matrix.
4. The camera-based vehicle-road cooperation realization method according to claim 1, wherein in S4 the V2X includes a V2X PC5 air interface and a V2X Uu air interface, and the intelligent network road-side subsystem pushes detection results to the intelligent network vehicle-mounted subsystem through the V2X PC5 air interface or the V2X Uu air interface.
5. The method according to claim 1, wherein in step S5 the vehicle information and the obstacle information around the vehicle are respectively fused with the information perceived by the intelligent network road-side subsystem, the vehicle information being the position information provided by the vehicle-mounted RTK and the surrounding-obstacle information being the perceived target information provided by the vehicle-mounted camera.
6. The camera-based vehicle-road cooperation realization method according to claim 5, wherein information fusion is performed sequentially through target matching and a filtering algorithm, the target matching adopting one or more of global nearest neighbor, joint probabilistic data association and multiple hypothesis tracking; the filtering algorithm adopting one or more of Kalman filtering, extended Kalman filtering, unscented Kalman filtering or particle filtering.
7. A camera-based vehicle-road cooperation realization system for realizing the camera-based vehicle-road cooperation realization method according to any one of claims 1-6, characterized by comprising an intelligent network vehicle-mounted subsystem arranged on the vehicle and an intelligent network road-side subsystem arranged at the road side, wherein the intelligent network vehicle-mounted subsystem comprises an RTK, a vehicle-mounted camera, vehicle-mounted computing equipment and V2X equipment integrated through a switch; and the intelligent network road-side subsystem comprises a road-side camera, road-side computing equipment and V2X equipment integrated through a switch.
8. The camera-based vehicle-road cooperation realization system according to claim 7, wherein the RTK is a BeiDou or GPS differential system, the vehicle-mounted camera comprises a front-view camera, a rear-view camera and a surround-view camera, the vehicle-mounted computing equipment comprises an embedded controller and an industrial personal computer, and the V2X equipment comprises a 5G/4G CPE and a V2X OBU.
9. The camera-based vehicle-road cooperation realization system according to claim 7, wherein the road-side cameras comprise industrial cameras and network cameras, the road-side computing equipment comprises an MEC server, an industrial personal computer and a computer, and the V2X equipment comprises a 5G/4G CPE and a V2X RSU.
CN202211016027.0A 2022-08-24 2022-08-24 Vehicle-road cooperation realization method and realization system based on camera Active CN115440034B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211016027.0A CN115440034B (en) 2022-08-24 2022-08-24 Vehicle-road cooperation realization method and realization system based on camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211016027.0A CN115440034B (en) 2022-08-24 2022-08-24 Vehicle-road cooperation realization method and realization system based on camera

Publications (2)

Publication Number Publication Date
CN115440034A CN115440034A (en) 2022-12-06
CN115440034B (en) 2023-09-01

Family

ID=84245171

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211016027.0A Active CN115440034B (en) 2022-08-24 2022-08-24 Vehicle-road cooperation realization method and realization system based on camera

Country Status (1)

Country Link
CN (1) CN115440034B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010236891A (en) * 2009-03-30 2010-10-21 Nec Corp Position coordinate conversion method between camera coordinate system and world coordinate system, vehicle-mounted apparatus, road side photographing apparatus, and position coordinate conversion system
CN111476999A (en) * 2020-01-17 2020-07-31 武汉理工大学 Intelligent network-connected automobile over-the-horizon sensing system based on vehicle-road multi-sensor cooperation
CN113099529A (en) * 2021-03-29 2021-07-09 Qianxun Spatial Intelligence (Zhejiang) Co., Ltd. Indoor vehicle navigation method, vehicle-mounted terminal, field terminal server and system
JPWO2022009848A1 (en) * 2020-07-07 2022-01-13

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010236891A (en) * 2009-03-30 2010-10-21 Nec Corp Position coordinate conversion method between camera coordinate system and world coordinate system, vehicle-mounted apparatus, road side photographing apparatus, and position coordinate conversion system
CN111476999A (en) * 2020-01-17 2020-07-31 武汉理工大学 Intelligent network-connected automobile over-the-horizon sensing system based on vehicle-road multi-sensor cooperation
JPWO2022009848A1 (en) * 2020-07-07 2022-01-13
CN113099529A (en) * 2021-03-29 2021-07-09 Qianxun Spatial Intelligence (Zhejiang) Co., Ltd. Indoor vehicle navigation method, vehicle-mounted terminal, field terminal server and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A vehicle-road cooperation system based on multi-dimensional spatio-temporal fusion; Li Xiuzhi; Information & Communications, (12), pp. 44-46 *

Also Published As

Publication number Publication date
CN115440034A (en) 2022-12-06

Similar Documents

Publication Publication Date Title
CN111951305B (en) Target detection and motion state estimation method based on vision and laser radar
Krämmer et al. Providentia - A Large-Scale Sensor System for the Assistance of Autonomous Vehicles and Its Evaluation
CN110174093B (en) Positioning method, device, equipment and computer readable storage medium
CN112836737A (en) Roadside combined sensing equipment online calibration method based on vehicle-road data fusion
CN102792316B (en) The mapping of traffic signals and detection
CN101075376B (en) Intelligent video traffic monitoring system based on multi-viewpoints and its method
CN108229366A (en) Deep learning vehicle-installed obstacle detection method based on radar and fusing image data
CN105654732A (en) Road monitoring system and method based on depth image
CN109212542A (en) Calibration method for autonomous vehicle operation
CN108196260A (en) The test method and device of automatic driving vehicle multi-sensor fusion system
CN103176185A (en) Method and system for detecting road barrier
CN103499337B (en) Vehicle-mounted monocular camera distance and height measuring device based on vertical target
CN117836653A (en) Road side millimeter wave radar calibration method based on vehicle-mounted positioning device
CN108594244B (en) Obstacle recognition transfer learning method based on stereoscopic vision and laser radar
CN105976606A (en) Intelligent urban traffic management platform
CN112382085A (en) System and method suitable for intelligent vehicle traffic scene understanding and beyond visual range perception
CN106019264A (en) Binocular vision based UAV (Unmanned Aerial Vehicle) danger vehicle distance identifying system and method
CN200990147Y (en) Intelligent video traffic monitoring system based on multi-view point
JP2018077162A (en) Vehicle position detection device, vehicle position detection method and computer program for vehicle position detection
CN107607939B (en) Optical target tracking and positioning radar device based on real map and image
CN115440034B (en) Vehicle-road cooperation realization method and realization system based on camera
CN117310627A (en) Combined calibration method applied to vehicle-road collaborative road side sensing system
CN208452986U (en) It is a kind of for detecting the detection system of outside vehicle environmental information
CN112634354B (en) Road side sensor-based networking automatic driving risk assessment method and device
Zhang et al. A Roadside Millimeter-Wave Radar Calibration Method Based on Connected Vehicle Technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 200092 Siping Road 1239, Shanghai, Yangpu District

Applicant after: TONGJI University

Address before: 200092 Siping Road 1239, Shanghai, Hongkou District

Applicant before: TONGJI University

GR01 Patent grant