CN116453346B - Vehicle-road cooperation method, device and medium based on radar fusion layout - Google Patents
- Publication number
- CN116453346B CN116453346B CN202310728092.4A CN202310728092A CN116453346B CN 116453346 B CN116453346 B CN 116453346B CN 202310728092 A CN202310728092 A CN 202310728092A CN 116453346 B CN116453346 B CN 116453346B
- Authority
- CN
- China
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Classifications
- G08G1/017 — Traffic control systems for road vehicles; detecting movement of traffic to be counted or controlled; identifying vehicles
- G08G1/04 — Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
- G08G1/096725 — Systems involving transmission of highway information, e.g. weather, speed limits, where the received information generates an automatic action on the vehicle control
- G08G1/096766 — Systems involving transmission of highway information, where the system is characterised by the origin of the information transmission
- H04N7/181 — Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
- Y02T10/40 — Engine management systems (climate change mitigation technologies related to road transport)
Abstract
The application relates to the field of traffic control systems and particularly discloses a vehicle-road cooperation method, device and medium based on a radar-video fusion layout. The method comprises the following steps: acquiring radar data and video data of a target vehicle; extracting features from the radar data and the video data to obtain vehicle features of the target vehicle; extracting the vehicle features of the target vehicle at different time points; and fusing the radar data and the video data according to the vehicle features at those different time points to obtain vehicle track data. By exploiting the principle that the perception confidences of the radar near-end region, the video near-end region and their overlap region complement each other, with the higher-confidence source taken in each region, the perception accuracy is greatly improved; compared with the traditional radar-video fusion deployment mode, the number of devices laid out is reduced.
Description
Technical Field
The application relates to the field of traffic control systems, in particular to a vehicle-road cooperation method, device and medium based on radar fusion layout.
Background
In recent years, automatic driving technology has gradually matured; more and more automobile enterprises conduct real-world road tests of automatic driving and can show the public the current state of the art. As the technology has progressed, more and more industry practitioners have found that single-vehicle intelligence alone cannot cope with the complex and changeable situations on urban roads, while the ultimate goal of automatic driving, being safer than a human driver, demands capabilities beyond the vehicle itself. Achieving this with single-vehicle intelligence alone is very difficult at the present stage. Vehicle-road cooperation technology has therefore emerged to resolve the complex and dangerous situations that single-vehicle intelligence cannot handle on its own.
In general, vehicle-road cooperation is one of the important links in the future deployment of automatic driving. At present, the field-equipment deployment cost of vehicle-road cooperation road sections is high while the perception accuracy for road traffic participants is low; this is one of the problems that must be solved before vehicle-road cooperation scenes can be realized at large scale.
Disclosure of Invention
In order to solve the above problems, the application provides a vehicle-road cooperation method, device and medium based on the radar-video fusion layout, wherein the method comprises the following steps: acquiring radar data and video data of a target vehicle; extracting features from the radar data and the video data to obtain vehicle features of the target vehicle; extracting the vehicle features of the target vehicle at different time points; and fusing the radar data and the video data according to the vehicle features of the target vehicle at different time points to obtain vehicle track data.
In one example, the radar data come from a radar device arranged on a vertical rod, and the video data come from a video acquisition device arranged on a vertical rod; the radar device and the video acquisition device on the same vertical rod face opposite directions; the radar device on a first vertical rod and the video acquisition device on the second vertical rod adjacent to the first vertical rod face each other.
In one example, according to the vehicle characteristics of the target vehicle at different time points, the radar data and the video data are fused, specifically including: determining an intermediate position of the target vehicle according to the radar data and the video data; comparing the intermediate positions corresponding to different time points respectively so as to logically compensate the intermediate positions; and according to the distances between the logically compensated intermediate position and the radar device and the video acquisition device, respectively, fusing the radar data and the video data of the target vehicle to obtain the running data of the target vehicle.
In one example, the comparing the intermediate positions corresponding to different time points to logically compensate the intermediate positions specifically includes: comparing the first intermediate position at the current moment with the second intermediate position at the previous moment to determine the distance difference between the first intermediate position and the second intermediate position; the time interval between the current time and the previous time is a preset time interval; and if the distance difference is larger than the preset distance, performing logic compensation on the first intermediate position.
In one example, the logically compensating the first intermediate position specifically includes: discarding the first intermediate position data at the current moment; generating a simulated running track of the target vehicle from the previous moment to the current moment according to the speed characteristic and the course angle characteristic of the target vehicle at the previous moment; the simulated running track is a straight running track; and determining a third intermediate position of the target vehicle at the current moment according to the simulated running track and the second intermediate position.
In one example, the fusing the radar data and the video data of the target vehicle according to the distances between the logically compensated intermediate position and the radar device and the video acquisition device respectively specifically includes: determining radar data weight and video data weight according to the distances between the intermediate position and the radar device and the video acquisition device respectively; and generating vehicle track data of the target vehicle by fusion according to the radar data weight, the video data weight, the radar data and the video data.
In one example, the vehicle characteristics include at least a target number, lane number, latitude and longitude, altitude, vehicle type, speed, heading angle.
In one example, after fusing the radar data and the video data to obtain vehicle trajectory data, the method further includes: comparing the vehicle track data with the actual running track of the target vehicle to determine an error value of the vehicle track data; and modifying the number of vertical rods, or the number of radar devices and the number of video acquisition devices on the vertical rods according to the error value.
The application also provides a vehicle-road cooperative device based on the radar fusion layout, which comprises:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform: acquiring radar data and video data of a target vehicle; extracting features of the radar data and the video data to obtain vehicle features of the target vehicle; extracting vehicle characteristics of the target vehicle at different time points; and according to the vehicle characteristics of the target vehicle at different time points, fusing the radar data and the video data to obtain vehicle track data.
The present application also provides a non-volatile computer storage medium storing computer executable instructions, characterized in that the computer executable instructions are configured to: acquiring radar data and video data of a target vehicle; extracting features of the radar data and the video data to obtain vehicle features of the target vehicle; extracting vehicle characteristics of the target vehicle at different time points; and according to the vehicle characteristics of the target vehicle at different time points, fusing the radar data and the video data to obtain vehicle track data.
The method provided by the application has the following beneficial effects. By exploiting the principle that the perception confidences of the radar near-end region, the video near-end region and their overlap region complement each other, with the higher-confidence source taken in each region, the perception accuracy is greatly improved; compared with the traditional radar-video fusion deployment mode, the number of devices laid out is reduced. Combining the advantages of radar and video provides the dual capability of continuous track tracking and visualized event perception, and yields an all-weather perception effect that satisfies working conditions such as daytime, night, fog and heavy rain.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a schematic flow chart of a vehicle-road cooperation method based on the radar fusion layout in an embodiment of the application;
FIG. 2 is a schematic diagram of a radar-aware deployment in an embodiment of the present application;
FIG. 3 is a schematic diagram of a video camera perception deployment in an embodiment of the present application;
FIG. 4 is a schematic diagram of a conventional forward radar-video fusion deployment in an embodiment of the present application;
FIG. 5 is a schematic diagram of the novel radar-video fusion deployment in an embodiment of the present application;
fig. 6 is a schematic structural diagram of a vehicle-road cooperative device based on a radar fusion layout in an embodiment of the application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a vehicle-road collaboration method based on a radar fusion layout according to one or more embodiments of the present disclosure. The process may be performed by computing devices in the respective areas, with some input parameters or intermediate results in the process allowing manual intervention adjustments to help improve accuracy.
The method according to the embodiments of the present application may be executed by a terminal device or a server; the present application does not particularly limit this. For ease of understanding and description, the following embodiments are described in detail taking a server as an example.
It should be noted that the server may be a single device, or may be a system formed by a plurality of devices, that is, a distributed server, which is not particularly limited in the present application.
As shown in fig. 1, an embodiment of the present application provides a vehicle-road collaboration method based on radar fusion layout, including:
s101: radar data and video data of a target vehicle are acquired.
First, radar data and video data of the target vehicle need to be acquired on the monitored road section, that is, the road section on which the radar devices and video acquisition devices are arranged.
Traditional field arrangements for vehicle-road cooperation road sections include single-radar perception deployment, single-video-camera perception deployment and forward radar-video fusion deployment.
As shown in fig. 2, single-radar perception deployment adopts a head-on or trailing radar deployment mode and perceives the tracks and events of traffic participants through radar scanning. This sensing mode suffers from low track-perception accuracy and an inability to visualize perceived events, among other defects, so it is rarely adopted in engineering projects.
Single-video-camera perception deployment is shown in fig. 3. Using video cameras for front-end perception solves the event-visualization problem, but the perception effect is poor in severe weather such as heavy fog, night and heavy rain, so the all-weather requirement cannot be met; only a small number of engineering projects adopt it.
Forward radar-video fusion deployment is shown in fig. 4. To make up for the shortcomings of single-radar or single-vision field perception in engineering applications, the radar-video fusion mode is now widely adopted: it solves the problem that events cannot be visualized and maintains a good perception effect at night and in fog and rain, so it is widely used in engineering. However, as the mileage of vehicle-road cooperation test sections increases, the shortcomings of forward radar-video fusion deployment have gradually become apparent: although it performs well at the perception end, it requires a large amount of perception equipment in the field, which greatly increases engineering investment and construction difficulty.
In the method of the present application, as shown in fig. 5, the radar data come from radar devices arranged on vertical rods, and the video data come from video acquisition devices arranged on vertical rods; the radar device and the video acquisition device on the same vertical rod face opposite directions; the radar device on a first vertical rod and the video acquisition device on the second vertical rod adjacent to the first vertical rod face each other.
S102: and extracting features of the radar data and the video data to obtain vehicle features of the target vehicle.
Features are extracted from the radar and video perception data, including the time stamp, target number, lane number, longitude and latitude, elevation, vehicle type, speed, heading angle and other information. The target number here is the number assigned to the vehicle and may be associated with its license plate number.
S103: and extracting the vehicle characteristics of the target vehicle at different time points.
Based on a unique per-vehicle feature of the target vehicle, such as the license plate number, the vehicle features corresponding to different time points are extracted. Here the vehicle features mainly refer to the position data of the target vehicle, including the current longitude and latitude, elevation, speed, heading angle and the like.
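As an illustrative sketch (not part of the patent text), the extracted vehicle features and their grouping per target vehicle over time can be modelled as follows; all names and field choices here are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class VehicleFeature:
    timestamp: float    # seconds
    target_id: str      # target number; may be associated with a license plate
    lane: int           # lane number
    lon: float          # longitude, degrees
    lat: float          # latitude, degrees
    elevation: float    # metres
    vehicle_type: str
    speed: float        # m/s
    heading: float      # heading angle, degrees clockwise from north

def features_by_time(features, target_id):
    """Collect the features of one target vehicle, ordered by timestamp."""
    picked = [f for f in features if f.target_id == target_id]
    return sorted(picked, key=lambda f: f.timestamp)
```

Given a mixed stream of radar and video detections, `features_by_time` yields the per-vehicle time series that the subsequent fusion step consumes.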
S104: and according to the vehicle characteristics of the target vehicle at different time points, fusing the radar data and the video data to obtain vehicle track data.
According to the vehicle feature data of the target vehicle at different time points, the radar data and the video data are fused to obtain vehicle track data, so as to achieve the aim of vehicle-road cooperation.
As shown in FIG. 5, when the target vehicle is between 0 m and 200 m, the radar and the video can perceive and track the target simultaneously. This region is closer to the radar, so the confidence of the radar data is relatively higher; when the edge-side algorithm performs data fusion, it assigns a high weight to the radar data and a low weight to the video data, and the fused accuracy is higher. When the vehicle travels between 200 m and 400 m, the radar and the video can still perceive and track the target simultaneously, but this region is closer to the video device, so the confidence of the video data is relatively higher; the algorithm then assigns a low weight to the radar data and a high weight to the video data, and the fused accuracy is again higher.
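The distance-dependent weighting described above can be sketched as follows. The patent does not fix a concrete weighting function, so the inverse-distance scheme below is an assumption for illustration: the sensor the vehicle is nearer to receives the larger weight.

```python
def fusion_weights(dist_to_radar, dist_to_camera):
    """Assign a higher weight to the nearer (higher-confidence) sensor.

    A simple inverse-distance scheme: the radar weight is proportional to the
    distance to the camera, and vice versa, so the weights sum to 1.
    """
    total = dist_to_radar + dist_to_camera
    if total == 0:
        return 0.5, 0.5
    w_radar = dist_to_camera / total   # nearer to radar -> larger radar weight
    w_video = dist_to_radar / total
    return w_radar, w_video

def fuse_position(radar_pos, video_pos, w_radar, w_video):
    """Weighted average of the two sensors' position estimates."""
    return tuple(w_radar * r + w_video * v
                 for r, v in zip(radar_pos, video_pos))
```

For a vehicle 50 m from the radar and 350 m from the camera, this yields a radar weight of 0.875, matching the intuition that the radar dominates in its near-end region.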
In one embodiment, when data fusion is performed, the intermediate position of the target vehicle is first determined according to the radar data and the video data; the intermediate positions corresponding to different time points are then compared so as to logically compensate them; finally, the radar data and the video data of the target vehicle are fused according to the distances between the logically compensated intermediate position and the radar device and the video acquisition device respectively, to obtain the running data of the target vehicle.
Further, when comparing the intermediate positions at different time points, the first intermediate position at the current time and the second intermediate position at the previous time need to be compared to determine the distance difference between the first intermediate position and the second intermediate position, where the time interval between the current time and the previous time is a preset time interval, such as 100ms. If the distance difference is greater than a preset distance, such as greater than 5m, the first intermediate position is logically compensated.
When logic compensation is performed, the first intermediate position data at the current moment are discarded, and a simulated running track of the target vehicle from the previous moment to the current moment is generated according to the speed feature and the heading angle feature of the target vehicle at the previous moment; the simulated running track is a straight track. A third intermediate position of the target vehicle at the current moment is then determined from the simulated running track and the second intermediate position, and the first intermediate position is replaced with the third intermediate position.
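The outlier check and straight-track dead reckoning described above can be sketched as follows. The 100 ms interval and 5 m threshold defaults mirror the examples in the text; the flat local x/y frame (x east, y north, in metres) and everything else are illustrative assumptions:

```python
import math

def compensate(prev_pos, speed, heading_deg, dt):
    """Dead-reckon a straight-line position from the previous fix.

    Works in a flat local frame: x is east, y is north, in metres; the
    heading angle is measured in degrees clockwise from north.
    """
    heading = math.radians(heading_deg)
    dx = speed * dt * math.sin(heading)   # east displacement
    dy = speed * dt * math.cos(heading)   # north displacement
    return prev_pos[0] + dx, prev_pos[1] + dy

def maybe_compensate(curr_pos, prev_pos, speed, heading_deg,
                     dt=0.1, max_jump=5.0):
    """Discard an implausible fix and replace it with the simulated position.

    If the position jumped more than max_jump metres within dt seconds, the
    current fix is abandoned and the straight-track extrapolation is used.
    """
    jump = math.dist(curr_pos, prev_pos)
    if jump > max_jump:
        return compensate(prev_pos, speed, heading_deg, dt)
    return curr_pos
```

A 100 m jump within 100 ms is rejected and replaced by the extrapolated point roughly 2 m ahead (at 20 m/s), while a plausible 1 m step passes through unchanged.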
Further, after logic compensation, or when the distance difference is smaller than the preset distance (that is, when no logic compensation is required), a main-region judgment is made: it is determined whether the current position of the target vehicle in the monitored road section is closer to the radar device or to the video acquisition device. The radar data weight and the video data weight are determined from the distances between the intermediate position and the radar device and the video acquisition device respectively, and the vehicle track data of the target vehicle are then generated by fusion from the radar data weight, the video data weight, the radar data and the video data. The weight here carries the meaning of a ratio; the first intermediate position and the second intermediate position are themselves fused track data computed with the radar weight and the video weight both at 50%.
In one embodiment, to determine whether the number of devices is sufficient, the vehicle track data may be compared with the actual running track of the target vehicle to determine an error value of the vehicle track data; the number of vertical rods, or the number of radar devices and video acquisition devices on the vertical rods, may then be modified according to the error value.
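A minimal sketch of this error evaluation and layout adjustment follows; the error threshold and the one-rod-at-a-time adjustment rule are assumptions for illustration, as the patent does not specify them:

```python
import math

def track_error(fused_track, ground_truth):
    """Mean point-to-point distance between fused and reference tracks."""
    dists = [math.dist(a, b) for a, b in zip(fused_track, ground_truth)]
    return sum(dists) / len(dists)

def adjust_device_count(error, n_rods, threshold=1.0):
    """Illustrative rule: add one vertical rod (one radar device and one
    video acquisition device) when the track error exceeds the threshold."""
    if error > threshold:
        return n_rods + 1
    return n_rods
```

In practice the adjustment would be validated by re-running the comparison on the modified layout until the error value falls within the acceptance threshold.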
As shown in fig. 6, the embodiment of the present application further provides a vehicle-road collaboration device based on a radar fusion layout, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring radar data and video data of a target vehicle; extracting features of the radar data and the video data to obtain vehicle features of the target vehicle; extracting vehicle characteristics of the target vehicle at different time points; and according to the vehicle characteristics of the target vehicle at different time points, fusing the radar data and the video data to obtain vehicle track data.
The embodiment of the application also provides a nonvolatile computer storage medium, which stores computer executable instructions, wherein the computer executable instructions are configured to:
acquiring radar data and video data of a target vehicle; extracting features of the radar data and the video data to obtain vehicle features of the target vehicle; extracting vehicle characteristics of the target vehicle at different time points; and according to the vehicle characteristics of the target vehicle at different time points, fusing the radar data and the video data to obtain vehicle track data.
The embodiments of the present application are described in a progressive manner; for identical or similar parts, the embodiments may be referred to each other, and each embodiment focuses on its differences from the others. In particular, the device and medium embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments.
The devices and media provided in the embodiments of the present application are in one-to-one correspondence with the methods, so that the devices and media also have similar beneficial technical effects as the corresponding methods, and since the beneficial technical effects of the methods have been described in detail above, the beneficial technical effects of the devices and media are not repeated here.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article or apparatus that comprises the element.
The foregoing descriptions are merely examples of the present application and are not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included in the scope of the claims of the present application.
Claims (6)
1. A vehicle-road cooperation method based on a radar fusion layout, characterized by comprising the following steps:
acquiring radar data and video data of a target vehicle; the radar data come from radar devices arranged on the vertical rods, and the video data come from video acquisition devices arranged on the vertical rods;
the radar device and the video acquisition device on the same vertical rod are arranged facing in opposite directions;
the radar device on a first vertical rod and the video acquisition device on a second vertical rod adjacent to the first vertical rod are arranged facing each other;
extracting features of the radar data and the video data to obtain vehicle features of the target vehicle; the vehicle features at least comprise a target number, a lane number, longitude and latitude, elevation, vehicle type, speed and course angle;
extracting vehicle characteristics of the target vehicle at different time points based on the target number;
according to the vehicle characteristics of the target vehicle at different time points, fusing the radar data and the video data to obtain vehicle track data;
according to the vehicle characteristics of the target vehicle at different time points, the radar data and the video data are fused, and the method specifically comprises the following steps:
determining an intermediate position of the target vehicle according to the radar data and the video data;
comparing the intermediate positions corresponding to different time points respectively so as to logically compensate the intermediate positions;
according to the distances between the logically compensated intermediate position and the radar device and the video acquisition device, respectively, fusing the radar data and the video data of the target vehicle to obtain the running data of the target vehicle;
the method for fusing the radar data and the video data of the target vehicle according to the distances between the logically compensated intermediate position and the radar device and the video acquisition device respectively specifically comprises the following steps:
determining radar data weight and video data weight according to the distances between the intermediate position and the radar device and the video acquisition device respectively;
and generating vehicle track data of the target vehicle by fusion according to the radar data weight, the video data weight, the radar data and the video data.
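The claim specifies that the radar and video weights depend on the target's distance to each sensor, but discloses no concrete formula. A minimal sketch, assuming inverse-distance weighting and entirely hypothetical names (`fuse_positions` and its parameters are not from the patent), could look like:

```python
# Hypothetical sketch of the distance-weighted fusion in claim 1.
# The inverse-distance weighting scheme is an assumption: each sensor's
# weight grows as the target gets closer to it, so the nearer (and
# presumably more accurate) sensor dominates the fused estimate.

def fuse_positions(radar_pos, video_pos, dist_to_radar, dist_to_camera):
    """Fuse radar and video position estimates of the same target.

    Returns the fused position and the normalized (radar, video) weights.
    """
    w_radar = 1.0 / max(dist_to_radar, 1e-6)   # guard against div-by-zero
    w_video = 1.0 / max(dist_to_camera, 1e-6)
    total = w_radar + w_video
    w_radar, w_video = w_radar / total, w_video / total
    fused = tuple(w_radar * r + w_video * v
                  for r, v in zip(radar_pos, video_pos))
    return fused, (w_radar, w_video)

# Example: target 20 m from the radar, 80 m from the camera,
# so the radar estimate receives weight 0.8 and the video 0.2.
pos, (wr, wv) = fuse_positions((100.0, 3.5), (102.0, 3.7), 20.0, 80.0)
```

Applying the same weights to the other extracted features (speed, heading angle) would extend this per-point fusion to full track generation.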
2. The method according to claim 1, wherein the comparing the intermediate positions corresponding to different time points respectively to logically compensate the intermediate positions specifically includes:
comparing the first intermediate position at the current moment with the second intermediate position at the previous moment to determine the distance difference between the first intermediate position and the second intermediate position; the time interval between the current time and the previous time is a preset time interval;
and if the distance difference is larger than the preset distance, performing logic compensation on the first intermediate position.
3. The method according to claim 2, wherein said logically compensating said first intermediate position comprises:
discarding the first intermediate position data at the current moment;
generating a simulated running track of the target vehicle from the previous moment to the current moment according to the speed characteristic and the course angle characteristic of the target vehicle at the previous moment; the simulated running track is a straight running track;
and determining a third intermediate position of the target vehicle at the current moment according to the simulated running track and the second intermediate position.
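The logical compensation of claims 2 and 3 can be read as outlier rejection followed by straight-line dead reckoning. A minimal sketch, in which the function name, the jump threshold, and the extrapolation step are all assumptions rather than disclosed details, might be:

```python
import math

# Hypothetical sketch of the logical compensation in claims 2-3.
# If the observed jump from the previous intermediate position exceeds
# a preset distance, the current observation is discarded (claim 3) and
# a straight-line track is extrapolated from the previous moment's
# speed and heading-angle features.

def compensate(prev_pos, curr_pos, speed, heading_deg, dt, max_jump):
    """Return a plausible current position for the target.

    prev_pos / curr_pos: (x, y) intermediate positions at the previous
    and current moments; dt is the preset time interval between them.
    """
    jump = math.dist(prev_pos, curr_pos)
    if jump <= max_jump:
        return curr_pos  # plausible measurement: keep it
    # Outlier: project prev_pos forward along the previous heading.
    heading = math.radians(heading_deg)
    return (prev_pos[0] + speed * dt * math.cos(heading),
            prev_pos[1] + speed * dt * math.sin(heading))
```

The straight-track assumption matches the claim's "simulated running track is a straight running track"; a curved-road deployment would presumably need a richer motion model.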
4. The method of claim 1, wherein after fusing the radar data and the video data to obtain vehicle trajectory data, the method further comprises:
comparing the vehicle track data with the actual running track of the target vehicle to determine an error value of the vehicle track data;
and modifying the number of vertical rods, or the number of radar devices and the number of video acquisition devices on the vertical rods according to the error value.
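Claim 4's error check could be sketched as a mean point-to-point comparison between the fused track and a ground-truth track, flagging the layout for densification when the error exceeds a tolerance. The function names and the tolerance value are assumptions; the patent does not specify the error metric:

```python
import math

# Hypothetical sketch of the layout check in claim 4: compare fused
# track points against matched ground-truth points and flag the layout
# (pole count, or devices per pole) for adjustment when the mean
# positional error is too large.

def layout_error(fused_track, actual_track):
    """Mean point-to-point distance between fused and actual tracks."""
    return sum(math.dist(f, a)
               for f, a in zip(fused_track, actual_track)) / len(actual_track)

def needs_more_sensors(fused_track, actual_track, tolerance=0.5):
    """True if the sensor layout should be densified (assumed 0.5 m tolerance)."""
    return layout_error(fused_track, actual_track) > tolerance
```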
5. A vehicle-road cooperation device based on a radar fusion layout, characterized by comprising:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform:
acquiring radar data and video data of a target vehicle; the radar data come from radar devices arranged on the vertical rods, and the video data come from video acquisition devices arranged on the vertical rods;
the radar device and the video acquisition device on the same vertical rod are arranged facing in opposite directions;
the radar device on a first vertical rod and the video acquisition device on a second vertical rod adjacent to the first vertical rod are arranged facing each other;
extracting features of the radar data and the video data to obtain vehicle features of the target vehicle; the vehicle features at least comprise a target number, a lane number, longitude and latitude, elevation, vehicle type, speed and course angle;
extracting vehicle characteristics of the target vehicle at different time points based on the target number;
according to the vehicle characteristics of the target vehicle at different time points, fusing the radar data and the video data to obtain vehicle track data;
according to the vehicle characteristics of the target vehicle at different time points, the radar data and the video data are fused, and the method specifically comprises the following steps:
determining an intermediate position of the target vehicle according to the radar data and the video data;
comparing the intermediate positions corresponding to different time points respectively so as to logically compensate the intermediate positions;
according to the distances between the logically compensated intermediate position and the radar device and the video acquisition device, respectively, fusing the radar data and the video data of the target vehicle to obtain the running data of the target vehicle;
the method for fusing the radar data and the video data of the target vehicle according to the distances between the logically compensated intermediate position and the radar device and the video acquisition device respectively specifically comprises the following steps:
determining radar data weight and video data weight according to the distances between the intermediate position and the radar device and the video acquisition device respectively;
and generating vehicle track data of the target vehicle by fusion according to the radar data weight, the video data weight, the radar data and the video data.
6. A non-transitory computer storage medium storing computer-executable instructions, the computer-executable instructions configured to:
acquiring radar data and video data of a target vehicle; the radar data come from radar devices arranged on the vertical rods, and the video data come from video acquisition devices arranged on the vertical rods;
the radar device and the video acquisition device on the same vertical rod are arranged facing in opposite directions;
the radar device on a first vertical rod and the video acquisition device on a second vertical rod adjacent to the first vertical rod are arranged facing each other;
extracting features of the radar data and the video data to obtain vehicle features of the target vehicle; the vehicle features at least comprise a target number, a lane number, longitude and latitude, elevation, vehicle type, speed and course angle;
extracting vehicle characteristics of the target vehicle at different time points based on the target number;
according to the vehicle characteristics of the target vehicle at different time points, fusing the radar data and the video data to obtain vehicle track data;
according to the vehicle characteristics of the target vehicle at different time points, the radar data and the video data are fused, and the method specifically comprises the following steps:
determining an intermediate position of the target vehicle according to the radar data and the video data;
comparing the intermediate positions corresponding to different time points respectively so as to logically compensate the intermediate positions;
according to the distances between the logically compensated intermediate position and the radar device and the video acquisition device, respectively, fusing the radar data and the video data of the target vehicle to obtain the running data of the target vehicle;
the method for fusing the radar data and the video data of the target vehicle according to the distances between the logically compensated intermediate position and the radar device and the video acquisition device respectively specifically comprises the following steps:
determining radar data weight and video data weight according to the distances between the intermediate position and the radar device and the video acquisition device respectively;
and generating vehicle track data of the target vehicle by fusion according to the radar data weight, the video data weight, the radar data and the video data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310728092.4A CN116453346B (en) | 2023-06-20 | 2023-06-20 | Vehicle-road cooperation method, device and medium based on radar fusion layout |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116453346A CN116453346A (en) | 2023-07-18 |
CN116453346B true CN116453346B (en) | 2023-09-19 |
Family
ID=87122379
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310728092.4A Active CN116453346B (en) | 2023-06-20 | 2023-06-20 | Vehicle-road cooperation method, device and medium based on radar fusion layout |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116453346B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111538052A (en) * | 2020-04-30 | 2020-08-14 | 西安大唐电信有限公司 | Beidou/GPS track optimization method based on OBD |
CN113420805A (en) * | 2021-06-21 | 2021-09-21 | 车路通科技(成都)有限公司 | Dynamic track image fusion method, device, equipment and medium for video and radar |
CN114394088A (en) * | 2021-12-28 | 2022-04-26 | 北京易航远智科技有限公司 | Parking tracking track generation method and device, electronic equipment and storage medium |
CN114475593A (en) * | 2022-01-18 | 2022-05-13 | 上汽通用五菱汽车股份有限公司 | Travel track prediction method, vehicle, and computer-readable storage medium |
WO2022156276A1 (en) * | 2021-01-22 | 2022-07-28 | 华为技术有限公司 | Target detection method and apparatus |
CN114973681A (en) * | 2022-07-28 | 2022-08-30 | 山东高速信息集团有限公司 | In-transit vehicle sensing method and device |
CN115683124A (en) * | 2021-07-26 | 2023-02-03 | 北京四维图新科技股份有限公司 | Method for determining a driving trajectory |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11155258B2 (en) * | 2019-03-25 | 2021-10-26 | GM Global Technology Operations LLC | System and method for radar cross traffic tracking and maneuver risk estimation |
- 2023-06-20 CN CN202310728092.4A patent/CN116453346B/en active Active
Non-Patent Citations (1)
Title |
---|
Research on Information Fusion Algorithms for Radar and Vision Sensors in Advanced Driver Assistance; Yang Xin; Liu Wei; Lin Hui; Automobile Applied Technology (Issue 01); full text *
Also Published As
Publication number | Publication date |
---|---|
CN116453346A (en) | 2023-07-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2979261B1 (en) | Backend for driver assistance systems | |
DE102014107488B4 (en) | DEVICE FOR DETERMINING THE ENVIRONMENT | |
WO2022206942A1 (en) | Laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field | |
DE102019131384A1 (en) | ROAD COVERING CHARACTERIZATION USING POINT OBSERVATION OF NEIGHBORING VEHICLES | |
DE102008030555B4 (en) | Device for processing stereo images | |
US20200182631A1 (en) | Method for detecting map error information, apparatus, device, vehicle and storage medium | |
DE102017126877A1 (en) | Automated copilot control for autonomous vehicles | |
DE102016123887A1 (en) | VIRTUAL SENSOR DATA GENERATION FOR WHEEL STOP DETECTION | |
DE112018000479T5 (en) | Event prediction system, event prediction method, recording medium and moving body | |
KR20210080459A (en) | Lane detection method, apparatus, electronic device and readable storage medium | |
CN112116031B (en) | Target fusion method, system, vehicle and storage medium based on road side equipment | |
DE102013009856B4 (en) | Determining the position of a stationary traffic object using a central server arrangement | |
DE102021103149A1 (en) | METHOD AND DEVICE FOR DETERMINING THE OPTIMAL CROSSING LANE IN AN ASSISTED DRIVING SYSTEM | |
WO2023197408A1 (en) | Method and apparatus for determining vehicle speed control model training sample | |
DE112019001542T5 (en) | POSITION ESTIMATE DEVICE | |
CN103810854A (en) | Intelligent traffic algorithm evaluation method based on manual calibration | |
DE102019003963A1 (en) | Method for determining a driving strategy of a vehicle, in particular a commercial vehicle | |
CN113643431A (en) | System and method for iterative optimization of visual algorithm | |
CN115187946A (en) | Multi-scale intelligent sensing method for fusing underground obstacle point cloud and image data | |
CN116453346B (en) | Vehicle-road cooperation method, device and medium based on radar fusion layout | |
DE102009041586B4 (en) | Method for increasing the accuracy of sensor-detected position data | |
CN116740944B (en) | Driving safety early warning method, device and storage medium in road tunnel | |
CN116597690B (en) | Highway test scene generation method, equipment and medium for intelligent network-connected automobile | |
DE102013212010A1 (en) | A method and apparatus for assisting a throat pass for a vehicle, a method for supporting a pit pass for a follower vehicle, and methods for managing survey information to assist bottleneck vehicle transits | |
EP3374242A1 (en) | Method and device for analysing a driving manner of a driver of a vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||