CN115359652A - Automatic driving video analysis task scheduling method and medium based on vehicle-road cooperation - Google Patents

Automatic driving video analysis task scheduling method and medium based on vehicle-road cooperation

Info

Publication number
CN115359652A
CN115359652A
Authority
CN
China
Prior art keywords
vehicle
road
target
video
video analysis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210803950.2A
Other languages
Chinese (zh)
Other versions
CN115359652B (en
Inventor
杨鹏
林洁
喻莉
Current Assignee
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN202210803950.2A priority Critical patent/CN115359652B/en
Publication of CN115359652A publication Critical patent/CN115359652A/en
Application granted granted Critical
Publication of CN115359652B publication Critical patent/CN115359652B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0137 Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30 Services specially adapted for particular environments, situations or purposes
    • H04W4/40 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • H04W4/44 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P], for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30 Services specially adapted for particular environments, situations or purposes
    • H04W4/40 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • H04W4/46 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P], for vehicle-to-vehicle communication [V2V]

Abstract

The invention discloses an automatic driving video analysis task scheduling method and medium based on vehicle-road cooperation, belonging to the field of automatic driving. The method comprises the following steps: each head vehicle in a vehicle formation captures the road environment ahead to generate a road video; video analysis tasks are determined and distributed to the corresponding head vehicles; each head vehicle calculates the computing resources required by its video analysis task, and when the required computing resources exceed its remaining computing resources, the task and the road video are forwarded either to another head vehicle whose remaining computing resources exceed the requirement (choosing the one with the most remaining resources) or, failing that, to the roadside unit for analysis; the analysis result is transmitted to every vehicle in the formation, so that each vehicle controls its driving state according to the result. The method relieves the pressure of on-vehicle computation, improves the formation's sensitivity in perceiving the surrounding environment, and helps guarantee driving safety.

Description

Automatic driving video analysis task scheduling method and medium based on vehicle-road cooperation
Technical Field
The invention belongs to the field of automatic driving, and particularly relates to an automatic driving video analysis task scheduling method and medium based on vehicle-road cooperation.
Background
Autonomous vehicles need to sense the surrounding environment sensitively and accurately, and cameras, as low-cost sensors, are widely deployed on autonomous vehicles. Analyzing the video frames the camera captures of the environment in time helps the vehicle make accurate real-time decisions.
On-vehicle computing resources are limited while video frames arrive in large numbers, so the timeliness and accuracy of the results cannot be guaranteed for every arriving video analysis task. In addition, because surrounding targets appear at random, the relative position of the current vehicle and a target yields different shooting angles, so the collected video frames may not provide complete enough information to support accurate analysis. In this context, properly selecting video frames and reasonably scheduling video analysis tasks is of great significance.
Disclosure of Invention
Aiming at the defects and improvement requirements of the prior art, the present invention provides an automatic driving video analysis task scheduling method and medium based on vehicle-road cooperation, with the aim of giving an autonomous vehicle the ability to sense its environment quickly and accurately and thereby guaranteeing driving safety.
In order to achieve the above object, according to one aspect of the present invention, there is provided an automatic driving video analysis task scheduling method based on vehicle-road cooperation, comprising: S1, head vehicles in a vehicle formation capture the road environment ahead to generate corresponding road videos; S2, video analysis tasks are determined according to the motion state of the target in the road environment ahead, and distributed to the corresponding head vehicles according to the positions of the head vehicles relative to the target; S3, a head vehicle calculates the computing resources required by an arrived video analysis task according to its delay requirement and accuracy requirement; when the required computing resources exceed its remaining computing resources, it judges whether another head vehicle whose remaining computing resources exceed the required computing resources exists; if so, the video analysis task and the road video are forwarded to the other head vehicle with the most remaining computing resources for analysis, and otherwise they are forwarded to the roadside unit for analysis; S4, the analysis result is transmitted to each vehicle in the vehicle formation, so that each vehicle controls its driving state according to the result.
Still further, S1 is preceded by: dividing vehicles that have the same driving direction and whose positions lie within the coverage of a roadside unit into the same vehicle formation corresponding to that roadside unit, wherein the vehicles in the formation satisfy:

$d_{m,\phi_m(t)}(t) \le \sqrt{\bar{r}^2 + H^2}$

where $d_{m,\phi_m(t)}(t)$ is the distance from vehicle $m$ to its nearest roadside unit $\phi_m(t)$, $H$ is the height of the roadside unit, and $\bar{r}$ is the average coverage radius of the roadside unit.
Still further, S1 includes: the head vehicle captures the light signal of the road environment ahead and converts it into the corresponding road video using the vehicle-mounted camera.
Furthermore, when the motion state of the target in the road environment ahead is a moving state, the video analysis task comprises a target type identification task and a target trajectory tracking task; in S2, the target type identification task is assigned to the head vehicle closest to the target, and the target trajectory tracking task is assigned to the head vehicle in the same lane as the target.
Furthermore, when the motion state of the target in the road environment ahead is a static state, the video analysis task comprises a target type identification task; in S2, the target type identification task is assigned to the head vehicle closest to the target.
Still further, the method further comprises: the roadside unit determines the head vehicle closest to the target and the head vehicle in the same lane as the target according to the received distances and azimuths.
Still further, the required computing resources satisfy the following conditions:

$L_q(B_q, C_q) \le L_q^{\max}, \qquad A_q(B_q, C_q) \ge A_q^{\min}$

where $B_q$ and $C_q$ are, respectively, the bandwidth resources and computing resources required by video analysis task $q$, $L_q(B_q, C_q)$ is the delay produced under required bandwidth $B_q$ and required computing resources $C_q$, $L_q^{\max}$ is the highest tolerable delay of task $q$, $A_q(B_q, C_q)$ is the accuracy produced under $B_q$ and $C_q$, and $A_q^{\min}$ is the lowest acceptable accuracy of task $q$.
Further, when the analysis is performed on a head vehicle, in S4 the head vehicle transmits the result to each vehicle in the vehicle formation via V2V communication; when the analysis is performed at the roadside unit, in S4 the roadside unit transmits the result to each vehicle in the vehicle formation via V2R communication.
According to another aspect of the present invention, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the automated driving video analysis task scheduling method based on vehicle-road coordination as described above.
Generally, the above technical solution conceived by the present invention can obtain the following beneficial effects: video analysis tasks are determined according to the motion state of the target in the road environment ahead, and the optimal road video is selected for each task, which reduces the number of subsequent road video analysis tasks and improves analysis accuracy; a suitable scheduling strategy for the road video analysis tasks is then made in a vehicle-road cooperative manner, which further relieves the pressure of on-vehicle computation, improves the formation's sensitivity in perceiving the surrounding environment, and helps guarantee driving safety.
Drawings
Fig. 1 is a flowchart of an automatic driving video analysis task scheduling method based on vehicle-road cooperation according to an embodiment of the present invention;
fig. 2 is a schematic view of an application scenario provided in the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
In the present application, the terms "first," "second," and the like (if any) in the description and the drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Fig. 1 is a flowchart of an automatic driving video analysis task scheduling method based on vehicle-road coordination according to an embodiment of the present invention. Referring to fig. 1 and fig. 2, a detailed description is given of the method for scheduling an automatic driving video analysis task based on vehicle-road coordination in the present embodiment, where the method includes operations S1 to S4.
In operation S1, head vehicles in a vehicle formation capture a road environment ahead to generate a corresponding road video.
According to the embodiment of the present invention, before performing operation S1, the method further includes: vehicle formation is determined based on the coverage, vehicle location and direction of travel of the roadside units.
In this embodiment, a vehicle formation is constructed based on the geographic locations and driving directions of the vehicles; vehicles in the same formation all drive in the same direction, and the length of the formation cannot exceed the coverage of a single roadside unit (RSU). Let $M = \{1, 2, \ldots, M\}$ denote the set of all vehicles in the vehicle formation, $m \in M$ a vehicle in the formation, and $\phi_m(t)$ the roadside unit closest to vehicle $m$ at time $t$. Specifically, in sub-operation S11, vehicles that have the same driving direction and whose positions lie within the coverage of a roadside unit are divided into the same vehicle formation corresponding to that roadside unit, and the vehicles in the formation satisfy:

$d_{m,\phi_m(t)}(t) \le \sqrt{\bar{r}^2 + H^2}$

where $d_{m,\phi_m(t)}(t)$ is the distance from vehicle $m$ to its nearest roadside unit $\phi_m(t)$, $H$ is the height of the roadside unit, and $\bar{r}$ is the average coverage radius of the roadside units.
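As an illustrative sketch (not part of the patent), the formation-membership condition can be checked as follows; the coordinate representation, headings, and the helper name `in_formation` are assumptions introduced here:

```python
import math

def in_formation(vehicle_pos, heading, rsu_pos, rsu_height, avg_radius, formation_heading):
    """Check the formation-membership condition: same driving direction and
    distance to the nearest RSU no greater than sqrt(r_bar^2 + H^2)."""
    if heading != formation_heading:
        return False  # all vehicles in a formation drive in the same direction
    dx = vehicle_pos[0] - rsu_pos[0]
    dy = vehicle_pos[1] - rsu_pos[1]
    # 3-D distance from the vehicle to the RSU antenna mounted at height H
    dist = math.sqrt(dx * dx + dy * dy + rsu_height ** 2)
    return dist <= math.sqrt(avg_radius ** 2 + rsu_height ** 2)
```

For example, with an RSU of height 5 m and average coverage radius 100 m, a same-direction vehicle 10 m away satisfies the condition while one 150 m away does not.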
The number of head vehicles in a vehicle formation equals the number of same-direction lanes on the road, with one head vehicle in each lane. Taking the two lanes in the application scenario shown in fig. 2 as an example, one vehicle formation includes two head vehicles.
According to an embodiment of the present invention, operation S1 specifically includes: the head vehicle captures the light signal of the road environment in front by using the vehicle-mounted camera, and converts the light signal into a corresponding road video by using a digital signal processing unit in the vehicle-mounted camera.
And operation S2, determining video analysis tasks according to the motion state of the target in the front road environment, and distributing the video analysis tasks to corresponding head vehicles according to the positions of the head vehicles relative to the target.
Since a multi-lane vehicle formation is considered in this embodiment, each lane has a head vehicle responsible for sensing the whole environment. Because different head vehicles are at different distances from the target, the clarity of the video shot by each vehicle-mounted camera also differs greatly; this embodiment therefore designs a road video screening scheme so that scheduling uses the optimal road video.
Since the vehicles in the formation are relatively stationary, the relative positional relationship between the head vehicles remains unchanged. If the target ahead is static, such as a traffic light or a building, only its type needs to be identified; if the target ahead is moving, such as a pedestrian or a vehicle at an intersection, then besides identifying its type, motion estimation and trajectory tracking must also be performed on it.
According to the embodiment of the invention, if the motion state of the target in the road video is a moving state, the video analysis task comprises a target type identification task and a target track tracking task. In operation S2, the target type recognition task is assigned to the head vehicle closest to the target, and the target trajectory tracking task is assigned to the head vehicle in the same lane as the target. That is, a road video generated by a head vehicle closest to the target and a road video generated by a head vehicle in the same lane as the target are used as the available road video, the road video generated by the head vehicle closest to the target is used for target type recognition, and the road video generated by the head vehicle in the same lane as the target is used for target motion estimation and trajectory tracking.
According to the embodiment of the invention, if the motion state of the target in the road video is a static state, the video analysis task comprises a target type identification task. The target kind recognition task is assigned to the lead vehicle closest to the target in operation S2. That is, the road video generated by the head vehicle closest to the target is used for target category recognition as the available road video.
In the embodiment of the invention, each head vehicle detects its distance and azimuth relative to the target and transmits them to the roadside unit, and the roadside unit determines the head vehicle closest to the target and the head vehicle in the same lane as the target according to the received distances and azimuths. Each head vehicle measures its distance and azimuth with respect to the target, for example by means of a distance sensor and an azimuth sensor, and uploads them to the roadside unit.
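The roadside unit's selection of the closest head vehicle and the same-lane head vehicle can be sketched as follows; the report format (distance, lane) and the function name are illustrative assumptions, not the patent's specification:

```python
def assign_tasks(reports, target_lane):
    """Assign video analysis tasks among head vehicles.

    reports maps a head-vehicle id to (distance_to_target, lane); distances
    and azimuth-derived lanes come from each vehicle's sensors.
    Returns which vehicle handles type identification and which handles tracking.
    """
    # Target type identification goes to the head vehicle closest to the target.
    recognizer = min(reports, key=lambda v: reports[v][0])
    # Trajectory tracking goes to the head vehicle in the target's lane (if any).
    tracker = next((v for v, (_, lane) in reports.items() if lane == target_lane), None)
    return {"recognition": recognizer, "tracking": tracker}
```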
In operation S3, the head vehicle calculates the computing resources required by the arrived video analysis task according to its delay requirement and accuracy requirement. When the required computing resources exceed its remaining computing resources, it judges whether another head vehicle whose remaining computing resources exceed the required computing resources exists; if so, the video analysis task and the road video are forwarded to the other head vehicle with the most remaining computing resources for analysis, and otherwise they are forwarded to the roadside unit for analysis.
According to an embodiment of the invention, operation S3 comprises sub-operations S31 to S33.
In sub-operation S31, the head vehicle that received the video analysis task calculates the calculation resources required for the video analysis task.
The required computing resources satisfy the following conditions:

$L_q(B_q, C_q) \le L_q^{\max}, \qquad A_q(B_q, C_q) \ge A_q^{\min}$

where $B_q$ and $C_q$ are, respectively, the bandwidth resources and computing resources required by video analysis task $q$, $L_q(B_q, C_q)$ is the delay produced under required bandwidth $B_q$ and required computing resources $C_q$, $L_q^{\max}$ is the highest tolerable delay of task $q$, $A_q(B_q, C_q)$ is the accuracy produced under $B_q$ and $C_q$, and $A_q^{\min}$ is the lowest acceptable accuracy of task $q$.
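A minimal sketch of finding the smallest computing resource meeting both constraints. The patent does not give concrete forms for $L_q$ and $A_q$, so `delay_fn` and `acc_fn` below are stand-in models supplied by the caller, and the linear-scan search is an illustrative assumption:

```python
def min_required_compute(delay_fn, acc_fn, bandwidth, max_delay, min_acc,
                         step=1.0, cap=1000.0):
    """Smallest computing resource C_q such that
    delay_fn(B_q, C_q) <= max_delay and acc_fn(B_q, C_q) >= min_acc.
    Found by a simple linear scan; returns None if infeasible within cap.
    """
    c = step
    while c <= cap:
        if delay_fn(bandwidth, c) <= max_delay and acc_fn(bandwidth, c) >= min_acc:
            return c  # first (smallest) feasible allocation
        c += step
    return None
```

With toy monotone models, e.g. delay $= 100/(B_q C_q)$ and accuracy $= 1 - 1/(B_q C_q)$, the scan returns the smallest allocation satisfying both bounds.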
In sub-operation S32, the head vehicle that received the video analysis task determines a video analysis task scheduling location according to the calculated required computing resources and the remaining computing resources of each head vehicle.
Taking as an example a head vehicle A that is closest to the target and receives the target type identification task: head vehicle A judges whether its remaining computing resources exceed the computing resources required by the task; if so, head vehicle A itself is the final scheduling place for the task. If not, head vehicle A judges whether another head vehicle whose remaining computing resources exceed the required computing resources exists; if so, the head vehicle with the most remaining computing resources (say, head vehicle C) serves as the final scheduling place, and otherwise the roadside unit serves as the final scheduling place. Head vehicle A then transmits the target type identification task and the road video it generated to the final scheduling place. The scheduling places of the other video analysis tasks are determined in the same way and are not repeated here.
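The scheduling-place decision above can be sketched as follows; the identifiers and the "RSU" fallback label are illustrative assumptions:

```python
def schedule_task(required, own_remaining, peers, self_id="self"):
    """Decide where a video analysis task runs (sketch of sub-operation S32).

    peers maps other head-vehicle ids to their remaining computing resources.
    Returns the chosen scheduling place: this vehicle, the capable peer with
    the most remaining resources, or the roadside unit as a fallback.
    """
    if own_remaining >= required:
        return self_id  # enough local resources: run on this head vehicle
    # Other head vehicles whose remaining resources exceed the requirement
    capable = {v: r for v, r in peers.items() if r > required}
    if capable:
        # Forward to the peer with the most remaining computing resources
        return max(capable, key=capable.get)
    return "RSU"  # otherwise offload to the roadside unit
```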
In sub-operation S33, the scheduling location performs an analysis process on the available road video.
As an example, suppose the target is moving, the final scheduling place of the target type identification task is head vehicle A (the one closest to the target), and the final scheduling place of the target trajectory tracking task is the roadside unit; then head vehicle A analyzes the road video it generated to identify the type of the target, while the roadside unit analyzes the road video generated by the head vehicle in the same lane as the target to perform motion estimation and trajectory tracking on the target.
And operation S4, transmitting the result obtained by the analysis processing to each vehicle in the vehicle formation, so that each vehicle controls the driving state according to the result.
When the analysis is performed on a head vehicle, in operation S4 the head vehicle transmits the result to each vehicle in the vehicle formation via V2V communication; when the analysis is performed at the roadside unit, in operation S4 the roadside unit transmits the result via V2R communication. That is, road videos and analysis results are transmitted between head vehicles via V2V communication, and between a head vehicle and the roadside unit via V2R communication.
Embodiments of the present invention further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method for scheduling an automatic driving video analysis task based on vehicle-road coordination as shown in fig. 1-2.
It will be understood by those skilled in the art that the foregoing is only an exemplary embodiment of the present invention, and is not intended to limit the invention to the particular forms disclosed, since various modifications, substitutions and improvements within the spirit and scope of the invention are possible and within the scope of the appended claims.

Claims (9)

1. An automatic driving video analysis task scheduling method based on vehicle-road cooperation is characterized by comprising the following steps:
s1, capturing a front road environment by head vehicles in a vehicle formation to generate a corresponding road video;
s2, determining video analysis tasks according to the motion state of the target in the front road environment, and distributing the video analysis tasks to corresponding head vehicles according to the positions of the head vehicles relative to the target;
s3, the head vehicle calculates the computing resources required by the arrived video analysis task according to its delay requirement and accuracy requirement; when the required computing resources exceed its remaining computing resources, it judges whether another head vehicle whose remaining computing resources exceed the required computing resources exists; if so, the video analysis task and the road video are forwarded to the other head vehicle with the most remaining computing resources for analysis, and otherwise they are forwarded to the roadside unit for analysis;
and S4, transmitting the result obtained by the analysis processing to each vehicle in the vehicle formation, so that each vehicle controls the running state according to the result.
2. The method for scheduling analysis tasks of automatic driving videos based on vehicle-road coordination according to claim 1, wherein the step S1 is preceded by the steps of:
dividing vehicles that have the same driving direction and whose positions lie within the coverage of a roadside unit into the same vehicle formation corresponding to that roadside unit, wherein the vehicles in the vehicle formation satisfy:

$d_{m,\phi_m(t)}(t) \le \sqrt{\bar{r}^2 + H^2}$

where $d_{m,\phi_m(t)}(t)$ is the distance from vehicle $m$ to its nearest roadside unit $\phi_m(t)$, $H$ is the height of the roadside unit, and $\bar{r}$ is the average coverage radius of the roadside units.
3. The method for scheduling analysis tasks of automatic driving videos based on vehicle-road coordination according to claim 1, wherein the step S1 comprises: the lead vehicle captures the light signals of the road environment ahead and converts the light signals into corresponding road video using the vehicle-mounted camera.
4. The vehicle-road cooperation based automatic driving video analysis task scheduling method according to claim 1, wherein the motion state of the target in the front road environment is a moving state, and the video analysis task includes a target type recognition task and a target trajectory tracking task; in the step S2, the target type identification task is assigned to the head vehicle closest to the target, and the target trajectory tracking task is assigned to the head vehicle in the same lane as the target.
5. The method according to claim 1, wherein the motion state of the target in the road environment ahead is a static state, and the video analysis task includes a target type identification task; in S2, the target type identification task is assigned to the head vehicle closest to the target.
6. The vehicle-road coordination based automatic driving video analysis task scheduling method according to claim 4 or 5, characterized by further comprising: the roadside unit determines the head vehicle closest to the target and the head vehicle in the same lane as the target according to the received distances and azimuths.
7. The method according to claim 1, wherein the required computing resources satisfy the following conditions:

$L_q(B_q, C_q) \le L_q^{\max}, \qquad A_q(B_q, C_q) \ge A_q^{\min}$

where $B_q$ and $C_q$ are, respectively, the bandwidth resources and computing resources required by video analysis task $q$, $L_q(B_q, C_q)$ is the delay produced under required bandwidth $B_q$ and required computing resources $C_q$, $L_q^{\max}$ is the highest tolerable delay of task $q$, $A_q(B_q, C_q)$ is the accuracy produced under $B_q$ and $C_q$, and $A_q^{\min}$ is the lowest acceptable accuracy of task $q$.
8. The automatic driving video analysis task scheduling method based on vehicle-road coordination according to claim 1, wherein, when the analysis is performed on a head vehicle, in S4 the head vehicle transmits the result to each vehicle in the vehicle formation via V2V communication;
when the analysis is performed at the roadside unit, in S4 the roadside unit transmits the result to each vehicle in the vehicle formation via V2R communication.
9. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method for automated driving video analytics task scheduling based on vehicle-road coordination according to any one of claims 1 to 8.
CN202210803950.2A 2022-07-07 2022-07-07 Automatic driving video analysis task scheduling method and medium based on vehicle-road cooperation Active CN115359652B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210803950.2A CN115359652B (en) 2022-07-07 2022-07-07 Automatic driving video analysis task scheduling method and medium based on vehicle-road cooperation


Publications (2)

Publication Number Publication Date
CN115359652A true CN115359652A (en) 2022-11-18
CN115359652B CN115359652B (en) 2024-04-19

Family

ID=84031528

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210803950.2A Active CN115359652B (en) 2022-07-07 2022-07-07 Automatic driving video analysis task scheduling method and medium based on vehicle-road cooperation

Country Status (1)

Country Link
CN (1) CN115359652B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108363381A (en) * 2018-02-05 2018-08-03 大陆汽车投资(上海)有限公司 Vehicle management system based on car networking
CN111341093A (en) * 2020-03-04 2020-06-26 腾讯科技(深圳)有限公司 Control method, device, equipment and storage medium of motorcade
CN111464976A (en) * 2020-04-21 2020-07-28 电子科技大学 Vehicle task unloading decision and overall resource allocation method based on fleet
CN112188442A (en) * 2020-11-16 2021-01-05 西南交通大学 Vehicle networking data-driven task unloading system and method based on mobile edge calculation
CN112348992A (en) * 2020-11-12 2021-02-09 广东利通科技投资有限公司 Vehicle-mounted video processing method and device based on vehicle-road cooperative system and storage medium



Similar Documents

Publication Publication Date Title
US11270457B2 (en) Device and method for detection and localization of vehicles
CN110809790B (en) Vehicle information storage method, vehicle travel control method, and vehicle information storage device
JP5713106B2 (en) Vehicle identification system and vehicle identification device
US10279809B2 (en) Travelled-route selecting apparatus and method
US11335188B2 (en) Method for automatically producing and updating a data set for an autonomous vehicle
JP6325806B2 (en) Vehicle position estimation system
WO2018218680A1 (en) Obstacle detection method and device
US10493987B2 (en) Target-lane relationship recognition apparatus
CN110194162B (en) System and method for detecting an approaching cut-in vehicle based on available spatial signals
JP7310313B2 (en) POSITION CORRECTION SERVER, POSITION MANAGEMENT DEVICE, MOBILE POSITION MANAGEMENT SYSTEM AND METHOD, POSITION INFORMATION CORRECTION METHOD, COMPUTER PROGRAM, IN-VEHICLE DEVICE, AND VEHICLE
CN108713220A (en) Sending device, reception device, sending method, method of reseptance, communication system
JP2018147399A (en) Target detection device
CN109416885B (en) Vehicle identification method and system
US20200020121A1 (en) Dimension estimating system and method for estimating dimension of target vehicle
CN113111682A (en) Target object sensing method and device, sensing base station and sensing system
Tak et al. Development of AI-based vehicle detection and tracking system for C-ITS application
Schiegg et al. Object Detection Probability for Highly Automated Vehicles: An Analytical Sensor Model.
CN115359652B (en) Automatic driving video analysis task scheduling method and medium based on vehicle-road cooperation
US20230128379A1 (en) Method and device for evaluating a function for predicting a trajectory of an object in an environment of a vehicle
CN110764526B (en) Unmanned aerial vehicle flight control method and device
CN111480165A (en) Method for creating a feature-based localization map for a vehicle taking into account the feature structure of an object
US11885640B2 (en) Map generation device and map generation method
JP6996142B2 (en) Speed bump position estimator
US11651583B2 (en) Multi-channel object matching
CN115171371B (en) Cooperative road intersection passing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant