CN115100623A - UAV-assisted Internet of Vehicles blind spot pedestrian detection system based on end-edge-cloud collaboration (Google Patents)

Publication number: CN115100623A (application CN202210528443.2A; granted as CN115100623B)
Authority: CN (China)
Prior art keywords: task, node, delay, edge, blind spot
Legal status: Granted
Application number: CN202210528443.2A
Other languages: Chinese (zh)
Other versions: CN115100623B
Inventors: 张棋森, 刘凯, 蒋璐遥, 钟成亮, 晏国志
Assignee: Chongqing University (original and current assignee)
Application filed by Chongqing University; priority to CN202210528443.2A
Publication of CN115100623A (application); application granted; publication of CN115100623B (grant)
Legal status: Active

Classifications

    • G — Physics
    • G06 — Computing; calculating or counting
    • G06V20/58 — Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06N3/02 — Neural networks; G06N3/08 — Learning methods
    • G06V10/764 — Image or video recognition or understanding using pattern recognition or machine learning, using classification
    • G06V10/82 — Image or video recognition or understanding using neural networks
    • G06V20/41 — Higher-level, semantic clustering, classification or understanding of video scenes


Abstract

The invention discloses a UAV-assisted Internet of Vehicles blind spot pedestrian detection system based on end-edge-cloud collaboration. The system comprises at least one UAV terminal, one mobile edge node, one static edge node, and one cloud node, each with computing and transmission capabilities. The method includes: 1. The UAV is deployed in the air to monitor vehicle blind spots in real time, collect the live video stream, and promptly notify the roadside static edge node that the blind spot pedestrian detection service has started. 2. Based on the task requirements and the available computing resources and communication bandwidth of the heterogeneous nodes, the static edge node runs a scheduling algorithm to determine which node the task will be offloaded to. 3. The static edge node notifies the UAV of the scheduling result, and the UAV sends its monitored video stream to the corresponding node via V2X communication. 4. The chosen node performs the pedestrian detection task with the deployed model and sends a warning message to the vehicle when a potential collision danger is detected. The invention provides a UAV-assisted Internet of Vehicles scheme that prevents dangerous blind spot collisions in the traffic system.

Description

A UAV-Assisted Internet of Vehicles Blind Spot Pedestrian Detection System Based on End-Edge-Cloud Collaboration

Technical Field

The invention relates to unmanned aerial vehicles (UAVs), the Internet of Vehicles, edge computing, task offloading, V2X communication, and intelligent transportation systems, and in particular to a UAV-assisted Internet of Vehicles blind spot pedestrian detection system based on end-edge-cloud collaboration, and a processing method thereof.

Background Art

In recent years, advances in wireless communication and intelligent networking have driven the development of vehicular ad hoc networks (VANETs), which form the basis of emerging intelligent transportation systems (ITS). Meanwhile, with the spread of UAV-based applications such as environmental monitoring, intelligent surveillance, wireless communication, and aerial photography, a new paradigm in which VANETs cooperate with UAVs to deliver innovative and powerful information services has become an attractive vision.

The integration of UAVs with VANETs has been studied extensively, covering data forwarding, traffic monitoring, computation offloading, and trajectory optimization. In data forwarding scenarios, UAVs serve as relay nodes, and data caching and trajectory planning are studied to maximize network throughput. In traffic monitoring scenarios, UAVs exploit their mobility to observe traffic network conditions in real time and report anomalies to a control center. In computation offloading scenarios, research focuses on the cost of allocating communication, storage, and computing resources between UAVs and VANETs, reducing processing delay and saving UAV energy. In trajectory optimization scenarios, UAV trajectories are planned to maximize system throughput and reduce service delay when UAVs are integrated with VANETs.

In current traffic systems, blind spots in front of a vehicle are a major cause of traffic accidents, and such blind spots can be either fixed or randomly generated. For example, when a large bus or an obstacle appears in front of a vehicle at some moment and position, a blind spot arises at random, and in this situation the vehicle can easily cause an accident. Current ITS deployments rely mainly on roadside surveillance cameras broadcasting live road conditions to mitigate this hazard, but because blind spots arise randomly, this approach cannot help vehicles detect and warn about them in real time, and its accident-prevention effect is poor. Moreover, the traditional cloud computing architecture cannot provide real-time, efficient services to road vehicles. Coping with the randomness of vehicle blind spots and with the slow task response of cloud computing architectures therefore remains a major challenge.

Summary of the Invention

To solve the above problems, the invention provides a UAV-assisted Internet of Vehicles blind spot pedestrian detection system based on end-edge-cloud collaboration. Specifically, a UAV with computing capability and communication interfaces hovers in the air and monitors the blind spots that may appear in front of vehicles on the road; pedestrians can be detected with an object detection model. Meanwhile, the UAV can communicate with vehicles, roadside infrastructure, and a cloud server through V2V (vehicle-to-vehicle), V2I (vehicle-to-infrastructure), and V2C (vehicle-to-cloud) communication, respectively. The UAV can therefore offload the object detection task to a moving vehicle, a static edge node deployed at the roadside, or the cloud server by transmitting the monitored video to the corresponding node. Thanks to the UAV's high flexibility, the system solves the problem that vehicle blind spots arise at random, while the end-edge-cloud collaboration overcomes the slow response of pure cloud computing.

To solve the above technical problems, the main steps of the UAV-assisted Internet of Vehicles blind spot pedestrian detection system based on end-edge-cloud collaboration provided by the invention are as follows:

Step 1: Offline model training, used for real-time detection of pedestrians in the blind spot. A pedestrian dataset captured by UAVs is collected for model training, and the trained models are deployed on the terminal, mobile edge, static edge, and cloud nodes, which have heterogeneous computing and communication capabilities. The invention considers two classical object detection models, SSD and YOLO. Compared with SSD, YOLO is a larger network that requires higher computational overhead but achieves better detection performance; SSD is a single-stage network that performs object detection and classification simultaneously, and its lightweight, fast-inference characteristics make it better suited to mobile devices.

The system's training dataset comes from a UAV video of a university campus square. 2000 frames were randomly selected from the video, 90% of which were used for training and the remaining 10% for model validation. Training used a batch size of 32, an IOU threshold of 0.98, and 5000 iterations. Both models were tested in a traffic scenario: UAV video frames were collected in a scene simulating a potential vehicle blind spot, and the final experiments show that YOLO and SSD maintain detection accuracies of 95% and 90%, respectively.
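The split and hyperparameters above can be sketched as a minimal illustration; the function and configuration names are assumptions for clarity, not the patent's actual tooling:

```python
import random

def make_split(num_frames=2000, train_ratio=0.9, seed=42):
    """Randomly split frame indices into training and validation sets,
    mirroring the 90%/10% split described above."""
    indices = list(range(num_frames))
    random.Random(seed).shuffle(indices)
    cut = int(num_frames * train_ratio)
    return indices[:cut], indices[cut:]

# Training hyperparameters stated in the description.
TRAIN_CONFIG = {
    "batch_size": 32,
    "iou_threshold": 0.98,
    "iterations": 5000,
}

train_ids, val_ids = make_split()
print(len(train_ids), len(val_ids))  # 1800 200
```

A fixed seed keeps the split reproducible across nodes that retrain or fine-tune the same models.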

Step 2: Deploy the UAV in the air to monitor the blind spots of vehicles on the road, start the blind spot pedestrian detection service, and notify the roadside static edge node; upon receiving the notification, the static edge node begins updating the task offloading strategy online. Specifically, to acquire the video stream, the UAV uses FFmpeg to decode the video and converts the decoded result into YUV-format images, thereby enabling transmission and storage of the frames. The resulting YUV video stream is sent to the nodes in the system via V2X communication.
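The decode-to-YUV step can be sketched as below. The exact ffmpeg invocation is an assumption consistent with the description (decoding to raw YUV 4:2:0 frames); only the command is built here, not executed:

```python
def yuv420_frame_size(width: int, height: int) -> int:
    """Bytes per raw YUV 4:2:0 frame: a full-resolution Y plane plus
    quarter-resolution U and V planes."""
    return width * height * 3 // 2

def ffmpeg_decode_cmd(source: str, width: int, height: int) -> list:
    """Build a hypothetical ffmpeg command that decodes the UAV video
    stream into raw YUV 4:2:0 frames written to stdout, to be chunked
    into frames of yuv420_frame_size() bytes for V2X transmission."""
    return [
        "ffmpeg", "-i", source,
        "-f", "rawvideo",
        "-pix_fmt", "yuv420p",
        "-s", f"{width}x{height}",
        "-",  # write raw frames to stdout
    ]

print(yuv420_frame_size(1280, 720))  # 1382400
```

Reading stdout in fixed-size chunks of `yuv420_frame_size(w, h)` bytes yields one complete frame per chunk.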

Step 3: A method is provided that combines the computing resources available at the heterogeneous nodes with the communication bandwidth of the network to determine the node to which the task is offloaded. The method includes:

Step 3.1: Under constrained node computing, storage, and communication resources, the objective function is defined with minimizing the average task delay as the optimization goal.

The node parameters include: S = {s1, s2, …, s|S|}, M = {m1, m2, …, m|M|}, U = {u1, u2, …, u|U|}, and {c}, denoting the sets of static edge nodes, mobile edge nodes, terminal nodes, and the cloud server, respectively;

The offloading nodes are denoted N = {s1, …, s|S|, m1, …, m|M|, u1, …, u|U|, c};

The set of tasks sensed by the UAVs is denoted W, where w_ji is the i-th task generated by UAV j;

Each task can be represented as a 5-tuple w_ji = <θ_ji, c_ji, a_ji, g_ji, d_ji>, denoting the input data size, required number of CPU cycles, accuracy requirement, generation time, and deadline, respectively;

Each offloading node n ∈ N is associated with a 5-tuple <C_n, B_n, η_n, S_n, R_n>, denoting the radius of its communication coverage, total wireless bandwidth, number of channels, maximum storage capacity, and computing capability (e.g., the number of CPU cycles per unit time);

con_{u,n,t} = 1 denotes that u ∈ N can exchange messages with n ∈ N in single-hop communication at time t;

H = {h1, h2, …, h|H|} denotes the set of neural network model weights;

Assuming task w_ji is offloaded to node n for inference with neural network model h, a binary variable x_{j,i,n,h} ∈ {0,1} is defined, equal to 1 if and only if task w_ji is offloaded to node n and processed with model h.

The communication delay between nodes u and n is the time to transmit the task data over the link, t^{tran}_{u,n} = θ_ji / r_{u,n,t}, where r_{u,n,t} is the link rate at time t.

The total transmission delay comprises two parts: the UAV sends the video data to the offloading node, and the offloading node sends the computation result to the vehicle. The total communication delay of offloading the task is therefore:

t^{comm}_{j,i,n} = t^{tran}_{uav,n} + t^{tran}_{n,veh}

The computation delay of the task:

t^{comp}_{j,i,n} = c_ji / R_n

The waiting delay of the task, t^{wait}_{j,i,n}, is the time task w_ji queues at node n before its computation starts.

Finally, the task delay of offloading w_ji to node n with model h, denoted t_{j,i,n,h}, is the sum of the total transmission delay, the computation delay, and the waiting delay:

t_{j,i,n,h} = t^{comm}_{j,i,n} + t^{comp}_{j,i,n} + t^{wait}_{j,i,n}

The goal of the system is to minimize the average task delay by determining the offloading strategy; the objective function is:

min (1/|W|) Σ_{w_ji ∈ W} Σ_{n ∈ N} Σ_{h ∈ H} x_{j,i,n,h} · t_{j,i,n,h}

The constraints of the objective function include:

Constraints C1 and C2 state that each task must be offloaded to and computed at exactly one node:

C1: x_{j,i,n,h} ∈ {0,1}

C2: Σ_{n∈N} x_{j,i,n,h} = 1

Constraint C3 is the storage constraint:

C3: Σ_{w_ji ∈ W} x_{j,i,n,h} · θ_ji ≤ S_n, ∀n ∈ N

Constraint C4 states that the number of simultaneously transmitted tasks cannot exceed the number of channels of the offloading node:

C4: Σ_{w_ji transmitting at time t} x_{j,i,n,h} ≤ η_n, ∀n ∈ N

Constraint C5 states that the selected neural network model meets the accuracy requirement, where a_h is the accuracy of model h:

C5: x_{j,i,n,h} · a_h ≥ x_{j,i,n,h} · a_ji, ∀w_ji ∈ W

Constraint C6 states that the task must be completed before its deadline:

C6: x_{j,i,n,h} · (g_ji + t_{j,i,n,h}) ≤ d_ji, ∀w_ji ∈ W
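The delay model and the per-task constraints above can be expressed as a short sketch. Symbol names follow the definitions in Step 3.1; the concrete link-rate inputs and the result size are assumptions supplied by the caller:

```python
from dataclasses import dataclass

@dataclass
class Task:            # w_ji = <theta, c, a, g, d>
    theta: float       # input data size (bits)
    c: float           # required CPU cycles
    a: float           # accuracy requirement
    g: float           # generation time (s)
    d: float           # deadline (s)

@dataclass
class Node:            # <C_n, B_n, eta_n, S_n, R_n>
    coverage: float    # communication radius C_n (m)
    bandwidth: float   # total wireless bandwidth B_n (bit/s)
    channels: int      # number of channels eta_n
    storage: float     # maximum storage S_n (bits)
    cpu_rate: float    # computing capability R_n (cycles/s)

def task_delay(task, node, rate_up, rate_down, result_size, wait=0.0):
    """t_{j,i,n,h} = transmission (UAV->node, node->vehicle)
    + computation + waiting delay."""
    t_comm = task.theta / rate_up + result_size / rate_down
    t_comp = task.c / node.cpu_rate
    return t_comm + t_comp + wait

def feasible(task, node, model_accuracy, delay):
    """Per-task checks for C3 (storage), C5 (accuracy), C6 (deadline)."""
    return (task.theta <= node.storage
            and model_accuracy >= task.a
            and task.g + delay <= task.d)
```

For example, a 1 Mbit task with a 10 Mbit/s uplink, 1 Mbit/s downlink for a 10 kbit result, and 10^9 cycles on a 10 GHz-equivalent node yields 0.1 + 0.01 + 0.1 = 0.21 s.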

Step 3.2: Based on the optimization objective of the objective function, a task offloading algorithm is designed:

The task delay is estimated under the currently applied offloading strategy. If it exceeds a predefined threshold, a greedy method searches for a new offloading strategy. The algorithm consists of two strategies: a delay-driven strategy, whose goal is to minimize the overall task delay of the system, and a resource-driven strategy, whose goal is to maximize resource utilization.

Delay-driven strategy: Given an offloading strategy, let the task delay under it be t_cur. When a new task with data size θ_ji and computation requirement c_ji arrives, the offloading delay t_aver is estimated with the delay model derived above. When the difference between t_cur and t_aver exceeds a predefined threshold t_diff, the algorithm starts searching for a new offloading strategy. Specifically, it traverses every computing node and every neural network model, computes the total task delay t_{j,i,n,h}, and then selects the strategy with the smallest t_{j,i,n,h}.
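The delay-driven search can be sketched as an exhaustive greedy pass over (node, model) pairs. The delay estimator and accuracy check are caller-supplied functions; all names are illustrative:

```python
def greedy_offload(nodes, models, estimate_delay, meets_accuracy):
    """Traverse every (node, model) pair and return the feasible pair
    with the smallest estimated total task delay t_{j,i,n,h}."""
    best, best_delay = None, float("inf")
    for n in nodes:
        for h in models:
            if not meets_accuracy(h):
                continue  # constraint C5: model must meet the accuracy requirement
            t = estimate_delay(n, h)
            if t < best_delay:
                best, best_delay = (n, h), t
    return best, best_delay

def needs_new_strategy(t_cur, t_aver, t_diff):
    """Trigger a new search when |t_cur - t_aver| exceeds threshold t_diff."""
    return abs(t_cur - t_aver) > t_diff
```

The search is O(|N| × |H|), which stays cheap for the handful of nodes and models in this system.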

Resource-driven strategy: This strategy maximizes resource utilization; take computing resource availability as an example. Given an offloading strategy, let the computation delay under it be t^{comp}_{j,i,n}. When the computing capability R_n of some node in the system increases, the algorithm starts searching for a new offloading strategy. The update procedure is similar to the delay-driven strategy.

Step 4: The UAV communicates with the other nodes via V2X. Specifically, the UAV, the static edge nodes, and the mobile edge nodes are located in the same vehicular network and communicate by means such as WiFi or DSRC, while the UAV communicates with the cloud node over 4G, 5G, or other cellular networks. The UAV receives the offloading node selected by the static edge node and sends the collected blind spot video stream to the corresponding node for task computation.

Step 5: The pedestrian detection models trained offline in Step 1 are deployed in advance on the nodes that may execute the task; a node selects the corresponding deep learning model for inference according to the received message, where the selection follows the two task offloading strategies of Step 3.2. After a node obtains the detection result, given the limited communication bandwidth and the time value of the result, it does not send every result to the vehicles on the road; it sends a warning signal to a vehicle only when a collision danger is detected.
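The "warn only on danger" logic of Step 5 can be sketched as a filter over detections: a warning is emitted only when a confident pedestrian detection intersects the vehicle's projected danger zone. The box format, confidence threshold, and zone are illustrative assumptions:

```python
def boxes_overlap(a, b):
    """Axis-aligned overlap test for (x1, y1, x2, y2) boxes."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def collision_warnings(detections, danger_zone, min_conf=0.5):
    """Keep only pedestrian detections that are confident enough and
    intersect the vehicle's danger zone; everything else is suppressed
    to conserve communication bandwidth, as described in Step 5."""
    return [d for d in detections
            if d["label"] == "pedestrian"
            and d["conf"] >= min_conf
            and boxes_overlap(d["box"], danger_zone)]
```

Only the surviving detections trigger a warning message to the vehicle; an empty result means nothing is transmitted.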

Brief Description of the Drawings

The accompanying drawings of the present invention are described as follows:

Figure 1 is a schematic diagram of the system of the present invention;

Figure 2 is a flowchart of the present invention.

Detailed Description of the Embodiments

Figure 1 is a schematic diagram of the system of the present invention. It shows the end-edge-cloud collaborative architecture, which consists of four basic elements: UAV terminals, vehicles acting as mobile edge nodes, roadside units acting as static edge nodes, and a cloud server. Each element has certain computing and communication capabilities, so the UAV terminal can communicate with vehicles, roadside units, and the cloud via V2X communication. Under this architecture, a typical application scenario is as follows: the UAV is deployed in the air and monitors the blind spot of a driving vehicle, such as the pedestrian circled in the figure. The monitored video stream can be processed with a specific object detection model (e.g., YOLO), which can be deployed on the UAV, vehicles, roadside units, or the cloud for pedestrian detection. The roadside unit determines the offloading node for the current task according to the offloading algorithm and notifies the UAV, and the UAV transmits the video stream to that node via V2X communication for task computation. Finally, if a potential pedestrian collision is detected, the computing node transmits a warning message to the driving vehicle.

Figure 2 is a flowchart of a specific implementation of the present invention. The steps of the system are described in detail below:

In step 101, the pedestrian detection model is trained offline using the two classical models SSD and YOLO, where SSD's lightweight characteristics are better suited to mobile devices and YOLO offers higher detection accuracy. A large dataset of pedestrians on traffic roads, captured from the air by UAVs, is collected, and the trained models are deployed on the UAV terminal, the roadside static edge nodes, the vehicle-mounted mobile edge nodes, and the remote cloud node.

In step 102, the UAV is deployed in the air above the traffic road. Since most UAVs lack computing capability, a Raspberry Pi can be mounted on the UAV as a processor for monitoring vehicle blind spots on the road.

In step 103, the UAV monitors the blind spot in real time and captures information such as pedestrian positions; it then uses FFmpeg to decode the video and converts the decoded result into YUV-format images, enabling transmission and storage of the video stream. At the same time as blind spot monitoring starts, it sends a notification to the roadside unit.

In step 104, the UAV communicates with the roadside unit via V2I. Traffic roads contain considerable channel interference, and packet loss during transmission is significant; using the reliable TCP protocol would greatly increase the transmission delay and fail to meet the task response rate required on real roads. The invention therefore transmits data mainly in the form of UDP broadcast, sending messages or data to the roadside unit while tolerating the loss of a small number of packets. Here, the notification that the blind spot monitoring service has started is sent to the roadside unit.
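The UDP notification can be sketched with standard sockets. The port number and message payload are illustrative, and the example uses loopback unicast rather than a real broadcast for simplicity:

```python
import socket

SERVICE_ON = b"BLIND_SPOT_SERVICE_ON"

def send_notification(addr=("127.0.0.1", 9000), payload=SERVICE_ON):
    """UAV side: fire-and-forget datagram, tolerating occasional loss
    as described in step 104."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, addr)

def receive_notification(port=9000, timeout=3.0):
    """Roadside-unit side: wait for a single notification datagram."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(("127.0.0.1", port))
        s.settimeout(timeout)
        data, _ = s.recvfrom(1024)
        return data
```

For a true broadcast the sender would set `SO_BROADCAST` and target the subnet broadcast address; the receiver logic is unchanged.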

In step 105, the roadside unit receives the notification sent by the UAV and begins updating the task offloading strategy in real time. Since the roadside unit has stronger computing capability than the vehicle and UAV ends, and is closer to the actual road than the cloud node, the invention deploys the task offloading algorithm on the roadside unit in advance.

In step 106, the roadside unit integrates the computing, storage, and bandwidth resources of all heterogeneous nodes in the system and formulates the objective function with minimizing the average task delay as the optimization goal, with the definitions and constraints as follows:

相关参数包括:定义S={s1,s2,…,s|S|},M={m1,m2,…,m|M|},U={u1,u2,…,u|U|}和{c}所述分别为静态边缘节点、移动边缘节点、终端节点和云服务器的集合;Relevant parameters include: define S={s 1 ,s 2 ,…,s |S| }, M={m 1 ,m 2 ,…,m |M| }, U={u 1 ,u 2 ,…, u |U| } and {c} are the sets of static edge nodes, mobile edge nodes, terminal nodes and cloud servers, respectively;

用N={s1,s2,…,s|S|,m1,m2,…,m|M|,u1,u2,…,u|U|,c}表示的卸载节点;N={s 1 ,s 2 ,…,s |S| ,m 1 ,m 2 ,…,m |M| ,u 1 ,u 2 ,…,u |U| ,c};

无人机感知的任务集合用

Figure BDA0003645549840000059
表示;UAV perception for task collection
Figure BDA0003645549840000059
express;

每个任务可以表示为一个5元组

Figure BDA0003645549840000051
分别表示输入数据大小、所需CPU周期数、精度要求、生成时间和截止日期;Each task can be represented as a 5-tuple
Figure BDA0003645549840000051
Respectively represent the input data size, the number of CPU cycles required, the accuracy requirement, the generation time and the deadline;

每个卸载节点n∈N与5个元组<Cn,Bnn,Sn,Rn>,分别代表通信覆盖范围的半径、总无线带宽、通道的数量、最大的存储功能、和计算能力(例如,CPU周期的数量单位时间);Each offload node n∈N and 5 tuples <C n ,B nn ,S n ,R n >representing the radius of communication coverage, total wireless bandwidth, number of channels, maximum storage function, and computing power (for example, the number of CPU cycles per unit time);

记conu,n,t=1表示u∈N在t时刻的单跳通信中可以与n∈N交换消息;Denoting con u,n,t = 1 means that u∈N can exchange messages with n∈N in single-hop communication at time t;

用H={h1,h2,…,h|H|}表示神经网络模型的权值;Use H={h 1 , h 2 ,...,h |H| } to represent the weight of the neural network model;

假设利用神经网络模型hk将任务wji卸载到节点n进行推理,定义一个二元变量

Figure BDA0003645549840000052
Suppose the neural network model h k is used to offload the task w ji to node n for inference, and a binary variable is defined
Figure BDA0003645549840000052

Figure BDA0003645549840000053
Figure BDA0003645549840000053

节点之间的通信时延:Communication delay between nodes:

Figure BDA0003645549840000054
Figure BDA0003645549840000054

总传输时延总和包括无人机将视频数据发送给卸载节点,卸载节点将计算结果发送至车辆两个部分,任务卸载的总通信时延定义提如:The sum of the total transmission delay includes the UAV sending video data to the offloading node, and the offloading node sending the calculation results to the two parts of the vehicle. The definition of the total communication delay of task offloading is as follows:

Figure BDA0003645549840000055
Figure BDA0003645549840000055

任务的计算时延:The calculation delay of the task:

Figure BDA0003645549840000056
Figure BDA0003645549840000056

任务的等待时延:The waiting delay of the task:

Figure BDA0003645549840000057
Figure BDA0003645549840000057

最后,将wji卸载到节点n的任务时延由

Figure BDA0003645549840000058
表示,它是总传输时延、计算时延和等待时延的计算得到:Finally, the task latency of offloading w ji to node n is given by
Figure BDA0003645549840000058
means that it is calculated from the total transmission delay, calculation delay and latency delay:

Figure BDA0003645549840000061
Figure BDA0003645549840000061

系统的目标是通过确定卸载策略来最小化平均任务时延,目标函数为:The goal of the system is to minimize the average task delay by determining the offloading strategy, and the objective function is:

Figure BDA0003645549840000062
Figure BDA0003645549840000062

所述目标函数的约束条件包括:The constraints of the objective function include:

约束C1与约束C2表示每个任务必须卸载并计算到一个节点:Constraints C1 and C2 indicate that each task must be offloaded and computed to a node:

C1:xj,i,n,h∈{0,1}C1: xj,i,n, h∈{0,1}

C2:∑n∈Nxj,i,n,h=1C2:∑ n∈N x j,i,n,h =1

约束C3表示存储约束:Constraint C3 represents a storage constraint:

Figure BDA0003645549840000063
Figure BDA0003645549840000063

约束C4表示同时传输的任务数不能超过卸载节点的信道数:Constraint C4 means that the number of tasks transmitted simultaneously cannot exceed the number of channels of the offloading node:

Figure BDA0003645549840000064
Figure BDA0003645549840000064

约束C5表示所选神经网络模型满足精度要求:Constraint C5 means that the selected neural network model meets the accuracy requirements:

Figure BDA0003645549840000065
Figure BDA0003645549840000065

Constraint C6 states that each task must be completed before its deadline:

Figure BDA0003645549840000066
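The delay model above can be sketched in code. Since the patent's equations are rendered only as image placeholders, the functional forms below are conventional assumptions (transmission = data size / per-channel bandwidth, computation = required cycles / computing capability, waiting = queued cycles / computing capability) rather than the patent's exact formulas; the `Task` and `Node` field names mirror the tuple definitions in the text.

```python
from dataclasses import dataclass

@dataclass
class Task:
    theta: float     # input data size (bits), theta_ji in the text
    c: float         # required CPU cycles, c_ji
    alpha: float     # accuracy requirement
    deadline: float  # completion deadline (s)

@dataclass
class Node:
    B: float              # total wireless bandwidth (bit/s), B_n
    eta: int              # number of channels, eta_n
    S: float              # maximum storage, S_n
    R: float              # computing capability (cycles/s), R_n
    backlog: float = 0.0  # CPU cycles already queued at the node

def task_delay(task: Task, node: Node, result_size: float = 1e3) -> float:
    """Total delay of offloading one task to one node:
    transmission (UAV -> node, node -> vehicle) + computation + waiting."""
    per_channel_bw = node.B / node.eta
    t_trans = task.theta / per_channel_bw   # UAV sends video data to the node
    t_back = result_size / per_channel_bw   # node sends the (small) result back
    t_comp = task.c / node.R                # inference time on the node
    t_wait = node.backlog / node.R          # queueing before service starts
    return t_trans + t_back + t_comp + t_wait
```

Constraints C1–C6 would then be checked against this delay (e.g., C6 is `task_delay(t, n) <= t.deadline`).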

In step 107, the system estimates the task delay under the currently applied offloading strategy. If a predefined threshold is exceeded, a greedy method is used to search for a new offloading strategy. The method combines two strategies: a delay-driven strategy, whose goal is to minimize the overall task delay of the system, and a resource-driven strategy, whose goal is to maximize resource utilization.

Given an offloading strategy, the system initially offloads tasks according to it; denote the resulting task delay by tcur. When a new task with data size θji and computation requirement cji arrives, the offloading delay taver is estimated using the delay model derived above. The resource-driven strategy likewise aims to maximize resource utilization, taking computing-resource availability as an example: given an offloading strategy, the computation delay is
Figure BDA0003645549840000067
When the computing capability Rn of a node in the system increases, the current offloading delay taver is recomputed; the update procedure is analogous to the delay-driven strategy.

In step 108, when the difference between tcur and taver exceeds the predefined threshold tdiff, the algorithm begins searching for a new offloading strategy.

In step 109, the roadside unit searches for the task offloading strategy that is optimal in the current environment. Specifically, it traverses every computing node and every neural network model and computes the total task delay
Figure BDA0003645549840000068
It then greedily selects the strategy with the smallest
Figure BDA0003645549840000069
and records this offloading strategy and sends it to the UAV.

In step 110, when the difference between tcur and taver does not exceed the threshold, the algorithm keeps the previous offloading strategy.

In step 111, the UAV receives the offloading strategy notification from the roadside unit and then sends the captured blind spot video stream to the corresponding node for task computation. During this process, the UAV communicates with the other nodes via V2X. Specifically, the UAV, the static edge nodes, and the mobile edge nodes are located in the same Internet of Vehicles and communicate over technologies such as WiFi and DSRC, while the UAV communicates with the cloud node over a 4G, 5G, or cellular network.

In step 112, the node receives the video stream from the UAV. The computing node invokes the pedestrian detection model that was trained offline and deployed in advance, selecting the corresponding deep learning model for inference according to the received message. After obtaining the results, and considering the limited communication bandwidth and the time-sensitivity of the results, the node does not send all computation results to the vehicles on the road; it sends a warning signal to a vehicle only when a pedestrian collision risk is detected.
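The result filter of step 112 (and the branch in steps 113–114) can be sketched as follows. The detection-output format and the risk-zone test are illustrative assumptions, since the patent does not specify them.

```python
def filter_and_warn(detections, risk_zone, send_warning):
    """Bandwidth-saving result filter (steps 112-114).
    detections: list of (label, confidence, (x, y)) tuples from one frame.
    risk_zone:  predicate on a position, True if inside the collision-risk area.
    send_warning: callback that pushes a warning message to the vehicle.
    Returns True if a warning was sent (step 113), False otherwise (step 114)."""
    at_risk = [
        d for d in detections
        if d[0] == "pedestrian" and risk_zone(d[2])
    ]
    if at_risk:
        send_warning(at_risk)  # step 113: warn only on detected danger
        return True
    return False               # step 114: nothing sent; service continues
```

Only frames with a pedestrian inside the risk zone generate V2X traffic toward the vehicle, matching the bandwidth argument in the text.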

In step 113, when the computing node detects a pedestrian collision risk, it sends warning information to the vehicle. The driver performs the corresponding operation according to the warning and chooses whether to end the blind spot pedestrian detection service; if the service is not ended, the system returns to step 103.

In step 114, when the computing node detects no pedestrian collision risk, no warning information is sent and the system continues the blind spot pedestrian detection service.

Claims (8)

1. A UAV-assisted blind spot pedestrian detection system for the Internet of Vehicles based on end-edge-cloud collaboration, characterized by comprising the following steps:
Step 1. Offline preparation stage: collect datasets containing pedestrians on dangerous road sections and use them to train pedestrian detection models such as YOLO and SSD; the trained models can be deployed on terminal, mobile edge, static edge, and cloud nodes with heterogeneous computing and communication capabilities.
Step 2. Deploy the UAV in the air to monitor the visual blind spots of vehicles on the road, start the blind spot pedestrian detection service, and notify the static edge nodes on the roadside.
Step 3. The roadside static edge node, combining the computing resources available on all heterogeneous nodes in the current system and the communication bandwidth of the network, determines according to the proposed task offloading algorithm the node to which the task is offloaded for the corresponding inference.
Step 4. The roadside static edge node transmits the task offloading result to the UAV, and the UAV sends the monitored video stream to the corresponding node via V2X communication.
Step 5. The node that receives the computation task detects pedestrians using the pedestrian detection model obtained in step 1; if a potential collision risk is detected in the blind spot, it sends a warning message to the vehicles on the road, and a vehicle receiving the warning signal performs the corresponding evasive operation according to the actual driving situation.
2. The UAV-assisted blind spot pedestrian detection system based on end-edge-cloud collaboration according to claim 1, characterized in that: in step 1, according to the actual conditions of the traffic system and to improve the accuracy of blind spot pedestrian detection, the system adopts two classic object detection models, SSD and YOLO; compared with SSD, YOLO is a larger network that requires higher computational overhead and achieves better detection performance, while the lightweight and fast inference characteristics of SSD make it more suitable for mobile devices. The training dataset of the system comes from a video stream of a university campus plaza captured by a UAV: 2000 frames were randomly selected from the video, 90% for training and the remaining 10% for model validation; training used a batch size of 32, an IOU threshold of 0.98, and 5000 iterations. Experimental tests show that the detection accuracy of YOLO and SSD is maintained at 95% and 90%, respectively.
3. The UAV-assisted blind spot pedestrian detection system based on end-edge-cloud collaboration according to claim 1, wherein the UAV of step 2 monitors the blind spot in real time, characterized in that: for video stream acquisition, the UAV uses FFmpeg to decode the video and converts the decoded result into images in YUVImage format for transmission and storage; the resulting YUV video stream is sent to each node in the system via V2X communication.
4. The UAV-assisted blind spot pedestrian detection system based on end-edge-cloud collaboration according to claim 1, wherein step 3 integrates the computing and bandwidth resources available on heterogeneous nodes, characterized in that the node parameters include: S={s1,s2,…,s|S|}, M={m1,m2,…,m|M|}, U={u1,u2,…,u|U|}, and {c} denote the sets of static edge nodes, mobile edge nodes, terminal nodes, and the cloud server, respectively; the offloading nodes are denoted by N={s1,s2,…,s|S|,m1,m2,…,m|M|,u1,u2,…,u|U|,c}; the set of tasks sensed by the UAV is denoted by
Figure FDA0003645549830000011
;
Each task can be represented as a 5-tuple
Figure FDA0003645549830000012
whose elements respectively denote the input data size, the required number of CPU cycles, the accuracy requirement, the generation time, and the deadline;
Each offloading node n∈N is associated with a 5-tuple <Cn,Bn,ηn,Sn,Rn>, representing the radius of its communication coverage, its total wireless bandwidth, its number of channels, its maximum storage capacity, and its computing capability (e.g., the number of CPU cycles per unit time); conu,n,t=1 denotes that u∈N can exchange messages with n∈N via single-hop communication at time t; H={h1,h2,…,h|H|} denotes the weights of the neural network models; assuming the neural network model hk is used to offload the task wji to node n for inference, a binary variable is defined:
Figure FDA0003645549830000021
Figure FDA0003645549830000022
Communication delay between nodes:
Figure FDA0003645549830000023
Total communication delay of task offloading:
Figure FDA0003645549830000024
Computation delay of the task:
Figure FDA0003645549830000025
Waiting delay of the task:
Figure FDA0003645549830000026
Finally, the total task delay of offloading wji to node n is denoted by
Figure FDA0003645549830000027
and is computed as the sum of the total transmission delay, the computation delay, and the waiting delay:
Figure FDA0003645549830000028
The goal of the system is to minimize the average task delay by determining the offloading strategy; the objective function is:
Figure FDA0003645549830000029
5. The UAV-assisted blind spot pedestrian detection system based on end-edge-cloud collaboration according to claim 4, wherein the objective function is defined with the average task delay as the optimization target, characterized in that the objective function is subject to the following constraints:
Constraints C1 and C2 state that each task must be offloaded to, and computed on, exactly one node:
C1: xj,i,n,h ∈ {0,1}
C2: ∑n∈N xj,i,n,h = 1
Constraint C3 is the storage constraint:
C3:
Figure FDA00036455498300000210
Constraint C4 states that the number of simultaneously transmitted tasks cannot exceed the number of channels of the offloading node:
C4:
Figure FDA00036455498300000211
Constraint C5 states that the selected neural network model meets the accuracy requirement:
C5:
Figure FDA0003645549830000031
Constraint C6 states that each task must be completed before its deadline:
C6:
Figure FDA0003645549830000032
6. The UAV-assisted blind spot pedestrian detection system based on end-edge-cloud collaboration according to claim 1, wherein for the task offloading algorithm of step 3 the system estimates the task delay under the currently applied offloading strategy; if a predefined threshold is exceeded, a greedy method is used to search for a new offloading strategy. The method combines two strategies: a delay-driven strategy, whose goal is to minimize the overall task delay of the system, and a resource-driven strategy, whose goal is to maximize resource utilization.
Delay-driven strategy: given an offloading strategy, denote the task delay by tcur. When a new task with data size θji and computation requirement cji arrives, the offloading delay taver is estimated using the delay model derived above. When the difference between tcur and taver exceeds the predefined threshold tdiff, the algorithm searches for a new offloading strategy: specifically, it traverses every computing node and every neural network model, computes the total task delay
Figure FDA0003645549830000033
and then greedily selects the strategy that minimizes
Figure FDA0003645549830000034
Resource-driven strategy: resource utilization is maximized, taking computing-resource availability as an example. Given an offloading strategy, the computation delay is
Figure FDA0003645549830000035
When the computing capability Rn of a node in the system increases, the algorithm searches for a new offloading strategy; the update procedure is analogous to the delay-driven strategy.
7. The UAV-assisted blind spot pedestrian detection system based on end-edge-cloud collaboration according to claim 1, wherein in step 4 the UAV communicates with the other nodes in the system via V2X: the UAV, the static edge nodes, and the mobile edge nodes are located in the same Internet of Vehicles and communicate over technologies such as WiFi and DSRC, while the UAV communicates with the cloud node over a 4G, 5G, or cellular network; the UAV receives the task offloading node selection result from the static edge node and sends the captured blind spot video stream to the corresponding node for task computation.
8. The UAV-assisted blind spot pedestrian detection system based on end-edge-cloud collaboration according to claim 1, wherein for the blind spot pedestrian detection by the node of step 5, the node performing the computation task deploys the offline-trained pedestrian detection model in advance and selects the corresponding deep learning model for inference according to the received message; this selection result is obtained from the two task offloading strategies described in claim 6. After obtaining the computation results, and considering the limited communication bandwidth and the time-sensitivity of the results, the node does not send all computation results to the vehicles on the road; it sends a warning signal to a vehicle only when a collision risk is detected.
CN202210528443.2A 2022-05-16 2022-05-16 A UAV-assisted blind spot pedestrian detection system for connected vehicles based on device-edge-cloud collaboration Active CN115100623B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210528443.2A CN115100623B (en) 2022-05-16 2022-05-16 A UAV-assisted blind spot pedestrian detection system for connected vehicles based on device-edge-cloud collaboration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210528443.2A CN115100623B (en) 2022-05-16 2022-05-16 A UAV-assisted blind spot pedestrian detection system for connected vehicles based on device-edge-cloud collaboration

Publications (2)

Publication Number Publication Date
CN115100623A true CN115100623A (en) 2022-09-23
CN115100623B CN115100623B (en) 2025-03-14

Family

ID=83287848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210528443.2A Active CN115100623B (en) 2022-05-16 2022-05-16 A UAV-assisted blind spot pedestrian detection system for connected vehicles based on device-edge-cloud collaboration

Country Status (1)

Country Link
CN (1) CN115100623B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115641497A (en) * 2022-12-23 2023-01-24 中电信数字城市科技有限公司 Multi-channel video processing system and method
CN115714965A (en) * 2022-11-04 2023-02-24 北京工业大学 Land-air network intelligent vehicle system and cooperative control method
CN116437316A (en) * 2023-03-22 2023-07-14 重庆大学 Unmanned aerial vehicle assisted Internet of vehicles data transmission method and system
CN117590863A (en) * 2024-01-18 2024-02-23 苏州朗捷通智能科技有限公司 Unmanned aerial vehicle cloud edge end cooperative control system of 5G security rescue net allies oneself with

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109447048A (en) * 2018-12-25 2019-03-08 苏州闪驰数控系统集成有限公司 A kind of artificial intelligence early warning system
WO2021092039A1 (en) * 2019-11-04 2021-05-14 Intel Corporation Maneuver coordination service in vehicular networks

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109447048A (en) * 2018-12-25 2019-03-08 苏州闪驰数控系统集成有限公司 A kind of artificial intelligence early warning system
WO2021092039A1 (en) * 2019-11-04 2021-05-14 Intel Corporation Maneuver coordination service in vehicular networks

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MOHAN LIYANAGE 等: "Edge Computing Enabled by Unmanned Autonomous Vehicles", 《ARXIV:2012.14713》, 29 December 2020 (2020-12-29), pages 1 - 38 *
QISEN ZHANG 等: "UAV-Assisted Blind Area Pedestrian Detection via Terminal-Edge-Cloud Cooperation in VANETs", 《NEURAL COMPUTING FOR ADVANCED APPLICATIONS》, 21 October 2022 (2022-10-21), pages 314 - 330 *
侯璐: "面向车联网应用的移动云网络资源管理与优化研究", 《中国博士学位论文全文数据库 工程科技Ⅱ辑》, no. 2019, 15 August 2019 (2019-08-15), pages 034 - 20 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115714965A (en) * 2022-11-04 2023-02-24 北京工业大学 Land-air network intelligent vehicle system and cooperative control method
CN115641497A (en) * 2022-12-23 2023-01-24 中电信数字城市科技有限公司 Multi-channel video processing system and method
CN116437316A (en) * 2023-03-22 2023-07-14 重庆大学 Unmanned aerial vehicle assisted Internet of vehicles data transmission method and system
CN117590863A (en) * 2024-01-18 2024-02-23 苏州朗捷通智能科技有限公司 Unmanned aerial vehicle cloud edge end cooperative control system of 5G security rescue net allies oneself with
CN117590863B (en) * 2024-01-18 2024-04-05 苏州朗捷通智能科技有限公司 Unmanned aerial vehicle cloud edge end cooperative control system of 5G security rescue net allies oneself with

Also Published As

Publication number Publication date
CN115100623B (en) 2025-03-14

Similar Documents

Publication Publication Date Title
CN115100623B (en) A UAV-assisted blind spot pedestrian detection system for connected vehicles based on device-edge-cloud collaboration
Qiu et al. Autocast: Scalable infrastructure-less cooperative perception for distributed collaborative driving
Ateya et al. Energy-and latency-aware hybrid offloading algorithm for UAVs
CN105228180B (en) A kind of vehicle-mounted Delay Tolerant Network method for routing based on the estimation of node transfer capability
US11743694B2 (en) Vehicle to everything object exchange system
US20220240168A1 (en) Occupancy grid map computation, v2x complementary sensing, and coordination of cooperative perception data transmission in wireless networks
US9935875B2 (en) Filtering data packets to be relayed in the car2X network
EP3226647A1 (en) Wireless communication apparatus and wireless communication method
CN111294767A (en) A method, system, device and storage medium for data processing of intelligent networked vehicle
CN113498011A (en) Internet of vehicles method, device, equipment, storage medium and system
US20230087496A1 (en) Method for vru to predict movement path in wireless communication system supporting sidelink, and device for same
US20210125424A1 (en) Vehicle Running Status Field Model-Based Information Transmission Frequency Optimization Method in Internet of Vehicles
US20230080095A1 (en) Method and device for generating vru path map related to moving path of vru by softv2x server in wireless communication system supporting sidelink
CN109474897B (en) Single-hop cooperative broadcast method for safety message of Internet of Vehicles based on hidden Markov model
Chitra et al. Selective epidemic broadcast algorithm to suppress broadcast storm in vehicular ad hoc networks
Liang et al. Distributed information exchange with low latency for decision making in vehicular fog computing
CN112583872B (en) Communication method and device
WO2020175604A1 (en) Wireless communication terminal device, and wireless communication method therefor
Saleem et al. A vehicle-to-infrastructure data offloading scheme for vehicular networks with QoS provisioning
de Souza et al. A task offloading scheme for wave vehicular clouds and 5G mobile edge computing
Balen et al. Survey on using 5G technology in VANETs
CN116709249A (en) A management method for edge computing in Internet of Vehicles
Alsaleh How Do V2V and V2I Messages Affect the Performance of Driving Smart Vehicles?
EP4242938A1 (en) Method for processing image on basis of v2x message in wireless communication system and apparatus therefor
Zhu et al. Boosting Collaborative Vehicular Perception on the Edge with Vehicle-to-Vehicle Communication

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant