WO2023109589A1 - Smart car-UAV cooperative sensing system and method - Google Patents

Smart car-UAV cooperative sensing system and method

Info

Publication number
WO2023109589A1
Authority
WO
WIPO (PCT)
Prior art keywords
smart car
module
uav
information
unmanned aerial
Prior art date
Application number
PCT/CN2022/136955
Other languages
English (en)
French (fr)
Inventor
徐坤
李慧云
潘仲鸣
Original Assignee
深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳先进技术研究院
Publication of WO2023109589A1


Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 — Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/10 — Simultaneous control of position or course in three dimensions
    • G05D1/101 — Simultaneous control of position or course in three dimensions specially adapted for aircraft

Definitions

  • the present invention relates to the technical field of intelligent equipment, and more specifically, to an intelligent vehicle-drone cooperative sensing system and method.
  • the existing technical solutions for sensing the driving area mainly include two types.
  • One is single-vehicle (ego) perception, which relies on the sensors carried by the vehicle itself to collect information about the surrounding environment; the information is then processed on the on-board computing equipment, and corresponding algorithms and software are designed to realize perception of the surroundings, such as identifying objects, obstacles and roads in the environment, tracking dynamic obstacles, and delimiting the space in which the vehicle can drive.
  • the second type is cooperative perception, which relies not only on the vehicle's own sensors but also on sensing devices arranged in the surrounding environment, such as sensing modules installed at the roadside; the two kinds of sensors collect environmental information from different viewpoints, and the information is then fused and processed in a computing module to obtain a perception and understanding of the environment.
  • the computation can be performed on the vehicle or in a cloud computing module.
  • the patent application CN201510889283.4 discloses a UAV-based vehicle environment perception system and method comprising an unmanned aerial vehicle and a ground station. The UAV and the ground station mounted on the vehicle are connected by a tether; the UAV photographs the road surface and transmits the video images to the ground station, and the ground station on the vehicle computes flight parameters and sends them to the UAV so that the UAV and the vehicle move forward together.
  • Patent application CN201910181147.8 discloses a UAV-based vehicle driving assistance system with a parking cabin on the roof and a wireless charging board inside the cabin; the drone can take off from the vehicle and use its own high-definition camera to provide the driver with an extended field of view. It provides no method, however, for guaranteeing the reliability of vehicle-UAV cooperation once the system becomes more complex, and it lacks autonomous cooperation capability.
  • Patent application CN202110535374.3 discloses a vehicle-following tethered-drone obstacle avoidance system in which a tether connects the drone and the vehicle, and a decoupling device is designed so that, if the tether cable becomes entangled with an obstacle, the drone can be unhooked for a safe landing.
  • this patent application likewise does not address how to guarantee the reliability of cooperative sensing and lacks autonomous cooperation capability.
  • in existing approaches where a drone takes off to perform perception, the drone is connected to a ground station on the vehicle by a tether, which is unsuitable for complex environments (for example those with bridges, overhead wires, or tall trees); a ground-station computing module is needed to control the UAV's flight, which increases equipment cost (e.g., the ground station); and the UAV cannot perform autonomous tracking or flexible trajectory planning, so the detection range cannot be adjusted flexibly.
  • the purpose of the present invention is to overcome the defects of the above-mentioned prior art, and provide a smart car-UAV cooperative sensing system and method.
  • a smart car-unmanned aerial vehicle cooperative perception system includes: a smart car, one or more drones.
  • the smart car includes a first perception module, a first calculation module, a UAV cabin module and an active guidance module.
  • the drone includes a second perception module and a second calculation module, wherein: the first perception module is used to detect environmental information around the smart car; the UAV cabin module has space for placing one or more UAVs; the active guidance module is used to transmit guidance information that indicates the relative positioning between the UAV and the smart car; the first calculation module is used to evaluate the complexity of the surrounding environment based on the detection information returned by the UAV and the detection information of the smart car, and to decide, according to the evaluation result, whether to transmit the guidance information; the second perception module is used to detect environmental information within the drone's field of view and to identify the smart car or the guidance information transmitted by the smart car; the second calculation module is used to control the motion of the drone and to process the detection information of the second perception module to obtain semantic information about the environment around the smart car.
  • a smart car-drone cooperative sensing method includes the following steps:
  • detecting environmental information around the smart car, and receiving detection information returned by the UAV;
  • determining, based on the environmental information around the smart car and the detection information returned by the UAV, whether the smart car transmits guidance information, and instructing the UAV to obtain, in response to the guidance information, its relative positioning with respect to the smart car;
  • controlling the driving trajectory of the smart car based on the semantic information about the smart car's surrounding environment acquired by the UAV.
  • the present invention has the advantage of providing a new technical solution for reliable cooperative perception in complex environments: through reliable cooperation between UAVs and smart cars (or unmanned vehicles), multi-view perception of the environment is achieved, which overcomes the problem that existing cooperative perception systems lack a safety cross-validation step and therefore cannot guarantee system reliability.
  • Fig. 1 is a schematic diagram of a collaborative sensing system of a smart car-unmanned aerial vehicle according to an embodiment of the present invention
  • Fig. 2 is a flow chart of a smart car-UAV collaborative sensing method according to an embodiment of the present invention.
  • the provided collaborative sensing system includes smart cars and drones.
  • the smart car includes a first perception module, a first calculation module, a first communication module, a UAV cabin module, and an active guidance module.
  • the first perception module is equipped with smart-car environment detection sensors and smart-car vehicle-to-air detection sensors.
  • the environment detection sensors are used to detect environmental information around the smart car; one or more types can be provided, including but not limited to lidar, cameras, a satellite positioning system, and an inertial measurement unit.
  • the vehicle-to-air detection sensors are used to detect and identify drones in the sky above, and can include one type or a combination of several types, such as lidar and cameras.
  • the first communication module provides information exchange between the UAV and the smart car; optional communication methods include Wi-Fi and 4G or 5G networks.
  • preferably, a 5G network is used to provide real-time, high-bandwidth communication between the smart car and the drone.
  • the UAV cabin module has space for placing one or more UAVs; it has a door that can be opened or closed, and once the door is open the UAV can take off and land vertically through it; it has a wired and/or wireless charging system that can charge the drone using the on-board power supply and power conversion system.
  • the active guidance module is used to provide guidance information that the UAV can recognize, including but not limited to image markers and pattern markers formed by light-emitting LEDs. After the UAV detects the guidance information, it can obtain its relative positioning with respect to the smart car through its onboard computing module.
  • the relative positioning information can be used for reasonable trajectory control of the UAV (such as hovering, tracking, autonomous landing, or detection along a planned trajectory centered on the smart car).
  • the first calculation module processes the detection information returned by the UAV and the detection information of the smart car; by fusing these two kinds of information, it obtains a complexity assessment of the smart car's surroundings, guides the smart car's subsequent navigation planning and control, and performs safety cross-validation.
  • the safety cross-validation is described in detail below.
  • the number of UAVs can be one or more.
  • an unmanned aerial vehicle is taken as an example to describe the involved modules and corresponding functions.
  • the drone includes a second sensing module, a second computing module and a second communication module.
  • the second perception module is used to detect the environmental information within the field of view of the drone, and identify the smart car or the active guidance module on the smart car.
  • One or more types of detection and sensing devices can be installed on the second perception module, including but not limited to laser radar, camera, satellite positioning system, inertial measurement unit, etc.
  • the second communication module is used to provide information interaction between the drone and the smart car. For example, the environment detection result of the second calculation module of the drone is transmitted to the first communication module of the smart car through the second communication module.
  • the second computing module handles the motion control of the UAV; processes the detection information of the second perception module, performs data preprocessing, and obtains semantic information of the surrounding environment of the smart car; performs safety cross-validation, etc.
  • the reliable cooperative sensing method provided by the present invention includes the following steps.
  • Step S210 when the smart car is navigating in a structured and simple environment, the smart car only relies on the first sensing module and the first computing module to realize the perception of the environment and guide the car to realize safe navigation.
  • Step S220 when the smart car encounters a complex environment, the smart car sends a collaborative sensing request command to the UAV and the UAV cabin.
  • the smart car can determine whether it is in a complex environment based on its sensing of the surroundings, or it can conclude that it is currently in a complex environment when it cannot achieve safe navigation with its own navigation system alone.
  • Step S230: after receiving the cooperative sensing request instruction, the UAV cabin opens its door to clear the area above; after receiving the request instruction from the smart car and the signal that the cabin door has opened successfully, the UAV takes off from the smart car.
  • Step S240 the smart car acquires the relative position information of the UAV relative to the smart car, or referred to as the first relative position.
  • the smart car can obtain the first relative position in either of the following two ways:
  • 1) from the position information obtained by the satellite positioning systems of the UAV and the smart car, the first relative position (dxi, dyi, dzi) is computed as dxi = x2 − x1, dyi = y2 − y1, dzi = z2 − z1, where (x1, y1, z1) is the smart car's position given by its satellite positioning system and (x2, y2, z2) is the UAV's position given by its satellite positioning system; the satellite positioning system may be GPS, BeiDou or another system;
  • 2) the vehicle-to-air detection sensors of the first perception module identify the position of the UAV relative to the smart car (the first relative position). The recognition method is as follows: images or laser air-detection data are input to a recognition network model, and the network output is a regression of the first relative position.
  • the parameters of the recognition network model are obtained through training based on the sample data set, and each piece of sample data reflects the correspondence between the input image or detection data and the known relative position label.
  • the recognition network model can be of various types, preferably a deep neural network.
  • for the UAV, the relative position with respect to the smart car, referred to as the second relative position, can be obtained in either of the following two ways:
  • 1) from the position information obtained by the satellite positioning systems of the UAV and the smart car, the second relative position (dxi2, dyi2, dzi2) is computed as dxi2 = x2 − x1, dyi2 = y2 − y1, dzi2 = z2 − z1, with (x1, y1, z1) and (x2, y2, z2) defined as above;
  • 2) the UAV recognizes the active guidance module of the smart car and computes the second relative position with respect to the smart car.
  • the identification method is: the UAV collects downward-looking visual information that contains the active guidance module on the smart car; the active guidance module information can be an image marker (such as an ArUco or AprilTag fiducial) or a pattern formed by actively light-emitting LEDs (usable at night).
  • after the UAV captures an image containing the above active guidance module with its onboard camera, it uses a general image recognition algorithm (such as an edge detection algorithm) or a purpose-built neural network recognition model (preferably a deep neural network) to recognize the second relative position information, as illustrated in the sketch below.
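As a concrete illustration of the marker-based branch, the sketch below estimates the second relative position from one downward-facing camera frame using OpenCV's ArUco module. It is a minimal sketch under stated assumptions, not the patent's implementation: the marker dictionary, marker size, camera intrinsics, and the classic cv2.aruco API (detectMarkers / estimatePoseSingleMarkers, available in opencv-contrib up to 4.6) are all assumptions.

```python
import cv2
import numpy as np

# Assumed calibration values; replace with the UAV camera's real intrinsics.
CAMERA_MATRIX = np.array([[920.0, 0.0, 640.0],
                          [0.0, 920.0, 360.0],
                          [0.0, 0.0, 1.0]])
DIST_COEFFS = np.zeros(5)      # assume negligible lens distortion
MARKER_SIZE_M = 0.40           # assumed side length of the marker on the car roof

def second_relative_position(frame_bgr):
    """Return (dx, dy, dz) of the guidance marker in the camera frame (metres),
    or None if no marker is visible.  Uses the OpenCV <= 4.6 ArUco API."""
    aruco = cv2.aruco
    dictionary = aruco.getPredefinedDictionary(aruco.DICT_4X4_50)
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = aruco.detectMarkers(gray, dictionary)
    if ids is None or len(ids) == 0:
        return None
    # Pose of the first detected marker relative to the camera.
    rvecs, tvecs, _ = aruco.estimatePoseSingleMarkers(
        corners, MARKER_SIZE_M, CAMERA_MATRIX, DIST_COEFFS)
    dx, dy, dz = tvecs[0][0]   # translation of the marker in the camera frame
    return float(dx), float(dy), float(dz)
```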
  • Step S250: the UAV and the smart car perform two-way cross safety verification.
  • the smart car pre-sets a safe collaborative space for the UAV to perform collaborative sensing tasks.
  • the safe collaborative space is a conical (cone-shaped) space above the smart car.
  • the size of its space range can be calibrated and adjusted according to the specific smart car and UAV system. For example, at least the following three elements are met: the smart car and the UAV in the space have good communication quality; both the UAV and the smart car can clearly detect each other; and the requirements for the environmental detection range of smart car navigation are met.
  • based on the first relative position, the smart car judges whether the UAV's position falls within the safe collaborative space; if so, the subsequent motion and detection tasks continue; if the UAV is outside the safe collaborative space set by the smart car, the smart car sends an out-of-range instruction to the UAV, and the UAV adjusts its trajectory, returns to the safe collaborative space, and then carries on with subsequent tasks (see the membership-check sketch below).
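A minimal membership test for this branch is sketched below. It treats the safe collaborative space as an upright cone with its apex at the smart car, which is one reading of the "cone-shaped space" above; the height bounds, half-angle, and function names are illustrative assumptions, not values from the patent.

```python
import math

def in_safe_collaborative_space(dx, dy, dz,
                                min_height=2.0,        # assumed lower bound (m)
                                max_height=30.0,       # assumed upper bound (m)
                                half_angle_deg=35.0):  # assumed cone half-angle
    """Check whether the UAV's first relative position (dx, dy, dz), expressed in
    the smart car's frame with z pointing up, lies inside a vertical cone whose
    apex sits at the car."""
    if not (min_height <= dz <= max_height):
        return False
    horizontal = math.hypot(dx, dy)
    # Inside the cone: horizontal offset grows at most tan(half_angle) with height.
    return horizontal <= dz * math.tan(math.radians(half_angle_deg))

def enforce_safe_space(first_rel, send_out_of_range_cmd):
    """Smart-car side of branch 1: if the UAV left the space, tell it to return."""
    if not in_safe_collaborative_space(*first_rel):
        send_out_of_range_cmd()   # the UAV is expected to re-plan back into the cone
        return False
    return True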
  • the active verification of the safety of the drone's own position includes:
  • the smart car sends the first relative position to the drone;
  • the drone evaluates a separation parameter f between the first relative position and the second relative position, f = cx·|dxi − dxi2| + cy·|dyi − dyi2| + cz·|dzi − dzi2|, where cx, cy and cz are positive weight coefficients for the deviations in the x, y and z directions with cx + cy + cz = 1; the weights can be adjusted to suit different needs, for example setting cz larger than cx and cy when the vertical deviation matters more;
  • if the separation parameter is less than the preset threshold fmin, the cooperative perception system of the smart car and the UAV is considered to be working well, and subsequent work continues; if the parameter is greater than the set threshold, or either the first or the second relative position is abnormal (for example, no numerical value can be obtained, or the relative position exceeds a set limit), the cooperative perception and motion system is abnormal, and the trajectory or attitude must be corrected so that the UAV moves back toward the safe collaborative space, until the separation parameter falls below the set threshold. A minimal sketch of this check follows.
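The sketch below implements the UAV-side check exactly as the separation parameter is defined above (f = cx·|dxi − dxi2| + cy·|dyi − dyi2| + cz·|dzi − dzi2| with cx + cy + cz = 1). The weight values, the threshold fmin, and the plausibility bound used to flag an "abnormal" relative position are assumed placeholders.

```python
def separation_parameter(first_rel, second_rel, weights=(0.4, 0.4, 0.2)):
    """f = cx*|dx1-dx2| + cy*|dy1-dy2| + cz*|dz1-dz2|; the weights must sum to 1."""
    cx, cy, cz = weights
    assert min(weights) > 0 and abs(cx + cy + cz - 1.0) < 1e-6
    dx1, dy1, dz1 = first_rel
    dx2, dy2, dz2 = second_rel
    return cx * abs(dx1 - dx2) + cy * abs(dy1 - dy2) + cz * abs(dz1 - dz2)

def uav_position_is_safe(first_rel, second_rel, f_min=0.5, max_range=50.0):
    """UAV-side branch 2: True when cooperative perception looks healthy.
    f_min and max_range are assumed calibration values, not patent figures."""
    for rel in (first_rel, second_rel):
        if rel is None or any(abs(c) > max_range for c in rel):
            return False   # a relative position is missing or implausible -> abnormal
    return separation_parameter(first_rel, second_rel) < f_min
```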
  • the above two-way cross safety verification can be triggered periodically or by events. For example, a fixed execution period can be set, such as running the cross-validation once per second, and the period can be shortened when an anomaly occurs, for example to once every 500 ms. Alternatively, when the smart car or the drone detects an abnormal event, it actively requests a two-way cross-validation; for example, the inability to navigate safely can be treated as an abnormal event, or abnormal events can be determined from the complexity of the surrounding environment.
  • Step S260 under the condition of two-way cross-safety verification, the UAV moves and detects according to a predetermined trajectory pattern.
  • the predetermined trajectory modes include the following three types: hovering above the smart car at a specified relative position, moving along with the smart car, and moving within the safe collaborative space along a planned trajectory centered on the smart car; a minimal setpoint sketch follows.
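To make the three modes concrete, the sketch below generates a desired UAV position expressed in the smart car's frame. The hover offset, the treatment of "follow" as a fixed relative setpoint, and the orbit radius and period are illustrative assumptions; the patent does not prescribe them.

```python
import math

def trajectory_setpoint(mode, t, hover_offset=(0.0, 0.0, 10.0),
                        orbit_radius=8.0, orbit_period_s=20.0):
    """Return the desired UAV position relative to the smart car at time t (s).

    mode: 'hover'  - hold a fixed relative position above the car
          'follow' - same fixed offset; the car's own motion drags the setpoint along
          'orbit'  - circle the car within the safe collaborative space
    """
    ox, oy, oz = hover_offset
    if mode in ("hover", "follow"):
        return ox, oy, oz
    if mode == "orbit":
        ang = 2.0 * math.pi * (t / orbit_period_s)
        return orbit_radius * math.cos(ang), orbit_radius * math.sin(ang), oz
    raise ValueError(f"unknown trajectory mode: {mode}")
```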
  • step S270 the drone uses the second sensing module to detect environmental information.
  • the unmanned aerial vehicle inputs the information detected by the second sensing module into the second calculation module, performs preprocessing and identification of the detection information, and obtains the second perception information.
  • the drone transmits the second perception information to the first communication module of the smart car through the second communication module.
  • Step S280 the first computing module of the smart car fuses the detection information of the smart car and the drone.
  • the environmental information collected from multiple different perspectives of drones and smart cars can be aggregated to identify objects, roads, obstacles, etc. in the surrounding environment.
  • the identification method can use existing general image recognition technology (such as based on deep learning object classification). Since the information comes from both the detection information of the smart car and the information from the UAV's top-down perspective and forward-looking perspective, more comprehensive environmental information can be detected;
  • Another example is to understand and process the surrounding environment, output high-level abstract results, obtain a quantitative assessment of the complexity of the surrounding environment, establish a risk space-time situation map, etc., and send it to the planning module of the vehicle to guide the vehicle to achieve safe navigation.
  • the method for quantitatively evaluating complexity is as follows:
  • first, objects are identified using a semantic extraction method and semantic segmentation is performed, and an attribute value is defined for each segment. The attribute value reflects how easily the smart car can traverse that area: the larger the value, the worse the traversability. For example, the attribute value of an impassable obstacle or ravine area is defined as infinity, and sloped terrain has a larger attribute value than flat terrain.
  • then, based on the traversability attribute map defined above and the localization result, a risk spatio-temporal situation map is generated. Specifically, each area around the smart car is assigned a quantitative risk value equal to the traversability attribute value defined above.
  • the areas can be divided using a discrete grid map, so that every grid cell on the map carries a quantitative risk value; the larger the value, the higher the risk to the vehicle of traversing that cell in the future. On this basis, when navigating, the smart car plans its driving path through lower-risk cells according to the target position and the risk map, ensuring safe passage over complex terrain (see the sketch below).
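The following is a minimal sketch of such a grid-based risk map. Only the rules stated above come from the text (larger value means worse traversability, impassable areas get infinity, slopes score worse than flat ground); the specific class labels, numeric attribute values, and the toy neighbour-selection step are assumptions for illustration.

```python
import numpy as np

# Assumed mapping from semantic class label to traversability attribute value.
ATTRIBUTE_VALUE = {
    "flat_ground": 1.0,
    "slope": 3.0,          # sloped terrain is worse than flat terrain
    "obstacle": np.inf,    # impassable obstacle
    "ravine": np.inf,      # impassable ravine
}

def risk_grid(semantic_grid):
    """semantic_grid: 2-D array of class labels for the cells around the smart car.
    Returns an array of risk values, one per grid cell; higher means riskier."""
    lookup = np.vectorize(lambda label: ATTRIBUTE_VALUE.get(label, np.inf))
    return lookup(semantic_grid)

def lowest_risk_neighbour(grid, cell):
    """Toy planning step: choose the 4-neighbour of `cell` with the smallest risk."""
    rows, cols = grid.shape
    r, c = cell
    candidates = [(r + dr, c + dc)
                  for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                  if 0 <= r + dr < rows and 0 <= c + dc < cols]
    return min(candidates, key=lambda rc: grid[rc])
```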
  • step S290 the UAV autonomously lands in the parking cabin after completing the environment detection.
  • the number of drones can be greater than one. If there are n drones, then in addition to the first relative position there are the second relative position, ..., up to the (n+1)-th relative position; for example, the (n+1)-th relative position denotes the position of the n-th drone relative to the smart car as acquired by that drone. In this case, the two-way cross safety verification is extended to cover the smart car and the multiple drones, in a manner similar to the process described above, which is not repeated here.
  • by providing an active guidance module and designing a two-way cross safety verification mechanism between the smart car and the UAV, the present invention improves the reliability of the UAV and smart car when performing cooperative sensing tasks, and is applicable to cooperative perception between one smart car and multiple UAVs.
  • the present invention can be a system, method and/or computer program product.
  • a computer program product may include a computer readable storage medium having computer readable program instructions thereon for causing a processor to implement various aspects of the present invention.
  • a computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device.
  • a computer readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • a non-exhaustive list of more specific examples of computer-readable storage media includes: a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), compact disc read-only memory (CD-ROM), digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or raised structures in a groove with instructions recorded thereon, and any suitable combination of the foregoing.
  • computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., pulses of light through fiber optic cables), or transmitted electrical signals.
  • Computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or downloaded to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or a network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device .
  • Computer program instructions for performing operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk, C++ and Python, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • Computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • electronic circuits, such as programmable logic circuits, field-programmable gate arrays (FPGAs) or programmable logic arrays (PLAs), can be personalized by utilizing state information of the computer-readable program instructions, and these electronic circuits can execute the computer-readable program instructions to implement various aspects of the invention.
  • these computer-readable program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create an apparatus for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • these computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause computers, programmable data processing devices and/or other devices to work in a specific way, so that the computer-readable medium storing the instructions comprises an article of manufacture including instructions for implementing various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • each block in a flowchart or block diagram may represent a module, a program segment, or a portion of instructions, which comprises one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or acts, or by a combination of dedicated hardware and computer instructions. It is well known to those skilled in the art that implementation by means of hardware, implementation by means of software, and implementation by a combination of software and hardware are all equivalent.

Abstract

A smart car-UAV cooperative sensing system and method. In the smart car-UAV cooperative sensing system, the smart car comprises a first perception module, a first calculation module, a UAV cabin module and an active guidance module, and the UAV comprises a second perception module and a second calculation module. The first perception module is used to detect environmental information around the smart car; the UAV cabin module has space for placing one or more UAVs; the active guidance module transmits guidance information that indicates to the UAV how to obtain its relative positioning with respect to the smart car; the first calculation module is used to evaluate the complexity of the surrounding environment and to decide, according to the evaluation result, whether to transmit the guidance information; the second perception module detects environmental information and recognizes the guidance information transmitted by the smart car; the second calculation module controls the motion of the UAV and processes the detected information to obtain semantic information about the environment around the smart car. The smart car-UAV cooperative sensing system and method actively intervene to guide the UAV in correcting its attitude and trajectory, improving the reliability of cooperative sensing with the UAV.

Description

Smart car-UAV cooperative sensing system and method
Technical field
The present invention relates to the technical field of intelligent equipment, and more specifically to a smart car-UAV cooperative sensing system and method.
Background
When a smart car operates autonomously in complex environments (including structured road environments and unstructured field environments), or in unknown or unstructured environments, it must first perceive the surrounding scene and determine the drivable area before reasonable path planning and motion control can be carried out. Accurately perceiving and identifying the drivable area is therefore the key to safe navigation of a smart car, with broad application prospects in reconnaissance, search, surveillance and similar fields.
Existing technical solutions for perceiving the drivable area fall mainly into two categories. The first is single-vehicle perception, which relies on the sensors carried by the vehicle itself to collect information about the surrounding environment; the information is then processed and computed on the on-board computing equipment, and corresponding algorithms and software are designed to realize perception of the surroundings, such as identifying objects, obstacles and roads in the environment, tracking dynamic obstacles, and delimiting the space in which the vehicle can drive. The second category is cooperative perception, which relies not only on the vehicle's own sensors but also on sensing devices arranged in the surrounding environment, such as sensing modules installed at the roadside; the two kinds of sensors collect environmental information from different viewpoints, and the information is then fused and processed in a computing module to obtain a perception and understanding of the environment. The computation can be performed on the vehicle or in a cloud computing module.
For example, for cooperative perception, patent application CN201510889283.4 discloses a UAV-based vehicle environment perception system and method comprising an unmanned aerial vehicle and a ground station. The UAV and the ground station mounted on the vehicle are connected by a tether; the UAV photographs the road surface and transmits the video images to the ground station, and the ground station on the vehicle computes flight parameters and sends them to the UAV so that the UAV moves forward together with the vehicle. Patent application CN201910181147.8 discloses a UAV-based vehicle driving assistance system with a parking cabin on the roof and a wireless charging board inside the cabin; the UAV can take off from the vehicle and use its own high-definition camera to provide the driver with an extended field of view, but it provides no method for guaranteeing the reliability of vehicle-UAV cooperation once the system becomes more complex, and it lacks autonomous cooperation capability. Patent application CN202110535374.3 discloses a vehicle-following tethered-UAV obstacle avoidance system in which a tether connects the UAV and the vehicle, and a decoupling device is designed so that, if the tether cable becomes entangled with an obstacle, the UAV can be unhooked for a safe landing. This patent application likewise does not address how to guarantee the reliability of cooperative sensing and lacks autonomous cooperation capability.
Technical problem
The existing technical solutions have mainly the following defects:
1) The environmental information obtained by single-vehicle perception is limited: only information detected from the smart car's first-person viewpoint is available, so comprehensive and accurate detection of the surroundings cannot be achieved.
2) Although cooperative perception expands the range of environment detection, it requires sensing devices to be installed in advance at suitable positions in the environment, which is inflexible and unsuitable for unknown environments entered for the first time (in such environments, sensors cannot be arranged in advance at suitable positions in the driving space).
3) In existing approaches where a UAV takes off to perform perception, the UAV is connected to the ground station on the vehicle by a tether, which is unsuitable for complex environments (for example those with bridges, overhead wires, or tall trees); a ground-station computing module is needed to control the UAV's flight, which increases equipment cost (e.g., the ground station); and the UAV cannot perform autonomous tracking or flexible autonomous trajectory planning, so the detection range cannot be adjusted flexibly.
4) In existing approaches where a UAV and a vehicle perform cooperative perception, the addition of the UAV makes the system more complex, and there is no mechanism or method for reliable cooperative perception, so reliable cooperative perception of the system is difficult to guarantee.
Technical solution
The purpose of the present invention is to overcome the above defects of the prior art and to provide a smart car-UAV cooperative sensing system and method.
According to a first aspect of the present invention, a smart car-UAV cooperative sensing system is provided. The system comprises a smart car and one or more UAVs. The smart car comprises a first perception module, a first calculation module, a UAV cabin module and an active guidance module, and the UAV comprises a second perception module and a second calculation module, wherein: the first perception module is used to detect environmental information around the smart car; the UAV cabin module has space for placing one or more UAVs; the active guidance module is used to transmit guidance information that indicates to the UAV how to obtain its relative positioning with respect to the smart car; the first calculation module is used to evaluate the complexity of the surrounding environment based on the detection information returned by the UAV and the detection information of the smart car, and to decide, according to the evaluation result, whether to transmit the guidance information; the second perception module is used to detect environmental information within the UAV's field of view and to recognize the smart car or the guidance information transmitted by the smart car; the second calculation module is used to control the motion of the UAV and to process the detection information of the second perception module to obtain semantic information about the environment around the smart car.
According to a second aspect of the present invention, a smart car-UAV cooperative sensing method is provided. The method comprises the following steps:
detecting environmental information around the smart car;
receiving detection information returned by the UAV;
determining, based on the environmental information around the smart car and the detection information returned by the UAV, whether the smart car transmits guidance information, and instructing the UAV to obtain, in response to the guidance information, its relative positioning with respect to the smart car;
controlling the driving trajectory of the smart car based on the semantic information about the smart car's surrounding environment acquired by the UAV.
Beneficial effects
Compared with the prior art, the advantage of the present invention is that it provides a new technical solution for reliable cooperative perception in complex environments: through reliable cooperation between the UAV and the smart car (also called an unmanned vehicle), multi-view perception of the environment is achieved, overcoming the problem that existing cooperative perception systems lack a safety cross-validation step and therefore cannot guarantee system reliability.
Other features and advantages of the present invention will become clear from the following detailed description of exemplary embodiments of the present invention with reference to the accompanying drawings.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the present invention and, together with the description, serve to explain the principles of the present invention.
Fig. 1 is a schematic diagram of a smart car-UAV cooperative sensing system according to an embodiment of the present invention;
Fig. 2 is a flow chart of a smart car-UAV cooperative sensing method according to an embodiment of the present invention.
Embodiments of the present invention
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that, unless otherwise specified, the relative arrangement of components and steps, the numerical expressions and the numerical values set forth in these embodiments do not limit the scope of the present invention.
The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit the present invention or its application or uses.
Techniques, methods and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate such techniques, methods and devices should be regarded as part of the specification.
In all examples shown and discussed herein, any specific value should be interpreted as merely illustrative rather than limiting; other examples of the exemplary embodiments may therefore have different values.
It should be noted that similar reference numerals and letters denote similar items in the following figures; once an item has been defined in one figure, it need not be discussed further in subsequent figures.
As shown in Fig. 1, the provided cooperative sensing system comprises a smart car and a UAV. In one embodiment, the smart car comprises a first perception module, a first calculation module, a first communication module, a UAV cabin module and an active guidance module.
The first perception module is provided with smart-car environment detection sensors and smart-car vehicle-to-air detection sensors. The environment detection sensors are used to detect environmental information around the smart car, and one or more types can be provided, including but not limited to lidar, cameras, a satellite positioning system and an inertial measurement unit. The vehicle-to-air detection sensors are used to detect and identify UAVs in the sky above, and can include one type or a combination of several types, such as lidar and cameras.
The first communication module provides information exchange between the UAV and the smart car; optional communication methods include Wi-Fi and 4G or 5G networks. Preferably, a 5G network is used to provide real-time, high-bandwidth communication between the smart car and the UAV.
The UAV cabin module has space for placing one or more UAVs; it has a door that can be controlled to open or close, and once the door is open the UAV can take off and land vertically through it; it has a wired and/or wireless charging system that can charge the UAV using the on-board power supply and power conversion system.
The active guidance module is used to provide guidance information that the UAV can recognize, including but not limited to image markers and patterns formed by light-emitting LEDs. After detecting the guidance information, the UAV can obtain its relative positioning with respect to the smart car through its onboard computing module, and this relative positioning information can be used for reasonable trajectory control of the UAV (such as hovering, tracking, autonomous landing, or detection along a planned trajectory centered on the smart car).
The first calculation module processes the detection information returned by the UAV and the detection information of the smart car; by fusing the two kinds of information, it obtains a complexity assessment of the smart car's surroundings, guides the smart car's subsequent navigation planning and control, and performs safety cross-validation. The safety cross-validation is described in detail below.
It should be noted that, in the provided cooperative sensing system, the number of UAVs may be one or more. For clarity, the following description takes one UAV as an example to describe the modules involved and their corresponding functions.
In one embodiment, the UAV comprises a second perception module, a second calculation module and a second communication module.
The second perception module is used to detect environmental information within the UAV's field of view and to recognize the smart car or the active guidance module on the smart car. One or more types of detection sensors can be installed in the second perception module, including but not limited to lidar, cameras, a satellite positioning system and an inertial measurement unit.
The second communication module is used to provide information exchange between the UAV and the smart car; for example, the environment detection results of the UAV's second calculation module are transmitted through the second communication module to the first communication module of the smart car.
The second calculation module handles the motion control of the UAV; processes the detection information of the second perception module, performs data preprocessing, and obtains semantic information about the environment around the smart car; and performs safety cross-validation, among other tasks.
As shown in Fig. 2, based on the above system, the reliable cooperative sensing method provided by the present invention comprises the following steps.
Step S210: when the smart car is navigating in a structured, simple environment, it relies only on the first perception module and the first calculation module to perceive the environment and guide the vehicle to navigate safely.
Step S220: when the smart car encounters a complex environment, it sends a cooperative sensing request instruction to the UAV and the UAV cabin.
The smart car can determine whether it is in a complex environment based on its sensing of the surroundings, or it can conclude that it is currently in a complex environment when it cannot achieve safe navigation with its own navigation system alone.
Step S230: after receiving the cooperative sensing request instruction, the UAV cabin opens its door to clear the area above; after receiving the request instruction from the smart car and the signal that the cabin door has opened successfully, the UAV takes off from the smart car.
Step S240: the smart car acquires the relative position information of the UAV with respect to the smart car, referred to here as the first relative position.
For example, the smart car can obtain the first relative position in either of the following two ways:
1) From the position information obtained by the satellite positioning systems of the UAV and the smart car, the first relative position (dxi, dyi, dzi) is computed as dxi = x2 − x1, dyi = y2 − y1, dzi = z2 − z1, where (x1, y1, z1) is the position of the smart car given by its satellite positioning system and (x2, y2, z2) is the position of the UAV given by its satellite positioning system. The satellite positioning system may be GPS, BeiDou or another system.
2) The vehicle-to-air detection sensors of the first perception module identify the position of the UAV relative to the smart car (the first relative position). For example, the recognition method is: images or laser air-detection data are input to a recognition network model, and the network output is a regression of the first relative position. The parameters of the recognition network model are obtained by training on a sample data set, each sample reflecting the correspondence between an input image or detection data and a known relative position label. The recognition network model can be of various types, preferably a deep neural network.
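As an illustration of such a recognition network, the sketch below defines a small convolutional regressor that maps a vehicle-to-air camera image to the three components of the first relative position, together with one training step against labelled samples. The architecture, input size, optimizer, learning rate and loss are assumptions made for the example; the patent only specifies that a (preferably deep) network regresses the relative position from images or laser data.

```python
import torch
import torch.nn as nn

class RelativePositionNet(nn.Module):
    """Toy CNN regressor: RGB image of the sky above the car -> (dx, dy, dz)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 3)   # regress dx, dy, dz in metres

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# One training step on a batch of (image, known relative-position label) samples.
model = RelativePositionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

images = torch.randn(8, 3, 224, 224)   # placeholder batch of camera frames
labels = torch.randn(8, 3)             # placeholder relative-position labels
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```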
For the UAV, the relative position with respect to the smart car, referred to as the second relative position, can be obtained in either of the following two ways:
1) From the position information obtained by the satellite positioning systems of the UAV and the smart car, the second relative position (dxi2, dyi2, dzi2) is computed as dxi2 = x2 − x1, dyi2 = y2 − y1, dzi2 = z2 − z1, where (x1, y1, z1) is the position of the smart car given by its satellite positioning system and (x2, y2, z2) is the position of the UAV given by its satellite positioning system.
2) The UAV recognizes the active guidance module of the smart car and computes the second relative position with respect to the smart car. For example, the recognition method is: the UAV collects downward-looking visual information that contains the active guidance module on the smart car; the active guidance module information can be an image marker (such as an ArUco or AprilTag fiducial) or a pattern formed by actively light-emitting LEDs (usable at night). After the UAV captures an image containing the active guidance module with its onboard camera, it uses a general image recognition algorithm (such as an edge detection algorithm) or a purpose-built neural network recognition model (preferably a deep neural network) to recognize the second relative position information.
Step S250: the UAV and the smart car perform two-way cross safety verification.
To guarantee the reliability of the system during cooperative sensing, a two-way cross safety verification method between the UAV and the smart car is designed, comprising the following two verification branches:
1) The smart car actively verifies the safety of the UAV.
a) The smart car presets a safe collaborative space in which the UAV performs cooperative sensing tasks.
In one embodiment, the safe collaborative space is a cone-shaped space whose extent can be calibrated and adjusted for the specific smart car and UAV system. For example, it satisfies at least the following three requirements: within the space, the smart car and the UAV have good communication quality; the UAV and the smart car can both clearly detect each other; and the environment detection range required for smart-car navigation is covered.
b) Based on the first relative position, the smart car judges whether the UAV's position falls within the safe collaborative space. If so, the subsequent motion and detection tasks continue; if the UAV is outside the safe collaborative space set by the smart car, the smart car sends an out-of-range instruction to the UAV, and the UAV adjusts its trajectory, returns to the safe collaborative space, and then carries on with subsequent tasks.
2) The UAV actively verifies the safety of its own position.
In one embodiment, the UAV's active verification of the safety of its own position comprises:
a) the smart car sends the first relative position to the UAV;
b) the UAV evaluates a separation parameter f between the first relative position and the second relative position, expressed as:
f = cx·|dxi − dxi2| + cy·|dyi − dyi2| + cz·|dzi − dzi2|,
where cx, cy and cz are the weight coefficients of the deviations between the first and second relative positions in the x, y and z directions; all are positive and cx + cy + cz = 1. The weights can be adjusted to actual needs; for example, if the deviation in the z direction matters more, cz can be set larger than cx and cy.
If the separation parameter is less than a preset threshold fmin, the cooperative perception system of the smart car and the UAV is considered to be working well and subsequent work continues. If the parameter is greater than the set threshold, or either the first or the second relative position is abnormal (for example, no numerical value can be obtained, or the relative position exceeds a set limit), the cooperative perception and motion system is abnormal, and the trajectory or attitude must be corrected so that the UAV moves back toward the safe collaborative space, until the separation parameter falls below the set threshold.
The above two-way cross safety verification can be triggered periodically or by events. For example, a fixed execution period can be set, such as running the cross-validation once per second, and the period can be shortened when an anomaly occurs, for example to once every 500 ms. Alternatively, when the smart car or the UAV detects an abnormal event, it actively requests a two-way cross-validation; for example, the inability to navigate safely can be treated as an abnormal event, or abnormal events can be determined from the complexity of the surrounding environment.
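A minimal sketch of this triggering policy follows: periodic cross-validation with a shortened period after an anomaly, plus an event-triggered path. The 1 s and 500 ms figures come from the text above; the function names, the polling interval, and the rule for restoring the normal period are illustrative assumptions.

```python
import time

NORMAL_PERIOD_S = 1.0     # run the cross-validation once per second
DEGRADED_PERIOD_S = 0.5   # speed up to once every 500 ms after an anomaly

def verification_loop(run_cross_validation, abnormal_event_pending):
    """run_cross_validation() -> True when the two-way check passed;
    abnormal_event_pending() -> True when the car or the UAV raised an event."""
    period = NORMAL_PERIOD_S
    next_check = time.monotonic()
    while True:
        now = time.monotonic()
        if now >= next_check or abnormal_event_pending():
            ok = run_cross_validation()
            # Shorten the period after a failure, restore it once healthy again.
            period = NORMAL_PERIOD_S if ok else DEGRADED_PERIOD_S
            next_check = now + period
        time.sleep(0.05)   # avoid busy-waiting between checks
```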
Step S260: subject to the two-way cross safety verification, the UAV moves and performs detection according to a predetermined trajectory mode.
In one embodiment, the predetermined trajectory mode includes the following three types: hovering above the smart car at a specified relative position, moving together with the smart car, and moving within the safe collaborative space along a planned trajectory centered on the smart car.
Step S270: the UAV detects environmental information with the second perception module. The UAV feeds the information detected by the second perception module into the second calculation module, which preprocesses and recognizes the detection information to obtain the second perception information. The UAV then transmits the second perception information to the first communication module of the smart car through the second communication module.
Step S280: the first calculation module of the smart car fuses the detection information of the smart car and the UAV.
For example, the environmental information collected from the multiple different viewpoints of the UAV and the smart car is aggregated to identify objects, roads, obstacles and the like in the surroundings; the identification can use existing general image recognition techniques (such as deep-learning-based object classification). Since the information includes both the smart car's own detections and the UAV's top-down and forward-looking views, more comprehensive environmental information can be detected.
As another example, the surrounding environment is interpreted and processed to output high-level abstract results, obtain a quantitative assessment of the complexity of the surroundings, and build a risk spatio-temporal situation map, which is sent to the vehicle's planning module to guide the vehicle to navigate safely.
In one embodiment, the method for quantitatively evaluating complexity is as follows:
First, objects are identified using a semantic extraction method and semantic segmentation is performed, and an attribute value is defined for each segment. The attribute value reflects how easily the smart car can traverse that area: the larger the value, the worse the traversability. For example, the attribute value of an impassable obstacle or ravine area is defined as infinity, and sloped terrain has a larger attribute value than flat terrain.
Then, based on the traversability attribute map defined above and the localization result, a risk spatio-temporal situation map is generated. Specifically, each area around the smart car is assigned a quantitative risk value equal to the traversability attribute value defined above. The areas can be divided using a discrete grid map, so that every grid cell on the map carries a quantitative risk value; the larger the value, the higher the risk to the vehicle of traversing that cell in the future.
On this basis, when navigating, the smart car plans its driving path through lower-risk cells according to the target position and the risk spatio-temporal situation map, ensuring that the smart car can pass safely over complex terrain.
Step S290: after completing the environment detection, the UAV lands autonomously in the parking cabin.
It should be noted that, in the system provided by the present invention, the number of UAVs can be greater than one. If there are n UAVs, then in addition to the first relative position there are the second relative position, ..., up to the (n+1)-th relative position; for example, the (n+1)-th relative position denotes the position of the n-th UAV relative to the smart car as acquired by that UAV. In this case, the two-way cross safety verification is extended to cover the smart car and the multiple UAVs, in a manner similar to the process described above, which is not repeated here.
To further verify the effect of the present invention, tests were carried out on a prototype system. The experiments show that when the relative position between the UAV and the smart car deviates excessively, or when the UAV leaves the safe space set by the smart car, the present invention actively intervenes to guide the UAV in correcting its attitude and trajectory, improving the reliability of cooperative sensing with the UAV.
In summary, by providing an active guidance module and designing a two-way cross safety verification mechanism between the smart car and the UAV, the present invention improves the reliability of the UAV and smart car when performing cooperative sensing tasks, and is applicable to cooperative perception between one smart car and multiple UAVs.
The present invention may be a system, a method and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement various aspects of the present invention.
The computer-readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), compact disc read-only memory (CD-ROM), digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or raised structures in a groove with instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as a transient signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.
The computer-readable program instructions described herein can be downloaded from the computer-readable storage medium to the respective computing/processing device, or to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium within the respective computing/processing device.
The computer program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk, C++ and Python, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as programmable logic circuits, field-programmable gate arrays (FPGAs) or programmable logic arrays (PLAs), may be personalized by utilizing state information of the computer-readable program instructions, and this electronic circuitry may execute the computer-readable program instructions in order to implement various aspects of the present invention.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer or another programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, a programmable data processing apparatus and/or other devices to function in a particular manner, so that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus or another device, so that a series of operational steps are performed on the computer, other programmable apparatus or other device to produce a computer-implemented process, such that the instructions executed on the computer, other programmable apparatus or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the accompanying drawings illustrate the architecture, functionality and operation of possible implementations of systems, methods and computer program products according to multiple embodiments of the present invention. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of instructions, which comprises one or more executable instructions for implementing the specified logical functions. In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures; for example, two successive blocks may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or acts, or by a combination of dedicated hardware and computer instructions. It is well known to those skilled in the art that implementation in hardware, implementation in software, and implementation by a combination of software and hardware are all equivalent.
The embodiments of the present invention have been described above. The above description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvements over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the present invention is defined by the appended claims.

Claims (10)

  1. A smart car-UAV cooperative sensing system, comprising a smart car and one or more UAVs, characterized in that the smart car comprises a first perception module, a first calculation module, a UAV cabin module and an active guidance module, and the UAV comprises a second perception module and a second calculation module, wherein:
    the first perception module is used to detect environmental information around the smart car;
    the UAV cabin module has space for placing one or more UAVs;
    the active guidance module is used to transmit guidance information that indicates to the UAV how to obtain its relative positioning with respect to the smart car;
    the first calculation module is used to evaluate the complexity of the surrounding environment based on the detection information returned by the UAV and the detection information of the smart car, and to decide, according to the evaluation result, whether to transmit the guidance information;
    the second perception module is used to detect environmental information within the UAV's field of view and to recognize the smart car or the guidance information transmitted by the smart car;
    the second calculation module is used to control the motion of the UAV and to process the detection information of the second perception module to obtain semantic information about the environment around the smart car.
  2. The system according to claim 1, characterized in that, in response to the guidance information transmitted by the smart car, the UAV obtains its relative positioning with respect to the smart car and cooperates with the smart car to perform two-way cross safety verification.
  3. The system according to claim 2, characterized in that the two-way cross safety verification comprises:
    the smart car actively verifies, based on a first relative position, whether the UAV falls within a preset safe collaborative space; if not, the smart car sends an out-of-range instruction to the UAV so that the UAV adjusts its trajectory, where the first relative position is the position of the UAV relative to the smart car as acquired by the smart car;
    the UAV actively verifies the safety of its own position based on the first relative position and a second relative position; if its position is judged unsafe, the UAV performs trajectory correction so as to move back toward the safe collaborative space, where the second relative position is the position relative to the smart car as sensed by the UAV.
  4. The system according to claim 3, characterized in that the UAV actively verifying the safety of its own position based on the first relative position and the second relative position comprises:
    the UAV receives the first relative position from the smart car;
    the UAV evaluates a separation parameter f between the first relative position and the second relative position; if f is greater than a set threshold fmin, or the first relative position or the second relative position is abnormal, the UAV determines that its own position is in an unsafe state.
  5. The system according to claim 4, characterized in that the separation parameter f is expressed as:
    f = cx·|dxi − dxi2| + cy·|dyi − dyi2| + cz·|dzi − dzi2|,
    where cx, cy and cz are the weight coefficients of the deviations between the first relative position dxi and the second relative position dxi2 in the x, y and z directions; all are positive and cx + cy + cz = 1.
  6. The system according to claim 1, characterized in that the guidance information comprises image markers or pattern markers formed by light-emitting LEDs.
  7. The system according to claim 1, characterized in that the first perception module comprises smart-car environment detection sensors and smart-car vehicle-to-air detection sensors, the environment detection sensors being used to detect environmental information around the smart car, and the vehicle-to-air detection sensors being used to detect and identify UAVs in the sky above.
  8. The system according to claim 1, characterized in that the smart car evaluates the complexity of the surrounding environment, so as to control its driving trajectory, based on the following steps:
    identifying objects using semantic extraction and performing semantic segmentation, and defining an attribute value for each segment, where the attribute value reflects how easily the smart car can traverse that area, so as to obtain a traversability attribute map;
    generating a risk spatio-temporal situation map based on the traversability attribute map and the localization result, wherein each area around the smart car is assigned a quantitative risk value equal to the defined traversability attribute value; the areas are divided using a discrete grid map, so that every grid cell on the map carries a quantitative risk value.
  9. A smart car-UAV cooperative sensing method, for the system according to any one of claims 1 to 8, comprising the following steps:
    detecting environmental information around the smart car;
    receiving detection information returned by the UAV;
    determining, based on the environmental information around the smart car and the detection information returned by the UAV, whether the smart car transmits guidance information, and instructing the UAV to obtain, in response to the guidance information, its relative positioning with respect to the smart car;
    controlling the driving trajectory of the smart car based on the semantic information about the smart car's surrounding environment acquired by the UAV.
  10. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method according to claim 9.
PCT/CN2022/136955 2021-12-13 2022-12-06 一种智能车-无人机的协同感知系统及方法 WO2023109589A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111519197.6 2021-12-13
CN202111519197.6A CN114489112A (zh) 2021-12-13 2021-12-13 一种智能车-无人机的协同感知系统及方法

Publications (1)

Publication Number Publication Date
WO2023109589A1 true WO2023109589A1 (zh) 2023-06-22

Family

ID=81493025

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/136955 WO2023109589A1 (zh) 2021-12-13 2022-12-06 一种智能车-无人机的协同感知系统及方法

Country Status (2)

Country Link
CN (1) CN114489112A (zh)
WO (1) WO2023109589A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117111178A (zh) * 2023-10-18 2023-11-24 中国电建集团贵阳勘测设计研究院有限公司 一种堤坝隐患和险情空地水协同探测系统及方法

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114489112A (zh) * 2021-12-13 2022-05-13 深圳先进技术研究院 一种智能车-无人机的协同感知系统及方法
CN116540784B (zh) * 2023-06-28 2023-09-19 西北工业大学 一种基于视觉的无人系统空地协同导航与避障方法

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105512628A (zh) * 2015-12-07 2016-04-20 北京航空航天大学 基于无人机的车辆环境感知系统及方法
CN107479554A (zh) * 2017-09-07 2017-12-15 苏州三体智能科技有限公司 机器人系统及其户外建图导航方法
CN110221623A (zh) * 2019-06-17 2019-09-10 酷黑科技(北京)有限公司 一种空地协同作业系统及其定位方法
CN110221625A (zh) * 2019-05-27 2019-09-10 北京交通大学 无人机精确位置的自主降落导引方法
DE102018205578A1 (de) * 2018-04-12 2019-10-17 Audi Ag Verfahren zum Bilden eines Konvois mindestens ein unbemanntes, autonom bewegbares Objekt umfassend, sowie entsprechend ausgebildetes, bewegbares Objekt
CN111300372A (zh) * 2020-04-02 2020-06-19 同济人工智能研究院(苏州)有限公司 空地协同式智能巡检机器人及巡检方法
CN111707988A (zh) * 2020-05-29 2020-09-25 江苏科技大学 基于无人车车载uwb基站的无人器定位系统及定位方法
CN112731922A (zh) * 2020-12-14 2021-04-30 南京大学 基于室内定位的无人机辅助智能车驾驶方法与系统
CN114489112A (zh) * 2021-12-13 2022-05-13 深圳先进技术研究院 一种智能车-无人机的协同感知系统及方法

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105512628A (zh) * 2015-12-07 2016-04-20 北京航空航天大学 基于无人机的车辆环境感知系统及方法
CN107479554A (zh) * 2017-09-07 2017-12-15 苏州三体智能科技有限公司 机器人系统及其户外建图导航方法
DE102018205578A1 (de) * 2018-04-12 2019-10-17 Audi Ag Verfahren zum Bilden eines Konvois mindestens ein unbemanntes, autonom bewegbares Objekt umfassend, sowie entsprechend ausgebildetes, bewegbares Objekt
CN110221625A (zh) * 2019-05-27 2019-09-10 北京交通大学 无人机精确位置的自主降落导引方法
CN110221623A (zh) * 2019-06-17 2019-09-10 酷黑科技(北京)有限公司 一种空地协同作业系统及其定位方法
CN111300372A (zh) * 2020-04-02 2020-06-19 同济人工智能研究院(苏州)有限公司 空地协同式智能巡检机器人及巡检方法
CN111707988A (zh) * 2020-05-29 2020-09-25 江苏科技大学 基于无人车车载uwb基站的无人器定位系统及定位方法
CN112731922A (zh) * 2020-12-14 2021-04-30 南京大学 基于室内定位的无人机辅助智能车驾驶方法与系统
CN114489112A (zh) * 2021-12-13 2022-05-13 深圳先进技术研究院 一种智能车-无人机的协同感知系统及方法

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117111178A (zh) * 2023-10-18 2023-11-24 中国电建集团贵阳勘测设计研究院有限公司 一种堤坝隐患和险情空地水协同探测系统及方法
CN117111178B (zh) * 2023-10-18 2024-02-06 中国电建集团贵阳勘测设计研究院有限公司 一种堤坝隐患和险情空地水协同探测系统及方法

Also Published As

Publication number Publication date
CN114489112A (zh) 2022-05-13

Similar Documents

Publication Publication Date Title
Yasin et al. Unmanned aerial vehicles (uavs): Collision avoidance systems and approaches
US10809361B2 (en) Hybrid-view LIDAR-based object detection
Beul et al. Fast autonomous flight in warehouses for inventory applications
US11164016B2 (en) Object detection and property determination for autonomous vehicles
Kouris et al. Learning to fly by myself: A self-supervised cnn-based approach for autonomous navigation
US10310087B2 (en) Range-view LIDAR-based object detection
Lee et al. Deep learning-based monocular obstacle avoidance for unmanned aerial vehicle navigation in tree plantations: Faster region-based convolutional neural network approach
WO2023109589A1 (zh) 一种智能车-无人机的协同感知系统及方法
CN102707724B (zh) 一种无人机的视觉定位与避障方法及系统
CN112558608B (zh) 一种基于无人机辅助的车机协同控制及路径优化方法
CN114200471B (zh) 基于无人机的森林火源检测系统、方法、存储介质、设备
US11498587B1 (en) Autonomous machine motion planning in a dynamic environment
CN112596071A (zh) 无人机自主定位方法、装置及无人机
CN112379681A (zh) 无人机避障飞行方法、装置及无人机
US20230111354A1 (en) Method and system for determining a mover model for motion forecasting in autonomous vehicle control
Wallar et al. Foresight: Remote sensing for autonomous vehicles using a small unmanned aerial vehicle
Chen et al. A review of autonomous obstacle avoidance technology for multi-rotor UAVs
EP3674972A1 (en) Methods and systems for generating training data for neural network
Mutz et al. Following the leader using a tracking system based on pre-trained deep neural networks
CN113167038B (zh) 一种车辆通过道闸横杆的方法及装置
Chen et al. From perception to control: an autonomous driving system for a formula student driverless car
Lin Moving obstacle avoidance for unmanned aerial vehicles
Deniz et al. Autonomous Landing of eVTOL Vehicles via Deep Q-Networks
Rodríguez-Gómez et al. UAV human teleoperation using event-based and frame-based cameras
Pant et al. Obstacle Avoidance Method for UAVs using Polar Grid

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22906339

Country of ref document: EP

Kind code of ref document: A1