WO2020135187A1 - UAV identification and positioning system and method based on RGB_D and deep convolutional network - Google Patents

UAV identification and positioning system and method based on RGB_D and deep convolutional network

Info

Publication number
WO2020135187A1
Authority
WO
WIPO (PCT)
Prior art keywords
module
image
drone
camera
uav
Prior art date
Application number
PCT/CN2019/126349
Other languages
English (en)
French (fr)
Inventor
樊宽刚
杨杰
邓永芳
唐宏
Original Assignee
赣州德业电子科技有限公司
江西理工大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 赣州德业电子科技有限公司, 江西理工大学
Publication of WO2020135187A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance

Definitions

  • The invention relates to the technical field of UAV identification and positioning, and in particular to a UAV identification and positioning system and method based on RGB_D and a deep convolutional network.
  • UAVs are a hotspot of the new round of global scientific, technological and industrial revolution and are now used in many fields. The field keeps making breakthroughs: UAVs have moved beyond their purely military past and are gradually extending into civilian, police and household use. For difficult, high-risk and highly demanding tasks beyond human capability, the UAV emerged to replace manned aircraft in carrying them out.
  • A UAV is a radio-controlled device, hence some call it a remotely piloted aircraft. It makes near-perfect use of advanced technologies such as artificial intelligence, signal processing and autonomous piloting, and thanks to its small size, unmanned operation and long range it is applied in many areas such as natural environment surveys, scientific research, agriculture, and safeguarding national sovereignty and public health and safety, making it a major contemporary hotspot.
  • The present invention aims to provide a UAV identification and positioning system and method based on RGB_D and a deep convolutional network, which can automatically identify UAVs within an area and achieve their specific positioning with high identification and positioning accuracy, addressing regional UAV safety problems and avoiding the impact of UAVs.
  • A UAV identification and positioning system based on RGB_D and a deep convolutional network comprises a camera monitoring module, a UAV identification module, a 2D-image-to-3D-mesh module and an RGB_D ranging and positioning module;
  • the camera monitoring module is used to acquire images of the entire monitored area;
  • the UAV identification module receives the images of the monitored area acquired by the camera monitoring module, matches them against pre-stored UAV image features, and identifies whether a UAV is present in the monitored area;
  • the 2D-image-to-3D-mesh module is used to generate a 3D mesh map from the images of the monitored area, through a graph convolutional neural network, when the UAV identification module identifies a UAV in the monitored area;
  • the RGB_D ranging and positioning module is used to acquire an RGB_D image of the monitored area through a binocular camera when the UAV identification module identifies a UAV in the monitored area, compute the distance between the UAV and the binocular camera from the relationship between their color depths in the RGB_D image, and combine this with the UAV direction obtained from the 3D mesh map to achieve specific positioning of the UAV.
  • The camera monitoring module comprises several cameras, each arranged at a different position in the monitored area, with the fields of view of all cameras together covering the entire monitored area.
  • The cameras are installed in a distributed, surrounding manner, ensuring that the cameras adjacent on the left and right of any given camera are visible within that camera's field of view.
  • The invention also provides a method for identifying and positioning a UAV using the above system, comprising the following steps:
  • S1. The camera monitoring module acquires images of the entire monitored area;
  • S2. The UAV identification module receives the images of the monitored area acquired by the camera monitoring module, matches them against pre-stored UAV image features, and identifies whether a UAV is present in the monitored area;
  • S3. When the UAV identification module identifies a UAV in the monitored area, the 2D-image-to-3D-mesh module generates a 3D mesh map from the images of the monitored area through the graph convolutional neural network; the RGB_D ranging and positioning module acquires an RGB_D image of the monitored area through the binocular camera, computes the distance between the UAV and the binocular camera from the relationship between their color depths in the RGB_D image, and combines this with the UAV direction obtained from the 3D mesh map to achieve specific positioning of the UAV.
  • In step S1, all cameras in the camera monitoring module transmit their acquired images to the UAV identification module; in step S2, the UAV identification module analyzes the frames of the monitored area captured by all cameras at the same instant, matches each frame against the pre-stored UAV image features, identifies whether a UAV appears in each frame, and thereby determines whether a UAV is present in the monitored area at that instant; in step S3, the 2D-image-to-3D-mesh module combines the frames captured by all cameras at the same instant to compute and generate the 3D mesh map of the entire monitored area.
  • The pre-stored UAV image features are obtained as follows: a set of UAV images differing in function and design is pre-stored in the UAV automatic identification module, and UAV image features are extracted from them.
  • In step S2, the 2D-image-to-3D-mesh module extracts features of the monitored-area images at different levels through a multi-layer graph convolutional neural network, and then generates the 3D mesh map through a cascaded mesh deformation network.
  • In step S3, the RGB_D ranging and positioning module computes the distance between the UAV and the binocular camera according to the formula distance(C1, C2) = √[(C1R - C2R)² + (C1G - C2G)² + (C1B - C2B)²], where C1 and C2 denote the colors of the UAV and of the binocular camera, C1R and C2R denote the R channels of those colors, C1G and C2G their G channels, and C1B and C2B their B channels, respectively.
  • The invention achieves regional monitoring, automatic identification and positioning of UAVs, with high identification efficiency and strong resistance to interference.
  • The present invention is specifically divided into a camera monitoring module, a UAV identification module, a 2D-image-to-3D-mesh module and an RGB_D ranging and positioning module.
  • The camera monitoring module collects images of the monitored area, and the UAV identification module identifies the UAV in the images collected by the cameras; at the same time, the images are processed by the graph convolutional neural network to restore the monitored area as a 3D mesh, and the generated 3D mesh map yields the direction of the UAV; the RGB_D ranging and positioning module computes the distance between the UAV and the binocular camera in the RGB_D image, achieving specific positioning of the UAV. Acting together, the modules realize regional monitoring, automatic identification and positioning of UAVs.
  • FIG. 1 is a schematic diagram of the system structure in Embodiment 1 of the present invention.
  • FIG. 2 is a schematic plan view of the arrangement of cameras in the camera monitoring module in Embodiment 1 of the present invention.
  • FIG. 3 is an overview diagram of a cascaded mesh deformation network in Embodiment 1 of the present invention.
  • FIG. 4 is a schematic diagram of UAV positioning in a three-dimensional grid diagram in Embodiment 1 of the present invention.
  • FIG. 5 is a schematic flowchart of the method in Embodiment 2 of the present invention.
  • This embodiment provides a UAV identification and positioning system based on RGB_D and a deep convolutional network. As shown in FIG. 1, it comprises a camera monitoring module, a UAV identification module, a 2D-image-to-3D-mesh module and an RGB_D ranging and positioning module;
  • the camera monitoring module is used to acquire images of the entire monitored area;
  • the UAV identification module receives the images of the monitored area acquired by the camera monitoring module, matches them against pre-stored UAV image features, and identifies whether a UAV is present in the monitored area;
  • the 2D-image-to-3D-mesh module is used to generate a 3D mesh map from the images of the monitored area, through a graph convolutional neural network, when the UAV identification module identifies a UAV in the monitored area;
  • the RGB_D ranging and positioning module is used to acquire an RGB_D image of the monitored area through a binocular camera when the UAV identification module identifies a UAV in the monitored area, compute the distance between the UAV and the binocular camera from the relationship between their color depths in the RGB_D image, and combine this with the UAV direction obtained from the 3D mesh map to achieve specific positioning of the UAV.
  • The camera monitoring module comprises several cameras, each arranged at a different position in the monitored area, with the fields of view of all cameras together covering the entire monitored area;
  • in this embodiment, as shown in FIG. 2, the camera monitoring module comprises four cameras. Dividing the area among the four cameras' viewing angles allows regional monitoring of UAVs and collection of multi-angle video images of the monitored area, with the combined viewing angles of all cameras achieving full coverage of the monitored area.
  • Seamless coverage of the monitored area can be achieved through the overlapping viewing angles of the cameras.
  • The cameras are installed in a distributed, surrounding manner, so that the cameras adjacent on the left and right of any given camera are visible within that camera's field of view.
  • As shown in FIG. 2, camera 1, placed at a higher position with its orientation adjusted, monitors the area in front of it, and the adjacent cameras on its left and right are covered within its monitoring field of view.
  • The remaining cameras 2, 3 and 4 are arranged similarly; their heights and positions may differ, but each must keep its left and right neighboring cameras within its field of view. This arrangement is achieved by adjusting the viewing angle and orientation of each camera, guaranteeing full coverage of the monitored area by the cameras' fields of view.
  • The UAV identification module analyzes the frames of the monitored area captured by all cameras at the same instant and matches each frame against the pre-stored UAV image features, achieving automatic identification of UAVs. Matching multi-angle images simultaneously helps further improve identification accuracy.
  • A set of UAV images differing in function and design is pre-stored in the UAV automatic identification module and UAV image features are extracted from them; during identification, the extracted UAV image features are used to perform, in sequence, image recognition, target extraction, feature analysis and image matching on the images of the monitored area, identifying the UAV in those images and thereby achieving automatic UAV identification.
  • The UAV identification module and the camera monitoring module identify and monitor simultaneously, ensuring efficient automatic identification of UAVs.
  • The 2D-image-to-3D-mesh module extracts features of the monitored-area images at different levels through a multi-layer graph convolutional neural network, and then generates a 3D mesh map through a cascaded mesh deformation network, restoring the image of the monitored area in the form of a 3D mesh.
  • The generated 3D mesh map is used to measure the distance and angle parameters needed for the calculation.
  • The 2D-image-to-3D-mesh module combines the monitored-area images from all cameras at different angles to compute the 3D mesh map of the entire monitored area with higher accuracy.
  • The image feature network is a 2D convolutional neural network that extracts perceptual features from the input image.
  • The cascaded mesh deformation network uses the extracted multi-level image features to progressively deform an ellipsoid mesh into the required 3D mesh model.
  • The cascaded mesh deformation network is a graph-based convolutional network containing three deformation blocks linked by two graph unpooling layers.
  • The image of the monitored area serves as the input image, and the image feature network, a 2D convolutional neural network, extracts perceptual features from it; the extracted perceptual feature data serve as the input to the deformation network.
  • The three deformation blocks progressively deform the ellipsoid mesh, from coarse to fine, into the required 3D mesh model.
  • The cascaded mesh deformation network uses the multi-level image features to progressively deform the ellipsoid mesh into the required 3D mesh model, generating a high-accuracy 3D mesh map and restoring the monitored area as a 3D mesh.
  • A feature point P on the UAV in the restored 3D mesh map is selected as reference; the actual coordinates of camera points A, B, C and D are known, and combining the two parameters of distance and direction yields the specific coordinates of the UAV, achieving UAV positioning.
  • The RGB_D ranging and positioning module obtains an RGB_D image of the monitored area through a binocular camera, analyzes the image color depths of the UAV and of the binocular camera, computes the distance from the relationship between the two color depths, and combines this with the UAV direction obtained from the 3D mesh map to position the UAV.
  • An RGB_D image is actually two images: an ordinary RGB three-channel color image and a depth image. A depth image resembles a grayscale image, except that each pixel value is the actual distance from the sensor to the object. Image depth determines the number of colors each pixel of a color image can take, or the number of gray levels each pixel of a grayscale image can take; it sets the maximum number of colors that can appear in a color image, or the maximum gray level in a grayscale image. Objects at different distances in the image have different image depths, so the distance between two objects can be computed from their image-depth relationship in the RGB_D image.
  • Color distance refers to the difference between two colors: generally, the greater the distance, the more the two colors differ; conversely, the smaller it is, the closer the two colors are. In RGB space, the distance between two colors is obtained as distance(C1, C2) = √[(C1R - C2R)² + (C1G - C2G)² + (C1B - C2B)²], where C1 and C2 denote color 1 and color 2, C1R and C2R denote their R channels, C1G and C2G their G channels, and C1B and C2B their B channels, respectively.
  • The distance between the UAV and the binocular camera is obtained from their color difference.
  • This embodiment provides a method for UAV identification and positioning using the system described in Embodiment 1. As shown in FIG. 5, it comprises the following steps:
  • S1. The camera monitoring module acquires images of the entire monitored area;
  • S2. The UAV identification module receives the images of the monitored area acquired by the camera monitoring module, matches them against pre-stored UAV image features, and identifies whether a UAV is present in the monitored area;
  • S3. When the UAV identification module identifies a UAV in the monitored area, the 2D-image-to-3D-mesh module generates a 3D mesh map from the images of the monitored area through the graph convolutional neural network; the RGB_D ranging and positioning module acquires an RGB_D image of the monitored area through the binocular camera, computes the distance between the UAV and the binocular camera from the relationship between their color depths in the RGB_D image, and combines this with the UAV direction obtained from the 3D mesh map to achieve specific positioning of the UAV.
  • In step S1, all cameras in the camera monitoring module transmit their acquired images to the UAV identification module; in step S2, the UAV identification module analyzes the frames of the monitored area captured by all cameras at the same instant, matches each frame against the pre-stored UAV image features, identifies whether a UAV appears in each frame, and thereby determines whether a UAV is present in the monitored area at that instant; in step S3, the 2D-image-to-3D-mesh module combines the frames captured by all cameras at the same instant to compute and generate the 3D mesh map of the entire monitored area.
  • The pre-stored UAV image features are obtained as follows: a set of UAV images differing in function and design is pre-stored in the UAV automatic identification module, and UAV image features are extracted from them.
  • In step S2, the 2D-image-to-3D-mesh module extracts features of the monitored-area images at different levels through a multi-layer graph convolutional neural network, and then generates the 3D mesh map through a cascaded mesh deformation network.
  • In step S3, the RGB_D ranging and positioning module computes the distance between the UAV and the binocular camera according to the formula distance(C1, C2) = √[(C1R - C2R)² + (C1G - C2G)² + (C1B - C2B)²], where C1 and C2 denote the colors of the UAV and of the binocular camera, C1R and C2R denote the R channels of those colors, C1G and C2G their G channels, and C1B and C2B their B channels, respectively.

Abstract

A UAV identification and positioning system and method based on RGB_D and a deep convolutional network, comprising a camera monitoring module, a UAV identification module, a 2D-image-to-3D-mesh module and an RGB_D ranging and positioning module. The camera monitoring module acquires images of the entire monitored area; the UAV identification module matches the images of the monitored area against pre-stored UAV image features to identify whether a UAV is present in the monitored area; the 2D-image-to-3D-mesh module generates a 3D mesh map from the images of the monitored area acquired by the camera monitoring module, through a graph convolutional neural network; the RGB_D ranging and positioning module acquires an RGB_D image of the monitored area through a binocular camera, computes the distance between the UAV and the binocular camera from the relationship between their color depths in the RGB_D image, and combines this with the UAV direction obtained from the 3D mesh map to achieve specific positioning of the UAV. The system achieves high-accuracy identification and positioning of UAVs within the area.

Description

UAV identification and positioning system and method based on RGB_D and deep convolutional network
Technical Field
The present invention relates to the technical field of UAV identification and positioning, and in particular to a UAV identification and positioning system and method based on RGB_D and a deep convolutional network.
Background Art
UAVs are a hotspot of the new round of global scientific, technological and industrial revolution and are now used in many fields. The field keeps making breakthroughs: UAVs have moved beyond their purely military past and are gradually extending into civilian, police and household use. For difficult, high-risk and highly demanding tasks beyond human capability, the UAV emerged to replace manned aircraft in carrying them out. A UAV is a radio-controlled device, hence some call it a remotely piloted aircraft. It makes near-perfect use of advanced technologies such as artificial intelligence, signal processing and autonomous piloting, and thanks to its small size, unmanned operation and long range it is applied in many areas such as natural environment surveys, scientific research, agriculture, and safeguarding national sovereignty and public health and safety, making it a major contemporary hotspot.
As UAV use becomes widespread, UAV safety problems grow increasingly serious and regulation remains limited, so UAV incidents occur repeatedly and techniques for UAV identification, monitoring and positioning attract ever more attention.
Summary of the Invention
In view of the deficiencies of the prior art, the present invention aims to provide a UAV identification and positioning system and method based on RGB_D and a deep convolutional network, which can automatically identify UAVs within an area and achieve their specific positioning with high identification and positioning accuracy, addressing regional UAV safety problems and avoiding the impact of UAVs.
To achieve the above objective, the present invention adopts the following technical solution:
A UAV identification and positioning system based on RGB_D and a deep convolutional network comprises a camera monitoring module, a UAV identification module, a 2D-image-to-3D-mesh module and an RGB_D ranging and positioning module;
the camera monitoring module is used to acquire images of the entire monitored area;
the UAV identification module receives the images of the monitored area acquired by the camera monitoring module, matches them against pre-stored UAV image features, and identifies whether a UAV is present in the monitored area;
the 2D-image-to-3D-mesh module is used to generate a 3D mesh map from the images of the monitored area, through a graph convolutional neural network, when the UAV identification module identifies a UAV in the monitored area;
the RGB_D ranging and positioning module is used to acquire an RGB_D image of the monitored area through a binocular camera when the UAV identification module identifies a UAV in the monitored area, compute the distance between the UAV and the binocular camera from the relationship between their color depths in the RGB_D image, and combine this with the UAV direction obtained from the 3D mesh map to achieve specific positioning of the UAV.
Further, the camera monitoring module comprises several cameras, each arranged at a different position in the monitored area, with the fields of view of all cameras together covering the entire monitored area.
Further, the cameras are installed in a distributed, surrounding manner, ensuring that the cameras adjacent on the left and right of any given camera are visible within that camera's field of view.
The present invention also provides a method for UAV identification and positioning using the above system, comprising the following steps:
S1. The camera monitoring module acquires images of the entire monitored area;
S2. The UAV identification module receives the images of the monitored area acquired by the camera monitoring module, matches them against pre-stored UAV image features, and identifies whether a UAV is present in the monitored area;
S3. When the UAV identification module identifies a UAV in the monitored area, the 2D-image-to-3D-mesh module generates a 3D mesh map from the images of the monitored area, acquired by the camera monitoring module, through the graph convolutional neural network; the RGB_D ranging and positioning module acquires an RGB_D image of the monitored area through the binocular camera, computes the distance between the UAV and the binocular camera from the relationship between their color depths in the RGB_D image, and combines this with the UAV direction obtained from the 3D mesh map to achieve specific positioning of the UAV.
Further, in step S1, all cameras in the camera monitoring module transmit their acquired images to the UAV identification module; in step S2, the UAV identification module analyzes the frames of the monitored area captured by all cameras at the same instant, matches each frame against the pre-stored UAV image features, identifies whether a UAV appears in each frame, and thereby determines whether a UAV is present in the monitored area at that instant; in step S3, the 2D-image-to-3D-mesh module combines the frames captured by all cameras at the same instant to compute and generate the 3D mesh map of the entire monitored area.
Further, in step S2, the pre-stored UAV image features are obtained as follows: a set of UAV images differing in function and design is pre-stored in the UAV automatic identification module, and UAV image features are extracted from them.
Further, in step S2, the 2D-image-to-3D-mesh module extracts features of the monitored-area images at different levels through a multi-layer graph convolutional neural network, and then generates the 3D mesh map through a cascaded mesh deformation network.
Further, in step S3, the RGB_D ranging and positioning module computes the distance between the UAV and the binocular camera according to the following formula:
distance(C1, C2) = √[(C1R - C2R)² + (C1G - C2G)² + (C1B - C2B)²]
where C1 and C2 denote the colors of the UAV and of the binocular camera, C1R and C2R denote the R channels of those colors, C1G and C2G their G channels, and C1B and C2B their B channels, respectively.
The beneficial effects of the present invention are:
1. The present invention achieves regional monitoring, automatic identification and positioning of UAVs, with high identification efficiency and strong resistance to interference.
2. The present invention is specifically divided into a camera monitoring module, a UAV identification module, a 2D-image-to-3D-mesh module and an RGB_D ranging and positioning module. The camera monitoring module captures images of the monitored area; the UAV identification module identifies the UAV in the images captured by the cameras; at the same time, the images are processed by a graph convolutional neural network to restore the monitored area as a 3D mesh, and the generated 3D mesh map yields the direction of the UAV; the RGB_D ranging and positioning module then computes the distance between the UAV and the binocular camera in the RGB_D image, achieving specific positioning of the UAV. Acting together, the modules realize regional monitoring, automatic identification and positioning of UAVs.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the system structure in Embodiment 1 of the present invention;
FIG. 2 is a schematic plan view of the camera arrangement in the camera monitoring module in Embodiment 1 of the present invention;
FIG. 3 is a schematic overview of the cascaded mesh deformation network in Embodiment 1 of the present invention;
FIG. 4 is a schematic diagram of UAV positioning in the 3D mesh map in Embodiment 1 of the present invention;
FIG. 5 is a schematic flowchart of the method in Embodiment 2 of the present invention.
Detailed Description of the Embodiments
The present invention is further described below with reference to the drawings. It should be noted that the embodiments, premised on the present technical solution, give detailed implementations and specific operating procedures, but the scope of protection of the present invention is not limited to these embodiments.
Embodiment 1
This embodiment provides a UAV identification and positioning system based on RGB_D and a deep convolutional network. As shown in FIG. 1, it comprises a camera monitoring module, a UAV identification module, a 2D-image-to-3D-mesh module and an RGB_D ranging and positioning module;
the camera monitoring module is used to acquire images of the entire monitored area;
the UAV identification module receives the images of the monitored area acquired by the camera monitoring module, matches them against pre-stored UAV image features, and identifies whether a UAV is present in the monitored area;
the 2D-image-to-3D-mesh module is used to generate a 3D mesh map from the images of the monitored area, through a graph convolutional neural network, when the UAV identification module identifies a UAV in the monitored area;
the RGB_D ranging and positioning module is used to acquire an RGB_D image of the monitored area through a binocular camera when the UAV identification module identifies a UAV in the monitored area, compute the distance between the UAV and the binocular camera from the relationship between their color depths in the RGB_D image, and combine this with the UAV direction obtained from the 3D mesh map to achieve specific positioning of the UAV.
Further, the camera monitoring module comprises several cameras, each arranged at a different position in the monitored area, with the fields of view of all cameras together covering the entire monitored area;
in this embodiment, as shown in FIG. 2, the camera monitoring module comprises four cameras. Dividing the area among the four cameras' viewing angles allows regional monitoring of UAVs and collection of multi-angle video images of the monitored area, with the combined viewing angles of all cameras achieving full coverage of the monitored area.
Furthermore, seamless coverage of the monitored area can be achieved through the overlapping viewing angles of the cameras. The cameras are installed in a distributed, surrounding manner, so that the cameras adjacent on the left and right of any given camera are visible within that camera's field of view. As shown in FIG. 2, camera 1, placed at a higher position with its orientation adjusted, monitors the area in front of it, and the adjacent cameras on its left and right are covered within its monitoring field of view. The remaining cameras 2, 3 and 4 are arranged similarly; their heights and positions may differ, but each must keep its left and right neighboring cameras within its field of view. This arrangement is achieved by adjusting the viewing angle and orientation of each camera, and it guarantees full coverage of the monitored area by the cameras' fields of view.
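As an illustrative aside, the visibility constraint described above can be checked geometrically. The following sketch assumes a simple 2D layout with hypothetical camera positions, headings and fields of view; none of these values come from the patent:

```python
import math

# Hypothetical surround layout: position (x, y), heading and horizontal
# field of view in degrees. All values are illustrative assumptions.
cameras = {
    1: {"pos": (0.0, 0.0),   "heading": 45.0,  "fov": 120.0},
    2: {"pos": (10.0, 0.0),  "heading": 135.0, "fov": 120.0},
    3: {"pos": (10.0, 10.0), "heading": 225.0, "fov": 120.0},
    4: {"pos": (0.0, 10.0),  "heading": 315.0, "fov": 120.0},
}
# Ring order: each camera's left and right neighbours in the surround.
neighbours = {1: (4, 2), 2: (1, 3), 3: (2, 4), 4: (3, 1)}

def sees(viewer, target):
    """True if `target` lies inside `viewer`'s horizontal field of view."""
    vx, vy = cameras[viewer]["pos"]
    tx, ty = cameras[target]["pos"]
    bearing = math.degrees(math.atan2(ty - vy, tx - vx))
    # Smallest signed angle between the camera heading and the bearing.
    off = (bearing - cameras[viewer]["heading"] + 180.0) % 360.0 - 180.0
    return abs(off) <= cameras[viewer]["fov"] / 2.0

for cam, (left, right) in neighbours.items():
    assert sees(cam, left) and sees(cam, right), f"camera {cam} misses a neighbour"
print("every camera keeps both neighbours in view")
```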
When multiple cameras are used, the UAV identification module analyzes the frames of the monitored area captured by all cameras at the same instant and matches each frame against the pre-stored UAV image features, achieving automatic identification of UAVs. Matching multi-angle images simultaneously helps further improve identification accuracy.
Further, a set of UAV images differing in function and design is pre-stored in the UAV automatic identification module and UAV image features are extracted from them. During identification, the extracted UAV image features are used to perform, in sequence, image recognition, target extraction, feature analysis and image matching on the images of the monitored area, identifying the UAV in those images and thereby achieving automatic UAV identification.
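The patent does not name a specific matching algorithm for this pipeline. As one possible concrete reading, the "feature extraction plus image matching" step could be sketched with ORB descriptors and a brute-force matcher from OpenCV; the gallery file names and thresholds below are assumptions for illustration:

```python
import cv2

# Pre-stored gallery of UAV images differing in function and design
# (file names are hypothetical placeholders).
gallery = [cv2.imread(p, cv2.IMREAD_GRAYSCALE)
           for p in ("uav_quad.png", "uav_fixed_wing.png", "uav_hexa.png")]

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Extract the pre-stored UAV image features once, offline.
gallery_features = [orb.detectAndCompute(img, None)[1] for img in gallery]

def frame_contains_uav(frame_gray, min_matches=25):
    """Match one monitoring frame against all pre-stored UAV features."""
    _, descriptors = orb.detectAndCompute(frame_gray, None)
    if descriptors is None:
        return False
    for ref in gallery_features:
        if ref is None:
            continue
        matches = matcher.match(ref, descriptors)
        # Keep only descriptor pairs that are reasonably close.
        good = [m for m in matches if m.distance < 40]
        if len(good) >= min_matches:
            return True
    return False
```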
Since existing UAVs differ considerably in function and design, a set of UAV images differing in function and design must be collected for feature extraction, which improves identification accuracy.
In practice, the UAV identification module and the camera monitoring module identify and monitor simultaneously, ensuring efficient automatic identification of UAVs.
Further, the 2D-image-to-3D-mesh module extracts features of the monitored-area images at different levels through a multi-layer graph convolutional neural network, and then generates the 3D mesh map through a cascaded mesh deformation network, restoring the monitored area in the form of a 3D mesh. The generated 3D mesh map is used to measure the distance and angle parameters needed for the calculation. When multiple cameras monitor the area, the 2D-image-to-3D-mesh module combines the monitored-area images from all cameras at different angles to compute the 3D mesh map of the entire monitored area with higher accuracy.
It should be noted that, in generating the 3D mesh from 2D images, the image feature network is a 2D convolutional neural network that extracts perceptual features from the input image, and the cascaded mesh deformation network uses the extracted multi-level image features to progressively deform an ellipsoid mesh into the required 3D mesh model.
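To make the graph-convolution idea concrete, here is a minimal PyTorch sketch of one graph convolution over mesh vertices, in the spirit of the Pixel2Mesh work cited in the search report; the layer shape and feature sizes are assumptions, not the patent's exact network:

```python
import torch
import torch.nn as nn

class MeshGraphConv(nn.Module):
    """One graph convolution over mesh vertices: each vertex mixes its own
    features with the mean of its neighbours' features."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_self = nn.Linear(in_dim, out_dim)
        self.w_neigh = nn.Linear(in_dim, out_dim)

    def forward(self, verts, adj):
        # verts: (N, in_dim) vertex features; adj: (N, N) 0/1 adjacency.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh_mean = adj @ verts / deg  # average of neighbour features
        return torch.relu(self.w_self(verts) + self.w_neigh(neigh_mean))

# Toy usage: a tetrahedron (4 mutually adjacent vertices) carrying 3-D
# coordinates plus a hypothetical 16-D perceptual feature from the image CNN.
adj = torch.ones(4, 4) - torch.eye(4)
feats = torch.randn(4, 3 + 16)
layer = MeshGraphConv(19, 32)
out = layer(feats, adj)  # -> (4, 32)
```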
The cascaded mesh deformation network is a graph-based convolutional network; it contains three deformation blocks linked by two graph unpooling layers. As shown in FIG. 3, the image of the monitored area serves as the input image, and the image feature network, a 2D convolutional neural network, extracts perceptual features from it; with the extracted perceptual features as input, the three deformation blocks of the cascaded mesh deformation network progressively deform the ellipsoid mesh, from coarse to fine, into the required 3D mesh model. Using the multi-level image features, the cascaded mesh deformation network thus generates a high-accuracy 3D mesh map, restoring the monitored area as a 3D mesh. Referring to FIG. 4, a feature point P on the UAV in the restored 3D mesh map is selected as reference; the actual coordinates of camera points A, B, C and D are known, and combining the two parameters of distance and direction yields the specific coordinates of the UAV, achieving UAV positioning.
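Once the mesh map supplies a direction and the RGB_D module supplies a distance, the final step reduces to moving from a known camera point along a unit direction vector. A sketch under assumed coordinates (the camera position, angles and distance below are illustrative, not values from the patent):

```python
import numpy as np

# Known camera point A from the surveyed layout (illustrative coordinates).
A = np.array([0.0, 0.0, 3.0])

# Direction of the UAV as seen from A, read off the 3D mesh map,
# assumed here to be given as azimuth/elevation in degrees.
azimuth, elevation = np.radians(30.0), np.radians(20.0)
u = np.array([
    np.cos(elevation) * np.cos(azimuth),
    np.cos(elevation) * np.sin(azimuth),
    np.sin(elevation),
])  # unit vector by construction

d = 42.0  # distance from the RGB_D ranging module, in metres (assumed)

P = A + d * u  # specific coordinates of feature point P on the UAV
print(P)
```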
Further, the RGB_D ranging and positioning module obtains an RGB_D image of the monitored area through the binocular camera, analyzes the image color depths of the UAV and of the binocular camera, computes the distance from the relationship between the two color depths, and combines this with the UAV direction obtained from the 3D mesh map to position the UAV.
It should be noted that an RGB_D image is actually two images: an ordinary RGB three-channel color image and a depth image. A depth image resembles a grayscale image, except that each pixel value is the actual distance from the sensor to the object. Image depth determines the number of colors each pixel of a color image can take, or the number of gray levels each pixel of a grayscale image can take; it sets the maximum number of colors that can appear in a color image, or the maximum gray level in a grayscale image. Objects at different distances in the image have different image depths, so the distance between two objects can be computed from their image-depth relationship in the RGB_D image.
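As a small illustration of this two-part representation (the array shapes and the millimetre depth scale below are assumptions; real RGB_D sensors vary):

```python
import numpy as np

h, w = 480, 640
rgb = np.zeros((h, w, 3), dtype=np.uint8)    # ordinary RGB three-channel image
depth = np.zeros((h, w), dtype=np.uint16)    # per-pixel sensor-to-object range
depth[240, 320] = 42000                      # e.g. 42 m at the centre pixel

DEPTH_SCALE = 0.001  # assumed: raw depth units are millimetres

def range_difference(px1, px2):
    """Difference in sensor-to-object range between two pixels, in metres."""
    return abs(float(depth[px1]) - float(depth[px2])) * DEPTH_SCALE
```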
It should further be noted that color distance refers to the difference between two colors: generally, the greater the distance, the more the two colors differ; conversely, the smaller it is, the closer the two colors are. In RGB space, the distance between two colors is obtained as:
distance(C1, C2) = √[(C1R - C2R)² + (C1G - C2G)² + (C1B - C2B)²]
where C1 and C2 denote color 1 and color 2, C1R and C2R denote the R channels of color 1 and color 2, C1G and C2G their G channels, and C1B and C2B their B channels, respectively.
The distance between the UAV and the binocular camera is obtained from their color difference.
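A direct transcription of the formula above into code (a sketch; the sample colors are arbitrary):

```python
import math

def color_distance(c1, c2):
    """Euclidean distance between two RGB colors, per the formula above."""
    (r1, g1, b1), (r2, g2, b2) = c1, c2
    return math.sqrt((r1 - r2) ** 2 + (g1 - g2) ** 2 + (b1 - b2) ** 2)

# Example: color of the UAV region vs. a reference color at the camera.
uav_color = (180, 40, 40)
camera_color = (30, 30, 30)
print(color_distance(uav_color, camera_color))  # about 150.7
```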
Embodiment 2
This embodiment provides a method for UAV identification and positioning using the system of Embodiment 1. As shown in FIG. 5, it comprises the following steps:
S1. The camera monitoring module acquires images of the entire monitored area;
S2. The UAV identification module receives the images of the monitored area acquired by the camera monitoring module, matches them against pre-stored UAV image features, and identifies whether a UAV is present in the monitored area;
S3. When the UAV identification module identifies a UAV in the monitored area, the 2D-image-to-3D-mesh module generates a 3D mesh map from the images of the monitored area through the graph convolutional neural network; the RGB_D ranging and positioning module acquires an RGB_D image of the monitored area through the binocular camera, computes the distance between the UAV and the binocular camera from the relationship between their color depths in the RGB_D image, and combines this with the UAV direction obtained from the 3D mesh map to achieve specific positioning of the UAV.
Further, in step S1, all cameras in the camera monitoring module transmit their acquired images to the UAV identification module; in step S2, the UAV identification module analyzes the frames of the monitored area captured by all cameras at the same instant, matches each frame against the pre-stored UAV image features, identifies whether a UAV appears in each frame, and thereby determines whether a UAV is present in the monitored area at that instant; in step S3, the 2D-image-to-3D-mesh module combines the frames captured by all cameras at the same instant to compute and generate the 3D mesh map of the entire monitored area.
Further, in step S2, the pre-stored UAV image features are obtained as follows: a set of UAV images differing in function and design is pre-stored in the UAV automatic identification module, and UAV image features are extracted from them.
Further, in step S2, the 2D-image-to-3D-mesh module extracts features of the monitored-area images at different levels through a multi-layer graph convolutional neural network, and then generates the 3D mesh map through a cascaded mesh deformation network.
Further, in step S3, the RGB_D ranging and positioning module computes the distance between the UAV and the binocular camera according to the following formula:
distance(C1, C2) = √[(C1R - C2R)² + (C1G - C2G)² + (C1B - C2B)²]
where C1 and C2 denote the colors of the UAV and of the binocular camera, C1R and C2R denote the R channels of those colors, C1G and C2G their G channels, and C1B and C2B their B channels, respectively.
Those skilled in the art may make various corresponding changes and modifications according to the above technical solution and concept, and all such changes and modifications shall fall within the scope of protection of the claims of the present invention.

Claims (8)

  1. A UAV identification and positioning system based on RGB_D and a deep convolutional network, characterized by comprising a camera monitoring module, a UAV identification module, a 2D-image-to-3D-mesh module and an RGB_D ranging and positioning module;
    the camera monitoring module is used to acquire images of the entire monitored area;
    the UAV identification module receives the images of the monitored area acquired by the camera monitoring module, matches them against pre-stored UAV image features, and identifies whether a UAV is present in the monitored area;
    the 2D-image-to-3D-mesh module is used to generate a 3D mesh map from the images of the monitored area, through a graph convolutional neural network, when the UAV identification module identifies a UAV in the monitored area;
    the RGB_D ranging and positioning module is used to acquire an RGB_D image of the monitored area through a binocular camera when the UAV identification module identifies a UAV in the monitored area, compute the distance between the UAV and the binocular camera from the relationship between their color depths in the RGB_D image, and combine this with the UAV direction obtained from the 3D mesh map to achieve specific positioning of the UAV.
  2. The UAV identification and positioning system based on RGB_D and a deep convolutional network according to claim 1, characterized in that the camera monitoring module comprises several cameras, each arranged at a different position in the monitored area, with the fields of view of all cameras together covering the entire monitored area.
  3. The UAV identification and positioning system based on RGB_D and a deep convolutional network according to claim 1, characterized in that the cameras are installed in a distributed, surrounding manner, ensuring that the cameras adjacent on the left and right of any given camera are visible within that camera's field of view.
  4. A method for UAV identification and positioning using the system of any one of the preceding claims, characterized by comprising the following steps:
    S1. the camera monitoring module acquires images of the entire monitored area;
    S2. the UAV identification module receives the images of the monitored area acquired by the camera monitoring module, matches them against pre-stored UAV image features, and identifies whether a UAV is present in the monitored area;
    S3. when the UAV identification module identifies a UAV in the monitored area, the 2D-image-to-3D-mesh module generates a 3D mesh map from the images of the monitored area, acquired by the camera monitoring module, through the graph convolutional neural network; the RGB_D ranging and positioning module acquires an RGB_D image of the monitored area through the binocular camera, computes the distance between the UAV and the binocular camera from the relationship between their color depths in the RGB_D image, and combines this with the UAV direction obtained from the 3D mesh map to achieve specific positioning of the UAV.
  5. The method according to claim 4, characterized in that, in step S1, all cameras in the camera monitoring module transmit their acquired images to the UAV identification module; in step S2, the UAV identification module analyzes the frames of the monitored area captured by all cameras at the same instant, matches each frame against the pre-stored UAV image features, identifies whether a UAV appears in each frame, and thereby determines whether a UAV is present in the monitored area at that instant; in step S3, the 2D-image-to-3D-mesh module combines the frames captured by all cameras at the same instant to compute and generate the 3D mesh map of the entire monitored area.
  6. The method according to claim 4, characterized in that, in step S2, the pre-stored UAV image features are obtained as follows: a set of UAV images differing in function and design is pre-stored in the UAV automatic identification module, and UAV image features are extracted from them.
  7. The method according to claim 4, characterized in that, in step S2, the 2D-image-to-3D-mesh module extracts features of the monitored-area images at different levels through a multi-layer graph convolutional neural network, and then generates the 3D mesh map through a cascaded mesh deformation network.
  8. The method according to claim 4, characterized in that, in step S3, the RGB_D ranging and positioning module computes the distance between the UAV and the binocular camera according to the following formula:
    distance(C1, C2) = √[(C1R - C2R)² + (C1G - C2G)² + (C1B - C2B)²]
    where C1 and C2 denote the colors of the UAV and of the binocular camera, C1R and C2R denote the R channels of those colors, C1G and C2G their G channels, and C1B and C2B their B channels, respectively.
PCT/CN2019/126349 2018-12-27 2019-12-18 UAV identification and positioning system and method based on RGB_D and deep convolutional network WO2020135187A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811606339.0 2018-12-27
CN201811606339.0A CN109697428B (zh) 2018-12-27 2018-12-27 UAV identification and positioning system based on RGB_D and deep convolutional network

Publications (1)

Publication Number Publication Date
WO2020135187A1 true WO2020135187A1 (zh) 2020-07-02

Family

ID=66232124

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/126349 WO2020135187A1 (zh) 2018-12-27 2019-12-18 UAV identification and positioning system and method based on RGB_D and deep convolutional network

Country Status (2)

Country Link
CN (1) CN109697428B (zh)
WO (1) WO2020135187A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109697428B (zh) * 2018-12-27 2020-07-07 江西理工大学 UAV identification and positioning system based on RGB_D and deep convolutional network
CN111464938B (zh) * 2020-03-30 2021-04-23 滴图(北京)科技有限公司 Positioning method and apparatus, electronic device and computer-readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105759834A (zh) * 2016-03-09 2016-07-13 中国科学院上海微系统与信息技术研究所 System and method for actively capturing low-altitude small unmanned aerial vehicles
CN107038901A (zh) * 2017-04-29 2017-08-11 毕雪松 Aircraft intrusion early-warning system
WO2018020965A1 (ja) * 2016-07-28 2018-02-01 パナソニックIpマネジメント株式会社 Unmanned aerial vehicle detection system and unmanned aerial vehicle detection method
CN107885231A (zh) * 2016-09-30 2018-04-06 成都紫瑞青云航空宇航技术有限公司 UAV capture method and system based on visible-light image recognition
CN109697428A (zh) * 2018-12-27 2019-04-30 江西理工大学 UAV identification and positioning system based on RGB_D and deep convolutional network

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6843773B2 (ja) * 2015-03-03 2021-03-17 プレナヴ インコーポレイテッド Environment scanning and unmanned aerial vehicle tracking
CN105955273A (zh) * 2016-05-25 2016-09-21 速感科技(北京)有限公司 Indoor robot navigation system and method
CN106598226B (zh) * 2016-11-16 2019-05-21 天津大学 UAV human-computer interaction method based on binocular vision and deep learning
FR3065097B1 (fr) * 2017-04-11 2019-06-21 Pzartech Ltd. Automated method for recognizing an object
CN108447075B (zh) * 2018-02-08 2020-06-23 烟台欣飞智能系统有限公司 UAV monitoring system and monitoring method
CN108875813B (zh) * 2018-06-04 2021-10-08 北京工商大学 3D mesh model retrieval method based on geometry images

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105759834A (zh) * 2016-03-09 2016-07-13 中国科学院上海微系统与信息技术研究所 System and method for actively capturing low-altitude small unmanned aerial vehicles
WO2018020965A1 (ja) * 2016-07-28 2018-02-01 パナソニックIpマネジメント株式会社 Unmanned aerial vehicle detection system and unmanned aerial vehicle detection method
CN107885231A (zh) * 2016-09-30 2018-04-06 成都紫瑞青云航空宇航技术有限公司 UAV capture method and system based on visible-light image recognition
CN107038901A (zh) * 2017-04-29 2017-08-11 毕雪松 Aircraft intrusion early-warning system
CN109697428A (zh) * 2018-12-27 2019-04-30 江西理工大学 UAV identification and positioning system based on RGB_D and deep convolutional network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG, NANYANG ET AL.: "Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images", Computer Vision - 15th European Conference (ECCV 2018), 3 August 2018 (2018-08-03) *

Also Published As

Publication number Publication date
CN109697428A (zh) 2019-04-30
CN109697428B (zh) 2020-07-07

Similar Documents

Publication Publication Date Title
CN109360240B (zh) 一种基于双目视觉的小型无人机定位方法
US9443143B2 (en) Methods, devices and systems for detecting objects in a video
CN106356757B (zh) 一种基于人眼视觉特性的电力线路无人机巡检方法
CN110751081B (zh) 基于机器视觉的施工安全监控方法及装置
CN104050662A (zh) 一种用光场相机一次成像直接获取深度图的方法
CN106403924B (zh) 基于深度摄像头的机器人快速定位与姿态估计方法
WO2020135187A1 (zh) 基于rgb_d和深度卷积网络的无人机识别定位系统和方法
CN110853002A (zh) 一种基于双目视觉的变电站异物检测方法
CN112508865A (zh) 一种无人机巡检避障方法、装置、计算机设备和存储介质
Mount et al. 2d visual place recognition for domestic service robots at night
CN112001266B (zh) 一种大型无人运输车监控方法及系统
CN113052139A (zh) 一种基于深度学习双流网络的攀爬行为检测方法及系统
Aliakbarpour et al. Multi-sensor 3D volumetric reconstruction using CUDA
Yue et al. An intelligent identification and acquisition system for UAVs based on edge computing using in the transmission line inspection
CN114299153A (zh) 一种超大电力设备的相机阵列同步标定方法及系统
CN114494427A (zh) 一种对吊臂下站人的违规行为检测方法、系统及终端
CN112183411A (zh) 一种面向高压输电线路巡检的单目slam系统
CN112800828A (zh) 地面栅格占有概率目标轨迹方法
Lei et al. Radial coverage strength for optimization of multi-camera deployment
CN116973939B (zh) 安全监测方法及装置
CN108022217B (zh) 一种空中拍摄形变调整方法
Yue et al. Transmission Line Component Inspection Method Based on Deep Learning under Visual Navigation
CN115331160A (zh) 多机器人协同视觉监控方法和系统
Yu et al. Improved ORB Algorithm used in Image Mosaic
Brose et al. Development of a Fallen People Detector

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19905789; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19905789; Country of ref document: EP; Kind code of ref document: A1)