CN109697428A - UAV identification and positioning system based on RGB_D and a deep convolutional network - Google Patents

UAV identification and positioning system based on RGB_D and a deep convolutional network

Info

Publication number
CN109697428A
Authority
CN
China
Prior art keywords
unmanned aerial vehicle (UAV)
image
monitoring area
module
rgb
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811606339.0A
Other languages
Chinese (zh)
Other versions
CN109697428B (en)
Inventor
樊宽刚
邱海云
王渠
刘平川
王文帅
杨杰
侯浩楠
逄启寿
陈宇航
匡以顺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GANZHOU DEYE ELECTRONICS TECHNOLOGY CO LTD
Original Assignee
Jiangxi University of Science and Technology
Application filed by Jiangxi University of Science and Technology
Priority to CN201811606339.0A
Publication of CN109697428A
Priority to PCT/CN2019/126349 (published as WO2020135187A1)
Application granted
Publication of CN109697428B
Legal status: Active

Classifications

    • G06V20/10: Image or video recognition; Scenes; Terrestrial scenes
    • G06T7/73: Image analysis; Determining position or orientation of objects or cameras using feature-based methods
    • G06V10/56: Image or video recognition; Extraction of image or video features relating to colour
    • G06T2207/10024: Indexing scheme for image analysis; Image acquisition modality; Color image
    • G06T2207/30232: Indexing scheme for image analysis; Subject of image; Surveillance

Abstract

The invention discloses a UAV identification and positioning system based on RGB_D and a deep convolutional network, comprising a camera monitoring module, a UAV identification module, a 2D-image-to-3D-mesh module, and an RGB_D ranging and positioning module. The camera monitoring module acquires images of the entire monitored area. The UAV identification module matches the images of the monitored area against pre-stored UAV image features to identify whether a UAV is present in the monitored area. The 2D-image-to-3D-mesh module generates a 3D mesh from the images of the monitored area acquired by the camera monitoring module using a graph convolutional neural network. The RGB_D ranging and positioning module acquires an RGB_D image of the monitored area through a binocular camera and computes the distance between the UAV and the binocular camera from the relationship between their color depths in the RGB_D image; combined with the UAV bearing obtained from the 3D mesh, precise positioning of the UAV is achieved. The invention enables high-precision identification and positioning of UAVs within a region.

Description

UAV identification and positioning system based on RGB_D and a deep convolutional network
Technical field
The present invention relates to the technical field of UAV identification and positioning, and in particular to a UAV identification and positioning system based on RGB_D and a deep convolutional network.
Background technique
UAVs are a focal point of the new global wave of scientific, technological, and industrial revolution, and they are now applied in every field. The UAV field keeps making breakthroughs: UAVs have moved beyond their former purely military use and are gradually extending toward civilian, police, and household applications. Facing highly difficult, high-risk, and demanding tasks beyond human capability, UAVs emerged as the times required, replacing manned aircraft in executing such tasks. A UAV is a radio-controlled device, also called a remotely piloted aircraft. It can make near-perfect use of cutting-edge technologies such as artificial intelligence, signal processing, and autonomous piloting, and owing to its advantages of small size, unmanned operation, and long range, it is applied in many areas such as natural-environment surveys, popular-science research, agriculture, defense of national sovereignty, and public-health security, making it a major topic of the present age.
With the widespread use of UAVs, their safety problems have become increasingly serious, while supervision of UAVs remains limited; UAV accidents therefore occur repeatedly, and technology for identifying, monitoring, and positioning UAVs receives more and more attention.
Summary of the invention
In view of the deficiencies of the prior art, the present invention aims to provide a UAV identification and positioning system based on RGB_D and a deep convolutional network. It enables automatic identification of UAVs within a region and precise positioning of a detected UAV, with high identification and positioning accuracy, thereby addressing the safety problems posed by UAVs in the region and avoiding the hazards they may cause.
To achieve the above objects, the present invention adopts the following technical solution:
A UAV identification and positioning system based on RGB_D and a deep convolutional network, comprising a camera monitoring module, a UAV identification module, a 2D-image-to-3D-mesh module, and an RGB_D ranging and positioning module;
The camera monitoring module is used to acquire images of the entire monitored area;
The UAV identification module receives the images of the monitored area acquired by the camera monitoring module, matches them against the pre-stored UAV image features, and identifies whether a UAV is present in the monitored area;
The 2D-image-to-3D-mesh module is used, when the UAV identification module identifies a UAV in the monitored area, to generate a 3D mesh from the images of the monitored area acquired by the camera monitoring module using a graph convolutional neural network;
The RGB_D ranging and positioning module is used, when the UAV identification module identifies a UAV in the monitored area, to acquire an RGB_D image of the monitored area through a binocular camera and to compute the distance between the UAV and the binocular camera from the relationship between their color depths in the RGB_D image; combined with the UAV bearing obtained from the 3D mesh, precise positioning of the UAV is achieved.
Further, the camera monitoring module comprises several cameras, each installed at a different location in the monitored area, such that the combined imaging range of all cameras covers the entire monitored area.
Further, the cameras are installed in a dispersed wraparound arrangement, ensuring that within the field of view of any camera the adjacent cameras on its left and right sides are visible.
The present invention also provides a method for UAV identification and positioning using the above system, comprising the following steps:
S1. The camera monitoring module acquires images of the entire monitored area;
S2. The UAV identification module receives the images of the monitored area acquired by the camera monitoring module, matches them against the pre-stored UAV image features, and identifies whether a UAV is present in the monitored area;
S3. When the UAV identification module identifies a UAV in the monitored area, the 2D-image-to-3D-mesh module generates a 3D mesh from the images of the monitored area acquired by the camera monitoring module using a graph convolutional neural network; the RGB_D ranging and positioning module acquires an RGB_D image of the monitored area through a binocular camera and computes the distance between the UAV and the binocular camera from the relationship between their color depths in the RGB_D image; combined with the UAV bearing obtained from the 3D mesh, precise positioning of the UAV is achieved.
Further, in step S1, all cameras in the camera monitoring module transmit the acquired images to the UAV identification module. In step S2, the UAV identification module analyzes each frame of the monitored area acquired by all cameras at the same time instant, matching each frame against the pre-stored UAV image features to identify whether a UAV appears in any frame, thereby determining whether a UAV is present in the monitored area at that instant. In step S3, the 2D-image-to-3D-mesh module combines the frames of the monitored area acquired by all cameras at the same instant to compute the 3D mesh of the entire monitored area.
Further, in step S2, the pre-stored UAV image features are obtained as follows: a group of UAV images differing in function and design is pre-stored in the UAV identification module, and the UAV image features are extracted from them.
Further, in step S2, the 2D-image-to-3D-mesh module extracts features of the image of the monitored area at different levels through a multilayer graph convolutional neural network, and then generates the 3D mesh through a cascaded mesh deformation network.
Further, in step S3, the RGB_D ranging and positioning module computes the distance between the UAV and the binocular camera according to the following formula:

d(C1, C2) = √((C1R − C2R)² + (C1G − C2G)² + (C1B − C2B)²)

where C1 and C2 denote the colors of the UAV and the binocular camera, C1R and C2R denote the R channels of those colors, C1G and C2G the G channels, and C1B and C2B the B channels.
The beneficial effects of the present invention are:
1. The invention achieves regional monitoring, automatic identification, and positioning of UAVs, with high identification efficiency and strong resistance to interference.
2. The invention is divided into a camera monitoring module, a UAV identification module, a 2D-image-to-3D-mesh module, and an RGB_D ranging and positioning module. The camera monitoring module acquires images of the monitored area, the UAV identification module identifies UAVs in the images acquired by the cameras, and a graph convolutional neural network processes the images so that the monitored area is reconstructed in the form of a 3D mesh, from which the UAV bearing is obtained; the RGB_D ranging and positioning module computes the distance between the UAV and the binocular camera in the RGB_D image, achieving precise positioning of the UAV. Through the combined action of the modules, regional monitoring, automatic identification, and positioning of UAVs are realized.
Detailed description of the invention
Fig. 1 is a schematic diagram of the system structure in Embodiment 1 of the present invention;
Fig. 2 is a schematic plan view of the camera arrangement in the camera monitoring module in Embodiment 1;
Fig. 3 is an overview of the cascaded mesh deformation in Embodiment 1;
Fig. 4 is a schematic diagram of UAV positioning in the 3D mesh in Embodiment 1;
Fig. 5 is a flow diagram of the method in Embodiment 2.
Specific embodiment
The invention is further described below with reference to the accompanying drawings. It should be noted that this embodiment is premised on the technical solution and provides a detailed implementation and specific operation process, but the protection scope of the present invention is not limited to this embodiment.
Embodiment 1
This embodiment provides a UAV identification and positioning system based on RGB_D and a deep convolutional network. As shown in Fig. 1, it comprises a camera monitoring module, a UAV identification module, a 2D-image-to-3D-mesh module, and an RGB_D ranging and positioning module;
The camera monitoring module is used to acquire images of the entire monitored area;
The UAV identification module receives the images of the monitored area acquired by the camera monitoring module, matches them against the pre-stored UAV image features, and identifies whether a UAV is present in the monitored area;
The 2D-image-to-3D-mesh module is used, when the UAV identification module identifies a UAV in the monitored area, to generate a 3D mesh from the images of the monitored area acquired by the camera monitoring module using a graph convolutional neural network;
The RGB_D ranging and positioning module is used, when the UAV identification module identifies a UAV in the monitored area, to acquire an RGB_D image of the monitored area through a binocular camera and to compute the distance between the UAV and the binocular camera from the relationship between their color depths in the RGB_D image; combined with the UAV bearing obtained from the 3D mesh, precise positioning of the UAV is achieved.
Further, the camera monitoring module comprises several cameras, each installed at a different location in the monitored area, such that the combined imaging range of all cameras covers the entire monitored area;
In this embodiment, as shown in Fig. 2, the camera monitoring module comprises four cameras. By dividing the region among the viewing angles of the four cameras, regional monitoring of UAVs is achieved: multi-angle video images of the monitored area are acquired, and full coverage of the monitored area is realized by the union of the viewing angles of all cameras.
Further, seamless coverage of the monitored area can be achieved through the overlapping fields of view of the cameras. When installing the cameras, a dispersed wraparound arrangement is chosen, ensuring that within the field of view of any camera the adjacent cameras on its left and right sides are visible. As shown in Fig. 2, camera 1 is placed at a higher position with its orientation adjusted, so that the region in front of it is monitored and the adjacent cameras on its left and right sides are covered within its field of view. Cameras 2, 3, and 4 are arranged similarly; their heights and positions may differ, but each must keep the adjacent left and right cameras within its field of view. This arrangement is realized by adjusting the viewing angle and orientation of each camera, and it guarantees full coverage of the monitored area by the camera fields of view.
When multiple cameras are used, the UAV identification module analyzes each frame of the monitored area acquired by all cameras at the same time instant, matching each frame against the pre-stored UAV image features, thereby achieving automatic identification of UAVs. Matching multi-angle images simultaneously helps further improve identification accuracy.
Further, a group of UAV images differing in function and design is pre-stored in the UAV identification module, and UAV image features are extracted from them. When the extracted UAV image features are used to identify UAVs in the images of the monitored area, image recognition, target extraction, feature analysis, and image matching are carried out in sequence, so that UAVs are identified in the images of the monitored area, realizing automatic UAV identification.
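By way of illustration only, the following minimal sketch shows what the feature-matching step could look like using OpenCV ORB features; the library choice, the ratio threshold, and the function name uav_present are assumptions of this sketch, not part of the disclosed system.

    import cv2

    # Hypothetical sketch of the matching step: ORB keypoints from a
    # pre-stored UAV template image are matched against a frame of the
    # monitored area. OpenCV, the 0.75 ratio test, and the match-count
    # threshold are illustrative assumptions, not the patent's method.
    def uav_present(template_gray, frame_gray, min_matches=20):
        orb = cv2.ORB_create(nfeatures=1000)
        kp1, des1 = orb.detectAndCompute(template_gray, None)
        kp2, des2 = orb.detectAndCompute(frame_gray, None)
        if des1 is None or des2 is None:
            return False
        # Hamming distance suits binary ORB descriptors.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
        matches = matcher.knnMatch(des1, des2, k=2)
        # Lowe's ratio test filters ambiguous matches.
        good = [p[0] for p in matches
                if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
        return len(good) >= min_matches

In practice one such template would be matched per pre-stored UAV model, and a frame is flagged when any template produces enough consistent matches.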
Since existing UAVs vary considerably in function and design, a group of UAV images differing in function and design must be collected for extracting the UAV image features, which improves identification accuracy.
In practical applications, the UAV identification module and the camera monitoring module perform identification and monitoring simultaneously, guaranteeing efficient automatic identification of UAVs.
Further, the 2D-image-to-3D-mesh module extracts features of the image of the monitored area at different levels through a multilayer graph convolutional neural network, and then generates a 3D mesh through a cascaded mesh deformation network, so that the image of the monitored area is reconstructed in the form of a 3D mesh. The generated 3D mesh supplies the distance and angle parameters needed by the measurement. When multiple cameras monitor the region, the 2D-image-to-3D-mesh module combines the images of the monitored area from all camera angles to compute the 3D mesh of the entire monitored area, with higher precision.
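For concreteness, the following is a minimal sketch of one graph-convolution layer of the kind such a deformation network could stack, in the spirit of the Pixel2Mesh work listed among the non-patent citations below; the PyTorch formulation, dimensions, and class name are assumptions of this sketch, not the patent's implementation.

    import torch
    import torch.nn as nn

    # Hypothetical sketch: one graph-convolution layer over mesh vertices.
    # Each vertex mixes its own features with the aggregate of its
    # neighbors' features through two learned linear maps.
    class MeshGraphConv(nn.Module):
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.w_self = nn.Linear(in_dim, out_dim)
            self.w_neigh = nn.Linear(in_dim, out_dim)

        def forward(self, verts, adj):
            # verts: (V, in_dim) per-vertex features, e.g. 3D coordinates
            #        concatenated with perceptual features pooled from the
            #        2D image feature network.
            # adj:   (V, V) row-normalized mesh adjacency matrix.
            return torch.relu(self.w_self(verts) + self.w_neigh(adj @ verts))

Stacking several such layers inside each deformation block, and predicting per-vertex coordinate offsets from the final layer, is one way the coarse-to-fine deformation of the ellipsoid mesh can be realized.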
It should be noted that, in the 2D-image-to-3D-mesh generation, the image feature network is a two-dimensional convolutional neural network that extracts perceptual features from the input image; using the features extracted at different levels, the cascaded mesh deformation network gradually deforms an ellipsoid mesh into the required 3D mesh model.
The cascaded mesh deformation network is a graph-based convolutional network comprising three deformation blocks joined by two graph unpooling layers. As shown in Fig. 3, the image of the monitored area serves as the input image; the image feature network, acting as the two-dimensional convolutional neural network, extracts perceptual features from it, and with the extracted perceptual features as input, the three deformation blocks of the cascaded mesh deformation network gradually deform the ellipsoid mesh, from coarse to fine, into the required 3D mesh model. Using the features of the image at different levels, the cascaded mesh deformation network thus produces a high-accuracy 3D mesh that reconstructs the monitored area. As shown in Fig. 4, a feature point P on the UAV is selected in the reconstructed 3D mesh as a reference; the actual coordinates of the camera points A, B, C, and D are known, so by combining the two parameters of distance and bearing, the specific coordinates of the UAV can be computed and positioning of the UAV achieved.
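As a worked illustration of this last step, the following minimal sketch computes the UAV coordinates from one camera's known position, the measured distance, and a unit bearing vector taken from the 3D mesh; the vector formulation and all names are assumptions of this sketch.

    import numpy as np

    # Hypothetical sketch: with a camera's world position, the distance to
    # the UAV from RGB_D ranging, and the bearing direction from the 3D
    # mesh, the UAV position follows by simple vector addition.
    def locate_uav(camera_pos, distance, bearing):
        bearing = np.asarray(bearing, dtype=float)
        bearing = bearing / np.linalg.norm(bearing)   # force unit length
        return np.asarray(camera_pos, dtype=float) + distance * bearing

    # e.g. camera A mounted at (0, 0, 3) m, UAV 25 m away along a known bearing:
    p_uav = locate_uav([0.0, 0.0, 3.0], 25.0, [0.6, 0.8, 0.0])

Estimates from several of the cameras A, B, C, and D can be averaged to reduce the effect of ranging error from any single view.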
Further, the RGB_D ranging and positioning module acquires the RGB_D image of the monitored area through the binocular camera, analyzes the image color depths of the UAV and the binocular camera, computes the distance from the relationship between the two color depths, and, combined with the UAV bearing obtained from the 3D mesh, achieves positioning of the UAV.
It should be noted that an RGB_D image is in fact two images: one is an ordinary RGB three-channel color image, and the other is a depth image. A depth image is similar to a grayscale image, except that each of its pixel values is the actual distance from the sensor to the object. Image depth determines the number of colors each pixel of a color image may take, or the number of gray levels each pixel of a grayscale image may take; it fixes the maximum number of colors that can appear in a color image or the maximum gray level of a grayscale image. Objects at different distances have different image depths in the picture, so the distance between two objects can be computed from their image-depth relationship in the RGB_D image.
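To make the depth-image convention concrete, the following minimal sketch reads the sensor-to-UAV distance at the UAV's detected pixel from the depth half of an RGB_D pair; the uint16-millimeter storage convention assumed here is common for depth sensors but is not specified by the patent.

    import numpy as np

    # Hypothetical sketch: the depth image stores a per-pixel
    # sensor-to-object distance; reading it at the UAV's detected pixel
    # (u, v) yields the UAV's range. A zero value means no reading.
    def uav_range_m(depth_image, u, v):
        d_mm = np.asarray(depth_image)[v, u]   # rows index v, columns index u
        return float(d_mm) / 1000.0 if d_mm > 0 else None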
It should further be noted that color distance refers to the gap between two colors: usually, the larger the distance, the greater the difference between the two colors, and conversely, the smaller the distance, the closer the two colors. In RGB space, the distance between two colors can be obtained as:

d(C1, C2) = √((C1R − C2R)² + (C1G − C2G)² + (C1B − C2B)²)

where C1 and C2 denote color 1 and color 2, C1R and C2R denote the R channels of color 1 and color 2, C1G and C2G denote the G channels, and C1B and C2B denote the B channels.
The distance between the UAV and the binocular camera is thus obtained from their color difference.
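A minimal sketch of the Euclidean RGB color distance given above (the function name and tuple types are illustrative):

    import math

    # Euclidean color distance in RGB space, matching the formula above.
    def color_distance(c1, c2):
        r1, g1, b1 = c1
        r2, g2, b2 = c2
        return math.sqrt((r1 - r2) ** 2 + (g1 - g2) ** 2 + (b1 - b2) ** 2)

    # e.g. a dark gray versus a light gray:
    d = color_distance((40, 40, 40), (200, 200, 200))   # ≈ 277.13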
Embodiment 2
This embodiment provides a method for UAV identification and positioning using the system described in Embodiment 1. As shown in Fig. 5, it comprises the following steps:
S1. The camera monitoring module acquires images of the entire monitored area;
S2. The UAV identification module receives the images of the monitored area acquired by the camera monitoring module, matches them against the pre-stored UAV image features, and identifies whether a UAV is present in the monitored area;
S3. When the UAV identification module identifies a UAV in the monitored area, the 2D-image-to-3D-mesh module generates a 3D mesh from the images of the monitored area acquired by the camera monitoring module using a graph convolutional neural network; the RGB_D ranging and positioning module acquires an RGB_D image of the monitored area through a binocular camera and computes the distance between the UAV and the binocular camera from the relationship between their color depths in the RGB_D image; combined with the UAV bearing obtained from the 3D mesh, precise positioning of the UAV is achieved.
Further, in step S1, all cameras in the camera monitoring module transmit the acquired images to the UAV identification module. In step S2, the UAV identification module analyzes each frame of the monitored area acquired by all cameras at the same time instant, matching each frame against the pre-stored UAV image features to identify whether a UAV appears in any frame, thereby determining whether a UAV is present in the monitored area at that instant. In step S3, the 2D-image-to-3D-mesh module combines the frames of the monitored area acquired by all cameras at the same instant to compute the 3D mesh of the entire monitored area.
Further, in step S2, the pre-stored UAV image features are obtained as follows: a group of UAV images differing in function and design is pre-stored in the UAV identification module, and the UAV image features are extracted from them.
Further, in step S2, the 2D-image-to-3D-mesh module extracts features of the image of the monitored area at different levels through a multilayer graph convolutional neural network, and then generates the 3D mesh through a cascaded mesh deformation network.
Further, in step S3, the RGB_D ranging and positioning module computes the distance between the UAV and the binocular camera according to the following formula:

d(C1, C2) = √((C1R − C2R)² + (C1G − C2G)² + (C1B − C2B)²)

where C1 and C2 denote the colors of the UAV and the binocular camera, C1R and C2R denote the R channels of those colors, C1G and C2G the G channels, and C1B and C2B the B channels.
Those skilled in the art can make various corresponding changes and modifications according to the above technical solution and concept, and all such changes and modifications shall be construed as falling within the protection scope of the claims of the present invention.

Claims (8)

1. A UAV identification and positioning system based on RGB_D and a deep convolutional network, characterized in that it comprises a camera monitoring module, a UAV identification module, a 2D-image-to-3D-mesh module, and an RGB_D ranging and positioning module;
The camera monitoring module is used to acquire images of the entire monitored area;
The UAV identification module receives the images of the monitored area acquired by the camera monitoring module, matches them against the pre-stored UAV image features, and identifies whether a UAV is present in the monitored area;
The 2D-image-to-3D-mesh module is used, when the UAV identification module identifies a UAV in the monitored area, to generate a 3D mesh from the images of the monitored area acquired by the camera monitoring module using a graph convolutional neural network;
The RGB_D ranging and positioning module is used, when the UAV identification module identifies a UAV in the monitored area, to acquire an RGB_D image of the monitored area through a binocular camera and to compute the distance between the UAV and the binocular camera from the relationship between their color depths in the RGB_D image; combined with the UAV bearing obtained from the 3D mesh, precise positioning of the UAV is achieved.
2. The UAV identification and positioning system based on RGB_D and a deep convolutional network according to claim 1, characterized in that the camera monitoring module comprises several cameras, each installed at a different location in the monitored area, such that the combined imaging range of all cameras covers the entire monitored area.
3. The UAV identification and positioning system based on RGB_D and a deep convolutional network according to claim 1, characterized in that the cameras are installed in a dispersed wraparound arrangement, ensuring that within the field of view of any camera the adjacent cameras on its left and right sides are visible.
4. A method for UAV identification and positioning using the system according to any one of the preceding claims, characterized in that it comprises the following steps:
S1. The camera monitoring module acquires images of the entire monitored area;
S2. The UAV identification module receives the images of the monitored area acquired by the camera monitoring module, matches them against the pre-stored UAV image features, and identifies whether a UAV is present in the monitored area;
S3. When the UAV identification module identifies a UAV in the monitored area, the 2D-image-to-3D-mesh module generates a 3D mesh from the images of the monitored area acquired by the camera monitoring module using a graph convolutional neural network; the RGB_D ranging and positioning module acquires an RGB_D image of the monitored area through a binocular camera and computes the distance between the UAV and the binocular camera from the relationship between their color depths in the RGB_D image; combined with the UAV bearing obtained from the 3D mesh, precise positioning of the UAV is achieved.
5. The method according to claim 4, characterized in that, in step S1, all cameras in the camera monitoring module transmit the acquired images to the UAV identification module; in step S2, the UAV identification module analyzes each frame of the monitored area acquired by all cameras at the same time instant, matching each frame against the pre-stored UAV image features to identify whether a UAV appears in any frame, thereby determining whether a UAV is present in the monitored area at that instant; and in step S3, the 2D-image-to-3D-mesh module combines the frames of the monitored area acquired by all cameras at the same instant to compute the 3D mesh of the entire monitored area.
6. The method according to claim 4, characterized in that, in step S2, the pre-stored UAV image features are obtained as follows: a group of UAV images differing in function and design is pre-stored in the UAV identification module, and the UAV image features are extracted from them.
7. The method according to claim 4, characterized in that, in step S2, the 2D-image-to-3D-mesh module extracts features of the image of the monitored area at different levels through a multilayer graph convolutional neural network, and then generates the 3D mesh through a cascaded mesh deformation network.
8. The method according to claim 4, characterized in that, in step S3, the RGB_D ranging and positioning module computes the distance between the UAV and the binocular camera according to the following formula:

d(C1, C2) = √((C1R − C2R)² + (C1G − C2G)² + (C1B − C2B)²)

where C1 and C2 denote the colors of the UAV and the binocular camera, C1R and C2R denote the R channels of those colors, C1G and C2G the G channels, and C1B and C2B the B channels.
CN201811606339.0A 2018-12-27 2018-12-27 Unmanned aerial vehicle identification and positioning system based on RGB_D and depth convolution network Active CN109697428B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811606339.0A CN109697428B (en) 2018-12-27 2018-12-27 Unmanned aerial vehicle identification and positioning system based on RGB_D and depth convolution network
PCT/CN2019/126349 WO2020135187A1 (en) 2018-12-27 2019-12-18 Unmanned aerial vehicle recognition and positioning system and method based on rgb_d and deep convolutional network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811606339.0A CN109697428B (en) 2018-12-27 2018-12-27 Unmanned aerial vehicle identification and positioning system based on RGB_D and depth convolution network

Publications (2)

Publication Number Publication Date
CN109697428A true CN109697428A (en) 2019-04-30
CN109697428B CN109697428B (en) 2020-07-07

Family

ID=66232124

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811606339.0A Active CN109697428B (en) 2018-12-27 2018-12-27 Unmanned aerial vehicle identification and positioning system based on RGB_D and depth convolution network

Country Status (2)

Country Link
CN (1) CN109697428B (en)
WO (1) WO2020135187A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020135187A1 (en) * 2018-12-27 2020-07-02 赣州德业电子科技有限公司 Unmanned aerial vehicle recognition and positioning system and method based on rgb_d and deep convolutional network
CN111464938A (en) * 2020-03-30 2020-07-28 滴图(北京)科技有限公司 Positioning method, positioning device, electronic equipment and computer readable storage medium


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105759834B (en) * 2016-03-09 2018-07-24 中国科学院上海微系统与信息技术研究所 A kind of system and method actively capturing low latitude small-sized unmanned aircraft
WO2018020965A1 (en) * 2016-07-28 2018-02-01 パナソニックIpマネジメント株式会社 Unmanned aerial vehicle detection system and unmanned aerial vehicle detection method
CN107885231B (en) * 2016-09-30 2020-12-29 成都紫瑞青云航空宇航技术有限公司 Unmanned aerial vehicle capturing method and system based on visible light image recognition
CN109697428B (en) * 2018-12-27 2020-07-07 江西理工大学 Unmanned aerial vehicle identification and positioning system based on RGB _ D and depth convolution network

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016141100A2 (en) * 2015-03-03 2016-09-09 Prenav Inc. Scanning environments and tracking unmanned aerial vehicles
CN105955273A (en) * 2016-05-25 2016-09-21 速感科技(北京)有限公司 Indoor robot navigation system and method
CN106598226A (en) * 2016-11-16 2017-04-26 天津大学 UAV (Unmanned Aerial Vehicle) man-machine interaction method based on binocular vision and deep learning
WO2018189627A1 (en) * 2017-04-11 2018-10-18 Pzartech Automated method of recognition of an object
CN107038901A (en) * 2017-04-29 2017-08-11 毕雪松 Aircraft attack early warning system
CN108447075A (en) * 2018-02-08 2018-08-24 烟台欣飞智能系统有限公司 A kind of unmanned plane monitoring system and its monitoring method
CN108875813A (en) * 2018-06-04 2018-11-23 北京工商大学 A kind of three-dimensional grid model search method based on several picture

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
NANYANG WANG ET AL.: "Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images", arXiv:1804.01654v2 *
QING LI ET AL.: "Autonomous navigation and environment modeling for MAVs in 3-D enclosed industrial environments", Computers in Industry *
TANG SHENGJUN: "Multi-view image enhanced RGB-D indoor high-precision 3D mapping method", China Doctoral Dissertations Full-text Database *


Also Published As

Publication number Publication date
CN109697428B (en) 2020-07-07
WO2020135187A1 (en) 2020-07-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20201130

Address after: 341000 Ganzhou Economic and Technological Development Zone, Ganzhou City, Jiangxi Province

Patentee after: GANZHOU DEYE ELECTRONICS TECHNOLOGY Co.,Ltd.

Address before: 341000 No. 86 Hongqi Avenue, Jiangxi, Ganzhou

Patentee before: Jiangxi University of Science and Technology