CN112241180A - Visual processing method for landing guidance of unmanned aerial vehicle mobile platform - Google Patents
- Publication number
- CN112241180A (application CN202011140021.5A; granted as CN112241180B)
- Authority
- CN
- China
- Prior art keywords
- aerial vehicle
- unmanned aerial
- candidate frame
- candidate
- pearson correlation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/10—Simultaneous control of position or course in three dimensions
- G05D1/101—Simultaneous control of position or course in three dimensions specially adapted for aircraft
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
Abstract
The invention discloses a visual processing method for landing guidance of an unmanned aerial vehicle (UAV) mobile platform, belonging to the technical field of aircraft navigation. The method comprises the following steps: first, the flight altitude of a UAV is divided into segments, and an onboard camera photographs a target fixed on the ground from different angles within each altitude segment; a convolutional neural network produces the candidate frames and corresponding features of each picture, which are stored by altitude segment and loaded into each UAV as a matching feature library; any UAV A then photographs in real time and passes the images to the convolutional neural network to extract candidate frames and features; for each image, the Pearson correlation coefficient is computed between every candidate frame in the image and the n entries of the matching feature library one by one; the maximum over all results of all candidate frames is taken as the result of the image; and the target position is calculated from the pixel coordinates of the final candidate frame. The invention dispenses with the neural network training process and thereby saves time.
Description
Technical Field
The invention belongs to the technical field of aircraft navigation, and particularly relates to a visual processing method for landing guidance of an unmanned aerial vehicle mobile platform.
Background
An unmanned aerial vehicle (UAV) is an aircraft that carries no pilot and is flown by a wireless remote-control device or an onboard flight control system. A UAV mainly comprises a fuselage, a power system, a flight control system, an energy system and a task payload. With the continuous development of artificial intelligence and UAV technology, the UAV is no longer merely a flight platform; it also possesses complex image-processing and autonomous-planning capabilities.
In recent years, with the vigorous development of artificial intelligence, machine vision has also advanced rapidly. Combining target recognition and tracking from machine vision with UAVs has expanded their application in surveillance, smart cities, aerial survey, post-disaster search and rescue, the military and other fields. A UAV that uses machine vision to land on a moving platform under complex conditions therefore has important application value.
The technology for landing a UAV on a mobile platform extends naturally to tracking moving targets on the ground or even in the air. In particular, when a UAV operating at sea runs low on fuel or completes its task, it can land directly on a moving ship, which greatly improves operational efficiency.
Conventional functions such as GPS fixed-point landing and one-key return cannot be applied to shipborne or mobile land-based platforms. When a UAV performs autonomous, intelligent tasks with demanding timeliness and maneuverability requirements, takeoff and landing technology based on a mobile platform becomes particularly important for maximizing the UAV's efficiency and improving its rapid battlefield reaction capability. By arranging a target on the mobile platform and having the UAV's onboard image-processing platform process the camera's video stream in real time, the relative distance between the camera's optical center and the center of the target, or the coordinates of the target in the image coordinate system, can be computed.
Deep learning has developed rapidly in recent years, and embedded onboard platforms that run such algorithms have become increasingly mature. Target detection, recognition and tracking based on convolutional neural networks are now well established, but applying these methods to a real operational scenario requires time-consuming work to build a training set and train the network weight parameters, and training places high demands on hardware. In addition, during takeoff and landing on a mobile platform the UAV's altitude changes greatly, so the size and scale of the target's image in the onboard camera also change greatly, which adversely affects image-recognition accuracy.
Disclosure of Invention
The invention provides a visual processing method for landing guidance of a UAV mobile platform based on a convolutional neural network, which solves two problems: the time consumed in building a data set and training, and the large change in imaging scale caused by the large change in the altitude at which the target is observed.
The visual processing method for the landing guidance of the unmanned aerial vehicle mobile platform comprises the following specific steps:
Step 1: divide the flight altitude of the UAV into segments according to the actual conditions, and fix the target of the flight area on the ground.
Step 2: for each flight altitude, turn on the UAV's onboard camera and aim it at the target to obtain, for that altitude, a group of pictures under different illumination and viewing angles.
Step 3: preprocess each group of pictures, input them one by one into a convolutional neural network, and run forward propagation to extract the features and the corresponding candidate frames of each picture.
Step 4: store the features extracted from each candidate frame according to their respective altitudes, and load them into each UAV as a matching feature library.
Step 5: for any UAV A, fly it by GPS to a position above the area containing the target, and obtain A's real-time altitude from the onboard inertial navigation.
Step 6: A's onboard camera photographs the flight area in real time, the pictures are passed to the convolutional neural network in real time, and forward propagation extracts the features and positions of the candidate frames; each image contains a plurality of candidate frames and a corresponding plurality of features.
Step 7: for a certain image from which m candidate frames and corresponding feature vectors have been extracted, compute the Pearson correlation coefficient between the feature vector of the current candidate frame and every entry of the matching feature library for A's real-time altitude. Specifically: the feature library at A's current altitude holds n feature vectors; each of the m candidate frames is selected in turn as the current candidate frame and compared once against each of the n library features, giving n Pearson correlation coefficients.
The calculation formula is:

r = Σᵢ₌₁ⁿ (Xᵢ − X̄)(Yᵢ − Ȳ) / ( √(Σᵢ₌₁ⁿ (Xᵢ − X̄)²) · √(Σᵢ₌₁ⁿ (Yᵢ − Ȳ)²) )

where r is the Pearson correlation coefficient; Xᵢ is the i-th element of the feature vector of the current candidate frame at the current altitude of UAV A; Yᵢ is the i-th element of the corresponding feature vector in the feature library at that altitude; n is the length of the feature vectors at that altitude; X̄ is the mean of all elements of the candidate frame's feature vector; and Ȳ is the mean of all elements of the corresponding library feature vector.
Step 8: run non-maximum suppression (NMS) using the n Pearson correlation coefficients of the current candidate frame: with the set IoU threshold, judge whether any Pearson correlation coefficient exceeds the set threshold; if so, select the maximum value and go to step 9; otherwise return to step 7 and process the next candidate frame as the current candidate frame.
Step 9: from the at most m maximum Pearson correlation coefficients corresponding to the m candidate frames of the image, select the largest as the final Pearson correlation coefficient.
Step 10: from the pixel coordinates of the four vertices of the candidate frame corresponding to the final Pearson correlation coefficient, compute the coordinate of its center, which is the position coordinate of the target.
Step 11: guide UAV A to land according to the position coordinate of the target.
The invention has the advantages that:
(1) The method extracts target features using weight parameters already trained on a public data set, saving training time.
(2) Because the imaging scale changes greatly during landing, the feature vectors of the target are stored in segments by altitude, and during real-time recognition they are matched only against the database for the corresponding altitude, which greatly improves matching efficiency.
(3) The method guides the terminal landing of the UAV precisely from images. Compared with conventional terminal-landing recognition, which requires training a neural network, the method uses only the feature-extraction capability of the neural network and stores the target features directly by altitude, eliminating the training step and saving considerable time.
Drawings
Fig. 1 is a flowchart of a visual processing method for guiding landing of a mobile platform of an unmanned aerial vehicle according to the present invention;
fig. 2 is a schematic diagram of candidate frames identified by the drone at three altitudes according to the embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
The invention relates to a visual processing method for landing guidance of a UAV mobile platform. Weight parameters of a convolutional neural network model are trained on a public data set and used to extract image features. During pre-training, the features of the training-set target are extracted in segments over the whole landing altitude range. During real-time landing, the real-time altitude measured by sensors such as GPS, BeiDou or a laser rangefinder selects the training set of the corresponding altitude; feature vectors are matched within it, the Pearson correlation coefficient is computed and compared with a set threshold, the target is recognized, and the coordinate of the target's center in the image coordinate system is obtained.
The method comprises a pre-training process and a real-time recognition process. Considering the two main problems, that training a convolutional neural network is time-consuming and that the target's imaging scale changes drastically during landing, an altitude-based pre-training method is proposed; as shown in fig. 1, the specific process is as follows:
the pre-training process is specifically divided into:
Step 1: divide the flight altitude of the UAV into segments according to the actual conditions, and fix the target of the flight area on the ground.
Step 2: for each flight altitude, turn on the UAV's onboard camera and aim it at the target to obtain, for that altitude, a group of pictures under different illumination and viewing angles.
Step 3: preprocess each group of pictures, input them one by one into a convolutional neural network, and run forward propagation to extract the features and the corresponding candidate frames of each picture. According to the input requirements of the network, the ROI regions selected by the candidate boxes are resized and fed into the network one by one; forward propagation then extracts the features of the input training set.
Step 4: store the features extracted from each candidate frame according to their respective altitudes, and load them into each UAV as a matching feature library.
Because the weight parameters trained on a public data set are used directly, training time is effectively shortened and complexity is reduced. Storing the training-set database in segments by landing altitude addresses the recognition-accuracy problem caused by the drastic change of the target's imaged scale during landing, and also guarantees matching speed during real-time matching.
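The height-segmented feature library described above can be sketched as follows; the altitude band edges, function names and the tiny two-element feature vectors are illustrative assumptions, not values from the patent:

```python
# Sketch of a height-segmented matching feature library: features extracted
# per candidate frame during pre-training are filed under their altitude
# band, and real-time matching only searches the band of the current height.
ALTITUDE_BANDS = [(0.0, 2.0), (2.0, 4.0), (4.0, 6.0)]  # metres, assumed

def band_of(altitude_m):
    """Return the altitude band containing altitude_m (last band if above)."""
    for lo, hi in ALTITUDE_BANDS:
        if lo <= altitude_m < hi:
            return (lo, hi)
    return ALTITUDE_BANDS[-1]

feature_library = {band: [] for band in ALTITUDE_BANDS}

def store(altitude_m, feature_vector):
    """Pre-training: file the feature vector under its altitude band."""
    feature_library[band_of(altitude_m)].append(feature_vector)

def candidates_for(altitude_m):
    """Real-time phase: return only the library entries of the matching band."""
    return feature_library[band_of(altitude_m)]

store(1.5, [0.1, 0.2])   # a feature photographed at 1.5 m
store(3.2, [0.3, 0.4])   # a feature photographed at 3.2 m
```

Because the real-time lookup touches a single band, the search cost is independent of how many other altitude segments were recorded, which is the matching-speed point made above.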
In the terminal landing phase, the UAV uses the feature library for real-time recognition and landing guidance as follows:
Step 5: for any UAV A, fly it by GPS to a position above the area containing the target, and obtain A's real-time altitude from the onboard inertial navigation.
Step 6: A's onboard camera photographs the flight area in real time, the pictures are passed to the convolutional neural network in real time, and forward propagation extracts the features and positions of the candidate frames; each image contains a plurality of candidate frames and a corresponding plurality of features.
Step 7: for a certain image from which m candidate frames and corresponding feature vectors have been extracted, compute the Pearson correlation coefficient between the feature vector of the current candidate frame and every entry of the matching feature library for A's real-time altitude. Specifically: the feature library at A's current altitude holds n feature vectors; each of the m candidate frames is selected in turn as the current candidate frame and compared once against each of the n library features, giving n Pearson correlation coefficients.
The calculation formula is:

r = Σᵢ₌₁ⁿ (Xᵢ − X̄)(Yᵢ − Ȳ) / ( √(Σᵢ₌₁ⁿ (Xᵢ − X̄)²) · √(Σᵢ₌₁ⁿ (Yᵢ − Ȳ)²) )

where r is the Pearson correlation coefficient; Xᵢ is the i-th element of the feature vector of the current candidate frame at the current altitude of UAV A; Yᵢ is the i-th element of the corresponding feature vector in the feature library at that altitude; n is the length of the feature vectors at that altitude; X̄ is the mean of all elements of the candidate frame's feature vector; and Ȳ is the mean of all elements of the corresponding library feature vector.
For the two variables X and Y of the Pearson correlation coefficient:
(1) when the correlation coefficient is 0, X and Y have no linear relation;
(2) when Y increases (decreases) as X increases (decreases), the two variables are positively correlated, and the correlation coefficient lies between 0.00 and 1.00;
(3) when Y decreases (increases) as X increases (decreases), the two variables are negatively correlated, and the correlation coefficient lies between −1.00 and 0.00.
The larger the absolute value of the correlation coefficient, the stronger the correlation: the closer the coefficient is to 1 or −1, the stronger the correlation, and the closer it is to 0, the weaker the correlation.
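The Pearson match between a candidate frame's feature vector and one library vector can be sketched as follows. All names and the sample vectors are illustrative; returning 0 for a constant (zero-variance) vector is an assumption the patent does not address:

```python
# Sketch: Pearson correlation coefficient r between two equal-length
# feature vectors, as used to score a candidate frame against the library.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sd_x = math.sqrt(sum((xi - mean_x) ** 2 for xi in x))
    sd_y = math.sqrt(sum((yi - mean_y) ** 2 for yi in y))
    if sd_x == 0 or sd_y == 0:
        return 0.0  # degenerate constant vector: treat as uncorrelated
    return cov / (sd_x * sd_y)

# A candidate-frame feature compared against two library features:
candidate = [0.9, 0.1, 0.4, 0.7]
library = [[0.8, 0.2, 0.5, 0.6],   # similar profile  -> r near +1
           [0.1, 0.9, 0.6, 0.3]]   # inverted profile -> r negative
scores = [pearson(candidate, f) for f in library]
```

In step 7, this computation is repeated for each of the n library vectors at the current altitude, giving the n coefficients that step 8 then thresholds.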
Step 8: run non-maximum suppression (NMS) using the n Pearson correlation coefficients of the current candidate frame: with the set thresholds, judge whether any Pearson correlation coefficient exceeds the threshold; if so, select the maximum and go to step 9; otherwise return to step 7 and process the next candidate frame as the current candidate frame.
Concretely, for the n Pearson correlation coefficients of each candidate box, take the largest as score_max and use it as the box's score. NMS is then run: according to the set IoU threshold, whenever the IoU of two boxes exceeds the threshold, the score_max of the box with the smaller score is set to 0. Finally, redundant candidate boxes whose score_max falls below a set Pearson threshold (between 0.5 and 1) are deleted, yielding the final candidate boxes.
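The scoring-and-suppression procedure above can be sketched roughly as follows, assuming axis-aligned boxes in (x1, y1, x2, y2) pixel form; the threshold values and helper names are illustrative, not from the patent:

```python
# Sketch: NMS over candidate boxes scored by their best Pearson coefficient.
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thr=0.5, pearson_thr=0.5):
    """Keep high-scoring boxes; suppress overlapping lower-scoring ones."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep, suppressed = [], set()
    for i in order:
        if i in suppressed or scores[i] < pearson_thr:
            continue  # score below the Pearson threshold: discard
        keep.append(i)
        for j in order:
            if j != i and j not in suppressed and iou(boxes[i], boxes[j]) > iou_thr:
                suppressed.add(j)  # lower-scoring overlapping box: score_max -> 0
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]          # score_max per box
kept = nms(boxes, scores)         # the two overlapping boxes collapse to one
```

The two heavily overlapping boxes are resolved in favour of the higher score, and the distant box survives on its own score, matching the description above.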
Step 9: from the at most m maximum Pearson correlation coefficients corresponding to the m candidate frames of the image, select the largest as the final Pearson correlation coefficient.
Step 10: from the pixel coordinates of the four vertices of the candidate frame corresponding to the maximum Pearson correlation coefficient, compute the coordinate of its center, which is the position coordinate of the target.
Step 11: guide UAV A to land according to the position coordinate of the target.
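The center computation of this step is the average of the four vertex pixel coordinates; a minimal sketch with illustrative vertex values:

```python
# Sketch: target position = centroid of the winning candidate frame's
# four vertex pixel coordinates (values below are illustrative).
vertices = [(120, 80), (220, 80), (220, 160), (120, 160)]
cx = sum(x for x, _ in vertices) / 4.0
cy = sum(y for _, y in vertices) / 4.0
target_position = (cx, cy)   # coordinate of the target in the image
```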
As described above, the target database used for visual guidance to the mobile platform is built by altitude. The convolutional neural network is modified from YOLOv3, and during forward propagation the size of the prior frame is selected according to the real-time altitude obtained from the onboard inertial navigation, which effectively addresses the low accuracy caused by large scale changes during the UAV's vision-guided landing.
Native YOLOv3 outputs at 3 scales: 52 × 52, 26 × 26 and 13 × 13. The 13 × 13 output detects large targets, 26 × 26 detects medium targets, and 52 × 52 detects small targets. In a preferred embodiment of the invention, the target's imaged scale grows from small to large during landing, and the three scales correspond to data sets at three altitudes: 52 × 52 to a height of 5 m–6 m above the ground, 26 × 26 to 3 m–4 m, and 13 × 13 to 0–2 m. During landing, the feature vectors at the scale corresponding to the altitude obtained from inertial navigation are output; Pearson correlation coefficients are computed between these feature vectors and the feature vectors in the database for the corresponding altitude, and the correct candidate frames are selected according to the threshold set on the correlation coefficient. Three scales for three altitudes are used in this embodiment, but this should not be construed as limiting the invention: a multi-scale network structure and database can be built according to the landing-altitude requirements of the UAV in different scenarios.
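The altitude-to-scale selection of this embodiment can be sketched as follows; the exact boundaries at 2 m and 4 m are assumptions, since the text leaves gaps between the 0–2 m, 3–4 m and 5–6 m example bands:

```python
# Sketch: pick the YOLOv3 output scale from the real-time altitude reported
# by inertial navigation. Band boundaries at 2 m and 4 m are assumed; the
# embodiment only states the 0-2 m, 3-4 m and 5-6 m examples.
def scale_for_altitude(altitude_m: float) -> int:
    if altitude_m <= 2.0:
        return 13   # 13x13 head: target images large near the ground
    if altitude_m <= 4.0:
        return 26   # 26x26 head: medium imaged target
    return 52       # 52x52 head: small imaged target at 5-6 m

selected = [scale_for_altitude(h) for h in (1.0, 3.5, 5.5)]
```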
Claims (2)
1. A visual processing method for landing guidance of an unmanned aerial vehicle mobile platform, characterized by comprising the following specific steps:
step 1: dividing the flight altitude of the unmanned aerial vehicle into segments according to the actual conditions, and fixing the target of the flight area on the ground;
step 2: for each flight altitude, turning on the unmanned aerial vehicle's onboard camera and aiming it at the target to obtain, for that altitude, a group of pictures under different illumination and viewing angles;
step 3: preprocessing each group of pictures, inputting them one by one into a convolutional neural network, and running forward propagation to extract the features and the corresponding candidate frames of each picture;
step 4: storing the features extracted from each candidate frame according to their respective altitudes, and loading them into each unmanned aerial vehicle as a matching feature library;
step 5: for any unmanned aerial vehicle A, flying it by GPS to a position above the area containing the target, and obtaining A's real-time altitude from the onboard inertial navigation;
step 6: photographing the flight area in real time with A's onboard camera, passing the pictures to the convolutional neural network in real time, and extracting the features and positions of the candidate frames by forward propagation;
step 7: for a certain image from which m candidate frames and corresponding feature vectors have been extracted, computing the Pearson correlation coefficient between the feature vector of the current candidate frame and every entry of the matching feature library for A's real-time altitude; specifically, the feature library at A's current altitude holds n feature vectors, each of the m candidate frames is selected in turn as the current candidate frame and compared once against each of the n library features, giving n Pearson correlation coefficients, where the calculation formula is

r = Σᵢ₌₁ⁿ (Xᵢ − X̄)(Yᵢ − Ȳ) / ( √(Σᵢ₌₁ⁿ (Xᵢ − X̄)²) · √(Σᵢ₌₁ⁿ (Yᵢ − Ȳ)²) )

in which r is the Pearson correlation coefficient, Xᵢ is the i-th element of the feature vector of the current candidate frame at the current altitude of unmanned aerial vehicle A, Yᵢ is the i-th element of the corresponding feature vector in the feature library at that altitude, n is the length of the feature vectors at that altitude, X̄ is the mean of all elements of the candidate frame's feature vector, and Ȳ is the mean of all elements of the corresponding library feature vector;
step 8: running non-maximum suppression (NMS) using the n Pearson correlation coefficients of the current candidate frame: with the set IoU threshold, judging whether any Pearson correlation coefficient exceeds the set threshold; if so, selecting the maximum value and entering step 9; otherwise returning to step 7 and processing the next candidate frame as the current candidate frame;
step 9: from the at most m maximum Pearson correlation coefficients corresponding to the m candidate frames of the image, selecting the largest as the final Pearson correlation coefficient;
step 10: from the pixel coordinates of the four vertices of the candidate frame corresponding to the final Pearson correlation coefficient, computing the coordinate of its center, which is the position coordinate of the target;
step 11: guiding unmanned aerial vehicle A to land according to the position coordinate of the target.
2. The visual processing method for landing guidance of an unmanned aerial vehicle mobile platform according to claim 1, characterized in that in step 6, each image contains a plurality of candidate frames and a corresponding plurality of features.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011140021.5A CN112241180B (en) | 2020-10-22 | 2020-10-22 | Visual processing method for landing guidance of unmanned aerial vehicle mobile platform |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112241180A true CN112241180A (en) | 2021-01-19 |
CN112241180B CN112241180B (en) | 2021-08-17 |
Family
ID=74169888
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011140021.5A Active CN112241180B (en) | 2020-10-22 | 2020-10-22 | Visual processing method for landing guidance of unmanned aerial vehicle mobile platform |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112241180B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114999021A (en) * | 2022-05-17 | 2022-09-02 | 中联重科股份有限公司 | Method, processor, device and storage medium for determining cause of oil temperature abnormality |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107589758A (en) * | 2017-08-30 | 2018-01-16 | 武汉大学 | A kind of intelligent field unmanned plane rescue method and system based on double source video analysis |
US20180029842A1 (en) * | 2016-07-29 | 2018-02-01 | Otis Elevator Company | Monitoring system of a passenger conveyor and monitoring method thereof |
US20180186452A1 (en) * | 2017-01-04 | 2018-07-05 | Beijing Deephi Technology Co., Ltd. | Unmanned Aerial Vehicle Interactive Apparatus and Method Based on Deep Learning Posture Estimation |
CN108820233A (en) * | 2018-07-05 | 2018-11-16 | 西京学院 | A kind of fixed-wing unmanned aerial vehicle vision feels land bootstrap technique |
CN109190581A (en) * | 2018-09-17 | 2019-01-11 | 金陵科技学院 | Image sequence target detection recognition methods |
CN110458494A (en) * | 2019-07-19 | 2019-11-15 | 暨南大学 | A kind of unmanned plane logistics delivery method and system |
CN110673642A (en) * | 2019-10-28 | 2020-01-10 | 深圳市赛为智能股份有限公司 | Unmanned aerial vehicle landing control method and device, computer equipment and storage medium |
CN110825101A (en) * | 2019-12-26 | 2020-02-21 | 电子科技大学 | Unmanned aerial vehicle autonomous landing method based on deep convolutional neural network |
CN110837842A (en) * | 2019-09-12 | 2020-02-25 | 腾讯科技(深圳)有限公司 | Video quality evaluation method, model training method and model training device |
Legal Events: 2020-10-22 - Application CN202011140021.5A filed; granted as patent CN112241180B (status: Active).
Non-Patent Citations (4)
Title |
---|
Adrian Carrio: "A Real-time Supervised Learning Approach for Sky Segmentation", 《2016 International Conference on》 * |
Florian Shkurti: "Underwater Multi-Robot Convoying using", 《IEEE》 * |
Zu Linlu et al.: "Autonomous Landing Algorithm and Experiment for an Agricultural UAV Mobile Resupply Platform", 《Transactions of the Chinese Society for Agricultural Machinery》 * |
Guo Yanhui: "Research on Key Computer-Vision-Assisted Technologies for UAV-Boat Cooperation", 《China Master's Theses Full-text Database, Engineering Science and Technology II》 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114999021A (en) * | 2022-05-17 | 2022-09-02 | Zoomlion Heavy Industry Science and Technology Co., Ltd. | Method, processor, device and storage medium for determining cause of oil temperature abnormality |
Also Published As
Publication number | Publication date |
---|---|
CN112241180B (en) | 2021-08-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110782481B (en) | Unmanned ship intelligent decision-making method and system | |
CN106981073B (en) | Real-time ground moving target tracking method and system based on an unmanned aerial vehicle | |
CN106054929B (en) | Automatic landing guidance method for an unmanned aerial vehicle based on optical flow | |
CN111213155A (en) | Image processing method, device, movable platform, unmanned aerial vehicle and storage medium | |
CN113485441A (en) | Distribution network inspection method combining unmanned aerial vehicle high-precision positioning and visual tracking technology | |
CN109063532B (en) | Unmanned aerial vehicle-based method for searching field offline personnel | |
CN110832494A (en) | Semantic generation method, equipment, aircraft and storage medium | |
CN112488061B (en) | Multi-aircraft detection and tracking method combined with ADS-B information | |
CN112927264B (en) | Unmanned aerial vehicle tracking shooting system and RGBD tracking method thereof | |
CN111831010A (en) | Unmanned aerial vehicle obstacle avoidance flight method based on digital space slice | |
CN115526790A (en) | Spacecraft wreckage search and rescue identification tracking method and system based on neural network | |
CN112241180B (en) | Visual processing method for landing guidance of unmanned aerial vehicle mobile platform | |
CN114689030A (en) | Unmanned aerial vehicle auxiliary positioning method and system based on airborne vision | |
CN114815871A (en) | Vision-based autonomous landing method for vertical take-off and landing unmanned mobile platform | |
Kim et al. | A deep-learning-aided automatic vision-based control approach for autonomous drone racing in game of drones competition | |
KR102349818B1 (en) | Autonomous UAV Navigation based on improved Convolutional Neural Network with tracking and detection of road cracks and potholes | |
CN112818964A (en) | Unmanned aerial vehicle detection method based on FoveaBox anchor-free neural network | |
CN116185049A (en) | Unmanned helicopter autonomous landing method based on visual guidance | |
CN112364854B (en) | Airborne target approaching guidance system and method based on detection, tracking and fusion | |
CN115755575A (en) | ROS-based double-tripod-head unmanned aerial vehicle autonomous landing method | |
Shakirzyanov et al. | Method for unmanned vehicles automatic positioning based on signal radially symmetric markers recognition of underwater targets | |
CN112198884A (en) | Unmanned aerial vehicle mobile platform landing method based on visual guidance | |
CN112069997A (en) | Unmanned aerial vehicle autonomous landing target extraction method and device based on DenseHR-Net | |
CN113065499B (en) | Air robot cluster control method and system based on visual learning drive | |
CN114740900B (en) | Four-rotor unmanned aerial vehicle accurate landing system and method based on fault-tolerant control |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||