CN110631588A - Unmanned aerial vehicle visual navigation positioning method based on RBF network - Google Patents
Unmanned aerial vehicle visual navigation positioning method based on RBF network
- Publication number
- CN110631588A (application CN201910924244.1A)
- Authority
- CN
- China
- Prior art keywords
- feature point
- image
- aerial vehicle
- unmanned aerial
- descriptor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an unmanned aerial vehicle visual navigation positioning method based on an RBF network. The scheme of the invention is as follows: while GNSS signals are available, images are acquired through a camera, image frames are extracted, the feature points of each image are detected and their descriptors extracted, and only the feature point information of the image is kept; this descriptor extraction is repeated for successive image frames, and the descriptor information together with the positioning information is stored in a visual database. When GNSS signals are lost, images shot by the camera are extracted and descriptors are extracted in the same way, an RBF network classifier is trained using the visual database information, a neighborhood search is performed on the newly generated descriptors with this classifier to estimate the optimal matching position, and the current positioning information is obtained from the positioning information recorded at that position. Thus, when GNSS signals are lost, positioning and navigation of the unmanned aerial vehicle can still be realized based on the visual database constructed while GNSS signals were available; since the visual database stores only the feature point descriptor information of the images, its memory footprint is small.
Description
Technical Field
The invention belongs to the technical field of unmanned aerial vehicle navigation positioning, and particularly relates to an unmanned aerial vehicle visual navigation positioning method based on an RBF (Radial Basis Function) network.
Background
A reliable integrated positioning system is crucial to the stability and integrity of an unmanned aerial vehicle. The most common positioning solution combines the Global Navigation Satellite System (GNSS) and the Inertial Navigation System (INS) within a multi-sensor fusion framework. Here GNSS serves as a compact and economical means of constraining the unbounded errors generated by the INS sensors during positioning: the INS obtains an approximate drone position by iteratively integrating data from multiple sensors over time, and in this process the measurement errors of the sensors accumulate rapidly and grow without limit. Most drones therefore fuse INS and GNSS data in an Extended Kalman Filter (EKF) framework, which combines the short-term accuracy of the inertial navigation system with the long-term accuracy of the global navigation satellite system and thereby effectively suppresses the positioning error. For this reason, the global navigation satellite system is widely used on all kinds of drones.
Despite these advantages, the global navigation satellite system has proven unreliable in many recorded cases. In outdoor scenes such as urban canyons, forests, jungles, and rainy regions, GNSS is vulnerable both to intentional attacks and to unintentional environmental disturbances. In addition, drones relying on the global navigation satellite system have proven susceptible to signal spoofing on a number of occasions, and such attacks are now becoming a reality. A further disadvantage of GNSS in drone navigation is the radio communication required to acquire positioning data; radio communication systems are generally prone to availability problems, interference, and signal changes. The root limitation of GNSS/INS fusion is that it relies on global information obtained from GNSS to solve a local positioning problem. To address these issues, suitable navigation sensors and new navigation algorithms are needed so that the drone can still navigate and position itself under wireless communication interference and short-term or long-term GNSS/INS failures.
One popular method of reliably determining drone position in GNSS-denied or GNSS-degraded outdoor environments is to combine a monocular 2D camera with vision-based techniques. These techniques fall into two categories: those that use a priori knowledge of the environment and those that do not. Among the techniques using a priori knowledge, map-based navigation appears the most advanced: images taken by the drone are matched against previously acquired high-resolution landmark satellite images or landmark images. The limitations of this approach include the need for a large database of geographic images and for network-connected onboard access to that database; another important limitation is that the starting point or predefined boundaries must be known in advance. Map-based solutions are therefore seriously restricted in practical scenes. The second category of vision-based techniques avoids these restrictions because it requires no prior knowledge of the environment. This class of solutions includes visual odometry and simultaneous localization and mapping (SLAM), among others. In visual odometry, the motion of the drone is estimated by tracking features or pixels between successive images obtained from a monocular camera. However, even the most advanced monocular visual odometry drifts over time, because each location estimate builds on the previous one and errors accumulate. Compared with visual odometry, SLAM solves the localization problem while simultaneously building a map of the environment; map building requires multiple steps such as tracking, relocalization, and loop closure, and this solution is always accompanied by heavy computation and memory usage.
Disclosure of Invention
The invention aims to address the above problems by providing an RBF-network-based visual navigation positioning method for unmanned aerial vehicles: ground image feature descriptors are acquired during navigation of the unmanned aerial vehicle, and an RBF network classifier trained on the feature descriptor data set performs a neighborhood search over the feature point descriptors of the currently acquired image to obtain its optimal matching position, from which more accurate positioning information for the drone's location is estimated.
The invention discloses an unmanned aerial vehicle visual navigation positioning method based on an RBF network, which comprises the following steps:
step S1: setting an RBF neural network for matching the feature point descriptors of the image, and training the neural network;
wherein the training samples are: images collected by the airborne camera during navigation of the unmanned aerial vehicle; and the feature vectors of the training samples are: the feature point descriptors of the images obtained through ORB feature point detection processing;
step S2: constructing a visual database of the unmanned aerial vehicle during navigation:
in the navigation process of the unmanned aerial vehicle, images are collected through an airborne camera, ORB feature point detection processing is carried out on the collected images, descriptors of all feature points are extracted, and feature point descriptors of the current images are obtained; storing the feature point descriptors of the image and positioning information during image acquisition into a visual database;
step S3: unmanned aerial vehicle vision navigation positioning based on visual database:
based on a fixed interval period, extracting an image acquired by an airborne camera to serve as an image to be matched;
carrying out ORB feature point detection processing on the image to be matched, and extracting a descriptor of each feature point to obtain a feature point descriptor of the image to be matched;
inputting the feature point descriptor of the image to be matched into a trained RBF neural network, and performing neighborhood search to obtain the optimal matching feature point descriptor of the image to be matched in a visual database;
and obtaining the current visual navigation positioning result of the unmanned aerial vehicle based on the positioning information recorded in the database by the optimal matching feature point descriptor.
Further, step S3 further includes: detecting whether the similarity between the optimal matching feature point descriptor and the feature point descriptor of the image to be matched reaches a preset similarity threshold; if so, obtaining the current visual navigation positioning result of the unmanned aerial vehicle based on the positioning information recorded in the database for the optimal matching feature point descriptor; otherwise, continuing navigation based on the most recently obtained visual navigation positioning result.
In summary, due to the adoption of the technical scheme, the invention has the beneficial effects that:
(1) the visual database only stores the feature point descriptor information of the image, so that the occupied space of a memory is reduced;
(2) under the condition of no reference image library, the visual database can be directly accessed to match images shot by the unmanned aerial vehicle;
(3) feature descriptor neighborhood search is realized with the trained RBF network, the best matching position is obtained, and the positioning information is estimated.
Drawings
FIG. 1 is a visual positioning overall system framework;
FIG. 2 is a flow chart of ORB feature point detection;
FIG. 3 is a schematic diagram of rough feature point extraction during ORB feature point extraction;
FIG. 4 is a schematic diagram of an RBF network architecture;
FIG. 5 is a schematic diagram of the matching and positioning process of the RBF network.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the following embodiments and accompanying drawings.
The vision-based unmanned aerial vehicle navigation positioning method of the invention collects the ground image feature descriptors of the area where the unmanned aerial vehicle is currently located, performs a neighborhood search over the feature point descriptors of the collected images with an RBF network classifier trained on the feature descriptor data set, and obtains the optimal matching position of the image, thereby estimating more accurate positioning information for the drone's location.
Referring to FIG. 1, the vision-based unmanned aerial vehicle navigation positioning method mainly comprises two parts: data acquisition during the outbound flight, and positioning estimation during the return flight.
in the data acquisition part, acquiring images through a camera, extracting image frames from the camera, detecting characteristic points of each image, extracting a descriptor, discarding image data in the process, and keeping the characteristic point information of the image; repeating the processing of the descriptors of the extracted image frames, and storing descriptor information and positioning information into a visual database;
during positioning estimation processing, under the condition that GNSS signals are lost, images shot by a camera are extracted, descriptor extraction is also carried out, and an RBF network classifier is trained by using visual database information: then performing neighborhood search on the generated descriptors according to an RBF network classifier, and estimating an optimal matching position; and finally, estimating the positioning information of the current image according to the positioning information stored in the visual database at the optimal matching position.
The method comprises the following concrete implementation steps:
(1) Data collection.
Images are collected from the onboard camera, ORB (Oriented FAST and Rotated BRIEF) feature point detection is performed on each image frame, and a descriptor is extracted for each feature point. A database entry is then created and stored, consisting of the feature point descriptor set just extracted and the corresponding positioning information. The positioning information consists of the attitude information and position information provided by the drone's onboard application, and the format and nature of this information depend strongly on the specific application.
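As an illustration of this step, the following is a minimal sketch of building such a database entry, assuming OpenCV's ORB implementation; `visual_database` and `add_database_entry` are illustrative names, and the positioning record is assumed to be supplied by the onboard application.

```python
# A minimal sketch of a database entry, assuming OpenCV's ORB implementation;
# visual_database and add_database_entry are illustrative names, and the
# positioning record is assumed to come from the onboard application.
import cv2

orb = cv2.ORB_create(nfeatures=500)   # 256-bit (32-byte) BRIEF descriptors
visual_database = []                  # list of (descriptors, positioning) entries

def add_database_entry(frame_bgr, positioning):
    """Detect ORB feature points and store only descriptors plus pose."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    if descriptors is not None:
        # The image itself is discarded; keeping only descriptors is what
        # keeps the memory footprint of the visual database small.
        visual_database.append((descriptors, positioning))
```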
(2) Feature extraction.
ORB feature point detection uses the FAST (Features from Accelerated Segment Test) algorithm to detect feature points on each level of the scale pyramid. Based on the image gray values around a candidate feature point, a circle of pixels around the candidate is examined; if the gray values of enough pixels in this surrounding region differ from the gray value of the candidate by a sufficiently large amount (i.e., by more than a preset threshold), the candidate is taken as a feature point.
The method comprises the following specific steps:
1) ORB feature point detection.
Referring to FIG. 2, when detecting ORB feature points, FAST corner detection is first performed on the input image; a Harris corner response value is then computed for each selected FAST feature point using the Harris corner measure, and the N feature points with the largest response values are picked according to the sorted response values; the direction of each ORB feature point is then computed with the gray centroid method, and BRIEF is adopted as the feature point description method; finally, each feature point yields a 256-bit binary descriptor generated from point-pair comparisons.
That is, the ORB features detect FAST feature points by using a FAST feature point detection method, and then calculate Harris corner response values from the selected FAST feature points by using a Harris corner measurement method, and pick the top N feature points with the largest response values.
The corner response function $f_{CRF}$ of a FAST feature point is defined as:

$$f_{CRF}(x)=\begin{cases}1, & |I(x)-I(p)|>\varepsilon_d\\ 0, & |I(x)-I(p)|\le\varepsilon_d\end{cases}$$

where $\varepsilon_d$ is the threshold, $I(x)$ is the pixel value of a pixel point in the neighborhood of the point to be measured, and $I(p)$ is the pixel value of the current point to be measured.

The sum of the corner response function values of the point to be measured over all the corresponding surrounding points is denoted N; when N is greater than the set threshold, the point to be measured is a FAST feature point. The threshold is usually taken as 12.
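This test can be expressed directly in code. A minimal sketch, assuming `circle16` already holds the 16 pixel values sampled on the circle around candidate point p:

```python
import numpy as np

def is_fast_corner(circle16, center_value, eps_d, n_threshold=12):
    """FAST test for candidate p; circle16 holds the 16 circle pixel values."""
    diffs = np.abs(np.asarray(circle16, dtype=int) - int(center_value))
    f_crf = diffs > eps_d        # f_CRF = 1 where |I(x) - I(p)| > eps_d
    n = int(f_crf.sum())         # N = sum of the corner response values
    return n > n_threshold       # feature point when N exceeds the threshold (12)
```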
The specific processing flow for ORB feature point extraction is as follows:
the first step is as follows: and (5) roughly extracting the feature points. Selecting a point in the image as p, taking p as the center of circle and 3 pixels as radius, detecting the pixel values of the corresponding points with the position numbers of 1, 5, 9 and 13 on the circumference (as shown in fig. 3, one of the points includes 16 positions, when rough extraction is performed, four points on the circumference in the four directions of the upper, lower, left and right of the center of circle p are detected), and if the pixel value of at least 3 points in the 4 points is greater than or less than the pixel value of the p point, then the p point is considered as a feature point.
The second step: removal of locally dense points. A non-maximum suppression algorithm is applied: the feature point at the response maximum is retained and the remaining nearby feature points are deleted.
The third step: scale invariance of the feature points. An image pyramid is built to achieve multi-scale invariance of the feature points. A scale factor scale (e.g. 1.2) and the number of pyramid levels nlevels (e.g. 8 levels) are set. The original image is down-sampled into nlevels images according to the scale factor, and each level of down-sampled image I' is related to the original image I as follows:

$$I'_k = I/\mathrm{scale}^{\,k}\quad(k=1,2,\ldots,8)$$
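A minimal sketch of this pyramid construction, assuming OpenCV is available, using the example parameters above (scale = 1.2, nlevels = 8) and interpreting $I/\mathrm{scale}^k$ as down-sampling the image resolution by a factor of $\mathrm{scale}^k$:

```python
import cv2

def build_pyramid(image, scale=1.2, nlevels=8):
    """Level k is the original image down-sampled by scale**k."""
    levels = [image]
    for k in range(1, nlevels):
        f = 1.0 / (scale ** k)
        levels.append(cv2.resize(image, None, fx=f, fy=f,
                                 interpolation=cv2.INTER_AREA))
    return levels
```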
the fourth step: rotational invariance of feature points. And calculating the direction of the characteristic point by adopting a gray scale centroid method, wherein the moment in the radius r range of the characteristic point is the centroid, and the vector formed between the characteristic point and the centroid is the direction of the characteristic point.
The vector angle theta of the feature point and the centroid C is the main direction of the feature point:
θ=arctan(Cx,Cy)
wherein (C)x,Cy) Representing the coordinates of the centroid C.
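A sketch of the gray centroid computation, assuming `patch` is the square grayscale neighborhood of radius r centered on the feature point, with coordinates taken relative to the patch center:

```python
import numpy as np

def patch_orientation(patch):
    """Gray centroid direction of a square patch centered on the feature point."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xs -= (w - 1) / 2.0             # coordinates relative to the feature point
    ys -= (h - 1) / 2.0
    m00 = patch.sum()
    cx = (xs * patch).sum() / m00   # centroid C = (Cx, Cy) from gray moments
    cy = (ys * patch).sum() / m00
    return np.arctan2(cy, cx)       # principal direction theta
```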
2) Feature point descriptor generation.
ORB uses the BRIEF descriptor as the feature point description method. A BRIEF descriptor consists of a binary string of length n; in this embodiment n = 256. The binary value τ(p; x, y) of one bit of the descriptor is calculated as follows:

$$\tau(p;x,y)=\begin{cases}1, & p(x)<p(y)\\ 0, & p(x)\ge p(y)\end{cases}$$

where p(x) and p(y) are the gray levels of the two points of a point pair. The n-bit feature descriptor $f_n(p)$ formed from the n point pairs can then be expressed as:

$$f_n(p)=\sum_{1\le i\le n}2^{\,i-1}\,\tau(p;x_i,y_i)$$
constructing affine transformation matrix RθMaking the descriptor rotationally invariant, a rotation-corrected version S of the generator matrix S is obtainedθ:
Sθ=RθS
Wherein the generator matrix S is n point pairs (x)i,yi) I is 1, 2n, theta is the principal direction of the feature point.
Finally obtained feature point descriptor gn(p,θ)=fn(p)|xi,yi∈SθAnd 256-bit descriptors of the feature points are formed.
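The steering of the sampling pattern can be sketched as follows; `pairs` is assumed to be an (n, 2, 2) array holding the n sampling point pairs of the generator matrix S in patch coordinates (in practice ORB ships a fixed learned pattern):

```python
import numpy as np

def steered_brief(patch, pairs, theta):
    """pairs: (n, 2, 2) array of n sampling point pairs in patch coordinates.
    Applies S_theta = R_theta S, then compares intensities: tau = 1 iff p(x) < p(y)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    rotated = pairs @ R.T                          # rotate every sampling point
    center = (np.array(patch.shape[::-1]) - 1) / 2.0
    pts = np.rint(rotated + center).astype(int)
    pts = np.clip(pts, 0, [patch.shape[1] - 1, patch.shape[0] - 1])
    a = patch[pts[:, 0, 1], pts[:, 0, 0]].astype(int)  # p(x): first point of pair
    b = patch[pts[:, 1, 1], pts[:, 1, 0]].astype(int)  # p(y): second point of pair
    return (a < b).astype(np.uint8)                # the n descriptor bits
```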
(3) Matching and positioning based on the RBF neural network.
When the GNSS/INS signal of the unmanned aerial vehicle is unavailable, the system prompts the unmanned aerial vehicle to return home. Using the motion information stored in the feature database, the image descriptors extracted during the return flight are matched against the descriptors previously inserted into the database to obtain positioning information. The RBF-neural-network-based matching and positioning system consists of network pattern training and pattern positioning, as follows:
1) Setting the training mode.
The training mode is set, the training samples are learned, and a classification decision is provided.

The RBF network contains only one hidden layer; the distance between the input and a center vector serves as the argument of the basis function, and the radial basis function serves as the activation function. This local approximation keeps the computational complexity low: for a given input X only some of the neurons respond, the outputs of the others are approximately 0, and only the weights w of the responding neurons are adjusted.
Referring to FIG. 4, the RBF neural network is composed of an input layer, a hidden layer and an output layer, wherein
An input layer, the transformation from the input space to the hidden layer space being nonlinear;
a hidden layer, neurons using radial basis functions as activation functions, the hidden layer to output layer spatial transformation being linear;
the output layer adopts neurons of linear functions and is a linear combination of the output of the neurons of the hidden layer;
the RBF network adopts RBF as the 'base' of the hidden unit to form a hidden layer space, and an input vector is directly mapped to the hidden space. After the center point is determined, the mapping relationship can be determined. The mapping from input to output of the network is nonlinear, the network output is linear for adjustable parameters, and the connection weight can be directly solved by a linear equation set, so that the learning speed is greatly increased, and the local minimum problem is avoided.
In this specific embodiment, the weights from the input layer to the hidden layer of the RBF neural network are fixed to 1, and the transfer function of the hidden layer units is a radial basis function. Each hidden neuron takes the vector distance between its layer weight (center) vector $x_i$ and the input vector X, multiplies it by a deviation $b_i$, and feeds the result to the neuron activation function. Taking the radial basis function as a Gaussian function, the output of the i-th hidden neuron is:

$$R_i(x)=\exp\left(-\frac{\|x-x_i\|^{2}}{2\sigma^{2}}\right)$$

where x represents the input vector, $x_i$ is the center of the i-th basis function, and σ is a width parameter that determines how each radial basis layer neuron responds to the input vector.
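A one-function sketch of this hidden layer, assuming `centers` is the I x d matrix of basis function centers and all neurons share the width σ:

```python
import numpy as np

def rbf_hidden_layer(x, centers, sigma):
    """R_i(x) = exp(-||x - x_i||^2 / (2 sigma^2)) for every hidden neuron i."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))
```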
2) RBF neural network learning.
The RBF network has three parameters to learn: the centers $x_i$ of the basis functions, the variance σ, and the weights w between the hidden layer and the output layer.

i. Determining the basis function centers $x_i$.
The feature descriptor vectors of the images acquired by the camera are used to generate the feature database, and a k-means clustering algorithm determines the kernel function centers $x_i$: I different samples are randomly selected from the training samples as the initial centers $x_i(0)$; a training sample $X_k$ is then input at random and the center closest to it is found, i.e. the index satisfying:

$$i(X_k)=\arg\min_i\|X_k-x_i(n)\|$$

where $i=1,2,\ldots,I$, $x_i(n)$ denotes the i-th center of the radial basis function at the n-th iteration, and the iteration step number n is initialized to 0. The winning basis function center is adjusted by the following formula:

$$x_i(n+1)=x_i(n)+\gamma\,[X_k-x_i(n)]$$

where γ is the learning step size, 0 < γ < 1.

That is, the basis function centers are continually updated by iterative training with the above formula. When the change produced by the two most recent iteration updates does not exceed a preset threshold, the updating stops (learning is finished), $x_i(n+1)\approx x_i(n)$ is taken to hold, and the centers from the last update are taken as the final iteration training output $x_i$ ($i=1,2,\ldots,I$). Otherwise n = n + 1 and the iteration continues.
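A sketch of this self-organizing center selection, assuming `samples` is a NumPy array of training descriptor vectors; the stopping rule follows the threshold test described above:

```python
import numpy as np

def learn_centers(samples, num_centers, gamma=0.1, tol=1e-3, max_iter=10000):
    """Iterative k-means-style center selection with learning step gamma."""
    rng = np.random.default_rng(0)
    idx = rng.choice(len(samples), num_centers, replace=False)
    centers = samples[idx].astype(float)            # initial centers x_i(0)
    for _ in range(max_iter):
        xk = samples[rng.integers(len(samples))]    # random training sample X_k
        i = np.argmin(np.linalg.norm(xk - centers, axis=1))  # nearest center
        step = gamma * (xk - centers[i])  # x_i(n+1) = x_i(n) + gamma (X_k - x_i(n))
        centers[i] += step
        if np.linalg.norm(step) < tol:              # change below threshold: stop
            break
    return centers
```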
ii. Determining the variance σ of the basis functions.
After the RBF neural network centers are determined, the width is expressed as:

$$\sigma=\frac{d_{max}}{\sqrt{2M}}$$

where M is the number of hidden layer units and $d_{max}$ is the maximum distance between the selected centers.
iii. Determining the hidden-layer-to-output-layer weights w.
The connection weights from the hidden layer to the output layer units are calculated with the least square method, namely

$$g_{qi}=\exp\left(\frac{M}{d_{max}^{2}}\,\|X_q-x_i\|^{2}\right),\quad q=1,2,\ldots,Q$$

where $g_{qi}$ denotes the weight between the vector of the q-th input sample and the i-th basis function center, and $X_q$ is the vector of the q-th input sample.
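The remaining two parameters can be sketched as follows; the least-squares solve is shown here with a pseudo-inverse over Gaussian hidden responses and assumes one-hot target rows `targets`, which is one standard reading of "calculated by the least square method":

```python
import numpy as np

def rbf_width(centers):
    """sigma = d_max / sqrt(2M), d_max = maximum inter-center distance."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    return d.max() / np.sqrt(2.0 * len(centers))

def output_weights(samples, centers, sigma, targets):
    """Least-squares solve of the hidden-to-output weights W from G W ~= T."""
    d2 = np.sum((samples[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    G = np.exp(-d2 / (2.0 * sigma ** 2))            # hidden responses g_qi
    return np.linalg.pinv(G) @ targets              # W = G^+ T
```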
3) Matching and positioning.
Given the time sequence of the images shot by the unmanned aerial vehicle, during the return flight an image shot by the camera can be extracted every fixed number of frames (for example, every ten frames) and its features extracted to generate feature descriptor vectors. A neighborhood search is then performed with the trained RBF network classifier to obtain the optimal matching position: the trained classifier finds the best match between the currently extracted feature descriptors and the feature descriptors, stored in the database, of the images shot during the outbound flight (from the departure point toward the destination). It is then checked whether the similarity between the currently extracted descriptors and the optimal matching result reaches the preset similarity threshold; if so, the position of the current optimal matching result is taken as the current position estimate of the unmanned aerial vehicle during the return flight, yielding the positioning information.
Furthermore, navigation system error compensation can be applied to the obtained position estimate to produce the positioning information. If the similarity with the optimal matching position is lower than the predefined similarity threshold, the position is declared unknown; the drone continues to acquire ground images of its current area and navigates using its velocity and attitude information together with the most recently obtained positioning result.
The navigation system error formula is:

$$\hat{P}=\tilde{P}+\frac{1}{n}\sum_{j=1}^{n}\left(\hat{P}_j-\tilde{P}_j\right)$$

where $\tilde{P}$ denotes the current position estimation result (the position of the current best match result), and $\hat{P}_j$ and $\tilde{P}_j$ denote the final and raw position estimation results of the j-th most recent positioning, $j=1,2,\ldots,n$, with n a preset value. That is, the average error of the n most recent positioning results is used as the current compensation amount, and error compensation is applied to the current position estimation result to obtain the current final position estimation result $\hat{P}$.
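Putting the return-flight logic together, the following is a sketch of the thresholded match plus error compensation; `classify` stands in for the RBF neighborhood search and is a hypothetical callable returning the best match's recorded position and a similarity score:

```python
import numpy as np

def locate(descriptor, classify, history, sim_threshold=0.8):
    """classify: hypothetical RBF neighborhood search, returning the matched
    database position and a similarity score for the best match."""
    position, similarity = classify(descriptor)
    if similarity < sim_threshold:
        return None                       # position unknown: keep flying on the last fix
    raw = np.asarray(position, dtype=float)
    # average error of the recent fixes is the current compensation amount
    bias = np.mean([final - r for r, final in history], axis=0) if history else 0.0
    final = raw + bias                    # compensated final position estimate
    history.append((raw, final))
    return final
```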
Referring to FIG. 5, the RBF network matching and positioning procedure of the invention is as follows:

First, the feature descriptor information stored in the visual database is randomly sampled, and the binary feature descriptor data obtained by sampling is used to train the RBF network: the training mode is set and the RBF centers are determined by k-means clustering; the RBF width is determined from the obtained centers; the hidden-layer-to-output-layer connection weights are determined by the least square method from the RBF centers and width, which finally fixes the RBF network structure.

With the RBF network structure obtained from training, neighborhood matching is performed on the feature point descriptors generated from the images acquired during the return flight, the optimal matching position is judged, and finally the positioning of the current return-flight image is estimated from the stored positioning information.
In summary, during data collection the invention acquires images from the camera, performs feature point detection with the ORB feature point extraction technique, and extracts a descriptor for each keypoint. A database entry is created and stored, consisting of the extracted descriptors and the positioning information, where the positioning information comprises the attitude and position information of the unmanned aerial vehicle. Three parameters must be solved in the RBF network: the centers of the basis functions, the variance, and the weights from the hidden layer to the output layer. A self-organizing center selection learning method is adopted: in the first step an unsupervised learning process solves for the centers and variance of the hidden layer basis functions; in the second step a supervised learning process directly obtains the weights between the hidden layer and the output layer by the least square method. To reduce the similarity of adjacent images, one image is extracted every fixed number of frames, keypoints are detected, and descriptors are extracted for each keypoint with the same feature point extraction method as in data collection. The RBF network then yields the closest distance between the current image's descriptors and the descriptors previously inserted into the database, finding the optimal matching position, and the positioning information of the current image is estimated from that optimal matching position.
While the invention has been described with reference to specific embodiments, any feature disclosed in this specification may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise; all of the disclosed features, or all of the method or process steps, may be combined in any combination, except mutually exclusive features and/or steps.
Claims (6)
1. An unmanned aerial vehicle visual navigation positioning method based on an RBF network is characterized by comprising the following steps:
step S1: setting an RBF neural network for matching the feature point descriptors of the image, and training the neural network;
the RBF neural network comprises an input layer, a hidden layer and an output layer, wherein a transfer function of the hidden layer adopts a radial basis function;
the training samples are: in the navigation process of the unmanned aerial vehicle, images collected by an airborne camera are used; the feature vectors of the training samples are: feature point descriptors of the image obtained through ORB feature point detection processing;
step S2: constructing a visual database of the unmanned aerial vehicle during navigation:
in the navigation process of the unmanned aerial vehicle, images are collected through an airborne camera, ORB feature point detection processing is carried out on the collected images, descriptors of all feature points are extracted, and feature point descriptors of the current images are obtained; storing the feature point descriptors of the image and positioning information during image acquisition into a visual database;
step S3: unmanned aerial vehicle vision navigation positioning based on visual database:
based on a fixed interval period, extracting an image acquired by an airborne camera to serve as an image to be matched;
carrying out ORB feature point detection processing on the image to be matched, and extracting a descriptor of each feature point to obtain a feature point descriptor of the image to be matched;
inputting the feature point descriptor of the image to be matched into a trained RBF neural network, and performing neighborhood search to obtain the optimal matching feature point descriptor of the image to be matched in a visual database;
and obtaining the current visual navigation positioning result of the unmanned aerial vehicle based on the positioning information recorded in the database by the optimal matching feature point descriptor.
2. The method of claim 1, wherein step S3 further comprises: detecting whether the similarity between the optimal matching feature point descriptor and the feature point descriptor of the image to be matched reaches a preset similarity threshold; if so, obtaining the current visual navigation positioning result of the unmanned aerial vehicle based on the positioning information recorded in the database for the optimal matching feature point descriptor; otherwise, continuing navigation based on the most recently obtained visual navigation positioning result.
3. The method of claim 1, wherein a weight between an input layer to a hidden layer of the RBF neural network is fixed to 1.
4. The method of claim 3, wherein, when training the RBF neural network, the basis function centers of the radial basis functions are determined using a k-means clustering algorithm;

the variance σ of the radial basis functions is set as $\sigma = d_{max}/\sqrt{2M}$, where M is the number of cells in the hidden layer and $d_{max}$ is the maximum distance between the centers of the basis functions;

the weight W between the hidden layer and the output layer is $W=\exp\left(\frac{M}{d_{max}^{2}}\|x_q-x_i\|^{2}\right)$, where $x_q$ represents the feature vector of the q-th input sample and $x_i$ represents the i-th basis function center.
5. The method of claim 1, wherein the positioning information comprises pose information and position information of the drone.
6. The method according to claim 1, wherein in step S3 the images acquired by the onboard camera are extracted at an interval of once every ten frames.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910924244.1A CN110631588B (en) | 2019-09-23 | 2019-09-23 | Unmanned aerial vehicle visual navigation positioning method based on RBF network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110631588A | 2019-12-31 |
CN110631588B CN110631588B (en) | 2022-11-18 |
Family
ID=68972992
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910924244.1A Active CN110631588B (en) | 2019-09-23 | 2019-09-23 | Unmanned aerial vehicle visual navigation positioning method based on RBF network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110631588B (en) |
Patent Citations (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0460866A2 (en) * | 1990-06-05 | 1991-12-11 | Hughes Aircraft Company | A precision satellite tracking system |
JPH06331733A (en) * | 1993-05-24 | 1994-12-02 | Ikuo Arai | Method and equipment for measuring distance |
JP2002504722A (en) * | 1998-02-19 | 2002-02-12 | マインドメーカー、インコーポレーテッド | Gesture category recognition and training method and system |
CN1258845A (en) * | 1998-12-28 | 2000-07-05 | 日本高压电气株式会社 | Fault locating system |
TW426882B (en) * | 1999-10-29 | 2001-03-21 | Taiwan Semiconductor Mfg | Overlap statistic process control with efficiency by using positive and negative feedback overlap correction system |
US20100169006A1 (en) * | 2006-02-20 | 2010-07-01 | Toyota Jidosha Kabushiki Kaisha | Positioning system, positioning method and car navigation system |
DE102006055563B3 (en) * | 2006-11-24 | 2008-01-03 | Ford Global Technologies, LLC, Dearborn | Correcting desired value deviations of fuel injected into internal combustion engine involves computing deviation value using square error method and correcting deviation based on computed deviation value |
CN101118280A (en) * | 2007-08-31 | 2008-02-06 | 西安电子科技大学 | Distributed wireless sensor network node self positioning method |
CN101476891A (en) * | 2008-01-02 | 2009-07-08 | 丘玓 | Accurate navigation system and method for movable object |
US20100049376A1 (en) * | 2008-08-19 | 2010-02-25 | Abraham Schultz | Method and system for providing a gps-based position |
CN101655561A (en) * | 2009-09-14 | 2010-02-24 | 南京莱斯信息技术股份有限公司 | Federated Kalman filtering-based method for fusing multilateration data and radar data |
CN101860622A (en) * | 2010-06-11 | 2010-10-13 | 中兴通讯股份有限公司 | Device and method for unlocking mobile phone |
CN102387526A (en) * | 2010-08-30 | 2012-03-21 | 中兴通讯股份有限公司 | Method and device for increasing positioning accuracy of wireless honeycomb system |
CN103561463A (en) * | 2013-10-24 | 2014-02-05 | 电子科技大学 | RBF neural network indoor positioning method based on sample clustering |
CN103983263A (en) * | 2014-05-30 | 2014-08-13 | 东南大学 | Inertia/visual integrated navigation method adopting iterated extended Kalman filter and neural network |
CN104330084A (en) * | 2014-11-13 | 2015-02-04 | 东南大学 | Neural network assisted integrated navigation method for underwater vehicle |
US20170255840A1 (en) * | 2014-11-26 | 2017-09-07 | Captricity, Inc. | Analyzing content of digital images |
WO2017215026A1 (en) * | 2016-06-16 | 2017-12-21 | 东南大学 | Extended kalman filter positioning method based on height constraint |
CN106203261A (en) * | 2016-06-24 | 2016-12-07 | 大连理工大学 | Unmanned vehicle field water based on SVM and SURF detection and tracking |
US20190025858A1 (en) * | 2016-10-09 | 2019-01-24 | Airspace Systems, Inc. | Flight control using computer vision |
CN106709909A (en) * | 2016-12-13 | 2017-05-24 | 重庆理工大学 | Flexible robot vision recognition and positioning system based on depth learning |
CN106780484A (en) * | 2017-01-11 | 2017-05-31 | 山东大学 | Robot interframe position and orientation estimation method based on convolutional neural networks Feature Descriptor |
WO2018209862A1 (en) * | 2017-05-18 | 2018-11-22 | 广州视源电子科技股份有限公司 | Pose error correction method and device, robot and storage medium |
CN108426576A (en) * | 2017-09-15 | 2018-08-21 | 辽宁科技大学 | Aircraft paths planning method and system based on identification point vision guided navigation and SINS |
CN107808407A (en) * | 2017-10-16 | 2018-03-16 | 亿航智能设备(广州)有限公司 | Unmanned plane vision SLAM methods, unmanned plane and storage medium based on binocular camera |
CN108051836A (en) * | 2017-11-02 | 2018-05-18 | 中兴通讯股份有限公司 | A kind of localization method, device, server and system |
CN107909600A (en) * | 2017-11-04 | 2018-04-13 | 南京奇蛙智能科技有限公司 | The unmanned plane real time kinematics target classification and detection method of a kind of view-based access control model |
CN107862705A (en) * | 2017-11-21 | 2018-03-30 | 重庆邮电大学 | A kind of unmanned plane small target detecting method based on motion feature and deep learning feature |
CN108153334A (en) * | 2017-12-01 | 2018-06-12 | 南京航空航天大学 | No cooperative target formula unmanned helicopter vision is independently maked a return voyage and drop method and system |
CN108168539A (en) * | 2017-12-21 | 2018-06-15 | 儒安科技有限公司 | A kind of blind man navigation method based on computer vision, apparatus and system |
CN109959898A (en) * | 2017-12-26 | 2019-07-02 | 中国船舶重工集团公司七五〇试验场 | A kind of seat bottom type underwater sound Passive Positioning basic matrix method for self-calibrating |
CN108364314A (en) * | 2018-01-12 | 2018-08-03 | 香港科技大学深圳研究院 | A kind of localization method, system and medium |
CN108820233A (en) * | 2018-07-05 | 2018-11-16 | 西京学院 | A kind of fixed-wing unmanned aerial vehicle vision feels land bootstrap technique |
CN109141194A (en) * | 2018-07-27 | 2019-01-04 | 成都飞机工业(集团)有限责任公司 | A kind of rotation pivot angle head positioning accuracy measures compensation method indirectly |
CN109238288A (en) * | 2018-09-10 | 2019-01-18 | 电子科技大学 | Autonomous navigation method in a kind of unmanned plane room |
CN109739254A (en) * | 2018-11-20 | 2019-05-10 | 国网浙江省电力有限公司信息通信分公司 | Using the unmanned plane and its localization method of visual pattern positioning in a kind of electric inspection process |
CN109670513A (en) * | 2018-11-27 | 2019-04-23 | 西安交通大学 | A kind of piston attitude detecting method based on bag of words and support vector machines |
CN109445449A (en) * | 2018-11-29 | 2019-03-08 | 浙江大学 | A kind of high subsonic speed unmanned plane hedgehopping control system and method |
CN109615645A (en) * | 2018-12-07 | 2019-04-12 | 国网四川省电力公司电力科学研究院 | The Feature Points Extraction of view-based access control model |
CN109658445A (en) * | 2018-12-14 | 2019-04-19 | 北京旷视科技有限公司 | Network training method, increment build drawing method, localization method, device and equipment |
CN109859225A (en) * | 2018-12-24 | 2019-06-07 | 中国电子科技集团公司第二十研究所 | A kind of unmanned plane scene matching aided navigation localization method based on improvement ORB Feature Points Matching |
CN109765930A (en) * | 2019-01-29 | 2019-05-17 | 理光软件研究所(北京)有限公司 | A kind of unmanned plane vision navigation system |
CN109991633A (en) * | 2019-03-05 | 2019-07-09 | 上海卫星工程研究所 | A kind of low orbit satellite orbit determination in real time method |
CN110058602A (en) * | 2019-03-27 | 2019-07-26 | 天津大学 | Multi-rotor unmanned aerial vehicle autonomic positioning method based on deep vision |
CN110032965A (en) * | 2019-04-10 | 2019-07-19 | 南京理工大学 | Vision positioning method based on remote sensing images |
Non-Patent Citations (18)
Title |
---|
BAI WENFENG et al.: "Development of machine vision positioning system based on neural network", 2011 International Conference on Mechatronic Science, Electric Engineering and Computer (MEC) *
HAITAO JIA et al.: "UAV search planning based on perceptual cue", 2013 International Workshop on Microwave and Millimeter Wave Circuits and System Technology *
W.Y. KONG et al.: "Feature Based Navigation for UAVs", 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems *
YAO RUOCHEN et al.: "Vision-based navigation and perception system for small unmanned aerial vehicles", Proceedings of the 5th China High Resolution Earth Observation Conference *
YOU YUXING: "Research and implementation of key technologies of UAV visual navigation", China Masters' Theses Full-text Database (Engineering Science and Technology II) *
YIN XILIANG et al.: "RBF-based local feature matching algorithm for visual positioning images", Mobile Communications *
WANG HONGSHENG: "Research on a machine vision alignment system and development of related technologies", China Masters' Theses Full-text Database (Information Science and Technology) *
WANG HONGSHENG et al.: "High-speed and high-precision machine vision positioning algorithm", Opto-Electronic Engineering *
WANG ZUWU et al.: "Method for determining the bearing of transmission line towers based on visual navigation", Laser & Optoelectronics Progress *
WANG HUI et al.: "Research on a dual-PWM variable-frequency speed regulation system based on virtual flux predictive direct power control", Journal of China Three Gorges University (Natural Sciences) *
BAI GENBEN et al.: "Research on graded absolute growth increment standards for Masson pine", Journal of Xinjiang Agricultural University *
LUO CAI et al.: "Design of a bionic UAV system based on neural networks", Research and Exploration in Laboratory *
XIAO GONGWEI et al.: "Forecast and analysis of flood peaks at the Qilijie station based on BP neural networks", Journal of Shandong University of Technology (Natural Science Edition) *
JIANG FUCHUN et al.: "Online soft-sensing method for the influent flow of post-ozonation contact tanks in waterworks", China Water & Wastewater *
JIA HAITAO: "Research on perception-guided data fusion algorithms", China Doctoral Dissertations Full-text Database (Information Science and Technology) *
ZHAO TENGFEI: "Research on vision-based door handle recognition and pose estimation methods", China Masters' Theses Full-text Database (Information Science and Technology) *
GUO JUN: "Research on the application of neural networks and nonlinear filtering in scene matching aided navigation", China Masters' Theses Full-text Database (Information Science and Technology) *
QI CHAO et al.: "Map control and implementation in a vehicle monitoring geographic information system", Computer Automated Measurement & Control *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111221340A (en) * | 2020-02-10 | 2020-06-02 | 电子科技大学 | Design method of migratable visual navigation based on coarse-grained features |
CN111221340B (en) * | 2020-02-10 | 2023-04-07 | 电子科技大学 | Design method of migratable visual navigation based on coarse-grained features |
CN111833395A (en) * | 2020-06-04 | 2020-10-27 | 西安电子科技大学 | Direction-finding system single target positioning method and device based on neural network model |
CN111833395B (en) * | 2020-06-04 | 2022-11-29 | 西安电子科技大学 | Direction-finding system single target positioning method and device based on neural network model |
WO2022016563A1 (en) * | 2020-07-23 | 2022-01-27 | 南京科沃信息技术有限公司 | Ground monitoring system for plant-protection unmanned aerial vehicle, and monitoring method for same |
CN114202583A (en) * | 2021-12-10 | 2022-03-18 | 中国科学院空间应用工程与技术中心 | Visual positioning method and system for unmanned aerial vehicle |
CN113936064A (en) * | 2021-12-17 | 2022-01-14 | 荣耀终端有限公司 | Positioning method and device |
CN115729269A (en) * | 2022-12-27 | 2023-03-03 | 深圳市逗映科技有限公司 | Unmanned aerial vehicle intelligent recognition system based on machine vision |
CN115729269B (en) * | 2022-12-27 | 2024-02-20 | 深圳市逗映科技有限公司 | Unmanned aerial vehicle intelligent recognition system based on machine vision |
Also Published As
Publication number | Publication date |
---|---|
CN110631588B (en) | 2022-11-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110631588B (en) | Unmanned aerial vehicle visual navigation positioning method based on RBF network | |
CN111028277B (en) | SAR and optical remote sensing image registration method based on pseudo-twin convolution neural network | |
CN110856112B (en) | Crowd-sourcing perception multi-source information fusion indoor positioning method and system | |
Leira et al. | Object detection, recognition, and tracking from UAVs using a thermal camera | |
CN110119438B (en) | Airborne LiDAR point cloud filtering method based on active learning | |
CN106529538A (en) | Method and device for positioning aircraft | |
CN112070807B (en) | Multi-target tracking method and electronic device | |
CN112325883B (en) | Indoor positioning method for mobile robot with WiFi and visual multi-source integration | |
Tao et al. | Scene context-driven vehicle detection in high-resolution aerial images | |
CN113313763B (en) | Monocular camera pose optimization method and device based on neural network | |
CN104881029B (en) | Mobile Robotics Navigation method based on a point RANSAC and FAST algorithms | |
CN112419374A (en) | Unmanned aerial vehicle positioning method based on image registration | |
Dumble et al. | Airborne vision-aided navigation using road intersection features | |
CN114119659A (en) | Multi-sensor fusion target tracking method | |
CN114238675A (en) | Unmanned aerial vehicle ground target positioning method based on heterogeneous image matching | |
CN117876723B (en) | Unmanned aerial vehicle aerial image global retrieval positioning method under refusing environment | |
CN115861352A (en) | Monocular vision, IMU and laser radar data fusion and edge extraction method | |
CN114046790A (en) | Factor graph double-loop detection method | |
CN117115414B (en) | GPS-free unmanned aerial vehicle positioning method and device based on deep learning | |
CN110636248A (en) | Target tracking method and device | |
CN118225087A (en) | Autonomous absolute positioning and navigation method for aircraft under satellite navigation refusal condition | |
CN110738098A (en) | target identification positioning and locking tracking method | |
CN115761693A (en) | Method for detecting vehicle location mark points and tracking and positioning vehicles based on panoramic image | |
CN113343747A (en) | Method for multi-modal image robust matching VNS | |
Timotheatos et al. | Visual horizon line detection for uav navigation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |