CN111950386A - Functional intelligence-based environment self-adaptive navigation scene recognition method for micro unmanned aerial vehicle


Info

Publication number: CN111950386A
Application number: CN202010710116.XA
Authority: CN (China)
Prior art keywords: scene, unmanned aerial vehicle, intelligent mobile terminal
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 王玲玲 (Wang Lingling), 李小宇 (Li Xiaoyu), 富立 (Fu Li), 王亚宁 (Wang Yaning)
Current Assignee: Beihang University
Original Assignee: Beihang University
Priority/filing date: 2020-07-22
Publication date: 2020-11-17
Application filed by Beihang University; priority to CN202010710116.XA.

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G06V20/13 - Satellite images
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 - Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 - Determining a navigation solution using signals from a system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 - Determining position
    • G01S19/45 - Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks


Abstract

The invention discloses a functional intelligence-based environment self-adaptive navigation scene recognition method for a micro unmanned aerial vehicle. A combined flight platform composed of a micro unmanned aerial vehicle and an intelligent mobile terminal is designed. The non-visual sensors of the intelligent mobile terminal collect data from which environmental features are extracted, and the features are accurately recognized by a trained SceneNet deep learning network; when needed, images can be collected by the visual sensor of the intelligent mobile terminal, and the collected images are processed for feature extraction and classification by a trained MobileNet-v2 network to obtain the final recognition decision. Compared with traditional scene recognition methods, this double-layer scene recognition method improves the accuracy of scene recognition by imitating the way organisms perceive the environment through physiological behavior, realizes functionally intelligent navigation that adapts well to different environments, allows adaptive navigation software to be developed freely without changing the navigation system, and achieves stable flight and reliable operation in different environments.

Description

Functional intelligence-based environment self-adaptive navigation scene recognition method for micro unmanned aerial vehicle
Technical Field
The invention relates to the technical field of navigation scene recognition, in particular to a functional intelligence-based environment self-adaptive navigation scene recognition method for a micro unmanned aerial vehicle.
Background
Micro unmanned aerial vehicles are widely used in monitoring, search, and rescue tasks in various environments because of their excellent maneuverability and hovering capability. To ensure that these tasks are performed with high precision, the vehicle must be capable of autonomous flight. Inertial sensors are autonomous sensors whose measurements are not affected by the external environment. With the limits of cost and payload fully considered, most micro-unmanned-aerial-vehicle navigation systems are equipped with low-cost, low-accuracy MEMS inertial sensors. To improve inertial navigation accuracy, most indoor systems adopt visual/inertial integrated navigation and most outdoor systems adopt GPS/inertial integrated navigation, and a great deal of research has been devoted to improving the accuracy of integrated navigation systems. However, the workspace of a micro unmanned aerial vehicle covers complex, changing environments from indoor to outdoor, and the availability of measurements from external aiding sensors (e.g., GPS, cameras) depends on the changing scene. Obtaining satisfactory navigation accuracy in different environments is therefore a challenge for the autonomous flight of micro unmanned aerial vehicles.
In recent years, research on improving the environment-adaptive navigation capability of unmanned aerial vehicles has produced a series of results. P.D. Groves et al., "The Complexity Problem in Future Multisensor Navigation and Positioning Systems: A Modular Solution", The Journal of Navigation, vol. 67, pp. 311-326, 2014, studied improving the smoothness of navigation-system information fusion during GPS-aided navigation in complex environments and adopted a modular multi-sensor integrated navigation approach so that the system can adapt to different environments without being redesigned as a whole. In addition, to divide the environment in finer detail and ensure smooth switching between modules, H. Gao and P.D. Groves, "Context Determination for Adaptive Navigation using Multiple Sensors on a Smartphone", in Proceedings of the 29th International Technical Meeting of the Satellite Division of The Institute of Navigation, pp. 742-756, September 2016, introduced an "intermediate" scene category to cover the boundary between indoor and outdoor in special scenes such as urban canyons and places near windows; a high-accuracy, fault-tolerant variable-structure navigation system was further developed on the basis of functional-system theory by modeling intelligent adaptive behavior. However, measurement-sensor selection criteria based on quantifying the observability of the system state are not sufficiently robust to environmental changes and interference.
Although environment-adaptive navigation has become a research hotspot, accurately identifying the environmental context and reconfiguring the multi-sensor navigation system while the micro unmanned aerial vehicle continuously switches scenes remains a key challenge. Environment-sensing information must therefore be used to achieve efficient online recognition of complex scenes and to provide suitable decisions from which the micro unmanned aerial vehicle can select an accurate, real-time, and robust optimal sensor combination and the corresponding switching points.
Scene recognition has been widely applied to pedestrian and robot navigation, and perception of the external environment is key to improving navigation performance in unknown environments. H. Gao and P.D. Groves, "Environmental Context Detection for Adaptive Navigation using GNSS Measurements from a Smartphone", Navigation: Journal of The Institute of Navigation, vol. 65, no. 1, pp. 99-116, 2018, designed a hybrid environment-detection scheme, based on the characteristics of the GNSS signals received by an intelligent mobile terminal in different environments, to detect indoor, intermediate, urban, and open-sky environments. This hybrid method can correctly distinguish indoor environments from open-sky outdoor environments. However, because the GNSS signal characteristics in the transition-zone environment partially resemble those of indoor or outdoor environments, the intermediate environment is difficult to detect accurately. In addition, the hybrid environment-detection method has not been verified under high-dynamic conditions and must be improved to meet the requirements of micro-unmanned-aerial-vehicle navigation.
Extracting features from the sensor information of an intelligent mobile terminal can improve the reliability of environment detection. In practice, however, the time-series measurements of the multiple sensors on an intelligent mobile terminal tend to be corrupted by unmodeled time-varying noise, and the variety of such noise contained in the measurements makes feature extraction nontrivial. In this regard, Zhenghua Chen, Chaoyang Jiang, and Lihua Xie, "A Novel Ensemble ELM for Human Activity Recognition Using Smartphone Sensors", IEEE Transactions on Industrial Informatics, vol. 15, no. 5, pp. 2691-2699, 2019, used an ensemble extreme learning machine to extract human-activity features from smartphone sensor data; convolutional and recurrent neural networks were combined in a unified deep learning framework, and the different relationships between the smartphone sensor input signals and the unmodeled noise were exploited to automatically extract robust, significant features and perform efficient classification. In addition, Y. Ding, Y. Cheng, and X.L. Chen, "Noise-resistant network: a deep-learning method for face recognition under noise", EURASIP Journal on Image and Video Processing, 2017, proposed a deep-neural-network-based noise-resistant network that performs well in image recognition under unknown noise. Xuetao Zhang, Zhenxue Chen, Q.M. Jonathan Wu, Lei Cai, Dan Lu, and Xianming Li, "Fast Semantic Segmentation for Scene Perception", IEEE Transactions on Industrial Informatics, vol. 15, no. 2, pp. 1183-1191, 2019, showed that in low-dynamic applications deep-neural-network feature extraction for scene recognition clearly outperforms the current state of the art; when the environment changes drastically under high dynamics, however, the scene recognition may need to be redesigned.
Disclosure of Invention
In view of the above, the invention provides a method for identifying an environment adaptive navigation scene of a micro unmanned aerial vehicle based on functional intelligence, which is used for improving the adaptive navigation capability of the micro unmanned aerial vehicle in different environments.
Therefore, the invention provides a functional intelligence-based environment self-adaptive navigation scene identification method for a micro unmanned aerial vehicle, which comprises the following steps:
s1: carrying out an environment self-adaptive navigation scene recognition experiment by using a combined flight platform constructed by a micro unmanned aerial vehicle and an intelligent mobile terminal, extracting environmental characteristics from data acquired by a non-visual sensor of the intelligent mobile terminal, and preprocessing the environmental characteristics;
s2: inputting the preprocessed environmental characteristics into a SceneNet deep learning network trained in advance in the real-time recognition layer to obtain a scene category and a confidence;
s3: judging whether the confidence obtained by the real-time recognition layer is lower than a trigger threshold; if yes, executing steps S4 and S5; if not, executing step S6;
s4: generating a control instruction to enable the micro unmanned aerial vehicle to hover and activate the trigger identification layer;
s5: after the trigger recognition layer is activated, changing the orientation of the visual sensor of the intelligent mobile terminal, acquiring images from the front, left, right, rear, and upward directions while the micro unmanned aerial vehicle hovers, stitching the images acquired in the five directions, and performing feature extraction and classification on the stitched image with a pre-trained MobileNet-v2 network, the three-dimensional output tensor of which represents the recognition likelihood of the different flight scenes, thereby giving the final recognition decision for the flight scene;
s6: taking the scene category obtained by the real-time recognition layer as the recognition result, and letting the micro unmanned aerial vehicle continue flying.
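The control flow of steps S1-S6 reduces to a simple threshold test. Below is a minimal sketch of that two-layer loop in Python; `scene_net`, `mobilenet_v2`, and `hover_and_capture` are hypothetical stand-ins for the trained networks and the hover-and-capture routine, which the invention describes but does not expose as code.

```python
import numpy as np

TRIGGER_THRESHOLD = 0.6  # value used in embodiment 1 below
SCENES = ("indoor", "transitional", "outdoor")

def scene_net(features):
    """Stand-in for the trained SceneNet: class probabilities (S2)."""
    logits = np.random.randn(len(SCENES))
    e = np.exp(logits - logits.max())
    return e / e.sum()

def mobilenet_v2(stitched_image):
    """Stand-in for the trained MobileNet-v2 classifier (S5)."""
    logits = np.random.randn(len(SCENES))
    e = np.exp(logits - logits.max())
    return e / e.sum()

def hover_and_capture():
    """Stand-in for S4/S5: hover, sweep the gimbal, stitch five views."""
    return np.zeros((224, 224, 3))

def recognize_scene(features):
    probs = scene_net(features)                # real-time layer (S2)
    if probs.max() >= TRIGGER_THRESHOLD:       # confident: keep flying (S6)
        return SCENES[int(probs.argmax())]
    stitched = hover_and_capture()             # low confidence: trigger layer (S4/S5)
    return SCENES[int(mobilenet_v2(stitched).argmax())]

print(recognize_scene(np.zeros(10)))
```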
In a possible implementation manner, in the method for identifying an environment adaptive navigation scene of a micro unmanned aerial vehicle based on functional intelligence provided by the present invention, the training process of the SceneNet deep learning network in step S2 specifically includes the following steps:
s21: carrying out multiple flight tests under different weather and illumination conditions and different flight scenes by using a combined flight platform constructed by a micro unmanned aerial vehicle and an intelligent mobile terminal, and acquiring data by using a non-visual sensor and information processing capacity of the intelligent mobile terminal; wherein the non-visual sensors comprise a magnetometer, a barometer, a light intensity sensor, and a GNSS;
s22: mixing the collected magnetometer, barometer, and light-intensity data with normally distributed noise to obtain mixed data, combining the signal-to-noise ratios of the visible GNSS satellites through an equation set to obtain combined data, dividing the mixed data and the combined data within a preset time period into a plurality of intervals with a sliding window and a fixed stride, converting the data in each interval to the frequency domain with a discrete Fourier transform, and extracting a tensor to obtain sample data with scene labels; wherein the tensor consists of the magnitude-phase pairs in the frequency domain;
s23: randomly dividing the sample data into a training set and a test set, and inputting them into the SceneNet deep learning network for training and learning to obtain the trained SceneNet deep learning network.
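As a concrete illustration of S22-S23, the sketch below adds Gaussian noise to the raw non-visual-sensor data and performs the random training/test split; the noise scale `sigma` is an assumption, since the text does not specify it.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def augment_with_noise(raw: np.ndarray, sigma: float = 0.05) -> np.ndarray:
    """S22: mix magnetometer/barometer/light-intensity data with normally
    distributed noise; sigma is an assumed noise scale."""
    return raw + rng.normal(0.0, sigma, size=raw.shape)

def random_split(samples, labels, train_ratio=0.8):
    """S23: random training/test split (80/20 in embodiment 1)."""
    idx = rng.permutation(len(samples))
    cut = int(train_ratio * len(samples))
    return samples[idx[:cut]], labels[idx[:cut]], samples[idx[cut:]], labels[idx[cut:]]
```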
In a possible implementation manner, in the method for identifying an environment adaptive navigation scene of a micro unmanned aerial vehicle based on functional intelligence provided by the present invention, the training process of the MobileNet-v2 network in step S5 specifically includes the following steps:
s51: the method comprises the following steps of carrying out multiple flight tests under different weather and illumination conditions and different flight scenes by using a combined flight platform constructed by a micro unmanned aerial vehicle and an intelligent mobile terminal, collecting images from the front direction, the left direction, the right direction, the back direction and the upper direction by using a visual sensor of the intelligent mobile terminal, and splicing the collected images in the five directions;
s52: taking the stitched images with scene labels as the sample data of the MobileNet-v2 network, randomly dividing the sample data into a training set and a test set, and inputting them into the MobileNet-v2 network for training and learning to obtain the trained MobileNet-v2 network.
In a possible implementation manner, in the method for identifying an environment adaptive navigation scene of a micro unmanned aerial vehicle based on functional intelligence provided by the invention, all the sensor acquisition processes on the intelligent mobile terminal are controlled through the UI of the intelligent mobile terminal or through the ground station of the combined flight platform.
The invention provides a functional intelligence-based environment self-adaptive navigation scene recognition method for a micro unmanned aerial vehicle. A combined flight platform composed of a micro unmanned aerial vehicle and an intelligent mobile terminal is designed; the low-cost non-visual sensors of the intelligent mobile terminal collect data from which environmental features are extracted, and the extracted features are recognized online, accurately and robustly, by a pre-trained SceneNet deep learning network. Images can further be collected by the visual sensor of the intelligent mobile terminal, and the collected images are processed for feature extraction and classification by a pre-trained MobileNet-v2 network to obtain the final recognition decision for the flight scene. Compared with traditional scene recognition methods, this double-layer scene recognition method performs better in accuracy, latency, and robustness; it improves the scene recognition accuracy of the micro unmanned aerial vehicle, realizes functionally intelligent navigation that adapts well to different environments, allows adaptive navigation software to be developed freely without changing the hardware structure of the navigation system by imitating the way organisms perceive the environment through physiological behavior, and achieves stable flight and reliable operation in different environments.
Drawings
Fig. 1 is a flowchart of an environment adaptive navigation scene recognition method for a micro unmanned aerial vehicle based on functional intelligence provided by the invention;
FIG. 2 is a flowchart of the training process of the SceneNet deep learning network in the functional intelligence-based environment self-adaptive navigation scene recognition method for a micro unmanned aerial vehicle provided by the invention;
FIG. 3 is a flowchart of the training process of the MobileNet-v2 network in the functional intelligence-based environment self-adaptive navigation scene recognition method for a micro unmanned aerial vehicle provided by the invention;
fig. 4 is a schematic structural diagram of a system required for implementing the environment adaptive navigation scene recognition method for a micro unmanned aerial vehicle based on functional intelligence in embodiment 1 of the present invention;
fig. 5 is a configuration diagram of a combined flying platform according to embodiment 1 of the present invention;
fig. 6 is a schematic structural diagram of double-layer scene recognition in embodiment 1 of the present invention;
fig. 7 is a schematic structural diagram of the SceneNet deep learning network of the real-time recognition layer in embodiment 1 of the present invention;
fig. 8a is a graph of a relationship between a training loss value and a training iteration number in a training process of a SceneNet deep learning network in embodiment 1 of the present invention;
fig. 8b is a diagram of a relationship between training classification accuracy and training iteration number in the training process of the SceneNet deep learning network in embodiment 1 of the present invention;
FIG. 8c is a graph showing the relationship between the optimal individual fitness value and the number of evolution generations of the conventional machine learning method PSO-SVM in embodiment 1 of the present invention;
FIG. 8d is a diagram showing the classification result of the conventional machine learning method PSO-SVM in embodiment 1 of the present invention;
fig. 8e is a comparison diagram of the classification accuracy of the SceneNet deep learning network and the conventional machine learning method PSO-SVM in embodiment 1 of the present invention;
FIG. 9 is a graph showing the relationship between the training loss value and the number of training iterations of the MobileNet-v2 network during training in embodiment 1 of the present invention;
FIG. 10 is a view showing the scene recognition result of the daytime garden flight experiment in embodiment 1 of the present invention;
FIG. 11 is a view showing a scene recognition result of a daytime corridor flight experiment in embodiment 1 of the present invention;
fig. 12 is a view showing a scene recognition result of a building entrance flight experiment at dusk in embodiment 1 of the present invention;
fig. 13 is a scene recognition result diagram of an intermediate scene actual test in embodiment 1 of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only illustrative and are not intended to limit the present invention.
The invention provides a functional intelligence-based environment self-adaptive navigation scene recognition method for a micro unmanned aerial vehicle, which comprises the following steps as shown in figure 1:
s1: performing an environment self-adaptive navigation scene recognition experiment by using a combined flight platform constructed by a micro unmanned aerial vehicle and an intelligent mobile terminal, extracting environment characteristics from data acquired by a non-visual sensor of the intelligent mobile terminal, and preprocessing the environment characteristics;
s2: inputting the preprocessed environmental characteristics into a SceneNet deep learning network trained in advance in the real-time recognition layer to obtain a scene category and a confidence;
s3: judging whether the confidence obtained by the real-time recognition layer is lower than a trigger threshold; if yes, executing steps S4 and S5; if not, executing step S6;
s4: generating a control instruction to enable the micro unmanned aerial vehicle to hover and activate the trigger identification layer;
s5: after the trigger recognition layer is activated, changing the orientation of the visual sensor of the intelligent mobile terminal, acquiring images from the front, left, right, rear, and upward directions while the micro unmanned aerial vehicle hovers, stitching the images acquired in the five directions, and performing feature extraction and classification on the stitched image with a pre-trained MobileNet-v2 network, the three-dimensional output tensor of which represents the recognition likelihood of the different flight scenes, thereby giving the final recognition decision for the flight scene;
specifically, the trigger recognition layer sequentially generates five gimbal control instructions, introducing sufficient image information from the gimbal angles (0°, 0°), (0°, 90°), (0°, 180°), (0°, 270°), and (90°, 0°);
s6: taking the scene category obtained by the real-time recognition layer as the recognition result, and letting the micro unmanned aerial vehicle continue flying.
The invention provides a functional intelligence-based environment self-adaptive navigation scene recognition method for a micro unmanned aerial vehicle. A combined flight platform composed of a micro unmanned aerial vehicle and an intelligent mobile terminal is designed; the low-cost non-visual sensors (i.e., environment-sensitive sensors) of the intelligent mobile terminal collect data from which environmental features are extracted, and the extracted features are recognized online, accurately and robustly, by a pre-trained SceneNet deep learning network. Images can further be collected by the visual sensor of the intelligent mobile terminal, and the collected images are processed for feature extraction and classification by a pre-trained MobileNet-v2 network to obtain the final recognition decision for the flight scene. Compared with traditional scene recognition methods, this double-layer scene recognition method performs better in accuracy, latency, and robustness; it improves the scene recognition accuracy of the micro unmanned aerial vehicle, realizes functionally intelligent navigation that adapts well to different environments, allows adaptive navigation software to be developed freely without changing the hardware structure of the navigation system by imitating the way organisms perceive the environment through physiological behavior, and achieves stable flight and reliable operation in different environments.
The functional intelligence-based environment self-adaptive navigation scene recognition method for a micro unmanned aerial vehicle is motivated by improving accuracy while reducing computational cost: high-quality usable information is obtained from prior knowledge, the workspace, and the results of navigation-information processing and integration, thereby improving the accuracy and robustness of environment-adaptive autonomous navigation. The choice of navigation and positioning scheme for the micro unmanned aerial vehicle depends on the environmental scene in which it operates, which can generally be divided into three types: indoor scenes, outdoor scenes, and transitional scenes. In different environmental scenes, the micro unmanned aerial vehicle must switch to the navigation method suited to the environment to ensure the continuity and stability of the flight mission, and accurate extraction of flight-environment information facilitates the selection and switching of navigation methods. Flight-environment scene recognition is therefore an important link in achieving continuous, accurate navigation and positioning for the micro unmanned aerial vehicle.
The functional intelligence-based environment self-adaptive navigation scene recognition method is designed as an independent module, separate from the micro unmanned aerial vehicle, that obtains information from the environment and feeds the recognition result back to the vehicle so that it can autonomously select a suitable navigation method. The main tasks of scene recognition are to extract information from the environment, preprocess it, extract features, and construct a feature classifier. Environmental scene recognition consists of two recognition layers (a real-time recognition layer and a trigger recognition layer) that together process and integrate the information of multiple sensors. The first layer operates in real time, the second layer is triggered under specific conditions, and the two layers jointly provide environmental information to the micro unmanned aerial vehicle. The first layer receives the data of the four non-visual sensors over a period of time and, after data preprocessing and classification by the SceneNet deep learning network classifier, obtains the classification confidences of the three environment types. When the highest confidence is still low (below the trigger threshold), the second layer is triggered: it receives the visual-sensor data, uses scene images as its input information source, stitches and preprocesses several pictures of the surrounding environment, and feeds the result to the MobileNet-v2 network classifier, which gives the final recognition decision.
In specific implementation, in the method for identifying an environment adaptive navigation scene of a micro unmanned aerial vehicle based on functional intelligence provided by the present invention, as shown in fig. 2, the training process of the SceneNet deep learning network in step S2 may specifically include the following steps:
s21: the method comprises the following steps of performing multiple flight tests under different weather and illumination conditions and different flight scenes by using a combined flight platform constructed by a micro unmanned aerial vehicle and an intelligent mobile terminal, and acquiring data by using a non-visual sensor and information processing capacity of the intelligent mobile terminal; the non-visual sensor comprises a magnetometer, a barometer, a light intensity sensor and a GNSS (global navigation satellite system); specifically, data acquisition software can be used for acquiring measurement data of a non-visual sensor of the intelligent mobile terminal;
s22: mixing the collected magnetometer, barometer, and light-intensity data with normally distributed noise to obtain mixed data, combining the signal-to-noise ratios of the visible GNSS satellites through an equation set to obtain combined data, dividing the mixed data and the combined data within a preset time period into a plurality of intervals with a sliding window and a fixed stride, converting the data in each interval to the frequency domain with a discrete Fourier transform, and extracting a tensor to obtain sample data with scene labels; wherein the tensor consists of the magnitude-phase pairs in the frequency domain;
s23: randomly dividing the sample data into a training set and a test set, and inputting them into the SceneNet deep learning network for training and learning to obtain the trained SceneNet deep learning network.
In specific implementation, in the method for identifying an environment adaptive navigation scene of a micro unmanned aerial vehicle based on functional intelligence provided by the present invention, as shown in fig. 3, the training process of the MobileNet-v2 network in step S5 may specifically include the following steps:
s51: the method comprises the following steps of carrying out multiple flight tests under different weather and illumination conditions and different flight scenes by using a combined flight platform constructed by a micro unmanned aerial vehicle and an intelligent mobile terminal, collecting images from the front direction, the left direction, the right direction, the rear direction and the upper direction by using a visual sensor of the intelligent mobile terminal, and splicing the collected images in the five directions;
s52: taking the stitched images with scene labels as the sample data of the MobileNet-v2 network, randomly dividing the sample data into a training set and a test set, and inputting them into the MobileNet-v2 network for training and learning to obtain the trained MobileNet-v2 network.
In specific implementation, in the method for identifying the environment self-adaptive navigation scene of a micro unmanned aerial vehicle based on functional intelligence provided by the invention, all the sensor acquisition processes on the intelligent mobile terminal are controlled through the UI (user interface) of the intelligent mobile terminal or through the ground station of the combined flight platform.
The following describes in detail a specific implementation of the above-mentioned environment adaptive navigation scene recognition method for a micro-unmanned aerial vehicle based on functional intelligence according to a specific embodiment.
Example 1:
The implementation of the above functional intelligence-based environment adaptive navigation scene recognition method for a micro unmanned aerial vehicle relies on an environment perception functional system, an intelligent classification decision functional system, and an intelligent information fusion functional system (as shown in fig. 4), of which the environment perception and intelligent classification decision functional systems are realized on the intelligent mobile terminal. As shown in fig. 5, a combined flight platform composed of a quadrotor micro unmanned aerial vehicle 1 (e.g., a DJI M100) and an intelligent mobile terminal 2 (e.g., a smartphone) is designed; the intelligent mobile terminal 2 is mounted on a three-axis gimbal 3 with a STorM32 BGC controller, and the micro unmanned aerial vehicle body (DJI M100) is controlled by a Pixhawk autopilot 4. The PX4-based system in the Pixhawk autopilot implements the intelligent information fusion functional system using the outputs of the inertial measurement unit, magnetometer, GNSS, barometer, airspeed sensor, and light-intensity sensor. The intelligent mobile terminal communicates with the Pixhawk autopilot via an ESP8266 Wi-Fi module. The PX4-based system in the Pixhawk autopilot receives commands from the intelligent classification decision functional system on the intelligent mobile terminal, reconfigures the multi-sensor navigation system, and forwards gimbal control commands to the STorM32 BGC controller through the serial port.
As shown in fig. 6, the functional intelligence-based environment self-adaptive navigation scene recognition method provided by the invention comprises a real-time recognition layer and a trigger recognition layer. During flight of the micro unmanned aerial vehicle, the real-time recognition layer applies the SceneNet deep learning network to extract environmental features from the non-visual-sensor measurements of the intelligent mobile terminal, achieving high-update-rate recognition of the flight scene. If the confidence of the real-time recognition layer falls below the trigger threshold, the intelligent classification decision functional system generates a control instruction that makes the micro unmanned aerial vehicle hover and activates the trigger recognition layer. The trigger recognition layer sequentially generates five gimbal control instructions, introducing sufficient image information from the gimbal angles (0°, 0°), (0°, 90°), (0°, 180°), (0°, 270°), and (90°, 0°), and then applies the MobileNet-v2 network to give the final recognition decision.
As shown in fig. 7, four convolutional neural sub-networks are constructed at the bottom of the SceneNet deep learning network to capture the intrinsic connections among the different measurement dimensions of the same non-visual sensor. Each convolutional sub-network contains three independent convolutional layers whose kernels differ in size. The outputs of the four sub-networks, serving as the main environmental features of the individual non-visual sensors, are combined by three additional convolutional layers with different kernel sizes to learn the interactions among the measurements of the different non-visual sensors. The combined output is regarded as a high-level feature of the different non-visual sensors under different environments. The activation function in each convolutional layer is:
[Equation (1), the activation function of each convolutional layer, is reproduced only as an image in the source publication; x in the equation denotes the batch-normalized input.]
Since scene recognition is a classification problem, a fully connected neural network is used as the classifier for the high-level features. Its input is the average of the GRU unit outputs within a sliding window, and a Softmax function serves as the output function of the fully connected network to obtain the classification probabilities of the three environments. The Softmax function is:
P(i) = exp(a_i) / Σ_{k=1}^{T} exp(a_k)    (2)
where a_i and a_k denote the i-th and k-th output neurons, respectively, T is the number of output neurons, and P(i) is the probability of the i-th output. The time-series tensor average of the GRU outputs is fed into the fully connected layer, yielding the recognition likelihood of the three flight scenes.
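The topology just described (four per-sensor convolutional sub-networks with three kernel sizes each, three merging convolutional layers, and a GRU whose window-averaged output feeds a softmax classifier) can be sketched in Keras as follows. The filter counts, kernel sizes, GRU width, and number of frequency bins F are illustrative assumptions; the patent specifies the structure but not these hyperparameters.

```python
import tensorflow as tf
from tensorflow.keras import layers

F = 5  # assumed number of frequency bins (magnitude-phase pairs)

def sensor_subnet(x, name):
    """Per-sensor sub-network: three independent conv layers with
    different kernel sizes, merged by concatenation."""
    branches = [layers.Conv1D(16, k, padding="same", activation="relu",
                              name=f"{name}_conv_k{k}")(x) for k in (3, 5, 7)]
    return layers.Concatenate(name=f"{name}_merge")(branches)

# one input per non-visual sensor: magnetometer (3 axes), barometer (1),
# light-intensity sensor (1), combined GNSS features (5)
sensor_dims = [("mag", 3), ("baro", 1), ("light", 1), ("gnss", 5)]
inputs = [tf.keras.Input(shape=(2 * F, d), name=n) for n, d in sensor_dims]

feats = layers.Concatenate()(
    [sensor_subnet(x, n) for x, (n, _) in zip(inputs, sensor_dims)])
for k in (3, 5, 7):  # three additional conv layers combining the sensors
    feats = layers.Conv1D(32, k, padding="same", activation="relu")(feats)

seq = layers.GRU(64, return_sequences=True)(feats)
avg = layers.GlobalAveragePooling1D()(seq)          # average of GRU outputs
probs = layers.Dense(3, activation="softmax")(avg)  # indoor/transition/outdoor

scene_net = tf.keras.Model(inputs, probs, name="scene_net_sketch")
scene_net.summary()
```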
When the trigger recognition layer is activated, the orientation of the visual sensor of the intelligent mobile terminal (e.g., a camera) is changed, images are collected from the front, left, right, rear, and upward directions while the micro unmanned aerial vehicle hovers, and the images from the five directions are stitched into the scene image with the largest field of view. Each image is tagged with the gimbal angle at the time of shooting. The upward image is placed at the center of the stitched image and occupies the largest proportion, while the images from the other directions are arranged randomly around it. The stitched image is cropped, padded, and standardized as in equation (3) below:
x_mn = (y_mn - μ) / max(σ, 1/√N)    (3)
where y_mn is the element in row m and column n of the image matrix, μ and σ are the mean and standard deviation of the image, respectively, and N is the number of pixels in the image.
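Equation (3) matches the common per-image standardization with a 1/√N floor on σ; since the source equation survives only as an image, that floor is an assumption. A sketch in NumPy:

```python
import numpy as np

def per_image_standardization(img: np.ndarray) -> np.ndarray:
    """Equation (3): subtract the image mean and divide by the standard
    deviation, floored at 1/sqrt(N) (assumed) to avoid division by ~0."""
    n = img.size
    return (img - img.mean()) / max(img.std(), 1.0 / np.sqrt(n))

print(per_image_standardization(np.arange(12.0).reshape(3, 4)).round(3))
```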
The data collected by the magnetometer, barometer, and light-intensity sensor are contaminated by unmodeled noise; to augment the data set and improve the robustness of the network, normally distributed noise is added during training. The signal-to-noise ratios of the visible GNSS satellites are combined through the following equation set:

sum = Σ_{p=1}^{P} SNR_p,    num_{s1,s2} = #{ p : s1 ≤ SNR_p < s2 }    (4)

where sum denotes the total signal-to-noise ratio, SNR_p denotes the signal-to-noise ratio of the p-th visible satellite, num_{s1,s2} denotes the number of satellites whose signal-to-noise ratio lies in the range [s1, s2], and s1 and s2 denote the lower and upper limits of the range, with s2 = s1 + 12. Setting s1 = 0, 12, 24, 36 yields five GNSS combined features: sum, num_{0-12}, num_{12-24}, num_{24-36}, and num_{36-48}.
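The five GNSS features of equation (4) are straightforward to compute; a short sketch (the SNR values in the usage line are made up for illustration):

```python
import numpy as np

def gnss_features(snrs):
    """Total SNR plus four 12-unit bin counts, as in equation (4)."""
    snrs = np.asarray(snrs, dtype=float)
    total = snrs.sum()                                  # sum
    bins = [((s1 <= snrs) & (snrs < s1 + 12)).sum()     # num_{s1, s1+12}
            for s1 in (0, 12, 24, 36)]
    return np.array([total, *bins], dtype=float)

print(gnss_features([18.5, 30.2, 41.0, 25.7]))  # [115.4  0.  1.  2.  1.]
```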
The data processed in each time segment (3.75 s) are divided into 15 intervals by a sliding window (t = 0.25 s) with a fixed stride (v = 0.25 s). Within each interval, the processed data are converted to the frequency domain using a discrete Fourier transform, and a 10 × 2f × t tensor is extracted, where f is the number of magnitude-phase pairs in the frequency domain. For each time segment, the input tensor of the SceneNet deep learning network therefore has dimensions 10 × 2f × t × 15.
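The windowing and DFT step might look as follows; the sampling rate is an assumption, the 10 channels are taken to be the three magnetometer axes, the barometer, the light intensity, and the five GNSS features, and the within-interval time axis t is folded into the transform for brevity.

```python
import numpy as np

FS = 32                  # assumed sensor sampling rate (Hz)
WIN = int(0.25 * FS)     # 0.25 s sliding window == stride, giving 15 intervals

def window_dft(segment: np.ndarray) -> np.ndarray:
    """segment: (n_samples, 10) channels -> (15, 10, 2f) magnitude/phase."""
    out = []
    for i in range(15):
        chunk = segment[i * WIN:(i + 1) * WIN]             # (WIN, 10)
        spec = np.fft.rfft(chunk, axis=0)                  # (f, 10), f = WIN//2 + 1
        feat = np.concatenate([np.abs(spec), np.angle(spec)], axis=0)  # (2f, 10)
        out.append(feat.T)                                 # (10, 2f)
    return np.stack(out)                                   # (15, 10, 2f)

x = window_dft(np.random.randn(int(3.75 * FS), 10))
print(x.shape)  # (15, 10, 10) with FS = 32, i.e. f = 5
```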
Preprocessing the data collected by the non-visual sensors in 1300 flight tests yields 23000 samples with scene labels. The samples are randomly divided into a training set and a test set for the SceneNet deep learning network, with 80% used for training and 20% for testing.
In the training process of the SceneNet deep learning network, an Adam optimizer is adopted and a cross-entropy loss function serves as the cost function. The regularization loss is set to 0.0005 and the initial learning rate to 0.01. The batch size is set to 128. TensorFlow is used to train the SceneNet deep learning network and to record summaries during training. The training loss value and training classification accuracy of the SceneNet deep learning network are shown in fig. 8a and 8b; the results show that, as the number of training iterations increases, the loss value of the SceneNet deep learning network on the training set gradually decreases and the classification accuracy gradually improves.
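The reported configuration corresponds to a Keras setup along the following lines; the dense stand-in model is only a placeholder (the actual SceneNet topology is sketched above), and the L2 penalty is assumed to realize the stated regularization loss of 0.0005.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

# placeholder classifier; the actual SceneNet topology is sketched above
model = tf.keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(100,),
                 kernel_regularizer=regularizers.l2(0.0005)),  # reg. loss 0.0005
    layers.Dense(3, activation="softmax"),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),  # initial LR 0.01
    loss="sparse_categorical_crossentropy",                  # cross-entropy cost
    metrics=["accuracy"],
)
# model.fit(x_train, y_train, batch_size=128, validation_data=(x_test, y_test))
```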
Fig. 8c, fig. 8d, and fig. 8e show a performance comparison between the SceneNet deep learning network and the conventional machine learning method PSO-SVM. Fig. 8c and 8d show the optimal individual fitness value and the classification result of the PSO-SVM method, respectively. As shown in fig. 8e, the classification accuracy of the SceneNet deep learning network scheme provided by the invention is 5.1% higher than that of the PSO-SVM.
To train the MobileNet-v2 network, 1300 stitched images with scene labels are used as the sample data of MobileNet-v2 and randomly divided into a training set and a test set, with 80% used for training and 20% for testing. An SGD optimizer and a cross-entropy loss function are used. The regularization loss is set to 0.00004. Learning-rate decay is applied, with an initial learning rate of 0.01 and a decay factor of 0.1, so that the learning rate varies over time; the batch size is set to 16 and the moving-average coefficient to 0.9999. TensorFlow is used to train the MobileNet-v2 network and record the results of the training process. The loss values during training of the MobileNet-v2 network are shown in fig. 9, and the best MobileNet-v2 model achieves 98.2% classification accuracy on the test set.
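A fine-tuning setup consistent with these hyperparameters is sketched below; the 224 × 224 input size, the ImageNet initialization, and the decay schedule's step count are assumptions not stated in the text.

```python
import tensorflow as tf

# MobileNet-v2 backbone with a 3-class head for the three scene types
base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                         include_top=False, weights="imagenet")
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
out = tf.keras.layers.Dense(3, activation="softmax")(x)
model = tf.keras.Model(base.input, out)

# initial LR 0.01 decayed by a factor of 0.1; decay_steps is an assumption
lr = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.01, decay_steps=1000, decay_rate=0.1)
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=lr),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# the reported moving-average coefficient 0.9999 would be applied as an
# exponential moving average over the weights, omitted here for brevity
# model.fit(train_images, train_labels, batch_size=16, epochs=...)
```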
In addition, the performance of the double-layer scene recognition method within the adaptive navigation scheme is verified through flight experiments of the combined flight platform, composed of the micro unmanned aerial vehicle and the intelligent mobile terminal, in different environments. The trigger threshold is set to 0.6. Two different tasks are completed in the flight experiments: single-scene flight and indoor-to-outdoor scene-switching flight. For single-scene flight, the flight scenes include playgrounds, gardens, streets, building entrances, urban canyons, laboratories, classrooms, and corridors during the day, and streets, urban canyons, and corridors at dusk. In each single-scene flight, multiple scene photographs were taken to evaluate the accuracy of embodiment 1 of the present invention, while the state of the scene-recognition application on the intelligent mobile terminal was monitored at the ground station. The outputs of the intelligent classification decision functional system for the daytime garden and corridor scenes and for the double-layer recognition at the building entrance at dusk are shown in fig. 10, fig. 11, and fig. 12, respectively. The flight results at the other locations are summarized in table 1 together with the minimum scene-recognition confidence during each flight; for the street scene beside a building at dusk, where the minimum confidence was 87.7%, the trigger recognition layer of the intelligent classification decision functional system was activated.
TABLE 1
[Table 1 is reproduced only as an image in the source publication; it lists, for each flight scene, the minimum scene-recognition confidence observed during flight.]
In the flight missions, the adaptive navigation method of embodiment 1 of the present invention proves robust to illumination changes. In most flight tasks, the real-time recognition layer of the intelligent classification decision functional system works alone and outputs the correct scene recognition result, which demonstrates the effectiveness of the SceneNet deep learning network model. Under some complex conditions, such as flight at a building entrance at dusk, the classification results of the real-time recognition layer and the trigger recognition layer are combined to reach the correct recognition decision.
Fig. 13 shows the test results of the platform flying from a corridor to a garden, crossing three scenes: an indoor scene, an intermediate scene, and an outdoor scene. The scene recognition method provided in embodiment 1 of the present invention recognizes the intermediate scene quickly and accurately, and the confidence increases over time. The trigger recognition layer is activated in two transitions, from indoor to intermediate and from intermediate to outdoor. The test results shown in fig. 13 verify the effectiveness and robustness of the double-layer scene recognition method.
The invention provides a functional intelligence-based environment self-adaptive navigation scene recognition method for a micro unmanned aerial vehicle. A combined flight platform composed of a micro unmanned aerial vehicle and an intelligent mobile terminal is designed; the low-cost non-visual sensors of the intelligent mobile terminal collect data from which environmental features are extracted, and the extracted features are recognized online, accurately and robustly, by a pre-trained SceneNet deep learning network. Images can further be collected by the visual sensor of the intelligent mobile terminal, and the collected images are processed for feature extraction and classification by a pre-trained MobileNet-v2 network to obtain the final recognition decision for the flight scene. Compared with traditional scene recognition methods, this double-layer scene recognition method performs better in accuracy, latency, and robustness; it improves the scene recognition accuracy of the micro unmanned aerial vehicle, realizes functionally intelligent navigation that adapts well to different environments, allows adaptive navigation software to be developed freely without changing the hardware structure of the navigation system by imitating the way organisms perceive the environment through physiological behavior, and achieves stable flight and reliable operation in different environments.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (4)

1. A method for identifying an environment self-adaptive navigation scene of a micro unmanned aerial vehicle based on functional intelligence is characterized by comprising the following steps:
s1: carrying out an environment self-adaptive navigation scene recognition experiment by using a combined flight platform constructed by a micro unmanned aerial vehicle and an intelligent mobile terminal, extracting environmental characteristics from data acquired by a non-visual sensor of the intelligent mobile terminal, and preprocessing the environmental characteristics;
s2: inputting the preprocessed environmental characteristics into a SceneNet deep learning network trained in advance in the real-time recognition layer to obtain a scene category and a confidence;
s3: judging whether the confidence obtained by the real-time recognition layer is lower than a trigger threshold; if yes, executing steps S4 and S5; if not, executing step S6;
s4: generating a control instruction to enable the micro unmanned aerial vehicle to hover and activate the trigger identification layer;
s5: after the trigger recognition layer is activated, changing the orientation of the visual sensor of the intelligent mobile terminal, acquiring images from the front, left, right, rear, and upward directions while the micro unmanned aerial vehicle hovers, stitching the images acquired in the five directions, and performing feature extraction and classification on the stitched image with a pre-trained MobileNet-v2 network, the three-dimensional output tensor of which represents the recognition likelihood of the different flight scenes, thereby giving the final recognition decision for the flight scene;
s6: and the scene type obtained by the real-time identification layer is an identification result, and the micro unmanned aerial vehicle continuously flies.
2. The method as claimed in claim 1, wherein the training process of the SceneNet deep learning network in step S2 specifically includes the following steps:
s21: carrying out multiple flight tests under different weather and illumination conditions and different flight scenes by using a combined flight platform constructed by a micro unmanned aerial vehicle and an intelligent mobile terminal, and acquiring data by using a non-visual sensor and information processing capacity of the intelligent mobile terminal; wherein the non-visual sensors comprise a magnetometer, a barometer, a light intensity sensor, and a GNSS;
s22: mixing the collected magnetometer, barometer, and light-intensity data with normally distributed noise to obtain mixed data, combining the signal-to-noise ratios of the visible GNSS satellites through an equation set to obtain combined data, dividing the mixed data and the combined data within a preset time period into a plurality of intervals with a sliding window and a fixed stride, converting the data in each interval to the frequency domain with a discrete Fourier transform, and extracting a tensor to obtain sample data with scene labels; wherein the tensor consists of the magnitude-phase pairs in the frequency domain;
s23: randomly dividing the sample data into a training set and a test set, and inputting them into the SceneNet deep learning network for training and learning to obtain the trained SceneNet deep learning network.
3. The method as claimed in claim 1, wherein the training process of the MobileNet-v2 network in step S5 specifically includes the following steps:
s51: the method comprises the following steps of carrying out multiple flight tests under different weather and illumination conditions and different flight scenes by using a combined flight platform constructed by a micro unmanned aerial vehicle and an intelligent mobile terminal, collecting images from the front direction, the left direction, the right direction, the back direction and the upper direction by using a visual sensor of the intelligent mobile terminal, and splicing the collected images in the five directions;
s52: taking the stitched images with scene labels as the sample data of the MobileNet-v2 network, randomly dividing the sample data into a training set and a test set, and inputting them into the MobileNet-v2 network for training and learning to obtain the trained MobileNet-v2 network.
4. The method for identifying the environment self-adaptive navigation scene of a micro unmanned aerial vehicle based on functional intelligence as claimed in any one of claims 1 to 3, wherein all the sensor acquisition processes on the intelligent mobile terminal are controlled through the UI (user interface) of the intelligent mobile terminal or through the ground station of the combined flight platform.
CN202010710116.XA 2020-07-22 2020-07-22 Functional intelligence-based environment self-adaptive navigation scene recognition method for micro unmanned aerial vehicle Pending CN111950386A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010710116.XA CN111950386A (en) 2020-07-22 2020-07-22 Functional intelligence-based environment self-adaptive navigation scene recognition method for micro unmanned aerial vehicle


Publications (1)

Publication Number Publication Date
CN111950386A true CN111950386A (en) 2020-11-17

Family

ID=73340880

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010710116.XA Pending CN111950386A (en) 2020-07-22 2020-07-22 Functional intelligence-based environment self-adaptive navigation scene recognition method for micro unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN111950386A (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110737212A (en) * 2018-07-18 2020-01-31 华为技术有限公司 Unmanned aerial vehicle control system and method
CN111367318A (en) * 2020-03-31 2020-07-03 华东理工大学 Dynamic obstacle environment navigation method and device based on visual semantic information

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HAN GAO ET AL.: "Environmental Context Detection for Adaptive Navigation using GNSS Measurements from a Smartphone", Journal of The Institute of Navigation *
WILLIAM POWER ET AL.: "Autonomous Navigation for Drone Swarms in GPS-Denied Environments Using Structured Learning", Artificial Intelligence Applications and Innovations *
YANING WANG ET AL.: "An environment recognition method for MAVs using a smartphone", 2018 IEEE/ION Position, Location and Navigation Symposium *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114510044A (en) * 2022-01-25 2022-05-17 北京圣威特科技有限公司 AGV navigation ship navigation method and device, electronic equipment and storage medium
CN115147718A (en) * 2022-06-21 2022-10-04 北京理工大学 Scene self-adaption system and method for unmanned mobile terminal visual analysis
CN115147718B (en) * 2022-06-21 2024-05-28 北京理工大学 Scene self-adaptive system and method for unmanned mobile terminal visual analysis

Similar Documents

Publication Publication Date Title
CN108596101B (en) Remote sensing image multi-target detection method based on convolutional neural network
Huang et al. Structure from motion technique for scene detection using autonomous drone navigation
CN111797676A (en) High-resolution remote sensing image target on-orbit lightweight rapid detection method
CN112990211A (en) Neural network training method, image processing method and device
CN104517103A (en) Traffic sign classification method based on deep neural network
CN111931764B (en) Target detection method, target detection frame and related equipment
US11636348B1 (en) Adaptive training of neural network models at model deployment destinations
CN104134364B (en) Real-time traffic sign identification method and system with self-learning capacity
Xu et al. A cascade adaboost and CNN algorithm for drogue detection in UAV autonomous aerial refueling
CN109902610A (en) Traffic sign recognition method and device
CN111950386A (en) Functional intelligence-based environment self-adaptive navigation scene recognition method for micro unmanned aerial vehicle
Andrea et al. Geolocation and counting of people with aerial thermal imaging for rescue purposes
CN112380923A (en) Intelligent autonomous visual navigation and target detection method based on multiple tasks
Aposporis Object detection methods for improving UAV autonomy and remote sensing applications
Senthilnath et al. BS-McL: Bilevel segmentation framework with metacognitive learning for detection of the power lines in UAV imagery
Dong et al. Real-time survivor detection in UAV thermal imagery based on deep learning
Cabrera-Ponce et al. Convolutional neural networks for geo-localisation with a single aerial image
Suprapto et al. The detection system of helipad for unmanned aerial vehicle landing using yolo algorithm
Montanari et al. Ground vehicle detection and classification by an unmanned aerial vehicle
CN116310894B (en) Unmanned aerial vehicle remote sensing-based intelligent recognition method for small-sample and small-target Tibetan antelope
CN116453109A (en) 3D target detection method, device, equipment and storage medium
CN112818837B (en) Aerial photography vehicle weight recognition method based on attitude correction and difficult sample perception
Cirneanu et al. CNN based on LBP for evaluating natural disasters
CN113343930A (en) Unmanned aerial vehicle image processing method based on Gaussian denoising
Changpradith Application of object detection using hardware acceleration for autonomous uav

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201117