CN113419555A - Monocular vision-based low-power-consumption autonomous obstacle avoidance method and system for unmanned aerial vehicle

Monocular vision-based low-power-consumption autonomous obstacle avoidance method and system for unmanned aerial vehicle

Info

Publication number
CN113419555A
CN113419555A
Authority
CN
China
Prior art keywords
obstacle avoidance
distance
unmanned aerial
aerial vehicle
linear velocity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110551110.7A
Other languages
Chinese (zh)
Other versions
CN113419555B (en)
Inventor
Qian Depei (钱德沛)
Guo Haoyu (郭好雨)
Wang Rui (王锐)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University
Priority to CN202110551110.7A
Publication of CN113419555A
Application granted
Publication of CN113419555B
Legal status: Active (current)

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D 1/10 Simultaneous control of position or course in three dimensions
    • G05D 1/101 Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent


Abstract

The invention relates to a monocular-vision-based low-power-consumption autonomous obstacle avoidance method and system for an unmanned aerial vehicle, and provides a low-power-consumption autonomous obstacle avoidance algorithm. To apply the obstacle avoidance and control algorithms in a real system, an obstacle avoidance system is built from a Tello unmanned aerial vehicle and a Jetson Nano embedded platform. The Tello unmanned aerial vehicle collects the video stream and transmits it to the Jetson Nano; the Jetson Nano parses the video, runs the obstacle avoidance model and the control algorithm, and sends the resulting control command back to the Tello to control its flight.

Description

Monocular vision-based low-power-consumption autonomous obstacle avoidance method and system for unmanned aerial vehicle
Technical Field
The invention relates to the field of autonomous obstacle avoidance in unmanned aerial vehicle technology, in particular to a low-power-consumption autonomous obstacle avoidance method and system based on monocular vision.
Background
Unmanned aerial vehicle obstacle avoidance involves two major tasks: sensing and control. Sensors commonly used in the sensing stage include GPS, ultrasonic, laser, and radar sensors; in the control stage, beyond traditional methods such as the potential field method and the grid method, more and more research uses neural networks to control obstacle avoidance. Each obstacle avoidance method has advantages and disadvantages; the current mainstream methods include:
(1) The acoustic method. The acoustic method mainly uses ultrasonic ranging; the basic principle of ultrasonic sensor ranging is to measure the time of flight of an ultrasonic pulse. Because the speed of sound in air depends on humidity and temperature, accurate measurement must take temperature and humidity variations and other factors into account. Ultrasonic sensors measure object distance simply, and most common ranging systems adopt this method. Ultrasonic technology is mature and low-cost, but its range is short and it requires a smooth reflecting surface; safety cannot be guaranteed against obstacles with weak or no acoustic reflectivity, and different materials reflect or absorb sound waves differently, which must be considered in practical applications.
(2) The grid map method divides the environment into cells and uses a heuristic algorithm to search for a safe path among them, producing a map that can be used for path planning. The finer the grid granularity, the more accurate the obstacle representation and the better the obstacle avoidance. However, grid maps often occupy a large amount of storage space and are unsuitable for resource-limited platforms.
(3) The artificial potential field method constructs a potential field by hand: obstacles are modeled as repulsive forces and the target as an attractive force; the force vectors are summed and the direction of the resultant force is followed. The method is simple, practical, and real-time, and is convenient for low-level real-time control; however, if at some point the attractive and repulsive forces are equal and opposite, the method easily falls into a local optimum or oscillation.
(4) The laser method avoids obstacles with lidar. A common lidar is based on time of flight (ToF): it ranges by measuring the round-trip time of a laser pulse, d = ct/2, where d is the distance, c is the speed of light, and t is the time interval from transmission to reception. A lidar comprises a transmitter, which illuminates the target with laser light, and a receiver, which collects the returning light. Lidar range can reach tens or even hundreds of meters, with high angular resolution, typically a few tenths of a degree, and high ranging accuracy. However, ranging uncertainty grows as the received signal amplitude falls (the error variance is roughly inversely proportional to the square of the amplitude), so the distance of a dark or far object is estimated less well than that of a bright, close one. Lidar also fails on transparent materials such as glass, and its complex structure, expensive components, and high operating power consumption make it unsuitable for resource-limited platforms.
These obstacle avoidance methods meet the needs of ordinary unmanned aerial vehicles, but as UAV technology develops toward miniaturization and low power consumption, the existing methods are unsuitable for resource-limited low-power platforms: their sensors are large and power-hungry and their obstacle avoidance models are complex. The invention therefore provides a low-power-consumption autonomous obstacle avoidance method and system for micro low-power unmanned aerial vehicles.
Disclosure of Invention
The invention overcomes the defects of the prior art by providing a monocular-vision-based low-power-consumption autonomous obstacle avoidance method and system for unmanned aerial vehicles, realizing low-power autonomous obstacle avoidance for micro unmanned aerial vehicles.
The obstacle avoidance sensor is a monocular camera, and the obstacle avoidance model is trained as a convolutional neural network; a training data set and a model are therefore needed. The key contents are as follows:
1. Data set collection: a depth camera collects the data set, capturing the distance of the whole scene; monocular RGB images and depth images are collected synchronously, the monocular RGB images serving as the training data set for the lightweight autonomous obstacle avoidance model and the depth images being used to produce distance labels;
2. Data set processing: the collected depth image is cropped and divided equally by width into left, middle, and right parts; the average depth value of each part is computed and used as the distance in the left, middle, and right directions, and these three distances are attached to the corresponding monocular RGB image as its distance labels dl, dc, and dr (a processing sketch follows this list);
3. Constructing a lightweight autonomous obstacle avoidance model based on a lightweight convolutional neural network;
4. Model training and testing: the data set is divided into a training set, a validation set, and a test set, and the training parameters are determined: the Adam optimizer is selected, the learning rate is set to 0.001, the decay rate to 1e-5, 100 epochs are trained in total, and the batch size is set to 32. The lightweight autonomous obstacle avoidance model is trained with these parameters, and the trained model finally predicts the distances dl, dc, and dr in the left, middle, and right directions from a monocular image;
5. Control algorithm design: the predicted distances dl, dc, and dr in the left, middle, and right directions are converted into unmanned aerial vehicle control commands to control the obstacle avoidance flight of the unmanned aerial vehicle.
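As an illustration of item 2, the sketch below (a minimal NumPy version; the depth units and the handling of invalid pixels are assumptions, since the patent does not specify them) splits a depth image into thirds by width and averages each third to obtain the raw distances:

```python
import numpy as np

def make_distance_labels(depth):
    """depth: HxW array from the depth camera (e.g. meters); returns (dl, dc, dr)."""
    h, w = depth.shape
    third = w // 3
    valid = depth > 0  # assumed: zero marks missing depth readings
    parts = (depth[:, :third], depth[:, third:2 * third], depth[:, 2 * third:])
    masks = (valid[:, :third], valid[:, third:2 * third], valid[:, 2 * third:])
    # average depth of each vertical strip = distance in that direction
    return tuple(float(p[m].mean()) if m.any() else 0.0 for p, m in zip(parts, masks))
```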
The invention discloses a monocular vision-based unmanned aerial vehicle low-power consumption autonomous obstacle avoidance method, which comprises the following specific steps:
Step 1: data set collection: a depth camera collects the data set, capturing the distance of the whole scene; monocular RGB images and depth images are collected synchronously, the monocular RGB images serving as the training data set for the lightweight autonomous obstacle avoidance model and the depth images being used to produce distance labels;
Step 2: data set processing: the collected depth image is cropped and divided equally by width into left, middle, and right parts; the average depth value of each part is computed (the depth value represents a distance value) and used as the distance in the left, middle, and right directions; these three distances are attached to the corresponding monocular RGB image as its distance labels in the left, middle, and right directions;
Step 3: constructing a lightweight autonomous obstacle avoidance model based on a lightweight convolutional neural network;
Step 4: training and testing the lightweight autonomous obstacle avoidance model: the data set is divided into a training set, a validation set, and a test set, and the training parameters are determined as follows: the Adam optimizer is selected, the learning rate is set to 0.001, the decay rate to 1e-5, 100 epochs are trained in total, and the batch size is set to 32; the lightweight autonomous obstacle avoidance model is trained with these parameters, and the trained model finally predicts the distances dl, dc, and dr in the left, middle, and right directions from the monocular image;
Step 5: designing a control algorithm that converts the predicted distances dl, dc, and dr in the left, middle, and right directions into unmanned aerial vehicle control commands to control the obstacle avoidance flight of the unmanned aerial vehicle.
In step 3, the lightweight autonomous obstacle avoidance model is implemented as a lightweight convolutional neural network. Its input layer takes a monocular RGB image; features are first extracted by a pre-feature-extraction layer consisting of a max-pooling layer, a convolutional layer, and another max-pooling layer; features are then extracted by two lightweight modules; finally, after a 3 x 3 convolution, the feature map is divided by width into left, middle, and right blocks, which are fed into three branches. Each branch consists of a 3 x 3 convolution and a fully connected layer and outputs the distance in its direction. Both lightweight modules adopt the module of the lightweight convolutional neural network ShuffleNet, and the loss function is a three-branch MSE (Mean Square Error) loss.
In step 2, a safety threshold d is selected before normalizationsafeAnd a hazard threshold ddangerRatio of ddangerA small distance is considered a dangerous distance, ratio dsafeIf the large distance is regarded as the safe distance, the normalization formula is:
Figure BDA0003075427950000031
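A minimal sketch of this normalization (assuming the clamped min-max form above; the threshold values are illustrative, not from the patent):

```python
def normalize_distance(d, d_danger=0.5, d_safe=3.0):
    """Map a raw distance (meters) to a [0, 1] label; thresholds are illustrative."""
    if d < d_danger:
        return 0.0
    if d > d_safe:
        return 1.0
    return (d - d_danger) / (d_safe - d_danger)
```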
In step 5, the control algorithm is implemented as follows:
(1) The unmanned aerial vehicle control command comprises a linear velocity V and an angular velocity W. The linear velocity V is controlled by the middle-direction distance dc, which indicates whether there is an obstacle directly ahead. For smoother linear velocity regulation two thresholds dmin and dmax are set: dc smaller than dmin means an obstacle is directly ahead, and the linear velocity is set to 0; dc larger than dmax means the space ahead is clear, and the linear velocity is set to the maximum Vmax; for dc between the two, the linear velocity is scaled with dc (a combined sketch follows item (2)). The linear velocity formula is (the intermediate linear scaling is implied by the description):

$$V = \begin{cases} 0, & d_c < d_{\mathrm{min}} \\ V_{\mathrm{max}} \cdot \dfrac{d_c - d_{\mathrm{min}}}{d_{\mathrm{max}} - d_{\mathrm{min}}}, & d_{\mathrm{min}} \le d_c \le d_{\mathrm{max}} \\ V_{\mathrm{max}}, & d_c > d_{\mathrm{max}} \end{cases}$$
(2) The angular velocity W is determined jointly by the distances dl, dc, and dr in the left, middle, and right directions. W ranges over [-1, 1], where values below 0 turn left and values above 0 turn right. The left direction is set to -45 degrees, the middle direction to 0 degrees, and the right direction to 45 degrees, so dl, dc, and dr become vectors dl, dc, and dr. When the distances in all three directions are larger than dmin, the angular velocity is determined by all three distance vectors: theta equals the angle of the resultant of dl, dc, and dr, and the angular velocity equals theta divided by 45 degrees (normalizing theta to [-1, 1]). Otherwise, when the left distance is larger than the right distance, the angular velocity is determined by the left and middle distance vectors: theta equals the angle of the resultant of dl and dc, and the angular velocity equals theta divided by 45 degrees. Otherwise it is determined by the middle and right distance vectors: theta equals the angle of the resultant of dc and dr, and the angular velocity equals theta divided by 45 degrees. When the distances in all of the left, middle, and right directions are smaller than dmin, there are obstacles in all three directions, and the vehicle turns around to find another path.
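As an illustration, the sketch below (a minimal Python version; the threshold values, Vmax, and the turn-around flag are illustrative assumptions) combines the linear velocity formula in (1) with the angular velocity rule in (2):

```python
import math

def control_command(dl, dc, dr, d_min=0.2, d_max=0.8, v_max=1.0):
    """Convert normalized distances (dl, dc, dr) into (V, W, turn_around).
    Thresholds and v_max are illustrative, not values from the patent."""
    # linear velocity V from the middle-direction distance dc
    if dc < d_min:
        v = 0.0
    elif dc > d_max:
        v = v_max
    else:
        v = v_max * (dc - d_min) / (d_max - d_min)

    # obstacles in all three directions: turn around and look for another path
    if max(dl, dc, dr) < d_min:
        return 0.0, 0.0, True

    def resultant_angle(pairs):
        # angle (degrees) of the sum of distance vectors; 0 deg points straight ahead
        x = sum(d * math.cos(math.radians(a)) for d, a in pairs)
        y = sum(d * math.sin(math.radians(a)) for d, a in pairs)
        return math.degrees(math.atan2(y, x))

    if min(dl, dc, dr) > d_min:            # all three directions clear enough
        theta = resultant_angle([(dl, -45), (dc, 0), (dr, 45)])
    elif dl > dr:                          # left side more open
        theta = resultant_angle([(dl, -45), (dc, 0)])
    else:                                  # right side more open
        theta = resultant_angle([(dc, 0), (dr, 45)])

    w = theta / 45.0                       # W in [-1, 1]; <0 turns left, >0 right
    return v, w, False
```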
The invention also relates to a monocular-vision-based low-power-consumption autonomous obstacle avoidance system for an unmanned aerial vehicle, comprising: a video analysis module, an obstacle avoidance system module, and a control system module;
the video analysis module provides image input for the whole system: the unmanned aerial vehicle first collects video and transmits it to the control platform over a Socket protocol; the video is then parsed into frame images, the images are preprocessed, namely resized to a specified size, and the processed images are input into the obstacle avoidance system module;
the obstacle avoidance system module predicts the distances in the left, middle, and right directions from the image. After the video analysis module produces a preprocessed image, the trained lightweight autonomous obstacle avoidance model is deployed to the control platform as follows: the environment required to run the model is installed on the control platform, and the model is compressed so that it can run there. After deployment, the image from the video analysis module is used as the input of the lightweight autonomous obstacle avoidance model, which is run to obtain the distances dl, dc, and dr of the image in the left, middle, and right directions;
the control system module converts the distances in the left, middle, and right directions into commands controlling the unmanned aerial vehicle; the control commands comprise the linear velocity V and the angular velocity W of the unmanned aerial vehicle;
(1) the linear velocity V is controlled by the middle-direction distance dc, which indicates whether there is an obstacle directly ahead; for smoother linear velocity regulation two thresholds dmin and dmax are set: dc smaller than dmin means an obstacle is directly ahead and the vehicle must slow down; dc larger than dmax means the space ahead is clear and the linear velocity is set to the maximum Vmax; for dc between the two, the linear velocity is scaled with dc; the linear velocity formula is (the intermediate linear scaling is implied by the description):

$$V = \begin{cases} 0, & d_c < d_{\mathrm{min}} \\ V_{\mathrm{max}} \cdot \dfrac{d_c - d_{\mathrm{min}}}{d_{\mathrm{max}} - d_{\mathrm{min}}}, & d_{\mathrm{min}} \le d_c \le d_{\mathrm{max}} \\ V_{\mathrm{max}}, & d_c > d_{\mathrm{max}} \end{cases}$$
(2) the angular velocity W of the unmanned aerial vehicle is determined jointly by the distances dl, dc, and dr in the left, middle, and right directions; W ranges over [-1, 1], where values below 0 turn left and values above 0 turn right; the left direction is set to -45 degrees, the middle direction to 0 degrees, and the right direction to 45 degrees, so dl, dc, and dr become vectors dl, dc, and dr; when the distances in all three directions are larger than dmin, the angular velocity is determined by all three distance vectors: theta equals the angle of the resultant of dl, dc, and dr, and the angular velocity equals theta divided by 45 degrees (normalizing theta to [-1, 1]); otherwise, when the left distance is larger than the right distance, the angular velocity is determined by the left and middle distance vectors: theta equals the angle of the resultant of dl and dc, and the angular velocity equals theta divided by 45 degrees; otherwise it is determined by the middle and right distance vectors: theta equals the angle of the resultant of dc and dr, and the angular velocity equals theta divided by 45 degrees; when the distances in all of the left, middle, and right directions are smaller than dmin, there are obstacles in all three directions, and the vehicle turns around to find another path;
(3) after the linear velocity V and the angular velocity W of the unmanned aerial vehicle are obtained, the control platform sends them to the unmanned aerial vehicle over the Socket protocol to control its obstacle avoidance flight.
Compared with the prior art, the invention has the advantages that:
(1) The method is oriented to the obstacle avoidance needs of micro unmanned aerial vehicles. Matching their small size and low cost, it uses a monocular camera as the sensor, which is smaller and consumes less power than the sensors used by traditional obstacle avoidance methods.
(2) The lightweight autonomous obstacle avoidance model replaces ordinary convolutions with depthwise separable convolutions and 1 x 1 convolutions, so the model's computation and parameter counts are small, it runs fast on the Jetson Nano computing platform, and its operating power consumption is low.
(3) The invention uses a depth camera to collect the data set, which is more accurate than data sets collected with the ultrasonic and infrared sensors of the prior art, because ultrasonic and infrared ranging accuracy is more affected by the environment.
Drawings
FIG. 1 is a block diagram of an autonomous obstacle avoidance system of the present invention;
FIG. 2 is a flow chart of FFmpeg decoding of the present invention;
FIG. 3 is a schematic structural diagram of an obstacle avoidance model according to the present invention;
fig. 4 is a schematic view of a lightweight module according to the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and examples.
As shown in fig. 1, the autonomous obstacle avoidance system of the invention obtains a video stream from the monocular camera of the Tello unmanned aerial vehicle, transmits it to the Jetson Nano embedded platform for video parsing, feeds video frames into the obstacle avoidance model for prediction, converts the model output into an unmanned aerial vehicle control command through the control algorithm, and inputs the command to the unmanned aerial vehicle control system to steer the vehicle. The whole system comprises a video analysis module, an obstacle avoidance system module, and a control system module.
Video analysis module
In this module, the Tello unmanned aerial vehicle collects video and transmits it to the Jetson Nano platform over a Socket protocol; the video is then parsed with the PyAV library, the Python binding of FFmpeg, which exposes a Python interface to FFmpeg. FFmpeg is an open-source program that can decode video and convert it into a stream, and it can decode the video transmitted by the drone. The decoding process is shown in fig. 2: first the FFmpeg components are registered and the input video is opened to obtain its stream information; a decoder is then found and opened according to that information, and the video is read packet by packet; each video packet is decoded into an image, converted to the output image format, and emitted; when no packet remains, the decoder is closed.
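Below is a minimal PyAV sketch of this decoding loop (the stream address is an assumption; the Tello commonly streams on udp://0.0.0.0:11111, but the patent does not fix one):

```python
import av  # PyAV: Python bindings for FFmpeg

def decode_frames(source="udp://0.0.0.0:11111"):
    """Yield video frames from the drone's stream as HxWx3 RGB arrays."""
    container = av.open(source)              # open input and read stream info
    for frame in container.decode(video=0):  # open the decoder, read and decode packets
        yield frame.to_ndarray(format="rgb24")
    container.close()                        # close the decoder when packets run out
```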
Obstacle avoidance system module
This module predicts obstacle distances from the images produced by the video analysis module. A lightweight obstacle avoidance model meeting the real-time, low-power requirement is obtained through four steps: data set collection, data set processing, lightweight model design, and model training and testing. For data set collection, a depth camera captures a depth map and a monocular image simultaneously. For data set processing, the depth map is divided into left, middle, and right parts; the average depth of each part is taken as the overall distance of that part, and the distances are normalized to serve as the labels of the corresponding monocular image. Before normalization a safety threshold $d_{\mathrm{safe}}$ and a danger threshold $d_{\mathrm{danger}}$ are selected; distances smaller than $d_{\mathrm{danger}}$ are regarded as dangerous, and distances larger than $d_{\mathrm{safe}}$ as safe. The normalization formula is (the clamped min-max form implied by the description):

$$d' = \begin{cases} 0, & d < d_{\mathrm{danger}} \\ \dfrac{d - d_{\mathrm{danger}}}{d_{\mathrm{safe}} - d_{\mathrm{danger}}}, & d_{\mathrm{danger}} \le d \le d_{\mathrm{safe}} \\ 1, & d > d_{\mathrm{safe}} \end{cases}$$
Data augmentation is then applied; the main methods include horizontal flipping, rotation, color channel shifting, scaling, jitter, and Gaussian noise. Horizontal flipping mirrors the image left-right, with the left and right labels swapped accordingly to stay consistent. Rotation rotates the image slightly. Color channel shifting changes the overall color of the image so that it takes on a certain tint, like placing colored glass over it. Scaling multiplies each pixel value by a scale factor, which helps model convergence. Jitter adds motion blur and Gaussian blur; since motion blur arises while the vehicle moves, the data are randomly jittered and blurred during training.

In the lightweight obstacle avoidance model design stage, the invention designs a convolutional neural network whose structure is shown in fig. 3: an input layer, a pre-feature-extraction layer, a lightweight-module feature extraction layer, a feature map division layer, and branch layers. The input layer takes a monocular RGB image. The pre-feature-extraction layer consists of a max-pooling layer, a convolutional layer, and another max-pooling layer, and pre-extracts features. The lightweight feature extraction layer uses the module of the classical lightweight convolutional neural network ShuffleNet, shown in fig. 4. The module splits into two branches whose convolutions are depthwise separable, costing about one ninth of an ordinary convolution; when the branches end, their feature maps are concatenated rather than added, because the add operation is memory-intensive and increases memory access time. A shuffle operation follows the concatenation, permuting the feature map channels so that information flows between them. The feature map division layer divides the feature map by width into left, middle, and right parts, which are fed into three branches. Each branch layer consists of a 3 x 3 convolution plus a fully connected layer and outputs the distance of its direction. Compared with other obstacle avoidance models, the whole model has fewer parameters, runs faster, and consumes less power. The final outputs are the distances in the left, middle, and right directions, denoted dl, dc, and dr.
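As an illustration of this layout, the sketch below is a minimal PyTorch version; the channel width, the sigmoid output, and the use of a plain depthwise separable block as a stand-in for the full two-branch ShuffleNet unit (with concatenation and channel shuffle) are assumptions, since the patent fixes only the overall structure:

```python
import torch
import torch.nn as nn

class DepthwiseSeparable(nn.Module):
    """Depthwise 3x3 + pointwise 1x1 convolution; a simplified stand-in
    for the ShuffleNet unit of fig. 4."""
    def __init__(self, channels):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.pw = nn.Conv2d(channels, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.pw(self.dw(x)))

class Branch(nn.Module):
    """3x3 convolution + fully connected layer; outputs one distance in [0, 1]."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.fc = nn.Linear(channels, 1)

    def forward(self, x):
        x = torch.relu(self.conv(x))
        x = x.mean(dim=(2, 3))             # global average pool before the FC layer
        return torch.sigmoid(self.fc(x))   # sigmoid because labels are normalized

class ObstacleAvoidanceNet(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # pre-feature extraction: max pooling + convolution + max pooling
        self.pre = nn.Sequential(
            nn.MaxPool2d(2),
            nn.Conv2d(3, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        # two lightweight modules, then a final 3x3 convolution
        self.blocks = nn.Sequential(
            DepthwiseSeparable(channels), DepthwiseSeparable(channels)
        )
        self.post = nn.Conv2d(channels, channels, 3, padding=1)
        self.left, self.middle, self.right = (
            Branch(channels), Branch(channels), Branch(channels)
        )

    def forward(self, x):
        f = self.post(self.blocks(self.pre(x)))
        w = f.shape[-1] // 3               # divide the feature map into thirds by width
        return (self.left(f[..., :w]),
                self.middle(f[..., w:2 * w]),
                self.right(f[..., 2 * w:]))
```

A faithful version would replace DepthwiseSeparable with the two-branch unit of fig. 4, concatenating the branch feature maps and shuffling channels afterwards.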
Once the model is designed, it is trained. The data set used for training contains 18000 pictures and is randomly divided into a training set, a validation set, and a test set in the ratio 6:2:2. The optimizer is Adam (adaptive moment estimation), the learning rate is set to 0.001, the decay rate to 1e-5, 100 epochs are trained in total, and the batch size is set to 32; at each epoch the data set is randomly reshuffled into different batches. During training, TopK loss is used for hard-example mining: the K samples with the largest loss are selected for backpropagation, on the grounds that samples with small loss can be considered already learned and need no further backpropagation. TopK loss thus concentrates learning on the hard cases instead of the easy ones the model already gets right.
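A minimal sketch of such a TopK loss (assuming per-sample MSE over the three distance outputs; the patent does not give the value of K):

```python
import torch

def topk_mse_loss(pred, target, k=8):
    """Backpropagate only through the k hardest samples in the batch (k is assumed)."""
    per_sample = ((pred - target) ** 2).reshape(pred.size(0), -1).mean(dim=1)
    hardest, _ = per_sample.topk(min(k, per_sample.size(0)))  # largest losses first
    return hardest.mean()
```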
Control system module
The obstacle avoidance system module predicts the distances dl, dc, and dr in the left, middle, and right directions from the monocular image, and these are converted into the linear velocity and angular velocity controlling the flight of the unmanned aerial vehicle. The linear velocity is controlled by dc: since dc is the middle-direction distance, it indicates whether there is an obstacle directly ahead. For smoother regulation two thresholds dmin and dmax are set: dc smaller than dmin means the obstacle is close and the vehicle must slow down; dc larger than dmax means the obstacle is far and the maximum linear velocity Vmax may be used; in between, the linear velocity is scaled with the distance. The linear velocity formula is (the intermediate linear scaling is implied by the description):

$$V = \begin{cases} 0, & d_c < d_{\mathrm{min}} \\ V_{\mathrm{max}} \cdot \dfrac{d_c - d_{\mathrm{min}}}{d_{\mathrm{max}} - d_{\mathrm{min}}}, & d_{\mathrm{min}} \le d_c \le d_{\mathrm{max}} \\ V_{\mathrm{max}}, & d_c > d_{\mathrm{max}} \end{cases}$$
The angular velocity is determined by the distances in the left, middle, and right directions; it ranges over [-1, 1], where values below 0 turn left and values above 0 turn right. The left direction is set to -45 degrees, the middle direction to 0 degrees, and the right direction to 45 degrees, so dl, dc, and dr all become vectors dl, dc, and dr. When the distances in all three directions are larger than dmin, the angular velocity is determined by all three distance vectors: theta equals the angle of the resultant of dl, dc, and dr, and the angular velocity equals theta divided by 45 degrees (normalizing theta to [-1, 1]). Otherwise, when the left distance is larger than the right distance, the angular velocity is determined by the left and middle distance vectors: theta equals the angle of the resultant of dl and dc, and the angular velocity equals theta divided by 45 degrees. Otherwise it is determined by the middle and right distance vectors: theta equals the angle of the resultant of dc and dr, and the angular velocity equals theta divided by 45 degrees. When the distances in all of the left, middle, and right directions are smaller than dmin, there are obstacles in all three directions, and the vehicle turns around to search for another path.
After the output of the obstacle avoidance model is converted into a linear velocity and an angular velocity, the velocity commands are sent to the Tello unmanned aerial vehicle over the Socket protocol to control its flight.
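A minimal sketch of this command transmission (the Tello SDK's UDP "rc" command at 192.168.10.1:8889 and the scaling of V and W into its -100..100 range are assumptions; the patent states only that Socket communication is used):

```python
import socket

def send_velocity(v, w, addr=("192.168.10.1", 8889)):
    """Send one velocity command to the drone over UDP (assumed Tello SDK format)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # "rc roll pitch throttle yaw": forward speed from V, yaw rate from W
    cmd = f"rc 0 {int(v * 100)} 0 {int(w * 100)}"
    sock.sendto(cmd.encode("ascii"), addr)
    sock.close()
```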
The three modules above constitute the whole obstacle avoidance system: the video stream parsed from the Tello passes through the obstacle avoidance system module and the control system module to produce a control command, which is sent back to the vehicle; the whole process forms a loop used for the Tello's obstacle avoidance flight.
The above examples are provided only to describe the present invention and are not intended to limit its scope, which is defined by the appended claims. Various equivalent substitutions and modifications may be made without departing from the spirit and principles of the invention and are intended to fall within its scope.

Claims (5)

1. A monocular-vision-based low-power-consumption autonomous obstacle avoidance method for an unmanned aerial vehicle, characterized by comprising the following steps:
Step 1: data set collection: a depth camera collects the data set, capturing the distance of the whole scene; monocular RGB images and depth images are collected synchronously, the monocular RGB images serving as the training data set for the lightweight autonomous obstacle avoidance model and the depth images being used to produce distance labels;
Step 2: data set processing: the collected depth image is cropped and divided equally by width into left, middle, and right parts; the average depth value of each part is computed (the depth value represents a distance value) and used as the distance in the left, middle, and right directions; these three distances are attached to the corresponding monocular RGB image as its distance labels in the left, middle, and right directions;
Step 3: constructing a lightweight autonomous obstacle avoidance model based on a lightweight convolutional neural network;
Step 4: training and testing the lightweight autonomous obstacle avoidance model: the data set is divided into a training set, a validation set, and a test set, the model training parameters are determined, the lightweight autonomous obstacle avoidance model is trained with these parameters, and the trained model finally predicts the distances dl, dc, and dr in the left, middle, and right directions from the monocular image;
Step 5: designing a control algorithm that converts the predicted distances dl, dc, and dr in the left, middle, and right directions into unmanned aerial vehicle control commands to control the obstacle avoidance flight of the unmanned aerial vehicle.
2. The monocular-vision-based low-power-consumption autonomous obstacle avoidance method for an unmanned aerial vehicle according to claim 1, characterized in that: in step 3, the lightweight autonomous obstacle avoidance model is implemented as a lightweight convolutional neural network; its input layer takes a monocular RGB image; features are first extracted by a pre-feature-extraction layer consisting of a max-pooling layer, a convolutional layer, and another max-pooling layer; features are then extracted by two lightweight modules; finally, after a 3 x 3 convolution, the feature map is divided by width into left, middle, and right blocks, which are fed into three branches; each branch consists of a 3 x 3 convolution and a fully connected layer and outputs the distance in its direction; both lightweight modules adopt the module of the lightweight convolutional neural network ShuffleNet, and the loss function is a three-branch MSE (Mean Square Error) loss.
3. The monocular-vision-based low-power-consumption autonomous obstacle avoidance method for an unmanned aerial vehicle according to claim 1, characterized in that: in step 2, before normalization a safety threshold $d_{\mathrm{safe}}$ and a danger threshold $d_{\mathrm{danger}}$ are selected; distances smaller than $d_{\mathrm{danger}}$ are regarded as dangerous, and distances larger than $d_{\mathrm{safe}}$ are regarded as safe; the normalization formula is (the clamped min-max form implied by the description):

$$d' = \begin{cases} 0, & d < d_{\mathrm{danger}} \\ \dfrac{d - d_{\mathrm{danger}}}{d_{\mathrm{safe}} - d_{\mathrm{danger}}}, & d_{\mathrm{danger}} \le d \le d_{\mathrm{safe}} \\ 1, & d > d_{\mathrm{safe}} \end{cases}$$
4. the monocular vision-based unmanned aerial vehicle low-power-consumption autonomous obstacle avoidance method according to claim 1, characterized in that: in step 5, the control algorithm is implemented as follows:
(1) the unmanned aerial vehicle control command comprises a linear velocity V and an angular velocity W; the linear velocity V is controlled by the middle-direction distance dc, which indicates whether there is an obstacle directly ahead; for smoother linear velocity regulation two thresholds dmin and dmax are set: dc smaller than dmin means an obstacle is directly ahead and the linear velocity is set to 0; dc larger than dmax means the space ahead is clear and the linear velocity is set to the maximum Vmax; for dc between the two, the linear velocity is scaled with dc; the linear velocity formula is (the intermediate linear scaling is implied by the description):

$$V = \begin{cases} 0, & d_c < d_{\mathrm{min}} \\ V_{\mathrm{max}} \cdot \dfrac{d_c - d_{\mathrm{min}}}{d_{\mathrm{max}} - d_{\mathrm{min}}}, & d_{\mathrm{min}} \le d_c \le d_{\mathrm{max}} \\ V_{\mathrm{max}}, & d_c > d_{\mathrm{max}} \end{cases}$$
(2) the angular velocity W is determined jointly by the distances dl, dc, and dr in the left, middle, and right directions; W ranges over [-1, 1], where values below 0 turn left and values above 0 turn right; the left direction is set to -45 degrees, the middle direction to 0 degrees, and the right direction to 45 degrees, so dl, dc, and dr become vectors dl, dc, and dr; when the distances in all three directions are larger than dmin, the angular velocity is determined by all three distance vectors: theta equals the angle of the resultant of dl, dc, and dr, and the angular velocity equals theta divided by 45 degrees (normalizing theta to [-1, 1]); otherwise, when the left distance is larger than the right distance, the angular velocity is determined by the left and middle distance vectors: theta equals the angle of the resultant of dl and dc, and the angular velocity equals theta divided by 45 degrees; otherwise it is determined by the middle and right distance vectors: theta equals the angle of the resultant of dc and dr, and the angular velocity equals theta divided by 45 degrees; when the distances in all of the left, middle, and right directions are smaller than dmin, there are obstacles in all three directions, and the vehicle turns around to find another path.
5. A monocular-vision-based low-power-consumption autonomous obstacle avoidance system for an unmanned aerial vehicle, characterized by comprising: a video analysis module, an obstacle avoidance system module, and a control system module;
the video analysis module provides image input for the whole system: the unmanned aerial vehicle first collects video and transmits it to the control platform over a Socket protocol; the video is then parsed into frame images, the images are preprocessed, namely resized to a specified size, and the processed images are input into the obstacle avoidance system module;
the obstacle avoidance system module predicts the distances in the left, middle, and right directions from the image; after the video analysis module produces a preprocessed image, the trained lightweight autonomous obstacle avoidance model is deployed to the control platform as follows: the environment required to run the model is installed on the control platform, and the model is compressed so that it can run there; after deployment, the image from the video analysis module is used as the input of the lightweight autonomous obstacle avoidance model, which is run to obtain the distances dl, dc, and dr of the image in the left, middle, and right directions;
the control system module converts the distances in the left, middle, and right directions into commands controlling the unmanned aerial vehicle; the control commands comprise the linear velocity V and the angular velocity W of the unmanned aerial vehicle;
(1) the linear velocity V is controlled by the middle-direction distance dc, which indicates whether there is an obstacle directly ahead; for smoother linear velocity regulation two thresholds dmin and dmax are set: dc smaller than dmin means an obstacle is directly ahead and the linear velocity is set to 0; dc larger than dmax means the space ahead is clear and the linear velocity is set to the maximum Vmax; for dc between the two, the linear velocity is scaled with dc; the linear velocity formula is (the intermediate linear scaling is implied by the description):

$$V = \begin{cases} 0, & d_c < d_{\mathrm{min}} \\ V_{\mathrm{max}} \cdot \dfrac{d_c - d_{\mathrm{min}}}{d_{\mathrm{max}} - d_{\mathrm{min}}}, & d_{\mathrm{min}} \le d_c \le d_{\mathrm{max}} \\ V_{\mathrm{max}}, & d_c > d_{\mathrm{max}} \end{cases}$$
(2) the angular velocity W of the unmanned aerial vehicle is determined jointly by the distances dl, dc, and dr in the left, middle, and right directions; W ranges over [-1, 1], where values below 0 turn left and values above 0 turn right; the left direction is set to -45 degrees, the middle direction to 0 degrees, and the right direction to 45 degrees, so dl, dc, and dr become vectors dl, dc, and dr; when the distances in all three directions are larger than dmin, the angular velocity is determined by all three distance vectors: theta equals the angle of the resultant of dl, dc, and dr, and the angular velocity equals theta divided by 45 degrees (normalizing theta to [-1, 1]); otherwise, when the left distance is larger than the right distance, the angular velocity is determined by the left and middle distance vectors: theta equals the angle of the resultant of dl and dc, and the angular velocity equals theta divided by 45 degrees; otherwise it is determined by the middle and right distance vectors: theta equals the angle of the resultant of dc and dr, and the angular velocity equals theta divided by 45 degrees; when the distances in all of the left, middle, and right directions are smaller than dmin, there are obstacles in all three directions, and the vehicle turns around to find another path;
(3) after the linear velocity V and the angular velocity W of the unmanned aerial vehicle are obtained, the control platform sends them to the unmanned aerial vehicle over the Socket protocol to control its obstacle avoidance flight.
CN202110551110.7A 2021-05-20 2021-05-20 Monocular vision-based low-power-consumption autonomous obstacle avoidance method and system for unmanned aerial vehicle Active CN113419555B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110551110.7A CN113419555B (en) 2021-05-20 2021-05-20 Monocular vision-based low-power-consumption autonomous obstacle avoidance method and system for unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110551110.7A CN113419555B (en) 2021-05-20 2021-05-20 Monocular vision-based low-power-consumption autonomous obstacle avoidance method and system for unmanned aerial vehicle

Publications (2)

Publication Number Publication Date
CN113419555A true CN113419555A (en) 2021-09-21
CN113419555B CN113419555B (en) 2022-07-19

Family

ID=77712679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110551110.7A Active CN113419555B (en) 2021-05-20 2021-05-20 Monocular vision-based low-power-consumption autonomous obstacle avoidance method and system for unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN113419555B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107553490A (en) * 2017-09-08 2018-01-09 深圳市唯特视科技有限公司 A kind of monocular vision barrier-avoiding method based on deep learning
US20180158197A1 (en) * 2016-12-01 2018-06-07 Skydio, Inc. Object tracking by an unmanned aerial vehicle using visual sensors
EP3591490A1 (en) * 2017-12-15 2020-01-08 Autel Robotics Co., Ltd. Obstacle avoidance method and device, and unmanned aerial vehicle
CN110908399A (en) * 2019-12-02 2020-03-24 广东工业大学 Unmanned aerial vehicle autonomous obstacle avoidance method and system based on light weight type neural network
CN111399505A (en) * 2020-03-13 2020-07-10 浙江工业大学 Mobile robot obstacle avoidance method based on neural network
CN111831010A (en) * 2020-07-15 2020-10-27 武汉大学 Unmanned aerial vehicle obstacle avoidance flight method based on digital space slice

Also Published As

Publication number Publication date
CN113419555B (en) 2022-07-19

Similar Documents

Publication Publication Date Title
US11460844B2 (en) Unmanned aerial image capture platform
US10817731B2 (en) Image-based pedestrian detection
US10255525B1 (en) FPGA device for image classification
CN109341694A (en) A kind of autonomous positioning air navigation aid of mobile sniffing robot
CN106168808A (en) A kind of rotor wing unmanned aerial vehicle automatic cruising method based on degree of depth study and system thereof
CN112378397B (en) Unmanned aerial vehicle target tracking method and device and unmanned aerial vehicle
CN110850877A (en) Automatic driving trolley training method based on virtual environment and deep double Q network
US20220032452A1 (en) Systems and Methods for Sensor Data Packet Processing and Spatial Memory Updating for Robotic Platforms
CN112596071A (en) Unmanned aerial vehicle autonomous positioning method and device and unmanned aerial vehicle
US20220153310A1 (en) Automatic Annotation of Object Trajectories in Multiple Dimensions
CN112379681A (en) Unmanned aerial vehicle obstacle avoidance flight method and device and unmanned aerial vehicle
CN111831010A (en) Unmanned aerial vehicle obstacle avoidance flight method based on digital space slice
Nahar et al. Autonomous UAV forced graffiti detection and removal system based on machine learning
CN115019060A (en) Target recognition method, and training method and device of target recognition model
CN112380933B (en) Unmanned aerial vehicle target recognition method and device and unmanned aerial vehicle
CN113419555B (en) Monocular vision-based low-power-consumption autonomous obstacle avoidance method and system for unmanned aerial vehicle
CN116520890B (en) Unmanned aerial vehicle control platform capable of three-dimensional holographic inspection
CN113029154A (en) Navigation method and device for blind person
Zhang et al. Multi-loss function for distance-to-collision estimation
Salhaoui Smart IoT monitoring and real-time control based on autonomous robots, visual recognition and cloud/edge computing services
Hong et al. Real-time human search and monitoring system using unmanned aerial vehicle
CN117873112A (en) Track planning method, system and device for intelligent automatic driving queue
CN117746351A (en) High-precision aircraft automatic identification and tracking system based on depth network
Eaton Automated taxiing for unmanned aircraft systems
Tian et al. A Visual Perception Method for Autonomous Navigation at Sea Based on ROS

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant