CN110908399B - Unmanned aerial vehicle autonomous obstacle avoidance method and system based on lightweight neural network - Google Patents
Unmanned aerial vehicle autonomous obstacle avoidance method and system based on lightweight neural network
- Publication number
- CN110908399B (application CN201911214854.9A)
- Authority
- CN
- China
- Prior art keywords
- unmanned aerial
- aerial vehicle
- neural network
- convolutional neural
- processor
- Prior art date
- Legal status
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 41
- 238000013528 artificial neural network Methods 0.000 title claims abstract description 23
- 238000013527 convolutional neural network Methods 0.000 claims abstract description 58
- 238000012549 training Methods 0.000 claims abstract description 37
- 238000007781 pre-processing Methods 0.000 claims abstract description 7
- 230000004913 activation Effects 0.000 claims description 5
- 238000004364 calculation method Methods 0.000 claims description 3
- 238000002372 labelling Methods 0.000 claims description 3
- 238000010606 normalization Methods 0.000 claims description 2
- 238000002054 transplantation Methods 0.000 claims description 2
- 230000004888 barrier function Effects 0.000 description 4
- 238000010586 diagram Methods 0.000 description 4
- 230000006870 function Effects 0.000 description 4
- 230000008569 process Effects 0.000 description 4
- 238000011161 development Methods 0.000 description 3
- 238000012545 processing Methods 0.000 description 3
- 238000013459 approach Methods 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 230000003213 activating effect Effects 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 238000007689 inspection Methods 0.000 description 1
- 238000011835 investigation Methods 0.000 description 1
- 238000013508 migration Methods 0.000 description 1
- 230000005012 migration Effects 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 238000005457 optimization Methods 0.000 description 1
- 238000011084 recovery Methods 0.000 description 1
- 238000004088 simulation Methods 0.000 description 1
- 238000013519 translation Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/10—Simultaneous control of position or course in three dimensions
- G05D1/101—Simultaneous control of position or course in three dimensions specially adapted for aircraft
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B13/00—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
- G05B13/02—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
- G05B13/0265—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion
- G05B13/027—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion using neural networks only
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B13/00—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
- G05B13/02—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
- G05B13/04—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators
- G05B13/042—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators in which a parameter or coefficient is automatically adjusted to optimise the performance
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Automation & Control Theory (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Medical Informatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Aviation & Aerospace Engineering (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Traffic Control Systems (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
The invention provides an unmanned aerial vehicle autonomous obstacle avoidance method based on a lightweight neural network, comprising the following steps: collecting video data that simulates the flight of a camera carried by the unmanned aerial vehicle as training data; preprocessing the training data; constructing a convolutional neural network with a lightweight convolutional neural network architecture and training it on the preprocessed training data; and deploying the trained convolutional neural network on the processor of the unmanned aerial vehicle. A monocular camera on the unmanned aerial vehicle transmits video frames acquired in real time to the processor, where the convolutional neural network outputs a collision probability for each frame. The processor modulates the current flight speed of the unmanned aerial vehicle according to this collision probability, and when the flight speed drops to a preset minimum speed, the unmanned aerial vehicle translates along its body y axis, thereby achieving autonomous obstacle avoidance. The invention further provides an unmanned aerial vehicle autonomous obstacle avoidance system based on the lightweight neural network.
Description
Technical Field
The invention relates to the technical field of unmanned aerial vehicle autonomous obstacle avoidance, and in particular to an unmanned aerial vehicle autonomous obstacle avoidance method and system based on a lightweight neural network.
Background
With the progress of the times and the development of technology, unmanned aerial vehicles have been applied to tasks such as inspection, transportation, monitoring, security, and reconnaissance, and can complete these tasks even in complex and confined environments such as forests, tunnels, and indoor spaces.
At present, the dominant approach for an unmanned aerial vehicle to recognize obstacles and continue flying around them combines GPS with vision sensors to estimate the system state of the vehicle, infer whether an obstacle is present, and plan a path. However, this approach is difficult to implement in urban environments with high-rise buildings and is prone to state-estimation errors when dynamic obstacles are encountered. Improving obstacle recognition accuracy and reducing the computational load, while quickly issuing safe and reliable flight control commands, is therefore of great significance for unmanned aerial vehicle obstacle avoidance.
Disclosure of Invention
The invention provides an unmanned aerial vehicle autonomous obstacle avoidance method based on a lightweight neural network, and a corresponding unmanned aerial vehicle autonomous obstacle avoidance system, to overcome the prior-art defect that system state estimation is error-prone when the unmanned aerial vehicle encounters a dynamic obstacle.
In order to solve the technical problems, the technical scheme of the invention is as follows:
An unmanned aerial vehicle autonomous obstacle avoidance method based on a lightweight neural network comprises the following steps:
S1: collecting video data simulating the flight of a camera carried by the unmanned aerial vehicle as training data;
S2: preprocessing the training data;
S3: constructing a convolutional neural network using the lightweight convolutional neural network architecture MFnet, and then inputting the preprocessed training data into the convolutional neural network for training;
S4: deploying the trained convolutional neural network on a processor of the unmanned aerial vehicle. A monocular camera on the unmanned aerial vehicle transmits video frame data acquired in real time to the processor, where the convolutional neural network outputs a collision probability. The processor modulates the current flight speed of the unmanned aerial vehicle according to the output collision probability, and when the flight speed drops to a preset minimum speed, the unmanned aerial vehicle translates along its body y axis, thereby achieving autonomous obstacle avoidance.
In this technical scheme, video sequences are collected as training data and preprocessed, which improves the learning capacity of the convolutional neural network during training and avoids over-fitting. The video sequence is fed into the lightweight convolutional neural network to obtain the corresponding collision probability; a control command for the forward flight speed is then computed from the collision probability at the network output and fed back to the flight control platform of the unmanned aerial vehicle, thereby realizing autonomous obstacle avoidance.
Preferably, in step S1, a monocular camera is fixed on a bicycle to collect video data, thereby simulating video captured by a camera carried on a flying unmanned aerial vehicle. Because operating an unmanned aerial vehicle close to obstacles carries a degree of danger, video sequences approaching obstacles cannot be collected with an actual vehicle; a monocular camera fixed on a bicycle therefore simulates onboard data acquisition, enabling the collection of training data across varied environments in different areas and with different obstacles.
Preferably, in step S2, the step of preprocessing the training data includes:
S21: manually labeling the video data frame by frame, wherein video frames farther than 1 m from the obstacle are labeled 0, and video frames at a distance of 1 m or less from the obstacle are labeled 1;
S22: adding random noise to the labeled video frames and applying flips or crops to obtain the preprocessed training data.
Preferably, in step S3, the convolutional neural network is built on the lightweight convolutional neural network architecture MFnet, taking MobileNetV2 as a reference. The first convolution layer is a dilated (hole) convolution layer, whose output is connected to the inputs of 6 depthwise-separable convolution components; each depthwise-separable convolution component comprises a channel-by-channel (depthwise) convolution layer, a point-by-point convolution layer, a BN (batch normalization) layer, and a ReLU activation layer connected in sequence. The output of the depthwise-separable convolution components is connected to a convolution layer that uses dropout, whose output is connected to the input of a fully connected layer; the fully connected layer uses a sigmoid activation function and outputs the collision probability corresponding to the input video frame image.
Preferably, the dropout value in the convolutional layer using the dropout method is preset to 0.5.
Preferably, the channel-by-channel convolution layer uses a 3×3 convolution kernel and the point-by-point convolution layer uses a 1×1 convolution kernel.
Preferably, step S3 further includes: optimizing the parameters of each layer of the convolutional neural network with a binary cross-entropy loss function, calculated as:
L = −[y·log(p̂) + (1 − y)·log(1 − p̂)]
where p̂ denotes the collision probability output by the convolutional neural network and y denotes the label of the video frame input to the network.
Preferably, in step S4, the processor modulates the current flight speed of the unmanned aerial vehicle according to the output collision probability by modulating the forward speed of the vehicle, thereby realizing autonomous obstacle avoidance; the forward speed modulation formula of the unmanned aerial vehicle is:
v_k = (1 − α)·v_{k−1} + α·(1 − p_t)·V_max
where v_k denotes the modulated speed, p_t the collision probability, V_max the maximum forward speed of the unmanned aerial vehicle, and α the modulation coefficient, with 0 ≤ α ≤ 1.
The invention also provides an unmanned aerial vehicle autonomous obstacle avoidance system based on the lightweight neural network, applying the above unmanned aerial vehicle autonomous obstacle avoidance method, and comprising an unmanned aerial vehicle carrying a monocular camera, a graphics card, a processor, and a flight control platform, wherein:
the unmanned aerial vehicle collects a current video sequence through a monocular camera carried by the unmanned aerial vehicle and transmits the current video sequence to a processor;
the display card is used for training the convolutional neural network, and then transplanting the trained convolutional neural network to the processor for application;
the processor obtains collision probability corresponding to the current video sequence according to convolutional neural network output obtained from the graphic card transplantation, obtains unmanned aerial vehicle flight modulation speed according to a preset modulation formula, and sends a modulation command to a flight control platform;
and the flight control platform adjusts the flight speed of the unmanned aerial vehicle according to the modulation command sent from the processor so as to realize autonomous obstacle avoidance.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects: constructing the convolutional neural network with the lightweight convolutional neural network architecture MFnet reduces the computational load and the video-frame processing time while still identifying obstacles accurately, which speeds up the flight-speed modulation of the unmanned aerial vehicle and effectively realizes autonomous obstacle avoidance. Because the unmanned aerial vehicle dynamically modulates its flight speed according to the collision probability output by the convolutional neural network, the method can be applied in environments with dynamic obstacles.
Drawings
Fig. 1 is a flowchart of an unmanned aerial vehicle autonomous obstacle avoidance method based on a lightweight neural network in embodiment 1.
Fig. 2 is a partial training data image of example 1.
Fig. 3 is a schematic structural diagram of a convolutional neural network of embodiment 1.
Fig. 4 is a schematic structural diagram of an unmanned aerial vehicle autonomous obstacle avoidance system based on a lightweight neural network in embodiment 2.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the present patent;
for the purpose of better illustrating the embodiments, certain elements of the drawings may be omitted, enlarged or reduced and do not represent the actual product dimensions;
it will be appreciated by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical scheme of the invention is further described below with reference to the accompanying drawings and examples.
Example 1
This embodiment provides an unmanned aerial vehicle autonomous obstacle avoidance method based on a lightweight neural network; fig. 1 shows the flowchart of the method.
The unmanned aerial vehicle autonomous obstacle avoidance method provided by this embodiment comprises the following steps:
s1: video data simulating the flight of the onboard camera of the unmanned aerial vehicle is collected as training data.
In this embodiment, video data are collected by mounting a monocular camera on a bicycle, simulating the video captured by a camera carried on a flying unmanned aerial vehicle and yielding training data across varied environments in different areas and with different obstacles.
Fig. 2 shows part of the training data images of this embodiment.
S2: preprocessing the training data.
In this step, preprocessing the training data comprises:
S21: manually labeling the video data frame by frame, wherein video frames farther than 1 m from the obstacle are labeled 0, indicating no obstacle ahead, and video frames at a distance of 1 m or less from the obstacle are labeled 1, indicating an obstacle ahead;
S22: adding random noise to the labeled video frames and applying flips or crops to obtain the preprocessed training data.
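To make this preprocessing step concrete, the following Python sketch (using OpenCV and NumPy) illustrates one way the labeling and augmentation could be implemented; the noise level, crop ratio, and file path are illustrative assumptions not specified in this embodiment.

```python
import cv2
import numpy as np

def augment_frame(frame, rng):
    """Apply the augmentations described above: random noise,
    a random horizontal flip, and a random crop resized back."""
    h, w = frame.shape[:2]

    # Add zero-mean Gaussian noise (std. dev. 5.0 is an assumed value)
    noisy = frame.astype(np.float32) + rng.normal(0.0, 5.0, frame.shape)
    out = np.clip(noisy, 0, 255).astype(np.uint8)

    # Random horizontal flip
    if rng.random() < 0.5:
        out = cv2.flip(out, 1)

    # Random crop of 90% of the frame, resized back to the original size
    ch, cw = int(0.9 * h), int(0.9 * w)
    y0 = int(rng.integers(0, h - ch + 1))
    x0 = int(rng.integers(0, w - cw + 1))
    return cv2.resize(out[y0:y0 + ch, x0:x0 + cw], (w, h))

# Labels follow the 1 m rule above: 0 = obstacle farther than 1 m, 1 = within 1 m.
rng = np.random.default_rng(0)
frame = cv2.imread("frame_0001.png")      # placeholder path
sample = (augment_frame(frame, rng), 1)   # (augmented image, label)
```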
S3: constructing a convolutional neural network using the lightweight convolutional neural network architecture MFnet, and then inputting the preprocessed training data into the convolutional neural network for training.
In this step, the constructed convolutional neural network is built on the lightweight convolutional neural network architecture MFnet, taking MobileNetV2 as a reference. The first convolution layer is a dilated (hole) convolution layer, which avoids the use of a 5×5 convolution kernel; its output is connected to the inputs of 6 depthwise-separable convolution components, each comprising a channel-by-channel (depthwise) convolution layer, a point-by-point convolution layer, a BN (batch normalization) layer, and a ReLU activation layer connected in sequence. The output of the depthwise-separable convolution components is connected to a convolution layer that uses dropout, whose output is connected to the input of a fully connected layer; the fully connected layer uses a sigmoid activation function and outputs the collision probability corresponding to the input video frame image.
Fig. 3 shows the schematic structure of the convolutional neural network of this embodiment.
In this embodiment, the dropout value in the convolution layer using dropout is preset to 0.5; the channel-by-channel convolution layer uses a 3×3 convolution kernel and the point-by-point convolution layer uses a 1×1 convolution kernel.
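For illustration only, the topology described above can be sketched in PyTorch as follows. The channel widths, strides, dilation rate, and input resolution are assumptions, as this embodiment does not specify them, and the six depthwise-separable components are stacked sequentially in the spirit of MobileNetV2.

```python
import torch
import torch.nn as nn

class DepthwiseSeparable(nn.Module):
    """Channel-by-channel (depthwise) 3x3 conv -> point-by-point 1x1 conv -> BN -> ReLU."""
    def __init__(self, c_in, c_out, stride=1):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_in, 3, stride=stride, padding=1, groups=c_in),
            nn.Conv2d(c_in, c_out, 1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class MFNetSketch(nn.Module):
    """Dilated first convolution, six depthwise-separable components,
    a dropout convolution layer, and a fully connected sigmoid head
    that outputs the collision probability."""
    def __init__(self):
        super().__init__()
        # First layer: dilated (hole) convolution instead of a 5x5 kernel
        self.stem = nn.Conv2d(3, 32, 3, stride=2, padding=2, dilation=2)
        widths = [32, 64, 64, 128, 128, 256, 256]
        self.blocks = nn.Sequential(*[
            DepthwiseSeparable(widths[i], widths[i + 1],
                               stride=2 if i % 2 == 0 else 1)
            for i in range(6)
        ])
        self.drop_conv = nn.Sequential(
            nn.Conv2d(256, 256, 3, padding=1),
            nn.Dropout(0.5),          # dropout value 0.5, as stated above
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.head(self.drop_conv(self.blocks(self.stem(x))))

net = MFNetSketch()
p = net(torch.randn(1, 3, 224, 224))   # collision probability in (0, 1)
```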
The method also includes a convolutional neural network optimization step, in which the parameters of each layer of the convolutional neural network are optimized with a binary cross-entropy loss function, calculated as:
L = −[y·log(p̂) + (1 − y)·log(1 − p̂)]
where p̂ denotes the collision probability output by the convolutional neural network and y denotes the label of the video frame input to the network.
The convolutional neural network in this embodiment is trained with the stochastic gradient descent (SGD) optimizer, with the learning rate set to 0.001, a batch size of 16, and 50 epochs.
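Under these settings, a minimal training-loop sketch might look as follows (PyTorch); the dataset here is a random placeholder, and MFNetSketch is the sketch model defined above.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset: preprocessed frames with their 0/1 collision labels.
frames = torch.randn(256, 3, 224, 224)
labels = torch.randint(0, 2, (256, 1)).float()
loader = DataLoader(TensorDataset(frames, labels), batch_size=16, shuffle=True)

net = MFNetSketch()                              # sketch model defined above
criterion = nn.BCELoss()                         # L = -[y*log(p) + (1-y)*log(1-p)]
optimizer = torch.optim.SGD(net.parameters(), lr=0.001)

for epoch in range(50):                          # 50 epochs, per the text
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(net(x), y)
        loss.backward()
        optimizer.step()
```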
S4: the trained convolutional neural network is deployed on a processor of the unmanned aerial vehicle. A monocular camera on the unmanned aerial vehicle transmits video frame data acquired in real time to the processor, where the convolutional neural network outputs a collision probability. The processor modulates the current flight speed of the unmanned aerial vehicle according to the output collision probability, and when the flight speed drops to a preset minimum speed, the unmanned aerial vehicle translates along its body y axis, thereby achieving autonomous obstacle avoidance.
In this step, the processor modulates the current flight speed of the unmanned aerial vehicle according to the output collision probability by modulating the forward speed of the vehicle, thereby realizing autonomous obstacle avoidance; the forward speed modulation formula of the unmanned aerial vehicle is:
v_k = (1 − α)·v_{k−1} + α·(1 − p_t)·V_max
where v_k denotes the modulated speed, p_t the collision probability, V_max the maximum forward speed of the unmanned aerial vehicle, and α the modulation coefficient, with 0 ≤ α ≤ 1.
In this embodiment, the maximum forward speed V_max of the unmanned aerial vehicle is set to 2 m/s, the flying height of the unmanned aerial vehicle is controlled at about 2 m, the modulation coefficient α is set to 0.7, and the minimum speed V_min of the unmanned aerial vehicle is set to 0.01 m/s.
In a specific implementation, when the unmanned aerial vehicle encounters an obstacle, the onboard monocular camera feeds the currently acquired video frame through the trained convolutional neural network, which outputs a collision probability p_t; the corresponding modulated speed v_k is then obtained from p_t and the speed modulation formula. As the unmanned aerial vehicle gets closer to an obstacle, the modulated speed v_k gradually decreases; when v_k drops to the preset V_min, the unmanned aerial vehicle translates along its body y axis. Once no obstacle remains in front of the monocular camera, the collision probability p_t output by the convolutional neural network decreases, the modulated speed v_k increases, and the unmanned aerial vehicle continues to fly forward.
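The control logic described above can be summarized in a short Python sketch; the lateral translation speed used when the minimum forward speed is reached is an assumed placeholder, since this embodiment does not specify one.

```python
V_MAX = 2.0    # maximum forward speed, m/s (this embodiment)
V_MIN = 0.01   # minimum forward speed, m/s
ALPHA = 0.7    # modulation coefficient

def modulate_speed(v_prev: float, p_t: float) -> float:
    """v_k = (1 - alpha) * v_{k-1} + alpha * (1 - p_t) * V_max."""
    return (1 - ALPHA) * v_prev + ALPHA * (1 - p_t) * V_MAX

def control_step(v_prev: float, p_t: float):
    """One control cycle: returns (forward speed command, lateral y command).

    When the modulated speed falls to V_MIN, forward motion is held at the
    minimum and the vehicle translates along its body y axis until clear."""
    v_k = modulate_speed(v_prev, p_t)
    if v_k <= V_MIN:
        return V_MIN, 0.3   # lateral speed: assumed placeholder value
    return v_k, 0.0

v, y_cmd = control_step(v_prev=1.5, p_t=0.9)   # high p_t -> slow down
```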
Example 2
This embodiment provides an unmanned aerial vehicle autonomous obstacle avoidance system based on a lightweight neural network; fig. 4 shows the schematic structure of the system.
The unmanned aerial vehicle autonomous obstacle avoidance system based on a lightweight neural network provided by this embodiment comprises an unmanned aerial vehicle 1 carrying a monocular camera 2, a graphics card 3, a processor 4, and a flight control platform 5, wherein:
the unmanned aerial vehicle 1 collects the current video sequence through the monocular camera 2 carried by the unmanned aerial vehicle 1 and transmits the current video sequence to the processor 4;
the display card 3 is used for training the convolutional neural network, and then transplanting the trained convolutional neural network to the processor 4 for application;
the processor 4 obtains collision probability corresponding to the current video sequence according to the convolutional neural network output obtained through the graphic card 3 migration, obtains the flight modulation speed of the unmanned aerial vehicle 1 according to a preset modulation formula, and sends a modulation command to the flight control platform 5;
the flight control platform 5 adjusts the flight speed of the unmanned aerial vehicle 1 according to the modulation command sent from the processor 4 to realize autonomous obstacle avoidance.
In this embodiment, an Nvidia RTX 2080 Ti is used as the graphics card 3 for training, and the evaluation metrics are accuracy and the F1 score, where:
F1 = (2 × precision × recall) / (precision + recall)
where precision denotes the precision and recall denotes the recall of the network.
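For reference, a minimal Python sketch of this F1 computation follows; the counts are example values.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = 2 * precision * recall / (precision + recall)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(f1_score(tp=90, fp=10, fn=20))   # example counts -> ~0.857
```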
The trained convolutional neural network is ported to the Nvidia Jetson TX2 mobile development platform on the unmanned aerial vehicle 1 for inference: the output obtained through MFnet drives a flight speed control module that governs the forward speed of the unmanned aerial vehicle 1.
In order to reduce the volume and payload of the unmanned aerial vehicle 1, the Nvidia Jetson TX2 is used as the processor 4; the TX2 core module together with its carrier board weighs less than 300 g, which effectively reduces the payload of the unmanned aerial vehicle 1. The TX2 contains ARM Cortex-A57 and Nvidia Denver2 processing cores and 256 CUDA cores of the Pascal architecture, meeting the hardware requirements of a mobile development platform.
In this embodiment, the maximum forward speed V_max of the unmanned aerial vehicle 1 is set to 2 m/s, the flying height of the unmanned aerial vehicle 1 is controlled at about 2 m, the modulation coefficient α is set to 0.7, and the minimum speed V_min is set to 0.01 m/s.
In a specific implementation, when the unmanned aerial vehicle 1 encounters an obstacle, the monocular camera 2 mounted on it transmits the currently acquired video frame to the processor 4 for processing. The processor 4 holds the convolutional neural network trained on the graphics card 3, which outputs the current collision probability p_t of the unmanned aerial vehicle 1; the processor 4 computes the corresponding modulated speed v_k from p_t and the speed modulation formula and sends it to the flight control platform 5 to control the flight speed of the unmanned aerial vehicle 1.
As the unmanned aerial vehicle 1 approaches an obstacle, the modulated speed v_k gradually decreases; when v_k drops to the preset minimum speed V_min of the unmanned aerial vehicle 1, the vehicle translates along its body y axis. Once the unmanned aerial vehicle 1 has translated to a position with no obstacle in front of the monocular camera, the collision probability p_t output by the convolutional neural network decreases, the modulated speed v_k increases, and the unmanned aerial vehicle 1 continues to fly forward, realizing the autonomous obstacle avoidance function of the unmanned aerial vehicle 1.
The same or similar reference numerals correspond to the same or similar components;
the terms describing the positional relationship in the drawings are merely illustrative, and are not to be construed as limiting the present patent;
it is to be understood that the above examples of the present invention are provided by way of illustration only and not by way of limitation of the embodiments of the present invention. Other variations or modifications of the above teachings will be apparent to those of ordinary skill in the art. It is not necessary here nor is it exhaustive of all embodiments. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the invention are desired to be protected by the following claims.
Claims (6)
1. An unmanned aerial vehicle autonomous obstacle avoidance method based on a lightweight neural network is characterized by comprising the following steps of:
S1: collecting video data simulating the flight of a camera carried by the unmanned aerial vehicle as training data;
S2: preprocessing the training data, wherein preprocessing the training data comprises:
S21: manually labeling the video data frame by frame, wherein video frames farther than 1 m from the obstacle are labeled 0 and video frames at a distance of 1 m or less from the obstacle are labeled 1;
S22: adding random noise to the labeled video frames and applying flips or crops to obtain the preprocessed training data;
S3: constructing a convolutional neural network using the lightweight convolutional neural network architecture MFnet, and then inputting the preprocessed training data into the convolutional neural network for training;
S4: deploying the trained convolutional neural network on a processor of the unmanned aerial vehicle, wherein a monocular camera on the unmanned aerial vehicle transmits video frame data acquired in real time to the processor, the convolutional neural network in the processor outputs a collision probability, the processor modulates the current flight speed of the unmanned aerial vehicle according to the output collision probability, and when the flight speed drops to a preset minimum speed, the unmanned aerial vehicle translates along its body y axis, realizing autonomous obstacle avoidance;
wherein the processor modulates the current flight speed of the unmanned aerial vehicle according to the output collision probability by modulating the forward speed of the unmanned aerial vehicle, realizing autonomous obstacle avoidance, and the forward speed modulation formula of the unmanned aerial vehicle is:
v_k = (1 − α)·v_{k−1} + α·(1 − p_t)·V_max
where v_k denotes the modulated speed, p_t the collision probability, V_max the maximum forward speed of the unmanned aerial vehicle, and α the modulation coefficient, with 0 ≤ α ≤ 1;
and wherein the convolutional neural network is built on the lightweight convolutional neural network architecture MFnet, taking MobileNetV2 as a reference; the first convolution layer is a dilated (hole) convolution layer, whose output is connected to the inputs of 6 depthwise-separable convolution components, each comprising a channel-by-channel (depthwise) convolution layer, a point-by-point convolution layer, a BN (batch normalization) layer, and a ReLU activation layer connected in sequence; the output of the depthwise-separable convolution components is connected to a convolution layer that uses dropout, whose output is connected to the input of a fully connected layer, and the fully connected layer uses a sigmoid activation function and outputs the collision probability corresponding to the input video frame image.
2. The unmanned aerial vehicle autonomous obstacle avoidance method of claim 1, wherein: in step S1, a monocular camera is fixed on a bicycle to collect video data, thereby simulating video captured by a camera carried on a flying unmanned aerial vehicle.
3. The unmanned aerial vehicle autonomous obstacle avoidance method of claim 1, wherein: the dropout value in the convolution layer using the dropout method is preset to 0.5.
4. The unmanned aerial vehicle autonomous obstacle avoidance method of claim 1, wherein: the channel-by-channel convolution layer uses a 3×3 convolution kernel and the point-by-point convolution layer uses a 1×1 convolution kernel.
5. The unmanned aerial vehicle autonomous obstacle avoidance method of claim 1, wherein step S3 further comprises: optimizing the parameters of each layer of the convolutional neural network with a binary cross-entropy loss function, calculated as:
L = −[y·log(p̂) + (1 − y)·log(1 − p̂)]
where p̂ denotes the collision probability output by the convolutional neural network and y denotes the label of the video frame input to the network.
6. An unmanned aerial vehicle autonomous obstacle avoidance system based on a lightweight neural network, applying the unmanned aerial vehicle autonomous obstacle avoidance method according to any one of claims 1 to 5, comprising an unmanned aerial vehicle carrying a monocular camera, a graphics card, a processor, and a flight control platform, wherein:
the unmanned aerial vehicle collects a current video sequence through a monocular camera carried by the unmanned aerial vehicle and transmits the current video sequence to a processor;
the display card is used for training the convolutional neural network, and then transplanting the trained convolutional neural network to the processor for application;
the processor obtains collision probability corresponding to the current video sequence according to convolutional neural network output obtained from the graphic card transplantation, obtains unmanned aerial vehicle flight modulation speed according to a preset modulation formula, and sends a modulation command to a flight control platform;
and the flight control platform adjusts the flight speed of the unmanned aerial vehicle according to the modulation command sent from the processor so as to realize autonomous obstacle avoidance.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911214854.9A CN110908399B (en) | 2019-12-02 | 2019-12-02 | Unmanned aerial vehicle autonomous obstacle avoidance method and system based on lightweight neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911214854.9A CN110908399B (en) | 2019-12-02 | 2019-12-02 | Unmanned aerial vehicle autonomous obstacle avoidance method and system based on lightweight neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110908399A CN110908399A (en) | 2020-03-24 |
CN110908399B true CN110908399B (en) | 2023-05-12 |
Family
ID=69821638
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911214854.9A Active CN110908399B (en) | 2019-12-02 | 2019-12-02 | Unmanned aerial vehicle autonomous obstacle avoidance method and system based on lightweight neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110908399B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111880558B (en) * | 2020-07-06 | 2021-05-11 | 广东技术师范大学 | Plant protection unmanned aerial vehicle obstacle avoidance spraying method and device, computer equipment and storage medium |
CN111831010A (en) * | 2020-07-15 | 2020-10-27 | 武汉大学 | Unmanned aerial vehicle obstacle avoidance flight method based on digital space slice |
CN112364774A (en) * | 2020-11-12 | 2021-02-12 | 天津大学 | Unmanned vehicle brain autonomous obstacle avoidance method and system based on impulse neural network |
CN112666975B (en) * | 2020-12-18 | 2022-03-29 | 中山大学 | Unmanned aerial vehicle safety trajectory tracking method based on predictive control and barrier function |
CN113419555B (en) * | 2021-05-20 | 2022-07-19 | 北京航空航天大学 | Monocular vision-based low-power-consumption autonomous obstacle avoidance method and system for unmanned aerial vehicle |
CN113485392B (en) * | 2021-06-17 | 2022-04-08 | 广东工业大学 | Virtual reality interaction method based on digital twins |
CN114661061B (en) * | 2022-02-14 | 2024-05-17 | 天津大学 | GPS-free visual indoor environment-based miniature unmanned aerial vehicle flight control method |
CN117475358B (en) * | 2023-12-27 | 2024-04-23 | 广东南方电信规划咨询设计院有限公司 | Collision prediction method and device based on unmanned aerial vehicle vision |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106155082B (en) * | 2016-07-05 | 2019-02-15 | 北京航空航天大学 | A kind of unmanned plane bionic intelligence barrier-avoiding method based on light stream |
GB2556644B (en) * | 2017-02-28 | 2018-11-28 | Matthew Russell Iain | Unmanned aerial vehicles |
US10878708B2 (en) * | 2017-03-03 | 2020-12-29 | Farrokh Mohamadi | Drone terrain surveillance with camera and radar sensor fusion for collision avoidance |
CN109784298A (en) * | 2019-01-28 | 2019-05-21 | 南京航空航天大学 | A kind of outdoor on-fixed scene weather recognition methods based on deep learning |
RU2703797C1 (en) * | 2019-02-05 | 2019-10-22 | Общество с ограниченной ответственностью "Гарант" (ООО "Гарант") | Method and system for transmitting media information from unmanned aerial vehicles to a data collection point on a low-directivity optical channel with quantum reception of a media stream |
CN109960278B (en) * | 2019-04-09 | 2022-01-28 | 岭南师范学院 | LGMD-based bionic obstacle avoidance control system and method for unmanned aerial vehicle |
CN110456805B (en) * | 2019-06-24 | 2022-07-19 | 深圳慈航无人智能系统技术有限公司 | Intelligent tracking flight system and method for unmanned aerial vehicle |
CN110298397A (en) * | 2019-06-25 | 2019-10-01 | 东北大学 | The multi-tag classification method of heating metal image based on compression convolutional neural networks |
- 2019-12-02: application CN201911214854.9A granted as patent CN110908399B (active)
Also Published As
Publication number | Publication date |
---|---|
CN110908399A (en) | 2020-03-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110908399B (en) | Unmanned aerial vehicle autonomous obstacle avoidance method and system based on lightweight neural network | |
Luo et al. | A survey of intelligent transmission line inspection based on unmanned aerial vehicle | |
CN111932588B (en) | Tracking method of airborne unmanned aerial vehicle multi-target tracking system based on deep learning | |
Ettinger et al. | Vision-guided flight stability and control for micro air vehicles | |
CN109063532B (en) | Unmanned aerial vehicle-based method for searching field offline personnel | |
US12118461B2 (en) | Methods and systems for predicting dynamic object behavior | |
CN108563236B (en) | Target tracking method of nano unmanned aerial vehicle based on concentric circle characteristics | |
CN111123964B (en) | Unmanned aerial vehicle landing method and device and computer readable medium | |
CN110832494A (en) | Semantic generation method, equipment, aircraft and storage medium | |
CN113552867B (en) | Planning method for motion trail and wheeled mobile device | |
US12078507B2 (en) | Route planning for a ground vehicle through unfamiliar terrain | |
CN114683290B (en) | Method and device for optimizing pose of foot robot and storage medium | |
Bartolomei et al. | Autonomous emergency landing for multicopters using deep reinforcement learning | |
CN116661497A (en) | Intelligent aerocar | |
CN112364774A (en) | Unmanned vehicle brain autonomous obstacle avoidance method and system based on impulse neural network | |
CN113568427A (en) | Method and system for unmanned aerial vehicle to land mobile platform independently | |
CN113674355A (en) | Target identification and positioning method based on camera and laser radar | |
CN118107822A (en) | Complex environment search and rescue method based on unmanned aerial vehicle | |
CN118031956A (en) | Unmanned aerial vehicle picking obstacle avoidance method, device, equipment and medium | |
CN113110597A (en) | Indoor unmanned aerial vehicle autonomous flight system based on ROS system | |
CN112241180B (en) | Visual processing method for landing guidance of unmanned aerial vehicle mobile platform | |
CN113065499B (en) | Air robot cluster control method and system based on visual learning drive | |
Qi et al. | Detection and tracking of a moving target for UAV based on machine vision | |
Zaier et al. | Vision-based UAV tracking using deep reinforcement learning with simulated data | |
CN213690330U (en) | Image recognition-based autonomous carrier landing system for fixed-wing unmanned aerial vehicle |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB03 | Change of inventor or designer information | Inventor after: Liao Jianwen; Cai Qianqian; Meng Wei; Lu Renquan. Inventor before: Liao Jianwen; Cai Qianqian; Meng Wei; Lu Renquan; Fu Min Yue |
| GR01 | Patent grant | |