CN114911268A - Unmanned aerial vehicle autonomous obstacle avoidance method and system based on visual simulation - Google Patents
- Publication number
- CN114911268A (application number CN202210681303.9A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/10—Simultaneous control of position or course in three dimensions
- G05D1/101—Simultaneous control of position or course in three dimensions specially adapted for aircraft
Abstract
The invention discloses an unmanned aerial vehicle autonomous obstacle avoidance method and system based on visual simulation. A calibration plate is moved to avoid obstacles in the environment, and a video of the calibration plate during the movement is recorded. A pair of adjacent frames, a first frame picture and a second frame picture, is selected from the video, and the homography matrix between the two frames is calculated. A first region of interest containing the calibration plate is extracted from the first frame picture, a second region of interest containing the calibration plate is extracted from the second frame picture, and the first center point coordinate of the first region of interest and the second center point coordinate of the second region of interest are calculated. The center point coordinate of a new frame is obtained from the second center point coordinate and the homography matrix, and the direction label of the first frame picture is obtained from the first center point coordinate and the center point coordinate of the new frame. All frames in the video except the last frame, together with their corresponding direction labels, form the calibrated data set. Automatic calibration of the data set is thereby realized.
Description
Technical Field
The invention relates to the technical field of unmanned aerial vehicle obstacle avoidance, in particular to an unmanned aerial vehicle autonomous obstacle avoidance method and system based on visual simulation.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
An unmanned aerial vehicle can escape the restriction of the two-dimensional plane and fly in three-dimensional space, but because of its payload and endurance limits, its obstacle avoidance capability is greatly restricted. Vision-based obstacle avoidance is a low-cost solution, but it relies on techniques such as imitation learning. To train a drone to avoid obstacles autonomously in three-dimensional space through imitation learning, the drone must be flown through an obstacle environment, video of the flight and avoidance process must be captured, and the pictures in the video must be manually labeled to obtain a training data set.
Because a drone is very sensitive to small changes in control commands, human error occurs easily, and in severe cases the drone is destroyed or the environment is damaged. It is therefore very difficult to collect and label a data set by manually piloting the drone around obstacles. In addition, in conventional data set calibration, pictures are obtained by capturing video frames and each picture is then labeled manually, which consumes a great deal of manpower.
Disclosure of Invention
To solve the above problems, the present disclosure proposes an unmanned aerial vehicle autonomous obstacle avoidance method and system based on visual simulation. Obstacles in the environment are avoided with a handheld calibration plate, the movement of the calibration plate is recorded on video, and the pictures in the video are calibrated automatically to obtain a calibrated data set, realizing fast and accurate calibration. The unmanned aerial vehicle autonomous obstacle avoidance neural network is trained with this data set, improving the sensitivity and accuracy of the network and ultimately the accuracy of the drone's autonomous obstacle avoidance.
In order to achieve the purpose, the following technical scheme is adopted in the disclosure:
in a first aspect, an unmanned aerial vehicle autonomous obstacle avoidance method based on visual simulation is provided, which includes:
acquiring an unmanned aerial vehicle obstacle image;
acquiring the obstacle avoidance direction of the unmanned aerial vehicle according to the obstacle image of the unmanned aerial vehicle and the trained unmanned aerial vehicle autonomous obstacle avoidance neural network, and controlling the unmanned aerial vehicle to autonomously avoid the obstacle according to the obstacle avoidance direction;
wherein the unmanned aerial vehicle autonomous obstacle avoidance neural network is trained with a calibrated data set, and the process of obtaining the calibrated data set is:
moving the calibration plate to avoid obstacles in the environment and acquiring a video of the calibration plate in the moving process;
randomly selecting a first frame picture and a second frame picture which are adjacent from a video, and calculating a homography matrix of the two frames of pictures;
extracting a first interest area with a calibration plate from the first frame picture, extracting a second interest area with the calibration plate from the second frame picture, and calculating a first center point coordinate of the first interest area and a second center point coordinate of the second interest area;
acquiring the center point coordinate of a new frame according to the second center point coordinate and the homography matrix;
obtaining a direction label of the first frame of picture according to the first central point coordinate and the central point coordinate of the new frame;
after the direction labels corresponding to the other frame pictures except the last frame in the video are obtained, the other frame pictures except the last frame and the corresponding direction labels form a calibrated data set.
In a second aspect, an unmanned aerial vehicle autonomous obstacle avoidance system based on visual simulation is provided, which includes:
the image acquisition module is used for acquiring an unmanned aerial vehicle obstacle image;
the autonomous obstacle avoidance module is used for acquiring the obstacle avoidance direction of the unmanned aerial vehicle according to the obstacle image of the unmanned aerial vehicle and the trained unmanned aerial vehicle autonomous obstacle avoidance neural network, and controlling the unmanned aerial vehicle to autonomously avoid the obstacle according to the obstacle avoidance direction;
wherein the unmanned aerial vehicle autonomous obstacle avoidance neural network is trained with a calibrated data set, and the process of obtaining the calibrated data set is:
moving the calibration plate to avoid obstacles in the environment and acquiring a video of the calibration plate in the moving process;
randomly selecting a first frame picture and a second frame picture which are adjacent from a video, and calculating a homography matrix of the two frames of pictures;
extracting a first interest area with a calibration plate from the first frame picture, extracting a second interest area with the calibration plate from the second frame picture, and calculating a first center point coordinate of the first interest area and a second center point coordinate of the second interest area;
acquiring the center point coordinate of a new frame according to the second center point coordinate and the homography matrix;
obtaining a direction label of the first frame of picture according to the first central point coordinate and the central point coordinate of the new frame;
after the direction labels corresponding to the other frame pictures except the last frame in the video are obtained, the other frame pictures except the last frame and the corresponding direction labels form a calibrated data set.
In a third aspect, an electronic device is provided, which includes a memory, a processor, and computer instructions stored in the memory and executed on the processor, where the computer instructions, when executed by the processor, perform the steps of the unmanned aerial vehicle autonomous obstacle avoidance method based on visual simulation.
In a fourth aspect, a computer-readable storage medium is provided for storing computer instructions, and the computer instructions, when executed by a processor, perform the steps of the unmanned aerial vehicle autonomous obstacle avoidance method based on visual simulation.
Compared with the prior art, the beneficial effect of this disclosure is:
1. With the disclosed scheme, the autonomous obstacle avoidance behavior of the unmanned aerial vehicle is simulated by holding the calibration plate and avoiding obstacles in the environment, so obstacles can be avoided more flexibly. The movement of the calibration plate is recorded on video and the pictures in the video are calibrated automatically, realizing fast automatic calibration of the data set and improving the accuracy and sensitivity of the calibration. The unmanned aerial vehicle autonomous obstacle avoidance neural network is trained with this data set, which improves the sensitivity and accuracy of the network and ultimately the accuracy of the unmanned aerial vehicle's autonomous obstacle avoidance.
Advantages of additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application.
FIG. 1 is a flow chart of the calibration of a data set disclosed in example 1;
fig. 2 is a calibration example disclosed in example 1.
Detailed Description
the present disclosure is further described with reference to the following drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
Example 1
In the embodiment, an unmanned aerial vehicle autonomous obstacle avoidance method based on visual simulation is disclosed, which comprises the following steps:
acquiring an unmanned aerial vehicle obstacle image;
acquiring the obstacle avoidance direction of the unmanned aerial vehicle according to the obstacle image of the unmanned aerial vehicle and the trained unmanned aerial vehicle autonomous obstacle avoidance neural network, and controlling the unmanned aerial vehicle to autonomously avoid the obstacle according to the obstacle avoidance direction;
wherein the unmanned aerial vehicle autonomous obstacle avoidance neural network is trained with a calibrated data set, and the process of obtaining the calibrated data set is:
moving the calibration plate to avoid obstacles in the environment and acquiring a video of the calibration plate in the moving process;
randomly selecting a first frame picture and a second frame picture which are adjacent from a video, and calculating a homography matrix of the two frames of pictures;
extracting a first interest area with a calibration plate from the first frame picture, extracting a second interest area with the calibration plate from the second frame picture, and calculating a first center point coordinate of the first interest area and a second center point coordinate of the second interest area;
acquiring the center point coordinate of a new frame according to the second center point coordinate and the homography matrix;
obtaining a direction label of the first frame of picture according to the first central point coordinate and the central point coordinate of the new frame;
after the direction labels corresponding to the other frame pictures except the last frame in the video are obtained, the other frame pictures except the last frame and the corresponding direction labels form a calibrated data set.
An unmanned aerial vehicle autonomous obstacle avoidance method based on visual simulation disclosed in the embodiment is explained in detail.
An unmanned aerial vehicle autonomous obstacle avoidance method based on visual simulation comprises the following steps:
s1: and acquiring an obstacle image of the unmanned aerial vehicle.
In a specific implementation, the drone obstacle images are extracted frame by frame from the real-time video stream collected by the drone's camera.
S2: according to the unmanned aerial vehicle obstacle image and the trained unmanned aerial vehicle autonomous obstacle avoidance neural network, the obstacle avoidance direction of the unmanned aerial vehicle is obtained, and the unmanned aerial vehicle is controlled to autonomously avoid the obstacle according to the obstacle avoidance direction.
An unmanned aerial vehicle autonomous obstacle avoidance neural network model is built. Its input is the unmanned aerial vehicle obstacle image, and its output is the obstacle avoidance direction of the unmanned aerial vehicle, where the obstacle avoidance directions include moving left, moving right, rotating left, rotating right, moving up, and moving down.
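The patent lists the six avoidance directions but does not fix an output encoding. A minimal sketch of mapping a six-way network output to a command (the class names and their ordering are assumptions, not specified by the patent):

```python
import numpy as np

# Hypothetical ordering of the six avoidance commands; the patent lists
# the directions but does not specify an output ordering.
DIRECTIONS = ["move_left", "move_right", "rotate_left",
              "rotate_right", "move_up", "move_down"]

def pick_direction(logits):
    """Map a 6-way network output vector to an avoidance command
    by taking the highest-scoring class."""
    logits = np.asarray(logits, dtype=float)
    return DIRECTIONS[int(np.argmax(logits))]
```

For example, an output vector whose largest entry is at index 2 would select the (assumed) "rotate_left" command.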
And acquiring a calibrated data set, and training the unmanned aerial vehicle autonomous obstacle avoidance neural network through the calibrated data set to obtain a trained unmanned aerial vehicle autonomous obstacle avoidance neural network model.
The process of obtaining the calibrated data set comprises the following steps:
s21: and moving the calibration plate to avoid obstacles in the environment and acquiring a video of the calibration plate in the moving process.
The calibration plate is held by hand and moved to avoid obstacles in the environment, thereby simulating the obstacle avoidance behavior of the unmanned aerial vehicle. Avoiding obstacles with a handheld calibration plate in a complex environment is more flexible and effectively prevents damage to the drone and to the environment.
In the process of moving the calibration plate to avoid the obstacle, the video of the calibration plate in the moving process is obtained.
S22: the video obtained in S21 is processed to obtain a nominal data set.
As shown in fig. 1, the process of obtaining the calibrated data set specifically includes:
s221: and selecting a first frame picture and a second frame picture which are adjacent from the video, and calculating a homography matrix between the two frames of pictures.
The homography matrix H between the first frame picture F_i and the second frame picture F_(i+1) is calculated with formula (1):

H = getHomography(F_i, F_(i+1))    (1)

where F_i is the first frame picture and F_(i+1) is the second frame picture.
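The patent writes homography estimation abstractly as getHomography(F_i, F_(i+1)); in practice it would be computed from matched feature points between the two frames (for example with OpenCV's findHomography). As a self-contained sketch under that assumption, the direct linear transform (DLT) recovers H from point correspondences:

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Direct linear transform: recover the 3x3 homography H mapping
    src_pts -> dst_pts (lists of (x, y) pairs, at least 4 of them)."""
    rows = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # Two linear constraints per correspondence on the 9 entries of H
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.array(rows, dtype=float)
    # h is the right singular vector for the smallest singular value of A
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the scale so H[2,2] == 1
```

In a real pipeline the correspondences would come from a feature matcher between adjacent frames; the function above is only the estimation step.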
S222: extracting a first interest area with a calibration plate from the first frame picture, extracting a second interest area with the calibration plate from the second frame picture, and calculating a first center point coordinate of the first interest area and a second center point coordinate of the second interest area.
The first interest area and the second interest area are both represented in the form of rectangular boxes.
A first region of interest B_i is acquired from the first frame picture with formula (2):

B_i = selectROI(F_i)    (2)

A CSRT tracker tracks the first region of interest B_i from the first frame picture F_i into the second frame picture F_(i+1), obtaining the position of the second region of interest B_(i+1) with formula (3):

B_(i+1) = track(B_i, F_i, F_(i+1))    (3)

The first center point coordinate PointB_i and the second center point coordinate PointB_(i+1) are calculated with formulas (4) and (5):

PointB_i = getCenter(B_i)    (4)

PointB_(i+1) = getCenter(B_(i+1))    (5)
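Regions of interest from a selectROI-style call are typically given as (x, y, width, height) rectangles, so getCenter reduces to a one-liner (a sketch assuming that box convention):

```python
def get_center(box):
    """Center point of a rectangle given as (x, y, width, height),
    the convention used by OpenCV's selectROI (an assumption here)."""
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)
```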
S223: and acquiring the center point coordinate of a new frame according to the second center point coordinate and the homography matrix.
The center point coordinate PointB_(i+1)' of the new frame, shown in FIG. 2, is obtained with formula (6):

PointB_(i+1)' = getPoint(H, PointB_(i+1))    (6)
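The getPoint step in formula (6) is a projective transform of the center point: lift the point to homogeneous coordinates, multiply by H, and divide by the third component. A minimal sketch:

```python
import numpy as np

def get_point(H, point):
    """Apply a 3x3 homography H to a 2D point (x, y)."""
    x, y = point
    p = np.asarray(H, dtype=float) @ np.array([x, y, 1.0])
    return (p[0] / p[2], p[1] / p[2])  # de-homogenize
```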
S224: and obtaining the direction label according to the first central point coordinate and the central point coordinate of the new frame.
The direction labels include angle labels and distance labels.
The angle label is obtained according to the coordinate angle change between the first central point coordinate and the central point coordinate of the new frame;
and the distance label is obtained by calculation according to the size of the first interest area frame, the first center point coordinate and the angle label.
When the calibration plate in the first and second frame pictures moves in the horizontal plane, the angle label A_i, denoted A_i in FIG. 2, is obtained with formula (7):

A_i = arctan((PointB_ix - PointB_(i+1)'_x) / (PointB_iy - PointB_(i+1)'_y))    (7)

where PointB_ix and PointB_iy are the x and y coordinates of the first center point PointB_i, and PointB_(i+1)'_x and PointB_(i+1)'_y are the x and y coordinates of the new-frame center point PointB_(i+1)'.

According to the height B_i.height of the first region of interest border, the x coordinate PointB_ix of the first center point, and the angle label A_i, the distance label T_i for horizontal-plane movement of the calibration plate, denoted T in FIG. 2, is calculated with formula (8):

T_i = PointB_ix + tan(-A_i * pi) * B_i.height / 2    (8)
When the calibration plate in the first and second frame pictures moves in a vertical plane, the angle label A_i', denoted A_i' in FIG. 2, is obtained with formula (9):

A_i' = arctan((PointB_ix - PointB_(i+1)'_x) / (PointB_iy - PointB_(i+1)'_y))    (9)

According to the width B_i.width of the first region of interest border, the y coordinate PointB_iy of the first center point, and the angle label A_i', the distance label R_i for vertical-plane movement of the calibration plate, denoted R in FIG. 2, is calculated with formula (10):

R_i = PointB_iy + B_i.width / (2 * tan(-A_i' * pi))    (10)
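Formulas (7) through (10) can be transcribed directly. This sketch keeps the pi scaling inside tan exactly as the patent writes it, and assumes points are (x, y) tuples and box sizes are plain numbers:

```python
import math

def angle_label(center, new_center):
    """Formulas (7)/(9): angle between the first center point and the
    homography-projected new-frame center point."""
    dx = center[0] - new_center[0]
    dy = center[1] - new_center[1]
    return math.atan(dx / dy)  # dy assumed nonzero, as in the patent

def distance_label_horizontal(center, new_center, box_height):
    """Formula (8): distance label T_i for horizontal-plane movement."""
    a = angle_label(center, new_center)
    return center[0] + math.tan(-a * math.pi) * box_height / 2.0

def distance_label_vertical(center, new_center, box_width):
    """Formula (10): distance label R_i for vertical-plane movement."""
    a = angle_label(center, new_center)
    return center[1] + box_width / (2.0 * math.tan(-a * math.pi))
```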
Following steps S221 to S224, the direction labels corresponding to all frames in the video except the last frame (F_0, F_1, ..., F_(n-1)) are obtained, and these frames together with their corresponding direction labels form the calibrated data set.
S23: and training the constructed unmanned aerial vehicle autonomous obstacle avoidance neural network through the calibrated data set to obtain the trained unmanned aerial vehicle autonomous obstacle avoidance neural network.
When the unmanned aerial vehicle autonomous obstacle avoidance neural network is trained, obstacle features in the picture are extracted through the unmanned aerial vehicle autonomous obstacle avoidance neural network, then the unmanned aerial vehicle autonomous obstacle avoidance neural network is trained according to the direction labels obtained by the acquired change between the adjacent frame images, and the trained unmanned aerial vehicle autonomous obstacle avoidance neural network is obtained.
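The patent does not specify the network architecture or training procedure. As a generic stand-in only, a minimal softmax classifier over precomputed image features shows the shape of supervised training on (picture, direction label) pairs; the feature representation, the reduction of direction labels to six classes, and all names here are assumptions:

```python
import numpy as np

def train_direction_classifier(X, y, n_classes=6, lr=0.1, epochs=200, seed=0):
    """Minimal softmax-regression stand-in for the obstacle avoidance
    network: features X (n, d) -> direction class labels y (n,)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=0.01, size=(d, n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[y]
    for _ in range(epochs):
        logits = X @ W + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        grad = (p - onehot) / n          # cross-entropy gradient
        W -= lr * (X.T @ grad)
        b -= lr * grad.sum(axis=0)
    return W, b
```

A real implementation would use a convolutional network over the raw frames; this block only illustrates the label-supervised training loop.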
The unmanned aerial vehicle obstacle image is identified through the trained unmanned aerial vehicle autonomous obstacle avoidance neural network, the obstacle avoidance direction of the unmanned aerial vehicle can be output, the unmanned aerial vehicle is controlled through the obstacle avoidance direction, and the unmanned aerial vehicle effective obstacle avoidance is realized.
With the method disclosed in this embodiment, the obstacle avoidance behavior of the unmanned aerial vehicle is simulated by holding the calibration plate and avoiding obstacles in the environment, which allows obstacles to be avoided more flexibly. The movement of the calibration plate is recorded on video, and the pictures in the video are calibrated automatically, realizing fast automatic calibration of the data set and improving the accuracy and sensitivity of the calibration. The unmanned aerial vehicle autonomous obstacle avoidance neural network is trained with this data set, which improves the sensitivity and accuracy of the network and ultimately the accuracy of the drone's autonomous obstacle avoidance.
Example 2
In this embodiment, an unmanned aerial vehicle autonomous obstacle avoidance system based on visual simulation is disclosed, which includes:
the image acquisition module is used for acquiring an obstacle image of the unmanned aerial vehicle;
the autonomous obstacle avoidance module is used for acquiring the obstacle avoidance direction of the unmanned aerial vehicle according to the obstacle image of the unmanned aerial vehicle and the trained unmanned aerial vehicle autonomous obstacle avoidance neural network, and controlling the unmanned aerial vehicle to autonomously avoid the obstacle according to the obstacle avoidance direction;
wherein the unmanned aerial vehicle autonomous obstacle avoidance neural network is trained with a calibrated data set, and the process of obtaining the calibrated data set is:
moving the calibration plate to avoid obstacles in the environment and acquiring a video of the calibration plate in the moving process;
randomly selecting a first frame picture and a second frame picture which are adjacent from a video, and calculating a homography matrix of the two frames of pictures;
extracting a first interest area with a calibration plate from the first frame picture, extracting a second interest area with the calibration plate from the second frame picture, and calculating a first center point coordinate of the first interest area and a second center point coordinate of the second interest area;
acquiring the center point coordinate of a new frame according to the second center point coordinate and the homography matrix;
obtaining a direction label of the first frame of picture according to the first central point coordinate and the central point coordinate of the new frame;
after the direction labels corresponding to the other frame pictures except the last frame in the video are obtained, the other frame pictures except the last frame and the corresponding direction labels form a calibrated data set.
Example 3
In this embodiment, an electronic device is disclosed, which includes a memory, a processor, and computer instructions stored in the memory and executed on the processor, where the computer instructions, when executed by the processor, implement the steps of the unmanned aerial vehicle autonomous obstacle avoidance method based on visual simulation disclosed in embodiment 1.
Example 4
In this embodiment, a computer-readable storage medium is disclosed for storing computer instructions, which when executed by a processor, perform the steps of the unmanned aerial vehicle autonomous obstacle avoidance method based on visual simulation disclosed in embodiment 1.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting the same, and although the present invention is described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that: modifications and equivalents may be made to the embodiments of the invention without departing from the spirit and scope of the invention, which is to be covered by the claims.
Claims (10)
1. An unmanned aerial vehicle autonomous obstacle avoidance method based on visual simulation is characterized by comprising the following steps:
acquiring an unmanned aerial vehicle obstacle image;
acquiring the obstacle avoidance direction of the unmanned aerial vehicle according to the obstacle image of the unmanned aerial vehicle and the trained unmanned aerial vehicle autonomous obstacle avoidance neural network, and controlling the unmanned aerial vehicle to autonomously avoid the obstacle according to the obstacle avoidance direction;
wherein the unmanned aerial vehicle autonomous obstacle avoidance neural network is trained with a calibrated data set, and the process of obtaining the calibrated data set is:
moving the calibration plate to avoid obstacles in the environment and acquiring a video of the calibration plate in the moving process;
randomly selecting a first frame picture and a second frame picture which are adjacent from a video, and calculating a homography matrix of the two frames of pictures;
extracting a first interest area with a calibration plate from the first frame picture, extracting a second interest area with the calibration plate from the second frame picture, and calculating a first center point coordinate of the first interest area and a second center point coordinate of the second interest area;
acquiring the center point coordinate of a new frame according to the second center point coordinate and the homography matrix;
obtaining a direction label of the first frame of picture according to the first central point coordinate and the central point coordinate of the new frame;
after the direction labels corresponding to the other frame pictures except the last frame in the video are obtained, the other frame pictures except the last frame and the corresponding direction labels form a calibrated data set.
2. The unmanned aerial vehicle autonomous obstacle avoidance method based on visual simulation as claimed in claim 1, wherein a CSRT tracker is used to track the region of interest on the first frame picture, and obtain the region of interest on the second frame picture.
3. The unmanned aerial vehicle autonomous obstacle avoidance method based on visual simulation of claim 1, wherein the direction tag comprises an angle tag and a distance tag.
4. The unmanned aerial vehicle autonomous obstacle avoidance method based on visual simulation of claim 3, wherein the angle tag is obtained according to an angle change between the first center point coordinate and a center point coordinate of a new frame.
5. The unmanned aerial vehicle autonomous obstacle avoidance method based on visual simulation as claimed in claim 3, wherein a distance tag is obtained by calculation according to the first region of interest border size, the first center point coordinates and the angle tag.
6. The unmanned aerial vehicle autonomous obstacle avoidance method based on visual simulation as claimed in claim 1, wherein the first region of interest and the second region of interest are extracted in a rectangular frame form.
7. The unmanned aerial vehicle autonomous obstacle avoidance method based on visual simulation as claimed in claim 1, wherein obstacles in the environment are avoided by moving the handheld calibration plate.
8. An unmanned aerial vehicle autonomous obstacle avoidance system based on visual simulation, characterized by comprising:
the image acquisition module is used for acquiring an obstacle image of the unmanned aerial vehicle;
the autonomous obstacle avoidance module is used for acquiring the obstacle avoidance direction of the unmanned aerial vehicle according to the obstacle image of the unmanned aerial vehicle and the trained unmanned aerial vehicle autonomous obstacle avoidance neural network, and controlling the unmanned aerial vehicle to autonomously avoid the obstacle according to the obstacle avoidance direction;
wherein the unmanned aerial vehicle autonomous obstacle avoidance neural network is trained with a calibrated data set, and the process of obtaining the calibrated data set is:
moving the calibration plate to avoid obstacles in the environment and acquiring a video of the calibration plate in the moving process;
randomly selecting two adjacent frames from the video as a first frame picture and a second frame picture, and calculating a homography matrix between the two frame pictures;
extracting a first interest area with a calibration plate from the first frame picture, extracting a second interest area with the calibration plate from the second frame picture, and calculating a first center point coordinate of the first interest area and a second center point coordinate of the second interest area;
acquiring the center point coordinate of a new frame according to the second center point coordinate and the homography matrix;
obtaining a direction label of the first frame of picture according to the first central point coordinate and the central point coordinate of the new frame;
after the direction labels corresponding to all frame pictures in the video except the last frame are obtained, those frame pictures and their corresponding direction labels form the calibrated data set.
9. An electronic device, comprising a memory, a processor, and computer instructions stored in the memory and executable on the processor, wherein the computer instructions, when executed by the processor, perform the steps of the unmanned aerial vehicle autonomous obstacle avoidance method based on visual simulation of any one of claims 1-7.
10. A computer readable storage medium for storing computer instructions, which when executed by a processor, perform the steps of the unmanned aerial vehicle autonomous obstacle avoidance method based on visual simulation of any one of claims 1-7.
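The angle and distance tags of claims 3-5 can be read as a simple geometric computation on the two center points. The following sketch is illustrative only: the claims do not fix an exact distance formula, so the normalization by the ROI diagonal below is an assumption, and the function and variable names are not from the patent.

```python
import math


def angle_tag(c1, c_new):
    """Angle label: direction of the center-point change from the first
    frame's ROI center c1 toward the homography-projected center c_new
    (claim 4's "angle change between the first center point coordinate
    and the center point coordinate of the new frame")."""
    dx, dy = c_new[0] - c1[0], c_new[1] - c1[1]
    return math.degrees(math.atan2(dy, dx))


def distance_tag(roi1_size, c1, c_new):
    """Distance label: center-point displacement normalized by the first
    ROI's frame diagonal. Claim 5 only names the quantities that enter
    the calculation (ROI border size, first center point, angle tag);
    this normalization is an assumed stand-in, not the claimed formula."""
    w, h = roi1_size
    return math.hypot(c_new[0] - c1[0], c_new[1] - c1[1]) / math.hypot(w, h)
```

For a plate whose ROI center moves from (100, 100) to (103, 104), the angle tag is the direction of the (3, 4) displacement and the distance tag is that displacement's length scaled by the ROI diagonal.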
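The data-set calibration process in claim 8 (ROI centers of two adjacent frames, homography between them, projection of the second center into the first frame, direction label from the two centers) can be sketched as below. The homography is taken as given here; in practice it could be estimated from matched calibration-plate corners, e.g. with OpenCV's `cv2.findHomography`. All names are illustrative assumptions, not the patent's implementation.

```python
import numpy as np


def roi_center(roi):
    """Center point of an (x, y, w, h) region-of-interest rectangle."""
    x, y, w, h = roi
    return np.array([x + w / 2.0, y + h / 2.0])


def project_center(H, center):
    """Map a center point through the 3x3 homography H, i.e. project the
    second frame's center into the first frame's coordinates to obtain
    the "center point coordinate of the new frame"."""
    p = H @ np.array([center[0], center[1], 1.0])
    return p[:2] / p[2]


def direction_label(roi1, roi2, H):
    """Direction label of the first frame: angle (degrees) from the
    first ROI center toward the projected second-frame center."""
    c1 = roi_center(roi1)
    c_new = project_center(H, roi_center(roi2))
    d = c_new - c1
    return float(np.degrees(np.arctan2(d[1], d[0])))
```

With a pure-translation homography the projected center shifts by the translation, and the direction label is the angle of that shift; repeating this over every adjacent frame pair (except the last frame) yields the calibrated data set of (frame, direction label) pairs used to train the obstacle-avoidance neural network.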
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210681303.9A CN114911268A (en) | 2022-06-16 | 2022-06-16 | Unmanned aerial vehicle autonomous obstacle avoidance method and system based on visual simulation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114911268A true CN114911268A (en) | 2022-08-16 |
Family
ID=82771231
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210681303.9A Pending CN114911268A (en) | 2022-06-16 | 2022-06-16 | Unmanned aerial vehicle autonomous obstacle avoidance method and system based on visual simulation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114911268A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110245567A (en) * | 2019-05-16 | 2019-09-17 | 深圳前海达闼云端智能科技有限公司 | Barrier-avoiding method, device, storage medium and electronic equipment |
CN114359714A (en) * | 2021-12-15 | 2022-04-15 | 中国电子科技南湖研究院 | Unmanned body obstacle avoidance method and device based on event camera and intelligent unmanned body |
Non-Patent Citations (1)
Title |
---|
YUE FAN: "Learn by Observation: Imitation Learning for Drone Patrolling from Videos of A Human Navigator", 2020 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 10 February 2021 (2021-02-10), pages 5209 - 5216 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Asadi et al. | Vision-based integrated mobile robotic system for real-time applications in construction | |
Chaves et al. | NEEC research: Toward GPS-denied landing of unmanned aerial vehicles on ships at sea | |
Dijkshoorn | Simultaneous localization and mapping with the ar. drone | |
CN111401146A (en) | Unmanned aerial vehicle power inspection method, device and storage medium | |
CN114127806A (en) | System and method for enhancing visual output from a robotic device | |
Zou et al. | Real-time full-stack traffic scene perception for autonomous driving with roadside cameras | |
CN113597874B (en) | Weeding robot and weeding path planning method, device and medium thereof | |
US20190311209A1 (en) | Feature Recognition Assisted Super-resolution Method | |
CN112991534B (en) | Indoor semantic map construction method and system based on multi-granularity object model | |
Landgraf et al. | Interactive robotic fish for the analysis of swarm behavior | |
CN106155082B (en) | A kind of unmanned plane bionic intelligence barrier-avoiding method based on light stream | |
EP4088884A1 (en) | Method of acquiring sensor data on a construction site, construction robot system, computer program product, and training method | |
CN111958593B (en) | Vision servo method and system for inspection operation robot of semantic intelligent substation | |
CN113848931A (en) | Agricultural machinery automatic driving obstacle recognition method, system, equipment and storage medium | |
CN112947550A (en) | Illegal aircraft striking method based on visual servo and robot | |
CN111695497B (en) | Pedestrian recognition method, medium, terminal and device based on motion information | |
Zujevs et al. | An event-based vision dataset for visual navigation tasks in agricultural environments | |
CN111611869B (en) | End-to-end monocular vision obstacle avoidance method based on serial deep neural network | |
CN114911268A (en) | Unmanned aerial vehicle autonomous obstacle avoidance method and system based on visual simulation | |
CN111126170A (en) | Video dynamic object detection method based on target detection and tracking | |
Leung et al. | Toward a large-scale multimodal event-based dataset for neuromorphic deep learning applications | |
Dinaux et al. | FAITH: Fast iterative half-plane focus of expansion estimation using optic flow | |
WO2022004333A1 (en) | Information processing device, information processing system, information processing method, and program | |
JP2021177582A (en) | Control device, control method, and program | |
CN113781524A (en) | Target tracking system and method based on two-dimensional label |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||