CN114879729A - Unmanned aerial vehicle autonomous obstacle avoidance method based on obstacle contour detection algorithm - Google Patents


Info

Publication number
CN114879729A
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
image
obstacle
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210531544.5A
Other languages
Chinese (zh)
Inventor
符小卫
李环宇
谢国燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202210531544.5A priority Critical patent/CN114879729A/en
Publication of CN114879729A publication Critical patent/CN114879729A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/10Simultaneous control of position or course in three dimensions
    • G05D1/101Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D1/106Change initiated in response to external conditions, e.g. avoidance of elevated terrain or of no-fly zones
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T90/00Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation


Abstract

The invention discloses an unmanned aerial vehicle autonomous obstacle avoidance method based on an obstacle contour detection algorithm. First, the acquired image is subjected to filtering processing and color space conversion. Then, the converted image is subjected to thresholding and morphological processing. Edge detection and contour detection are performed on the processed image, the detection results are combined with the camera calibration data, and the barycenter coordinates of the obstacle in the world coordinate system are calculated, thereby obtaining the position information and contour information of the obstacle. Finally, the obstacle information is passed to a D* obstacle avoidance algorithm for real-time path solving until the autonomous obstacle avoidance function of the unmanned aerial vehicle is completed. The method has high real-time performance and high computational efficiency, and on its basis can be extended to autonomous obstacle avoidance of the unmanned aerial vehicle among dynamic obstacles and in real three-dimensional scenes.

Description

Unmanned aerial vehicle autonomous obstacle avoidance method based on obstacle contour detection algorithm
Technical Field
The invention belongs to the technical field of unmanned aerial vehicles, and particularly relates to an unmanned aerial vehicle autonomous obstacle avoidance method.
Background
With the rapid development of the unmanned aerial vehicle industry, the safety of unmanned aerial vehicles has received extensive attention, and autonomous obstacle avoidance is especially important in unknown environments. Lacking prior information about such environments, the unmanned aerial vehicle must sense and avoid obstacles on its own, and obstacle detection is a key link in this process. Current obstacle detection methods mainly include ultrasonic-based, infrared-based, laser-based, and machine-vision-based methods. A machine-vision-based method acquires an image with a camera and processes it with image processing algorithms to obtain information such as the contour, position, and depth of an obstacle. Unlike the first three methods, the machine-vision-based method acquires much richer information.
The machine-vision-based obstacle detection algorithms also differ significantly in applicability. The inter-frame difference method is only suitable for detecting dynamic obstacles and cannot meet the requirement of real-time detection; the optical flow estimation method needs to predict the detection area in advance and cannot detect a complete obstacle; the traditional obstacle contour detection algorithm is suitable for detecting characteristic obstacles but suffers from heavy noise influence and poor adaptability. Therefore, designing an obstacle detection algorithm with good adaptability, a small computational load, and real-time capability, and using it to complete the autonomous obstacle avoidance function of the unmanned aerial vehicle, is a technical problem to be solved by researchers in the field.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides an unmanned aerial vehicle autonomous obstacle avoidance method based on an obstacle contour detection algorithm. First, the acquired image is subjected to filtering processing and color space conversion. Then, the converted image is subjected to thresholding and morphological processing. Edge detection and contour detection are performed on the processed image, the detection results are combined with the camera calibration data, and the barycenter coordinates of the obstacle in the world coordinate system are calculated, thereby obtaining the position information and contour information of the obstacle. Finally, the obstacle information is passed to a D* obstacle avoidance algorithm for real-time path solving until the autonomous obstacle avoidance function of the unmanned aerial vehicle is completed. The method has high real-time performance and high computational efficiency, and on its basis can be extended to autonomous obstacle avoidance of the unmanned aerial vehicle among dynamic obstacles and in real three-dimensional scenes.
The technical scheme adopted by the invention for solving the technical problem comprises the following steps:
step 1: establishing an unmanned aerial vehicle model;
step 1-1: nobodyThe layout structure of the model is X-shaped, namely the included angle between the advancing direction and the adjacent support is 45 degrees; assuming that the model of the unmanned aerial vehicle is a rigid body, the brake of the unmanned aerial vehicle generates a force F and a torque tau, and the force and the torque of the unmanned aerial vehicle at the ith moment are respectively set as F i And τ i The calculation formula is as follows:
F_i = C_T ρ ω_max^2 D^4 u_i^2    (1)

τ_i = C_pow ρ ω_max^2 D^5 u_i^2 / (2π)    (2)
where C_T and C_pow respectively denote the rotor thrust coefficient and power coefficient, ρ the air density, D the rotor diameter, ω_max the maximum rotational angular velocity, and u_i the motor speed at the i-th moment;
step 1-2: calculating the next motion state of the unmanned aerial vehicle;
Let the speed of the unmanned aerial vehicle at the (m−1)-th moment be v_{m−1}, its position p_{m−1}, its acceleration a_{m−1}, and dt the time step; the position p_m and velocity v_m at the m-th moment are calculated as:
p_m = p_{m−1} + v_{m−1}·dt + (1/2)·a_{m−1}·dt^2    (3)

v_m = v_{m−1} + a_{m−1}·dt    (4)
where p_m is the position of the unmanned aerial vehicle at the m-th moment and v_m its velocity;
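The state update of step 1-2 is plain constant-acceleration integration over one time step; a minimal Python sketch (function and variable names are illustrative, not from the patent):

```python
def next_state(p_prev, v_prev, a_prev, dt):
    """Constant-acceleration update, eqs. (3)-(4):
    p_m = p_{m-1} + v_{m-1}*dt + 0.5*a_{m-1}*dt^2 and v_m = v_{m-1} + a_{m-1}*dt,
    applied component-wise to 3-D position/velocity/acceleration vectors."""
    p_m = [p + v * dt + 0.5 * a * dt ** 2 for p, v, a in zip(p_prev, v_prev, a_prev)]
    v_m = [v + a * dt for v, a in zip(v_prev, a_prev)]
    return p_m, v_m

# one 0.1 s step with illustrative state values
p, v = next_state([0.0, 0.0, -10.0], [1.0, 0.0, 0.0], [0.0, 0.5, 0.0], 0.1)
```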
step 2: determining a starting point and a target point of the unmanned aerial vehicle by using a world coordinate system;
Step 3: determining an unmanned aerial vehicle path search algorithm;
A D* path search algorithm is selected, with a heuristic function set as:
f(s)=h(s)+g(s) (5)
wherein h(s) represents the cost value from the current node to the target point, and g(s) represents the cost value from the current node to the starting point;
Suppose the position coordinates of the current node are (x_s, y_s), those of the starting point (x_start, y_start), and those of the target point (x_goal, y_goal); then h(s) and g(s) are respectively expressed as:
h(s) = √((x_s − x_goal)^2 + (y_s − y_goal)^2)    (6)

g(s) = √((x_s − x_start)^2 + (y_s − y_start)^2)    (7)
A path from the current node to the target node is generated by the D* path search algorithm;
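The node cost under the heuristic of eqs. (5)-(7) is simply the sum of two Euclidean distances; a sketch with illustrative names:

```python
import math

def f_cost(node, start, goal):
    """f(s) = h(s) + g(s), eqs. (5)-(7), for 2-D grid nodes."""
    h = math.hypot(node[0] - goal[0], node[1] - goal[1])    # eq. (6): cost to target
    g = math.hypot(node[0] - start[0], node[1] - start[1])  # eq. (7): cost from start
    return h + g
```

In the full search this f-value orders the open list: nodes with smaller f are expanded first.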
Step 4: establishing an obstacle contour detection algorithm based on color information;
step 4-1: acquiring an environment image through an airborne camera of the unmanned aerial vehicle;
step 4-2: performing smooth filtering processing on the acquired environment image in a mode of combining Gaussian filtering and median filtering, wherein the calculation formula of the Gaussian filtering is as follows:
G(Δx, Δy) = (1/(2πσ^2)) · exp(−(Δx^2 + Δy^2)/(2σ^2))    (8)
where G(·) is the two-dimensional Gaussian function, (Δx^2 + Δy^2) is the squared distance between a neighboring pixel and the central pixel, σ is the standard deviation of the two-dimensional normal distribution, and (Δx, Δy) is the neighborhood offset;
performing further filtering on the image subjected to smooth filtering by using median filtering;
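In practice step 4-2 maps directly onto OpenCV's cv2.GaussianBlur and cv2.medianBlur; the numpy sketch below spells out eq. (8) and the median step explicitly (kernel size and σ are illustrative choices):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Discrete kernel sampled from eq. (8), normalized so the weights sum to 1."""
    ax = np.arange(size) - size // 2
    dx, dy = np.meshgrid(ax, ax)                 # (Δx, Δy) neighborhood offsets
    g = np.exp(-(dx**2 + dy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return g / g.sum()

def median_filter(img, size=3):
    """Median filter with edge replication; removes impulse noise."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out
```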
step 4-3: converting an environment image of an RGB space into an HSV color space, wherein the calculation formula is as follows:
H = 60·(G − B)/(V − min(R,G,B))        if V = R
H = 120 + 60·(B − R)/(V − min(R,G,B))  if V = G    (9)
H = 240 + 60·(R − G)/(V − min(R,G,B))  if V = B

S = (V − min(R,G,B))/V if V ≠ 0, otherwise S = 0    (10)
V=max(R,G,B) (11)
where R, G, B respectively denote the values of the three color components in RGB space, and H, S, V respectively denote the hue, saturation, and value (brightness) in HSV space;
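A direct transcription of eqs. (9)-(11) for a single pixel, with R, G, B normalized to [0, 1] (cv2.cvtColor performs the same conversion on whole images):

```python
def rgb_to_hsv(r, g, b):
    """Eqs. (9)-(11): return (H in degrees, S, V) for r, g, b in [0, 1]."""
    v = max(r, g, b)                     # eq. (11)
    c = v - min(r, g, b)
    s = 0.0 if v == 0 else c / v         # eq. (10)
    if c == 0:                           # achromatic: hue undefined, use 0
        h = 0.0
    elif v == r:
        h = (60.0 * (g - b) / c) % 360.0
    elif v == g:
        h = 60.0 * (b - r) / c + 120.0
    else:
        h = 60.0 * (r - g) / c + 240.0
    return h, s, v
```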
Step 4-4: performing a binarization operation on the image, with thresholding by the Otsu algorithm; the specific calculation process is as follows:
Step 4-4-1: calculating the zeroth-order cumulative moment of the gray-level histogram:
zeroCuMo(q) = Σ_{k=0..q} histogram_I(k)    (12)
where histogram_I denotes the normalized gray-level histogram of the image, and histogram_I(k) the proportion of pixels in the image whose gray value equals k;
Step 4-4-2: calculating the first-order cumulative moment of the gray-level histogram:
oneCuMo(q) = Σ_{k=0..q} k·histogram_I(k)    (13)
Step 4-4-3: calculating the overall gray-level mean of the image:
mean=oneCuMo(255) (14)
step 4-4-4: dividing the image into a foreground image and a background image according to the gray characteristic, and calculating a threshold q which can enable the variance of the foreground image and the background image to be maximum; the following metric was used for the measure of variance:
σ^2(q) = (mean·zeroCuMo(q) − oneCuMo(q))^2 / (zeroCuMo(q)·(1 − zeroCuMo(q)))    (15)
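Steps 4-4-1 to 4-4-4 can be written compactly with cumulative sums; the sketch below implements eqs. (12)-(15) in numpy (cv2.threshold with the cv2.THRESH_OTSU flag gives an equivalent result):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu threshold via the cumulative moments of steps 4-4-1..4-4-4."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    hist = hist / hist.sum()                    # normalized histogram_I
    zero_cu = np.cumsum(hist)                   # zeroCuMo(q), eq. (12)
    one_cu = np.cumsum(np.arange(256) * hist)   # oneCuMo(q),  eq. (13)
    mean = one_cu[-1]                           # eq. (14): mean = oneCuMo(255)
    sigma2 = np.zeros(256)                      # between-class variance, eq. (15)
    valid = (zero_cu > 0) & (zero_cu < 1)       # skip empty fore/background classes
    sigma2[valid] = (mean * zero_cu[valid] - one_cu[valid]) ** 2 / (
        zero_cu[valid] * (1.0 - zero_cu[valid]))
    return int(np.argmax(sigma2))               # threshold q maximizing eq. (15)
```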
Step 4-5: performing the morphological operations of dilation and erosion on the image;
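Step 4-5's dilation and erosion (OpenCV: cv2.dilate / cv2.erode) reduce to a sliding-window maximum and minimum on a binary image; a small numpy sketch with a square 3 × 3 structuring element (an illustrative choice):

```python
import numpy as np

def dilate(binary, size=3):
    """Sliding-window maximum: grows foreground regions, fills small holes."""
    pad = size // 2
    padded = np.pad(binary, pad, mode="constant", constant_values=0)
    out = np.zeros_like(binary)
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].max()
    return out

def erode(binary, size=3):
    """Sliding-window minimum: shrinks foreground, removes isolated noise."""
    pad = size // 2
    padded = np.pad(binary, pad, mode="constant", constant_values=1)
    out = np.zeros_like(binary)
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].min()
    return out
```

Erosion followed by dilation (opening) removes speckle noise; dilation followed by erosion (closing) seals small gaps in an obstacle silhouette.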
Step 4-6: performing edge detection with the Canny operator, followed by contour detection:
step 4-6-1: smoothing noise of a non-edge area of the image;
step 4-6-2: calculating the amplitude and direction of the image gradient by using a Sobel operator;
Step 4-6-3: traversing the pixels one by one and judging whether the current pixel is a maximum of the gradient magnitude along the gradient direction; if so, the point is kept, otherwise it is set to zero;
step 4-6-4: carrying out threshold processing by using double thresholds to obtain edge points;
step 4-6-5: fitting the result after the edge detection with the foreground information of the image to approximately obtain the image contour;
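Steps 4-6-1 to 4-6-5 correspond to the standard Canny pipeline (OpenCV: cv2.Canny, then cv2.findContours for the contour step). The simplified numpy sketch below shows only the Sobel gradient of step 4-6-2 and the double-threshold classification of step 4-6-4; non-maximum suppression and edge linking are omitted for brevity:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T  # transpose gives the vertical-gradient kernel

def _filter3(img, k):
    """3x3 cross-correlation with zero padding."""
    padded = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + 3, j:j + 3] * k).sum()
    return out

def edge_map(gray, low, high):
    """Step 4-6-2: gradient magnitude; step 4-6-4: double thresholding."""
    gx = _filter3(gray, SOBEL_X)
    gy = _filter3(gray, SOBEL_Y)
    mag = np.hypot(gx, gy)
    strong = mag >= high              # definite edge points
    weak = (mag >= low) & ~strong     # kept only if linked to a strong edge
    return strong, weak
```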
Step 4-7: determining the internal parameter matrix M_1 and external parameter matrix M_2 of the onboard camera using Zhang Zhengyou's camera calibration method;
Step 4-8: solving the barycenter coordinates of the obstacle in the world coordinate system;
step 4-8-1: the calculation formula of the (i + j) order moment of the environment image is as follows:
M_ij = Σ_x Σ_y x^i y^j I(x, y)    (16)
wherein x and y represent the horizontal and vertical coordinates of the pixel points, and I (x, y) represents the pixel intensity corresponding to the pixel point with the coordinates (x, y);
Step 4-8-2: calculating the centroid coordinates in the pixel coordinate system from the zeroth-order image moment M_00 and the first-order image moments M_01 and M_10:
x_c = M_10 / M_00,  y_c = M_01 / M_00    (17)
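Steps 4-8-1 and 4-8-2 in numpy (OpenCV offers the same via cv2.moments on a contour or binary mask):

```python
import numpy as np

def centroid(binary):
    """Eqs. (16)-(17): centroid of a binary image from moments M00, M10, M01."""
    ys, xs = np.mgrid[0:binary.shape[0], 0:binary.shape[1]]
    m00 = binary.sum()             # zeroth-order moment: foreground area
    m10 = (xs * binary).sum()      # first-order moment in x
    m01 = (ys * binary).sum()      # first-order moment in y
    return m10 / m00, m01 / m00    # (x_c, y_c) in pixel coordinates
```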
Step 4-8-3: performing coordinate conversion to convert the centroid coordinates into the world coordinate system:
Z_C·[u, v, 1]^T = M_1·[X_C, Y_C, Z_C]^T,  M_1 = [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]]    (18)

[X_C, Y_C, Z_C]^T = R·[X_W, Y_W, Z_W]^T + t    (19)

Z_C·[u, v, 1]^T = M_1·M_2·[X_W, Y_W, Z_W, 1]^T,  M_2 = [R  t]    (20)
where u and v are coordinates in the pixel coordinate system, (X_C, Y_C, Z_C) are coordinates in the camera coordinate system, f_x and f_y denote the focal length expressed in pixel units along the x-axis and y-axis, u_0 and v_0 respectively denote the pixel offsets between the image center and the image origin in the x and y directions, and f is the camera focal length; R is a 3 × 3 rotation matrix, i.e. the matrix obtained by rotating the coordinate axes when converting from the pixel coordinate system to the world coordinate system; t is a translation vector, and (X_W, Y_W, Z_W) are the coordinates in the world coordinate system;
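Given the calibration results of step 4-7, the centroid pixel can be back-projected to world coordinates once a depth estimate Z_C is available (pixel coordinates alone do not fix the depth). A sketch under the model of eqs. (18)-(20), assuming camera coordinates relate to world coordinates by P_C = R·P_W + t; all numeric values in the example are illustrative:

```python
import numpy as np

def pixel_to_world(u, v, z_c, fx, fy, u0, v0, R, t):
    """Invert eqs. (18)-(19): pixel (u, v) with known depth z_c -> world coordinates."""
    x_c = (u - u0) * z_c / fx          # eq. (18) solved for X_C
    y_c = (v - v0) * z_c / fy          # eq. (18) solved for Y_C
    p_c = np.array([x_c, y_c, z_c])
    return R.T @ (p_c - t)             # eq. (19) inverted (R is orthonormal)

# illustrative intrinsics: 400 px focal length, principal point (320, 240)
p_w = pixel_to_world(420.0, 240.0, 2.0, 400.0, 400.0, 320.0, 240.0,
                     np.eye(3), np.zeros(3))
```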
Step 5: the unmanned aerial vehicle starts from the starting point and moves along the initial path generated in step 3; meanwhile, the onboard camera performs real-time detection with the obstacle contour detection algorithm of step 4. If an unknown obstacle appears on the path, whether it affects the flight of the unmanned aerial vehicle is judged from the obstacle's position information and contour information; if it does, a new autonomous obstacle avoidance path from the current point to the target point is generated with the path search algorithm of step 3;
This process is repeated until the unmanned aerial vehicle reaches the target point, completing the whole autonomous obstacle avoidance process.
The invention has the following beneficial effects:
according to the unmanned aerial vehicle obstacle avoidance method, under the condition that the environment is not completely known, the unmanned aerial vehicle can adopt an obstacle detection algorithm to detect and obtain information such as the position and the outline of an unknown obstacle in real time, and the unmanned aerial vehicle can achieve the target of unmanned aerial vehicle autonomous obstacle avoidance under the condition that the environment is unknown by combining with a D-path searching algorithm. The method is high in real-time performance and high in calculation efficiency, and can be popularized to the autonomous obstacle avoidance of the unmanned aerial vehicle under dynamic obstacles and the autonomous obstacle avoidance of the unmanned aerial vehicle under a real three-dimensional scene based on the method.
Drawings
Fig. 1 is a general flow chart of the unmanned aerial vehicle autonomous obstacle avoidance method based on the obstacle contour detection algorithm.
Fig. 2 is a diagram of the unmanned aerial vehicle simulation model used in AirSim according to an embodiment of the present invention.
FIG. 3 is a top view of a simulation environment built in AirSim according to an embodiment of the present invention.
Fig. 4 is a diagram illustrating the effect of the obstacle contour detection algorithm at a certain time according to an embodiment of the present invention.
Fig. 5 is a path result diagram of autonomous obstacle avoidance by the unmanned aerial vehicle according to the embodiment of the present invention.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
The invention provides an unmanned aerial vehicle autonomous obstacle avoidance method based on an improved obstacle contour detection algorithm. First, the acquired image is subjected to filtering processing and color space conversion. Then, the converted image is subjected to thresholding and morphological processing. Edge detection and contour detection are performed on the processed image, the detection results are combined with the camera calibration data, and the barycenter coordinates of the obstacle in the world coordinate system are calculated, thereby obtaining the position information and contour information of the obstacle. Finally, the obstacle information is passed to a D* obstacle avoidance algorithm for real-time path solving until the autonomous obstacle avoidance function of the unmanned aerial vehicle is completed.
The simulation environment is as follows: Windows 10 operating system, AirSim simulation platform.
The invention considers a three-dimensional map model with a planar coordinate system. Suppose there is one drone carrying its own vision camera, as shown in fig. 2. The simulation map measures 100 m × 100 m; gray obstacles represent obstacles with known positions and contours, while orange and white obstacles represent unknown obstacles. The environment is built as shown in fig. 3. Assume that the position coordinates of the drone are (70, 20) and the coordinates of the target point are (70, 80).
As shown in FIG. 1, the method comprises the following specific steps in AirSim environment:
step 1: establishing an unmanned aerial vehicle model;
Step 1-1: the layout structure of the unmanned aerial vehicle model is X-shaped, i.e. the included angle between the advancing direction and the adjacent support arm is 45 degrees; assuming that the unmanned aerial vehicle model is a rigid body and that its actuators generate a force F and a torque τ, let the force and torque of the unmanned aerial vehicle at the i-th moment be F_i and τ_i respectively, calculated as follows:
F_i = C_T ρ ω_max^2 D^4 u_i^2    (1)

τ_i = C_pow ρ ω_max^2 D^5 u_i^2 / (2π)    (2)
where C_T and C_pow respectively denote the rotor thrust coefficient and power coefficient, ρ the air density, D the rotor diameter, ω_max the maximum rotational angular velocity, and u_i the motor speed at the i-th moment;
step 1-2: calculating the next motion state of the unmanned aerial vehicle;
Let the speed of the unmanned aerial vehicle at the (m−1)-th moment be v_{m−1}, its position p_{m−1}, its acceleration a_{m−1}, and dt the time step; the position p_m and velocity v_m at the m-th moment are calculated as:
p_m = p_{m−1} + v_{m−1}·dt + (1/2)·a_{m−1}·dt^2    (3)

v_m = v_{m−1} + a_{m−1}·dt    (4)
where p_m is the position of the unmanned aerial vehicle at the m-th moment and v_m its velocity;
step 2: determining a starting point and a target point of the unmanned aerial vehicle by using a world coordinate system;
Step 3: determining an unmanned aerial vehicle path search algorithm;
A D* path search algorithm is selected, with a heuristic function set as:
f(s)=h(s)+g(s) (5)
wherein h(s) represents the cost value from the current node to the target point, and g(s) represents the cost value from the current node to the starting point;
Suppose the position coordinates of the current node are (x_s, y_s), those of the starting point (x_start, y_start), and those of the target point (x_goal, y_goal); then h(s) and g(s) are respectively expressed as:
h(s) = √((x_s − x_goal)^2 + (y_s − y_goal)^2)    (6)

g(s) = √((x_s − x_start)^2 + (y_s − y_start)^2)    (7)
A path from the current node to the target node is generated by the D* path search algorithm;
Step 4: establishing an obstacle contour detection algorithm based on color information;
step 4-1: acquiring an environment image through an airborne camera of the unmanned aerial vehicle;
step 4-2: performing smooth filtering processing on the acquired environment image in a mode of combining Gaussian filtering and median filtering, wherein the calculation formula of the Gaussian filtering is as follows:
G(Δx, Δy) = (1/(2πσ^2)) · exp(−(Δx^2 + Δy^2)/(2σ^2))    (8)
where G(·) is the two-dimensional Gaussian function, (Δx^2 + Δy^2) is the squared distance between a neighboring pixel and the central pixel, σ is the standard deviation of the two-dimensional normal distribution, and (Δx, Δy) is the neighborhood offset;
The smoothed image is further filtered with a median filter, which eliminates noise while retaining the contour information of the image to the greatest extent;
Step 4-3: converting an environment image of an RGB space into an HSV color space, wherein the calculation formula is as follows:
H = 60·(G − B)/(V − min(R,G,B))        if V = R
H = 120 + 60·(B − R)/(V − min(R,G,B))  if V = G    (9)
H = 240 + 60·(R − G)/(V − min(R,G,B))  if V = B

S = (V − min(R,G,B))/V if V ≠ 0, otherwise S = 0    (10)
V=max(R,G,B) (11)
where R, G, B respectively denote the values of the three color components in RGB space, and H, S, V respectively denote the hue, saturation, and value (brightness) in HSV space;
Step 4-4: performing a binarization operation on the image, with thresholding by the Otsu algorithm; the specific calculation process is as follows:
Step 4-4-1: calculating the zeroth-order cumulative moment of the gray-level histogram:
zeroCuMo(q) = Σ_{k=0..q} histogram_I(k)    (12)
where histogram_I denotes the normalized gray-level histogram of the image, and histogram_I(k) the proportion of pixels in the image whose gray value equals k;
Step 4-4-2: calculating the first-order cumulative moment of the gray-level histogram:
oneCuMo(q) = Σ_{k=0..q} k·histogram_I(k)    (13)
Step 4-4-3: calculating the overall gray-level mean of the image:
mean=oneCuMo(255) (14)
step 4-4-4: dividing the image into a foreground image and a background image according to the gray characteristic, and calculating a threshold q which can enable the variance of the foreground image and the background image to be maximum; the following metric was used for the measure of variance:
σ^2(q) = (mean·zeroCuMo(q) − oneCuMo(q))^2 / (zeroCuMo(q)·(1 − zeroCuMo(q)))    (15)
Step 4-5: performing the morphological operations of dilation and erosion on the image;
Step 4-6: performing edge detection with the Canny operator, followed by contour detection:
step 4-6-1: smoothing noise of a non-edge area of the image;
step 4-6-2: calculating the amplitude and direction of the image gradient by using a Sobel operator;
Step 4-6-3: traversing the pixels one by one and judging whether the current pixel is a maximum of the gradient magnitude along the gradient direction; if so, the point is kept, otherwise it is set to zero;
step 4-6-4: carrying out threshold processing by using double thresholds to obtain edge points;
step 4-6-5: fitting the result after the edge detection with the foreground information of the image to approximately obtain the image contour;
Step 4-7: determining the internal parameter matrix M_1 and external parameter matrix M_2 of the onboard camera using Zhang Zhengyou's camera calibration method;
Step 4-8: solving the barycenter coordinates of the obstacle in the world coordinate system;
step 4-8-1: the calculation formula of the (i + j) order moment of the environment image is as follows:
M_ij = Σ_x Σ_y x^i y^j I(x, y)    (16)
wherein x and y represent the horizontal and vertical coordinates of the pixel points, and I (x, y) represents the pixel intensity corresponding to the pixel point with the coordinates (x, y);
Step 4-8-2: calculating the centroid coordinates in the pixel coordinate system from the zeroth-order image moment M_00 and the first-order image moments M_01 and M_10:
x_c = M_10 / M_00,  y_c = M_01 / M_00    (17)
Step 4-8-3: performing coordinate conversion to convert the centroid coordinates into the world coordinate system:
Z_C·[u, v, 1]^T = M_1·[X_C, Y_C, Z_C]^T,  M_1 = [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]]    (18)

[X_C, Y_C, Z_C]^T = R·[X_W, Y_W, Z_W]^T + t    (19)

Z_C·[u, v, 1]^T = M_1·M_2·[X_W, Y_W, Z_W, 1]^T,  M_2 = [R  t]    (20)
where u and v are coordinates in the pixel coordinate system, (X_C, Y_C, Z_C) are coordinates in the camera coordinate system, f_x and f_y denote the focal length expressed in pixel units along the x-axis and y-axis, u_0 and v_0 respectively denote the pixel offsets between the image center and the image origin in the x and y directions, and f is the camera focal length; R is a 3 × 3 rotation matrix, i.e. the matrix obtained by rotating the coordinate axes when converting from the pixel coordinate system to the world coordinate system; t is a translation vector, and (X_W, Y_W, Z_W) are the coordinates in the world coordinate system; the matrix M_1 is the internal parameter matrix of the camera and the matrix M_2 its external parameter matrix, both obtained through camera calibration;
the unmanned aerial vehicle can obtain the position information and the contour information of an unknown obstacle;
Step 5: the unmanned aerial vehicle starts from the starting point and moves along the initial path generated in step 3; meanwhile, the onboard camera performs real-time detection with the obstacle contour detection algorithm of step 4. If an unknown obstacle appears on the path, whether it affects the flight of the unmanned aerial vehicle is judged from the obstacle's position information and contour information; if it does, a new autonomous obstacle avoidance path from the current point to the target point is generated with the path search algorithm of step 3;
This process is repeated until the unmanned aerial vehicle reaches the target point, completing the whole autonomous obstacle avoidance process.
In summary, the invention determines the contour information and position information of unknown obstacles with the obstacle contour detection algorithm; fig. 4 shows the contour result at one moment of the obstacle detection process, demonstrating that the required information about unknown obstacles can be provided to the unmanned aerial vehicle's autonomous obstacle avoidance algorithm quickly and accurately. When the unmanned aerial vehicle encounters an unknown obstacle while following its path, it can reliably generate a path from the current node to the target node; fig. 5 shows a feasible path generated by the unmanned aerial vehicle with the autonomous obstacle avoidance algorithm, verifying the real-time performance and feasibility of the algorithm. The method is simple, robust in real time, and achieves autonomous obstacle avoidance of the unmanned aerial vehicle.

Claims (1)

1. An unmanned aerial vehicle autonomous obstacle avoidance method based on an obstacle contour detection algorithm is characterized by comprising the following steps:
step 1: establishing an unmanned aerial vehicle model;
Step 1-1: the layout structure of the unmanned aerial vehicle model is X-shaped, i.e. the included angle between the advancing direction and the adjacent support arm is 45 degrees; assuming that the unmanned aerial vehicle model is a rigid body, its actuators generate a force F and a torque τ; let the force and torque of the unmanned aerial vehicle at the i-th moment be F_i and τ_i respectively, calculated as follows:
F_i = C_T ρ ω_max^2 D^4 u_i^2    (1)

τ_i = C_pow ρ ω_max^2 D^5 u_i^2 / (2π)    (2)
where C_T and C_pow respectively denote the rotor thrust coefficient and power coefficient, ρ the air density, D the rotor diameter, ω_max the maximum rotational angular velocity, and u_i the motor speed at the i-th moment;
step 1-2: calculating the next motion state of the unmanned aerial vehicle;
Let the speed of the unmanned aerial vehicle at the (m−1)-th moment be v_{m−1}, its position p_{m−1}, its acceleration a_{m−1}, and dt the time step; the position p_m and velocity v_m at the m-th moment are calculated as:
p_m = p_{m−1} + v_{m−1}·dt + (1/2)·a_{m−1}·dt^2    (3)

v_m = v_{m−1} + a_{m−1}·dt    (4)
where p_m is the position of the unmanned aerial vehicle at the m-th moment and v_m its velocity;
step 2: determining a starting point and a target point of the unmanned aerial vehicle by using a world coordinate system;
Step 3: determining an unmanned aerial vehicle path search algorithm;
A D* path search algorithm is selected, with a heuristic function set as:
f(s)=h(s)+g(s) (5)
wherein h(s) represents the cost value from the current node to the target point, and g(s) represents the cost value from the current node to the starting point;
Suppose the position coordinates of the current node are (x_s, y_s), those of the starting point (x_start, y_start), and those of the target point (x_goal, y_goal); then h(s) and g(s) are respectively expressed as:
h(s) = √((x_s − x_goal)^2 + (y_s − y_goal)^2)    (6)

g(s) = √((x_s − x_start)^2 + (y_s − y_start)^2)    (7)
A path from the current node to the target node is generated by the D* path search algorithm;
Step 4: establishing an obstacle contour detection algorithm based on color information;
step 4-1: acquiring an environment image through an airborne camera of the unmanned aerial vehicle;
step 4-2: performing smooth filtering processing on the acquired environment image in a mode of combining Gaussian filtering and median filtering, wherein the calculation formula of the Gaussian filtering is as follows:
G(Δx, Δy) = (1/(2πσ^2)) · exp(−(Δx^2 + Δy^2)/(2σ^2))    (8)
where G(·) is the two-dimensional Gaussian function, (Δx^2 + Δy^2) is the squared distance between a neighboring pixel and the central pixel, σ is the standard deviation of the two-dimensional normal distribution, and (Δx, Δy) is the neighborhood offset;
performing further filtering on the image subjected to smooth filtering by using median filtering;
step 4-3: converting an environment image of an RGB space into an HSV color space, wherein the calculation formula is as follows:
H = 60·(G − B)/(V − min(R,G,B))        if V = R
H = 120 + 60·(B − R)/(V − min(R,G,B))  if V = G    (9)
H = 240 + 60·(R − G)/(V − min(R,G,B))  if V = B

S = (V − min(R,G,B))/V if V ≠ 0, otherwise S = 0    (10)
V=max(R,G,B) (11)
where R, G, B respectively denote the values of the three color components in RGB space, and H, S, V respectively denote the hue, saturation, and value (brightness) in HSV space;
Step 4-4: performing a binarization operation on the image, with thresholding by the Otsu algorithm; the specific calculation process is as follows:
Step 4-4-1: calculating the zeroth-order cumulative moment of the gray-level histogram:
zeroCuMo(q) = Σ_{k=0..q} histogram_I(k)    (12)
where histogram_I denotes the normalized gray-level histogram of the image, and histogram_I(k) the proportion of pixels in the image whose gray value equals k;
step 4-4-2: calculating the cumulative first-order moment of the gray histogram:

oneCuMo(k) = Σ_{i=0}^{k} i · histogram_I(i)    (13)
step 4-4-3: calculating the overall mean gray level of the image:
mean=oneCuMo(255) (14)
step 4-4-4: dividing the image into a foreground image and a background image according to the gray characteristics, and finding the threshold q that maximizes the variance between foreground and background; the variance is measured as:

σ²(q) = [mean · zeroCuMo(q) − oneCuMo(q)]² / {zeroCuMo(q) · [1 − zeroCuMo(q)]}    (15)
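Steps 4-4-1 through 4-4-4 can be sketched as a single pass over a 256-bin normalized histogram; the cumulative moments are built incrementally and the between-class variance of eq. (15) is evaluated at every candidate threshold (the function name is illustrative):

```python
def otsu_threshold(hist):
    # hist: 256-bin normalized gray histogram (bins sum to 1).
    zero = [0.0] * 256          # zeroCuMo(k), eq. (12)
    one = [0.0] * 256           # oneCuMo(k),  eq. (13)
    acc0 = acc1 = 0.0
    for k in range(256):
        acc0 += hist[k]
        acc1 += k * hist[k]
        zero[k], one[k] = acc0, acc1
    mean = one[255]             # overall mean gray level, eq. (14)
    best_q, best_var = 0, 0.0
    for q in range(256):
        w = zero[q]
        if w <= 0.0 or w >= 1.0:
            continue            # one class would be empty
        # Between-class variance, eq. (15).
        var = (mean * w - one[q]) ** 2 / (w * (1.0 - w))
        if var > best_var:
            best_var, best_q = var, q
    return best_q
```

For a bimodal histogram with mass concentrated at gray levels 50 and 200, any threshold between the two modes maximizes the variance, so the returned q falls in that gap.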
step 4-5: performing morphological dilation and erosion on the image;
step 4-6: performing edge detection and contour extraction with the Canny operator:
step 4-6-1: smoothing noise of a non-edge area of the image;
step 4-6-2: calculating the amplitude and direction of the image gradient by using a Sobel operator;
step 4-6-3: traversing the pixels one by one and judging whether the current pixel is a local maximum of the gradient magnitude along the gradient direction; if so, the pixel is kept, otherwise it is set to zero;
step 4-6-4: carrying out threshold processing by using double thresholds to obtain edge points;
step 4-6-5: fitting the result after the edge detection with the foreground information of the image to approximately obtain the image contour;
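Step 4-6-2 (gradient amplitude and direction via the Sobel operator) can be sketched for a single interior pixel as follows; applying it over the whole image, then non-maximum suppression and double thresholding, yields the Canny edge map described above (the helper name is illustrative):

```python
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_gradient(img, x, y):
    # Convolve the 3x3 Sobel kernels at interior pixel (x, y) and return
    # the gradient amplitude and direction (radians, atan2 convention).
    gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
             for j in range(3) for i in range(3))
    gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
             for j in range(3) for i in range(3))
    return math.hypot(gx, gy), math.atan2(gy, gx)
```

On a vertical step edge the horizontal response gx dominates and the direction is 0, i.e. the gradient points across the edge.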
step 4-7: determining the intrinsic parameter matrix M_1 and the extrinsic parameter matrix M_2 of the onboard camera by Zhang Zhengyou's camera calibration method;
step 4-8: solving the centroid coordinates of the obstacle in the world coordinate system;
step 4-8-1: the (i + j)-th order moment of the environment image is calculated as:

M_ij = Σ_x Σ_y x^i · y^j · I(x, y)    (16)

wherein x and y represent the horizontal and vertical coordinates of a pixel, and I(x, y) represents the pixel intensity at coordinates (x, y);
step 4-8-2: calculating the centroid coordinates in the pixel coordinate system from the zeroth-order image moment M_00 and the first-order image moments M_01 and M_10:

x_c = M_10 / M_00,  y_c = M_01 / M_00    (17)
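Equations (16) and (17) can be sketched directly on a grayscale image stored as nested lists (function names are illustrative):

```python
def raw_moment(img, i, j):
    # M_ij = sum over all pixels of x^i * y^j * I(x, y), eq. (16).
    return sum((x ** i) * (y ** j) * v
               for y, row in enumerate(img)
               for x, v in enumerate(row))

def centroid(img):
    # Centroid from the zeroth- and first-order moments, eq. (17).
    m00 = raw_moment(img, 0, 0)
    return raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00
```

A uniform 2 × 2 patch has its centroid at the geometric center (0.5, 0.5), as expected for a symmetric intensity distribution.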
step 4-8-3: performing coordinate conversion to transform the centroid coordinates into the world coordinate system:

[u, v, 1]ᵀ = [[1/f_x, 0, u_0], [0, 1/f_y, v_0], [0, 0, 1]] · [x, y, 1]ᵀ    (18)

Z_C · [x, y, 1]ᵀ = [[f, 0, 0, 0], [0, f, 0, 0], [0, 0, 1, 0]] · [X_C, Y_C, Z_C, 1]ᵀ    (19)

[X_C, Y_C, Z_C, 1]ᵀ = [[R, T], [0ᵀ, 1]] · [X_W, Y_W, Z_W, 1]ᵀ    (20)

wherein u and v are coordinates in the pixel coordinate system, (X_C, Y_C, Z_C) are coordinates in the camera coordinate system, f_x and f_y denote the physical size of a pixel along the x-axis and the y-axis respectively, u_0 and v_0 respectively represent the pixel offsets in the x and y directions between the image center and the image origin, and f is the focal length of the camera; R is a 3 × 3 rotation matrix, namely the matrix obtained by rotating the coordinate axes when converting between the pixel coordinate system and the world coordinate system; T is a translation vector; and (X_W, Y_W, Z_W) are coordinates in the world coordinate system;
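The inverse of the chain above (pixel back to world, given the camera-frame depth Z_C) can be sketched as follows. This is a minimal sketch with simplifying assumptions: f_x and f_y are here treated as focal lengths expressed in pixels (i.e. f divided by the physical pixel size), R and T are the camera extrinsics with X_C = R·X_W + T, and the function name is illustrative:

```python
def pixel_to_world(u, v, z_c, fx, fy, u0, v0, R, T):
    # Invert the intrinsic projection: pixel (u, v) at depth z_c -> camera frame.
    x_c = (u - u0) * z_c / fx
    y_c = (v - v0) * z_c / fy
    cam = (x_c, y_c, z_c)
    # Invert the extrinsics: X_W = R^T (X_C - T), since X_C = R X_W + T.
    d = [cam[i] - T[i] for i in range(3)]
    return tuple(sum(R[i][k] * d[i] for i in range(3)) for k in range(3))
```

With identity rotation, zero translation, unit focal lengths, and principal point at the origin, the world point simply reproduces the camera-frame ray scaled by depth.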
step 5: the unmanned aerial vehicle starts from the starting point and moves along the initial path generated in step 3; meanwhile, the onboard camera performs real-time detection using the obstacle contour detection algorithm of step 4; if an unknown obstacle appears on the path, whether it affects the flight of the unmanned aerial vehicle is judged from the position and contour information of the obstacle, and if it does, a new autonomous obstacle avoidance path from the current point to the target point is generated using the path search algorithm of step 3;
this process is repeated until the unmanned aerial vehicle reaches the target point, completing the whole autonomous obstacle avoidance process.
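The step-5 control flow can be sketched as a detect-and-replan loop; every callable here (planner, detector, collision test, motion step) is a hypothetical placeholder for the corresponding step-3 and step-4 components, not an API from the patent:

```python
def avoid_obstacles(start, goal, plan_path, detect_obstacle, blocks_flight, move_along):
    # Follow the planned path; whenever a detected obstacle would block the
    # remaining flight, replan from the current position (step 3), then continue.
    pos = start
    path = plan_path(pos, goal)          # initial path, step 3
    while pos != goal:
        obstacle = detect_obstacle()     # step-4 contour detection
        if obstacle is not None and blocks_flight(obstacle, path):
            path = plan_path(pos, goal)  # replanned avoidance path
        pos = move_along(path, pos)      # advance one step along the path
    return pos
```

The loop terminates when the vehicle position coincides with the target point, mirroring the claim's "repeat until the target point is reached".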
CN202210531544.5A 2022-05-16 2022-05-16 Unmanned aerial vehicle autonomous obstacle avoidance method based on obstacle contour detection algorithm Pending CN114879729A (en)

Publication: CN114879729A, published 2022-08-09.



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination