CN111367318A - Dynamic obstacle environment navigation method and device based on visual semantic information - Google Patents


Info

Publication number
CN111367318A
CN111367318A
Authority
CN
China
Prior art keywords
dynamic
module
semantic information
picture
rotor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010242906.XA
Other languages
Chinese (zh)
Other versions
CN111367318B (en)
Inventor
唐漾
和望利
杜文莉
钱锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
East China University of Science and Technology
Original Assignee
East China University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by East China University of Science and Technology filed Critical East China University of Science and Technology
Priority to CN202010242906.XA priority Critical patent/CN111367318B/en
Publication of CN111367318A publication Critical patent/CN111367318A/en
Application granted granted Critical
Publication of CN111367318B publication Critical patent/CN111367318B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10 Simultaneous control of position or course in three dimensions
    • G05D1/101 Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/08 Control of attitude, i.e. control of roll, pitch, or yaw
    • G05D1/0808 Control of attitude, i.e. control of roll, pitch, or yaw specially adapted for aircraft

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention relates to the technical field of multi-rotor unmanned aerial vehicle control and navigation, in particular to a dynamic obstacle environment navigation method and device based on visual semantic information. The invention provides a dynamic obstacle environment navigation method based on visual semantic information, which comprises the following steps: S1, segmenting and analyzing semantic information in the scene of the original picture by using a probability regression prediction method based on a deep neural network model, detecting and identifying obstacles in the scene in real time, and establishing an obstacle map; S2, using a heuristic hierarchical search method, performing a path search based on the motion primitive sequence of the multi-rotor dynamics model, and performing dynamic path planning on the established obstacle map; and S3, performing fast-response trajectory tracking control by using a Lie algebra controller. By using visual semantic information, the invention enables the multi-rotor unmanned aerial vehicle to complete operation tasks with high efficiency and high dynamic performance in complex dynamic scenes.

Description

Dynamic obstacle environment navigation method and device based on visual semantic information
Technical Field
The invention relates to the technical field of multi-rotor unmanned aerial vehicle control and navigation, in particular to a dynamic obstacle environment navigation method and device based on visual semantic information.
Background
The quad-rotor unmanned aerial vehicle has the characteristics of small size, flexible flight, strong hovering capability and good maneuverability, and has attracted wide attention in many fields of engineering application.
A quad-rotor aircraft equipped with an onboard vision sensor is an ideal platform for autonomous navigation tasks, executes missions in complex dynamic environments, and has become an important focus in the application and research of unmanned aerial equipment.
Application scenarios of the quad-rotor unmanned aerial vehicle with an onboard vision sensor include scene exploration, danger investigation, indoor and outdoor mapping, environment interaction, search and rescue tasks, and the like.
The quad-rotor unmanned aerial vehicle consists of two pairs of counter-rotating rotors and propellers, and has system characteristics such as under-actuation, nonlinearity and strong coupling. Some drone controllers have been developed for near-hover conditions, using backstepping controllers, sliding mode techniques and the Dijkstra algorithm, without considering path planning methods for complex dynamics models.
However, when the environment is complex and the obstacles are dynamic, the quad-rotor system exhibits strong nonlinearity; moreover, these methods assume known environmental information, so how to accurately detect environmental information in real time is also a problem requiring close attention.
The existing navigation methods for quad-rotor unmanned aerial vehicles have the following defects:
1) visual semantic information is not fully utilized, the efficiency of judging dynamic obstacles is insufficient, and the processing efficiency of semantic segmentation methods based on deep learning needs to be improved;
2) the trajectory planning considers only a simplified unmanned aerial vehicle model, so the unmanned aerial vehicle cannot pass through complex obstacles;
3) controller designs based on the Euler angle attitude representation suffer from the gimbal lock defect of Euler angles, which limits the control performance of the controller.
Disclosure of Invention
The invention aims to provide a dynamic obstacle environment navigation method and device based on visual semantic information, solving the problems of low efficiency and poor control performance of prior-art unmanned aerial vehicle navigation methods in complex dynamic scenes.
In order to achieve this aim, the invention provides a dynamic obstacle environment navigation method based on visual semantic information, which comprises the following steps:
S1, segmenting and analyzing semantic information in the scene of the original picture by using a probability regression prediction method based on a deep neural network model, detecting and identifying obstacles in the scene in real time, and establishing an obstacle map;
S2, using a heuristic hierarchical search method, performing a path search based on the motion primitive sequence of the multi-rotor dynamics model, and performing dynamic path planning on the established obstacle map;
and S3, performing fast-response trajectory tracking control by using a Lie algebra controller.
In one embodiment, the step S1 further includes the following steps:
s11, zooming the original picture to a specified size and carrying out block segmentation;
s12, inputting the segmented picture into a deep neural network model for probability regression prediction, and outputting predicted values of the position, the size and the confidence coefficient of an obstacle in the picture scene;
and S13, removing redundant predicted values according to a non-maximum value inhibition method to obtain a target detection result, wherein the target detection result is represented by a boundary frame body and an object type.
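Step S13's removal of redundant predictions can be sketched as standard greedy non-maximum suppression over axis-aligned boxes. This is a minimal illustration under assumed conventions (boxes as (x1, y1, x2, y2) tuples with scores); the patent does not specify the box format:

```python
def iou(a, b):
    # Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    # Greedy NMS: keep the highest-scoring boxes, discard any box that
    # overlaps an already-kept box above the threshold.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[k]) < iou_thresh for k in keep):
            keep.append(i)
    return keep
```

The surviving indices correspond to the bounding frames reported together with the object type.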
In an embodiment, the prediction parameters of the probability regression prediction method for the deep neural network model in step S1 further include:
the distance coordinates between the picture pixel point and the center of the predicted object;
the ratio of the predicted object size to the picture size;
the obstacle prediction confidence, including the accuracy of the predicted region size and the confidence in the predicted target.
In an embodiment, the loss function of the deep neural network model in step S1 is the sum of a center point position deviation loss term, a selected-frame width and height deviation loss term, and a prediction confidence deviation loss term:
the center point position deviation loss term is:

$$L_{xy}=\lambda_{coord}\sum_{i=1}^{S}\sum_{j=1}^{B}\mathbb{1}_{ij}^{obj}\left[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\right]$$

where λ_coord is the weight of the center point position deviation loss term, S is the number of blocks into which the picture is divided, B is the total number of predicted objects, i is the index of the cell of the currently segmented picture, j is the index of the current predicted object, 1_ij^obj is the coefficient corresponding to cell i and responsible object j, (x_i, y_i) are the actual object center coordinates, and (x̂_i, ŷ_i) are the predicted object center coordinates;
the selected-frame width and height deviation loss term is:

$$L_{wh}=\lambda_{coord}\sum_{i=1}^{S}\sum_{j=1}^{B}\mathbb{1}_{ij}^{obj}\left[\left(\sqrt{w_i}-\sqrt{\hat{w}_i}\right)^2+\left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^2\right]$$

where λ_coord is the weight of the selected-frame width and height deviation loss term, S, B, i, j and 1_ij^obj are as above, (w_i, h_i) are the width and height of the actual object's selected frame, and (ŵ_i, ĥ_i) are the predicted width and height of the object;
the prediction confidence deviation loss term is:

$$L_{conf}=\sum_{i=1}^{S}\sum_{j=1}^{B}\mathbb{1}_{ij}^{obj}\left(C_i-\hat{C}_i\right)^2+\lambda_{Nobj}\sum_{i=1}^{S}\sum_{j=1}^{B}\mathbb{1}_{ij}^{Nobj}\left(C_i-\hat{C}_i\right)^2$$

where λ_Nobj is the weight of the prediction confidence deviation loss term for cells without objects, 1_ij^Nobj is the complementary coefficient for cell i and object j, C_i is the confidence of the actual object, and Ĉ_i is the predicted object confidence.
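The summed loss can be sketched numerically as below. This is a minimal illustration assuming a YOLO-style form (square-root width/height terms, λ_coord weighting); the nested-list data layout keyed by 'x', 'y', 'w', 'h', 'c' is an illustrative assumption, not the patent's data structure:

```python
import math

def detection_loss(pred, truth, obj_mask, lam_coord=5.0, lam_noobj=0.5):
    """Sum of the three loss terms over cells i and boxes j.

    pred/truth: dicts of nested lists indexed [cell][box];
    obj_mask[i][j] is 1 when box j of cell i is responsible for an object.
    """
    L_xy = L_wh = L_conf = 0.0
    for i in range(len(obj_mask)):
        for j in range(len(obj_mask[i])):
            dc2 = (truth['c'][i][j] - pred['c'][i][j]) ** 2
            if obj_mask[i][j]:
                # Center position deviation term.
                L_xy += lam_coord * ((truth['x'][i][j] - pred['x'][i][j]) ** 2
                                     + (truth['y'][i][j] - pred['y'][i][j]) ** 2)
                # Width/height deviation term (square roots damp large boxes).
                L_wh += lam_coord * ((math.sqrt(truth['w'][i][j]) - math.sqrt(pred['w'][i][j])) ** 2
                                     + (math.sqrt(truth['h'][i][j]) - math.sqrt(pred['h'][i][j])) ** 2)
                L_conf += dc2
            else:
                # Down-weighted confidence penalty for cells without objects.
                L_conf += lam_noobj * dc2
    return L_xy + L_wh + L_conf
```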
In one embodiment, in step S2:
the mass-normalized force, direction and angular velocity in the inertial coordinate frame are obtained equivalently from the control input of the multi-rotor dynamics model.
In an embodiment, the method of performing a path search based on the motion primitive sequence of the multi-rotor dynamics model in step S2 further includes:
by selecting and extending the jerk u_m over a time sequence, searching for paths in space, finding and optimizing, from an initial state s_0 to a final state s_g, an optimal trajectory that is optimal in terms of the total control effort J and the required time T; the objective function corresponding to the trajectory optimization algorithm is

$$J(T)=\int_0^T\lVert j(t)\rVert^2\,dt+\rho T$$

where j is the jerk and ρ is the time loss term coefficient.
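A discretized evaluation of this objective can be sketched as follows; the sampled-jerk representation and time step are illustrative assumptions:

```python
def trajectory_cost(jerk_samples, dt, rho):
    """Discretize J(T) = integral of ||j(t)||^2 dt + rho * T.

    jerk_samples: sequence of 3-D jerk vectors sampled every dt seconds.
    """
    control_effort = sum(sum(c * c for c in j) for j in jerk_samples) * dt
    T = len(jerk_samples) * dt
    return control_effort + rho * T
```

Larger ρ trades control effort for shorter flight time, which is how the planner balances aggressiveness against smoothness.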
In one embodiment, the constraints in the path planning of the trajectory optimization algorithm in step S2 include a dynamic feasibility constraint and a collision constraint:
the dynamic feasibility constraint satisfies the following conditions:

$$\lVert\dot{x}(t)\rVert_\infty\le v_{max},\qquad\lVert\ddot{x}(t)\rVert_\infty\le a_{max},\qquad\lVert\dddot{x}(t)\rVert_\infty\le j_{max}$$

where ẋ is the velocity, ẍ is the acceleration, x⃛ is the jerk, v_max is the maximum velocity, a_max is the maximum acceleration, and j_max is the maximum jerk;
the collision constraint is enforced by:
checking for intersection between the ellipsoid ξ and the point cloud, i.e. checking whether

$$\xi(d)\cap\mathcal{O}\ne\emptyset$$

if the intersection exists, the multi-rotor system is considered to collide with an obstacle;
where O is the set of obstacles, o is an element of the obstacle set, D is the Euclidean distance, ξ is the ellipsoid model of the multi-rotor system in ℝ³, and d is the position of the multi-rotor system.
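The ellipsoid-versus-point-cloud test can be sketched as below. For simplicity this checks an axis-aligned ellipsoid around the vehicle position; the patent's ξ additionally rotates with the attitude along the trajectory, and the semi-axis values here are illustrative assumptions:

```python
def collides(position, obstacles, radii=(0.3, 0.3, 0.15)):
    """Return True when any point-cloud point o lies inside the
    axis-aligned ellipsoid of semi-axes `radii` centred at `position`."""
    ax, ay, az = radii
    for o in obstacles:
        dx = o[0] - position[0]
        dy = o[1] - position[1]
        dz = o[2] - position[2]
        # Point inside the ellipsoid <=> normalized quadratic form <= 1.
        if (dx / ax) ** 2 + (dy / ay) ** 2 + (dz / az) ** 2 <= 1.0:
            return True
    return False
```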
In an embodiment, in step S2, the heuristic hierarchical search method further includes using the prior trajectory Φ_p of the low-dimensional space to guide the jerk-based search of the trajectory Φ_q in the high-dimensional space.
The heuristic h(s_q) from the current state s_q to the final state s_g is

$$h(s_q)=\max\{H_1(s_q),\,H_2(s_q)\},\qquad H_2(s_q)=\bar{J}(s_q,s_g)+\rho T$$

where q denotes the high-dimensional trajectory, p the low-dimensional trajectory, g the end-point subscript, n the current-point subscript, H_1 the heuristic representing the distance in the lifted dimension, H_2 the heuristic to the end point, J̄ the total control effort from the high-dimensional starting point to the end point, ρ the time loss term coefficient, and T the time.
In one embodiment, the control variables of the Lie algebra controller in step S3 include the lift τ and the torque M,
the lift τ is generated by the following equation:

$$\tau=\left(-k_xe_x-k_ve_v+mge_3+m\ddot{x}_d\right)\cdot Re_3$$

where k_x is the position term coefficient, k_v is the velocity term coefficient, e_x is the position error, e_v is the velocity error, m is the multi-rotor mass, g is the gravitational acceleration, e_3 is the z-axis unit vector of the inertial coordinate frame, R is the attitude of the multi-rotor system, and ẍ_d is the desired acceleration;
the torque M is generated by the following equation:

$$M=-k_Re_R-k_\Omega e_\Omega+\Omega\times J\Omega-J\left(\hat{\Omega}R^TR_C\Omega_C-R^TR_C\dot{\Omega}_C\right)$$

where the subscript C denotes the desired value of the control command, k_R and k_Ω are control parameters, e_R is the attitude error, e_Ω is the angular velocity error, R_C is the desired attitude, Ω_C is the desired angular velocity, Ω̇_C is the desired angular acceleration, Ω is the angular velocity of the multi-rotor system, J ∈ ℝ³ˣ³ is the rotational inertia of the multi-rotor system, R is the attitude of the multi-rotor system, and Ω̂ is the skew-symmetric (hat) matrix of Ω.
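Control laws of this form can be sketched in numpy as below, following the standard geometric controller on SO(3) that the patent's Lie algebra controller resembles. The gains, inertia matrix, sign conventions, and the hat/vee helpers are illustrative assumptions:

```python
import numpy as np

def hat(v):
    # Skew-symmetric matrix such that hat(v) @ w == cross(v, w).
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0.0]])

def vee(M):
    # Inverse of hat: extract the vector from a skew-symmetric matrix.
    return np.array([M[2, 1], M[0, 2], M[1, 0]])

def lift_and_torque(x, v, R, Om, x_d, v_d, a_d, R_c, Om_c, dOm_c,
                    m=1.0, Jb=np.diag([0.01, 0.01, 0.02]),
                    kx=4.0, kv=2.0, kR=1.0, kOm=0.2, g=9.81):
    e3 = np.array([0.0, 0.0, 1.0])
    ex, ev = x - x_d, v - v_d                 # position / velocity errors
    F_des = -kx * ex - kv * ev + m * g * e3 + m * a_d
    tau = F_des @ (R @ e3)                    # scalar lift along body z-axis
    eR = 0.5 * vee(R_c.T @ R - R.T @ R_c)     # attitude error on SO(3)
    eOm = Om - R.T @ R_c @ Om_c               # angular-velocity error
    M = (-kR * eR - kOm * eOm + np.cross(Om, Jb @ Om)
         - Jb @ (hat(Om) @ R.T @ R_c @ Om_c - R.T @ R_c @ dOm_c))
    return tau, M
```

At hover with zero errors, the lift reduces to the weight m·g and the torque vanishes, as expected.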
In order to achieve the above object, the present invention provides a dynamic obstacle environment navigation device based on visual semantic information, which is characterized by comprising a target detection module, a path search planning module and an execution module:
the target detection module is connected with the multi-rotor system and the path search planning module and comprises an image segmentation module and a regression prediction module, wherein the image segmentation module receives and segments an original picture of the multi-rotor system, and the regression prediction module performs probability regression prediction on the segmented picture based on a deep neural network model to obtain a target detection result;
the path search planning module is connected with the target detection module and the execution module and comprises a map generation module and a path planning module, the map generation module generates an obstacle map according to a target detection result, the path planning module performs path search based on a motion element sequence of a multi-rotor dynamic model by using a heuristic layered search method and performs dynamic path planning on the established obstacle map;
the execution module is connected with the multi-rotor system and comprises a navigation module and a controller module, the navigation module outputs expected lift force and expected attitude according to a navigation track formed by dynamic path planning, the controller module is a Lidai number controller, receives the expected lift force and the expected attitude of the navigation module, receives the actual lift force and the actual attitude of the multi-rotor system, and controls the multi-rotor system through the lift force and torque.
According to the multi-rotor dynamic obstacle environment navigation method based on visual semantic information, by using visual semantic information the multi-rotor unmanned aerial vehicle can complete operation tasks with high efficiency and high dynamic performance in complex dynamic scenes.
Drawings
The above and other features, properties and advantages of the present invention will become more apparent from the following description of the embodiments with reference to the accompanying drawings in which like reference numerals denote like features throughout the several views, wherein:
FIG. 1 discloses a flow chart of a dynamic obstacle environment navigation method based on visual semantic information according to an embodiment of the invention;
FIG. 2 discloses a detailed flowchart of a dynamic obstacle environment navigation method based on visual semantic information according to yet another embodiment of the present invention;
FIG. 3 illustrates a schematic diagram of a four-rotor non-particle model in accordance with an embodiment of the present invention;
FIG. 4a discloses a graph of the results of a first path planning experiment according to an embodiment of the present invention;
FIG. 4b discloses a graph of the results of a second path planning experiment according to an embodiment of the invention;
FIG. 5a discloses a graph of the results of a first trajectory tracking experiment according to an embodiment of the invention;
FIG. 5b discloses a graph of the results of a second trajectory tracking experiment according to an embodiment of the invention;
FIG. 6 discloses a schematic diagram of a dynamic obstacle environment navigation system based on visual semantic information according to an embodiment of the invention.
The meanings of the reference symbols in the figures are as follows:
100 target detection module;
101 picture segmentation module;
102 regression prediction module;
200 path search planning module;
201 map generation module;
202 path planning module;
300 execution module;
301 navigation module;
302 controller module;
400 four-rotor system.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Aiming at the defects of the existing navigation methods described in the background art, the invention provides a complex dynamic scene navigation method based on visual semantic information, which does not involve the low-level rotor dynamics and can therefore be applied to other multi-rotor systems.
Taking a quad-rotor unmanned aerial vehicle system as an example, the quad-rotor unmanned aerial vehicle carries a forward-looking camera as a visual sensing unit and guides path planning by using visual semantic information.
Fig. 1 discloses a flow chart of a dynamic obstacle environment navigation method based on visual semantic information according to an embodiment of the present invention, and as shown in fig. 1, the dynamic obstacle environment navigation method based on visual semantic information provided by the present invention mainly includes the following 3 steps:
s1, segmenting and analyzing semantic information in the scene by using a probability regression method based on a deep neural network model, detecting and identifying obstacles in the scene in real time, and establishing obstacle map information;
s2, using a heuristic layered search method, performing path search based on a motion element sequence of the four-rotor dynamic model, and performing dynamic path planning on the established barrier map;
and S3, performing fast-response track tracking control by using the lith-generation controller.
Fig. 2 discloses a detailed flowchart of a dynamic obstacle environment navigation method based on visual semantic information according to another embodiment of the present invention, and each step of the dynamic obstacle environment navigation method based on visual semantic information proposed by the present invention is described in detail below with reference to fig. 1 and fig. 2.
And step S1, segmenting and analyzing semantic information in the scene by using a probability regression method based on the deep neural network model, detecting and identifying the obstacles in the scene in real time, and establishing obstacle map information.
Step S1 further includes the steps of:
and S11, scaling the original picture to a specified size and carrying out block segmentation.
In the embodiment shown in fig. 2, step S11 further includes performing a series of pre-processing on the original picture, including image noise filtering, luminance balancing, and the like. The raw pictures are obtained from the vision sensing unit of the quad-rotor system.
In the embodiment shown in fig. 2, step S11 further includes determining whether the first N frames of images have the desired detection target:
if the current original picture exists, uniformly dividing the current original picture to form a preselected grid, in some embodiments, dividing the current original picture into N × N grid-shaped pixel blocks, and performing regression prediction on the target for each unit grid pixel block;
if not, denser segmentation is performed around the previous target, generating a denser preselected grid, and a regression prediction of the target is made for each unit-grid pixel block.
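The grid pre-selection of step S11 can be sketched as follows. For simplicity this densifies the whole grid rather than only the region around the last target, so it is a simplification of the behavior described above; the refinement factor is an illustrative assumption:

```python
def make_grid(base_n, target_found, refine=2):
    """Return normalized grid cells (x, y, w, h) over a unit image.

    target_found: whether the previous N frames contained the target;
    when they did not, a denser grid (base_n * refine per side) is used.
    """
    n = base_n if target_found else base_n * refine
    step = 1.0 / n
    return [(c * step, r * step, step, step)
            for r in range(n) for c in range(n)]
```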
And S12, inputting the segmented picture into a convolution network of the deep neural network model for probability regression prediction, and outputting predicted values of the position, the size and the confidence coefficient of the obstacle in the picture scene.
And S13, removing redundant predicted values according to a non-maximum value suppression method to obtain a target detection result, wherein the target detection result is represented by a boundary frame and an object type.
In the embodiment shown in fig. 2, the target detection result of step S13 is the position information of the target object.
And generating an obstacle map according to the position information of the target object, wherein in the embodiment shown in fig. 2, the obstacle map is an obstacle point cloud map.
The Non-Maximum Suppression (NMS) method in step S13 suppresses elements that are not local maxima, and can be understood as a local maximum search. "Local" here refers to a neighborhood, which has two variable parameters: the dimension of the neighborhood and its size.
In step S12, the prediction parameters of the probabilistic regression detection method for the deep neural network model include: the distance coordinates between the picture pixel point and the center of the predicted object, the ratio of the size of the predicted object to the size of the image of the picture pixel point and the confidence coefficient of the prediction of the obstacle.
The distance coordinates (r_x, r_y) between a picture pixel point and the predicted object center are

$$r_x=\frac{x_cS}{w_i}-x_{col},\qquad r_y=\frac{y_cS}{h_i}-y_{row}$$

where (x_c, y_c) are the predicted object center coordinates, (w_i, h_i) are the width and height of the picture, S is the number of blocks into which the picture is divided, and (x_col, y_row) are the grid point coordinates of the predicted block center.
The ratio (r_w, r_h) of the predicted object size to the picture size is

$$r_w=\frac{w_b}{w_i},\qquad r_h=\frac{h_b}{h_i}$$

where (w_i, h_i) are the width and height of the picture and (w_b, h_b) are the width and height of the predicted object.
The obstacle prediction confidence P_conf, i.e. the confidence with which an obstacle of a certain type is predicted, mainly considers two influencing factors: the accuracy of the predicted region size and the confidence in the predicted target.
The total obstacle prediction confidence P_conf is expressed as:

$$P_{conf}=Pr(object)\times IOU_{pred}^{truth}$$

where Pr(object) is 1 if an object exists in the cell and 0 otherwise, and IOU_pred^truth is the accuracy (intersection over union) of the predicted box relative to the ground truth.
In order to prevent the loss-function gradient from exceeding its limiting level and to avoid abrupt gradient changes between points with and without loss, step S1 of the invention introduces two parameters, λ_coord and λ_Nobj, to weight the coordinate position loss and the confidence loss.
Further, the loss function terms of the deep neural network model of step S1 of the invention include: a center point position deviation loss term, a selected-frame width and height deviation loss term, and a prediction confidence deviation loss term.
The loss function of the deep neural network model is the sum of the above loss terms.
The center point position deviation loss term is:

$$L_{xy}=\lambda_{coord}\sum_{i=1}^{S}\sum_{j=1}^{B}\mathbb{1}_{ij}^{obj}\left[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\right]$$

where λ_coord is the weight of the center point position deviation loss term, S is the number of blocks into which the picture is divided, B is the total number of predicted objects, i is the index of the cell of the currently segmented picture, j is the index of the current predicted object, 1_ij^obj is the coefficient corresponding to cell i and responsible object j, (x_i, y_i) are the actual object center coordinates, and (x̂_i, ŷ_i) are the predicted object center coordinates.
The selected-frame width and height deviation loss term is:

$$L_{wh}=\lambda_{coord}\sum_{i=1}^{S}\sum_{j=1}^{B}\mathbb{1}_{ij}^{obj}\left[\left(\sqrt{w_i}-\sqrt{\hat{w}_i}\right)^2+\left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^2\right]$$

where λ_coord is the weight of the selected-frame width and height deviation loss term, S, B, i, j and 1_ij^obj are as above, (w_i, h_i) are the width and height of the actual object's selected frame, and (ŵ_i, ĥ_i) are the predicted width and height of the object.
The prediction confidence deviation loss term is:

$$L_{conf}=\sum_{i=1}^{S}\sum_{j=1}^{B}\mathbb{1}_{ij}^{obj}\left(C_i-\hat{C}_i\right)^2+\lambda_{Nobj}\sum_{i=1}^{S}\sum_{j=1}^{B}\mathbb{1}_{ij}^{Nobj}\left(C_i-\hat{C}_i\right)^2$$

where λ_Nobj is the weight of the prediction confidence deviation loss term for cells without objects, 1_ij^Nobj is the complementary coefficient for cell i and object j, C_i is the confidence of the actual object, and Ĉ_i is the predicted object confidence.
And S2, performing a path search based on the motion primitive sequence of the four-rotor dynamics model by using a heuristic hierarchical search method, and performing dynamic path planning on the established obstacle map.
In step S2, a heuristic hierarchical search method is adopted to plan the dynamic trajectory on the established obstacle map: a four-rotor dynamics model is used to generate the motion primitive sequence, the four-rotor body is modeled as an ellipsoid, the flight attitude along the trajectory of the ellipsoid is calculated, and whether the positional relationship between the four-rotor and the obstacles satisfies the constraints is checked.
The step S2 further includes the steps of:
s21 a dynamic model of the four-rotor system path planning is established.
The path trajectory Φ(t) of the four-rotor system at time t is represented by the following state vector:
Φ(t)=[x^T, v^T, a^T, j^T]^T
where x is the position of the trajectory point, v is its velocity, a is its acceleration, j is its jerk, and the superscript T denotes vector transposition.
When the thrust f_d, direction R_d and angular velocity ω_d required by the four-rotor system are known, the control input required by the dynamics model of the four-rotor system can be derived equivalently.
In the embodiment shown in FIG. 1 and FIG. 2, the jerk is selected as the control input of the four-rotor dynamics model. After the control input is obtained by the path planning method, the required direction, angular velocity and other information can be deduced in reverse, and the thrust, direction and angular velocity parameters are associated with the control input by the following formulas.
The thrust f_d is the mass-normalized force in the inertial coordinate frame:
f_d = a_d + g z_w
where g is the gravitational acceleration, z_w = [0, 0, 1]^T is the Z-axis direction vector of the world coordinate frame, and a_d is the planned four-rotor acceleration vector.
The trajectory attitude R of the four-rotor system in SO(3) space is expressed as R = [r_1, r_2, r_3]:

$$r_3=\frac{f_d}{\lVert f_d\rVert},\qquad r_2=\frac{r_3\times x_\psi}{\lVert r_3\times x_\psi\rVert},\qquad r_1=r_2\times r_3,\qquad x_\psi=[\cos\psi,\ \sin\psi,\ 0]^T$$

where r_1, r_2 and r_3 are the first, second and third column components of R, ψ is the four-rotor yaw angle, and x_ψ is the heading vector determined by the yaw angle. The body angular velocity ω = [ω_1, ω_2, ω_3]^T along the trajectory is then recovered component by component by differentiating the attitude, using the flatness relations between the jerk and ω.
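The attitude construction can be sketched in numpy as below. This is a standard differential-flatness computation consistent with the formulas above; the function name and the assumption that f_d never vanishes are illustrative:

```python
import numpy as np

def attitude_from_accel(a_d, psi, g=9.81):
    """Attitude R = [r1, r2, r3] with body z along the mass-normalized
    thrust f_d = a_d + g * z_w, remaining axes fixed by the yaw angle psi."""
    f_d = np.asarray(a_d, dtype=float) + np.array([0.0, 0.0, g])
    r3 = f_d / np.linalg.norm(f_d)
    x_psi = np.array([np.cos(psi), np.sin(psi), 0.0])  # yaw heading vector
    r2 = np.cross(r3, x_psi)
    r2 /= np.linalg.norm(r2)
    r1 = np.cross(r2, r3)
    return np.column_stack([r1, r2, r3])
```

At hover (zero acceleration, zero yaw) the attitude reduces to the identity, and the construction always yields an orthonormal rotation matrix.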
And S22, performing the path search based on the motion primitive sequence.
Selecting the jerk u_m as the control input of the four-rotor dynamics model, a motion primitive describes the state evolution over a small time period τ > 0 from the initial state s_0 toward the final state s_g, and can be expressed in polynomial form as:

$$p(t)=\frac{u_m}{6}t^3+\frac{a_0}{2}t^2+v_0t+p_0$$

where t is the current time, a_0 is the initial acceleration, v_0 is the initial velocity, and p_0 is the initial position.
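The closed-form state propagation under a constant-jerk primitive follows directly by integration; the per-axis function below is an illustrative sketch:

```python
def propagate(p0, v0, a0, um, t):
    """Closed-form position, velocity and acceleration on one axis after
    applying constant jerk um for time t."""
    p = um * t ** 3 / 6 + a0 * t ** 2 / 2 + v0 * t + p0
    v = um * t ** 2 / 2 + a0 * t + v0
    a = um * t + a0
    return p, v, a
```

Expanding a node in the search applies a discrete set of jerk values u_m for the primitive duration and propagates each axis with this formula.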
By selecting and extending jerk inputs u_m in a time sequence, a search over paths in space is formed; the trajectory from the initial state s_0 to the final state s_g is found and optimized to obtain the trajectory that is optimal in terms of the total control effort J and the time T required by the target.
The objective function of the trajectory optimization algorithm is

J = ∫_0^T ||j(t)||² dt + ρT

wherein j is the jerk, J is the total control effort, ρ is the time loss term coefficient, and T is the time.
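For piecewise-constant jerk primitives, each held for a duration τ, the integral in the objective reduces to a sum; a small illustrative sketch (function and argument names are assumptions):

```python
import numpy as np

def trajectory_cost(jerks, tau, rho):
    """Total cost of a sequence of constant-jerk primitives, each held for
    time tau: J = sum(||u_m||^2 * tau) + rho * T, a discrete analogue of
    J = integral of ||j(t)||^2 dt + rho*T. Names are illustrative."""
    jerks = np.asarray(jerks, dtype=float)
    control_effort = float(np.sum(jerks ** 2)) * tau  # sum of ||u_m||^2 * tau
    total_time = len(jerks) * tau
    return control_effort + rho * total_time

J = trajectory_cost([[0, 0, 1], [0, 0, 2]], tau=0.5, rho=10.0)
print(J)  # (1 + 4)*0.5 + 10*1.0 = 12.5
```

The coefficient ρ trades control effort against flight time: a larger ρ favors shorter, more aggressive trajectories.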
The optimization process mainly considers dynamic constraints and collision constraints; therefore, in the path planning of the trajectory optimization algorithm of step S22 of the present invention, the constraint conditions include a dynamics constraint and a collision constraint.
The dynamics constraints of a four-rotor system essentially derive from the maximum and minimum thrust and torque that the rotors can provide. Taking advantage of the differential flatness of the four-rotor system, velocity, acceleration and jerk limits are applied independently on each axis.
The dynamics in the constrained path planning need to satisfy the following conditions:
|dx/dt| ≤ v_max,  |d²x/dt²| ≤ a_max,  |d³x/dt³| ≤ j_max

wherein x(t) is the quadrotor position coordinate, dx/dt is the velocity, d²x/dt² is the acceleration, d³x/dt³ is the jerk, v_max is the maximum velocity, a_max is the maximum acceleration, and j_max is the maximum jerk.
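Checking these per-axis limits is a simple componentwise test; a hypothetical sketch (names assumed, not patent code):

```python
import numpy as np

def dynamically_feasible(v, a, j, v_max, a_max, j_max):
    """Per-axis velocity/acceleration/jerk limit check, as permitted by the
    differential-flatness property (limits applied independently per axis).
    Function and argument names are illustrative assumptions."""
    return bool(np.all(np.abs(v) <= v_max) and
                np.all(np.abs(a) <= a_max) and
                np.all(np.abs(j) <= j_max))

print(dynamically_feasible([1, 0, 2], [0.5, 0, 0], [0, 0, 0],
                           v_max=3.0, a_max=2.0, j_max=5.0))  # True
```

During the search, any motion primitive whose end state (or intermediate samples) violates a limit is pruned.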
FIG. 3 discloses a four-rotor non-particle model schematic diagram according to an embodiment of the invention. As shown in FIG. 3, to account for the actual robot shape and attitude when handling collision constraints, the four-rotor system is modeled as an ellipsoid ξ in R³. The space occupied by the four-rotor system in pose state S is represented as

ξ(S) = { p = E·x + d | x ∈ R³, ||x|| ≤ 1 }

wherein p is an element of the ellipsoid set, x is a scale vector, E is the matrix representing the rotated and scaled ellipsoid, R³ is the three-dimensional real space, S is the given pose state of the four-rotor system, and d is the position of the four-rotor system. The shape matrix is

E = R·diag(r, r, h)

wherein r is the radius, h is the height, and R is the attitude of the four-rotor system, R = [r_1, r_2, r_3].
In the ellipsoid model of the four-rotor system, when tracking the trajectory, it is checked whether the ellipsoid ξ intersects the point cloud, i.e. whether

ξ(S) ∩ O ≠ ∅,  equivalently whether ||E⁻¹(o − d)|| ≤ 1 for some o ∈ O

wherein O is the obstacle set, o is a set element within the obstacles, D is the Euclidean distance, and d is the position of the four-rotor system. If an intersection exists, the four-rotor system is considered to collide with the obstacle.
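The ellipsoid-versus-point-cloud test can be sketched by mapping obstacle points into the unit-sphere frame of the ellipsoid. The function name and the E = R·diag(r, r, h) convention are assumptions, not the patent's exact formulation:

```python
import numpy as np

def ellipsoid_collides(point_cloud, R, d, r, h):
    """Check whether any obstacle point lies inside the body ellipsoid with
    shape matrix E = R @ diag(r, r, h): point o collides iff
    ||E^-1 (o - d)|| <= 1. Illustrative sketch; names are assumptions."""
    E = R @ np.diag([r, r, h])
    rel = np.asarray(point_cloud, dtype=float) - np.asarray(d, dtype=float)
    # Transform the points into the frame in which the ellipsoid is a unit sphere.
    unit = np.linalg.solve(E, rel.T).T
    return bool(np.any(np.linalg.norm(unit, axis=1) <= 1.0))

# A point 1 m above the vehicle clears an ellipsoid of half-height 0.3 m:
print(ellipsoid_collides([[0, 0, 1.0]], np.eye(3), [0, 0, 0], r=0.4, h=0.3))  # False
```

Because E depends on the attitude R, an aggressive pitch or roll changes which gaps the vehicle can fit through, which is exactly why the non-particle model is used.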
In order to accelerate the planning speed in the dynamic environment, step S22 of the present invention further includes the following steps:
A heuristic layered search method is adopted to accelerate trajectory search: first, a lower-dimensional space is searched to obtain a prior trajectory; the prior trajectory is then used as a heuristic to guide the search of the high-dimensional space; finally, a dynamically feasible path trajectory is generated.
The prior trajectory in the low-dimensional space is Φ_p, and the trajectory searched in the high-dimensional space is Φ_q (q > p).
The heuristic function h(s_q) from the current state s_q to the final state s_g can be expressed as

h(s_q) = H_1(s_q, Φ_p) + H_2(s_q, s_g)

H_2(s_q, s_g) = J^q_{n→g} + ρT

wherein q is the high-dimensional trajectory, p is the low-dimensional trajectory, g is the end point subscript, n is the current point subscript, H_1 is a heuristic function representing the distance across dimensions, H_2 is the heuristic function to the end point, J^q_{n→g} is the total control effort from the current point to the end point in the high-dimensional space, ρ is the time loss term coefficient, and T is the time.
The search of the trajectory Φ_q can thus be accelerated by using the prior trajectory Φ_p to initialize the search.
The new search trajectory Φ_q tends to remain consistent with the prior trajectory Φ_p, while H_2 adds finer dynamics to the planning objective.
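A possible form of the layered heuristic is sketched below. The additive combination and the time-to-goal estimate are assumptions; the patent only states that H_1 guides toward the prior trajectory and H_2 estimates the remaining cost to the goal:

```python
import numpy as np

def layered_heuristic(s_q_pos, prior_path, goal_pos, rho, v_max):
    """Heuristic h(s_q) for the high-dimensional search: H1 measures the
    distance from the state to the prior low-dimensional trajectory Phi_p,
    H2 is an optimistic time-to-goal estimate weighted by rho. Sketch only;
    the exact functional form in the patent is not recoverable."""
    s = np.asarray(s_q_pos, dtype=float)
    # H1: stay close to the prior (low-dimensional) trajectory.
    H1 = min(np.linalg.norm(s - np.asarray(p, dtype=float)) for p in prior_path)
    # H2: optimistic time-to-goal at maximum speed, weighted by rho.
    H2 = rho * np.linalg.norm(np.asarray(goal_pos, dtype=float) - s) / v_max
    return H1 + H2

# On the prior path, only the time-to-goal term remains:
h = layered_heuristic([1, 0, 0], [[0, 0, 0], [1, 0, 0]], [3, 0, 0],
                      rho=1.0, v_max=2.0)
print(h)  # 1.0
```

States far from the prior trajectory receive a penalty through H_1, so the high-dimensional search concentrates its expansions around the corridor found in the cheap low-dimensional search.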
And S3, performing fast-response trajectory tracking control by using the Lie algebra controller.
Aiming at the aggressive attitudes and high dynamics appearing in complex scenes, a four-rotor dynamics model is established on the three-dimensional rotation group of the Lie algebra space, i.e. the SO(3) space:

ẋ = v,  m·v̇ = τ·R·e_3 − m·g·e_3

Ṙ = R·Ω̂,  J·Ω̇ + Ω × (J·Ω) = M

wherein x ∈ R³ is the position of the four-rotor system in the inertial coordinate system, v ∈ R³ is the velocity of the four-rotor system in the inertial coordinate system, m is the mass of the four-rotor system, Ω ∈ R³ is the angular velocity of the four-rotor system, J ∈ R³ˣ³ is the moment of inertia of the four-rotor system, R is the attitude of the four-rotor system, e_3 is the z-axis unit vector of the inertial coordinate system, τ is the lift, M is the torque, and Ω̂ is the skew-symmetric (hat) matrix of the angular velocity.
On the basis of the four-rotor dynamics model, a feedforward-feedback nonlinear Lie algebra controller is established; the control quantities comprise the lift τ and the torque M.
The lift τ is generated by the following equation:

τ = (−k_x·e_x − k_v·e_v + m·g·e_3 + m·ẍ_d) · (R·e_3)

wherein k_x is the position term coefficient, k_v is the velocity term coefficient, e_x is the position error, e_v is the velocity error, m is the quadrotor mass, g is the gravitational acceleration, e_3 is the z-axis unit vector of the inertial coordinate system, R is the attitude of the four-rotor system, and ẍ_d is the desired acceleration.
The torque M is generated by the following equation:

M = −k_R·e_R − k_Ω·e_Ω + Ω × (J·Ω) − J·(Ω̂·Rᵀ·R_C·Ω_C − Rᵀ·R_C·Ω̇_C)

wherein the subscript C denotes the desired value of the control command, k_R and k_Ω are control parameters, e_R is the attitude error, e_Ω is the angular velocity error, R_C is the desired attitude, Ω_C is the desired angular velocity, Ω̇_C is the desired angular acceleration, Ω is the angular velocity of the four-rotor system, Ω̂ is the skew-symmetric matrix of the angular velocity, J ∈ R³ˣ³ is the moment of inertia of the four-rotor system, and R is the attitude of the four-rotor system.
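The feedforward-feedback controller can be sketched in the style of the well-known geometric tracking controller on SO(3). Sign conventions, gain values, the z-up inertial frame, and all names below are assumptions, not the patent's exact formulas:

```python
import numpy as np

def hat(w):
    """Skew-symmetric (hat) matrix of a 3-vector w."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def vee(A):
    """Inverse of hat: extract the 3-vector from a skew-symmetric matrix."""
    return np.array([A[2, 1], A[0, 2], A[1, 0]])

def geometric_control(x, v, R, Omega, x_d, v_d, a_d,
                      R_c, Omega_c, dOmega_c, m, J,
                      kx, kv, kR, kOmega, g=9.81):
    """Feedforward-feedback tracking controller on SO(3); returns the lift tau
    and the torque M. Illustrative sketch under assumed conventions."""
    e3 = np.array([0.0, 0.0, 1.0])
    ex = x - x_d                      # position error
    ev = v - v_d                      # velocity error
    # Lift: project the desired force onto the current body z-axis.
    tau = float((-kx * ex - kv * ev + m * g * e3 + m * a_d) @ (R @ e3))
    # Attitude and angular-velocity errors on SO(3).
    eR = 0.5 * vee(R_c.T @ R - R.T @ R_c)
    eOmega = Omega - R.T @ R_c @ Omega_c
    M = (-kR * eR - kOmega * eOmega + np.cross(Omega, J @ Omega)
         - J @ (hat(Omega) @ R.T @ R_c @ Omega_c - R.T @ R_c @ dOmega_c))
    return tau, M

# At hover with zero tracking errors the controller commands tau = m*g and M = 0.
tau, M = geometric_control(np.zeros(3), np.zeros(3), np.eye(3), np.zeros(3),
                           np.zeros(3), np.zeros(3), np.zeros(3),
                           np.eye(3), np.zeros(3), np.zeros(3),
                           m=1.2, J=np.diag([0.01, 0.01, 0.02]),
                           kx=4.0, kv=2.0, kR=1.0, kOmega=0.2)
```

Because the attitude error is computed directly on SO(3) rather than through Euler angles, the controller remains well defined at the aggressive attitudes targeted by the planner.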
The invention uses the Lie algebra controller to perform fast-response trajectory tracking control, thereby realizing navigation tasks in complex dynamic scenes using visual semantic information.
Fig. 4a discloses a result graph of a first path planning experiment according to an embodiment of the invention, and fig. 4b discloses a result graph of a second path planning experiment. Fig. 5a discloses a result graph of a first trajectory tracking experiment, and fig. 5b discloses a result graph of a second trajectory tracking experiment. Table 1 compares the experimental parameters of experiment 1 (shown in figs. 4a and 5a) and experiment 2 (shown in figs. 4b and 5b).
In experiment 1 shown in figs. 4a and 5a, path planning as shown in fig. 4a was performed on the obstacle map; the solid line in fig. 5a is the desired planned path, and the broken line is the path obtained by actual control. In experiment 2 shown in figs. 4b and 5b, path planning as shown in fig. 4b was performed on the obstacle map; the solid line in fig. 5b is the desired planned path, and the broken line is the path obtained by actual control.
As can be seen from experiments 1 and 2, the four-rotor dynamic obstacle environment navigation method based on the visual semantic information can realize navigation and control of the four-rotor unmanned aerial vehicle in a complex dynamic obstacle environment.
TABLE 1 — comparison of experimental parameters of experiments 1 and 2 (rendered as an image in the original publication).
The invention also provides a dynamic obstacle environment navigation device based on visual semantic information, which can implement the above dynamic obstacle environment navigation method based on visual semantic information; a four-rotor system is still taken as an example.
Fig. 6 is a schematic diagram of a dynamic obstacle environment navigation system based on visual semantic information according to an embodiment of the present invention, and in the embodiment shown in fig. 6, the dynamic obstacle environment navigation system based on visual semantic information according to the present invention includes a target detection module 100, a route search planning module 200, and an execution module 300.
The object detection module 100 is connected to the four-rotor system 400 and the path search planning module 200.
The object detection module 100 includes an image segmentation module 101 and a regression prediction module 102.
The image segmentation module 101 receives and segments an original image of the four-rotor system 400, and sends the segmented image, i.e., a block image, to the regression prediction module 102.
The regression prediction module 102 performs probabilistic regression prediction on the segmented picture based on the deep neural network model to obtain a target detection result, and sends the obstacle information corresponding to the target detection result to the path search planning module 200.
The path search planning module 200 is connected to the target detection module 100 and the execution module 300, and includes a map generation module 201 and a path planning module 202.
The map generation module 201 generates an obstacle map according to the target detection result. The obstacle map is a point cloud map, and the map generation module 201 sends the point cloud map to the path planning module 202.
The path planning module 202 performs path search based on the motion primitive sequence of the four-rotor dynamics model using a heuristic layered search method, performs dynamic path planning on the established obstacle map, and sends the planned navigation trajectory to the execution module.
The implement module 300, coupled to the quad-rotor system 400, includes a navigation module 301 and a controller module 302.
The navigation module 301 outputs an expected lift force and an expected posture according to a navigation track formed by dynamic path planning.
The controller module 302 is a Lie algebra controller; it receives the desired lift and the desired attitude from the navigation module 301, receives the actual lift and the actual attitude of the quad-rotor system 400, and controls the quad-rotor system 400 through the lift and the torque.
According to the four-rotor dynamic obstacle environment navigation method and device based on the visual semantic information, the multi-rotor unmanned aerial vehicle can complete high-efficiency and high-dynamic-performance operation tasks in a complex dynamic scene through the visual semantic information.
While, for purposes of simplicity of explanation, the methodologies are shown and described as a series of acts, it is to be understood and appreciated that the methodologies are not limited by the order of acts, as some acts may, in accordance with one or more embodiments, occur in different orders and/or concurrently with other acts from that shown and described herein or not shown and described herein, as would be understood by one skilled in the art.
As used in this application and the appended claims, the terms "a," "an," "the," and/or "the" are not intended to be inclusive in the singular, but rather are intended to be inclusive in the plural unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that steps and elements are included which are explicitly identified, that the steps and elements do not form an exclusive list, and that a method or apparatus may include other steps or elements.
The embodiments described above are provided to enable persons skilled in the art to make or use the invention and that modifications or variations can be made to the embodiments described above by persons skilled in the art without departing from the inventive concept of the present invention, so that the scope of protection of the present invention is not limited by the embodiments described above but should be accorded the widest scope consistent with the innovative features set forth in the claims.

Claims (10)

1. A dynamic obstacle environment navigation method based on visual semantic information is characterized by comprising the following steps:
S1, segmenting and analyzing the semantic information in the scene of the original picture by using a probability regression prediction method based on a deep neural network model, detecting and identifying obstacles in the scene in real time, and establishing an obstacle map;
S2, using a heuristic layered search method, performing path search based on the motion primitive sequence of the multi-rotor dynamics model, and performing dynamic path planning on the established obstacle map;
and S3, performing fast-response trajectory tracking control by using the Lie algebra controller.
2. The dynamic obstacle environment navigation method based on visual semantic information according to claim 1, wherein the step S1 further includes the steps of:
s11, zooming the original picture to a specified size and carrying out block segmentation;
s12, inputting the segmented picture into a deep neural network model for probability regression prediction, and outputting predicted values of the position, the size and the confidence coefficient of an obstacle in the picture scene;
and S13, removing redundant predicted values according to a non-maximum value inhibition method to obtain a target detection result, wherein the target detection result is represented by a boundary frame body and an object type.
3. The dynamic obstacle environment navigation method based on visual semantic information according to claim 1, wherein the prediction parameters of the probabilistic regression prediction method of the deep neural network model in the step S1 further include:
the distance coordinates of the picture pixel point and the center of the predicted object;
the ratio of the predicted object size to the image size of the picture pixel;
obstacle prediction confidence, including accuracy of predicted selected region dimensions and confidence in predicted targets.
4. The dynamic obstacle environment navigation method based on visual semantic information according to claim 1, wherein the loss function of the deep neural network model in step S1 is a sum of a center point position deviation loss term, a selected frame width and height deviation loss term, and a prediction confidence deviation loss term:
the central point position deviation loss term is as follows:
Figure FDA0002433143560000021
wherein λ iscoordIs the weight of the central point position deviation loss item, S is the number of the divided blocks of the picture, B is the total number of the predicted objects, i is the number of the cells of the current divided picture, j is the number of the current predicted objects,
Figure FDA0002433143560000022
the corresponding coefficients of cell i and responsible object j, (x)i,yi) As the coordinates of the center of the actual object,
Figure FDA0002433143560000023
predicting the center coordinates of the object;
the width and height deviation loss term of the selected frame is as follows:
Figure FDA0002433143560000024
wherein λ iscoordFor selecting the weight of the width and height deviation loss term of the frame, S is the number of the blocks divided by the picture, B is the total number of the predicted objects, i is the number of the cells of the current divided picture, j is the number of the current predicted objects,
Figure FDA0002433143560000025
the corresponding coefficient of cell i and responsible object j, (w)i,hi) Is the width and height of the picture size,
Figure FDA0002433143560000026
width and height for predicting object size;
the prediction confidence bias loss term is:
Figure FDA0002433143560000027
wherein λ isNobjFor predicting confidence deviation loss term weight, S is the number of blocks of picture segmentation, B is the total number of predicted objects, i is the number of cells of the current segmented picture, j is the number of the current predicted objects,
Figure FDA0002433143560000028
the corresponding coefficient, C, for cell i and responsible object jiFor the confidence of the actual object or objects,
Figure FDA0002433143560000029
to predict object confidence.
5. The dynamic obstacle environment navigation method based on visual semantic information according to claim 1, wherein in the step S2:
and the mass normalization acting force, the direction and the angular speed under the inertial coordinate system are equivalently obtained through the control input of the multi-rotor dynamic model.
6. The dynamic obstacle environment navigation method based on visual semantic information according to claim 1, wherein the path search method of step S2, based on the motion primitive sequence of the multi-rotor dynamics model, further comprises:
selecting and extending jerk inputs u_m in a time sequence to search over paths in space, finding and optimizing the trajectory from the initial state s_0 to the final state s_g to obtain the trajectory that is optimal in terms of the total control effort J and the time T required by the target, wherein the objective function of the trajectory optimization algorithm is

J = ∫_0^T ||j(t)||² dt + ρT

wherein j is the jerk, ρ is the time loss term coefficient, and T is the time.
7. The dynamic obstacle environment navigation method based on visual semantic information according to claim 6, wherein the constraint conditions in the path planning of the optimized trajectory algorithm in the step S2 include a dynamic property constraint and a collision constraint:
the dynamics constraint satisfies the following condition:
Figure FDA0002433143560000032
wherein the content of the first and second substances,
Figure FDA0002433143560000033
in order to be the speed of the vehicle,
Figure FDA0002433143560000034
in order to be able to accelerate the vehicle,
Figure FDA0002433143560000035
to add acceleration, vmaxIs the maximum value of the velocity, amaxIs the maximum value of acceleration, jmaxIs the maximum value of the acceleration;
the collision restraint is achieved by:
checking for intersection between the validation ξ and the point cloud, i.e. checking for the following formula
Figure FDA0002433143560000036
If the intersection exists, the multi-rotor system is considered to collide with the obstacle;
wherein the content of the first and second substances,
Figure FDA0002433143560000037
is a set of obstacles, o is a set element in the obstacles, D is an Euclidean distance, ξ is a multi-rotor system
Figure FDA0002433143560000038
Ellipsoid modeling in (1), d is the position of the multi-rotor system.
8. The dynamic obstacle environment navigation method based on visual semantic information of claim 6, wherein in the step S2, the heuristic hierarchical search method further comprises using the prior trajectory Φ_p of the low-dimensional space to accelerate the search of the trajectory Φ_q in the high-dimensional space;
the heuristic function h(s_q) from the current state s_q to the final state s_g is

h(s_q) = H_1(s_q, Φ_p) + H_2(s_q, s_g)

H_2(s_q, s_g) = J^q_{n→g} + ρT

wherein q is the high-dimensional trajectory, p is the low-dimensional trajectory, g is the end point subscript, n is the current point subscript, H_1 is the heuristic function representing the distance across dimensions, H_2 is the heuristic function to the end point, J^q_{n→g} is the total control effort from the current point to the end point in the high-dimensional space, ρ is the time loss term coefficient, and T is the time.
9. The dynamic obstacle environment navigation method based on visual semantic information according to claim 1, wherein the control quantities of the Lie algebra controller in step S3 comprise a lift τ and a torque M,
the lift τ is generated by the following equation:

τ = (−k_x·e_x − k_v·e_v + m·g·e_3 + m·ẍ_d) · (R·e_3)

wherein k_x is the position term coefficient, k_v is the velocity term coefficient, e_x is the position error, e_v is the velocity error, m is the multi-rotor mass, g is the gravitational acceleration, e_3 is the z-axis unit vector of the inertial coordinate system, R is the attitude of the multi-rotor system, and ẍ_d is the desired acceleration;
the torque M is generated by the following equation:

M = −k_R·e_R − k_Ω·e_Ω + Ω × (J·Ω) − J·(Ω̂·Rᵀ·R_C·Ω_C − Rᵀ·R_C·Ω̇_C)

wherein the subscript C denotes the desired value of the control command, k_R and k_Ω are control parameters, e_R is the attitude error, e_Ω is the angular velocity error, R_C is the desired attitude, Ω_C is the desired angular velocity, Ω̇_C is the desired angular acceleration, Ω is the angular velocity of the multi-rotor system, Ω̂ is the skew-symmetric matrix of the angular velocity, J ∈ R³ˣ³ is the moment of inertia of the multi-rotor system, and R is the attitude of the multi-rotor system.
10. A dynamic obstacle environment navigation device based on visual semantic information, characterized by comprising a target detection module, a path search planning module and an execution module, wherein:
the target detection module is connected with the multi-rotor system and the path search planning module and comprises an image segmentation module and a regression prediction module, wherein the image segmentation module receives and segments an original picture of the multi-rotor system, and the regression prediction module performs probability regression prediction on the segmented picture based on a deep neural network model to obtain a target detection result;
the path search planning module is connected with the target detection module and the execution module and comprises a map generation module and a path planning module, wherein the map generation module generates an obstacle map according to the target detection result, and the path planning module performs path search based on the motion primitive sequence of the multi-rotor dynamics model using a heuristic layered search method and performs dynamic path planning on the established obstacle map;
the execution module is connected with the multi-rotor system and comprises a navigation module and a controller module, wherein the navigation module outputs an expected lift and an expected attitude according to the navigation trajectory formed by dynamic path planning, and the controller module is a Lie algebra controller that receives the expected lift and the expected attitude from the navigation module, receives the actual lift and the actual attitude of the multi-rotor system, and controls the multi-rotor system through the lift and the torque.
CN202010242906.XA 2020-03-31 2020-03-31 Dynamic obstacle environment navigation method and device based on visual semantic information Active CN111367318B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010242906.XA CN111367318B (en) 2020-03-31 2020-03-31 Dynamic obstacle environment navigation method and device based on visual semantic information


Publications (2)

Publication Number Publication Date
CN111367318A true CN111367318A (en) 2020-07-03
CN111367318B CN111367318B (en) 2022-11-22

Family

ID=71207276

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010242906.XA Active CN111367318B (en) 2020-03-31 2020-03-31 Dynamic obstacle environment navigation method and device based on visual semantic information

Country Status (1)

Country Link
CN (1) CN111367318B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180164124A1 (en) * 2016-09-15 2018-06-14 Syracuse University Robust and stable autonomous vision-inertial navigation system for unmanned vehicles
CN109682381A (en) * 2019-02-22 2019-04-26 山东大学 Big visual field scene perception method, system, medium and equipment based on omnidirectional vision
CN109993780A (en) * 2019-03-07 2019-07-09 深兰科技(上海)有限公司 A kind of three-dimensional high-precision ground drawing generating method and device
US10366508B1 (en) * 2016-08-29 2019-07-30 Perceptin Shenzhen Limited Visual-inertial positional awareness for autonomous and non-autonomous device
CN110275540A (en) * 2019-07-01 2019-09-24 湖南海森格诺信息技术有限公司 Semantic navigation method and its system for sweeping robot


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHAO, BF,ET AL.: "Vision-Based Tracking Control of Quadrotor with Backstepping Sliding Mode Control", 《IEEE ACCESS》 *
盛超等: "基于图像语义分割的动态场景下的单目SLAM算法", 《测绘通报》 *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111950386A (en) * 2020-07-22 2020-11-17 北京航空航天大学 Functional intelligence-based environment self-adaptive navigation scene recognition method for micro unmanned aerial vehicle
CN111880573A (en) * 2020-07-31 2020-11-03 电子科技大学 Four-rotor autonomous navigation method based on visual inertial navigation fusion
CN111880573B (en) * 2020-07-31 2022-09-06 电子科技大学 Four-rotor autonomous navigation method based on visual inertial navigation fusion
CN112183221B (en) * 2020-09-04 2024-05-03 北京科技大学 Semantic-based dynamic object self-adaptive track prediction method
CN112183221A (en) * 2020-09-04 2021-01-05 北京科技大学 Semantic-based dynamic object self-adaptive trajectory prediction method
CN112509056A (en) * 2020-11-30 2021-03-16 中国人民解放军32181部队 Dynamic battlefield environment real-time path planning system and method
CN112509056B (en) * 2020-11-30 2022-12-20 中国人民解放军32181部队 Dynamic battlefield environment real-time path planning system and method
CN113358118A (en) * 2021-05-06 2021-09-07 北京化工大学 End-to-end autonomous navigation method for indoor mobile robot in unstructured environment
CN113358118B (en) * 2021-05-06 2022-09-20 北京化工大学 End-to-end autonomous navigation method for indoor mobile robot in unstructured environment
CN113311833A (en) * 2021-05-20 2021-08-27 江苏图知天下科技有限公司 Prefabricated slab surface folding method and device based on robot
CN113608548A (en) * 2021-07-23 2021-11-05 中国科学院地理科学与资源研究所 Unmanned aerial vehicle emergency processing method and system, storage medium and electronic equipment
CN113390410A (en) * 2021-08-04 2021-09-14 北京云恒科技研究院有限公司 Inertial integrated navigation method suitable for unmanned aerial vehicle
CN113925490A (en) * 2021-10-14 2022-01-14 河北医科大学 Space-oriented obstacle classification method
CN113925490B (en) * 2021-10-14 2023-06-27 河北医科大学 Space orientation obstacle classification method
WO2023178931A1 (en) * 2022-03-24 2023-09-28 美的集团(上海)有限公司 Motion planning method and apparatus, and robot
CN115840369A (en) * 2023-02-20 2023-03-24 南昌大学 Track optimization method, device and equipment based on improved whale optimization algorithm
CN117541655A (en) * 2024-01-10 2024-02-09 上海几何伙伴智能驾驶有限公司 Method for eliminating radar map building z-axis accumulated error by fusion of visual semantics
CN117541655B (en) * 2024-01-10 2024-03-26 上海几何伙伴智能驾驶有限公司 Method for eliminating radar map building z-axis accumulated error by fusion of visual semantics

Also Published As

Publication number Publication date
CN111367318B (en) 2022-11-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant