CN111367318B — Dynamic obstacle environment navigation method and device based on visual semantic information
- Publication number: CN111367318B (application CN202010242906.XA)
- Authority: CN (China)
- Legal status: Active
Classifications
- G05D1/101 — Simultaneous control of position or course in three dimensions, specially adapted for aircraft
- G01C21/005 — Navigation with correlation of navigation data from several sources, e.g. map or contour matching
- G01C21/20 — Instruments for performing navigational calculations
- G05D1/0808 — Control of attitude (roll, pitch, or yaw), specially adapted for aircraft
Abstract
The invention relates to the technical field of multi-rotor unmanned aerial vehicle (UAV) control and navigation, and in particular to a dynamic obstacle environment navigation method and device based on visual semantic information. The invention provides a dynamic obstacle environment navigation method based on visual semantic information, comprising the following steps: S1, segmenting an original picture and parsing the semantic information in the scene by using a probability regression prediction method based on a deep neural network model, detecting and identifying obstacles in the scene in real time, and establishing an obstacle map; S2, using a heuristic hierarchical search method, performing path search based on a motion primitive sequence of the multi-rotor dynamics model, and performing dynamic path planning on the established obstacle map; and S3, performing fast-response trajectory tracking control by using a Lie-algebra controller. By using visual semantic information, the invention enables a multi-rotor UAV to complete operation tasks with high efficiency and high dynamic performance in complex dynamic scenes.
Description
Technical Field
The invention relates to the technical field of multi-rotor unmanned aerial vehicle control and navigation, in particular to a dynamic obstacle environment navigation method and device based on visual semantic information.
Background
The quad-rotor unmanned aerial vehicle is small and light, flies flexibly, hovers reliably, and maneuvers well, and has therefore received wide attention across many fields of engineering application.
A quad-rotor aircraft equipped with an onboard vision sensor is an ideal platform for autonomous navigation tasks in complex dynamic environments, and has become an important focus in the application of and research on unmanned aerial equipment.
Application scenarios of quad-rotor UAVs with onboard vision sensors are widely distributed across scene exploration, hazard reconnaissance, indoor and outdoor mapping, environment interaction, search-and-rescue tasks, and the like.
The quad-rotor unmanned aerial vehicle consists of two pairs of counter-rotating rotors and propellers, and as a system it is underactuated, nonlinear, and strongly coupled. Some UAV controllers have been developed for near-hover conditions, together with path planning methods that do not take the complex dynamics model into account, using backstepping controllers, sliding-mode techniques, and the Dijkstra algorithm.
However, when the environment is complex and obstacles are dynamic, the quad-rotor system exhibits strong nonlinearity; moreover, such methods assume that the environment information is known, so accurately detecting the environment information in real time also becomes a problem requiring close attention.
The existing navigation method of the quad-rotor unmanned aerial vehicle has the following defects:
1) Visual semantic information is not fully utilized, the efficiency of judging dynamic obstacles is insufficient, and the processing efficiency of deep-learning-based semantic segmentation methods needs to be improved;
2) The UAV model is treated only in simplified form during trajectory planning, so the UAV cannot pass through complex obstacle fields;
3) Controller designs based on the Euler-angle attitude representation suffer from the gimbal-lock defect of Euler angles, which limits their control performance.
Disclosure of Invention
The invention aims to provide a dynamic obstacle environment navigation method and device based on visual semantic information, and solves the problems of low efficiency and poor control performance of an unmanned aerial vehicle navigation method in the prior art in a complex dynamic scene.
In order to achieve the aim, the invention provides a dynamic obstacle environment navigation method based on visual semantic information, which comprises the following steps:
s1, segmenting and analyzing semantic information in a scene of an original picture by using a probability regression prediction method based on a deep neural network model, detecting and identifying obstacles in the scene in real time, and establishing an obstacle map;
s2, using a heuristic hierarchical search method, performing path search based on a motion primitive sequence of the multi-rotor dynamics model, and performing dynamic path planning on the established obstacle map;
and S3, performing fast-response trajectory tracking control by using a Lie-algebra controller.
In an embodiment, the step S1 further includes the steps of:
s11, scaling the original picture to a specified size and carrying out block segmentation;
s12, inputting the segmented picture into a deep neural network model for probability regression prediction, and outputting predicted values of the position, the size and the confidence coefficient of the obstacle in the picture scene;
and S13, removing redundant predicted values by a non-maximum suppression (NMS) method to obtain a target detection result, wherein the target detection result is represented by a bounding box and an object class.
In an embodiment, the prediction parameters of the probabilistic regression prediction method for the deep neural network model in step S1 further include:
the distance coordinates of the picture pixel point and the center of the predicted object;
the ratio of the predicted object size to the image size of the picture pixel;
obstacle prediction confidence, including accuracy of predicted selected region dimensions and confidence in predicted targets.
In an embodiment, the loss function of the deep neural network model in step S1 is the sum of a center-point position deviation loss term, a selected-frame width/height deviation loss term, and a prediction confidence deviation loss term:

the center-point position deviation loss term is:

L_{xy} = \lambda_{coord} \sum_{i=1}^{S \times S} \sum_{j=1}^{B} \mathbb{1}_{ij} \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 \right]

where λ_coord is the weight of the center-point position deviation loss term, S is the number of blocks into which the picture is divided, B is the total number of predicted objects, i is the index of the cell in the currently segmented picture, j is the index of the current predicted object, 𝟙_ij is the coefficient relating cell i to the object j it is responsible for, (x_i, y_i) are the actual object center coordinates, and (x̂_i, ŷ_i) are the predicted object center coordinates;

the selected-frame width/height deviation loss term is:

L_{wh} = \lambda_{whd} \sum_{i=1}^{S \times S} \sum_{j=1}^{B} \mathbb{1}_{ij} \left[ (w_i - \hat{w}_i)^2 + (h_i - \hat{h}_i)^2 \right]

where λ_whd is the weight of the selected-frame width/height deviation loss term, (w_i, h_i) are the actual width and height, (ŵ_i, ĥ_i) are the predicted object width and height, and S, B, i, j and 𝟙_ij are as above;

the prediction confidence deviation loss term is:

L_{conf} = \lambda_{Nobj} \sum_{i=1}^{S \times S} \sum_{j=1}^{B} \mathbb{1}_{ij} \left( C_i - \hat{C}_i \right)^2

where λ_Nobj is the weight of the prediction confidence deviation loss term, C_i is the actual object confidence, and Ĉ_i is the predicted object confidence.
In an embodiment, in step S2:
and the mass normalization acting force, the direction and the angular speed under the inertial coordinate system are equivalently obtained through the control input of the multi-rotor dynamic model.
In an embodiment, the method for performing path search based on the motion primitive sequence of the multi-rotor dynamics model in step S2 further includes:
by selecting and extending the jerk j in a time sequence, a search over paths in space is formed; by finding and optimizing the trajectory from the initial state s_0 to the final state s_g, an optimal trajectory is obtained for which the total control effort J and the time T required by the target are optimal, the corresponding objective function being

J = \int_0^T \left( \| j(t) \|^2 + \kappa \, \| a(t) \|^2 \right) \mathrm{d}t + \rho \, T

where j is the jerk, a the acceleration, ρ the time-loss term coefficient, and κ the acceleration term coefficient.
In an embodiment, the constraint conditions in the path planning of the optimized trajectory algorithm in step S2 include a dynamic property constraint and a collision constraint:
the dynamics constraint satisfies the following condition:
wherein,in order to be the speed of the vehicle,in order to be able to accelerate the vehicle,to add acceleration, v max Is the maximum value of the velocity, a max Is the maximum value of acceleration, j max Is the maximum value of acceleration;
the collision restraint is achieved by:
checking for verification of the presence of intersection between xi and point cloud, i.e. checking the following formula
If the intersection exists, the multi-rotor system is considered to collide with the obstacle;
wherein, O is an obstacle set, O is a set element in the obstacle, D is an Euclidean distance, and xi is the position of the multi-rotor system in R 3 Ellipsoid modeling in (1), d is the position of the multi-rotor system.
In an embodiment, in the step S2, the heuristic hierarchical search method further comprises using a prior trajectory Φ_p from the low-dimensional space as a heuristic to accelerate the search for the trajectory Φ_q in the high-dimensional space, the search heuristic being

H(n) = \max \left( H_1(n), \, H_2(n) \right), \qquad H_2(n) = J_{n \to g} + \rho \, T

where q is the dimension of the high-dimensional trajectory, p the dimension of the low-dimensional trajectory, g the goal-point index, n the current-point index, H_1 the heuristic cost function representing the dimension-raising distance, H_2 the heuristic cost function from the high-dimensional current point to the goal, J_{n→g} the total control effort from the high-dimensional point to the goal, ρ the time-loss term coefficient, and T the time.
In one embodiment, the control quantities of the Lie-algebra controller in step S3 include the lift τ and the torque M.
The lift τ is generated by the following equation:

\tau = \left( -k_x e_x - k_v e_v + m g e_3 + m \ddot{x}_d \right) \cdot R e_3

where k_x is the position term coefficient, k_v the velocity term coefficient, e_x the position error, e_v the velocity error, m the multi-rotor mass, g the gravitational acceleration, e_3 the z-axis unit vector of the inertial coordinate system, R the attitude of the multi-rotor system, and \ddot{x}_d the desired acceleration;
the torque M is generated by the following equation:

M = -k_R e_R - k_\Omega e_\Omega + \Omega \times J\Omega - J \left( \hat{\Omega} R^{T} R_C \Omega_C - R^{T} R_C \dot{\Omega}_C \right)

where the subscript C denotes the desired value of the control instruction, k_R and k_Ω are control parameters, e_R is the attitude error, e_Ω the angular velocity error, R_C the desired attitude, Ω_C the desired angular velocity, \dot{\Omega}_C the desired angular acceleration, Ω ∈ R^3 the angular velocity of the multi-rotor system, J ∈ R^{3×3} the moment of inertia of the multi-rotor system, R the attitude of the multi-rotor system, and \hat{\Omega} the skew-symmetric (hat-map) matrix of Ω.
In order to achieve the above object, the present invention provides a dynamic obstacle environment navigation device based on visual semantic information, which is implemented by using the dynamic obstacle environment navigation method based on visual semantic information as described in any one of the above, and comprises a target detection module, a path search planning module and an execution module:
the target detection module is connected with the multi-rotor system and the path search planning module and comprises an image segmentation module and a regression prediction module, wherein the image segmentation module receives and segments an original picture of the multi-rotor system, and the regression prediction module performs probability regression prediction on the segmented picture based on a deep neural network model to obtain a target detection result;
the path search planning module is connected with the target detection module and the execution module and comprises a map generation module and a path planning module, the map generation module generates an obstacle map according to a target detection result, the path planning module performs path search based on a motion element sequence of a multi-rotor dynamic model by using a heuristic layered search method and performs dynamic path planning on the established obstacle map;
the execution module is connected with the multi-rotor system and comprises a navigation module and a controller module, the navigation module outputs expected lift force and expected attitude according to a navigation track formed by dynamic path planning, the controller module is a controller Li Dai, receives the expected lift force and the expected attitude of the navigation module, receives the actual lift force and the actual attitude of the multi-rotor system, and controls the multi-rotor system through the lift force and torque.
According to the multi-rotor wing dynamic obstacle environment navigation method based on the visual semantic information, the multi-rotor wing unmanned aerial vehicle can complete high-efficiency and high-dynamic-performance operation tasks in a complex dynamic scene through the visual semantic information.
Drawings
The above and other features, properties and advantages of the present invention will become more apparent from the following description of the embodiments with reference to the accompanying drawings in which like reference numerals denote like features throughout the several views, wherein:
FIG. 1 discloses a flow chart of a dynamic obstacle environment navigation method based on visual semantic information according to an embodiment of the invention;
FIG. 2 discloses a detailed flowchart of a dynamic obstacle environment navigation method based on visual semantic information according to yet another embodiment of the present invention;
FIG. 3 illustrates a schematic diagram of a four-rotor non-particle model in accordance with an embodiment of the present invention;
FIG. 4a discloses a graph of the results of a first path planning experiment according to an embodiment of the present invention;
FIG. 4b discloses a graph of the results of a second path planning experiment according to an embodiment of the invention;
FIG. 5a discloses a graph of the results of a first trajectory tracking experiment according to an embodiment of the invention;
FIG. 5b discloses a graph of the results of a second trajectory tracking experiment according to an embodiment of the invention;
FIG. 6 discloses a schematic diagram of a dynamic obstacle environment navigation system based on visual semantic information according to an embodiment of the invention.
The meanings of the reference symbols in the figures are as follows:
100. a target detection module;
101. a picture segmentation module;
102. a regression prediction module;
200. a path search planning module;
201. a map generation module;
202. a path planning module;
300. an execution module;
301. a navigation module;
302. a controller module;
400. a four-rotor system.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
Aiming at the defects of the existing navigation methods described in the background art, the invention provides a complex dynamic scene navigation method based on visual semantic information; the method does not involve the low-level rotor dynamics and is therefore applicable to other multi-rotor systems.
Taking a quad-rotor unmanned aerial vehicle system as an example, the quad-rotor unmanned aerial vehicle carries a forward-looking camera as a visual sensing unit and guides path planning by using visual semantic information.
Fig. 1 discloses a flow chart of a dynamic obstacle environment navigation method based on visual semantic information according to an embodiment of the present invention, and as shown in fig. 1, the dynamic obstacle environment navigation method based on visual semantic information provided by the present invention mainly includes the following 3 steps:
s1, segmenting and analyzing semantic information in a scene by using a probability regression method based on a deep neural network model, detecting and identifying obstacles in the scene in real time, and establishing obstacle map information;
s2, using a heuristic hierarchical search method, performing path search based on a motion primitive sequence of the four-rotor dynamics model, and performing dynamic path planning on the established obstacle map;
and S3, performing fast-response trajectory tracking control by using a Lie-algebra controller.
Fig. 2 discloses a detailed flowchart of a dynamic obstacle environment navigation method based on visual semantic information according to another embodiment of the present invention, and each step of the dynamic obstacle environment navigation method based on visual semantic information according to the present invention is described in detail below with reference to fig. 1 and fig. 2.
S1, segmenting and analyzing semantic information in a scene by using a probability regression method based on a deep neural network model, detecting and identifying obstacles in the scene in real time, and establishing obstacle map information.
Step S1 further comprises the steps of:
s11, scaling the original picture to a specified size and carrying out block segmentation.
In the embodiment shown in fig. 2, step S11 further includes performing a series of pre-processing on the original picture, including image noise filtering, luminance balancing, and the like. The raw pictures are obtained from the vision sensing unit of the quad-rotor system.
In the embodiment shown in fig. 2, step S11 further includes determining whether a desired detection target was found in the previous N frames of images:
if not, the current original picture is evenly divided to form a pre-selected grid — in some embodiments into N × N grid-shaped pixel blocks — and target regression prediction is carried out on each unit-grid pixel block;
if so, the segmentation around the previous target position is made denser, generating a finer pre-selected grid, and target regression prediction is again carried out on each unit-grid pixel block (a minimal segmentation sketch follows this list).
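For illustration only, a minimal sketch of the uniform block segmentation described above, assuming the picture has already been scaled so its height and width divide evenly by the grid count S; the function name and NumPy layout are our assumptions, not the patent's implementation:

```python
import numpy as np

def segment_into_grid(image: np.ndarray, s: int) -> np.ndarray:
    """Split an H x W x C picture into an s x s grid of equal pixel blocks.
    Assumes H and W are divisible by s (the picture was already scaled to
    a fixed size in step S11)."""
    h, w, c = image.shape
    bh, bw = h // s, w // s
    # Reshape so blocks[i][j] is the pixel block at grid row i, column j.
    blocks = image.reshape(s, bh, s, bw, c).swapaxes(1, 2)
    return blocks  # shape (s, s, bh, bw, c)
```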
And S12, inputting the segmented picture into a convolution network of the deep neural network model for probability regression prediction, and outputting predicted values of the position, the size and the confidence coefficient of the obstacle in the picture scene.
And S13, removing redundant predicted values by the non-maximum suppression method to obtain a target detection result, wherein the target detection result is represented by a bounding box and an object class.
In the embodiment shown in fig. 2, the target detection result of step S13 is the position information of the target object.
And generating an obstacle map according to the position information of the target object, wherein in the embodiment shown in fig. 2, the obstacle map is an obstacle point cloud map.
The Non-Maximum Suppression (NMS) method in step S13 suppresses elements that are not local maxima, and can be understood as a local maximum search. Here "local" refers to a neighborhood, which has two variable parameters: the dimension of the neighborhood and its size.
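A minimal sketch of greedy non-maximum suppression consistent with the description above; the (x1, y1, x2, y2) box format and the 0.5 IoU threshold are assumptions, as the patent does not fix them:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Keep the highest-confidence box, drop neighbours whose IoU with it
    exceeds iou_thresh, and repeat on the remainder (local maximum search)."""
    order = list(np.argsort(scores)[::-1])
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [k for k in order if iou(boxes[best], boxes[k]) < iou_thresh]
    return keep
```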
In step S12, the prediction parameters of the probabilistic regression detection method of the deep neural network model include: the distance coordinates from the picture pixel point to the center of the predicted object, the ratio of the predicted object size to the picture size, and the obstacle prediction confidence.
The distance coordinates (r_x, r_y) from the picture pixel point to the center of the predicted object are

r_x = \frac{x_c}{w_i} S - x_{col}, \qquad r_y = \frac{y_c}{h_i} S - y_{row}

where (x_c, y_c) are the predicted object center coordinates, (w_i, h_i) the width and height of the picture, S the number of blocks into which the picture is divided, and (x_{col}, y_{row}) the grid-point coordinates of the predicted block center.
The ratio (r_w, r_h) of the predicted object size to the picture size is

r_w = \frac{w_b}{w_i}, \qquad r_h = \frac{h_b}{h_i}

where (w_i, h_i) are the width and height of the picture and (w_b, h_b) the width and height of the predicted object.
The obstacle prediction confidence P_conf, i.e. the confidence of predicting an obstacle of a certain object type, mainly considers two influencing factors: the accuracy of the predicted selected-region size and the confidence of the predicted target.
The total obstacle prediction confidence P_conf is expressed as:

P_{conf} = Pr(\mathrm{Object}) \times IOU_{pred}^{truth}

where Pr(Object) is 1 if an object is present in the cell and 0 otherwise, and IOU_{pred}^{truth} is the accuracy (intersection-over-union) of the predicted box relative to the ground truth.
To control the magnitude of the loss-function gradient and to prevent abrupt gradient changes caused by the loss value at individual points, step S1 of the invention introduces two parameters, λ_coord and λ_Nobj, to weight the coordinate-position loss and the confidence loss.
Further, the loss function terms of the deep neural network model of step S1 include: a center-point position deviation loss term, a selected-frame width/height deviation loss term, and a prediction confidence deviation loss term.
The loss function of the deep neural network model is the sum of the above loss terms.
The center-point position deviation loss term is:

L_{xy} = \lambda_{coord} \sum_{i=1}^{S \times S} \sum_{j=1}^{B} \mathbb{1}_{ij} \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 \right]

where λ_coord is the weight of the center-point position deviation loss term, S is the number of blocks into which the picture is divided, B is the total number of predicted objects, i is the index of the cell in the currently segmented picture, j is the index of the current predicted object, 𝟙_ij is the coefficient relating cell i to the object j it is responsible for, (x_i, y_i) are the actual object center coordinates, and (x̂_i, ŷ_i) are the predicted object center coordinates.

The selected-frame width/height deviation loss term is:

L_{wh} = \lambda_{whd} \sum_{i=1}^{S \times S} \sum_{j=1}^{B} \mathbb{1}_{ij} \left[ (w_i - \hat{w}_i)^2 + (h_i - \hat{h}_i)^2 \right]

where λ_whd is the weight of the selected-frame width/height deviation loss term, (w_i, h_i) are the actual width and height, (ŵ_i, ĥ_i) are the predicted object width and height, and S, B, i, j and 𝟙_ij are as above.

The prediction confidence deviation loss term is:

L_{conf} = \lambda_{Nobj} \sum_{i=1}^{S \times S} \sum_{j=1}^{B} \mathbb{1}_{ij} \left( C_i - \hat{C}_i \right)^2

where λ_Nobj is the weight of the prediction confidence deviation loss term, C_i is the actual object confidence, and Ĉ_i is the predicted object confidence.
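Putting the three terms together, a minimal sketch of the summed loss; the array shapes, the mask encoding 𝟙_ij, and the default weights are assumptions, since the patent fixes none of these numerically:

```python
import numpy as np

def detection_loss(pred, truth, obj_mask,
                   lam_coord=5.0, lam_whd=5.0, lam_nobj=0.5):
    """Sum of the three deviation loss terms above.

    pred, truth: (S*S, B, 5) arrays holding (x, y, w, h, C) for every
    cell i and predicted object j; obj_mask is the (S*S, B) indicator
    written 1_ij in the equations.
    """
    d = pred - truth
    # Centre-point position deviation loss term.
    l_xy = lam_coord * np.sum(obj_mask * (d[..., 0] ** 2 + d[..., 1] ** 2))
    # Selected-frame width/height deviation loss term.
    l_wh = lam_whd * np.sum(obj_mask * (d[..., 2] ** 2 + d[..., 3] ** 2))
    # Prediction confidence deviation loss term.
    l_conf = lam_nobj * np.sum(obj_mask * d[..., 4] ** 2)
    return l_xy + l_wh + l_conf
```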
And S2, using a heuristic hierarchical search method, performing path search based on a motion primitive sequence of the four-rotor dynamics model, and performing dynamic path planning on the established obstacle map.
In step S2, a heuristic hierarchical search method is adopted to carry out dynamic trajectory planning on the established obstacle map; the four-rotor dynamics model is used to generate the motion primitive sequence, the four-rotor body is modeled as an ellipsoid, the flight attitude of the ellipsoid along the trajectory is calculated, and the positional relation between the four rotors and the obstacles is checked against the constraint conditions.
The step S2 further includes the steps of:
s21, establishing a dynamic model of path planning of the four-rotor system.
The path trajectory Φ (t) at time t of the quadrotor system is represented by the following state vector:
Φ(t)=[x T ,v T ,a T ,j T ] T
where x is the position of the trajectory point, v its velocity, a its acceleration, j its jerk, and the superscript T denotes vector transposition.
When the thrust f_d, direction R_d, and angular velocity ω_d required by the four-rotor system are known, the control inputs required by the dynamics model calculations of the four-rotor system can be derived equivalently.
In the embodiment shown in fig. 1 and 2, jerk is selected as the control input to the quadrotor dynamics model. After the control input is obtained by using the path planning method, the required information such as direction, angular velocity and the like can be reversely deduced, and the parameters of the thrust, the direction and the angular velocity are associated with the control input according to the following formula.
The thrust f_d is the mass-normalized force in the inertial coordinate system:

f_d = a_d + g z_w

where g is the gravitational acceleration, the vector z_w = [0, 0, 1]^T is the Z-axis direction vector in the world coordinate system, and a_d is the four-rotor acceleration vector in the planned trajectory.
The trajectory attitude R of the four-rotor system in SO(3) space is represented as R = [r_1, r_2, r_3], where r_1, r_2, r_3 are the first, second, and third column components of R; the columns are recovered from the mass-normalized thrust direction and the four-rotor yaw angle ψ, and the angular velocity ω (with components ω_1, ω_2, ω_3) is obtained by differentiation along the trajectory.
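A sketch of how the attitude columns can be recovered from the planned acceleration and yaw under differential flatness; the exact column formulas were figures in the original, so the Mellinger-style convention used here is an assumption:

```python
import numpy as np

def desired_attitude(a_d, psi, g=9.81):
    """Recover a desired attitude R = [r1, r2, r3] from the planned
    acceleration a_d and yaw angle psi via differential flatness.
    Degenerate when the thrust axis is parallel to the yaw heading."""
    f_d = a_d + g * np.array([0.0, 0.0, 1.0])   # mass-normalised thrust
    r3 = f_d / np.linalg.norm(f_d)              # body z along thrust
    x_c = np.array([np.cos(psi), np.sin(psi), 0.0])
    r2 = np.cross(r3, x_c)
    r2 /= np.linalg.norm(r2)
    r1 = np.cross(r2, r3)
    return np.column_stack([r1, r2, r3]), np.linalg.norm(f_d)
```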
And S22, performing path search based on the motion infinitesimal sequence.
The jerk u_m is selected as the control input of the four-rotor dynamics model. A motion primitive is the state evolution, over a short time period τ > 0, that carries the initial state s_0 toward the final state s_g, and can be expressed in polynomial form as follows:

p(t) = p_0 + v_0 t + \tfrac{1}{2} a_0 t^2 + \tfrac{1}{6} u_m t^3

where t is the current time, a_0 the initial acceleration, v_0 the initial velocity, and p_0 the initial position.
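A minimal sketch of forward-integrating one constant-jerk motion primitive by the polynomial above; the sampling count and function name are illustrative:

```python
import numpy as np

def propagate_primitive(p0, v0, a0, u_m, tau, n_samples=10):
    """Forward-integrate one constant-jerk motion primitive over the
    short duration tau > 0, returning sampled (p, v, a) states along it."""
    states = []
    for t in np.linspace(0.0, tau, n_samples):
        a = a0 + u_m * t
        v = v0 + a0 * t + 0.5 * u_m * t ** 2
        p = p0 + v0 * t + 0.5 * a0 * t ** 2 + u_m * t ** 3 / 6.0
        states.append((p, v, a))
    return states
```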
By selecting and extending the jerk j in a time sequence, a search over paths in space is formed; by finding and optimizing the path from the initial state s_0 to the final state s_g, an optimal trajectory is obtained that is optimal in the total control effort J and the time T required by the target.
The objective function corresponding to the trajectory optimization algorithm is

J = \int_0^T \left( \| j(t) \|^2 + \kappa \, \| a(t) \|^2 \right) \mathrm{d}t + \rho \, T

where j is the jerk, a the acceleration, ρ the time-loss term coefficient, and κ the acceleration term coefficient.
In the optimization process, the dynamic constraint and the collision constraint are mainly considered, and therefore, in the path planning of the optimization trajectory algorithm of step S22 of the present invention, the constraint conditions include a dynamic property constraint and a collision constraint.
The dynamics constraints of a four-rotor system derive essentially from the maximum and minimum thrust and torque that the rotors can provide; taking advantage of the differential flatness of the four-rotor system, velocity, acceleration, and jerk limits are applied independently on each axis.
The dynamics in the constrained path planning need to satisfy the following conditions:

\| \dot{x}(t) \|_\infty \le v_{max}, \qquad \| \ddot{x}(t) \|_\infty \le a_{max}, \qquad \| \dddot{x}(t) \|_\infty \le j_{max}

where x(t) is the position coordinate of the four rotors, \dot{x} the velocity, \ddot{x} the acceleration, \dddot{x} the jerk, v_max the maximum velocity, a_max the maximum acceleration, and j_max the maximum jerk.
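These per-axis limits can be checked along a sampled primitive, for instance as in the following sketch, which reuses the propagate_primitive sampler above; the check granularity is an assumption:

```python
import numpy as np

def primitive_is_feasible(states, u_m, v_max, a_max, j_max):
    """Check the per-axis dynamics constraints along a sampled primitive;
    states is the (p, v, a) list produced by propagate_primitive."""
    if np.any(np.abs(u_m) > j_max):      # jerk is constant on the primitive
        return False
    for _, v, a in states:
        if np.any(np.abs(v) > v_max) or np.any(np.abs(a) > a_max):
            return False
    return True
```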
FIG. 3 discloses a schematic diagram of the four-rotor non-particle model. As shown in FIG. 3, to handle the collision constraint while taking the actual robot shape and attitude into account, the four-rotor body is modeled as an ellipsoid ξ in R^3; given the attitude state S, the space occupied by the four-rotor system is expressed as

\xi(S, d) = \left\{ p = R E \tilde{p} + d \in \mathbb{R}^3 \;\middle|\; \| \tilde{p} \| \le 1 \right\}, \qquad E = \mathrm{diag}(r, r, h)

where p is an element of the ellipsoid set, p̃ the scaled (unit-ball) vector, E the scaling matrix, r the ellipsoid radius, h its height, R^3 the three-dimensional real space, S the given attitude state of the four-rotor system, and d the position of the four-rotor system;
and where R is the attitude of the four-rotor system, with R = [r_1, r_2, r_3].
In the ellipsoid model of the four-rotor system, while tracking the trajectory it is checked whether an intersection exists between ξ and the point cloud, i.e. the following condition is checked:

\exists \, o \in O : \; \| E^{-1} R^{T} (o - d) \| \le 1

where O is the obstacle set, o a set element within the obstacle, D the Euclidean distance used in the intersection test, and d the position of the four-rotor system.
If there is an intersection, the quad-rotor system is considered to collide with the obstacle.
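A sketch of this ellipsoid-versus-point-cloud test: a point o lies inside ξ(R, d) exactly when ‖E⁻¹Rᵀ(o − d)‖ ≤ 1, which follows from the ellipsoid definition above; the vectorized NumPy form is our choice:

```python
import numpy as np

def in_collision(points, R, d, r, h):
    """Check whether any obstacle point lies inside the body ellipsoid
    xi(R, d) = { R E p~ + d : ||p~|| <= 1 }, E = diag(r, r, h).
    points: (N, 3) obstacle point cloud O; R: 3x3 attitude; d: position."""
    E_inv = np.diag([1.0 / r, 1.0 / r, 1.0 / h])
    local = (E_inv @ R.T @ (points - d).T).T   # map points to the unit ball
    return bool(np.any(np.linalg.norm(local, axis=1) <= 1.0))
```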
In order to accelerate the planning speed in the dynamic environment, step S22 of the present invention further includes the following steps:
a heuristic layered search method is adopted to accelerate track searching, firstly, a lower-dimensional space is searched to obtain a prior track, the prior track is used as a heuristic method to guide searching of a high-dimensional space, and finally a dynamic feasible path track is generated.
A priori trajectory in low dimensional space of phi p The trajectory being searched in the high-dimensional space is Φ q (q>p)。
Wherein q is a high-dimensional track, p is a low-dimensional track, g is an end point subscript, n is a current point subscript, H1 is a heuristic cost function representing a dimension-increasing distance, H2 is a heuristic cost function from a high-dimensional starting point to an end point,the total control force from the high-dimensional starting point to the end point is shown, rho is a time loss term coefficient, and T is time.
The search for the trajectory Φ_q can thereby be accelerated by using the prior trajectory Φ_p to initialize and guide it.
The newly searched trajectory Φ_q tends to stay consistent with the prior trajectory Φ_p, while H_2 adds the finer dynamics to the planning objective.
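As a sketch, node expansion in the high-dimensional search can then be ordered by an A*-style priority; combining H_1 and H_2 with max() keeps the heuristic admissible if each is, though the patent does not spell out the combination rule:

```python
def node_priority(g_cost, h1, h2):
    """A*-style priority of a search node: accumulated cost plus the
    larger of the two heuristics H1 (prior-trajectory distance) and
    H2 (control effort plus time to the goal)."""
    return g_cost + max(h1, h2)
```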
And S3, performing fast-response trajectory tracking control by using a Lie-algebra controller.
To address the aggressive attitudes and highly dynamic behavior that appear in complex scenes, the four-rotor dynamics model is established in the Lie-algebra setting on the three-dimensional rotation group, i.e. the SO(3) space:

\dot{x} = v, \qquad m\dot{v} = \tau R e_3 - m g e_3, \qquad \dot{R} = R \hat{\Omega}, \qquad J\dot{\Omega} + \Omega \times J\Omega = M

where x ∈ R^3 is the position of the four-rotor system in the inertial coordinate system, v ∈ R^3 its velocity in the inertial coordinate system, m the mass of the four-rotor system, Ω ∈ R^3 its angular velocity, J ∈ R^{3×3} its moment of inertia, R its attitude, e_3 the z-axis unit vector of the inertial coordinate system, \hat{\Omega} the skew-symmetric (hat-map) matrix of Ω, and τ and M the lift and torque defined below.
On the basis of the four-rotor dynamics model, a feedforward-feedback nonlinear Lie-algebra controller is established, whose control quantities comprise the lift τ and the torque M.
The lift τ is generated by the following equation:

\tau = \left( -k_x e_x - k_v e_v + m g e_3 + m \ddot{x}_d \right) \cdot R e_3

where k_x is the position term coefficient, k_v the velocity term coefficient, e_x the position error, e_v the velocity error, m the four-rotor mass, g the gravitational acceleration, e_3 the z-axis unit vector of the inertial coordinate system, R the attitude of the four-rotor system, and \ddot{x}_d the desired acceleration.
The torque M is generated by the following equation:

M = -k_R e_R - k_\Omega e_\Omega + \Omega \times J\Omega - J \left( \hat{\Omega} R^{T} R_C \Omega_C - R^{T} R_C \dot{\Omega}_C \right)

where the subscript C denotes the desired value of the control instruction, k_R and k_Ω are control parameters, e_R is the attitude error, e_Ω the angular velocity error, R_C the desired attitude, Ω_C the desired angular velocity, \dot{\Omega}_C the desired angular acceleration, Ω ∈ R^3 the angular velocity of the four-rotor system, J ∈ R^{3×3} the moment of inertia of the four-rotor system, R the attitude of the four-rotor system, and \hat{\Omega} the skew-symmetric (hat-map) matrix of Ω.
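A consolidated sketch of one control step of this feedforward-feedback geometric controller on SO(3); the variable list above matches the standard form, but the exact equations were figures in the original, so the sign conventions (ENU frame, thrust along +Re_3) are assumptions to be checked against the actual vehicle:

```python
import numpy as np

def hat(w):
    """Hat map: R^3 -> so(3), the skew-symmetric matrix of w."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def vee(W):
    """Inverse of the hat map."""
    return np.array([W[2, 1], W[0, 2], W[1, 0]])

def geometric_control(x, v, R, Om, x_d, v_d, a_d, R_c, Om_c, dOm_c,
                      m, J, k_x, k_v, k_R, k_Om, g=9.81):
    """One step of the Lie-algebra (geometric) controller: returns (tau, M)."""
    e3 = np.array([0.0, 0.0, 1.0])
    e_x = x - x_d                      # position error
    e_v = v - v_d                      # velocity error
    # Lift: project the desired force onto the actual body z-axis.
    f_des = -k_x * e_x - k_v * e_v + m * g * e3 + m * a_d
    tau = f_des @ (R @ e3)
    # Attitude and angular-velocity errors on SO(3).
    e_R = 0.5 * vee(R_c.T @ R - R.T @ R_c)
    e_Om = Om - R.T @ R_c @ Om_c
    # Torque with gyroscopic and feedforward terms.
    M = (-k_R * e_R - k_Om * e_Om + np.cross(Om, J @ Om)
         - J @ (hat(Om) @ R.T @ R_c @ Om_c - R.T @ R_c @ dOm_c))
    return tau, M
```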
The invention uses the Lie-algebra controller to perform fast-response trajectory tracking control, thereby realizing navigation tasks in complex dynamic scenes using visual semantic information.
Fig. 4a discloses a graph of results of a first path planning experiment according to an embodiment of the invention, fig. 4b discloses a graph of results of a second path planning experiment according to an embodiment of the invention, fig. 5a discloses a graph of results of a first trajectory tracking experiment according to an embodiment of the invention, fig. 5b discloses a graph of results of a second trajectory tracking experiment according to an embodiment of the invention, and table 1 shows a comparison of experimental parameters of experiment 1 shown in fig. 4a and 5a and experiment 2 shown in fig. 4b and 5 b.
In experiment 1, shown in fig. 4a and 5a, path planning as shown in fig. 4a was performed on the obstacle map; the solid line in fig. 5a is the desired planned path and the broken line is the path obtained under actual control. In experiment 2, shown in fig. 4b and 5b, path planning as shown in fig. 4b was performed on the obstacle map; the solid line in fig. 5b is the desired planned path and the broken line is the path obtained under actual control.
As can be seen from experiments 1 and 2, the four-rotor dynamic obstacle environment navigation method based on the visual semantic information can realize navigation and control of the four-rotor unmanned aerial vehicle in a complex dynamic obstacle environment.
TABLE 1
The invention also provides a dynamic obstacle environment navigation device based on visual semantic information, which can realize the dynamic obstacle environment navigation method based on visual semantic information, and still takes a four-rotor system as an example.
Fig. 6 is a schematic diagram of a dynamic obstacle environment navigation system based on visual semantic information according to an embodiment of the present invention, and in the embodiment shown in fig. 6, the dynamic obstacle environment navigation system based on visual semantic information according to the present invention includes a target detection module 100, a route search planning module 200, and an execution module 300.
The object detection module 100 is connected to the four-rotor system 400 and the path search planning module 200.
The object detection module 100 includes an image segmentation module 101 and a regression prediction module 102.
The image segmentation module 101 receives and segments an original image of the four-rotor system 400, and sends the segmented image, i.e., a block image, to the regression prediction module 102.
The regression prediction module 102 performs probabilistic regression prediction on the segmented picture based on the deep neural network model to obtain a target detection result, and sends the obstacle information corresponding to the target detection result to the path search planning module 200.
The path search planning module 200 is connected to the target detection module 100 and the execution module 300, and includes a map generation module 201 and a path planning module 202.
The map generation module 201 generates an obstacle map according to the target detection result. The obstacle map is a point cloud map, and the map generation module 201 sends the point cloud map to the path planning module 202.
The path planning module 202 performs path search based on a motion primitive sequence of the four-rotor dynamics model using a heuristic hierarchical search method, performs dynamic path planning on the established obstacle map, and sends the planned navigation trajectory to the execution module.
The implement module 300, coupled to the quad-rotor system 400, includes a navigation module 301 and a controller module 302.
The navigation module 301 outputs an expected lift force and an expected posture according to a navigation track formed by dynamic path planning.
The controller module 302 is a Lie-algebra controller; it receives the desired lift and desired attitude from the navigation module 301, receives the actual lift and actual attitude of the four-rotor system 400, and controls the four-rotor system 400 through lift and torque.
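A minimal sketch of how the three modules might be wired together per this description; all class and method names here are illustrative, not from the patent:

```python
class NavigationPipeline:
    """Detection -> mapping -> planning -> control, mirroring modules
    100/201/202/302 of the device description."""

    def __init__(self, detector, mapper, planner, controller):
        self.detector = detector      # target detection module (100)
        self.mapper = mapper          # map generation module (201)
        self.planner = planner        # path planning module (202)
        self.controller = controller  # Lie-algebra controller module (302)

    def step(self, image, state):
        boxes = self.detector(image)             # bounding boxes + classes
        cloud = self.mapper(boxes, state)        # obstacle point cloud map
        trajectory = self.planner(cloud, state)  # dynamic path planning
        x_d, v_d, a_d, R_c = trajectory.reference(state.t)  # hypothetical API
        return self.controller(state, x_d, v_d, a_d, R_c)   # -> (tau, M)
```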
According to the four-rotor dynamic obstacle environment navigation method and device based on the visual semantic information, the multi-rotor unmanned aerial vehicle can complete high-efficiency and high-dynamic-performance operation tasks in a complex dynamic scene through the visual semantic information.
While, for purposes of simplicity of explanation, the methodologies are shown and described as a series of acts, it is to be understood and appreciated that the methodologies are not limited by the order of acts, as some acts may, in accordance with one or more embodiments, occur in different orders and/or concurrently with other acts from that shown and described herein or not shown and described herein, as would be understood by one skilled in the art.
As used in this application and the appended claims, the terms "a," "an," and/or "the" are not necessarily intended to be singular, but may also include the plural, unless the context clearly indicates otherwise. In general, the terms "comprises" and "comprising" merely indicate that explicitly identified steps and elements are included; the steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
The embodiments described above are provided to enable persons skilled in the art to make or use the invention and that modifications or variations can be made to the embodiments described above by persons skilled in the art without departing from the inventive concept of the present invention, so that the scope of protection of the present invention is not limited by the embodiments described above but should be accorded the widest scope consistent with the innovative features set forth in the claims.
Claims (8)
1. A dynamic obstacle environment navigation method based on visual semantic information is characterized by comprising the following steps:
s1, segmenting and analyzing semantic information in a scene of an original picture by using a probability regression prediction method based on a deep neural network model, detecting and identifying obstacles in the scene in real time, and establishing an obstacle map;
s2, using a heuristic hierarchical search method, performing path search based on a motion primitive sequence of the multi-rotor dynamics model, and performing dynamic path planning on the established obstacle map;
s3, performing fast-response trajectory tracking control by using a Lie-algebra controller;
wherein, the loss function of the deep neural network model in the step S1 is the sum of the central point position deviation loss term, the selected frame width and height deviation loss term, and the prediction confidence degree deviation loss term:
the center-point position deviation loss term is:

L_{xy} = \lambda_{coord} \sum_{i=1}^{S \times S} \sum_{j=1}^{B} \mathbb{1}_{ij} \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 \right]

wherein λ_coord is the weight of the center-point position deviation loss term, S is the number of blocks into which the picture is divided, B is the total number of predicted objects, i is the index of the cell in the currently segmented picture, j is the index of the current predicted object, 𝟙_ij is the coefficient relating cell i to the object j it is responsible for, (x_i, y_i) are the actual object center coordinates, and (x̂_i, ŷ_i) are the predicted object center coordinates;

the selected-frame width/height deviation loss term is:

L_{wh} = \lambda_{whd} \sum_{i=1}^{S \times S} \sum_{j=1}^{B} \mathbb{1}_{ij} \left[ (w_i - \hat{w}_i)^2 + (h_i - \hat{h}_i)^2 \right]

wherein λ_whd is the weight of the selected-frame width/height deviation loss term, (w_i, h_i) are the actual width and height, (ŵ_i, ĥ_i) are the predicted object width and height, and S, B, i, j and 𝟙_ij are as above;

the prediction confidence deviation loss term is:

L_{conf} = \lambda_{Nobj} \sum_{i=1}^{S \times S} \sum_{j=1}^{B} \mathbb{1}_{ij} \left( C_i - \hat{C}_i \right)^2

wherein λ_Nobj is the weight of the prediction confidence deviation loss term, C_i is the actual object confidence, and Ĉ_i is the predicted object confidence;
the control quantity of the Li Dai controller in the step S3 includes a lift force τ and a torque M,
the lift τ is generated by the following equation:
wherein k is x Coefficient of position term, k v As coefficient of velocity term, e x Position error, e v For speed error, m is the multi-rotor mass, g is the gravitational acceleration, e 3 Is a z-axis unit vector of an inertial coordinate system, R is the attitude of the multi-rotor system,is the desired acceleration;
the torque M is generated by the following equation:
where C is the desired value of the control instruction, k R ,k Ω To control the parameters, e R As attitude error, e Ω Error of angular velocity, R C To the desired attitude, Ω C In order to expect the angular velocity of the object,for a desired angular acceleration, Ω ∈ R 3 For the angular velocity of a multi-rotor system,predicting angular velocity for a multi-rotor system, J ∈ R 3×3 The moment of inertia of the multi-rotor system, and R is the attitude of the multi-rotor system.
2. The dynamic obstacle environment navigation method based on visual semantic information according to claim 1, wherein the step S1 further comprises the steps of:
s11, scaling the original picture to a specified size and carrying out block segmentation;
s12, inputting the segmented picture into a deep neural network model for probability regression prediction, and outputting predicted values of the position, the size and the confidence coefficient of an obstacle in a picture scene;
and S13, removing redundant predicted values by a non-maximum suppression method to obtain a target detection result, wherein the target detection result is represented by a bounding box and an object class.
3. The dynamic obstacle environment navigation method based on visual semantic information according to claim 1, wherein the prediction parameters of the probabilistic regression prediction method of the deep neural network model in step S1 further include:
distance coordinates of the picture pixel point and the center of the predicted object;
the ratio of the predicted object size to the picture size;
obstacle prediction confidence, including accuracy of predicted selected region size and confidence in the predicted target.
4. The dynamic obstacle environment navigation method based on visual semantic information according to claim 1, wherein in the step S2:
and the mass normalization acting force, the direction and the angular speed under the inertial coordinate system are equivalently obtained through the control input of the multi-rotor dynamic model.
5. The dynamic obstacle environment navigation method based on visual semantic information according to claim 1, wherein the method for performing path search based on the motion primitive sequence of the multi-rotor dynamics model in step S2 further comprises:
by selecting and extending the jerk j in a time sequence, a path in space is searched; by finding and optimizing the trajectory from the initial state s_0 to the final state s_g, the optimal trajectory is obtained such that the total control effort J and the time T required by the target are optimal, the corresponding objective function being

J = \int_0^T \left( \| j(t) \|^2 + \kappa \, \| a(t) \|^2 \right) \mathrm{d}t + \rho \, T

wherein j is the jerk, a the acceleration, ρ the time-loss term coefficient, and κ the acceleration term coefficient.
6. The dynamic obstacle environment navigation method based on visual semantic information according to claim 5, wherein the constraint conditions in the path planning of the optimized trajectory algorithm in the step S2 include a dynamic characteristic constraint and a collision constraint:
the dynamics constraint satisfies the following condition:
wherein,in order to be the speed of the vehicle,in order to be able to accelerate the vehicle,to add acceleration, v max Is the maximum value of the velocity, a max Is the maximum value of acceleration, j max Is the maximum value of the acceleration;
the collision restraint is achieved by:
checking for verification of the presence of intersection between xi and point cloud, i.e. checking the following formula
If the intersection exists, the multi-rotor system is considered to collide with the obstacle;
wherein O is a barrierThe set of obstacles, o is a set element in the obstacle, D is an Euclidean distance, and xi is the number of the multi-rotor system in R 3 Ellipsoid modeling in (1), d is the position of the multi-rotor system.
7. The dynamic obstacle environment navigation method based on visual semantic information as claimed in claim 5, wherein in the step S2, the heuristic hierarchical search method further comprises using a prior trajectory Φ_p of a low-dimensional space to accelerate the search for the trajectory Φ_q in the high-dimensional space, the search heuristic being

H(n) = \max \left( H_1(n), \, H_2(n) \right), \qquad H_2(n) = J_{n \to g} + \rho \, T

wherein q is the dimension of the high-dimensional trajectory, p the dimension of the low-dimensional trajectory, g the goal-point index, n the current-point index, H_1 the heuristic cost function representing the dimension-raising distance, H_2 the heuristic cost function from the high-dimensional current point to the goal, J_{n→g} the total control effort from the high-dimensional point to the goal, ρ the time-loss term coefficient, and T the time.
8. A dynamic obstacle environment navigation device based on visual semantic information, which is realized by using the dynamic obstacle environment navigation method based on visual semantic information according to any one of claims 1 to 7, and is characterized by comprising a target detection module, a path search planning module and an execution module:
the target detection module is connected with the multi-rotor system and the path search planning module and comprises an image segmentation module and a regression prediction module, wherein the image segmentation module receives and segments an original picture of the multi-rotor system, and the regression prediction module performs probability regression prediction on the segmented picture based on a deep neural network model to obtain a target detection result;
the path search planning module is connected with the target detection module and the execution module and comprises a map generation module and a path planning module, the map generation module generates an obstacle map according to a target detection result, the path planning module performs path search based on a motion element sequence of a multi-rotor dynamic model by using a heuristic layered search method and performs dynamic path planning on the established obstacle map;
the execution module is connected with the multi-rotor system and comprises a navigation module and a controller module, the navigation module outputs expected lift force and expected attitude according to a navigation track formed by dynamic path planning, the controller module is a controller Li Dai, receives the expected lift force and the expected attitude of the navigation module, receives the actual lift force and the actual attitude of the multi-rotor system, and controls the multi-rotor system through the lift force and torque.
Priority application: CN202010242906.XA, filed 2020-03-31, claiming priority of 2020-03-31.
Publications: CN111367318A (published 2020-07-03); CN111367318B (granted 2022-11-22).