CN110825076B - Mobile robot formation navigation semi-autonomous control method based on sight line and force feedback - Google Patents

Mobile robot formation navigation semi-autonomous control method based on sight line and force feedback Download PDF

Info

Publication number
CN110825076B
Authority
CN
China
Prior art keywords
formation
mobile
robot
slave
operator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910920285.3A
Other languages
Chinese (zh)
Other versions
CN110825076A (en
Inventor
宋光明
程琳琳
曾洪
秦留界
高源
李松涛
宋爱国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201910920285.3A priority Critical patent/CN110825076B/en
Publication of CN110825076A publication Critical patent/CN110825076A/en
Application granted granted Critical
Publication of CN110825076B publication Critical patent/CN110825076B/en

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0287Control of position or course in two dimensions specially adapted to land vehicles involving a plurality of land vehicles, e.g. fleet or convoy travelling
    • G05D1/0291Fleet control

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a semi-autonomous control method for formation navigation of mobile robots based on sight line and force feedback. The master end comprises an operator, an eye tracker, a hand controller and a control computer; the slave end comprises a multi-mobile-robot system, a camera and the working environment. The master and slave ends communicate wirelessly, for example via WiFi. The slave-end multi-mobile-robot system has semi-autonomous control capability: automatic obstacle avoidance and formation keeping are realized with a virtual rigid body algorithm. The master end captures the operator's gaze signals with the eye tracker and converts them into formation-switching commands for the slave end, and intervenes remotely on the slave end through the three degrees of freedom of the hand controller's end effector. The control method combines force feedback with gaze tracking, applies gaze tracking to the control of multiple mobile robots, reduces the operator's cognitive load, and improves the efficiency and stability of the teleoperation control system.

Description

Mobile robot formation navigation semi-autonomous control method based on sight and force feedback
Technical Field
The invention relates to a semi-autonomous control method for formation navigation of mobile robots based on sight line and force feedback.
Background
Multi-robot systems offer good redundancy, robustness and scalability; they have been widely researched in recent years and are used for tasks such as large-scale reconnaissance, security inspection, and search and rescue. However, these environments tend to be complex and variable, so fully autonomous control of multiple mobile robots is difficult to achieve in them. A currently feasible and effective approach is therefore to combine teleoperation technology to perform tasks in complex environments that are difficult for humans to access or that may cause them harm.
To cope with complex and variable environments, multiple robots need to switch formations while executing tasks. In the prior art, formation switching is mainly realized by two methods: one issues different target-formation commands via the position of the haptic interaction point (HIP) at the end of the human-computer interaction device; the other uses an algorithm that lets the slave robots switch formations automatically according to the environment. In the first method, the human-computer interaction device must simultaneously control the target speed of the slave robots, and converting position information into a target-formation command is not intuitive, so the operator must stay highly alert at all times; this increases cognitive load, is inefficient, and easily causes fatigue. In the second method, the algorithm is difficult to implement and the system easily becomes unstable.
Disclosure of Invention
The invention aims to introduce gaze tracking into the master-end control loop of multiple mobile robots and combine it with force-information feedback, so that during formation navigation the multi-mobile robots can avoid obstacles, switch formations promptly and accurately according to the environment, and smoothly reach the target position.
The technical scheme adopted by the invention is as follows: the semi-autonomous control method for formation navigation of the mobile robots based on sight line and force feedback comprises a master end, a slave end and a communication link; the main end comprises an operator, a visual tracking device, a force feedback human-computer interface device and a control computer; the slave end comprises a plurality of mobile robots, a camera and a working environment; the communication link adopts WiFi or other wireless communication modes;
the operator at the master end interacts with the force feedback human-computer interface equipment through the visual tracking equipment and sends a control command to the multi-mobile-robot system at the slave end through a communication link; the multi-mobile robot system comprises a multi-mobile platform system consisting of n mobile robots;
the visual tracking equipment is used for capturing visual signals of an operator and converting the information into target formation control instructions of the slave multi-mobile robot; the multi-mobile system on the slave side forms an expected formation according to the received instructions.
The force feedback man-machine interface equipment has three-degree-of-freedom output feedback and is used for feeding back the state of the slave-end multi-robot system to the master end in a force signal mode;
the control computer predefines target formations for the slave-end multi-mobile platform via a visual interface and displays them as pictures on the interface; the visual tracking equipment captures the operator's eye movement information and generates a corresponding click event according to the figure the operator gazes at, whereupon the slave-end robots perform the corresponding formation switching.
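As a hedged illustration of this gaze-to-command step, a dwell-based click detector might look like the sketch below. The patent gives no implementation details, so the button layout, dwell threshold and the `update` interface are all assumptions:

```python
class GazeFormationSelector:
    """Sketch: turn gaze fixations on predefined formation icons into
    formation-switching click events. Hypothetical, not the patent's code."""

    def __init__(self, buttons, dwell_s=0.8):
        # buttons: {formation_name: (x_min, y_min, x_max, y_max)} screen regions
        self.buttons = buttons
        self.dwell_s = dwell_s
        self._current = None   # button currently gazed at
        self._since = None     # timestamp when that gaze began

    def _hit(self, gx, gy):
        for name, (x0, y0, x1, y1) in self.buttons.items():
            if x0 <= gx <= x1 and y0 <= gy <= y1:
                return name
        return None

    def update(self, gx, gy, t):
        """Feed one gaze sample (screen coords, timestamp in seconds).
        Returns a formation name when a dwell 'click' fires, else None."""
        name = self._hit(gx, gy)
        if name != self._current:
            self._current, self._since = name, t
            return None
        if name is not None and t - self._since >= self.dwell_s:
            self._since = t  # re-arm so the event fires once per dwell
            return name
        return None
```

Feeding gaze samples at, say, 60 Hz, a fixation held on one icon for the dwell time emits a single switching command, which is one way to realize the "click event" the text describes.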
The camera is used for feeding back the state of the multiple robots and the slave-end environment to the main end in a video or picture mode.
According to the figure the operator gazes at, the multi-mobile-robot system performs formation control with a virtual rigid body algorithm: the slave-end multi-robot system is treated as a whole, and each robot automatically realizes formation keeping and obstacle avoidance. In this way the operator can concentrate on the overall control of the slave-end robot group without worrying about whether any single robot will hit an obstacle.
When the virtual rigid body detects an obstacle or receives a formation-switching instruction, the relative poses between the vectors inside the virtual rigid body change accordingly and the robots re-form into another rigid body; in this way, formation keeping and automatic obstacle avoidance are realized automatically.
The invention is further improved in that: the force feedback man-machine interface equipment has three-degree-of-freedom output feedback, wherein the x direction and the z direction reflect the distance and angle information between the robot and the obstacle, and the output quantity in the y direction reflects the size of the formation of the whole multi-mobile platform system.
For the force feedback human-machine interface device controlling the multi-robot system, the two degrees of freedom x and z of the device's end effector respectively control the angular velocity and linear velocity of the virtual rigid body in the slave-end multi-robot system. The multi-robot system of the invention must finally reach the designated position while avoiding all obstacles as far as possible throughout the motion. Although the adopted virtual rigid body algorithm realizes automatic obstacle avoidance, it suffers from the same deadlock problem as the artificial potential field method, so force feedback is used to assist obstacle avoidance.
Therefore, the output of the force feedback human-machine interface device in the x direction corresponds to angular-velocity information and to the distance and angle between the robots and the obstacle, and the output in the z direction corresponds to linear-velocity information and to the same distance and angle; the feedback force is related to the linear and angular velocities of the robots, so that the robots keep approaching the final target position.
The invention is further improved in that: the force feedback man-machine interface equipment adopts a hand controller.
The invention is further improved in that: the visual tracking device employs an eye tracker.
The technical scheme of the invention has the following advantages and beneficial effects:
(1) The scheme of the invention realizes formation switching of the slave-end robot system by capturing the operator's gaze signal, while the remaining degree of freedom of the hand controller's end effector controls the formation size. This increases the flexibility of predefining formations, reduces the operator's cognitive load, avoids the difficulty of realizing local autonomy, and improves the efficiency and stability of the teleoperation control system.
(2) In the invention, the slave-end multi-mobile platform adopts a virtual rigid body algorithm that treats the slave-end multi-robot system as a whole, in which each robot automatically realizes formation keeping and obstacle avoidance. This lets the operator conveniently focus on the centralized control of the multiple mobile robot platforms.
(3) According to the scheme, the sight tracking is applied to the control loop of the multi-mobile robot system, and the multi-mobile robot system can be reasonably controlled by means of advanced cognition and decision-making capability of people when encountering complex tasks.
Drawings
FIG. 1 is a system framework for multi-mobile robotic formation navigation based on force feedback and line-of-sight tracking in accordance with the present invention.
FIG. 2 is a high level task control schematic of the master end of the present invention incorporating gaze tracking and force feedback.
Fig. 3 is a schematic diagram of the principle that the multi-mobile robot based on the virtual rigid body algorithm completes the task of switching the formation.
FIG. 4 is a control block diagram of a multi-mobile robotic platform based on force feedback and gaze tracking in accordance with the present invention.
Fig. 5 is a diagram of a queue switching process of multiple mobile robots based on sight tracking.
Fig. 6 shows the actual linear and angular velocities of robot1.
Fig. 7 shows the error of robot1.
Detailed Description
The working principle and working process of the present invention will be further described in detail with reference to the accompanying drawings and embodiments.
Referring to fig. 1, the force feedback and sight line tracking based semi-autonomous control system for formation navigation of mobile robots comprises a master end 1, a communication link 2 between the master end and the slave end, and a slave end 3. The master end 1 comprises an operator 1-1, a computer control platform 1-2, a hand controller 1-3 and an eye tracker 1-4, where the hand controller 1-3 of the system has six-degree-of-freedom input and three-degree-of-freedom output;
the communication link 2 adopts an Internet wireless communication mode; the slave end 3 comprises a multi-mobile robot system, an obstacle 3-2 in the environment, a target position 3-3 to be reached by the multi-robot system, and a camera 3-4 for feeding back the working state of the multi-robot system from the slave end to the master end.
The multi-mobile-robot system comprises a multi-mobile-platform system 3-1 consisting of n mobile robots 3-1-i (i = 1, 2, …, n); the camera 3-4 is mounted on an unmanned aerial vehicle and can move with the multi-mobile-robot system, which expands the system's range of motion.
The computer control platform 1-2 issues control instructions to the slave-end multi-mobile-robot system through the communication link 2, feeds the information acquired by the sensors back to the operator 1-1 as text or images, and displays in real time the video information acquired by the camera 3-4, so that the operator 1-1 can keep track of the working state of the slave-end multi-mobile-platform system.
The master-end operator interacts with the human-machine interface device 1-3 and the eye tracker 1-4, and sends control commands to the slave-end multi-mobile-platform system 3-1 through the communication link 2. The eye tracker 1-4 obtains the click command for the formation to be switched to by capturing the master-end operator's eye movement information, and the slave-end multi-mobile platform 3-1 forms the target formation according to the received command. After receiving the high-level task command issued by the operator, the multi-mobile platform 3-1 automatically completes formation keeping and obstacle avoidance according to the virtual rigid body algorithm.
In the system slave-end multi-mobile-platform system 3-1, each mobile robot is provided with a sensor capable of acquiring self pose information and related information such as relative distance and relative angle between the mobile robot and an obstacle.
In the virtual rigid body algorithm, the multiple robots at the slave end are treated as a whole in which the pose between each robot and its neighbours is fixed, like two vectors in a rigid body that do not change as the rigid body moves. Unlike a real rigid body, however, when the virtual rigid body senses an obstacle or receives a formation-switching command, the relative poses between its internal vectors change accordingly and the robots re-form into another shape; formation keeping and automatic obstacle avoidance are realized automatically in this way. Meanwhile the operator can concentrate on the overall control of the slave-end robot group without worrying about whether a single robot will hit an obstacle.
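Under this view, a formation switch amounts to replacing the virtual rigid body's set of local positions {r_i} with a new constant set and letting each robot track its new target. A minimal sketch, in which the formation names and local coordinates are illustrative assumptions rather than patent values:

```python
# Hypothetical local positions r_i in F_v for four robots (metres).
FORMATIONS = {
    "parallelogram": [(0.5, 0.5), (1.5, 0.5), (-0.5, -0.5), (0.5, -0.5)],
    "line":          [(1.5, 0.0), (0.5, 0.0), (-0.5, 0.0), (-1.5, 0.0)],
}

def switch_formation(name, scale=1.0):
    """Return the new local position set {r_i} for the requested
    formation, scaled to the commanded formation size. Each robot then
    tracks its global target p_i = p_v + R_v * r_i."""
    return [(scale * x, scale * y) for x, y in FORMATIONS[name]]
```

Scaling the local vectors is one way to realize the "formation size" channel, since the shape is preserved while all relative distances change proportionally.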
Referring to fig. 2, the required formations are set in advance via the visual interface according to the working environments the slave-end multi-mobile platform may face. The eye tracker captures the operator's visual information, determines the area the operator is gazing at, and thereby obtains the click stimulus. The target formation command is obtained and, combined with the force feedback interface device, the master-end high-level control commands such as formation, formation size and target speed are finally determined.
For the position P(x, y, z) of the end controller of the hand controller 1-3, in this embodiment the x coordinate axis represents the angular velocity of the virtual rigid body, with the positive x direction a right turn and the negative x direction a left turn; the z coordinate axis represents the linear velocity of the virtual rigid body, with the positive z direction backward and the negative z direction forward; the y coordinate axis controls the size of the virtual rigid body, i.e. the size of the formation formed by the multiple mobile robot platforms, and is divided into discrete regions to prevent unintended control commands caused by operator instability such as hand jitter.
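The axis mapping above can be sketched as follows. The gains, the dead-zone width used to suppress hand jitter, and the sign convention for "right turn" are all assumptions, since the patent does not state them:

```python
def hip_to_vrb_command(p, k_v=0.5, k_w=0.8, dead_zone=0.01):
    """Sketch of the described mapping: x of the hand-controller end
    position -> VRB angular velocity (positive x = right turn),
    z -> linear velocity (negative z = forward). Hypothetical values."""
    x, y, z = p

    def dz(u):
        # small dead zone so tremor near the origin produces no motion
        return 0.0 if abs(u) < dead_zone else u

    omega = -k_w * dz(x)   # x > 0 (right turn) -> negative omega; sign assumed
    v = -k_v * dz(z)       # z < 0 (pushed forward) -> positive linear velocity
    return v, omega
```

The y component is handled separately by the region-quantized size channel, so it is ignored here.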
The expected formation for a mobile robot is defined as follows:
T = [L_d, Φ_d]
wherein
[equation image: definitions of the matrices L_d and Φ_d]
L_d and Φ_d respectively denote the relative-distance and relative-angle matrices among the mobile robots; together they determine the formation of the multi-mobile-robot system
[equation image: the m predefined formation matrices]
There are m formation shapes in total. The size of the formation is defined by the y coordinate axis of the end controller of the human-machine interface device; to prevent unwanted motion and accidents caused by operator hand shake, y is divided into regions [y_M1, y_M2, …, y_Mm], each region corresponding to one scale, and the formation size is realized mainly through changes in the relative distances. The correspondence between the master-end eye tracker 1-4 plus force feedback human-machine interface device 1-3 and the slave-end multi-mobile-platform system 3-1 is as follows:
[equation image: correspondence between master-end inputs and slave-end commands]
wherein v and ω respectively denote the linear and angular velocity of the virtual rigid body VRB in the slave-end multi-mobile-robot system; k_v, k_ω, k_T and k_S are the gain coefficients for linear velocity, angular velocity, formation shape and formation size respectively; [q_x, q_y, q_z]^T denotes the position coordinates [x_M, y_M, z_M] of the end controller of the force feedback human-machine interface device; and q_S is derived from the click stimulus of the target formation captured by the eye tracker from the operator's gaze.
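The region-quantized size channel can be hedge-sketched as below. The range bounds, the number of regions m, and the scale values are illustrative assumptions; only the idea of mapping each y region to one fixed scale comes from the text:

```python
def formation_scale(y, y_min=-0.1, y_max=0.1, m=3, scales=(0.5, 1.0, 1.5)):
    """Quantize the y coordinate of the end controller into m regions
    [y_M1, ..., y_Mm], each mapped to one formation scale, so small
    hand tremors cannot change the formation size. Values hypothetical."""
    y = min(max(y, y_min), y_max)            # clamp to the usable range
    width = (y_max - y_min) / m
    idx = min(int((y - y_min) / width), m - 1)
    return scales[idx]
```

Because the output is piecewise constant, jitter within one region produces no size change, which is the stated purpose of the region division.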
Referring to fig. 3, the virtual rigid body algorithm is defined as follows.
Definition 1: for the set of mobile robots indexed by N = {1, 2, ..., N}, let F_i denote the local reference frame of robot i, and let p_i(t) and R_i(t) ∈ SO(3) denote the position and orientation of robot i relative to F_w at time t.
Definition 2 (virtual rigid body): a virtual rigid body consists of a group of N mobile robots and a local reference frame F_v, in which the local positions of the robots are defined by a set of time-varying vectors {r_1(t), r_2(t), ..., r_N(t)}.
Definition 3 (formation): a formation Π of a virtual rigid body is, for a group of robots of size N, a set of constant local positions {r_1, r_2, ..., r_N} in F_v held for a duration T_Π > 0.
Definition 4 (transformation): a transformation Φ is a set of time-varying local positions {r_1(t), r_2(t), ..., r_N(t)} of the virtual rigid body relative to the local reference frame F_v, for a group of N mobile robots over a duration T_Φ > 0. Here p_v(t) ∈ R^3 and R_v(t) ∈ SO(3) respectively denote the position and orientation of the F_v origin in F_w at time t, and the position p_i(t) of robot i relates to r_i(t) by p_i = p_v + R_v r_i, i ∈ {1, 2, ..., N}.
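The relation p_i = p_v + R_v r_i from Definition 4 can be checked with a short sketch. The patent states it in SO(3); a planar SO(2) case suffices for differential-drive platforms and is used here as a simplifying assumption:

```python
import math

def vrb_robot_positions(p_v, theta_v, r_local):
    """Compute each robot's global target p_i = p_v + R_v * r_i,
    where (p_v, theta_v) is the VRB origin pose in F_w and r_local
    the list of local positions r_i in F_v. Planar case assumed."""
    c, s = math.cos(theta_v), math.sin(theta_v)
    return [(p_v[0] + c * rx - s * ry, p_v[1] + s * rx + c * ry)
            for rx, ry in r_local]
```

Rotating the whole local set by theta_v and translating by p_v is exactly what keeps the group behaving as one rigid body while the VRB moves.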
On the basis of the above definitions, a repulsion vector pointing from the obstacle position toward the VRB is defined at the two-dimensional coordinate p; its magnitude is a Gaussian function of the obstacle's position and radius. Assuming there are n obstacles in the environment and denoting the horizontal position of obstacle k in the global frame F_w as o_{w,k}, the overall repulsion vector generated by the n obstacles at coordinate p is:
[equation image: total repulsion vector, summed over the n obstacles]
wherein
[equation image: per-obstacle Gaussian repulsion term]
wherein B_k is a positive scalar parameter associated with the radius r_k of obstacle k. B_k can be chosen such that the commanded velocity v_v of the VRB can still overcome the largest repulsion vector, which is possible because the VRB is virtual. The 2×2 matrix Σ is positive definite and defines the major and minor axes of the dynamic Gaussian-like function in the following way.
[equation image: definition of the positive-definite matrix Σ]
For the individual mobile robots the virtual rigid body algorithm defines a stronger vector field than for the VRB, so that the repulsion vector can "push" a mobile robot away, and thus clear of the obstacle, as it approaches one.
[equation image: repulsion vector field for the individual robots]
The above formula is expressed in the global frame F_w; in the local frame F_v it becomes
[equation image: the repulsion field expressed in F_v]
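Since the repulsion formulas survive only as images, here is a hedged sketch of the idea they describe: a sum of Gaussian-weighted vectors pointing from each obstacle centre toward the query point. An isotropic width sigma replaces the patent's positive-definite matrix Σ, and a single constant B replaces the per-obstacle B_k; both simplifications are assumptions:

```python
import math

def repulsion_vector(p, obstacles, B=2.0, sigma=0.3):
    """Total repulsion at point p from a list of obstacle centres.
    Magnitude decays as a Gaussian of distance; direction points away
    from each obstacle. Simplified, not the patent's exact field."""
    fx, fy = 0.0, 0.0
    for ox, oy in obstacles:
        dx, dy = p[0] - ox, p[1] - oy
        d = math.hypot(dx, dy)
        if d < 1e-9:
            continue                      # direction undefined at the centre
        mag = B * math.exp(-(d * d) / (2.0 * sigma * sigma))
        fx += mag * dx / d
        fy += mag * dy / d
    return fx, fy
```

Adding this vector to the commanded velocity pushes a robot off its path near obstacles while leaving it unaffected far away, which matches the qualitative behaviour described above.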
In the simulation experiment, the four mobile robots first complete a parallelogram formation and then keep that formation while moving in a straight line for a period of time. When the operator's gaze turns to the line-formation button on the computer interface, a click event is triggered; the slave-end multi-robot system receives the formation-switching command and changes the relative poses of the VRB and each robot according to the virtual rigid body algorithm, completing the formation transition. The initial poses, the formation process and the corresponding motion trajectories are shown in fig. 5; this is the gaze-tracking-based formation switching process of the multiple mobile robots.
In fig. 5, the circles represent the virtual rigid body VRB and the triangles the actual mobile robots; the dotted lines indicate the relative positions of the VRB and each mobile robot, and the solid lines the relative positions between the mobile robots. Since the four robots are structurally identical, only robot 1 is analyzed here.
Fig. 7 shows the errors robot1.xe, robot1.ye and robot1.theta between the actual pose and the target pose of mobile robot 1, and fig. 6 shows the actual linear and angular velocities of robot 1. It can be seen that when the formation changes the robot's velocity jumps and the error increases; after about 10 s the error converges to a small value.
Referring to fig. 4, the control block diagram of the multiple-mobile-robot platform based on force feedback and gaze tracking: the operator applies a force F_h to the hand controller, yielding the position P_M(x_M, y_M, z_M) of its end controller; the corresponding controller converts this into the linear velocity v_Ml and angular velocity ω_Ml of the slave-end virtual rigid body VRB, together with the corresponding formation shape T and size S. This control information enters the communication link and is sent to the slave-end multi-robot platform; the delay of the communication link is handled by a passivity-based method. Under the high-level task instruction, the slave-end mobile platforms change their positions according to the virtual rigid body algorithm. When the slave-end platforms interact with the environment, obstacles exert an environmental reaction on the robot group that is related to the linear velocity, angular velocity and formation of the whole group; this is fed back to the master end through the communication link as the feedback forces (F_x, F_y, F_z) in each direction.
Wherein, the acting forces of the force feedback device in three directions are defined as follows:
[equation image: definitions of F_x, F_y and F_z]
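The exact force law is given only as an image, so the following is entirely a hedged guess at its shape, consistent with the surrounding text: the force appears only inside an influence distance, grows as the obstacle gets closer, is split between the x (angular) and z (linear) channels by the obstacle angle, and is scaled by the current speeds. Every constant and the functional form itself are assumptions:

```python
import math

def feedback_force(d, angle, v, omega, d0=1.0, k_f=1.0):
    """Hypothetical x/z feedback forces from the nearest obstacle at
    distance d and bearing angle, scaled by current speeds v, omega.
    Not the patent's formula, which is not reproduced in the text."""
    if d >= d0:
        return 0.0, 0.0                     # no force outside the influence zone
    mag = k_f * (d0 - d) / d0               # ramps up as the obstacle nears
    f_x = mag * math.sin(angle) * (1.0 + abs(omega))
    f_z = mag * math.cos(angle) * (1.0 + abs(v))
    return f_x, f_z
```

Rendering these on the hand controller gives the operator the "environmental reaction" channel described in the block diagram of fig. 4.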

Claims (1)

1. The semi-autonomous control method for formation navigation of mobile robots based on sight line and force feedback is characterized in that: the system consists of a master end, a slave end and a communication link; the master end comprises an operator, a visual tracking device, a force feedback human-computer interface device and a control computer; the slave end comprises a multi-mobile-robot system, a camera and a working environment; the communication link adopts WiFi or another wireless communication mode;
the operator at the master end interacts with the force feedback human-computer interface equipment through the visual tracking equipment and sends a control command to the multi-mobile-robot system at the slave end through a communication link;
the multi-mobile robot system comprises a multi-mobile platform system consisting of n mobile robots;
the visual tracking equipment is used for capturing eye movement information of an operator and converting the information into a target formation control instruction of the slave multi-mobile robot; the multi-mobile robot system at the slave end forms an expected formation according to the received instruction;
the force feedback man-machine interface equipment has three-degree-of-freedom output feedback and is used for feeding back the state of the slave-end multi-robot system to the master end in a force signal mode;
the speed and position information of the multi-mobile-robot system is fed back to the master end in real time in the form of text or video through the control computer; the control computer predefines target formations of the slave-end multi-mobile platform via a visual interface, displays them on the interface as pictures, captures the operator's eye movement information with the visual tracking equipment, and forms a corresponding click event according to the figure the operator gazes at; the camera feeds back the state of the multiple robots and the slave-end environment to the master end as video or pictures; according to the figure the operator gazes at, the slave-end multi-mobile-robot system adopts a virtual rigid body algorithm in which the slave-end multi-robot system is treated as a whole and each robot automatically realizes formation keeping and automatic obstacle avoidance; the relative pose between each robot and its neighbours is constant and does not change as the virtual rigid body moves; when the virtual rigid body detects an obstacle or receives a formation-switching instruction, the relative poses between the vectors inside it change accordingly and the robots re-form into another rigid body; formation keeping and automatic obstacle avoidance are realized automatically in this way; the force feedback human-machine interface device has three-degree-of-freedom output feedback, wherein the x and z directions reflect the distance and angle information between the robots and the obstacle, and the output in the y direction reflects the size of the formation of the whole multi-mobile-platform system; the force feedback human-machine interface device adopts a hand controller; the visual tracking equipment adopts an eye tracker;
the required formations are set in advance via a visual interface according to the working environments the slave-end multi-mobile platform may face; the eye tracker determines the area the operator gazes at by capturing the operator's visual information, thereby acquiring a click stimulus; the target formation command is thus obtained and, combined with the force feedback interface device, the master-end high-level control command is finally determined;
regarding the position P (x, y, z) of the end controller of the hand controller, representing the angular velocity of the virtual rigid body by the x coordinate axis, wherein the positive direction of the x axis is a right turn, and the negative direction of the x axis is a left turn; the z coordinate axis represents the linear velocity of the virtual rigid body, the positive direction of the z axis is backward, and the negative direction of the z axis is forward; the y coordinate axis is used for controlling the size of the virtual rigid body;
the expected formation for a mobile robot is defined as follows:
T = [L_d, Φ_d]
wherein
[equation image: definitions of the matrices L_d and Φ_d]
L_d and Φ_d respectively denote the relative-distance and relative-angle matrices among the mobile robots; together they determine the formation of the multi-mobile-robot system
[equation image: the m predefined formation matrices]
there are m formation shapes in total; the size of the formation is defined by the y coordinate axis of the end controller of the human-machine interface device, and y is divided into regions [y_M1, y_M2, …, y_Mm]; the correspondence between the master-end eye tracker and force feedback human-machine interface device and the slave-end multi-mobile-platform system is then:
[equation image: correspondence between master-end inputs and slave-end commands]
wherein v and ω respectively denote the linear and angular velocity of the virtual rigid body VRB in the slave-end multi-mobile-robot system, together with the formation and its size; k_v, k_ω, k_T and k_S are the gain coefficients for linear velocity, angular velocity, formation shape and formation size respectively; [q_x, q_y, q_z]^T denotes the position coordinates [x_M, y_M, z_M] of the end controller of the force feedback human-machine interface device; and q_S is obtained from the click stimulus of the target formation captured by the eye tracker from the operator's gaze;
the virtual rigid body algorithm is defined as follows;
definition 1: for the set of mobile robots indexed by N = {1, 2, ..., N}, let F_i denote the local reference frame of robot i, and let p_i(t) and R_i(t) ∈ SO(3) denote the position and orientation of robot i relative to F_w at time t;
definition 2: a virtual rigid body consists of a group of N mobile robots and a local reference frame F_v, in which the local positions of the robots are defined by a set of time-varying vectors {r_1(t), r_2(t), ..., r_N(t)};
definition 3: a formation Π of a virtual rigid body is, for a group of robots of size N, a set of constant local positions {r_1, r_2, ..., r_N} in F_v held for a duration T_Π > 0;
Definition 4: the transformation phi is a virtual rigid body with respect to the local reference frame F v Time-varying position of { r } 1 (t),r 1 (t),...,R N (T) }, such that a group of N number of mobile robots, for a duration T Φ >In 0, P v (t)∈R 3 And R v (t) ∈ SO (3) respectively represent F at time t w Middle F v The position and orientation of the origin; p of robot i i (t) and r i The relationship between (t) is p i =p v +R V *r i ,i∈{1,2,...,N};
On the basis of the above definitions, a repulsion vector pointing from the obstacle position toward the VRB is defined at a two-dimensional coordinate p; its magnitude is a Gaussian-like function of the obstacle's position and radius. Assuming there are n obstacles in the environment and denoting the horizontal position of obstacle k in the global coordinate frame F w by o w,k , the overall repulsion vector of the vector field generated by the n obstacles at coordinate p is:
[Equation image FDA0003908641640000041: overall repulsion vector at p, summed over the n per-obstacle terms]
wherein
[Equation image FDA0003908641640000042: per-obstacle Gaussian repulsion term]
wherein B k is a positive scalar parameter related to the radius r k of obstacle k; B k is chosen such that the commanded velocity v v of the VRB can overcome the largest repulsion vector, since the VRB is virtual. The positive-definite 2 × 2 matrix Σ defines the major and minor axes of the dynamic Gaussian-like function in the following way:
[Equation image FDA0003908641640000043: construction of the dynamic matrix Σ]
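The repulsion field described above can be sketched as a Gaussian-weighted unit vector pointing from each obstacle toward the query point, summed over all n obstacles. The isotropic stand-in Σ = σ²I used here is a simplifying assumption; the claims use a dynamic, anisotropic Σ aligned with the VRB velocity:

```python
import numpy as np

def repulsion_vector(p, obstacles, B, sigma=1.0):
    """Total repulsion at a 2-D point p from n obstacles (illustrative sketch).

    p         : 2-D query position (e.g. the VRB position)
    obstacles : 2-D obstacle positions o_{w,k} in the global frame F_w
    B         : positive scalars B_k tied to each obstacle radius r_k
    sigma     : isotropic stand-in for the positive-definite matrix Sigma
    """
    p = np.asarray(p, dtype=float)
    total = np.zeros(2)
    for o_k, B_k in zip(obstacles, B):
        d = p - np.asarray(o_k, dtype=float)  # vector from obstacle k toward p
        dist = np.linalg.norm(d)
        if dist < 1e-9:
            continue  # direction undefined exactly at the obstacle center
        mag = B_k * np.exp(-dist**2 / (2 * sigma**2))  # Gaussian-like magnitude
        total += mag * d / dist  # accumulate over the n obstacles
    return total
```

The magnitude decays smoothly with distance, so far-away obstacles contribute negligibly while a close obstacle dominates the summed field.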
The virtual rigid body algorithm defines a stronger vector field for each individual mobile robot than for the VRB, so that when a mobile robot approaches an obstacle the repulsion vector can "push" it away from the obstacle:
[Equation image FDA0003908641640000044: strengthened per-robot repulsion field]
The above formula is expressed in the global coordinate system F w ; in the local coordinate system F v it is expressed as:
[Equation image FDA0003908641640000045: the same field expressed in the local frame F v ]
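Re-expressing a global-frame vector in the local frame F v is the standard inverse rotation; a minimal planar sketch (the function name is illustrative), assuming θ v is the VRB heading in F w :

```python
import numpy as np

def to_local_frame(vec_w, theta_v):
    """Express a global-frame (F_w) 2-D vector in the VRB local frame F_v."""
    c, s = np.cos(theta_v), np.sin(theta_v)
    R_v = np.array([[c, -s], [s, c]])  # rotation of F_v relative to F_w
    return R_v.T @ np.asarray(vec_w)   # inverse rotation: R_v^T * v_w
```

This is what lets the repulsion field, computed in F w , be applied directly to the body-frame velocity commands of the VRB and the individual robots.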
CN201910920285.3A 2019-09-26 2019-09-26 Mobile robot formation navigation semi-autonomous control method based on sight line and force feedback Active CN110825076B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910920285.3A CN110825076B (en) 2019-09-26 2019-09-26 Mobile robot formation navigation semi-autonomous control method based on sight line and force feedback

Publications (2)

Publication Number Publication Date
CN110825076A CN110825076A (en) 2020-02-21
CN110825076B true CN110825076B (en) 2022-12-09

Family

ID=69548403

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910920285.3A Active CN110825076B (en) 2019-09-26 2019-09-26 Mobile robot formation navigation semi-autonomous control method based on sight line and force feedback

Country Status (1)

Country Link
CN (1) CN110825076B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111459161B (en) * 2020-04-03 2021-07-06 北京理工大学 Multi-robot system human intervention control method
CN111890389B (en) * 2020-06-22 2021-10-08 东南大学 Multi-mobile robot cooperative control system based on multi-modal interactive interface
CN112051780B (en) * 2020-09-16 2022-05-17 北京理工大学 Brain-computer interface-based mobile robot formation control system and method
CN112363389B (en) * 2020-11-11 2022-07-05 西北工业大学 Shared autonomous formation planning control method for single-master multi-slave teleoperation mode
CN112944287B (en) * 2021-02-08 2023-05-30 西湖大学 Air repair system with active light source
CN112959342B (en) * 2021-03-08 2022-03-15 东南大学 Remote operation method for grabbing operation of aircraft mechanical arm based on operator intention identification
CN113031651B (en) * 2021-03-12 2022-09-27 南京工程学院 Bilateral teleoperation control system and method of UAV hanging system based on value function approximation
CN113419631B (en) * 2021-06-30 2022-08-09 珠海云洲智能科技股份有限公司 Formation control method, electronic device and storage medium
CN113386142A (en) * 2021-07-07 2021-09-14 天津大学 Grinding and cutting integrated processing system and method of teleoperation robot based on virtual clamp

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102096415B (en) * 2010-12-31 2012-09-26 重庆邮电大学 Multi-robot formation method based on Ad-Hoc network and leader-follower algorithm
CN104950885B (en) * 2015-06-10 2017-12-22 东南大学 A kind of view-based access control model and power feel the UAV group's bilateral teleoperation control system and its method of feedback
CN108594846A (en) * 2018-03-23 2018-09-28 哈尔滨工程大学 More AUV flight patterns optimal control methods under a kind of obstacle environment
CN109933069B (en) * 2019-03-21 2022-03-08 东南大学 Wire flaw detection robot remote control system and control method based on vision and force feedback

Similar Documents

Publication Publication Date Title
CN110825076B (en) Mobile robot formation navigation semi-autonomous control method based on sight line and force feedback
US9862090B2 (en) Surrogate: a body-dexterous mobile manipulation robot with a tracked base
US8577126B2 (en) System and method for cooperative remote vehicle behavior
US10759051B2 (en) Architecture and methods for robotic mobile manipulation system
Luo et al. Real time human motion imitation of anthropomorphic dual arm robot based on Cartesian impedance control
US20090180668A1 (en) System and method for cooperative remote vehicle behavior
CN113829343B (en) Real-time multitasking and multi-man-machine interaction system based on environment perception
CN112621746A (en) PID control method with dead zone and mechanical arm visual servo grabbing system
CN114571469A (en) Zero-space real-time obstacle avoidance control method and system for mechanical arm
Fang et al. Visual grasping for a lightweight aerial manipulator based on NSGA-II and kinematic compensation
Jorgensen et al. Cockpit interface for locomotion and manipulation control of the NASA valkyrie humanoid in virtual reality (VR)
Wang et al. Design of stable visual servoing under sensor and actuator constraints via a Lyapunov-based approach
Quesada et al. Holo-SpoK: Affordance-aware augmented reality control of legged manipulators
Gromov et al. Guiding quadrotor landing with pointing gestures
Li et al. Modified Bug Algorithm with Proximity Sensors to Reduce Human-Cobot Collisions
CN116100565A (en) Immersive real-time remote operation platform based on exoskeleton robot
Wu et al. Kinect-based robotic manipulation: From human hand to end-effector
Lin et al. Intuitive kinematic control of a robot arm via human motion
Zhu et al. A shared control framework for enhanced grasping performance in teleoperation
Popov et al. Detection and following of moving target by an indoor mobile robot using multi-sensor information
Buss et al. Advanced telerobotics: Dual-handed and mobile remote manipulation
Wang et al. Vision based robotic grasping with a hybrid camera configuration
Serpiva et al. Swarmpaint: Human-swarm interaction for trajectory generation and formation control by dnn-based gesture interface
Yan et al. A Complementary Framework for Human–Robot Collaboration With a Mixed AR–Haptic Interface
Abdi et al. Safe Operations of an Aerial Swarm via a Cobot Human Swarm Interface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant