CN117270385A - A human-robot fusion decision-making method with weight adaptive adjustment - Google Patents
- Publication number
- CN117270385A (application CN202310606008.1A)
- Authority
- CN
- China
- Prior art keywords
- robot
- obstacle
- operator
- weight
- workload
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B13/00—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
- G05B13/02—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
- G05B13/04—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators
- G05B13/042—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators in which a parameter or coefficient is automatically adjusted to optimise the performance
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
The invention relates to a human-robot fusion decision-making method with self-adaptive weight adjustment. The environment information obtained by the robot through a laser sensor and the operator workload estimated by behavior entropy serve as inputs to a shared fuzzy controller, which outputs a control weight factor to realize on-line allocation of the control weight. The speed commands output by the robot's improved VFH+ autonomous obstacle avoidance module and by the operator's operating handle are then weighted by this weight factor and input to the robot actuator, generating a safe trajectory so that the robot avoids obstacles and dangerous areas. The method is suitable for robots performing complex operations on site in unstructured environments; the operator controls the robot remotely and thus avoids exposure to dangerous or harmful environments, which improves task execution efficiency, control precision, and safety.
Description
Technical Field
The invention relates to a human-robot fusion decision-making method with self-adaptive weight adjustment, and belongs to the field of human-robot fusion intelligent decision-making.
Background
Human-robot fusion refers to a system in which a human operator and an autonomous robot controller share control, lying between direct operator control and fully autonomous robot control. Under direct operator control, the operator teleoperates the robot and no autonomous capability of the robot is involved. Under autonomous control, the robot executes tasks through its own perception and decision-making without human intervention. However, the real world contains many uncertainties, and the current level of artificial intelligence cannot yet make robots fully autonomous.
In early research, robot autonomy was mostly predefined or hard-coded for specific tasks. Autonomy endowed in this way generalizes poorly: facing complex and varied uncertain working environments requires extensive code modification, increasing the workload. At present the robot's autonomy level is mostly adjusted manually by the operator, but on-line adjustment of the autonomy level is the future direction of development. Such self-adaptive adjustment brings the robot's behavior closer to that of humans, facilitates efficient human-machine interaction, and is a necessary step on the path from teleoperation to fully autonomous operation. It is therefore urgent to be able to adaptively adjust the human-robot control weights in the face of unstructured environments.
Disclosure of Invention
The invention provides a human-robot intelligent decision-making method that dynamically adjusts control weights, by which the control weights of the operator and the robot can be adjusted on line to adapt to complex and changeable unstructured environments.
The technical scheme adopted by the invention to achieve this purpose is as follows: a human-robot fusion decision-making method with self-adaptive weight adjustment, comprising the following steps:
the obstacle avoidance module acquires environment information through the laser sensor and outputs a command u_r;
obtaining the quantized workload by means of the operator behavior entropy;
taking the environment information and the workload as inputs of a shared controller, and outputting the shared control weight through the shared controller;
the output command u_r of the robot's obstacle avoidance module and the command state u_h of the operating handle are each decomposed into a linear velocity and an angular velocity; the linear and angular velocities of the robot and of the operator are weighted by the control weight coefficient, and the weighted linear and angular velocities are input to the robot actuator to generate a safe velocity command u_f, so that the robot avoids the obstacle.
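The weighted fusion in the final step can be sketched as follows. The function name and the sample numbers are illustrative, not taken from the patent:

```python
def fuse_commands(u_r, u_h, omega):
    """Blend the autonomous command u_r = (v_r, w_r) and the operator
    command u_h = (v_h, w_h), where omega in [0, 1] is the robot's
    control weight and the operator receives weight (1 - omega)."""
    v = omega * u_r[0] + (1.0 - omega) * u_h[0]  # linear velocity
    w = omega * u_r[1] + (1.0 - omega) * u_h[1]  # angular velocity
    return (v, w)

# omega = 1 defers fully to the obstacle avoidance module,
# omega = 0 defers fully to the operating handle.
```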
The obstacle avoidance module acquiring environment information through the laser radar and outputting the command u_r comprises the following steps:
step 1, acquiring environment information containing the position of an obstacle through a laser sensor, wherein the environment information comprises the distance and the angle from the obstacle to a robot, and establishing a polar coordinate map taking the robot as the center;
step 2, inflating the obstacle to obtain an inflated map of radius r_{r+s} = r_r + r_s, wherein r_r is the furthest distance from the center of the robot to its own edge and r_s is the minimum distance between the robot and the obstacle;
step 3, creating a dynamic window C_α centered on the robot, with radius equal to the radar detection distance d_n; according to the lidar resolution α, the circular window C_α is divided into n = 360°/α sectors, numbered 1, 2, …, n counterclockwise starting from the line through the robot parallel to the x axis of the global rectangular coordinate system; each sector k corresponds to a discrete angle ρ_k = k·α;
step 4, assigning different obstacle intensity values according to different obstacle distances, and calculating for each active grid cell C_{i,j} in each sector the obstacle vector magnitude

m_{i,j} = c_{i,j} · I_{i,j}

with

I_{i,j} = 1, if d_{i,j} ≤ r_{r+s}; I_{i,j} = 0, if d_n < d_{i,j}

wherein c_{i,j} is the certainty value of the j-th active grid cell of the i-th sector, d_{i,j} is the distance from the obstacle to the center point of the robot, and I_{i,j} is an intermediate variable;
step 5, calculating the polar obstacle density in the VFH+ algorithm by taking the maximum of the obstacle intensities of all active grid cells in each sector k:

H_k = max{ m_{i,j} | k·α ∈ [β_{i,j} − γ_{i,j}, β_{i,j} + γ_{i,j}] }

wherein β_{i,j} is the angle of the obstacle detected by the laser radar and γ_{i,j} is the enlargement angle of each obstacle active grid cell;
step 6, using the obstacle intensity maxima together with the lower limit τ_low and upper limit τ_high of the obstacle intensity threshold, converting the primary polar histogram H_p into a binary polar histogram H_b with bin values 0 and 1; then, according to the constraint of the robot's steering capability, removing the directions in which the robot cannot advance and constructing a masked polar histogram H_m;
Step 7, determining the left boundary k of all openings in the mask polar coordinate histogram l And right boundary k r The method comprises the steps of carrying out a first treatment on the surface of the If the difference between the left and right boundaries is greater than the threshold S max The opening is a wide opening; otherwise, the opening is narrow;
for narrow openings, the middle direction is taken as a candidate direction of robot motion;
for wide openings, there are multiple candidate directions;
selecting the direction corresponding to the minimum cost function from all candidate directions as the optimal direction, namely the advancing direction of the robot;
obtaining the output command u_r, representing a velocity vector, from the advancing direction of the robot and the initially set speed value of the autonomous controller.
The cost function is as follows:

g(c) = μ_1·Δ(c, k_t) + μ_2·Δ(c, θ_n) + μ_3·Δ(c, k_{n−1}) + μ_4/W

where Δ is a function computing the absolute angle difference between two sectors, c is the candidate direction, k_t is the guiding direction, θ_n is the current direction, k_{n−1} is the previous direction of motion, and W is the width of the candidate opening; the weights satisfy μ_1 > μ_2 + μ_3 > μ_4 and are all constants.
Obtaining the quantized workload by means of the operator behavior entropy comprises the following steps:
estimating the operator speed command E(t) with an exponentially weighted moving average:

E(t) = σ·u_h(t−1) + (1−σ)·E(t−1)

wherein σ is the decay weight and u_h(t) is the actual operator speed command at time t;
estimation error at time t: e(t) = u_h(t) − E(t);
choosing δ such that the probability P{0 ≤ e(t) < δ} = 90%, and dividing the deviation values into 9 intervals according to δ;
for the operator's linear and angular velocity commands, computing a linear-velocity behavior entropy and an angular-velocity behavior entropy according to the behavior entropy formula

H_h = −Σ_{i=1}^{9} p_i · log_9 p_i

and combining them to obtain the behavior entropy H characterizing the workload, wherein p_i denotes the frequency with which the estimation error falls in the i-th interval.
Taking the environment information and the workload as inputs of the shared controller and outputting the shared control weight through the shared controller comprises the following steps:
taking the robot-to-obstacle distance and the operator workload as inputs; the workload represented by the behavior entropy is fuzzified into four sets {S, MS, M, B}, corresponding to {no load, light load, medium load, heavy load}; the robot-to-obstacle distance is fuzzified into five sets {S, MS, M, MB, B}, corresponding to {near, medium-near, medium, medium-far, far}; and the fuzzy subset of the output control weight coefficient ω of the robot is {S, MS, M, MB, B}.
Said generating of the safe speed command u_f is as follows:

u_f = ω·u_r + (1−ω)·u_h

applied separately to the linear-velocity and angular-velocity components, wherein ω is the control weight coefficient of the robot's autonomous controller, u_h comprises the operator's linear and angular speed, and u_r comprises the robot's linear and angular speed.
A human-robot fusion decision-making system with self-adaptive weight adjustment, comprising:
the obstacle avoidance module, used for acquiring environment information through the laser sensor and outputting a command u_r;
The workload quantification module is used for obtaining quantified workload by adopting operator behavior entropy;
the shared fuzzy controller is used for taking the environment information and the workload as the input of the shared controller and outputting the shared control weight through the shared controller;
the fusion decision module, in which the output command u_r of the robot's obstacle avoidance module and the command state u_h of the operating handle are each decomposed into a linear velocity and an angular velocity; the linear and angular velocities of the robot and of the operator are weighted by the control weight coefficient and input to the robot actuator to generate a safe velocity command u_f, so that the robot avoids the obstacle.
The invention has the following beneficial effects and advantages:
1. The invention realizes human-machine cooperative control, combines the complementary advantages of the operator and the autonomous controller, improves control precision, enhances the adaptability of the system, and improves task execution efficiency.
2. The invention uses behavior entropy to quantify the operator's workload, which is simpler and easier to implement than invasive methods such as electromyographic signal measurement, and more real-time and objective than post-hoc subjective methods such as the NASA-TLX scale.
3. The invention designs a control weight regulator based on fuzzy rules to adjust the human-robot control weight allocation on line, so that correct judgments are made in time and operations are executed in complex and changeable environments; the obstacle distance and the operator's workload are considered comprehensively, the human-machine control weights are adjusted dynamically, operational safety is ensured, and the operator's workload is reduced.
4. The method is suitable for robots performing complex operations on site in unstructured environments; the operator controls the robot remotely and thus avoids exposure to dangerous or harmful environments, improving task execution efficiency, control precision, and safety.
Drawings
FIG. 1 is a system frame diagram of the present invention;
FIG. 2 is a diagram of an autonomous obstacle avoidance module of the robot of the present invention;
FIG. 3 is a control weight adjuster diagram of the present invention;
fig. 4 is a fuzzy rule table of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
As shown in fig. 1, the method takes the environment information acquired by the robot through the lidar and the operator workload quantized by behavior entropy as inputs to the shared controller, which fuses them and outputs the shared control weight. The output command u_r of the robot's improved VFH+ obstacle avoidance module and the command state u_h of the operating handle can each be decomposed into linear-velocity and angular-velocity parts; the linear and angular velocities of the robot and of the operator are weighted by the weight factor and input to the robot actuator to generate a safe velocity command u_f, so that the robot avoids obstacles and dangerous areas.
Fig. 2 is a schematic diagram of the improved VFH+-based obstacle avoidance module. When the robot detects no obstacle within the movable window, the operator's control stick sets the robot's reference running direction; when an obstacle is detected, obstacle avoidance is performed according to the operator's guiding direction and the obstacle position information through the following steps:
Step 1, detecting with a laser sensor instead of the ultrasonic sensor of the original method to obtain surrounding environment information, namely the distance and angle between the obstacle and the robot, and establishing a polar coordinate map centered on the robot.
Step 2, inflating the obstacle data to obtain an inflated map of radius r_{r+s} = r_r + r_s, wherein r_r is the furthest distance from the center of the robot to its own edge and r_s is the minimum distance between the robot and the obstacle. The robot can then be treated as a point object.
Step 3, creating a dynamic window C_α centered on the robot, with radius equal to the radar detection distance d_n. According to the lidar resolution α, the circular window C_α is divided into n = 360°/α sectors, numbered 1, 2, …, n counterclockwise starting from the line through the robot parallel to the x axis of the global rectangular coordinate system. Each small sector k corresponds to a discrete angle ρ_k = k·α.
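The sector partition of step 3 can be sketched as follows; the function name and the convention that bearings are measured counterclockwise from the global x axis are assumptions of this sketch:

```python
def sector_index(beta_deg, alpha_deg):
    """Map an obstacle bearing beta_deg (degrees, counterclockwise from
    the global x axis) to its sector number k in 1..n, where the
    circular window is split into n = 360 / alpha_deg sectors."""
    n = int(round(360.0 / alpha_deg))
    k = int((beta_deg % 360.0) // alpha_deg) + 1  # sectors numbered from 1
    return min(k, n)

# With a 5-degree resolution there are n = 72 sectors; a bearing of
# 7 degrees falls in sector 2.
```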
Step 4, quantifying the obstacles around the robot: assigning different obstacle intensity values according to different obstacle distances and calculating for each active grid cell C_{i,j} the obstacle vector magnitude

m_{i,j} = c_{i,j} · I_{i,j}

wherein c_{i,j} is the certainty value of the active grid cell, d_{i,j} is the distance from the obstacle to the center point of the robot, and I_{i,j} is the influence function of the active grid cell, equal to 1 for d_{i,j} ≤ r_{r+s} and 0 for d_{i,j} > d_n.
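Step 4 can be sketched as below. The patent gives the influence function only at the two extremes (1 inside the inflated radius, 0 beyond detection range); the linear falloff in between is an assumption of this sketch, as are the function names:

```python
def influence(d, r_rs, d_n):
    """Influence I_{i,j} of an active grid cell at distance d: 1 inside
    the inflated radius r_rs, 0 beyond the detection range d_n, and (an
    assumption here) a linear falloff in between."""
    if d <= r_rs:
        return 1.0
    if d > d_n:
        return 0.0
    return (d_n - d) / (d_n - r_rs)

def obstacle_magnitude(c, d, r_rs, d_n):
    """Obstacle vector magnitude m_{i,j} = c_{i,j} * I_{i,j} for the
    cell's certainty value c (a sketch of step 4, not the patent's
    exact form)."""
    return c * influence(d, r_rs, d_n)
```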
Step 5: the conventional VFH+ algorithm reflects obstacles in the histogram by summation, so occluded obstacles irrelevant to the selection of a new direction are still counted in the polar histogram. In practice, however, the sensor can only acquire one side of an obstacle, and the robot's response to the same obstacle may differ according to how quickly the corresponding area is explored. For this reason, the polar obstacle density is instead calculated as the maximum of the obstacle intensities of all active grid cells in each sector k:

H_k = max{ m_{i,j} | k·α ∈ [β_{i,j} − γ_{i,j}, β_{i,j} + γ_{i,j}] }

wherein β_{i,j} is the angle of the obstacle detected by the lidar and γ_{i,j} is the enlargement angle of each obstacle grid cell.
Step 6: combining suitable obstacle intensity thresholds τ_low and τ_high, the primary polar histogram H_p is binarized to obtain the binary polar histogram H_b. Under its steering-capability constraint the robot has maximum left and right steering angles; removing the directions in which it cannot advance yields the masked polar histogram H_m.
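The binarization of step 6 can be sketched with hysteresis thresholding, a common VFH+ convention assumed here (sectors between the two thresholds keep their previous state); names are illustrative:

```python
def binary_histogram(H_p, tau_low, tau_high, H_prev=None):
    """Threshold the primary polar histogram H_p into a binary
    histogram H_b: above tau_high -> blocked (1), below tau_low ->
    free (0), in between -> keep the previous state."""
    H_prev = H_prev or [0] * len(H_p)
    H_b = []
    for h, prev in zip(H_p, H_prev):
        if h > tau_high:
            H_b.append(1)
        elif h < tau_low:
            H_b.append(0)
        else:
            H_b.append(prev)
    return H_b
```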
Step 7: determine the left and right boundaries k_l and k_r of all openings in the masked polar histogram. If the difference between the left and right boundaries is greater than the threshold S_max, the opening is wide; otherwise it is narrow.
For a narrow opening, the middle direction is selected as the candidate direction for robot motion; for a wide opening there are multiple candidate directions. Among all candidate directions, the optimal direction is the one minimizing the improved cost function g(c):
g(c) = μ_1·Δ(c, k_t) + μ_2·Δ(c, θ_n) + μ_3·Δ(c, k_{n−1}) + μ_4/W

where Δ is a function computing the absolute angle difference between two sectors, k_t is the guiding direction, θ_n is the current direction, k_{n−1} is the previous direction of motion, and W is the width of the candidate opening. According to the priority of each term, the weights are set so that μ_1 > μ_2 + μ_3 > μ_4, all constant.
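The direction selection of step 7 can be sketched as follows. The exact way the opening width W enters the cost (here as μ_4/W, so wider openings cost less) is an assumption of this sketch, as are the function names and the sample weights:

```python
def delta(a, b, n):
    """Absolute circular difference between two sector indices on an
    n-sector ring."""
    d = abs(a - b) % n
    return min(d, n - d)

def best_direction(candidates, k_t, theta_n, k_prev, widths, n,
                   mu=(5.0, 2.0, 2.0, 0.5)):
    """Pick the candidate sector c minimising
        g(c) = mu1*delta(c, k_t) + mu2*delta(c, theta_n)
             + mu3*delta(c, k_prev) + mu4 / W(c),
    with mu1 > mu2 + mu3 > mu4 as the patent requires.
    widths maps each candidate sector to its opening width W."""
    def g(c):
        return (mu[0] * delta(c, k_t, n)
                + mu[1] * delta(c, theta_n, n)
                + mu[2] * delta(c, k_prev, n)
                + mu[3] / widths[c])
    return min(candidates, key=g)
```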
The output command u_r, representing a velocity vector, is obtained from the advancing direction of the robot and the initially set speed value of the autonomous controller.
As shown in fig. 3, the human-machine fusion weight regulator of the method is designed as a fuzzy controller with two inputs and one output. Its inputs are the distance between the robot and the obstacle and the operator's workload level; its output is the weight ω of the robot in the human-machine fusion control decision, the operator's weight being (1 − ω).
The increase in operator workload caused by mental tasks is first characterized using behavior entropy as an indicator of predictability. High workload is associated with unpredictable operating behavior: as workload increases, the operator is more likely to err in performing the task and more prone to correcting errors with rapid handle commands (the operator's linear and angular velocities).
To detect these abnormal commands, the operator's speed command at each moment is estimated using a modified exponentially weighted moving average model:

E(t) = σ·u_h(t−1) + (1−σ)·E(t−1)

wherein σ is the decay weight and u_h(t) is the operator's actual speed command at time t. The weights of the exponential moving average decay exponentially over time, so commands closer to the current time receive larger weighting coefficients.
The behavior entropy H_h is applied to the estimation errors; it is a common measure of unpredictability, i.e. high entropy reflects high workload. Its inputs are the operator's linear and angular velocity commands. The estimation error at time t is e(t) = u_h(t) − E(t). δ is chosen such that P{0 ≤ e(t) < δ} = 90%, i.e. δ bounds ninety percent of the estimation-error frequency distribution. When the operating behavior deviates from the estimated behavior, the prediction error varies, the estimation-error distribution widens, and δ grows. The deviation values are divided into 9 intervals: (−∞, −5δ], [−5δ, −2.5δ], [−2.5δ, −δ], [−δ, −0.5δ], [−0.5δ, 0.5δ], [0.5δ, δ], [δ, 2.5δ], [2.5δ, 5δ], [5δ, +∞). The behavior entropy is calculated as

H_h = −Σ_{i=1}^{9} p_i · log_9 p_i

wherein p_i denotes the frequency with which the estimation error falls in the i-th interval.
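The estimate and the entropy can be sketched as follows. The recursive EWMA form and the base-9 logarithm (which normalizes entropy to 1 for uniformly spread errors over the 9 intervals) are assumptions of this sketch:

```python
import math

def ewma_estimate(history, sigma=0.3):
    """Exponentially weighted estimate E(t) of the next operator
    command from past commands (most recent last), using the recursive
    form E = sigma*u + (1 - sigma)*E."""
    E = history[0]
    for u in history[1:]:
        E = sigma * u + (1.0 - sigma) * E
    return E

def behaviour_entropy(errors, delta_):
    """Behavior entropy H = -sum p_i * log9(p_i) over the nine error
    intervals defined by delta_; uniformly spread errors give H = 1,
    perfectly predictable behavior gives H = 0."""
    edges = [-5, -2.5, -1, -0.5, 0.5, 1, 2.5, 5]  # in units of delta_
    bins = [0] * 9
    for e in errors:
        x = e / delta_
        bins[sum(x > edge for edge in edges)] += 1  # interval index 0..8
    H, total = 0.0, len(errors)
    for b in bins:
        if b:
            p = b / total
            H -= p * math.log(p, 9)
    return H
```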
When the behavior entropy exceeds a specified threshold H_max, audio-visual warning information is issued on the graphical user interface to prompt the operator: a soft audio tone draws the operator's attention, and the visual warning replaces the "normal operation" indicator with a red "high workload" indicator.
The fuzzy rule table is shown in fig. 4. The workload reflected by the behavior entropy is fuzzified into four sets {S, MS, M, B}, corresponding to {no load, light load, medium load, heavy load}; the fuzzy subset of the robot-to-obstacle distance is {S, MS, M, MB, B}, corresponding to {near, medium-near, medium, medium-far, far}; and the fuzzy subset of the robot's control weight coefficient ω is {S, MS, M, MB, B}. The weight of the robot's autonomous intelligent controller represents the robot's autonomy level: the greater the autonomous controller's weight, the higher the autonomy level. When the operator's workload is small and the obstacle is far away, the operator can operate normally and the robot intervenes in the operation task as little as possible, taking a small share of the control rights. As the operator's workload increases or the robot approaches an obstacle, the robot should obtain more control; the allocation weight of the autonomous controller then increases, correcting the misoperations of a heavily loaded operator or performing obstacle avoidance with respect to the nearby obstacle.
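A crisp stand-in for the fuzzy regulator can be sketched as a rule-table lookup. The table entries below are illustrative assumptions consistent with the stated behavior (closer obstacle or higher workload gives a larger robot weight), not the actual rule base of fig. 4:

```python
# Rows: workload label {S, MS, M, B}; columns: distance label
# {S (near), MS, M, MB, B (far)}; value: robot weight label.
RULES = {
    ("S",  "S"): "B", ("S",  "MS"): "MB", ("S",  "M"): "M",  ("S",  "MB"): "MS", ("S",  "B"): "S",
    ("MS", "S"): "B", ("MS", "MS"): "MB", ("MS", "M"): "M",  ("MS", "MB"): "MS", ("MS", "B"): "MS",
    ("M",  "S"): "B", ("M",  "MS"): "MB", ("M",  "M"): "MB", ("M",  "MB"): "M",  ("M",  "B"): "MS",
    ("B",  "S"): "B", ("B",  "MS"): "B",  ("B",  "M"): "MB", ("B",  "MB"): "M",  ("B",  "B"): "M",
}

def robot_weight_label(workload, distance):
    """Look up the robot control-weight label for fuzzified workload
    and obstacle-distance labels."""
    return RULES[(workload, distance)]
```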
The output command u_r of the robot's improved VFH+ obstacle avoidance module and the speed command u_h of the operating handle can each be decomposed into linear-velocity and angular-velocity parts. The linear and angular velocities of the robot and of the operator are weighted on line by the weight factor and input to the robot actuator, finally generating the safe velocity command

u_f = ω·u_r + (1 − ω)·u_h

applied separately to the linear-velocity and angular-velocity components, so that the robot avoids obstacles and dangerous areas.
Claims (7)
1. A human-robot fusion decision-making method with self-adaptive weight adjustment, characterized by comprising the following steps:
the obstacle avoidance module acquires environment information through the laser sensor and outputs a command u_r;
Obtaining quantized workload by adopting operator behavior entropy;
taking the environment information and the workload as inputs of a sharing controller, and outputting sharing control weights through the sharing controller;
the output command u_r of the robot's obstacle avoidance module and the command state u_h of the operating handle are each decomposed into a linear velocity and an angular velocity; the linear and angular velocities of the robot and of the operator are weighted by the control weights and input to the robot actuator to generate a safe velocity command u_f, so that the robot avoids the obstacle.
2. The human-robot fusion decision-making method with self-adaptive weight adjustment of claim 1, wherein the obstacle avoidance module acquiring environment information through the laser radar and outputting the command u_r comprises the following steps:
step 1, acquiring environment information containing the position of an obstacle through a laser sensor, wherein the environment information comprises the distance and the angle from the obstacle to a robot, and establishing a polar coordinate map taking the robot as the center;
step 2, inflating the obstacle to obtain an inflated map of radius r_{r+s} = r_r + r_s, wherein r_r is the furthest distance from the center of the robot to its own edge and r_s is the minimum distance between the robot and the obstacle;
step 3, creating a dynamic window C_α centered on the robot, with radius equal to the radar detection distance d_n; according to the lidar resolution α, the circular window C_α is divided into n = 360°/α sectors, numbered 1, 2, …, n counterclockwise starting from the line through the robot parallel to the x axis of the global rectangular coordinate system; each sector k corresponds to a discrete angle ρ_k = k·α;
Step 4, according to different obstacle distances, giving different obstacle intensity values, and calculating each movable grid C in each sector i,j Obstacle vector size m i,j Is that
Wherein the method comprises the steps of
Wherein, c i,j Is the determined value of the jth active mesh of the ith sector, d i,j Is the distance of the obstacle to the center point of the robot; i i,j Is an intermediate variable;
step 5, calculating the polar obstacle density in the VFH+ algorithm by taking the maximum of the obstacle intensities of all active grid cells in each sector k:

H_k = max{ m_{i,j} | k·α ∈ [β_{i,j} − γ_{i,j}, β_{i,j} + γ_{i,j}] }

wherein β_{i,j} is the angle of the obstacle detected by the laser radar and γ_{i,j} is the enlargement angle of each obstacle active grid cell;
step 6, using the obstacle intensity maxima together with the lower limit τ_low and upper limit τ_high of the obstacle intensity threshold, converting the primary polar histogram H_p into a binary polar histogram H_b with bin values 0 and 1; then, according to the constraint of the robot's steering capability, removing the directions in which the robot cannot advance and constructing a masked polar histogram H_m;
Step 7, determining the left boundary k of all openings in the mask polar coordinate histogram l And right boundary k r The method comprises the steps of carrying out a first treatment on the surface of the If the difference between the left and right boundaries is greater than the threshold S max The opening is a wide opening; otherwise, the opening is narrow;
for narrow openings, the middle direction is taken as a candidate direction of robot motion;
for wide openings, there are multiple candidate directions;
selecting the direction corresponding to the minimum cost function from all candidate directions as the optimal direction, namely the advancing direction of the robot;
obtaining the output command u_r, representing a velocity vector, from the advancing direction of the robot and the initially set speed value of the autonomous controller.
3. The human-robot fusion decision-making method with self-adaptive weight adjustment of claim 2, characterized in that the cost function is as follows:

g(c) = μ_1·Δ(c, k_t) + μ_2·Δ(c, θ_n) + μ_3·Δ(c, k_{n−1}) + μ_4/W

where Δ is a function computing the absolute angle difference between two sectors, c is the candidate direction, k_t is the guiding direction, θ_n is the current direction, k_{n−1} is the previous direction of motion, and W is the width of the candidate opening; the weights satisfy μ_1 > μ_2 + μ_3 > μ_4 and are all constants.
4. The human-robot fusion decision-making method with self-adaptive weight adjustment of claim 1, wherein obtaining the quantized workload by means of the operator behavior entropy comprises the following steps:
estimating the operator speed command E(t) with an exponentially weighted moving average:

E(t) = σ·u_h(t−1) + (1−σ)·E(t−1)

wherein σ is the decay weight and u_h(t) is the actual operator speed command at time t;
estimation error at time t: e(t) = u_h(t) − E(t);
choosing δ such that the probability P{0 ≤ e(t) < δ} = 90%, and dividing the deviation values into 9 intervals according to δ;
for the operator's linear and angular velocity commands, computing a linear-velocity behavior entropy and an angular-velocity behavior entropy according to the behavior entropy formula

H_h = −Σ_{i=1}^{9} p_i · log_9 p_i

and combining them to obtain the behavior entropy H characterizing the workload, wherein p_i denotes the frequency with which the estimation error falls in the i-th interval.
5. The human-robot fusion decision-making method with self-adaptive weight adjustment of claim 1, wherein taking the environment information and the workload as inputs of the shared controller and outputting the shared control weight through the shared controller comprises the following steps:

taking the robot-to-obstacle distance and the operator workload as inputs; the workload represented by the behavior entropy is fuzzified into four sets {S, MS, M, B}, corresponding to {no load, light load, medium load, heavy load}; the robot-to-obstacle distance is fuzzified into five sets {S, MS, M, MB, B}, corresponding to {near, medium-near, medium, medium-far, far}; and the fuzzy subset of the output control weight coefficient ω of the robot is {S, MS, M, MB, B}.
6. The human-robot fusion decision-making method with self-adaptive weight adjustment of claim 1, characterized in that said generating of the safe speed command u_f is as follows:

u_f = ω·u_r + (1−ω)·u_h

applied separately to the linear-velocity and angular-velocity components, wherein ω is the control weight coefficient of the robot's autonomous controller, u_h comprises the operator's linear and angular speed, and u_r comprises the robot's linear and angular speed.
7. A human-robot fusion decision-making system with self-adaptive weight adjustment, comprising:
the obstacle avoidance module, used for acquiring environment information through the laser sensor and outputting a command u_r;
The workload quantification module is used for obtaining quantified workload by adopting operator behavior entropy;
the shared fuzzy controller is used for taking the environment information and the workload as the input of the shared controller and outputting the shared control weight through the shared controller;
the fusion decision module, in which the output command u_r of the robot's obstacle avoidance module and the command state u_h of the operating handle are each decomposed into a linear velocity and an angular velocity; the linear and angular velocities of the robot and of the operator are weighted by the control weight coefficient and input to the robot actuator to generate a safe velocity command u_f, so that the robot avoids the obstacle.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310606008.1A CN117270385A (en) | 2023-05-26 | 2023-05-26 | A human-robot fusion decision-making method with weight adaptive adjustment |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN117270385A true CN117270385A (en) | 2023-12-22 |
Family
ID=89218454
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310606008.1A Pending CN117270385A (en) | 2023-05-26 | 2023-05-26 | A human-robot fusion decision-making method with weight adaptive adjustment |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN117270385A (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119329856A (en) * | 2024-12-20 | 2025-01-21 | 浙江利强包装科技有限公司 | Adaptive control method of fully automatic packaging equipment based on dynamic data acquisition |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Bai et al. | Learning-based multi-robot formation control with obstacle avoidance | |
| Khairudin et al. | The mobile robot control in obstacle avoidance using fuzzy logic controller | |
| Wang et al. | Self-learning cruise control using kernel-based least squares policy iteration | |
| Xu et al. | Enhanced bioinspired backstepping control for a mobile robot with unscented Kalman filter | |
| Nicolis et al. | Human intention estimation based on neural networks for enhanced collaboration with robots | |
| AU2021101646A4 (en) | Man-machine cooperative safe operation method based on cooperative trajectory evaluation | |
| CN118816895B (en) | Robot navigation decision method based on target recognition | |
| CN108674185A (en) | A kind of unmanned agricultural vehicle field chance barrier method for control speed | |
| CN115469665B (en) | Intelligent wheelchair target tracking control method and system suitable for dynamic environment | |
| Sagar et al. | Artificial intelligence in autonomous vehicles-a literature review | |
| CN115488881A (en) | Human-machine shared autonomous teleoperation method and system based on multi-sport skill prior | |
| CN117270385A (en) | A human-robot fusion decision-making method with weight adaptive adjustment | |
| JP2006320997A (en) | Robot action selection device and robot action selection method | |
| Zhang et al. | Collision-risk assessment model for teleoperation robots considering acceleration | |
| CN120024817B (en) | A crane anti-collision detection method, system, device and medium | |
| CN118650608B (en) | Collaborative methods and systems for industrial robots under multi-sensory fusion | |
| CN120246205A (en) | Underwater thruster control method and related equipment based on multi-parameter fusion | |
| Ullah et al. | Integrated collision avoidance and tracking system for mobile robot | |
| CN117666586A (en) | A brain-controlled robot control system and method based on adaptive shared control | |
| CN118387142A (en) | A speed control method and device for automatic vehicle driving and a vehicle | |
| Hamad et al. | Path Planning of Mobile Robot Based on Modification of Vector Field Histogram using Neuro-Fuzzy Algorithm. | |
| US20240208514A1 (en) | Method for Determining a Value of a Controller Variable | |
| Pushp et al. | Cognitive decision making for navigation assistance based on intent recognition | |
| Liang et al. | Underwater Dynamic Tracking Control of AUV Based on Complex Environment Simulation and ACL-SAC Deep Reinforcement Learning | |
| CN120370991B (en) | Four-legged robot multi-scene intelligent inspection and resource allocation optimization method and system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||