CN114131610B - Robot man-machine action interaction system and method based on human behavior recognition and perception - Google Patents

Robot man-machine action interaction system and method based on human behavior recognition and perception

Info

Publication number
CN114131610B
CN114131610B (application CN202111533054.0A)
Authority
CN
China
Prior art keywords
robot
behavior
human
module
manipulator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111533054.0A
Other languages
Chinese (zh)
Other versions
CN114131610A (en)
Inventor
王振斌 (Wang Zhenbin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yijiahe Technology R&D Co., Ltd.
Original Assignee
Shenzhen Yijiahe Technology R&D Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yijiahe Technology R&D Co., Ltd.
Priority to CN202111533054.0A
Publication of CN114131610A
Application granted
Publication of CN114131610B
Legal status: Active (current)

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J9/1679: Programme controls characterised by the tasks executed
    • B25J11/00: Manipulators not otherwise provided for
    • B25J11/008: Manipulators for service tasks

Abstract

The invention discloses a robot man-machine action interaction system and method based on human behavior recognition and perception. In the system, the robot has the capability to communicate and cooperate: it makes perception-based decisions according to human actions and expressions and then executes the corresponding actions to complete a task. If human behavior changes suddenly while a task is being executed, the robot stops the task. The whole interaction process is intelligent, and the robot's motions are governed by two cost functions, making its actions more accurate.

Description

Robot man-machine action interaction system and method based on human behavior recognition and perception
Technical Field
The invention relates to the technical field of robots, in particular to a robot man-machine action interaction system and method based on human behavior recognition and perception.
Background
Human-computer interaction (HCI) is the communication between people and computers achieved through mutual understanding, with the goal of providing services, information processing and the like to people as fully as possible. Personal safety becomes the biggest concern when a person and a robot share the same environment and perform a task together. In industrial environments, safety issues are minimized by isolating robots from people. Although keeping the robot's work environment unattended solves the safety problem, this isolation implies a lack of interaction between robot and person. Man-machine interaction presents new challenges for robotics: in particular, when a service robot interacts with humans, it must understand human behavior and be adaptable enough to follow social rules, for example when opening a door or delivering a bottle of water. When a robot cooperates with a person to complete a task, behavior that complies with social rules becomes even more important.
At present, reception service robots interact with users through a display interface or speech, mainly providing simple consultation services in public places such as shopping malls, stations and banks; they have neither action capability nor behavior recognition capability. In the prior art, a delivery service robot works according to preset delivery content, time and route, recognizing the road visually and avoiding obstacles with an ultrasonic radar. Such delivery robots also lack the ability to recognize human behavior, work only point to point, and have poor flexibility.
Disclosure of Invention
The technical purpose: aiming at the above technical problems, the invention discloses a robot man-machine action interaction system that has the capability to communicate and cooperate, can make perception-based decisions on human actions and expressions, and then executes the corresponding actions to complete tasks, providing more accurate service.
The technical scheme is as follows: in order to achieve the technical purpose, the invention adopts the following technical scheme:
a robot-computer interaction system based on human behavior recognition perception is characterized by comprising a laser radar ranging module, a visual detection device, an auditory detection device, a robot perception and positioning navigation module, a database module, a strategy module and a monitoring and executing module, wherein,
the robot perception and positioning navigation module is connected to the laser radar ranging module, the visual detection device and the auditory detection device, respectively, and is used to build a map of the robot's surroundings, recognize human behaviors and plan paths; the robot detects its environment through the laser radar ranging module and establishes a space map, receives human behavior signals through the visual and auditory detection devices, and plans paths according to the instructions sent by the monitoring and execution module;
the database module is used for storing data on human social behavior rules in advance, which serve as prior information for human-computer interaction;

the strategy module is used for comparing the information sent by the robot perception and positioning navigation module with the data in the database module to obtain a cognitive result, and for outputting, according to the cognitive result, a behavior decision to be autonomously executed by the robot;

and the monitoring and execution module is used for sending execution instructions to the robot and monitoring the robot's current behavior.
Preferably, the robot comprises an actuator comprising a mobile chassis that supports the robot's movement and a manipulator that transfers items to and from humans.
Preferably, the monitoring and execution module is further configured to establish a first cost function f1(x1, y1, z1, θ) for controlling the behavior of the robot's mobile chassis, and a second cost function f2(x2, y2, z2, α, β, γ) for controlling the behavior of the robot's manipulator, wherein (x1, y1, z1) are the position coordinates of the robot chassis and θ is the orientation of the chassis; (x2, y2, z2) is the position of the manipulator end, and α, β, γ describe the attitude of the manipulator end.
A robot man-machine action interaction method based on human behavior recognition and perception, applied to the above system, characterized by comprising the steps of:
(1) Start up the robot and complete initialization: detect the target area with the laser radar ranging module, establish a space map, and acquire the robot's initial position information within the target area;

(2) The robot perceives and recognizes human behavior information: the robot acquires the actions, expressions and speech of the people in the target area in real time through the visual and auditory detection devices and, combining these with the human behavior rule data stored in the database module, recognizes each person's position, state and intention;

(3) The robot formulates the policy to be executed according to the recognized information: the human behavior information perceived in step (2) is compared with the pre-stored data on human social behavior rules to obtain a cognitive result, and a behavior strategy to be autonomously executed by the robot is output according to the cognitive result;

(4) Tasks are performed according to the policy: following the behavior strategy made in step (3), the robot's mobile chassis navigates along the planned path to the target position so as to face the person to interact with, the corresponding task is completed with the manipulator, and the robot then returns to its initial position and waits for the next task.
Preferably, in step (3), the behavior of the robot's mobile chassis is controlled by the first cost function f1(x1, y1, z1, θ), and the target pose of the robot's manipulator end is controlled by the second cost function f2(x2, y2, z2, α, β, γ), where (x1, y1, z1) are the position coordinates of the robot chassis and θ is the orientation of the chassis; (x2, y2, z2) is the position of the manipulator end and α, β, γ describe its attitude. Based on the manipulator target pose provided by the cost function f2, a path is planned with the A* algorithm so that the manipulator end reaches the target pose.
Preferably, the expression of the first cost function is as follows:

f1(x1, y1, z1, θ) = (x1 - x_ref)^2 + (y1 - y_ref)^2 + (z1 - z_ref)^2 + (θ - θ_ref)^2

wherein x1, y1, z1 represent the position coordinates of the robot's mobile chassis, θ the orientation of the chassis, x_ref, y_ref, z_ref the desired position coordinates of the chassis, and θ_ref the desired orientation of the chassis;

the expression of the second cost function is as follows:

f2(x2, y2, z2, α, β, γ) = (x2 - x_tar)^2 + (y2 - y_tar)^2 + (z2 - z_tar)^2 + (α - α_tar)^2 + (β - β_tar)^2 + (γ - γ_tar)^2

wherein x2, y2, z2 represent the position of the manipulator end, α, β, γ its attitude, x_tar, y_tar, z_tar the target position of the manipulator end, and α_tar, β_tar, γ_tar the desired attitude of the manipulator end.
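To make the two cost functions concrete, here is a minimal Python sketch; the reference and target poses (chassis_ref, ee_target) are hypothetical values chosen only for illustration, not taken from the patent:

```python
import numpy as np

def f1(chassis_pose, ref_pose):
    """First cost function: squared deviation of the chassis pose
    (x1, y1, z1, theta) from the desired pose (x_ref, y_ref, z_ref, theta_ref)."""
    return float(np.sum((np.asarray(chassis_pose) - np.asarray(ref_pose)) ** 2))

def f2(ee_pose, target_pose):
    """Second cost function: squared deviation of the manipulator end pose
    (x2, y2, z2, alpha, beta, gamma) from the target pose."""
    return float(np.sum((np.asarray(ee_pose) - np.asarray(target_pose)) ** 2))

# Hypothetical poses for illustration only.
chassis_ref = [1.0, 0.5, 0.0, np.pi / 2]        # x_ref, y_ref, z_ref, theta_ref
ee_target = [1.0, 0.55, 0.9, 0.0, 0.0, 0.0]     # x_tar ... gamma_tar

print(f1([0.9, 0.45, 0.0, 1.5], chassis_ref))   # lower value = closer to desired pose
print(f2([0.95, 0.5, 0.85, 0.1, 0.0, 0.0], ee_target))
```

The lower each value, the closer the corresponding actuator is to its desired pose, which is how the functions are used for control below.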
Preferably, in step (4), during task execution, with the joint motors of the robot arm in torque mode, the external torque τ_ext exerted on the robot is obtained with a torque observer based on generalized momentum, and the acceleration change of the robot trajectory is calculated through the admittance control formula

Δa = (τ_ext - b·Δv - k·Δx) / m

wherein m is the mass term, b the damping term, k the spring term, Δx the difference between the current displacement and the reference displacement, and Δv the difference between the current velocity and the reference velocity;

the acceleration is integrated to obtain an offset displacement, which is added to the path obtained by the A* algorithm to obtain a fused trajectory; the torque required by the joint motors is then calculated through the dynamics formula

τ = M(q)·q̈ + C(q, q̇)·q̇ + G(q) + τ_fric

wherein M is the inertia matrix, C(q, q̇) the centrifugal and Coriolis force term matrix, τ the joint torque vector, τ_fric the joint friction torque vector, and G the gravity torque vector; finally, admittance control of the robot arm is realized.
Preferably, in step (4), if during task execution the robot observes no abnormality in the behavior of the person it is interacting with, it completes the corresponding task with the manipulator; otherwise, it stops executing the task, returns to the initial position, and waits for the next task.
The beneficial effects are that: compared with the prior art, the invention achieves the following technical effects:

in the man-machine action interaction system, the robot has the capability to communicate and cooperate: it makes perception-based decisions on human actions and expressions and then executes the corresponding actions to complete tasks. If human behavior changes suddenly during task execution, the robot stops the task. The whole interaction process is intelligent, and the robot's motions are controlled through two cost functions, making them more accurate.
Drawings
FIG. 1 is a schematic diagram of the compound robot man-machine interaction system of the present invention;

FIG. 2 is a flow chart of the compound robot man-machine interaction method of the present invention;

FIG. 3 is a schematic structural view of the manipulator of the compound robot of the present invention;

FIG. 4 is a flow chart of the A* algorithm applied in the present invention.
Detailed Description
The present invention will be described in detail with reference to the accompanying drawings.
1. Introduction to the system architecture and implementation flow
The system architecture is shown in fig. 1, and mainly comprises a robot sensing, positioning and navigation module, a database module and a strategy module:
the robot sensing, positioning and navigation module is used for the robot to build a map of the surrounding environment, identify human behaviors and plan paths;
the database module provides knowledge of human social behavior specification and monitors and manages the behaviors of the robot;
the strategy module is used for cognizing the external environment and the self state by combining knowledge in the database through data such as perception information in the robot perception, positioning and navigation module and making a decision of behavior autonomously according to a cognition result.
The human-computer interaction flow is shown in fig. 2, and comprises the following steps:
the robot is started to finish initialization work, and a surrounding space map is established by utilizing a laser radar and vision, so that the robot has relative position information;
the robot acquires the behavior actions, expressions and language information of nearby people in real time by utilizing vision and voice, and recognizes the states and intentions of the people by combining the stored human behavior rules;
a large amount of data is stored in the robot database, and the behavior, language and expression of the person are modeled by deep learning so as to understand the behavior meaning of the person. When the robot interacts with the person, the robot can still make corresponding judgment when the behavior of the person exceeds the range of the database.
The robot utilizes the navigation function to plan a path and completes corresponding tasks through the manipulator. After the task is completed, the robot returns to the initial position and waits for the next service task again.
When the robot performs behavior interaction with a person, the robot must understand the human behavior in real time in combination with the on-site environment so that the robot can adapt to the change adjustment task and monitor the behavior of the robot by combining the data of the database. In performing collaborative tasks, robots should clearly plan a set of actions to achieve the intended goal and actively follow the social criteria of a person. For example, the robot should have a similar speed to the human arm during water delivery and door opening, and should pay attention to avoid collision with human. If the object is packaged and placed at a desired position as required during storage, damage to the object and the protection robot are avoided in the process. When an event is encountered, such as a successful behavior, a planned modification, etc., the robot should stop the interaction. The robot comprises an actuator, wherein the actuator comprises a mobile chassis for supporting the robot to move and a manipulator for carrying out article transfer with human beings, and the manipulator comprises a robot mechanical arm and a joint motor for driving.
2. Human-machine interaction
In the invention, to enable the robot to interact with a person safely and comfortably, three aspects are considered: first, the robot monitors the behaviors of both the person and itself; second, the robot plans its actions with cost functions; finally, the robot completes the interactive action.
1. Robot monitoring and cognition
The database module stores prior knowledge of man-machine interaction, used for judging human intention and monitoring the robot's behavior. Taking object delivery as an example, when a person approaches within 1.5 m of the robot, the robot starts its service monitoring function. The perception module detects with the visual sensor that a person is walking toward the robot and reaching out one hand, and detects and collects the person's speech with the auditory module. The policy module then determines which service needs to be provided based on the knowledge stored in the database. The robot starts to execute the task, and the movement process is monitored against the prior knowledge in the database: a certain safety distance is kept between the robot and the person, the motion is stable and smooth with small amplitude, and after reaching the target position the robot stops and takes the required item, such as water, paper towels or a hot towel, out of the storage cabinet on the robot chassis. When the person's state is abnormal, for example they turn away from the robot, the task is stopped immediately.
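The trigger-and-abort logic of this example can be sketched as a simple state check. In the sketch below, the 1.5 m threshold comes from the text, while the function name, data fields and state labels are hypothetical:

```python
from dataclasses import dataclass

SERVICE_TRIGGER_RADIUS = 1.5  # metres, from the delivery example above

@dataclass
class PersonState:
    distance: float      # distance from the robot, metres
    reaching_out: bool   # visual detection: one hand extended toward the robot
    facing_robot: bool   # becomes False when the person turns away

def update_service_state(person: PersonState, serving: bool) -> str:
    """Hypothetical policy sketch: decide whether to start, continue or abort service."""
    if not serving:
        if person.distance <= SERVICE_TRIGGER_RADIUS and person.reaching_out:
            return "start_service"
        return "idle"
    # Already serving: abort on abnormal behavior such as turning away.
    if not person.facing_robot:
        return "abort_and_return"
    return "continue_service"

print(update_service_state(PersonState(1.2, True, True), serving=False))  # start_service
```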
2. Cost function of robot
The robot must know its surroundings to achieve collision-free navigation. Because building a three-dimensional world map is computationally expensive, the information in three-dimensional space is projected onto a two-dimensional grid to build a cost map of distances to obstacles (2D costmap). Based on the 2D costmap, the robot can plan collision-free paths.
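A minimal sketch of this projection step is given below, assuming the 3D points come from the lidar expressed in the robot's frame; the grid size, resolution and exponential distance-to-cost mapping are illustrative choices, not values from the patent:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def build_2d_costmap(points_xyz, size=200, resolution=0.05, max_cost=100.0):
    """Project 3D obstacle points onto a 2D grid and convert the distance to the
    nearest obstacle into a cost (closer to an obstacle = higher cost)."""
    occupied = np.zeros((size, size), dtype=bool)
    for x, y, _z in points_xyz:  # drop the z coordinate: project onto the ground plane
        i = int(round(x / resolution)) + size // 2
        j = int(round(y / resolution)) + size // 2
        if 0 <= i < size and 0 <= j < size:
            occupied[i, j] = True
    dist = distance_transform_edt(~occupied) * resolution  # metres to nearest obstacle
    return np.clip(max_cost * np.exp(-2.0 * dist), 0.0, max_cost)

costmap = build_2d_costmap([(0.4, 0.1, 0.9), (1.2, -0.3, 0.5)])  # hypothetical scan points
```

Thresholding such a costmap yields the kind of binary obstacle grid consumed by the A* sketch further below.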
The actuator of the robot consists of a mobile chassis and a manipulator. To interact with a person safely and comfortably, the robot should be neither too close to nor too far from the person or the target location, and its pose should keep the cost of task execution low. Taking the item delivery service as an example, according to experiments the robot chassis should be positioned 50 cm in front of the person on the side of the hand, and the manipulator end should stay within 5 to 10 cm of the person's hand. The position and attitude of the chassis are controlled by setting the chassis cost function

f1(x1, y1, z1, θ) = (x1 - x_ref)^2 + (y1 - y_ref)^2 + (z1 - z_ref)^2 + (θ - θ_ref)^2

where x1, y1, z1 are the position coordinates of the robot chassis, θ is the orientation of the chassis, x_ref, y_ref, z_ref are the desired position coordinates of the chassis, and θ_ref is its desired orientation. The lower the value of f1, the closer the robot chassis is to the desired pose. When the robot interacts with the person, vision detects the posture of the person and of the hand. To provide safe and comfortable service, the manipulator needs to reach the target pose range, so a cost function of the manipulator target pose is established:

f2(x2, y2, z2, α, β, γ) = (x2 - x_tar)^2 + (y2 - y_tar)^2 + (z2 - z_tar)^2 + (α - α_tar)^2 + (β - β_tar)^2 + (γ - γ_tar)^2

where x2, y2, z2 are the position of the manipulator end, α, β, γ its attitude, x_tar, y_tar, z_tar the target position of the manipulator end, and α_tar, β_tar, γ_tar the desired attitude of the manipulator end. A coordinate system {B} is fixed to the end of the robot arm; the expression of {B} relative to a reference coordinate system {A} is the pose, as shown in FIG. 3: x, y, z are the coordinates of the origin of {B} in {A}, and α, β and γ are the rotation angles about the z-, y- and x-axes of {A}. The smaller the value of f2, the more comfortably the robot reaches the target position.
3. Interaction of robot actions
Based on the 2D costmap and the target position provided by the cost function f1, the robot plans a collision-free shortest path to the target pose with the A* algorithm, so that the robot's motion amplitude is minimal.

After the robot chassis reaches the target position, the manipulator starts to execute the interactive task. Based on the manipulator target pose provided by the cost function f2, a path is planned with the A* algorithm so that the manipulator end reaches the target pose; the processing flow of the A* algorithm is shown in FIG. 4. After the chassis and the robot arm reach their target positions, the robot stops moving and waits for the item in the manipulator to be taken; during this process it monitors the person's behavior in real time through the vision sensor. If the person ends the service, for example by suddenly walking away from the robot, the robot returns to its origin accordingly, puts the item back into the chassis storage cabinet, and waits for the next service.
The pseudocode for the implementation of the A* algorithm is as follows:
Initialize the queue to be searched and the expanded queue: store the unexpanded nodes in a queue open_set, and preset the value h(n) of the heuristic search function for each node n.

Put the start point X_s into the queue to be searched.

Set the cost function of reaching the start point g(X_s) = 0, and set the cost g(n) of reaching every other node in the graph to infinity.

Loop:

if the queue open_set to be searched is empty, return FALSE; end execution;

remove from the queue open_set the node "n" with the minimum cost function f(n) = g(n) + h(n);

mark node "n" as used and add it to the close_set queue;

if node "n" is the endpoint, trace the parent nodes step by step from the endpoint until the start point is found; return TRUE; end execution;

loop over each not-yet-searched neighbor node "m" of node "n":

if g(n) + Cnm < g(m), where Cnm is the cost of moving from node "n" to "m":

set g(m) = g(n) + Cnm and record "n" as the parent of "m";

push node "m" into the open_set path queue

end of inner loop

end of loop
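For reference, a minimal runnable Python version of this pseudocode on a 4-connected grid is sketched below; the grid representation, unit step cost and Manhattan heuristic are illustrative assumptions, since the patent does not fix them:

```python
import heapq

def a_star(grid, start, goal):
    """A* search on a 2D grid (0 = free, 1 = obstacle), 4-connected moves."""
    h = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])  # Manhattan heuristic h(n)
    open_set = [(h(start), start)]   # priority queue ordered by f(n) = g(n) + h(n)
    g = {start: 0}                   # cost to reach each node; missing key = infinity
    parent, closed = {}, set()
    while open_set:
        _, n = heapq.heappop(open_set)
        if n in closed:
            continue
        closed.add(n)                # mark node "n" as used (close_set)
        if n == goal:                # endpoint reached: trace parents back to the start
            path = [n]
            while n in parent:
                n = parent[n]
                path.append(n)
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            m = (n[0] + dx, n[1] + dy)
            if (0 <= m[0] < len(grid) and 0 <= m[1] < len(grid[0])
                    and grid[m[0]][m[1]] == 0 and m not in closed):
                tentative = g[n] + 1                 # Cnm = 1 on a uniform grid
                if tentative < g.get(m, float("inf")):
                    g[m], parent[m] = tentative, n
                    heapq.heappush(open_set, (tentative + h(m), m))
    return None                      # open_set exhausted: no path (FALSE)

print(a_star([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0)))
```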
4. Force control of the robot arm
To ensure smoothness, comfort and safety during service, the robot arm must have a certain compliance. With the joint motors of the arm in torque mode, admittance control is performed on the trajectory obtained after smoothing and planning the path produced by the A* algorithm. Because the external environment changes, a person may, for example, take the item before the arm reaches the target, or press the arm down because of unstable footing. To remain compliant, the original trajectory may therefore need to change somewhat.
With the joint motors of the robot arm in torque mode, the external torque τ_ext exerted on the robot is obtained with a torque observer based on generalized momentum, and the acceleration change of the robot trajectory is calculated through the admittance control formula

Δa = (τ_ext - b·Δv - k·Δx) / m

where m is the mass term, b the damping term, k the spring term, Δx the difference between the current displacement and the reference displacement, and Δv the difference between the current velocity and the reference velocity. The acceleration is integrated to obtain an offset displacement, which is added to the smoothed and planned path obtained by the A* algorithm to yield a fused trajectory; the torque required by the joint motors is then calculated through the dynamics formula

τ = M(q)·q̈ + C(q, q̇)·q̇ + G(q) + τ_fric

where M is the inertia matrix, C(q, q̇) the centrifugal and Coriolis force term matrix, τ the joint torque vector, τ_fric the joint friction torque vector, and G the gravity torque vector. In this way, admittance control of the robot arm is realized.
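A discrete-time sketch of this admittance loop is shown below, assuming the reconstructed law m·Δa + b·Δv + k·Δx = τ_ext; the gains, time step and one-degree-of-freedom simplification are illustrative, and the generalized-momentum observer is reduced to a given input since its internals are not detailed here:

```python
import numpy as np

def admittance_offset(tau_ext_seq, m=2.0, b=8.0, k=50.0, dt=0.002):
    """Integrate the admittance law m*da + b*dv + k*dx = tau_ext twice to get the
    displacement offset that is added to the planned path (1-DOF illustration)."""
    dx, dv = 0.0, 0.0
    offsets = []
    for tau_ext in tau_ext_seq:               # tau_ext comes from the torque observer
        da = (tau_ext - b * dv - k * dx) / m  # admittance formula: acceleration change
        dv += da * dt                         # first integration: velocity offset
        dx += dv * dt                         # second integration: displacement offset
        offsets.append(dx)
    return np.array(offsets)

planned = np.linspace(0.0, 0.5, 500)                 # hypothetical 1-DOF planned path
tau_ext = np.where(np.arange(500) > 200, 1.5, 0.0)   # a push detected mid-motion
fused = planned + admittance_offset(tau_ext)         # fused trajectory for the dynamics
```

The fused trajectory would then be converted to joint torques with the dynamics formula above.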
The robot man-machine interaction system of the invention gives the robot a certain ability to understand human intention and execute the corresponding behaviors. Applied to scenarios such as fire fighting and post-disaster rescue, the robot can cooperate with people to complete tasks together, reducing the risks people bear in special scenarios. In addition, a robot capable of interacting with humans can provide specific behavior services, road guidance and the like, relieve people of simple, boring and heavy daily chores, and offer a way to cope with the increasingly serious problem of population ageing.

The foregoing is only a preferred embodiment of the invention. It should be noted that those skilled in the art can make various modifications and adaptations without departing from the principles of the present invention, and such modifications and adaptations are intended to fall within the scope of the invention.

Claims (5)

1. A robot man-machine action interaction system based on human behavior recognition and perception, characterized by comprising a laser radar ranging module, a visual detection device, an auditory detection device, a robot perception and positioning navigation module, a database module, a strategy module, and a monitoring and execution module, wherein,
the robot perception and positioning navigation module is connected to the laser radar ranging module, the visual detection device and the auditory detection device, respectively, and is used to build a map of the robot's surroundings, recognize human behaviors and plan paths; the robot detects its environment through the laser radar ranging module and establishes a space map, receives human behavior signals through the visual detection device and the auditory detection device, and plans paths according to the instructions sent by the monitoring and execution module;

the database module is used for storing data on human social behavior rules in advance, which serve as prior information for human-computer interaction;

the strategy module is used for comparing the information sent by the robot perception and positioning navigation module with the data in the database module to obtain a cognitive result, and for outputting, according to the cognitive result, a behavior decision to be autonomously executed by the robot;

the monitoring and execution module is used for sending execution instructions to the robot and monitoring the robot's current behavior;
the robot comprises an actuator, wherein the actuator comprises a mobile chassis supporting the robot's movement and a manipulator for transferring items to humans, and the monitoring and execution module is further configured to establish a first cost function f1(x1, y1, z1, θ) controlling the behavior of the robot's mobile chassis and a second cost function f2(x2, y2, z2, α, β, γ) controlling the behavior of the robot's manipulator, wherein (x1, y1, z1) are the position coordinates of the robot chassis and θ is the orientation of the chassis; (x2, y2, z2) is the position of the manipulator end, and α, β, γ describe the attitude of the manipulator end;

the expression of the first cost function is as follows:

f1(x1, y1, z1, θ) = (x1 - x_ref)^2 + (y1 - y_ref)^2 + (z1 - z_ref)^2 + (θ - θ_ref)^2

wherein x_ref, y_ref, z_ref represent the desired position coordinates of the chassis and θ_ref the desired orientation of the chassis;

the expression of the second cost function is as follows:

f2(x2, y2, z2, α, β, γ) = (x2 - x_tar)^2 + (y2 - y_tar)^2 + (z2 - z_tar)^2 + (α - α_tar)^2 + (β - β_tar)^2 + (γ - γ_tar)^2

wherein x_tar, y_tar, z_tar represent the target position of the manipulator end and α_tar, β_tar, γ_tar the desired attitude of the manipulator end.
2. A robot man-machine action interaction method based on human behavior recognition and perception, applied to the system of claim 1, comprising the steps of:

(1) Start up the robot and complete initialization: detect the target area with the laser radar ranging module, establish a space map, and acquire the robot's initial position information within the target area;

(2) The robot perceives and recognizes human behavior information: the robot acquires the actions, expressions and speech of the people in the target area in real time through the visual and auditory detection devices and, combining these with the human behavior rule data stored in the database module, recognizes each person's position, state and intention;

(3) The robot formulates the policy to be executed according to the recognized information: the human behavior information perceived in step (2) is compared with the pre-stored data on human social behavior rules to obtain a cognitive result, and a behavior strategy to be autonomously executed by the robot is output according to the cognitive result;

(4) Tasks are performed according to the policy: following the behavior strategy made in step (3), the robot's mobile chassis navigates along the planned path to the target position so as to face the person to interact with, the corresponding task is completed with the manipulator, and the robot then returns to its initial position and waits for the next task.
3. The robot man-machine action interaction method based on human behavior recognition and perception according to claim 2, characterized in that in step (3), the behavior of the robot's mobile chassis is controlled by the first cost function f1(x1, y1, z1, θ), and the target pose of the robot's manipulator end is controlled by the second cost function f2(x2, y2, z2, α, β, γ); based on the manipulator target pose provided by the second cost function f2, a path is planned with the A* algorithm so that the manipulator end reaches the target pose.
4. The robot man-machine action interaction method based on human behavior recognition and perception according to claim 3, characterized in that: in step (4), during task execution, with the joint motors of the robot arm in torque mode, the external torque τ_ext exerted on the robot is obtained with a torque observer based on generalized momentum, and the acceleration change of the robot trajectory is calculated through the admittance control formula

Δa = (τ_ext - b·Δv - k·Δx) / m

wherein m is the mass term, b the damping term, k the spring term, Δx the difference between the current displacement and the reference displacement, and Δv the difference between the current velocity and the reference velocity;

the acceleration is integrated to obtain an offset displacement, which is added to the path obtained by the A* algorithm to obtain a fused trajectory; the torque required by the joint motors is then calculated through the dynamics formula

τ = M(q)·q̈ + C(q, q̇)·q̇ + G(q) + τ_fric

wherein M is the inertia matrix, C(q, q̇) the centrifugal and Coriolis force term matrix, τ the joint torque vector, τ_fric the joint friction torque vector, and G the gravity torque vector; finally, admittance control of the robot arm is realized.
5. The robot man-machine action interaction method based on human behavior recognition and perception according to claim 3, characterized in that: in step (4), if during task execution the robot observes no abnormality in the behavior of the person it is interacting with, it completes the corresponding task with the manipulator; otherwise, it stops executing the task, returns to the initial position, and waits for the next task.
CN202111533054.0A 2021-12-15 2021-12-15 Robot man-machine action interaction system and method based on human behavior recognition and perception Active CN114131610B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111533054.0A CN114131610B (en) 2021-12-15 2021-12-15 Robot man-machine action interaction system and method based on human behavior recognition and perception

Publications (2)

Publication Number Publication Date
CN114131610A (en) 2022-03-04
CN114131610B (en) 2023-11-10

Family

ID=80382743

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111533054.0A Active CN114131610B (en) 2021-12-15 2021-12-15 Robot man-machine action interaction system and method based on human behavior recognition and perception

Country Status (1)

Country Link
CN (1) CN114131610B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007249631A (en) * 2006-03-16 2007-09-27 Fujitsu Ltd Polygonal line following mobile robot, and control method for polygonal line following mobile robot
CN107272710A (en) * 2017-08-08 2017-10-20 河海大学常州校区 A kind of medical merchandising machine people system and its control method of view-based access control model positioning
CN110465928A (en) * 2019-08-23 2019-11-19 河北工业大学 A kind of paths planning method of storage commodity pick-and-place mobile platform and the mobile platform
CN112947403A (en) * 2019-11-22 2021-06-11 医达科技公司 Deterministic robot path planning for obstacle avoidance
CN112476434A (en) * 2020-11-24 2021-03-12 新拓三维技术(深圳)有限公司 Visual 3D pick-and-place method and system based on cooperative robot
CN112828883A (en) * 2020-12-25 2021-05-25 香港中文大学深圳研究院 Robot environment exploration method and system in unknown environment
CN113390411A (en) * 2021-06-10 2021-09-14 中国北方车辆研究所 Foot type robot navigation and positioning method based on variable configuration sensing device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
An autonomous exploration method for indoor mobile robots; Li Xiuzhi; Gong Yue; Zhang Xiangyin; Jia Songmin; Liang Xingnan; Control and Decision (No. 06); full text *
Human-robot collaborative navigation of a service robot assisting the elderly and the disabled in an intelligent space; Jiang Jiliang; Tu Dawei; CAAI Transactions on Intelligent Systems (No. 05); full text *

Also Published As

Publication number Publication date
CN114131610A (en) 2022-03-04

Similar Documents

Publication Publication Date Title
CN106573377B (en) Humanoid robot with collision avoidance and trajectory recovery capabilities
KR101999033B1 (en) Method for building a map of probability of one of absence and presence of obstacles for an autonomous robot
Yoshimi et al. Development of a person following robot with vision based target detection
Wallhoff et al. A skill-based approach towards hybrid assembly
CA2946049C (en) Omnidirectional wheeled humanoid robot based on a linear predictive position and velocity controller
CN106548231B (en) Mobile control device, mobile robot and method for moving to optimal interaction point
JP5764795B2 (en) Mobile robot, mobile robot learning system, and mobile robot behavior learning method
Shi et al. A robot that distributes flyers to pedestrians in a shopping mall
Malysz et al. Trilateral teleoperation control of kinematically redundant robotic manipulators
KR20160054862A (en) Method to obstacle avoidance for wheeled mobile robots
Hosoda et al. Basic design of human-symbiotic robot EMIEW
He et al. Bidirectional human–robot bimanual handover of big planar object with vertical posture
CN111150566B (en) Wheelchair control system and method for autonomous navigation and multi-mode man-machine interaction sharing
CN114131610B (en) Robot man-machine action interaction system and method based on human behavior recognition and perception
Sugiura et al. Real-time self collision avoidance for humanoids by means of nullspace criteria and task intervals
US11618164B2 (en) Robot and method of controlling same
TWI555524B (en) Walking assist system of robot
Han et al. Development of a shared controller for obstacle avoidance in a teleoperation system
Kinpara et al. Situation-driven control of a robotic wheelchair to follow a caregiver
Mortezapoor et al. CoboDeck: A Large-Scale Haptic VR System Using a Collaborative Mobile Robot
JP7075935B2 (en) Autonomous robot system
Cuan et al. Gesture2path: Imitation learning for gesture-aware navigation
Popov et al. Detection and following of moving target by an indoor mobile robot using multi-sensor information
Edmonds et al. Optimal trajectories for autonomous human-following carts with gesture-based contactless positioning suggestions
Ghandour et al. Interactive collision avoidance system for indoor mobile robots based on human-robot interaction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant