CN108453742B - Kinect-based robot man-machine interaction system and method - Google Patents

Kinect-based robot man-machine interaction system and method

Info

Publication number
CN108453742B
CN108453742B CN201810374190.1A CN201810374190A
Authority
CN
China
Prior art keywords
joint
robot
kinect
straight line
control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810374190.1A
Other languages
Chinese (zh)
Other versions
CN108453742A (en)
Inventor
梅海艺
李磊
王凯
陈宝存
郭毓
郭健
吴益飞
吴禹均
张冕
郭飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201810374190.1A priority Critical patent/CN108453742B/en
Publication of CN108453742A publication Critical patent/CN108453742A/en
Application granted granted Critical
Publication of CN108453742B publication Critical patent/CN108453742B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/1605 Simulation of manipulator lay-out, design, modelling of manipulator
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1671 Programme controls characterised by programming, planning systems for manipulators characterised by simulation, either to verify existing program or to create and verify new program, CAD/CAM oriented, graphic oriented programming systems

Abstract

The invention relates to a Kinect-based robot human-computer interaction system and method. The system comprises a Kinect information acquisition module, a human-computer interaction module, a posture control module, a voice control module, a Kinect three-dimensional sensor, a robot control box and a robot. The Kinect information acquisition module first transmits the image data stream and the audio data stream to the human-computer interaction module; voice/text prompts from the human-computer interaction module guide the user to select a control mode, and the corresponding control module is then invoked to control the robot. By integrating somatosensory, voice and gesture interaction modes, the invention realizes motion control of the robot, replaces manual operation in unstructured scenes, and improves the operation capability and intelligence level of the robot.

Description

Kinect-based robot man-machine interaction system and method
Technical Field
The invention belongs to the technical field of robots, and particularly relates to a Kinect-based robot human-computer interaction system and method.
Background
As interaction between people and computers becomes more frequent, novel human-computer interaction technologies such as voice recognition, gesture interaction, virtual reality and somatosensory control have made breakthroughs, and human-computer interaction has become increasingly natural, intuitive and simple.
The Kinect is a somatosensory device that detects and tracks the human body based on depth scene segmentation and skeleton fitting. Its contactless somatosensory operation has triggered a wave of human-computer interaction research in fields such as virtual reality, medicine and robotics, for example the 3D virtual fitting mirror and the Osirix PACS system for browsing medical images.
With the rapid development of robot technology, robots play an increasingly important role in daily life and are widely applied in fields such as national defense, industry and medical treatment, which is of great significance for improving labor efficiency and reducing the labor intensity of workers. However, in complex and dangerous working environments such as disaster relief, hazardous-material handling, deep-sea exploration and space experiments, traditional robot control mostly relies on preset programs for autonomous operation or on joysticks, which is cumbersome to operate and lacks flexibility.
Disclosure of Invention
The invention aims to provide a Kinect-based robot man-machine interaction system and method that integrate somatosensory, voice and gesture interaction modes to realize motion control of a robot, replace manual operation in unstructured scenes, and improve the operation capability and intelligence level of the robot.
The technical scheme for realizing the purpose of the invention is as follows: a robot-human interaction system based on Kinect comprises a Kinect information acquisition module, a human-computer interaction module, a posture control module, a voice control module, a Kinect three-dimensional sensor, a robot control box and a robot;
the Kinect information acquisition module acquires an image data stream and a voice data stream by using a Kinect three-dimensional sensor;
selecting different control modes through a man-machine interaction module, and calling an attitude control module or a voice control module;
the posture control module controls the mechanical arm of the robot to move according to the posture of the human body based on the image data;
the voice control module realizes recognition of voice instructions based on voice data and controls the robot to perform corresponding actions.
A robot human-computer interaction method based on Kinect comprises the following steps:
acquiring an image data stream and a voice data stream by using a Kinect three-dimensional sensor;
controlling the mechanical arm of the robot to move according to the human body posture based on the image data;
based on the voice data processing result, realizing the recognition of the voice command and controlling the robot to perform corresponding actions;
different control modes are selected through voice/character prompt of the man-machine interaction module, and different control modules are called to realize man-machine interaction.
Compared with the prior art, the invention has the following significant advantages: (1) the human-computer interaction module, the posture control module and the voice module, designed around posture, voice and other modalities, provide more natural, flexible and diverse modes of human-computer interaction; (2) the human-computer interaction system communicates with the robot in real time and the robot moves with high precision; the interactive approach helps to solve practical engineering problems such as obstacle avoidance in path planning, collision prevention, and target recognition and positioning, while also alleviating practical issues such as long operator training time, complicated operation and high error rates, thereby improving working and development efficiency; (3) the invention can be applied to complex scenes that require human-robot collaboration and are difficult for a robot to handle autonomously, such as disaster relief, hazardous-material handling, space experiments and seabed exploration, and therefore has broad application prospects.
Drawings
FIG. 1 is a flow chart of a Kinect-based robot human-machine interaction method of the present invention.
FIG. 2 is a flow chart of a Kinect information collection module of the present invention.
FIG. 3 is a flow chart of an attitude control module of the present invention.
Fig. 4 is a schematic view of the joints of the UR robot.
FIG. 5 is a schematic diagram of the waist joint rotation angle θ1 in an embodiment of the present invention.
FIG. 6 is a schematic diagram of the shoulder joint rotation angle θ2 in an embodiment of the present invention.
FIG. 7 is a schematic diagram of the elbow joint rotation angle θ3 in an embodiment of the present invention.
FIG. 8 is a schematic diagram of the wrist joint rotation angles one and two (θ4, θ5) in an embodiment of the present invention.
FIG. 9 is a flow chart of a filtering algorithm according to an embodiment of the present invention.
FIG. 10 is a flow chart of a speech module implemented in accordance with the present invention.
Detailed description of the invention
The Kinect-based robot man-machine interaction system disclosed by the invention consists of software and hardware connected through a network and a USB interface. The software comprises a Kinect information acquisition module, a man-machine interaction module, a posture control module and a voice module; the hardware comprises a Kinect three-dimensional sensor, a robot control box and a robot.
The system first transmits the image data stream and the audio data stream to the man-machine interaction module through the Kinect information acquisition module; voice/text prompts from the man-machine interaction module guide the user to select a control mode, and the corresponding control module is then invoked to control the robot. The system specifically works as follows:
the Kinect information acquisition module acquires an image data stream and a voice data stream by using a Kinect three-dimensional sensor;
selecting different control modes through a man-machine interaction module, and calling an attitude control module or a voice control module;
the posture control module controls the mechanical arm of the robot to move according to the posture of the human body based on the image data;
the voice control module realizes recognition of voice instructions based on voice data and controls the robot to perform corresponding actions.
The invention also provides a man-machine interaction method of the robot man-machine interaction system based on Kinect, which comprises the following steps as shown in figure 1:
step 1, acquiring an image data stream and a voice data stream by using a Kinect three-dimensional sensor, as shown in FIG. 2, specifically comprising the following steps:
step 1-1, directly obtaining a depth image by using a depth camera of a Kinect three-dimensional sensor, and capturing voice by using a microphone array of the Kinect three-dimensional sensor.
And step 1-2, placing the image data stream and the audio data stream into a buffer area.
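As an illustration of step 1-2, the sketch below shows one way the two data streams could be placed into buffer areas. It is a minimal Python sketch assuming generic frame and audio-chunk objects supplied by callbacks; the actual Kinect SDK types and callback signatures are not reproduced here.

```python
import queue

# Fixed-capacity buffers for the image and audio data streams (step 1-2).
# The frame/chunk objects are placeholders, not the real Kinect SDK types.
image_buffer = queue.Queue(maxsize=30)    # roughly 1 s of depth frames at 30 fps
audio_buffer = queue.Queue(maxsize=100)

def on_depth_frame(frame):
    """Called whenever the depth camera delivers a frame."""
    if image_buffer.full():
        image_buffer.get_nowait()         # drop the oldest frame to make room
    image_buffer.put_nowait(frame)

def on_audio_chunk(chunk):
    """Called whenever the microphone array delivers an audio chunk."""
    if audio_buffer.full():
        audio_buffer.get_nowait()
    audio_buffer.put_nowait(chunk)
```

The posture control module (step 2) and the voice module (step 3) would then consume data from these buffers instead of reading the sensor directly.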
Step 2, controlling the robot arm to move according to the human body posture based on the image data, as shown in fig. 3, the specific steps are as follows:
and 2-1, processing the image data information flow in the step 1, acquiring human body index (BodyInex) information by using a Kinect depth image processing method, and extracting gesture State information (Hand State) and Joint point information (Joint) of an operator.
Step 2-2, establishing a human body posture-robot mapping relation model:
due to the differences between different robots, different mapping relationship models need to be designed according to different robot structures. The embodiment of the invention establishes a mapping relation model based on the UR robot, wherein joints of the UR robot are specified as follows: a machine base, B: shoulder, C: elbow and D, E, F: wrists one, two and three, as shown in fig. 4.
In the embodiment of the invention, the mapping relationship model is established through the following steps:
and (3) selecting a LEFT SHOULDER (SHOULDER _ LEFT), a RIGHT SHOULDER (SHOULDER _ RIGHT), a RIGHT ELBOW (ELBOW _ RIGHT), a RIGHT WRIST (WRIST _ RIGHT), a RIGHT fingertip (HAND _ TIP _ RIGHT) and gestures (HAND _ RIGHT _ STATE and HAND _ LEFT _ STATE) of the LEFT HAND and the RIGHT HAND in the joint point information extracted in the step (2-1).
From the five selected joint points, the waist joint rotation angle (θ1), right shoulder joint rotation angle (θ2), right elbow joint rotation angle (θ3), right wrist joint rotation angle one (θ4) and right wrist joint rotation angle two (θ5) of the human body are obtained. Table 1 shows the specific mapping relationship of an embodiment of the present invention.
Table 1: concrete mapping relation
(Table 1 is reproduced as an image in the original publication; it lists the mapping from the human joint angles θ1–θ5 and the hand states to the corresponding UR robot joints and end-effector/system controls.)
For convenience of solution, the selected joints are denoted by the left shoulder joint J1(x1, y1, z1), right shoulder joint J2(x2, y2, z2), right elbow joint J3(x3, y3, z3), right wrist joint J4(x4, y4, z4) and right fingertip joint J5(x5, y5, z5). The rotation-angle schematics herein are all mirror views.
(1) Waist joint rotation angle (θ1), as shown in fig. 5:
The waist joint rotation angle (θ1) is obtained from the coordinate relationship between the left shoulder joint J1(x1, y1, z1) and the right shoulder joint J2(x2, y2, z2). The straight line J1J2 formed by the two joint points is projected onto the xOz plane to obtain a straight line l1; the angle θ1 between l1 and the x-axis is the waist joint rotation angle:
θ1 = arctan((z2 − z1) / (x2 − x1))
(2) Right shoulder joint rotation angle (θ2), as shown in fig. 6:
The right shoulder joint rotation angle (θ2) is solved similarly to θ1, from the coordinate relationship between the right shoulder joint J2(x2, y2, z2) and the right elbow joint J3(x3, y3, z3). The straight line J2J3 is projected onto the xOy plane to obtain a straight line l2 with slope k2; the angle θ2 between l2 and the x-axis is the shoulder joint rotation angle:
θ2 = arctan(k2), where k2 = (y3 − y2) / (x3 − x2)
(3) Right elbow joint rotation angle (θ3), as shown in fig. 7:
The right elbow joint rotation angle (θ3) is solved in a slightly different way. First, the straight line J3J4 is projected onto the xOy plane to obtain a straight line l3 with slope k3; the angle θ3 between the straight lines l3 and l2 is the elbow joint rotation angle:
θ3 = arctan((k3 − k2) / (1 + k2·k3))
(4) Right wrist joint rotation angles one and two (θ4, θ5), as shown in fig. 8:
The right wrist joint rotation angles one and two (θ4, θ5) are solved in the same way as θ3. The straight lines J3J4 and J4J5 form an included angle α; the projection of α onto the xOy plane is θ4, and the projection of α onto the xOz plane is θ5.
θ4: in the xOy plane, the projection of the straight line J3J4 is l3 with slope k3, and the projection of the straight line J4J5 is l4 with slope k4; the angle θ4 between l4 and l3 is wrist joint rotation angle one.
θ5: in the xOz plane, the projection of the straight line J3J4 is l'3 with slope k'3, and the projection of the straight line J4J5 is l'4 with slope k'4; the angle θ5 between l'4 and l'3 is wrist joint rotation angle two:
θ4 = arctan((k4 − k3) / (1 + k3·k4))
θ5 = arctan((k'4 − k'3) / (1 + k'3·k'4))
(5) Gesture information
The end effector is controlled by the right-hand gesture information (HAND_RIGHT_STATE): in the embodiment of the invention, opening the right hand opens the end gripper and closing the right hand closes it. The left-hand gesture information (HAND_LEFT_STATE) controls system run/stop: opening the left hand runs the system and closing the left hand pauses it.
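To make the mapping of (1)–(4) concrete, the following Python sketch computes θ1–θ5 from the five selected joint coordinates. The original formulas are published as images, so the arctangent expressions used here (angle of a projected line with the x-axis, and angle between two projected lines from their slopes) are reconstructions under that assumption; each joint is taken as an (x, y, z) tuple in the Kinect camera frame.

```python
import math

def projected_slope(p, q, horiz, vert):
    """Slope of segment pq projected onto a coordinate plane, where horiz/vert
    pick the plane's axes (0 = x, 1 = y, 2 = z)."""
    dh = q[horiz] - p[horiz]
    dv = q[vert] - p[vert]
    return dv / dh if dh != 0 else math.copysign(math.inf, dv if dv != 0 else 1.0)

def angle_between(k_a, k_b):
    """Angle (rad) between two lines given their slopes."""
    if math.isinf(k_a) or math.isinf(k_b):
        finite = k_b if math.isinf(k_a) else k_a
        return math.pi / 2 - abs(math.atan(finite))
    return math.atan((k_b - k_a) / (1.0 + k_a * k_b))

def posture_to_joint_angles(j1, j2, j3, j4, j5):
    """Map the Kinect joints (left shoulder j1, right shoulder j2, right elbow j3,
    right wrist j4, right fingertip j5) to the five target joint angles."""
    theta1 = math.atan(projected_slope(j1, j2, 0, 2))   # waist: J1J2 on the xOz plane
    k2 = projected_slope(j2, j3, 0, 1)                  # J2J3 on the xOy plane
    theta2 = math.atan(k2)                              # shoulder: angle with the x-axis
    k3 = projected_slope(j3, j4, 0, 1)                  # J3J4 on the xOy plane
    theta3 = angle_between(k2, k3)                      # elbow: angle between l3 and l2
    k4 = projected_slope(j4, j5, 0, 1)                  # J4J5 on the xOy plane
    theta4 = angle_between(k3, k4)                      # wrist 1: angle between l4 and l3
    k3p = projected_slope(j3, j4, 0, 2)                 # J3J4 on the xOz plane
    k4p = projected_slope(j4, j5, 0, 2)                 # J4J5 on the xOz plane
    theta5 = angle_between(k3p, k4p)                    # wrist 2: angle between l'4 and l'3
    return theta1, theta2, theta3, theta4, theta5
```

For example, posture_to_joint_angles(J1, J2, J3, J4, J5) with each Ji an (x, y, z) tuple in meters returns the five angles in radians, which then feed the filter of step 2-3.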
Step 2-3, establish a filtering algorithm for the joint angle values. The angle Yi produced by the filtering algorithm replaces the joint angle θi obtained in step 2-2, where i = 1, 2, 3, 4, 5.
The filtering algorithm is shown in fig. 9, and specifically as follows:
and establishing a joint angle queue with a fixed length for the joint angles obtained in the step 2-2. The joint angle value at the current time is subtracted from the value at the previous time in the queue. If the difference value is smaller than the set threshold value, the joint angle value at the moment enters the tail of the queue, and meanwhile, the data at the head of the queue is moved out of the queue; otherwise, the joint angle value at the moment is replaced by the value at the previous moment, then the data enters the tail of the queue, and the data at the head of the queue is simultaneously moved out of the queue.
And (3) distributing proper weight to the joint angles in the queue according to the sequence of the time of entering the queue, wherein the weight of the angle value entering the queue is small, the weight of the angle value entering the queue is large, and the angle average value Y of the joint is calculated according to the weight.
Y = (P1·Xn + P2·Xn−1 + P3·Xn−2 + P4·Xn−3) / (P1 + P2 + P3 + P4)
where Xn is the angle value at the current time and P1 is its weight, Xn−1 is the angle value at the previous time and P2 is its weight, Xn−2 is the angle value two time steps earlier and P3 is its weight, and Xn−3 is the angle value three time steps earlier and P4 is its weight.
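A minimal sketch of this filter, assuming a queue of length 4; the threshold and the weights P1–P4 are illustrative values, since the patent does not state them.

```python
from collections import deque

class JointAngleFilter:
    """Fixed-length queue filter for one joint angle (step 2-3). THRESHOLD and
    WEIGHTS are illustrative values, not taken from the patent."""
    THRESHOLD = 0.35                        # maximum allowed jump between samples (rad)
    WEIGHTS = (0.4, 0.3, 0.2, 0.1)          # P1..P4, newest sample weighted most

    def __init__(self):
        self.queue = deque(maxlen=len(self.WEIGHTS))

    def update(self, angle):
        if self.queue:
            previous = self.queue[-1]       # value at the previous time (queue tail)
            # A jump larger than the threshold is treated as an outlier and the
            # sample is replaced by the previous value before entering the queue.
            if abs(angle - previous) >= self.THRESHOLD:
                angle = previous
        self.queue.append(angle)            # the head of the queue is dropped automatically
        # Weighted average Y: the most recent entries carry the larger weights.
        weighted = [p * x for p, x in zip(self.WEIGHTS, reversed(self.queue))]
        used_weights = self.WEIGHTS[:len(weighted)]
        return sum(weighted) / sum(used_weights)
```

One filter instance is kept per joint angle; for example, Y1 = waist_filter.update(theta1) replaces θ1 before the angles are sent to the robot in step 2-4.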
Step 2-4, send the 5 target joint angles obtained in step 2-3 to the robot control box through the TCP/IP protocol, while the robot returns its joint state information and electrical parameter information to the robot control box; the joint state information comprises the real-time angle and angular velocity of each robot joint, and the electrical parameter information comprises the real-time current, temperature and the like of each joint.
Step 2-5, the robot control box resolves the received control instruction so that the robot follows the operator's motion.
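Step 2-4 only specifies that the target angles are sent to the robot control box over TCP/IP. One common way to command a UR arm is to stream URScript text to the controller; the sketch below assumes that approach, and the IP address, port number and movej parameters are example values of this sketch, not requirements of the patent. The unused wrist-3 joint is held fixed.

```python
import socket

ROBOT_IP = "192.168.1.10"      # address of the robot control box (example value)
ROBOT_PORT = 30002             # UR script interface port (assumed)

def send_joint_targets(angles, wrist3=0.0):
    """Stream a joint move for the five filtered angles (rad) to the control box
    over TCP/IP; wrist 3 is kept at a fixed value in this sketch."""
    q = list(angles) + [wrist3]
    script = "movej([{}], a=1.0, v=0.5)\n".format(", ".join(f"{a:.4f}" for a in q))
    with socket.create_connection((ROBOT_IP, ROBOT_PORT), timeout=2.0) as sock:
        sock.sendall(script.encode("ascii"))
```

The joint state and electrical parameter feedback mentioned in step 2-4 would be read back on a separate connection and forwarded to the visualization windows of step 4-2.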
Step 3, based on the voice data obtained in step 1, realize motion control of the robot and establish the voice module, as shown in fig. 10. The specific steps are as follows:
step 3-1, extracting an Audio data stream processed by noise reduction, automatic gain control, echo suppression and the like from a microphone array of the Kinect by using a Kinect Audio Source object in the Kinect for Windows SDK, then setting a control command object which is composed of sentences ("great act one", "great act two", "great act depth", "Hello", "First Pattern", "Second Pattern", "Stop", "Pattern") of specific control commands, and recognizing the user's command by the result of searching in the object.
Step 3-2, setting a corresponding mechanical arm control instruction and a corresponding human-computer interaction operation instruction for the control command statement in the step 3-1, as shown in table 2, specifically as follows:
table 2: correspondence between control command and operation instruction
(Table 2 is reproduced as an image in the original publication; it lists the correspondence between the control-command sentences and the robot-arm control and human-computer interaction operation instructions.)
In the voice control mode, the operator speaks commands such as "Please do action one", "Please do action two" or "Please do action three" to make the robot execute the corresponding fixed action.
Step 3-3, if the command obtained in step 3-1 is successfully matched with a control command in table 2, send the corresponding operation instruction to the control program and realize the corresponding function; if no match is found, the operation of the module ends.
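A minimal sketch of the matching in steps 3-1 to 3-3, assuming the speech recognition itself is performed by the Kinect for Windows SDK and only the recognized phrase is handled here. The operation-instruction identifiers, the dispatch callback and the phrase-to-instruction mapping shown are illustrative placeholders, not values specified by the patent.

```python
# Command grammar (step 3-1) mapped to operation instructions (step 3-2).
# The identifiers on the right are illustrative placeholders.
COMMAND_TABLE = {
    "please do action one":   "ARM_FIXED_ACTION_1",
    "please do action two":   "ARM_FIXED_ACTION_2",
    "please do action three": "ARM_FIXED_ACTION_3",
    "hello":                  "UI_GREETING",
    "first pattern":          "ENTER_MODE_1",
    "second pattern":         "ENTER_MODE_2",
    "stop":                   "SYSTEM_STOP",
}

def handle_recognized_phrase(phrase, dispatch):
    """Step 3-3: match the recognized phrase against the command table and forward
    the corresponding operation instruction; do nothing when there is no match."""
    instruction = COMMAND_TABLE.get(phrase.strip().lower())
    if instruction is None:
        return False                      # no match: the module's work ends here
    dispatch(instruction)                 # send the instruction to the control program
    return True
```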
Step 4, establishing a human-computer interaction module, and establishing an integral human-computer interaction system, wherein the specific steps are as follows:
step 4-1, designing an integral human-computer interaction system interface, wherein the method comprises the following steps: the system has the functions of gesture/voice mode selection, voice/character prompt, real-time robot joint gesture display, Kinect human body real-time joint gesture display, real-time robot electrical parameter monitoring and the like.
Step 4-2, recognize the user's mode-control request by calling the voice module of step 3 and enter the corresponding mode to control the robot; at the same time, visualize in three separate windows the robot joint state information and electrical parameter information acquired in real time in step 2-4 and the human joint point information acquired in step 2-1.
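The overall flow of step 4 can be summarized as a simple dispatch loop; all objects below (the interface, the posture module and the voice module) are hypothetical placeholders standing in for the modules described above.

```python
def interaction_loop(ui, posture_module, voice_module):
    """Step 4-2 sketch: route the selected mode to the matching control module and
    keep the three visualization windows updated. All objects are placeholders."""
    while ui.running():
        mode = ui.selected_mode()              # "posture", "voice", or None
        if mode == "posture":
            posture_module.step()              # body-tracking control (step 2)
        elif mode == "voice":
            voice_module.step()                # voice-command control (step 3)
        ui.show_robot_joint_state()            # real-time joint angles and velocities
        ui.show_robot_electrical_parameters()  # real-time currents and temperatures
        ui.show_human_skeleton()               # Kinect joint points from step 2-1
```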

Claims (4)

1. A man-machine interaction method of a robot man-machine interaction system based on Kinect is characterized in that the robot man-machine interaction system comprises a Kinect information acquisition module, a man-machine interaction module, a posture control module, a voice control module, a Kinect three-dimensional sensor, a robot control box and a robot; the Kinect information acquisition module acquires an image data stream and a voice data stream by using a Kinect three-dimensional sensor; selecting different control modes through a man-machine interaction module, and calling an attitude control module or a voice control module; the posture control module controls the mechanical arm of the robot to move according to the posture of the human body based on the image data; the voice control module realizes recognition of voice instructions based on voice data and controls the robot to perform corresponding actions; the method comprises the following steps:
acquiring an image data stream and a voice data stream by using a Kinect three-dimensional sensor;
controlling the mechanical arm of the robot to move according to the human body posture based on the image data;
based on the voice data processing result, realizing the recognition of the voice command and controlling the robot to perform corresponding actions;
different control modes are selected through voice/character prompt of the man-machine interaction module, and different control modules are called to realize man-machine interaction;
wherein, based on image data, according to human gesture control robot arm motion, specifically do:
step 2-1, processing the image data stream, acquiring human body index information by a Kinect depth image processing method, and extracting user gesture state information and joint point information;
step 2-2, establishing a human body posture-robot mapping relation model; the method specifically comprises the following steps:
establishing a mapping relation model based on a UR robot, wherein joints of the UR robot comprise a machine base, shoulders, elbows, a first wrist, a second wrist and a third wrist;
selecting, from the joint point information extracted in step 2-1, the left shoulder, right shoulder, right elbow, right wrist and right fingertip joint points and the gestures of the left hand and the right hand;
solving through the selected 5 joint points to obtain a waist joint rotation angle, a right shoulder joint rotation angle, a right elbow joint rotation angle, a right wrist joint rotation angle I and a right wrist joint rotation angle II of the human body;
table 1 is a mapping relationship;
table 1: concrete mapping relation
(Table 1 is reproduced as images in the original publication; it lists the mapping from the human joint angles θ1–θ5 and the hand states to the corresponding UR robot joints and end-effector/system controls.)
the selected joints being denoted by the left shoulder joint J1(x1, y1, z1), right shoulder joint J2(x2, y2, z2), right elbow joint J3(x3, y3, z3), right wrist joint J4(x4, y4, z4) and right fingertip joint J5(x5, y5, z5);
(1) waist joint rotation angle θ1:
the waist joint rotation angle θ1 is solved from the coordinate relationship between the left shoulder joint J1(x1, y1, z1) and the right shoulder joint J2(x2, y2, z2); the straight line J1J2 formed by the two joint points is projected onto the xOz plane to obtain a straight line l1, and the angle θ1 between l1 and the x-axis is the waist joint rotation angle:
θ1 = arctan((z2 − z1) / (x2 − x1))
(2) right shoulder joint rotation angle θ2:
the right shoulder joint rotation angle θ2 is solved from the coordinate relationship between the right shoulder joint J2(x2, y2, z2) and the right elbow joint J3(x3, y3, z3); the straight line J2J3 is projected onto the xOy plane to obtain a straight line l2 with slope k2, and the angle θ2 between l2 and the x-axis is the right shoulder joint rotation angle:
θ2 = arctan(k2), where k2 = (y3 − y2) / (x3 − x2)
(3) right elbow joint rotation angle θ3:
to solve the right elbow joint rotation angle θ3, the straight line J3J4 is first projected onto the xOy plane to obtain a straight line l3 with slope k3; the angle θ3 between the straight lines l3 and l2 is the right elbow joint rotation angle:
θ3 = arctan((k3 − k2) / (1 + k2·k3))
(4) right wrist joint rotation angle one θ4 and right wrist joint rotation angle two θ5:
for the right wrist joint rotation angles one and two (θ4, θ5), the straight lines J3J4 and J4J5 form an included angle α; the projection of α onto the xOy plane is θ4 and the projection of α onto the xOz plane is θ5;
solving θ4: in the xOy plane, the projection of the straight line J3J4 is l3 with slope k3 and the projection of the straight line J4J5 is l4 with slope k4; the angle θ4 between l4 and l3 is right wrist joint rotation angle one;
solving θ5: in the xOz plane, the projection of the straight line J3J4 is l'3 with slope k'3 and the projection of the straight line J4J5 is l'4 with slope k'4; the angle θ5 between l'4 and l'3 is right wrist joint rotation angle two:
θ4 = arctan((k4 − k3) / (1 + k3·k4))
θ5 = arctan((k'4 − k'3) / (1 + k'3·k'4))
(5) gesture information
Controlling the end effector based on the gesture information of the right hand, controlling the end effector to open by opening the right hand, and controlling the end effector to close by closing the right hand; the left hand gesture information controls the system to run/stop, the left hand opens the control system to run, and the left hand closes the control system to pause running;
2-3, establishing a filtering algorithm of the joint angle value, and replacing the joint angle value with the angle value obtained by the filtering algorithm; the method specifically comprises the following steps:
establishing a joint angle queue with a fixed length for the joint angle value obtained in the step 2-2; subtracting the joint angle value at the current moment from the value at the previous moment in the queue; if the difference value is smaller than the set threshold value, the joint angle value at the current moment enters the tail of the queue, and meanwhile, the data at the head of the queue is moved out of the queue; otherwise, the joint angle value at the current moment is replaced by the value at the previous moment, then the joint angle value enters the tail of the queue, and the data at the head of the queue is simultaneously moved out of the queue;
the joint angle values in the queue are assigned weights according to the order in which they entered the queue, the values that entered the queue earlier having smaller weights and the values that entered later having larger weights, and the weighted average angle Y of the joint is calculated according to these weights:
Y = (P1·Xn + P2·Xn−1 + P3·Xn−2 + P4·Xn−3) / (P1 + P2 + P3 + P4)
wherein Xn is the joint angle value at the current moment and P1 is the weight of Xn, Xn−1 is the joint angle value at the previous moment and P2 is the weight of Xn−1, Xn−2 is the joint angle value two moments earlier and P3 is the weight of Xn−2, and Xn−3 is the joint angle value three moments earlier and P4 is the weight of Xn−3;
step 2-4, sending the joint angle average value obtained in the step 2-3 to a robot control box through a TCP/IP protocol, and simultaneously returning robot joint state information and electrical parameter information of the robot to the robot control box, wherein the robot joint state information comprises real-time joint angle values and angular speeds of all joints of the robot, and the electrical parameter information comprises real-time currents and temperatures of all joints of the robot;
and 2-5, resolving the received control instruction by the robot control box to realize the following motion of the robot.
2. The human-computer interaction method of the Kinect-based robot-human interaction system as claimed in claim 1, wherein the Kinect three-dimensional sensor is used to acquire an image data stream and a voice data stream, specifically:
step 1-1, directly obtaining a depth image by using a depth camera of a Kinect three-dimensional sensor, and capturing voice by using a microphone array of the Kinect three-dimensional sensor;
and step 1-2, placing the image data stream and the voice data stream into a buffer area.
3. The human-computer interaction method of the Kinect-based robot-human interaction system as claimed in claim 1, wherein based on the voice data processing result, recognition of a voice command is realized, and the robot is controlled to perform corresponding actions, specifically:
step 3-1, extracting a voice data stream processed by noise reduction, automatic gain control and echo suppression from a microphone array of Kinect by using a Kinect Audio Source object in Kinect for Windows SDK, then setting a control command object, wherein the control command object is formed by sentences of control commands, and identifying the command of a user according to the result searched in the control command object;
step 3-2, setting a corresponding mechanical arm control instruction and a corresponding human-computer interaction operation instruction for the control command statement in the step 3-1;
3-3, if the command resolved in the step 3-1 is successfully matched with the control command, sending the corresponding operation instruction to the control program and realizing the corresponding function; and if the matching is not successful, ending the operation of the module.
4. The human-computer interaction method of a Kinect-based robot-human-computer interaction system as claimed in claim 1, wherein different control modes are selected by voice/text prompt of a human-computer interaction module, and different control modules are invoked to realize human-computer interaction, specifically:
step 4-1, designing an integral human-computer interaction system interface, which comprises a posture/voice mode selection interface, a voice/character prompt interface, a robot real-time joint posture display interface, a Kinect human body real-time joint posture display interface and a robot real-time electrical parameter monitoring interface;
and 4-2, recognizing the mode control requirement of the user by calling the voice control module, entering a corresponding mode to control the robot, and simultaneously visualizing the joint state information, the electrical parameter information and the human body joint information of the robot in three windows respectively.
CN201810374190.1A 2018-04-24 2018-04-24 Kinect-based robot man-machine interaction system and method Active CN108453742B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810374190.1A CN108453742B (en) 2018-04-24 2018-04-24 Kinect-based robot man-machine interaction system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810374190.1A CN108453742B (en) 2018-04-24 2018-04-24 Kinect-based robot man-machine interaction system and method

Publications (2)

Publication Number Publication Date
CN108453742A CN108453742A (en) 2018-08-28
CN108453742B true CN108453742B (en) 2021-06-08

Family

ID=63235204

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810374190.1A Active CN108453742B (en) 2018-04-24 2018-04-24 Kinect-based robot man-machine interaction system and method

Country Status (1)

Country Link
CN (1) CN108453742B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109262609A (en) * 2018-08-29 2019-01-25 南京理工大学 Mechanical arm tele-control system and method based on virtual reality technology
CN109200576A (en) * 2018-09-05 2019-01-15 深圳市三宝创新智能有限公司 Somatic sensation television game method, apparatus, equipment and the storage medium of robot projection
CN109483538A (en) * 2018-11-16 2019-03-19 左志强 A kind of VR movement projection robot system based on Kinect technology
CN110154048B (en) * 2019-02-21 2020-12-18 北京格元智博科技有限公司 Robot control method and device and robot
CN109976338A (en) * 2019-03-14 2019-07-05 山东大学 A kind of multi-modal quadruped robot man-machine interactive system and method
CN109955254B (en) * 2019-04-30 2020-10-09 齐鲁工业大学 Mobile robot control system and teleoperation control method for robot end pose
CN110378937B (en) * 2019-05-27 2021-05-11 浙江工业大学 Kinect camera-based industrial mechanical arm man-machine safety distance detection method
CN110216676B (en) * 2019-06-21 2022-04-26 深圳盈天下视觉科技有限公司 Mechanical arm control method, mechanical arm control device and terminal equipment
CN110480637B (en) * 2019-08-12 2020-10-20 浙江大学 Mechanical arm part image recognition and grabbing method based on Kinect sensor
CN111688526B (en) * 2020-06-18 2021-07-20 福建百城新能源科技有限公司 User side new energy automobile energy storage charging station
CN112192585B (en) * 2020-10-13 2022-02-15 厦门大学 Interactive performance method and system of palm-faced puppet performance robot
CN114714358A (en) * 2022-04-18 2022-07-08 山东大学 Method and system for teleoperation of mechanical arm based on gesture protocol
WO2024011518A1 (en) * 2022-07-14 2024-01-18 Abb Schweiz Ag Method for controlling industrial robot and industrial robot

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103164696A (en) * 2013-03-28 2013-06-19 深圳泰山在线科技有限公司 Method and device for recognizing gesture
CN104570731A (en) * 2014-12-04 2015-04-29 重庆邮电大学 Uncalibrated human-computer interaction control system and method based on Kinect
CN106313072A (en) * 2016-10-12 2017-01-11 南昌大学 Humanoid robot based on leap motion of Kinect
CN106826838A (en) * 2017-04-01 2017-06-13 西安交通大学 A kind of interactive biomimetic manipulator control method based on Kinect space or depth perception sensors
KR20170090631A (en) * 2016-01-29 2017-08-08 한국해양대학교 산학협력단 Pid
CN107443396A (en) * 2017-08-25 2017-12-08 魔咖智能科技(常州)有限公司 A kind of intelligence for imitating human action in real time accompanies robot

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103164696A (en) * 2013-03-28 2013-06-19 深圳泰山在线科技有限公司 Method and device for recognizing gesture
CN104570731A (en) * 2014-12-04 2015-04-29 重庆邮电大学 Uncalibrated human-computer interaction control system and method based on Kinect
KR20170090631A (en) * 2016-01-29 2017-08-08 한국해양대학교 산학협력단 Pid
CN106313072A (en) * 2016-10-12 2017-01-11 南昌大学 Humanoid robot based on leap motion of Kinect
CN106826838A (en) * 2017-04-01 2017-06-13 西安交通大学 A kind of interactive biomimetic manipulator control method based on Kinect space or depth perception sensors
CN107443396A (en) * 2017-08-25 2017-12-08 魔咖智能科技(常州)有限公司 A kind of intelligence for imitating human action in real time accompanies robot

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Kinect-based human gait tracking and recognition technology; Shen Xiaokang; China Master's Theses Full-text Database, Information Science and Technology; 2017-01-15; pp. I138-741 *
Robot action imitation learning based on human-computer interaction; Chen Jiashun; China Master's Theses Full-text Database, Information Science and Technology; 2015-12-15; p. I140-1 *
Chen Jiashun. Robot action imitation learning based on human-computer interaction. China Master's Theses Full-text Database, Information Science and Technology. 2015, p. I140-1. *

Also Published As

Publication number Publication date
CN108453742A (en) 2018-08-28

Similar Documents

Publication Publication Date Title
CN108453742B (en) Kinect-based robot man-machine interaction system and method
Mazhar et al. A real-time human-robot interaction framework with robust background invariant hand gesture detection
CN106909216B (en) Kinect sensor-based humanoid manipulator control method
Qian et al. Developing a gesture based remote human-robot interaction system using kinect
Rogalla et al. Using gesture and speech control for commanding a robot assistant
Krupke et al. Comparison of multimodal heading and pointing gestures for co-located mixed reality human-robot interaction
Mazhar et al. Towards real-time physical human-robot interaction using skeleton information and hand gestures
CN109044651B (en) Intelligent wheelchair control method and system based on natural gesture instruction in unknown environment
CN111694428B (en) Gesture and track remote control robot system based on Kinect
WO2019024577A1 (en) Natural human-computer interaction system based on multi-sensing data fusion
CN107765855A (en) A kind of method and system based on gesture identification control machine people motion
CN102830798A (en) Mark-free hand tracking method of single-arm robot based on Kinect
CN105867595A (en) Human-machine interaction mode combing voice information with gesture information and implementation device thereof
Chen et al. A human–robot interface for mobile manipulator
Shin et al. EMG and IMU based real-time HCI using dynamic hand gestures for a multiple-DoF robot arm
Kang et al. A robot system that observes and replicates grasping tasks
Teke et al. Real-time and robust collaborative robot motion control with Microsoft Kinect® v2
Shang-Liang et al. Using deep learning technology to realize the automatic control program of robot arm based on hand gesture recognition
Iossifidis et al. Anthropomorphism as a pervasive design concept for a robotic assistant
CN110766804B (en) Method for cooperatively grabbing object by human and machine in VR scene
Zhao et al. Intuitive robot teaching by hand guided demonstration
Dadiz et al. Go-Mo (Go-Motion): An android mobile application detecting motion gestures for generating basic mobile phone commands utilizing KLT algorithm
Infantino et al. Visual control of a robotic hand
Jayasurya et al. Gesture controlled AI-robot using Kinect
TK et al. Real-Time Virtual Mouse using Hand Gestures for Unconventional Environment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant