CN106826838A - A kind of interactive biomimetic manipulator control method based on Kinect space or depth perception sensors - Google Patents
- Publication number: CN106826838A
- Application number: CN201710213574.0A
- Authority: CN (China)
- Prior art keywords: depth, control, kinect, space, information
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
Abstract
The invention discloses an interactive bionic manipulator control method based on a Kinect depth-perception sensor. A multi-degree-of-freedom bionic manipulator arm and hand are built, with a central processing unit serving as the control board, connected respectively to the drive board and to the Kinect depth sensor. The Kinect sensor comprises an RGB camera, a spatial-depth camera and a speech-detection circuit. It recognizes and distinguishes the two-dimensional limb contour of the target human arm and obtains the contour's two-dimensional information. The control board resolves the received information into the position, angle and velocity of the bionic arm and converts these into control instructions. The drive board actuates motors and/or servos according to the instructions, and the joints cooperate to follow the target arm's motion, completing limb-imitation control. From the body-motion data and voice commands captured by the Kinect sensor, the invention completes the corresponding series of spatial transformations by resolving, realizing real-time limb-imitation control, voice control and human-machine interaction.
Description
Technical field
The invention belongs to the field of robotics and relates to an interactive bionic manipulator control method based on a Kinect depth-perception sensor.
Background technology
Robots can be divided into professional service robots and personal/home service robots. Their range of application is very wide, covering maintenance, repair, transport, cleaning, security, rescue and monitoring. The mechanical arm is the most widely applied automated mechanical device in robotics; it appears in industrial manufacturing, medical treatment, entertainment services, the military, semiconductor manufacturing and space exploration. Bionic manipulators likewise have a broad range of uses; they are usually deployed together with robots and form an important member of the robot family.
A bionic manipulator is important to a robot: it can complete different tasks through corresponding actions, better serving people or enabling functions such as interactive entertainment. A multi-degree-of-freedom arm is more flexible, and combined with a dexterous bionic hand and multi-sensor technology it can better serve people and satisfy the flexibility requirements of various robots.
Content of the invention
The problem solved by the present invention is to provide an interactive bionic manipulator control method based on a Kinect depth-perception sensor that realizes real-time limb-imitation control, voice control and cooperative human-machine interaction for a bionic manipulator.
The present invention is achieved through the following technical solutions:
An interactive bionic manipulator control method based on a Kinect depth-perception sensor comprises the following operations:
1) Build a multi-degree-of-freedom bionic manipulator arm and hand; its joints are driven by motors and its fingers are pulled by servo-driven wires, with the drive board supplying power to the motors and servos.
2) Use a central processing unit as the control board, connected respectively to the drive board and to the Kinect depth sensor; the Kinect sensor comprises an RGB camera, a spatial-depth camera and a speech-detection circuit.
3) The Kinect sensor recognizes and distinguishes the two-dimensional limb contour of the target human arm to obtain the contour's two-dimensional information. Using the spatial-depth information captured by the depth camera, the two-dimensional contour information is expanded into three-dimensional attitude information, which is then sent to the control board as the input for limb imitation.
4) The control board resolves the received information into the position, angle and velocity of the bionic arm, converts these into control instructions and sends them to the drive board.
5) The drive board actuates the motors and/or servos according to the instructions; the joints cooperate to follow the target arm's motion, completing limb-imitation control.
The speech-detection circuit of the Kinect sensor also receives the target's sound signal, converts it into binary quantized data and sends it to the control board.
The control board resolves the received data to recognize the content of the voice command, then retrieves the corresponding control instruction and transfers it to the drive board; the instruction is a preset manipulator control code that completes the specified action.
The drive board actuates the motors and/or servos accordingly, the joints cooperate to carry out the voice command, and voice control is complete.
Limb-imitation control and voice control are combined to complete real-time human-machine interaction:
While the Kinect sensor captures the interaction target's limb motion, it also receives the target's voice commands, and sends the captured signals to the control board.
The control board combines the target's limb motion and voice command: it preferentially generates the limb-imitation control instruction and responds to the voice command while the imitation instruction is being completed.
The drive board then cooperates with the person live, or interacts in real time, according to the instruction content.
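The priority rule above — imitation first, voice serviced alongside it — can be sketched as a tiny arbitration step. This is an illustrative reading of the described behaviour, not code from the patent; the command representations are placeholders:

```python
def arbitrate(limb_cmd, voice_cmd):
    """Queue commands the way the control board is described to work:
    the limb-imitation command is generated first (priority), and a
    voice command is serviced while imitation continues."""
    queue = []
    if limb_cmd is not None:
        queue.append(("imitate", limb_cmd))   # always ordered first
    if voice_cmd is not None:
        queue.append(("voice", voice_cmd))    # answered alongside imitation
    return queue

print(arbitrate({"elbow": 95.0}, "hello"))
# [('imitate', {'elbow': 95.0}), ('voice', 'hello')]
```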
The Kinect sensor continuously acquires the person's limb motion through instantaneous captures of the target, recognizing it and obtaining three-dimensional limb-attitude information.
The limb joints and bones are mapped respectively to nodes and links in three-dimensional space on the control board: each joint position corresponds to a node coordinate and each bone length to a link length. The orientation, angle and speed of the corresponding target limb motion are then resolved; control-instruction data covering each degree of freedom of the bionic arm are calculated and transmitted to the control board, so that the bionic arm imitates the target's limb motion.
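Once joints are mapped to node coordinates, the angle at a node is the angle between the two links meeting there. A minimal sketch, assuming the node coordinates are already available as 3-D tuples (the function name and the straight-arm example are illustrative, not from the patent):

```python
import math

def joint_angle(a, b, c):
    """Angle at node b (degrees) between links b->a and b->c,
    e.g. the elbow angle from shoulder, elbow and wrist coordinates."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

# A fully straightened arm: shoulder, elbow, wrist collinear -> 180 degrees.
print(round(joint_angle((0, 0, 0), (0.3, 0, 0), (0.6, 0, 0))))  # 180
```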
The information received by the control board is the RGB color image and spatial-depth information collected by the depth sensor; the Kinect serves only as the depth sensor, responsible for image and spatial-depth acquisition, and takes no part in the resolving process.
After receiving the transmitted information, the control board resolves it in real time:
21) analyze the gray gradients of the RGB image to recognize object contours;
22) classify the contours and identify the features belonging to the human figure;
23) from the human contour image, determine the limb-bone directions and joint positions and convert them into link and node coordinates; the data obtained at this point lie on a two-dimensional plane;
24) the controller processes the depth-sensor information: from the spatial-depth data it determines the distance between the node coordinates of step 23) and the sensor, adds this as a third (depth) dimension to the two-dimensional coordinates resolved in step 23), and forms three-axis coordinate data in a three-dimensional Cartesian system;
25) by spatial geometry, obtain the angle between the links at each node, i.e. the actual bending degree of the corresponding human limb joint;
26) map these angle data to the angle data of each joint of the mechanical arm;
27) steps 21)–26) run in real time, producing continuously varying joint-angle data; from these variations the required running state of each motor/servo is calculated, forming an action instruction: hold at a specified position, or move at a commanded speed;
28) if a valid voice command from the interaction object is received during the image- and depth-data resolving, the resolved result of the voice command is added to the action instruction in this step;
29) the action instructions for each joint of the arm are sent to the driver of the corresponding joint, which drives the actuating motor/servo.
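Step 27) above turns consecutively resolved joint angles into a hold-or-move command. A minimal sketch of that decision; the deadband threshold and the command dictionary layout are assumptions for illustration, not values from the patent:

```python
def motor_command(prev_deg, curr_deg, dt, deadband=1.0):
    """Turn two consecutive joint-angle samples into a drive command:
    hold position if the change is within the deadband, otherwise
    move toward the new angle at the observed angular speed (deg/s)."""
    delta = curr_deg - prev_deg
    if abs(delta) <= deadband:
        return {"mode": "hold", "target": prev_deg, "speed": 0.0}
    return {"mode": "move", "target": curr_deg, "speed": abs(delta) / dt}

print(motor_command(90.0, 90.5, 0.1))   # small change -> hold in place
print(motor_command(90.0, 100.0, 0.1))  # move toward 100 deg at 100 deg/s
```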
The drive board supplies power according to the rated voltage and current of the motors and servos. According to the control signals received from the control board, it applies the corresponding voltages so that the motors and servos reach the specified positions or move at the commanded speed, completing the arm's action.
The control board receives the audio data collected by the Kinect microphone. Voice-command resolving is divided into speech recognition and instruction conversion: the audio collected by the microphone is recognized as specific sentences and words, which are then used for instruction conversion.
Instruction conversion first judges whether the sentence keyword produced by speech recognition is valid: a keyword is valid if it matches data in the instruction database.
The feature values in the instruction database are written by the user and include, but are not limited to: stop, start, follow, avoid, move joint X to Y.
Valid sentence keywords are converted into action instructions by database query, with the preset action in the database determining the instruction output.
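The keyword-to-instruction lookup described above can be sketched as a simple query against a preset table; the keywords and control codes below are illustrative placeholders, not values from the patent:

```python
# User-written instruction database: recognized keyword -> preset control code.
COMMAND_DB = {
    "stop":   0x00,
    "start":  0x01,
    "follow": 0x02,
    "avoid":  0x03,
}

def to_action(keyword):
    """Match a recognized keyword against the preset command table;
    return the preset control code, or None for an invalid keyword."""
    return COMMAND_DB.get(keyword.strip().lower())

print(to_action("Follow"))  # 2
print(to_action("dance"))   # None -> not in the database, so ignored
```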
Compared with the prior art, the present invention has the following beneficial technical effects:
The interactive bionic manipulator control method provided by the invention can, from the body-motion data and voice commands obtained by the Kinect sensor, complete the corresponding series of spatial transformations by resolving and so control the mechanical arm. The control modes include limb-imitation control, voice control and real-time human-machine interaction.
Using the three-dimensional limb-attitude information obtained from the sensor, the invention maps limb joints and bones to nodes and links in three-dimensional space (joint positions to node coordinates, bone lengths to link lengths) and calculates the orientation, angle and speed of the corresponding limb motion. The instruction data for each degree of freedom of the bionic arm are then derived and transferred to the arm's control circuit, so the arm imitates the person's limb motion.
In voice control, the speech-detection circuit converts sound signals into binary quantized data, and resolving recognizes the command content, e.g. "hello", "thanks", "goodbye". These voice messages are converted into specific control instructions and transferred to the control circuit, realizing simple motions of the arm such as waving, beckoning and shaking hands.
In live real-time interactive control, on the basis of the two modes above, the limb motion and voice commands of the object interacting with the arm are combined; the arm cooperates with the person live, or interacts in real time, according to the instruction content, responding to the person's voice and limb information while completing the commanded task — e.g. handing an article between human and machine while avoiding collision with, or harm to, the person.
Brief description of the drawings
Fig. 1 is a schematic diagram of the bionic manipulator.
Fig. 2 is a connection diagram of the invention.
Fig. 3 is a workflow diagram of the invention.
Specific embodiment
The present invention is described in further detail below with reference to specific embodiments; the description is an explanation of the invention, not a limitation of it.
Referring to Fig. 1 to Fig. 3, an interactive bionic manipulator control method based on a Kinect depth-perception sensor comprises the following operations:
1) Build a multi-degree-of-freedom bionic manipulator arm and hand; its joints are driven by motors and its fingers are pulled by servo-driven wires, with the drive board supplying power to the motors and servos.
2) Use a central processing unit as the control board, connected respectively to the drive board and to the Kinect depth sensor; the Kinect sensor comprises an RGB camera, a spatial-depth camera and a speech-detection circuit.
3) The Kinect sensor recognizes and distinguishes the two-dimensional limb contour of the target human arm to obtain the contour's two-dimensional information. Using the spatial-depth information captured by the depth camera, the two-dimensional contour information is expanded into three-dimensional attitude information, which is then sent to the control board as the input for limb imitation.
4) The control board resolves the received information into the position, angle and velocity of the bionic arm, converts these into control instructions and sends them to the drive board.
5) The drive board actuates the motors and/or servos according to the instructions; the joints cooperate to follow the target arm's motion, completing limb-imitation control.
The speech-detection circuit of the Kinect sensor also receives the target's sound signal, converts it into binary quantized data and sends it to the control board.
The control board resolves the received data to recognize the content of the voice command, then retrieves the corresponding control instruction and transfers it to the drive board; the instruction is a preset manipulator control code that completes the specified action.
The drive board actuates the motors and/or servos accordingly, the joints cooperate to carry out the voice command, and voice control is complete.
Limb-imitation control and voice control are combined to complete real-time human-machine interaction:
While the Kinect sensor captures the interaction target's limb motion, it also receives the target's voice commands, and sends the captured signals to the control board.
The control board combines the target's limb motion and voice command: it preferentially generates the limb-imitation control instruction and responds to the voice command while the imitation instruction is being completed.
The drive board then cooperates with the person live, or interacts in real time, according to the instruction content.
Each part is explained in further detail below.
The Kinect sensor comprises an RGB camera, a spatial-depth camera and a speech-detection circuit. The RGB camera recognizes and distinguishes the person's two-dimensional limb contour, while the spatial-depth information expands that two-dimensional contour information into three-dimensional attitude information. The three-dimensional spatial data then serve as the input for limb-imitation control and real-time human-machine interaction. The speech-detection circuit converts sound signals into binary quantized data, from which the voice-command content is recognized by resolving. The Kinect serves only as the depth sensor, responsible for image and spatial-depth acquisition, and takes no part in the resolving; its acquisition role could be filled by another manufacturer's depth sensor.
The controller consists of a main control board and a drive board. The main control board receives the RGB image, spatial-depth information and speech data transmitted by the Kinect and completes the data resolving. From the obtained position, angle and speed of the human arm's motion, it outputs the drive signals for the arm's joint motors to complete the arm's action; it can also receive specific voice control instructions and output joint-motor actions to complete simple interactions.
The drive board supplies power according to the rated voltage and current of the motors and servos. According to the control signals received from the control board, it applies the corresponding voltages so that the motors and servos reach the specified positions or move at the commanded speed, completing the arm's action.
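The drive board's conversion of a commanded joint angle into a drive signal could, for a hobby-type servo, be sketched as a pulse-width mapping. The 500–2500 µs over 0–180° range below is a common RC-servo convention assumed for illustration, not a figure from the patent:

```python
def angle_to_pulse_us(angle_deg, min_us=500, max_us=2500, span_deg=180.0):
    """Map a joint angle to a hobby-servo PWM pulse width (microseconds).
    Angles outside the servo's travel are clamped to its limits."""
    angle = max(0.0, min(span_deg, angle_deg))
    return min_us + (max_us - min_us) * angle / span_deg

print(angle_to_pulse_us(0))    # 500.0  -> one end of travel
print(angle_to_pulse_us(90))   # 1500.0 -> mid position
print(angle_to_pulse_us(180))  # 2500.0 -> other end of travel
```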
The mechanical arm used in the invention is a bionic arm imitating the human upper limb, composed of a shoulder joint, upper arm, elbow joint, forearm, wrist joint and dexterous hand; each joint is driven by a motor, and the dexterous hand's fingers are pulled by servo-driven wires.
In limb-imitation control, the person's limb motion is continuously acquired and recognized using the sensor's instantaneous-capture function. Using the three-dimensional limb-attitude information obtained from the sensor, limb joints and bones are mapped respectively to nodes and links in three-dimensional space: joint positions correspond to node coordinates and bone lengths to link lengths. The central processing unit then resolves the orientation, angle and speed of the corresponding limb motion, further calculates the instruction data for each degree of freedom of the bionic arm, and transfers the data to the arm's control circuit, realizing the arm's imitation of the person's limb motion.
The information received by the control board is the RGB color image and spatial-depth information collected by the depth sensor; the Kinect serves only as the depth sensor, responsible for image and spatial-depth acquisition, and takes no part in the resolving process. Its acquisition role could be filled by another manufacturer's depth sensor.
After receiving the transmitted information, the control board resolves it in real time:
21) analyze the gray gradients of the RGB image to recognize object contours;
22) classify the contours and identify the features belonging to the human figure;
23) from the human contour image, determine the limb-bone directions and joint positions and convert them into link and node coordinates; the data obtained at this point lie on a two-dimensional plane;
24) the controller processes the depth-sensor information: from the spatial-depth data it determines the distance between the node coordinates of step 23) and the sensor, adds this as a third (depth) dimension to the two-dimensional coordinates resolved in step 23), and forms three-axis coordinate data in a three-dimensional Cartesian system;
25) by spatial geometry, obtain the angle between the links at each node, i.e. the actual bending degree of the corresponding human limb joint;
26) map these angle data to the angle data of each joint of the mechanical arm;
27) steps 21)–26) run in real time, producing continuously varying joint-angle data; from these variations the required running state of each motor/servo is calculated, forming an action instruction: hold at a specified position, or move at a commanded speed;
28) if a valid voice command from the interaction object is received during the image- and depth-data resolving, the resolved result of the voice command is added to the action instruction in this step;
29) the action instructions for each joint of the arm are sent to the driver of the corresponding joint, which drives the actuating motor/servo.
In voice control, the speech-detection circuit converts sound signals into binary quantized data, and resolving recognizes the command content, e.g. "hello", "thanks", "goodbye". These voice messages are converted into specific control instructions and transferred to the control circuit, realizing simple motions of the arm such as waving, beckoning and shaking hands; each control instruction is a preset manipulator control code that completes the specified action.
The controller receives the audio data collected by the Kinect microphone. The Kinect's built-in microphone is used here only for audio capture and could be replaced by another manufacturer's microphone of the same type; the resolving is completed by the controller. Voice-command resolving is divided into speech recognition and instruction conversion. Speech-recognition technology is mature and open-source and is used directly here: the audio collected by the microphone is recognized as specific sentences and words, which are then used for instruction conversion.
Instruction conversion first judges whether the sentence keyword produced by speech recognition is valid: a keyword is valid if it matches data in the instruction database.
The feature values in the instruction database are written by the user and include, but are not limited to: stop, start, follow, avoid, move joint X to Y.
Valid sentence keywords are converted into action instructions by database query, with the preset action in the database determining the instruction output.
In live real-time interactive control, on the basis of the two modes above, the limb motion and voice commands of the object interacting with the arm are combined; the arm cooperates with the person live, or interacts in real time, according to the instruction content, responding to the person's voice and limb information while completing the commanded task — e.g. handing an article between human and machine while avoiding collision with, or harm to, the person.
A specific embodiment is given below.
Step 1: build the multi-degree-of-freedom mechanical arm and hand, as shown in Fig. 1 and Fig. 2.
Step 2: connect the central processing unit, serving as the control board, to the drive board, and develop the manipulator's control program.
Step 3: connect the control board to the Kinect sensor and complete human-motion capture with the body-sensing program; the attitude of the person's upper limb is reduced by algorithm to each arm joint's rotation angle and angular velocity, converted into arm control instructions and sent to the control board over the communication link.
Step 4: after receiving the arm control instructions, the control board decodes them into the motion information of each arm joint and controls the joints to cooperate in following the human arm or completing the human-machine interaction.
The example given above is a preferred example of implementing the present invention, but the invention is not limited to the above embodiment. Any non-essential additions or substitutions made by a person skilled in the art according to the technical features of the present technical solution belong to the protection scope of the invention.
Claims (7)
1. An interactive bionic manipulator control method based on a Kinect depth-perception sensor, characterised by comprising the following operations:
1) building a multi-degree-of-freedom bionic manipulator arm and hand, the joints being driven by motors and the fingers pulled by servo-driven wires, the drive board supplying power to the motors and servos;
2) using a central processing unit as the control board, connected respectively to the drive board and to the Kinect depth sensor, the Kinect sensor comprising an RGB camera, a spatial-depth camera and a speech-detection circuit;
3) the Kinect sensor recognizing and distinguishing the two-dimensional limb contour of the target human arm to obtain the contour's two-dimensional information; expanding, with the spatial-depth information captured by the depth camera, the two-dimensional contour information into three-dimensional attitude information, which is sent to the control board as the input for limb imitation;
4) the control board resolving the received information into the position, angle and velocity of the bionic arm, converting them into control instructions and sending them to the drive board;
5) the drive board actuating the motors and/or servos according to the control instructions, the joints cooperating to follow the motion of the target human arm and complete limb-imitation control.
2. The interactive bionic manipulator control method based on a Kinect depth-perception sensor of claim 1, characterised in that the speech-detection circuit of the Kinect sensor also receives the target's sound signal, converts it into binary quantized data and sends it to the control board;
the control board resolves the received data to recognize the content of the voice command, then retrieves the corresponding control instruction and transfers it to the drive board, the instruction being a preset manipulator control code that completes the specified action;
the drive board actuates the motors and/or servos according to the instruction, the joints cooperating to carry out the voice command and complete voice control.
3. The interactive bionic manipulator control method based on a Kinect depth-perception sensor of claim 2, characterised in that limb-imitation control and voice control are combined to complete real-time human-machine interaction:
while the Kinect sensor captures the interaction target's limb motion, it also receives the target's voice commands and sends the captured signals to the control board;
the control board combines the target's limb motion and voice command, preferentially generating the limb-imitation control instruction and responding to the voice command while the imitation instruction is being completed;
the drive board cooperates with the person live, or interacts in real time, according to the instruction content.
4. The interactive bionic manipulator control method based on a Kinect depth-perception sensor of claim 1, characterised in that the Kinect sensor continuously acquires the person's limb motion through instantaneous captures of the target, recognizing it and obtaining three-dimensional limb-attitude information;
the limb joints and bones are mapped respectively to nodes and links in three-dimensional space on the control board, joint positions corresponding to node coordinates and bone lengths to link lengths; the orientation, angle and speed of the corresponding target limb motion are then resolved; control-instruction data covering each degree of freedom of the bionic arm are calculated and transmitted to the control board, so that the bionic arm imitates the target's limb motion.
5. The interactive biomimetic manipulator control method based on a Kinect visual depth sensor according to claim 1 or 4, characterized in that the information received by the control board is the RGB color image and the spatial depth information collected by the visual depth sensor, wherein the Kinect, as the visual depth sensor, is responsible only for image acquisition and spatial depth acquisition and takes no part in the solving process; after receiving the transmitted information, the control board performs real-time solving:
21) performing gray-gradient analysis on the RGB image information to recognize object contours;
22) classifying the object contours and identifying those with human-body contour features;
23) determining, from the human contour image, the orientation of the limb bones and the positions of the joints, and converting them into link and node coordinates; the data obtained at this point lie in a two-dimensional plane;
24) processing the depth-sensor information: using the spatial depth data, the controller determines the distance between each node coordinate of step 23) and the sensor, and adds that depth as a third dimension to the two-dimensional coordinates solved in step 23), forming three-dimensional coordinate data in a Cartesian coordinate system;
25) obtaining, by spatial geometry, the angle between the links at each node, i.e., the actual bending degree of the corresponding human limb joint;
26) transforming the angle data, by mapping, into the angle data of each joint of the mechanical arm;
27) performing steps 21)-26) in real time, so that the joint angle data vary continuously; from the continuously varying data, computing the running state required of each motor/servo and forming the action command, i.e., holding at a specified position or moving at a commanded speed;
28) if a valid voice instruction from the interactive object is received during the image and depth solving, adding the solved result of the voice instruction to the action command in this step;
29) sending the action command of each joint of the mechanical arm to the driver of the corresponding joint, the driver driving the motor/servo to execute the motion.
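A minimal sketch of the geometry behind steps 24) and 25): the 2D joint pixel is lifted into 3D using the depth reading, and the joint bending degree is measured as the angle between adjacent links. The pinhole back-projection and the intrinsic parameters FX, FY, CX, CY are illustrative assumptions of this sketch, not values given in the claim:

```python
import math

# Assumed Kinect-like camera intrinsics (illustrative, not from the patent)
FX, FY, CX, CY = 365.0, 365.0, 256.0, 212.0

def to_3d(u, v, depth_mm):
    """Step 24: add the depth dimension to a 2D joint pixel (u, v),
    giving Cartesian coordinates via pinhole back-projection."""
    z = depth_mm / 1000.0          # metres
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return (x, y, z)

def joint_angle(a, b, c):
    """Step 25: angle in degrees between links b->a and b->c at node b,
    i.e. the bending degree of the corresponding human joint."""
    u = [a[i] - b[i] for i in range(3)]
    w = [c[i] - b[i] for i in range(3)]
    dot = sum(ui * wi for ui, wi in zip(u, w))
    nu = math.sqrt(sum(ui * ui for ui in u))
    nw = math.sqrt(sum(wi * wi for wi in w))
    return math.degrees(math.acos(dot / (nu * nw)))

# Example: shoulder, elbow and wrist pixels with their depth readings
shoulder = to_3d(300, 200, 1500)
elbow    = to_3d(340, 260, 1480)
wrist    = to_3d(400, 260, 1450)
elbow_angle = joint_angle(shoulder, elbow, wrist)
```

In step 26) an angle obtained this way would then be mapped onto the corresponding joint of the mechanical arm, subject to that joint's travel limits.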
6. The interactive biomimetic manipulator control method based on a Kinect visual depth sensor according to claim 1, characterized in that the driving board supplies power according to the rated voltage and current of the motors or servos and, according to the control signals received from the control board, transmits the corresponding voltage to the motors or servos, so that the motors and servos reach the specified position or move at the commanded speed, completing the action of the mechanical arm.
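As an illustration of what the driving board in claim 6 must do for a hobby-style servo, the sketch below maps a commanded joint angle to a PWM pulse width; the 500-2500 microsecond range and 180 degree travel are common servo conventions assumed here, not figures from the patent:

```python
def angle_to_pulse_us(angle_deg, min_us=500, max_us=2500, travel_deg=180.0):
    """Linearly map a commanded angle in [0, travel_deg] to the PWM
    pulse width (microseconds) that positions the servo; out-of-range
    commands are clamped to the servo's travel."""
    angle = max(0.0, min(travel_deg, angle_deg))
    return min_us + (max_us - min_us) * angle / travel_deg

pulse = angle_to_pulse_us(90.0)   # mid-travel command
```

Moving at a commanded speed, as the claim requires, would amount to updating this pulse width incrementally at the PWM frame rate.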
7. The interactive biomimetic manipulator control method based on a Kinect visual depth sensor according to claim 1, characterized in that the control board receives the audio data collected by the Kinect microphones, and voice-instruction solving is divided into speech recognition and instruction conversion: the audio data collected by the microphones are recognized as specific sentences and words for use in instruction conversion;
the instruction conversion is as follows: first judging whether the sentence keywords produced by speech recognition are valid, a keyword being judged valid when it matches the data in the instruction database, in which case the match is successful;
the data feature values in the instruction database are written into the database by the user and include, but are not limited to: stop, start, follow, avoid, move joint X to position Y;
the valid sentence keywords obtained are converted into action commands by means of database query, the action command output being determined by the corresponding action preset in the database.
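The instruction conversion of claim 7 can be sketched as a simple lookup against a user-written table; the table contents and function names below are illustrative assumptions, not the patent's actual instruction database:

```python
# Illustrative instruction database: keyword -> preset action command.
# The user writes these entries; the set is not limited to the examples.
INSTRUCTION_DB = {
    "stop":   {"type": "halt"},
    "start":  {"type": "resume"},
    "follow": {"type": "track_target"},
    "avoid":  {"type": "evade"},
}

def convert_instruction(keywords):
    """Return the preset action command for the first valid keyword,
    or None when nothing matches the instruction database."""
    for kw in keywords:
        command = INSTRUCTION_DB.get(kw.lower())
        if command is not None:      # valid: keyword matches the database
            return command
    return None                      # no valid instruction recognized

# e.g. speech recognition produced the sentence "please follow me"
cmd = convert_instruction("please follow me".split())
```

A matched command would then be merged into the action command stream in step 28) of claim 5.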
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710213574.0A CN106826838B (en) | 2017-04-01 | 2017-04-01 | Interaction bionic mechanical arm control method based on Kinect visual depth sensor |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106826838A true CN106826838A (en) | 2017-06-13 |
CN106826838B CN106826838B (en) | 2019-12-31 |
Family
ID=59142262
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710213574.0A Active CN106826838B (en) | 2017-04-01 | 2017-04-01 | Interaction bionic mechanical arm control method based on Kinect visual depth sensor |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106826838B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102114631A (en) * | 2010-01-06 | 2011-07-06 | 深圳华强智能技术有限公司 | Simulated hand |
CN102727362A (en) * | 2012-07-20 | 2012-10-17 | 上海海事大学 | NUI (Natural User Interface)-based peripheral arm motion tracking rehabilitation training system and training method |
KR101438002B1 (en) * | 2013-02-28 | 2014-09-16 | 계명대학교 산학협력단 | Falldown prevent device |
CN105701806A (en) * | 2016-01-11 | 2016-06-22 | 上海交通大学 | Depth image-based parkinson's tremor motion characteristic detection method and system |
CN106125925A (en) * | 2016-06-20 | 2016-11-16 | 华南理工大学 | Intelligent grasping method based on gesture and voice control
CN106426168A (en) * | 2016-10-19 | 2017-02-22 | 辽宁工程技术大学 | Bionic mechanical arm and control method thereof |
Cited By (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019041658A1 (en) * | 2017-08-31 | 2019-03-07 | 南京埃斯顿机器人工程有限公司 | Robot external motion path control method |
CN107643827A (en) * | 2017-09-04 | 2018-01-30 | 王怡科 | A kind of body-sensing tracing system and method with pressure feedback |
CN107450376A (en) * | 2017-09-09 | 2017-12-08 | 北京工业大学 | Service manipulator grasping attitude angle computation method based on an intelligent home mobile platform
CN108247633A (en) * | 2017-12-27 | 2018-07-06 | 珠海格力节能环保制冷技术研究中心有限公司 | The control method and system of robot |
CN108247633B (en) * | 2017-12-27 | 2021-09-03 | 珠海格力节能环保制冷技术研究中心有限公司 | Robot control method and system |
CN108453742B (en) * | 2018-04-24 | 2021-06-08 | 南京理工大学 | Kinect-based robot man-machine interaction system and method |
CN108453742A (en) * | 2018-04-24 | 2018-08-28 | 南京理工大学 | Robot man-machine interactive system based on Kinect and method |
CN108828996A (en) * | 2018-05-31 | 2018-11-16 | 四川文理学院 | A kind of the mechanical arm remote control system and method for view-based access control model information |
CN109621331A (en) * | 2018-12-13 | 2019-04-16 | 深圳壹账通智能科技有限公司 | Fitness-assisting method, apparatus and storage medium, server |
CN110480634A (en) * | 2019-08-08 | 2019-11-22 | 北京科技大学 | A kind of arm guided-moving control method for manipulator motion control |
CN110450145A (en) * | 2019-08-13 | 2019-11-15 | 广东工业大学 | A kind of biomimetic manipulator based on skeleton identification |
CN110762362A (en) * | 2019-08-14 | 2020-02-07 | 广东工业大学 | Support based on human body posture control |
CN110762362B (en) * | 2019-08-14 | 2024-01-05 | 广东工业大学 | Support based on human posture control |
CN110834331A (en) * | 2019-11-11 | 2020-02-25 | 路邦科技授权有限公司 | Bionic robot action control method based on visual control |
CN111113429A (en) * | 2019-12-31 | 2020-05-08 | 深圳市优必选科技股份有限公司 | Action simulation method, action simulation device and terminal equipment |
CN111113429B (en) * | 2019-12-31 | 2021-06-25 | 深圳市优必选科技股份有限公司 | Action simulation method, action simulation device and terminal equipment |
CN111267083A (en) * | 2020-03-12 | 2020-06-12 | 北京科技大学 | Mechanical arm autonomous carrying system based on combination of monocular and binocular cameras |
CN111267083B (en) * | 2020-03-12 | 2022-01-04 | 北京科技大学 | Mechanical arm autonomous carrying system based on combination of monocular and binocular cameras |
CN111645080A (en) * | 2020-05-08 | 2020-09-11 | 覃立万 | Intelligent service robot hand-eye cooperation system and operation method |
CN111482967A (en) * | 2020-06-08 | 2020-08-04 | 河北工业大学 | Intelligent detection and capture method based on ROS platform |
CN112276947A (en) * | 2020-10-21 | 2021-01-29 | 乐聚(深圳)机器人技术有限公司 | Robot motion simulation method, device, equipment and storage medium |
CN112975964B (en) * | 2021-02-23 | 2022-04-01 | 青岛海科虚拟现实研究院 | Robot automatic control method and system based on big data and robot |
CN112975964A (en) * | 2021-02-23 | 2021-06-18 | 青岛海科虚拟现实研究院 | Robot automatic control method and system based on big data and robot |
CN113070877A (en) * | 2021-03-24 | 2021-07-06 | 浙江大学 | Variable attitude mapping method for seven-axis mechanical arm visual teaching |
CN113070877B (en) * | 2021-03-24 | 2022-04-15 | 浙江大学 | Variable attitude mapping method for seven-axis mechanical arm visual teaching |
CN113282109A (en) * | 2021-07-23 | 2021-08-20 | 季华实验室 | Unmanned aerial vehicle and human cooperative operation system |
CN114131591A (en) * | 2021-12-03 | 2022-03-04 | 山东大学 | Semi-physical simulation method and system for operation strategy of outer limb robot |
CN114129392A (en) * | 2021-12-07 | 2022-03-04 | 山东大学 | Self-adaptive redundant driving exoskeleton rehabilitation robot capable of regulating and controlling terminal fingertip force |
WO2023184535A1 (en) * | 2022-04-02 | 2023-10-05 | 京东方科技集团股份有限公司 | Speech interaction system and method, and smart device |
CN114932555A (en) * | 2022-06-14 | 2022-08-23 | 如你所视(北京)科技有限公司 | Mechanical arm cooperative operation system and mechanical arm control method |
CN114932555B (en) * | 2022-06-14 | 2024-01-05 | 如你所视(北京)科技有限公司 | Mechanical arm collaborative operation system and mechanical arm control method |
CN117102562A (en) * | 2023-10-24 | 2023-11-24 | 常州法尔林精机有限公司 | Manipulator control system and method for planing machine processing automation production line |
CN117102562B (en) * | 2023-10-24 | 2023-12-22 | 常州法尔林精机有限公司 | Manipulator control system and method for planing machine processing automation production line |
Also Published As
Publication number | Publication date |
---|---|
CN106826838B (en) | 2019-12-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106826838A (en) | Interactive biomimetic manipulator control method based on Kinect visual depth sensor | |
CN106909216B (en) | Kinect sensor-based humanoid manipulator control method | |
CN108453742B (en) | Kinect-based robot man-machine interaction system and method | |
CN100360204C (en) | Control system of intelligent perform robot based on multi-processor cooperation | |
CN106826822A (en) | Vision positioning and mechanical arm grasping implementation method based on ROS system | |
CN106737760B (en) | Human-type intelligent robot and human-computer communication system | |
CN102848388A (en) | Service robot locating and grabbing method based on multiple sensors | |
CN104503450A (en) | Service robot achieving intelligent obstacle crossing | |
CN102323817A (en) | Service robot control platform system and multimode intelligent interaction and intelligent behavior realizing method thereof | |
CN106514667A (en) | Human-computer cooperation system based on Kinect skeletal tracking and uncalibrated visual servo | |
CN104385284A (en) | Method of implementing intelligent obstacle-surmounting | |
CN109571513A (en) | Immersive mobile grasping service robot system | |
CN108572586B (en) | Information processing apparatus and information processing system | |
Khajone et al. | Implementation of a wireless gesture controlled robotic arm | |
CN105773642A (en) | Gesture remote control system for manipulator | |
Teke et al. | Real-time and robust collaborative robot motion control with Microsoft Kinect® v2 | |
JP2008080431A (en) | Robot system | |
CN108062102A (en) | A kind of gesture control has the function of the Mobile Robot Teleoperation System Based of obstacle avoidance aiding | |
Wakabayashi et al. | Associative motion generation for humanoid robot reflecting human body movement | |
Shaikh et al. | Voice assisted and gesture controlled companion robot | |
CN206296913U (en) | A kind of acoustic control mechanical arm | |
WO2018157355A1 (en) | Humanoid intelligent robot and human-machine communication system | |
Kamath et al. | Kinect sensor based real-time robot path planning using hand gesture and clap sound | |
CN208323396U (en) | A kind of hardware platform of intelligent robot | |
Jayasurya et al. | Gesture controlled AI-robot using Kinect |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||