CN102350700A - Method for controlling robot based on visual sense - Google Patents
- Publication number
- CN102350700A · CN2011102772065A · CN201110277206A
- Authority
- CN
- China
- Prior art keywords
- human hand
- robot
- point
- vision-based method
- control method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Manipulator (AREA)
Abstract
The invention provides a vision-based method for controlling a robot. The method comprises the following steps: (1) acquiring gesture images of a human hand with cameras; (2) extracting feature points of the hand from the gesture images; (3) performing three-dimensional reconstruction on the feature points to obtain their positional relation in three-dimensional space; (4) transforming the coordinates corresponding to the hand feature points into the robot's base coordinate system; (5) performing an inverse-kinematics calculation on the hand pose in the robot base coordinate system to obtain the robot's joint angles; and (6) driving the robot with the calculated joint angles. The method has the following advantages: (1) control is intuitive, the gripper attitude corresponding directly to the hand attitude; (2) control is flexible, requiring no contact with a cumbersome interaction device; (3) virtual reality technology can assist the operator in working more accurately and safely; (4) operation may be interrupted and resumed, or the operator replaced, midway; and (5) the operator need not walk about over a large area, which reduces operating strain.
Description
Technical field
The invention belongs to the field of human-robot interaction, and in particular relates to a vision-based robot control method.
Background technology
With the day-by-day expansion of robot application fields, and in particular as robots enter daily life, research on peer-to-peer interaction between people and robots receives increasing attention. In peer-to-peer interaction, the person and the robot are partners, rather than simply a user and a tool.
Teleoperated robot systems and fully autonomous robot systems each have shortcomings, which gives human-robot cooperative systems great potential. In such systems, the human and the robot cooperate as team members according to their respective abilities to accomplish a common task; the advantage of this team model is that it integrates the intelligence of human and robot to accomplish the task effectively.
Research on human-robot interaction spans many fields, for example interaction modes, cognitive models, and evaluation methods. In interaction-mode research, modes that match interpersonal communication habits are a focus of current work, and gestures, as an intuitive interpersonal communication mode, receive particular attention; gesture tracking is the basis of gesture recognition. Because gestures are diverse, ambiguous, and vary over time and space, practical gesture interaction systems usually design a fixed semantic library tailored to the application, to guarantee the accuracy and validity of the interaction.
Manipulator operation requires a series of complex instructions, and a simple fixed semantic library cannot satisfy the control requirements of a manipulator. The difficulty of manipulator operation lies in attitude control. Since the manipulator gripper is designed to imitate the structure of the human hand, controlling the gripper with the hand is a very intuitive method.
Summary of the invention
The object of the invention is to overcome the shortcomings and deficiencies of the above prior art by providing a flexible and natural vision-based robot control method. To achieve this object, the invention adopts the following technical scheme:
A vision-based robot control method, characterized in that it comprises the following steps:
S1. Acquire gesture images of the human hand with cameras.
S2. Extract the feature points of the hand from the gesture images.
S3. Perform three-dimensional reconstruction on the feature points to obtain their positional relation in three-dimensional space.
S4. Transform the coordinates corresponding to the hand feature points into the robot base coordinate system.
S5. Perform an inverse-kinematics calculation on the hand pose in the robot base coordinate system to obtain the robot joint angles.
S6. Drive the robot with the calculated joint angles.
In the above vision-based robot control method, step S1 comprises: according to the binocular positioning principle, installing two cameras above the hand to capture images of hand motion in real time.
In the above vision-based robot control method, step S2 comprises: according to the characteristics of the hand features in the hand image, processing the image by a feature-point extraction method to obtain the pixel region of each hand feature, and then taking the centre point of each feature region as the feature point. Feature-point extraction uses the 24-bit (R, G, B) colour model, in which every colour is composed of the three components R, G, and B, different combinations presenting different colours. To make the feature points recognizable in the image, the feature points of the hand are marked in red, yellow gloves being worn for ease of colouring. The concrete colour model for feature-point extraction is given below.
Suppose pixel i (i a positive integer) in the image has value (R_i, G_i, B_i). Then the model for identifying red pixels in the image is:

R_i − G_i > δ_g and R_i − B_i > δ_b

where δ_g and δ_b are colour thresholds; a pixel is a marker point only when its R value exceeds the right-hand side of each inequality, i.e. the R value of a marker point must be sufficiently larger than its G and B values.
In the above vision-based robot control method, step S3 comprises: step S2 yields the positions of the red marker points in the left and right images; to reconstruct the three-dimensional coordinates of a marker point, three-dimensional reconstruction is performed according to the binocular reconstruction principle.
Binocular depth calculation:

Let d = x_l − x_r; then Z = fT / d,

where P is the measured point, T is the baseline, f is the focal length, Z is the distance from the measured point to the baseline, Q_l is the optical centre of the left camera, Q_r is the optical centre of the right camera, P_l is the projection of the measured point on the left camera, P_r is the projection of the measured point on the right camera, x_l is the vector from the left image centre to the left projection P_l, and x_r is the vector from the right image centre to the right projection P_r.
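As an illustrative sketch (not code from the patent), the depth formula Z = fT/d can be written directly, with variable names following the definitions above:

```python
def stereo_depth(x_l, x_r, f, T):
    """Depth of the measured point from binocular disparity.

    x_l, x_r: horizontal image coordinates of the projections P_l, P_r
    f: focal length; T: baseline length between the two cameras
    """
    d = x_l - x_r  # disparity
    if d <= 0:
        raise ValueError("non-positive disparity: point not reconstructible")
    return f * T / d
```

With rectified cameras, the X and Y coordinates of the point follow from Z by similar triangles, so this one formula is the core of the reconstruction in step S3.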
In the above vision-based robot control method, step S4 comprises the following steps:
S41. Control the change of the robot's position with the hand.
S42. Control the change of the robot's attitude with the hand.
S43. Map the hand attitude to the robot end-effector attitude.
In the above vision-based robot control method, step S41 comprises:

Since the operator should not have to walk about over a large area during manipulator control, while the workspace of the manipulator is comparatively large, a direct mapping from a small space to a large one would cause a loss of precision; a differential positioning method is therefore adopted. First, after the manipulator is initialized, the initial position (x_p, y_p, z_p) of the gripper end can be obtained by the forward-kinematics algorithm. Next, a working space is defined within the cameras' field of view; the hand may move only within this working space, instructions outside it being invalid. A direction space is then defined; together the direction space and the working space form a space used to change the position of the manipulator:

x_p = x_p + Δx·σ
y_p = y_p + Δy·σ
z_p = z_p + Δz·σ

Δx, Δy, and Δz are the displacements of the hand along the three axes, and σ is an adjustable parameter, so the reachable end positions are effectively unbounded; coarse and fine control are both obtained by modifying the value of σ.
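The differential update above amounts to one line per axis. A minimal sketch (illustrative only; the σ values in the usage are hypothetical):

```python
def update_end_position(p, hand_disp, sigma):
    """Differential positioning: scale the hand displacement by the
    adjustable gain sigma and add it to the gripper-end position.

    p: current end position (x_p, y_p, z_p)
    hand_disp: hand displacement (dx, dy, dz) inside the working space
    sigma: small for fine control, large for coarse control
    """
    return tuple(pi + di * sigma for pi, di in zip(p, hand_disp))
```

Lowering σ gives the fine control mentioned above; raising it gives coarse control over large distances.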
In the above vision-based robot control method, step S42 comprises:
The attitude of the gripper end is kept consistent with the attitude formed by three points on the hand: the tip of the middle finger, the tip of the index finger, and the notch between the root of the thumb and the root of the index finger.
In the above vision-based robot control method, step S43 comprises:
Ignoring translation for the moment, suppose the origin of the hand coordinate system coincides with that of the console coordinate system. The transformation matrix is a 3×3 matrix M; a point A in the hand coordinate system then transforms to A′ in the console base coordinate system, with A′ = MA.
where:

In hand localization, the unit vectors of the hand coordinate system — the x-axis P1 = [1, 0, 0], the y-axis P2 = [0, 1, 0], and the z-axis P3 = [0, 0, 1] — are, expressed in the camera coordinate system, [x_1, x_2, x_3], [y_1, y_2, y_3], and [z_1, z_2, z_3]. It follows that the transformation matrix has these vectors as its columns:

M = [ x_1  y_1  z_1
      x_2  y_2  z_2
      x_3  y_3  z_3 ]

Since the attitude-transformation matrix of the hand relative to the console coordinate system is consistent with that of the gripper end relative to the base coordinate system, and the positioning model gives the translation of the gripper end, the final pose transformation matrix relative to the base coordinate system is

T = [ M  p
      0  1 ]

where p = [p_1, p_2, p_3]^T is the translation of the gripper end relative to the base coordinate system.
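The assembly of the 4×4 pose matrix from the rotation M and the translation p can be sketched as follows. This is an illustration, not the patent's code, and it assumes the measured hand-frame axes are already expressed in the base frame:

```python
import numpy as np

def pose_matrix(x_axis, y_axis, z_axis, p):
    """Homogeneous pose transform: the columns of the rotation part are
    the hand-frame unit axes expressed in the base frame; p is the
    translation of the gripper end relative to the base frame."""
    T = np.eye(4)
    T[:3, 0] = x_axis
    T[:3, 1] = y_axis
    T[:3, 2] = z_axis
    T[:3, 3] = p
    return T
```

A point A in hand coordinates then maps to A′ = T @ [A, 1] in base coordinates.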
In the above vision-based robot control method, step S5 comprises the following steps:

In the Denavit-Hartenberg representation, A_i (i a positive integer) denotes the homogeneous coordinate transformation matrix from coordinate system i−1 to coordinate system i. For a robot with n (n ≥ 6) joints, the homogeneous transformation matrix from the base coordinate frame to the last coordinate frame is defined as:

T_n = A_1 A_2 … A_n = [ n  o  a  p
                        0  0  0  1 ]

where n is the normal vector of the gripper, o is the sliding vector, a is the approach vector, and p is the position vector. Combining the two formulas above:

T_n = M

The n joint angle values (θ_1, θ_2, …, θ_n) are obtained by solving this equation.
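The patent does not give the A_i matrices explicitly. The standard Denavit-Hartenberg form and the chain product T_n = A_1 … A_n can be sketched as follows; the link parameters in the test are made up for illustration:

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg transform A_i from frame i-1 to frame i."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params):
    """T_n = A_1 A_2 ... A_n for a list of (theta, d, a, alpha) tuples."""
    T = np.eye(4)
    for params in dh_params:
        T = T @ dh_matrix(*params)
    return T
```

Inverse kinematics then solves forward_kinematics(…) = M for the joint angles θ_1 … θ_n, either analytically for a specific arm geometry or numerically.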
In the above vision-based robot control method, step S6 comprises: driving the robot with the n joint angles calculated in step S5, so that the robot end-effector reaches the desired position.
Compared with the prior art, the invention has the following advantages and technical effects:
1. Control is intuitive: the gripper attitude corresponds directly to the hand attitude.
2. Control is flexible, requiring no contact with a cumbersome interaction device.
3. Virtual reality technology can assist the operator in operating more accurately and safely.
4. Operation may be interrupted and resumed, or the operator replaced, midway.
5. The operator need not walk about over a large area, which reduces operating strain.
Description of drawings
Fig. 1 is the frame model diagram of the embodiment;
Fig. 2 is the positioning model diagram;
Fig. 3a and Fig. 3b are attitude model diagrams of the gripper-end attitude and of the attitude formed by the three points on the hand: the tip of the middle finger, the tip of the index finger, and the notch between the root of the thumb and the root of the index finger.
The specific embodiment
The invention is described in further detail below in conjunction with the embodiment and the accompanying drawings, though implementations of the invention are not limited to this example. Fig. 1 shows the frame model diagram.
This vision-based robot control method comprises the following steps:
S1. Acquire gesture images of the human hand with cameras.
S2. Extract the feature points of the hand from the gesture images.
S3. Perform three-dimensional reconstruction on the feature points to obtain their positional relation in three-dimensional space.
S4. Transform the coordinates corresponding to the hand feature points into the robot base coordinate system.
S5. Perform an inverse-kinematics calculation on the hand pose in the robot base coordinate system to obtain the robot joint angles.
S6. Drive the robot with the calculated joint angles.
Said step S1 comprises the following step:
S11. According to the binocular positioning principle, install two cameras above the hand to capture images of hand motion in real time.
Said step S2 comprises the following step:
S21. According to the characteristics of the hand features in the hand image, process the image by a feature-point extraction method to obtain the pixel region of each hand feature, and then take the centre point of each feature region as the feature point. Feature-point extraction uses the 24-bit (R, G, B) colour model, in which every colour is composed of the three components R, G, and B, different combinations presenting different colours. To make the feature points recognizable in the image, the feature points of the hand are marked in red, yellow gloves being worn for ease of colouring.
Said step S3 comprises the following step:
S31. The hand recognition system yields the positions of the red marker points in the left and right images; to reconstruct the three-dimensional coordinates of a marker point, three-dimensional reconstruction is performed according to the binocular reconstruction principle.
Said step S4 comprises the following steps:
S41. Since the operator should not have to walk about over a large area during manipulator control, while the workspace of the manipulator is comparatively large, a direct mapping from a small space to a large one would cause a loss of precision; a differential positioning method is therefore adopted. First, after the manipulator is initialized, the initial position (x_p, y_p, z_p) of the gripper end can be obtained by the forward-kinematics algorithm, as shown in Fig. 2. Next, a working space is defined within the cameras' field of view; the hand may move only within this working space, instructions outside it being invalid. A direction space is then defined; together the direction space and the working space form a space used to change the position of the manipulator:

x_p = x_p + Δx·σ
y_p = y_p + Δy·σ
z_p = z_p + Δz·σ

Δx, Δy, and Δz are the displacements of the hand along the three axes, and σ is an adjustable parameter, so the reachable end positions are effectively unbounded; coarse and fine control are both obtained by modifying the value of σ.
S42. Keep the attitude of the gripper end consistent with the attitude formed by the three points on the hand: the tip of the middle finger, the tip of the index finger, and the notch between the root of the thumb and the root of the index finger, as shown in Fig. 3a and Fig. 3b.
S43. Ignoring translation for the moment, suppose the origin of the hand coordinate system coincides with that of the console coordinate system. The transformation matrix is a 3×3 matrix M; a point A in the hand coordinate system then transforms to A′ in the console base coordinate system, with A′ = MA.
where:

In hand localization, the unit vectors of the hand coordinate system — the x-axis P1 = [1, 0, 0], the y-axis P2 = [0, 1, 0], and the z-axis P3 = [0, 0, 1] — are, expressed in the camera coordinate system, [x_1, x_2, x_3], [y_1, y_2, y_3], and [z_1, z_2, z_3]. It follows that the transformation matrix has these vectors as its columns:

M = [ x_1  y_1  z_1
      x_2  y_2  z_2
      x_3  y_3  z_3 ]

Since the attitude-transformation matrix of the hand relative to the console coordinate system is consistent with that of the gripper end relative to the base coordinate system, and the positioning model gives the translation of the gripper end, the final pose transformation matrix relative to the base coordinate system is

T = [ M  p
      0  1 ]

where p = [p_1, p_2, p_3]^T is the translation of the gripper end relative to the base coordinate system.
Said step S5 comprises the following step:
S51. In the Denavit-Hartenberg representation, A_i (i a positive integer) denotes the homogeneous coordinate transformation matrix from coordinate system i−1 to coordinate system i. For a robot with n (n ≥ 6) joints, the homogeneous transformation matrix from the base coordinate frame to the last coordinate frame is defined as:

T_n = A_1 A_2 … A_n = [ n  o  a  p
                        0  0  0  1 ]

where n is the normal vector of the gripper, o is the sliding vector, a is the approach vector, and p is the position vector. Combining the two formulas above:

T_n = M

The n joint angle values (θ_1, θ_2, …, θ_n) are obtained by solving this equation.
Said step S6 comprises the following step:
S61. Drive the robot with the n joint angles calculated in step S5, so that the robot end-effector reaches the desired position.
The above embodiment is a preferred implementation of the invention, but implementations of the invention are not limited to it; any other change, modification, substitution, combination, or simplification that does not depart from the spirit and principle of the invention is an equivalent substitute and is included within the protection scope of the invention.
Claims (10)
1. A vision-based robot control method, characterized in that it comprises the following steps:
S1. Acquire gesture images of the human hand with cameras.
S2. Extract the feature points of the hand from the gesture images.
S3. Perform three-dimensional reconstruction on the feature points to obtain their positional relation in three-dimensional space.
S4. Transform the coordinates corresponding to the hand feature points into the robot base coordinate system.
S5. Perform an inverse-kinematics calculation on the hand pose in the robot base coordinate system to obtain the robot joint angles.
S6. Drive the robot with the calculated joint angles.
2. The vision-based robot control method according to claim 1, characterized in that said step S1 comprises: according to the binocular positioning principle, installing two cameras above the hand to capture images of hand motion in real time.
3. The vision-based robot control method according to claim 1, characterized in that said step S2 comprises: according to the characteristics of the hand features in the hand image, processing the image by a feature-point extraction method to obtain the pixel region of each hand feature, and then taking the centre point of each feature region as the feature point; feature-point extraction uses the 24-bit RGB colour model, in which every colour is composed of the three components R, G, and B, different combinations presenting different colours; to make the feature points recognizable in the image, the feature points of the hand are marked in red, yellow gloves being worn for ease of colouring; the concrete colour model for feature-point extraction is:

Suppose pixel i (i a positive integer) in the image has value (R_i, G_i, B_i); then the model for identifying red pixels in the image is:

R_i − G_i > δ_g and R_i − B_i > δ_b

where δ_g and δ_b are colour thresholds; a pixel is a marker point only when its R value exceeds the right-hand side of each inequality, i.e. the R value of a marker point must be sufficiently larger than its G and B values.
4. The vision-based robot control method according to claim 1, characterized in that said step S3 comprises: obtaining from step S2 the positions of the red marker points in the left and right images, and, to reconstruct the three-dimensional coordinates of a marker point, performing three-dimensional reconstruction according to the binocular reconstruction principle.
Binocular depth calculation: let d = x_l − x_r; then Z = fT / d, where P is the measured point, T is the baseline, f is the focal length, Z is the distance from the measured point to the baseline, Q_l is the optical centre of the left camera, Q_r is the optical centre of the right camera, P_l is the projection of the measured point on the left camera, P_r is the projection of the measured point on the right camera, x_l is the vector from the left image centre to the left projection P_l, and x_r is the vector from the right image centre to the right projection P_r.
5. The vision-based robot control method according to claim 1, characterized in that step S4 comprises the following steps:
S41. Control the change of the robot's position with the hand.
S42. Control the change of the robot's attitude with the hand.
S43. Map the hand attitude to the robot end-effector attitude.
6. The vision-based robot control method according to claim 5, characterized in that step S41 comprises:
first, after the manipulator is initialized, obtaining the initial position (x_p, y_p, z_p) of the gripper end by the forward-kinematics algorithm; next, defining a working space within the cameras' field of view, the hand being allowed to move only within this working space and instructions outside it being invalid; and then defining a direction space, the direction space and the working space together forming a space used to change the position of the manipulator:

x_p = x_p + Δx·σ
y_p = y_p + Δy·σ
z_p = z_p + Δz·σ

Δx, Δy, and Δz are the displacements of the hand along the three axes, and σ is an adjustable parameter, so the reachable end positions are effectively unbounded; coarse and fine control are both obtained by modifying the value of σ.
7. The vision-based robot control method according to claim 5, characterized in that step S42 comprises:
keeping the attitude of the gripper end consistent with the attitude formed by the three points on the hand: the tip of the middle finger, the tip of the index finger, and the notch between the root of the thumb and the root of the index finger.
8. The vision-based robot control method according to claim 5, characterized in that step S43 comprises:
supposing the origin of the hand coordinate system coincides with that of the console coordinate system; the transformation matrix is a 3×3 matrix M, and a point A in the hand coordinate system transforms to A′ in the console base coordinate system, with A′ = MA,
where:
in hand localization, the unit vectors of the hand coordinate system — the x-axis P1 = [1, 0, 0], the y-axis P2 = [0, 1, 0], and the z-axis P3 = [0, 0, 1] — are, expressed in the camera coordinate system, [x_1, x_2, x_3], [y_1, y_2, y_3], and [z_1, z_2, z_3], so the transformation matrix has these vectors as its columns:

M = [ x_1  y_1  z_1
      x_2  y_2  z_2
      x_3  y_3  z_3 ]

Since the attitude-transformation matrix of the hand relative to the console coordinate system is consistent with that of the gripper end relative to the base coordinate system, and the positioning model gives the translation of the gripper end, the final pose transformation matrix relative to the base coordinate system is

T = [ M  p
      0  1 ]

where p = [p_1, p_2, p_3]^T is the translation of the gripper end relative to the base coordinate system.
9. The vision-based robot control method according to claim 1, characterized in that step S5 comprises the following steps:
in the Denavit-Hartenberg representation, A_i (i a positive integer) denotes the homogeneous coordinate transformation matrix from coordinate system i−1 to coordinate system i; for a robot with n joints, n ≥ 6, the homogeneous transformation matrix from the base coordinate frame to the last coordinate frame is defined as

T_n = A_1 A_2 … A_n = [ n  o  a  p
                        0  0  0  1 ]

where n is the normal vector of the gripper, o is the sliding vector, a is the approach vector, and p is the position vector; combining the two formulas above, T_n = M, and the n joint angle values (θ_1, θ_2, …, θ_n) are obtained by solving this equation.
10. The vision-based robot control method according to claim 1, characterized in that step S6 comprises: driving the robot with the n joint angles calculated in step S5, so that the robot end-effector reaches the desired position.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011102772065A CN102350700A (en) | 2011-09-19 | 2011-09-19 | Method for controlling robot based on visual sense |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102350700A true CN102350700A (en) | 2012-02-15 |
Family
ID=45574400
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2011102772065A Pending CN102350700A (en) | 2011-09-19 | 2011-09-19 | Method for controlling robot based on visual sense |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102350700A (en) |
Cited By (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102650906A (en) * | 2012-04-06 | 2012-08-29 | 深圳创维数字技术股份有限公司 | Control method and device for user interface |
CN102692927A (en) * | 2012-06-13 | 2012-09-26 | 南京工业职业技术学院 | Gesture controlled trolley |
CN102773863A (en) * | 2012-07-31 | 2012-11-14 | 华南理工大学 | Fine-teleoperation method for robot |
CN102800126A (en) * | 2012-07-04 | 2012-11-28 | 浙江大学 | Method for recovering real-time three-dimensional body posture based on multimodal fusion |
CN103955207A (en) * | 2014-04-24 | 2014-07-30 | 哈尔滨工业大学 | Capture tolerance capacity testing system and method of three-jaw type space end effector in microgravity environment |
CN104227724A (en) * | 2014-08-28 | 2014-12-24 | 北京易拓智谱科技有限公司 | Visual identity-based manipulation method for end position of universal robot |
AT514528A1 (en) * | 2013-06-21 | 2015-01-15 | Engel Austria Gmbh | Shaping system with gesture control |
CN104602869A (en) * | 2012-09-05 | 2015-05-06 | 高通股份有限公司 | Robot control based on vision tracking of remote mobile device having camera |
CN104827474A (en) * | 2015-05-04 | 2015-08-12 | 南京理工大学 | Intelligent programming method and auxiliary device of virtual teaching robot for learning person |
CN104936748A (en) * | 2012-12-14 | 2015-09-23 | Abb技术有限公司 | Bare hand robot path teaching |
CN105068649A (en) * | 2015-08-12 | 2015-11-18 | 深圳市埃微信息技术有限公司 | Binocular gesture recognition device and method based on virtual reality helmet |
CN105082159A (en) * | 2015-08-21 | 2015-11-25 | 天津超众机器人科技有限公司 | Industrial robot system based on EEG signal control and demonstration method |
CN105094373A (en) * | 2015-07-30 | 2015-11-25 | 深圳汇达高科科技有限公司 | Gesture collection device for manipulating industrial robot and corresponding gesture collection method |
CN105204441A (en) * | 2015-09-24 | 2015-12-30 | 苏州安柯那智能科技有限公司 | Hand-push teaching type five-axis polishing grinding robot |
CN105960623A (en) * | 2014-04-04 | 2016-09-21 | Abb瑞士股份有限公司 | Portable apparatus for controlling robot and method thereof |
CN106020494A (en) * | 2016-06-20 | 2016-10-12 | 华南理工大学 | Three-dimensional gesture recognition method based on mobile tracking |
CN106295464A (en) * | 2015-05-15 | 2017-01-04 | 济南大学 | Gesture identification method based on Shape context |
CN106456145A (en) * | 2014-05-05 | 2017-02-22 | 维卡瑞斯外科手术股份有限公司 | Virtual reality surgical device |
WO2017084319A1 (en) * | 2015-11-18 | 2017-05-26 | 乐视控股(北京)有限公司 | Gesture recognition method and virtual reality display output device |
CN106863295A (en) * | 2015-12-10 | 2017-06-20 | 发那科株式会社 | Robot system |
CN106971050A (en) * | 2017-04-18 | 2017-07-21 | 华南理工大学 | A kind of Darwin joint of robot Mapping Resolution methods based on Kinect |
CN107049496A (en) * | 2017-05-22 | 2017-08-18 | 清华大学 | A kind of Visual servoing control method of multitask operating robot |
CN107093195A (en) * | 2017-03-10 | 2017-08-25 | 西北工业大学 | A kind of locating mark points method that laser ranging is combined with binocular camera |
CN107107338A (en) * | 2014-12-17 | 2017-08-29 | 库卡罗伯特有限公司 | Method for safely coupling and disconnecting input equipment |
CN107564065A (en) * | 2017-09-22 | 2018-01-09 | 东南大学 | The measuring method of man-machine minimum range under a kind of Collaborative environment |
CN107813310A (en) * | 2017-11-22 | 2018-03-20 | 浙江优迈德智能装备有限公司 | One kind is based on the more gesture robot control methods of binocular vision |
CN109579766A (en) * | 2018-12-24 | 2019-04-05 | 苏州瀚华智造智能技术有限公司 | A kind of product shape automatic testing method and system |
CN109693235A (en) * | 2017-10-23 | 2019-04-30 | 中国科学院沈阳自动化研究所 | A kind of Prosthetic Hand vision tracking device and its control method |
CN109940626A (en) * | 2019-01-23 | 2019-06-28 | 浙江大学城市学院 | A kind of thrush robot system and its control method based on robot vision |
CN110390898A (en) * | 2019-06-27 | 2019-10-29 | 安徽国耀通信科技有限公司 | A kind of indoor and outdoor full-color screen display control program |
CN110480634A (en) * | 2019-08-08 | 2019-11-22 | 北京科技大学 | A kind of arm guided-moving control method for manipulator motion control |
CN111202583A (en) * | 2020-01-20 | 2020-05-29 | 上海奥朋医疗科技有限公司 | Method, system and medium for tracking movement of surgical bed |
CN113070877A (en) * | 2021-03-24 | 2021-07-06 | 浙江大学 | Variable attitude mapping method for seven-axis mechanical arm visual teaching |
CN113384291A (en) * | 2021-06-11 | 2021-09-14 | 北京华医共享医疗科技有限公司 | Medical ultrasonic detection method and system |
CN113829357A (en) * | 2021-10-25 | 2021-12-24 | 香港中文大学(深圳) | Teleoperation method, device, system and medium for robot arm |
CN117021117A (en) * | 2023-10-08 | 2023-11-10 | 电子科技大学 | Mobile robot man-machine interaction and positioning method based on mixed reality |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001287191A (en) * | 2000-04-10 | 2001-10-16 | Kawasaki Heavy Ind Ltd | Method for detecting link position of master arm |
JP3742879B2 (en) * | 2003-07-30 | 2006-02-08 | 独立行政法人情報通信研究機構 | Robot arm / hand operation control method, robot arm / hand operation control system |
CN102073377A (en) * | 2010-12-31 | 2011-05-25 | 西安交通大学 | Man-machine interactive type two-dimensional locating method based on human eye-glanced signal |
WO2011065035A1 (en) * | 2009-11-24 | 2011-06-03 | 株式会社豊田自動織機 | Method of creating teaching data for robot, and teaching system for robot |
Non-Patent Citations (2)
Title |
---|
Zhang Jiyuan (张纪元): "Analytical Solutions for Mechanism Analysis and Synthesis (Mechanical Engineering)", 31 August 2007, article "Coordinate Changes and Coordinate Transformation Matrices", pages: 5-6 *
Wang Xiaohua (王晓华): "Research on 3D Reconstruction Technology Based on Binocular Vision", China Master's Theses Database (《中国优秀硕士学位论文》), 31 December 2004 (2004-12-31) *
Cited By (57)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013149475A1 (en) * | 2012-04-06 | 2013-10-10 | 深圳创维数字技术股份有限公司 | User interface control method and device |
CN102650906B (en) * | 2012-04-06 | 2015-11-04 | 深圳创维数字技术有限公司 | A kind of control method of user interface and device |
CN102650906A (en) * | 2012-04-06 | 2012-08-29 | 深圳创维数字技术股份有限公司 | Control method and device for user interface |
CN102692927A (en) * | 2012-06-13 | 2012-09-26 | 南京工业职业技术学院 | Gesture controlled trolley |
CN102800126A (en) * | 2012-07-04 | 2012-11-28 | 浙江大学 | Method for recovering real-time three-dimensional body posture based on multimodal fusion |
CN102773863A (en) * | 2012-07-31 | 2012-11-14 | 华南理工大学 | Fine-teleoperation method for robot |
CN104602869A (en) * | 2012-09-05 | 2015-05-06 | 高通股份有限公司 | Robot control based on vision tracking of remote mobile device having camera |
CN104602869B (en) * | 2012-09-05 | 2016-08-24 | 高通股份有限公司 | Robot control method, system and the equipment of visual pursuit based on the remote mobile device with video camera |
CN104936748A (en) * | 2012-12-14 | 2015-09-23 | Abb技术有限公司 | Bare hand robot path teaching |
AT514528A1 (en) * | 2013-06-21 | 2015-01-15 | Engel Austria Gmbh | Shaping system with gesture control |
CN105960623B (en) * | 2014-04-04 | 2019-02-05 | Abb瑞士股份有限公司 | For controlling the mancarried device and its method of robot |
US10166673B2 (en) | 2014-04-04 | 2019-01-01 | Abb Schweiz Ag | Portable apparatus for controlling robot and method thereof |
CN105960623A (en) * | 2014-04-04 | 2016-09-21 | Abb瑞士股份有限公司 | Portable apparatus for controlling robot and method thereof |
CN103955207B (en) * | 2014-04-24 | 2016-06-22 | 哈尔滨工业大学 | A kind of three-pawl type space end executor fault tolerance of catching under microgravity environment tests system and method |
CN103955207A (en) * | 2014-04-24 | 2014-07-30 | 哈尔滨工业大学 | Capture tolerance capacity testing system and method of three-jaw type space end effector in microgravity environment |
CN106456145A (en) * | 2014-05-05 | 2017-02-22 | 维卡瑞斯外科手术股份有限公司 | Virtual reality surgical device |
CN104227724A (en) * | 2014-08-28 | 2014-12-24 | 北京易拓智谱科技有限公司 | Visual identity-based manipulation method for end position of universal robot |
CN104227724B (en) * | 2014-08-28 | 2017-01-18 | 北京易拓智谱科技有限公司 | Visual identity-based manipulation method for end position of universal robot |
CN107107338A (en) * | 2014-12-17 | 2017-08-29 | 库卡罗伯特有限公司 | Method for safely coupling and disconnecting input equipment |
US10518415B2 (en) | 2014-12-17 | 2019-12-31 | Kuka Deutschland Gmbh | Method for safe coupling and decoupling of an input device |
CN104827474A (en) * | 2015-05-04 | 2015-08-12 | 南京理工大学 | Intelligent programming method and auxiliary device of virtual teaching robot for learning person |
CN106295464A (en) * | 2015-05-15 | 2017-01-04 | 济南大学 | Gesture identification method based on Shape context |
CN105094373A (en) * | 2015-07-30 | 2015-11-25 | 深圳汇达高科科技有限公司 | Gesture collection device for manipulating industrial robot and corresponding gesture collection method |
CN105068649A (en) * | 2015-08-12 | 2015-11-18 | 深圳市埃微信息技术有限公司 | Binocular gesture recognition device and method based on virtual reality helmet |
CN105082159A (en) * | 2015-08-21 | 2015-11-25 | 天津超众机器人科技有限公司 | Industrial robot system based on EEG signal control and demonstration method |
CN105204441B (en) * | 2015-09-24 | 2018-06-29 | 苏州安柯那智能科技有限公司 | Five axis polishing grinding machine people of hand push teaching type |
CN105204441A (en) * | 2015-09-24 | 2015-12-30 | 苏州安柯那智能科技有限公司 | Hand-push teaching type five-axis polishing grinding robot |
WO2017084319A1 (en) * | 2015-11-18 | 2017-05-26 | 乐视控股(北京)有限公司 | Gesture recognition method and virtual reality display output device |
CN106863295A (en) * | 2015-12-10 | 2017-06-20 | 发那科株式会社 | Robot system |
US10543599B2 (en) | 2015-12-10 | 2020-01-28 | Fanuc Corporation | Robot system equipped with video display apparatus that displays image of virtual object in superimposed fashion on real image of robot |
US11345042B2 (en) | 2015-12-10 | 2022-05-31 | Fanuc Corporation | Robot system equipped with video display apparatus that displays image of virtual object in superimposed fashion on real image of robot |
CN106863295B (en) * | 2015-12-10 | 2019-09-10 | 发那科株式会社 | Robot system |
CN106020494B (en) * | 2016-06-20 | 2019-10-18 | 华南理工大学 | Three-dimensional gesture recognition method based on mobile tracking |
CN106020494A (en) * | 2016-06-20 | 2016-10-12 | 华南理工大学 | Three-dimensional gesture recognition method based on mobile tracking |
CN107093195A (en) * | 2017-03-10 | 2017-08-25 | 西北工业大学 | A kind of locating mark points method that laser ranging is combined with binocular camera |
CN107093195B (en) * | 2017-03-10 | 2019-11-05 | 西北工业大学 | A kind of locating mark points method of laser ranging in conjunction with binocular camera |
CN106971050A (en) * | 2017-04-18 | 2017-07-21 | 华南理工大学 | A kind of Darwin joint of robot Mapping Resolution methods based on Kinect |
CN106971050B (en) * | 2017-04-18 | 2020-04-28 | 华南理工大学 | Kinect-based Darwin robot joint mapping analysis method |
CN107049496B (en) * | 2017-05-22 | 2019-07-26 | 清华大学 | A kind of Visual servoing control method of multitask operating robot |
CN107049496A (en) * | 2017-05-22 | 2017-08-18 | 清华大学 | A kind of Visual servoing control method of multitask operating robot |
CN107564065B (en) * | 2017-09-22 | 2019-10-22 | 东南大学 | The measuring method of man-machine minimum range under a kind of Collaborative environment |
CN107564065A (en) * | 2017-09-22 | 2018-01-09 | 东南大学 | The measuring method of man-machine minimum range under a kind of Collaborative environment |
CN109693235A (en) * | 2017-10-23 | 2019-04-30 | 中国科学院沈阳自动化研究所 | A kind of Prosthetic Hand vision tracking device and its control method |
CN107813310A (en) * | 2017-11-22 | 2018-03-20 | 浙江优迈德智能装备有限公司 | One kind is based on the more gesture robot control methods of binocular vision |
CN109579766A (en) * | 2018-12-24 | 2019-04-05 | 苏州瀚华智造智能技术有限公司 | A kind of product shape automatic testing method and system |
CN109579766B (en) * | 2018-12-24 | 2020-08-11 | 苏州瀚华智造智能技术有限公司 | Automatic product appearance detection method and system |
CN109940626A (en) * | 2019-01-23 | 2019-06-28 | 浙江大学城市学院 | A kind of thrush robot system and its control method based on robot vision |
CN110390898A (en) * | 2019-06-27 | 2019-10-29 | 安徽国耀通信科技有限公司 | A kind of indoor and outdoor full-color screen display control program |
CN110480634A (en) * | 2019-08-08 | 2019-11-22 | 北京科技大学 | A kind of arm guided-moving control method for manipulator motion control |
CN111202583A (en) * | 2020-01-20 | 2020-05-29 | 上海奥朋医疗科技有限公司 | Method, system and medium for tracking movement of surgical bed |
CN113070877A (en) * | 2021-03-24 | 2021-07-06 | 浙江大学 | Variable attitude mapping method for seven-axis mechanical arm visual teaching |
CN113070877B (en) * | 2021-03-24 | 2022-04-15 | 浙江大学 | Variable attitude mapping method for seven-axis mechanical arm visual teaching |
CN113384291A (en) * | 2021-06-11 | 2021-09-14 | 北京华医共享医疗科技有限公司 | Medical ultrasonic detection method and system |
CN113829357A (en) * | 2021-10-25 | 2021-12-24 | 香港中文大学(深圳) | Teleoperation method, device, system and medium for robot arm |
CN113829357B (en) * | 2021-10-25 | 2023-10-03 | 香港中文大学(深圳) | Remote operation method, device, system and medium for robot arm |
CN117021117A (en) * | 2023-10-08 | 2023-11-10 | 电子科技大学 | Mobile robot man-machine interaction and positioning method based on mixed reality |
CN117021117B (en) * | 2023-10-08 | 2023-12-15 | 电子科技大学 | Mobile robot man-machine interaction and positioning method based on mixed reality |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102350700A (en) | Method for controlling robot based on visual sense | |
US20210205986A1 (en) | Teleoperating Of Robots With Tasks By Mapping To Human Operator Pose | |
CN108762495B (en) | Virtual reality driving method based on arm motion capture and virtual reality system | |
Jin et al. | Multi-LeapMotion sensor based demonstration for robotic refine tabletop object manipulation task | |
Krupke et al. | Comparison of multimodal heading and pointing gestures for co-located mixed reality human-robot interaction | |
WO2011065035A1 (en) | Method of creating teaching data for robot, and teaching system for robot | |
CN107030692B (en) | Manipulator teleoperation method and system based on perception enhancement | |
Asfour et al. | Toward humanoid manipulation in human-centred environments | |
Almetwally et al. | Real-time tele-operation and tele-walking of humanoid Robot Nao using Kinect Depth Camera | |
CN103112007B (en) | Based on the man-machine interaction method of hybrid sensor | |
US20180215045A1 (en) | Robot apparatus, method for controlling the same, and computer program | |
CN105291138B (en) | It is a kind of to strengthen the visual feedback platform of virtual reality immersion sense | |
CN110815189B (en) | Robot rapid teaching system and method based on mixed reality | |
CN103192387A (en) | Robot and control method thereof | |
CN107662195A (en) | A kind of mechanical hand principal and subordinate isomery remote operating control system and control method with telepresenc | |
CN102830798A (en) | Mark-free hand tracking method of single-arm robot based on Kinect | |
Gratal et al. | Visual servoing on unknown objects | |
Elbrechter et al. | Bi-manual robotic paper manipulation based on real-time marker tracking and physical modelling | |
Guilamo et al. | Manipulability optimization for trajectory generation | |
Lin et al. | The implementation of augmented reality in a robotic teleoperation system | |
Bolder et al. | Visually guided whole body interaction | |
Huang et al. | Synthesizing robot manipulation programs from a single observed human demonstration | |
Cai et al. | 6D image-based visual servoing for robot manipulators with uncalibrated stereo cameras | |
CN111185906A (en) | Leap Motion-based dexterous hand master-slave control method | |
Lathuiliere et al. | Visual hand posture tracking in a gripper guiding application |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 2012-02-15