CN110605714B - Hand-eye coordination grabbing method based on human eye fixation point - Google Patents


Info

Publication number
CN110605714B
CN110605714B (application CN201910721469.7A)
Authority
CN
China
Prior art keywords
grabbing
mechanical arm
foreground
joint
eye
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910721469.7A
Other languages
Chinese (zh)
Other versions
CN110605714A (en)
Inventor
熊蔡华
周硕
李全林
万中华
黄剑
仇诗凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology
Priority to CN201910721469.7A
Publication of CN110605714A
Priority to PCT/CN2020/119316
Application granted
Publication of CN110605714B
Active legal status
Anticipated expiration

Classifications

    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B25 — HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J — MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 — Programme-controlled manipulators
    • B25J 9/16 — Programme controls
    • B25J 9/1656 — Programme controls characterised by programming, planning systems for manipulators
    • B25J 9/1664 — Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J 9/1694 — Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J 9/1697 — Vision controlled systems

Abstract

The invention belongs to the technical field of hand-eye coordination and specifically discloses a hand-eye coordination grabbing method based on a human eye gaze point, which comprises the following steps: S1, a human eye gaze point is determined with an eye tracker, and the position of the object the user is interested in is determined from the gaze point; S2, a feasible grabbing posture set of the mechanical arm at each position in space is obtained by discretization and traversal and stored in a server; S3, the server is accessed according to the position of the object of interest to query the feasible grabbing posture set for that position; S4, multiple groups of joint-angle solutions of the mechanical arm are obtained by inverse solution of each grabbing posture in the feasible set, and the optimal group is determined, i.e. the optimal joint angle corresponding to each joint of the mechanical arm is obtained; S5, the mechanical arm moves each joint to the corresponding optimal joint angle and then grabs the object of interest with the optimal grabbing posture. The invention has the advantages of accurate grabbing and convenient operation.

Description

Hand-eye coordination grabbing method based on human eye fixation point
Technical Field
The invention belongs to the technical field of hand-eye coordination, and particularly relates to a hand-eye coordination grabbing method based on a human eye fixation point.
Background
On the one hand, the gaze point of the human eye is easy for people to determine and conveys information to the outside world: by observing another person's eyes one can tell the general direction and position of their gaze and, combined with scene information, infer which object they are interested in. On the other hand, disabled people whose hands and feet are inconvenient still need to grasp objects but cannot use traditional human-computer interaction; extracting the gaze point with an eye tracker and analyzing the intention behind it enables simple and intuitive human-computer interaction. In summary, designing a system that is simple and convenient to operate and can capture the human eye gaze point to control a mechanical arm to grasp the object the user is interested in is an urgent problem.
At present, many eye tracker devices and mechanical arm devices are available, but most mechanical arms are based on machine vision; human vision is rarely utilized, and no one has used a three-dimensional gaze point extracted by an eye tracker to control a mechanical arm in hand-eye coordination to grasp an object of interest. Patent CN109079777A discloses a mechanical arm hand-eye coordination operation system, which estimates the position and posture of a target workpiece by machine vision and controls a mechanical arm to perform the operation. Patent CN109048918A discloses a visual guidance method for a wheelchair mechanical arm robot, which finds an object of interest manually specified by the user through machine vision and has the mechanical arm execute the grasping task; similarly, this method does not consider human vision and requires the disabled user to determine the operation object in advance through a traditional interactive mode. Patent US8888287B2 proposes a human-computer interaction system based on a three-dimensional gaze point tracker, which uses a depth camera and an RGB camera to extract the gaze direction and gaze point for human-computer interaction; it can extract the human eye's three-dimensional gaze point, but it is not used to control a mechanical arm to grasp an object of interest to the user in an actual scene.
Disclosure of Invention
Aiming at the defects or improvement requirements of the prior art, the invention provides a hand-eye coordination grabbing method based on a human eye gaze point, which determines the position of an object of interest of a user based on the human eye gaze point, guides a mechanical arm to grab the object based on the position, and has the advantages of accurate grabbing and convenient operation.
In order to achieve the purpose, the invention provides a hand-eye coordination grabbing method based on a human eye gaze point, which comprises the following steps:
s1, determining a human eye gaze point by using an eye tracker, and determining the position of an object of interest of a user based on the human eye gaze point;
s2, obtaining a feasible grabbing gesture set of the mechanical arm for executing grabbing actions at each position in space through discretization and storing the feasible grabbing gesture set in a server;
s3, accessing the server according to the position of the object of interest of the user determined in the step S1 to query and obtain a feasible grabbing posture set of the position;
s4, obtaining multiple groups of solutions of each joint angle of the mechanical arm based on the inverse solution of each grabbing posture in the feasible grabbing posture set, and determining one group of optimal solutions in the multiple groups of solutions of the joint angles, namely obtaining the optimal joint angle corresponding to each joint of the mechanical arm, wherein the grabbing pose corresponding to the optimal solution is the optimal grabbing pose;
and S5, moving each joint of the mechanical arm to the corresponding optimal joint angle, and grabbing the object of interest of the user in the optimal grabbing pose, so that the hand-eye coordination grabbing based on the human eye gaze point is completed.
As a further preference, step S1 includes the following sub-steps:
s11 recognizing centers of left and right pupils of the user using left and right eye cameras on the eye tracker, respectively, to extract information of the human eyes;
s12, mapping the left and right pupil centers obtained by identification to the foreground left camera to obtain a two-dimensional fixation point;
s13, extracting an object anchor frame in the foreground left camera, and then determining an object which is interested by the user according to the position relation between the two-dimensional gaze point and the object anchor frame;
s14, performing three-dimensional reconstruction on the object interested by the user to obtain the position of the object of interest in the foreground left camera;
s15 transforms the position of the object of interest in the foreground left camera into the mechanical arm base coordinate system to determine the position of the user' S object of interest.
More preferably, in step S4, each joint angle of the robot arm is obtained by inverse solution using an IK fast inverse solution module.
Preferably, in step S13, the object anchor frame in the foreground left camera is extracted by using a target recognition and tracking algorithm, specifically:
firstly, identifying and obtaining an anchor frame of an object in the left foreground camera by using a target identification algorithm, initializing a tracking target in a tracking algorithm by using the object anchor frame, and synchronously performing the target identification algorithm and the tracking algorithm;
and then, tracking the object by the initialized tracking algorithm, and if the object is lost, re-initializing the tracking algorithm by using the result of the target identification algorithm so as to track and obtain the anchor frame of the object.
Preferably, in step S14, the three-dimensional reconstruction algorithm is used to perform three-dimensional reconstruction on the object of interest of the user, specifically:
s141, calibrating to obtain internal and external parameters and a reprojection matrix of the foreground left camera and the foreground right camera;
s142, correcting and aligning images in the foreground left camera and the foreground right camera by using the internal and external parameters of the foreground left camera and the foreground right camera;
s143, obtaining a binocular vision difference value of the image pixels of the foreground left camera and the foreground right camera through a feature matching algorithm;
s144, reconstructing by using the binocular vision difference value and the re-projection matrix obtained by binocular calibration to obtain the three-dimensional coordinates of each pixel point in the image under the foreground left camera coordinate system, thereby completing the three-dimensional reconstruction of the object.
As a further preference, the step S15 specifically includes the following sub-steps:
s151, obtaining a transformation matrix of the coordinate system of the eye tracker relative to the coordinate system of the mechanical arm base by combining a plurality of infrared receivers on the eye tracker with an infrared generator in the coordinate system of the mechanical arm base;
s152, obtaining a transformation matrix between the foreground left camera and the eye tracker coordinate system through calibration;
s153, converting the pose of the interested object in the foreground left camera to the mechanical arm base coordinate system according to the two transformation matrixes.
It is further preferred that in step S4, the optimal group among the multiple groups of joint-angle solutions is determined with the optimization criterion of maximizing the sum of the relative angle margins of all joints, or, equivalently, with the criterion of minimizing the mean of the absolute values of the relative angle deviations.
As a further preference, in step S5 each joint of the mechanical arm moves to its joint angle and then moves further by a pre-acquired execution error so that the mechanical arm reaches the target position, and the object of interest of the user is then grasped in the optimal grasping pose.
As a further preferred, the execution error is obtained by the following steps:
firstly, a pre-grabbing position is obtained by offsetting the position of the object of interest by a preset distance along the reverse of the grabbing direction of the optimal grabbing pose, and the server is accessed to query the feasible grabbing posture set of the pre-grabbing position;
then, obtaining multiple groups of solutions of each joint angle of the mechanical arm based on reverse solutions of each grabbing posture in the feasible grabbing posture set, and determining an optimal solution, namely a target joint angle, from the multiple groups of solutions of the joint angle;
then, enabling each joint of the mechanical arm to move to a corresponding target joint angle, measuring the position of the tail end of the mechanical arm and obtaining the real joint angle of each joint of the mechanical arm through inverse solution;
and finally, calculating the difference between the target joint angle and the real joint angle to obtain the execution error of each joint of the mechanical arm.
Generally, compared with the prior art, the above technical solution conceived by the present invention mainly has the following technical advantages:
1. the invention determines the eye gaze point through the eye tracker, determines the position of the object interested by the user based on the eye gaze point, and then guides the mechanical arm to realize the grabbing of the object interested by the user based on the determined position of the object interested by the user.
2. According to the invention, inverse solution of the grabbing posture is realized through the kinematics inverse solution module IK fast, and the target rotation angle of each joint of the mechanical arm can be accurately obtained.
3. According to the invention, the reachable spatial data set is generated by discretizing and traversing the position and the posture of the working space in advance, so that the efficiency of inverse kinematics solution is improved, and real-time inverse solution can be realized.
4. In the invention, each joint of the mechanical arm moves to its corresponding joint angle and then moves further by the execution error acquired in advance (i.e. error compensation is carried out), so that the mechanical arm accurately reaches the target position and the object can be grabbed accurately.
5. According to the method, the center of the pupil of the two eyes is extracted to determine the fixation point of the user, so that the accuracy of fixation point identification can be effectively ensured, and the accuracy of determination of the pose of the interested object is further ensured; the invention realizes the identification and pose estimation of the object of interest of the user by means of the two foreground cameras and the two eye cameras, can be used in an actual working scene, controls the mechanical arm to grasp the object of interest for the user by utilizing the information of the eyes of the user, and brings convenience for the disabled.
6. The object in the foreground left camera is extracted by fusing a tracking algorithm with a target recognition algorithm: the object anchor frame obtained by the target recognition algorithm initializes the tracking algorithm, and the tracking algorithm is re-initialized from the result of the target recognition algorithm whenever tracking is lost, which greatly improves the recognition success rate for common objects in the foreground camera.
7. The invention carries out three-dimensional reconstruction on the object which is interested by the user by utilizing a three-dimensional reconstruction algorithm, can accurately position the coordinate of the target object under the foreground camera coordinate system, and can obtain the pose of the target object by combining with the gesture recognition of the target object.
Drawings
Fig. 1 is a flowchart of a hand-eye coordination grasping method based on a human eye gaze point according to an embodiment of the present invention;
fig. 2 is a diagram of an actual application of the grasping apparatus for executing the hand-eye coordination grasping method according to the embodiment of the invention;
FIG. 3 is a diagram of a grasping apparatus for executing a method of coordinated grasping by hand and eye according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an eye tracker according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of object of interest extraction provided by embodiments of the present invention;
FIG. 6 is a schematic diagram of the inverse solution achieved by the transformation relationship of joint angles to a homogeneous transformation matrix.
The same reference numbers will be used throughout the drawings to refer to the same or like elements or structures, wherein:
1-infrared receiver, 2-inner head-wearing ring, 3-outer ring main body, 4-foreground left camera, 5-foreground right camera, 6-left eye camera, 7-right eye camera, 8-outer ring upper buckle, 9-movable rotating part, 10-bracket buckle, 11-mechanical arm, 12-rack, 13-mechanical arm, 14-small arm tracker, 15-first connecting rod, 16-second connecting rod, 17-lighthouse, 18-mechanical arm base, 19-small ball and 20-bottle.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
As shown in fig. 1, an embodiment of the present invention provides a hand-eye coordination grasping method based on a gaze point of a human eye, which includes the following steps:
s1, determining a human eye gaze point by using an eye tracker, and determining the position of an object of interest of a user based on the human eye gaze point;
s2, obtaining a feasible grabbing gesture set of the mechanical arm for executing grabbing actions at each position in space through discretization and storing the feasible grabbing gesture set in a server;
s3, accessing the server according to the position of the object of interest of the user determined in the step S1 to query and obtain a feasible grabbing posture set of the position;
s4, obtaining multiple groups of solutions of each joint angle of the mechanical arm based on inverse solutions of each grabbing posture in the feasible grabbing posture set, determining an optimal solution in the multiple groups of solutions of the joint angles, namely obtaining the optimal joint angle corresponding to each joint of the mechanical arm, and determining a group of optimal joint angles to obtain the grabbing pose corresponding to the group of joint angles, namely the optimal grabbing pose;
s5, moving each joint of the mechanical arm to the corresponding optimal joint angle, and grabbing the object of interest of the user in the optimal grabbing posture, so that hand-eye coordination grabbing based on the human eye gaze point is completed.
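For orientation, the following minimal sketch ties the five steps together in Python; every argument is a callable standing in for a module described later in this section, not an existing API, and all names are illustrative.

```python
def grasp_by_gaze(locate_object, query_db, ik_solve, select_optimal, move_to, grasp):
    """End-to-end sketch of steps S1-S5 under the assumptions stated above."""
    object_pos = locate_object()                        # S1: gaze point -> object position in the arm base frame
    feasible_poses = query_db(object_pos)               # S2/S3: query the offline reachable-space database
    solutions = [ik_solve(object_pos, g) for g in feasible_poses]   # S4: inverse solutions per grasp posture
    best_angles, best_pose = select_optimal(solutions)  # S4: pick the optimal joint-angle group
    move_to(best_angles)                                # S5: move each joint to its optimal angle
    grasp(best_pose)                                    # S5: grasp with the optimal grabbing posture
```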
As shown in figs. 2-3, the apparatus for carrying out the above method comprises a mechanical arm 11 fixed to a frame 12 and serving as a right hand (or left hand) for the disabled, an eye tracker, and an object to be grasped (a bottle, a small ball, etc.) placed in front of the eye tracker. As shown in fig. 4, the eye tracker comprises an outer ring main body 3 and a head-wearing inner ring 2 located inside the outer ring main body 3; the outer ring main body 3 and the head-wearing inner ring 2 are connected through two movable rotating parts 9, an outer ring upper buckle plate 8 is arranged above the outer ring main body 3, and bracket buckle plates 10 are arranged on the two sides below the outer ring main body 3. The foreground left camera 4 and foreground right camera 5 are mounted on the front side of the outer ring main body 3 and collect image information of the scene in front of the user; the left eye camera 6 and right eye camera 7 are mounted at the front ends of the two bracket buckle plates and collect images of the user's left and right eyes. A plurality of infrared receivers 1 are evenly distributed on the outer ring upper buckle plate to receive external infrared light, emitted for example by an infrared generator placed behind the eye tracker; the distance between the infrared generator and the eye tracker is preferably more than one meter. In use, the eye tracker transmits the data of the foreground left camera 4, foreground right camera 5, left eye camera 6 and right eye camera 7 to the data processing system (computer) through a data line, and the data of the infrared receivers are sent to the data processing system (computer) through wireless Bluetooth for head pose estimation.
Furthermore, the invention adopts a lighthouse positioning technology, with positioning marks (infrared receivers) arranged on the eye tracker, so that the head pose can be estimated with high precision. The lighthouse positioning system consists of a lighthouse (infrared generator) and infrared receivers. The lighthouse is placed in the mechanical arm base coordinate system and contains two motors, each carrying a linear infrared laser emitter; in every rotation period each laser sweeps across all points of the space in its field of view. The infrared receivers arranged on the eye tracker are a group of infrared receiving diodes; when they receive the infrared laser they generate response pulses, which are input into the data processing system. At the start of each rotation cycle the lighthouse produces a global exposure, after which the motors sweep the entire space with the linear laser emitters, once per cycle for each of the two orthogonal motors. An infrared receiver on the eye tracker produces a longer pulse signal during the global exposure and a shorter pulse signal during the subsequent laser sweep. By processing these pulse signals, the time difference between the global exposure and the moment the laser sweeps over a given infrared receiver is obtained. Assuming the sweep time differences of the rotating lasers in the horizontal and vertical directions for the i-th receiver are Δt_i1 and Δt_i2, and since the rotation speed r of the motors is known, the sweep angles of the corresponding horizontal and vertical motors can be calculated:
α_i = 2π·r·Δt_i1,  β_i = 2π·r·Δt_i2   (with r expressed in revolutions per unit time)
Further, the 2D coordinates (u_i, v_i) of each infrared receiver on the virtual plane of the lighthouse are obtained from the sweep angles α_i and β_i. Meanwhile, the 3D coordinates P_i = (X_i, Y_i, Z_i) of each infrared receiver in the eye tracker coordinate system are known from the design parameters of the outer ring main body.
The motion of the resulting 3D-to-2D point pairs is a PnP problem that can be solved using a direct linear transformation:
s·x_i = [R|t]·P_i
where s is a scale factor, x_i = (u_i, v_i, 1)^T is the homogeneous coordinate of the feature point on the normalized plane, R and t denote the 3×3 rotation matrix and the 3×1 translation vector respectively, and P_i = (X_i, Y_i, Z_i, 1)^T is the homogeneous coordinate of the corresponding spatial point. Each pair of points provides two linear constraints, and [R|t] has 12 unknowns, so the transformation can be solved from at least 6 pairs of matching points.
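A minimal numpy sketch of the direct linear transformation described above is given below; the function name and the orthonormalization/scale-recovery details are illustrative assumptions rather than the patent's exact procedure.

```python
import numpy as np

def solve_pnp_dlt(pts_3d, pts_2d):
    """Direct linear transform for s*x_i = [R|t] P_i (a sketch).

    pts_3d: (N, 3) receiver coordinates in the eye-tracker frame.
    pts_2d: (N, 2) coordinates on the lighthouse virtual plane.
    Each point pair gives two linear constraints on the 12 entries of [R|t],
    so N >= 6 pairs are required.
    """
    A = []
    for (X, Y, Z), (u, v) in zip(pts_3d, pts_2d):
        P = [X, Y, Z, 1.0]
        A.append(P + [0.0] * 4 + [-u * p for p in P])
        A.append([0.0] * 4 + P + [-v * p for p in P])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    Rt = Vt[-1].reshape(3, 4)                  # null-space solution, defined up to scale
    U, S, Vt_r = np.linalg.svd(Rt[:, :3])      # re-impose orthonormality on the rotation part
    scale = 1.0 / S.mean()
    R = U @ Vt_r
    if np.linalg.det(R) < 0:                   # keep a proper (right-handed) rotation
        R, scale = -R, -scale
    t = scale * Rt[:, 3]
    return R, t
```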
The foreground left camera 4 and foreground right camera 5 form a binocular camera pair and are fixed to the front of the outer ring main body through mounting holes; the two foreground cameras collect image information of the scene in front of the user, the target object is identified with a target identification algorithm, and its three-dimensional coordinates are then obtained with a binocular three-dimensional reconstruction algorithm. The left eye camera 6 and right eye camera 7 are two infrared cameras fixed at the front ends of the bracket buckle plates 10, below the outer ring main body 3; by adjusting the two bracket buckle plates 10, the left eye camera 6 and right eye camera 7 can be aligned with the user's left and right eyes respectively. An infrared light source is arranged outside each of the two eye cameras, close to but off the camera's optical axis, which produces a dark pupil effect: the pupil area in the infrared camera image appears darker while the iris and other areas appear lighter, facilitating pupil extraction.
Specifically, step S1 includes the following sub-steps:
s11, respectively identifying the centers of the left and right pupils of the user by using the left eye camera 6 and the right eye camera 7 on the eye tracker so as to extract the information of human eyes;
s12, mapping the left and right pupil centers obtained by identification to the foreground left camera 4 to obtain a two-dimensional fixation point;
s13, extracting an object anchor frame in the foreground left camera, and then determining an object which is interested by the user according to the position relation between the two-dimensional gaze point and the object anchor frame;
s14 performs three-dimensional reconstruction on the object of interest of the user to obtain the position of the object of interest in the foreground left camera 4, or performs pose estimation on the object of interest of the user by using the conventional pose estimation method to obtain the pose of the object of interest in the foreground left camera 4;
s15 transforms the position of the object of interest in the foreground left camera into the mechanical arm base coordinate system to determine the position of the user' S object of interest.
Further, step S11 includes the following sub-steps:
s111, respectively acquiring images shot by the left-eye camera 6 and the right-eye camera 7, and performing smoothing processing to obtain a smoothed gray-scale image, wherein the smoothing processing is graying and filtering operation;
S112, the smoothed gray-scale image is sent to an edge detector (preferably a Canny edge detector) to obtain edge points, which are then filtered so that only the edge points corresponding to the pupil boundary remain (i.e. edge points that obviously do not belong to the pupil boundary are removed; the detailed filtering rules can be designed by those skilled in the art as needed and are not described further here). The remaining edge points corresponding to the pupil boundary form the pupil edge point set. Using a Canny edge detector for edge detection gives both high efficiency and high accuracy;
S113, ellipse fitting is performed on the remaining edge points corresponding to the pupil boundary after filtering to obtain the coordinates (x_l, y_l) and (x_r, y_r) of the centers of the left and right pupils. Preferably, a random sample consensus (RANSAC) algorithm is used for the ellipse fitting. Since the ellipse parameter equation has 5 free variables, at least 5 edge points are needed for the fit; to obtain the best-fitting ellipse, the random sample consensus algorithm works iteratively, with the following specific steps:
s1131 randomly selects 5 points from the pupil edge point set to fit an elliptical plane parameter equation, where the elliptical plane parameter equation is:
Q(x, y) = Ax² + Bxy + Cy² + Dx + Ey + F
where A–F are the coefficients to be solved;
S1132, the support function values of all interior points in the pupil edge point set with respect to the ellipse are calculated, where the interior points are defined as:
inliers = {(x, y) | error(Q, x, y) < ε}
where error(Q, x, y) is a loss function measuring how far a point deviates from the fitted ellipse, α is a normalization coefficient, and ε is a preset value that can be chosen according to actual needs, for example 0.5.
The support function value of an interior point with respect to the ellipse is obtained by substituting the interior point into the support function, which depends on the major axis a and minor axis b of the ellipse and on the gray-scale gradient of the image at the point (x, y). By the definition of the support function, the larger the ratio of the minor axis to the major axis b/a, and the closer the gray-gradient direction of the image at an interior point is to the normal direction of the ellipse family at that point, the higher the support function value;
S1133, steps S1131 to S1132 are repeated a predetermined number of times (for example, 20 times) to fit a plurality of ellipses; the ellipse with the maximum support function value is selected, and its center is taken as the pupil center.
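A compact sketch of such a RANSAC-style ellipse fit over the Canny edge points is given below, assuming OpenCV and numpy. The simplified score (minor/major-axis ratio times gradient/normal alignment) only stands in for the patent's support function; it is an assumption, not the exact formula.

```python
import cv2
import numpy as np

def ransac_pupil_center(edge_points, gray, iters=20):
    """RANSAC-style pupil ellipse fit over edge points (a sketch).

    edge_points: (N, 2) array of (x, y) pupil-boundary edge coordinates, N >= 5.
    gray: smoothed grayscale eye image, used for the gradient-based score.
    """
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    best_center, best_score = None, -np.inf

    for _ in range(iters):
        idx = np.random.choice(len(edge_points), 5, replace=False)
        (cx, cy), (d1, d2), _angle = cv2.fitEllipse(edge_points[idx].astype(np.float32))
        a, b = max(d1, d2) / 2.0, min(d1, d2) / 2.0
        if a <= 0:
            continue
        score = 0.0
        for x, y in edge_points:
            n = np.array([x - cx, y - cy])                        # outward direction from the center
            g = np.array([gx[int(y), int(x)], gy[int(y), int(x)]])
            if np.linalg.norm(n) > 1e-6 and np.linalg.norm(g) > 1e-6:
                score += abs(n @ g) / (np.linalg.norm(n) * np.linalg.norm(g))
        score *= b / a                                            # reward rounder ellipses
        if score > best_score:
            best_score, best_center = score, (cx, cy)
    return best_center   # estimated pupil center (x, y)
```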
Furthermore, the identified pupil centers can be mapped into the foreground left camera with common prior-art algorithms, such as polynomial fitting, to obtain the two-dimensional gaze point. The invention preferably uses a Gaussian process regression algorithm to map the pupil center coordinates (x_l, y_l) and (x_r, y_r) into the foreground camera to obtain the gaze point (x_s, y_s). A Gaussian process is a set of random variables, any finite number of which have a joint Gaussian distribution. The basic principle is to construct, before prediction, a training set containing a series of left and right pupil center coordinates and the corresponding gaze point coordinates on the foreground left camera; the data in the training set are collected in advance. During prediction, the left and right pupil centers form a four-dimensional vector (the test point x_*); K(X, X) and K(x_*, X) are calculated and substituted into the expected-value formula to obtain the expected value, i.e. the corresponding gaze point (x_s, y_s). Specifically, the mathematical model is
[ f ; f_* ] ~ N( 0, [ K(X, X), K(x_*, X) ; K(x_*, X)^T, k(x_*, x_*) ] )
where f is the set of gaze point coordinates on the foreground left camera in the training set, X is the set of input vectors x, each input vector being a 4-dimensional vector x = (x_l, y_l, x_r, y_r) composed of the left and right pupil center coordinates, K(X, X) is the symmetric positive-definite covariance matrix of the training set, K(x_*, X) is the n × 1 covariance matrix between the measured test point x_* and the training set X, and k(x_*, x_*) is the covariance of the test point itself. The expectation of the predicted value is
E[f_*] = K(x_*, X)^T K(X, X)^(-1) f
where E[f_*] is the expected value, i.e. the predicted gaze point (x_s, y_s) in the foreground camera obtained by Gaussian process regression.
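A sketch of this gaze-point mapping using Gaussian process regression via scikit-learn follows; the kernel, the file names of the calibration data and the hyperparameters are illustrative assumptions, and one regressor per output coordinate is used for simplicity.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Calibration data collected in advance (hypothetical file names):
# pupil_centers: (n, 4) rows of (xl, yl, xr, yr); gaze_points: (n, 2) rows of (xs, ys).
pupil_centers = np.load("calib_pupils.npy")
gaze_points = np.load("calib_gaze.npy")

kernel = RBF(length_scale=50.0) + WhiteKernel(noise_level=1.0)   # kernel choice is an assumption
gp_x = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(pupil_centers, gaze_points[:, 0])
gp_y = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(pupil_centers, gaze_points[:, 1])

def map_gaze(xl, yl, xr, yr):
    """Predict the 2D gaze point (xs, ys) in the foreground left camera image."""
    x_star = np.array([[xl, yl, xr, yr]])
    return float(gp_x.predict(x_star)[0]), float(gp_y.predict(x_star)[0])
```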
Preferably, step S13 includes the following sub-steps:
s131, identifying and obtaining an anchor frame of an object in the left foreground camera by using a target identification algorithm, wherein as shown in FIG. 5, a cylinder on the left represents a water cup, a circle on the right represents a sphere, a dotted line containing the water cup and the sphere is the anchor frame corresponding to the water cup and the sphere, a tracking target in the tracking algorithm is initialized by using the object anchor frame, the target identification algorithm and the tracking algorithm are synchronously performed, and both the target identification algorithm and the tracking algorithm can adopt the conventional method;
s132, tracking the object by the initialized tracking algorithm, if the object is lost, re-initializing the tracking algorithm by using the real-time identification result of the target identification algorithm, and then continuously tracking the object by using the initialized tracking algorithm to obtain an anchor frame of the object, wherein the success rate of the object identification in the left foreground camera can be effectively improved by the method;
s133 then determines an object in which the user is interested according to a positional relationship between the two-dimensional gaze point mapped to the foreground left camera and the object anchor frame, wherein as shown in fig. 5, if the gaze point falls within the cup anchor frame (e.g., a black point located within the cup anchor frame in fig. 5), the user is considered to be interested in the cup, and if the gaze point does not fall within the cup anchor frame (e.g., a white point located outside the cup anchor frame in fig. 5), the user is considered not to be interested in the cup; if the gaze point falls within the anchor frame of the ball (e.g., the black dot located within the ball anchor frame in fig. 5) the user is deemed to be interested in the ball, and if the gaze point does not fall within the anchor frame of the ball (e.g., the white dot located outside the ball anchor frame in fig. 5) the user is deemed to be uninteresting in the ball.
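A sketch of the detector/tracker fusion of steps S131-S132 is given below, assuming OpenCV; `detect` stands in for the unspecified target recognition algorithm, KCF is used only as an example tracker (in some OpenCV 4.x builds it is exposed as cv2.legacy.TrackerKCF_create), and re-detection here only happens on loss, a simplification of the synchronous scheme in the text. The gaze-in-box check at the end mirrors step S133.

```python
import cv2

class DetectAndTrack:
    """Detector/tracker fusion with re-initialization on tracking loss (a sketch)."""

    def __init__(self, detect):
        self.detect = detect        # callable: frame -> (x, y, w, h) anchor box or None
        self.tracker = None

    def update(self, frame):
        """Return the current anchor box of the tracked object, or None."""
        if self.tracker is not None:
            ok, box = self.tracker.update(frame)
            if ok:
                return tuple(int(v) for v in box)
            self.tracker = None     # tracking lost: fall through to re-detection
        detection = self.detect(frame)
        if detection is not None:   # (re-)initialize the tracker from the detector result
            self.tracker = cv2.TrackerKCF_create()
            self.tracker.init(frame, tuple(int(v) for v in detection))
        return detection

def gaze_hits_box(gaze_xy, box):
    """Step S133: the user is deemed interested in the object if the gaze point lies in its box."""
    x, y = gaze_xy
    bx, by, bw, bh = box
    return bx <= x <= bx + bw and by <= y <= by + bh
```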
Preferably, the step S14 of three-dimensionally reconstructing the object of interest to the user is specifically:
s141, obtaining internal and external parameters of the left foreground camera and the right foreground camera through binocular calibration, specifically including internal parameter matrixes and external parameter matrixes of the left foreground camera and the right foreground camera, and deducing a reprojection matrix Q through the internal and external parameter matrixes;
s142, aligning images in the foreground left camera and the foreground right camera by using the binocular camera internal and external parameters, which is the prior art and is not described herein again;
S143, the binocular disparity d between corresponding pixels of the foreground left and right camera images is obtained through a feature matching algorithm. Preferably, the feature matching uses a normalized cross-correlation method, with the correlation measured as
NCC(p, d) = Σ_{(x,y)∈W_p} [L(x, y) − L̄_p]·[R(x + d, y) − R̄_{p,d}] / sqrt( Σ_{(x,y)∈W_p} [L(x, y) − L̄_p]² · Σ_{(x,y)∈W_p} [R(x + d, y) − R̄_{p,d}]² )
where p(x, y) denotes the coordinates of any point in the foreground left camera image, W_p denotes a rectangular area centered at p, L(x, y) denotes the gray value at point (x, y) in the foreground left camera image, L̄_p denotes the mean gray value over W_p in the foreground left camera image, R(x + d, y) denotes the gray value at point (x + d, y) in the foreground right camera image, and R̄_{p,d} denotes the mean gray value over the corresponding rectangular area centered at (x + d, y) in the foreground right camera image; the value of d that maximizes this correlation is the binocular disparity sought;
s144, reconstructing by using the binocular vision difference value and the re-projection matrix Q obtained by binocular calibration to obtain the three-dimensional coordinates of each pixel point in the foreground left camera image under the foreground left camera coordinate system, thereby completing the three-dimensional reconstruction of the object, and the principle is as follows:
[X Y Z W]^T = Q · [x y d 1]^T
wherein [ X Y Z W ] is a homogeneous coordinate in a foreground left camera coordinate system, and (X, Y) is a two-dimensional coordinate in a foreground left camera image coordinate system.
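A minimal sketch of step S144 follows, back-projecting one pixel and its disparity through the reprojection matrix Q obtained from binocular calibration (e.g. from cv2.stereoRectify); the function name is illustrative.

```python
import numpy as np

def reproject_pixel(x, y, d, Q):
    """Back-project one left-image pixel with disparity d using
    [X Y Z W]^T = Q [x y d 1]^T; returns 3D coordinates in the
    foreground left camera frame."""
    X, Y, Z, W = Q @ np.array([x, y, d, 1.0])
    return np.array([X, Y, Z]) / W
```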
More specifically, step S15 preferably uses the following steps to convert the position of the object of interest in the foreground left camera into the mechanical arm base coordinate system:
s151 obtaining a transformation matrix of the eye tracker coordinate system with respect to the robot arm base coordinate system by using a plurality of infrared receivers on the eye tracker and combining with an infrared generator (i.e. lighthouse) located in the robot arm base coordinate system, specifically including:
S1511, the 2D coordinates (u_i, v_i) of each infrared receiver on the virtual plane of the infrared generator are measured from the sweep angles α_i and β_i, where α_i is the sweep angle in the horizontal direction of the motor that drives the infrared generator to rotate horizontally, and β_i is the sweep angle in the vertical direction of the motor that drives the infrared generator to rotate vertically;
S1512, the transformation matrix [R|t] of the eye tracker coordinate system relative to the mechanical arm base coordinate system is obtained by solving the following equation with a direct linear transformation:
s·x_i = [R|t]·P_i
This is the standard formulation for solving 3D-to-2D point-pair motion (the PnP problem), where s is a scale factor, x_i = (u_i, v_i, 1)^T, R and t denote the 3×3 rotation matrix and the 3×1 translation vector respectively, and P_i = (X_i, Y_i, Z_i, 1)^T with (X_i, Y_i, Z_i) the 3D coordinates of the infrared receiver in the eye tracker coordinate system. Each pair of points provides two linear constraints and [R|t] has 12 unknowns, so at least 6 pairs of matching points are needed to solve the transformation matrix [R|t]; more than 6 infrared receivers are therefore provided;
s152, obtaining a transformation matrix between the foreground left camera and the eye tracker coordinate system through calibration;
s153, converting the pose of the interested object in the foreground left camera to the mechanical arm base coordinate system according to the two transformation matrixes.
In step S2, discretizing and traversing to obtain feasible grabbing poses of the mechanical arm for performing grabbing actions at each position in space refers to discretizing to obtain reachable position points of the mechanical arm in a working space, and then discretizing to obtain a set of feasible grabbing poses corresponding to the position points.
In a preferred embodiment, the discretization of the location points is as follows:
the maximum distance from the tail end of the mechanical arm to the shoulder joint of the mechanical arm is r, and the maximum distance is taken as the volume of a sphere with the radius
Figure BDA0002157381130000141
The discretized sampling interval of the three axes is d (which is set according to the size of the grabbed object), so that the number of sampling points (i.e., the number of position points) in the whole mechanical arm working space is N ═ V/d3
The discretization mode of the grabbing gesture is as follows:
The attitude angles are equally divided with an interval of 0.255 radian, so the number of attitude samples (i.e. candidate grabbing attitudes) at one point is 576. The solvability of each attitude at each position point is judged with the IKFast inverse solution toolkit, and a reachable-space database of the mechanical arm is established that maps each position point p_i, i = 1, …, N = V/d³, to its set of solvable grabbing attitudes; the upper bound M of this set is determined by the number of solvable attitudes of the point p_i, with a maximum value of 576. Generally, there are more solvable attitudes in front of the mechanical arm base, with a maximum of 94 out of 576, and fewer solvable attitudes at the edge of the working space.
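A sketch of how such an offline reachable-space database could be built is shown below; the grid layout, the rounding of grid keys and the `has_ik_solution` callable (standing in for the IKFast solvability check) are assumptions, not the patent's exact implementation.

```python
import numpy as np

def build_reachability_db(r, d, pose_candidates, has_ik_solution):
    """Offline reachable-space database of step S2 (a sketch).

    r: maximum reach from the shoulder joint to the arm end (sphere radius).
    d: axis sampling interval, chosen from the size of the objects to grab.
    pose_candidates: the candidate grasp attitudes (e.g. the 576 sampled ones).
    has_ik_solution(position, pose) -> bool: stands in for the IKFast check.
    """
    grid = np.arange(-r, r + d, d)
    db = {}
    for x in grid:
        for y in grid:
            for z in grid:
                if x * x + y * y + z * z > r * r:
                    continue                          # outside the spherical workspace
                p = (round(float(x), 4), round(float(y), 4), round(float(z), 4))
                feasible = [g for g in pose_candidates if has_ik_solution(p, g)]
                if feasible:
                    db[p] = feasible                  # only store reachable position points
    return db     # queried online with the grid point nearest to the object position
```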
In step S4, a kinematics inverse solution module is used to obtain each joint angle of the mechanical arm by inverse solution of the grabbing posture, preferably the IKFast inverse solution module. The problem solved by the inverse solution module is:
^0_8T = ^0_1T · ^1_2T · … · ^6_7T · ^7_8T = [ n  o  a  p ; 0  0  0  1 ]    (formula one)
where ^0_8T is the transformation matrix from the end effector (manipulator) coordinate system O_8 of the mechanical arm to its base coordinate system O_0 (i.e. it represents the pose of the end effector), ^7_8T is a constant transformation matrix from the end effector coordinate system O_8 to the coordinate system O_7 at the end of the second link, and ^{i-1}_iT, i = 1, 2, …, 7, are the link transformations corresponding to the seven joints of the mechanical arm, as shown in fig. 6. The coordinate system O_i is the coordinate system in which the joint angle θ_i is expressed; the coordinate system O_8 is not indicated in fig. 6 because the required postures have multiple solutions. In fig. 6, d(se) and d(ew) are the lengths of the first and second connecting rods of the mechanical arm, 329.2 mm and 255.5 mm respectively, and the vectors n, o, a and p are respectively the attitude vectors (i.e. the grabbing attitude) and the position vector (i.e. the target position, the position of the object of interest to the user) of the end-effector coordinate system O_8 in the arm base coordinate system O_0.
The product of the seven link transformations can be solved from formula (one), i.e.
^0_7T = ^0_8T · (^7_8T)^(-1)
and each joint angle of the mechanical arm is then obtained by inverse solution of the link transformation of formula (two):
^{i-1}_iT = [ cθ_i, -sθ_i, 0, a_{i-1} ;
              sθ_i·cα_{i-1}, cθ_i·cα_{i-1}, -sα_{i-1}, -d_i·sα_{i-1} ;
              sθ_i·sα_{i-1}, cθ_i·sα_{i-1}, cα_{i-1}, d_i·cα_{i-1} ;
              0, 0, 0, 1 ]    (formula two)
where c is an abbreviation for cos and s for sin, θ_i is the i-th joint angle, a_{i-1} is the distance along the X_{i-1} axis from Z_{i-1} to Z_i, α_{i-1} is the angle about the X_{i-1} axis from Z_{i-1} to Z_i, d_i is the distance along the Z_i axis from X_{i-1} to X_i, and X_i and Z_i are the X and Z axes of the coordinate system O_i; see fig. 6, with the parameters listed in Table 1.
TABLE 1 — link parameters a_{i-1}, α_{i-1}, d_i and θ_i of the seven joints of the mechanical arm shown in fig. 6.
Specifically, the mechanical arm adopted by the invention has 7 degrees of freedom, one of which is redundant, so infinitely many groups of inverse kinematics solutions exist in the working space (i.e. formula (two) yields multiple groups of joint angles, each group containing the 7 joint angles θ_1–θ_7), and a rule has to be set according to the actual situation to pick one optimal group. Robot inverse kinematics generally follows the shortest-path principle, i.e. the motion of each joint is minimized; at the same time the mechanical arm is a serial mechanism, and a small rotation of the shoulder joint greatly influences the position of the wrist joint at the end, so weighting is required, i.e. the end joints should move more and the proximal joints less. Based on this, the invention takes maximizing the sum of the relative angle margins of all joints as the optimization criterion for finding the optimal group of joint angles:
max over θ of Σ_{i=1..7} (1 − aad_i)    (formula 1)
where θ = (θ_i), i = 1 … 7, and aad_i is the absolute value of the relative angular deviation of joint i, i.e. the deviation of θ_i from the middle of its joint range normalized by half the range:
aad_i = |θ_i − (θ_i^max + θ_i^min)/2| / ((θ_i^max − θ_i^min)/2)
The further each joint angle is from its limit positions, the better, so the joints retain higher flexibility and are less restricted by the joint range.
The optimization objective of formula (1) is not convenient to optimize directly and is equivalent to minimizing the mean of the absolute values of the relative angular deviations, i.e.:
min over θ of μ    (formula 2)
where μ = mean(aad) is the mean of the absolute values of the relative angular deviations.
Meanwhile, in order to prevent a large deviation of an individual joint angle, the standard deviation can be added to the optimization function, so that the optimization objective becomes:
min over θ of (μ + σ)    (formula 3)
where σ = var(aad) is the variance of the absolute values of the relative angular deviations.
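A sketch of this solution-selection step follows, scoring each candidate joint-angle group by the mean plus variance of its absolute relative angular deviations; the exact normalization of the deviation is an assumption consistent with the criterion described above.

```python
import numpy as np

def pick_best_solution(solutions, joint_limits):
    """Choose one joint-angle group per step S4 (a sketch).

    solutions: list of length-7 joint-angle arrays (IK solutions for one grasp pose).
    joint_limits: (7, 2) array of (min, max) limits per joint.
    Score = mean + variance of the absolute relative angular deviations
    (distance from the middle of each joint range, normalized); smaller is better.
    """
    lo, hi = joint_limits[:, 0], joint_limits[:, 1]
    mid, half = (lo + hi) / 2.0, (hi - lo) / 2.0

    def score(theta):
        aad = np.abs(np.asarray(theta) - mid) / half
        return aad.mean() + aad.var()

    return min(solutions, key=score)
```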
In actual operation, certain errors still exist when an object is grabbed by the above method; in order to improve the grabbing precision, compensation processing is preferably performed. Specifically, the calculation of the execution error of each joint angle after step S4 includes:
firstly, a pre-grabbing position is obtained from the position of the object of interest and the determined optimal grabbing posture by offsetting a preset distance along the reverse of the grabbing direction, and the server is accessed to query the feasible grabbing posture set of the pre-grabbing position. Specifically, if the position of the object of interest is c, the pre-grabbing position is p = c + d·z_RB, where z_RB is the main-axis vector of the manipulator (the direction perpendicular to the palm center) under the optimal grabbing posture and d is a preset offset distance, preferably 10 cm;
then, obtaining a plurality of groups of solutions of each joint angle of the mechanical arm based on each grabbing posture inverse solution in the feasible grabbing posture set, and determining an optimal solution, namely a target joint angle, in the plurality of groups of solutions of the joint angle;
then, each joint of the mechanical arm is moved to the corresponding target joint angle, the position of the tail end of the mechanical arm is measured, and the real joint angles of the joints are obtained by inverse solution; specifically, the real rotation angles of joints 1–5 of the mechanical arm are obtained, and the position of the tail end of the mechanical arm is measured with a tracker fixed on the small arm (the second connecting rod) of the mechanical arm. Through the tracker on the small arm, the transformation matrix of the tracker coordinate system relative to the robot base coordinate system can be obtained; since the tracker is fixed on the forearm, the transformation matrix from the wrist joint coordinate system O_5 (at the end of the second link) to the tracker coordinate system can be obtained by a one-time calibration, so the transformation from the wrist joint coordinate system O_5 to the robot base coordinate system (i.e. coordinate system O_0) is obtained as the product of the two. The kinematic inverse solver IKFast is then used to solve this transformation for the first 5 joints, and the least-squares (over-determined) solution is taken as the actual rotation angles.
Finally, the difference between the target joint angles and the real joint angles is calculated to obtain the execution error of each joint of the mechanical arm, i.e. the execution errors of the first five joints are obtained by subtracting the corresponding actual rotation angles from the target rotation angles. Each optimal joint angle obtained in step S4 is then compensated with its execution error, i.e. the execution error is added to the optimal joint angle to obtain a compensated joint angle, and in step S5 each joint of the mechanical arm is finally moved to the corresponding compensated joint angle. This ensures that the position actually reached by the mechanical arm is the target position (i.e. the position of the object of interest to the user), realizing high-precision grabbing.
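A minimal sketch of the pre-grasp offset and the error-compensation arithmetic described above is shown below; the function names are illustrative and the robot and tracker interfaces are not modeled.

```python
import numpy as np

def pre_grasp_position(c, z_rb, offset=0.10):
    """Offset the object position c by a preset distance (10 cm in the text)
    along the manipulator main-axis direction z_rb of the optimal grasp attitude."""
    return np.asarray(c, dtype=float) + offset * np.asarray(z_rb, dtype=float)

def execution_error(target_angles, actual_angles_1to5):
    """Commanded minus measured angles for the first five joints."""
    return np.asarray(target_angles[:5], dtype=float) - np.asarray(actual_angles_1to5, dtype=float)

def compensate(optimal_angles, error_1to5):
    """Add the pre-measured execution error before the final grasp motion (step S5)."""
    out = np.asarray(optimal_angles, dtype=float).copy()
    out[:5] += error_1to5
    return out
```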
The invention can accurately locate the object the user is looking at (interested in) and control the end of the mechanical arm to reach the position of the object in an anthropomorphic way; the anthropomorphic mechanical arm can grasp the object of interest and execute specified actions, such as drinking water and carrying a small ball, so the invention can help patients with severe physical disabilities grasp objects of interest.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (9)

1. A hand-eye coordination grabbing method based on a human eye gaze point is characterized by comprising the following steps:
s1, determining a human eye gaze point by using an eye tracker, and determining the position of an object of interest of a user based on the human eye gaze point;
s2, obtaining a feasible grabbing gesture set of the mechanical arm for executing grabbing actions at each position in space through discretization and storing the feasible grabbing gesture set in a server;
s3, accessing the server according to the position of the object of interest of the user determined in the step S1 to query and obtain a feasible grabbing posture set of the position;
s4, obtaining multiple groups of solutions of each joint angle of the mechanical arm based on the inverse solution of each grabbing posture in the feasible grabbing posture set, and determining one group of optimal solutions in the multiple groups of solutions of the joint angles, namely obtaining the optimal joint angle corresponding to each joint of the mechanical arm, wherein the grabbing pose corresponding to the optimal solution is the optimal grabbing pose;
and S5, moving each joint of the mechanical arm to the corresponding optimal joint angle, and grabbing the object of interest of the user in the optimal grabbing pose, so that the hand-eye coordination grabbing based on the human eye gaze point is completed.
2. The hand-eye coordination grasping method based on human eye gaze point according to claim 1, characterized by that step S1 comprises the following sub-steps:
s11 recognizing centers of left and right pupils of the user using left and right eye cameras on the eye tracker, respectively, to extract information of the human eyes;
s12, mapping the left and right pupil centers obtained by identification to the foreground left camera to obtain a two-dimensional fixation point;
s13, extracting an object anchor frame in the foreground left camera, and then determining an object which is interested by the user according to the position relation between the two-dimensional gaze point and the object anchor frame;
s14, performing three-dimensional reconstruction on the object interested by the user to obtain the position of the object of interest in the foreground left camera;
s15 transforms the position of the object of interest in the foreground left camera into the mechanical arm base coordinate system to determine the position of the user' S object of interest.
3. The hand-eye coordination grasping method based on the human eye gaze point as claimed in claim 2, wherein in step S13, the object anchor frame in the foreground left camera is extracted by using a target recognition and tracking algorithm, specifically:
firstly, identifying and obtaining an anchor frame of an object in the left foreground camera by using a target identification algorithm, initializing a tracking target in a tracking algorithm by using the object anchor frame, and synchronously performing the target identification algorithm and the tracking algorithm;
and then, tracking the object by the initialized tracking algorithm, and if the object is lost, re-initializing the tracking algorithm by using the result of the target identification algorithm so as to track and obtain the anchor frame of the object.
4. The hand-eye coordination grasping method based on the human eye gaze point as claimed in claim 2, wherein in step S14, the three-dimensional reconstruction algorithm is used to perform three-dimensional reconstruction on the object interested by the user, specifically:
s141, calibrating to obtain internal and external parameters and a reprojection matrix of the foreground left camera and the foreground right camera;
s142, correcting and aligning images in the foreground left camera and the foreground right camera by using the internal and external parameters of the foreground left camera and the foreground right camera;
s143, obtaining a binocular vision difference value of the image pixels of the foreground left camera and the foreground right camera through a feature matching algorithm;
s144, reconstructing by using the binocular vision difference value and the re-projection matrix obtained by binocular calibration to obtain the three-dimensional coordinates of each pixel point in the image under the foreground left camera coordinate system, thereby completing the three-dimensional reconstruction of the object.
5. The hand-eye coordination grasping method based on the human eye gaze point as claimed in claim 2, wherein the step S15 specifically comprises the following sub-steps:
s151, obtaining a transformation matrix of the coordinate system of the eye tracker relative to the coordinate system of the mechanical arm base by combining a plurality of infrared receivers on the eye tracker with an infrared generator in the coordinate system of the mechanical arm base;
s152, obtaining a transformation matrix between the foreground left camera and the eye tracker coordinate system through calibration;
s153, converting the pose of the interested object in the foreground left camera to the mechanical arm base coordinate system according to the two transformation matrixes.
6. The method for capturing the hand-eye coordination based on the human eye gaze point of claim 1, wherein in step S4, the IK fast inverse solution module is used to inversely solve the multiple sets of solutions of the mechanical arm joint angles.
7. The hand-eye coordination grasping method based on the human eye gaze point according to any one of claims 1 to 6, wherein in step S4, the optimal solution among the multiple groups of joint-angle solutions is determined with the optimization criterion of maximizing the sum of the relative angle margins of all joints, or with the optimization criterion of minimizing the mean of the absolute values of the relative angle deviations.
8. The hand-eye coordination grasping method based on the human eye gaze point according to any one of claims 1-6, wherein in step S5, each joint of the mechanical arm moves to the corresponding joint angle and then moves further by the pre-obtained execution error so that the mechanical arm reaches the target position, and the object of interest of the user is then grasped in the optimal grasping pose.
9. The hand-eye coordination grasping method based on the human eye gaze point according to claim 8, characterized in that the execution error is obtained by the following steps:
firstly, obtaining a pre-grasping position by offsetting the position of the object of interest of the user by a preset offset distance along the reverse of the grasping direction of the optimal grasping pose, and accessing the server to query the feasible grasping pose set of the pre-grasping position;
then, obtaining multiple sets of solutions of the mechanical arm joint angles by inversely solving each grasping pose in the feasible grasping pose set, and determining the optimal solution, namely the target joint angles, from the multiple sets of joint angle solutions;
then, moving each joint of the mechanical arm to the corresponding target joint angle, measuring the position of the end of the mechanical arm, and obtaining the real joint angle of each joint of the mechanical arm through inverse kinematics;
and finally, calculating the differences between the target joint angles and the real joint angles to obtain the execution error of each joint of the mechanical arm.
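A sketch of the claim-9 execution-error estimation follows. The helpers move_joints, measure_end_position and inverse_solve are hypothetical placeholders for the arm controller, the external measurement of the arm end, and the inverse kinematics routine; they are not names used in the patent.

```python
import numpy as np

def estimate_execution_error(target_joint_angles, move_joints,
                             measure_end_position, inverse_solve):
    """Return the per-joint execution error (target minus measured)."""
    move_joints(target_joint_angles)                  # drive each joint to its target angle
    measured_pose = measure_end_position()            # measure the real position of the arm end
    real_joint_angles = inverse_solve(measured_pose)  # recover the real joint angles by inverse kinematics
    return np.asarray(target_joint_angles) - np.asarray(real_joint_angles)

# During step S5 (claim 8), each commanded angle can then be compensated as:
#   q_command = q_optimal + execution_error
```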
CN201910721469.7A 2019-08-06 2019-08-06 Hand-eye coordination grabbing method based on human eye fixation point Active CN110605714B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910721469.7A CN110605714B (en) 2019-08-06 2019-08-06 Hand-eye coordination grabbing method based on human eye fixation point
PCT/CN2020/119316 WO2021023315A1 (en) 2019-08-06 2020-09-30 Hand-eye-coordinated grasping method based on fixation point of person's eye

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910721469.7A CN110605714B (en) 2019-08-06 2019-08-06 Hand-eye coordination grabbing method based on human eye fixation point

Publications (2)

Publication Number Publication Date
CN110605714A CN110605714A (en) 2019-12-24
CN110605714B true CN110605714B (en) 2021-08-03

Family

ID=68890465

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910721469.7A Active CN110605714B (en) 2019-08-06 2019-08-06 Hand-eye coordination grabbing method based on human eye fixation point

Country Status (2)

Country Link
CN (1) CN110605714B (en)
WO (1) WO2021023315A1 (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110605714B (en) * 2019-08-06 2021-08-03 华中科技大学 Hand-eye coordination grabbing method based on human eye fixation point
CN113043334B (en) * 2021-02-23 2022-12-06 上海埃奇机器人技术有限公司 Robot-based photovoltaic cell string positioning method
CN112907586B (en) * 2021-03-30 2024-02-02 贵州大学 Vision-based mechanical arm control method, device and system and computer equipment
CN113128417B (en) * 2021-04-23 2023-04-07 南开大学 Double-region eye movement tracking method based on head posture
CN113146633B (en) * 2021-04-23 2023-12-19 无锡信捷电气股份有限公司 High-precision hand-eye calibration method based on automatic box pasting system
CN113319863B (en) * 2021-05-11 2023-06-16 华中科技大学 Workpiece clamping pose optimization method and system for robot grinding and polishing machining of blisk
CN113610921B (en) * 2021-08-06 2023-12-15 沈阳风驰软件股份有限公司 Hybrid workpiece gripping method, apparatus, and computer readable storage medium
CN113673479A (en) * 2021-09-03 2021-11-19 济南大学 Method for identifying object based on visual attention point
CN113822946B (en) * 2021-10-09 2023-10-20 上海第二工业大学 Mechanical arm grabbing method based on computer vision
CN114147714B (en) * 2021-12-02 2023-06-09 浙江机电职业技术学院 Method and system for calculating control parameters of mechanical arm of autonomous robot
CN114734444B (en) * 2022-04-27 2023-06-27 博众精工科技股份有限公司 Target positioning method and device, electronic equipment and storage medium
CN115990889B (en) * 2023-03-23 2023-06-23 武汉益模科技股份有限公司 Multi-receptive field-based composite positioning method, device and computer system
CN116968037B (en) * 2023-09-21 2024-01-23 杭州芯控智能科技有限公司 Multi-mechanical-arm cooperative task scheduling method
CN117251058B (en) * 2023-11-14 2024-01-30 中国海洋大学 Control method of multi-information somatosensory interaction system

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103271784A (en) * 2013-06-06 2013-09-04 山东科技大学 Man-machine interactive manipulator control system and method based on binocular vision
CN105537824A (en) * 2016-01-27 2016-05-04 华南理工大学 Automatic welding control method based on hand-eye coordination of mechanical arm
US20170069052A1 (en) * 2015-09-04 2017-03-09 Qiang Li Systems and Methods of 3D Scanning and Robotic Application of Cosmetics to Human
CN106553195A (en) * 2016-11-25 2017-04-05 中国科学技术大学 Object 6DOF localization method and system during industrial robot crawl
CN107471218A (en) * 2017-09-07 2017-12-15 南京理工大学 A kind of tow-armed robot hand eye coordination method based on multi-vision visual
CN108972494A (en) * 2018-06-22 2018-12-11 华南理工大学 A kind of Apery manipulator crawl control system and its data processing method
CN109079777A (en) * 2018-08-01 2018-12-25 北京科技大学 A kind of mechanical arm hand eye coordination operating system
CN109333536A (en) * 2018-10-26 2019-02-15 北京因时机器人科技有限公司 A kind of robot and its grasping body method and apparatus
CN109664317A (en) * 2019-01-24 2019-04-23 深圳勇艺达机器人有限公司 The grasping body system and method for robot
CN109986560A (en) * 2019-03-19 2019-07-09 埃夫特智能装备股份有限公司 A kind of mechanical arm self-adapting grasping method towards multiple target type
CN110032278A (en) * 2019-03-29 2019-07-19 华中科技大学 A kind of method for recognizing position and attitude, the apparatus and system of human eye attention object

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102015307B1 (en) * 2010-12-28 2019-08-28 삼성전자주식회사 Robot and method for controlling the same
KR101789756B1 (en) * 2010-12-29 2017-11-20 삼성전자주식회사 Robot and method for controlling the same
CN109877827B (en) * 2018-12-19 2022-03-29 东北大学 Non-fixed point material visual identification and gripping device and method of connecting rod manipulator
CN110605714B (en) * 2019-08-06 2021-08-03 华中科技大学 Hand-eye coordination grabbing method based on human eye fixation point

Also Published As

Publication number Publication date
WO2021023315A1 (en) 2021-02-11
CN110605714A (en) 2019-12-24

Similar Documents

Publication Publication Date Title
CN110605714B (en) Hand-eye coordination grabbing method based on human eye fixation point
CN110032278B (en) Pose identification method, device and system for human eye interested object
CN109255813B (en) Man-machine cooperation oriented hand-held object pose real-time detection method
CN110421562B (en) Mechanical arm calibration system and calibration method based on four-eye stereoscopic vision
CN110238845B (en) Automatic hand-eye calibration method and device for optimal calibration point selection and error self-measurement
US8244402B2 (en) Visual perception system and method for a humanoid robot
CN111801198B (en) Hand-eye calibration method, system and computer storage medium
CN109297413A (en) A kind of large-size cylinder body Structural visual measurement method
WO2009043927A1 (en) Apparatus for acquiring and processing information relating to human eye movements
CN111267106A (en) Intelligent body temperature detection robot based on vision and detection method
CN113858217B (en) Multi-robot interaction three-dimensional visual pose perception method and system
CN111524174A (en) Binocular vision three-dimensional construction method for moving target of moving platform
Busam et al. A stereo vision approach for cooperative robotic movement therapy
CN112183316B (en) Athlete human body posture measuring method
Khan et al. On the calibration of active binocular and RGBD vision systems for dual-arm robots
CN115410233B (en) Gesture attitude estimation method based on Kalman filtering and deep learning
Rüther et al. The narcissistic robot: Robot calibration using a mirror
CN112712030A (en) Three-dimensional attitude information restoration method and device
CN116616812A (en) NeRF positioning-based ultrasonic autonomous navigation method
CN113345010B (en) Multi-Kinect system coordinate calibration and conversion method based on improved ICP
Arsirii et al. Development of the intelligent software and hardware subsystem for capturing an object by robot manipulator
CN114310957A (en) Robot system for medical detection and detection method
CN111860275B (en) Gesture recognition data acquisition system and method
CN112656366B (en) Method and system for measuring pupil size in non-contact manner
CN111829489B (en) Visual positioning method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant