CN102848388A - Service robot locating and grabbing method based on multiple sensors - Google Patents

Service robot locating and grabbing method based on multiple sensors Download PDF

Info

Publication number
CN102848388A
CN102848388A (application CN2012100967434A / CN201210096743A)
Authority
CN
China
Prior art keywords
robot
arm
coordinate
service robot
location
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012100967434A
Other languages
Chinese (zh)
Inventor
刘路
李昕
吕小听
张德兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology filed Critical University of Shanghai for Science and Technology
Priority to CN2012100967434A priority Critical patent/CN102848388A/en
Publication of CN102848388A publication Critical patent/CN102848388A/en
Pending legal-status Critical Current

Landscapes

  • Manipulator (AREA)

Abstract

The invention discloses a multi-sensor-based service robot localization and grasping method. The method comprises the following steps: tags of an RFID (Radio Frequency Identification) transceiver module are placed on the ground according to a fixed rule to establish a gridded coordinate system; the robot walks freely, localizes itself and searches for the target in the gridded environment by combining RFID with binocular vision; the space coordinates of the object obtained by the binocular vision system are transformed into the coordinate system of the humanoid arm, the arm is modelled mathematically with an improved D-H model, and the joint angles of the humanoid arm are calculated with a new inverse-solution algorithm, so that the object is followed and grasped. With the disclosed multi-sensor-based localization and grasping method, self-localization of the service robot as well as target following and grasping are realized.

Description

Multi-sensor-based service robot localization and grasping method
Technical field
The invention belongs to the field of robotics, and specifically relates to a multi-sensor-based service robot localization and grasping method.
Background technology
Over the past five or six decades, researchers have been working on techniques for robot applications. In the early 1960s, with the expansion of industry, robots helped people carry out dangerous operations and tasks; however, those robots worked mainly in structured environments and could only operate according to specific patterns. With the development of technology and the demands of daily human life, robots now face a series of challenges such as unstructured and complex environments. Therefore, for a robot to simulate service in a home environment, it should possess a certain interactive capability: it should intelligently recognize objects according to voice instructions, perform autonomous localization and navigation, avoid obstacles according to its perceived environment, and then grasp the specified article and hand it to the specified person.
Summary of the invention
The object of the invention is to remedy the deficiencies of present technology by proposing a multi-sensor-based service robot localization and grasping method that lets the robot provide more intelligent service for humans in a structured environment. In this method the robot interacts with the external environment through sensors such as vision, radio-frequency, ultrasonic and photoelectric sensors, and accomplishes intelligent behaviors such as free walking, target search, gripper-based target following and flexible grasping within the gridded environment built from RF tags.
To achieve the above object, the design of the present invention is as follows:
In the multi-sensor-based service robot localization and grasping method of the present invention, the experimental platform comprises a binocular image acquisition module, an RFID transceiver module, a wheeled motion module and a humanoid-arm control module. The tags of the RFID transceiver module are placed on the ground according to a fixed rule to build the gridded environment; the robot combines RFID with binocular vision to walk freely, localize itself and search for the target in that environment; the space coordinates of the object obtained by the binocular vision system are transformed into the arm coordinate system; the arm is modelled with an improved D-H model and the inverse solution of the humanoid arm is obtained, realizing localization and grasping of the object; and the feedforward and feedback strategies of machine vision continually adjust the gripper position, realizing following and accurate grasping of the target object.
The multi-sensor-based service robot localization and grasping method of the present invention comprises the following functions:
(1) Autonomous navigation and localization module: combining RFID and the binocular camera head to realize autonomous navigation, localization and grasping of the target object.
(2) Humanoid-arm open-loop manipulation module: first, the binocular system obtains the three-dimensional coordinates of the target object from its color, shape and texture features; after a coordinate transformation, the coordinates of the target relative to the right arm are obtained; the 4+1-degree-of-freedom arm, reduced from 6+1 degrees of freedom, is modelled mathematically with the improved D-H model, and with the new inverse-solution algorithm proposed by the authors the angle of each arm joint is obtained, so the target object can be grasped along a specific trajectory.
(3) Humanoid-arm closed-loop manipulation module: machine vision calculates the error between the gripper and the object, and a closed-loop strategy continually adjusts the gripper position to reduce that error, realizing following and accurate grasping of the target object.
The binocular image acquisition module means the system is based on binocular stereo vision. The RFID transceiver device uses passive RF tags that contain the coordinate information of the environment. The wheeled motion module adopts two-wheel differential drive, with two color-mark sensors installed on the bottom. The humanoid arm has 6+1 degrees of freedom; it is reduced from 6+1 to 4+1 degrees of freedom, modelled mathematically with the improved D-H model, and the inverse kinematics of the simplified equations is solved with a newly proposed inverse-solution algorithm, so that the arm can follow the target object in real time and grasp it flexibly.
The RFID transceiver device comprises RF tags and a radio-frequency sensing coil; the coil is installed on the bottom of the robot, and the tags are placed on the ground at fixed spacings to build the gridded environment. The robot roaming in the gridded environment is shown in Fig. 5.
The gridded environment built from RF tags is used to determine the robot's initial position, to judge and plan its heading, and to adjust its position in front of the target object; the flow of heading judgment and planning is shown in Fig. 6.
As the steering-angle diagram of Fig. 7 shows, suppose the path the robot must walk is A-B-C; the angle θ it must rotate at point B can be solved by the law of cosines:

cos ∠ABC = (|AB|² + |BC|² − |AC|²) / (2·|AB|·|BC|),  θ = π − ∠ABC    (1)
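The steering computation above can be sketched in a few lines of Python. The grid coordinates and the convention that the robot turns by the supplement of the interior angle at B are assumptions for illustration, not the patent's exact formula:

```python
import math

def turn_angle(a, b, c):
    """Rotation needed at B when travelling the path A-B-C.

    a, b, c are (x, y) grid coordinates, e.g. read from RFID tags.
    The interior angle at B comes from the law of cosines; the robot
    must rotate by its supplement to align with the new heading.
    """
    ab = math.dist(a, b)
    bc = math.dist(b, c)
    ac = math.dist(a, c)
    cos_interior = (ab**2 + bc**2 - ac**2) / (2 * ab * bc)
    # Clamp to guard against floating-point drift outside [-1, 1].
    interior = math.acos(max(-1.0, min(1.0, cos_interior)))
    return math.pi - interior
```

For a right-angle corner such as A=(0,0), B=(1,0), C=(1,1) the function returns π/2; for a straight path it returns 0.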
When the robot searches for the object, the final destination can be obtained through voice interaction with the robot, from which the robot's heading can be planned. When the robot comes near the target object (about 1 meter), it adjusts its own attitude and controls the distance to the target, realizing accurate localization; the algorithm flow chart is shown in Fig. 8.
The humanoid arm has 6+1 degrees of freedom and is reduced from 6+1 to 4+1 degrees of freedom; the equivalent simplified arm models with 6+1 and 4+1 degrees of freedom are shown in Figs. 9 and 10.
In the simplified model, solving with the original D-H model is cumbersome in calculation and restricts the free orientation of the arm. The present invention adopts an improved D-H model, adding a Y column to the original D-H parameter table so that translation along the Y direction is allowed. The specific practice is as follows:
(1) Robot arm modeling
Let a denote the length of the common normal between two adjacent joint axes (the link offset); α denote the angle between two adjacent z axes (the joint twist); d denote the distance along the z axis between two adjacent common normals; and θ denote the rotation angle about the z axis. By substituting the parameters from the parameter table into the A matrices, the transformation between every two adjacent joints can be written out, where the A matrix is the transition matrix from one joint to the previous joint, as shown in formulas (2)-(5).
[Formulas (2)-(5): per-joint A matrices; equation images not reproduced.]
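Since formulas (2)-(5) survive only as images, the textbook reference point may help: a minimal sketch of the standard D-H link transform (without the invention's extra Y translation, which is not reproducible from the images):

```python
import math

def dh_matrix(theta, d, a, alpha):
    """Standard D-H link transform as a 4x4 nested list:
    A = Rot(z, theta) * Trans(z, d) * Trans(x, a) * Rot(x, alpha).

    The patent's improved model would insert an extra translation along
    the Y direction (the added Y column); that offset is omitted here.
    """
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]
```

With all four parameters zero the transform reduces to the identity, a quick sanity check on the sign conventions.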
(2) Forward solution
Given each joint rotation angle, the forward kinematics solution of the robot arm can be obtained from the pose matrices between adjacent joints, as shown in formula (6):

T = A1 A2 ··· An    (6)

[The original formula (6) is an equation image; T is the product of the per-joint A matrices.] The fourth column of T is the end-effector position:

[Formula (7): equation image not reproduced.]

In formula (7), s1, c1, s2, c2, etc. denote sin θ1, cos θ1, sin θ2, cos θ2.
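The chain product of formula (6) can be sketched in pure Python; the link parameters passed in below are illustrative, not the patent's actual arm dimensions:

```python
import math

def mat_mul(a_mat, b_mat):
    """Product of two 4x4 matrices stored as nested lists."""
    return [[sum(a_mat[i][k] * b_mat[k][j] for k in range(4))
             for j in range(4)] for i in range(4)]

def forward_kinematics(joint_params):
    """Chain the per-joint D-H transforms: T = A1 A2 ... An.

    joint_params is a list of (theta, d, a, alpha) tuples. The fourth
    column of the returned matrix holds the end-effector position.
    """
    t_mat = [[float(i == j) for j in range(4)] for i in range(4)]
    for theta, d, a, alpha in joint_params:
        ct, st = math.cos(theta), math.sin(theta)
        ca, sa = math.cos(alpha), math.sin(alpha)
        a_mat = [[ct, -st * ca,  st * sa, a * ct],
                 [st,  ct * ca, -ct * sa, a * st],
                 [0.0,      sa,       ca,      d],
                 [0.0,     0.0,      0.0,    1.0]]
        t_mat = mat_mul(t_mat, a_mat)
    return t_mat
```

For example, a planar two-link chain with unit links and the first joint at 90° places the end effector at (0, 2).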
(3) Inverse solution
Let the end pose matrix of the manipulator be:

[Formula (8): equation image not reproduced.]

In formula (8), P[1], P[2] and P[3] are respectively the direction cosines, in the arm coordinate system, of the gripper's pitch, roll and yaw, and P[4] is the gripper's final position. Solving the inverse kinematics of the arm means solving for the unknown joint angles.
The inverse solution procedure is:
1. From the flow chart of Fig. 8 it is known that the target object is almost at the center of the robot's binocular vision. To ease grasping, the plane of the gripper can be kept perpendicular to the object, so that the gripper opening faces the target and grasping is easiest, as shown in Fig. 10. It is then easy to derive a fixed relation among the joint angles [equation image not reproduced], where θ2 is the inward rotation angle of joint 2 and θ3 is the inward rotation angle of joint 3; imposing this relation keeps the gripper perpendicular to the object plane.
2. Suppose the final position of the gripper is given [position vector: equation image not reproduced]. From the analysis in step 1, the spatial position of joint 5 of the robot is given by formula (9) [equation image not reproduced].
3. Let [symbol: equation image not reproduced] = T[4]; dividing the second row of T[4] by the first row gives:

[Formula (10): equation image not reproduced.]

4. As shown in Fig. 11, from geometry:

[Formula (11): equation image not reproduced.]

5. The third row of T[4] gives:

[Formula (12): equation image not reproduced.]

With θ3 already known, only θ2 remains unknown in formula (12), so one may set:

[Formulas (13)-(16): equation images not reproduced.]
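The patent's inverse formulas (10)-(16) exist only as equation images and cannot be reproduced here. As a generic illustration of the geometric style of solution used for the elbow pair, the classic planar two-link inverse (law-of-cosines elbow solution) is:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Elbow-down inverse solution of a planar two-link arm.

    A generic geometric sketch, not the patent's formulas (10)-(16).
    l1, l2 are assumed link lengths; returns (theta1, theta2) such that
    forward kinematics reaches the target point (x, y).
    """
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)                      # elbow angle
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2
```

For unit links and a fully extended target (2, 0) both angles are zero; other targets can be verified by substituting the angles back into the forward kinematics.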
(4) Hand-eye coordinate transformation
To convert the depth information obtained by vision into the position the mechanical gripper must reach, the transformation between the visual coordinate system and the arm coordinate system must be established. The robot eye model is shown in Fig. 11. The implementation steps of the hand-eye transformation are as follows:
1. [Formula (17): equation image not reproduced.]
Here x, y, z are the target coordinate values obtained by the binocular system; left-multiplying by this matrix yields the position of the target object in binocular vision after the robot's head is raised.
2. Translate the origin of the visual coordinate system obtained in step 1 to coincide with the origin of the right-arm coordinate system; the transformation formula is:
[Formula (18): equation image not reproduced.]
3. Make the x, y and z axis directions of the visual coordinate system fully coincide with those of the arm coordinate system; the transformation formula is:
[Formula (19): equation image not reproduced.]
where the resulting vector is the position the gripper must reach and θ is the forward tilt angle of the robot's head (shown in the drawing); the remaining offset quantities in formulas (18) and (19) are obtained by measurement.
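A minimal sketch of the hand-eye mapping of formulas (17)-(19), assuming the head pitch is a rotation about the camera's x axis and the frame offset is a single measured translation (both assumptions, since the original matrices are equation images):

```python
import math

def camera_to_arm(p_cam, tilt, offset):
    """Map a binocular-camera point into the right-arm frame.

    p_cam: (x, y, z) target coordinates from the binocular system.
    tilt:  forward tilt angle of the robot head (radians) - assumed to
           act as a rotation about the camera x axis.
    offset: measured camera-origin -> arm-origin translation vector.
    """
    x, y, z = p_cam
    c, s = math.cos(tilt), math.sin(tilt)
    # Rot(x, tilt): the y and z components mix, x is unchanged.
    xr, yr, zr = x, c * y - s * z, s * y + c * z
    dx, dy, dz = offset
    return (xr + dx, yr + dy, zr + dz)
```

With zero tilt the mapping is a pure translation; with a 90° tilt the camera's y axis maps onto the arm's z axis.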
The real-time target following and grasping part of the humanoid arm of the present invention means that the robot uses the feedforward and feedback strategies of machine vision to continually adjust the position of the gripper, realizing following and accurate grasping of the target object. The specific practice is as follows:
(1) Feedforward control of machine vision
According to visual information and arm kinematics, the joint angles corresponding to the desired gripper position are calculated, and the robot hand is placed near the target object; the concrete flow is shown in Fig. 13.
(2) Feedback control of machine vision
Owing to mechanical clearance, transformation errors arise between the vision system and the arm coordinates, often causing the first grasp to fail. The present invention compensates the error by extracting the distance between two different colors in the binocular images: a red marker is attached to the gripper and a green marker to the target object, and the motion error function is [equation image not reproduced]. The specific implementation flow is shown in Fig. 14.
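The closed-loop adjustment on the marker-distance error can be sketched as a discrete PD correction; the gains and the scalar error signal below are assumed values for illustration, not the patent's tuning:

```python
class PDController:
    """Discrete PD correction on the gripper/target marker error.

    The error e is the distance between the red marker (gripper) and the
    green marker (target object) as measured by the binocular system;
    each step returns a position correction for the gripper.
    """

    def __init__(self, kp=0.5, kd=0.1):
        self.kp, self.kd = kp, kd
        self.prev_e = 0.0

    def step(self, e):
        """Proportional term plus finite-difference derivative term."""
        u = self.kp * e + self.kd * (e - self.prev_e)
        self.prev_e = e
        return u
```

The derivative term damps the approach: a constant error yields a shrinking correction once the error stops changing.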
The humanoid-arm localization and target-grasping part of the present invention means that, after the robot navigates to the designated position, it controls the arm to grasp the target object. The key steps are:
(1) The binocular system obtains the three-dimensional coordinates of the target object from its color, shape and texture features.
(2) After a coordinate transformation, the coordinates of the target relative to the right arm are obtained.
(3) The inverse solution of the right arm is computed to obtain each joint angle, and the target object is grasped along a specific trajectory.
Based on the above points, the implementation steps of the present invention are as follows:
(1) Learn the robot's final destination through voice; using the RF transceiver device, place tags containing environment coordinate values on the ground according to a fixed rule to build the gridded environment and realize the robot's initial localization.
(2) Use binocular vision to extract the depth information of the target and realize accurate localization of the robot within the gridded environment.
(3) Model the arm with the improved D-H model and obtain the forward solution of the humanoid arm, preparing for the flexible grasp of the target object in the next step.
(4) Transform the space coordinates of the object obtained by the binocular vision system into the arm coordinate system, and calculate each joint angle with the newly proposed inverse-solution algorithm.
(5) Use the feedback control strategy of machine vision to continually adjust the gripper position and realize following of the target object.
(6) Take the tag address first received by the radio-frequency device as the new destination, and let the robot advance to that destination.
According to the foregoing inventive concept, the present invention adopts the following technical solution:
A multi-sensor-based service robot localization and grasping method, characterized in that: the robot is first localized with the radio-frequency module; binocular vision then performs accurate localization of the robot, with a wheeled motion module using two-wheel differential drive and two color-mark sensors installed on the bottom; the robot arm is modelled mathematically with the improved D-H model; the robot grasps the target for the first time according to the given path planning and the new inverse-solution algorithm; if the first grasp fails, binocular vision obtains the depth information of the red marker on the gripper, compares it with the depth information of the object, and uses the resulting error with a PD algorithm to finally realize a successful grasp.
Compared with the prior art, the present invention has the following apparent and substantive distinguishing features and marked improvements: machine vision and RFID are fused and applied to the service robot, giving full play to the advantages of both sensors; the robot realizes multiple intelligent behaviors in the gridded environment, such as free walking, target search, self-localization, target following and flexible grasping, which greatly improves grasping accuracy.
Description of drawings
Fig. 1 is the flow chart of the multi-sensor-based service robot localization and grasping method;
Fig. 2 is the system architecture diagram of the present invention;
Fig. 3 is an external view of the robot;
Fig. 4 shows the experimental results of the embodiment of the invention;
Fig. 5 shows the gridded environment that is built;
Fig. 6 is the initial-localization flow chart;
Fig. 7 is the steering-angle diagram of the robot;
Fig. 8 is the accurate-localization flow chart of the robot;
Fig. 9 is the 6+1-degree-of-freedom robot arm model;
Fig. 10 is the 4+1-degree-of-freedom robot arm model;
Fig. 11 is a diagram of the relations among the joint angles used in solving the inverse kinematics;
Fig. 12 is the hand-eye coordinate transformation diagram;
Fig. 13 is the robot vision feedforward principle diagram;
Fig. 14 is the robot vision feedback principle diagram.
The specific embodiments
Preferred embodiments of the present invention are described in detail below in conjunction with the accompanying drawings.
Embodiment 1:
Referring to Fig. 1, this multi-sensor-based service robot localization and grasping method is characterized by the following concrete operation steps:
(1) The robot is first localized with the radio-frequency module. The RF transceiver device comprises an antenna, a reader and passive RF tags, where every tag contains information such as the coordinate values of the environment.
(2) Binocular vision performs accurate localization of the robot; the wheeled motion module adopts two-wheel differential drive, with two color-mark sensors installed on the bottom.
(3) The robot arm is modelled mathematically with the improved D-H model.
(4) The robot grasps the target for the first time according to the given path planning and the new inverse-solution algorithm.
(5) If the first grasp fails, binocular vision obtains the depth information of the red marker on the gripper, compares it with the depth information of the object, and uses the resulting error with the PD algorithm to finally realize a successful grasp.
Embodiment 2:
This embodiment is basically identical to Embodiment 1; its special features are as follows. The RF transceiver device in step (1) comprises RF tags and a radio-frequency sensing coil: the coil is installed on the bottom of the robot, and the tags are placed on the ground at fixed spacings to build the gridded coordinate system; fused with the binocular information, the robot continually judges and plans its route by reading the coordinate information in the tag data blocks, realizing the initial localization. The accurate-localization method of step (2) is: the color information of the object extracted by the binocular vision system undergoes HSV (Hue-Saturation-Value) threshold segmentation to obtain the three-dimensional coordinates of the target object; from the robot's current tag position and the computed relation to the target's three-dimensional coordinates, the robot's heading is planned so that it arrives in front of the object to be grasped. The essence of the improved D-H model of step (3) is that the previous coordinate origin can, through some transformations (translation or rotation), be made to coincide with the following coordinate origin, as long as this principle is met; adding a Y column to the D-H parameter table allows translation along the Y direction, which reduces the computational error brought by trigonometric-function conversion; although one more matrix multiplication is added, homogeneous transformation matrices are easier to handle when multiplied. Step (4) is the inverse-solution algorithm of the arm kinematics. Step (5), arm localization and target grasping, comprises two parts, the first grasp and the re-grasp after failure: the feedforward of machine vision first realizes the gripper's initial grasp attempt, and the feedback strategy then continually adjusts the gripper position to approach the target object gradually until the grasp succeeds.
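One plausible reading of "adding a Y column" (an assumption on our part, since the patent's matrices are equation images) is an extra translation y_i along the joint's y axis inserted into the standard per-joint product:

```latex
% Hypothetical form of the improved per-joint transform:
% the standard D-H product augmented with a Trans(y, y_i) factor.
A_i \;=\; \mathrm{Rot}(z,\theta_i)\,\mathrm{Trans}(0,\,y_i,\,d_i)\,
          \mathrm{Trans}(a_i,\,0,\,0)\,\mathrm{Rot}(x,\alpha_i)
```

Under this reading, setting every y_i to zero recovers the ordinary D-H transform, consistent with the claim that the change only adds one more (easy) homogeneous-matrix multiplication.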
Embodiment 3:
As shown in Fig. 2, the localization and grasping system of the service robot of this embodiment consists of a binocular image acquisition module, a speech recognition module, an RFID transceiver module, a wheeled motion module and two humanoid-arm control modules.
As shown in Fig. 3, the experimental-platform robot of this example has a binocular vision camera, 3 front ultrasonic sensors, 2 side ultrasonic sensors, 7 chassis obstacle-avoidance sensors, 2 loudspeakers, 2 mechanical arms and 1 touch screen; the user can control the robot through the buttons of the human-machine interface. The user can also attach an external microphone and converse directly with the robot, with conversation content the user can design; in addition, functions such as robot motion and information and entertainment selection can be performed through a remote controller.
As shown in Fig. 4, this embodiment simulates a home environment in which the robot serves humans intelligently. The embodiment mainly comprises the following steps:
Step 1: The user inputs voice through the microphone, the speech recognition module identifies it, and the recognition result is passed to the robot control platform, which executes the corresponding command. In this embodiment we give the robot the order: "Bring the green tea back to me." The robot asks: "Where is the green tea?" We answer: "The green tea is at (x, y)," where (x, y) is the space coordinate information observed by eye. From this exchange the robot learns that the article to fetch is green tea, that its address is (x, y), and that it must be brought back to the original place, as shown in Fig. 4(a). According to this voice instruction, the robot walks in an arbitrary direction using the wheeled motion module. When the robot encounters the first tag while walking, it learns its current absolute position but does not yet know its direction of travel, so it continues walking forward. When it encounters a second tag, it reads that tag's absolute position and, with the final destination obtained by analyzing the voice command, plans the rotation angle and forward distance; the robot then walks forward until it reaches the destination tag, as shown in Fig. 4(b).
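The two-tag heading inference described above can be sketched as follows; the grid coordinates and return convention are illustrative assumptions:

```python
import math

def plan_from_tags(tag1, tag2, dest):
    """Infer heading from two consecutive RFID tag fixes, then plan the turn.

    tag1, tag2, dest are (x, y) grid coordinates. The robot's current
    heading is the direction tag1 -> tag2; the function returns the signed
    rotation (radians, wrapped to [-pi, pi)) and the straight-line distance
    from tag2 to dest. The patent's planner additionally stops when the
    destination tag itself is read.
    """
    heading = math.atan2(tag2[1] - tag1[1], tag2[0] - tag1[0])
    bearing = math.atan2(dest[1] - tag2[1], dest[0] - tag2[0])
    turn = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
    return turn, math.dist(tag2, dest)
```

For example, a robot that moved from (0,0) to (1,0) and must reach (1,1) should turn left by π/2 and travel one grid unit.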
Step 2: After the robot arrives at the tag position, the binocular vision image acquisition module obtains the three-dimensional coordinate information of the green tea, and the robot is accurately localized so that the arm can perform the grasping operation, as shown in Fig. 4(c).
Step 3: The robot arm is modelled mathematically with the improved D-H model, and the forward solution is established.
Step 4: After the robot arrives at the destination, the binocular vision image acquisition module first obtains the three-dimensional coordinate information of the green tea; after a coordinate transformation, the coordinates of the target relative to the right arm are obtained; the inverse solution of the right arm is then computed to obtain each joint angle, and the target object is grasped for the first time along a specific trajectory, as shown in Fig. 4(d).
Step 5: If the grasp in Step 4 fails, binocular vision uses the error between the depth information of the color on the gripper and that of the color of the object, and a PI regulation algorithm controls the gripper to approach the object continually; when the error is within a certain range, the gripper is commanded to grasp, as shown in Fig. 4(e).
Step 6: The tag the robot first encountered while walking is taken as the new destination, and the robot advances to it according to the navigation algorithm of Step 1, as shown in Fig. 4(f).
This embodiment is implemented on the premise of the technical solution of the present invention; a detailed implementation and concrete operating process have been given, but the protection scope of the present invention is not limited to the above embodiments.

Claims (6)

1. A multi-sensor-based service robot localization and grasping method, characterized in that the concrete operation steps are as follows:
(1) the robot is first localized with the radio-frequency module; the RF transceiver device comprises an antenna, a reader and passive RF tags, where every tag contains information such as the coordinate values of the environment;
(2) binocular vision performs accurate localization of the robot; the wheeled motion module adopts two-wheel differential drive, with two color-mark sensors installed on the bottom;
(3) the robot arm is modelled mathematically with the improved D-H model;
(4) the robot grasps the target for the first time according to the given path planning and the new inverse-solution algorithm;
(5) if the first grasp fails, binocular vision obtains the depth information of the red marker on the gripper, compares it with the depth information of the object, and uses the resulting error with the PD algorithm to finally realize a successful grasp.
2. The multi-sensor-based service robot localization and grasping method according to claim 1, characterized in that: the RF transceiver device in step (1) comprises RF tags and a radio-frequency sensing coil; the coil is installed on the bottom of the robot, and the tags are placed on the ground at fixed spacings to build the gridded coordinate system; fused with the binocular information, the robot continually judges and plans its route by reading the coordinate information in the tag data blocks, realizing the initial localization.
3. The multi-sensor-based service robot localization and grasping method according to claim 1, characterized in that the accurate-localization method of step (2) is: the color information of the object extracted by the binocular vision system undergoes HSV (Hue-Saturation-Value) threshold segmentation to obtain the three-dimensional coordinates of the target object; from the robot's current tag position and the computed relation to the target's three-dimensional coordinates, the robot's heading is planned so that it arrives in front of the object to be grasped.
4. The multi-sensor-based service robot localization and grasping method according to claim 1, characterized in that the essence of the improved D-H model of step (3) is that the previous coordinate origin can, through some transformations (translation or rotation), be made to coincide with the following coordinate origin, as long as this principle is met; adding a Y column to the D-H parameter table allows translation along the Y direction, which reduces the computational error brought by trigonometric-function conversion; although one more matrix multiplication is added, homogeneous transformation matrices are easier to handle when multiplied.
5. The multi-sensor-based service robot localization and grasping method according to claim 1, characterized in that the inverse-solution algorithm of the arm kinematics of step (4) comprises the following steps:
(1) the binocular system first obtains the three-dimensional coordinates of the target object from its color, shape and texture features;
(2) the transformation between the visual coordinate system and the arm coordinate system is then established, converting the depth information obtained by vision into the position the gripper must reach: the transformed visual origin is made to coincide with the origin of the right-arm coordinate system, and the x, y and z axis directions of the visual coordinate system are made fully identical to those of the arm coordinate system;
(3) the inverse-solution algorithm of the 4+1 degrees of freedom is used to calculate each joint angle of the robot arm;
(4) finally, path planning is performed for the motion of each arm joint so that the arm does not hit the table or other obstacles during the first grasp.
6. The multi-sensor-based service robot localization and grasping method according to claim 1 or 2, characterized in that the arm localization and target grasping of step (5) comprises two parts, the first grasp and the re-grasp after failure: the feedforward of machine vision first realizes the gripper's first grasp attempt, and the feedback strategy then continually adjusts the gripper position to approach the target object gradually until the grasp succeeds.
CN2012100967434A 2012-04-05 2012-04-05 Service robot locating and grabbing method based on multiple sensors Pending CN102848388A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012100967434A CN102848388A (en) 2012-04-05 2012-04-05 Service robot locating and grabbing method based on multiple sensors

Publications (1)

Publication Number Publication Date
CN102848388A true CN102848388A (en) 2013-01-02

Family

ID=47395566

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012100967434A Pending CN102848388A (en) 2012-04-05 2012-04-05 Service robot locating and grabbing method based on multiple sensors

Country Status (1)

Country Link
CN (1) CN102848388A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050005151A (en) * 2003-07-04 2005-01-13 주식회사유진로보틱스 Method of home security service using robot and robot thereof
KR20080090150A (en) * 2007-04-04 2008-10-08 삼성전자주식회사 Service robot, service system using service robot and controlling method of the service system using service robot
JP2009045692A (en) * 2007-08-20 2009-03-05 Saitama Univ Communication robot and its operating method
CN101559600A (en) * 2009-05-07 2009-10-21 上海交通大学 Service robot grasp guidance system and method thereof
US20090265133A1 (en) * 2005-08-01 2009-10-22 Moonhong Baek Localization system and method for mobile object using wireless communication
CN101661098A (en) * 2009-09-10 2010-03-03 上海交通大学 Multi-robot automatic locating system for robot restaurant
CN102323817A (en) * 2011-06-07 2012-01-18 上海大学 Service robot control platform system and multimode intelligent interaction and intelligent behavior realizing method thereof

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI RUIFENG et al.: "Development of a Dual-Arm Service Robot Based on Binocular Vision", Machinery Design & Manufacture *
JIA DONGYONG et al.: "Grasping Operation of a Humanoid Robot Based on Visual Feedforward and Visual Feedback", Transactions of Beijing Institute of Technology *

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103529856A (en) * 2013-08-27 2014-01-22 浙江工业大学 5-joint robot end tool position and posture control method
CN103529856B * 2013-08-27 2016-04-13 浙江工业大学 5-rotary-joint robot end-of-arm tool pose control method
CN103481288A (en) * 2013-08-27 2014-01-01 浙江工业大学 5-joint robot end-of-arm tool pose controlling method
CN104827470A (en) * 2015-05-25 2015-08-12 山东理工大学 Mobile manipulator control system based on GPS and binocular vision positioning
CN104842362A (en) * 2015-06-18 2015-08-19 厦门理工学院 Method for grabbing material bag by robot and robot grabbing device
CN105014666A (en) * 2015-07-13 2015-11-04 广州霞光技研有限公司 Multi-DOF manipulator independent grabbing inverse solution engineering algorithm
CN106708028A (en) * 2015-08-04 2017-05-24 范红兵 Intelligent prediction and automatic planning system for action trajectory of industrial robot
CN105372622A (en) * 2015-11-09 2016-03-02 深圳市中科鸥鹏智能科技有限公司 Intelligent positioning floor
CN105751220A (en) * 2016-05-13 2016-07-13 齐鲁工业大学 Walking human-shaped robot and fusion method for multiple sensors thereof
CN107618031A (en) * 2016-07-13 2018-01-23 本田技研工业株式会社 Engagement confirmation method performed by a robot
CN106372552A (en) * 2016-08-29 2017-02-01 北京理工大学 Human body target identification and positioning method
CN106372552B (en) * 2016-08-29 2019-03-26 北京理工大学 Human body target recognition positioning method
CN106625687A (en) * 2016-10-27 2017-05-10 安徽马钢自动化信息技术有限公司 Kinematics modeling method for articulated robot
CN106945037A (en) * 2017-03-22 2017-07-14 北京建筑大学 Target grasping method and system for small-scale robots
CN108657534A (en) * 2017-03-28 2018-10-16 晓视自动化科技(上海)有限公司 Automatic packaging equipment based on machine vision
CN107015193A (en) * 2017-04-18 2017-08-04 中国矿业大学(北京) Binocular CCD vision method and system for locating moving objects in mines
CN107862716A (en) * 2017-11-29 2018-03-30 合肥泰禾光电科技股份有限公司 Mechanical arm localization method and positioning mechanical arm
CN109916352A (en) * 2017-12-13 2019-06-21 北京柏惠维康科技有限公司 Method and apparatus for obtaining robot TCP coordinates
CN109916352B * 2017-12-13 2020-09-25 北京柏惠维康科技有限公司 Method and device for acquiring robot TCP (tool center point) coordinates
CN108115688A (en) * 2017-12-29 2018-06-05 深圳市越疆科技有限公司 Crawl control method, system and the mechanical arm of a kind of mechanical arm
CN111815683B (en) * 2019-04-12 2024-05-17 北京京东乾石科技有限公司 Target positioning method and device, electronic equipment and computer readable medium
CN111815683A (en) * 2019-04-12 2020-10-23 北京京东尚科信息技术有限公司 Target positioning method and device, electronic equipment and computer readable medium
CN110711701A (en) * 2019-09-16 2020-01-21 华中科技大学 Grabbing type flexible sorting method based on RFID space positioning technology
CN110666820A (en) * 2019-10-12 2020-01-10 合肥泰禾光电科技股份有限公司 High-performance industrial robot controller
CN111612823A (en) * 2020-05-21 2020-09-01 云南电网有限责任公司昭通供电局 Robot autonomous tracking method based on vision
CN111746313A (en) * 2020-06-02 2020-10-09 上海理工大学 Unmanned charging method and system based on mechanical arm guidance
CN111746313B (en) * 2020-06-02 2022-09-20 上海理工大学 Unmanned charging method and system based on mechanical arm guidance
CN112589809A (en) * 2020-12-03 2021-04-02 武汉理工大学 Tea pouring robot based on binocular vision of machine and artificial potential field obstacle avoidance method
CN113352289A (en) * 2021-06-04 2021-09-07 山东建筑大学 Mechanical arm track planning control system of overhead ground wire hanging and dismounting operation vehicle
CN114734466A (en) * 2022-06-14 2022-07-12 中国科学技术大学 Mobile robot chemical experiment operation system and method

Similar Documents

Publication Publication Date Title
CN102848388A (en) Service robot locating and grabbing method based on multiple sensors
CN108838991B (en) Autonomous humanoid double-arm robot and tracking operation system thereof for moving target
CN109108942B (en) Mechanical arm motion control method and system based on visual real-time teaching and adaptive DMPS
CN114080583B (en) Visual teaching and repetitive movement manipulation system
CN109202885B (en) Material carrying and moving composite robot
CN102902271A (en) Binocular vision-based robot target identifying and gripping system and method
CN106774309A Simultaneous visual servoing and adaptive depth estimation method for a mobile robot
CN106354161A (en) Robot motion path planning method
Stückler et al. Mobile manipulation, tool use, and intuitive interaction for cognitive service robot cosero
Lee The study of mechanical arm and intelligent robot
Zhang et al. Multi‐target detection and grasping control for humanoid robot NAO
CN112207839A (en) Mobile household service robot and method
Kragic et al. A framework for visual servoing
Kumar et al. Design and development of an automated robotic pick & stow system for an e-commerce warehouse
CN109048911B (en) Robot vision control method based on rectangular features
CN116175582A (en) Intelligent mechanical arm control system and control method based on machine vision
Wang et al. A visual servoing system for interactive human-robot object transfer
CN115918377A (en) Control method and control device of automatic tree fruit picking machine and automatic tree fruit picking machine
CN114888768A (en) Mobile duplex robot cooperative grabbing system and method based on multi-sensor fusion
CN115194774A (en) Binocular vision-based control method for double-mechanical-arm gripping system
CN112757274B (en) Human-computer cooperative operation oriented dynamic fusion behavior safety algorithm and system
CN114954723A (en) Humanoid robot
Song et al. Object pose estimation for grasping based on robust center point detection
Wang et al. Object Grabbing of Robotic Arm Based on OpenMV Module Positioning
Gong et al. Mobile robot manipulation system design in given environments

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130102