CN101396829A - Robot control method and robot - Google Patents
- Publication number
- CN101396829A (application CN200710163075A / CNA2007101630751)
- Authority
- CN
- China
- Prior art keywords
- manipulator
- image
- mentioned
- feature quantity
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Manipulator (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a method for controlling a robot apparatus that, when controlling the manipulator, requires no large set of reference images, imposes a light computational burden, allows rapid control, and can control the position and posture of the manipulator relative to an object in three-dimensional space. From the difference between an average image (3a) of the object (101) obtained in advance and an image of the object (101) captured by a camera unit (6) while the manipulator (1) is in motion, an image processing unit (3b) computes a feature quantity by principal component analysis. The feature quantity is input to a motion command generating unit (3c), and the output of a neural network (3d) controlled by the motion command generating unit (3c) is used as a command signal, on the basis of which the tip (5) of the manipulator (1) moves to the desired position relative to the object (101).
Description
Technical field
The present invention relates to a control method and a controller for a robot apparatus that has a video camera at the tip of a manipulator, and that moves the manipulator so that a target feature quantity, computed and stored in advance from the target state, coincides with the feature quantity obtained from the images captured by the camera.
Background technology
Robot apparatuses equipped with manipulators have long been proposed, along with control methods for governing their manipulator motion. Among such control methods, various schemes have been proposed for moving a manipulator toward a target object so that the object can be gripped by the hand.
For example, Patent Document 1 (Japanese Unexamined Patent Application Publication No. 2003-231078) describes a robot apparatus that prepares in advance a large number of reference images of objects, together with the camera position and posture at which each image was taken. Image information from a camera mounted near the hand of the manipulator is then matched against the reference images to find the shape closest to the object and compute the positional relationship between the object and the manipulator's hand.
Patent Document 2 (Japanese Unexamined Patent Application Publication No. 2000-263482) describes a robot apparatus that prepares a reference image of the object in advance and moves the manipulator by fixed increments, while imaging with a hand-mounted camera, until the captured image approaches the reference image.
Further, Patent Document 3 (Japanese Unexamined Patent Application Publication No. H07-080790) describes a robot apparatus that, for an object to be gripped by the manipulator, obtains feature quantities such as vertex positions with an external camera and trains a neural network on them. In operation, the feature quantities obtained by the external camera are input to the neural network to adjust the angle of each joint of the manipulator. In addition, the network is further trained on local feature quantities, such as the center of gravity obtained from the object in advance, together with the joint angles of the manipulator at that time; during operation, the joint angles are then derived from the local feature quantities through the neural network.
However, the robot apparatus of Patent Document 1 must prepare a large number of reference images to achieve adequate control accuracy, and every time the manipulator moves it must search through all of the reference images; the computational burden is therefore heavy, and rapid control is impossible.
The robot apparatus of Patent Document 2 can only match the manipulator's position and posture within a two-dimensional plane, and therefore cannot control the position and posture of the manipulator relative to the object in three-dimensional space. Moreover, because imaging, measurement, and manipulator motion must be repeated, it cannot operate quickly. This apparatus also requires camera calibration work, such as lens alignment and coordinate-system registration, and errors in the manipulator link lengths or misalignment of the coordinate systems accumulate, so that operating accuracy risks deteriorating.
In the robot apparatus of Patent Document 3, the accuracy of the image processing directly determines the accuracy of the manipulator motion, so operating accuracy is hard to improve, and the image processing itself takes a long time. Furthermore, because the manipulator is driven from local feature quantities such as the center of gravity, reliability is also low.
Summary of the invention
The present invention was made in view of the above circumstances. Its object is to provide a control method and a robot apparatus that, when controlling the manipulator of a robot apparatus, require no large set of reference images, impose a light computational burden, allow rapid control, and can control the position and posture of the manipulator relative to an object in three-dimensional space.
To solve the above problems and achieve the above object, the control method for a robot apparatus according to the present invention has either of the following structures.
(structure 1)
The control method according to the present invention controls a robot apparatus having a manipulator with six or more degrees of freedom and an imaging unit at a position near the hand of the manipulator. It is characterized in that a feature quantity is computed, by principal component analysis, from the difference between an average image of the object obtained in advance and an image of the object captured by the imaging unit during the motion of the manipulator; the feature quantity is input to a neural network controller; the output of the neural network controlled by this controller is used as a command signal; and, on the basis of this command signal, the tip of the manipulator is moved to the desired posture relative to the object.
As the neural network controller, a fuzzy neural network (a neural network with a fuzzy-rule structure) may be used, for example a Takagi–Sugeno type fuzzy neural network (the structure described in the article "Fuzzy Identification of Systems and Its Applications to Modeling and Control" by Takagi and Sugeno, published in IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-15, no. 1, pp. 116-132, February 1985).
(structure 2)
The robot apparatus according to the present invention comprises: a manipulator with six or more degrees of freedom; an imaging unit at a position near the hand of the manipulator; a memory unit storing an average image of the object obtained in advance; an arithmetic unit that computes the difference between the average image and an image of the object captured by the imaging unit during the motion of the manipulator, and computes a feature quantity from this difference by principal component analysis; and a neural network controller that, given the feature quantity, controls a neural network so as to output a command signal. It is characterized in that the manipulator is controlled on the basis of the command signal so that its tip moves to a prescribed posture relative to the object.
The present invention has following effect.
Because the control method of the present invention has Structure 1, a feature quantity is computed by principal component analysis from the difference between the previously obtained average image of the object and the image of the object captured by the imaging unit during manipulator motion; the feature quantity is input to the neural network controller, and the output signal of the neural network controlled by this controller is used as a command signal to control the manipulator. Since no large set of reference images is needed, the computational burden is light and control is fast. Moreover, because the method derives a global feature quantity from the whole image, it is far more reliable than conventional control methods that use local feature quantities such as the position of a particular shape or its size; in addition, the manipulator can be controlled in three-dimensional space while the amount of computation required for control is reduced.
Because the robot apparatus of the present invention has Structure 2, the difference between the average image stored in advance in the memory unit and the image of the object captured by the imaging unit during manipulator motion is computed; a feature quantity is computed from it by principal component analysis; the feature quantity is input to the neural network controller, which controls the neural network to output a command signal; and the manipulator is controlled on the basis of this command signal. Since no large set of reference images is needed, the computational burden is light and control is fast. Because this apparatus derives a global feature quantity from the whole image, it is far more reliable than conventional robot apparatuses that use local feature quantities such as the position of a particular shape or its size; in addition, the manipulator can be controlled in three-dimensional space while the amount of computation required for control is reduced.
In short, the present invention provides a control method and a robot apparatus that, when controlling the manipulator of a robot apparatus, require no large set of reference images, impose a light computational burden, allow rapid control, and can control the position and posture of the manipulator relative to the object in three-dimensional space.
Description of drawings
Fig. 1 is a schematic side view showing the structure of a robot apparatus according to the present invention.
Fig. 2 is a front view of a number of images obtained, in a robot apparatus according to the present invention, while the manipulator is brought close to the target position and target posture.
Among the figure:
1 - manipulator; 1a - first link; 1b - second link; 1c - third link; 1d - fourth link; 1e - fifth link; 1f - sixth link; 2a - first joint; 2b - second joint; 2c - third joint; 2d - fourth joint; 2e - fifth joint; 2f - sixth joint; 3 - computer; 3a - memory; 3b - image processing unit; 3c - motion command generating unit; 5 - end effector; 6 - camera; 101 - object.
The specific embodiment
Preferred embodiments of the present invention are described below with reference to the drawings.
(Structure of the robot apparatus)
Fig. 1 is a schematic side view showing the structure of a robot apparatus according to the present invention.
As shown in Fig. 1, the robot apparatus according to the present invention has a manipulator 1 with six degrees of freedom; with this manipulator 1, an object 101 can be gripped, carried, or assembled onto another part.
The manipulator 1 is composed of a plurality of actuators (drive units) and links (rigid structural members) and has six degrees of freedom. That is, the links are connected by joints 2a, 2b, 2c, 2d, 2e, and 2f, each of which can pivot (bend) or rotate, and are driven relative to one another by their respective actuators. Each actuator is controlled by a computer 3 serving as the control unit.
In the manipulator 1, the base end of the first link 1a (the proximal link) is connected to a base 4 through the first joint 2a. The first joint 2a rotates about the vertical axis (z-axis). The base end of the second link 1b is connected to the tip of the first link 1a through the second joint 2b, which pivots the second link 1b about a horizontal axis. The base end of the third link 1c is connected to the tip of the second link 1b through the third joint 2c, which pivots the third link 1c about a horizontal axis.
The base end of the fourth link 1d is connected to the tip of the third link 1c through the fourth joint 2d, which rotates the fourth link 1d about the axis of the third link 1c. The base end of the fifth link 1e is connected to the tip of the fourth link 1d through the fifth joint 2e, which pivots the fifth link 1e about an axis perpendicular to the axis of the fourth link 1d. The base end of the sixth link 1f is connected to the tip of the fifth link 1e through the sixth joint 2f, which rotates the sixth link 1f about the axis of the fifth link 1e.
In this way, the manipulator 1 alternates pivoting and rotating joints, six in total, and thereby secures six degrees of freedom.
At the tip of the sixth link 1f (hereinafter called "the hand"), an end effector 5 for gripping or processing the object 101 is provided. The end effector 5 is controlled by the computer 3 or by another control device not shown. Near the hand, a camera 6 is mounted; it constitutes the imaging unit and consists of an imaging lens and a solid-state image sensor such as a CCD or CMOS device.
The computer 3 has a memory 3a serving as the memory unit, which stores the average image of the object, obtained in advance, together with the eigenvector matrix. The stored average image and eigenvector matrix are sent to the image processing unit 3b, which serves as the arithmetic unit. During the motion of the manipulator 1, the image processing unit 3b computes the difference between the image of the object 101 captured by the camera 6 and the average image, and computes a feature quantity from this difference by principal component analysis.
The feature quantity is then sent to the motion command generating unit (neural network controller) 3c. The motion command generating unit 3c controls the trained neural network 3d and outputs a command signal corresponding to the input feature quantity. This command signal is sent to the manipulator 1.
On the basis of the command signal, the manipulator 1 is controlled so that the end effector 5 moves to the prescribed posture relative to the object 101. That is, the hand position of the manipulator 1 is controlled through the chain of links in order, from the position of the first link 1a through the second link 1b and the third link 1c toward the links on the tip side, and the end effector 5 at the hand can then carry the gripped workpiece to the prescribed position.
(Control method of the robot apparatus)
In this robot apparatus, the manipulator 1 can be controlled by carrying out the control method according to the present invention described below. That is, the manipulator 1 is moved so that the image captured by the camera 6 comes to match the average image learned in advance.
The control of the manipulator 1 in this robot apparatus consists of two broad steps: "pre-processing" and "online processing".
(1) Pre-processing
In the pre-processing step, the manipulator 1 is placed in states close to the target position and target posture, its position and posture are varied slightly each time, and several images of the object 101 are taken with the camera 6; the average of these images is then computed. The difference between each captured image and the average image is multiplied by the matrix of arranged eigenvectors to compute a feature quantity. The neural network 3d is then trained with these feature quantities as inputs and with the position and posture of the end effector 5 at the time each image was captured as outputs.
(2) Online processing
In online processing, the image processing unit 3b multiplies the difference between the image captured by the camera 6 and the average image by the matrix of arranged eigenvectors to obtain a feature quantity. The feature quantity is input to the neural network 3d to obtain the joint angles, and hence the hand position and posture, required for the motion of the manipulator 1.
In the present invention, both pre-processing and online processing use "the feature quantity obtained by multiplying the difference between the captured image and the average image by the matrix of arranged eigenvectors". The use of this feature quantity is a distinguishing feature of the invention; it and the associated processing are described in detail below. It is a global feature quantity obtained from the whole image; compared with prior art that uses local feature quantities such as the position or size of a particular shape in the object 101, it is more reliable, supports control in three-dimensional space, and reduces the amount of computation in online processing.
(Computing the feature quantity)
The feature quantity in the present invention is computed by what is called principal component analysis (PCA). First, P grayscale images of identical resolution are prepared, say q1 pixels vertically by q2 pixels horizontally. The brightness values of the p-th image are stacked into a column vector Ip, and the brightness of the pixel at X-coordinate x, Y-coordinate y of that image is written Ipxy. That is, Ip lines up the pixel brightnesses Ip11, Ip12, Ip13, ..., Ip1q1, Ip21, Ip22, .... The mean brightness Iaxy of each pixel is given by
Iaxy = (I1xy + I2xy + ... + IPxy) / P
Carrying out this calculation for every pixel gives the average image Ia. Although the original image is a q1 × q2 array of pixels, Ia, like Ip, is handled as a column vector. When the images are color rather than grayscale, the average image is obtained per color channel; for an RGB image, the mean brightness is computed separately for the red, green, and blue channels.
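As an illustrative sketch (assuming numpy and the images supplied as arrays; the function name is ours, not the patent's), the average image is simply the per-pixel mean, with color images averaged channel by channel:

```python
import numpy as np

def average_image(images):
    """Per-pixel mean of P equally sized images.

    images: array-like of shape (P, q1, q2) for grayscale images,
    or (P, q1, q2, 3) for RGB; a trailing channel axis, if present,
    is averaged channel by channel automatically.
    """
    stack = np.asarray(images, dtype=np.float64)
    return stack.mean(axis=0)  # Iaxy = (I1xy + ... + IPxy) / P

# Two 2x2 grayscale images; each output pixel is the mean of the inputs.
I1 = np.array([[0.0, 2.0], [4.0, 6.0]])
I2 = np.array([[2.0, 4.0], [6.0, 8.0]])
Ia = average_image([I1, I2])  # [[1. 3.] [5. 7.]]
```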
The difference ΔIp between the average image Ia and the p-th image Ip is defined as
ΔIp = Ip − Ia
Like Ip, ΔIp is a column vector of size q1 × q2. ΔIp is computed for each of the P images, and the matrix A is formed by arranging these vectors side by side:
A = (ΔI1, ΔI2, ..., ΔIP)
The following correlation matrix C is then formed:
C = A′A / P
Here A′ denotes the transpose of A. Since C is a real symmetric matrix of size P × P, it has P eigenvalues. Of these, the M largest are written λi (i = 1 to M, where M is a prescribed value), the corresponding eigenvectors are written ei (i = 1 to M), and the matrix U is defined by arranging the eigenvectors ei side by side:
U = (e1, e2, ..., eM)
The global feature quantity y is then defined by
y = U′(I − Ia)
Here I is an arbitrary image of the same size and shape as Ia. The feature quantity y, obtained by multiplying the brightness difference between an arbitrary image and the average image by the transform matrix U′, is a vector of size M. Since the image carries q1 × q2 values, the amount of information is compressed drastically: with q1 = 64 and q2 = 48, for example, the 3072-pixel image is reduced to M = 6 values. This y may be called the global feature quantity.
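The derivation above can be sketched in numpy as follows. One practical detail the text glosses over: the eigenvectors of the small P × P matrix C live in image-index space, so before being used as the columns of U they are mapped back to image space through A (the standard "snapshot" eigenface trick). All names here are illustrative, not from the patent:

```python
import numpy as np

def fit_global_features(images, M):
    """Learn (Ia, U) from P training images of shape (q1, q2).

    Returns the average image Ia as a flat vector and the transform
    U of shape (q1*q2, M), one eigen-image per column.
    """
    P = len(images)
    X = np.asarray(images, dtype=np.float64).reshape(P, -1).T  # (q1*q2, P)
    Ia = X.mean(axis=1)
    A = X - Ia[:, None]               # columns are the differences dIp
    C = A.T @ A / P                   # P x P real symmetric matrix
    lam, E = np.linalg.eigh(C)        # eigenvalues in ascending order
    top = np.argsort(lam)[::-1][:M]   # indices of the M largest
    U = A @ E[:, top]                 # snapshot trick: back to image space
    U /= np.linalg.norm(U, axis=0)    # unit-length eigen-images
    return Ia, U

def global_feature(image, Ia, U):
    """y = U'(I - Ia): the M-dimensional global feature quantity."""
    return U.T @ (np.asarray(image, dtype=np.float64).ravel() - Ia)
```

With q1 = 64, q2 = 48, and M = 6 as in the text, each 3072-pixel image is summarized by a 6-vector.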
(Pre-processing)
As noted above, pre-processing is the step that trains the neural network 3d. First, the manipulator 1 is placed in states close to the target position and target posture, P images Ip (p = 1 to P) are obtained, and the average image is computed from them. At the same time, the relative hand position and posture xp of the manipulator 1 at the moment each image was captured is recorded; each xp is a vector of size 6 covering the six degrees of freedom.
Fig. 2 is a front view showing a number of images obtained near the target position and target posture.
For example, as shown in Fig. 2, suppose that nine images I1, I2, ..., I9 are obtained with the manipulator 1 in states close to the target position and target posture. In each image, the appearance (global feature quantity) of the observed object 101 relative to the target position is different. The neural network 3d is made to learn the relation between these differences in the appearance of the object 101 and the differences in the position and posture of the manipulator 1.
That is, as described above, by computing the global feature quantity yp (p = 1 to P) for each image, the position and posture of the manipulator 1 at the time each global feature quantity was obtained become available. The global feature quantities yp are input to the neural network 3d, and the network is trained to output the corresponding relative hand position and posture xp.
Because a neural network is used, it is unnecessary to store a global feature quantity for every possible position and posture of the manipulator 1; for images that were never actually captured, the trained network interpolates. Known techniques can be used for the structure of the neural network, the training method, and the handling of the input and output data used for training.
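The training step can be sketched as follows. The patent does not fix a network architecture (it even allows a Takagi–Sugeno fuzzy network), so a one-hidden-layer tanh network trained by plain gradient descent stands in here purely for illustration; all names and hyperparameters are ours:

```python
import numpy as np

def train_pose_net(Y, X, hidden=16, lr=0.05, epochs=3000, seed=0):
    """Fit pose = f(feature) with one tanh hidden layer.

    Y: (N, M) global feature quantities yp (inputs).
    X: (N, 6) relative hand positions/postures xp (targets).
    Returns predict(y), mapping an M-vector to a 6-vector.
    """
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (Y.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, X.shape[1]))
    b2 = np.zeros(X.shape[1])
    for _ in range(epochs):                 # full-batch gradient descent
        H = np.tanh(Y @ W1 + b1)
        err = H @ W2 + b2 - X               # d(0.5*||err||^2)/d(output)
        dH = (err @ W2.T) * (1.0 - H ** 2)  # backpropagate through tanh
        W2 -= lr * H.T @ err / len(Y)
        b2 -= lr * err.mean(axis=0)
        W1 -= lr * Y.T @ dH / len(Y)
        b1 -= lr * dH.mean(axis=0)
    return lambda y: np.tanh(y @ W1 + b1) @ W2 + b2
```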
(Online processing)
Online processing controls the motion of the manipulator 1 in real time from the images captured by the camera 6 while the manipulator 1 moves. First, an image is captured by the camera during the motion of the manipulator 1. As described above, the global feature quantity y is computed from the difference between the captured image and the average image. Feeding y to the trained neural network 3d yields the relative position and posture x between the manipulator 1 and its target position and posture.
That is, through the neural network 3d trained as described above, the position and posture of the manipulator 1 can be inferred from the differences in the appearance (global feature quantity) of the observed object 101.
The manipulator 1 is then commanded to move from its current position and posture by exactly the relative position and posture x, thereby realizing the target position and posture.
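Putting the pieces together, one online cycle reduces to a few lines. Here `predict_pose` is the trained network and `send_relative_move` is a hypothetical stand-in for the robot driver; neither name comes from the patent:

```python
import numpy as np

def servo_step(image, Ia, U, predict_pose, send_relative_move):
    """One online control cycle: captured image -> global feature
    quantity -> relative hand position/posture -> motion command."""
    y = U.T @ (np.asarray(image, dtype=np.float64).ravel() - Ia)
    x = predict_pose(y)      # 6-DOF offset from current pose to target
    send_relative_move(x)    # command the manipulator to move by x
    return x
```

Looping `servo_step` over camera frames gives the real-time behavior described above; note that neither camera calibration nor a reference-image search appears anywhere in the cycle.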
(Features of the present invention)
As described above, the control method of the present invention differs from merely storing the absolute target position and posture. Because the invention infers the relative position and posture through which the manipulator 1 should move from the global feature quantity, which is in turn obtained from the difference between the image captured by the camera 6 and the average image, the manipulator 1 can be driven to the target position and posture corresponding to an arbitrary position on the object 101 regardless of its initial position and posture.

Conventionally, extracting feature quantities from a captured image has meant extracting circular or polygonal (line-segment) contours, performing shape recognition on them, and from that deriving area, center-of-gravity position, center, size, and the like as the feature quantities. In that case, even a small disturbance or a change in lighting conditions affects the recognition result, and recognition errors have a large effect on the motion of the manipulator 1. Such image processing also takes a long time.

By contrast, because the present invention directly processes the numerical information contained in the image as electronic data, it is robust against disturbances and highly reliable. Moreover, as described above, care has been taken both in keeping the number of feature-quantity components M adjustable and in computing the feature quantity quickly, so computation time can be kept short.
Claims (2)
1. A control method for a robot apparatus having a manipulator with six or more degrees of freedom and an imaging unit at a position near the hand of the manipulator, characterized in that
a feature quantity is computed, by principal component analysis, from the difference between an average image of an object obtained in advance and an image of said object captured by said imaging unit during the motion of said manipulator; the feature quantity is input to a neural network controller; the output of the neural network controlled by said neural network controller is used as a command signal; and, on the basis of this command signal, the tip of said manipulator is moved to a desired posture relative to said object.
2. A robot apparatus comprising:
a manipulator with six or more degrees of freedom;
an imaging unit at a position near the hand of said manipulator;
a memory unit storing an average image of an object obtained in advance;
an arithmetic unit that computes the difference between said average image and an image of said object captured by said imaging unit during the motion of said manipulator, and computes a feature quantity from this difference by principal component analysis; and
a neural network controller that, given said feature quantity, controls a neural network so as to output a command signal, characterized in that
said manipulator is controlled on the basis of said command signal so that the tip of said manipulator moves to a prescribed posture relative to said object.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNA2007101630751A CN101396829A (en) | 2007-09-29 | 2007-09-29 | Robot control method and robot |
JP2008223796A JP5200772B2 (en) | 2007-09-29 | 2008-09-01 | Robot apparatus control method and robot apparatus |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNA2007101630751A CN101396829A (en) | 2007-09-29 | 2007-09-29 | Robot control method and robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN101396829A true CN101396829A (en) | 2009-04-01 |
Family
ID=40515794
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNA2007101630751A Pending CN101396829A (en) | 2007-09-29 | 2007-09-29 | Robot control method and robot |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP5200772B2 (en) |
CN (1) | CN101396829A (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102939188A (en) * | 2010-06-08 | 2013-02-20 | Keba股份公司 | Method for programming or setting movements or sequences of industrial robot |
CN103568014A (en) * | 2012-07-26 | 2014-02-12 | 发那科株式会社 | Apparatus and method of taking out bulk stored articles by manipulator |
CN103568024A (en) * | 2012-07-31 | 2014-02-12 | 发那科株式会社 | Apparatus for taking out bulk stored articles by robot |
CN104108103A (en) * | 2013-04-18 | 2014-10-22 | 发那科株式会社 | Robot System Having A Robot For Conveying A Workpiece |
CN104797386A (en) * | 2012-11-30 | 2015-07-22 | 株式会社安川电机 | Robotic system |
CN106313085A (en) * | 2015-06-30 | 2017-01-11 | 发那科株式会社 | Robot system using a vision sensor |
CN106476011A (en) * | 2015-08-31 | 2017-03-08 | 发那科株式会社 | Employ the robot system of vision sensor |
CN110268358A (en) * | 2017-02-09 | 2019-09-20 | 三菱电机株式会社 | Position control and position control method |
CN110315505A (en) * | 2018-03-29 | 2019-10-11 | 发那科株式会社 | Machine learning device and method, robot controller, robotic vision system |
CN110769984A (en) * | 2017-06-21 | 2020-02-07 | 川崎重工业株式会社 | Robot system and control method for robot system |
CN111688526A (en) * | 2020-06-18 | 2020-09-22 | 福建百城新能源科技有限公司 | User side new energy automobile energy storage charging station |
CN111757796A (en) * | 2018-02-23 | 2020-10-09 | 仓敷纺绩株式会社 | Method for moving tip of linear object, control device, and three-dimensional camera |
CN113631325A (en) * | 2019-03-29 | 2021-11-09 | 株式会社Ihi | Remote operation device |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105082161B (en) * | 2015-09-09 | 2017-09-29 | 新疆医科大学第一附属医院 | Binocular stereo camera Robot Visual Servoing control device and its application method |
CN105469423B (en) * | 2015-11-16 | 2018-06-22 | 北京师范大学 | A kind of online method for tracking target based on continuous attraction sub-neural network |
JP6546618B2 (en) | 2017-05-31 | 2019-07-17 | 株式会社Preferred Networks | Learning apparatus, learning method, learning model, detection apparatus and gripping system |
JP6676030B2 (en) * | 2017-11-20 | 2020-04-08 | 株式会社安川電機 | Grasping system, learning device, gripping method, and model manufacturing method |
CN108705536A (en) * | 2018-06-05 | 2018-10-26 | 雅客智慧(北京)科技有限公司 | A kind of the dentistry robot path planning system and method for view-based access control model navigation |
CN109227540A (en) * | 2018-09-28 | 2019-01-18 | 深圳蓝胖子机器人有限公司 | Robot control method, robot, and computer-readable storage medium |
JP7207704B2 (en) * | 2018-11-14 | 2023-01-18 | 旭鉄工株式会社 | Learning system and robot positioning system |
JP6978454B2 (en) | 2019-02-22 | 2021-12-08 | ファナック株式会社 | Object detector, control device and computer program for object detection |
JP7051751B2 (en) * | 2019-06-19 | 2022-04-11 | 株式会社Preferred Networks | Learning device, learning method, learning model, detection device and gripping system |
JP7349423B2 (en) * | 2019-06-19 | 2023-09-22 | 株式会社Preferred Networks | Learning device, learning method, learning model, detection device and grasping system |
JP7458741B2 (en) * | 2019-10-21 | 2024-04-01 | キヤノン株式会社 | Robot control device and its control method and program |
JP7396125B2 (en) | 2020-03-03 | 2023-12-12 | オムロン株式会社 | Model generation device, estimation device, model generation method, and model generation program |
WO2022086157A1 (en) * | 2020-10-20 | 2022-04-28 | 삼성전자주식회사 | Electronic apparatus and control method thereof |
CN114677429B (en) * | 2022-05-27 | 2022-08-30 | 深圳广成创新技术有限公司 | Positioning method and device of manipulator, computer equipment and storage medium |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0588721A (en) * | 1991-09-30 | 1993-04-09 | Fujitsu Ltd | Controller for articulated robot |
JPH0780790A (en) * | 1993-09-16 | 1995-03-28 | Fujitsu Ltd | Three-dimensional object grasping system |
JP3560670B2 (en) * | 1995-02-06 | 2004-09-02 | 富士通株式会社 | Adaptive recognition system |
JP2000263482A (en) * | 1999-03-17 | 2000-09-26 | Denso Corp | Attitude searching method and attitude searching device of work, and work grasping method and work grasping device by robot |
JP2003231078A (en) * | 2002-02-14 | 2003-08-19 | Denso Wave Inc | Position control method for robot arm and robot device |
JP4267005B2 (en) * | 2006-07-03 | 2009-05-27 | ファナック株式会社 | Measuring apparatus and calibration method |
- 2007
  - 2007-09-29 CN CNA2007101630751A patent/CN101396829A/en active Pending
- 2008
  - 2008-09-01 JP JP2008223796A patent/JP5200772B2/en active Active
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102939188A (en) * | 2010-06-08 | 2013-02-20 | Keba股份公司 | Method for programming or setting movements or sequences of industrial robot |
CN102939188B (en) * | 2010-06-08 | 2016-05-11 | Keba股份公司 | Method for programming or setting movements or sequences of an industrial robot |
CN103568014B (en) * | 2012-07-26 | 2015-08-12 | 发那科株式会社 | Apparatus and method of taking out bulk stored articles by manipulator |
CN103568014A (en) * | 2012-07-26 | 2014-02-12 | 发那科株式会社 | Apparatus and method of taking out bulk stored articles by manipulator |
US9079310B2 (en) | 2012-07-26 | 2015-07-14 | Fanuc Corporation | Apparatus and method of taking out bulk stored articles by robot |
CN103568024A (en) * | 2012-07-31 | 2014-02-12 | 发那科株式会社 | Apparatus for taking out bulk stored articles by robot |
CN103568024B (en) * | 2012-07-31 | 2015-06-03 | 发那科株式会社 | Apparatus for taking out bulk stored articles by robot |
CN104797386A (en) * | 2012-11-30 | 2015-07-22 | 株式会社安川电机 | Robotic system |
CN104108103B (en) * | 2013-04-18 | 2015-07-22 | 发那科株式会社 | Robot System Having A Robot For Conveying A Workpiece |
CN104108103A (en) * | 2013-04-18 | 2014-10-22 | 发那科株式会社 | Robot System Having A Robot For Conveying A Workpiece |
CN106313085B (en) * | 2015-06-30 | 2018-06-26 | 发那科株式会社 | Robot system using a vision sensor |
CN106313085A (en) * | 2015-06-30 | 2017-01-11 | 发那科株式会社 | Robot system using a vision sensor |
US10016898B2 (en) | 2015-06-30 | 2018-07-10 | Fanuc Corporation | Robot system using a vision sensor |
CN106476011B (en) * | 2015-08-31 | 2018-01-16 | 发那科株式会社 | Robot system using a vision sensor |
CN106476011A (en) * | 2015-08-31 | 2017-03-08 | 发那科株式会社 | Robot system using a vision sensor |
CN110268358A (en) * | 2017-02-09 | 2019-09-20 | 三菱电机株式会社 | Position control device and position control method |
CN110268358B (en) * | 2017-02-09 | 2022-11-04 | 三菱电机株式会社 | Position control device and position control method |
CN110769984A (en) * | 2017-06-21 | 2020-02-07 | 川崎重工业株式会社 | Robot system and control method for robot system |
CN110769984B (en) * | 2017-06-21 | 2022-09-02 | 川崎重工业株式会社 | Robot system and control method for robot system |
CN111757796A (en) * | 2018-02-23 | 2020-10-09 | 仓敷纺绩株式会社 | Method for moving tip of linear object, control device, and three-dimensional camera |
CN111757796B (en) * | 2018-02-23 | 2023-09-29 | 仓敷纺绩株式会社 | Method for moving tip of thread, control device, and three-dimensional camera |
US11964397B2 (en) | 2018-02-23 | 2024-04-23 | Kurashiki Boseki Kabushiki Kaisha | Method for moving tip of line-like object, controller, and three-dimensional camera |
CN110315505A (en) * | 2018-03-29 | 2019-10-11 | 发那科株式会社 | Machine learning device and method, robot controller, robotic vision system |
CN113631325A (en) * | 2019-03-29 | 2021-11-09 | 株式会社Ihi | Remote operation device |
CN111688526A (en) * | 2020-06-18 | 2020-09-22 | 福建百城新能源科技有限公司 | User-side energy storage charging station for new energy vehicles |
Also Published As
Publication number | Publication date |
---|---|
JP2009083095A (en) | 2009-04-23 |
JP5200772B2 (en) | 2013-06-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101396829A (en) | Robot control method and robot | |
CN109344882B (en) | Convolutional neural network-based robot control target pose identification method | |
US20190143517A1 (en) | Systems and methods for collision-free trajectory planning in human-robot interaction through hand movement prediction from vision | |
CN108081268A (en) | Robot control system, robot, program and robot control method | |
CN113379849B (en) | Robot autonomous recognition intelligent grabbing method and system based on depth camera | |
CN101733755A (en) | Robot system, robot control device and method for controlling robot | |
CN109940626B (en) | Control method of eyebrow drawing robot system based on robot vision | |
CN109079787B (en) | Non-rigid robot automatic hand-eye calibration method based on neural network | |
CN105096341B (en) | Mobile robot position and orientation estimation method based on trifocal tensor and key frame strategy | |
De Luca et al. | Image-based visual servoing schemes for nonholonomic mobile manipulators | |
Akinola et al. | Learning precise 3d manipulation from multiple uncalibrated cameras | |
CN111445523A (en) | Fruit pose calculation method and device, computer equipment and storage medium | |
Zhou et al. | 3d pose estimation of robot arm with rgb images based on deep learning | |
CN116460843A (en) | Multi-robot collaborative grabbing method and system based on meta heuristic algorithm | |
Garcia et al. | Guidance of robot arms using depth data from RGB-D camera | |
Nadi et al. | Visual servoing control of robot manipulator with Jacobian matrix estimation | |
Sebastián et al. | Uncalibrated visual servoing using the fundamental matrix | |
Fried et al. | Uncalibrated image-based visual servoing approach for translational trajectory tracking with an uncertain robot manipulator | |
CN110928311A (en) | Indoor mobile robot navigation method based on linear features under panoramic camera | |
CN113910218A (en) | Robot calibration method and device based on kinematics and deep neural network fusion | |
Zhang et al. | Vision-based six-dimensional peg-in-hole for practical connector insertion | |
Tadeusz | Application of vision information to planning trajectories of Adept Six-300 robot | |
CN112171664B (en) | Production line robot track compensation method, device and system based on visual identification | |
Kumar et al. | Visual motor control of a 7 DOF robot manipulator using function decomposition and sub-clustering in configuration space | |
Xu et al. | A fast and straightforward hand-eye calibration method using stereo camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Open date: 2009-04-01