CN109676583B - Deep learning visual acquisition method based on target posture, learning system and storage medium - Google Patents
Deep learning visual acquisition method based on target posture, learning system and storage medium
- Publication number
- CN109676583B (application CN201811466680.0A)
- Authority
- CN
- China
- Prior art keywords
- teaching
- image information
- function
- target
- robot
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/0081—Programme-controlled manipulators with leader teach-in means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Manipulator (AREA)
- Numerical Control (AREA)
Abstract
The invention relates to the technical field of robots and discloses a deep learning visual acquisition method based on target posture, a learning system, and a storage medium, which are used for controlling a robot to learn teaching actions. The deep learning visual acquisition method based on target posture comprises the following steps: collecting teaching image information of the teaching action process from multiple directions; analyzing the teaching image information, selecting a plurality of reference points of the teaching action, and fitting the reference points into at least two functions through the relation between movement and time, namely an attitude function and a displacement function; and generating a control program so that the robot can reproduce the teaching action according to the attitude function and the displacement function. Because the teaching action is simplified into these functions, the point-taking calculation amount is reduced; the drive program is generated by collecting the teaching action of the target, so the need for manual participation is reduced; and the method has the advantages of a high degree of intelligence, high simulation fidelity, and the like.
Description
Technical Field
The invention relates to the technical field of robots, in particular to a visual acquisition method, a learning system and a storage medium based on deep learning of target postures.
Background
A robot is a high-tech product in which a program or a set of guiding principles is preset; after receiving a signal or an instruction, the robot can make judgments and take actions to a certain extent, such as moving, grasping, or swinging a limb. The task of a robot is mainly to assist or even replace human work in some situations. The judgments of actions and the information involved in an actual working scene are very complicated and are difficult to record in the robot in advance in the form of a program. Therefore, how a robot can learn by itself from existing knowledge to improve its adaptive capacity and level of intelligence, namely robot learning, has become a very popular research focus in the robot industry.
In the prior art, the process by which a robot imitates a human teaching action mainly includes: 1. digitally collecting the coordinates of a number of key points of the teaching action; 2. inversely solving the collected points into a robot control program. Both steps require a great deal of manual involvement. In step 1 in particular, not only must the key points be selected, but the teaching action must also be simplified, for example into "move from point A to point B, then rise or press down at point B". The more the teaching action is simplified, the lower the robot's reproduction fidelity; the less it is simplified, the larger the point-taking calculation amount becomes. As a result, it is difficult for the robot to imitate a human teaching action with high fidelity.
Disclosure of Invention
The invention aims to provide a deep learning visual acquisition method based on target posture, a learning system, and a storage medium, so as to solve the problems in the prior art of low action reproduction fidelity, a large point-taking calculation amount, heavy manual involvement, and a low degree of intelligence when a robot imitates human teaching actions.
The invention provides a deep learning visual acquisition method based on target posture, used for imitating the teaching action of a target, comprising the following steps: collecting teaching image information of the teaching action process from multiple directions; analyzing the teaching image information, selecting a plurality of reference points of the teaching action, and fitting the reference points into at least two functions through the relation between movement and time, namely an attitude function describing the change of the target's attitude over time and a displacement function describing the change of the target's position over time; and generating a control program so that the robot can reproduce the teaching action according to the attitude function and the displacement function.
The present invention also provides a learning system for controlling a robot to learn a teaching action, the robot having an execution end, comprising: an image acquisition unit which captures and acquires teaching image information of the teaching operation process from a plurality of directions; a data analysis unit which receives the teaching image information and analyzes the teaching image information to obtain a motion function of the teaching operation; and a drive control unit which receives the motion function, generates a drive program, and controls the execution end to perform a simulation operation.
The invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the aforementioned deep learning visual acquisition method based on target posture.
Compared with the prior art, the invention simplifies the teaching action of the target into at least two functions for description: a displacement function describing the displacement over time, and an attitude function describing the attitude over time. After the action is simplified in this way, the point-taking calculation amount is reduced; the drive program is generated by collecting the teaching action of the target, so the need for manual participation is reduced; and the method has the advantages of a high degree of intelligence, high simulation fidelity, and the like.
Drawings
Fig. 1 is a schematic flowchart of a visual acquisition method for deep learning based on a target pose according to an embodiment of the present invention;
fig. 2 is a schematic diagram of calculating a swing angle of a pose function in a target pose-based deep learning visual acquisition method according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the description of the present invention, it is to be understood that the terms "length", "width", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on the orientations or positional relationships illustrated in the drawings, and are used merely for convenience in describing the present invention and for simplicity in description, and do not indicate or imply that the devices or elements referred to must have a particular orientation, be constructed in a particular orientation, and be operated, and thus, are not to be construed as limiting the present invention.
In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the present invention, unless otherwise expressly stated or limited, the terms "mounted", "connected", "secured", and the like are to be construed broadly; for example, a connection may be a fixed connection, a detachable connection, or an integral formation; it may be a mechanical connection or an electrical connection; it may be a direct connection or an indirect connection through an intermediate medium, and it may be internal communication between two elements or an interaction relationship between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
The implementation of this embodiment will be described in detail below with reference to the specific drawings, and for convenience of description, a spatial coordinate system (x, y, z) is established, wherein the x axis and the y axis are located on a horizontal plane and are perpendicular to each other, and the z axis is located in a vertical direction.
The embodiment provides a target posture-based deep learning visual acquisition method, which is used for learning teaching actions of a target and comprises the following steps:
101. and collecting teaching image information of the teaching action process from multiple directions. The target in this embodiment may be the whole of a human, an animal, or another mechanical device, or a specific part thereof, such as a human hand, a bird's wing, or a human fingertip. Specifically, this embodiment is described using the example of a calligraphy operation in which a person holds a writing brush to write a certain Chinese character: the writing brush serves as the target, and the writing operation itself serves as the teaching action. While the writer holds the writing brush and writes the character, teaching image information of the writing brush is captured from multiple directions; because it has a time axis, the teaching image information is a set of multi-segment video files. It should be understood that in this embodiment, and in other embodiments, the hand could instead be selected as the target.
102. Analyzing the teaching image information, selecting a plurality of reference points of the teaching action, and fitting the reference points into at least two functions through the relation between movement and time: an attitude function describing the change of the target's attitude over time, and a displacement function describing the change of the target's position over time. In this step, the image information from the viewing angles obtained in step 101 is analyzed by pattern recognition. In this embodiment, as shown in fig. 1, three points A, B, and C on the writing brush are selected as reference points, where point B is the rotation center point of the writing brush during writing. A certain interval is set as the shooting unit; for example, with t = 0.5 s, the position change of each reference point every 0.5 s is analyzed and fitted into at least two functions: an attitude function and a displacement function. During the action, the change of the target's own attitude, for example a rotation by a certain angle in the vertical direction, is described by the attitude function. In the displacement function, the target is treated as a particle, and the function describes the change in its position, such as moving from point A to point B and then rising to point C. In other embodiments, the number of functions may be increased, for example with an action function describing that a signal is output at some specific time point t to perform a specified operation, such as welding or pressing. It should be understood that if the teaching action of the target involves no attitude change at all, only displacement change, the attitude function is fitted as a constant function with value 0; conversely, if there is only attitude change and no displacement change, the displacement function is fitted as a constant function with value 0. Having at least the two functions, the attitude function and the displacement function, obviously covers both of these cases. The position information captured at each shooting moment is recorded together with its corresponding time point.
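For illustration, the fitting described in this step can be sketched roughly as follows. This is a minimal sketch and not the patent's actual implementation: the reference-point coordinates, the 0.5 s sampling interval, and the cubic polynomial fit are all assumptions made for the example.

```python
import numpy as np

# Illustrative positions of rotation-center point B sampled every 0.5 s
# (coordinates assumed to be already extracted from the multi-view teaching images).
t = np.arange(0.0, 5.0, 0.5)                              # time axis, one sample per shooting unit
b_xyz = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])  # stand-in trajectory data, not real measurements

# Fit each coordinate against time with a low-order polynomial to obtain a
# displacement function x(t), y(t), z(t); the polynomial order is an arbitrary choice.
coeffs = [np.polyfit(t, b_xyz[:, k], deg=3) for k in range(3)]

def displacement(time):
    """Evaluate the fitted displacement function at a given time."""
    return np.array([np.polyval(c, time) for c in coeffs])

print(displacement(1.25))  # position of the reference point at t = 1.25 s
```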
As shown in fig. 1 and fig. 2, in this embodiment, point B is the rotation center point; that is, if the displacement of the writing brush is ignored, point B can be regarded as stationary during writing, so the change of point B over time can be used as the displacement function. From the change in the position of point A within time t and the relative distance between points A and B (the arm length l1 in the figure), the swing angle, i.e., the attitude change, can be calculated. The specific calculation can be done in various ways. For example, let the distance between points A and B be l1 and the distance between points B and C be l2, and simplify the writing brush captured at the two instants separated by the interval t into two straight lines, t1 and t2, whose B points coincide. The distance X1 between the A points on t1 and t2 is then measured; from X1 and l1, the angle α can be calculated by the cosine formula, and the change of the angle α with respect to time t is the attitude function at that moment. In the same way, from the distance X2 between the C points on t1 and t2 and from l2, the angle β can be calculated by the cosine formula; in theory β should be equal to α, so the two can be used as mutual verification data in the calculation.
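As a worked illustration of the cosine-formula step above, the following sketch computes α and β from the chord distances X1, X2 and the arm lengths l1, l2; the numeric values are made up for the example.

```python
import math

def swing_angle(chord, arm_length):
    """Swing angle of a reference point about the rotation center B, from the
    chord X between its positions at t1 and t2 and its arm length l.
    Law of cosines for the isosceles triangle (l, l, X):
        X^2 = 2*l^2*(1 - cos(alpha))  =>  alpha = arccos(1 - X^2 / (2*l^2))
    """
    return math.acos(1.0 - chord ** 2 / (2.0 * arm_length ** 2))

l1, l2 = 0.12, 0.08      # assumed arm lengths A-B and B-C, in metres
X1, X2 = 0.025, 0.0167   # assumed chord distances between successive positions of A and C

alpha = swing_angle(X1, l1)
beta = swing_angle(X2, l2)  # should roughly agree with alpha; usable as a cross-check
print(math.degrees(alpha), math.degrees(beta))
```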
The attitude function and the displacement function both contain the same variable, time t; through the increment per unit time, the speed and acceleration of the robot at a specific position/time can be obtained as reference data for controlling the robot.
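A minimal sketch of deriving speed and acceleration from the sampled displacement by finite differences; the sample values and interval are illustrative, and a real implementation would apply this per coordinate axis.

```python
import numpy as np

# Speed and acceleration recovered from sampled displacement by finite differences.
dt = 0.5                                               # shooting interval in seconds (assumed)
x = np.array([0.00, 0.02, 0.05, 0.09, 0.14, 0.20])     # illustrative x positions of a reference point

speed = np.gradient(x, dt)       # first derivative: velocity along x
accel = np.gradient(speed, dt)   # second derivative: acceleration along x
print(speed, accel)
```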
During the writing process, the displacement function records the movement of the brush in the three coordinate directions in space as time t changes. The coordinate changes on the x and y axes can be used as data describing the rough stroke direction, character size, writing range, and similar aspects of the writing. The coordinate change on the z axis can be used, approximately, as a function describing stroke thickness: taking the paper surface as z = 0, the closer the z coordinate is to 0, the greater the pressing force on the pen tip, the thicker the stroke, and the larger the corresponding writing force; the larger the z coordinate, the smaller the pressing force on the pen tip and the thinner the stroke. The parts of the displacement function whose z coordinate exceeds a threshold indicate that the pen tip has left the paper surface at that moment; they are marked as invalid writing operations and are recorded only as displacement operations describing the position of the moving pen.
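The z-axis interpretation above can be illustrated with a short sketch that flags pen-lift samples and derives a rough stroke-thickness proxy; the threshold and sample values are assumptions, not values from the patent.

```python
import numpy as np

# Classify samples of the displacement function as valid writing (pen on paper)
# or pen-lift movement using the z coordinate; threshold and samples are assumed.
Z_LIFT_THRESHOLD = 0.003   # metres above the paper surface (paper is z = 0)

z = np.array([0.000, 0.001, 0.002, 0.006, 0.008, 0.001])
pen_down = z <= Z_LIFT_THRESHOLD                                   # True where the stroke is written
stroke_thickness = np.where(pen_down, Z_LIFT_THRESHOLD - z, 0.0)   # thicker when closer to the paper
print(pen_down, stroke_thickness)
```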
The attitude function records how the pen itself rotates about the three axes x, y, and z as time t changes. It can be used to describe the attitude change of the pen holder during the writing process; in terms of calligraphy, the change of the pen-tip attitude can be understood as the change in how the brush tip is used.
103. A control program is generated so that the robot can reproduce the teaching action according to the attitude function and the displacement function. The robot imitates the action according to the drive program and moves in the desired manner: the movement of the execution end over time follows the displacement function, and while moving according to the displacement function, the execution end's own attitude change follows the attitude function, thereby imitating the teaching action of the target.
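One way to picture step 103 is a playback loop that evaluates the two fitted functions at each control tick and sends the result to the execution end. The `robot.move_to` interface below is hypothetical, not a real driver API; this is only a sketch of the idea.

```python
import numpy as np

def run_control_program(displacement, attitude, duration, dt=0.01, robot=None):
    """Drive the execution end along the two fitted functions.

    displacement(t) -> (x, y, z) and attitude(t) -> (rx, ry, rz) stand for the
    functions fitted in step 102; `robot` is a hypothetical driver object with a
    move_to(position, orientation) command, not a real library API.
    """
    for t in np.arange(0.0, duration, dt):
        position = displacement(t)
        orientation = attitude(t)
        if robot is not None:
            robot.move_to(position, orientation)  # one set-point per control tick
```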
It can be seen from the above process that, in the deep learning visual acquisition method based on target posture provided by this embodiment, a number of reference points are first determined, the teaching action process of the target is then photographed visually to obtain the raw action data, and the raw action data are then organized into two functions of the time variable that describe the teaching action process of the target. Because the two functions are independent of each other, the attitude function records only the change of the target's own attitude with respect to time, while the displacement function treats the target as a particle and records only the change of its position with respect to time. The action data are thereby simplified, the inverse solution is fitted into the two functions, a control program is generated from the two functions, and the robot runs the control program to imitate the target's operation process. Because the action data are simplified, the point-taking calculation amount for imitating a complex teaching action is reduced, so a teaching action can be imitated with higher fidelity; the simplification of the action requires no manual judgment, so the imitation-learning process requires little manual participation and has a high degree of intelligence.
Preferably, as shown in fig. 1, after step 103, the following steps are further included:
104. and driving the robot to simulate the motion according to the control program.
105. and collecting simulated image information of the simulated action process from multiple directions.
106. And comparing the simulated image information with the teaching image information, and correcting the control program.
Because the generated control program is based only on data acquisition and automatic calculation, the executed action does not necessarily fully meet the imitation requirement. The control program is therefore trial-run in step 104, the simulated image information is recorded during execution in the same manner as in step 101, the simulated image information is then compared with the teaching image information, and the control program is revised accordingly. This forms a closed control loop, which is the robot learning process.
The comparison can be carried out in various ways. For example, the process from step 101 to step 103 can be repeated with the execution end of the robot as the acquisition target of the teaching action, generating a new displacement function and a new attitude function, which are then compared with the displacement function and attitude function generated from the original action data to find out whether any deviation exceeds a threshold. Alternatively, the simulated image information can be compared directly with the teaching images by image comparison: the transparency is adjusted, the simulated images are superimposed on the teaching images, and the errors in the images are compared to judge the similarity. If an error exceeding the threshold is found, the direction and magnitude of the correction are determined, and the control program is then corrected.
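The transparency-overlay comparison mentioned above could look roughly like the following sketch, assuming the teaching and simulated frames are already aligned and of equal size; the error metric and threshold are illustrative choices, not the patent's.

```python
import numpy as np

def overlay_error(teach_frame, sim_frame, alpha=0.5):
    """Blend a teaching frame and a simulated frame with adjustable transparency
    and return the blend plus a mean absolute pixel error as a similarity measure."""
    teach = teach_frame.astype(np.float32)
    sim = sim_frame.astype(np.float32)
    blended = alpha * teach + (1.0 - alpha) * sim  # the superimposed image, for visual inspection
    error = float(np.mean(np.abs(teach - sim)))    # deviation between the two actions
    return blended, error

ERROR_THRESHOLD = 8.0  # illustrative threshold on 8-bit pixel values
_, err = overlay_error(np.zeros((480, 640), np.uint8), np.full((480, 640), 10, np.uint8))
print(err, err > ERROR_THRESHOLD)  # exceeding the threshold would trigger a correction
```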
Steps 103 to 106 above can be repeated; after several rounds of trial running, acquisition and comparison, and corrective learning, the difference between the final executed action and the original action becomes smaller than the threshold, and the whole learning process is completed.
Preferably, before step 101, a marker may be formed on the target by drawing a dot of a special color, pasting a pattern with a special shape, or installing a component that emits a special light; after the images are captured, the marker can be directly identified as a reference point during image recognition. In other embodiments, the reference point may instead be handled purely as digital information in the system after image capture and image recognition, with no point actually marked on the target.
Preferably, in step 102, there may in practice be no rotation center point B that remains stationary during rotation. In that case, the reference point with the smallest swing amplitude may be selected and its swing-induced influence on the displacement corrected, so that it can serve as the reference point of the displacement function.
Preferably, in step 102, the above calculation of the attitude function assumed that the swing occurs in only one plane. In this embodiment, the actual swing angle α is therefore calculated as follows: images of the target are collected as projections onto the xy plane, the xz plane, and the yz plane, the swing component angle of the projection in each of the three planes is calculated, and the component angles are then fitted into the swing angle α in space. It is also easy to see that a particular attitude function can instead be associated directly with three equations, each describing the change of the swing component angle over time in one of the three planes.
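A rough sketch of the three-plane decomposition described here: the swing component angle is measured in each coordinate plane from the projected orientation vectors, and the spatial swing angle α is computed from the full 3D vectors. The vectors and the way the angles are combined are assumptions for illustration only.

```python
import numpy as np

def plane_swing_angles(v1, v2):
    """Swing component angles of the target between two instants, measured in the
    xy, xz and yz planes, plus the spatial swing angle between the two orientation
    vectors v1 and v2 (e.g. the A-B direction of the brush)."""
    def angle_deg(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

    xy = angle_deg(v1[[0, 1]], v2[[0, 1]])  # projection onto the xy plane
    xz = angle_deg(v1[[0, 2]], v2[[0, 2]])  # projection onto the xz plane
    yz = angle_deg(v1[[1, 2]], v2[[1, 2]])  # projection onto the yz plane
    spatial = angle_deg(v1, v2)             # swing angle alpha in space
    return xy, xz, yz, spatial

v1 = np.array([1.0, 0.2, 0.5])   # assumed brush direction at t1
v2 = np.array([0.9, 0.3, 0.6])   # assumed brush direction at t2
print(plane_swing_angles(v1, v2))
```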
Preferably, a sensor such as an acceleration sensor may be mounted on the target to collect data during the teaching action, and a corresponding sensor may be mounted on the robot's execution end to record data while the control program is executed; comparing the two sets of data helps determine the fidelity of the imitation.
This embodiment also provides a learning system for controlling a robot to learn teaching actions. The learning system comprises an image acquisition part, a data analysis part, and a drive control part, and the robot has an execution end. The image acquisition part captures teaching image information of the teaching action process from multiple directions; the data analysis part receives the teaching image information and analyzes it to obtain the motion functions of the teaching action, namely the displacement function and the attitude function described above; and the drive control part receives the motion functions, generates a control program, and controls the execution end to perform the imitation action.
By collecting the teaching action, the learning system in this embodiment can deconstruct and analyze the motion functions by itself and then generate the control program; when the control program is run, the execution end performs the imitation action to reproduce the teaching action. Because the action data are simplified, the point-taking calculation amount for imitating a complex teaching action is reduced, so a teaching action can be imitated with higher fidelity; the simplification of the action requires no manual judgment, so the imitation-learning process requires little manual participation and has a high degree of intelligence.
Preferably, the image acquisition part acquires not only the teaching action but also simulated image information of the imitated teaching action. The learning system further comprises a learning part, which corrects the control program by comparing the simulated image information with the teaching image information; this is the robot learning process. By repeating the learning and correction, the fidelity of the imitated action can be improved, so that the robot can reproduce the target's teaching action with higher precision.
In this embodiment, the image acquisition part specifically comprises a plurality of cameras placed in multiple directions, which simultaneously capture and record image information.
This embodiment also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the deep learning visual acquisition method based on target posture described above.
The present invention is not limited to the above preferred embodiments, and any modifications, equivalent substitutions and improvements made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (8)
1. A deep learning visual acquisition method based on target posture, for controlling a robot to learn teaching actions, characterized by comprising the following steps:
collecting teaching image information of the teaching action process from multiple directions;
analyzing the teaching image information, selecting a plurality of reference points of the teaching action, and fitting the reference points into at least two functions through the relation between movement and time: an attitude function for describing the change of the target attitude along with time, and a displacement function for describing the change of the target position along with time;
generating a control program to enable the robot to realize the teaching action according to the attitude function and the displacement function;
after the control program is generated, the method further comprises the following steps:
driving the robot to simulate actions according to the control program;
collecting simulated image information of the simulated motion process from multiple directions;
and comparing the simulated image information with the teaching image information, and correcting the control program.
2. The deep learning visual acquisition method based on target posture according to claim 1, further comprising the following steps after collecting teaching image information of the teaching action process from multiple directions:
and selecting the reference point with the minimum swing angle, and correcting the influence on the displacement in the swing process to be used as the reference point of the displacement function.
3. The deep learning visual acquisition method based on target posture according to claim 1, wherein selecting a plurality of reference points of the teaching action and fitting the reference points into at least two functions through the relation between movement and time specifically comprises the following steps:
the images at intervals of time t are superimposed and the distance between the same reference points is measured, the angle of oscillation is calculated, and the attitude function at this moment is obtained from the angle of oscillation and time t.
4. The deep learning visual acquisition method based on target posture according to claim 2, wherein superimposing the images of the target taken at intervals of time t, measuring the distances between the same reference points, and calculating the swing angle specifically comprises the following steps:
collecting projections of the target on three mutually perpendicular planes, calculating the swing component angle of the projection on each plane, and fitting the component angles into the swing angle in space.
5. The deep learning visual acquisition method based on target posture according to claim 1, wherein the target is provided with a marker that facilitates observation and point selection.
6. A learning system for controlling a robot to learn a teaching action, the robot having an execution end, comprising:
an image acquisition unit which captures and acquires teaching image information of the teaching operation process from a plurality of directions;
a data analysis unit which receives the teaching image information and analyzes the teaching image information to obtain a motion function of the teaching operation;
a drive control unit which receives the motion function, generates a drive program, and controls the execution end to perform a simulation operation;
also includes a learning part; the image acquisition part is also used for acquiring simulated image information of the simulated motion from multiple directions, and the learning part compares the teaching image information with the simulated image information and corrects the control program.
7. The learning system as claimed in claim 6, wherein the image capturing part comprises a plurality of cameras, and captures and records image information from a plurality of cameras at the same time.
8. A computer-readable storage medium in which a computer program is stored, wherein the computer program, when executed by a processor, carries out the steps of the deep learning visual acquisition method based on target posture according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811466680.0A CN109676583B (en) | 2018-12-03 | 2018-12-03 | Deep learning visual acquisition method based on target posture, learning system and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109676583A CN109676583A (en) | 2019-04-26 |
CN109676583B (en) | 2021-08-24 |
Family
ID=66186069
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811466680.0A Active CN109676583B (en) | 2018-12-03 | 2018-12-03 | Deep learning visual acquisition method based on target posture, learning system and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109676583B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111230862B (en) * | 2020-01-10 | 2021-05-04 | 上海发那科机器人有限公司 | Handheld workpiece deburring method and system based on visual recognition function |
US20230278198A1 (en) * | 2020-07-29 | 2023-09-07 | Siemens Ltd., China | Method and Apparatus for Robot to Grab Three-Dimensional Object |
CN114789470B (en) * | 2022-01-25 | 2024-10-25 | 北京萌特博智能机器人科技有限公司 | Adjustment method and device for simulation robot |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2993002A1 (en) * | 2014-09-03 | 2016-03-09 | Canon Kabushiki Kaisha | Robot apparatus and method for controlling robot apparatus |
CN106182003A (en) * | 2016-08-01 | 2016-12-07 | 清华大学 | A kind of mechanical arm teaching method, Apparatus and system |
CN107309882A (en) * | 2017-08-14 | 2017-11-03 | 青岛理工大学 | Robot teaching programming system and method |
CN108527319A (en) * | 2018-03-28 | 2018-09-14 | 广州瑞松北斗汽车装备有限公司 | The robot teaching method and system of view-based access control model system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6755724B2 (en) * | 2016-06-20 | 2020-09-16 | キヤノン株式会社 | Control methods, robot systems, and article manufacturing methods |
Also Published As
Publication number | Publication date |
---|---|
CN109676583A (en) | 2019-04-26 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PB01 | Publication | |
 | SE01 | Entry into force of request for substantive examination | |
 | GR01 | Patent grant | |