CN113134839A - Robot precision flexible assembly method based on vision and force position image learning - Google Patents

Robot precision flexible assembly method based on vision and force position image learning

Info

Publication number
CN113134839A
Authority
CN
China
Prior art keywords
force
image
assembly
robot
classification model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110451138.3A
Other languages
Chinese (zh)
Other versions
CN113134839B (en)
Inventor
李明富
罗威
赵文权
邓旭康
刘振宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiangtan University
Original Assignee
Xiangtan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiangtan University filed Critical Xiangtan University
Priority to CN202110451138.3A
Publication of CN113134839A
Application granted
Publication of CN113134839B
Legal status: Active
Anticipated expiration

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/0081 Programme-controlled manipulators with master teach-in means
    • B25J9/16 Programme controls
    • B25J9/1628 Programme controls characterised by the control loop
    • B25J9/1679 Programme controls characterised by the tasks executed
    • B25J9/1687 Assembly, peg and hole, palletising, straight line, weaving pattern movement
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a robot precision flexible assembly method based on vision and force-position image learning, belonging to the field of intelligent manufacturing. An industrial robot is controlled to grasp the workpiece to be assembled and bring it close to the hole of the assembly body; force and pose signals acquired during assembly are combined into force-position relation graphs, which are labeled and input into an image classification model I for training, while visual images of the assembly process are labeled and input into an image classification model II for training; the output of model I is then used to guide the direction and magnitude of the industrial robot's motion. With the technical scheme of the invention, the robot can be controlled to complete precision flexible assembly by learning force-position relation images and visual images.

Description

Robot precision flexible assembly method based on vision and force position image learning
Technical Field
The invention relates to the field of intelligent manufacturing, in particular to a robot precision flexible assembly method based on vision and force position image learning.
Background
Robotic assembly is an important research direction in the field of intelligent manufacturing. With the advent of intelligent production, more and more manufacturers have begun to use robots for assembly. Collaborative robots, however, have insufficient load capacity and are therefore limited when assembling heavy workpieces, while traditional industrial-robot assembly is mostly teaching-based and can only complete simple, low-precision assembly work. The existing solution is to add multiple sensors to the industrial robot so that it can perceive the external environment; however, traditional vision methods based on feature-point matching and visual servoing are prone to problems such as lost feature points and inaccurate matching, and compliance control algorithms do not achieve sufficiently accurate control of force and pose in specific applications.
Disclosure of Invention
In order to solve the above problems, the invention discloses a robot precision flexible assembly method based on vision and force-position image learning, in particular a solution in which force-position images and visual images collected during the assembly process are classified and learned with machine-learning models, and the learning results guide the robot to complete precision flexible assembly;
in order to achieve the above purpose, the technical scheme adopted by the invention mainly comprises the following processes:
step 1, establishing a kinematics model of the industrial robot to obtain its forward and inverse kinematics solutions, manually operating the teach pendant or controlling the industrial robot through a program so that the workpiece to be assembled held at the robot end approaches the hole of the assembly body, controlling the end tool to adjust the workpiece to the to-be-assembled pose, setting the step pitch of the assembly process, and controlling the industrial robot to execute the assembly operation;
the kinematics model is built by establishing a D-H parameter table to define a coordinate system for each link and for the end tool of the robot; the forward kinematics solution is derived from the coordinate-system transformation matrices, and the inverse kinematics solution is derived using the Pieper criterion;
the obtained forward and inverse kinematics solutions are used to determine the end pose of the robot, which facilitates the subsequent acquisition of pose signals;
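As an illustration of the forward-kinematics step described above, a minimal numpy sketch is given below; the D-H table values are placeholders rather than the parameters of any particular robot, and the Pieper-criterion inverse solution is omitted.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform of one link from its standard D-H parameters."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(joint_angles, dh_table):
    """Multiply the link transforms in order; returns the 4x4 end-tool pose in the base frame."""
    T = np.eye(4)
    for q, (theta_offset, d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta_offset + q, d, a, alpha)
    return T

# Placeholder D-H table (theta offset, d, a, alpha) for a 6-joint robot.
DH_TABLE = [(0.0, 0.40, 0.025,  np.pi / 2),
            (0.0, 0.00, 0.455,  0.0),
            (0.0, 0.00, 0.035,  np.pi / 2),
            (0.0, 0.42, 0.000, -np.pi / 2),
            (0.0, 0.00, 0.000,  np.pi / 2),
            (0.0, 0.08, 0.000,  0.0)]

print(forward_kinematics([0.0] * 6, DH_TABLE))  # end pose at the zero joint configuration
```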
preferably, the workpiece to be assembled is grasped by an end gripper, the end gripper is mounted on the end of a six-dimensional force sensor, and the six-dimensional force sensor is mounted on the end of the last joint of the industrial robot;
preferably, the to-be-assembled pose is the relative pose of the workpiece to be assembled with respect to the hole of the assembly body, calculated by a vision method;
the step pitch is the distance of one stepping motion of the industrial robot along any direction in Cartesian space, and its value is less than or equal to the assembly precision requirement;
the assembly operation can be executed by manual teaching, by visual guidance or by program control of the industrial robot; a manual teaching method is preferably adopted;
step 2, in the assembly process, acquiring a force signal and a pose signal once when the robot executes a step pitch, synthesizing the acquired force signal and pose signal according to a timestamp to generate a force-position relation image, and marking a stepping motion label for the image;
one step pitch is executed by manual teaching, visual guidance or program control of the industrial robot performing one stepping translation or one stepping rotation; a manual teaching method is preferably adopted;
the force signals are collected through an industrial robot tail end force sensor and comprise contact force signals Fx, Fy and Fz in three coordinate axes x, y and z directions of a three-dimensional space, an assembly moment signal T around an assembly axis and a time stamp;
the pose signal is obtained through the terminal pose of the robot and comprises terminal position coordinate values Px, Py and Pz, terminal pose values Rx, Ry and Rz and a timestamp;
the timestamp is used for recording the time information of the currently acquired signal;
the force-position relation image is a two-dimensional curve cluster obtained by plotting the force signal and the pose signal during assembly, and the drawing method is as follows: the 3 contact force signals and the assembly moment signal in the force signals are respectively taken as vertical coordinates, and the 3 end position coordinate values and the 3 end attitude values in the pose signal are respectively taken as horizontal coordinates, generating at most 24 curves, namely the Fx-Px, Fx-Py, Fx-Pz, Fx-Rx, Fx-Ry, Fx-Rz, Fy-Px, Fy-Py, Fy-Pz, Fy-Rx, Fy-Ry, Fy-Rz, Fz-Px, Fz-Py, Fz-Pz, Fz-Rx, Fz-Ry, Fz-Rz, T-Px, T-Py, T-Pz, T-Rx, T-Ry and T-Rz relation curves, which reflect the different force-position relations; each relation curve is represented by a different color;
preferably, the image background color of the force-position relationship image is selected to be white;
the stepping motion label is used to indicate the next stepping motion and comprises a motion direction and a motion step length, wherein the motion direction comprises the x, y, z, Rx, Ry and Rz directions and their composite directions, and the motion step length is an integer multiple of the step pitch;
step 3, repeating the step 2 until the assembly is completed, and obtaining all force and position relation images and all labels in the process;
all the force position relation images and all the labels comprise all the force position relation images and the labels corresponding to the force position relation images in one assembling process, and the number of the images is equal to the number of times of executing steps in the assembling process;
step 4, building a force and position relation image library through multiple assembling, namely, multiple executing steps 1-3, wherein the image library consists of force and position relation images and labels obtained in all assembling processes;
the multiple assembling is to collect assembling data under different initial conditions, because the initial poses of the workpieces to be assembled obtained by executing the step 1 each time are different, the collecting of the assembling data under the multiple assembling is beneficial to improving the generalization capability of the model;
the force and position relation image library comprises all force and position relation images and corresponding labels obtained in multiple assembling processes, and preferably at least 4000 pictures and corresponding labels thereof;
step 5, acquiring a visual image of the relative pose of the workpiece to be assembled and the assembly body in the assembling process, marking the label as unfinished assembly or finished assembly, and establishing an assembly state image library consisting of the visual image and the label;
the visual image is obtained frame by frame through a video stream of a visual sensor, preferably, the content of the visual image is in an RGB format, and the stored picture format is in a PNG format;
the assembly state image library comprises visual images acquired in the assembly process and corresponding label data thereof, and preferably comprises at least 2000 pictures and corresponding labels thereof;
step 6, constructing an image classification model I by taking the force-position relation image in the force-position relation image library as input and a label in the force-position relation image library as output, and inputting data in the force-position relation image library into the image classification model I for training;
preferably, the image classification model I is constructed by adopting a convolutional neural network;
preferably, the structure of the convolutional neural network adopts a ResNet50 model structure and consists of a 50-layer network structure: the first layer is a convolution and maximum pooling layer, the second to forty-ninth layers are residual modules, and the fiftieth layer is an average pooling and softmax classifier;
the convolution layer of the first layer uses 64 convolution kernels of size 7 x 7, and one maximum pooling layer of size 3 x 3;
the residual module is a module structure with an identity shortcut that maps the input directly to the output, which alleviates the vanishing-gradient problem caused by a very deep network;
the second to forty-ninth layers consist of sixteen residual modules; each residual module contains three convolution layers, forty-eight layers in total, and each convolution layer is followed by a batch normalization layer and a ReLU activation layer;
the first three residual modules are composed of convolution layers of 64 convolution kernels of 1 × 1, 64 convolution kernels of 3 × 3 and 256 convolution kernels of 1 × 1;
then four residual modules are formed by convolution layers of 128 convolution kernels with 1 × 1, 128 convolution kernels with 3 × 3 and 512 convolution kernels with 1 × 1;
then six residual modules are formed by 256 convolution layers of 1 × 1 convolution kernels, 256 3 × 3 convolution kernels and 1024 convolution layers of 1 × 1 convolution kernels;
the last three residual modules are composed of 512 convolution layers of 1 × 1 convolution kernels, 512 3 × 3 convolution kernels and 2048 convolution layers of 1 × 1 convolution kernels;
the fiftieth layer is an average pooling layer of 7 by 7 size, and a softmax classifier with twelve classes of outputs;
during training, GPU acceleration can be used; preferably, the training of the image classification model I is accelerated with the CUDA parallel computing architecture and the cuDNN deep neural network acceleration library;
preferably, one quarter of the images in the force-position relation image library are used as the test set and three quarters as the training set;
preferably, the weights of a ResNet50 model pre-trained on ImageNet are loaded, and the network weights are fine-tuned by transfer learning until satisfactory precision is obtained;
preferably, the model should be trained for more than 100 epochs, and the average classification accuracy on the test set should exceed 95%;
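For illustration only, the classifier described above could be instantiated with tf.keras roughly as follows; the framework choice at this point, the 224 x 224 input size and the optimizer settings are assumptions, not part of the invention.

```python
import tensorflow as tf

NUM_CLASSES = 12  # one class per stepping-motion label of image classification model I

def build_model_one(input_shape=(224, 224, 3)):
    # ResNet50 backbone: 7x7 conv + 3x3 max-pooling stem, sixteen bottleneck residual
    # modules, global average pooling, then a 12-way softmax classifier on top.
    backbone = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                              input_shape=input_shape, pooling="avg")
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(backbone.output)
    return tf.keras.Model(backbone.input, outputs, name="force_position_classifier")

model_one = build_model_one()
model_one.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
```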
step 7, taking the image in the assembly state image library as input and the label in the assembly state image library as output, constructing an image classification model II, and inputting the data in the assembly state image library into the image classification model II for training;
preferably, the image classification model II is constructed by adopting a convolutional neural network;
preferably, the structure of the convolutional neural network and the model training procedure are similar to those of the image classification model I, except that the softmax classifier in the last layer of the modified network outputs two classes;
step 8, controlling the robot to assemble in real time based on the output results of the image classification model I and the image classification model II: a real-time force-position relation image is generated from the force and pose signals acquired in real time and input into the trained image classification model I, and the output stepping motion label is used to control the motion of the industrial robot; meanwhile, the real-time visual image is input into the trained image classification model II, and when the output of the image classification model II is "finished assembly" the motion is stopped, completing the precision assembly;
preferably, the motion of the industrial robot is controlled through the MoveIt module of ROS according to the output result of the image classification model I.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more clear, the following detailed description of the embodiments of the present invention:
step 1, establishing a kinematics model of the industrial robot to obtain its forward and inverse kinematics solutions, manually operating the teach pendant or controlling the industrial robot through a program so that the workpiece to be assembled held at the robot end approaches the hole of the assembly body, controlling the end tool to adjust the workpiece to the to-be-assembled pose, setting the step pitch of the assembly process, and controlling the industrial robot to execute the assembly operation;
the kinematics model is built by establishing a D-H parameter table to define a coordinate system for each link and for the end tool of the robot; the forward kinematics solution is derived from the coordinate-system transformation matrices, and the inverse kinematics solution is derived using the Pieper criterion;
the obtained forward and inverse kinematics solutions are used to determine the end pose of the robot, which facilitates the subsequent acquisition of pose signals;
the workpiece to be assembled is grasped by an end gripper; the end gripper is mounted on the end of a six-dimensional force sensor, and the six-dimensional force sensor is mounted on the end of the last joint of the industrial robot;
the to-be-assembled pose is the relative pose of the to-be-assembled workpiece relative to the hole of the assembly body, which is obtained through calculation by a visual method;
the step pitch is set as a distance value of one stepping motion of the industrial robot in a Cartesian space along any direction, and the value is less than or equal to the assembly precision requirement;
a manual teaching method is adopted to control the industrial robot to execute the assembly operation;
step 2, in the assembly process, acquiring a force signal and a pose signal once when the robot executes a step pitch, synthesizing the acquired force signal and pose signal according to a timestamp to generate a force-position relation image, and marking a stepping motion label for the image;
a manual teaching method is adopted to control the industrial robot to perform one stepping translation or one stepping rotation, i.e. to execute one step pitch;
the force signals are collected through an industrial robot tail end force sensor and comprise contact force signals Fx, Fy and Fz in three coordinate axes x, y and z directions of a three-dimensional space, an assembly moment signal T around an assembly axis and a time stamp;
the pose signal is obtained through the terminal pose of the robot and comprises terminal position coordinate values Px, Py and Pz, terminal pose values Rx, Ry and Rz and a timestamp;
the timestamp is used for recording the time information of the currently acquired signal;
the force-position relation image is a two-dimensional curve cluster obtained by plotting the force signal and the pose signal during assembly, and the drawing method is as follows: the 3 contact force signals and the assembly moment signal in the force signals are respectively taken as vertical coordinates, and the 3 end position coordinate values and the 3 end attitude values in the pose signal are respectively taken as horizontal coordinates, generating 24 curves, namely the Fx-Px, Fx-Py, Fx-Pz, Fx-Rx, Fx-Ry, Fx-Rz, Fy-Px, Fy-Py, Fy-Pz, Fy-Rx, Fy-Ry, Fy-Rz, Fz-Px, Fz-Py, Fz-Pz, Fz-Rx, Fz-Ry, Fz-Rz, T-Px, T-Py, T-Pz, T-Rx, T-Ry and T-Rz relation curves, which reflect the different force-position relations; each relation curve is drawn in a different color, and the background color of the images is white;
the force-position relation images are drawn with the matplotlib and numpy libraries;
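A minimal sketch of this drawing step is given below, assuming the samples collected over one step are held in dictionaries of numpy arrays keyed by signal name; the figure size and color maps are arbitrary choices.

```python
import numpy as np
import matplotlib.pyplot as plt

def draw_force_position_image(force, pose, path):
    """force: arrays keyed Fx, Fy, Fz, T; pose: arrays keyed Px, Py, Pz, Rx, Ry, Rz.
    Plots the 24 force-position curves in distinct colors on a white background."""
    fig, ax = plt.subplots(figsize=(2.24, 2.24), dpi=100)              # small square figure
    colors = list(plt.cm.tab20.colors) + list(plt.cm.tab20b.colors)    # 40 distinct colors
    i = 0
    for f_name, f in force.items():        # 4 ordinates: Fx, Fy, Fz, T
        for p_name, p in pose.items():     # 6 abscissas: Px ... Rz  ->  24 curves
            ax.plot(p, f, color=colors[i], linewidth=1.0, label=f"{f_name}-{p_name}")
            i += 1
    ax.set_axis_off()                       # keep only the curve cluster
    fig.savefig(path, format="png", facecolor="white", bbox_inches="tight")
    plt.close(fig)

# Usage with random data standing in for one step's samples
rng = np.random.default_rng(0)
force = {k: rng.normal(size=50) for k in ("Fx", "Fy", "Fz", "T")}
pose = {k: rng.normal(size=50) for k in ("Px", "Py", "Pz", "Rx", "Ry", "Rz")}
draw_force_position_image(force, pose, "step_0000.png")
```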
the stepping motion label is used to indicate the next stepping motion and comprises a motion direction and a motion step length, wherein the motion direction comprises the x, y, z, Rx, Ry and Rz directions and their composite directions, and the motion step length is an integer multiple of the step pitch;
step 3, repeating the step 2 until the assembly is completed, and obtaining all force and position relation images and all labels in the process;
all the force position relation images and all the labels comprise all the force position relation images and the labels corresponding to the force position relation images in one assembling process, and the number of the images is equal to the number of times of executing steps in the assembling process;
step 4, building a force and position relation image library through multiple assembling, namely, multiple executing steps 1-3, wherein the image library consists of force and position relation images and labels obtained in all assembling processes;
the multiple assembling is to collect assembling data under different initial conditions, because the initial poses of the workpieces to be assembled obtained by executing the step 1 each time are different, the collecting of the assembling data under the multiple assembling is beneficial to improving the generalization capability of the model;
the force and position relation image library comprises all force and position relation images and corresponding labels obtained in multiple assembling processes, and comprises 5000 pictures and the corresponding labels.
Step 5, acquiring a visual image of the relative pose of the workpiece to be assembled and the assembly body in the assembling process, marking the label as unfinished assembly or finished assembly, and establishing an assembly state image library consisting of the visual image and the label;
the visual images are obtained frame by frame from the video stream of a vision sensor; the image content is in RGB format and the pictures are stored in PNG format;
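For illustration, one way such frames could be captured and stored with OpenCV is sketched below; the device index, window name and file naming are assumptions.

```python
import cv2

cap = cv2.VideoCapture(0)                 # vision sensor exposed as video device 0 (assumed)
frame_id = 0
while cap.isOpened():
    ok, frame = cap.read()                # OpenCV delivers frames in BGR channel order
    if not ok:
        break
    cv2.imwrite(f"assembly_state_{frame_id:05d}.png", frame)   # saved frame by frame as PNG
    cv2.imshow("assembly state", frame)
    frame_id += 1
    if cv2.waitKey(1) & 0xFF == ord("q"):  # stop the capture loop on 'q'
        break
cap.release()
cv2.destroyAllWindows()
```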
the assembly state image library comprises visual images acquired in the assembly process and label data corresponding to the visual images, and comprises 2500 pictures and labels corresponding to the pictures;
step 6, constructing an image classification model I by taking the force-position relation image in the force-position relation image library as input and a label in the force-position relation image library as output, and inputting data in the force-position relation image library into the image classification model I for training;
the image classification model I is constructed by adopting a convolutional neural network;
the structure of the convolutional neural network adopts a ResNet50 model structure and consists of a 50-layer network structure: the first layer is a convolution and maximum pooling layer, the second to forty-ninth layers are residual modules, and the fiftieth layer is an average pooling and softmax classifier;
the convolution layer of the first layer uses 64 convolution kernels of size 7 x 7, and one maximum pooling layer of size 3 x 3;
the residual module is a module structure with an identity shortcut that maps the input directly to the output, which alleviates the vanishing-gradient problem caused by a very deep network;
the second to forty-ninth layers consist of sixteen residual modules; each residual module contains three convolution layers, forty-eight layers in total, and each convolution layer is followed by a batch normalization layer and a ReLU activation layer;
the first three residual modules are composed of convolution layers of 64 convolution kernels of 1 × 1, 64 convolution kernels of 3 × 3 and 256 convolution kernels of 1 × 1;
then four residual modules are formed by convolution layers of 128 convolution kernels with 1 × 1, 128 convolution kernels with 3 × 3 and 512 convolution kernels with 1 × 1;
then six residual modules are formed by 256 convolution layers of 1 × 1 convolution kernels, 256 3 × 3 convolution kernels and 1024 convolution layers of 1 × 1 convolution kernels;
the last three residual modules are composed of 512 convolution layers of 1 × 1 convolution kernels, 512 3 × 3 convolution kernels and 2048 convolution layers of 1 × 1 convolution kernels;
the fiftieth layer is an average pooling layer of 7 by 7 size, and a softmax classifier with twelve classes of outputs;
training is accelerated by GPU acceleration technology: the CUDA (Compute Unified Device Architecture) parallel computing architecture and the cuDNN (CUDA Deep Neural Network) acceleration library are installed, and multiple GPUs are used to accelerate the training of the image classification model I;
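As a sketch of one possible multi-GPU setup, TensorFlow's MirroredStrategy replicates the model on every visible GPU and averages the gradients; the strategy choice and the 12-class ResNet50 head shown here are illustrative assumptions, not prescribed by the patent.

```python
import tensorflow as tf

print("Visible GPUs:", tf.config.list_physical_devices("GPU"))   # CUDA/cuDNN-backed devices

# MirroredStrategy copies the model onto each local GPU and averages the gradients,
# which is one way to spread the training of image classification model I over several GPUs.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    backbone = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                              input_shape=(224, 224, 3), pooling="avg")
    outputs = tf.keras.layers.Dense(12, activation="softmax")(backbone.output)
    model_one = tf.keras.Model(backbone.input, outputs)
    model_one.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
```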
one quarter of the images in the force-position relation image library are taken as the test set and three quarters as the training set; the separation of test set and training set is performed with the train_test_split function of the model_selection module in the sklearn library;
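A minimal sketch of this split, with placeholder arrays standing in for the loaded image library:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholders for the force-position image library; in practice the PNG files and
# their stepping-motion labels are loaded from disk.
images = np.zeros((400, 224, 224, 3), dtype=np.float32)
labels = np.zeros(400, dtype=np.int64)

# test_size=0.25 gives the one-quarter test / three-quarter training split described above.
x_train, x_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.25, random_state=42)
```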
the ResNet50 model structure is built with TensorFlow;
the weights of a ResNet50 model pre-trained on ImageNet are loaded through the Keras interface; following the transfer-learning method, the final fully connected layer is removed and replaced with a set of randomly initialized fully connected layers;
all layers except the new fully connected layers are frozen so that their weights are not updated, and the network is trained with a small learning rate so that the new fully connected layers learn patterns from the layers the network learned earlier; as the accuracy improves, the remaining layers are unfrozen layer by layer and training continues until satisfactory precision is obtained;
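A sketch of this two-stage fine-tuning through the Keras interface follows; the size of the new fully connected layers, the learning rates, the epoch counts and the placeholder data are assumptions, and for brevity the backbone is unfrozen in one step rather than layer by layer.

```python
import numpy as np
import tensorflow as tf

# Placeholder data standing in for the force-position image library split shown above.
x_train = np.zeros((300, 224, 224, 3), dtype=np.float32)
y_train = np.zeros(300, dtype=np.int64)
x_test = np.zeros((100, 224, 224, 3), dtype=np.float32)
y_test = np.zeros(100, dtype=np.int64)

base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      input_shape=(224, 224, 3), pooling="avg")
head = tf.keras.layers.Dense(256, activation="relu")(base.output)   # new, randomly initialized FC layers
outputs = tf.keras.layers.Dense(12, activation="softmax")(head)     # 12 stepping-motion classes
model_one = tf.keras.Model(base.input, outputs)

# Stage 1: freeze the pre-trained backbone and train only the new fully connected head.
base.trainable = False
model_one.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model_one.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=20)

# Stage 2: unfreeze the backbone and fine-tune with a small learning rate until the
# test-set accuracy is satisfactory (unfrozen all at once here for brevity).
base.trainable = True
model_one.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model_one.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=80)
```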
the training model iterates for 100 generations, then continues training, and stops training when the average classification precision on the test set reaches more than 95%;
step 7, taking the image in the assembly state image library as input and the label in the assembly state image library as output, constructing an image classification model II, and inputting the data in the assembly state image library into the image classification model II for training;
the image classification model II is constructed by adopting a convolutional neural network;
the structure of the convolutional neural network and the model training procedure are similar to those of the image classification model I, except that the softmax classifier in the last layer of the modified network outputs two classes;
step 8, controlling the robot to assemble in real time based on the output results of the image classification model I and the image classification model II: a real-time force-position relation image is generated from the force and pose signals acquired in real time and input into the trained image classification model I, and the output stepping motion label is used to control the motion of the industrial robot; meanwhile, the real-time visual image is input into the trained image classification model II, and when the output of the image classification model II is "finished assembly" the motion is stopped, completing the precision assembly;
the method for controlling the motion of the industrial robot from the output results is as follows: 1. the motion direction and motion step length obtained from the image classification model I are input into the MoveIt module of ROS, which controls the robot to complete the currently specified motion; 2. the previous step is repeated until the output of the image classification model II is "finished assembly".
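A simplified sketch of this control loop using the ROS MoveIt Python interface (moveit_commander) is given below; the planning-group name, the step-pitch value and the assembly_runtime helpers are assumptions standing in for the sensor reading, image drawing and model inference described above, and only the translational part of the commanded step is shown.

```python
import sys
import rospy
import moveit_commander

# Hypothetical helpers assumed to exist elsewhere in the project:
#   read_force(), read_pose(), read_camera_frame()  - latest sensor samples
#   make_force_position_image(force, pose)          - drawing step from step 2
#   predict_motion(image)     - model I inference, returns (dx, dy, dz) and a step multiple
#   assembly_finished(image)  - model II inference, True when the label is "finished assembly"
from assembly_runtime import (read_force, read_pose, read_camera_frame,
                              make_force_position_image, predict_motion,
                              assembly_finished)

STEP_PITCH = 0.0005  # metres; assumed value, <= the assembly precision requirement

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("precision_assembly", anonymous=True)
group = moveit_commander.MoveGroupCommander("manipulator")  # planning-group name is robot-specific

while not rospy.is_shutdown():
    if assembly_finished(read_camera_frame()):       # model II says the assembly is finished
        break
    image = make_force_position_image(read_force(), read_pose())
    (dx, dy, dz), multiple = predict_motion(image)   # model I: direction + step length
    pose = group.get_current_pose().pose
    pose.position.x += dx * multiple * STEP_PITCH    # translational step; rotational steps
    pose.position.y += dy * multiple * STEP_PITCH    # would adjust the orientation quaternion
    pose.position.z += dz * multiple * STEP_PITCH    # in the same way
    group.set_pose_target(pose)
    group.go(wait=True)
    group.stop()
    group.clear_pose_targets()

moveit_commander.roscpp_shutdown()
```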

Claims (5)

1. A robot precision flexible assembly method based on vision and force position image learning is characterized by comprising the following steps:
step 1, controlling an industrial robot to enable a workpiece to be assembled at the tail end of the robot to be close to a hole of an assembly body, controlling a tail end tool to adjust the workpiece to be assembled to be in an assembly pose, setting a step pitch in an assembly process, and controlling the industrial robot to execute assembly operation;
step 2, in the assembly process, acquiring a force signal and a pose signal once when the robot executes a step pitch, synthesizing the acquired force signal and pose signal according to a timestamp to generate a force-position relation image, and marking a stepping motion label for the image;
step 3, repeating the step 2 until the assembly is completed, and obtaining all force and position relation images and all labels in the process;
step 4, building a force and position relation image library through multiple assembling, namely, multiple executing steps 1-3, wherein the image library consists of force and position relation images and labels obtained in all assembling processes;
step 5, acquiring a visual image of the relative pose of the workpiece to be assembled and the assembly body in the assembling process, marking the label as unfinished assembly or finished assembly, and establishing an assembly state image library consisting of the visual image and the label;
step 6, constructing an image classification model I by taking the force-position relation image in the force-position relation image library as input and a label in the force-position relation image library as output, and inputting data in the force-position relation image library into the image classification model I for training;
step 7, taking the image in the assembly state image library as input and the label in the assembly state image library as output, constructing an image classification model II, and inputting the data in the assembly state image library into the image classification model II for training;
step 8, controlling the robot to assemble in real time based on the output results of the image classification model I and the image classification model II: generating a real-time force-position relation image from the force signal and the pose signal acquired in real time, inputting it into the trained image classification model I, and controlling the motion of the industrial robot with the output stepping motion label; meanwhile, inputting the real-time visual image into the trained image classification model II, and stopping the motion when the output of the image classification model II is "finished assembly", thereby completing the precision assembly.
2. The method for robot precise flexible assembly based on vision and force-position image learning as claimed in claim 1, characterized in that: in the step 1, the step distance is set as a distance value of one stepping motion of the industrial robot in the Cartesian space along any direction, and the value is smaller than or equal to the assembly precision requirement.
3. The method for robot precise flexible assembly based on vision and force-position image learning as claimed in claim 1, characterized in that: in the step 2, the force signals are collected through a force sensor at the end of the industrial robot and comprise contact force signals Fx, Fy and Fz along the three coordinate axes x, y and z of three-dimensional space, an assembly moment signal T around the assembly axis, and a timestamp; the pose signals are obtained from the end pose of the robot and comprise end position coordinate values Px, Py and Pz, end attitude values Rx, Ry and Rz, and a timestamp.
4. The method for robot precise flexible assembly based on vision and force-position image learning as claimed in claim 1, characterized in that: in the step 2, the force-position relation image is a two-dimensional curve cluster obtained by plotting the force signal and the pose signal in the assembly process, and the drawing method is as follows: the 3 contact force signals and the assembly moment signal in the force signals are respectively taken as vertical coordinates, and the 3 end position coordinate values and the 3 end attitude values in the pose signals are respectively taken as horizontal coordinates, generating at most 24 curves to reflect different force-position relationships.
5. The method for robot precise flexible assembly based on vision and force-position image learning as claimed in claim 1, characterized in that: in the step 2, the stepping motion label is used to indicate the next stepping motion and comprises a motion direction and a motion step length, wherein the motion direction comprises the x, y, z, Rx, Ry and Rz directions and their composite directions, and the motion step length is an integer multiple of the step pitch.
CN202110451138.3A 2021-04-26 2021-04-26 Robot precision flexible assembly method based on vision and force position image learning Active CN113134839B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110451138.3A CN113134839B (en) 2021-04-26 2021-04-26 Robot precision flexible assembly method based on vision and force position image learning

Publications (2)

Publication Number Publication Date
CN113134839A true CN113134839A (en) 2021-07-20
CN113134839B CN113134839B (en) 2022-06-10

Family

ID=76812028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110451138.3A Active CN113134839B (en) 2021-04-26 2021-04-26 Robot precision flexible assembly method based on vision and force position image learning

Country Status (1)

Country Link
CN (1) CN113134839B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106272416A (en) * 2016-08-29 2017-01-04 上海交通大学 Feel based on power and the robot slender axles Fine Boring system and method for vision
CN108161934A (en) * 2017-12-25 2018-06-15 清华大学 A kind of method for learning to realize robot multi peg-in-hole using deeply
CN109740270A (en) * 2019-01-09 2019-05-10 中国工程物理研究院激光聚变研究中心 The big L/D ratio peg-in-hole assembly system and method predicted and analyzed based on contact force and torque
CN110065068A (en) * 2019-04-08 2019-07-30 浙江大学 A kind of robotic asssembly operation programming by demonstration method and device based on reverse-engineering
CN110543892A (en) * 2019-08-05 2019-12-06 青岛理工大学 part identification method based on multilayer random forest
CN111598951A (en) * 2020-05-18 2020-08-28 清华大学 Method, device and storage medium for identifying space target

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114789444A (en) * 2022-05-05 2022-07-26 山东省人工智能研究院 Compliant human-computer contact method based on deep reinforcement learning and impedance control
CN114789444B (en) * 2022-05-05 2022-12-16 山东省人工智能研究院 Compliant human-computer contact method based on deep reinforcement learning and impedance control

Also Published As

Publication number Publication date
CN113134839B (en) 2022-06-10

Similar Documents

Publication Publication Date Title
CN112132894B (en) Mechanical arm real-time tracking method based on binocular vision guidance
CN111243017B (en) Intelligent robot grabbing method based on 3D vision
CN111695562B (en) Autonomous robot grabbing method based on convolutional neural network
CN109571487B (en) Robot demonstration learning method based on vision
CN111046948B (en) Point cloud simulation and deep learning workpiece pose identification and robot feeding method
CN112171661A (en) Method for grabbing target object by mechanical arm based on visual information fusion
CN112297013A (en) Robot intelligent grabbing method based on digital twin and deep neural network
JP7067816B1 (en) Robot teaching system and method based on image segmentation and surface EMG
CN113134839B (en) Robot precision flexible assembly method based on vision and force position image learning
JP2022187983A (en) Network modularization to learn high dimensional robot tasks
CN115861780B (en) Robot arm detection grabbing method based on YOLO-GGCNN
Takizawa et al. Learning from observation of tabletop knotting using a simple task model
CN115194774A (en) Binocular vision-based control method for double-mechanical-arm gripping system
CN115674204A (en) Robot shaft hole assembling method based on deep reinforcement learning and admittance control
CN211890823U (en) Four-degree-of-freedom mechanical arm vision servo control system based on RealSense camera
CN113119073A (en) Mechanical arm system based on computer vision and machine learning and oriented to 3C assembly scene
CN111002292B (en) Robot arm humanoid motion teaching method based on similarity measurement
Kawagoshi et al. Visual servoing using virtual space for both learning and task execution
CN110919650A (en) Low-delay grabbing teleoperation system based on SVM (support vector machine)
CN112123329A (en) Robot 3D vision hand-eye calibration method
Elachkar et al. An Experimental Setup for Robot Learning From Human Observation using Deep Neural Networks
CN112257655B (en) Method for robot to recognize human body sewing action
CN113492404B (en) Humanoid robot action mapping control method based on machine vision
Liu et al. Robotic visual servoing based on convolutional neural network
CN117415051A (en) Robot intelligent sorting method based on RGB-D image and teaching experiment platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant