WO2021176902A1 - Learning processing device, robot control device and method, and program - Google Patents

Learning processing device, robot control device and method, and program

Info

Publication number
WO2021176902A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
correlation information
learning
robot
camera
Application number
PCT/JP2021/002976
Other languages
English (en)
Japanese (ja)
Inventor
Hirotaka Suzuki (鈴木洋貴)
Original Assignee
Sony Group Corporation
Application filed by Sony Group Corporation
Publication of WO2021176902A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00 Controls for manipulators
    • B25J13/08 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras

Definitions

  • The present disclosure relates to a learning processing device, a robot control device, a method, and a program. Specifically, it relates to a learning processing device that performs learning processing for mounting a component at a predetermined position using a robot, a robot control device that performs robot control processing, and a corresponding method and program.
  • For example, a robot acquires an object a at point P1, moves the acquired object to point P2, and attaches the object a to another object b located at point P2. Such processing is an important task in, for example, work automation using arm robots in factories.
  • Learning processing is used as one method for constructing this control algorithm. For example, by performing learning processing that analyzes images captured by a camera mounted on the robot or fixed at a specific position in the workplace, an algorithm that automates the above processing (task) can be constructed.
  • However, the learning process requires inputting a large amount of sample data, for example, captured images with various different settings, and analyzing a large number of images, which takes considerable time and labor.
  • Deep learning, a learning process using a deep neural network (DNN), is one learning process for causing a robot to execute an object position/orientation control task. If deep learning is used, a large amount of sample data is input, and the learning processing unit itself automatically extracts feature amounts from that data and can generate an optimum solution corresponding to various data, for example, a robot control parameter.
  • Patent Document 1 (Japanese Unexamined Patent Application Publication No. 2015-102928) discloses a configuration that uses the phase-only correlation method to detect the position and orientation of an object. In this method, the position and orientation of an object in an input image are detected by comparing the input image of the object with a template image using the phase-only correlation method.
  • The present disclosure has been made in view of the above problems, and aims to provide a learning processing device that performs learning processing for mounting a component at a predetermined position using a robot, a robot control device that performs robot control processing, and a corresponding method and program.
  • The first aspect of the present disclosure is a learning processing device having an image correlation information calculation unit that generates image correlation information between a camera-captured image and a model image, and a learning processing unit that executes a learning process of inputting the image correlation information and generating a learning model that outputs control parameters for controlling the position of an object captured in the camera-captured image.
  • The second aspect of the present disclosure is a robot control device having a robot control unit that controls a robot, an image correlation information calculation unit that generates image correlation information between a camera-captured image of an object acquired by the robot and a model image in which the object is arranged at an ideal position, and a learning model that receives the image correlation information and outputs a control parameter target value for controlling the position of the object captured in the camera-captured image. The robot control unit controls the robot using the control parameter target value, which is the output of the learning model.
  • The third aspect of the present disclosure is a learning processing method executed in a learning processing device, including an image correlation information calculation step in which an image correlation information calculation unit generates image correlation information between a camera-captured image and a model image, and a step in which a learning processing unit executes a learning process of inputting the image correlation information and generating a learning model that outputs control parameters for controlling the position of an object captured in the camera-captured image.
  • The fourth aspect of the present disclosure is a robot control method executed in a robot control device, including a step in which a robot control unit controls a robot, an image correlation information calculation step in which an image correlation information calculation unit generates image correlation information between a camera-captured image of an object acquired by the robot and a model image in which the object is arranged at an ideal position, and a step in which the robot control unit re-controls the robot using a control parameter target value that is the output of a learning model receiving the image correlation information.
  • The fifth aspect of the present disclosure is a program that executes learning processing in a learning processing device, the program causing an image correlation information calculation unit to execute an image correlation information calculation step of generating image correlation information between a camera-captured image and a model image, and causing a learning processing unit to execute a step of inputting the image correlation information and executing a learning process of generating a learning model that outputs control parameters for controlling the position of an object captured in the camera-captured image.
  • The sixth aspect of the present disclosure is a program that executes robot control processing in a robot control device, the program causing a robot control unit to execute a step of controlling a robot, causing an image correlation information calculation unit to execute an image correlation information calculation step of generating image correlation information between a camera-captured image of an object acquired by the robot and a model image in which the object is arranged at an ideal position, causing a learning model to receive the image correlation information and output a control parameter target value for controlling the position of the object captured in the camera-captured image, and causing the robot control unit to re-control the robot using the control parameter target value, which is the output of the learning model.
  • The program of the present disclosure is, for example, a program that can be provided by a storage medium or a communication medium in a computer-readable format to an information processing device or a computer system capable of executing various program codes.
  • In this specification, a system is a logical set configuration of a plurality of devices; the devices of each configuration are not limited to being in the same housing.
  • According to the configuration of one embodiment of the present disclosure, a phase-only correlation information calculation unit generates phase-only correlation information between a camera-captured image and a model image, and a learning model that receives the phase-only correlation information and outputs control parameters for controlling the position of an object captured in the camera-captured image is generated. A robot control device that controls a robot using the generated learning model inputs phase-only correlation information between a camera-captured image of an object acquired by the robot and a model image in which the object is arranged at an ideal position to the learning model, and controls the robot using the control parameter target value obtained as the output of the learning model.
  • FIG. 1 is a diagram showing an example of processing executed by the robot 100.
  • Ring-shaped (hollow disk-shaped) parts 20 are piled up in bulk in the parts box 10. Component mounting pins 40 move on the belt conveyor 30, which travels in the direction of the arrow shown in the figure.
  • The robot 100 acquires a component 20 from the component box 10 and executes a process (task) of inserting the component 20 into a component mounting pin 40 on the belt conveyor 30.
  • A suction unit 101 is provided at the tip of the arm of the robot 100; the component 20 can be picked up and released by controlling the suction operation of the suction unit 101.
  • Step S01: The robot 100 picks one part 20 from the parts box 10, in which the ring-shaped (hollow disk-shaped) parts 20 are piled up in bulk, by suction with the suction unit 101 provided at the tip of its arm.
  • Step S02: The robot 100, holding one component 20 by suction, then rotates its arm (robot arm) in step S02 and moves the component 20 to a position directly above one pin of the component mounting pins 40 moving on the belt conveyor 30.
  • Step S03: Next, in step S03, the robot 100 releases the suction of the component 20 held by the suction unit 101 at the tip of its arm, and mounts the component 20 on the pin of the component mounting pins 40.
  • After step S03, the robot 100 rotates its arm (robot arm) and returns to the process of step S01, that is, the process of taking a component 20 out of the component box 10.
  • The robot 100 repeatedly executes the above steps S01 to S03.
  • The operations required of the robot 100 are: a component picking operation that acquires one component 20 from the parts in a bulk state; an operation that moves the acquired component to a position directly above one pin of the component mounting pins 40 moving on the belt conveyor 30; and an operation that aligns the center hole of the ring-shaped (hollow disk-shaped) component 20 with the pin position, releases the suction of the component 20 at that position, and thereby mounts the component 20 on the pin of the component mounting pins 40.
  • The difficulty lies in moving to the position directly above one pin of the component mounting pins 40 moving on the belt conveyor 30 and aligning the center hole of the ring-shaped (hollow disk-shaped) component 20 with the pin position. The relative position of the center hole of the component 20 and the component mounting pin 40 varies each time; that is, a different "deviation" occurs for each component mounting process.
  • The camera 120 captures an image of the state in which the robot 100 has moved the component 20 onto the component mounting pin 40; this image is the camera-captured image 121. At this position, the robot 100 releases the suction of the component 20 and mounts the component 20 on the pin of the component mounting pins 40.
  • The camera-captured image 121 contains a part of the component 20 and the part of the component mounting pin 40 observed through the hole in the center of the component 20. The portion of the component 20 included in the camera-captured image 121 is only the hole in the center of the part and a part of the surrounding ring; the outer peripheral portion of the ring of the component 20 lies outside the frame of the camera-captured image 121 shown in the figure.
  • The captured image 121 captured by the camera 120 is input to the learning processing unit 130. In addition, the robot control parameter 126 generated by the robot control unit 125 for controlling the robot 100 is input to the learning processing unit 130.
  • The robot control parameter 126 is, for example, a parameter such as the control position or angle of each arm of the robot 100, or a parameter set in a control command for setting these control positions and angles.
  • The learning processing unit 130 executes machine learning processing using these input data.
  • The learning processing unit 130 acquires feature information of the captured image 121 captured by the camera 120. For example, image feature information such as the pixel value distribution and edge information is acquired, learning processing is performed on this feature information, and a learning model 131 that associates the image feature information with the optimum robot control parameters is generated.
  • The learning model 131 generated by the learning processing unit 130 has the following settings: the input is an image, i.e., an image in which the component 20 has been moved onto the component mounting pin 40, and the output is the robot control parameter target value corresponding to that image. The robot control parameter target value, which is the output of the learning model 131, also consists of parameters such as the control position and angle of each arm of the robot 100, or parameters set in control commands for setting these control positions and angles.
  • The learning processing unit 130 receives a number of images of different states, that is, various different "(A) learning images" of states in which the robot 100 has moved the component 20 onto the component mounting pin 40, as learning data, acquires the feature information of these images, and generates a learning model 131 in which the robot control parameter target value corresponding to each image is associated with it.
  • Deep learning, a learning process using a deep neural network (DNN), can be applied. With deep learning, the learning processing unit 130 itself automatically extracts feature amounts from a large amount of input data and can generate an optimum solution corresponding to various data, for example, a robot control parameter.
  • The learning processing unit 130 shown in FIG. 5 can be configured as, for example, a learning processing unit using a deep neural network (DNN) of such a configuration.
  • In addition, the learning processing unit 130 can be realized as a learning processing unit to which various learning algorithms are applied.
  • The learning processing unit 130 receives a number of images of different states in which the component 20 has been successfully inserted into the component mounting pin 40, that is, the "(A) learning images" shown in FIG. 5, together with the robot control parameters at the time these images were captured, that is, the data corresponding to the "(B) robot control parameters at the time of successful component insertion" shown in FIG. 5.
  • The "(B) robot control parameters at the time of successful component insertion" consist of, for example, a six-dimensional real-valued vector (action) composed of the robot arm position (x, y, z) and the roll, pitch, and yaw angles indicating the arm posture.
  • The learning processing unit 130 executes a learning process using these input data, for example a learning process using a neural network in which the above-mentioned CNN is combined with BN and ReLU, and determines the weight information constituting the neural network by, for example, the stochastic gradient descent method. Through this learning process, the learning processing unit 130 generates a learning model 131 that receives an image showing the situation immediately before component insertion and outputs the optimum robot control parameter for inserting the component 20 into the component mounting pin 40, that is, the robot control parameter target value.
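  • For illustration only, a minimal sketch of such a CNN+BN+ReLU regressor, assuming PyTorch and arbitrary layer and image sizes (not the configuration disclosed in this publication):

    # Hypothetical sketch: a small CNN (Conv + BatchNorm + ReLU) that maps an
    # input image to a 6-dimensional robot control parameter target value
    # (x, y, z, roll, pitch, yaw). All sizes are assumptions.
    import torch
    import torch.nn as nn

    class ControlParamRegressor(nn.Module):
        def __init__(self, in_channels: int = 1, out_dim: int = 6):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 16, kernel_size=3, stride=2, padding=1),
                nn.BatchNorm2d(16),
                nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
                nn.BatchNorm2d(32),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
            )
            self.head = nn.Linear(32, out_dim)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.head(self.features(x))

    model = ControlParamRegressor()
    image = torch.randn(1, 1, 128, 128)   # one grayscale camera image (assumed size)
    params = model(image)                 # -> tensor of shape (1, 6)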
  • FIG. 6 is a diagram showing a control example of the robot 100 using the learning model 131.
  • To control the robot, the camera-captured image 121 captured by the camera 120, provided at the same position as when the learning model was generated, is input to the learning model 131.
  • The learning model 131 acquires feature information from the captured image 121 and outputs the optimum robot control parameter corresponding to the acquired feature information, that is, the robot control parameter target value 132 shown in FIG. 6.
  • The robot control parameter target value 132 is input to the robot control unit 125. The robot control unit 125 calculates the difference between the current robot control parameter 126 and the robot control parameter target value 132 and, if there is a difference, corrects and controls the robot 100 so that the difference amount becomes zero or is reduced.
  • By controlling the robot 100 using the robot control parameter target value 132 in this way, the position and orientation of the component 20 attached to the tip of the arm of the robot 100 are corrected and controlled, and the suction of the component 20 is then released, so that the component 20 can be reliably inserted into the component mounting pin 40.
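  • A minimal sketch of this correction control, assuming hypothetical parameter vectors and a simple proportional correction (the actual control law is not specified here):

    # Hypothetical sketch: drive the robot so that the difference between the
    # current control parameters and the target value output by the learning
    # model approaches zero.
    import numpy as np

    def correction_command(current_params: np.ndarray,
                           target_params: np.ndarray,
                           gain: float = 1.0,
                           tol: float = 1e-3) -> np.ndarray:
        diff = target_params - current_params
        if np.linalg.norm(diff) < tol:
            return np.zeros_like(diff)   # difference effectively zero: no correction
        return gain * diff               # correction proportional to the difference

    current = np.array([0.10, 0.20, 0.05, 0.0, 0.0, 0.1])  # (x, y, z, roll, pitch, yaw)
    target  = np.array([0.12, 0.19, 0.05, 0.0, 0.0, 0.0])
    command = correction_command(current, target)           # reduces the deviation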
  • As described above with reference to FIG. 3, the acquisition position (suction position) of the component 20 by the robot 100 varies, and the state of the component 20 relative to the component mounting pin 40 also varies, so there are many variations representing these situations.
  • The learning processing device and the robot control device of the present disclosure solve the above-mentioned problems. For this solution, image correlation information that allows image deviation information to be calculated in pixel units is used, such as phase-only correlation information, rotation-invariant phase-only correlation information, and image correlation information using the image phase spectrum.
  • Phase-only correlation information is hereinafter also referred to as PoC information.
  • The phase-only correlation information (PoC information) is described with reference to FIG. 7.
  • The phase-only correlation information (PoC information) is information holding the translational movement amount (shift amount) of each object between two images.
  • FIG. 7 shows two images, (1a) image a and (1b) image b. These two images capture the same subject, but their relative positions are shifted. The amount of deviation (shift amount) between the images is 12 pixels in the x direction and 13 pixels in the y direction.
  • Using phase-only correlation information, for example, the amount of deviation between two images with such a positional shift can be easily analyzed.
  • The calculation procedure of the phase-only correlation information (PoC information) is described next.
  • The processing is performed in the order of steps S11 to S12 shown in FIG. 7. Each processing step is described in sequence.
  • Steps S11a, S11b: First, a discrete Fourier transform (DFT) is applied to each of the images a and b. This discrete Fourier transform (DFT) outputs amplitude information A and phase information e^{jθ} for each of the images a and b.
  • Step S12: Next, the correlation between the phase information corresponding to each of the images a and b is calculated, and only the phase information is returned to the image space by an inverse discrete Fourier transform (IDFT).
  • The phase information returned to the image space is the phase-only correlation information (PoC) 140 shown in FIG. 7.
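  • A minimal numpy sketch of steps S11 to S12 (the function name and image sizes are illustrative assumptions): the DFT of each image is taken, only the phase information is kept and correlated, and an inverse DFT returns the result to the image space, where the peak location gives the shift amount.

    import numpy as np

    def phase_only_correlation(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
        # Steps S11a, S11b: discrete Fourier transform (DFT) of each image.
        fa = np.fft.fft2(img_a)
        fb = np.fft.fft2(img_b)
        # Keep only the phase information e^{j*theta}, discarding amplitude A.
        cross = fa * np.conj(fb)
        cross /= np.abs(cross) + 1e-12            # normalize; avoid division by zero
        # Step S12: inverse DFT (IDFT) returns the phase correlation to image space.
        return np.real(np.fft.ifft2(cross))

    # Example: image b is image a shifted by 12 px in x and 13 px in y (as in FIG. 7).
    a = np.random.rand(128, 128)
    b = np.roll(np.roll(a, 13, axis=0), 12, axis=1)
    poc = phase_only_correlation(b, a)
    dy, dx = np.unravel_index(np.argmax(poc), poc.shape)
    print(dx, dy)   # -> 12 13: a single sharp peak at the shift amount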
  • Because the image a and the image b are related by a shift of the entire image, only one peak is detected in the phase-only correlation information (PoC) generated from the two images. When the two images contain subjects with different amounts of deviation, a plurality of peaks corresponding to the deviation amount of each subject are generated in the phase-only correlation information (PoC).
  • In the example of FIG. 8, the image a and the image b are images captured at different timings, and a vehicle moving within the scene is captured.
  • The amount of deviation (shift amount) between image a and image b as a whole is 12 pixels in the x direction and 13 pixels in the y direction, as in the example described with reference to FIG. 7.
  • However, the amount of deviation of the moving vehicle differs from the amount of deviation of the entire image.
  • In the phase-only correlation information (PoC) generated from the images a and b, two peaks therefore appear, as shown in the phase-only correlation information (PoC) 140 in FIG. 8. When the two images contain subjects with different deviation amounts in this way, a plurality of peaks corresponding to the deviation amount of each subject are generated in the phase-only correlation information.
  • The learning processing device and the robot control device of the present disclosure execute learning processing and robot control processing using this phase-only correlation information (PoC).
  • Note that the learning processing device and the robot control device of the present disclosure are not limited to phase-only correlation information; processing can be performed using any image correlation information that allows image deviation information to be calculated in pixel units, such as rotation-invariant phase-only correlation information and image correlation information using the image phase spectrum. In the following, an example using phase-only correlation information is described.
  • The learning processing unit 130 described above with reference to FIGS. 4 and 5 received as learning images a number of images of different states in which the component 20 was successfully inserted into the component mounting pin 40, analyzed the input images, and extracted the feature information required to generate the robot control parameters.
  • In contrast, the learning processing unit of the learning processing device of the present disclosure described below does not receive images of the component 20 or the component mounting pin 40 directly.
  • Instead, phase-only correlation information generated based on images is input to the learning processing unit of the learning processing device of the present disclosure. Specifically, the phase-only correlation information between various images at the insertion position of the component 20 with respect to the component mounting pin 40 and a model image having the position and orientation that serves as the model for successful insertion of the component 20 into the component mounting pin 40 is input.
  • FIG. 9 is a diagram illustrating the configuration and processing of the learning processing device 160 according to the first embodiment of the present disclosure.
  • The learning processing device 160 has a phase-only correlation information (PoC) calculation unit 161 and a learning processing unit 162.
  • A camera-captured image 121 captured by the same camera 120 as described above with reference to FIG. 4 is input to the phase-only correlation information (PoC) calculation unit 161.
  • The camera 120 captures an image of the state in which the robot 100 has moved the component 20 onto the component mounting pin 40.
  • The camera-captured image 121 is an image in various states; that is, it is an image of the various different states, described above with reference to FIG. 3, in which the relative positions of the component 20 and the component mounting pin 40 differ.
  • In addition, a model image 151 that has been captured in advance and stored in the storage unit 150 is input to the phase-only correlation information (PoC) calculation unit 161.
  • The model image 151 is an image in which the relative positions of the component 20 and the component mounting pin 40 are the model positions. That is, if the component 20 and the component mounting pin 40 are set at the same positions as in the model image 151, the component 20 can be reliably inserted into the component mounting pin 40.
  • The camera-captured image 121 and the model image 151 are input to the phase-only correlation information (PoC) calculation unit 161 as a pair of images. The phase-only correlation information (PoC) calculation unit 161 calculates the phase-only correlation information (PoC) corresponding to these two images, that is, the camera-captured image 121 and the model image 151.
  • FIG. 9 shows an example of the phase-only correlation information (PoC) 171 generated by the phase-only correlation information (PoC) calculation unit 161.
  • As described above, phase-only correlation information has peaks corresponding to the shift of each object (subject) contained in the two images.
  • The coordinates of the peak positions appearing in the phase-only correlation information (PoC) indicate the amount and direction of the shift of each object (subject).
  • The camera-captured image 121 and the model image 151 contain two objects, the component 20 and the component mounting pin 40, as subjects. In the phase-only correlation information (PoC) 171 generated by the PoC calculation unit 161, peaks therefore appear at the coordinate positions corresponding to the amount of deviation between the images for each of these two objects. One of the two coordinates (x, y) corresponding to these two peaks corresponds to the deviation of the component 20, and the other corresponds to the deviation of the component mounting pin 40. That is, the two peak coordinate positions of the phase-only correlation information (PoC) 171 are data (positional deviation vectors) indicating the amount and direction of the positional deviation of the positions of the component 20 and the component mounting pin 40 in the camera-captured image 121 from their reference positions.
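  • As a hypothetical illustration of reading such peaks (the peak count and wrap-around handling are assumptions), the positional deviation vectors can be extracted from a PoC map as follows:

    import numpy as np

    def peak_deviation_vectors(poc: np.ndarray, n_peaks: int = 2):
        # Return the (dx, dy) positional deviation vector for each of the
        # n_peaks strongest peaks in the PoC map.
        h, w = poc.shape
        flat = poc.ravel().copy()
        vectors = []
        for _ in range(n_peaks):
            idx = int(np.argmax(flat))
            dy, dx = divmod(idx, w)
            if dy > h // 2: dy -= h      # wrap so that shifts can be negative
            if dx > w // 2: dx -= w
            vectors.append((dx, dy))
            flat[idx] = -np.inf          # suppress this peak to find the next one
        return vectors   # e.g. [(12, 13), (-4, 2)]: one vector per object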
  • The phase-only correlation information (PoC) 171 generated by the phase-only correlation information (PoC) calculation unit 161 is input to the learning processing unit 162.
  • In addition, the robot control parameter 126 generated by the robot control unit 125 for controlling the robot 100 is input to the learning processing unit 162.
  • The robot control parameter 126 is, for example, a parameter such as the control position or angle of each arm of the robot 100, or a parameter set in a control command for setting these control positions and angles.
  • The learning processing unit 162 executes machine learning processing using these input data.
  • The learning processing unit 162 performs learning processing using the phase-only correlation information (PoC) 171 and the robot control parameters 126, and generates a learning model 172 that associates various phase-only correlation information (PoC) 171 with the optimum robot control parameters, that is, the robot control parameter target values.
  • The learning model 172 generated by the learning processing unit 162 has the following settings: the input is phase-only correlation information (PoC), that is, the PoC of the two images generated from the image in which the component 20 has been moved onto the component mounting pin 40 and the model image; the output is the robot control parameter target value, that is, the robot control parameter target value corresponding to various phase-only correlation information (PoC).
  • The robot control parameter target value, which is the output of the learning model 172, also consists of parameters such as the control position and angle of each arm of the robot 100, or parameters set in control commands for setting these control positions and angles.
  • Details of the processing executed by the phase-only correlation information (PoC) calculation unit 161 and of the processing executed by the learning processing unit 162 are described with reference to FIGS. 10 and 11.
  • FIG. 10 is a diagram illustrating details of the processing executed by the phase-only correlation information (PoC) calculation unit 161.
  • The phase-only correlation information (PoC) calculation unit 161 sequentially receives a large number of images captured by the camera 120 in the state in which the robot 100 has moved the component 20 onto the component mounting pin 40.
  • As this process is repeatedly executed by the robot 100, the "(A) camera-captured images" shown in FIG. 10 are generated, and these are sequentially input to the phase-only correlation information (PoC) calculation unit 161.
  • Each of the "(A) camera-captured images" shown in FIG. 10 is one of various images in which the relative positions of the component 20 and the component mounting pin 40 differ.
  • For each of the input camera-captured images, the phase-only correlation information (PoC) calculation unit 161 sequentially executes the process of calculating the phase-only correlation information (PoC) with the model image 151 acquired from the storage unit 150, generating the learning phase-only correlation information (PoC) corresponding to each of the camera-captured images shown in FIG. 10.
  • Each item of the "(B) learning phase-only correlation information (PoC)" shown in FIG. 10 is PoC data in which peaks are generated at the coordinate positions corresponding to the deviations of the positions of the component 20 and the component mounting pin 40 in the corresponding "(A) camera-captured image" from their reference positions (the positions of the component 20 and the component mounting pin 40 in the model image 151).
  • The "(B) learning phase-only correlation information (PoC)" shown in FIG. 10 is input to the learning processing unit 162. Next, the details of the processing executed by the learning processing unit 162 are described with reference to FIG. 11.
  • The learning processing unit 162 receives many sets of the following data: (B) learning phase-only correlation information (PoC), and (C) robot control parameters at the time of successful insertion.
  • The "(B) learning phase-only correlation information (PoC)" is the PoC calculated by the phase-only correlation information (PoC) calculation unit 161 based on the image immediately before component insertion (camera-captured image 121) when the component 20 was successfully inserted into the component mounting pin 40 and the model image 151.
  • The "(C) robot control parameters at the time of successful insertion" are the robot control parameters at the capture timing of the image (camera-captured image 121) used to calculate the associated "(B) learning phase-only correlation information (PoC)".
  • the "(C) robot control parameter when the insertion is successful" is composed of, for example, the position (x, y, z) of the robot arm and the roll, pitch, and yaw (roll, pitch, yaw) indicating the arm posture. Dimensional real-valued vector (action), etc.
  • The learning model 172 generated by the learning processing unit 162 is a learning model with the following settings: the input is phase-only correlation information (PoC), and the output is the robot control parameter target value, that is, the optimum robot control parameter for inserting the component 20 into the component mounting pin 40.
  • The learning process executed by the learning processing unit 162 is so-called machine learning, and various learning algorithms can be applied.
  • For example, the learning process using the deep neural network (DNN) described above can be applied. Specifically, a learning process in which a convolutional neural network (CNN) is combined with BN (Batch Normalization) and ReLU (Rectified Linear Unit) can be applied.
  • In addition, the learning processing unit 162 can be realized as a learning processing unit to which various learning algorithms are applied.
  • The learning processing unit 162 learns the weights of the neural network using, for example, a learning process using a neural network in which the above-mentioned CNN is combined with BN and ReLU, for example by the stochastic gradient descent method. Through this learning process, the learning processing unit 162 generates a learning model 172 that receives as input the learning phase-only correlation information (PoC) generated from the image showing the situation immediately before component insertion and the model image, and outputs the robot control parameter target value, which is the optimum robot control parameter for inserting the component 20 into the component mounting pin 40.
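  • A non-authoritative training sketch of this regression (network sizes, data shapes, and hyperparameters are assumptions): a CNN+BN+ReLU network is fitted by stochastic gradient descent so that learning PoC maps regress to the recorded successful control parameters.

    import torch
    import torch.nn as nn

    model = nn.Sequential(                     # CNN + BatchNorm + ReLU backbone
        nn.Conv2d(1, 16, 3, 2, 1), nn.BatchNorm2d(16), nn.ReLU(),
        nn.Conv2d(16, 32, 3, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 6),                      # -> robot control parameter target value
    )
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)   # stochastic gradient descent
    loss_fn = nn.MSELoss()

    # poc_maps: (N, 1, H, W) learning PoC; actions: (N, 6) robot control
    # parameters recorded at the capture timing of each successful insertion.
    poc_maps = torch.randn(64, 1, 128, 128)    # placeholder data
    actions = torch.randn(64, 6)

    for epoch in range(100):
        optimizer.zero_grad()
        loss = loss_fn(model(poc_maps), actions)   # regression to the parameters
        loss.backward()
        optimizer.step()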
  • FIG. 12 is a diagram illustrating the configuration and processing of the robot control device 180 that controls the robot 100 using the learning model 172.
  • The robot control device 180 has a phase-only correlation information (PoC) calculation unit 181, the learning model 172, and the robot control unit 125.
  • The phase-only correlation information (PoC) calculation unit 181 receives the camera-captured image 121, captured by the camera 120 in the state in which the robot 100 has moved the component 20 onto the component mounting pin 40, and the model image 151 stored in the storage unit 150, and calculates the phase-only correlation information (PoC) 191 from these two images.
  • The camera-captured image 121 and the model image 151 contain two objects, the component 20 and the component mounting pin 40, as subjects. In the phase-only correlation information (PoC) 191 generated by the PoC calculation unit 181, peaks appear at the coordinate positions corresponding to the amount of deviation between the images for each of these two objects. One of the two coordinates (x, y) corresponding to these two peaks corresponds to the deviation of the component 20, and the other corresponds to the deviation of the component mounting pin 40.
  • The phase-only correlation information (PoC) 191 calculated by the phase-only correlation information (PoC) calculation unit 181 is input to the learning model 172.
  • The learning model 172 is the learning model generated by the learning processing device 160 described above with reference to FIG. 9. This learning model 172 has the following settings: the input is phase-only correlation information (PoC), and the output is the robot control parameter target value, that is, the optimum robot control parameter for inserting the component 20 into the component mounting pin 40.
  • The learning model 172 acquires the phase-only correlation information (PoC) 191 calculated by the phase-only correlation information (PoC) calculation unit 181, and outputs the optimum robot control parameter corresponding to the acquired PoC 191, that is, the robot control parameter target value 192 shown in FIG. 12.
  • The robot control parameter target value 192 is input to the robot control unit 125. The robot control unit 125 calculates the difference between the current robot control parameter 126 and the robot control parameter target value 192 and, if there is a difference, corrects (re-controls) the robot 100 so that the difference amount becomes zero or is reduced.
  • By controlling the robot 100 using the robot control parameter target value 192 in this way, the position and orientation of the component 20 attached to the tip of the arm of the robot 100 are corrected and controlled, and the suction of the component 20 is then released, so that the component 20 can be reliably inserted into the component mounting pin 40.
  • The learning model generated by the learning processing device 160 described with reference to FIGS. 9 to 11 and used by the robot control device 180 described with reference to FIG. 12 can be set in various ways.
  • FIG. 13 shows a plurality of setting examples for these data items.
  • Setting example (1) is the following setting.
  • (A) Input data of the learning processing unit of the learning processing device:
    • (a1) phase-only correlation information between the camera-captured image at the time of successful component insertion and the model image
    • (a2) robot control parameters at the capture timing of the camera-captured image at the time of successful component insertion
  • (M) Learning model input/output data of the robot control device:
    • (Min1) learning model input data: phase-only correlation information between the camera-captured image at the component insertion timing and the model image
    • (Mout) learning model output data: the robot control parameter target value
  • This setting example (1) corresponds to the embodiment described with reference to FIGS. 10 to 12.
  • Setting example (2) is the following setting.
  • (A) Input data of the learning processing unit of the learning processing device:
    • (a1) phase-only correlation information between the camera-captured image at the component insertion timing and the model image
    • (a2) robot control parameters at the capture timing of the camera-captured image at the component insertion timing
    • (a3) insertion success/failure flag indicating the success or failure of the component insertion
  • (M) Learning model input/output data of the robot control device:
    • (Min1) learning model input data: phase-only correlation information between the camera-captured image at the component insertion timing and the model image
    • (Mout) learning model output data: the robot control parameter target value
  • That is, in this setting example (2), the learning process in the learning processing device uses not only the phase-only correlation information based on captured images of successful component insertions but also the phase-only correlation information based on captured images of failed component insertions.
  • By including failure information as learning data in this way, the different phase-only correlation information at the time of success and at the time of failure can be classified, and highly accurate learning model output data, that is, the (Mout) robot control parameter target value, can be generated and output.
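  • One hedged reading of how the (a3) success/failure flag could enter the learning process (this architecture is an assumption, not the configuration disclosed here) is a network with a success-classification head next to the parameter-regression head, so that PoC maps from failed insertions help separate the two cases:

    import torch
    import torch.nn as nn

    backbone = nn.Sequential(
        nn.Conv2d(1, 16, 3, 2, 1), nn.BatchNorm2d(16), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )
    regress_head = nn.Linear(16, 6)   # (Mout) robot control parameter target value
    success_head = nn.Linear(16, 1)   # (a3) insertion success/failure flag

    poc = torch.randn(8, 1, 128, 128)             # PoC maps from successes and failures
    params = torch.randn(8, 6)                    # recorded control parameters
    flags = torch.randint(0, 2, (8, 1)).float()   # 1 = success, 0 = failure

    feat = backbone(poc)
    loss = nn.functional.mse_loss(regress_head(feat), params) \
         + nn.functional.binary_cross_entropy_with_logits(success_head(feat), flags)
    loss.backward()   # both objectives shape the shared PoC features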
  • Setting example (3) is the following setting.
  • (A) Input data of the learning processing unit of the learning processing device:
    • (a1) phase-only correlation information between the camera-captured image at the time of successful component insertion and the model image
    • (a2) robot control parameters at the capture timing of the camera-captured image at the time of successful component insertion
  • (M) Learning model input/output data of the robot control device:
    • (Min1) learning model input data: phase-only correlation information between the camera-captured image at the component insertion timing and the model image
    • (Min2) learning model input data: robot control parameters at the capture timing of the camera-captured image
    • (Mout) learning model output data: the difference between the current robot control parameter and the target value
  • In this setting example (3), the (A) input data of the learning processing unit of the learning processing device is the same as in setting example (1), that is, the embodiment described with reference to FIGS. 10 to 12. The differences are that (Min2) the robot control parameters at the capture timing of the camera-captured image are added as learning model input data of the (M) robot control device, and that the learning model output data of the (M) robot control device is (Mout) the difference between the current robot control parameter and the target value, that is, difference data is output.
  • In this case, the robot control unit 125 can correct and control the robot 100 based on the difference data.
  • Setting example (4) is the following setting.
  • (A) Input data of the learning processing unit of the learning processing device:
    • (a1) phase-only correlation information between the camera-captured image at the time of successful component insertion and the model image
    • (a2) robot control parameters at the capture timing of the camera-captured image at the time of successful component insertion
    • (a3) the camera-captured image at the time of successful component insertion
  • (M) Learning model input/output data of the robot control device:
    • (Min1) learning model input data: phase-only correlation information between the camera-captured image at the component insertion timing and the model image
    • (Min2) learning model input data: the camera-captured image
    • (Mout) learning model output data: the difference between the current robot control parameter and the target value
  • This setting example (4) differs from setting example (1), that is, from the input data of the embodiment described with reference to FIGS. 10 to 12, in that (a3) the camera-captured image at the time of successful component insertion is added to the (A) input data of the learning processing unit of the learning processing device, and in that (Min2) the camera-captured image is added as learning model input data of the (M) robot control device.
  • As described above, the phase-only correlation information (PoC) between the camera-captured image at the component insertion timing and the model image contains two objects, the component 20 and the component mounting pin 40, and two peaks appear at the coordinate positions corresponding to the amount of deviation between the images of these two objects.
  • By also receiving (a3) the camera-captured image at the time of successful component insertion, the learning processing unit can easily determine which of the two objects, the component 20 or the component mounting pin 40, each of the two peaks appearing in the phase-only correlation information (PoC) corresponds to.
  • FIG. 14 is a diagram showing a configuration example of the learning processing device 160 corresponding to the setting example (4).
  • The following data are input to the learning processing unit 162 of the learning processing device 160:
    • (a1) the phase-only correlation information (PoC) 171 generated by the PoC calculation unit 161,
    • (a2) the robot control parameters 126 generated by the robot control unit 125 for controlling the robot 100, and
    • (a3) the camera-captured image 121 captured by the camera 120.
  • The learning processing unit 162 executes machine learning processing using these input data.
  • By also receiving the camera-captured image, the learning processing unit 162 can reliably determine which of the two objects, the component 20 or the component mounting pin 40, each of the two peaks appearing in the phase-only correlation information (PoC) corresponds to, enabling highly accurate learning processing.
  • FIG. 15 is a diagram showing a specific example of the input data for the learning processing unit 162 of the learning processing device 160 shown in FIG. 14. As shown in FIG. 15, the learning processing unit 162 receives many sets of the following data: (A) the camera-captured image, (B) the learning phase-only correlation information (PoC), and (C) the robot control parameters at the time of successful insertion.
  • The "(A) camera-captured image" is the image immediately before component insertion when the component 20 was successfully inserted into the component mounting pin 40.
  • The "(B) learning phase-only correlation information (PoC)" is the PoC calculated by the phase-only correlation information (PoC) calculation unit 161 based on the image immediately before component insertion (camera-captured image 121) when the component 20 was successfully inserted into the component mounting pin 40 and the model image 151.
  • The "(C) robot control parameters at the time of successful insertion" are the robot control parameters at the capture timing of the image (camera-captured image 121) used to calculate the associated "(B) learning phase-only correlation information (PoC)".
  • The learning processing unit 162 executes learning processing on these input data and generates the learning model 172.
  • The learning model 172 generated by the learning processing unit 162 is a learning model with the following settings: the inputs are the learning phase-only correlation information (PoC) and the camera-captured image (the camera-captured image immediately before component insertion), and the output is the robot control parameter target value, that is, the optimum robot control parameter for inserting the component 20 into the component mounting pin 40.
  • With this configuration, the learning processing unit 162 can reliably determine which of the two objects, the component 20 or the component mounting pin 40, each of the two peaks appearing in the phase-only correlation information (PoC) corresponds to, enabling highly accurate learning processing.
  • In the embodiment described above, the image captured immediately before the component 20 is mounted on the component mounting pin 40 was used as the image for generating the phase-only correlation information (PoC).
  • However, an image captured at another timing may also be applied as the image input to the phase-only correlation information (PoC) calculation unit of the learning processing device or the robot control device.
  • In this embodiment, the component acquisition position camera-captured image 211 is captured, and the camera 250 captures the image immediately before the robot 100 mounts the component 20 on the component mounting pin 40 to obtain the component mounting position camera-captured image 251 shown in FIG. 16. Two types of phase-only correlation information (PoC) are calculated using these two images captured at different timings, and learning processing is performed.
  • FIG. 17 is a diagram showing the configuration added in this embodiment to the learning processing device 160 of the first embodiment described above with reference to FIG. 9.
  • The phase-only correlation information (PoC) calculation unit 231 of the learning processing device 230 shown in FIG. 17 calculates the phase-only correlation information (PoC) from two images: the component acquisition position camera-captured image 211 and the component acquisition position model image 221 stored in the storage unit 150.
  • The component acquisition position model image 221 stored in the storage unit 150 is an image showing the ideal acquisition state of the component 20, captured in advance.
  • In the phase-only correlation information (PoC) 241 generated by the phase-only correlation information (PoC) calculation unit 231, only one peak is detected, at the coordinate position corresponding to the positional deviation of the component 20 between the two images, as shown in the figure.
  • The learning processing unit 232 executes the learning process using this information.
  • FIG. 18 is a diagram illustrating details of the processing executed by the learning processing unit 232. As shown in FIG. 18, the learning processing unit 232 receives many sets of the following data: (A) the learning phase-only correlation information (PoC) corresponding to the component acquisition position, (B) the learning phase-only correlation information (PoC) corresponding to the component mounting position, and (C) the robot control parameters at the time of successful insertion.
  • The "(A) learning phase-only correlation information (PoC) corresponding to the component acquisition position" is the PoC calculated by the phase-only correlation information (PoC) calculation unit 231 based on the captured image at the component acquisition position (camera-captured image 211) when the component 20 was successfully inserted into the component mounting pin 40 and the component acquisition position model image 221.
  • The "(B) learning phase-only correlation information (PoC) corresponding to the component mounting position" is the PoC calculated by the phase-only correlation information (PoC) calculation unit 161 based on the image immediately before component insertion (camera-captured image 121) when the component 20 was successfully inserted into the component mounting pin 40 and the model image 151.
  • The "(C) robot control parameters at the time of successful insertion" are the robot control parameters at the capture timing of the image (camera-captured image 121) used to calculate the associated "(B) learning phase-only correlation information (PoC)".
  • the "(C) robot control parameter when the insertion is successful" is composed of, for example, the position (x, y, z) of the robot arm and the roll, pitch, and yaw (roll, pitch, yaw) indicating the arm posture. Dimensional real-valued vector (action), etc.
  • A large number of sets of the (A) learning phase-only correlation information (PoC) corresponding to the component acquisition position, the (B) learning phase-only correlation information (PoC) corresponding to the component mounting position, and the (C) robot control parameters at the time of successful insertion are prepared and input to the learning processing unit 232.
  • The learning processing unit 232 executes learning processing on these input data and generates the learning model 242.
  • The learning model 242 generated by the learning processing unit 232 is a learning model with the following settings: the inputs are the phase-only correlation information (PoC) corresponding to the component acquisition position and the phase-only correlation information (PoC) corresponding to the component mounting position, and the output is the robot control parameter target value, that is, the optimum robot control parameter for inserting the component 20 into the component mounting pin 40.
  • The learning process executed by the learning processing unit 232 is so-called machine learning, and various learning algorithms can be applied.
  • For example, the learning process using the deep neural network (DNN) described above can be applied. Specifically, a learning process in which a convolutional neural network (CNN) is combined with BN (Batch Normalization) and ReLU (Rectified Linear Unit) can be applied.
  • In addition, the learning processing unit 232 can be realized as a learning processing unit to which various learning algorithms are applied.
  • The learning processing unit 232 learns the weights of the neural network using, for example, a learning process using a neural network in which the above-mentioned CNN is combined with BN and ReLU, for example by the stochastic gradient descent method.
  • Through this learning process, the learning processing unit 232 generates the learning model 242 set as follows: the inputs are the phase-only correlation information (PoC) corresponding to the component acquisition position and the phase-only correlation information (PoC) corresponding to the component mounting position, and the output is the robot control parameter target value, that is, the optimum robot control parameter for inserting the component 20 into the component mounting pin 40.
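  • A hedged sketch of such a two-input model (the branch architecture and sizes are assumptions, not the disclosed configuration): one branch encodes the component-acquisition-position PoC, another the component-mounting-position PoC, and the merged features are regressed to the robot control parameter target value.

    import torch
    import torch.nn as nn

    def poc_encoder() -> nn.Sequential:
        return nn.Sequential(
            nn.Conv2d(1, 16, 3, 2, 1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    class TwoPocModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc_acquire = poc_encoder()   # PoC corresponding to the acquisition position
            self.enc_mount = poc_encoder()     # PoC corresponding to the mounting position
            self.head = nn.Linear(32, 6)       # -> robot control parameter target value

        def forward(self, poc_acquire: torch.Tensor, poc_mount: torch.Tensor) -> torch.Tensor:
            h = torch.cat([self.enc_acquire(poc_acquire),
                           self.enc_mount(poc_mount)], dim=1)
            return self.head(h)

    model = TwoPocModel()
    out = model(torch.randn(1, 1, 128, 128),
                torch.randn(1, 1, 128, 128))   # -> tensor of shape (1, 6)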
  • FIG. 19 is a diagram illustrating the configuration and processing of the robot control device 260 that controls the robot 100 using the learning model 242.
  • The robot control device 260 has phase-only correlation information (PoC) calculation units 261a and 261b, the learning model 242, and the robot control unit 125.
  • The phase-only correlation information (PoC) calculation unit 261a receives the component acquisition position camera-captured image 211, captured by the camera 210 in the state in which the robot 100 takes the component 20 out of the component box 10, and the component acquisition position model image 221 stored in the storage unit a 150a, and calculates the component-acquisition-position-corresponding phase-only correlation information (PoC) 271a from these two images.
  • Similarly, the phase-only correlation information (PoC) calculation unit 261b receives the component mounting position camera-captured image 251, captured by the camera 250 in the state in which the robot 100 has moved the component 20 onto the component mounting pin 40, and the component mounting position model image 242 stored in the storage unit b 150b, and calculates the component-mounting-position-corresponding phase-only correlation information (PoC) 271b from these two images.
  • The original images from which the component-acquisition-position-corresponding phase-only correlation information (PoC) 271a is generated contain one object, the component 20, as a subject. In the PoC 271a generated by the PoC calculation unit 261a, one peak therefore appears, at the coordinate position corresponding to the amount of deviation between the images for this one object.
  • The original images from which the component-mounting-position-corresponding phase-only correlation information (PoC) 271b is generated contain two objects, the component 20 and the component mounting pin 40, as subjects. In the PoC 271b generated by the PoC calculation unit 261b, peaks therefore appear at the coordinate positions corresponding to the amount of deviation between the images for each of these two objects. One of the two coordinates (x, y) corresponding to these two peaks corresponds to the amount of deviation of the component 20, and the other corresponds to the amount of deviation of the component mounting pin 40.
  • The component acquisition position corresponding phase-limited correlation information (PoC) 271a and the component mounting position corresponding phase-limited correlation information (PoC) 271b calculated by the phase-limited correlation information (PoC) calculation units 261a and 261b are input to the learning model 242.
  • The learning model 242 is the learning model generated by the learning processing unit 232 described above with reference to FIG.
  • This learning model 242 is configured so that its inputs are the phase-limited correlation information (PoC) corresponding to the component acquisition position and the phase-limited correlation information (PoC) corresponding to the component mounting position, and its output is the robot control parameter target value, that is, the optimum robot control parameter for inserting the component 20 into the component mounting pin 40.
  • When the learning model 242 receives the component acquisition position corresponding phase-limited correlation information (PoC) 271a and the component mounting position corresponding phase-limited correlation information (PoC) 271b calculated by the phase-limited correlation information (PoC) calculation units 261a and 261b, it outputs the robot control parameter target value 273 shown in FIG. 19.
  • The robot control parameter target value 273 is input to the robot control unit 125. The robot control unit 125 calculates the difference between the current robot control parameter 126 and the robot control parameter target value 273 and, if there is a difference, corrects and controls the robot 100 so that the difference amount becomes zero or is reduced. By controlling the robot 100 using the robot control parameter target value 273 in this way, the position and orientation of the component 20 attached to the tip of the arm of the robot 100 are corrected, and by then releasing the suctioned component 20, the component 20 can be reliably inserted into the component mounting pin 40.
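  • A minimal sketch of such a difference-driven correction step is given below; the parameter vector layout, gain, and tolerance are illustrative assumptions, not values from this disclosure.

```python
import numpy as np

def correction_step(current_params: np.ndarray,
                    target_params: np.ndarray,
                    gain: float = 0.5,
                    tolerance: float = 1e-3) -> np.ndarray:
    """One control cycle: move the current robot control parameters toward
    the target value output by the learning model, so that the difference
    amount becomes zero or is reduced."""
    difference = target_params - current_params
    if np.linalg.norm(difference) < tolerance:
        return current_params                 # difference is effectively zero
    return current_params + gain * difference

# Example: iterate until the parameters converge on the target value.
params = np.array([0.10, 0.20, 0.05])         # illustrative current parameters
target = np.array([0.12, 0.18, 0.05])         # illustrative target value 273
for _ in range(20):
    params = correction_step(params, target)
```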
  • The learning model generated by the learning processing device can also be updated continuously. For example, while the robot 100 is actually operating in the manufacturing process, new images are continuously captured by the cameras, phase-limited correlation information (PoC) is generated from the captured images, and this PoC is input to the learning processing unit so that the learning process is executed continuously and the learning model is updated.
  • By including failure information as learning data in this way, the different phase-limited correlation information observed at the time of success and at the time of failure can be classified, and a highly accurate learning model output, that is, a robot control parameter target value, can be generated.
  • The parameters are updated by the stochastic gradient descent method of an ordinary neural network. The trained neural network parameters are written to, for example, a file, and the parameters of the network constituting the learning model are updated as appropriate using this file, as sketched below.
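  • The following is a minimal sketch of this file-based update, assuming a PyTorch implementation; the network shape and file name are illustrative, not part of the disclosure.

```python
import torch
import torch.nn as nn

# Illustrative stand-in for the network constituting the learning model:
# two flattened 64x64 PoC maps in, six control parameters out (assumed sizes).
net = nn.Sequential(nn.Linear(2 * 64 * 64, 128), nn.ReLU(), nn.Linear(128, 6))

# After training, the network parameters are written to a file...
torch.save(net.state_dict(), "learning_model_params.pt")

# ...and the network constituting the deployed learning model is later
# refreshed from that file as appropriate.
net.load_state_dict(torch.load("learning_model_params.pt"))
net.eval()
```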
  • The learning process described above corresponds to regression learning, in which the control parameters (actions) of the robot are determined from images taken by the camera; however, it may instead be a reinforcement learning type model that updates the learning model according to a reward.
  • The learning processing device and the robot control device have been described above using phase-limited correlation information. However, the learning processing device and the robot control device of the present disclosure are not limited to phase-limited correlation information; configurations are possible that apply various kinds of image correlation information from which image deviation information can be calculated in pixel units, such as rotation-invariant phase-limited correlation information and image correlation information using image phase spectra.
  • FIG. 20 is a block diagram showing a configuration example of an information processing device constituting the learning processing device and the robot control device of the present disclosure.
  • The CPU (Central Processing Unit) 301 functions as a control unit or data processing unit that executes various processes according to a program stored in the ROM (Read Only Memory) 302 or the storage unit 308; for example, it executes the processes according to the sequences described in the above-described embodiments.
  • The RAM (Random Access Memory) 303 stores the programs executed by the CPU 301 and associated data. The CPU 301, ROM 302, and RAM 303 are connected to one another by a bus 304.
  • The CPU 301 is connected to an input/output interface 305 via the bus 304. The input/output interface 305 is connected to an input unit 306 consisting of various switches, a keyboard, a mouse, a microphone, sensors, and the like, and to an output unit 307 consisting of a display, a speaker, and the like.
  • The CPU 301 executes various processes in response to commands input from the input unit 306 and outputs the processing results to, for example, the output unit 307.
  • The storage unit 308 connected to the input/output interface 305 is composed of, for example, a hard disk, and stores the programs executed by the CPU 301 and various data.
  • The communication unit 309 functions as a transmission/reception unit for Wi-Fi communication, Bluetooth (registered trademark) (BT) communication, and other data communication via a network such as the Internet or a local area network, and communicates with external devices.
  • The drive 310 connected to the input/output interface 305 drives a removable medium 311 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory such as a memory card, and records or reads data.
  • The technology disclosed in the present specification can have the following configurations.
  • (1) A learning processing device having: an image correlation information calculation unit that generates image correlation information between a camera-captured image and a model image; and a learning processing unit that executes a learning process of inputting the image correlation information and generating a learning model that outputs control parameters for controlling the position of an object captured in the camera-captured image.
  • (3) The learning processing device according to (1) or (2), wherein the image correlation information generated by the image correlation information calculation unit is information having a peak at the coordinate position corresponding to the positional deviation of an object included in the camera-captured image and the model image.
  • (4) The learning processing device according to any one of (1) to (3), wherein the camera-captured image is an image containing an object acquired by a robot, and the model image is an image in which the object is at an ideal position.
  • (5) The learning processing device according to any one of (1) to (4), wherein the camera-captured image is an image including two objects, a component object acquired by a robot and a mounting destination object to which the component object is mounted, and the model image is an image in which the two objects have a positional relationship that allows the component object to be reliably mounted on the mounting destination object.
  • (6) The learning processing device according to (5), wherein the image correlation information generated by the image correlation information calculation unit is information having individual peaks at the coordinate positions corresponding to the positional deviations of each of the component object and the mounting destination object included in the camera-captured image and the model image.
  • (7) The learning processing device according to any one of (1) to (6), wherein the learning processing unit executes a learning process of generating the learning model by inputting the image correlation information together with the control parameters for controlling the position of the object at the shooting timing of the camera-captured image.
  • (8) The learning processing device according to any one of (1) to (7), wherein the camera-captured image is an image immediately before a successful process of mounting the component object acquired by the robot on the mounting destination object, and the learning processing unit inputs the image correlation information generated using the image immediately before the successful mounting process and generates a learning model that outputs control parameters for controlling the position of the object captured in the camera-captured image.
  • (9) The learning processing device according to any one of (1) to (8), wherein the camera-captured image includes both an image immediately before a successful process of mounting the component object acquired by the robot on the mounting destination object and an image immediately before a failed one, and the learning processing unit inputs the image correlation information generated using the image immediately before the success, the image correlation information generated using the image immediately before the failure, and success/failure information of the mounting process, and generates a learning model that outputs control parameters for controlling the position of the object captured in the camera-captured image.
  • (10) The learning processing device according to any one of (1) to (9), wherein the learning processing unit executes the learning process using a deep neural network (DNN).
  • (11) A robot control device having: a robot control unit that controls a robot; an image correlation information calculation unit that generates image correlation information between a camera-captured image of an object acquired by the robot and a model image in which the object is arranged at an ideal position; and a learning model that inputs the image correlation information and outputs a control parameter target value for controlling the position of the object captured in the camera-captured image, wherein the robot control unit controls the robot using the control parameter target value that is the output of the learning model.
  • (13) The robot control device according to (11) or (12), wherein the image correlation information generated by the image correlation information calculation unit is information having a peak at the coordinate position corresponding to the positional deviation of an object included in the camera-captured image and the model image.
  • (14) The robot control device according to any one of (11) to (13), wherein the camera-captured image is an image including two objects, a component object acquired by the robot and a mounting destination object to which the component object is mounted, and the model image is an image in which the two objects have a positional relationship that allows the component object to be reliably mounted on the mounting destination object.
  • (15) The robot control device according to (14), wherein the image correlation information generated by the image correlation information calculation unit is information having individual peaks at the coordinate positions corresponding to the positional deviations of each of the component object and the mounting destination object included in the camera-captured image and the model image.
  • (16) A learning processing method having: an image correlation information calculation step in which an image correlation information calculation unit generates image correlation information between a camera-captured image and a model image; and a step in which a learning processing unit executes a learning process of inputting the image correlation information and generating a learning model that outputs control parameters for controlling the position of an object captured in the camera-captured image.
  • (17) A robot control method having: a step in which a robot control unit controls a robot; an image correlation information calculation step in which an image correlation information calculation unit generates image correlation information between a camera-captured image of an object acquired by the robot and a model image in which the object is arranged at an ideal position; and a step in which the robot control unit controls the robot using a control parameter target value that is an output of a learning model receiving the image correlation information as input.
  • (18) A program that causes a learning processing device to execute learning processing, the program causing an image correlation information calculation unit to execute an image correlation information calculation step of generating image correlation information between a camera-captured image and a model image, and causing a learning processing unit to execute a step of inputting the image correlation information and executing a learning process of generating a learning model that outputs control parameters for controlling the position of an object captured in the camera-captured image.
  • (19) A program that causes a robot control device to execute robot control processing, the program causing a robot control unit to execute a step of controlling a robot, causing an image correlation information calculation unit to execute an image correlation information calculation step of generating image correlation information between a camera-captured image of an object acquired by the robot and a model image in which the object is arranged at an ideal position, and causing the robot control unit to execute a step of controlling the robot using a control parameter target value that is an output of a learning model receiving the image correlation information as input.
  • The series of processes described in the specification can be executed by hardware, software, or a combined configuration of both.
  • The program can be pre-recorded on a recording medium.
  • The various processes described in the specification may be executed not only in chronological order according to the description, but also in parallel or individually, depending on the processing capacity of the device executing the processes or as required.
  • In the specification, a system is a logical set configuration of a plurality of devices, and the constituent devices are not limited to being in the same housing.
  • As described above, the configuration of one embodiment of the present disclosure realizes a device and a method that execute learning processing and robot control processing using phase-limited correlation information. Specifically, a phase-limited correlation information calculation unit generates phase-limited correlation information between a camera-captured image and a model image, and a learning model is generated that receives this phase-limited correlation information as input and outputs control parameters for controlling the position of an object captured in the camera-captured image. The robot control device controls the robot using the generated learning model: phase-limited correlation information between a camera-captured image of the object acquired by the robot and a model image in which the object is arranged at an ideal position is input to the learning model, and the robot is controlled using the control parameter target value obtained as the output of the learning model.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention concerns a device and a method for executing robot control processing and learning processing using phase-limited correlation information. The present invention comprises a phase-limited correlation information calculation unit that generates phase-limited correlation information between a camera-captured image and a model image. A learned model is generated that receives the phase-limited correlation information as input and outputs a control parameter used to control the position of an object captured in the camera-captured image. The present invention concerns a robot control device that uses the generated learned model to execute robot control. The phase-limited correlation information between the camera-captured image of an object taken by the robot and the model image in which the object is arranged at an ideal position is input to the learned model, and a control parameter target value obtained as output from the learned model is used to control the robot.
PCT/JP2021/002976 2020-03-02 2021-01-28 Dispositif de traitement d'apprentissage, dispositif et procédé de commande de robot, et programme WO2021176902A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-034754 2020-03-02
JP2020034754 2020-03-02

Publications (1)

Publication Number Publication Date
WO2021176902A1 true WO2021176902A1 (fr) 2021-09-10

Family

ID=77614230

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/002976 WO2021176902A1 (fr) 2020-03-02 2021-01-28 Dispositif de traitement d'apprentissage, dispositif et procédé de commande de robot, et programme

Country Status (1)

Country Link
WO (1) WO2021176902A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017090983A (ja) * 2015-11-03 2017-05-25 株式会社デンソーアイティーラボラトリ 画像処理方法及び画像処理装置
WO2018146769A1 (fr) * 2017-02-09 2018-08-16 三菱電機株式会社 Dispositif de commande de position et procédé de commande de position
JP2019057250A (ja) * 2017-09-22 2019-04-11 Ntn株式会社 ワーク情報処理装置およびワークの認識方法
JP2019091138A (ja) * 2017-11-13 2019-06-13 株式会社日立製作所 画像検索装置、画像検索方法、及び、それに用いる設定画面

Similar Documents

Publication Publication Date Title
Tang et al. A framework for manipulating deformable linear objects by coherent point drift
US10857673B2 (en) Device, method, program and recording medium, for simulation of article arraying operation performed by robot
EP3171236B1 (fr) Simulateur, procédé de simulation et programme de simulation
EP3357649A2 (fr) Dispositif de commande, robot et système de robot
US20180222048A1 (en) Control device, robot, and robot system
EP0291965B1 (fr) Méthode et système de commande de robots pour l'assemblage de produits
CN111565895B (zh) 机器人系统及机器人控制方法
EP3733355A1 (fr) Système et procédé d'optimisation de mouvement de robot
JP2017094407A (ja) シミュレーション装置、シミュレーション方法、およびシミュレーションプログラム
Chatzilygeroudis et al. Benchmark for bimanual robotic manipulation of semi-deformable objects
EP3828654A1 (fr) Système de commande, organe de commande et procédé de commande
Fu et al. Active learning-based grasp for accurate industrial manipulation
WO2020095735A1 (fr) Dispositif de commande de robot, procédé de simulation et programme de simulation
CN113412178A (zh) 机器人控制装置、机器人系统以及机器人控制方法
El Zaatari et al. iTP-LfD: Improved task parametrised learning from demonstration for adaptive path generation of cobot
Jha et al. Imitation and supervised learning of compliance for robotic assembly
WO2021176902A1 (fr) Dispositif de traitement d'apprentissage, dispositif et procédé de commande de robot, et programme
Luqman et al. Chess brain and autonomous chess playing robotic system
WO2020142498A1 (fr) Robot à mémoire visuelle
Bobka et al. Development of an automated assembly process supported with an artificial neural network
WO2020022040A1 (fr) Système de commande, procédé de commande et programme
JPH08118272A (ja) ロボットのキャリブレーション方法
US20230130816A1 (en) Calibration system, calibration method, and calibration apparatus
Lin et al. Inference of 6-DOF robot grasps using point cloud data
CN112533739B (zh) 机器人控制装置、机器人控制方法以及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21765301

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21765301

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP