CN109927033B - Target object dynamic adaptation method applied to conveyor belt sorting - Google Patents

Target object dynamic adaptation method applied to conveyor belt sorting

Info

Publication number
CN109927033B
CN109927033B (application number CN201910255347.3A)
Authority
CN
China
Prior art keywords
target object
pictures
personal computer
coordinate system
conveyor belt
Prior art date
Legal status
Active
Application number
CN201910255347.3A
Other languages
Chinese (zh)
Other versions
CN109927033A (en)
Inventor
蔡修秀
曾静
Current Assignee
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201910255347.3A priority Critical patent/CN109927033B/en
Publication of CN109927033A publication Critical patent/CN109927033A/en
Application granted granted Critical
Publication of CN109927033B publication Critical patent/CN109927033B/en

Landscapes

  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a target object dynamic adaptation method applied to conveyor belt sorting, comprising at least the following steps: step S1: arranging a machine vision unit in a sorting control system and acquiring image information of a target object on a conveyor belt in real time; step S2: inputting the image information into a neural network, matching it against a pre-trained recognition model, and recognizing the category and position of the target object in real time; step S3: controlling the robot arm according to the category and position of the target object, so that its motion plan is adaptively adjusted to the actual state of the target object. Compared with the prior art, the invention combines machine vision with robot arm control and trains a deep neural network model in advance, so that the robot arm can be controlled from the image information acquired by machine vision and its travel trajectory adaptively adjusted to the actual condition of the target object, thereby realizing sorting of the target object and dynamic embedding and adaptation of the target object model.

Description

Target object dynamic adaptation method applied to conveyor belt sorting
Technical Field
The invention relates to the technical field of industrial automation, in particular to a dynamic target object adaptation method applied to conveyor belt sorting.
Background
In recent years, robots have been widely used in industry, but industrial robots basically operate according to pre-programmed programs and are highly customized for repetitive tasks such as carrying, painting, and welding. These robot arm operations require fixed timing: if the workpiece deviates from the fixed trajectory or the product type changes during the process, the robot arm control system still executes the original program and cannot adapt to the change.
Therefore, it is necessary to provide a technical solution to solve the technical problems of the prior art.
Disclosure of Invention
In view of the above, it is necessary to provide a target object dynamic adaptation method applied to conveyor belt sorting. The method combines machine vision with robot arm control, takes the target object on the conveyor belt as the carrier, trains a deep neural network model in advance, and controls the robot arm according to the image information obtained by machine vision, so that the robot arm can adaptively adjust its travel trajectory according to the actual situation of the target object and thereby sort it; this realizes dynamic embedding and adaptation of the target object model.
In order to solve the technical problems in the prior art, the technical scheme of the invention is as follows:
a target object dynamic adaptation method applied to conveyor belt sorting at least comprises the following steps:
step S1: a machine vision unit is arranged in a sorting control system and image information of a target object on a conveyor belt is acquired in real time;
step S2: inputting the image information into a neural network to be matched with a pre-trained recognition model and recognizing the category and position information of the target object in real time;
step S3: controlling the robot arm according to the category and the position information of the target object so as to adaptively adjust the motion plan according to the actual state of the target object;
in step S2, the training of the recognition model using the labeled image information in advance further includes:
step S201: establishing a unified coordinate system in a sorting control system and calibrating a machine vision unit and a machine arm, wherein the machine vision coordinate system is established when the machine vision unit is calibrated, the machine arm coordinate system is established when the machine arm is calibrated, and a coordinate conversion matrix between the two coordinate systems is obtained through calculation;
step S202: shooting the target object to be sorted in different scenes to obtain a large number of sample pictures;
step S203: training a recognition model with the collected sample pictures and obtaining a model file capable of recognizing the target object.
As a further improvement, in step S3, the trained model file of the target object is embedded into an industrial personal computer for recognition of the target object to be sorted.
As a further improvement, the machine vision unit is a plurality of industrial cameras disposed at specific locations.
As a further improvement, the industrial cameras synchronously photograph the target object on the conveyor belt at regular intervals and transmit the captured pictures to the industrial personal computer in real time; the industrial personal computer runs the recognition model, recognizes the two-dimensional coordinates of the target object in the picture, converts them through the coordinate conversion matrix into three-dimensional coordinates in the unified coordinate system, and uses those coordinates as the target position for planning the motion of the robot arm.
As a further development, a signal device is provided, which is used to generate a synchronization signal for the plurality of industrial cameras to acquire pictures synchronously.
As a further improvement, a plurality of pairs of industrial cameras arranged around the conveyor belt acquire pictures of the whole process of the target object being conveyed on the belt, and the industrial personal computer computes from these pictures a time-ordered motion sequence of the target object.
As a further improvement, the recognition model adopts a deep neural network.
Compared with the prior art, the method allows the sorted target object to be changed simply by dynamically importing a new target object model into the sorting control system; the model is used to recognize, in real time, the target object and its position in the pictures taken by the industrial cameras, and this information adaptively controls the robot arm to grab the target object and thus accomplish sorting.
Drawings
Fig. 1 is a flow chart of a target object dynamic adaptation method applied to conveyor belt sorting.
Fig. 2 is a block diagram of a system implementing the method of the present invention.
FIG. 3 is a block diagram of a process for unified coordinate system calibration according to the present invention.
FIG. 4 is a block diagram of a process for training the recognition model of the present invention.
FIG. 5 is a block flow diagram of the recognition model execution of the present invention.
FIG. 6 is a diagram illustrating information for model execution in the present invention.
The following specific embodiments will further illustrate the invention in conjunction with the above-described figures.
Detailed Description
The technical solution provided by the present invention will be further explained with reference to the accompanying drawings.
Referring to fig. 1, there is shown a flow chart of a method for dynamically adapting a target object for sorting on a conveyor belt according to the present invention, which at least includes the following steps:
step S1: a machine vision unit is arranged in a sorting control system and image information of a target object on a conveyor belt is acquired in real time;
step S2: inputting the image information into a neural network to be matched with a pre-trained recognition model and recognizing the category and position information of the target object in real time;
step S3: controlling the robot arm according to the category and the position information of the target object so as to adaptively adjust the motion plan according to the actual state of the target object;
in step S2, the training of the recognition model using the labeled image information in advance further includes:
step S201: establishing a unified coordinate system in a sorting control system, calibrating a machine vision unit and a machine arm, establishing a machine vision coordinate system when calibrating the machine vision unit, establishing a machine arm coordinate system when calibrating the machine arm, and calculating to obtain a coordinate conversion matrix between the two coordinate systems;
step S202: shooting the target object to be sorted in different scenes to obtain a large number of sample pictures;
step S203: training a recognition model with the collected sample pictures and obtaining a model file capable of recognizing the target object.
Referring to fig. 2, there is shown a block diagram of the system of the present invention, which includes a plurality of conveyor belts, a plurality of industrial cameras, an industrial personal computer, a signal device, a robot arm, and a plurality of objects. Objects moving with the belts are placed on the conveyor belts (1) and (2). The industrial personal computer (7) sends a signal to the signal device (8), the signal device (8) sends a synchronous trigger signal to the industrial cameras of the same group, and the cameras take one picture each time a signal is sent. The industrial cameras (3), (4), (5), and (6) take pictures and transmit them back to the industrial personal computer (7); together, their shooting ranges cover the whole conveyor belt. The industrial personal computer (7) sends commands to the robot arm (9) to move it to the specified position and grab the object.
In this technical scheme, the recognition of the objects on the conveyor belt is decoupled from the sorting control system: pictures of the target objects to be sorted, taken in different scenes, are fed into an artificial neural network for off-line learning to obtain a model file of the target object, and this model file is then embedded into the industrial personal computer of the sorting control system. After the model file of the object to be sorted has been embedded, the sorting control system periodically photographs the object with the industrial cameras as it is conveyed on the belt; each picture is immediately sent to the industrial personal computer for feature matching against the embedded model file, the object to be sorted and its position are recognized, and the position is used as the target position for controlling the motion of the robot arm. The sorting control system comprises industrial cameras, an industrial personal computer, a signal device, a robot arm, and conveyor belts. The specific process is as follows:
First, a unified coordinate system is established in the sorting control system: the industrial cameras and the robot arm are calibrated, a machine vision coordinate system is established when calibrating the cameras, a robot arm coordinate system is established when calibrating the arm, and a coordinate transformation matrix between the two coordinate systems is obtained by calculation. The visual coordinates of the target object computed from the camera pictures are then converted into robot arm coordinates through this coordinate transformation matrix.
Recognition of the target object relies on artificial neural network technology: a large number of pictures of the target object to be sorted are taken in different scenes, training is then performed with these pictures on a personal computer, and a model file capable of recognizing the target object is obtained when training is finished.
Then, the trained target object model file is embedded into the industrial personal computer for recognizing the target object to be sorted. The sorting control system uses the industrial cameras to photograph the target object on the conveyor belt synchronously at regular intervals and immediately transmits the pictures to the industrial personal computer; the industrial personal computer runs the artificial neural network, recognizes the target object to be sorted in the picture using the model file, and obtains its two-dimensional coordinates in the picture; these are then converted, through the coordinate conversion matrix established in the previous step, into three-dimensional coordinates in the unified coordinate system, which serve as the target position for planning the motion of the robot arm.
Finally, several pairs of industrial cameras arranged along the conveyor belt collect pictures of the whole process of the target object being conveyed on the belt, from which the industrial personal computer computes a time-ordered motion sequence of the target object. The sorting control system can use this motion sequence for motion trajectory fitting, trajectory prediction, and similar purposes; the resulting final position information is output to the robot arm, which grabs the target object and thereby performs the sorting function.
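By way of a non-limiting illustration, the acquisition-recognition-command cycle just described can be sketched in Python as follows. The object interfaces (capture, detect, pixels_to_robot, move_and_grab) and the cycle period are hypothetical placeholders for the camera, model, calibration, and robot-arm components, not part of this disclosure.

```python
import time

CYCLE_PERIOD_S = 0.5  # shooting interval T; value assumed for illustration only

def sorting_cycle(cameras, recognizer, transform, robot):
    """One acquisition-recognition-command loop of the sorting control system (sketch)."""
    while True:
        # 1. The signal device triggers all cameras of the group at the same instant.
        pictures = [cam.capture() for cam in cameras]

        # 2. The embedded recognition model returns the category and 2D pixel position
        #    of the target object in each picture.
        detections = [recognizer.detect(p) for p in pictures]

        # 3. The matched 2D detections are converted into one 3D point in the unified
        #    (robot-arm) coordinate system via the calibration matrices.
        target_xyz = transform.pixels_to_robot(detections)

        # 4. The 3D point becomes the target of the arm's adaptively planned motion.
        if target_xyz is not None:
            robot.move_and_grab(target_xyz)

        time.sleep(CYCLE_PERIOD_S)
```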
The following describes the specific implementation process of the inventive concept by way of example.
Referring to fig. 3, which shows the flow of unified coordinate system calibration in the present invention: a calibration board with alternating black and white squares is prepared; after the camera is fixed, n (n ≥ 9) pictures of the calibration board at different positions in space are taken, for example with the board placed on a 3-row by 3-column grid of positions (S11); the pictures are transmitted to the industrial personal computer, which extracts the vertices of the black and white squares of the calibration board as calibration feature points (S12), thereby obtaining the two matrices, M and K, that convert the world coordinates of a real spatial position into two-dimensional coordinates in the picture (S13) and establishing the machine vision coordinate system. Then m (m ≥ 9) pictures of the robot arm at different positions are taken, the positions again arranged as a 3-row by 3-column grid (S14); the pictures are transmitted to the industrial personal computer, a feature point is selected and extracted from each picture (S15), and the robot arm coordinate system is calculated. Finally, the matrix R describing the transformation of an object position into the robot arm coordinate system is derived from the feature points on the robot arm (S16). The three transformation matrices M, K, and R obtained above are used to convert two-dimensional coordinate points in subsequent pictures into three-dimensional coordinate points in the unified coordinate system.
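A minimal sketch of the checkerboard calibration of steps S11 to S13, using the OpenCV library in Python, is given below for illustration only; the board geometry, file paths, and number of poses are assumptions, and the recovered intrinsic matrix together with the per-pose extrinsics play the role of the matrices K and M described above.

```python
import glob
import cv2
import numpy as np

# Board geometry assumed for illustration: 7x7 inner corners, 25 mm squares.
PATTERN = (7, 7)
SQUARE_MM = 25.0

# 3D corner coordinates of the board in its own plane (Z = 0).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_points, img_points = [], []
for path in sorted(glob.glob("calib/*.png")):          # n >= 9 board poses (e.g. a 3x3 grid)
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# K is the intrinsic matrix; each rvec/tvec pair gives the extrinsic pose (the role of M).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection error:", rms)
print("intrinsic matrix K:\n", K)
```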
Referring to fig. 4, which shows the flow of training the recognition model in the present invention, the process of training and extracting a model of the target object to be sorted is as follows:
An artificial neural network framework is set up on a personal computer, and a large number of pictures of the target object to be sorted, taken from multiple angles and against multiple backgrounds, are input (S21). The number of training epochs is set to m, the batch size is set to n, the pictures are fed into the neural network framework, and model training is performed on the target object (S22) to generate a model file of the target object (S23). Image prediction is then performed on the object to be sorted (S24), yielding a confidence with which the framework recognizes the object. If the confidence reaches the threshold, training is finished: the training program obtains the neural network model stored in an in-memory data structure, including the network layer structure, the activation functions, and the neuron parameter weights; this model is compressed and reformatted, converting the in-memory data structure into a hierarchical-data-format file, i.e. the target object model file that can be embedded in the sorting control system (S25). Otherwise, the process returns to S21 and training continues.
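For illustration only, the training loop of steps S21 to S25 might look like the following Python sketch using the Keras API of TensorFlow (the patent does not name a specific framework). The directory layout, image size, backbone, epoch and batch values are assumptions, and only the classification part is shown; recognizing the target's two-dimensional position in the picture would additionally require a detection head. The saved .h5 file corresponds to the hierarchical-data-format model file mentioned above.

```python
import tensorflow as tf

# Assumed for illustration: per-class image folders, 224x224 inputs, a frozen
# MobileNetV2 backbone, and HDF5 (.h5) as the "hierarchical data format" file.
IMG_SIZE = (224, 224)
BATCH = 32    # "batch n"
EPOCHS = 20   # "training times m"

train_ds = tf.keras.utils.image_dataset_from_directory(
    "samples/train", image_size=IMG_SIZE, batch_size=BATCH)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "samples/val", image_size=IMG_SIZE, batch_size=BATCH)
num_classes = len(train_ds.class_names)

backbone = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
backbone.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # map [0, 255] to [-1, 1]
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=EPOCHS)

# Once the validation confidence clears the chosen threshold, export the
# embeddable model file for the industrial personal computer (step S25).
model.save("target_object_model.h5")
```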
Referring to fig. 5, which shows the flow of recognition model execution in the present invention. First, the embeddable target object model file is transmitted from the personal computer at the training end to the industrial personal computer as a binary stream, and the industrial personal computer rewrites the stream into a file. After the transfer finishes, the control program of the industrial personal computer starts a new process to preheat the model: the hierarchical-data-format file of the target object model is read into memory and de-formatted, the compressed stored data are decompressed, and each network layer, its configured activation functions, and the parameter weights are reconstructed into neurons, neuron connections, and the network framework, so that the neural network obtained by training is reproduced in the new process of the industrial personal computer and loaded as a neural network model. Once the loading method has been called, the process is ready, and the image recognition work on the target object can begin (S31).
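A hedged sketch of this preheating step, assuming the model file is the Keras/HDF5 file from the previous sketch: the industrial personal computer spawns a new process, loads and de-serializes the model there, and runs one dummy prediction so the process is ready before real pictures arrive. The queue-based interface and the input shape are assumptions made for illustration.

```python
import multiprocessing as mp
import numpy as np
import tensorflow as tf

MODEL_PATH = "target_object_model.h5"   # file received from the training-side PC

def recognition_worker(frame_queue, result_queue):
    """Runs in its own process: load ("preheat") the model once, then serve pictures."""
    model = tf.keras.models.load_model(MODEL_PATH)   # rebuild layers, activations, weights
    model.predict(np.zeros((1, 224, 224, 3), np.float32), verbose=0)  # warm-up pass
    while True:
        frame = frame_queue.get()
        if frame is None:                 # sentinel used to shut the worker down
            break
        result_queue.put(model.predict(frame[None, ...], verbose=0))

if __name__ == "__main__":
    frames, results = mp.Queue(), mp.Queue()
    worker = mp.Process(target=recognition_worker, args=(frames, results), daemon=True)
    worker.start()   # the process is "ready" once the warm-up prediction has returned
```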
Then the industrial personal computer sends a picture-taking signal to the signal device, and the signal device sends the synchronous trigger signal to the left and right industrial cameras (such as the industrial cameras (3) and (4) in fig. 2) (S32), so that the left and right cameras each take one picture and send it back to the industrial personal computer; pictures are taken at intervals of time T (S33).
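Assuming, purely for illustration, that the signal device accepts a one-byte trigger over a serial line, the timed synchronous triggering of steps S32 and S33 might be sketched as follows; the port name, baud rate, trigger byte, and interval are all assumptions rather than details of this disclosure.

```python
import time
import serial  # pyserial

TRIGGER_BYTE = b"\x01"   # trigger protocol assumed for illustration
PERIOD_S = 0.2           # shooting interval T (assumed value)

def run_trigger_loop(port="COM3", baud=9600, n_shots=100):
    """Industrial-PC side: pulse the signal device so both cameras of a pair expose together."""
    with serial.Serial(port, baud, timeout=1) as dev:
        for _ in range(n_shots):
            dev.write(TRIGGER_BYTE)   # the signal device fans this out to the left/right cameras
            time.sleep(PERIOD_S)
```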
Then the target object is recognized and its two-dimensional coordinates are converted into three-dimensional coordinates (for convenience, only the first shot of the industrial cameras (3) and (4) is taken as an example). The captured pictures are transmitted back to the industrial personal computer, the embedded target object model is matched against the objects in the pictures to recognize the target object to be sorted, and the two-dimensional coordinates of the target object in the two pictures are obtained, denoted P_l1-1(X_l1-1, Y_l1-1) and P_r1-1(X_r1-1, Y_r1-1) respectively (S34). Using the three matrices K, M, and R obtained by calibration and the two-dimensional coordinates extracted from the two pictures, this information on the sorting target is converted into the three-dimensional coordinate P_1-1(X_1-1, Y_1-1, Z_1-1) in the robot arm coordinate system (S35).
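Steps S34 and S35 amount to stereo triangulation followed by a change of coordinate frame. The sketch below uses OpenCV under the assumption that the calibration of fig. 3 yields a 3x4 projection matrix for each camera and a 4x4 vision-to-arm transform; the function and parameter names are illustrative only.

```python
import cv2
import numpy as np

def pixel_pair_to_robot_xyz(pt_left, pt_right, P_left, P_right, T_cam_to_robot):
    """Triangulate one matched detection from the left/right pictures and express it
    in the robot-arm coordinate system.

    pt_left, pt_right : (x, y) pixel coordinates of the target in each picture
    P_left, P_right   : 3x4 projection matrices of the two cameras (built from K and M)
    T_cam_to_robot    : 4x4 homogeneous vision-to-arm transform (the role of matrix R)
    """
    pl = np.asarray(pt_left, np.float64).reshape(2, 1)
    pr = np.asarray(pt_right, np.float64).reshape(2, 1)

    # Homogeneous 4x1 point in the machine-vision coordinate system.
    X_h = cv2.triangulatePoints(P_left, P_right, pl, pr)
    X_cam = (X_h[:3] / X_h[3]).ravel()

    # Re-express the point in the robot-arm coordinate system.
    X_robot = T_cam_to_robot @ np.append(X_cam, 1.0)
    return X_robot[:3]
```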
Then the industrial cameras (3), (4) and (5), (6) shoot over a period of time at the fixed interval, yielding several three-dimensional coordinate sequences. As shown in fig. 6, the cameras (3), (4) produce a chronological point sequence R1 (P_1-1, P_1-2, P_1-3, P_1-4, ...), and likewise the cameras (5), (6) produce a chronological point sequence R2 (P_2-1, P_2-2, P_2-3, P_2-4, ...) (S36). The points of both sequences lie in the same coordinate system, and the tail of sequence R1 partially overlaps the head of sequence R2, so a continuous sequence R (P_1-1, P_1-2, P_1-3, P_1-4, ..., P_2-1, P_2-2, P_2-3, P_2-4, ...) of the moving points of the sorting target can be obtained (S37).
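One simple way to splice the two overlapping sequences of step S37 is to append only those points of R2 that are not already close to the tail of R1; the distance tolerance below is an assumed value, and the sketch is illustrative only.

```python
import numpy as np

def merge_sequences(r1, r2, overlap_tol=5.0):
    """Concatenate two chronological 3D point sequences whose tail/head overlap.

    r1, r2      : lists of (x, y, z) points in the same unified coordinate system
    overlap_tol : distance below which a point at the head of r2 is treated as a
                  repeat of a point near the tail of r1 (assumed value)
    """
    merged = list(r1)
    tail = np.asarray(r1[-min(len(r1), 5):])          # last few points of R1
    for p in r2:
        if np.linalg.norm(tail - np.asarray(p), axis=1).min() > overlap_tol:
            merged.append(p)                          # keep only points not already seen
    return merged
```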
Finally, the motion trajectory of the target object on the conveyor belt is predicted. Knowing the point sequence R and the time interval T between consecutive points, the motion speed of the sorted target can be calculated, so the complete trajectory can be predicted by fitting the earlier part of the track; the grasp point of the sorting target is predicted from the pattern of the object's motion speed, and the industrial personal computer sends a command to the robot arm to grab the target.
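Under the constant-belt-speed assumption implied above, this prediction can be sketched by fitting each coordinate of the observed sequence R linearly in time and extrapolating to the moment the arm is expected to reach the object; the lead time and function names are illustrative assumptions.

```python
import numpy as np

def predict_grab_point(points, period_s, lead_time_s):
    """Estimate the belt velocity from the observed sequence R and extrapolate the
    position where the arm should grab the target.

    points      : list of (x, y, z) positions sampled every period_s seconds
    period_s    : shooting interval T between consecutive points
    lead_time_s : how far ahead of the last observation to predict the grasp point
    """
    pts = np.asarray(points, np.float64)
    t = np.arange(len(pts)) * period_s

    # Fit x(t), y(t), z(t) as straight lines (constant belt-speed assumption).
    coeffs = [np.polyfit(t, pts[:, axis], 1) for axis in range(3)]
    velocity = np.array([c[0] for c in coeffs])       # units of the coordinates per second

    t_grab = t[-1] + lead_time_s
    grab_xyz = np.array([np.polyval(c, t_grab) for c in coeffs])
    return grab_xyz, velocity
```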
The above description of the embodiments is only intended to facilitate the understanding of the method of the invention and its core idea. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (1)

1. A method for dynamic adaptation of target objects for sorting on a conveyor, characterized in that it comprises at least the following steps:
step S1: a machine vision unit is arranged in a sorting control system and image information of a target object on a conveyor belt is acquired in real time;
step S2: inputting the image information into a neural network to be matched with a pre-trained recognition model and recognizing the category and position information of the target object in real time;
step S3: controlling the robot arm according to the category and the position information of the target object so as to adaptively adjust the motion plan according to the actual state of the target object;
in step S2, the training of the recognition model using the labeled image information in advance further includes:
step S201: establishing a unified coordinate system in a sorting control system and calibrating a machine vision unit and a machine arm, wherein the machine vision coordinate system is established when the machine vision unit is calibrated, the machine arm coordinate system is established when the machine arm is calibrated, and a coordinate conversion matrix between the two coordinate systems is obtained through calculation;
step S202: shooting a target object to be sorted in different scenes to obtain a large number of pictures;
step S203: training a recognition model with the collected sample pictures and obtaining a model file capable of recognizing the target object;
in step S201, a calibration board with black and white checkers alternating is used to obtain a coordinate transformation matrix between two coordinate systems, which includes the following steps:
step S11: a camera at a fixed position shoots n calibration plate pictures at different positions in space and transmits the n calibration plate pictures to an industrial personal computer;
step S12: the industrial personal computer extracts the vertexes of the black and white squares of the calibration plate as calibration characteristic points;
step S13: acquiring two matrixes of two-dimensional coordinate conversion extracted from a picture from a world coordinate system at a real space position: m and K, establishing a machine vision coordinate system;
step S14: shooting m pictures of the robot arms at different positions and transmitting the pictures to the industrial personal computer;
step S15: selecting a characteristic point, extracting the characteristic point from each picture, and calculating to obtain a mechanical arm coordinate system;
step S16: deriving, from the feature points on the robot arm, the matrix R that describes the transformation of an object position into the robot arm coordinate system; the three transformation matrices M, K, and R thus obtained are used to convert two-dimensional coordinate points in subsequent pictures into three-dimensional coordinate points in the unified coordinate system;
in the step S3, embedding the trained model file of the target object into an industrial personal computer for identifying the target object to be sorted;
the machine vision unit is a plurality of industrial cameras arranged at specific positions; the industrial camera carries out timing synchronous shooting on a target object on the conveyor belt and transmits shot pictures to the industrial personal computer in real time;
the industrial personal computer executes the identification model, identifies the two-dimensional coordinates of the target object in the picture, obtains three-dimensional coordinates under a unified coordinate system through a coordinate conversion matrix, and uses the three-dimensional coordinates as a target position for mechanical arm motion planning;
setting a signal device, wherein the signal device is used for generating a synchronous signal so that a plurality of industrial cameras synchronously acquire pictures;
a plurality of pairs of industrial cameras arranged on the periphery of the conveyor belt are used for acquiring pictures of the whole process of the target object transmitted on the conveyor belt, and the industrial personal computer calculates the pictures to obtain a target object motion sequence with a time sequence; the recognition model adopts a deep neural network.
CN201910255347.3A 2019-04-01 2019-04-01 Target object dynamic adaptation method applied to conveyor belt sorting Active CN109927033B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910255347.3A CN109927033B (en) 2019-04-01 2019-04-01 Target object dynamic adaptation method applied to conveyor belt sorting

Publications (2)

Publication Number Publication Date
CN109927033A CN109927033A (en) 2019-06-25
CN109927033B 2021-02-26

Family

ID=66988920

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910255347.3A Active CN109927033B (en) 2019-04-01 2019-04-01 Target object dynamic adaptation method applied to conveyor belt sorting

Country Status (1)

Country Link
CN (1) CN109927033B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111002317A (en) * 2019-11-20 2020-04-14 希美埃(芜湖)机器人技术有限公司 Novel spraying method and novel spraying device based on robot vision in door and window spraying industry
CN111015662B (en) * 2019-12-25 2021-09-07 深圳蓝胖子机器智能有限公司 Method, system and equipment for dynamically grabbing object and method, system and equipment for dynamically grabbing garbage
CN111460909A (en) * 2020-03-09 2020-07-28 兰剑智能科技股份有限公司 Vision-based goods location management method and device
CN111421539A (en) * 2020-04-01 2020-07-17 电子科技大学 Industrial part intelligent identification and sorting system based on computer vision
WO2022067665A1 (en) * 2020-09-30 2022-04-07 西门子(中国)有限公司 Coordinate transformation method, apparatus, and system, program and electronic device thereof
CN115816469B (en) * 2023-02-21 2023-05-05 北京科技大学 Cloud PLC control material sorting method and system based on machine vision
CN117032262B (en) * 2023-09-12 2024-03-19 南栖仙策(南京)科技有限公司 Machine control method, device, electronic equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103325106B (en) * 2013-04-15 2015-11-25 浙江工业大学 Based on the Moving Workpieces method for sorting of LabVIEW
CN103706568B (en) * 2013-11-26 2015-11-18 中国船舶重工集团公司第七一六研究所 Based on the robot method for sorting of machine vision
EP4088889A1 (en) * 2015-11-13 2022-11-16 Berkshire Grey Operating Company, Inc. Sortation systems and methods for providing sortation of a variety of objects
CN108109174B (en) * 2017-12-13 2022-02-18 上海电气集团股份有限公司 Robot monocular guidance method and system for randomly sorting scattered parts
CN108908334A (en) * 2018-07-20 2018-11-30 汕头大学 A kind of intelligent grabbing system and method based on deep learning

Also Published As

Publication number Publication date
CN109927033A (en) 2019-06-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant