CN114758236A - Non-specific shape object identification, positioning and manipulator grabbing system and method - Google Patents
Non-specific shape object identification, positioning and manipulator grabbing system and method

Info
- Publication number
- CN114758236A (application CN202210384412.4A)
- Authority
- CN
- China
- Prior art keywords
- target
- manipulator
- information
- objects
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING; G06F18/00—Pattern recognition; G06F18/20—Analysing; G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation; G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING; G06F18/00—Pattern recognition; G06F18/20—Analysing; G06F18/24—Classification techniques
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS; G06N3/00—Computing arrangements based on biological models; G06N3/02—Neural networks; G06N3/04—Architecture, e.g. interconnection topology
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS; G06N3/00—Computing arrangements based on biological models; G06N3/02—Neural networks; G06N3/08—Learning methods
Abstract
The invention provides a system and a method for identifying, positioning and grabbing objects of non-specific shape, belonging to the field of manipulator grasping. The invention also provides a method for identifying, positioning and picking up targets. The method and system are highly adaptable and can pick up targets flexibly.
Description
Technical Field
The invention belongs to the field of manipulator grasping, and particularly relates to a system and method for identifying and positioning objects of non-specific shape and grasping them with a manipulator.
Background
In recent years, Internet technology has developed rapidly and automation has become mainstream in industrial production. Traditional manual production is gradually disappearing, replaced by advanced production equipment such as industrial robots. This shift raises problems for the part production industry: on a fully automated production line, how should different kinds of parts be identified, and how should a manipulator grasp them?
At present, when a traditional automated manipulator on an industrial production line grabs an object, the features of the target must be analyzed and programmed in advance, after which a machine vision system locates that specific target and guides the manipulator through the grasping process. Moreover, existing visual recognition algorithms handle only parts with certain known features and cannot recognize arbitrary objects, and projection errors in coordinate calculation leave the manipulator with grasping errors of several millimeters. The grasping strategy is also suboptimal: redundant motions during grasping reduce efficiency, unsatisfactory grasping postures cause errors that halt the manipulator, the graspable object size is limited (neither too large nor too small), and occasionally an unstable grasp drops the object.
In view of the above drawbacks of the prior art, it is desirable to develop a system and method for identifying, positioning and grasping objects of non-specific shape with a manipulator.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a system and method for identifying and positioning objects of non-specific shape and grasping them with a manipulator. By incorporating deep neural network learning, the system automatically judges and classifies targets and guides the manipulator to pick up targets at random positions within a certain range, solving the insufficient grasping flexibility and adaptability of manipulators in the prior art.
In order to achieve the above object, the present invention provides a non-specific shape object identification, positioning and manipulator grabbing system comprising a multi-degree-of-freedom manipulator, a machine vision imaging module and a central processing module, wherein:

the multi-degree-of-freedom manipulator comprises a clamping jaw and/or a suction nozzle, is connected with the central processing module, and is controlled by it to pick up the target according to the plane motion coordinates and peripheral dimensions of the target given by the central processing module;

the machine vision imaging module comprises an industrial camera and an imaging lens, and is connected with the central processing module so as to transmit the image data it acquires to the central processing module;

the central processing module receives the image data acquired by the machine vision imaging module, processes it, and extracts target contours; using the calibrated mapping between object-space size and image-space pixel count, it measures the outline dimensions of each target from its contour. It also classifies targets by their outline dimensions with a deep neural network model, thereby guiding and controlling the multi-degree-of-freedom manipulator to identify, position and pick up targets.
Further, an image processing submodule is integrated in the central processing module to perform the image data processing. Specifically, a working-area image with improved contrast is obtained through histogram equalization; an edge extraction algorithm then segments the targets to be picked up in the working-area image, and all target contours in the image are extracted to provide a data basis for subsequent analysis and processing.
Furthermore, a size measurement submodule is integrated in the central processing module to measure target size information from the target contour, including the aspect ratio of the circumscribed rectangle, the rectangularity and the roundness, and to measure the actual spatial coordinates of the center of the target plane from the contour information, guiding the subsequent picking by the manipulator.
Further, a target classification submodule is integrated in the central processing module. It classifies targets through a trained deep neural network model according to the target contour information extracted by the image processing submodule and the target size information acquired by the size measurement submodule; when a target cannot be assigned to the known target-feature classification library, it is added to the library as a new target feature.
Furthermore, a picking guide sub-module is integrated in the central processing module and used for guiding the manipulator to complete the picking work of the targets according to the position information and the classification information of each target in the working area and a set picking mode.
According to a second aspect of the present invention, there is also provided a non-specific shaped object identifying, positioning and robot grasping method, comprising the steps of:
S1: acquiring image data, wherein the image data is in a digital format;

S2: performing image preprocessing on the digital image data to obtain a target image, obtaining the edge information of each target in the target image by edge extraction and sub-pixel analysis, and determining the circumscribed rectangle of each target from the edge information;

S3: pairing the edge information and circumscribed rectangle of each target and storing the pairing results in a target information list; analyzing each entry in the list, calculating the length and width of each target's circumscribed rectangle and the coordinates of its center, and storing this information in the target information list for classifying the targets;

S4: classifying the target information with a pre-trained deep neural network model, adopting as the classification network a deep neural network consisting of an input layer, a plurality of hidden layers and an output layer, where the network input is the target contour and size information and the output is the classification result;

during target classification, if a new type of target is found, overlap with known targets is first excluded; if the new-target characteristics persist, the target is regarded as a new target appearing for the first time, the new target data are added to the training set, a large amount of target data with similar characteristics is generated by a generative adversarial network, and the parameters of the deep neural network model are updated through machine-learning training, achieving accurate classification of newly appearing non-specific targets;

S5: guiding and controlling the multi-degree-of-freedom manipulator to identify, position and pick up targets according to a set mode, and, when picking up a non-specific target, placing it into the corresponding collection space according to a preset collection mode.
Further, in step S4, the specific operation of excluding overlap between the new type of target and known targets is to guide the manipulator near the new target according to the position measurement result, touch the new target to change its position, re-acquire an image of the working area, and perform target identification on the area where the new target appeared; if it still exhibits new-target characteristics, it is regarded as a new target appearing for the first time.
Further, in step S3, when calculating the length and width of each target's circumscribed rectangle and the coordinates of its center, three coordinate systems are used: the first is a spatial coordinate system, the second a picture coordinate system, and the third a manipulator coordinate system. The picture coordinate system is referenced to the fixed captured image; the spatial coordinate system is referenced to the object positions on the object plane, taking that plane as the xy plane and the vertically upward direction as the positive z axis; the manipulator coordinate system is referenced, in real space, to the displacement between two feature points before and after manipulator movement. Coordinates are converted from the second coordinate system to the first through one matrix, and from the first to the third through a second matrix.
Further, in step S5, after placing a target, the multi-degree-of-freedom manipulator returns its pose to the external central processing module to confirm whether the manipulator coordinates are correct. Before grasping, the manipulator sorts targets by size and grasps them in that order, then classifies them by type and size so that different objects are placed at corresponding different positions.
Further, in step S2 the image preprocessing includes one or more of gray-level stretching, histogram equalization, smoothing filtering, distortion correction and white-balance correction; in step S5, when the multi-degree-of-freedom manipulator is guided and controlled to identify, position and pick up targets according to the set mode, the set mode sorts coordinates according to the size, shape and/or color gamut of the targets.
The method identifies different parts by extracting part information within the field of view, mainly by calculating the aspect ratio of each part's circumscribed rectangle together with its rectangularity and roundness. Template matching requires many complicated parameters, takes more computation time, and its parameter settings strongly affect both speed and results; the objects to be grasped are therefore distinguished by aspect ratio, rectangularity and roundness rather than by template matching, which makes the method and system of the invention more efficient. For manipulator grasping, the algorithm design of the invention lets the manipulator automatically adjust its pose when grasping, making grasping more accurate, while the manipulator can also be controlled precisely through an operation panel and a program.
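As an illustration of these descriptors, the following Python/OpenCV sketch (an assumption; the patent does not name an implementation language for this step) computes the aspect ratio and rectangularity of a contour's circumscribed rectangle and the roundness of the contour itself:

```python
import cv2
import numpy as np

def shape_descriptors(contour):
    """Aspect ratio and rectangularity of the minimum circumscribed
    rectangle, plus roundness of the contour itself."""
    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, True)
    (cx, cy), (w, h), angle = cv2.minAreaRect(contour)
    aspect_ratio = max(w, h) / min(w, h) if min(w, h) > 0 else 0.0
    rectangularity = area / (w * h) if w * h > 0 else 0.0        # 1.0 for a perfect rectangle
    roundness = 4.0 * np.pi * area / perimeter ** 2 if perimeter > 0 else 0.0  # 1.0 for a circle
    return aspect_ratio, rectangularity, roundness
```

Because all three descriptors are dimensionless ratios, they require none of the parameter tuning that template matching would.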
After the manipulator places an object, its pose can be returned to the computer or industrial personal computer, providing a degree of monitoring to confirm whether the manipulator's coordinates are correct. Before picking, the manipulator can order the objects by size and pick them in that sequence, then classify them by type and size and place different objects at different positions.
Generally, compared with the prior art, the technical scheme conceived by the invention has the following beneficial effects:
In the system, machine vision is combined with deep learning: a deep neural network model automatically judges and classifies first-appearing targets, their features are automatically merged into a long-accumulated target-feature information base, and in later applications non-specific targets can be intelligently identified, classified and position-measured adaptively. On the basis of the working-area image captured by the machine vision system, the system and method combine image processing algorithms with artificial intelligence technology to measure target size, analyze appearance features, identify occlusion, classify features and measure position, and on that basis provide an optimized target-picking scheme that guides the manipulator to pick up targets of random position and random type, possibly occluded, quickly, accurately and in order.
Drawings
Fig. 1 is a schematic flow chart illustrating the operation of the non-specific shaped object recognition, positioning and robot grasping system according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
The non-specific shape object identification, positioning and manipulator grabbing system mainly comprises a functional platform part and submodules integrated on it. The functional platform part comprises the multi-degree-of-freedom manipulator, the machine vision imaging module and the central processing module; the integrated submodules comprise an image processing submodule, a size measurement submodule, a target classification submodule and a picking guide submodule. The two parts cooperate to identify and position non-specifically shaped targets as designed in this application.
The multi-degree-of-freedom manipulator can be chosen from products with different forming modes, load capacities and motion accuracies according to task requirements; the basic requirement is that parts mounted on it, such as clamping jaws and suction nozzles, accurately grasp a target according to given plane motion coordinates and the target's peripheral dimension information. The machine vision imaging module is composed of a high-resolution industrial camera, an imaging lens, an illumination source and their mounting structure. The camera resolution is determined by the minimum size of the targets to be picked up and the focal length of the imaging lens; the lens focal length, aperture and other parameters are calculated from the range of the working area to be observed; and the illumination source, which must light the target and the working area effectively, is selected according to the surface color and reflectivity of the targets and the background reflection characteristics of the working area. The mounting structure keeps the machine vision imaging module and the manipulator in a relatively fixed spatial position, which fixes the coordinate transformation matrix and improves guidance accuracy when the manipulator is guided later. The central processing module is mainly responsible for running the software system, including collecting camera image data, running the analysis software to produce guidance data, and controlling each component of the manipulator and the machine vision imaging module to work normally with the given parameters and coordinate positions.
The central processing module is integrated with a plurality of sub-modules, including an image processing sub-module, a size measuring sub-module, a target classifying sub-module and a picking guide sub-module, and the functions of the modules are as follows:
The image processing submodule processes the working-area image acquired by the machine vision system: image preprocessing methods such as histogram equalization yield a working-area image with improved contrast, an edge extraction algorithm then segments the targets to be picked up, and all target contours in the working area are accurately extracted, providing the data basis for subsequent analysis and processing.
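A minimal sketch of this submodule, assuming OpenCV in Python (the embodiment described later uses HALCON), might look as follows; the Canny thresholds and minimum-area filter are assumptions:

```python
import cv2

def extract_target_contours(gray_image, min_area=100.0):
    """Histogram equalization -> edge extraction -> contour list."""
    equalized = cv2.equalizeHist(gray_image)   # contrast improvement (8-bit grayscale input)
    edges = cv2.Canny(equalized, 50, 150)      # edge extraction
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep only contours large enough to be pickable targets
    return [c for c in contours if cv2.contourArea(c) > min_area]
```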
The size measurement submodule, starting from the full set of target contours obtained by the image processing module, measures the outline dimensions of each target through the calibrated mapping between object-space size and image-space pixel count, and measures the actual spatial coordinates of each target's plane center from its contour information for subsequent manipulator-picking guidance.
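The pixel-to-object-space mapping can be sketched as below; the scale constant is a stand-in for the value actually obtained from calibration-board measurement:

```python
import cv2

MM_PER_PIXEL = 0.25  # assumed calibration result (object-space mm per image pixel)

def measure_target(contour):
    """Outline dimensions in millimeters and the plane-center coordinate
    used later to guide the manipulator."""
    (cx, cy), (w, h), angle = cv2.minAreaRect(contour)
    return {
        "center_mm": (cx * MM_PER_PIXEL, cy * MM_PER_PIXEL),
        "size_mm": (w * MM_PER_PIXEL, h * MM_PER_PIXEL),
        "angle_deg": angle,
    }
```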
The target classification submodule classifies targets with the trained deep neural network using the target contour information extracted by the image processing module and the target size information acquired by the size measurement module; if a target cannot be assigned to the known target-feature classification library, it is added to the library as a new target feature.
The picking guide submodule uses the position and classification information of each target in the working area, obtained after processing by all the other modules, to guide the manipulator through the target-picking work according to the established logic.
Fig. 1 is a schematic flow chart of the work performed by the non-specific shape object identification, positioning and manipulator grabbing system according to the embodiment of the present invention. As the figure and the above description show, the method of the present invention mainly includes the following steps:
(1) The machine vision imaging module images the working area: the light source uniformly illuminates the targets in the working area, the targets are imaged onto the sensor of the high-resolution industrial camera through the imaging lens, the formed image is transmitted to the central processing module over a digital signal line, and the central processing module acquires the image and stores it in digital format for the subsequent steps.
(2) The digital-format image is sent to the image processing module. First, image preprocessing from computer graphics, such as gray-level stretching, histogram equalization, smoothing filtering, distortion correction and white-balance correction, yields a target image with high contrast, low noise and no distortion; next, edge extraction and sub-pixel analysis algorithms obtain high-precision edge information for each target in the image; finally, the analyzed target edges determine each target's circumscribed rectangle for further dimension measurement.
(3) The analyzed edge contour of each target is paired with its circumscribed rectangle and stored in a target information list. The size measurement module analyzes each entry in the list, calculates the length and width of the circumscribed rectangle and the coordinates of its center, and stores this information in the target list for classifying the targets in the next step.
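Step (3) can be sketched as building a list of contour/rectangle pairs, one entry per target (a Python illustration under the same OpenCV assumption as above):

```python
import cv2

def build_target_list(contours):
    """Pair each edge contour with its circumscribed rectangle and record
    the rectangle's length, width and center coordinates."""
    targets = []
    for c in contours:
        rect = cv2.minAreaRect(c)            # ((cx, cy), (w, h), angle)
        (cx, cy), (w, h), _ = rect
        targets.append({
            "contour": c,
            "rect": rect,
            "length": max(w, h),
            "width": min(w, h),
            "center": (cx, cy),
        })
    return targets
```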
(4) A pre-trained deep neural network classifies the target information, using the measured size and the specific contour information of each target as the classification basis. Since a randomly placed target can be photographed in many postures, the contour obtained from the captured image can take many shapes and the measured sizes vary, so the target information cannot be classified directly with a simple linear model. A deep neural network consisting of an input layer, several hidden layers and an output layer serves as the classification network; the network input is the target contour and size information and the output is the classification result. The model is trained with prior measurement data, and the resulting neural network model classifies known targets with an accuracy above 99%. If a target cannot be classified confidently, it is considered a possible new type appearing for the first time, and the next operation begins.
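The patent does not disclose the exact topology or layer widths; a minimal PyTorch sketch of an input/hidden/output classification network over the contour-and-size feature vector, with assumed dimensions, is:

```python
import torch
import torch.nn as nn

class TargetClassifier(nn.Module):
    """Input layer -> several hidden layers -> output layer; the input is
    the target's contour/size feature vector, the output is class logits."""
    def __init__(self, n_features=8, n_classes=10):  # both widths are assumptions
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.net(x)  # softmax over the logits gives the classification result
```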
(5) First, the possibility is excluded that overlapping known targets merely present different characteristics in the two-dimensional image and are misclassified: whenever a candidate new target is discovered, the manipulator is guided near it according to the position measurement result and slightly changes its position, the machine vision system photographs the working area again, and target identification is repeated on the area where the new target appeared. If the new-target characteristics persist after the manipulator operation, the target is regarded as a new target appearing for the first time. The obtained new target data are added to the training set, a generative adversarial network generates a large amount of target data with similar characteristics to enrich the training data for the new type, and the deep neural network parameters are updated through machine-learning training, achieving accurate classification of newly appearing non-specific targets.
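The control flow of step (5) might be sketched as follows; the robot and camera interfaces, the 0.9 confidence threshold, and the retraining trigger are all hypothetical placeholders, and the GAN augmentation is only indicated by a comment:

```python
import torch

CONFIDENCE_THRESHOLD = 0.9  # assumed; below this a target is a candidate new class

def classify_or_enroll(model, features, robot, camera, training_set):
    probs = torch.softmax(model(features), dim=-1)
    if probs.max() >= CONFIDENCE_THRESHOLD:
        return int(probs.argmax())            # confidently classified known target
    # Candidate new class: rule out overlap of known targets by touching
    # the object and re-imaging the working area.
    robot.nudge_target()                      # hypothetical manipulator call
    features = camera.capture_and_extract()   # hypothetical re-acquisition
    probs = torch.softmax(model(features), dim=-1)
    if probs.max() >= CONFIDENCE_THRESHOLD:
        return int(probs.argmax())            # it was overlap, not a new class
    training_set.append(features)             # first-seen target: enroll it
    # ...enrich with GAN-generated samples and update the model parameters...
    return None
```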
(6) After the outlines, the positions and the classification information lists of all the targets in the working area are obtained, the control software determines a picking sequence according to the picking logic, guides the manipulator to move to the corresponding actual coordinate position according to the sequence, picks up the unspecific target and places the unspecific target in the corresponding collecting space according to the appointed collecting method.
In the invention, the system's vision part, the "eye", determines the target position; the manipulator motion part, the "hand", performs the grasping and placing; and a hand-eye combination part, covering data transmission and coordinate conversion, links the system's "hand" and "eye".
In one embodiment of the present invention, image processing is performed with HALCON software. With a given background and a sufficient light source, an original picture is obtained by photographing; HALCON corrects the image and removes aberration, producing a rectangular field-of-view picture of the base object plane, and the correctness of the field of view, i.e. that image positions match actual positions, is verified against the size of an n × n standard calibration board downloaded from HALCON. Using HALCON's visual processing capability, the circumscribed rectangle of every object to be grasped in the field of view is extracted, and features of the rectangle such as aspect ratio and area identify the different kinds of objects. The scanned objects are classified by factors such as size, shape and color gamut, and the picture coordinates are sorted in the order the user requires, for example by size, shape or color gamut. The objects currently used in the project are mainly parts common in industry, such as nuts, which have clear shape characteristics and fairly regular shapes and are relatively easy to identify and classify.
When the manipulator grasps and places target objects, its motion path must be set. Because of the manipulator's own geometry, the restriction of the surrounding metal frame, and the small objects to be grasped on the placement plane, the motion path before grasping is planned so that the manipulator's movement is not obstructed and the original scene is not disturbed. When grasping, the manipulator obtains the initial coordinates of the parts in sequence and moves to them. It calculates a reasonable grasping angle for the object from the preset picture dimensions and steers its two fingers accordingly. From the difference between the horizontal and vertical coordinates of the current feature point, it calculates the vertical drop point at the preset distance, adjusts the vertical coordinate to move to the target position, and tightens the gripper. If the manipulator detects that no object has been clamped, it obtains the coordinates of the next target until an object is grasped. After clamping an object, it identifies a path to the placement position from the current feature point and plans its direction of travel, again avoiding obstruction and damage to the original placement scene. When the feature point reaches the preset placement position, the manipulator descends slowly while continuously checking its force state; if it senses an upward force it judges that it has reached the bottom, releases the gripper, and places the part in a location according to its type. In addition, the manipulator classifies grasped objects by size, so parts can also be placed in different locations by area. The industrial personal computer controlling the manipulator displays in real time the coordinates at which the manipulator grasps the target object and places it at the target position, providing monitoring.
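The grasp-and-place cycle of this paragraph, expressed against an entirely hypothetical robot interface (none of these method names come from the patent), might be sketched as:

```python
def pick_and_place(robot, target, place_pose, step_mm=2.0):
    """Move to the target, orient the two fingers to the grasp angle,
    verify the grasp, then descend at the placement position until an
    upward reaction force indicates the part has reached the bottom."""
    robot.move_to(target["center_xyz"])
    robot.rotate_gripper(target["angle_deg"])   # grasp angle from the rectangle
    robot.close_gripper()
    if not robot.object_grasped():              # nothing clamped: caller tries next target
        return False
    robot.move_along_planned_path(place_pose)   # avoid the frame and other parts
    while not robot.upward_force_detected():    # descend slowly, checking force
        robot.descend(step_mm)
    robot.open_gripper()
    return True
```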
In the invention, converting machine vision information into information the manipulator can use, i.e. combining the "hand" and the "eye", mainly requires coordinate conversion and data transmission. For coordinate conversion, the first coordinate system is a spatial coordinate system, the second a picture coordinate system, and the third a manipulator coordinate system. The picture coordinate system is referenced to the fixed captured picture in HALCON; the spatial coordinate system is referenced to the object positions on the placement plane, established with that plane as the xy plane and the vertically upward direction as the positive z axis; and the manipulator coordinate system is established, in real space, from the displacement between two feature points before and after the manipulator moves. Coordinates are converted from the second coordinate system to the first through one matrix, and from the first to the third through a second matrix. After the coordinates from the image-processing data are obtained, the coordinate transformation is performed; a Redis database provides fast exchange of image-processing results and live manipulator-system data, and coordinate data interaction takes place through the database.
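In homogeneous 2D coordinates the two-matrix chain reduces to two matrix products; the sketch below uses NumPy with made-up example matrices (in practice both come from calibration), and the commented lines indicate how the result could be pushed through Redis as the text describes:

```python
import numpy as np

# Example values only; both matrices are determined by calibration.
PIXEL_TO_WORLD = np.array([[0.25, 0.00, -62.5],   # second -> first coordinate system
                           [0.00, 0.25, -50.0],
                           [0.00, 0.00,   1.0]])
WORLD_TO_ROBOT = np.array([[1.0, 0.0, 120.0],     # first -> third coordinate system
                           [0.0, 1.0, -40.0],
                           [0.0, 0.0,   1.0]])

def pixel_to_robot(u, v):
    """Chain the two transforms: picture -> space -> manipulator."""
    world = PIXEL_TO_WORLD @ np.array([u, v, 1.0])
    robot = WORLD_TO_ROBOT @ world
    return robot[0], robot[1]

# Exchanging the result through Redis (illustrative key name):
# import redis
# r = redis.Redis()
# r.set("target_xy", ",".join(map(str, pixel_to_robot(640, 480))))
```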
In one embodiment of the invention, a Universal Robots e5 (UR5e) arm fitted with a two-finger gripper performs the grasping task, with an effective working radius of up to 850 mm and a maximum payload of five kilograms. A metal frame is placed around the arm such that, in plan view, the arm sits at the middle of one edge of the frame's rectangular projection, and the vertical distance from the CCD camera to the object plane is, for example, 88 cm. A JAI GO-5100M-USB camera with a 2/3-inch CCD is selected, and, based on calculation and final actual matching, a lens with focal length f = 16 mm gives the camera a rectangular field of view covering 50 cm × 40 cm. The illuminance E at the object follows the inverse-square law, E ∝ 1/D², where D is the distance from the light source to the measured object and x is the object-to-lens distance (working distance WD). The image magnification PMAG is calculated from the known sensor image height Hi and the measured field height Ho: PMAG = sensor size (mm) / field of view (mm) = Hi/Ho. The required focal length follows from f = WD × PMAG / (1 + PMAG); the standard lens product closest to the calculated value is selected and its focal length adopted. Standard lens focal lengths are, for example, 8 mm, 12.5 mm, 16 mm, 25 mm and 50 mm. The lens-to-object distance is then recalculated from the selected focal length as WD = f (1 + PMAG) / PMAG; equivalently, with image distance Di, the lens extension is LE = Di - f = PMAG × f, and PMAG = Di/WD. The resolution match is: resolution = photosensitive-chip size / pixel size = field-of-view length (or width) / detection accuracy. On the optical platform, a storage area of 90 cm × 90 cm can be built on the placement plane from a PVC plate and a pure black curtain; for contrast, no additional light source is added.
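The lens-selection arithmetic of this embodiment can be checked with a short script; the 6.6 mm sensor height is the standard image height of a 2/3-inch sensor, and the other numbers are taken from the text:

```python
STANDARD_FOCAL_LENGTHS_MM = [8.0, 12.5, 16.0, 25.0, 50.0]

def select_lens(sensor_height_mm, field_height_mm, working_distance_mm):
    """PMAG = Hi / Ho; f = WD * PMAG / (1 + PMAG); choose the nearest
    standard lens, then recompute WD for the chosen focal length."""
    pmag = sensor_height_mm / field_height_mm
    f_ideal = working_distance_mm * pmag / (1.0 + pmag)
    f = min(STANDARD_FOCAL_LENGTHS_MM, key=lambda s: abs(s - f_ideal))
    wd = f * (1.0 + pmag) / pmag
    return f, wd

# With the embodiment's numbers: Hi = 6.6 mm, Ho = 400 mm, WD = 880 mm
# gives PMAG ~ 0.0165 and f_ideal ~ 14.3 mm, so the 16 mm lens is chosen.
print(select_lens(6.6, 400.0, 880.0))
```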
According to the invention, an object identification, positioning and grabbing system is constructed that combines machine vision imaging with an adaptive intelligent identification algorithm for non-specifically shaped objects. There is no need to analyze or program for each object to be grabbed: the artificial-intelligence algorithm and edge computation module directly and automatically classify and identify objects of different appearance and size, a positioning algorithm achieves high-precision positioning, and the manipulator is guided to accurately grab non-specific objects of any shape and size.
The intelligent algorithm for adaptive target identification designed by the invention draws on deep neural networks and machine learning: by constructing and training the neural network it automatically extracts and clusters appearance features and size parameters, and adaptively decides from the classification result whether to add an object type, achieving efficient adaptive identification of non-specific objects with any new features.
In the invention, the machine vision imaging module is spatially fixed relative to the manipulator's mounting reference. A high-precision calibration algorithm converts image coordinates and manipulator spatial coordinates quickly and efficiently, artificial-intelligence object identification and positioning are combined with the image processing algorithms, and the manipulator is guided to fetch a target with specific features on instruction or to sort and pick targets by their features.
The invention is ingenious and novel in two respects. (1) Intelligent, adaptive classification of new targets. Demand for flexible production in intelligent manufacturing grows daily; modern intelligent production lines must carry and assemble many small-batch, multi-specification, randomly appearing parts or components, while a traditional manipulator must first know the size, shape and other characteristics of the objects to be carried, with analysis and programming in advance, before grasping accurately and reliably. The invention combines machine vision with deep learning to build a deep neural network that automatically judges and classifies first-appearing targets, automatically merges their features into a long-accumulated target-feature information base, and subsequently identifies, classifies and position-measures non-specific targets adaptively. (2) Adaptive, real-time guidance of the manipulator to pick up targets automatically. At present, manipulators on industrial automated production lines mostly pick up targets at fixed points from fixed positions; few applications use machine vision to guide the manipulator to pick up targets at random positions within a range, and once products are of many types, numerous, randomly placed or even occluded, such systems cannot identify and pick them effectively. On the basis of the working-area image captured by the machine vision system, this system combines image processing algorithms with artificial intelligence technology to measure target size, analyze appearance features, identify occlusion, classify features and measure position, provides an optimized picking scheme in real time on that basis, and guides the manipulator to pick up targets of random position, random type and possible occlusion quickly, accurately and in order.
The system and method of the present invention may be applied in the following fields. (1) Part sorting on automated production lines: because parts are small and varied, the system can classify different parts by size, weight, shape and color and sort them automatically into different areas, greatly reducing labor and material costs. (2) Intelligent classification in flexible manufacturing: a flexible manufacturing system is a computer-controlled automated production system based on group technology that can simultaneously process a group or class of products of similar shape. It suits efficient multi-variety, small-batch manufacturing; the present system can be integrated into it to classify intelligently across variety differences, reducing blank and work-in-process inventory and direct labor. In addition, the system and method of this application can be used in many other places where classification is required.
It will be understood by those skilled in the art that the foregoing is only an exemplary embodiment of the present invention, and is not intended to limit the invention to the particular forms disclosed, since various modifications, substitutions and improvements within the spirit and scope of the invention are possible and within the scope of the appended claims.
Claims (10)
1. A non-specific shaped object recognition, positioning and manipulator grabbing system is characterized by comprising a multi-degree-of-freedom manipulator, a machine vision imaging module and a central processing module, wherein,
the multi-degree-of-freedom manipulator comprises a clamping jaw and/or a suction nozzle, is connected with the central processing module to be controlled by it, and picks up the target according to the plane motion coordinates and peripheral size of the target given by the central processing module,
the machine vision imaging module comprises an industrial camera and an imaging lens, the machine vision imaging module is connected with the central processing module to transmit the image data acquired by the machine vision imaging module to the central processing module,
the central processing module is used for receiving the image data acquired by the machine vision imaging module, processing the image data and extracting target contours, measuring the outline dimensions of each target from its contour using the calibrated mapping between object-space size and image-space pixel count, and classifying targets by outline dimension with a deep neural network model, so as to guide and control the multi-degree-of-freedom manipulator to identify, position and pick up the target.
2. The system for non-specific shaped object recognition, positioning and manipulator grabbing according to claim 1, wherein the central processing module is integrated with an image processing sub-module, the image processing sub-module is used for processing image data, specifically, a working area image with improved contrast is obtained through histogram equalization, then an edge extraction algorithm is used to segment an object to be picked up in the working area image, and all object contours in the working area image are extracted to provide a data basis for subsequent analysis and processing.
3. A non-specific shaped object recognition, positioning and robot grasping system according to claim 2, wherein the central processing module further integrates a dimension measurement sub-module for measuring target dimension information including aspect ratio, rectangularity and roundness of the circumscribed rectangle based on the target profile, and for measuring actual spatial coordinates of the center position of the target plane based on the target profile information for subsequent guidance of robot picking.
4. The system for non-specific shaped object recognition, positioning and manipulator grabbing according to claim 3, wherein the central processing module further integrates a target classification sub-module, which is used to classify the targets by the trained deep neural network model according to the target contour information extracted by the image processing sub-module and the target dimension information obtained by the dimension measurement sub-module, and to add the targets as new target features to the classification library if the targets cannot be classified into the known target feature classification library.
5. A non-specific shaped object recognition, positioning and manipulator grabbing system according to claim 4, wherein the central processing module further comprises a picking guide sub-module integrated therein for guiding the manipulator to complete the picking operation of the object according to the set picking mode based on the position information and classification information of each object in the working area.
6. A non-specific shape object identification, positioning and mechanical arm grabbing method is characterized by comprising the following steps:
S1: acquiring image data, wherein the image data is in a digital format;

S2: performing image preprocessing on the digital image data to obtain a target image, obtaining edge information of each target in the target image by edge extraction and sub-pixel analysis, and determining the circumscribed rectangle of each target from the edge information;

S3: pairing the edge information and circumscribed rectangle of each target and storing the pairing results in a target information list, analyzing each entry in the list, calculating the length and width of each target's circumscribed rectangle and the coordinates of its center, and storing the information in the target information list for classifying the targets;

S4: classifying the target information with a pre-trained deep neural network model, adopting as the classification network a deep neural network consisting of an input layer, a plurality of hidden layers and an output layer, the network input being the target contour and size information and the output being the classification result,

wherein, during target classification, if a new type of target is found, overlap with known targets is first excluded; if the new-target characteristics persist, the target is regarded as a new target appearing for the first time, the obtained new target data are added to the training set, a large amount of target data with similar characteristics is generated by a generative adversarial network, and the parameters of the deep neural network model are updated through machine-learning training, realizing accurate classification of newly appearing non-specific targets;

S5: guiding and controlling the multi-degree-of-freedom manipulator to identify, position and pick up targets according to a set mode, and, when a non-specific target is picked up, placing it into the corresponding collection space according to a preset collection mode.
7. The method as claimed in claim 6, wherein the step S4 of excluding the overlapping of the positions of the objects of the new type and the known objects is to guide the robot to move to the vicinity of the objects of the new type according to the position measurement result, touch the objects of the new type, change the positions of the objects, re-acquire the images of the working area, recognize the objects in the areas where the objects of the new type appear, and if the objects still show the characteristics of the new objects, regard the objects as the new objects appearing for the first time.
8. The non-specific shaped object identification, positioning and manipulator grasping method according to claim 7, wherein, when the length and width of each target's circumscribed rectangle and the coordinates of its center are calculated in step S3, the first coordinate system is a spatial coordinate system, the second a picture coordinate system, and the third a manipulator coordinate system; the picture coordinate system is referenced to the fixed captured image; the spatial coordinate system is referenced to the object positions on the object plane, established with the placement plane as the xy plane and the vertically upward direction as the positive z axis; the manipulator coordinate system is established in real space from the displacement between two feature points before and after the manipulator moves; and the transformation of coordinates from the second coordinate system into the first is done through one matrix and from the first into the third through a second matrix.
9. The non-specific shaped object recognition, positioning and manipulator grabbing method of claim 8, wherein in step S5, after the object is placed, the multi-degree of freedom manipulator returns its pose to the external central processing module to confirm whether the coordinates of the manipulator are correct,
and wherein the multi-degree-of-freedom manipulator sorts the targets by size before grasping and grasps the objects in that order, and places different objects at corresponding different positions according to their types and sizes.
10. The non-specific shaped object recognition, localization and manipulator capture method according to claim 6, wherein in step S2, the image pre-processing comprises one or more of gray scale stretching, histogram equalization, smoothing filtering, distortion correction, white balance correction,
and in step S5, when the multi-degree-of-freedom manipulator is guided and controlled to identify, position and pick up the target according to the set mode, the set mode is a mode in which coordinates are sorted according to the size, shape and/or color gamut of the target.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210384412.4A CN114758236B (en) | 2022-04-13 | 2022-04-13 | Non-specific shape object identification, positioning and manipulator grabbing system and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210384412.4A CN114758236B (en) | 2022-04-13 | 2022-04-13 | Non-specific shape object identification, positioning and manipulator grabbing system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114758236A true CN114758236A (en) | 2022-07-15 |
CN114758236B CN114758236B (en) | 2024-09-17 |
Family ID: 82331618
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210384412.4A Active CN114758236B (en) | 2022-04-13 | 2022-04-13 | Non-specific shape object identification, positioning and manipulator grabbing system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114758236B (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008257353A (en) * | 2007-04-02 | 2008-10-23 | Advanced Telecommunication Research Institute International | Learning system for learning visual representation of object, and computer program |
WO2019080229A1 (en) * | 2017-10-25 | 2019-05-02 | 南京阿凡达机器人科技有限公司 | Chess piece positioning method and system based on machine vision, storage medium, and robot |
CN113269723A (en) * | 2021-04-25 | 2021-08-17 | 浙江省机电设计研究院有限公司 | Unordered grasping system for three-dimensional visual positioning and mechanical arm cooperative work parts |
CN113524194A (en) * | 2021-04-28 | 2021-10-22 | 重庆理工大学 | Target grabbing method of robot vision grabbing system based on multi-mode feature deep learning |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115239657A (en) * | 2022-07-18 | 2022-10-25 | | Industrial part increment identification method based on deep learning target segmentation |
CN115239657B (en) * | 2022-07-18 | 2023-11-21 | 无锡雪浪数制科技有限公司 | Industrial part increment identification method based on deep learning target segmentation |
CN115463804A (en) * | 2022-08-04 | 2022-12-13 | 东莞市慧视智能科技有限公司 | Dispensing method based on dispensing path |
CN115359112A (en) * | 2022-10-24 | 2022-11-18 | 爱夫迪(沈阳)自动化科技有限公司 | Stacking control method of high-level material warehouse robot |
CN115359112B (en) * | 2022-10-24 | 2023-01-03 | 爱夫迪(沈阳)自动化科技有限公司 | Stacking control method of high-level material warehouse robot |
RU2813958C1 (en) * | 2022-12-22 | 2024-02-20 | Автономная некоммерческая организация высшего образования "Университет Иннополис" | Intelligent system for robotic sorting of randomly arranged objects |
CN116086965A (en) * | 2023-03-06 | 2023-05-09 | 安徽省(水利部淮河水利委员会)水利科学研究院(安徽省水利工程质量检测中心站) | Concrete test block compressive strength test system and method based on machine vision |
CN117649736A (en) * | 2024-01-29 | 2024-03-05 | 深圳市联之有物智能科技有限公司 | Video management method and system based on AI video management platform |
CN118155176A (en) * | 2024-05-09 | 2024-06-07 | 江苏智搬机器人科技有限公司 | Automatic control method and system for transfer robot based on machine vision |
Also Published As
Publication number | Publication date |
---|---|
CN114758236B (en) | 2024-09-17 |
Similar Documents
Publication | Title
---|---
CN114758236B (en) | Non-specific shape object identification, positioning and manipulator grabbing system and method
CN110580725A (en) | Box sorting method and system based on RGB-D camera
CN109483554B (en) | Robot dynamic grabbing method and system based on global and local visual semantics
CN108399639B (en) | Rapid automatic grabbing and placing method based on deep learning
JP4309439B2 (en) | Object take-out device
CN105729468B (en) | A kind of robotic workstation based on the enhancing of more depth cameras
CN112561886A (en) | Automatic workpiece sorting method and system based on machine vision
CN108290286A (en) | Method for instructing industrial robot to pick up part
CN106000904A (en) | Automatic sorting system for household refuse
CN113103215B (en) | Motion control method for robot vision flyswatter
CN113146172A (en) | Multi-vision-based detection and assembly system and method
CN105690393A (en) | Four-axle parallel robot sorting system based on machine vision and sorting method thereof
CN114029243B (en) | Soft object grabbing and identifying method for sorting robot
CN113878576B (en) | Robot vision sorting process programming method
CN116228854B (en) | Automatic parcel sorting method based on deep learning
CN114419437A (en) | Workpiece sorting system based on 2D vision and control method and control device thereof
CN113689509A (en) | Binocular vision-based disordered grabbing method and system and storage medium
CN113495073A (en) | Auto-focus function for vision inspection system
CN115629066A (en) | Method and device for automatic wiring based on visual guidance
CN113715012A (en) | Automatic assembly method and system for remote controller parts
CN116030449B (en) | Automatic sorting method and automatic sorting system for laser cutting pieces
CN210589323U (en) | Steel hoop processing feeding control system based on three-dimensional visual guidance
CN115213122B (en) | Disorder sorting method based on 3D depth network
CN116863463A (en) | Egg assembly line rapid identification and counting method
CN116175542B (en) | Method, device, electronic equipment and storage medium for determining clamp grabbing sequence
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |