CN114758236A - Non-specific shape object identification, positioning and manipulator grabbing system and method - Google Patents


Info

Publication number
CN114758236A
CN114758236A (application CN202210384412.4A; granted as CN114758236B)
Authority
CN
China
Prior art keywords
target
manipulator
information
objects
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210384412.4A
Other languages
Chinese (zh)
Other versions
CN114758236B (en)
Inventor
夏珉
许晗
李正泳
马宇霆
Current Assignee
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN202210384412.4A
Publication of CN114758236A
Application granted
Publication of CN114758236B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/24 — Classification techniques
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/02 — Neural networks
    • G06N3/04 — Architecture, e.g. interconnection topology
    • G06N3/08 — Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract


Figure 202210384412

The invention provides a non-specific-shape object recognition, positioning and manipulator grasping system and method, belonging to the field of manipulator grasping. The system includes a multi-degree-of-freedom manipulator, a machine vision imaging module and a central processing module. The multi-degree-of-freedom manipulator is connected with the central processing module, and the machine vision imaging module is connected with the central processing module to transmit the collected image data to it. The central processing module processes the image data, extracts target contours, and, using the contours together with a calibrated mapping between object-space dimensions and image-space pixel counts, measures the external dimensions of each target. It also classifies targets by their dimensions with a deep neural network model, so as to control the multi-degree-of-freedom manipulator to identify, locate and pick up the targets. The invention further provides a method for identifying, locating and picking up targets. The method and system are highly adaptable and can pick up targets flexibly.


Description

A non-specific-shape object recognition, positioning and manipulator grasping system and method

Technical Field

The invention belongs to the field of manipulator grasping and, more particularly, relates to a system and method for recognizing, positioning and grasping objects of non-specific shape with a manipulator.

Background

In recent years, Internet technology has developed rapidly and automation has become mainstream in industrial production. Traditional manual production is gradually disappearing, while more advanced production equipment such as industrial robots is coming into view. For the parts-manufacturing industry, many problems follow: how should a fully automated production line identify different types of parts, and how should a manipulator grasp them?

At present, when a traditional manipulator on an automated industrial line grasps objects, the features of the target must be analyzed and programmed in advance; a machine vision system then locates the specific target and guides the manipulator through the grasping process. Moreover, in the prior art the visual recognition algorithms handle only parts with particular characteristics and cannot recognize arbitrary objects, and the system suffers a certain projection error in coordinate calculation, so the manipulator grasps with an error of a few millimeters. In addition, the grasping strategy is not optimal: the manipulator performs redundant motions during grasping, so grasping efficiency is not high. The manipulator may adopt an unsatisfactory grasping posture, causing grasping errors that bring it to a halt. It is limited in the size of objects it can grasp, which can be neither too large nor too small, and occasionally an unstable grasp lets the object fall.

In view of the above defects of the prior art, a non-specific-shape object recognition, positioning and grasping system and method for a manipulator needs to be developed.

SUMMARY OF THE INVENTION

In view of the defects of the prior art, the purpose of the present invention is to provide a non-specific-shape object recognition, positioning and manipulator grasping system and method that uses deep neural network learning to judge and classify targets automatically and guides the manipulator to pick up targets at random positions within a given range, aiming to solve the prior-art problems of insufficiently flexible and adaptable manipulator grasping.

To achieve the above object, the present invention provides a non-specific-shape object recognition, positioning and manipulator grasping system, which includes a multi-degree-of-freedom manipulator, a machine vision imaging module and a central processing module, wherein:

the multi-degree-of-freedom manipulator includes jaws and/or suction nozzles and is connected to the central processing module so as to be controlled by it, picking up targets according to the planar motion coordinates and target outer dimensions supplied by the central processing module;

the machine vision imaging module includes an industrial camera and an imaging lens and is connected to the central processing module to transmit the image data it collects;

the central processing module receives the image data collected by the machine vision imaging module, processes it, extracts target contours, and measures the external dimensions of each target from the contours together with a calibrated mapping between object-space dimensions and image-space pixel counts. It also classifies targets by their dimensions using a deep neural network model, and then guides and controls the multi-degree-of-freedom manipulator to identify, locate and pick up the targets.

Further, an image processing sub-module is integrated in the central processing module. It processes the image data by first improving contrast with histogram equalization to obtain a working-area image, then segmenting the targets to be picked in that image with an edge extraction algorithm, extracting all target contours in the working-area image as the data basis for subsequent analysis and processing.
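A minimal numpy-only sketch of this stage, assuming an 8-bit grayscale camera image; the gradient-threshold edge detector below is a stand-in for whatever edge extraction algorithm the module actually uses:

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization for an 8-bit grayscale image (H x W uint8)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[img]

def edge_map(img, thresh=30):
    """Crude gradient-magnitude edge extraction (illustrative stand-in)."""
    g = img.astype(np.int32)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:-1] = g[:, 2:] - g[:, :-2]   # horizontal central difference
    gy[1:-1, :] = g[2:, :] - g[:-2, :]   # vertical central difference
    return (np.abs(gx) + np.abs(gy)) > thresh

# A dark background with one bright rectangular "part":
img = np.zeros((40, 40), dtype=np.uint8)
img[10:30, 15:35] = 200
edges = edge_map(equalize_histogram(img))
print(edges.any())  # True: edges appear around the part boundary
```

In the real system this step would run on the calibrated camera frame; here the synthetic image just shows that boundary pixels, and only boundary pixels, survive thresholding.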

Further, a size measurement sub-module is also integrated in the central processing module. It measures target size information from the target contour, namely the aspect ratio of the circumscribed rectangle, the rectangularity and the circularity, and also measures, from the contour information, the actual spatial coordinates of the target's planar center position, which later guide the manipulator pick-up.
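The three descriptors can be computed directly from a contour's area, perimeter and circumscribed rectangle. The formulas below are the standard definitions, not taken from the patent text, shown for an illustrative 20 x 20 square part:

```python
import math

def shape_descriptors(area, perimeter, rect_w, rect_h):
    """Circumscribed-rectangle aspect ratio, rectangularity
    (area / bounding-rect area) and circularity (4*pi*A / P**2,
    which equals 1 for a perfect circle)."""
    aspect_ratio = max(rect_w, rect_h) / min(rect_w, rect_h)
    rectangularity = area / (rect_w * rect_h)
    circularity = 4 * math.pi * area / perimeter ** 2
    return aspect_ratio, rectangularity, circularity

# A 20 x 20 square part: aspect ratio 1, rectangularity 1,
# circularity pi/4 ~ 0.785 (a square is not very circular).
print(shape_descriptors(area=400, perimeter=80, rect_w=20, rect_h=20))
```

A washer would score near 1 on circularity and low on rectangularity, while a rectangular bracket would do the opposite, which is what makes these three numbers a cheap discriminator between part types.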

Further, a target classification sub-module is also integrated in the central processing module. Using the target contour information extracted by the image processing sub-module and the target size information obtained by the size measurement sub-module, it classifies targets with a trained deep neural network model; when a target cannot be classified into the library of known target characteristics, it is added to that library as a new target feature.

Further, a pick-up guidance sub-module is also integrated in the central processing module. Using the position and classification information of each target in the working area, it guides the manipulator to pick up the targets in the configured picking mode.

According to a second aspect of the present invention, there is also provided a non-specific-shape object recognition, positioning and manipulator grasping method, which comprises the following steps:

S1: collect image data in digital format;

S2: perform image preprocessing on the digital image data to obtain the target image, obtain the edge information of the targets in the image by edge extraction and sub-pixel analysis, and determine each target's circumscribed rectangle from its edge information;

S3: pair the edge information of each target with its circumscribed rectangle and store the pairs in a target information list; analyze each entry in the list to compute the length and width of each target's circumscribed rectangle and the coordinates of its center, and store this information in the target information list for classifying the targets;

S4: classify the target information with a pre-trained classification deep neural network model. A deep neural network consisting of an input layer, multiple hidden layers and an output layer serves as the classification network; its input is the target contour and size information, and its output is the classification result.
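The classification network can be sketched as a small forward-pass MLP. The layer sizes, feature vector and random weights below are purely illustrative, since the patent does not specify them:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_classify(features, weights, biases):
    """Forward pass of an input / hidden layers / output MLP as in step S4."""
    x = features
    for w, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0.0, x @ w + b)       # ReLU hidden layers
    logits = x @ weights[-1] + biases[-1]
    e = np.exp(logits - logits.max())
    return e / e.sum()                        # softmax class probabilities

# Illustrative sizes: 5 shape/size descriptors in, two hidden layers, 3 classes.
sizes = [5, 16, 16, 3]
weights = [rng.normal(0, 0.1, (a, b)) for a, b in zip(sizes, sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]

# aspect ratio, rectangularity, circularity, length, width (normalized)
probs = mlp_classify(np.array([1.0, 1.0, 0.785, 0.2, 0.2]), weights, biases)
print(probs.sum())  # softmax output sums to 1
```

In the trained system the weights would come from fitting on prior measurement data; here the untrained network only demonstrates the input/output shape of the classifier.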

When classifying targets, if a new type of target is found, it is first ruled out that it is actually known targets whose positions overlap. If it still exhibits the characteristics of a new target, it is regarded as a new target appearing for the first time: the acquired new target data is added to the training set, a generative adversarial network produces a large amount of target data with similar characteristics, and the parameters of the deep neural network model are updated by machine learning training, achieving accurate classification of newly appearing non-specific targets.

S5: guide and control the multi-degree-of-freedom manipulator to identify, locate and pick up targets in the configured way; when a non-specific target is picked up, place it into the corresponding collection space according to the predetermined collection scheme.

Further, in step S4, the specific operation for ruling out that a new type of target is actually overlapping known targets is as follows: first, guided by the position measurement result, move the manipulator near the new target and touch it to change its position; then re-acquire an image of the working area and perform target recognition on the region where the new target appeared. If it still exhibits the characteristics of a new target, it is regarded as a new target appearing for the first time.

Further, in step S3, when computing the length and width of each target's circumscribed rectangle and the coordinates of its center, the first coordinate system is the space coordinate system, the second is the image coordinate system, and the third is the manipulator coordinate system. The image coordinate system is established on the fixed captured image; the space coordinate system is based on the actual positions of objects on the placement plane, with the placement plane as the xy plane and the vertical upward direction as the positive z axis; the manipulator coordinate system is established in real space from the difference in position of two feature points before and after the manipulator moves. One matrix converts coordinates from the second coordinate system to the first, and a second matrix converts coordinates from the first coordinate system to the third.
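The described two-matrix chain can be sketched with homogeneous 2-D transforms. The calibration values below (pixel scale, offsets, rotation) are invented for illustration, not taken from the patent:

```python
import numpy as np

# Hypothetical image -> space calibration: 0.5 mm per pixel plus an offset
# placing the space origin at pixel (200, 150).
IMG_TO_SPACE = np.array([[0.5, 0.0, -100.0],
                         [0.0, 0.5,  -75.0],
                         [0.0, 0.0,    1.0]])

# Hypothetical space -> robot transform: 90-degree rotation plus translation.
theta = np.deg2rad(90)
SPACE_TO_ROBOT = np.array([[np.cos(theta), -np.sin(theta), 40.0],
                           [np.sin(theta),  np.cos(theta), 10.0],
                           [0.0,            0.0,            1.0]])

def to_robot(px, py):
    """Chain the two matrices the text describes: image -> space -> robot."""
    p_img = np.array([px, py, 1.0])       # homogeneous pixel coordinates
    p_space = IMG_TO_SPACE @ p_img
    return (SPACE_TO_ROBOT @ p_space)[:2]

print(to_robot(200, 150))  # pixel (200, 150) -> space (0, 0) -> robot (40, 10)
```

Keeping the two transforms separate mirrors the calibration procedure: the first matrix comes from the calibration plate, the second from observing manipulator feature points before and after a known move.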

Further, in step S5, after placing a target, the multi-degree-of-freedom manipulator returns its own pose to the external central processing module so that the correctness of the manipulator's coordinates can be confirmed. Before grasping, the multi-degree-of-freedom manipulator sorts the targets by size and grasps them in that order, and places different objects at different corresponding positions according to their type and size.
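A minimal sketch of the sort-then-bin policy; the target records, the sort key (bounding-rectangle area, largest first) and the size bands are assumptions, since the text only states that targets are sorted by size and placed by type and size:

```python
# Hypothetical target records as steps S3/S4 would produce them.
targets = [
    {"cls": "nut",   "w": 8,  "h": 8,  "center": (120, 40)},
    {"cls": "screw", "w": 30, "h": 6,  "center": (60, 80)},
    {"cls": "nut",   "w": 12, "h": 12, "center": (90, 55)},
]

# Grasp order: largest bounding-rectangle area first.
pick_order = sorted(targets, key=lambda t: t["w"] * t["h"], reverse=True)

def bin_for(t):
    """One drop-off bin per (class, size band) so like objects end up together."""
    return (t["cls"], "large" if t["w"] * t["h"] >= 100 else "small")

for t in pick_order:
    print(t["cls"], bin_for(t))
```

The pose returned by the manipulator after each placement would be checked against `center` (transformed to robot coordinates) to monitor that the commanded and actual positions agree.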

Further, in step S2, the image preprocessing includes one or more of grayscale stretching, histogram equalization, smoothing filtering, distortion correction and white-balance correction. In step S5, when guiding and controlling the multi-degree-of-freedom manipulator to identify, locate and pick up targets in the configured way, the configured way refers to picking after sorting the coordinates by target size, shape and/or color gamut.

The invention identifies different parts by extracting information about the parts in the field of view, chiefly by computing the aspect ratio of each part's circumscribed rectangle and its rectangularity and circularity. Template matching requires many complex parameters, takes more time to compute, and its parameter settings strongly affect both speed and results, so template matching was not chosen for identifying the objects to be grasped; instead, parts are distinguished by aspect ratio, rectangularity and circularity, which undoubtedly makes the method and system of the invention more efficient. As for grasping, the algorithm design of the present application lets the manipulator adjust its pose automatically while grasping, making grasping more precise, while the operation panel and program allow fairly precise control of the manipulator.

After placing an object, the manipulator returns its pose to the computer or industrial PC, which enables a degree of monitoring to confirm that the manipulator's coordinates are correct. Before grasping, the manipulator can sort the objects by size and grasp them in that order, and classify them by type and size so that different objects are placed at different positions.

In general, compared with the prior art, the above technical solutions conceived by the present invention have the following beneficial effects:

In the system of the invention, machine vision is combined with the ideas of deep learning: a deep neural network model is constructed to judge and classify first-seen targets automatically, and their features are merged automatically into a long-term accumulated target feature library, so that in later use the system adaptively recognizes, classifies and locates non-specific targets. Building on images of the working area captured by the machine vision system, the system and method of the present application use image processing algorithms combined with artificial intelligence to measure target dimensions, analyze appearance features, recognize occlusion, classify characteristics and measure positions; on this basis an optimal picking plan is produced in real time, guiding the manipulator to pick up targets of random position and random type, possibly occluded, quickly, accurately and in order.

Description of the Drawings

FIG. 1 is a schematic flowchart of the operation of the non-specific-shape object recognition, positioning and manipulator grasping system provided by an embodiment of the present invention.

Detailed Description

In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention, not to limit it.

The non-specific-shape object recognition, positioning and manipulator grasping system of the present invention mainly comprises a functional platform part and sub-modules integrated in it. The functional platform part includes the multi-degree-of-freedom manipulator, the machine vision imaging module and the central processing module; the integrated sub-modules are the image processing sub-module, the size measurement sub-module, the target classification sub-module and the pick-up guidance sub-module. The two parts cooperate to realize the recognition, positioning and grasping of non-specific-shape targets for which this application is designed.

The multi-degree-of-freedom manipulator can be chosen from products of different construction, load capacity and motion accuracy according to the task; its basic requirement is that, given the manipulator's planar motion coordinates and the target's outer dimension information, it can grasp the target accurately with its jaws, suction nozzles or other end effectors. The machine vision imaging module consists of a high-resolution industrial camera, an imaging lens, an illumination source and their mounting structure. The camera resolution is determined by the minimum size of the targets to be picked and the focal length of the lens, while lens parameters such as focal length and aperture are calculated from the extent of the working area to be observed. The illumination source lights the target and working area effectively and is chosen according to the surface color and reflectance of the targets and the background reflectance of the working area. The mounting structure keeps the machine vision imaging module and the manipulator in a relatively fixed spatial relationship, which makes it easier to determine the coordinate transformation matrices and improves accuracy when guiding the manipulator later. The central processing module is mainly responsible for running the software system, including collecting the camera's image data, running the analysis software that produces guidance data, and controlling the manipulator and the components of the machine vision imaging module so that they work normally with the given parameters and coordinate positions.

The central processing module integrates several sub-modules: the image processing sub-module, the size measurement sub-module, the target classification sub-module and the pick-up guidance sub-module. Their specific roles are as follows:

The image processing sub-module processes the working-area images collected by the machine vision system. Image preprocessing such as histogram equalization yields a contrast-improved working-area image; an edge extraction algorithm then segments the targets to be picked, extracting all target contours in the working area accurately as the data basis for subsequent analysis and processing.

Once the image processing sub-module has obtained the contour information of all targets in the working area, the size measurement sub-module measures the external dimensions of each target through the calibrated mapping between object-space dimensions and image-space pixel counts, and at the same time measures, from the contour information, the actual spatial coordinates of each target's planar center for the subsequent guidance of the manipulator pick-up.
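At its simplest, the calibrated object-space/pixel-count mapping is a scale factor fixed by measuring a feature of known size (e.g. one calibration-plate square) in the image; the values below are hypothetical:

```python
# Hypothetical calibration: a calibration-plate square of known physical size
# is measured in the image, fixing the mm-per-pixel scale for later targets.
KNOWN_SQUARE_MM = 10.0
MEASURED_SQUARE_PX = 40.0
MM_PER_PX = KNOWN_SQUARE_MM / MEASURED_SQUARE_PX   # 0.25 mm per pixel

def object_size_mm(width_px, height_px):
    """Object-space dimensions from image-space pixel counts."""
    return width_px * MM_PER_PX, height_px * MM_PER_PX

print(object_size_mm(80, 48))  # an 80 x 48 px contour is a 20 x 12 mm part
```

A single scale factor assumes the camera looks straight down at a flat placement plane; distortion correction in the preprocessing step is what makes this uniform scale a reasonable approximation.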

The target classification sub-module classifies each target with the trained deep neural network, using the contour information extracted by the image processing sub-module and the size information obtained by the size measurement sub-module. If a target cannot be classified into the library of known target characteristics, it is added to the library as a new target feature.

The pick-up guidance sub-module takes the position and classification information of every target in the working area produced by all the preceding modules and, following the established logic, guides the manipulator through picking up the targets.

FIG. 1 is a schematic flowchart of the operation of the non-specific-shape object recognition, positioning and manipulator grasping system provided by an embodiment of the present invention. From the figure, together with the recognition, positioning and grasping method, the method of the present invention mainly includes the following steps:

(1) The machine vision imaging module images the working area: the light source uniformly illuminates the targets in the working area, the imaging lens forms an image on the sensor of the high-resolution industrial camera, and the image is transmitted over a digital signal line to the central processing module, where it is captured and stored as a digital image for the subsequent steps.

(2) The digital image is sent to the image processing module. First, image preprocessing such as grayscale stretching, histogram equalization, smoothing filtering, distortion correction and white-balance correction produces a high-contrast, low-noise, distortion-free target image; edge extraction and sub-pixel analysis algorithms then obtain high-precision edge information for the targets in the image; finally, the analyzed target edges determine each target's circumscribed rectangle for further size measurement.

(3) The edge contour and circumscribed rectangle of every analyzed target are paired and stored in the target information list. The size measurement module analyzes each entry in the list, computes the length and width of the circumscribed rectangle and the coordinates of its center, and stores this information in the target list for the next step, classification.

(4) The pre-trained classification deep neural network classifies the target information, based on the measured dimensions and the specific contour of the target. Since a randomly placed target may be photographed in any pose, the outer contour obtained from the processed image may take various shapes and the measured dimensions will differ accordingly, so a simple linear model cannot classify it directly. A deep neural network consisting of an input layer, multiple hidden layers and an output layer is therefore used as the classification network; its input is the target contour and size information, its output the classification result. With prior measurement data as training data, the resulting neural network model classifies known targets with over 99% accuracy. If a target cannot be classified accurately, it may be a new type of target appearing for the first time, and the method proceeds to the next step.

(5) For a new type of target appearing during classification, it must first be ruled out that known targets have overlapped and, presenting different features in the two-dimensional image, been misclassified. Therefore, each time a new target is found, the manipulator is first guided by the position measurement to move near the target and touch it gently, slightly changing its position; the machine vision system then photographs the working area again, and target recognition concentrates on the region where the new target appeared. If after the manipulator's operation it is still a single target and still exhibits the characteristics of a new target, it is regarded as a new target appearing for the first time. The acquired new target data is added to the training set, a generative adversarial network produces a large amount of target data with similar characteristics to enrich the training data for the new type, and machine learning training updates the parameters of the deep neural network, achieving accurate classification of the newly appearing non-specific target.
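The touch-and-recheck logic of this step can be sketched as a small check function; the stubbed robot and vision calls are assumptions standing in for the real system, not the patent's API:

```python
def confirm_new_target(candidate, nudge, reimage, classify, known_classes):
    """Step (5) novelty check: touch the candidate, re-image its area, and
    accept it as a new class only if it still cannot be classified."""
    nudge(candidate)                      # manipulator lightly touches the object
    region = reimage(candidate)           # re-acquire the area where it appeared
    return classify(region) not in known_classes

# Stub calls: here two overlapping nuts separate after the nudge and
# re-classify as the known class "nut", so no new class is created.
is_new = confirm_new_target(
    "blob-7",
    nudge=lambda c: None,
    reimage=lambda c: "separated-nuts",
    classify=lambda r: "nut",
    known_classes={"nut", "screw"},
)
print(is_new)  # False: it was known targets overlapping, not a new type
```

Only when `is_new` comes back `True` would the system add the sample to the training set and trigger the GAN-augmented retraining described above.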

(6) After the list of contours, positions and classification information of all targets in the working area has been obtained, the control software determines the picking order according to the picking logic, guides the manipulator in that order to the corresponding actual coordinate positions, picks up each non-specific target, and places it into the corresponding collection space according to the agreed collection method.
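The pick-and-place loop of step (6) might look like the following sketch. The sort key used here (larger area first, ties broken left to right) is one possible picking logic, not a rule fixed by the patent, and the move/grasp/place callbacks are hypothetical.

```python
def pick_all(targets, move_to, grasp, place, bin_for):
    """Pick every target in the ordered list and drop each one into the
    collection space assigned to its class."""
    order = sorted(targets, key=lambda t: (-t["area"], t["x"]))
    for t in order:
        move_to(t["x"], t["y"])          # actual coordinates from the list
        grasp(t["width"], t["height"])   # jaw opening from peripheral size
        place(bin_for(t["cls"]))         # class-specific collection space
    return [t["cls"] for t in order]
```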

In the present invention, the "eye" is the vision part of the system, which determines target positions through a series of image-processing steps; the "hand" is the manipulator motion part of the system, covering the manipulator's grasping and placing; and the hand-eye combination part, comprising data transmission and coordinate transformation, links the system's "hand" and "eye" together.

In one embodiment of the present invention, the related image processing is performed with HALCON software. Specifically, with a given background and a sufficient light source, an original picture is captured by the camera; HALCON is used to rectify the image and eliminate aberrations, yielding a rectangular field-of-view picture of the basic object plane. An n*n standard calibration plate downloaded from HALCON is used, according to its known dimensions, to verify the field of view, i.e. that image positions match actual positions. Using HALCON's visual-processing capabilities, the circumscribed rectangle of every object to be grasped in the field of view is extracted, and different kinds of objects are recognized from features such as the aspect ratio and area of this rectangle. The scanned objects are classified by criteria such as size, shape and color gamut, and the corresponding image coordinates are then sorted in the order requested by the user, e.g. by size, shape or color gamut. At present, the objects used in the project are mainly common industrial parts such as nuts and cap nuts, which have distinctive, fairly regular shapes and are therefore comparatively easy to recognize and classify.
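HALCON is proprietary, so the circumscribed-rectangle feature extraction can be illustrated with a NumPy-only sketch: given a target's contour points, compute the axis-aligned circumscribed rectangle and derive the aspect-ratio and area features used to tell part types apart. (HALCON itself would typically offer the minimum-area rotated rectangle as well; this sketch keeps to the axis-aligned case.)

```python
import numpy as np

def rect_features(contour):
    """Circumscribed rectangle of a contour plus the shape features
    used for classification: center, aspect ratio and area."""
    pts = np.asarray(contour, dtype=float)            # (N, 2) pixel coords
    (x0, y0), (x1, y1) = pts.min(axis=0), pts.max(axis=0)
    w, h = x1 - x0, y1 - y0
    return {
        "center": ((x0 + x1) / 2, (y0 + y1) / 2),     # picking coordinate
        "aspect_ratio": max(w, h) / max(min(w, h), 1e-9),
        "area": w * h,
    }

feat = rect_features([(10, 10), (50, 10), (50, 30), (10, 30)])
```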

When the manipulator grasps and places target objects, its motion path must be planned. Because of the manipulator's own geometry, the constraints of the surrounding metal frame and the obstruction of small objects awaiting grasping on the object plane, planning the manipulator's path before a grasp prevents its motion from being blocked and prevents it from disturbing the original scene. During grasping, the robotic arm fetches the parts' initial coordinates in order and moves to each initial coordinate. From the preset length in the image, the arm computes a suitable grasping angle for the object and turns the two-finger gripper accordingly. From the difference between the horizontal and vertical coordinates of the current feature point, it computes the vertical landing coordinate for the preset length, adjusts the vertical coordinate to move to the target position, and closes the gripper. If the arm detects that the object has not been picked up, it fetches the coordinates of the next target until an object is grasped. After picking up an object, the arm identifies the path to the object's placement position from the current feature point and plans its direction of travel so that its motion is neither blocked nor destructive to the original placement scene. When the arm detects that its feature point has reached a position above the predetermined placement location, it descends slowly while continuously monitoring the force on the gripper; if an upward force is sensed, the gripper is judged to have reached the bottom, and it is opened. The manipulator places parts in different locations according to their type; in addition, it classifies grasped objects by size and places parts in different locations according to area intervals. The industrial computer controlling the manipulator displays in real time the coordinates at which the manipulator grasps each target object and the coordinates at which it places the object, providing a monitoring function.
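The descend-until-contact rule described above (lower slowly, watch the gripper force, release when an upward force is sensed) can be sketched as follows; read_force, move_down and open_gripper are hypothetical hardware callbacks, and the force threshold is an assumption of the sketch.

```python
def lower_and_release(z_start, z_min, step, read_force, move_down,
                      open_gripper, force_limit=1.0):
    """Lower the gripper in small steps; release when an upward reaction
    force indicates the part has touched down, or at the travel limit."""
    z = z_start
    while z > z_min:
        z -= step
        move_down(z)
        if read_force() > force_limit:    # upward force => reached bottom
            open_gripper()
            return z
    open_gripper()                        # fail-safe release at the limit
    return z
```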

In the present invention, converting the machine-vision information into information usable by the robotic arm is equivalent to combining the "hand" and "eye", and mainly requires coordinate transformation and data transmission. For the coordinate transformation, the first coordinate system is the spatial coordinate system, the second is the picture coordinate system, and the third is the manipulator coordinate system. The picture coordinate system is established from the fixed HALCON image obtained by the camera; the spatial coordinate system is established from the actual positions of objects on the object plane, with the object plane as the xy plane and the vertical upward direction as the positive z axis; the manipulator coordinate system is established in real space from the positional difference of two feature points before and after a manipulator motion. One matrix transforms coordinates from the second coordinate system into the first, and a second matrix transforms coordinates from the first coordinate system into the third. After the coordinates in the image-processing data array are obtained, the coordinate transformation is carried out; a Redis database is used for fast transfer of the hot data between the image processing and the robotic-arm system, with the coordinate data exchanged through the database.
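The two-matrix chain (picture coordinates to spatial coordinates, then spatial to manipulator coordinates) can be illustrated with homogeneous 2D transforms. The matrix entries below are hypothetical calibration results; in the real system they would come from the calibration-plate measurement and the manipulator feature-point measurements described above.

```python
import numpy as np

# Hypothetical calibration: scale pixels to millimetres plus an offset.
M_pic_to_space = np.array([[0.5, 0.0, -10.0],
                           [0.0, 0.5,  -5.0],
                           [0.0, 0.0,   1.0]])
# Hypothetical translation from the spatial origin to the arm base.
M_space_to_arm = np.array([[1.0, 0.0, 100.0],
                           [0.0, 1.0,  50.0],
                           [0.0, 0.0,   1.0]])

def pic_to_arm(u, v):
    """Chain the two matrices: picture -> space -> manipulator."""
    p = M_space_to_arm @ (M_pic_to_space @ np.array([u, v, 1.0]))
    return float(p[0]), float(p[1])

xy = pic_to_arm(40, 30)   # picture pixel -> manipulator coordinates
```

In the running system, a key-value store such as Redis would carry tuples like `xy` from the vision process to the arm controller.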

In one embodiment of the present invention, a Universal Robots e5 arm from Universal Robots, fitted with a two-finger gripper, performs grasping tasks with an effective working radius of up to 850 mm and a maximum payload of five kilograms. A metal frame is erected around the arm such that, in top view, the arm sits at the midpoint of one side of the frame's rectangular projection, and the vertical distance from the CCD camera to the object plane is, for example, 88 cm. A JAI GO-5100M-USB camera with a 2/3-inch CCD sensor was selected; based on calculation and final matching, a lens with focal length f = 16 mm was adopted so that the camera covers a rectangular field of view of 50 cm * 40 cm. The illuminance follows the inverse-square law E = I/(D*D), where D is the distance from the light source to the measured object and I is the luminous intensity; the distance from the object to the lens is the working distance WD. The image magnification PMAG is computed from the known sensor imaging height Hi and the measured object size (field-of-view height) Ho: PMAG = Sensor Size (mm) / Field of View (mm) = Hi/Ho. The required focal length is calculated as f = WD*PMAG/(1+PMAG); the standard lens closest to the calculated value is selected and its focal-length value used. Standard lens focal lengths are, for example, 8 mm, 12.5 mm, 16 mm, 25 mm and 50 mm. The lens-to-object distance WD is then recalculated for the selected focal length, using LE = Di - f = PMAG*f and PMAG = Di/WD, i.e. WD = f*(1+PMAG)/PMAG. In addition, resolution = sensor size / pixel size = field-of-view length or width / detection accuracy. On the Feichuang Yida optical platform used, a placement area of, for example, 90 cm * 90 cm is built on the object plane from a PVC board and a pure-black curtain; given this contrast, no additional light source is added.
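The lens-selection formulas above can be checked numerically. Assuming the long side of a 2/3-inch sensor images about 8.8 mm (the standard figure, but an assumption of this sketch), the 50 cm field width and the 88 cm working distance give a required focal length near 15.2 mm, so the nearest standard lens is 16 mm:

```python
def magnification(hi_mm, ho_mm):
    return hi_mm / ho_mm                       # PMAG = Hi / Ho

def focal_length(wd_mm, pmag):
    return wd_mm * pmag / (1 + pmag)           # f = WD*PMAG/(1+PMAG)

def working_distance(f_mm, pmag):
    return f_mm * (1 + pmag) / pmag            # WD = f*(1+PMAG)/PMAG

pmag = magnification(8.8, 500.0)               # 2/3" sensor, 50 cm field
f_needed = focal_length(880.0, pmag)           # about 15.2 mm
f_std = min([8.0, 12.5, 16.0, 25.0, 50.0],
            key=lambda f: abs(f - f_needed))   # nearest standard lens
wd_new = working_distance(f_std, pmag)         # recomputed working distance
```

With the 16 mm lens the recomputed working distance comes out near 925 mm, consistent with the camera being mounted in the vicinity of 88 cm above the plane.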

In the present invention, an object recognition, positioning and grasping system is constructed that combines machine-vision imaging with an intelligent adaptive-recognition algorithm for targets of non-specific shape. Without any prior analysis of or programming for the objects to be grasped, an artificial-intelligence algorithm working with an edge-computing module automatically classifies and recognizes newly appearing objects of different shapes and sizes, achieves high-precision positioning through a positioning algorithm, and guides the manipulator to grasp non-specific targets of arbitrary shape and size accurately.

The target adaptive-recognition algorithm designed in the present invention draws on the design ideas of deep neural networks and machine learning: by constructing and training a neural network, shape features and dimension parameters are extracted and clustered automatically, and the classification result adaptively determines whether a new object class should be added, achieving efficient adaptive recognition of non-specific objects with arbitrary new characteristics.

In the present invention, the machine-vision imaging module is mounted on a spatial reference that is fixed relative to the manipulator. A high-precision calibration algorithm enables fast, efficient conversion between image coordinates and manipulator spatial coordinates, so that the artificial-intelligence object recognition and positioning built on the image-processing algorithms can guide the manipulator either to fetch targets with specified features on command or to pick targets sorted by their features.

The present invention has the following two relatively ingenious and novel aspects. (1) Intelligent adaptive classification of new targets. Demand for flexible production in intelligent manufacturing is growing, and modern intelligent production lines increasingly need to transport and assemble large numbers of small-batch, multi-specification, randomly appearing parts or components. A traditional manipulator must first know the size, shape and other characteristics of the object to be handled, and can grasp accurately and reliably only after analysis and advance programming. By combining machine vision and deep learning, the present application constructs a deep neural network that automatically judges and classifies targets appearing for the first time and merges their features into a long-term accumulated target-feature library, so that in subsequent use the system adaptively performs intelligent recognition, classification and position measurement of non-specific targets. (2) Adaptive, real-time guidance of the manipulator for automatic picking. At present, manipulators on industrial automatic production lines mostly pick at fixed target positions; only in very few applications does machine vision guide a manipulator to pick targets placed randomly within a limited range, and once products are numerous, of many kinds, randomly placed or even mutually occluded, such systems can no longer recognize and pick effectively. Based on the images of the working area captured by the machine-vision system, the system of the present application uses image-processing algorithms combined with artificial-intelligence techniques to measure target dimensions, analyze appearance features, recognize occlusion, classify characteristics and measure positions, and on this basis produces an optimized picking plan in real time, guiding the manipulator to pick randomly located, randomly typed and possibly occluded targets quickly, accurately and in order.

The system and method of the present invention can be applied in the following fields. (1) Part sorting on automatic production lines: because the parts are small and fine, the system can classify different parts by size, weight, shape and color and sort them automatically into different areas, greatly reducing labor and material cost. (2) Intelligent classification in flexible manufacturing: a flexible manufacturing system is a computer-controlled automated production system built on group technology that can simultaneously machine a group or family of products with similar shapes, suiting an efficient multi-variety, small-batch production mode. For different varieties, according to their individual differences, the system of the present application can be integrated to achieve intelligent classification, reduce the inventory of blanks and work-in-process, and reduce direct labor. Beyond these, the system and method of the present application can be used in many other settings where classification is required.

Those skilled in the art will readily understand that the above are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (10)

1. A non-specific shaped object recognition, positioning and manipulator grabbing system is characterized by comprising a multi-degree-of-freedom manipulator, a machine vision imaging module and a central processing module, wherein,
the multi-degree-of-freedom manipulator comprises a clamping jaw or/and a suction nozzle, is connected with the central processing module to be controlled by the central processing module, picks up the target according to the plane motion coordinate and the peripheral size of the target given by the central processing module,
the machine vision imaging module comprises an industrial camera and an imaging lens, the machine vision imaging module is connected with the central processing module to transmit the image data acquired by the machine vision imaging module to the central processing module,
the central processing module is used for receiving the image data acquired by the machine vision imaging module, processing the image data, extracting the outline of the target, combining the mapping relation between the object space size and the pixel number of the image space after calibration according to the outline of the target to realize the measurement of the outline dimension of each target, and is also used for classifying the target according to the outline dimension of the target by adopting a deep neural network model so as to guide and control the multi-degree-of-freedom manipulator to identify, position and pick up the target.
2. The system for non-specific shaped object recognition, positioning and manipulator grabbing according to claim 1, wherein the central processing module is integrated with an image processing sub-module, the image processing sub-module is used for processing image data, specifically, a working area image with improved contrast is obtained through histogram equalization, then an edge extraction algorithm is used to segment an object to be picked up in the working area image, and all object contours in the working area image are extracted to provide a data basis for subsequent analysis and processing.
3. A non-specific shaped object recognition, positioning and robot grasping system according to claim 2, wherein the central processing module further integrates a dimension measurement sub-module for measuring target dimension information including aspect ratio, rectangularity and roundness of the circumscribed rectangle based on the target profile, and for measuring actual spatial coordinates of the center position of the target plane based on the target profile information for subsequent guidance of robot picking.
4. The system for non-specific shaped object recognition, positioning and manipulator grabbing according to claim 3, wherein the central processing module further integrates a target classification sub-module, which is used to classify the targets by the trained deep neural network model according to the target contour information extracted by the image processing sub-module and the target dimension information obtained by the dimension measurement sub-module, and to add the targets as new target features to the classification library if the targets cannot be classified into the known target feature classification library.
5. A non-specific shaped object recognition, positioning and manipulator grabbing system according to claim 4, wherein the central processing module further comprises a picking guide sub-module integrated therein for guiding the manipulator to complete the picking operation of the object according to the set picking mode based on the position information and classification information of each object in the working area.
6. A non-specific shape object identification, positioning and mechanical arm grabbing method is characterized by comprising the following steps:
S1: acquiring image data, wherein the image data is in a digital format,
S2: image preprocessing is carried out on the image data in the data format to obtain a target image, edge information of a target in the target image is obtained by utilizing edge extraction and sub-pixel analysis, a circumscribed rectangle of the target image is determined according to the edge information,
S3: the edge information and the circumscribed rectangle of each target are paired, the pairing result is stored in a target information list, each piece of information in the list is analyzed, the length and width information of the circumscribed rectangle of each target and the coordinate information of the central position of the circumscribed rectangle are calculated, the information is stored in the target information list and is used for classifying the targets,
S4: classifying the target information by using a pre-trained deep neural network model for classification, adopting a deep neural network consisting of an input layer, a plurality of hidden layers and an output layer as a classification network, inputting the information of the network as target contour and size information, outputting the information as a classification result,
when the target classification is carried out, if a new type of target is found, the possibility that the positions of known targets overlap is first ruled out; if the characteristics of a new target are still exhibited, the target is regarded as a new target appearing for the first time, the acquired new target data is added to a training set, a generative adversarial network is used to generate a large amount of target data with similar characteristics, and the parameters of the deep neural network model are updated in a machine-learning training mode, realizing accurate classification of the newly appearing non-specific target,
and S5, guiding and controlling the multi-degree-of-freedom manipulator to identify, position and pick the target according to a set mode, and placing the non-specific target into a corresponding collection space according to a preset collection mode when the non-specific target is picked up.
7. The method as claimed in claim 6, wherein the step S4 of excluding the overlapping of the positions of the objects of the new type and the known objects is to guide the robot to move to the vicinity of the objects of the new type according to the position measurement result, touch the objects of the new type, change the positions of the objects, re-acquire the images of the working area, recognize the objects in the areas where the objects of the new type appear, and if the objects still show the characteristics of the new objects, regard the objects as the new objects appearing for the first time.
8. The non-specific shaped object identification, positioning and robot grasping method according to claim 7, wherein, when the length and width of each target circumscribed rectangle and the coordinate information of the center position of the circumscribed rectangle are calculated in step S3, the first coordinate system is a space coordinate system, the second coordinate system is a picture coordinate system, the third coordinate system is a manipulator coordinate system, the picture coordinate system takes the fixed image as a standard system, the space coordinate system takes the specific position of the object on the object plane as a standard, a system is established by taking the object placing plane as an xy plane and taking the vertical direction upwards as the positive direction of a z axis, a system is established in an actual space by taking the difference position of two characteristic points before and after movement of the manipulator when the manipulator moves, the transformation of the coordinates in the second coordinate system into the first coordinate system is done by means of one matrix and the transformation of the coordinates in the first coordinate system into the third coordinate system is done by means of the second matrix.
9. The non-specific shaped object recognition, positioning and manipulator grabbing method of claim 8, wherein in step S5, after the object is placed, the multi-degree of freedom manipulator returns its pose to the external central processing module to confirm whether the coordinates of the manipulator are correct,
the multi-degree-of-freedom manipulator sorts the objects by size before grasping them, so that the objects are grasped in order, and different objects are correspondingly placed at different positions according to their types and sizes.
10. The non-specific shaped object recognition, localization and manipulator capture method according to claim 6, wherein in step S2, the image pre-processing comprises one or more of gray scale stretching, histogram equalization, smoothing filtering, distortion correction, white balance correction,
in step S5, when the multi-degree-of-freedom manipulator is guided and controlled to recognize, position, and pick up the target according to the setting mode, the setting mode is a mode in which coordinates are sorted according to the size, shape, or/and color gamut of the target.
CN202210384412.4A 2022-04-13 2022-04-13 Non-specific shape object identification, positioning and manipulator grabbing system and method Active CN114758236B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210384412.4A CN114758236B (en) 2022-04-13 2022-04-13 Non-specific shape object identification, positioning and manipulator grabbing system and method


Publications (2)

Publication Number Publication Date
CN114758236A true CN114758236A (en) 2022-07-15
CN114758236B CN114758236B (en) 2024-09-17

Family

ID=82331618

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210384412.4A Active CN114758236B (en) 2022-04-13 2022-04-13 Non-specific shape object identification, positioning and manipulator grabbing system and method

Country Status (1)

Country Link
CN (1) CN114758236B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008257353A (en) * 2007-04-02 2008-10-23 Advanced Telecommunication Research Institute International Learning system and computer program for learning visual representation of objects
WO2019080229A1 (en) * 2017-10-25 2019-05-02 南京阿凡达机器人科技有限公司 Chess piece positioning method and system based on machine vision, storage medium, and robot
CN113269723A (en) * 2021-04-25 2021-08-17 浙江省机电设计研究院有限公司 Unordered grasping system for three-dimensional visual positioning and mechanical arm cooperative work parts
CN113524194A (en) * 2021-04-28 2021-10-22 重庆理工大学 Target grabbing method of robot vision grabbing system based on multi-mode feature deep learning


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115239657A (en) * 2022-07-18 2022-10-25 无锡雪浪数制科技有限公司 Industrial part increment identification method based on deep learning target segmentation
CN115239657B (en) * 2022-07-18 2023-11-21 无锡雪浪数制科技有限公司 Industrial part increment identification method based on deep learning target segmentation
CN115463804A (en) * 2022-08-04 2022-12-13 东莞市慧视智能科技有限公司 Dispensing method based on dispensing path
CN115359112A (en) * 2022-10-24 2022-11-18 爱夫迪(沈阳)自动化科技有限公司 Stacking control method of high-level material warehouse robot
CN115359112B (en) * 2022-10-24 2023-01-03 爱夫迪(沈阳)自动化科技有限公司 Stacking control method of high-level material warehouse robot
RU2813958C1 (en) * 2022-12-22 2024-02-20 Автономная некоммерческая организация высшего образования "Университет Иннополис" Intelligent system for robotic sorting of randomly arranged objects
CN116086965A (en) * 2023-03-06 2023-05-09 安徽省(水利部淮河水利委员会)水利科学研究院(安徽省水利工程质量检测中心站) Concrete test block compressive strength test system and method based on machine vision
CN117649736A (en) * 2024-01-29 2024-03-05 深圳市联之有物智能科技有限公司 Video management method and system based on AI video management platform
CN118155176A (en) * 2024-05-09 2024-06-07 江苏智搬机器人科技有限公司 Automatic control method and system for transfer robot based on machine vision
CN118701646A (en) * 2024-06-24 2024-09-27 南通大学 Automatic cable reel stacking system based on image processing technology
CN118701646B (en) * 2024-06-24 2024-12-03 南通大学 Automatic cable reel stacking system based on image processing technology

Also Published As

Publication number Publication date
CN114758236B (en) 2024-09-17

Similar Documents

Publication Publication Date Title
CN114758236B (en) Non-specific shape object identification, positioning and manipulator grabbing system and method
CN110580725A (en) A kind of box sorting method and system based on RGB-D camera
JP4309439B2 (en) Object take-out device
CN109483554B (en) Robot dynamic grabbing method and system based on global and local visual semantics
CN113524194A (en) Target grabbing method of robot vision grabbing system based on multi-mode feature deep learning
CN111537517A (en) An unmanned intelligent stamping defect identification method
CN110555889A (en) CALTag and point cloud information-based depth camera hand-eye calibration method
CN112561886A (en) Automatic workpiece sorting method and system based on machine vision
CN106000904A (en) Automatic sorting system for household refuse
CN108080289A (en) Robot sorting system, robot sorting control method and device
CN113103215B (en) Motion control method for robot vision flyswatter
CN109785317A (en) Vision system of automatic palletizing truss robot
CN208574964U (en) A device for automatic identification and sorting of cigarette boxes based on machine vision
CN115816460B (en) Mechanical arm grabbing method based on deep learning target detection and image segmentation
CN113146172A (en) Multi-vision-based detection and assembly system and method
CN111428731A (en) Multi-class target identification and positioning method, device and equipment based on machine vision
Hsu et al. Development of a faster classification system for metal parts using machine vision under different lighting environments
CN116529760A (en) Grabbing control method, grabbing control device, electronic equipment and storage medium
CN116277025A (en) Object sorting control method and system for an intelligent-manufacturing robot
CN114155301A (en) Robot target positioning and grabbing method based on Mask R-CNN and binocular camera
CN109079777B (en) Manipulator hand-eye coordination operation system
CN116188763A (en) A YOLOv5-based method for carton identification, positioning and placement angle measurement
CN113878576A (en) A method for programming a robot visual sorting process
CN116228854B (en) Automatic parcel sorting method based on deep learning
CN113495073A (en) Auto-focus function for vision inspection system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant