CN112497219B - Columnar workpiece classifying and positioning method based on target detection and machine vision - Google Patents

Columnar workpiece classifying and positioning method based on target detection and machine vision

Info

Publication number
CN112497219B
CN112497219B (application CN202011419779.2A)
Authority
CN
China
Prior art keywords: target, workpiece, eye, workpieces, detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011419779.2A
Other languages
Chinese (zh)
Other versions
CN112497219A (en)
Inventor
刘志峰
雷旦
赵永胜
李龙飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology
Priority to CN202011419779.2A
Publication of CN112497219A
Application granted
Publication of CN112497219B
Legal status: Active

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1628 Programme controls characterised by the control loop
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors; perception control; multi-sensor controlled systems; sensor fusion
    • B25J9/1697 Vision controlled systems
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a high-precision method for classifying and positioning columnar workpieces based on target detection and machine vision. It comprises two parts: yolov3-based target detection, defect detection, and coarse positioning, followed by machine-vision-based high-precision positioning. The yolov3 part covers building the data set, improving the network structure, adjusting the candidate-box parameters, and performing real-time recognition, positioning, and defect detection. An Eye-To-Hand camera acquires the workpiece images, an image-enhancement algorithm is fused in, and the candidate-box parameters are improved with a vector similarity measurement method. In the machine vision part, the coarse position from the yolov3 algorithm guides an Eye-In-Hand camera to acquire an image; image features are extracted, abnormal features are rejected under a maximum-value constraint, and the workpiece contour features are finally fitted to obtain the high-precision position of the target workpiece.

Description

A method for classifying and positioning columnar workpieces based on target detection and machine vision

Technical field

The present invention relates to industrial robot and machine vision applications, and specifically to a high-precision method for classifying and positioning columnar workpieces based on target detection and machine vision.

Background

With the development of intelligent manufacturing, industrial robots, with their good versatility and high repeat positioning accuracy, are widely used in industrial automation, where most deployments still rely on robot teaching. True intelligent manufacturing remains a long way off, and traditional teaching cannot meet its needs. Machine vision solves the robot position-control problem well, but recognition flexibility and accuracy are hard to reconcile: deep-learning-based target detection meets the flexibility requirement of multi-target recognition but lacks positioning accuracy, while traditional machine vision inspection offers high recognition accuracy but can recognize only a single type of feature.

The patent published as CN111238450A discloses a visual positioning method and device that acquires multiple image frames of a single target workpiece and requires the visual positioning information of each frame to satisfy the corresponding acquisition-pose transformation; it cannot recognize and position multi-target workpieces. The patent published as CN106272416A discloses a force- and vision-based robotic system and method for the precision assembly of slender shafts; because it relies on several sensor types (vision, position, and force), it has certain limitations.

In short, deep-learning methods achieve target detection but with poor accuracy, while traditional machine vision positions targets with high accuracy but handles only a single target type. The classification and high-precision positioning of multi-target workpieces is therefore an urgent problem in industrial robot and machine vision applications.

Summary of the invention

The invention provides a high-precision method for classifying and positioning columnar workpieces based on target detection and machine vision. Deep-learning target detection classifies the workpieces and coarsely positions each target; the coarse target position guides the manipulator above the workpiece, and machine vision then completes the high-precision positioning. Multi-target workpieces are thereby classified, recognized, and positioned with high precision.

To this end, the invention provides a high-precision method for classifying and positioning columnar workpieces based on target detection and machine vision, comprising the following steps:

Multi-target recognition, coarse positioning, and defect identification based on the yolov3 target detection algorithm:

Images of the multi-target workpieces are acquired with the Eye-To-Hand camera of the experimental platform. The platform comprises a manipulator, a vision control system, an Eye-To-Hand camera, and an Eye-In-Hand camera. The Eye-To-Hand camera is fixed directly above the test bench at a large working distance, so that the different types of multi-target workpieces are all imaged within its field of view; the large working distance is also why the coarse recognition and positioning accuracy of this camera is low.

S1: The Eye-To-Hand camera acquires images of the multi-target workpieces on the test bench. The acquired images are input into the yolov3 algorithm with the improved network structure, the improved yolov3 model is trained, and the trained multi-target detection model performs target detection to obtain the category and coarse-precision image coordinates of each multi-target workpiece.

S2: Based on coordinate transformation, hand-eye calibration between the Eye-To-Hand camera and the manipulator end is performed with a calibration plate. The coarse-precision image coordinates of the multi-target workpieces are combined with the hand-eye calibration parameters to solve for the world coordinates of each target workpiece, and the category of each target workpiece is returned at the same time.

S3: During training, the yolov3 model with the improved network structure is trained on the multi-target workpiece types and, at the same time, on the typical defects of each workpiece. During detection, the trained model identifies key target defects such as scratches and missing corners.

High-precision positioning of the target workpiece based on machine vision:

S4: The coarse workpiece coordinates identified by the improved yolov3 model are transmitted to the vision control system over a communication protocol, and the control system forwards them to the manipulator. The vision control system runs on an industrial PC; the Eye-In-Hand camera is mounted at the manipulator end and moves with the manipulator to a position above the target workpiece.

S5: The Eye-In-Hand camera moves above the workpiece, which is placed on the test bench, and acquires its image. The system performs image processing and feature extraction on the acquired image to obtain the coordinates of the workpiece's key features, combines them with the hand-eye calibration parameters of the Eye-In-Hand camera to obtain the high-precision world coordinates of the workpiece, and sends them to the vision system.

S6: The system processor guides the manipulator to clamp, carry, or assemble according to the high-precision coordinates.

S7: Steps S4-S6 are repeated to position the different categories of target workpieces with high precision, achieving high-precision positioning of the multi-target workpieces.

The workpieces are shaft parts, and the multi-target workpieces comprise four different types. The cameras and the vision system communicate over the GigE protocol for image transmission; the vision system and the manipulator communicate over the TCP/IP protocol for position-coordinate transmission.
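
As an illustration of this position-coordinate hand-off, the minimal Python sketch below sends one detected pose from the vision system to the manipulator controller over TCP/IP. The patent names only the protocol; the host, port, and plain-text message format here are illustrative assumptions.

    import socket

    def send_world_coordinates(x_mm: float, y_mm: float, z_mm: float,
                               category: str,
                               host: str = "192.168.1.10",
                               port: int = 5000) -> None:
        """Send one target's class label and world coordinates (mm) to the robot."""
        # Hypothetical comma-separated wire format; the patent does not define one.
        message = f"{category},{x_mm:.3f},{y_mm:.3f},{z_mm:.3f}\n".encode("ascii")
        with socket.create_connection((host, port), timeout=2.0) as conn:
            conn.sendall(message)

    # Example call: report a coarse detection of a type-2 shaft to the controller.
    # send_world_coordinates(412.350, 187.220, 55.000, "shaft_type_2")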

Further, step S1 specifically comprises:

S11: The Eye-To-Hand camera on the test bench acquires images of the targets to be detected; the different workpiece types are then labelled and classified to build the training data set. The workpiece labels fall into eight categories: four different types of shaft parts and the same four types with defects.

S12: The training data set is augmented, and the augmented data set is input into the improved yolov3 model for training to obtain the parameter model.

S13: The original multi-target workpiece image to be recognized is input into the trained improved-network yolov3 model, which outputs the corresponding defect detection, classification, and coarse positioning results.

S14: The candidate-box parameters in the training set are measured with a vector similarity method and statistically analyzed according to the standardized Euclidean distance; the parameters with the smallest error are written into the configuration file, improving the yolov3 detection boxes.
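
A minimal sketch of this candidate-box selection follows: each candidate set of (width, height) anchors is scored against the labelled boxes of the training set with a standardized Euclidean distance, and the set with the smallest mean error would be the one written to the configuration file. The data layout and function names are assumptions for illustration.

    import numpy as np

    def standardized_euclidean(boxes: np.ndarray, anchor: np.ndarray,
                               std: np.ndarray) -> np.ndarray:
        """Standardized Euclidean distance from every (w, h) box to one anchor."""
        return np.sqrt((((boxes - anchor) / std) ** 2).sum(axis=1))

    def best_anchor_set(gt_wh: np.ndarray, anchor_sets: list) -> int:
        """Return the index of the candidate anchor set with the smallest error."""
        std = gt_wh.std(axis=0)  # per-dimension spread of the training boxes
        errors = []
        for anchors in anchor_sets:
            d = np.stack([standardized_euclidean(gt_wh, a, std) for a in anchors])
            errors.append(d.min(axis=0).mean())  # nearest anchor for each box
        return int(np.argmin(errors))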

The improved-network yolov3 model is modified from the darknet53 network to meet the target-detection requirements of multi-target workpieces. In the target detection and defect identification method provided by the invention, the yolov3 network structure is optimized and improved as follows:

The original yolov3 network model produces detection results at three scales, 13×13×75, 26×26×75, and 52×52×75, through a series of downsampling steps, where 13, 26, and 52 are the sampling scales. The 75 channels split as 3×(4+1+20): 3 is the number of detection boxes per scale; 4 is the position information of each box, comprising its width, height, and center coordinates; 1 is the recognition probability; and 20 is the number of detectable target classes. In the improved network structure, the modified network detects the four different types of multi-target workpieces and also identifies the different classes of defective workpieces, yielding outputs at the three scales 13×13×39, 26×26×39, and 52×52×39.
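
The quoted channel counts can be checked directly. The short snippet below assumes a class count of 8 (four workpiece types plus four defect classes), which is what the 39-channel heads imply:

    def head_channels(num_classes: int, anchors_per_scale: int = 3) -> int:
        # Each anchor predicts 4 box coordinates + 1 objectness + C class scores.
        return anchors_per_scale * (4 + 1 + num_classes)

    assert head_channels(20) == 75   # stock yolov3 head, 20 classes
    assert head_channels(8) == 39    # improved head: 4 workpiece + 4 defect classes
    for cells in (13, 26, 52):       # the three detection scales
        print(f"{cells}x{cells}x{head_channels(8)}")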

Further, step S2 specifically comprises:

S21: The Eye-To-Hand camera is hand-eye calibrated with the halcon calibration-plate method;

S22: The hand-eye calibration yields the external parameters of the Eye-To-Hand camera, which are normalized into matrix form;

S23: The image coordinates obtained by the yolov3 detection model are combined with the external parameter matrix, converting the image coordinates into the robot's world coordinates.
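
As a sketch of this conversion, and assuming the workpieces lie on a single plane so that the calibration folds into one pixel-to-world homography (the patent itself specifies only an external parameter matrix), the mapping can be written as:

    import numpy as np

    def pixel_to_world(u: float, v: float, H: np.ndarray) -> tuple:
        """Map an image point (u, v) to world-plane coordinates (X, Y)."""
        p = H @ np.array([u, v, 1.0])  # homogeneous pixel coordinates
        return (p[0] / p[2], p[1] / p[2])

    # Illustrative calibration: 0.25 mm per pixel plus a translation offset.
    H = np.array([[0.25, 0.00, 120.0],
                  [0.00, 0.25, -45.0],
                  [0.00, 0.00,   1.0]])
    print(pixel_to_world(640.0, 512.0, H))  # world XY of one detection, in mm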

Further, step S5 specifically comprises:

S51: After the Eye-In-Hand camera photographs the single target workpiece, the image is preprocessed and denoised; the preprocessed image is then adaptively binarized to obtain the edge feature information of the columnar workpiece.
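
An equivalent preprocessing chain in OpenCV is sketched below; the patent's pipeline is implemented in halcon, so the specific calls and parameter values here are assumptions:

    import cv2

    def extract_edge_contours(image_path: str):
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        smooth = cv2.GaussianBlur(gray, (5, 5), 0)  # noise reduction
        binary = cv2.adaptiveThreshold(             # adaptive binarization
            smooth, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
            cv2.THRESH_BINARY, blockSize=31, C=5)
        contours, _ = cv2.findContours(             # edge feature information
            binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
        return contours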

S52: From the circle edge feature information, the circular contour of the columnar workpiece is fitted with an outlier-detection method, and the maximum-value-constrained select_max_length_contour method selects the maximum outer-circle contour of the workpiece, achieving high-precision visual positioning.

The select_max_length_contour method applies a maximum-value constraint to the concentric circle contours obtained after fitting the workpiece's key features and returns the contour feature information of the columnar workpiece. The method first initializes the longest length and the longest-length index, then traverses the lengths of the extracted contours, keeping the length and index of the longest contour, and finally returns the index of the longest contour.
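
That traversal maps directly onto a few lines of Python; a minimal sketch of the described logic, operating on a list of contour lengths:

    def select_max_length_contour(contour_lengths):
        max_length = -1.0  # initialize the longest length
        max_index = -1     # initialize the longest-length index
        for i, length in enumerate(contour_lengths):  # traverse all contours
            if length > max_length:                   # keep the running maximum
                max_length, max_index = length, i
        return max_index   # index of the maximum outer-circle contour

    # Example: perimeters of three fitted concentric circles; the outermost wins.
    assert select_max_length_contour([310.2, 512.8, 415.0]) == 1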

The method classifies and positions columnar workpieces with micron-level accuracy and can recognize multiple different types of columnar workpieces, with a recognition accuracy above 90% and a recognition speed above 50 fps.

Compared with the prior art, the invention has the following advantages:

1. The method positions the multi-target workpieces on the test bench with high precision and, together with the manipulator, fully automates the clamping, carrying, and assembly of multi-target workpieces; the whole process needs no manual intervention, greatly improving production efficiency.

2. The Eye-To-Hand camera fixed above the test bench, running the improved-network yolov3 model, automatically detects the multi-target workpieces, completes their coarse positioning, and at the same time detects defective workpieces.

3. The coordinates returned by coarse positioning are passed to the manipulator, which carries the Eye-In-Hand camera above the target workpiece for high-precision positioning. The method thus overcomes both the insufficient positioning accuracy of deep-learning target detection, which otherwise meets the flexibility requirement of multi-target recognition well, and the single-feature limitation of traditional machine vision inspection, which otherwise recognizes with high accuracy.

Description of drawings

Figure 1 is a schematic diagram of the camera layout of the invention.

Figure 2 is a flow chart of the high-precision method for classifying and positioning columnar workpieces based on target detection and machine vision provided by the invention.

Figure 3 is a schematic diagram of the improved yolov3 target detection model structure.

Figure 4 is a flow chart of the select_max_length_contour algorithm used in the invention.

Figure 5 shows results of the high-precision method for classifying and positioning columnar workpieces based on target detection and machine vision.


Claims (8)

1. A method for classifying and positioning columnar workpieces with high precision based on target detection and machine vision, characterized by comprising the following steps:
acquiring images of the multi-target workpieces with an Eye-To-Hand camera of the experimental platform; the experimental platform comprises a manipulator, a vision control system, an Eye-To-Hand camera, and an Eye-In-Hand camera; the Eye-To-Hand camera is fixed above the experimental platform and images the different types of multi-target workpieces within its field of view; the multi-target workpieces include four types of columnar workpieces;
S1: the Eye-To-Hand camera acquires images of the multi-target workpieces on the experimental platform; the acquired images are input into a yolov3 algorithm model with an improved network structure, the model is trained, and target detection is performed with the trained model to obtain the category and coarse-precision image coordinates of each multi-target workpiece;
S2: based on coordinate transformation, hand-eye calibration is performed between the Eye-To-Hand camera and the manipulator end by a calibration-plate method; the obtained image coordinates of the multi-target workpieces are combined with the hand-eye calibration parameters to calculate the world coordinates of each target workpiece, and the category of each target workpiece is returned;
S3: during training, the yolov3 algorithm model with the improved network structure is trained on the multiple target workpiece types and on the scratch and missing-corner defects of all workpieces; during target detection, the trained model identifies scratch and missing-corner target defects;
S4: the category and coarse-precision image coordinates of each multi-target workpiece are obtained by the yolov3 algorithm model with the improved network structure; the image coordinates are transmitted to the vision control system over a communication protocol, and the vision control system sends them to the manipulator; the vision control system runs on an industrial personal computer; the Eye-In-Hand camera is connected to the manipulator end and moves with the manipulator to a position above the target workpiece;
S5: the Eye-In-Hand camera moves above the workpiece, which is placed on the experimental platform, and acquires its images; the vision control system performs image processing and feature extraction on the acquired images to obtain the key feature coordinates of the workpiece, combines them with the hand-eye calibration parameters of the Eye-In-Hand camera to obtain the high-precision world coordinates of the workpiece, and sends the world coordinates to the vision control system;
S6: the system processor guides the manipulator to clamp, carry, or assemble according to the high-precision world coordinates;
S7: steps S4-S6 are repeated to position the different categories of target workpieces with high precision, realizing high-precision positioning of the multi-target workpieces.
2. The method for classifying and positioning columnar workpieces with high precision based on target detection and machine vision according to claim 1, wherein the Eye-To-Hand camera, the Eye-In-Hand camera, and the vision control system communicate over the GigE protocol for image transmission, and the vision control system and the manipulator communicate over the TCP/IP protocol for position-coordinate transmission.
3. The method for classifying and positioning columnar workpieces with high precision based on target detection and machine vision according to claim 1, wherein step S1 specifically comprises:
S11: acquiring images of the target workpieces to be detected with the Eye-To-Hand camera on the experimental platform, and then labelling and classifying the different types of workpieces to build a training data set;
S12: augmenting the training data set, and inputting the augmented data set into the yolov3 algorithm model with the improved network structure for training to obtain a parameter model;
S13: inputting the original multi-target workpiece image to be recognized into the trained yolov3 algorithm model with the improved network structure, and outputting the corresponding defect detection, classification, and coarse positioning results;
S14: measuring the candidate-box parameters in the training set with a vector similarity method, statistically analyzing them according to the standardized Euclidean distance, writing the parameters with the smallest error into a configuration file, and thereby improving the yolov3 detection boxes.
4. The method for classifying and positioning columnar workpieces with high precision based on target detection and machine vision according to claim 1, wherein the yolov3 algorithm model with the improved network structure is improved from the darknet53 network and meets the requirements of multi-target workpiece detection.
5. The method for classifying and positioning columnar workpieces with high precision based on target detection and machine vision according to claim 1, wherein the original network model of the yolov3 algorithm obtains detection results at the three scales 13×13×75, 26×26×75, and 52×52×75 through a series of downsampling steps, wherein 13, 26, and 52 represent sampling scales; 75 splits into 3×(4+1+20), where 3 represents the three detection boxes per scale, 4 represents the position information of each detection box, comprising the width and height of the box and the coordinates of its center, 1 represents the recognition probability, and 20 represents the number of detectable target classes; the yolov3 algorithm model with the improved network structure uses a modified network that detects the four different types of multi-target workpieces and also identifies the different classes of defective workpieces, yielding outputs at the three scales 13×13×39, 26×26×39, and 52×52×39.
6. The method for classifying and positioning columnar workpieces with high precision based on target detection and machine vision according to claim 1, wherein step S2 specifically comprises:
S21: performing hand-eye calibration of the Eye-To-Hand camera with the halcon calibration-plate method;
S22: obtaining the external parameters of the Eye-To-Hand camera from the hand-eye calibration, and normalizing the parameters into matrix form;
S23: combining the image coordinates obtained by the yolov3 algorithm model with the external parameter matrix, and converting the image coordinates into the world coordinates of the manipulator.
7. The method for classifying and positioning columnar workpieces with high precision based on target detection and machine vision according to claim 1, wherein step S5 specifically comprises:
S51: after the Eye-In-Hand camera photographs a single target workpiece, performing image preprocessing and noise reduction; performing adaptive binarization on the preprocessed image to obtain the edge feature information of the columnar workpiece;
S52: fitting the circular contour of the columnar workpiece from the circle edge feature information with an outlier-detection method, and obtaining the maximum outer-circle contour of the columnar workpiece with the maximum-value-constrained select_max_length_contour method to achieve high-precision visual positioning.
8. The method for classifying and positioning columnar workpieces with high precision based on target detection and machine vision according to claim 1, wherein the columnar workpieces are classified and positioned with micron-level precision, and a plurality of different types of columnar workpieces can be identified.
CN202011419779.2A 2020-12-06 2020-12-06 Columnar workpiece classifying and positioning method based on target detection and machine vision Active CN112497219B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011419779.2A CN112497219B (en) 2020-12-06 2020-12-06 Columnar workpiece classifying and positioning method based on target detection and machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011419779.2A CN112497219B (en) 2020-12-06 2020-12-06 Columnar workpiece classifying and positioning method based on target detection and machine vision

Publications (2)

Publication Number Publication Date
CN112497219A (en) 2021-03-16
CN112497219B (en) 2023-09-12

Family

ID=74971073

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011419779.2A Active CN112497219B (en) 2020-12-06 2020-12-06 Columnar workpiece classifying and positioning method based on target detection and machine vision

Country Status (1)

Country Link
CN (1) CN112497219B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113134683A (en) * 2021-05-13 2021-07-20 兰州理工大学 Laser marking method and device based on machine learning
CN113538417A (en) * 2021-08-24 2021-10-22 安徽顺鼎阿泰克科技有限公司 Transparent container defect detection method and device based on multi-angle and target detection
CN113657551B (en) * 2021-09-01 2023-10-20 陕西工业职业技术学院 Robot grabbing gesture task planning method for sorting and stacking multiple targets
CN113814987B (en) * 2021-11-24 2022-06-03 季华实验室 Multi-camera robot hand-eye calibration method, device, electronic device and storage medium
CN115159149B (en) * 2022-07-28 2024-05-24 深圳市罗宾汉智能装备有限公司 Visual positioning-based material taking and unloading method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102229146A (en) * 2011-04-27 2011-11-02 北京工业大学 Remote control humanoid robot system based on exoskeleton human posture information acquisition technology
CN105690386A (en) * 2016-03-23 2016-06-22 北京轩宇智能科技有限公司 Teleoperation system and teleoperation method for novel mechanical arm
CN108555908A (en) * 2018-04-12 2018-09-21 同济大学 A kind of identification of stacking workpiece posture and pick-up method based on RGBD cameras
CN109448054A (en) * 2018-09-17 2019-03-08 深圳大学 Target step-by-step positioning method, application, device and system based on visual fusion
CN109483554A (en) * 2019-01-22 2019-03-19 清华大学 Robotic Dynamic grasping means and system based on global and local vision semanteme

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8111905B2 (en) * 2009-10-29 2012-02-07 Mitutoyo Corporation Autofocus video tool and method for precise dimensional inspection

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102229146A (en) * 2011-04-27 2011-11-02 北京工业大学 Remote control humanoid robot system based on exoskeleton human posture information acquisition technology
CN105690386A (en) * 2016-03-23 2016-06-22 北京轩宇智能科技有限公司 Teleoperation system and teleoperation method for novel mechanical arm
CN108555908A (en) * 2018-04-12 2018-09-21 同济大学 A kind of identification of stacking workpiece posture and pick-up method based on RGBD cameras
CN109448054A (en) * 2018-09-17 2019-03-08 深圳大学 Target step-by-step positioning method, application, device and system based on visual fusion
CN109483554A (en) * 2019-01-22 2019-03-19 清华大学 Robotic Dynamic grasping means and system based on global and local vision semanteme

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Workpiece recognition and classification system based on image resolution processing and convolutional neural networks; 陈春谋; 系统仿真技术 (System Simulation Technology); Vol. 15, No. 2; pp. 99-106 *

Also Published As

Publication number Publication date
CN112497219A (en) 2021-03-16

Similar Documents

Publication Publication Date Title
CN112497219B (en) Columnar workpiece classifying and positioning method based on target detection and machine vision
CN111062915B (en) Real-time steel pipe defect detection method based on improved YOLOv3 model
CN111537517B (en) An Unmanned Intelligent Stamping Defect Identification Method
CN109840900B (en) A fault online detection system and detection method applied to intelligent manufacturing workshops
CN110806736B (en) A method for detecting quality information of forgings in intelligent manufacturing production line of die forging
CN109785317B (en) Automatic pile up neatly truss robot's vision system
CN109978940B (en) Visual measurement method for SAB safety airbag size
CN107705293A (en) A kind of hardware dimension measurement method based on CCD area array cameras vision-based detections
CN110443791B (en) Workpiece detection method and device based on deep learning network
CN115439458A (en) Industrial image defect target detection algorithm based on depth map attention
CN113393426A (en) Method for detecting surface defects of rolled steel plate
CN118314138B (en) Laser processing method and system based on machine vision
CN113569922A (en) A method for intelligent non-destructive sorting of apples
CN115937203A (en) Visual detection method, device, equipment and medium based on template matching
CN115035092A (en) Image-based bottle detection method, device, equipment and storage medium
CN111784688A (en) Flower automatic grading method based on deep learning
CN116465335A (en) Automatic thickness measurement method and system based on point cloud matching
CN117314829A (en) Industrial part quality inspection method and system based on computer vision
CN118247331A (en) A method and system for automatically detecting part size based on image recognition
JP4814116B2 (en) Mounting board appearance inspection method
CN114998357B (en) Industrial detection method, system, terminal and medium based on multi-information analysis
CN117207191A (en) High-precision welding robot hand-eye calibration method based on machine vision
CN116664540A (en) Surface defect detection method of rubber sealing ring based on Gaussian line detection
CN117299596B (en) Material screening system and method for automatic detection
CN119048488B (en) Steel pipe defect detection system and method based on image analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant