CN112561886A - Automatic workpiece sorting method and system based on machine vision - Google Patents


Info

Publication number
CN112561886A
CN112561886A (application CN202011506905.8A)
Authority
CN
China
Prior art keywords
workpiece
camera
robot
coordinate system
calibration
Prior art date
Legal status
Pending
Application number
CN202011506905.8A
Other languages
Chinese (zh)
Inventor
曹江中
廖秉旺
Current Assignee
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN202011506905.8A
Publication of CN112561886A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B07 SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07C POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C 5/00 Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C 5/36 Sorting apparatus characterised by the means used for distribution
    • B07C 5/361 Processing or control devices therefor, e.g. escort memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30204 Marker

Abstract

The invention provides a machine-vision-based automatic workpiece sorting method for overcoming the defect that errors in the internal and external parameters of an industrial camera affect the sorting result. The method comprises the following steps: the camera shoots pictures of different types of workpieces, and a matching template for each workpiece type is generated and stored in a workpiece feature template library; the camera is calibrated to obtain its original internal and external parameters; the internal and external parameters of the camera are optimized with the longicorn whisker (beetle antennae search) optimization algorithm; an affine transformation matrix between the camera coordinate system and the robot coordinate system is constructed from the optimized internal and external parameters; the workpiece grabbing position coordinate S is obtained through this affine transformation matrix, and the sorting robot grabs the workpiece according to the coordinate S. The invention also provides a machine-vision-based automatic workpiece sorting system that applies the method.

Description

Automatic workpiece sorting method and system based on machine vision
Technical Field
The invention relates to the technical field of machine vision, in particular to a method and a system for automatically sorting workpieces based on machine vision.
Background
With the rapid development of the national economy, manufacturing, as a pillar industry, is also developing rapidly. Traditional manufacturing enterprises face strong pressure to transform and upgrade their production machinery, and industrial robots, as key equipment, play an important role on the road to intelligent manufacturing. At present, China has the largest installed base of industrial robots in the world, and they are applied in many production fields such as welding, material handling and workpiece sorting. A sorting robot can operate continuously and stably in harsh environments, which effectively protects workers' health and improves sorting accuracy. Therefore, sorting workpieces effectively and efficiently is key to improving the capacity of the sorting robot.
In a traditional workpiece sorting method, the computer obtains the internal and external parameters of the industrial camera through camera calibration and then obtains the affine transformation matrix between the camera coordinate system and the robot coordinate system through hand-eye calibration. Because certain errors exist in the manufacturing and assembly of the industrial camera, the pictures it shoots are slightly distorted, so the calibrated internal and external parameters carry a certain error, and the affine transformation matrix derived from them during hand-eye calibration cannot accurately reflect the transformation between the camera coordinate system and the robot coordinate system. In actual operation, the workpiece position computed with this inaccurate affine transformation matrix has a certain error, and the sorting often fails.
Disclosure of Invention
The invention provides a machine vision-based automatic workpiece sorting method and a machine vision-based automatic workpiece sorting system, aiming at overcoming the defect that the errors of internal parameters and external parameters of an industrial camera in the prior art influence sorting results.
In order to solve the technical problems, the technical scheme of the invention is as follows:
a workpiece automatic sorting method based on machine vision comprises the following steps:
s1: the method comprises the following steps that a camera shoots different types of workpiece pictures, a corresponding type of workpiece matching template is generated and stored in a workpiece feature template library;
s2: shooting first calibration plate pictures at different positions of a workbench by a camera, and calibrating the camera by adopting the first calibration plate pictures to obtain original internal parameters and external parameters of the camera;
S3: optimizing the internal and external parameters of the camera by adopting the longicorn whisker optimization algorithm;
s4: the robot clamps the calibration plate to move in the visual field range of the camera, the camera uses the optimized internal reference and external reference to shoot a second calibration plate picture at different positions, and a pose file of the robot is stored;
s5: performing hand-eye calibration by using the optimized internal reference and external reference of the camera, the second calibration plate picture and the pose file of the robot to obtain a transformational affine matrix of a camera coordinate system and a robot coordinate system;
S6: setting the motion speed v of the conveyor belt and placing the workpieces to be sorted on the moving belt; the camera captures each workpiece flowing through the photographing area, features are extracted from the shot workpiece picture and matched against the workpiece matching templates stored in the workpiece feature template library to obtain the workpiece type information; from the shot picture, the spatial position information S1 of the workpiece to be sorted on the conveyor belt is acquired through the affine transformation matrix of the camera coordinate system and the robot coordinate system;
S7: acquiring the distance ΔX that the conveyor belt moves between the camera photographing position and the robot grabbing position, and obtaining the robot's workpiece grabbing position coordinate S = S1 + ΔX from the spatial position information of the workpiece on the belt; the robot grabs the workpiece at the coordinate S and, according to the workpiece type information, places it into the placing groove corresponding to its type.
Preferably, in step S1, the shot workpiece picture is pre-processed with a weighted-average filtering algorithm, the edge contour shape information of the workpiece in the picture is then extracted to generate matching templates for the different workpiece types, and the workpiece matching templates are stored in the workpiece feature template library.
Preferably, in the filtering algorithm using weighted average, the weight is the inverse of the gray gradient, and the expression formula is as follows:
f′(i, j) = Σ_{(k,l)≠(0,0)} w(i+k, j+l) · f(i+k, j+l)
w(i+k, j+l) = d(i+k, j+l) / Σ_{(k,l)≠(0,0)} d(i+k, j+l)
d(i+k, j+l) = (|f(i+k, j+l) − f(i, j)| + 1)^(−1), (k, l) ≠ (0, 0)
in the formula, (i, j) is the position coordinate of the central pixel point, d (-) is the gray gradient calculation function of the image pixel point, (k, l) represents the increment of the position of the image pixel point, and w (i, j) is the weight value of the pixel point with the position coordinate of (i, j).
Preferably, in the step S3, the specific steps of optimizing the internal reference and the external reference of the camera are as follows:
S3.1: identifying the calibration corner points on the calibration plate with the Harris corner extraction algorithm, calculating the pixel coordinates (x, y) of each calibration corner, and obtaining the back-projection coordinates (x′, y′) of the calibration corners on the plate using the original internal and external parameters of the camera;
S3.2: taking the average error between the pixel coordinates (x, y) of the calibration corners and their back-projection coordinates (x′, y′) as the objective function y, whose expression is:
y = (1/N) · Σ_{i=1}^{N} ‖m_i − m′_i‖
where m_i denotes the sub-pixel coordinates (x, y) of the i-th calibration corner, m′_i denotes the sub-pixel back-projection coordinates (x′, y′) of the i-th calibration corner, and N is the number of calibration corners on the calibration plate;
s3.3: optimizing the objective function y by adopting a longicorn whisker optimization algorithm, updating internal parameters and external parameters of the camera when the objective function value is smaller than a preset global optimum value, and outputting the internal parameters and the external parameters as optimized internal parameters and optimized external parameters of the camera; otherwise, the iterative optimization process is carried out, and the step S3.3 is repeatedly executed.
Preferably, the strategy model for optimizing the objective function y by adopting the longicorn whisker optimization algorithm is as follows:
(1) abstracting the longicorn into a mass center, and defining the positions of two antennae on the left and right of the mass center as:
Xr=X+l*d
Xl=X-l*d
in the formula, Xr represents the position of the right antenna tip and Xl the position of the left antenna tip; X is the current centroid position, l is a proportionality coefficient, and d is the orientation between the antenna tips;
(2) the orientation d between the antenna tips is normalized, the expression being:
d̃ = rands(m, n) / ‖rands(m, n)‖
where d̃ denotes the normalized orientation, and the rands(m, n) function generates an m × n matrix of random numbers uniformly distributed in the interval (−1, 1);
(3) the position of the centroid at the next step is calculated as:
X_{t+1} = X_t − δ_t · d̃ · sign(f(X_r) − f(X_l))
where X_t denotes the position of the centroid at the t-th iteration and X_{t+1} the position at the (t+1)-th iteration; δ_t denotes the exploration step of the t-th iteration; sign(·) is the sign function and f(·) the fitness function.
Preferably, the expression formula of the affine transformation matrix H of the camera coordinate system and the robot coordinate system is as follows:
H = A × T1
where A denotes the internal parameter matrix of the camera and T1 denotes the external parameter matrix of the camera;
the complete expression formula of the transformational affine matrix H of the camera coordinate system and the robot coordinate system is as follows:
Z_c · [u, v, 1]^T = A · T1 · [X_w, Y_w, Z_w, 1]^T, with
A = [ k_x 0 u_0 0 ; 0 k_y v_0 0 ; 0 0 1 0 ],  T1 = [ R T ; O^T 1 ]
where Z_c denotes an arbitrary scale factor; k_x and k_y denote the equivalent focal lengths of the camera in the x and y directions, and in general k_x = k_y; (u_0, v_0) are the principal-point coordinates of the camera relative to the imaging plane; R is the rotation matrix and T the translation vector in the camera external parameter matrix; O^T is a zero row vector.
Preferably, the method further comprises the following step: judging the error of the affine transformation matrix between the camera coordinate system and the robot coordinate system using the reprojection error as the evaluation index, and returning to step S2 when the error is greater than a preset threshold value.
The invention also provides a machine vision-based automatic workpiece sorting system, which is applied to the machine vision-based automatic workpiece sorting method provided by any technical scheme, and the system specifically comprises a camera, a robot, an upper computer, a workbench and a placing groove, wherein:
the camera is placed in a shooting area at one side of the workbench, and the robot is placed in a grabbing area at the other side of the workbench;
the workbench is provided with a conveyor belt, and the conveyor belt is used for conveying different types of workpieces to a shooting area and a grabbing area;
the upper computer classifies the types of workpieces according to the workpiece pictures shot by the camera, generates a workpiece matching template and stores the workpiece matching template in a workpiece feature template library;
the upper computer optimizes internal parameters and external parameters of the camera according to the calibration pictures shot by the camera;
the upper computer performs hand-eye calibration according to the optimized internal reference and external reference of the camera, the second calibration plate picture and the pose file of the robot to obtain a transformational affine matrix of a camera coordinate system and a robot coordinate system;
the upper computer obtains the workpiece type information from the shot picture of the workpiece to be sorted, and obtains the spatial position information S1 of the workpiece on the conveyor belt from the same picture; it acquires the distance ΔX that the belt moves between the camera photographing position and the robot grabbing position, obtains the grabbing position S1 + ΔX, converts it through the affine transformation matrix of the camera coordinate system and the robot coordinate system into the robot's workpiece grabbing position coordinate S′, and generates a robot grabbing instruction that is sent to the robot;
and the robot executes automatic sorting operation according to the received grabbing instruction, and places the workpieces to be sorted into the placing grooves corresponding to the types of the workpieces.
Preferably, a tachometer is arranged on the conveyor belt, and its output end is connected with the input end of the upper computer; the tachometer measures the belt speed v in real time and uploads it to the upper computer.
Preferably, the robot grabbing end is provided with an air compressor and an electromagnetic valve, and the air compressor and the electromagnetic valve form a vacuum generator.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects: the invention uses machine vision and introduces the longicorn whisker (beetle antennae) search optimization algorithm to optimize the internal and external parameters of the industrial camera, providing more accurate data for the subsequent hand-eye calibration and thereby optimizing the affine transformation matrix between the camera coordinate system and the robot coordinate system; with this more accurate matrix, the computer can calculate the spatial position information of the workpiece and effectively improve the sorting accuracy of the sorting robot.
Drawings
Fig. 1 is a flowchart of a method for automatically sorting workpieces based on machine vision according to embodiment 1.
Fig. 2 is a flowchart of optimizing the internal reference and the external reference of the camera according to embodiment 1.
Fig. 3 is a schematic structural diagram of an automatic workpiece sorting system based on machine vision according to embodiment 2.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
The embodiment provides a workpiece automatic sorting method based on machine vision, which comprises the following steps:
s1: the camera shoots different types of workpiece pictures, generates corresponding types of workpiece matching templates and stores the workpiece matching templates in a workpiece feature template library.
In the step, a shot workpiece picture is preprocessed by adopting a weighted average filtering algorithm to improve the quality of the workpiece picture, edge contour shape information of the workpiece in the picture is extracted, different types of workpiece matching templates are generated, and then the workpiece matching templates are stored in a workpiece feature template library.
Wherein, the weight in the weighted average filtering algorithm is the reciprocal of the gray gradient, and the expression formula is as follows:
f′(i, j) = Σ_{(k,l)≠(0,0)} w(i+k, j+l) · f(i+k, j+l)
w(i+k, j+l) = d(i+k, j+l) / Σ_{(k,l)≠(0,0)} d(i+k, j+l)
d(i+k, j+l) = (|f(i+k, j+l) − f(i, j)| + 1)^(−1), (k, l) ≠ (0, 0)
in the formula, (i, j) is the position coordinate of the central pixel point, d (-) is the gray gradient calculation function of the image pixel point, (k, l) represents the increment of the position of the image pixel point, and w (i, j) is the weight value of the pixel point with the position coordinate of (i, j).
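As an illustration of the filtering step above, the gradient-inverse weighted average can be sketched in Python with NumPy; the function name, window size and border handling are assumptions of the sketch, not taken from the patent text.

```python
import numpy as np

def gradient_inverse_filter(img, ksize=3):
    """Weighted-average filter whose weights are inverse gray gradients.

    For each neighbour (k, l) of the centre pixel (i, j) the affinity is
        d = (|f(i+k, j+l) - f(i, j)| + 1)^(-1)
    and the output is the d-normalised weighted average of the window,
    excluding the centre pixel itself, as in the formulas above.
    """
    f = img.astype(np.float64)
    h, w = f.shape
    r = ksize // 2
    out = np.empty_like(f)
    padded = np.pad(f, r, mode="edge")          # border handling: replicate
    for i in range(h):
        for j in range(w):
            win = padded[i:i + ksize, j:j + ksize]
            d = 1.0 / (np.abs(win - f[i, j]) + 1.0)
            d[r, r] = 0.0                        # exclude (k, l) == (0, 0)
            wgt = d / d.sum()                    # normalised weights w(.)
            out[i, j] = np.sum(wgt * win)
    return out.astype(img.dtype)
```

On a flat region the weights reduce to a plain neighbourhood average, while across an edge the weights shrink, so workpiece contours are preserved for the subsequent edge extraction.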
S2: the camera shoots first calibration plate pictures located at different positions of the workbench, and the camera is calibrated by adopting the first calibration plate pictures to obtain original internal parameters and external parameters of the camera.
In this embodiment, a circular calibration board with 7 × 7 circles is used and placed on the workbench, and 26 calibration board pictures at different positions of the workbench are taken by the camera for calibration of the camera, so as to obtain the original internal reference and the external reference of the camera.
S3: and optimizing the internal reference and the external reference of the camera by adopting a longicorn whisker optimization algorithm.
Fig. 2 is a flowchart illustrating the optimization of the internal and external parameters of the camera by using the longicorn whisker optimization algorithm according to this embodiment. The method comprises the following specific steps of optimizing internal parameters and external parameters of the camera:
s3.1: identifying a calibration angular point on the calibration plate by adopting a Harris angular point extraction algorithm, calculating pixel coordinates (x, y) of the calibration angular point, and obtaining back projection coordinates (x ', y') of the calibration angular point on the calibration plate by using original internal reference and external reference of a camera;
S3.2: taking the average error between the pixel coordinates (x, y) of the calibration corners and their back-projection coordinates (x′, y′) as the objective function y, whose expression is:
y = (1/N) · Σ_{i=1}^{N} ‖m_i − m′_i‖
where m_i denotes the sub-pixel coordinates (x, y) of the i-th calibration corner, m′_i denotes the sub-pixel back-projection coordinates (x′, y′) of the i-th calibration corner, and N is the number of calibration corners on the calibration plate;
s3.3: optimizing the objective function y by adopting a longicorn whisker optimization algorithm, updating internal parameters and external parameters of the camera when the objective function value is smaller than a preset global optimum value, and outputting the internal parameters and the external parameters as optimized internal parameters and optimized external parameters of the camera; otherwise, the iterative optimization process is carried out, and the step S3.3 is repeatedly executed.
In this embodiment, the strategy model for optimizing the objective function y by using the longicorn whisker optimization algorithm is as follows:
(1) abstracting the longicorn into a mass center, and defining the positions of two antennae on the left and right of the mass center as:
Xr=X+l*d
Xl=X-l*d
in the formula, Xr represents the position of the right antenna tip and Xl the position of the left antenna tip; X is the current centroid position, l is a proportionality coefficient, and d is the orientation between the antenna tips;
(2) the orientation d between the antenna tips is normalized, the expression being:
d̃ = rands(m, n) / ‖rands(m, n)‖
where d̃ denotes the normalized orientation, and the rands(m, n) function generates an m × n matrix of random numbers uniformly distributed in the interval (−1, 1);
(3) the position of the centroid at the next step is calculated as:
X_{t+1} = X_t − δ_t · d̃ · sign(f(X_r) − f(X_l))
where X_t denotes the position of the centroid at the t-th iteration and X_{t+1} the position at the (t+1)-th iteration; δ_t denotes the exploration step of the t-th iteration; sign(·) is the sign function and f(·) the fitness function.
Further, the strategy model for the objective function y can be generalized to an n-dimensional search space, specifically:
setting the initial position of the centroid as an n-dimensional vector, the positions of the right and left antenna tips are denoted x_r and x_l respectively, and the distance between the two tips is d_0 = |x_l − x_r|; since the heading of each step of the centroid is random, a random n-dimensional vector d′ is generated to represent the direction and is normalized:
d̃ = d′ / ‖d′‖
the right and left antenna positions of the centroid are then obtained as:
x_r = x + d_0 · d̃ / 2
x_l = x − d_0 · d̃ / 2
setting the step length s of each step of iteration as follows:
s=μs
where μ is the step attenuation factor; to avoid falling into a locally optimal solution, μ is typically set to 0.9;
according to the antenna positions x_r and x_l, the objective function values y(x_l) and y(x_r) are calculated respectively and compared:
if y(x_l) > y(x_r), the centroid moves toward the right antenna with step s, and the new centroid coordinate is:
x = x − s · (x_l − x_r) / ‖x_l − x_r‖
if y(x_l) < y(x_r), the centroid moves toward the left antenna with step s, and the new centroid coordinate is:
x = x + s · (x_l − x_r) / ‖x_l − x_r‖.
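The search strategy above (random heading, two antenna probes, step decaying as s = μ·s) can be condensed into a short NumPy sketch; the parameter defaults and function names are illustrative assumptions.

```python
import numpy as np

def beetle_antennae_search(f, x0, n_iter=200, d0=1.0, step=1.0, mu=0.95, seed=0):
    """Minimise f(x) with the beetle antennae ("longicorn whisker") search.

    Each iteration draws a random unit heading, probes the objective at the
    two antenna tips, moves the centroid by `step` toward the better tip,
    and shrinks both the antenna length and the step by the factor mu.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    best_x, best_y = x.copy(), f(x)
    for _ in range(n_iter):
        d = rng.uniform(-1.0, 1.0, x.size)
        d /= np.linalg.norm(d)                     # normalised heading d~
        xr = x + d0 * d / 2.0                      # right antenna tip
        xl = x - d0 * d / 2.0                      # left antenna tip
        x = x - step * d * np.sign(f(xr) - f(xl))  # step toward the lower value
        y = f(x)
        if y < best_y:
            best_x, best_y = x.copy(), y
        d0 *= mu
        step *= mu
    return best_x, best_y
```

In the patent's setting, f would be the mean back-projection error y of step S3.2, evaluated as a function of the packed intrinsic and extrinsic camera parameters.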
s4: the robot clamps the calibration plate to move in the field of view of the camera, the camera uses the optimized internal reference and external reference to take pictures of the second calibration plate at different positions, and the position and pose files of the robot are stored.
In this embodiment, a circular calibration plate with 13 × 15 circles is fixed on the clamp of the robot; the robot moves the plate within the camera's field of view, the camera shoots 20 pictures of the second calibration plate, and the computer stores the pose file of the robot for the movement.
S5: and performing hand-eye calibration by using the optimized internal reference and external reference of the camera, the second calibration plate picture and the pose file of the robot to obtain a transformational affine matrix of the camera coordinate system and the robot coordinate system.
In this step, the expression formula of the affine transformation matrix H of the camera coordinate system and the robot coordinate system is as follows:
H = A × T1
where A denotes the internal parameter matrix of the camera and T1 denotes the external parameter matrix of the camera;
the complete expression formula of the transformational affine matrix H of the camera coordinate system and the robot coordinate system is as follows:
Z_c · [u, v, 1]^T = A · T1 · [X_w, Y_w, Z_w, 1]^T, with
A = [ k_x 0 u_0 0 ; 0 k_y v_0 0 ; 0 0 1 0 ],  T1 = [ R T ; O^T 1 ]
where Z_c denotes an arbitrary scale factor that merely facilitates the matrix computation and, for homogeneous coordinates, does not change the coordinate values; k_x and k_y denote the equivalent focal lengths of the camera in the x and y directions, and in general k_x = k_y; (u_0, v_0) are the principal-point coordinates of the camera relative to the imaging plane; R is the rotation matrix and T the translation vector in the camera external parameter matrix; O^T is a zero row vector.
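The composition H = A × T1 and the mapping Z_c·[u, v, 1]^T = H·[X_w, Y_w, Z_w, 1]^T can be written out directly; the numeric parameter values in the test are illustrative only.

```python
import numpy as np

def projection_matrix(kx, ky, u0, v0, R, T):
    """Compose the 3x4 matrix H = A @ T1 from the formula above.

    A is the 3x4 intrinsic matrix and T1 the 4x4 extrinsic matrix
    [R T; O^T 1], so Zc * [u, v, 1]^T = H @ [Xw, Yw, Zw, 1]^T.
    """
    A = np.array([[kx, 0.0, u0, 0.0],
                  [0.0, ky, v0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])
    T1 = np.eye(4)
    T1[:3, :3] = R
    T1[:3, 3] = np.ravel(T)
    return A @ T1

def project(H, Xw):
    """Project a 3-D world point with H; returns the pixel (u, v)."""
    p = H @ np.append(np.asarray(Xw, dtype=float), 1.0)
    return p[:2] / p[2]          # divide out the scale factor Zc
```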
S6: setting the motion speed v of the conveyor belt and placing the workpieces to be sorted on the moving belt; the camera captures each workpiece flowing through the photographing area, features are extracted from the shot workpiece picture and matched against the workpiece matching templates stored in the workpiece feature template library to obtain the workpiece type information; from the shot picture, the spatial position information S1 of the workpiece to be sorted on the conveyor belt is acquired through the affine transformation matrix of the camera coordinate system and the robot coordinate system.
The moving speed of the conveyor belt can be calculated from the conveying distance Δx and the conveying time Δt: v = Δx/Δt.
In another embodiment, the moving speed of the conveyor belt can be obtained by directly matching with a speed measuring meter to measure in real time.
S7: acquiring the distance ΔX that the conveyor belt moves between the camera photographing position and the robot grabbing position, and obtaining the robot's workpiece grabbing position coordinate S = S1 + ΔX from the spatial position information of the workpiece on the belt; the robot grabs the workpiece at the coordinate S and places the workpieces to be sorted into the placing grooves corresponding to their types according to the workpiece type information.
Further, the reprojection error is used as an evaluation index to judge the error of the affine transformation matrix between the camera coordinate system and the robot coordinate system, and when the error is greater than a preset threshold value, the step S2 is skipped to be executed.
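The acceptance check follows directly from the definition of the reprojection error; the 0.5-pixel default threshold is an illustrative assumption (the patent only specifies "a preset threshold").

```python
import numpy as np

def mean_reprojection_error(detected, reprojected):
    """Mean Euclidean distance between detected corner pixels (x, y) and
    their back-projections (x', y') -- the evaluation index described above."""
    a = np.asarray(detected, dtype=float)
    b = np.asarray(reprojected, dtype=float)
    return float(np.mean(np.linalg.norm(a - b, axis=1)))

def calibration_acceptable(detected, reprojected, threshold=0.5):
    """True if the hand-eye result passes; otherwise return to step S2."""
    return mean_reprojection_error(detected, reprojected) < threshold
```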
In the embodiment, the characteristic of camera imaging distortion is considered, a machine vision technology is utilized, a longicorn whisker search optimization algorithm is introduced, internal parameters and external parameters of an industrial camera are optimized, data with higher accuracy are provided for subsequent hand-eye calibration, and a conversion affine matrix of a camera coordinate system and a robot coordinate system is further optimized. The computer calculates to obtain the spatial position information of the workpiece according to the transformational affine matrix of the camera coordinate system and the robot coordinate system with higher accuracy, and can effectively improve the sorting accuracy of the sorting robot.
Example 2
The present embodiment provides an automatic workpiece sorting system based on machine vision, and applies the automatic workpiece sorting method based on machine vision provided in embodiment 1. Fig. 3 is a schematic structural diagram of the automatic workpiece sorting system based on machine vision according to this embodiment.
The automatic workpiece sorting system based on machine vision provided in this embodiment comprises a camera 1, a sorting robot 2, an upper computer 3, a workbench 4 and placing grooves 5, wherein:
the camera 1 is placed in a shooting area at one side of the workbench 4, and the sorting robot 2 is placed in a grabbing area at the other side of the workbench 4;
the workbench 4 is provided with a conveyor belt which is used for conveying different types of workpieces to a shooting area and a grabbing area;
the upper computer 3 classifies the types of workpieces according to the workpiece pictures shot by the camera 1, generates a workpiece matching template and stores the workpiece matching template in a workpiece feature template library;
the upper computer 3 optimizes the internal parameters and the external parameters of the camera 1 according to the calibration pictures shot by the camera 1;
the upper computer 3 performs hand-eye calibration using the optimized internal and external parameters of the camera 1, the second calibration plate pictures and the pose file of the sorting robot 2, obtaining the affine transformation matrix between the camera 1 coordinate system and the robot coordinate system;
the upper computer 3 obtains the workpiece type information from the shot picture of the workpiece to be sorted and obtains its spatial position S1 on the conveyor belt from the same picture; it acquires the distance ΔX that the conveyor belt moves between the camera 1 capturing the workpiece and the sorting robot 2 actuating the grab, computes the grabbing position S = S1 + ΔX from the spatial position information of the workpiece on the conveyor belt, converts it through the affine transformation matrix between the camera 1 coordinate system and the sorting robot 2 coordinate system into the workpiece grabbing position coordinate S' of the sorting robot 2, and generates and sends a grabbing instruction to the sorting robot 2;
the sorting robot 2 executes automatic sorting operation according to the received grabbing instruction, and places the workpieces to be sorted into the placing grooves 5 corresponding to the types of the workpieces.
Further, a speed meter is arranged on the conveyor belt, with its output end connected to the input end of the upper computer 3; the speed meter measures the speed v of the conveyor belt in real time and uploads it to the upper computer 3.
Further, the grabbing end of the sorting robot 2 is provided with an air compressor 6 and an electromagnetic valve 7, which together form a vacuum generator; this prevents workpiece displacement during grabbing and ensures accurate sorting.
The same or similar reference numerals correspond to the same or similar parts;
the terms describing positional relationships in the drawings are for illustrative purposes only and are not to be construed as limiting the patent;
it should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (10)

1. An automatic workpiece sorting method based on machine vision, characterized by comprising the following steps:
S1: a camera shoots pictures of different types of workpieces, generates workpiece matching templates of the corresponding types and stores them in a workpiece feature template library;
S2: the camera shoots first calibration plate pictures at different positions of the workbench, and the camera is calibrated with these pictures to obtain its original internal and external parameters;
S3: optimizing the internal and external parameters of the camera with the beetle antennae search ("longicorn whisker") optimization algorithm;
S4: the robot clamps the calibration plate and moves it within the field of view of the camera; using the optimized internal and external parameters, the camera shoots second calibration plate pictures at different positions, and the pose file of the robot is stored;
S5: performing hand-eye calibration using the optimized internal and external parameters of the camera, the second calibration plate pictures and the pose file of the robot to obtain the affine transformation matrix between the camera coordinate system and the robot coordinate system;
S6: setting the motion speed v of the conveyor belt and placing the workpiece to be sorted on the moving belt; the camera captures the workpiece flowing through the photographing area, features are extracted from the shot workpiece picture and matched against the workpiece matching templates stored in the workpiece feature template library to obtain the workpiece type information, and the spatial position S1 of the workpiece to be sorted on the conveyor belt is obtained from the shot picture through the affine transformation matrix between the camera coordinate system and the robot coordinate system;
S7: acquiring the distance ΔX that the conveyor belt moves between the camera capturing the workpiece and the robot actuating the grab, and obtaining the workpiece grabbing position S = S1 + ΔX of the robot from the spatial position information of the workpiece to be sorted on the conveyor belt; the robot grabs the workpiece at the position coordinate S and, according to the workpiece type information, places the workpiece to be sorted into the placing groove corresponding to its type.
2. The method for automatically sorting workpieces based on machine vision according to claim 1, wherein in step S1, the shot workpiece picture is preprocessed with a weighted average filtering algorithm, the edge contour shape information of the workpiece in the picture is extracted, and the generated workpiece matching templates of the different types are stored in a workpiece feature template library.
3. The method according to claim 2, wherein the weights of the weighted average filtering algorithm are given by the reciprocal of the gray gradient, with the following expressions:

d(i+k,j+l) = 1/(|f(i+k,j+l) - f(i,j)| + 1), (k,l) ≠ (0,0)

w(i+k,j+l) = d(i+k,j+l) / ∑(k,l) d(i+k,j+l)

g(i,j) = ∑(k,l) w(i+k,j+l) · f(i+k,j+l)

in the formulas, (i,j) is the position coordinate of the central pixel point, f(·) is the gray value of an image pixel point, d(·) is the gray gradient reciprocal function of an image pixel point, (k,l) is the increment of the pixel position within the filter window, w(·) is the normalized weight of a pixel point, and g(i,j) is the filtered gray value of the central pixel point.
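A minimal NumPy sketch of the gradient-reciprocal weighted filter of claim 3; the border handling (borders left unfiltered) and the normalization of the weights over the window are assumptions for illustration:

```python
import numpy as np

def gradient_reciprocal_filter(img, radius=1):
    """Edge-preserving weighted average: each neighbour's weight is the
    gray-gradient reciprocal d = 1 / (|f(i+k,j+l) - f(i,j)| + 1), so
    pixels similar to the centre contribute more than dissimilar ones."""
    f = np.asarray(img, dtype=float)
    out = f.copy()
    h, w = f.shape
    for i in range(radius, h - radius):
        for j in range(radius, w - radius):
            win = f[i - radius:i + radius + 1, j - radius:j + radius + 1]
            d = 1.0 / (np.abs(win - f[i, j]) + 1.0)   # centre weight is 1
            out[i, j] = np.sum(d * win) / np.sum(d)   # normalized average
    return out

flat = gradient_reciprocal_filter(np.full((5, 5), 7.0))  # uniform image is unchanged
```

Unlike a plain box blur, a sharp step edge stays sharp because the weights of dissimilar neighbours collapse towards zero.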
4. The method for automatically sorting workpieces based on machine vision according to claim 1, wherein in step S3, the specific steps for optimizing the internal and external parameters of the camera are as follows:
S3.1: identifying the calibration corner points on the calibration plate with the Harris corner extraction algorithm, calculating their pixel coordinates (x, y), and obtaining the back-projected coordinates (x', y') of the calibration corner points on the calibration plate using the original internal and external parameters of the camera;
S3.2: taking the average error between the pixel coordinates (x, y) of the calibration corner points and their back-projected coordinates (x', y') as the objective function y, expressed as:

y = (1/N) ∑(i=1..N) ‖m_i − m'_i‖²

where m_i denotes the sub-pixel coordinates (x, y) of the i-th calibration corner point, m'_i denotes the sub-pixel back-projected coordinates (x', y') of the i-th calibration corner point, and N is the number of calibration corner points on the calibration plate;
S3.3: optimizing the objective function y with the beetle antennae search optimization algorithm; when the objective function value is smaller than the preset global optimum, updating the internal and external parameters of the camera and outputting them as the optimized internal and external parameters; otherwise, continuing the iterative optimization by repeating step S3.3.
5. The method for automatically sorting workpieces based on machine vision according to claim 4, wherein the strategy model for optimizing the objective function y with the beetle antennae search optimization algorithm is as follows:
(1) abstracting the longicorn beetle into a centroid, and defining the positions of its two antennae to the left and right of the centroid as:

Xr = X + l·d

Xl = X − l·d

where Xr is the position of the right antenna and Xl is the position of the left antenna; X is the current centroid position, l is a proportionality coefficient, and d is the direction vector between the antenna tips;
(2) normalizing the direction vector d between the antenna tips:

d̃ = rands(m,n) / ‖rands(m,n)‖

where d̃ denotes the normalized direction vector and the rands(m,n) function generates an m-row, n-column matrix of random numbers uniformly distributed in the interval (−1, 1);
(3) calculating the position of the centroid at the next step:

Xt+1 = Xt − δt · d̃ · sign(f(Xr) − f(Xl))

where Xt is the position of the centroid at the t-th iteration and Xt+1 is the position at the (t+1)-th iteration; δt is the exploration step length of the t-th iteration; sign(·) is the sign function and f(·) is the fitness function.
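The search model of claim 5 amounts to a few lines of NumPy; the step schedule, antenna scale and iteration count below are illustrative choices, and the minus-sign update follows the usual minimization form of the algorithm:

```python
import numpy as np

def beetle_antennae_search(f, x0, step=1.0, antenna=2.0, eta=0.95, iters=100):
    """Minimise f by probing it at two antenna tips on either side of the
    centroid and stepping towards the side with the lower value."""
    x = np.asarray(x0, dtype=float)
    rng = np.random.default_rng(0)
    for _ in range(iters):
        b = rng.uniform(-1.0, 1.0, size=x.shape)
        b /= np.linalg.norm(b) + 1e-12             # normalised direction d~
        xr = x + antenna * step * b                # right antenna tip
        xl = x - antenna * step * b                # left antenna tip
        x = x - step * b * np.sign(f(xr) - f(xl))  # move towards lower f
        step *= eta                                # shrink exploration step
    return x

# On a simple quadratic the centroid drifts towards the minimum at the origin.
best = beetle_antennae_search(lambda v: float(np.sum(v ** 2)), [3.0, -2.0])
```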
6. The method for automatically sorting workpieces based on machine vision according to claim 1, wherein the affine transformation matrix H between the camera coordinate system and the robot coordinate system is expressed as:

H = A × T1

where A represents the internal parameter matrix of the camera and T1 represents the external parameter matrix of the camera;

the complete expression of the affine transformation matrix H between the camera coordinate system and the robot coordinate system is:

H = (1/Zc) · [kx 0 u0 0; 0 ky v0 0; 0 0 1 0] · [R T; O^T 1]

where Zc is an arbitrary scale factor; kx and ky denote the equivalent focal lengths of the camera in the x and y directions, and in general kx = ky; (u0, v0) are the principal point coordinates of the camera relative to the imaging plane; R is the rotation matrix and T the translation matrix of the camera external parameter matrix; O^T is a zero row vector.
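Assembling the projection of claim 6 from the listed intrinsics and extrinsics can be sketched as follows; the numeric values are hypothetical, and Zc is recovered as the third homogeneous coordinate after projection:

```python
import numpy as np

def projection_matrix(kx, ky, u0, v0, R, T):
    """Compose the 3x4 matrix A x [R | T] from the intrinsics (focal
    lengths kx, ky, principal point u0, v0) and extrinsics (R, T)."""
    A = np.array([[kx, 0.0, u0],
                  [0.0, ky, v0],
                  [0.0, 0.0, 1.0]])
    RT = np.hstack([np.asarray(R, float), np.asarray(T, float).reshape(3, 1)])
    return A @ RT

H = projection_matrix(800.0, 800.0, 320.0, 240.0, np.eye(3), [0.0, 0.0, 1000.0])
p = H @ np.array([0.0, 0.0, 0.0, 1.0])  # world origin, 1000 units in front
u, v = p[0] / p[2], p[1] / p[2]         # divide by Zc: lands on principal point
```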
7. The method of claim 1, further comprising the following step: judging the error of the affine transformation matrix between the camera coordinate system and the robot coordinate system with the reprojection error as an evaluation index, and returning to step S2 when the error is greater than a preset threshold.
8. An automatic workpiece sorting system based on machine vision, applying the automatic workpiece sorting method based on machine vision, characterized by comprising a camera, a robot, an upper computer, a workbench and placing grooves, wherein:
the camera is placed in a shooting area at one side of the workbench, and the robot is placed in a grabbing area at the other side of the workbench;
the workbench is provided with a conveyor belt, and the conveyor belt is used for conveying different types of workpieces to a shooting area and a grabbing area;
the upper computer classifies the types of workpieces according to the workpiece pictures shot by the camera, generates a workpiece matching template and stores the workpiece matching template in a workpiece feature template library;
the upper computer optimizes internal parameters and external parameters of the camera according to the calibration pictures shot by the camera;
the upper computer performs hand-eye calibration using the optimized internal and external parameters of the camera, the second calibration plate pictures and the pose file of the robot to obtain the affine transformation matrix between the camera coordinate system and the robot coordinate system;
the upper computer obtains the workpiece type information from the shot picture of the workpiece to be sorted and obtains the spatial position S1 of the workpiece on the conveyor belt from the same picture; it acquires the distance ΔX that the conveyor belt moves between the camera capturing the workpiece and the robot actuating the grab, computes the workpiece grabbing position S = S1 + ΔX from the spatial position information of the workpiece on the conveyor belt, converts it through the affine transformation matrix between the camera coordinate system and the robot coordinate system into the workpiece grabbing position coordinate S' of the robot, and generates and sends a grabbing instruction to the robot;
and the robot executes automatic sorting operation according to the received grabbing instruction, and places the workpieces to be sorted into the placing grooves corresponding to the types of the workpieces.
9. The automatic workpiece sorting system based on machine vision according to claim 8, wherein a speed meter is arranged on the conveyor belt, with its output end connected to the input end of the upper computer; the speed meter measures the speed v of the conveyor belt in real time and uploads it to the upper computer.
10. The automatic workpiece sorting system based on machine vision as claimed in claim 8, characterized in that the robot gripping end is provided with an air compressor and an electromagnetic valve, and the air compressor and the electromagnetic valve constitute a vacuum generator.
CN202011506905.8A 2020-12-18 2020-12-18 Automatic workpiece sorting method and system based on machine vision Pending CN112561886A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011506905.8A CN112561886A (en) 2020-12-18 2020-12-18 Automatic workpiece sorting method and system based on machine vision

Publications (1)

Publication Number Publication Date
CN112561886A true CN112561886A (en) 2021-03-26

Family

ID=75030992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011506905.8A Pending CN112561886A (en) 2020-12-18 2020-12-18 Automatic workpiece sorting method and system based on machine vision

Country Status (1)

Country Link
CN (1) CN112561886A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107992881A (en) * 2017-11-13 2018-05-04 广州中国科学院先进技术研究所 A kind of Robotic Dynamic grasping means and system
CN109571477A (en) * 2018-12-17 2019-04-05 西安工程大学 A kind of improved robot vision and conveyer belt composite calibration method
CN111645074A (en) * 2020-06-01 2020-09-11 李思源 Robot grabbing and positioning method
CN111775154A (en) * 2020-07-20 2020-10-16 广东拓斯达科技股份有限公司 Robot vision system
CN111915704A (en) * 2020-06-13 2020-11-10 东北林业大学 Apple hierarchical identification method based on deep learning


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Wu Ancheng et al.: "Hand-eye calibration method for palletizing robots based on OpenCV", Industrial Robot *
Liao Jiaji: "Research and design of a machine-vision-based robot sorting system", China Master's Theses Full-text Database, Information Science and Technology *
Wang Daolei: "Monocular camera calibration method based on the beetle antennae search algorithm", Journal of University of Jinan (Science and Technology) *
Hu Han: "Research on key technologies of a machine-vision-based dynamic grasping system for Delta robots", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113369173A (en) * 2021-06-29 2021-09-10 上海希翎智能科技有限公司 Full-automatic plastic bottle color sorting equipment
WO2023040095A1 (en) * 2021-09-16 2023-03-23 梅卡曼德(北京)机器人科技有限公司 Camera calibration method and apparatus, electronic device, and storage medium
CN114750155A (en) * 2022-04-26 2022-07-15 广东天太机器人有限公司 Object classification control system and method based on industrial robot
CN114800520A (en) * 2022-05-23 2022-07-29 北京迁移科技有限公司 High-precision hand-eye calibration method
CN114800520B (en) * 2022-05-23 2024-01-23 北京迁移科技有限公司 High-precision hand-eye calibration method
CN115319762A (en) * 2022-10-18 2022-11-11 上海航天壹亘智能科技有限公司 Robot control method for production line, production line and numerical control machine tool

Similar Documents

Publication Publication Date Title
CN112561886A (en) Automatic workpiece sorting method and system based on machine vision
CN109344882B (en) Convolutional neural network-based robot control target pose identification method
CN113524194A (en) Target grabbing method of robot vision grabbing system based on multi-mode feature deep learning
CN111791239B (en) Method for realizing accurate grabbing by combining three-dimensional visual recognition
CN110014426B (en) Method for grabbing symmetrically-shaped workpieces at high precision by using low-precision depth camera
JP5839971B2 (en) Information processing apparatus, information processing method, and program
CN110580725A (en) Box sorting method and system based on RGB-D camera
CN113146172B (en) Multi-vision-based detection and assembly system and method
CN112894815B (en) Method for detecting optimal position and posture for article grabbing by visual servo mechanical arm
CN114355953B (en) High-precision control method and system of multi-axis servo system based on machine vision
CN110560373A (en) multi-robot cooperation sorting and transporting method and system
CN113927597B (en) Robot connecting piece six-degree-of-freedom pose estimation system based on deep learning
CN114758236A (en) Non-specific shape object identification, positioning and manipulator grabbing system and method
CN112109072B (en) Accurate 6D pose measurement and grabbing method for large sparse feature tray
Zheng et al. Industrial part localization and grasping using a robotic arm guided by 2D monocular vision
Ma et al. Binocular vision object positioning method for robots based on coarse-fine stereo matching
Ruan et al. Feature-based autonomous target recognition and grasping of industrial robots
CN113822946B (en) Mechanical arm grabbing method based on computer vision
CN116206189A (en) Curved surface graphic identification code and identification method thereof
CN113436293B (en) Intelligent captured image generation method based on condition generation type countermeasure network
CN206864487U (en) A kind of solar battery sheet SPEED VISION positioning and correction system
CN113592962B (en) Batch silicon wafer identification recognition method based on machine vision
Shi et al. A fast workpiece detection method based on multi-feature fused SSD
CN114851206A (en) Method for grabbing stove based on visual guidance mechanical arm
Guo et al. A contour-guided pose alignment method based on Gaussian mixture model for precision assembly

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210326)