CN109657708B - Workpiece recognition device and method based on image recognition-SVM learning model - Google Patents


Info

Publication number
CN109657708B
CN109657708B CN201811482549.3A CN201811482549A
Authority
CN
China
Prior art keywords
workpiece
image
detected
learning model
sample
Prior art date
Legal status
Active
Application number
CN201811482549.3A
Other languages
Chinese (zh)
Other versions
CN109657708A
Inventor
杨林杰
李俊
崇米娜
Current Assignee
Fujian Institute of Research on the Structure of Matter of CAS
Original Assignee
Fujian Institute of Research on the Structure of Matter of CAS
Priority date
Filing date
Publication date
Application filed by Fujian Institute of Research on the Structure of Matter of CAS filed Critical Fujian Institute of Research on the Structure of Matter of CAS
Priority to CN201811482549.3A
Publication of CN109657708A
Application granted
Publication of CN109657708B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/507 Summing image-intensity values; Histogram projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30164 Workpiece; Machine component

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a workpiece recognition device and method based on an image recognition-SVM learning model. The device comprises an image acquisition unit, a recognition unit and a robot. The image acquisition unit acquires an image of the workpiece to be detected and is in data connection with the recognition unit. The recognition unit extracts the feature vector of the workpiece from the image, classifies the workpiece to be detected with an SVM learning model classifier, outputs the classification result, and is in control connection with the robot. The robot sorts the workpiece to be detected according to the classification result. The device can recognize and grasp a preset target in a complex environment, is unaffected by translation, scale and rotation of the workpiece, and offers high stability and real-time performance. Another aspect of the application provides a method for the device.

Description

Workpiece recognition device and method based on image recognition-SVM learning model
Technical Field
The application relates to a workpiece recognition device and method based on an image recognition-SVM learning model, and belongs to the field of image recognition.
Background
With the continuous development of robotics and artificial intelligence, machine vision has been widely applied in fields such as industrial inspection, sorting and production automation. In recent years, machine vision recognition algorithms have been widely applied to workpiece recognition and automatic sorting on production lines. Most existing algorithms recognize workpieces through traditional matching cost functions and workpiece-specific features such as gray levels, corner points and geometric primitives. However, their generalization performance is poor, and they lack the ability to actively perceive changes in the working environment and adapt accordingly; once the environment changes, robot recognition or grasping easily fails.
Machine learning has pushed the generalization performance of recognition algorithms to a new level. The learning ability of such an algorithm is obtained from the prior knowledge of the samples, converting specific knowledge into the intrinsic parameters of the machine learning model; its core is to form a classifier or regression model with learning ability by combining the basic features of the sample space with a classifier, and deep learning, which further optimizes the generalization performance of the learner, has become a hot spot of target recognition. How to correctly recognize and grasp various workpieces on a production line is a core research problem in the automation field, and an intelligent recognition algorithm is the core of the vision robot's perception technology.
Disclosure of Invention
According to one aspect of the application, a workpiece recognition device based on an image recognition-SVM learning model is provided; the device can recognize and classify specific workpieces.
The device comprises: an image acquisition unit, an identification unit and a robot,
the image acquisition unit is used for acquiring the image of the workpiece to be detected and is in data connection with the identification unit;
the identification unit is used for extracting the feature vector of the workpiece from the image, classifying the workpiece to be detected with an SVM learning model classifier and outputting the classification result, and is in control connection with the robot;
and the robot is used for sorting the workpiece to be detected according to the classification result.
Optionally, the device further comprises a conveyor belt and a position sensor,
the position sensor is arranged on the conveyor belt, is in data connection with the identification unit and is used for acquiring the position of the workpiece;
the identification unit is used for judging whether the position of the workpiece to be detected has entered the image acquisition range of the image acquisition unit, and if so, it controls the image acquisition unit to acquire the image of the workpiece to be detected.
Optionally, the identification unit is in communication connection with the robot through a Socket;
the identification unit is in control connection with the conveyor belt and is used for controlling the conveyor belt to compensate the position deviation of the workpiece.
Optionally, the identification unit comprises:
and the training module is used for extracting the features of each sample workpiece image in a sample library, identifying and classifying the multi-angle image features of the workpiece with an SVM learning model classifier, and checking the classification result: if the result is wrong, training returns and continues; if the result is correct, the trained SVM learning model classifier is output.
Optionally, the sample image features in the training module are obtained through the following modules:
the contour extraction module is used for identifying the contour of the sample workpiece in each sample picture in the sample library and extracting the feature vector of the sample workpiece;
and the feature normalization module is used for normalizing the feature vectors with the shape vector histogram (SVH) feature and unifying their dimensions.
Optionally, the identification unit comprises:
the learning module is used for inputting the image of the workpiece to be detected into the SVM learning model classifier trained by the training module and outputting a sample picture similar to the image of the workpiece to be detected as a pre-classification result;
and the classification module is used for calculating the SVH feature similarity between the sample picture and the image of the workpiece to be detected and judging whether the similarity meets the system's expected tolerance for the judgment result; if so, the pre-classification result is output, and if not, the workpiece is reported as unqualified.
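The verification performed by the classification module can be sketched as follows. The cosine-similarity measure and the `tolerance` threshold are illustrative assumptions; the application does not fix a specific similarity metric:

```python
import numpy as np

def svh_similarity(svh_a, svh_b):
    """Cosine similarity between two shape vector histograms (SVH)."""
    a = np.asarray(svh_a, dtype=float)
    b = np.asarray(svh_b, dtype=float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def verify_classification(sample_svh, detected_svh, tolerance=0.9):
    """Accept the pre-classification result only if the SVH similarity
    between sample picture and detected workpiece meets the tolerance."""
    return svh_similarity(sample_svh, detected_svh) >= tolerance
```

A rejected verification would correspond to reporting the workpiece as unqualified.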
Optionally, the identification unit is configured to acquire a grabbing point of the workpiece to be detected;
and the robot is used for grabbing the workpiece according to the grabbing points.
Optionally, the identification unit comprises:
the position module is used for identifying the coordinates of the workpiece in the image of the workpiece to be detected, converting the workpiece coordinates into the space coordinates of the robot, acquiring the transverse and longitudinal principal-axis parameters of the workpiece image, extracting a first pair of intersection points of the transverse principal axis with the workpiece contour and a second pair of intersection points of the longitudinal principal axis with the workpiece contour, and outputting the two pairs of intersection points as the grasping points of the robot.
The application further provides a workpiece recognition method based on the image recognition-SVM learning model, which comprises the following steps:
step S100: acquiring the image of the workpiece to be detected;
step S200: classifying the workpiece to be detected by adopting an SVM learning model classifier, and outputting a classification result;
step S300: and classifying the workpiece to be detected according to the classification result.
Optionally, the step S200 includes the following steps:
step S210: identifying the outline of the sample workpiece in each sample picture in the sample library, extracting the feature vector of the sample workpiece, performing normalization and dimension unification on the feature vector by adopting the shape vector histogram feature, and outputting the feature of each sample workpiece image;
step S220: extracting the characteristics of each sample workpiece image in a sample library, adopting an SVM learning model classifier to identify the characteristics of the workpiece multi-angle images and classify the workpiece multi-angle images, detecting a classification result, returning to continue classification if the classification result is wrong, and outputting the trained SVM learning model classifier if the classification result is correct;
step S230: inputting the image of the workpiece to be detected into an SVM learning model classifier trained by the training module, and outputting a sample picture similar to the image of the workpiece to be detected as a pre-classification result;
step S240: calculating the SVH feature similarity between the sample picture and the image of the workpiece to be detected, and judging whether the similarity meets the system's expected tolerance for the judgment result; if so, outputting the pre-classification result, and if not, reporting the workpiece as unqualified;
step S250: identifying the coordinates of the workpiece in the image of the workpiece to be detected, converting the workpiece coordinates into the space coordinates of the robot, acquiring the transverse and longitudinal principal-axis parameters of the workpiece image, extracting a first pair of intersection points of the transverse principal axis with the workpiece contour and a second pair of intersection points of the longitudinal principal axis with the workpiece contour, and outputting the two pairs of intersection points as the grasping points of the robot.
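The principal-axis step can be sketched with second-order central moments, a standard stand-in for the moment-of-inertia computation; the `principal_axes_grip_points` name, the (N, 2) contour representation, and the nearest-point intersection heuristic are assumptions for illustration only:

```python
import numpy as np

def principal_axes_grip_points(contour):
    """Given an (N, 2) array of contour points, estimate the major and
    minor principal axes from second-order central moments and return,
    for each axis, the pair of contour points nearest the axis line on
    either side of the centroid (candidate grasping points)."""
    pts = np.asarray(contour, dtype=float)
    center = pts.mean(axis=0)                   # centroid of the contour
    d = pts - center
    cov = d.T @ d / len(pts)                    # second-order central moments
    _, eigvecs = np.linalg.eigh(cov)            # columns: axis directions
    grips = []
    for axis in eigvecs.T[::-1]:                # major axis first
        proj = d @ axis                         # signed position along axis
        perp = np.abs(d @ np.array([-axis[1], axis[0]]))  # distance to axis
        pos = pts[proj > 0][np.argmin(perp[proj > 0])]
        neg = pts[proj < 0][np.argmin(perp[proj < 0])]
        grips.append((pos, neg))
    return center, grips
```

The returned pairs correspond to the first and second pairs of intersection points of step S250, up to the discretization of the contour.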
The beneficial effect that this application can produce includes:
1) The workpiece recognition device and method based on the image recognition-SVM learning model provided by the application address the low level of intelligence and poor flexibility of workpiece-sorting production lines; at the same time, to enable the robot to actively learn and grasp a preset target, a multi-workpiece recognition method based on a shape learning model is provided. The method can recognize and grasp the preset target in a complex environment, is unaffected by translation, scale and rotation of the workpiece, and has high stability and real-time performance. Experimental analysis shows that the proposed multi-target recognition and matching method is hardly affected by geometric transformations such as scaling, rotation and translation of the workpiece in the recognition image; the machine learning model and the SVH (shape vector histogram) feature greatly improve recognition accuracy and generalization, giving the method certain advantages over other conventional recognition algorithms.
2) The workpiece recognition device and method based on the image recognition-SVM learning model design a shape feature vector based on the polar radius and normalize it by the maximum modulus value. The adopted shape feature vector deeply reflects changes in the topological structure of the workpiece shape, so images of the same workpiece at different angles can be accurately recognized and their features extracted; adopting this feature vector improves the recognition robustness of the method.
3) In the workpiece recognition device and method based on the image recognition-SVM learning model, aiming at the problem that the dimensions of the workpiece shape feature vectors differ, the dimension of the initial feature vector is normalized with the SVH; after quantization the feature dimensions are the same, and the described features keep the descriptive characteristics of the original features.
4) According to the workpiece recognition device and method based on the image recognition-SVM learning model, quantized features are introduced into an SVM support vector machine model, class prediction is conducted on a target to be recognized, class verification is conducted on the classified workpiece through SVH distribution information, and recognition accuracy is further improved.
5) In the workpiece recognition device and method based on the image recognition-SVM learning model, the principal-axis parameters of the target are obtained according to the moment-of-inertia (MOI) theory, determining two pairs of position references for the robot to grasp the preset target; this improves the reliability of completing the recognition and grasping task and prevents the workpiece from falling off after grasping.
6) In the workpiece recognition device and method based on the image recognition-SVM learning model, the sample set used in the training module guarantees sampling diversity: the samples cover the various placement postures of each workpiece. The feature vector of the target is then constructed, and a labeled sample library of the recognized workpieces is established, which underpins the generalization of the learning algorithm, keeps the recognition algorithm unaffected by rigid transformations of the workpiece, and unifies the dimension of the classifier input.
Drawings
FIG. 1 is a schematic flow chart of a workpiece recognition method based on an image recognition-SVM learning model according to an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating a usage state of a workpiece recognition device based on an image recognition-SVM learning model according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the distribution of feature vectors of workpiece A in various poses according to an embodiment of the present disclosure; wherein a) is an enlarged front view of workpiece A; a') is the waveform distribution of the feature vector for the pose of picture a) of workpiece A; b) is a reduced front view of workpiece A; b') is the waveform distribution of the feature vector for the pose of picture b) of workpiece A; c) is a front view of workpiece A at another angle; c') is the waveform distribution of the feature vector for the pose of picture c) of workpiece A;
FIG. 4 is a schematic diagram of the distribution of feature vectors of workpiece B in various poses according to one embodiment of the present disclosure; wherein a) is an enlarged front view of workpiece B; a') is the waveform distribution of the feature vector for the pose of picture a) of workpiece B; b) is a reduced front view of workpiece B; b') is the waveform distribution of the feature vector for the pose of picture b) of workpiece B; c) is a front view of workpiece B at another angle; c') is the waveform distribution of the feature vector for the pose of picture c) of workpiece B;
fig. 5 is a diagram illustrating the distribution of the quantized feature vectors with bin = 30 in the SVH for the various postures of workpiece A according to an embodiment of the present application; wherein a) is the quantized feature vector distribution of the picture in fig. 3 a); b) is that of the picture in fig. 3 b); c) is that of the picture in fig. 3 c);
fig. 6 is a diagram illustrating the distribution of the quantized feature vectors with bin = 30 in the SVH for the various postures of workpiece B according to an embodiment of the present application; wherein a) is the quantized feature vector distribution of the picture in fig. 4 a); b) is that of the picture in fig. 4 b); c) is that of the picture in fig. 4 c);
FIG. 7 is a diagram illustrating the principal-axis determination for workpieces C-F in one embodiment of the present application; in each of a)-d), the corresponding workpiece (C, D, E and F respectively) lies on the same plane, each picture is randomly placed relative to the previous one, and the principal-axis determination result is shown in the acquired image;
FIG. 8 is a graph illustrating the distribution of C values of a test experiment according to one embodiment of the present disclosure.
Detailed Description
The present application will be described in detail with reference to examples, but the present application is not limited to these examples.
Referring to fig. 2, the present application provides a workpiece recognition and grasping device based on an image recognition-SVM learning model, comprising an image acquisition unit, an identification unit and a robot. The image acquisition unit is used for acquiring the image of the workpiece to be detected and is in data connection with the identification unit; the identification unit is used for extracting the feature vector of the workpiece from the image, classifying the workpiece to be detected with an SVM learning model classifier and outputting the classification result, and is in control connection with the robot; and the robot is used for sorting the workpiece to be detected according to the classification result.
In the present application, SVM stands for Support Vector Machine. Robots in this application include, but are not limited to, robotic arms and the like, which may be cooperatively controlled remotely. The image acquisition unit can be any of various camera devices that wait for a trigger from the control device. The classification may include, but is not limited to, separating conforming parts from non-conforming parts, and may also be a classification of multiple types of workpieces.
Preferably, the position sensor is arranged on the conveyor belt and is in data connection with the identification unit for acquiring the position of the workpiece; the identification unit is used for judging whether the position of the workpiece to be detected enters the image acquisition range of the image acquisition unit or not, and if so, the identification unit controls the image acquisition unit to acquire the image of the workpiece to be detected.
Preferably, the identification unit is in communication connection with the robot through Socket.
Preferably, the identification unit is in control connection with the conveyor belt, and the identification unit is used for controlling the conveyor belt to compensate the position offset of the workpiece.
The use method is as follows: the conveyor belt runs normally under the drive of the servo motor; when the position sensor senses that the workpiece has reached the preset area, an I/O signal triggers the image acquisition unit, after a delay corresponding to a certain travel distance, to complete the image acquisition;
then the workpiece recognition unit recognizes the specific workpiece and converts the visual position coordinates into the space coordinates of the robot;
finally, a command is issued to the robot through Socket communication to complete the specific grasping work, and the encoder in the servo motor completes the position offset compensation.
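The Socket command of the last step might look like the following sketch. The JSON message format, the function name and the acknowledgment protocol are invented for illustration, since the application does not specify the wire format:

```python
import json
import socket

def send_grab_command(host, port, grasp_points, timeout=2.0):
    """Send a grasp command with the robot-space grasp points to the
    robot controller over TCP and return the controller's reply.
    The message layout here is purely illustrative."""
    msg = json.dumps({"cmd": "grab", "points": grasp_points}).encode("utf-8")
    with socket.create_connection((host, port), timeout=timeout) as conn:
        conn.sendall(msg + b"\n")
        return conn.recv(1024).decode("utf-8").strip()  # controller ack
```

In practice the identification unit would call this after step S250 has produced the grasp points.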
Referring to fig. 1, preferably, the identification unit includes:
and the training module is used for extracting the characteristics of each sample workpiece image in the sample library, adopting an SVM learning model classifier to identify the multi-angle image characteristics of the workpiece and classify the workpiece, detecting the classification result, returning to continue classification if the classification result is wrong, and outputting the trained SVM learning model classifier if the classification result is correct.
And determining differences among the workpieces, determining a classification standard for grabbing the workpieces, and performing pre-classification on the workpieces to be detected.
The outline of the workpiece in each picture in the sample library needs to be identified, the algorithm used for identification can be an OSTU threshold value and a Freeman 8 chain code structure to obtain a space point set of a target shape, and the specific steps are carried out according to the existing method.
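The Otsu stage of that contour pipeline can be sketched in pure NumPy (the Freeman 8-direction chain-code tracing is omitted); this is a from-scratch illustration of the standard algorithm, not the application's implementation:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu threshold of an 8-bit grayscale image: the gray level that
    maximizes the between-class variance of foreground and background."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum_all = float(np.dot(np.arange(256), hist))  # sum of all gray levels
    best_t, best_var = 0, -1.0
    w0 = cum0 = 0.0
    for t in range(256):
        w0 += hist[t]                # background weight (levels <= t)
        if w0 == 0:
            continue
        w1 = total - w0              # foreground weight
        if w1 == 0:
            break
        cum0 += t * hist[t]
        m0 = cum0 / w0               # background mean
        m1 = (cum_all - cum0) / w1   # foreground mean
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Binarizing with this threshold separates the workpiece from the background before the chain-code contour trace.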
Preferably, the sample image features in the training module are obtained by:
the contour extraction module is used for identifying the contour of the sample workpiece in each sample picture in the sample library and extracting the characteristic vector of the sample workpiece;
and the SVH feature normalization module is used for normalizing the feature vectors and unifying their dimensions by adopting the shape vector histogram feature.
Extracting features in the sample image:
extracting a shape vector: the radius feature vector generation algorithm is defined in R 2 The space, the metric distance, is the euclidean distance, and this feature can be understood as the probe wave that is emitted by the center of gravity of the profile towards the inner surface of the profile, which wave encounters obstacles that hinder the return of the particular topology. Set any point C on the outline i (x i ,y i ) The centroid is O (x) o ,y o ),R i Is the Euclidean distance of point C to point O, R max Polar radius normalization of feature direction for maximum distance modulusMeasurement of
Figure BDA0001893731740000081
The expression of (2) is shown in formulas (1) and (2).
Figure BDA0001893731740000082
Figure BDA0001893731740000083
As can be seen from formula (1), the geometric relationship between the center of gravity of the contour and the sampling points does not change with the target position, and the normalized feature vector is unaffected by changes in the target scale: the vector is related only to the geometric shape of the target. Figs. 3 and 4 show the distribution of the feature vectors of two workpieces under different postures (rotation and scale change, respectively); the abscissa is the index of the workpiece sampling points, and the ordinate is the modulus distribution of the generated feature vector. The variation of the feature vector fully conforms to the shape trend of the workpiece, and the trends are basically consistent.
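Formulas (1) and (2) amount to the following sketch, where the contour is assumed to be given as an (N, 2) point set:

```python
import numpy as np

def polar_radius_features(contour):
    """Normalized polar-radius shape vector of formulas (1) and (2):
    Euclidean distance from each contour point to the centroid, divided
    by the maximum distance, so the vector is invariant to translation
    and scale of the contour."""
    pts = np.asarray(contour, dtype=float)
    center = pts.mean(axis=0)                       # centroid O
    radii = np.linalg.norm(pts - center, axis=1)    # R_i, formula (1)
    return radii / radii.max()                      # S_i, formula (2)
```

Scaling or translating the contour leaves the output unchanged, which is exactly the invariance the paragraph above describes.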
To illustrate that the similarity of same-shape contours and the difference of different-shape contours are important properties of the target descriptor, in one embodiment feature vectors were computed for workpieces A and B and their waveform distributions obtained; the results are shown in FIGS. 3 to 4.
In fig. 3, (a) - (c) are sample images of the workpiece a, and the waveform distributions corresponding to different images are (a '), (b '), (c '), respectively. As can be seen from (a '), (b '), (c ') in fig. 3, although the workpiece a image is enlarged or reduced and rotated in translation, the waveform distribution of each image is basically similar (except for the difference between dimension and phase), indicating that the feature vector has strong shape characterization capability.
However, the feature vector has two problems:
(1) Because of workpiece scale changes and the specification differences of different target types, the number of sampling points varies; even workpieces of the same type can produce feature vectors of different dimensions, and the distribution waveforms are spatially misaligned, as shown in fig. 3 (b) and fig. 4 (a), and in fig. 3 (b') and fig. 4 (a').
(2) Rotation of the workpiece also causes misalignment of the feature, as shown in fig. 3 (b) and fig. 4 (c), and in fig. 3 (b') and fig. 4 (c'). Since the distance between the image acquisition unit and the target to be identified is fixed while the pose of the workpiece is randomly distributed, the feature misalignment caused by rotation is more common; it mainly appears as a phase shift of the distribution waveform.
Both points affect the subsequent learning model and similarity verification and make actual operation inconvenient. The present application therefore proposes the SVH to solve the above problems.
To keep the feature vector dimensions of the images in the sample library uniform without losing basic description information, the feature vector extraction result is normalized with the SVH feature in the application. The Shape Vector Histogram (SVH) is used to regularize the features and unify their dimensions, with the following specific steps:
(1) A 1 × N shape feature vector of a specific target based on the centroid can be obtained by the following equations (1) and (2), and is denoted as S = { S = { (S) } i |S i I =1,2 … N }, each element has a value between 0 and 1, and N is not a constant and is adjusted according to the scale of the recognition target due to the change in the scale of the recognition target.
(2) A histogram containing M bins is established, where the statistical range of the m-th bin is [(m-1)/M, m/M), m being the bin index starting from 1. Each bin's counter is initialized to 0 and incremented by 1 every time a feature vector element falls into that bin.
(3) The final count of each bin is divided by the corresponding feature dimension N to map the values into [0, 1]; the result is used as the input of the learning model, completing the SVH normalization.
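The steps above can be sketched in a few lines of numpy (an illustrative sketch; `shape_vector_histogram` and the synthetic data are assumptions, not the patent's implementation):

```python
import numpy as np

def shape_vector_histogram(s, m_bins=30):
    """Quantize a centroid-based shape vector s (element values in
    [0, 1)) into an M-bin histogram and divide the counts by the
    vector length N, per steps (1)-(3) above."""
    s = np.asarray(s, dtype=float)
    n = s.size
    # Element s_i falls into bin m when (m-1)/M <= s_i < m/M.
    counts, _ = np.histogram(s, bins=m_bins, range=(0.0, 1.0))
    # Dividing by N maps the result into [0, 1] and makes the
    # descriptor length depend only on the bin count M, not on the
    # number of sampling points N.
    return counts / n

# The same shape sampled at two densities yields different vector
# dimensions (N = 400 vs N = 800) but identical SVHs.
rng = np.random.default_rng(0)
base = rng.random(400)
svh_a = shape_vector_histogram(base)
svh_b = shape_vector_histogram(np.repeat(base, 2))
```

This illustrates why the descriptor is insensitive to the sampling density: counts and N scale together, so their ratio is unchanged.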
In an embodiment, fig. 5 is the SVH distribution of workpiece A of fig. 3 in various poses, where the abscissa is the bin number and the ordinate is the distribution of the quantized feature vectors, with bins = 30. Figs. 6 (a)-(c) show the SVH distributions of figs. 4 (a)-(c), respectively. As can be seen, the SVH of the same type of workpiece is essentially unaffected by geometric transformations of the target workpiece image, and the distributions are similar. The SVH thus solves the workpiece profile misalignment caused by the scale and rotation of the feature vector, and the dimension of the feature vector depends only on the SVH bin count, not on the number of target sampling points. Finally, the SVH is taken as the input of the SVM learning model classifier.
To explain the principle of the SVM for classifying target workpieces, a two-class workpiece classification is used as illustration. Given training samples containing SVH features (x_1, y_1), (x_2, y_2), …, (x_n, y_n), where n is the number of training samples and, from the preceding section, x_i ∈ [0, 1), then for the case where the SVH feature samples are separable (inseparable cases are handled by a kernel function), let the hyperplane of the classification be:
ω^T x + b = 0    (3)
ω is the normal vector of the plane and b is a displacement term. A kernel function is introduced to map linearly inseparable feature vectors of the low-dimensional space into linearly separable samples of a high-dimensional space, so that the samples of the high-dimensional feature space can be classified with a linear algorithm.
Because the SVM is only a two-class classifier, for the multi-class workpiece problem a learning model is formed by cascading a plurality of SVMs in a 1:M manner, and a greedy learning strategy is adopted to search the hypothesis space for a locally optimal solution of the model, strengthening its decision-making capability.
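A minimal sketch of the 1:M decomposition, using scikit-learn's one-vs-rest wrapper around binary linear SVMs on synthetic stand-in features (the data and the library choice are assumptions, not the patent's implementation):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier

# Toy feature vectors standing in for 30-bin SVHs of 3 workpiece types.
rng = np.random.default_rng(1)
centers = rng.random((3, 30))
X = np.vstack([c + 0.01 * rng.standard_normal((20, 30)) for c in centers])
y = np.repeat([0, 1, 2], 20)

# One binary SVM per class (the 1:M decomposition): each classifier
# separates its own class from all others, and the wrapper picks the
# class with the highest decision score.
clf = OneVsRestClassifier(SVC(kernel="linear")).fit(X, y)
pred = clf.predict(X)
```

With well-separated clusters the cascade recovers the class labels; in practice each binary SVM would be trained on real SVH features from the sample library.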
Preferably, the identification unit includes:
the learning module is used for inputting the image of the workpiece to be detected into the SVM learning model classifier trained by the training module and outputting a sample picture similar to the workpiece to be detected as a pre-classification result;
and the classification module is used for calculating the SVH feature similarity between the sample picture and the workpiece image to be detected and judging whether the similarity meets the expected tolerance of the system for the discrimination result; if yes, the pre-classification result is output, and if not, an unqualified piece is indicated.
Specifically, in order to prevent erroneous classification by the classifier from affecting recognition accuracy, the result of the SVM learning model classifier is verified with the SVH distribution information.
The similarity between the shape vector histograms of the sample picture and the workpiece image to be detected (namely, between the target predicted by the classifier and the standard sample corresponding to that class) is calculated.
The matching cost function follows the SC (Shape Context) construction method, and the similarity between shape histograms is calculated according to the following formula:
C_{i,M} = (1/2) Σ_{k=1}^{K} [h_i(k) - h_M(k)]² / [h_i(k) + h_M(k)]    (4)
where C_{i,M} represents the similarity between the prediction type i of the classifier and the corresponding standard sample M, h(·) is the distribution function of the histogram bin values, and K is the number of SVH bins. The judgment criterion is:
g(C_{i,M}) = 1, if C_{i,M} ≤ T; 0, otherwise    (5)
where g(C_{i,M}) is the discrimination function: a function value of 1 means the type discrimination of the classifier is correct; otherwise the discrimination is wrong. T represents the expected tolerance of the system for the discrimination result.
To guarantee the accuracy of the discrimination result, if the discrimination result is 0, the target workpiece defaults to unidentifiable, and the robot uniformly sorts it out as a reject for separate processing.
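The verification step can be sketched as follows, assuming the standard Shape Context chi-square cost as the matching cost (function names are illustrative, not from the patent):

```python
import numpy as np

def svh_similarity(h_i, h_m):
    """Chi-square style matching cost between two shape vector
    histograms, in the spirit of the Shape Context cost; smaller
    means more similar."""
    h_i = np.asarray(h_i, dtype=float)
    h_m = np.asarray(h_m, dtype=float)
    denom = h_i + h_m
    mask = denom > 0          # bins empty in both histograms cost 0
    return 0.5 * np.sum((h_i[mask] - h_m[mask]) ** 2 / denom[mask])

def verify(c, tol):
    """Discriminant g: 1 accepts the classifier's prediction, 0
    rejects it (the workpiece is then sorted out as unidentifiable)."""
    return 1 if c <= tol else 0

# A predicted target close to its class standard yields a small cost.
h_target = np.array([0.2, 0.3, 0.5])
h_standard = np.array([0.25, 0.3, 0.45])
c = svh_similarity(h_target, h_standard)
```

The tolerance `tol` plays the role of T: tightening it trades false accepts for more rejects.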
Preferably, the identification unit further includes:
And the position module is used for identifying the coordinates of the workpiece in the image of the workpiece to be detected, converting the coordinates of the workpiece to be detected into the space coordinates of the robot, acquiring the parameters of the transverse main shaft and the longitudinal main shaft of the image of the workpiece to be detected, extracting a first pair of intersection points of the transverse main shaft of the workpiece to be detected and the contour of the workpiece, extracting a second pair of intersection points of the longitudinal main shaft of the workpiece to be detected and the contour of the workpiece, and outputting the first pair of intersection points and the second pair of intersection points as the grabbing points of the robot.
In order to provide a basic position reference for the robot to grab a workpiece, the present application solves the main axis parameters of the workpiece target according to the physical moment-of-inertia (MOI) distribution theory, the moment of inertia J being a measure of a revolving body's tendency to maintain uniform circular motion or remain static when rotating around a specific axis.
To describe the characteristics of the moment of inertia, an orthogonal coordinate system O-XYZ in 3-dimensional space is established, and its coordinate axes are taken as the rotation axes of the rigid body. The principal axes of the rigid body are calculated according to the following formula:
[H_x]   [I_xx  I_xy  I_xz] [ω_x]
[H_y] = [I_yx  I_yy  I_yz] [ω_y]    (6)
[H_z]   [I_zx  I_zy  I_zz] [ω_z]
where H = Iω; H_x, H_y, H_z are the inertia moments about the x, y and z axes respectively; I is the rigid-body inertia matrix, in which I_xx, I_yy, I_zz are the x-, y- and z-axis moment-of-inertia elements and I_xy, I_xz, I_yz are the corresponding products of inertia (I_zx and I_zy have the same physical meaning as I_xz and I_yz respectively); each subscript denotes the corresponding direction. Similarly, ω is the rigid-body angular velocity, with ω_x, ω_y, ω_z the angular velocities about the x, y and z axes respectively.
Inertia elements in the I matrix are calculated as follows:
I_xx = Σ_i [(y_i - y_g)² + (z_i - z_g)²]
I_yy = Σ_i [(x_i - x_g)² + (z_i - z_g)²]
I_zz = Σ_i [(x_i - x_g)² + (y_i - y_g)²]
I_xy = I_yx = -Σ_i (x_i - x_g)(y_i - y_g)
I_yz = I_zy = -Σ_i (y_i - y_g)(z_i - z_g)
I_zx = I_xz = -Σ_i (z_i - z_g)(x_i - x_g)    (7)
In the above formulas, x_g, y_g, z_g are the centroid coordinates of the target workpiece profile on each axis, and x_i, y_i, z_i are the position coordinates of the workpiece profile points. In the present application, the profile variation is treated as a rigid-body rotational-inertia distribution, so the rotational-inertia distribution is transferred to the profile distribution, from which the main axis of the workpiece profile distribution is solved; the rotation matrix of the rigid-body main axes is the normalized eigenvector matrix of the inertia matrix I, i.e. the normalized eigenvector matrix e of I equals the rotation matrix of the profile.
In the present application, the target main axes of the workpiece are obtained by the above method, and the position reference for the robot to grab the workpiece is determined from them: the two intersection points of the extended target main axis with the target contour serve as one pair of holding points.
Then the line perpendicular to the main axis and passing through the centroid is obtained, and its intersection points with the contour serve as the other pair of holding points, giving 4 points on the workpiece as the robot's holding position reference.
The 4-point position reference for the robot to grab the workpiece is obtained through the above steps; if the profile distribution is uniform (i.e. equivalent to a uniform distribution of the target workpiece mass), the center coordinate of the plane figure formed by the 4 points is the profile distribution center of the workpiece.
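For illustration, a 2-D analogue of the main-axis method can be sketched as follows. This is an assumption-laden sketch: the patent works with the full 3-D inertia matrix, and the extreme-projection pick below is a discrete stand-in for the exact axis/contour intersection:

```python
import numpy as np

def principal_axes(points):
    """Centroid and principal axes of a 2-D contour, from the
    eigenvectors of a 2x2 inertia-style second-moment matrix (a
    planar analogue of the 3-D inertia matrix I above)."""
    pts = np.asarray(points, dtype=float)
    g = pts.mean(axis=0)          # centroid (x_g, y_g)
    d = pts - g                   # centered coordinates
    inertia = np.array([
        [np.sum(d[:, 1] ** 2), -np.sum(d[:, 0] * d[:, 1])],
        [-np.sum(d[:, 0] * d[:, 1]), np.sum(d[:, 0] ** 2)],
    ])
    # eigh returns eigenvalues in ascending order; the eigenvector of
    # the smallest moment of inertia is the long (major) axis.
    _, vecs = np.linalg.eigh(inertia)
    return g, vecs

def holding_points(points, centroid, axis):
    """Contour point pair with extreme projection on an axis through
    the centroid - a discrete stand-in for the axis/contour
    intersections used as holding points."""
    pts = np.asarray(points, dtype=float)
    proj = (pts - centroid) @ axis
    return pts[np.argmin(proj)], pts[np.argmax(proj)]

# Axis-aligned rectangular contour, long side along x.
rect = np.array([[x, y] for x in np.linspace(-4.0, 4.0, 50)
                 for y in (-1.0, 1.0)])
g, axes = principal_axes(rect)
p1, p2 = holding_points(rect, g, axes[:, 0])   # major-axis pair
q1, q2 = holding_points(rect, g, axes[:, 1])   # perpendicular pair
```

The two pairs (p1, p2) and (q1, q2) correspond to the 4-point holding reference: one pair along the major axis, one along the perpendicular through the centroid.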
In one embodiment, referring to FIG. 7, 5 images each are disclosed for workpieces C-F, illustrated here by workpiece C. To test the generalization of the algorithm, workpiece C is placed randomly between every two adjacent images, with no pose correlation between consecutive images. The grab points obtained by the method are unchanged across the 5 images, and accurate grab positions are likewise obtained in the images of the other workpieces, improving grabbing reliability.
The application further provides a workpiece recognition method based on the image recognition-SVM learning model, which comprises the following steps:
step S100: acquiring the image of the workpiece to be detected;
step S200: classifying the workpiece to be detected by adopting an SVM learning model classifier, and outputting a classification result;
step S300: and classifying the workpiece to be detected according to the classification result.
Preferably, the step S200 includes the steps of:
step S210: identifying the outline of the sample workpiece in each sample picture in the sample library, extracting the feature vector of the sample workpiece, performing normalization and dimension unification on the feature vector by adopting the shape vector histogram feature, and outputting the feature of each sample workpiece image;
step S220: extracting the characteristics of each sample workpiece image in a sample library, adopting an SVM learning model classifier to identify and classify the multi-angle image characteristics of the workpiece, detecting a classification result, returning to continue classification if the classification result is wrong, and outputting the trained SVM learning model classifier if the classification result is correct;
step S230: inputting the image of the workpiece to be detected into an SVM learning model classifier trained by the training module, and outputting a sample picture similar to the image of the workpiece to be detected as a pre-classification result;
step S240: calculating the SVH feature similarity between the sample picture and the workpiece image to be detected, and judging whether the similarity meets the expected tolerance of the system for the discrimination result; if yes, outputting the pre-classification result, and if not, indicating an unqualified piece;
step S250: recognizing the coordinates of the workpiece in the image of the workpiece to be detected, converting the coordinates of the workpiece to be detected into the space coordinates of the robot, acquiring the parameters of the transverse main shaft and the longitudinal main shaft of the image of the workpiece to be detected, extracting a first pair of intersection points of the transverse main shaft of the workpiece to be detected and the contour of the workpiece, extracting a second pair of intersection points of the longitudinal main shaft of the workpiece to be detected and the contour of the workpiece, and outputting the first pair of intersection points and the second pair of intersection points as the grabbing points of the robot.
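Steps S210, S230 and S240 can be wired together in a self-contained sketch (all function names, the synthetic data and the scikit-learn classifier are illustrative assumptions, not the patent's implementation):

```python
import numpy as np
from sklearn.svm import SVC

def svh(shape_vec, m_bins=30):
    # S210: quantize a centroid shape vector (values in [0, 1)) into
    # an M-bin histogram normalized by the vector length.
    counts, _ = np.histogram(shape_vec, bins=m_bins, range=(0.0, 1.0))
    return counts / len(shape_vec)

def chi2(h1, h2):
    # S240: Shape-Context-style matching cost between two SVHs.
    denom = h1 + h2
    m = denom > 0
    return 0.5 * np.sum((h1[m] - h2[m]) ** 2 / denom[m])

def recognize(shape_vec, classifier, standards, tol):
    """S230-S240: SVM pre-classification, then SVH similarity
    verification; returns the class label, or None when the workpiece
    is treated as unidentifiable."""
    h = svh(shape_vec)
    label = int(classifier.predict([h])[0])
    return label if chi2(h, standards[label]) <= tol else None

# Toy run: two synthetic classes whose shape vectors occupy different
# value ranges, hence different SVH bins.
rng = np.random.default_rng(2)
vec_a = rng.uniform(0.0, 0.5, 300)
vec_b = rng.uniform(0.5, 1.0, 300)
X = np.array([svh(vec_a), svh(vec_b)])
clf = SVC(kernel="linear").fit(X, [0, 1])
standards = {0: X[0], 1: X[1]}
label = recognize(rng.uniform(0.0, 0.5, 250), clf, standards, tol=0.5)
```

A returned `None` corresponds to step S240 failing: the pre-classification is rejected and the workpiece is sorted out as a reject.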
The apparatus provided herein is described in detail below with reference to specific examples.
(1) To verify the advantages of the SVM learning model classifier in workpiece recognition, a tree-structured weak classifier, Random Forest, is taken as comparative example 1, a BP neural network based on a mapping model as comparative example 2, and the device provided by the present application as the embodiment.
Accuracy and real-time performance of the recognition algorithm are used as evaluation indexes.
50 types of workpiece samples were examined; in the embodiment and in each of comparative examples 1-2, 35 training samples and 15 test samples were used for classification and recognition. The performance results of the embodiment and comparative examples 1-2 are shown in Table 1.
TABLE 1 Learning model performance comparison

Learning model                             Algorithm accuracy   Algorithm time/ms
SVM (embodiment)                           98%                  15
RandomForest (comparative example 1)       85%                  12.3
BP neural network (comparative example 2)  87%                  20.3
As can be seen from Table 1, comparative example 1 takes the least algorithm time because it mainly performs two-class logical judgments, but its accuracy is also the lowest. Comparative example 2 has the longest recognition time, and its accuracy is lower than that of the device provided in the present application, mainly because small-sample data cannot exploit the strong generalization capability of a neural network, leaving the network under-fitted.
The SVM performs better than other machine learning models on small-sample data, determining the classification surface with only a few support vectors.
Therefore, the comprehensive performance of the device is optimal, and it is particularly suitable for identifying workpieces whose sample library changes frequently.
(2) To test the effectiveness of the class verification model, formula (8) is used as the verification standard: 18 samples of the same workpiece type are taken, 2 workpieces of other types are intentionally mixed in, and the 20 workpieces are classified with the device provided by the present application. The 2 other-type workpieces are classes misclassified by the SVM learning model classifier.
(3) The C value of each of the 20 workpieces is calculated by the following formula:

C_{i,M} = (1/2) Σ_{k=1}^{K} [h_i(k) - h_M(k)]² / [h_i(k) + h_M(k)]    (8)

where i indexes the workpiece sample and M is the standard sample of its nominal type.
the results of the C values of the 20 workpieces are shown in FIG. 8, in which the abscissa of FIG. 8 represents the number of each workpiece sample and the ordinate represents the C value corresponding to each workpiece.
Referring to fig. 8, samples 11 and 16 are the other-type workpieces misclassified by the classifier; their C values are much higher than those of the same-type samples. This indicates that the apparatus provided by the present application can effectively sort workpieces of the same type, that workpieces of different types have little influence on the sorting result, and that the similarity verification in the classification module increases the validity of the result.
Although the present application has been described with reference to a few embodiments, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the application as defined by the appended claims.

Claims (6)

1. An apparatus for recognizing a workpiece based on an image recognition-SVM learning model, comprising: an image acquisition unit, an identification unit and a robot,
the image acquisition unit is used for acquiring an image of a workpiece to be detected and is in data connection with the identification unit;
the recognition unit is used for classifying the workpiece to be detected by adopting an SVM learning model classifier after extracting the feature vectors of the workpiece in the image, outputting a classification result, acquiring a grabbing point of the workpiece to be detected and connecting the grabbing point with the robot control;
the robot is used for classifying the workpiece to be detected according to the classification result;
wherein the identification unit includes:
the training module is used for extracting the characteristics of each sample workpiece image in a sample library, identifying and classifying the multi-angle image characteristics of the workpiece by adopting an SVM learning model classifier, detecting a classification result, returning to continue classification if the classification result is wrong, and outputting the trained SVM learning model classifier if the classification result is correct;
the learning module is used for inputting the image of the workpiece to be detected into the SVM learning model classifier trained by the training module and outputting a sample picture similar to the image of the workpiece to be detected as a pre-classification result;
the classification module is used for calculating the SVH feature similarity between the sample picture and the workpiece image to be detected, judging whether the similarity meets the expected tolerance of the system for a discrimination result, if so, outputting the pre-classification result, and if not, indicating an unqualified piece;
and the position module is used for identifying the coordinates of the workpiece in the image of the workpiece to be detected, converting the coordinates of the workpiece to be detected into the space coordinates of the robot, acquiring the parameters of the transverse main shaft and the longitudinal main shaft of the image of the workpiece to be detected, extracting a first pair of intersection points of the transverse main shaft of the workpiece to be detected and the contour of the workpiece, extracting a second pair of intersection points of the longitudinal main shaft of the workpiece to be detected and the contour of the workpiece, and outputting the first pair of intersection points and the second pair of intersection points as the grabbing points of the robot.
2. The image recognition-SVM learning model-based workpiece recognition device of claim 1, further comprising a conveyor belt and a position sensor,
the position sensor is arranged on the conveyor belt, is in data connection with the identification unit and is used for acquiring the position of the workpiece;
the identification unit is used for judging whether the position of the workpiece to be detected enters the image acquisition range of the image acquisition unit, and if so, the identification unit controls the image acquisition unit to acquire the image of the workpiece to be detected.
3. The workpiece recognition device based on the image recognition-SVM learning model as recited in claim 2, wherein the recognition unit is in communication connection with the robot through a Socket;
the identification unit is in control connection with the conveyor belt and is used for controlling the conveyor belt to compensate the position deviation of the workpiece.
4. The image recognition-SVM learning model-based workpiece recognition apparatus as claimed in claim 1, wherein the features of the sample workpiece images in the training module extracted from the sample library are obtained by:
the contour extraction module is used for identifying the contour of the sample workpiece in each sample picture in the sample library and extracting the feature vector of the sample workpiece;
and the characteristic regularizing module is used for regularizing the characteristic vectors by adopting shape vector histogram characteristics and unifying the dimensions.
5. The image recognition-SVM learning model-based workpiece recognition apparatus as claimed in claim 1, wherein said robot is configured to grasp a workpiece according to said grasping point.
6. A workpiece recognition method based on an image recognition-SVM learning model is characterized by comprising the following steps:
step S100: acquiring an image of a workpiece to be detected;
step S200: classifying the workpiece to be detected by adopting an SVM learning model classifier, and outputting a classification result;
step S300: classifying the workpiece to be detected according to the classification result; wherein the step S200 includes the steps of:
step S210: identifying the outline of a sample workpiece in each sample picture in a sample library, extracting a feature vector of the sample workpiece, carrying out normalization and dimension unification on the feature vector by adopting shape vector histogram features, and outputting the features of each sample workpiece image;
step S220: extracting the characteristics of each sample workpiece image in a sample library, adopting an SVM learning model classifier to identify the characteristics of the workpiece multi-angle images and classify the workpiece multi-angle images, detecting a classification result, returning to continue classification if the classification result is wrong, and outputting the trained SVM learning model classifier if the classification result is correct;
step S230: inputting the image of the workpiece to be detected into an SVM learning model classifier trained by a training module, and outputting a sample picture similar to the workpiece to be detected as a pre-classification result;
step S240: calculating the SVH feature similarity of the sample picture and the workpiece image to be detected, judging whether the similarity meets the expected tolerance of a system to a judgment result, if so, outputting the pre-classification result, and if not, displaying a non-qualified piece;
step S250: recognizing the coordinates of the workpiece in the image of the workpiece to be detected, converting the coordinates of the workpiece to be detected into space coordinates of the robot, acquiring parameters of a transverse main shaft and a longitudinal main shaft of the image of the workpiece to be detected, extracting a first pair of intersection points of the transverse main shaft of the workpiece to be detected and the contour of the workpiece, extracting a second pair of intersection points of the longitudinal main shaft of the workpiece to be detected and the contour of the workpiece, and outputting the first pair of intersection points and the second pair of intersection points as grabbing points of the robot.
CN201811482549.3A 2018-12-05 2018-12-05 Workpiece recognition device and method based on image recognition-SVM learning model Active CN109657708B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811482549.3A CN109657708B (en) 2018-12-05 2018-12-05 Workpiece recognition device and method based on image recognition-SVM learning model


Publications (2)

Publication Number Publication Date
CN109657708A CN109657708A (en) 2019-04-19
CN109657708B true CN109657708B (en) 2023-04-18

Family

ID=66112376

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811482549.3A Active CN109657708B (en) 2018-12-05 2018-12-05 Workpiece recognition device and method based on image recognition-SVM learning model

Country Status (1)

Country Link
CN (1) CN109657708B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276405B (en) * 2019-06-26 2022-03-01 北京百度网讯科技有限公司 Method and apparatus for outputting information
CN110728655A (en) * 2019-09-06 2020-01-24 重庆东渝中能实业有限公司 Machine vision-based numerical control machine tool workpiece abnormity detection method and device
CN113714179B (en) * 2020-03-23 2023-12-01 江苏独角兽电子科技有限公司 Multifunctional medical instrument cleaning device
CN111932490B (en) * 2020-06-05 2023-05-05 浙江大学 Visual system grabbing information extraction method for industrial robot
CN112084964A (en) * 2020-09-11 2020-12-15 浙江水晶光电科技股份有限公司 Product identification apparatus, method and storage medium
CN112649780B (en) * 2020-12-17 2023-07-25 泰安轻松信息科技有限公司 Intelligent instrument circuit identification and detection equipment
CN112862770B (en) * 2021-01-29 2023-02-14 珠海迪沃航空工程有限公司 Defect analysis and diagnosis system, method and device based on artificial intelligence
CN116841270B (en) * 2023-09-01 2023-11-14 贵州通利数字科技有限公司 Intelligent production line scheduling method and system based on Internet of things

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140078163A (en) * 2012-12-17 2014-06-25 한국전자통신연구원 Apparatus and method for recognizing human from video
CN106372666A (en) * 2016-08-31 2017-02-01 同观科技(深圳)有限公司 Target identification method and device
CN107016391A (en) * 2017-04-14 2017-08-04 中国科学院合肥物质科学研究院 A kind of complex scene workpiece identification method
CN107451601A (en) * 2017-07-04 2017-12-08 昆明理工大学 Moving Workpieces recognition methods based on the full convolutional network of space-time context
CN108010074A (en) * 2017-10-19 2018-05-08 宁波蓝圣智能科技有限公司 A kind of workpiece inspection method and system based on machine vision


Also Published As

Publication number Publication date
CN109657708A (en) 2019-04-19

Similar Documents

Publication Publication Date Title
CN109657708B (en) Workpiece recognition device and method based on image recognition-SVM learning model
Deshpande et al. One-shot recognition of manufacturing defects in steel surfaces
CN111251295A (en) Visual mechanical arm grabbing method and device applied to parameterized parts
Semeniuta et al. Vision-based robotic system for picking and inspection of small automotive components
Jasim et al. Contact-state modeling of robotic assembly tasks using gaussian mixture models
Kuo et al. Improving defect inspection quality of deep-learning network in dense beans by using hough circle transform for coffee industry
CN114913346A (en) Intelligent sorting system and method based on product color and shape recognition
Cheng et al. Smart Grasping of a Soft Robotic Gripper Using NI Vision Builder Automated Inspection Based on LabVIEW Program
Dong et al. A review of robotic grasp detection technology
CN109670538B (en) Method for identifying assembly contact state of non-rigid part
Liu et al. Robust 3-d object recognition via view-specific constraint
Brudka et al. Intelligent robot control using ultrasonic measurements
Li et al. Robot vision model based on multi-neural network fusion
Uçar et al. Determination of Angular Status and Dimensional Properties of Objects for Grasping with Robot Arm
Shi et al. A fast workpiece detection method based on multi-feature fused SSD
Yang et al. A multi-workpieces recognition algorithm based on shape-SVM learning model
Mercier et al. Deep object ranking for template matching
Hammer et al. White box classification of dissimilarity data
Guo et al. Real-time detection and classification of machine parts with embedded system for industrial robot grasping
Peña et al. Invariant object recognition robot vision system for assembly
Guan et al. Recognition of micro force locking contact state based on one-dimensional residual network
Dev Anand et al. Robotics in online inspection and quality control using moment algorithm
Furukawa et al. Grasping position detection using template matching and differential evolution for bulk bolts
Zhang et al. Vision-based robotic grasp success determination with convolutional neural network
Grana et al. Some applications of morphological neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant