CN111242159B - Image recognition and robot automatic positioning method based on product features - Google Patents


Info

Publication number
CN111242159B
CN111242159B (application CN201911329167.1A)
Authority
CN
China
Prior art keywords
robot
features
image recognition
image
deviation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911329167.1A
Other languages
Chinese (zh)
Other versions
CN111242159A (en)
Inventor
何智成
马亚东
胡朝辉
宋凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangxi Liuzhou United Farming Technology Co ltd
Original Assignee
Guangxi Liuzhou United Farming Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangxi Liuzhou United Farming Technology Co ltd filed Critical Guangxi Liuzhou United Farming Technology Co ltd
Priority to CN201911329167.1A priority Critical patent/CN111242159B/en
Publication of CN111242159A publication Critical patent/CN111242159A/en
Application granted granted Critical
Publication of CN111242159B publication Critical patent/CN111242159B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • B25J19/04Viewing devices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Manipulator (AREA)

Abstract

The invention provides an image recognition and robot automatic positioning method based on product features, which tightly couples image recognition with the robot positioning process. A new high-precision image recognition algorithm supplies more accurate position information to the robot: shape features (global) are fused with SURF features (local), combining the rich context information of global features with the strong robustness and rotation insensitivity of local features, so that image recognition based on product features becomes more accurate. The method communicates with the robot and drives it to the designated position, and it includes an important automatic correction function: the current robot position is compared with the position output by image recognition, the deviation is calculated, and laser ranging information is fused to accurately compute the distance between the robot hand end and the feature surface, achieving automatic, self-correcting, precise positioning of the robot.

Description

Image recognition and robot automatic positioning method based on product features
Technical Field
The invention belongs to the technical field of process production, and particularly relates to an image recognition and robot automatic positioning method based on product features.
Background
With the rising level of industrial production and society's growing demands on products and product quality, intelligent manufacturing has become an inevitable trend. Automatic product identification and automatic robot positioning have developed alongside the continuing push for automation and intelligence in modern industrial production, and they are of great significance for saving time and labor and for improving efficiency and accuracy.
Existing image recognition based on product features mainly uses the global and local features of products. Global features include texture, shape, and color features; local features include SIFT and SURF features. Among global features, texture is an important property of an image: it reflects homogeneity independent of color or brightness and carries important information about the arrangement of surface structures and their relation to the surrounding environment. Texture features are not pixel-based, often have rotational invariance, and are relatively resistant to noise. Common texture-based extraction methods include statistical, signal-analysis, model-based, structural, and geometric methods. Statistical methods are simple to implement but lack the global information of the image; signal-analysis methods can scan textures at multiple resolutions but handle irregular texture images poorly and are computationally expensive; structural methods suit only images with large, regularly arranged texture primitives. Shape features belong to the middle-level features of an image; a shape is generally taken to be the region enclosed by a closed contour curve. Shape descriptors fall broadly into two categories, contour-based and region-based, distinguished by whether the feature is extracted from the contour only or from the entire shape region. Color features are a basic visual attribute by which humans perceive and distinguish objects, describing the surface properties of the scene corresponding to an image or image region. Common color feature extraction and matching methods are color histograms, color sets, color moments, and color coherence vectors.
Under current hardware conditions, a color image is usually converted into a gray-level image and processed by gray-level methods, which reduces complexity. Among local features, many algorithms based on SIFT and SURF feature extraction can realize image recognition.
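The color-to-gray conversion mentioned above can be sketched in a few lines of Python with numpy. The patent does not specify a conversion formula, so this sketch assumes the common ITU-R BT.601 luminance weights; any weighted average of the channels would illustrate the same complexity reduction.

```python
import numpy as np

def rgb_to_gray(img: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to a single-channel gray image
    using ITU-R BT.601 luminance weights (0.299, 0.587, 0.114)."""
    weights = np.array([0.299, 0.587, 0.114])
    return (img.astype(np.float64) @ weights).astype(img.dtype)

# A 1x2 test image: one pure-red pixel and one white pixel.
img = np.array([[[255, 0, 0], [255, 255, 255]]], dtype=np.uint8)
gray = rgb_to_gray(img)  # red maps to 76, white to 255
```

After this reduction, the single-channel image can be filtered and processed by the gray-level methods the patent describes.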
In summary, there are many image recognition methods based on the global and local features of products. Comparing the two: global features are extracted from the whole image, which is faster and provides more context information, but loses many local details; local features are robust, insensitive to rotation and scaling, match accurately, and rarely mismatch, but cannot distinguish similar positions. Moreover, the recognition result is either not transmitted to the robot directly or is entered into the robot by manual intervention, so it cannot be passed to the robot automatically; this wastes resources and introduces the strong subjectivity of manual intervention, affecting the speed and precision of the whole production or inspection line.
Disclosure of Invention
To address these technical problems, the invention provides an image recognition and robot automatic positioning method based on product features, which tightly couples image recognition with the robot positioning process, increases the throughput of the production line, and reduces the subjective influence of manual intervention. The method comprises the following steps:
(1) Filtering the gray-level image generated by the CCD camera with a median filter, which effectively reduces image noise;
(2) Extracting and fusing the global and local features of the image to achieve more accurate target positioning and contour extraction;
(3) Converting the position information in the image coordinate system of step (2) into position information in the world coordinate system through the parameter matrix of the calibrated camera, and transmitting the world coordinates to the motion control module of the robot through a communication module;
(4) The motion control module of the robot receives the world coordinates of step (3) and controls the robot to move to the target position accordingly;
(5) A feedback module calculates whether the deviation between the actual position reached by the robot in step (4) and the target position exceeds a set threshold;
(6) If the deviation calculated in step (5) exceeds the set deviation threshold, the motion control module controls the robot to move toward the target position to realize automatic correction; otherwise, the robot is considered to have reached the target position and no correction is needed.
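The coordinate conversion and feedback check of steps (3)-(6) can be sketched as follows. This is a minimal pinhole-camera back-projection in Python with numpy; the intrinsic matrix K, the extrinsics (R, t), and the use of a laser-measured depth are illustrative assumptions, since the patent specifies only "the parameter matrix of the calibrated camera" without giving its form.

```python
import numpy as np

def image_to_world(u, v, depth, K, R, t):
    """Back-project pixel (u, v) at a known depth (e.g. the laser-measured
    distance) into world coordinates: p_cam = depth * K^-1 @ [u, v, 1],
    then p_world = R^T @ (p_cam - t)."""
    p_cam = depth * np.linalg.solve(K, np.array([u, v, 1.0]))
    return R.T @ (p_cam - t)

def needs_correction(actual, target, threshold):
    """Feedback check of steps (5)-(6): a corrective move is needed only
    when some axis deviation exceeds the set threshold."""
    return bool(np.any(np.abs(np.asarray(actual) - np.asarray(target)) > threshold))

# Illustrative calibration: principal point (320, 240), focal length 500 px,
# camera frame aligned with the world frame (R = I, t = 0).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
world = image_to_world(320, 240, 2.0, K, np.eye(3), np.zeros(3))
# The principal-point pixel at depth 2 lies on the optical axis: (0, 0, 2).
```

In the patented method the result of `image_to_world` would be sent to the robot's motion control module over the communication module, and `needs_correction` corresponds to the feedback module's threshold test.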
The beneficial effects are that:
The invention provides an image recognition and robot automatic positioning method based on product features, which unifies image recognition with automatic robot positioning and completes the whole process, from image acquisition through image recognition to robot positioning, fully automatically.
In the image recognition module, the advantages of global features (more context information) and local features (strong robustness) are combined: the two kinds of features are extracted and fused, and a classifier outputs the final classification result. Image recognition based on product features thus becomes more accurate, overcoming the loss of local detail in global features and the insufficient context information of local features.
Drawings
FIG. 1 is a schematic diagram of the position of a workpiece and the actual and expected positions of a robot according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but embodiments of the present invention are not limited thereto.
The invention employs a gray-level camera, so the extracted global features do not include color features. The extracted global feature is the shape feature and the local feature is the SURF feature; both are rotation-invariant. The shape feature is one of the key pieces of information human vision uses in object recognition, and it is stable information that does not change with the surrounding environment, such as brightness. For the local SURF feature, the core of SURF is the Hessian matrix, and scale invariance is strengthened by detecting feature points at different scales. To improve the accuracy of SURF feature matching, the RANSAC (random sample consensus) algorithm is introduced, and erroneous data are rejected through continuous iteration. Finally, the extracted global features are fused with the local features, the fused features are trained, and a classification model outputs the classification result. The method comprises the following steps:
(1) Filtering the gray-level image generated by the CCD camera to reduce image noise;
(2) Performing SURF feature extraction on the filtered image and feature matching with a KD-tree algorithm to achieve accurate positioning of the target;
(3) Extracting the shape features of the target positioned in step (2) to realize contour detection of the target.
These steps extract the global and local features of the image respectively and fuse them, combining the context information of global features with the rotation invariance of local features, and further improve positioning accuracy.
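The RANSAC outlier rejection introduced above can be illustrated with a minimal self-contained sketch. For brevity this example fits a 2D line rather than filtering SURF correspondences, but the principle is the same: repeatedly fit a model to a minimal random sample, keep the model with the largest consensus set, and discard the data that disagree with it. The line model, thresholds, and iteration count are illustrative choices, not values from the patent.

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=0.1, seed=0):
    """Fit y = a*x + b by RANSAC: sample 2 points, count inliers within
    tol, keep the model with the most inliers, then refit on the
    consensus set with least squares. In the method above the same idea
    rejects wrong SURF feature correspondences."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:
            continue  # vertical sample, skip
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = np.abs(pts[:, 1] - (a * pts[:, 0] + b)) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    a, b = np.polyfit(pts[best_inliers, 0], pts[best_inliers, 1], 1)
    return a, b, best_inliers

# Ten points on y = 2x + 1 plus two gross outliers.
pts = [(x, 2 * x + 1) for x in range(10)] + [(3.0, 40.0), (7.0, -30.0)]
a, b, inliers = ransac_line(pts)  # recovers a ~ 2, b ~ 1; outliers rejected
```

An ordinary least-squares fit on the same data would be pulled far off by the two outliers; the consensus step is what makes the estimate robust.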
In mechanical manufacturing, loading, unloading, sorting, and handling of parts are often done manually or semi-manually, which is inefficient and risks accidental injury to personnel. The invention can be used for part identification in intelligent mechanical manufacturing, and the automatic positioning of the robot can complete sorting or handling of parts in real time, giving it great advantages in the manufacturing field. The method continues with the following steps:
(4) Converting the position information in the image coordinate system of step (3) into position information in the world coordinate system and transmitting it to the motion control module of the robot;
(5) The motion control module controls the robot to move to the designated position; due to robot motion errors and calibration errors, there is a certain deviation between the actually reached position and the expected position, as shown in FIG. 1;
(6) First, the deviation |D_Z| in the Z direction is compared with the set threshold. The distance between the robot end and the workpiece in the Z direction is measured by a laser range finder; if the calculated deviation exceeds the set threshold, the motion control module automatically corrects it, reducing the deviation from the expected position and completing the Z-direction correction;
(7) Next, the deviation |D_X| in the X direction is compared with the set threshold. The distance deviation is obtained by converting the image coordinates of the position in step (6) through the parameter matrix; if the deviation exceeds the set threshold, the motion control module automatically corrects it, reducing the deviation from the expected position and completing the X-direction correction;
(8) Finally, the deviation |D_Y| in the Y direction is compared with the set threshold. The distance deviation in the Y direction is obtained by converting the image coordinates of the position in step (7) through the parameter matrix; if the deviation exceeds the set threshold, the motion control module automatically corrects it, reducing the deviation from the expected position and completing the Y-direction correction.
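The sequential Z, then X, then Y correction of steps (6)-(8) can be sketched as a small control loop. This is a simplified model under stated assumptions: the corrective move is idealized as snapping the axis to its expected coordinate, whereas a real controller would command incremental motions and re-measure (laser for Z, image-coordinate conversion for X and Y) after each move.

```python
def correct_axes(actual, expected, thresholds):
    """Sequential per-axis correction: for Z (index 2), then X (0), then
    Y (1), compare |deviation| with that axis's threshold and, if it is
    exceeded, move the robot toward the expected position on that axis.
    The move is modeled here as reaching the expected coordinate."""
    pos = list(actual)
    for axis in (2, 0, 1):  # Z first, then X, then Y, as in steps (6)-(8)
        deviation = pos[axis] - expected[axis]
        if abs(deviation) > thresholds[axis]:
            pos[axis] = expected[axis]  # corrective move on this axis
    return pos

# Z is off by 0.5 (beyond the 0.1 threshold) and gets corrected;
# an X error within the threshold is left alone.
corrected = correct_axes([1.0, 2.0, 3.5], [1.0, 2.0, 3.0], [0.1, 0.1, 0.1])
untouched = correct_axes([1.05, 2.0, 3.0], [1.0, 2.0, 3.0], [0.1, 0.1, 0.1])
```

Ordering Z first matches the text: the Z stand-off is fixed from the laser reading before the in-plane X and Y deviations, which depend on the image-to-world conversion, are corrected.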

Claims (1)

1. An image recognition and robot automatic positioning method based on product features, comprising the following steps:
(1) The gray level image generated by the CCD camera is filtered by using a median filtering method, so that the noise point of the image is effectively reduced;
(2) Extracting and fusing global features and local features of the image to realize more accurate target positioning and contour extraction, wherein the global features are shape features, the local features are SURF features, and the method specifically comprises the following steps:
(2.1) Performing SURF feature extraction on the image and feature matching with a KD-tree algorithm to achieve accurate positioning of the target, wherein the RANSAC algorithm is introduced to improve the accuracy of SURF feature matching, rejecting erroneous data through continuous iteration;
(2.2) extracting shape characteristics of the target positioned in the step (2.1) to realize contour detection of the target;
(3) Converting the position information under the image coordinate system in the step (2) into the position information under the world coordinate system through a parameter matrix of the calibration camera, and transmitting the world coordinate to a motion control module of the robot through a communication module;
(4) The motion control module of the robot receives the world coordinates of the step (3) and controls the robot to move to the position above the target position according to the information;
(5) Calculating the distance between the current robot arm end and the product feature surface from the information obtained by the laser range finder; if the distance is smaller than the distance threshold, moving away from the product until the set distance is reached, and if the distance is larger than the distance threshold, moving toward the product until the set distance is reached;
(6) A feedback module calculates whether the deviation of the actual position reached by the robot in step (5) exceeds a set threshold;
(7) If the deviation calculated in step (6) exceeds the set deviation threshold, the motion control module controls the robot to move toward the target position to realize automatic correction; otherwise, the robot is considered to have reached the target position and no correction is needed.
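The laser stand-off adjustment of claim step (5) reduces to a signed one-axis move. The helper below is a hypothetical sketch (the function name, sign convention, and dead-band threshold are not from the patent): it returns the Z move that restores the set stand-off distance, with positive meaning away from the product.

```python
def adjust_standoff(measured_distance, set_distance, threshold):
    """Compare the laser-measured end-to-surface distance with the set
    distance. Within the threshold, no move is needed; otherwise return
    the signed move restoring the set stand-off (positive = away from
    the product, negative = toward it)."""
    if abs(measured_distance - set_distance) <= threshold:
        return 0.0
    # Too close -> positive move away; too far -> negative move closer.
    return set_distance - measured_distance

# Measured 0.05 with a 0.10 set distance: move 0.05 away from the product.
move = adjust_standoff(0.05, 0.10, 0.005)
```

A real controller would execute this as a guarded motion, re-reading the laser until the residual falls inside the threshold.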
CN201911329167.1A 2019-12-20 2019-12-20 Image recognition and robot automatic positioning method based on product features Active CN111242159B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911329167.1A CN111242159B (en) 2019-12-20 2019-12-20 Image recognition and robot automatic positioning method based on product features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911329167.1A CN111242159B (en) 2019-12-20 2019-12-20 Image recognition and robot automatic positioning method based on product features

Publications (2)

Publication Number Publication Date
CN111242159A (en) 2020-06-05
CN111242159B (en) 2024-04-16

Family

ID=70872792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911329167.1A Active CN111242159B (en) 2019-12-20 2019-12-20 Image recognition and robot automatic positioning method based on product features

Country Status (1)

Country Link
CN (1) CN111242159B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105157680A (en) * 2015-08-28 2015-12-16 北京控制工程研究所 Vision measurement system and method based on combination of global feature and local feature
CN107345814A (en) * 2017-07-11 2017-11-14 海安中科智能制造与信息感知应用研发中心 A kind of mobile robot visual alignment system and localization method
CN109141437A (en) * 2018-09-30 2019-01-04 中国科学院合肥物质科学研究院 A kind of robot global method for relocating
CN109781092A (en) * 2019-01-19 2019-05-21 北京化工大学 Localization for Mobile Robot and drawing method is built in a kind of danger chemical accident


Also Published As

Publication number Publication date
CN111242159A (en) 2020-06-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant