CN114549420A - Workpiece identification and positioning method based on template matching - Google Patents

Workpiece identification and positioning method based on template matching

Info

Publication number
CN114549420A
CN114549420A (application CN202210090650.4A)
Authority
CN
China
Prior art keywords
template
matching
image
workpiece
method based
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210090650.4A
Other languages
Chinese (zh)
Inventor
郭发勇
黄志宇
李玮
车金庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changzhou Vocational Institute of Engineering
Original Assignee
Changzhou Vocational Institute of Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changzhou Vocational Institute of Engineering filed Critical Changzhou Vocational Institute of Engineering
Priority to CN202210090650.4A priority Critical patent/CN114549420A/en
Publication of CN114549420A publication Critical patent/CN114549420A/en
Withdrawn legal-status Critical Current

Classifications

    • G06T 7/0004 Industrial image inspection
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G06T 5/30 Erosion or dilatation, e.g. thinning
    • G06T 5/40 Image enhancement or restoration using histogram techniques
    • G06T 7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2207/20032 Median filtering
    • G06T 2207/20104 Interactive definition of region of interest [ROI]
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30164 Workpiece; Machine component
    • G06T 2207/30244 Camera pose
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a workpiece identification and positioning method based on template matching. Coordinate values of feature points on the imaging plane are obtained and, combined with the coordinates of the corresponding points in the camera image coordinate system, yield the coordinate values of the feature points in three-dimensional space; a three-dimensional geometric model is established from these imaging-point coordinates; and a shape-based template matching method searches the camera's field of view and completes the matching, giving an accurate positioning result for the workpiece. The result can subsequently be combined with an industrial robot arm and a calibrated hand-eye system to grasp and sort the workpiece.

Description

Workpiece identification and positioning method based on template matching
Technical Field
The invention belongs to the technical field of industrial image processing and recognition-based positioning, and in particular relates to a workpiece identification and positioning method based on template matching.
Background
The industrial image processing and positioning technology is an important research content in the field of machine vision, and the identification and positioning of workpieces have important application in the fields of industrial production lines, mechanical arm grabbing and the like.
Chinese patent application No. 2017112279315 discloses a layered positioning method for an industrial robot in an industrial environment, belonging to the field of object positioning. It comprises the following steps. S1: acquire image information of the object to be positioned with a binocular vision system. S2: perform preliminary processing of the object's image information with the MeanShift algorithm to cut out the target picture information. S3: match and screen interest-point pairs of the target area using an improved SURF algorithm. S4: calculate the three-dimensional coordinates of the matched and screened interest-point pairs with a triangulation algorithm to accurately position the object in three dimensions. That invention addresses the long positioning time and low precision of industrial robots grasping objects with a novel layered target-positioning method, which avoids the influence of irrelevant points on the overall result, improves the overall matching precision, and accelerates the overall matching speed. However, conventional positioning methods, such as template matching based on gray values or geometric primitives, are easily affected by illumination changes, occlusion, and the like, resulting in slow recognition and unstable positioning.
Disclosure of Invention
In order to solve the above problems, the present invention provides a workpiece identification and positioning method based on template matching, which comprises the following steps:
s1, collecting a standard workpiece image;
s2, creating a template image, creating an ROI template area, and creating a template by an image processing method;
s3, specifying a conversion relation, specifying a template image mark point pixel coordinate system coordinate and a corresponding world coordinate system coordinate, specifying a corresponding relation, and calibrating a camera;
s4, acquiring a target workpiece image, and performing preprocessing and image enhancement processing on the acquired target workpiece image;
s5, matching the shape template, matching the acquired target workpiece image with the template image, and determining the three-dimensional poses of all matched workpieces;
s6, detecting whether the matching is successful, if so, executing the step S7, otherwise, re-executing the step S2;
and S7, ending positioning.
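The S1-S7 loop can be sketched with a plain gray-value correlation search. This is a minimal illustration only: the patent's step S5 uses shape-based matching rather than raw correlation, and all function names here are hypothetical.

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation; 1.0 means perfect agreement."""
    a = patch - patch.mean()
    b = template - template.mean()
    den = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / den) if den else 0.0

def find_matches(target, template, threshold=0.9):
    """S5/S6 in miniature: slide the template over the target image and
    keep every position whose matching score clears the threshold."""
    th, tw = template.shape
    hits = []
    for r in range(target.shape[0] - th + 1):
        for c in range(target.shape[1] - tw + 1):
            s = ncc(target[r:r + th, c:c + tw], template)
            if s >= threshold:
                hits.append((r, c, s))
    return hits
```

If `find_matches` returns nothing, the loop returns to S2 and the template is recreated; otherwise positioning ends (S7).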
Preferably, a blob-analysis method is used when the ROI template region is created in step S2.
Preferably, the specific method for specifying the transformation relationship in step S3 is to first define a three-dimensional plane, generate cross mark points by specifying the coordinates of a plurality of feature points in the pixel coordinate system, then specify the corresponding coordinate points in the world coordinate system, and obtain the transformation relationship.
Preferably, the image preprocessing in step S4 employs dynamic thresholding, histogram equalization, median filtering, and linear gray-scale transformation.
Preferably, an image pyramid method is adopted in the matching in step S5.
Preferably, the method further includes a step S61, executed before step S7, in which the matching parameters are adjusted to further optimize the matching rate.
Preferably, the ROI template region created in step S2 is in any regular shape or any irregular shape to adapt to the positioning of workpieces in different shapes.
By obtaining the coordinate values of feature points on the imaging plane and combining them with the coordinates of the corresponding points in the camera image coordinate system, the invention obtains the coordinate values of the feature points in three-dimensional space, establishes a three-dimensional geometric model from the imaging-point coordinates, and searches and completes matching within the camera's field of view using a shape-based template matching method, obtaining an accurate positioning result for the workpiece. The method has the advantages of strong robustness to illumination change and workpiece occlusion, high matching speed, and the like.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
Examples
As shown in fig. 1, a workpiece identifying and positioning method based on template matching includes the following steps:
S1, acquiring clear standard workpiece images of reliable quality.
S2, creating an ROI template area and creating a template through blob analysis, morphological processing methods such as image threshold transformation, feature selection, and dilation/erosion, and other image processing methods, to obtain an image shape template;
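A minimal NumPy sketch of this step (function names are illustrative; a real implementation would also extract the contour and select features): threshold the image, apply a morphological opening (erosion followed by dilation) to remove speckle, and take the bounding box of the surviving blob pixels as the ROI.

```python
import numpy as np

def erode(b):
    """Binary erosion with a 3x3 cross structuring element."""
    p = np.pad(b, 1)
    return (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
            & p[1:-1, :-2] & p[1:-1, 2:])

def dilate(b):
    """Binary dilation with a 3x3 cross structuring element."""
    p = np.pad(b, 1)
    return (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
            | p[1:-1, :-2] | p[1:-1, 2:])

def blob_roi(image, threshold=128):
    """Threshold transform + opening, then the bounding box of the
    remaining foreground: (row_min, row_max, col_min, col_max)."""
    opened = dilate(erode(image > threshold))
    rows, cols = np.nonzero(opened)
    if rows.size == 0:
        return None
    return int(rows.min()), int(rows.max()), int(cols.min()), int(cols.max())
```

The opening removes isolated noise pixels so the ROI follows the workpiece blob rather than the speckle.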
S3, specifying the coordinates of a plurality of template image feature points in the pixel coordinate system and the corresponding world coordinate system coordinates, thereby establishing the correspondence between the world coordinate system and the pixel coordinate system. Before this correspondence is established, the camera is calibrated to obtain its intrinsic and extrinsic parameters, and data models of the world coordinate system and the pixel coordinate system are established;
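The pixel-to-world correspondence for points on the working plane can be illustrated with a least-squares affine fit. This is a deliberate simplification: the patent's step also performs full camera calibration for intrinsic and extrinsic parameters, and the function names are assumptions.

```python
import numpy as np

def fit_pixel_to_world(pixel_pts, world_pts):
    """Least-squares affine map world = [u, v, 1] @ M, estimated from
    marked feature-point pairs (at least three non-collinear points)."""
    px = np.asarray(pixel_pts, dtype=float)
    wd = np.asarray(world_pts, dtype=float)
    A = np.hstack([px, np.ones((len(px), 1))])    # rows [u, v, 1]
    M, *_ = np.linalg.lstsq(A, wd, rcond=None)    # 3x2 affine matrix
    return M

def pixel_to_world(M, uv):
    """Map a pixel coordinate (u, v) into world-plane coordinates."""
    u, v = uv
    return np.array([u, v, 1.0]) @ M
```

With four marked corners of a calibration square, for example, any matched pixel position can then be converted into world coordinates for the robot arm.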
s4, collecting a target workpiece, preprocessing and enhancing the target workpiece image, carrying out histogram equalization processing on the collected standard workpiece image, carrying out median filtering processing on the image after the histogram equalization to remove noise in the image, carrying out linear gray level conversion processing on the processed image, detecting a model in a new image with a potential inclined object example after image preprocessing operation, and carrying out matching and positioning detection model based on a deformed template;
S5, matching the acquired target workpiece image with the template image and determining the three-dimensional poses and matching scores of all matched target workpieces. During matching, the similarity between the template and the image is calculated: the template of the target is defined as a point set with a direction vector associated with each point, and the dot product is computed between the direction vector of each point of the shape contour in the template and the direction vector of the corresponding point in the image. The specific calculation formula is as follows:
s = \frac{1}{n}\sum_{i=1}^{n}\frac{\langle d_i,\, e_{q+p_i}\rangle}{\lVert d_i\rVert\,\lVert e_{q+p_i}\rVert}

where p_i are the template points with direction vectors d_i, and e_{q+p_i} is the image direction vector at the point corresponding to p_i when the template is placed at position q.
in very special applications it may even be necessary to ignore local contrast direction changes. In this case, the similarity measure needs to be modified as follows:
s = \frac{1}{n}\sum_{i=1}^{n}\frac{\lvert\langle d_i,\, e_{q+p_i}\rangle\rvert}{\lVert d_i\rVert\,\lVert e_{q+p_i}\rVert}
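The two similarity measures described in the text can be sketched directly as a mean normalized dot product over the direction vectors; the function name and signature are illustrative assumptions.

```python
import numpy as np

def shape_score(template_dirs, image_dirs, contrast_invariant=False):
    """Mean normalized dot product between template direction vectors and
    the image direction vectors at the corresponding points. With
    contrast_invariant=True, taking the absolute value per point makes
    the score ignore local contrast-polarity flips (the modified measure)."""
    d = np.asarray(template_dirs, dtype=float)
    e = np.asarray(image_dirs, dtype=float)
    dots = (d * e).sum(axis=1)
    norms = np.linalg.norm(d, axis=1) * np.linalg.norm(e, axis=1)
    terms = dots / np.where(norms == 0, 1.0, norms)
    if contrast_invariant:
        terms = np.abs(terms)
    return float(terms.mean())
```

Identical direction fields score 1.0; a globally inverted object scores -1.0 under the plain measure but 1.0 under the contrast-invariant one.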
the normalized similarity measures in the above equations will all return a number less than 1 as the score for the potential matching object. In all cases, a score of 1 indicates perfect agreement between the template and the image. In addition, this score is approximately related to how many parts of the template appear in the image. For example, if an object is 50% occluded, the (average) score will not exceed 0.5. This attribute of the match score is highly desirable because it provides the user with meaningful data that the user can select an intuitive threshold for deciding when a match should be considered as being found. When the template is matched, an image pyramid method is adopted, the image is divided into different levels of sizes, after the relevant key information on each level of the image pyramid is ensured, the reasonable range of pyramid layer number is set, then the optimal layer number set value is selected according to the matching result,
Specifically, the appropriate range of levels for the search target and the template is first calculated; complete matching is then carried out, with a stopping condition added, under the constraint that the target features on the highest-level image remain distinguishable. The template result found at the highest level is mapped downwards while each level's matching result is propagated upwards; if the bottom-level matching result is poor, the number of levels is automatically reduced until an ideal optimal level count is obtained. Whether the matching is successful is judged from the matching result: if so, step S6 is executed for further optimization; if not, the method returns to step S2 to recreate the template.
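The coarse-to-fine pyramid search described above can be sketched as follows. Gray-value correlation stands in for the shape-based score, and the refinement window size and level count are illustrative assumptions.

```python
import numpy as np

def downsample(img):
    """One pyramid level up: halve resolution by 2x2 block averaging."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def best_match(target, template):
    """Exhaustive zero-mean correlation search; returns the best (row, col)."""
    th, tw = template.shape
    b = template - template.mean()
    best, pos = -2.0, (0, 0)
    for r in range(target.shape[0] - th + 1):
        for c in range(target.shape[1] - tw + 1):
            a = target[r:r + th, c:c + tw]
            a = a - a.mean()
            den = np.linalg.norm(a) * np.linalg.norm(b)
            s = (a * b).sum() / den if den else -2.0
            if s > best:
                best, pos = s, (r, c)
    return pos

def pyramid_match(target, template, levels=1):
    """Match on the top pyramid level, then map the result down and
    refine it within a +/-2 window on each finer level."""
    targets = [np.asarray(target, dtype=float)]
    templates = [np.asarray(template, dtype=float)]
    for _ in range(levels):
        targets.append(downsample(targets[-1]))
        templates.append(downsample(templates[-1]))
    r, c = best_match(targets[-1], templates[-1])
    for lvl in range(levels - 1, -1, -1):
        r, c = 2 * r, 2 * c                      # map position one level down
        t, tp = targets[lvl], templates[lvl]
        th, tw = tp.shape
        r0, c0 = max(r - 2, 0), max(c - 2, 0)
        r1 = min(r + 3, t.shape[0] - th + 1)
        c1 = min(c + 3, t.shape[1] - tw + 1)
        win = t[r0:r1 + th - 1, c0:c1 + tw - 1]
        dr, dc = best_match(win, tp)
        r, c = r0 + dr, c0 + dc
    return r, c
```

The coarse search touches only the small top-level image, and each refinement examines a handful of candidate positions, which is what makes the pyramid strategy fast.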
S6, adjusting matching parameters such as greediness, deformation size, pyramid level, and overlap coefficient to improve the speed and accuracy of template matching. After potential matching positions are determined by searching the pyramid levels, they are tracked down to the lower levels of the pyramid until found at the bottom level of the image pyramid. Once the target object is found at the bottom level, a final pose can be obtained that is more accurate than the resolution of the discretized search space.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (7)

1. A workpiece identification and positioning method based on template matching is characterized by comprising the following steps:
s1, collecting a standard workpiece image;
s2, creating a template image, creating an ROI template area, and creating a template by an image processing method;
s3, specifying a conversion relation, specifying coordinates of a pixel coordinate system of the template image mark points and coordinates of a world coordinate system corresponding to the coordinates, specifying a corresponding relation, and calibrating the camera;
s4, acquiring a target workpiece image, and performing preprocessing and image enhancement processing on the acquired target workpiece image;
s5, matching the shape template, matching the acquired target workpiece image with the template image, and determining the three-dimensional poses of all matched workpieces;
s6, detecting whether the matching is successful, if so, executing the step S7, otherwise, re-executing the step S2;
and S7, ending positioning.
2. The workpiece recognition and positioning method based on template matching as claimed in claim 1, wherein: and in the step S2, a blob method is adopted when the ROI template region is created.
3. The workpiece recognition and positioning method based on template matching as claimed in claim 1, wherein: the specific method for specifying the transformation relationship in step S3 is to first define a three-dimensional plane, generate cross mark points by specifying the coordinates of a plurality of feature points in the pixel coordinate system, then specify the corresponding coordinate points in the world coordinate system, and obtain the transformation relationship.
4. The workpiece recognition and positioning method based on template matching as claimed in claim 1, wherein: the image preprocessing method in step S4 adopts dynamic thresholding, histogram equalization, median filtering, and linear gray-scale transformation.
5. The workpiece recognition and positioning method based on template matching as claimed in claim 1, wherein: in the step S5, an image pyramid method is adopted for matching.
6. The workpiece recognition and positioning method based on template matching as claimed in claim 1, wherein: step S61 is also included, which is before step S7, and step S61 is to adjust the matching parameters.
7. The workpiece recognition and positioning method based on template matching as claimed in claim 1, wherein: the ROI template region created in step S2 is of an arbitrary regular shape or an arbitrary irregular shape.
CN202210090650.4A 2022-01-26 2022-01-26 Workpiece identification and positioning method based on template matching Withdrawn CN114549420A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210090650.4A CN114549420A (en) 2022-01-26 2022-01-26 Workpiece identification and positioning method based on template matching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210090650.4A CN114549420A (en) 2022-01-26 2022-01-26 Workpiece identification and positioning method based on template matching

Publications (1)

Publication Number Publication Date
CN114549420A true CN114549420A (en) 2022-05-27

Family

ID=81674422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210090650.4A Withdrawn CN114549420A (en) 2022-01-26 2022-01-26 Workpiece identification and positioning method based on template matching

Country Status (1)

Country Link
CN (1) CN114549420A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115049860A (en) * 2022-06-14 2022-09-13 广东天太机器人有限公司 System based on feature point identification and capturing method
CN115049860B (en) * 2022-06-14 2023-02-28 广东天太机器人有限公司 System based on feature point identification and capturing method

Similar Documents

Publication Publication Date Title
CN107230203B (en) Casting defect identification method based on human eye visual attention mechanism
CN107392929B (en) Intelligent target detection and size measurement method based on human eye vision model
CN107993224B (en) Object detection and positioning method based on circular marker
CN113724231A (en) Industrial defect detection method based on semantic segmentation and target detection fusion model
CN110222661B (en) Feature extraction method for moving target identification and tracking
CN111401449A (en) Image matching method based on machine vision
CN111783722B (en) Lane line extraction method of laser point cloud and electronic equipment
CN116079749B (en) Robot vision obstacle avoidance method based on cluster separation conditional random field and robot
CN116486287A (en) Target detection method and system based on environment self-adaptive robot vision system
CN110930425B (en) Damaged target detection method based on neighborhood vector inner product local contrast image enhancement
Fan et al. Visual localization using semantic segmentation and depth prediction
CN114549420A (en) Workpiece identification and positioning method based on template matching
CN112991327B (en) Steel grid welding system, method and terminal equipment based on machine vision
CN112381867B (en) Automatic filling method for large-area depth image cavity of industrial sorting assembly line
Songhui et al. Objects detection and location based on mask RCNN and stereo vision
CN117496401A (en) Full-automatic identification and tracking method for oval target points of video measurement image sequences
CN115187744A (en) Cabinet identification method based on laser point cloud
CN113688819A (en) Target object expected point tracking matching method based on mark points
CN112419337A (en) Detection method for robot grabbing position under complex background
CN113160332A (en) Multi-target identification and positioning method based on binocular vision
CN117975175B (en) Plastic pipeline appearance defect detection method based on machine vision
Kang et al. Regular Target Recognition Based on FAST Feature Point Extraction and Contour Recognition
Zhang Research on visual recognition and positioning of industrial robots based on big data technology
CN116385389A (en) Surface defect detection method based on point cloud clustering
CN118295309A (en) Visual servo control system of online slag dragging robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 2022-05-27