CN114549420A - Workpiece identification and positioning method based on template matching - Google Patents
- Publication number: CN114549420A
- Application number: CN202210090650.4A
- Authority
- CN
- China
- Prior art keywords
- template
- matching
- image
- workpiece
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G06T7/0004 — Industrial image inspection (G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection)
- G06F18/22 — Matching criteria, e.g. proximity measures (G06F18/00 Pattern recognition)
- G06T5/30 — Erosion or dilatation, e.g. thinning (G06T5/00 Image enhancement or restoration)
- G06T5/40 — Image enhancement or restoration using histogram techniques
- G06T7/75 — Determining position or orientation of objects or cameras using feature-based methods involving models
- G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters (camera calibration)
- G06T2207/20032 — Median filtering
- G06T2207/20104 — Interactive definition of region of interest [ROI]
- G06T2207/30108 — Industrial image inspection
- G06T2207/30164 — Workpiece; machine component
- G06T2207/30244 — Camera pose
- Y02P90/30 — Computing systems specially adapted for manufacturing
Abstract
The invention discloses a workpiece identification and positioning method based on template matching. Coordinate values of feature points on the imaging plane are obtained and combined with the coordinates of the corresponding points in the camera image coordinate system to recover the coordinate values of the feature points in three-dimensional space, and a three-dimensional geometric model is established from the imaging-point coordinates. A shape-based template matching method then searches for and completes matching within the camera's field of view, yielding an accurate positioning result for the workpiece. This result can subsequently be combined with an industrial robot arm and a calibrated hand-eye system to complete workpiece grabbing and sorting.
Description
Technical Field
The invention belongs to the technical field of industrial image processing and identification/positioning, and particularly relates to a workpiece identification and positioning method based on template matching.
Background
Industrial image processing and positioning technology is an important research area in machine vision, and the identification and positioning of workpieces has important applications in industrial production lines, robotic-arm grasping, and related fields.
Chinese patent application No. 2017112279315 discloses a layered positioning method for an industrial robot in an industrial environment, belonging to the field of object positioning. It comprises the following steps: S1, acquiring image information of the object to be positioned with a binocular vision system; S2, performing preliminary processing on the object's image information with the MeanShift algorithm to cut out the target picture information; S3, matching and screening interest-point pairs in the target area using an improved SURF algorithm; S4, calculating the three-dimensional coordinates of the matched and screened interest-point pairs by triangulation, thereby accurately positioning the object in three dimensions. That invention addresses the long positioning time and low precision of industrial robots grasping objects with a novel layered target-positioning method, which avoids the influence of irrelevant points on the overall result, improves overall matching precision, and accelerates matching. However, conventional positioning methods, such as template matching based on gray scale or geometric primitives, are easily affected by illumination changes, occlusion, and the like, resulting in slow recognition and unstable positioning.
Disclosure of Invention
In order to solve the above problems, the present invention provides a workpiece identification and positioning method based on template matching, which comprises the following steps:
s1, collecting a standard workpiece image;
s2, creating a template image, creating an ROI template area, and creating a template by an image processing method;
s3, specifying a conversion relation, specifying a template image mark point pixel coordinate system coordinate and a corresponding world coordinate system coordinate, specifying a corresponding relation, and calibrating a camera;
s4, acquiring a target workpiece image, and performing preprocessing and image enhancement processing on the acquired target workpiece image;
s5, matching the shape template, matching the acquired target workpiece image with the template image, and determining the three-dimensional poses of all matched workpieces;
s6, detecting whether the matching is successful, if so, executing the step S7, otherwise, re-executing the step S2;
and S7, ending positioning.
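As a rough illustration only — not the patent's shape-based matcher — the search-and-score loop of steps S5 to S7 can be sketched with a brute-force normalized cross-correlation scan; `match_workpiece` and its default threshold are hypothetical names and values:

```python
import numpy as np

def match_workpiece(target, template, score_threshold=0.7):
    """Slide the template over the target image and report the best
    normalized-correlation match (a stand-in for the patent's
    shape-based matcher)."""
    th, tw = template.shape
    best_score, best_pos = -1.0, None
    t = (template - template.mean()) / (template.std() + 1e-9)
    for y in range(target.shape[0] - th + 1):
        for x in range(target.shape[1] - tw + 1):
            w = target[y:y + th, x:x + tw]
            wn = (w - w.mean()) / (w.std() + 1e-9)
            score = float((t * wn).mean())  # normalized cross-correlation
            if score > best_score:
                best_score, best_pos = score, (x, y)
    if best_score >= score_threshold:
        return best_pos, best_score   # match found -> S7 (end positioning)
    return None, best_score           # match failed -> recreate template (S2)
```

The actual method matches contour direction vectors rather than raw gray values and accelerates the search with an image pyramid, as described in the embodiment below.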
Preferably, blob analysis is used when the ROI template region is created in step S2.
Preferably, the specific method for specifying the transformation relationship in step S3 is to first define a three-dimensional plane, generate cross mark points by specifying the coordinates of several feature points in the pixel coordinate system, then specify the corresponding coordinate points in the world coordinate system, and thereby obtain the transformation relationship.
Preferably, the image preprocessing in step S4 employs dynamic thresholding, histogram equalization, median filtering, and linear gray-scale transformation.
Preferably, an image pyramid method is adopted in the matching in step S5.
Preferably, the method further includes a step S61 before step S7, in which the matching parameters are adjusted to further improve the matching rate.
Preferably, the ROI template region created in step S2 is in any regular shape or any irregular shape to adapt to the positioning of workpieces in different shapes.
The invention obtains the coordinate value of the characteristic point in the three-dimensional space by obtaining the coordinate value of the characteristic point on the imaging plane and combining the coordinate of the corresponding point in the camera image coordinate system, establishes a three-dimensional space geometric model according to the coordinate value of the imaging point, searches and completes matching in the camera visual field range by utilizing a shape-based template matching method, obtains the accurate positioning result of the workpiece, and has the advantages of strong illumination change resistance, workpiece shielding resistance, high matching speed and the like.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
Examples
As shown in fig. 1, a workpiece identifying and positioning method based on template matching includes the following steps:
S1, acquiring clear standard workpiece images of reliable quality.
S2, creating an ROI template area and creating a template through blob analysis, morphological processing methods such as image threshold transformation, feature selection, and dilation-erosion, and other image processing methods, to obtain an image shape template;
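One possible reading of this step, sketched with SciPy (the `create_shape_template` helper and its thresholds are assumptions, not the patent's code): threshold the standard image, clean it with a morphological opening (erosion followed by dilation), label the blobs, and keep the largest one as the template mask.

```python
import numpy as np
from scipy import ndimage

def create_shape_template(image, thresh=128, min_area=20):
    """Threshold -> morphological opening -> blob analysis; return the
    largest blob as a boolean ROI template mask, or None."""
    binary = image > thresh                                    # threshold transformation
    opened = ndimage.binary_opening(binary, structure=np.ones((3, 3)))
    labels, n = ndimage.label(opened)                          # blob analysis
    if n == 0:
        return None
    areas = ndimage.sum(opened, labels, index=range(1, n + 1))
    best = int(np.argmax(areas)) + 1
    if areas[best - 1] < min_area:                             # feature (area) selection
        return None
    return labels == best
```

The opening removes isolated noise pixels while leaving solid workpiece regions intact, which is why it precedes the blob labeling here.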
S3, specifying the coordinates of several image feature points of the template in the pixel coordinate system together with their corresponding world coordinate system coordinates, thereby establishing the correspondence between the world coordinate system and the pixel coordinate system. Before this, camera calibration is carried out to obtain the intrinsic and extrinsic parameters of the camera, and data models of the world coordinate system and the pixel coordinate system are established;
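For the planar case, the pixel-to-world correspondence of S3 can be approximated by a least-squares affine fit over the marked feature points — a simplified stand-in for full calibration with intrinsics and extrinsics; the helper names are illustrative:

```python
import numpy as np

def fit_pixel_to_world(pixel_pts, world_pts):
    """Estimate an affine map world = [u, v, 1] @ coef from >= 3
    pixel/world point pairs by least squares."""
    pixel_pts = np.asarray(pixel_pts, dtype=float)
    world_pts = np.asarray(world_pts, dtype=float)
    ones = np.ones((len(pixel_pts), 1))
    A = np.hstack([pixel_pts, ones])             # rows [u, v, 1]
    coef, *_ = np.linalg.lstsq(A, world_pts, rcond=None)
    return coef                                   # shape (3, 2)

def pixel_to_world(coef, uv):
    """Map one pixel coordinate (u, v) to world coordinates (x, y)."""
    u, v = uv
    x, y = np.array([u, v, 1.0]) @ coef
    return float(x), float(y)
```

An affine map cannot model lens distortion or perspective foreshortening; a full calibration (or at least a homography) would be needed when the workpiece plane is tilted relative to the sensor.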
s4, collecting a target workpiece, preprocessing and enhancing the target workpiece image, carrying out histogram equalization processing on the collected standard workpiece image, carrying out median filtering processing on the image after the histogram equalization to remove noise in the image, carrying out linear gray level conversion processing on the processed image, detecting a model in a new image with a potential inclined object example after image preprocessing operation, and carrying out matching and positioning detection model based on a deformed template;
S5, matching the acquired target workpiece image with the template image and determining the three-dimensional poses and matching scores of all matched target workpieces. During matching, similarity to the template is calculated: the template of the target is defined as a point set p_i (i = 1, ..., n) with a direction vector d_i associated with each point, and the dot product is taken between the direction vector of each point of the shape contour in the template and the direction vector e_(q+p_i) of the corresponding point in the image. The similarity score at image position q is

s = (1/n) * Σ_i ⟨d_i, e_(q+p_i)⟩ / (‖d_i‖ · ‖e_(q+p_i)‖)

In very special applications it may even be necessary to ignore local changes of contrast direction. In this case, the similarity measure is modified to take the absolute value of each term:

s = (1/n) * Σ_i |⟨d_i, e_(q+p_i)⟩| / (‖d_i‖ · ‖e_(q+p_i)‖)
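A NumPy sketch of the dot-product similarity described for S5, assuming direction vectors have already been extracted for the template contour points and their corresponding image points (`shape_similarity` is an illustrative name):

```python
import numpy as np

def shape_similarity(template_dirs, image_dirs, ignore_polarity=False):
    """Average normalized dot product between template contour direction
    vectors and the image direction vectors at the corresponding points.
    With ignore_polarity=True, each term's absolute value is taken so
    local contrast reversals do not penalize the score."""
    t = np.asarray(template_dirs, dtype=float)
    e = np.asarray(image_dirs, dtype=float)
    dots = np.einsum('ij,ij->i', t, e)
    norms = np.linalg.norm(t, axis=1) * np.linalg.norm(e, axis=1)
    terms = dots / np.where(norms > 0, norms, 1.0)  # zero-gradient points score 0
    if ignore_polarity:
        terms = np.abs(terms)
    return float(terms.mean())
```

Because occluded contour points contribute zero-length image gradients and hence zero terms, the average score drops in proportion to the occluded fraction, which is the occlusion behavior discussed next.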
the normalized similarity measures in the above equations will all return a number less than 1 as the score for the potential matching object. In all cases, a score of 1 indicates perfect agreement between the template and the image. In addition, this score is approximately related to how many parts of the template appear in the image. For example, if an object is 50% occluded, the (average) score will not exceed 0.5. This attribute of the match score is highly desirable because it provides the user with meaningful data that the user can select an intuitive threshold for deciding when a match should be considered as being found. When the template is matched, an image pyramid method is adopted, the image is divided into different levels of sizes, after the relevant key information on each level of the image pyramid is ensured, the reasonable range of pyramid layer number is set, then the optimal layer number set value is selected according to the matching result,
specifically, firstly calculating the appropriate layer number range of a search target and a template, then carrying out complete matching under the condition that the target characteristics on the highest-level image can be distinguished and a stopping condition is added, mapping the template result searched at the highest level downwards, simultaneously transmitting each layer of matching result to the highest level, and if the bottom-level matching result is not good, automatically reducing the layer number, and obtaining an ideal optimal layer number. And judging whether the matching is successful or not according to the matching result, if so, executing the step S6 for further optimization, and if not, returning to the step S2 again to recreate the template.
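The coarse-to-fine pyramid search can be sketched as follows — an illustrative sum-of-squared-differences matcher, not the patent's implementation; the level count and the ±2 refinement window are arbitrary choices:

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 averaging (one pyramid level)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    a = img[:h, :w].astype(float)
    return (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2]) / 4.0

def ssd_best(target, template):
    """Brute-force best (x, y) position by sum of squared differences."""
    th, tw = template.shape
    best, pos = np.inf, (0, 0)
    for y in range(target.shape[0] - th + 1):
        for x in range(target.shape[1] - tw + 1):
            d = target[y:y + th, x:x + tw] - template
            s = float((d * d).sum())
            if s < best:
                best, pos = s, (x, y)
    return pos

def pyramid_match(target, template, levels=2):
    """Match exhaustively on the coarsest level, then map the result down
    and refine within a small window on each finer level."""
    targets = [np.asarray(target, dtype=float)]
    templates = [np.asarray(template, dtype=float)]
    for _ in range(levels):
        targets.append(downsample(targets[-1]))
        templates.append(downsample(templates[-1]))
    x, y = ssd_best(targets[-1], templates[-1])        # coarse full search
    for lvl in range(levels - 1, -1, -1):              # track down the pyramid
        x, y = 2 * x, 2 * y
        tgt, tpl = targets[lvl], templates[lvl]
        th, tw = tpl.shape
        best, bx, by = np.inf, x, y
        for dy in range(-2, 3):
            for dx in range(-2, 3):
                yy, xx = y + dy, x + dx
                if 0 <= yy <= tgt.shape[0] - th and 0 <= xx <= tgt.shape[1] - tw:
                    d = tgt[yy:yy + th, xx:xx + tw] - tpl
                    s = float((d * d).sum())
                    if s < best:
                        best, bx, by = s, xx, yy
        x, y = bx, by
    return x, y
```

The exhaustive search runs only on the tiny coarsest image; every finer level inspects a constant-size window, which is what makes the pyramid scheme fast.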
S6, adjusting matching parameters such as greediness, deformation size, pyramid level, and overlap coefficient to improve the speed and accuracy of template matching. After potential match positions are found by searching through the pyramid levels, they are tracked down to the lower levels of the pyramid until the matches are located at the bottom level of the image pyramid. Once the target object is found at the bottom level, a final pose more accurate than the resolution of the discretized search space can be obtained.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (7)
1. A workpiece identification and positioning method based on template matching is characterized by comprising the following steps:
s1, collecting a standard workpiece image;
s2, creating a template image, creating an ROI template area, and creating a template by an image processing method;
s3, specifying a conversion relation, specifying coordinates of a pixel coordinate system of the template image mark points and coordinates of a world coordinate system corresponding to the coordinates, specifying a corresponding relation, and calibrating the camera;
s4, acquiring a target workpiece image, and performing preprocessing and image enhancement processing on the acquired target workpiece image;
s5, matching the shape template, matching the acquired target workpiece image with the template image, and determining the three-dimensional poses of all matched workpieces;
s6, detecting whether the matching is successful, if so, executing the step S7, otherwise, re-executing the step S2;
and S7, ending positioning.
2. The workpiece recognition and positioning method based on template matching as claimed in claim 1, wherein: blob analysis is adopted in step S2 when the ROI template region is created.
3. The workpiece recognition and positioning method based on template matching as claimed in claim 1, wherein: the specific method for specifying the transformation relationship in step S3 is to first define a three-dimensional plane, generate cross mark points by specifying the coordinates of several feature points in the pixel coordinate system, then specify the corresponding coordinate points in the world coordinate system, and thereby obtain the transformation relationship.
4. The workpiece recognition and positioning method based on template matching as claimed in claim 1, wherein: the image preprocessing in step S4 adopts dynamic thresholding, histogram equalization, median filtering, and linear gray-scale transformation.
5. The workpiece recognition and positioning method based on template matching as claimed in claim 1, wherein: in the step S5, an image pyramid method is adopted for matching.
6. The workpiece recognition and positioning method based on template matching as claimed in claim 1, wherein: step S61 is also included, which is before step S7, and step S61 is to adjust the matching parameters.
7. The workpiece recognition and positioning method based on template matching as claimed in claim 1, wherein: the ROI template region created in step S2 is of an arbitrary regular shape or an arbitrary irregular shape.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210090650.4A | 2022-01-26 | 2022-01-26 | Workpiece identification and positioning method based on template matching |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210090650.4A | 2022-01-26 | 2022-01-26 | Workpiece identification and positioning method based on template matching |

Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN114549420A | 2022-05-27 |
Family
ID=81674422
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210090650.4A | CN114549420A (en) — Withdrawn | 2022-01-26 | 2022-01-26 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN114549420A (en) |
Application timeline: 2022-01-26 — CN202210090650.4A filed; status: not active (withdrawn).

Cited By (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115049860A | 2022-06-14 | 2022-09-13 | Guangdong Tiantai Robot Co., Ltd. (广东天太机器人有限公司) | System based on feature point identification and capturing method |
| CN115049860B | 2022-06-14 | 2023-02-28 | Guangdong Tiantai Robot Co., Ltd. (广东天太机器人有限公司) | System based on feature point identification and capturing method |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| 2022-05-27 | WW01 | Invention patent application withdrawn after publication | Application publication date: 2022-05-27 |