CN117078730A - Anti-protruding clamp registration method based on template matching - Google Patents

Publication number: CN117078730A (granted and published as CN117078730B)
Application number: CN202311315990.3A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: image, protruding, value, clamp, fixture
Inventors: 谢华 (Xie Hua), 邵洋 (Shao Yang), 李勇 (Li Yong)
Applicant and current assignee: Ziyang Construction Engineering Construction Co., Ltd. (the assignee and legal status listed by Google Patents are assumptions, not legal conclusions)
Priority: CN202311315990.3A, priority critical patent CN117078730B/en
Legal status: Granted; Active

Classifications

    • G06T 7/337 — Image registration using feature-based methods involving reference images or patches
    • G06T 7/13 — Edge detection
    • G06T 7/136 — Segmentation; edge detection involving thresholding
    • G06T 7/168 — Segmentation; edge detection involving transform domain methods
    • G06T 7/194 — Segmentation involving foreground-background segmentation
    • G06T 2207/20061 — Hough transform
    • G06T 2207/20164 — Salient point detection; corner detection
    • G06T 2207/30108 — Industrial image inspection
    • G06T 2207/30132 — Masonry; concrete
    • Y02P 90/30 — Computing systems specially adapted for manufacturing


Abstract

The application relates to the technical field of image processing, and in particular to an anti-protruding fixture registration method based on template matching. A surface image of a concrete anti-protruding fixture is collected; a specification measurement value of the fixture is obtained from the gradient values of the pixels in the neighborhood of each feature point of the fixture in the foreground image; the inner and outer edge vertices of the fixture in the edge image are obtained through Hough circle detection and corner detection; the adaptive size of the standard template image is obtained from the distances between the fixture's vertices combined with the specification measurement value; and the surface image is registered using the adaptively sized standard template image together with a template matching algorithm. Registration of the anti-protruding fixture based on template matching is thereby realized, and the accuracy of template matching and the registration precision of the fixture are improved while the amount of calculation is reduced.

Description

Anti-protruding clamp registration method based on template matching
Technical Field
The application relates to the technical field of image processing, in particular to an anti-protruding clamp registration method based on template matching.
Background
Registration of a concrete anti-protruding clamp means ensuring the exact position and alignment of the clamp so that the desired shape and size are obtained when the concrete is cast. Registration is an important step in ensuring the quality and accuracy of concrete structures: it helps avoid problems such as deformation, cracking, and uneven thickness of the concrete structure.
The traditional template matching method usually completes matching by sliding a fixed-size window over the whole image based on gray values or feature points. Because the window size is fixed, local image features cannot be combined with the window size, which easily causes features to be missed or saturated and ultimately leads to mismatches or low accuracy.
In summary, the application provides a registration method of an anti-protruding clamp based on template matching, which combines the size of a sliding window with local features of an image to obtain a self-adaptive sliding window, thereby realizing registration of the anti-protruding clamp and improving the accuracy of template matching.
Disclosure of Invention
In order to solve the technical problems, the application provides an anti-protruding clamp registration method based on template matching so as to solve the existing problems.
The anti-protruding clamp registration method based on template matching adopts the following technical scheme:
one embodiment of the application provides a template matching-based anti-protruding clamp registration method, which comprises the following steps:
collecting a surface image of a concrete anti-protruding clamp; and acquiring a foreground image from the gray-level image of the anti-protruding clamp using Otsu's method;
acquiring each characteristic point in the foreground image through a characteristic extraction algorithm; obtaining an average gradient value of each characteristic point according to each characteristic point neighborhood; combining the feature points pairwise to obtain feature point groups; obtaining gradient weighted distances of each feature point group according to the position distances among the feature points and the average gradient values of each feature point; obtaining the specification measurement value of the anti-protruding clamp according to the gradient weighted distance of each characteristic point group;
acquiring an edge image of the anti-protruding clamp image; obtaining the vertex of each outer side edge of the anti-protruding clamp in the edge image; acquiring each corner point in the edge image through corner point detection; obtaining the thickness of each inner side edge vertex and each anti-protruding clamp according to the distance between each corner point and each outer side edge vertex; obtaining the inner peripheral width and length of the anti-protruding clamp according to the distance between the vertexes of the inner side edges; obtaining the self-adaptive width and length of the standard template image according to the thickness, the inner peripheral width and the length of the anti-protruding clamp;
and registering the surface image of the anti-protruding fixture according to the self-adaptive length and width of the standard template image and the template matching algorithm.
Preferably, the obtaining of the foreground image from the gray-level image of the anti-protruding clamp by Otsu's method specifically includes:
obtaining the optimal segmentation threshold of the gray-level image of the anti-protruding clamp through Otsu's method; setting the gray value of every pixel whose gray value is smaller than the optimal segmentation threshold to 0, while keeping the gray value of every pixel whose gray value is larger than the threshold unchanged; and taking the resulting image as the foreground image.
Preferably, the obtaining the average gradient value of each feature point according to the neighborhood of each feature point specifically includes:
acquiring gradient values of all pixel points in a foreground image; constructing a neighborhood by taking each characteristic point as a central pixel point; and taking the average value of the gradient values of all the pixel points in the neighborhood of each characteristic point as the average gradient value of each characteristic point.
Preferably, the obtaining the gradient weighted distance of each feature point group according to the position distance between the feature points and the average gradient value of each feature point specifically includes:
calculating the average value of the average gradient values of the two feature points in each feature point group, and obtaining the position distance between the two feature points in each feature point group; calculating the sum of the average gradient values of all the feature points; calculating the ratio of the average value to the sum value; and taking the product of the ratio and the position distance as the gradient weighted distance of each characteristic point group.
Preferably, the specification measurement value of the anti-protruding fixture is: and taking the average value of the gradient weighted distances of all the characteristic point groups as the specification measurement value of the anti-protruding fixture.
Preferably, the obtaining of the vertex of each outer side edge of the anti-protruding clamp in the edge image specifically includes: obtaining each arc in the edge image through Hough circle detection; and taking the midpoint of each arc as the vertex of the corresponding outer side edge of the anti-protruding clamp.
Preferably, the thickness of each inner side edge vertex and each anti-protruding clamp is obtained according to the distance between each corner point and each outer side edge vertex, and the method specifically includes:
for each outer edge vertex, calculating the distance from each corner point to the outer edge vertex; obtaining the minimum value of all the distances, and taking the minimum value as the thickness of the vertex of the outer side edge; taking the average thickness value of all the outer side edge vertexes as the thickness of the anti-protruding clamp; and taking the corner point corresponding to the minimum value as an inner edge vertex.
Preferably, the width and length of the inner periphery of the anti-protruding clamp are obtained according to the distance between the vertices of the inner side edges, specifically:
and calculating the distance between any one inner side edge vertex and other inner side edge vertices, and taking the minimum and the second smallest distances as the inner peripheral width and the length of the anti-protruding clamp respectively.
Preferably, the self-adaptive width and length of the standard template image are obtained according to the thickness, the inner peripheral width and the length of the anti-protruding clamp, specifically:
calculating the sum of the width and the double thickness of the anti-protruding clamp; taking the product of the specification measurement value of the anti-protruding clamp and the sum value as the self-adaptive width of the standard template image; an adaptive length of a standard template image is obtained.
Preferably, the registering of the surface image of the anti-protruding fixture according to the self-adaptive length and width of the standard template image and the template matching algorithm is specifically as follows:
obtaining, through a template matching algorithm, the similarity between the standard template image and the surface image of the anti-protruding fixture at the position of maximum matching degree; presetting a similarity threshold; if the similarity at maximum matching degree is smaller than the similarity threshold, the registration of the anti-protruding clamp is considered unqualified; and if it is greater than or equal to the similarity threshold, the registration is considered qualified.
The application has at least the following beneficial effects:
according to the application, the size of the sliding window in the template matching algorithm is improved, so that the sliding window is combined with local image characteristics to obtain a self-adaptive sliding window, thus the characteristics of a target are more accurately captured, the self-adaptive sliding window has better adaptability to factors such as size change and rotation of the target, the robustness of template matching is improved, and meanwhile, the size of the self-adaptive template can reduce the search range to a neighborhood region around the target, so that the calculation amount is reduced, and the accuracy of template matching and the registration precision of an anti-protruding fixture are improved;
the application acquires the surface image of the concrete anti-protruding fixture, acquires a foreground image, and obtains the gradient weighted distance between any two characteristic points according to the gradient value change of the pixel points in the neighborhood of each characteristic point in the foreground image and the distance change between the characteristic points, thereby constructing the specification weighing value of the anti-protruding fixture; acquiring an edge image of the anti-protruding clamp; obtaining the inner and outer edge vertexes of the anti-convex clamp in the edge image through Hough circle detection and corner detection, obtaining the length and the width of the anti-convex clamp according to the distance between the vertexes, obtaining the self-adaptive size of the standard template image according to the length and the width of the anti-convex clamp and the combination specification measurement value, performing template matching according to the self-adaptive standard template image, judging whether the registration of the anti-convex clamp is qualified according to the similarity when the maximum matching degree is achieved, and improving the registration precision.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the application, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a template matching-based anti-cam clamp registration method provided by the application;
fig. 2 is a schematic diagram of steps of a registration method of an anti-protruding clamp.
Description of the embodiments
In order to further describe the technical means and effects adopted by the application to achieve the preset aim, the following detailed description is given below of the specific implementation, structure, characteristics and effects of the anti-protruding clamp registration method based on template matching according to the application with reference to the attached drawings and the preferred embodiment. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
The following specifically describes a specific scheme of the anti-protruding fixture registration method based on template matching provided by the application with reference to the accompanying drawings.
The application provides a template matching-based anti-protruding clamp registration method.
Specifically, the following anti-protruding fixture registration method based on template matching is provided, please refer to fig. 1, and the method comprises the following steps:
and S001, collecting a surface image of the concrete anti-protruding clamp.
The surface image of the concrete anti-protruding fixture is captured from a fixed overhead angle by an industrial CCD camera; the specific shooting setup may be chosen by the implementer and is not specifically limited in this embodiment. To remove speckle noise, the image is denoised by Gaussian filtering, and the corrected image is then converted to a gray-level image. Gaussian filtering and affine transformation are known techniques, and their detailed processes are not repeated.
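As an illustrative sketch of this preprocessing step (pure NumPy; the kernel size, sigma, and BT.601 luma weights are assumed parameters not fixed by the patent, and a library routine such as a Gaussian blur from an image library would normally replace the naive convolution):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # 2-D Gaussian kernel, normalized so its entries sum to 1
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def to_gray(rgb):
    # ITU-R BT.601 luma weights; rgb is an (H, W, 3) array
    return rgb @ np.array([0.299, 0.587, 0.114])

def gaussian_denoise(gray, size=5, sigma=1.0):
    # naive sliding-window convolution with edge padding (for illustration only)
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(gray, pad, mode="edge")
    out = np.empty_like(gray, dtype=float)
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out
```

A constant image passes through the filter unchanged, which is a quick sanity check that the kernel is properly normalized.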
Meanwhile, the standard template image of the existing anti-protruding fixture is obtained.
Step S002, obtaining the specification measurement value of the anti-protruding fixture through the gradient value of the pixel points in the neighborhood of the characteristic points of the anti-protruding fixture in the surface image of the anti-protruding fixture, and obtaining the self-adaptive size of the standard template image through combining the vertex of the inner edge and the vertex of the outer edge of the anti-protruding fixture with the specification measurement value.
To remove the background portion of the gray-level image of the anti-protruding clamp, the optimal segmentation threshold of the image is obtained by Otsu's method. In the gray-level image, the gray value of every pixel whose value is smaller than the optimal segmentation threshold is reset to 0, while the gray value of every pixel whose value is larger than the threshold is kept unchanged. The result is the foreground portion of the gray-level image, i.e. an image containing only the anti-protruding clamp, recorded as the foreground image.
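The Otsu segmentation described above can be sketched as follows; this is a minimal NumPy implementation of the between-class-variance criterion, not the patent's own code:

```python
import numpy as np

def otsu_threshold(gray):
    # gray: uint8 image; returns the threshold maximizing between-class variance
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # cumulative class-0 probability
    mu = np.cumsum(p * np.arange(256))   # cumulative mean
    mu_t = mu[-1]                        # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)     # empty classes contribute nothing
    return int(np.argmax(sigma_b))

def foreground(gray):
    # zero pixels below the threshold, keep the rest unchanged
    t = otsu_threshold(gray)
    out = gray.copy()
    out[gray < t] = 0
    return out
```

On a clearly bimodal image the threshold falls between the two modes, so the dark background is zeroed while the clamp's gray values survive untouched.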
Because anti-protruding clamps of different sizes and models are produced and used, a standard template image of fixed size easily causes features to be missed or useless features to interfere when template matching is performed on the clamp. The window size used in template matching is therefore adjusted to obtain an adaptive window size: the size of the standard template image is determined by the actual size of the target image, so that the features in the target image are captured more accurately during matching.
Feature extraction is performed on the foreground image with the scale-invariant feature transform algorithm (SIFT), giving a series of feature points $\{P_a\}_{a=1}^{j}$ that represent the features of the anti-protruding clamp, where $P_a$ denotes the $a$-th feature point and $j$ the total number of detected feature points. The SIFT algorithm is a well-known technique, and its specific process is not described in detail. When the anti-protruding fixture image is larger, the positional distances between feature points in the image are relatively larger, so the size of the clamp in the foreground image is first judged from the positional distances between the feature points in the foreground image, specifically:
first, in a foreground image, each feature point is constructed by taking each feature point as a central pixel pointThe neighborhood window is used as an initial neighborhood window, and it should be noted that the n value enforcer can self-useSetting, the present embodiment sets the value of n to 3. Then calculating the gradient value of each pixel point in the foreground image through a Sobel operator, so as to calculate the average gradient value of each feature point according to the gradient value of the pixel point in the initial neighborhood window of each feature point:
in the method, in the process of the application,for the average gradient value of the a-th feature point, < >>For the number of pixels in the initial neighborhood window of the a-th feature point, +.>The gradient value of the ith pixel point in the initial neighborhood window of the (a) th feature point. The faster the gray value of the pixel point in the neighborhood window changes, the larger the gradient value of the pixel point, the larger the average gradient value of the central pixel point, the more likely the feature point represented by the central pixel point is an obvious feature point, and the larger the weight occupied by the feature point in calculating other indexes.
Then, since the specific size of the anti-protruding clamp in the image is unknown and the distances between feature points reflect that size to some extent, the $j$ detected feature points are combined pairwise, each resulting pair is recorded as a feature point group, and the gradient weighted distance of each group is calculated from the positional distance between its two feature points and their average gradient values:

$$D_k=\frac{(\bar{g}_a+\bar{g}_b)/2}{\sum_{m=1}^{j}\bar{g}_m}\cdot\sqrt{(x_a-x_b)^2+(y_a-y_b)^2}$$

where $D_k$ is the gradient weighted distance of the $k$-th feature point group, $(x_a,y_a)$ and $(x_b,y_b)$ are the coordinates of the two feature points $P_a$ and $P_b$ in the $k$-th group, $\bar{g}_a$ and $\bar{g}_b$ are their average gradient values, $\bar{g}_m$ is the average gradient value of the $m$-th feature point, and $j$ is the number of feature points in the foreground image.

$\sqrt{(x_a-x_b)^2+(y_a-y_b)^2}$ is the Euclidean distance between the two feature points in the $k$-th group, and the factor in front of it is the normalization weight of that distance. The larger the Euclidean distance, the farther apart the two feature points and the larger the anti-protruding clamp; the larger the normalization weight, the more important the Euclidean distance and the stronger its data reliability.
Finally, to obtain a measure of the dimensions of the anti-protruding fixture, the specification measurement value is calculated from the gradient weighted distances of all feature point groups:

$$F=\frac{1}{K}\sum_{k=1}^{K}D_k$$

where $F$ is the specification measurement value of the anti-protruding clamp, $K$ is the number of feature point groups, and $D_k$ is the gradient weighted distance of the $k$-th group. The approximate specification of the clamp in the image is judged from the mean distance between all feature points: the larger the gradient weighted distance of each group, the farther apart the feature points in the image, the relatively larger the clamp in the image and hence the larger $F$, and therefore the larger the window in template matching, i.e. the larger the standard template image of the anti-protruding clamp.
Because the outer edges at the four vertex angles of the fixture are arc-shaped while the inner edges are right-angled, the thickness at each vertex angle of the anti-protruding clamp is obtained as follows. First, edge detection is performed on the anti-protruding fixture image with a Canny operator to obtain an edge image, and Hough circle detection is applied to the edge image to obtain the circle centers and their corresponding arcs; the centers correspond to the quarter-circle arc portions at the four vertex angles of the clamp, and the midpoint of each arc is taken as the outer-edge vertex at that vertex angle.

Meanwhile, Harris corner detection is performed on the edge image to obtain its corner points, $E$ in total. The closer a corner point is to the outer-edge vertex at a vertex angle, the more likely it is to be the inner-edge vertex there, so the thickness at each vertex angle is calculated from the distances between the corner points and the arc midpoint:

$$d_c=\min_{e\in\{1,\dots,E\}}\sqrt{(x_c-x_e)^2+(y_c-y_e)^2}$$

where $d_c$ is the thickness of the anti-protruding clamp at the $c$-th vertex angle, $\min(\cdot)$ is the minimum-value function, $(x_c,y_c)$ are the coordinates of the arc midpoint, i.e. the outer-edge vertex, at the $c$-th vertex angle, $(x_e,y_e)$ are the coordinates of the $e$-th corner point, and $E$ is the number of corner points in the edge image. The closer a corner point is to an outer-edge vertex, the more likely it is the inner-edge vertex, and the closer its distance to the outer-edge vertex is to the thickness at that vertex angle. The nearest corner point at each of the other outer-edge vertices is obtained in the same way and taken as the inner-edge vertex there, and the distance between the inner and outer vertices at the same vertex angle is taken as the local thickness. The thicknesses at the four vertex angles are averaged, and the mean is taken as the thickness $\bar{d}$ of the anti-protruding fixture in the foreground image.
From the four inner-edge vertices found above, the distances from any one vertex to the remaining three can be obtained; the smallest of these distances is the inner peripheral width of the anti-protruding clamp in the foreground image, recorded as $W_0$, and the second smallest is the inner peripheral length, recorded as $L_0$.
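A sketch of the inner-vertex, thickness, and inner width/length computations described above, assuming the arc midpoints (from Hough circle detection) and the Harris corners are already available as coordinate arrays:

```python
import numpy as np

def inner_vertices_and_thickness(arc_midpoints, corners):
    # for each outer-edge vertex (arc midpoint), the nearest Harris corner is
    # taken as the matching inner-edge vertex; that minimum distance is the
    # local wall thickness, and the mean over vertices is the clamp thickness
    inner, thick = [], []
    for o in arc_midpoints:
        dists = np.linalg.norm(corners - o, axis=1)
        e = int(np.argmin(dists))
        inner.append(corners[e])
        thick.append(dists[e])
    return np.array(inner), float(np.mean(thick))

def inner_width_length(inner):
    # distances from one inner vertex to the other three:
    # smallest = inner peripheral width W0, second smallest = length L0
    d = np.sort(np.linalg.norm(inner[1:] - inner[0], axis=1))
    return float(d[0]), float(d[1])
```

For a 10 x 6 outer rectangle whose inner corners are inset by one unit in each axis, each nearest-corner distance is sqrt(2) and the recovered inner width and length are 4 and 8.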
To guarantee high stability and a long service life, the shell of the anti-protruding clamp usually has a certain thickness, which therefore cannot be ignored during template matching. The adaptive sliding-window size of the anti-protruding clamp template, recorded as $W$ and $L$, is obtained by comprehensively analyzing the width and length of the clamp in the foreground image:

$$W=\mathrm{Norm}(F)\cdot(W_0+2\bar{d}),\qquad L=\mathrm{Norm}(F)\cdot(L_0+2\bar{d})$$

where $W$ is the width of the adaptive sliding window, $L$ is its length, $\mathrm{Norm}(\cdot)$ is a normalization function, $F$ is the specification measurement value of the anti-protruding clamp, $W_0$ and $L_0$ are the inner peripheral width and length of the clamp in the foreground image, and $\bar{d}$ is the thickness of the clamp in the foreground image. The larger the value of $F$, i.e. the closer $\mathrm{Norm}(F)$ is to 1, the larger the width or length of the adaptive sliding window; the smaller the value of $F$, i.e. the closer $\mathrm{Norm}(F)$ is to 0, the smaller the width or length; and the larger $W_0$, $L_0$, or $\bar{d}$, the larger the adaptive sliding window.
The length and width of the self-adaptive sliding window are used as the adjusted length and width of the standard template image, so as to obtain the self-adaptive size of the template image.
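A sketch of the adaptive window computation; note the text only names a generic normalization function Norm(·), so tanh below is an assumed stand-in, not the patent's choice:

```python
import numpy as np

def adaptive_window(F, W0, L0, thickness, norm=np.tanh):
    # norm() maps the specification measurement value F into (0, 1);
    # tanh is an illustrative assumption for the unspecified Norm()
    s = norm(F)
    W = s * (W0 + 2.0 * thickness)  # inner width plus the shell on both sides
    L = s * (L0 + 2.0 * thickness)
    return W, L
```

As the text requires, the window grows monotonically with F, never exceeds the physical extent W0 + 2d (resp. L0 + 2d), and preserves the clamp's aspect ratio.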
And step S003, registering the surface image of the anti-protruding clamp according to the self-adaptive size of the standard template image and a template matching algorithm.
Template matching is performed between the adjusted standard template image and the surface image of the anti-protruding clamp using a template matching algorithm, and the similarity at the position of maximum matching degree is obtained; the higher this similarity, the better the template matches the surface image. A similarity threshold α is set; its value may be chosen by the implementer, and this embodiment sets α to 0.95. If the maximum similarity is greater than or equal to 95%, the registration of the anti-protruding clamp is qualified; if it is less than 95%, the registration is unqualified. The registration method of the above steps is shown in fig. 2.
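A minimal sketch of this matching-and-threshold step, using zero-mean normalized cross-correlation as an assumed similarity measure (the text does not fix the matching metric):

```python
import numpy as np

def ncc(patch, tmpl):
    # zero-mean normalized cross-correlation, in [-1, 1]
    a = patch - patch.mean()
    b = tmpl - tmpl.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def register(image, tmpl, alpha=0.95):
    # slide the adaptively sized template over the image, keep the best
    # similarity, and compare it with the threshold alpha (0.95 here)
    th, tw = tmpl.shape
    best = -1.0
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            best = max(best, ncc(image[i:i + th, j:j + tw], tmpl))
    return best, best >= alpha
```

When the template is an exact crop of the image, the best similarity reaches 1 and the registration is judged qualified; a production system would typically use an optimized routine rather than this double loop.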
In summary, the embodiment of the application obtains the self-adaptive sliding window by improving the size of the sliding window in the template matching algorithm and combining the sliding window with the local features of the image, thereby capturing the features of the target more accurately, enabling the self-adaptive sliding window to have better adaptability to factors such as size change and rotation of the target, improving the robustness of template matching, and simultaneously, reducing the searching range to the neighborhood region around the target by the self-adaptive template size, thereby reducing the calculated amount and improving the accuracy of template matching and the registration precision of the anti-protruding fixture;
the method comprises collecting a surface image of a concrete anti-protruding fixture and obtaining a foreground image; obtaining a gradient weighted distance between any two feature points from the gradient value changes of the pixel points in the neighborhood of each feature point in the foreground image and from the distances between feature points, and constructing the specification metric value of the anti-protruding fixture from these distances; acquiring an edge image of the anti-protruding clamp; obtaining the inner and outer edge vertices of the anti-protruding clamp in the edge image through Hough circle detection and corner detection, and obtaining the length and width of the anti-protruding clamp from the distances between the vertices; obtaining the adaptive size of the standard template image from that length and width; and performing template matching with the adaptive standard template image, judging whether the registration of the anti-protruding clamp is qualified from the similarity at the maximum matching degree, thereby improving the registration precision.
It should be noted that: the sequence of the embodiments of the present application is only for description, and does not represent the advantages and disadvantages of the embodiments. And the foregoing description has been directed to specific embodiments of this specification. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment mainly describes its differences from the other embodiments.
The above embodiments are only intended to illustrate the technical solution of the present application, not to limit it; where the technical solutions described in the foregoing embodiments are modified, or some of their technical features are replaced by equivalents, such that the essence of the corresponding technical solution does not depart from the scope of the technical solutions of the embodiments of the present application, they all fall within the protection scope of the present application.

Claims (10)

1. The anti-protruding clamp registration method based on template matching is characterized by comprising the following steps of:
collecting a surface image of a concrete anti-protruding clamp; acquiring a foreground image from the gray level image of the anti-protruding clamp by using the Otsu method;
acquiring each characteristic point in the foreground image through a characteristic extraction algorithm; obtaining an average gradient value of each characteristic point according to each characteristic point neighborhood; combining the feature points pairwise to obtain feature point groups; obtaining gradient weighted distances of each feature point group according to the position distances among the feature points and the average gradient values of each feature point; obtaining the specification measurement value of the anti-protruding clamp according to the gradient weighted distance of each characteristic point group;
acquiring an edge image of the anti-protruding clamp image; obtaining the vertex of each outer side edge of the anti-protruding clamp in the edge image; acquiring each corner point in the edge image through corner point detection; obtaining each inner side edge vertex and the thickness of the anti-protruding clamp according to the distance between each corner point and each outer side edge vertex; obtaining the inner peripheral width and length of the anti-protruding clamp according to the distance between the vertices of the inner side edges; obtaining the adaptive width and length of the standard template image according to the thickness, the inner peripheral width, the length and the specification metric value of the anti-protruding clamp;
and registering the surface image of the anti-protruding fixture according to the self-adaptive length and width of the standard template image and the template matching algorithm.
2. The registration method of the anti-protruding clamp based on template matching as claimed in claim 1, wherein the acquiring of the foreground image in the gray level image of the anti-protruding clamp by the Otsu method comprises the following steps:
in the gray level image of the anti-protruding clamp, an optimal segmentation threshold is obtained by the Otsu method; the gray value of each pixel point whose gray value is smaller than the optimal segmentation threshold is set to 0, while pixel points whose gray value is greater than or equal to the threshold keep their gray value unchanged; the resulting image is taken as the foreground image.
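The foreground extraction of claim 2 can be sketched as below. Otsu's method itself (maximizing between-class variance over all candidate thresholds) is standard; the function names and the tie-breaking of equal-variance thresholds are illustrative choices, not taken from the patent.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the threshold T maximizing the between-class
    variance of the two classes {gray < T} and {gray >= T}."""
    hist = np.bincount(np.asarray(gray).ravel(), minlength=256).astype(float)
    total = hist.sum()
    levels = np.arange(256, dtype=float)
    best_T, best_var = 1, -1.0
    for T in range(1, 256):
        w0 = hist[:T].sum()          # weight of the below-threshold class
        w1 = total - w0              # weight of the rest
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:T] * hist[:T]).sum() / w0
        mu1 = (levels[T:] * hist[T:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_T = var, T
    return best_T

def foreground_image(gray):
    """Pixels below the optimal threshold become 0; others are unchanged."""
    t = otsu_threshold(gray)
    out = gray.copy()
    out[gray < t] = 0
    return out
```

On a bimodal image the threshold falls between the two modes, zeroing the dark background and keeping the bright clamp pixels unchanged, as claim 2 requires.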
3. The template matching-based anti-protrusion fixture registration method as claimed in claim 1, wherein the obtaining the average gradient value of each feature point according to each feature point neighborhood specifically comprises:
acquiring gradient values of all pixel points in a foreground image; constructing a neighborhood by taking each characteristic point as a central pixel point; and taking the average value of the gradient values of all the pixel points in the neighborhood of each characteristic point as the average gradient value of each characteristic point.
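A sketch of the average gradient of claim 3 follows. The patent does not fix the gradient operator or the neighborhood size, so a central-difference gradient (`np.gradient`) and a configurable square neighborhood are assumptions; the function name is hypothetical.

```python
import numpy as np

def average_gradient(gray, points, radius=1):
    """For each feature point (row, col), average the gradient magnitudes of
    all pixels in its (2*radius+1)^2 neighborhood, clipped at the borders.
    Central differences are assumed; the patent does not fix the operator."""
    gy, gx = np.gradient(gray.astype(float))   # per-pixel gradient components
    mag = np.hypot(gx, gy)                     # gradient magnitude map
    h, w = mag.shape
    out = []
    for (r, c) in points:
        r0, r1 = max(0, r - radius), min(h, r + radius + 1)
        c0, c1 = max(0, c - radius), min(w, c + radius + 1)
        out.append(float(mag[r0:r1, c0:c1].mean()))
    return out
```

On a uniform intensity ramp every pixel has gradient magnitude 1, so the neighborhood average at any interior feature point is 1.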
4. The template matching-based anti-protrusion fixture registration method according to claim 1, wherein the obtaining the gradient weighted distance of each feature point group according to the position distance between feature points and the average gradient value of each feature point specifically comprises:
calculating the average value of the average gradient values of the two feature points in each feature point group, and obtaining the position distance between the two feature points in each feature point group; calculating the sum of the average gradient values of all the feature points; calculating the ratio of the average value to the sum value; and taking the product of the ratio and the position distance as the gradient weighted distance of each characteristic point group.
5. The template matching-based anti-protruding fixture registration method according to claim 1, wherein the specification metric value of the anti-protruding fixture is specifically: taking the average value of the gradient weighted distances of all the feature point groups as the specification metric value of the anti-protruding fixture.
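Claims 4 and 5 together can be sketched as below; the function names are hypothetical and the point coordinates are assumed to be 2-D Euclidean.

```python
import math

def gradient_weighted_distances(points, avg_grads):
    """For every pair of feature points, weight the Euclidean distance by
    (mean of the pair's average gradients) / (sum of all average gradients),
    as in claim 4."""
    total = sum(avg_grads)
    dists = []
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(points[i], points[j])            # position distance
            w = ((avg_grads[i] + avg_grads[j]) / 2.0) / total
            dists.append(w * d)
    return dists

def specification_metric(points, avg_grads):
    """Claim 5: the mean of all gradient weighted distances."""
    dists = gradient_weighted_distances(points, avg_grads)
    return sum(dists) / len(dists)
```

With two points at distance 5 and equal average gradients, the pair weight is 0.5, so both the weighted distance and the metric come out as 2.5.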
6. The registration method of the anti-protruding fixture based on template matching as claimed in claim 1, wherein the obtaining of the vertex of each outer side edge of the anti-protruding fixture in the edge image is specifically: obtaining each arc in the edge image through Hough circle detection; and taking the middle point of each arc as the vertex of each outer side edge of the anti-protruding clamp.
7. The template matching-based anti-protrusion clamp registration method according to claim 1, wherein the obtaining each inner edge vertex and each anti-protrusion clamp thickness according to the distance between each corner point and each outer edge vertex specifically comprises:
for each outer edge vertex, calculating the distance from each corner point to the outer edge vertex; obtaining the minimum value of all the distances, and taking the minimum value as the thickness of the vertex of the outer side edge; taking the average thickness value of all the outer side edge vertexes as the thickness of the anti-protruding clamp; and taking the corner point corresponding to the minimum value as an inner edge vertex.
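Claim 7's pairing of outer vertices with corner points can be sketched as follows; the function name is hypothetical, and corner detection itself (which would supply `corners`) is out of scope here.

```python
import math

def inner_vertices_and_thickness(outer_vertices, corners):
    """For each outer side edge vertex, take the nearest corner point as the
    matching inner side edge vertex and that minimum distance as the local
    thickness; the clamp thickness is the mean of the local thicknesses."""
    inner, local_thickness = [], []
    for ov in outer_vertices:
        best = min(corners, key=lambda c: math.dist(ov, c))
        inner.append(best)
        local_thickness.append(math.dist(ov, best))
    thickness = sum(local_thickness) / len(local_thickness)
    return inner, thickness
```

For example, with outer vertices (0, 0) and (10, 0) and corners at (1, 0), (9, 0) and (5, 5), each outer vertex pairs with its nearest corner and the mean thickness is 1.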
8. The registration method of the anti-protruding fixture based on template matching according to claim 1, wherein the inner peripheral width and length of the anti-protruding fixture are obtained according to the distance between the vertices of the inner edge, specifically:
and calculating the distance between any one inner side edge vertex and other inner side edge vertices, and taking the minimum and the second smallest distances as the inner peripheral width and the length of the anti-protruding clamp respectively.
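Claim 8 can be sketched as below; the name is hypothetical. For an approximately rectangular inner contour the smallest and second-smallest distances from one vertex to the others are the two side lengths, while the largest is the diagonal.

```python
import math

def inner_width_and_length(inner_vertices):
    """From one inner side edge vertex, take the smallest and second-smallest
    distances to the other inner vertices as the inner peripheral width and
    length of the clamp."""
    v0, rest = inner_vertices[0], inner_vertices[1:]
    d = sorted(math.dist(v0, v) for v in rest)
    return d[0], d[1]  # (width, length)
```

On a 3 x 4 rectangle of inner vertices this returns (3, 4), discarding the diagonal of length 5.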
9. The registration method of the anti-protruding fixture based on template matching according to claim 1, wherein the obtaining of the adaptive width and length of the standard template image according to the thickness, the inner peripheral width, the length and the specification metric value of the anti-protruding fixture specifically comprises:
calculating the sum of the inner peripheral width and double the thickness of the anti-protruding clamp; taking the product of the specification metric value of the anti-protruding clamp and this sum as the adaptive width of the standard template image; and obtaining the adaptive length of the standard template image in the same way from the inner peripheral length and double the thickness.
10. The method for registering an anti-protruding fixture based on template matching according to claim 1, wherein the registering of the surface image of the anti-protruding fixture according to the adaptive length and width of the standard template image and the template matching algorithm specifically comprises:
obtaining the similarity of the standard template image and the surface image of the anti-protruding fixture when their matching degree is maximum through a template matching algorithm; presetting a similarity threshold; if the similarity at the maximum matching degree is smaller than the similarity threshold, the anti-protruding clamp registration is considered unqualified; if it is greater than or equal to the similarity threshold, the anti-protruding clamp registration is considered qualified.
CN202311315990.3A 2023-10-12 2023-10-12 Anti-protruding clamp registration method based on template matching Active CN117078730B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311315990.3A CN117078730B (en) 2023-10-12 2023-10-12 Anti-protruding clamp registration method based on template matching

Publications (2)

Publication Number Publication Date
CN117078730A true CN117078730A (en) 2023-11-17
CN117078730B CN117078730B (en) 2024-01-23

Family

ID=88704491

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311315990.3A Active CN117078730B (en) 2023-10-12 2023-10-12 Anti-protruding clamp registration method based on template matching

Country Status (1)

Country Link
CN (1) CN117078730B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4815215A (en) * 1987-09-10 1989-03-28 Saylor Millard P Universal holding fixture with templates for routing door and door jambs
CN102632273A (en) * 2012-05-16 2012-08-15 中铁宝桥(扬州)有限公司 Adjustable multi-point positioning hole making device and hole making method
CN103292701A (en) * 2013-06-24 2013-09-11 哈尔滨工业大学 Machine-vision-based online dimensional measurement method of precise instrument
CN103310453A (en) * 2013-06-17 2013-09-18 北京理工大学 Rapid image registration method based on sub-image corner features
US20200018994A1 (en) * 2017-01-27 2020-01-16 Carl Zeiss Ag Method, computing device, and computer program for providing a mounting edge model
CN111553927A (en) * 2020-04-24 2020-08-18 厦门云感科技有限公司 Checkerboard corner detection method, checkerboard corner detection system, computer device and storage medium
WO2021017361A1 (en) * 2019-07-31 2021-02-04 苏州中科全象智能科技有限公司 Template matching algorithm based on edge and gradient feature
CN115170669A (en) * 2022-09-05 2022-10-11 合肥安迅精密技术有限公司 Identification and positioning method and system based on edge feature point set registration and storage medium
CN116551048A (en) * 2023-02-21 2023-08-08 苏州迪纳精密设备有限公司 Geometric self-adaptive machining equipment and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
NENG WAN et al.: "An intelligent fixture design method based on smart modular fixture unit", The International Journal of Advanced Manufacturing Technology, page 2629 *
FENG Yanmin: "Application of butterfly-buckle formwork-locking fixture template in road shoulder concrete construction" (in Chinese), Equipment Management and Maintenance, pages 143-144 *

Also Published As

Publication number Publication date
CN117078730B (en) 2024-01-23

Similar Documents

Publication Publication Date Title
CN110097093B (en) Method for accurately matching heterogeneous images
CN107798326B (en) Contour vision detection method
CN106355577B (en) Rapid image matching method and system based on significant condition and global coherency
CN106709500B (en) Image feature matching method
JP5503018B2 (en) Scale space normalization technique to improve feature detection in uniform and non-uniform illumination changes
CN107369159B (en) Threshold segmentation method based on multi-factor two-dimensional gray level histogram
CN106981077B (en) Infrared image and visible light image registration method based on DCE and LSS
CN108764004B (en) Annular coding mark point decoding and identifying method based on coding ring sampling
CN108765332B (en) Ellipse search window and parameter self-adaptive non-local mean denoising method
CN107169979A (en) A kind of method for detecting image edge of improvement Canny operators
CN108399627B (en) Video inter-frame target motion estimation method and device and implementation device
CN107749071B (en) Large-distortion checkerboard image corner detection method and device
CN109063774B (en) Image tracking effect evaluation method, device and equipment and readable storage medium
CN112017223A (en) Heterologous image registration method based on improved SIFT-Delaunay
CN117058111B (en) Quality inspection method and system for automobile aluminum alloy die casting die
CN116468728B (en) Production quality detection method for hardware stamping part die
CN111199197A (en) Image extraction method and processing equipment for face recognition
CN107808165B (en) Infrared image matching method based on SUSAN corner detection
CN116664892A (en) Multi-temporal remote sensing image registration method based on cross attention and deformable convolution
CN112734816A (en) Heterogeneous image registration method based on CSS-Delaunay
CN103077528A (en) Rapid image matching method based on DCCD (Digital Current Coupling)-Laplace and SIFT (Scale Invariant Feature Transform) descriptors
CN117372432A (en) Electronic cigarette surface defect detection method and system based on image segmentation
CN104915951B (en) A kind of stippled formula DPM two-dimension code area localization methods
CN117078730B (en) Anti-protruding clamp registration method based on template matching
CN116934744B (en) MOSFET etching defect detection method based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant