CN112734761B - Industrial product image boundary contour extraction method

Industrial product image boundary contour extraction method

Info

Publication number
CN112734761B
CN112734761B (application number CN202110365155.5A)
Authority
CN
China
Prior art keywords
contour
image
point
control point
candidate
Prior art date
Legal status
Active
Application number
CN202110365155.5A
Other languages
Chinese (zh)
Other versions
CN112734761A (en)
Inventor
张壮壮
Current Assignee
Casi Vision Technology Luoyang Co Ltd
Casi Vision Technology Beijing Co Ltd
Original Assignee
Casi Vision Technology Luoyang Co Ltd
Casi Vision Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Casi Vision Technology Luoyang Co Ltd, Casi Vision Technology Beijing Co Ltd filed Critical Casi Vision Technology Luoyang Co Ltd
Priority to CN202110365155.5A
Publication of CN112734761A
Application granted
Publication of CN112734761B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0004: Industrial image inspection
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/13: Edge detection
    • G06T 7/136: Segmentation; Edge detection involving thresholding
    • G06T 7/60: Analysis of geometric attributes
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements using pattern recognition or machine learning
    • G06V 10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/752: Contour matching
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30108: Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an industrial product image boundary contour extraction method comprising the following steps. Step 1: inputting a reference image and extracting a reference contour. Step 2: obtaining a coordinate transformation matrix. Step 3: registering the reference image to the image to be inspected. Step 4: sampling contour sampling points. Step 5: calculating an approximation of the normal direction at each point. Step 6: searching for the corresponding strongest response point. Step 7: screening the candidate target control points. Step 8: calculating the deformed positions of all points on the reference contour and connecting the deformed contour sampling points to obtain the deformed contour. The continuity of the extracted contour is guaranteed; the method is simple in principle, robust, accurate, and computationally efficient; and it is highly general, widely applicable to boundary contour extraction for various industrial product images rather than to contour extraction for one specified product.

Description

Industrial product image boundary contour extraction method
Technical Field
The invention relates to the technical field of computer image preprocessing, and in particular to a method for extracting the region boundary contour of an industrial product image based on a reference contour and contour deformation.
Background
Contour extraction can be applied to tasks such as accurate ROI selection, contour-based defect detection, target tracking and recognition, and stroke generation, and is a fundamental core problem in image processing. ROI stands for region of interest: in machine vision and image processing, the region to be processed is delineated from the image as a box, circle, ellipse, irregular polygon, or similar shape. Machine vision software such as Halcon, OpenCV, and Matlab commonly provides operators and functions for obtaining an ROI before further processing of the image.
Currently mainstream contour extraction techniques can be classified into methods based on regional property differences, edge detection, graph cuts, active contours, and learning.
Contour extraction based on regional property differences segments the foreground and background as a whole by exploiting differences in gray level, color, texture, and similar properties between the two regions, and then extracts the contour. It is typically applied when the properties within each region are uniform, the foreground-background difference is large, and the boundary is clear and continuous; it often performs poorly under severe illumination changes, strong noise, and blurred or even broken boundaries.
Contour extraction based on edge detection exploits the strong gradients at the contour and then obtains the contour through edge linking and screening; the most representative example is the Canny edge detector. However, as the Canny detector illustrates, this approach can only guarantee local, not overall, continuity of the contour; the extracted contour is easily affected by noise and accompanied by burrs and other interference, and many irrelevant contours are extracted, requiring complicated post-processing.
Contour extraction based on graph cuts maps the image into a weighted undirected graph, with pixels as nodes and the similarities between pixels and the foreground/background and between adjacent pixels as edge weights, and separates foreground from background by solving a minimum cut. However, the segmentation result depends on the quality of the seed points chosen for the target and background, and because only low-level image features are considered, the method is easily affected by local noise and the smoothness of the contour is hard to guarantee.
The basic idea of active-contour methods is to model the target contour edge with a continuous curve: the "attraction" of surrounding pixels drives the contour to deform, while a "resistance" depending on the shape of the curve itself constrains the deformation, and the contour reaches an equilibrium position under the combined action of the two. Active-contour methods turn contour extraction into a top-down process of approximating the optimal contour and can guarantee continuity and smoothness; however, they depend on the choice of the initial contour, require repeated iteration, and take a long time, so they are unsuitable for industrial applications with strict real-time requirements.
Learning-based contour extraction essentially treats contour extraction as a pixel classification task. Although popular in academia in recent years, it still suffers from the need for large annotated datasets, high uncertainty in model training, a lack of interpretability and result controllability, poor model reusability, excessive runtime, and difficult deployment, and is therefore hard to adopt in industry.
In summary, among existing techniques only active-contour methods exploit the prior knowledge of a reference contour and can simultaneously guarantee contour continuity and smoothness. However, they require repeated iteration, are slow, and offer insufficient control over the extraction result, making them unsuitable for industrial applications with strict real-time requirements; the other mainstream methods, which use no prior information about the contour, struggle to guarantee continuity and smoothness, and their robustness and efficiency problems make them even harder to apply in scenarios demanding both real-time performance and precision. The industry therefore strongly needs an efficient, simple contour extraction method that exploits the prior information of a reference contour.
Disclosure of Invention
The technical problem addressed by the invention is that existing methods for extracting region boundary contours from industrial images often use no prior knowledge; when the image suffers from uneven brightness, noise interference, local distortion and deformation, locally broken contours, low contrast on the two sides of the contour, or inverted brightness relations across the contour, they struggle to extract a continuous actual contour or extract so many irrelevant contours that post-processing becomes a heavy burden, and they find it difficult to satisfy the combined requirements of interpretability, stability, precision, generality, and efficiency.
To solve the above technical problems, the invention provides a method for extracting the region boundary contour of an industrial product image based on a reference contour and contour deformation: the actual contour is regarded as a deformation of the reference contour, and it is fitted by extracting matching points on the reference and actual contours to control the deformation of the reference contour. The method comprises the following steps:
Step 1: inputting a reference image and extracting a reference contour;
Step 2: locating an object of interest in the image to be inspected, and obtaining a coordinate transformation matrix registering the reference image and the reference contour to the image to be inspected;
Step 3: performing coordinate transformation on the reference contour and the reference image with the coordinate transformation matrix obtained in step 2, registering them to the image to be inspected;
Step 4: sampling contour points on the registered reference contour obtained in step 3 as candidate source control points for controlling contour deformation;
Step 5: for each candidate source control point, calculating the gradient direction of the registered reference image at that point as an approximation of the normal direction of the registered reference contour there;
Step 6: for each candidate source control point, searching for the corresponding strongest response point as a candidate target control point on the actual contour of the image to be inspected;
Step 7: screening the candidate target control points, the screened source and target control points serving as the control point pairs that finally control the contour deformation;
Step 8: for each point on the registered contour, calculating a local coordinate transformation matrix from the control point pairs based on the moving least squares method to obtain the deformed position of the point, then connecting the deformed contour points to obtain the deformed contour, the contour deformation result being taken as the final contour extraction result.
Preferably, the method for extracting the reference contour in step 1 includes: thresholding the reference image in the gradient domain after filtering preprocessing, and/or applying grayscale morphological opening and closing operations to the reference image and thresholding the gradient, to obtain the reference contour.
Preferably, the locating operation in step 2 includes: extracting the matched position by shape-based template matching, NCC-based template matching, and/or feature-point extraction and matching.
Here "shape-based" refers to the contour shape of the object of interest. Shape-based matching is a generic term for a class of matching methods; the machine vision software Halcon, for example, provides a shape-based matching operator.
NCC is an abbreviation of Normalized Cross-Correlation. NCC template matching is a standard term in image processing, meaning matching and locating by searching an image for the block whose normalized cross-correlation with the template block is greatest.
Preferably, in step 4, equidistant sampling is used to sample contour points on the registered reference contour obtained in step 3.
Preferably, in step 6, for each candidate source control point on the contour, the search for the corresponding target control point on the actual contour is performed within a certain depth range along the normal direction of the contour at that point.
Preferably, the screening of candidate target control points in step 7 includes screening based on the overall consistency of gradient directions within a neighborhood and/or the average gray-level difference in the positive and negative gradient directions.
Preferably, the screening of candidate target control points in step 7 further includes: pixel-by-pixel comparison of local gradient directions, and/or use of deformation continuity, and/or prediction of target control point positions.
Preferably, contour deformation in step 8 is realized by the moving least squares method.
The beneficial effects of the invention include: the continuity of the extracted contour is guaranteed in principle; prior knowledge and improvements to existing techniques are easily integrated to suit specific situations; the method is simple in principle, robust, accurate, and computationally efficient; and it is highly general, widely applicable to region boundary contour extraction for various industrial product images rather than to contour extraction for one specified product.
Drawings
To illustrate the embodiments of the invention and the prior art more clearly, the drawings used in their description are briefly introduced below. The drawings described below represent only some of the embodiments or prior art, and those skilled in the art can obtain other similar or related drawings from them without creative effort.
FIG. 1 is a flow chart of a contour extraction method according to an embodiment of the present invention;
FIG. 2 is a diagram of a preliminary reference image according to an embodiment of the present invention;
FIG. 3 is a diagram of an image to be inspected according to an embodiment of the present invention;
FIG. 4 is a registered image map obtained after registration of FIG. 2 to FIG. 3;
FIG. 5 is a registered contour map obtained after the preliminary reference contour is registered to FIG. 3 according to an embodiment of the present invention;
FIG. 6 is a resulting graph of the registered contours of FIG. 5 overlaid on FIG. 3;
FIG. 7 is a diagram of the results of the candidate source control points shown in FIG. 2 according to one embodiment of the present invention;
FIG. 8 is a diagram of the results of the candidate source control points shown in FIG. 3 according to one embodiment of the present invention;
FIG. 9 is a diagram illustrating the result of the candidate target control points shown in FIG. 3 according to an embodiment of the present invention;
FIG. 10 is a graph of the results of the filtered target control points shown in FIG. 3 according to an embodiment of the present invention;
FIG. 11 is a result diagram of the source control points corresponding to the filtered target control points shown in FIG. 4 according to an embodiment of the present invention;
FIG. 12 is a graph of the result of the final extracted profile shown in FIG. 3 according to an embodiment of the present invention;
FIG. 13 is a diagram of the contour extraction result according to embodiment 2 of the present invention;
FIG. 14 is a diagram of the contour extraction result according to embodiment 3 of the present invention.
Detailed Description
The present invention will now be described in detail with reference to examples, to make its objects, aspects, and advantages clearer; however, the invention is not limited to these examples.
In one embodiment of the invention, a method for extracting the region boundary contour of an industrial product image based on a reference contour and contour deformation is disclosed. FIG. 1 is a flowchart of the contour extraction method according to this embodiment, which comprises the following steps:
Step 1: inputting a reference image and extracting a reference contour;
Step 2: locating an object of interest in the image to be inspected, and obtaining a coordinate transformation matrix registering the reference image and the reference contour to the image to be inspected;
Step 3: performing coordinate transformation on the reference contour and the reference image with the coordinate transformation matrix obtained in step 2, registering them to the image to be inspected;
Step 4: sampling contour points on the registered reference contour obtained in step 3 as candidate source control points for controlling contour deformation;
Step 5: for each candidate source control point, calculating the gradient direction of the registered reference image at that point as an approximation of the normal direction of the registered reference contour there;
Step 6: for each candidate source control point, searching for the corresponding strongest response point as a candidate target control point on the actual contour of the image to be inspected;
In this embodiment, the corresponding strongest response point is searched for as follows. Let the sampling point $p$ have gradient-direction unit vector $n_p$, and let the current search point $q$ have gradient-direction unit vector $n_q$ and gradient modulus $m_q$. The responsivity of the current search point to the sampling point is then defined as $r(q) = \langle n_p, n_q \rangle \cdot m_q$, where $\langle n_p, n_q \rangle$ is the vector dot product of $n_p$ and $n_q$. The point with the maximum responsivity is the strongest response point;
Step 7: screening the candidate target control points, the screened source and target control points serving as the control point pairs that finally control the contour deformation;
Step 8: for each point on the registered contour, calculating a local coordinate transformation matrix from the control point pairs based on the moving least squares method to obtain the deformed position of the point, then connecting the deformed contour points to obtain the deformed contour, the contour deformation result being taken as the final contour extraction result.
Preferably, the method for extracting the reference contour in step 1 includes: thresholding the reference image in the gradient domain after filtering preprocessing, and/or applying grayscale morphological opening and closing operations to the reference image and thresholding the gradient, to obtain the reference contour.
Preferably, the locating operation in step 2 includes: extracting the matched position by shape-based template matching, NCC-based template matching, and/or feature-point extraction and matching.
Here "shape-based" refers to the contour shape of the object of interest. Shape-based matching is a generic term for a class of matching methods; the machine vision software Halcon, for example, provides a shape-based matching operator.
NCC is an abbreviation of Normalized Cross-Correlation. NCC template matching is a standard term in image processing, meaning matching and locating by searching an image for the block whose normalized cross-correlation with the template block is greatest.
Preferably, in step 4, equidistant sampling is used to sample contour points on the registered reference contour obtained in step 3.
Preferably, in step 6, for each candidate source control point on the contour, the search for the corresponding target control point on the actual contour is performed within a certain depth range along the normal direction of the contour at that point.
Preferably, the screening of candidate target control points in step 7 includes screening based on the overall consistency of gradient directions within a neighborhood and/or the average gray-level difference in the positive and negative gradient directions.
Preferably, the screening of candidate target control points in step 7 further includes: pixel-by-pixel comparison of local gradient directions, and/or use of deformation continuity, and/or prediction of target control point positions.
Preferably, contour deformation in step 8 is realized by the moving least squares method.
In another embodiment of the present invention, a method for extracting a boundary contour of an image region of an industrial product based on a reference contour and a contour deformation is further disclosed, which includes the following steps:
Step 1: inputting a reference image, and extracting a reference contour by conventional means;
Since the reference image can be chosen as a high-quality image with little interference, the reference contour can be extracted by conventional means; a typical approach is to threshold the reference image in the gradient domain after filtering preprocessing. Because images differ greatly between scenes, the preprocessing and reference-contour extraction methods should be chosen to suit the specific scene, which the invention does not restrict.
In the industrial product image boundary contour extraction method provided by this embodiment, extracting the preliminary reference contour for the image to be inspected further comprises the following steps:
(1) performing a grayscale opening and then a closing operation on the preliminary reference image with a disc-shaped structuring element, eliminating fine scratches and noise, to obtain an intermediate image A1;
(2) extracting the x- and y-direction gradients of the intermediate image A1 with the Sobel operator and computing the gradient modulus to obtain an intermediate image A2;
(3) binarizing the intermediate image A2 with a set threshold and removing small connected components whose area is below a set value to obtain an intermediate image A3;
(4) extracting the skeleton of the intermediate image A3 to obtain the preliminary reference contour.
The specific method comprises the following steps:
FIG. 2 shows the reference image used in this embodiment, with a resolution of 1400 × 1700. The corresponding reference contour is obtained by applying grayscale morphological opening and closing operations to the reference image, computing the gradient, thresholding it, removing small connected components, and extracting the skeleton. Concretely, a grayscale opening and then a closing operation are first applied with a 9 × 9 disc-shaped structuring element to eliminate fine scratches and noise; the x- and y-direction gradients are then extracted with the Sobel operator and the gradient modulus is computed; the image is binarized with a threshold of 20; small connected components with area below 100 are removed; and the skeleton is extracted to obtain a contour of width 1. Note that the contours in the figures are dilated for better visualization.
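The pipeline above maps directly onto common OpenCV primitives. The following is a minimal illustrative sketch, not the patent's implementation: the function name is ours, and skeletonization uses cv2.ximgproc.thinning from opencv-contrib as a stand-in for the unspecified skeleton-extraction step.

```python
import cv2
import numpy as np

def extract_reference_contour(gray: np.ndarray) -> np.ndarray:
    # 9x9 disc structuring element; opening then closing suppresses
    # fine scratches and noise (intermediate image A1)
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    a1 = cv2.morphologyEx(gray, cv2.MORPH_OPEN, se)
    a1 = cv2.morphologyEx(a1, cv2.MORPH_CLOSE, se)

    # Sobel gradients in x and y, then the gradient modulus (A2)
    gx = cv2.Sobel(a1, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(a1, cv2.CV_32F, 0, 1)
    a2 = cv2.magnitude(gx, gy)

    # binarize with threshold 20, drop components with area < 100 (A3)
    a3 = (a2 >= 20).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(a3, connectivity=8)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < 100:
            a3[labels == i] = 0

    # skeletonize to a contour of width 1
    return cv2.ximgproc.thinning((a3 * 255).astype(np.uint8))
```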
Step 2: positioning an interested object in the image to be detected, and obtaining a coordinate transformation matrix which is registered from the reference image and the reference contour to the image to be detected;
the positioning operation can be realized by various methods, such as template matching based on shape, shape matching based on NCC template, positioning based on feature point extraction and matching, and the like. Different positioning methods are suitable for different scenes, and the positioning method also needs to be selected according to the specific application scene, which is not limited by the invention.
FIG. 3 shows the image to be inspected used in this embodiment, for which localization is achieved by shape-based template matching.
Locating the object of interest in the image to be inspected further includes:
(1) building a pyramid shape template of the logo from the reference image and setting the number of pyramid levels;
(2) performing template matching on the image to be inspected and searching out the position and orientation of the shape;
(3) computing the coordinate transformation matrix for registration.
The specific method comprises the following steps:
Specifically, a pyramid shape template of the apple logo is first built from the reference image, with the number of pyramid levels set to 5; template matching is then performed on the image to be inspected to find the position and orientation of the shape, from which the coordinate transformation matrix for registration is computed.
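Shape-based template matching as used here is a Halcon facility with no direct equivalent in base OpenCV, so the hedged sketch below illustrates the simpler NCC-based alternative mentioned earlier: locate the best-matching position of a template block and derive a translation-only registration matrix. Rotation and pyramid search are omitted for brevity, and all names are illustrative.

```python
import cv2
import numpy as np

def locate_by_ncc(template: np.ndarray, inspected: np.ndarray) -> np.ndarray:
    # normalized cross-correlation response map; its peak gives the
    # best match position of the template in the inspected image
    response = cv2.matchTemplate(inspected, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(response)

    # 3x3 homogeneous matrix mapping reference (template) coordinates
    # into image-to-be-inspected coordinates; translation only here
    return np.array([[1.0, 0.0, float(x)],
                     [0.0, 1.0, float(y)],
                     [0.0, 0.0, 1.0]])
```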
Step 3: performing coordinate transformation on the reference contour and the reference image with the coordinate transformation matrix obtained in step 2, registering them to the image to be inspected;
the aim of registering the reference contour to the image to be detected is to obtain the initial position and form of the region boundary contour in the image to be detected, the aim of registering the reference image to the image to be detected is to facilitate the calculation of the normal direction of the contour after registration, and the aim of locally comparing the image to be detected and the reference image is to judge whether the central pixel is a real contour point.
FIG. 4 shows the result of registering the reference image to the image to be inspected, and FIG. 5 the result of registering the reference contour to it; FIG. 6 overlays the contour on the image to be inspected. The upper half is registered well while the lower half shows a larger registration error; that is, the actual contour is locally deformed relative to the reference contour, so it cannot be obtained from the reference contour by simple scaling alone. This is why contour deformation is necessary.
Step 4: as shown in FIG. 7, sampling contour points on the registered reference contour obtained in step 3 as candidate source control points for controlling contour deformation;
the sampling of the contour points is realized because a small amount of control points are needed to control the contour deformation to achieve higher deformation accuracy, the searching of each point on the registered reference contour for the corresponding point on the deformed contour is meaningless, and the problem that different points on the contour are mapped to the same point on the deformed contour possibly occurs, which brings unexpected trouble. In addition, the amount of computation of the contour point search and the contour deformation is proportional to the number of contour points, and therefore the contour points are sampled. Equidistant sampling is recommended in operation, and the sampling density can be increased appropriately for the part with more drastic change in the contour.
FIG. 8 shows the positions of the candidate source control points on the image to be inspected, obtained by equidistant sampling with a step size of 20; these serve as the starting positions for the search for candidate target control points in step 6 below.
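As a minimal sketch of this sampling step (assuming the registered reference contour is available as an ordered (N, 2) point array, which the patent does not specify):

```python
import numpy as np

def sample_candidate_source_points(contour: np.ndarray, step: int = 20) -> np.ndarray:
    # keep every `step`-th point of the ordered contour
    return contour[::step]
```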
and 5: for each candidate source control point, calculating the gradient direction of the registered reference image at the point as an approximation of the normal direction of the contour at the point;
according to the embodiment of the invention
Figure 150782DEST_PATH_IMAGE008
The directional gradient image is obtained by acting sobel operator on the image, but then filtering the image by adopting a 5 × 5 mean filter to reduce noise interference, so that the calculation of the gradient direction is more robust. Setting pixel points
Figure 266506DEST_PATH_IMAGE009
Has a gradient of
Figure 106286DEST_PATH_IMAGE010
Then the unit vector of the gradient direction at that point is
Figure 147798DEST_PATH_IMAGE011
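A hedged sketch of this gradient computation with illustrative names (the patent gives no code); the gradient modulus is returned unfiltered, matching the note under step 6 below:

```python
import cv2
import numpy as np

def gradient_unit_vectors(image: np.ndarray):
    gx = cv2.Sobel(image, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(image, cv2.CV_32F, 0, 1)
    m_raw = cv2.magnitude(gx, gy)       # unfiltered modulus (see step 6 note)
    # 5x5 mean filtering of the gradient images makes the
    # direction estimate more robust to noise
    sx = cv2.blur(gx, (5, 5))
    sy = cv2.blur(gy, (5, 5))
    m = np.maximum(cv2.magnitude(sx, sy), 1e-6)  # avoid division by zero
    return sx / m, sy / m, m_raw        # unit direction fields nx, ny and m_raw
```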
Step 6: for each candidate source control point, searching within a certain depth range along the normal direction of the contour at that point for the corresponding strongest response point, taken as a candidate target control point on the actual contour of the image to be inspected;
the profile normal direction of the sampling point at the corresponding point on the deformed actual profile should be close to the profile normal direction at the sampling point, and the corresponding point should have a larger gradient module value. Since the gradient direction can be used as a good approximation of the normal direction of the contour, an index combining the similarity of the gradient direction of the sampling point and the current searching point and the gradient module value of the current searching point can be defined as the responsivity of the current searching point to the sampling point. Recording sampling point
Figure 440239DEST_PATH_IMAGE001
The unit vector in the gradient direction is
Figure 359653DEST_PATH_IMAGE002
Current search point
Figure 53940DEST_PATH_IMAGE003
The unit vector in the gradient direction is
Figure 705501DEST_PATH_IMAGE004
Gradient modulus of
Figure 813134DEST_PATH_IMAGE005
Then the responsivity of the current search point to the sampling point is defined as
Figure 208344DEST_PATH_IMAGE006
Wherein
Figure 757137DEST_PATH_IMAGE007
Is composed of
Figure 845178DEST_PATH_IMAGE002
And
Figure 174529DEST_PATH_IMAGE004
the vector dot product of (c). It is necessary to point outIn the embodiment of the present invention, as shown in step 5, when the gradient direction is calculated, the gradient image is subjected to 5 × 5 mean filtering to enhance robustness, and the gradient modulus is calculated
Figure 373429DEST_PATH_IMAGE005
There is no post-filtering processing due to gradient modulus
Figure 839045DEST_PATH_IMAGE005
The gradient strength at that point should be accurately reflected.
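The search itself can be sketched as follows; the depth range of 15 pixels is an assumption (the patent only says "a certain depth range"), and the unit-gradient fields nx, ny and the unfiltered modulus m_raw are those from the step-5 sketch above:

```python
import numpy as np

def strongest_response_point(p, n_p, nx, ny, m_raw, depth=15):
    # walk up to `depth` pixels both ways along the approximate normal
    # n_p at sampling point p, maximizing r(q) = <n_p, n_q> * m_q
    h, w = m_raw.shape
    best, best_r = None, -np.inf
    for t in range(-depth, depth + 1):
        x = int(round(p[0] + t * n_p[0]))
        y = int(round(p[1] + t * n_p[1]))
        if not (0 <= x < w and 0 <= y < h):
            continue
        r = (n_p[0] * nx[y, x] + n_p[1] * ny[y, x]) * m_raw[y, x]
        if r > best_r:
            best_r, best = r, (x, y)
    return best, best_r
```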
FIG. 9 shows the strongest response points found for the candidate source control points of FIG. 8 by searching along the contour normal direction. Most of them lie on the actual contour, indicating that the defined responsivity index works well, but a few do not lie on the actual contour of the image to be inspected, so screening is still required.
Step 7: screening the candidate target control points based on the overall consistency of gradient directions within a neighborhood, the average gray-level difference in the positive and negative gradient directions, and similar criteria, the screened source and target control points serving as the control point pairs that finally control the contour deformation;
because the pixels inside and outside the contour belong to the foreground and background regions, if the candidate target control point is located on the contour, the gray levels in the positive gradient direction and the negative gradient direction have a more significant difference, and if the candidate target control point is located in the foreground region or the background region, the pixels in the positive gradient direction and the negative gradient direction are both located inside the region, and the gray level difference in the two directions is smaller. Therefore, the gray scale difference in the positive and negative gradient directions can be used as a screening basis of the target control point. Similarly, the minimum gradient module value at the candidate target control point, the consistency of the gray level in the neighborhood and the gray level range of the pixel point in the neighborhood can also be used for screening the target control point. In addition, the present invention also proposes 3 very effective methods for screening the target control points, which are a pixel-by-pixel comparison method using local gradient direction, a continuity method using deformation, and a prediction method using the position of the target control point, which are described below.
The first is the method of pixel-by-pixel comparison of local gradient directions.
Screening by pixel-by-pixel comparison of local gradient directions further includes:
(1) choosing a comparison window, taking one image block on the registered reference image, centered on the current source control point, and one on the image to be inspected, centered on the candidate control point, and aligning the two;
(2) for any point in the window, computing its gradient vector on the registered reference image and on the image to be inspected, and judging from the gradient modulus whether the point is a salient point;
(3) computing the inner product of the unit gradient vectors of the registered reference image and the image to be inspected at that point as the gradient-direction consistency measure of the two images there;
(4) accumulating the consistency measures and dividing by the total number of salient points to obtain the average gradient-direction consistency measure.
The specific method comprises the following steps:
for points located on the real contour, because the contours near the corresponding points before and after the contour deformation should be relatively consistent, if local pixel-by-pixel gradient direction comparison is performed on salient points in the registered reference image and the to-be-detected image, the candidate target control points located on the real contour should have relatively high average gradient direction consistency. Therefore, the local average gradient direction consistency can also be used as a good screening index. The size of the comparison window taken in the embodiment of the invention is 11 × 11, and the judgment standard of the salient point is that the gradient modulus value is not less than 3.0. The specific method comprises the steps of firstly respectively taking the current source control point as the center and taking the candidate control point as the center on the registered reference image
Figure 97988DEST_PATH_IMAGE012
Top and to-be-inspected image
Figure 852318DEST_PATH_IMAGE013
Take one image block of 11 × 11 size and align the two. For any point in the window, calculate it at
Figure 651646DEST_PATH_IMAGE012
Upper gradient vector
Figure 359053DEST_PATH_IMAGE002
And in
Figure 788897DEST_PATH_IMAGE013
Upper gradient vector
Figure 827260DEST_PATH_IMAGE004
If the gradient modulus value
Figure 430280DEST_PATH_IMAGE014
Or
Figure 73751DEST_PATH_IMAGE015
If the value is not less than 3.0, the point is judged to be a salient point. For each salient point, calculating the vector inner product
Figure 471234DEST_PATH_IMAGE016
As
Figure 387106DEST_PATH_IMAGE017
And
Figure 465921DEST_PATH_IMAGE013
the gradient direction consistency metric at that point. The gradient direction consistency measures are accumulated and then divided by the total number of salient points to obtain the average gradient direction consistency measure. The average gradient direction consistency threshold value adopted by the embodiment of the invention is 0.8.
The second is a method that utilizes the continuity of deformation.
Screening by the continuity of deformation further includes:
(1) computing the deformation amount of each candidate target control point; the deformation amount is a signed quantity measured by the distance from the candidate target control point to the corresponding candidate source control point: its sign is positive if the candidate target control point was found by searching from the candidate source control point in the positive gradient direction, and negative otherwise;
(2) setting a threshold on the absolute value of the second derivative of the deformation amount, and removing candidate target control points exceeding the threshold together with their corresponding source control points.
The specific method comprises the following steps:
the continuity of deformation is a noticeable property in an industrial standardized production scenario. As a result of the continuity of the deformation, the second derivative of the deformation quantity should be substantially 0. Therefore, a threshold value can be set for the absolute value of the second derivative of the deformation quantity, and the candidate target control points exceeding the threshold value and the corresponding source control points are removed. The amount of distortion is a signed quantity measured by the distance of the candidate target control point from the corresponding candidate source control point. If the candidate target control point is searched for from the candidate control point toward the positive gradient direction in step 6, the sign of the deformation amount is positive, and conversely, negative. The threshold value of the absolute value of the second derivative of the deformation amount in the embodiment of the invention is set to 6.0.
The last is the method of screening by prediction of candidate target control point positions.
Screening by prediction of target control point positions further includes:
(1) computing an optimal global coordinate transformation matrix from the candidate source and target control points;
(2) transforming the candidate source control points with this matrix to obtain predictions of the corresponding candidate target control point positions;
(3) computing the distance between each predicted position and the candidate target control point as the position error between the actual and predicted positions;
(4) rejecting candidate control point pairs whose error exceeds the threshold.
The specific method comprises the following steps:
since local deformation can be generally regarded as local fine tuning based on global deformation. Therefore, an optimal global coordinate transformation matrix can be calculated by using the candidate source control points and the candidate target control points, and the candidate source control points are subjected to coordinate transformation based on the optimal global coordinate transformation matrix to obtain the predicted values of the positions of the corresponding candidate target control points. And then calculating the distance between the predicted position and the candidate target control point as a position error value between the actual position and the predicted position, and rejecting the candidate control point matching point pair with the error value exceeding the threshold value. The optimal global coordinate transformation matrix of the embodiment of the invention is obtained by calculating the candidate control point matching point pair by using default parameters through a findHomography function of OpenCV4.1.0, and the position error threshold is set as 12.
In this embodiment, the pixel-by-pixel local gradient-direction comparison and the deformation-continuity method are first applied in parallel to obtain preliminarily screened candidate control point pairs, and the position-prediction method is then used for a second screening to obtain the final control point pairs. Beyond the indices and methods above, specific indices can also be designed for specific scenes to screen candidate matching pairs.
FIG. 10 shows the target control points retained by the screening, and FIG. 11 shows the positions of the corresponding source control points on the registered reference image.
Step 8: for each point on the registered contour, calculating a local coordinate transformation matrix from the control point pairs based on the moving least squares method to obtain the deformed position of the point, then connecting the deformed contour sampling points to obtain the deformed contour, the contour deformation result being taken as the final contour extraction result.
Calculating, from the final control point pairs, a local coordinate transformation matrix for each point in the set of registered-reference-contour sampling points to obtain its deformed position, and then connecting the deformed sampling points into the deformed contour to obtain the extraction result, further includes:
(1) solving the least-squares optimization objective for the preset deformation type to obtain the local coordinate transformation matrix;
(2) computing the deformed position of the contour sampling point from this matrix;
(3) connecting the deformed contour sampling points to obtain the deformed contour.
The specific method comprises the following steps:
based on the moving least square contour deformation principle, in brief, deformation is controlled through some source control points and target control points (deformed position points corresponding to the source control points), for each contour sampling point of which the deformed position is to be obtained, firstly, a local coordinate transformation matrix is obtained by solving a least square optimization objective function according to a preset deformation type (such as affine transformation, similarity transformation and rigid transformation), then, the deformed position of the contour sampling point is calculated according to the coordinate transformation matrix, and finally, the deformed contour sampling points are connected to obtain the deformed contour. Because an optimization objective function is established for each point of the position to be deformed to solve a local coordinate transformation matrix, the method is called as a 'moving least square method'. Is provided with the first
Figure 153778DEST_PATH_IMAGE018
The source control point is
Figure 722163DEST_PATH_IMAGE019
The corresponding target control point is
Figure 938381DEST_PATH_IMAGE020
For any point
Figure 820886DEST_PATH_IMAGE021
The corresponding least square optimization objective function form is as follows:
Figure 235687DEST_PATH_IMAGE022
wherein the content of the first and second substances,
Figure 240552DEST_PATH_IMAGE023
Figure 678487DEST_PATH_IMAGE024
in the form of a linear transformation matrix, the transformation matrix,
Figure 161421DEST_PATH_IMAGE025
translation) is a coordinate transformation matrix corresponding to the local deformation model;
Figure 633990DEST_PATH_IMAGE026
is composed of
Figure 544177DEST_PATH_IMAGE019
Corresponding weights: (
Figure 469408DEST_PATH_IMAGE027
The distribution of the weight is controlled in such a way that,
Figure 693716DEST_PATH_IMAGE028
a modulo operation for a vector). Visible, distance to current point
Figure 83109DEST_PATH_IMAGE021
In other words, the closer the source control point is, the higher the weight is, the smaller the position error between the corresponding expected deformed position and the preset target control point is, and the weight attenuation is rapid, and for the control point farther from the current point, the weight is approximately 0, which reflects the locality of the expected deformation model. Selected by the embodiments of the present invention
Figure 367460DEST_PATH_IMAGE027
2.0, the local deformation model adopts rigid transformation.
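For illustration, the following hedged sketch implements the moving-least-squares deformation of a single point using the affine variant, whose linear part has a closed-form weighted least-squares solution; the embodiment above uses the rigid variant, which further constrains $M$ to a rotation, but the weighting and the per-point ("moving") solve are the same. All names are illustrative.

```python
import numpy as np

def mls_affine_deform(v, src, dst, alpha=2.0, eps=1e-8):
    # weights w_i = 1 / |p_i - v|^(2*alpha); nearby control points dominate
    d2 = np.sum((src - v) ** 2, axis=1)
    w = 1.0 / np.maximum(d2 ** alpha, eps)

    # weighted centroids and centered control points
    p_star = (w[:, None] * src).sum(axis=0) / w.sum()
    q_star = (w[:, None] * dst).sum(axis=0) / w.sum()
    ph, qh = src - p_star, dst - q_star

    # closed-form weighted least-squares solution for the linear part M
    a = (w[:, None, None] * ph[:, :, None] * ph[:, None, :]).sum(axis=0)
    b = (w[:, None, None] * ph[:, :, None] * qh[:, None, :]).sum(axis=0)
    m = np.linalg.solve(a, b)
    return (v - p_star) @ m + q_star   # deformed position of v
```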
FIG. 12 shows the final contour extraction result of this embodiment, and FIGS. 13 and 14 show the results of running the method on other images to be inspected; the deformation model used in FIGS. 12, 13, and 14 is the rigid transformation model. Even under adverse conditions of very low foreground-background contrast, strongly uneven brightness within the region, and interference such as noise and scratches, the extracted contour remains continuous, smooth, and closed, and fits the real contour very well. Moreover, after localization and registration, the target control point search and the contour deformation only process the contour line and its neighborhood, so little computation is required and the method is highly efficient.
Although the present invention has been described with reference to a few embodiments, it should be understood that it is not limited to the embodiments above, and those skilled in the art can make various changes and modifications without departing from the scope of the invention.

Claims (10)

1. A method for extracting the boundary contour of an industrial product image, characterized by comprising the following steps:
step 1: inputting a preliminary reference image, and extracting a preliminary reference contour for the image to be inspected;
step 2: locating an object of interest in the image to be inspected, and obtaining a coordinate transformation matrix registering the preliminary reference image and the preliminary reference contour to the image to be inspected;
step 3: performing coordinate transformation on the preliminary reference contour and the preliminary reference image with the coordinate transformation matrix and registering them to the image to be inspected, to obtain a registered reference contour and a registered reference image;
step 4: sampling contour points on the registered reference contour to obtain candidate source control points for controlling contour deformation;
step 5: calculating a gradient vector of the registered reference image at each candidate source control point as an approximate normal vector of the registered reference contour at that point;
step 6: searching for the strongest response point corresponding to each candidate source control point along the direction indicated by the approximate normal vector, and taking it as a candidate target control point on the actual contour of the image to be inspected; binding the candidate target control points one-to-one with the candidate source control points to form mutually matched candidate control point pairs;
step 7: screening the candidate control point pairs, the screened control point pairs serving as the final control point pairs for controlling contour deformation;
step 8: sampling contour points on the registered reference contour again to obtain a set of registered reference contour sampling points; calculating, from the final control point pairs, a local coordinate transformation matrix for each point in the set to obtain the deformed position of the point; and then connecting the deformed contour points to obtain the deformed contour, thereby obtaining the contour extraction result.
2. The industrial image boundary contour extraction method as claimed in claim 1, wherein the sub-step of extracting the preliminary reference contour for the image to be inspected in step 1 further comprises: performing a thresholding operation in the gradient domain after filtering preprocessing of the preliminary reference image, to obtain the preliminary reference contour.
3. The industrial image boundary contour extraction method according to claim 1 or 2, wherein the sub-step of extracting the preliminary reference contour for the image to be inspected in step 1 further comprises: performing grayscale morphological opening and closing operations on the preliminary reference image and then a thresholding operation in the gradient domain, to obtain the preliminary reference contour.
4. The industrial image boundary contour extraction method as claimed in claim 1, wherein the sub-step of locating the object of interest in the image to be inspected in step 2 further comprises: shape-based template matching localization, NCC-template-based matching localization, and/or feature-point extraction and matching localization.
5. The industrial product image boundary contour extraction method according to claim 1, wherein the sub-step of sampling contour points on the registered reference contour in step 4 further comprises: using equidistant sampling.
6. The industrial product image boundary contour extraction method as claimed in claim 1, wherein the sub-step of searching for the strongest response point corresponding to each candidate source control point in step 6 further comprises: for each candidate source control point on the contour, searching for the corresponding target control point on the actual contour along the normal direction of the contour at that point.
7. The industrial product image boundary contour extraction method according to claim 1, wherein the sub-step of screening the candidate target control points in step 7 further comprises: screening based on the overall consistency of gradient directions within a neighborhood and/or screening based on the average gray-level difference in the positive and negative gradient directions.
8. The industrial product image boundary contour extraction method according to claim 1, wherein the sub-step of screening the candidate target control points in step 7 further comprises: screening by pixel-by-pixel comparison of local gradient directions, screening by the continuity of deformation, and/or screening by prediction of target control point positions.
9. The industrial product image boundary contour extraction method as claimed in claim 1, wherein the sub-step of calculating the local coordinate transformation matrix of each point on the registered contour from the control point pairs in step 8 further comprises: calculating the local coordinate transformation matrix of each point on the registered contour based on the moving least squares method.
10. The method for extracting the boundary contour of an industrial image as claimed in claim 1, wherein extracting the preliminary reference contour for the image to be inspected in step 1 further comprises:
firstly, carrying out gray scale opening operation and closing operation on the preliminary reference image by using a disc-shaped structural element in sequence, and eliminating fine scratches and noise interference to obtain an intermediate image A1;
secondly, extracting gradients in the x direction and the y direction of the intermediate image A1 by using a sobel operator and calculating a gradient module value to obtain an intermediate image A2;
thirdly, binarizing the intermediate image A2 by using a set threshold value, and eliminating a small connected domain with the area smaller than the set area to obtain an intermediate image A3;
finally, a skeleton is extracted from the intermediate image A3 to obtain a preliminary reference contour.
CN202110365155.5A 2021-04-06 2021-04-06 Industrial product image boundary contour extraction method Active CN112734761B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110365155.5A CN112734761B (en) 2021-04-06 2021-04-06 Industrial product image boundary contour extraction method


Publications (2)

Publication Number Publication Date
CN112734761A (en) 2021-04-30
CN112734761B (en) 2021-07-02

Family

ID=75596504

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110365155.5A Active CN112734761B (en) 2021-04-06 2021-04-06 Industrial product image boundary contour extraction method

Country Status (1)

Country Link
CN (1) CN112734761B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113362440B (en) * 2021-06-29 2023-05-26 成都数字天空科技有限公司 Material map acquisition method and device, electronic equipment and storage medium
CN113808108B (en) * 2021-09-17 2023-08-01 太仓中科信息技术研究院 Visual detection method and system for defects of printing film
CN115922404B (en) * 2023-01-28 2024-04-12 中冶赛迪技术研究中心有限公司 Disassembling method, disassembling system, electronic equipment and storage medium
CN116542998B (en) * 2023-03-15 2023-11-17 锋睿领创(珠海)科技有限公司 Contour detection method, device, equipment and medium for photoetching film inductance
CN117274280A (en) * 2023-09-06 2023-12-22 强联智创(北京)科技有限公司 Method, apparatus and computer readable storage medium for vessel segmentation

Citations (3)

Publication number Priority date Publication date Assignee Title
CN101727666A (en) * 2008-11-03 2010-06-09 深圳迈瑞生物医疗电子股份有限公司 Image segmentation method and device, and method for judging image inversion and distinguishing front side and back side of sternum
CN101847258A (en) * 2009-03-26 2010-09-29 陈贤巧 Optical remote sensing image registration method
CN102201119A (en) * 2011-06-10 2011-09-28 深圳大学 Method and system for image registering based on control point unbiased transformation

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US7463773B2 (en) * 2003-11-26 2008-12-09 Drvision Technologies Llc Fast high precision matching method


Also Published As

Publication number Publication date
CN112734761A (en) 2021-04-30

Similar Documents

Publication Publication Date Title
CN112734761B (en) Industrial product image boundary contour extraction method
CN113160192B (en) Visual sense-based snow pressing vehicle appearance defect detection method and device under complex background
CN109961049B (en) Cigarette brand identification method under complex scene
CN111310558B (en) Intelligent pavement disease extraction method based on deep learning and image processing method
CN107833220B (en) Fabric defect detection method based on deep convolutional neural network and visual saliency
CN111223088B (en) Casting surface defect identification method based on deep convolutional neural network
CN106875381B (en) Mobile phone shell defect detection method based on deep learning
CN113592845A (en) Defect detection method and device for battery coating and storage medium
CN106683119B (en) Moving vehicle detection method based on aerial video image
CN113781402A (en) Method and device for detecting chip surface scratch defects and computer equipment
CN107230203B (en) Casting defect identification method based on human eye visual attention mechanism
CN108765402B (en) Non-woven fabric defect detection and classification method
CN111161222B (en) Printing roller defect detection method based on visual saliency
CN111354047B (en) Computer vision-based camera module positioning method and system
CN109978848A (en) Method based on hard exudate in multiple light courcess color constancy model inspection eye fundus image
CN104123554A (en) SIFT image characteristic extraction method based on MMTD
CN114494179A (en) Mobile phone back damage point detection method and system based on image recognition
CN111582004A (en) Target area segmentation method and device in ground image
CN110516528A (en) A kind of moving-target detection and tracking method based under movement background
CN113221881A (en) Multi-level smart phone screen defect detection method
CN110348307B (en) Path edge identification method and system for crane metal structure climbing robot
CN111915634A (en) Target object edge detection method and system based on fusion strategy
CN114332095A (en) Cell segmentation method and device based on multilayer structure and electronic equipment
CN116958837A (en) Municipal facilities fault detection system based on unmanned aerial vehicle
Xu et al. Based on improved edge detection algorithm for English text extraction and restoration from color images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant