CN117541563A - Image defect detection method, device, computer equipment and medium


Info

Publication number: CN117541563A
Application number: CN202311564656.1A
Authority: CN (China)
Prior art keywords: image, defect detection, feature point, template
Legal status: Pending (the status is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 胡涛, 赵丙坤, 林俊伍, 宋军, 林子吉, 冯康康
Current and original assignees: Anhui Key Information Technology Co ltd; Luzhou Laojiao Co Ltd; Luzhou Laojiao Brewing Co Ltd (the listed assignees may be inaccurate)
Application filed by Anhui Key Information Technology Co ltd, Luzhou Laojiao Co Ltd, and Luzhou Laojiao Brewing Co Ltd
Priority: CN202311564656.1A

Classifications

    • G06T7/0004 Industrial image inspection (G Physics · G06 Computing · G06T Image data processing or generation · G06T7/00 Image analysis · G06T7/0002 Inspection of images, e.g. flaw detection)
    • G06N3/0464 Convolutional networks [CNN, ConvNet] (G06N Computing arrangements based on specific computational models · G06N3/02 Neural networks · G06N3/04 Architecture)
    • G06N3/08 Learning methods (neural networks)
    • G06T3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/20081 Training; Learning (indexing scheme for image analysis or enhancement)
    • G06T2207/20084 Artificial neural networks [ANN] (indexing scheme for image analysis or enhancement)

Abstract

The invention relates to the field of image detection and provides an image defect detection method, an image defect detection apparatus, a computer device, and a medium. The image defect detection method comprises the following steps: acquiring at least one image set of an object to be detected, wherein each image set comprises at least one frame of a first image and is acquired by a camera; stitching the first images in each image set to obtain a second image; and performing defect detection on the second image according to a pre-constructed template image to obtain a first defect detection result. The invention realizes image defect detection and improves detection accuracy.

Description

Image defect detection method, device, computer equipment and medium
Technical Field
The present invention relates to the field of image detection, and in particular to an image defect detection method, apparatus, computer device, and medium.
Background
Paper packaging is ubiquitous in daily life, serving mainly to protect goods and display advertising. Cartons vary in material, strength, size, and surface print pattern, and quality requirements for surface pattern printing are increasingly high. Carton printing is characterized by high speed, large format, complex patterns, unfixed poses, and incomplete waste removal; these difficulties have constrained the development of online detection of carton printing defects.
Disclosure of Invention
In order to realize defect detection of an image and improve detection accuracy, the invention provides an image defect detection method, an image defect detection device, computer equipment and a medium.
In a first aspect, the present invention provides an image defect detection method, where the method includes:
acquiring at least one image set of an object to be detected, wherein each image set comprises at least one frame of a first image and is acquired by a camera;
stitching the first images in each image set to obtain a second image;
and performing defect detection on the second image according to a pre-constructed template image to obtain a first defect detection result.
According to the method, multiple frames of images of the object to be detected are acquired by the cameras, the acquired frames are stitched to obtain the second image, and the template image is compared with the second image to obtain the defect detection result for the object to be detected.
In an alternative embodiment, stitching the first images in each image set to obtain a second image includes:
stitching the first images with the same frame number to obtain at least one third image;
obtaining a mask of each third image;
connecting the masks to obtain a fourth image;
acquiring the overlapping areas in the fourth image;
and removing the overlapping areas from the fourth image to obtain the second image.
Through this embodiment, the images acquired by the cameras are stitched to obtain the second image of the object to be detected. Because the second image is formed by stitching the images acquired by multiple cameras, it describes the image information of the object to be detected more accurately and comprehensively.
In an alternative embodiment, performing defect detection on the second image according to the pre-constructed template image to obtain a first defect detection result includes:
acquiring a plurality of first feature points of the template image and the description vector corresponding to each first feature point, and a plurality of second feature points of the second image and the description vector corresponding to each second feature point;
matching each first feature point with each second feature point to obtain a plurality of first matching pairs;
completing the registration of the template image and the second image according to each first matching pair, the description vector corresponding to the first feature point in each first matching pair, and the description vector corresponding to the second feature point in each first matching pair;
and obtaining a first defect detection result according to the registered template image, the registered second image, and a pre-constructed twin convolutional neural network.
Considering that the object to be detected may shake and shift position during conveying, in this embodiment the template image is first registered with the second image, and defect detection is then performed on the second image against the template image.
In an alternative embodiment, acquiring a plurality of first feature points of the template image and a plurality of second feature points of the second image includes:
acquiring a plurality of third feature points of the template image and the response intensity of each third feature point;
screening the third feature points according to their response intensities to obtain the first feature points;
acquiring a plurality of fourth feature points of the second image and the response intensity of each fourth feature point;
and screening the fourth feature points according to their response intensities to obtain the second feature points.
In this embodiment, the response intensity of a feature point characterizes the strength of the feature information in the image. Screening the feature points of the template image and of the second image by response intensity retains the points carrying strong image feature information, which makes the registration of the template image and the second image more accurate; at the same time, screening the feature points reduces the amount of computation and improves computational efficiency.
In an alternative embodiment, acquiring a plurality of first feature points of the template image and a plurality of second feature points of the second image includes:
acquiring a plurality of third feature points of the template image and the coordinate information of each third feature point;
retaining only one third feature point within each first preset neighborhood range, and taking the retained third feature points as the first feature points;
acquiring a plurality of fourth feature points of the second image and the coordinate information of each fourth feature point;
and retaining only one fourth feature point within each second preset neighborhood range, and taking the retained fourth feature points as the second feature points.
This embodiment distributes the feature points of the template image and of the second image uniformly over the whole image so that they comprehensively reflect its overall characteristics, preventing the feature points from clustering in strongly textured regions while weakly textured regions are ignored.
In an alternative embodiment, completing the registration of the template image and the second image according to each first matching pair, the description vector corresponding to the first feature point in each first matching pair, and the description vector corresponding to the second feature point in each first matching pair includes:
for each first matching pair, calculating the Euclidean distance between the description vector corresponding to the first feature point and the description vector corresponding to the second feature point;
taking the first matching pairs whose Euclidean distance is smaller than a preset threshold as second matching pairs;
calculating an affine transformation matrix between the template image and the second image according to the first and second feature points in each second matching pair;
and completing the registration of the template image and the second image according to the affine transformation matrix.
In an alternative embodiment, obtaining a first defect detection result according to the registered template image, the registered second image, and the pre-constructed twin convolutional neural network includes:
dividing the registered template image and the registered second image into blocks to obtain a processed template image and a processed second image;
inputting the processed template image and the processed second image into the pre-constructed twin convolutional neural network to obtain a second defect detection result;
and when the second defect detection result indicates that the second image has defects, inputting the second image into a pre-constructed classification convolutional neural network to obtain the first defect detection result.
Through this embodiment, the advantage of the twin convolutional neural network is exploited: it can learn, identify, and compare content differences between the image of the object to be detected and the template image, so using it for image defect detection gives better robustness. At the same time, printing is subject to various interferences, such as paper scraps that are not removed cleanly and small black spots on the paper surface that are not oil stains, and local offsets introduced during imaging also have an effect; all of these can disturb printing defect detection to some degree. It is therefore necessary to use the classification convolutional neural network to make a secondary judgment on each printing defect detection result, which greatly reduces the false detection rate and improves detection precision.
In a second aspect, the present invention also provides an image defect detecting apparatus, including:
the acquisition module is used for acquiring at least one group of image sets of the object to be detected, wherein the group of image sets comprises at least one frame of first image, and the group of image sets is acquired through a camera;
the splicing module is used for splicing the first images in each image set to obtain a second image;
and the detection module is used for carrying out defect detection on the second image according to the pre-constructed template image to obtain a first defect detection result.
According to the device, multiple frames of images of the object to be detected are acquired by the cameras, the acquired frames are stitched to obtain the second image, and the template image is compared with the second image to obtain the defect detection result for the object to be detected.
In a third aspect, the present invention also provides a computer device, including a memory and a processor, where the memory and the processor are communicatively connected to each other, and the memory stores computer instructions, and the processor executes the computer instructions, thereby executing the steps of the image defect detection method according to the first aspect or any implementation manner of the first aspect.
In a fourth aspect, the present invention also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor implements the steps of the image defect detection method of the first aspect or any implementation of the first aspect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of an image defect detection method according to an exemplary embodiment;
fig. 2 is a schematic diagram of an image defect detecting apparatus according to an exemplary embodiment;
fig. 3 is a schematic diagram of a hardware structure of a computer device according to an exemplary embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are some, but not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
In addition, the technical features of the different embodiments of the present invention described below may be combined with each other as long as they do not collide with each other.
In order to realize defect detection of an image and improve detection accuracy, the invention provides an image defect detection method, an image defect detection device, computer equipment and a medium.
Fig. 1 is a flowchart of an image defect detection method according to an exemplary embodiment. As shown in Fig. 1, the image defect detection method includes the following steps S101 to S103.
Step S101: at least one group of image sets of an object to be detected is acquired, wherein the image sets comprise at least one frame of first image, and the image sets are acquired through a camera.
In an alternative embodiment, the object to be detected may be a moving paper product, such as a carton on a conveyor belt, or a paper product in a stationary state; this is not particularly limited herein.
In an alternative embodiment, the camera may be a line camera, or may be another type of camera, which is not particularly limited herein. The number of cameras can be set according to actual situations.
In an alternative embodiment, the position of each camera may be set according to the motion state of the object to be detected. Illustratively, the target surfaces of the cameras are positioned on the same straight line and are perpendicular to the movement direction of the object to be detected. When the object to be detected is an unfolded carton, four horizontally mounted line cameras can be used to acquire the unfolded carton images.
In an alternative embodiment, each camera has been position-calibrated. Position calibration means that the cameras are aligned on the same straight line, so that the starting lines of the images of different areas of the same carton captured by the cameras are essentially consistent, which facilitates subsequent stitching. Position calibration also ensures that the sensor direction of each camera is perpendicular to the direction of motion of the carton.
In an alternative embodiment, each camera has been distortion-corrected. Distortion correction addresses the image distortion caused by the camera lens itself; without it, the pixel accuracy at the center and at the edges of each image would differ, deforming the captured image.
Step S102: and splicing the first images in each image set to obtain a second image.
In an alternative embodiment, the first images with the same frame number in each image set are stitched to obtain, for each frame number, a single image covering all cameras; then the masks of these same-frame-number images are obtained and connected in frame-number order to obtain a complete image of the object to be detected; finally, the regions captured redundantly by adjacent cameras are removed from the complete image to obtain the second image.
Step S103: and performing defect detection on the second image according to the pre-constructed template image to obtain a first defect detection result.
In an alternative embodiment, the defect detection of the second image refers to detecting whether there is an image printing defect in the second image, such as ink, rocker, offset, reprint, greasy dirt, breakage, and the like, which is not particularly limited herein.
In an alternative embodiment, the template image is an image without image printing defects.
In an alternative embodiment, the template image and the second image may be registered, and the registered template image and second image are input into a pre-constructed machine learning algorithm, so as to obtain the first defect detection result.
According to the method, multiple frames of images of the object to be detected are acquired by the cameras, the acquired frames are stitched to obtain the second image, and the template image is compared with the second image to obtain the defect detection result for the object to be detected.
In an example, in the above step S102, the first image in each image set is stitched by:
first, the first images with the same frame number are spliced to obtain at least one third image.
In an alternative embodiment, when the cameras are mounted horizontally, the first images with the same frame number are stitched horizontally into a wide image whose height equals the pixel height H of a single camera frame and whose width is N×W, where W is the pixel width of a single camera frame and N is the number of cameras. Likewise, when the cameras are mounted vertically, the first images with the same frame number are stitched vertically.
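The same-frame stitch described above can be sketched in NumPy. This is a minimal illustration, not the patent's implementation; the camera count and frame size are made up:

```python
import numpy as np

def stitch_same_frame(frames):
    """Horizontally concatenate the first images that share a frame number.

    frames: list of N arrays of shape (H, W), one per camera, same height H.
    Returns an array of shape (H, N * W): the 'third image' for that frame.
    """
    heights = {f.shape[0] for f in frames}
    # Position-calibrated cameras share a start line, so heights must agree.
    assert len(heights) == 1, "all camera frames must have the same height"
    return np.hstack(frames)

# Illustrative use: 4 cameras, each frame 512 pixels high and 256 wide.
cams = [np.full((512, 256), i, dtype=np.uint8) for i in range(4)]
wide = stitch_same_frame(cams)  # shape (512, 1024)
```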
Next, masks of the respective third images are acquired.
In an alternative embodiment, because the pattern in the image is relatively complex, conventional binary segmentation is no longer applicable. In the embodiment of the invention, the target area and the background area in the third image are therefore separated by a segmentation convolutional neural network, and the mask of the target area in the third image is extracted. The target area is the pattern on the object to be detected.
And connecting the masks to obtain a fourth image.
In an alternative embodiment, a connected domain detection algorithm based on run-length increase may be used to connect the masks of the target areas of the continuous frames, so as to obtain a fourth image.
Then, the overlapping areas in the fourth image are acquired.
And finally, removing the overlapping area in the fourth image to obtain a second image.
In an alternative embodiment, the two partial images at the overlapping position of two adjacent camera fields of view are extracted from the fourth image, their relative offset is computed with an edge matching algorithm, and the overlapping area and the displacement to be cropped are calculated; the overlapping area is then removed from the fourth image to obtain the second image (the image to be detected).
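The offset estimation between adjacent camera views can be sketched as follows. This is a simplified stand-in for the patent's edge matching algorithm, correlating 1-D gradient profiles of the two strips; the strip contents and the injected shift are synthetic:

```python
import numpy as np

def estimate_overlap_shift(strip_a, strip_b):
    """Estimate the horizontal shift between two partial images taken from
    the overlap of adjacent camera fields of view.

    Builds a 1-D vertical-edge profile per strip (column-wise mean absolute
    horizontal gradient) and correlates the profiles. Returns the lag k such
    that strip_a's profile shifted left by k aligns with strip_b's.
    """
    prof_a = np.abs(np.diff(strip_a.astype(float), axis=1)).mean(axis=0)
    prof_b = np.abs(np.diff(strip_b.astype(float), axis=1)).mean(axis=0)
    prof_a -= prof_a.mean()
    prof_b -= prof_b.mean()
    corr = np.correlate(prof_a, prof_b, mode="full")
    return int(corr.argmax()) - (len(prof_b) - 1)

# Synthetic check: strip_b views the scene 5 columns to the right of strip_a.
rng = np.random.default_rng(0)
base = rng.integers(0, 255, size=(64, 120)).astype(np.uint8)
shift = estimate_overlap_shift(base[:, 0:80], base[:, 5:85])
```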
In the embodiment of the invention, the images acquired by the cameras are stitched to obtain the second image of the object to be detected. Because the second image is formed by stitching the images acquired by multiple cameras, it describes the image information of the object to be detected more accurately and comprehensively.
In an example, in the above step S103, defect detection is performed on the second image by the following steps:
step a1: and acquiring a plurality of first feature points of the template image and description vectors corresponding to the first feature points, and a plurality of second feature points of the second image and description vectors corresponding to the second feature points.
In an alternative embodiment, a feature point detection algorithm may be used to obtain a first feature point of the template image and a description vector corresponding to each first feature point, and a second feature point of the second image and a description vector corresponding to each second feature point.
Step a2: and matching each first characteristic point with each second characteristic point to obtain a plurality of first matching pairs.
In an alternative embodiment, a first matching pair includes a first feature point and a second feature point.
Step a3: and according to each first matching pair, the description vector corresponding to the first characteristic point in each first matching pair and the description vector corresponding to the second characteristic point in each first matching pair, completing the registration of the template image and the second image.
Step a4: and obtaining a first defect detection result according to the template image after registration, the second image after registration and the pre-constructed twin convolutional neural network.
In an alternative embodiment, the registered template image and the registered second image may each be divided into blocks, yielding a processed template image and a processed second image, that is, template image sub-images and second image sub-images, which are then input into the twin convolutional neural network. Specifically, the template image and the second image are each divided into m×n sub-images of size defw×defw (m is the number of blocks in the horizontal direction, n is the number of blocks in the vertical direction, and defw is the pixel side length of a sub-image); the template image sub-images correspond one-to-one to the second image sub-images, and each pair is stitched horizontally to form a comparison image. When the length or width of a template image sub-image and second image sub-image is smaller than defw pixels, the image edges are padded with black to extend them to defw.
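The block division with black padding can be sketched like this. It is a hedged illustration: the value of defw, the image size, and the row-major tile order are assumptions, and the horizontal pairing of sub-images into comparison images is omitted:

```python
import numpy as np

def tile_image(img, defw=128):
    """Split img into defw x defw sub-blocks, padding edge blocks with
    black (zeros) so every sub-block is exactly defw x defw.

    Returns (m, n, tiles): horizontal block count m, vertical block count n,
    and the tiles in row-major order.
    """
    h, w = img.shape[:2]
    n = (h + defw - 1) // defw   # vertical block count (ceil division)
    m = (w + defw - 1) // defw   # horizontal block count
    padded = np.zeros((n * defw, m * defw), dtype=img.dtype)
    padded[:h, :w] = img         # black fill beyond the original edges
    tiles = [padded[r * defw:(r + 1) * defw, c * defw:(c + 1) * defw]
             for r in range(n) for c in range(m)]
    return m, n, tiles

img = np.full((300, 500), 200, dtype=np.uint8)
m, n, tiles = tile_image(img, defw=128)  # m=4, n=3, 12 tiles of 128x128
```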
In an alternative embodiment, the twin convolutional neural network is trained from an annotated training set of images.
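The defining property of a twin (Siamese) network is that the same weights are applied to both inputs before the resulting embeddings are compared. As a deliberately tiny NumPy sketch, not the patent's trained network, a single fixed linear convolution stands in for the learned embedding; the kernel and threshold are illustrative:

```python
import numpy as np

def embed(img, kernel):
    """Shared-weight embedding: one 'valid' 2-D correlation. Both branches
    of the twin use the same kernel, which is the Siamese property."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(h - kh + 1):
        for j in range(w - kw + 1):
            out[i, j] = float(np.sum(img[i:i + kh, j:j + kw] * kernel))
    return out

def twin_compare(template_block, test_block, kernel, thresh=0.5):
    """Return True if the two blocks differ (candidate defect)."""
    diff = np.abs(embed(template_block, kernel) - embed(test_block, kernel))
    return bool(diff.max() > thresh)

rng = np.random.default_rng(1)
kernel = rng.standard_normal((3, 3))
block = rng.uniform(0.0, 1.0, size=(16, 16))
defect = block.copy()
defect[8, 8] += 5.0                                # inject a bright spot
same = twin_compare(block, block.copy(), kernel)   # identical -> False
flag = twin_compare(block, defect, kernel)         # differs   -> True
```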
The object to be detected is unstable during conveying: the paper surface end may tilt, and the sheet may shake or rotate as a whole, which existing template matching algorithms cannot handle. The embodiment of the invention therefore uses a registration method based on feature point detection that exploits the pattern texture information of the object to be detected; the method is scale-invariant and rotation-invariant.
In an example, in the above step a1, the first feature points of the template image and the second feature points of the second image may be screened by the response intensity of the feature points:
first, a plurality of third feature points of a template image, and response intensities of the respective third feature points are acquired.
And secondly, screening each third characteristic point according to the response intensity of each third characteristic point to obtain each first characteristic point. Illustratively, the third feature points are ordered in terms of response intensity; and reserving a preset number of third characteristic points with larger response intensity, and taking the reserved third characteristic points as the first characteristic points.
Again, a plurality of fourth feature points of the second image, and the response intensity of each fourth feature point are acquired.
And finally, screening each fourth characteristic point according to the response intensity of each fourth characteristic point to obtain each second characteristic point. Illustratively, the fourth feature points are ordered in terms of response intensity; and reserving a preset number of fourth characteristic points with larger response intensity, and taking the reserved fourth characteristic points as second characteristic points.
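The response-intensity screening amounts to a sort-and-truncate, sketched here with illustrative point coordinates, responses, and preset number `keep`:

```python
import numpy as np

def keep_strongest(points, responses, keep=500):
    """Keep the `keep` feature points with the highest response intensity.

    points: (N, 2) array of (x, y); responses: (N,) array.
    """
    order = np.argsort(responses)[::-1]  # indices by descending response
    sel = order[:keep]
    return points[sel], responses[sel]

pts = np.array([[0, 0], [5, 5], [9, 1], [3, 7]])
resp = np.array([0.2, 0.9, 0.5, 0.7])
kept_pts, kept_resp = keep_strongest(pts, resp, keep=2)
# keeps (5, 5) with 0.9 and (3, 7) with 0.7
```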
In the embodiment of the invention, the response intensity of a feature point characterizes the strength of the feature information in the image. Screening the feature points of the template image and of the second image by response intensity retains the points carrying strong image feature information, which makes the registration of the template image and the second image more accurate; at the same time, screening the feature points reduces the amount of computation and improves computational efficiency.
In an example, in the above step a1, the first feature points of the template image and the second feature points of the second image may also be screened by the distribution of the feature points:
first, a plurality of third feature points of a template image and coordinate information of each third feature point are acquired.
And secondly, only one third characteristic point is reserved in each first preset neighborhood range, and the third characteristic point reserved in each first preset neighborhood range is used as the first characteristic point.
And acquiring a plurality of fourth feature points of the second image and coordinate information of each fourth feature point.
And finally, reserving only one fourth characteristic point in each second preset neighborhood range, and taking the fourth characteristic point reserved in each second preset neighborhood range as a second characteristic point.
In order to uniformly distribute the characteristic points of the template image and the characteristic points of the second image on the whole image, comprehensively reflect the overall characteristics of the image, avoid the characteristic points of the template image and the characteristic points of the second image from being intensively distributed in the region with strong textures and neglect the region with weak textures.
In an example, the response intensity of the feature points may be combined with their distribution to screen the third and fourth feature points and obtain the first and second feature points. Illustratively, for the third feature points: first, the third feature points are screened by response intensity; then, when a preset neighborhood contains several screened third feature points, only the one with the highest response intensity is retained as a first feature point.
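The combined screening can be sketched as a grid that keeps the strongest point per neighborhood cell; the cell size and sample points are illustrative, and the patent does not prescribe a grid layout:

```python
def one_per_neighborhood(points, responses, cell=32):
    """Keep at most one feature point per `cell` x `cell` neighborhood,
    preferring the highest response in each cell. Returns the kept points
    sorted by descending response."""
    best = {}
    for (x, y), r in zip(points, responses):
        key = (int(x) // cell, int(y) // cell)  # which grid cell the point falls in
        if key not in best or r > best[key][1]:
            best[key] = ((x, y), r)
    kept = sorted(best.values(), key=lambda t: -t[1])
    return [p for p, _ in kept]

pts = [(3, 4), (10, 12), (40, 5), (35, 9)]
resp = [0.3, 0.8, 0.6, 0.9]
kept = one_per_neighborhood(pts, resp, cell=32)
# two cells survive: (35, 9) with 0.9 and (10, 12) with 0.8
```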
In an example, in step a3 above, the registration of the template image with the second image is done by:
first, in each first matching pair, the euclidean distance between the description vector corresponding to the first feature point and the description vector corresponding to the second feature point is calculated.
Second, the first matching pairs whose Euclidean distance is smaller than a preset threshold are taken as second matching pairs. The preset threshold may be set according to actual situations and is not limited herein.
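The distance filtering can be sketched as follows; the descriptors and the threshold value are illustrative:

```python
import numpy as np

def filter_matches(pairs, desc_a, desc_b, thresh=0.7):
    """Keep the first matching pairs whose descriptor Euclidean distance is
    below a preset threshold; the survivors are the 'second matching pairs'.

    pairs: list of (i, j) index pairs into desc_a / desc_b.
    """
    kept = []
    for i, j in pairs:
        if np.linalg.norm(desc_a[i] - desc_b[j]) < thresh:
            kept.append((i, j))
    return kept

desc_a = np.array([[0.0, 0.0], [1.0, 1.0]])
desc_b = np.array([[0.1, 0.0], [3.0, 3.0]])
second = filter_matches([(0, 0), (1, 1)], desc_a, desc_b, thresh=0.7)
# pair (0, 0) survives (distance 0.1); pair (1, 1) is rejected
```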
Then, an affine transformation matrix between the template image and the second image is calculated from the first feature point and the second feature point in each second matching pair.
In an alternative embodiment, a random sample consensus (Random Sample Consensus, RANSAC) algorithm is used to calculate, from the matched feature-point pairs, an optimal affine transformation matrix that describes the transformation relationship between the image to be detected (the second image) and the template image.
Finally, the registration of the template image with the second image is completed according to the affine transformation matrix.
In an alternative embodiment, the template image is transformed, based on the optimal affine transformation matrix, to the same size as the image to be detected (the second image), thereby completing the printed-text registration of the image to be detected (the second image) with the template image.
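The affine estimation at the core of this registration step can be sketched as a least-squares solve; RANSAC, as in the alternative embodiment above, would wrap this solve, running it on random minimal subsets of the second matching pairs and keeping the matrix with the most inliers. The sample points below are illustrative assumptions.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2x3 affine matrix mapping src points to dst points.
    RANSAC would run this repeatedly on random minimal subsets (3 pairs)
    and keep the matrix that yields the most inliers."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2], A[0::2, 2] = src, 1.0   # x' = a*x + b*y + c
    A[1::2, 3:5], A[1::2, 5] = src, 1.0   # y' = d*x + e*y + f
    params, *_ = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)
    return params.reshape(2, 3)

# Points translated by (10, 20) recover a pure-translation matrix.
M = estimate_affine([(0, 0), (1, 0), (0, 1)],
                    [(10, 20), (11, 20), (10, 21)])
# M ≈ [[1, 0, 10], [0, 1, 20]]
```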
In an example, in step a3 above, the first defect detection result is obtained as follows.
First, the registered template image and the registered second image are each divided into blocks, yielding the processed template image and the processed second image.
Then, the processed template image and the processed second image are input into a pre-constructed twin convolutional neural network to obtain a second defect detection result.
Finally, when the second defect detection result indicates that the second image has a defect, the second image is input into a pre-constructed classification convolutional neural network to obtain the first defect detection result.
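The blocking processing can be sketched as splitting each registered image into equal-sized tiles, so that corresponding template and second-image tiles can later be fed to the twin network in pairs; the block size here is an illustrative assumption.

```python
import numpy as np

def split_blocks(image, block=64):
    """Split an H x W image into block x block tiles (incomplete
    border tiles are dropped for simplicity)."""
    h, w = image.shape[:2]
    return [image[y:y + block, x:x + block]
            for y in range(0, h - block + 1, block)
            for x in range(0, w - block + 1, block)]

img = np.zeros((128, 192), dtype=np.uint8)
tiles = split_blocks(img, block=64)
print(len(tiles), tiles[0].shape)  # 6 (64, 64)
```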
In an alternative embodiment, the defect detection result includes the defect type and the location information of the defect in the second image.
In the embodiment of the invention, the twin convolutional neural network can learn and identify the content difference information between the image of the object to be detected (the second image) and the template image, so using the twin convolutional neural network for image defect detection offers better robustness. At the same time, printing defect detection is easily affected to a certain extent by various kinds of interference during the printing process, such as unclean paper scraps and small black spots (other than oil stains) on the paper surface, as well as by local offsets caused by imaging. A secondary judgment on each printing defect detection result is therefore very necessary: the second image is judged again by the classification convolutional neural network to determine whether the detected image defect is an actual printing defect or a misjudgment caused by interference.
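A minimal twin (siamese) network of the kind described can be sketched in PyTorch as follows; the encoder architecture, channel counts, and block size are illustrative assumptions, not the patent's actual network.

```python
import torch
import torch.nn as nn

class TwinCNN(nn.Module):
    """Minimal twin network: one shared convolutional encoder applied
    to the template block and the second-image block, followed by a
    defect / no-defect head on the absolute feature difference."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, 2)  # logits: no-defect / defect

    def forward(self, template_block, test_block):
        a = self.encoder(template_block)   # shared weights for both inputs
        b = self.encoder(test_block)
        return self.head(torch.abs(a - b))

net = TwinCNN()
t = torch.zeros(4, 1, 64, 64)   # batch of template blocks
s = torch.zeros(4, 1, 64, 64)   # corresponding second-image blocks
print(net(t, s).shape)  # torch.Size([4, 2])
```

Because the encoder weights are shared, the network scores the *difference* between corresponding blocks rather than either block alone, which matches the content-difference learning described above.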
Based on the same inventive concept, an embodiment of the present invention further provides an image defect detection apparatus, as shown in fig. 2, including:
an acquisition module 201, configured to acquire at least one set of images of an object to be detected, where the set of images includes at least one frame of a first image, and the set of images is acquired by a camera; the details are described in step S101 in the above embodiments, and are not described herein.
The stitching module 202 is configured to stitch the first images in each image set to obtain a second image; the details refer to the description of step S102 in the above embodiment, and are not repeated here.
And the detection module 203 is configured to detect the defect of the second image according to the pre-constructed template image, so as to obtain a first defect detection result. The details are described in step S103 in the above embodiments, and are not described herein.
With this device, multiple frames of images of the object to be detected are acquired by the cameras, the acquired frames are stitched to obtain the second image, and the template image is compared with the second image to obtain the defect detection result for the object to be detected.
In an example, the stitching module 202 further includes:
the splicing sub-module is used for splicing the first images with the same frame number to obtain at least one frame of third image; the details are described in the above embodiments, and are not repeated here.
The first acquisition submodule is used for acquiring masks of all the third images; the details are described in the above embodiments, and are not repeated here.
The connecting sub-module is used for connecting the masks to obtain a fourth image; the details are described in the above embodiments, and are not repeated here.
The second acquisition submodule is used for acquiring the superposition area in the fourth image; the details are described in the above embodiments, and are not repeated here.
And the removing sub-module is used for removing the overlapping area in the fourth image to obtain a second image. The details are described in the above embodiments, and are not repeated here.
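The mask-based stitching performed by these sub-modules can be sketched as follows: each frame contributes pixels where its mask is set, and wherever masks overlap, only the first frame's pixels are kept (the overlap is removed). The shared coordinate frame and the sample frames and masks are illustrative assumptions.

```python
import numpy as np

def stitch_with_masks(frames, masks):
    """Compose frames onto one canvas using their masks; wherever a
    mask overlaps an already-covered region, the overlap is skipped
    so each pixel is written exactly once."""
    canvas = np.zeros_like(frames[0])
    covered = np.zeros(frames[0].shape, dtype=bool)
    for frame, mask in zip(frames, masks):
        take = mask & ~covered      # overlap region removed here
        canvas[take] = frame[take]
        covered |= mask
    return canvas

f1 = np.full((2, 4), 1); m1 = np.zeros((2, 4), bool); m1[:, :3] = True
f2 = np.full((2, 4), 2); m2 = np.zeros((2, 4), bool); m2[:, 2:] = True
print(stitch_with_masks([f1, f2], [m1, m2]))
# [[1 1 1 2]
#  [1 1 1 2]]
```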
In one example, the detection module 203 includes:
the third acquisition submodule is used for acquiring a plurality of first feature points of the template image, description vectors corresponding to the first feature points, a plurality of second feature points of the second image and description vectors corresponding to the second feature points; the details are described in the above embodiments, and are not repeated here.
The matching sub-module is used for matching each first feature point with each second feature point to obtain a plurality of first matching pairs; the details are described in the above embodiments, and are not repeated here.
The registration sub-module is used for completing registration of the template image and the second image according to each first matching pair, the description vector corresponding to the first characteristic point in each first matching pair and the description vector corresponding to the second characteristic point in each first matching pair; the details are described in the above embodiments, and are not repeated here.
The detection sub-module is used for obtaining a first defect detection result according to the template image after registration, the second image after registration and the pre-constructed twin convolutional neural network. The details are described in the above embodiments, and are not repeated here.
In an example, the third acquisition submodule includes:
a first acquisition unit configured to acquire a plurality of third feature points of the template image, and response intensities of the third feature points; the details are described in the above embodiments, and are not repeated here.
The first screening unit is used for screening each third feature point according to the response intensity of each third feature point to obtain each first feature point; the details are described in the above embodiments, and are not repeated here.
A second acquisition unit configured to acquire a plurality of fourth feature points of the second image, and response intensities of the fourth feature points; the details are described in the above embodiments, and are not repeated here.
And the second screening unit is used for screening each fourth feature point according to the response intensity of each fourth feature point to obtain each second feature point. The details are described in the above embodiments, and are not repeated here.
In an example, the third acquisition sub-module further includes:
a third acquisition unit configured to acquire a plurality of third feature points of the template image, and coordinate information of each third feature point; the details are described in the above embodiments, and are not repeated here.
The third screening unit is used for retaining only one third feature point within each first preset neighborhood range, and taking the third feature point retained in each first preset neighborhood range as a first feature point; the details are described in the above embodiments, and are not repeated here.
A fourth acquisition unit configured to acquire a plurality of fourth feature points of the second image, and coordinate information of each fourth feature point; the details are described in the above embodiments, and are not repeated here.
And the fourth screening unit is used for retaining only one fourth feature point within each second preset neighborhood range and taking the fourth feature point retained in each second preset neighborhood range as a second feature point. The details are described in the above embodiments, and are not repeated here.
In an example, the registration submodule includes:
The first calculation unit is used for calculating the Euclidean distance between the description vector corresponding to the first feature point and the description vector corresponding to the second feature point in each first matching pair; the details are described in the above embodiments, and are not repeated here.
The pairing unit is used for taking the first matching pair with the Euclidean distance smaller than a preset threshold value as a second matching pair; the details are described in the above embodiments, and are not repeated here.
A second calculation unit configured to calculate an affine transformation matrix between the template image and the second image based on the first feature point and the second feature point in each of the second matching pairs; the details are described in the above embodiments, and are not repeated here.
And the registration unit is used for completing registration of the template image and the second image according to the affine transformation matrix. The details are described in the above embodiments, and are not repeated here.
In one example, the detection submodule includes:
the blocking unit is used for respectively carrying out blocking processing on the template image after registration and the second image after registration to obtain a processed template image and a processed second image; the details are described in the above embodiments, and are not repeated here.
The first detection unit is used for inputting the processed template image and the processed second image into a pre-constructed twin convolutional neural network to obtain a second defect detection result; the details are described in the above embodiments, and are not repeated here.
And the second detection unit is used for inputting the second image into the pre-constructed classified convolutional neural network to obtain the first defect detection result when the second defect detection result is that the second image has defects. The details are described in the above embodiments, and are not repeated here.
For the specific limitations of the above device and its beneficial effects, reference may be made to the corresponding limitations of the image defect detection method, which are not repeated here. The various modules described above may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware form in, or independent of, a processor in the computer device, or may be stored in software form in a memory in the computer device, so that the processor may call and execute the operations corresponding to the above modules.
Fig. 3 is a schematic diagram of a hardware structure of a computer device according to an exemplary embodiment. As shown in fig. 3, the device includes one or more processors 310 and a memory 320, the memory 320 including persistent memory, volatile memory and a hard disk, one processor 310 being illustrated in fig. 3. The apparatus may further include: an input device 330 and an output device 340.
The processor 310, the memory 320, the input device 330, and the output device 340 may be connected by a bus or in other ways; connection by a bus is taken as an example in fig. 3.
The processor 310 may be a central processing unit (Central Processing Unit, CPU). The processor 310 may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSPs), application specific integrated circuits (Application Specific Integrated Circuit, ASICs), field programmable gate arrays (Field-Programmable Gate Array, FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or a combination of the above. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 320, which is a non-transitory computer readable storage medium, includes persistent memory, volatile memory, and hard disk, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the image defect detection method in the embodiments of the present application. The processor 310 executes various functional applications of the server and data processing, i.e., implements any of the image defect detection methods described above, by running non-transitory software programs, instructions, and modules stored in the memory 320.
Memory 320 may include a storage program area and a storage data area; the storage program area may store an operating system and an application program required for at least one function, and the storage data area may store data created according to use, and the like. In addition, memory 320 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 320 may optionally include memory located remotely from processor 310, which may be connected to the data processing device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 330 may receive input numeric or character information and generate signal inputs related to user settings and function control. The output device 340 may include a display device such as a display screen.
One or more modules are stored in the memory 320 that, when executed by the one or more processors 310, perform the method as shown in fig. 1.
The product can execute the method provided by the embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method. Technical details which are not described in detail in the present embodiment can be found in the embodiment shown in fig. 1.
The present invention also provides a non-transitory computer storage medium storing computer executable instructions that can perform the method of any of the above-described method embodiments. The storage medium may be a magnetic Disk, an optical Disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a Flash Memory (Flash Memory), a Hard Disk (HDD), or a Solid State Drive (SSD); the storage medium may also comprise a combination of memories of the kind described above.
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises an element.
The foregoing is merely exemplary of embodiments of the present invention to enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An image defect detection method, the method comprising:
acquiring at least one group of image sets of an object to be detected, wherein the group of image sets comprises at least one frame of first image, and the group of image sets is acquired through a camera;
splicing the first images in each image set to obtain a second image;
and performing defect detection on the second image according to the pre-constructed template image to obtain a first defect detection result.
2. The method of claim 1, wherein stitching the first image in each of the image sets to obtain a second image comprises:
stitching the first images with the same frame number to obtain at least one third image;
obtaining masks of the third images;
connecting the masks to obtain a fourth image;
acquiring a superposition area in the fourth image;
and in the fourth image, removing the overlapping area to obtain the second image.
3. The method of claim 1, wherein performing defect detection on the second image based on the pre-constructed template image to obtain a first defect detection result comprises:
acquiring a plurality of first feature points of the template image and description vectors corresponding to the first feature points, and a plurality of second feature points of the second image and description vectors corresponding to the second feature points;
matching each first feature point with each second feature point to obtain a plurality of first matching pairs;
completing registration of the template image with the second image according to each first matching pair, the description vector corresponding to the first feature point in each first matching pair, and the description vector corresponding to the second feature point in each first matching pair;
and obtaining the first defect detection result according to the template image after registration, the second image after registration and the pre-constructed twin convolutional neural network.
4. The method according to claim 3, wherein acquiring the plurality of first feature points of the template image and the plurality of second feature points of the second image comprises:
acquiring a plurality of third feature points of the template image and the response intensity of each third feature point;
screening each third feature point according to the response intensity of each third feature point to obtain each first feature point;
acquiring a plurality of fourth feature points of the second image and the response intensity of each fourth feature point;
and screening each fourth feature point according to the response intensity of each fourth feature point to obtain each second feature point.
5. The method according to claim 3, wherein acquiring the plurality of first feature points of the template image and the plurality of second feature points of the second image comprises:
acquiring a plurality of third feature points of the template image and the coordinate information of each third feature point;
retaining only one third feature point within each first preset neighborhood range, and taking the third feature point retained in each first preset neighborhood range as the first feature point;
acquiring a plurality of fourth feature points of the second image and the coordinate information of each fourth feature point;
and retaining only one fourth feature point within each second preset neighborhood range, and taking the fourth feature point retained in each second preset neighborhood range as the second feature point.
6. A method according to claim 3, wherein completing registration of the template image with the second image based on each of the first matching pairs, the description vector corresponding to a first feature point in each of the first matching pairs, and the description vector corresponding to a second feature point in each of the first matching pairs comprises:
in each first matching pair, calculating Euclidean distance between the description vector corresponding to the first feature point and the description vector corresponding to the second feature point;
taking the first matching pair with the Euclidean distance smaller than a preset threshold value as a second matching pair;
calculating an affine transformation matrix between the template image and the second image according to the first feature point and the second feature point in each second matching pair;
and according to the affine transformation matrix, completing registration of the template image and the second image.
7. A method according to claim 3, wherein obtaining the first defect detection result from the registered template image, the registered second image, and the pre-constructed twin convolutional neural network comprises:
performing blocking processing on the registered template image and the registered second image respectively to obtain a processed template image and a processed second image;
inputting the processed template image and the processed second image into the pre-constructed twin convolutional neural network to obtain a second defect detection result;
and when the second defect detection result indicates that the second image has a defect, inputting the second image into a pre-constructed classification convolutional neural network to obtain the first defect detection result.
8. An image defect detection apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring at least one group of image sets of the object to be detected, wherein the group of image sets comprises at least one frame of first image, and the group of image sets is acquired through a camera;
the splicing module is used for splicing the first images in each image set to obtain a second image;
and the detection module is used for carrying out defect detection on the second image according to the pre-constructed template image to obtain a first defect detection result.
9. A computer device comprising a memory and a processor, said memory and said processor being communicatively coupled to each other, said memory having stored therein computer instructions, said processor executing said computer instructions to perform the steps of the image defect detection method of any of claims 1-7.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the image defect detection method according to any one of claims 1-7.
CN202311564656.1A 2023-11-22 2023-11-22 Image defect detection method, device, computer equipment and medium Pending CN117541563A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311564656.1A CN117541563A (en) 2023-11-22 2023-11-22 Image defect detection method, device, computer equipment and medium


Publications (1)

Publication Number Publication Date
CN117541563A true CN117541563A (en) 2024-02-09

Family

ID=89785747


Country Status (1)

Country Link
CN (1) CN117541563A (en)

Similar Documents

Publication Publication Date Title
CN110189285B (en) Multi-frame image fusion method and device
CN108896278B (en) Optical filter silk-screen defect detection method and device and terminal equipment
US20110149331A1 (en) Dynamic printer modelling for output checking
US11551350B2 (en) Inspecting for a defect on a print medium with an image aligned based on an object in the image and based on vertices of the inspection target medium and the reference medium
US10715683B2 (en) Print quality diagnosis
CN102279191A (en) Detection method and apparatus for defects in periodic texture images
CN113034488B (en) Visual inspection method for ink-jet printed matter
JP5750093B2 (en) Band-based patch selection with dynamic grid
CN113066088A (en) Detection method, detection device and storage medium in industrial detection
CN112200790B (en) Cloth defect detection method, device and medium
CN114419045A (en) Method, device and equipment for detecting defects of photoetching mask plate and readable storage medium
CN103824275A (en) System and method for finding saddle point-like structures in an image and determining information from the same
CN108765424B (en) Method and apparatus for detecting stain area, analyzer, and storage medium
JP7350637B2 (en) High-speed image distortion correction for image inspection
CN117094975A (en) Method and device for detecting surface defects of steel and electronic equipment
CN101685000B (en) Computer system and method for image boundary scan
Valente et al. Print defect mapping with semantic segmentation
CN117541563A (en) Image defect detection method, device, computer equipment and medium
CN109948605B (en) Picture enhancement method and device for small target
CN115187593B (en) Screen defect detection method and device
Haik et al. A novel inspection system for variable data printing using deep learning
WO2020050828A1 (en) Optical flow maps
CN115512381A (en) Text recognition method, text recognition device, text recognition equipment, storage medium and working machine
CN113850100A (en) Method and device for correcting two-dimensional code
CN111723802A (en) AI-based two-dimensional code identification method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination