CN114936997A - Detection method, detection device, electronic equipment and readable storage medium - Google Patents


Info

Publication number: CN114936997A
Authority: CN (China)
Prior art keywords: image, target, target image, area, detection
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202111601257.9A
Other languages: Chinese (zh)
Inventor: 不公告发明人 (inventor not disclosed)
Current assignee: Guangdong Lyric Robot Automation Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Guangdong Lyric Robot Intelligent Automation Co Ltd
Priority date / Filing date / Publication date: not listed (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Application filed by: Guangdong Lyric Robot Intelligent Automation Co Ltd
Priority: CN202111601257.9A
Publication: CN114936997A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0004: Industrial image inspection
    • G06T 7/001: Industrial image inspection using an image reference approach
    • G06T 3/02
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33: Determination of transform parameters for the alignment of images using feature-based methods
    • G06T 7/337: Determination of transform parameters for the alignment of images using feature-based methods involving reference images or patches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74: Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches

Abstract

The application provides a detection method, a detection apparatus, an electronic device and a readable storage medium. The detection method comprises the following steps: locating a first target area of a target image according to an image template to obtain a first coordinate system of the target image relative to the image template; locating a pixel-level region within the first target area according to a set algorithm in the first coordinate system to obtain a detection map; calculating a difference map from the detection map and the image template; and determining whether the target image has defects according to the difference map. The method first coarsely positions the target image on the basis of the image template, then finely positions each pixel point in the target image, and judges whether the target image has defects from a difference map computed over the two positioning passes. Because the difference map is derived from two rounds of positioning, it is comparatively accurate and faithful, so the defects found in the target image are likewise more accurate and real.

Description

Detection method, detection device, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of image processing, and in particular, to a detection method, an apparatus, an electronic device, and a readable storage medium.
Background
Before a computer leaves the factory, performance testing of various items is required, including keyboard character detection, which mainly checks the keyboard for missing keys, wrong keys, incomplete characters and the like. At present, keyboard characters are detected mainly by manual work, but manual detection suffers from problems such as low efficiency and low accuracy.
Disclosure of Invention
In view of the above, an object of the embodiments of the present application is to provide a detection method, a detection apparatus, an electronic device and a readable storage medium that can automatically detect whether a keyboard has defects from collected images of the keyboard, improving the efficiency and accuracy of keyboard detection.
In a first aspect, an embodiment of the present application provides a detection method, which positions a first target area of a target image according to an image template to obtain a first coordinate system of the target image relative to the image template; positioning a pixel level region in the first target region according to a set algorithm in the first coordinate system to obtain a detection graph; calculating a difference map according to the detection map and the image template; and determining whether the target image has defects or not according to the difference map.
In the implementation process, the first target area of the target image is located according to the image template to obtain the first coordinate system of the target image relative to the image template; because the target image is positioned relative to the image template, the problem of inaccurate coordinates caused by improper placement of the target image can be avoided. Further, after the first coordinate system is determined, the pixel-level region within the first target area is located, ensuring that every pixel-level region in the image is positioned relative to the first coordinate system, so that the information of the target image can be obtained accurately; finally, defect calculation is carried out according to the target image and the image template. By locating and comparing the target image against the image template to judge whether the target area of the target image has defects, keyboard defect detection is automated, and both the efficiency and the accuracy of defect detection can be improved.
With reference to the first aspect, an embodiment of the present application provides a first possible implementation manner of the first aspect, where: the positioning a first target area of a target image according to an image template to obtain a first coordinate system of the target image relative to the image template includes: calculating a first matching value of the first target area and an image template; transforming the first target area according to the first matching value to obtain a transformation coordinate; and transforming the transformation coordinates by using the reference coordinates of the image template to obtain a first coordinate system of the target image relative to the image template, wherein the reference coordinates are the coordinates of the first matching value corresponding to the image template.
In the implementation process, a first matching value with the highest matching value is obtained by calculating the first target area and the template image, and then the first target area is subjected to coordinate transformation according to the first matching value to obtain a first coordinate system of the target image. The first matching value is a result value with the highest matching degree calculated based on the image template, so that the transformation coordinate obtained by the first matching value is higher in accuracy, the accuracy of the first coordinate system is improved, and the accuracy of target image detection is enhanced.
With reference to the first possible implementation manner of the first aspect, an embodiment of the present application provides a second possible implementation manner of the first aspect, where: the transforming the first target area according to the first matching value to obtain a transformed coordinate includes: determining conversion data according to the reference coordinate and a first original coordinate, wherein the first original coordinate is a coordinate of the first matching value corresponding to the first target area; calculating a transformation matrix from the transformed data; and carrying out affine transformation on the first target area according to the transformation matrix to obtain a transformation coordinate.
In the implementation process, after the first matching value is obtained, conversion data of coordinate conversion is determined according to the coordinate corresponding to the first matching value and the reference coordinate, a conversion matrix is calculated according to the conversion data, affine transformation is carried out, and finally the conversion coordinate is obtained. The conversion data obtained by the first matching value and the reference coordinate can accurately reflect the position offset of the target image relative to the image template, so that the conversion matrix obtained by calculation of the conversion data and the conversion coordinate obtained by affine transformation are the conversion matrix and the conversion coordinate of the target image relative to the image template, which are obtained on the basis of the image template according to the conversion data, and the accuracy of defect detection is further improved.
With reference to the second possible implementation manner of the first aspect, this application example provides a third possible implementation manner of the first aspect, where the calculating a first matching value between the first target region and the image template includes: carrying out normalized correlation matching on the first target area and the image template to obtain a difference value between the target image and the image template; a first match value is determined from the difference value.
In the implementation process, correlation matching is performed through normalization to obtain a difference value and a first matching value of the target image and the image template, and the difference value and the first matching value are obtained based on the difference between the target image and the image template, so that the first matching value obtained through the difference value can accurately determine the matching value with the highest matching degree between the target image and the image template, the target image is conveniently converted by taking the image template as a reference, and the accuracy of coordinate conversion is improved.
With reference to the third possible implementation manner of the first aspect, an embodiment of the present application provides a fourth possible implementation manner of the first aspect, where the locating a second target region of the target image according to a set algorithm in the first coordinate system to obtain a detection map includes: calculating a registration parameter of a second target region of the target image based on a set algorithm in the first coordinate system; and registering the second target region with an image template according to the second target region registration parameter to obtain a detection image.
In the implementation process, the registration parameters in the second target region are further calculated according to a set algorithm, and the second target region is mapped according to the registration parameters and registered with the image template to obtain a detection image. And further, pixel-level registration is carried out on the target image, so that the accuracy of defect detection is improved.
With reference to the fourth possible implementation manner of the first aspect, this application example provides a fifth possible implementation manner of the first aspect, where the calculating, in the first coordinate system, a registration parameter of a second target region of the target image based on a setting algorithm includes: in the first coordinate system, calculating registration parameters of the second target region by affine transformation based on a gradient alignment mode.
With reference to the fifth possible implementation manner of the first aspect, an embodiment of the present application provides a sixth possible implementation manner of the first aspect, where the calculating, in the first coordinate system, a registration parameter of a second target region of the target image based on a set algorithm further includes: calculating the registration parameters of the second target region by projective transformation in a gradient-based alignment manner.
In the implementation process, the registration parameters of the second target region are calculated through affine transformation or projection transformation, and the pixel-level images of the target region are subjected to secondary registration, so that the calculation amount is reduced, and the registration accuracy is further improved.
With reference to the sixth possible implementation manner of the first aspect, an embodiment of the present application provides a seventh possible implementation manner of the first aspect, where the determining whether the target image has a defect according to the difference map includes: carrying out contour searching on the difference image to determine a difference image contour; determining the contour area of the difference map contour according to the difference map contour; judging whether the area of the outline is larger than an area threshold value; and if the contour area is larger than the area threshold value, the target image has defects.
In the implementation process, the contour and the contour area of the difference map are determined, the contour area is compared against the area threshold, and whether the target image has defects is judged from the comparison.
With reference to the seventh possible implementation manner of the first aspect, an embodiment of the present application provides an eighth possible implementation manner of the first aspect, where the method further includes: selecting the target image interesting area through edge filtering; filtering the interested region of the target image through morphological operation; a first target area of the target image is determined by contour finding.
In the implementation process, the interested area of the target image is selected through edge filtering, the interested area of the target image is filtered, then the first target area of the target image is determined through contour searching, the target image is preprocessed through the steps, irrelevant information in the image is eliminated, useful information is recovered, the detectability of relevant information is enhanced, meanwhile, data are simplified, and the detection efficiency is improved.
In a second aspect, an embodiment of the present application further provides a detection apparatus, including: a first positioning module, configured to locate a first target area of a target image according to an image template to obtain a first coordinate system of the target image relative to the image template; a second positioning module, configured to locate a pixel-level region within the first target area according to a set algorithm in the first coordinate system to obtain a detection map; a calculation module, configured to calculate a difference map according to the detection map and the image template; and a determination module, configured to determine whether the target image has defects according to the difference map.
In a third aspect, an embodiment of the present application further provides an electronic device, including a processor and a memory storing machine-readable instructions executable by the processor; when the electronic device runs, the machine-readable instructions are executed by the processor to perform the steps of the method in the first aspect described above, or in any possible implementation of the first aspect.
In a fourth aspect, this embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and the computer program is executed by a processor to perform the steps of the foregoing first aspect, or the detection method in any possible implementation manner of the first aspect.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Fig. 2 is a flowchart of a detection method according to an embodiment of the present application.
Fig. 3 is a flowchart of step 201 of a detection method according to an embodiment of the present application.
Fig. 4 is a flowchart of step 202 of a detection method according to an embodiment of the present application.
Fig. 5 is a schematic functional block diagram of a detection apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
To facilitate understanding of the present embodiment, an electronic device for performing the detection method disclosed in the embodiments of the present application will be described in detail first.
Fig. 1 shows a schematic block diagram of an electronic device. The electronic device 100 may include a memory 111 and a processor 113. It will be understood by those of ordinary skill in the art that the structure shown in fig. 1 is merely exemplary and is not intended to limit the structure of the electronic device 100. For example, the electronic device 100 may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The aforementioned components of the memory 111 and the processor 113 are electrically connected to each other directly or indirectly to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The processor 113 is used to execute the executable modules stored in the memory.
The memory 111 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 111 is configured to store a program, and the processor 113 executes the program after receiving an execution instruction; the method disclosed in any embodiment of the present application and performed by the electronic device 100 may be applied to, or implemented by, the processor 113.
The processor 113 may be an integrated circuit chip having signal processing capability. The processor 113 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed by it. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
Alternatively, the electronic device may be a detection robot, a computer, a quality analyzer, or the like.
Exemplarily, if the electronic device is a detection robot and the object to be detected is a keyboard, when the keyboard needs to be detected, the processor of the detection robot controls the acquisition device at the front end to acquire an image of the keyboard and controls the image processing module to preprocess the image. After image preprocessing, the processor positions, compares and judges the keyboard image according to an image template prestored in the detection robot, and finally outputs a detection result. The memory is used for storing information such as an image template, a keyboard image, a preprocessed keyboard image, a detection result and the like.
The electronic device 100 in this embodiment may be configured to perform each step in each method provided in this embodiment. The implementation of the detection method is described in detail below by means of several embodiments.
Please refer to fig. 2, which is a flowchart illustrating a detection method according to an embodiment of the present disclosure. The specific process shown in fig. 2 will be described in detail below.
Step 201, a first target area of a target image is positioned according to an image template, and a first coordinate system of the target image relative to the image template is obtained.
The image template can be established on first use and reused without being re-established in subsequent detections; the image template can also be established at each detection; the image template may also be re-created after the keyboard model is updated.
Optionally, the first target area may be a key area on a keyboard, the first target area may be a key area on a handle of a game machine, the first target area may be a key area of a landline telephone, the first target area may be an area to be detected of other objects to be detected, and the like, and the application is not limited specifically.
The first coordinate system is a coordinate system obtained by positioning the target image on the basis of the image template, and the adjustment parameters of the target image can be obtained when the target image is adjusted relative to the image template through the first coordinate system and the actual coordinate system of the target image.
Step 202, positioning a pixel level region in the first target region according to a set algorithm in the first coordinate system to obtain a detection map.
Alternatively, the set algorithm may be determined based on the reg module in OpenCV. OpenCV is an open-source library for image processing, image analysis and machine vision; the reg module provides mappers for pixel-level registration, which perform calculations based on the pixel gradients of images by applying various mathematical principles, including but not limited to translation transformation, affine transformation, projective transformation, Euclidean transformation, similarity metrics, Gaussian-pyramid hierarchical motion estimation, and the like.
Step 203, calculating a difference map according to the detection map and the image template.
Wherein, the difference map is a defect map of the target image.
Alternatively, a difference map may be determined by comparing the differences between the inspection map and the image template; the difference between the detection map and the image template may be calculated according to a particular algorithm to determine a difference map.
And step 204, determining whether the target image has defects according to the difference map.
Optionally, when the target image has a defect, outputting the defect type and the defect position of the target image. When the target image has no defect, the output target image is normal.
In the above technical solution, as shown in fig. 3, step 201 includes steps 2011 to 2013.
In step 2011, a first matching value between the first target region and the image template is calculated.
The first matching value is the value with the minimum difference degree between the first target area of the target image and the image template.
Alternatively, the first matching value is determined by overlapping the template with the target image, selecting a starting point, and detecting the degree of difference between the template and the first target region of the target image one by one from the starting point. The starting point may be the origin of coordinates of the target area, the starting point may be the point at the upper left corner of the target area, the starting point may be the point at the lower right corner of the target area, etc.
Step 2012, the first target area is transformed according to the first matching value to obtain transformed coordinates.
Specifically, the transforming of the first target area according to the first matching value may be transforming all detection points to be detected in the first target area, and the transforming of the first target area according to the first matching value may also be transforming part of the detection points to be detected in the first target area.
Alternatively, the transformed coordinates may be one or more.
Step 2013, the transformed coordinates are converted using the reference coordinates of the image template to obtain a first coordinate system of the target image relative to the image template.
And the reference coordinate is the coordinate of the image template corresponding to the first matching value.
Specifically, transforming the transformed coordinates with the reference coordinates of the image template may include: and transforming the plurality of transformed coordinates by using the reference coordinates of the image template to form a first coordinate system of the target image relative to the image template.
Optionally, the first coordinate system is a coordinate system formed after the target image is converted with the image template as a reference.
In the foregoing technical solution, step 2012 includes: determining conversion data according to the reference coordinate and the first original coordinate; calculating an affine matrix from the conversion data; and carrying out affine transformation on the first target area according to the affine matrix to obtain transformed coordinates.
The first original coordinate is a coordinate value of the first matching value corresponding to the first target area. The first original coordinate is the central point position of the target image for affine transformation with the image template as the reference.
Wherein the conversion data may include: the target image rotation center, the rotation angle, the scaling factor, etc.
Exemplarily, let the central point of the positioning reference key position "Q" in the image template be T(x', y'), let the central point of "Q" at the best matching position in the target image be I(x, y), let the matching angle be θ and the scaling be γ; the rotation transformation matrix can then be calculated:
$$M_{\mathrm{rot}} = \gamma \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$
A translation vector is calculated from the vector difference between point I(x, y) and point T(x', y'):

$$\mathbf{t} = \begin{bmatrix} t_x \\ t_y \end{bmatrix} = \begin{bmatrix} x - x' \\ y - y' \end{bmatrix}$$
further, an affine matrix is obtained:
$$A = \begin{bmatrix} \gamma\cos\theta & -\gamma\sin\theta & t_x \\ \gamma\sin\theta & \gamma\cos\theta & t_y \end{bmatrix}$$
carrying out affine transformation on the first target area according to the affine matrix, wherein the transformation formula is as follows:
$$\begin{bmatrix} u \\ v \end{bmatrix} = A \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$
namely:
$$\begin{cases} u = \gamma(x\cos\theta - y\sin\theta) + t_x \\ v = \gamma(x\sin\theta + y\cos\theta) + t_y \end{cases}$$
wherein (x, y) are the coordinates of the central point of the positioning reference key position "Q" at the best matching position in the target image, and (u, v) are the coordinates of that central point after affine transformation with the image template as the reference.
Alternatively, the above affine transformation is merely an example of the positioning reference key position "Q" in the target image, and affine transformations of other key positions can be obtained in the same manner. After affine transformation is carried out on all the key positions on the target image, the central point position coordinates of each key position after affine transformation are obtained by taking the image template as a reference.
In the above technical solution, step 2011 includes: carrying out normalized correlation matching on the first target area and the image template to obtain a difference value between the target image and the image template; a first match value is determined based on the difference value.
Wherein there may be a plurality of difference values, and the first matching value is the largest of these difference values, i.e. the position of highest correlation.
Optionally, each point to be detected in the first target region may be subjected to normalized correlation matching with the image template, so as to obtain a difference value between each point to be detected and the image template.
Specifically, the calculation formula of the normalized correlation matching is as follows:
$$R_{\mathrm{ccorr\_normed}}(x, y) = \frac{\sum_{t', w'} T(t', w') \cdot I(x + t', y + w')}{\sqrt{\sum_{t', w'} T(t', w')^2 \cdot \sum_{t', w'} I(x + t', y + w')^2}}$$
wherein T(t', w') is the template matrix, I(t, w) is the target image matrix, and R_ccorr_normed is the difference value.
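For a single template/patch position, the formula reduces to the following NumPy computation (a sketch for checking the notation, not the patent's implementation):

```python
import numpy as np

def ccorr_normed(template, patch):
    """Normalized cross-correlation of one template/patch pair,
    i.e. the R_ccorr_normed formula evaluated at a single position."""
    t = template.astype(np.float64).ravel()
    p = patch.astype(np.float64).ravel()
    # Inner product normalized by the two Euclidean norms.
    return float(t @ p / np.sqrt((t @ t) * (p @ p)))
```

Note that the score is invariant to a uniform brightness scaling of either image, which is what makes the matching robust.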
In the above technical solution, as shown in fig. 4, step 202 includes steps 2021 to 2022.
Step 2021, in the first coordinate system, calculating a registration parameter of a second target region of the target image based on a setting algorithm.
The second target area is a pixel-level area in the first target area.
Alternatively, the registration parameters of the second target region of the target image may comprise a transformation matrix, a translated vector, a transformed vector, and the like.
Alternatively, the registration parameters may be calculated by selecting an algorithm from the reg module in OpenCV, which provides pixel-level registration algorithms.
Step 2022, registering the second target region with the image template according to the second target region registration parameter to obtain a detection image.
Optionally, the second target region may be mapped according to the second target region registration parameter, so as to realize registration between the second target region and the image template.
In the above technical solution, step 2021 includes: in the first coordinate system, calculating the registration parameters of the second target region by affine transformation using a gradient-based alignment method.
Optionally, the affine transformation process is the same as the affine transformation in the above scheme, except that here it is performed at the pixel level.
Illustratively, let the first pixel point of the positioning reference key position "Q" in the target image be $T(x'_1, y'_1)$, let the position of this pixel point at the best matching position in the target image be $I(x_1, y_1)$, let the matched angle be $\theta_1$, and let the scale be $\gamma_1$. A rotation transformation matrix can then be calculated:

$$A_1=\gamma_1\begin{bmatrix}\cos\theta_1 & -\sin\theta_1\\ \sin\theta_1 & \cos\theta_1\end{bmatrix}$$

From the vector difference between point $I(x_1, y_1)$ and point $T(x'_1, y'_1)$, a translation vector is calculated:

$$t_1=\begin{bmatrix}x_1\\ y_1\end{bmatrix}-A_1\begin{bmatrix}x'_1\\ y'_1\end{bmatrix}$$

Further, an affine matrix is obtained:

$$M_1=\begin{bmatrix}A_1 & t_1\end{bmatrix}$$

Affine transformation is carried out on the first target area according to the affine matrix, with the transformation formula:

$$\begin{bmatrix}u_1\\ v_1\end{bmatrix}=M_1\begin{bmatrix}x_1\\ y_1\\ 1\end{bmatrix}$$

namely:

$$\begin{aligned}u_1&=\gamma_1 x_1\cos\theta_1-\gamma_1 y_1\sin\theta_1+t_{1,x}\\ v_1&=\gamma_1 x_1\sin\theta_1+\gamma_1 y_1\cos\theta_1+t_{1,y}\end{aligned}$$

where $(x_1, y_1)$ are the pixel position coordinates of the positioning reference key position "Q" at the best matching position in the target image, and $(u_1, v_1)$ are the pixel position coordinates obtained by affine transformation of that point with the image template as reference.
Optionally, the above affine transformation is merely exemplified with the pixel point of the positioning reference key position "Q" in the target image; the affine transformations of the pixel points of other key positions can be obtained in the same manner. After affine transformation is carried out on all pixel points of the target image, the pixel position coordinates referenced to the image template are obtained.
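The composition of the rotation-scale matrix and the translation can be sketched in NumPy (the function names and the sample angle, scale, and points are illustrative, not from the patent; the mapping direction depends on which image is taken as reference):

```python
import numpy as np

def affine_from_match(theta, gamma, src_pt, dst_pt):
    """Build the 2x3 affine matrix M = [gamma*R(theta) | t], where the
    translation t is chosen so that src_pt maps exactly onto dst_pt."""
    c, s = np.cos(theta), np.sin(theta)
    A = gamma * np.array([[c, -s], [s, c]])
    t = np.asarray(dst_pt, float) - A @ np.asarray(src_pt, float)
    return np.hstack([A, t[:, None]])  # shape (2, 3)

def apply_affine(M, pt):
    """Apply a 2x3 affine matrix to a point in homogeneous form."""
    return M @ np.append(np.asarray(pt, float), 1.0)

# Hypothetical match: key position found rotated 30 deg, scaled 1.2,
# with the point (10, 5) matched to the point (40, 25).
M = affine_from_match(np.deg2rad(30), 1.2, (10, 5), (40, 25))
print(apply_affine(M, (10, 5)))  # → [40. 25.]
```

Applying the same matrix to every pixel of the region realizes the per-pixel transformation described above.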
In the above technical solution, step 2021 further includes: calculating the registration parameters of the second target region by projective transformation using a gradient-based alignment method.
Illustratively, the formula for the projective transformation is:

$$\begin{bmatrix}p\\ q\\ 1\end{bmatrix}\sim H\begin{bmatrix}m\\ n\\ 1\end{bmatrix},\qquad H=\begin{bmatrix}h_{11}&h_{12}&h_{13}\\ h_{21}&h_{22}&h_{23}\\ h_{31}&h_{32}&1\end{bmatrix}$$

where $(m, n)$ are the coordinates of a pixel point in the target image, $(p, q)$ are the coordinates of the pixel point after transformation, and $H$ is the projective transformation matrix.
Optionally, the projective transformation matrix may be calculated from the pixel gradients and pixel values of the image template and the target image at corresponding points.
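Applying such a matrix to a pixel requires dividing by the third homogeneous coordinate; a small NumPy sketch (the matrix values are invented for illustration):

```python
import numpy as np

def project(H, m, n):
    """Apply a 3x3 projective transformation matrix to pixel (m, n)
    and normalize by the third homogeneous coordinate."""
    p, q, w = H @ np.array([m, n, 1.0])
    return p / w, q / w

# Identity plus a translation (h13, h23) and a small perspective term h31.
H = np.array([[1.0,   0.0, 2.0],
              [0.0,   1.0, 3.0],
              [0.001, 0.0, 1.0]])
p, q = project(H, 100, 50)
# With h31 = 0 this would be the pure translation (102, 53); the nonzero
# h31 gives w = 1.1, so both coordinates are divided by 1.1.
print(p, q)
```

When $h_{31}=h_{32}=0$ the projective transformation reduces to the affine case described earlier.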
In the above technical solution, step 204 includes: performing contour searching on the difference map to determine the difference map contours; determining the contour area of each difference map contour; judging whether the contour area is larger than an area threshold; and if the contour area is larger than the area threshold, determining that the target image has a defect.
Optionally, the comparison against the area threshold may also be made via the perimeter of the difference contour region, via the contour edge points of the difference contour region, and the like.
Optionally, before determining the difference map contours, median filtering may be performed on the difference map to filter out interference information.
Optionally, the contours of the difference map include the contour areas of a plurality of points to be detected; whether each contour area is larger than the area threshold is judged for the plurality of points to be detected, and if a contour area is larger than the area threshold, the corresponding detection point is determined to be defective.
In the above technical solution, the detection method further includes: selecting a region of interest of the target image by edge filtering; filtering the region of interest of the target image through morphological operations; and determining the first target region of the target image by contour finding.
A specific implementation of selecting the region of interest of the target image by edge filtering may be: comparing the target image with the image template and gradually reducing the area of the target image until it reaches the minimum area containing the effective information in the target image; this minimum area is the region of interest.
The region of interest of the target image is filtered through morphological operations to eliminate noise in the target image, or interference information captured in the target image, such as dust and dirt on the target detection equipment.
In the detection method of the present application, an image of the object to be detected is acquired; the acquired image and the image template then undergo preprocessing, image-level positioning and pixel-level positioning so that they are accurately matched; after matching, the matched image and the template image are compared to obtain a difference value, and whether the object to be detected has a defect is judged according to the difference value. Defect detection is thus automated through image recognition, replacing manual detection and reducing the working intensity of workers; compared with manual detection, automatic detection also improves detection efficiency and accuracy. In addition, the detection method of the present application improves detection accuracy by processing the target image multiple times, gradually narrowing the processing range from large to small, with the final registration and judgment performed at the pixel level.
Based on the same application concept, a detection device corresponding to the detection method is further provided in the embodiment of the present application, and since the principle of solving the problem of the device in the embodiment of the present application is similar to that in the embodiment of the detection method, the implementation of the device in the embodiment of the present application may refer to the description in the embodiment of the method, and repeated details are not repeated.
Please refer to fig. 5, which is a schematic diagram of the functional modules of a detection apparatus according to an embodiment of the present application. Each module in the detection apparatus in this embodiment is configured to perform each step in the foregoing method embodiment. The detection apparatus comprises a first positioning module 301, a second positioning module 302, a calculation module 303 and a determination module 304, wherein:
the first positioning module 301 is configured to position a first target area of a target image according to an image template, so as to obtain a first coordinate system of the target image relative to the image template.
The second positioning module 302 is configured to position a pixel level region in the first target region according to a set algorithm in the first coordinate system, so as to obtain a detection map.
The calculation module 303 is configured to calculate a difference map according to the detection map and the image template.
The determining module 304 is configured to determine whether the target image has a defect according to the difference map.
In a possible implementation, the first positioning module 301 is further configured to: calculating a first matching value of the first target area and an image template; transforming the first target area according to the first matching value to obtain a transformation coordinate; and transforming the transformation coordinates by using the reference coordinates of the image template to obtain a first coordinate system of the target image relative to the image template, wherein the reference coordinates are the coordinates of the first matching value corresponding to the image template.
In a possible implementation manner, the first positioning module 301 is specifically configured to: determining conversion data according to the reference coordinate and a first original coordinate, wherein the first original coordinate is a coordinate of the first matching value corresponding to the first target area; calculating a transformation matrix from the transformed data; and carrying out affine transformation on the first target area according to the transformation matrix to obtain a transformation coordinate.
In a possible implementation manner, the first positioning module 301 is specifically configured to: carrying out normalized correlation matching on the first target area and the image template to obtain a difference value between the target image and the image template; a first match value is determined based on the difference value.
In a possible implementation, the second positioning module 302 is further configured to: calculating a registration parameter of a second target area of the target image based on a set algorithm in the first coordinate system; mapping the second target region according to the second target region registration parameter; and registering the second target region with an image template to obtain a detection image.
In a possible implementation manner, the second positioning module 302 is specifically configured to: in the first coordinate system, calculate the registration parameters of the second target region by affine transformation using a gradient-based alignment method.
In a possible implementation manner, the second positioning module 302 is specifically configured to: calculate the registration parameters of the second target region by projective transformation using a gradient-based alignment method.
In a possible implementation, the determining module 304 is further configured to: carrying out contour searching on the difference image to determine a difference image contour; determining the contour area of the difference map contour according to the difference map contour; judging whether the area of the outline is larger than an area threshold value or not; and if the area of the outline is larger than the area threshold value, the target image has defects.
In a possible embodiment, the apparatus further comprises a processing module for selecting the target image region of interest by edge filtering; filtering the interested region of the target image through morphological operation; a first target area of the target image is determined by contour finding.
In addition, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the detection method described in the foregoing method embodiment.
The computer program product of the detection method provided in the embodiment of the present application includes a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute the steps of the detection method in the above method embodiment, which may be referred to specifically in the above method embodiment, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes. It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A method of detection, comprising:
positioning a first target area of a target image according to an image template to obtain a first coordinate system of the target image relative to the image template;
positioning a pixel level region in the first target region according to a set algorithm in the first coordinate system to obtain a detection graph;
calculating a difference map according to the detection map and the image template;
and determining whether the target image has defects or not according to the difference map.
2. The method of claim 1, wherein said locating a first target region of a target image according to an image template to obtain a first coordinate system of the target image relative to the image template comprises:
calculating a first matching value of the first target area and an image template;
transforming the first target area according to the first matching value to obtain a transformation coordinate;
and transforming the transformation coordinates by using the reference coordinates of the image template to obtain a first coordinate system of the target image relative to the image template, wherein the reference coordinates are the coordinates of the first matching value corresponding to the image template.
3. The method of claim 2, wherein transforming the first target region according to the first matching value to obtain transformed coordinates comprises:
determining conversion data according to the reference coordinate and a first original coordinate, wherein the first original coordinate is a coordinate of the first matching value corresponding to the first target area;
calculating an affine matrix according to the conversion data;
and carrying out affine transformation on the first target area according to the affine matrix to obtain a transformation coordinate.
4. The method of claim 2, wherein the calculating a first match value of the first target region to the image template comprises:
carrying out normalized correlation matching on the first target area and the image template to obtain a difference value between the target image and the image template;
a first match value is determined from the difference value.
5. The method of claim 1, wherein said locating a pixel level region in the first target region in the first coordinate system according to a set algorithm, resulting in a detection map, comprises:
calculating a registration parameter of a second target region of the target image based on a set algorithm in the first coordinate system;
and registering the second target region with an image template according to the second target region registration parameter to obtain a detection image.
6. The method of claim 5, wherein calculating, in the first coordinate system, registration parameters for a second target region of the target image based on a set algorithm comprises:
in the first coordinate system, calculating the registration parameters of the second target region by affine transformation using a gradient-based alignment method.
7. The method of claim 5, wherein calculating, in the first coordinate system, registration parameters for a second target region of the target image based on a set algorithm, further comprises:
calculating the registration parameters of the second target region by projective transformation using a gradient-based alignment method.
8. The method of claim 1, wherein said determining whether the target image is defective from the difference map comprises:
carrying out contour searching on the difference image to determine a difference image contour;
determining the contour area of the difference map contour according to the difference map contour;
judging whether the area of the outline is larger than an area threshold value;
and if the area of the outline is larger than the area threshold value, the target image has defects.
9. The method of claim 1, further comprising:
selecting the target image interesting area through edge filtering;
filtering the region of interest of the target image through morphological operation;
a first target region of the target image is determined by contour finding.
10. A detection device, comprising:
a first positioning module, configured to position a first target area of a target image according to an image template, so as to obtain a first coordinate system of the target image relative to the image template;
a second positioning module, configured to position a pixel-level region in the first target region according to a set algorithm in the first coordinate system, so as to obtain a detection map;
a calculation module, configured to calculate a difference map according to the detection map and the image template; and
a determination module, configured to determine whether the target image has a defect according to the difference map.
11. An electronic device, comprising: a processor, a memory storing machine-readable instructions executable by the processor, the machine-readable instructions when executed by the processor performing the steps of the method of any of claims 1 to 9 when the electronic device is operated.
12. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, is adapted to carry out the steps of the method according to any one of claims 1 to 9.
CN202111601257.9A 2021-12-24 2021-12-24 Detection method, detection device, electronic equipment and readable storage medium Pending CN114936997A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111601257.9A CN114936997A (en) 2021-12-24 2021-12-24 Detection method, detection device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN114936997A true CN114936997A (en) 2022-08-23

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116486126A (en) * 2023-06-25 2023-07-25 合肥联宝信息技术有限公司 Template determination method, device, equipment and storage medium
CN116486126B (en) * 2023-06-25 2023-10-27 合肥联宝信息技术有限公司 Template determination method, device, equipment and storage medium
CN117131831A (en) * 2023-09-12 2023-11-28 上海世禹精密设备股份有限公司 Alignment method, device, equipment and medium for PCB electronic design diagram and physical diagram

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination