CN118096698A - Visual detection method and device, electronic equipment and storage medium


Info

Publication number: CN118096698A
Application number: CN202410244569.6A
Authority: CN (China)
Prior art keywords: image, detected, region, determining, original image
Other languages: Chinese (zh)
Inventors: 万茂佳, 张武杰
Current and original assignee: Zhongke Huiyuan Intelligent Equipment Guangdong Co ltd; Casi Vision Technology Luoyang Co Ltd
Application filed by Zhongke Huiyuan Intelligent Equipment Guangdong Co ltd and Casi Vision Technology Luoyang Co Ltd
Legal status: Pending

Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02P: Climate change mitigation technologies in the production or processing of goods
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides a visual detection method and device, an electronic device, and a storage medium, relating to the technical field of image processing. The method comprises the following steps: acquiring an original image and a template image of a product to be detected; determining a target region to be detected in the original image according to the template image to obtain an image to be detected; performing non-uniformity correction on the target region to be detected of the image to be detected to obtain a homogenized image; performing a gray morphology operation on the homogenized image to obtain a morphological transformation image; performing defect enhancement processing on the morphological transformation image to obtain a defect enhanced image; and detecting the defect enhanced image to obtain a visual detection result of the product to be detected. The method effectively mitigates the impact of non-uniformity interference in the image to be detected on the detection result, and offers high accuracy and reliability.

Description

Visual detection method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the technical field of image processing, and in particular to a visual detection method and device, an electronic device, and a storage medium.
Background
In the field of industrial defect detection, machine vision inspection has become a development trend, as it can largely replace human eyes in highly repetitive tasks. However, in machine vision inspection, non-uniformity interference such as cross stripes often occurs due to factors such as uneven illumination, background interference, and equipment jitter. Such non-uniformity interference can greatly impact the accuracy and stability of machine vision detection. A technical solution for addressing non-uniformity interference is therefore needed.
Disclosure of Invention
The present disclosure provides a visual inspection method, apparatus, electronic device, and storage medium, so as to at least solve the above technical problems in the prior art.
According to a first aspect of the present disclosure, there is provided a visual inspection method, the method comprising: acquiring an original image and a template image of a product to be detected; determining a target region to be detected in the original image according to the template image to obtain an image to be detected; performing non-uniformity correction on the target region to be detected of the image to be detected to obtain a homogenized image; performing a gray morphology operation on the homogenized image to obtain a morphological transformation image; performing defect enhancement processing on the morphological transformation image to obtain a defect enhanced image; and detecting the defect enhanced image to obtain a visual detection result of the product to be detected.
In an embodiment, the determining, according to the template image, the target region to be detected in the original image to obtain the image to be detected includes: determining an affine transformation matrix according to pose information of the original image and the template image; mapping the region of interest on the template image onto the original image based on the affine transformation matrix to obtain an initial region to be detected on the original image; and performing morphological processing on the initial region to be detected to determine the target region to be detected in the original image and obtain the image to be detected.
In an embodiment, before the determining the affine transformation matrix according to pose information of the original image and the template image, the method further includes: constructing a coordinate system based on the region of interest in the template image; pose information of the template image is determined based on the coordinate system.
In an embodiment, the constructing a coordinate system based on the region of interest in the template image includes: determining a first boundary line and a second boundary line based on the region of interest in the template image; determining the origin of the coordinate system as the intersection point of the first boundary line and the second boundary line; determining the coordinate axes of the coordinate system from the first boundary line and a line perpendicular to the first boundary line; and constructing the coordinate system from the origin and the coordinate axes.
In an embodiment, the performing morphological processing on the initial region to be detected to determine the target region to be detected in the original image and obtain the image to be detected includes: screening the pixel points based on the gray values of the pixel points of the initial region to be detected; determining a plurality of connected regions according to the screened pixel points; screening the plurality of connected regions according to their areas to obtain a candidate region to be detected; and adjusting the smoothness and the size of the candidate region to be detected to determine the target region to be detected and obtain the image to be detected.
In an embodiment, the performing non-uniformity correction on the target region to be detected of the image to be detected to obtain a homogenized image includes: acquiring a first mask area corresponding to the target region to be detected; scaling the original image and the first mask area respectively to obtain a scaled original image and a scaled first mask area; multiplying the scaled original image by the scaled first mask area to obtain a first intermediate image; dividing the one-dimensional array corresponding to the first intermediate image by the one-dimensional array corresponding to the scaled first mask area to obtain a second intermediate image; and subtracting the second intermediate image from the scaled original image and adding a preset background gray value to obtain the homogenized image.
In an embodiment, the performing gray scale morphological operation on the homogenized image to obtain a morphological transformed image includes: determining a forward ROI rectangle corresponding to the homogenized image; and carrying out gray morphology operation on the forward ROI rectangle to obtain the morphology transformation image.
In an embodiment, the performing defect enhancement processing on the morphological transformation image to obtain a defect enhanced image includes: obtaining a second mask region corresponding to the morphological transformation image; scaling the original image and the second mask area to obtain a scaled original image and a scaled second mask area; multiplying the scaled original image by the scaled second mask area to obtain a third intermediate image; performing primary filtering and secondary filtering on the scaled second mask area and the third intermediate image to obtain a first filtered image and a second filtered image, respectively; obtaining a fourth intermediate image by subtracting the first filtered image and the second filtered image; filtering the binarized fourth intermediate image and the scaled second mask area to obtain a fifth intermediate image; adjusting the size of the normalized fifth intermediate image to obtain a sixth intermediate image; and obtaining a control image corresponding to the target region to be detected, and obtaining the defect enhanced image based on the sixth intermediate image and the control image.
In an embodiment, the obtaining the first filtered image by performing a filtering process on the scaled second mask area and the third intermediate image includes: multiplying the scaled second mask region by the third intermediate image to obtain a mask-processed image; filtering the mask-processed image to obtain a filtered mask-processed image; filtering the third intermediate image to obtain a filtered third intermediate image; and dividing the filtered mask-processed image by the filtered third intermediate image to obtain the first filtered image.
In an embodiment, the obtaining the control image corresponding to the target region to be detected includes: determining, according to the structure of the target region to be detected, an all-zero array with the same structure as the target region to be detected; creating a circumscribed rectangle based on the size of the all-zero array; and performing a large-scale median filtering operation on the all-zero array and the circumscribed rectangle to obtain the control image.
In an embodiment, the detecting the defect enhanced image to obtain the visual detection result of the product to be detected includes: acquiring image data of the defect enhanced image; and detecting the image data based on a preset defect threshold value, and determining a visual detection result of the product to be detected.
According to a second aspect of the present disclosure, there is provided a visual inspection apparatus, the apparatus comprising: the image acquisition module is used for acquiring an original image and a template image of a product to be detected; the to-be-detected region determining module is used for determining a target to-be-detected region in the original image according to the template image to obtain a to-be-detected image; the first processing module is used for carrying out non-uniformity correction processing on a target to-be-detected area of the to-be-detected image to obtain a homogenized image; the second processing module is used for carrying out gray morphology operation on the homogenized image to obtain a morphology transformation image; the third processing module is used for carrying out defect enhancement processing on the morphological transformation image to obtain a defect enhancement image; and the detection module is used for detecting the defect enhanced image to obtain a visual detection result of the product to be detected.
In an embodiment, the to-be-detected region determining module includes: a first determining submodule, used for determining an affine transformation matrix according to pose information of the original image and the template image; a mapping sub-module, used for mapping the region of interest on the template image onto the original image based on the affine transformation matrix to obtain an initial region to be detected on the original image; and a second determining submodule, used for performing morphological processing on the initial region to be detected, determining the target region to be detected in the original image, and obtaining the image to be detected.
In an embodiment, the device further comprises: the coordinate system construction module is used for constructing a coordinate system based on the region of interest in the template image; and the pose determining module is used for determining pose information of the template image based on the coordinate system.
In an embodiment, the coordinate system construction module is further configured to: determining a first boundary line and a second boundary line based on the region of interest in the template image; determining the origin of the coordinate system as the intersection point of the first boundary line and the second boundary line; determining the coordinate axes of the coordinate system from the first boundary line and a line perpendicular to the first boundary line; and constructing the coordinate system from the origin and the coordinate axes.
In an embodiment, the second determining sub-module is further configured to: screening the pixel points based on the gray values of the pixel points of the initial region to be detected; determining a plurality of connected regions according to the screened pixel points; screening the plurality of connected regions according to their areas to obtain a candidate region to be detected; and adjusting the smoothness and the size of the candidate region to be detected to determine the target region to be detected and obtain the image to be detected.
In an embodiment, the first processing module includes: the first acquisition submodule is used for acquiring a first mask region corresponding to the target region to be detected; the first processing submodule is used for respectively carrying out scaling processing on the original image and the first mask area to obtain a scaled original image and a scaled first mask area; the second processing submodule is used for multiplying the scaled original image and the scaled first mask area to obtain a first intermediate image; the third processing sub-module is used for dividing the one-dimensional array corresponding to the first intermediate image and the one-dimensional array corresponding to the scaled first mask area to obtain a second intermediate image; and a fourth processing sub-module, configured to obtain a homogenized image by subtracting the second intermediate image from the scaled original image and adding a preset background gray value.
In an embodiment, the second processing module includes: a third determination submodule, configured to determine a forward ROI rectangle corresponding to the homogenized image; and a fifth processing sub-module, configured to perform gray morphology operation on the forward ROI rectangle, so as to obtain the morphology transformation image.
In an embodiment, the third processing module includes: the second acquisition submodule is used for acquiring a second mask region corresponding to the morphological transformation image; a sixth processing sub-module, configured to perform scaling processing on the original image and the second mask area, to obtain a scaled original image and a scaled second mask area; a seventh processing sub-module, configured to multiply the scaled original image with the scaled second mask area to obtain a third intermediate image; an eighth processing sub-module, configured to obtain a first filtered image and a second filtered image by performing a primary filtering process and a secondary filtering process on the scaled second mask area and the third intermediate image, respectively; a ninth processing sub-module, configured to obtain a fourth intermediate image by subtracting the first filtered image and the second filtered image; a tenth processing sub-module, configured to obtain a fifth intermediate image by performing filtering processing on the binarized fourth intermediate image and the scaled second mask area; an eleventh processing sub-module, configured to adjust the size of the normalized fifth intermediate image to obtain a sixth intermediate image; and a twelfth processing sub-module, configured to obtain a control image corresponding to the target area to be detected, and obtain the defect enhanced image based on the sixth intermediate image and the control image.
In an embodiment, the eighth processing submodule is further configured to: obtaining a mask-processed image by multiplying the scaled second mask region by the third intermediate image; filtering the mask-processed image to obtain a filtered mask-processed image; filtering the third intermediate image to obtain a filtered third intermediate image; and dividing the filtered mask-processed image by the filtered third intermediate image to obtain the first filtered image.
In an embodiment, the twelfth processing submodule is further configured to: determining, according to the structure of the target region to be detected, an all-zero array with the same structure as the target region to be detected; creating a circumscribed rectangle based on the size of the all-zero array; and performing a large-scale median filtering operation on the all-zero array and the circumscribed rectangle to obtain the control image.
In an embodiment, the detection module includes: a first detection sub-module, configured to acquire image data of the defect enhanced image; and the second detection sub-module is used for detecting the image data based on a preset defect threshold value and determining a visual detection result of the product to be detected.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods described in the present disclosure.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of the present disclosure.
In the visual detection method and device, electronic device, and storage medium provided by the present disclosure, a target region to be detected in an original image is first determined based on a template image, and the original image with the determined target region to be detected is taken as the image to be detected. Non-uniformity correction, gray morphology processing, and defect enhancement processing are then performed on the target region to be detected in sequence to obtain a defect enhanced image. Finally, the defect enhanced image is detected to obtain a visual detection result of the product to be detected. The method can effectively solve the problem of false detection caused by non-uniformity interference in the image to be detected, has high accuracy and reliability, and can be widely applied in the field of industrial defect detection.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which:
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Fig. 1 shows a schematic implementation flow diagram of a visual inspection method according to an embodiment of the disclosure;
Fig. 2 shows a schematic diagram of an original image and a template image according to an embodiment of the disclosure;
Fig. 3 shows a schematic diagram of an imaging system according to an embodiment of the disclosure;
Fig. 4 shows a first schematic diagram of an image to be detected according to an embodiment of the disclosure;
Fig. 5 shows a second schematic diagram of an image to be detected according to an embodiment of the disclosure;
Fig. 6 shows a third schematic diagram of an image to be detected according to an embodiment of the disclosure;
Fig. 7 shows a fourth schematic diagram of an image to be detected according to an embodiment of the disclosure;
Fig. 8 shows a schematic diagram of a coordinate system according to an embodiment of the disclosure;
Fig. 9 shows a second schematic implementation flow diagram of a visual inspection method according to an embodiment of the disclosure;
Fig. 10 shows a third schematic implementation flow diagram of a visual inspection method according to an embodiment of the disclosure;
Fig. 11 shows a fourth schematic implementation flow diagram of a visual inspection method according to an embodiment of the disclosure;
Fig. 12 shows a schematic diagram of the composition structure of a visual inspection apparatus according to an embodiment of the disclosure;
Fig. 13 shows a schematic diagram of the composition structure of an electronic device according to an embodiment of the disclosure.
Detailed Description
In order to make the objects, features and advantages of the present disclosure more comprehensible, the technical solutions in the embodiments of the present disclosure will be clearly described in conjunction with the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. Based on the embodiments in this disclosure, all other embodiments that a person skilled in the art would obtain without making any inventive effort are within the scope of protection of this disclosure.
At present, in the field of visual defect detection, the following technical means are mainly adopted to detect images of products to be detected:
1. Image smoothing, which reduces noise interference in the image through mean, median, Gaussian filtering, and similar operations. Image smoothing algorithms typically consume significant computing resources and time.
2. Image enhancement, which aims to enhance useful information in an image and improve image quality. However, while the useful information in the image is enhanced, interference in the background is enhanced as well. This approach therefore does not improve the signal-to-noise ratio, leaving the image difficult to analyze and process.
3. Semantic segmentation based on deep learning, which requires a large amount of labeled data for training. When the application scenario changes, a large amount of data needs to be collected and labeled again, a process that is time-consuming and labor-intensive.
Accordingly, the present application provides a visual inspection method, apparatus, electronic device, and storage medium to address the problems in the prior art. Fig. 1 shows a schematic implementation flow diagram of a visual inspection method according to an embodiment of the disclosure. As shown in Fig. 1, according to a first aspect of an embodiment of the disclosure, there is provided a visual inspection method, including:
step 101, obtaining an original image and a template image of a product to be detected.
The product to be detected refers to a product that requires quality inspection, such as an electronic product, an industrial product, or a medical instrument. The template image is a pre-acquired image of the product to be detected whose region of interest (ROI) has been determined, for example by manual frame selection, and it can be directly retrieved during detection. The original image is a gray image of the product to be detected, acquired in real time by an acquisition device (such as a camera or a scanner) during detection. Referring to Fig. 2, the left side of Fig. 2 is the template image and the right side is the original image.
In one embodiment, Fig. 3 illustrates an imaging system that includes a CCD (Charge-Coupled Device) camera, an object (the product to be detected), a light source, and the camera lens. The original image of the product to be detected can be obtained by this imaging system. Specifically, the product to be detected is first placed on a fixed carrier and then moved by a mechanical system from the fixed carrier to a fixed position in the imaging system (such as the object position in Fig. 3). The CCD camera then captures the original image of the product to be detected. It will be appreciated that the imaging system in this embodiment is merely an illustrative example, and the acquisition of the original image in practice is not limited to this imaging system.
Step 102, determining a target region to be detected in the original image according to the template image to obtain an image to be detected.
The target region to be detected is the region of the original image that needs to be inspected, and it corresponds to the region of interest in the template image. Specifically, the template image and the original image can be matched by methods such as feature extraction and description algorithms and coordinate-system establishment, and the position of the ROI in the original image, i.e., the target region to be detected, is finally determined to obtain the image to be detected.
And 103, carrying out non-uniformity correction processing on a target to-be-detected area of the to-be-detected image to obtain a homogenized image.
And 104, performing gray level morphological operation on the homogenized image to obtain a morphological transformation image.
And 105, performing defect enhancement processing on the morphological transformation image to obtain a defect enhanced image.
For example, an image of the target region to be detected is shown in Fig. 4. In steps 103-105, non-uniformity correction is first performed on the target region to be detected of the image to be detected to eliminate uneven illumination, background interference, or other non-uniformity interference in the image, giving the homogenized image shown in Fig. 5. Gray morphology processing, specifically a dilation operation, is then performed on the homogenized image to connect adjacent bright areas, giving the morphological transformation image shown in Fig. 6. Finally, defect enhancement processing is performed on the morphological transformation image to highlight the details and edges of defects and make the difference between the defect area and the background more obvious, giving the defect enhanced image shown in Fig. 7.
And 106, detecting the defect enhanced image to obtain a visual detection result of the product to be detected.
Whether the product to be detected has concave-convex point defects is determined from the gray values of the pixel points in the defect enhanced image. Specifically, a gray value corresponding to the concave-convex point defect is preset; if this gray value exists in the defect enhanced image, the visual detection result is that the product to be detected has a concave-convex point defect. Otherwise, the visual detection result is that the product to be detected has no concave-convex point defect.
In this method, the target region to be detected of the image to be detected is sequentially subjected to non-uniformity correction, gray-scale dilation, and defect enhancement processing, which can improve the efficiency, precision, and stability of defect detection. The scheme has wide applicability in the field of visual defect detection, can cope with non-uniformity interference across different products and scenes to be detected, and has strong universality and practicability.
In one embodiment of the present disclosure, determining the target region to be detected in the original image according to the template image to obtain the image to be detected can be implemented by the following technical means: step 201, determining an affine transformation matrix according to pose information of the original image and the template image; step 202, mapping the region of interest on the template image onto the original image based on the affine transformation matrix to obtain an initial region to be detected on the original image; and step 203, performing morphological processing on the initial region to be detected to determine the target region to be detected in the original image and obtain the image to be detected.
In step 201, according to pose information of an original image and a template image, determining an affine transformation matrix may be implemented by the following technical means:
A coordinate system is first constructed based on the region of interest in the template image. Pose information of the template image and the original image is then determined according to the coordinate system. Specifically, an edge detection algorithm or a straight-line detection algorithm may be used to determine the edge profile of the region of interest, thereby obtaining a plurality of straight lines. The included angle between two of these straight lines and a certain reference line (such as a horizontal or vertical line) is then taken as the pose information of the template image. Pose information of the original image is obtained in the same way. Finally, the affine transformation matrix is calculated from the pose information of the template image and the original image.
In one embodiment, constructing the coordinate system may be accomplished by:
As shown in Fig. 8, a first boundary line (L1) and a second boundary line (L2) are first determined based on the region of interest in the template image. The intersection point of the first boundary line and the second boundary line is taken as the origin of the coordinate system. The first boundary line L1 is taken as the X axis of the coordinate system, and the straight line that is perpendicular to L1 and passes through the origin is taken as the Y axis, which yields the coordinate system.
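To make this concrete, the sketch below derives an origin and an orientation angle from two boundary lines, which together can serve as the pose information used in step 201. It assumes each line is available as a point plus a direction vector (for example from a line-fitting step not shown here); the function and variable names are illustrative and not taken from the patent.

```python
import numpy as np

def build_pose(line1, line2):
    """Coordinate-system construction sketch: each line is (point, direction) in
    image coordinates. The intersection of L1 and L2 becomes the origin, L1 is the
    X axis, and its perpendicular through the origin is the Y axis. The returned
    origin and X-axis angle serve as pose information."""
    (p1, d1), (p2, d2) = line1, line2
    p1, d1, p2, d2 = (np.asarray(v, dtype=float) for v in (p1, d1, p2, d2))
    # Solve p1 + t*d1 = p2 + s*d2 for (t, s); the lines must not be parallel.
    A = np.column_stack((d1, -d2))
    t, _ = np.linalg.solve(A, p2 - p1)
    origin = p1 + t * d1
    angle_deg = np.degrees(np.arctan2(d1[1], d1[0]))  # orientation of the X axis
    return origin, angle_deg

# Usage sketch (coordinates are illustrative):
# origin, angle = build_pose(((0, 100), (1, 0.02)), ((50, 0), (0.01, 1)))
```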
Step 202, mapping the region of interest on the template image onto the original image based on the affine transformation matrix to obtain an initial region to be detected on the original image.
The ROI on the template image is transformed using the affine transformation matrix obtained in step 201 and mapped into the coordinate space of the original image, and the corresponding area on the original image, i.e., the initial region to be detected, is obtained.
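Continuing the sketch above, the hedged example below builds a 2×3 affine matrix from the two poses (origin plus angle for the template image and the original image) and maps the template ROI contour onto the original image, roughly corresponding to steps 201-202. OpenCV and NumPy are assumed; a full implementation would also estimate scale and guard against degenerate poses.

```python
import cv2
import numpy as np

def build_affine(template_origin, template_angle_deg, original_origin, original_angle_deg):
    """Estimate a 2x3 affine matrix that maps template-image coordinates onto the
    original image from an origin and an in-plane angle per image (a simplified
    reading of the pose information in step 201)."""
    d_angle = original_angle_deg - template_angle_deg
    # Rotate about the template origin by the angular difference between the poses.
    affine = cv2.getRotationMatrix2D(tuple(map(float, template_origin)), d_angle, 1.0)
    # Then translate the template origin onto the original-image origin.
    affine[0, 2] += original_origin[0] - template_origin[0]
    affine[1, 2] += original_origin[1] - template_origin[1]
    return affine

def map_roi(roi_points, affine):
    """Map an ROI contour (N x 2 array of template-image points) into the original
    image, giving the initial region to be detected (step 202)."""
    pts = np.asarray(roi_points, dtype=np.float32).reshape(-1, 1, 2)
    return cv2.transform(pts, affine).reshape(-1, 2)

# Usage sketch (all numbers are illustrative):
# M = build_affine((120, 80), 1.5, (132, 91), 3.0)
# initial_region = map_roi(template_roi_points, M)
```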
And 203, performing morphological processing on the initial region to be detected, determining a target region to be detected in the original image, and obtaining the image to be detected.
In an embodiment, the step 203 may be implemented by the following steps:
Step 2031, screening pixel points based on gray values of the pixel points of the initial region to be tested;
Specifically, all pixel points in the initial region to be detected are traversed, and the pixel points whose gray values are larger than the parameter 'minimum gray threshold of the target object' and smaller than the parameter 'maximum gray threshold of the target object' are retained. The minimum and maximum gray thresholds of the target object are preset fixed values whose specific numbers can be set according to the actual situation.
Step 2032, determining a plurality of connected areas according to the filtered pixel points;
Step 2033, screening the plurality of connected regions according to their areas to obtain a candidate region to be detected;
Specifically, the connected regions whose areas lie between the 'minimum value of the target area' and the 'maximum value of the target area' are selected and merged to obtain the candidate region to be detected. The minimum and maximum values of the target area are preset fixed values whose specific numbers can be set according to the actual situation.
Step 2034, adjusting the smoothness and the size of the candidate region to be detected, and determining the target region to be detected to obtain the image to be detected.
Specifically, the smoothness of the outline of the largest candidate region to be detected is first adjusted through the parameter 'outline opening and closing radius' to eliminate any noise or irregularity that may exist. The size of the outline of the largest candidate region is then adjusted through the parameter 'outline erosion radius', so that the edge of the candidate region shrinks to some extent and the shape and size of the target are better delimited. Regions whose areas lie between the minimum and maximum values of the target area are then screened out again, and these steps are repeated until the largest connected region is determined. The largest connected region finally output is the target region to be detected.
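A compact OpenCV sketch of steps 2031-2034 follows. The gray thresholds, area limits, and radii stand in for the quoted parameters ('minimum/maximum gray threshold of the target object', 'minimum/maximum value of the target area', 'outline opening and closing radius', 'outline erosion radius'); the values are placeholders, and the iterative re-screening described above is reduced to a single pass.

```python
import cv2
import numpy as np

def extract_target_region(initial_roi_gray, min_gray=30, max_gray=220,
                          min_area=500, max_area=500_000,
                          open_close_radius=5, erode_radius=2):
    """Steps 2031-2034 (sketch): gray-value screening, connected-component area
    screening, then outline smoothing and shrinking. All thresholds are placeholders."""
    # 2031: keep pixels whose gray value lies inside the target gray range.
    mask = cv2.inRange(initial_roi_gray, min_gray, max_gray)

    # 2032/2033: connected components, kept only if their area is in range.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    kept = np.zeros_like(mask)
    for i in range(1, num):  # label 0 is the background
        if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area:
            kept[labels == i] = 255

    # 2034: smooth the outline ("outline opening and closing radius") and shrink
    # it slightly ("outline erosion radius") to better delimit the target.
    k_oc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                     (2 * open_close_radius + 1,) * 2)
    k_er = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                     (2 * erode_radius + 1,) * 2)
    kept = cv2.morphologyEx(kept, cv2.MORPH_OPEN, k_oc)
    kept = cv2.morphologyEx(kept, cv2.MORPH_CLOSE, k_oc)
    kept = cv2.erode(kept, k_er)
    return kept  # binary mask of the target region to be detected
```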
The method of the embodiment can effectively screen and optimize the initial region to be detected, is beneficial to eliminating local noise or scattered target regions, enables the region to be detected to be more continuous and complete, reduces unnecessary regions and interference in subsequent processing, and improves detection accuracy and stability.
As shown in fig. 9, in one embodiment of the present disclosure, performing a non-uniformity correction process on a target area to be measured of an image to be measured to obtain a homogenized image, including:
Step 301, obtaining a first mask area corresponding to a target area to be detected;
A mask image is determined based on the contour point set of the target region to be detected, and the upright circumscribed rectangle of the mask image is taken as the first mask region.
Step 302, scaling the original image and the first mask area respectively to obtain a scaled original image and a scaled first mask area;
The input original image and the first mask region are scaled based on an image scaling factor, that is, the image size is adjusted to suit the algorithm, giving the scaled original image and the scaled first mask region.
Step 303, multiplying the scaled original image and the scaled first mask area to obtain a first intermediate image;
The scaled original image is multiplied element by element by the scaled first mask area to obtain the first intermediate image, denoted maskedResizedImg.
Step 304, dividing the one-dimensional array corresponding to the first intermediate image by the one-dimensional array corresponding to the scaled first mask region to obtain a second intermediate image;
The first intermediate image maskedResizedImg and the scaled first mask region are first converted into one-dimensional arrays sumY and numY, respectively. sumY and numY are divided element by element to obtain a new one-dimensional array meanY_1d. The array meanY_1d is then converted into an image structure, giving the second intermediate image meanImgX.
In step 305, the second intermediate image is subtracted from the scaled original image, and a preset background gray value is added, to obtain the homogenized image.
The gray value of each pixel of the second intermediate image meanImgX is subtracted from the gray value of the pixel at the corresponding position in the scaled original image. A preset background gray value targetVal is then added, and the result is converted into an image with the same structure as the scaled image, giving the image after non-uniformity correction, i.e., the homogenized image.
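The sketch below mirrors steps 301-305 in NumPy/OpenCV. The conversion of the first intermediate image and the scaled mask into the one-dimensional arrays sumY and numY is read here as a column-wise sum, so the correction removes a per-column mean profile before restoring a flat background level; the reduction axis, the scale factor, and the targetVal value are assumptions rather than figures from the patent.

```python
import cv2
import numpy as np

def correct_nonuniformity(original, roi_mask, scale=0.25, target_val=128.0):
    """Steps 301-305 (sketch). roi_mask is the first mask area (binary), the same
    size as the original image; scale and target_val (preset background gray value)
    are illustrative."""
    # 302: scale the original image and the mask.
    resized_img = cv2.resize(original, None, fx=scale, fy=scale,
                             interpolation=cv2.INTER_AREA).astype(np.float32)
    resized_mask = cv2.resize(roi_mask, None, fx=scale, fy=scale,
                              interpolation=cv2.INTER_NEAREST).astype(np.float32)

    # 303: first intermediate image = image x mask, element by element.
    masked_resized_img = resized_img * resized_mask

    # 304: reduce to one-dimensional arrays (column-wise sums, an assumed reading
    # of sumY/numY), divide them, and expand back into an image (meanImgX).
    sum_y = masked_resized_img.sum(axis=0)
    num_y = resized_mask.sum(axis=0)
    mean_y_1d = np.divide(sum_y, num_y, out=np.zeros_like(sum_y), where=num_y > 0)
    mean_img_x = np.tile(mean_y_1d, (resized_img.shape[0], 1))

    # 305: subtract the mean profile and add the preset background gray value.
    homogenized = resized_img - mean_img_x + target_val
    return np.clip(homogenized, 0, 255).astype(np.uint8)
```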
The method in this embodiment offers high accuracy, applicability, and robustness in horizontal non-uniformity correction, and can effectively improve the quality and efficiency of image processing.
As shown in fig. 10, in one embodiment of the present disclosure, performing a gray scale morphological operation on a homogenized image to obtain a morphologically transformed image, comprising:
Step 401: determining a forward ROI rectangle corresponding to the homogenized image;
First, the target region to be detected after non-uniformity correction, i.e., the homogenized image, is processed: the boundary of the corrected target region to be detected is extracted and a forward ROI point set is generated. The forward ROI point set represents the boundary feature points of the target region to be detected. A circumscribed rectangle, i.e., the forward ROI rectangle, is then determined based on the generated forward ROI point set.
Step 402: gray morphology operation is carried out on the forward ROI rectangle, and a morphology transformation image is obtained;
For the gray morphology operation, a circular structuring element is selected and the operation type is dilation; in this embodiment, a circle with a radius of 20 is used. The circular element is then applied to the forward ROI rectangle and a gray-scale dilation operation is performed, giving the morphological transformation image.
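A minimal sketch of steps 401-402 with OpenCV, assuming the forward ROI rectangle has already been cropped out of the homogenized image; the radius of 20 follows the embodiment, everything else is illustrative.

```python
import cv2

def gray_dilate_roi(homogenized_roi, radius=20):
    """Steps 401-402 (sketch): gray-scale dilation of the forward ROI rectangle
    with a circular structuring element (radius 20 in this embodiment)."""
    element = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                        (2 * radius + 1, 2 * radius + 1))
    # Gray-scale dilation connects neighbouring bright areas inside the ROI.
    return cv2.dilate(homogenized_roi, element)
```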
In this method, the corrected homogenized image is further processed with a gray morphology operation; by selecting a circular structuring element and the dilation operation, the target area can be made clearer and more prominent, further improving the quality and analyzability of the image.
As shown in fig. 11, in one embodiment of the present disclosure, defect enhancement processing is performed on a morphological transformed image to obtain a defect enhanced image, which specifically includes the following steps:
step 501, obtaining a second mask area corresponding to the morphological transformation image;
First, the target region to be detected after the gray morphology operation, i.e., the morphological transformation image, is processed: the mask image corresponding to the target region to be detected is determined, and the upright circumscribed rectangle of the mask image is taken as the second mask region.
Step 502, scaling the original image and the second mask area to obtain a scaled original image and a scaled second mask area;
Specifically, the incoming original image and the second mask region are scaled based on the image scaling factor, resulting in a scaled original image resizedImg and a scaled second mask region resizedMask.
Step 503, multiplying the scaled original image and the scaled second mask area to obtain a third intermediate image;
Specifically, the array of the scaled original image resizedImg is multiplied element by element by the array of the scaled second mask region resizedMask, and the result is converted back into image form to obtain the third intermediate image maskedImg.
Step 504, performing primary filtering processing and secondary filtering processing on the scaled second mask region and the third intermediate image to obtain a first filtered image and a second filtered image respectively;
In an embodiment, in step 504, the scaled second mask area and the third intermediate image are subjected to a filtering process to obtain a first filtered image, which may be implemented by the following technical means:
First, the scaled second mask area is multiplied by the third intermediate image to obtain a mask-processed image; the mask-processed image is then filtered to obtain a filtered mask-processed image; the third intermediate image is filtered to obtain a filtered third intermediate image; and finally, the filtered mask-processed image is divided by the filtered third intermediate image to obtain the first filtered image.
Specifically, the scaled second mask region resizedMask is multiplied element by element by the third intermediate image maskedImg to obtain a mask-processed image. The mask-processed image is filtered with a filter whose kernel is (2 × specified filter radius + 1); in this embodiment the specified filter radius is 2, i.e., a 5×5 filter is applied to the mask-processed image, giving the filtered mask-processed image lmMaskedImg1. The third intermediate image is filtered with the same filter to obtain the filtered third intermediate image lmMask. Finally, lmMaskedImg1 is divided by lmMask to obtain the first filtered image validMeanImg1.
The secondary filtering of the scaled second mask region and the third intermediate image to obtain the second filtered image can be realized by the following technical means:
The scaled second mask region resizedMask is multiplied element by element by the third intermediate image maskedImg to obtain a mask-processed image. The mask-processed image is filtered with a filter whose kernel is (2 × specified filter radius + 1); in this embodiment the specified filter radius is 6, i.e., a 13×13 filter is applied to the mask-processed image, giving the filtered mask-processed image lmMaskedImg2. The third intermediate image is filtered with the same 13×13 filter to obtain the filtered third intermediate image lmMask. Finally, lmMaskedImg2 is divided by lmMask to obtain the second filtered image validMeanImg2.
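The primary and secondary filtering can be read as a masked local-mean ('valid mean') filter: box-filter the masked image and divide by the box-filtered mask, which is how the identifier lmMask is interpreted in the sketch below. Note that the text above literally divides by the filtered third intermediate image, so this reading is an assumption. Radii 2 and 6 reproduce the 5×5 and 13×13 kernels of the embodiment; everything else is illustrative.

```python
import cv2
import numpy as np

def masked_mean(masked_img, mask, radius):
    """Local mean of masked_img over valid (mask == 1) pixels using a
    (2*radius+1) x (2*radius+1) box filter; the divisor is the filtered mask,
    which is how lmMask is read here. Inputs are float32 arrays of equal size."""
    k = 2 * radius + 1
    lm_masked = cv2.boxFilter(masked_img * mask, -1, (k, k))
    lm_mask = cv2.boxFilter(mask, -1, (k, k))
    return np.divide(lm_masked, lm_mask,
                     out=np.zeros_like(lm_masked), where=lm_mask > 1e-6)

# Step 504: primary (radius 2 -> 5x5) and secondary (radius 6 -> 13x13) filtering
# of the third intermediate image maskedImg with the scaled mask resizedMask:
# valid_mean_img1 = masked_mean(masked_img, resized_mask, radius=2)
# valid_mean_img2 = masked_mean(masked_img, resized_mask, radius=6)
```

With a radius of 4 the same helper reproduces the 9×9 filtering described for the fifth intermediate image in steps 5061-5063 below.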
Step 505, obtaining a fourth intermediate image by subtracting the first filtered image and the second filtered image;
The first filtered image validMeanImg1 and the second filtered image validMeanImg2 are subtracted and the absolute value of the difference is taken. The absolute value of the difference is converted into an image medFreqImg. medFreqImg is binarized according to a given contrast threshold; in this embodiment, 6 is selected as the low threshold and 255 as the high threshold. A binarized image, i.e., the fourth intermediate image medFreqImgRangeLimited, is obtained.
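A short sketch of step 505, assuming the two locally averaged images from the previous sketch; the low and high thresholds of 6 and 255 follow the embodiment.

```python
import cv2
import numpy as np

def mid_frequency_mask(valid_mean_img1, valid_mean_img2, low_thresh=6, high_thresh=255):
    """Step 505 (sketch): absolute difference of the two locally averaged images,
    then range binarization against the contrast thresholds (6 and 255 here)."""
    med_freq_img = cv2.absdiff(valid_mean_img1.astype(np.float32),
                               valid_mean_img2.astype(np.float32))
    # Pixels inside [low_thresh, high_thresh] become 255, everything else 0.
    return cv2.inRange(med_freq_img, low_thresh, high_thresh)
```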
Step 506, filtering the binarized fourth intermediate image and the scaled second mask area to obtain a fifth intermediate image;
The fourth intermediate image medFreqImgRangeLimited and the second mask region resizedMask are filtered to yield the fifth intermediate image medFreqImgBlured. The specific steps are as follows:
Step 5061, the fourth intermediate image medFreqImgRangeLimited is multiplied element by element by the second mask region resizedMask to obtain a masked image. The masked image is filtered with a filter whose kernel is (2 × specified filter radius + 1); in this embodiment the specified filter radius is 4, i.e., a 9×9 filter is applied, giving the filtered image lmMaskedImg.
In step 5062, the second mask region resizedMask is filtered with a filter whose kernel is (2 × specified filter radius + 1); in this embodiment the specified filter radius is 4, i.e., a 9×9 filter is applied, giving the filtered image lmMask.
In step 5063, the image lmMaskedImg is divided by the image lmMask to obtain the fifth intermediate image medFreqImgBlured.
Step 507: adjusting the size of the normalized fifth intermediate image to obtain a sixth intermediate image;
This step is realized as follows:
In step 5071, the fifth intermediate image medFreqImgBlured is normalized into the range (min, max) by the following formula (1), giving the result normW:

dst(i, j) = [ (src(i, j) - min(src)) / (max(src) - min(src)) ] × (max - min) + min    (1)

where dst(i, j) denotes the normalization result at row i and column j; src(i, j) denotes the value of the array at row i and column j; min(src) denotes the minimum value in the array; max(src) denotes the maximum value in the array; min denotes the minimum of the normalization range; and max denotes the maximum of the normalization range.
Step 5072, the size of normW is adjusted to match the size of the ROI, giving normW_OrgRes. The result normW_OrgRes is then multiplied by a preset amplification factor and 1 is added, giving the sixth intermediate image w_final; the preset amplification factor is 100 in this embodiment.
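A sketch of steps 5071-5072: formula (1) as written above, followed by resizing to the ROI size and amplification. The roi_size argument, the normalization range, and the epsilon guard are illustrative, while the amplification factor of 100 follows the embodiment.

```python
import cv2
import numpy as np

def amplify_weights(med_freq_img_blured, roi_size, norm_min=0.0, norm_max=1.0,
                    amplification=100.0):
    """Steps 5071-5072 (sketch). roi_size is (width, height); returns w_final."""
    src = med_freq_img_blured.astype(np.float32)
    src_min, src_max = float(src.min()), float(src.max())
    # Formula (1): map src linearly from [min(src), max(src)] to [norm_min, norm_max].
    norm_w = (src - src_min) / max(src_max - src_min, 1e-6) * (norm_max - norm_min) + norm_min
    norm_w_org_res = cv2.resize(norm_w, roi_size, interpolation=cv2.INTER_LINEAR)
    return norm_w_org_res * amplification + 1.0
```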
Step 508: and obtaining a contrast image corresponding to the target region to be detected, and obtaining a defect enhanced image based on the sixth intermediate image and the contrast image.
The sixth intermediate image w_final is multiplied by the control image fgImgF, and a preset target background gray value is added. In this embodiment the background needs to be adjusted to black, so the target background gray value is set to 0, giving the defect enhanced image enhancedImgF.
The method for acquiring the control image corresponding to the target region to be detected specifically comprises the following steps:
Step 5081, determining, according to the structure of the target region to be detected, an all-zero array with the same structure as the target region to be detected;
First, an array fgImg with the same size as the ROI is created, and its pixel values are initialized to 0.
Step 5082, creating a circumscribed rectangle based on the size of the all-zero array;
A circumscribed rectangle boundRect with the same height and width as the array fgImg is created.
In step 5083, the control image is obtained by performing a large-scale median filtering operation on the all-zero array and the circumscribed rectangle.
The specific operation steps of step 5083 are as follows:
In step 50831, scaling is performed on the incoming original image and the second mask image based on the image scaling factor, resulting in a scaled original image resizedImg and a scaled second mask image resizedMask.
In step 50832, the scaled original image resizedImg is filtered with a filter whose kernel is (2 × specified filter radius + 1); in this embodiment the specified filter radius is 2, i.e., a 5×5 filter is applied to the original image, giving the filtered image filtResizedImgWithBorderProc.
In step 50833, the scaled second mask image resizedMask is eroded with a kernel of (2 × specified filter radius + 1); in this embodiment the specified filter radius is 2, i.e., a 5×5 kernel is used to erode the second mask image, giving the eroded image erodedResizedMask. Subtracting erodedResizedMask from the scaled second mask image resizedMask yields borderProcMask. The image filtResizedImgWithBorderProc is then modified according to the pixel values of borderProcMask, replacing the pixel values that exceed the boundary with the background value.
In step 50834, the scaled second mask image resizedMask is dilated with a kernel of (2 × specified filter radius + 1); in this embodiment the specified filter radius is 2, i.e., a 5×5 kernel is used to dilate the second mask image, giving the dilated image dilatedResizedMask. The scaled second mask image resizedMask is subtracted from dilatedResizedMask to yield borderDilatedMask. The image filtResizedImgWithBorderProc is then modified according to the pixel values of borderDilatedMask, replacing the pixel values that exceed the boundary with the background value.
In step 50835, the image filtResizedImgWithBorderProc is resized to the same size as the ROI, resulting in image fgImgF.
In step 50836, a preset target background gray value is subtracted from the image fgImgF; since the background needs to be adjusted to black in this embodiment, the target background gray value is set to 0. Subtracting 0 from the image fgImgF gives the control image fgImgF.
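The control-image generation can be sketched as follows. The sketch compresses steps 50831-50836 into a single pass: smooth the scaled original image, overwrite a thin band around the mask border with the background value (the embodiment handles the eroded and dilated bands separately), resize back to the ROI size, and subtract the target background gray value of 0. A plain box blur stands in for the large-scale median filter named in step 5083, so treat this as an approximation; all parameter values are illustrative.

```python
import cv2
import numpy as np

def build_control_image(original, roi_mask, roi_size, scale=0.25,
                        filter_radius=2, background_val=0.0):
    """Simplified sketch of steps 50831-50836. roi_mask is the second mask image,
    roi_size is (width, height) of the ROI."""
    k = 2 * filter_radius + 1
    resized_img = cv2.resize(original, None, fx=scale, fy=scale).astype(np.float32)
    resized_mask = cv2.resize(roi_mask, None, fx=scale, fy=scale,
                              interpolation=cv2.INTER_NEAREST).astype(np.uint8)

    filt = cv2.blur(resized_img, (k, k))                 # 50832: 5x5 smoothing
    kernel = np.ones((k, k), np.uint8)
    eroded = cv2.erode(resized_mask, kernel)             # 50833
    dilated = cv2.dilate(resized_mask, kernel)           # 50834
    border_band = (dilated - eroded).astype(bool)        # pixels near the mask edge
    filt[border_band] = background_val                   # replace with the background

    fg_img_f = cv2.resize(filt, roi_size)                # 50835: back to ROI size
    return fg_img_f - background_val                     # 50836: control image fgImgF
```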
In one embodiment of the present disclosure, detecting a defect enhanced image to obtain a visual detection result of a product to be detected includes: acquiring image data of a defect enhanced image; and detecting the image data based on a preset defect threshold value, and determining a visual detection result of the product to be detected.
First, the defect enhanced image is converted into matrix form to obtain the corresponding image data. The image data is then examined against a preset defect threshold. In one embodiment, the preset defect threshold may be set between a low threshold of 100 and a high threshold of 255. If the image data contains a pixel whose gray value is greater than or equal to 100 and less than or equal to 255, the visual detection result is that the product to be detected has a concave-convex point defect.
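A minimal sketch of this check with OpenCV; the thresholds of 100 and 255 follow the embodiment, and the function name is illustrative.

```python
import cv2
import numpy as np

def detect_defect(enhanced_img_f, low_thresh=100, high_thresh=255):
    """Final check (sketch): the product is flagged as having a concave-convex
    point defect if any pixel of the defect enhanced image falls inside the
    preset defect threshold range [100, 255]."""
    data = np.asarray(enhanced_img_f, dtype=np.uint8)      # image data as a matrix
    defect_mask = cv2.inRange(data, low_thresh, high_thresh)
    return cv2.countNonZero(defect_mask) > 0, defect_mask
```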
In one embodiment of the present disclosure, if a defect is detected in a product to be tested, size information of the defect is determined, and then the product to be tested is classified according to the size information of the defect.
Specifically, the size of a defect may be calculated from its minimum circumscribed rectangle. For example, a product to be detected with a defect exceeding 2 mm is regarded as a defective product. By setting a classifier with a defect-size threshold of 2 mm, the products to be detected can be classified, and unqualified products with defects larger than 2 mm can be sorted out.
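The sorting rule can be sketched as follows; mm_per_pixel is an assumed calibration value that the patent does not specify, and the 2 mm limit follows the example above.

```python
import cv2

def classify_by_defect_size(defect_mask, mm_per_pixel, max_defect_mm=2.0):
    """Sorting sketch: measure each defect by its minimum circumscribed rectangle
    and reject the product if any side exceeds 2 mm."""
    contours, _ = cv2.findContours(defect_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        (_, _), (w, h), _ = cv2.minAreaRect(contour)       # rotated bounding box
        if max(w, h) * mm_per_pixel > max_defect_mm:
            return "defective"                             # sorted out as unqualified
    return "qualified"
```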
In one embodiment of the present disclosure, if the product to be detected is judged to be a defective product, it is repaired or scrapped. If the product to be detected is judged to be a qualified product, it is passed on and the next manufacturing process step continues.
Fig. 12 is a schematic view showing a composition structure of a visual inspection apparatus according to an embodiment of the present disclosure, and as shown in fig. 12, according to a second aspect of an embodiment of the present disclosure, there is provided a visual inspection apparatus including:
The image acquisition module 1201 is used for acquiring an original image and a template image of a product to be detected; the to-be-detected region determining module 1202 is configured to determine a target to-be-detected region in the original image according to the template image, so as to obtain a to-be-detected image; the first processing module 1203 is configured to perform non-uniformity correction processing on a target to-be-detected region of the to-be-detected image to obtain a homogenized image; a second processing module 1204, configured to perform gray scale morphological operation on the homogenized image to obtain a morphological transformation image; a third processing module 1205, configured to perform defect enhancement processing on the morphological transformed image to obtain a defect enhanced image; and the detection module 1206 is used for detecting the defect enhanced image to obtain a visual detection result of the product to be detected.
In one embodiment of the present disclosure, the to-be-detected region determining module 1202 includes: a first determining submodule, used for determining an affine transformation matrix according to pose information of the original image and the template image; a mapping sub-module, used for mapping the region of interest on the template image onto the original image based on the affine transformation matrix to obtain an initial region to be detected on the original image; and a second determining submodule, used for performing morphological processing on the initial region to be detected, determining the target region to be detected in the original image, and obtaining the image to be detected.
In one embodiment of the present disclosure, the apparatus further comprises: a coordinate system construction module (not shown in the figure) for constructing a coordinate system based on the region of interest in the template image; a pose determination module (not shown in the figure) for determining pose information of the template image based on the coordinate system.
In one embodiment of the present disclosure, the coordinate system construction module is further configured to: determining a first boundary line and a second boundary line based on the region of interest in the template image; determining the origin of the coordinate system as the intersection point of the first boundary line and the second boundary line; determining the coordinate axes of the coordinate system from the first boundary line and a line perpendicular to the first boundary line; and constructing the coordinate system from the origin and the coordinate axes.
In one embodiment of the present disclosure, the second determining submodule is further configured to: screening the pixel points based on the gray values of the pixel points of the initial region to be detected; determining a plurality of connected regions according to the screened pixel points; screening the plurality of connected regions according to their areas to obtain a candidate region to be detected; and adjusting the smoothness and the size of the candidate region to be detected to determine the target region to be detected and obtain the image to be detected.
In one embodiment of the present disclosure, the first processing module 1203 includes: the first acquisition submodule is used for acquiring a first mask region corresponding to the target region to be detected; the first processing submodule is used for respectively carrying out scaling processing on the original image and the first mask area to obtain a scaled original image and a scaled first mask area; the second processing submodule is used for multiplying the scaled original image and the scaled first mask area to obtain a first intermediate image; the third processing sub-module is used for dividing the one-dimensional array corresponding to the first intermediate image and the one-dimensional array corresponding to the scaled first mask area to obtain a second intermediate image; and a fourth processing sub-module, configured to obtain a homogenized image by subtracting the second intermediate image from the scaled original image and adding a preset background gray value.
In one embodiment of the present disclosure, the second processing module 1204 includes: a third determination submodule, configured to determine a forward ROI rectangle corresponding to the homogenized image; and a fifth processing sub-module, configured to perform gray morphology operation on the forward ROI rectangle, so as to obtain the morphology transformation image.
In one embodiment of the present disclosure, the third processing module 1205 includes: a second acquisition submodule, configured to acquire a second mask region corresponding to the morphology transformation image; a sixth processing submodule, configured to scale the original image and the second mask region, to obtain a scaled original image and a scaled second mask region; a seventh processing submodule, configured to multiply the scaled original image by the scaled second mask region to obtain a third intermediate image; an eighth processing submodule, configured to perform a primary filtering process and a secondary filtering process on the scaled second mask region and the third intermediate image, respectively, to obtain a first filtered image and a second filtered image; a ninth processing submodule, configured to subtract the second filtered image from the first filtered image to obtain a fourth intermediate image; a tenth processing submodule, configured to perform filtering processing on the binarized fourth intermediate image and the scaled second mask region to obtain a fifth intermediate image; an eleventh processing submodule, configured to adjust the size of the normalized fifth intermediate image to obtain a sixth intermediate image; and a twelfth processing submodule, configured to acquire a control image corresponding to the target region to be detected and obtain the defect-enhanced image based on the sixth intermediate image and the control image.
In one embodiment of the disclosure, the eighth processing submodule is further configured to: obtain a mask-processed image by multiplying the scaled second mask region by the third intermediate image; filter the mask-processed image to obtain a filtered mask-processed image; filter the third intermediate image to obtain a filtered third intermediate image; and divide the filtered mask-processed image by the filtered third intermediate image to obtain the first filtered image.
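The defect-enhancement flow of the third processing module 1205, including the two-stage filtering handled by the eighth processing submodule, might be sketched as follows; the kernel sizes, scale factor, binarization threshold and the particular mask-aware (normalized) mean filter are assumptions of this sketch, and the exact order of the division in the submodule above may differ.

```python
import cv2
import numpy as np

def enhance_defects(original, mask, roi_shape, scale=0.25, k1=5, k2=31, thresh=8):
    """Multi-scale, mask-aware defect enhancement.

    original is a single-channel gray image, mask is the 0/255 second mask
    region, roi_shape = (height, width) of the target region to be detected.
    """
    small = cv2.resize(original, None, fx=scale, fy=scale).astype(np.float32)
    m = cv2.resize(mask, None, fx=scale, fy=scale).astype(np.float32) / 255.0
    masked = small * m                                  # third intermediate image

    def masked_mean(img, msk, k):
        # Mean filter restricted to the mask: filter the pre-masked image and
        # the mask separately, then divide so outside pixels do not leak in.
        return cv2.blur(img, (k, k)) / (cv2.blur(msk, (k, k)) + 1e-6)

    first = masked_mean(masked, m, k1)                  # fine-scale (primary) filter
    second = masked_mean(masked, m, k2)                 # coarse-scale (secondary) filter

    diff = first - second                               # fourth intermediate image
    binary = (np.abs(diff) > thresh).astype(np.float32)

    smoothed = cv2.blur(binary * m, (k1, k1))           # fifth intermediate image
    norm = cv2.normalize(smoothed, None, 0, 255, cv2.NORM_MINMAX)

    sixth = cv2.resize(norm, (roi_shape[1], roi_shape[0]))   # sixth intermediate image
    return sixth.astype(np.uint8)
```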
In one embodiment of the present disclosure, the twelfth processing submodule is further configured to: determine, according to the structure of the target region to be detected, an all-zero array with the same structure as the target region to be detected; create a circumscribed rectangle based on the size of the all-zero array; and perform a large-scale median filtering operation on the all-zero array and the circumscribed rectangle to obtain the control image.
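A literal sketch of this control-image construction, assuming the target region to be detected is available as a binary mask; the median-filter aperture is illustrative, and since the array is all zeros the filtering leaves the values unchanged, so the result serves as a zero-valued reference of the correct size.

```python
import cv2
import numpy as np

def build_control_image(region_mask, kernel=101):
    """Control image for the target region to be detected.

    region_mask is a binary (0/255) image of the target region; kernel is the
    median-filter aperture (made odd below).
    """
    zeros = np.zeros_like(region_mask, dtype=np.uint8)   # all-zero array, same structure

    # Circumscribed (bounding) rectangle of the region.
    x, y, w, h = cv2.boundingRect(region_mask)
    k = kernel if kernel % 2 == 1 else kernel + 1        # aperture must be odd

    control = zeros.copy()
    control[y:y + h, x:x + w] = cv2.medianBlur(zeros[y:y + h, x:x + w], k)
    return control
```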
In one embodiment of the present disclosure, the detection module 1206 includes: a first detection submodule, configured to acquire image data of the defect-enhanced image; and a second detection submodule, configured to detect the image data based on a preset defect threshold and determine the visual detection result of the product to be detected.
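A simple readout of the defect-enhanced image against a preset defect threshold could look like the sketch below; the threshold value, minimum defect area and the returned result structure are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_defects(enhanced, defect_threshold=60, min_area=20):
    """Threshold the defect-enhanced image and collect candidate defects."""
    _, binary = cv2.threshold(enhanced, defect_threshold, 255, cv2.THRESH_BINARY)
    num, _, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)

    defects = []
    for i in range(1, num):                              # skip the background label
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            x, y, w, h = (int(v) for v in stats[i, :4])
            defects.append({"bbox": (x, y, w, h),
                            "centroid": tuple(float(c) for c in centroids[i])})

    return {"ok": not defects, "defects": defects}
```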
According to embodiments of the present disclosure, the present disclosure also provides an electronic device and a readable storage medium.
Fig. 13 shows a schematic block diagram of an example electronic device 1300 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in Fig. 13, the device 1300 includes a computing unit 1301, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 1302 or a computer program loaded from a storage unit 1308 into a random access memory (RAM) 1303. In the RAM 1303, various programs and data required for the operation of the device 1300 can also be stored. The computing unit 1301, the ROM 1302, and the RAM 1303 are connected to each other through a bus 1304. An input/output (I/O) interface 1305 is also connected to the bus 1304.
Various components in device 1300 are connected to I/O interface 1305, including: an input unit 1306 such as a keyboard, a mouse, or the like; an output unit 1307 such as various types of displays, speakers, and the like; storage unit 1308, such as a magnetic disk, optical disk, etc.; and a communication unit 1309 such as a network card, a modem, a wireless communication transceiver, or the like. The communication unit 1309 allows the device 1300 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 1301 may be a variety of general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 1301 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various specialized artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1301 performs the respective methods and processes described above, for example, the visual detection method. For example, in some embodiments, the visual detection method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1308. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1300 via the ROM 1302 and/or the communication unit 1309. When the computer program is loaded into the RAM 1303 and executed by the computing unit 1301, one or more steps of the visual detection method described above may be performed. Alternatively, in other embodiments, the computing unit 1301 may be configured to perform the visual detection method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs, the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. This program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
The foregoing is merely specific embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any changes or substitutions that can readily be conceived by a person skilled in the art within the technical scope of the present disclosure shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A method of visual inspection, the method comprising:
acquiring an original image and a template image of a product to be detected;
determining a target region to be detected in the original image according to the template image to obtain an image to be detected;
performing non-uniformity correction processing on a target region to be detected of the image to be detected to obtain a homogenized image;
performing a gray-scale morphology operation on the homogenized image to obtain a morphology transformation image;
performing defect enhancement processing on the morphology transformation image to obtain a defect-enhanced image;
and detecting the defect-enhanced image to obtain a visual detection result of the product to be detected.
2. The method according to claim 1, wherein the determining a target region to be detected in the original image according to the template image to obtain an image to be detected comprises:
determining an affine transformation matrix according to pose information of the original image and the template image;
mapping the region of interest on the template image onto the original image based on the affine transformation matrix to obtain an initial region to be detected on the original image;
and performing morphological processing on the initial region to be detected, and determining a target region to be detected in the original image to obtain the image to be detected.
3. The method according to claim 2, wherein before the determining an affine transformation matrix according to pose information of the original image and the template image, the method further comprises:
constructing a coordinate system based on the region of interest in the template image;
and determining pose information of the template image based on the coordinate system.
4. A method according to claim 3, wherein constructing a coordinate system based on the region of interest in the template image comprises:
determining a first boundary line and a second boundary line based on the region of interest in the template image;
determining an origin of the coordinate system according to an intersection point of the first boundary line and the second boundary line;
determining coordinate axes of the coordinate system according to the first boundary line and a perpendicular to the first boundary line;
and constructing the coordinate system according to the origin and the coordinate axes.
5. The method according to claim 2, wherein the performing morphological processing on the initial region to be detected and determining a target region to be detected in the original image to obtain the image to be detected comprises:
screening pixel points of the initial region to be detected based on gray values of the pixel points;
determining a plurality of connected regions according to the screened pixel points;
screening the plurality of connected regions according to areas of the connected regions to obtain a candidate region to be detected;
and adjusting smoothness and size of the candidate region to be detected, and determining the target region to be detected to obtain the image to be detected.
6. The method according to claim 1, wherein the performing non-uniformity correction processing on the target region to be detected of the image to be detected to obtain the homogenized image comprises:
acquiring a first mask region corresponding to the target region to be detected;
scaling the original image and the first mask region respectively to obtain a scaled original image and a scaled first mask region;
multiplying the scaled original image by the scaled first mask region to obtain a first intermediate image;
dividing a one-dimensional array corresponding to the first intermediate image by a one-dimensional array corresponding to the scaled first mask region to obtain a second intermediate image;
and subtracting the second intermediate image from the scaled original image and adding a preset background gray value to obtain the homogenized image.
7. The method according to claim 1, wherein the performing a gray-scale morphology operation on the homogenized image to obtain a morphology transformation image comprises:
determining a forward ROI rectangle corresponding to the homogenized image;
and performing a gray-scale morphology operation on the forward ROI rectangle to obtain the morphology transformation image.
8. A visual detection device, the device comprising:
the image acquisition module is used for acquiring an original image and a template image of a product to be detected;
the to-be-detected region determining module is used for determining a target region to be detected in the original image according to the template image to obtain an image to be detected;
the first processing module is used for performing non-uniformity correction processing on a target region to be detected of the image to be detected to obtain a homogenized image;
the second processing module is used for performing a gray-scale morphology operation on the homogenized image to obtain a morphology transformation image;
the third processing module is used for performing defect enhancement processing on the morphology transformation image to obtain a defect-enhanced image;
and the detection module is used for detecting the defect-enhanced image to obtain a visual detection result of the product to be detected.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
10. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-7.
CN202410244569.6A 2024-03-04 2024-03-04 Visual detection method and device, electronic equipment and storage medium Pending CN118096698A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410244569.6A CN118096698A (en) 2024-03-04 2024-03-04 Visual detection method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN118096698A true CN118096698A (en) 2024-05-28

Family

ID=91148756

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410244569.6A Pending CN118096698A (en) 2024-03-04 2024-03-04 Visual detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN118096698A (en)

Similar Documents

Publication Publication Date Title
CN107507173B (en) No-reference definition evaluation method and system for full-slice image
CN115908269B (en) Visual defect detection method, visual defect detection device, storage medium and computer equipment
CN111833306A (en) Defect detection method and model training method for defect detection
CN115456956B (en) Method, equipment and storage medium for detecting scratches of liquid crystal display
CN115908415B (en) Edge-based defect detection method, device, equipment and storage medium
CN116152261B (en) Visual inspection system for quality of printed product
CN115880288B (en) Detection method, system and computer equipment for electronic element welding
CN117392042A (en) Defect detection method, defect detection apparatus, and storage medium
CN114298985B (en) Defect detection method, device, equipment and storage medium
CN116559177A (en) Defect detection method, device, equipment and storage medium
CN113066088A (en) Detection method, detection device and storage medium in industrial detection
CN112070762A (en) Mura defect detection method and device for liquid crystal panel, storage medium and terminal
CN111221996A (en) Instrument screen visual detection method and system
CN114674826A (en) Visual detection method and detection system based on cloth
CN116385415A (en) Edge defect detection method, device, equipment and storage medium
Luo et al. Adaptive canny and semantic segmentation networks based on feature fusion for road crack detection
CN118096698A (en) Visual detection method and device, electronic equipment and storage medium
CN114937003A (en) Multi-type defect detection system and method for glass panel
CN109949245B (en) Cross laser detection positioning method and device, storage medium and computer equipment
JP6114559B2 (en) Automatic unevenness detector for flat panel display
CN114298984B (en) Method and device for detecting screen penetration line, electronic equipment and storage medium
CN112652004B (en) Image processing method, device, equipment and medium
CN117036282A (en) Defect detection method, device, equipment and storage medium
CN114581890B (en) Method and device for determining lane line, electronic equipment and storage medium
CN117593780B (en) Wrinkle depth index determination method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination