CN117830251A - Defect analysis method, defect analysis device and electronic equipment - Google Patents

Defect analysis method, defect analysis device and electronic equipment

Info

Publication number
CN117830251A
Authority
CN
China
Prior art keywords
image
defect
dimensional
dimensional image
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311840561.8A
Other languages
Chinese (zh)
Inventor
左纯子
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Luster LightTech Co Ltd
Original Assignee
Luster LightTech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Luster LightTech Co Ltd filed Critical Luster LightTech Co Ltd
Priority to CN202311840561.8A priority Critical patent/CN117830251A/en
Publication of CN117830251A publication Critical patent/CN117830251A/en
Pending legal-status Critical Current

Abstract

The application discloses a defect analysis method, a defect analysis device and electronic equipment. The defect analysis method comprises the following steps: acquiring a two-dimensional image and a three-dimensional image of the surface of a piece to be detected; processing the two-dimensional image to obtain a defect area of the surface of the to-be-detected piece in the two-dimensional image, and processing the three-dimensional image to obtain a defect area of the surface of the to-be-detected piece in the three-dimensional image; fusing the two-dimensional image and the three-dimensional image to obtain fused image data of the surface of the to-be-detected piece, wherein the fused image data comprises three-dimensional coordinates of the pixel points corresponding to points on the surface of the to-be-detected piece; and analyzing the fused image data to obtain defect information of the surface of the to-be-detected piece. By analyzing the three-dimensional coordinates of the pixel points corresponding to the defect points, the type of a defect can be determined to be a bulge or a recess, the distance between each such pixel point and a reference plane can be obtained, and the depth of each defect area relative to the surface of the to-be-detected piece can thus be derived, making the analysis of defects more accurate.

Description

Defect analysis method, defect analysis device and electronic equipment
Technical Field
The present invention relates to the field of image processing, and more particularly, to a defect analysis method, a defect analysis apparatus, and an electronic device.
Background
Some parts to be detected, such as lithium batteries, have defects such as pits, bulges or scratches on their surfaces, which easily cause potential safety hazards: the battery may be damaged, spontaneously ignite or even explode. The existing method for detecting defects on the surface of a to-be-detected piece generally adopts a scheme combining 2D scanning and deep learning, but 2D scanning only yields the size of a defect in two dimensions; the type of the defect is difficult to judge, and the depth information of the defect cannot be obtained.
Disclosure of Invention
The embodiment of the application provides a defect analysis method, a defect analysis device and electronic equipment, which are at least used for solving the problems that the type of a defect is difficult to judge and depth information of the defect cannot be obtained.
The defect analysis method of the embodiment of the application comprises the following steps: acquiring a two-dimensional image and a three-dimensional image of the surface of a piece to be detected; processing the two-dimensional image to obtain a defect area of the surface of the to-be-detected object in the two-dimensional image, and processing the three-dimensional image to obtain a defect area of the surface of the to-be-detected object in the three-dimensional image; the two-dimensional image and the three-dimensional image are fused to obtain fused image data of the surface of the to-be-detected object, wherein the fused image data comprises three-dimensional coordinates of pixel points corresponding to points of the surface of the to-be-detected object; and analyzing the fused image data to obtain defect information of the surface of the to-be-detected piece.
In some embodiments, the acquiring a three-dimensional image of the surface of the part to be inspected includes: acquiring a three-dimensional original image of the surface of the piece to be detected; performing image expansion processing and image corrosion processing on the original image to obtain an expansion image and a corrosion image; and performing differential processing on the expansion image and the corrosion image to acquire the three-dimensional image.
In some embodiments, the acquiring a three-dimensional image of the surface of the part to be inspected includes: acquiring a three-dimensional original image of the surface of the piece to be detected; performing image expansion processing and image corrosion processing on the original image to obtain an expansion image and a corrosion image; performing differential processing on the expansion image and the corrosion image to obtain a differential image; and performing binarization processing on the differential image to obtain the three-dimensional image.
In some embodiments, the fusing the two-dimensional image and the three-dimensional image to obtain fused image data of the surface of the object to be inspected includes: obtaining an affine matrix; mapping the two-dimensional image on the three-dimensional image according to the affine matrix to obtain a mapped image; and performing image expansion processing on the mapping image to acquire the fused image data.
In some embodiments, the defect area includes at least one defect point, each of the defect points corresponds to one pixel point in the fused image data, and the analyzing the fused image data to obtain defect information of the surface of the object to be detected includes: performing distance conversion processing on the pixel points corresponding to the defect points to obtain reference points; acquiring, in the fused image data, three-dimensional coordinates of the reference points and three-dimensional coordinates of the pixel points corresponding to the defect points; obtaining a reference fitting surface by fitting the coordinates of the reference points; and analyzing the positional relationship between the three-dimensional coordinates of each defect point and the reference fitting surface to obtain the defect information corresponding to the defect area.
In some embodiments, the defect information includes a defect type, and the analyzing the positional relationship between the three-dimensional coordinates of each of the defect points and the reference fitting surface to obtain the defect information corresponding to the defect area includes: calculating a distance difference value between the three-dimensional coordinates of each pixel point corresponding to the defect point and the reference fitting surface; acquiring a first number of first pixel points higher than the reference fitting surface and a second number of second pixel points lower than the reference fitting surface according to the distance differences; and obtaining the defect type according to the first quantity and the second quantity.
In some embodiments, the obtaining the defect type from the first number and the second number includes: in the case where the first number is greater than the second number, the defect type is a bump; in case the first number is smaller than the second number, the defect type is a pit.
In some embodiments, the defect information includes a defect depth, and the analyzing the positional relationship between the three-dimensional coordinates of each of the defect points and the reference fitting surface to obtain the defect information corresponding to the defect area further includes: and obtaining the defect depth of the defect area corresponding to the pixel points according to the distance differences.
The defect analysis device of the embodiment of the application comprises an acquisition module, a processing module, a fusion module and an analysis module. The acquisition module is used for acquiring a two-dimensional image and a three-dimensional image of the surface of the to-be-detected piece. The processing module is used for processing the two-dimensional image to obtain a defect area of the surface of the to-be-detected piece in the two-dimensional image, and processing the three-dimensional image to obtain a defect area of the surface of the to-be-detected piece in the three-dimensional image. The fusion module is used for carrying out fusion processing on the two-dimensional image and the three-dimensional image to obtain fusion image data of the surface of the to-be-detected piece, wherein the fusion image data comprises three-dimensional coordinates of pixel points corresponding to each point of the surface of the to-be-detected piece. The analysis module is used for analyzing the fused image data to obtain defect information of the surface of the to-be-detected piece.
The electronic device of an embodiment of the application includes a memory and one or more processors. The memory stores a computer program. One or more of the processors are configured to perform the defect analysis method according to any one of the embodiments described above.
In the defect analysis method, the defect analysis device and the electronic equipment, when the two-dimensional image and the three-dimensional image are fused to obtain the fused image data, the three-dimensional coordinates of the pixel points corresponding to the defect points in the defect area of the surface of the object to be detected, together with a reference plane, can be obtained by analyzing the fused image data. By analyzing the three-dimensional coordinates of the pixel points corresponding to each defect point, the type of the defect can be determined to be convex or concave, the distance between the pixel points corresponding to each defect point and the reference plane can be obtained, and thus the height or depth of each defect area relative to the surface of the to-be-detected piece can be derived, so that the analysis and measurement of defects are more accurate.
Additional aspects and advantages of embodiments of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart of a defect analysis method according to certain embodiments of the present application;
FIG. 2 is a block diagram of a defect analysis device according to certain embodiments of the present application;
FIG. 3 is a schematic illustration of acquiring two-dimensional and three-dimensional images in accordance with certain embodiments of the present application;
FIG. 4 is a schematic diagram of a fusion of two-dimensional and three-dimensional images in accordance with certain embodiments of the present application;
FIG. 5 is a flow chart of a defect analysis method according to certain embodiments of the present application;
FIG. 6 is a schematic diagram of image dilation and image erosion in accordance with certain embodiments of the present application;
FIG. 7 is a flow chart of a defect analysis method according to certain embodiments of the present application;
FIG. 8 is a schematic diagram of image binarization according to certain embodiments of the present application;
FIG. 9 is a flow chart of a defect analysis method according to certain embodiments of the present application;
FIG. 10 is a schematic illustration of obtaining an affine matrix according to certain embodiments of the present application;
FIG. 11 is a flow chart of a defect analysis method according to certain embodiments of the present application;
FIG. 12 is a schematic illustration of acquiring a reference fit surface in accordance with certain embodiments of the present application;
FIG. 13 is a flow chart of a defect analysis method according to certain embodiments of the present application;
FIG. 14 is a schematic illustration of a reference fit surface according to some embodiments of the present application;
FIG. 15 is a flow chart of a defect analysis method according to certain embodiments of the present application;
FIG. 16 is a table schematic of defect data of certain embodiments of the present application;
fig. 17 is a schematic structural diagram of an electronic device according to some embodiments of the present application.
Description of main reference numerals:
100. a defect analysis device; 10. an acquisition module; 30. a processing module; 50. a fusion module; 70. an analysis module.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for explaining the embodiments of the present application and are not to be construed as limiting the embodiments of the present application.
In the description of the present application, it should be understood that the terms "thickness," "upper," "top," "bottom," "inner," "outer," and the like indicate an orientation or a positional relationship based on that shown in the drawings, merely for convenience of description and to simplify the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present application. And the terms "first", "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more features. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the description of the present application, it should be noted that, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly: for example, a connection may be fixed, detachable, or integral; it may be mechanical or electrical; it may be direct, or indirect through an intermediary; and it may be an internal communication between two elements or an interaction relationship between two elements.
Referring to fig. 1 and 2, in some embodiments, the defect analysis method of the present application includes:
01: acquiring a two-dimensional image and a three-dimensional image of the surface of a piece to be detected;
03: processing the two-dimensional image to obtain a defect area of the surface of the to-be-detected piece in the two-dimensional image, and processing the three-dimensional image to obtain a defect area of the surface of the to-be-detected piece in the three-dimensional image;
05: the two-dimensional image and the three-dimensional image are fused to obtain fused image data of the surface of the to-be-detected piece, wherein the fused image data comprises three-dimensional coordinates of pixel points corresponding to points of the surface of the to-be-detected piece; and
07: and analyzing the fused image data to obtain defect information of the surface of the to-be-detected piece.
Referring to the drawings, the defect analysis device 100 of the embodiment of the present application includes an acquisition module 10, a processing module 30, a fusion module 50, and an analysis module 70. The acquisition module 10 is used for acquiring two-dimensional images and three-dimensional images of the surface of the object to be detected. The processing module 30 is configured to process the two-dimensional image to obtain a defect area of the surface of the object to be detected in the two-dimensional image, and process the three-dimensional image to obtain a defect area of the surface of the object to be detected in the three-dimensional image. The fusion module 50 is configured to fuse the two-dimensional image and the three-dimensional image to obtain fused image data of the surface of the object to be detected, where the fused image data includes three-dimensional coordinates of pixel points corresponding to points of the surface of the object to be detected. The analysis module 70 is used for analyzing the fused image data to obtain defect information of the surface of the object to be detected.
With reference to fig. 3, the two-dimensional image refers to a plane image obtained after the surface of the object to be detected (for example, the surface of a lithium battery) is photographed by a 2D camera, and the plane image does not include depth information. The types of 2D cameras include, but are not limited to, 2D line scan cameras or 2D area array cameras, etc., without limitation. Because the surface of the object to be detected comprises a defect area and a non-defect area, the two-dimensional image obtained by shooting comprises the two-dimensional coordinates of each pixel point corresponding to the defect area and the two-dimensional coordinates of each pixel point corresponding to the non-defect area. The three-dimensional image refers to a stereoscopic image containing depth information obtained after a surface of an object to be detected (for example, a surface of a lithium battery) is photographed by a 3D camera. The types of 3D cameras include, but are not limited to, 3D laser scanning cameras, binocular structured light cameras, or TOF (Time-of-Flight) cameras, etc., without limitation.
The defects refer to areas formed by defects possibly generated in the preparation process of the surface of the to-be-detected piece, such as surface scratches, pits, bulges or gaps, and the like, and the defects comprise a plurality of defect points. The non-defect refers to a relatively smooth area of the surface of the object to be inspected, and the non-defect includes a plurality of non-defect points. Because the surface of the object to be detected comprises a plurality of defect points and a plurality of non-defect points, the three-dimensional image obtained by shooting comprises the three-dimensional coordinates of each pixel point in the defect area corresponding to the defect points and the three-dimensional coordinates of each pixel point in the non-defect area corresponding to the non-defect points. Since the relative heights between each defective point and each non-defective point are different, the luminance of the pixel in the defective area corresponding to the defective point and the pixel in the non-defective area corresponding to the non-defective point are generally different in the two-dimensional image and the three-dimensional image.
Please refer to fig. 4 in combination. Fusion refers to the process of mapping the acquired two-dimensional image and three-dimensional image to each other to obtain a new fused image. The fused image may be obtained by mapping the two-dimensional image onto the three-dimensional image or by mapping the three-dimensional image onto the two-dimensional image. The fused image data is the data contained in the fused image, and includes, but is not limited to, the three-dimensional coordinates of the pixel points corresponding to points on the surface of the object to be detected, the density distribution of the pixel points corresponding to defect areas on the surface of the object to be detected, and the like.
The process of mapping the two-dimensional image and the three-dimensional image to each other includes, but is not limited to, affine transformation, image dilation, image erosion, perspective transformation, or the like. In order to enable the two-dimensional image and the three-dimensional image to be fused with each other, parameters such as an angle and a position of a surface of the object to be detected, which are shot by the 2D camera, should be the same as parameters such as an angle and a position of a surface of the object to be detected, which are shot by the 3D camera, so that errors generated in the process of fusing the images are reduced. In the mapping process, each pixel point in the two-dimensional image corresponds to each corresponding pixel point in the three-dimensional image one by one, and coordinates (X0, Y0) of the pixel point in the two-dimensional image correspond to coordinates (X0, Y0, Z0) of the pixel point in the three-dimensional image.
The defect information refers to information of a defect region corresponding to a defect portion of the surface of the object to be inspected, which is acquired in the process of analyzing the fused image data. The types of defect information include, but are not limited to, defect type, depth information of defects relative to non-defects, or distribution of individual defects on the surface of the object to be inspected, without limitation. According to the defect information, a user can know the position of the defect on the surface of the to-be-detected piece and the depth of the defect, and the defect is detected more accurately.
In the defect analysis method, when the two-dimensional image and the three-dimensional image are fused to obtain fused image data, the three-dimensional coordinates of the pixel points corresponding to each defect point in the defect area of the surface of the object to be detected, together with a reference plane, can be obtained by analyzing the fused image data. By analyzing the three-dimensional coordinates of the pixel points corresponding to each defect point, the type of the defect can be determined to be convex or concave, the distance between the pixel points corresponding to each defect point and the reference plane can be obtained, and thus the height or depth of each defect area relative to the surface of the to-be-detected piece can be derived, so that the analysis and measurement of defects are more accurate.
Referring to fig. 5 and 6, in some embodiments, 01: acquiring a three-dimensional image of the surface of the part to be detected, comprising:
011: acquiring a three-dimensional original image of the surface of a piece to be detected;
013: performing image expansion processing and image corrosion processing on the original image to obtain an expanded image and a corroded image; and
015: differential processing is performed on the inflation image and the erosion image to obtain a three-dimensional image.
Correspondingly, referring to fig. 2, the acquiring module 10 is further configured to acquire a three-dimensional original image of the surface of the object to be detected, perform image expansion processing and image corrosion processing on the original image to acquire an expansion image and a corrosion image, and perform differential processing on the expansion image and the corrosion image to acquire a three-dimensional image.
The three-dimensional original image of the surface of the object to be detected refers to the unprocessed image acquired by the acquisition module 10 after shooting by the 3D camera; the original image can be transmitted to the acquisition module 10 by the 3D camera, or can be read directly by the acquisition module 10 from a readable and writable memory of the 3D camera. The original image may be a gray scale image or a binary image, without limitation. The original image comprises pixel points corresponding to defect points of the to-be-detected piece and pixel points corresponding to non-defect points of the to-be-detected piece, and the brightness of these two kinds of pixel points is generally different.
The image expansion processing refers to a morphological analysis method of changing the shape of an image by changing the pixel value of the image. The image dilation process can enhance the outline of certain pixels in the original image and enlarge the size of the area of certain pixels in the original image. The image dilation can compare each pixel in the original image to be dilated with its surrounding pixels and take the maximum value as the value of the output pixel, so that the region edge formed by the multiple pixel points to be dilated becomes more obvious.
The expanded image is a new image obtained by performing image expansion processing on the original image. Compared with the original image, the expanded image contains the regions formed by the expanded pixels of the original image; each such region is larger in area than the corresponding region in the original image, while its outline shape remains the same as that of the corresponding region in the original image.
The image erosion process refers to a morphological analysis method in which the shape of an image is changed by changing the pixel value of the image, but the assignment of the pixel value of the image by the image erosion process is different from the assignment of the pixel value of the image by the image dilation process. Image erosion allows a preset structural element (also called a kernel or template) to be slid over the original image and compared to pixels at corresponding locations in the original image. If all pixels of the structural element match pixels of a corresponding location in the image, then the pixel value of that location remains unchanged. If any one pixel of the structural element does not match a pixel of a corresponding location in the image, then the pixel value of that location is set to 0 or other specified pixel value, thereby changing the shape and structure of the image. The image erosion process can remove tiny areas (e.g., burrs, etc.) in the original image, or separate the nearer two areas.
The corroded image is a new image obtained by image corrosion processing of the original image. Compared with the original image, the area of a region formed by corroded pixels in the corroded image is smaller than that of the corresponding region in the original image, and the outline shape of a corroded region is not necessarily the same as that of the corresponding region in the original image.
The differential processing refers to comparing and analyzing the expansion image and the corrosion image against each other, pixel by pixel, to obtain a three-dimensional image that can be used for mapping. Specifically, the differential processing may subtract corresponding pixel values of the expansion image and the corrosion image to weaken the similar portions of the image and highlight the varying portions; it may also add corresponding pixel values of the expansion image and the corrosion image to preserve the similar portions and suppress the varying portions.
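For illustration only, the expansion, corrosion and subtractive-difference steps can be sketched with OpenCV as follows; the file name and kernel size are assumptions, and the 3D data is assumed to be stored as a single-channel depth map:

```python
import cv2
import numpy as np

# Minimal sketch of the dilation/erosion/difference step; the file name and
# 5x5 kernel are assumptions, not values from the patent.
depth = cv2.imread("depth_map.png", cv2.IMREAD_GRAYSCALE)

kernel = np.ones((5, 5), np.uint8)        # preset structural element
dilated = cv2.dilate(depth, kernel)       # local maxima: regions grow
eroded = cv2.erode(depth, kernel)         # local minima: regions shrink
gradient = cv2.subtract(dilated, eroded)  # subtraction makes region edges stand out
```

The subtraction shown here is the classic morphological gradient; the additive variant mentioned above would combine the two images with cv2.add instead.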
Through the processing mode, errors, image noise and the like in the three-dimensional original image can be effectively eliminated or reduced, so that the boundary between all areas in the obtained three-dimensional image is clearer, and the defect area in the three-dimensional image is convenient to obtain. The calculation amount in the process of fusing the two-dimensional image and the three-dimensional image is reduced, so that the mutual correspondence of the two-dimensional image and the three-dimensional image is more accurate.
Further, in order to remove the jumping interference data generated in the three-dimensional image by factors such as machine vibration, machine positioning precision, calibration error or AI segmentation precision, a maximum/minimum filtering algorithm can be adopted to analyze the three-dimensional image: firstly, a representative value of the pixels in a preset convolution kernel area is selected as the central pixel value; this may be the average of the pixel values of all or some of the pixels, or the median or mode of the pixel values, without limitation. The pixel values of all pixels in the three-dimensional image are then sorted to obtain a maximum pixel value and a minimum pixel value, and the central pixel value is compared with the maximum pixel value and the minimum pixel value respectively. If the central pixel value is less than the minimum value, the minimum value is replaced with the central pixel value; if the central pixel value is greater than the maximum value, the maximum value is replaced with the central pixel value. The maximum/minimum filtering algorithm can reduce noise in the image and reduce jumping interference data in the three-dimensional image.
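One possible reading of this filtering step, sketched below, treats the local mean over the convolution window as the central value and replaces pixels that deviate too far from it; the window size, threshold, and the replacement rule itself are assumptions rather than the patent's exact algorithm:

```python
import cv2
import numpy as np

# Hedged despiking sketch: pixels whose deviation from the local mean exceeds
# a threshold are treated as jump interference and pulled back to that mean.
# ksize and thresh are illustrative guesses.
def despike(depth: np.ndarray, ksize: int = 5, thresh: float = 10.0) -> np.ndarray:
    local_mean = cv2.blur(depth, (ksize, ksize))
    deviation = np.abs(depth.astype(np.float32) - local_mean.astype(np.float32))
    out = depth.copy()
    out[deviation > thresh] = local_mean[deviation > thresh]
    return out
```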
Referring to fig. 7 and 8, in some embodiments, 01: acquiring a three-dimensional image of the surface of the part to be detected, comprising:
011: acquiring a three-dimensional original image of the surface of a piece to be detected;
013: performing image expansion processing and image corrosion processing on the original image to obtain an expanded image and a corroded image;
0151: performing differential processing on the expansion image and the corrosion image to obtain a differential image; and
0153: and carrying out binarization processing on the differential image to obtain a three-dimensional image.
Correspondingly, referring to fig. 2, the acquiring module 10 is further configured to acquire a three-dimensional original image of the surface of the object to be detected, perform image expansion processing and image corrosion processing on the original image to acquire an expansion image and a corrosion image, perform differential processing on the expansion image and the corrosion image to acquire a differential image, and perform binarization processing on the differential image to obtain a three-dimensional image.
The differential image is a new image obtained by differential processing of the expansion image and the corrosion image, and compared with the original image, the differential image can not only reserve specific characteristics in the original image, but also remove noise in the original image, so that the formed differential image is clearer and more accurate.
The binarization process refers to a process of setting the gray value of a pixel point on an image to 0 or 255 so that the entire image exhibits a remarkable black-and-white effect. Methods for binarizing the differential image include, but are not limited to, simple binary methods, average methods, bimodal methods, or the Otsu's method (OTSU method), etc. After binarizing the differential image, the data amount in the obtained three-dimensional image is greatly reduced compared with the differential image, and the boundary of the region formed by a plurality of pixel points with different brightness can be highlighted, so that the three-dimensional image can be conveniently analyzed to obtain a defect region.
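As a minimal sketch, the OTSU variant of this step can be written with OpenCV as follows; the file name is a placeholder and the differential image is assumed to be an 8-bit single-channel array:

```python
import cv2

# Otsu binarization of the differential image; the threshold argument (0) is
# ignored because THRESH_OTSU computes the threshold from the histogram.
diff = cv2.imread("difference.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
_, three_d_mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```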
Referring to fig. 9 and 10, in some embodiments, 05: the fusion processing of the two-dimensional image and the three-dimensional image to obtain fused image data of the surface of the object to be detected comprises the following steps:
051: obtaining an affine matrix;
053: mapping the two-dimensional image on the three-dimensional image according to the affine matrix to obtain a mapped image; and
055: and performing image expansion processing on the mapping image to obtain fused image data.
Correspondingly, please refer to fig. 2, the fusion module 50 is further configured to acquire an affine matrix, map a two-dimensional image on a three-dimensional image according to the affine matrix, acquire a mapped image, and perform image expansion processing on the mapped image to acquire fused image data.
The affine matrix refers to the matrix by which the two-dimensional image is affine-transformed so that it can be mapped onto the three-dimensional image in the defect detection method. In some embodiments, the affine matrix is a matrix preset within the fusion module 50. In other embodiments, the affine matrix is obtained by calibrating the 2D camera used to take the two-dimensional images and the 3D camera used to take the three-dimensional images during the detection process.
Specifically, in some embodiments, the calibration process is as follows:
firstly, a user needs to set a plurality of calibration objects on the surface of an object to be detected, and the object to be detected can be a device independently used for calibration or a piece to be detected. The calibration object and the object to be measured can reflect light with different colors, so that the brightness of the calibration object in the image is different from the brightness of the object to be measured in the image. In some embodiments, the calibration object may be a circular ceramic plate, where the thickness of the circular ceramic plate is about 1mm, and the plurality of calibration objects are respectively located at different positions on the same surface of the object to be measured. Secondly, acquiring images of the surface of the measured object provided with the calibration object at the same position by using a 2D camera and a 3D camera so as to obtain a two-dimensional image and a three-dimensional image for calibration. The two-dimensional calibration area corresponding to the calibration object can be obtained after the calibrated two-dimensional image is processed, and the three-dimensional calibration area corresponding to the calibration object can be obtained after the calibrated three-dimensional image is processed. For a plurality of calibration objects, each calibration object has a corresponding two-dimensional calibration area and a corresponding three-dimensional calibration area, and under the condition that the calibration areas are sequentially selected, the sequence of selecting the two-dimensional calibration areas on the calibrated two-dimensional images by a user is the same as the sequence of selecting the three-dimensional calibration areas on the calibrated three-dimensional images by the user. And thirdly, after the two-dimensional calibration area and the three-dimensional calibration area corresponding to each calibration object are obtained, extracting a central point in each calibration area as a calibration point, and analyzing to obtain coordinates corresponding to the calibration points. For example: under the condition that the cross section of the calibration object is circular, the calibration points are the circle center of the circular two-dimensional calibration area and the circle center of the circular three-dimensional calibration area corresponding to the calibration object. And finally, fitting by using the calibration point coordinates of each corresponding two-dimensional calibration area and the calibration point coordinates of the corresponding three-dimensional calibration area, and obtaining an affine matrix. Fitting methods include, but are not limited to, least squares fitting, kernel methods, spline methods, or the like.
Referring to fig. 4, mapping refers to a process of mapping each pixel point in the two-dimensional image to each coordinate point in the three-dimensional image according to a corresponding relationship so that the two-dimensional image can be projected to the three-dimensional image and obtain three-dimensional coordinates. The mapping image refers to a new image with depth information obtained by mapping a two-dimensional image and a three-dimensional image. The mapping image at least comprises coordinate information of each pixel point corresponding to the defect point and the non-defect point on the to-be-detected piece.
The fused image data is image data obtained by performing image expansion processing on the mapped image. Because the influence of vibration of a machine, positioning accuracy of the machine, calibration error, AI segmentation accuracy and the like exists in the detection process, the pixel point coordinates corresponding to the defect points of the to-be-detected piece in the mapping image cannot completely cover the defects. After the image expansion processing is carried out on the mapping image, the coordinates of the pixel points corresponding to the defect points of the to-be-detected piece in the fused image data can completely cover the defect, and the coordinates of the pixel points corresponding to the defect points obtained after expansion are more accurate.
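Continuing the sketches above, steps 051-055 might look as follows; here image_2d stands for the binary defect mask derived from the processed two-dimensional image (an assumption), while affine and depth come from the earlier snippets:

```python
import cv2
import numpy as np

# Warp the 2D image into the 3D image's pixel grid, dilate the mapped defect
# mask so registration error still leaves the defect fully covered, and attach
# a Z coordinate to every pixel. The 7x7 kernel size is a guess.
h, w = depth.shape
mapped = cv2.warpAffine(image_2d, affine, (w, h))

kernel = np.ones((7, 7), np.uint8)
defect_mask = cv2.dilate((mapped > 0).astype(np.uint8), kernel)

xs, ys = np.meshgrid(np.arange(w), np.arange(h))
fused = np.stack([xs, ys, depth], axis=-1)   # (X, Y, Z) for every pixel
```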
Referring to fig. 11 and 12, in some embodiments, the defect area includes at least one defect point, each corresponding to one pixel point in the fused image data, 07: analyzing the fused image data to obtain defect information of the surface of the part to be detected, including:
071: performing distance conversion processing on the pixel points corresponding to the defect points to obtain reference points;
073: acquiring three-dimensional coordinates of a reference point in the fused image data and three-dimensional coordinates of a pixel point corresponding to the defect point;
075: obtaining a reference fitting surface according to coordinate fitting of the reference point; and
077: and analyzing the position relation between the three-dimensional coordinates of each defect point and the reference fitting surface to obtain defect information corresponding to the defect area.
Correspondingly, referring to fig. 2, the analysis module 70 is further configured to perform a distance transformation process on the pixel points corresponding to the defect points, so as to obtain reference points, obtain three-dimensional coordinates of the reference points and three-dimensional coordinates of the pixel points corresponding to the defect points in the fused image data, obtain a reference fitting surface according to coordinate fitting of the reference points, and analyze a positional relationship between the three-dimensional coordinates of each defect point and the reference fitting surface to obtain defect information corresponding to the defect region.
The distance transformation is an image transformation algorithm in image processing; given a preset distance, it computes the position of each pixel point after being shifted by that distance. A reference point is a point obtained by the distance transformation of a pixel point; it is located in the non-defect area, corresponds to a non-defect point, and has corresponding three-dimensional coordinates.
Referring to fig. 14, the reference fitting surface refers to a plane or curved surface, serving as a reference, formed by fitting the acquired plurality of reference points. Since the reference points are all located in the non-defective region, the reference fitting surface fitted from the plurality of reference points can be approximately regarded as the smooth surface of the object to be inspected. The preset distance should be at least greater than half the distance between the two farthest-apart points in the defect region, so that the reference points obtained after the distance transformation lie outside the defect region.
Illustrating: four defect points are arranged in a circular defect area, respectively above, below, to the left of and to the right of the center of the defect area, and the arc distances between adjacent defect points are the same. When the preset distance is a, the defect point located above moves upward by the distance a to obtain the reference point corresponding to the upper defect point, the defect point located below moves downward by the distance a to obtain the reference point corresponding to the lower defect point, the defect point on the left moves leftward by the distance a to obtain the reference point corresponding to the left defect point, and the defect point on the right moves rightward by the distance a to obtain the reference point corresponding to the right defect point. Since the surface of the object to be detected is generally an irregular surface inclined relative to the horizontal plane, acquiring the reference points corresponding to the pixel points of the defect points allows the surface of the non-defective portion of the object to be fitted more accurately as a reference surface against which each pixel point can be compared.
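Under the same assumptions as the earlier sketches, steps 071-075 can be outlined as follows; the band width used to pick reference points and the planar model z = a·x + b·y + c are assumptions, not the patent's prescribed form:

```python
import cv2
import numpy as np

# Reference points: pixels in a narrow ring just outside the defect mask,
# found via a distance transform of the non-defect region. The ring width
# (up to 10 px beyond the defect) is an illustrative guess.
dist = cv2.distanceTransform(1 - defect_mask, cv2.DIST_L2, 5)
ring = (dist > 0) & (dist < 10)

ys, xs = np.nonzero(ring)                 # pixel coordinates of reference points
zs = depth[ys, xs].astype(np.float64)

# Least-squares plane z = a*x + b*y + c through the reference points.
A = np.c_[xs, ys, np.ones(len(xs))]
(a, b, c), *_ = np.linalg.lstsq(A, zs, rcond=None)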
Referring to fig. 13 and 14, in some embodiments, the defect information includes defect type, 077: analyzing the position relation between the three-dimensional coordinates of each defect point and the reference fitting surface to obtain defect information corresponding to the defect area, wherein the method comprises the following steps:
0771: calculating a distance difference value between the three-dimensional coordinates of each pixel point corresponding to the defect point and the reference fitting surface;
0773: acquiring a first number of first pixel points higher than a reference fitting surface and a second number of second pixel points lower than the reference fitting surface according to the plurality of distance differences; and
0775: and obtaining the defect type according to the first quantity and the second quantity.
Correspondingly, referring to fig. 2, the analysis module 70 is further configured to calculate a distance difference between the three-dimensional coordinates of each pixel point corresponding to the defect point and the reference fitting surface, obtain a first number of first pixel points higher than the reference fitting surface and a second number of second pixel points lower than the reference fitting surface according to the plurality of distance differences, and obtain the defect type according to the first number and the second number.
The distance difference is the linear distance, in the height direction, between the three-dimensional coordinates of a pixel point corresponding to a defect point and the reference fitting surface. When a pixel point corresponding to a defect point is located above the reference fitting surface, it is a first pixel point higher than the reference fitting surface; a perpendicular is drawn from the first pixel point to the reference fitting surface, giving a first projection point on the reference fitting surface corresponding to the first pixel point. Both the first pixel point and the first projection point have three-dimensional coordinate information; the coordinate of the first pixel point on the X axis is the same as that of the first projection point, and the coordinate of the first pixel point on the Y axis is the same as that of the first projection point. Therefore, the difference between the coordinate of the first pixel point on the Z axis and the coordinate of the first projection point on the Z axis is the distance difference between the first pixel point and the reference fitting surface, and is recorded as a positive value. When a pixel point corresponding to a defect point is located below the reference fitting surface, it is a second pixel point lower than the reference fitting surface; a perpendicular is drawn from the second pixel point to the reference fitting surface, giving a second projection point on the reference fitting surface corresponding to the second pixel point. Both the second pixel point and the second projection point have three-dimensional coordinate information; the coordinate of the second pixel point on the X axis is the same as that of the second projection point, and the coordinate of the second pixel point on the Y axis is the same as that of the second projection point. Therefore, the difference between the coordinate of the second pixel point on the Z axis and the coordinate of the second projection point on the Z axis is the distance difference between the second pixel point and the reference fitting surface, and is recorded as a negative value.
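Continuing the plane-fit sketch (defect_mask, depth and the coefficients a, b, c come from the earlier snippets), the signed distance difference of step 0771 reduces to a per-pixel residual against the fitted plane:

```python
import numpy as np

# Signed height of each defect pixel relative to the reference fitting surface
# (positive: above, i.e. a "first" pixel; negative: below, a "second" pixel).
dy, dx = np.nonzero(defect_mask)
dz = depth[dy, dx].astype(np.float64)
diffs = dz - (a * dx + b * dy + c)
```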
Referring to fig. 14 and 15, in some embodiments, 0775: obtaining the defect type according to the first quantity and the second quantity comprises the following steps:
07751: in the case where the first number is greater than the second number, the defect type is a bump;
07753: in case the first number is smaller than the second number, the defect type is a pit.
Specifically, in the case where the first number is greater than the second number, the number of pixels higher than the reference fitting surface is greater than the number of pixels lower than it; the portion of the current defect protruding relative to the non-defect area is then larger than the portion recessed relative to it, so the defect may be defined as a bump. In the case where the first number is smaller than the second number, the number of pixels higher than the reference fitting surface is smaller than the number of pixels lower than it; the recessed portion of the current defect is then larger than the protruding portion, so the defect may be defined as a pit. In the special case where the first number is equal to the second number, the defect type may be judged from the relationship between the distance differences of the first pixel points and those of the second pixel points. For example, the distance differences of the first pixel points and of the second pixel points are averaged separately and then compared: where the average magnitude of the distance differences of the first pixel points is larger, the defect may be defined as a bump; where the average magnitude of the distance differences of the second pixel points is larger, the defect may be defined as a pit.
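A sketch of step 0775, continuing from the diffs array above and including the tie-break by mean magnitude just described; the variable names and labels are assumptions:

```python
import numpy as np

# Classify the defect from the counts of pixels above and below the fitted
# plane; on a tie, fall back to comparing mean magnitudes as described.
first = diffs[diffs > 0]                  # pixels above the reference surface
second = diffs[diffs < 0]                 # pixels below it
if len(first) != len(second):
    defect_type = "bump" if len(first) > len(second) else "pit"
else:
    defect_type = "bump" if first.mean() > np.abs(second).mean() else "pit"
```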
Referring to fig. 15 and 16, in some embodiments, the defect information includes defect depth, 077: analyzing the position relation between the three-dimensional coordinates of each defect point and the reference fitting surface to obtain defect information corresponding to the defect area, and further comprising:
0777: and obtaining the defect depth of the defect area corresponding to the plurality of pixel points according to the plurality of distance differences.
Correspondingly, please refer to the combined graph, the analysis module 70 is further configured to obtain the defect depths of the defect areas corresponding to the plurality of pixels according to the plurality of distance differences.
Specifically, in some embodiments, the coordinates and the distance difference value of each first pixel point and the coordinates and the distance difference value of each second pixel point may be respectively listed, and the data of the plurality of first pixel points and the data of the plurality of second pixel points may be screened according to a certain rule, so as to select part of the first pixel points and the second pixel points as measurement results and analyze the measurement results. For example: and sequencing each first pixel point according to the size of the distance difference value from large to small, and taking the data of a preset percentage as a measurement result. Or removing a predetermined percentage of the top-ranked data and the bottom-ranked data to reduce the error caused by the maximum and minimum values. In other embodiments, all the first pixel points and the second pixel points can be selected as measurement results and analyzed, and the data size is larger, but the obtained analysis results are more accurate.
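One possible rendering of the trimming described above, again continuing from the diffs array; the 5% tail fraction and the choice of the largest remaining magnitude as the reported depth are assumptions:

```python
import numpy as np

# Drop the extreme tails of the distance differences to suppress maximum/minimum
# outliers, then report the largest remaining magnitude as the defect depth.
lo, hi = np.percentile(diffs, [5, 95])
trimmed = diffs[(diffs >= lo) & (diffs <= hi)]
defect_depth = float(np.max(np.abs(trimmed)))
```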
The electronic device of an embodiment of the application includes a memory and one or more processors. The memory stores a computer program. The computer program, when executed by one or more processors, causes the one or more processors to perform the defect analysis method of any of the embodiments described above.
The electronic device may be a terminal, and an internal structure thereof may be as shown in fig. 17. The electronic device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface, the display unit and the input device are connected to the system bus through the input/output interface. Wherein the processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic device includes a nonvolatile storage medium and an internal memory.
The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The input/output interface of the electronic device is used to exchange information between the processor and the external device. The communication interface of the electronic device is used for conducting wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The display unit of the electronic device is used for forming a visual picture, and can be a display screen, a projection device or a virtual reality imaging device. The display screen can be a liquid crystal display screen or an electronic ink display screen, and the input device of the electronic equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the electronic equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by persons skilled in the art that the structures shown in the figures are block diagrams of only some of the structures associated with the aspects of the present application and do not constitute limitations of the electronic devices to which the aspects of the present application apply, and that a particular electronic device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In the description of the present specification, reference to the terms "certain embodiments," "in one example," "exemplary," and the like, means that a particular feature, structure, material, or characteristic described in connection with the embodiments or examples is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.
Logic and/or steps represented in the flow diagrams or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a system including a processor, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a computer-readable storage medium can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable storage medium may even be paper or another suitable medium upon which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the methods of the above-described embodiments may be implemented by a program instructing related hardware, where the program may be stored in a computer-readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments. In addition, each functional unit in the embodiments of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or as software functional modules. The integrated modules may also be stored in a computer-readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product. The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and are not to be construed as limiting the present application; changes, modifications, substitutions, and variations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. A defect analysis method, comprising:
acquiring a two-dimensional image and a three-dimensional image of the surface of a piece to be detected;
processing the two-dimensional image to obtain a defect area of the surface of the to-be-detected piece in the two-dimensional image, and processing the three-dimensional image to obtain a defect area of the surface of the to-be-detected piece in the three-dimensional image;
performing fusion processing on the two-dimensional image and the three-dimensional image to obtain fused image data of the surface of the to-be-detected piece, wherein the fused image data comprises three-dimensional coordinates of pixel points corresponding to points of the surface of the to-be-detected piece; and
analyzing the fused image data to obtain defect information of the surface of the to-be-detected piece.
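The claims do not fix how the two-dimensional image is processed into defect areas. As a hedged illustration only, one conventional approach is thresholding followed by connected-component extraction; the threshold value, the function name, and the assumption that defects appear darker than the intact surface are illustrative and not taken from the application:

```python
import cv2
import numpy as np

def defect_regions_2d(gray: np.ndarray, thresh: int = 40) -> list:
    """Illustrative 2D step of claim 1: return bounding boxes of
    candidate defect areas in a grayscale surface image."""
    # Assume defects image darker than the intact surface (illustrative).
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]
```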
2. The defect analysis method according to claim 1, wherein the acquiring a three-dimensional image of the surface of the to-be-detected piece comprises:
acquiring a three-dimensional original image of the surface of the to-be-detected piece;
performing image dilation processing and image erosion processing on the original image to obtain a dilated image and an eroded image; and
performing difference processing on the dilated image and the eroded image to acquire the three-dimensional image.
3. The defect analysis method according to claim 1, wherein the acquiring a three-dimensional image of the surface of the to-be-detected piece comprises:
acquiring a three-dimensional original image of the surface of the to-be-detected piece;
performing image dilation processing and image erosion processing on the original image to obtain a dilated image and an eroded image;
performing difference processing on the dilated image and the eroded image to obtain a difference image; and
performing binarization processing on the difference image to obtain the three-dimensional image.
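In morphological terms, claims 2 and 3 compute a morphological gradient of the depth map: dilation raises each pixel to its local maximum, erosion lowers it to its local minimum, and their difference highlights abrupt height changes such as pits and bumps. A minimal OpenCV sketch, assuming a single-channel floating-point depth map; the kernel size and binarization threshold are illustrative assumptions:

```python
import cv2
import numpy as np

def gradient_image(depth: np.ndarray, ksize: int = 5,
                   binarize: bool = True, thresh: float = 0.1) -> np.ndarray:
    """Dilation minus erosion (claim 2), optionally binarized (claim 3)."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    dilated = cv2.dilate(depth, kernel)  # local height maxima
    eroded = cv2.erode(depth, kernel)    # local height minima
    diff = cv2.absdiff(dilated, eroded)  # morphological gradient
    if not binarize:
        return diff                      # claim 2 stops here
    # Claim 3: binarize the difference image into a defect mask.
    return np.where(diff > thresh, 255, 0).astype(np.uint8)
```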
4. The defect analysis method according to claim 1, wherein the performing fusion processing on the two-dimensional image and the three-dimensional image to acquire fused image data of the surface of the to-be-detected piece comprises:
obtaining an affine matrix;
mapping the two-dimensional image onto the three-dimensional image according to the affine matrix to obtain a mapped image; and
performing image dilation processing on the mapped image to acquire the fused image data.
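A hedged sketch of claim 4 with OpenCV: the 2x3 affine matrix is assumed to come from an offline calibration between the 2D camera and the 3D sensor (the claim does not say how it is obtained), and the dilation closes small gaps left by resampling. Stacking the warped intensity with the depth map gives each pixel a gray value and a height, while its grid index supplies the planar coordinates:

```python
import cv2
import numpy as np

def fuse_images(img_2d: np.ndarray, depth: np.ndarray,
                affine: np.ndarray, ksize: int = 3) -> np.ndarray:
    """Illustrative fusion per claim 4: warp the 2D image into the
    depth map's frame, dilate it, and pair gray with height."""
    h, w = depth.shape[:2]
    mapped = cv2.warpAffine(img_2d, affine, (w, h))  # affine is 2x3
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    mapped = cv2.dilate(mapped, kernel)              # fill resampling holes
    # Each pixel now carries (gray, z); its grid index supplies (x, y).
    return np.dstack([mapped.astype(np.float32), depth.astype(np.float32)])
```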
5. The defect analysis method according to claim 1, wherein the defect area includes at least one defect point, each defect point corresponds to one pixel point in the fused image data, and the analyzing the fused image data to obtain defect information of the surface of the to-be-detected piece comprises:
performing distance transform processing on the pixel points corresponding to the defect points to obtain reference points;
acquiring three-dimensional coordinates of the reference points and three-dimensional coordinates of the pixel points corresponding to the defect points in the fused image data;
fitting a reference fitting surface to the coordinates of the reference points; and
analyzing the positional relationship between the three-dimensional coordinates of each defect point and the reference fitting surface to acquire the defect information corresponding to the defect area.
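A minimal sketch of the reference-surface part of claim 5, under two stated assumptions: the reference points are taken from a ring of intact pixels just outside the defect mask (selected via a distance transform), and the reference fitting surface is a plane z = ax + by + c fitted by least squares. Neither the ring width nor the plane model is fixed by the claim:

```python
import cv2
import numpy as np

def fit_reference_plane(defect_mask: np.ndarray, depth: np.ndarray,
                        band: tuple[int, int] = (5, 15)) -> np.ndarray:
    """Fit z = a*x + b*y + c to reference points near, but outside,
    the defect (claim 5). `band` is an illustrative ring width;
    defect_mask is assumed uint8 with 255 at defect pixels."""
    # Distance from every intact pixel to the nearest defect pixel.
    dist = cv2.distanceTransform(255 - defect_mask, cv2.DIST_L2, 5)
    ys, xs = np.where((dist >= band[0]) & (dist <= band[1]))
    zs = depth[ys, xs]
    # Least-squares plane through the (x, y, z) reference points.
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    coeffs, *_ = np.linalg.lstsq(A, zs, rcond=None)
    return coeffs  # (a, b, c)
```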
6. The defect analysis method according to claim 5, wherein the defect information includes a defect type, and the analyzing the positional relationship between the three-dimensional coordinates of each defect point and the reference fitting surface to obtain the defect information corresponding to the defect area comprises:
calculating a distance difference value between the three-dimensional coordinates of each pixel point corresponding to a defect point and the reference fitting surface;
acquiring, according to the distance difference values, a first number of first pixel points higher than the reference fitting surface and a second number of second pixel points lower than the reference fitting surface; and
acquiring the defect type according to the first number and the second number.
7. The defect analysis method of claim 6, wherein the acquiring the defect type according to the first number and the second number comprises:
in the case where the first number is greater than the second number, the defect type is a bump; and
in the case where the first number is smaller than the second number, the defect type is a pit.
8. The defect analysis method according to claim 6, wherein the defect information includes a defect depth, and the analyzing the positional relationship between the three-dimensional coordinates of each defect point and the reference fitting surface to obtain the defect information corresponding to the defect area further comprises:
obtaining, according to the distance difference values, the defect depth of the defect area corresponding to the pixel points.
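Claims 6 to 8 can then be read as a signed vote against the fitted plane: each defect pixel's distance difference determines whether it lies above or below, the majority sign gives the type, and the differences give the depth. A hedged sketch continuing from `fit_reference_plane` above; taking the maximum absolute difference as the defect depth is an assumption, since claim 8 only requires the depth to follow from the distance differences:

```python
import numpy as np

def classify_defect(defect_mask: np.ndarray, depth: np.ndarray,
                    coeffs: np.ndarray) -> tuple[str, float]:
    """Defect type (claims 6-7) and depth (claim 8) from the plane fit."""
    a, b, c = coeffs
    ys, xs = np.nonzero(defect_mask)
    # Signed distance difference of each defect pixel to the plane.
    diffs = depth[ys, xs] - (a * xs + b * ys + c)
    first = int(np.sum(diffs > 0))    # pixels above the plane
    second = int(np.sum(diffs < 0))   # pixels below the plane
    defect_type = "bump" if first > second else "pit"   # claim 7
    defect_depth = float(np.max(np.abs(diffs)))         # claim 8 (assumed)
    return defect_type, defect_depth
```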
9. A defect analysis device, comprising:
an acquisition module, configured to acquire a two-dimensional image and a three-dimensional image of the surface of a to-be-detected piece;
a processing module, configured to process the two-dimensional image to obtain a defect area of the surface of the to-be-detected piece in the two-dimensional image and to process the three-dimensional image to obtain a defect area of the surface of the to-be-detected piece in the three-dimensional image;
a fusion module, configured to perform fusion processing on the two-dimensional image and the three-dimensional image to obtain fused image data of the surface of the to-be-detected piece, wherein the fused image data comprises three-dimensional coordinates of pixel points corresponding to each point of the surface of the to-be-detected piece; and
an analysis module, configured to analyze the fused image data to acquire defect information of the surface of the to-be-detected piece.
10. An electronic device, comprising:
a memory in which a computer program is stored; and
one or more processors configured to perform the defect analysis method of any one of claims 1 to 8.
Priority Applications (1)

Application Number: CN202311840561.8A
Priority Date / Filing Date: 2023-12-28
Title: Defect analysis method, defect analysis device and electronic equipment

Publications (1)

Publication Number: CN117830251A
Publication Date: 2024-04-05

Family

ID: 90518594

Country Status (1)

Country: CN
Link: CN117830251A (en)

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination