CN115980059B - Surface defect detection system, detection method, detection device, detection equipment and storage medium - Google Patents



Publication number
CN115980059B
CN115980059B (application CN202211648264.9A)
Authority
CN
China
Prior art keywords: target image, pixel point, normal vector, relative height, images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211648264.9A
Other languages
Chinese (zh)
Other versions
CN115980059A (en)
Inventor
张正涛
吴搏
唐超
吕晓云
张武杰
杨化彬
Current Assignee
Zhongke Huiyuan Semiconductor Technology (Guangdong) Co.,Ltd.
Casi Vision Technology Luoyang Co Ltd
Casi Vision Technology Beijing Co Ltd
Original Assignee
Zhongke Huiyuan Intelligent Equipment Guangdong Co ltd
Casi Vision Technology Luoyang Co Ltd
Casi Vision Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhongke Huiyuan Intelligent Equipment Guangdong Co ltd, Casi Vision Technology Luoyang Co Ltd, Casi Vision Technology Beijing Co Ltd filed Critical Zhongke Huiyuan Intelligent Equipment Guangdong Co ltd
Priority to CN202211648264.9A priority Critical patent/CN115980059B/en
Publication of CN115980059A publication Critical patent/CN115980059A/en
Application granted granted Critical
Publication of CN115980059B publication Critical patent/CN115980059B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The present disclosure provides a surface defect detection system, together with a detection method, detection device, detection equipment, and storage medium. A plurality of images to be detected are acquired under the irradiation conditions of illumination subunits at different angles; a normal vector of the target image is determined from the gray values of the pixel points in the plurality of images to be detected; and the relative height relationship between each pixel point and its adjacent pixel points in the target image is determined from the normal vector, with the target image obtained from this relative height relationship. The defect features of the object surface can thus be clearly represented, and missed detections are avoided.

Description

Surface defect detection system, detection method, detection device, detection equipment and storage medium
Technical Field
The disclosure relates to the technical field of machine vision detection, and in particular relates to a surface defect detection system, a detection method, a detection device, detection equipment and a storage medium thereof.
Background
Automatic detection systems based on machine vision are currently developing rapidly. Compared with manual visual inspection, which suffers from low precision, poor repeatability, high cost, and lack of traceability, machine vision detection systems have greater development potential and will gradually replace manual inspection.
In the prior art, defect detection equipment for object surfaces generally uses a two-dimensional detection mode. The detected object can only exhibit contrast differences under a single incident light, and defects such as slight surface pits are difficult to highlight, which easily causes missed detections and degrades the performance of the detection equipment.
Disclosure of Invention
The present disclosure provides a surface defect detection system, and a detection method, device, apparatus, and storage medium thereof, so as to at least solve the above technical problems in the prior art.
According to a first aspect of the present disclosure, there is provided a surface defect detection system, the system comprising: a camera component, a support frame, a multi-angle light source and a host, wherein,
the multi-angle light source is positioned between the camera assembly and the object to be tested, and is formed by overlapping a plurality of annular lighting units with different diameters, wherein each annular lighting unit is formed by splicing a plurality of lighting subunits and is used for providing different angle light sources for the object to be tested;
the camera component is connected with the support frame and is used for shooting the object to be detected under the irradiation conditions of different illumination subunits to obtain a plurality of images to be detected and sending the images to be detected to the host;
the host is used for synthesizing the plurality of images to be detected into a target image.
In an embodiment, the multi-angle light source is formed by overlapping a high-angle annular lighting unit, a medium-angle annular lighting unit and a low-angle annular lighting unit; wherein,
the medium-angle annular lighting unit is positioned between the high-angle annular lighting unit and the low-angle annular lighting unit, and the low-angle annular lighting unit is close to the object to be measured;
the diameter of the high-angle annular lighting unit is smaller than that of the medium-angle annular lighting unit, and the diameter of the medium-angle annular lighting unit is smaller than that of the low-angle annular lighting unit.
In an embodiment, the high-angle annular lighting unit, the medium-angle annular lighting unit and the low-angle annular lighting unit are respectively formed by splicing four identical lighting sub-units.
In an embodiment, the high-angle annular lighting unit comprises a high-angle first lighting subunit, a high-angle second lighting subunit, a high-angle third lighting subunit and a high-angle fourth lighting subunit which are spliced;
the medium-angle annular lighting unit comprises a medium-angle first lighting subunit, a medium-angle second lighting subunit, a medium-angle third lighting subunit and a medium-angle fourth lighting subunit which are spliced;
the low-angle annular lighting unit comprises a low-angle first lighting subunit, a low-angle second lighting subunit, a low-angle third lighting subunit and a low-angle fourth lighting subunit which are spliced;
wherein the high angle first illumination subunit, the medium angle first illumination subunit, and the low angle first illumination subunit are located in a first orthogonal partition; the high angle second lighting subunit, the medium angle second lighting subunit, and the low angle second lighting subunit are located in a second orthogonal partition; the high angle third lighting subunit, the medium angle third lighting subunit, and the low angle third lighting subunit are located in a third orthogonal partition; the high angle fourth lighting subunit, the medium angle fourth lighting subunit, and the low angle fourth lighting subunit are located in a fourth orthogonal partition.
According to a second aspect of the present disclosure, there is provided a detection method of a surface defect detection system, based on the surface defect detection system, comprising:
under the irradiation conditions of the illumination subunits with different angles, a plurality of images to be detected are obtained;
determining a normal vector of the target image according to the gray values of all pixel points in the plurality of images to be detected;
and determining the relative height relation between each pixel point and the adjacent pixel points in the target image according to the normal vector of the target image, and obtaining the target image through the relative height relation.
In an embodiment, the acquiring a plurality of images to be measured under the illumination conditions of the illumination subunits with different angles includes:
respectively selecting illumination sub-units in four orthogonal partitions to obtain four target illumination sub-units, wherein the four target illumination sub-units belong to annular illumination units with different angles;
and under the irradiation conditions of the different target illumination subunits, acquiring the plurality of images to be detected.
In an embodiment, the determining the normal vector of the target image according to the gray values of the pixels in the plurality of images to be measured includes:
determining the product of the diffuse reflectance and the normal vector of each pixel point in the target image according to the Lambertian reflection principle and the gray values of the pixel points in the plurality of images to be detected;
and obtaining the normal vector of each pixel point of the target image by normalizing and separating the product of the diffuse reflectance and the normal vector of each pixel point in the target image.
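The normalization-and-separation step above can be sketched in a few lines of NumPy. This is a minimal illustration of Lambertian photometric stereo, assuming distant lights with known unit direction vectors and a linear camera response; the function and variable names are illustrative and not taken from the patent:

```python
import numpy as np

def estimate_normals(images, light_dirs):
    """Per-pixel unit normals and diffuse reflectance from k images taken
    under k known light directions, using the Lambertian model
    I = rho * (l . n): stacking the k equations per pixel gives
    L @ b = i with b = rho * n, solved by least squares."""
    h, w = images[0].shape
    I = np.stack([np.asarray(im, float).reshape(-1) for im in images])  # (k, H*W)
    L = np.asarray(light_dirs, float)                                   # (k, 3)
    B, *_ = np.linalg.lstsq(L, I, rcond=None)                           # (3, H*W)
    B = B.T.reshape(h, w, 3)                     # b = rho * n per pixel
    albedo = np.linalg.norm(B, axis=2)           # diffuse reflectance rho = |b|
    normals = B / np.maximum(albedo[..., None], 1e-12)  # unit normal n = b / |b|
    return normals, albedo
```

With four images taken under the four target illumination subunits, `light_dirs` would hold the four corresponding light direction vectors.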
In an embodiment, the determining the product of the diffuse reflectance and the normal vector of each pixel in the target image according to the Lambertian reflection principle and the gray values of each pixel in the plurality of images to be measured includes:
dividing pixel points in the images to be detected into c rows of pixel points respectively, wherein c is an integer greater than 1;
according to the Lambertian reflection principle and the gray values of the c rows of pixel points in the images to be detected, solving row by row for the product of the diffuse reflectance and the normal vector of each row of pixel points, to obtain the product of the diffuse reflectance and the normal vector of the c rows of pixel points;
and determining the product of the diffuse reflectance and the normal vector of each pixel point in the target image according to the product of the diffuse reflectance and the normal vector of the c rows of pixel points.
In an embodiment, the determining a relative height relationship between each pixel point and an adjacent pixel point in the target image according to the normal vector of the target image, and obtaining the target image according to the relative height relationship includes:
determining the relative height difference between each pixel point and the adjacent pixel point in the target image through the normal vector of the target image according to the relation between the gradient and the normal vector;
and determining the target image through the relative height difference between each pixel point and the adjacent pixel points in the target image and the height reference value.
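One simple way to realize this step is to accumulate the relative height differences outward from a single height reference value. The patent does not fix a particular integration scheme, so the row/column walk below is an illustrative choice with hypothetical function names:

```python
import numpy as np

def integrate_height(r_x, r_y, z_ref=0.0):
    """Relative height map from per-pixel height differences and a height
    reference value.  Following R(x,x-1) = z(x-1,y) - z(x,y), each step
    to the next pixel subtracts the corresponding difference."""
    h, w = r_x.shape
    z = np.empty((h, w), float)
    z[0, 0] = z_ref                                      # height reference value
    z[1:, 0] = z_ref - np.cumsum(r_y[1:, 0])             # walk down first column
    z[:, 1:] = z[:, :1] - np.cumsum(r_x[:, 1:], axis=1)  # walk right along rows
    return z
```

Other integrators (e.g. least-squares or Fourier-domain integration) would also fit the claim wording; this cumulative walk is simply the shortest to state.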
In an embodiment, the determining the relative height difference between each pixel point and the adjacent pixel point in the target image by the normal vector of the target image includes:
Inputting the three-dimensional coordinates of the normal vector of each pixel point of the target image into a relative height difference formula, and determining the relative height difference between each pixel point and the adjacent pixel points in the target image, wherein the relative height difference formula is as follows:
R(x,x-1) = z(x-1,y) - z(x,y) = n_x / n_z
R(y,y-1) = z(x,y-1) - z(x,y) = n_y / n_z
wherein n_x, n_y and n_z are respectively the three-dimensional coordinate values of the normal vector of each pixel point of the target image; z(x,y) is the height value of the pixel point at position (x,y) in the target image; R(x,x-1) = n_x / n_z is the relative height difference between the pixel point at position (x,y) and its adjacent pixel point (x-1,y) in the x direction in the target image; and R(y,y-1) is the relative height difference between the pixel point at position (x,y) and its adjacent pixel point (x,y-1) in the y direction in the target image.
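Evaluated directly, the two formulas above reduce to an element-wise division of the normal components, as in this NumPy sketch (the guard against near-zero n_z is an added practical detail, not from the patent):

```python
import numpy as np

def height_differences(normals):
    """Relative height differences from an (H, W, 3) normal field,
    evaluating R(x,x-1) = n_x / n_z and R(y,y-1) = n_y / n_z."""
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    nz = np.where(np.abs(nz) < 1e-12, 1e-12, nz)  # guard near-horizontal normals
    return nx / nz, ny / nz
```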
In an embodiment, after the determining the relative height difference between each pixel point and the adjacent pixel point in the target image, the method further includes:
determining the divergence value of each pixel point in the target image according to the normal vector of the target image;
strengthening the relative height difference between each pixel point and the adjacent pixel point in the target image through the divergence value of each pixel point in the target image to obtain the strengthening relative height difference between each pixel point and the adjacent pixel point in the target image;
Correspondingly, the determining the target image through the relative height difference between each pixel point and the adjacent pixel point in the target image and the height reference value comprises the following steps:
and determining the target image through the enhanced relative height difference between each pixel point and the adjacent pixel points in the target image and the height reference value.
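The enhancement step could look like the following sketch. Since the description does not give the exact combination rule, adding a weighted divergence of the tangential normal components is an assumed, illustrative choice:

```python
import numpy as np

def enhance_with_divergence(normals, r_x, r_y, alpha=1.0):
    """Strengthen the relative height differences with the divergence of
    the (n_x, n_y) component field.  The additive rule with weight alpha
    is an assumption; the patent only states that a divergence value is
    used for enhancement."""
    nx, ny = normals[..., 0], normals[..., 1]
    # discrete divergence of the tangential normal components
    div = np.gradient(nx, axis=1) + np.gradient(ny, axis=0)
    return r_x + alpha * div, r_y + alpha * div
```

On a perfectly flat region the divergence vanishes, so the enhancement leaves the height differences unchanged there and amplifies them only around bumps and pits.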
According to a third aspect of the present disclosure, there is provided a detection apparatus of a surface defect detection system, the apparatus comprising:
the image acquisition module to be measured is used for acquiring a plurality of images to be measured under the irradiation conditions of the illumination subunits with different angles;
the normal vector determining module is used for determining the normal vector of the target image according to the gray values of all pixel points in the plurality of images to be detected;
and the target image determining module is used for determining the relative height relation between each pixel point and the adjacent pixel points in the target image according to the normal vector of the target image, and obtaining the target image through the relative height relation.
In an embodiment, the image acquisition module to be tested is specifically configured to:
respectively selecting illumination sub-units in four orthogonal partitions to obtain four target illumination sub-units, wherein the four target illumination sub-units belong to annular illumination units with different angles;
and under the irradiation conditions of the different target illumination subunits, acquiring the plurality of images to be detected.
In an embodiment, the normal vector determination module is specifically configured to:
determining the product of the diffuse reflectance and the normal vector of each pixel point in the target image according to the Lambertian reflection principle and the gray values of the pixel points in the plurality of images to be detected;
and obtaining the normal vector of each pixel point of the target image by normalizing and separating the product of the diffuse reflectance and the normal vector of each pixel point in the target image.
In an embodiment, the normal vector determination module is specifically configured to:
dividing pixel points in the images to be detected into c rows of pixel points respectively, wherein c is an integer greater than 1;
according to the Lambertian reflection principle and the gray values of the c rows of pixel points in the images to be detected, solving row by row for the product of the diffuse reflectance and the normal vector of each row of pixel points, to obtain the product of the diffuse reflectance and the normal vector of the c rows of pixel points;
and determining the product of the diffuse reflectance and the normal vector of each pixel point in the target image according to the product of the diffuse reflectance and the normal vector of the c rows of pixel points.
In an embodiment, the target image determining module is specifically configured to:
Determining the relative height difference between each pixel point and the adjacent pixel point in the target image through the normal vector of the target image according to the relation between the gradient and the normal vector;
and determining the target image through the relative height difference between each pixel point and the adjacent pixel points in the target image and the height reference value.
In an embodiment, the target image determining module is specifically configured to:
inputting the three-dimensional coordinates of the normal vector of each pixel point of the target image into a relative height difference formula, and determining the relative height difference between each pixel point and the adjacent pixel points in the target image, wherein the relative height difference formula is as follows:
R(x,x-1) = z(x-1,y) - z(x,y) = n_x / n_z
R(y,y-1) = z(x,y-1) - z(x,y) = n_y / n_z
wherein n_x, n_y and n_z are respectively the three-dimensional coordinate values of the normal vector of each pixel point of the target image; z(x,y) is the height value of the pixel point at position (x,y) in the target image; R(x,x-1) = n_x / n_z is the relative height difference between the pixel point at position (x,y) and its adjacent pixel point (x-1,y) in the x direction in the target image; and R(y,y-1) is the relative height difference between the pixel point at position (x,y) and its adjacent pixel point (x,y-1) in the y direction in the target image.
In an embodiment, the target image determining module is specifically configured to:
After the relative height difference between each pixel point and the adjacent pixel points in the target image is determined, determining a divergence value of each pixel point in the target image according to a normal vector of the target image;
strengthening the relative height difference between each pixel point and the adjacent pixel point in the target image through the divergence value of each pixel point in the target image to obtain the strengthening relative height difference between each pixel point and the adjacent pixel point in the target image;
and determining the target image through the enhanced relative height difference between each pixel point and the adjacent pixel points in the target image and the height reference value.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods described in the present disclosure.
According to a fifth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of the present disclosure.
With the surface defect detection system, detection method, detection device, detection equipment, and storage medium of the present disclosure, a plurality of images to be detected are acquired under the irradiation conditions of illumination subunits at different angles; a normal vector of the target image is determined from the gray values of the pixel points in the plurality of images to be detected; and the relative height relationship between each pixel point and its adjacent pixel points in the target image is determined from the normal vector, with the target image obtained from this relative height relationship, so that the defect features of the object surface can be clearly represented and missed detections are avoided.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which:
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
FIG. 1 is a schematic diagram of a surface defect inspection system according to a first embodiment of the present disclosure;
FIG. 2 is a cross-sectional view showing a structure of a multi-angle light source in a surface defect inspection system according to an embodiment of the present disclosure;
FIG. 3 is a bottom view of a multi-angle light source in a surface defect inspection system according to one embodiment of the present disclosure;
FIG. 4 is a flow chart of a method for inspecting a surface defect inspection system according to a second embodiment of the present disclosure;
fig. 5 is a schematic diagram of an image to be measured taken under the condition of the medium-angle first lighting subunit according to a second embodiment of the present disclosure;
fig. 6 is a schematic diagram of an image to be measured taken under the condition of the medium-angle second lighting subunit according to a second embodiment of the present disclosure;
fig. 7 is a schematic diagram of an image to be measured taken under the condition of the medium-angle third lighting subunit according to a second embodiment of the present disclosure;
fig. 8 is a schematic diagram of an image to be measured taken under the condition of the medium-angle fourth lighting subunit according to a second embodiment of the present disclosure;
fig. 9 is a schematic diagram of a target image synthesized by a detection method based on a surface defect detection system according to a second embodiment of the present disclosure;
Fig. 10 is a schematic structural diagram of a detection device of a surface defect detection system according to a third embodiment of the present disclosure;
fig. 11 shows a schematic diagram of a composition structure of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, features and advantages of the present disclosure more comprehensible, the technical solutions in the embodiments of the present disclosure will be clearly described in conjunction with the accompanying drawings. It is apparent that the described embodiments are only some, but not all, embodiments of the present disclosure. Based on the embodiments in this disclosure, all other embodiments that a person skilled in the art would obtain without making any inventive effort are within the scope of protection of this disclosure.
The camera module (Camera Compact Module, CCM) is one of the important components in photographic capture, and generally consists of a lens, a sensor, a circuit board, metal sheets, and other components. Automatic detection systems based on machine vision are developing rapidly; compared with manual visual inspection, which suffers from low precision, poor repeatability, high cost, and lack of traceability, machine vision detection systems have greater development potential and will gradually replace manual inspection. At present, CCM machine vision defect detection equipment generally uses a two-dimensional detection mode: the detected object can only exhibit contrast differences under a single incident light, and defects such as slight surface pits are difficult to highlight, which causes missed detections and affects the performance of the detection equipment. Three-dimensional detection can further obtain a relative depth relationship and is more conducive to detecting slightly uneven defects on the surface of the detected object.
The surface defect detection system and detection method of the present disclosure can detect surface defects of camera modules and other types of objects, particularly surfaces satisfying the Lambertian reflection characteristic, as described in detail below.
Example 1
Fig. 1 is a schematic structural diagram of a surface defect detection system according to a first embodiment of the present disclosure. As shown in fig. 1, the system includes: a camera 101, a lens 102, a support frame 2, a multi-angle light source 103, and a host (not shown in fig. 1), wherein the camera 101 and the lens 102 constitute a camera assembly.
The multi-angle light source 103 is positioned between the camera component (the camera 101 and the lens 102) and the object to be measured, the multi-angle light source 103 is formed by overlapping a plurality of layers of annular lighting units with different diameters, and each annular lighting unit is formed by splicing a plurality of lighting subunits and is used for providing different angle light sources for the object to be measured; the camera component (the camera 101 and the lens 102) is connected with the support frame 2 and is used for shooting an object to be detected under the irradiation conditions of different illumination subunits to obtain a plurality of images to be detected and sending the images to be detected to the host; and the host is used for synthesizing the plurality of images to be detected into the target image.
The host comprises an industrial personal computer controller and an image processor. The industrial personal computer controller is used for controlling the switch of the surface defect detection system and sending corresponding operation instructions to the camera 101, the lens 102 and the multi-angle light source 103. The industrial personal computer controller adjusts the positions of the camera 101 and the lens 102 to focus clearly on the object to be measured, sets the drawing parameters of the camera 101, controls the multi-angle light source 103 to light the light sources of different areas according to the shape of the object to be measured, generates multi-angle light to irradiate the surface of the object to be measured, collects the original image of the object to be measured, and stores the original image as the image to be measured. The image processor is used for receiving the original image acquired by the camera 101, performing data processing, reconstructing a normal vector of the surface of the object to be detected, obtaining a relative depth map of the surface of the object to be detected, synthesizing a target image, and highlighting the surface defect of the object to be detected.
In the embodiment of the disclosure, the multi-angle light source is formed by superposing a high-angle annular lighting unit, a medium-angle annular lighting unit and a low-angle annular lighting unit. The medium-angle annular lighting unit is positioned between the high-angle annular lighting unit and the low-angle annular lighting unit, and the low-angle annular lighting unit is close to the object to be measured. The diameter of the high-angle annular lighting unit is smaller than that of the medium-angle annular lighting unit, and the diameter of the medium-angle annular lighting unit is smaller than that of the low-angle annular lighting unit. As shown in fig. 2, a structural cross-sectional view of the multi-angle light source in the surface defect detection system according to the first embodiment of the present disclosure, the light source includes multiple layers of annular lighting units with different diameters, namely a low-angle annular lighting unit 103-1, a medium-angle annular lighting unit 103-2 and a high-angle annular lighting unit 103-3. Illustratively, the light source spatial angles of the low-angle annular lighting unit 103-1, the medium-angle annular lighting unit 103-2 and the high-angle annular lighting unit 103-3 may be 20°, 60° and 80°, respectively, so that the covered spatial angles are as broad as possible.
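Assuming the quoted spatial angles are elevations above the object plane and that each subunit's azimuth sits at the centre of its orthogonal partition (45°, 135°, 225°, 315° for partitions one to four), a light direction vector for the photometric computation could be modelled as follows; the exact geometry is an assumption for illustration:

```python
import math

def light_direction(elevation_deg, partition):
    """Unit direction from the object toward a lighting subunit, modelled
    from its elevation angle and the centre azimuth of its orthogonal
    partition (1-4).  Illustrative geometry, not specified in the patent."""
    az = math.radians(45.0 + 90.0 * (partition - 1))
    el = math.radians(elevation_deg)
    return (math.cos(el) * math.cos(az),
            math.cos(el) * math.sin(az),
            math.sin(el))
```

A high-angle subunit (elevation near 80°) thus yields a direction close to the optical axis, while a low-angle subunit (near 20°) yields strongly grazing light, which is what makes shallow pits cast visible shading.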
The low-angle annular lighting unit 103-1, the medium-angle annular lighting unit 103-2 and the high-angle annular lighting unit 103-3 may each be formed by splicing a plurality of lighting sub-units; the lighting sub-units may be the same size or different sizes, and each lighting sub-unit's switch can be controlled independently. For example, the low-angle annular lighting unit 103-1 may consist of three lighting sub-units, which may be the same or different in size; the medium-angle annular lighting unit 103-2 may consist of four lighting sub-units, which may be the same or different in size; and the high-angle annular lighting unit 103-3 may consist of five lighting sub-units, which may be the same or different in size.
It should be noted that, in this embodiment, the number and the size of the illumination sub-units constituting each angle annular illumination unit are not limited, as long as the illumination sub-units of each angle annular illumination unit are spliced to form an annular illumination unit of a corresponding angle.
In the embodiment of the disclosure, the high-angle annular lighting unit, the medium-angle annular lighting unit and the low-angle annular lighting unit are respectively spliced by four identical lighting subunits.
In the embodiment of the disclosure, the high-angle annular lighting unit comprises a high-angle first lighting subunit, a high-angle second lighting subunit, a high-angle third lighting subunit and a high-angle fourth lighting subunit which are spliced; the medium-angle annular lighting unit comprises a medium-angle first lighting subunit, a medium-angle second lighting subunit, a medium-angle third lighting subunit and a medium-angle fourth lighting subunit which are spliced; the low-angle annular lighting unit comprises a low-angle first lighting subunit, a low-angle second lighting subunit, a low-angle third lighting subunit and a low-angle fourth lighting subunit which are spliced;
wherein the high angle first lighting subunit, the medium angle first lighting subunit, and the low angle first lighting subunit are located in a first orthogonal partition; the high-angle second illumination subunit, the medium-angle second illumination subunit, and the low-angle second illumination subunit are located in a second orthogonal partition; the high angle third lighting subunit, the medium angle third lighting subunit, and the low angle third lighting subunit are located in a third orthogonal partition; the high angle fourth lighting subunit, the medium angle fourth lighting subunit, and the low angle fourth lighting subunit are located in a fourth orthogonal partition.
Specifically, the high-angle annular lighting unit, the medium-angle annular lighting unit and the low-angle annular lighting unit in the embodiment may also be formed by respectively splicing four identical lighting sub-units. Fig. 3 is a bottom view of a structure of a multi-angle light source in a surface defect detection system according to an embodiment of the present disclosure, including: a high angle first lighting subunit 103-3-1, a high angle second lighting subunit 103-3-2, a high angle third lighting subunit 103-3-3, and a high angle fourth lighting subunit 103-3-4; a medium angle first lighting subunit 103-2-1, a medium angle second lighting subunit 103-2-2, a medium angle third lighting subunit 103-2-3, a medium angle fourth lighting subunit 103-2-4; the low angle first lighting subunit 103-1-1, the low angle second lighting subunit 103-1-2, the low angle third lighting subunit 103-1-3, and the low angle fourth lighting subunit 103-1-4.
As shown in FIG. 3, the high angle first lighting subunit 103-3-1, the medium angle first lighting subunit 103-2-1, and the low angle first lighting subunit 103-1-1 are located in a first orthogonal partition; the high angle second lighting subunit 103-3-2, the medium angle second lighting subunit 103-2-2, and the low angle second lighting subunit 103-1-2 are located in a second orthogonal partition; the high angle third lighting subunit 103-3-3, the medium angle third lighting subunit 103-2-3, and the low angle third lighting subunit 103-1-3 are located in a third orthogonal partition; the high angle fourth lighting subunit 103-3-4, the medium angle fourth lighting subunit 103-2-4, and the low angle fourth lighting subunit 103-1-4 are located in a fourth orthogonal partition.
Specifically, in this embodiment each lighting subunit illuminates the object to be measured, but the shadow areas each produces differ, and the spatial angle of the light emitted by the lighting subunits of each orthogonal partition differs at each angle. In this embodiment, the lighting subunit can therefore be selected flexibly according to the size of the protruding or recessed area of the object to be measured.
In the surface defect detection system provided by this embodiment, because the multi-angle light source is spliced together from customized lighting subunits of multiple angles and multiple partitions, the object to be measured can be photographed under light sources of different angles, so that the shadow areas alternate and invalid data is reduced, thereby enhancing the effectiveness of the depth information.
Example two
Fig. 4 is a flowchart of a detection method of a surface defect detection system according to a second embodiment of the present disclosure, where the method may be performed by a surface defect detection apparatus according to an embodiment of the present disclosure, and the apparatus may be implemented in software and/or hardware. The method specifically comprises the following steps:
S110, acquiring a plurality of images to be measured under the illumination conditions of illumination subunits with different angles.
An image to be measured is an image of the object to be measured captured by the camera under one light source angle.
Specifically, in order to alternate the shadows and obtain a clear overall view of the surface of the object to be measured, illumination subunits with different angles may be used to illuminate the object to be measured. For example, these may be illumination subunits located in different orthogonal partitions of the annular illumination unit of the same angle, or illumination subunits located in different orthogonal partitions of annular illumination units of different angles.
In an embodiment of the present disclosure, acquiring a plurality of images to be measured under the illumination conditions of illumination subunits of different angles includes: selecting one illumination subunit in each of the four orthogonal partitions to obtain four target illumination subunits, wherein the four target illumination subunits belong to annular illumination units with different angles; and acquiring a plurality of images to be measured under the illumination conditions of the different target illumination subunits.
Wherein the target lighting subunit may be a light source selected for illuminating the object to be measured.
Specifically, in this embodiment at least four sets of illumination subunits with different angles may be used. Since the four illumination subunits of the orthogonal partitions together illuminate a full circle around the object to be measured, one illumination subunit may be selected in each of the four orthogonal partitions to provide different illumination of the object to be measured and to capture different images to be measured.
In this embodiment, at least four groups of illumination subunits with different angles, namely the four target illumination subunits, are used to illuminate the object to be measured, which prevents the image data of shadow areas from affecting the accuracy of the image processing.
S120, determining the normal vector of the target image according to the gray values of the pixel points in the images to be detected.
The target image refers to a finally synthesized image, and can clearly represent the surface defect of the object to be detected.
In an embodiment of the present disclosure, determining a normal vector of a target image according to gray values of respective pixel points in a plurality of images to be measured includes: determining the product of diffuse reflectance and normal vector of each pixel point in the target image according to the lambertian reflection principle and the gray values of each pixel point in the images to be detected; and obtaining the normal vector of each pixel point of the target image by normalizing and separating the product of the diffuse reflectance and the normal vector of each pixel point in the target image.
Specifically, this embodiment adopts the photometric stereo method, which recovers the geometric distribution of the surface of the object to be measured using incident light from multiple angles. Compared with other traditional three-dimensional imaging methods, it can be realized with a single ordinary camera at low cost; no relative movement between the object and the camera and no image alignment are required, so the demands on the mechanical structure of the system are low. The object must have Lambertian reflection properties, so that the incident light is reflected diffusely.
Specifically, the incident light with an emission angle in a certain direction irradiates the object to be measured, and if the object to be measured meets the lambertian reflection condition, the following relation should be satisfied:
s(i,j) = I_j L_j ρ_i N_i (1)
wherein s(i,j) represents the pixel value of the pixel point with coordinates (i,j) in the image to be measured captured by the camera; I represents the intensity of the light source, and L represents the unit vector of the light source direction; ρ represents the diffuse reflectance of the surface of the object to be measured at that point, and N represents the normal vector of the surface of the object to be measured. I_j is the light intensity in the j direction at the pixel point with coordinates (i,j) in the image to be measured; L_j is the spatial angle of the light source in the j direction at the pixel point with coordinates (i,j); ρ_i is the diffuse reflectance in the i direction at the pixel point with coordinates (i,j); and N_i is the normal vector in the i direction at the pixel point with coordinates (i,j). I and L characterize the light source and can be determined from the different target illumination subunits calibrated in advance; ρ and N characterize the surface of the object to be measured, do not change with the light source, and are unknown quantities. S is the set of the pixel values s(i,j) of the image to be measured.
Specifically, formula (1) in this embodiment contains three unknowns, namely the three components of the product ρN; in theory, at least three groups of non-coplanar incident light must illuminate a point on the surface of the object to be measured for the normal vector N = (n_x, n_y, n_z) to be solved. In practice, shadows may be produced when a light source illuminates the surface of the object to be measured. To avoid the influence of shadows, this embodiment can select illumination light sources with appropriate incident angles according to the shape of the object to be measured, substitute the images to be measured captured under four groups of non-coplanar target illumination subunits into formula (1), and solve the resulting overdetermined equation by the least squares method.
It should be noted that non-coplanar means that the components (x, y, z) of the vectors L differ; if the L vectors are the same, the equation does not contain enough known quantities and cannot be solved. Therefore, the light sources used in this embodiment may be several low-angle orthogonal-partition light sources, several medium-angle or high-angle orthogonal-partition light sources, or several orthogonal-partition light sources of mixed angles. Because the annular illumination sources of the different angles are not coplanar, their L values differ. By way of example, this embodiment uses four sets of data to solve the three unknowns in equation (1), which avoids inaccurate solutions caused by shadow areas; by choosing the illumination angles reasonably, the influence of shadow areas on the data reconstruction can be effectively reduced.
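As an illustration of the Lambertian model in formula (1), the following minimal NumPy sketch synthesizes the gray value a camera would observe at one pixel under four non-coplanar lights. All numeric values here are invented for illustration; they are not calibration data from this disclosure.

```python
import numpy as np

# Four hypothetical non-coplanar light directions L (unit vectors) and
# intensities I, as formula (1) assumes; values are illustrative only.
L = np.array([[ 0.5,  0.0, 0.866],
              [-0.5,  0.0, 0.866],
              [ 0.0,  0.5, 0.866],
              [ 0.0, -0.5, 0.866]])
I = np.array([1.0, 1.0, 1.0, 1.0])

# Assumed surface properties at one pixel: diffuse reflectance rho and an
# (approximately unit-length) normal vector N.
rho = 0.8
N = np.array([0.1, 0.2, 0.9747])

# Lambertian model: one observed gray value per light, s = I * rho * (L . N).
s = I * rho * (L @ N)
```

Because the four L vectors are not coplanar, the four observations constrain all three components of ρN, which is what makes the inverse problem in the following paragraphs solvable.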
In an embodiment of the present disclosure, determining a product of diffuse reflectance and normal vector of each pixel in a target image according to a lambertian reflection principle and gray values of each pixel in a plurality of images to be measured includes: splitting pixel points in a plurality of images to be detected into c rows of pixel points respectively, wherein c is an integer greater than 1; according to the lambertian reflection principle and gray values of the c rows of pixel points in the images to be detected, solving the product of the diffuse reflectance and the normal vector of each row of pixel points row by row to obtain the product of the diffuse reflectance and the normal vector of the c rows of pixel points; and determining the product of the diffuse reflectance and the normal vector of each pixel point in the target image according to the product of the diffuse reflectance and the normal vector of the pixel points in the row c.
Specifically, this embodiment solves by alternating the shadows of a (a > 3) groups of incident light with different angles (that is, several groups of four non-coplanar target illumination subunits). For a single pixel point, the following set of linear equations is obtained:
s_a = I_a ρ (l_ax n_x + l_ay n_y + l_az n_z), a = 1, 2, … (2)
wherein s_a is the pixel value of the pixel point in the image to be measured captured under the a-th group of target illumination subunits; I_a is the light source intensity at that pixel point under the a-th group of target illumination subunits; l_ax, l_ay and l_az are the spatial angles of the light source in the x, y and z directions at that pixel point under the a-th group of target illumination subunits; ρ is the diffuse reflectance at that pixel point; and n_x, n_y and n_z are the components of the normal vector of that pixel point in the x, y and z directions.
It should be noted that formula (2) is the solution formula for a single pixel point. In reality S is a three-dimensional matrix; in the actual calculation the three-dimensional matrix is split into two-dimensional matrices that are solved separately, and the solved values are finally combined. For example, in this embodiment the image gray values S(i,j) are split into c row vectors S(i) for the operation, where i ∈ (1, b), j ∈ (1, c), and after substituting each row vector S(i) into formula (2) the calculation formula is:
S(i) = (IL) [ρ_1 N_1, ρ_2 N_2, …, ρ_b N_b] (3)
wherein S(i) is the a × b matrix whose entry s_(a,b) is the pixel value of the b-th pixel point of the row in the image to be measured captured under the a-th group of target illumination subunits; IL is the a × 3 matrix of light source terms; n_(x,b), n_(y,b) and n_(z,b) are the normal vector components in the x, y and z directions of the b-th pixel point of the row in the images to be measured captured under each group of target illumination subunits; and ρ_b is the diffuse reflectance of the b-th pixel point of the row in the images to be measured captured under each group of target illumination subunits.
For example, if the image to be measured in this embodiment has c rows of pixel values, each row of pixel values must be substituted into formula (3) in turn, solving row by row for the product of the diffuse reflectance and the normal vector of the pixel points of each row of the target image. For ease of understanding, to solve the product of the diffuse reflectance and the normal vector of the first row of pixel points in the target image, the relevant values of all first-row pixel points in the images to be measured captured under each group of target illumination subunits are substituted into formula (3). Specifically, for example, s_(1,1) is the pixel value of the first pixel point of the first row in the image to be measured captured under the first group of target illumination subunits; s_(1,2) is the pixel value of the second pixel point of the first row in the image to be measured captured under the first group of target illumination subunits; s_(2,1) is the pixel value of the first pixel point of the first row in the image to be measured captured under the second group of target illumination subunits; and so on.
With known light source parameters, the overdetermined equation can be solved by the least squares method: the product of the diffuse reflectance and the normal vector of each row of pixel points is solved row by row, yielding these products for all c rows of pixel points, from which an approximate solution of the product of the diffuse reflectance ρ and the normal vector N of each pixel point in the target image is determined:
ρN = ((IL)^T (IL))^(-1) (IL)^T S (4)
wherein T is a transpose operator.
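The least-squares solution of formula (4) can be sketched in NumPy as below. This is a hypothetical implementation that first synthesizes "captured" images from a known albedo and normal map using the Lambertian model, then recovers ρN per pixel; array names, sizes, and light directions are illustrative assumptions, not values from this disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)
a, h, w = 4, 8, 8        # number of light groups, image height, image width

# Hypothetical calibrated light directions and intensities (illustrative).
L = np.array([[ 0.5,  0.0, 0.866],
              [-0.5,  0.0, 0.866],
              [ 0.0,  0.5, 0.866],
              [ 0.0, -0.5, 0.866]])
I = np.ones(a)

# Ground-truth albedo and normals, used only to synthesize "captured" images.
rho_true = rng.uniform(0.5, 1.0, size=(h, w))
N_true = np.zeros((h, w, 3))
N_true[..., 2] = 1.0      # a flat surface whose normals point at the camera

# One synthetic image per light group, following the Lambertian model (1).
S = np.stack([I[k] * rho_true * (N_true @ L[k]) for k in range(a)])

# Formula (4): solve the overdetermined system (IL) x = s for every pixel by
# least squares; lstsq evaluates ((IL)^T (IL))^(-1) (IL)^T S numerically.
IL = I[:, None] * L                                # a x 3
rhoN, *_ = np.linalg.lstsq(IL, S.reshape(a, -1), rcond=None)
rhoN = rhoN.T.reshape(h, w, 3)                     # rho * N at every pixel
```

Solving all pixels of a row (or, as here, all pixels at once) in a single `lstsq` call corresponds to the row-by-row matrix form of formula (3).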
Since the normal vector N in formula (4) is a unit vector that represents only a direction (each of its components has magnitude at most 1), and what this embodiment is concerned with later is the relative height between each pixel point and its adjacent pixel points in the target image, that is, the differences of the normal vector in each direction, this embodiment normalizes and separates the ρN obtained from formula (4) using the matrix normalization formulas (5) and (6) of linear algebra to obtain N_i and ρ_i:
N_i = (ρN)_i / ||(ρN)_i||_2, i ∈ (1, b) (5)
ρ_i = ||(ρN)_i||_2, i ∈ (1, b) (6)
Therefore, in this embodiment the normal vector of each pixel point in the target image can be obtained through formula (5), and in addition the diffuse reflectance of each pixel point in the target image can be obtained through formula (6). In practice N_i is the three-dimensional coordinate (n_x, n_y, n_z), and the set of these vectors is denoted N.
S130, determining the relative height relation between each pixel point and the adjacent pixel points in the target image according to the normal vector of the target image, and obtaining the target image through the relative height relation.
In an embodiment of the present disclosure, determining a relative height relationship between each pixel point and an adjacent pixel point in a target image according to a normal vector of the target image, and obtaining the target image through the relative height relationship, includes: determining the relative height difference between each pixel point and the adjacent pixel point in the target image through the normal vector of the target image according to the relation between the gradient and the normal vector; and determining the target image through the relative height difference between each pixel point and the adjacent pixel points in the target image and the height reference value.
Since the initial image to be measured is a two-dimensional image containing only the (x, y) relation, this embodiment needs to reconstruct the target image from the data of the images to be measured and obtain the height difference between each pixel point and its adjacent pixel points on the target image, so as to clearly represent the surface defects of the object to be measured. To this end, this embodiment constructs a ternary function w = f(x, y, z) to solve for the relation between z and (x, y), where z is the height of each pixel point in the target image and w is the constructed ternary function; the gradient of w is the normal vector of the contour surface, which is also the normal vector N of the target image (i.e., of the surface of the object to be measured). Therefore, in this embodiment the relative height difference between each pixel point and its adjacent pixel points in the target image can be determined from the normal vector of the target image, and the target image can be determined from these relative height differences together with any given height reference value, so as to clearly represent the surface defects of the object to be measured.
In an embodiment of the present disclosure, determining a relative height difference between each pixel point and an adjacent pixel point in a target image by a normal vector of the target image includes: inputting the three-dimensional coordinates of the normal vector of each pixel point of the target image into a relative height difference formula, and determining the relative height difference between each pixel point and the adjacent pixel points in the target image, wherein the relative height difference formula is as follows:
R_(x,x-1) = z_(x-1,y) - z_(x,y) = n_x / n_z; (7)
R_(y,y-1) = z_(x,y-1) - z_(x,y) = n_y / n_z; (8)
wherein n_x, n_y and n_z are the three-dimensional coordinate values of the normal vector of each pixel point of the target image; z_(x,y) is the height value of the pixel point with position coordinates (x, y) in the target image; R_(x,x-1) = n_x / n_z is the relative height difference in the x direction between the pixel point at position (x, y) and the adjacent pixel point (x-1, y) in the target image; and R_(y,y-1) is the relative height difference in the y direction between the pixel point with coordinates (x, y) and the adjacent pixel point (x, y-1) in the target image.
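Formulas (7) and (8), combined with a height reference value, might be applied as sketched below. The cumulative-sum integration along each row is one simple integration scheme chosen for illustration, not necessarily the exact scheme of this disclosure, and the normal map `N` is invented.

```python
import numpy as np

# An invented 4x4 unit-normal map: flat except for a small tilted region.
N = np.zeros((4, 4, 3))
N[..., 2] = 1.0
N[1:3, 1:3, 0] = 0.2
N /= np.linalg.norm(N, axis=-1, keepdims=True)

# Formulas (7)-(8): relative height differences between adjacent pixels.
gx = N[..., 0] / N[..., 2]   # step in the x direction, n_x / n_z
gy = N[..., 1] / N[..., 2]   # step in the y direction, n_y / n_z

# One simple way to obtain absolute heights: integrate each row cumulatively,
# starting from a chosen height reference value z0 (here 0).
z0 = 0.0
z = z0 + np.cumsum(gx, axis=1)
```

Pixels in the tilted region contribute a nonzero step of 0.2 per pixel, so the integrated height map rises across that region while the flat rows stay at the reference value.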
In an embodiment of the present disclosure, after determining a relative height difference between each pixel point and an adjacent pixel point in the target image, the method further includes: determining the divergence value of each pixel point in the target image according to the normal vector of the target image; strengthening the relative height difference between each pixel point and the adjacent pixel point in the target image through the divergence value of each pixel point in the target image to obtain the strengthening relative height difference between each pixel point and the adjacent pixel point in the target image; correspondingly, determining the target image through the relative height difference between each pixel point and the adjacent pixel points in the target image and the height reference value comprises the following steps: and determining the target image through the enhanced relative height difference between each pixel point and the adjacent pixel points in the target image and the height reference value.
The surface divergence of an object can be expressed as:
D = n_x / n_z - n_y / n_z (9)
wherein D is the divergence value of each pixel point in the target image.
Specifically, the effect of the divergence on the image is to denoise it. Image edges usually lie at positions with larger gradient values, and the diffusion equation diffuses slowly in regions with larger gradient values and quickly in regions with smaller gradient values, so the useful details of the image are protected while denoising; that is, the height differences between each pixel point and its surrounding pixel points in the target image are enhanced.
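The per-pixel divergence value of formula (9) can be computed as sketched below; the 2x2 normal map `N` is invented for illustration.

```python
import numpy as np

# An invented 2x2 unit-normal map.
N = np.array([[[0.0, 0.0, 1.0],
               [0.3, 0.0, 0.9539]],
              [[0.0, 0.3, 0.9539],
               [0.1, 0.1, 0.9899]]])

# Formula (9): per-pixel divergence value D = n_x/n_z - n_y/n_z.
nz = N[..., 2]
D = N[..., 0] / nz - N[..., 1] / nz
```

Flat pixels (normal straight up) and pixels whose x and y tilts cancel give D = 0, while pixels tilted in only one direction give a nonzero value, which is what strengthens the height differences around defect edges.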
In the detection method of the surface defect detection system provided by this embodiment, a plurality of images to be measured are acquired under the illumination conditions of illumination subunits with different angles; the normal vector of the target image is determined from the gray values of the pixel points in the plurality of images to be measured; and the relative height relation between each pixel point and its adjacent pixel points in the target image is determined from the normal vector of the target image, and the target image is obtained from this relative height relation. The surface gradient of the target image can thus be obtained effectively and the three-dimensional information captured better, so that the defect features of the object surface are clearly represented and missed detections are avoided.
As an example, the method for detecting surface defects of a CCM module steel sheet in this embodiment may include the following steps:
s1, adjusting an initial position of a camera lens, specifically: the object to be measured is placed in the field of view and fixed, the camera 101 and the lens 102 are adjusted to make the surface of the object to be measured focus clearly, camera software is controlled by an industrial personal computer, camera parameters are adjusted, for example, exposure time is set to 1000us, gain is set to 2, and image size is set to 4000×3060 pixels.
S2, adjusting the light source and collecting the original images, specifically: the multi-angle light source is adjusted to the designated position, the object to be measured is placed in the test position, the 60-degree medium-angle annular lighting unit is selected, and the medium-angle first lighting subunit, medium-angle second lighting subunit, medium-angle third lighting subunit and medium-angle fourth lighting subunit are turned on in sequence to provide four groups of illumination with different incident angles; the lighting angles (spatial angles) and light intensity information of the light sources are recorded, and images of the object to be measured in different shadow states are collected and saved as the images to be measured.
S3, solving the normal vector, specifically: each row of the gray values of the four groups of images is read and assigned in turn to the matrix S, the emission angles and intensities of the light sources are assigned to the matrices L and I respectively, these are substituted into formula (4), and the normal vector N is separated out by normalization.
S4, constructing a three-dimensional function w=f (x, y, z), obtaining a relative height relation according to the relation between the gradient and the normal vector, and determining the target image according to the relative height relation.
The images to be measured of the CCM module steel sheet acquired by the method in this embodiment may be as shown in fig. 5 to 8, and the final synthesized target image of the CCM module steel sheet in this embodiment is shown in fig. 9. Fig. 5 is a schematic diagram of an image to be measured captured under the condition of the medium-angle first illumination subunit according to a second embodiment of the present disclosure; fig. 6 is a schematic diagram of an image to be measured captured under the condition of the medium-angle second illumination subunit according to a second embodiment of the present disclosure; fig. 7 is a schematic diagram of an image to be measured captured under the condition of the medium-angle third illumination subunit according to a second embodiment of the present disclosure; fig. 8 is a schematic diagram of an image to be measured captured under the condition of the medium-angle fourth illumination subunit according to a second embodiment of the present disclosure; and fig. 9 is a schematic diagram of a target image synthesized based on the surface defect detection method according to a second embodiment of the present disclosure, which is also a surface divergence effect diagram of the CCM module steel sheet. In fig. 5 to 9, the image area marked by the white square frame is the defect area of the CCM module steel sheet. As shown in fig. 5 to 8, the defect area is not clear in any single image to be measured, but the defect area in fig. 9 is obvious, which shows that the detection method of the surface defect detection system provided by this embodiment can effectively detect defect areas on the surface of an object and avoid missed detections.
Example III
Fig. 10 is a schematic structural diagram of a detection device of a surface defect detection system according to an embodiment of the present disclosure, where the device specifically includes:
the image to be measured acquisition module 310 is configured to acquire a plurality of images to be measured under the illumination conditions of the illumination subunits with different angles;
the normal vector determining module 320 is configured to determine a normal vector of the target image according to gray values of each pixel point in the plurality of images to be detected;
the target image determining module 330 is configured to determine a relative height relationship between each pixel point and an adjacent pixel point in the target image according to a normal vector of the target image, and obtain the target image according to the relative height relationship.
In one embodiment, the image to be measured acquisition module 310 is specifically configured to: select one illumination subunit in each of the four orthogonal partitions to obtain four target illumination subunits, wherein the four target illumination subunits belong to annular illumination units with different angles; and acquire a plurality of images to be measured under the illumination conditions of the different target illumination subunits.
In one embodiment, normal vector determination module 320 is specifically configured to: determining the product of diffuse reflectance and normal vector of each pixel point in the target image according to the lambertian reflection principle and the gray values of each pixel point in the images to be detected; and obtaining the normal vector of each pixel point of the target image by normalizing and separating the product of the diffuse reflectance and the normal vector of each pixel point in the target image.
In one embodiment, normal vector determination module 320 is specifically configured to: splitting pixel points in a plurality of images to be detected into c rows of pixel points respectively, wherein c is an integer greater than 1; according to the lambertian reflection principle and gray values of the c rows of pixel points in the images to be detected, solving the product of the diffuse reflectance and the normal vector of each row of pixel points row by row to obtain the product of the diffuse reflectance and the normal vector of the c rows of pixel points; and determining the product of the diffuse reflectance and the normal vector of each pixel point in the target image according to the product of the diffuse reflectance and the normal vector of the pixel points in the row c.
In one embodiment, the target image determining module 330 is specifically configured to: determining the relative height difference between each pixel point and the adjacent pixel point in the target image through the normal vector of the target image according to the relation between the gradient and the normal vector; and determining the target image through the relative height difference between each pixel point and the adjacent pixel points in the target image and the height reference value.
In one embodiment, the target image determining module 330 is specifically configured to: inputting the three-dimensional coordinates of the normal vector of each pixel point of the target image into a relative height difference formula, and determining the relative height difference between each pixel point and the adjacent pixel points in the target image, wherein the relative height difference formula is as follows:
R_(x,x-1) = z_(x-1,y) - z_(x,y) = n_x / n_z
R_(y,y-1) = z_(x,y-1) - z_(x,y) = n_y / n_z
wherein n_x, n_y and n_z are the three-dimensional coordinate values of the normal vector of each pixel point of the target image; z_(x,y) is the height value of the pixel point with position coordinates (x, y) in the target image; R_(x,x-1) = n_x / n_z is the height difference in the x direction between the pixel point at position (x, y) and the adjacent pixel point (x-1, y) in the target image; and R_(y,y-1) is the height difference in the y direction between the pixel point with coordinates (x, y) and the adjacent pixel point (x, y-1) in the target image.
In one embodiment, the target image determining module 330 is specifically configured to: after determining the relative height difference between each pixel point and the adjacent pixel points in the target image, determining the divergence value of each pixel point in the target image according to the normal vector of the target image; strengthening the relative height difference between each pixel point and the adjacent pixel point in the target image through the divergence value of each pixel point in the target image to obtain the strengthening relative height difference between each pixel point and the adjacent pixel point in the target image; and determining the target image through the enhanced relative height difference between each pixel point and the adjacent pixel points in the target image and the height reference value.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device and a readable storage medium.
FIG. 11 illustrates a schematic block diagram of an example electronic device 400 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 11, the apparatus 400 includes a computing unit 401 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 402 or a computer program loaded from a storage unit 408 into a Random Access Memory (RAM) 403. In RAM 403, various programs and data required for the operation of device 400 may also be stored. The computing unit 401, ROM 402, and RAM 403 are connected to each other by a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
Various components in device 400 are connected to I/O interface 405, including: an input unit 406 such as a keyboard, a mouse, etc.; an output unit 407 such as various types of displays, speakers, and the like; a storage unit 408, such as a magnetic disk, optical disk, etc.; and a communication unit 409 such as a network card, modem, wireless communication transceiver, etc. The communication unit 409 allows the device 400 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 401 may be a variety of general purpose and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 401 performs the respective methods and processes described above, such as the surface defect detection method. For example, in some embodiments, the surface defect detection method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 400 via the ROM 402 and/or the communication unit 409. When the computer program is loaded into RAM 403 and executed by computing unit 401, one or more steps of the surface defect detection method described above may be performed. Alternatively, in other embodiments, the computing unit 401 may be configured to perform the surface defect detection method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems-on-Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. This program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that the various forms of flow shown above may be used, with steps reordered, added, or deleted. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved; no limitation is imposed herein.
Furthermore, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, "a plurality of" means two or more, unless explicitly defined otherwise.
The foregoing is merely specific embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any person skilled in the art could readily conceive of changes or substitutions within the technical scope of the present disclosure, and such changes or substitutions are intended to be covered by the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A method of detecting a surface defect detection system, the surface defect detection system comprising: a multi-angle light source positioned between a camera assembly and an object to be detected, the multi-angle light source being formed by stacking a plurality of annular illumination units with different diameters, wherein each annular illumination unit is formed by splicing a plurality of illumination subunits and is used for providing light sources of different angles for the object to be detected; the camera assembly is connected with a support frame and is used for shooting the object to be detected under the irradiation conditions of different illumination subunits to obtain a plurality of images to be detected and sending the images to be detected to a host; and the host is configured to synthesize the plurality of images to be detected into a target image; the method characterized by comprising:
under the irradiation conditions of the illumination subunits with different angles, a plurality of images to be detected are obtained;
determining a normal vector of the target image according to the gray values of all pixel points in the plurality of images to be detected;
determining a relative height relation between each pixel point and an adjacent pixel point in the target image according to the normal vector of the target image, and obtaining the target image according to the relative height relation, wherein the relative height relation is a relative height difference between each pixel point and the adjacent pixel point in the target image;
wherein determining the relative height difference between each pixel point and the adjacent pixel point in the target image comprises:
inputting the three-dimensional coordinates of the normal vector of each pixel point of the target image into a relative height difference formula, and determining the relative height difference between each pixel point and the adjacent pixel points in the target image, wherein the relative height difference formula is as follows:
z(u,v) − z(u−1,v) = −n_x / n_z ;  z(u,v) − z(u,v−1) = −n_y / n_z
wherein n_x, n_y and n_z are respectively the three-dimensional coordinate values of the normal vector of each pixel point of the target image; z(u,v) is the height value of the pixel point at coordinates (u,v) in the target image; z(u,v) − z(u−1,v) is the relative height difference in the x direction between the pixel point at coordinates (u,v) in the target image and the adjacent pixel point at (u−1,v); and z(u,v) − z(u,v−1) is the relative height difference in the y direction between the pixel point at coordinates (u,v) and the adjacent pixel point at (u,v−1);
wherein after the relative height difference between each pixel point and the adjacent pixel point in the target image is obtained, the method further comprises: determining a divergence value of each pixel point in the target image according to the normal vector of the target image; and enhancing the relative height difference between each pixel point and the adjacent pixel point in the target image through the divergence value of each pixel point in the target image, to obtain an enhanced relative height difference between each pixel point and the adjacent pixel point in the target image;
wherein obtaining the target image through the relative height relation comprises: determining the target image through the enhanced relative height difference between each pixel point and the adjacent pixel point in the target image and a height reference value.
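As an illustration of the height-recovery step in claim 1 (a sketch, not code from the patent): the relative height differences z(u,v) − z(u−1,v) = −n_x/n_z and z(u,v) − z(u,v−1) = −n_y/n_z derived from the normal field can be accumulated from a height reference value by cumulative summation. The function name and the simple path-based integration are illustrative assumptions; the patent's divergence-based enhancement step is omitted:

```python
import numpy as np

def integrate_heights(normals, h0=0.0):
    """Integrate a (3, h, w) per-pixel normal field into a relative height map.

    Relative height differences (photometric-stereo convention):
      z(u, v) - z(u-1, v) = -n_x / n_z   (x / row direction)
      z(u, v) - z(u, v-1) = -n_y / n_z   (y / column direction)
    h0 is the height reference value assigned to pixel (0, 0).
    """
    nx, ny, nz = normals
    nz = np.where(np.abs(nz) < 1e-8, 1e-8, nz)          # guard against n_z == 0
    p = -nx / nz                                        # height step between rows
    q = -ny / nz                                        # height step between columns
    h, w = p.shape
    z = np.zeros((h, w))
    z[1:, 0] = np.cumsum(p[1:, 0])                      # integrate first column downward
    z[:, 1:] = z[:, [0]] + np.cumsum(q[:, 1:], axis=1)  # then each row rightward
    return z + h0                                       # shift by the height reference value
```

A production implementation would typically use a least-squares (Poisson-type) integration instead of path-dependent cumulative sums, since measured normals are noisy.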
2. The method according to claim 1, wherein acquiring a plurality of images to be measured under illumination conditions of illumination subunits of different angles comprises:
respectively selecting illumination sub-units in four orthogonal partitions to obtain four target illumination sub-units, wherein the four target illumination sub-units belong to annular illumination units with different angles;
and under the irradiation conditions of different target irradiation subunits, acquiring a plurality of images to be detected.
3. The method according to claim 2, wherein determining the normal vector of the target image according to the gray values of the pixels in the plurality of images to be measured comprises:
determining the product of the diffuse reflectance and the normal vector of each pixel point in the target image according to the Lambertian reflection principle and the gray values of each pixel point in the plurality of images to be detected;
and obtaining the normal vector of each pixel point of the target image by normalizing and separating the product of the diffuse reflectance and the normal vector of each pixel point in the target image.
4. The method according to claim 3, wherein determining the product of the diffuse reflectance and the normal vector of each pixel in the target image according to the Lambertian reflection principle and the gray values of each pixel in the plurality of images to be measured comprises:
dividing pixel points in the images to be detected into c rows of pixel points respectively, wherein c is an integer greater than 1;
according to the Lambertian reflection principle and the gray values of the c rows of pixel points in the plurality of images to be detected, solving the product of the diffuse reflectance and the normal vector of each row of pixel points row by row to obtain the product of the diffuse reflectance and the normal vector of the c rows of pixel points;
and determining the product of the diffuse reflectance and the normal vector of each pixel point in the target image according to the product of the diffuse reflectance and the normal vector of the c rows of pixel points.
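To illustrate the normal-vector recovery of claims 3–4 (a sketch under stated assumptions, not the patent's implementation): under the Lambertian model each measured gray value is I = ρ·(L·n), so with four known illumination directions the product ρn can be solved per pixel by least squares and the unit normal separated by normalization. The function name, light-direction matrix, and image stack below are illustrative assumptions:

```python
import numpy as np

def estimate_albedo_normals(images, light_dirs):
    """Recover diffuse reflectance (albedo) and unit normals per pixel.

    images:     (k, h, w) gray images, one per illumination subunit
    light_dirs: (k, 3) known unit light-direction vectors
    Lambertian model per pixel: I = light_dirs @ (rho * n)
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                           # (k, h*w) intensity columns
    # Least-squares solve light_dirs @ g = I for g = rho * n at every pixel at once
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, h*w)
    rho = np.linalg.norm(g, axis=0)                     # albedo = |rho * n|
    n = g / np.maximum(rho, 1e-8)                       # separate the unit normal
    return rho.reshape(h, w), n.reshape(3, h, w)
```

Solving all pixels in one `lstsq` call mirrors the row-by-row batching described in claim 4, just at a coarser granularity.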
5. A detection apparatus of a surface defect detection system, the surface defect detection system comprising: a multi-angle light source positioned between a camera assembly and an object to be detected, the multi-angle light source being formed by stacking a plurality of annular illumination units with different diameters, wherein each annular illumination unit is formed by splicing a plurality of illumination subunits and is used for providing light sources of different angles for the object to be detected; the camera assembly is connected with a support frame and is used for shooting the object to be detected under the irradiation conditions of different illumination subunits to obtain a plurality of images to be detected and sending the images to be detected to a host; and the host is configured to synthesize the plurality of images to be detected into a target image; the apparatus characterized by comprising:
the to-be-detected image acquisition module, which is used for acquiring a plurality of images to be detected under the irradiation conditions of illumination subunits of different angles;
the normal vector determining module is used for determining the normal vector of the target image according to the gray values of all pixel points in the plurality of images to be detected;
the target image determining module is used for determining the relative height relation between each pixel point and the adjacent pixel point in the target image according to the normal vector of the target image, and obtaining the target image according to the relative height relation, wherein the relative height relation is the relative height difference between each pixel point and the adjacent pixel point in the target image;
the target image determining module is specifically configured to: inputting the three-dimensional coordinates of the normal vector of each pixel point of the target image into a relative height difference formula, and determining the relative height difference between each pixel point and the adjacent pixel points in the target image, wherein the relative height difference formula is as follows:
z(u,v) − z(u−1,v) = −n_x / n_z ;  z(u,v) − z(u,v−1) = −n_y / n_z
wherein n_x, n_y and n_z are respectively the three-dimensional coordinate values of the normal vector of each pixel point of the target image; z(u,v) is the height value of the pixel point at coordinates (u,v) in the target image; z(u,v) − z(u−1,v) is the relative height difference in the x direction between the pixel point at coordinates (u,v) in the target image and the adjacent pixel point at (u−1,v); and z(u,v) − z(u,v−1) is the relative height difference in the y direction between the pixel point at coordinates (u,v) and the adjacent pixel point at (u,v−1);
the target image determining module is further specifically configured to: after the relative height difference between each pixel point and the adjacent pixel point in the target image is determined, determine a divergence value of each pixel point in the target image according to the normal vector of the target image; enhance the relative height difference between each pixel point and the adjacent pixel point in the target image through the divergence value of each pixel point in the target image, to obtain an enhanced relative height difference between each pixel point and the adjacent pixel point in the target image; and determine the target image through the enhanced relative height difference between each pixel point and the adjacent pixel point in the target image and a height reference value.
6. The device according to claim 5, wherein the to-be-detected image acquisition module is specifically configured to:
respectively selecting illumination sub-units in four orthogonal partitions to obtain four target illumination sub-units, wherein the four target illumination sub-units belong to annular illumination units with different angles;
And under the irradiation conditions of different target irradiation subunits, acquiring a plurality of images to be detected.
7. The apparatus of claim 6, wherein the normal vector determination module is specifically configured to:
determining the product of the diffuse reflectance and the normal vector of each pixel point in the target image according to the Lambertian reflection principle and the gray values of each pixel point in the plurality of images to be detected;
and obtaining the normal vector of each pixel point of the target image by normalizing and separating the product of the diffuse reflectance and the normal vector of each pixel point in the target image.
8. The apparatus of claim 7, wherein the normal vector determination module is specifically configured to:
dividing pixel points in the images to be detected into c rows of pixel points respectively, wherein c is an integer greater than 1;
according to the Lambertian reflection principle and the gray values of the c rows of pixel points in the plurality of images to be detected, solving the product of the diffuse reflectance and the normal vector of each row of pixel points row by row to obtain the product of the diffuse reflectance and the normal vector of the c rows of pixel points;
and determining the product of the diffuse reflectance and the normal vector of each pixel point in the target image according to the product of the diffuse reflectance and the normal vector of the c rows of pixel points.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-4.
10. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-4.
CN202211648264.9A 2022-12-21 2022-12-21 Surface defect detection system, detection method, detection device, detection equipment and storage medium Active CN115980059B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211648264.9A CN115980059B (en) 2022-12-21 2022-12-21 Surface defect detection system, detection method, detection device, detection equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115980059A CN115980059A (en) 2023-04-18
CN115980059B true CN115980059B (en) 2023-12-15

Family

ID=85960472

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211648264.9A Active CN115980059B (en) 2022-12-21 2022-12-21 Surface defect detection system, detection method, detection device, detection equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115980059B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108445007A (en) * 2018-01-09 2018-08-24 深圳市华汉伟业科技有限公司 A kind of detection method and its detection device based on image co-registration
CN109523541A (en) * 2018-11-23 2019-03-26 五邑大学 A kind of metal surface fine defects detection method of view-based access control model
CN110609039A (en) * 2019-09-23 2019-12-24 上海御微半导体技术有限公司 Optical detection device and method thereof
CN112669318A (en) * 2021-03-17 2021-04-16 上海飞机制造有限公司 Surface defect detection method, device, equipment and storage medium
CN112858318A (en) * 2021-04-26 2021-05-28 惠州高视科技有限公司 Method for distinguishing screen foreign matter defect from dust, electronic equipment and storage medium
CN113538432A (en) * 2021-09-17 2021-10-22 南通蓝城机械科技有限公司 Part defect detection method and system based on image processing
CN115272258A (en) * 2022-08-03 2022-11-01 无锡九霄科技有限公司 Metal cylindrical surface defect detection method, system and medium based on machine vision
CN218032792U (en) * 2022-06-30 2022-12-13 广州镭晨智能装备科技有限公司 Visual detection light source and automatic optical detection equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010005264A1 (en) * 1999-05-05 2001-06-28 Slemon Charles S. Linked cameras and processors for imaging system
JP6424020B2 (en) * 2014-06-09 2018-11-14 株式会社キーエンス Image inspection apparatus, image inspection method, image inspection program, computer-readable recording medium, and recorded apparatus


Similar Documents

Publication Publication Date Title
JP6291418B2 (en) Optical measurement arrangements and related methods
US20110228052A1 (en) Three-dimensional measurement apparatus and method
CN109767425B (en) Machine vision light source uniformity evaluation device and method
JP2023534175A (en) Neural network analysis of LFA specimens
KR20120014886A (en) Create recipes and inspect them based on recipes
CN110084873B (en) Method and apparatus for rendering three-dimensional model
US9756313B2 (en) High throughput and low cost height triangulation system and method
Pitard et al. Discrete modal decomposition for surface appearance modelling and rendering
US20240153066A1 (en) Visual inspection apparatus, visual inspection method, image generation apparatus, and image generation method
Ciortan et al. A practical reflectance transformation imaging pipeline for surface characterization in cultural heritage
CN114424046B (en) Inspection method, recording medium, and inspection system
JP2012242281A (en) Method, device and program for calculating center position of detection object
US10852125B2 (en) Apparatus for inspecting film on substrate by using optical interference and method thereof
CN115615353A (en) Method, apparatus, device and storage medium for detecting size of object by using parallel light
CN118762626B (en) Screen brightness uniformity detection method and detection equipment
CN118655084B (en) Surface defect detection method, system, electronic device and storage medium
CN112040138B (en) Stereoscopic light source system, image pickup method, image pickup device, storage medium, and electronic apparatus
CN115980059B (en) Surface defect detection system, detection method, detection device, detection equipment and storage medium
US20210018314A1 (en) Apparatus for inspecting substrate and method thereof
CN111566438B (en) Image acquisition method and system
CN116482109A (en) Surface defect detection method and device, storage medium and electronic equipment
KR102171773B1 (en) Inspection area determination method and visual inspection apparatus using the same
KR20190075283A (en) System and Method for detecting Metallic Particles
Pintus et al. Practical free-form RTI acquisition with local spot lights
US10325361B2 (en) System, method and computer program product for automatically generating a wafer image to design coordinate mapping

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Zhang Zhengtao

Inventor after: Wu Bo

Inventor after: Tang Chao

Inventor after: Lv Xiaoyun

Inventor after: Zhang Wujie

Inventor after: Yang Huabin

Inventor before: Wu Bo

Inventor before: Tang Chao

Inventor before: Lv Xiaoyun

Inventor before: Zhang Wujie

CB02 Change of applicant information

Address after: 471033 Room 101 and Room 202, building 5, science and Technology Park, Luoyang National University, No. 2, Penglai Road, Jianxi District, Luoyang area, pilot Free Trade Zone, Luoyang City, Henan Province

Applicant after: CASI VISION TECHNOLOGY (LUOYANG) CO.,LTD.

Applicant after: Zhongke Huiyuan vision technology (Beijing) Co.,Ltd.

Applicant after: Zhongke Huiyuan Intelligent Equipment (Guangdong) Co.,Ltd.

Address before: No. 1107, 1st floor, building 4, No. 75 Suzhou street, Haidian District, Beijing 100080

Applicant before: Zhongke Huiyuan vision technology (Beijing) Co.,Ltd.

Applicant before: Zhongke Huiyuan Intelligent Equipment (Guangdong) Co.,Ltd.

Applicant before: CASI VISION TECHNOLOGY (LUOYANG) CO.,LTD.

GR01 Patent grant
CP03 Change of name, title or address

Address after: 471033 Room 101 and Room 202, building 5, science and Technology Park, Luoyang National University, No. 2, Penglai Road, Jianxi District, Luoyang area, pilot Free Trade Zone, Luoyang City, Henan Province

Patentee after: CASI VISION TECHNOLOGY (LUOYANG) CO.,LTD.

Country or region after: China

Patentee after: Zhongke Huiyuan vision technology (Beijing) Co.,Ltd.

Patentee after: Zhongke Huiyuan Semiconductor Technology (Guangdong) Co.,Ltd.

Address before: 471033 Room 101 and Room 202, building 5, science and Technology Park, Luoyang National University, No. 2, Penglai Road, Jianxi District, Luoyang area, pilot Free Trade Zone, Luoyang City, Henan Province

Patentee before: CASI VISION TECHNOLOGY (LUOYANG) CO.,LTD.

Country or region before: China

Patentee before: Zhongke Huiyuan vision technology (Beijing) Co.,Ltd.

Patentee before: Zhongke Huiyuan Intelligent Equipment (Guangdong) Co.,Ltd.
