CN115980059A - Surface defect detection system and detection method, device, equipment and storage medium thereof

Info

Publication number: CN115980059A
Authority: CN (China)
Prior art keywords: angle, illumination, subunit, target image, pixel point
Legal status: Granted
Application number: CN202211648264.9A
Other languages: Chinese (zh)
Other versions: CN115980059B (en)
Inventors: 吴搏, 唐超, 吕晓云, 张武杰
Current Assignee: Zhongke Huiyuan Intelligent Equipment Guangdong Co ltd; Casi Vision Technology Luoyang Co Ltd; Casi Vision Technology Beijing Co Ltd
Original Assignee: Zhongke Huiyuan Intelligent Equipment Guangdong Co ltd; Casi Vision Technology Luoyang Co Ltd; Casi Vision Technology Beijing Co Ltd
Application filed by Zhongke Huiyuan Intelligent Equipment Guangdong Co ltd, Casi Vision Technology Luoyang Co Ltd and Casi Vision Technology Beijing Co Ltd
Priority to CN202211648264.9A
Publication of CN115980059A
Application granted; publication of CN115980059B
Legal status: Active

Abstract

The disclosure provides a surface defect detection system and a detection method, a detection device, equipment and a storage medium thereof, wherein a plurality of images to be detected are obtained under the irradiation conditions of illumination subunits at different angles; a normal vector of a target image is determined according to the gray value of each pixel point in the plurality of images to be detected; and the relative height relationship between each pixel point and the adjacent pixel point in the target image is determined according to the normal vector of the target image, and the target image is obtained according to the relative height relationship, so that the defect characteristics of the surface of the object can be clearly represented and missed detection is avoided.

Description

Surface defect detection system and detection method, device, equipment and storage medium thereof
Technical Field
The present disclosure relates to the field of machine vision detection technologies, and in particular to a surface defect detection system and a detection method, apparatus, device and storage medium thereof.
Background
At present, automatic detection systems based on machine vision are developing rapidly. Compared with manual visual inspection, which suffers from low precision, poor repeatability, high cost and irreproducibility, machine vision detection has greater development potential and will gradually replace manual detection in the future.
In the prior art, defect detection equipment for the surface of an object usually uses a two-dimensional detection mode, and the detected object can only show the contrast difference under a single incident light, so that defects such as slight surface depressions are difficult to highlight, detection omissions are easily caused, and the performance of the detection equipment is affected.
Disclosure of Invention
The present disclosure provides a surface defect detection system, a detection method, an apparatus, a device, and a storage medium thereof, to at least solve the above technical problems in the prior art.
According to a first aspect of the present disclosure, there is provided a surface defect detection system, the system comprising: a camera component, a support frame, a multi-angle light source and a host, wherein,
the multi-angle light source is positioned between the camera component and the object to be detected, and is formed by overlapping a plurality of layers of annular lighting units with different diameters, wherein each annular lighting unit is formed by splicing a plurality of lighting subunits and is used for providing light sources with different angles for the object to be detected;
the camera assembly is connected with the support frame and used for shooting the object to be detected under the irradiation conditions of different lighting subunits to obtain a plurality of images to be detected and sending the plurality of images to be detected to the host;
and the host is used for synthesizing the plurality of images to be detected into a target image.
In one embodiment, the multi-angle light source is formed by superposing a high-angle annular lighting unit, a medium-angle annular lighting unit and a low-angle annular lighting unit; wherein,
the middle-angle annular lighting unit is positioned between the high-angle annular lighting unit and the low-angle annular lighting unit, and the low-angle annular lighting unit is close to the object to be detected;
the diameter of the high-angle annular lighting unit is smaller than that of the medium-angle annular lighting unit, and the diameter of the medium-angle annular lighting unit is smaller than that of the low-angle annular lighting unit.
In an embodiment, the high-angle annular lighting unit, the medium-angle annular lighting unit and the low-angle annular lighting unit are each formed by splicing together four identical lighting subunits.
In an implementation manner, the high-angle annular lighting unit comprises a high-angle first lighting subunit, a high-angle second lighting subunit, a high-angle third lighting subunit and a high-angle fourth lighting subunit which are spliced to form the high-angle annular lighting unit;
the middle-angle annular lighting unit comprises a middle-angle first lighting subunit, a middle-angle second lighting subunit, a middle-angle third lighting subunit and a middle-angle fourth lighting subunit which are spliced;
the low-angle annular lighting unit comprises a low-angle first lighting subunit, a low-angle second lighting subunit, a low-angle third lighting subunit and a low-angle fourth lighting subunit which are spliced;
wherein the high angle first illumination subunit, the medium angle first illumination subunit, and the low angle first illumination subunit are located in a first orthogonal partition; the high angle second illumination subunit, the medium angle second illumination subunit, and the low angle second illumination subunit are located in a second orthogonal partition; the high angle third illumination subunit, the medium angle third illumination subunit, and the low angle third illumination subunit are located in a third orthogonal partition; the high angle fourth illumination subunit, the medium angle fourth illumination subunit, and the low angle fourth illumination subunit are located in a fourth orthogonal partition.
According to a second aspect of the present disclosure, there is provided a detection method of a surface defect detection system, based on the surface defect detection system, including:
acquiring a plurality of images to be detected under the illumination conditions of the illumination subunits at different angles;
determining a normal vector of a target image according to the gray values of all pixel points in the multiple images to be detected;
and determining the relative height relationship between each pixel point and the adjacent pixel point in the target image according to the normal vector of the target image, and obtaining the target image according to the relative height relationship.
In an embodiment, the acquiring a plurality of images to be measured under the illumination condition of the illumination subunit at different angles includes:
selecting illumination subunits in four orthogonal partitions respectively to obtain four target illumination subunits, wherein the four target illumination subunits belong to annular illumination units with different angles;
and acquiring a plurality of images to be measured under the irradiation conditions of different target illumination subunits.
In an implementation manner, the determining a normal vector of a target image according to the gray-level values of the pixel points in the multiple images to be detected includes:
determining the product of the diffuse reflectivity and the normal vector of each pixel point in the target image according to the Lambert reflection principle and the gray value of each pixel point in the plurality of images to be detected;
and obtaining the normal vector of each pixel point of the target image by normalizing and separating the product of the diffuse reflectance and the normal vector of each pixel point in the target image.
In an implementation manner, the determining, according to the lambertian reflection principle and the gray values of the pixel points in the multiple images to be detected, the product of the diffuse reflectance and the normal vector of each pixel point in the target image includes:
respectively splitting pixel points in the images to be detected into c rows of pixel points, wherein c is an integer greater than 1;
according to the Lambert reflection principle and gray values of c rows of pixel points in the multiple images to be detected, solving the product of the diffuse reflectance and the normal vector of each row of pixel points line by line to obtain the product of the diffuse reflectance and the normal vector of the c rows of pixel points;
and determining the product of the diffuse reflectance and the normal vector of each pixel point in the target image according to the product of the diffuse reflectance and the normal vector of the c lines of pixel points.
In an implementation manner, the determining, according to the normal vector of the target image, a relative height relationship between each pixel point and an adjacent pixel point in the target image, and obtaining the target image according to the relative height relationship includes:
determining the relative height difference between each pixel point and an adjacent pixel point in the target image through the normal vector of the target image according to the relation between the gradient and the normal vector;
and determining the target image according to the relative height difference between each pixel point and the adjacent pixel point in the target image and the height reference value.
In an embodiment, the determining, by using a normal vector of the target image, a relative height difference between each pixel point and an adjacent pixel point in the target image includes:
inputting the three-dimensional coordinates of the normal vector of each pixel point of the target image into a relative height difference formula, and determining the relative height difference between each pixel point and an adjacent pixel point in the target image, wherein the relative height difference formula is as follows:
R_{(x,x-1)} = z_{x-1,y} - z_{x,y} = n_x / n_z
R_{(y,y-1)} = z_{x,y-1} - z_{x,y} = n_y / n_z
wherein n_x, n_y and n_z are the three-dimensional coordinate values of the normal vector of each pixel point of the target image; z_{x,y} is the height value of the pixel point at position (x, y) in the target image; R_{(x,x-1)} = n_x/n_z is the relative height difference in the x direction between the pixel point at position (x, y) in the target image and the adjacent pixel point (x-1, y); and R_{(y,y-1)} is the relative height difference in the y direction between the pixel point at position (x, y) in the target image and the adjacent pixel point (x, y-1).
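The claims do not spell out how the relative height differences and the height reference value combine into a height map; one consistent reading of the formulas above, given here only as an illustrative sketch, is that

z_{x,y} = z_{x-1,y} - R_{(x,x-1)} and z_{x,y} = z_{x,y-1} - R_{(y,y-1)},

so that assigning a single height reference value to one pixel fixes every other z_{x,y} by accumulating the differences along the x and y directions.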
In an embodiment, after the determining the relative height difference between each pixel point and the adjacent pixel point in the target image, the method further includes:
determining the divergence value of each pixel point in the target image according to the normal vector of the target image;
strengthening the relative height difference between each pixel point and an adjacent pixel point in the target image according to the divergence value of each pixel point in the target image to obtain the strengthened relative height difference between each pixel point and the adjacent pixel point in the target image;
correspondingly, the determining the target image according to the relative height difference between each pixel point and the adjacent pixel point in the target image and the height reference value includes:
and determining the target image according to the strengthened relative height difference between each pixel point and the adjacent pixel point in the target image and the height reference value.
According to a third aspect of the present disclosure, there is provided a detection apparatus of a surface defect detection system, the apparatus comprising:
the to-be-detected image acquisition module is used for acquiring a plurality of to-be-detected images under the illumination conditions of the illumination subunits at different angles;
the normal vector determining module is used for determining a normal vector of the target image according to the gray value of each pixel point in the multiple images to be detected;
and the target image determining module is used for determining the relative height relationship between each pixel point and the adjacent pixel point in the target image according to the normal vector of the target image, and obtaining the target image according to the relative height relationship.
In an implementation manner, the module for acquiring an image to be tested is specifically configured to:
selecting illumination subunits in four orthogonal partitions respectively to obtain four target illumination subunits, wherein the four target illumination subunits belong to annular illumination units with different angles;
and acquiring a plurality of images to be measured under the irradiation conditions of different target illumination subunits.
In an implementation manner, the normal vector determination module is specifically configured to:
determining the product of the diffuse reflectivity and the normal vector of each pixel point in the target image according to the Lambert reflection principle and the gray value of each pixel point in the multiple images to be detected;
and obtaining the normal vector of each pixel point of the target image by normalizing and separating the product of the diffuse reflectance and the normal vector of each pixel point in the target image.
In an implementation manner, the normal vector determination module is specifically configured to:
respectively splitting pixel points in the images to be detected into c rows of pixel points, wherein c is an integer greater than 1;
according to the Lambert reflection principle and gray values of c rows of pixel points in the multiple images to be detected, solving the product of the diffuse reflectance and the normal vector of each row of pixel points line by line to obtain the product of the diffuse reflectance and the normal vector of the c rows of pixel points;
and determining the product of the diffuse reflectance and the normal vector of each pixel point in the target image according to the product of the diffuse reflectance and the normal vector of the c rows of pixel points.
In an implementation manner, the target image determination module is specifically configured to:
determining the relative height difference between each pixel point and an adjacent pixel point in the target image through the normal vector of the target image according to the relation between the gradient and the normal vector;
and determining the target image according to the relative height difference between each pixel point and the adjacent pixel point in the target image and the height reference value.
In an implementation manner, the target image determination module is specifically configured to:
inputting the three-dimensional coordinates of the normal vector of each pixel point of the target image into a relative height difference formula, and determining the relative height difference between each pixel point and the adjacent pixel point in the target image, wherein the relative height difference formula is as follows:
R_{(x,x-1)} = z_{x-1,y} - z_{x,y} = n_x / n_z
R_{(y,y-1)} = z_{x,y-1} - z_{x,y} = n_y / n_z
wherein n_x, n_y and n_z are the three-dimensional coordinate values of the normal vector of each pixel point of the target image; z_{x,y} is the height value of the pixel point at position (x, y) in the target image; R_{(x,x-1)} = n_x/n_z is the height difference in the x direction between the pixel point at position (x, y) in the target image and the adjacent pixel point (x-1, y); and R_{(y,y-1)} is the height difference in the y direction between the pixel point at position (x, y) in the target image and the adjacent pixel point (x, y-1).
In an implementation manner, the target image determination module is specifically configured to:
after the relative height difference between each pixel point and the adjacent pixel point in the target image is determined, determining the divergence value of each pixel point in the target image according to the normal vector of the target image;
strengthening the relative height difference between each pixel point and an adjacent pixel point in the target image according to the divergence value of each pixel point in the target image to obtain the strengthened relative height difference between each pixel point and the adjacent pixel point in the target image;
and determining the target image according to the strengthened relative height difference between each pixel point and the adjacent pixel point in the target image and the height reference value.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods of the present disclosure.
According to a fifth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of the present disclosure.
The surface defect detection system, the detection method, the detection device, the equipment and the storage medium thereof obtain a plurality of images to be detected under the irradiation conditions of the illumination subunits at different angles; determining a normal vector of a target image according to the gray value of each pixel point in the plurality of images to be detected; and determining the relative height relationship between each pixel point and the adjacent pixel point in the target image according to the normal vector of the target image, and obtaining the target image according to the relative height relationship, so that the defect characteristics of the surface of the object can be clearly represented, and the omission is avoided.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
FIG. 1 is a schematic structural diagram of a surface defect detection system according to an embodiment of the present disclosure;
FIG. 2 is a cross-sectional view illustrating a multi-angle light source in a surface defect detecting system according to an embodiment of the present disclosure;
FIG. 3 is a structural bottom view of a multi-angle light source in a surface defect detecting system according to an embodiment of the present disclosure;
FIG. 4 is a flowchart illustrating a detection method of a surface defect detection system according to a second embodiment of the disclosure;
fig. 5 is a schematic diagram illustrating an image to be measured taken under a condition of a medium-angle first lighting subunit according to a second embodiment of the present disclosure;
fig. 6 illustrates a schematic diagram of an image to be measured taken under a condition of a medium-angle second lighting subunit according to a second embodiment of the present disclosure;
fig. 7 is a schematic diagram illustrating an image to be measured taken under a medium-angle third lighting subunit condition according to a second embodiment of the present disclosure;
fig. 8 is a schematic diagram illustrating an image to be measured taken under a medium-angle fourth lighting subunit condition according to a second embodiment of the present disclosure;
FIG. 9 is a schematic diagram illustrating a target image synthesized by a detection method based on a surface defect detection system according to a second embodiment of the disclosure;
fig. 10 is a schematic structural diagram illustrating a detection apparatus of a surface defect detection system according to a third embodiment of the present disclosure;
fig. 11 shows a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, features and advantages of the present disclosure more apparent and understandable, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only a part of the embodiments of the present disclosure, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
A Camera Module (CCM) is one of the important components for capturing images and generally includes a lens, a sensor, a circuit board, and a metal sheet. Nowadays, automatic detection systems based on machine vision are developing rapidly; compared with manual visual inspection, which suffers from low precision, poor repeatability, high cost and irreproducibility, machine vision detection has greater development potential and will gradually replace manual detection in the future. Existing CCM machine vision defect detection equipment usually adopts a two-dimensional detection mode, and the detected object can only show the contrast difference under a single incident light, so that defects such as slight surface recesses are difficult to highlight, detections are missed, and the performance of the detection equipment is affected. Three-dimensional detection can further obtain a relative depth relation, which is more beneficial to detecting defects such as slight unevenness on the surface of the detected object.
This embodiment adopts a surface defect detection system and method that can detect not only surface defects of the camera module but also surface defects of other types of objects, especially surfaces that satisfy the Lambertian reflection characteristic, as described below.
Example one
Fig. 1 is a schematic structural diagram of a surface defect detection system according to an embodiment of the present disclosure, and as shown in fig. 1, the system includes: a camera 101, a lens 102, a support frame 2, a multi-angle light source 103 and a host (not shown in fig. 1). Among them, the camera 101 and the lens 102 constitute a camera assembly.
The multi-angle light source 103 is positioned between the camera assembly (the camera 101 and the lens 102) and the object to be detected, and the multi-angle light source 103 is formed by overlapping a plurality of layers of annular lighting units with different diameters, wherein each annular lighting unit is formed by splicing a plurality of lighting subunits and is used for providing light sources with different angles for the object to be detected; the camera component (the camera 101 and the lens 102) is connected with the support frame 2 and is used for shooting an object to be detected under the irradiation conditions of different lighting subunits to obtain a plurality of images to be detected and sending the plurality of images to be detected to the host; and the host is used for combining the plurality of images to be detected into a target image.
The host comprises an industrial personal computer controller and an image processor. The industrial personal computer controller is used for controlling the on-off of the surface defect detection system and sending corresponding operation instructions to the camera 101, the lens 102 and the multi-angle light source 103. Illustratively, the industrial personal computer controller adjusts the positions of the camera 101 and the lens 102 so that the object to be measured is in clear focus, sets the image acquisition parameters of the camera 101, controls the multi-angle light source 103 to light up light sources in different areas according to the shape of the object to be measured, generates multi-angle light to illuminate the surface of the object to be measured, acquires original images of the object to be measured, and stores them as images to be detected. The image processor is used for receiving the original images acquired by the camera 101, performing data processing, reconstructing a normal vector of the surface of the object to be detected, obtaining a relative depth map of the surface of the object to be detected, synthesizing a target image, and highlighting surface defects of the object to be detected.
In the embodiment of the disclosure, the multi-angle light source is formed by superposing a high-angle annular lighting unit, a medium-angle annular lighting unit and a low-angle annular lighting unit; the medium-angle annular lighting unit is positioned between the high-angle annular lighting unit and the low-angle annular lighting unit, and the low-angle annular lighting unit is close to the object to be measured. As shown in fig. 2, which is a cross-sectional view of the structure of a multi-angle light source in a surface defect detection system according to an embodiment of the present disclosure, the light source includes a plurality of layers of annular lighting units with different diameters, namely a low-angle annular lighting unit 103-1, a medium-angle annular lighting unit 103-2, and a high-angle annular lighting unit 103-3. Illustratively, the light source spatial angles of the low-angle annular lighting unit 103-1, the medium-angle annular lighting unit 103-2 and the high-angle annular lighting unit 103-3 may be 20°, 60° and 80°, respectively, which gives the widest coverage of spatial angles.
The low-angle annular lighting unit 103-1, the medium-angle annular lighting unit 103-2 and the high-angle annular lighting unit 103-3 can be respectively formed by splicing a plurality of lighting subunits, the size of each lighting subunit can be the same or different, and the lighting subunits of each part can independently control the on-off of the lighting subunits. For example, the low angle annular lighting unit 103-1 may be composed of three lighting subunits, and the three lighting subunits may be the same size or different sizes; the middle-angle annular lighting unit 103-2 can be composed of four lighting subunits, and the sizes of the four lighting subunits can be the same or different; the high angle annular lighting unit 103-3 may be composed of five lighting subunits, and the sizes of the five lighting subunits may be the same or different.
It should be noted that, in this embodiment, the number and the size of the lighting subunits forming each angle annular lighting unit are not limited, as long as the lighting subunits of each angle annular lighting unit are spliced to form an annular lighting unit at a corresponding angle.
In the embodiment of the present disclosure, the high-angle annular lighting unit, the medium-angle annular lighting unit, and the low-angle annular lighting unit are each formed by splicing together four identical lighting subunits.
In the embodiment of the present disclosure, the high-angle annular lighting unit includes a high-angle first lighting subunit, a high-angle second lighting subunit, a high-angle third lighting subunit, and a high-angle fourth lighting subunit which are spliced together; the middle-angle annular lighting unit comprises a middle-angle first lighting subunit, a middle-angle second lighting subunit, a middle-angle third lighting subunit and a middle-angle fourth lighting subunit which are spliced; the low-angle annular lighting unit comprises a low-angle first lighting subunit, a low-angle second lighting subunit, a low-angle third lighting subunit and a low-angle fourth lighting subunit which are spliced;
wherein the high angle first illumination subunit, the medium angle first illumination subunit, and the low angle first illumination subunit are located in a first orthogonal partition; the high-angle second illumination subunit, the medium-angle second illumination subunit and the low-angle second illumination subunit are positioned in the second orthogonal subarea; the high-angle third lighting subunit, the medium-angle third lighting subunit and the low-angle third lighting subunit are positioned in a third orthogonal subarea; the high angle fourth lighting subunit, the medium angle fourth lighting subunit, and the low angle fourth lighting subunit are located in a fourth orthogonal partition.
Specifically, the high-angle annular lighting unit, the medium-angle annular lighting unit and the low-angle annular lighting unit in this embodiment may each be formed by splicing together four identical lighting subunits. As shown in fig. 3, which is a bottom view of a multi-angle light source in a surface defect detection system according to an embodiment of the present disclosure, the light source includes: a high angle first lighting subunit 103-3-1, a high angle second lighting subunit 103-3-2, a high angle third lighting subunit 103-3-3, and a high angle fourth lighting subunit 103-3-4; a medium angle first lighting subunit 103-2-1, a medium angle second lighting subunit 103-2-2, a medium angle third lighting subunit 103-2-3, and a medium angle fourth lighting subunit 103-2-4; a low angle first lighting subunit 103-1-1, a low angle second lighting subunit 103-1-2, a low angle third lighting subunit 103-1-3, and a low angle fourth lighting subunit 103-1-4.
As shown in FIG. 3, the high angle first illumination subunit 103-3-1, the medium angle first illumination subunit 103-2-1, and the low angle first illumination subunit 103-1-1 are located in a first orthogonal partition; the high angle second illumination subunit 103-3-2, the medium angle second illumination subunit 103-2-2, and the low angle second illumination subunit 103-1-2 are located in a second orthogonal partition; the high angle third lighting subunit 103-3-3, the medium angle third lighting subunit 103-2-3, and the low angle third lighting subunit 103-1-3 are located in a third orthogonal partition; the high angle fourth lighting subunit 103-3-4, the medium angle fourth lighting subunit 103-2-4, and the low angle fourth lighting subunit 103-1-4 are located in a fourth orthogonal partition.
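For illustration only, the twelve-subunit layout described above can be captured in a small data structure. The following Python sketch is not part of the patent; the class and field names, and the 20°/60°/80° angles taken from the example above, are assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class IlluminationSubunit:
    ring: str         # "low", "medium" or "high" angle annular lighting unit
    quadrant: int     # orthogonal partition 1..4
    angle_deg: float  # light source spatial angle (20/60/80 in the example above)
    ref: str          # reference numeral used in Fig. 3

RING_ANGLES = {"low": 20.0, "medium": 60.0, "high": 80.0}
RING_IDS = {"low": 1, "medium": 2, "high": 3}

SUBUNITS = [
    IlluminationSubunit(ring, q, RING_ANGLES[ring], f"103-{RING_IDS[ring]}-{q}")
    for ring in ("low", "medium", "high")
    for q in (1, 2, 3, 4)
]
# 3 annular lighting units x 4 orthogonal partitions = 12 independently switchable subunits
assert len(SUBUNITS) == 12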
Specifically, in this embodiment every illumination subunit can illuminate the object to be measured; they differ only in the shadow areas they generate, and the spatial angles of the light emitted by the illumination subunits of each orthogonal partition at each angle are different. The illumination subunits can therefore be selected flexibly according to the size of the raised or recessed area of the object to be measured.
In the surface defect detection system provided by this embodiment, because the multi-angle light source is spliced from customized illumination subunits in multiple partitions at multiple angles, the object to be measured can be photographed under light source conditions at different angles, so that the shadow region is rotated, invalid data is reduced, and the validity of the depth information is enhanced.
Example two
Fig. 4 is a flowchart of a detection method of a surface defect detection system according to a second embodiment of the present disclosure, where the method may be performed by a surface defect detection apparatus according to a second embodiment of the present disclosure, and the apparatus may be implemented in software and/or hardware. The method specifically comprises the following steps:
s110, under the illumination conditions of the illumination subunits at different angles, a plurality of images to be measured are obtained.
The image to be measured may be an image of the object to be measured captured by the camera under each light source angle condition.
Specifically, in order to alternate shadows and obtain a clear surface appearance of the object to be measured, the object to be measured may be irradiated by illumination subunits at different angles. For example, the illumination subunits may be located in different orthogonal partitions of the annular illumination unit at the same angle, or in different orthogonal partitions of annular illumination units at different angles. In this embodiment, illumination subunits at different angles may be selected from the twelve groups of illumination subunits, and the angles at which the selected illumination subunits irradiate the object to be measured may together cover 360 degrees.
In an embodiment of the present disclosure, under the illumination condition of the illumination subunits at different angles, acquiring a plurality of images to be measured includes: selecting illumination subunits in the four orthogonal partitions respectively to obtain four target illumination subunits, wherein the four target illumination subunits belong to annular illumination units with different angles; and acquiring a plurality of images to be measured under the irradiation conditions of different target illumination subunits.
The target illumination subunit may be a light source selected to illuminate the object to be measured.
Specifically, this embodiment may use at least four groups of lighting subunits at different angles. Since the lighting subunits of the four orthogonal partitions can together illuminate a full circle around the object to be measured, one lighting subunit may be selected in each of the four orthogonal partitions, so as to provide different illumination for the object to be measured and capture different images to be measured.
In the embodiment, at least four groups of lighting subunits with different angles are adopted to illuminate the object to be measured, namely four target lighting subunits, so that the influence of image data of a shadow area on the accuracy of image processing is avoided.
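As a hedged sketch of the acquisition step just described (one illumination subunit per orthogonal partition, one image captured under each), the following Python uses hypothetical light_ctrl and camera interfaces and the IlluminationSubunit structure sketched earlier; none of these names come from the patent.

def acquire_images(light_ctrl, camera, subunits_by_quadrant):
    """subunits_by_quadrant: dict {1..4: IlluminationSubunit} chosen for the part shape."""
    images, light_params = [], []
    for quadrant in (1, 2, 3, 4):
        sub = subunits_by_quadrant[quadrant]
        light_ctrl.switch_on(sub.ref)            # only this subunit illuminates the part
        images.append(camera.grab())             # image to be detected under this light
        light_params.append((sub.ref, sub.angle_deg))  # record which subunit / spatial angle was used
        light_ctrl.switch_off(sub.ref)
    return images, light_params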
And S120, determining a normal vector of the target image according to the gray values of all pixel points in the multiple images to be detected.
The target image refers to the final synthesized image, which can clearly represent the surface defects of the object to be detected.
In this disclosure, determining a normal vector of a target image according to a gray value of each pixel point in a plurality of images to be detected includes: determining the product of the diffuse reflectivity and the normal vector of each pixel point in the target image according to the Lambert reflection principle and the gray values of each pixel point in a plurality of images to be detected; and obtaining the normal vector of each pixel point of the target image by normalizing and separating the product of the diffuse reflectance and the normal vector of each pixel point in the target image.
Specifically, the present embodiment employs a photometric stereo method, and obtains the distribution of the surface geometry of the object to be measured by using multi-angle incident light. Compared with other traditional three-dimensional imaging methods, the method can be realized by only using one common camera, and the cost is low; relative movement between the object and the camera is not required, image alignment is not required, and the requirement on the structural performance of the system is low. An object needs to have lambertian reflection characteristics so that it reflects incident light in a diffuse manner.
Specifically, incident light with an emission angle in a certain direction is irradiated on an object to be measured, and if the object to be measured meets a lambertian reflection condition, the following relational expression should be met:
s(i, j) = I_j · L_j · ρ_i · N_i    (1)
wherein s(i, j) represents the pixel value of the pixel point with coordinates (i, j) in the image to be detected captured by the camera; I represents the intensity of the light source; L represents the unit vector of the light source direction; ρ represents the diffuse reflectance of the surface of the object to be measured at that point; and N represents the normal vector of the surface of the object to be measured. I_j is the light intensity in the j direction for the pixel point with coordinates (i, j) in the image to be detected; L_j is the light source spatial angle in the j direction for the pixel point with coordinates (i, j) in the image to be detected; ρ_i is the diffuse reflectance in the i direction for the pixel point with coordinates (i, j) in the image to be detected; and N_i is the normal vector in the i direction for the pixel point with coordinates (i, j) in the image to be detected. Here, I and L represent characteristics of the light source and can be determined from the different pre-calibrated target illumination subunits; ρ and N represent characteristics of the surface of the object to be measured, do not change with the light source, and are unknown quantities; S is the collection of all pixel points s(i, j) in the image to be detected.
Specifically, formula (1) in this embodiment contains three unknowns, ρ, N and S, and theoretically at least three groups of non-coplanar incident lights are required to irradiate a point on the surface of the object to be measured so that the normal vector N = (n_x, n_y, n_z) of the point can be obtained. In practical situations, the light source may illuminate the surface of the object to be measured and generate shadows. In order to avoid the influence of the shadows, illumination light sources with corresponding incident angles can be selected according to the shape of the object to be measured, four groups of images to be detected captured under non-coplanar target illumination subunit conditions are used, formula (1) is substituted in, and the over-determined equation is solved by the least square method.
It should be noted that non-coplanar means that the components (x, y, z) of the vectors L are not all the same; if L were the same, the equation would not have enough known quantities and could not be solved. Therefore, the light sources adopted in this embodiment may be several low-angle orthogonal partition light sources, several medium-angle or high-angle orthogonal partition light sources, or several orthogonal partition light sources at mixed angles. Because the annular illumination light sources at the different angles are not coplanar, the values of L are different. Illustratively, this embodiment uses four groups of data to solve the three unknowns in formula (1), which avoids the problem of inaccurate equation solving caused by a shadow region; by reasonably selecting the illumination angles, the influence of the shadow region on data reconstruction can be effectively reduced.
In the embodiment of the present disclosure, determining a product of a diffuse reflectance and a normal vector of each pixel point in a target image according to a lambertian reflection principle and gray values of each pixel point in a plurality of images to be measured includes: respectively splitting pixel points in a plurality of images to be detected into c rows of pixel points, wherein c is an integer greater than 1; according to the Lambert reflection principle and gray values of c rows of pixel points in a plurality of images to be detected, solving the product of the diffuse reflectance and the normal vector of each row of pixel points line by line to obtain the product of the diffuse reflectance and the normal vector of the c rows of pixel points; and determining the product of the diffuse reflectance and the normal vector of each pixel point in the target image according to the product of the diffuse reflectance and the normal vector of the c-line pixel points.
Specifically, in this embodiment, a (a > 3) groups of incident lights with different angles (i.e., multiple groups of four target lighting sub-units that are not coplanar) are used to rotate the shadow for solution. For a single point pixel, the following system of linear equations can be obtained:
[Formula (2), rendered only as an image in the original publication]
wherein s_a is the pixel value of a certain pixel point in the image to be detected captured under the conditions of the a-th group of target illumination subunits; I_a is the light source intensity for that pixel point in the image to be detected captured under the conditions of the a-th group of target illumination subunits; l_ax, l_ay and l_az are respectively the light source spatial angles in the x, y and z directions for that pixel point under the conditions of the a-th group of target illumination subunits; and n_x, n_y and n_z are respectively the normal vector components in the x, y and z directions for that pixel point under the conditions of the a-th group of target illumination subunits.
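Formula (2) is reproduced only as an image in the source text. Based on formula (1) and the variable definitions above, it presumably stacks the Lambertian model over the a groups of target illumination subunits; a sketch of that presumed form is:

\begin{pmatrix} s_1 \\ s_2 \\ \vdots \\ s_a \end{pmatrix}
=
\begin{pmatrix} I_1 l_{1x} & I_1 l_{1y} & I_1 l_{1z} \\ I_2 l_{2x} & I_2 l_{2y} & I_2 l_{2z} \\ \vdots & \vdots & \vdots \\ I_a l_{ax} & I_a l_{ay} & I_a l_{az} \end{pmatrix}
\rho
\begin{pmatrix} n_x \\ n_y \\ n_z \end{pmatrix}, \qquad a > 3,

i.e. IL · (ρN) = S for that pixel, which is the system inverted by least squares in formula (4) below.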
It should be noted that formula (2) is a solving formula for a single pixel point; in fact S is a three-dimensional matrix, and in the actual calculation process the three-dimensional matrix is split into two-dimensional matrices for solving, and finally the solved values are combined. For example, the image gray value S(i, j) in this embodiment is divided into c row vectors S(i) for the operation, where i ∈ (1, b) and j ∈ (1, c), and the calculation formula after each row vector S(i) is substituted into formula (2) is:
[Formula (3), rendered only as an image in the original publication]
wherein s_{a,b} is the pixel value of the b-th pixel point in the c-th row of the image to be detected captured under the conditions of the a-th group of target illumination subunits; n_{x,b}, n_{y,b} and n_{z,b} are the normal vector components in the x, y and z directions of the b-th pixel point in the c-th row of the image to be detected captured under the conditions of each group of target illumination subunits; and ρ_b is the diffuse reflectance of the b-th pixel point in the c-th row of the image to be detected captured under the conditions of each group of target illumination subunits.
For example, if the image to be detected in this embodiment has c rows of pixel values, each row of pixel values is input into formula (3) for calculation, and the product of the diffuse reflectance and the normal vector of each row of pixel points in the target image is solved. For ease of understanding, if the product of the diffuse reflectance and the normal vector of the pixel points in the first row of the target image is to be solved, the relevant values of each pixel point in the first row of the images to be detected captured under the conditions of each group of target illumination subunits are input into formula (3). As specific examples, s_{1,1} is the pixel value of the first pixel point in the first row of the image to be detected captured under the conditions of the first group of target illumination subunits; s_{1,2} is the pixel value of the second pixel point in the first row of the image to be detected captured under the conditions of the first group of target illumination subunits; s_{2,1} is the pixel value of the first pixel point in the first row of the image to be detected captured under the conditions of the second group of target illumination subunits; and so on.
When known light source parameters are used, an over-determined equation can be solved through a least square method, the product of the diffuse reflectance and the normal vector of each row of pixel points is solved line by line, the product of the diffuse reflectance and the normal vector of the c row of pixel points is obtained, and according to the product of the diffuse reflectance and the normal vector of the c row of pixel points, the approximate solution of the product of the diffuse reflectance rho and the normal vector N of each pixel point in the target image is determined as follows:
ρN = ((IL)^T (IL))^{-1} (IL)^T S    (4)
wherein T is a transposition symbol.
Since the normal vector N in formula (4) is a unit vector that only represents a direction and whose components are numbers smaller than 1, and since this embodiment is more interested in the relative height between each pixel point and the adjacent pixel points in the subsequent target image, that is, in the difference values of the normal vector in each direction, formula (4) is substituted into the matrix normalization formulas (5) and (6) from linear algebra, and ρN is separated by normalization to obtain n_i and ρ_i. The matrix normalization separation formulas are:
n_i = (ρN)_i / ||(ρN)_i||_2 , i ∈ (1, b)    (5)
ρ_i = ||(ρN)_i||_2 , i ∈ (1, b)    (6)
therefore, in this embodiment, the normal vector of each pixel point in the target image can be obtained through the formula (5), and in addition, the diffuse reflectance of each pixel point in the target image can be obtained through the formula (6). And, n i In fact the three-dimensional coordinates (n) of the normal vector of each pixel point x ,n y ,n z ) The collection is denoted as N.
S130, determining the relative height relation between each pixel point and the adjacent pixel point in the target image according to the normal vector of the target image, and obtaining the target image according to the relative height relation.
In the embodiment of the present disclosure, determining a relative height relationship between each pixel point and an adjacent pixel point in a target image according to a normal vector of the target image, and obtaining the target image according to the relative height relationship includes: determining the relative height difference between each pixel point and an adjacent pixel point in the target image through the normal vector of the target image according to the relation between the gradient and the normal vector; and determining the target image according to the relative height difference between each pixel point and the adjacent pixel point in the target image and the height reference value.
Since the images to be detected are two-dimensional images at first and only have the relation between x and y, the requirement of this embodiment is to reconstruct the target image according to the data of the images to be detected, and obtain the height difference between each pixel point and the adjacent pixel point on the target image, thereby clearly representing the surface defect of the object to be detected. Therefore, the present embodiment constructs a ternary function w = f (x, y, z) for solving the relationship between z and (x, y). Wherein z is the height of each pixel point in the target image; w is a constructed ternary function, and the gradient of the ternary function w is a normal vector of an isopotential surface, which can also be said to be a normal vector N of a target image (i.e. the surface of an object to be measured). Therefore, in the embodiment, the relative height difference between each pixel point and the adjacent pixel point in the target image can be determined through the normal vector of the target image, and the target image can be determined through the relative height difference between each pixel point and the adjacent pixel point in the target image and any given height reference value, so that the surface defect of the object to be detected can be clearly represented.
In the embodiment of the present disclosure, determining a relative height difference between each pixel point and an adjacent pixel point in a target image according to a normal vector of the target image includes: inputting the three-dimensional coordinates of the normal vector of each pixel point of the target image into a relative height difference formula, and determining the relative height difference between each pixel point and an adjacent pixel point in the target image, wherein the relative height difference formula is as follows:
R_{(x,x-1)} = z_{x-1,y} - z_{x,y} = n_x / n_z    (7)
R_{(y,y-1)} = z_{x,y-1} - z_{x,y} = n_y / n_z    (8)
wherein n_x, n_y and n_z are the three-dimensional coordinate values of the normal vector of each pixel point of the target image; z_{x,y} is the height value of the pixel point at position (x, y) in the target image; R_{(x,x-1)} = n_x/n_z is the relative height difference in the x direction between the pixel point at position (x, y) in the target image and the adjacent pixel point (x-1, y); and R_{(y,y-1)} is the relative height difference in the y direction between the pixel point at position (x, y) in the target image and the adjacent pixel point (x, y-1).
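Continuing the same sketch, the relative height differences of formulas (7) and (8) and their accumulation from an arbitrary height reference value could look as follows; the scan order (first column, then each row), the row/column indexing convention and the z_ref default are assumptions, since the patent only requires that some height reference value be given.

import numpy as np

def relative_heights(N, z_ref=0.0):
    """N: normal map of shape (H, W, 3); first array index is y (row), second is x (column)."""
    eps = 1e-12
    nz = np.where(np.abs(N[..., 2]) < eps, eps, N[..., 2])  # guard against n_z = 0
    Rx = N[..., 0] / nz          # R_(x,x-1) = z_(x-1,y) - z_(x,y) = n_x / n_z, formula (7)
    Ry = N[..., 1] / nz          # R_(y,y-1) = z_(x,y-1) - z_(x,y) = n_y / n_z, formula (8)
    H, W = Rx.shape
    z = np.zeros((H, W))
    z[0, 0] = z_ref              # arbitrary height reference value
    # z_(x,y) = z_(x,y-1) - R_(y,y-1) down the first column, then
    # z_(x,y) = z_(x-1,y) - R_(x,x-1) across each row
    for y in range(1, H):
        z[y, 0] = z[y - 1, 0] - Ry[y, 0]
    for y in range(H):
        for x in range(1, W):
            z[y, x] = z[y, x - 1] - Rx[y, x]
    return z, Rx, Ry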
In this embodiment of the present disclosure, after determining a relative height difference between each pixel point and an adjacent pixel point in the target image, the method further includes: determining the divergence value of each pixel point in the target image according to the normal vector of the target image; strengthening the relative height difference between each pixel point and an adjacent pixel point in the target image according to the divergence value of each pixel point in the target image to obtain the strengthened relative height difference between each pixel point and the adjacent pixel point in the target image; correspondingly, the target image is determined through the relative height difference between each pixel point and the adjacent pixel point in the target image and the height reference value, and the method comprises the following steps of: and determining the target image according to the strengthened relative height difference between each pixel point and the adjacent pixel point in the target image and the height reference value.
The surface divergence of the object can be expressed as:
D = n_x / n_z - n_y / n_z    (9)
and D is the divergence value of each pixel point in the target image.
Specifically, divergence has a denoising effect on the image. Since image edges are often located at positions with large gradient values, the diffusion equation slows down diffusion in regions with large gradient values and accelerates diffusion in regions with small gradient values, so useful image details can be protected while denoising; that is, it reinforces the height difference between each pixel point in the target image and the surrounding pixel points.
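For completeness, the divergence map of formula (9) can be computed directly from the normal map. The patent does not state the exact rule used to strengthen the relative height differences with it, so enhance_differences below is only one plausible weighting assumed here, not the patent's method.

import numpy as np

def divergence_map(N):
    """Formula (9): D = n_x/n_z - n_y/n_z for each pixel of the normal map N (H, W, 3)."""
    eps = 1e-12
    nz = np.where(np.abs(N[..., 2]) < eps, eps, N[..., 2])
    return N[..., 0] / nz - N[..., 1] / nz

def enhance_differences(Rx, Ry, D, k=1.0):
    """Assumed enhancement step: scale the height differences by (1 + k*|D|)."""
    w = 1.0 + k * np.abs(D)
    return Rx * w, Ry * w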
In the detection method of the surface defect detection system disclosed in this embodiment, a plurality of images to be detected are obtained under the illumination conditions of the illumination subunits at different angles; determining a normal vector of a target image according to gray values of all pixel points in a plurality of images to be detected; according to the normal vector of the target image, the relative height relation between each pixel point and the adjacent pixel point in the target image is determined, the target image is obtained through the relative height relation, the surface gradient of the target image can be effectively obtained, three-dimensional information can be captured better, the defect characteristics of the surface of the object can be represented clearly, and missing detection is avoided.
For example, the present embodiment may detect the surface defect of the CCM module steel sheet, and may include the following steps:
s1, adjusting the initial position of a camera lens, specifically: an object to be measured is placed in a view field and fixed, a camera 101 and a lens 102 are adjusted to enable the surface of the object to be measured to be focused clearly, camera software is controlled through an industrial personal computer, camera parameters are adjusted, for example, exposure time is set to be 1000us, gain is set to be 2, and image size is set to be 4000 x 3060 pixels.
S2, adjusting the light source and collecting original images, which specifically comprises the following steps: the multi-angle light source is adjusted to a specified position, the object to be tested is placed at the test position, the 60° medium-angle annular lighting unit is selected, and the medium-angle first lighting subunit, the medium-angle second lighting subunit, the medium-angle third lighting subunit and the medium-angle fourth lighting subunit are turned on in sequence, providing four groups of illumination light with different incidence angles; the lighting angles (spatial angles) and light intensity information of the light sources are recorded, and images of the object to be tested in different shadow states are collected and stored as the images to be detected.
S3, solving the normal vector, which specifically comprises the following steps: each row of the four groups of image gray values is read and assigned in turn to the matrix S, the light source lighting angles and light intensities are assigned to the matrices L and I respectively and substituted into formula (4), and the expression of N is obtained by normalization and separation.
And S4, constructing a three-dimensional function w = f (x, y, z), obtaining a relative height relation according to the relation between the gradient and the normal vector, and determining a target image according to the relative height relation.
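The hypothetical helpers sketched above can be strung together as a synthetic sanity check of steps S3 and S4 (no hardware involved; the light directions, intensities and the flat test surface are made-up data, not values from the patent).

import numpy as np

# Made-up, approximately unit light directions for four non-coplanar target subunits
L = np.array([[ 0.5,  0.0, 0.866],
              [-0.5,  0.0, 0.866],
              [ 0.0,  0.5, 0.866],
              [ 0.0, -0.5, 0.866]])
I = np.ones(4)                                              # assume calibrated unit intensities
true_N = np.dstack([np.zeros((8, 8)), np.zeros((8, 8)), np.ones((8, 8))])   # flat 8x8 surface
S = np.stack([I[a] * (true_N @ L[a]) for a in range(4)])    # synthetic gray values via formula (1), rho = 1
N, rho = solve_normals(S, I, L)                             # S3: normal vectors and diffuse reflectance
z, Rx, Ry = relative_heights(N, z_ref=0.0)                  # S4: relative height map
D = divergence_map(N)                                       # divergence map of formula (9)
# For this flat surface, z and D should come out (numerically) as all zeros.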
For example, the images to be measured of a plurality of CCM module steel sheets acquired by the above method in this embodiment can be as shown in fig. 5 to 8, and the final synthesized target image of the CCM module steel sheet in this embodiment is as shown in fig. 9. Fig. 5 is a schematic diagram of an image to be measured captured under a medium-angle first lighting subunit condition according to a second embodiment of the present disclosure; fig. 6 is a schematic diagram of an image to be measured captured under a medium-angle second lighting subunit condition according to a second embodiment of the present disclosure; fig. 7 is a schematic diagram of an image to be measured captured under a condition of a middle-angle third lighting subunit according to a second embodiment of the present disclosure; fig. 8 is a schematic diagram of an image to be measured captured under a condition of a middle-angle fourth lighting subunit according to a second embodiment of the present disclosure; fig. 9 is a schematic diagram of a target image synthesized based on a surface defect detection method according to a second embodiment of the disclosure, and is also a surface divergence effect diagram of a CCM module steel sheet. As shown in fig. 5 to 9, the image area marked by the white square frame is a defect area of the CCM module steel sheet, as shown in fig. 5 to 8, the defect area in each image to be detected is not too clear, but the defect area in fig. 9 is obvious, so that the detection method of the surface defect detection system provided by the embodiment can effectively detect the defect area on the surface of the object, and avoid missing detection.
EXAMPLE III
Fig. 10 is a schematic structural diagram of a detection apparatus of a surface defect detection system provided in an embodiment of the present disclosure, where the apparatus specifically includes:
the to-be-detected image acquisition module 310 is configured to acquire a plurality of to-be-detected images under the illumination conditions of the illumination subunits at different angles;
the normal vector determining module 320 is configured to determine a normal vector of the target image according to the gray values of the pixel points in the multiple images to be detected;
the target image determining module 330 is configured to determine a relative height relationship between each pixel point and an adjacent pixel point in the target image according to a normal vector of the target image, and obtain the target image according to the relative height relationship.
In an implementation, the to-be-detected image acquisition module 310 is specifically configured to: selecting illumination subunits in the four orthogonal partitions respectively to obtain four target illumination subunits, wherein the four target illumination subunits belong to annular illumination units with different angles; and acquiring a plurality of images to be measured under the irradiation conditions of different target illumination subunits.
In an implementation, the normal vector determining module 320 is specifically configured to: determining the product of the diffuse reflectivity and the normal vector of each pixel point in the target image according to the Lambert reflection principle and the gray values of each pixel point in a plurality of images to be detected; and obtaining the normal vector of each pixel point of the target image by normalizing and separating the product of the diffuse reflectivity and the normal vector of each pixel point in the target image.
In an implementation, the normal vector determining module 320 is specifically configured to: respectively splitting pixel points in a plurality of images to be detected into c rows of pixel points, wherein c is an integer greater than 1; according to the Lambert reflection principle and the gray values of the c rows of pixel points in the plurality of images to be detected, solving the product of the diffuse reflectivity and the normal vector of each row of pixel points row by row to obtain the product of the diffuse reflectivity and the normal vector of the c rows of pixel points; and determining the product of the diffuse reflectivity and the normal vector of each pixel point in the target image according to the product of the diffuse reflectivity and the normal vector of the c rows of pixel points.
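A row-by-row variant of the least-squares solve, matching the splitting into c rows described above, might look as follows; this is a sketch that yields the same product of diffuse reflectivity and normal vector as a whole-image solve while keeping each linear system small:

```python
import numpy as np

def solve_normals_by_row(images, L, I):
    """Row-by-row least-squares solve of the product rho * N.

    images: (k, c, w) gray values, split into c rows of pixel points
    L:      (k, 3) unit light-direction vectors
    I:      (k,)   light intensities
    """
    k, c, w = images.shape
    A = I[:, None] * L                               # k x 3
    G = np.empty((3, c, w))
    for row in range(c):                             # solve row by row
        S_row = images[:, row, :].astype(np.float64)          # k x w gray values of one row
        sol, _, _, _ = np.linalg.lstsq(A, S_row, rcond=None)
        G[:, row, :] = sol                           # product of reflectivity and normal
    rho = np.linalg.norm(G, axis=0)
    N = G / np.maximum(rho, 1e-12)
    return N, rho
```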
In an implementation manner, the target image determining module 330 is specifically configured to: determining the relative height difference between each pixel point and an adjacent pixel point in the target image through the normal vector of the target image according to the relation between the gradient and the normal vector; and determining the target image according to the relative height difference between each pixel point and the adjacent pixel point in the target image and the height reference value.
In an implementation manner, the target image determining module 330 is specifically configured to: inputting the three-dimensional coordinates of the normal vector of each pixel point of the target image into a relative height difference formula, and determining the relative height difference between each pixel point and the adjacent pixel point in the target image, wherein the relative height difference formula is as follows:
R(x,x-1) = z(x-1,y) - z(x,y) = n_x / n_z
R(y,y-1) = z(x,y-1) - z(x,y) = n_y / n_z
wherein n_x, n_y and n_z are the three-dimensional coordinate values of the normal vector of each pixel point of the target image; z(x,y) is the height value of the pixel point at position (x, y) in the target image; R(x,x-1) = n_x/n_z is the height difference in the x direction between the pixel point with coordinates (x, y) and the adjacent pixel point (x-1, y) in the target image; and R(y,y-1) = n_y/n_z is the height difference in the y direction between the pixel point with coordinates (x, y) and the adjacent pixel point (x, y-1) in the target image.
In an implementation manner, the target image determining module 330 is specifically configured to: after the relative height difference between each pixel point and the adjacent pixel point in the target image is determined, determining the divergence value of each pixel point in the target image according to the normal vector of the target image; strengthening the relative height difference between each pixel point and an adjacent pixel point in the target image according to the divergence value of each pixel point in the target image to obtain the strengthened relative height difference between each pixel point and the adjacent pixel point in the target image; and determining the target image through the strengthened relative height difference between each pixel point and the adjacent pixel point in the target image and the height reference value.
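The divergence-based strengthening can be sketched as below. The finite-difference divergence of the projected gradient field (n_x/n_z, n_y/n_z) and the weighting rule 1 + α·|div| are illustrative assumptions, since the text does not fix a specific enhancement formula:

```python
import numpy as np

def divergence_enhanced_differences(N, alpha=1.0, eps=1e-6):
    """Strengthen the relative height differences with a divergence weight.

    Returns the weighted differences along x and y plus the divergence map;
    the rule (1 + alpha * |div|) is an assumption chosen for illustration.
    """
    nx, ny, nz = N
    nz = np.where(np.abs(nz) < eps, eps, nz)
    p, q = nx / nz, ny / nz                         # relative height differences
    # Finite-difference divergence of the projected gradient field.
    div = np.gradient(p, axis=1) + np.gradient(q, axis=0)
    weight = 1.0 + alpha * np.abs(div)
    return p * weight, q * weight, div
```

The strengthened differences can then be integrated with the same path integration as before, together with the height reference value, to produce the target image.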
The present disclosure also provides an electronic device and a readable storage medium according to an embodiment of the present disclosure.
FIG. 11 shows a schematic block diagram of an example electronic device 400 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 11, the device 400 includes a computing unit 401 that can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 402 or a computer program loaded from a storage unit 408 into a random access memory (RAM) 403. In the RAM 403, various programs and data required for the operation of the device 400 can also be stored. The computing unit 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
A number of components in device 400 are connected to I/O interface 405, including: an input unit 406 such as a keyboard, a mouse, or the like; an output unit 407 such as various types of displays, speakers, and the like; a storage unit 408 such as a magnetic disk, optical disk, or the like; and a communication unit 409 such as a network card, modem, wireless communication transceiver, etc. The communication unit 409 allows the device 400 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
Computing unit 401 may be a variety of general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 401 executes the respective methods and processes described above, such as the surface defect detection method. For example, in some embodiments, the surface defect detection method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 400 via the ROM 402 and/or the communication unit 409. When the computer program is loaded into the RAM 403 and executed by the computing unit 401, one or more steps of the surface defect detection method described above may be performed. Alternatively, in other embodiments, the computing unit 401 may be configured to perform the surface defect detection method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/acts specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server with a combined blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, "a plurality" means two or more unless specifically limited otherwise.
The above description covers only specific embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any changes or substitutions that can readily occur to a person skilled in the art within the technical scope disclosed herein shall fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure should be subject to the protection scope of the claims.

Claims (10)

1. A surface defect detection system, the system comprising: a camera component, a support frame, a multi-angle light source and a host, wherein,
the multi-angle light source is positioned between the camera component and the object to be detected, and is formed by overlapping a plurality of layers of annular lighting units with different diameters, wherein each annular lighting unit is formed by splicing a plurality of lighting subunits and is used for providing light sources with different angles for the object to be detected;
the camera assembly is connected with the support frame and used for shooting the object to be detected under the irradiation conditions of different lighting subunits to obtain a plurality of images to be detected and sending the plurality of images to be detected to the host;
and the host is used for synthesizing the plurality of images to be detected into a target image.
2. The system of claim 1, wherein the multi-angle light source is formed by a high angle annular lighting unit, a medium angle annular lighting unit, and a low angle annular lighting unit in a stacked manner; wherein,
the middle-angle annular lighting unit is positioned between the high-angle annular lighting unit and the low-angle annular lighting unit, and the low-angle annular lighting unit is close to the object to be detected;
the diameter of the high angle annular lighting unit is smaller than that of the medium angle annular lighting unit, and the diameter of the medium angle annular lighting unit is smaller than that of the low angle annular lighting unit.
3. The system of claim 2, wherein the high angle annular lighting unit, the medium angle annular lighting unit, and the low angle annular lighting unit are each formed by splicing four identical lighting sub-units.
4. The system of claim 3, wherein the high-angle annular lighting unit comprises a high-angle first lighting subunit, a high-angle second lighting subunit, a high-angle third lighting subunit, and a high-angle fourth lighting subunit;
the middle-angle annular lighting unit comprises a middle-angle first lighting subunit, a middle-angle second lighting subunit, a middle-angle third lighting subunit and a middle-angle fourth lighting subunit which are spliced;
the low-angle annular lighting unit comprises a low-angle first lighting subunit, a low-angle second lighting subunit, a low-angle third lighting subunit and a low-angle fourth lighting subunit which are spliced;
wherein the high angle first illumination subunit, the medium angle first illumination subunit, and the low angle first illumination subunit are located in a first orthogonal partition; the high angle second illumination subunit, the medium angle second illumination subunit, and the low angle second illumination subunit are located in a second orthogonal partition; the high angle third illumination subunit, the medium angle third illumination subunit, and the low angle third illumination subunit are located in a third orthogonal partition; the high angle fourth illumination subunit, the medium angle fourth illumination subunit, and the low angle fourth illumination subunit are located in a fourth orthogonal partition.
5. A detection method of the surface defect detection system according to any one of claims 1 to 4, comprising:
acquiring a plurality of images to be detected under the illumination conditions of the illumination subunits at different angles;
determining a normal vector of a target image according to the gray value of each pixel point in the plurality of images to be detected;
and determining the relative height relationship between each pixel point and the adjacent pixel point in the target image according to the normal vector of the target image, and obtaining the target image according to the relative height relationship.
6. The method according to claim 5, wherein acquiring a plurality of images to be tested under the illumination condition of the illumination subunits at different angles comprises:
selecting illumination subunits in four orthogonal partitions respectively to obtain four target illumination subunits, wherein the four target illumination subunits belong to annular illumination units with different angles;
and acquiring a plurality of images to be measured under the irradiation conditions of different target illumination subunits.
7. A detection device of the surface defect detection system according to any one of claims 1 to 4, characterized in that the device comprises:
the to-be-detected image acquisition module is used for acquiring a plurality of to-be-detected images under the illumination conditions of the illumination subunits at different angles;
the normal vector determining module is used for determining a normal vector of the target image according to the gray values of all pixel points in the plurality of images to be detected;
and the target image determining module is used for determining the relative height relationship between each pixel point and the adjacent pixel point in the target image according to the normal vector of the target image, and obtaining the target image according to the relative height relationship.
8. The apparatus of claim 7, wherein the to-be-detected image acquisition module is specifically configured to:
selecting illumination subunits in four orthogonal partitions respectively to obtain four target illumination subunits, wherein the four target illumination subunits belong to annular illumination units with different angles;
and acquiring a plurality of images to be measured under the irradiation conditions of different target illumination subunits.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 5-6.
10. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 5-6.
CN202211648264.9A 2022-12-21 2022-12-21 Surface defect detection system, detection method, detection device, detection equipment and storage medium Active CN115980059B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211648264.9A CN115980059B (en) 2022-12-21 2022-12-21 Surface defect detection system, detection method, detection device, detection equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115980059A true CN115980059A (en) 2023-04-18
CN115980059B CN115980059B (en) 2023-12-15

Family

ID=85960472

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211648264.9A Active CN115980059B (en) 2022-12-21 2022-12-21 Surface defect detection system, detection method, detection device, detection equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115980059B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010005264A1 (en) * 1999-05-05 2001-06-28 Slemon Charles S. Linked cameras and processors for imaging system
US20150355101A1 (en) * 2014-06-09 2015-12-10 Keyence Corporation Image Inspection Apparatus, Image Inspection Method, Image Inspection Program, Computer-Readable Recording Medium And Recording Device
CN108445007A (en) * 2018-01-09 2018-08-24 深圳市华汉伟业科技有限公司 A kind of detection method and its detection device based on image co-registration
CN109523541A (en) * 2018-11-23 2019-03-26 五邑大学 A kind of metal surface fine defects detection method of view-based access control model
CN110609039A (en) * 2019-09-23 2019-12-24 上海御微半导体技术有限公司 Optical detection device and method thereof
CN112669318A (en) * 2021-03-17 2021-04-16 上海飞机制造有限公司 Surface defect detection method, device, equipment and storage medium
CN112858318A (en) * 2021-04-26 2021-05-28 惠州高视科技有限公司 Method for distinguishing screen foreign matter defect from dust, electronic equipment and storage medium
CN113538432A (en) * 2021-09-17 2021-10-22 南通蓝城机械科技有限公司 Part defect detection method and system based on image processing
CN115272258A (en) * 2022-08-03 2022-11-01 无锡九霄科技有限公司 Metal cylindrical surface defect detection method, system and medium based on machine vision
CN218032792U (en) * 2022-06-30 2022-12-13 广州镭晨智能装备科技有限公司 Visual detection light source and automatic optical detection equipment

Also Published As

Publication number Publication date
CN115980059B (en) 2023-12-15

Similar Documents

Publication Publication Date Title
JP6447637B2 (en) Surface defect detection apparatus, surface defect detection method, and steel material manufacturing method
US20110228052A1 (en) Three-dimensional measurement apparatus and method
JP2019530261A (en) Improved camera calibration system, target and process
CN110726724A (en) Defect detection method, system and device
TW201100779A (en) System and method for inspecting a wafer (3)
KR20120014886A (en) Inspection recipe generation and inspection based on an inspection recipe
EP1604194A1 (en) Optical inspection system and method for displaying imaged objects in greater than two dimensions
US9756313B2 (en) High throughput and low cost height triangulation system and method
CN112858318B (en) Method for distinguishing screen foreign matter defect from dust, electronic equipment and storage medium
TW201518694A (en) Method and system for detecting luminance of a light source
JPWO2016208626A1 (en) Surface defect detection method, surface defect detection apparatus, and steel material manufacturing method
CN107110790A (en) Systems for optical inspection
CA3185292A1 (en) Neural network analysis of lfa test strips
JP2018204956A (en) Coating-film blistering width measuring apparatus for coated metal plate and method for measuring coating-film blistering width of coated metal plate
CN116908185A (en) Method and device for detecting appearance defects of article, electronic equipment and storage medium
CN115980059A (en) Surface defect detection system and detection method, device, equipment and storage medium thereof
CN108106610B (en) Object stage perpendicularity detection method and system and control device thereof
CN115615353A (en) Method, apparatus, device and storage medium for detecting size of object by using parallel light
CN112040138B (en) Stereoscopic light source system, image pickup method, image pickup device, storage medium, and electronic apparatus
CN110441315B (en) Electronic component testing apparatus and method
US7747066B2 (en) Z-axis optical detection of mechanical feature height
CN111566438B (en) Image acquisition method and system
CN115272258A (en) Metal cylindrical surface defect detection method, system and medium based on machine vision
JP2012098131A (en) Light distribution property measuring device, light distribution property inspection device, light distribution property measuring program, light distribution property measuring method and light distribution property inspection method
US10325361B2 (en) System, method and computer program product for automatically generating a wafer image to design coordinate mapping

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Zhang Zhengtao

Inventor after: Wu Bo

Inventor after: Tang Chao

Inventor after: Lv Xiaoyun

Inventor after: Zhang Wujie

Inventor after: Yang Huabin

Inventor before: Wu Bo

Inventor before: Tang Chao

Inventor before: Lv Xiaoyun

Inventor before: Zhang Wujie

CB02 Change of applicant information

Address after: 471033 Room 101 and Room 202, building 5, science and Technology Park, Luoyang National University, No. 2, Penglai Road, Jianxi District, Luoyang area, pilot Free Trade Zone, Luoyang City, Henan Province

Applicant after: CASI VISION TECHNOLOGY (LUOYANG) CO.,LTD.

Applicant after: Zhongke Huiyuan vision technology (Beijing) Co.,Ltd.

Applicant after: Zhongke Huiyuan Intelligent Equipment (Guangdong) Co.,Ltd.

Address before: No. 1107, 1st floor, building 4, No. 75 Suzhou street, Haidian District, Beijing 100080

Applicant before: Zhongke Huiyuan vision technology (Beijing) Co.,Ltd.

Applicant before: Zhongke Huiyuan Intelligent Equipment (Guangdong) Co.,Ltd.

Applicant before: CASI VISION TECHNOLOGY (LUOYANG) CO.,LTD.

GR01 Patent grant