Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It should be understood that the terms "first," "second," "third," and "fourth," etc. in the claims, description, and drawings of the present disclosure are used to distinguish between different objects and are not used to describe a particular order. The terms "comprises" and "comprising," when used in the specification and claims of this disclosure, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the disclosure herein is for the purpose of describing particular embodiments only, and is not intended to be limiting of the disclosure. As used in the specification and claims of this disclosure, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the term "and/or" as used in the specification and claims of this disclosure refers to any and all possible combinations of one or more of the associated listed items and includes such combinations.
Specific embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
The disclosure provides a method for detecting a three-dimensional edge defect of a cover plate. Referring to FIG. 1, FIG. 1 is a flowchart illustrating a method for detecting a three-dimensional edge defect of a cover plate according to an exemplary embodiment. As shown in FIG. 1, the method for detecting the cover plate stereoscopic edge defect comprises the following steps S101 to S104.
Step S101: acquiring a stereoscopic edge image of the cover plate.
Step S102: obtaining a plurality of intermediate images through multiple mean filtering operations according to the stereoscopic edge image, and obtaining an intermediate defect image according to the stereoscopic edge image and the plurality of intermediate images.
Step S103: obtaining a defect-highlighted image through image enhancement according to the intermediate defect image.
Step S104: extracting an image of the defect at the stereoscopic edge according to the defect-highlighted image.
In step S101, a stereoscopic edge image of the cover plate is acquired.
According to an embodiment of the present disclosure, the cover plate may be, for example, a glass cover plate of a mobile phone. The image of the stereoscopic edge of the glass cover plate of the mobile phone can be directly or indirectly acquired, for example, the stereoscopic edge of the glass cover plate can be directly imaged by an imaging device, or a prestored image of the stereoscopic edge of the glass cover plate can be acquired from other components through transmission.
In step S102, a plurality of intermediate images are obtained by multiple mean filtering operations from the stereoscopic edge image, and the intermediate defect image is obtained from the stereoscopic edge image and the plurality of intermediate images.
According to an embodiment of the present disclosure, this step processes the stereoscopic edge image to obtain an intermediate defect image for subsequent processing. First, mean filtering is applied to the stereoscopic edge image multiple times to obtain a plurality of intermediate images (mean-filtered images). The stereoscopic edge image is then differenced pixel-by-pixel with each of the intermediate images to obtain a plurality of difference images, and the difference images are summed pixel-by-pixel and normalized to obtain the intermediate defect image.
How to process the stereoscopic edge image to obtain the intermediate defect image will be explained by specific embodiments below.
FIGS. 2a and 2b show an example of reflections at the edge of a mobile phone.
FIG. 2a is an example of a mobile phone product with a 3D edge. For products with curved edges, there is typically a bright reflective area at the edges, e.g., at the left and right edges (the solid-line framed areas). This brightening is caused by the 3D curvature, and conventional algorithms cannot accurately distinguish the bright area from the defect area (the dashed-line area in FIG. 2a).
FIG. 2b is a partial view of FIG. 2a containing a defect. In FIG. 2b, there are irregularities at the edge of the mobile phone that cannot be identified under the illumination conditions shown in FIG. 2a.
For example, when a glass cover plate of a mobile phone is imaged, uneven illumination and reflection at the three-dimensional edge region of the cover plate disturb the imaging, which degrades the imaging quality and hinders accurate defect detection. To eliminate the influence of these disturbances, 5 filter kernels with sizes increasing from M to N are used to mean-filter the stereoscopic edge image, yielding 5 intermediate images (the number of kernels can be adjusted manually according to the actual situation). Here, M represents twice the transition width of the light-reflecting area at the cover plate edge and N represents twice the silk-screen width; for example, M is 3 and N is 11. Both M and N can be obtained by measurement and calculation or from design parameters, and of course can also be adjusted manually according to the actual situation. The stereoscopic edge image is then differenced pixel-by-pixel with each intermediate image obtained after the mean filtering to obtain a plurality of difference images, the difference images are summed pixel-by-pixel to obtain a composite image U, and finally the composite image U is normalized to obtain the intermediate defect image.
The composite image U can be obtained by the following formula (1):
U(x, y) = Σ_{i=1}^{k} |I(x, y) − I′_i(x, y)|   (1)
wherein U represents the composite image, k represents the number of filter kernels, I represents the stereoscopic edge image, and I′_i represents the i-th intermediate image.
The gray values of the composite image U are then normalized to [0, 255], resulting in the intermediate defect image Z. The normalization of the composite image U can be achieved by the following equation (2):
G_Z(x, y) = (G_U(x, y) − Min(G_U)) * 255 / (Max(G_U) − Min(G_U))   (2)
wherein G_Z(x, y) represents the gray value of the intermediate defect image Z at coordinates (x, y) of the intermediate defect image, G_U(x, y) represents the gray value of the composite image U at coordinates (x, y) of the composite image, Min(G_U) represents the minimum gray value in the composite image U, and Max(G_U) represents the maximum gray value in the composite image U.
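As an illustrative sketch of step S102 (the pixel difference is taken as an absolute difference here, which is an assumption, and the kernel sizes (3, 5, 7, 9, 11) correspond to the example M = 3, N = 11; the function name is not from the original):

```python
import numpy as np

def intermediate_defect_image(edge_img, kernel_sizes=(3, 5, 7, 9, 11)):
    """Mean-filter the edge image with several increasing kernel sizes,
    difference each result against the input pixel-by-pixel, sum the
    difference images (formula (1)), and normalize to [0, 255]
    (formula (2)). Absolute differences are assumed."""
    img = edge_img.astype(np.float64)
    h, w = img.shape
    u = np.zeros_like(img)                      # composite image U
    for k in kernel_sizes:
        pad = k // 2
        padded = np.pad(img, pad, mode="edge")  # edge-replicated borders
        mean = np.zeros_like(img)
        for dy in range(k):                     # k x k box (mean) filter
            for dx in range(k):
                mean += padded[dy:dy + h, dx:dx + w]
        mean /= k * k
        u += np.abs(img - mean)                 # formula (1)
    # formula (2): normalize U to [0, 255] to obtain Z
    return (u - u.min()) * 255.0 / (u.max() - u.min())
```

A pixel that deviates from its surroundings at every filter scale accumulates a large response, so defects stand out in Z regardless of the local background level.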
In step S103, the defect highlight image is obtained by image enhancement according to the intermediate defect image.
According to an embodiment of the present disclosure, this step processes the intermediate defect image to obtain a defect-highlighted image, which is used for the subsequent defect extraction. In this embodiment, the intermediate defect image is image-enhanced to make the defect more conspicuous, thereby facilitating its extraction.
In conjunction with the above specific embodiments, how to perform image enhancement on the intermediate defect image to obtain a defect highlight image will be specifically explained below.
For example, when a glass cover plate of a mobile phone is imaged, the silk-screen area at the stereoscopic edge interferes with the imaging, which degrades the imaging quality and hinders accurate defect detection. To eliminate this interference, the intermediate defect image Z obtained above may first be image-enhanced to obtain two enhanced images, and a defect-highlighted image is then calculated from the two enhanced images.
Specifically, for the image enhancement, all pixels of the intermediate defect image Z are traversed. The current gray value Z(x, y) of the pixel at coordinates (x, y) is compared with the mean gray value of all pixels adjacent to it within an M × M pixel region, where M represents twice the transition width of the light-reflecting area at the cover plate edge as described above. If the absolute value of the difference is greater than a threshold T, for example 8 (the threshold T is a segmentation coefficient related to the gray level of the silk-screen area at the stereoscopic edge and the gray level of the background, and may be set manually as the case requires), the gray value Z(x, y) of the current pixel is assigned the sum of the current gray value Z(x, y) and the gray value Z(x−1, y) or Z(x, y−1) of the previous pixel; otherwise, the gray value Z(x, y) is assigned 0. After all pixels are traversed, a first enhanced image E is obtained, wherein the gray value of each pixel in E is e(x, y). In addition, by the formula f(x, y) = e(x, y)², a second enhanced image F may be obtained, where f(x, y) represents the gray value of the image F at coordinates (x, y).
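The enhancement procedure above can be sketched as follows. The direction of the "previous pixel" accumulation is ambiguous in the text (left neighbour or upper neighbour); this sketch assumes the already-updated left neighbour, and all names are illustrative:

```python
import numpy as np

def enhanced_images(z, m=3, t=8.0):
    """Compute the first enhanced image E and second enhanced image F.
    Assumption: the 'previous pixel' value added is the already-updated
    left neighbour e(x-1, y); the text also allows the upper neighbour."""
    zf = z.astype(np.float64)
    h, w = zf.shape
    pad = m // 2
    padded = np.pad(zf, pad, mode="edge")
    e = np.zeros_like(zf)
    for y in range(h):
        for x in range(w):
            window = padded[y:y + m, x:x + m]
            # mean gray value of the neighbours of (x, y) in the M x M region
            neighbour_mean = (window.sum() - zf[y, x]) / (m * m - 1)
            if abs(zf[y, x] - neighbour_mean) > t:
                prev = e[y, x - 1] if x > 0 else 0.0
                e[y, x] = zf[y, x] + prev
            else:
                e[y, x] = 0.0
    f = e ** 2  # f(x, y) = e(x, y)^2
    return e, f
```

On a flat region the neighbour mean equals the pixel value, so E is zero there; only pixels that contrast with their surroundings survive and accumulate.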
In addition, by traversing all the pixels of the above-described first enhanced image E and second enhanced image F, the gray value of each pixel in the defect-highlighted image S is calculated by the following equations (3), (4), and (5), thereby obtaining the defect-highlighted image S.
u(x,y)=e(x+M+1,y+M+1)-e(x+1,y+M+1)-e(x+M+1,y+1)+e(x+1,y+1) (3)
w(x,y)=f(x+M+1,y+M+1)-f(x+1,y+M+1)-f(x+M+1,y+1)+f(x+1,y+1) (4)
wherein e(x+M+1, y+M+1), e(x+1, y+M+1), e(x+M+1, y+1), and e(x+1, y+1) represent the gray values of the first enhanced image E at the respective coordinate positions; f(x+M+1, y+M+1), f(x+1, y+M+1), f(x+M+1, y+1), and f(x+1, y+1) represent the gray values of the second enhanced image F at the respective coordinate positions; u(x, y) and w(x, y) are intermediate variables of the calculation; s(x, y) represents the gray value of the defect-highlighted image S at (x, y); and M represents twice the transition width of the light-reflecting area at the cover plate edge, as described above, and may be regarded as set.
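The corner-difference form of formulas (3) and (4) is the standard box-sum readout of a summed-area (integral) table, so a sketch under that reading is as follows; formula (5), which combines u(x, y) and w(x, y) into s(x, y), is not reproduced in the text and is therefore omitted here:

```python
import numpy as np

def box_sums(e, f, m):
    """Vectorized evaluation of formulas (3) and (4), treating E and F as
    summed-area (integral) tables: each output entry is the sum of an
    M x M window of the underlying image."""
    h, w = e.shape
    # u(x,y) = e(x+M+1,y+M+1) - e(x+1,y+M+1) - e(x+M+1,y+1) + e(x+1,y+1)
    u = e[m + 1:, m + 1:] - e[1:h - m, m + 1:] - e[m + 1:, 1:w - m] + e[1:h - m, 1:w - m]
    # w(x,y) has the same form over F
    ww = f[m + 1:, m + 1:] - f[1:h - m, m + 1:] - f[m + 1:, 1:w - m] + f[1:h - m, 1:w - m]
    return u, ww
```

With E holding sums of e and F holding sums of e², u and w give the window sum and sum of squares in constant time per pixel, which is the usual reason for the integral-table form.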
In step S104, an image of the defect at the stereoscopic edge is extracted from the defect-highlighted image.
According to an embodiment of the present disclosure, this step extracts the defect at the stereoscopic edge from the defect-highlighted image.
Further, the extracting, according to the defect highlighted image, the image of the defect at the stereoscopic edge includes: and extracting the image of the defect by performing threshold segmentation on the defect highlighted image.
According to an embodiment of the disclosure, each pixel in the defect-highlighted image S is traversed; if the gray value of the current pixel is greater than or equal to a set threshold, for example 10, the gray value is set to 255, and otherwise to 0. The image of the defect can thereby be extracted.
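The threshold segmentation described above can be sketched as:

```python
import numpy as np

def segment_defects(s, threshold=10):
    """Threshold segmentation of the defect-highlighted image S:
    pixels >= threshold become 255 (defect), all others become 0."""
    return np.where(np.asarray(s) >= threshold, 255, 0).astype(np.uint8)
```

The result is a binary mask in which the white (255) pixels mark the extracted defect.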
Referring to fig. 2c, fig. 2c is a flowchart illustrating another method for detecting a three-dimensional edge defect of a cover plate according to an exemplary embodiment. As shown in fig. 2c, the method may comprise the following steps S201-S202 before step S101.
In step S201, an image of the cover plate is acquired.
In this step, the image of the cover plate may be acquired by any suitable image acquisition device.
In step S202, a stereo edge image of the cover plate is determined according to the image of the cover plate.
In this step, the image of the solid edge of the cover plate can be accurately determined from the overall image of the cover plate by any suitable method.
Further, the determining the stereoscopic edge image of the cover plate according to the image of the cover plate includes: and determining the stereo edge image of the cover plate according to the image of the cover plate and the cover plate reference image.
Specifically, the cover plate reference image is an image acquired by the same image acquisition device in advance at the same angle, and the stereo edge image of the cover plate is determined according to the current cover plate image and the cover plate reference image.
Further, determining the stereoscopic edge image of the cover plate according to the image of the cover plate and the cover plate reference image comprises: taking a reference area in the cover plate reference image with a pixel point as a reference center; determining a positioning point corresponding to the reference center in the image of the cover plate through a search algorithm according to the reference area; determining the offset of the positioning point relative to the reference center according to the positioning point and the reference center; and determining the stereoscopic edge image of the cover plate according to the offset and the position of the stereoscopic edge in the cover plate reference image. Preferably, the reference center is the center of a camera hole in the cover plate reference image, and the reference area is a rectangular area centered on the center of the camera hole.
Specifically, when a defect is actually detected, the captured cover plate image may, due to the image capturing device and/or the detecting device, show the captured target shifted up, down, left, or right within the image. This can cause errors when defects of the same product are detected continuously. A reference image is therefore needed, for example a cover plate image captured in advance, for which the parameter values to be used in defect detection are accurately known through measurement and calculation. This ensures that the three-dimensional edge of the cover plate is accurately located when the same product is detected continuously, thereby eliminating interference from other areas of the image and making the detection more accurate.
For example, in order to accurately determine the stereoscopic edge image, a rectangular reference region is taken in the cover plate reference image with the center of the camera hole as the reference center. Of course, a region of another shape may also be taken, centered on a pixel point with an obvious gray-level difference at another position; the size of the region can be set manually and should include a certain number of pixel points. All pixel gray values in the reference region can be extracted using any suitable pixel gray value extraction method, and the gray gradient value of each pixel point in the reference region (its gray contrast with adjacent pixel points) is extracted by the following formula (6):
t(x, y) = g_(x+1, y) − g_(x−1, y),  u(x, y) = g_(x, y+1) − g_(x, y−1),  Grad(x, y) = √(t(x, y)² + u(x, y)²)   (6)
wherein g_(x, y) represents the gray value of the pixel point located at image coordinates (x, y) in the reference region. All pixels with a gray gradient value larger than, for example, 30 (adjustable manually according to the actual gradient values of the image) are grouped into a point set. From the point set, eight-connected domains in the reference region are extracted using a connected-domain extraction algorithm, and all pixels in all connected domains containing more than 10 pixels (adjustable manually according to the actual texture of the product surface) are taken as a feature set.
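A sketch of building the feature set as described, assuming central-difference gradients (the function and parameter names are illustrative, not from the original):

```python
import numpy as np

def feature_set(ref_region, grad_thresh=30.0, min_pixels=10):
    """Collect pixels whose gray-gradient magnitude exceeds grad_thresh,
    group them into eight-connected domains, and keep the domains with
    more than min_pixels pixels. Central-difference gradients are an
    assumption."""
    g = ref_region.astype(np.float64)
    h, w = g.shape
    t = np.zeros_like(g)  # horizontal gradient
    u = np.zeros_like(g)  # vertical gradient
    t[:, 1:-1] = (g[:, 2:] - g[:, :-2]) / 2.0
    u[1:-1, :] = (g[2:, :] - g[:-2, :]) / 2.0
    strong = np.hypot(t, u) > grad_thresh
    seen = np.zeros_like(strong)
    domains = []
    for y, x in zip(*np.nonzero(strong)):
        if seen[y, x]:
            continue
        stack, domain = [(y, x)], []
        seen[y, x] = True
        while stack:  # flood fill with 8-connectivity
            cy, cx = stack.pop()
            domain.append((cy, cx))
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = cy + dy, cx + dx
                    if 0 <= ny < h and 0 <= nx < w and strong[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        stack.append((ny, nx))
        domains.append(domain)
    return [d for d in domains if len(d) > min_pixels]
```

Discarding small domains removes isolated noise pixels, so only extended contours such as the camera hole rim contribute to the feature set.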
Then, the image of the cover plate can be searched for the pixel point corresponding to the center of the camera hole by using the following search formula (7); that is, the center of the camera hole is located in the currently obtained image of the cover plate.
S = (1/n) · Σ_{i=1}^{n} (t′_i · t_(r_j+r′_i, c_j+c′_i) + u′_i · u_(r_j+r′_i, c_j+c′_i)) / (√(t′_i² + u′_i²) · √(t_(r_j+r′_i, c_j+c′_i)² + u_(r_j+r′_i, c_j+c′_i)²))   (7)
wherein:
n represents the total number of pixels in the feature set;
r_j represents the row coordinate of the current pixel (the j-th pixel) in the current cover plate image;
c_j represents the column coordinate of the current pixel (the j-th pixel) in the current cover plate image;
r′_i represents the row coordinate of the i-th pixel point in the feature set relative to the center of the reference region;
c′_i represents the column coordinate of the i-th pixel point in the feature set relative to the center of the reference region;
t′_i represents the gradient value of the i-th pixel point in the feature set in the horizontal direction;
u′_i represents the gradient value of the i-th pixel point in the feature set in the vertical direction;
t_(r_j+r′_i, c_j+c′_i) represents the gray gradient value in the horizontal direction at row r_j+r′_i, column c_j+c′_i when the feature set is slid horizontally and vertically over the current cover plate image;
u_(r_j+r′_i, c_j+c′_i) represents the gray gradient value in the vertical direction at row r_j+r′_i, column c_j+c′_i when the feature set is slid horizontally and vertically over the current cover plate image.
S represents the confidence that the center of the camera hole lies at the current position in the image, namely the similarity between the current position and the center of the reference region; S takes a value in [0, 1], and when S is 1 the confidence is the highest. According to the above formula (7), a plurality of S values are obtained by sliding the search over the coordinates of all pixels in the current cover plate image. After all S values have been calculated, the row coordinate r_j and the column coordinate c_j at which S is highest are taken as the coordinates of the center of the camera hole in the current image.
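A sliding search of this kind can be sketched as follows. The score used here, a mean normalized dot product of template and image gradient vectors, is a common shape-matching measure and is an assumption, as is the feature-tuple layout (r′, c′, t′, u′):

```python
import numpy as np

def gradients(img):
    """Central-difference horizontal (t) and vertical (u) gray gradients."""
    g = img.astype(np.float64)
    t = np.zeros_like(g); u = np.zeros_like(g)
    t[:, 1:-1] = (g[:, 2:] - g[:, :-2]) / 2.0
    u[1:-1, :] = (g[2:, :] - g[:-2, :]) / 2.0
    return t, u

def search_center(image, feats):
    """Slide the feature set over the image; the score at (r_j, c_j) is the
    mean normalized dot product of template and image gradient vectors,
    so perfectly matching gradients score 1.0."""
    t_img, u_img = gradients(image)
    h, w = image.shape
    best_s, best_pos = -1.0, (0, 0)
    for rj in range(h):
        for cj in range(w):
            s = 0.0
            for (ri, ci, tp, up) in feats:
                r, c = rj + ri, cj + ci
                if 0 <= r < h and 0 <= c < w:
                    denom = np.hypot(tp, up) * np.hypot(t_img[r, c], u_img[r, c])
                    if denom > 0.0:
                        s += (tp * t_img[r, c] + up * u_img[r, c]) / denom
            s /= len(feats)
            if s > best_s:
                best_s, best_pos = s, (rj, cj)
    return best_s, best_pos
```

Because each term is normalized by the gradient magnitudes, the score depends on gradient direction rather than illumination level, which is what makes this family of measures robust to brightness changes.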
After the center of the camera hole in the currently obtained cover plate image has been found, the difference between its pixel coordinates and the pixel coordinates of the center of the camera hole in the cover plate reference image gives the offset of the camera hole center of the current image relative to the reference image. Since the relative position of the camera hole center and the stereoscopic edge is fixed and is not affected by the order in which images are acquired, the pixel coordinate offset of the camera hole center is also the offset of the stereoscopic edge. The position of the stereoscopic edge in the currently obtained image can therefore be determined from the offset and the position (coordinates) of the stereoscopic edge in the cover plate reference image, and the image of the stereoscopic edge in the current cover plate image is thus determined.
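The offset-and-crop logic above can be sketched as follows (the box and center tuples are illustrative names, not from the original):

```python
import numpy as np

def stereo_edge_roi(image, found_center, ref_center, ref_edge_box):
    """Shift the stereo-edge box known from the reference image by the
    camera-hole offset and crop it from the current image.
    ref_edge_box = (row0, row1, col0, col1) in reference coordinates."""
    dr = found_center[0] - ref_center[0]   # row offset of the camera hole
    dc = found_center[1] - ref_center[1]   # column offset of the camera hole
    r0, r1, c0, c1 = ref_edge_box
    return image[r0 + dr:r1 + dr, c0 + dc:c1 + dc]
```

Because the hole-to-edge geometry is fixed, applying the same offset to the reference edge box lands on the stereoscopic edge in the current image.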
Referring to FIG. 3, FIG. 3 is a flowchart illustrating still another method for detecting a three-dimensional edge defect of a cover plate according to an exemplary embodiment. As shown in FIG. 3, the method may comprise the following step S301 after step S201.
In step S301, the acquired image of the cover plate is filtered by gaussian filtering.
In step S301, in order to reduce noise in the image and thereby reduce interference in subsequent processing, the image of the cover plate is denoised by Gaussian filtering after it is captured. The stereoscopic edge image of the cover plate is then determined from the denoised image. Of course, denoising the image by Gaussian filtering is merely exemplary and not limiting; those skilled in the art may also denoise the image by any other suitable method.
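A generic separable Gaussian denoising step of the kind described can be sketched as (the function name and kernel-radius choice are illustrative):

```python
import numpy as np

def gaussian_denoise(img, sigma=1.0):
    """Separable Gaussian blur as a generic noise-reduction sketch
    (kernel radius 3*sigma, edge-replicated borders)."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    k /= k.sum()                                   # normalize kernel to sum 1
    padded = np.pad(img.astype(np.float64), radius, mode="edge")
    # horizontal pass, then vertical pass (separability of the 2D Gaussian)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)
```

Running two 1D passes instead of one 2D convolution reduces the per-pixel cost from O(k²) to O(k) while producing the same result.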
The embodiment of the disclosure also provides a device for detecting the three-dimensional edge defect of the cover plate. The detection device is used for executing the steps in the detection method embodiment.
Referring to FIG. 4, FIG. 4 is a block diagram illustrating an apparatus 100 for detecting a three-dimensional edge defect of a cover plate according to an exemplary embodiment. As shown in FIG. 4, the detection apparatus 100 includes an acquisition unit 101, a first processing unit 102, a second processing unit 103, and an extraction unit 104. The acquisition unit 101 is configured to acquire a stereoscopic edge image of the cover plate. The first processing unit 102 is configured to obtain a plurality of intermediate images by multiple mean filtering operations according to the stereoscopic edge image, and to obtain the intermediate defect image according to the stereoscopic edge image and the plurality of intermediate images. The second processing unit 103 is configured to obtain the defect-highlighted image by image enhancement from the intermediate defect image. The extraction unit 104 is configured to extract an image of a defect at the stereoscopic edge from the defect-highlighted image.
Further, the extraction unit 104 is configured to extract the image of the defect at the stereoscopic edge from the defect-highlighted image in the following manner: extracting the image of the defect by performing threshold segmentation on the defect-highlighted image.
Referring to fig. 5, fig. 5 is a block diagram illustrating another apparatus 200 for detecting a stereoscopic edge defect of a cover plate according to an exemplary embodiment of the present disclosure. The detection apparatus 200 shown in fig. 5 differs from the detection apparatus 100 shown in fig. 4 only in that the detection apparatus 200 further comprises an acquisition unit 201 and a determination unit 202. The acquisition unit 201 is configured to acquire an image of the cover plate before the acquisition unit 101 acquires the stereoscopic edge image of the cover plate. The determination unit 202 is configured for determining a stereo edge image of the cover plate from the image of the cover plate.
Further, the determining unit 202 is configured to determine a stereo edge image of the cover plate according to the image of the cover plate in the following manner: and determining the stereo edge image of the cover plate according to the image of the cover plate and the cover plate reference image.
Furthermore, the determination unit 202 is configured to determine the stereoscopic edge image of the cover plate from the image of the cover plate and the cover plate reference image in the following manner: taking a reference area in the cover plate reference image with a pixel point as a reference center; determining a positioning point corresponding to the reference center in the image of the cover plate through a search algorithm according to the reference area; determining the offset of the positioning point relative to the reference center according to the positioning point and the reference center; and determining the stereoscopic edge image of the cover plate according to the offset and the position of the stereoscopic edge in the cover plate reference image.
Furthermore, the reference center is the center of the camera hole in the cover plate reference image, and the reference area is a rectangular area with the center of the camera hole as the center.
Referring to fig. 6, fig. 6 is a block diagram illustrating a detection apparatus 300 for a cover plate solid edge defect according to an exemplary embodiment of the present disclosure. The detection apparatus 300 shown in fig. 6 differs from the detection apparatus 200 shown in fig. 5 only in that the detection apparatus 300 further comprises a filtering unit 301. The filtering unit 301 is configured to filter the acquired image of the cover plate by gaussian filtering after the acquisition unit acquires the image of the cover plate.
It will be appreciated that with respect to the apparatus in the above embodiments, the specific manner in which the respective units perform operations has been described in detail in relation to the embodiments of the method and will not be elaborated upon here.
The present disclosure also provides an electronic device comprising: one or more processors; and a memory having stored therein computer-executable instructions that, when executed by the one or more processors, cause the electronic device to perform the method as described above.
The present disclosure also provides a computer-readable storage medium comprising computer-executable instructions that, when executed by one or more processors, perform the method as described above.
The embodiments of the present disclosure are described in detail above; the description of the embodiments is intended only to help in understanding the method and core idea of the present disclosure. Meanwhile, those skilled in the art may, based on the idea of the present disclosure, make changes or modifications to the specific embodiments and application scope. In view of the above, the contents of this description should not be construed as limiting the present disclosure.