CN116205919A - Hardware part production quality detection method and system based on artificial intelligence - Google Patents
- Publication number: CN116205919A
- Application number: CN202310491469.9A
- Authority
- CN
- China
- Prior art keywords
- illumination
- image block
- image
- under
- illumination angle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/0004 — Industrial image inspection (G — Physics; G06 — Computing; G06T — Image data processing or generation, in general; G06T7/00 — Image analysis; G06T7/0002 — Inspection of images, e.g. flaw detection)
- G06V10/26 — Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion (G06V — Image or video recognition or understanding; G06V10/20 — Image preprocessing)
- G06V10/462 — Salient features, e.g. scale invariant feature transforms [SIFT] (G06V10/40 — Extraction of image or video features; G06V10/46 — Descriptors for shape, contour or point-related descriptors)
- G06V10/82 — Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30108 — Industrial image inspection
- G06T2207/30164 — Workpiece; machine component
- Y02P90/30 — Computing systems specially adapted for manufacturing (Y02P — Climate change mitigation technologies in the production or processing of goods)
Abstract
The invention relates to the technical field of image data processing, in particular to an artificial-intelligence-based hardware part production quality detection method and system, comprising the following steps: collecting a standard image and gear area images under different illumination angles; acquiring the target area and non-target area in each image block, and from these an illumination-influence gray distribution curve; obtaining the illumination influence area from that curve, and from it the influence degree of each illumination angle; acquiring a gray trend distribution curve from the gray distribution of the target area of each image block under different illumination angles, and from it a characteristic influence factor for the saliency value of the image block; obtaining regional difference values from the characteristic influence factors, and from these a saliency image; and identifying material-shortage defects from the saliency image, thereby realizing production quality detection of hardware parts. The invention eliminates the interference of oil-stain defects on the saliency image, so that the production quality detection result for hardware parts is more accurate.
Description
Technical Field
The invention relates to the technical field of image data processing, in particular to an artificial intelligence-based hardware part production quality detection method and system.
Background
With the development of the field of transmission machinery, the requirements on the hardware parts that compose it have become increasingly strict, so many precision hardware parts require accurate quality detection before leaving the factory; among these, the gear is the most common hardware part in transmission machinery. Existing gear production quality detection methods replace traditional manual inspection with artificial-intelligence-based detection, which reduces production cost and improves detection efficiency, so an artificial-intelligence-based quality detection flow can be carried out in the modern hardware part production process to ensure product quality.
In the existing artificial-intelligence detection process, clear images of gears are captured by a dedicated image acquisition system; most methods perform saliency analysis on the acquired images, train a neural network model with the resulting saliency maps, and mark defect positions in the images with the trained model. The CA algorithm is a saliency detection algorithm based on local and global features, which determines local saliency from local color features. However, different defects can look similar in a captured image: oil stains on a gear and material-starved regions appear basically alike. Oil-stain defects fall within the quality fault-tolerance range and have little influence on gear quality, whereas material shortage is a serious quality defect, and using a material-starved gear can cause a major production accident. If only color features are considered when determining local saliency, the saliency of oil stains becomes large, which impairs the accuracy of the saliency map and hence the recognition result of the neural network model: oil stains are recognized as material-shortage defects and an erroneous detection result is obtained.
Disclosure of Invention
The invention provides an artificial intelligence-based hardware part production quality detection method and system, which aim to solve the existing problems.
The hardware part production quality detection method based on artificial intelligence adopts the following technical scheme:
the embodiment of the invention provides an artificial intelligence-based hardware part production quality detection method, which comprises the following steps of:
collecting standard images and gear area images under different illumination angles; dividing all the gear area images and the standard image into a plurality of image blocks respectively; acquiring a target area and a non-target area in each image block of the gear area image according to the standard image and the gear area image;
acquiring illumination influence gray level distribution curves of each image block according to non-target areas in each image block, and acquiring illumination influence areas under each illumination angle according to the illumination influence gray level distribution curves of all the image blocks under each illumination angle;
acquiring contour similarity between a target area in each image block in an illumination influence area under each illumination angle and a target area in a corresponding image block under other illumination angles, and acquiring the influence degree of each illumination angle according to the contour similarity;
If the image block is positioned in the illumination influence area under one illumination angle, taking the corresponding illumination angle as a first illumination angle of the image block; taking the first illumination angle with the greatest influence degree in all the first illumination angles of the image block as the second illumination angle of the image block; acquiring a target sequence according to a pixel point with the minimum gray value and a pixel point with the maximum gray value in a target area of the image block under a second illumination angle; acquiring gray trend distribution curves of the image blocks under the second illumination angles and each first illumination angle according to the target sequence; acquiring characteristic influence factors of saliency values of the image blocks according to gray scale trend distribution curves under different illumination angles and influence degrees of the second illumination angles;
obtaining a regional difference value according to the characteristic influence factor of the saliency value of each image block under each illumination angle, and obtaining the saliency value of each pixel point under each illumination angle by using a saliency detection algorithm according to the regional difference value; acquiring a saliency image according to the saliency value of each pixel point under each illumination angle;
and identifying the defect of material shortage according to the saliency image and the image of the gear area under the illumination angle with the greatest influence degree, and realizing the production quality detection of the gears.
Preferably, the step of acquiring the target area and the non-target area in each image block of the gear area image according to the standard image and the gear area image includes the following specific steps:
Taking the area formed by non-zero pixel points in each image block of the standard image as the normal gear distribution area, and taking the edge of this area as the standard edge. Clustering the non-zero pixel points in each image block of each gear area image into a plurality of class clusters, and taking the edge of the area formed by each class cluster as the class-cluster edge. Counting the number of pixel points at which each class-cluster edge in an image block of a gear area image overlaps the standard edge in the corresponding image block of the standard image, and taking the result as the overlap ratio of that class cluster. Taking the area formed by every class cluster except the one with the largest overlap ratio in each image block as a difference area, performing convex-hull detection on the edge pixel points of all difference areas in each image block to obtain a convex-hull area, taking the convex-hull area as the target area of the image block, and taking the remainder of the image block as its non-target area.
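The overlap-ratio count and convex-hull step above can be sketched as follows (a minimal illustration, not the patented implementation; the helper names are my own, and Andrew's monotone chain stands in for whichever convex-hull routine the authors used):

```python
import numpy as np

def convex_hull(points):
    """Andrew's monotone-chain convex hull; points is an (N, 2) array of
    edge-pixel coordinates. Returns the hull vertices in CCW order."""
    pts = sorted(map(tuple, points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def overlap_ratio(cluster_edge, standard_edge):
    """Overlap ratio of a class cluster: the number of edge pixels it
    shares with the standard edge of the corresponding block."""
    return len(set(map(tuple, cluster_edge)) & set(map(tuple, standard_edge)))
```

The cluster with the largest `overlap_ratio` is kept as the gear distribution area; the edge pixels of all remaining clusters are fed to `convex_hull` to form the target area.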
Preferably, the step of acquiring the illumination influence gray level distribution curve of each image block according to the non-target area in each image block includes the following specific steps:
connecting any two edge points of a non-target area in each image block under one illumination angle to form a line segment serving as an edge line segment, acquiring angle differences between all edge line segments of each image block and corresponding illumination angles, and taking the edge line segment corresponding to the minimum angle difference as a target edge line segment of each image block; and forming a one-dimensional sequence of gray values of all pixel points positioned in a non-target area on all target edge line segments in one image block, and taking a curve corresponding to the one-dimensional sequence as an illumination influence gray distribution curve of the corresponding image block.
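The target-edge-segment selection can be sketched as below (a minimal illustration under the assumption that the illumination angle is expressed as an undirected in-plane direction in degrees; coordinates are `(row, col)` pairs and the function name is my own):

```python
import numpy as np

def target_edge_segment(edge_points, light_angle_deg):
    """Among all segments joining two edge points of the non-target area,
    return the one whose direction is closest to the illumination angle."""
    best, best_diff = None, np.inf
    n = len(edge_points)
    for a in range(n):
        for b in range(a + 1, n):
            (y1, x1), (y2, x2) = edge_points[a], edge_points[b]
            # undirected segment direction, folded into [0, 180)
            ang = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0
            diff = min(abs(ang - light_angle_deg), 180.0 - abs(ang - light_angle_deg))
            if diff < best_diff:
                best, best_diff = (edge_points[a], edge_points[b]), diff
    return best
```

The gray values of the non-target-area pixels along the returned segments are then concatenated into the one-dimensional sequence whose curve is the illumination-influence gray distribution curve.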
Preferably, the obtaining the illumination influence area under each illumination angle according to the illumination influence gray scale distribution curve of all the image blocks under each illumination angle includes the following specific steps:
calculating the DTW distances of non-target areas between every two image blocks under each illumination angle, carrying out negative correlation normalization on the DTW distances, and taking the obtained result as the similarity between the non-target areas between every two image blocks under each illumination angle; taking two image blocks with similarity of non-target areas larger than a similarity threshold value under each illumination angle as an illumination category; when one image block belongs to a plurality of illumination categories, merging the plurality of illumination categories to which the image block belongs into the same illumination category; and calculating the gray value average value of all non-0 pixel points in each illumination category under each illumination angle, and taking the area after all the image blocks in the illumination category with the maximum gray value average value are combined as the illumination influence area under each illumination angle.
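The DTW distance between two illumination-influence gray distribution curves, and its negative-correlation normalization into a similarity, can be sketched as follows (`exp(-d)` is an assumed normalization; the patent names no specific function):

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def similarity(a, b):
    """Negative-correlation normalization of the DTW distance: identical
    curves give 1, increasingly different curves approach 0."""
    return np.exp(-dtw(a, b))
```

Pairs of image blocks whose similarity exceeds the threshold form one illumination category; categories sharing a block are merged, and the merged category with the largest mean gray value becomes the illumination influence area.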
Preferably, the obtaining the influence degree of each illumination angle according to the profile similarity includes the following specific steps:
The influence degree of the $u$-th illumination angle is calculated as:

$$Q_u = \frac{1}{L}\sum_{l \neq u}\frac{1}{Z_u}\sum_{z=1}^{Z_u} H\!\left(T_z^u, T_z^l\right)\cdot\left(1-\exp\!\left(-\left(G_{z,\max}^{u}-G_{z,\min}^{u}\right)\right)\right)$$

wherein $Q_u$ is the influence degree of the $u$-th illumination angle; $L$ is the number of illumination angles other than the $u$-th, over which the outer sum runs; $T_z^u$ is the target area in the $z$-th image block in the illumination influence area under the $u$-th illumination angle; $T_z^l$ is the target area in the corresponding image block under the $l$-th illumination angle; $Z_u$ is the number of image blocks in the illumination influence area under the $u$-th illumination angle; $H(T_z^u, T_z^l)$ is the Hu-moment contour similarity between $T_z^u$ and $T_z^l$; $G_{z,\max}^{u}$ and $G_{z,\min}^{u}$ are the maximum and minimum gray values of the target area in the $z$-th image block in the illumination influence area under the $u$-th illumination angle; and $\exp$ is the exponential function with the natural constant as base.
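The Hu-moment contour similarity between two target areas can be sketched with a direct implementation of the seven Hu invariants (translation-, scale- and rotation-invariant); combining them with `exp(-L1 distance)` is my assumption, since the patent does not name the exact combination:

```python
import numpy as np

def hu_moments(mask):
    """The seven Hu invariant moments of a binary region (mask > 0)."""
    ys, xs = np.nonzero(mask)
    m00 = float(len(xs))
    x, y = xs - xs.mean(), ys - ys.mean()          # centered coordinates
    def eta(p, q):                                  # normalized central moment
        return np.sum(x**p * y**q) / m00 ** (1 + (p + q) / 2)
    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    h3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    h4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    h5 = ((n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    h6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    h7 = ((3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])

def hu_similarity(mask_a, mask_b):
    """Contour similarity of two regions from their Hu moments."""
    return np.exp(-np.abs(hu_moments(mask_a) - hu_moments(mask_b)).sum())
```

In practice `cv2.HuMoments`/`cv2.matchShapes` provide the same invariants; the pure-numpy version is shown only to make the sketch self-contained.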
Preferably, the step of obtaining the target sequence according to the pixel point with the smallest gray value and the pixel point with the largest gray value in the target area of the image block under the second illumination angle includes the following specific steps:
respectively marking the positions of a pixel point with the minimum gray value and a pixel point with the maximum gray value in a target area of the image block under a second illumination angle as M1 and M2; and connecting M1 and M2 to form a line segment, and arranging all pixel points on the line segment into a one-dimensional sequence serving as a target sequence of the image block.
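The target-sequence construction (Bresenham rasterization of the segment M1–M2 is assumed, since the patent does not say how "all pixel points on the line segment" are enumerated; names are illustrative):

```python
import numpy as np

def line_pixels(p0, p1):
    """Integer pixel coordinates on the segment p0-p1 (Bresenham)."""
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    pts = []
    while True:
        pts.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return pts

def target_sequence(gray, mask):
    """Gray values along the segment joining the darkest pixel (M1) and
    the brightest pixel (M2) of the target area (mask > 0)."""
    ys, xs = np.nonzero(mask)
    vals = gray[ys, xs]
    m1 = (ys[vals.argmin()], xs[vals.argmin()])
    m2 = (ys[vals.argmax()], xs[vals.argmax()])
    return [gray[r, c] for r, c in line_pixels(m1, m2)]
```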
Preferably, the step of obtaining the gray trend distribution curve of the image block under the second illumination angle and each first illumination angle according to the target sequence includes the following specific steps:
taking a curve formed by gray values corresponding to all pixel points in the target sequence under a second illumination angle as a gray trend distribution curve under the second illumination angle, wherein the abscissa of the gray trend distribution curve is the sequence of the pixel points in the target sequence, and the ordinate is the gray value; and taking a curve formed by gray values of corresponding pixel points of all the pixel points in the target sequence under each first illumination angle as a gray trend distribution curve under each first illumination angle.
Preferably, the obtaining the characteristic influence factor of the saliency value of the image block according to the gray scale trend distribution curve under different illumination angles and the influence degree of the second illumination angle includes the following specific steps:
acquiring trend items of each gray scale trend distribution curve of the image block under the second illumination angle and each first illumination angle by adopting a time sequence segmentation algorithm; taking the average value of the slopes of all points in the trend item of each gray scale trend distribution curve as the slope of the trend item of each gray scale trend distribution curve; taking the average value of the slopes of all the gray scale trend distribution curves of the image block under the second illumination angle as the average slope of the trend terms of the gray scale trend distribution curves of the image block under the second illumination angle; taking the average value of the slopes of all the gray scale trend distribution curves of the image block under each first illumination angle as the average slope of the trend terms of the gray scale trend distribution curves of the image block under each first illumination angle;
Taking the difference between the average slope of the trend item of the gray trend distribution curve of the image block under each first illumination angle and the average slope of the trend item of the gray trend distribution curve of the image block under the second illumination angle as the first difference between each first illumination angle and the second illumination angle of the image block; taking the difference between the influence degree of each first illumination angle of the image block and the influence degree of the second illumination angle of the image block as a second difference of each first illumination angle and the second illumination angle of the image block; taking the product of the first difference and the second difference as the integral difference of each first illumination angle and each second illumination angle of the image block; and obtaining the sum of the overall differences of all the first illumination angles and the second illumination angles of the image block, carrying out negative correlation normalization, and taking the obtained result as a characteristic influence factor of the saliency value of the image block.
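The steps above can be sketched as follows. This is one reading of the text: a least-squares line fit stands in for the trend term of the time-series segmentation, the "differences" are taken as absolute values, and `exp(-sum)` is assumed as the negative-correlation normalization:

```python
import numpy as np

def trend_slope(curve):
    """Slope of the trend term of a gray-trend distribution curve
    (least-squares fit as a stand-in for the trend decomposition)."""
    x = np.arange(len(curve))
    return np.polyfit(x, curve, 1)[0]

def feature_influence_factor(second_curves, first_curves_per_angle,
                             second_degree, first_degrees):
    """second_curves: gray-trend curves of the block at its second
    illumination angle; first_curves_per_angle: one curve list per first
    illumination angle; *_degree(s): the matching influence degrees."""
    k2 = np.mean([trend_slope(c) for c in second_curves])
    total = 0.0
    for curves, degree in zip(first_curves_per_angle, first_degrees):
        k1 = np.mean([trend_slope(c) for c in curves])
        # overall difference = (slope difference) x (influence-degree difference)
        total += abs(k1 - k2) * abs(degree - second_degree)
    return np.exp(-total)  # negative-correlation normalization
```

When the gray trends and influence degrees agree across angles the factor approaches 1; large disagreements push it toward 0, which later damps the block's regional difference value.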
Preferably, the obtaining the region difference value according to the feature influence factor of the saliency value of each image block under each illumination angle includes the following specific steps:
The regional difference value of the $i$-th and $j$-th image blocks of the gear region image under the $u$-th illumination angle is:

$$d_u(i,j)=\omega_j\cdot\frac{d_c(i,j)}{1+c\cdot d_p(i,j)}\cdot\beta_u(i,j)$$

wherein $d_u(i,j)$ is the regional difference value of the $i$-th and $j$-th image blocks of the gear region image under the $u$-th illumination angle; $\omega_j$ is the feature influence factor of the saliency value of the $j$-th image block; $d_c(i,j)$ is the color Euclidean distance between the $i$-th and $j$-th image blocks and $d_p(i,j)$ their spatial Euclidean distance, combined with a constant $c$ as in the CA algorithm; and $\beta_u(i,j)$ is the centroid-distance term of the $i$-th and $j$-th image blocks. When the $i$-th and $j$-th image blocks are both within the illumination influence range under the $u$-th illumination angle, $\beta_u(i,j)=\mathrm{Norm}\!\left(D\!\left(T_i^u,T_j^u\right)\right)$, where $T_i^u$ is the target area in the $i$-th image block under the $u$-th illumination angle, $T_j^u$ is the target area in the $j$-th image block, $D(T_i^u,T_j^u)$ is the Euclidean distance between the centroids of the two target areas, and $\mathrm{Norm}$ is a normalization function; when the $i$-th and $j$-th image blocks are not both within the illumination influence range under the $u$-th illumination angle, $\beta_u(i,j)=1$.
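The modulated regional difference and the resulting block saliency can be sketched as below (a minimal illustration: $c=3$ follows the original CA algorithm, the `1 - exp(-mean)` aggregation is the CA single-scale saliency rule, and the neutral centroid term defaults to 1 when the blocks are not both inside the illumination influence range):

```python
import numpy as np

def region_difference(color_dist, spatial_dist, omega_j,
                      centroid_term=1.0, c=3.0):
    """CA-style dissimilarity between two image blocks, modulated by the
    feature influence factor omega_j and the centroid-distance term."""
    return omega_j * color_dist / (1.0 + c * spatial_dist) * centroid_term

def block_saliency(diffs, K=64):
    """CA saliency of a block from its difference values to the other
    blocks: 1 - exp(-mean of the K smallest differences)."""
    d = np.sort(np.asarray(diffs, dtype=float))[:K]
    return 1.0 - np.exp(-d.mean())
```

A block whose feature influence factor is small (an oil-stain candidate) contributes small difference values, so its saliency is suppressed relative to a genuine material-shortage region.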
the invention also provides an artificial intelligence-based hardware part production quality detection system, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes any one step of an artificial intelligence-based hardware part production quality detection method when executing the computer program.
The technical scheme of the invention has the following beneficial effects. The method obtains a characteristic influence factor for the saliency value of each image block by considering the differences in the distribution characteristics of the target areas in the image blocks of gear area images acquired under different illumination angles, which characterize the influence degree of each illumination angle, combined with the gray-trend distribution characteristics within the target area of each image block. When calculating the characteristic influence factors, the influence degrees obtained for the different illumination angles serve as weights in the gray-trend distribution calculation, which prevents gear region images of low reference value from distorting that calculation; combined with the differences in the average slopes of the gray-trend distribution curves, the resulting characteristic influence factors help reduce the effect of oil stains on the saliency detection result. Introducing these factors into the CA saliency detection algorithm adaptively adjusts the obtained regional saliency values and yields an accurate saliency image, so the defect recognition result of the neural network model, and hence the production quality detection result for hardware parts, is more accurate. By contrast, because oil stains and material shortage on a gear appear basically alike in an image, the traditional CA saliency detection algorithm, which determines local saliency from color features alone, makes the saliency of oil stains large, impairs the accuracy of the saliency map and consequently the recognition result of the neural network model, recognizes oil stains as material-shortage defects, and produces an erroneous detection result.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of the steps of the hardware production quality detection method based on artificial intelligence;
FIG. 2 is a gray scale image of a gear region according to the present invention;
FIG. 3 is a saliency image obtained by a conventional CA saliency detection algorithm;
fig. 4 is a saliency image obtained by the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve the preset aim, the following detailed description is given below of the hardware part production quality detection method based on artificial intelligence according to the invention, which is provided by combining the accompanying drawings and the preferred embodiment. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the hardware production quality detection method based on artificial intelligence provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of steps of an artificial intelligence-based hardware production quality detection method according to an embodiment of the invention is shown, the method includes the following steps:
s001, acquiring a gear image, and preprocessing the gear image to obtain a gear region image.
Arranging an image acquisition system, wherein the image acquisition system comprises: industrial CCD camera, FA industrial lens, top adjustable illumination angle light source, detection platform and image transmission system.
The angle of the top adjustable-illumination-angle light source in the image acquisition system is adjusted, and the gear to be detected is photographed under different illumination angles, obtaining a plurality of gear images under different illumination angles. Semantic segmentation is performed on the acquired gear images to remove the influence of all factors on the surface of the detection platform other than the gear. The semantic segmentation network adopts a deep neural network (DNN); the acquired gear images are used as the training set, the pixels of the gear area are manually marked 1 and the pixels of the remaining background area 0, and the loss function of the semantic segmentation network adopts the cross-entropy function.
And recording the semantically segmented gear image as a gear region image. And carrying out Gaussian filtering denoising treatment on the gear region image, so that noise interference is reduced. It should be noted that the research of the present invention focuses on generating an accurate saliency map, and thus the present invention does not consider the influence of noise in the subsequent steps.
Thus, a gear area image is acquired, as shown in fig. 2.
S002, dividing the gear area image to obtain an image block and a target area in the image block.
In the conventional CA saliency detection algorithm, the saliency value of a region is obtained by computing the difference between the color Euclidean distances and spatial Euclidean distances of the regions into which the image is divided at different scales; this, however, presupposes that regions of the image differ noticeably from one another. For gear hardware parts, oil stains on the gear surface are surface foreign matter: the gear can be reprocessed to meet product delivery standards. Material-shortage defects, by contrast, seriously affect gear quality, so the embodiment of the invention needs to identify them. Because oil stains and material-shortage defects show similar characteristics in the acquired image, the CA saliency detection algorithm assigns oil stains a large saliency value and erroneously identifies them as material shortage, producing a large deviation. Therefore, the characteristic influence factor of each region is obtained by analyzing the pixel-value distribution characteristics among different regions under different illumination angles; introducing this factor into the CA saliency detection algorithm adaptively adjusts the obtained regional saliency values, eliminating the interference of oil stains and producing an accurate saliency image.
It should be further noted that because the gear images are collected under different illumination angles, the collected images differ in appearance; that is, the gear region images corresponding to different illumination angles carry different amounts of image information. Consequently, when the characteristic influence factor is calculated later, the reference value of the gear region image at the current illumination angle must be determined from the amount of image information represented in the gear region images at the different illumination angles. In the embodiment of the invention, the amount of image information represented by the gear region image at each illumination angle is characterized by the differences in the distribution characteristics of the target areas in the image blocks of the gear region images at different illumination angles. Because the illumination angles differ, the influence ranges in the corresponding gear region images differ (i.e., the illuminated regions in the gear region images differ between illumination angles, so illumination affects different regions of the gear region images to different degrees), and the edge-shape distribution of the target areas in the image blocks differs accordingly. Within an illuminated region the pixel values change little; if a defect area lies inside it, the contrast within the defect area is low, the characteristic influence factor cannot be analyzed there, and the shape distribution of the target area deviates further from that obtained at other illumination angles.
In order to acquire the distribution characteristics of a target area in each image block, in the process of detecting the gear production quality, a clear standard image of the gear is acquired for auxiliary analysis, wherein the standard image is an image which has no illumination influence and is subjected to semantic segmentation. In order to obtain the influence degree of each illumination angle, the gear area image under each illumination angle is firstly required to be segmented to obtain image blocks, and the target area and the non-target area in each image block are further obtained.
In the embodiment of the invention, the method for acquiring the target area and the non-target area in each image block of the gear area image under each illumination angle is as follows:
since the method for acquiring the target area and the non-target area in each image block of the gear area image under each illumination angle is the same, the gear area image under the u-th illumination angle is taken as an example for explanation in the embodiment of the present invention. The method comprises the following steps:
According to the principle of the CA saliency detection algorithm, the scale set is R = {100%, 80%, 60%, 40%, 20%}, and at any scale the gear region image under the u-th illumination angle is divided into K image blocks. K is a preset size; in the embodiment of the present invention K = 64, and in other embodiments the implementer may set the preset size K according to the specific implementation situation. The parameter settings of the CA saliency detection algorithm can likewise be determined by the implementer; the values given are empirical reference values.
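The multi-scale division can be sketched as below (assuming K is a perfect square so the K blocks form a √K × √K grid, image dimensions divisible by the grid size, and nearest-neighbor resampling as a stand-in for a proper resize):

```python
import numpy as np

def split_blocks(img, K=64):
    """Split an image into K equally sized blocks (8 x 8 grid for K=64)."""
    g = int(np.sqrt(K))
    h, w = img.shape[:2]
    bh, bw = h // g, w // g
    return [img[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            for r in range(g) for c in range(g)]

def rescale(img, s):
    """Nearest-neighbor resize by factor s, one entry of the scale set R."""
    h, w = max(1, int(img.shape[0] * s)), max(1, int(img.shape[1] * s))
    ys = (np.arange(h) / s).astype(int)
    xs = (np.arange(w) / s).astype(int)
    return img[np.ix_(ys, xs)]
```

Each scale in R is resized with `rescale` and then divided with `split_blocks`, reproducing the per-scale block structure the CA algorithm operates on.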
Similarly, the standard image is divided into K image blocks at the same scale. And taking an area formed by non-0 pixel points in each image block of the standard image as a gear normal distribution area, and acquiring the edge of the gear normal distribution area in each image block of the standard image as a standard edge. It should be noted that, in each image block of the standard image, 0 pixel points are the background after semantic segmentation, and non-0 pixel points are the gear areas, so the embodiment of the invention only focuses on the non-0 pixel points.
Non-0 pixel points in each image block of the gear region image under the u-th illumination angle are obtained, and the non-0 pixel points in each image block are clustered with the DBSCAN density clustering algorithm, dividing all the non-0 pixel points in each image block into a plurality of class clusters. In the embodiment of the invention, the scanning radius of the DBSCAN density clustering algorithm is set to 0.5 and MinPts is set to 6; in other embodiments the implementer may set them according to the specific implementation situation.
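The per-block clustering step can be illustrated with a minimal, self-contained DBSCAN. The patent presumably relies on a library implementation; this numpy-only version and its demo parameters are illustrative assumptions. Note that with integer pixel coordinates a scanning radius of 0.5 only groups coincident points, so the demo below uses a larger radius.

```python
import numpy as np

def dbscan(points, eps=0.5, min_pts=6):
    """Minimal DBSCAN: returns one label per point (-1 = noise/unassigned)."""
    n = len(points)
    # pairwise distances and epsilon-neighbourhoods
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    neigh = [np.flatnonzero(d[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    cid = 0
    for i in range(n):
        if labels[i] != -1 or len(neigh[i]) < min_pts:
            continue                      # already assigned, or not a core point
        labels[i] = cid
        stack = list(neigh[i])            # expand the cluster from this core point
        while stack:
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cid
                if len(neigh[j]) >= min_pts:
                    stack.extend(neigh[j])
        cid += 1
    return labels
```

In practice the points fed in are the (row, col) coordinates of the non-0 pixel points of one image block.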
The number of edge points of the area formed by each class cluster in each image block of the gear area image under the u-th illumination angle that coincide with the standard edge of the corresponding image block in the standard image is obtained. The area formed by the class cluster with the largest coincidence degree is taken as the gear distribution area, and the area formed by each other class cluster is taken as a differential area. Convex hull detection is carried out on the edge pixel points of all the differential areas to obtain the corresponding convex hull area; the convex hull area is taken as the target area in the corresponding image block, and the area outside the convex hull area as the non-target area in the corresponding image block. The target area in the i-th image block of the gear area image under the u-th illumination angle is recorded as $A_u^i$, and the non-target area in the i-th image block of the gear area image under the u-th illumination angle is recorded as $B_u^i$.
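Selecting the gear distribution cluster by edge overlap and taking the convex hull of the remaining (differential) clusters can be sketched as follows. Andrew's monotone chain stands in for whatever convex-hull routine the implementation uses, and the function names are illustrative.

```python
import numpy as np

def _cross(o, a, b):
    """2-D cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Andrew's monotone chain: hull vertices of a 2-D point set, ccw order."""
    pts = sorted(map(tuple, pts))
    if len(pts) <= 2:
        return np.array(pts)
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and _cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and _cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return np.array(lower[:-1] + upper[:-1])

def pick_gear_cluster(cluster_edges, standard_edge):
    """The cluster whose edge coincides most with the standard edge is the gear
    distribution area; the rest are the 'differential' clusters."""
    std = set(map(tuple, standard_edge))
    scores = [len(std & set(map(tuple, e))) for e in cluster_edges]
    best = int(np.argmax(scores))
    return best, [i for i in range(len(cluster_edges)) if i != best]
```

The target area of the block is then the filled convex hull of the edge pixels of all differential clusters taken together.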
Thus, the target area and the non-target area in each image block under the u-th illumination angle are obtained.
S003, acquiring an illumination influence area under each illumination angle according to each image block under each illumination angle.
It should be noted that, in the image blocks under the influence of the same illumination angle, the gray value of the pixel points changes regularly, so the similarity of the non-target areas of all the image blocks under the u-th illumination angle can be calculated by combining the change rule of the gray values in each image block. The image blocks whose non-target areas have high similarity are merged, and the merged total area is the illumination influence area under the u-th illumination angle. It should be noted that the gear area image is an RGB image, and the gray values in the embodiment of the present invention are the result of converting the pixel values of the pixel points in the gear area image into gray values.
In the embodiment of the invention, the specific method for acquiring the illumination influence area under the u-th illumination angle is as follows:
Record the u-th illumination angle as $\theta_u$. In each image block under the u-th illumination angle, connect any two edge points of the non-target area to form a line segment, taken as an edge line segment; similarly, acquire all edge line segments. Acquire the angle difference between each edge line segment and the illumination angle $\theta_u$, and take the edge line segment corresponding to the minimum angle difference as a target edge line segment. Form a one-dimensional sequence from the gray values of all pixel points located in the non-target area on all target edge line segments in one image block; the curve corresponding to this one-dimensional sequence is the illumination influence gray distribution curve of the image block.
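Building the illumination influence gray distribution curve, that is, choosing the edge segment best aligned with the illumination angle and reading gray values along it, might look like the sketch below. The pairwise-segment search and the linear sampling along the chosen segment are assumptions of this sketch.

```python
import numpy as np

def edge_segment_angles(edge_pts):
    """Angle (degrees, folded to [0, 180)) of the segment through every pair of edge points."""
    segs, angs = [], []
    for a in range(len(edge_pts)):
        for b in range(a + 1, len(edge_pts)):
            (y0, x0), (y1, x1) = edge_pts[a], edge_pts[b]
            segs.append(((y0, x0), (y1, x1)))
            angs.append(np.degrees(np.arctan2(y1 - y0, x1 - x0)) % 180.0)
    return segs, np.array(angs)

def illum_gray_curve(gray, edge_pts, illum_angle_deg, n_samples=32):
    """Pick the edge segment whose angle is closest to the illumination angle
    and sample the gray values along it: the 1-D illumination influence curve."""
    segs, angs = edge_segment_angles(edge_pts)
    # angular difference on a 180-degree circle
    diff = np.minimum(np.abs(angs - illum_angle_deg), 180 - np.abs(angs - illum_angle_deg))
    (y0, x0), (y1, x1) = segs[int(np.argmin(diff))]
    t = np.linspace(0.0, 1.0, n_samples)
    ys = np.round(y0 + t * (y1 - y0)).astype(int)
    xs = np.round(x0 + t * (x1 - x0)).astype(int)
    return gray[ys, xs]
```

In the full method the gray values of all target edge segments of a block are concatenated; the sketch samples a single best-aligned segment for brevity.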
Similarly, the illumination influence gray distribution curve of each image block under the u-th illumination angle is obtained. The DTW algorithm is used to calculate the similarity between the non-target areas of all image blocks under the u-th illumination angle. For example, the similarity $Q_u^{a,b}$ between the non-target area $B_u^a$ of the a-th image block and the non-target area $B_u^b$ of the b-th image block under the u-th illumination angle is:
$$Q_u^{a,b}=\exp\left(-D\left(F_u^a,F_u^b\right)\right)$$
wherein $Q_u^{a,b}$ is the similarity between the non-target area $B_u^a$ of the a-th image block and the non-target area $B_u^b$ of the b-th image block under the u-th illumination angle; $F_u^a$ and $F_u^b$ are the illumination influence gray distribution curves of the a-th and b-th image blocks under the u-th illumination angle; $D\left(F_u^a,F_u^b\right)$ is the DTW distance between the two illumination influence gray distribution curves; and $\exp$ is an exponential function with the natural constant as its base.
When the similarity of the non-target areas of two image blocks is larger than the similarity threshold T, the two image blocks are placed in one illumination category. One image block may belong to a plurality of illumination categories; in this case, the plurality of illumination categories to which the image block belongs are merged into the same illumination category. Thus, all image blocks under the u-th illumination angle are divided into a plurality of illumination categories. In the embodiment of the present invention the similarity threshold T = 0.65; in other embodiments the implementer may set it according to the specific implementation situation.
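Grouping image blocks into illumination categories from the DTW similarity of their curves can be sketched with a textbook DTW and a small union-find for the category merging. Using exp(-DTW) as the negative-correlation normalization follows the description above; the rest of the sketch is illustrative.

```python
import numpy as np

def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic-time-warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = abs(a[i - 1] - b[j - 1]) + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def illumination_categories(curves, t=0.65):
    """Blocks with similarity exp(-DTW) > T share a category; categories that
    share a block are merged (implemented as union-find over block indices)."""
    n = len(curves)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):
            if np.exp(-dtw(curves[i], curves[j])) > t:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

Each returned group is one illumination category; the brightest one (largest mean gray value) is later taken as the illumination influence area.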
It should be noted that each illumination category may be an illumination influence area or a non-illumination influence area, and the illumination influence area is brighter under the influence of illumination, that is, the gray value of the illumination influence area is larger.
In the embodiment of the invention, the mean gray value of all non-0 pixel points in each illumination category is calculated, and the merged area of all the image blocks in the illumination category with the largest mean gray value is taken as the illumination influence area.
And similarly, acquiring an illumination influence area under each illumination angle.
S004, obtaining the influence degree value of the illumination angle according to the illumination influence area under each illumination angle.
The target area in each image block of the illumination influence area under the u-th illumination angle is acquired, the Hu moment contour similarity between each such target area and the target area in the corresponding image block under every other illumination angle is calculated, and the influence degree of the u-th illumination angle is obtained from the Hu moment contour similarity:
$$Y_u=\frac{1}{N_u}\sum_{z=1}^{N_u}\left(\frac{1}{L}\sum_{v\neq u}H\left(A_u^z,A_v^z\right)\right)\cdot\exp\left(\frac{1}{g_{u,z}^{\max}-g_{u,z}^{\min}}\right)$$
wherein $Y_u$ is the influence degree of the u-th illumination angle; $L$ is the number of illumination angles other than the u-th; $A_u^z$ is the target area in the z-th image block in the illumination influence area under the u-th illumination angle; $A_v^z$ is the target area in the corresponding image block under the v-th illumination angle; $N_u$ is the number of image blocks in the illumination influence area under the u-th illumination angle; $H\left(A_u^z,A_v^z\right)$ is the Hu moment contour similarity between $A_u^z$ and $A_v^z$; $g_{u,z}^{\max}$ and $g_{u,z}^{\min}$ are the maximum and minimum gray values of the target area in the z-th image block in the illumination influence area under the u-th illumination angle; and $\exp$ is an exponential function with the natural constant as its base.
The first factor, $\frac{1}{L}\sum_{v\neq u}H\left(A_u^z,A_v^z\right)$, is the mean contour similarity between the target area in the z-th image block in the illumination influence area under the u-th illumination angle and the target areas in the corresponding image blocks under the other illumination angles. The larger this value, the more similar these contours are, so the less the u-th illumination angle distorts the target area in the z-th image block and the larger its influence (reference) degree. The second factor uses the reciprocal of the gray value range $g_{u,z}^{\max}-g_{u,z}^{\min}$ of the target area in the z-th image block: when the range is small, the gray values of the pixel points in the target area change little, again indicating that the u-th illumination angle distorts the target area little and that its influence degree is large. Therefore, the smaller the range and the larger the mean contour similarity, the greater the influence degree of the u-th illumination angle.
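A sketch of the influence-degree computation is given below: a numpy stand-in for the Hu-moment contour similarity (only the first two invariants, combined as the exponential of a negative L1 difference, which is an assumption rather than the patent's exact similarity measure) and the combination of mean similarity with the gray-range term. The clamping of the range to at least 1 is a numerical safeguard of this sketch.

```python
import numpy as np

def hu_contour_similarity(mask_a, mask_b):
    """exp(-L1 difference) over the first two Hu invariants of two binary masks."""
    def phi(mask):
        ys, xs = np.nonzero(mask)
        m00 = float(len(xs))
        xc, yc = xs.mean(), ys.mean()
        def eta(p, q):  # normalized central moment
            return (((xs - xc) ** p) * ((ys - yc) ** q)).sum() / m00 ** (1 + (p + q) / 2.0)
        n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
        return np.array([n20 + n02, (n20 - n02) ** 2 + 4 * n11 ** 2])
    return float(np.exp(-np.abs(phi(mask_a) - phi(mask_b)).sum()))

def influence_degree(target_masks_u, target_masks_other, gray_ranges):
    """Mean over blocks z of (mean contour similarity to the other angles)
    times exp(1 / gray range): small range and high similarity -> large degree."""
    vals = []
    for z, mask_u in enumerate(target_masks_u):
        sims = [hu_contour_similarity(mask_u, m_v[z]) for m_v in target_masks_other]
        vals.append(np.mean(sims) * np.exp(1.0 / max(gray_ranges[z], 1.0)))
    return float(np.mean(vals))
```

Here `target_masks_other` holds, per other illumination angle, the target masks of the corresponding image blocks.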
Thus, the influence degree of the u-th illumination angle is obtained. It should be noted that, in the embodiment of the present invention, the change of the outline shape and the gray level range of the target area in the image block of the illumination influence area are analyzed at the same time, so that the obtained influence degree of the illumination angle is more accurate.
And similarly, obtaining the influence degree of each illumination angle.
S005, acquiring a characteristic influence factor of the saliency value of each image block according to the gray distribution characteristics of the target area.
It should be noted that the influence ranges and influence degrees of illumination at different angles reflect the degree to which the image information of the gear region is characterized. Under a single illumination angle, because of that angle's influence degree and illumination influence range, CA saliency detection of the gear area image produces a large color-space Euclidean distance for greasy dirt, which affects the accuracy of the final saliency image; thus, if the saliency of the gear area image is calculated under a single illumination angle, the obtained saliency has low accuracy under the influence of illumination. The embodiment of the present invention therefore considers the features of the gear region images under different illumination angles together to determine a comprehensive, accurate saliency image, which reduces the influence of the different illumination angles on the saliency. Furthermore, on the basis of optimizing the CA saliency detection algorithm by combining all illumination angles, the embodiment of the present invention calculates the feature influence factor of the saliency value using the gray distribution characteristics of greasy dirt and missing material under different illumination angles, so that the obtained saliency image is more accurate.
It should be further noted that, according to prior knowledge, greasy dirt on the gear is a liquid unevenly distributed on the gear surface, and its gray change appears uneven under different illumination angles. Missing material, in contrast, is an incompleteness of the gear surface itself with a dark appearance, and the gray change of such a dark area under different illumination angles is regular; that is, under the same illumination angle the gray value of the missing-material area changes regularly from dark to light or from light to dark.
In the embodiment of the invention, each image block under different illumination angles is respectively taken as a target image block, and if the target image block is positioned in an illumination influence area under one illumination angle, the illumination angle is taken as a first illumination angle of the target image block. And taking the first illumination angle with the greatest influence degree in all the first illumination angles of the target image block as a second illumination angle of the target image block, wherein the second illumination angle is not taken as the first illumination angle any more.
And acquiring a pixel point with the minimum gray value and a pixel point with the maximum gray value in a target area of the target image block under the second illumination angle, and marking the positions of the two pixel points in the image as M1 and M2 respectively. And connecting M1 and M2 to form a line segment, and arranging all pixel points on the line segment into a one-dimensional sequence serving as a target sequence of the target image block. It should be noted that, when there are a plurality of pixels with the smallest gray value or a plurality of pixels with the largest gray value in the target area of the target image block under the second illumination angle, there are a plurality of M1 or M2, and at this time, a target sequence is obtained according to each M1 and each M2, and finally a plurality of target sequences are obtained.
And taking a curve formed by gray values corresponding to all pixel points in the target sequence under the second illumination angle as a gray trend distribution curve under the second illumination angle. The abscissa of the gray trend distribution curve is the sequence of the pixel points in the target sequence, and the ordinate is the gray value. And taking a curve formed by gray values of corresponding pixel points of all the pixel points in the target sequence under each first illumination angle as a gray trend distribution curve under each first illumination angle.
Because the representation effects of gray values corresponding to oil stains and material shortage defects under different illumination angles are different, the embodiment of the invention adopts an STL time sequence segmentation algorithm to calculate the trend item of each gray trend distribution curve, wherein the trend item represents the general trend change of the gray trend distribution curve. And acquiring the average value of the slopes of all points in the trend item of each gray scale trend distribution curve, taking the average value of the slopes of the trend items of all gray scale trend distribution curves of the target image block under the second illumination angle as the average slope of the trend items of the gray scale trend distribution curves of the target image block under the second illumination angle, and similarly, acquiring the average slope of the trend items of the gray scale trend distribution curves of the target image block under the first illumination angle. It should be noted that, since the trend term of the gray scale trend distribution curve is affected by the gray scale value distribution of the curve, the trend terms of the obtained curve cannot be directly compared, so the embodiment of the invention adopts the average slope of the trend term for measurement.
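Comparing curves through the average slope of their trend terms can be sketched as follows. A centered moving average stands in for the STL trend component (the patent specifies STL decomposition; only its trend term is needed at this step), and the window size is an arbitrary choice of the sketch.

```python
import numpy as np

def trend_term(seq, window=5):
    """Stand-in for the STL trend component: centered moving average with
    edge padding so the output has the same length as the input."""
    pad = window // 2
    padded = np.pad(np.asarray(seq, float), pad, mode="edge")
    kernel = np.ones(window) / window
    return np.convolve(padded, kernel, mode="valid")

def mean_trend_slope(seq, window=5):
    """Average slope of the trend term: comparable across curves whose raw
    gray levels differ, unlike the trend terms themselves."""
    return float(np.mean(np.gradient(trend_term(seq, window))))
```

Each gray trend distribution curve of a target image block is reduced to one such average slope per illumination angle.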
The feature influence factor of the saliency value of each target image block is obtained. For example, the feature influence factor $G_r$ of the saliency value of the r-th target image block is:
$$G_r=\exp\left(-\frac{1}{Q}\sum_{q=1}^{Q}\left(Y_{s_r}-Y_{r,q}\right)\cdot\left|k_{s_r}-k_{r,q}\right|\right)$$
wherein $G_r$ is the feature influence factor of the saliency value of the r-th target image block; $Y_{s_r}$ is the influence degree of the second illumination angle $s_r$ of the r-th target image block; $Y_{r,q}$ is the influence degree of the q-th first illumination angle of the r-th target image block; $k_{s_r}$ is the average slope of the trend term of the gray trend distribution curve of the r-th target image block under its second illumination angle; $k_{r,q}$ is the average slope of the trend term of the gray trend distribution curve of the r-th target image block under its q-th first illumination angle; $\left|\cdot\right|$ is the absolute value; $\exp()$ is an exponential function with the natural constant as its base; and $Q$ is the number of first illumination angles of the r-th target image block.
The slope difference $\left|k_{s_r}-k_{r,q}\right|$ between the trend terms of the gray trend distribution curves of the r-th target image block under different illumination angles characterizes how regular the gray change within the same image block is; for greasy dirt the gray change is uneven, so this regularity is low under different illumination angles. The influence-degree difference $Y_{s_r}-Y_{r,q}$ between the second illumination angle and the q-th first illumination angle characterizes the difference in the information reference degree of the image under the two angles. Using it as the weight of the slope difference avoids an oversized slope difference caused purely by the illumination angle, so that the magnitude of the feature influence factor of the saliency value helps reduce the influence of greasy dirt on saliency detection.
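Under the reading above, namely influence-degree differences weighting the trend-slope differences inside a negative exponential, the factor can be computed as below. The closed form is this sketch's reconstruction of the described quantity, and all names are illustrative.

```python
import numpy as np

def feature_influence_factor(y_second, y_firsts, k_second, k_firsts):
    """G = exp(-mean_q (Y_second - Y_q) * |k_second - k_q|).
    Irregular gray change (greasy dirt) gives large slope differences,
    hence a small factor and suppressed saliency for that block."""
    y_firsts = np.asarray(y_firsts, float)
    k_firsts = np.asarray(k_firsts, float)
    terms = (y_second - y_firsts) * np.abs(k_second - k_firsts)
    return float(np.exp(-terms.mean()))
```

With identical slopes under every angle (a regular missing-material area) the factor is 1, so the block's region difference value is left unchanged.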
Thus, the characteristic influence factor of the saliency value of each image block is obtained.
It should be noted that, in the embodiment of the present invention, the difference of the influence degrees under different illumination angles is used as the weight of the difference between the average slopes of the trend items of the gray trend distribution curve, so that the influence of the gear area image acquired by the illumination angle with smaller reference degrees on the calculation of the gray trend distribution curve is avoided, and the difference of the average slopes of the gray trend distribution curve is combined, so that the magnitude of the characteristic influence factor of the calculated saliency value is favorable for reducing the influence of the greasy dirt on the saliency detection, and the subsequently acquired saliency map is more accurate.
S006, optimizing a CA saliency detection algorithm according to the characteristic influence factors of the saliency values of each image block, and obtaining a saliency image.
The gear area image under each illumination angle is converted from RGB space to Lab space, and the color Euclidean distance and the spatial Euclidean distance between any two image blocks in the gear area image under each illumination angle are calculated. The region difference value between two image blocks is obtained from the color Euclidean distance and the spatial Euclidean distance. For example, the region difference value $W_u^{i,j}$ between the i-th image block and the j-th image block of the gear region image under the u-th illumination angle is:
$$W_u^{i,j}=G_i\cdot\beta_u^{i,j}\cdot\frac{d_c\left(p_u^i,p_u^j\right)}{1+c\cdot d_s\left(p_u^i,p_u^j\right)}$$
wherein $W_u^{i,j}$ is the region difference value between the i-th image block and the j-th image block of the gear region image under the u-th illumination angle; $G_i$ is the feature influence factor of the saliency value of the i-th image block; $d_c\left(p_u^i,p_u^j\right)$ is the color Euclidean distance and $d_s\left(p_u^i,p_u^j\right)$ the spatial Euclidean distance between the i-th and j-th image blocks of the gear region image under the u-th illumination angle, and $c$ is the positive weighting constant of the CA algorithm; $\beta_u^{i,j}$ is the centroid distance term of the i-th and j-th image blocks under the u-th illumination angle: when the i-th and j-th image blocks both lie within the illumination influence range under the u-th illumination angle, $\beta_u^{i,j}=\mathrm{norm}\left(d\left(O\left(A_u^i\right),O\left(A_u^j\right)\right)\right)$, wherein $A_u^i$ and $A_u^j$ are the target areas in the i-th and j-th image blocks under the u-th illumination angle, $d\left(O\left(A_u^i\right),O\left(A_u^j\right)\right)$ is the Euclidean distance between the centroids of the two target areas, and $\mathrm{norm}()$ is a normalization function; when the i-th and j-th image blocks are not both within the illumination influence range under the u-th illumination angle, $\beta_u^{i,j}=1$. The larger the feature influence factor of the saliency value of the i-th image block, the more the i-th image block needs to be adjusted, which ensures a higher saliency for non-greasy-dirt defect areas.
According to the region difference values under the current scale, the saliency value of each image block of the gear region image under the u-th illumination angle is calculated with the CA saliency detection algorithm. In the same way, the mean of the saliency values of each image block under the u-th illumination angle over all scales is obtained and, combined with the Euclidean distance between the image block and its nearest region of interest and with a Gaussian distribution, the saliency value of each pixel point under the u-th illumination angle is obtained. It should be noted that obtaining the saliency value of each pixel point from the region difference values with the CA saliency detection algorithm is a known technique, and a detailed description is omitted in the embodiment of the present invention.
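The region difference value then follows the CA-style dissimilarity, scaled by the feature influence factor and the centroid term. This one-line sketch assumes `c` is the CA algorithm's usual positive constant; all names are illustrative.

```python
def region_difference(g_i, d_color, d_spatial, beta=1.0, c=3.0):
    """W = G_i * beta * d_color / (1 + c * d_spatial): the CA block
    dissimilarity weighted by the feature influence factor g_i and the
    (normalized) centroid-distance term beta."""
    return g_i * beta * d_color / (1.0 + c * d_spatial)
```

Large color distance at small spatial distance still dominates, as in plain CA detection, but a small feature influence factor (greasy dirt) scales the whole difference down.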
And similarly, obtaining the saliency value of each pixel point under each illumination angle, taking the influence degree of each illumination angle as the weight of the saliency value of each pixel point, and carrying out weighted summation on the saliency value of one pixel point under all illumination angles to obtain the final saliency value of the pixel point. The final saliency value of each pixel point is mapped to the range from 0 to 255 to obtain a saliency image, and the saliency image obtained by the embodiment of the invention is shown in fig. 3.
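The final fusion step, using the influence degrees as weights for the per-angle saliency values and then mapping the result to the 0-255 range, can be sketched as:

```python
import numpy as np

def fuse_saliency(sal_maps, influence_degrees):
    """Weighted sum of the per-angle saliency maps (weights = influence
    degrees), linearly rescaled to 0-255 for the final saliency image."""
    w = np.asarray(influence_degrees, float)
    fused = np.tensordot(w, np.asarray(sal_maps, float), axes=1)  # sum_u w_u * S_u
    lo, hi = fused.min(), fused.max()
    if hi == lo:
        return np.zeros(fused.shape, np.uint8)
    return np.round((fused - lo) / (hi - lo) * 255).astype(np.uint8)
```

Angles with a high influence (reference) degree therefore contribute more to each pixel's final saliency value.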
The saliency image obtained by the traditional CA saliency detection algorithm is shown in fig. 4. The light area in fig. 4 (i.e., the area with larger gray values) includes, besides the texture on the gear, both the missing-material area and the greasy dirt area, with the greasy dirt area being prominent. In the saliency image obtained by the method of the embodiment of the present invention (fig. 3), apart from the texture on the gear, the saliency of the greasy dirt area is obviously weakened and the saliency of the missing-material area is enhanced, so that the subsequent neural network training focuses more on the features of the missing-material defect area, and the production quality detection result is more accurate.
So far, a saliency image is acquired.
S007, detecting production quality according to the saliency image.
And acquiring a gear region image under the illumination angle with the greatest influence degree, taking the image and the saliency image as input data, inputting the image and the saliency image into a trained neural network, and outputting a bounding box of the material shortage region.
The training process of the neural network comprises the following steps: the input data of the neural network are a gear area image and a corresponding saliency image under the illumination angle with the greatest influence degree; the output data is a bounding box of the material shortage area, and comprises a bounding box center point and a bounding box length and width; the training set of the neural network is a historical gear region image and a corresponding saliency image, and the gear region image and a material shortage region in the saliency image are marked in a manual marking mode, wherein the training set comprises a material shortage region bounding box center point and bounding box length and width; the loss function of the neural network is cross entropy loss.
So far, the defect detection of the shortage of materials is completed according to the saliency image, and the production quality detection of the gear is realized.
The embodiment of the invention also provides a hardware part production quality detection system based on artificial intelligence, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes any one step of the hardware part production quality detection method based on artificial intelligence when executing the computer program.
According to the embodiment of the present invention, the feature influence factor of the saliency value of each image block is obtained by using the differences in the distribution features of the target areas in the image blocks of the gear area images collected under different illumination angles to characterize the influence degree of each illumination angle, combined with the gray trend distribution features within the target area of each image block. In calculating the feature influence factor, the influence degrees obtained under the different illumination angles serve as the weights of the gray trend distribution calculation, which avoids the influence of a gear region image with a small reference degree on that calculation; combined with the difference in the average slopes of the gray trend distribution curves, the magnitude of the calculated feature influence factor helps reduce the influence of greasy dirt on the saliency detection result. The feature influence factor is introduced into the CA saliency detection algorithm to adaptively adjust the obtained region saliency values, yielding an accurate saliency image, so that the defect recognition result of the neural network model, and hence the production quality detection result for the hardware parts, is more accurate. The effects of greasy dirt and missing material on the gear image are essentially similar, and the traditional CA saliency detection algorithm determines local saliency only from color features, so the saliency of greasy dirt is large; this affects the accuracy of the saliency map and thus the recognition result of the neural network model: greasy dirt is recognized as a missing-material defect and an erroneous detection result is obtained.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.
Claims (10)
1. The hardware part production quality detection method based on artificial intelligence is characterized by comprising the following steps of:
collecting standard images and gear area images under different illumination angles; dividing all the gear area images and the standard image into a plurality of image blocks respectively; acquiring a target area and a non-target area in each image block of the gear area image according to the standard image and the gear area image;
acquiring illumination influence gray level distribution curves of each image block according to non-target areas in each image block, and acquiring illumination influence areas under each illumination angle according to the illumination influence gray level distribution curves of all the image blocks under each illumination angle;
acquiring contour similarity between a target area in each image block in an illumination influence area under each illumination angle and a target area in a corresponding image block under other illumination angles, and acquiring the influence degree of each illumination angle according to the contour similarity;
If the image block is positioned in the illumination influence area under one illumination angle, taking the corresponding illumination angle as a first illumination angle of the image block; taking the first illumination angle with the greatest influence degree in all the first illumination angles of the image block as the second illumination angle of the image block; acquiring a target sequence according to a pixel point with the minimum gray value and a pixel point with the maximum gray value in a target area of the image block under a second illumination angle; acquiring gray trend distribution curves of the image blocks under the second illumination angles and each first illumination angle according to the target sequence; acquiring characteristic influence factors of saliency values of the image blocks according to gray scale trend distribution curves under different illumination angles and influence degrees of the second illumination angles;
obtaining a regional difference value according to the characteristic influence factor of the saliency value of each image block under each illumination angle, and obtaining the saliency value of each pixel point under each illumination angle by using a saliency detection algorithm according to the regional difference value; acquiring a saliency image according to the saliency value of each pixel point under each illumination angle;
and identifying the defect of material shortage according to the saliency image and the image of the gear area under the illumination angle with the greatest influence degree, and realizing the production quality detection of the gears.
2. The method for detecting the production quality of hardware parts based on artificial intelligence according to claim 1, wherein the step of obtaining the target area and the non-target area in each image block of the gear area image according to the standard image and the gear area image comprises the following specific steps:
taking an area formed by non-0 pixel points in each image block of the standard image as a gear normal distribution area, and taking the edge of the gear normal distribution area as a standard edge; clustering non-0 pixel points in each image block of each gear area image into a plurality of class clusters, acquiring the edge of an area formed by each class cluster as a class cluster edge, counting the number of the pixel points overlapped in each class cluster edge in each image block of each gear area image and the standard edge in the image block corresponding to the standard image, and taking the obtained result as the overlap ratio of each class cluster; taking the region formed by each class cluster except the class cluster with the largest contact ratio in each image block as a differential region, performing convex hull detection on edge pixel points of all the differential regions in each image block to obtain a convex hull region, taking the convex hull region as a target region in each image block, and taking the region except the target region in each image block as a non-target region in each image block.
3. The method for detecting the production quality of hardware parts based on artificial intelligence according to claim 1, wherein the step of obtaining the illumination influence gray level distribution curve of each image block according to the non-target area in each image block comprises the following specific steps:
connecting any two edge points of a non-target area in each image block under one illumination angle to form a line segment serving as an edge line segment, acquiring angle differences between all edge line segments of each image block and corresponding illumination angles, and taking the edge line segment corresponding to the minimum angle difference as a target edge line segment of each image block; and forming a one-dimensional sequence of gray values of all pixel points positioned in a non-target area on all target edge line segments in one image block, and taking a curve corresponding to the one-dimensional sequence as an illumination influence gray distribution curve of the corresponding image block.
4. The hardware component production quality detection method based on artificial intelligence according to claim 1, wherein the obtaining the illumination influence area under each illumination angle according to the illumination influence gray scale distribution curve of all image blocks under each illumination angle comprises the following specific steps:
calculating the DTW distance between the non-target areas of every two image blocks under each illumination angle, carrying out negative correlation normalization on the DTW distance, and taking the obtained result as the similarity between the non-target areas of the two image blocks under each illumination angle; taking two image blocks whose non-target-area similarity under each illumination angle is larger than a similarity threshold value as one illumination category; when one image block belongs to a plurality of illumination categories, merging the plurality of illumination categories to which the image block belongs into one illumination category; calculating the mean gray value of all non-0 pixel points in each illumination category under each illumination angle, and taking the merged area of all image blocks in the illumination category with the maximum mean gray value as the illumination influence area under each illumination angle.
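The DTW distance, its negative correlation normalization, and the merging of overlapping illumination categories can be illustrated as follows. The 1/(1 + d) normalization and the union-find merge are plausible stand-ins for details the claim leaves unspecified.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def similarity(a, b):
    """Negative-correlation normalisation of the DTW distance into (0, 1]."""
    return 1.0 / (1.0 + dtw_distance(a, b))

def merge_categories(pairs, n_blocks):
    """Union-find merge: blocks linked by any above-threshold pair share one illumination category."""
    parent = list(range(n_blocks))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for i, j in pairs:
        parent[find(i)] = find(j)
    groups = {}
    for b in range(n_blocks):
        groups.setdefault(find(b), []).append(b)
    return list(groups.values())
```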
5. The method for detecting the production quality of hardware parts based on artificial intelligence according to claim 1, wherein the step of obtaining the influence degree of each illumination angle according to the profile similarity comprises the following specific steps:
wherein Y_x is the influence degree of the x-th illumination angle; L is the number of illumination angles other than the x-th illumination angle; B_{x,z} is the target area in the z-th image block of the illumination influence area under the x-th illumination angle; B_{l,z} is the target area in the corresponding image block under the l-th illumination angle; N_x is the number of image blocks in the illumination influence area under the x-th illumination angle; H(B_{x,z}, B_{l,z}) is the Hu moment contour similarity between B_{x,z} and B_{l,z}; g_{x,z}^{max} is the maximum gray value of the target area in the z-th image block of the illumination influence area under the x-th illumination angle; g_{x,z}^{min} is the minimum gray value of the target area in the z-th image block of the illumination influence area under the x-th illumination angle; exp(·) is the exponential function with the natural constant as its base.
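The Hu moment contour similarity that the influence degree relies on can be approximated without OpenCV. The sketch below computes only the first two Hu invariants of a binary region and maps their L1 distance through exp(−·) into a (0, 1] similarity; that mapping is an assumption of this sketch (the claim does not fix it), and a full implementation would use all seven invariants via cv2.HuMoments or cv2.matchShapes.

```python
import numpy as np

def hu_moments(mask):
    """First two Hu invariant moments of a binary region (pure-NumPy stand-in for cv2.HuMoments)."""
    ys, xs = np.nonzero(mask)
    m00 = len(xs)
    cx, cy = xs.mean(), ys.mean()
    def mu(p, q):
        # central moment: translation-invariant
        return np.sum((xs - cx) ** p * (ys - cy) ** q)
    def eta(p, q):
        # scale-normalised central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array([h1, h2])

def contour_similarity(mask_a, mask_b):
    """exp(-L1 distance of Hu moments): 1.0 for identical shapes, toward 0 as they diverge."""
    return float(np.exp(-np.abs(hu_moments(mask_a) - hu_moments(mask_b)).sum()))
```

Because the invariants are built from central, scale-normalised moments, translated copies of the same shape score a similarity of 1.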
6. The method for detecting the production quality of hardware parts based on artificial intelligence according to claim 1, wherein the step of obtaining the target sequence according to the pixel point with the smallest gray value and the pixel point with the largest gray value in the target area of the image block under the second illumination angle comprises the following specific steps:
respectively marking the positions of a pixel point with the minimum gray value and a pixel point with the maximum gray value in a target area of the image block under a second illumination angle as M1 and M2; and connecting M1 and M2 to form a line segment, and arranging all pixel points on the line segment into a one-dimensional sequence serving as a target sequence of the image block.
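Building the target sequence amounts to locating the darkest pixel M1 and brightest pixel M2 of the target area and reading the gray values along the segment between them. Bresenham's line algorithm is used here to rasterize that segment; the claim does not name a rasterization method, so this is an assumed choice.

```python
import numpy as np

def bresenham(p0, p1):
    """Integer pixel coordinates on the segment p0 -> p1 (Bresenham's line algorithm)."""
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err = dx + dy
    pts = []
    while True:
        pts.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return pts

def target_sequence(gray, target_mask):
    """Gray values along the line from the darkest pixel (M1) to the brightest pixel (M2)."""
    ys, xs = np.nonzero(target_mask)
    vals = gray[ys, xs]
    m1 = (int(xs[vals.argmin()]), int(ys[vals.argmin()]))  # position M1 of the minimum gray value
    m2 = (int(xs[vals.argmax()]), int(ys[vals.argmax()]))  # position M2 of the maximum gray value
    return [int(gray[y, x]) for x, y in bresenham(m1, m2)]
```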
7. The method for detecting the production quality of hardware parts based on artificial intelligence according to claim 1, wherein the step of obtaining the gray trend distribution curve of the image block under the second illumination angle and each first illumination angle according to the target sequence comprises the following specific steps:
taking the curve formed by the gray values of all pixel points in the target sequence under the second illumination angle as the gray trend distribution curve under the second illumination angle, wherein the abscissa of the gray trend distribution curve is the order of the pixel points in the target sequence and the ordinate is the gray value; and taking the curve formed by the gray values, under each first illumination angle, of the pixel points at the same positions as the pixel points in the target sequence as the gray trend distribution curve under that first illumination angle.
8. The method for detecting the production quality of hardware parts based on artificial intelligence according to claim 1, wherein the obtaining the characteristic influence factor of the saliency value of the image block according to the gray scale trend distribution curves under different illumination angles and the influence degree of the second illumination angle comprises the following specific steps:
acquiring the trend term of each gray scale trend distribution curve of the image block under the second illumination angle and under each first illumination angle by adopting a time sequence segmentation algorithm; taking the average value of the slopes at all points of the trend term of each gray scale trend distribution curve as the slope of that trend term; taking the average value of the trend-term slopes of all gray scale trend distribution curves of the image block under the second illumination angle as the average trend-term slope of the image block under the second illumination angle; taking the average value of the trend-term slopes of all gray scale trend distribution curves of the image block under each first illumination angle as the average trend-term slope of the image block under that first illumination angle;
taking the difference between the average trend-term slope of the image block under each first illumination angle and the average trend-term slope of the image block under the second illumination angle as the first difference between that first illumination angle and the second illumination angle; taking the difference between the influence degree of each first illumination angle and the influence degree of the second illumination angle as the second difference between that first illumination angle and the second illumination angle; taking the product of the first difference and the second difference as the overall difference between that first illumination angle and the second illumination angle of the image block; acquiring the sum of the overall differences between all first illumination angles and the second illumination angle of the image block, carrying out negative correlation normalization on the sum, and taking the obtained result as the characteristic influence factor of the saliency value of the image block.
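A condensed sketch of these steps: a centred moving average stands in for the time sequence algorithm when extracting the trend term, and exp of the negated sum of overall differences stands in for the negative correlation normalization. Using absolute differences is an assumption of this sketch, since the claim does not state whether the differences are signed.

```python
import numpy as np

def trend_slope(curve, window=5):
    """Average slope of the trend term; the trend is extracted with a centred
    moving average as a simple stand-in for the decomposition named in the claim."""
    curve = np.asarray(curve, dtype=float)
    kernel = np.ones(window) / window
    trend = np.convolve(curve, kernel, mode="valid")
    return float(np.mean(np.diff(trend)))

def feature_influence_factor(first_slopes, second_slope, first_degrees, second_degree):
    """Negative-correlation normalisation of the summed overall differences:
    large slope/influence gaps between the first angles and the second angle
    yield a small characteristic influence factor."""
    total = 0.0
    for s, d in zip(first_slopes, first_degrees):
        first_diff = abs(s - second_slope)      # slope gap (assumed absolute)
        second_diff = abs(d - second_degree)    # influence-degree gap (assumed absolute)
        total += first_diff * second_diff       # overall difference per first angle
    return float(np.exp(-total))
```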
9. The hardware component production quality detection method based on artificial intelligence according to claim 1, wherein the obtaining the regional difference value according to the characteristic influence factor of the saliency value of each image block under each illumination angle comprises the following specific steps:
the regional difference value D_{a,i,j} between the i-th image block and the j-th image block under the a-th illumination angle is:
wherein D_{a,i,j} is the regional difference value between the i-th image block and the j-th image block under the a-th illumination angle; S_j is the characteristic influence factor of the saliency value of the j-th image block; d_c(i, j) is the color Euclidean distance between the i-th image block and the j-th image block, and d_s(i, j) is the spatial Euclidean distance between the i-th image block and the j-th image block; P_{a,i,j} is the centroid distance between the i-th image block and the j-th image block of the gear area image under the a-th illumination angle: when the i-th image block and the j-th image block are both within the illumination influence range under the a-th illumination angle, P_{a,i,j} = Norm(d(Q_{a,i}, Q_{a,j})), wherein Q_{a,i} is the target area in the i-th image block under the a-th illumination angle, Q_{a,j} is the target area in the j-th image block under the a-th illumination angle, d(Q_{a,i}, Q_{a,j}) is the Euclidean distance between the centroids of Q_{a,i} and Q_{a,j}, and Norm(·) is a normalization function; when the i-th image block and the j-th image block are not both within the illumination influence range under the a-th illumination angle, P_{a,i,j} = 0.
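Since the formula itself is rendered only as an image in the original publication, the following is purely an illustrative assembly of the ingredients the claim names (characteristic influence factor, color and spatial Euclidean distances, and a centroid-distance term that is active only when both blocks lie inside a common illumination influence range). The way the terms are combined, the 1/(1 + d) damping, and the zero centroid term outside a common range are all assumptions of this sketch.

```python
import numpy as np

def regional_difference(color_i, color_j, pos_i, pos_j, factor_j,
                        centroid_i=None, centroid_j=None):
    """Hypothetical regional difference between image blocks i and j:
    block j's characteristic influence factor weights the colour distance,
    damped by the spatial distance, optionally boosted by a normalised
    centroid distance (symbols are placeholders; the patent's exact
    combination is not reproduced here)."""
    color_d = float(np.linalg.norm(np.asarray(color_i, float) - np.asarray(color_j, float)))
    space_d = float(np.linalg.norm(np.asarray(pos_i, float) - np.asarray(pos_j, float)))
    if centroid_i is not None and centroid_j is not None:
        # both blocks inside the same illumination influence range
        centroid_d = float(np.linalg.norm(np.asarray(centroid_i, float) -
                                          np.asarray(centroid_j, float)))
        centroid_term = centroid_d / (1.0 + centroid_d)  # assumed normalisation function
    else:
        centroid_term = 0.0  # blocks not in a common illumination influence range
    return factor_j * color_d / (1.0 + space_d) * (1.0 + centroid_term)
```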
10. Hardware part production quality detection system based on artificial intelligence, comprising a memory, a processor and a computer program stored in the memory and runnable on the processor, characterized in that the processor implements the detection method according to any one of claims 1-9 when executing the computer program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310491469.9A CN116205919B (en) | 2023-05-05 | 2023-05-05 | Hardware part production quality detection method and system based on artificial intelligence |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116205919A true CN116205919A (en) | 2023-06-02 |
CN116205919B CN116205919B (en) | 2023-06-30 |
Family
ID=86509803
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310491469.9A Active CN116205919B (en) | 2023-05-05 | 2023-05-05 | Hardware part production quality detection method and system based on artificial intelligence |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116205919B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116385448A (en) * | 2023-06-07 | 2023-07-04 | 深圳市华伟精密陶瓷有限公司 | Alumina ceramic surface machining defect detection method based on machine vision |
CN116433663A (en) * | 2023-06-13 | 2023-07-14 | 肥城恒丰塑业有限公司 | Intelligent geotechnical cell quality detection method |
CN116503393A (en) * | 2023-06-26 | 2023-07-28 | 深圳市创智捷科技有限公司 | Circuit board plasma nano coating quality detection method based on image processing |
CN116862912A (en) * | 2023-09-04 | 2023-10-10 | 山东恒信科技发展有限公司 | Raw oil impurity detection method based on machine vision |
CN116958136A (en) * | 2023-09-19 | 2023-10-27 | 惠州市金箭精密部件有限公司 | Lead screw thread production defect detection method based on image processing |
CN117058142A (en) * | 2023-10-11 | 2023-11-14 | 贵州省畜牧兽医研究所 | Goose house killing spray liquid image detection method based on machine vision |
CN117058130A (en) * | 2023-10-10 | 2023-11-14 | 威海威信光纤科技有限公司 | Visual inspection method for coating quality of optical fiber drawing surface |
CN117115153A (en) * | 2023-10-23 | 2023-11-24 | 威海坤科流量仪表股份有限公司 | Intelligent printed circuit board quality detection method based on visual assistance |
CN117474891A (en) * | 2023-11-10 | 2024-01-30 | 艾普零件制造(苏州)股份有限公司 | Gear heat treatment defect detection method |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105279774A (en) * | 2015-10-13 | 2016-01-27 | 金晨晖 | Digital image identification method of refractive errors |
US20160180188A1 (en) * | 2014-12-19 | 2016-06-23 | Beijing University Of Technology | Method for detecting salient region of stereoscopic image |
WO2018068415A1 (en) * | 2016-10-11 | 2018-04-19 | 广州视源电子科技股份有限公司 | Detection method and system for wrong part |
CN113538432A (en) * | 2021-09-17 | 2021-10-22 | 南通蓝城机械科技有限公司 | Part defect detection method and system based on image processing |
CN115170572A (en) * | 2022-09-08 | 2022-10-11 | 山东瑞峰新材料科技有限公司 | BOPP composite film surface gluing quality monitoring method |
CN115187602A (en) * | 2022-09-13 | 2022-10-14 | 江苏骏利精密制造科技有限公司 | Injection molding part defect detection method and system based on image processing |
CN115620061A (en) * | 2022-10-20 | 2023-01-17 | 深圳市智宇精密五金塑胶有限公司 | Hardware part defect detection method and system based on image recognition technology |
Non-Patent Citations (2)
Title |
---|
KUANG-CHIH LEE ET AL.: "Acquiring Linear Subspaces for Face Recognition under Variable Lighting", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 5, pages 684-696 |
CHEN Haiyong et al.: "Strip steel surface defect detection based on spectral residual visual saliency", Optics and Precision Engineering, no. 10, pages 2572-2580 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN116205919B (en) | Hardware part production quality detection method and system based on artificial intelligence | |
CN115082683B (en) | Injection molding defect detection method based on image processing | |
CN113989279B (en) | Plastic film quality detection method based on artificial intelligence and image processing | |
CN115082467B (en) | Building material welding surface defect detection method based on computer vision | |
CN112819772B (en) | High-precision rapid pattern detection and recognition method | |
CN116664559B (en) | Machine vision-based memory bank damage rapid detection method | |
CN105046252B (en) | A kind of RMB prefix code recognition methods | |
CN110472479B (en) | Finger vein identification method based on SURF feature point extraction and local LBP coding | |
CN111968098A (en) | Strip steel surface defect detection method, device and equipment | |
CN108846831B (en) | Band steel surface defect classification method based on combination of statistical characteristics and image characteristics | |
CN114757900A (en) | Artificial intelligence-based textile defect type identification method | |
CN113393426B (en) | Steel rolling plate surface defect detection method | |
CN117197140B (en) | Irregular metal buckle forming detection method based on machine vision | |
CN117689655B (en) | Metal button surface defect detection method based on computer vision | |
CN116246174B (en) | Sweet potato variety identification method based on image processing | |
CN114926410A (en) | Method for detecting appearance defects of brake disc | |
CN115797361B (en) | Aluminum template surface defect detection method | |
CN115131359A (en) | Method for detecting pitting defects on surface of metal workpiece | |
CN111178405A (en) | Similar object identification method fusing multiple neural networks | |
Zhang et al. | Fabric defect detection based on visual saliency map and SVM | |
CN116703895B (en) | Small sample 3D visual detection method and system based on generation countermeasure network | |
CN117593193A (en) | Sheet metal image enhancement method and system based on machine learning | |
CN116385435B (en) | Pharmaceutical capsule counting method based on image segmentation | |
CN110276260B (en) | Commodity detection method based on depth camera | |
CN112184619A (en) | Metal part surface defect detection method based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||