CN111951237A - Visual appearance detection method - Google Patents


Info

Publication number
CN111951237A
Authority
CN
China
Prior art keywords
image
defect
detection
workpiece
model
Prior art date
Legal status
Granted
Application number
CN202010772989.3A
Other languages
Chinese (zh)
Other versions
CN111951237B (en)
Inventor
王罡
侯大为
Current Assignee
Shanghai Weiyi Intelligent Manufacturing Technology Co ltd
Changzhou Weiyizhi Technology Co Ltd
Original Assignee
Shanghai Weiyi Intelligent Manufacturing Technology Co ltd
Changzhou Weiyizhi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Weiyi Intelligent Manufacturing Technology Co., Ltd. and Changzhou Weiyizhi Technology Co., Ltd.
Priority to CN202010772989.3A
Publication of CN111951237A
Application granted
Publication of CN111951237B
Legal status: Active

Classifications

    • G06T 7/0004 Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G06F 18/22 Pattern recognition; matching criteria, e.g. proximity measures
    • G06F 18/2431 Pattern recognition; classification techniques, multiple classes
    • G06N 3/08 Neural networks; learning methods
    • G06T 5/70 Image enhancement or restoration; denoising, smoothing
    • G06T 7/11 Segmentation; region-based segmentation
    • G06T 7/136 Segmentation involving thresholding
    • G06T 7/194 Segmentation involving foreground-background segmentation
    • G06T 2207/20048 Transform domain processing
    • G06T 2207/20081 Training; learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30164 Workpiece; machine component


Abstract

The invention provides a visual appearance detection method, which comprises the following steps: an optical guiding step: the similarity of an image to be compared is calculated against a reference image, ensuring consistent optical imaging of images acquired from the same batch of products on different machines; a visual guiding step: small parts must be fed with a preset accuracy, otherwise the mechanical arm cannot feed them normally, so before a product is fed its deviation angle, X position, and Y position are obtained by a visual guidance algorithm and the machine is notified to adjust, ensuring the product's feeding accuracy. The method innovatively adopts machine vision detection based on deep learning and traditional image processing, introducing a similarity comparison in the image acquisition stage to ensure that consistent images are acquired, thereby ensuring the accuracy of the data fed to the depth model and the accuracy of the image detection.

Description

Visual appearance detection method
Technical Field
The invention relates to the field of image-based inspection, and in particular to a visual appearance detection method.
Background
Traditional optical guidance relies mainly on an engineer's subjective experience, controlling optical imaging quality by adjusting the camera's focal length, aperture, working distance, and so on; the results are mediocre, and the imaging consistency across different machines is poor.
The conventional visual guidance process is generally as follows: a feeding mechanism (mechanical arm, suction cup, etc.) fitted with an industrial camera takes a picture before each grab, and vision software calculates the positional and angular deviation of the material to be grabbed, so that the feeding mechanism meets the required feeding precision. The existing drawback is that this precision cannot be guaranteed.
Traditional target detection algorithms usually traverse the whole image with a sliding-window strategy, extract candidate targets with feature extractors such as Haar, SIFT, or HOG, and classify the extracted targets with classifiers such as SVM or AdaBoost. Although this exhaustive strategy covers all possible target positions, its drawbacks are obvious: the time complexity is too high and too many redundant windows are generated, which seriously degrades the speed and performance of subsequent feature extraction and classification. Moreover, it is difficult to design a robust feature given the diversity of target morphology, illumination variation, and background.
Disclosure of Invention
In view of the defects in the prior art, the invention aims to provide an appearance visual detection method.
The visual appearance detection method provided by the invention comprises the following steps:
an optical guiding step: calculating the similarity of an image to be compared against a reference image, so as to ensure consistent optical imaging of images acquired from the same batch of products on different machines;
a visual guiding step: small parts must be fed with a preset, high accuracy, otherwise the mechanical arm cannot feed them normally; before a product is fed, its deviation angle, X position, and Y position are obtained by a visual guidance algorithm, and the machine is notified to adjust so as to ensure feeding accuracy;
a product image acquisition step: configuring the camera for different products and for the different optical surfaces of the same product, obtaining a camera raw image consistent with the reference optical surface, and transmitting it to an image processing module for processing, finally obtaining an image suitable for model detection;
a model detection step: sending the obtained image to a depth model and a gray-scale detection model for detection, finally obtaining the detection result returned for each optical surface of the product;
a physical quantity filtering step: filtering the returned detection results by threshold, or by physical quantities such as defect length, defect width, and defect brightness;
a product blanking step: judging from the detection result whether the workpiece is a good product, and sorting workpieces to different blanking ports according to good products and defective products with different defects.
Preferably, the optical guiding step comprises:
step S101: smoothing both the standard image and the image to be compared to filter out random noise;
step S102: according to the brightness characteristics of the workpiece, taking half of the background image's brightness value as the separation threshold; if the difference between a pixel's value and the background value is greater than the threshold, the pixel is treated as foreground, otherwise as background; for images that are difficult to separate, an accuracy value is exposed on the interface that the user can adjust manually, so that the reference image can be segmented correctly;
step S103: applying this separation threshold to both the image to be compared and the standard image, binarizing the images, and segmenting the foreground and background regions;
step S104: finding the contours in the binary image; according to the characteristics of the workpiece, the largest contour is the workpiece, and the smaller contours are due to noise and are ignored; extracting parameters such as the centroid, pixel positions, and area of the workpiece;
step S105: judging from the centroid and area whether the image foregrounds are consistent;
step S106: according to the workpiece characteristics, comparing only the brightness of the background image.
Preferably, the visual guiding step comprises:
step S201: the mechanical arm grabs a material and places it at the photographing position; nine photos are taken following the nine-square-grid point design, and each point is saved as coordinate data to the RobotCores file;
step S202: the mechanical arm grabs the material and takes a preset number of photos at a preset rotation angle;
step S203: creating a calibration template by dragging the mouse to mark a region of interest on the picture;
step S204: performing the nine-point calibration calculation from the mechanical arm's nine grid positions and the positioning template data;
step S205: computing the rotation center from the point coordinates recorded during rotational photographing, fitting a circle to the position data of the preset number of photos;
step S206: reading the mechanical arm's rotation angle from the photographing position to the placing position, and writing it to the file RobotRetAngel.txt;
step S207: the mechanical arm grabs a standard material from the carrying platform to the photographing position, the camera photographs it, the picture is named StdImap.tiff, and the coordinates of the standard position in the X, Y, and Z directions are calculated and stored in StdCore.tup;
step S208: testing whether the calibration is feasible; if not, recalibrating.
Preferably, the product image acquisition step comprises:
step S301: moving the camera and the workpiece to the specified optical point location;
step S302: setting the camera parameters and the light source according to the optical surface information, then triggering the camera to take a picture;
step S303: receiving the raw image returned by the camera, and prepending the corresponding workpiece information to the image header;
step S304: storing the camera raw image with the header information, so that causes can be traced when problems occur;
step S305: distributing the camera raw image with the header information to the preprocessing modules of the different optical surfaces for parallel processing;
step S306: the image preprocessing module applies cutting, compression, rotation, horizontal mirroring, and vertical mirroring to the camera raw image according to the number of workpiece carrying platforms and machine channels, and outputs an image meeting the model detection requirements, covering the following cases:
a camera raw image contains several workpieces, which must be cropped out individually;
the camera raw image is too large and must be compressed to speed up the model;
one workpiece is composed of several camera raw images, which must be rotated and mirrored before being merged.
Preferably, the model detection step comprises:
a construction step: designing and building a deep learning model for detecting workpiece defects;
a classification step: classifying each pixel in the learning image by category, and judging the confidence of each pixel's class;
a deep learning model training step: training on the learning images with per-pixel classes and confidences to obtain a trained deep learning model;
a defect detection step: detecting workpiece defects with the trained deep learning model.
Preferably, in the classification step: each pixel in the learning image is classified, with 0 representing the background class and 1 representing the defect class, and the learning image is divided into a background region and a defect region accordingly;
the classification step includes:
a convolution step: extracting features from the input layer, filtering out part of the useless information and retaining the effective feature information;
a pooling step: reducing the dimensionality of the input layer to reduce the computation;
a feature fusion step: making cross-layer connections between different layers of the same dimension;
a category judgment step: quantizing the feature information obtained in the feature fusion step into the confidence of a certain category;
an output step: outputting a multi-dimensional array vector as the result, representing the category and confidence of each pixel in the learning image;
the multi-dimensional array vector is an [m, n, c, s] vector, where m denotes the image width, n the image height, c the class, and s the confidence.
Preferably, in the deep learning model training step: a fixed number of training steps is set; during training, the defect-free images and defect images in the training set are trained alternately (odd and even steps); training stops once the loss no longer decreases noticeably, and the model at that step count is output as the trained model.
Preferably, the method further comprises a gray-scale model detection step: detecting workpiece defects through gray-level transformation and spatial filtering;
the gray-scale model detection step includes a top depression detection step:
obtaining the rotation and translation matrix of the image through shape template matching and positioning;
obtaining the affine-transformed top region from the rotation and translation matrix;
performing sub-pixel threshold segmentation on the affine-transformed top region, and adding the segmented edge line segments to the metrology model;
calculating the maximum and minimum distances from the edge points to the base line, the difference between them being the depression value;
the gray-scale model detection step includes a flash burr detection step:
obtaining the rotation and translation matrix of the image through shape template matching and positioning;
finding the inner hole region by threshold segmentation, erasing the inner corner region of the affine-transformed inner hole region, and detecting burrs by a closing operation;
the gray-scale model detection step includes a water gap height detection step:
obtaining the rotation and translation matrix of the image through shape template matching and positioning;
judging from the rotation angle of the matrix whether the water gap height meets the standard;
the gray-scale model detection step includes a top crack detection step:
threshold-segmenting the picture to find the inspected region, and transforming the picture from the spatial domain to the frequency domain by Fourier transform;
filtering the mid-frequency components with a Gaussian filter, transforming back to the spatial domain, and obtaining the shape of the line by means of the second derivative.
Preferably, the physical quantity filtering step comprises:
setting filtering rules: different products, defects, and parts have different parameter conditions for judging the corresponding defect, so different rules are set as the conditions for judging defects; the filtering rules can be combined and given distinct priorities, and are compared in priority order; if the first rule matches a defect rule, the product's detection record is immediately judged as the corresponding defect record and no further rules are compared; otherwise the second rule is compared, and so on until all defect rules have been compared; if none matches, the detection record is judged a good product record;
setting rule conditions: a rule condition is a linearly quantized value usable as a condition, including the following physical quantities: defect threshold, defect length, defect width, defect area, average defect brightness, defect contrast, defect gradient, and defect aspect ratio; one or more of them can be combined into a judgment rule;
a non-detection area filtering step: each optical surface of a product has corresponding non-detection areas in which defects need not be detected; by setting area detection rule conditions, defects there are conditionally shielded from detection, achieving accurate detection.
Preferably, the product blanking step comprises:
blanking judgment, divided into detection-complete blanking and timeout blanking:
detection-complete blanking refers to judging the blanking once all optical surfaces of the workpiece have returned their detection results;
timeout blanking refers to starting a timer once all optical surfaces of the workpiece have been photographed, and judging the blanking when the preset timeout elapses, whether or not all detection results have been returned;
judging good products and defective products with different defects:
judging from the workpiece's filtered defect results the defect class or good-product status to which the workpiece finally belongs; the final defect can be determined by a final judgment algorithm such as a threshold-maximum algorithm, or from values computed with different weights on the threshold, the defect area, and the defect brightness;
classified blanking:
sorting workpieces to different blanking ports according to the good/defective judgment.
Compared with the prior art, the invention has the following beneficial effects:
1. through deep learning detection, the invention greatly reduces time complexity and the generation of redundant windows, and greatly improves the speed and performance of subsequent feature extraction and classification;
2. through deep learning detection, the invention improves the robustness of image features;
3. the invention combines several detection modes, making detection more comprehensive and accurate;
4. the invention innovatively adopts a machine vision detection method based on deep learning and traditional image processing, introducing a similarity comparison in the image acquisition stage to ensure that consistent images are acquired, thereby ensuring the accuracy of the data fed to the depth model and the accuracy of the image detection.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
fig. 1 is a schematic view of an appearance visual inspection process.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but do not limit it in any way. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention, all of which fall within the scope of the present invention.
The present invention will be described more specifically below with reference to preferred examples.
Preferred example 1:
first, optical guiding
Optical guidance means calculating, against a reference image, the similarity of the brightness, contour, area, and other properties of the image to be compared, so as to ensure consistent optical imaging (including imaging angle, size, and brightness) of images acquired from the same batch of products on different machines.
To achieve consistent optical imaging across the sizes and colors of similar products, field personnel adjust imaging via camera parameters such as aperture, focal length, and angle, and confirm by experience and perception whether the current product's imaging matches that of the reference product. Because perception differs between people, relying on human experience to ensure imaging consistency is unreliable, so an optical guidance algorithm must provide corresponding reference parameter data to guarantee consistency.
In ordinary optical detection, the image sensor's position is fixed, the detection target area is large, and the collected images are highly consistent, so no image consistency comparison is needed at the acquisition stage. In the surface defect detection of the electronic products made by our company (some parts are small, about 1 cm long and 3 mm wide), the common industry practice is for a person to inspect the physical part repeatedly under a microscope, so there is no image acquisition stage and hence no image similarity comparison in that stage. Traditional manual detection and its monotonous, repetitive work greatly harm workers' eyesight and mental state. To improve working efficiency, free workers from this painful environment, and support the national Industry 4.0 goal, the invention innovatively adopts a machine vision detection method based on deep learning and traditional image processing; this creates the need for a similarity comparison in the image acquisition stage, which ensures that consistent images are acquired and thus guarantees the accuracy of the data fed to the depth model and of the image detection.
The specific optical guiding steps and algorithms are as follows:
1) smoothing both the standard image and the image to be compared to filter out random noise;
2) according to the brightness characteristics of the workpiece, taking half of the background image's brightness value as the separation threshold: if the difference between a pixel's value and the background value is greater than the threshold, the pixel is treated as foreground, otherwise as background; for images that are difficult to separate, an accuracy value is exposed on the interface that the user can adjust manually so that the reference image can be segmented correctly. One hard constraint applies during processing: the reference image is assumed to be segmentable; if it cannot be segmented, the acquired reference image is faulty and must be re-acquired;
3) applying this threshold to both the image to be compared and the standard image, binarizing the images, and segmenting the foreground and background regions;
4) finding the contours in the binary image; according to the workpiece's characteristics, the largest contour is the workpiece, and the smaller contours are caused by noise and are ignored; extracting parameters such as the centroid, pixel positions, and area of the workpiece;
5) judging from the centroid and area whether the image foregrounds are consistent;
6) according to the workpiece's characteristics, comparing only the brightness of the background image in order to simplify the computation. The background brightness refers to the remaining area after removing the workpiece foreground. Why compare the background brightness rather than the workpiece foreground? Because the production process causes workpieces from different batches to differ in brightness, and workpieces of different brightness are mixed together, so the average pixel value of the foreground cannot be compared.
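As a concrete illustration of steps 1) to 6), the following is a minimal sketch in Python with OpenCV and NumPy. It is an assumed implementation, not the patent's actual code: the function names, the accuracy parameter, and the tolerance values are all illustrative.

    import cv2
    import numpy as np

    def foreground_stats(gray, accuracy=1.0):
        """Segment the workpiece and return its centroid, area and the background brightness."""
        smoothed = cv2.GaussianBlur(gray, (5, 5), 0)           # step 1): filter random noise
        background = float(np.median(smoothed))                # estimated background brightness
        thresh = background * 0.5 * accuracy                   # step 2): half of background value
        diff = cv2.absdiff(smoothed, np.full_like(smoothed, int(background)))
        _, binary = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)  # step 3): binarize
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        largest = max(contours, key=cv2.contourArea)           # step 4): largest contour = workpiece
        m = cv2.moments(largest)
        centroid = (m["m10"] / max(m["m00"], 1e-9), m["m01"] / max(m["m00"], 1e-9))
        area = cv2.contourArea(largest)
        mask = np.zeros_like(gray)
        cv2.drawContours(mask, [largest], -1, 255, -1)
        bg_brightness = float(gray[mask == 0].mean())          # step 6): background-only brightness
        return centroid, area, bg_brightness

    def images_consistent(ref, test, tol_pos=5.0, tol_area=0.05, tol_bright=10.0):
        """Steps 5)/6): compare centroid, area and background brightness of two images."""
        (cr, ar, br) = foreground_stats(ref)
        (ct, at, bt) = foreground_stats(test)
        pos_ok = abs(cr[0] - ct[0]) < tol_pos and abs(cr[1] - ct[1]) < tol_pos
        return pos_ok and abs(ar - at) / ar < tol_area and abs(br - bt) < tol_bright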
Second, visual guidance
Small-part feeding must reach very high accuracy: with even a slight deviation, the mechanical arm cannot feed the product normally. Before a product is fed, its deviation angle, X position, and Y position must be obtained by the visual guidance algorithm, and the machine notified to adjust, ensuring the product's feeding accuracy.
The specific visual guidance steps and algorithms are as follows:
1) the mechanical arm grabs a material and places it at the photographing position; nine photos are taken following the nine-square-grid point design, and each point is saved as coordinate data to the RobotCores file;
2) the mechanical arm grabs the material and takes seven photos at a rotation angle of 5 degrees between shots; the number can also be 5 or 9; theoretically, the more photos are collected, the more accurately the mechanical arm's rotation center is computed;
3) creating a calibration template by dragging the mouse to mark a region of interest on the picture, i.e. creating a shape template for search and positioning; for example, a template is created from two circles in the product, and when a new product image is acquired, the algorithm searches for the two circles within a specified range based on the created template, thereby obtaining the product's exact position;
4) performing the nine-point calibration calculation from the mechanical arm's nine motion coordinates (the nine grid positions) and the positioning template data;
5) computing the rotation center from the point coordinates recorded during rotational photographing, fitting a circle to the position data of the 7 photos;
6) reading the mechanical arm's rotation angle from the photographing position to the placing position, and writing it to the file RobotRetAngel.txt;
7) the mechanical arm grabs a standard material from the carrying platform to the photographing position, the camera photographs it, the picture is named StdImap.tiff, and the coordinates of the standard position in the X, Y, and Z directions are calculated and stored in StdCore.tup; this calculation builds on steps 1) to 6): from the coordinate transformation, circle position, angle rotation, and other information, a set of positions is computed for the arm to adjust its grip, which minimizes failures when placing material on the carrying platform;
8) finally, testing whether the calibration is feasible (i.e., whether the manipulator's grab and test blanking are accurate); if not, recalibrating.
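The nine-point calibration of step 4) and the rotation-center computation of step 5) can be sketched as below, assuming a least-squares affine fit and an algebraic (Kasa) circle fit; the patent does not specify the numerical method, so both are illustrative assumptions.

    import numpy as np

    def nine_point_calibration(pixel_pts, robot_pts):
        """Fit an affine transform pixel -> robot from the nine grid positions (least squares)."""
        A = np.hstack([pixel_pts, np.ones((len(pixel_pts), 1))])   # rows [x, y, 1]
        M, *_ = np.linalg.lstsq(A, robot_pts, rcond=None)          # solve A @ M = robot_pts
        return M.T                                                 # 2x3 affine matrix

    def fit_rotation_center(pts):
        """Algebraic (Kasa) circle fit to the positions photographed while rotating."""
        x, y = pts[:, 0], pts[:, 1]
        A = np.column_stack([x, y, np.ones_like(x)])
        b = x ** 2 + y ** 2
        c, *_ = np.linalg.lstsq(A, b, rcond=None)                  # x^2+y^2 = c0*x + c1*y + c2
        cx, cy = c[0] / 2.0, c[1] / 2.0                            # fitted rotation center
        r = np.sqrt(c[2] + cx ** 2 + cy ** 2)
        return (cx, cy), r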
Thirdly, acquiring a product image
A camera raw image consistent with the reference optical surface is obtained by setting camera parameters (exposure, region-of-interest position, line frequency, photographing delay, etc.) for different products and for the different optical surfaces of the same product; the obtained raw image is then transmitted to the image processing module (cutting, compression, rotation, horizontal mirroring, vertical mirroring, merging, and other algorithms), finally yielding an image suitable for model detection.
The specific steps and algorithm for obtaining the product image are as follows:
1) moving the position of the camera and the workpiece to a specified optical point location
2) After camera parameters (including exposure value, gamma value, line frequency, region of interest and the like) and a light source are set according to the optical surface information, triggering the camera to take a picture
3) Receiving the original image returned by the camera, adding the workpiece information (including the workpiece number and the channel number) corresponding to the original image to the head of the original image
4) Storing camera original pictures with header information for tracing reasons when problems occur
5) Distributing camera original image with head information to different optical surface picture preprocessing modules for parallel processing
6) The image preprocessing module performs algorithm operations such as cutting, compression, rotation, horizontal mirroring, and vertical mirroring on the camera raw image according to information such as the number of workpiece carrying platforms and machine channels, and outputs an image meeting the model detection requirements. Specifically, the following cases arise:
a) a camera original image comprises a plurality of workpieces, and the workpieces in the image need to be cut out respectively
b) The original image of the camera needs to be compressed to be smaller due to the overlarge size of the original image so as to improve the model operation speed
c) One workpiece is formed by combining a plurality of original images of cameras, and the original images of the cameras need to be combined after being rotated and mirrored
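The three preprocessing cases a) to c) might look like the following OpenCV sketch; the split geometry, the scale factor, and the rotation and mirror directions are illustrative assumptions.

    import cv2

    def crop_workpieces(raw, n):
        """Case a): several workpieces per raw image, cropped out individually (side by side)."""
        h, w = raw.shape[:2]
        return [raw[:, i * w // n:(i + 1) * w // n] for i in range(n)]

    def compress(raw, scale=0.5):
        """Case b): shrink an oversized raw image to speed up model inference."""
        return cv2.resize(raw, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)

    def merge_views(raws):
        """Case c): rotate and mirror the raw images, then merge them into one workpiece image."""
        prepared = [cv2.flip(cv2.rotate(r, cv2.ROTATE_90_CLOCKWISE), 1) for r in raws]
        return cv2.hconcat(prepared)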
Fourth, model detection
The obtained model detection images are sent to the model pipeline service for detection: depending on the product and the optical surface, part of them go to the depth model and part to the gray-scale detection model. Finally, the detection information returned for each optical surface of the product is obtained, including the defect, the defect threshold, position information (x coordinate, y coordinate, width, height), defect length, defect height, defect area, average defect brightness, defect gradient, defect contrast, the average brightness of the brightest 20% of the defect, and the average brightness of the darkest 20% of the defect.
The model detection is divided into depth model detection and gray model detection.
1. The depth model detection comprises the following specific implementation steps:
1) A deep learning model is designed and built for detecting workpiece defects. It consists of two stages, a segmentation network and a classification network, and mainly comprises convolution, pooling, feature fusion, category judgment, and output modules.
The segmentation network learns the classification category of each pixel in the image, representing the background class as 0 and the defect class as 1, and divides the image into a background region and a defect region accordingly; on top of the segmentation network, the classification network judges each pixel in the extracted background and defect regions and gives the probability, i.e. the confidence, that it belongs to a certain category.
The convolution layer extracts features from the input layer, filtering out part of the useless information while retaining most of the effective feature information; the pooling layer reduces the dimensionality of the input layer to cut the computation; the feature fusion layer makes cross-layer connections between different layers of the same dimension to obtain richer feature information; the category judgment layer quantizes the feature information from the feature fusion layer into the probability of a certain category; and after convolution, pooling, feature fusion, and so on, the output layer outputs a vector [m, n, c, s] as the result, representing the category and confidence of each pixel in the image.
2) Circularly training a deep learning model by using the divided data sets;
specifically, the embodiment (5) includes:
the deep learning training method is characterized in that all images in a well-divided training set folder are trained, the number of training steps is set to be more than 1000, good images and defect images in a training set are trained in a single-double alternative mode during training, and until loss is not reduced obviously, training is stopped to output a model corresponding to the number of steps at the moment, and the model is used as an output model of the training.
3) Carrying out appearance defect detection on a real scene workpiece by using a deep learning model, and judging and quantifying a detection result;
When detecting appearance defects on real-scene workpieces, the output model is used to inspect metal powder injection molded workpieces, and the result is output as the vector [m, n, c, s].
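One plausible way to decode such a per-pixel [m, n, c, s] output into defect pixels is sketched below; the array layout and the confidence threshold are assumptions consistent with the description (m = width, n = height, 0 = background, 1 = defect).

    import numpy as np

    def decode_output(cls_map, conf_map, conf_thresh=0.5):
        """cls_map, conf_map: (n, m) per-pixel class (0/1) and confidence arrays."""
        defect_mask = (cls_map == 1) & (conf_map >= conf_thresh)
        ys, xs = np.nonzero(defect_mask)
        # emit one [m, n, c, s] tuple per defect pixel: (x, y, class, confidence)
        return [(int(x), int(y), 1, float(conf_map[y, x])) for x, y in zip(xs, ys)]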
2. Gray-scale model detection
the gray model detection is to use a gray conversion and spatial filtering mode for detection, and different detection modes are used for different defects. Specific defect types are: top depression, flash burr, nozzle height, impact defect, crack defect, deformation defect, etc. Image inversion, piecewise linear transformation, histogram equalization and matching, spatial filtering and the like exist in the algorithm, some special detections such as bruises and cracks are processed from a time domain to a frequency domain, and the detection is performed by using the characteristics of the frequency domain, and the specific algorithm is as follows:
1) Detection of top depression
The image's rotation and translation matrix is obtained by shape template matching and positioning. The affine-transformed top region is obtained from the rotation and translation matrix; sub-pixel threshold segmentation is applied to it, and the segmented edge line segments are added to the metrology model; the maximum and minimum distances from the edge points to the base line are then calculated, and their difference is the depression value.
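The depression measurement reduces to the spread of edge-point distances around a fitted base line, as in this hedged NumPy sketch (the straight-line baseline fit is an assumption; the patent only specifies max and min distances to the base line):

    import numpy as np

    def depression_value(edge_pts):
        """edge_pts: (N, 2) sub-pixel edge points along the workpiece top."""
        x, y = edge_pts[:, 0], edge_pts[:, 1]
        a, b = np.polyfit(x, y, 1)                        # fit the base line y = a*x + b
        dist = np.abs(a * x - y + b) / np.hypot(a, 1.0)   # point-to-line distances
        return dist.max() - dist.min()                    # max minus min = depression value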
2) Detection of flash burrs
The image's rotation and translation matrix is obtained by shape template matching and positioning. The inner hole region is found by threshold segmentation; the inner corner region of the affine-transformed inner hole region is then erased, and burrs are finally detected by a closing operation.
3) Water gap height
The image's rotation and translation matrix is obtained by shape template matching and positioning. Because the length of the bottom is fixed, whether the water gap height meets the standard is judged from the rotation angle of the matrix.
4) Detection of top cracks
The image is threshold-segmented to find the inspected region and transformed from the spatial domain to the frequency domain by Fourier transform; the mid-frequency components are then filtered out with a Gaussian filter; after transforming back to the spatial domain, the shape of the line is obtained by means of the second derivative.
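A hedged sketch of this frequency-domain processing follows; the band radii and Gaussian width are illustrative assumptions, and the second derivative is taken with a Laplacian:

    import cv2
    import numpy as np

    def enhance_cracks(gray, r_low=20.0, r_high=80.0, sigma=15.0):
        """Suppress mid frequencies with a Gaussian band mask, then take the second derivative."""
        f = np.fft.fftshift(np.fft.fft2(gray.astype(np.float32)))   # spatial -> frequency domain
        h, w = gray.shape
        yy, xx = np.mgrid[0:h, 0:w]
        r = np.hypot(yy - h / 2.0, xx - w / 2.0)
        band = np.exp(-((r - (r_low + r_high) / 2.0) ** 2) / (2.0 * sigma ** 2))
        filtered = np.fft.ifft2(np.fft.ifftshift(f * (1.0 - band))).real  # back to spatial domain
        return cv2.Laplacian(filtered.astype(np.float32), cv2.CV_32F)     # second derivative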
Fifthly, filtering the physical quantity
To meet the customer's need to dynamically adjust the on-site shipment yield, a linear physical-quantity filtering approach is adopted: the detection results returned by the model are filtered by threshold, or by physical quantities such as defect length, defect width, and defect brightness.
The physical quantity filtration step was as follows:
1) setting filtering rules
Different products, defects, and parts have different parameter conditions for judging the corresponding defect, so different rules must be set as the conditions for judging defects. The filtering rules can be combined and given distinct priorities, and are compared in priority order: if the first rule matches a defect rule, the product's detection record is immediately judged as the corresponding defect record and no further rules are compared; otherwise the second rule is compared, and so on until all defect rules have been compared; if none matches, the detection record is judged a good product record.
2) Setting rule conditions
A rule condition is a linearly quantized value usable as a condition, including physical quantities such as the defect threshold, defect length, defect width, defect area, average defect brightness, defect contrast, defect gradient, and defect aspect ratio. One or more of them can be combined into a judgment rule, for example a large-area scratch rule: area > 3 mm && threshold > 0.4 && defect length > 0.5 mm is judged as a defect record. A concrete sketch of this rule evaluation follows below.
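The sketch below shows one way such priority-ordered, AND-combined rules could be evaluated; the rule encoding and the greater-than comparisons are illustrative assumptions based on the example above.

    def classify_record(record, rules):
        """record: dict of physical quantities; rules: list ordered by priority."""
        for rule in rules:                                # compare in priority order
            if all(record[k] > v for k, v in rule["conditions"].items()):
                return rule["defect"]                     # first match wins; skip later rules
        return "good"                                     # no rule matched -> good product record

    # The large-area scratch rule from the text: area > 3 && threshold > 0.4 && length > 0.5
    rules = [{"defect": "scratch",
              "conditions": {"area": 3.0, "threshold": 0.4, "length": 0.5}}]
    print(classify_record({"area": 3.5, "threshold": 0.45, "length": 0.6}, rules))  # scratch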
3) Non-detection region filtering
Each optical surface of a product has corresponding non-detection areas in which defects need not be detected; if over-kill defect records occur in such an area, they are filtered and shielded by setting area detection rule conditions, conditionally shielding defects there from detection and achieving accurate detection. In short: defects occurring in non-detection areas are filtered and shielded.
Sixthly, blanking of products
Product blanking is the process of judging from the detection results whether a product is good, and sorting products to different blanking ports according to good products and defective products with different defects.
The specific blanking steps are as follows:
1) Blanking judgment is divided into detection-complete blanking and timeout blanking.
Detection-complete blanking refers to judging the blanking once all optical surfaces of the workpiece have returned their detection results.
Timeout blanking refers to starting a timer once photographing of all the workpiece's optical surfaces is complete; if the timeout is set to 10 s, then 10 s later the blanking is judged whether or not all model results have been returned.
2) Judging the good products and the defective products with different defects
The defect class or good-product status to which the workpiece finally belongs is judged from the workpiece's filtered defect results. The final defect can be determined by a final judgment algorithm, such as a threshold-maximum algorithm, or by a weighting algorithm (values computed with different weights on the threshold, the defect area, the defect brightness, and so on), yielding the defect finally assigned to the workpiece. A sketch of such a weighted judgment follows below.
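A minimal sketch of such a weighting algorithm follows; the weight values and the normalization of the inputs are illustrative assumptions.

    def final_defect(defects, w_thresh=0.5, w_area=0.3, w_bright=0.2):
        """defects: list of dicts with normalized 'threshold', 'area', 'brightness', 'name'."""
        if not defects:
            return "good"
        def score(d):                                     # weighted combination of the quantities
            return w_thresh * d["threshold"] + w_area * d["area"] + w_bright * d["brightness"]
        return max(defects, key=score)["name"]            # highest score names the workpiece defect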
3) Classified blanking
Different blanking ports are assigned according to the good/defective judgment. For example, good products go to blanking port No. 1, defective products with scratches go to blanking port No. 2, and so on.
In the description of the present application, it is to be understood that the terms "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience in describing the present application and simplifying the description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and thus, should not be construed as limiting the present application.
Those skilled in the art will appreciate that, in addition to implementing the systems, apparatus, and various modules thereof provided by the present invention in purely computer readable program code, the same procedures can be implemented entirely by logically programming method steps such that the systems, apparatus, and various modules thereof are provided in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, the device and the modules thereof provided by the present invention can be considered as a hardware component, and the modules included in the system, the device and the modules thereof for implementing various programs can also be considered as structures in the hardware component; modules for performing various functions may also be considered to be both software programs for performing the methods and structures within hardware components.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (10)

1. A visual appearance inspection method, comprising:
an optical guiding step: calculating the similarity of an image to be compared against a reference image, so as to ensure consistent optical imaging of images acquired from the same batch of products on different machines;
a visual guiding step: small parts must be fed with a preset, high accuracy, otherwise the mechanical arm cannot feed them normally; before a product is fed, its deviation angle, X position, and Y position are obtained by a visual guidance algorithm, and the machine is notified to adjust so as to ensure feeding accuracy;
a product image acquisition step: configuring the camera for different products and for the different optical surfaces of the same product, obtaining a camera raw image consistent with the reference optical surface, and transmitting it to an image processing module for processing, finally obtaining an image suitable for model detection;
a model detection step: sending the obtained image to a depth model and a gray-scale detection model for detection, finally obtaining the detection result returned for each optical surface of the product;
a physical quantity filtering step: filtering the returned detection results by threshold, or by physical quantities such as defect length, defect width, and defect brightness;
a product blanking step: judging from the detection result whether the workpiece is a good product, and sorting workpieces to different blanking ports according to good products and defective products with different defects.
2. The visual appearance inspection method according to claim 1, wherein the optical guiding step comprises:
step S101: smoothing both the standard image and the image to be compared to filter out random noise;
step S102: according to the brightness characteristics of the workpiece, taking half of the background image's brightness value as the separation threshold; if the difference between a pixel's value and the background value is greater than the threshold, the pixel is treated as foreground, otherwise as background; for images that are difficult to separate, an accuracy value is exposed on the interface that the user can adjust manually, so that the reference image can be segmented correctly;
step S103: applying this separation threshold to both the image to be compared and the standard image, binarizing the images, and segmenting the foreground and background regions;
step S104: finding the contours in the binary image; according to the characteristics of the workpiece, the largest contour is the workpiece, and the smaller contours are due to noise and are ignored; extracting parameters such as the centroid, pixel positions, and area of the workpiece;
step S105: judging from the centroid and area whether the image foregrounds are consistent;
step S106: according to the workpiece characteristics, comparing only the brightness of the background image.
3. The visual appearance inspection method according to claim 1, wherein the visual guiding step comprises:
step S201: the mechanical arm grabs a material and places it at the photographing position; nine photos are taken following the nine-square-grid point design, and each point is saved as coordinate data to the RobotCores file;
step S202: the mechanical arm grabs the material and takes a preset number of photos at a preset rotation angle;
step S203: creating a calibration template by dragging the mouse to mark a region of interest on the picture;
step S204: performing the nine-point calibration calculation from the mechanical arm's nine grid positions and the positioning template data;
step S205: computing the rotation center from the point coordinates recorded during rotational photographing, fitting a circle to the position data of the preset number of photos;
step S206: reading the mechanical arm's rotation angle from the photographing position to the placing position, and writing it to the file RobotRetAngel.txt;
step S207: the mechanical arm grabs a standard material from the carrying platform to the photographing position, the camera photographs it, the picture is named StdImap.tiff, and the coordinates of the standard position in the X, Y, and Z directions are calculated and stored in StdCore.tup;
step S208: testing whether the calibration is feasible; if not, recalibrating.
4. The visual appearance inspection method according to claim 1, wherein the product image acquisition step comprises:
step S301: moving the camera and the workpiece to the specified optical point location;
step S302: setting the camera parameters and the light source according to the optical surface information, then triggering the camera to take a picture;
step S303: receiving the raw image returned by the camera, and prepending the corresponding workpiece information to the image header;
step S304: storing the camera raw image with the header information, so that causes can be traced when problems occur;
step S305: distributing the camera raw image with the header information to the preprocessing modules of the different optical surfaces for parallel processing;
step S306: the image preprocessing module applies cutting, compression, rotation, horizontal mirroring, and vertical mirroring to the camera raw image according to the number of workpiece carrying platforms and machine channels, and outputs an image meeting the model detection requirements, covering the following cases:
a camera raw image contains several workpieces, which must be cropped out individually;
the camera raw image is too large and must be compressed to speed up the model;
one workpiece is composed of several camera raw images, which must be rotated and mirrored before being merged.
5. The visual appearance inspection method according to claim 1, wherein the model detection step comprises:
a construction step: designing and building a deep learning model for detecting workpiece defects;
a classification step: classifying each pixel in the learning image by category, and judging the confidence of each pixel's class;
a deep learning model training step: training on the learning images with per-pixel classes and confidences to obtain a trained deep learning model;
a defect detection step: detecting workpiece defects with the trained deep learning model.
6. The visual appearance detection method according to claim 5, wherein in the classification step: each pixel in the learning image is classified, with class 0 representing background and class 1 representing a defect, and the learning image is divided into a background region and a defect region according to these classes;
the classification step includes:
a convolution step: extracting features from the input layer, filtering out useless information and retaining effective feature information;
a pooling step: reducing the dimensionality of the input layer and the amount of computation;
a feature fusion step: making cross-layer connections between different layers of the same dimension;
a category judgment step: quantizing the feature information obtained in the feature fusion step into the confidence of a certain category;
an output step: outputting a multi-dimensional array vector representing the category and confidence of each pixel in the learning image;
the multi-dimensional array vector is an [m, n, c, s] vector, wherein m denotes the image width, n denotes the image height, c denotes the class, and s denotes the confidence; a minimal network sketch follows.
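The claim describes a pixel-wise classifier with convolution, pooling, cross-layer fusion and a per-pixel category/confidence output. A minimal PyTorch sketch under those assumptions; the channel counts and depth are illustrative, and input height/width are assumed even so the skip connection aligns:

```python
import torch
import torch.nn as nn

class PixelClassifier(nn.Module):
    """Tiny encoder-decoder: convolution extracts features, pooling
    downsamples, a skip connection fuses same-dimension layers, and
    softmax yields a per-pixel class and confidence."""
    def __init__(self, classes=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)                    # pooling step
        self.mid = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.head = nn.Conv2d(32, classes, 1)          # category judgment step

    def forward(self, x):
        f1 = self.enc(x)                               # convolution step
        f2 = self.up(self.mid(self.pool(f1)))
        fused = torch.cat([f1, f2], dim=1)             # feature fusion (skip)
        logits = self.head(fused)
        conf, cls = torch.softmax(logits, dim=1).max(dim=1)
        return cls, conf                               # per-pixel class, confidence
```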
7. The visual appearance detection method according to claim 5, wherein in the deep learning model training step: a maximum number of training steps is set; during training, the defect-free images and the defect images in the training set are fed in a single-double (odd/even step) alternation; training stops once the loss no longer decreases noticeably, and the model checkpoint at that step is output as the trained model (a training-loop sketch follows).
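A hedged sketch of such a loop, assuming the model returns logits and that "loss no longer decreases noticeably" is approximated by a plateau of `patience` steps with improvement below `min_delta`; all hyperparameter values are illustrative:

```python
import itertools

def train(model, good_batches, defect_batches, loss_fn, opt,
          max_steps=10000, patience=500, min_delta=1e-4):
    """Alternate good/defect batches on even/odd steps; stop once
    the loss has not improved noticeably for `patience` steps."""
    good = itertools.cycle(good_batches)
    bad = itertools.cycle(defect_batches)
    best, stale = float("inf"), 0
    for step in range(max_steps):
        x, y = next(good) if step % 2 == 0 else next(bad)  # single-double alternation
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
        if best - loss.item() > min_delta:
            best, stale = loss.item(), 0
        else:
            stale += 1
            if stale >= patience:                          # loss plateau: stop
                break
    return model
```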
8. The visual appearance detection method according to claim 5, further comprising a gray-scale model detection step: detecting workpiece defects through gray-level transformation and spatial filtering;
the gray-scale model detection step includes a top depression detection step:
obtaining the rotation-translation matrix of the image through shape template matching and positioning;
obtaining the affine-transformed top region using the rotation-translation matrix;
performing sub-pixel threshold segmentation on the affine-transformed top region, and adding the segmented edge line segments to the metrology model;
calculating the maximum and minimum distances from the edge points to the baseline, the difference between the two being the depression value (see the sketch after this step);
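A minimal sketch of the depression measurement, assuming the segmented edge points are available as an (N, 2) array and the baseline is a straight line fitted to them (the patent does not detail its metrology model):

```python
import numpy as np

def depression_depth(edge_pts):
    """Fit a baseline through the top-edge points and return the
    difference between the maximum and minimum point-to-line
    distances, i.e. the depression value."""
    x, y = edge_pts[:, 0], edge_pts[:, 1]
    k, b = np.polyfit(x, y, 1)                 # baseline y = k*x + b
    d = (k * x - y + b) / np.hypot(k, 1.0)     # signed point-to-line distance
    return d.max() - d.min()                   # max minus min distance
```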
the gray-scale model detection step includes a flash/burr detection step:
obtaining the rotation-translation matrix of the image through shape template matching and positioning;
finding the inner-hole region through threshold segmentation, erasing the inner-corner regions of the inner-hole region after affine transformation, and detecting burrs through a morphological closing operation (see the sketch after this step);
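A minimal sketch of burr detection by morphological closing, assuming a gray image of the inner-hole region; the threshold, structuring-element size and minimum area are hypothetical:

```python
import cv2

def detect_burrs(gray, thresh=60, kernel_size=15, min_area=20):
    """Segment the inner hole by thresholding, close the mask, and
    treat what the closing filled in (notches cut into the hole by
    protruding burrs) as burr candidates."""
    _, hole = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)
    k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                  (kernel_size, kernel_size))
    closed = cv2.morphologyEx(hole, cv2.MORPH_CLOSE, k)   # fill burr notches
    burrs = cv2.subtract(closed, hole)                    # closing's fill-in
    n, _, stats, _ = cv2.connectedComponentsWithStats(burrs)
    return [s for s in stats[1:] if s[cv2.CC_STAT_AREA] >= min_area]
```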
the gray-scale model detection step includes a water gap height detection step:
obtaining the rotation-translation matrix of the image through shape template matching and positioning;
judging whether the water gap height meets the standard from the rotation angle of the matrix;
the gray-scale model detection step includes a top crack detection step:
performing threshold segmentation on the image to find the region to be inspected, and applying a Fourier transform to move the image from the spatial domain to the frequency domain;
filtering the mid-frequency components with a Gaussian filter, converting them back to the spatial domain, and extracting the line shape by means of the second derivative (see the sketch after this step).
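A minimal NumPy sketch of the frequency-domain step, assuming the mid-frequency band is kept with a Gaussian weight and the second derivative is taken as a Laplacian; the band limits and sigma are hypothetical:

```python
import numpy as np

def crack_response(gray, band=(0.05, 0.35), sigma=0.08):
    """Move the image to the frequency domain, keep a Gaussian-weighted
    mid-frequency annulus, return to the spatial domain, and use the
    Laplacian (second derivative) as the line/crack response."""
    f = np.fft.fftshift(np.fft.fft2(gray.astype(float)))
    h, w = gray.shape
    yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    r = np.hypot(yy / h, xx / w)                  # normalized frequency radius
    lo, hi = band
    bandpass = np.exp(-((r - (lo + hi) / 2) ** 2) / (2 * sigma ** 2))
    spatial = np.real(np.fft.ifft2(np.fft.ifftshift(f * bandpass)))
    lap = (np.gradient(np.gradient(spatial, axis=0), axis=0)
           + np.gradient(np.gradient(spatial, axis=1), axis=1))
    return lap                                    # high response along cracks
```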
9. The visual appearance detection method according to claim 1, wherein the physical quantity filtering step comprises:
setting filtering rules: different products, different defects and different parts correspond to different parameter conditions for judging a defect, so different rules are set as the defect judgment conditions; filtering rules can be combined and assigned distinct priorities, and are compared in priority order: if the first rule matches, the detection record of the product is directly judged as the corresponding defect record and no further rules are compared; otherwise the second rule is compared, and so on until all defect rules have been compared; if none match, the detection record is judged as a good-product record;
setting rule conditions: a rule condition is a linearly quantized value that can serve as a judgment condition, and includes the following physical quantities: defect threshold, defect length, defect width, defect area, defect average brightness, defect contrast, defect gradient and defect aspect ratio; one or more of these are combined into a judgment rule;
a non-detection area filtering step: each optical surface of a product has corresponding non-detection areas in which defects need not be detected; detection in these areas is conditionally masked by setting area detection rule conditions, thereby achieving accurate detection; a rule-engine sketch follows.
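A minimal sketch of such priority-ordered, first-match-wins filtering; the rule fields and condition format are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class DefectRule:
    """One filtering rule: a defect label plus physical-quantity
    range conditions; a lower priority value is compared first."""
    label: str
    priority: int
    conditions: dict = field(default_factory=dict)  # e.g. {"area": (50, None)}

    def matches(self, record):
        for key, (lo, hi) in self.conditions.items():
            v = record.get(key)
            if v is None or (lo is not None and v < lo) \
                         or (hi is not None and v > hi):
                return False
        return True

def judge(record, rules):
    """Compare rules in priority order; the first match wins,
    otherwise the record is judged a good product."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if rule.matches(record):
            return rule.label            # defect record
    return "good"                        # no rule matched: good product

# e.g. judge({"length": 3.1}, [DefectRule("scratch", 1, {"length": (2.0, None)})])
# returns "scratch"
```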
10. The visual appearance detection method according to claim 1, wherein the product blanking step comprises:
blanking judgment, namely result-complete blanking and timeout blanking according to the detection results:
result-complete blanking means that blanking is judged once detection results have been returned for all optical surfaces of the workpiece;
timeout blanking means that timing starts once all optical surfaces of the workpiece have been photographed, and blanking is judged with the detection results returned so far once the preset timeout is exceeded;
judging good products and defective products with different defects:
the defective-product defect or good-product status to which the workpiece finally belongs is judged from the workpiece's filtered defect results; the final defect can be determined by a final judgment algorithm such as the threshold-maximum algorithm, or from scores computed by weighting the threshold, the defect area and the defect brightness differently (a sketch follows);
classified blanking:
according to the good/defective judgment, workpieces are routed to different blanking ports by category.
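A minimal sketch of the weighted final-judgment variant, assuming each filtered defect carries its threshold, area and brightness; the weights are illustrative:

```python
def final_defect(defects, weights=(0.5, 0.3, 0.2)):
    """Pick the workpiece's final defect as the one with the highest
    weighted score over threshold, area and brightness; an empty
    list means the workpiece is a good product."""
    if not defects:
        return "good"
    wt, wa, wb = weights
    def score(d):                  # d: (label, threshold, area, brightness)
        return wt * d[1] + wa * d[2] + wb * d[3]
    return max(defects, key=score)[0]
```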
CN202010772989.3A 2020-08-04 2020-08-04 Visual appearance detection method Active CN111951237B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010772989.3A CN111951237B (en) 2020-08-04 2020-08-04 Visual appearance detection method

Publications (2)

Publication Number Publication Date
CN111951237A 2020-11-17
CN111951237B 2021-06-08

Family

ID=73339342

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010772989.3A Active CN111951237B (en) 2020-08-04 2020-08-04 Visual appearance detection method

Country Status (1)

Country Link
CN (1) CN111951237B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104268857A (en) * 2014-09-16 2015-01-07 湖南大学 Rapid sub pixel edge detection and locating method based on machine vision
CN106053479A (en) * 2016-07-21 2016-10-26 湘潭大学 System for visually detecting workpiece appearance defects based on image processing
CN106204614A (en) * 2016-07-21 2016-12-07 湘潭大学 A kind of workpiece appearance defects detection method based on machine vision
CN106625676A (en) * 2016-12-30 2017-05-10 易思维(天津)科技有限公司 Three-dimensional visual accurate guiding and positioning method for automatic feeding in intelligent automobile manufacturing
CN210500279U (en) * 2019-07-11 2020-05-12 常州星宇车灯股份有限公司 Positioning and grabbing system based on visual guidance
CN110675341A (en) * 2019-09-18 2020-01-10 哈尔滨工程大学 Monocular-vision-guided underwater robot and seabed platform butt joint method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
F. RUFFIER et al.: "Bio-inspired optical flow circuits for the visual guidance of micro-air vehicles", Proceedings of the 2003 International Symposium on Circuits and Systems *
YE Peng et al.: "A depth-fused perception model based on the gray-level co-occurrence matrix" (in Chinese), Computer Science (《计算机科学》) *
CHEN Siwei: "Working-plane positioning error and correction of a vision-guided grasping manipulator" (in Chinese), China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112116668A (en) * 2020-11-23 2020-12-22 常州微亿智造科技有限公司 Optical copying method and device for quality inspection and quality inspection equipment
CN112529762A (en) * 2020-12-04 2021-03-19 成都新西旺自动化科技有限公司 Machine vision system configuration screening method and device and readable storage medium
CN112598642A (en) * 2020-12-22 2021-04-02 苏州睿信诺智能科技有限公司 High-speed high-precision visual detection method
CN112598642B (en) * 2020-12-22 2024-05-10 苏州睿信诺智能科技有限公司 High-speed high-precision visual detection method
CN112508950A (en) * 2021-02-02 2021-03-16 常州微亿智造科技有限公司 Anomaly detection method and device
CN112508950B (en) * 2021-02-02 2021-05-11 常州微亿智造科技有限公司 Anomaly detection method and device
CN112950619A (en) * 2021-03-25 2021-06-11 征图智能科技(江苏)有限公司 Visual detection method based on visual simulation
CN113340895A (en) * 2021-06-04 2021-09-03 深圳中科飞测科技股份有限公司 Adjusting method of detection system and detection system
CN113706495A (en) * 2021-08-23 2021-11-26 广东奥普特科技股份有限公司 Machine vision detection system for automatically detecting lithium battery parameters on conveyor belt
CN114018935A (en) * 2021-11-05 2022-02-08 苏州中锐图智能科技有限公司 Multipoint rapid calibration method
CN113959951A (en) * 2021-11-21 2022-01-21 天津宏华焊研机器人科技有限公司 Machine vision device for online detection of workpiece assembly and detection method
CN114353702A (en) * 2021-12-06 2022-04-15 北京动力机械研究所 Rotary opening adjusting area measuring device based on visual detection
CN114638792A (en) * 2022-03-03 2022-06-17 浙江达峰科技有限公司 Method for detecting polarity defect of electrolytic capacitor of plug-in circuit board
CN114359276A (en) * 2022-03-17 2022-04-15 武汉海明亮科技发展有限公司 Steel die blanking optimization scheme obtaining method based on pockmark defects
CN114359276B (en) * 2022-03-17 2022-05-27 武汉海明亮科技发展有限公司 Steel die blanking optimization scheme obtaining method based on pockmark defects
CN114708209A (en) * 2022-03-25 2022-07-05 广东高景太阳能科技有限公司 Production interaction method and system based on 3D modeling and visual inspection
CN114827466B (en) * 2022-04-20 2023-07-04 武汉三江中电科技有限责任公司 Human eye-like equipment image acquisition device and image acquisition method
CN114827466A (en) * 2022-04-20 2022-07-29 武汉三江中电科技有限责任公司 Human eye-imitated equipment image acquisition device and image acquisition method
CN115115622A (en) * 2022-08-24 2022-09-27 苏州新实达精密电子科技有限公司 Punching press terminal visual detection device
CN116087216A (en) * 2022-12-14 2023-05-09 广东九纵智能科技有限公司 Multi-axis linkage visual detection equipment, method and application
CN116087216B (en) * 2022-12-14 2024-02-20 广东九纵智能科技有限公司 Multi-axis linkage visual detection equipment, method and application
CN115639207A (en) * 2022-12-26 2023-01-24 广东省农业科学院设施农业研究所 Machine vision detection method and system for simultaneously detecting multiple products
CN115861315A (en) * 2023-02-27 2023-03-28 常州微亿智造科技有限公司 Defect detection method and device
CN116245877B (en) * 2023-05-08 2023-11-03 济南达宝文汽车设备工程有限公司 Material frame detection method and system based on machine vision
CN116245877A (en) * 2023-05-08 2023-06-09 济南达宝文汽车设备工程有限公司 Material frame detection method and system based on machine vision
CN116452598A (en) * 2023-06-20 2023-07-18 曼德惟尔(山东)智能制造有限公司 Axle production quality rapid detection method and system based on computer vision
CN116652956A (en) * 2023-06-20 2023-08-29 上海微亿智造科技有限公司 Photographing path self-adaptive planning method and device for appearance detection
CN116652956B (en) * 2023-06-20 2024-03-22 上海微亿智造科技有限公司 Photographing path self-adaptive planning method and device for appearance detection
CN116452598B (en) * 2023-06-20 2023-08-29 曼德惟尔(山东)智能制造有限公司 Axle production quality rapid detection method and system based on computer vision
CN116934746A (en) * 2023-09-14 2023-10-24 常州微亿智造科技有限公司 Scratch defect detection method, system, equipment and medium thereof
CN116934746B (en) * 2023-09-14 2023-12-01 常州微亿智造科技有限公司 Scratch defect detection method, system, equipment and medium thereof

Also Published As

Publication number Publication date
CN111951237B (en) 2021-06-08

Similar Documents

Publication Publication Date Title
CN111951237B (en) Visual appearance detection method
CN111951238A (en) Product defect detection method
CN110314854B (en) Workpiece detecting and sorting device and method based on visual robot
CN106709436B (en) Track traffic panoramic monitoring-oriented cross-camera suspicious pedestrian target tracking system
CN110163853B (en) Edge defect detection method
CN110866903B (en) Ping-pong ball identification method based on Hough circle transformation technology
CN111652085B (en) Object identification method based on combination of 2D and 3D features
CN105678213B (en) Dual-mode mask person event automatic detection method based on video feature statistics
CN112184648A (en) Piston surface defect detection method and system based on deep learning
US20240005148A1 (en) System and method for finding and classifying patterns in an image with a vision system
CN105158268A (en) Intelligent online detection method, system and device for defects of fine-blanked parts
CN112907519A (en) Metal curved surface defect analysis system and method based on deep learning
CN109559324A (en) A kind of objective contour detection method in linear array images
CN113252568A (en) Lens surface defect detection method, system, product and terminal based on machine vision
CN111402238A (en) Defect identification system realized through machine vision
CN110189375A (en) A kind of images steganalysis method based on monocular vision measurement
CN113177924A (en) Industrial production line product flaw detection method
CN117152161B (en) Shaving board quality detection method and system based on image recognition
CN114004814A (en) Coal gangue identification method and system based on deep learning and gray scale third moment analysis
CN111665199A (en) Wire and cable color detection and identification method based on machine vision
CN113822810A (en) Method for positioning workpiece in three-dimensional space based on machine vision
Fu et al. Research on image-based detection and recognition technologies for cracks on rail surface
CN113012228B (en) Workpiece positioning system and workpiece positioning method based on deep learning
CN111178405A (en) Similar object identification method fusing multiple neural networks
CN110866917A (en) Tablet type and arrangement mode identification method based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant