CN114387262A - Nut positioning detection method, device and system based on machine vision - Google Patents


Info

Publication number
CN114387262A
CN114387262A (application CN202210048417.XA)
Authority
CN
China
Prior art keywords
nut
image
hexagon
detection
center
Prior art date
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN202210048417.XA
Other languages
Chinese (zh)
Inventor
李冰 (Li Bing)
张效铭 (Zhang Xiaoming)
王巍 (Wang Wei)
翟永杰 (Zhai Yongjie)
杨耀权 (Yang Yaoquan)
Current Assignee (listing may be inaccurate)
North China Electric Power University
Original Assignee
North China Electric Power University
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by North China Electric Power University filed Critical North China Electric Power University
Priority to CN202210048417.XA priority Critical patent/CN114387262A/en
Publication of CN114387262A publication Critical patent/CN114387262A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20076Probabilistic image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30164Workpiece; Machine component

Abstract

The invention relates to a machine-vision-based nut positioning detection method, device and system in the technical field of image processing. A Retinex algorithm enhances the captured nut image to be detected, reducing the impact of complex outdoor conditions such as heavy fog and low light on workpiece recognition accuracy. A deep Hough transform detects the straight lines of the nut's hexagonal edges; its efficient extraction of contextual line information yields both higher detection accuracy and a much shorter detection time. A hexagon-constraint method then locates the nut's center point, improving both the speed and the precision of nut-center positioning detection.

Description

Nut positioning detection method, device and system based on machine vision
Technical Field
The invention relates to the technical field of image processing, and in particular to a nut positioning detection method, device and system based on machine vision.
Background
Transmission lines form the transmission stage of China's power industry, and their safe operation is key to guaranteeing electric energy delivery. Bolts and nuts fasten the various fittings on a transmission line and connect its structural components; they are the most numerous and most widely distributed parts on the line and, as essential fixings on towers, are vital to transmission safety. However, transmission lines stand outdoors year-round under complex external forces, so bolts and nuts loosen or fall off easily. Once a bolt fails, serious hazards such as electric leakage or even line loosening can arise and endanger the entire line. Timely detection and fastening of bolts is therefore key to the safe operation of transmission lines.
The traditional approach of manually inspecting and fastening bolts and nuts is inefficient and labor-intensive, and inspections can be missed.
In recent years, demand for towers has grown rapidly with the rising demand for electric energy. Machine vision improves detection efficiency and accuracy. Machine-vision inspection replaces the human eye with machines, is an important component of industrial production and a tool of intelligent manufacturing, and its application in production marks the traditional manufacturing industry's shift toward informatization.
Compared with the human eye, machine vision has better tolerance, is unaffected by subjective factors, and offers very high detection efficiency. As detection algorithms continue to develop, detection accuracy has greatly improved while missed-detection and false-detection rates keep falling. With its good real-time performance, machine vision meets the high-speed, high-volume inspection demands of modern industry.
At present, visual inspection is widely used on intelligent industrial production lines and meets many inspection needs, such as classifying and quality-checking industrial products, providing information for industrial-robot motion, and inspecting precision instruments. Applying machine vision raises China's level of intelligent manufacturing and plays an extremely important role in assuring industrial product quality and in safety monitoring; replacing manual detection and fastening with machines greatly improves the automation and intelligence of power inspection.
Existing approaches include non-contact part-size detection based on CCDs, large-part size measurement based on machine vision, and shaft-part size detection based on image processing: (1) the CCD-based non-contact technique proposes a new contour-edge fusion algorithm and detects corner-point coordinates using KD curvature and an intensity threshold, thereby measuring size; (2) the machine-vision large-part technique extracts and matches SIFT image feature points, fuses the splicing seams of images with a weighted-average hat-function fusion algorithm, extracts sub-pixel edges, and then computes each geometric dimension of the part; (3) the shaft technique builds a distortion-compensation function from a stepped-shaft image of known size, then performs sub-pixel edge detection by least-squares curve fitting on Prewitt-extracted edges to obtain an accurate shaft edge, from which the shaft diameter is computed. Methods (1) to (3) are somewhat effective, but their models have many parameters, weak generalization ability and limited computational performance.
Disclosure of Invention
The invention aims to provide a machine-vision-based nut positioning detection method, device and system that improve the speed and precision of nut-center positioning detection.
To this end, the invention provides the following scheme:
A machine-vision-based nut positioning detection method, comprising:
constructing a nut-detection deep learning model based on a lightweight YOLOv5;
enhancing the captured nut image to be detected with a Retinex algorithm, the nut in the image being a hexagon nut;
inputting the enhanced nut image into the nut-detection deep learning model and outputting the image annotated with a nut bounding box;
performing line detection, with a deep Hough transform, on the nut edges inside the bounding box of the annotated image to obtain the straight lines of the nut's hexagonal edges;
locating the hexagon's center, taken as the nut center, with a hexagon-constraint method from those straight lines;
converting the pixel coordinates of the nut center into actual physical coordinates, based on the camera calibration parameters and the pinhole imaging model, to obtain the actual coordinates of the nut center.
Optionally, constructing the nut-detection deep learning model based on lightweight YOLOv5 specifically comprises:
replacing the backbone network of the YOLOv5 model with MobileNet-V2;
adding a lightweight attention module to MobileNet-V2 to obtain a lightweight YOLOv5 model;
shooting several hexagon-nut images with an industrial camera and labeling the hexagon nut in each image to form a data sample set;
training the lightweight YOLOv5 model on the data sample set and taking the model that meets the training evaluation indices, precision and recall, as the nut-detection deep learning model.
Optionally, before performing line detection on the nut edges in the bounding box with the deep Hough transform, the method further comprises:
graying the annotated nut image by the weighted-average method to obtain a nut grayscale image;
applying histogram equalization and then Gaussian filtering to the nut grayscale image to obtain a filtered grayscale image;
binarizing the filtered grayscale image with the maximum between-class variance (Otsu) method to obtain a nut binary image.
Optionally, performing line detection on the nut edges in the bounding box with the deep Hough transform to obtain the straight lines of the nut's hexagonal edges specifically comprises:
constructing a nut semantic-line detection data set containing nut sample images from different scenes, each annotated with semantic lines;
cropping the bounding box from the annotated nut image to obtain a nut sub-image;
annotating straight lines on the nut sub-image according to the semantic-line data set;
parameterizing each line by its angle and its distance to the origin;
extracting spatial features of the nut sub-image with a CNN feature extractor;
traversing all possible lines in the annotated sub-image and aggregating the spatial features along each line, by summation, onto the corresponding point in parameter space;
mapping the aggregated parameter-space points back onto the nut sub-image to obtain the straight lines of the nut's hexagonal edges.
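The aggregation step can be illustrated with a classical Hough accumulation over the (angle, distance-to-origin) parameter space. This is only a minimal numpy sketch of the voting idea, not the learned feature aggregation of the deep Hough transform, which sums CNN features rather than counts:

```python
import numpy as np

def hough_line_votes(points, height, width, n_theta=180):
    """Vote each point onto (theta, rho) parameter space; points lying on
    one straight line all hit the same parameter-space cell."""
    diag = int(np.ceil(np.hypot(height, width)))
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((n_theta, 2 * diag + 1), dtype=np.int32)
    for y, x in points:
        # rho = x*cos(theta) + y*sin(theta), offset by diag so indices stay >= 0
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[np.arange(n_theta), rhos + diag] += 1
    return acc, diag

# points on the horizontal line y = 10 all vote for (theta = 90 deg, rho = 10)
pts = [(10, x) for x in range(50)]
acc, diag = hough_line_votes(pts, 64, 64)
theta_idx, rho_idx = np.unravel_index(np.argmax(acc), acc.shape)
```

In the deep variant described above, the per-pixel vote is replaced by summing the CNN spatial features along each candidate line into the corresponding parameter-space cell.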
Optionally, locating the hexagon's center as the nut center with the hexagon-constraint method specifically comprises:
computing the intersection of every pair of the hexagonal-edge lines as candidate vertices;
selecting correct vertices from the candidates by voting under the hexagonal distance constraint, namely that the distance between any two vertices of a regular hexagon takes one of three values;
fitting the hexagon's circumscribed circle through each set of three correct vertices and computing its center coordinates;
taking the mean of these circle-center coordinates as the hexagon's center, i.e. the pixel coordinates of the nut center.
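The steps above can be sketched as follows, assuming ideal (noise-free) edge lines: intersect pairs of lines to get candidate vertices, then average the circumcircle centers fitted through vertex triples. The distance-constraint vote that filters wrong intersections is omitted here for brevity:

```python
import numpy as np
from itertools import combinations

def intersect(l1, l2):
    """Candidate vertex: intersection of two edge lines (a, b, c), a*x + b*y = c."""
    A = np.array([l1[:2], l2[:2]], dtype=float)
    if abs(np.linalg.det(A)) < 1e-9:          # parallel opposite edges: no vertex
        return None
    return np.linalg.solve(A, np.array([l1[2], l2[2]], dtype=float))

def circumcenter(p, q, r):
    """Center of the circle through three points (two perpendicular bisectors)."""
    (ax, ay), (bx, by), (cx, cy) = p, q, r
    A = 2.0 * np.array([[bx - ax, by - ay], [cx - ax, cy - ay]])
    b = np.array([bx ** 2 + by ** 2 - ax ** 2 - ay ** 2,
                  cx ** 2 + cy ** 2 - ax ** 2 - ay ** 2])
    return np.linalg.solve(A, b)

def hexagon_center(vertices):
    """Average the circumcircle centers fitted through every vertex triple."""
    centers = [circumcenter(*t) for t in combinations(vertices, 3)]
    return np.mean(centers, axis=0)

v = intersect((1.0, 0.0, 8.0), (0.0, 1.0, 4.0))       # vertical x=8 meets horizontal y=4
verts = [(3 + 5 * np.cos(k * np.pi / 3), 4 + 5 * np.sin(k * np.pi / 3))
         for k in range(6)]                            # ideal hexagon centered at (3, 4)
center = hexagon_center(verts)
```

With noisy detected lines, the patent's distance constraint would discard candidate vertices whose pairwise distances do not match one of the three admissible hexagon vertex distances before the circumcircle fits are averaged.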
Optionally, converting the pixel coordinates of the nut center into actual physical coordinates based on the camera calibration parameters and the pinhole imaging model specifically comprises:
calibrating the camera by Zhang Zhengyou's planar calibration method to obtain the intrinsic and extrinsic parameters of the camera;
determining the actual coordinates of the nut center, under the pinhole imaging model, from the camera's intrinsic and extrinsic parameters, the vertical distance between the camera and the steel plate carrying the nut, and the pixel coordinates of the nut center.
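Under the pinhole model with the optical axis perpendicular to the steel plate, the back-projection reduces to scaling by the known depth. A minimal sketch, where the intrinsic matrix K (focal lengths fx, fy in pixels; principal point cx, cy) is a hypothetical calibration result, not a value from the patent:

```python
import numpy as np

def pixel_to_plane(u, v, K, Z):
    """Back-project pixel (u, v) onto the nut plane at depth Z
    (the vertical camera-to-plate distance), pinhole model."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    return np.array([(u - cx) * Z / fx, (v - cy) * Z / fy, Z])

# hypothetical intrinsics from a Zhang-style calibration
K = np.array([[1000.0, 0.0, 512.0],
              [0.0, 1000.0, 341.0],
              [0.0, 0.0, 1.0]])
xyz = pixel_to_plane(612.0, 441.0, K, Z=0.5)   # depth in meters
```

A tilted camera would additionally require the extrinsic rotation and translation before the depth scaling.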
The invention also provides a machine-vision-based nut positioning detection device, comprising: an industrial camera, an edge detection device, a controller and a nut-locking actuator;
the industrial camera is connected to the edge detection device, which applies the machine-vision-based nut positioning detection method described above to the nut image captured by the industrial camera and determines the actual coordinates of the nut center;
the controller is connected to both the edge detection device and the nut-locking actuator, and drives the actuator to perform the nut-fastening operation according to the nut-center coordinates determined by the edge detection device.
A machine-vision-based nut positioning detection system, comprising:
a model building module for constructing a nut-detection deep learning model based on lightweight YOLOv5;
an image enhancement module for enhancing the captured nut image to be detected with a Retinex algorithm, the nut in the image being a hexagon nut;
a nut annotation module for inputting the enhanced nut image into the nut-detection deep learning model and outputting the image annotated with a nut bounding box;
a line detection module for detecting, with the deep Hough transform, the straight lines of the nut's hexagonal edges inside the bounding box of the annotated image;
a center positioning module for locating the hexagon's center, taken as the nut center, with the hexagon-constraint method from those straight lines;
a center coordinate determination module for converting the pixel coordinates of the nut center into actual physical coordinates, based on the camera calibration parameters and the pinhole imaging model, to obtain the actual coordinates of the nut center.
Optionally, the line detection module specifically comprises:
a data set construction submodule for constructing a nut semantic-line detection data set containing nut sample images from different scenes, each annotated with semantic lines;
a cropping submodule for cropping the bounding box from the annotated nut image to obtain a nut sub-image;
a line annotation submodule for annotating straight lines on the nut sub-image according to the semantic-line data set;
a parameterization submodule for parameterizing each line by its angle and its distance to the origin;
an extraction submodule for extracting spatial features of the nut sub-image with a CNN feature extractor;
an aggregation submodule for traversing all possible lines in the annotated sub-image and aggregating the spatial features along each line, by summation, onto the corresponding point in parameter space;
a line obtaining submodule for mapping the aggregated parameter-space points back onto the nut sub-image to obtain the straight lines of the nut's hexagonal edges.
Optionally, the center positioning module specifically comprises:
a candidate vertex calculation submodule for computing the intersection of every pair of the hexagonal-edge lines as candidate vertices;
a correct vertex selection submodule for selecting correct vertices from the candidates by voting under the hexagonal distance constraint, namely that the distance between any two vertices of a regular hexagon takes one of three values;
a fitting submodule for fitting the hexagon's circumscribed circle through each set of three correct vertices and computing its center coordinates;
a pixel coordinate determination submodule for taking the mean of these circle-center coordinates as the hexagon's center, i.e. the pixel coordinates of the nut center.
According to the specific embodiments provided, the invention achieves the following technical effects:
The disclosed machine-vision-based nut positioning detection method, device and system enhance the captured nut image with a Retinex algorithm, reducing the impact of complex outdoor conditions such as heavy fog and low light on workpiece recognition accuracy. The deep Hough transform detects the straight lines of the nut's hexagonal edges; its efficient extraction of contextual line information yields both higher detection accuracy and a much shorter detection time. The hexagon-constraint method then locates the nut's center point, improving both the speed and the precision of nut-center positioning detection.
Drawings
In order to illustrate more clearly the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without inventive effort.
FIG. 1 is a basic flow chart of a nut positioning and detecting method based on machine vision according to the present invention;
FIG. 2 is a schematic diagram of the precise nut center positioning provided by the present invention;
FIG. 3 is a schematic diagram of the distance constraint principle provided by the present invention;
FIG. 4 is a schematic structural diagram of a nut positioning and detecting system based on machine vision according to the present invention;
fig. 5 is a schematic view of the camera installation provided by the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a machine-vision-based nut positioning detection method, device and system that improve the speed and precision of nut-center positioning detection.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The invention provides a machine-vision-based nut positioning detection method, comprising the following steps.
Step 1: constructing a nut-detection deep learning model based on lightweight YOLOv5.
In one example, this step specifically comprises:
1-1, replacing the backbone network of the YOLOv5 model with MobileNet-V2;
1-2, adding a lightweight attention module to MobileNet-V2 to obtain a lightweight YOLOv5 model.
CBAM (Convolutional Block Attention Module) comprises two independent sub-modules, a Channel Attention Module (CAM) and a Spatial Attention Module (SAM), which apply attention over channels and over space respectively. This not only saves parameters and computing power, but also allows CBAM to be integrated into existing network architectures as a plug-and-play module.
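A rough numpy sketch of the two attention steps follows. The shared MLP of CAM and the 7x7 convolution of SAM are collapsed into a plain sigmoid over pooled statistics, so this shows only the data flow, not the trained modules:

```python
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x):
    """x: (C, H, W). Pool over space, then re-weight each channel.
    (CBAM's shared MLP is omitted in this sketch.)"""
    w = _sigmoid(x.mean(axis=(1, 2)) + x.max(axis=(1, 2)))
    return x * w[:, None, None]

def spatial_attention(x):
    """Pool over channels, then re-weight each spatial position.
    (CBAM's 7x7 convolution is omitted in this sketch.)"""
    w = _sigmoid(x.mean(axis=0) + x.max(axis=0))
    return x * w[None, :, :]

feat = np.random.default_rng(0).standard_normal((8, 16, 16))
out = spatial_attention(channel_attention(feat))   # CBAM order: CAM, then SAM
```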
1-3, shooting several hexagon-nut images with an industrial camera and labeling the hexagon nut in each image to form a data sample set.
The experimental objects collected are hexagon bolts and hexagon nuts: workpieces with a stable, uniform shape and relatively few samples. The original images are 1000 hexagon-nut photographs taken with a Hikvision industrial color camera. So that the detection process meets the real-time and accuracy requirements of the industrial field, the capture resolution was set to 1024 x 682, a size at which the algorithm trains and detects quickly. After data collection, the 1000 hexagon-nut workpieces were labeled.
1-4, training the lightweight YOLOv5 model on the data sample set and taking the model that meets the training evaluation indices, precision and recall, as the nut-detection deep learning model.
The 1000 collected and labeled nut pictures are used to train the YOLOv5 model. The network training parameters, including batch size, learning rate and iteration count, are set, and the labeled data set is used to train the network model; training is complete when the network converges and the loss function falls below a given value.
The YOLOv5 model training evaluation indices are:
Precision (correctly found positives out of all found positives):
Precision = TP / (TP + FP)   (1)
Recall (correctly found positives out of all positives that should be found):
Recall = TP / (TP + FN)   (2)
where TP (True Positive) is a sample judged positive that is in fact positive; FP (False Positive) is a sample judged positive that is in fact negative; and FN (False Negative) is a sample judged negative that is in fact positive.
Step 2: enhancing the captured nut image to be detected with the Retinex algorithm; the nut in the image is a hexagon nut.
2-1. The image S of an object seen by an observer is formed by incident light L reflected from the object's surface; the reflectance R is determined by the object itself and is not altered by the incident light L. The original image S is thus the product of the illumination image L and the reflectance image R:
S(x,y) = R(x,y) · L(x,y)   (3)
2-2. Retinex-based image enhancement aims to estimate the illumination L from the original image S, solve for R, and remove the effect of uneven illumination so as to improve the visual appearance of the image, much as the human visual system does. In processing, the image is usually transferred to the logarithmic domain, which turns the product into a sum:
s = log S,  l = log L,  r = log R   (4)
where s denotes the log-domain original image and l the log-domain estimated illumination, so that s = l + r.
2-3. The core of the Retinex method is to estimate the illumination: the component l is estimated from the image s and removed to obtain the original reflectance component r, i.e.:
l = f(s)   (5)
r = s − f(s)   (6)
where f(s) is the estimation function of the illumination l, and lowercase letters denote log-domain quantities.
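A single-scale Retinex sketch in numpy, taking f(s) to be the log of a Gaussian-smoothed image, a common choice for the illumination estimator; the patent does not fix the form of f:

```python
import numpy as np

def _gauss1d(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def single_scale_retinex(img, sigma=15.0):
    """r = s - f(s): log of the image minus log of its Gaussian blur,
    the blur serving as the illumination estimate."""
    k = _gauss1d(sigma, int(3 * sigma))
    # separable Gaussian blur, rows then columns
    blur = np.apply_along_axis(lambda row: np.convolve(row, k, 'same'), 1, img)
    blur = np.apply_along_axis(lambda col: np.convolve(col, k, 'same'), 0, blur)
    return np.log1p(img) - np.log1p(blur)

# uniform illumination: the reflectance output is ~0 away from the borders
r = single_scale_retinex(np.full((64, 64), 100.0), sigma=3.0)
```

On a uniformly lit region the two log terms cancel, so the output carries only the reflectance variation, which is the behavior Eq. (6) describes.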
Step 3: inputting the enhanced nut image into the nut-detection deep learning model and outputting the image annotated with a nut bounding box.
After YOLOv5 provides the initial nut position and nut sub-images are obtained, the nut's features can be correctly extracted only after a series of image-processing steps. Feature extraction mainly involves graying, Gaussian filtering and the deep Hough transform, finally achieving precise nut positioning.
Preprocessing the annotated nut image specifically comprises the following steps.
3-1, graying the annotated nut image by the weighted-average method to obtain a nut grayscale image.
To speed up visual positioning, the captured color image is usually converted to grayscale. A grayed image is a special color image whose three components R, G and B are equal. The grayscale image requires far less subsequent computation while still preserving the features, local colors and local brightness levels of the whole image. Graying is therefore performed by the weighted-average method, i.e. the R, G and B components are averaged with different weights according to the importance of the three channels. A typical grayscale transform is:
Gray(i,j) = 0.299·R(i,j) + 0.587·G(i,j) + 0.114·B(i,j)   (7)
where (i,j) is a coordinate point in the two-dimensional image.
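The weighted-average graying is one vectorized operation; a sketch using the standard BT.601 weights (0.299, 0.587, 0.114):

```python
import numpy as np

def to_gray(img_rgb):
    """Weighted-average graying: 0.299 R + 0.587 G + 0.114 B."""
    weights = np.array([0.299, 0.587, 0.114])
    return img_rgb @ weights          # last axis ordered (R, G, B)

white = np.full((2, 2, 3), 255.0)
gray = to_gray(white)                 # weights sum to 1, so white stays 255
```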
3-2, applying histogram equalization and then Gaussian filtering to the nut grayscale image to obtain a filtered grayscale image.
Histogram equalization is a method of enhancing image contrast whose main idea is to change an image's histogram into an approximately uniform distribution. Suppose the gray levels are normalized and the probability density function of gray level r in the image is p_r(r), subscripts distinguishing the densities before and after the transform. Applying the equalization transform to the original gray levels gives the transformed gray level:
T(r) = (L − 1) ∫₀^r p_r(w) dw   (8)
where T(r) is the gray value at the corresponding position of the equalized image; T(r) is strictly monotonically increasing on [0, L−1], and 0 ≤ T(r) ≤ L−1 whenever 0 ≤ r ≤ L−1. Here r is the gray level of the image to be processed, with range [0, L−1] (r = 0 is black and r = L−1 is white), and w is the variable of integration.
The gray-level probability density of the transformed image can be shown to be uniform:
p_s(s) = 1 / (L − 1),  0 ≤ s ≤ L − 1   (9)
The gray levels after histogram equalization are therefore relatively even, and the normalized histogram covers the whole range [0,1]. The net effect of histogram equalization is to expand the image's dynamic range, improving the contrast of local regions without affecting the overall contrast.
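In discrete form the equalization transform above is a cumulative-histogram lookup table; a minimal sketch for 8-bit images:

```python
import numpy as np

def equalize_hist(gray, levels=256):
    """Map each gray level through the scaled cumulative histogram
    (the discrete form of the equalization transform)."""
    hist = np.bincount(gray.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / gray.size          # empirical P(level <= r)
    lut = np.round((levels - 1) * cdf).astype(np.uint8)
    return lut[gray]

# two adjacent gray levels get pushed toward opposite ends of the range
low_contrast = np.array([[100, 100], [101, 101]], dtype=np.uint8)
stretched = equalize_hist(low_contrast)
```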
Gaussian denoising
The nut image may acquire noise during both acquisition and transmission. Noise is invalid information: it obscures the original content of the image and interferes with the accurate extraction of nut edge information later on, so the image must be smoothed and denoised.
A Gaussian filter is a widely used linear smoothing filter. Gaussian filtering performs a weighted average over the whole image: the gray value of each pixel is obtained as a weighted average of its own value and those of its neighboring pixels, with weights that follow a two-dimensional normal distribution. This makes it well suited to suppressing Gaussian noise.
The basic form of a two-dimensional gaussian function is as follows:
G(x, y) = (1/(2πσ²)) · e^(−(x²+y²)/(2σ²))
where σ is the standard deviation and controls the spread of the Gaussian: the smaller σ, the more concentrated the function; the larger σ, the more diffuse. The coordinates x and y are usually integers. When an m × n filter template is generated from this function, the template weights are normalized so that they sum to 1; this guarantees that uniform gray regions of the image are left unchanged by the Gaussian filtering.
Here the image is Gaussian filtered with a 3 × 3 kernel and standard deviation σ = 1.
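The kernel construction described above can be sketched as follows: the two-dimensional Gaussian is sampled on an integer grid and the weights are normalized to sum to 1. This is an illustrative sketch; a production pipeline would more likely call a library routine such as OpenCV's GaussianBlur.

```python
import numpy as np

def gaussian_kernel(size=3, sigma=1.0):
    """Sample G(x, y) on a size x size integer grid centered at (0, 0),
    then normalize so the template weights sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()        # normalization leaves uniform regions unchanged

k = gaussian_kernel(3, 1.0)   # the 3 x 3, sigma = 1 kernel used in the text
```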
And 3-3, carrying out binarization processing on the filtered nut gray level image by using a maximum inter-class variance method to obtain a nut binarization image.
Binarization converts the grayscale image into a black-and-white image from which key information is extracted. Image binarization mainly uses either a fixed threshold or an adaptive threshold. Fixed thresholding sets pixels below a certain threshold to black and the remaining pixels to white. For an 8-bit grayscale image (gray values 0-255), 128 is commonly chosen as the threshold, and for a well-exposed image a fixed threshold works well. For an under- or over-exposed image, however, the binarized result may come out entirely black or entirely white, in which case all feature information disappears and recognition fails. Adaptive thresholding instead computes a threshold for each pixel's neighborhood and compares each pixel's value with the neighborhood average.
If the value of a pixel differs significantly from the threshold in its neighborhood, the value is treated as an outlier and isolated during thresholding; the adaptive threshold at each pixel position is determined from the pixel-value distribution of its neighborhood block. The thresholding method adopted in this project is the maximum inter-class variance (Otsu) method: based on the statistics of the image's gray-level distribution, it splits the image into the two classes with maximum between-class variance, separating the foreground target from the background.
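A brute-force sketch of the maximum inter-class variance (Otsu) method: every threshold is tried and the one maximizing the between-class variance is kept. The bimodal test image is hypothetical; a real pipeline would typically use a library implementation.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold t maximizing w0*w1*(mu0 - mu1)^2,
    the between-class variance of the two gray-level classes."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()       # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0       # class means
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Bimodal test image: half the pixels at 50, half at 200.
img = np.zeros((10, 10), dtype=np.uint8)
img[:, :5] = 50
img[:, 5:] = 200
t = otsu_threshold(img)       # any threshold separating the two modes
```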
And 4, performing linear detection on the edge of the nut in the nut coordinate frame by adopting depth Hough transform according to the image of the nut to be detected marked with the nut coordinate frame to obtain a plurality of straight lines on the hexagonal edge of the nut.
Applying the Hough transform to deep features combines the feature-learning ability of a CNN with the efficiency of the Hough transform, converting the semantic-line detection problem in the spatial domain into a single-point detection problem in the parameter domain. Contextual line features can then be extracted easily, greatly simplifying context-information extraction and yielding more accurate workpiece edges.
In one example, the method specifically comprises the following steps:
constructing a nut semantic line detection data set; the nut semantic line detection data set comprises a plurality of nut sample images of different scenes, and each nut sample image is marked with a semantic line;
shearing a nut coordinate frame from a nut image to be detected marked with the nut coordinate frame to obtain a nut subimage;
performing straight-line marking on the nut subimages according to the nut semantic line detection data set;
parameterizing each straight line; parameterization means that the angle of a straight line and the distance from the straight line to an origin are used as parameters of the straight line;
extracting the spatial characteristics of the nut subimages through a CNN characteristic extractor;
traversing all possible straight lines in the nut sub-images after the straight line marking, aggregating the spatial characteristics to corresponding points in the parameter space along the straight lines, and realizing the characteristic aggregation by using summation operation;
and mapping the corresponding points aggregated into the parameter space to nut subimages to obtain a plurality of straight lines on the hexagonal edge of the nut.
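The feature-aggregation step above can be illustrated with a classical, non-learned stand-in: summing a 2-D response map along every discretized line (θ, ρ) into a point of the parameter space. The CNN feature extractor of the actual method is replaced here by an arbitrary response map, and the discretization sizes are hypothetical:

```python
import numpy as np

def hough_aggregate(feat, n_theta=60, n_rho=60):
    """Aggregate a 2-D feature map along every discretized line (theta, rho)
    by summation -- a plain stand-in for the deep Hough feature aggregation."""
    h, w = feat.shape
    diag = np.hypot(h, w)
    acc = np.zeros((n_theta, n_rho))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    ys, xs = np.nonzero(feat)            # only non-zero responses contribute
    for ti, th in enumerate(thetas):
        rhos = xs * np.cos(th) + ys * np.sin(th)
        bins = np.round((rhos + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        np.add.at(acc, (ti, bins), feat[ys, xs])   # sum features along the line
    return acc, thetas

# A horizontal line of unit responses at y = 10 maps to a single strong
# point in parameter space (theta = pi/2, rho = 10).
feat = np.zeros((20, 20))
feat[10, :] = 1.0
acc, thetas = hough_aggregate(feat)
```

Mapping the peak of `acc` back to image space recovers the line, which mirrors the final step of the procedure above.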
And 5, positioning the center of the hexagon as the center of the nut by adopting a hexagon constraint method according to the straight lines on the edge of the hexagon of the nut.
Since a nut in a real environment is affected by various interference factors, such as uneven illumination, rust and fouling, the contour obtained by an edge algorithm is generally incomplete; the nut center therefore has to be located accurately from an incomplete edge contour.
A constraint relation, the angle constraint, holds between the sides of a hexagon: the included angle between any two sides is one of 0°, 60° and 120°. Every candidate edge has an intersection with every other edge; when an intersection satisfies the angle constraint and falls within the circumscribed circle, it is selected as a candidate vertex. A similar constraint, the distance constraint, holds between the vertices of a hexagon: the distance between any two vertices must take one of a fixed set of values.
When the camera height is fixed, the three distance values are also fixed, so the distance constraint lets us select the correct hexagon vertices from the candidates.
The correct vertices are selected from the candidate points by voting. Each candidate vertex votes on every other candidate: a point receives one vote whenever its distance to another candidate violates the distance constraint. The distances from a wrong candidate to the other vertices generally violate the constraint, so a wrong candidate collects votes from nearly all other points; correct candidates satisfy the constraint with one another and therefore collect few votes. The candidates with the fewest votes are kept as the correct vertices.
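A minimal sketch of the distance-constraint voting just described, assuming the three legal distances are known from the camera height; the tolerance value and the candidate points are hypothetical:

```python
import numpy as np

def vote_vertices(cands, legal_dists, tol=2.0):
    """A candidate receives one vote per other candidate whose distance to it
    matches none of the three legal hexagon distances; the six candidates
    with the fewest votes survive as hexagon vertices."""
    cands = np.asarray(cands, dtype=float)
    votes = np.zeros(len(cands), dtype=int)
    for i, p in enumerate(cands):
        for j, q in enumerate(cands):
            if i != j:
                d = float(np.linalg.norm(p - q))
                if not any(abs(d - ref) < tol for ref in legal_dists):
                    votes[i] += 1          # constraint violated -> one vote
    return cands[np.argsort(votes)[:6]]

# Regular hexagon of circumradius 10 plus two spurious intersections.
ang = np.deg2rad(np.arange(0, 360, 60))
hexagon = np.stack([10 * np.cos(ang), 10 * np.sin(ang)], axis=1)
cands = np.vstack([hexagon, [[35.0, 5.0], [40.0, 40.0]]])
legal = (10.0, 10.0 * np.sqrt(3), 20.0)   # side, short diagonal, long diagonal
verts = vote_vertices(cands, legal)
```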
From any three correct vertices, one estimate of the hexagon center can be computed. Averaging the center estimates from several vertex triples reduces the error and improves robustness.
In one example, the method specifically comprises the following steps:
calculating the intersection point of any two straight lines in the plurality of straight lines on the hexagonal edge of the nut as a candidate vertex;
selecting a correct vertex from the candidate vertices by a voting method according to the hexagonal distance constraint; the hexagonal distance constraint is that the distance between any two vertices of the hexagon satisfies three distance values, as shown in fig. 3;
fitting a circumscribed circle of the hexagon through any three correct vertexes, and calculating the center coordinates of the circumscribed circle;
and determining the average value of the coordinates of the plurality of circle centers as the central coordinate of the hexagon, and taking the central coordinate as the pixel coordinate of the nut center.
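The circumscribed-circle fitting in the steps above can be sketched as follows: the circumcenter of three vertices is the intersection of their perpendicular bisectors, obtained from a 2 × 2 linear system, and several estimates are averaged. The hexagon coordinates are hypothetical:

```python
import numpy as np

def circumcenter(p1, p2, p3):
    """Center of the circle through three points: solve
    2 (p2 - p1) . c = |p2|^2 - |p1|^2 and likewise for p3."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a = np.array([[x2 - x1, y2 - y1],
                  [x3 - x1, y3 - y1]], dtype=float)
    b = 0.5 * np.array([x2**2 - x1**2 + y2**2 - y1**2,
                        x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(a, b)

# Regular hexagon of circumradius 10 centered at (50, 50): any three
# vertices recover the center; averaging several estimates damps noise.
ang = np.deg2rad(np.arange(0, 360, 60))
verts = np.stack([50 + 10 * np.cos(ang), 50 + 10 * np.sin(ang)], axis=1)
centers = [circumcenter(verts[i], verts[(i + 2) % 6], verts[(i + 4) % 6])
           for i in range(3)]
center = np.mean(centers, axis=0)   # pixel coordinates of the nut center
```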
And 6, converting the pixel coordinate of the center of the nut into an actual physical coordinate based on the camera calibration parameters and the small hole imaging principle to obtain the actual coordinate of the center of the nut.
The homography matrix, obtained by Zhang Zhengyou's method of replacing a three-dimensional calibration block with a planar template, is decomposed to obtain the camera's intrinsic and extrinsic parameters. With the distance between the detection lens and the steel plate fixed at 200 mm, the actual physical coordinates of the nut can be calculated from the pinhole imaging principle.
In one example, the method specifically comprises the following steps:
calibrating the camera by adopting Zhang Zhengyou's planar calibration method to obtain the internal parameters and external parameters of the camera;
and determining the actual coordinate of the center of the nut based on the small hole imaging principle according to the internal parameter and the external parameter of the camera, the vertical distance between the camera and the steel plate where the nut is located and the pixel coordinate of the center of the nut.
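Under the pinhole model at a known, fixed depth Z (the 200 mm lens-to-plate distance from the text), the pixel-to-physical conversion reduces to back-projection through the intrinsic matrix. The intrinsic values below are hypothetical, and lens distortion is assumed already corrected:

```python
import numpy as np

def pixel_to_world(u, v, K, Z):
    """Back-project an (undistorted) pixel through the pinhole model at
    known depth Z:  X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    return (u - cx) * Z / fx, (v - cy) * Z / fy

# Hypothetical intrinsics; the 200 mm working distance is from the text.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
X, Y = pixel_to_world(400.0, 240.0, K, Z=200.0)   # nut center in mm
```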
The method greatly reduces the number of model parameters and improves both the positioning speed and accuracy for the workpiece as well as the deployability of the algorithm. The deep Hough algorithm extracts the context information of adjacent straight lines by extracting features at adjacent points of the parameter space, which greatly simplifies context-information extraction. Compared with the traditional Hough transform, it achieves higher detection accuracy while greatly reducing detection time. A Retinex image enhancement algorithm is also added, reducing the influence of complex external environments on workpiece recognition.
The invention introduces target detection technology into on-site detection settings such as transmission towers. A Hikvision industrial camera photographs the hexagon nut in real time; the captured workpiece image is fed into a YOLOv5 detection network to obtain the center coordinates of the target frame, and after cropping, the nut center point is found through binarization, Gaussian filtering, deep Hough transform and hexagon constraint, improving the accuracy of bolt positioning detection.
The method has the advantages that the Retinex image enhancement algorithm is added, and the influence of external complex environments such as fog, darkness and the like on the workpiece identification precision is reduced. By replacing a backbone network and increasing an attention mechanism and combining a deep learning algorithm with machine vision, the model deployment capability is improved. The depth Hough transformation is used for detecting the straight line of the hexagonal edge of the nut, the efficient context straight line information extraction capability is achieved, the higher detection precision can be obtained, and meanwhile the time consumed by detection can be greatly reduced. The method for finding the central point of the nut by utilizing the methods such as hexagonal constraint and the like improves the positioning detection speed and precision of the workpiece.
The invention also provides a nut positioning detection device based on machine vision, as shown in fig. 4, the device comprises: the device comprises an industrial camera, an edge detection device, a controller and a nut locking actuating mechanism.
The industrial camera is connected with the edge detection device; the edge detection device is used for detecting the nut image to be detected acquired by the industrial camera by utilizing the nut positioning detection method based on machine vision, and determining the actual coordinate of the center of the nut. The controller is respectively connected with the edge detection device and the nut locking actuating mechanism; the controller is used for controlling the nut locking actuating mechanism to execute the nut fastening operation according to the actual coordinate of the nut center determined by the edge detection device.
In fig. 4, the image collected by the camera is transmitted to the edge intelligent processing unit, which includes a hardware layer and a software layer, where the hardware layer is an edge computing device and the software layer implements the nut positioning process in the embedded operating system.
A JETSON TX2 development board is used in this patent. The Jetson series is an embedded platform launched by Nvidia for the unmanned-intelligence field; it can process complex data on edge devices and run artificial-intelligence workloads. The JETSON TX2 is a modular AI supercomputer built on the NVIDIA Pascal™ architecture. Powerful, compact and energy efficient, it is very suitable for intelligent edge devices such as robots, drones, smart cameras and portable medical equipment. It supports all the functions of the Jetson TX1 module while allowing larger and more complex deep neural networks. The trained bolt detection model is deployed to the TX2 edge detection device, and the trained network recognizes the nut images acquired by the industrial camera.
As shown in fig. 5, when acquiring nut images, the camera and the sleeve are mounted on the sleeve transverse slide rail, two ends of the sleeve transverse slide rail are slidably arranged on the sleeve longitudinal guide rail, and the bolt and the nut are fixed on the angle steel.
According to the invention, a Hikvision industrial camera photographs the hexagon nut in real time; the workpiece image acquired by the camera is input into a YOLOv5 detection network to obtain the center coordinates of the target frame, the target frame is cropped, and the accurate nut center coordinates are found by binarization, Gaussian filtering, Canny edge extraction, deep Hough transform and hexagon constraint. A TX2 edge computing device performs the computation, greatly improving the positioning speed and accuracy for the workpiece and better handling the influence of image distortion on detection accuracy.
The invention also provides a nut positioning detection system based on machine vision, which comprises:
the model building module is used for building a nut detection deep learning model based on lightweight YOLOv 5;
the image enhancement module is used for carrying out image enhancement on the acquired nut image to be detected by adopting a Retinex algorithm; the nut in the nut image to be detected is a hexagon nut;
the nut marking module is used for inputting the image of the nut to be detected after the image enhancement into the nut detection deep learning model and outputting the image of the nut to be detected marked with the nut coordinate frame;
the straight line detection module is used for carrying out straight line detection on the edge of the nut in the nut coordinate frame by adopting depth Hough transform according to the nut image to be detected marked with the nut coordinate frame to obtain a plurality of straight lines on the hexagonal edge of the nut;
the center positioning module is used for positioning the center of the hexagon as the center of the nut by adopting a hexagon constraint method according to a plurality of straight lines on the edge of the hexagon of the nut;
and the central coordinate determination module is used for converting the pixel coordinate of the nut center into an actual physical coordinate based on the camera calibration parameters and the small hole imaging principle to obtain the actual coordinate of the nut center.
The straight line detection module specifically includes:
the data set construction submodule is used for constructing a nut semantic line detection data set; the nut semantic line detection data set comprises a plurality of nut sample images of different scenes, and each nut sample image is marked with a semantic line;
the shearing submodule is used for shearing the nut coordinate frame from the nut image to be detected marked with the nut coordinate frame to obtain a nut subimage;
the straight line marking submodule is used for carrying out straight line marking on the nut subimage according to the nut semantic line detection data set;
a parameterization submodule for parameterizing each straight line; parameterization means that the angle of a straight line and the distance from the straight line to an origin are used as parameters of the straight line;
the extraction submodule is used for extracting the spatial characteristics of the nut subimage through a CNN characteristic extractor;
the aggregation submodule is used for traversing all possible straight lines in the nut sub-image subjected to the straight line marking, aggregating the spatial characteristics to corresponding points in the parameter space along the straight lines, and realizing the characteristic aggregation by using summation operation;
and the straight line obtaining submodule is used for mapping the corresponding points aggregated into the parameter space to the nut subimages to obtain a plurality of straight lines on the hexagonal edge of the nut.
The center positioning module specifically comprises:
the candidate vertex calculation submodule is used for calculating the intersection point of any two straight lines in the straight lines on the hexagonal edge of the nut to serve as a candidate vertex;
the correct vertex selecting submodule is used for selecting a correct vertex from the candidate vertices by adopting a voting method according to the hexagonal distance constraint; the hexagonal distance constraint is that the distance between any two vertexes of the hexagon satisfies three distance values;
the fitting submodule is used for fitting a circumscribed circle of the hexagon according to any three correct vertexes and calculating the center coordinates of the circumscribed circle;
and the pixel coordinate determination submodule is used for determining the average value of the coordinates of the plurality of circle centers as the central coordinate of the hexagon and taking the central coordinate as the pixel coordinate of the nut center.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (10)

1. A nut positioning detection method based on machine vision is characterized by comprising the following steps:
constructing a nut detection deep learning model based on lightweight YOLOv 5;
adopting Retinex algorithm to carry out image enhancement on the collected nut image to be detected; the nut in the nut image to be detected is a hexagon nut;
inputting the image of the nut to be detected after image enhancement into the nut detection deep learning model, and outputting the image of the nut to be detected marked with a nut coordinate frame;
according to the nut image to be detected marked with the nut coordinate frame, performing linear detection on the edge of the nut in the nut coordinate frame by adopting depth Hough transform to obtain a plurality of straight lines on the hexagonal edge of the nut;
positioning the center of a hexagon by adopting a hexagon constraint method according to a plurality of straight lines on the edge of the hexagon of the nut to be used as the center of the nut;
based on camera calibration parameters and a small hole imaging principle, pixel coordinates of the center of the nut are converted into actual physical coordinates, and the actual coordinates of the center of the nut are obtained.
2. The machine vision-based nut positioning detection method according to claim 1, wherein the building of the nut detection deep learning model based on the light weight YOLOv5 specifically comprises:
replacing the backbone network of the YOLOv5 model with MobileNet-V2;
adding a lightweight attention module to MobileNet-V2 to obtain a lightweight YOLOv5 model;
shooting a plurality of hexagon nut images by using an industrial camera, and labeling a hexagon nut in each hexagon nut image to form a data sample set;
training the light-weight YOLOv5 model by using the data sample set, and determining the light-weight YOLOv5 model meeting the training evaluation index as a nut detection deep learning model; the training evaluation index comprises precision and recall rate.
3. The nut positioning detection method based on machine vision according to claim 1, characterized in that according to the nut image to be detected marked with the nut coordinate frame, a depth hough transform is adopted to perform straight line detection on the nut edge in the nut coordinate frame, so as to obtain a plurality of straight lines on the nut hexagon edge, and the method further comprises the following steps:
graying the nut image to be detected marked with the nut coordinate frame by adopting a weighted average method to obtain a nut grayscale image;
sequentially carrying out histogram equalization and Gaussian filtering on the nut gray level image to obtain a filtered nut gray level image;
and (4) carrying out binarization processing on the filtered nut gray level image by using a maximum inter-class variance method to obtain a nut binarization image.
4. The nut positioning detection method based on machine vision according to claim 1, characterized in that according to the nut image to be detected marked with the nut coordinate frame, a depth hough transform is adopted to perform straight line detection on the nut edge in the nut coordinate frame, so as to obtain a plurality of straight lines on the nut hexagonal edge, specifically comprising:
constructing a nut semantic line detection data set; the nut semantic line detection data set comprises a plurality of nut sample images in different scenes, and each nut sample image is marked with a semantic line;
shearing a nut coordinate frame from a nut image to be detected marked with the nut coordinate frame to obtain a nut subimage;
performing linear marking on the nut subimages according to a nut semantic line detection data set;
parameterizing each straight line; the parameterization is that the angle of a straight line and the distance from the straight line to an origin are used as parameters of the straight line;
extracting the spatial characteristics of the nut subimages through a CNN characteristic extractor;
traversing all possible straight lines in the nut sub-images after the straight line marking, aggregating the spatial characteristics to corresponding points in the parameter space along the straight lines, and realizing the characteristic aggregation by using summation operation;
and mapping the corresponding points aggregated into the parameter space to nut subimages to obtain a plurality of straight lines on the hexagonal edge of the nut.
5. The nut positioning detection method based on machine vision according to claim 1, characterized in that the positioning of the center of the hexagon by a hexagon constraint method according to a plurality of straight lines on the edge of the nut hexagon as the nut center specifically comprises:
calculating the intersection point of any two straight lines in the plurality of straight lines on the hexagonal edge of the nut as a candidate vertex;
selecting a correct vertex from the candidate vertices by adopting a voting method according to hexagonal distance constraint; the hexagonal distance constraint is that the distance between any two vertexes of the hexagon satisfies three distance values;
fitting a circumscribed circle of the hexagon according to any three correct vertexes, and calculating the center coordinates of the circumscribed circle;
and determining the average value of the coordinates of the plurality of circle centers as the central coordinate of the hexagon, and taking the central coordinate as the pixel coordinate of the nut center.
6. The nut positioning detection method based on machine vision according to claim 5, characterized in that the pixel coordinates of the nut center are converted into actual physical coordinates based on camera calibration parameters and a pinhole imaging principle, and the actual coordinates of the nut center are obtained, specifically comprising:
calibrating the camera by adopting Zhang Zhengyou's planar calibration method to obtain internal parameters and external parameters of the camera;
and determining the actual coordinate of the center of the nut based on the small hole imaging principle according to the internal parameter and the external parameter of the camera, the vertical distance between the camera and the steel plate where the nut is located and the pixel coordinate of the center of the nut.
7. A machine vision based nut positioning detection apparatus, the apparatus comprising: the device comprises an industrial camera, an edge detection device, a controller and a nut locking execution mechanism;
the industrial camera is connected with the edge detection device; the edge detection device is used for detecting an image of the nut to be detected, which is acquired by an industrial camera, by using the nut positioning detection method based on machine vision as claimed in any one of claims 1-6, and determining the actual coordinates of the center of the nut;
the controller is respectively connected with the edge detection device and the nut locking actuating mechanism; and the controller is used for controlling the nut locking actuating mechanism to execute the nut fastening operation according to the actual coordinate of the nut center determined by the edge detection device.
8. A machine vision based nut positioning inspection system, the system comprising:
the model building module is used for building a nut detection deep learning model based on lightweight YOLOv 5;
the image enhancement module is used for carrying out image enhancement on the acquired nut image to be detected by adopting a Retinex algorithm; the nut in the nut image to be detected is a hexagon nut;
the nut marking module is used for inputting the image of the nut to be detected after the image enhancement into the nut detection deep learning model and outputting the image of the nut to be detected marked with the nut coordinate frame;
the straight line detection module is used for carrying out straight line detection on the edge of the nut in the nut coordinate frame by adopting depth Hough transform according to the nut image to be detected marked with the nut coordinate frame to obtain a plurality of straight lines on the hexagonal edge of the nut;
the center positioning module is used for positioning the center of the hexagon as the center of the nut by adopting a hexagon constraint method according to a plurality of straight lines on the edge of the hexagon of the nut;
and the central coordinate determination module is used for converting the pixel coordinate of the nut center into an actual physical coordinate based on the camera calibration parameters and the small hole imaging principle to obtain the actual coordinate of the nut center.
9. The machine vision-based nut positioning detection system according to claim 8, wherein the straight line detection module specifically comprises:
the data set construction submodule is used for constructing a nut semantic line detection data set; the nut semantic line detection data set comprises a plurality of nut sample images in different scenes, and each nut sample image is marked with a semantic line;
the shearing submodule is used for shearing the nut coordinate frame from the nut image to be detected marked with the nut coordinate frame to obtain a nut subimage;
the straight line marking submodule is used for carrying out straight line marking on the nut subimage according to the nut semantic line detection data set;
a parameterization submodule for parameterizing each straight line; the parameterization is that the angle of a straight line and the distance from the straight line to an origin are used as parameters of the straight line;
the extraction submodule is used for extracting the spatial characteristics of the nut subimage through a CNN characteristic extractor;
the aggregation submodule is used for traversing all possible straight lines in the nut sub-image subjected to the straight line marking, aggregating the spatial characteristics to corresponding points in the parameter space along the straight lines, and realizing the characteristic aggregation by using summation operation;
and the straight line obtaining submodule is used for mapping the corresponding points aggregated into the parameter space to the nut subimages to obtain a plurality of straight lines on the hexagonal edge of the nut.
10. The machine vision based nut positioning detection system of claim 8, wherein said centering module specifically comprises:
the candidate vertex calculation submodule is used for calculating the intersection point of any two straight lines in the straight lines on the hexagonal edge of the nut to serve as a candidate vertex;
the correct vertex selecting submodule is used for selecting a correct vertex from the candidate vertices by adopting a voting method according to the hexagonal distance constraint; the hexagonal distance constraint is that the distance between any two vertexes of the hexagon satisfies three distance values;
the fitting submodule is used for fitting a circumscribed circle of the hexagon according to any three correct vertexes and calculating the center coordinates of the circumscribed circle;
and the pixel coordinate determination submodule is used for determining the average value of the coordinates of the plurality of circle centers as the central coordinate of the hexagon and taking the central coordinate as the pixel coordinate of the nut center.
CN202210048417.XA 2022-01-17 2022-01-17 Nut positioning detection method, device and system based on machine vision Pending CN114387262A (en)

Publications (1)

Publication Number Publication Date
CN114387262A true CN114387262A (en) 2022-04-22

Family

ID=81202430

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210048417.XA Pending CN114387262A (en) 2022-01-17 2022-01-17 Nut positioning detection method, device and system based on machine vision

Country Status (1)

Country Link
CN (1) CN114387262A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115205286A (en) * 2022-09-13 2022-10-18 国网天津市电力公司建设分公司 Mechanical arm bolt identification and positioning method for tower-climbing robot, storage medium and terminal
CN115205286B (en) * 2022-09-13 2023-01-24 国网天津市电力公司建设分公司 Method for identifying and positioning bolts of mechanical arm of tower-climbing robot, storage medium and terminal
JP7457784B1 (en) 2022-12-23 2024-03-28 楽天グループ株式会社 Information processing device, method and program

Similar Documents

Publication Publication Date Title
CN110314854B (en) Workpiece detecting and sorting device and method based on visual robot
CN111223088B (en) Casting surface defect identification method based on deep convolutional neural network
CN112651968B (en) Wood board deformation and pit detection method based on depth information
CN104112269B (en) A kind of solar battery laser groove parameter detection method and system based on machine vision
CN116205919B (en) Hardware part production quality detection method and system based on artificial intelligence
CN103424409B (en) Vision detecting system based on DSP
CN114387262A (en) Nut positioning detection method, device and system based on machine vision
CN104680519A (en) Seven-piece puzzle identification method based on contours and colors
CN110910350B (en) Nut loosening detection method for wind power tower cylinder
CN106204528A (en) A kind of size detecting method of part geometry quality
CN113012098B (en) Iron tower angle steel punching defect detection method based on BP neural network
CN111523540A (en) Metal surface defect detection method based on deep learning
CN110473184A (en) A kind of pcb board defect inspection method
CN110866915A (en) Circular inkstone quality detection method based on metric learning
CN111127384A (en) Strong reflection workpiece vision measurement method based on polarization imaging
CN111667473A (en) Insulator hydrophobicity grade judging method based on improved Canny algorithm
CN114155226A (en) Micro defect edge calculation method
CN108898080B (en) Ridge line neighborhood evaluation model-based crack connection method
CN116051808A (en) YOLOv 5-based lightweight part identification and positioning method
CN114529536A (en) Solid wood quality detection method
Hu et al. Research on bamboo defect segmentation and classification based on improved U-net network
CN108335296B (en) Polar plate identification device and method
CN117078608B (en) Double-mask guide-based high-reflection leather surface defect detection method
Ma et al. Solder joints detection method based on surface recovery
CN114897827B (en) Tobacco leaf packing box state detection method based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination