CN112767304B - Vision-based sunflower module position and direction detection method - Google Patents
Vision-based sunflower module position and direction detection method
- Publication number: CN112767304B
- Application number: CN202011397186.0A
- Authority: CN (China)
- Prior art keywords: image, module, sunflower, vision, detection method
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06T7/0004—Industrial image inspection
- G06T5/40—Image enhancement or restoration using histogram techniques
- G06T5/70—Denoising; Smoothing
- G06T7/12—Edge-based segmentation
- G06T7/136—Segmentation; Edge detection involving thresholding
- G06T7/181—Segmentation; Edge detection involving edge growing; involving edge linking
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
- G06T2207/10004—Still image; Photographic image
- G06T2207/20032—Median filtering
- G06T2207/20036—Morphological image processing
- G06T2207/20061—Hough transform
- G06T2207/20076—Probabilistic image processing
- G06T2207/30108—Industrial image inspection
Abstract
The invention belongs to the technical field of detection methods and particularly relates to a vision-based method for detecting the position and direction of a sunflower module. The vision-based sunflower module position and direction detection method comprises the following steps: S1, identifying the position of the whole-layer module; S2, identifying the position of a single module, carried out after the positions of the 12 holes have been identified in step S1; and S3, identifying the direction, realizing the search for the sunflower module direction. The invention provides a vision-based sunflower module position and direction detection method that can detect the position and direction of a sunflower module through image processing, ensure that the sunflower module is aligned with the sensor when placed, and facilitate the subsequent glue-filling process.
Description
Technical Field
The invention belongs to the technical field of detection methods, and particularly relates to a sunflower module position and direction detection method based on vision.
Background
With rising labor costs in China, the demand from enterprises and factories for replacing manual labor with robots is increasingly urgent. China is a large manufacturing country whose industrial manufacturing cannot do without the assistance of machines: the higher the degree of automation, the more pronounced China's advantages in the manufacturing field become.
The vision-guided robot applies machine vision technology to the robot to realize its intelligentization. Machine vision acquires image signals of a target through an image sensor and other devices, converts the images into digital signals in an image processing system, and analyzes these signals to obtain characteristic information of the target, such as pixel distribution, color, and brightness.
In the production process of street lamps, the grabbing position of the mechanical arm during production-line feeding is determined by the position of the module radiator. When the modules are stacked in batches, their current positions cannot be judged directly, so machine vision technology is used to realize the positioning detection of the modules.
Disclosure of Invention
The invention aims to provide a vision-based sunflower module position and direction detection method that can detect the position and direction of a sunflower module through image processing, ensure that the sunflower module is aligned with the sensor when placed, and facilitate the subsequent glue-filling process.
In order to achieve the purpose, the invention adopts the following technical scheme:
a sunflower module position and direction detection method based on vision comprises the following steps:
s1, identifying the position of the whole-layer module, comprising the following steps:
s11, photographing, wherein a camera shoots the whole layer of sunflower module;
s12, preprocessing, namely performing histogram equalization and median filtering noise removal processing on the picture obtained in the step S11;
s13, threshold segmentation, namely forming a binary image on the picture obtained in the step S12 through threshold segmentation;
s14, erosion, namely eroding the image obtained in the step S13 to form a closed hole;
s15, judging and returning data, and finding a target area for the image obtained in the step S14 by judging the area and the circularity of the hole outline;
s2, identifying the position of a single module, and after identifying the position of 12 holes in the step S1, performing the following steps:
s21, taking a picture, wherein a camera takes a picture of a single sunflower module;
s22, preprocessing, namely performing median filtering on the picture obtained in the step S21 to remove noise;
s23, detecting, namely performing Hough circle detection on the sunflower module in the image obtained in the step S22;
s24, finding the smallest circle, traversing all the circles of the image obtained in step S23 to find the smallest circle, and returning its center position and radius;
s3, direction identification, realizing the search for the sunflower module direction, comprising the following steps:
s31, highlighting the edge of the image, and representing the edge contour of the image by using the difference between the binary image and the corrosion map of the image obtained in the step S21;
s32, mask processing for revealing the maximum notch: the minimum circle center obtained in step S24 is set as the center of the mask, which is multiplied by the image to be processed to obtain an ROI image;
s33, edge detection, namely identifying the image obtained in the step S32 by adopting a Canny edge detection algorithm to detect an actual edge;
s34, expansion processing, namely expanding the highlight part in the image obtained in the step S33;
s35, searching a maximum contour, traversing all contours in the image obtained in the step S34 and obtaining the maximum contour;
s36, detecting convex hulls and convex defects, namely detecting the convex hulls and the convex defects of the image with the maximum outline obtained in the step S35 to obtain the characteristic quantity of a convex defect area;
and S37, determining the maximum opening, calculating the characteristic quantity obtained in the step S36, comparing the characteristic quantity, and finding the opening direction of the sunflower module.
The Canny edge detection algorithm is a multi-stage edge detection algorithm developed by John F. Canny in 1986. The general purpose of edge detection is to significantly reduce the data size of an image while preserving its original structural attributes. Many edge detection algorithms exist; the Canny algorithm remains a standard algorithm for edge detection and is still widely used in research.
On the basis of the technical scheme, the invention can also adopt the following further technical scheme:
the step S11 further includes: the camera moves to a position 1m above the whole layer of sunflower module along with the mechanical arm, and the camera is triggered to take a picture;
the size of the convolution kernel used for erosion in step S14 is 120 × 120.
The step S12 further includes:
s121, histogram equalization, namely performing histogram equalization processing on the photo obtained in the step S11 to enhance the image contrast;
and S122, median filtering, namely performing median filtering processing on the image obtained in the step S121 to eliminate image noise.
The step S13 further includes: calculating the total number of pixels and the probability of occurrence of each gray value of the picture obtained in step S12, and binarizing the picture, wherein pixel values greater than the threshold are set to 255 and pixel values less than the threshold are set to 0.
The step S15 further includes:
s151, detecting all contours in the image obtained in the step S14, storing the contours in a container, calculating the area and the perimeter of the contours, calculating the circularity according to the area and the perimeter, and if the area is larger than 12500 and smaller than 92500 and the circularity is larger than 0.65, judging that the contours are target areas;
s152, traversing all the target areas meeting the conditions obtained in step S151, finding the minimum circle surrounding the outline, and returning the center position and radius of the minimum circle.
The step S21 further includes: the camera is driven by the mechanical arm to move sequentially to the center of each circle and take a picture, the captured image being a grayscale image;
in step S22, a 3 × 3 convolution kernel is used for median filtering.
The convolution kernel used for dilation in step S34 is 19 × 19.
The step S35 further includes: finding all the contours in the image obtained in step S34, sequentially storing them in a container, and traversing all the contours to find the maximum value.
The feature quantities in step S36 include a start point, an end point, a farthest point, and the distance from the farthest point to the convex hull.
The step S37 further includes: calculating the distance between the starting point and the ending point of each convex defect, storing the distances in a container, and comparing the calculated distances to find the area where the maximum distance is located, this area being the opening direction of the sunflower module.
The invention has the following technical characteristics:
The invention automatically identifies the position and direction of the sunflower module, making it convenient to grab the module and rotate it to a specified direction, so that the sunflower module is aligned with the sensor when placed, which facilitates the subsequent glue-filling process. The method has high precision, a rigorous procedure, and an extremely low error rate.
Drawings
Fig. 1 is an image obtained in step S11 of the vision-based sunflower module position and direction detection method according to the present invention.
Fig. 2 is a gray level histogram of step S121 of the vision-based sunflower module position and orientation detection method according to the present invention.
Fig. 3 is an image obtained in step S121 of the vision-based sunflower module position and orientation detection method of the present invention.
Fig. 4 is an image obtained in step S122 of the vision-based sunflower module position and orientation detection method according to the present invention.
Fig. 5 is an image obtained in step S13 of the vision-based sunflower module position and orientation detection method according to the present invention.
Fig. 6 is an image obtained in step S14 of the vision-based sunflower module position and orientation detection method according to the present invention.
Fig. 7 is an image obtained in step S21 of the vision-based sunflower module position and orientation detection method according to the present invention.
Fig. 8 is an image obtained in step S22 of the vision-based sunflower module position and orientation detection method according to the present invention.
Fig. 9 is an image obtained in step S23 of the vision-based sunflower module position and orientation detection method according to the present invention.
Fig. 10 is an image obtained in step S31 of the vision-based sunflower module position and orientation detection method according to the present invention.
Fig. 11 is a mask image obtained in step S32 of the vision-based sunflower module position and orientation detection method according to the present invention.
Fig. 12 is an ROI image obtained in step S32 of the vision-based sunflower module position and orientation detection method of the present invention.
Fig. 13 is an image obtained in step S33 of the vision-based sunflower module position and orientation detection method of the present invention.
Fig. 14 is an image obtained in step S34 of the vision-based sunflower module position and orientation detection method according to the present invention.
Fig. 15 is an image obtained in step S35 of the vision-based sunflower module position and orientation detection method according to the present invention.
Fig. 16 is a maximum outline convex hull diagram obtained in step S36 of the vision-based sunflower module position and direction detection method according to the present invention.
Fig. 17 is a maximum contour convex defect map obtained in step S36 of the vision-based sunflower module position and orientation detection method according to the present invention.
FIG. 18 is a flow chart of a vision-based sunflower module position and orientation detection method of the present invention.
Fig. 19 is a schematic structural diagram of a sunflower module of the vision-based sunflower module position and direction detection method of the invention.
Detailed Description
The following specific examples further illustrate the methods and techniques provided by the present invention and should not be construed as limiting the invention.
First embodiment, as shown in fig. 18 and 19, a vision-based sunflower module position and orientation detection method includes the following steps:
s1, identifying the position of the whole-layer module, comprising the following steps:
S11, photographing: the camera moves with the mechanical arm to a position 1 m above the whole-layer module and is triggered to take a picture. The camera model selection follows from the single-module position and direction detection requirements (see step S21); the shooting effect on the workbench is shown in fig. 1.
S12, preprocessing, namely performing histogram equalization and median filtering noise removal processing on the picture obtained in the step S11, wherein the preprocessing comprises the following steps:
and S121, histogram equalization, namely performing histogram equalization processing on the picture obtained in the step S11 to enhance the image contrast.
Histogram equalization converts an input image, through a certain gray-level mapping, into an output image with approximately the same number of pixels at each gray level. As shown in fig. 1, the image is dark overall and its details are not prominent, so histogram equalization is applied first to increase the image contrast and make the details easier to see. The gray histogram is a two-dimensional plot, as shown in fig. 2: the abscissa is the gray level of the pixels in the image, and the ordinate is the frequency or count of the pixels having each gray level.
Specifically, assume that the gray scale range of the image is $[0,1]$ and continuous; the normalized histogram of the image is then a probability density function
$$p(x), \quad 0 \le x \le 1.$$
From the properties of a probability density function it follows that $\int_0^1 p(x)\,dx = 1$.
Let the probability density function of the image before conversion be $p_r(r)$ and that of the converted image be $p_s(s)$, with the transfer function (gray-level mapping) $s = f(r)$, where $r$ and $s$ are the gray values of the input and output images at any given point. Probability theory then gives
$$p_s(s) = p_r(r)\left|\frac{dr}{ds}\right|.$$
If the converted probability density function is to satisfy $p_s(s) = 1$, $0 \le s \le 1$ (i.e., the histogram is uniform), it must hold that
$$\frac{ds}{dr} = p_r(r).$$
Integrating both sides of the equation over $r$ gives
$$s = f(r) = \int_0^r p_r(w)\,dw.$$
For gray values in $[0, 255]$, the result must additionally be multiplied by the maximum gray value $D_{\max}$; the gray equalization conversion formula is then
$$D_B = \frac{D_{\max}}{N}\sum_{i=0}^{D_A} n_i,$$
where $D_B$ is the converted gray value, $D_A$ is the gray value before conversion, $N$ is the total number of pixels, and $n_i$ is the number of pixels with gray level $i$.
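For illustration, the equalization step can be sketched in Python with OpenCV. This is a minimal reading of the formula above, not code from the patent; the file name is an assumption.

```python
import cv2
import numpy as np

def equalize_gray(img: np.ndarray) -> np.ndarray:
    """Map each gray value D_A to D_B = D_max/N * sum_{i<=D_A} n_i."""
    hist = np.bincount(img.ravel(), minlength=256)  # n_i for i = 0..255
    cdf = hist.cumsum()                             # cumulative histogram
    lut = np.round(255.0 * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]

img = cv2.imread("whole_layer.png", cv2.IMREAD_GRAYSCALE)  # assumed file name
eq = equalize_gray(img)  # equivalent built-in: cv2.equalizeHist(img)
```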
S122, median filtering: median filtering is applied to the image obtained in step S121 to eliminate image noise.
After histogram equalization the image noise is clearly visible, so median filtering is used to eliminate it and facilitate subsequent processing. Median filtering is a statistical ordering filter: for a point in the original image, it takes the median of the statistically sorted values of all pixels in the neighborhood centered at that point as the response at that point.
In this embodiment, a 3 × 3 median filter is swept over each pixel of the image, and the value ranked 5th among the nine sorted neighborhood values replaces the original pixel value. This method rejects salt-and-pepper noise better than a linear filter, because a linear filter folds the noise into its weighted average, whereas median filtering simply discards it.
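Continuing the sketch above, the 3 × 3 median filter of this step is a one-liner in OpenCV (eq is the equalized image from the previous sketch):

```python
import cv2

# Replace each pixel by the median (5th of 9 sorted values) of its 3x3 neighborhood
denoised = cv2.medianBlur(eq, 3)
```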
S13, threshold segmentation: the picture obtained in step S12 is converted into a binary image by threshold segmentation using the maximum between-class variance (Otsu) method.
It can be seen that the gray values of the target region differ greatly from those of the surrounding black ring, so the target region can be extracted by thresholding. However, because the illumination is uneven (for example, under strong light on the right side, the gray value of a black area changes noticeably compared with other areas of the same type), a single uniform threshold cannot segment all captured images.
Specifically, as shown in fig. 5, the total number of pixels and the probability of occurrence of each gray value are calculated. Traverse the image and count the number of pixels with gray level $i$, denoted $n_i$, with $i$ ranging from 0 to 255. The total number of pixels is $N = \sum_{i=0}^{255} n_i$, and the probability of each gray value is $p_i = n_i / N$. A threshold $T$ divides the pixels of the image into two classes, $A_1$ and $A_2$. $A_1$ consists of pixels with gray values $0$ to $T-1$; its probability is $P_1 = \sum_{i=0}^{T-1} p_i$ and its mean gray value is $\mu_1 = \frac{1}{P_1}\sum_{i=0}^{T-1} i\,p_i$. $A_2$ consists of pixels with gray values $T$ to $255$; its probability is $P_2 = \sum_{i=T}^{255} p_i = 1 - P_1$ and its mean gray value is $\mu_2 = \frac{1}{P_2}\sum_{i=T}^{255} i\,p_i$, where $\mu$ denotes the mean gray value of the entire image. The between-class variance of the two regions is
$$\sigma = P_1 P_2 (\mu_1 - \mu_2)^2.$$
The value of $T$ that maximizes $\sigma$ is found by calculation. The image is then binarized: pixel values greater than the threshold are set to 255, and those less than the threshold are set to 0.
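The threshold search can be written directly from these formulas; the following sketch is one possible implementation, with OpenCV's built-in Otsu variant noted for comparison (denoised comes from the previous sketch):

```python
import cv2
import numpy as np

def otsu_threshold(img: np.ndarray) -> int:
    """Exhaustively search for the T that maximizes sigma = P1*P2*(mu1 - mu2)^2."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()                             # p_i = n_i / N
    best_t, best_sigma = 0, 0.0
    for t in range(1, 256):
        p1, p2 = p[:t].sum(), p[t:].sum()             # P1, P2
        if p1 == 0.0 or p2 == 0.0:
            continue
        mu1 = (np.arange(t) * p[:t]).sum() / p1       # mean gray of A1
        mu2 = (np.arange(t, 256) * p[t:]).sum() / p2  # mean gray of A2
        sigma = p1 * p2 * (mu1 - mu2) ** 2            # between-class variance
        if sigma > best_sigma:
            best_sigma, best_t = sigma, t
    return best_t

T = otsu_threshold(denoised)
_, binary = cv2.threshold(denoised, T, 255, cv2.THRESH_BINARY)
# Built-in equivalent: cv2.threshold(denoised, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```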
S14, erosion: closed holes are formed on the image obtained in step S13 by erosion.
As shown in fig. 6, the binarized map is eroded to obtain closed regions of the circular targets. The convolution kernel used for erosion has a size of 120 × 120.
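The erosion step then shrinks the bright regions so that each circular target becomes a closed blob (binary comes from the previous sketch; the 120 × 120 kernel size is the one stated in the patent):

```python
import cv2
import numpy as np

kernel = np.ones((120, 120), np.uint8)  # 120 x 120 structuring element from step S14
closed = cv2.erode(binary, kernel)      # closed hole regions of the circular targets
```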
And S15, judging and returning data, and finding a target area of the image obtained in the step S14 by judging the area and the circularity of the hole outline.
And S151, detecting all contours in the image obtained in the step S14 and storing the contours in a container.
The contour shape can be judged using the isoperimetric theorem: among closed geometric shapes of equal perimeter, the circle encloses the largest area. Expressed as an inequality: $4\pi S \le L^2$, where $S$ is the area enclosed by the curve and $L$ is the perimeter of the closed curve. The circularity is defined as $R = 4\pi S / L^2$, so $R \le 1$, with equality only for a circle. The area $S$ of a contour can be determined from the number of pixels occupied by the connected component.
For a contour $i$, the area $S$ and perimeter $L$ are calculated, from which the circularity $R = 4\pi S / L^2$ is obtained. If $S$ satisfies $12500 < S < 92500$ and $R$ satisfies $R > 0.65$, the contour is considered a circle, i.e., a target area.
S152, traversing all the target areas meeting the conditions obtained in step S151, finding the minimum circle surrounding each outline, and returning the center position and radius of the minimum circle.
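Steps S151 and S152 amount to a contour scan with an area and circularity test; a sketch under the same thresholds (closed comes from the erosion sketch):

```python
import cv2
import math

contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
targets = []
for c in contours:
    S = cv2.contourArea(c)
    L = cv2.arcLength(c, closed=True)
    if L == 0:
        continue
    R = 4 * math.pi * S / (L * L)        # circularity, 1.0 for a perfect circle
    if 12500 < S < 92500 and R > 0.65:   # thresholds from step S151
        (x, y), r = cv2.minEnclosingCircle(c)
        targets.append((x, y, r))        # center position and radius (step S152)
```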
S2, identifying the position of a single module, and after identifying the position of 12 holes in the step S1, performing the following steps:
s21, photographing, wherein the camera shoots the single sunflower module.
After the positions of the 12 holes are identified, the camera is driven by the mechanical arm to sequentially move to the circle center position of each circle and take a picture.
Specifically, camera selection and resolution calculation: the detection precision is 0.1 mm and the diameter of the sunflower module is 160 mm, so the field of view is about 200 × 200 mm and the minimum resolution is 4 megapixels (2000 × 2000 pixels). Since practical applications generally use 3 to 5 times the calculated value to guarantee precision, the actual resolution must be at least 12 megapixels.
Therefore, a Daheng Imaging industrial camera, model MER-1220-32U3M-C, with a resolution of 12.2 megapixels was finally selected.
The captured image is a grayscale image; the effect of shooting at the working position 20 cm above the module is shown in fig. 7.
S22, preprocessing: median filtering is applied to the picture obtained in step S21 to remove noise. As shown in fig. 8, median filtering is performed using a 3 × 3 convolution kernel, as described in step S1.
And S23, detecting, namely, carrying out Hough circle detection on the sunflower module in the image obtained in the step S22.
Hough circle detection is performed on the sunflower module. After testing, the minimum distance between circle centers is set to 45, and the circle radius ranges from a minimum of 200 to a maximum of 250. The effect after detection is shown in fig. 9.
S24, finding the smallest circle: all circles in the image obtained in step S23 are traversed to find the smallest circle, and its center position and radius are returned.
It can be seen that after Hough circle detection all circles surround the target area, and the smallest one is the desired result. All found circles $R_1, R_2, \dots, R_n$ are therefore traversed to find the smallest circle $R_i$, and its center position $(x_i, y_i)$ and radius $r_i$ are returned.
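A sketch of the single-module localization with these parameters (param1 and param2 are assumed accumulator settings the patent does not specify; the file name is also an assumption):

```python
import cv2
import numpy as np

gray = cv2.imread("single_module.png", cv2.IMREAD_GRAYSCALE)  # assumed file name
blur = cv2.medianBlur(gray, 3)                                # step S22
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1,
                           minDist=45,                        # min center distance (step S23)
                           param1=100, param2=50,             # assumed, not from the patent
                           minRadius=200, maxRadius=250)
if circles is not None:
    xs, ys, rs = circles[0].T
    i = int(np.argmin(rs))                                    # smallest circle (step S24)
    cx, cy, r = float(xs[i]), float(ys[i]), float(rs[i])
```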
Thus, the position of a single sunflower module is found, and the following steps are performed to find the direction of the sunflower module.
S3, direction identification, carried out synchronously with step S2, realizes the search for the sunflower module direction and comprises the following steps:
s31, highlighting the image edge, and representing the edge contour of the image by using the difference between the binary image and the erosion map of the image obtained in the step S21.
To reduce unnecessary noise and obtain a clear outline of the image, the module edges are made prominent, and the difference between the binary image and its erosion map is used to represent the internal gradient of the image, i.e., the image's edge contour. As shown in fig. 10, the internal gradient is one of the morphological gradients: the difference image obtained by subtracting the eroded image from the original image is called the internal gradient of the image.
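The internal gradient is easy to reproduce: binarize, erode, subtract (blur comes from the previous sketch; the 3 × 3 kernel size here is an assumption, since the patent does not state it for this step):

```python
import cv2
import numpy as np

_, bin_img = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
k = np.ones((3, 3), np.uint8)                            # kernel size assumed
edges_in = cv2.subtract(bin_img, cv2.erode(bin_img, k))  # internal gradient: binary - eroded
```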
S32, masking for revealing the maximum notch: the minimum circle center position obtained in step S24 is set as the center of the mask, which is multiplied by the image to be processed to obtain the ROI image.
To reveal the largest notch and remove the excess sunflower material, the central area of the image is extracted in preparation for the subsequent search for the largest opening. The ROI region can be extracted by a mask operation. When the mask is created, as shown in fig. 11, its center is placed at the minimum circle center position $(x_i, y_i)$ found by Hough circle detection. The mask is multiplied by the image to be processed, as shown in fig. 12, to obtain the ROI image: pixel values inside the ROI remain unchanged, and pixel values outside it are set to 0.
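One way to realize the mask multiplication, reusing the smallest-circle center and radius (cx, cy, r) from the Hough sketch:

```python
import cv2
import numpy as np

mask = np.zeros_like(edges_in)
cv2.circle(mask, (int(cx), int(cy)), int(r), 255, thickness=-1)  # filled disk as the mask
roi = cv2.bitwise_and(edges_in, mask)  # inside the circle unchanged, outside set to 0
```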
S33, edge detection, as shown in fig. 13, the image obtained in step S32 is identified by using a Canny edge detection algorithm to detect an actual edge.
To identify the edges of the image, the Canny edge detection algorithm is used, detecting as many actual edges as possible while minimizing false alarms caused by noise.
Specifically, noise is first removed by convolving the image with a Gaussian smoothing filter.
The edges of an image can point in different directions, so the classic Canny algorithm uses four gradient operators to compute the gradients in the horizontal, vertical and two diagonal directions. In practice, however, four separate operators are generally not used; here the Sobel operator is used to compute the horizontal and vertical gradients. With $A$ the original image pixel matrix,
$$G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} \ast A, \qquad G_y = \begin{bmatrix} +1 & +2 & +1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix} \ast A.$$
The gradient magnitude is finally obtained as
$$G = \sqrt{G_x^2 + G_y^2},$$
and the gradient direction as
$$\theta = \arctan\left(\frac{G_y}{G_x}\right).$$
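The smoothing and gradient computation above correspond to the following sketch (roi comes from the mask sketch; the Gaussian kernel size and sigma are assumptions):

```python
import cv2
import numpy as np

smooth = cv2.GaussianBlur(roi, (5, 5), 1.4)        # Gaussian smoothing, parameters assumed
gx = cv2.Sobel(smooth, cv2.CV_64F, 1, 0, ksize=3)  # horizontal gradient G_x
gy = cv2.Sobel(smooth, cv2.CV_64F, 0, 1, ksize=3)  # vertical gradient G_y
mag = np.hypot(gx, gy)                             # G = sqrt(G_x^2 + G_y^2)
theta = np.arctan2(gy, gx)                         # gradient direction
```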
Non-maximum suppression is an edge-thinning method. The gradient edges obtained so far are usually not one pixel wide but several, so the gradient map is still "blurred". Non-maximum suppression keeps the local maximum gradient and suppresses all other gradient values, meaning that only the sharpest position of each gradient change is retained. The algorithm: 1) compare the gradient strength of the current point with that of the points along the positive and negative gradient directions; 2) if the gradient strength of the current point is the maximum among the points in the same direction, keep the value; otherwise suppress it, i.e., set it to 0. For example, if the gradient direction of the current point is 90°, pointing straight up, the point is compared with the pixels directly above and below it in the vertical direction.
Hysteresis thresholding. Hysteresis thresholding requires two thresholds, a high one and a low one: 1) if the magnitude at a pixel location exceeds the high threshold, the pixel is kept as an edge pixel; 2) if the magnitude at a pixel location is less than the low threshold, the pixel is excluded; 3) if the magnitude lies between the two thresholds, the pixel is kept only when it is connected to a pixel above the high threshold. Typically the ratio of high to low threshold is between 2:1 and 3:1. In this embodiment, a high threshold of 59 and a low threshold of 13 are preferred.
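In practice all of the above stages (smoothing, gradients, non-maximum suppression, hysteresis) are bundled into OpenCV's Canny call; with the embodiment's thresholds this is simply:

```python
import cv2

edges = cv2.Canny(roi, 13, 59)  # low threshold 13, high threshold 59 (step S33)
```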
S34, dilation processing: the highlighted part of the image obtained in step S33 is dilated.
Dilation is applied because many breakpoints appear on the processed edges. Dilation expands the highlighted parts of an image; as shown in fig. 14, the highlighted region becomes larger than in the original. The convolution kernel used for dilation is 19 × 19.
S35, searching the maximum contour, traversing all contours in the image obtained in the step S34, and obtaining the maximum contour.
At this point several contours exist in the image, and the maximum contour must be found for convex hull and convex defect detection. A contour generally corresponds to a series of points in the image, i.e., a curve. First, all contours $C_1, C_2, \dots, C_n$ are found and stored sequentially in a container; all contours are then traversed to find the maximum $C_i$, yielding the maximum contour shown in fig. 15.
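Steps S34 and S35 in sketch form (edges comes from the Canny sketch; the 19 × 19 kernel is the size stated in the patent):

```python
import cv2
import numpy as np

kernel = np.ones((19, 19), np.uint8)          # 19 x 19 dilation kernel from step S34
thick = cv2.dilate(edges, kernel)             # bridge breakpoints along the edges
contours, _ = cv2.findContours(thick, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)  # maximum contour (step S35)
```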
And S36, detecting convex hulls and convex defects, namely detecting the convex hulls and the convex defects of the image with the maximum outline obtained in the step S35 to obtain the characteristic quantity of the convex defect area.
Fig. 16 shows the maximum-contour convex hull and fig. 17 the maximum-contour convex defect map; it can be seen that the maximum contour contains many indentations and protrusions. To obtain the maximum opening, and from it the sunflower direction, convex hull and convex defect detection is performed on the image.
Convex hull and convex defect detection give a good understanding of the shape or contour of an object. For a given two-dimensional planar point set, the convex hull is the convex polygon formed by connecting the outermost points, i.e., a polygon that contains all the points in the set. A convex defect is a recessed portion of a segment of the contour, and each convex defect region has four characteristic quantities: the start point, the end point, the farthest point, and the distance from the farthest point to the convex hull.
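Steps S36 and S37 map onto OpenCV's convex hull and convexity-defect routines (largest comes from the previous sketch, cx and cy from the Hough sketch; reporting the opening as an angle from the module center is an assumed convention, since the patent only requires identifying the region):

```python
import cv2
import numpy as np

hull = cv2.convexHull(largest, returnPoints=False)  # hull as indices into the contour
defects = cv2.convexityDefects(largest, hull)       # rows: start, end, farthest, depth
best_gap, opening = -1.0, None
if defects is not None:
    for s, e, f, d in defects[:, 0]:
        p_start, p_end = largest[s][0], largest[e][0]
        gap = float(np.linalg.norm(p_start - p_end))   # start-to-end distance (step S37)
        if gap > best_gap:
            best_gap, opening = gap, (p_start + p_end) / 2.0
if opening is not None:
    # assumed convention: opening direction as the angle from (cx, cy) to the gap midpoint
    angle = float(np.degrees(np.arctan2(opening[1] - cy, opening[0] - cx)))
```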
And S37, determining the maximum opening, calculating the characteristic quantity obtained in the step S36, comparing the characteristic quantity, and finding the opening direction of the sunflower module.
The distance between the start point and the end point of each convex defect, $L_1, L_2, \dots, L_n$, is calculated and stored in a container; these distances are compared to find the maximum distance $L_i$, and the area where $L_i$ lies is the opening direction of the sunflower module.
The above description of the embodiments is only intended to facilitate understanding of the method of the invention and its core idea. It should be noted that those skilled in the art can make various improvements and modifications to the invention without departing from its principle, and such improvements and modifications also fall within the scope of the claims of the present invention.
Claims (10)
1. A vision-based sunflower module position and direction detection method is characterized by comprising the following steps:
s1, identifying the position of the whole-layer module, comprising the following steps:
s11, photographing, wherein a camera shoots the whole layer of sunflower module;
s12, preprocessing, namely performing histogram equalization and median filtering noise removal processing on the picture obtained in the step S11;
s13, threshold segmentation, namely forming a binary image on the picture obtained in the step S12 through threshold segmentation;
s14, erosion, namely eroding the image obtained in the step S13 to form a closed hole;
s15, judging and returning data, and finding a target area for the image obtained in the step S14 by judging the area and the circularity of the hole outline;
s2, identifying the position of a single module, and after identifying the position of 12 holes in the step S1, performing the following steps:
s21, taking a picture, wherein a camera shoots a single sunflower module;
s22, preprocessing, namely performing median filtering on the picture obtained in the step S21 to remove noise;
s23, detecting, namely performing Hough circle detection on the sunflower module in the image obtained in the step S22;
s24, finding the smallest circle, traversing all the circles of the image obtained in step S23 to find the smallest circle, and returning its center position and radius; and
s3, direction identification, realizing the search for the sunflower module direction, comprising the following steps:
s31, highlighting the edge of the image, and representing the edge contour of the image by using the difference between the binarized image and the erosion map for the image obtained in the step S21;
s32, mask processing for revealing the maximum notch: the minimum circle center obtained in step S24 is set as the center of the mask, which is multiplied by the image to be processed to obtain an ROI image;
s33, edge detection, namely identifying the image obtained in the step S32 by adopting a Canny edge detection algorithm to detect an actual edge;
s34, expansion processing, namely expanding the highlight part in the image obtained in the step S33;
s35, searching a maximum contour, traversing all contours in the image obtained in the step S34 and obtaining the maximum contour;
s36, detecting convex hulls and convex defects, namely detecting the convex hulls and the convex defects of the image with the maximum outline obtained in the step S35 to obtain the characteristic quantity of a convex defect area;
and S37, determining the maximum opening, calculating the characteristic quantity obtained in the step S36, comparing the characteristic quantity, and finding the opening direction of the sunflower module.
2. The vision-based sunflower module position and orientation detection method of claim 1, wherein said step S11 further comprises: the camera moves to a position 1m above the whole layer of sunflower module along with the mechanical arm, and the camera is triggered to take a picture;
the size of the convolution kernel used for erosion in step S14 is 120 × 120.
3. The vision-based sunflower module position and orientation detection method of claim 1, wherein said step S12 further comprises:
s121, histogram equalization, namely performing histogram equalization processing on the photo obtained in the step S11 to enhance the image contrast;
and S122, median filtering, namely performing median filtering processing on the image obtained in the step S121 to eliminate image noise.
4. The vision-based sunflower module position and orientation detection method of claim 1, wherein said step S13 further comprises: calculating the total number of pixels and the probability of occurrence of each gray value of the picture obtained in step S12, and binarizing the picture, wherein pixel values greater than the threshold are set to 255 and pixel values less than the threshold are set to 0.
5. The vision-based sunflower module position and orientation detection method of claim 1, wherein said step S15 further comprises:
s151, detecting all contours in the image obtained in the step S14, storing the contours in a container, calculating the area and the perimeter of the contours, calculating the circularity according to the area and the perimeter, and if the area is larger than 12500 and smaller than 92500 and the circularity is larger than 0.65, judging that the contours are target areas;
s152, traversing all the target areas meeting the conditions obtained in step S151, finding the minimum circle surrounding the outline, and returning the center position and radius of the minimum circle.
6. The vision-based sunflower module position and orientation detection method of claim 1, wherein step S21 further comprises: the camera is driven by the mechanical arm to move sequentially to the center of each circle and take a picture, the captured image being a grayscale image;
in step S22, a 3 × 3 convolution kernel is used for median filtering.
7. The vision-based sunflower module position and orientation detection method of claim 1, wherein the convolution kernel used for dilation in step S34 is 19 × 19.
8. The vision-based sunflower module position and orientation detection method of claim 1, wherein said step S35 further comprises: finding all contours in the image obtained in step S34, storing them sequentially in a container, and traversing all contours to find the maximum value.
9. The vision-based sunflower module position and orientation detection method of claim 1, wherein the feature quantities in step S36 include a start point, an end point, a farthest point, and the distance from the farthest point to the convex hull.
10. The vision-based sunflower module position and orientation detection method of claim 9, wherein step S37 further comprises: calculating the distance between the start point and the end point of each convex defect, storing the distances in a container, and comparing them to find the area where the maximum distance lies, this area being the opening direction of the sunflower module.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202011397186.0A | 2020-12-04 | 2020-12-04 | Vision-based sunflower module position and direction detection method
Publications (2)
Publication Number | Publication Date
---|---
CN112767304A | 2021-05-07
CN112767304B | 2023-02-28
Family
- ID=75693407
Patent Citations (12)
Publication number | Priority date | Publication date | Title
---|---|---|---
CN101615252A (en) * | 2008-06-25 | 2009-12-30 | 中国科学院自动化研究所 | A kind of method for extracting text information from adaptive images |
CN103208002A (en) * | 2013-04-10 | 2013-07-17 | 桂林电子科技大学 | Method and system used for recognizing and controlling gesture and based on hand profile feature |
CN104647893A (en) * | 2015-02-09 | 2015-05-27 | 西安科赛图像科技有限责任公司 | Overprinting error detection method based on cross line |
CN107766784A (en) * | 2016-08-20 | 2018-03-06 | 宋坤骏 | A kind of novel video people counting algorithm |
CN106409711A (en) * | 2016-09-12 | 2017-02-15 | 佛山市南海区广工大数控装备协同创新研究院 | Solar silicon wafer defect detecting system and method |
CN106855951A (en) * | 2016-12-16 | 2017-06-16 | 陕西科技大学 | A kind of grain kind quality detecting method based on computer vision |
CN106802113A (en) * | 2016-12-23 | 2017-06-06 | 西安交通大学 | Intelligent hit telling system and method based on many shell hole algorithm for pattern recognitions |
CN108573476A (en) * | 2018-03-14 | 2018-09-25 | 浙江大学山东工业技术研究院 | The heronsbill module surface screw hole site detection method of view-based access control model |
CN108596925A (en) * | 2018-03-14 | 2018-09-28 | 浙江大学山东工业技术研究院 | The heronsbill module surface screw hole site image processing method of view-based access control model |
CN110288619A (en) * | 2018-03-14 | 2019-09-27 | 浙江大学山东工业技术研究院 | The sun embossing die group surface screw hole site detection method of view-based access control model |
CN110503633A (en) * | 2019-07-29 | 2019-11-26 | 西安理工大学 | A kind of applique ceramic disk detection method of surface flaw based on image difference |
CN110866903A (en) * | 2019-11-08 | 2020-03-06 | 北京工业大学 | Ping-pong ball identification method based on Hough circle transformation technology |
Non-Patent Citations (2)
Title |
---|
Analysis and Design of a Wafer Defect Detection System Based on Machine Vision; Chen Zhishan; China Master's Theses Full-text Database (Information Science and Technology); 2019-09-15; pp. 11-22 *
Research on Gear Defect Detection Methods Based on Machine Vision; Zhang Yuting, Shao Yong, Gu Guipeng, Zan Peng; Industrial Control Computer; December 2017; pp. 90-94 *
Also Published As
Publication number | Publication date |
---|---|
CN112767304A (en) | 2021-05-07 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |