CN112767304A - Vision-based sunflower module position and direction detection method - Google Patents

Vision-based sunflower module position and direction detection method

Info

Publication number
CN112767304A
Authority
CN
China
Prior art keywords
image
sunflower
module
vision
detection method
Prior art date
Legal status
Granted
Application number
CN202011397186.0A
Other languages
Chinese (zh)
Other versions
CN112767304B (en)
Inventor
曹衍龙
薛小杰
杨将新
曹彦鹏
Current Assignee
Shandong Industrial Technology Research Institute of ZJU
Original Assignee
Shandong Industrial Technology Research Institute of ZJU
Priority date: 2020-12-04
Filing date: 2020-12-04
Publication date: 2021-05-07
Application filed by Shandong Industrial Technology Research Institute of ZJU
Priority to CN202011397186.0A
Publication of CN112767304A
Application granted
Publication of CN112767304B
Legal status: Active

Classifications

    All under GPHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL:
    • G06T7/0004: Image analysis; inspection of images; industrial image inspection
    • G06T5/40: Image enhancement or restoration by the use of histogram techniques
    • G06T5/70: Image enhancement or restoration (noise filtering)
    • G06T7/12: Segmentation; edge-based segmentation
    • G06T7/136: Segmentation involving thresholding
    • G06T7/181: Segmentation involving edge growing; involving edge linking
    • G06T7/66: Analysis of geometric attributes of image moments or centre of gravity
    • G06T2207/10004: Image acquisition modality; still image; photographic image
    • G06T2207/20032: Median filtering
    • G06T2207/20036: Morphological image processing
    • G06T2207/20061: Hough transform
    • G06T2207/20076: Probabilistic image processing
    • G06T2207/30108: Industrial image inspection

Abstract

The invention belongs to the technical field of detection methods, and particularly relates to a vision-based sunflower module position and direction detection method. The method comprises the following steps: S1, identifying the position of the whole-layer module; S2, identifying the position of a single module, performed after the 12 hole positions have been identified in step S1; and S3, direction identification, i.e., finding the direction of the sunflower module. The invention provides a vision-based sunflower module position and direction detection method that detects the position and direction of a sunflower module through image processing, ensures the sunflower module is aligned with the sensor when placed, and facilitates the execution of the subsequent glue-filling process.

Description

Vision-based sunflower module position and direction detection method
Technical Field
The invention belongs to the technical field of detection methods, and particularly relates to a sunflower module position and direction detection method based on vision.
Background
With rising labor costs in China, the demand from enterprises and factories for replacing manual labor with robots is increasingly urgent. China is a large manufacturing country, and industrial manufacturing cannot do without machine assistance; the higher the degree of automation, the more obvious China's advantages in the manufacturing field become.
The vision-guided robot applies machine vision technology to the robot to make it intelligent. Machine vision acquires an image signal of a target through an image sensor and other devices, converts the image into a digital signal in an image processing system, and analyzes it to obtain characteristic information of the target, such as pixel distribution, color and brightness.
In the street lamp production process, the grabbing position of the mechanical arm during production-line feeding is determined by the position of the module heat sink. When modules are stacked in batches, their current positions cannot be judged directly, so machine vision technology is used to realize the positioning detection of the modules.
Disclosure of Invention
The invention aims to provide a vision-based sunflower module position and direction detection method that detects the position and direction of a sunflower module through image processing, ensures the sunflower module is aligned with the sensor when placed, and facilitates the execution of the subsequent glue-filling process.
In order to achieve the purpose, the invention adopts the following technical scheme:
a vision-based sunflower module position and direction detection method comprises the following steps:
s1, identifying the position of the whole-layer module, comprising the following steps:
s11, taking a picture, wherein the camera takes a picture of the whole layer of sunflower module;
s12, preprocessing, namely, carrying out histogram equalization and median filtering noise removal processing on the picture obtained in the step S11;
s13, performing threshold segmentation, and forming a binary image on the picture obtained in the step S12 through threshold segmentation;
s14, erosion, namely forming closed holes in the image obtained in the step S13 through erosion;
s15, judging and returning data, and finding a target area of the image obtained in the step S14 by judging the area and the circularity of the hole outline;
s2, identifying the position of a single module, performed after the 12 hole positions have been identified in step S1, comprising the following steps:
s21, taking a picture, wherein the camera photographs a single sunflower module;
s22, preprocessing, namely performing median filtering on the picture obtained in the step S21 to remove noise;
s23, detecting, namely, carrying out Hough circle detection on the sunflower module in the image obtained in the step S22;
s24, finding the smallest circle, namely traversing all the circles in the image obtained in the step S23 to find the smallest circle, and returning the circle center position and the radius;
s3, direction identification, realizing the finding of the sunflower module direction, comprising the following steps:
s31, highlighting the image edge, namely representing the edge contour of the image by using the difference between the gray-scale image and the eroded image of the image obtained in the step S21;
s32, performing masking processing for revealing the maximum notch, namely taking the center position of the minimum circle obtained in the step S24 as the circle center of a mask and multiplying the mask by the image to be processed to obtain an ROI image;
s33, edge detection, namely identifying the image obtained in the step S32 by adopting a Canny edge detection algorithm to detect an actual edge;
s34, performing dilation processing for dilating the highlight in the image obtained in step S33;
s35, searching a maximum contour, traversing all contours in the image obtained in the step S34 and obtaining the maximum contour;
s36, detecting convex hulls and convex defects, namely detecting the convex hulls and the convex defects of the image with the maximum outline obtained in the step S35 to obtain the characteristic quantity of a convex defect area;
s37, determining the maximum opening, calculating the characteristic quantity obtained in the step S36, comparing the characteristic quantity, and finding the opening direction of the sunflower module.
The Canny edge detection algorithm is a multi-stage edge detection algorithm developed by John F. Canny. The purpose of edge detection in general is to significantly reduce the amount of data in an image while preserving its structural attributes. Among the many edge detection algorithms, the Canny algorithm remains a standard one and is still widely used in research.
On the basis of the technical scheme, the invention can also adopt the following further technical scheme:
the step S11 further includes: the camera moves to a position 1m above the whole layer of sunflower module along with the mechanical arm, and the camera is triggered to take a picture;
the size of the convolution kernel used for erosion in step S14 is 120 × 120.
The step S12 further includes:
s121, histogram equalization, namely performing histogram equalization processing on the photo obtained in the step S11 to enhance the image contrast;
and S122, median filtering, namely performing median filtering processing on the image obtained in the step S121 to eliminate image noise.
The step S13 further includes: the total number of pixels and the probability of occurrence of the gray value are calculated for the picture obtained in step S12, and binarization processing is performed on the image, where the pixel value greater than the threshold is 255 and the pixel value less than the threshold is 0.
The step S15 further includes:
s151, detecting all contours in the image obtained in the step S14, storing the contours in a container, calculating the area and the perimeter of the contours, calculating the circularity according to the area and the perimeter, and if the area is larger than 12500 and smaller than 92500 and the circularity is larger than 0.65, judging that the contours are target areas;
s152, traversing all the target areas meeting the conditions obtained in the step S151, finding the minimum circle surrounding the outline, and returning the center position and the radius of the minimum circle.
The step S21 further includes: the camera is driven by the mechanical arm to sequentially move to the circle center position of each circle and take a picture, and the taken picture is a gray scale image;
in step S22, a convolution kernel of 3 × 3 is used for median filtering.
The expanded convolution kernel in step S34 is 19 × 19.
The step S35 further includes: and finding all the contours in the images obtained in the step S34, sequentially storing the contours in the container, and traversing all the contours to find the maximum value.
The feature quantities in said step S36 include a start point, an end point, a farthest point, and the distance from the farthest point to the convex hull.
The step S37 further includes: and calculating the distance between the starting point and the ending point of each convex defect, storing the distance into a container, and comparing the calculated distances to find the area where the maximum distance is located, wherein the area is the opening direction of the sunflower module.
The invention has the following technical characteristics:
according to the invention, the position and the direction of the sunflower module are automatically identified, so that the sunflower module is convenient to grab and rotate to a specified direction, the sunflower module can be aligned to the sensor when being placed, and the subsequent glue pouring process is convenient to execute. The method has the advantages of high precision, strict process and extremely low error rate.
Drawings
Fig. 1 is an image obtained in step S11 of the vision-based sunflower module position and orientation detection method according to the present invention.
Fig. 2 is a gray level histogram of step S121 of the vision-based sunflower module position and orientation detection method according to the present invention.
Fig. 3 is an image obtained in step S121 of the vision-based sunflower module position and orientation detection method according to the present invention.
Fig. 4 is an image obtained in step S122 of the vision-based sunflower module position and orientation detection method according to the present invention.
Fig. 5 is an image obtained in step S13 of the vision-based sunflower module position and orientation detection method according to the present invention.
Fig. 6 is an image obtained in step S14 of the vision-based sunflower module position and orientation detection method according to the present invention.
Fig. 7 is an image obtained in step S21 of the vision-based sunflower module position and orientation detection method according to the present invention.
Fig. 8 is an image obtained in step S22 of the vision-based sunflower module position and orientation detection method according to the present invention.
Fig. 9 is an image obtained in step S23 of the vision-based sunflower module position and orientation detection method according to the present invention.
Fig. 10 is an image obtained in step S31 of the vision-based sunflower module position and orientation detection method according to the present invention.
Fig. 11 is a mask image obtained in step S32 of the vision-based sunflower module position and orientation detection method according to the present invention.
Fig. 12 is an ROI image obtained in step S32 of the vision-based sunflower module position and orientation detection method according to the present invention.
Fig. 13 is an image obtained in step S33 of the vision-based sunflower module position and orientation detection method according to the present invention.
Fig. 14 is an image obtained in step S34 of the vision-based sunflower module position and orientation detection method according to the present invention.
Fig. 15 is an image obtained in step S35 of the vision-based sunflower module position and orientation detection method according to the present invention.
Fig. 16 is a convex hull diagram of the maximum outline obtained in step S36 of the vision-based sunflower module position and orientation detection method according to the present invention.
Fig. 17 is a maximum contour convex defect map obtained in step S36 of the vision-based sunflower module position and orientation detection method according to the present invention.
FIG. 18 is a flow chart of a vision-based sunflower module position and orientation detection method of the present invention.
Fig. 19 is a schematic structural diagram of a sunflower module of the vision-based sunflower module position and direction detection method of the invention.
Detailed Description
The following specific examples are further illustrative of the methods and techniques provided by the present invention and should not be construed as limiting the invention thereto.
First embodiment, as shown in fig. 18 and 19, a vision-based sunflower module position and orientation detection method includes the following steps:
s1, identifying the position of the whole-layer module, comprising the following steps:
and S11, photographing, wherein the mechanical arm moves the camera to a position 1 m above the whole-layer module and triggers the camera to take a picture. The camera model is selected according to the single-module position and direction detection described below, and the shooting effect on the workbench is shown in fig. 1.
S12, preprocessing, namely, carrying out histogram equalization and median filtering noise removal processing on the photo obtained in the step S11, wherein the method comprises the following steps:
and S121, histogram equalization, namely performing histogram equalization processing on the photo obtained in the step S11 to enhance the image contrast.
Histogram equalization converts an input image, through a gray-level mapping, into an output image with approximately the same number of pixels at each gray level. As shown in fig. 1, the image is dark overall and its details are not prominent, so histogram equalization is first used to increase the contrast of the image and make the details easier to see. The gray histogram is a two-dimensional chart, as shown in fig. 2: the abscissa is the gray level of the pixels in the image, and the ordinate is the frequency or number of occurrences of pixels at each gray level.
Specifically, firstly, the gray scale range of the image is assumed to be 0-1 and continuous, and the normalized histogram of the image at this time is the probability density function.
$$p(x), \quad 0 \le x \le 1$$

From the nature of the probability density function, one can derive:

$$\int_0^1 p(x)\,dx = 1$$

Let the probability density function of the image before conversion be $p_r(r)$ and that of the converted image be $p_s(s)$, with the conversion function $s = f(r)$, where $r$ and $s$ are the gray values of the input image and the output image at any point, respectively. Then, from probability theory:

$$p_s(s) = p_r(r)\left|\frac{dr}{ds}\right|$$

If the converted probability density function is to satisfy $p_s(s) = 1$, $0 \le s \le 1$ (i.e., a uniform histogram), it must hold that:

$$\frac{ds}{dr} = p_r(r)$$

Integrating both sides over $r$ gives:

$$s = f(r) = \int_0^r p_r(t)\,dt$$

For gray values in $[0, 255]$, the result must be multiplied by the maximum gray value $D_{max}$; the gray-equalization conversion formula is then

$$D_B = D_{max}\int_0^{D_A} p_r(t)\,dt$$

where $D_B$ is the gray value after conversion and $D_A$ is the gray value before conversion.
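For illustration, the discrete form of this mapping, $D_B = D_{max}\sum_{i=0}^{D_A} n_i/N$, can be sketched in Python with OpenCV. This is a minimal sketch, not the patent's implementation; the file name is a placeholder:

```python
import cv2
import numpy as np

img = cv2.imread("whole_layer.png", cv2.IMREAD_GRAYSCALE)  # placeholder input path

# Library route: OpenCV's built-in histogram equalization.
equalized = cv2.equalizeHist(img)

# Manual route, following D_B = D_max * sum_{i<=D_A} n_i / N:
hist = np.bincount(img.ravel(), minlength=256)  # n_i for i = 0..255
cdf = np.cumsum(hist) / img.size                # cumulative probability up to each gray level
lut = np.round(255 * cdf).astype(np.uint8)      # D_B for every possible D_A
equalized_manual = lut[img]                     # apply the gray-level mapping
```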
And S122, median filtering, namely performing median filtering processing on the image obtained in the step S121 to eliminate image noise.
After histogram equalization, the image noise becomes obvious, so median filtering is adopted to eliminate it and facilitate subsequent processing of the image. The median filter is a statistical ordering filter: for a point in the original image, it takes the median of the sorted values of all pixels in a neighborhood centered on that point as the response at that point.
In this embodiment, a 3 × 3 median filter is swept over each pixel of the image, and the value ranked 5th among the 9 sorted neighborhood values replaces the original pixel value. This method rejects salt-and-pepper noise better than a linear filter: a linear filter averages the noise into its output, whereas median filtering simply discards it.
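The 3 × 3 median filtering itself is a single call in OpenCV (a sketch; `cv2.medianBlur` expects an odd aperture size, and the file name is a placeholder):

```python
import cv2

img = cv2.imread("equalized.png", cv2.IMREAD_GRAYSCALE)  # placeholder input
# Each pixel is replaced by the 5th of the 9 sorted values in its 3x3 neighborhood.
denoised = cv2.medianBlur(img, 3)
```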
S13, threshold segmentation: the picture obtained in step S12 is converted into a binary image through threshold segmentation, using the maximum between-class variance (Otsu) method.
It can be seen that the gray values of the target regions differ greatly from those of the surrounding black ring parts, so the target regions can be separated by thresholding. However, owing to uneven illumination (for example, under strong light on the right side the gray value of a black area changes markedly compared with other areas of the same type), a single fixed threshold cannot segment all of the captured images, which is why the threshold is computed per image with the maximum between-class variance method.
Specifically, as shown in fig. 5, the total number of pixels and the probability of occurrence of each gray value are calculated. Traversing the image, the number of pixels with gray level $i$ is counted and recorded as $n_i$, with $i$ ranging from 0 to 255. The total number of pixels is
$$N = \sum_{i=0}^{255} n_i$$

and the probability of each gray value occurring is

$$p_i = \frac{n_i}{N}$$

The pixels in the image are divided into two classes, $A_1$ and $A_2$, by a threshold $T$. $A_1$ consists of the pixels with gray values $0$ to $T-1$; its probability is

$$P_1 = \sum_{i=0}^{T-1} p_i$$

and its mean gray level is

$$\mu_1 = \frac{1}{P_1}\sum_{i=0}^{T-1} i\,p_i$$

$A_2$ consists of the pixels with gray values $T$ to $255$; its probability is

$$P_2 = \sum_{i=T}^{255} p_i = 1 - P_1$$

and its mean gray level is

$$\mu_2 = \frac{1}{P_2}\sum_{i=T}^{255} i\,p_i$$

The mean gray level $\mu$ of the entire image is then

$$\mu = P_1\mu_1 + P_2\mu_2$$

The between-class variance of the two regions $A_1$ and $A_2$ is $\sigma^2 = P_1 P_2(\mu_1 - \mu_2)^2$. The value $T$ that maximizes $\sigma^2$ is found by calculation, and the image is binarized: pixel values greater than the threshold are set to 255, and pixel values less than the threshold are set to 0.
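The search for the $T$ that maximizes $\sigma^2$ can be sketched as follows; OpenCV's `cv2.THRESH_OTSU` flag implements the same maximum between-class variance criterion (a minimal sketch with placeholder names):

```python
import cv2
import numpy as np

img = cv2.imread("denoised.png", cv2.IMREAD_GRAYSCALE)  # placeholder input

# Library route: Otsu's threshold is chosen automatically (the 0 argument is ignored).
T_otsu, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Manual route: maximize sigma^2 = P1 * P2 * (mu1 - mu2)^2 over all thresholds t.
hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
p = hist / hist.sum()
best_t, best_var = 0, 0.0
for t in range(1, 256):
    P1, P2 = p[:t].sum(), p[t:].sum()
    if P1 == 0.0 or P2 == 0.0:
        continue
    mu1 = (np.arange(t) * p[:t]).sum() / P1
    mu2 = (np.arange(t, 256) * p[t:]).sum() / P2
    var = P1 * P2 * (mu1 - mu2) ** 2
    if var > best_var:
        best_var, best_t = var, t
binary_manual = np.where(img >= best_t, 255, 0).astype(np.uint8)
```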
And S14, erosion: closed holes are formed in the image obtained in step S13 through morphological erosion.
As shown in fig. 6, the binarized image is eroded to obtain closed regions of the circular targets. The size of the erosion convolution kernel is 120 × 120.
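A sketch of this erosion (the square all-ones structuring element is an assumption; the text only gives the 120 × 120 size):

```python
import cv2
import numpy as np

binary = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)  # placeholder input
kernel = np.ones((120, 120), np.uint8)  # 120 x 120 structuring element
eroded = cv2.erode(binary, kernel)      # shrinks white regions, leaving closed hole regions
```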
And S15, judging and returning data: the target areas are found in the image obtained in step S14 by judging the area and circularity of the hole contours.
S151, detecting all the outlines in the image obtained in the step S14 and storing the outlines in a container.
The contour shape can be judged as follows. By the isoperimetric inequality, among all closed curves with equal perimeter the circle encloses the largest area, which can be expressed as $4\pi S \le L^2$, where $S$ is the area enclosed by the curve and $L$ is the perimeter of the closed curve. Circularity is accordingly defined as $R = 4\pi S / L^2$, and the area $S$ of a contour can be obtained from the number of pixels occupied by the connected component.
For each contour $i$, the area $S$ and the perimeter $L$ are calculated, and the circularity $R = 4\pi S / L^2$ is computed. If $S$ satisfies $12500 < S < 92500$ and $R > 0.65$, the contour is considered a circle, i.e., a target region.
S152, all the target areas meeting the conditions obtained in step S151 are traversed, the minimum circle surrounding each outline is found, and its center position and radius are returned.
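Steps S151 and S152 might be sketched as follows, using OpenCV's contour functions and the thresholds given above (variable and file names are illustrative):

```python
import math
import cv2

eroded = cv2.imread("eroded.png", cv2.IMREAD_GRAYSCALE)  # placeholder input
contours, _ = cv2.findContours(eroded, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

holes = []
for c in contours:
    S = cv2.contourArea(c)         # contour area
    L = cv2.arcLength(c, True)     # perimeter of the closed contour
    if L == 0:
        continue
    R = 4 * math.pi * S / (L * L)  # circularity, 1.0 for a perfect circle
    if 12500 < S < 92500 and R > 0.65:
        # Smallest circle surrounding the contour: returned center and radius.
        (x, y), r = cv2.minEnclosingCircle(c)
        holes.append((x, y, r))
```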
S2, identifying the position of a single module, performed after the 12 hole positions have been identified in step S1, comprising the following steps:
and S21, taking a picture, and taking a picture of the single sunflower module by the camera.
After the positions of the 12 holes are identified, the camera is driven by the mechanical arm to sequentially move to the circle center position of each circle and take a picture.
Specifically, for camera selection the resolution is calculated as follows: the detection accuracy is 0.1 mm and the diameter of the sunflower module is 160 mm, so the field of view is about 200 × 200 mm and the minimum resolution is 4 megapixels (2000 × 2000 pixels). Since practical applications generally choose 3-5 times the calculated value to guarantee accuracy, the actual resolution should be at least 12 megapixels.
Therefore, a Daheng Imaging MER-1220-32U3M-C industrial camera with a resolution of 12.2 megapixels was finally selected.
The captured image is a gray-scale image, and the effect of an image taken at the working position 20 cm above the module is shown in fig. 7.
And S22, preprocessing, namely, carrying out median filtering on the picture obtained in the step S21 to remove noise. As shown in fig. 8, median filtering is performed using a convolution kernel of 3 × 3, specifically as described in step S1.
And S23, detecting the sunflower module in the image obtained in the step S22 by Hough circle detection.
Hough circle detection is performed on the sunflower module. After testing, the minimum distance between circle centers is set to 45, and the circle radius ranges from a minimum of 200 to a maximum of 250. The effect after detection is shown in fig. 9.
And S24, finding the smallest circle, traversing all the circles of the image obtained in the step S23 to find the smallest circle, and returning to the circle center position and the radius.
It can be seen that after Hough circle detection all detected circles surround the target area, and the smallest of them is the desired result. All found circles $R_1, R_2, \dots, R_n$ are therefore traversed to find the smallest circle $R_i$, and its center position $(x_i, y_i)$ and radius $r_i$ are returned.
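Steps S23 and S24 might look like the sketch below. The values minDist=45, minRadius=200 and maxRadius=250 come from the text; `dp`, `param1` and `param2` are illustrative assumptions:

```python
import cv2

gray = cv2.imread("single_module.png", cv2.IMREAD_GRAYSCALE)  # placeholder input
gray = cv2.medianBlur(gray, 3)

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=45,
                           param1=100, param2=30,  # assumed Canny/accumulator thresholds
                           minRadius=200, maxRadius=250)

if circles is not None:
    # Traverse all detected circles and keep the one with the smallest radius.
    xi, yi, ri = min(circles[0], key=lambda c: c[2])
    print(f"center = ({xi:.1f}, {yi:.1f}), radius = {ri:.1f}")
```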
Thus, the position of a single sunflower module is found, and the following steps are performed to find the direction of the sunflower module.
S3, direction identification is carried out synchronously with step S2 and realizes the finding of the sunflower module direction; it comprises the following steps:
s31, highlighting the image edge, and representing the edge contour of the image by the difference between the gray scale map and the erosion map for the image obtained in step S21.
To reduce unnecessary noise and obtain a clear outline of the image, the module edge is highlighted: the difference between the gray-scale image and the eroded image represents the internal gradient of the image, i.e., its edge contour. As shown in fig. 10, the internal gradient is one of the morphological gradients: the difference image obtained by subtracting the eroded image from the original image is called the internal gradient of the image.
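A sketch of the internal gradient; the 3 × 3 kernel size is an assumption, since the text does not state it for this step:

```python
import cv2
import numpy as np

gray = cv2.imread("single_module.png", cv2.IMREAD_GRAYSCALE)  # placeholder input
kernel = np.ones((3, 3), np.uint8)              # assumed structuring element
eroded = cv2.erode(gray, kernel)
internal_gradient = cv2.subtract(gray, eroded)  # gray image minus eroded image = edge contour
```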
And S32, masking processing for revealing the maximum notch: the center position of the minimum circle obtained in step S24 is taken as the circle center of a mask, which is multiplied by the image to be processed to obtain the ROI image.
To reveal the largest notch and remove the excess sunflower background, the central area of the image is extracted in preparation for the subsequent search for the largest opening. The ROI region can be extracted by a mask operation. When the mask is made, as shown in fig. 11, its center is the minimum circle center position $(x_i, y_i)$ found by the Hough circle detection. The mask is then multiplied by the image to be processed, as shown in fig. 12, to obtain the ROI image: pixel values inside the ROI remain unchanged, and pixel values outside it are set to 0.
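The mask operation can be sketched as a filled circle centered on the Hough result $(x_i, y_i)$; the example center and radius values are placeholders:

```python
import cv2
import numpy as np

edge = cv2.imread("internal_gradient.png", cv2.IMREAD_GRAYSCALE)  # placeholder input
xi, yi, ri = 640, 480, 220  # center and radius from step S24 (placeholder values)

mask = np.zeros_like(edge)
cv2.circle(mask, (int(xi), int(yi)), int(ri), 255, thickness=-1)  # filled white circle
# Equivalent to multiplying by a 0/1 mask: inside the ROI unchanged, outside set to 0.
roi = cv2.bitwise_and(edge, mask)
```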
S33, edge detection, as shown in fig. 13, the image obtained in step S32 is identified by a Canny edge detection algorithm to detect an actual edge.
In order to identify the edge of the image, the Canny edge detection algorithm can be used to detect the actual edge as much as possible, and meanwhile, false alarm generated by noise is reduced as much as possible.
Specifically, noise is first removed: the image is smoothed by convolution with a Gaussian filter to reduce noise.
The edges of an image can point in different directions, so the classic Canny algorithm uses four gradient operators to compute the gradient in the horizontal, vertical and two diagonal directions. In practice, however, four separate operators are generally not used; here the Sobel operator is used to calculate the horizontal and vertical gradients:

$$G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * A \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix} * A$$

where $A$ is the pixel matrix of the original image. The gradient magnitude is then

$$G = \sqrt{G_x^2 + G_y^2}$$

and the gradient direction is

$$\theta = \arctan\left(\frac{G_y}{G_x}\right)$$
Non-maximum suppression is an edge-thinning method. The gradient edges obtained so far are usually not one pixel wide but several, so the gradient map is still "blurred". Non-maximum suppression preserves local maximum gradients while suppressing all other gradient values, which means that only the sharpest position of each gradient change is kept. The algorithm is: 1) compare the gradient strength of the current point with that of the points along the positive and negative gradient directions; 2) if the gradient strength of the current point is the maximum among them, keep the value; otherwise suppress it, i.e., set it to 0. For example, if the gradient direction of the current point is 90°, pointing straight up, it must be compared with the pixels directly above and below it.
Hysteresis thresholding. Hysteresis thresholding requires two thresholds, a high one and a low one: 1) if the gradient magnitude at a pixel location exceeds the high threshold, the pixel is kept as an edge pixel; 2) if the magnitude at a pixel location is less than the low threshold, the pixel is excluded; 3) if the magnitude lies between the two thresholds, the pixel is kept only if it is connected to a pixel above the high threshold. Typically the ratio of the high to the low threshold is between 2:1 and 3:1. In this embodiment, a high threshold of 59 and a low threshold of 13 are preferred.
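With the preferred thresholds of this embodiment, the Canny stage reduces to a couple of calls. Note that `cv2.Canny` does not smooth internally, so a Gaussian blur is applied first; its 5 × 5 kernel is an assumption:

```python
import cv2

roi = cv2.imread("roi.png", cv2.IMREAD_GRAYSCALE)  # placeholder input
smoothed = cv2.GaussianBlur(roi, (5, 5), 0)        # noise reduction before gradient computation
edges = cv2.Canny(smoothed, 13, 59)                # low threshold 13, high threshold 59
```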
S34, dilation process, dilation is performed on the highlight portion in the image obtained in step S33.
Dilation is used because many breakpoints appear on the processed edges. Dilation expands the highlighted regions in the image; as shown in fig. 14, the highlighted area becomes larger than in the original. The dilation convolution kernel is 19 × 19.
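A sketch of the 19 × 19 dilation that bridges the breakpoints:

```python
import cv2
import numpy as np

edges = cv2.imread("edges.png", cv2.IMREAD_GRAYSCALE)  # placeholder input
kernel = np.ones((19, 19), np.uint8)
dilated = cv2.dilate(edges, kernel)  # grows highlight regions, closing gaps in the edges
```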
And S35, searching the maximum contour, traversing all contours in the image obtained in the step S34 and obtaining the maximum contour.
At this point there are multiple contours in the image, and the maximum contour must be found for convex hull and convexity defect detection. A contour corresponds to a series of points in the image, i.e., a curve. First all contours $C_1, C_2, \dots, C_n$ are found and stored in a container in turn; all contours are then traversed to find the maximum $C_i$, giving the maximum contour shown in fig. 15.
And S36, detecting convex hulls and convex defects, and detecting the convex hulls and the convex defects of the image with the maximum outline obtained in the step S35 to obtain the characteristic quantity of the convex defect area.
The convex hull of the maximum contour is shown in fig. 16 and its convexity defects in fig. 17; it can be seen that the maximum contour contains many notches and protrusions. To obtain the maximum opening, and from it the sunflower direction, convex hull and convexity defect detection is performed on the image.
Convex hull and convexity defect detection gives a good understanding of the shape or contour of an object. For a given set of points in the two-dimensional plane, the convex hull is the convex polygon formed by connecting the outermost points, i.e., a polygon that contains all the points in the set. A convexity defect is a recessed portion of a section of the contour, and each convexity defect region has four feature quantities: the start point, the end point, the farthest point, and the distance from the farthest point to the convex hull.
S37, determining the maximum opening, calculating the characteristic quantity obtained in the step S36, comparing the characteristic quantity, and finding the opening direction of the sunflower module.
The distances $L_1, L_2, \dots, L_n$ between the start point and the end point of each convexity defect are calculated and stored in a container. These distances are compared to find the maximum distance $L_i$; the area where $L_i$ lies is the opening direction of the sunflower module.
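Steps S35 to S37 together might be sketched as follows; `cv2.convexityDefects` requires the hull as indices (`returnPoints=False`), and its depth column is the distance from the farthest point to the hull:

```python
import cv2
import numpy as np

dilated = cv2.imread("dilated.png", cv2.IMREAD_GRAYSCALE)  # placeholder input

# S35: traverse all contours and keep the largest one.
contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)

# S36: convex hull (as point indices) and convexity defects of the largest contour.
hull = cv2.convexHull(largest, returnPoints=False)
defects = cv2.convexityDefects(largest, hull)  # rows: start, end, farthest, depth

# S37: the defect with the largest start-to-end distance marks the opening.
best = None
if defects is not None:
    for s, e, f, d in defects[:, 0]:
        start, end = largest[s][0], largest[e][0]
        dist = np.linalg.norm(start.astype(np.float64) - end.astype(np.float64))
        if best is None or dist > best[0]:
            best = (dist, start, end)
if best is not None:
    print("opening between", best[1], "and", best[2])
```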
The above description of the embodiments is only intended to facilitate the understanding of the method of the invention and its core ideas. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.

Claims (10)

1. A vision-based sunflower module position and direction detection method is characterized by comprising the following steps:
s1, identifying the position of the whole-layer module, comprising the following steps:
s11, taking a picture, wherein the camera takes a picture of the whole layer of sunflower module;
s12, preprocessing, namely, carrying out histogram equalization and median filtering noise removal processing on the picture obtained in the step S11;
s13, performing threshold segmentation, and forming a binary image on the picture obtained in the step S12 through threshold segmentation;
s14, erosion, namely forming closed holes in the image obtained in the step S13 through erosion;
s15, judging and returning data, and finding a target area of the image obtained in the step S14 by judging the area and the circularity of the hole outline;
s2, identifying the position of a single module, performed after the 12 hole positions have been identified in step S1, comprising the following steps:
s21, taking a picture, wherein the camera photographs a single sunflower module;
s22, preprocessing, namely performing median filtering on the picture obtained in the step S21 to remove noise;
s23, detecting, namely, carrying out Hough circle detection on the sunflower module in the image obtained in the step S22;
s24, finding the smallest circle, namely traversing all the circles in the image obtained in the step S23 to find the smallest circle, and returning the circle center position and the radius; and
s3, direction identification, realizing the finding of the sunflower module direction, comprising the following steps:
s31, highlighting the image edge, namely representing the edge contour of the image by using the difference between the gray-scale image and the eroded image of the image obtained in the step S21;
s32, performing masking processing for revealing the maximum notch, namely taking the center position of the minimum circle obtained in the step S24 as the circle center of a mask and multiplying the mask by the image to be processed to obtain an ROI image;
s33, edge detection, namely identifying the image obtained in the step S32 by adopting a Canny edge detection algorithm to detect an actual edge;
s34, performing dilation processing for dilating the highlight in the image obtained in step S33;
s35, searching a maximum contour, traversing all contours in the image obtained in the step S34 and obtaining the maximum contour;
s36, detecting convex hulls and convex defects, namely detecting the convex hulls and the convex defects of the image with the maximum outline obtained in the step S35 to obtain the characteristic quantity of a convex defect area;
s37, determining the maximum opening, calculating the characteristic quantity obtained in the step S36, comparing the characteristic quantity, and finding the opening direction of the sunflower module.
2. The vision-based sunflower module position and orientation detection method of claim 1, wherein said step S11 further comprises: the camera moves to a position 1m above the whole layer of sunflower module along with the mechanical arm, and the camera is triggered to take a picture;
the size of the convolution kernel used for erosion in step S14 is 120 × 120.
3. The vision-based sunflower module position and orientation detection method of claim 1, wherein said step S12 further comprises:
s121, histogram equalization, namely performing histogram equalization processing on the photo obtained in the step S11 to enhance the image contrast;
and S122, median filtering, namely performing median filtering processing on the image obtained in the step S121 to eliminate image noise.
4. The vision-based sunflower module position and orientation detection method of claim 1, wherein said step S13 further comprises: the total number of pixels and the probability of occurrence of the gray value are calculated for the picture obtained in step S12, and binarization processing is performed on the image, where the pixel value greater than the threshold is 255 and the pixel value less than the threshold is 0.
5. The vision-based sunflower module position and orientation detection method of claim 1, wherein said step S15 further comprises:
s151, detecting all contours in the image obtained in the step S14, storing the contours in a container, calculating the area and the perimeter of the contours, calculating the circularity according to the area and the perimeter, and if the area is larger than 12500 and smaller than 92500 and the circularity is larger than 0.65, judging that the contours are target areas;
s152, traversing all the target areas meeting the conditions obtained in the step S151, finding the minimum circle surrounding the outline, and returning the center position and the radius of the minimum circle.
6. The vision-based sunflower module position and orientation detection method of claim 1, wherein said step S21 further comprises: the camera is driven by the mechanical arm to sequentially move to the circle center position of each circle and take a picture, and the taken picture is a gray scale image;
in step S22, a convolution kernel of 3 × 3 is used for median filtering.
7. The vision-based sunflower module position and orientation detection method of claim 1, wherein the convolution kernel of the expansion in step S34 is 19 x 19.
8. The vision-based sunflower module position and orientation detection method of claim 1, wherein said step S35 further comprises: and finding all the contours in the images obtained in the step S34, sequentially storing the contours in the container, and traversing all the contours to find the maximum value.
9. The vision-based sunflower module position and orientation detection method of claim 1, wherein said characteristic quantities in step S36 include a starting point, an ending point, a farthest point, and the distance from the farthest point to the convex hull.
10. The vision-based sunflower module position and orientation detection method of claim 9, wherein said step S37 further comprises: and calculating the distance between the starting point and the ending point of each convex defect, storing the distance into a container, and comparing the calculated distances to find the area where the maximum distance is located, wherein the area is the opening direction of the sunflower module.
Application CN202011397186.0A, filed 2020-12-04 (priority date 2020-12-04); published as CN112767304A on 2021-05-07 and granted as CN112767304B on 2023-02-28. Status: Active.




Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant