CN112767304B - A Vision-Based Detection Method for the Position and Orientation of Sunflower Modules - Google Patents
- Publication number: CN112767304B (application CN202011397186.0A)
- Authority: CN (China)
- Prior art keywords: image, sunflower, module, vision, detection method
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/0004—Industrial image inspection
- G06T5/40—Image enhancement or restoration using histogram techniques
- G06T5/70—Denoising; Smoothing
- G06T7/12—Edge-based segmentation
- G06T7/136—Segmentation; Edge detection involving thresholding
- G06T7/181—Segmentation; Edge detection involving edge growing or edge linking
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
- G06T2207/10004—Still image; Photographic image
- G06T2207/20032—Median filtering
- G06T2207/20036—Morphological image processing
- G06T2207/20061—Hough transform
- G06T2207/20076—Probabilistic image processing
- G06T2207/30108—Industrial image inspection
Abstract
Description
Technical Field
The invention belongs to the technical field of detection methods, and in particular relates to a vision-based method for detecting the position and orientation of a sunflower module.
Background Art
As domestic labor costs continue to rise, enterprises and factories have an increasingly urgent need to replace manual labor with machines. China is a major manufacturing country, and industrial manufacturing is inseparable from the assistance of machines; the higher the degree of automation, the more pronounced China's advantage in manufacturing becomes.
A vision-guided robot applies machine vision technology to a robot to make the robot intelligent. Machine vision acquires an image signal of a target through an image sensor or similar device, converts the image into a digital signal in an image processing system, and analyzes it to obtain feature information of the target, such as pixel distribution, color, and brightness.
In street-lamp production, the grasping position of the robotic arm during line loading must be determined from the position of the module heatsink. When modules are stacked in batches, their current positions cannot be judged directly, so machine vision technology is needed to assist in locating and detecting the modules.
Summary of the Invention
The object of the present invention is to provide a vision-based method for detecting the position and orientation of a sunflower module through image processing, so that the sunflower module can be aligned with the sensor when placed, facilitating the subsequent glue-filling step.
To achieve the above object, the present invention adopts the following technical solution:
A vision-based method for detecting the position and orientation of a sunflower module, comprising the following steps:
S1. Whole-layer module position recognition, comprising the following steps:
S11. Photographing: the camera photographs the whole layer of sunflower modules;
S12. Preprocessing: histogram equalization and median filtering are applied to the photo obtained in step S11 to remove noise;
S13. Threshold segmentation: the image obtained in step S12 is binarized by threshold segmentation;
S14. Erosion: closed holes are formed by eroding the image obtained in step S13;
S15. Judging and returning data: target regions are found in the image obtained in step S14 by judging the area and circularity of the hole contours.
S2. Single-module position recognition. After the 12 hole positions have been identified in step S1, the following steps are performed:
S21. Photographing: the camera photographs a single sunflower module;
S22. Preprocessing: median filtering is applied to the photo obtained in step S21 to remove noise;
S23. Detection: Hough circle detection is performed on the sunflower module in the image obtained in step S22;
S24. Finding the smallest circle: all circles in the image obtained in step S23 are traversed, the smallest circle is found, and its center position and radius are returned.
S3. Orientation recognition, which finds the orientation of the sunflower module, comprising the following steps:
S31. Highlighting image edges: for the image obtained in step S21, the difference between the binarized image and its eroded image is used to represent the edge contour of the image;
S32. Mask processing, used to reveal the largest notch: the center of the smallest circle obtained in step S24 is taken as the mask center, and the mask is multiplied with the image to be processed to obtain the ROI image;
S33. Edge detection: the Canny edge detection algorithm is applied to the image obtained in step S32 to detect the actual edges;
S34. Dilation: the highlighted parts of the image obtained in step S33 are dilated;
S35. Finding the largest contour: all contours in the image obtained in step S34 are traversed to obtain the largest contour;
S36. Convex hull and convexity defect detection: convex hull and convexity defect detection is performed on the largest contour obtained in step S35 to obtain feature quantities of the convexity-defect regions;
S37. Determining the largest opening: the feature quantities obtained in step S36 are calculated and compared to find the opening direction of the sunflower module.
The Canny edge detection algorithm is a multi-stage edge detection algorithm developed by John F. Canny in 1986. The purpose of edge detection is usually to significantly reduce the amount of image data while preserving the original structural attributes of the image. Many edge detection algorithms exist; although the Canny algorithm is old, it can be regarded as a standard algorithm for edge detection and is still widely used in research.
On the basis of the above technical solution, the present invention may further adopt the following technical features:
Step S11 further comprises: the camera moves with the robotic arm to a position 1 m above the whole layer of sunflower modules, triggering the camera to take a photo.
The erosion in step S14 uses a convolution kernel of size 120×120.
Step S12 further comprises:
S121. Histogram equalization: histogram equalization is applied to the photo obtained in step S11 to enhance image contrast;
S122. Median filtering: median filtering is applied to the image obtained in step S121 to eliminate image noise.
Step S13 further comprises: calculating the total number of pixels and the probability of occurrence of each gray value in the image obtained in step S12, and binarizing the image, with pixels above the threshold set to 255 and pixels below the threshold set to 0.
Step S15 further comprises:
S151. All contours in the image obtained in step S14 are detected and saved in a container; the area and perimeter of each contour are calculated and used to compute its circularity. If the area is greater than 12500 and less than 92500 and the circularity is greater than 0.65, the contour is judged to be a target region;
S152. All target regions satisfying the conditions in step S151 are traversed, the smallest circle enclosing each contour is found, and the center position and radius of the smallest circle are returned.
Step S21 further comprises: driven by the robotic arm, the camera moves to the center of each circle in turn and takes a photo; the captured image is a grayscale image.
In step S22, a 3×3 convolution kernel is used for median filtering.
The dilation in step S34 uses a 19×19 convolution kernel.
Step S35 further comprises: all contours in the image obtained in step S34 are found and stored in a container in turn, and the contours are traversed to find the largest one.
The feature quantities in step S36 include the starting point, the ending point, the farthest point, and the distance from the farthest point to the convexity defect.
Step S37 further comprises: the distance between the starting point and the ending point of each convexity defect is calculated and stored in a container; the calculated distances are compared to find the region with the maximum distance, and that region is the opening direction of the sunflower module.
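The largest-gap search of steps S35 to S37 can be sketched in plain Python. This is an illustrative approximation and not the patent's actual implementation: it builds a convex hull with Andrew's monotone chain and treats the longest hull edge (the chord spanning the missing contour points at the notch) as the widest opening, returning the direction of its midpoint from the contour centroid; the patent instead works on the start and end points of detected convexity defects.

```python
import math

def convex_hull(points):
    """Andrew's monotone-chain convex hull; returns hull vertices
    in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def opening_direction(contour):
    """Approximate S36/S37: take the longest hull edge (largest
    start-to-end distance over the contour's gaps) as the widest
    opening, and return the angle in degrees of its midpoint as
    seen from the contour centroid."""
    hull = convex_hull(contour)
    n = len(hull)
    best = max(range(n), key=lambda i: math.dist(hull[i], hull[(i+1) % n]))
    (x1, y1), (x2, y2) = hull[best], hull[(best+1) % n]
    cx = sum(p[0] for p in contour) / len(contour)
    cy = sum(p[1] for p in contour) / len(contour)
    return math.degrees(math.atan2((y1+y2)/2 - cy, (x1+x2)/2 - cx))

# A C-shaped contour: points on a circle with a 90-degree gap facing
# the +x direction, so the detected opening direction is about 0 deg.
contour = [(round(10*math.cos(math.radians(a)), 6),
            round(10*math.sin(math.radians(a)), 6))
           for a in range(45, 316, 15)]
print(opening_direction(contour))
```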
The present invention has the following technical features:
By automatically recognizing the position and orientation of the sunflower module, the invention makes it easy to grasp the module and rotate it to a specified orientation, so that the module can be aligned with the sensor when placed, facilitating the subsequent glue-filling step. The method of the invention has high precision, a rigorous process, and an extremely low error rate.
Brief Description of the Drawings
Fig. 1 is the image obtained in step S11 of the vision-based sunflower-module position and orientation detection method of the present invention.
Fig. 2 is the gray-level histogram of step S121 of the method.
Fig. 3 is the image obtained in step S121 of the method.
Fig. 4 is the image obtained in step S122 of the method.
Fig. 5 is the image obtained in step S13 of the method.
Fig. 6 is the image obtained in step S14 of the method.
Fig. 7 is the image obtained in step S21 of the method.
Fig. 8 is the image obtained in step S22 of the method.
Fig. 9 is the image obtained in step S23 of the method.
Fig. 10 is the image obtained in step S31 of the method.
Fig. 11 is the mask image obtained in step S32 of the method.
Fig. 12 is the ROI image obtained in step S32 of the method.
Fig. 13 is the image obtained in step S33 of the method.
Fig. 14 is the image obtained in step S34 of the method.
Fig. 15 is the image obtained in step S35 of the method.
Fig. 16 is the maximum-contour convex hull diagram obtained in step S36 of the method.
Fig. 17 is the maximum-contour convexity-defect diagram obtained in step S36 of the method.
Fig. 18 is a flowchart of the method.
Fig. 19 is a schematic diagram of the structure of the sunflower module.
Detailed Description of the Embodiments
The following specific embodiments further illustrate the methods and technical solutions provided by the present invention, but should not be construed as limiting the present invention.
Embodiment 1. As shown in Figs. 18 and 19, a vision-based method for detecting the position and orientation of a sunflower module comprises the following steps:
S1. Whole-layer module position recognition, comprising the following steps:
S11. Photographing: the camera moves with the robotic arm to 1 m above the whole layer of modules, triggering the camera to take a photo. The camera selection is derived from the requirements for detecting the position and orientation of a single module; the image captured at the worktable is shown in Fig. 1.
S12. Preprocessing: histogram equalization and median filtering are applied to the photo obtained in step S11 to remove noise, comprising the following steps:
S121. Histogram equalization: histogram equalization is applied to the photo obtained in step S11 to enhance image contrast.
Histogram equalization transforms the input image, through a gray-level mapping, into an output image with approximately the same number of pixels at each gray level. As shown in Fig. 1, the image is dark overall and its details are not prominent, so histogram equalization is first used to increase the contrast and make the details visible. A gray-level histogram is a two-dimensional plot, as shown in Fig. 2: the abscissa is the gray level of the pixels in the image, and the ordinate is the number (or frequency) of pixels at each gray level.
Specifically, first assume that the gray-level range of the image is 0 to 1 and continuous; the normalized histogram of the image is then the probability density function
p(x), 0 ≤ x ≤ 1.
From the properties of a probability density function, ∫₀¹ p(x) dx = 1.
Let the probability density function of the image before transformation be p_r(r) and that of the transformed image be p_s(s), with the transformation s = f(r), where r and s are the gray values of the input and output images at any point. Probability theory then gives p_s(s) ds = p_r(r) dr.
For the transformed probability density function to satisfy p_s(s) = 1, 0 ≤ s ≤ 1 (i.e., for the histogram to be uniform), we must have ds = p_r(r) dr.
Integrating both sides with respect to r gives s = f(r) = ∫₀^r p_r(w) dw.
For gray values in [0, 255], the result is multiplied by the maximum gray value D_max; the gray-level equalization transform is then D_B = D_max · Σ_{i=0}^{D_A} n_i / N, where D_B is the gray value after transformation, D_A is the gray value before transformation, n_i is the number of pixels with gray level i, and N is the total number of pixels.
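The discrete equalization transform described above can be sketched in plain Python as a lookup table built from the cumulative histogram (an illustrative re-implementation, not the patent's code; in practice a library routine would be used):

```python
def equalize_hist(img, d_max=255):
    """Histogram equalization: map each gray level D_A through the
    scaled cumulative distribution, D_B = round(D_max * CDF(D_A))."""
    flat = [p for row in img for p in row]
    n = len(flat)
    # Histogram of gray levels 0..d_max.
    hist = [0] * (d_max + 1)
    for p in flat:
        hist[p] += 1
    # Cumulative distribution function.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total / n)
    lut = [round(d_max * c) for c in cdf]
    return [[lut[p] for p in row] for row in img]

# A low-contrast 2x4 image: gray values squeezed into 100..103
# are spread across the full 0..255 range.
img = [[100, 100, 101, 101],
       [102, 102, 103, 103]]
print(equalize_hist(img))  # → [[64, 64, 128, 128], [191, 191, 255, 255]]
```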
S122. Median filtering: median filtering is applied to the image obtained in step S121 to eliminate image noise.
After histogram equalization, the image noise is obvious, so median filtering is used to eliminate it and facilitate subsequent processing. The median filter is a statistical ranking filter: for a given point in the original image, it takes the median of the sorted values of all pixels in a neighborhood centered on that point as the response at that point.
In this embodiment, a 3×3 median filter is swept over every pixel of the image, and the 5th-ranked value of the sorted neighborhood replaces the original pixel value. Compared with a linear filter, this method better suppresses salt-and-pepper noise: a linear filter averages the noise values into the result, whereas the median filter simply ignores outliers.
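A minimal plain-Python sketch of the 3×3 median filter (illustrative only; border pixels are simply copied here, whereas a production implementation would pad or replicate the edges):

```python
def median_filter_3x3(img):
    """3x3 median filter: each interior pixel is replaced by the
    5th-ranked (middle) value of its sorted 3x3 neighborhood."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]   # borders kept unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]   # median of 9 values
    return out

# A flat gray patch with one "salt" outlier: the median removes it.
noisy = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 10]]
print(median_filter_3x3(noisy))  # → [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
```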
S13. Threshold segmentation: the image obtained in step S12 is binarized by threshold segmentation, here using the maximum between-class variance method (Otsu's method).
It can be seen that the gray value of the target region differs greatly from that of the surrounding black ring, so threshold segmentation can be used to filter out the target region. However, because the illumination is uneven (for example, under the strong light on the right, the gray value of the black region differs noticeably from that of other similar regions), a single fixed threshold cannot be used to segment all captured images.
Specifically, as shown in Fig. 5, the total number of pixels and the probability of occurrence of each gray value are calculated. Traversing the image, the number of pixels with gray level i is counted and recorded as n_i, where i ranges from 0 to 255; the total number of pixels is N = Σ_{i=0}^{255} n_i, and the probability of gray value i is p_i = n_i / N. A threshold T divides the pixels of the image into two classes, A₁ and A₂. A₁ consists of the pixels with gray values in 0 to T−1, with probability P₁ = Σ_{i=0}^{T−1} p_i and mean gray value μ₁ = (1/P₁) Σ_{i=0}^{T−1} i·p_i. A₂ consists of the pixels with gray values in T to 255, with probability P₂ = 1 − P₁ and mean gray value μ₂ = (1/P₂) Σ_{i=T}^{255} i·p_i, so that the mean gray value of the whole image is μ = P₁μ₁ + P₂μ₂. The between-class variance of the two regions A₁ and A₂ is σ = P₁P₂(μ₁ − μ₂)². The value of T that maximizes σ is found by calculation, and the image is binarized: pixels above the threshold are set to 255, and pixels below it to 0.
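The search for the T that maximizes σ = P₁P₂(μ₁ − μ₂)² can be sketched directly from the formulas above (an illustrative, unoptimized version; practical implementations use cumulative sums instead of recomputing the class statistics for every T):

```python
def otsu_threshold(img):
    """Return the threshold T that maximizes the between-class
    variance sigma = P1 * P2 * (mu1 - mu2)**2."""
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = [0] * 256
    for p in flat:
        hist[p] += 1
    prob = [h / n for h in hist]
    best_t, best_sigma = 0, -1.0
    for t in range(1, 256):
        p1 = sum(prob[:t])              # class A1: gray values 0..t-1
        p2 = 1.0 - p1                   # class A2: gray values t..255
        if p1 == 0.0 or p2 == 0.0:
            continue
        mu1 = sum(i * prob[i] for i in range(t)) / p1
        mu2 = sum(i * prob[i] for i in range(t, 256)) / p2
        sigma = p1 * p2 * (mu1 - mu2) ** 2
        if sigma > best_sigma:
            best_t, best_sigma = t, sigma
    return best_t

def binarize(img, t):
    """Pixels at or above T become 255, the rest 0."""
    return [[255 if p >= t else 0 for p in row] for row in img]

# A bimodal toy image: dark cluster around 50, bright cluster around 200.
img = [[50, 51, 52, 200], [201, 202, 50, 200]]
print(otsu_threshold(img))  # → 53
```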
S14. Erosion: closed holes are formed by eroding the image obtained in step S13.
As shown in Fig. 6, the binarized image is eroded to obtain the closed regions of the circular targets. The erosion uses a convolution kernel of size 120×120.
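Binary erosion with a square structuring element can be sketched as follows (plain Python, illustrative; the patent's 120×120 kernel on a full-size image is replaced by a 3×3 element so the toy example stays readable, and out-of-bounds neighbors are treated as background):

```python
def erode(binary, k=3):
    """Binary erosion with a k x k square structuring element: a pixel
    stays 255 only if every pixel under the element is 255."""
    h, w, r = len(binary), len(binary[0]), k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if all(0 <= y + dy < h and 0 <= x + dx < w
                   and binary[y + dy][x + dx] == 255
                   for dy in range(-r, r + 1) for dx in range(-r, r + 1)):
                out[y][x] = 255
    return out

# A solid 5x5 white block: erosion strips one pixel from each side,
# leaving only the inner 3x3 core.
block = [[255] * 5 for _ in range(5)]
print(erode(block))
```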
S15. Judging and returning data: target regions are found in the image obtained in step S14 by judging the area and circularity of the hole contours.
S151. All contours in the image obtained in step S14 are detected and saved in a container.
The contour-shape judgment is based on the isoperimetric theorem: among closed geometric shapes of equal perimeter, the circle encloses the largest area. As an inequality, 4πS ≤ L², where S is the area enclosed by the curve and L is the perimeter of the closed curve. The circularity is defined as 4πS/L². The area S of a contour can be obtained from the number of pixels occupied by its connected region.
For contour i, its area S and perimeter L are calculated, giving the circularity R = 4πS/L². If S satisfies 12500 < S < 92500 and R satisfies R > 0.65, the contour is considered circular, i.e., a target region.
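The area and circularity test can be sketched as follows (plain Python, illustrative; R equals 1 for an ideal circle and decreases for less circular shapes):

```python
import math

def is_target_region(area, perimeter,
                     area_min=12500, area_max=92500, roundness_min=0.65):
    """Circularity test from step S151: R = 4*pi*S / L**2, combined
    with the area bounds 12500 < S < 92500 and R > 0.65."""
    if perimeter == 0:
        return False
    roundness = 4 * math.pi * area / perimeter ** 2
    return area_min < area < area_max and roundness > roundness_min

# An ideal circle of radius 100 px: S = pi*r^2 ~ 31416, L = 2*pi*r ~ 628,
# giving R = 1, so it passes.
r = 100
print(is_target_region(math.pi * r * r, 2 * math.pi * r))
# A thin 1000 x 31.4 rectangle of about the same area has R ~ 0.09
# and is rejected despite satisfying the area bounds.
print(is_target_region(31400, 2 * (1000 + 31.4)))
```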
S152. All target regions satisfying the conditions in step S151 are traversed, the smallest circle enclosing each contour is found, and the center position and radius of the smallest circle are returned.
S2. Single-module position recognition. After the 12 hole positions have been identified in step S1, the following steps are performed:
S21. Photographing: the camera photographs a single sunflower module.
After the positions of the 12 holes have been identified, the camera, driven by the robotic arm, moves to the center of each circle in turn and takes a photo.
Specifically, regarding camera selection and resolution calculation: the detection accuracy is 0.1 mm and the diameter of the sunflower module is 160 mm, so a field of view of about 200×200 mm is appropriate, giving a minimum resolution of 4 megapixels. Since practical applications generally use 3 to 5 times the calculated value to ensure accuracy, the actual resolution should be at least 12 megapixels.
Therefore, the Daheng Imaging model MER-1220-32U3M-C, a GigE industrial camera with a resolution of 12.2 megapixels, was finally selected.
The captured image is a grayscale image; the image taken at the working position 20 cm above the module is shown in Fig. 7.
S22. Preprocessing: median filtering is applied to the photo obtained in step S21 to remove noise. As shown in Fig. 8, a 3×3 convolution kernel is used for the median filtering, as described in step S1.
S23. Detection: Hough circle detection is performed on the sunflower module in the image obtained in step S22.
Hough circle detection is applied to the sunflower module; testing determined a minimum distance of 45 between circle centers, a minimum circle radius of 200, and a maximum of 250. The detection result is shown in Fig. 9.
S24. Finding the smallest circle: all circles in the image obtained in step S23 are traversed, the smallest circle is found, and its center position and radius are returned.
It can be seen that after Hough circle detection, all detected circles surround the target region, and the smallest of them is the desired one. Therefore all found circles R₁, R₂, …, R_n are traversed, the smallest circle R_i is found, and its center position (x_i, y_i) and radius r_i are returned.
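Step S24 reduces to a minimum over the detected radii. A plain-Python sketch (the circle tuples below are hypothetical example values, not data from the patent):

```python
def smallest_circle(circles):
    """Given Hough-detected circles as (x, y, r) tuples, return the one
    with the smallest radius, i.e. the circle tightest around the target."""
    return min(circles, key=lambda c: c[2])

# Hypothetical detections around the same target, radii in the
# patent's 200..250 search range.
detected = [(320, 240, 248.0), (318, 242, 231.5), (321, 239, 205.0)]
print(smallest_circle(detected))  # → (321, 239, 205.0)
```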
至此,找到了单个太阳花模组的位置,并进行下述步骤来寻找太阳花模组的方向。So far, the location of a single Sunflower module has been found, and the following steps are performed to find the direction of the Sunflower module.
S3. Direction recognition, carried out in parallel with step S2, finds the direction of the sunflower module and comprises the following steps:
S31. Highlight the image edges: for the image obtained in step S21, represent the edge contour as the difference between the binarized image and its eroded image.
To reduce unnecessary noise, obtain a clear outline of the image, and make the blob edges stand out, the difference between the binarized image and its eroded image is used to represent the internal gradient of the image, i.e., its edge contour. As shown in Figure 10, the internal gradient is a type of morphological gradient: subtracting the eroded image from the original image yields a difference image known as the internal gradient of the image.
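A minimal NumPy sketch of this internal gradient: a 3×3 erosion keeps a pixel only when its whole neighborhood is foreground, so subtracting the eroded image from the binary image leaves exactly the one-pixel boundary. In practice this is usually `img - cv2.erode(img, kernel)` or `cv2.morphologyEx(img, cv2.MORPH_GRADIENT, kernel)`.

```python
import numpy as np

def erode_3x3(binary):
    """A pixel survives erosion only if its whole 3x3 neighborhood is 1."""
    padded = np.pad(binary, 1, mode="constant")
    out = np.ones_like(binary)
    for dy in range(3):
        for dx in range(3):
            out &= padded[dy:dy + binary.shape[0], dx:dx + binary.shape[1]]
    return out

square = np.zeros((7, 7), dtype=np.uint8)
square[1:6, 1:6] = 1                   # a 5x5 foreground block
edge = square - erode_3x3(square)      # internal gradient
print(edge.sum())                      # -> 16, the boundary of the 5x5 block
```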
S32. Mask processing, used to reveal the largest gap: take the center of the smallest circle obtained in step S24 as the mask center and multiply the mask with the image to be processed to obtain the ROI image.
To reveal the largest gap and remove the redundant sunflower features, the central area of the image is extracted in preparation for the subsequent search for the largest opening. The mask operation extracts the ROI. When making the mask, as shown in Figure 11, the mask center is the smallest-circle center (xi, yi) from the Hough detection. Multiplying the mask with the image to be processed, as shown in Figure 12, yields the ROI image: pixel values inside the ROI are unchanged, and pixels outside are set to 0.
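The mask multiplication can be sketched as follows; the center and radius are illustrative values, not taken from the patent.

```python
import numpy as np

def circular_roi(img, cx, cy, radius):
    """Keep pixels inside the circle (cx, cy, radius); zero everything else."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = ((xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2).astype(img.dtype)
    return img * mask

img = np.full((9, 9), 7, dtype=np.uint8)   # uniform test image
roi = circular_roi(img, 4, 4, 2)           # ROI centered at (4, 4)
print(roi[4, 4], roi[0, 0])                # -> 7 0
```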
S33. Edge detection: as shown in Figure 13, apply the Canny edge detection algorithm to the image obtained in step S32 to detect the actual edges.
To identify the edges of the image, the Canny edge detection algorithm is used, which detects as many actual edges as possible while minimizing the false positives caused by noise.
Specifically: first remove noise by convolving the image with a Gaussian smoothing filter.

Then compute the gradient magnitude and direction. Image edges can point in different directions, so the classic Canny algorithm uses four gradient operators to compute the horizontal, vertical, and two diagonal gradients separately. In practice, however, four separate operators are rarely used; here the Sobel operator computes the horizontal and vertical gradients Gx and Gy of the original image pixel matrix A.

The gradient magnitude is then G = sqrt(Gx² + Gy²), and the gradient direction is θ = arctan(Gy / Gx).
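The Sobel gradient step above can be sketched directly on the pixel matrix A; a real pipeline would call `cv2.Sobel` or `scipy.ndimage` instead of the explicit loop.

```python
import numpy as np

KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])   # horizontal Sobel
KY = KX.T                                              # vertical Sobel

def sobel_gradients(A):
    """Return gradient magnitude and direction (radians) of image A."""
    A = A.astype(float)
    padded = np.pad(A, 1, mode="edge")
    Gx = np.zeros_like(A)
    Gy = np.zeros_like(A)
    h, w = A.shape
    for y in range(h):
        for x in range(w):
            win = padded[y:y + 3, x:x + 3]
            Gx[y, x] = (win * KX).sum()
            Gy[y, x] = (win * KY).sum()
    return np.hypot(Gx, Gy), np.arctan2(Gy, Gx)        # magnitude, direction

# A vertical step edge yields a purely horizontal gradient (direction 0).
A = np.hstack([np.zeros((5, 3)), np.ones((5, 3)) * 255])
mag, ang = sobel_gradients(A)
print(mag[2, 2], ang[2, 2])   # -> 1020.0 0.0 at the edge column
```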
Non-maximum suppression is an edge-thinning method. The gradient edges obtained so far are usually several pixels wide rather than one, so the gradient map is still "blurry". Non-maximum suppression keeps the local maximum gradient and suppresses all other gradient values, preserving only the sharpest location of each gradient transition. Algorithm: 1) compare the gradient strength of the current point with that of the points along the positive and negative gradient directions; 2) if the current point's gradient strength is the largest among them, keep its value; otherwise suppress it, i.e., set it to 0. For example, if the current point's gradient direction points straight up (90°), it is compared vertically with the pixels directly above and below it.
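A simplified sketch of this suppression step, quantizing the gradient direction to horizontal or vertical only (a full Canny implementation also handles the two diagonal cases):

```python
import numpy as np

def nms(mag, ang):
    """Keep a pixel only if its gradient magnitude is the largest along its
    (quantized) gradient direction; ang is in radians."""
    out = np.zeros_like(mag)
    h, w = mag.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            a = np.degrees(ang[y, x]) % 180
            if a < 45 or a >= 135:           # mostly horizontal gradient
                n1, n2 = mag[y, x - 1], mag[y, x + 1]
            else:                             # mostly vertical gradient
                n1, n2 = mag[y - 1, x], mag[y + 1, x]
            if mag[y, x] >= n1 and mag[y, x] >= n2:
                out[y, x] = mag[y, x]
    return out

# A 3-pixel-wide "blurry" vertical edge thins to its strongest column.
mag = np.zeros((5, 7))
mag[:, 2], mag[:, 3], mag[:, 4] = 2, 5, 2    # ramp peaking at column 3
ang = np.zeros((5, 7))                        # gradient points horizontally
thin = nms(mag, ang)
print(np.nonzero(thin[2])[0])                 # -> [3], only the peak survives
```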
Hysteresis thresholding requires two thresholds, a high one and a low one: 1) if the magnitude at a pixel exceeds the high threshold, the pixel is kept as an edge pixel; 2) if the magnitude is below the low threshold, the pixel is excluded; 3) if the magnitude lies between the two thresholds, the pixel is kept only if it is connected to a pixel above the high threshold. The high-to-low threshold ratio is generally between 2:1 and 3:1. In this embodiment, a high threshold of 59 and a low threshold of 13 give good results.
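The three hysteresis rules reduce to a flood fill from the strong pixels through the weak ones, sketched here with the embodiment's thresholds of 59 and 13:

```python
import numpy as np
from collections import deque

def hysteresis(mag, low=13, high=59):
    """Keep strong pixels and any weak pixels 8-connected to them."""
    strong = mag >= high
    weak = mag >= low
    keep = strong.copy()
    q = deque(zip(*np.nonzero(strong)))
    while q:                                  # flood-fill from strong pixels
        y, x = q.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mag.shape[0] and 0 <= nx < mag.shape[1]
                        and weak[ny, nx] and not keep[ny, nx]):
                    keep[ny, nx] = True
                    q.append((ny, nx))
    return keep

# A weak segment touching a strong pixel is kept; an isolated weak pixel is not.
mag = np.zeros((3, 6))
mag[1, 0:3] = [80, 20, 20]    # strong pixel with a weak tail
mag[1, 5] = 20                # isolated weak pixel
edges = hysteresis(mag)
print(edges[1])               # -> [ True  True  True False False False]
```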
S34. Dilation: dilate the highlighted parts of the image obtained in step S33.
Because the processed edges contain many breaks, dilation is applied. Dilation expands the highlighted parts of the image, as shown in Figure 14, producing a larger highlighted area than in the original image. The dilation kernel is 19×19.
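A sketch of the dilation, with a 3×3 kernel for readability (the embodiment uses 19×19): each pixel takes the maximum of its neighborhood, so gaps narrower than the kernel are bridged.

```python
import numpy as np

def dilate_3x3(binary):
    """Each output pixel is 1 if any pixel in its 3x3 neighborhood is 1."""
    padded = np.pad(binary, 1, mode="constant")
    out = np.zeros_like(binary)
    for dy in range(3):
        for dx in range(3):
            out |= padded[dy:dy + binary.shape[0], dx:dx + binary.shape[1]]
    return out

# Two edge fragments separated by a one-pixel break become connected.
line = np.zeros((1, 7), dtype=np.uint8)
line[0, [0, 1, 2, 4, 5, 6]] = 1        # break at column 3
print(dilate_3x3(line)[0])             # -> [1 1 1 1 1 1 1]
```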
S35. Find the largest contour: traverse all contours in the image obtained in step S34 and obtain the largest one.
The image now contains several contours, and the largest must be found for convex hull and convexity defect detection. A contour generally corresponds to a series of points, i.e., a curve in the image. First find all contours C1, C2, …, Cn and store them in a container; then traverse all contours to find the largest one, Ci, as shown in Figure 15.
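The traversal of step S35 is a maximum over stored contours. The point lists below are invented placeholders; in an OpenCV pipeline the contours would come from `cv2.findContours` and the size criterion would typically be `cv2.contourArea` rather than the point count used here.

```python
# Hypothetical contours as lists of (x, y) points.
contours = [
    [(0, 0), (1, 0), (1, 1)],                       # C1
    [(3, 3), (6, 3), (6, 6), (3, 6), (3, 4)],       # C2
    [(8, 8), (9, 8)],                               # C3
]
largest = max(contours, key=len)   # traverse and keep the largest
print(len(largest))                # -> 5
```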
S36. Convex hull and convexity defect detection: perform convex hull and convexity defect detection on the largest contour obtained in step S35 to obtain the feature quantities of each convexity defect region.
The convex hull of the largest contour (Figure 16) and its convexity defects (Figure 17) show that the largest contour contains many gaps and protrusions. To obtain the largest opening, and from it the sunflower direction, convex hull and convexity defect detection are performed on the image.
Convex hull and convexity defect detection give a good description of an object's shape or contour. For a given set of points in the 2D plane, the convex hull is the convex polygon formed by connecting the outermost points; it contains every point in the set. A convexity defect is a concave section of the contour, and each defect region has four feature quantities: the start point, the end point, the farthest point, and the distance from the farthest point to the hull.
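Steps S36-S37 can be sketched together: compute the convex hull (Andrew's monotone chain, keeping collinear boundary points), treat each run of contour points that fall off the hull as a convexity defect, and pick the defect whose start and end points are farthest apart. The contour is an invented example (a square with a notch), and `defect_spans` is a simplified stand-in for OpenCV's `cv2.convexityDefects`.

```python
import math

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order,
    keeping collinear points on the boundary."""
    pts = sorted(set(points))
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                                   - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) < 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]

def defect_spans(contour, hull):
    """Return (start, end) hull points bracketing each run of non-hull
    contour points -- a simplified stand-in for convexity defect detection."""
    hull_set = set(hull)
    n, spans, i = len(contour), [], 0
    while i < n:
        if contour[i] not in hull_set:
            start = contour[i - 1]          # hull point before the run
            j = i
            while j < n and contour[j] not in hull_set:
                j += 1
            spans.append((start, contour[j % n]))
            i = j
        else:
            i += 1
    return spans

# Square contour with a notch in its top side: (5, 7) dips below the hull.
contour = [(0, 0), (10, 0), (10, 10), (6, 10), (5, 7), (4, 10), (0, 10)]
hull = convex_hull(contour)
spans = defect_spans(contour, hull)

# The opening is the defect with the largest start-to-end distance.
opening = max(spans, key=lambda s: math.dist(*s))
print(opening, math.dist(*opening))   # -> ((6, 10), (4, 10)) 2.0
```

In the patent's pipeline the start/end points come directly from the defect feature quantities of step S36, and the span with the greatest distance marks the opening direction sought in step S37.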
S37. Determine the largest opening: compute and compare the feature quantities obtained in step S36 to find the opening direction of the sunflower module.
Compute the distances L1, L2, …, Ln between the start and end points of each convexity defect, store them in a container, compare them, and find the maximum distance Li; the region containing Li is the opening direction of the sunflower module.

The description of the above embodiment is only intended to help understand the method of the present invention and its core idea. It should be noted that those of ordinary skill in the art may make several improvements and modifications to the present invention without departing from its principles, and such improvements and modifications also fall within the protection scope of the claims of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011397186.0A CN112767304B (en) | 2020-12-04 | 2020-12-04 | A Vision-Based Detection Method for the Position and Orientation of Sunflower Modules |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112767304A CN112767304A (en) | 2021-05-07 |
CN112767304B true CN112767304B (en) | 2023-02-28 |
Family
ID=75693407
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011397186.0A Active CN112767304B (en) | 2020-12-04 | 2020-12-04 | A Vision-Based Detection Method for the Position and Orientation of Sunflower Modules |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112767304B (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101615252A (en) * | 2008-06-25 | 2009-12-30 | 中国科学院自动化研究所 | An Adaptive Image Text Information Extraction Method |
CN103208002A (en) * | 2013-04-10 | 2013-07-17 | 桂林电子科技大学 | Method and system used for recognizing and controlling gesture and based on hand profile feature |
CN104647893A (en) * | 2015-02-09 | 2015-05-27 | 西安科赛图像科技有限责任公司 | Overprinting error detection method based on cross line |
CN106409711A (en) * | 2016-09-12 | 2017-02-15 | 佛山市南海区广工大数控装备协同创新研究院 | Solar silicon wafer defect detecting system and method |
CN106802113A (en) * | 2016-12-23 | 2017-06-06 | 西安交通大学 | Intelligent hit telling system and method based on many shell hole algorithm for pattern recognitions |
CN106855951A (en) * | 2016-12-16 | 2017-06-16 | 陕西科技大学 | A kind of grain kind quality detecting method based on computer vision |
CN107766784A (en) * | 2016-08-20 | 2018-03-06 | 宋坤骏 | A kind of novel video people counting algorithm |
CN108573476A (en) * | 2018-03-14 | 2018-09-25 | 浙江大学山东工业技术研究院 | Detection method of screw hole position on the surface of sunflower module based on vision |
CN108596925A (en) * | 2018-03-14 | 2018-09-28 | 浙江大学山东工业技术研究院 | The heronsbill module surface screw hole site image processing method of view-based access control model |
CN110288619A (en) * | 2018-03-14 | 2019-09-27 | 浙江大学山东工业技术研究院 | Detection method of screw hole position on the surface of sunflower module based on vision |
CN110503633A (en) * | 2019-07-29 | 2019-11-26 | 西安理工大学 | A method for surface defect detection of applique ceramic discs based on image difference |
CN110866903A (en) * | 2019-11-08 | 2020-03-06 | 北京工业大学 | Ping-pong ball identification method based on Hough circle transformation technology |
2020-12-04: Application CN202011397186.0A filed; patent CN112767304B granted, status Active.
Non-Patent Citations (2)
Title |
---|
Analysis and Design of a Machine Vision-Based Wafer Defect Detection System; Chen Zhishan; China Masters' Theses Full-text Database, Information Science and Technology; 2019-09-15; pp. 11-22 *
Research on Machine Vision-Based Gear Defect Detection Methods; Zhang Yuting, Shao Yong, Gu Guipeng, Zan Peng; Industrial Control Computer; 2017-12-31; pp. 90-94 *
Also Published As
Publication number | Publication date |
---|---|
CN112767304A (en) | 2021-05-07 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||