WO2017177717A1 - 基于颜色和梯度的元件定位方法和系统 - Google Patents

基于颜色和梯度的元件定位方法和系统 Download PDF

Info

Publication number
WO2017177717A1
WO2017177717A1 (PCT/CN2016/112882)
Authority
WO
WIPO (PCT)
Prior art keywords
image
tested
gradient
template
edge
Prior art date
Application number
PCT/CN2016/112882
Other languages
English (en)
French (fr)
Inventor
林建民
Original Assignee
广州视源电子科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州视源电子科技股份有限公司 filed Critical 广州视源电子科技股份有限公司
Publication of WO2017177717A1 publication Critical patent/WO2017177717A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30141 Printed circuit board [PCB]

Definitions

  • The present invention relates to the field of automated optical inspection, and more particularly to color and gradient based component positioning methods and systems.
  • PCB (printed circuit board) inspection currently relies mostly on AOI (Automatic Optical Inspection) systems. Automatic optical inspection is an essential part of the industrial production process: the surface state of the finished product is captured optically, and image processing is used to detect foreign objects or surface defects.
  • Detecting wrong, missing, and reversed electronic components is a common application in circuit board defect detection: a machine automatically scans the circuit board with a camera to acquire images, extracts a partial image of each electronic component, judges through image processing whether the component is wrong, missing, or reversed, and finally displays or marks suspected defective components for easy viewing and repair.
  • The first problem to solve in inspecting electronic components is their precise positioning; only after an accurate positioning result is obtained can defects such as wrong, missing, and reversed parts be detected.
  • In traditional AOI systems, the precise positioning of electronic components is mainly obtained by template matching on color images: a template image of the electronic component, obtained when a worker prepares the inspection template, is searched for in the region to be searched to obtain the component's position information.
  • However, the information considered by this color-image template matching method is too limited: it depends only on the color information of the three channels of the color image, is relatively susceptible to illumination and to surrounding regions of similar color, and the positioning result is not stable enough.
  • a color and gradient based component positioning method comprising the following steps:
  • V channel values of the pixels in the HSV image of the image to be tested are respectively replaced with the gradient magnitudes of the corresponding pixel points in the gradient amplitude image of the image to be tested, and the target image to be tested is obtained;
  • the target template image is used to perform template matching on the target image to be tested, and the position of the device to be tested in the image to be tested is determined.
  • a component positioning system based on color and gradient including the following units:
  • a first acquiring unit configured to acquire a template image of the component to be tested and an image to be tested that is actually captured by the component to be tested;
  • a second acquiring unit configured to acquire an HSV image and a gradient amplitude image of the template image, and acquire an HSV image and a gradient amplitude image of the image to be tested;
  • a synthesizing unit configured to replace a V channel value of each pixel point in the HSV image of the template image with a gradient amplitude of a corresponding pixel point in the gradient amplitude image of the template image, to obtain a target template image
  • the synthesizing unit is further configured to replace the V channel values of the pixels in the HSV image of the image to be tested with the gradient magnitudes of the corresponding pixel points in the gradient amplitude image of the image to be tested, to obtain the target image to be tested;
  • the matching unit is configured to perform template matching on the target image to be tested by using the target template image to determine a position of the component to be tested in the image to be tested.
  • According to the above scheme, the template image of the component to be tested and an image to be tested actually captured of the component are acquired; then the HSV image and gradient magnitude image of the template image, and the HSV image and gradient magnitude image of the image to be tested, are acquired. The V channel value of each pixel in the HSV image of the template image is replaced with the gradient magnitude of the corresponding pixel in the gradient magnitude image of the template image, giving the target template image; the V channel value of each pixel in the HSV image of the image to be tested is likewise replaced with the gradient magnitude of the corresponding pixel in the gradient magnitude image of the image to be tested, giving the target image to be tested. Template matching with the target template image against the target image to be tested then locates the component to be tested in the image to be tested.
  • In this scheme the image is converted to the HSV color space and the V channel of the three HSV channels is replaced with the gradient magnitude of the image. Because the gradient information of the image is taken into account in template matching, the influence of varying illumination can be effectively reduced, effectively improving the stability of electronic component positioning.
  • FIG. 1 is a schematic flow chart of a color and gradient based component positioning method in one embodiment
  • FIG. 2 is a schematic structural view of a component positioning system based on color and gradient in one embodiment
  • FIG. 3 is a schematic structural view of a component positioning system based on color and gradient in one embodiment
  • FIG. 4 is a schematic structural view of a component positioning system based on color and gradient in one embodiment.
  • the color and gradient based component positioning method in this embodiment includes the following steps:
  • Step S101 acquiring a template image of the component to be tested and an image to be tested that is actually captured by the component to be tested;
  • The component to be tested can be an electronic component on the PCB, such as a resistor, an inductor, or a capacitor; the template image contains only image information of the component to be tested; the image to be tested is a PCB image containing the component to be tested, obtained by actually photographing the PCB that carries the component;
  • Step S102 acquiring an HSV image and a gradient amplitude image of the template image, and acquiring an HSV image and a gradient amplitude image of the image to be tested;
  • The HSV image of the template image is the template image expressed in the three channels H (hue), S (saturation), and V (value); each pixel in the HSV image of the template image corresponds to a pixel of the template image. The gradient magnitude image of the template image is composed of the gradient magnitude of each pixel of the template image; each pixel in it likewise corresponds to a pixel of the template image.
  • The HSV image of the image to be tested is that image expressed in the three channels H (hue), S (saturation), and V (value); each pixel in it corresponds to a pixel of the image to be tested. The gradient magnitude image of the image to be tested is composed of the gradient magnitude of each pixel of the image to be tested; each pixel in it likewise corresponds to a pixel of the image to be tested.
  • Step S103 replacing the V channel values of the pixels in the HSV image of the template image with the gradient magnitudes of the corresponding pixel points in the gradient amplitude image of the template image, to obtain the target template image;
  • Step S104 replacing the V channel values of the pixels in the HSV image of the image to be tested with the gradient magnitudes of the corresponding pixel points in the gradient amplitude image of the image to be tested, to obtain the target image to be tested;
  • Step S105: perform template matching on the target image to be tested with the target template image, and determine the position of the component to be tested in the image to be tested.
  • In this embodiment the image is converted to the HSV color space and the V channel value of the three HSV channels is replaced with the gradient magnitude of the image. Because the gradient information of the image is considered during template matching, the influence of varying illumination can be effectively reduced, effectively improving the stability of electronic component positioning.
  • The step of acquiring the HSV image of the template image comprises: converting the template image from the RGB color space to the HSV color space to obtain the HSV image of the template image;
  • the step of acquiring the HSV image of the image to be tested comprises: converting the image to be tested from the RGB color space to the HSV color space to obtain the HSV image of the image to be tested.
  • The HSV images of both the template image and the image to be tested are obtained by converting the original image from the RGB color space to the HSV color space; the RGB color space data of an ordinary image is relatively easy to obtain and convenient to convert to HSV color space data.
  • Preferably, the formula for converting from the RGB color space to the HSV color space is (with Cmax = max(R, G, B), Cmin = min(R, G, B), Δ = Cmax - Cmin):
  • H = 60° × ((G - B)/Δ mod 6) when Cmax = R; H = 60° × ((B - R)/Δ + 2) when Cmax = G; H = 60° × ((R - G)/Δ + 4) when Cmax = B; H = 0 when Δ = 0; S = Δ/Cmax (S = 0 when Cmax = 0); V = Cmax.
  • Here R, G, and B are the values of the three RGB channels of any pixel in the image before conversion, H, S, and V are the values of the three HSV channels of the corresponding pixel after conversion, and "mod 6" denotes the remainder after division by 6. The image before conversion can be the template image or the image to be tested, so the HSV images of both can be obtained from the above formula.
  • The step of acquiring the gradient magnitude image of the template image comprises: acquiring a grayscale image of the template image from the template image, performing a convolution operation on it to obtain a first edge image in the lateral direction and a second edge image in the longitudinal direction, and acquiring the gradient magnitude image of the template image from the first edge image and the second edge image;
  • the step of acquiring the gradient magnitude image of the image to be tested comprises: acquiring a grayscale image of the image to be tested from the image to be tested, performing a convolution operation on it to obtain a third edge image in the lateral direction and a fourth edge image in the longitudinal direction, and acquiring the gradient magnitude image of the image to be tested from the third edge image and the fourth edge image.
  • In this embodiment the gradient magnitude image is acquired from the grayscale image, and the edge images obtained by convolving the grayscale image contain the edge information of the component to be tested; therefore the gradient magnitude images of both the template image and the image to be tested also contain that edge information. This property of the gradient magnitude image helps the detection and positioning of the component to be tested and improves the stability of positioning.
  • The step of acquiring the grayscale image of the template image from the template image comprises: converting the template image from the RGB color space to grayscale to obtain the grayscale image of the template image;
  • the step of acquiring the grayscale image of the image to be tested from the image to be tested comprises: converting the image to be tested from the RGB color space to grayscale to obtain the grayscale image of the image to be tested.
  • The grayscale images of both the template image and the image to be tested are obtained by converting the original image from the RGB color space to grayscale; the RGB color space data of an ordinary image is relatively easy to obtain and convenient to convert to grayscale data.
  • Preferably, the formula for converting from the RGB color space to grayscale is: Gray = 0.30 × R + 0.59 × G + 0.11 × B, where R, G, and B are the values of the three RGB channels of any pixel in the image before conversion and Gray is the gray value of the corresponding pixel after conversion; the image before conversion can be the template image or the image to be tested.
  • The step of convolving the grayscale image of the template image comprises: performing the convolution operation with any one of the Sobel operator, the Robinson operator, or the Laplacian operator;
  • the step of convolving the grayscale image of the image to be tested comprises: performing the convolution operation with any one of the Sobel operator, the Robinson operator, or the Laplacian operator.
  • In this embodiment, one of the Sobel, Robinson, or Laplacian operators can be flexibly selected to convolve the image, making it easy to obtain edge images that contain the edge information of the component to be tested.
  • The step of acquiring the gradient magnitude image of the template image from the first edge image and the second edge image comprises: computing the gradient magnitude of each pixel of the template image from the gradient magnitude of each pixel in the first edge image and of the corresponding pixel in the second edge image;
  • the step of acquiring the gradient magnitude image of the image to be tested from the third edge image and the fourth edge image comprises: computing the gradient magnitude of each pixel of the image to be tested from the gradient magnitude of each pixel in the third edge image and of the corresponding pixel in the fourth edge image.
  • After the edge image in the lateral direction and the edge image in the longitudinal direction are obtained, the gradient magnitudes of corresponding pixels in the two images can be combined to obtain the gradient magnitude of the corresponding pixel in the original image, thereby obtaining the gradient magnitude image.
  • In a preferred embodiment, the lateral and longitudinal edge images are obtained by convolution with the Sobel operator, and the sum of the square of a pixel's gradient magnitude in the lateral edge image and the square of the corresponding pixel's gradient magnitude in the longitudinal edge image is taken as the gradient magnitude of the corresponding pixel in the gradient magnitude image, thereby obtaining the gradient magnitude image.
  • The formula for computing the gradient magnitude of a pixel in the gradient magnitude image from the gradient magnitude of each pixel in the lateral edge image and of the corresponding pixel in the longitudinal edge image is: m = x² + y², where x is the gradient magnitude of any pixel in the lateral edge image, y is the gradient magnitude of the corresponding pixel in the longitudinal edge image, and m is the gradient magnitude of the corresponding pixel in the gradient magnitude image; in this way the gradient magnitude image of the template image or of the image to be tested can be obtained.
  • The gradient magnitude matrix of the gradient magnitude image may also be computed from the gradient magnitude matrix of all pixels in the lateral edge image and the gradient magnitude matrix of the corresponding pixels in the longitudinal edge image, by the formula: M = IM_X² + IM_Y², where IM_X is the gradient magnitude matrix of all pixels in the lateral edge image, IM_Y is the gradient magnitude matrix of the corresponding pixels in the longitudinal edge image, and M is the gradient magnitude matrix of the gradient magnitude image; squaring a matrix here means squaring each element of the matrix.
  • The step of performing template matching on the target image to be tested with the target template image and determining the position of the component to be tested comprises: selecting any pixel in the target image to be tested and, from the selected pixel, taking an image of the same size as the target template image within the target image to be tested as a sub-image, wherein the lateral edge of the sub-image is parallel to the lateral edge of the target image to be tested, the longitudinal edge of the sub-image is parallel to the longitudinal edge of the target image to be tested, and the selected pixel is a vertex of the sub-image; then computing the matching degree between each sub-image and the target template image, selecting the sub-image with the highest matching degree, and taking its position in the target image to be tested as the position of the component to be tested.
  • In this embodiment, as long as some sub-image has the highest matching degree with the target template image, the position of that sub-image can be determined to be the position of the component to be tested.
  • In a specific embodiment, template matching slides the target template image over the target image to be tested to locate the component to be tested, generally by computing the matching degree between the target template image and the corresponding sub-image of the target image to be tested.
  • The matching degree for template matching is usually computed in one of the following ways:
  • (1) Squared-difference matching: R(x, y) = Σ_{x', y'} (T(x', y') - I(x + x', y + y'))², where T denotes the value carrying the color and gradient information of a pixel in the target template image, I denotes the value carrying the color and gradient information of a pixel in the target image to be tested, x' and y' are the horizontal and vertical coordinates of each pixel in the target template image, and x and y are the horizontal and vertical coordinates of each pixel in the target image to be tested. The smaller the matching value R(x, y), the higher the degree of matching. Normalized squared-difference matching divides this sum by sqrt(Σ T(x', y')² · Σ I(x + x', y + y')²).
  • (2) Correlation matching: R(x, y) = Σ_{x', y'} T(x', y') · I(x + x', y + y'). This correlation matching uses a multiplication between the target template image and the target image to be tested; the larger the matching value, the higher the degree of matching, and 0 indicates the worst match. The normalized variant divides by the same normalization factor as above.
  • (3) CV_TM_CCOEFF correlation matching: R(x, y) = Σ_{x', y'} T'(x', y') · I'(x + x', y + y'), where T'(x', y') = T(x', y') - 1/(w·h) · Σ_{x'', y''} T(x'', y'') and I'(x + x', y + y') = I(x + x', y + y') - 1/(w·h) · Σ_{x'', y''} I(x + x'', y + y''); w and h denote the numbers of pixels in the lateral and longitudinal directions of the target template image, respectively. This correlation matching matches the deviation of the target template image from its mean against the deviation of the target image to be tested from its mean: 1 indicates a perfect match, -1 a bad match, and 0 no correlation (a random sequence).
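The mean-subtraction step of the CV_TM_CCOEFF variant above can be sketched as follows. This is a minimal NumPy illustration of the T' definition, not the patent's implementation:

```python
import numpy as np

def center_template(t):
    """T'(x', y') = T(x', y') - (1 / (w * h)) * sum over T:
    subtract the template's mean from every element, as in the
    CV_TM_CCOEFF preprocessing step (t: h x w array)."""
    return t - t.mean()
```

After centering, the template's deviations from its own mean are what get correlated against the image, which makes the score insensitive to a constant brightness offset.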
  • The invention provides a color and gradient based component positioning method that, from the color and gradient information in the image of a known component, locates the component in the image to be tested. The positioning is accurate and provides an important basis for detecting wrong, missing, and reversed components. By considering the gradient magnitude information of the image, inaccurate positioning caused by illumination is avoided, and the stability of component positioning is improved.
  • Based on the above method, the present invention also provides a component positioning system, an embodiment of which is described in detail below.
  • the color and gradient based component positioning system in this embodiment includes a first acquisition unit 210, a second acquisition unit 220, a synthesis unit 230, and a matching unit 240, wherein:
  • a first acquiring unit 210, configured to acquire a template image of the component to be tested and an image to be tested actually captured of the component to be tested;
  • a second acquiring unit 220 configured to acquire an HSV image and a gradient amplitude image of the template image, and acquire an HSV image and a gradient amplitude image of the image to be tested;
  • a synthesizing unit 230 configured to replace a V channel value of each pixel point in the HSV image of the template image with a gradient amplitude of a corresponding pixel point in the gradient amplitude image of the template image, to obtain a target template image;
  • the synthesizing unit 230 is further configured to replace the V channel values of the pixel points in the HSV image of the image to be tested with the gradient magnitudes of the corresponding pixel points in the gradient magnitude image of the image to be tested, to obtain the target image to be tested;
  • the matching unit 240 is configured to perform template matching on the target image to be tested with the target template image, and determine the position of the component to be tested in the image to be tested.
  • the second obtaining unit 220 converts the template image from the RGB color space to the HSV color space to obtain an HSV image of the template image;
  • the second obtaining unit 220 also converts the image to be tested from the RGB color space to the HSV color space to obtain an HSV image of the image to be tested.
  • the second obtaining unit 220 includes the following units:
  • a grayscale obtaining unit 221, configured to acquire a grayscale image of the template image according to the template image
  • a convolution unit 222 configured to perform a convolution operation on the grayscale image of the template image to obtain a first edge image in a lateral direction and a second edge image in a longitudinal direction;
  • a gradient obtaining unit 223, configured to acquire a gradient magnitude image of the template image according to the first edge image and the second edge image;
  • the grayscale acquiring unit 221 is further configured to acquire a grayscale image of the image to be tested according to the image to be tested;
  • the convolution unit 222 is further configured to perform a convolution operation on the grayscale image of the image to be measured to obtain a third edge image in the lateral direction and a fourth edge image in the longitudinal direction;
  • the gradient obtaining unit 223 is further configured to acquire a gradient magnitude image of the image to be tested according to the third edge image and the fourth edge image.
  • the grayscale acquiring unit 221 converts the template image from the RGB color space to the grayscale space to obtain a grayscale image of the template image;
  • the grayscale acquiring unit 221 converts the image to be tested from the RGB color space to the grayscale space, and obtains a grayscale image of the image to be tested.
  • the convolution unit 222 performs a convolution operation on the grayscale image of the template image by any one of a Sobel operator, a Rubinson operator, or a Laplacian operator;
  • the convolution unit 222 performs a convolution operation on the grayscale image of the image to be measured by any one of the Sobel operator, the Rubinson operator, or the Laplacian operator.
  • The gradient obtaining unit 223 computes the gradient magnitude of each pixel of the template image from the gradient magnitude of each pixel in the first edge image and of the corresponding pixel in the second edge image, obtaining the gradient magnitude image of the template image;
  • the gradient obtaining unit 223 computes the gradient magnitude of each pixel of the image to be tested from the gradient magnitude of each pixel in the third edge image and of the corresponding pixel in the fourth edge image, obtaining the gradient magnitude image of the image to be tested.
  • the matching unit 240 includes the following units:
  • the selecting unit 241 is configured to select any pixel in the target image to be tested, and obtain an image of the same size as the target template image as the sub-image of the target image to be tested in the target image to be tested according to the selected pixel, wherein The lateral edge of the sub-image is parallel to the lateral edge of the target image to be tested, the longitudinal edge of the sub-image is parallel to the longitudinal edge of the target image to be tested, and the selected pixel is a vertex of the sub-image;
  • the locating unit 242 is configured to compute the matching degree between each sub-image and the target template image, select the sub-image with the highest matching degree, and determine that the position of that sub-image in the target image to be tested is the position of the component to be tested in the image to be tested.
  • The color and gradient based component positioning system of the present invention corresponds one-to-one with the color and gradient based component positioning method of the present invention; the technical features and beneficial effects described in the embodiments of the method above apply equally to the embodiments of the system.

Abstract

A color and gradient based component positioning method and system. An HSV image and a gradient magnitude image of a template image, and an HSV image and a gradient magnitude image of an image to be tested, are acquired. The V channel value of each pixel in the HSV image of the template image is replaced with the gradient magnitude of the corresponding pixel in the gradient magnitude image of the template image to obtain a target template image; the V channel value of each pixel in the HSV image of the image to be tested is likewise replaced with the gradient magnitude of the corresponding pixel in the gradient magnitude image of the image to be tested to obtain a target image to be tested. Template matching with the target template image and the target image to be tested then locates the component to be tested in the image to be tested. Because the gradient information of the image is considered during template matching, this scheme effectively reduces the influence of varying illumination and thereby effectively improves the stability of electronic component positioning.

Description

Color and gradient based component positioning method and system
Technical Field
The present invention relates to the field of automated optical inspection, and in particular to color and gradient based component positioning methods and systems.
Background
At present, PCB (printed circuit board) inspection mostly uses AOI (Automatic Optic Inspection) systems. Automatic optical inspection is an essential part of the industrial production process: the surface state of the finished product is captured optically, and image processing is used to detect foreign objects or surface defects. Detecting wrong, missing, and reversed electronic components is a common application in circuit board defect detection: a machine automatically scans the circuit board with a camera to acquire images, extracts a partial image of each electronic component, judges through image processing whether the component is wrong, missing, or reversed, and finally displays or marks suspected defective components for easy viewing and repair.
The first problem to solve in inspecting electronic components is their precise positioning; only after an accurate positioning result is obtained can defects such as wrong, missing, and reversed parts be detected. In traditional AOI systems, the precise positioning of electronic components is mainly obtained by template matching on color images, that is, a template image of the electronic component, obtained when a worker prepares the inspection template, is searched for in the region to be searched to obtain the component's position information.
However, the information considered by this color-image template matching method is too limited: it depends only on the color information of the three channels of the color image, is relatively susceptible to illumination and to surrounding regions of similar color, and the positioning result is not stable enough.
Summary of the Invention
Accordingly, in view of the insufficient stability of the positioning results of existing component positioning methods, it is necessary to provide a color and gradient based component positioning method and system.
A color and gradient based component positioning method comprises the following steps:
acquiring a template image of the component to be tested and an image to be tested obtained by actually photographing the component to be tested;
acquiring an HSV image and a gradient magnitude image of the template image, and acquiring an HSV image and a gradient magnitude image of the image to be tested;
replacing the V channel value of each pixel in the HSV image of the template image with the gradient magnitude of the corresponding pixel in the gradient magnitude image of the template image, to obtain a target template image;
replacing the V channel value of each pixel in the HSV image of the image to be tested with the gradient magnitude of the corresponding pixel in the gradient magnitude image of the image to be tested, to obtain a target image to be tested;
performing template matching on the target image to be tested with the target template image, and determining the position of the component to be tested in the image to be tested.
A color and gradient based component positioning system comprises the following units:
a first acquiring unit, configured to acquire a template image of the component to be tested and an image to be tested obtained by actually photographing the component to be tested;
a second acquiring unit, configured to acquire an HSV image and a gradient magnitude image of the template image, and an HSV image and a gradient magnitude image of the image to be tested;
a synthesizing unit, configured to replace the V channel value of each pixel in the HSV image of the template image with the gradient magnitude of the corresponding pixel in the gradient magnitude image of the template image, to obtain a target template image;
the synthesizing unit is further configured to replace the V channel value of each pixel in the HSV image of the image to be tested with the gradient magnitude of the corresponding pixel in the gradient magnitude image of the image to be tested, to obtain a target image to be tested;
a matching unit, configured to perform template matching on the target image to be tested with the target template image, and determine the position of the component to be tested in the image to be tested.
According to the above scheme of the present invention, the template image of the component to be tested and the image to be tested obtained by actually photographing the component are first acquired; then the HSV image and gradient magnitude image of the template image and of the image to be tested are acquired. The V channel value of each pixel in the HSV image of the template image is replaced with the gradient magnitude of the corresponding pixel in the gradient magnitude image of the template image to obtain the target template image; the V channel value of each pixel in the HSV image of the image to be tested is likewise replaced with the gradient magnitude of the corresponding pixel in the gradient magnitude image of the image to be tested to obtain the target image to be tested. Template matching with the target template image and the target image to be tested then locates the component to be tested in the image to be tested. In this scheme the image is converted to the HSV color space and the V channel of the three HSV channels is replaced with the gradient magnitude of the image; because the gradient information is considered during template matching, the influence of varying illumination can be effectively reduced, effectively improving the stability of electronic component positioning.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a color and gradient based component positioning method in one embodiment;
FIG. 2 is a schematic structural diagram of a color and gradient based component positioning system in one embodiment;
FIG. 3 is a schematic structural diagram of a color and gradient based component positioning system in one embodiment;
FIG. 4 is a schematic structural diagram of a color and gradient based component positioning system in one embodiment.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here merely explain the present invention and do not limit its scope of protection.
Referring to FIG. 1, an embodiment of the color and gradient based component positioning method of the present invention is shown. The method in this embodiment includes the following steps:
Step S101: acquire a template image of the component to be tested and an image to be tested obtained by actually photographing the component to be tested;
The component to be tested can be an electronic component on a PCB, such as a resistor, an inductor, or a capacitor; the template image contains only image information of the component to be tested; the image to be tested is a PCB image containing the component to be tested, obtained by actually photographing the PCB that carries the component;
Step S102: acquire an HSV image and a gradient magnitude image of the template image, and an HSV image and a gradient magnitude image of the image to be tested;
The HSV image of the template image is the template image expressed in the three channels H (hue), S (saturation), and V (value); each pixel in it corresponds to a pixel of the template image. The gradient magnitude image of the template image is composed of the gradient magnitude of each pixel of the template image; each pixel in it likewise corresponds to a pixel of the template image. The HSV image of the image to be tested is that image expressed in the three channels H (hue), S (saturation), and V (value); each pixel in it corresponds to a pixel of the image to be tested. The gradient magnitude image of the image to be tested is composed of the gradient magnitude of each pixel of the image to be tested; each pixel in it likewise corresponds to a pixel of the image to be tested;
Step S103: replace the V channel value of each pixel in the HSV image of the template image with the gradient magnitude of the corresponding pixel in the gradient magnitude image of the template image, to obtain the target template image;
Step S104: replace the V channel value of each pixel in the HSV image of the image to be tested with the gradient magnitude of the corresponding pixel in the gradient magnitude image of the image to be tested, to obtain the target image to be tested;
Step S105: perform template matching on the target image to be tested with the target template image, and determine the position of the component to be tested in the image to be tested.
In this embodiment the image is converted to the HSV color space and the V channel value of the three HSV channels is replaced with the gradient magnitude of the image; because the gradient information of the image is considered during template matching, the influence of varying illumination can be effectively reduced, effectively improving the stability of electronic component positioning.
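The channel replacement of steps S103/S104 can be sketched as below. This is a minimal NumPy illustration under the assumption that the HSV image is stored as an H x W x 3 array with V on the last-axis index 2; the patent does not prescribe a particular array layout:

```python
import numpy as np

def build_target_image(hsv_img, grad_mag):
    """Replace the V channel (last-axis index 2) of an H x W x 3 HSV
    image with the per-pixel gradient magnitude (H x W array),
    keeping the H and S channels unchanged."""
    target = hsv_img.astype(float).copy()
    target[..., 2] = grad_mag  # V channel <- gradient magnitude
    return target
```

The same function produces both the target template image (from the template's HSV and gradient images) and the target image to be tested.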
In one embodiment, the step of acquiring the HSV image of the template image includes the following steps:
converting the template image from the RGB color space to the HSV color space to obtain the HSV image of the template image;
the step of acquiring the HSV image of the image to be tested includes the following steps:
converting the image to be tested from the RGB color space to the HSV color space to obtain the HSV image of the image to be tested.
In this embodiment the HSV images of both the template image and the image to be tested are obtained by converting the original image from the RGB color space to the HSV color space; the RGB color space data of an ordinary image is relatively easy to obtain and convenient to convert to HSV color space data.
Preferably, the formula for converting from the RGB color space to the HSV color space is (with Cmax = max(R, G, B), Cmin = min(R, G, B), Δ = Cmax - Cmin):
H = 60° × ((G - B)/Δ mod 6) when Cmax = R; H = 60° × ((B - R)/Δ + 2) when Cmax = G; H = 60° × ((R - G)/Δ + 4) when Cmax = B; H = 0 when Δ = 0
S = Δ/Cmax (S = 0 when Cmax = 0)
V = Cmax
Here R, G, and B are the values of the three RGB channels of any pixel in the image before conversion, H, S, and V are the values of the three HSV channels of the corresponding pixel after conversion, and mod 6 denotes the remainder after division by 6. The image before conversion can be the template image or the image to be tested, and the HSV images of both can be obtained from the above formula.
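A per-pixel sketch of the conversion above, assuming R, G, and B are scaled to [0, 1] (8-bit data would need rescaling first):

```python
def rgb_to_hsv_pixel(r, g, b):
    """Convert one RGB pixel (values in [0, 1]) to HSV using the
    standard formulas described in the text (H uses 'mod 6')."""
    cmax, cmin = max(r, g, b), min(r, g, b)
    delta = cmax - cmin
    if delta == 0:
        h = 0.0
    elif cmax == r:
        h = 60.0 * (((g - b) / delta) % 6)
    elif cmax == g:
        h = 60.0 * ((b - r) / delta + 2)
    else:  # cmax == b
        h = 60.0 * ((r - g) / delta + 4)
    s = 0.0 if cmax == 0 else delta / cmax
    v = cmax  # V = Cmax, as stated in the text
    return h, s, v
```

For example, pure red maps to H = 0°, pure green to H = 120°, and any gray pixel (Δ = 0) to H = 0, S = 0.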
In one embodiment, the step of acquiring the gradient magnitude image of the template image includes the following steps:
acquiring a grayscale image of the template image from the template image, performing a convolution operation on the grayscale image of the template image to obtain a first edge image in the lateral direction and a second edge image in the longitudinal direction, and acquiring the gradient magnitude image of the template image from the first edge image and the second edge image;
the step of acquiring the gradient magnitude image of the image to be tested includes the following steps:
acquiring a grayscale image of the image to be tested from the image to be tested, performing a convolution operation on it to obtain a third edge image in the lateral direction and a fourth edge image in the longitudinal direction, and acquiring the gradient magnitude image of the image to be tested from the third edge image and the fourth edge image.
In this embodiment the gradient magnitude image is acquired from the grayscale image, and the edge images obtained by convolving the grayscale image contain the edge information of the component to be tested; therefore the gradient magnitude images of both the template image and the image to be tested also contain that edge information. This property of the gradient magnitude image helps the detection and positioning of the component to be tested and improves the stability of positioning.
In one embodiment, the step of acquiring the grayscale image of the template image from the template image includes the following steps:
converting the template image from the RGB color space to grayscale to obtain the grayscale image of the template image;
the step of acquiring the grayscale image of the image to be tested from the image to be tested includes the following steps:
converting the image to be tested from the RGB color space to grayscale to obtain the grayscale image of the image to be tested.
In this embodiment the grayscale images of both the template image and the image to be tested are obtained by converting the original image from the RGB color space to grayscale; the RGB color space data of an ordinary image is relatively easy to obtain and convenient to convert to grayscale data.
Preferably, the formula for converting from the RGB color space to grayscale is:
Gray = 0.30 × R + 0.59 × G + 0.11 × B
where R, G, and B are the values of the three RGB channels of any pixel in the image before conversion, Gray is the gray value of the corresponding pixel after conversion, and the image before conversion can be the template image or the image to be tested.
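The weighted sum above is a single dot product per pixel; a minimal NumPy sketch:

```python
import numpy as np

def rgb_to_gray(rgb):
    """Gray = 0.30*R + 0.59*G + 0.11*B, applied along the last axis of
    an H x W x 3 (or plain length-3) RGB array."""
    weights = np.array([0.30, 0.59, 0.11])
    return rgb @ weights
```

A neutral pixel such as (100, 100, 100) maps to a gray value of 100, since the three weights sum to 1.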
In one embodiment, the step of convolving the grayscale image of the template image includes the following steps:
performing the convolution operation on the grayscale image of the template image with any one of the Sobel operator, the Robinson operator, or the Laplacian operator;
the step of convolving the grayscale image of the image to be tested includes the following steps:
performing the convolution operation on the grayscale image of the image to be tested with any one of the Sobel operator, the Robinson operator, or the Laplacian operator.
In this embodiment, one of the Sobel, Robinson, or Laplacian operators can be flexibly selected to convolve the image, making it easy to obtain edge images that contain the edge information of the component to be tested.
In one embodiment, the step of acquiring the gradient magnitude image of the template image from the first edge image and the second edge image includes the following steps:
computing the gradient magnitude of the corresponding pixel of the template image from the gradient magnitude of each pixel in the first edge image and the gradient magnitude of the corresponding pixel in the second edge image, to obtain the gradient magnitude image of the template image;
the step of acquiring the gradient magnitude image of the image to be tested from the third edge image and the fourth edge image includes the following steps:
computing the gradient magnitude of the corresponding pixel of the image to be tested from the gradient magnitude of each pixel in the third edge image and the gradient magnitude of the corresponding pixel in the fourth edge image, to obtain the gradient magnitude image of the image to be tested.
In this embodiment, after the edge image in the lateral direction and the edge image in the longitudinal direction are obtained, the gradient magnitudes of corresponding pixels in the two images can be combined to obtain the gradient magnitude of the corresponding pixel in the original image, thereby obtaining the gradient magnitude image.
In a preferred embodiment, the edge image in the lateral direction and the edge image in the longitudinal direction can be obtained by convolution with the Sobel operator, and the sum of the square of a pixel's gradient magnitude in the lateral edge image and the square of the corresponding pixel's gradient magnitude in the longitudinal edge image is taken as the gradient magnitude of the corresponding pixel in the gradient magnitude image, thereby obtaining the gradient magnitude image.
The formula for computing the gradient magnitude of the corresponding pixel in the gradient magnitude image from the gradient magnitude of each pixel in the lateral edge image and the gradient magnitude of the corresponding pixel in the longitudinal edge image is:
m = x² + y²
where x is the gradient magnitude of any pixel in the lateral edge image, y is the gradient magnitude of the corresponding pixel in the longitudinal edge image, and m is the gradient magnitude of the corresponding pixel in the gradient magnitude image; in this way the gradient magnitude image of the template image or of the image to be tested can be obtained.
Alternatively, the gradient magnitude matrix of the gradient magnitude image can be computed from the gradient magnitude matrix of all pixels in the lateral edge image and the gradient magnitude matrix of all corresponding pixels in the longitudinal edge image, by the formula:
M = IM_X² + IM_Y²
where IM_X is the gradient magnitude matrix of all pixels in the lateral edge image, IM_Y is the gradient magnitude matrix of all corresponding pixels in the longitudinal edge image, and M is the gradient magnitude matrix of all corresponding pixels in the gradient magnitude image; squaring a matrix here means squaring each element of the matrix.
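The Sobel-based preferred embodiment can be sketched as below; a minimal NumPy illustration using a 'valid' 3x3 scan and the element-wise squared sum described in the text (the kernels are the standard Sobel kernels, which the patent names but does not spell out):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T  # longitudinal-direction kernel

def convolve3(img, kernel):
    """'Valid' 3x3 sliding-window correlation over a grayscale image."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * kernel)
    return out

def gradient_magnitude(gray):
    """M = IM_X**2 + IM_Y**2, squaring element-wise as the text describes."""
    im_x = convolve3(gray, SOBEL_X)  # lateral edge image
    im_y = convolve3(gray, SOBEL_Y)  # longitudinal edge image
    return im_x ** 2 + im_y ** 2
```

On a vertical step edge, IM_X responds strongly while IM_Y is zero, so M highlights exactly the component boundaries the matching step relies on.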
In one embodiment, the step of performing template matching on the target image to be tested with the target template image and determining the position of the component to be tested in the image to be tested includes the following steps:
selecting any pixel in the target image to be tested and, from the selected pixel, taking an image of the same size as the target template image within the target image to be tested as a sub-image of the target image to be tested, wherein the lateral edge of the sub-image is parallel to the lateral edge of the target image to be tested, the longitudinal edge of the sub-image is parallel to the longitudinal edge of the target image to be tested, and the selected pixel is a vertex of the sub-image;
computing the matching degree between each sub-image and the target template image, selecting the sub-image corresponding to the highest matching degree, and determining that the position of that sub-image in the target image to be tested is the position of the component to be tested in the image to be tested.
In this embodiment, sub-images of the same size as the target template image are taken from the target image to be tested and their matching degrees with the target template image are computed; as soon as some sub-image has the highest matching degree with the target template image, its position can be determined to be the position of the component to be tested.
In a specific embodiment, template matching slides the target template image over the target image to be tested to locate the component to be tested, generally by computing the matching degree between the target template image and the corresponding sub-image of the target image to be tested. The matching degree for template matching is usually computed in one of the following ways:
(1) Squared-difference matching, with the formula:
R(x, y) = Σ_{x', y'} (T(x', y') - I(x + x', y + y'))²
where T denotes the value carrying the color and gradient information of a pixel in the target template image, I denotes the value carrying the color and gradient information of a pixel in the target image to be tested, x' and y' are the horizontal and vertical coordinates of each pixel in the target template image, and x and y are the horizontal and vertical coordinates of each pixel in the target image to be tested. The smaller the matching value R(x, y), the higher the degree of matching.
Normalized squared-difference matching, with the formula:
R(x, y) = Σ_{x', y'} (T(x', y') - I(x + x', y + y'))² / sqrt(Σ_{x', y'} T(x', y')² · Σ_{x', y'} I(x + x', y + y')²)
(2) Correlation matching
This correlation matching uses a multiplication between the target template image and the target image to be tested; the larger the matching value, the higher the degree of matching, and 0 indicates the worst match. The formula is:
R(x, y) = Σ_{x', y'} T(x', y') · I(x + x', y + y')
Normalized correlation matching, with the formula:
R(x, y) = Σ_{x', y'} T(x', y') · I(x + x', y + y') / sqrt(Σ_{x', y'} T(x', y')² · Σ_{x', y'} I(x + x', y + y')²)
(3) CV_TM_CCOEFF correlation matching
This correlation matching matches the deviation of the target template image from its mean against the deviation of the target image to be tested from its mean; 1 indicates a perfect match, -1 a bad match, and 0 no correlation (a random sequence). The formula is:
R(x, y) = Σ_{x', y'} T'(x', y') · I'(x + x', y + y')
where T'(x', y') = T(x', y') - 1/(w·h) · Σ_{x'', y''} T(x'', y'')
and I'(x + x', y + y') = I(x + x', y + y') - 1/(w·h) · Σ_{x'', y''} I(x + x'', y + y'')
and w and h denote the numbers of pixels in the lateral and longitudinal directions of the target template image, respectively.
CV_TM_CCOEFF normalized correlation matching, with the formula:
R(x, y) = Σ_{x', y'} T'(x', y') · I'(x + x', y + y') / sqrt(Σ_{x', y'} T'(x', y')² · Σ_{x', y'} I'(x + x', y + y')²)
The present invention provides a color and gradient based component positioning method that, from the color and gradient information in the image of a known component, locates the component in the image to be tested. The positioning is accurate and provides an important basis for detecting wrong, missing, and reversed components. By considering the gradient magnitude information of the image, inaccurate positioning caused by illumination is avoided, and the stability of component positioning is improved.
Based on the above color and gradient based component positioning method, the present invention also provides a component positioning system, an embodiment of which is described in detail below.
参见图2所示,为本发明的基于颜色和梯度的元件定位系统的实施例。该实施例中的基于颜色和梯度的元件定位系统包括第一获取单元210,第二获取单元220,合成单元230,匹配单元240,其中:
第一获取单元210,用于获取待测元件的模板图像和对待测元件进行实际拍摄的待测图像;
第二获取单元220,用于获取模板图像的HSV图像和梯度幅值图像,获取待测图像的HSV图像和梯度幅值图像;
合成单元230,用于将模板图像的HSV图像中各像素点的V通道数值分别替换为模板图像的梯度幅值图像中对应像素点的梯度幅值,获得目标模板图像;
合成单元230还用于将待测图像的HSV图像中各像素点的V通道数值分别替换为待测图像的梯度幅值图像中对应像素点的梯度幅值,获得目标待测图像;
匹配单元240,用于通过目标模板图像对目标待测图像进行模板匹配,确定待测元件在待测图像中所在的位置。
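The cooperation of the second acquisition unit 220 and the synthesis unit 230 (HSV conversion followed by V-channel replacement) can be sketched per pixel with the Python standard library. This is an illustrative example, not the claimed implementation; `colorsys` expects RGB values in [0, 1], and the gradient magnitudes are assumed to be pre-scaled to a comparable range.

```python
import colorsys

def to_target_image(rgb_pixels, grad_mag):
    """Build the 'target image' described above: convert each RGB pixel
    to HSV, then replace its V channel with the gradient magnitude of the
    corresponding pixel. rgb_pixels and grad_mag are nested lists of the
    same height and width; RGB components are floats in [0, 1]."""
    out = []
    for row_rgb, row_m in zip(rgb_pixels, grad_mag):
        out_row = []
        for (r, g, b), m in zip(row_rgb, row_m):
            h, s, _v = colorsys.rgb_to_hsv(r, g, b)
            out_row.append((h, s, m))   # V channel replaced by gradient magnitude
        out.append(out_row)
    return out
```

The resulting (H, S, gradient) triple keeps the chromatic information of the HSV image while substituting the illumination-sensitive V channel with edge strength, which is the core idea of the claimed target images.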
In one embodiment, the second acquisition unit 220 converts the template image from the RGB color space to the HSV color space to obtain the HSV image of the template image;
the second acquisition unit 220 also converts the image under test from the RGB color space to the HSV color space to obtain the HSV image of the image under test.
In one embodiment, as shown in FIG. 3, the second acquisition unit 220 includes the following units:
a grayscale acquisition unit 221, configured to obtain a grayscale image of the template image from the template image;
a convolution unit 222, configured to perform a convolution operation on the grayscale image of the template image, to obtain a first edge image in the horizontal direction and a second edge image in the vertical direction;
a gradient acquisition unit 223, configured to obtain the gradient magnitude image of the template image from the first edge image and the second edge image;
the grayscale acquisition unit 221 is further configured to obtain a grayscale image of the image under test from the image under test;
the convolution unit 222 is further configured to perform a convolution operation on the grayscale image of the image under test, to obtain a third edge image in the horizontal direction and a fourth edge image in the vertical direction;
the gradient acquisition unit 223 is further configured to obtain the gradient magnitude image of the image under test from the third edge image and the fourth edge image.
In one embodiment, the grayscale acquisition unit 221 converts the template image from the RGB color space to the grayscale space to obtain the grayscale image of the template image;
the grayscale acquisition unit 221 converts the image under test from the RGB color space to the grayscale space to obtain the grayscale image of the image under test.
In one embodiment, the convolution unit 222 performs the convolution operation on the grayscale image of the template image with any one of the Sobel operator, the Robinson operator, or the Laplacian operator;
the convolution unit 222 performs the convolution operation on the grayscale image of the image under test with any one of the Sobel operator, the Robinson operator, or the Laplacian operator.
In one embodiment, the gradient acquisition unit 223 computes the gradient magnitude of each pixel in the template image from the gradient magnitude of each pixel in the first edge image and the gradient magnitude of the corresponding pixel in the second edge image, obtaining the gradient magnitude image of the template image;
the gradient acquisition unit 223 computes the gradient magnitude of each pixel in the image under test from the gradient magnitude of each pixel in the third edge image and the gradient magnitude of the corresponding pixel in the fourth edge image, obtaining the gradient magnitude image of the image under test.
In one embodiment, as shown in FIG. 4, the matching unit 240 includes the following units:
an image selection unit 241, configured to select any pixel in the target image under test and, based on the selected pixel, extract from the target image under test an image of the same size as the target template image as a sub-image of the target image under test, where the horizontal edges of the sub-image are parallel to the horizontal edges of the target image under test, the vertical edges of the sub-image are parallel to the vertical edges of the target image under test, and the selected pixel is a vertex of the sub-image;
a positioning unit 242, configured to compute the matching value between each sub-image and the target template image, select the sub-image whose matching value represents the highest degree of match, and determine the position of that sub-image in the target image under test as the position of the component under test in the image under test.
The color- and gradient-based component positioning system of the present invention corresponds one-to-one with the color- and gradient-based component positioning method of the present invention; the technical features and beneficial effects set forth in the embodiments of the method above all apply to the embodiments of the system.
The technical features of the above embodiments may be combined in any manner; for brevity, not all possible combinations of the technical features of the above embodiments are described, but any combination of these features that involves no contradiction should be regarded as falling within the scope of this specification.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art may make several variations and improvements without departing from the concept of the present invention, all of which fall within the scope of protection of the present invention. Therefore, the scope of protection of this patent shall be governed by the appended claims.

Claims (10)

  1. A color- and gradient-based component positioning method, comprising the following steps:
    acquiring a template image of a component under test and an image under test obtained by actually photographing the component under test;
    acquiring an HSV image and a gradient magnitude image of the template image, and acquiring an HSV image and a gradient magnitude image of the image under test;
    replacing the V-channel value of each pixel in the HSV image of the template image with the gradient magnitude of the corresponding pixel in the gradient magnitude image of the template image, to obtain a target template image;
    replacing the V-channel value of each pixel in the HSV image of the image under test with the gradient magnitude of the corresponding pixel in the gradient magnitude image of the image under test, to obtain a target image under test;
    performing template matching on the target image under test with the target template image, to determine the position of the component under test in the image under test.
  2. The color- and gradient-based component positioning method according to claim 1, wherein:
    the step of acquiring the gradient magnitude image of the template image comprises:
    obtaining a grayscale image of the template image from the template image, performing a convolution operation on the grayscale image of the template image to obtain a first edge image in the horizontal direction and a second edge image in the vertical direction, and obtaining the gradient magnitude image of the template image from the first edge image and the second edge image;
    the step of acquiring the gradient magnitude image of the image under test comprises:
    obtaining a grayscale image of the image under test from the image under test, performing a convolution operation on the grayscale image of the image under test to obtain a third edge image in the horizontal direction and a fourth edge image in the vertical direction, and obtaining the gradient magnitude image of the image under test from the third edge image and the fourth edge image.
  3. The color- and gradient-based component positioning method according to claim 2, wherein:
    the step of performing the convolution operation on the grayscale image of the template image comprises:
    performing the convolution operation on the grayscale image of the template image with any one of the Sobel operator, the Robinson operator, or the Laplacian operator;
    the step of performing the convolution operation on the grayscale image of the image under test comprises:
    performing the convolution operation on the grayscale image of the image under test with any one of the Sobel operator, the Robinson operator, or the Laplacian operator.
  4. The color- and gradient-based component positioning method according to claim 2, wherein:
    the step of obtaining the gradient magnitude image of the template image from the first edge image and the second edge image comprises:
    computing the gradient magnitude of each pixel in the template image from the gradient magnitude of each pixel in the first edge image and the gradient magnitude of the corresponding pixel in the second edge image, to obtain the gradient magnitude image of the template image;
    the step of obtaining the gradient magnitude image of the image under test from the third edge image and the fourth edge image comprises:
    computing the gradient magnitude of each pixel in the image under test from the gradient magnitude of each pixel in the third edge image and the gradient magnitude of the corresponding pixel in the fourth edge image, to obtain the gradient magnitude image of the image under test.
  5. The color- and gradient-based component positioning method according to any one of claims 1 to 4, wherein the step of performing template matching on the target image under test with the target template image to determine the position of the component under test in the image under test comprises:
    selecting any pixel in the target image under test and, based on the selected pixel, extracting from the target image under test an image of the same size as the target template image as a sub-image of the target image under test, wherein the horizontal edges of the sub-image are parallel to the horizontal edges of the target image under test, the vertical edges of the sub-image are parallel to the vertical edges of the target image under test, and the selected pixel is a vertex of the sub-image;
    computing the matching value between each sub-image and the target template image, selecting the sub-image whose matching value represents the highest degree of match, and determining the position of that sub-image in the target image under test as the position of the component under test in the image under test.
  6. A color- and gradient-based component positioning system, comprising the following units:
    a first acquisition unit, configured to acquire a template image of a component under test and an image under test obtained by actually photographing the component under test;
    a second acquisition unit, configured to acquire an HSV image and a gradient magnitude image of the template image, and to acquire an HSV image and a gradient magnitude image of the image under test;
    a synthesis unit, configured to replace the V-channel value of each pixel in the HSV image of the template image with the gradient magnitude of the corresponding pixel in the gradient magnitude image of the template image, to obtain a target template image;
    the synthesis unit being further configured to replace the V-channel value of each pixel in the HSV image of the image under test with the gradient magnitude of the corresponding pixel in the gradient magnitude image of the image under test, to obtain a target image under test;
    a matching unit, configured to perform template matching on the target image under test with the target template image, to determine the position of the component under test in the image under test.
  7. The color- and gradient-based component positioning system according to claim 6, wherein:
    the second acquisition unit comprises the following units:
    a grayscale acquisition unit, configured to obtain a grayscale image of the template image from the template image;
    a convolution unit, configured to perform a convolution operation on the grayscale image of the template image, to obtain a first edge image in the horizontal direction and a second edge image in the vertical direction;
    a gradient acquisition unit, configured to obtain the gradient magnitude image of the template image from the first edge image and the second edge image;
    the grayscale acquisition unit being further configured to obtain a grayscale image of the image under test from the image under test;
    the convolution unit being further configured to perform a convolution operation on the grayscale image of the image under test, to obtain a third edge image in the horizontal direction and a fourth edge image in the vertical direction;
    the gradient acquisition unit being further configured to obtain the gradient magnitude image of the image under test from the third edge image and the fourth edge image.
  8. The color- and gradient-based component positioning system according to claim 7, wherein:
    the convolution unit performs the convolution operation on the grayscale image of the template image with any one of the Sobel operator, the Robinson operator, or the Laplacian operator;
    the convolution unit performs the convolution operation on the grayscale image of the image under test with any one of the Sobel operator, the Robinson operator, or the Laplacian operator.
  9. The color- and gradient-based component positioning system according to claim 7, wherein:
    the gradient acquisition unit computes the gradient magnitude of each pixel in the template image from the gradient magnitude of each pixel in the first edge image and the gradient magnitude of the corresponding pixel in the second edge image, to obtain the gradient magnitude image of the template image;
    the gradient acquisition unit computes the gradient magnitude of each pixel in the image under test from the gradient magnitude of each pixel in the third edge image and the gradient magnitude of the corresponding pixel in the fourth edge image, to obtain the gradient magnitude image of the image under test.
  10. The color- and gradient-based component positioning system according to any one of claims 6 to 9, wherein the matching unit comprises the following units:
    an image selection unit, configured to select any pixel in the target image under test and, based on the selected pixel, extract from the target image under test an image of the same size as the target template image as a sub-image of the target image under test, wherein the horizontal edges of the sub-image are parallel to the horizontal edges of the target image under test, the vertical edges of the sub-image are parallel to the vertical edges of the target image under test, and the selected pixel is a vertex of the sub-image;
    a positioning unit, configured to compute the matching value between each sub-image and the target template image, select the sub-image whose matching value represents the highest degree of match, and determine the position of that sub-image in the target image under test as the position of the component under test in the image under test.
PCT/CN2016/112882 2016-04-14 2016-12-29 Color- and gradient-based component positioning method and system WO2017177717A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610235180.0 2016-04-14
CN201610235180.0A CN105976354B (zh) 2016-04-14 2016-04-14 Color- and gradient-based component positioning method and system

Publications (1)

Publication Number Publication Date
WO2017177717A1 true WO2017177717A1 (zh) 2017-10-19

Family

ID=56988868

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/112882 WO2017177717A1 (zh) 2016-12-29 Color- and gradient-based component positioning method and system

Country Status (2)

Country Link
CN (1) CN105976354B (zh)
WO (1) WO2017177717A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111402280A (zh) * 2020-03-10 2020-07-10 西安电子科技大学 Image edge detection system and method based on a logarithmic image processing model
CN111931785A (zh) * 2020-06-19 2020-11-13 国网山西省电力公司吕梁供电公司 Edge detection method for infrared image targets of electric power equipment
CN113870293A (zh) * 2021-09-27 2021-12-31 东莞拓斯达技术有限公司 Image processing method and apparatus, electronic device, and storage medium
CN114972346A (zh) * 2022-07-29 2022-08-30 山东通达盛石材有限公司 Stone identification method based on computer vision
CN116645368A (zh) * 2023-07-27 2023-08-25 青岛伟东包装有限公司 Online visual inspection method for curled edges of cast film

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
CN105976354B (zh) 2016-04-14 2019-02-01 广州视源电子科技股份有限公司 Color- and gradient-based component positioning method and system
CN106528665B (zh) 2016-10-21 2019-09-03 广州视源电子科技股份有限公司 AOI device test file search method and system
CN107543507A (zh) 2017-09-15 2018-01-05 歌尔科技有限公司 Method and apparatus for determining a screen contour
CN109544552A (zh) 2018-12-06 2019-03-29 合刃科技(深圳)有限公司 Grating non-destructive inspection method and system
CN112634227A (zh) 2020-12-21 2021-04-09 广州镭晨智能科技有限公司 Inspection and marking method and apparatus for PCB panels, electronic device, and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN104899871A (zh) * 2015-05-15 2015-09-09 广东工业大学 Method for detecting empty soldering of IC component solder joints
CN105069466A (zh) * 2015-07-24 2015-11-18 成都市高博汇科信息科技有限公司 Pedestrian clothing color recognition method based on digital image processing
CN105354547A (zh) * 2015-10-30 2016-02-24 河海大学 Pedestrian detection method combining texture and color features
CN105976354A (zh) * 2016-04-14 2016-09-28 广州视源电子科技股份有限公司 Color- and gradient-based component positioning method and system

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN101308607A (zh) * 2008-06-25 2008-11-19 河海大学 Video-based multi-feature fusion tracking method for moving targets in a mixed traffic environment
US9524543B2 (en) * 2012-09-28 2016-12-20 Skyworks Solutions, Inc. Automated detection of potentially defective packaged radio-frequency modules
CN104504375B (zh) * 2014-12-18 2019-03-05 广州视源电子科技股份有限公司 Method and apparatus for recognizing PCB components



Also Published As

Publication number Publication date
CN105976354A (zh) 2016-09-28
CN105976354B (zh) 2019-02-01


Legal Events

Date Code Title Description
NENP Non-entry into the national phase
Ref country code: DE
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 16898527; Country of ref document: EP; Kind code of ref document: A1
122 Ep: pct application non-entry in european phase
Ref document number: 16898527; Country of ref document: EP; Kind code of ref document: A1