WO2023103426A1 - Automatic focusing method and device for visual inspection of parts - Google Patents

Automatic focusing method and device for visual inspection of parts (零件视觉检测的自动对焦方法和装置)

Info

Publication number
WO2023103426A1
Authority
WO
WIPO (PCT)
Prior art keywords
detected
image
focusing
focus
focus area
Prior art date
Application number
PCT/CN2022/110367
Other languages
English (en)
French (fr)
Inventor
詹明昊
侯晓楠
王春雷
Original Assignee
中电科机器人有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中电科机器人有限公司
Priority to PCT/CN2022/110367 (WO2023103426A1)
Priority to DE112022002746.0T (DE112022002746T5)
Publication of WO2023103426A1

Classifications

    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0075 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for altering, e.g. increasing, the depth of field or depth of focus
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00 - Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28 - Systems for automatic generation of focusing signals

Definitions

  • The invention belongs to the technical field of industrial visual inspection, and in particular relates to an automatic focusing method and device for visual inspection of parts.
  • In camera imaging, the process of making the image clear by adjusting the object distance and the image distance is called focusing.
  • Autofocus methods are generally implemented by measuring image sharpness, which also corresponds to people's subjective perception of clarity.
  • Image sharpness is evaluated by computing a gradient for each pixel of the image: for a captured image, the larger the gradient value, the sharper the image, and in theory the more accurate the focus.
  • Widely used autofocus methods are full-frame methods, that is, the gradient function is computed over the entire image and the object distance is adjusted to achieve the maximum average gradient value.
  • The gradient function is generally computed with a second-order differential operator such as the Laplacian operator.
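  • For illustration only (not part of the original filing), a minimal sketch of such a full-frame gradient measure, assuming OpenCV and NumPy and using the mean absolute Laplacian response as the score:

```python
import cv2
import numpy as np

def full_frame_sharpness(image_bgr: np.ndarray) -> float:
    """Average gradient value over the whole image, using the Laplacian operator."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    lap = cv2.Laplacian(gray, cv2.CV_64F)   # second-order differential operator
    return float(np.mean(np.abs(lap)))      # larger average gradient -> sharper image
```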
  • Existing autofocus methods only seek to maximize the full-frame gradient function during focusing. In industrial visual inspection, however, a large number of parts have step differences, that is, the workpiece has sections that do not lie on the same horizontal plane. In this case, if only the global gradient is used for focusing, the global gradient may reach its maximum while the section to be measured is still not in focus.
  • To address this, the present invention provides an automatic focusing method for visual inspection of parts, the parts having sections with step differences.
  • The autofocus method includes: acquiring an image of the part to be inspected; determining the focus area of the part to be inspected on the image of the part according to the inspection item of the part to be inspected; and focusing on the focus area.
  • Optionally, determining the focus area of the part to be inspected on the image according to the inspection item includes: determining the focus area on the image of the part to be inspected based on a region-of-interest extraction method according to the inspection item, where the focus area is the region of interest corresponding to the inspection item of the part to be inspected.
  • Optionally, before focusing on the focus area, the autofocus method further includes: determining a focus area of the part to be inspected on the image for each inspection item of the part to be inspected; correspondingly, focusing on the focus area includes focusing on the focus area corresponding to each inspection item of the part to be inspected.
  • Optionally, focusing on the focus area includes: focusing on the focus area based on an image sharpness evaluation algorithm.
  • Optionally, the inspection items of the part to be inspected include dimensions.
  • Another aspect provides an automatic focusing device for visual inspection of parts, the parts having sections with step differences. The autofocus device includes: an acquisition module, used to acquire an image of the part to be inspected; a determination module, used to determine the focus area of the part to be inspected on the image according to the inspection item of the part to be inspected; and a focusing module, used to focus on the focus area.
  • Optionally, the determination module is configured to: determine the focus area of the part to be inspected on the image based on a region-of-interest extraction method according to the inspection item, where the focus area is the region of interest corresponding to the inspection item of the part to be inspected.
  • Optionally, the autofocus device is further configured to: determine a focus area of the part to be inspected on the image for each inspection item of the part to be inspected.
  • Correspondingly, the focusing module is used to: focus on the focus area corresponding to each inspection item of the part to be inspected.
  • Optionally, the focusing module is used to: focus on the focus area based on an image sharpness evaluation algorithm.
  • Optionally, the inspection items of the part to be inspected include dimensions.
  • Another aspect provides an electronic device, including: a processor and a memory for storing executable instructions of the processor; wherein the processor is configured to execute any one of the above autofocus methods.
  • Another aspect provides a computer-readable storage medium in which at least one instruction, at least one program, a code set or an instruction set is stored; the at least one instruction, at least one program, code set or instruction set is loaded and executed by a processor to implement any one of the above autofocus methods.
  • FIG. 1 is a schematic structural diagram of a part provided by an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of an autofocus method for visual inspection of parts provided by an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of another autofocus method for visual inspection of parts provided by an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of an autofocus device for visual inspection of parts provided by an embodiment of the present invention.
  • Part visual inspection method refers to a method that uses machines instead of human eyes to measure and judge parts (or objects to be inspected).
  • the machine has an image acquisition device such as an industrial camera.
  • the image acquisition device is equipped with an optical sensor, which is used to photograph the part to convert the part into an image signal, and then the machine processes the image signal to obtain the parameters of the part, such as size and angle.
  • the part is usually placed horizontally, and the image acquisition device is located above the part.
  • A part with step differences means that, when the part is placed horizontally, it has surfaces (or sections) that are not on the same level, that is, several surfaces differ in height (this difference is called a step difference); in this case, the object distances between these surfaces and the image acquisition device are different.
  • For example, in the part shown in FIG. 1 there are a first surface 10 and a second surface 20; the first surface 10 is higher than the second surface 20, so the two surfaces form two step surfaces. The first surface 10 and the second surface 20 are sections that are not on the same horizontal plane; this embodiment does not limit the specific shape of the part or the number of sections that are not on the same horizontal plane.
  • Although the image acquisition device applies an autofocus method when photographing a part with step differences, with existing autofocus methods the portions of the captured image that involve such sections are still not clear.
  • For this reason, this embodiment provides an autofocus method for visual inspection of parts; referring to FIG. 2, the flow of the autofocus method provided in this embodiment is as follows:
  • Step 101: acquire an image of the part to be inspected.
  • After the part to be inspected is placed on the inspection platform, an image acquisition device photographs it to obtain an image of the part, which may be called the initial image.
  • The image acquisition device may be an area array camera, preferably a high-resolution area array camera.
  • Step 102: determine the focus area of the part to be inspected on the image according to the inspection items of the part to be inspected.
  • According to the inspection items of the part to be inspected, the focus areas to be inspected are determined on the image of the part.
  • The focus areas are determined with a region-of-interest (ROI) extraction method.
  • The focus area to be inspected is the region corresponding to the inspection item of the part to be inspected.
  • The region of interest is the area to be processed, outlined on the image of the part to be inspected as a box, circle, ellipse, irregular polygon, ring, or the like.
  • When the part has multiple inspection items, there are also multiple focus areas to be inspected: a focus area (also called an area to be inspected, or a focus frame) is determined for each inspection item, and the multiple focus areas are labeled, for example with serial numbers.
  • For the part shown in FIG. 1, the inspection items may be the outer diameter of the small gear and the outer diameter of the large gear; one focus area is set for the small gear and another for the large gear.
  • In other parts, the inspection item may be the diameter of each through-hole in a ring of through-holes, in which case a focus area is set for each through-hole so that each one is focused individually.
  • The inspection items of the part to be inspected include dimensions, with high accuracy requirements: for inner diameter, outer diameter, length, width, thickness and the like, the accuracy can reach the 0.1 mm level; for angles, the accuracy can reach the arc-second level.
  • The method for extracting the regions of interest may include the following steps:
  • 1) Gaussian denoising: a Gaussian convolution kernel is applied to the image of the part to be inspected to remove redundant background information such as speckles and impurities, leaving only rough contour information; the kernel itself is shown as a figure in the original description.
  • 2) Edge extraction: a Laplacian convolution kernel, for example a 3rd-order Laplacian operator, is used to extract the edge information of the denoised image; this kernel is likewise shown as a figure in the original description.
  • 3) Each closed edge is enclosed with a bounding box; each generated bounding box is a focus frame, which is stored in the device and used for focusing. The extracted focus frames are marked, for example with a serial number i.
  • In other embodiments, other extraction methods may also be used, such as prior-art methods or deep-learning-based extraction methods, which are not limited in this embodiment.
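  • For illustration only, a minimal sketch of the three extraction steps above, assuming OpenCV and NumPy; the 3x3 kernels below are commonly used defaults and merely stand in for the kernels that the filing shows only as figures:

```python
import cv2
import numpy as np

# Typical 3x3 kernels; the actual kernels in the filing are shown only as figures.
GAUSSIAN_KERNEL = np.array([[1, 2, 1],
                            [2, 4, 2],
                            [1, 2, 1]], dtype=np.float64) / 16.0
LAPLACIAN_KERNEL = np.array([[0,  1, 0],
                             [1, -4, 1],
                             [0,  1, 0]], dtype=np.float64)

def extract_focus_frames(image_bgr: np.ndarray) -> list[tuple[int, int, int, int]]:
    """Return focus frames (x, y, w, h), one bounding box per closed edge in the image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float64)
    denoised = cv2.filter2D(gray, -1, GAUSSIAN_KERNEL)             # 1) Gaussian denoising
    edges = np.abs(cv2.filter2D(denoised, -1, LAPLACIAN_KERNEL))   # 2) edge extraction
    edges_u8 = cv2.normalize(edges, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, binary = cv2.threshold(edges_u8, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]                 # 3) one focus frame per closed edge
```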
  • Step 103: focus on the focus area.
  • After the focus frames are determined, each focus frame is focused individually. During focusing, the gradient value within each focus frame is maximized; the image corresponding to the maximum gradient value is the sharpest image.
  • During focusing, a focusing device is used: a device that moves the camera (image acquisition device) along the axial direction to adjust the object distance.
  • The axial direction refers to the direction along the central axis of the camera lens.
  • The focusing device has a fixed stroke, for example varying the object distance from 0 to 200 mm, where 0 and 200 serve as the two endpoints of the Fibonacci search.
  • In this method, an existing autofocus method is combined with the ROI extraction method: the focus areas are extracted before autofocus is performed.
  • When calculating the gradient value, only the sharpest state of the focus area is considered, and the sharpness of the background and other irrelevant parts is ignored.
  • Referring to FIG. 3, another embodiment of the present invention provides an autofocus method for visual inspection of parts, which includes the following steps: capture an image; perform edge detection on the captured image; draw focus frames on the edge-detected image according to the ROIs; trigger autofocus and calculate the current gradient value (the gradient value of the current image, or first calculation result); move the focusing device according to the Fibonacci search method and calculate the gradient value of the image captured after the move (the second calculation result); compare the two results and judge whether the latest result (the second calculation result) is the peak; if it is, focusing is complete, the current sharpest image is stored, and it is then judged whether focused images have been acquired for all focus frames.
  • If focused images have not yet been acquired for all focus frames, the next focus frame is started and the current gradient value is calculated again; if focused images have been acquired for all focus frames, focusing ends. If the latest result is not the peak, the peak search continues: the focusing device keeps moving according to the Fibonacci search method and the gradient value of the newly captured image is calculated.
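  • For illustration only, a minimal sketch of this per-frame peak search, assuming a Fibonacci search over the focusing-device stroke (0-200 mm as in the example above) and hypothetical helpers move_focus_to() and capture_image() standing in for the focusing-device and camera interfaces, which the filing does not name:

```python
import cv2
import numpy as np

def roi_sharpness(image_bgr: np.ndarray, frame: tuple[int, int, int, int]) -> float:
    """Gradient value computed only inside one focus frame (x, y, w, h)."""
    x, y, w, h = frame
    gray = cv2.cvtColor(image_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    return float(np.mean(np.abs(cv2.Laplacian(gray, cv2.CV_64F))))

def fibonacci_focus(frame, move_focus_to, capture_image,
                    lo: float = 0.0, hi: float = 200.0, n: int = 15):
    """Fibonacci search for the focus position that maximizes sharpness inside one frame."""
    fib = [1, 1]
    while len(fib) < n + 1:
        fib.append(fib[-1] + fib[-2])

    def score(pos: float) -> float:
        move_focus_to(pos)                  # hypothetical focusing-device command
        return roi_sharpness(capture_image(), frame)

    a, b, k = lo, hi, n
    x1 = a + fib[k - 2] / fib[k] * (b - a)
    x2 = a + fib[k - 1] / fib[k] * (b - a)
    f1, f2 = score(x1), score(x2)
    while k > 2:
        k -= 1
        if f1 < f2:                         # peak lies to the right: discard [a, x1]
            a, x1, f1 = x1, x2, f2
            x2 = a + fib[k - 1] / fib[k] * (b - a)
            f2 = score(x2)
        else:                               # peak lies to the left: discard [x2, b]
            b, x2, f2 = x2, x1, f1
            x1 = a + fib[k - 2] / fib[k] * (b - a)
            f1 = score(x1)
    best = (a + b) / 2
    move_focus_to(best)
    return best, capture_image()            # sharpest image for this focus frame
```

  • In practice this routine would be called once for each extracted focus frame, and the returned image would be stored for the corresponding inspection item.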
  • an embodiment of the present invention provides an auto-focus device for visual inspection of parts.
  • the parts have sections with step differences.
  • the auto-focus device includes: an acquisition module 201 , a determination module 202 and a focus module 203 .
  • the obtaining module 201 is used for obtaining images of parts to be detected.
  • the determining module 202 is used for determining the focus area of the part to be detected on the image of the part to be detected according to the detection items of the part to be detected.
  • the focus module 203 is used to focus on the focus area.
  • Optionally, the determination module 202 is configured to: determine the focus area of the part to be inspected on the image based on a region-of-interest extraction method according to the inspection item of the part to be inspected, where the focus area is the region of interest corresponding to the inspection item.
  • the autofocus device is also used to: determine the focus area of the part to be detected on the image of the part to be detected for each detection item of the part to be detected;
  • the focusing module 203 is configured to: focus on the focus area corresponding to each inspection item of the part to be inspected.
  • Optionally, the focusing module 203 is configured to focus on the focus area based on an image sharpness evaluation algorithm.
  • the inspection items of the parts to be inspected include dimensions, and the accuracy is relatively high.
  • the accuracy can reach 0.1 mm level; for the angle, the accuracy can reach the second level.
  • It should be noted that when the autofocus device provided in the above embodiment performs focusing, the division into the above functional modules is used only as an example; in practical applications, the above functions can be allocated to different functional modules as needed, that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above.
  • In addition, the autofocus device provided in the above embodiment and the autofocus method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, which will not be repeated here.
  • An embodiment of the present invention provides an electronic device, which includes: a memory and a processor.
  • the processor is connected with the memory and is configured to execute the above auto-focus method based on the instructions stored in the memory.
  • the number of processors can be one or more, and the processors can be single-core or multi-core.
  • The memory may include non-persistent memory in computer-readable media, in the form of random access memory (RAM) and/or non-volatile memory such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
  • the memory may be an example of a computer readable medium as described below.
  • An embodiment of the present invention provides a computer-readable storage medium on which at least one instruction, at least one program, code set or instruction set is stored, and the at least one instruction, at least one program, code set or instruction set is processed by The controller is loaded and executed to implement the above autofocus method.
  • Computer-readable storage media include volatile and non-volatile, removable and non-removable media that implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data.
  • Examples of storage media for computers include, but are not limited to: phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, compact disc-read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage , magnetic cartridges, disk storage, or other magnetic storage device, or any other non-transmission medium, that may be used to store information that can be accessed by a computing device.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Automatic Focus Adjustment (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The invention belongs to the technical field of industrial visual inspection and discloses an automatic focusing method and device for visual inspection of parts, the parts having sections with step differences. The autofocus method includes: acquiring an image of the part to be inspected; determining the focus area of the part to be inspected on the image according to the inspection items of the part to be inspected; and focusing on the focus area. The autofocus device includes: an acquisition module, used to acquire an image of the part to be inspected; a determination module, used to determine the focus area of the part to be inspected on the image according to the inspection items of the part to be inspected; and a focusing module, used to focus on the focus area. With this solution, accurate image acquisition of the focus area can be achieved, measurement error is reduced, the accuracy of industrial visual inspection is effectively improved, the false-detection rate and the defect rate are reduced, and, compared with manual focusing, inspection efficiency is improved.

Description

Automatic focusing method and device for visual inspection of parts
Technical Field
The present invention belongs to the technical field of industrial visual inspection, and in particular relates to an automatic focusing method and device for visual inspection of parts.
Background Art
In camera imaging, the process of making the image clear by adjusting the object distance and the image distance is called focusing.
In industrial visual inspection of parts, a visual image of the part to be inspected must be captured, and parameters of the part such as inner diameter, outer diameter and angle are calculated from the image. The required accuracy of these parameters is quite high, typically reaching the 0.1 mm level for dimensions and the arc-second level for angles. Industrial visual inspection therefore requires not only a high-resolution camera but also high-precision focusing for image acquisition. Manual focusing is time-consuming, laborious and imprecise, so autofocus methods are commonly used to make the inspection process unmanned and streamlined.
Autofocus methods are generally implemented by measuring image sharpness, which also corresponds to people's subjective perception. Image sharpness is evaluated by computing a gradient for each pixel of the image: for a captured image, the larger the gradient value, the sharper the image and, in theory, the more accurate the focus.
Currently, widely used autofocus methods are all full-frame methods, that is, the gradient function is computed over the entire image and the object distance is adjusted to achieve the maximum average gradient value. In such methods, the gradient function is generally computed with a second-order differential operator such as the Laplacian operator.
Existing autofocus methods only seek to maximize the full-frame gradient function during focusing, whereas in industrial visual inspection a large number of parts have step differences, that is, the workpiece has sections that do not lie on the same horizontal plane. In this case, if only the global gradient is used for focusing, the global gradient may reach its maximum while the section to be measured is still not in focus.
Summary of the Invention
In order to at least solve the problem in the prior art that, for a part with step differences, the image captured after focusing is not clear, one aspect of the present invention provides an automatic focusing method for visual inspection of parts, the parts having sections with step differences. The autofocus method includes: acquiring an image of the part to be inspected; determining a focus area of the part to be inspected on the image of the part to be inspected according to an inspection item of the part to be inspected; and focusing on the focus area.
In the autofocus method described above, optionally, determining the focus area of the part to be inspected on the image of the part to be inspected according to the inspection item of the part to be inspected includes: determining the focus area of the part to be inspected on the image of the part to be inspected based on a region-of-interest extraction method, according to the inspection item of the part to be inspected; wherein the focus area is the region of interest corresponding to the inspection item of the part to be inspected.
In the autofocus method described above, optionally, before focusing on the focus area, the autofocus method further includes: determining a focus area of the part to be inspected on the image of the part to be inspected for each inspection item of the part to be inspected; correspondingly, focusing on the focus area includes: focusing on the focus area corresponding to each inspection item of the part to be inspected.
In the autofocus method described above, optionally, focusing on the focus area includes: focusing on the focus area based on an image sharpness evaluation algorithm.
In the autofocus method described above, optionally, the inspection items of the part to be inspected include dimensions.
Another aspect provides an automatic focusing device for visual inspection of parts, the parts having sections with step differences. The autofocus device includes: an acquisition module, used to acquire an image of the part to be inspected; a determination module, used to determine a focus area of the part to be inspected on the image of the part to be inspected according to an inspection item of the part to be inspected; and a focusing module, used to focus on the focus area.
In the autofocus device described above, optionally, the determination module is used to: determine the focus area of the part to be inspected on the image of the part to be inspected based on a region-of-interest extraction method, according to the inspection item of the part to be inspected; wherein the focus area is the region of interest corresponding to the inspection item of the part to be inspected.
In the autofocus device described above, optionally, the autofocus device is further used to: determine a focus area of the part to be inspected on the image of the part to be inspected for each inspection item of the part to be inspected; correspondingly, the focusing module is used to: focus on the focus area corresponding to each inspection item of the part to be inspected.
In the autofocus device described above, optionally, the focusing module is used to: focus on the focus area based on an image sharpness evaluation algorithm.
In the autofocus device described above, optionally, the inspection items of the part to be inspected include dimensions.
Another aspect provides an electronic device, including: a processor and a memory for storing executable instructions of the processor; wherein the processor is configured to execute any one of the above autofocus methods.
A further aspect provides a computer-readable storage medium in which at least one instruction, at least one program, a code set or an instruction set is stored; the at least one instruction, at least one program, code set or instruction set is loaded and executed by a processor to implement any one of the above autofocus methods.
The technical solutions provided by the embodiments of the present invention bring the following beneficial effects:
For a part with step differences, an image of the part to be inspected is acquired, a focus area of the part is determined on the image according to the inspection items of the part, and the focus area is focused. During focusing, only the sharpest state of the focus area is considered and the sharpness of the background and irrelevant parts is ignored, so that accurate image acquisition of the focus area can be achieved, measurement error is reduced, the accuracy of industrial visual inspection is effectively improved, the false-detection rate and the defect rate are reduced, and, compared with manual focusing, inspection efficiency is improved.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of a part provided by an embodiment of the present invention;
FIG. 2 is a schematic flowchart of an autofocus method for visual inspection of parts provided by an embodiment of the present invention;
FIG. 3 is a schematic flowchart of another autofocus method for visual inspection of parts provided by an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an autofocus device for visual inspection of parts provided by an embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and in conjunction with embodiments.
The part visual inspection method is a method that uses a machine instead of the human eye to measure and judge parts (or objects to be inspected). The machine has an image acquisition device, such as an industrial camera. The image acquisition device is equipped with an optical sensor for photographing the part to convert it into an image signal; the machine then processes the image signal to obtain parameters of the part, such as dimensions and angles. During inspection, the part is usually placed horizontally and the image acquisition device is located above the part. There are many kinds of parts; some have step differences. A part with step differences means that, when the part is placed horizontally, it has surfaces (or sections) that are not on the same horizontal plane, that is, several surfaces differ in height (this difference is called a step difference). In this case, the object distances between these surfaces and the image acquisition device are different.
For example, in the part shown in FIG. 1, there are a first surface 10 and a second surface 20; the first surface 10 is higher than the second surface 20, so the first surface 10 and the second surface 20 form two step surfaces. The first surface 10 and the second surface 20 are sections that are not on the same horizontal plane. This embodiment does not limit the specific shape of the part or the number of sections that are not on the same horizontal plane.
Although the image acquisition device applies an autofocus method when photographing a part with step differences, with existing autofocus methods the portions of the captured image that involve such sections are still not clear. For this reason, this embodiment provides an autofocus method for visual inspection of parts; referring to FIG. 2, the flow of the autofocus method provided by this embodiment is as follows:
Step 101: acquire an image of the part to be inspected.
After the part to be inspected is placed on the inspection platform, an image acquisition device photographs the part to obtain an image of it, which may be called the initial image. The image acquisition device may be an area array camera, preferably a high-resolution area array camera.
Step 102: determine the focus area of the part to be inspected on the image of the part to be inspected according to the inspection items of the part to be inspected.
According to the inspection items of the part to be inspected, the focus areas to be inspected are determined on the image of the part. The focus areas are determined using a region-of-interest (ROI) extraction method; the region of interest is the focus area to be inspected, that is, the region corresponding to the inspection item of the part. The region of interest is the area to be processed, outlined on the image of the part as a box, circle, ellipse, irregular polygon, ring, or the like. When the part has multiple inspection items, there are also multiple focus areas to be inspected: a focus area (also called an area to be inspected, or a focus frame) is determined for each inspection item, and the multiple focus areas are labeled, for example with serial numbers. In the part shown in FIG. 1, there are two gears with different outer diameters: a small gear and a large gear. The inspection items may be the outer diameter of the small gear and the outer diameter of the large gear; one focus area is set for the small gear and another for the large gear. In other parts, the inspection items may be the diameters of the through-holes in a ring of through-holes, in which case a focus area is set for each through-hole so that each one is focused individually. The inspection items of the part include dimensions, with high accuracy requirements: for inner diameter, outer diameter, length, width, thickness and the like, the accuracy can reach the 0.1 mm level; for angles, the accuracy can reach the arc-second level.
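Purely as an illustration of this per-item bookkeeping (the item names and coordinates below are hypothetical, not taken from the filing), the labeled focus frames could be held in a structure such as:

```python
# Hypothetical mapping from serial number to inspection item and focus frame (x, y, w, h) in pixels.
focus_frames = {
    1: {"item": "small_gear_outer_diameter", "roi": (120, 80, 200, 200)},
    2: {"item": "large_gear_outer_diameter", "roi": (60, 40, 360, 360)},
}
```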
The method for extracting the regions of interest may include the following steps:
1) Gaussian denoising
A Gaussian convolution kernel is used to denoise the image of the part to be inspected, removing redundant background information such as speckles and impurities so that only rough contour information remains in the image. The Gaussian convolution kernel may be the one shown below.
[Gaussian convolution kernel, shown as an image in the original filing (Figure PCTCN2022110367-appb-000001)]
2) Edge extraction
A Laplacian convolution kernel is used to extract the edge information of the denoised image. The Laplacian convolution kernel may be the 3rd-order Laplacian operator shown below.
[3rd-order Laplacian operator, shown as an image in the original filing (Figure PCTCN2022110367-appb-000002)]
3) Each closed edge is enclosed with a bounding box; each generated bounding box is a focus frame, which is stored in the device and used for focusing. The extracted focus frames are marked, for example with a serial number i.
In other embodiments, other extraction methods may also be used, such as prior-art methods or deep-learning-based extraction methods, which are not limited in this embodiment.
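The kernels themselves appear only as images in the original filing; for reference, commonly used kernels of these two types (an assumption, not necessarily the exact kernels used here) are:

```latex
G = \frac{1}{16}\begin{pmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{pmatrix},
\qquad
L_{3\times 3} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{pmatrix}
```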
Step 103: focus on the focus areas.
After the focus frames are determined, focusing is performed on the focus frames to avoid the situation in which global focusing maximizes the global gradient value while the area to be inspected remains blurred. When there are multiple focus frames, each focus frame is focused separately. During focusing, the gradient value within each focus frame is maximized; the image corresponding to the maximum gradient value is the sharpest image.
Taking the case where the part to be inspected has multiple inspection items as an example, the specific flow of this step is as follows:
1) Calculate the gradient value of the image within the focus frame with serial number i, denoted f(i₁);
2) Move the focusing device and search for the maximum gradient value (the peak) according to the Fibonacci search method; once the maximum gradient value is found, focusing is complete, and the image captured at that point is the sharpest image for the focus frame with serial number i;
3) When focusing for the focus frame with serial number i is complete, capture and record the image;
4) Increment the serial number to i+1 and repeat steps 1)-3) until image capture and recording have been completed for all focus frames. For the sharpness calculation, the gradient value may be computed with the Laplacian operator, or with any function whose value varies monotonically with sharpness, such as the Brenner gradient function or the SMD gray-level variance function; this embodiment does not limit the specific sharpness calculation method. The Fibonacci search is an optimal search method that requires the fewest function evaluations, has low time and space complexity, and is suitable for deployment in industrial settings.
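For illustration, minimal sketches of two such monotonic sharpness functions, the Brenner gradient and the SMD gray-level difference measure, assuming a grayscale NumPy image (these follow common formulations; the filing does not spell out the exact definitions):

```python
import numpy as np

def brenner_gradient(gray: np.ndarray) -> float:
    """Brenner gradient: sum of squared differences between pixels two columns apart."""
    diff = gray[:, 2:].astype(np.float64) - gray[:, :-2].astype(np.float64)
    return float(np.sum(diff ** 2))

def smd(gray: np.ndarray) -> float:
    """SMD: sum of absolute gray-level differences along both axes."""
    g = gray.astype(np.float64)
    dx = np.abs(g[:, 1:] - g[:, :-1])
    dy = np.abs(g[1:, :] - g[:-1, :])
    return float(np.sum(dx) + np.sum(dy))
```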
During focusing, a focusing device is used: a device that moves the camera (image acquisition device) along the axial direction to adjust the object distance, the axial direction being the direction of the central axis of the camera lens. The focusing device has a fixed stroke, for example varying the object distance from 0 to 200 mm, where 0 and 200 serve as the two endpoints of the Fibonacci search.
For a part with step differences, an image of the part to be inspected is acquired, the focus area of the part is determined on the image according to the inspection items, and the focus area is focused. During focusing, only the focus area is focused: only the sharpest state of the focus area is considered, and the sharpness of the background and irrelevant parts is ignored. In other words, focusing is local rather than global, so that accurate image acquisition of the focus area can be achieved, measurement error is reduced, the accuracy of industrial visual inspection is effectively improved, the false-detection rate and defect rate are reduced, and, compared with manual focusing, inspection efficiency is improved.
It should be noted that in this method an existing autofocus method is combined with the region-of-interest extraction method: the focus areas are extracted before autofocus is performed, and when the gradient value is calculated only the sharpest state of the focus area is considered, ignoring the sharpness of the background and irrelevant parts.
Referring to FIG. 3, another embodiment of the present invention provides an autofocus method for visual inspection of parts, which includes the following steps: capture an image; perform edge detection on the captured image; draw focus frames on the edge-detected image according to the ROIs; trigger autofocus and calculate the current gradient value (the gradient value of the current image, or first calculation result); move the focusing device according to the Fibonacci search method and calculate the gradient value of the image captured after the move (the second calculation result); compare the two results and judge whether the latest result (the second calculation result) is the peak. If it is, focusing is complete and the current sharpest image is stored; it is then judged whether focused images have been acquired for all focus frames. If not, the next focus frame is started and the current gradient value is calculated again; if focused images have been acquired for all focus frames, focusing ends. If the latest result is not the peak, the peak search continues: the focusing device keeps moving according to the Fibonacci search method and the gradient value of the newly captured image is calculated.
Referring to FIG. 4, an embodiment of the present invention provides an autofocus device for visual inspection of parts, the parts having sections with step differences. The autofocus device includes: an acquisition module 201, a determination module 202 and a focusing module 203.
The acquisition module 201 is used to acquire an image of the part to be inspected. The determination module 202 is used to determine the focus area of the part to be inspected on the image according to the inspection items of the part to be inspected. The focusing module 203 is used to focus on the focus area.
Optionally, the determination module 202 is used to: determine the focus area of the part to be inspected on the image based on a region-of-interest extraction method according to the inspection items of the part to be inspected, where the focus area is the region of interest corresponding to the inspection item of the part to be inspected.
Optionally, the autofocus device is further used to: determine a focus area of the part to be inspected on the image for each inspection item of the part to be inspected;
correspondingly, the focusing module 203 is used to: focus on the focus area corresponding to each inspection item of the part to be inspected.
Optionally, the focusing module 203 is used to focus on the focus area based on an image sharpness evaluation algorithm.
Optionally, the inspection items of the part to be inspected include dimensions, with high accuracy requirements: for inner diameter, outer diameter, length, width, thickness and the like, the accuracy can reach the 0.1 mm level; for angles, the accuracy can reach the arc-second level.
It should be noted that when the autofocus device provided in the above embodiment performs focusing, the division into the above functional modules is used only as an example; in practical applications, the above functions can be allocated to different functional modules as needed, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above. In addition, the autofocus device provided in the above embodiment and the autofocus method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, which will not be repeated here.
An embodiment of the present invention provides an electronic device, which includes a memory and a processor. The processor is connected to the memory and is configured to execute the above autofocus method based on instructions stored in the memory. There may be one or more processors, and a processor may be single-core or multi-core. The memory may include non-persistent memory in computer-readable media, in the form of random access memory (RAM) and/or non-volatile memory such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip. The memory may be an example of the computer-readable media described below.
An embodiment of the present invention provides a computer-readable storage medium on which at least one instruction, at least one program, a code set or an instruction set is stored; the at least one instruction, at least one program, code set or instruction set is loaded and executed by a processor to implement the above autofocus method. Computer-readable storage media include permanent and non-permanent, removable and non-removable media that implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to: phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
As is known from common technical knowledge, the present invention may be implemented by other embodiments that do not depart from its spirit or essential features. Therefore, the embodiments disclosed above are, in all respects, illustrative and not exclusive. All changes within the scope of the present invention or within a scope equivalent to the present invention are encompassed by the present invention.

Claims (10)

  1. An automatic focusing method for visual inspection of parts, the parts having sections with step differences, characterized in that the autofocus method comprises:
    acquiring an image of a part to be inspected;
    determining a focus area of the part to be inspected on the image of the part to be inspected according to an inspection item of the part to be inspected; and
    focusing on the focus area.
  2. The autofocus method according to claim 1, characterized in that determining the focus area of the part to be inspected on the image of the part to be inspected according to the inspection item of the part to be inspected comprises:
    determining the focus area of the part to be inspected on the image of the part to be inspected based on a region-of-interest extraction method, according to the inspection item of the part to be inspected;
    wherein the focus area is the region of interest corresponding to the inspection item of the part to be inspected.
  3. The autofocus method according to claim 1 or 2, characterized in that, before focusing on the focus area, the autofocus method further comprises:
    determining a focus area of the part to be inspected on the image of the part to be inspected for each inspection item of the part to be inspected;
    correspondingly, focusing on the focus area comprises:
    focusing on the focus area corresponding to each inspection item of the part to be inspected.
  4. The autofocus method according to claim 1, characterized in that focusing on the focus area comprises:
    focusing on the focus area based on an image sharpness evaluation algorithm.
  5. The autofocus method according to claim 1, characterized in that the inspection items of the part to be inspected include dimensions.
  6. An automatic focusing device for visual inspection of parts, the parts having sections with step differences, characterized in that the autofocus device comprises:
    an acquisition module, used to acquire an image of a part to be inspected;
    a determination module, used to determine a focus area of the part to be inspected on the image of the part to be inspected according to an inspection item of the part to be inspected; and
    a focusing module, used to focus on the focus area.
  7. The autofocus device according to claim 6, characterized in that the determination module is used to:
    determine the focus area of the part to be inspected on the image of the part to be inspected based on a region-of-interest extraction method, according to the inspection item of the part to be inspected;
    wherein the focus area is the region of interest corresponding to the inspection item of the part to be inspected.
  8. The autofocus device according to claim 6 or 7, characterized in that the autofocus device is further used to:
    determine a focus area of the part to be inspected on the image of the part to be inspected for each inspection item of the part to be inspected;
    correspondingly, the focusing module is used to:
    focus on the focus area corresponding to each inspection item of the part to be inspected.
  9. The autofocus device according to claim 6, characterized in that the focusing module is used to:
    focus on the focus area based on an image sharpness evaluation algorithm.
  10. The autofocus device according to claim 6, characterized in that the inspection items of the part to be inspected include dimensions.
PCT/CN2022/110367 2022-08-04 2022-08-04 Automatic focusing method and device for visual inspection of parts WO2023103426A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2022/110367 WO2023103426A1 (zh) 2022-08-04 2022-08-04 Automatic focusing method and device for visual inspection of parts
DE112022002746.0T DE112022002746T5 (de) 2022-08-04 2022-08-04 Verfahren und Vorrichtung zur automatischen Fokussierung bei der visuellen Inspektion von Bauteilen

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/110367 WO2023103426A1 (zh) 2022-08-04 2022-08-04 Automatic focusing method and device for visual inspection of parts

Publications (1)

Publication Number Publication Date
WO2023103426A1 true WO2023103426A1 (zh) 2023-06-15

Family

ID=86729583

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/110367 WO2023103426A1 (zh) 2022-08-04 2022-08-04 Automatic focusing method and device for visual inspection of parts

Country Status (2)

Country Link
DE (1) DE112022002746T5 (zh)
WO (1) WO2023103426A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102759788A (zh) * 2011-04-26 2012-10-31 鸿富锦精密工业(深圳)有限公司 表面多点对焦系统及方法
CN110488481A (zh) * 2019-09-19 2019-11-22 广东工业大学 一种显微镜对焦方法、显微镜及相关设备
WO2020110712A1 (ja) * 2018-11-27 2020-06-04 オムロン株式会社 検査システム、検査方法およびプログラム
CN112752021A (zh) * 2020-11-27 2021-05-04 乐金显示光电科技(中国)有限公司 一种摄像头系统自动对焦方法和自动对焦摄像头系统
CN113495073A (zh) * 2020-04-07 2021-10-12 泰连服务有限公司 视觉检查系统的自动对焦功能

Also Published As

Publication number Publication date
DE112022002746T5 (de) 2024-04-18


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22902860

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 112022002746

Country of ref document: DE