CN106504279A - Coloured image auto focusing method - Google Patents

Coloured image auto focusing method

Info

Publication number
CN106504279A
Authority
CN
China
Prior art keywords
sampling
window
sampling point
color image
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610922542.3A
Other languages
Chinese (zh)
Other versions
CN106504279B (en)
Inventor
郑馨 (Zheng Xin)
王远志 (Wang Yuanzhi)
钱萌 (Qian Meng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei Kaitong Information Technology Service Co ltd
Original Assignee
Anqing Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anqing Normal University filed Critical Anqing Normal University
Priority to CN201610922542.3A priority Critical patent/CN106504279B/en
Publication of CN106504279A publication Critical patent/CN106504279A/en
Application granted granted Critical
Publication of CN106504279B publication Critical patent/CN106504279B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Processing Of Color Television Signals (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a color image autofocus method comprising three parts: computation of color-opponent (antagonism) maps of the color image, collection of focus sampling points based on focus windows, and an improved Brenner sharpness evaluation function. The method first makes full use of the color information of the image, avoiding the loss of color detail that occurs when sharpness is evaluated on a grayscale image alone. Second, by jointly exploiting the fact that the main subject usually lies at the center of the field of view and the compositional importance of the golden-section points, it reduces both the extra computation introduced by gathering color statistics and the negative influence of non-target regions, effectively improving the real-time performance and accuracy of focusing. Finally, the improved Brenner function reflects the sharpness of the whole image more comprehensively without increasing the computational load.

Description

Color Image Autofocus Method

Technical Field

The present invention belongs to the technical field of image processing, and more particularly relates to an automatic focusing method for color images.

Background Art

With the increasing automation and intelligence of instruments, image-based autofocus has become a key technology in imaging systems such as cameras, video cameras, and microscopes. The key to image-based autofocus is to construct a fast and reliable sharpness evaluation function whose extremum corresponds to the in-focus position. Because the sharpness evaluation function must assess the defocus of a large number of images while the imaging system gradually approaches the focus position, passes through it, and moves away from it again, a good sharpness function should not only be unbiased, unimodal, sharp, and noise-resistant, but also as cheap to compute as possible, so as to meet the real-time requirements of the autofocus process.

To reduce the computational load, existing sharpness evaluation functions are computed only on grayscale images, which discards a large amount of color information. However, in many real scenes, for example in biomedical imaging, the color information of the target is essential, and losing it may make it impossible to find the best focus position; on the other hand, computing the sharpness evaluation function on all three RGB color channels inevitably multiplies the computational cost.

Summary of the Invention

In view of the above defects or improvement needs of the prior art, the present invention provides a color image autofocus method whose purpose is to make full use of the color information of the image while still achieving fast automatic focusing. The technical scheme proposed by the present invention is as follows:

A color image autofocus method, characterized in that the method comprises the following steps:

(1) Input an original image in RGB format and compute the red-green antagonism map Irg and the blue-yellow antagonism map Iby;

(2) Acquire the focus sampling points;

First divide the original image into three types of focus windows: a central window, an edge window, and a corner window;

Set different sampling strategies for the red-green antagonism map Irg and the blue-yellow antagonism map Iby within the different focus windows: pixels in the central window are fully sampled, pixels in the edge window are sampled at alternate points horizontally, and pixels in the corner window are sampled at alternate points both horizontally and vertically. After sampling, a sampling-point list A, Lrg, corresponding to the red-green antagonism map Irg and a sampling-point list B, Lby, corresponding to the blue-yellow antagonism map Iby are obtained; the lists Lrg and Lby record the pixel positions of the focus sampling points;

(3) Use the improved Brenner function to compute the function values at the focus sampling points of the red-green antagonism map Irg and of the blue-yellow antagonism map Iby, accumulate all function values greater than a given threshold T into the statistic FMBrenner, and evaluate the sharpness of the color image by this statistic.

The specific calculation of step (1) is as follows:

Irg = R - G

Iby = B - Y

where

Y = (R + G)/2

and R, G, and B are respectively the red, green, and blue color channels of the input color image.

In step (2): the 4 sub-images adjacent to the center point of the original image form the central window Wa; the 4 sub-images at the four corners of the original image form the corner window Wc; and the remaining part of the original image, outside the corner window Wc and the central window Wa, forms the edge window Wb.

The improved Brenner function in step (3) is computed as follows:

FMBrenner = Σ(c = rg, by) Σ(i = 1 to Lc.length) MBc(i) · [MBc(i) > T]

where [·] equals 1 when the enclosed condition holds and 0 otherwise, Lc.length is the length of the sampling-point list Lc, and

MBc(i) = |Ic(Lc[i].x+2, Lc[i].y) - Ic(Lc[i].x, Lc[i].y)| × |Ic(Lc[i].x, Lc[i].y+2) - Ic(Lc[i].x, Lc[i].y)|,   c = rg, by

where (Lc[i].x, Lc[i].y) is the pixel position of the i-th element of the sampling-point list Lc.
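For instance, with illustrative values Ic(x, y) = 10, Ic(x+2, y) = 30, and Ic(x, y+2) = 22 at some sampling point, MBc(i) = |30 - 10| × |22 - 10| = 20 × 12 = 240; with a threshold of, say, T = 100 this value would be included in the sum FMBrenner, whereas a nearly flat neighborhood yielding MBc(i) = 4 would be discarded. The numbers here are purely illustrative and not taken from the patent.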

The beneficial effects achieved by the present invention are as follows:

Because autofocus has very demanding real-time requirements, many sharpness evaluation functions discard color information in order to reduce computation, so that in certain scenarios the sharpness of a color image cannot be evaluated accurately. The method proposed by the present invention makes maximal use of the color features of the image without increasing the computational load, yielding a more accurate evaluation of color image sharpness.

Brief Description of the Drawings

Fig. 1 is a flowchart of the color image autofocus method of the present invention;

Fig. 2 is a schematic diagram of the division of the focus windows in the present invention.

Detailed Description

In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments.

Step one: compute the red-green antagonism map and the blue-yellow antagonism map.

(1.1) Input an original image in RGB mode, with width denoted W and height denoted H; decompose the original image into its red (R), green (G), and blue (B) color channel images;

(1.2) Derive the yellow (Y) channel image from the red and green channel images:

Y = (R + G)/2

(1.3) Compute the red-green antagonism map Irg and the blue-yellow antagonism map Iby from R, G, B, and Y:

Irg = R - G

Iby = B - Y
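As a concrete illustration of step one, the two antagonism maps can be computed with a few array operations. The sketch below (Python/NumPy, not part of the patent text) assumes the RGB frame is provided as an H × W × 3 array and keeps the opponent maps as signed floats so that negative responses are not clipped; that signed representation is an assumption of this sketch.

```python
import numpy as np

def opponent_maps(rgb: np.ndarray):
    """Red-green and blue-yellow antagonism maps of step one.

    rgb: H x W x 3 array with channels in R, G, B order.
    Returns (Irg, Iby) as float arrays, where Irg = R - G and
    Iby = B - Y with Y = (R + G) / 2.
    """
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    y = (r + g) / 2.0    # yellow channel derived from the red and green channels
    return r - g, b - y  # (Irg, Iby)
```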

Step two: sample pixel positions over the full extent of the original image and construct the sampling-point list A, Lrg, and the sampling-point list B, Lby. List Lrg will be used for the subsequent sharpness computation on the red-green antagonism map Irg, and list Lby for the sharpness computation on the blue-yellow antagonism map Iby. The purpose of building the sampling-point lists is to reduce the computation of the subsequent sharpness evaluation function by selecting representative focus sampling points. The selection principle for the focus sampling points of Irg and Iby is to make the positions in the two lists as complementary as possible, so that the color information of the whole image is exploited as fully as possible while the amount of computation is reduced. The specific steps are as follows:

(2.1) Divide the input original image into sixteen equal sub-images, each of size W/4 × H/4;

(2.2) As shown in Fig. 2, group the sixteen sub-images into three types of focus windows: the central window Wa, the edge window Wb, and the corner window Wc;

The 4 sub-images adjacent to the center point of the original image form the central window Wa;

The 4 sub-images at the four corners of the original image form the corner window Wc;

The remaining part of the original image, outside the corner window Wc and the central window Wa, forms the edge window Wb.

(2.3) Record the position (x, y) of every pixel in the central window Wa and add it to the sampling-point list A, Lrg; likewise record the position of every pixel in the central window Wa and add it to the sampling-point list B, Lby;

(2.4) Sample the pixels in the edge window Wb at alternate points horizontally: specifically, record row by row the positions (x, y) of the odd-numbered pixels in Wb and add them to list Lrg, and record row by row the positions (x, y) of the even-numbered pixels in Wb and add them to list Lby;

(2.5) Downsample the pixels in the corner window Wc by a factor of 2, i.e. sample at alternate points both horizontally and vertically, and add the sampled pixel positions to list Lrg or list Lby.

Specifically, divide the corner window Wc into sub-windows of 2 × 2 pixels; the position (x, y) of the top-left pixel of each sub-window is added to list Lrg, and the position (x, y) of the bottom-right pixel of each sub-window is added to list Lby.

Since the main subject usually lies at the center of the field of view, the sampling rate is highest for pixels in the central window Wa; at the same time, the golden-section points play a crucial role in photographic composition, so the size of the central window Wa is chosen so that the golden-section points fall inside it. Compared with the central window Wa, the edge window Wb and the corner window Wc contain more non-target area, so these two types of window are sampled at lower rates. With this choice of focus windows and sampling strategies, the combined length of the two sampling-point lists is 9/8 × W × H, only slightly larger than the size of the original image, as the sketch below illustrates.
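A minimal sketch of the window division and of the complementary sampling lists described in steps (2.1)-(2.5) is given below (Python, illustrative only). It assumes W and H are multiples of 8 so that the sixteen sub-images and the 2 × 2 corner sub-windows align with pixel parity, and it uses a 0-based parity convention for the "odd/even" alternate sampling of the edge window; both conventions are assumptions of this sketch rather than requirements stated in the text.

```python
def window_class(x: int, y: int, w: int, h: int) -> str:
    """Classify pixel (x, y) into one of the three focus windows.

    The image is split into a 4 x 4 grid of W/4 x H/4 sub-images: the 4
    central sub-images form Wa, the 4 corner sub-images form Wc, and the
    remaining 8 sub-images form Wb.
    """
    cx, cy = x // (w // 4), y // (h // 4)   # sub-image grid coordinates, 0..3
    if cx in (1, 2) and cy in (1, 2):
        return "a"                          # central window Wa: full sampling
    if cx in (0, 3) and cy in (0, 3):
        return "c"                          # corner window Wc: 2x downsampling
    return "b"                              # edge window Wb: alternate columns


def build_sampling_lists(w: int, h: int):
    """Build the complementary sampling-point lists Lrg and Lby."""
    l_rg, l_by = [], []
    for y in range(h):
        for x in range(w):
            cls = window_class(x, y, w, h)
            if cls == "a":                      # every pixel goes into both lists
                l_rg.append((x, y))
                l_by.append((x, y))
            elif cls == "b":                    # alternate columns, one list each
                (l_rg if x % 2 == 0 else l_by).append((x, y))
            else:                               # 2x2 sub-windows: two opposite corners
                if x % 2 == 0 and y % 2 == 0:
                    l_rg.append((x, y))         # top-left pixel of the 2x2 sub-window
                elif x % 2 == 1 and y % 2 == 1:
                    l_by.append((x, y))         # bottom-right pixel of the 2x2 sub-window
    return l_rg, l_by


if __name__ == "__main__":
    w, h = 640, 480
    l_rg, l_by = build_sampling_lists(w, h)
    # combined length matches the 9/8 * W * H figure quoted above
    assert len(l_rg) + len(l_by) == 9 * w * h // 8
```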

Step three: use the improved Brenner function to compute the sum FMBrenner of the sharpness evaluation values over all focus sampling points (x, y):

(3.1) Using the improved Brenner formula, compute the function value of every pixel of the red-green antagonism map Irg listed in the sampling-point list A, Lrg, and of every pixel of the blue-yellow antagonism map Iby listed in the sampling-point list B, Lby:

MBc(i) = |Ic(Lc[i].x+2, Lc[i].y) - Ic(Lc[i].x, Lc[i].y)| × |Ic(Lc[i].x, Lc[i].y+2) - Ic(Lc[i].x, Lc[i].y)|,   c = rg, by

where (Lc[i].x, Lc[i].y) is the pixel position of the i-th element of the sampling-point list Lc.

The improved Brenner function considers not only the gray-level difference between pixels two units apart in the horizontal direction, but also the gray-level difference between pixels two units apart in the vertical direction, so it reflects the local gradient variation of the whole image more comprehensively while adding little computation compared with the original Brenner function.

(3.2) Accumulate all function values greater than the given threshold T; their sum FMBrenner is the sharpness of the input color image:

FMBrenner = Σ(c = rg, by) Σ(i = 1 to Lc.length) MBc(i) · [MBc(i) > T]

where Lc.length is the length of the sampling-point list Lc and [·] equals 1 when the enclosed condition holds and 0 otherwise.
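Putting step three together, the modified Brenner score for one antagonism map and its sampling-point list, and the final color sharpness FMBrenner, might be computed as in the sketch below (Python/NumPy, illustrative only). The border handling, which skips points within two pixels of the right or bottom edge, and the default threshold value are assumptions of this sketch, not values fixed by the text.

```python
import numpy as np

def modified_brenner_score(i_c: np.ndarray, l_c, threshold: float) -> float:
    """Sum of MBc(i) over the sampling-point list Lc, keeping only values above T.

    i_c: one antagonism map (Irg or Iby), indexed as i_c[y, x].
    l_c: list of (x, y) sampling positions.
    MBc(i) = |Ic(x+2, y) - Ic(x, y)| * |Ic(x, y+2) - Ic(x, y)|.
    """
    h, w = i_c.shape
    total = 0.0
    for x, y in l_c:
        if x + 2 >= w or y + 2 >= h:
            continue                              # skip points too close to the border
        dx = abs(i_c[y, x + 2] - i_c[y, x])       # difference two pixels to the right
        dy = abs(i_c[y + 2, x] - i_c[y, x])       # difference two pixels below
        mb = dx * dy
        if mb > threshold:                        # keep only values above the threshold T
            total += mb
    return total


def color_sharpness(i_rg, i_by, l_rg, l_by, threshold: float = 10.0) -> float:
    """FMBrenner: sharpness of the color image (the default threshold is illustrative)."""
    return (modified_brenner_score(i_rg, l_rg, threshold)
            + modified_brenner_score(i_by, l_by, threshold))
```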

The program implementing this method can be embedded in an image-acquisition device; during operation, by comparing the results of several evaluations and selecting the highest statistic, the optimal focus setting can be selected in real time, as sketched below.
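As a sketch of how such an embedded program might search for the optimum, the loop below sweeps a set of candidate lens positions, scores each captured frame with the helpers sketched above (opponent_maps, build_sampling_lists, color_sharpness), and drives the lens to the best-scoring position. The set_focus and capture_frame hooks are hypothetical device interfaces, and the exhaustive sweep is only one possible search strategy; neither is specified by the patent.

```python
def autofocus(focus_positions, set_focus, capture_frame, threshold=10.0):
    """Return the lens position whose frame maximizes the FMBrenner statistic.

    set_focus(pos) and capture_frame() -> H x W x 3 RGB array are hypothetical
    hooks into the image-acquisition device; the helper functions are the
    sketches given earlier for steps one to three.
    """
    best_pos, best_score = None, float("-inf")
    l_rg = l_by = None
    for pos in focus_positions:
        set_focus(pos)
        frame = capture_frame()
        if l_rg is None:                       # the lists depend only on W and H
            h, w = frame.shape[:2]
            l_rg, l_by = build_sampling_lists(w, h)
        i_rg, i_by = opponent_maps(frame)
        score = color_sharpness(i_rg, i_by, l_rg, l_by, threshold)
        if score > best_score:
            best_pos, best_score = pos, score
    set_focus(best_pos)                        # move the lens to the best position found
    return best_pos
```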

The color image autofocus method provided by the present invention first makes full use of the color information of the color image, avoiding the loss of color detail that occurs when sharpness is evaluated on a grayscale image alone; second, by jointly considering the fact that the main subject usually lies at the center of the field of view and the importance of the golden-section points, it reduces both the extra computation introduced by gathering color statistics and the negative influence of non-target regions, effectively improving the real-time performance and accuracy of focusing; finally, the improved Brenner function reflects the sharpness of the whole image more comprehensively without increasing the computational load.

Claims (4)

1. A method for automatic focusing of a color image, said method comprising the steps of:
(1) inputting an original image in RGB format, and calculating a red-green antagonism map Irg and a blue-yellow antagonism map Iby respectively;
(2) acquiring focus sampling points;
first dividing the original image into three types of focus windows: a central window, an edge window, and a corner window;
setting different sampling strategies for the red-green antagonism map Irg and the blue-yellow antagonism map Iby within the different focus windows: fully sampling the pixels in the central window, sampling the pixels in the edge window at alternate points horizontally, and sampling the pixels in the corner window at alternate points both horizontally and vertically; after sampling, obtaining a sampling-point list A, Lrg, corresponding to the red-green antagonism map Irg and a sampling-point list B, Lby, corresponding to the blue-yellow antagonism map Iby, the sampling-point lists Lrg and Lby recording the pixel position information of the focus sampling points;
(3) calculating, using the improved Brenner function, the function values at the focus sampling points of the red-green antagonism map Irg and of the blue-yellow antagonism map Iby, accumulating all function values greater than a given threshold T into the statistic FMBrenner, and evaluating the sharpness of the color image by this statistic.
2. The color image sharpness evaluation method according to claim 1, wherein the specific calculation in step (1) is as follows:
Irg = R - G
Iby = B - Y
where
Y = (R + G)/2
and R, G, and B are respectively the red, green, and blue color channels of the input color image.
3. The color image sharpness evaluation method according to claim 1 or 2, wherein in step (2): the 4 sub-images adjacent to the center point of the original image form the central window Wa; the 4 sub-images at the four corners of the original image form the corner window Wc; and the remaining part of the original image, outside the corner window Wc and the central window Wa, forms the edge window Wb.
4. The color image sharpness evaluation method according to claim 1 or 2, wherein the improved Brenner function in step (3) is computed as follows:
FMBrenner = Σ(c = rg, by) Σ(i = 1 to Lc.length) MBc(i) · [MBc(i) > T]
where [·] equals 1 when the enclosed condition holds and 0 otherwise, Lc.length is the length of the sampling-point list Lc,
MBc(i) = |Ic(Lc[i].x+2, Lc[i].y) - Ic(Lc[i].x, Lc[i].y)| × |Ic(Lc[i].x, Lc[i].y+2) - Ic(Lc[i].x, Lc[i].y)|,   c = rg, by
and (Lc[i].x, Lc[i].y) is the pixel position of the i-th element of the sampling-point list Lc.
CN201610922542.3A 2016-10-18 2016-10-18 Color image autofocus method Active CN106504279B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610922542.3A CN106504279B (en) 2016-10-18 2016-10-18 Color image autofocus method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610922542.3A CN106504279B (en) 2016-10-18 2016-10-18 Color image autofocus method

Publications (2)

Publication Number Publication Date
CN106504279A true CN106504279A (en) 2017-03-15
CN106504279B CN106504279B (en) 2019-02-19

Family

ID=58319509

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610922542.3A Active CN106504279B (en) 2016-10-18 2016-10-18 Color image autofocus method

Country Status (1)

Country Link
CN (1) CN106504279B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1877438A (en) * 2006-07-10 2006-12-13 南京邮电大学 Self-adaptive automatic focusing method used in digital camera
CN101494737A (en) * 2009-03-09 2009-07-29 杭州海康威视数字技术股份有限公司 Integrated camera device and self-adapting automatic focus method
WO2011099626A1 (en) * 2010-02-15 2011-08-18 株式会社ニコン Focus adjusting device and focus adjusting program
CN105938243A (en) * 2016-06-29 2016-09-14 华南理工大学 Multi-magnification microscope fast focusing method applied to TFT-LCD detection

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110455258A (en) * 2019-09-01 2019-11-15 中国电子科技集团公司第二十研究所 A kind of unmanned plane Terrain Clearance Measurement method based on monocular vision
CN112099217A (en) * 2020-08-18 2020-12-18 宁波永新光学股份有限公司 Automatic focusing method for microscope
CN113822877A (en) * 2021-11-17 2021-12-21 武汉中导光电设备有限公司 AOI equipment microscope defect detection picture quality evaluation method and system

Also Published As

Publication number Publication date
CN106504279B (en) 2019-02-19

Similar Documents

Publication Publication Date Title
CN111223088B (en) A casting surface defect recognition method based on deep convolutional neural network
CN103327220B (en) With green channel for the denoising method guided on low-light (level) Bayer image
CN104159091B (en) A kind of color interpolation method based on rim detection
CN111160301B (en) Tunnel disease target intelligent identification and extraction method based on machine vision
CN109559275B (en) Microscopic image stitching method of urine analyzer
WO2021042909A1 (en) Scene switching detection method and apparatus, electronic device, and storage medium
CN101605209A (en) Camera head and image-reproducing apparatus
CN108305253B (en) Pathological image classification method based on multiple-time rate deep learning
CN105740809A (en) Expressway lane line detection method based on onboard camera
CN102663720B (en) Image splicing method based on minimum mean square error criterion
CN102306307B (en) Positioning method of fixed point noise in color microscopic image sequence
CN108039044A (en) The system and method that Vehicular intelligent based on multiple dimensioned convolutional neural networks is lined up
EP4063836A1 (en) Algae analysis method
EP4064194A1 (en) Method for analyzing yeasts
CN106504279A (en) Coloured image auto focusing method
CN104700405B (en) A kind of foreground detection method and system
CN109360145A (en) A method for stitching infrared thermal images based on eddy current pulses
CN102169583B (en) Vehicle occlusion detection and segmentation method based on vehicle window location
CN112749741A (en) Hand brake fastening fault identification method based on deep learning
CN111193860B (en) One-frame calibration method for working point positions of inspection robot
CN111881914A (en) License plate character segmentation method and system based on self-learning threshold
WO2014196097A1 (en) Image processing system, image processing device, program, storage medium, and image processing method
CN104038746B (en) A kind of BAYER form view data interpolation method
CN104956669B (en) Image processing apparatus, camera head and image processing method
CN108833874A (en) A panoramic image color correction method for driving recorder

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240403

Address after: 073000 West 200m northbound at the intersection of Dingzhou commercial street and Xingding Road, Baoding City, Hebei Province (No. 1910, 19th floor, building 3, jueshishan community)

Patentee after: Hebei Kaitong Information Technology Service Co.,Ltd.

Country or region after: China

Address before: Anqing Normal University, 1318 Jixian North Road, Anqing City, Anhui Province, 246133

Patentee before: ANQING NORMAL University

Country or region before: China