CN107170002B - Automatic image focusing method and device - Google Patents

Automatic image focusing method and device

Info

Publication number
CN107170002B
CN107170002B CN201710307961.0A CN201710307961A
Authority
CN
China
Prior art keywords
image
obtaining
value
focusing
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710307961.0A
Other languages
Chinese (zh)
Other versions
CN107170002A (en)
Inventor
韩邦强
宗明成
孟璐璐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Microelectronics of CAS
Original Assignee
Institute of Microelectronics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Microelectronics of CAS filed Critical Institute of Microelectronics of CAS
Priority to CN201710307961.0A priority Critical patent/CN107170002B/en
Publication of CN107170002A publication Critical patent/CN107170002A/en
Application granted granted Critical
Publication of CN107170002B publication Critical patent/CN107170002B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/36Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • G02B21/365Control or image processing arrangements for digital or video microscopes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Optics & Photonics (AREA)
  • Image Processing (AREA)
  • Facsimile Image Signal Circuits (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image acquisition, and in particular to an automatic image focusing method and device. The method comprises the following steps: obtaining a first image, and obtaining a first target image according to the first image; obtaining a first salient region image according to the first target image; obtaining a minimum circumscribed rectangular area of the first salient region image; determining the minimum circumscribed rectangular area as a focusing window area, wherein the focusing window area comprises a second image; obtaining a second target image according to the second image, and obtaining a value of a focus evaluation function of the second target image according to the pixel gray value and the pixel gray average value; and obtaining a third target image according to the extreme value of the focus evaluation function. This technical solution solves the technical problems of long scanning and acquisition time and low focusing sensitivity in existing image focusing techniques, and achieves fast and precise SEM focusing.

Description

Automatic image focusing method and device
Technical Field
The invention relates to the technical field of image acquisition, in particular to an automatic image focusing method and automatic image focusing equipment.
Background
As the feature size of large-scale integrated circuits (LSI) continues to shrink, the scanning electron microscope (SEM) plays an increasingly important role in LSI inspection equipment. Many factors can change the focus state of the system while the SEM is operating, thereby degrading SEM image quality. To improve SEM working efficiency and obtain clear SEM images quickly, the SEM must have a fast and precise auto-focusing capability. An SEM forms images by scanning: the longer the scan time, the more electrons each pixel element receives and the higher the signal-to-noise ratio of the image, but also the longer the corresponding image acquisition time.
In routine work, those skilled in the art have found the following problem in the prior art:
existing focusing techniques place certain requirements on image quality, which in turn require longer SEM scanning and acquisition times, reduce the focusing sensitivity, and make them unsuitable for real-time focusing.
Disclosure of Invention
The embodiments of the application provide an automatic image focusing method and device that solve the technical problems of long scanning and acquisition time and low focusing sensitivity in existing image focusing techniques, and achieve the technical effect of fast and precise SEM focusing.
The embodiment of the application provides an automatic image focusing method, which comprises the following steps: obtaining a first image, wherein the first image comprises a background image and a noise image; obtaining a first target image according to the first image, wherein the first target image is an image obtained by filtering the background image and the noise image out of the first image; obtaining a first salient region image according to the first target image, wherein the first salient region image is a salient region image of the first target image; obtaining a minimum circumscribed rectangular area of the first salient region image; determining the minimum circumscribed rectangular area as a focusing window area, wherein the focusing window area comprises a second image; obtaining a second target image according to the second image, wherein the second target image is an image obtained by smoothing the second image; obtaining the pixel gray value h_lv(m, n) of the second target image in the focusing window area; obtaining the pixel gray average value of the second target image in the focusing window area,
h̄_lv = (1/(X·Y)) Σ_{m=1}^{X} Σ_{n=1}^{Y} h_lv(m, n),
where X × Y is the size of the focusing window area; obtaining a value of a focus evaluation function of the second target image according to the pixel gray value and the pixel gray average value, wherein the focus evaluation function is
F = Σ_{m=1}^{X} Σ_{n=1}^{Y} [h_lv(m, n) − h̄_lv]²;
obtaining an extreme value of the focus evaluation function; and obtaining a third target image according to the extreme value, wherein the third target image is the final focused image.
Further, obtaining the first target image from the first image comprises: obtaining an average pixel value of the first image; obtaining the first image processed by a band-pass filter; obtaining a pixel value of each point of the processed first image; obtaining a saliency value of the processed first image, wherein the saliency value is the Euclidean distance between the pixel value of each point of the processed first image and the average pixel value of the first image; and obtaining the first target image according to the saliency values of the processed first image, wherein the saliency values of the processed first image can be converted into a gray image, and the first target image is a part of that gray image.
Further, obtaining the minimum circumscribed rectangular region of the first salient region image specifically comprises: obtaining the minimum circumscribed rectangular region of the first salient region image according to a convex hull method.
Further, obtaining the second target image from the second image comprises obtaining the second target image according to the LoG (Laplacian of Gaussian) operator and the second image, wherein the formula of the LoG operator is
h(m, n) = ∇²[g(m, n) * f(m, n)],
where f(m, n) represents the second image, h(m, n) represents the second target image, g(m, n) = (1/(2πσ²))·exp(−(m² + n²)/(2σ²)) represents a Gaussian filter, and ∇² represents the Laplacian operator.
An embodiment of the present application provides an image auto-focusing apparatus, comprising: a processor adapted to implement instructions; and a storage device adapted to store a plurality of instructions, the instructions adapted to be loaded and executed by the processor to perform: obtaining a first image, wherein the first image comprises a background image and a noise image; obtaining a first target image according to the first image, wherein the first target image is an image obtained by filtering the background image and the noise image out of the first image; obtaining a first salient region image according to the first target image, wherein the first salient region image is a salient region image of the first target image; obtaining a minimum circumscribed rectangular area of the first salient region image; determining the minimum circumscribed rectangular area as a focusing window area, wherein the focusing window area comprises a second image; obtaining a second target image according to the second image, wherein the second target image is an image obtained by smoothing the second image; obtaining the pixel gray value h_lv(m, n) of the second target image in the focusing window area; obtaining the pixel gray average value of the second target image in the focusing window area,
h̄_lv = (1/(X·Y)) Σ_{m=1}^{X} Σ_{n=1}^{Y} h_lv(m, n),
where X × Y is the size of the focusing window area; obtaining a value of a focus evaluation function of the second target image according to the pixel gray value and the pixel gray average value, wherein the focus evaluation function is
F = Σ_{m=1}^{X} Σ_{n=1}^{Y} [h_lv(m, n) − h̄_lv]²;
obtaining an extreme value of the focus evaluation function; and obtaining a third target image according to the extreme value, wherein the third target image is the final focused image.
Further, the apparatus is further configured to: obtain an average pixel value of the first image; obtain the first image processed by a band-pass filter; obtain a pixel value of each point of the processed first image; obtain a saliency value of the processed first image, wherein the saliency value is the Euclidean distance between the pixel value of each point of the processed first image and the average pixel value of the first image; and obtain the first target image according to the saliency values of the processed first image, wherein the saliency values of the processed first image can be converted into a gray image, and the first target image is a part of that gray image.
Further, the apparatus is further configured to obtain the minimum circumscribed rectangular area of the first salient region image according to a convex hull method.
Further, the apparatus is configured to obtain the second target image according to the LoG operator and the second image, wherein the formula of the LoG operator is
h(m, n) = ∇²[g(m, n) * f(m, n)],
where f(m, n) represents the second image, h(m, n) represents the second target image, g(m, n) = (1/(2πσ²))·exp(−(m² + n²)/(2σ²)) represents a Gaussian filter, and ∇² represents the Laplacian operator.
One or more technical solutions in the embodiments of the present application have at least one or more of the following technical effects:
1. The embodiments of the application provide an automatic image focusing method and device. The method comprises the following steps: obtaining a first image, wherein the first image comprises a background image and a noise image; obtaining a first target image according to the first image, wherein the first target image is an image obtained by filtering the background image and the noise image out of the first image; obtaining a first salient region image according to the first target image, wherein the first salient region image is a salient region image of the first target image; obtaining a minimum circumscribed rectangular area of the first salient region image; determining the minimum circumscribed rectangular area as a focusing window area, wherein the focusing window area comprises a second image; obtaining a second target image according to the second image, wherein the second target image is an image obtained by smoothing the second image; obtaining the pixel gray value h_lv(m, n) of the second target image in the focusing window area; obtaining the pixel gray average value of the second target image in the focusing window area,
h̄_lv = (1/(X·Y)) Σ_{m=1}^{X} Σ_{n=1}^{Y} h_lv(m, n);
obtaining a value of a focus evaluation function of the second target image according to the pixel gray value and the pixel gray average value, wherein the focus evaluation function is
F = Σ_{m=1}^{X} Σ_{n=1}^{Y} [h_lv(m, n) − h̄_lv]²;
obtaining an extreme value of the focus evaluation function; and obtaining a third target image according to the extreme value, wherein the third target image is the final focused image. This technical solution solves the technical problems of long scanning and acquisition time and low focusing sensitivity in existing image focusing techniques: the background image and noise are effectively filtered out, the amount of computation is reduced, the application range and working efficiency of SEM auto-focusing are increased, and fast and precise SEM focusing is achieved.
Drawings
Fig. 1 is a flowchart of an image auto-focusing method according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a method for obtaining a salient image according to an embodiment of the present disclosure.
Detailed Description
The embodiments of the application provide an automatic image focusing method and device that solve the technical problems of long scanning and acquisition time and low focusing sensitivity in existing image focusing techniques; they effectively filter out the background image and noise and reduce the amount of computation, thereby widening the application range and improving the working efficiency of SEM auto-focusing and achieving fast and precise SEM focusing.
In order to solve the technical problems, the general idea of the embodiment of the application is as follows:
The embodiments of the application provide an automatic image focusing method and device. The method comprises the following steps: obtaining a first image, wherein the first image comprises a background image and a noise image; obtaining a first target image according to the first image, wherein the first target image is an image obtained by filtering the background image and the noise image out of the first image; obtaining a first salient region image according to the first target image, wherein the first salient region image is a salient region image of the first target image; obtaining a minimum circumscribed rectangular area of the first salient region image; determining the minimum circumscribed rectangular area as a focusing window area, wherein the focusing window area comprises a second image; obtaining a second target image according to the second image, wherein the second target image is an image obtained by smoothing the second image; obtaining the pixel gray value h_lv(m, n) of the second target image in the focusing window area; obtaining the pixel gray average value of the second target image in the focusing window area,
h̄_lv = (1/(X·Y)) Σ_{m=1}^{X} Σ_{n=1}^{Y} h_lv(m, n);
obtaining a value of a focus evaluation function of the second target image according to the pixel gray value and the pixel gray average value, wherein the focus evaluation function is
F = Σ_{m=1}^{X} Σ_{n=1}^{Y} [h_lv(m, n) − h̄_lv]²;
obtaining an extreme value of the focus evaluation function; and obtaining a third target image according to the extreme value, wherein the third target image is the final focused image. This technical solution solves the technical problems of long scanning and acquisition time and low focusing sensitivity in existing image focusing techniques: the background image and noise are effectively filtered out, the amount of computation is reduced, the application range and working efficiency of SEM auto-focusing are increased, and fast and precise SEM focusing is achieved.
The technical solutions of the present invention are described in detail below with reference to the drawings and specific embodiments. It should be understood that the specific features in the embodiments and examples serve to explain, not to limit, the technical solutions of the present application, and that the technical features in the embodiments and examples may be combined with one another provided there is no conflict.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
Example 1:
fig. 1 is a flowchart of an image auto-focusing method according to an embodiment of the present disclosure.
The method comprises the following steps:
step 101: obtaining a first image, wherein the first image comprises a background image and a noise image;
Specifically, the first image is the initial image scanned by the scanning electron microscope (SEM) during image acquisition; this initial SEM image contains the image information to be acquired, a background pattern, and noise. The background image refers to the background pattern, and the noise image refers to the noise in the first image. The background pattern and the noise lengthen the focusing time during image acquisition and reduce the focusing sensitivity.
Step 102: obtaining a first target image according to the first image, wherein the first target image is an image obtained by filtering the background image and the noise image from the first image;
Specifically, the background image and the noise image contained in the first image lengthen the focusing time during image capture and reduce the focusing sensitivity. Therefore, in this step, the background pattern and the noise in the first image are filtered out to obtain the first target image, which is the SEM saliency image. Filtering out the background pattern and the noise reduces their influence on the focusing procedure, lowers the image-quality requirement of the focusing technique, and shortens the image acquisition time.
Step 103: obtaining a first salient region image according to the first target image; wherein the first salient region image is a salient region image of the first target image;
Specifically, the first target image is a saliency image. A gray-level threshold is set, and the region whose saliency values fall within the threshold range is extracted as the salient region image, i.e., the first salient region image; the threshold can be set by the user according to requirements.
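By way of illustration only, a minimal sketch of this thresholding step is given below in Python; the threshold value and the use of NumPy are assumptions for the example and are not prescribed by the patent.

```python
import numpy as np

def extract_salient_region(saliency_map: np.ndarray, threshold: float) -> np.ndarray:
    """Binarize a gray-scale saliency map (step 103 sketch).

    Pixels whose saliency value reaches the user-chosen threshold are
    kept as the salient region; everything else is treated as background.
    """
    return (saliency_map >= threshold).astype(np.uint8)

# Example usage with an assumed threshold of 128 on an 8-bit saliency map:
# salient_mask = extract_salient_region(first_target_image, 128)
```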
Step 104: obtaining a minimum circumscribed rectangular area of the first salient area image;
The minimum circumscribed rectangle of the first salient region image is obtained according to a convex hull method; the region enclosed by this minimum circumscribed rectangle is the minimum circumscribed rectangular region.
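As a non-authoritative sketch of steps 104 and 105, assuming OpenCV and an axis-aligned circumscribed rectangle of the convex hull (a rotated minimum-area rectangle via cv2.minAreaRect would be an alternative reading of "minimum circumscribed rectangle"):

```python
import cv2
import numpy as np

def focus_window_from_mask(salient_mask: np.ndarray) -> tuple:
    """Convex-hull based focusing window (steps 104-105 sketch).

    The coordinates of the salient pixels are wrapped by their convex
    hull, and the axis-aligned rectangle circumscribing the hull is
    returned as (x, y, w, h) of the focusing window area.
    """
    ys, xs = np.nonzero(salient_mask)
    points = np.stack([xs, ys], axis=1).astype(np.int32)
    hull = cv2.convexHull(points)
    return cv2.boundingRect(hull)

# The second image would then be the crop inside that window, e.g.:
# x, y, w, h = focus_window_from_mask(salient_mask)
# second_image = first_image[y:y + h, x:x + w]
```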
Step 105: determining the minimum circumscribed rectangular area as a focusing window area, wherein the focusing window area comprises a second image;
The minimum circumscribed rectangular region is determined as the focusing window area. The scanning electron microscope automatically extracts the image within the focusing window area, and an image of size M × N is cropped from the extracted image; this cropped image is the second image.
Step 106: obtaining a second target image according to the second image, wherein the second target image is the smoothed second image.
Although the background image and the noise image have already been removed to obtain the saliency image and extract the salient region, the second image cropped from the focusing window region may still contain some isolated noise points; the second target image is obtained by smoothing the second image.
Further, the second image is smoothed with the LoG (Laplacian of Gaussian) operator to obtain the second target image. The formula of the LoG operator is
h(m, n) = ∇²[g(m, n) * f(m, n)],
where f(m, n) represents the second image, h(m, n) represents the second target image, g(m, n) = (1/(2πσ²))·exp(−(m² + n²)/(2σ²)) represents a Gaussian filter, and ∇² represents the Laplacian operator. Processing the second image with the LoG operator effectively filters out the noise.
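A minimal sketch of this LoG smoothing, assuming SciPy's gaussian_laplace and an illustrative σ value (neither library nor parameter value is specified by the patent):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_filter(second_image: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """LoG smoothing of the focus-window crop (step 106 sketch).

    gaussian_laplace convolves f(m, n) with a Gaussian g(m, n) of width
    sigma and then applies the Laplacian, i.e. h(m, n) = laplacian(g * f),
    which suppresses the isolated noise points left in the second image.
    """
    return gaussian_laplace(second_image.astype(np.float64), sigma=sigma)
```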
Step 107: obtaining the pixel gray value h_lv(m, n) of the second target image in the focusing window area;
specifically, this step calculates the pixel gray value h_lv(m, n) of the second target image within the focusing window area.
Step 108: obtaining the pixel gray average value of the second target image in the focusing window area.
Specifically, this step calculates the average value of the pixel gray levels of the second target image within the focusing window area. Taking the size of the focusing window area as X × Y, the pixel gray average value of the second target image is
h̄_lv = (1/(X·Y)) Σ_{m=1}^{X} Σ_{n=1}^{Y} h_lv(m, n).
Step 109: obtaining a value of the focus evaluation function of the second target image according to the pixel gray value and the pixel gray average value, wherein the focus evaluation function is
F = Σ_{m=1}^{X} Σ_{n=1}^{Y} [h_lv(m, n) − h̄_lv]².
The value of the focus evaluation function is calculated from this formula; the larger the value of the focus evaluation function, the clearer the finally obtained SEM image and the better the current focus state of the SEM.
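As an illustrative sketch only (assuming NumPy and the variance-type form of the evaluation function described above), the computation of steps 107-109 might look as follows:

```python
import numpy as np

def focus_measure(h_lv: np.ndarray) -> float:
    """Variance-type focus evaluation over the focusing window (sketch).

    Computes the sum over (m, n) of [h_lv(m, n) - mean(h_lv)]**2 on the
    LoG-filtered window crop; larger values indicate a sharper image.
    """
    mean_value = h_lv.mean()                         # step 108: window mean
    return float(np.sum((h_lv - mean_value) ** 2))   # step 109: evaluation value
```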
Step 110: obtaining an extreme value of the focusing evaluation function;
Specifically, the magnitude of the focus evaluation function value reflects the sharpness of the image. Therefore, steps 101 to 109 are repeated for a series of images acquired at different focus settings, and the corresponding focus evaluation function value is calculated for each image from the image within the focusing window area; the values of the focus evaluation function for the different images describe their sharpness. The extreme point of the focus evaluation function is then determined from these values; the image corresponding to the extreme point is the clearest image, and the focusing process is completed.
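A sketch of this extremum search over a through-focus series; the acquisition loop, the list of window crops, and the variance-type measure reused here are assumptions for illustration, not details fixed by the patent.

```python
import numpy as np

def sharpest_index(window_crops: list) -> int:
    """Locate the extreme point of the focus evaluation function (step 110 sketch).

    Each element of window_crops is assumed to be the LoG-filtered
    focus-window image obtained at one focus setting via steps 101-106.
    The index of the crop with the largest variance-type evaluation
    value is returned; the corresponding frame is the clearest image.
    """
    scores = []
    for h_lv in window_crops:
        mean_value = h_lv.mean()
        scores.append(np.sum((h_lv - mean_value) ** 2))  # step 109 per image
    return int(np.argmax(scores))                        # extreme point
```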
Step 111: and obtaining a third target image according to the extreme value, wherein the third target image is a final focusing image.
As stated in step 110, the image corresponding to the extreme point is the clearest image, i.e., the third target image, i.e., the final focused image obtained after the focusing process is completed.
As shown in fig. 2, an embodiment of the present application further provides a flowchart of a method for obtaining a salient image, where the method includes:
step 201: obtaining an average pixel value of the first image;
specifically, the first image refers to an initial image scanned by a scanning electron microscope SEM when acquiring an image, and an average pixel value of the initial image is calculated and obtained.
Step 202: obtaining the first image processed by the band-pass filter;
Specifically, the first image is processed with band-pass filters formed from combinations of Gaussian differences (difference-of-Gaussians filters), and the processed first image is thereby obtained.
Step 203: obtaining a pixel value of each point of the processed first image;
specifically, after the processed first image is obtained, the pixel value of each point of the processed first image is calculated and obtained.
Step 204: obtaining a saliency value of the processed first image, wherein the saliency value is the Euclidean distance between the pixel value of each point of the processed first image and the average pixel value of the first image;
specifically, this step obtains the saliency value of the processed first image: the Euclidean distance between the pixel value of each point of the processed first image and the average pixel value of the first image is calculated, and this Euclidean distance is the saliency value of the processed first image.
Step 205: obtaining the first target image according to the saliency values of the processed first image, wherein the saliency values of the processed first image can be converted into a gray image, and the first target image is a part of that gray image.
Specifically, the saliency values of the processed first image are converted into a gray image, and the part whose saliency value lies within a set range is extracted from the gray image as the saliency image, i.e., the first target image; the range of saliency values can be set by the user according to requirements.
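A minimal sketch of steps 201-205, assuming a difference-of-Gaussians band-pass filter built from SciPy's gaussian_filter; the two Gaussian widths and the 8-bit rescaling are illustrative choices, not values from the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_image(first_image: np.ndarray) -> np.ndarray:
    """Band-pass saliency of a gray-scale SEM frame (steps 201-205 sketch).

    The average pixel value of the original image (step 201) is compared
    pixel by pixel with a difference-of-Gaussians band-pass filtered copy
    (step 202); the Euclidean distance between the two, which for a single
    gray channel reduces to the absolute difference, is the saliency value
    (steps 203-204), rescaled here to an 8-bit gray image (step 205).
    """
    img = first_image.astype(np.float64)
    mean_value = img.mean()                                             # step 201
    band_pass = gaussian_filter(img, 2.0) - gaussian_filter(img, 16.0)  # step 202
    saliency = np.abs(band_pass - mean_value)                           # steps 203-204
    span = saliency.max() - saliency.min() + 1e-12
    saliency = 255.0 * (saliency - saliency.min()) / span               # step 205: gray image
    return saliency.astype(np.uint8)
```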
An embodiment of the present application provides an image auto-focusing apparatus, comprising: a processor adapted to implement instructions; and a storage device adapted to store a plurality of instructions, the instructions adapted to be loaded and executed by the processor to perform: obtaining a first image, wherein the first image comprises a background image and a noise image; obtaining a first target image according to the first image, wherein the first target image is an image obtained by filtering the background image and the noise image out of the first image; obtaining a first salient region image according to the first target image, wherein the first salient region image is a salient region image of the first target image; obtaining a minimum circumscribed rectangular area of the first salient region image; determining the minimum circumscribed rectangular area as a focusing window area, wherein the focusing window area comprises a second image; obtaining a second target image according to the second image, wherein the second target image is an image obtained by smoothing the second image; obtaining the pixel gray value h_lv(m, n) of the second target image in the focusing window area; obtaining the pixel gray average value of the second target image in the focusing window area,
h̄_lv = (1/(X·Y)) Σ_{m=1}^{X} Σ_{n=1}^{Y} h_lv(m, n),
where X × Y is the size of the focusing window area; obtaining a value of a focus evaluation function of the second target image according to the pixel gray value and the pixel gray average value, wherein the focus evaluation function is
F = Σ_{m=1}^{X} Σ_{n=1}^{Y} [h_lv(m, n) − h̄_lv]²;
obtaining an extreme value of the focus evaluation function; and obtaining a third target image according to the extreme value, wherein the third target image is the final focused image.
Further, the apparatus is further configured to: obtain an average pixel value of the first image; obtain the first image processed by a band-pass filter; obtain a pixel value of each point of the processed first image; obtain a saliency value of the processed first image, wherein the saliency value is the Euclidean distance between the pixel value of each point of the processed first image and the average pixel value of the first image; and obtain the first target image according to the saliency values of the processed first image, wherein the saliency values of the processed first image can be converted into a gray image, and the first target image is a part of that gray image.
Further, the apparatus is further configured to obtain the minimum circumscribed rectangular area of the first salient region image according to a convex hull method.
Further, the device is configured to obtain the second target image according to the LoG operator and the second image, wherein the formula of the LoG operator is
h(m, n) = ∇²[g(m, n) * f(m, n)],
where f(m, n) represents the second image, h(m, n) represents the second target image, g(m, n) = (1/(2πσ²))·exp(−(m² + n²)/(2σ²)) represents a Gaussian filter, and ∇² represents the Laplacian operator.
The image automatic focusing method and the image automatic focusing equipment in the embodiment of the application have at least the following technical effects:
1. The embodiments of the application provide an automatic image focusing method and device. The method comprises the following steps: obtaining a first image, wherein the first image comprises a background image and a noise image; obtaining a first target image according to the first image, wherein the first target image is an image obtained by filtering the background image and the noise image out of the first image; obtaining a first salient region image according to the first target image, wherein the first salient region image is a salient region image of the first target image; obtaining a minimum circumscribed rectangular area of the first salient region image; determining the minimum circumscribed rectangular area as a focusing window area, wherein the focusing window area comprises a second image; obtaining a second target image according to the second image, wherein the second target image is an image obtained by smoothing the second image; obtaining the pixel gray value h_lv(m, n) of the second target image in the focusing window area; obtaining the pixel gray average value of the second target image in the focusing window area,
h̄_lv = (1/(X·Y)) Σ_{m=1}^{X} Σ_{n=1}^{Y} h_lv(m, n);
obtaining a value of a focus evaluation function of the second target image according to the pixel gray value and the pixel gray average value, wherein the focus evaluation function is
F = Σ_{m=1}^{X} Σ_{n=1}^{Y} [h_lv(m, n) − h̄_lv]²;
obtaining an extreme value of the focus evaluation function; and obtaining a third target image according to the extreme value, wherein the third target image is the final focused image. This technical solution solves the technical problems of long scanning and acquisition time and low focusing sensitivity in existing image focusing techniques: the background image and noise are effectively filtered out, the amount of computation is reduced, the application range and working efficiency of SEM auto-focusing are increased, and fast and precise SEM focusing is achieved.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (8)

1. A method of image auto-focus, the method comprising:
obtaining a first image, wherein the first image comprises a background image and a noise image;
obtaining a first target image according to the first image, wherein the first target image is an image obtained by filtering the background image and the noise image from the first image;
obtaining a first salient region image according to the first target image; wherein the first salient region image is a salient region image of the first target image;
obtaining a minimum circumscribed rectangular area of the first salient area image;
determining the minimum circumscribed rectangular area as a focusing window area, wherein the focusing window area comprises a second image;
obtaining a second target image according to the second image, wherein the second target image is an image of the second image after smoothing processing;
obtaining the pixel gray value h_lv(m, n) of the second target image in the focusing window area;
obtaining the pixel gray average value of the second target image in the focusing window area,
h̄_lv = (1/(M·N)) Σ_{m=1}^{M} Σ_{n=1}^{N} h_lv(m, n);
obtaining a value of a focus evaluation function of the second target image according to the pixel gray value and the pixel gray average value, wherein the focus evaluation function is
F = Σ_{m=1}^{M} Σ_{n=1}^{N} [h_lv(m, n) − h̄_lv]²,
wherein the second image is an image of size M × N cropped from the focusing window region, M is the maximum value of the abscissa of the second image, N is the maximum value of the ordinate of the second image, m is the abscissa of any point in the second image, and n is the ordinate of any point in the second image;
obtaining an extreme value of the focusing evaluation function;
and obtaining a third target image according to the extreme value, wherein the third target image is a final focusing image.
2. The method of claim 1, wherein said obtaining a first target image from said first image comprises:
obtaining an average pixel value of the first image;
obtaining the first image processed by the band-pass filter;
obtaining a pixel value of each point of the processed first image;
obtaining a saliency value of the processed first image, wherein the saliency value is the Euclidean distance between the pixel value of each point of the processed first image and the average pixel value of the first image;
and obtaining the first target image according to the saliency values of the processed first image, wherein the saliency values of the processed first image can be converted into a gray image, and the first target image is a part of that gray image.
3. The method according to claim 1, wherein the obtaining of the minimum bounding rectangle region of the first salient region image is specifically:
and obtaining the minimum circumscribed rectangular area of the first salient area image according to a convex hull method.
4. The method of claim 1, wherein said obtaining the second target image from the second image comprises:
obtaining the second target image according to the LoG operator and the second image;
wherein the formula of the LoG operator is
h(m, n) = ∇²[g(m, n) * f(m, n)],
wherein f(m, n) represents the second image, h(m, n) represents the second target image, g(m, n) = (1/(2πσ²))·exp(−(m² + n²)/(2σ²)) represents a Gaussian filter, and ∇² represents the Laplacian operator.
5. An image auto-focusing apparatus, characterized in that the apparatus comprises:
a processor adapted to implement instructions;
a storage device adapted to store a plurality of instructions, the instructions adapted to be loaded and executed by a processor;
obtaining a first image, wherein the first image comprises a background image and a noise image;
obtaining a first target image according to the first image, wherein the first target image is an image obtained by filtering the background image and the noise image from the first image;
obtaining a first salient region image according to the first target image; wherein the first salient region image is a salient region image of the first target image;
obtaining a minimum circumscribed rectangular area of the first salient area image;
determining the minimum circumscribed rectangular area as a focusing window area, wherein the focusing window area comprises a second image;
obtaining a second target image according to the second image, wherein the second target image is an image of the second image after smoothing processing;
obtaining the pixel gray value h_lv(m, n) of the second target image in the focusing window area;
obtaining the pixel gray average value of the second target image in the focusing window area,
h̄_lv = (1/(M·N)) Σ_{m=1}^{M} Σ_{n=1}^{N} h_lv(m, n);
obtaining a value of a focus evaluation function of the second target image according to the pixel gray value and the pixel gray average value, wherein the focus evaluation function is
F = Σ_{m=1}^{M} Σ_{n=1}^{N} [h_lv(m, n) − h̄_lv]²,
wherein the second image is an image of size M × N cropped from the focusing window region, M is the maximum value of the abscissa of the second image, N is the maximum value of the ordinate of the second image, m is the abscissa of any point in the second image, and n is the ordinate of any point in the second image;
obtaining an extreme value of the focusing evaluation function;
and obtaining a third target image according to the extreme value, wherein the third target image is a final focusing image.
6. The apparatus of claim 5, wherein the apparatus further comprises:
obtaining an average pixel value of the first image;
obtaining the first image processed by the band-pass filter;
obtaining a pixel value of each point of the processed first image;
obtaining a saliency value of the processed first image, wherein the saliency value is the Euclidean distance between the pixel value of each point of the processed first image and the average pixel value of the first image;
and obtaining the first target image according to the saliency values of the processed first image, wherein the saliency values of the processed first image can be converted into a gray image, and the first target image is a part of that gray image.
7. The apparatus of claim 5, wherein the apparatus further comprises:
and obtaining the minimum circumscribed rectangular area of the first salient area image according to a convex hull method.
8. The apparatus of claim 5, wherein the apparatus further comprises:
obtaining the second target image according to the LoG operator and the second image;
wherein the formula of the LoG operator is
h(m, n) = ∇²[g(m, n) * f(m, n)],
wherein f(m, n) represents the second image, h(m, n) represents the second target image, g(m, n) = (1/(2πσ²))·exp(−(m² + n²)/(2σ²)) represents a Gaussian filter, and ∇² represents the Laplacian operator.
CN201710307961.0A 2017-05-04 2017-05-04 Automatic image focusing method and device Expired - Fee Related CN107170002B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710307961.0A CN107170002B (en) 2017-05-04 2017-05-04 Automatic image focusing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710307961.0A CN107170002B (en) 2017-05-04 2017-05-04 Automatic image focusing method and device

Publications (2)

Publication Number Publication Date
CN107170002A CN107170002A (en) 2017-09-15
CN107170002B true CN107170002B (en) 2020-07-21

Family

ID=59812545

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710307961.0A Expired - Fee Related CN107170002B (en) 2017-05-04 2017-05-04 Automatic image focusing method and device

Country Status (1)

Country Link
CN (1) CN107170002B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110531484B (en) * 2019-07-24 2021-04-20 中国地质大学(武汉) Microscope automatic focusing method with settable focusing process model
CN111182282B (en) * 2019-12-30 2022-03-29 成都极米科技股份有限公司 Method and device for detecting projection focusing area and projector
CN115835016B (en) * 2022-11-16 2024-04-23 中国科学院新疆理化技术研究所 Open-loop type automatic focusing method, device, equipment and medium for radiation-resistant camera
CN117066702B (en) * 2023-08-25 2024-04-19 上海频准激光科技有限公司 Laser marking control system based on laser

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101620360A (en) * 2008-06-30 2010-01-06 佳能株式会社 Image capturing apparatus and control method for the same
CN102940510A (en) * 2012-08-31 2013-02-27 华南理工大学 Automatic focusing method for ultrasonic elastography
CN103745237A (en) * 2013-12-26 2014-04-23 暨南大学 Face identification algorithm under different illumination conditions
CN105243660A (en) * 2015-09-16 2016-01-13 浙江大学 Quality evaluation method of light source scene-containing automatic focusing image
CN105631828A (en) * 2015-12-29 2016-06-01 华为技术有限公司 Image processing method and device
CN105845534A (en) * 2016-03-23 2016-08-10 浙江东方光学眼镜有限公司 Automatic focusing method of electron microscope
CN106161941A (en) * 2016-07-29 2016-11-23 深圳众思科技有限公司 Dual camera chases after burnt method, device and terminal automatically
CN106324945A (en) * 2015-06-30 2017-01-11 中兴通讯股份有限公司 Non-contact automatic focusing method and device
CN205883406U (en) * 2016-07-29 2017-01-11 深圳众思科技有限公司 Automatic burnt device and terminal of chasing after of two cameras

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101620360A (en) * 2008-06-30 2010-01-06 佳能株式会社 Image capturing apparatus and control method for the same
CN102940510A (en) * 2012-08-31 2013-02-27 华南理工大学 Automatic focusing method for ultrasonic elastography
CN103745237A (en) * 2013-12-26 2014-04-23 暨南大学 Face identification algorithm under different illumination conditions
CN106324945A (en) * 2015-06-30 2017-01-11 中兴通讯股份有限公司 Non-contact automatic focusing method and device
CN105243660A (en) * 2015-09-16 2016-01-13 浙江大学 Quality evaluation method of light source scene-containing automatic focusing image
CN105631828A (en) * 2015-12-29 2016-06-01 华为技术有限公司 Image processing method and device
CN105845534A (en) * 2016-03-23 2016-08-10 浙江东方光学眼镜有限公司 Automatic focusing method of electron microscope
CN106161941A (en) * 2016-07-29 2016-11-23 深圳众思科技有限公司 Dual camera chases after burnt method, device and terminal automatically
CN205883406U (en) * 2016-07-29 2017-01-11 深圳众思科技有限公司 Automatic burnt device and terminal of chasing after of two cameras

Also Published As

Publication number Publication date
CN107170002A (en) 2017-09-15

Similar Documents

Publication Publication Date Title
WO2019148739A1 (en) Comprehensive processing method and system for blurred image
CN107170002B (en) Automatic image focusing method and device
US8073286B2 (en) Detection and correction of flash artifacts from airborne particulates
WO2009146297A1 (en) Device and method for estimating whether an image is blurred
EP3371741B1 (en) Focus detection
JP7449507B2 (en) Method of generating a mask for a camera stream, computer program product and computer readable medium
KR102582261B1 (en) Method for determining a point spread function of an imaging system
KR101792564B1 (en) Image processing System and Image processing Method
CN109087347B (en) Image processing method and device
KR102599498B1 (en) Multi-focus microscopic image fusion method using local area feature extraction
CN115995078A (en) Image preprocessing method and system for plankton in-situ observation
CN111414877B (en) Table cutting method for removing color frame, image processing apparatus and storage medium
JP3860540B2 (en) Entropy filter and region extraction method using the filter
CN107680083B (en) Parallax determination method and parallax determination device
Safonov et al. Adaptive sharpening of photos
JP2003141550A (en) Position detection method
JP5853369B2 (en) Image processing apparatus, image processing method, and program
CN115187587B (en) Detection method and imaging system for continuous tin detection
Baharin et al. Enhancement of Low-Quality Diatom Images using Integrated Automatic Background Removal (IABR) Method from Digital Microscopic Image
CN112862708B (en) Adaptive recognition method of image noise, sensor chip and electronic equipment
CN110796611B (en) Image correction method and image correction device for shadow and free-form surface
JP2011203827A (en) Image processing apparatus and method
JP6314281B1 (en) Image processing method and foreground region acquisition method
CN115714912A (en) FV value calculation method based on image and DSP device
CN114972108A (en) Microscopic image definition detection method and system and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200721