WO2005026802A1 - Autofocus control method, autofocus controller, and image processor - Google Patents

Autofocus control method, autofocus controller, and image processor Download PDF

Info

Publication number
WO2005026802A1
WO2005026802A1 (PCT/JP2004/012609)
Authority
WO
WIPO (PCT)
Prior art keywords
focus
evaluation value
image
image data
calculating
Prior art date
Application number
PCT/JP2004/012609
Other languages
French (fr)
Japanese (ja)
Other versions
WO2005026802B1 (en)
Inventor
Hiroki Ebe
Masaya Yamauchi
Kiyoyuki Kikuchi
Kiyotaka Kuroda
Junichi Takahashi
Original Assignee
Sony Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corporation filed Critical Sony Corporation
Priority to US10/569,480 priority Critical patent/US20070187571A1/en
Publication of WO2005026802A1 publication Critical patent/WO2005026802A1/en
Publication of WO2005026802B1 publication Critical patent/WO2005026802B1/en

Links

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28Systems for automatic generation of focusing signals
    • G02B7/36Systems for automatic generation of focusing signals using image sharpness techniques, e.g. image processing techniques for generating autofocus signals

Definitions

  • the present invention is suitably used, for example, in an apparatus for photographing, observing, and inspecting an object sample with a video camera.
  • The present invention relates in particular to an autofocus control method, an autofocus control device, and an image processing device. Background art
  • image autofocus control has been performed by using a focus evaluation value that has been quantified by evaluating the degree of focus from image data of a subject sample (work).
  • That is, image data of the sample is collected while the lens-work distance is varied, and the focus evaluation value is calculated for each image to search for a suitable focus position.
  • Fig. 21 shows an example of the relationship between the lens-work distance (horizontal axis) and the focus evaluation value (vertical axis).
  • images are captured by changing the distance between the lens and the workpiece at regular intervals, and the focus evaluation value of each image is calculated and plotted.
  • The maximum focus evaluation value in the graph corresponds to the in-focus position, that is, the optimum focus position.
  • the plot of the focus evaluation value with respect to the lens-work distance is referred to as a “focus curve”.
  • the distance between the lens and the workpiece is changed within a predetermined search range.
  • In the conventional technique, the maximum focus evaluation value in this graph was taken as the optimum focus position, or the optimum focus position was calculated from the focus evaluation values before and after the maximum.
  • As the focus evaluation value, the maximum brightness, the differential of the brightness, the variance of the brightness, the variance of the differential of the brightness, and the like are used.
  • As an algorithm for finding the optimum focus position from the maximum focus evaluation value, there is the hill-climbing method, and methods that divide the search operation into several stages have been put to practical use to shorten the search time (Japanese Patent Laid-Open No. 6-217180, Japanese Patent Laid-Open No. 2002-333571, and Japanese Patent No. 2971892).
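  • For illustration, this conventional maximum search can be sketched as follows (a minimal sketch, not the patented method; the capture function and the variance-of-brightness metric are assumptions of the example):

```python
import numpy as np

def focus_search(capture, z_start, z_end, step):
    """Conventional search: sweep the lens-work distance, compute a focus
    evaluation value per image, and return the distance giving the maximum."""
    best_z, best_score = z_start, float("-inf")
    for z in np.arange(z_start, z_end, step):
        img = capture(z)                    # hypothetical grab at distance z
        score = np.var(img.astype(float))   # variance of brightness as metric
        if score > best_score:
            best_z, best_score = z, score
    return best_z
```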
  • Meanwhile, target works have become finer, and higher resolution is demanded of the inspection instruments to which this focusing technique is applied.
  • This improvement in resolution can be accommodated by shortening the wavelength of the illumination light source and using a single wavelength.
  • Shortening the wavelength raises the optical resolution, and using a single wavelength avoids the effects of chromatic aberration and the like.
  • However, shortening the wavelength of the illumination light source constrains the optical materials, such as lenses, usable in the optical path, and using a single wavelength introduces effects such as speckle. Speckle refers to a state in which the brightness of the screen is distributed in patches, forming a light-and-shade pattern unique to the wavelength of the light source and the configuration of the optical system.
  • Due to these effects, the focus curve may, as shown in FIG. 22, take larger values at portions affected by the optical system than the focus evaluation value at the optimum focus position. Since the shape and numerical range of the focus curve are not uniquely determined by surface conditions such as the reflectivity of the object, the conventional technique of determining the focus position from the maximum focus evaluation value cannot stably find the optimum focus position in this state.
  • The present invention has been made in view of the above problems, and its object is to provide an autofocus control method, an autofocus control device, and an image processing device that realize stable autofocus operation by eliminating influences caused by the optical system. Disclosure of the invention
  • The autofocus control method of the present invention includes an image acquisition step of acquiring image data of a subject at a plurality of focus positions having different distances between a lens and the subject.
  • It further includes: an evaluation value calculation step of calculating a focus evaluation value for each of the plurality of focus positions from the acquired image data; a focus position calculation step of calculating, as the in-focus position, the focus position at which the focus evaluation value is maximized; and a movement step of moving the lens relative to the subject to the calculated in-focus position.
  • The image data acquired in the image acquisition step is smoothed, and the focus evaluation value is calculated from the smoothed image data.
  • In other words, an image smoothing process is added in the present invention. This smoothing process makes it possible to calculate the focus evaluation value appropriately, capturing the characteristics of the target sample (subject) while reducing the speckle density-distribution pattern.
  • The processing conditions, such as the unit processing range, the filter coefficients, the number of processing passes, and the presence or absence of weighting, can be set appropriately according to the type of optical system used and the size and surface properties of the sample.
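  • [Equation 1] itself is not reproduced in this text; as one plausible realization of such a smoothing filter, a 3 × 3 unweighted averaging pass (kernel size, weights, and pass count being the configurable conditions above) might look like this sketch:

```python
import numpy as np

def smooth(img, passes=1):
    """3x3 mean filter to suppress speckle-like brightness patches.
    Kernel size, weighting, and number of passes are configurable."""
    out = img.astype(float)
    for _ in range(passes):
        p = np.pad(out, 1, mode="edge")
        acc = np.zeros_like(out)
        for dy in (0, 1, 2):
            for dx in (0, 1, 2):
                acc += p[dy:dy + out.shape[0], dx:dx + out.shape[1]]
        out = acc / 9.0
    return out
```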
  • For calculating the focus evaluation value, it is preferable to use edge enhancement processing that extracts differences in luminance data between adjacent pixels in the acquired image data.
  • It is also preferable to divide the calculated evaluation value by the average luminance of the entire screen, normalizing the focus evaluation value by the screen average luminance.
  • The autofocus control device of the present invention includes: evaluation value calculation means for calculating a focus evaluation value for each of a plurality of focus positions from image data acquired at the plurality of focus positions having different lens-subject distances; focus position calculation means for calculating the in-focus position from the maximum calculated focus evaluation value; and image smoothing means for smoothing the acquired image data, the focus evaluation value of each image being calculated from the image data smoothed by the image smoothing means.
  • the autofocus control device of the present invention is configured as one image processing device in combination with image acquisition means for acquiring image data of a subject at a plurality of focus positions, drive means for adjusting the distance between a lens and a subject, and the like.
  • The image acquisition means and the drive means may instead be configured as separate, independent members.
  • FIG. 1 is a schematic configuration diagram of an image processing apparatus 1 according to a first embodiment of the present invention.
  • FIG. 2 is a block diagram illustrating the configuration of the controller 7.
  • FIG. 3 is a flowchart illustrating the operation of the image processing apparatus 1.
  • FIG. 4 is a flowchart illustrating another operation example of the image processing apparatus 1.
  • FIG. 5 shows examples of focus curves for explaining the operation of the present invention: FC1 is obtained when both image smoothing and luminance normalization of the focus evaluation values are performed, FC2 when only image smoothing is performed, and FC3 is a conventional example.
  • FIG. 6 is a diagram for explaining a method of calculating a focus position by approximating a curve near the maximum focus evaluation value.
  • FIG. 7 is a diagram showing the relationship between the command voltage for the lens driving unit 4 and the actual movement voltage of the lens.
  • FIG. 8 is a diagram for explaining a method of performing parallel processing of capturing a sample image and calculating a focus evaluation value.
  • FIG. 9 is a diagram showing the second embodiment of the present invention, and is a diagram for explaining a method of dividing a screen into a plurality of parts and detecting a focus position in each divided region.
  • FIG. 10 is a process flow chart according to the third embodiment of the present invention.
  • FIG. 11 is a memory configuration diagram applied to the third embodiment of the present invention.
  • FIG. 12 is a flowchart illustrating an all-focus image acquiring step.
  • FIG. 13 is a diagram showing the fourth embodiment of the present invention, and is a diagram for explaining a method of acquiring a stereoscopic image by combining the focus positions of the sample images in the focus axis direction.
  • FIG. 14 is a flowchart illustrating a method of synthesizing the above-mentioned stereoscopic image.
  • FIG. 15 is a functional block diagram showing a first configuration example of an autofocus control device according to a fifth embodiment of the present invention.
  • FIG. 16 is a functional block diagram showing a second configuration example of the autofocus control device according to the fifth embodiment of the present invention.
  • FIG. 17 is a functional block diagram showing a third configuration example of the autofocus control device according to the fifth embodiment of the present invention.
  • FIG. 18 is a functional block diagram showing a fourth configuration example of the autofocus control device according to the fifth embodiment of the present invention.
  • FIG. 19 is a diagram showing a fifth configuration example of the autofocus control device according to the fifth embodiment of the present invention.
  • FIGS. 20A to 20B are block diagrams showing modified examples of the configuration of the drive system of the image processing apparatus 1.
  • FIG. 21 is an example of a focus curve showing a relationship between a lens-work distance (focus position) and a focus evaluation value.
  • FIG. 22 is a diagram for explaining the problems of the prior art. BEST MODE FOR CARRYING OUT THE INVENTION
  • FIG. 1 is a schematic configuration diagram of an image processing apparatus to which an autofocus control method and an autofocus control device according to an embodiment of the present invention are applied.
  • The image processing apparatus 1 is used for observing the surface of an object sample (work), and in particular is configured as a microscope for detecting defects in element structures formed by fine processing on a surface, such as that of a semiconductor device.
  • The image processing apparatus 1 includes a measurement stage 2, an objective lens 3, a lens driving unit 4, a lens barrel 5, a CCD (Charge Coupled Device) camera 6, a controller 7, a driver 8, a monitor 9, and an illumination light source 10.
  • The measurement stage 2 supports a subject sample W (for example, a semiconductor wafer) and is configured to move in the X-Y directions (the left-right direction in the figure and the direction perpendicular to the page).
  • The lens driving unit 4 moves the objective lens 3 relative to the subject sample W on the measurement stage 2 in the focus axis direction (the vertical direction in the figure) over a predetermined focus position search range, variably adjusting the lens-work distance.
  • The lens driving unit 4 corresponds to the “drive means” of the present invention.
  • the lens driving section 4 is constituted by a piezo element, but other than this, a precision feed mechanism such as a pulse motor can be employed.
  • Although in this embodiment the objective lens 3 is moved in the focus axis direction to adjust the lens-work distance, the measurement stage 2 may be moved in the focus axis direction instead.
  • The CCD camera 6 functions as a video camera that images a specific area of the surface of the subject sample W on the measurement stage 2 through the objective lens 3 moving within the focus position search range, and outputs the acquired image data to the controller 7.
  • another solid-state imaging device such as a CMOS imager may be applied.
  • The controller 7 is composed of a computer, controls the operation of the entire image processing apparatus 1, and includes an autofocus control unit 11 that detects the optimum focus position (in-focus position) in a specific area on the surface of the sample W.
  • the autofocus controller 11 corresponds to the “autofocus controller” of the present invention.
  • the driver 8 receives a control signal from the autofocus control unit 11 and generates a drive signal for driving the lens drive unit 4.
  • The driver 8 is constituted by a piezo driver having a hysteresis compensation function. Note that the driver 8 may be incorporated in the autofocus control unit 11.
  • the auto focus control unit 11 drives the lens driving unit 4 via the driver 8 and changes the distance (lens-work distance) between the objective lens 3 and the subject sample W at a fixed interval.
  • the image data of the subject sample W is acquired by the CCD camera 6, and various processes described later are performed to detect an optimal focus position, that is, a focus position, in the imaging region of the subject sample W.
  • the monitor 9 displays the contents of processing by the controller 7 and also displays an image of the subject sample W captured by the CCD camera 6, and the like.
  • a continuous laser or a pulse laser light source having a wavelength of 196 nm is used as the illumination light source 10.
  • the wavelength range of the illumination light source is not limited to the above-described ultraviolet light range, and it is of course possible to use another ultraviolet light having a different wavelength range depending on the application or a light source in the visible light range.
  • FIG. 2 is a block diagram of a configuration of the image processing apparatus 1.
  • the analog image signal output from the CCD camera 6 is converted into a digital image signal by the A / D converter 13.
  • the output signal of the A / D converter 13 is supplied to the memory 14 and stored.
  • The autofocus control unit 11 of the controller 7 reads the converted digital image signal from the memory 14 and performs the autofocus control described later. The driver 8 then generates a drive signal for the lens driving unit 4 based on a control signal from the controller 7 supplied via the D/A converter 17.
  • For this purpose, the autofocus control unit 11 includes a smoothing processing circuit 11A, an average luminance calculation circuit 11B, an evaluation value calculation circuit 11C, and a focus position calculation circuit 11D.
  • The smoothing processing circuit 11A is a circuit that smooths the autofocus target area (the entire screen or a partial area within it) of each image signal (sample image) of the subject sample W acquired at the plurality of focus positions, and corresponds to the “image smoothing means” of the present invention.
  • the focus control unit 11 reduces the uneven distribution (speckle) of brightness of each of the acquired sample images by the smoothing processing circuit 11A.
  • An example of the smoothing process is shown in [Equation 1].
  • The processing conditions for image smoothing can be set arbitrarily as long as the original features and contours of the surface of the sample W captured by the CCD camera 6 are not crushed; these conditions are set via an input device 16 such as a keyboard, mouse, or touch panel.
  • the average luminance calculation circuit 11B is a circuit that calculates the screen average luminance of the autofocus target area of each sample image, and corresponds to “average luminance calculation means” of the present invention.
  • The screen average luminance at each focus position obtained by the average luminance calculation circuit 11B is used in the evaluation value calculation circuit 11C, as described later, to calculate the focus evaluation value Pv at that focus position.
  • The evaluation value calculation circuit 11C is a circuit that calculates the focus evaluation value Pv of each sample image, and corresponds to the “evaluation value calculation means” of the present invention.
  • the evaluation value calculation circuit 11C is configured to include an edge enhancement processing circuit.
  • the focus evaluation value is an index that numerically evaluates a state in which a characteristic portion and a contour portion of an image are clearly visible. Looking at the change in luminance between pixels in the feature-contour portion, a sharp change occurs in a clear image, and a gradual change occurs in a blurred image. Therefore, in the present embodiment, the focus evaluation value Pv is calculated by evaluating the luminance data difference between adjacent pixels using edge enhancement processing. In addition, the focus evaluation value may be calculated based on a differential value of brightness, variance of brightness, or the like.
  • In this edge enhancement, the pixel area processed at a time is 3 × 3, but it may be 5 × 5 or 7 × 7.
  • Although the coefficients are weighted here, the setting of the coefficients is arbitrary, and the processing may be performed without weighting.
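  • [Equation 2] is likewise not reproduced here; a plausible stand-in for such a 3 × 3 edge enhancement (a Laplacian-style difference of each pixel against its four neighbors; the weighting is optional, as noted) is the following sketch:

```python
import numpy as np

def edge_evaluation(img):
    """Sum of absolute 3x3 Laplacian responses: large where luminance
    changes sharply between adjacent pixels (features and contours)."""
    f = img.astype(float)
    p = np.pad(f, 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * f)
    return float(np.abs(lap).sum())
```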
  • Next, division by the screen average luminance at the corresponding focus position, calculated by the average luminance calculation circuit 11B, is executed. That is, as shown in [Equation 3], the focus evaluation value Pv of each sample image is the value obtained by dividing the evaluation value Pvo from the edge enhancement processing circuit by the screen average luminance Pave at the corresponding focus position.
  • In [Equation 3], $Pv(i) = Pvo(i)/Pave(i)$, where Pv(i) is the luminance-normalized focus evaluation value at the i-th focus position, Pvo(i) is the focus evaluation value obtained by the edge enhancement processing at the i-th focus position, and Pave(i) is the screen average luminance at the i-th focus position.
  • As shown in [Equation 4], the focus evaluation value Pv may further be calculated by multiplying the value obtained in [Equation 3] by the maximum value Pave max of the screen average luminance. This compensates for the loss (quantitative decrease) of the focus evaluation value caused by the division by the average luminance, and makes the quantitative change of the focus evaluation value easier to see when referring to the focus curve later.
  • The screen average luminance used as the multiplier is not limited to the maximum value and may be, for example, the minimum value.
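  • Collecting the relations described above (the original equation images are not reproduced; this is a reconstruction from the surrounding text):

$$Pv(i) = \frac{Pvo(i)}{Pave(i)} \qquad \text{[Equation 3]}$$

$$Pv(i) = \frac{Pvo(i)}{Pave(i)} \times Pave_{\max} \qquad \text{[Equation 4]}$$

where $Pvo(i)$ is the evaluation value from the edge enhancement processing, $Pave(i)$ the screen average luminance at the $i$-th focus position, and $Pave_{\max}$ the maximum screen average luminance over the search.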
  • The reason the focus evaluation value Pv is the value obtained by dividing the edge enhancement evaluation value by the screen average luminance is as follows. The focus evaluation value is calculated from an evaluation point (pixel) and its surroundings, and the luminance varies between the acquired images; dividing by the screen average luminance (the sum of the luminances of the individual pixels composing the screen divided by the total number of pixels) prevents the absolute value of the index from changing when the overall luminance itself changes.
  • That is, by normalizing the focus evaluation value calculated by the edge enhancement processing by the screen average luminance (Pave), the influence of screen luminance changes on the focus evaluation value is eliminated. For example, when the screen average luminance is 50 and the luminance difference is 20%, the focus evaluation value is 0.2 (10/50); when the screen average luminance is 100, the evaluation value is likewise 0.2 (20/100). The two match, and the effect of the luminance change on the evaluation value is eliminated.
  • The focus position calculation circuit 11D is a circuit that calculates the in-focus position from the maximum of the focus evaluation values calculated by the evaluation value calculation circuit 11C, and corresponds to the “focus position calculation means” of the present invention.
  • Image autofocus control acquires sample images at multiple focus positions with different lens-work distances and determines the in-focus position by detecting the focus position of the sample image giving the maximum focus evaluation value. The larger the number of sample images (the smaller the focus movement between samples), the more accurate the autofocus control.
  • However, as the number of samples increases, so does the processing time, and high-speed autofocus control cannot be ensured. Therefore, in the present embodiment, the in-focus position is calculated by curve approximation, as shown in FIG. 6.
  • In a focus curve, the vicinity of the peak is close to an upward-convex quadratic curve. Therefore, an approximate quadratic curve is calculated by the least-squares method using the points near the maximum, its vertex is obtained, and this is taken as the in-focus position.
  • In FIG. 6, the solid line shows an approximation through 3 points (Pv(m), Pv(m-1), Pv(m+1)), and the broken line an approximation through 5 points (Pv(m), Pv(m±1), Pv(m±2)).
  • Alternatively, the intersection of a straight line through the two points Pv(m) and Pv(m+1) with a straight line through the two points Pv(m-1) and Pv(m-2) may be calculated and used as the in-focus position (linear approximation method), or other approximation methods, such as normal distribution curve approximation, may be used to detect the in-focus position.
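  • With three equally spaced points, the least-squares quadratic reduces to the closed-form vertex below (a sketch, assuming equally spaced focus coordinates):

```python
def parabola_vertex(z, pv, m):
    """Fit a parabola through the maximum pv[m] and its two neighbours and
    return the vertex position as the in-focus estimate."""
    y0, y1, y2 = pv[m - 1], pv[m], pv[m + 1]
    step = z[m] - z[m - 1]                 # constant focus-axis step
    denom = y0 - 2.0 * y1 + y2
    if denom == 0.0:                       # flat top: keep sampled maximum
        return z[m]
    return z[m] + 0.5 * (y0 - y2) / denom * step
```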
  • memory 15 is used for various calculations of the CPU of controller 7.
  • a first memory unit 15A and a second memory unit 15B that are used for various operations in the autofocus control unit 11 are allocated to the memory space of the memory 15.
  • In the present embodiment, sample images are acquired at the multiple focus positions while continuously changing the lens-work distance. As a result, autofocus control is faster than when the lens is stopped at each focus position to acquire an image.
  • FIG. 7 shows the relationship between the command voltage of the driver 8 for the lens driving unit 4 and the actual moving voltage of the lens driving unit 4.
  • The lens driving unit 4, composed of a piezo element, has a movement-amount detection sensor for position control.
  • the actual moving voltage in FIG. 7 is this sensor monitor signal.
  • In autofocus operation, after the lens is moved to the autofocus control start position, the command voltage is changed by a predetermined amount for each video signal frame of the CCD camera 6. Comparing the command voltage and the actual moving voltage, the response is delayed but the movement is smooth: the steps of the command voltage are flattened out, while the slopes of the two graphs in the ramp region are almost the same. From this it can be seen that the lens moves at a constant speed in response to the constant-rate command voltage. Therefore, if sample images are acquired in synchronization with the image synchronization signal, focus evaluation values can be calculated and acquired at fixed intervals of the focus axis coordinate.
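  • Under this constant-speed assumption, the focus-axis coordinate of the i-th sampled frame can be written as (notation introduced here for illustration):

$$z(i) = z_0 + v \, i \, T_{frame}$$

where $z_0$ is the start position, $v$ the lens speed, and $T_{frame}$ the frame period (about 33.3 ms for NTSC).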
  • the sample image obtaining step and the focus evaluation value calculating step are performed in parallel.
  • As shown in FIG. 8, the focus evaluation value Pv is calculated by processing image data already captured into the second memory unit 15B while new image data is loaded into the first memory unit 15A; that is, the system is configured with double buffering.
  • In this example, image data captured in even-numbered frames is processed in the first memory unit 15A, and image data captured in odd-numbered frames is processed in the second memory unit 15B.
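  • The buffer alternation can be sketched as follows (capture and evaluation run concurrently in the actual hardware; this sequential sketch only illustrates how the two memory units alternate):

```python
import numpy as np

def af_sweep(grab_frame, n_frames):
    """Double buffering: while frame k is captured into one buffer, the
    previous frame in the other buffer is evaluated."""
    buffers = [None, None]              # roles of memory units 15A / 15B
    scores = []
    buffers[0] = grab_frame(0)          # prime the pipeline
    for k in range(1, n_frames):
        buffers[k % 2] = grab_frame(k)          # fill one buffer...
        prev = buffers[(k - 1) % 2]             # ...process the other
        scores.append(float(np.var(prev.astype(float))))
    last = buffers[(n_frames - 1) % 2]
    scores.append(float(np.var(last.astype(float))))
    return scores
```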
  • FIG. 3 is a process flow chart in the autofocus controller 11.
  • In step S1, initial settings are input, such as the autofocus processing area of the subject sample W, the focus position search range, the amount of focus movement between acquired image samples (focus axis step length), the image smoothing processing conditions, and the edge enhancement processing conditions.
  • After step S1, the autofocus control is executed.
  • Driven by the lens driving unit 4, the objective lens 3 starts moving in the focus axis direction from the autofocus control start position (in the present embodiment, in the direction approaching the subject sample W), and sample images of the subject sample W are acquired in synchronization with the image synchronization signal (steps S2 and S3). Next, the focus axis coordinate (the lens-work distance coordinate) of the acquired sample image is obtained (step S4).
  • focus evaluation processing including screen average luminance calculation processing, image smoothing processing, edge enhancement processing, and luminance normalization processing is performed on the acquired sample image (steps S5 to S8).
  • The screen average luminance calculation step (step S5) is executed by the average luminance calculation circuit 11B.
  • the calculated average screen brightness is later used for calculating the focus evaluation value.
  • This screen average luminance calculation step may be performed after the smoothing processing step (step S6).
  • the image smoothing processing step (step S6) is processed by the smoothing processing circuit 11A.
  • The image smoothing process is performed using, for example, an arithmetic expression such as [Equation 1]. As a result, the influence of speckle caused by the single-wavelength light source on the acquired sample image is reduced.
  • The edge enhancement processing step (step S7) is executed in the evaluation value calculation circuit 11C.
  • Here, the luminance data differences at the pixels of feature and contour portions are calculated by the edge enhancement processing equation shown in [Equation 2], and these are used as the basic data for the focus evaluation value.
  • In step S8, a luminance normalization process is performed that normalizes the focus evaluation value calculated in step S7 by the screen average luminance.
  • This step is executed by the evaluation value calculation circuit 11C.
  • That is, the focus evaluation value Pvo(i) obtained in the preceding edge enhancement processing step (step S7) is divided by the screen average luminance Pave(i) obtained in the screen average luminance calculation step (step S5), and the luminance-normalized focus evaluation value Pv(i) of [Equation 3] is calculated.
  • Steps S2 to S8 described above constitute the autofocus loop (AF loop).
  • The sample image acquisition of step S3 and the focus evaluation value calculation are processed in parallel (FIGS. 7 and 8). Therefore, while the focus evaluation value of the previously captured sample image is being calculated, the next sample image can be acquired. As a result, the focus evaluation value can be calculated within one frame cycle of the video signal, realizing faster autofocus operation.
  • When the search range has been covered, the AF loop ends, and a process of multiplying the focus evaluation value of each acquired sample image by the maximum value of the screen average luminance (Pave max) is executed (steps S9 and S10). As a result, the focus evaluation value Pv of each sample image becomes the same as if obtained by the arithmetic expression shown in [Equation 4].
  • Alternatively, as shown by step S10A in FIG. 4, the AF loop may be completed with the calculation of the focus evaluation value by the edge enhancement processing, and the luminance normalization may be performed collectively after the loop; processing equivalent to the example of FIG. 3 can thus be realized.
  • In FIG. 5, the focus curve (FC1) obtained by performing the smoothing process (step S6 in FIG. 3) and the luminance normalization process (step S8 in FIG. 3) is drawn with a solid line, the focus curve (FC2) obtained by performing only the smoothing process without normalization by the screen average luminance is drawn with a dashed line, and the conventional focus curve (FC3) shown in FIG. 22 is drawn with a dotted line.
  • As FIG. 5 shows, the influence of the optical system is greatly reduced, and the peak of the focus evaluation value to be detected as the optimum focus position (in-focus position) becomes obvious. As a result, stable and accurate autofocus operation can be realized even with a short-wavelength, single-wavelength optical system.
  • The luminance normalization processing step (step S8 in FIG. 3) may be omitted as necessary; performing it, however, further reduces the influence of the optical system and allows the in-focus position to be detected more accurately.
  • Next, a focus position calculation process is performed (step S11).
  • This focus position calculation processing is executed by the focus position calculation circuit 11D.
  • In the focus position calculation, as described with reference to FIG. 6, an approximate curve passing through the maximum focus evaluation value and several neighboring focus evaluation values is obtained, and its vertex is detected as the in-focus position.
  • In this way, the in-focus position can be detected more efficiently and with higher precision than with the hill-climbing method widely used in the past, so the autofocus operation can be made significantly faster.
  • the autofocus control according to the present embodiment is completed through a movement step of moving the objective lens 3 to the focus position (step S12).
  • In the first embodiment, the focus evaluation value is calculated for the entire acquired sample screen (or for a partial target area). In the second embodiment, the screen is divided into a plurality of regions and the in-focus position is detected in each divided region (FIG. 9).
  • In each region, the same image smoothing processing and normalization by screen average luminance as in the first embodiment are executed, so the in-focus position can be detected with high accuracy without being affected by the optical system.
  • the divided screens may overlap each other, and the number of screen divisions may be changed dynamically according to the use situation.
  • Conventionally, to obtain an all-in-focus image in which the entire field is in focus, a special optical system such as a confocal optical system has been used, or an all-focus image has been obtained from images at different angles based on trigonometry.
  • In the present embodiment, by contrast, an all-focus image of the subject sample W is obtained in the course of executing the autofocus control method described in the first embodiment.
  • The control flow is shown in FIG. 10: after the process of normalizing the focus evaluation value of the acquired image (sample point) by the screen average luminance (step S8), an image synthesis process (step S8M) is added.
  • The acquired sample screen is divided into a plurality of regions (FIG. 9), as described in the second embodiment, and each divided region Wij is treated as a unit image.
  • The number of screen divisions is not particularly limited; the greater the number of divisions, the finer the processing, and the divided area can be reduced down to one pixel.
  • the shape of the divided area is not limited to a square, but can be changed to a circular shape or the like.
  • a memory 15 (FIG. 2) includes a first memory unit 15A for processing image data captured in even frames and a second memory unit 15B for processing image data captured in odd frames.
  • In addition, a third memory unit 15C for all-focus processing is prepared.
  • The third memory unit 15C includes a composite image data storage area 15C1, a height (lens-work distance) information storage area 15C2 for each divided area Wij constituting the composite image, and a focus evaluation value information storage area 15C3 for each divided area Wij.
  • Sample images are obtained at a plurality of focus positions with different lens-work distances, and the focus evaluation value is calculated for each divided area Wij of each sample image; then, the image with the highest focus evaluation value is extracted independently for each divided area Wij, and the whole image is synthesized from these.
  • These processes constitute the “all-focus image synthesizing means” of the present invention. Referring to the process flow chart shown in FIG. 10, steps S1 to S8 are executed in the same manner as in the first embodiment, after which the flow moves to the image synthesis process of step S8M.
  • FIG. 12 shows the details of step S8M.
  • First, the third memory unit 15C is initialized using the first captured image (steps a and b). That is, in step b, the first image is copied to the composite image data storage area 15C1, the height information storage area 15C2 is filled with the first height data, and the focus evaluation values are copied to the focus evaluation value information storage area 15C3 of each divided area Wij.
  • From the second image onward, the focus evaluation value of the acquired image and that of the composite image are compared for each divided region Wij (step c). If the focus evaluation value of the acquired image is larger, the image is copied and the corresponding height information and focus evaluation value information are updated (step d); conversely, if it is smaller, no processing is performed. This is repeated for the number of divisions (step e), which completes the processing of one frame (33.3 msec).
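  • The per-region update can be sketched as follows (the three arrays play the roles of the storage areas 15C1 to 15C3; the region evaluation function and the fixed grid are assumptions of the example):

```python
import numpy as np

def update_composite(state, img, z, eval_region, grid=(8, 8)):
    """For each divided region keep the patch, height, and evaluation value
    of whichever frame has focused best so far."""
    comp, height, best = state           # composite image, height map, scores
    rows, cols = grid
    h, w = img.shape[0] // rows, img.shape[1] // cols
    for i in range(rows):
        for j in range(cols):
            sl = np.s_[i * h:(i + 1) * h, j * w:(j + 1) * w]
            score = eval_region(img[sl])
            if score > best[i, j]:       # the new frame is sharper here
                comp[sl] = img[sl]
                height[i, j] = z         # lens-work distance of best focus
                best[i, j] = score
    return comp, height, best
```

  • Here the state would be initialized from the first captured frame, mirroring steps a and b above.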
  • The above processing is performed, for example, while even-numbered frame image data is being fetched into the first memory unit 15A: the previous odd-numbered frame image already captured in the second memory unit 15B is processed for each divided area Wij, and the necessary data and information are copied or updated in the corresponding storage areas of the third memory unit 15C.
  • The above processing is performed alongside the autofocus control of the subject sample W described in the first embodiment, but it goes without saying that it can also be performed alone.
  • While the objective lens 3 moves over the entire search range, the in-focus state can be observed for each divided area, so the displayed height distribution of the subject sample W can be easily grasped during the autofocus operation.
  • Since the all-focus image of the object sample is synthesized using the autofocus control method according to the present invention, high-precision autofocus control is ensured while the influence of the short-wavelength, single-wavelength optical system is eliminated, so an all-in-focus image of the surface of a hierarchically developed structure such as a semiconductor wafer can be acquired with high resolution.
  • In the fourth embodiment, a three-dimensional image is synthesized by extracting the in-focus portions of the acquired sample images and combining them with height-direction information. For example, as shown in FIG. 13, by performing focus detection on each of the sample images Ra, Rb, Rc, and Rd acquired during the autofocus operation, extracting the in-focus portions, and combining them in the height direction (focus axis direction), a stereoscopic image of the structure R can be synthesized.
  • An example of a method of synthesizing such a stereoscopic image according to the present embodiment is shown in the flowchart of FIG. 14. In the figure, steps corresponding to those in the first embodiment (FIG. 3) are denoted by the same reference numerals, and detailed description thereof is omitted.
  • First, a stereoscopic screen buffer clearing step (step S1A) is provided, in which the memory area storing previously acquired stereoscopic screen data is initialized.
  • Next, sample images of the subject sample are acquired at a plurality of focus positions; for each image, a focus evaluation value is calculated by the smoothing processing and edge enhancement processing, and the calculated focus evaluation value is normalized by the screen average luminance (steps S2 to S8).
  • After the focus evaluation value is calculated, each point in the screen is compared between the data captured so far and the newly captured data to determine which is more in focus; if the newly captured data is more in focus, the data is updated (step S8A). This process is performed for each sample image.
  • These processes constitute the “stereoscopic image synthesizing means” of the present invention.
  • The screen may be divided into a plurality of regions Wij, as in the second embodiment, and the above processing performed for each divided region; the processing unit is not limited, however, and the processing may be performed per pixel.
  • Since the stereoscopic image of the subject sample is synthesized using the autofocus control method according to the present invention, high-precision autofocus control that eliminates the effects of the short-wavelength, single-wavelength optical system is ensured, and a three-dimensional image of the surface of a hierarchically developed structure such as a semiconductor wafer can be acquired with high resolution.
  • This autofocus control device can be configured with a video signal decoder, an arithmetic element typified by an FPGA (Field Programmable Gate Array), a memory for storing settings, and integrated circuits such as a CPU (Central Processing Unit), a PMC (Pulse Motor Controller), and external memory. These elements are mounted on a common printed circuit board and used as a single board unit, or as a packaged component housing such a board.
  • FIG. 15 shows a functional block diagram according to a first configuration example of the autofocus control device of the present invention.
  • The illustrated autofocus control device 31 is composed of a video signal decoder 41, an FPGA 42, a field memory 43, a CPU 44, a ROM/RAM 45, a PMC 46, and an I/F circuit 47.
  • The video signal used for the focus operation is an analog image signal encoded in the NTSC system. The video signal decoder 41 converts it into horizontal/vertical synchronization signals, EVEN (even)/ODD (odd) field information, and a digital image signal of luminance information.
  • The FPGA 42 is an arithmetic element that performs the predetermined arithmetic processing of the autofocus control flow (FIG. 3) according to the present invention described in the first embodiment, and corresponds to the “image smoothing means”, “average luminance calculation means”, “edge enhancement processing means”, and “evaluation value calculation means” of the present invention.
  • the FPGA 42 extracts effective portion information in the screen from the synchronization signal and the field information digitized by the video signal decoder 41, and stores the luminance information in the field memory 43. At the same time, the data is sequentially read from the field memory 43, and arithmetic processing such as filtering (image smoothing processing), average luminance calculation, and focus evaluation value calculation is performed. Note that, depending on the degree of integration of the FPGA 42, it is possible to incorporate the functions of the field memory 43, the CPU 44, and the PMC 46 into the FPGA 42.
  • the field memory 43 is used for temporarily storing the above-mentioned field information in order to handle a video signal which is output in an interlaced manner and is composed of even and odd fields.
  • The CPU 44 manages the operation of the entire system: it changes the lens-work distance by moving the stage supporting the object sample via the PMC 46 and the I/F circuit 47, and calculates the optimum focus position (in-focus position) from the focus evaluation values, calculated by the FPGA 42, of the sample images obtained at each focus position.
  • CPU 44 corresponds to the “focus position calculating means” of the present invention.
  • the ROM / RAM 45 is used for storing the operating software (program) of the CPU 44 and the parameters required for calculating the focus position.
  • the ROM / RAM 45 may be built in the CPU.
  • The PMC 46 is a drive control element for a pulse motor (not shown) that moves the stage, and controls the stage via an interface circuit (I/F circuit) 47. In addition, the output of the sensor that detects the stage position is supplied to the PMC 46 through the I/F circuit 47.
  • a video signal of a sample image is supplied from a CCD camera (not shown).
  • This video signal is input to the FPGA 42 through the video signal decoder 41, where the input image is smoothed, the average luminance is calculated, and the focus evaluation value is calculated.
  • At the timing of the synchronization signal at the end of a field, the FPGA 42 transfers the focus evaluation value data to the CPU 44.
  • the CPU 44 obtains the coordinates of the focus stage at the end of the field and uses it as the lens-work distance. After repeating the above processing the number of times necessary for the autofocus operation of the present invention, the CPU 44 calculates the focus position. Then, the stage is moved to the optimal focus position, and the auto focus operation ends. As necessary, a screen division function, an all-focus image synthesizing process of a subject sample, and / or a stereoscopic image synthesizing process are performed.
  • By organically connecting the autofocus control device of the present invention configured as described above to an existing CCD camera, a monitor, and focus axis moving means such as a pulse motor, functions equivalent to those of the image processing apparatus 1 described above are obtained. The autofocus control method of the present invention can thus be implemented with a small and simple configuration, which is very advantageous in terms of cost and installation space.
  • FIG. 16 is a functional block diagram of a second example of the configuration of the autofocus control device according to the present embodiment. Parts corresponding to those in the first configuration example (FIG. 15) are denoted by the same reference numerals, and detailed description thereof will be omitted.
  • The autofocus control device 32 in this configuration example includes a video signal decoder 41, an FPGA 42, a CPU 44, a ROM/RAM 45, a PMC 46, and an I/F circuit 47.
  • In the first configuration example, the field memory 43 is used to reconstruct frame information so that the interlaced image can be handled as a TV (television)-like image.
  • Considering only the autofocus operation, however, there is no need to use frame information; field-by-field processing is sufficient and can even be advantageous.
  • the autofocus control device 32 in the present configuration example has a configuration in which the field memory 43 is removed from the first configuration example. With this configuration, there is no need to perform a timing process for transferring information to the field memory, so that a configuration that is physically and logically simpler than that of the above-described first configuration example can be made. Also, since focus evaluation processing can be performed in field units, there is an advantage that the sampling interval of focus evaluation values is shorter than in the first configuration example in which processing is performed in frame units.
  • FIG. 17 is a functional block diagram of a third configuration example of the autofocus control device according to the present embodiment. Parts corresponding to those in the first configuration example (FIG. 15) are denoted by the same reference numerals, and detailed description thereof will be omitted.
  • The autofocus control device 33 in this configuration example includes a video signal decoder 41, an FPGA 42, a CPU 44, a ROM/RAM 45, a PMC 46, and an I/F circuit 47.
  • The autofocus control device 33 in this configuration example incorporates the PMC 46 logic block in the FPGA 42, and thus, compared with the second configuration example described above, requires no independent logic circuit for the PMC 46. With this configuration, an independent IC chip for the PMC 46 is unnecessary, and the board size and mounting cost can be reduced. (Fourth configuration example)
  • FIG. 18 is a functional block diagram of a fourth configuration example of the autofocus control device according to the present embodiment. Parts corresponding to those in the first configuration example (FIG. 15) are denoted by the same reference numerals, and detailed description thereof will be omitted.
  • The autofocus control device 34 in this configuration example is composed of a video signal decoder 41, an FPGA 42, a CPU 44, a ROM/RAM 45, an AD (Analog to Digital)/DA (Digital to Analog) circuit 48, and an I/F circuit 47.
  • The autofocus control device 34 in this configuration example shows a case in which the drive source of the focus stage is changed from a pulse motor to an analog-signal-controlled piezo stage; an AD/DA circuit 48 is used in place of the PMC 46 of the second configuration example described above. Note that the AD/DA circuit 48 can be incorporated into, for example, the CPU 44, in which case it need not be an external circuit.
  • The DA circuit part converts the command voltage from the CPU 44 into an analog signal, and the AD circuit part converts the signal from a sensor (not shown) that detects the moving position of the piezo stage into a digital signal and feeds it back to the CPU 44. When feedback control is not performed, the AD circuit part can be omitted.
  • FIG. 19 shows a specific configuration example of the autofocus control device 33 of the third configuration example (FIG. 17) as a fifth configuration example of the present embodiment. Parts corresponding to those in that figure are denoted by the same reference numerals, and detailed description thereof is omitted.
  • The autofocus control device 35 in this configuration example is configured by mounting, on a common wiring board 50, a video signal decoder 41, an FPGA 42, a CPU 44, a flash memory 45A, an SRAM (Static Random Access Memory) 45B, an RS driver 47A, a power supply monitoring circuit 51, an FPGA initialization ROM 52, and multiple connectors 53A, 53B, 53C, and 53D.
  • The flash memory 45A and the SRAM 45B correspond to the ROM/RAM 45 described above: the flash memory 45A stores the operating program of the CPU 44 and the initial setting information for autofocus operation (focus movement speed, smoothing processing conditions, etc.), while the SRAM 45B is used to temporarily store the various parameters the CPU 44 needs to calculate the focus position.
  • The RS driver 47A is an interface circuit required for communication with the external devices connected via the connectors 53A to 53D.
  • a CCD camera is connected to the connector 53A, and a host controller or a CPU is connected to the connector 53B.
  • a power supply circuit is connected to the connector 53C, and a focus stage is connected to the connector 53D.
  • the focus stage includes a pulse motor as a drive source, and its controller, PMC, is incorporated in the FPGA 42.
  • In the autofocus control device 35 of this configuration example, the various elements that execute the algorithm realizing the autofocus control method of the present invention are mounted on one wiring board 50, and the device can be configured as a board-mounted body with outer dimensions of, for example, 100 mm square. This reduces equipment cost and simplifies the equipment configuration.
  • Moreover, since the freedom of equipment installation is increased, it becomes easy to respond to on-site needs for autofocus operation in industrial fields where it could not be used before.
  • Instead of moving the objective lens 3, the measurement stage 2 may be moved.
  • the driving system for changing the distance between the lens and the sample is constituted by the lens driving unit 4 composed of a piezo element and its driver 8, but the present invention is not limited to this.
  • Other drive systems may be applied as long as the distance can be changed accurately and smoothly.
  • FIG. 20A shows an example in which a pulse motor 20 is used as a drive source.
  • The driver 21 generates a drive signal for the pulse motor 20 based on a control signal supplied from the pulse motor controller 22.
  • The lens driving unit 4 and the pulse motor 20 described above are driven by so-called feedforward control, but a configuration in which a sensor that detects the lens position or the stage position is provided and the drive source is controlled by feedback control is also applicable.
  • FIG. 20B shows an example of the configuration of a drive system that controls a drive source by feedback control.
  • The driver 24 generates a drive signal for the drive system 23 based on a control signal supplied from an output instruction circuit 25.
  • a cylinder device, a motor, or the like can be applied as the drive system 23.
  • the position sensor 26 can be constituted by a strain gauge, a potentiometer, etc., and the output thereof is supplied to the acquisition circuit 27.
  • The acquisition circuit 27 supplies a position compensation signal to the output instruction circuit 25 based on the output of the position sensor 26, performing position correction of the drive system 23.
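  • As an illustrative sketch of such feedback position correction (a simple proportional compensator; the gain and the read/drive interfaces are assumptions of the example, not the circuit of FIG. 20B):

```python
def position_step(target, sensor_read, output_drive, gain=0.5):
    """One feedback iteration: read the position sensor, form the error,
    and issue a corrected command to the drive system."""
    actual = sensor_read()                 # e.g. strain gauge / potentiometer
    error = target - actual
    output_drive(target + gain * error)    # compensated drive command
    return error
```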
  • the video signal supplied from the CCD camera has been described in the NTSC format.
  • the present invention is not limited to this.
  • the video signal can be processed in a PAL (Phase Alternation by Line) format.
  • the function of the video signal decoder circuit can be incorporated into the FPGA 42.
  • the focus evaluation value and the focus position of each sample image obtained by executing the auto focus control of the present invention can be displayed on the monitor 9 (FIG. 1) together with the sample image.
  • Alternatively, an encoder circuit for converting such information into NTSC or the like for display may be provided separately.
  • This encoder circuit may be, for example, one of the board mounted components of the autofocus control device having the configuration described in the fifth embodiment.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Automatic Focus Adjustment (AREA)
  • Studio Devices (AREA)
  • Microscopes, Condensers (AREA)
  • Stereoscopic And Panoramic Photography (AREA)

Abstract

An autofocus control method, an autofocus controller, and an image processor capable of eliminating influence originating from an optical system and realizing a stable autofocus function. In calculating the focus evaluation value of each sample image obtained at plural focus positions, dark and light brightness patterns originating from the optical system are reduced by applying smoothing processing to the obtained sample images, and the focus evaluation value is calculated based on the smoothed sample image. Further, the focus evaluation value is standardized by the average display brightness of the smoothed sample image to obtain an optimal evaluation value.

Description

明細書 ォ一トフォーカス制御方法、 オートフォーカス制御装置および画 像処理装置 技術分野  Description: Autofocus control method, autofocus control device, and image processing device
本発明は、 例えば、 ビデオカメラにより被写体試料を撮影し、 観察 ·検査する装置に好適に用いられ、 特に、 光学系の影響を排 除して安定したオートフォーカス動作を実現できるォ一トフォ —カス制御方法、 オートフォーカス制御装置および画像処理装置 に関する。 背景技術  INDUSTRIAL APPLICABILITY The present invention is suitably used, for example, in an apparatus for photographing, observing, and inspecting an object sample with a video camera. The present invention relates to a control method, an autofocus control device, and an image processing device. Background art
従来より、 画像オートフォーカス制御は、 被写体試料(ワーク) の画像データからピントの合い具合を評価し数値化したピント 評価値を用いて行っている。 つまり、 レンズ一ワーク間距離を変 えて試料の画像データを収集し、 その各々についてピント評価値 ¾計算して好適なフォーカス位置を検索している。  Conventionally, image autofocus control has been performed by using a focus evaluation value that has been quantified by evaluating the degree of focus from image data of a subject sample (work). In other words, the image data of the sample is collected while changing the distance between the lens and the work, and the focus evaluation value ¾ is calculated for each of them to search for a suitable focus position.
第 2 1 図は、 レンズ一ワーク間距離(横軸) とピント評価値(縦 軸) との関係例を示している。 これは、 レンズ一ワーク間距離を 一定間隔で変化させて画像を取り込み、 各画像のピント評価値を 計算してプロッ 卜したものである。 グラフ中のピント評価値最大 値が、 ピントの合った位置すなわち最適フォーカス位置 (ピン ト 位置) である。 以下、 このレンズ一ワーク間距離に対するピント 評価値のプロッ トを 「フォ一カスカーブ」 と呼ぶ。  Fig. 21 shows an example of the relationship between the lens-work distance (horizontal axis) and the focus evaluation value (vertical axis). In this method, images are captured by changing the distance between the lens and the workpiece at regular intervals, and the focus evaluation value of each image is calculated and plotted. The maximum focus evaluation value in the graph is the focus position, that is, the optimum focus position (focus position). Hereinafter, the plot of the focus evaluation value with respect to the lens-work distance is referred to as a “focus curve”.
従来の技術では、 所定の検索範囲でレンズ一ワーク間距離を変 化させ、 このグラフ中ピント評価値の最大値を最適フォ 一カス位 置としたり、 あるいは、 最大値前後のピント評価値から 最適フォ 一カス位置を計算していた。 ピント評価値としては、 明るさの最 大値、 明るさの微分値、 明るさの分散、 明るさの微分値の分散な どが使われている。 ピント評価値最大値から最適フォ一カス位置 を求めるアルゴリズムとして、 山登り法などがあり、 また、 検索 時間短縮のために、 検索動作を何段階かに分ける方法が実用化さ れている (特開平 6 — 2 1 7 1 8 0号公報、 特開 2 0 0 2 — 3 3 3 5 7 1号公報及び特許第 2 9 7 1 8 9 2号公報)。 In the conventional technology, the distance between the lens and the workpiece is changed within a predetermined search range. In this graph, the maximum value of the focus evaluation value in this graph was used as the optimum focus position, or the optimum focus position was calculated from the focus evaluation values before and after the maximum value. As the focus evaluation value, the maximum value of the brightness, the differential value of the brightness, the variance of the brightness, the variance of the differential value of the brightness, and the like are used. As an algorithm for finding an optimum focus position from the maximum focus evaluation value, there is a hill-climbing method, and a method of dividing a search operation into several stages has been put to practical use in order to reduce search time (Japanese Patent Laid-Open Publication No. No. 6-2171800, Japanese Unexamined Patent Application Publication No. 2000-333351 and Japanese Patent No. 2971892.
Meanwhile, as target works become ever more finely structured, inspection instruments to which this focusing technique is applied are required to offer higher resolution. Such an improvement in resolution can be achieved by shortening the wavelength of the illumination light source and using a single wavelength: the shorter wavelength raises the optical resolution, and the single wavelength avoids the effects of chromatic aberration and the like.

However, shortening the wavelength of the illumination light source imposes constraints on the optical materials, such as lenses, used in the optical path, and using a single wavelength introduces problems such as speckle. Speckle here refers to a state in which the brightness of the screen is distributed in patches, taking a shading pattern unique to the wavelength of the light source and the configuration of the optical system.

Owing to these effects, as shown in Fig. 22, the portion of the focus curve affected by the optical system may take a larger value than the focus evaluation value at the optimum focus position. Since the shape and numerical range of a focus curve are not uniquely determined, depending instead on surface conditions such as the reflectance of the object, the conventional technique of determining the focus position from the maximum focus evaluation value cannot find the optimum focus position stably in this state. The present invention has been made in view of the above problems, and its object is to provide an autofocus control method, an autofocus control device, and an image processing device capable of realizing stable autofocus operation by eliminating influences caused by the optical system.

Disclosure of the Invention
To solve the above problems, the autofocus control method of the present invention comprises: an image acquisition step of acquiring image data of a subject at each of a plurality of focus positions differing in lens-to-subject distance; an evaluation value calculation step of calculating a focus evaluation value for each of the plurality of focus positions on the basis of the acquired image data; a focus position calculation step of calculating, as the in-focus position, the focus position at which the focus evaluation value is maximal; and a movement step of moving the lens relative to the subject to the calculated in-focus position, wherein the image data acquired in the image acquisition step are smoothed and the focus evaluation value is calculated on the basis of the smoothed image data.

That is, the patchy light-and-dark distribution of brightness is caused by speckle due to the single wavelength. To reduce this shading pattern, the present invention therefore adds an image smoothing process. This smoothing process makes it possible to calculate the focus evaluation value properly, capturing the features of the target sample (subject) while reducing the speckle shading pattern.

In performing the image smoothing process, processing conditions such as the number of pixels processed (the unit processing range), the filtering coefficients, the number of passes, and the presence or absence of weighting can be set appropriately according to the type of optical system used, the surface properties of the subject sample, and so on. For calculating the focus evaluation value, on the other hand, it is preferable to detect the differences in luminance data between adjacent pixels in the acquired image data; for example, an edge emphasis process that extracts the luminance changes between pixels at features and contours can be used.

In calculating the focus evaluation value, if the luminance of the same target area varies with the focus position, the absolute value of the luminance data difference between adjacent pixels changes, and the focus evaluation value can no longer be calculated properly. To avoid this problem, it is preferable to normalize the focus evaluation value by the screen average luminance, dividing the calculated evaluation value by the average luminance of the whole screen.

Incidentally, a function of synthesizing an all-in-focus image or a three-dimensional image of the subject from the focus evaluation values of the sample images acquired at the plurality of focus positions can be added alongside this autofocus control operation. These processes are performed by dividing the image acquired at each focus position into a plurality of regions within the plane and using the focus evaluation values and focus position information obtained for each divided region. In this case, the surface of a three-dimensionally structured subject can be inspected and observed with excellent resolution, free of the influence of the optical system.
The autofocus control device of the present invention comprises: evaluation value calculating means for calculating a focus evaluation value for each of a plurality of focus positions on the basis of the image data acquired at the plurality of focus positions differing in lens-to-subject distance; focus position calculating means for calculating the in-focus position on the basis of the maximum of the calculated focus evaluation values; and image smoothing means for smoothing the acquired image data, the focus evaluation value of each set of image data being calculated on the basis of the image data smoothed by the image smoothing means.

The autofocus control device of the present invention may be configured as a single image processing device in combination with image acquisition means for acquiring image data of the subject at the plurality of focus positions, drive means for adjusting the lens-to-subject distance, and the like, or it may be configured as a separate unit independent of these image acquisition means and drive means.

According to the present invention, high-precision autofocus control can be performed stably while eliminating influences originating from the optical system, so that sample observation using a short-wavelength, single-wavelength light source becomes possible; for example, semiconductor wafers and the like, on which ever finer structures are fabricated, can be observed with high resolution.

Brief Description of the Drawings
Fig. 1 is a schematic configuration diagram of an image processing apparatus 1 according to a first embodiment of the present invention.

Fig. 2 is a block diagram illustrating the configuration of a controller 7.

Fig. 3 is a flowchart illustrating the operation of the image processing apparatus 1.

Fig. 4 is a flowchart illustrating another operation example of the image processing apparatus 1.

Fig. 5 shows examples of focus curves illustrating an effect of the present invention, where FC1 is an example with both image smoothing and luminance normalization of the focus evaluation value, FC2 is an example with image smoothing alone, and FC3 is a conventional example.

Fig. 6 is a diagram explaining a method of calculating the in-focus position by curve approximation near the maximum focus evaluation value.

Fig. 7 is a diagram showing the relationship between the command voltage to a lens driving unit 4 and the actual movement voltage of the lens.

Fig. 8 is a diagram explaining a method of processing sample image capture and focus evaluation value calculation in parallel.

Fig. 9 is a diagram of a second embodiment of the present invention, explaining a method of dividing the screen into a plurality of regions and detecting the in-focus position in each divided region.

Fig. 10 is a process flowchart according to a third embodiment of the present invention.

Fig. 11 is a diagram of a memory configuration applied to the third embodiment of the present invention.

Fig. 12 is a flowchart illustrating an all-in-focus image acquisition process.

Fig. 13 is a diagram of a fourth embodiment of the present invention, explaining a method of acquiring a three-dimensional image by combining the in-focus positions of sample images along the focus axis.

Fig. 14 is a flowchart illustrating the above three-dimensional image synthesis method.

Fig. 15 is a functional block diagram showing a first configuration example of an autofocus control device according to a fifth embodiment of the present invention.

Fig. 16 is a functional block diagram showing a second configuration example of the autofocus control device according to the fifth embodiment of the present invention.

Fig. 17 is a functional block diagram showing a third configuration example of the autofocus control device according to the fifth embodiment of the present invention.

Fig. 18 is a functional block diagram showing a fourth configuration example of the autofocus control device according to the fifth embodiment of the present invention.

Fig. 19 is a diagram showing a fifth configuration example of the autofocus control device according to the fifth embodiment of the present invention.

Figs. 20A and 20B are block diagrams showing modified examples of the configuration of the drive system of the image processing apparatus 1.

Fig. 21 is an example of a focus curve showing the relationship between the lens-to-work distance (focus position) and the focus evaluation value.

Fig. 22 is a diagram explaining the problems of the prior art.

Best Mode for Carrying Out the Invention
Hereinafter, embodiments of the present invention will be described with reference to the drawings.

(First Embodiment)
Fig. 1 is a schematic configuration diagram of an image processing apparatus to which the autofocus control method and autofocus control device according to an embodiment of the present invention are applied. The image processing apparatus 1 is used for observing the surface of a subject sample (work), and is configured in particular as a microscope used, for example, for detecting defects in element structures formed by microfabrication on a surface, such as semiconductor wafers.

The image processing apparatus 1 comprises a measurement stage 2, an objective lens 3, a lens driving unit 4, a lens barrel 5, a CCD (Charge Coupled Device) camera 6, a controller 7, a driver 8, a monitor 9, and an illumination light source 10.
The measurement stage 2 supports the subject sample (for example, a semiconductor wafer) W and is configured to be movable in the X-Y directions (the left-right direction in the figure and the direction perpendicular to the page).

The lens driving unit 4 moves the objective lens 3 relative to the subject sample W on the measurement stage 2 in the focus axis direction (the vertical direction in the figure) over a predetermined focus position search range, thereby variably adjusting the lens-to-work distance. The lens driving unit 4 corresponds to the "drive means" of the present invention.

In the present embodiment, the lens driving unit 4 is constituted by a piezo element, but other precision feed mechanisms, such as a pulse motor, may also be employed. Further, although the objective lens 3 is moved in the focus axis direction to adjust the lens-to-work distance, the measurement stage 2 may instead be moved in the focus axis direction.
The CCD camera 6 functions as a video camera that images a specific region of the surface of the subject sample W on the measurement stage 2 through the objective lens 3 moving within the focus position search range, and outputs the acquired image data to the controller 7. The CCD camera 6, together with the objective lens 3, the lens driving unit 4, and the lens barrel 5, constitutes the "image acquisition means" of the present invention. Instead of a CCD, another solid-state image sensor such as a CMOS imager may be used.

The controller 7 is constituted by a computer, controls the operation of the entire image processing apparatus 1, and includes an autofocus (AF) control unit 11 that detects the optimum focus position (in-focus position) in the specific region of the surface of the subject sample W. This autofocus control unit 11 corresponds to the "autofocus control device" of the present invention.

The driver 8 receives control signals from the autofocus control unit 11 and generates drive signals for driving the lens driving unit 4. In the present embodiment, the driver 8 is constituted by a piezo driver with a hysteresis compensation function. This driver 8 may also be incorporated in the autofocus control unit 11.

The autofocus control unit 11 drives the lens driving unit 4 via the driver 8, acquires image data of the subject sample W with the CCD camera 6 at each of a plurality of focus positions obtained by changing the distance between the objective lens 3 and the subject sample W (the lens-to-work distance) at regular intervals, and performs the various processes described below to detect the optimum focus position, that is, the in-focus position, in the imaged region of the subject sample W.

The monitor 9 displays the contents of the processing performed by the controller 7, as well as the image of the subject sample W captured by the CCD camera 6 and the like. As the illumination light source 10, a continuous or pulsed laser light source with a wavelength of, for example, 196 nm is used in the present embodiment. The wavelength region of the illumination light source is not limited to this ultraviolet region; other ultraviolet light of a different wavelength region, or a light source in the visible region, may of course be used depending on the application.
Fig. 2 is a block diagram of the configuration of the image processing apparatus 1.

The analog image signal output from the CCD camera 6 is converted into a digital image signal by an A/D converter 13. The output signal of the A/D converter 13 is supplied to and stored in a memory 14. The autofocus control unit 11 of the controller 7 reads the converted digital image signal from the memory 14 and performs the autofocus control described below. The driver 8 then generates the drive signal for the lens driving unit 4 on the basis of the control signal from the controller 7 supplied via a D/A converter 17.

The autofocus control unit 11 includes a smoothing processing circuit 11A, an average luminance calculation circuit 11B, an evaluation value calculation circuit 11C, and a focus position calculation circuit 11D.

The smoothing processing circuit 11A is a circuit that smooths the autofocus target area (the entire screen or a partial region of the screen) of each image signal (sample image) of the subject sample W acquired at the plurality of focus positions, and corresponds to the "image smoothing means" of the present invention. By means of this smoothing processing circuit 11A, the autofocus control unit 11 reduces the patchy brightness distribution (speckle) of each acquired sample image. An example of the smoothing process is shown in [Equation 1].
[Equation 1] (the formula itself appears in the source only as an image; per the surrounding text, it is a smoothing filter that replaces each pixel value with a weighted average over its 3 × 3 neighborhood)
The processing conditions for image smoothing (the number of pixels processed (3 × 3 in the above example), the filtering coefficients, the number of passes, the presence or absence of weighting and how the coefficients are chosen, etc.) can be set arbitrarily within a range that does not destroy the intrinsic features and contours of the surface of the sample W captured by the CCD camera 6; these processing conditions are set via an input device 16 such as a keyboard, mouse, or touch panel.
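As a concrete illustration (not part of the patent text, whose [Equation 1] appears only as an image), the following Python sketch applies one plausible such filter, a uniform 3 × 3 mean filter; the kernel size, weights, and pass count correspond to the adjustable processing conditions described above, and the function name and the use of SciPy are assumptions:

```python
import numpy as np
from scipy.ndimage import convolve

def smooth_image(image: np.ndarray, passes: int = 1) -> np.ndarray:
    """Reduce speckle with a 3x3 mean filter (a stand-in for the
    patent's [Equation 1]); kernel size, weights, and pass count are
    the adjustable smoothing conditions mentioned in the text."""
    kernel = np.full((3, 3), 1.0 / 9.0)  # unweighted 3x3 average
    out = image.astype(np.float64)
    for _ in range(passes):
        out = convolve(out, kernel, mode="nearest")
    return out
```

A larger kernel or several passes would smooth more strongly, at the risk of flattening the sample's own features, which is why these conditions are left adjustable.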
The average luminance calculation circuit 11B is a circuit that calculates the screen average luminance of the autofocus target area of each sample image, and corresponds to the "average luminance calculating means" of the present invention. The screen average luminance at each focus position obtained by this circuit is used in the evaluation value calculation circuit 11C, described below, to calculate the focus evaluation value Pv at that focus position.

The evaluation value calculation circuit 11C is a circuit that calculates the focus evaluation value Pv of each sample image, and corresponds to the "evaluation value calculating means" of the present invention. In the present embodiment, this evaluation value calculation circuit 11C is configured to include an edge emphasis processing circuit.

In the present embodiment, the focus evaluation value is an index that numerically evaluates how clearly the features and contours of an image are visible. Looking at the luminance changes between pixels at features and contours, a sharp image shows steep changes while a blurred image shows gradual ones. In the present embodiment, therefore, the focus evaluation value Pv is computed by evaluating the luminance data differences between adjacent pixels using an edge emphasis process. The focus evaluation value may instead be calculated on the basis of the differential of the brightness, the variance of the brightness, or the like.

In the actual processing example, the operation shown in [Equation 2] is applied to every pixel of the captured image to obtain the luminance data difference from the surrounding pixels. In this formula, the first term detects the luminance change in the vertical direction and the second term in the horizontal direction. This extracts only the luminance change between the evaluation point and its surroundings, independently of the luminance of the pixel being processed.
[Equation 2] (the formula itself appears in the source only as an image; per the text above, it sums weighted luminance differences between each pixel and its 3 × 3 neighborhood, with a vertical-change term and a horizontal-change term)

In this example the pixel area to be processed is 3 × 3, but it may also be 5 × 5, 7 × 7, and so on. The coefficients are weighted here, but how the coefficients are set is arbitrary, and the processing may also be performed without weighting.
When calculating the focus evaluation value Pv, after the computation by the above edge emphasis formula, a division is performed by the screen average luminance at the corresponding focus position calculated by the average luminance calculation circuit 11B. That is, as shown in [Equation 3], the focus evaluation value Pv of each sample image is the focus evaluation value Pvo obtained by the edge emphasis processing circuit divided by the screen average luminance Pave at that focus position.
[Equation 3]

Pv(i) = Pvo(i) / Pave(i)
In [Equation 3], Pv(i) is the luminance-normalized focus evaluation value at the i-th focus position, Pvo(i) is the focus evaluation value at the i-th focus position, and Pave(i) is the screen average luminance at the i-th focus position.

As shown in [Equation 4], the focus evaluation value Pv may also be calculated by multiplying the value obtained by [Equation 3] by the maximum screen average luminance Pavemax. This compensates for the loss (the quantitative reduction) of the focus evaluation value caused by the division by the average luminance, making the quantitative behavior of the focus evaluation value easier to see when the focus curve is examined later. The screen average luminance used for the multiplication is not limited to the maximum value; it may be, for example, the minimum value.
[Equation 4]

Pv(i) = ( Pvo(i) / Pave(i) ) × Pavemax
The reason the focus evaluation value Pv is taken as the edge-emphasis evaluation value divided by the screen average luminance is as follows. The focus evaluation value reflects how large the luminance difference is between an evaluation point (pixel) and the surrounding pixels; if the acquired images vary in luminance and the screen average luminance itself (the luminance value obtained by dividing the sum of the luminances of all pixels of the screen by the total number of pixels of the screen) changes, the absolute value of the index computed from it would also change, and the division avoids this.

Suppose, for example, that the luminance difference from the surroundings is 20%. At an average luminance of 50, this 20% difference amounts to 10; at an average luminance of 100, it amounts to 20. Thus, even at the same rate of change, the absolute value differs greatly depending on the original screen average luminance. This rarely poses a problem in optical systems such as ordinary visible-light microscopes, but it becomes pronounced in optical systems such as ultraviolet microscopes.

In the present embodiment, therefore, to cope with such screen luminance changes, the focus evaluation value calculated by the edge emphasis process is normalized by the screen average luminance (Pave), preventing screen luminance changes from affecting the focus evaluation value. That is, by using the value divided by the screen average luminance as the focus evaluation value, the focus evaluation value for a screen average luminance of 50 and a luminance difference of 20% becomes 0.2 (10/50), and that for a screen average luminance of 100 and a luminance difference of 20% also becomes 0.2 (20/100); the two coincide, and the influence of luminance variation between focus positions on the focus evaluation value is eliminated.
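As an illustration only (the exact coefficients of [Equation 2] appear in the source only as an image, so Sobel-style weights stand in for them), the following Python sketch combines the two steps just described: an edge-emphasis sum of vertical and horizontal luminance differences as the raw value Pvo, followed by normalization by the screen average luminance Pave as in [Equation 3]. With this normalization, the 20% example above yields the same value whether the average luminance is 50 or 100.

```python
import numpy as np
from scipy.ndimage import convolve

# Weighted 3x3 difference kernels; Sobel-style weights stand in for the
# coefficients of [Equation 2], which the source shows only as an image.
KY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)  # vertical change
KX = KY.T                                                         # horizontal change

def focus_evaluation(image: np.ndarray) -> float:
    """Luminance-normalized focus evaluation value Pv = Pvo / Pave."""
    img = image.astype(np.float64)
    pvo = np.sum(np.abs(convolve(img, KY, mode="nearest"))
                 + np.abs(convolve(img, KX, mode="nearest")))  # raw Pvo
    pave = img.mean()  # screen average luminance Pave
    return float(pvo / pave)
```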
Next, the focus position calculation circuit 11D is a circuit that calculates the in-focus position on the basis of the maximum of the focus evaluation values calculated by the evaluation value calculation circuit 11C, and corresponds to the "focus position calculating means" of the present invention. In general, image-based autofocus control acquires sample images at a plurality of focus positions differing in lens-to-work distance and determines the in-focus position by detecting the focus position of the sample image giving the largest focus evaluation value. Accordingly, the more sample images there are (the smaller the focus movement between samples), the more precise the autofocus control; on the other hand, as the number of samples increases, so does the processing time, and the speed of the autofocus control can no longer be secured. In the present embodiment, therefore, as shown in Fig. 6, the optimum focus position (in-focus position) is detected on the basis of the maximum calculated focus evaluation value Pv(m) and a plurality of focus evaluation values in its vicinity (Pv(m-1), Pv(m+1), Pv(m-2), Pv(m+2), Pv(m-3), Pv(m+3)).

As shown in Fig. 6, the vicinity of the in-focus position is close to an upwardly convex quadratic curve. Therefore, using the points near the in-focus position, an approximating quadratic curve is computed by the least-squares method, its vertex is found, and that vertex is taken as the in-focus position. In the figure, the solid line is the curve approximated from the focus evaluation values of 3 points (Pv(m), Pv(m-1), Pv(m+1)), the broken line from 5 points (Pv(m), Pv(m-1), Pv(m+1), Pv(m-2), Pv(m+2)), and the dash-dotted line from 7 points (Pv(m), Pv(m-1), Pv(m+1), Pv(m-2), Pv(m+2), Pv(m-3), Pv(m+3)). Although the curves open to different degrees, the vertex positions are almost the same, showing that this is an effective approximation method despite its simple processing.

The method is not limited to the above curve approximation; the in-focus position may also be detected by other approximation methods, for example a straight-line approximation that computes the intersection of a line through the two points Pv(m) and Pv(m+1) with a line through the two other points Pv(m-1) and Pv(m-2) and takes it as the in-focus position, or a normal distribution curve approximation.
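A minimal sketch of the least-squares quadratic variant, assuming focus axis coordinates z sampled at regular intervals and using NumPy's polynomial fit (the function name and parameters are illustrative):

```python
import numpy as np

def refine_focus_position(z: np.ndarray, pv: np.ndarray,
                          m: int, half_width: int = 1) -> float:
    """Fit pv ~ a*z^2 + b*z + c through Pv(m) and its neighbors
    (3, 5, or 7 points for half_width = 1, 2, 3) and return the
    parabola's vertex as the in-focus position."""
    lo, hi = max(0, m - half_width), min(len(z), m + half_width + 1)
    a, b, _c = np.polyfit(z[lo:hi], pv[lo:hi], 2)
    return float(-b / (2.0 * a))  # vertex of an upward-convex parabola (a < 0)
```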
Referring to Fig. 2, a memory 15 is used for the various computations of the CPU of the controller 7. In particular, a first memory section 15A and a second memory section 15B, used for the various computations in the autofocus control unit 11, are allocated in the memory space of the memory 15. In the present embodiment, to speed up the autofocus control, sample images are acquired at the plurality of focus positions while the lens-to-work distance is varied continuously. This makes the autofocus control faster than when the lens is stopped at each focus position to acquire an image.

Fig. 7 shows the relationship between the command voltage from the driver 8 to the lens driving unit 4 and the actual movement voltage of the lens driving unit 4. The lens driving unit 4, composed of a piezo element, has a movement detection sensor for position control, and the actual movement voltage in Fig. 7 is this sensor monitor signal. After the lens is moved to the autofocus control start position, the command voltage is changed by a predetermined amount for each video signal frame of the CCD camera 6. Comparing the command voltage with the actual movement voltage, although the response lags, the movement is smooth; the steps of the command voltage are smoothed out, and the slopes of both graphs in the gradually increasing region are almost the same. This shows that the lens moves at constant velocity in response to the constant-velocity-equivalent command voltage. Therefore, if sample images are acquired in synchronization with the image synchronization signal, focus evaluation values can be computed and acquired at regular intervals of the focus axis coordinate.
Furthermore, in the present embodiment, to speed up the autofocus operation, the sample image acquisition step and the focus evaluation value calculation step are performed in parallel.

This can be implemented as double buffering in which, as shown in Fig. 8, image data are loaded into the first memory section 15A while the image data already loaded in the second memory section 15B are processed to calculate the focus evaluation value Pv. In this example, the first memory section 15A handles image data captured in even frames, and the second memory section 15B handles image data captured in odd frames.
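In outline, this acquire-while-evaluating scheme might be modeled as follows. This is a sketch only: a two-slot queue and threads stand in for the fixed even/odd memory sections, and camera.grab() is an assumed acquisition call, with smooth_image() and focus_evaluation() taken from the earlier sketches.

```python
import threading
import queue

frames = queue.Queue(maxsize=2)  # two buffers: fill one while processing the other

def acquisition_loop(camera, n_frames: int) -> None:
    for i in range(n_frames):
        frames.put((i, camera.grab()))  # hypothetical grab(): one frame per sync pulse
    frames.put(None)                    # sentinel: search range covered

def evaluation_loop(results: dict) -> None:
    while (item := frames.get()) is not None:
        i, img = item
        results[i] = focus_evaluation(smooth_image(img))  # per-frame Pv

# results = {}
# threading.Thread(target=acquisition_loop, args=(cam, 50)).start()
# evaluation_loop(results)  # runs concurrently with acquisition
```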
Next, the operation of the image processing apparatus 1 of the present embodiment configured as described above will be explained with reference to Fig. 3, which is a process flowchart of the autofocus control unit 11.

First, after initial settings are entered, such as the autofocus processing area of the subject sample W, the focus position search range, the focus movement between acquired image samples (the focus axis step length), the image smoothing conditions, and the edge emphasis conditions (step S1), autofocus control is executed.

Driven by the lens driving unit 4, the objective lens 3 starts moving along the focus axis from the autofocus control start position (in the present embodiment, in the direction approaching the subject sample W), and sample images of the subject sample W are acquired in synchronization with the image synchronization signal (steps S2, S3). Next, the focus axis coordinate (the lens-to-work distance coordinate) of each acquired sample image is obtained (step S4).

Thereafter, focus evaluation processing consisting of screen average luminance calculation, image smoothing, edge emphasis, and luminance normalization is performed on the acquired sample image (steps S5 to S8).
The screen average luminance calculation step (step S5) is computed by the average luminance calculation circuit 11B. The calculated screen average luminance is later used for calculating the focus evaluation value. This screen average luminance calculation step may also be performed after the smoothing step (step S6).

The image smoothing step (step S6) is processed by the smoothing processing circuit 11A. In this step, image smoothing is performed, for example, with the operation shown in [Equation 1]. This removes from the acquired sample image the influence of speckle caused by the single-wavelength light source.
The edge emphasis step (step S7) is executed by the evaluation value calculation circuit 11C. In this step, on the basis of the sample image smoothed in the preceding smoothing step (step S6), the luminance data differences between pixels at features and contours are computed, for example with the edge emphasis formula of [Equation 2] above, and serve as the basic data for the focus evaluation value.

Next, the luminance normalization step (step S8) is performed, normalizing the focus evaluation value calculated in step S7 by the screen average luminance. This step is executed by the evaluation value calculation circuit 11C. In the example shown in Fig. 3, the focus evaluation value Pvo(i) obtained in the preceding edge emphasis step (step S7) is divided by the screen average luminance Pave(i) obtained in the screen average luminance calculation step (step S5) to yield the luminance-normalized focus evaluation value Pv(i) of [Equation 3].
Steps S2 to S8 above constitute the autofocus loop (AF loop). In this AF loop, the same processing as above is executed for each of the sample images acquired at the respective focus positions.

In the present embodiment, as described above, the CCD camera 6 images the subject sample W at a predetermined sampling period while the lens driving unit 4 moves the objective lens 3 continuously, and the image acquisition step (step S3) and the focus evaluation value calculation step (step S8) are processed in parallel (Figs. 6 and 7). Therefore, the next sample image can be acquired while the focus evaluation value of the previously captured sample image is being calculated; as a result, the focus evaluation value can be computed at a rate of one video signal frame period, and a faster autofocus operation is realized. When the total travel of the objective lens 3 reaches the full search range, the AF loop ends, and the focus evaluation value of each obtained sample image is multiplied by the maximum screen average luminance (Pavemax) (steps S9, S10). As a result, the focus evaluation value Pv of each sample image is equivalent to that obtained with the expression shown in [Equation 4] above.

As in the process flow shown in Fig. 4, the AF loop may instead be completed with the calculation of the focus evaluation values by edge emphasis, and the normalization of the focus evaluation values by the screen average luminance may be performed for all sample images collectively after the AF loop ends, using the operation of [Equation 4] (step S10A in that figure); this ultimately achieves processing equivalent to the example shown in Fig. 3.
Fig. 5 shows, as a solid line, the focus curve (FC1) obtained by performing both the smoothing process (step S6 in Fig. 3) and the luminance normalization process (step S8 in Fig. 3); as a dash-dotted line, the focus curve (FC2) obtained with the smoothing process alone, without normalization by the screen average luminance; and, for comparison, as a dotted line, the conventional focus curve (FC3) shown in Fig. 22.

As is clear from Fig. 5, the present embodiment greatly reduces the portion affected by the optical system and brings out the peak of the focus evaluation value that should be detected as the optimum focus position (in-focus position). Stable and accurate autofocus operation can thus be realized even with a short-wavelength, single-wavelength optical system.

Since the smoothing of the sample images alone already improves the optical-system-affected portion, the luminance normalization step (step S8 in Fig. 3) may be omitted if appropriate; performing it, however, further improves the affected portion and allows the in-focus position to be detected more accurately.
Subsequently, the focus position calculation step is performed (step S11). This focus position calculation is executed by the focus position calculation circuit 11D. As explained with reference to Fig. 6, the in-focus position is calculated by finding an approximating curve through the maximum focus evaluation value and several focus evaluation values in its vicinity and detecting its vertex, which is taken as the in-focus position.

This allows the in-focus position to be detected more efficiently and accurately than with the conventionally widespread hill-climbing method, contributing greatly to speeding up the autofocus operation.

On the other hand, when the lens-to-work distance on the horizontal axis of Fig. 6 is taken as the total search range, if it can be determined during operation that the in-focus position has been passed, image acquisition from Pv(m+3) onward becomes unnecessary, and the corresponding operation time can be saved, further speeding up the autofocus operation. As a method of judging that the in-focus position has been passed, one may, for example, require that a peak exceeding a certain focus evaluation value (given as a parameter, or learned from previous focus operation results) has been passed and that the number of samples needed for the approximation has been acquired.
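Under the stated assumptions (a threshold given as a parameter and a fixed number of samples required beyond the peak), such a pass-detection test might be sketched as:

```python
def passed_focus_peak(pv: list, threshold: float, extra_samples: int = 3) -> bool:
    """True once a peak above `threshold` has been passed and enough
    samples beyond it exist for the curve fit (e.g. up to Pv(m+3))."""
    if not pv or max(pv) < threshold:
        return False
    m = pv.index(max(pv))                     # candidate peak so far
    return len(pv) - 1 - m >= extra_samples   # enough points acquired past it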
Finally, after a movement step of moving the objective lens 3 to the in-focus position (step S12), the autofocus control of the present embodiment is complete.

As described above, according to the present embodiment, high-precision autofocus control can be performed stably while eliminating the influences caused by a short-wavelength, single-wavelength optical system, making it possible to observe and inspect with high resolution the fine structures formed, for example, on the surfaces of semiconductor wafers.
(Second Embodiment)

Next, a second embodiment of the present invention will be described.

Recent semiconductor wafers, along with the shrinking of the minimum pattern width (process rule), are adopting increasingly three-dimensional structures in the height direction. As the wavelength of the light source becomes shorter, the depth of focus also becomes shallower, which tends to be disadvantageous for objects with height differences, where fewer portions are in focus. When there are height differences within the screen and different surfaces come into focus at different positions, an active focusing operation that decides "where to focus", for example which surface of the sample to use as a reference, is required. However, the conventional autofocus control methods that derive the optimum focus position from the focus evaluation value have the drawback that the desired spot may not come into focus.

A method of applying the autofocus control method of the present invention to focus on an arbitrary surface of a sample with height differences within the screen is therefore described below.
In the first embodiment described above, the focus evaluation value was calculated for the whole (or a partial target area) of each acquired sample screen. In the present embodiment, as shown for example in Fig. 9, the acquired sample screen is divided into a plurality of regions, and a focus evaluation value is computed and an in-focus position calculated for each divided region Wij (i, j = 1 to 3).

In calculating the focus evaluation value of each divided region Wij, the same image smoothing and normalization by screen average luminance as in the first embodiment are performed, so that the in-focus position can be detected with high precision, unaffected by the optical system.

As a result of the above processing, a focus curve corresponding to each divided region Wij is obtained. If some divided region then has an in-focus position different from the others, it is evident that there is a height difference in the focal plane between them, so an active focusing operation becomes possible by specifying with a parameter what to prioritize as the focus position. Examples of such parameters include the following (a code sketch follows below):
1. The position where the lens-to-sample distance is shortest (the highest point of the sample)
2. The position where the lens-to-sample distance is longest (the lowest point of the sample)
3. A specific position on the screen
4. The optimum focus position determined by majority vote from the screen division results (the more characteristic portions), and so on.
Although Fig. 9 illustrates an example with a 3 × 3 division into nine regions, the number of screen divisions is not limited to this; the more divisions there are, the more detailed the information obtained. The divided screens may also overlap one another, and the number of divisions may be changed dynamically according to the usage situation.
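In outline, the per-region computation and the priority rules listed above might look as follows; the n × n grid, the simple difference-based evaluation value, and the selection rules are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def region_focus_positions(stack: np.ndarray, z: np.ndarray, n: int = 3) -> np.ndarray:
    """stack: (num_focus_positions, H, W) smoothed sample images.
    Returns an n x n map of per-region in-focus positions (z at max Pv)."""
    _, h, w = stack.shape
    zmap = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            region = stack[:, i*h//n:(i+1)*h//n, j*w//n:(j+1)*w//n]
            pv = [(np.abs(np.diff(im, axis=0)).sum()
                   + np.abs(np.diff(im, axis=1)).sum()) / im.mean()
                  for im in region]                      # normalized per-region Pv
            zmap[i, j] = z[int(np.argmax(pv))]
    return zmap

# Priority parameters 1-3 above then reduce the map, e.g.:
# zmap.min() (highest point), zmap.max() (lowest point), zmap[i0, j0] (specific position).
```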
As described above, according to the present embodiment, by specifying what to prioritize as the optimum focus position for the target sample, the active focusing operation of deciding "where to focus" can be fully supported.
(Third Embodiment)

Next, a third embodiment of the present invention will be described. In the present embodiment, a method of synthesizing an all-in-focus image of the subject sample from the acquired image data by applying the autofocus control method according to the present invention is described.

With an ordinary optical system, when viewing a three-dimensional object that exceeds the depth of focus of the optical system, an image in focus over its entirety cannot be obtained, and the purpose of inspection and observation cannot be fulfilled. Solutions exist, such as obtaining an entirely in-focus, all-in-focus image using a special optical system such as a confocal system, or obtaining an all-in-focus image from images at different angles based on triangulation; but since these use special optical systems, they cannot be realized inexpensively.

A method has also been proposed of acquiring images of an object hierarchically and then compositing them (Japanese Patent Laid-Open No. 2003-281501). However, problems remain, such as the volume of image information used for compositing, the compositing time, and the fact that results are obtained only after acquiring multiple images.
In the present embodiment, therefore, an all-in-focus image of the subject sample W is obtained in the course of executing the autofocus control method described in the first embodiment. The control flow is shown in Fig. 10: after the step of normalizing the focus evaluation value of each acquired image (sample point) by the screen average luminance (step S8), an image compositing step (step S8M) is added.

The other steps are the same as in the process flow described in the first embodiment (Fig. 3), so corresponding steps are given the same reference numerals and their description is omitted.

For the image compositing, as described in the second embodiment, the acquired sample screen is divided into a plurality of regions (Fig. 9), and the image is composited with each divided region Wij as the unit. The number of screen divisions is not particularly limited; the more divisions, the finer the processing, down to divided regions as small as a single pixel. The shape of the divided regions is also not limited to rectangles and may be changed to circles or the like.

As the memory 15 (Fig. 2), in addition to the first memory section 15A, which processes image data captured in even frames, and the second memory section 15B, which processes image data captured in odd frames, a third memory section 15C for all-in-focus processing is provided, as shown in Fig. 11. This third memory section 15C contains a composite image data storage area 15C1, a storage area 15C2 for the height (lens-to-work distance) information of each divided region Wij constituting the composite image, and a storage area 15C3 for the focus evaluation value information of each of these divided regions Wij.
In synthesizing the all-in-focus image of the subject sample, sample images are acquired at a plurality of focus positions differing in lens-to-work distance, a focus evaluation value is calculated for each divided region Wij of each sample image, the image with the highest focus evaluation value is extracted independently for each divided region Wij, and these are then composited into a whole image.

The "all-in-focus image synthesizing means" of the present invention is configured as described above. In terms of the process flow shown in Fig. 10, the steps S1 to S8 are executed for each divided region Wij of the acquired sample images in the same manner as in the first embodiment, after which the flow proceeds to the image compositing step of step S8M.
第 1 2図は、 ステップ S 8 Mの詳細を示している。 オートフォ —カス動作開始後、 最初に取り込んだ画像を使って第 3メモリ部 1 5 Cを初期化する (ステップ a , b )。 すなわち、 ステップ b において、 最初の画像を合成画像データ格納領域 1 5 C 1 にコピ 一し、 高さ情報格納領域 1 5 C 2 を一回目のデ一夕で埋め、 ピン ト評価値を各分割領域 Wij のピン卜評価値情報格納領域 1 5 C 3にコピーして初期化する。 FIG. 12 shows the details of step S8M. After the start of the autofocus operation, the third memory unit 15C is initialized using the first captured image (steps a and b). That is, in step b, the first image is copied to the composite image data storage area 15 C 1. The height information storage area 15 C2 is filled with the first data, and the focus evaluation value is copied to the focus evaluation value information storage area 15 C 3 of each divided area Wij and initialized. .
二回目以降は、 各分割領域 Wij 毎に、 取得画像のピント評価値 と合成画像のピント評価値とを比較する (ステップ c;)。 取得画 像のピン ト評価値が大きい場合は、 画像をコピーし、 これに相当 する高さ情報とピント評価値情報を更新する (ステップ d )。 逆 に、 取得画像のピント評価値が小さい場合は、 処理を行わない。 これを分割数分繰り返す (ステップ e )。 これで 1 フレーム ( 3 3. 3 m s e c ) の処理を完了する。  After the second time, the focus evaluation value of the acquired image and the focus evaluation value of the composite image are compared for each divided region Wij (step c;). If the focus evaluation value of the acquired image is large, the image is copied, and the corresponding height information and focus evaluation value information are updated (step d). Conversely, if the focus evaluation value of the acquired image is small, no processing is performed. This is repeated for the number of divisions (step e). This completes the processing of one frame (33.3 msec).
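A minimal sketch of steps a through e follows, reusing per-region scores such as those from the earlier sketch; the state layout and all names are illustrative, and the 33.3 ms frame pacing is left out.

```python
import numpy as np

def init_state(first_frame, first_scores, z0):
    """Steps a-b: initialize the composite from the first frame."""
    rows, cols = first_scores.shape
    return {
        'image': first_frame.copy(),                      # area 15C1
        'height': np.full((rows, cols), z0, dtype=float), # area 15C2
        'score': first_scores.copy(),                     # area 15C3
    }

def update_composite(frame, z, scores, state, region=16):
    """Steps c-e for one captured frame.

    frame  : 2-D luminance image taken at lens-to-work distance z
    scores : per-region focus evaluation values for this frame
    state  : dict produced by init_state
    """
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            # Step c: compare the new focus evaluation value for
            # region Wij with the value stored for the composite.
            if scores[i, j] > state['score'][i, j]:
                r, c = i * region, j * region
                # Step d: copy the better-focused block and update
                # the corresponding height and score entries.
                state['image'][r:r + region, c:c + region] = \
                    frame[r:r + region, c:c + region]
                state['height'][i, j] = z
                state['score'][i, j] = scores[i, j]
    return state
```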
In the overall autofocus control operation flow, the above processing proceeds, for example, as follows: while even-frame image data is being captured into the first memory section 15A, the processing is performed on each divided region Wij of the odd-frame image data of the preceding frame already held in the second memory section 15B, and the necessary data and information are copied to, or updated in, the corresponding storage areas of the third memory section 15C.

In the present embodiment the above processing accompanies the autofocus control of the subject sample W described in the first embodiment, but it can of course also be performed on its own.

By carrying out the above processing over the number of images required for autofocus, the best-focused portion, its height information, and its focus evaluation value are obtained for each divided region Wij when the autofocus operation ends. This makes it possible to acquire, online and in real time, not only the focus position coordinates of the subject sample W but also, for each divided region Wij, the all-in-focus image and even the shape of the subject sample W.

In particular, by displaying the composite image copied to the composite image data storage area 15C1 on the monitor 9 (FIG. 1), the way each divided region comes into focus can be observed as the objective lens 3 moves over the entire search range, so the height distribution of the displayed subject sample W can easily be grasped during the autofocus operation.

Furthermore, since the all-in-focus image of the subject sample is synthesized using the autofocus control method according to the present invention, high-precision autofocus control is ensured while the influences attributable to a short-wavelength, single-wavelength optical system are eliminated; as a result, an all-in-focus image of the surface of a hierarchically developed structure, such as a semiconductor wafer, can be acquired with high resolution.
(Fourth embodiment)

Next, as a fourth embodiment of the present invention, a method of synthesizing a stereoscopic image of the subject sample from the image data acquired by the autofocus operation will be described.

As described above, the image autofocus operation acquires sample images at a plurality of focus positions and performs focus evaluation on them. In the present embodiment, therefore, a stereoscopic image can be synthesized by extracting the in-focus portion of each acquired sample image and combining it with height-direction information. For example, as shown in FIG. 13, after focus position detection is performed on each of the sample images Ra, Rb, Rc and Rd acquired during the autofocus operation, the in-focus portions are extracted and combined in the height direction (focus axis direction), whereby a stereoscopic image of the structure R can be synthesized.

An example of the stereoscopic image synthesizing method of the present embodiment is shown in the flowchart of FIG. 14. In the figure, steps corresponding to those of the first embodiment (FIG. 3) are given the same reference numerals, and their detailed description is omitted.

In the present embodiment, a stereoscopic screen buffer clearing step (step S1A) follows the initial setting (step S1). In this step, the memory area that stores previously acquired stereoscopic screens is initialized. Thereafter, as in the first embodiment, sample images of the subject sample are acquired at a plurality of focus positions, and for each of them smoothing, calculation of the focus evaluation value by edge enhancement, and normalization of the calculated focus evaluation value by the screen average luminance are performed (steps S2 to S8).
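As a rough illustration of the computation portion of steps S2 through S8, the sketch below produces one focus evaluation value for a frame. The 3x3 mean filter and the adjacent-pixel difference operator are assumptions for this sketch; the passage above does not fix the smoothing kernel or the edge operator.

```python
import numpy as np

def focus_evaluation_value(img: np.ndarray) -> float:
    """Smoothing, edge enhancement, then normalization by the
    screen average luminance (steps S2-S8, sketched)."""
    h, w = img.shape
    # Smoothing: 3x3 mean filter (assumed kernel), built from
    # nine shifted copies of the edge-padded image.
    pad = np.pad(img.astype(float), 1, mode='edge')
    smooth = sum(pad[i:i + h, j:j + w]
                 for i in range(3) for j in range(3)) / 9.0
    # Edge enhancement: absolute luminance differences between
    # adjacent pixels, summed over the screen.
    edges = np.abs(np.diff(smooth, axis=0)).sum() + \
            np.abs(np.diff(smooth, axis=1)).sum()
    # Normalization (step S8): divide by the screen average
    # luminance so illumination changes do not bias the value.
    return float(edges / max(smooth.mean(), 1e-6))
```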
After the focus evaluation value is calculated, each point in the screen is compared to determine which is better focused, the data held so far or the newly captured data; if the captured data is better focused, the data is updated (step S8A). This processing is executed for every sample image.

The "stereoscopic image synthesizing means" of the present invention is configured as described above. In this example, the screen is divided into a plurality of regions Wij as in the second embodiment and the above processing is performed for each divided region; however, the number of divisions is not particularly limited, and the processing may be performed pixel by pixel.

Therefore, according to the present embodiment, after the autofocus control ends, not only the optimum focus position information of the subject sample W but also a stereoscopic image of the subject sample surface can easily be acquired by combining a plurality of in-focus sample images in the height direction.
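To make the height-direction combination concrete, the sketch below turns the per-region height map and best-focus image accumulated during the autofocus pass into a set of surface samples; the names and data layout are hypothetical, since the specification does not prescribe them.

```python
import numpy as np

def surface_points(height, image, region=16):
    """Combine a per-region height map with the best-focus image
    into (x, y, z, luminance) samples approximating the subject
    surface; z is the lens-to-work distance at which the region
    was sharpest."""
    pts = []
    for i in range(height.shape[0]):
        for j in range(height.shape[1]):
            y, x = i * region, j * region
            block = image[y:y + region, x:x + region]
            # Use the region center as the lateral coordinate.
            pts.append((x + region / 2, y + region / 2,
                        height[i, j], float(block.mean())))
    return np.array(pts)
```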
Furthermore, since the stereoscopic image of the subject sample is synthesized using the autofocus control method according to the present invention, high-precision autofocus control is ensured while the influences attributable to a short-wavelength, single-wavelength optical system are eliminated; as a result, a stereoscopic image of the surface of a hierarchically developed structure, such as a semiconductor wafer, can be acquired with high resolution.

(Fifth embodiment)

Next, a fifth embodiment of the present invention will be described.

In each of the embodiments described above, the autofocus control method according to the present invention is realized by the image processing apparatus 1 built around a computer. This configuration is somewhat complicated and does not always match the need simply to achieve focus. If the algorithm executing the autofocus control method of the present invention could be realized with simple hardware, for cases where no post-focus processing is required, the range of application would broaden, making a large contribution to industrial automation.

In the present embodiment, therefore, the configuration of an autofocus control device that can realize the above-described autofocus control method of the present invention without using a computer is described. As explained later, this autofocus control device can be built from a video signal decoder, an arithmetic element typified by an FPGA (Field Programmable Gate Array), a memory for storing settings, and the like; integrated circuits such as a CPU (Central Processing Unit), a PMC (Pulse Motor Controller), and an external memory are added as needed. By mounting this group of elements on a common wiring board, the device can be used as a single board unit or as a packaged component housing it.
(First configuration example)

FIG. 15 shows a functional block diagram of a first configuration example of the autofocus control device of the present invention. The illustrated autofocus control device 31 comprises a video signal decoder 41, an FPGA 42, a field memory 43, a CPU 44, a ROM/RAM 45, a PMC 46, and an I/F circuit 47.

The video signal used for the focus operation is an analog image signal encoded in the NTSC format; the video signal decoder 41 converts it into digital image signals carrying the horizontal/vertical synchronization signals, EVEN (even)/ODD (odd) field information, and luminance information.

The FPGA 42 is composed of arithmetic elements that perform the predetermined arithmetic processing of the autofocus control flow according to the present invention described in the first embodiment (FIG. 3), and corresponds to the "image smoothing means", "edge enhancement processing means" and "evaluation value calculating means" of the present invention.

The FPGA 42 extracts the information of the valid part of the screen from the synchronization signals and field information digitized by the video signal decoder 41, and stores the luminance information in the field memory 43. At the same time, it sequentially reads data out of the field memory 43 and performs arithmetic processing such as filtering (image smoothing), average luminance calculation and focus evaluation value calculation. Depending on the degree of integration of the FPGA 42, the functions of the field memory 43, the CPU 44 and the PMC 46 can also be incorporated into the FPGA 42. The field memory 43 is used to temporarily store the above field information in order to handle a video signal that is output interlaced, with one frame consisting of an even field and an odd field.

The CPU 44 manages the operation of the entire system: via the PMC 46 and the I/F circuit 47 it moves the stage supporting the subject sample to vary the lens-to-work distance, and it calculates the optimum focus position (in-focus position) from the focus evaluation values, computed by the FPGA 42, of the sample images acquired at the respective focus positions. In this example, the CPU 44 corresponds to the "focus position calculating means" of the present invention.
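The specification leaves the exact in-focus position computation to the CPU 44; one common choice, consistent with the claim language below about using the maximum focus evaluation value and several values in its vicinity, is a three-point parabolic fit over the focus curve. The sketch is illustrative, not the patented formula.

```python
def interpolate_peak(z, v):
    """z: lens-to-work distances of the samples (assumed sorted
    and roughly equally spaced), v: their focus evaluation values.
    Returns a sub-step in-focus position from a parabola through
    the maximum sample and its two neighbors."""
    i = max(range(len(v)), key=v.__getitem__)
    if i == 0 or i == len(v) - 1:
        return z[i]  # peak at the edge of the search range
    # Parabolic vertex offset for three equally spaced samples.
    denom = v[i - 1] - 2 * v[i] + v[i + 1]
    if denom == 0:
        return z[i]
    offset = 0.5 * (v[i - 1] - v[i + 1]) / denom
    return z[i] + offset * (z[i + 1] - z[i])
```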
The ROM/RAM 45 is used to store the operating software (program) of the CPU 44 and the parameters required for calculating the in-focus position. The ROM/RAM 45 may also be built into the CPU.

The PMC 46 is a control element for driving the pulse motor (not shown) that moves the stage, and controls the stage via the interface circuit (I/F circuit) 47. The output of the sensor that detects the stage position is supplied to the PMC 46 through the I/F circuit 47.

In the autofocus control device 31 configured as described above, a video signal of the sample image is supplied from a CCD camera (not shown). This video signal is input to the FPGA 42 via the video signal decoder 41, where the input image is smoothed, the average luminance is calculated, and the focus evaluation value is computed. The FPGA 42 transfers the focus evaluation data to the CPU 44 at the timing of the field-end synchronization signal.

The CPU 44 acquires the coordinates of the focus stage at the field-end timing and uses them as the lens-to-work distance. After repeating the above processing the number of times required for the autofocus operation of the present invention, the CPU 44 calculates the in-focus position, moves the stage to the optimum focus position, and ends the autofocus operation. As needed, the screen division function, the all-in-focus image composition of the subject sample, and/or the stereoscopic image composition are performed.

By organically connecting the autofocus control device of the present invention configured as above to an existing CCD camera, a monitor, and focus axis moving means such as a pulse motor, functions equivalent to those of the image processing apparatus 1 described above can be realized. The autofocus control method of the present invention can thus be carried out with a simple, compact configuration, which is also very advantageous in terms of cost and installation space.
(Second configuration example)

FIG. 16 is a functional block diagram of a second configuration example of the autofocus control device of the present embodiment. Parts corresponding to the first configuration example (FIG. 15) are given the same reference numerals, and their detailed description is omitted. The autofocus control device 32 of this configuration example comprises a video signal decoder 41, an FPGA 42, a CPU 44, a ROM/RAM 45, a PMC 46, and an I/F circuit 47.

In the autofocus control device 31 of the first configuration example, the field memory 43 was used so that the interlaced image could be processed as a TV (television)-like image, and control was based on frame information. If only the autofocus operation is considered, however, frame information is unnecessary: field-by-field processing can be sufficient, and can even be an advantage.

The autofocus control device 32 of this configuration example is therefore the first configuration example with the field memory 43 removed. With this configuration the timing processing for transferring information to the field memory becomes unnecessary, so the device is simpler both physically and logically than the first configuration example. In addition, since the focus evaluation can be performed field by field, there are advantages such as a shorter sampling interval of focus evaluation values than in the first configuration example, which processes frame by frame.
(Third configuration example)

FIG. 17 is a functional block diagram of a third configuration example of the autofocus control device of the present embodiment. Parts corresponding to the first configuration example (FIG. 15) are given the same reference numerals, and their detailed description is omitted. The autofocus control device 33 of this configuration example comprises a video signal decoder 41, an FPGA 42, a CPU 44, a ROM/RAM 45, a PMC 46, and an I/F circuit 47.

The autofocus control device 33 of this configuration example incorporates the logic block of the PMC 46 into the FPGA 42, so that, unlike the second configuration example, no independent logic circuit is required for the PMC 46. With this configuration an independent IC chip for the PMC 46 becomes unnecessary, and the board size and mounting cost can be reduced.

(Fourth configuration example)
FIG. 18 is a functional block diagram of a fourth configuration example of the autofocus control device of the present embodiment. Parts corresponding to the first configuration example (FIG. 15) are given the same reference numerals, and their detailed description is omitted. The autofocus control device 34 of this configuration example comprises a video signal decoder 41, an FPGA 42, a CPU 44, a ROM/RAM 45, an AD (Analog to Digital)/DA (Digital to Analog) circuit 48, and an I/F circuit 47.

The autofocus control device 34 of this configuration example is an example in which the drive source of the focus stage is changed from a pulse motor to an analog-signal-controlled piezo stage; in place of the PMC 46 of the second configuration example, the AD/DA circuit 48 is used. The AD/DA circuit 48 can be incorporated into, for example, the CPU 44, in which case it need not be an external circuit.

In the AD/DA circuit 48, the DA portion converts the command voltage from the CPU 44 into an analog signal, and the AD portion converts the signal from a sensor (not shown) that detects the travel position of the piezo stage into a digital signal and feeds it back to the CPU 44. When this feedback control is not performed, the AD portion can be omitted.
(Fifth configuration example)

FIG. 19 shows, as a fifth configuration example of the present embodiment, a specific configuration example of the autofocus control device 33 of the third configuration example described above (FIG. 17). Corresponding parts in the figure are given the same reference numerals, and their detailed description is omitted.

The autofocus control device 35 of this configuration example is built by mounting, on a common wiring board 50, a video signal decoder 41, an FPGA 42, a CPU 44, and further a flash memory 45A, an SRAM (Static Random Access Memory) 45B, an RS driver 47A, a power supply monitoring circuit 51, an FPGA initialization ROM 52, and a plurality of connectors 53A, 53B, 53C and 53D. The flash memory 45A and the SRAM 45B correspond to the ROM/RAM 45 described above: the flash memory 45A stores the operating program of the CPU 44 and the initial settings of the autofocus operation (focus movement speed, smoothing conditions, and so on), while the SRAM 45B is used for temporarily holding the various parameters needed by the CPU 44 to calculate the in-focus position.

The RS driver 47A is an interface circuit required for communication with the external devices connected via the connectors 53A to 53D. A CCD camera is connected to the connector 53A, and a host controller or CPU is connected to the connector 53B. A power supply circuit is connected to the connector 53C, and a focus stage is connected to the connector 53D. The focus stage uses a pulse motor as its drive source, and its controller, the PMC, is incorporated in the FPGA 42.

As described above, according to the autofocus control device 35 of this configuration example, the various elements that execute the algorithm realizing the autofocus control method of the present invention are mounted on a single wiring board 50, so the device can be configured as a board-mounted assembly with outer dimensions of, for example, 100 mm square. This reduces the device cost and simplifies the device configuration. Moreover, since the freedom of equipment installation is increased, on-site needs requiring an autofocus operation can easily be met in industrial fields where such devices could not previously be used.
While embodiments of the present invention have been described above, the invention is of course not limited to them, and various modifications are possible based on the technical concept of the present invention.

For example, in the first embodiment the objective lens 3 is moved in the focus axis direction to vary the lens-to-sample distance; alternatively, the stage 2 supporting the sample may be moved instead.

Also, in the first embodiment the drive system for varying the lens-to-sample distance consists of the lens driving unit 4, composed of a piezo element, and its driver 8; the invention is not limited to this, and any other drive system may be applied as long as it can vary the lens-to-sample distance precisely and smoothly.

For example, FIG. 20A shows an example in which a pulse motor 20 is used as the drive source. In this case the driver 21 generates a drive signal for the pulse motor 20 based on the control signal supplied from the pulse motor controller 22.

The lens driving unit 4 and the pulse motor 20 above are driven by so-called feedforward control, but a configuration in which a sensor detecting the lens position or the stage position is provided and the drive source is feedback-controlled is also applicable.

FIG. 20B shows a configuration example of a drive system in which the drive source is controlled by feedback. The driver 24 generates a drive signal for the drive system 23 based on the control signal supplied from the output instruction circuit 25; a cylinder device, a motor, or the like is applicable as the drive system 23. The position sensor 26 can be constituted by a strain gauge, a potentiometer, or the like, and supplies its output to the acquisition circuit 27. The acquisition circuit 27 supplies a position compensation signal to the output instruction circuit 25 based on the output of the position sensor 26, thereby correcting the position of the drive system 23.
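A minimal sketch of one iteration of this feedback loop follows; the proportional compensation and the callable stand-ins for the sensor and driver interfaces are assumptions, since FIG. 20B only states that a position compensation signal is fed back to the output instruction circuit 25.

```python
def feedback_step(target, read_sensor, command_driver, gain=0.5):
    """One iteration of the FIG. 20B loop (all callables assumed):
    read the position sensor 26, form a compensation signal as the
    acquisition circuit 27 would, and adjust the command that the
    output instruction circuit 25 sends to the driver 24."""
    actual = read_sensor()                    # position sensor 26
    compensation = gain * (target - actual)   # acquisition circuit 27
    command_driver(target + compensation)     # output instruction 25
    return actual
```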
In each of the embodiments above, the video signal supplied from the CCD camera was described as NTSC, but the invention is not limited to this; processing in, for example, the PAL (Phase Alternation by Line) format is also possible. By replacing the video signal decoder section, other formats such as IEEE 1394 and Camera Link can also be supported; in that case the function of the video signal decoder circuit can be incorporated into the FPGA 42.

Furthermore, the focus evaluation value and the in-focus position of each sample image obtained by executing the autofocus control of the present invention can also be displayed on the monitor 9 (FIG. 1) together with the sample image. In this case an encoder circuit for converting this information into NTSC or the like for display may be provided separately; this encoder circuit can, for example, be one of the board-mounted components of the autofocus control device of the configuration described in the fifth embodiment above.

Claims

1. An autofocus control method comprising: an image acquisition step of acquiring image data of a subject at each of a plurality of focus positions having different lens-to-subject distances;
an evaluation value calculating step of calculating a focus evaluation value for each of the plurality of focus positions based on each piece of the acquired image data;
a focus position calculating step of calculating, as the in-focus position, the focus position at which the focus evaluation value is at a maximum; and
a moving step of moving the lens relative to the subject to the calculated in-focus position,
the method further comprising an image smoothing step of smoothing the acquired image data between the image acquisition step and the evaluation value calculating step,
wherein the focus evaluation value is calculated based on the smoothed image data.
2. The autofocus control method according to claim 1, further comprising, before or after the image smoothing step, an average luminance calculating step of calculating the screen average luminance of the acquired image data,
wherein a value obtained by division by the calculated screen average luminance is used as the focus evaluation value.
3. The autofocus control method according to claim 1, wherein in the evaluation value calculating step the focus evaluation value is calculated based on differences in luminance data between adjacent pixels in the acquired image data.
4. The autofocus control method according to claim 1, wherein in the focus position calculating step the in-focus position is calculated based on the maximum of the calculated focus evaluation values and a plurality of focus evaluation values in its vicinity.
5. The autofocus control method according to claim 1, wherein in the image acquisition step the image data is acquired at each of the plurality of focus positions while the lens-to-subject distance is varied continuously.
6. The autofocus control method according to claim 1, wherein the image acquisition step and the evaluation value calculating step are performed in parallel.
7. The autofocus control method according to claim 1, wherein ultraviolet light is used as the illumination light source for the subject.
8. The autofocus control method according to claim 1, wherein the acquired image data is divided into a plurality of regions and the in-focus position is calculated for each of the divided regions.
9. The autofocus control method according to claim 8, wherein an all-in-focus image of the subject is acquired by combining, across the regions, the images at the in-focus positions of the divided regions.
10. The autofocus control method according to claim 8, wherein a stereoscopic image of the subject is acquired by combining, across a plurality of focus positions, the images at the in-focus positions of the divided regions.
11. An autofocus control method comprising: an image acquisition step of acquiring image data of a subject at each of a plurality of focus positions having different lens-to-subject distances;
an evaluation value calculating step of calculating a focus evaluation value for each of the plurality of focus positions based on each piece of the acquired image data;
a focus position calculating step of calculating, as the in-focus position, the focus position at which the focus evaluation value is at a maximum; and
a moving step of moving the lens relative to the subject to the calculated in-focus position,
the method further comprising an image smoothing step of smoothing the acquired image data and an average luminance calculating step of calculating the screen average luminance of the acquired image data,
wherein the focus evaluation value is calculated based on the smoothed image data, and
a value obtained by division by the calculated screen average luminance is used as the focus evaluation value.
12. The autofocus control method according to claim 11, wherein in the evaluation value calculating step the focus evaluation value is calculated based on differences in luminance data between adjacent pixels in the acquired image data.
13. The autofocus control method according to claim 11, wherein in the focus position calculating step the in-focus position is calculated based on the maximum of the calculated focus evaluation values and a plurality of focus evaluation values in its vicinity.
14. The autofocus control method according to claim 11, wherein in the image acquisition step the image data is acquired at each of the plurality of focus positions while the lens-to-subject distance is varied continuously.
15. The autofocus control method according to claim 11, wherein the image acquisition step and the evaluation value calculating step are performed in parallel.
16. The autofocus control method according to claim 11, wherein ultraviolet light is used as the illumination light source for the subject.
17. The autofocus control method according to claim 11, wherein the acquired image data is divided into a plurality of regions and the in-focus position is calculated for each of the divided regions.
18. The autofocus control method according to claim 17, wherein an all-in-focus image of the subject is acquired by combining, across the regions, the images at the in-focus positions of the divided regions.
19. The autofocus control method according to claim 17, wherein a stereoscopic image of the subject is acquired by combining, across a plurality of focus positions, the images at the in-focus positions of the divided regions.
20. An autofocus control device comprising: evaluation value calculating means for calculating a focus evaluation value for each of a plurality of focus positions based on image data acquired at the plurality of focus positions having different lens-to-subject distances; and
focus position calculating means for calculating the in-focus position based on the maximum of the calculated focus evaluation values,
the device further comprising image smoothing means for smoothing the acquired image data,
wherein the focus evaluation value is calculated based on the smoothed image data.
21. The autofocus control device according to claim 20, further comprising average luminance calculating means for calculating the screen average luminance of the acquired image data, wherein a value obtained by division by the calculated screen average luminance is used as the focus evaluation value.
22. The autofocus control device according to claim 20, wherein the evaluation value calculating means is edge enhancement processing means for calculating differences in luminance data between adjacent pixels in the acquired image data.
23. The autofocus control device according to claim 20, wherein the focus position calculating means calculates the in-focus position based on the maximum of the calculated focus evaluation values and a plurality of focus evaluation values in its vicinity.
24. The autofocus control device according to claim 20, further comprising all-in-focus image synthesizing means for synthesizing an all-in-focus image of the subject using each piece of the acquired image data.
25. The autofocus control device according to claim 20, further comprising stereoscopic image synthesizing means for synthesizing a stereoscopic image of the subject using each piece of the acquired image data.
26. The autofocus control device according to claim 20, wherein the autofocus control device is a board-mounted assembly in which the evaluation value calculating means, the focus position calculating means, the image smoothing means, and average luminance calculating means for calculating the screen average luminance of the image data are mounted, as one or more elements, on the same board.
27. The autofocus control device according to claim 26, wherein a drive control element for controlling drive means for adjusting the lens-to-subject distance is mounted on the board.
28. The autofocus control device according to claim 26, wherein the evaluation value calculating means, the image smoothing means, and the average luminance calculating means are constituted by a single FPGA (Field Programmable Gate Array).
29. An image processing apparatus comprising: image acquisition means for acquiring image data of a subject at each of a plurality of focus positions having different lens-to-subject distances; evaluation value calculating means for calculating a focus evaluation value for each of the plurality of focus positions based on each piece of the acquired image data; focus position calculating means for calculating the in-focus position based on the maximum of the calculated focus evaluation values; and drive means for moving the lens relative to the subject to the calculated in-focus position,
the apparatus further comprising image smoothing means for smoothing the acquired image data, wherein the focus evaluation value is calculated based on the smoothed image data.
30. The image processing apparatus according to claim 29, further comprising average luminance calculating means for calculating the screen average luminance of the acquired image data, wherein a value obtained by division by the calculated screen average luminance is used as the focus evaluation value.
31. The image processing apparatus according to claim 29, wherein the evaluation value calculating means is edge enhancement processing means for calculating differences in luminance data between adjacent pixels in the acquired image data.
32. The image processing apparatus according to claim 29, wherein the focus position calculating means calculates the in-focus position based on the maximum of the calculated focus evaluation values and a plurality of focus evaluation values in its vicinity.
33. The image processing apparatus according to claim 29, further comprising all-in-focus image synthesizing means for synthesizing an all-in-focus image of the subject using each piece of the acquired image data.
34. The image processing apparatus according to claim 29, further comprising stereoscopic image synthesizing means for synthesizing a stereoscopic image of the subject using each piece of the acquired image data.
PCT/JP2004/012609 2003-08-26 2004-08-25 Autofocus control method, autofocus controller, and image processor WO2005026802A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/569,480 US20070187571A1 (en) 2003-08-26 2004-08-25 Autofocus control method, autofocus control apparatus and image processing apparatus

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2003301918 2003-08-26
JP2003-301918 2003-08-26
JP2004212119A JP4158750B2 (en) 2003-08-26 2004-07-20 Autofocus control method, autofocus control device, and image processing device
JP2004-212119 2004-07-20

Publications (2)

Publication Number Publication Date
WO2005026802A1 true WO2005026802A1 (en) 2005-03-24
WO2005026802B1 WO2005026802B1 (en) 2005-05-26

Family

ID=34315613

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2004/012609 WO2005026802A1 (en) 2003-08-26 2004-08-25 Autofocus control method, autofocus controller, and image processor

Country Status (4)

Country Link
JP (1) JP4158750B2 (en)
KR (1) KR20060123708A (en)
TW (1) TWI245556B (en)
WO (1) WO2005026802A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100426843C (en) * 2005-04-15 2008-10-15 索尼株式会社 Control apparatus, control method, and computer program
CN100426842C (en) * 2005-04-15 2008-10-15 索尼株式会社 Control apparatus, control method, computer program, and camera

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007108455A (en) * 2005-10-14 2007-04-26 Fujifilm Corp Automatic focusing controller and control method
JP2008234619A (en) * 2007-02-20 2008-10-02 Toshiba Corp Face authenticating device and face authenticating method
KR101034282B1 (en) 2009-07-31 2011-05-16 한국생산기술연구원 The Method for Controlling Focus in Image Captured from Multi-focus Objects
JP5621325B2 (en) * 2010-05-28 2014-11-12 ソニー株式会社 FOCUS CONTROL DEVICE, FOCUS CONTROL METHOD, LENS DEVICE, FOCUS LENS DRIVING METHOD, AND PROGRAM
US10015387B2 (en) 2013-08-09 2018-07-03 Musahi Enginerring, Inc. Focus adjustment method and device therefor
JP6476977B2 (en) * 2015-02-19 2019-03-06 大日本印刷株式会社 Identification device, identification method, and program
JP6750194B2 (en) 2015-06-19 2020-09-02 ソニー株式会社 Medical image processing apparatus, medical image processing method, and medical observation system
KR102640848B1 (en) 2016-03-03 2024-02-28 삼성전자주식회사 Method of inspecting a sample, system for inspecting a sample, and method of inspecting semiconductor devies using the same
WO2018003999A1 (en) 2016-06-30 2018-01-04 株式会社ニコン Imaging apparatus
JP6793053B2 (en) * 2017-02-09 2020-12-02 リコーエレメックス株式会社 Inspection device and focus adjustment support method
JP7037425B2 (en) * 2018-04-23 2022-03-16 株式会社ディスコ How to detect the focal position of the laser beam
CN114257710B (en) * 2020-09-23 2024-02-20 北京小米移动软件有限公司 Optical anti-shake structure and camera module and terminal equipment with same

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0583614A (en) * 1991-09-24 1993-04-02 Canon Inc Electronic still camera
JPH07284130A (en) * 1994-04-14 1995-10-27 Rohm Co Ltd Stereoscopic vision camera
JPH099132A (en) * 1995-06-23 1997-01-10 Canon Inc Automatic focusing device and camera
JPH09224259A (en) * 1996-02-19 1997-08-26 Sanyo Electric Co Ltd Image synthesizer
JP2000275019A (en) * 1999-03-23 2000-10-06 Takaoka Electric Mfg Co Ltd Active confocal imaging device and method for three- dimensionally measuring by using it
JP2002048512A (en) * 2000-07-31 2002-02-15 Nikon Corp Position detector, optical instrument, and exposure device
JP2003005088A (en) * 2001-06-22 2003-01-08 Nikon Corp Focusing device for microscope and microscope having the same
JP2003029138A (en) * 2001-07-19 2003-01-29 Olympus Optical Co Ltd Autofocusing method and ultraviolet microscope
JP2003029130A (en) * 2001-07-11 2003-01-29 Sony Corp Optical microscope
JP2003075713A (en) * 2001-09-03 2003-03-12 Minolta Co Ltd Autofocusing device and method, and camera
JP2003086498A (en) * 2001-09-13 2003-03-20 Canon Inc Focal point detecting method and system
JP2003163827A (en) * 2001-11-22 2003-06-06 Minolta Co Ltd Object extracting device and photographing device
JP2003195157A (en) * 2001-12-21 2003-07-09 Agilent Technol Inc Automatic focusing of imaging system
JP2003264721A (en) * 2002-03-11 2003-09-19 Fuji Photo Film Co Ltd Imaging apparatus
JP2004101240A (en) * 2002-09-05 2004-04-02 Mitsui Eng & Shipbuild Co Ltd Stacked belt ring inspection method and device


Also Published As

Publication number Publication date
TWI245556B (en) 2005-12-11
TW200527907A (en) 2005-08-16
JP2005099736A (en) 2005-04-14
WO2005026802B1 (en) 2005-05-26
KR20060123708A (en) 2006-12-04
JP4158750B2 (en) 2008-10-01

Similar Documents

Publication Publication Date Title
WO2005026802A1 (en) Autofocus control method, autofocus controller, and image processor
US20070187571A1 (en) Autofocus control method, autofocus control apparatus and image processing apparatus
JPS6398615A (en) Automatic focus adjusting method
CN105579880B (en) The method of work of endoscope-use camera system, endoscope-use camera system
EP3035104B1 (en) Microscope system and setting value calculation method
JP2007159047A (en) Camera system, camera controller, panoramic image generating method, and computer program
JPH09298682A (en) Focus depth extension device
JP2009145645A (en) Optical device
EP2136234B1 (en) Microscope imaging system, storage medium and exposure adjustment method
WO2017033346A1 (en) Digital camera system, digital camera, exchangeable lens, distortion/aberration correction processing method, and distortion/aberration correction processing program
JPH11325819A (en) Electronic camera for microscope
JP2010107866A (en) Digital camera and optical apparatus
US8212865B2 (en) Microscope image pickup apparatus, microscope image pickup program product, microscope image pickup program transmission medium and microscope image pickup method
US7925149B2 (en) Photographing apparatus and method for fast photographing capability
CN100378487C (en) Autofocus control method, autofocus controller, and image processor
JP2009069748A (en) Imaging apparatus and its automatic focusing method
US20160275657A1 (en) Imaging apparatus, image processing apparatus and method of processing image
JP2013246052A (en) Distance measuring apparatus
JP2013076823A (en) Image processing apparatus, endoscope system, image processing method, and program
JP2006145793A (en) Microscopic image pickup system
JP2011166497A (en) Imaging device
JPH11197097A (en) Electronic endoscope device which forms perspective image
US20090168156A1 (en) Microscope system, microscope system control program and microscope system control method
JP5996462B2 (en) Image processing apparatus, microscope system, and image processing method
JP6025954B2 (en) Imaging apparatus and control method thereof

Legal Events

Date Code Title Description
WWE   Wipo information: entry into national phase. Ref document number: 200480030003.4; country of ref document: CN
AK    Designated states. Kind code of ref document: A1. Designated state(s): AE AG AL AM AT AU AZ BA BB BG BW BY BZ CA CH CN CO CR CU CZ DK DM DZ EC EE EG ES FI GB GD GE GM HR HU ID IL IN IS KE KG KP KR LC LK LR LS LT LU LV MA MD MG MN MW MX MZ NA NI NO NZ OM PG PL PT RO RU SC SD SE SG SK SL SY TM TN TR TT TZ UA UG US UZ VC YU ZA ZM
AL    Designated countries for regional patents. Kind code of ref document: A1. Designated state(s): GM KE LS MW MZ NA SD SZ TZ UG ZM ZW AM AZ BY KG MD RU TJ TM AT BE BG CH CY DE DK EE ES FI FR GB GR HU IE IT MC NL PL PT RO SE SI SK TR BF CF CG CI CM GA GN GQ GW ML MR SN TD TG
121   Ep: the epo has been informed by wipo that ep was designated in this application
B     Later publication of amended claims. Effective date: 20050331
WWE   Wipo information: entry into national phase. Ref document number: 1020067004017; country of ref document: KR
DPEN  Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed from 20040101)
122   Ep: pct application non-entry in european phase
WWE   Wipo information: entry into national phase. Ref document numbers: 10569480 and 2007187571; country of ref document: US
WWP   Wipo information: published in national office. Ref document number: 1020067004017; country of ref document: KR
WWP   Wipo information: published in national office. Ref document number: 10569480; country of ref document: US